Message ID: 20171003185313.1017-6-mcgrof@kernel.org (mailing list archive)
State: RFC, archived
On Tuesday, October 3, 2017 8:53:13 PM CEST Luis R. Rodriguez wrote:
> Now that all filesystems which used to rely on kthread
> freezing have been converted to filesystem freeze/thawing
> we can remove the kernel kthread freezer.
>
> Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>

I like this one. :-)

Thanks,
Rafael
On Tue 2017-10-03 11:53:13, Luis R. Rodriguez wrote:
> Now that all filesystems which used to rely on kthread
> freezing have been converted to filesystem freeze/thawing
> we can remove the kernel kthread freezer.

Are you surely-sure? You mentioned other in-kernel sources of writes;
what about those?

Can knfsd be a problem?

Can memory management doing background writes be a problem?

Did you do some hibernation testing?

> Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>

Please don't apply this just yet.

Pavel
On Tue, 2017-10-03 at 11:53 -0700, Luis R. Rodriguez wrote:
> Now that all filesystems which used to rely on kthread
> freezing have been converted to filesystem freeze/thawing
> we can remove the kernel kthread freezer.

Many subsystems use system_freezable_wq and/or
system_freezable_power_efficient_wq to queue work that should not run
while processes are frozen. Should it be mentioned in the patch
description that this patch does not affect workqueues for which
WQ_FREEZABLE has been set?

What about the many drivers outside filesystems that use the
set_freezable() / try_to_freeze() / wait_event_freezable() API?

Thanks,

Bart.
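For readers following the thread, the freezable-workqueue mechanism Bart refers to looks roughly like the sketch below. This is illustrative only -- the example_* names are placeholders, not code from this series -- but the two calls shown (alloc_workqueue() with WQ_FREEZABLE, and queue_work() on system_freezable_wq) are the standard ways a driver defers work that must not run while tasks are frozen.

#include <linux/errno.h>
#include <linux/workqueue.h>

static void example_work_fn(struct work_struct *work)
{
	/*
	 * Work queued on a freezable workqueue only executes while the
	 * system is not frozen; the workqueue core holds it back once
	 * workqueue freezing has begun and releases it on thaw.
	 */
}
static DECLARE_WORK(example_work, example_work_fn);

static struct workqueue_struct *example_wq;

static int example_init(void)
{
	/* Option 1: a dedicated workqueue created with WQ_FREEZABLE. */
	example_wq = alloc_workqueue("example_wq", WQ_FREEZABLE, 0);
	if (!example_wq)
		return -ENOMEM;

	/* Option 2: the shared freezable workqueue Bart mentions. */
	queue_work(system_freezable_wq, &example_work);
	return 0;
}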
On Tue, 3 Oct 2017, Pavel Machek wrote:

> Can knfsd be a problem?

It'd actually be useful to walk through all the 'try_to_freeze()'
instances in kthreads, and analyze those.

I've started this effort, and as a result I removed most of them; but
there are still quite a few remaining. And knfsd is one of those.

> Can memory management doing background writes be a problem?

This point I don't understand. What exactly do you mean?
On Tue, 3 Oct 2017, Bart Van Assche wrote:

> What about the many drivers outside filesystems that use the
> set_freezable() / try_to_freeze() / wait_event_freezable() API?

Many/most of them are just completely bogus and pointless. I've killed a
lot of those in the past, but the copy/paste programming is just too
strong an enemy to fight against.
Hi!

> > Can memory management doing background writes be a problem?
>
> This point I don't understand. What exactly do you mean?

Hibernation:

We freeze, do system snapshot, unfreeze, and write image to disk.
Memory management wakes up (this used to be called bdflush), decides
it's time to write some dirty data back to disk. Powerdown.

But state on the disk now does not correspond to what the image
expects. Problem?

Pavel
On Tue, 2017-10-03 at 22:17 +0200, Jiri Kosina wrote:
> On Tue, 3 Oct 2017, Bart Van Assche wrote:
> > What about the many drivers outside filesystems that use the
> > set_freezable() / try_to_freeze() / wait_event_freezable() API?
>
> Many/most of them are just completely bogus and pointless. I've killed a
> lot of those in the past, but the copy/paste programming is just too
> strong an enemy to fight against.

If just a single driver uses that API to prevent I/O from occurring while
processes are frozen then this patch will break that driver. I would like to
know your opinion about the following patch in particular: "[PATCH v4 1/7] md:
Make md resync and reshape threads freezable"
(https://www.spinics.net/lists/linux-block/msg17854.html).

Thanks,

Bart.
On Tue, 3 Oct 2017, Bart Van Assche wrote:

> If just a single driver uses that API to prevent I/O from occurring while
> processes are frozen then this patch will break that driver. I would like to
> know your opinion about the following patch in particular: "[PATCH v4 1/7] md:
> Make md resync and reshape threads freezable"
> (https://www.spinics.net/lists/linux-block/msg17854.html).

Alright, if there are kthreads which are actually *creating* new I/O
(contrary to kthreads being I/O completion helpers) -- which apparently is
the md case here, then those definitely have to be frozen in some form.
On Tue, Oct 03, 2017 at 08:21:42PM +0000, Bart Van Assche wrote:
> On Tue, 2017-10-03 at 22:17 +0200, Jiri Kosina wrote:
> > On Tue, 3 Oct 2017, Bart Van Assche wrote:
> > > What about the many drivers outside filesystems that use the
> > > set_freezable() / try_to_freeze() / wait_event_freezable() API?
> >
> > Many/most of them are just completely bogus and pointless. I've killed a
> > lot of those in the past, but the copy/paste programming is just too
> > strong an enemy to fight against.
>
> If just a single driver uses that API to prevent I/O from occurring while
> processes are frozen then this patch will break that driver.

Yes! And although, as Jiri points out, it's debatable where this is being
used, as you suggest there may be *valid* reasons for it *now* even though
originally it *may* have been bogus...

It's why I believe this now can only be done piecemeal, slowly but steadily,
to avoid regressions and make this effort bisectable.

  Luis
On Tue, 3 Oct 2017, Pavel Machek wrote:

> > This point I don't understand. What exactly do you mean?
>
> Hibernation:
>
> We freeze, do system snapshot, unfreeze, and write image to disk.

Where/why do you unfreeze between creating the snapshot and writing it
out?

Again, I agree that the (rare) kthreads that are actually "creating" new
I/O have to be somehow frozen and require special care.
On Tue, Oct 3, 2017 at 10:38 PM, Jiri Kosina <jikos@kernel.org> wrote:
> On Tue, 3 Oct 2017, Pavel Machek wrote:
>
>> > This point I don't understand. What exactly do you mean?
>>
>> Hibernation:
>>
>> We freeze, do system snapshot, unfreeze, and write image to disk.
>
> Where/why do you unfreeze between creating the snapshot and writing it
> out?

We do not unfreeze then.
On Tue, Oct 03, 2017 at 10:12:04PM +0200, Pavel Machek wrote:
> On Tue 2017-10-03 11:53:13, Luis R. Rodriguez wrote:
> > Now that all filesystems which used to rely on kthread
> > freezing have been converted to filesystem freeze/thawing
> > we can remove the kernel kthread freezer.
>
> Are you surely-sure? You mentioned other in-kernel sources of writes;
> what about those?

You perhaps did not read the cover letter; it describes that this patch will
very likely take time to *ever* apply. We must cover tons of ground first, to
address precisely what you say.

In fact, other than kthreads that generate IO, we may now even have crazy
stupid kthreads using the freezer API which *do not generate IO*; these are
totally bogus but now act as "features". We'll need to carefully carve out
all freezer API uses on all kthreads. This should be done atomically to avoid
regressions and ensure bisectability.

  Luis
On Tue, 3 Oct 2017, Jiri Kosina wrote:

> > What about the many drivers outside filesystems that use the
> > set_freezable() / try_to_freeze() / wait_event_freezable() API?
>
> Many/most of them are just completely bogus and pointless.

More specifically -- they don't really care at all whether they get
scheduled out exactly at the try_to_freeze() point; they are perfectly
happy being scheduled out at any other scheduling point, and land on the
runqueue after the resume has been completed.

Sure, certain drivers need to take action when the system is undergoing
hibernation/suspend. But that's what PM callbacks are for, not kthread
hibernation.
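For context, the pattern under discussion typically looks like the sketch below. It is illustrative only -- example_thread(), example_waitq and example_has_work() are placeholder names, not taken from any particular driver -- and it shows the point Jiri is making: for a kthread that creates no I/O, the explicit try_to_freeze() parking point usually buys nothing, since such a thread is equally happy to simply stay off the CPU at any other scheduling point until resume completes.

#include <linux/freezer.h>
#include <linux/kthread.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(example_waitq);

static bool example_has_work(void)
{
	return false;	/* placeholder: a real driver would check its own queue */
}

static int example_thread(void *data)
{
	set_freezable();		/* opt this kthread in to the freezer */

	while (!kthread_should_stop()) {
		try_to_freeze();	/* park in the refrigerator across suspend */

		/* ... process one unit of work here ... */

		wait_event_freezable(example_waitq,
				     example_has_work() || kthread_should_stop());
	}
	return 0;
}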
On Tue 2017-10-03 22:38:33, Jiri Kosina wrote:
> On Tue, 3 Oct 2017, Pavel Machek wrote:
>
> > > This point I don't understand. What exactly do you mean?
> >
> > Hibernation:
> >
> > We freeze, do system snapshot, unfreeze, and write image to disk.
>
> Where/why do you unfreeze between creating the snapshot and writing it
> out?

Yep, sorry, we don't unfreeze.

> Again, I agree that the (rare) kthreads that are actually "creating" new
> I/O have to be somehow frozen and require special care.

Agreed. Was any effort made to identify those special kernel threads?

Pavel
On Tue, 3 Oct 2017, Pavel Machek wrote:

> > Again, I agree that the (rare) kthreads that are actually "creating" new
> > I/O have to be somehow frozen and require special care.
>
> Agreed. Was any effort made to identify those special kernel threads?

I don't think there is any other way than just inspecting all the
try_to_freeze() instances in the kernel, and understanding what that
particular kthread is doing.

I've cleaned up most of the low-hanging fruit already, where the
try_to_freeze() was obviously completely pointless, but a lot more time
needs to be invested into this.
On Tue, Oct 03, 2017 at 11:53:13AM -0700, Luis R. Rodriguez wrote:
> Now that all filesystems which used to rely on kthread
> freezing have been converted to filesystem freeze/thawing
> we can remove the kernel kthread freezer.

Really? There's no other subsystem that relies on kernel thread
and workqueue freezing to function correctly on suspend?

> -/**
> - * freeze_kernel_threads - Make freezable kernel threads go to the refrigerator.
> - *
> - * On success, returns 0. On failure, -errno and only the kernel threads are
> - * thawed, so as to give a chance to the caller to do additional cleanups
> - * (if any) before thawing the userspace tasks. So, it is the responsibility
> - * of the caller to thaw the userspace tasks, when the time is right.
> - */
> -int freeze_kernel_threads(void)
> -{
> -	int error;
> -
> -	pr_info("Freezing remaining freezable tasks ... ");
> -
> -	pm_nosig_freezing = true;
> -	error = try_to_freeze_tasks(false);

This freezes workqueues as well as kernel threads, so this affects
any subsystem that uses WQ_FREEZABLE. A quick glance tells me this
includes graphics drivers, spi devices, usb hubs, power management,
and a few filesystems, too.

> -	if (!error)
> -		pr_cont("done.");
> -
> -	pr_cont("\n");
> -	BUG_ON(in_atomic());
> -
> -	if (error)
> -		thaw_kernel_threads();
> -	return error;
> -}
> -
>  void thaw_processes(void)
>  {
>  	struct task_struct *g, *p;
> @@ -234,23 +207,3 @@ void thaw_processes(void)
>  	pr_cont("done.\n");
>  	trace_suspend_resume(TPS("thaw_processes"), 0, false);
>  }
> -
> -void thaw_kernel_threads(void)
> -{
> -	struct task_struct *g, *p;
> -
> -	pm_nosig_freezing = false;
> -	pr_info("Restarting kernel threads ... ");
> -
> -	thaw_workqueues();

And this is where the workqueues are thawed. So I doubt we can
safely remove all this code like this...

Cheers,

Dave.
On Wed, Oct 04, 2017 at 08:04:49AM +1100, Dave Chinner wrote:
> On Tue, Oct 03, 2017 at 11:53:13AM -0700, Luis R. Rodriguez wrote:
> > Now that all filesystems which used to rely on kthread
> > freezing have been converted to filesystem freeze/thawing
> > we can remove the kernel kthread freezer.
>
> Really? There's no other subsystem that relies on kernel thread
> and workqueue freezing to function correctly on suspend?

In my cover letter I noted this patch should not be considered until *all*
kthreads are vetted, so I agree vehemently with you. The patch set only
addressed two filesystems, so two kthreads. Much other work would be needed
before this last patch could *ever* be considered.

  Luis
On Tue, Oct 3, 2017 at 3:00 PM, Jiri Kosina <jikos@kernel.org> wrote:
> On Tue, 3 Oct 2017, Pavel Machek wrote:
>
>> > Again, I agree that the (rare) kthreads that are actually "creating" new
>> > I/O have to be somehow frozen and require special care.
>>
>> Agreed. Was any effort made to identify those special kernel threads?
>
> I don't think there is any other way than just inspecting all the
> try_to_freeze() instances in the kernel, and understanding what that
> particular kthread is doing.
>
> I've cleaned up most of the low-hanging fruit already, where the
> try_to_freeze() was obviously completely pointless, but a lot more time
> needs to be invested into this.
>

There are about 36 drivers that call try_to_freeze() and half (18) of
those are media drivers. Maybe it is easier to handle this on a
sub-system by sub-system basis for a review of which of these usages
could be removed. cc'ing Mauro and linux-media

thanks,
-- Shuah
On Tuesday, October 3, 2017 8:59:00 PM CEST Rafael J. Wysocki wrote:
> On Tuesday, October 3, 2017 8:53:13 PM CEST Luis R. Rodriguez wrote:
> > Now that all filesystems which used to rely on kthread
> > freezing have been converted to filesystem freeze/thawing
> > we can remove the kernel kthread freezer.
> >
> > Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
>
> I like this one. :-)

However, suspend_freeze/thaw_processes() require some more work.

In particular, the freezing of workqueues is being removed here
without a replacement.

Thanks,
Rafael
On Tue, Oct 03, 2017 at 03:09:53PM -0600, Shuah Khan wrote:
> On Tue, Oct 3, 2017 at 3:00 PM, Jiri Kosina <jikos@kernel.org> wrote:
> > On Tue, 3 Oct 2017, Pavel Machek wrote:
> >
> >> > Again, I agree that the (rare) kthreads that are actually "creating" new
> >> > I/O have to be somehow frozen and require special care.
> >>
> >> Agreed. Was any effort made to identify those special kernel threads?
> >
> > I don't think there is any other way than just inspecting all the
> > try_to_freeze() instances in the kernel, and understanding what that
> > particular kthread is doing.
> >
> > I've cleaned up most of the low-hanging fruit already, where the
> > try_to_freeze() was obviously completely pointless, but a lot more time
> > needs to be invested into this.
>
> There are about 36 drivers that call try_to_freeze() and half (18) of
> those are media drivers. Maybe it is easier to handle this on a
> sub-system by sub-system basis for a review of which of these usages
> could be removed. cc'ing Mauro and linux-media

Yes :) I guess no one reads cover letters, but indeed. To be clear, this
last patch should only go in a few kernels from now, after all kthreads
have been vetted piecemeal. This patch would be the nail in the kthread
freezer coffin. It should go in last, who knows how many years from now,
if ever.

  Luis
On Tue, Oct 03, 2017 at 11:15:07PM +0200, Rafael J. Wysocki wrote:
> On Tuesday, October 3, 2017 8:59:00 PM CEST Rafael J. Wysocki wrote:
> > On Tuesday, October 3, 2017 8:53:13 PM CEST Luis R. Rodriguez wrote:
> > > Now that all filesystems which used to rely on kthread
> > > freezing have been converted to filesystem freeze/thawing
> > > we can remove the kernel kthread freezer.
> > >
> > > Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
> >
> > I like this one. :-)
>
> However, suspend_freeze/thaw_processes() require some more work.
>
> In particular, the freezing of workqueues is being removed here
> without a replacement.

Hrm, where do you see that? freeze_workqueues_busy() is still called in
try_to_freeze_tasks(). Likewise, thaw_processes() also calls
thaw_workqueues(). I did forget to nuke pm_nosig_freezing, though.

Also, as I have noted a few times now, we have yet to determine whether we
can remove all freezer calls on kthreads without causing a regression.
Granted, I'm being overly careful here; I still would not be surprised to
learn about a stupid use case where this will be hard to remove now. Only
once this is done should this patch be considered. This will take time.

So I'd like more general consensus on the long-term approach:

1) First address each fs so it uses filesystem freeze calls on
   suspend/hibernate and thaw calls on resume. While at it, remove freezer
   API calls on their respective kthreads.
2) In parallel, address removing freezer API calls on non-IO kthreads;
   these should be trivial, but who knows what surprises lurk here.
3) Look for kthreads which seem to generate IO -- address / review whether
   removal of the freezer API can be done with some form of quiescing.
   This is currently, for example, being done on SCSI / md.
4) Only after all the above is done should we consider this patch or some
   form of it.

  Luis
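To make step (1) a bit more concrete, below is a hedged sketch -- explicitly not the implementation from this series -- of what freezing a filesystem from the suspend path could look like, using the existing VFS helpers freeze_super() and thaw_super() from a PM notifier. The example_sb pointer, the example_fs_pm_* names and the notifier wiring are illustrative assumptions; how the real series locates superblocks, in what order it freezes them, and how it handles errors is deliberately left out.

#include <linux/fs.h>
#include <linux/suspend.h>

static struct super_block *example_sb;	/* assumed: pinned by the caller elsewhere */

static int example_fs_pm_notify(struct notifier_block *nb,
				unsigned long action, void *data)
{
	switch (action) {
	case PM_HIBERNATION_PREPARE:
	case PM_SUSPEND_PREPARE:
		/* Quiesce writers and sync the fs before the snapshot is taken. */
		if (freeze_super(example_sb))
			return NOTIFY_BAD;
		break;
	case PM_POST_HIBERNATION:
	case PM_POST_SUSPEND:
		thaw_super(example_sb);
		break;
	}
	return NOTIFY_DONE;
}

static struct notifier_block example_fs_pm_nb = {
	.notifier_call = example_fs_pm_notify,
};

/* Registered once at init time with register_pm_notifier(&example_fs_pm_nb). */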
On Wed, 2017-10-04 at 02:47 +0200, Luis R. Rodriguez wrote:
> 3) Look for kthreads which seem to generate IO -- address / review whether
>    removal of the freezer API can be done with some form of quiescing.
>    This is currently, for example, being done on SCSI / md.
> 4) Only after all the above is done should we consider this patch or some
>    form of it.

After having given this more thought, I think we should omit these last two
steps. Modifying the md driver such that it does not submit I/O requests while
processes are frozen requires either using the freezer API or open-coding it.
I think there is general agreement in the kernel community that open-coding a
single mechanism in multiple drivers is wrong. Does this mean that instead of
removing the freezer API we should keep it and review all its users instead?

Bart.
On 10/03/2017 08:53 PM, Luis R. Rodriguez wrote:
> Now that all filesystems which used to rely on kthread
> freezing have been converted to filesystem freeze/thawing
> we can remove the kernel kthread freezer.
>
> Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
> ---
>  Documentation/power/freezing-of-tasks.txt |  9 ------
>  drivers/xen/manage.c                      |  6 ----
>  include/linux/freezer.h                   |  4 ---
>  kernel/power/hibernate.c                  | 10 ++-----
>  kernel/power/power.h                      | 20 +------------
>  kernel/power/process.c                    | 47 -------------------------------
>  kernel/power/user.c                       |  9 ------
>  tools/power/pm-graph/analyze_suspend.py   |  1 -
>  8 files changed, 3 insertions(+), 103 deletions(-)
>

Weelll ... have you checked MD? It's using kernel threads, which probably
should be stopped, too, when going into suspend. After all, MD might be
doing one of its internal operations (e.g. resync) at that time, which will
generate quite some I/O.

Cheers,

Hannes
On Wed, Oct 04, 2017 at 02:47:56AM +0200, Luis R. Rodriguez wrote:
> On Tue, Oct 03, 2017 at 11:15:07PM +0200, Rafael J. Wysocki wrote:
> > On Tuesday, October 3, 2017 8:59:00 PM CEST Rafael J. Wysocki wrote:
> > > On Tuesday, October 3, 2017 8:53:13 PM CEST Luis R. Rodriguez wrote:
> > > > Now that all filesystems which used to rely on kthread
> > > > freezing have been converted to filesystem freeze/thawing
> > > > we can remove the kernel kthread freezer.
> > > >
> > > > Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
> > >
> > > I like this one. :-)
> >
> > However, suspend_freeze/thaw_processes() require some more work.
> >
> > In particular, the freezing of workqueues is being removed here
> > without a replacement.
>
> Hrm, where do you see that? freeze_workqueues_busy() is still called in
> try_to_freeze_tasks(). Likewise, thaw_processes() also calls
> thaw_workqueues(). I did forget to nuke pm_nosig_freezing, though.

static int try_to_freeze_tasks(bool user_only)
{
.....
	if (!user_only)
		freeze_workqueues_begin();
.....
	if (!user_only) {
		wq_busy = freeze_workqueues_busy();
.....
	}

i.e. only try_to_freeze_tasks(false) will freeze workqueues, and the
only function that calls that - freeze_kernel_threads() - is removed
by this patch.

Cheers,

Dave.
On Tue 2017-10-03 22:49:40, Luis R. Rodriguez wrote:
> On Tue, Oct 03, 2017 at 10:12:04PM +0200, Pavel Machek wrote:
> > On Tue 2017-10-03 11:53:13, Luis R. Rodriguez wrote:
> > > Now that all filesystems which used to rely on kthread
> > > freezing have been converted to filesystem freeze/thawing
> > > we can remove the kernel kthread freezer.
> >
> > Are you surely-sure? You mentioned other in-kernel sources of writes;
> > what about those?
>
> You perhaps did not read the cover letter; it describes that this patch will
> very likely take time to *ever* apply. We must cover tons of ground first, to
> address precisely what you say.

Yeah, I was not careful enough reading the cover letter. Having a series
where 1-4/5 are ready to go, and 5/5 is a bad idea for years to come, is
quite confusing.

The really relevant parts of the cover letter should be moved to 5/5, and
it should be clearly marked as "not for application".

Pavel
On Fri, Oct 06, 2017 at 02:07:13PM +0200, Pavel Machek wrote:
>
> Yeah, I was not careful enough reading the cover letter. Having a series
> where 1-4/5 are ready to go, and 5/5 is a bad idea for years to come, is
> quite confusing.

4/5 is not ready to go either, at the very least....

					- Ted
On Wed, Oct 04, 2017 at 01:03:54AM +0000, Bart Van Assche wrote:
> On Wed, 2017-10-04 at 02:47 +0200, Luis R. Rodriguez wrote:
> > 3) Look for kthreads which seem to generate IO -- address / review whether
> >    removal of the freezer API can be done with some form of quiescing.
> >    This is currently, for example, being done on SCSI / md.
> > 4) Only after all the above is done should we consider this patch or some
> >    form of it.
>
> After having given this more thought, I think we should omit these last two
> steps. Modifying the md driver such that it does not submit I/O requests while
> processes are frozen requires either using the freezer API or open-coding it.
> I think there is general agreement in the kernel community that open-coding a
> single mechanism in multiple drivers is wrong. Does this mean that instead of
> removing the freezer API we should keep it and review all its users instead?

Agreed.

  Luis
diff --git a/Documentation/power/freezing-of-tasks.txt b/Documentation/power/freezing-of-tasks.txt
index af005770e767..477559f57c95 100644
--- a/Documentation/power/freezing-of-tasks.txt
+++ b/Documentation/power/freezing-of-tasks.txt
@@ -71,15 +71,6 @@ Rationale behind the functions dealing with freezing and thawing of tasks:
 
 freeze_processes():
   - freezes only userspace tasks
-freeze_kernel_threads():
-  - freezes all tasks (including kernel threads) because we can't freeze
-    kernel threads without freezing userspace tasks
-
-thaw_kernel_threads():
-  - thaws only kernel threads; this is particularly useful if we need to do
-    anything special in between thawing of kernel threads and thawing of
-    userspace tasks, or if we want to postpone the thawing of userspace tasks
-
 thaw_processes():
   - thaws all tasks (including kernel threads) because we can't thaw
     userspace tasks without thawing kernel threads
diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index c425d03d37d2..8ca0e0c9a7d5 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -109,12 +109,6 @@ static void do_suspend(void)
 		goto out;
 	}
 
-	err = freeze_kernel_threads();
-	if (err) {
-		pr_err("%s: freeze kernel threads failed %d\n", __func__, err);
-		goto out_thaw;
-	}
-
 	err = dpm_suspend_start(PMSG_FREEZE);
 	if (err) {
 		pr_err("%s: dpm_suspend_start %d\n", __func__, err);
diff --git a/include/linux/freezer.h b/include/linux/freezer.h
index dd03e837ebb7..037ef3f16173 100644
--- a/include/linux/freezer.h
+++ b/include/linux/freezer.h
@@ -43,9 +43,7 @@ extern void __thaw_task(struct task_struct *t);
 
 extern bool __refrigerator(bool check_kthr_stop);
 extern int freeze_processes(void);
-extern int freeze_kernel_threads(void);
 extern void thaw_processes(void);
-extern void thaw_kernel_threads(void);
 
 /*
  * DO NOT ADD ANY NEW CALLERS OF THIS FUNCTION
@@ -263,9 +261,7 @@ static inline void __thaw_task(struct task_struct *t) {}
 
 static inline bool __refrigerator(bool check_kthr_stop) { return false; }
 static inline int freeze_processes(void) { return -ENOSYS; }
-static inline int freeze_kernel_threads(void) { return -ENOSYS; }
 static inline void thaw_processes(void) {}
-static inline void thaw_kernel_threads(void) {}
 
 static inline bool try_to_freeze_nowarn(void) { return false; }
 static inline bool try_to_freeze(void) { return false; }
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index a5c36e9c56a6..7c3af084b10a 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -352,10 +352,6 @@ int hibernation_snapshot(int platform_mode)
 	if (error)
 		goto Close;
 
-	error = freeze_kernel_threads();
-	if (error)
-		goto Cleanup;
-
 	if (hibernation_test(TEST_FREEZER)) {
 
 		/*
@@ -363,13 +359,13 @@ int hibernation_snapshot(int platform_mode)
 		 * successful freezer test.
 		 */
 		freezer_test_done = true;
-		goto Thaw;
+		goto Cleanup;
 	}
 
 	error = dpm_prepare(PMSG_FREEZE);
 	if (error) {
 		dpm_complete(PMSG_RECOVER);
-		goto Thaw;
+		goto Cleanup;
 	}
 
 	suspend_console();
@@ -405,8 +401,6 @@ int hibernation_snapshot(int platform_mode)
 	platform_end(platform_mode);
 	return error;
 
- Thaw:
-	thaw_kernel_threads();
  Cleanup:
 	swsusp_free();
 	goto Close;
diff --git a/kernel/power/power.h b/kernel/power/power.h
index 1d2d761e3c25..333bde062e42 100644
--- a/kernel/power/power.h
+++ b/kernel/power/power.h
@@ -253,25 +253,7 @@ extern int pm_test_level;
 #ifdef CONFIG_SUSPEND_FREEZER
 static inline int suspend_freeze_processes(void)
 {
-	int error;
-
-	error = freeze_processes();
-	/*
-	 * freeze_processes() automatically thaws every task if freezing
-	 * fails. So we need not do anything extra upon error.
-	 */
-	if (error)
-		return error;
-
-	error = freeze_kernel_threads();
-	/*
-	 * freeze_kernel_threads() thaws only kernel threads upon freezing
-	 * failure. So we have to thaw the userspace tasks ourselves.
-	 */
-	if (error)
-		thaw_processes();
-
-	return error;
+	return freeze_processes();
 }
 
 static inline void suspend_thaw_processes(void)
diff --git a/kernel/power/process.c b/kernel/power/process.c
index 9d1277768312..2e223555b764 100644
--- a/kernel/power/process.c
+++ b/kernel/power/process.c
@@ -169,33 +169,6 @@ int freeze_processes(void)
 	return error;
 }
 
-/**
- * freeze_kernel_threads - Make freezable kernel threads go to the refrigerator.
- *
- * On success, returns 0. On failure, -errno and only the kernel threads are
- * thawed, so as to give a chance to the caller to do additional cleanups
- * (if any) before thawing the userspace tasks. So, it is the responsibility
- * of the caller to thaw the userspace tasks, when the time is right.
- */
-int freeze_kernel_threads(void)
-{
-	int error;
-
-	pr_info("Freezing remaining freezable tasks ... ");
-
-	pm_nosig_freezing = true;
-	error = try_to_freeze_tasks(false);
-	if (!error)
-		pr_cont("done.");
-
-	pr_cont("\n");
-	BUG_ON(in_atomic());
-
-	if (error)
-		thaw_kernel_threads();
-	return error;
-}
-
 void thaw_processes(void)
 {
 	struct task_struct *g, *p;
@@ -234,23 +207,3 @@ void thaw_processes(void)
 	pr_cont("done.\n");
 	trace_suspend_resume(TPS("thaw_processes"), 0, false);
 }
-
-void thaw_kernel_threads(void)
-{
-	struct task_struct *g, *p;
-
-	pm_nosig_freezing = false;
-	pr_info("Restarting kernel threads ... ");
-
-	thaw_workqueues();
-
-	read_lock(&tasklist_lock);
-	for_each_process_thread(g, p) {
-		if (p->flags & (PF_KTHREAD | PF_WQ_WORKER))
-			__thaw_task(p);
-	}
-	read_unlock(&tasklist_lock);
-
-	schedule();
-	pr_cont("done.\n");
-}
diff --git a/kernel/power/user.c b/kernel/power/user.c
index 22df9f7ff672..ebb2e6a8ddc8 100644
--- a/kernel/power/user.c
+++ b/kernel/power/user.c
@@ -277,15 +277,6 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
 		swsusp_free();
 		memset(&data->handle, 0, sizeof(struct snapshot_handle));
 		data->ready = false;
-		/*
-		 * It is necessary to thaw kernel threads here, because
-		 * SNAPSHOT_CREATE_IMAGE may be invoked directly after
-		 * SNAPSHOT_FREE. In that case, if kernel threads were not
-		 * thawed, the preallocation of memory carried out by
-		 * hibernation_snapshot() might run into problems (i.e. it
-		 * might fail or even deadlock).
-		 */
-		thaw_kernel_threads();
 		break;
 
 	case SNAPSHOT_PREF_IMAGE_SIZE:
diff --git a/tools/power/pm-graph/analyze_suspend.py b/tools/power/pm-graph/analyze_suspend.py
index 1b60fe203741..545a5e2eafbe 100755
--- a/tools/power/pm-graph/analyze_suspend.py
+++ b/tools/power/pm-graph/analyze_suspend.py
@@ -138,7 +138,6 @@ class SystemValues:
 		'pm_prepare_console': dict(),
 		'pm_notifier_call_chain': dict(),
 		'freeze_processes': dict(),
-		'freeze_kernel_threads': dict(),
 		'pm_restrict_gfp_mask': dict(),
 		'acpi_suspend_begin': dict(),
 		'suspend_console': dict(),
Now that all filesystems which used to rely on kthread
freezing have been converted to filesystem freeze/thawing
we can remove the kernel kthread freezer.

Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
---
 Documentation/power/freezing-of-tasks.txt |  9 ------
 drivers/xen/manage.c                      |  6 ----
 include/linux/freezer.h                   |  4 ---
 kernel/power/hibernate.c                  | 10 ++-----
 kernel/power/power.h                      | 20 +------------
 kernel/power/process.c                    | 47 -------------------------------
 kernel/power/user.c                       |  9 ------
 tools/power/pm-graph/analyze_suspend.py   |  1 -
 8 files changed, 3 insertions(+), 103 deletions(-)