Message ID | 20200820203902.11308-1-dave@stgolabs.net (mailing list archive)
---|---
State | New, archived
Series | mm/kmemleak: rely on rcu for task stack scanning
On Thu, 20 Aug 2020, Qian Cai wrote:
>On Thu, Aug 20, 2020 at 01:39:02PM -0700, Davidlohr Bueso wrote:
>> kmemleak_scan() currently relies on the big tasklist_lock
>> hammer to stabilize iterating through the tasklist. Instead,
>> this patch proposes simply using rcu along with the rcu-safe
>> for_each_process_thread flavor (without changing scan semantics),
>> which doesn't make use of next_thread/p->thread_group and thus
>> cannot race with exit. Furthermore, any races with fork()
>> and not seeing the new child should be benign as it's not
>> running yet and can also be detected by the next scan.
>
>It is not entirely clear to me what problem the patch is trying to solve. If
>this is about performance, we will probably need some number.

So in this case avoiding the tasklist_lock could prove beneficial for
performance, considering the scan operation is done periodically. I have
seen improvements of around 30% when doing similar replacements on very
pathological microbenchmarks (i.e. stressing get/setpriority(2)).

However, my main motivation is that it's one less user of the global
lock, something that Linus has long wanted to see gone eventually (if
ever), even if the traditional fairness issues have now been dealt with
by qrwlocks. Of course this is still a very long way off.

This patch also kills another user of the deprecated tsk->thread_group.

Thanks,
Davidlohr
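For reference, for_each_process_thread() avoids the next_thread()/
p->thread_group chaining entirely: it walks signal->thread_head with an
RCU-safe list primitive. A simplified sketch of the iterators involved
(approximating include/linux/sched/signal.h of this era, not verbatim):

/* Sketch only, approximate; see include/linux/sched/signal.h. */
#define for_each_process(p) \
	for (p = &init_task; (p = next_task(p)) != &init_task; )

/* Walks the thread list via an RCU list primitive, so a thread being
 * unlinked on exit cannot strand the iterator. */
#define for_each_thread(p, t) \
	list_for_each_entry_rcu(t, &(p)->signal->thread_head, thread_node)

/* Careful: a double loop, so 'break' won't work as expected. */
#define for_each_process_thread(p, t) \
	for_each_process(p) for_each_thread(p, t)

This is why an RCU read-side critical section alone keeps the walk
coherent, whereas the old do_each_thread()/while_each_thread() pair was
racy against exit, as the commit message notes.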
On 08/20, Davidlohr Bueso wrote:
>
> @@ -1471,15 +1471,15 @@ static void kmemleak_scan(void)
>  	if (kmemleak_stack_scan) {
>  		struct task_struct *p, *g;
>
> -		read_lock(&tasklist_lock);
> -		do_each_thread(g, p) {
> +		rcu_read_lock();
> +		for_each_process_thread(g, p) {
>  			void *stack = try_get_task_stack(p);
>  			if (stack) {
>  				scan_block(stack, stack + THREAD_SIZE, NULL);
>  				put_task_stack(p);
>  			}
> -		} while_each_thread(g, p);
> -		read_unlock(&tasklist_lock);
> +		}
> +		rcu_read_unlock();

Acked-by: Oleg Nesterov <oleg@redhat.com>
On Thu, Aug 20, 2020 at 01:39:02PM -0700, Davidlohr Bueso wrote:
> kmemleak_scan() currently relies on the big tasklist_lock
> hammer to stabilize iterating through the tasklist. Instead,
> this patch proposes simply using rcu along with the rcu-safe
> for_each_process_thread flavor (without changing scan semantics),
> which doesn't make use of next_thread/p->thread_group and thus
> cannot race with exit. Furthermore, any races with fork()
> and not seeing the new child should be benign as it's not
> running yet and can also be detected by the next scan.
>
> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>

As long as the kernel thread stack is still around (kmemleak does use
try_get_task_stack()), I'm fine with the change:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
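The caveat Catalin mentions is handled by the stack refcounting that
try_get_task_stack() performs when CONFIG_THREAD_INFO_IN_TASK is
enabled; a simplified sketch of the helpers (approximating
include/linux/sched/task_stack.h of this era, not verbatim):

/* Sketch only, approximate; see include/linux/sched/task_stack.h. */
#ifdef CONFIG_THREAD_INFO_IN_TASK
static inline void *try_get_task_stack(struct task_struct *tsk)
{
	/* Returns NULL once the stack refcount has already hit zero. */
	return refcount_inc_not_zero(&tsk->stack_refcount) ?
		task_stack_page(tsk) : NULL;
}

extern void put_task_stack(struct task_struct *tsk); /* may free the stack */
#else
/* Without stack refcounting, the stack lives as long as the
 * task_struct, which the RCU read-side critical section pins. */
static inline void *try_get_task_stack(struct task_struct *tsk)
{
	return task_stack_page(tsk);
}

static inline void put_task_stack(struct task_struct *tsk) { }
#endif

Either way, a thread that exits during the scan is either skipped (NULL
return) or has its stack kept alive until put_task_stack(), so
scan_block() never touches freed stack pages.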
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 5e252d91eb14..c0014d3b91c1 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1471,15 +1471,15 @@ static void kmemleak_scan(void)
 	if (kmemleak_stack_scan) {
 		struct task_struct *p, *g;
 
-		read_lock(&tasklist_lock);
-		do_each_thread(g, p) {
+		rcu_read_lock();
+		for_each_process_thread(g, p) {
 			void *stack = try_get_task_stack(p);
 			if (stack) {
 				scan_block(stack, stack + THREAD_SIZE, NULL);
 				put_task_stack(p);
 			}
-		} while_each_thread(g, p);
-		read_unlock(&tasklist_lock);
+		}
+		rcu_read_unlock();
 	}
 
 	/*
kmemleak_scan() currently relies on the big tasklist_lock
hammer to stabilize iterating through the tasklist. Instead,
this patch proposes simply using rcu along with the rcu-safe
for_each_process_thread flavor (without changing scan semantics),
which doesn't make use of next_thread/p->thread_group and thus
cannot race with exit. Furthermore, any races with fork()
and not seeing the new child should be benign as it's not
running yet and can also be detected by the next scan.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
---
 mm/kmemleak.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
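The pattern in the patch generalizes to other read-mostly walks of the
tasklist. A minimal hypothetical example reusing the same primitives
(count_live_stacks() is illustrative only, not part of the patch):

#include <linux/rcupdate.h>
#include <linux/sched/signal.h>
#include <linux/sched/task_stack.h>

/* Hypothetical walker built on the same pattern as the patch above:
 * count threads whose stack is still allocated, without taking
 * tasklist_lock. */
static unsigned int count_live_stacks(void)
{
	struct task_struct *g, *t;
	unsigned int n = 0;

	rcu_read_lock();	/* stabilizes the process/thread lists */
	for_each_process_thread(g, t) {
		void *stack = try_get_task_stack(t);

		if (stack) {
			n++;
			put_task_stack(t);
		}
	}
	rcu_read_unlock();
	return n;
}

As in kmemleak_scan(), a thread forked after the walk started may be
missed, which is acceptable for a periodic, best-effort operation.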