
mm: optimize the redundant loop of mm_update_next_owner()

Message ID 20240620122123.3877432-1-alexjlzheng@tencent.com (mailing list archive)
State New
Series mm: optimize the redundant loop of mm_update_next_owner()

Commit Message

Jinliang Zheng June 20, 2024, 12:21 p.m. UTC
From: Jinliang Zheng <alexjlzheng@tencent.com>

When mm_update_next_owner() races with swapoff (try_to_unuse()) or with
/proc, ptrace, or page migration (get_task_mm()), each of these paths holds
a temporary reference that keeps mm->mm_users elevated even though no
remaining task actually uses the mm_struct. The loop therefore cannot find
any task_struct whose mm_struct matches the target mm_struct, yet it still
scans every process in the system.
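
For context, here is a simplified sketch of get_task_mm() based on
kernel/fork.c (exact details vary across kernel versions). The point is the
mmget() call, which elevates mm->mm_users without making the caller a task
whose ->mm points at the mm:

    struct mm_struct *get_task_mm(struct task_struct *task)
    {
            struct mm_struct *mm;

            task_lock(task);
            mm = task->mm;
            if (mm) {
                    if (task->flags & PF_KTHREAD)
                            mm = NULL;      /* never return a kthread's mm */
                    else
                            mmget(mm);      /* bumps mm->mm_users */
            }
            task_unlock(task);
            return mm;
    }

While such a reference is held, mm_users stays above 1 even after every
task that actually used this mm_struct has exited or switched away.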

If this race is combined with the stress-ng-zombie and stress-ng-dup
tests, the long scan runs under read_lock(&tasklist_lock), and a concurrent
writer in write_lock_irq() then spins with interrupts disabled until the
scan finishes, easily causing a hard lockup on tasklist_lock.
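
Conceptually, write_lock_irq() disables interrupts before it starts
spinning, which is why a long-held read_lock on the other side shows up as
a hard lockup. A simplified sketch of the generic implementation in
include/linux/rwlock_api_smp.h (the real code also goes through lockdep
and arch-specific contention helpers):

    static inline void __raw_write_lock_irq(rwlock_t *lock)
    {
            local_irq_disable();            /* interrupts off from here on */
            preempt_disable();
            do_raw_write_lock(lock);        /* spins until all readers drop the lock */
    }

With interrupts disabled, the spinning CPU cannot service the timer
interrupt, and the NMI watchdog eventually reports a hard lockup.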

Recognize this situation during the scan and break out of the loop early.
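
For reference, mm_update_next_owner() already performs this check once on
entry, before taking tasklist_lock. A simplified sketch based on
kernel/exit.c (the function is compiled only with CONFIG_MEMCG, and details
differ slightly between kernel versions); the patch below simply repeats
the entry check inside the final for_each_process() scan, so the loop stops
as soon as the transient reference is dropped:

    void mm_update_next_owner(struct mm_struct *mm)
    {
            struct task_struct *c, *g, *p = current;

    retry:
            /* If the exiting task is not the owner, nothing to do. */
            if (mm->owner != p)
                    return;
            /*
             * No other users: do not leave mm->owner pointing at a
             * soon-to-be-freed task_struct.
             */
            if (atomic_read(&mm->mm_users) <= 1) {
                    WRITE_ONCE(mm->owner, NULL);
                    return;
            }

            read_lock(&tasklist_lock);
            /*
             * ... search children, siblings, then every process; the
             * elided tail can jump back to "retry" ...
             */
    }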

Signed-off-by: Jinliang Zheng <alexjlzheng@tencent.com>
---
 kernel/exit.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Andrew Morton June 20, 2024, 8:20 p.m. UTC | #1
On Thu, 20 Jun 2024 20:21:24 +0800 alexjlzheng@gmail.com wrote:

> From: Jinliang Zheng <alexjlzheng@tencent.com>
> 
> When mm_update_next_owner() races with swapoff (try_to_unuse()) or with
> /proc, ptrace, or page migration (get_task_mm()), each of these paths holds
> a temporary reference that keeps mm->mm_users elevated even though no
> remaining task actually uses the mm_struct. The loop therefore cannot find
> any task_struct whose mm_struct matches the target mm_struct, yet it still
> scans every process in the system.
> 
> If this race is combined with the stress-ng-zombie and stress-ng-dup
> tests, the long scan runs under read_lock(&tasklist_lock), and a concurrent
> writer in write_lock_irq() then spins with interrupts disabled until the
> scan finishes, easily causing a hard lockup on tasklist_lock.

This is not an optimization!  A userspace-triggerable hard lockup is a
serious bug.

> Recognize this situation during the scan and break out of the loop early.
> 
> Signed-off-by: Jinliang Zheng <alexjlzheng@tencent.com>
> ---
>  kernel/exit.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/kernel/exit.c b/kernel/exit.c
> index f95a2c1338a8..81fcee45d630 100644
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -484,6 +484,8 @@ void mm_update_next_owner(struct mm_struct *mm)
>  	 * Search through everything else, we should not get here often.
>  	 */
>  	for_each_process(g) {
> +		if (atomic_read(&mm->mm_users) <= 1)
> +			break;
>  		if (g->flags & PF_KTHREAD)
>  			continue;
>  		for_each_thread(g, c) {

I agree that the patch is an optimization in some cases.  But does it
really fix the issue?  Isn't the problem simply that this search is too
lengthy?

Isn't it still possible for this search to take too much time even before
the new check triggers?

I wonder if this loop really does anything useful.  "we should not get
here often".  Well, under what circumstances *do* we get here?  What
goes wrong if we simply remove the entire loop?

Patch

diff --git a/kernel/exit.c b/kernel/exit.c
index f95a2c1338a8..81fcee45d630 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -484,6 +484,8 @@ void mm_update_next_owner(struct mm_struct *mm)
 	 * Search through everything else, we should not get here often.
 	 */
 	for_each_process(g) {
+		if (atomic_read(&mm->mm_users) <= 1)
+			break;
 		if (g->flags & PF_KTHREAD)
 			continue;
 		for_each_thread(g, c) {