From patchwork Fri Aug 18 08:17:27 2023
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13357458
From: Tong Tiangen <tongtiangen@huawei.com>
To: Andrew Morton, Naoya Horiguchi, Miaohe Lin
CC: Tong Tiangen, Guohanjun
Subject: [RFC PATCH v2-next] mm: memory-failure: use rcu lock instead of
 tasklist_lock when collect_procs()
Date: Fri, 18 Aug 2023 16:17:27 +0800
Message-ID: <20230818081727.4181963-1-tongtiangen@huawei.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
We found a softlockup issue in our test. Analyzing the logs, the relevant
CPU call traces are as follows:

CPU0:
_do_fork
  -> copy_process()
    -> write_lock_irq(&tasklist_lock)  // Disable irq, waiting for
                                       // tasklist_lock

CPU1:
wp_page_copy()
  -> pte_offset_map_lock()
    -> spin_lock(&page->ptl)           // Hold page->ptl
  -> ptep_clear_flush()
    -> flush_tlb_others() ...
      -> smp_call_function_many()
        -> arch_send_call_function_ipi_mask()
          -> csd_lock_wait()           // Waiting for other CPUs to
                                       // respond to the IPI

CPU2:
collect_procs_anon()
  -> read_lock(&tasklist_lock)         // Hold tasklist_lock
    -> for_each_process(tsk)
      -> page_mapped_in_vma()
        -> page_vma_mapped_walk()
          -> map_pte()
            -> spin_lock(&page->ptl)   // Waiting for page->ptl

We can see that CPU1 is waiting for CPU0 to respond to the IPI, CPU0 is
waiting for CPU2 to unlock tasklist_lock, and CPU2 is waiting for CPU1 to
unlock page->ptl. As a result, a softlockup is triggered.

collect_procs_anon() does not modify the tasklist; it only performs a read
traversal. Therefore, we can take the RCU read lock instead of read-locking
tasklist_lock, which breaks the wait chain above.

The same logic can also be applied to:
- collect_procs_file()
- collect_procs_fsdax()
- collect_procs_ksm()
- find_early_kill_thread()

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
v2:
1. Modify the title description.
2. Optimize the implementation of find_early_kill_thread() without
   functional changes.
---
 mm/ksm.c            |  4 ++--
 mm/memory-failure.c | 33 +++++++++++++++++++--------------
 2 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 6b7b8928fb96..dcbc0c7f68e7 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2919,7 +2919,7 @@ void collect_procs_ksm(struct page *page, struct list_head *to_kill,
 		struct anon_vma *av = rmap_item->anon_vma;
 
 		anon_vma_lock_read(av);
-		read_lock(&tasklist_lock);
+		rcu_read_lock();
 		for_each_process(tsk) {
 			struct anon_vma_chain *vmac;
 			unsigned long addr;
@@ -2938,7 +2938,7 @@ void collect_procs_ksm(struct page *page, struct list_head *to_kill,
 				}
 			}
 		}
-		read_unlock(&tasklist_lock);
+		rcu_read_unlock();
 		anon_vma_unlock_read(av);
 	}
 }
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 7b01fffe7a79..4f3081f47798 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -546,24 +546,29 @@ static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
  * Find a dedicated thread which is supposed to handle SIGBUS(BUS_MCEERR_AO)
  * on behalf of the thread group. Return task_struct of the (first found)
  * dedicated thread if found, and return NULL otherwise.
- *
- * We already hold read_lock(&tasklist_lock) in the caller, so we don't
- * have to call rcu_read_lock/unlock() in this function.
  */
 static struct task_struct *find_early_kill_thread(struct task_struct *tsk)
 {
 	struct task_struct *t;
+	bool found = false;
 
+	rcu_read_lock();
 	for_each_thread(tsk, t) {
 		if (t->flags & PF_MCE_PROCESS) {
-			if (t->flags & PF_MCE_EARLY)
-				return t;
+			if (t->flags & PF_MCE_EARLY) {
+				found = true;
+				break;
+			}
 		} else {
-			if (sysctl_memory_failure_early_kill)
-				return t;
+			if (sysctl_memory_failure_early_kill) {
+				found = true;
+				break;
+			}
 		}
 	}
-	return NULL;
+	rcu_read_unlock();
+
+	return found ? t : NULL;
 }
 
 /*
@@ -609,7 +614,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
 		return;
 
 	pgoff = page_to_pgoff(page);
-	read_lock(&tasklist_lock);
+	rcu_read_lock();
 	for_each_process(tsk) {
 		struct anon_vma_chain *vmac;
 		struct task_struct *t = task_early_kill(tsk, force_early);
@@ -626,7 +631,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
 			add_to_kill_anon_file(t, page, vma, to_kill);
 		}
 	}
-	read_unlock(&tasklist_lock);
+	rcu_read_unlock();
 	anon_vma_unlock_read(av);
 }
 
@@ -642,7 +647,7 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill,
 	pgoff_t pgoff;
 
 	i_mmap_lock_read(mapping);
-	read_lock(&tasklist_lock);
+	rcu_read_lock();
 	pgoff = page_to_pgoff(page);
 	for_each_process(tsk) {
 		struct task_struct *t = task_early_kill(tsk, force_early);
@@ -662,7 +667,7 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill,
 			add_to_kill_anon_file(t, page, vma, to_kill);
 		}
 	}
-	read_unlock(&tasklist_lock);
+	rcu_read_unlock();
 	i_mmap_unlock_read(mapping);
 }
 
@@ -685,7 +690,7 @@ static void collect_procs_fsdax(struct page *page,
 	struct task_struct *tsk;
 
 	i_mmap_lock_read(mapping);
-	read_lock(&tasklist_lock);
+	rcu_read_lock();
 	for_each_process(tsk) {
 		struct task_struct *t = task_early_kill(tsk, true);
 
@@ -696,7 +701,7 @@ static void collect_procs_fsdax(struct page *page,
 			add_to_kill_fsdax(t, page, vma, to_kill, pgoff);
 		}
 	}
-	read_unlock(&tasklist_lock);
+	rcu_read_unlock();
 	i_mmap_unlock_read(mapping);
 }
 #endif /* CONFIG_FS_DAX */