From patchwork Tue Apr 20 13:30:45 2021
X-Patchwork-Submitter: Miaohe Lin
X-Patchwork-Id: 12214347
From: Miaohe Lin
Subject: [PATCH v3 1/4] mm/swapfile: use percpu_ref to serialize against concurrent swapoff
Date: Tue, 20 Apr 2021 09:30:45 -0400
Message-ID: <20210420133048.6773-2-linmiaohe@huawei.com>
In-Reply-To: <20210420133048.6773-1-linmiaohe@huawei.com>
References: <20210420133048.6773-1-linmiaohe@huawei.com>

Using the current get/put_swap_device() to guard against concurrent
swapoff for some swap operations, e.g. swap_readpage(), looks terrible
because those operations may take a really long time. This patch adds
percpu_ref support to serialize against concurrent swapoff. The
SWP_VALID flag is also removed, because it is only used together with
the RCU solution.

Signed-off-by: Miaohe Lin
Reviewed-by: "Huang, Ying"
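
The resulting lifecycle is easiest to see end to end. Below is a
condensed sketch (all names come from the patch itself; locking and
error handling are trimmed), not a complete function:

	/* swapon (alloc_swap_info): start DEAD so lookups fail until setup is done. */
	percpu_ref_init(&si->users, swap_users_ref_free,
			PERCPU_REF_INIT_DEAD, GFP_KERNEL);
	init_completion(&si->comp);

	/* swapon (enable_swap_info): publish the fully initialized device. */
	percpu_ref_resurrect(&si->users);

	/* reader side (get_swap_device()/put_swap_device()): */
	if (!percpu_ref_tryget_live(&si->users))
		return NULL;		/* swapoff already started */
	smp_rmb();			/* pairs with spin_unlock() in enable_swap_info() */
	/* ... si->swap_map, si->cluster_info, etc. are stable here ... */
	percpu_ref_put(&si->users);

	/* swapoff: fail new readers, then wait out the in-flight ones. */
	percpu_ref_kill(&si->users);
	synchronize_rcu();
	wait_for_completion(&si->comp);	/* completed by swap_users_ref_free() */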
---
 include/linux/swap.h |  5 +--
 mm/swapfile.c        | 79 +++++++++++++++++++++++++++-----------------
 2 files changed, 52 insertions(+), 32 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 144727041e78..c9e7fea10b83 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -177,7 +177,6 @@ enum {
 	SWP_PAGE_DISCARD = (1 << 10),	/* freed swap page-cluster discards */
 	SWP_STABLE_WRITES = (1 << 11),	/* no overwrite PG_writeback pages */
 	SWP_SYNCHRONOUS_IO = (1 << 12),	/* synchronous IO is efficient */
-	SWP_VALID	= (1 << 13),	/* swap is valid to be operated on? */
 	/* add others here before... */
 	SWP_SCANNING	= (1 << 14),	/* refcount in scan_swap_map */
 };
@@ -240,6 +239,7 @@ struct swap_cluster_list {
  * The in-memory structure used to track swap areas.
  */
 struct swap_info_struct {
+	struct percpu_ref users;	/* indicate and keep swap device valid. */
 	unsigned long	flags;		/* SWP_USED etc: see above */
 	signed short	prio;		/* swap priority of this type */
 	struct plist_node list;		/* entry in swap_active_head */
@@ -260,6 +260,7 @@ struct swap_info_struct {
 	struct block_device *bdev;	/* swap device or bdev of swap file */
 	struct file *swap_file;		/* seldom referenced */
 	unsigned int old_block_size;	/* seldom referenced */
+	struct completion comp;		/* seldom referenced */
 #ifdef CONFIG_FRONTSWAP
 	unsigned long *frontswap_map;	/* frontswap in-use, one bit per page */
 	atomic_t frontswap_pages;	/* frontswap pages in-use counter */
@@ -511,7 +512,7 @@ sector_t swap_page_sector(struct page *page);
 
 static inline void put_swap_device(struct swap_info_struct *si)
 {
-	rcu_read_unlock();
+	percpu_ref_put(&si->users);
 }
 
 #else /* CONFIG_SWAP */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 149e77454e3c..a640fc84be5b 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -39,6 +39,7 @@
 #include <linux/export.h>
 #include <linux/swap_slots.h>
 #include <linux/sort.h>
+#include <linux/completion.h>
 
 #include <asm/tlbflush.h>
 #include <linux/swapops.h>
@@ -511,6 +512,14 @@ static void swap_discard_work(struct work_struct *work)
 	spin_unlock(&si->lock);
 }
 
+static void swap_users_ref_free(struct percpu_ref *ref)
+{
+	struct swap_info_struct *si;
+
+	si = container_of(ref, struct swap_info_struct, users);
+	complete(&si->comp);
+}
+
 static void alloc_cluster(struct swap_info_struct *si, unsigned long idx)
 {
 	struct swap_cluster_info *ci = si->cluster_info;
@@ -1270,18 +1279,12 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
  * via preventing the swap device from being swapoff, until
  * put_swap_device() is called.  Otherwise return NULL.
  *
- * The entirety of the RCU read critical section must come before the
- * return from or after the call to synchronize_rcu() in
- * enable_swap_info() or swapoff().  So if "si->flags & SWP_VALID" is
- * true, the si->map, si->cluster_info, etc. must be valid in the
- * critical section.
- *
  * Notice that swapoff or swapoff+swapon can still happen before the
- * rcu_read_lock() in get_swap_device() or after the rcu_read_unlock()
- * in put_swap_device() if there isn't any other way to prevent
- * swapoff, such as page lock, page table lock, etc.  The caller must
- * be prepared for that.  For example, the following situation is
- * possible.
+ * percpu_ref_tryget_live() in get_swap_device() or after the
+ * percpu_ref_put() in put_swap_device() if there isn't any other way
+ * to prevent swapoff, such as page lock, page table lock, etc.  The
+ * caller must be prepared for that.  For example, the following
+ * situation is possible.
  *
  *   CPU1				CPU2
  *   do_swap_page()
@@ -1309,21 +1312,27 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
 	si = swp_swap_info(entry);
 	if (!si)
 		goto bad_nofile;
-
-	rcu_read_lock();
-	if (data_race(!(si->flags & SWP_VALID)))
-		goto unlock_out;
+	if (!percpu_ref_tryget_live(&si->users))
+		goto out;
+	/*
+	 * Guarantee the si->users are checked before accessing other
+	 * fields of swap_info_struct.
+	 *
+	 * Paired with the spin_unlock() after setup_swap_info() in
+	 * enable_swap_info().
+	 */
+	smp_rmb();
 
 	offset = swp_offset(entry);
 	if (offset >= si->max)
-		goto unlock_out;
+		goto put_out;
 
 	return si;
 bad_nofile:
 	pr_err("%s: %s%08lx\n", __func__, Bad_file, entry.val);
 out:
 	return NULL;
-unlock_out:
-	rcu_read_unlock();
+put_out:
+	percpu_ref_put(&si->users);
 	return NULL;
 }
@@ -2466,7 +2475,7 @@ static void setup_swap_info(struct swap_info_struct *p, int prio,
 
 static void _enable_swap_info(struct swap_info_struct *p)
 {
-	p->flags |= SWP_WRITEOK | SWP_VALID;
+	p->flags |= SWP_WRITEOK;
 	atomic_long_add(p->pages, &nr_swap_pages);
 	total_swap_pages += p->pages;
 
@@ -2497,10 +2506,9 @@ static void enable_swap_info(struct swap_info_struct *p, int prio,
 	spin_unlock(&p->lock);
 	spin_unlock(&swap_lock);
 	/*
-	 * Guarantee swap_map, cluster_info, etc. fields are valid
-	 * between get/put_swap_device() if SWP_VALID bit is set
+	 * Finished initializing swap device, now it's safe to reference it.
 	 */
-	synchronize_rcu();
+	percpu_ref_resurrect(&p->users);
 	spin_lock(&swap_lock);
 	spin_lock(&p->lock);
 	_enable_swap_info(p);
@@ -2616,16 +2624,16 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 
 	reenable_swap_slots_cache_unlock();
 
-	spin_lock(&swap_lock);
-	spin_lock(&p->lock);
-	p->flags &= ~SWP_VALID;		/* mark swap device as invalid */
-	spin_unlock(&p->lock);
-	spin_unlock(&swap_lock);
 	/*
-	 * wait for swap operations protected by get/put_swap_device()
-	 * to complete
+	 * Wait for swap operations protected by get/put_swap_device()
+	 * to complete.
+	 *
+	 * We need synchronize_rcu() here to protect the accessing of
+	 * the swap cache data structure.
 	 */
+	percpu_ref_kill(&p->users);
 	synchronize_rcu();
+	wait_for_completion(&p->comp);
 
 	flush_work(&p->discard_work);
@@ -2857,6 +2865,12 @@ static struct swap_info_struct *alloc_swap_info(void)
 	if (!p)
 		return ERR_PTR(-ENOMEM);
 
+	if (percpu_ref_init(&p->users, swap_users_ref_free,
+			    PERCPU_REF_INIT_DEAD, GFP_KERNEL)) {
+		kvfree(p);
+		return ERR_PTR(-ENOMEM);
+	}
+
 	spin_lock(&swap_lock);
 	for (type = 0; type < nr_swapfiles; type++) {
 		if (!(swap_info[type]->flags & SWP_USED))
@@ -2864,6 +2878,7 @@ static struct swap_info_struct *alloc_swap_info(void)
 	}
 	if (type >= MAX_SWAPFILES) {
 		spin_unlock(&swap_lock);
+		percpu_ref_exit(&p->users);
 		kvfree(p);
 		return ERR_PTR(-EPERM);
 	}
@@ -2891,9 +2906,13 @@ static struct swap_info_struct *alloc_swap_info(void)
 		plist_node_init(&p->avail_lists[i], 0);
 	p->flags = SWP_USED;
 	spin_unlock(&swap_lock);
-	kvfree(defer);
+	if (defer) {
+		percpu_ref_exit(&defer->users);
+		kvfree(defer);
+	}
 	spin_lock_init(&p->lock);
 	spin_lock_init(&p->cont_lock);
+	init_completion(&p->comp);
 
 	return p;
 }

From patchwork Tue Apr 20 13:30:46 2021
X-Patchwork-Submitter: Miaohe Lin
X-Patchwork-Id: 12214351
From: Miaohe Lin
Subject: [PATCH v3 2/4]
 swap: fix do_swap_page() race with swapoff
Date: Tue, 20 Apr 2021 09:30:46 -0400
Message-ID: <20210420133048.6773-3-linmiaohe@huawei.com>
In-Reply-To: <20210420133048.6773-1-linmiaohe@huawei.com>
References: <20210420133048.6773-1-linmiaohe@huawei.com>

When I was investigating the swap code, I found the following possible
race window:

  CPU 1                                    CPU 2
  -----                                    -----
  do_swap_page
    if (data_race(si->flags & SWP_SYNCHRONOUS_IO))
      swap_readpage
        if (data_race(sis->flags & SWP_FS_OPS)) {
                                           swapoff
                                             p->flags &= ~SWP_VALID;
                                             ..
                                             synchronize_rcu();
                                             ..
                                             p->swap_file = NULL;
      struct file *swap_file = sis->swap_file;
      struct address_space *mapping = swap_file->f_mapping; [oops!]

Note that for pages swapped in through the swap cache, this is not an
issue: the page is locked, and the swap entry is marked with
SWAP_HAS_CACHE, so swapoff() cannot proceed until the page has been
unlocked.

Using the current get/put_swap_device() to guard against concurrent
swapoff for swap_readpage() looks terrible because swap_readpage() may
take a really long time. And this race may not be really pernicious,
because swapoff is usually done only at system shutdown. To reduce the
performance overhead on the hot path as much as possible, it appears we
can use percpu_ref to close this race window (as suggested by Huang,
Ying).

Fixes: 0bcac06f27d7 ("mm,swap: skip swapcache for swapin of synchronous device")
Reported-by: kernel test robot (auto build test ERROR)
Signed-off-by: Miaohe Lin
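
Reduced to its control flow, the fix has the following shape (a sketch
of the hunks below, not the complete function):

	vm_fault_t do_swap_page(struct vm_fault *vmf)
	{
		struct swap_info_struct *si = NULL;
		/* ... existing locals and early checks are unchanged ... */

		/* Pin the device before any path that can reach swap_readpage(). */
		si = get_swap_device(entry);
		if (unlikely(!si))
			goto out;	/* swapoff won the race; the entry is stale */

		/* ... swap-in proper: si->swap_file etc. cannot be freed here ... */
	out:
		if (si)			/* early exits arrive with si == NULL */
			put_swap_device(si);
		return ret;
	}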
---
 include/linux/swap.h | 9 +++++++++
 mm/memory.c          | 9 +++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index c9e7fea10b83..46d51d058d05 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -527,6 +527,15 @@ static inline struct swap_info_struct *swp_swap_info(swp_entry_t entry)
 	return NULL;
 }
 
+static inline struct swap_info_struct *get_swap_device(swp_entry_t entry)
+{
+	return NULL;
+}
+
+static inline void put_swap_device(struct swap_info_struct *si)
+{
+}
+
 #define swap_address_space(entry)		(NULL)
 #define get_nr_swap_pages()			0L
 #define total_swap_pages			0L
diff --git a/mm/memory.c b/mm/memory.c
index 27014c3bde9f..7a2fe12cf641 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3311,6 +3311,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct page *page = NULL, *swapcache;
+	struct swap_info_struct *si = NULL;
 	swp_entry_t entry;
 	pte_t pte;
 	int locked;
@@ -3338,6 +3339,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out;
 	}
 
+	/* Prevent swapoff from happening to us. */
+	si = get_swap_device(entry);
+	if (unlikely(!si))
+		goto out;
 	delayacct_set_flag(current, DELAYACCT_PF_SWAPIN);
 	page = lookup_swap_cache(entry, vma, vmf->address);
@@ -3514,6 +3519,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 unlock:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 out:
+	if (si)
+		put_swap_device(si);
 	return ret;
 out_nomap:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -3525,6 +3532,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		unlock_page(swapcache);
 		put_page(swapcache);
 	}
+	if (si)
+		put_swap_device(si);
 	return ret;
 }

From patchwork Tue Apr 20 13:30:47 2021
X-Patchwork-Submitter: Miaohe Lin
X-Patchwork-Id: 12214349
From: Miaohe Lin
Subject: [PATCH v3 3/4] mm/swap: remove confusing checking for non_swap_entry() in swap_ra_info()
Date: Tue, 20 Apr 2021 09:30:47 -0400
Message-ID: <20210420133048.6773-4-linmiaohe@huawei.com>
In-Reply-To: <20210420133048.6773-1-linmiaohe@huawei.com>
References: <20210420133048.6773-1-linmiaohe@huawei.com>

The non_swap_entry() check was introduced for VMA-based swap readahead
by commit ec560175c0b6 ("mm, swap: VMA based swap readahead"), and was
later moved into swap_ra_info() by commit eaf649ebc3ac ("mm: swap:
clean up swap readahead"). But it makes the code confusing. The check
looks racy: once the pte lock has been released, somebody else might
fault in this pte, so the swap entry read here could be unexpected, and
it would seem we must first check that the pte is still a swap pte to
guard against that race. But the swap entry is not actually used in
this function, and the PTE entries are checked sufficiently when they
are really operated on later. So the non_swap_entry() check is not
needed here and should be removed to avoid confusion.

Signed-off-by: Miaohe Lin
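
In miniature, the deleted check reads the pte after the page-table lock
has been dropped, so the value it tests is already unreliable. An
annotated sketch of the removed lines:

	orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
					/* maps the pte, but takes no lock */
	entry = pte_to_swp_entry(*pte);	/* *pte may have been refaulted since;
					 * the entry can be arbitrary */
	if (unlikely(non_swap_entry(entry))) {
		pte_unmap(orig_pte);	/* so this early return decides nothing
					 * that later, locked checks don't redo */
		return;
	}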
---
 mm/swap_state.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 272ea2108c9d..df5405384520 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -721,7 +721,6 @@ static void swap_ra_info(struct vm_fault *vmf,
 {
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long ra_val;
-	swp_entry_t entry;
 	unsigned long faddr, pfn, fpfn;
 	unsigned long start, end;
 	pte_t *pte, *orig_pte;
@@ -739,11 +738,6 @@ static void swap_ra_info(struct vm_fault *vmf,
 	faddr = vmf->address;
 	orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
 
-	entry = pte_to_swp_entry(*pte);
-	if ((unlikely(non_swap_entry(entry)))) {
-		pte_unmap(orig_pte);
-		return;
-	}
 	fpfn = PFN_DOWN(faddr);
 	ra_val = GET_SWAP_RA_VAL(vma);

From patchwork Tue Apr 20 13:30:48 2021
X-Patchwork-Submitter: Miaohe Lin
X-Patchwork-Id: 12214355
From: Miaohe Lin
Subject: [PATCH v3 4/4] mm/shmem: fix shmem_swapin() race with swapoff
Date: Tue, 20 Apr 2021 09:30:48 -0400
Message-ID: <20210420133048.6773-5-linmiaohe@huawei.com>
In-Reply-To: <20210420133048.6773-1-linmiaohe@huawei.com>
References: <20210420133048.6773-1-linmiaohe@huawei.com>

When I was investigating the swap code, I found the following possible
race window:

  CPU 1                                    CPU 2
  -----                                    -----
  shmem_swapin
    swap_cluster_readahead
      if (likely(si->flags & (SWP_BLKDEV | SWP_FS_OPS))) {
                                           swapoff
                                             percpu_ref_kill(&p->users)
                                             synchronize_rcu()
                                             wait_for_completion
                                             ..
                                             si->swap_file = NULL;
      struct inode *inode = si->swap_file->f_mapping->host; [oops!]

Close this race window by using get/put_swap_device() to guard against
concurrent swapoff.

Fixes: 8fd2e0b505d1 ("mm: swap: check if swap backing device is congested or not")
Signed-off-by: Miaohe Lin
---
 mm/shmem.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/shmem.c b/mm/shmem.c
index 26c76b13ad23..936ba5595297 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1492,15 +1492,21 @@ static void shmem_pseudo_vma_destroy(struct vm_area_struct *vma)
 static struct page *shmem_swapin(swp_entry_t swap, gfp_t gfp,
 			struct shmem_inode_info *info, pgoff_t index)
 {
+	struct swap_info_struct *si;
 	struct vm_area_struct pvma;
 	struct page *page;
 	struct vm_fault vmf = {
 		.vma = &pvma,
 	};
 
+	/* Prevent swapoff from happening to us. */
+	si = get_swap_device(swap);
+	if (unlikely(!si))
+		return NULL;
 	shmem_pseudo_vma_init(&pvma, info, index);
 	page = swap_cluster_readahead(swap, gfp, &vmf);
 	shmem_pseudo_vma_destroy(&pvma);
+	put_swap_device(si);
 
 	return page;
 }
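
Taken together, patches 2 and 4 apply a single rule: code that
dereferences fields torn down by swapoff (si->swap_file, si->swap_map,
...) while holding no other pin (page lock, page table lock) must
bracket the access with get/put_swap_device(). A minimal sketch of the
rule at a made-up call site (swap_entry_backed() is illustrative only,
not part of this series):

	static bool swap_entry_backed(swp_entry_t entry)
	{
		struct swap_info_struct *si;
		bool backed;

		si = get_swap_device(entry);	/* fails once swapoff has begun */
		if (!si)
			return false;
		/*
		 * Until put_swap_device(), swapoff blocks in
		 * wait_for_completion(&si->comp), so swap_file is stable.
		 */
		backed = si->swap_file != NULL;
		put_swap_device(si);
		return backed;
	}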