From patchwork Wed Jun 8 14:40:29 2022
X-Patchwork-Submitter: Miaohe Lin
X-Patchwork-Id: 12873593
From: Miaohe Lin <linmiaohe@huawei.com>
Subject: [PATCH v2 1/3] mm/swapfile: make security_vm_enough_memory_mm() work as expected
Date: Wed, 8 Jun 2022 22:40:29 +0800
Message-ID: <20220608144031.829-2-linmiaohe@huawei.com>
In-Reply-To: <20220608144031.829-1-linmiaohe@huawei.com>
References: <20220608144031.829-1-linmiaohe@huawei.com>

security_vm_enough_memory_mm() checks whether a process has enough memory
to allocate a new virtual mapping, and total_swap_pages is counted there as
available memory. swapoff uses this check to make sure there is enough
memory to hold the pages being swapped back in. But total_swap_pages still
includes the swap space of the device being swapped off, so
security_vm_enough_memory_mm() will succeed even if there is no memory to
hold the swapped-out pages, because total_swap_pages is always greater than
or equal to p->pages.
To fix this, subtract p->pages from total_swap_pages first, and then check
whether there is enough memory to hold the in-use swap pages.

Signed-off-by: Miaohe Lin
Reviewed-by: David Hildenbrand
---
 mm/swapfile.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index ec4c1b276691..d2bead7b8b70 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2398,6 +2398,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	struct filename *pathname;
 	int err, found = 0;
 	unsigned int old_block_size;
+	unsigned int inuse_pages;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -2428,9 +2429,13 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 		spin_unlock(&swap_lock);
 		goto out_dput;
 	}
-	if (!security_vm_enough_memory_mm(current->mm, p->pages))
-		vm_unacct_memory(p->pages);
+
+	total_swap_pages -= p->pages;
+	inuse_pages = READ_ONCE(p->inuse_pages);
+	if (!security_vm_enough_memory_mm(current->mm, inuse_pages))
+		vm_unacct_memory(inuse_pages);
 	else {
+		total_swap_pages += p->pages;
 		err = -ENOMEM;
 		spin_unlock(&swap_lock);
 		goto out_dput;
@@ -2453,7 +2458,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	}
 	plist_del(&p->list, &swap_active_head);
 	atomic_long_sub(p->pages, &nr_swap_pages);
-	total_swap_pages -= p->pages;
 	p->flags &= ~SWP_WRITEOK;
 	spin_unlock(&p->lock);
 	spin_unlock(&swap_lock);
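
As a stand-alone illustration (not part of the patch), the user-space sketch
below models the accounting argument from the changelog. The numbers and the
vm_enough_memory() stand-in are invented for the example (the real check
ultimately lands in __vm_enough_memory() behind security_vm_enough_memory_mm());
it only shows why the old ordering could never fail while the new one can.

/* Build with: cc -std=c11 -Wall swapoff_check.c */
#include <stdbool.h>
#include <stdio.h>

static long total_swap_pages = 1000;	/* still includes the device being swapped off */
static long free_ram_pages = 100;

/* Crude stand-in for the overcommit check: succeed if the request fits
 * into free RAM plus the swap space the accounting still believes in. */
static bool vm_enough_memory(long req_pages)
{
	return req_pages <= free_ram_pages + total_swap_pages;
}

int main(void)
{
	long p_pages = 1000;	/* size of the device being swapped off */
	long inuse_pages = 900;	/* pages on it that actually need new homes */

	/* Old ordering: total_swap_pages still contains p_pages, so the
	 * check can never fail -- p_pages <= total_swap_pages by definition. */
	printf("old check: %s\n", vm_enough_memory(p_pages) ? "pass" : "fail");

	/* New ordering: subtract p_pages first, then ask only about the
	 * pages that are really in use on this device. */
	total_swap_pages -= p_pages;
	bool ok = vm_enough_memory(inuse_pages);
	printf("new check: %s\n", ok ? "pass" : "fail");
	if (!ok)
		total_swap_pages += p_pages;	/* roll back; swapoff fails with -ENOMEM */
	return 0;
}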
From patchwork Wed Jun 8 14:40:30 2022
X-Patchwork-Submitter: Miaohe Lin
X-Patchwork-Id: 12873596
From: Miaohe Lin <linmiaohe@huawei.com>
Subject: [PATCH v2 2/3] mm/swapfile: fix possible data races of inuse_pages
Date: Wed, 8 Jun 2022 22:40:30 +0800
Message-ID: <20220608144031.829-3-linmiaohe@huawei.com>
In-Reply-To: <20220608144031.829-1-linmiaohe@huawei.com>
References: <20220608144031.829-1-linmiaohe@huawei.com>

si->inuse_pages can still be accessed concurrently: the plain reads outside
the si->lock critical section, i.e. in swap_show() and si_swapinfo(), result
in data races. These races should be benign because the values are only used
for showing swap info, but annotate the reads with READ_ONCE() to make the
intentional data races explicit.

Signed-off-by: Miaohe Lin
Reviewed-by: David Hildenbrand
---
 mm/swapfile.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index d2bead7b8b70..3fa26f6971e9 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2646,7 +2646,7 @@ static int swap_show(struct seq_file *swap, void *v)
 	}
 
 	bytes = si->pages << (PAGE_SHIFT - 10);
-	inuse = si->inuse_pages << (PAGE_SHIFT - 10);
+	inuse = READ_ONCE(si->inuse_pages) << (PAGE_SHIFT - 10);
 
 	file = si->swap_file;
 	len = seq_file_path(swap, file, " \t\n\\");
@@ -3265,7 +3265,7 @@ void si_swapinfo(struct sysinfo *val)
 		struct swap_info_struct *si = swap_info[type];
 
 		if ((si->flags & SWP_USED) && !(si->flags & SWP_WRITEOK))
-			nr_to_be_unused += si->inuse_pages;
+			nr_to_be_unused += READ_ONCE(si->inuse_pages);
 	}
 	val->freeswap = atomic_long_read(&nr_swap_pages) + nr_to_be_unused;
 	val->totalswap = total_swap_pages + nr_to_be_unused;
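
For readers unfamiliar with the annotation, the stand-alone sketch below
approximates in user space what READ_ONCE() buys a lockless reader such as
swap_show() or si_swapinfo(): a single, untorn load that also marks the race
as intentional for tools like KCSAN. The READ_ONCE() here is a simplified
stand-in for the kernel macro, and the struct and names are invented for the
example.

/* Build with: cc -std=gnu11 -Wall read_once_sketch.c */
#include <stdio.h>

/* Simplified user-space stand-in for the kernel's READ_ONCE(): the volatile
 * cast forces exactly one load and keeps the compiler from tearing or
 * re-reading the value. The real kernel macro is more elaborate. */
#define READ_ONCE(x)	(*(const volatile typeof(x) *)&(x))

struct swap_info {
	unsigned int inuse_pages;	/* writers update this under a lock */
};

/* A reader that only reports the value can tolerate a slightly stale
 * number, but the load should still be annotated so the data race is
 * clearly intentional. */
static unsigned int report_inuse(struct swap_info *si)
{
	return READ_ONCE(si->inuse_pages);
}

int main(void)
{
	struct swap_info si = { .inuse_pages = 42 };

	printf("inuse: %u\n", report_inuse(&si));
	return 0;
}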
From patchwork Wed Jun 8 14:40:31 2022
X-Patchwork-Submitter: Miaohe Lin
X-Patchwork-Id: 12873594
From: Miaohe Lin <linmiaohe@huawei.com>
Subject: [PATCH v2 3/3] mm/swap: remove swap_cache_info statistics
Date: Wed, 8 Jun 2022 22:40:31 +0800
Message-ID: <20220608144031.829-4-linmiaohe@huawei.com>
In-Reply-To: <20220608144031.829-1-linmiaohe@huawei.com>
References: <20220608144031.829-1-linmiaohe@huawei.com>

The swap_cache_info counters are not statistics that can easily be used to
tune system performance because they are not easily accessible. They also
can't provide really useful info when an OOM occurs. Removing them also
helps avoid unneeded contention on the global swap_cache_info cacheline.

Suggested-by: David Hildenbrand
Signed-off-by: Miaohe Lin
Reviewed-by: David Hildenbrand
Acked-by: "Huang, Ying"
Reviewed-by: Muchun Song
---
 mm/swap_state.c | 17 -----------------
 1 file changed, 17 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0a2021fc55ad..41c6a6053d5c 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -59,24 +59,11 @@ static bool enable_vma_readahead __read_mostly = true;
 #define GET_SWAP_RA_VAL(vma)	\
 	(atomic_long_read(&(vma)->swap_readahead_info) ? : 4)
 
-#define INC_CACHE_INFO(x)	data_race(swap_cache_info.x++)
-#define ADD_CACHE_INFO(x, nr)	data_race(swap_cache_info.x += (nr))
-
-static struct {
-	unsigned long add_total;
-	unsigned long del_total;
-	unsigned long find_success;
-	unsigned long find_total;
-} swap_cache_info;
-
 static atomic_t swapin_readahead_hits = ATOMIC_INIT(4);
 
 void show_swap_cache_info(void)
 {
 	printk("%lu pages in swap cache\n", total_swapcache_pages());
-	printk("Swap cache stats: add %lu, delete %lu, find %lu/%lu\n",
-		swap_cache_info.add_total, swap_cache_info.del_total,
-		swap_cache_info.find_success, swap_cache_info.find_total);
 	printk("Free swap = %ldkB\n",
 		get_nr_swap_pages() << (PAGE_SHIFT - 10));
 	printk("Total swap = %lukB\n", total_swap_pages << (PAGE_SHIFT - 10));
@@ -133,7 +120,6 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry,
 		address_space->nrpages += nr;
 		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
 		__mod_lruvec_page_state(page, NR_SWAPCACHE, nr);
-		ADD_CACHE_INFO(add_total, nr);
 unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp));
@@ -172,7 +158,6 @@ void __delete_from_swap_cache(struct page *page,
 	address_space->nrpages -= nr;
 	__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, -nr);
 	__mod_lruvec_page_state(page, NR_SWAPCACHE, -nr);
-	ADD_CACHE_INFO(del_total, nr);
 }
 
 /**
@@ -348,12 +333,10 @@ struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
 	page = find_get_page(swap_address_space(entry), swp_offset(entry));
 	put_swap_device(si);
 
-	INC_CACHE_INFO(find_total);
 	if (page) {
 		bool vma_ra = swap_use_vma_readahead();
 		bool readahead;
 
-		INC_CACHE_INFO(find_success);
 		/*
 		 * At the moment, we don't support PG_readahead for anon THP
 		 * so let's bail out rather than confusing the readahead stat.
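
For reference, after this last patch the swap-cache summary in a
show_mem()/OOM dump shrinks to the lines left in show_swap_cache_info(): the
"Swap cache stats: add/delete/find" line disappears along with the counters
that fed it. The small stand-alone mock below just replays the remaining
format strings with made-up numbers; it is not kernel code.

/* Build with: cc -std=c11 -Wall swapcache_info_mock.c */
#include <stdio.h>

int main(void)
{
	/* Illustrative values only; the format strings are the ones that
	 * remain in show_swap_cache_info() after the removal. */
	unsigned long swapcache_pages = 1234;
	long free_swap_kb = 812340;
	unsigned long total_swap_kb = 2097148;

	printf("%lu pages in swap cache\n", swapcache_pages);
	printf("Free swap = %ldkB\n", free_swap_kb);
	printf("Total swap = %lukB\n", total_swap_kb);
	return 0;
}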