From patchwork Wed Aug 30 09:50:10 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13370146
From: Ryan Roberts <ryan.roberts@arm.com>
To: Will Deacon, "Aneesh Kumar K.V", Andrew Morton, Nick Piggin,
 Peter Zijlstra, Christian Borntraeger, Sven Schnelle, Arnd Bergmann,
 "Matthew Wilcox (Oracle)", David Hildenbrand, Yu Zhao,
 "Kirill A. Shutemov",
 Yin Fengwei, Yang Shi, "Huang, Ying", Zi Yan
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/5] mm: Refactor release_pages()
Date: Wed, 30 Aug 2023 10:50:10 +0100
Message-Id: <20230830095011.1228673-5-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230830095011.1228673-1-ryan.roberts@arm.com>
References: <20230830095011.1228673-1-ryan.roberts@arm.com>

In preparation for implementing folios_put_refs() in the next patch,
refactor release_pages() into a set of reusable helper functions. The
primary difference between release_pages() and folios_put_refs() is how
they iterate over the set of folios; the per-folio actions are
identical.

No functional change intended.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/swap.c | 167 +++++++++++++++++++++++++++++++-----------------------
 1 file changed, 97 insertions(+), 70 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index b05cce475202..5d3e35668929 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -945,6 +945,98 @@ void lru_cache_disable(void)
 #endif
 }
 
+struct folios_put_refs_ctx {
+	struct list_head pages_to_free;
+	struct lruvec *lruvec;
+	unsigned long flags;
+	unsigned int lock_batch;
+};
+
+static void __folios_put_refs_init(struct folios_put_refs_ctx *ctx)
+{
+	*ctx = (struct folios_put_refs_ctx) {
+		.pages_to_free = LIST_HEAD_INIT(ctx->pages_to_free),
+		.lruvec = NULL,
+		.flags = 0,
+	};
+}
+
+static void __folios_put_refs_complete(struct folios_put_refs_ctx *ctx)
+{
+	if (ctx->lruvec)
+		unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+
+	mem_cgroup_uncharge_list(&ctx->pages_to_free);
+	free_unref_page_list(&ctx->pages_to_free);
+}
+
+static void __folios_put_refs_do_one(struct folios_put_refs_ctx *ctx,
+				     struct folio *folio, int refs)
+{
+	/*
+	 * Make sure the IRQ-safe lock-holding time does not get
+	 * excessive with a continuous string of pages from the
+	 * same lruvec. The lock is held only if lruvec != NULL.
+	 */
+	if (ctx->lruvec && ++ctx->lock_batch == SWAP_CLUSTER_MAX) {
+		unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+		ctx->lruvec = NULL;
+	}
+
+	if (is_huge_zero_page(&folio->page))
+		return;
+
+	if (folio_is_zone_device(folio)) {
+		if (ctx->lruvec) {
+			unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+			ctx->lruvec = NULL;
+		}
+		if (put_devmap_managed_page_refs(&folio->page, refs))
+			return;
+		if (folio_ref_sub_and_test(folio, refs))
+			free_zone_device_page(&folio->page);
+		return;
+	}
+
+	if (!folio_ref_sub_and_test(folio, refs))
+		return;
+
+	if (folio_test_large(folio)) {
+		if (ctx->lruvec) {
+			unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+			ctx->lruvec = NULL;
+		}
+		__folio_put_large(folio);
+		return;
+	}
+
+	if (folio_test_lru(folio)) {
+		struct lruvec *prev_lruvec = ctx->lruvec;
+
+		ctx->lruvec = folio_lruvec_relock_irqsave(folio, ctx->lruvec,
+							  &ctx->flags);
+		if (prev_lruvec != ctx->lruvec)
+			ctx->lock_batch = 0;
+
+		lruvec_del_folio(ctx->lruvec, folio);
+		__folio_clear_lru_flags(folio);
+	}
+
+	/*
+	 * In rare cases, when truncation or holepunching raced with
+	 * munlock after VM_LOCKED was cleared, Mlocked may still be
+	 * found set here. This does not indicate a problem, unless
+	 * "unevictable_pgs_cleared" appears worryingly large.
+	 */
+	if (unlikely(folio_test_mlocked(folio))) {
+		__folio_clear_mlocked(folio);
+		zone_stat_sub_folio(folio, NR_MLOCK);
+		count_vm_event(UNEVICTABLE_PGCLEARED);
+	}
+
+	list_add(&folio->lru, &ctx->pages_to_free);
+}
+
 /**
  * release_pages - batched put_page()
  * @arg: array of pages to release
@@ -959,10 +1051,9 @@ void release_pages(release_pages_arg arg, int nr)
 {
 	int i;
 	struct page **pages = arg.pages;
-	LIST_HEAD(pages_to_free);
-	struct lruvec *lruvec = NULL;
-	unsigned long flags = 0;
-	unsigned int lock_batch;
+	struct folios_put_refs_ctx ctx;
+
+	__folios_put_refs_init(&ctx);
 
 	for (i = 0; i < nr; i++) {
 		struct folio *folio;
@@ -970,74 +1061,10 @@ void release_pages(release_pages_arg arg, int nr)
 		/* Turn any of the argument types into a folio */
 		folio = page_folio(pages[i]);
 
-		/*
-		 * Make sure the IRQ-safe lock-holding time does not get
-		 * excessive with a continuous string of pages from the
-		 * same lruvec. The lock is held only if lruvec != NULL.
-		 */
-		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
-			unlock_page_lruvec_irqrestore(lruvec, flags);
-			lruvec = NULL;
-		}
-
-		if (is_huge_zero_page(&folio->page))
-			continue;
-
-		if (folio_is_zone_device(folio)) {
-			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-				lruvec = NULL;
-			}
-			if (put_devmap_managed_page(&folio->page))
-				continue;
-			if (folio_put_testzero(folio))
-				free_zone_device_page(&folio->page);
-			continue;
-		}
-
-		if (!folio_put_testzero(folio))
-			continue;
-
-		if (folio_test_large(folio)) {
-			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-				lruvec = NULL;
-			}
-			__folio_put_large(folio);
-			continue;
-		}
-
-		if (folio_test_lru(folio)) {
-			struct lruvec *prev_lruvec = lruvec;
-
-			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
-								&flags);
-			if (prev_lruvec != lruvec)
-				lock_batch = 0;
-
-			lruvec_del_folio(lruvec, folio);
-			__folio_clear_lru_flags(folio);
-		}
-
-		/*
-		 * In rare cases, when truncation or holepunching raced with
-		 * munlock after VM_LOCKED was cleared, Mlocked may still be
-		 * found set here. This does not indicate a problem, unless
-		 * "unevictable_pgs_cleared" appears worryingly large.
-		 */
-		if (unlikely(folio_test_mlocked(folio))) {
-			__folio_clear_mlocked(folio);
-			zone_stat_sub_folio(folio, NR_MLOCK);
-			count_vm_event(UNEVICTABLE_PGCLEARED);
-		}
-
-		list_add(&folio->lru, &pages_to_free);
+		__folios_put_refs_do_one(&ctx, folio, 1);
 	}
 
-	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
-	mem_cgroup_uncharge_list(&pages_to_free);
-	free_unref_page_list(&pages_to_free);
+	__folios_put_refs_complete(&ctx);
 }
 EXPORT_SYMBOL(release_pages);
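
The folios_put_refs() that the commit message refers to is only added by
the next patch in the series, but the split above already makes its shape
visible: it would reuse __folios_put_refs_init(), __folios_put_refs_do_one()
and __folios_put_refs_complete() unchanged and differ only in how it
iterates. A minimal sketch of such a caller, assuming a hypothetical
signature that takes an array of folios plus a parallel array of per-folio
reference counts (the real signature is defined by the next patch):

/*
 * Illustrative sketch only, not part of this patch. The signature and
 * the parallel refs[] array are assumptions for illustration; the
 * actual folios_put_refs() is introduced by the next patch.
 */
void folios_put_refs(struct folio **folios, int *refs, int nr)
{
	int i;
	struct folios_put_refs_ctx ctx;

	__folios_put_refs_init(&ctx);

	/* Same per-folio actions as release_pages(), different iteration. */
	for (i = 0; i < nr; i++)
		__folios_put_refs_do_one(&ctx, folios[i], refs[i]);

	/* Drop any held lruvec lock and free everything that hit zero. */
	__folios_put_refs_complete(&ctx);
}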