From patchwork Tue Feb 27 17:42:35 2024
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 13574209
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH v3 01/18] mm: Make folios_put() the basis of release_pages()
Date: Tue, 27 Feb 2024 17:42:35 +0000
Message-ID: <20240227174254.710559-2-willy@infradead.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240227174254.710559-1-willy@infradead.org>
References: <20240227174254.710559-1-willy@infradead.org>

By making release_pages() call folios_put(), we can get rid of the calls
to compound_head() for the callers that already know they have folios.
We can also get rid of the lock_batch tracking as we know the size of
the batch is limited by folio_batch.  This does reduce the maximum
number of pages for which the lruvec lock is held, from SWAP_CLUSTER_MAX
(32) to PAGEVEC_SIZE (15).  I do not expect this to make a significant
difference, but if it does, we can increase PAGEVEC_SIZE to 31.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm.h |  16 +++++---
 mm/mlock.c         |   3 +-
 mm/swap.c          | 100 ++++++++++++++++++++++++++-------------------
 3 files changed, 70 insertions(+), 49 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8c65171722b6..07d950e63c30 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -36,6 +36,7 @@ struct anon_vma;
 struct anon_vma_chain;
 struct user_struct;
 struct pt_regs;
+struct folio_batch;
 
 extern int sysctl_page_lock_unfairness;
 
@@ -1533,6 +1534,8 @@ static inline void folio_put_refs(struct folio *folio, int refs)
 		__folio_put(folio);
 }
 
+void folios_put_refs(struct folio_batch *folios, unsigned int *refs);
+
 /*
  * union release_pages_arg - an array of pages or folios
  *
@@ -1555,18 +1558,19 @@ void release_pages(release_pages_arg, int nr);
 /**
  * folios_put - Decrement the reference count on an array of folios.
  * @folios: The folios.
- * @nr: How many folios there are.
  *
- * Like folio_put(), but for an array of folios.  This is more efficient
- * than writing the loop yourself as it will optimise the locks which
- * need to be taken if the folios are freed.
+ * Like folio_put(), but for a batch of folios.  This is more efficient
+ * than writing the loop yourself as it will optimise the locks which need
+ * to be taken if the folios are freed.  The folios batch is returned
+ * empty and ready to be reused for another batch; there is no need to
+ * reinitialise it.
  *
  * Context: May be called in process or interrupt context, but not in NMI
  * context.  May be called while holding a spinlock.
  */
-static inline void folios_put(struct folio **folios, unsigned int nr)
+static inline void folios_put(struct folio_batch *folios)
 {
-	release_pages(folios, nr);
+	folios_put_refs(folios, NULL);
 }
 
 static inline void put_page(struct page *page)
diff --git a/mm/mlock.c b/mm/mlock.c
index 086546ac5766..1ed2f2ab37cd 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -206,8 +206,7 @@ static void mlock_folio_batch(struct folio_batch *fbatch)
 	if (lruvec)
 		unlock_page_lruvec_irq(lruvec);
 
-	folios_put(fbatch->folios, folio_batch_count(fbatch));
-	folio_batch_reinit(fbatch);
+	folios_put(fbatch);
 }
 
 void mlock_drain_local(void)
diff --git a/mm/swap.c b/mm/swap.c
index e5380d732c0d..3d51f8c72017 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -89,7 +89,7 @@ static void __page_cache_release(struct folio *folio)
 		__folio_clear_lru_flags(folio);
 		unlock_page_lruvec_irqrestore(lruvec, flags);
 	}
-	/* See comment on folio_test_mlocked in release_pages() */
+	/* See comment on folio_test_mlocked in folios_put() */
 	if (unlikely(folio_test_mlocked(folio))) {
 		long nr_pages = folio_nr_pages(folio);
 
@@ -175,7 +175,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
 	 * while the LRU lock is held.
 	 *
 	 * (That is not true of __page_cache_release(), and not necessarily
-	 * true of release_pages(): but those only clear the mlocked flag after
+	 * true of folios_put(): but those only clear the mlocked flag after
 	 * folio_put_testzero() has excluded any other users of the folio.)
 	 */
 	if (folio_evictable(folio)) {
@@ -221,8 +221,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 	if (lruvec)
 		unlock_page_lruvec_irqrestore(lruvec, flags);
 
-	folios_put(fbatch->folios, folio_batch_count(fbatch));
-	folio_batch_reinit(fbatch);
+	folios_put(fbatch);
 }
 
 static void folio_batch_add_and_move(struct folio_batch *fbatch,
@@ -946,47 +945,30 @@ void lru_cache_disable(void)
 }
 
 /**
- * release_pages - batched put_page()
- * @arg: array of pages to release
- * @nr: number of pages
+ * folios_put_refs - Reduce the reference count on a batch of folios.
+ * @folios: The folios.
+ * @refs: The number of refs to subtract from each folio.
  *
- * Decrement the reference count on all the pages in @arg.  If it
- * fell to zero, remove the page from the LRU and free it.
+ * Like folio_put(), but for a batch of folios.  This is more efficient
+ * than writing the loop yourself as it will optimise the locks which need
+ * to be taken if the folios are freed.  The folios batch is returned
+ * empty and ready to be reused for another batch; there is no need
+ * to reinitialise it.  If @refs is NULL, we subtract one from each
+ * folio refcount.
  *
- * Note that the argument can be an array of pages, encoded pages,
- * or folio pointers. We ignore any encoded bits, and turn any of
- * them into just a folio that gets free'd.
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context.  May be called while holding a spinlock.
  */
-void release_pages(release_pages_arg arg, int nr)
+void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 {
        int i;
-       struct encoded_page **encoded = arg.encoded_pages;
        LIST_HEAD(pages_to_free);
        struct lruvec *lruvec = NULL;
        unsigned long flags = 0;
-       unsigned int lock_batch;
 
-       for (i = 0; i < nr; i++) {
-               unsigned int nr_refs = 1;
-               struct folio *folio;
-
-               /* Turn any of the argument types into a folio */
-               folio = page_folio(encoded_page_ptr(encoded[i]));
-
-               /* Is our next entry actually "nr_pages" -> "nr_refs" ? */
-               if (unlikely(encoded_page_flags(encoded[i]) &
-                               ENCODED_PAGE_BIT_NR_PAGES_NEXT))
-                       nr_refs = encoded_nr_pages(encoded[++i]);
-
-               /*
-                * Make sure the IRQ-safe lock-holding time does not get
-                * excessive with a continuous string of pages from the
-                * same lruvec. The lock is held only if lruvec != NULL.
-                */
-               if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
-                       unlock_page_lruvec_irqrestore(lruvec, flags);
-                       lruvec = NULL;
-               }
+       for (i = 0; i < folios->nr; i++) {
+               struct folio *folio = folios->folios[i];
+               unsigned int nr_refs = refs ? refs[i] : 1;
 
                if (is_huge_zero_page(&folio->page))
                        continue;
@@ -1016,13 +998,8 @@ void release_pages(release_pages_arg arg, int nr)
                }
 
                if (folio_test_lru(folio)) {
-                       struct lruvec *prev_lruvec = lruvec;
-
                        lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
                                                                &flags);
-                       if (prev_lruvec != lruvec)
-                               lock_batch = 0;
-
                        lruvec_del_folio(lruvec, folio);
                        __folio_clear_lru_flags(folio);
                }
@@ -1046,6 +1023,47 @@ void release_pages(release_pages_arg arg, int nr)
 
        mem_cgroup_uncharge_list(&pages_to_free);
        free_unref_page_list(&pages_to_free);
+       folio_batch_reinit(folios);
+}
+EXPORT_SYMBOL(folios_put_refs);
+
+/**
+ * release_pages - batched put_page()
+ * @arg: array of pages to release
+ * @nr: number of pages
+ *
+ * Decrement the reference count on all the pages in @arg.  If it
+ * fell to zero, remove the page from the LRU and free it.
+ *
+ * Note that the argument can be an array of pages, encoded pages,
+ * or folio pointers. We ignore any encoded bits, and turn any of
+ * them into just a folio that gets free'd.
+ */
+void release_pages(release_pages_arg arg, int nr)
+{
+       struct folio_batch fbatch;
+       int refs[PAGEVEC_SIZE];
+       struct encoded_page **encoded = arg.encoded_pages;
+       int i;
+
+       folio_batch_init(&fbatch);
+       for (i = 0; i < nr; i++) {
+               /* Turn any of the argument types into a folio */
+               struct folio *folio = page_folio(encoded_page_ptr(encoded[i]));
+
+               /* Is our next entry actually "nr_pages" -> "nr_refs" ? */
+               refs[fbatch.nr] = 1;
+               if (unlikely(encoded_page_flags(encoded[i]) &
+                               ENCODED_PAGE_BIT_NR_PAGES_NEXT))
+                       refs[fbatch.nr] = encoded_nr_pages(encoded[++i]);
+
+               if (folio_batch_add(&fbatch, folio) > 0)
+                       continue;
+               folios_put_refs(&fbatch, refs);
+       }
+
+       if (fbatch.nr)
+               folios_put_refs(&fbatch, refs);
+}
 EXPORT_SYMBOL(release_pages);
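
For illustration (not part of the patch): a minimal sketch of the caller-side
pattern this change enables, mirroring the mlock_folio_batch() conversion
above.  The helper name is hypothetical; it assumes the usual folio_batch
API from <linux/pagevec.h> and the new folios_put() signature introduced here.

#include <linux/mm.h>
#include <linux/pagevec.h>

/* Hypothetical caller that is finished with a batch of folios. */
static void example_drain_folios(struct folio_batch *fbatch)
{
        if (!folio_batch_count(fbatch))
                return;

        /*
         * Before this patch:
         *        folios_put(fbatch->folios, folio_batch_count(fbatch));
         *        folio_batch_reinit(fbatch);
         * After this patch the whole batch is handed over and comes
         * back empty, ready to be refilled:
         */
        folios_put(fbatch);
}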