From patchwork Tue Feb 27 17:42:37 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13574207
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH v3 03/18] mm: Add free_unref_folios()
Date: Tue, 27 Feb 2024 17:42:37 +0000
Message-ID: <20240227174254.710559-4-willy@infradead.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240227174254.710559-1-willy@infradead.org>
References: <20240227174254.710559-1-willy@infradead.org>
MIME-Version: 1.0

Iterate over a folio_batch rather than a linked list. This is easier
for the CPU to prefetch and has a batch count naturally built in so we
don't need to track it. Again, this lowers the maximum lock hold time
from 32 folios to 15, but I do not expect this to have a significant
effect.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/internal.h   |  5 +++--
 mm/page_alloc.c | 59 ++++++++++++++++++++++++++++++-------------------
 2 files changed, 39 insertions(+), 25 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index b680a749cc37..3ca7e9d45b33 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -452,8 +452,9 @@ extern bool free_pages_prepare(struct page *page, unsigned int order);
 
 extern int user_min_free_kbytes;
 
-extern void free_unref_page(struct page *page, unsigned int order);
-extern void free_unref_page_list(struct list_head *list);
+void free_unref_page(struct page *page, unsigned int order);
+void free_unref_folios(struct folio_batch *fbatch);
+void free_unref_page_list(struct list_head *list);
 
 extern void zone_pcp_reset(struct zone *zone);
 extern void zone_pcp_disable(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 24798531fe98..ff8759a69221 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include <linux/pagevec.h>
 #include
 #include
 #include
@@ -2551,57 +2552,51 @@ void free_unref_page(struct page *page, unsigned int order)
 }
 
 /*
- * Free a list of 0-order pages
+ * Free a batch of 0-order pages
 */
-void free_unref_page_list(struct list_head *list)
+void free_unref_folios(struct folio_batch *folios)
 {
 	unsigned long __maybe_unused UP_flags;
-	struct folio *folio, *next;
 	struct per_cpu_pages *pcp = NULL;
 	struct zone *locked_zone = NULL;
-	int batch_count = 0;
-	int migratetype;
+	int i, j, migratetype;
 
-	/* Prepare pages for freeing */
-	list_for_each_entry_safe(folio, next, list, lru) {
+	/* Prepare folios for freeing */
+	for (i = 0, j = 0; i < folios->nr; i++) {
+		struct folio *folio = folios->folios[i];
 		unsigned long pfn = folio_pfn(folio);
-		if (!free_unref_page_prepare(&folio->page, pfn, 0)) {
-			list_del(&folio->lru);
+		if (!free_unref_page_prepare(&folio->page, pfn, 0))
 			continue;
-		}
 
 		/*
-		 * Free isolated pages directly to the allocator, see
+		 * Free isolated folios directly to the allocator, see
 		 * comment in free_unref_page.
 		 */
 		migratetype = get_pcppage_migratetype(&folio->page);
 		if (unlikely(is_migrate_isolate(migratetype))) {
-			list_del(&folio->lru);
 			free_one_page(folio_zone(folio), &folio->page, pfn,
 					0, migratetype, FPI_NONE);
 			continue;
 		}
+		if (j != i)
+			folios->folios[j] = folio;
+		j++;
 	}
+	folios->nr = j;
 
-	list_for_each_entry_safe(folio, next, list, lru) {
+	for (i = 0; i < folios->nr; i++) {
+		struct folio *folio = folios->folios[i];
 		struct zone *zone = folio_zone(folio);
 
-		list_del(&folio->lru);
 		migratetype = get_pcppage_migratetype(&folio->page);
 
-		/*
-		 * Either different zone requiring a different pcp lock or
-		 * excessive lock hold times when freeing a large list of
-		 * folios.
-		 */
-		if (zone != locked_zone || batch_count == SWAP_CLUSTER_MAX) {
+		/* Different zone requires a different pcp lock */
+		if (zone != locked_zone) {
 			if (pcp) {
 				pcp_spin_unlock(pcp);
 				pcp_trylock_finish(UP_flags);
 			}
 
-			batch_count = 0;
-
 			/*
 			 * trylock is necessary as folios may be getting freed
 			 * from IRQ or SoftIRQ context after an IO completion.
@@ -2628,13 +2623,31 @@ void free_unref_page_list(struct list_head *list)
 		trace_mm_page_free_batched(&folio->page);
 		free_unref_page_commit(zone, pcp, &folio->page, migratetype,
 				0);
-		batch_count++;
 	}
 
 	if (pcp) {
 		pcp_spin_unlock(pcp);
 		pcp_trylock_finish(UP_flags);
 	}
+	folio_batch_reinit(folios);
+}
+
+void free_unref_page_list(struct list_head *list)
+{
+	struct folio_batch fbatch;
+
+	folio_batch_init(&fbatch);
+	while (!list_empty(list)) {
+		struct folio *folio = list_first_entry(list, struct folio, lru);
+
+		list_del(&folio->lru);
+		if (folio_batch_add(&fbatch, folio) > 0)
+			continue;
+		free_unref_folios(&fbatch);
+	}
+
+	if (fbatch.nr)
+		free_unref_folios(&fbatch);
+}
 
 /*