From patchwork Fri Mar 31 16:08:24 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13196166
From: David Howells <dhowells@redhat.com>
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton,
    Christian Brauner, Chuck Lever III, Linus Torvalds,
    netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 05/55] mm: Make the page_frag_cache allocator use
 multipage folios
Date: Fri, 31 Mar 2023 17:08:24 +0100
Message-Id: <20230331160914.1608208-6-dhowells@redhat.com>
In-Reply-To: <20230331160914.1608208-1-dhowells@redhat.com>
References: <20230331160914.1608208-1-dhowells@redhat.com>
MIME-Version: 1.0
Change the page_frag_cache allocator to use multipage folios rather than
groups of pages.  This reduces page_frag_free to just a folio_put() or
put_page().

Signed-off-by: David Howells <dhowells@redhat.com>
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
---
 include/linux/mm_types.h | 13 ++----
 mm/page_frag_alloc.c     | 88 +++++++++++++++++++---------------------
 2 files changed, 45 insertions(+), 56 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0722859c3647..49a70b3f44a9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -420,18 +420,13 @@ static inline void *folio_get_private(struct folio *folio)
 }
 
 struct page_frag_cache {
-	void * va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	__u16 offset;
-	__u16 size;
-#else
-	__u32 offset;
-#endif
+	struct folio	*folio;
+	unsigned int	offset;
 	/* we maintain a pagecount bias, so that we dont dirty cache line
 	 * containing page->_refcount every time we allocate a fragment.
 	 */
-	unsigned int		pagecnt_bias;
-	bool pfmemalloc;
+	unsigned int	pagecnt_bias;
+	bool		pfmemalloc;
 };
 
 typedef unsigned long vm_flags_t;
diff --git a/mm/page_frag_alloc.c b/mm/page_frag_alloc.c
index bee95824ef8f..c3792b68ce32 100644
--- a/mm/page_frag_alloc.c
+++ b/mm/page_frag_alloc.c
@@ -16,33 +16,34 @@
 #include
 #include
 
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
-					     gfp_t gfp_mask)
+/*
+ * Allocate a new folio for the frag cache.
+ */
+static struct folio *page_frag_cache_refill(struct page_frag_cache *nc,
+					    gfp_t gfp_mask)
 {
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	gfp_t gfp = gfp_mask;
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
-		    __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+	gfp_mask |= __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
+	folio = folio_alloc(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER);
 #endif
-	if (unlikely(!page))
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
-
-	nc->va = page ? page_address(page) : NULL;
+	if (unlikely(!folio))
+		folio = folio_alloc(gfp, 0);
 
-	return page;
+	if (folio)
+		nc->folio = folio;
+	return folio;
 }
 
 void __page_frag_cache_drain(struct page *page, unsigned int count)
 {
-	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
+	struct folio *folio = page_folio(page);
 
-	if (page_ref_sub_and_test(page, count - 1))
-		__free_pages(page, compound_order(page));
+	VM_BUG_ON_FOLIO(folio_ref_count(folio) == 0, folio);
+
+	folio_put_refs(folio, count);
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
@@ -50,54 +51,47 @@ void *page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
 			    gfp_t gfp_mask, unsigned int align_mask)
 {
-	unsigned int size = PAGE_SIZE;
-	struct page *page;
-	int offset;
+	struct folio *folio = nc->folio;
+	size_t offset;
 
-	if (unlikely(!nc->va)) {
+	if (unlikely(!folio)) {
 refill:
-		page = __page_frag_cache_refill(nc, gfp_mask);
-		if (!page)
+		folio = page_frag_cache_refill(nc, gfp_mask);
+		if (!folio)
 			return NULL;
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
-		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+		folio_ref_add(folio, PAGE_FRAG_CACHE_MAX_SIZE);
 
 		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
+		nc->pfmemalloc = folio_is_pfmemalloc(folio);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
+		nc->offset = folio_size(folio);
 	}
 
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
-		page = virt_to_page(nc->va);
-
-		if (page_ref_count(page) != nc->pagecnt_bias)
+	offset = nc->offset;
+	if (unlikely(fragsz > offset)) {
+		/* Reuse the folio if everyone we gave it to has finished
+		 * with it.
+		 */
+		if (!folio_ref_sub_and_test(folio, nc->pagecnt_bias)) {
+			nc->folio = NULL;
 			goto refill;
+		}
+
 		if (unlikely(nc->pfmemalloc)) {
-			page_ref_sub(page, nc->pagecnt_bias - 1);
-			__free_pages(page, compound_order(page));
+			__folio_put(folio);
+			nc->folio = NULL;
 			goto refill;
 		}
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		folio_set_count(folio, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
+		offset = folio_size(folio);
+		if (unlikely(fragsz > offset)) {
 			/*
 			 * The caller is trying to allocate a fragment
 			 * with fragsz > PAGE_SIZE but the cache isn't big
@@ -107,15 +101,17 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 			 * it could make memory pressure worse
 			 * so we simply return NULL here.
 			 */
+			nc->offset = offset;
 			return NULL;
 		}
 	}
 
 	nc->pagecnt_bias--;
+	offset -= fragsz;
 	offset &= align_mask;
 	nc->offset = offset;
 
-	return nc->va + offset;
+	return folio_address(folio) + offset;
 }
 EXPORT_SYMBOL(page_frag_alloc_align);
 
@@ -124,8 +120,6 @@ EXPORT_SYMBOL(page_frag_alloc_align);
  */
 void page_frag_free(void *addr)
 {
-	struct page *page = virt_to_head_page(addr);
-
-	__free_pages(page, compound_order(page));
+	folio_put(virt_to_folio(addr));
}
 EXPORT_SYMBOL(page_frag_free);
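For readers unfamiliar with the pagecnt_bias trick the patch carries over to
folios, here is a minimal userspace sketch of the idea, not part of the patch.
All names (frag_cache, frag_alloc, frag_free, BIAS) are invented for the
illustration; BIAS stands in for PAGE_FRAG_CACHE_MAX_SIZE, alignment handling
and pfmemalloc handling are omitted.  The point it demonstrates: the cache
pre-charges a large refcount on the backing block once, then the allocation
hot path only decrements a cache-local counter, so it never dirties the shared
refcount cacheline; the block is reusable when the shared count, minus the
cache's remaining charge, reaches zero, i.e. every fragment handed out has
been freed.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define BIAS 128	/* stands in for PAGE_FRAG_CACHE_MAX_SIZE */

struct frag_cache {
	char		*block;		/* stands in for folio_address() */
	size_t		size;		/* stands in for folio_size() */
	size_t		offset;
	unsigned int	refcount;	/* stands in for the folio refcount */
	unsigned int	pagecnt_bias;
};

static void cache_refill(struct frag_cache *nc, size_t size)
{
	nc->block = malloc(size);
	nc->size = size;
	/* Pre-charge the shared count once, as folio_ref_add() does, so the
	 * hot path below never has to touch it. */
	nc->refcount = BIAS + 1;
	nc->pagecnt_bias = BIAS + 1;
	nc->offset = size;		/* fragments are carved downward */
}

static void *frag_alloc(struct frag_cache *nc, size_t fragsz)
{
	if (fragsz > nc->offset) {
		/* Exhausted: drop our remaining pre-charge.  A result of
		 * zero means every fragment was freed, so the block can be
		 * reused from the top. */
		nc->refcount -= nc->pagecnt_bias;
		if (nc->refcount != 0) {
			/* Fragments still live.  The kernel code would
			 * switch to a fresh folio here; this model just
			 * restores the charge and fails. */
			nc->refcount += nc->pagecnt_bias;
			return NULL;
		}
		nc->refcount = BIAS + 1;
		nc->pagecnt_bias = BIAS + 1;
		nc->offset = nc->size;
	}
	nc->pagecnt_bias--;	/* hot path: cache-local counter only */
	nc->offset -= fragsz;
	return nc->block + nc->offset;
}

/* Models page_frag_free(): one shared-count decrement per fragment. */
static void frag_free(struct frag_cache *nc)
{
	nc->refcount--;
}
```

With a 64-byte block, four 16-byte fragments exhaust it; the block is only
handed out again once all four have been freed, and the shared refcount is
untouched by the intervening allocations.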