From patchwork Tue Oct 1 07:58:45 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895538
From: Yunsheng Lin
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Yunsheng Lin,
    David Howells, Alexander Duyck, Andrew Morton, Alexander Duyck,
    Eric Dumazet, Shuah Khan, linux-mm@kvack.org, linux-kselftest@vger.kernel.org
Subject: [PATCH net-next v19 02/14] mm: move the page fragment allocator from page_alloc into its own file
Date: Tue, 1 Oct 2024 15:58:45 +0800
Message-Id: <20241001075858.48936-3-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>

Inspired by [1], move the page fragment allocator from page_alloc into its
own C file and header file, as we are about to make more changes to it so
that it can replace another page_frag implementation in sock.c.

As this patchset is going to replace 'struct page_frag' with
'struct page_frag_cache' in sched.h, including page_frag_cache.h from
sched.h causes a compiler error due to the interdependence between
mm_types.h and mm.h for asm-offsets.c, see [2]. So avoid the compiler
error by moving 'struct page_frag_cache' to mm_types_task.h, as suggested
by Alexander, see [3].

1. https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/
2. https://lore.kernel.org/all/15623dac-9358-4597-b3ee-3694a5956920@gmail.com/
3. https://lore.kernel.org/all/CAKgT0UdH1yD=LSCXFJ=YM_aiA4OomD-2wXykO42bizaWMt_HOA@mail.gmail.com/

CC: David Howells
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Acked-by: Andrew Morton
Reviewed-by: Alexander Duyck
---
 include/linux/gfp.h                           |  22 ---
 include/linux/mm_types.h                      |  18 ---
 include/linux/mm_types_task.h                 |  18 +++
 include/linux/page_frag_cache.h               |  31 ++++
 include/linux/skbuff.h                        |   1 +
 mm/Makefile                                   |   1 +
 mm/page_alloc.c                               | 136 ----------------
 mm/page_frag_cache.c                          | 145 ++++++++++++++++++
 .../selftests/mm/page_frag/page_frag_test.c   |   2 +-
 9 files changed, 197 insertions(+), 177 deletions(-)
 create mode 100644 include/linux/page_frag_cache.h
 create mode 100644 mm/page_frag_cache.c

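For reviewers who have not used this allocator before, here is a minimal
usage sketch of the API being moved (illustrative only, not part of the diff
below: the frag_cache/alloc_rx_buf/free_rx_buf/rx_teardown names and the
GFP_ATOMIC/SMP_CACHE_BYTES choices are made up for the example, while the
page_frag_* helpers are the ones declared in the new page_frag_cache.h):

	#include <linux/page_frag_cache.h>

	static struct page_frag_cache frag_cache;	/* owned by the caller */

	static void *alloc_rx_buf(unsigned int len)
	{
		/* carve a 'len'-byte fragment, aligned to SMP_CACHE_BYTES */
		return page_frag_alloc_align(&frag_cache, len, GFP_ATOMIC,
					     SMP_CACHE_BYTES);
	}

	static void free_rx_buf(void *buf)
	{
		/* drop one reference on the backing compound/order-0 page */
		page_frag_free(buf);
	}

	static void rx_teardown(void)
	{
		/* release the references still held by the cache itself */
		page_frag_cache_drain(&frag_cache);
	}
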
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index a951de920e20..a0a6d25f883f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -371,28 +371,6 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
 
-struct page_frag_cache;
-void page_frag_cache_drain(struct page_frag_cache *nc);
-extern void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
-			      gfp_t gfp_mask, unsigned int align_mask);
-
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
-					   unsigned int fragsz, gfp_t gfp_mask,
-					   unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
-}
-
-static inline void *page_frag_alloc(struct page_frag_cache *nc,
-				    unsigned int fragsz, gfp_t gfp_mask)
-{
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
-}
-
-extern void page_frag_free(void *addr);
-
 #define __free_page(page) __free_pages((page), 0)
 #define free_page(addr) free_pages((addr), 0)
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e3bdf8e38bc..92314ef2d978 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -521,9 +521,6 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
  */
 #define STRUCT_PAGE_MAX_SHIFT	(order_base_2(sizeof(struct page)))
 
-#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
-#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
-
 /*
  * page_private can be used on tail pages. However, PagePrivate is only
  * checked by the VM on the head page. So page_private on the tail pages
@@ -542,21 +539,6 @@ static inline void *folio_get_private(struct folio *folio)
 	return folio->private;
 }
 
-struct page_frag_cache {
-	void * va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	__u16 offset;
-	__u16 size;
-#else
-	__u32 offset;
-#endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
-	bool pfmemalloc;
-};
-
 typedef unsigned long vm_flags_t;
 
 /*
diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index bff5706b76e1..0ac6daebdd5c 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -8,6 +8,7 @@
  * (These are defined separately to decouple sched.h from mm_types.h as much as possible.)
  */
 
+#include <linux/align.h>
 #include <linux/types.h>
 
 #include <asm/page.h>
@@ -43,6 +44,23 @@ struct page_frag {
 #endif
 };
 
+#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
+#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
+struct page_frag_cache {
+	void *va;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	__u16 offset;
+	__u16 size;
+#else
+	__u32 offset;
+#endif
+	/* we maintain a pagecount bias, so that we dont dirty cache line
+	 * containing page->_refcount every time we allocate a fragment.
+	 */
+	unsigned int pagecnt_bias;
+	bool pfmemalloc;
+};
+
 /* Track pages that require TLB flushes */
 struct tlbflush_unmap_batch {
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
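For reference, the limits and field widths above work out as follows on a
common 4KB-page configuration (a quick sketch assuming PAGE_SIZE == 4096,
spelled out only for illustration):

	/* ~PAGE_MASK == PAGE_SIZE - 1 == 4095, so:
	 *
	 *   PAGE_FRAG_CACHE_MAX_SIZE  == ((32768 + 4095) & ~4095) == 32768
	 *   PAGE_FRAG_CACHE_MAX_ORDER == get_order(32768)         == 3 (an 8-page block)
	 *
	 * and since PAGE_SIZE (4096) < PAGE_FRAG_CACHE_MAX_SIZE (32768), 'offset'
	 * and 'size' use the 16-bit variants, keeping struct page_frag_cache compact.
	 */
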
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
new file mode 100644
index 000000000000..67ac8626ed9b
--- /dev/null
+++ b/include/linux/page_frag_cache.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_PAGE_FRAG_CACHE_H
+#define _LINUX_PAGE_FRAG_CACHE_H
+
+#include <linux/log2.h>
+#include <linux/mm_types_task.h>
+#include <linux/types.h>
+
+void page_frag_cache_drain(struct page_frag_cache *nc);
+void __page_frag_cache_drain(struct page *page, unsigned int count);
+void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
+			      gfp_t gfp_mask, unsigned int align_mask);
+
+static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
+					   unsigned int fragsz, gfp_t gfp_mask,
+					   unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc(struct page_frag_cache *nc,
+				    unsigned int fragsz, gfp_t gfp_mask)
+{
+	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+}
+
+void page_frag_free(void *addr);
+
+#endif
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 39f1d16f3628..560e2b49f98b 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include <linux/page_frag_cache.h>
 #include
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include
diff --git a/mm/Makefile b/mm/Makefile
index d5639b036166..dba52bb0da8a 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,6 +65,7 @@ page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o
 memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 
 obj-y += page-alloc.o
+obj-y += page_frag_cache.o
 obj-y += init-mm.o
 obj-y += memblock.o
 obj-y += $(memory-hotplug-y)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8afab64814dc..6ca2abce857b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4836,142 +4836,6 @@ void free_pages(unsigned long addr, unsigned int order)
 EXPORT_SYMBOL(free_pages);
 
-/*
- * Page Fragment:
- *  An arbitrary-length arbitrary-offset area of memory which resides
- *  within a 0 or higher order page. Multiple fragments within that page
- *  are individually refcounted, in the page's reference counter.
- *
- * The page_frag functions below provide a simple allocation framework for
- * page fragments. This is used by the network stack and network device
- * drivers to provide a backing region of memory for use as either an
- * sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
- */
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
-					     gfp_t gfp_mask)
-{
-	struct page *page = NULL;
-	gfp_t gfp = gfp_mask;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
-		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
-#endif
-	if (unlikely(!page))
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
-
-	nc->va = page ? page_address(page) : NULL;
-
-	return page;
-}
-
-void page_frag_cache_drain(struct page_frag_cache *nc)
-{
-	if (!nc->va)
-		return;
-
-	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
-	nc->va = NULL;
-}
-EXPORT_SYMBOL(page_frag_cache_drain);
-
-void __page_frag_cache_drain(struct page *page, unsigned int count)
-{
-	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
-
-	if (page_ref_sub_and_test(page, count))
-		free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(__page_frag_cache_drain);
-
-void *__page_frag_alloc_align(struct page_frag_cache *nc,
-			      unsigned int fragsz, gfp_t gfp_mask,
-			      unsigned int align_mask)
-{
-	unsigned int size = PAGE_SIZE;
-	struct page *page;
-	int offset;
-
-	if (unlikely(!nc->va)) {
-refill:
-		page = __page_frag_cache_refill(nc, gfp_mask);
-		if (!page)
-			return NULL;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
-		/* Even if we own the page, we do not use atomic_set().
-		 * This would break get_page_unless_zero() users.
-		 */
-		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
-	}
-
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
-		page = virt_to_page(nc->va);
-
-		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
-			goto refill;
-
-		if (unlikely(nc->pfmemalloc)) {
-			free_unref_page(page, compound_order(page));
-			goto refill;
-		}
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
-			/*
-			 * The caller is trying to allocate a fragment
-			 * with fragsz > PAGE_SIZE but the cache isn't big
-			 * enough to satisfy the request, this may
-			 * happen in low memory conditions.
-			 * We don't release the cache page because
-			 * it could make memory pressure worse
-			 * so we simply return NULL here.
-			 */
-			return NULL;
-		}
-	}
-
-	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
-
-	return nc->va + offset;
-}
-EXPORT_SYMBOL(__page_frag_alloc_align);
-
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
- */
-void page_frag_free(void *addr)
-{
-	struct page *page = virt_to_head_page(addr);
-
-	if (unlikely(put_page_testzero(page)))
-		free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(page_frag_free);
-
 static void *make_alloc_exact(unsigned long addr, unsigned int order,
 			      size_t size)
 {
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
new file mode 100644
index 000000000000..609a485cd02a
--- /dev/null
+++ b/mm/page_frag_cache.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Page fragment allocator
+ *
+ * Page Fragment:
+ *  An arbitrary-length arbitrary-offset area of memory which resides within a
+ *  0 or higher order page. Multiple fragments within that page are
+ *  individually refcounted, in the page's reference counter.
+ *
+ * The page_frag functions provide a simple allocation framework for page
+ * fragments. This is used by the network stack and network device drivers to
+ * provide a backing region of memory for use as either an sk_buff->head, or to
+ * be used in the "frags" portion of skb_shared_info.
+ */
+
+#include <linux/export.h>
+#include <linux/gfp_types.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/page_frag_cache.h>
+#include "internal.h"
+
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+					     gfp_t gfp_mask)
+{
+	struct page *page = NULL;
+	gfp_t gfp = gfp_mask;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
+		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
+	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
+				PAGE_FRAG_CACHE_MAX_ORDER);
+	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+#endif
+	if (unlikely(!page))
+		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+
+	nc->va = page ? page_address(page) : NULL;
+
+	return page;
+}
+
+void page_frag_cache_drain(struct page_frag_cache *nc)
+{
+	if (!nc->va)
+		return;
+
+	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
+	nc->va = NULL;
+}
+EXPORT_SYMBOL(page_frag_cache_drain);
+
+void __page_frag_cache_drain(struct page *page, unsigned int count)
+{
+	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
+
+	if (page_ref_sub_and_test(page, count))
+		free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(__page_frag_cache_drain);
+
+void *__page_frag_alloc_align(struct page_frag_cache *nc,
+			      unsigned int fragsz, gfp_t gfp_mask,
+			      unsigned int align_mask)
+{
+	unsigned int size = PAGE_SIZE;
+	struct page *page;
+	int offset;
+
+	if (unlikely(!nc->va)) {
+refill:
+		page = __page_frag_cache_refill(nc, gfp_mask);
+		if (!page)
+			return NULL;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+		/* if size can vary use size else just use PAGE_SIZE */
+		size = nc->size;
+#endif
+		/* Even if we own the page, we do not use atomic_set().
+		 * This would break get_page_unless_zero() users.
+		 */
+		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+
+		/* reset page count bias and offset to start of new frag */
+		nc->pfmemalloc = page_is_pfmemalloc(page);
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		nc->offset = size;
+	}
+
+	offset = nc->offset - fragsz;
+	if (unlikely(offset < 0)) {
+		page = virt_to_page(nc->va);
+
+		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
+			goto refill;
+
+		if (unlikely(nc->pfmemalloc)) {
+			free_unref_page(page, compound_order(page));
+			goto refill;
+		}
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+		/* if size can vary use size else just use PAGE_SIZE */
+		size = nc->size;
+#endif
+		/* OK, page count is 0, we can safely set it */
+		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+
+		/* reset page count bias and offset to start of new frag */
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		offset = size - fragsz;
+		if (unlikely(offset < 0)) {
+			/*
+			 * The caller is trying to allocate a fragment
+			 * with fragsz > PAGE_SIZE but the cache isn't big
+			 * enough to satisfy the request, this may
+			 * happen in low memory conditions.
+			 * We don't release the cache page because
+			 * it could make memory pressure worse
+			 * so we simply return NULL here.
+			 */
+			return NULL;
+		}
+	}
+
+	nc->pagecnt_bias--;
+	offset &= align_mask;
+	nc->offset = offset;
+
+	return nc->va + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_align);
+
+/*
+ * Frees a page fragment allocated out of either a compound or order 0 page.
+ */
+void page_frag_free(void *addr)
+{
+	struct page *page = virt_to_head_page(addr);
+
+	if (unlikely(put_page_testzero(page)))
+		free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(page_frag_free);
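One detail worth calling out in __page_frag_alloc_align() above is the
align_mask convention that the move preserves: page_frag_alloc_align()
passes -align (with align a power of two), so ANDing the mask into the
downward-growing offset rounds the fragment start down to an align boundary,
while page_frag_alloc() passes ~0u, which leaves the offset untouched. A
small worked example (the concrete numbers are illustrative only):

	/* align == 64, so align_mask == -64 == 0xffffffc0 as an unsigned int.
	 * With nc->offset == 1000 and fragsz == 200:
	 *
	 *   offset = 1000 - 200;     // 800
	 *   offset &= 0xffffffc0;    // 800 & ~63 == 768, rounded down to a
	 *                            // 64-byte boundary
	 *   nc->offset = 768;        // the next fragment is carved below 768
	 *
	 * With align_mask == ~0u the AND is a no-op and the offset stays at 800.
	 */
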
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index eeb2b6bc681a..fdf204550c9a 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -6,12 +6,12 @@
  * Copyright (C) 2024 Yunsheng Lin
  */
 
-#include
 #include
 #include
 #include
 #include
 #include
+#include <linux/page_frag_cache.h>
 
 static struct ptr_ring ptr_ring;
 static int nr_objs = 512;