From patchwork Wed May 24 15:33:00 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13254195
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Willem de Bruijn, David Ahern, Matthew Wilcox, Jens Axboe,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Subject: [PATCH net-next 01/12] mm: Move the page fragment allocator from page_alloc.c into its own file
Date: Wed, 24 May 2023 16:33:00 +0100
Message-Id: <20230524153311.3625329-2-dhowells@redhat.com>
In-Reply-To: <20230524153311.3625329-1-dhowells@redhat.com>
References: <20230524153311.3625329-1-dhowells@redhat.com>

Move the page fragment allocator from page_alloc.c into its own file
preparatory to changing it.

Signed-off-by: David Howells
cc: Andrew Morton
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: linux-mm@kvack.org
cc: netdev@vger.kernel.org
---
 mm/Makefile          |   2 +-
 mm/page_alloc.c      | 126 -----------------------------------------
 mm/page_frag_alloc.c | 131 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 132 insertions(+), 127 deletions(-)
 create mode 100644 mm/page_frag_alloc.c

diff --git a/mm/Makefile b/mm/Makefile
index e29afc890cde..0daa4c6f4552 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -51,7 +51,7 @@ obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
 			   readahead.o swap.o truncate.o vmscan.o shmem.o \
 			   util.o mmzone.o vmstat.o backing-dev.o \
 			   mm_init.o percpu.o slab_common.o \
-			   compaction.o \
+			   compaction.o page_frag_alloc.o \
 			   interval_tree.o list_lru.o workingset.o \
 			   debug.o gup.o mmap_lock.o $(mmu-y)
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 47421bedc12b..29dc79dbeb22 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4871,132 +4871,6 @@ void free_pages(unsigned long addr, unsigned int order)
 
 EXPORT_SYMBOL(free_pages);
 
-/*
- * Page Fragment:
- *  An arbitrary-length arbitrary-offset area of memory which resides
- *  within a 0 or higher order page.  Multiple fragments within that page
- *  are individually refcounted, in the page's reference counter.
- *
- * The page_frag functions below provide a simple allocation framework for
- * page fragments.  This is used by the network stack and network device
- * drivers to provide a backing region of memory for use as either an
- * sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
- */
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
-					     gfp_t gfp_mask)
-{
-	struct page *page = NULL;
-	gfp_t gfp = gfp_mask;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
-		    __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
-#endif
-	if (unlikely(!page))
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
-
-	nc->va = page ? page_address(page) : NULL;
-
-	return page;
-}
-
-void __page_frag_cache_drain(struct page *page, unsigned int count)
-{
-	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
-
-	if (page_ref_sub_and_test(page, count))
-		free_the_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(__page_frag_cache_drain);
-
-void *page_frag_alloc_align(struct page_frag_cache *nc,
-			    unsigned int fragsz, gfp_t gfp_mask,
-			    unsigned int align_mask)
-{
-	unsigned int size = PAGE_SIZE;
-	struct page *page;
-	int offset;
-
-	if (unlikely(!nc->va)) {
-refill:
-		page = __page_frag_cache_refill(nc, gfp_mask);
-		if (!page)
-			return NULL;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
-		/* Even if we own the page, we do not use atomic_set().
-		 * This would break get_page_unless_zero() users.
-		 */
-		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
-	}
-
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
-		page = virt_to_page(nc->va);
-
-		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
-			goto refill;
-
-		if (unlikely(nc->pfmemalloc)) {
-			free_the_page(page, compound_order(page));
-			goto refill;
-		}
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
-			/*
-			 * The caller is trying to allocate a fragment
-			 * with fragsz > PAGE_SIZE but the cache isn't big
-			 * enough to satisfy the request, this may
-			 * happen in low memory conditions.
-			 * We don't release the cache page because
-			 * it could make memory pressure worse
-			 * so we simply return NULL here.
-			 */
-			return NULL;
-		}
-	}
-
-	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
-
-	return nc->va + offset;
-}
-EXPORT_SYMBOL(page_frag_alloc_align);
-
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
- */
-void page_frag_free(void *addr)
-{
-	struct page *page = virt_to_head_page(addr);
-
-	if (unlikely(put_page_testzero(page)))
-		free_the_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(page_frag_free);
-
 static void *make_alloc_exact(unsigned long addr, unsigned int order,
 		size_t size)
 {
diff --git a/mm/page_frag_alloc.c b/mm/page_frag_alloc.c
new file mode 100644
index 000000000000..bee95824ef8f
--- /dev/null
+++ b/mm/page_frag_alloc.c
@@ -0,0 +1,131 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Page fragment allocator
+ *
+ * Page Fragment:
+ *  An arbitrary-length arbitrary-offset area of memory which resides within a
+ *  0 or higher order page.  Multiple fragments within that page are
+ *  individually refcounted, in the page's reference counter.
+ *
+ * The page_frag functions provide a simple allocation framework for page
+ * fragments.  This is used by the network stack and network device drivers to
+ * provide a backing region of memory for use as either an sk_buff->head, or to
+ * be used in the "frags" portion of skb_shared_info.
+ */
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+					     gfp_t gfp_mask)
+{
+	struct page *page = NULL;
+	gfp_t gfp = gfp_mask;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
+		    __GFP_NOMEMALLOC;
+	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
+				PAGE_FRAG_CACHE_MAX_ORDER);
+	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+#endif
+	if (unlikely(!page))
+		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+
+	nc->va = page ? page_address(page) : NULL;
+
+	return page;
+}
+
+void __page_frag_cache_drain(struct page *page, unsigned int count)
+{
+	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
+
+	if (page_ref_sub_and_test(page, count - 1))
+		__free_pages(page, compound_order(page));
+}
+EXPORT_SYMBOL(__page_frag_cache_drain);
+
+void *page_frag_alloc_align(struct page_frag_cache *nc,
+			    unsigned int fragsz, gfp_t gfp_mask,
+			    unsigned int align_mask)
+{
+	unsigned int size = PAGE_SIZE;
+	struct page *page;
+	int offset;
+
+	if (unlikely(!nc->va)) {
+refill:
+		page = __page_frag_cache_refill(nc, gfp_mask);
+		if (!page)
+			return NULL;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+		/* if size can vary use size else just use PAGE_SIZE */
+		size = nc->size;
+#endif
+		/* Even if we own the page, we do not use atomic_set().
+		 * This would break get_page_unless_zero() users.
+		 */
+		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+
+		/* reset page count bias and offset to start of new frag */
+		nc->pfmemalloc = page_is_pfmemalloc(page);
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		nc->offset = size;
+	}
+
+	offset = nc->offset - fragsz;
+	if (unlikely(offset < 0)) {
+		page = virt_to_page(nc->va);
+
+		if (page_ref_count(page) != nc->pagecnt_bias)
+			goto refill;
+		if (unlikely(nc->pfmemalloc)) {
+			page_ref_sub(page, nc->pagecnt_bias - 1);
+			__free_pages(page, compound_order(page));
+			goto refill;
+		}
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+		/* if size can vary use size else just use PAGE_SIZE */
+		size = nc->size;
+#endif
+		/* OK, page count is 0, we can safely set it */
+		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+
+		/* reset page count bias and offset to start of new frag */
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		offset = size - fragsz;
+		if (unlikely(offset < 0)) {
+			/*
+			 * The caller is trying to allocate a fragment
+			 * with fragsz > PAGE_SIZE but the cache isn't big
+			 * enough to satisfy the request, this may
+			 * happen in low memory conditions.
+			 * We don't release the cache page because
+			 * it could make memory pressure worse
+			 * so we simply return NULL here.
+			 */
+			return NULL;
+		}
+	}
+
+	nc->pagecnt_bias--;
+	offset &= align_mask;
+	nc->offset = offset;
+
+	return nc->va + offset;
+}
+EXPORT_SYMBOL(page_frag_alloc_align);
+
+/*
+ * Frees a page fragment allocated out of either a compound or order 0 page.
+ */
+void page_frag_free(void *addr)
+{
+	struct page *page = virt_to_head_page(addr);
+
+	__free_pages(page, compound_order(page));
+}
+EXPORT_SYMBOL(page_frag_free);
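
For context (not part of the patch): a minimal sketch of how the API being
moved is typically consumed by a caller. The cache and function names below
are hypothetical, and in-tree users normally reach this code through wrappers
such as netdev_alloc_frag() or napi_alloc_frag(); on teardown a cache owner
would drop the remaining references with __page_frag_cache_drain().

/* Hypothetical consumer of the page_frag API (illustration only). */
#include <linux/cache.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page_frag_cache example_frag_cache;

/* Carve a small, cache-line-aligned buffer out of the cache's backing page.
 * align_mask is applied to the fragment offset, so ~(SMP_CACHE_BYTES - 1)
 * rounds it down to a multiple of SMP_CACHE_BYTES.
 */
static void *example_alloc_buf(unsigned int len)
{
	return page_frag_alloc_align(&example_frag_cache, len, GFP_ATOMIC,
				     ~(SMP_CACHE_BYTES - 1));
}

/* Drop this fragment's reference; the backing page is freed only once all
 * fragments and the cache's bias reference have been released.
 */
static void example_free_buf(void *buf)
{
	page_frag_free(buf);
}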