From patchwork Mon Apr 15 13:19:26 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
Cc: Andrew Morton, Alexander Duyck
Subject: [PATCH net-next v2 01/15] mm: page_frag: add a test module for page_frag
Date: Mon, 15 Apr 2024 21:19:26 +0800
Message-ID: <20240415131941.51153-2-linyunsheng@huawei.com>
In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com>
References: <20240415131941.51153-1-linyunsheng@huawei.com>
Based on lib/objpool.c, change it into something like a ptrpool, so that we can use it to test the correctness and performance of page_frag.
The testing is done by ensuring that the fragments allocated from a page_frag_cache instance are pushed into a ptrpool instance in a kthread bound to the first CPU, while a kthread bound to the current node pops the fragments from the ptrpool and calls page_frag_free() to free them.

We may refactor out the common part between objpool and ptrpool if this ptrpool thing turns out to be helpful for other places.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 mm/Kconfig.debug    |   8 +
 mm/Makefile         |   1 +
 mm/page_frag_test.c | 364 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 373 insertions(+)
 create mode 100644 mm/page_frag_test.c

diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index afc72fde0f03..1ebcd45f47d4 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -142,6 +142,14 @@ config DEBUG_PAGE_REF
 	  kernel code. However the runtime performance overhead is virtually
 	  nil until the tracepoints are actually enabled.
 
+config DEBUG_PAGE_FRAG_TEST
+	tristate "Test module for page_frag"
+	default n
+	depends on m && DEBUG_KERNEL
+	help
+	  This builds the "page_frag_test" module that is used to test the
+	  correctness and performance of page_frag's implementation.
+
 config DEBUG_RODATA_TEST
 	bool "Testcase for the marking rodata read-only"
 	depends on STRICT_KERNEL_RWX

diff --git a/mm/Makefile b/mm/Makefile
index 4abb40b911ec..5a14e6992f44 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -101,6 +101,7 @@ obj-$(CONFIG_MEMORY_FAILURE) += memory-failure.o
 obj-$(CONFIG_HWPOISON_INJECT) += hwpoison-inject.o
 obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
 obj-$(CONFIG_DEBUG_RODATA_TEST) += rodata_test.o
+obj-$(CONFIG_DEBUG_PAGE_FRAG_TEST) += page_frag_test.o
 obj-$(CONFIG_DEBUG_VM_PGTABLE) += debug_vm_pgtable.o
 obj-$(CONFIG_PAGE_OWNER) += page_owner.o
 obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o

diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c
new file mode 100644
index 000000000000..6743db672dad
--- /dev/null
+++ b/mm/page_frag_test.c
@@ -0,0 +1,364 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Test module for page_frag cache
+ *
+ * Copyright: linyunsheng@huawei.com
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define OBJPOOL_NR_OBJECT_MAX	BIT(24)
+
+struct objpool_slot {
+	u32 head;
+	u32 tail;
+	u32 last;
+	u32 mask;
+	void *entries[];
+} __packed;
+
+struct objpool_head {
+	int nr_cpus;
+	int capacity;
+	struct objpool_slot **cpu_slots;
+};
+
+/* initialize percpu objpool_slot */
+static void objpool_init_percpu_slot(struct objpool_head *pool,
+				     struct objpool_slot *slot)
+{
+	/* initialize elements of percpu objpool_slot */
+	slot->mask = pool->capacity - 1;
+}
+
+/* allocate and initialize percpu slots */
+static int objpool_init_percpu_slots(struct objpool_head *pool,
+				     int nr_objs, gfp_t gfp)
+{
+	int i;
+
+	for (i = 0; i < pool->nr_cpus; i++) {
+		struct objpool_slot *slot;
+		int size;
+
+		/* skip the cpu node which could never be present */
+		if (!cpu_possible(i))
+			continue;
+
+		size = struct_size(slot, entries, pool->capacity);
+
+		/*
+		 * here we allocate percpu-slot & objs together in a single
+		 * allocation to make it more compact, taking advantage of
+		 * warm caches and TLB hits. in default vmalloc is used to
+		 * reduce the pressure of kernel slab system. as we know,
+		 * minimal size of vmalloc is one page since vmalloc would
+		 * always align the requested size to page size
+		 */
+		if (gfp & GFP_ATOMIC)
+			slot = kmalloc_node(size, gfp, cpu_to_node(i));
+		else
+			slot = __vmalloc_node(size, sizeof(void *), gfp,
+					      cpu_to_node(i),
+					      __builtin_return_address(0));
+		if (!slot)
+			return -ENOMEM;
+
+		memset(slot, 0, size);
+		pool->cpu_slots[i] = slot;
+
+		objpool_init_percpu_slot(pool, slot);
+	}
+
+	return 0;
+}
+
+/* cleanup all percpu slots of the object pool */
+static void objpool_fini_percpu_slots(struct objpool_head *pool)
+{
+	int i;
+
+	if (!pool->cpu_slots)
+		return;
+
+	for (i = 0; i < pool->nr_cpus; i++)
+		kvfree(pool->cpu_slots[i]);
+	kfree(pool->cpu_slots);
+}
+
+/* initialize object pool and pre-allocate objects */
+static int objpool_init(struct objpool_head *pool, int nr_objs, gfp_t gfp)
+{
+	int rc, capacity, slot_size;
+
+	/* check input parameters */
+	if (nr_objs <= 0 || nr_objs > OBJPOOL_NR_OBJECT_MAX)
+		return -EINVAL;
+
+	/* calculate capacity of percpu objpool_slot */
+	capacity = roundup_pow_of_two(nr_objs);
+	if (!capacity)
+		return -EINVAL;
+
+	gfp = gfp & ~__GFP_ZERO;
+
+	/* initialize objpool pool */
+	memset(pool, 0, sizeof(struct objpool_head));
+	pool->nr_cpus = nr_cpu_ids;
+	pool->capacity = capacity;
+	slot_size = pool->nr_cpus * sizeof(struct objpool_slot *);
+	pool->cpu_slots = kzalloc(slot_size, gfp);
+	if (!pool->cpu_slots)
+		return -ENOMEM;
+
+	/* initialize per-cpu slots */
+	rc = objpool_init_percpu_slots(pool, nr_objs, gfp);
+	if (rc)
+		objpool_fini_percpu_slots(pool);
+
+	return rc;
+}
+
+/* adding object to slot, abort if the slot was already full */
+static int objpool_try_add_slot(void *obj, struct objpool_head *pool, int cpu)
+{
+	struct objpool_slot *slot = pool->cpu_slots[cpu];
+	u32 head, tail;
+
+	/* loading tail and head as a local snapshot, tail first */
+	tail = READ_ONCE(slot->tail);
+
+	do {
+		head = READ_ONCE(slot->head);
+		/* fault caught: something must be wrong */
+		if (unlikely(tail - head >= pool->capacity))
+			return -ENOSPC;
+	} while (!try_cmpxchg_acquire(&slot->tail, &tail, tail + 1));
+
+	/* now the tail position is reserved for the given obj */
+	WRITE_ONCE(slot->entries[tail & slot->mask], obj);
+	/* update sequence to make this obj available for pop() */
+	smp_store_release(&slot->last, tail + 1);
+
+	return 0;
+}
+
+/* reclaim an object to object pool */
+static int objpool_push(void *obj, struct objpool_head *pool)
+{
+	unsigned long flags;
+	int rc;
+
+	/* disable local irq to avoid preemption & interruption */
+	raw_local_irq_save(flags);
+	rc = objpool_try_add_slot(obj, pool, raw_smp_processor_id());
+	raw_local_irq_restore(flags);
+
+	return rc;
+}
+
+/* try to retrieve object from slot */
+static void *objpool_try_get_slot(struct objpool_head *pool, int cpu)
+{
+	struct objpool_slot *slot = pool->cpu_slots[cpu];
+	/* load head snapshot, other cpus may change it */
+	u32 head = smp_load_acquire(&slot->head);
+
+	while (head != READ_ONCE(slot->last)) {
+		void *obj;
+
+		/*
+		 * data visibility of 'last' and 'head' could be out of
+		 * order since memory updating of 'last' and 'head' are
+		 * performed in push() and pop() independently
+		 *
+		 * before any retrieving attempts, pop() must guarantee
+		 * 'last' is behind 'head', that is to say, there must
+		 * be available objects in slot, which could be ensured
+		 * by condition 'last != head && last - head <= nr_objs'
+		 * that is equivalent to 'last - head - 1 < nr_objs' as
+		 * 'last' and 'head' are both unsigned int32
+		 */
+		if (READ_ONCE(slot->last) - head - 1 >= pool->capacity) {
+			head = READ_ONCE(slot->head);
+			continue;
+		}
+
+		/* obj must be retrieved before moving forward head */
+		obj = READ_ONCE(slot->entries[head & slot->mask]);
+
+		/* move head forward to mark it's consumption */
+		if (try_cmpxchg_release(&slot->head, &head, head + 1))
+			return obj;
+	}
+
+	return NULL;
+}
+
+/* allocate an object from object pool */
+static void *objpool_pop(struct objpool_head *pool)
+{
+	void *obj = NULL;
+	unsigned long flags;
+	int i, cpu;
+
+	/* disable local irq to avoid preemption & interruption */
+	raw_local_irq_save(flags);
+
+	cpu = raw_smp_processor_id();
+	for (i = 0; i < num_possible_cpus(); i++) {
+		obj = objpool_try_get_slot(pool, cpu);
+		if (obj)
+			break;
+		cpu = cpumask_next_wrap(cpu, cpu_possible_mask, -1, 1);
+	}
+	raw_local_irq_restore(flags);
+
+	return obj;
+}
+
+/* release whole objpool forcely */
+static void objpool_free(struct objpool_head *pool)
+{
+	if (!pool->cpu_slots)
+		return;
+
+	/* release percpu slots */
+	objpool_fini_percpu_slots(pool);
+}
+
+static struct objpool_head ptr_pool;
+static int nr_objs = 512;
+static int nr_test = 5120000;
+static atomic_t nthreads;
+static struct completion wait;
+static struct page_frag_cache test_frag;
+
+module_param(nr_test, int, 0600);
+MODULE_PARM_DESC(nr_test, "number of iterations to test");
+
+static int page_frag_pop_thread(void *arg)
+{
+	struct objpool_head *pool = arg;
+	int nr = nr_test;
+
+	pr_info("page_frag pop test thread begins on cpu %d\n",
+		smp_processor_id());
+
+	while (nr > 0) {
+		void *obj = objpool_pop(pool);
+
+		if (obj) {
+			nr--;
+			page_frag_free(obj);
+		} else {
+			cond_resched();
+		}
+	}
+
+	if (atomic_dec_and_test(&nthreads))
+		complete(&wait);
+
+	pr_info("page_frag pop test thread exits on cpu %d\n",
+		smp_processor_id());
+
+	return 0;
+}
+
+static int page_frag_push_thread(void *arg)
+{
+	struct objpool_head *pool = arg;
+	int nr = nr_test;
+
+	pr_info("page_frag push test thread begins on cpu %d\n",
+		smp_processor_id());
+
+	while (nr > 0) {
+		unsigned int size = get_random_u32();
+		void *va;
+		int ret;
+
+		size = clamp(size, 4U, 4096U);
+		va = page_frag_alloc(&test_frag, size, GFP_KERNEL);
+		if (!va)
+			continue;
+
+		ret = objpool_push(va, pool);
+		if (ret) {
+			page_frag_free(va);
+			cond_resched();
+		} else {
+			nr--;
+		}
+	}
+
+	pr_info("page_frag push test thread exits on cpu %d\n",
+		smp_processor_id());
+
+	if (atomic_dec_and_test(&nthreads))
+		complete(&wait);
+
+	return 0;
+}
+
+static int __init page_frag_test_init(void)
+{
+	struct task_struct *tsk_push, *tsk_pop;
+	ktime_t start;
+	u64 duration;
+	int ret;
+
+	test_frag.va = NULL;
+	atomic_set(&nthreads, 2);
+	init_completion(&wait);
+
+	ret = objpool_init(&ptr_pool, nr_objs, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	tsk_push = kthread_create_on_cpu(page_frag_push_thread, &ptr_pool,
+					 cpumask_first(cpu_online_mask),
+					 "page_frag_push");
+	if (IS_ERR(tsk_push))
+		return PTR_ERR(tsk_push);
+
+	tsk_pop = kthread_create(page_frag_pop_thread, &ptr_pool,
+				 "page_frag_pop");
+	if (IS_ERR(tsk_pop)) {
+		kthread_stop(tsk_push);
+		return PTR_ERR(tsk_pop);
+	}
+
+	start = ktime_get();
+	wake_up_process(tsk_push);
+	wake_up_process(tsk_pop);
+
+	pr_info("waiting for test to complete\n");
+	wait_for_completion(&wait);
+
+	duration = (u64)ktime_us_delta(ktime_get(), start);
+	pr_info("%d of iterations took: %lluus\n", nr_test, duration);
+
+	objpool_free(&ptr_pool);
+	page_frag_cache_drain(&test_frag);
+
+	return -EAGAIN;
+}
+
+static void __exit page_frag_test_exit(void)
+{
+}
+
+module_init(page_frag_test_init);
+module_exit(page_frag_test_exit);
+
+MODULE_LICENSE("GPL");

From patchwork Mon Apr 15 13:19:28 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
Cc: Andrew Morton
Subject: [PATCH net-next v2 03/15] mm: page_frag: use free_unref_page() to free page fragment
Date: Mon, 15 Apr 2024 21:19:28 +0800
Message-ID: <20240415131941.51153-4-linyunsheng@huawei.com>
In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com>
References: <20240415131941.51153-1-linyunsheng@huawei.com>
free_the_page(), used by page_frag, calls free_unref_page() or __free_pages_ok() depending on pcp_allowed_order(). As the maximum order of a page allocated for page_frag is 3, the check in pcp_allowed_order() is unnecessary, so call free_unref_page() directly to free a page_frag page and avoid the unnecessary check.

As free_the_page() is a static function in page_alloc.c, using free_unref_page() directly also allows moving the page_frag related code to a new file in the next patch.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 mm/page_alloc.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 14d39f34d336..7adb29f8f364 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4693,6 +4693,9 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	gfp_t gfp = gfp_mask;
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* Ensure free_unref_page() can be used to free the page fragment */
+	BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_ALLOC_COSTLY_ORDER);
+
 	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
@@ -4722,7 +4725,7 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
 
 	if (page_ref_sub_and_test(page, count))
-		free_the_page(page, compound_order(page));
+		free_unref_page(page, compound_order(page));
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
@@ -4763,7 +4766,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 
 		if (unlikely(nc->pfmemalloc)) {
-			free_the_page(page, compound_order(page));
+			free_unref_page(page, compound_order(page));
 			goto refill;
 		}
@@ -4807,7 +4810,7 @@ void page_frag_free(void *addr)
 	struct page *page = virt_to_head_page(addr);
 
 	if (unlikely(put_page_testzero(page)))
-		free_the_page(page, compound_order(page));
+		free_unref_page(page, compound_order(page));
 }
 EXPORT_SYMBOL(page_frag_free);

From patchwork Mon Apr 15 13:19:29 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
Cc: David Howells, Andrew Morton, Alexander Duyck
Subject: [PATCH net-next v2 04/15] mm: move the page fragment allocator from page_alloc into its own file
Date: Mon, 15 Apr 2024 21:19:29 +0800
Message-ID: <20240415131941.51153-5-linyunsheng@huawei.com>
In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com>
References: <20240415131941.51153-1-linyunsheng@huawei.com>
Inspired by [1], move the page fragment allocator from page_alloc into its own .c file and header file, as we are about to make more changes to it in order to replace another page_frag implementation in sock.c.

1.
https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/

CC: David Howells
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/linux/gfp.h             |  22 -----
 include/linux/mm_types.h        |  18 ----
 include/linux/page_frag_cache.h |  47 ++++++++++
 include/linux/skbuff.h          |   1 +
 mm/Makefile                     |   1 +
 mm/page_alloc.c                 | 139 ------------------------
 mm/page_frag_cache.c            | 147 ++++++++++++++++++++++++++++
 mm/page_frag_test.c             |   1 +
 8 files changed, 197 insertions(+), 179 deletions(-)
 create mode 100644 include/linux/page_frag_cache.h
 create mode 100644 mm/page_frag_cache.c

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index c775ea3c6015..5afeab2b906f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -310,28 +310,6 @@ __meminit void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __al
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
 
-struct page_frag_cache;
-void page_frag_cache_drain(struct page_frag_cache *nc);
-extern void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
-			      gfp_t gfp_mask, unsigned int align_mask);
-
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
-					  unsigned int fragsz, gfp_t gfp_mask,
-					  unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
-}
-
-static inline void *page_frag_alloc(struct page_frag_cache *nc,
-				    unsigned int fragsz, gfp_t gfp_mask)
-{
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
-}
-
-extern void page_frag_free(void *addr);
-
 #define __free_page(page) __free_pages((page), 0)
 #define free_page(addr) free_pages((addr), 0)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5240bd7bca33..78a92b4475a7 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -504,9 +504,6 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
  */
 #define STRUCT_PAGE_MAX_SHIFT	(order_base_2(sizeof(struct page)))
 
-#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
-#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
-
 /*
  * page_private can be used on tail pages.  However, PagePrivate is only
  * checked by the VM on the head page.  So page_private on the tail pages
@@ -525,21 +522,6 @@ static inline void *folio_get_private(struct folio *folio)
 	return folio->private;
 }
 
-struct page_frag_cache {
-	void * va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	__u16 offset;
-	__u16 size;
-#else
-	__u32 offset;
-#endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
-	bool pfmemalloc;
-};
-
 typedef unsigned long vm_flags_t;
 
 /*

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
new file mode 100644
index 000000000000..04810d8d6a7d
--- /dev/null
+++ b/include/linux/page_frag_cache.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_PAGE_FRAG_CACHE_H
+#define _LINUX_PAGE_FRAG_CACHE_H
+
+#include
+
+#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
+#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
+
+struct page_frag_cache {
+	void *va;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	__u16 offset;
+	__u16 size;
+#else
+	__u32 offset;
+#endif
+	/* we maintain a pagecount bias, so that we dont dirty cache line
+	 * containing page->_refcount every time we allocate a fragment.
+	 */
+	unsigned int pagecnt_bias;
+	bool pfmemalloc;
+};
+
+void page_frag_cache_drain(struct page_frag_cache *nc);
+void __page_frag_cache_drain(struct page *page, unsigned int count);
+void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
+			      gfp_t gfp_mask, unsigned int align_mask);
+
+static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
+					  unsigned int fragsz, gfp_t gfp_mask,
+					  unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc(struct page_frag_cache *nc,
+				    unsigned int fragsz, gfp_t gfp_mask)
+{
+	return page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+}
+
+void page_frag_free(void *addr);
+
+#endif

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 4072a7ee3859..f2dc1f735c79 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include
 #include
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include

diff --git a/mm/Makefile b/mm/Makefile
index 5a14e6992f44..8b62f5de48a7 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -59,6 +59,7 @@ page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o
 memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 
 obj-y += page-alloc.o
+obj-y += page_frag_cache.o
 obj-y += init-mm.o
 obj-y += memblock.o
 obj-y += $(memory-hotplug-y)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7adb29f8f364..2308360d78eb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4675,145 +4675,6 @@ void free_pages(unsigned long addr, unsigned int order)
 EXPORT_SYMBOL(free_pages);
 
-/*
- * Page Fragment:
- *  An arbitrary-length arbitrary-offset area of memory which resides
- *  within a 0 or higher order page.  Multiple fragments within that page
- *  are individually refcounted, in the page's reference counter.
- *
- * The page_frag functions below provide a simple allocation framework for
- * page fragments.  This is used by the network stack and network device
- * drivers to provide a backing region of memory for use as either an
- * sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
- */
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
-					     gfp_t gfp_mask)
-{
-	struct page *page = NULL;
-	gfp_t gfp = gfp_mask;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	/* Ensure free_unref_page() can be used to free the page fragment */
-	BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_ALLOC_COSTLY_ORDER);
-
-	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
-		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
-#endif
-	if (unlikely(!page))
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
-
-	nc->va = page ? page_address(page) : NULL;
-
-	return page;
-}
-
-void page_frag_cache_drain(struct page_frag_cache *nc)
-{
-	if (!nc->va)
-		return;
-
-	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
-	nc->va = NULL;
-}
-EXPORT_SYMBOL(page_frag_cache_drain);
-
-void __page_frag_cache_drain(struct page *page, unsigned int count)
-{
-	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
-
-	if (page_ref_sub_and_test(page, count))
-		free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(__page_frag_cache_drain);
-
-void *__page_frag_alloc_align(struct page_frag_cache *nc,
-			      unsigned int fragsz, gfp_t gfp_mask,
-			      unsigned int align_mask)
-{
-	unsigned int size = PAGE_SIZE;
-	struct page *page;
-	int offset;
-
-	if (unlikely(!nc->va)) {
-refill:
-		page = __page_frag_cache_refill(nc, gfp_mask);
-		if (!page)
-			return NULL;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
-		/* Even if we own the page, we do not use atomic_set().
-		 * This would break get_page_unless_zero() users.
- */ - page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); - - /* reset page count bias and offset to start of new frag */ - nc->pfmemalloc = page_is_pfmemalloc(page); - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - nc->offset = size; - } - - offset = nc->offset - fragsz; - if (unlikely(offset < 0)) { - page = virt_to_page(nc->va); - - if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) - goto refill; - - if (unlikely(nc->pfmemalloc)) { - free_unref_page(page, compound_order(page)); - goto refill; - } - -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif - /* OK, page count is 0, we can safely set it */ - set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); - - /* reset page count bias and offset to start of new frag */ - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - offset = size - fragsz; - if (unlikely(offset < 0)) { - /* - * The caller is trying to allocate a fragment - * with fragsz > PAGE_SIZE but the cache isn't big - * enough to satisfy the request, this may - * happen in low memory conditions. - * We don't release the cache page because - * it could make memory pressure worse - * so we simply return NULL here. - */ - return NULL; - } - } - - nc->pagecnt_bias--; - offset &= align_mask; - nc->offset = offset; - - return nc->va + offset; -} -EXPORT_SYMBOL(__page_frag_alloc_align); - -/* - * Frees a page fragment allocated out of either a compound or order 0 page. 
- */ -void page_frag_free(void *addr) -{ - struct page *page = virt_to_head_page(addr); - - if (unlikely(put_page_testzero(page))) - free_unref_page(page, compound_order(page)); -} -EXPORT_SYMBOL(page_frag_free); - static void *make_alloc_exact(unsigned long addr, unsigned int order, size_t size) { diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c new file mode 100644 index 000000000000..64993b5d1243 --- /dev/null +++ b/mm/page_frag_cache.c @@ -0,0 +1,147 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Page fragment allocator + * + * Page Fragment: + * An arbitrary-length arbitrary-offset area of memory which resides within a + * 0 or higher order page. Multiple fragments within that page are + * individually refcounted, in the page's reference counter. + * + * The page_frag functions provide a simple allocation framework for page + * fragments. This is used by the network stack and network device drivers to + * provide a backing region of memory for use as either an sk_buff->head, or to + * be used in the "frags" portion of skb_shared_info. + */ + +#include +#include +#include +#include +#include "internal.h" + +static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, + gfp_t gfp_mask) +{ + struct page *page = NULL; + gfp_t gfp = gfp_mask; + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* Ensure free_unref_page() can be used to free the page fragment */ + BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_ALLOC_COSTLY_ORDER); + + gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP | + __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; + page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, + PAGE_FRAG_CACHE_MAX_ORDER); + nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE; +#endif + if (unlikely(!page)) + page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); + + nc->va = page ? 
page_address(page) : NULL; + + return page; +} + +void page_frag_cache_drain(struct page_frag_cache *nc) +{ + if (!nc->va) + return; + + __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); + nc->va = NULL; +} +EXPORT_SYMBOL(page_frag_cache_drain); + +void __page_frag_cache_drain(struct page *page, unsigned int count) +{ + VM_BUG_ON_PAGE(page_ref_count(page) == 0, page); + + if (page_ref_sub_and_test(page, count)) + free_unref_page(page, compound_order(page)); +} +EXPORT_SYMBOL(__page_frag_cache_drain); + +void *__page_frag_alloc_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask) +{ + unsigned int size = PAGE_SIZE; + struct page *page; + int offset; + + if (unlikely(!nc->va)) { +refill: + page = __page_frag_cache_refill(nc, gfp_mask); + if (!page) + return NULL; + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* if size can vary use size else just use PAGE_SIZE */ + size = nc->size; +#endif + /* Even if we own the page, we do not use atomic_set(). + * This would break get_page_unless_zero() users. 
+ */ + page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); + + /* reset page count bias and offset to start of new frag */ + nc->pfmemalloc = page_is_pfmemalloc(page); + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + nc->offset = size; + } + + offset = nc->offset - fragsz; + if (unlikely(offset < 0)) { + page = virt_to_page(nc->va); + + if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) + goto refill; + + if (unlikely(nc->pfmemalloc)) { + free_unref_page(page, compound_order(page)); + goto refill; + } + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* if size can vary use size else just use PAGE_SIZE */ + size = nc->size; +#endif + /* OK, page count is 0, we can safely set it */ + set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); + + /* reset page count bias and offset to start of new frag */ + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + offset = size - fragsz; + if (unlikely(offset < 0)) { + /* + * The caller is trying to allocate a fragment + * with fragsz > PAGE_SIZE but the cache isn't big + * enough to satisfy the request, this may + * happen in low memory conditions. + * We don't release the cache page because + * it could make memory pressure worse + * so we simply return NULL here. + */ + return NULL; + } + } + + nc->pagecnt_bias--; + offset &= align_mask; + nc->offset = offset; + + return nc->va + offset; +} +EXPORT_SYMBOL(__page_frag_alloc_align); + +/* + * Frees a page fragment allocated out of either a compound or order 0 page. 
+ */ +void page_frag_free(void *addr) +{ + struct page *page = virt_to_head_page(addr); + + if (unlikely(put_page_testzero(page))) + free_unref_page(page, compound_order(page)); +} +EXPORT_SYMBOL(page_frag_free); diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c index 6743db672dad..ebfd1c3dae8f 100644 --- a/mm/page_frag_test.c +++ b/mm/page_frag_test.c @@ -15,6 +15,7 @@ #include #include #include +#include #define OBJPOOL_NR_OBJECT_MAX BIT(24) From patchwork Mon Apr 15 13:19:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13630029 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7FAD4C00A94 for ; Mon, 15 Apr 2024 13:22:09 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 9A4EF6B0096; Mon, 15 Apr 2024 09:22:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 92DB06B0098; Mon, 15 Apr 2024 09:22:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7F6DB6B0099; Mon, 15 Apr 2024 09:22:08 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 55EDB6B0096 for ; Mon, 15 Apr 2024 09:22:08 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 227A91605EF for ; Mon, 15 Apr 2024 13:22:08 +0000 (UTC) X-FDA: 82011829536.05.4FBB8F7 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by imf10.hostedemail.com (Postfix) with ESMTP id 9BA05C0015 for ; Mon, 15 Apr 2024 13:22:05 +0000 (UTC) Authentication-Results: imf10.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) 
header.from=huawei.com; spf=pass (imf10.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.188 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1713187326; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=orreERWWsYX7btKTXi9psZRl1saCgViHEcv00oDh7jw=; b=J00jlAUpF2NzX/Ou5BQ2JC5p8zB4GKRW4/VSpgjkc4Tc3vRMNTE5969mVBh7xs0O8V/HkF 8/6JuCIdpdcLx4HofFSHEH0ipYPrsDh6zEBKCBY6psSnAnTHWDnNOJKHw8NIfCD+3m5UMV x8qwx6ozms4JRMrWuLIscjOyH5jtpWc= ARC-Authentication-Results: i=1; imf10.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf10.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.188 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1713187326; a=rsa-sha256; cv=none; b=4gPF71HeowEfpx3OQFw/ZGcUKsp6GVxejKoaKrvf93C37uUjzl9j0ReX+hGglgpSQbK15w 4vfJFlQXlCuUbXb0aq1WV8pTkAd+vjrCu5czWn7Y0QiELTWpgCBq2b2GZMda7+4fZols2o oxJfda8JJJiMZvmFcLbMPFsid2kOxUg= Received: from mail.maildlp.com (unknown [172.19.163.252]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4VJ76F5p6mzTmPr; Mon, 15 Apr 2024 21:18:45 +0800 (CST) Received: from dggpemm500005.china.huawei.com (unknown [7.185.36.74]) by mail.maildlp.com (Postfix) with ESMTPS id C1C5718007D; Mon, 15 Apr 2024 21:22:01 +0800 (CST) Received: from localhost.localdomain (10.69.192.56) by dggpemm500005.china.huawei.com (7.185.36.74) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.35; Mon, 15 Apr 2024 21:22:01 +0800 From: Yunsheng Lin To: , , CC: , , Yunsheng Lin , Alexander Duyck , Andrew Morton , Subject: [PATCH net-next v2 05/15] mm: page_frag: use 
initial zero offset for page_frag_alloc_align() Date: Mon, 15 Apr 2024 21:19:30 +0800 Message-ID: <20240415131941.51153-6-linyunsheng@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com> References: <20240415131941.51153-1-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems706-chm.china.huawei.com (10.3.19.183) To dggpemm500005.china.huawei.com (7.185.36.74) X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 9BA05C0015 X-Stat-Signature: pyi1c314t9ftshoj5stseh7podyuufju X-Rspam-User: X-HE-Tag: 1713187325-163367 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: We are about to use
page_frag_alloc_*() API to not just allocate memory for skb->data, but also to do the memory allocation for skb frags too. Currently the implementation of page_frag in the mm subsystem runs the offset as a countdown rather than a count-up value; there may be several advantages to that, as mentioned in [1], but it also has some disadvantages: for example, it may prevent skb frag coalescing and more effective cache prefetching. We have a trade-off to make in order to have a unified implementation and API for page_frag, so use an initial zero offset in this patch; the following patch will try to make some optimizations to avoid the disadvantages as much as possible. 1. https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/ CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- mm/page_frag_cache.c | 31 ++++++++++++++----------------- 1 file changed, 14 insertions(+), 17 deletions(-) diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 64993b5d1243..dc864ee09536 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -65,9 +65,8 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask) { - unsigned int size = PAGE_SIZE; + unsigned int size, offset; struct page *page; - int offset; if (unlikely(!nc->va)) { refill: @@ -75,10 +74,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, if (!page) return NULL; -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif /* Even if we own the page, we do not use atomic_set(). * This would break get_page_unless_zero() users.
*/ @@ -87,11 +82,18 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, /* reset page count bias and offset to start of new frag */ nc->pfmemalloc = page_is_pfmemalloc(page); nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - nc->offset = size; + nc->offset = 0; } - offset = nc->offset - fragsz; - if (unlikely(offset < 0)) { +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* if size can vary use size else just use PAGE_SIZE */ + size = nc->size; +#else + size = PAGE_SIZE; +#endif + + offset = ALIGN(nc->offset, -align_mask); + if (unlikely(offset + fragsz > size)) { page = virt_to_page(nc->va); if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) @@ -102,17 +104,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, goto refill; } -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif /* OK, page count is 0, we can safely set it */ set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); /* reset page count bias and offset to start of new frag */ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - offset = size - fragsz; - if (unlikely(offset < 0)) { + offset = 0; + if (unlikely(fragsz > size)) { /* * The caller is trying to allocate a fragment * with fragsz > PAGE_SIZE but the cache isn't big @@ -127,8 +125,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, } nc->pagecnt_bias--; - offset &= align_mask; - nc->offset = offset; + nc->offset = offset + fragsz; return nc->va + offset; } From patchwork Mon Apr 15 13:19:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13630030 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1445AC4345F for ; Mon, 15 Apr 2024 13:22:12 +0000 (UTC) Received: by 
kanga.kvack.org (Postfix) id 00BC06B0099; Mon, 15 Apr 2024 09:22:11 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E5EC86B009B; Mon, 15 Apr 2024 09:22:10 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CD6E96B009C; Mon, 15 Apr 2024 09:22:10 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id A802F6B0099 for ; Mon, 15 Apr 2024 09:22:10 -0400 (EDT) Received: from smtpin20.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 67957A0622 for ; Mon, 15 Apr 2024 13:22:10 +0000 (UTC) X-FDA: 82011829620.20.CD02518 Received: from szxga04-in.huawei.com (szxga04-in.huawei.com [45.249.212.190]) by imf14.hostedemail.com (Postfix) with ESMTP id DC119100015 for ; Mon, 15 Apr 2024 13:22:07 +0000 (UTC) Authentication-Results: imf14.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf14.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.190 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1713187328; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=qoJUf0glv70tXACV63b3D2nCCUXIOqXrCeqRZX72hzw=; b=ToKRohoc4+5a/NE2eoNa3vPe8Oj9ktycW/kngBfQJ4XhMa/lU55AathB7jXvED7t2BXCRJ pNX7FWQvSTxXH2EzHOqB4Z/p1B49P4EVEMWLhNiZ1+vCimqYi8eXcsZlwNlGiGWxODzrBi jjJYHGTp0C7gx04KI7MwKR6fo5rZREY= ARC-Authentication-Results: i=1; imf14.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf14.hostedemail.com: domain of linyunsheng@huawei.com 
designates 45.249.212.190 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1713187328; a=rsa-sha256; cv=none; b=J8yPDcb2V0jELStM9xsaYOR9QwV6sadut/+PMb8+IVjVgkxzU9Qt2rQ0bzEwJXu/EXiojI 8BRlK9SetpBWTcEIuQ7dQmnSeV/uS117gtLzaLXDVJlRR44GAD2qm53jMFmNxj0f6HjtZl qbQBd5KdNt+Hs6JDG+R7c4PtC6WT9As= Received: from mail.maildlp.com (unknown [172.19.163.17]) by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4VJ77M08hvz1yp2X; Mon, 15 Apr 2024 21:19:43 +0800 (CST) Received: from dggpemm500005.china.huawei.com (unknown [7.185.36.74]) by mail.maildlp.com (Postfix) with ESMTPS id 45FFA1A0188; Mon, 15 Apr 2024 21:22:04 +0800 (CST) Received: from localhost.localdomain (10.69.192.56) by dggpemm500005.china.huawei.com (7.185.36.74) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.35; Mon, 15 Apr 2024 21:22:03 +0800 From: Yunsheng Lin To: , , CC: , , Yunsheng Lin , Alexander Duyck , Andrew Morton , Eric Dumazet , David Howells , Marc Dionne , , Subject: [PATCH net-next v2 06/15] mm: page_frag: change page_frag_alloc_* API to accept align param Date: Mon, 15 Apr 2024 21:19:31 +0800 Message-ID: <20240415131941.51153-7-linyunsheng@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com> References: <20240415131941.51153-1-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems706-chm.china.huawei.com (10.3.19.183) To dggpemm500005.china.huawei.com (7.185.36.74) X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: DC119100015 X-Stat-Signature: 8z7rzdcubbd9kpy3hzbr4x6smbmry33c X-HE-Tag: 1713187327-795296 X-HE-Meta: 
X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: When the page_frag_alloc_* API doesn't need data alignment, the ALIGN() operation is unnecessary, so change the page_frag_alloc_* API to accept an align param instead of an align_mask param, and do the ALIGN()'ing in the inline helper when needed.
Signed-off-by: Yunsheng Lin --- include/linux/page_frag_cache.h | 20 ++++++++++++-------- include/linux/skbuff.h | 12 ++++++------ mm/page_frag_cache.c | 9 ++++----- net/core/skbuff.c | 12 +++++------- net/rxrpc/txbuf.c | 5 +++-- 5 files changed, 30 insertions(+), 28 deletions(-) diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 04810d8d6a7d..cc0ede0912f3 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -25,21 +25,25 @@ struct page_frag_cache { void page_frag_cache_drain(struct page_frag_cache *nc); void __page_frag_cache_drain(struct page *page, unsigned int count); -void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, - gfp_t gfp_mask, unsigned int align_mask); +void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz, + gfp_t gfp_mask); + +static inline void *__page_frag_alloc_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align) +{ + nc->offset = ALIGN(nc->offset, align); + + return page_frag_alloc(nc, fragsz, gfp_mask); +} static inline void *page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align) { WARN_ON_ONCE(!is_power_of_2(align)); - return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align); -} -static inline void *page_frag_alloc(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask) -{ - return page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); + return __page_frag_alloc_align(nc, fragsz, gfp_mask, align); } void page_frag_free(void *addr); diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index f2dc1f735c79..43c704589deb 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -3268,7 +3268,7 @@ static inline void skb_queue_purge(struct sk_buff_head *list) unsigned int skb_rbtree_purge(struct rb_root *root); void skb_errqueue_purge(struct sk_buff_head *list); -void *__netdev_alloc_frag_align(unsigned int fragsz, 
unsigned int align_mask); +void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align); /** * netdev_alloc_frag - allocate a page fragment @@ -3279,14 +3279,14 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask); */ static inline void *netdev_alloc_frag(unsigned int fragsz) { - return __netdev_alloc_frag_align(fragsz, ~0u); + return __netdev_alloc_frag_align(fragsz, 1u); } static inline void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align) { WARN_ON_ONCE(!is_power_of_2(align)); - return __netdev_alloc_frag_align(fragsz, -align); + return __netdev_alloc_frag_align(fragsz, align); } struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int length, @@ -3346,18 +3346,18 @@ static inline void skb_free_frag(void *addr) page_frag_free(addr); } -void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask); +void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align); static inline void *napi_alloc_frag(unsigned int fragsz) { - return __napi_alloc_frag_align(fragsz, ~0u); + return __napi_alloc_frag_align(fragsz, 1u); } static inline void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align) { WARN_ON_ONCE(!is_power_of_2(align)); - return __napi_alloc_frag_align(fragsz, -align); + return __napi_alloc_frag_align(fragsz, align); } struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int length); diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index dc864ee09536..b4408187e1ab 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -61,9 +61,8 @@ void __page_frag_cache_drain(struct page *page, unsigned int count) } EXPORT_SYMBOL(__page_frag_cache_drain); -void *__page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align_mask) +void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz, + gfp_t gfp_mask) { unsigned int size, offset; struct page *page; @@ -92,7 +91,7 @@ void 
*__page_frag_alloc_align(struct page_frag_cache *nc, size = PAGE_SIZE; #endif - offset = ALIGN(nc->offset, -align_mask); + offset = nc->offset; if (unlikely(offset + fragsz > size)) { page = virt_to_page(nc->va); @@ -129,7 +128,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, return nc->va + offset; } -EXPORT_SYMBOL(__page_frag_alloc_align); +EXPORT_SYMBOL(page_frag_alloc); /* * Frees a page fragment allocated out of either a compound or order 0 page. diff --git a/net/core/skbuff.c b/net/core/skbuff.c index ea052fa710d8..676e2d857f02 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -306,18 +306,17 @@ void napi_get_frags_check(struct napi_struct *napi) local_bh_enable(); } -void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) +void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align) { struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache); fragsz = SKB_DATA_ALIGN(fragsz); - return __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, - align_mask); + return __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align); } EXPORT_SYMBOL(__napi_alloc_frag_align); -void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) +void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align) { void *data; @@ -325,15 +324,14 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) if (in_hardirq() || irqs_disabled()) { struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache); - data = __page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, - align_mask); + data = __page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align); } else { struct napi_alloc_cache *nc; local_bh_disable(); nc = this_cpu_ptr(&napi_alloc_cache); data = __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, - align_mask); + align); local_bh_enable(); } return data; diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index e0679658d9de..eb640875bf07 100644 --- a/net/rxrpc/txbuf.c +++ 
b/net/rxrpc/txbuf.c @@ -32,9 +32,10 @@ struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_ hoff = round_up(sizeof(*whdr), data_align) - sizeof(*whdr); total = hoff + sizeof(*whdr) + data_size; + data_align = max_t(size_t, data_align, L1_CACHE_BYTES); mutex_lock(&call->conn->tx_data_alloc_lock); - buf = __page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp, - ~(data_align - 1) & ~(L1_CACHE_BYTES - 1)); + buf = page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp, + data_align); mutex_unlock(&call->conn->tx_data_alloc_lock); if (!buf) { kfree(txb); From patchwork Mon Apr 15 13:19:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13630031 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 535F2C04FF8 for ; Mon, 15 Apr 2024 13:22:17 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D9B716B009E; Mon, 15 Apr 2024 09:22:16 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D4AD76B009F; Mon, 15 Apr 2024 09:22:16 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B9E856B00A0; Mon, 15 Apr 2024 09:22:16 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 959F06B009E for ; Mon, 15 Apr 2024 09:22:16 -0400 (EDT) Received: from smtpin23.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 6262C1C0A67 for ; Mon, 15 Apr 2024 13:22:16 +0000 (UTC) X-FDA: 82011829872.23.D0388EE Received: from szxga04-in.huawei.com (szxga04-in.huawei.com [45.249.212.190]) by imf05.hostedemail.com (Postfix) with ESMTP id 
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Jesse Brandeburg, Tony Nguyen, Sunil Goutham, Geetha sowjanya, Subbaraya Sundeep, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, "Michael S. Tsirkin", Jason Wang, Alexander Duyck, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker
Subject: [PATCH net-next v2 07/15] mm: page_frag: add '_va' suffix to page_frag API
Date: Mon, 15 Apr 2024 21:19:32 +0800
Message-ID: <20240415131941.51153-8-linyunsheng@huawei.com>
In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com>
References: <20240415131941.51153-1-linyunsheng@huawei.com>
Most of the page_frag API currently returns a 'virtual address' as output or expects a 'virtual address' as input. To differentiate the API handling of 'virtual address' from 'struct page', add a '_va' suffix to the corresponding APIs, mirroring the page_pool_alloc_va() API of the page_pool.
Signed-off-by: Yunsheng Lin
---
 drivers/net/ethernet/google/gve/gve_rx.c      |  4 ++--
 drivers/net/ethernet/intel/ice/ice_txrx.c     |  2 +-
 drivers/net/ethernet/intel/ice/ice_txrx.h     |  2 +-
 drivers/net/ethernet/intel/ice/ice_txrx_lib.c |  2 +-
 .../net/ethernet/intel/ixgbevf/ixgbevf_main.c |  4 ++--
 .../marvell/octeontx2/nic/otx2_common.c       |  2 +-
 drivers/net/ethernet/mediatek/mtk_wed_wo.c    |  4 ++--
 drivers/nvme/host/tcp.c                       |  8 +++----
 drivers/nvme/target/tcp.c                     | 22 ++++++++---------
 drivers/vhost/net.c                           |  6 ++---
 include/linux/page_frag_cache.h               | 24 ++++++++++---------
 include/linux/skbuff.h                        |  2 +-
 kernel/bpf/cpumap.c                           |  2 +-
 mm/page_frag_cache.c                          | 10 ++++----
 mm/page_frag_test.c                           |  6 ++---
 net/core/skbuff.c                             | 15 ++++++------
 net/core/xdp.c                                |  2 +-
 net/rxrpc/txbuf.c                             | 15 ++++++------
 net/sunrpc/svcsock.c                          |  6 ++---
 19 files changed, 71 insertions(+), 67 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index cd727e55ae0f..820874c1c570 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -687,7 +687,7 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx,
 	total_len = headroom + SKB_DATA_ALIGN(len) +
 		SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
-	frame = page_frag_alloc(&rx->page_cache, total_len, GFP_ATOMIC);
+	frame = page_frag_alloc_va(&rx->page_cache, total_len, GFP_ATOMIC);
 	if (!frame) {
 		u64_stats_update_begin(&rx->statss);
 		rx->xdp_alloc_fails++;
@@ -700,7 +700,7 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx,
 
 	err = xdp_do_redirect(dev, &new, xdp_prog);
 	if (err)
-		page_frag_free(frame);
+		page_frag_free_va(frame);
 
 	return err;
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 8bb743f78fcb..399b317c509d 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -126,7 +126,7 @@ ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf)
 		dev_kfree_skb_any(tx_buf->skb);
 		break;
 	case ICE_TX_BUF_XDP_TX:
-		page_frag_free(tx_buf->raw_buf);
+		page_frag_free_va(tx_buf->raw_buf);
 		break;
 	case ICE_TX_BUF_XDP_XMIT:
 		xdp_return_frame(tx_buf->xdpf);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index feba314a3fe4..6379f57d8228 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -148,7 +148,7 @@ static inline int ice_skb_pad(void)
  * @ICE_TX_BUF_DUMMY: dummy Flow Director packet, unmap and kfree()
  * @ICE_TX_BUF_FRAG: mapped skb OR &xdp_buff frag, only unmap DMA
  * @ICE_TX_BUF_SKB: &sk_buff, unmap and consume_skb(), update stats
- * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free(), stats
+ * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free_va(), stats
  * @ICE_TX_BUF_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame(), stats
  * @ICE_TX_BUF_XSK_TX: &xdp_buff on XSk queue, xsk_buff_free(), stats
  */
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index df072ce767b1..c34cc02ad578 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -288,7 +288,7 @@ ice_clean_xdp_tx_buf(struct device *dev, struct ice_tx_buf *tx_buf,
 
 	switch (tx_buf->type) {
 	case ICE_TX_BUF_XDP_TX:
-		page_frag_free(tx_buf->raw_buf);
+		page_frag_free_va(tx_buf->raw_buf);
 		break;
 	case ICE_TX_BUF_XDP_XMIT:
 		xdp_return_frame_bulk(tx_buf->xdpf, bq);
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index 3161a13079fe..c35b8f675b48 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -303,7 +303,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector,
 
 		/* free the skb */
 		if (ring_is_xdp(tx_ring))
-			page_frag_free(tx_buffer->data);
+			page_frag_free_va(tx_buffer->data);
 		else
 			napi_consume_skb(tx_buffer->skb, napi_budget);
 
@@ -2413,7 +2413,7 @@ static void ixgbevf_clean_tx_ring(struct ixgbevf_ring *tx_ring)
 
 		/* Free all the Tx ring sk_buffs */
 		if (ring_is_xdp(tx_ring))
-			page_frag_free(tx_buffer->data);
+			page_frag_free_va(tx_buffer->data);
 		else
 			dev_kfree_skb_any(tx_buffer->skb);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index a85ac039d779..8eb5820b8a70 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -553,7 +553,7 @@ static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
 	*dma = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize,
 				    DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
 	if (unlikely(dma_mapping_error(pfvf->dev, *dma))) {
-		page_frag_free(buf);
+		page_frag_free_va(buf);
 		return -ENOMEM;
 	}
 
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
index 7063c78bd35f..c4228719f8a4 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
@@ -142,8 +142,8 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
 		dma_addr_t addr;
 		void *buf;
 
-		buf = page_frag_alloc(&q->cache, q->buf_size,
-				      GFP_ATOMIC | GFP_DMA32);
+		buf = page_frag_alloc_va(&q->cache, q->buf_size,
+					 GFP_ATOMIC | GFP_DMA32);
 		if (!buf)
 			break;
 
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index fdbcdcedcee9..79eddd74bfbb 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -500,7 +500,7 @@ static void nvme_tcp_exit_request(struct blk_mq_tag_set *set,
 {
 	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
 
-	page_frag_free(req->pdu);
+	page_frag_free_va(req->pdu);
 }
 
 static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
@@ -514,7 +514,7 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
 	struct nvme_tcp_queue *queue = &ctrl->queues[queue_idx];
 	u8 hdgst = nvme_tcp_hdgst_len(queue);
 
-	req->pdu = page_frag_alloc(&queue->pf_cache,
+	req->pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(struct nvme_tcp_cmd_pdu) + hdgst,
 			GFP_KERNEL | __GFP_ZERO);
 	if (!req->pdu)
@@ -1331,7 +1331,7 @@ static void nvme_tcp_free_async_req(struct nvme_tcp_ctrl *ctrl)
 {
 	struct nvme_tcp_request *async = &ctrl->async_req;
 
-	page_frag_free(async->pdu);
+	page_frag_free_va(async->pdu);
 }
 
 static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
@@ -1340,7 +1340,7 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
 	struct nvme_tcp_request *async = &ctrl->async_req;
 	u8 hdgst = nvme_tcp_hdgst_len(queue);
 
-	async->pdu = page_frag_alloc(&queue->pf_cache,
+	async->pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(struct nvme_tcp_cmd_pdu) + hdgst,
 			GFP_KERNEL | __GFP_ZERO);
 	if (!async->pdu)
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index a5422e2c979a..ea356ce22672 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -1462,24 +1462,24 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue,
 	c->queue = queue;
 	c->req.port = queue->port->nport;
 
-	c->cmd_pdu = page_frag_alloc(&queue->pf_cache,
+	c->cmd_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->cmd_pdu)
 		return -ENOMEM;
 	c->req.cmd = &c->cmd_pdu->cmd;
 
-	c->rsp_pdu = page_frag_alloc(&queue->pf_cache,
+	c->rsp_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->rsp_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->rsp_pdu)
 		goto out_free_cmd;
 	c->req.cqe = &c->rsp_pdu->cqe;
 
-	c->data_pdu = page_frag_alloc(&queue->pf_cache,
+	c->data_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->data_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->data_pdu)
 		goto out_free_rsp;
 
-	c->r2t_pdu = page_frag_alloc(&queue->pf_cache,
+	c->r2t_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->r2t_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->r2t_pdu)
 		goto out_free_data;
@@ -1494,20 +1494,20 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue,
 	return 0;
 out_free_data:
-	page_frag_free(c->data_pdu);
+	page_frag_free_va(c->data_pdu);
 out_free_rsp:
-	page_frag_free(c->rsp_pdu);
+	page_frag_free_va(c->rsp_pdu);
 out_free_cmd:
-	page_frag_free(c->cmd_pdu);
+	page_frag_free_va(c->cmd_pdu);
 	return -ENOMEM;
 }
 
 static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c)
 {
-	page_frag_free(c->r2t_pdu);
-	page_frag_free(c->data_pdu);
-	page_frag_free(c->rsp_pdu);
-	page_frag_free(c->cmd_pdu);
+	page_frag_free_va(c->r2t_pdu);
+	page_frag_free_va(c->data_pdu);
+	page_frag_free_va(c->rsp_pdu);
+	page_frag_free_va(c->cmd_pdu);
 }
 
 static int nvmet_tcp_alloc_cmds(struct nvmet_tcp_queue *queue)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index c64ded183f8d..96d5ca299552 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -682,8 +682,8 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
 		return -ENOSPC;
 
 	buflen += SKB_DATA_ALIGN(len + pad);
-	buf = page_frag_alloc_align(&net->pf_cache, buflen, GFP_KERNEL,
-				    SMP_CACHE_BYTES);
+	buf = page_frag_alloc_va_align(&net->pf_cache, buflen, GFP_KERNEL,
+				       SMP_CACHE_BYTES);
 	if (unlikely(!buf))
 		return -ENOMEM;
 
@@ -730,7 +730,7 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
 	return 0;
 
 err:
-	page_frag_free(buf);
+	page_frag_free_va(buf);
 	return ret;
 }
 
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index cc0ede0912f3..9d5d86b2d3ab 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -25,27 +25,29 @@ struct page_frag_cache {
 
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz,
-		      gfp_t gfp_mask);
+void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
+			 gfp_t gfp_mask);
 
-static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
-					    unsigned int fragsz, gfp_t gfp_mask,
-					    unsigned int align)
+static inline void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
+					       unsigned int fragsz,
+					       gfp_t gfp_mask,
+					       unsigned int align)
 {
 	nc->offset = ALIGN(nc->offset, align);
 
-	return page_frag_alloc(nc, fragsz, gfp_mask);
+	return page_frag_alloc_va(nc, fragsz, gfp_mask);
 }
 
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
-					  unsigned int fragsz, gfp_t gfp_mask,
-					  unsigned int align)
+static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
+					     unsigned int fragsz,
+					     gfp_t gfp_mask,
+					     unsigned int align)
 {
 	WARN_ON_ONCE(!is_power_of_2(align));
 
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, align);
+	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align);
 }
 
-void page_frag_free(void *addr);
+void page_frag_free_va(void *addr);
 
 #endif
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 43c704589deb..cc80600dcedf 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3343,7 +3343,7 @@ static inline struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev,
 
 static inline void skb_free_frag(void *addr)
 {
-	page_frag_free(addr);
+	page_frag_free_va(addr);
 }
 
 void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align);
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index a8e34416e960..3a6a237e7dd3 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -322,7 +322,7 @@ static int cpu_map_kthread_run(void *data)
 
 			/* Bring struct page memory area to curr CPU. Read by
 			 * build_skb_around via page_is_pfmemalloc(), and when
-			 * freed written by page_frag_free call.
+			 * freed written by page_frag_free_va call.
 			 */
 			prefetchw(page);
 		}
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index b4408187e1ab..50511d8522d0 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -61,8 +61,8 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
-void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz,
-		      gfp_t gfp_mask)
+void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
+			 gfp_t gfp_mask)
 {
 	unsigned int size, offset;
 	struct page *page;
@@ -128,16 +128,16 @@ void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz,
 
 	return nc->va + offset;
 }
-EXPORT_SYMBOL(page_frag_alloc);
+EXPORT_SYMBOL(page_frag_alloc_va);
 
 /*
  * Frees a page fragment allocated out of either a compound or order 0 page.
 */
-void page_frag_free(void *addr)
+void page_frag_free_va(void *addr)
 {
 	struct page *page = virt_to_head_page(addr);
 
 	if (unlikely(put_page_testzero(page)))
 		free_unref_page(page, compound_order(page));
 }
-EXPORT_SYMBOL(page_frag_free);
+EXPORT_SYMBOL(page_frag_free_va);
diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c
index ebfd1c3dae8f..cab05b8a2e77 100644
--- a/mm/page_frag_test.c
+++ b/mm/page_frag_test.c
@@ -260,7 +260,7 @@ static int page_frag_pop_thread(void *arg)
 
 		if (obj) {
 			nr--;
-			page_frag_free(obj);
+			page_frag_free_va(obj);
 		} else {
 			cond_resched();
 		}
@@ -289,13 +289,13 @@ static int page_frag_push_thread(void *arg)
 		int ret;
 
 		size = clamp(size, 4U, 4096U);
-		va = page_frag_alloc(&test_frag, size, GFP_KERNEL);
+		va = page_frag_alloc_va(&test_frag, size, GFP_KERNEL);
 		if (!va)
 			continue;
 
 		ret = objpool_push(va, pool);
 		if (ret) {
-			page_frag_free(va);
+			page_frag_free_va(va);
 			cond_resched();
 		} else {
 			nr--;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 676e2d857f02..139a193853cc 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -312,7 +312,7 @@ void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align)
 
 	fragsz = SKB_DATA_ALIGN(fragsz);
 
-	return __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
+	return __page_frag_alloc_va_align(&nc->page, fragsz, GFP_ATOMIC, align);
 }
 EXPORT_SYMBOL(__napi_alloc_frag_align);
 
@@ -324,14 +324,15 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align)
 	if (in_hardirq() || irqs_disabled()) {
 		struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache);
 
-		data = __page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align);
+		data = __page_frag_alloc_va_align(nc, fragsz, GFP_ATOMIC,
+						  align);
 	} else {
 		struct napi_alloc_cache *nc;
 
 		local_bh_disable();
 		nc = this_cpu_ptr(&napi_alloc_cache);
-		data = __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC,
-					       align);
+		data = __page_frag_alloc_va_align(&nc->page, fragsz, GFP_ATOMIC,
+						  align);
 		local_bh_enable();
 	}
 	return data;
@@ -741,12 +742,12 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
 
 	if (in_hardirq() || irqs_disabled()) {
 		nc = this_cpu_ptr(&netdev_alloc_cache);
-		data = page_frag_alloc(nc, len, gfp_mask);
+		data = page_frag_alloc_va(nc, len, gfp_mask);
 		pfmemalloc = nc->pfmemalloc;
 	} else {
 		local_bh_disable();
 		nc = this_cpu_ptr(&napi_alloc_cache.page);
-		data = page_frag_alloc(nc, len, gfp_mask);
+		data = page_frag_alloc_va(nc, len, gfp_mask);
 		pfmemalloc = nc->pfmemalloc;
 		local_bh_enable();
 	}
@@ -834,7 +835,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len)
 	} else {
 		len = SKB_HEAD_ALIGN(len);
 
-		data = page_frag_alloc(&nc->page, len, gfp_mask);
+		data = page_frag_alloc_va(&nc->page, len, gfp_mask);
 		pfmemalloc = nc->page.pfmemalloc;
 	}
 
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 41693154e426..245a2d011aeb 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -391,7 +391,7 @@ void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
 		page_pool_put_full_page(page->pp, page, napi_direct);
 		break;
 	case MEM_TYPE_PAGE_SHARED:
-		page_frag_free(data);
+		page_frag_free_va(data);
 		break;
 	case MEM_TYPE_PAGE_ORDER0:
 		page = virt_to_page(data); /* Assumes order0 page*/
diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c
index eb640875bf07..f2fa98360789 100644
--- a/net/rxrpc/txbuf.c
+++ b/net/rxrpc/txbuf.c
@@ -34,8 +34,8 @@ struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_
 	data_align = max_t(size_t, data_align, L1_CACHE_BYTES);
 	mutex_lock(&call->conn->tx_data_alloc_lock);
-	buf = page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp,
-				    data_align);
+	buf = page_frag_alloc_va_align(&call->conn->tx_data_alloc, total, gfp,
+				       data_align);
 	mutex_unlock(&call->conn->tx_data_alloc_lock);
 	if (!buf) {
 		kfree(txb);
@@ -97,17 +97,18 @@ struct rxrpc_txbuf *rxrpc_alloc_ack_txbuf(struct rxrpc_call *call, size_t sack_s
 	if (!txb)
 		return NULL;
 
-	buf = page_frag_alloc(&call->local->tx_alloc,
-			      sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp);
+	buf = page_frag_alloc_va(&call->local->tx_alloc,
+				 sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp);
 	if (!buf) {
 		kfree(txb);
 		return NULL;
 	}
 
 	if (sack_size) {
-		buf2 = page_frag_alloc(&call->local->tx_alloc, sack_size, gfp);
+		buf2 = page_frag_alloc_va(&call->local->tx_alloc, sack_size,
+					  gfp);
 		if (!buf2) {
-			page_frag_free(buf);
+			page_frag_free_va(buf);
 			kfree(txb);
 			return NULL;
 		}
@@ -181,7 +182,7 @@ static void rxrpc_free_txbuf(struct rxrpc_txbuf *txb)
 			  rxrpc_txbuf_free);
 	for (i = 0; i < txb->nr_kvec; i++)
 		if (txb->kvec[i].iov_base)
-			page_frag_free(txb->kvec[i].iov_base);
+			page_frag_free_va(txb->kvec[i].iov_base);
 	kfree(txb);
 	atomic_dec(&rxrpc_nr_txbuf);
 }
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 6b3f01beb294..42d20412c1c3 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1222,8 +1222,8 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp,
 	/* The stream record marker is copied into a temporary page
 	 * fragment buffer so that it can be included in rq_bvec.
 	 */
-	buf = page_frag_alloc(&svsk->sk_frag_cache, sizeof(marker),
-			      GFP_KERNEL);
+	buf = page_frag_alloc_va(&svsk->sk_frag_cache, sizeof(marker),
+				 GFP_KERNEL);
 	if (!buf)
 		return -ENOMEM;
 	memcpy(buf, &marker, sizeof(marker));
@@ -1235,7 +1235,7 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp,
 	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, rqstp->rq_bvec,
 		      1 + count, sizeof(marker) + rqstp->rq_res.len);
 	ret = sock_sendmsg(svsk->sk_sock, &msg);
-	page_frag_free(buf);
+	page_frag_free_va(buf);
 	if (ret < 0)
 		return ret;
 	*sentp += ret;

From patchwork Mon Apr 15 13:19:33 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13630032
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin, Alexander Duyck, Andrew Morton, Eric Dumazet
Subject: [PATCH net-next v2 08/15] mm: page_frag: add two inline helper for page_frag API
Date: Mon, 15 Apr 2024 21:19:33 +0800
Message-ID: <20240415131941.51153-9-linyunsheng@huawei.com>
In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com>
References: <20240415131941.51153-1-linyunsheng@huawei.com>
Add two inline helpers for the page_frag API so that callers avoid directly accessing the fields of 'struct page_frag_cache'.

Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h | 10 ++++++++++
 mm/page_frag_test.c             |  2 +-
 net/core/skbuff.c               |  4 ++--
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 9d5d86b2d3ab..fe5faa80b6c3 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -23,6 +23,16 @@ struct page_frag_cache {
 	bool pfmemalloc;
 };
 
+static inline void page_frag_cache_init(struct page_frag_cache *nc)
+{
+	nc->va = NULL;
+}
+
+static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
+{
+	return !!nc->pfmemalloc;
+}
+
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
 void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c
index cab05b8a2e77..20756b28df4a 100644
--- a/mm/page_frag_test.c
+++ b/mm/page_frag_test.c
@@ -318,7 +318,7 @@ static int __init page_frag_test_init(void)
 	u64 duration;
 	int ret;
 
-	test_frag.va = NULL;
+	page_frag_cache_init(&test_frag);
 	atomic_set(&nthreads, 2);
 	init_completion(&wait);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 139a193853cc..cdbfdf651001 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -743,12 +743,12 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
 	if (in_hardirq() || irqs_disabled()) {
 		nc = this_cpu_ptr(&netdev_alloc_cache);
 		data = page_frag_alloc_va(nc, len, gfp_mask);
-		pfmemalloc = nc->pfmemalloc;
+		pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
 	} else {
 		local_bh_disable();
 		nc = this_cpu_ptr(&napi_alloc_cache.page);
 		data = page_frag_alloc_va(nc, len, gfp_mask);
-		pfmemalloc = nc->pfmemalloc;
+		pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
 		local_bh_enable();
 	}

From patchwork Mon Apr 15 13:19:34 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13630033
t=1713187338; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=XfQNv+JcKFCyCnLthBkoL0Yadocs6h7dMObzV8mIa9A=; b=6q/HGpzdjnCE6fezLA4xm49a6y4l/QKTX82YwQsEuGzhXyG+HY9AKOabuj6DmCyfw/iEk1 UrtMVOpnlzQ0n/7c5Jot9CyPmmUDKJdpDiAecGX/5b7Z2gPDw8sjj4Cpl7QctOPThro9SU WmY3ep1ATIhy0kI3B1pmcWdisFjuyNI= ARC-Authentication-Results: i=1; imf27.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf27.hostedemail.com: domain of linyunsheng@huawei.com designates 45.249.212.32 as permitted sender) smtp.mailfrom=linyunsheng@huawei.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1713187338; a=rsa-sha256; cv=none; b=B1S1DzUxb/h8lhwYjoLdQSZ+ZZMhz5dhQMakoAAc9Wuzj/ITnGgNDTe1g3EJTeMBSNWZsm UY1C04NBmJy/CW165WcsmESWguPfPOE54GnLy3BrCakW9cb0VweBhAqdV8Q/etQQmNR7NB 2vskS+/f7B7YcGDXzSTh5neG29cIMCo= Received: from mail.maildlp.com (unknown [172.19.162.112]) by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4VJ7983ZXkz1wryw; Mon, 15 Apr 2024 21:21:16 +0800 (CST) Received: from dggpemm500005.china.huawei.com (unknown [7.185.36.74]) by mail.maildlp.com (Postfix) with ESMTPS id 333DD140410; Mon, 15 Apr 2024 21:22:14 +0800 (CST) Received: from localhost.localdomain (10.69.192.56) by dggpemm500005.china.huawei.com (7.185.36.74) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.35; Mon, 15 Apr 2024 21:22:13 +0800 From: Yunsheng Lin To: , , CC: , , Yunsheng Lin , Alexander Duyck , Andrew Morton , Subject: [PATCH net-next v2 09/15] mm: page_frag: reuse MSB of 'size' field for pfmemalloc Date: Mon, 15 Apr 2024 21:19:34 +0800 Message-ID: <20240415131941.51153-10-linyunsheng@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com> References: 
<20240415131941.51153-1-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems706-chm.china.huawei.com (10.3.19.183) To dggpemm500005.china.huawei.com (7.185.36.74) X-Rspamd-Queue-Id: 6E84740015 X-Rspam-User: X-Rspamd-Server: rspam05 X-Stat-Signature: uoubnyxk9bucjtnpjfosdqphe5nawq54 X-HE-Tag: 1713187337-689124 X-HE-Meta: U2FsdGVkX1/Ge1CS8lcbiqX8BNVKT/vVfYJevv1Qpaa3XhsgoYDctxtHuPrM3S4WXxJ9SgBTd7riy3F5CYZP83xtba6Im5rbQ5QRfEO2dGb7b9Nq88fFULIeq9mr1/GKmtQUquHhPSAdEJTwUu+s6kSnPrxTvzdZL2DycTbUG60pgUrQtsay64Fk6tdowIWrP02+K4hgrwRW4BzMPrNVMarhGtgvDWitus8zMMvh0KyinSlIRCykTQrIhuItCuQmgD13LYZPBHnBLL6MjkKH5cLe+4fbem/MeB9MBgYA/MY2B0hFJ+pSiPFF7ptp+9VUouUFdEsM51L1vKgtdIOxyKBBIZRm1oQsKD9iBsDdaav0v4yMMZ+tsccpne/O8NU2vp/QY7mNzdE+pW9lOIAS26G9W6KtXjlqOCBTyX/XD2EVaYuhJHwKid9qCTfmlpEz5+5q65tg346GdEfjhxphEu3w4uzCiYEM97YdLNkNMiPMi0ob0e6VJT5G7SVu/qM3RdfPV46ZkNvlDSstB6B3xgFEcatNw8eZBu6yfd6bcw8S5ZmEz9m/PB6qTFNlBhC7T9zGkCSK6rHMRjqBwdaqi+tthwDX157LY2k8dvBb9SdyiKxkSj+a87Y1iMQ2izihg6rWS/1UjMPTv6GIPiPXHNU19ONKLsH7zLGBznFUPEBLNykyom64yYsEsjNNLZbtTn4cXJvJ9FCZPPXg8GEMwAb8qr5L8vi3v4F7nma/ADSVaCxJ8LeMzP1WELy6BwmwgeJuviMEKS0e5X0IBgAdlwtRQOPaw8FrvPzeC1lr6jD17rCOtUDYPYC/DwtW5hphtJdUGhn6LO50Dcq85cI5mraHSyVM5y5te/9gERK7tSXEUF59YlbJop1ulFJQME61Wi1ExLZwwhiltN40eyNIKgw0T/nO28QAmzZ50zvPTYzxQGKrJvSe/k503STMLmmD9djLxQmBp4OqosklnsN wq1fVvyP wHuRYBSoUIRsJL7FESj55EvBrJhvoz+F8o8VGfPdBwIGiVWK4EH1VN/1NTwrEX6xTd6IXdCvXJTvzRpgISJolA9Hpln11SJSSOaqN3k7BtwJCXyuuxTsMEzW1fndyudPRpkjfjDhO/DXKaRzHkVdECKaAt1ADeShZnjrpUlG9SgKcBUoJRUeAUdg4l47hJeGCs6PW6fDgY1esuliQg8nxQRdevDLopbMe+wRAAwqkRer7aCSY5b1JrQk9GCnhOlvb+W8a X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: The '(PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)' case is for the system with page size less than 32KB, which is 0x8000 bytes requiring 16 bits space, change 'size' to 'size_mask' to 
avoid using the MSB, and change 'pfmemalloc' field to reuse the that MSB, so that we remove the orginal space needed by 'pfmemalloc'. For another case, the MSB of 'offset' is reused for 'pfmemalloc'. Signed-off-by: Yunsheng Lin --- include/linux/page_frag_cache.h | 13 ++++++++----- mm/page_frag_cache.c | 5 +++-- 2 files changed, 11 insertions(+), 7 deletions(-) diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index fe5faa80b6c3..40a7d6da9ef0 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -12,15 +12,16 @@ struct page_frag_cache { void *va; #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) __u16 offset; - __u16 size; + __u16 size_mask:15; + __u16 pfmemalloc:1; #else - __u32 offset; + __u32 offset:31; + __u32 pfmemalloc:1; #endif /* we maintain a pagecount bias, so that we dont dirty cache line * containing page->_refcount every time we allocate a fragment. */ unsigned int pagecnt_bias; - bool pfmemalloc; }; static inline void page_frag_cache_init(struct page_frag_cache *nc) @@ -43,7 +44,9 @@ static inline void *__page_frag_alloc_va_align(struct page_frag_cache *nc, gfp_t gfp_mask, unsigned int align) { - nc->offset = ALIGN(nc->offset, align); + unsigned int offset = nc->offset; + + nc->offset = ALIGN(offset, align); return page_frag_alloc_va(nc, fragsz, gfp_mask); } @@ -53,7 +56,7 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc, gfp_t gfp_mask, unsigned int align) { - WARN_ON_ONCE(!is_power_of_2(align)); + WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE); return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align); } diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 50511d8522d0..8d93029116e1 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -32,7 +32,8 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, 
PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+	nc->size_mask = page ? PAGE_FRAG_CACHE_MAX_SIZE - 1 : PAGE_SIZE - 1;
+	VM_BUG_ON(page && nc->size_mask != PAGE_FRAG_CACHE_MAX_SIZE - 1);
 #endif
 	if (unlikely(!page))
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
@@ -86,7 +87,7 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	/* if size can vary use size else just use PAGE_SIZE */
-	size = nc->size;
+	size = nc->size_mask + 1;
 #else
 	size = PAGE_SIZE;
 #endif

From patchwork Mon Apr 15 13:19:35 2024
From: Yunsheng Lin
Subject: [PATCH net-next v2 10/15] mm: page_frag: reuse existing bit field of 'va' for pagecnt_bias
Date: Mon, 15 Apr 2024 21:19:35 +0800
Message-ID: <20240415131941.51153-11-linyunsheng@huawei.com>
In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com>
References: <20240415131941.51153-1-linyunsheng@huawei.com>
X-Patchwork-Id: 13630034

As 'va' is always aligned to the order of the page allocated, we can
reuse its LSB bits for the pagecount bias and remove the original space
needed by 'pagecnt_bias'. Also limit 'fragsz' to be at least the size of
'unsigned int' to match the limited pagecnt_bias.

Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h | 20 +++++++----
 mm/page_frag_cache.c            | 63 +++++++++++++++++++--------------
 2 files changed, 50 insertions(+), 33 deletions(-)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 40a7d6da9ef0..a97a1ac017d6 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -9,7 +9,18 @@
 #define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
 
 struct page_frag_cache {
-	void *va;
+	union {
+		void *va;
+		/* we maintain a pagecount bias, so that we dont dirty cache
+		 * line containing page->_refcount every time we allocate a
+		 * fragment. As 'va' is always aligned with the order of the
+		 * page allocated, we can reuse the LSB bits for the pagecount
+		 * bias, and its bit width happens to be indicated by the
+		 * 'size_mask' below.
+		 */
+		unsigned long pagecnt_bias;
+
+	};
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	__u16 offset;
 	__u16 size_mask:15;
@@ -18,10 +29,6 @@ struct page_frag_cache {
 	__u32 offset:31;
 	__u32 pfmemalloc:1;
 #endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
 };
 
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
@@ -56,7 +63,8 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 					     gfp_t gfp_mask,
 					     unsigned int align)
 {
-	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE);
+	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE ||
+		     fragsz < sizeof(unsigned int));
 
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align);
 }
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 8d93029116e1..5f7f96c88163 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -18,8 +18,8 @@
 #include
 #include "internal.h"
 
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
-					     gfp_t gfp_mask)
+static bool __page_frag_cache_refill(struct page_frag_cache *nc,
+				     gfp_t gfp_mask)
 {
 	struct page *page = NULL;
 	gfp_t gfp = gfp_mask;
@@ -38,9 +38,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	if (unlikely(!page))
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
 
-	nc->va = page ? page_address(page) : NULL;
+	if (unlikely(!page)) {
+		nc->va = NULL;
+		return false;
+	}
+
+	nc->va = page_address(page);
 
-	return page;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	VM_BUG_ON(nc->pagecnt_bias & nc->size_mask);
+	page_ref_add(page, nc->size_mask - 1);
+	nc->pagecnt_bias |= nc->size_mask;
+#else
+	VM_BUG_ON(nc->pagecnt_bias & (PAGE_SIZE - 1));
+	page_ref_add(page, PAGE_SIZE - 2);
+	nc->pagecnt_bias |= (PAGE_SIZE - 1);
+#endif
+
+	nc->pfmemalloc = page_is_pfmemalloc(page);
+	nc->offset = 0;
+	return true;
 }
 
 void page_frag_cache_drain(struct page_frag_cache *nc)
@@ -65,38 +82,31 @@ EXPORT_SYMBOL(__page_frag_cache_drain);
 void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 			 gfp_t gfp_mask)
 {
-	unsigned int size, offset;
+	unsigned long size_mask;
+	unsigned int offset;
 	struct page *page;
+	void *va;
 
 	if (unlikely(!nc->va)) {
refill:
-		page = __page_frag_cache_refill(nc, gfp_mask);
-		if (!page)
+		if (!__page_frag_cache_refill(nc, gfp_mask))
 			return NULL;
-
-		/* Even if we own the page, we do not use atomic_set().
-		 * This would break get_page_unless_zero() users.
-		 */
-		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = 0;
 	}
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	/* if size can vary use size else just use PAGE_SIZE */
-	size = nc->size_mask + 1;
+	size_mask = nc->size_mask;
 #else
-	size = PAGE_SIZE;
+	size_mask = PAGE_SIZE - 1;
 #endif
 
+	va = (void *)((unsigned long)nc->va & ~size_mask);
 	offset = nc->offset;
-	if (unlikely(offset + fragsz > size)) {
-		page = virt_to_page(nc->va);
-		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
+	if (unlikely(offset + fragsz > (size_mask + 1))) {
+		page = virt_to_page(va);
+
+		if (!page_ref_sub_and_test(page, nc->pagecnt_bias & size_mask))
 			goto refill;
 
 		if (unlikely(nc->pfmemalloc)) {
@@ -105,12 +115,11 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 		}
 
 		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		set_page_count(page, size_mask);
+		nc->pagecnt_bias |= size_mask;
 
-		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		offset = 0;
-		if (unlikely(fragsz > size)) {
+		if (unlikely(fragsz > (size_mask + 1))) {
 			/*
 			 * The caller is trying to allocate a fragment
 			 * with fragsz > PAGE_SIZE but the cache isn't big
@@ -127,7 +136,7 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 	nc->pagecnt_bias--;
 	nc->offset = offset + fragsz;
 
-	return nc->va + offset;
+	return va + offset;
 }
 EXPORT_SYMBOL(page_frag_alloc_va);

From patchwork Mon Apr 15 13:19:37 2024
From: Yunsheng Lin
Subject: [PATCH net-next v2 12/15] mm: page_frag: introduce prepare/commit API for page_frag
Date: Mon, 15 Apr 2024 21:19:37 +0800
Message-ID: <20240415131941.51153-13-linyunsheng@huawei.com>
In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com>
References: <20240415131941.51153-1-linyunsheng@huawei.com>
X-Patchwork-Id: 13630035

There are many use cases that need a minimum amount of memory in order to
make forward progress, but can do better if more memory is available.
Currently the skb_page_frag_refill() API is used to handle the above use
cases; as mentioned in [1], its implementation is similar to the one in
the mm subsystem.

To unify those two page_frag implementations, introduce a prepare API to
ensure a minimum amount of memory is available and to return how much
memory is actually available to the caller. The caller can then decide
how much memory to use by calling the commit API, or skip the commit API
if it decides not to use any memory.

Note: it seems hard to decide which header file to include for calling
virt_to_page() in an inline helper, so a macro is used instead of an
inline helper to avoid dealing with that.

1. https://lore.kernel.org/all/20240228093013.8263-1-linyunsheng@huawei.com/

Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h | 141 +++++++++++++++++++++++++++++++-
 mm/page_frag_cache.c            |  13 ++-
 2 files changed, 144 insertions(+), 10 deletions(-)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index a97a1ac017d6..28185969cd2c 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -43,8 +43,25 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
-			 gfp_t gfp_mask);
+void *page_frag_cache_refill(struct page_frag_cache *nc, unsigned int fragsz,
+			     gfp_t gfp_mask);
+
+static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
+				       unsigned int fragsz, gfp_t gfp_mask)
+{
+	unsigned int offset;
+	void *va;
+
+	va = page_frag_cache_refill(nc, fragsz, gfp_mask);
+	if (unlikely(!va))
+		return NULL;
+
+	offset = nc->offset;
+	nc->pagecnt_bias--;
+	nc->offset = offset + fragsz;
+
+	return va + offset;
+}
 
 static inline void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 					       unsigned int fragsz,
@@ -69,6 +86,126 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align);
 }
 
+static inline void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
+					       unsigned int *offset,
+					       unsigned int *size,
+					       gfp_t gfp_mask)
+{
+	void *va;
+
+	va = page_frag_cache_refill(nc, *size, gfp_mask);
+	if (unlikely(!va))
+		return NULL;
+
+	*offset = nc->offset;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	*size = nc->size_mask - *offset + 1;
+#else
+	*size = PAGE_SIZE - *offset;
+#endif
+
+	return va + *offset;
+}
+
+static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc,
+						     unsigned int *offset,
+						     unsigned int *size,
+						     unsigned int align,
+						     gfp_t gfp_mask)
+{
+	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE ||
+		     *size < sizeof(unsigned int));
+
+	*offset = nc->offset;
+	nc->offset = ALIGN(*offset, align);
+	return page_frag_alloc_va_prepare(nc, offset, size, gfp_mask);
+}
+
+static inline void *__page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
+						 unsigned int *offset,
+						 unsigned int *size,
+						 gfp_t gfp_mask)
+{
+	void *va;
+
+	va = page_frag_cache_refill(nc, *size, gfp_mask);
+	if (unlikely(!va))
+		return NULL;
+
+	*offset = nc->offset;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	*size = nc->size_mask - *offset + 1;
+#else
+	*size = PAGE_SIZE - *offset;
+#endif
+
+	return va;
+}
+
+#define page_frag_alloc_pg_prepare(nc, offset, size, gfp)		\
+({									\
+	struct page *__page = NULL;					\
+	void *__va;							\
+									\
+	__va = __page_frag_alloc_pg_prepare(nc, offset, size, gfp);	\
+	if (likely(__va))						\
+		__page = virt_to_page(__va);				\
+									\
+	__page;								\
+})
+
+static inline void *__page_frag_alloc_prepare(struct page_frag_cache *nc,
+					      unsigned int *offset,
+					      unsigned int *size,
+					      void **va, gfp_t gfp_mask)
+{
+	void *nc_va;
+
+	nc_va = page_frag_cache_refill(nc, *size, gfp_mask);
+	if (unlikely(!nc_va))
+		return NULL;
+
+	*offset = nc->offset;
+	*va = nc_va + *offset;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	*size = nc->size_mask - *offset + 1;
+#else
+	*size = PAGE_SIZE - *offset;
+#endif
+
+	return nc_va;
+}
+
+#define page_frag_alloc_prepare(nc, offset, size, va, gfp)		\
+({									\
+	struct page *__page = NULL;					\
+	void *__va;							\
+									\
+	__va = __page_frag_alloc_prepare(nc, offset, size, va, gfp);	\
+	if (likely(__va))						\
+		__page = virt_to_page(__va);				\
+									\
+	__page;								\
+})
+
+static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
+					  unsigned int offset,
+					  unsigned int size)
+{
+	nc->pagecnt_bias--;
+	nc->offset = offset + size;
+}
+
+static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
+						unsigned int offset,
+						unsigned int size)
+{
+	nc->offset = offset + size;
+}
+
 void page_frag_free_va(void *addr);
 
 #endif
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 5f7f96c88163..8774cb07e630 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -79,8 +79,8 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
-void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
-			 gfp_t gfp_mask)
+void *page_frag_cache_refill(struct page_frag_cache *nc, unsigned int fragsz,
+			     gfp_t gfp_mask)
 {
 	unsigned long size_mask;
 	unsigned int offset;
@@ -118,7 +118,7 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 	set_page_count(page, size_mask);
 	nc->pagecnt_bias |= size_mask;
 
-	offset = 0;
+	nc->offset = 0;
 	if (unlikely(fragsz > (size_mask + 1))) {
 		/*
 		 * The caller is trying to allocate a fragment
@@ -133,12 +133,9 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 		}
 	}
 
-	nc->pagecnt_bias--;
-	nc->offset = offset + fragsz;
-
-	return va + offset;
+	return va;
 }
-EXPORT_SYMBOL(page_frag_alloc_va);
+EXPORT_SYMBOL(page_frag_cache_refill);
 
 /*
  * Frees a page fragment allocated out of either a compound or order 0 page.
From patchwork Mon Apr 15 13:19:39 2024
From: Yunsheng Lin
Subject: [PATCH net-next v2 14/15] mm: page_frag: update documentation for page_frag
Date: Mon, 15 Apr 2024 21:19:39 +0800
Message-ID: <20240415131941.51153-15-linyunsheng@huawei.com>
In-Reply-To: <20240415131941.51153-1-linyunsheng@huawei.com>
References: <20240415131941.51153-1-linyunsheng@huawei.com>
X-Patchwork-Id: 13630036

Update the documentation about the design, implementation and API usage
of page_frag.
CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- Documentation/mm/page_frags.rst | 148 +++++++++++++++++++++++++++++++- include/linux/page_frag_cache.h | 133 ++++++++++++++++++++++++++++ mm/page_frag_cache.c | 4 + 3 files changed, 284 insertions(+), 1 deletion(-)
diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst index 503ca6cdb804..ac9dd9e8ee16 100644 --- a/Documentation/mm/page_frags.rst +++ b/Documentation/mm/page_frags.rst @@ -1,3 +1,5 @@ +.. SPDX-License-Identifier: GPL-2.0 + ============== Page fragments ============== @@ -40,4 +42,148 @@ page via a single call. The advantage to doing this is that it allows for cleaning up the multiple references that were added to a page in order to avoid calling get_page per allocation. -Alexander Duyck, Nov 29, 2016. +
+Architecture overview +===================== + +.. code-block:: none + + +----------------------+ + | page_frag API caller | + +----------------------+ + ^ + | + | + | + v + +------------------------------------------------+ + | request page fragment | + +------------------------------------------------+ + ^ ^ ^ + | | Cache not enough | + | Cache empty v | + | +-----------------+ | + | | drain old cache | | + | +-----------------+ | + | ^ | + | | | + v v | + +----------------------------------+ | + | refill cache with order 3 page | | + +----------------------------------+ | + ^ ^ | + | | | + | | Refill failed | + | | | Cache is enough + | | | + | v | + | +----------------------------------+ | + | | refill cache with order 0 page | | + | +----------------------------------+ | + | ^ | + | Refill succeed | | + | | Refill succeed | + | | | + v v v + +------------------------------------------------+ + | allocate fragment from cache | + +------------------------------------------------+ +
+API interface +============= +By design, the page_frag API does not allow concurrent calls on the allocation +side: the caller must ensure that no concurrent alloc calls are made to the same +page_frag_cache instance, either by using its own lock or by relying on a +lockless guarantee such as the NAPI softirq context. + +Depending on the use case, callers expecting to deal with the va, the page, or +both the va and the page may call the page_frag_alloc_va*, page_frag_alloc_pg*, +or page_frag_alloc* APIs accordingly. + +There are also use cases that need a minimum amount of memory in order to make +forward progress, but can do better if more memory is available. The +page_frag_alloc_prepare() and page_frag_alloc_commit() related APIs are +introduced for this: the caller requests the minimum amount of memory it needs, +and the prepare API returns the maximum size of the fragment available; the +caller then reports back to the page_frag core how much memory it actually used +by calling the commit API, or skips the commit call entirely if it decides not +to use any memory. +
+.. kernel-doc:: include/linux/page_frag_cache.h :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc + page_frag_alloc_va __page_frag_alloc_va_align + page_frag_alloc_va_align page_frag_alloc_va_prepare + page_frag_alloc_va_prepare_align page_frag_alloc_pg_prepare + page_frag_alloc_prepare page_frag_alloc_commit + page_frag_alloc_commit_noref page_frag_free_va + +.. kernel-doc:: mm/page_frag_cache.c :identifiers: page_frag_cache_drain +
+Coding examples +=============== + +Init & Drain API +---------------- + +.. code-block:: c + + page_frag_cache_init(pfrag); + ... + page_frag_cache_drain(pfrag); + + +Alloc & Free API +---------------- + +.. code-block:: c + + void *va; + + va = page_frag_alloc_va_align(pfrag, size, gfp, align); + if (!va) + goto do_error; + + err = do_something(va, size); + if (err) { + page_frag_free_va(va); + goto do_error; + } +
+Prepare & Commit API +-------------------- + ..
code-block:: c + + unsigned int offset, size; + bool merge = true; + struct page *page; + void *va; + + size = 32U; + page = page_frag_alloc_prepare(pfrag, &offset, &size, &va); + if (!page) + goto wait_for_space; + + copy = min_t(int, copy, size); + if (!skb_can_coalesce(skb, i, page, offset)) { + if (i >= max_skb_frags) + goto new_segment; + + merge = false; + } + + copy = mem_schedule(copy); + if (!copy) + goto wait_for_space; + + err = copy_from_iter_full_nocache(va, copy, iter); + if (err) + goto do_error; + + if (merge) { + skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy); + page_frag_alloc_commit_noref(pfrag, offset, copy); + } else { + skb_fill_page_desc(skb, i, page, offset, copy); + page_frag_alloc_commit(pfrag, offset, copy); + }
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 28185969cd2c..529e7c040dad 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -31,11 +31,28 @@ struct page_frag_cache { #endif };
+/** + * page_frag_cache_init() - Init the page_frag cache. + * @nc: page_frag cache to be initialized + * + * Inline helper to initialize the page_frag cache. + */ static inline void page_frag_cache_init(struct page_frag_cache *nc) { nc->va = NULL; }
+/** + * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc. + * @nc: page_frag cache to be checked + * + * Used to check if the current page in the page_frag cache is pfmemalloc'ed. + * It has the same calling context expectation as the alloc API. + * + * Return: + * Return true if the current page in the page_frag cache is pfmemalloc'ed, + * otherwise return false. + */ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) { return !!nc->pfmemalloc; @@ -46,6 +63,17 @@ void __page_frag_cache_drain(struct page *page, unsigned int count); void *page_frag_cache_refill(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask);
+/** + * page_frag_alloc_va() - Alloc a page fragment. 
+ * @nc: page_frag cache from which to allocate + * @fragsz: the requested fragment size + * @gfp_mask: the allocation gfp to use when the cache needs to be refilled + * + * Get a page fragment from the page_frag cache. + * + * Return: + * Return the va of the page fragment, otherwise return NULL. + */ static inline void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask) { @@ -63,6 +91,19 @@ static inline void *page_frag_alloc_va(struct page_frag_cache *nc, return va + offset; }
+/** + * __page_frag_alloc_va_align() - Alloc a page fragment with an alignment + * requirement. + * @nc: page_frag cache from which to allocate + * @fragsz: the requested fragment size + * @gfp_mask: the allocation gfp to use when the cache needs to be refilled + * @align: the requested alignment requirement + * + * Get a page fragment from the page_frag cache with an alignment requirement. + * + * Return: + * Return the va of the page fragment, otherwise return NULL. + */ static inline void *__page_frag_alloc_va_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, @@ -75,6 +116,19 @@ static inline void *__page_frag_alloc_va_align(struct page_frag_cache *nc, return page_frag_alloc_va(nc, fragsz, gfp_mask); }
+/** + * page_frag_alloc_va_align() - Alloc a page fragment with an alignment requirement. + * @nc: page_frag cache from which to allocate + * @fragsz: the requested fragment size + * @gfp_mask: the allocation gfp to use when the cache needs to be refilled + * @align: the requested alignment requirement + + * WARN_ON_ONCE() checks are performed on align and fragsz before getting a page + * fragment from the page_frag cache with an alignment requirement. + * + * Return: + * Return the va of the page fragment, otherwise return NULL. 
+ */ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, @@ -86,6 +140,19 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc, return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align); }
+/** + * page_frag_alloc_va_prepare() - Prepare allocating a page fragment. + * @nc: page_frag cache from which to prepare the fragment + * @offset: out as the offset of the page fragment + * @size: in as the requested size, out as the available size + * @gfp_mask: the allocation gfp to use when the cache needs to be refilled + + * Prepare a page fragment with a minimum size of 'size'; 'size' is also used to + * report the maximum size of the page fragment the caller can use. + * + * Return: + * Return the va of the page fragment, otherwise return NULL. + */ static inline void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, unsigned int *offset, unsigned int *size, @@ -108,6 +175,21 @@ static inline void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, return va + *offset; }
+/** + * page_frag_alloc_va_prepare_align() - Prepare allocating a page fragment with + * an alignment requirement. + * @nc: page_frag cache from which to prepare the fragment + * @offset: out as the offset of the page fragment + * @size: in as the requested size, out as the available size + * @align: the requested alignment requirement + * @gfp_mask: the allocation gfp to use when the cache needs to be refilled + + * Prepare an aligned page fragment with a minimum size of 'size'; 'size' is also + * used to report the maximum size of the page fragment the caller can use. + * + * Return: + * Return the va of the page fragment, otherwise return NULL. + */ static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc, unsigned int *offset, unsigned int *size, @@ -144,6 +226,19 @@ static inline void *__page_frag_alloc_pg_prepare(struct page_frag_cache *nc, return va; }
+/** + * page_frag_alloc_pg_prepare - Prepare allocating a page fragment. 
+ * @nc: page_frag cache from which to prepare the fragment + * @offset: out as the offset of the page fragment + * @size: in as the requested size, out as the available size + * @gfp: the allocation gfp to use when the cache needs to be refilled + + * Prepare a page fragment with a minimum size of 'size'; 'size' is also used to + * report the maximum size of the page fragment the caller can use. + * + * Return: + * Return the page fragment, otherwise return NULL. + */ #define page_frag_alloc_pg_prepare(nc, offset, size, gfp) \ ({ \ struct page *__page = NULL; \ @@ -179,6 +274,21 @@ static inline void *__page_frag_alloc_prepare(struct page_frag_cache *nc, return nc_va; }
+/** + * page_frag_alloc_prepare - Prepare allocating a page fragment. + * @nc: page_frag cache from which to prepare the fragment + * @offset: out as the offset of the page fragment + * @size: in as the requested size, out as the available size + * @va: out as the va of the returned page fragment + * @gfp: the allocation gfp to use when the cache needs to be refilled + + * Prepare a page fragment with a minimum size of 'size'; 'size' is also used to + * report the maximum size of the page fragment. Return both the 'page' and the + * 'va' of the fragment to the caller. + * + * Return: + * Return the page fragment, otherwise return NULL. + */ #define page_frag_alloc_prepare(nc, offset, size, va, gfp) \ ({ \ struct page *__page = NULL; \ @@ -191,6 +301,14 @@ static inline void *__page_frag_alloc_prepare(struct page_frag_cache *nc, __page; \ })
+/** + * page_frag_alloc_commit - Commit allocating a page fragment. + * @nc: page_frag cache from which to commit + * @offset: offset of the page fragment + * @size: size of the page fragment that has been used + + * Commit the allocation preparation by passing the offset and the actually used size. 
+ */ static inline void page_frag_alloc_commit(struct page_frag_cache *nc, unsigned int offset, unsigned int size) { @@ -199,6 +317,17 @@ static inline void page_frag_alloc_commit(struct page_frag_cache *nc, nc->offset = offset + size; }
+/** + * page_frag_alloc_commit_noref - Commit allocating a page fragment without taking + * the page refcount. + * @nc: page_frag cache from which to commit + * @offset: offset of the page fragment + * @size: size of the page fragment that has been used + + * Commit the allocation preparation by passing the offset and the actually used + * size, but without taking the page refcount. Mostly used for the fragment + * coalescing case, where the current fragment can share the same refcount as + * the previous fragment. + */ static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc, unsigned int offset, unsigned int size) { @@ -206,6 +335,10 @@ static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc, nc->offset = offset + size; }
+/** + * page_frag_free_va - Free a page fragment by va. + * @addr: va of the page fragment to be freed + */ void page_frag_free_va(void *addr); #endif
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 8774cb07e630..8b1d35aafcc1 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -60,6 +60,10 @@ static bool __page_frag_cache_refill(struct page_frag_cache *nc, return true; }
+/** + * page_frag_cache_drain - Drain the current page from the page_frag cache. + * @nc: page_frag cache to be drained + */ void page_frag_cache_drain(struct page_frag_cache *nc) { if (!nc->va)