From patchwork Tue Jul 9 13:27:26 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13727972
From: Yunsheng Lin <linyunsheng@huawei.com>
CC: Yunsheng Lin, Alexander Duyck, Andrew Morton
Subject: [PATCH net-next v10 01/15] mm: page_frag: add a test module for page_frag
Date: Tue, 9 Jul 2024 21:27:26 +0800
Message-ID: <20240709132741.47751-2-linyunsheng@huawei.com>
In-Reply-To: <20240709132741.47751-1-linyunsheng@huawei.com>
References: <20240709132741.47751-1-linyunsheng@huawei.com>

Based on lib/objpool.c, change it to something like a ptrpool, so that we can utilize it to test the correctness and performance of page_frag.
The testing works by having a kthread bound to a specified CPU allocate fragments from a page_frag_cache instance and push them into a ptrpool instance, while a kthread bound to another specified CPU pops the fragments from the ptrpool and frees them.

We may refactor out the common parts of objpool and ptrpool if this ptrpool turns out to be useful in other places.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 mm/Kconfig.debug    |   8 +
 mm/Makefile         |   1 +
 mm/page_frag_test.c | 389 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 398 insertions(+)
 create mode 100644 mm/page_frag_test.c

diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index afc72fde0f03..1ebcd45f47d4 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -142,6 +142,14 @@ config DEBUG_PAGE_REF
 	  kernel code. However the runtime performance overhead is virtually
 	  nil until the tracepoints are actually enabled.
 
+config DEBUG_PAGE_FRAG_TEST
+	tristate "Test module for page_frag"
+	default n
+	depends on m && DEBUG_KERNEL
+	help
+	  This builds the "page_frag_test" module that is used to test the
+	  correctness and performance of page_frag's implementation.
+
 config DEBUG_RODATA_TEST
 	bool "Testcase for the marking rodata read-only"
 	depends on STRICT_KERNEL_RWX
diff --git a/mm/Makefile b/mm/Makefile
index 8fb85acda1b1..29d9f7618a33 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -106,6 +106,7 @@ obj-$(CONFIG_MEMORY_FAILURE) += memory-failure.o
 obj-$(CONFIG_HWPOISON_INJECT) += hwpoison-inject.o
 obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
 obj-$(CONFIG_DEBUG_RODATA_TEST) += rodata_test.o
+obj-$(CONFIG_DEBUG_PAGE_FRAG_TEST) += page_frag_test.o
 obj-$(CONFIG_DEBUG_VM_PGTABLE) += debug_vm_pgtable.o
 obj-$(CONFIG_PAGE_OWNER) += page_owner.o
 obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c
new file mode 100644
index 000000000000..5ee3f33b756d
--- /dev/null
+++ b/mm/page_frag_test.c
@@ -0,0 +1,389 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Test module for page_frag cache
+ *
+ * Copyright: linyunsheng@huawei.com
+ */
+
+#include <linux/module.h>
+#include <linux/cpumask.h>
+#include <linux/completion.h>
+#include <linux/delay.h>
+#include <linux/kthread.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/irqflags.h>
+#include <linux/gfp.h>
+
+#define OBJPOOL_NR_OBJECT_MAX	BIT(24)
+
+struct objpool_slot {
+	u32 head;
+	u32 tail;
+	u32 last;
+	u32 mask;
+	void *entries[];
+} __packed;
+
+struct objpool_head {
+	int nr_cpus;
+	int capacity;
+	struct objpool_slot **cpu_slots;
+};
+
+/* initialize percpu objpool_slot */
+static void objpool_init_percpu_slot(struct objpool_head *pool,
+				     struct objpool_slot *slot)
+{
+	/* initialize elements of percpu objpool_slot */
+	slot->mask = pool->capacity - 1;
+}
+
+/* allocate and initialize percpu slots */
+static int objpool_init_percpu_slots(struct objpool_head *pool,
+				     int nr_objs, gfp_t gfp)
+{
+	int i;
+
+	for (i = 0; i < pool->nr_cpus; i++) {
+		struct objpool_slot *slot;
+		int size;
+
+		/* skip the cpu node which could never be present */
+		if (!cpu_possible(i))
+			continue;
+
+		size = struct_size(slot, entries, pool->capacity);
+
+		/*
+		 * Here we allocate the percpu slot and its objects together
+		 * in a single allocation to make it more compact, taking
+		 * advantage of warm caches and TLB hits. By default vmalloc
+		 * is used to reduce the pressure on the kernel slab system;
+		 * note that the minimal size of a vmalloc allocation is one
+		 * page, since vmalloc always aligns the requested size up
+		 * to page size.
+		 */
+		if (gfp & GFP_ATOMIC)
+			slot = kmalloc_node(size, gfp, cpu_to_node(i));
+		else
+			slot = __vmalloc_node(size, sizeof(void *), gfp,
+					      cpu_to_node(i),
+					      __builtin_return_address(0));
+		if (!slot)
+			return -ENOMEM;
+
+		memset(slot, 0, size);
+		pool->cpu_slots[i] = slot;
+
+		objpool_init_percpu_slot(pool, slot);
+	}
+
+	return 0;
+}
+
+/* cleanup all percpu slots of the object pool */
+static void objpool_fini_percpu_slots(struct objpool_head *pool)
+{
+	int i;
+
+	if (!pool->cpu_slots)
+		return;
+
+	for (i = 0; i < pool->nr_cpus; i++)
+		kvfree(pool->cpu_slots[i]);
+	kfree(pool->cpu_slots);
+}
+
+/* initialize object pool and pre-allocate objects */
+static int objpool_init(struct objpool_head *pool, int nr_objs, gfp_t gfp)
+{
+	int rc, capacity, slot_size;
+
+	/* check input parameters */
+	if (nr_objs <= 0 || nr_objs > OBJPOOL_NR_OBJECT_MAX)
+		return -EINVAL;
+
+	/* calculate capacity of percpu objpool_slot */
+	capacity = roundup_pow_of_two(nr_objs);
+	if (!capacity)
+		return -EINVAL;
+
+	gfp = gfp & ~__GFP_ZERO;
+
+	/* initialize objpool pool */
+	memset(pool, 0, sizeof(struct objpool_head));
+	pool->nr_cpus = nr_cpu_ids;
+	pool->capacity = capacity;
+	slot_size = pool->nr_cpus * sizeof(struct objpool_slot *);
+	pool->cpu_slots = kzalloc(slot_size, gfp);
+	if (!pool->cpu_slots)
+		return -ENOMEM;
+
+	/* initialize per-cpu slots */
+	rc = objpool_init_percpu_slots(pool, nr_objs, gfp);
+	if (rc)
+		objpool_fini_percpu_slots(pool);
+
+	return rc;
+}
+
+/* adding object to slot, abort if the slot was already full */
+static int objpool_try_add_slot(void *obj, struct objpool_head *pool, int cpu)
+{
+	struct objpool_slot *slot = pool->cpu_slots[cpu];
+	u32 head, tail;
+
+	/* loading tail and head as a local snapshot, tail first */
+	tail = READ_ONCE(slot->tail);
+
+	do {
+		head = READ_ONCE(slot->head);
+		/* fault caught: something must be wrong */
+		if (unlikely(tail - head >= pool->capacity))
+			return -ENOSPC;
+	} while (!try_cmpxchg_acquire(&slot->tail, &tail, tail + 1));
+
+	/* now the tail position is reserved for the given obj */
+	WRITE_ONCE(slot->entries[tail & slot->mask], obj);
+	/* update sequence to make this obj available for pop() */
+	smp_store_release(&slot->last, tail + 1);
+
+	return 0;
+}
+
+/* reclaim an object to object pool */
+static int objpool_push(void *obj, struct objpool_head *pool)
+{
+	unsigned long flags;
+	int rc;
+
+	/* disable local irq to avoid preemption & interruption */
+	raw_local_irq_save(flags);
+	rc = objpool_try_add_slot(obj, pool, raw_smp_processor_id());
+	raw_local_irq_restore(flags);
+
+	return rc;
+}
+
+/* try to retrieve object from slot */
+static void *objpool_try_get_slot(struct objpool_head *pool, int cpu)
+{
+	struct objpool_slot *slot = pool->cpu_slots[cpu];
+	/* load head snapshot, other cpus may change it */
+	u32 head = smp_load_acquire(&slot->head);
+
+	while (head != READ_ONCE(slot->last)) {
+		void *obj;
+
+		/*
+		 * data visibility of 'last' and 'head' could be out of
+		 * order since memory updating of 'last' and 'head' are
+		 * performed in push() and pop() independently
+		 *
+		 * before any retrieving attempts, pop() must guarantee
+		 * 'last' is behind 'head', that is to say, there must
+		 * be available objects in slot, which could be ensured
+		 * by condition 'last != head && last - head <= nr_objs'
+		 * that is equivalent to 'last - head - 1 < nr_objs' as
+		 * 'last' and 'head' are both unsigned int32
+		 */
+		if (READ_ONCE(slot->last) - head - 1 >= pool->capacity) {
+			head = READ_ONCE(slot->head);
+			continue;
+		}
+
+		/* obj must be retrieved before moving forward head */
+		obj = READ_ONCE(slot->entries[head & slot->mask]);
+
+		/* move head forward to mark it's consumption */
+		if (try_cmpxchg_release(&slot->head, &head, head + 1))
+			return obj;
+	}
+
+	return NULL;
+}
+
+/* allocate an object from object pool */
+static void *objpool_pop(struct objpool_head *pool)
+{
+	void *obj = NULL;
+	unsigned long flags;
+	int i, cpu;
+
+	/* disable local irq to avoid preemption & interruption */
+	raw_local_irq_save(flags);
+
+	cpu = raw_smp_processor_id();
+	for (i = 0; i < num_possible_cpus(); i++) {
+		obj = objpool_try_get_slot(pool, cpu);
+		if (obj)
+			break;
+		cpu = cpumask_next_wrap(cpu, cpu_possible_mask, -1, 1);
+	}
+	raw_local_irq_restore(flags);
+
+	return obj;
+}
+
+/* release whole objpool forcely */
+static void objpool_free(struct objpool_head *pool)
+{
+	if (!pool->cpu_slots)
+		return;
+
+	/* release percpu slots */
+	objpool_fini_percpu_slots(pool);
+}
+
+static struct objpool_head ptr_pool;
+static int nr_objs = 512;
+static atomic_t nthreads;
+static struct completion wait;
+static struct page_frag_cache test_frag;
+
+static int nr_test = 5120000;
+module_param(nr_test, int, 0);
+MODULE_PARM_DESC(nr_test, "number of iterations to test");
+
+static bool test_align;
+module_param(test_align, bool, 0);
+MODULE_PARM_DESC(test_align, "use align API for testing");
+
+static int test_alloc_len = 2048;
+module_param(test_alloc_len, int, 0);
+MODULE_PARM_DESC(test_alloc_len, "alloc len for testing");
+
+static int test_push_cpu;
+module_param(test_push_cpu, int, 0);
+MODULE_PARM_DESC(test_push_cpu, "test cpu for pushing fragment");
+
+static int test_pop_cpu;
+module_param(test_pop_cpu, int, 0);
+MODULE_PARM_DESC(test_pop_cpu, "test cpu for popping fragment");
+
+static int page_frag_pop_thread(void *arg)
+{
+	struct objpool_head *pool = arg;
+	int nr = nr_test;
+
+	pr_info("page_frag pop test thread begins on cpu %d\n",
+		smp_processor_id());
+
+	while (nr > 0) {
+		void *obj = objpool_pop(pool);
+
+		if (obj) {
+			nr--;
+			page_frag_free(obj);
+		} else {
+			cond_resched();
+		}
+	}
+
+	if (atomic_dec_and_test(&nthreads))
+		complete(&wait);
+
+	pr_info("page_frag pop test thread exits on cpu %d\n",
+		smp_processor_id());
+
+	return 0;
+}
+
+static int page_frag_push_thread(void *arg)
+{
+	struct objpool_head *pool = arg;
+	int nr = nr_test;
+
+	pr_info("page_frag push test thread begins on cpu %d\n",
+		smp_processor_id());
+
+	while (nr > 0) {
+		void *va;
+		int ret;
+
+		if (test_align)
+			va = page_frag_alloc_align(&test_frag, test_alloc_len,
+						   GFP_KERNEL, SMP_CACHE_BYTES);
+		else
+			va = page_frag_alloc(&test_frag, test_alloc_len,
+					     GFP_KERNEL);
+
+		if (!va)
+			continue;
+
+		ret = objpool_push(va, pool);
+		if (ret) {
+			page_frag_free(va);
+			cond_resched();
+		} else {
+			nr--;
+		}
+	}
+
+	pr_info("page_frag push test thread exits on cpu %d\n",
+		smp_processor_id());
+
+	if (atomic_dec_and_test(&nthreads))
+		complete(&wait);
+
+	return 0;
+}
+
+static int __init page_frag_test_init(void)
+{
+	struct task_struct *tsk_push, *tsk_pop;
+	ktime_t start;
+	u64 duration;
+	int ret;
+
+	test_frag.va = NULL;
+	atomic_set(&nthreads, 2);
+	init_completion(&wait);
+
+	if (test_alloc_len > PAGE_SIZE || test_alloc_len <= 0)
+		return -EINVAL;
+
+	ret = objpool_init(&ptr_pool, nr_objs, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	tsk_push = kthread_create_on_cpu(page_frag_push_thread, &ptr_pool,
+					 test_push_cpu, "page_frag_push");
+	if (IS_ERR(tsk_push))
+		return PTR_ERR(tsk_push);
+
+	tsk_pop = kthread_create_on_cpu(page_frag_pop_thread, &ptr_pool,
+					test_pop_cpu, "page_frag_pop");
+	if (IS_ERR(tsk_pop)) {
+		kthread_stop(tsk_push);
+		return PTR_ERR(tsk_pop);
+	}
+
+	start = ktime_get();
+	wake_up_process(tsk_push);
+	wake_up_process(tsk_pop);
+
+	pr_info("waiting for test to complete\n");
+	wait_for_completion(&wait);
+
+	duration = (u64)ktime_us_delta(ktime_get(), start);
+	pr_info("%d of iterations for %s testing took: %lluus\n", nr_test,
+		test_align ?
"aligned" : "non-aligned", duration);
+
+	objpool_free(&ptr_pool);
+	page_frag_cache_drain(&test_frag);
+
+	return -EAGAIN;
+}
+
+static void __exit page_frag_test_exit(void)
+{
+}
+
+module_init(page_frag_test_init);
+module_exit(page_frag_test_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Yunsheng Lin <linyunsheng@huawei.com>");
+MODULE_DESCRIPTION("Test module for page_frag");

From patchwork Tue Jul 9 13:27:27 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13727974
From: Yunsheng Lin <linyunsheng@huawei.com>
CC: Yunsheng Lin, David Howells, Alexander Duyck, Andrew Morton
Subject: [PATCH net-next v10 02/15] mm: move the page fragment allocator from page_alloc into its own file
Date: Tue, 9 Jul 2024 21:27:27 +0800
Message-ID: <20240709132741.47751-3-linyunsheng@huawei.com>
In-Reply-To: <20240709132741.47751-1-linyunsheng@huawei.com>
References: <20240709132741.47751-1-linyunsheng@huawei.com>

Inspired by [1], move the page fragment allocator from page_alloc into its own C file and header file, as we are about to make more changes to it so that it can replace another page_frag implementation in sock.c.

1. https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/

CC: David Howells
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/linux/gfp.h             |  22 -----
 include/linux/mm_types.h        |  18 ----
 include/linux/page_frag_cache.h |  50 +++++++++++
 include/linux/skbuff.h          |   1 +
 mm/Makefile                     |   1 +
 mm/page_alloc.c                 | 136 ------------------------
 mm/page_frag_cache.c            | 145 ++++++++++++++++++++++++++++
 mm/page_frag_test.c             |   2 +-
 8 files changed, 198 insertions(+), 177 deletions(-)
 create mode 100644 include/linux/page_frag_cache.h
 create mode 100644 mm/page_frag_cache.c

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 7f9691d375f0..3d8f9dc6c6ee 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -363,28 +363,6 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
 
-struct page_frag_cache;
-void page_frag_cache_drain(struct page_frag_cache *nc);
-extern void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
-			      gfp_t gfp_mask, unsigned int align_mask);
-
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
-					  unsigned int fragsz, gfp_t gfp_mask,
-					  unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
-}
-
-static inline void *page_frag_alloc(struct page_frag_cache *nc,
-				    unsigned int fragsz, gfp_t gfp_mask)
-{
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
-}
-
-extern void page_frag_free(void *addr);
-
 #define __free_page(page) __free_pages((page), 0)
 #define free_page(addr) free_pages((addr), 0)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index af3a0256fa93..7a4e695a7a1e 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -505,9 +505,6 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
  */
 #define STRUCT_PAGE_MAX_SHIFT	(order_base_2(sizeof(struct page)))
 
-#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
-#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
-
 /*
  * page_private can be used on tail pages. However, PagePrivate is only
  * checked by the VM on the head page. So page_private on the tail pages
@@ -526,21 +523,6 @@ static inline void *folio_get_private(struct folio *folio)
 	return folio->private;
 }
 
-struct page_frag_cache {
-	void * va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	__u16 offset;
-	__u16 size;
-#else
-	__u32 offset;
-#endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
-	bool pfmemalloc;
-};
-
 typedef unsigned long vm_flags_t;
 
 /*
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
new file mode 100644
index 000000000000..325872cec8a4
--- /dev/null
+++ b/include/linux/page_frag_cache.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_PAGE_FRAG_CACHE_H
+#define _LINUX_PAGE_FRAG_CACHE_H
+
+#include <linux/log2.h>
+#include <linux/types.h>
+#include <linux/mm_types.h>
+#include <asm/page.h>
+
+#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
+#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
+
+struct page_frag_cache {
+	void *va;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	__u16 offset;
+	__u16 size;
+#else
+	__u32 offset;
+#endif
+	/* we maintain a pagecount bias, so that we don't dirty the cache line
+	 * containing page->_refcount every time we allocate a fragment.
+	 */
+	unsigned int pagecnt_bias;
+	bool pfmemalloc;
+};
+
+void page_frag_cache_drain(struct page_frag_cache *nc);
+void __page_frag_cache_drain(struct page *page, unsigned int count);
+void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
+			      gfp_t gfp_mask, unsigned int align_mask);
+
+static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
+					  unsigned int fragsz, gfp_t gfp_mask,
+					  unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc(struct page_frag_cache *nc,
+				    unsigned int fragsz, gfp_t gfp_mask)
+{
+	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+}
+
+void page_frag_free(void *addr);
+
+#endif
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 9c29bdd5596d..e0e2be5194fb 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include <linux/page_frag_cache.h>
 #include
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include
diff --git a/mm/Makefile b/mm/Makefile
index 29d9f7618a33..3080257a0a75 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -64,6 +64,7 @@ page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o
 memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 
 obj-y += page-alloc.o
+obj-y += page_frag_cache.o
 obj-y += init-mm.o
 obj-y += memblock.o
 obj-y += $(memory-hotplug-y)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9ecf99190ea2..edbb5a43f47b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4786,142 +4786,6 @@ void free_pages(unsigned long addr, unsigned int order)
 EXPORT_SYMBOL(free_pages);
 
-/*
- * Page Fragment:
- *  An arbitrary-length arbitrary-offset area of memory which resides
- *  within a 0 or higher order page.  Multiple fragments within that page
- *  are individually refcounted, in the page's reference counter.
- *
- * The page_frag functions below provide a simple allocation framework for
- * page fragments.  This is used by the network stack and network device
- * drivers to provide a backing region of memory for use as either an
- * sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
- */
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
-					     gfp_t gfp_mask)
-{
-	struct page *page = NULL;
-	gfp_t gfp = gfp_mask;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
-		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
-#endif
-	if (unlikely(!page))
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
-
-	nc->va = page ? page_address(page) : NULL;
-
-	return page;
-}
-
-void page_frag_cache_drain(struct page_frag_cache *nc)
-{
-	if (!nc->va)
-		return;
-
-	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
-	nc->va = NULL;
-}
-EXPORT_SYMBOL(page_frag_cache_drain);
-
-void __page_frag_cache_drain(struct page *page, unsigned int count)
-{
-	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
-
-	if (page_ref_sub_and_test(page, count))
-		free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(__page_frag_cache_drain);
-
-void *__page_frag_alloc_align(struct page_frag_cache *nc,
-			      unsigned int fragsz, gfp_t gfp_mask,
-			      unsigned int align_mask)
-{
-	unsigned int size = PAGE_SIZE;
-	struct page *page;
-	int offset;
-
-	if (unlikely(!nc->va)) {
-refill:
-		page = __page_frag_cache_refill(nc, gfp_mask);
-		if (!page)
-			return NULL;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
-		/* Even if we own the page, we do not use atomic_set().
-		 * This would break get_page_unless_zero() users.
-		 */
-		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
-	}
-
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
-		page = virt_to_page(nc->va);
-
-		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
-			goto refill;
-
-		if (unlikely(nc->pfmemalloc)) {
-			free_unref_page(page, compound_order(page));
-			goto refill;
-		}
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
-			/*
-			 * The caller is trying to allocate a fragment
-			 * with fragsz > PAGE_SIZE but the cache isn't big
-			 * enough to satisfy the request, this may
-			 * happen in low memory conditions.
-			 * We don't release the cache page because
-			 * it could make memory pressure worse
-			 * so we simply return NULL here.
-			 */
-			return NULL;
-		}
-	}
-
-	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
-
-	return nc->va + offset;
-}
-EXPORT_SYMBOL(__page_frag_alloc_align);
-
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
- */
-void page_frag_free(void *addr)
-{
-	struct page *page = virt_to_head_page(addr);
-
-	if (unlikely(put_page_testzero(page)))
-		free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(page_frag_free);
-
 static void *make_alloc_exact(unsigned long addr, unsigned int order,
			      size_t size)
 {
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
new file mode 100644
index 000000000000..609a485cd02a
--- /dev/null
+++ b/mm/page_frag_cache.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Page fragment allocator
+ *
+ * Page Fragment:
+ *  An arbitrary-length arbitrary-offset area of memory which resides within a
+ *  0 or higher order page.  Multiple fragments within that page are
+ *  individually refcounted, in the page's reference counter.
+ *
+ * The page_frag functions provide a simple allocation framework for page
+ * fragments.  This is used by the network stack and network device drivers to
+ * provide a backing region of memory for use as either an sk_buff->head, or to
+ * be used in the "frags" portion of skb_shared_info.
+ */
+
+#include <linux/export.h>
+#include <linux/gfp_types.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/page_frag_cache.h>
+#include "internal.h"
+
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+					     gfp_t gfp_mask)
+{
+	struct page *page = NULL;
+	gfp_t gfp = gfp_mask;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
+		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
+	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
+				PAGE_FRAG_CACHE_MAX_ORDER);
+	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+#endif
+	if (unlikely(!page))
+		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+
+	nc->va = page ?
page_address(page) : NULL; + + return page; +} + +void page_frag_cache_drain(struct page_frag_cache *nc) +{ + if (!nc->va) + return; + + __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); + nc->va = NULL; +} +EXPORT_SYMBOL(page_frag_cache_drain); + +void __page_frag_cache_drain(struct page *page, unsigned int count) +{ + VM_BUG_ON_PAGE(page_ref_count(page) == 0, page); + + if (page_ref_sub_and_test(page, count)) + free_unref_page(page, compound_order(page)); +} +EXPORT_SYMBOL(__page_frag_cache_drain); + +void *__page_frag_alloc_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask) +{ + unsigned int size = PAGE_SIZE; + struct page *page; + int offset; + + if (unlikely(!nc->va)) { +refill: + page = __page_frag_cache_refill(nc, gfp_mask); + if (!page) + return NULL; + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* if size can vary use size else just use PAGE_SIZE */ + size = nc->size; +#endif + /* Even if we own the page, we do not use atomic_set(). + * This would break get_page_unless_zero() users. 
+		 */
+		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+
+		/* reset page count bias and offset to start of new frag */
+		nc->pfmemalloc = page_is_pfmemalloc(page);
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		nc->offset = size;
+	}
+
+	offset = nc->offset - fragsz;
+	if (unlikely(offset < 0)) {
+		page = virt_to_page(nc->va);
+
+		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
+			goto refill;
+
+		if (unlikely(nc->pfmemalloc)) {
+			free_unref_page(page, compound_order(page));
+			goto refill;
+		}
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+		/* if size can vary use size else just use PAGE_SIZE */
+		size = nc->size;
+#endif
+		/* OK, page count is 0, we can safely set it */
+		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+
+		/* reset page count bias and offset to start of new frag */
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		offset = size - fragsz;
+		if (unlikely(offset < 0)) {
+			/*
+			 * The caller is trying to allocate a fragment
+			 * with fragsz > PAGE_SIZE but the cache isn't big
+			 * enough to satisfy the request, this may
+			 * happen in low memory conditions.
+			 * We don't release the cache page because
+			 * it could make memory pressure worse
+			 * so we simply return NULL here.
+			 */
+			return NULL;
+		}
+	}
+
+	nc->pagecnt_bias--;
+	offset &= align_mask;
+	nc->offset = offset;
+
+	return nc->va + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_align);
+
+/*
+ * Frees a page fragment allocated out of either a compound or order 0 page.
+ */
+void page_frag_free(void *addr)
+{
+	struct page *page = virt_to_head_page(addr);
+
+	if (unlikely(put_page_testzero(page)))
+		free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(page_frag_free);
diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c
index 5ee3f33b756d..755d66af9fd4 100644
--- a/mm/page_frag_test.c
+++ b/mm/page_frag_test.c
@@ -6,7 +6,6 @@
  * Copyright: linyunsheng@huawei.com
  */
 
-#include
 #include
 #include
 #include
@@ -16,6 +15,7 @@
 #include
 #include
 #include
+#include
 
 #define OBJPOOL_NR_OBJECT_MAX	BIT(24)

From patchwork Tue Jul 9 13:27:28 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13727973
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin ,
Alexander Duyck , Andrew Morton
Subject: [PATCH net-next v10 03/15] mm: page_frag: use initial zero offset for page_frag_alloc_align()
Date: Tue, 9 Jul 2024 21:27:28 +0800
Message-ID: <20240709132741.47751-4-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240709132741.47751-1-linyunsheng@huawei.com>
References: <20240709132741.47751-1-linyunsheng@huawei.com>
MIME-Version: 1.0
We are about to use the page_frag_alloc_*() API not just to allocate memory
for skb->data, but also to allocate memory for skb frags.

Currently the page_frag implementation in the mm subsystem runs the offset
as a countdown rather than a count-up value. There may be several advantages
to that, as mentioned in [1], but it also has some disadvantages: for
example, it may prevent skb frag coalescing and more correct cache
prefetching.

We have a trade-off to make in order to have a unified implementation and
API for page_frag, so use an initial zero offset in this patch; the
following patch will try to optimize away the disadvantages as much as
possible.

Rename 'offset' to 'remaining' to retain the countdown behavior as a
'remaining countdown' instead of an 'offset countdown'. The renaming also
enables a single 'fragsz > remaining' check for the case where the cache is
not big enough, which should be the fast path if we ensure 'remaining' is
zero when 'va' == NULL by memset'ing 'struct page_frag_cache' in
page_frag_cache_init() and page_frag_cache_drain().

1.
https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h |  4 +--
 mm/page_frag_cache.c            | 50 +++++++++++++++++++++------------
 2 files changed, 34 insertions(+), 20 deletions(-)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 325872cec8a4..ed8bacbb877b 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -14,10 +14,10 @@ struct page_frag_cache {
 	void *va;
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	__u16 offset;
+	__u16 remaining;
 	__u16 size;
 #else
-	__u32 offset;
+	__u32 remaining;
 #endif
 	/* we maintain a pagecount bias, so that we dont dirty cache line
 	 * containing page->_refcount every time we allocate a fragment.
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 609a485cd02a..ef0a02f12acc 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -22,6 +22,7 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 					     gfp_t gfp_mask)
 {
+	unsigned int page_size = PAGE_FRAG_CACHE_MAX_SIZE;
 	struct page *page = NULL;
 	gfp_t gfp = gfp_mask;
 
@@ -30,12 +31,21 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
 #endif
-	if (unlikely(!page))
+	if (unlikely(!page)) {
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		if (unlikely(!page)) {
+			nc->va = NULL;
+			return NULL;
+		}
 
-	nc->va = page ? page_address(page) : NULL;
+		page_size = PAGE_SIZE;
+	}
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	nc->size = page_size;
+#endif
+	nc->va = page_address(page);
 
 	return page;
 }
@@ -63,9 +73,9 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			      unsigned int fragsz, gfp_t gfp_mask,
 			      unsigned int align_mask)
 {
+	int aligned_remaining, remaining;
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
-	int offset;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -82,14 +92,20 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 		 */
 		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
 
-		/* reset page count bias and offset to start of new frag */
+		/* reset page count bias and remaining to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
+		nc->remaining = size;
 	}
 
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* if size can vary use size else just use PAGE_SIZE */
+	size = nc->size;
+#endif
+
+	aligned_remaining = nc->remaining & align_mask;
+	remaining = aligned_remaining - fragsz;
+	if (unlikely(remaining < 0)) {
 		page = virt_to_page(nc->va);
 
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -100,17 +116,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 		}
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* OK, page count is 0, we can safely set it */
 		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
-		/* reset page count bias and offset to start of new frag */
+		/* reset page count bias and remaining to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
+		nc->remaining = size;
+
+		aligned_remaining = size;
+		remaining = aligned_remaining - fragsz;
+		if (unlikely(remaining < 0)) {
 			/*
 			 * The caller is trying to allocate a fragment
 			 * with fragsz > PAGE_SIZE but the cache isn't big
@@ -125,10 +140,9 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
+	nc->remaining = remaining;
 
-	return nc->va + offset;
+	return nc->va + (size - aligned_remaining);
 }
 EXPORT_SYMBOL(__page_frag_alloc_align);

From patchwork Tue Jul 9 13:27:29 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13727975
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin , Alexander Duyck , Jeroen de Borst , Praveen Kaligineedi , Shailend Chand , Eric Dumazet , Tony Nguyen , Przemek Kitszel , Sunil Goutham , Geetha sowjanya , Subbaraya Sundeep , hariprasad , Felix Fietkau , Sean Wang , Mark Lee
, Lorenzo Bianconi , Matthias Brugger , AngeloGioacchino Del Regno , Keith Busch , Jens Axboe , Christoph Hellwig , Sagi Grimberg , Chaitanya Kulkarni , "Michael S. Tsirkin" , Jason Wang , Eugenio Pérez , Andrew Morton , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Andrii Nakryiko , Martin KaFai Lau , Eduard Zingerman , Song Liu , Yonghong Song , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , David Howells , Marc Dionne , Trond Myklebust , Anna Schumaker , Chuck Lever , Jeff Layton , Neil Brown , Olga Kornievskaia , Dai Ngo , Tom Talpey , , , , , , , , , ,
Subject: [PATCH net-next v10 04/15] mm: page_frag: add '_va' suffix to page_frag API
Date: Tue, 9 Jul 2024 21:27:29 +0800
Message-ID: <20240709132741.47751-5-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240709132741.47751-1-linyunsheng@huawei.com>
References: <20240709132741.47751-1-linyunsheng@huawei.com>
MIME-Version: 1.0
Currently the page_frag API returns a 'virtual address' or 'va' when
allocating and expects a 'virtual address' or 'va' as input when freeing.

We are about to support new use cases in which the caller needs to deal
with 'struct page', or with both 'va' and 'struct page'. In order to
differentiate the API handling between 'va' and 'struct page', add a '_va'
suffix to the corresponding APIs, mirroring the page_pool_alloc_va() API of
the page_pool.
Callers expecting to deal with va, page, or both va and page can then call
the page_frag_alloc_va*, page_frag_alloc_pg*, or page_frag_alloc* APIs
accordingly.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Reviewed-by: Subbaraya Sundeep
---
 drivers/net/ethernet/google/gve/gve_rx.c      |  4 ++--
 drivers/net/ethernet/intel/ice/ice_txrx.c     |  2 +-
 drivers/net/ethernet/intel/ice/ice_txrx.h     |  2 +-
 drivers/net/ethernet/intel/ice/ice_txrx_lib.c |  2 +-
 .../net/ethernet/intel/ixgbevf/ixgbevf_main.c |  4 ++--
 .../marvell/octeontx2/nic/otx2_common.c       |  2 +-
 drivers/net/ethernet/mediatek/mtk_wed_wo.c    |  4 ++--
 drivers/nvme/host/tcp.c                       |  8 +++----
 drivers/nvme/target/tcp.c                     | 22 +++++++++----------
 drivers/vhost/net.c                           |  6 ++---
 include/linux/page_frag_cache.h               | 21 +++++++++---------
 include/linux/skbuff.h                        |  2 +-
 kernel/bpf/cpumap.c                           |  2 +-
 mm/page_frag_cache.c                          | 12 +++++-----
 mm/page_frag_test.c                           | 13 ++++++-----
 net/core/skbuff.c                             | 14 ++++++------
 net/core/xdp.c                                |  2 +-
 net/rxrpc/txbuf.c                             | 15 +++++++------
 net/sunrpc/svcsock.c                          |  6 ++---
 19 files changed, 74 insertions(+), 69 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index acb73d4d0de6..b6c10100e462 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -729,7 +729,7 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx,
 	total_len = headroom + SKB_DATA_ALIGN(len) +
 		    SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
-	frame = page_frag_alloc(&rx->page_cache, total_len, GFP_ATOMIC);
+	frame = page_frag_alloc_va(&rx->page_cache, total_len, GFP_ATOMIC);
 	if (!frame) {
 		u64_stats_update_begin(&rx->statss);
 		rx->xdp_alloc_fails++;
@@ -742,7 +742,7 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx,
 
 	err = xdp_do_redirect(dev, &new, xdp_prog);
 	if (err)
-		page_frag_free(frame);
+		page_frag_free_va(frame);
 
 	return err;
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 8bb743f78fcb..399b317c509d 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -126,7 +126,7 @@ ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf)
 		dev_kfree_skb_any(tx_buf->skb);
 		break;
 	case ICE_TX_BUF_XDP_TX:
-		page_frag_free(tx_buf->raw_buf);
+		page_frag_free_va(tx_buf->raw_buf);
 		break;
 	case ICE_TX_BUF_XDP_XMIT:
 		xdp_return_frame(tx_buf->xdpf);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index feba314a3fe4..6379f57d8228 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -148,7 +148,7 @@ static inline int ice_skb_pad(void)
  * @ICE_TX_BUF_DUMMY: dummy Flow Director packet, unmap and kfree()
  * @ICE_TX_BUF_FRAG: mapped skb OR &xdp_buff frag, only unmap DMA
  * @ICE_TX_BUF_SKB: &sk_buff, unmap and consume_skb(), update stats
- * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free(), stats
+ * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free_va(), stats
  * @ICE_TX_BUF_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame(), stats
  * @ICE_TX_BUF_XSK_TX: &xdp_buff on XSk queue, xsk_buff_free(), stats
  */
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index 2719f0e20933..a1a41a14df0d 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -250,7 +250,7 @@ ice_clean_xdp_tx_buf(struct device *dev, struct ice_tx_buf *tx_buf,
 
 	switch (tx_buf->type) {
 	case ICE_TX_BUF_XDP_TX:
-		page_frag_free(tx_buf->raw_buf);
+		page_frag_free_va(tx_buf->raw_buf);
 		break;
 	case ICE_TX_BUF_XDP_XMIT:
 		xdp_return_frame_bulk(tx_buf->xdpf, bq);
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index b938dc06045d..fcd1b149a45d 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -303,7 +303,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector,
 
 		/* free the skb */
 		if (ring_is_xdp(tx_ring))
-			page_frag_free(tx_buffer->data);
+			page_frag_free_va(tx_buffer->data);
 		else
 			napi_consume_skb(tx_buffer->skb, napi_budget);
 
@@ -2413,7 +2413,7 @@ static void ixgbevf_clean_tx_ring(struct ixgbevf_ring *tx_ring)
 
 		/* Free all the Tx ring sk_buffs */
 		if (ring_is_xdp(tx_ring))
-			page_frag_free(tx_buffer->data);
+			page_frag_free_va(tx_buffer->data);
 		else
 			dev_kfree_skb_any(tx_buffer->skb);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 87d5776e3b88..a485e988fa1d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -553,7 +553,7 @@ static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
 	*dma = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize,
 				    DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
 	if (unlikely(dma_mapping_error(pfvf->dev, *dma))) {
-		page_frag_free(buf);
+		page_frag_free_va(buf);
 		return -ENOMEM;
 	}
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
index 7063c78bd35f..c4228719f8a4 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
@@ -142,8 +142,8 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
 		dma_addr_t addr;
 		void *buf;
 
-		buf = page_frag_alloc(&q->cache, q->buf_size,
-				      GFP_ATOMIC | GFP_DMA32);
+		buf = page_frag_alloc_va(&q->cache, q->buf_size,
+					 GFP_ATOMIC | GFP_DMA32);
 		if (!buf)
 			break;
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 8b5e4327fe83..4b7a897897fc 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -506,7 +506,7 @@ static void nvme_tcp_exit_request(struct blk_mq_tag_set *set,
 {
 	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
 
-	page_frag_free(req->pdu);
+	page_frag_free_va(req->pdu);
 }
 
 static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
@@ -520,7 +520,7 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
 	struct nvme_tcp_queue *queue = &ctrl->queues[queue_idx];
 	u8 hdgst = nvme_tcp_hdgst_len(queue);
 
-	req->pdu = page_frag_alloc(&queue->pf_cache,
+	req->pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(struct nvme_tcp_cmd_pdu) + hdgst,
 			GFP_KERNEL | __GFP_ZERO);
 	if (!req->pdu)
@@ -1337,7 +1337,7 @@ static void nvme_tcp_free_async_req(struct nvme_tcp_ctrl *ctrl)
 {
 	struct nvme_tcp_request *async = &ctrl->async_req;
 
-	page_frag_free(async->pdu);
+	page_frag_free_va(async->pdu);
 }
 
 static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
@@ -1346,7 +1346,7 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
 	struct nvme_tcp_request *async = &ctrl->async_req;
 	u8 hdgst = nvme_tcp_hdgst_len(queue);
 
-	async->pdu = page_frag_alloc(&queue->pf_cache,
+	async->pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(struct nvme_tcp_cmd_pdu) + hdgst,
 			GFP_KERNEL | __GFP_ZERO);
 	if (!async->pdu)
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 380f22ee3ebb..bea3aa79ef43 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -1463,24 +1463,24 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue,
 	c->queue = queue;
 	c->req.port = queue->port->nport;
 
-	c->cmd_pdu = page_frag_alloc(&queue->pf_cache,
+	c->cmd_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->cmd_pdu)
 		return -ENOMEM;
 	c->req.cmd = &c->cmd_pdu->cmd;
 
-	c->rsp_pdu = page_frag_alloc(&queue->pf_cache,
+	c->rsp_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->rsp_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->rsp_pdu)
 		goto out_free_cmd;
 	c->req.cqe = &c->rsp_pdu->cqe;
 
-	c->data_pdu = page_frag_alloc(&queue->pf_cache,
+	c->data_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->data_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->data_pdu)
 		goto out_free_rsp;
 
-	c->r2t_pdu = page_frag_alloc(&queue->pf_cache,
+	c->r2t_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->r2t_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->r2t_pdu)
 		goto out_free_data;
@@ -1495,20 +1495,20 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue,
 	return 0;
 out_free_data:
-	page_frag_free(c->data_pdu);
+	page_frag_free_va(c->data_pdu);
 out_free_rsp:
-	page_frag_free(c->rsp_pdu);
+	page_frag_free_va(c->rsp_pdu);
 out_free_cmd:
-	page_frag_free(c->cmd_pdu);
+	page_frag_free_va(c->cmd_pdu);
 	return -ENOMEM;
 }
 
 static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c)
 {
-	page_frag_free(c->r2t_pdu);
-	page_frag_free(c->data_pdu);
-	page_frag_free(c->rsp_pdu);
-	page_frag_free(c->cmd_pdu);
+	page_frag_free_va(c->r2t_pdu);
+	page_frag_free_va(c->data_pdu);
+	page_frag_free_va(c->rsp_pdu);
+	page_frag_free_va(c->cmd_pdu);
 }
 
 static int nvmet_tcp_alloc_cmds(struct nvmet_tcp_queue *queue)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f16279351db5..6691fac01e0d 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -686,8 +686,8 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
 		return -ENOSPC;
 
 	buflen += SKB_DATA_ALIGN(len + pad);
-	buf = page_frag_alloc_align(&net->pf_cache, buflen, GFP_KERNEL,
-				    SMP_CACHE_BYTES);
+	buf = page_frag_alloc_va_align(&net->pf_cache, buflen, GFP_KERNEL,
+				       SMP_CACHE_BYTES);
 	if (unlikely(!buf))
 		return -ENOMEM;
 
@@ -734,7 +734,7 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
 	return 0;
 
 err:
-	page_frag_free(buf);
+	page_frag_free_va(buf);
 	return ret;
 }
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index ed8bacbb877b..185d875e3e6b 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -28,23 +28,24 @@ struct page_frag_cache {
 
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
-			      gfp_t gfp_mask, unsigned int align_mask);
+void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
+				 unsigned int fragsz, gfp_t gfp_mask,
+				 unsigned int align_mask);
 
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
-					  unsigned int fragsz, gfp_t gfp_mask,
-					  unsigned int align)
+static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
+					     unsigned int fragsz,
+					     gfp_t gfp_mask, unsigned int align)
 {
 	WARN_ON_ONCE(!is_power_of_2(align));
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
+	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align);
 }
 
-static inline void *page_frag_alloc(struct page_frag_cache *nc,
-				    unsigned int fragsz, gfp_t gfp_mask)
+static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
+				       unsigned int fragsz, gfp_t gfp_mask)
 {
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, ~0u);
 }
 
-void page_frag_free(void *addr);
+void page_frag_free_va(void *addr);
 
 #endif
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index e0e2be5194fb..fb74725d1af8 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3381,7 +3381,7 @@ static inline struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev,
 
 static inline void skb_free_frag(void *addr)
 {
-	page_frag_free(addr);
+	page_frag_free_va(addr);
 }
 
 void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index fbdf5a1aabfe..3b70b6b071b9 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -323,7 +323,7 @@ static int cpu_map_kthread_run(void *data)
 
 			/* Bring struct page memory area to curr CPU. Read by
 			 * build_skb_around via page_is_pfmemalloc(), and when
-			 * freed written by page_frag_free call.
+ * freed written by page_frag_free_va call. */ prefetchw(page); } diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index ef0a02f12acc..373f3bc29fcb 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -69,9 +69,9 @@ void __page_frag_cache_drain(struct page *page, unsigned int count) } EXPORT_SYMBOL(__page_frag_cache_drain); -void *__page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align_mask) +void *__page_frag_alloc_va_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask) { int aligned_remaining, remaining; unsigned int size = PAGE_SIZE; @@ -144,16 +144,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, return nc->va + (size - aligned_remaining); } -EXPORT_SYMBOL(__page_frag_alloc_align); +EXPORT_SYMBOL(__page_frag_alloc_va_align); /* * Frees a page fragment allocated out of either a compound or order 0 page. */ -void page_frag_free(void *addr) +void page_frag_free_va(void *addr) { struct page *page = virt_to_head_page(addr); if (unlikely(put_page_testzero(page))) free_unref_page(page, compound_order(page)); } -EXPORT_SYMBOL(page_frag_free); +EXPORT_SYMBOL(page_frag_free_va); diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c index 755d66af9fd4..50166a059c7d 100644 --- a/mm/page_frag_test.c +++ b/mm/page_frag_test.c @@ -276,7 +276,7 @@ static int page_frag_pop_thread(void *arg) if (obj) { nr--; - page_frag_free(obj); + page_frag_free_va(obj); } else { cond_resched(); } @@ -304,17 +304,20 @@ static int page_frag_push_thread(void *arg) int ret; if (test_align) - va = page_frag_alloc_align(&test_frag, test_alloc_len, - GFP_KERNEL, SMP_CACHE_BYTES); + va = page_frag_alloc_va_align(&test_frag, + test_alloc_len, + GFP_KERNEL, + SMP_CACHE_BYTES); else - va = page_frag_alloc(&test_frag, test_alloc_len, GFP_KERNEL); + va = page_frag_alloc_va(&test_frag, test_alloc_len, + GFP_KERNEL); if (!va) continue; ret = objpool_push(va, pool); if 
(ret) { - page_frag_free(va); + page_frag_free_va(va); cond_resched(); } else { nr--; diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 83f8cd8aa2d1..4b8acd967793 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -314,8 +314,8 @@ void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) fragsz = SKB_DATA_ALIGN(fragsz); local_lock_nested_bh(&napi_alloc_cache.bh_lock); - data = __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, - align_mask); + data = __page_frag_alloc_va_align(&nc->page, fragsz, GFP_ATOMIC, + align_mask); local_unlock_nested_bh(&napi_alloc_cache.bh_lock); return data; @@ -330,8 +330,8 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache); fragsz = SKB_DATA_ALIGN(fragsz); - data = __page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, - align_mask); + data = __page_frag_alloc_va_align(nc, fragsz, GFP_ATOMIC, + align_mask); } else { local_bh_disable(); data = __napi_alloc_frag_align(fragsz, align_mask); @@ -748,14 +748,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len, if (in_hardirq() || irqs_disabled()) { nc = this_cpu_ptr(&netdev_alloc_cache); - data = page_frag_alloc(nc, len, gfp_mask); + data = page_frag_alloc_va(nc, len, gfp_mask); pfmemalloc = nc->pfmemalloc; } else { local_bh_disable(); local_lock_nested_bh(&napi_alloc_cache.bh_lock); nc = this_cpu_ptr(&napi_alloc_cache.page); - data = page_frag_alloc(nc, len, gfp_mask); + data = page_frag_alloc_va(nc, len, gfp_mask); pfmemalloc = nc->pfmemalloc; local_unlock_nested_bh(&napi_alloc_cache.bh_lock); @@ -845,7 +845,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len) } else { len = SKB_HEAD_ALIGN(len); - data = page_frag_alloc(&nc->page, len, gfp_mask); + data = page_frag_alloc_va(&nc->page, len, gfp_mask); pfmemalloc = nc->page.pfmemalloc; } local_unlock_nested_bh(&napi_alloc_cache.bh_lock); diff --git a/net/core/xdp.c 
b/net/core/xdp.c index 022c12059cf2..23b318459a01 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -389,7 +389,7 @@ void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, page_pool_put_full_page(page->pp, page, napi_direct); break; case MEM_TYPE_PAGE_SHARED: - page_frag_free(data); + page_frag_free_va(data); break; case MEM_TYPE_PAGE_ORDER0: page = virt_to_page(data); /* Assumes order0 page*/ diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index c3913d8a50d3..dccb0353ee84 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -33,8 +33,8 @@ struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_ data_align = umax(data_align, L1_CACHE_BYTES); mutex_lock(&call->conn->tx_data_alloc_lock); - buf = page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp, - data_align); + buf = page_frag_alloc_va_align(&call->conn->tx_data_alloc, total, gfp, + data_align); mutex_unlock(&call->conn->tx_data_alloc_lock); if (!buf) { kfree(txb); @@ -96,17 +96,18 @@ struct rxrpc_txbuf *rxrpc_alloc_ack_txbuf(struct rxrpc_call *call, size_t sack_s if (!txb) return NULL; - buf = page_frag_alloc(&call->local->tx_alloc, - sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp); + buf = page_frag_alloc_va(&call->local->tx_alloc, + sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp); if (!buf) { kfree(txb); return NULL; } if (sack_size) { - buf2 = page_frag_alloc(&call->local->tx_alloc, sack_size, gfp); + buf2 = page_frag_alloc_va(&call->local->tx_alloc, sack_size, + gfp); if (!buf2) { - page_frag_free(buf); + page_frag_free_va(buf); kfree(txb); return NULL; } @@ -180,7 +181,7 @@ static void rxrpc_free_txbuf(struct rxrpc_txbuf *txb) rxrpc_txbuf_free); for (i = 0; i < txb->nr_kvec; i++) if (txb->kvec[i].iov_base) - page_frag_free(txb->kvec[i].iov_base); + page_frag_free_va(txb->kvec[i].iov_base); kfree(txb); atomic_dec(&rxrpc_nr_txbuf); } diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 6b3f01beb294..42d20412c1c3 
100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1222,8 +1222,8 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp, /* The stream record marker is copied into a temporary page * fragment buffer so that it can be included in rq_bvec. */ - buf = page_frag_alloc(&svsk->sk_frag_cache, sizeof(marker), - GFP_KERNEL); + buf = page_frag_alloc_va(&svsk->sk_frag_cache, sizeof(marker), + GFP_KERNEL); if (!buf) return -ENOMEM; memcpy(buf, &marker, sizeof(marker)); @@ -1235,7 +1235,7 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp, iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, rqstp->rq_bvec, 1 + count, sizeof(marker) + rqstp->rq_res.len); ret = sock_sendmsg(svsk->sk_sock, &msg); - page_frag_free(buf); + page_frag_free_va(buf); if (ret < 0) return ret; *sentp += ret;

From patchwork Tue Jul 9 13:27:30 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v10 05/15] mm: page_frag: avoid caller accessing 'page_frag_cache' directly
Date: Tue, 9 Jul 2024 21:27:30 +0800
Message-ID: <20240709132741.47751-6-linyunsheng@huawei.com>
In-Reply-To: <20240709132741.47751-1-linyunsheng@huawei.com>

Use the appropriate page_frag API instead of callers accessing 'page_frag_cache' directly.
CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- drivers/vhost/net.c | 2 +- include/linux/page_frag_cache.h | 10 ++++++++++ mm/page_frag_test.c | 2 +- net/core/skbuff.c | 6 +++--- net/rxrpc/conn_object.c | 4 +--- net/rxrpc/local_object.c | 4 +--- net/sunrpc/svcsock.c | 6 ++---- 7 files changed, 19 insertions(+), 15 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 6691fac01e0d..b2737dc0dc50 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -1325,7 +1325,7 @@ static int vhost_net_open(struct inode *inode, struct file *f) vqs[VHOST_NET_VQ_RX]); f->private_data = n; - n->pf_cache.va = NULL; + page_frag_cache_init(&n->pf_cache); return 0; } diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 185d875e3e6b..0ba96b7b64ad 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -26,6 +26,16 @@ struct page_frag_cache { bool pfmemalloc; }; +static inline void page_frag_cache_init(struct page_frag_cache *nc) +{ + nc->va = NULL; +} + +static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) +{ + return !!nc->pfmemalloc; +} + void page_frag_cache_drain(struct page_frag_cache *nc); void __page_frag_cache_drain(struct page *page, unsigned int count); void *__page_frag_alloc_va_align(struct page_frag_cache *nc, diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c index 50166a059c7d..0d47235b5cf2 100644 --- a/mm/page_frag_test.c +++ b/mm/page_frag_test.c @@ -340,7 +340,7 @@ static int __init page_frag_test_init(void) u64 duration; int ret; - test_frag.va = NULL; + page_frag_cache_init(&test_frag); atomic_set(&nthreads, 2); init_completion(&wait); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 4b8acd967793..76a473b1072d 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -749,14 +749,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len, if (in_hardirq() || irqs_disabled()) { nc = this_cpu_ptr(&netdev_alloc_cache); 
data = page_frag_alloc_va(nc, len, gfp_mask); - pfmemalloc = nc->pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(nc); } else { local_bh_disable(); local_lock_nested_bh(&napi_alloc_cache.bh_lock); nc = this_cpu_ptr(&napi_alloc_cache.page); data = page_frag_alloc_va(nc, len, gfp_mask); - pfmemalloc = nc->pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(nc); local_unlock_nested_bh(&napi_alloc_cache.bh_lock); local_bh_enable(); @@ -846,7 +846,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len) len = SKB_HEAD_ALIGN(len); data = page_frag_alloc_va(&nc->page, len, gfp_mask); - pfmemalloc = nc->page.pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(&nc->page); } local_unlock_nested_bh(&napi_alloc_cache.bh_lock); diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 1539d315afe7..694c4df7a1a3 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -337,9 +337,7 @@ static void rxrpc_clean_up_connection(struct work_struct *work) */ rxrpc_purge_queue(&conn->rx_queue); - if (conn->tx_data_alloc.va) - __page_frag_cache_drain(virt_to_page(conn->tx_data_alloc.va), - conn->tx_data_alloc.pagecnt_bias); + page_frag_cache_drain(&conn->tx_data_alloc); call_rcu(&conn->rcu, rxrpc_rcu_free_connection); } diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 504453c688d7..a8cffe47cf01 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -452,9 +452,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local) #endif rxrpc_purge_queue(&local->rx_queue); rxrpc_purge_client_connections(local); - if (local->tx_alloc.va) - __page_frag_cache_drain(virt_to_page(local->tx_alloc.va), - local->tx_alloc.pagecnt_bias); + page_frag_cache_drain(&local->tx_alloc); } /* diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 42d20412c1c3..4b1e87187614 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1609,7 +1609,6 @@ static void svc_tcp_sock_detach(struct svc_xprt *xprt) 
static void svc_sock_free(struct svc_xprt *xprt) { struct svc_sock *svsk = container_of(xprt, struct svc_sock, sk_xprt); - struct page_frag_cache *pfc = &svsk->sk_frag_cache; struct socket *sock = svsk->sk_sock; trace_svcsock_free(svsk, sock); @@ -1619,8 +1618,7 @@ static void svc_sock_free(struct svc_xprt *xprt) sockfd_put(sock); else sock_release(sock); - if (pfc->va) - __page_frag_cache_drain(virt_to_head_page(pfc->va), - pfc->pagecnt_bias); + + page_frag_cache_drain(&svsk->sk_frag_cache); kfree(svsk); }

From patchwork Tue Jul 9 13:27:32 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v10 07/15] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
Date: Tue, 9 Jul 2024 21:27:32 +0800
Message-ID: <20240709132741.47751-8-linyunsheng@huawei.com>
In-Reply-To: <20240709132741.47751-1-linyunsheng@huawei.com>

Currently there is one 'struct page_frag' for every 'struct sock' and 'struct task_struct'; we are about to replace 'struct page_frag' with 'struct page_frag_cache' for them. Before beginning the replacement, we need to ensure that the size of 'struct page_frag_cache' is no bigger than the size of 'struct page_frag', as there may be tens of thousands of 'struct sock' and 'struct task_struct' instances in the system. By OR'ing the page order and the pfmemalloc bit into the lower bits of 'va', instead of using a 'u16' or 'u32' for the page size and a 'u8' for pfmemalloc, we avoid wasting 3 or 5 bytes. Since the page address, pfmemalloc bit and order are unchanged for the same page in the same 'page_frag_cache' instance, it makes sense to pack them together. After this patch, the size of 'struct page_frag_cache' should be the same as the size of 'struct page_frag'.
CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/page_frag_cache.h | 65 ++++++++++++++++++++++++++++----- mm/page_frag_cache.c | 47 +++++++++++------------- 2 files changed, 77 insertions(+), 35 deletions(-) diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 0ba96b7b64ad..87c3eb728f95 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -4,6 +4,8 @@ #define _LINUX_PAGE_FRAG_CACHE_H #include +#include +#include #include #include #include @@ -11,29 +13,72 @@ #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) -struct page_frag_cache { - void *va; +#define PAGE_FRAG_CACHE_ORDER_MASK GENMASK(7, 0) +#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT BIT(8) +#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 8 + +static inline unsigned long encode_aligned_va(void *va, unsigned int order, + bool pfmemalloc) +{ + BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK); + BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT >= PAGE_SHIFT); + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + return (unsigned long)va | order | + (pfmemalloc << PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT); +#else + return (unsigned long)va | + (pfmemalloc << PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT); +#endif +} + +static inline unsigned long encoded_page_order(unsigned long encoded_va) +{ #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + return encoded_va & PAGE_FRAG_CACHE_ORDER_MASK; +#else + return 0; +#endif +} + +static inline bool encoded_page_pfmemalloc(unsigned long encoded_va) +{ + return encoded_va & PAGE_FRAG_CACHE_PFMEMALLOC_BIT; +} + +static inline void *encoded_page_address(unsigned long encoded_va) +{ + return (void *)(encoded_va & PAGE_MASK); +} + +struct page_frag_cache { + /* encoded_va consists of the virtual address, pfmemalloc bit and order + * of a page. 
+ */ + unsigned long encoded_va; + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32) __u16 remaining; - __u16 size; + __u16 pagecnt_bias; #else __u32 remaining; + __u32 pagecnt_bias; #endif - /* we maintain a pagecount bias, so that we dont dirty cache line - * containing page->_refcount every time we allocate a fragment. - */ - unsigned int pagecnt_bias; - bool pfmemalloc; }; static inline void page_frag_cache_init(struct page_frag_cache *nc) { - nc->va = NULL; + nc->encoded_va = 0; } static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) { - return !!nc->pfmemalloc; + return encoded_page_pfmemalloc(nc->encoded_va); +} + +static inline unsigned int page_frag_cache_page_size(unsigned long encoded_va) +{ + return PAGE_SIZE << encoded_page_order(encoded_va); } void page_frag_cache_drain(struct page_frag_cache *nc); diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 373f3bc29fcb..02e4ec92f948 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -22,7 +22,7 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, gfp_t gfp_mask) { - unsigned int page_size = PAGE_FRAG_CACHE_MAX_SIZE; + unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER; struct page *page = NULL; gfp_t gfp = gfp_mask; @@ -35,28 +35,27 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, if (unlikely(!page)) { page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); if (unlikely(!page)) { - nc->va = NULL; + nc->encoded_va = 0; return NULL; } - page_size = PAGE_SIZE; + order = 0; } -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - nc->size = page_size; -#endif - nc->va = page_address(page); + nc->encoded_va = encode_aligned_va(page_address(page), order, + page_is_pfmemalloc(page)); return page; } void page_frag_cache_drain(struct page_frag_cache *nc) { - if (!nc->va) + if (!nc->encoded_va) return; - __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); - nc->va = NULL; + 
__page_frag_cache_drain(virt_to_head_page((void *)nc->encoded_va), + nc->pagecnt_bias); + nc->encoded_va = 0; } EXPORT_SYMBOL(page_frag_cache_drain); @@ -73,46 +72,44 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask) { + unsigned long encoded_va = nc->encoded_va; int aligned_remaining, remaining; - unsigned int size = PAGE_SIZE; + unsigned int size; struct page *page; - if (unlikely(!nc->va)) { + if (unlikely(!encoded_va)) { refill: page = __page_frag_cache_refill(nc, gfp_mask); if (!page) return NULL; -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif + encoded_va = nc->encoded_va; + size = page_frag_cache_page_size(encoded_va); + /* Even if we own the page, we do not use atomic_set(). * This would break get_page_unless_zero() users. */ page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); /* reset page count bias and remaining to start of new frag */ - nc->pfmemalloc = page_is_pfmemalloc(page); nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; nc->remaining = size; } -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif + size = page_frag_cache_page_size(encoded_va); aligned_remaining = nc->remaining & align_mask; remaining = aligned_remaining - fragsz; if (unlikely(remaining < 0)) { - page = virt_to_page(nc->va); + page = virt_to_page((void *)encoded_va); if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) goto refill; - if (unlikely(nc->pfmemalloc)) { - free_unref_page(page, compound_order(page)); + if (unlikely(encoded_page_pfmemalloc(encoded_va))) { + VM_BUG_ON(compound_order(page) != + encoded_page_order(encoded_va)); + free_unref_page(page, encoded_page_order(encoded_va)); goto refill; } @@ -142,7 +139,7 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc, nc->pagecnt_bias--; nc->remaining = remaining; - return nc->va + (size - 
aligned_remaining); + return encoded_page_address(encoded_va) + (size - aligned_remaining); } EXPORT_SYMBOL(__page_frag_alloc_va_align);

From patchwork Tue Jul 9 13:27:33 2024
From: Yunsheng Lin
Subject: [PATCH net-next v10 08/15] mm: page_frag: some minor refactoring before adding new API
Date: Tue, 9 Jul 2024 21:27:33 +0800
Message-ID: <20240709132741.47751-9-linyunsheng@huawei.com>
In-Reply-To: <20240709132741.47751-1-linyunsheng@huawei.com>
References: <20240709132741.47751-1-linyunsheng@huawei.com>

Refactor common code from __page_frag_alloc_va_align() to __page_frag_cache_refill(), so that the new API can make use of it.
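The refactoring described above pulls the "can the current page be reused?" logic out of the allocation fast path into a recharge helper that the refill path tries first. A minimal userspace sketch of that control flow, where all names and the refcounting scheme are illustrative stand-ins rather than the kernel's API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the refill/recharge split: refill() first asks recharge()
 * whether the current page can be reused, and only takes a fresh page
 * when it cannot. Illustrative only, not kernel code. */
struct toy_cache {
	int page;       /* 0 = no page yet; otherwise an id of the current page */
	int refcount;   /* outstanding references on the current page */
	int bias;       /* references the cache itself still holds */
};

static bool toy_recharge(struct toy_cache *nc)
{
	/* Reuse is only safe once every fragment handed out was freed,
	 * i.e. the refcount drops to zero after subtracting our bias. */
	nc->refcount -= nc->bias;
	if (nc->refcount != 0)
		return false;
	nc->refcount = 64;      /* re-arm the refcount for reuse */
	return true;
}

static int next_page_id = 1;

static int toy_refill(struct toy_cache *nc)
{
	if (!(nc->page && toy_recharge(nc))) {
		nc->page = next_page_id++;   /* grab a fresh page */
		nc->refcount = 64;
	}
	nc->bias = 64;                       /* reset the cache's bias */
	return nc->page;
}
```

In the real patch, __page_frag_cache_refill() plays the role of toy_refill() and __page_frag_cache_recharge() the role of toy_recharge(): the page is reused only when its reference count, minus the bias the cache still holds, drops to zero.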
CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/page_frag_cache.h | 2 +- mm/page_frag_cache.c | 96 +++++++++++++++++---------------- 2 files changed, 50 insertions(+), 48 deletions(-) diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 87c3eb728f95..71e08db1eb2f 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -68,7 +68,7 @@ struct page_frag_cache { static inline void page_frag_cache_init(struct page_frag_cache *nc) { - nc->encoded_va = 0; + memset(nc, 0, sizeof(*nc)); } static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 02e4ec92f948..73164d2482f8 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -19,6 +19,28 @@ #include #include "internal.h" +static struct page *__page_frag_cache_recharge(struct page_frag_cache *nc) +{ + unsigned long encoded_va = nc->encoded_va; + struct page *page; + + page = virt_to_page((void *)encoded_va); + if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) + return NULL; + + if (unlikely(encoded_page_pfmemalloc(encoded_va))) { + VM_BUG_ON(compound_order(page) != + encoded_page_order(encoded_va)); + free_unref_page(page, encoded_page_order(encoded_va)); + return NULL; + } + + /* OK, page count is 0, we can safely set it */ + set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); + + return page; +} + static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, gfp_t gfp_mask) { @@ -26,6 +48,14 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, struct page *page = NULL; gfp_t gfp = gfp_mask; + if (likely(nc->encoded_va)) { + page = __page_frag_cache_recharge(nc); + if (page) { + order = encoded_page_order(nc->encoded_va); + goto out; + } + } + #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; @@ -35,7 +65,7 @@ static 
struct page *__page_frag_cache_refill(struct page_frag_cache *nc, if (unlikely(!page)) { page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); if (unlikely(!page)) { - nc->encoded_va = 0; + memset(nc, 0, sizeof(*nc)); return NULL; } @@ -45,6 +75,16 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, nc->encoded_va = encode_aligned_va(page_address(page), order, page_is_pfmemalloc(page)); + /* Even if we own the page, we do not use atomic_set(). + * This would break get_page_unless_zero() users. + */ + page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); + +out: + /* reset page count bias and remaining to start of new frag */ + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + nc->remaining = PAGE_SIZE << order; + return page; } @@ -55,7 +95,7 @@ void page_frag_cache_drain(struct page_frag_cache *nc) __page_frag_cache_drain(virt_to_head_page((void *)nc->encoded_va), nc->pagecnt_bias); - nc->encoded_va = 0; + memset(nc, 0, sizeof(*nc)); } EXPORT_SYMBOL(page_frag_cache_drain); @@ -72,53 +112,15 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask) { - unsigned long encoded_va = nc->encoded_va; - int aligned_remaining, remaining; - unsigned int size; - struct page *page; - - if (unlikely(!encoded_va)) { -refill: - page = __page_frag_cache_refill(nc, gfp_mask); - if (!page) - return NULL; - - encoded_va = nc->encoded_va; - size = page_frag_cache_page_size(encoded_va); + unsigned int size = page_frag_cache_page_size(nc->encoded_va); + int aligned_remaining = nc->remaining & align_mask; + int remaining = aligned_remaining - fragsz; - /* Even if we own the page, we do not use atomic_set(). - * This would break get_page_unless_zero() users. 
- */ - page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); - - /* reset page count bias and remaining to start of new frag */ - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - nc->remaining = size; - } - - size = page_frag_cache_page_size(encoded_va); - - aligned_remaining = nc->remaining & align_mask; - remaining = aligned_remaining - fragsz; if (unlikely(remaining < 0)) { - page = virt_to_page((void *)encoded_va); - - if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) - goto refill; - - if (unlikely(encoded_page_pfmemalloc(encoded_va))) { - VM_BUG_ON(compound_order(page) != - encoded_page_order(encoded_va)); - free_unref_page(page, encoded_page_order(encoded_va)); - goto refill; - } - - /* OK, page count is 0, we can safely set it */ - set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); + if (!__page_frag_cache_refill(nc, gfp_mask)) + return NULL; - /* reset page count bias and remaining to start of new frag */ - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - nc->remaining = size; + size = page_frag_cache_page_size(nc->encoded_va); aligned_remaining = size; remaining = aligned_remaining - fragsz; @@ -139,7 +141,7 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc, nc->pagecnt_bias--; nc->remaining = remaining; - return encoded_page_address(encoded_va) + (size - aligned_remaining); + return encoded_page_address(nc->encoded_va) + (size - aligned_remaining); } EXPORT_SYMBOL(__page_frag_alloc_va_align); From patchwork Tue Jul 9 13:27:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13727979 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id AD755C2BD09 for ; Tue, 9 Jul 2024 13:31:31 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E8F0C6B00C5; Tue, 9 Jul 2024 09:31:23 
From: Yunsheng Lin
Subject: [PATCH net-next v10 09/15] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
Date: Tue, 9 Jul 2024 21:27:34 +0800
Message-ID: <20240709132741.47751-10-linyunsheng@huawei.com>
In-Reply-To: <20240709132741.47751-1-linyunsheng@huawei.com>
References: <20240709132741.47751-1-linyunsheng@huawei.com>
There are more new APIs calling __page_frag_cache_refill() in this patchset, which may leave the compiler unable to inline __page_frag_cache_refill() into __page_frag_alloc_va_align(). Not being able to do the inlining seems to cause some noticeable performance degradation on an arm64 system with 64K PAGE_SIZE after adding the new APIs calling __page_frag_cache_refill(). There seems to be about a 24-byte binary size increase for __page_frag_cache_refill() on an arm64 system with 64K PAGE_SIZE.
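The cost being avoided can be pictured with a toy userspace model (the names are illustrative stand-ins, not the kernel API): alloc_pages_node() is a wrapper that first checks the nid argument against the NUMA_NO_NODE sentinel before handing off to the underlying allocator, so calling the underlying allocator directly with an already-resolved node id skips that branch and the code it generates:

```c
#include <assert.h>

#define TOY_NO_NODE (-1)

/* Toy stand-ins for the allocator entry points; not the real kernel API. */
static int toy_numa_mem_id(void)
{
	return 0;               /* models numa_mem_id(): the local node */
}

static int toy_alloc_pages(int nid)
{
	return nid;             /* models __alloc_pages(): nid must already be resolved */
}

static int toy_alloc_pages_node(int nid)
{
	if (nid == TOY_NO_NODE) /* the sentinel check the direct call sidesteps */
		nid = toy_numa_mem_id();
	return toy_alloc_pages(nid);
}
```

Passing numa_mem_id() straight to the lower-level entry point, as the diff in this patch does, makes the sentinel check statically unnecessary.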
By disassembling with gdb, it seems we can get more than a 100-byte decrease in binary size by using __alloc_pages() to replace alloc_pages_node(), as there seems to be some unnecessary checking for nid being NUMA_NO_NODE, especially while page_frag is still part of the mm system.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 mm/page_frag_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 73164d2482f8..b2cb4473db54 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -59,11 +59,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
+	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+			     numa_mem_id(), NULL);
 #endif
 	if (unlikely(!page)) {
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
 		if (unlikely(!page)) {
 			memset(nc, 0, sizeof(*nc));
 			return NULL;

From patchwork Tue Jul 9 13:27:36 2024
From: Yunsheng Lin
Subject: [PATCH net-next v10 11/15] mm: page_frag: introduce prepare/probe/commit API
Date: Tue, 9 Jul 2024 21:27:36 +0800
Message-ID: <20240709132741.47751-12-linyunsheng@huawei.com>
In-Reply-To: <20240709132741.47751-1-linyunsheng@huawei.com>
References: <20240709132741.47751-1-linyunsheng@huawei.com>
There are many use cases that need a minimum amount of memory in order to make forward progress, but that perform better if more memory is available, or that need to probe the cache info in order to use any available memory for frag coalescing reasons. Currently the skb_page_frag_refill() API is used to handle the above use cases, but the caller needs to know about the internal details and access the data fields of 'struct page_frag' to meet the requirements of the above use cases, and its implementation is similar to the one in the mm subsystem.
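The prepare/commit flow this patch introduces for such use cases can be sketched as a toy userspace model: prepare guarantees at least the requested minimum (refilling if needed) and reports back how much is actually available, while commit records only what the caller really used. All names, sizes, and the backing-buffer scheme here are illustrative, not the kernel API:

```c
#include <assert.h>
#include <stdlib.h>

#define TOY_PAGE_SIZE 4096

/* Toy model of a frag cache: a single backing buffer plus a count of
 * bytes not yet handed out. Illustrative only, not kernel code. */
struct toy_frag_cache {
	char *buf;          /* backing "page" */
	size_t size;        /* total size of the backing buffer */
	size_t remaining;   /* bytes not yet handed out */
};

/* Ensure at least *fragsz bytes are available (refilling if needed),
 * then report the full available size back through *fragsz. */
static char *toy_prepare(struct toy_frag_cache *nc, size_t *fragsz)
{
	if (*fragsz > TOY_PAGE_SIZE)
		return NULL;                 /* can never satisfy this request */
	if (*fragsz > nc->remaining) {       /* refill with a fresh buffer */
		free(nc->buf);
		nc->buf = malloc(TOY_PAGE_SIZE);
		nc->size = TOY_PAGE_SIZE;
		nc->remaining = TOY_PAGE_SIZE;
	}
	*fragsz = nc->remaining;             /* report everything available */
	return nc->buf + (nc->size - nc->remaining);
}

/* Record how much of the prepared space the caller actually used. */
static void toy_commit(struct toy_frag_cache *nc, size_t used)
{
	nc->remaining -= used;
}
```

A caller that needs at least 100 bytes may be handed the whole remaining buffer, write up to that much, and then commit only the bytes it consumed; the rest stays available for the next caller.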
To unify those two page_frag implementations, introduce a prepare API to ensure a minimum amount of memory is satisfied and to return how much memory is actually available to the caller, and a probe API to report the currently available memory to the caller without doing a cache refill. The caller then needs to either call the commit API to report how much memory it actually used, or not do so if it decides not to use any memory. CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/page_frag_cache.h | 76 +++++++++++++++++++++ mm/page_frag_cache.c | 114 ++++++++++++++++++++++++++++++++ 2 files changed, 190 insertions(+) diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 71e08db1eb2f..cd60e08f6d44 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -8,6 +8,8 @@ #include #include #include +#include +#include #include #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) @@ -83,6 +85,9 @@ static inline unsigned int page_frag_cache_page_size(unsigned long encoded_va) void page_frag_cache_drain(struct page_frag_cache *nc); void __page_frag_cache_drain(struct page *page, unsigned int count); +struct page *page_frag_alloc_pg(struct page_frag_cache *nc, + unsigned int *offset, unsigned int fragsz, + gfp_t gfp); void *__page_frag_alloc_va_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask); @@ -95,12 +100,83 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc, return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align); } +static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc) +{ + return page_frag_cache_page_size(nc->encoded_va) - nc->remaining; +} + static inline void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask) { return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, ~0u); } +void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, unsigned int *fragsz, +
gfp_t gfp); + +static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc, + unsigned int *fragsz, + gfp_t gfp, + unsigned int align) +{ + WARN_ON_ONCE(!is_power_of_2(align)); + nc->remaining = nc->remaining & -align; + return page_frag_alloc_va_prepare(nc, fragsz, gfp); +} + +struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc, + unsigned int *offset, + unsigned int *fragsz, gfp_t gfp); + +struct page *page_frag_alloc_prepare(struct page_frag_cache *nc, + unsigned int *offset, + unsigned int *fragsz, + void **va, gfp_t gfp); + +static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc, + unsigned int *offset, + unsigned int *fragsz, + void **va) +{ + unsigned long encoded_va; + struct page *page; + + VM_BUG_ON(!*fragsz); + if (unlikely(nc->remaining < *fragsz)) + return NULL; + + *fragsz = nc->remaining; + encoded_va = nc->encoded_va; + *va = encoded_page_address(encoded_va); + page = virt_to_page(*va); + *offset = page_frag_cache_page_size(encoded_va) - *fragsz; + *va += *offset; + + return page; +} + +static inline void page_frag_alloc_commit(struct page_frag_cache *nc, + unsigned int fragsz) +{ + VM_BUG_ON(fragsz > nc->remaining || !nc->pagecnt_bias); + nc->pagecnt_bias--; + nc->remaining -= fragsz; +} + +static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc, + unsigned int fragsz) +{ + VM_BUG_ON(fragsz > nc->remaining); + nc->remaining -= fragsz; +} + +static inline void page_frag_alloc_abort(struct page_frag_cache *nc, + unsigned int fragsz) +{ + nc->pagecnt_bias++; + nc->remaining += fragsz; +} + void page_frag_free_va(void *addr); #endif diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index b2cb4473db54..b21001bb4087 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -88,6 +88,120 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, return page; } +void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, + unsigned int *fragsz, gfp_t 
gfp) +{ + unsigned long encoded_va; + unsigned int remaining; + + remaining = nc->remaining; + if (unlikely(*fragsz > remaining)) { + if (unlikely(!__page_frag_cache_refill(nc, gfp) || + *fragsz > PAGE_SIZE)) + return NULL; + + remaining = nc->remaining; + } + + encoded_va = nc->encoded_va; + *fragsz = remaining; + return encoded_page_address(encoded_va) + + page_frag_cache_page_size(encoded_va) - remaining; +} +EXPORT_SYMBOL(page_frag_alloc_va_prepare); + +struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc, + unsigned int *offset, + unsigned int *fragsz, gfp_t gfp) +{ + unsigned long encoded_va; + unsigned int remaining; + struct page *page; + + remaining = nc->remaining; + if (unlikely(*fragsz > remaining)) { + if (unlikely(*fragsz > PAGE_SIZE)) { + *fragsz = 0; + return NULL; + } + + page = __page_frag_cache_refill(nc, gfp); + remaining = nc->remaining; + encoded_va = nc->encoded_va; + } else { + encoded_va = nc->encoded_va; + page = virt_to_page((void *)encoded_va); + } + + *offset = page_frag_cache_page_size(encoded_va) - remaining; + *fragsz = remaining; + + return page; +} +EXPORT_SYMBOL(page_frag_alloc_pg_prepare); + +struct page *page_frag_alloc_prepare(struct page_frag_cache *nc, + unsigned int *offset, + unsigned int *fragsz, + void **va, gfp_t gfp) +{ + unsigned long encoded_va; + unsigned int remaining; + struct page *page; + + remaining = nc->remaining; + if (unlikely(*fragsz > remaining)) { + if (unlikely(*fragsz > PAGE_SIZE)) { + *fragsz = 0; + return NULL; + } + + page = __page_frag_cache_refill(nc, gfp); + remaining = nc->remaining; + encoded_va = nc->encoded_va; + } else { + encoded_va = nc->encoded_va; + page = virt_to_page((void *)encoded_va); + } + + *offset = page_frag_cache_page_size(encoded_va) - remaining; + *fragsz = remaining; + *va = encoded_page_address(encoded_va) + *offset; + + return page; +} +EXPORT_SYMBOL(page_frag_alloc_prepare); + +struct page *page_frag_alloc_pg(struct page_frag_cache *nc, + unsigned int 
*offset, unsigned int fragsz, + gfp_t gfp) +{ + struct page *page; + + if (unlikely(fragsz > nc->remaining)) { + if (unlikely(fragsz > PAGE_SIZE)) + return NULL; + + page = __page_frag_cache_refill(nc, gfp); + if (unlikely(!page)) + return NULL; + + *offset = 0; + } else { + unsigned long encoded_va = nc->encoded_va; + + page = virt_to_page((void *)encoded_va); + *offset = page_frag_cache_page_size(encoded_va) - + nc->remaining; + } + + nc->remaining -= fragsz; + nc->pagecnt_bias--; + + return page; +} +EXPORT_SYMBOL(page_frag_alloc_pg); + void page_frag_cache_drain(struct page_frag_cache *nc) { if (!nc->encoded_va) From patchwork Tue Jul 9 13:27:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13727981 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C6D02C2BD09 for ; Tue, 9 Jul 2024 13:31:39 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 62D556B00C9; Tue, 9 Jul 2024 09:31:31 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5B2826B00CA; Tue, 9 Jul 2024 09:31:31 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 47AA46B00CB; Tue, 9 Jul 2024 09:31:31 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 278A76B00C9 for ; Tue, 9 Jul 2024 09:31:31 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id D8A4C41991 for ; Tue, 9 Jul 2024 13:31:30 +0000 (UTC) X-FDA: 82320301140.05.36B0530 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by imf26.hostedemail.com (Postfix) 
From: Yunsheng Lin
Subject: [PATCH net-next v10 12/15] mm: page_frag: move 'struct page_frag_cache' to sched.h
Date: Tue, 9 Jul 2024 21:27:37 +0800
Message-ID: <20240709132741.47751-13-linyunsheng@huawei.com>
In-Reply-To: <20240709132741.47751-1-linyunsheng@huawei.com>
References: <20240709132741.47751-1-linyunsheng@huawei.com>
As the 'struct page_frag_cache' is going to replace the 'struct page_frag'
in sched.h, including page_frag_cache.h in sched.h causes a compiler error
due to the interdependence between mm_types.h and mm.h for asm-offsets.c,
see [1].

Avoid the above compiler error by moving the 'struct page_frag_cache' to
mm_types_task.h as suggested by Alexander, see [2].

1. https://lore.kernel.org/all/15623dac-9358-4597-b3ee-3694a5956920@gmail.com/
2. https://lore.kernel.org/all/CAKgT0UdH1yD=LSCXFJ=YM_aiA4OomD-2wXykO42bizaWMt_HOA@mail.gmail.com/

Suggested-by: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/mm_types_task.h   | 18 ++++++++++++++++++
 include/linux/page_frag_cache.h | 20 +-------------------
 2 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index a2f6179b672b..f2610112a642 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -8,6 +8,7 @@
  * (These are defined separately to decouple sched.h from mm_types.h as much as possible.)
  */
 
+#include
 #include
 #include
 
@@ -46,6 +47,23 @@ struct page_frag {
 #endif
 };
 
+#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
+#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
+struct page_frag_cache {
+	/* encoded_va consists of the virtual address, pfmemalloc bit and order
+	 * of a page.
+	 */
+	unsigned long encoded_va;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
+	__u16 remaining;
+	__u16 pagecnt_bias;
+#else
+	__u32 remaining;
+	__u32 pagecnt_bias;
+#endif
+};
+
 /* Track pages that require TLB flushes */
 struct tlbflush_unmap_batch {
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index cd60e08f6d44..e0d65b57ac80 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -3,18 +3,15 @@
 #ifndef _LINUX_PAGE_FRAG_CACHE_H
 #define _LINUX_PAGE_FRAG_CACHE_H
 
-#include
 #include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
 
-#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
-#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
-
 #define PAGE_FRAG_CACHE_ORDER_MASK		GENMASK(7, 0)
 #define PAGE_FRAG_CACHE_PFMEMALLOC_BIT		BIT(8)
 #define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT	8
 
@@ -53,21 +50,6 @@ static inline void *encoded_page_address(unsigned long encoded_va)
 	return (void *)(encoded_va & PAGE_MASK);
 }
 
-struct page_frag_cache {
-	/* encoded_va consists of the virtual address, pfmemalloc bit and order
-	 * of a page.
-	 */
-	unsigned long encoded_va;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
-	__u16 remaining;
-	__u16 pagecnt_bias;
-#else
-	__u32 remaining;
-	__u32 pagecnt_bias;
-#endif
-};
-
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
 	memset(nc, 0, sizeof(*nc));

From patchwork Tue Jul 9 13:27:39 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13727982
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin , Alexander Duyck , Jonathan Corbet , Andrew Morton , ,
Subject: [PATCH net-next v10 14/15] mm: page_frag: update documentation for page_frag
Date: Tue, 9 Jul 2024 21:27:39 +0800
Message-ID: <20240709132741.47751-15-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240709132741.47751-1-linyunsheng@huawei.com>
References: <20240709132741.47751-1-linyunsheng@huawei.com>
MIME-Version: 1.0

Update documentation about design,
implementation and API usage for page_frag.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 Documentation/mm/page_frags.rst | 163 +++++++++++++++++++++++++++++++-
 include/linux/page_frag_cache.h | 107 +++++++++++++++++++++
 mm/page_frag_cache.c            |  77 ++++++++++++++-
 3 files changed, 344 insertions(+), 3 deletions(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 503ca6cdb804..6a4ac2616098 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
 ==============
 Page fragments
 ==============
@@ -40,4 +42,163 @@ page via a single call. The advantage to doing this is that it allows for
 cleaning up the multiple references that were added to a page in order to
 avoid calling get_page per allocation.
 
-Alexander Duyck, Nov 29, 2016.
+
+Architecture overview
+=====================
+
+.. code-block:: none
+
+    +----------------------+
+    | page_frag API caller |
+    +----------------------+
+                |
+                |
+                v
+    +---------------------------------------------------------------+
+    |                     request page fragment                     |
+    +---------------------------------------------------------------+
+         |                                 |                      |
+         |                                 |                      |
+         |                          Cache not enough              |
+         |                                 |                      |
+         |                                 v                      |
+    Cache empty                   +-----------------+             |
+         |                        | drain old cache |             |
+         |                        +-----------------+             |
+         |                                 |                      |
+         v_________________________________v                      |
+                           |                                      |
+                           |                                      |
+          _________________v_______________                       |
+         |                                 |              Cache is enough
+         |                                 |                      |
+  PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE     |                      |
+         |                                 |                      |
+         |      PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE             |
+         v                                 |                      |
+  +----------------------------------+     |                      |
+  | refill cache with order > 0 page |     |                      |
+  +----------------------------------+     |                      |
+         |                                 |                      |
+         |                                 |                      |
+         |             Refill failed       |                      |
+         |                   |             |                      |
+         |                   v             v                      |
+         |     +------------------------------------+             |
+         |     |   refill cache with order 0 page   |             |
+         |     +------------------------------------+             |
+         |                      |                                 |
+  Refill succeed                |                                 |
+         |               Refill succeed                           |
+         |                      |                                 |
+         v                      v                                 v
+    +---------------------------------------------------------------+
+    |                 allocate fragment from cache                  |
+    +---------------------------------------------------------------+
+
+API interface
+=============
+As the design and implementation of the page_frag API implies, the allocation
+side does not allow concurrent calling. Instead it is assumed that the caller
+must ensure there are no concurrent alloc calls to the same page_frag_cache
+instance, either by using its own lock or by relying on a lockless guarantee
+such as NAPI softirq.
+
+Depending on the alignment requirement, the page_frag API caller may call
+page_frag_alloc*_align*() to ensure the returned virtual address or offset of
+the page is aligned according to the 'align/alignment' parameter. Note that
+the size of the allocated fragment is not aligned; the caller needs to provide
+an aligned fragsz if there is an alignment requirement for the size of the
+fragment.
+
+Depending on the use case, callers expecting to deal with the virtual address,
+the page, or both may call page_frag_alloc_va*, page_frag_alloc_pg*, or the
+page_frag_alloc* API accordingly.
+
+There is also a use case that needs a minimum amount of memory in order to make
+forward progress, but performs better if more memory is available. Using the
+page_frag_alloc_prepare() and page_frag_alloc_commit() related APIs, the caller
+requests the minimum memory it needs and the prepare API returns the maximum
+size of the fragment available. The caller needs to either call the commit API
+to report how much memory it actually uses, or not do so if it decides not to
+use any memory.
+
+.. kernel-doc:: include/linux/page_frag_cache.h
+   :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
+                 page_frag_cache_page_offset page_frag_alloc_va
+                 page_frag_alloc_va_align page_frag_alloc_va_prepare_align
+                 page_frag_alloc_probe page_frag_alloc_commit
+                 page_frag_alloc_commit_noref page_frag_alloc_abort
+
+.. kernel-doc:: mm/page_frag_cache.c
+   :identifiers: __page_frag_alloc_va_align page_frag_alloc_pg
+                 page_frag_alloc_va_prepare page_frag_alloc_pg_prepare
+                 page_frag_alloc_prepare page_frag_cache_drain
+                 page_frag_free_va
+
+Coding examples
+===============
+
+Init & Drain API
+----------------
+
+.. code-block:: c
+
+    page_frag_cache_init(pfrag);
+    ...
+    page_frag_cache_drain(pfrag);
+
+
+Alloc & Free API
+----------------
+
+.. code-block:: c
+
+    void *va;
+
+    va = page_frag_alloc_va_align(pfrag, size, gfp, align);
+    if (!va)
+        goto do_error;
+
+    err = do_something(va, size);
+    if (err) {
+        page_frag_free_va(va);
+        goto do_error;
+    }
+
+Prepare & Commit API
+--------------------
+
+.. code-block:: c
+
+    unsigned int offset, size;
+    bool merge = true;
+    struct page *page;
+    void *va;
+
+    size = 32U;
+    page = page_frag_alloc_prepare(pfrag, &offset, &size, &va);
+    if (!page)
+        goto wait_for_space;
+
+    copy = min_t(unsigned int, copy, size);
+    if (!skb_can_coalesce(skb, i, page, offset)) {
+        if (i >= max_skb_frags)
+            goto new_segment;
+
+        merge = false;
+    }
+
+    copy = mem_schedule(copy);
+    if (!copy)
+        goto wait_for_space;
+
+    err = copy_from_iter_full_nocache(va, copy, iter);
+    if (err)
+        goto do_error;
+
+    if (merge) {
+        skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+        page_frag_alloc_commit_noref(pfrag, offset, copy);
+    } else {
+        skb_fill_page_desc(skb, i, page, offset, copy);
+        page_frag_alloc_commit(pfrag, offset, copy);
+    }
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index e0d65b57ac80..d1c4710392a8 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -50,11 +50,28 @@ static inline void *encoded_page_address(unsigned long encoded_va)
 	return (void *)(encoded_va & PAGE_MASK);
 }
 
+/**
+ * page_frag_cache_init() - Init page_frag cache.
+ * @nc: page_frag cache to init
+ *
+ * Inline helper to init the page_frag cache.
+ */
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
 	memset(nc, 0, sizeof(*nc));
 }
 
+/**
+ * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
+ * @nc: page_frag cache to check
+ *
+ * Used to check if the current page in the page_frag cache is pfmemalloc'ed.
+ * It has the same calling context expectation as the alloc API.
+ *
+ * Return:
+ * true if the current page in the page_frag cache is pfmemalloc'ed, otherwise
+ * return false.
+ */
 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
 	return encoded_page_pfmemalloc(nc->encoded_va);
@@ -74,6 +91,19 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 				 unsigned int fragsz, gfp_t gfp_mask,
 				 unsigned int align_mask);
 
+/**
+ * page_frag_alloc_va_align() - Alloc a page fragment with an alignment
+ * requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested alignment for the virtual address of the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before allocating a page fragment from
+ * the page_frag cache with an alignment requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 					     unsigned int fragsz,
 					     gfp_t gfp_mask, unsigned int align)
@@ -82,11 +112,32 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align);
 }
 
+/**
+ * page_frag_cache_page_offset() - Return the current page fragment's offset.
+ * @nc: page_frag cache to check
+ *
+ * The API is only used in net/sched/em_meta.c for historical reasons; do not
+ * use it for new callers unless there is a strong reason.
+ *
+ * Return:
+ * the offset of the current page fragment in the page_frag cache.
+ */
 static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
 {
 	return page_frag_cache_page_size(nc->encoded_va) - nc->remaining;
 }
 
+/**
+ * page_frag_alloc_va() - Alloc a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Get a page fragment from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
 				       unsigned int fragsz, gfp_t gfp_mask)
 {
@@ -96,6 +147,21 @@ static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
 void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
 				 unsigned int *fragsz, gfp_t gfp);
 
+/**
+ * page_frag_alloc_va_prepare_align() - Prepare allocating a page fragment
+ * with an alignment requirement.
+ * @nc: page_frag cache from which to prepare
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested alignment
+ *
+ * WARN_ON_ONCE() checking for @align before preparing an aligned page fragment
+ * with a minimum size of @fragsz; @fragsz is also used to report the maximum
+ * size of the page fragment the caller can use.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc,
 						     unsigned int *fragsz,
 						     gfp_t gfp,
@@ -115,6 +181,21 @@ struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
 				     unsigned int *offset,
 				     unsigned int *fragsz,
 				     void **va, gfp_t gfp);
 
+/**
+ * page_frag_alloc_probe - Probe the available page fragment.
+ * @nc: page_frag cache from which to probe
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @va: out as the virtual address of the returned page fragment
+ *
+ * Probe the currently available memory for the caller without refilling the
+ * cache. If no space is available in the page_frag cache, return NULL.
+ * If the requested space is available, up to @fragsz bytes may be added to the
+ * fragment using the commit API.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
 static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc,
 						 unsigned int *offset,
 						 unsigned int *fragsz,
@@ -137,6 +218,14 @@ static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc,
 	return page;
 }
 
+/**
+ * page_frag_alloc_commit - Commit allocating a page fragment.
+ * @nc: page_frag cache from which to commit
+ * @fragsz: size of the page fragment that has been used
+ *
+ * Commit the actual used size for the allocation that was either prepared or
+ * probed.
+ */
 static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
 					  unsigned int fragsz)
 {
@@ -145,6 +234,16 @@ static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
 	nc->remaining -= fragsz;
 }
 
+/**
+ * page_frag_alloc_commit_noref - Commit allocating a page fragment without
+ * taking a page refcount.
+ * @nc: page_frag cache from which to commit
+ * @fragsz: size of the page fragment that has been used
+ *
+ * Commit the alloc preparing or probing by passing the actual used size, but
+ * without taking a refcount. Mostly used for the fragment coalescing case
+ * when the current fragment can share the same refcount with the previous
+ * fragment.
+ */
 static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
 						unsigned int fragsz)
 {
@@ -152,6 +251,14 @@ static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
 	nc->remaining -= fragsz;
 }
 
+/**
+ * page_frag_alloc_abort - Abort the page fragment allocation.
+ * @nc: page_frag cache to which the page fragment is aborted back
+ * @fragsz: size of the page fragment to be aborted
+ *
+ * It is expected to be called from the same context as the alloc API.
+ * Mostly used for error handling cases where the fragment is no longer needed.
+ */
 static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
 					 unsigned int fragsz)
 {
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index b21001bb4087..31719fceb4fd 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -88,6 +88,18 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	return page;
 }
 
+/**
+ * page_frag_alloc_va_prepare() - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also used
+ * to report the maximum size of the page fragment the caller can use.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
 				 unsigned int *fragsz, gfp_t gfp)
 {
@@ -110,6 +122,19 @@ void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(page_frag_alloc_va_prepare);
 
+/**
+ * page_frag_alloc_pg_prepare - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also used
+ * to report the maximum size of the page fragment the caller can use.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
 struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
 					unsigned int *offset,
 					unsigned int *fragsz, gfp_t gfp)
@@ -140,6 +165,21 @@ struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(page_frag_alloc_pg_prepare);
 
+/**
+ * page_frag_alloc_prepare - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @va: out as the virtual address of the returned page fragment
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also used
+ * to report the maximum size of the page fragment. Return both the
+ * 'struct page' and the virtual address of the fragment to the caller.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
 struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
 				     unsigned int *offset,
 				     unsigned int *fragsz,
@@ -172,6 +212,18 @@ struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(page_frag_alloc_prepare);
 
+/**
+ * page_frag_alloc_pg - Allocate a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @offset: out as the offset of the page fragment
+ * @fragsz: the requested fragment size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Get a page fragment from the page_frag cache.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
 struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
 				unsigned int *offset, unsigned int fragsz,
 				gfp_t gfp)
@@ -202,6 +254,10 @@ struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(page_frag_alloc_pg);
 
+/**
+ * page_frag_cache_drain - Drain the current page from the page_frag cache.
+ * @nc: page_frag cache from which to drain
+ */
 void page_frag_cache_drain(struct page_frag_cache *nc)
 {
 	if (!nc->encoded_va)
@@ -222,6 +278,19 @@ void page_frag_cache_drain(struct page_frag_cache *nc)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
+/**
+ * __page_frag_alloc_va_align() - Alloc a page fragment with an alignment
+ * requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested alignment for the 'va'
+ *
+ * Get a page fragment from the page_frag cache with an alignment requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 				 unsigned int fragsz, gfp_t gfp_mask,
 				 unsigned int align_mask)
@@ -259,8 +328,12 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_alloc_va_align);
 
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
+/**
+ * page_frag_free_va - Free a page fragment.
+ * @addr: va of the page fragment to be freed
+ *
+ * Free a page fragment allocated out of either a compound or order 0 page by
+ * virtual address.
  */
 void page_frag_free_va(void *addr)
 {