From patchwork Tue Oct 1 07:58:44 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895543
From: Yunsheng Lin
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Yunsheng Lin,
 Alexander Duyck, Andrew Morton, Shuah Khan, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org
Subject: [PATCH net-next v19 01/14] mm: page_frag: add a test module for
 page_frag
Date: Tue, 1 Oct 2024 15:58:44 +0800
Message-Id: <20241001075858.48936-2-linyunsheng@huawei.com>
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>
MIME-Version: 1.0

The testing is done by ensuring that a fragment allocated from a
page_frag_cache instance is pushed into a ptr_ring
instance by a kthread bound to a specified CPU, while another kthread,
bound to a different specified CPU, pops the fragment from the ptr_ring
and frees it.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Reviewed-by: Alexander Duyck
---
 tools/testing/selftests/mm/Makefile           |   3 +
 tools/testing/selftests/mm/page_frag/Makefile |  18 ++
 .../selftests/mm/page_frag/page_frag_test.c   | 173 ++++++++++++++++++
 tools/testing/selftests/mm/run_vmtests.sh     |   8 +
 tools/testing/selftests/mm/test_page_frag.sh  | 171 +++++++++++++++++
 5 files changed, 373 insertions(+)
 create mode 100644 tools/testing/selftests/mm/page_frag/Makefile
 create mode 100644 tools/testing/selftests/mm/page_frag/page_frag_test.c
 create mode 100755 tools/testing/selftests/mm/test_page_frag.sh

diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index 02e1204971b0..acec529baaca 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -36,6 +36,8 @@ MAKEFLAGS += --no-builtin-rules
 CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES) $(TOOLS_INCLUDES)
 LDLIBS = -lrt -lpthread -lm
 
+TEST_GEN_MODS_DIR := page_frag
+
 TEST_GEN_FILES = cow
 TEST_GEN_FILES += compaction_test
 TEST_GEN_FILES += gup_longterm
@@ -126,6 +128,7 @@ TEST_FILES += test_hmm.sh
 TEST_FILES += va_high_addr_switch.sh
 TEST_FILES += charge_reserved_hugetlb.sh
 TEST_FILES += hugetlb_reparenting_test.sh
+TEST_FILES += test_page_frag.sh
 
 # required by charge_reserved_hugetlb.sh
 TEST_FILES += write_hugetlb_memory.sh
diff --git a/tools/testing/selftests/mm/page_frag/Makefile b/tools/testing/selftests/mm/page_frag/Makefile
new file mode 100644
index 000000000000..58dda74d50a3
--- /dev/null
+++ b/tools/testing/selftests/mm/page_frag/Makefile
@@ -0,0 +1,18 @@
+PAGE_FRAG_TEST_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
+KDIR ?= $(abspath $(PAGE_FRAG_TEST_DIR)/../../../../..)
+
+ifeq ($(V),1)
+Q =
+else
+Q = @
+endif
+
+MODULES = page_frag_test.ko
+
+obj-m += page_frag_test.o
+
+all:
+	+$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) modules
+
+clean:
+	+$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) clean
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
new file mode 100644
index 000000000000..eeb2b6bc681a
--- /dev/null
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -0,0 +1,173 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Test module for page_frag cache
+ *
+ * Copyright (C) 2024 Yunsheng Lin
+ */
+
+#include <linux/module.h>
+#include <linux/cpumask.h>
+#include <linux/completion.h>
+#include <linux/kthread.h>
+#include <linux/ptr_ring.h>
+#include <linux/gfp.h>
+
+static struct ptr_ring ptr_ring;
+static int nr_objs = 512;
+static atomic_t nthreads;
+static struct completion wait;
+static struct page_frag_cache test_nc;
+static int test_popped;
+static int test_pushed;
+
+static int nr_test = 2000000;
+module_param(nr_test, int, 0);
+MODULE_PARM_DESC(nr_test, "number of iterations to test");
+
+static bool test_align;
+module_param(test_align, bool, 0);
+MODULE_PARM_DESC(test_align, "use align API for testing");
+
+static int test_alloc_len = 2048;
+module_param(test_alloc_len, int, 0);
+MODULE_PARM_DESC(test_alloc_len, "alloc len for testing");
+
+static int test_push_cpu;
+module_param(test_push_cpu, int, 0);
+MODULE_PARM_DESC(test_push_cpu, "test cpu for pushing fragment");
+
+static int test_pop_cpu;
+module_param(test_pop_cpu, int, 0);
+MODULE_PARM_DESC(test_pop_cpu, "test cpu for popping fragment");
+
+static int page_frag_pop_thread(void *arg)
+{
+	struct ptr_ring *ring = arg;
+
+	pr_info("page_frag pop test thread begins on cpu %d\n",
+		smp_processor_id());
+
+	while (test_popped < nr_test) {
+		void *obj = __ptr_ring_consume(ring);
+
+		if (obj) {
+			test_popped++;
+			page_frag_free(obj);
+		} else {
+			cond_resched();
+		}
+	}
+
+	if (atomic_dec_and_test(&nthreads))
+		complete(&wait);
+
+	pr_info("page_frag pop test thread exits on cpu %d\n",
+		smp_processor_id());
+
+	return 0;
+}
+
+static int page_frag_push_thread(void *arg)
+{
+	struct ptr_ring *ring = arg;
+
+	pr_info("page_frag push test thread begins on cpu %d\n",
+		smp_processor_id());
+
+	while (test_pushed < nr_test) {
+		void *va;
+		int ret;
+
+		if (test_align) {
+			va = page_frag_alloc_align(&test_nc, test_alloc_len,
+						   GFP_KERNEL, SMP_CACHE_BYTES);
+
+			WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1),
+				  "unaligned va returned\n");
+		} else {
+			va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+		}
+
+		if (!va)
+			continue;
+
+		ret = __ptr_ring_produce(ring, va);
+		if (ret) {
+			page_frag_free(va);
+			cond_resched();
+		} else {
+			test_pushed++;
+		}
+	}
+
+	pr_info("page_frag push test thread exits on cpu %d\n",
+		smp_processor_id());
+
+	if (atomic_dec_and_test(&nthreads))
+		complete(&wait);
+
+	return 0;
+}
+
+static int __init page_frag_test_init(void)
+{
+	struct task_struct *tsk_push, *tsk_pop;
+	ktime_t start;
+	u64 duration;
+	int ret;
+
+	test_nc.va = NULL;
+	atomic_set(&nthreads, 2);
+	init_completion(&wait);
+
+	if (test_alloc_len > PAGE_SIZE || test_alloc_len <= 0 ||
+	    !cpu_active(test_push_cpu) || !cpu_active(test_pop_cpu))
+		return -EINVAL;
+
+	ret = ptr_ring_init(&ptr_ring, nr_objs, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	tsk_push = kthread_create_on_cpu(page_frag_push_thread, &ptr_ring,
+					 test_push_cpu, "page_frag_push");
+	if (IS_ERR(tsk_push))
+		return PTR_ERR(tsk_push);
+
+	tsk_pop = kthread_create_on_cpu(page_frag_pop_thread, &ptr_ring,
+					test_pop_cpu, "page_frag_pop");
+	if (IS_ERR(tsk_pop)) {
+		kthread_stop(tsk_push);
+		return PTR_ERR(tsk_pop);
+	}
+
+	start = ktime_get();
+	wake_up_process(tsk_push);
+	wake_up_process(tsk_pop);
+
+	pr_info("waiting for test to complete\n");
+
+	while (!wait_for_completion_timeout(&wait, msecs_to_jiffies(10000)))
+		pr_info("page_frag_test progress: pushed = %d, popped = %d\n",
+			test_pushed, test_popped);
+
+	duration = (u64)ktime_us_delta(ktime_get(), start);
+	pr_info("%d of iterations for %s testing took: %lluus\n", nr_test,
+		test_align ? "aligned" : "non-aligned", duration);
+
+	ptr_ring_cleanup(&ptr_ring, NULL);
+	page_frag_cache_drain(&test_nc);
+
+	return -EAGAIN;
+}
+
+static void __exit page_frag_test_exit(void)
+{
+}
+
+module_init(page_frag_test_init);
+module_exit(page_frag_test_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Yunsheng Lin");
+MODULE_DESCRIPTION("Test module for page_frag");
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index c5797ad1d37b..2c5394584af4 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -75,6 +75,8 @@ separated by spaces:
 		read-only VMAs
 - mdwe
 	test prctl(PR_SET_MDWE, ...)
+- page_frag
+	test handling of page fragment allocation and freeing
 
 example: ./run_vmtests.sh -t "hmm mmap ksm"
 EOF
@@ -456,6 +458,12 @@ CATEGORY="mkdirty" run_test ./mkdirty
 
 CATEGORY="mdwe" run_test ./mdwe_test
 
+CATEGORY="page_frag" run_test ./test_page_frag.sh smoke
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh aligned
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned
+
 echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix
 echo "1..${count_total}" | tap_output
diff --git a/tools/testing/selftests/mm/test_page_frag.sh b/tools/testing/selftests/mm/test_page_frag.sh
new file mode 100755
index 000000000000..d750d910c899
--- /dev/null
+++ b/tools/testing/selftests/mm/test_page_frag.sh
@@ -0,0 +1,171 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2024 Yunsheng Lin
+# Copyright (C) 2018 Uladzislau Rezki (Sony)
+#
+# This is a test script for the kernel test driver to test the
+# correctness and performance of page_frag's implementation.
+# Therefore it is just a kernel module loader. You can specify
+# and pass different parameters in order to:
+#     a) analyse performance of page fragment allocations;
+#     b) stress and stability check of the page_frag subsystem.
+
+DRIVER="./page_frag/page_frag_test.ko"
+CPU_LIST=$(grep -m 2 processor /proc/cpuinfo | cut -d ' ' -f 2)
+TEST_CPU_0=$(echo $CPU_LIST | awk '{print $1}')
+
+if [ $(echo $CPU_LIST | wc -w) -gt 1 ]; then
+	TEST_CPU_1=$(echo $CPU_LIST | awk '{print $2}')
+	NR_TEST=100000000
+else
+	TEST_CPU_1=$TEST_CPU_0
+	NR_TEST=1000000
+fi
+
+# 1 if fails
+exitcode=1
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+#
+# Static templates for testing of page_frag APIs.
+# Also it is possible to pass any supported parameters manually.
+#
+SMOKE_PARAM="test_push_cpu=$TEST_CPU_0 test_pop_cpu=$TEST_CPU_1"
+NONALIGNED_PARAM="$SMOKE_PARAM test_alloc_len=75 nr_test=$NR_TEST"
+ALIGNED_PARAM="$NONALIGNED_PARAM test_align=1"
+
+check_test_requirements()
+{
+	uid=$(id -u)
+	if [ $uid -ne 0 ]; then
+		echo "$0: Must be run as root"
+		exit $ksft_skip
+	fi
+
+	if ! which insmod > /dev/null 2>&1; then
+		echo "$0: You need insmod installed"
+		exit $ksft_skip
+	fi
+
+	if [ ! -f $DRIVER ]; then
+		echo "$0: You need to compile the page_frag_test module"
+		exit $ksft_skip
+	fi
+}
+
+run_nonaligned_check()
+{
+	echo "Run performance tests to evaluate how fast nonaligned alloc API is."
+
+	insmod $DRIVER $NONALIGNED_PARAM > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
+run_aligned_check()
+{
+	echo "Run performance tests to evaluate how fast aligned alloc API is."
+
+	insmod $DRIVER $ALIGNED_PARAM > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
+run_smoke_check()
+{
+	echo "Run smoke test."
+
+	insmod $DRIVER $SMOKE_PARAM > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
+usage()
+{
+	echo -n "Usage: $0 [ aligned ] | [ nonaligned ] | [ smoke ] | "
+	echo "manual parameters"
+	echo
+	echo "Valid tests and parameters:"
+	echo
+	modinfo $DRIVER
+	echo
+	echo "Example usage:"
+	echo
+	echo "# Shows help message"
+	echo "$0"
+	echo
+	echo "# Smoke testing"
+	echo "$0 smoke"
+	echo
+	echo "# Performance testing for nonaligned alloc API"
+	echo "$0 nonaligned"
+	echo
+	echo "# Performance testing for aligned alloc API"
+	echo "$0 aligned"
+	echo
+	exit 0
+}
+
+function validate_passed_args()
+{
+	VALID_ARGS=`modinfo $DRIVER | awk '/parm:/ {print $2}' | sed 's/:.*//'`
+
+	#
+	# Something has been passed, check it.
+	#
+	for passed_arg in $@; do
+		key=${passed_arg//=*/}
+		valid=0
+
+		for valid_arg in $VALID_ARGS; do
+			if [[ $key = $valid_arg ]]; then
+				valid=1
+				break
+			fi
+		done
+
+		if [[ $valid -ne 1 ]]; then
+			echo "Error: key is not correct: ${key}"
+			exit $exitcode
+		fi
+	done
+}
+
+function run_manual_check()
+{
+	#
+	# Validate passed parameters. If there is a wrong one,
+	# the script exits and does not execute further.
+	#
+	validate_passed_args $@
+
+	echo "Run the test with following parameters: $@"
+	insmod $DRIVER $@ > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
+function run_test()
+{
+	if [ $# -eq 0 ]; then
+		usage
+	else
+		if [[ "$1" = "smoke" ]]; then
+			run_smoke_check
+		elif [[ "$1" = "nonaligned" ]]; then
+			run_nonaligned_check
+		elif [[ "$1" = "aligned" ]]; then
+			run_aligned_check
+		else
+			run_manual_check $@
+		fi
+	fi
+}
+
+check_test_requirements
+run_test $@
+
+exit 0

From patchwork Tue Oct 1 07:58:45 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895538
From: Yunsheng Lin
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Yunsheng Lin,
 David Howells, Alexander Duyck, Andrew Morton, Eric Dumazet, Shuah Khan,
 linux-mm@kvack.org, linux-kselftest@vger.kernel.org
Subject: [PATCH net-next v19 02/14] mm: move the page fragment allocator from
 page_alloc into its own file
Date: Tue, 1 Oct 2024 15:58:45 +0800
Message-Id: <20241001075858.48936-3-linyunsheng@huawei.com>
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>
MIME-Version: 1.0

Inspired by [1], move the page fragment allocator from page_alloc
into its own C file and header file,
as we are about to make more changes to it in order to replace another
page_frag implementation in sock.c.

As this patchset is going to replace 'struct page_frag' with
'struct page_frag_cache' in sched.h, including page_frag_cache.h in
sched.h causes a compiler error due to the interdependency between
mm_types.h and mm.h for asm-offsets.c, see [2]. So avoid the compiler
error by moving 'struct page_frag_cache' to mm_types_task.h as
suggested by Alexander, see [3].

1. https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/
2. https://lore.kernel.org/all/15623dac-9358-4597-b3ee-3694a5956920@gmail.com/
3. https://lore.kernel.org/all/CAKgT0UdH1yD=LSCXFJ=YM_aiA4OomD-2wXykO42bizaWMt_HOA@mail.gmail.com/

CC: David Howells
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Acked-by: Andrew Morton
Reviewed-by: Alexander Duyck
---
 include/linux/gfp.h                           |  22 ---
 include/linux/mm_types.h                      |  18 ---
 include/linux/mm_types_task.h                 |  18 +++
 include/linux/page_frag_cache.h               |  31 ++++
 include/linux/skbuff.h                        |   1 +
 mm/Makefile                                   |   1 +
 mm/page_alloc.c                               | 136 ----------------
 mm/page_frag_cache.c                          | 145 ++++++++++++++++++
 .../selftests/mm/page_frag/page_frag_test.c   |   2 +-
 9 files changed, 197 insertions(+), 177 deletions(-)
 create mode 100644 include/linux/page_frag_cache.h
 create mode 100644 mm/page_frag_cache.c

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index a951de920e20..a0a6d25f883f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -371,28 +371,6 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
 
-struct page_frag_cache;
-void page_frag_cache_drain(struct page_frag_cache *nc);
-extern void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
-			      gfp_t gfp_mask, unsigned int align_mask);
-
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
-					  unsigned int fragsz, gfp_t gfp_mask,
-					  unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
-}
-
-static inline void *page_frag_alloc(struct page_frag_cache *nc,
-				    unsigned int fragsz, gfp_t gfp_mask)
-{
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
-}
-
-extern void page_frag_free(void *addr);
-
 #define __free_page(page) __free_pages((page), 0)
 #define free_page(addr) free_pages((addr), 0)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e3bdf8e38bc..92314ef2d978 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -521,9 +521,6 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
  */
 #define STRUCT_PAGE_MAX_SHIFT	(order_base_2(sizeof(struct page)))
 
-#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
-#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
-
 /*
  * page_private can be used on tail pages.  However, PagePrivate is only
  * checked by the VM on the head page.  So page_private on the tail pages
@@ -542,21 +539,6 @@ static inline void *folio_get_private(struct folio *folio)
 	return folio->private;
 }
 
-struct page_frag_cache {
-	void * va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	__u16 offset;
-	__u16 size;
-#else
-	__u32 offset;
-#endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
-	bool pfmemalloc;
-};
-
 typedef unsigned long vm_flags_t;
 
 /*
diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index bff5706b76e1..0ac6daebdd5c 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -8,6 +8,7 @@
  * (These are defined separately to decouple sched.h from mm_types.h as much as possible.)
  */
 
+#include <linux/align.h>
 #include <linux/types.h>
 #include <asm/page.h>
 
@@ -43,6 +44,23 @@ struct page_frag {
 #endif
 };
 
+#define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
+#define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
+struct page_frag_cache {
+	void *va;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	__u16 offset;
+	__u16 size;
+#else
+	__u32 offset;
+#endif
+	/* we maintain a pagecount bias, so that we dont dirty cache line
+	 * containing page->_refcount every time we allocate a fragment.
+	 */
+	unsigned int pagecnt_bias;
+	bool pfmemalloc;
+};
+
 /* Track pages that require TLB flushes */
 struct tlbflush_unmap_batch {
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
new file mode 100644
index 000000000000..67ac8626ed9b
--- /dev/null
+++ b/include/linux/page_frag_cache.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_PAGE_FRAG_CACHE_H
+#define _LINUX_PAGE_FRAG_CACHE_H
+
+#include <linux/log2.h>
+#include <linux/mm_types_task.h>
+#include <linux/types.h>
+
+void page_frag_cache_drain(struct page_frag_cache *nc);
+void __page_frag_cache_drain(struct page *page, unsigned int count);
+void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
+			      gfp_t gfp_mask, unsigned int align_mask);
+
+static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
+					  unsigned int fragsz, gfp_t gfp_mask,
+					  unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc(struct page_frag_cache *nc,
+				    unsigned int fragsz, gfp_t gfp_mask)
+{
+	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+}
+
+void page_frag_free(void *addr);
+
+#endif
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 39f1d16f3628..560e2b49f98b 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -31,6 +31,7 @@
 #include <linux/in6.h>
 #include <linux/if_packet.h>
 #include <linux/llist.h>
+#include <linux/page_frag_cache.h>
 #include <net/flow.h>
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include <linux/netfilter/nf_conntrack_common.h>
diff --git a/mm/Makefile b/mm/Makefile
index d5639b036166..dba52bb0da8a 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,6 +65,7 @@ page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o
 memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 
 obj-y += page-alloc.o
+obj-y += page_frag_cache.o
 obj-y += init-mm.o
 obj-y += memblock.o
 obj-y += $(memory-hotplug-y)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8afab64814dc..6ca2abce857b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4836,142 +4836,6 @@ void free_pages(unsigned long addr, unsigned int order)
 EXPORT_SYMBOL(free_pages);
 
-/*
- * Page Fragment:
- *  An arbitrary-length arbitrary-offset area of memory which resides
- *  within a 0 or higher order page.  Multiple fragments within that page
- *  are individually refcounted, in the page's reference counter.
- *
- * The page_frag functions below provide a simple allocation framework for
- * page fragments.  This is used by the network stack and network device
- * drivers to provide a backing region of memory for use as either an
- * sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
- */
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
-					     gfp_t gfp_mask)
-{
-	struct page *page = NULL;
-	gfp_t gfp = gfp_mask;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
-		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
-#endif
-	if (unlikely(!page))
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
-
-	nc->va = page ?
page_address(page) : NULL;
-
-	return page;
-}
-
-void page_frag_cache_drain(struct page_frag_cache *nc)
-{
-	if (!nc->va)
-		return;
-
-	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
-	nc->va = NULL;
-}
-EXPORT_SYMBOL(page_frag_cache_drain);
-
-void __page_frag_cache_drain(struct page *page, unsigned int count)
-{
-	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
-
-	if (page_ref_sub_and_test(page, count))
-		free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(__page_frag_cache_drain);
-
-void *__page_frag_alloc_align(struct page_frag_cache *nc,
-			      unsigned int fragsz, gfp_t gfp_mask,
-			      unsigned int align_mask)
-{
-	unsigned int size = PAGE_SIZE;
-	struct page *page;
-	int offset;
-
-	if (unlikely(!nc->va)) {
-refill:
-		page = __page_frag_cache_refill(nc, gfp_mask);
-		if (!page)
-			return NULL;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
-		/* Even if we own the page, we do not use atomic_set().
-		 * This would break get_page_unless_zero() users.
- */
-		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
-	}
-
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
-		page = virt_to_page(nc->va);
-
-		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
-			goto refill;
-
-		if (unlikely(nc->pfmemalloc)) {
-			free_unref_page(page, compound_order(page));
-			goto refill;
-		}
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
-			/*
-			 * The caller is trying to allocate a fragment
-			 * with fragsz > PAGE_SIZE but the cache isn't big
-			 * enough to satisfy the request, this may
-			 * happen in low memory conditions.
-			 * We don't release the cache page because
-			 * it could make memory pressure worse
-			 * so we simply return NULL here.
-			 */
-			return NULL;
-		}
-	}
-
-	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
-
-	return nc->va + offset;
-}
-EXPORT_SYMBOL(__page_frag_alloc_align);
-
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
- */
-void page_frag_free(void *addr)
-{
-	struct page *page = virt_to_head_page(addr);
-
-	if (unlikely(put_page_testzero(page)))
-		free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(page_frag_free);
-
 static void *make_alloc_exact(unsigned long addr, unsigned int order,
 			      size_t size)
 {
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
new file mode 100644
index 000000000000..609a485cd02a
--- /dev/null
+++ b/mm/page_frag_cache.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Page fragment allocator
+ *
+ * Page Fragment:
+ *  An arbitrary-length arbitrary-offset area of memory which resides within a
+ *  0 or higher order page.  Multiple fragments within that page are
+ *  individually refcounted, in the page's reference counter.
+ *
+ * The page_frag functions provide a simple allocation framework for page
+ * fragments.  This is used by the network stack and network device drivers to
+ * provide a backing region of memory for use as either an sk_buff->head, or to
+ * be used in the "frags" portion of skb_shared_info.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include "internal.h"
+
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+					     gfp_t gfp_mask)
+{
+	struct page *page = NULL;
+	gfp_t gfp = gfp_mask;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
+		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
+	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
+				PAGE_FRAG_CACHE_MAX_ORDER);
+	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+#endif
+	if (unlikely(!page))
+		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+
+	nc->va = page ?
page_address(page) : NULL;
+
+	return page;
+}
+
+void page_frag_cache_drain(struct page_frag_cache *nc)
+{
+	if (!nc->va)
+		return;
+
+	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
+	nc->va = NULL;
+}
+EXPORT_SYMBOL(page_frag_cache_drain);
+
+void __page_frag_cache_drain(struct page *page, unsigned int count)
+{
+	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
+
+	if (page_ref_sub_and_test(page, count))
+		free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(__page_frag_cache_drain);
+
+void *__page_frag_alloc_align(struct page_frag_cache *nc,
+			      unsigned int fragsz, gfp_t gfp_mask,
+			      unsigned int align_mask)
+{
+	unsigned int size = PAGE_SIZE;
+	struct page *page;
+	int offset;
+
+	if (unlikely(!nc->va)) {
+refill:
+		page = __page_frag_cache_refill(nc, gfp_mask);
+		if (!page)
+			return NULL;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+		/* if size can vary use size else just use PAGE_SIZE */
+		size = nc->size;
+#endif
+		/* Even if we own the page, we do not use atomic_set().
+		 * This would break get_page_unless_zero() users.
+ */
+		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+
+		/* reset page count bias and offset to start of new frag */
+		nc->pfmemalloc = page_is_pfmemalloc(page);
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		nc->offset = size;
+	}
+
+	offset = nc->offset - fragsz;
+	if (unlikely(offset < 0)) {
+		page = virt_to_page(nc->va);
+
+		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
+			goto refill;
+
+		if (unlikely(nc->pfmemalloc)) {
+			free_unref_page(page, compound_order(page));
+			goto refill;
+		}
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+		/* if size can vary use size else just use PAGE_SIZE */
+		size = nc->size;
+#endif
+		/* OK, page count is 0, we can safely set it */
+		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+
+		/* reset page count bias and offset to start of new frag */
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		offset = size - fragsz;
+		if (unlikely(offset < 0)) {
+			/*
+			 * The caller is trying to allocate a fragment
+			 * with fragsz > PAGE_SIZE but the cache isn't big
+			 * enough to satisfy the request, this may
+			 * happen in low memory conditions.
+			 * We don't release the cache page because
+			 * it could make memory pressure worse
+			 * so we simply return NULL here.
+			 */
+			return NULL;
+		}
+	}
+
+	nc->pagecnt_bias--;
+	offset &= align_mask;
+	nc->offset = offset;
+
+	return nc->va + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_align);
+
+/*
+ * Frees a page fragment allocated out of either a compound or order 0 page.
+ */
+void page_frag_free(void *addr)
+{
+	struct page *page = virt_to_head_page(addr);
+
+	if (unlikely(put_page_testzero(page)))
+		free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(page_frag_free);
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index eeb2b6bc681a..fdf204550c9a 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -6,12 +6,12 @@
  * Copyright (C) 2024 Yunsheng Lin
  */
 
-#include
 #include
 #include
 #include
 #include
 #include
+#include
 
 static struct ptr_ring ptr_ring;
 static int nr_objs = 512;

From patchwork Tue Oct 1 07:58:46 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895537
From: Yunsheng Lin
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Alexander Duyck, Andrew Morton, linux-mm@kvack.org
Subject: [PATCH net-next v19 03/14] mm: page_frag: use initial zero offset for page_frag_alloc_align()
Date: Tue, 1 Oct 2024 15:58:46 +0800
Message-Id: <20241001075858.48936-4-linyunsheng@huawei.com>
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>

We are about to use page_frag_alloc_*() API to not just allocate memory for skb->data, but also use them to do the memory allocation for skb
frag too.

Currently the implementation of page_frag in the mm subsystem runs the offset as a countdown rather than a count-up value. There may be several advantages to that, as mentioned in [1], but it also has some disadvantages: for example, it may prevent skb frag coalescing and more effective cache prefetching.

We have a trade-off to make in order to have a unified implementation and API for page_frag, so use an initial zero offset in this patch; the following patch will try to optimize away the disadvantages as much as possible.

1. https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Reviewed-by: Alexander Duyck
---
 mm/page_frag_cache.c | 46 ++++++++++++++++++++++----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 609a485cd02a..4c8e04379cb3 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -63,9 +63,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			      unsigned int fragsz, gfp_t gfp_mask,
 			      unsigned int align_mask)
 {
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	unsigned int size = nc->size;
+#else
 	unsigned int size = PAGE_SIZE;
+#endif
+	unsigned int offset;
 	struct page *page;
-	int offset;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -85,11 +89,24 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
+		nc->offset = 0;
 	}
 
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
+	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+	if (unlikely(offset + fragsz > size)) {
+		if (unlikely(fragsz > PAGE_SIZE)) {
+			/*
+			 * The caller is trying to allocate a fragment
+			 * with fragsz > PAGE_SIZE but the cache isn't big
+			 * enough to satisfy the request, this may
+			 * happen in low memory
conditions.
+			 * We don't release the cache page because
+			 * it could make memory pressure worse
+			 * so we simply return NULL here.
+			 */
+			return NULL;
+		}
+
 		page = virt_to_page(nc->va);
 
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -100,33 +117,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 		}
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* OK, page count is 0, we can safely set it */
 		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
-			/*
-			 * The caller is trying to allocate a fragment
-			 * with fragsz > PAGE_SIZE but the cache isn't big
-			 * enough to satisfy the request, this may
-			 * happen in low memory conditions.
-			 * We don't release the cache page because
-			 * it could make memory pressure worse
-			 * so we simply return NULL here.
- */
-			return NULL;
-		}
+		offset = 0;
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
-	nc->offset = offset;
+	nc->offset = offset + fragsz;
 
 	return nc->va + offset;
 }

From patchwork Tue Oct 1 07:58:47 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895541
From: Yunsheng Lin
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Alexander Duyck, Chuck Lever, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Eric Dumazet, David Howells, Marc Dionne, Trond Myklebust, Anna Schumaker, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Shuah Khan, kvm@vger.kernel.org, virtualization@lists.linux.dev, linux-mm@kvack.org, linux-afs@lists.infradead.org, linux-nfs@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH net-next v19 04/14] mm: page_frag: avoid caller accessing 'page_frag_cache' directly
Date: Tue, 1 Oct 2024 15:58:47 +0800
Message-Id: <20241001075858.48936-5-linyunsheng@huawei.com>
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>

Use the appropriate page_frag API instead of callers accessing 'page_frag_cache' directly.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
Reviewed-by: Alexander Duyck
Acked-by: Chuck Lever
---
 drivers/vhost/net.c                                   |  2 +-
 include/linux/page_frag_cache.h                       | 10 ++++++++++
 net/core/skbuff.c                                     |  6 +++---
 net/rxrpc/conn_object.c                               |  4 +---
 net/rxrpc/local_object.c                              |  4 +---
 net/sunrpc/svcsock.c                                  |  6 ++----
 tools/testing/selftests/mm/page_frag/page_frag_test.c |  2 +-
 7 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f16279351db5..9ad37c012189 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1325,7 +1325,7 @@ static int vhost_net_open(struct inode *inode, struct file *f)
 		       vqs[VHOST_NET_VQ_RX]);
 	f->private_data = n;
-	n->pf_cache.va = NULL;
+	page_frag_cache_init(&n->pf_cache);
 	return 0;
 }
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 67ac8626ed9b..0a52f7a179c8 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -7,6 +7,16 @@
 #include
 #include
 
+static inline void page_frag_cache_init(struct
page_frag_cache *nc)
+{
+	nc->va = NULL;
+}
+
+static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
+{
+	return !!nc->pfmemalloc;
+}
+
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
 void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 74149dc4ee31..ca01880c7ad0 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -753,14 +753,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
 	if (in_hardirq() || irqs_disabled()) {
 		nc = this_cpu_ptr(&netdev_alloc_cache);
 		data = page_frag_alloc(nc, len, gfp_mask);
-		pfmemalloc = nc->pfmemalloc;
+		pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
 	} else {
 		local_bh_disable();
 		local_lock_nested_bh(&napi_alloc_cache.bh_lock);
 
 		nc = this_cpu_ptr(&napi_alloc_cache.page);
 		data = page_frag_alloc(nc, len, gfp_mask);
-		pfmemalloc = nc->pfmemalloc;
+		pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
 
 		local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
 		local_bh_enable();
@@ -850,7 +850,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len)
 		len = SKB_HEAD_ALIGN(len);
 
 		data = page_frag_alloc(&nc->page, len, gfp_mask);
-		pfmemalloc = nc->page.pfmemalloc;
+		pfmemalloc = page_frag_cache_is_pfmemalloc(&nc->page);
 	}
 	local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index 1539d315afe7..694c4df7a1a3 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -337,9 +337,7 @@ static void rxrpc_clean_up_connection(struct work_struct *work)
 	 */
 	rxrpc_purge_queue(&conn->rx_queue);
 
-	if (conn->tx_data_alloc.va)
-		__page_frag_cache_drain(virt_to_page(conn->tx_data_alloc.va),
-					conn->tx_data_alloc.pagecnt_bias);
+	page_frag_cache_drain(&conn->tx_data_alloc);
 
 	call_rcu(&conn->rcu, rxrpc_rcu_free_connection);
 }
diff --git a/net/rxrpc/local_object.c
b/net/rxrpc/local_object.c
index 504453c688d7..a8cffe47cf01 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -452,9 +452,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local)
 #endif
 	rxrpc_purge_queue(&local->rx_queue);
 	rxrpc_purge_client_connections(local);
-	if (local->tx_alloc.va)
-		__page_frag_cache_drain(virt_to_page(local->tx_alloc.va),
-					local->tx_alloc.pagecnt_bias);
+	page_frag_cache_drain(&local->tx_alloc);
 }
 
 /*
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 825ec5357691..b785425c3315 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1608,7 +1608,6 @@ static void svc_tcp_sock_detach(struct svc_xprt *xprt)
 static void svc_sock_free(struct svc_xprt *xprt)
 {
 	struct svc_sock *svsk = container_of(xprt, struct svc_sock, sk_xprt);
-	struct page_frag_cache *pfc = &svsk->sk_frag_cache;
 	struct socket *sock = svsk->sk_sock;
 
 	trace_svcsock_free(svsk, sock);
@@ -1618,8 +1617,7 @@ static void svc_sock_free(struct svc_xprt *xprt)
 		sockfd_put(sock);
 	else
 		sock_release(sock);
-	if (pfc->va)
-		__page_frag_cache_drain(virt_to_head_page(pfc->va),
-					pfc->pagecnt_bias);
+
+	page_frag_cache_drain(&svsk->sk_frag_cache);
 	kfree(svsk);
 }
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index fdf204550c9a..36543a129e40 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -117,7 +117,7 @@ static int __init page_frag_test_init(void)
 	u64 duration;
 	int ret;
 
-	test_nc.va = NULL;
+	page_frag_cache_init(&test_nc);
 	atomic_set(&nthreads, 2);
 	init_completion(&wait);

From patchwork Tue Oct 1 07:58:49 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895546
From: Yunsheng Lin
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Yunsheng Lin, Alexander Duyck, Andrew Morton, linux-mm@kvack.org
Subject: [PATCH net-next v19 06/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
Date: Tue, 1 Oct 2024 15:58:49 +0800
Message-Id: <20241001075858.48936-7-linyunsheng@huawei.com>
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>
MIME-Version: 1.0
Currently there is one 'struct page_frag' for every 'struct sock' and 'struct task_struct'; we are about to replace 'struct page_frag' with 'struct page_frag_cache' for them.
Before beginning the replacement, we need to ensure that the size of 'struct page_frag_cache' is not bigger than the size of 'struct page_frag', as there may be tens of thousands of 'struct sock' and 'struct task_struct' instances in the system. By OR'ing the page order and the pfmemalloc bit into the lower bits of 'va', instead of using a 'u16' or 'u32' for the page size and a 'u8' for pfmemalloc, we avoid wasting 3 or 5 bytes of space. And since the page address, pfmemalloc bit and order are unchanged for the same page in the same 'page_frag_cache' instance, it makes sense to pack them together.

After this patch, the size of 'struct page_frag_cache' should be the same as the size of 'struct page_frag'.

CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/mm_types_task.h | 19 +++++---- include/linux/page_frag_cache.h | 26 +++++++++++- mm/page_frag_cache.c | 75 +++++++++++++++++++++++---------- 3 files changed, 88 insertions(+), 32 deletions(-) diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h index 0ac6daebdd5c..a82aa80c0ba4 100644 --- a/include/linux/mm_types_task.h +++ b/include/linux/mm_types_task.h @@ -47,18 +47,21 @@ struct page_frag { #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) struct page_frag_cache { - void *va; -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* encoded_page consists of the virtual address, pfmemalloc bit and + * order of a page. + */ + unsigned long encoded_page; + + /* we maintain a pagecount bias, so that we dont dirty cache line + * containing page->_refcount every time we allocate a fragment. + */ +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32) __u16 offset; - __u16 size; + __u16 pagecnt_bias; #else __u32 offset; + __u32 pagecnt_bias; #endif - /* we maintain a pagecount bias, so that we dont dirty cache line
- */ - unsigned int pagecnt_bias; - bool pfmemalloc; }; /* Track pages that require TLB flushes */ diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 0a52f7a179c8..75aaad6eaea2 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -3,18 +3,40 @@ #ifndef _LINUX_PAGE_FRAG_CACHE_H #define _LINUX_PAGE_FRAG_CACHE_H +#include #include #include #include +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) +/* Use a full byte here to enable assembler optimization as the shift + * operation is usually expecting a byte. + */ +#define PAGE_FRAG_CACHE_ORDER_MASK GENMASK(7, 0) +#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 8 +#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT BIT(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT) +#else +/* Compiler should be able to figure out we don't read things as any value + * ANDed with 0 is 0. + */ +#define PAGE_FRAG_CACHE_ORDER_MASK 0 +#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 0 +#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT BIT(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT) +#endif + +static inline bool page_frag_encoded_page_pfmemalloc(unsigned long encoded_page) +{ + return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT); +} + static inline void page_frag_cache_init(struct page_frag_cache *nc) { - nc->va = NULL; + nc->encoded_page = 0; } static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) { - return !!nc->pfmemalloc; + return page_frag_encoded_page_pfmemalloc(nc->encoded_page); } void page_frag_cache_drain(struct page_frag_cache *nc); diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 4c8e04379cb3..cf9375a81a64 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -12,6 +12,7 @@ * be used in the "frags" portion of skb_shared_info. 
*/ +#include #include #include #include @@ -19,9 +20,41 @@ #include #include "internal.h" +static unsigned long page_frag_encode_page(struct page *page, unsigned int order, + bool pfmemalloc) +{ + BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK); + BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_BIT >= PAGE_SIZE); + + return (unsigned long)page_address(page) | + (order & PAGE_FRAG_CACHE_ORDER_MASK) | + ((unsigned long)pfmemalloc << PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT); +} + +static unsigned long page_frag_encoded_page_order(unsigned long encoded_page) +{ + return encoded_page & PAGE_FRAG_CACHE_ORDER_MASK; +} + +static void *page_frag_encoded_page_address(unsigned long encoded_page) +{ + return (void *)(encoded_page & PAGE_MASK); +} + +static struct page *page_frag_encoded_page_ptr(unsigned long encoded_page) +{ + return virt_to_page((void *)encoded_page); +} + +static unsigned int page_frag_cache_page_size(unsigned long encoded_page) +{ + return PAGE_SIZE << page_frag_encoded_page_order(encoded_page); +} + static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, gfp_t gfp_mask) { + unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER; struct page *page = NULL; gfp_t gfp = gfp_mask; @@ -30,23 +63,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER); - nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE; #endif - if (unlikely(!page)) + if (unlikely(!page)) { page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); + order = 0; + } - nc->va = page ? page_address(page) : NULL; + nc->encoded_page = page ? 
+ page_frag_encode_page(page, order, page_is_pfmemalloc(page)) : 0; return page; } void page_frag_cache_drain(struct page_frag_cache *nc) { - if (!nc->va) + if (!nc->encoded_page) return; - __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); - nc->va = NULL; + __page_frag_cache_drain(page_frag_encoded_page_ptr(nc->encoded_page), + nc->pagecnt_bias); + nc->encoded_page = 0; } EXPORT_SYMBOL(page_frag_cache_drain); @@ -63,35 +99,29 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask) { -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - unsigned int size = nc->size; -#else - unsigned int size = PAGE_SIZE; -#endif - unsigned int offset; + unsigned long encoded_page = nc->encoded_page; + unsigned int size, offset; struct page *page; - if (unlikely(!nc->va)) { + if (unlikely(!encoded_page)) { refill: page = __page_frag_cache_refill(nc, gfp_mask); if (!page) return NULL; -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif + encoded_page = nc->encoded_page; + /* Even if we own the page, we do not use atomic_set(). * This would break get_page_unless_zero() users. 
*/ page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); /* reset page count bias and offset to start of new frag */ - nc->pfmemalloc = page_is_pfmemalloc(page); nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; nc->offset = 0; } + size = page_frag_cache_page_size(encoded_page); offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask); if (unlikely(offset + fragsz > size)) { if (unlikely(fragsz > PAGE_SIZE)) { @@ -107,13 +137,14 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, return NULL; } - page = virt_to_page(nc->va); + page = page_frag_encoded_page_ptr(encoded_page); if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) goto refill; - if (unlikely(nc->pfmemalloc)) { - free_unref_page(page, compound_order(page)); + if (unlikely(page_frag_encoded_page_pfmemalloc(encoded_page))) { + free_unref_page(page, + page_frag_encoded_page_order(encoded_page)); goto refill; } @@ -128,7 +159,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, nc->pagecnt_bias--; nc->offset = offset + fragsz; - return nc->va + offset; + return page_frag_encoded_page_address(encoded_page) + offset; } EXPORT_SYMBOL(__page_frag_alloc_align);
From patchwork Tue Oct 1 07:58:50 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895549
From: Yunsheng Lin
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Yunsheng Lin, Alexander Duyck, Andrew Morton, linux-mm@kvack.org
Subject: [PATCH net-next v19 07/14] mm: page_frag: some minor refactoring before adding new API
Date: Tue, 1 Oct 2024 15:58:50 +0800
Message-Id: <20241001075858.48936-8-linyunsheng@huawei.com>
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>
MIME-Version: 1.0
Refactor the common code from __page_frag_alloc_align() into __page_frag_cache_prepare() and __page_frag_cache_commit(), so that the new API can make use of them.
CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- include/linux/page_frag_cache.h | 36 +++++++++++++++++++++++++++-- mm/page_frag_cache.c | 40 ++++++++++++++++++++++++++------- 2 files changed, 66 insertions(+), 10 deletions(-) diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 75aaad6eaea2..b634e1338741 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -5,6 +5,7 @@ #include #include +#include #include #include @@ -41,8 +42,39 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) void page_frag_cache_drain(struct page_frag_cache *nc); void __page_frag_cache_drain(struct page *page, unsigned int count); -void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, - gfp_t gfp_mask, unsigned int align_mask); +void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz, + struct page_frag *pfrag, gfp_t gfp_mask, + unsigned int align_mask); +unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc, + struct page_frag *pfrag, + unsigned int used_sz); + +static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc, + struct page_frag *pfrag, + unsigned int used_sz) +{ + VM_BUG_ON(!nc->pagecnt_bias); + nc->pagecnt_bias--; + + return __page_frag_cache_commit_noref(nc, pfrag, used_sz); +} + +static inline void *__page_frag_alloc_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask) +{ + struct page_frag page_frag; + void *va; + + va = __page_frag_cache_prepare(nc, fragsz, &page_frag, gfp_mask, + align_mask); + if (unlikely(!va)) + return NULL; + + __page_frag_cache_commit(nc, &page_frag, fragsz); + + return va; +} static inline void *page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index cf9375a81a64..6f6e47bbdc8d 100644 --- a/mm/page_frag_cache.c +++ 
b/mm/page_frag_cache.c @@ -95,9 +95,31 @@ void __page_frag_cache_drain(struct page *page, unsigned int count) } EXPORT_SYMBOL(__page_frag_cache_drain); -void *__page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align_mask) +unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc, + struct page_frag *pfrag, + unsigned int used_sz) +{ + unsigned int orig_offset; + + VM_BUG_ON(used_sz > pfrag->size); + VM_BUG_ON(pfrag->page != page_frag_encoded_page_ptr(nc->encoded_page)); + VM_BUG_ON(pfrag->offset + pfrag->size > + page_frag_cache_page_size(nc->encoded_page)); + + /* pfrag->offset might be bigger than the nc->offset due to alignment */ + VM_BUG_ON(nc->offset > pfrag->offset); + + orig_offset = nc->offset; + nc->offset = pfrag->offset + used_sz; + + /* Return true size back to caller considering the offset alignment */ + return nc->offset - orig_offset; +} +EXPORT_SYMBOL(__page_frag_cache_commit_noref); + +void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz, + struct page_frag *pfrag, gfp_t gfp_mask, + unsigned int align_mask) { unsigned long encoded_page = nc->encoded_page; unsigned int size, offset; @@ -119,6 +141,8 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, /* reset page count bias and offset to start of new frag */ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; nc->offset = 0; + } else { + page = page_frag_encoded_page_ptr(encoded_page); } size = page_frag_cache_page_size(encoded_page); @@ -137,8 +161,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, return NULL; } - page = page_frag_encoded_page_ptr(encoded_page); - if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) goto refill; @@ -153,15 +175,17 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, /* reset page count bias and offset to start of new frag */ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + nc->offset = 0; offset = 0; } - nc->pagecnt_bias--; - nc->offset = 
offset + fragsz; + pfrag->page = page; + pfrag->offset = offset; + pfrag->size = size - offset; return page_frag_encoded_page_address(encoded_page) + offset; } -EXPORT_SYMBOL(__page_frag_alloc_align); +EXPORT_SYMBOL(__page_frag_cache_prepare); /* * Frees a page fragment allocated out of either a compound or order 0 page.
From patchwork Tue Oct 1 07:58:51 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895551
sender) smtp.mailfrom=yunshenglin0825@gmail.com; dmarc=pass (policy=none) header.from=gmail.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1727769467; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=ADg4SjL/TdWbtGYAOROzU48mvQIzqBv73gL/8ReCbpM=; b=7lk0l96581wpRw1A6sEvRPfQPM68bHYgah6vB2l6346KGPloPrf0DEXPlFQRKQzbIVD/eT kc4F2m5X+B+8al6ogt3DpTJ+l7oEMGLzDG+zIGPamG/Rq//RJ8YUJt5aR5C2T4FzJr2d40 2EJRrW+6v/TtxexSKuspki7lHEJE/58= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1727769467; a=rsa-sha256; cv=none; b=FWlSrFAfEQDAmLndgvhPv2ebbW/qoaGcUPeX6LimUbGy2P5O1Qh/kONFva00d6CfkBUk3U 8peZApj1+wdtsZv1XeinY/Ph72bbrtcsWdIP3n8qwLQ9byElZCw0gQNq7fX1sWqWfCdxH9 KeIxhMJJ7NZMmNBmM4kcdD6gB28K4P8= ARC-Authentication-Results: i=1; imf06.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 header.b=SqSY5TEN; spf=pass (imf06.hostedemail.com: domain of yunshenglin0825@gmail.com designates 209.85.215.195 as permitted sender) smtp.mailfrom=yunshenglin0825@gmail.com; dmarc=pass (policy=none) header.from=gmail.com Received: by mail-pg1-f195.google.com with SMTP id 41be03b00d2f7-7d916b6a73aso3330166a12.1 for ; Tue, 01 Oct 2024 00:59:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1727769593; x=1728374393; darn=kvack.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ADg4SjL/TdWbtGYAOROzU48mvQIzqBv73gL/8ReCbpM=; b=SqSY5TENojGeHT5YRVnvMyjxQ+D1QHX3XbGf+I2jfwENVj1k7AbRWK/Nuh1oyqnMXz 3HCbroxbulutJDNV6v3IvzsFFtu28j3bRHvhSKRWg/YvOrHcvmivYQ/OewCcy1v5hdUY Rq6FG5Zby0RFdxUKS4MMi2aaIvmnH34s9M1gydv+VIMzdmw2YHBN1saS2Wazxi6WnxiH zTDZlqqY6UN/n0CZdtyClHj19c1C+pgSH8QYM2Aoi9Xb/maDBV6/1gba9zn927m+0lYB 
yclyl+RP/K5Obzl2LMNVv/rGBsMqUoKEgf/ySHiGFfVNq9sfFgS2cJESqZoX1BEL62Pz qFVg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1727769593; x=1728374393; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ADg4SjL/TdWbtGYAOROzU48mvQIzqBv73gL/8ReCbpM=; b=EYO44+2T4CBpxTc9mfhU6lRoDldLHKWYVv3oRkiX91hhBBYMps9xcFR6XDMzXvgDqm U6w08KeTdXiUC+TthnW5zeyPUbddvVGP8+8KWh4BFhfVwB3/5dTfBpe24w3qaX83FmdB VT1aFv8X4mxLFqhUCiN9PdPskorNNNFFr+PGNMRHPyH2cguz6ZyZMXyX1aLtIBwFwZDb x2BwJPdvbwLSyjMKDKALxkD6X4kiDL456f/Il7iSHPI6tdY7BCzNmqGzoeXygBsi0nC2 KMdN1EUzXPBTRI5ZQ/B7Tuu5WpeMt78BXHevghCX8VcE5xoJHHEpqNGpjAlQxfppCxmq 7CXg== X-Forwarded-Encrypted: i=1; AJvYcCUSJ7KojYnr1Z/TOuOJW4i3pcTYja9SHxwbzAg0sLKtB7Sd7p0aMrweBDs69RYbxHa6fIdMJYArLw==@kvack.org X-Gm-Message-State: AOJu0Yx+UXjsK3zbgrCK0w+ql9whM+YzTVVCpjw7azzCuid6fNBqxTbm gn64nMHyPKu/BTXuDhDrd6Ya27kEMidxRb5052uzVpXgsxyEnxNC X-Google-Smtp-Source: AGHT+IEMEMikIwAsGNCUaKklqicU3DUMXbKshUu12qfRtEdI3bF5DTyiGRaQfG0r7h/YT39HL2cwqg== X-Received: by 2002:a05:6a20:cf84:b0:1cf:49a6:9933 with SMTP id adf61e73a8af0-1d4fa6c2f99mr18322559637.20.1727769593548; Tue, 01 Oct 2024 00:59:53 -0700 (PDT) Received: from yunshenglin-MS-7549.. 
From: Yunsheng Lin
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Yunsheng Lin, Alexander Duyck, Andrew Morton, linux-mm@kvack.org
Subject: [PATCH net-next v19 08/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
Date: Tue, 1 Oct 2024 15:58:51 +0800
Message-Id: <20241001075858.48936-9-linyunsheng@huawei.com>
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>
MIME-Version: 1.0
There is about a 24-byte binary size increase for __page_frag_cache_refill() after the refactoring on an arm64 system with 64K PAGE_SIZE. From gdb disassembly, it seems we can get more than a 100-byte decrease in binary size by using __alloc_pages() to replace alloc_pages_node(), as the latter does some unnecessary checking for nid being NUMA_NO_NODE, which is avoidable now that page_frag is part of the mm system.
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 mm/page_frag_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 6f6e47bbdc8d..a5448b44068a 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -61,11 +61,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
+	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+			     numa_mem_id(), NULL);
 #endif
 	if (unlikely(!page)) {
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
 		order = 0;
 	}

From patchwork Tue Oct 1 07:58:53 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895545
From: Yunsheng Lin
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Yunsheng Lin, Alexander Duyck, Andrew Morton, linux-mm@kvack.org
Subject: [PATCH net-next v19 10/14] mm: page_frag: introduce prepare/probe/commit API
Date: Tue, 1 Oct 2024 15:58:53 +0800
Message-Id: <20241001075858.48936-11-linyunsheng@huawei.com>
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>
MIME-Version: 1.0
There are many use cases that need a minimum amount of memory in order to make forward progress, but that perform better if more memory is available, or that need to probe the cache info in order to use any available memory for frag coalescing reasons.

Currently the skb_page_frag_refill() API is used for the above use cases, but the caller needs to know about the internal details and access the data fields of 'struct page_frag' to meet those requirements, and its implementation is similar to the one in the mm subsystem.

To unify those two page_frag implementations, introduce a prepare API to ensure a minimum amount of memory is satisfied and return how much memory is actually available to the caller, and a probe API to report the currently available memory to the caller without doing a cache refill. The caller then either calls the commit API to report how much memory it actually used, or does nothing if it decides not to use any memory.
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h | 135 ++++++++++++++++++++++++++++++++
 mm/page_frag_cache.c            |  21 +++++
 2 files changed, 156 insertions(+)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index b634e1338741..4e9018051956 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -40,6 +40,11 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 	return page_frag_encoded_page_pfmemalloc(nc->encoded_page);
 }
 
+static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
+{
+	return nc->offset;
+}
+
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
 void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
@@ -48,6 +53,10 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 					    struct page_frag *pfrag,
 					    unsigned int used_sz);
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+					   unsigned int fragsz,
+					   struct page_frag *pfrag,
+					   unsigned int align_mask);
 
 static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
 						    struct page_frag *pfrag,
@@ -90,6 +99,132 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc,
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
 }
 
+static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
+					    unsigned int fragsz,
+					    struct page_frag *pfrag,
+					    gfp_t gfp_mask,
+					    unsigned int align_mask)
+{
+	if (unlikely(!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
+						align_mask)))
+		return false;
+
+	__page_frag_cache_commit(nc, pfrag, fragsz);
+	return true;
+}
+
+static inline bool page_frag_refill_align(struct page_frag_cache *nc,
+					  unsigned int fragsz,
+					  struct page_frag *pfrag,
+					  gfp_t gfp_mask, unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
+}
+
+static inline bool page_frag_refill(struct page_frag_cache *nc,
+				    unsigned int fragsz,
+				    struct page_frag *pfrag, gfp_t gfp_mask)
+{
+	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
+}
+
+static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						    unsigned int fragsz,
+						    struct page_frag *pfrag,
+						    gfp_t gfp_mask,
+						    unsigned int align_mask)
+{
+	return !!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
+					   align_mask);
+}
+
+static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						  unsigned int fragsz,
+						  struct page_frag *pfrag,
+						  gfp_t gfp_mask,
+						  unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+						-align);
+}
+
+static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
+					    unsigned int fragsz,
+					    struct page_frag *pfrag,
+					    gfp_t gfp_mask)
+{
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+						~0u);
+}
+
+static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+							   unsigned int fragsz,
+							   struct page_frag *pfrag,
+							   gfp_t gfp_mask,
+							   unsigned int align_mask)
+{
+	return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
+}
+
+static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+							 unsigned int fragsz,
+							 struct page_frag *pfrag,
+							 gfp_t gfp_mask,
+							 unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
+						   unsigned int fragsz,
+						   struct page_frag *pfrag,
+						   gfp_t gfp_mask)
+{
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, ~0u);
+}
+
+static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
+						 unsigned int fragsz,
+						 struct page_frag *pfrag)
+{
+	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
+}
+
+static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
+					  unsigned int fragsz,
+					  struct page_frag *pfrag)
+{
+	return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
+}
+
+static inline void page_frag_commit(struct page_frag_cache *nc,
+				    struct page_frag *pfrag,
+				    unsigned int used_sz)
+{
+	__page_frag_cache_commit(nc, pfrag, used_sz);
+}
+
+static inline void page_frag_commit_noref(struct page_frag_cache *nc,
+					  struct page_frag *pfrag,
+					  unsigned int used_sz)
+{
+	__page_frag_cache_commit_noref(nc, pfrag, used_sz);
+}
+
+static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
+					 unsigned int fragsz)
+{
+	VM_BUG_ON(fragsz > nc->offset);
+
+	nc->pagecnt_bias++;
+	nc->offset -= fragsz;
+}
+
 void page_frag_free(void *addr);
 
 #endif
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index a5448b44068a..c052c77a96eb 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -117,6 +117,27 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_cache_commit_noref);
 
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+					   unsigned int fragsz,
+					   struct page_frag *pfrag,
+					   unsigned int align_mask)
+{
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
+
+	size = page_frag_cache_page_size(encoded_page);
+	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+	if (unlikely(!encoded_page || offset + fragsz > size))
+		return NULL;
+
+	pfrag->page = page_frag_encoded_page_ptr(encoded_page);
+	pfrag->size = size - offset;
+	pfrag->offset = offset;
+
+	return page_frag_encoded_page_address(encoded_page) + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align);
+
 void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 				struct page_frag *pfrag, gfp_t gfp_mask,
 				unsigned int align_mask)

From patchwork Tue Oct 1 07:58:54 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895544
From: Yunsheng Lin
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Yunsheng Lin, Alexander Duyck, Andrew Morton, Shuah Khan, linux-mm@kvack.org, linux-kselftest@vger.kernel.org
Subject: [PATCH net-next v19 11/14] mm: page_frag: add testing for the newly added prepare API
Date: Tue, 1 Oct 2024 15:58:54 +0800
Message-Id: <20241001075858.48936-12-linyunsheng@huawei.com>
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>
MIME-Version: 1.0
Add testing for the newly added prepare API, for both the aligned and non-aligned variants. The probe API is also tested along with the prepare API.
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 .../selftests/mm/page_frag/page_frag_test.c  | 66 +++++++++++++++++--
 tools/testing/selftests/mm/run_vmtests.sh    |  4 ++
 tools/testing/selftests/mm/test_page_frag.sh | 31 +++++++++
 3 files changed, 96 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index 36543a129e40..567bcc6a181e 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -29,6 +29,10 @@ static bool test_align;
 module_param(test_align, bool, 0);
 MODULE_PARM_DESC(test_align, "use align API for testing");
 
+static bool test_prepare;
+module_param(test_prepare, bool, 0);
+MODULE_PARM_DESC(test_prepare, "use prepare API for testing");
+
 static int test_alloc_len = 2048;
 module_param(test_alloc_len, int, 0);
 MODULE_PARM_DESC(test_alloc_len, "alloc len for testing");
@@ -68,6 +72,18 @@ static int page_frag_pop_thread(void *arg)
 	return 0;
 }
 
+static void frag_frag_test_commit(struct page_frag_cache *nc,
+				  struct page_frag *prepare_pfrag,
+				  struct page_frag *probe_pfrag,
+				  unsigned int used_sz)
+{
+	WARN_ON_ONCE(prepare_pfrag->page != probe_pfrag->page ||
+		     prepare_pfrag->offset != probe_pfrag->offset ||
+		     prepare_pfrag->size != probe_pfrag->size);
+
+	page_frag_commit(nc, prepare_pfrag, used_sz);
+}
+
 static int page_frag_push_thread(void *arg)
 {
 	struct ptr_ring *ring = arg;
@@ -80,13 +96,52 @@ static int page_frag_push_thread(void *arg)
 		int ret;
 
 		if (test_align) {
-			va = page_frag_alloc_align(&test_nc, test_alloc_len,
-						   GFP_KERNEL, SMP_CACHE_BYTES);
+			if (test_prepare) {
+				struct page_frag prepare_frag, probe_frag;
+				void *probe_va;
+
+				va = page_frag_alloc_refill_prepare_align(&test_nc,
+									  test_alloc_len,
+									  &prepare_frag,
+									  GFP_KERNEL,
+									  SMP_CACHE_BYTES);
+
+				probe_va = __page_frag_alloc_refill_probe_align(&test_nc,
+										test_alloc_len,
+										&probe_frag,
+										-SMP_CACHE_BYTES);
+				WARN_ON_ONCE(va != probe_va);
+
+				if (likely(va))
+					frag_frag_test_commit(&test_nc, &prepare_frag,
+							      &probe_frag, test_alloc_len);
+			} else {
+				va = page_frag_alloc_align(&test_nc,
+							   test_alloc_len,
+							   GFP_KERNEL,
+							   SMP_CACHE_BYTES);
+			}
 
 			WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1),
 				  "unaligned va returned\n");
 		} else {
-			va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+			if (test_prepare) {
+				struct page_frag prepare_frag, probe_frag;
+				void *probe_va;
+
+				va = page_frag_alloc_refill_prepare(&test_nc, test_alloc_len,
+								    &prepare_frag, GFP_KERNEL);
+
+				probe_va = page_frag_alloc_refill_probe(&test_nc, test_alloc_len,
+									&probe_frag);
+
+				WARN_ON_ONCE(va != probe_va);
+				if (likely(va))
+					frag_frag_test_commit(&test_nc, &prepare_frag,
+							      &probe_frag, test_alloc_len);
+			} else {
+				va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+			}
 		}
 
 		if (!va)
@@ -152,8 +207,9 @@ static int __init page_frag_test_init(void)
 		 test_pushed, test_popped);
 
 	duration = (u64)ktime_us_delta(ktime_get(), start);
-	pr_info("%d of iterations for %s testing took: %lluus\n", nr_test,
-		test_align ? "aligned" : "non-aligned", duration);
+	pr_info("%d of iterations for %s %s API testing took: %lluus\n", nr_test,
+		test_align ? "aligned" : "non-aligned",
+		test_prepare ? "prepare" : "alloc", duration);
 
 	ptr_ring_cleanup(&ptr_ring, NULL);
 	page_frag_cache_drain(&test_nc);
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index 2c5394584af4..f6ff9080a6f2 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -464,6 +464,10 @@ CATEGORY="page_frag" run_test ./test_page_frag.sh aligned
 
 CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned
 
+CATEGORY="page_frag" run_test ./test_page_frag.sh aligned_prepare
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned_prepare
+
 echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix
 echo "1..${count_total}" | tap_output
diff --git a/tools/testing/selftests/mm/test_page_frag.sh b/tools/testing/selftests/mm/test_page_frag.sh
index d750d910c899..71c3531fa38e 100755
--- a/tools/testing/selftests/mm/test_page_frag.sh
+++ b/tools/testing/selftests/mm/test_page_frag.sh
@@ -36,6 +36,8 @@ ksft_skip=4
 SMOKE_PARAM="test_push_cpu=$TEST_CPU_0 test_pop_cpu=$TEST_CPU_1"
 NONALIGNED_PARAM="$SMOKE_PARAM test_alloc_len=75 nr_test=$NR_TEST"
 ALIGNED_PARAM="$NONALIGNED_PARAM test_align=1"
+NONALIGNED_PREPARE_PARAM="$NONALIGNED_PARAM test_prepare=1"
+ALIGNED_PREPARE_PARAM="$ALIGNED_PARAM test_prepare=1"
 
 check_test_requirements()
 {
@@ -74,6 +76,24 @@ run_aligned_check()
 	echo "Check the kernel ring buffer to see the summary."
 }
 
+run_nonaligned_prepare_check()
+{
+	echo "Run performance tests to evaluate how fast nonaligned prepare API is."
+
+	insmod $DRIVER $NONALIGNED_PREPARE_PARAM > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
+run_aligned_prepare_check()
+{
+	echo "Run performance tests to evaluate how fast aligned prepare API is."
+
+	insmod $DRIVER $ALIGNED_PREPARE_PARAM > /dev/null 2>&1
+	echo "Done."
+	echo "Check the kernel ring buffer to see the summary."
+}
+
 run_smoke_check()
 {
 	echo "Run smoke test."
@@ -86,6 +106,7 @@ run_smoke_check()
 usage()
 {
 	echo -n "Usage: $0 [ aligned ] | [ nonaligned ] | | [ smoke ] | "
+	echo "[ aligned_prepare ] | [ nonaligned_prepare ] | "
 	echo "manual parameters"
 	echo
 	echo "Valid tests and parameters:"
@@ -106,6 +127,12 @@ usage()
 	echo "# Performance testing for aligned alloc API"
 	echo "$0 aligned"
 	echo
+	echo "# Performance testing for nonaligned prepare API"
+	echo "$0 nonaligned_prepare"
+	echo
+	echo "# Performance testing for aligned prepare API"
+	echo "$0 aligned_prepare"
+	echo
 	exit 0
 }
 
@@ -159,6 +186,10 @@ function run_test()
 		run_nonaligned_check
 	elif [[ "$1" = "aligned" ]]; then
 		run_aligned_check
+	elif [[ "$1" = "nonaligned_prepare" ]]; then
+		run_nonaligned_prepare_check
+	elif [[ "$1" = "aligned_prepare" ]]; then
+		run_aligned_prepare_check
 	else
 		run_manual_check $@
 	fi

From patchwork Tue Oct 1 07:58:56 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895540
From: Yunsheng Lin To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Yunsheng Lin , Alexander Duyck , Jonathan Corbet , Andrew Morton , linux-mm@kvack.org, linux-doc@vger.kernel.org Subject: [PATCH net-next v19 13/14] mm: page_frag: update documentation for page_frag Date: Tue, 1 Oct 2024 15:58:56 +0800 Message-Id: <20241001075858.48936-14-linyunsheng@huawei.com> In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com> References: <20241001075858.48936-1-linyunsheng@huawei.com> MIME-Version: 1.0
Update documentation about design, implementation and API usages for page_frag.
CC: Alexander Duyck Signed-off-by: Yunsheng Lin --- Documentation/mm/page_frags.rst | 177 +++++++++++++++++++++- include/linux/page_frag_cache.h | 259 +++++++++++++++++++++++++++++++- mm/page_frag_cache.c | 26 +++- 3 files changed, 451 insertions(+), 11 deletions(-) diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst index 503ca6cdb804..5eec04a3fe90 100644 --- a/Documentation/mm/page_frags.rst +++ b/Documentation/mm/page_frags.rst @@ -1,3 +1,5 @@ +.. SPDX-License-Identifier: GPL-2.0 + ============== Page fragments ============== @@ -40,4 +42,177 @@ page via a single call. The advantage to doing this is that it allows for cleaning up the multiple references that were added to a page in order to avoid calling get_page per allocation. -Alexander Duyck, Nov 29, 2016. + +Architecture overview +===================== + +.. code-block:: none + + +----------------------+ + | page_frag API caller | + +----------------------+ + | + | + v + +------------------------------------------------------------------+ + | request page fragment | + +------------------------------------------------------------------+ + | | | + | | | + | Cache not enough | + | | | + | +-----------------+ | + | | reuse old cache |--Usable-->| + | +-----------------+ | + | | | + | Not usable | + | | | + | v | + Cache empty +-----------------+ | + | | drain old cache | | + | +-----------------+ | + | | | + v_________________________________v | + | | + | | + _________________v_______________ | + | | Cache is enough + | | | + PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE | | + | | | + | PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE | + v | | + +----------------------------------+ | | + | refill cache with order > 0 page | | | + +----------------------------------+ | | + | | | | + | | | | + | Refill failed | | + | | | | + | v v | + | +------------------------------------+ | + | | refill cache with order 0 page | | + | +------------------------------------+ | + | | | + Refill succeed | | + | Refill succeed | + | | | + v v v + +------------------------------------------------------------------+ + | allocate fragment from cache | + +------------------------------------------------------------------+ + +API interface +============= +As the design and implementation of the page_frag API implies, the allocation +side does not allow concurrent calling. Instead, the caller must ensure that +there are no concurrent alloc calls to the same page_frag_cache instance, +either by holding its own lock or by relying on a lockless guarantee such as +NAPI softirq. + +Depending on the alignment requirement, the page_frag API caller may call +page_frag_*_align*() to ensure the returned virtual address or offset of the +page is aligned according to the 'align/alignment' parameter. Note that the +size of the allocated fragment is not aligned; the caller needs to provide an +aligned fragsz if there is an alignment requirement for the size of the +fragment. + +Depending on the use case, callers expecting to deal with va, page, or both +va and page may call the page_frag_alloc, page_frag_refill, or +page_frag_alloc_refill API accordingly. + +There is also a use case that needs a minimum amount of memory in order to +make forward progress, but can perform better if more memory is available. +Using the page_frag_*_prepare() and page_frag_commit*() related APIs, the +caller requests the minimum memory it needs and the prepare API returns the +maximum size of the fragment available. The caller then either calls the +commit API to report how much memory it actually used, or does not commit at +all if it decides not to use any memory. + +..
kernel-doc:: include/linux/page_frag_cache.h + :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc + page_frag_cache_page_offset __page_frag_alloc_align + page_frag_alloc_align page_frag_alloc + __page_frag_refill_align page_frag_refill_align + page_frag_refill __page_frag_refill_prepare_align + page_frag_refill_prepare_align page_frag_refill_prepare + __page_frag_alloc_refill_prepare_align + page_frag_alloc_refill_prepare_align + page_frag_alloc_refill_prepare page_frag_alloc_refill_probe + page_frag_refill_probe page_frag_commit + page_frag_commit_noref page_frag_alloc_abort + +.. kernel-doc:: mm/page_frag_cache.c + :identifiers: page_frag_cache_drain page_frag_free + __page_frag_alloc_refill_probe_align + +Coding examples +=============== + +Init & Drain API +---------------- + +.. code-block:: c + + page_frag_cache_init(nc); + ... + page_frag_cache_drain(nc); + + +Alloc & Free API +---------------- + +.. code-block:: c + + void *va; + + va = page_frag_alloc_align(nc, size, gfp, align); + if (!va) + goto do_error; + + err = do_something(va, size); + if (err) { + page_frag_alloc_abort(nc, size); + goto do_error; + } + + ... + + page_frag_free(va); + + +Prepare & Commit API +-------------------- + +..
code-block:: c + + struct page_frag page_frag, *pfrag; + bool merge = true; + void *va; + + pfrag = &page_frag; + va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL); + if (!va) + goto wait_for_space; + + copy = min_t(unsigned int, copy, pfrag->size); + if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) { + if (i >= max_skb_frags) + goto new_segment; + + merge = false; + } + + copy = mem_schedule(copy); + if (!copy) + goto wait_for_space; + + err = !copy_from_iter_full_nocache(va, copy, iter); + if (err) + goto do_error; + + if (merge) { + skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy); + page_frag_commit_noref(nc, pfrag, copy); + } else { + skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy); + page_frag_commit(nc, pfrag, copy); + } diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 4e9018051956..dff68d8e0f30 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -30,16 +30,43 @@ static inline bool page_frag_encoded_page_pfmemalloc(unsigned long encoded_page) return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT); } +/** + * page_frag_cache_init() - Init page_frag cache. + * @nc: page_frag cache from which to init + * + * Inline helper to init the page_frag cache. + */ static inline void page_frag_cache_init(struct page_frag_cache *nc) { nc->encoded_page = 0; } +/** + * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc. + * @nc: page_frag cache from which to check + * + * Used to check if the current page in page_frag cache is pfmemalloc'ed. + * It has the same calling context expectation as the alloc API. + * + * Return: + * true if the current page in page_frag cache is pfmemalloc'ed, otherwise + * return false. + */ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) { return page_frag_encoded_page_pfmemalloc(nc->encoded_page); } +/** + * page_frag_cache_page_offset() - Return the current page fragment's offset.
+ * @nc: page_frag cache from which to check + * + * The API is only used in net/sched/em_meta.c for historical reasons; do not + * use it for new callers unless there is a strong reason. + * + * Return: + * the offset of the current page fragment in the page_frag cache. + */ static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc) { return nc->offset; @@ -68,6 +95,19 @@ static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc, return __page_frag_cache_commit_noref(nc, pfrag, used_sz); } +/** + * __page_frag_alloc_align() - Alloc a page fragment with aligning + * requirement. + * @nc: page_frag cache from which to allocate + * @fragsz: the requested fragment size + * @gfp_mask: the allocation gfp to use when cache needs to be refilled + * @align_mask: the requested aligning requirement for the 'va' + * + * Alloc a page fragment from page_frag cache with aligning requirement. + * + * Return: + * Virtual address of the page fragment, otherwise return NULL. + */ static inline void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask) @@ -85,6 +125,19 @@ static inline void *__page_frag_alloc_align(struct page_frag_cache *nc, return va; } +/** + * page_frag_alloc_align() - Alloc a page fragment with aligning requirement. + * @nc: page_frag cache from which to allocate + * @fragsz: the requested fragment size + * @gfp_mask: the allocation gfp to use when cache needs to be refilled + * @align: the requested aligning requirement for the fragment + * + * WARN_ON_ONCE() checking for @align before allocating a page fragment from + * page_frag cache with aligning requirement. + * + * Return: + * virtual address of the page fragment, otherwise return NULL.
+ */ static inline void *page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align) @@ -93,12 +146,36 @@ static inline void *page_frag_alloc_align(struct page_frag_cache *nc, return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align); } +/** + * page_frag_alloc() - Alloc a page fragment. + * @nc: page_frag cache from which to allocate + * @fragsz: the requested fragment size + * @gfp_mask: the allocation gfp to use when cache need to be refilled + * + * Alloc a page fragment from page_frag cache. + * + * Return: + * virtual address of the page fragment, otherwise return NULL. + */ static inline void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask) { return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); } +/** + * __page_frag_refill_align() - Refill a page_frag with aligning requirement. + * @nc: page_frag cache from which to refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled. + * @gfp_mask: the allocation gfp to use when cache need to be refilled + * @align_mask: the requested aligning requirement for the fragment + * + * Refill a page_frag from page_frag cache with aligning requirement. + * + * Return: + * True if refill succeeds, otherwise return false. + */ static inline bool __page_frag_refill_align(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag, @@ -113,6 +190,20 @@ static inline bool __page_frag_refill_align(struct page_frag_cache *nc, return true; } +/** + * page_frag_refill_align() - Refill a page_frag with aligning requirement. + * @nc: page_frag cache from which to refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled. 
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled + * @align: the requested aligning requirement for the fragment + * + * WARN_ON_ONCE() checking for @align before refilling a page_frag from + * page_frag cache with aligning requirement. + * + * Return: + * True if refill succeeds, otherwise return false. + */ static inline bool page_frag_refill_align(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag, @@ -122,6 +213,18 @@ static inline bool page_frag_refill_align(struct page_frag_cache *nc, return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align); } +/** + * page_frag_refill() - Refill a page_frag. + * @nc: page_frag cache from which to refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled. + * @gfp_mask: the allocation gfp to use when cache need to be refilled + * + * Refill a page_frag from page_frag cache. + * + * Return: + * True if refill succeeds, otherwise return false. + */ static inline bool page_frag_refill(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag, gfp_t gfp_mask) @@ -129,6 +232,20 @@ static inline bool page_frag_refill(struct page_frag_cache *nc, return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u); } +/** + * __page_frag_refill_prepare_align() - Prepare refilling a page_frag with + * aligning requirement. + * @nc: page_frag cache from which to refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled. + * @gfp_mask: the allocation gfp to use when cache need to be refilled + * @align_mask: the requested aligning requirement for the fragment + * + * Prepare refill a page_frag from page_frag cache with aligning requirement. + * + * Return: + * True if prepare refilling succeeds, otherwise return false. 
+ */ static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag, @@ -139,6 +256,21 @@ static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc, align_mask); } +/** + * page_frag_refill_prepare_align() - Prepare refilling a page_frag with + * aligning requirement. + * @nc: page_frag cache from which to refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled. + * @gfp_mask: the allocation gfp to use when cache needs to be refilled + * @align: the requested aligning requirement for the fragment + * + * WARN_ON_ONCE() checking for @align before prepare refilling a page_frag from + * page_frag cache with aligning requirement. + * + * Return: + * True if prepare refilling succeeds, otherwise return false. + */ static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag, @@ -150,6 +282,18 @@ static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc, -align); } +/** + * page_frag_refill_prepare() - Prepare refilling a page_frag. + * @nc: page_frag cache from which to refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled. + * @gfp_mask: the allocation gfp to use when cache need to be refilled + * + * Prepare refilling a page_frag from page_frag cache. + * + * Return: + * True if refill succeeds, otherwise return false. + */ static inline bool page_frag_refill_prepare(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag, @@ -159,6 +303,20 @@ static inline bool page_frag_refill_prepare(struct page_frag_cache *nc, ~0u); } +/** + * __page_frag_alloc_refill_prepare_align() - Prepare allocing a fragment and + * refilling a page_frag with aligning requirement. + * @nc: page_frag cache from which to allocate and refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled. 
+ * @gfp_mask: the allocation gfp to use when cache need to be refilled + * @align_mask: the requested aligning requirement for the fragment. + * + * Prepare allocing a fragment and refilling a page_frag from page_frag cache. + * + * Return: + * virtual address of the page fragment, otherwise return NULL. + */ static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag, @@ -168,6 +326,21 @@ static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cach return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask); } +/** + * page_frag_alloc_refill_prepare_align() - Prepare allocing a fragment and + * refilling a page_frag with aligning requirement. + * @nc: page_frag cache from which to allocate and refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled. + * @gfp_mask: the allocation gfp to use when cache need to be refilled + * @align: the requested aligning requirement for the fragment. + * + * WARN_ON_ONCE() checking for @align before prepare allocing a fragment and + * refilling a page_frag from page_frag cache. + * + * Return: + * virtual address of the page fragment, otherwise return NULL. + */ static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag, @@ -179,6 +352,19 @@ static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache gfp_mask, -align); } +/** + * page_frag_alloc_refill_prepare() - Prepare allocing a fragment and refilling + * a page_frag. + * @nc: page_frag cache from which to allocate and refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled. + * @gfp_mask: the allocation gfp to use when cache need to be refilled + * + * Prepare allocing a fragment and refilling a page_frag from page_frag cache. + * + * Return: + * virtual address of the page fragment, otherwise return NULL. 
+ */ static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag, @@ -188,6 +374,18 @@ static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc, gfp_mask, ~0u); } +/** + * page_frag_alloc_refill_probe() - Probe allocing a fragment and refilling + * a page_frag. + * @nc: page_frag cache from which to allocate and refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled + * + * Probe allocing a fragment and refilling a page_frag from page_frag cache. + * + * Return: + * virtual address of the page fragment, otherwise return NULL. + */ static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag) @@ -195,6 +393,17 @@ static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc, return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u); } +/** + * page_frag_refill_probe() - Probe refilling a page_frag. + * @nc: page_frag cache from which to refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled + * + * Probe refilling a page_frag from page_frag cache. + * + * Return: + * True if refill succeeds, otherwise return false. + */ static inline bool page_frag_refill_probe(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag) @@ -202,20 +411,54 @@ static inline bool page_frag_refill_probe(struct page_frag_cache *nc, return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag); } -static inline void page_frag_commit(struct page_frag_cache *nc, - struct page_frag *pfrag, - unsigned int used_sz) +/** + * page_frag_commit - Commit allocing a page fragment. + * @nc: page_frag cache from which to commit + * @pfrag: the page_frag to be committed + * @used_sz: size of the page fragment that has been used + * + * Commit the actual used size for the allocation that was either prepared + * or probed. + * + * Return: + * The true size of the fragment considering the offset alignment. + */ +static inline unsigned int page_frag_commit(struct page_frag_cache *nc, + struct page_frag *pfrag, + unsigned int used_sz) { - __page_frag_cache_commit(nc, pfrag, used_sz); + return __page_frag_cache_commit(nc, pfrag, used_sz); } -static inline void page_frag_commit_noref(struct page_frag_cache *nc, - struct page_frag *pfrag, - unsigned int used_sz) +/** + * page_frag_commit_noref - Commit allocing a page fragment without taking + * page refcount. + * @nc: page_frag cache from which to commit + * @pfrag: the page_frag to be committed + * @used_sz: size of the page fragment that has been used + * + * Commit the alloc preparing or probing by passing the actual used size, but + * not taking refcount. Mostly used for the fragment coalescing case when the + * current fragment can share the same refcount with the previous fragment. + * + * Return: + * The true size of the fragment considering the offset alignment. + */ +static inline unsigned int page_frag_commit_noref(struct page_frag_cache *nc, + struct page_frag *pfrag, + unsigned int used_sz) { - __page_frag_cache_commit_noref(nc, pfrag, used_sz); + return __page_frag_cache_commit_noref(nc, pfrag, used_sz); } +/** + * page_frag_alloc_abort - Abort the page fragment allocation. + * @nc: page_frag cache to which the page fragment is aborted back + * @fragsz: size of the page fragment to be aborted + * + * It is expected to be called from the same context as the alloc API. + * Mostly used for error handling cases where the fragment is no longer needed.
+ */ static inline void page_frag_alloc_abort(struct page_frag_cache *nc, unsigned int fragsz) { diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index c052c77a96eb..209cc1e278ab 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -75,6 +75,10 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, return page; } +/** + * page_frag_cache_drain - Drain the current page from page_frag cache. + * @nc: page_frag cache from which to drain + */ void page_frag_cache_drain(struct page_frag_cache *nc) { if (!nc->encoded_page) @@ -117,6 +121,20 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc, } EXPORT_SYMBOL(__page_frag_cache_commit_noref); +/** + * __page_frag_alloc_refill_probe_align() - Probe allocing a fragment and + * refilling a page_frag with aligning requirement. + * @nc: page_frag cache from which to allocate and refill + * @fragsz: the requested fragment size + * @pfrag: the page_frag to be refilled. + * @align_mask: the requested aligning requirement for the fragment. + * + * Probe allocing a fragment and refilling a page_frag from page_frag cache with + * aligning requirement. + * + * Return: + * virtual address of the page fragment, otherwise return NULL. + */ void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc, unsigned int fragsz, struct page_frag *pfrag, @@ -208,8 +226,12 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz, } EXPORT_SYMBOL(__page_frag_cache_prepare); -/* - * Frees a page fragment allocated out of either a compound or order 0 page. +/** + * page_frag_free - Free a page fragment. + * @addr: va of page fragment to be freed + * + * Free a page fragment allocated out of either a compound or order 0 page by + * virtual address. */ void page_frag_free(void *addr) {