From patchwork Mon Oct 28 11:53:36 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13853359
From: Yunsheng Lin
CC: Andrew Morton, Alexander Duyck, Linux-MM, Shuah Khan
Subject: [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag
Date: Mon, 28 Oct 2024 19:53:36 +0800
Message-ID: <20241028115343.3405838-2-linyunsheng@huawei.com>
In-Reply-To: <20241028115343.3405838-1-linyunsheng@huawei.com>
References: <20241028115343.3405838-1-linyunsheng@huawei.com>

The test works by having a kthread bound to a specified CPU allocate
fragments from a page_frag_cache instance and push them into a ptr_ring
instance, while another kthread bound to a specified CPU pops the
fragments from the ptr_ring and frees them.
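In outline, the protocol the module implements reduces to the following condensed sketch (not the full source added below; module parameters, kthread setup, completion signalling and progress reporting are omitted, and test_nc, nr_test, test_alloc_len, test_pushed, test_popped and force_exit are the module globals defined in the patch):

static int push_thread_sketch(void *arg)	/* bound to test_push_cpu */
{
	struct ptr_ring *ring = arg;

	while (test_pushed < nr_test && !force_exit) {
		void *va = page_frag_alloc(&test_nc, test_alloc_len,
					   GFP_KERNEL);

		if (!va)
			continue;

		if (__ptr_ring_produce(ring, va))
			page_frag_free(va);	/* ring full: drop and retry */
		else
			test_pushed++;
	}

	return 0;
}

static int pop_thread_sketch(void *arg)		/* bound to test_pop_cpu */
{
	struct ptr_ring *ring = arg;

	while (test_popped < nr_test) {
		void *obj = __ptr_ring_consume(ring);

		if (obj) {
			page_frag_free(obj);	/* freed on the popping CPU */
			test_popped++;
		} else {
			cond_resched();
		}
	}

	return 0;
}

Each fragment is thus allocated on one CPU and freed on another, which exercises the cache's pagecnt_bias and refcount handling across CPUs.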
CC: Andrew Morton CC: Alexander Duyck CC: Linux-MM Signed-off-by: Yunsheng Lin Reviewed-by: Alexander Duyck --- tools/testing/selftests/mm/Makefile | 3 + tools/testing/selftests/mm/page_frag/Makefile | 18 ++ .../selftests/mm/page_frag/page_frag_test.c | 198 ++++++++++++++++++ tools/testing/selftests/mm/run_vmtests.sh | 8 + tools/testing/selftests/mm/test_page_frag.sh | 175 ++++++++++++++++ 5 files changed, 402 insertions(+) create mode 100644 tools/testing/selftests/mm/page_frag/Makefile create mode 100644 tools/testing/selftests/mm/page_frag/page_frag_test.c create mode 100755 tools/testing/selftests/mm/test_page_frag.sh diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index 02e1204971b0..acec529baaca 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -36,6 +36,8 @@ MAKEFLAGS += --no-builtin-rules CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES) $(TOOLS_INCLUDES) LDLIBS = -lrt -lpthread -lm +TEST_GEN_MODS_DIR := page_frag + TEST_GEN_FILES = cow TEST_GEN_FILES += compaction_test TEST_GEN_FILES += gup_longterm @@ -126,6 +128,7 @@ TEST_FILES += test_hmm.sh TEST_FILES += va_high_addr_switch.sh TEST_FILES += charge_reserved_hugetlb.sh TEST_FILES += hugetlb_reparenting_test.sh +TEST_FILES += test_page_frag.sh # required by charge_reserved_hugetlb.sh TEST_FILES += write_hugetlb_memory.sh diff --git a/tools/testing/selftests/mm/page_frag/Makefile b/tools/testing/selftests/mm/page_frag/Makefile new file mode 100644 index 000000000000..58dda74d50a3 --- /dev/null +++ b/tools/testing/selftests/mm/page_frag/Makefile @@ -0,0 +1,18 @@ +PAGE_FRAG_TEST_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST))))) +KDIR ?= $(abspath $(PAGE_FRAG_TEST_DIR)/../../../../..) 
+ +ifeq ($(V),1) +Q = +else +Q = @ +endif + +MODULES = page_frag_test.ko + +obj-m += page_frag_test.o + +all: + +$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) modules + +clean: + +$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) clean diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c new file mode 100644 index 000000000000..912d97b99107 --- /dev/null +++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c @@ -0,0 +1,198 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Test module for page_frag cache + * + * Copyright (C) 2024 Yunsheng Lin + */ + +#include +#include +#include +#include +#include +#include + +#define TEST_FAILED_PREFIX "page_frag_test failed: " + +static struct ptr_ring ptr_ring; +static int nr_objs = 512; +static atomic_t nthreads; +static struct completion wait; +static struct page_frag_cache test_nc; +static int test_popped; +static int test_pushed; +static bool force_exit; + +static int nr_test = 2000000; +module_param(nr_test, int, 0); +MODULE_PARM_DESC(nr_test, "number of iterations to test"); + +static bool test_align; +module_param(test_align, bool, 0); +MODULE_PARM_DESC(test_align, "use align API for testing"); + +static int test_alloc_len = 2048; +module_param(test_alloc_len, int, 0); +MODULE_PARM_DESC(test_alloc_len, "alloc len for testing"); + +static int test_push_cpu; +module_param(test_push_cpu, int, 0); +MODULE_PARM_DESC(test_push_cpu, "test cpu for pushing fragment"); + +static int test_pop_cpu; +module_param(test_pop_cpu, int, 0); +MODULE_PARM_DESC(test_pop_cpu, "test cpu for popping fragment"); + +static int page_frag_pop_thread(void *arg) +{ + struct ptr_ring *ring = arg; + + pr_info("page_frag pop test thread begins on cpu %d\n", + smp_processor_id()); + + while (test_popped < nr_test) { + void *obj = __ptr_ring_consume(ring); + + if (obj) { + test_popped++; + page_frag_free(obj); + } else { + if (force_exit) + break; + + cond_resched(); + } + } + + if (atomic_dec_and_test(&nthreads)) + complete(&wait); + + pr_info("page_frag pop test thread exits on cpu %d\n", + smp_processor_id()); + + return 0; +} + +static int page_frag_push_thread(void *arg) +{ + struct ptr_ring *ring = arg; + + pr_info("page_frag push test thread begins on cpu %d\n", + smp_processor_id()); + + while (test_pushed < nr_test && !force_exit) { + void *va; + int ret; + + if (test_align) { + va = page_frag_alloc_align(&test_nc, test_alloc_len, + GFP_KERNEL, SMP_CACHE_BYTES); + + if ((unsigned long)va & (SMP_CACHE_BYTES - 1)) { + force_exit = true; + WARN_ONCE(true, TEST_FAILED_PREFIX "unaligned va returned\n"); + } + } else { + va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL); + } + + if (!va) + continue; + + ret = __ptr_ring_produce(ring, va); + if (ret) { + page_frag_free(va); + cond_resched(); + } else { + test_pushed++; + } + } + + pr_info("page_frag push test thread exits on cpu %d\n", + smp_processor_id()); + + if (atomic_dec_and_test(&nthreads)) + complete(&wait); + + return 0; +} + +static int __init page_frag_test_init(void) +{ + struct task_struct *tsk_push, *tsk_pop; + int last_pushed = 0, last_popped = 0; + ktime_t start; + u64 duration; + int ret; + + test_nc.va = NULL; + atomic_set(&nthreads, 2); + init_completion(&wait); + + if (test_alloc_len > PAGE_SIZE || test_alloc_len <= 0 || + !cpu_active(test_push_cpu) || !cpu_active(test_pop_cpu)) + return -EINVAL; + + ret = ptr_ring_init(&ptr_ring, nr_objs, GFP_KERNEL); + if (ret) + return ret; + + tsk_push = 
kthread_create_on_cpu(page_frag_push_thread, &ptr_ring, + test_push_cpu, "page_frag_push"); + if (IS_ERR(tsk_push)) + return PTR_ERR(tsk_push); + + tsk_pop = kthread_create_on_cpu(page_frag_pop_thread, &ptr_ring, + test_pop_cpu, "page_frag_pop"); + if (IS_ERR(tsk_pop)) { + kthread_stop(tsk_push); + return PTR_ERR(tsk_pop); + } + + start = ktime_get(); + wake_up_process(tsk_push); + wake_up_process(tsk_pop); + + pr_info("waiting for test to complete\n"); + + while (!wait_for_completion_timeout(&wait, msecs_to_jiffies(10000))) { + /* exit if there is no progress for push or pop size */ + if (last_pushed == test_pushed || last_popped == test_popped) { + WARN_ONCE(true, TEST_FAILED_PREFIX "no progress\n"); + force_exit = true; + continue; + } + + last_pushed = test_pushed; + last_popped = test_popped; + pr_info("page_frag_test progress: pushed = %d, popped = %d\n", + test_pushed, test_popped); + } + + if (force_exit) { + pr_err(TEST_FAILED_PREFIX "exit with error\n"); + goto out; + } + + duration = (u64)ktime_us_delta(ktime_get(), start); + pr_info("%d of iterations for %s testing took: %lluus\n", nr_test, + test_align ? "aligned" : "non-aligned", duration); + +out: + ptr_ring_cleanup(&ptr_ring, NULL); + page_frag_cache_drain(&test_nc); + + return -EAGAIN; +} + +static void __exit page_frag_test_exit(void) +{ +} + +module_init(page_frag_test_init); +module_exit(page_frag_test_exit); + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Yunsheng Lin "); +MODULE_DESCRIPTION("Test module for page_frag"); diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh index c5797ad1d37b..2c5394584af4 100755 --- a/tools/testing/selftests/mm/run_vmtests.sh +++ b/tools/testing/selftests/mm/run_vmtests.sh @@ -75,6 +75,8 @@ separated by spaces: read-only VMAs - mdwe test prctl(PR_SET_MDWE, ...) +- page_frag + test handling of page fragment allocation and freeing example: ./run_vmtests.sh -t "hmm mmap ksm" EOF @@ -456,6 +458,12 @@ CATEGORY="mkdirty" run_test ./mkdirty CATEGORY="mdwe" run_test ./mdwe_test +CATEGORY="page_frag" run_test ./test_page_frag.sh smoke + +CATEGORY="page_frag" run_test ./test_page_frag.sh aligned + +CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned + echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix echo "1..${count_total}" | tap_output diff --git a/tools/testing/selftests/mm/test_page_frag.sh b/tools/testing/selftests/mm/test_page_frag.sh new file mode 100755 index 000000000000..f55b105084cf --- /dev/null +++ b/tools/testing/selftests/mm/test_page_frag.sh @@ -0,0 +1,175 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 +# +# Copyright (C) 2024 Yunsheng Lin +# Copyright (C) 2018 Uladzislau Rezki (Sony) +# +# This is a test script for the kernel test driver to test the +# correctness and performance of page_frag's implementation. +# Therefore it is just a kernel module loader. You can specify +# and pass different parameters in order to: +# a) analyse performance of page fragment allocations; +# b) stressing and stability check of page_frag subsystem. + +DRIVER="./page_frag/page_frag_test.ko" +CPU_LIST=$(grep -m 2 processor /proc/cpuinfo | cut -d ' ' -f 2) +TEST_CPU_0=$(echo $CPU_LIST | awk '{print $1}') + +if [ $(echo $CPU_LIST | wc -w) -gt 1 ]; then + TEST_CPU_1=$(echo $CPU_LIST | awk '{print $2}') + NR_TEST=100000000 +else + TEST_CPU_1=$TEST_CPU_0 + NR_TEST=1000000 +fi + +# 1 if fails +exitcode=1 + +# Kselftest framework requirement - SKIP code is 4. 
+ksft_skip=4 + +check_test_failed_prefix() { + if dmesg | grep -q 'page_frag_test failed:'; then + echo "page_frag_test failed, please check dmesg" + exit $exitcode + fi +} + +# +# Static templates for testing of page_frag APIs. +# Also it is possible to pass any supported parameters manually. +# +SMOKE_PARAM="test_push_cpu=$TEST_CPU_0 test_pop_cpu=$TEST_CPU_1" +NONALIGNED_PARAM="$SMOKE_PARAM test_alloc_len=75 nr_test=$NR_TEST" +ALIGNED_PARAM="$NONALIGNED_PARAM test_align=1" + +check_test_requirements() +{ + uid=$(id -u) + if [ $uid -ne 0 ]; then + echo "$0: Must be run as root" + exit $ksft_skip + fi + + if ! which insmod > /dev/null 2>&1; then + echo "$0: You need insmod installed" + exit $ksft_skip + fi + + if [ ! -f $DRIVER ]; then + echo "$0: You need to compile page_frag_test module" + exit $ksft_skip + fi +} + +run_nonaligned_check() +{ + echo "Run performance tests to evaluate how fast nonaligned alloc API is." + + insmod $DRIVER $NONALIGNED_PARAM > /dev/null 2>&1 +} + +run_aligned_check() +{ + echo "Run performance tests to evaluate how fast aligned alloc API is." + + insmod $DRIVER $ALIGNED_PARAM > /dev/null 2>&1 +} + +run_smoke_check() +{ + echo "Run smoke test." + + insmod $DRIVER $SMOKE_PARAM > /dev/null 2>&1 +} + +usage() +{ + echo -n "Usage: $0 [ aligned ] | [ nonaligned ] | [ smoke ] | " + echo "manual parameters" + echo + echo "Valid tests and parameters:" + echo + modinfo $DRIVER + echo + echo "Example usage:" + echo + echo "# Shows help message" + echo "$0" + echo + echo "# Smoke testing" + echo "$0 smoke" + echo + echo "# Performance testing for nonaligned alloc API" + echo "$0 nonaligned" + echo + echo "# Performance testing for aligned alloc API" + echo "$0 aligned" + echo + exit 0 +} + +function validate_passed_args() +{ + VALID_ARGS=`modinfo $DRIVER | awk '/parm:/ {print $2}' | sed 's/:.*//'` + + # + # Something has been passed, check it. + # + for passed_arg in $@; do + key=${passed_arg//=*/} + valid=0 + + for valid_arg in $VALID_ARGS; do + if [[ $key = $valid_arg ]]; then + valid=1 + break + fi + done + + if [[ $valid -ne 1 ]]; then + echo "Error: key is not correct: ${key}" + exit $exitcode + fi + done +} + +function run_manual_check() +{ + # + # Validate passed parameters. If a wrong one is found, + # the script exits and does not execute further. + # + validate_passed_args $@ + + echo "Run the test with the following parameters: $@" + insmod $DRIVER $@ > /dev/null 2>&1 +} + +function run_test() +{ + if [ $# -eq 0 ]; then + usage + else + if [[ "$1" = "smoke" ]]; then + run_smoke_check + elif [[ "$1" = "nonaligned" ]]; then + run_nonaligned_check + elif [[ "$1" = "aligned" ]]; then + run_aligned_check + else + run_manual_check $@ + fi + fi + + check_test_failed_prefix + + echo "Done." + echo "Check the kernel ring buffer to see the summary."
+} + +check_test_requirements +run_test $@ + +exit 0

From patchwork Mon Oct 28 11:53:37 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13853360
From: Yunsheng Lin
CC: David Howells, Alexander Duyck, Andrew Morton,
Linux-MM, Eric Dumazet, Simon Horman, Shuah Khan
Subject: [PATCH net-next v23 2/7] mm: move the page fragment allocator from page_alloc into its own file
Date: Mon, 28 Oct 2024 19:53:37 +0800
Message-ID: <20241028115343.3405838-3-linyunsheng@huawei.com>
In-Reply-To: <20241028115343.3405838-1-linyunsheng@huawei.com>
References: <20241028115343.3405838-1-linyunsheng@huawei.com>

Inspired by [1], move the page fragment allocator from page_alloc into
its own c file and header file, as we are about to make more changes to
it in order to replace another page_frag implementation in sock.c.

As this patchset is going to replace 'struct page_frag' with
'struct page_frag_cache' in sched.h, including page_frag_cache.h in
sched.h causes a compiler error due to the interdependence between
mm_types.h and mm.h for asm-offsets.c, see [2]. So avoid the compiler
error by moving 'struct page_frag_cache' to mm_types_task.h, as
suggested by Alexander, see [3].

1. https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/
2. https://lore.kernel.org/all/15623dac-9358-4597-b3ee-3694a5956920@gmail.com/
3.
https://lore.kernel.org/all/CAKgT0UdH1yD=LSCXFJ=YM_aiA4OomD-2wXykO42bizaWMt_HOA@mail.gmail.com/ CC: David Howells CC: Alexander Duyck CC: Andrew Morton CC: Linux-MM Signed-off-by: Yunsheng Lin Acked-by: Andrew Morton Reviewed-by: Alexander Duyck --- include/linux/gfp.h | 22 --- include/linux/mm_types.h | 18 --- include/linux/mm_types_task.h | 18 +++ include/linux/page_frag_cache.h | 31 ++++ include/linux/skbuff.h | 1 + mm/Makefile | 1 + mm/page_alloc.c | 136 ---------------- mm/page_frag_cache.c | 145 ++++++++++++++++++ .../selftests/mm/page_frag/page_frag_test.c | 2 +- 9 files changed, 197 insertions(+), 177 deletions(-) create mode 100644 include/linux/page_frag_cache.h create mode 100644 mm/page_frag_cache.c diff --git a/include/linux/gfp.h b/include/linux/gfp.h index a951de920e20..a0a6d25f883f 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -371,28 +371,6 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas extern void __free_pages(struct page *page, unsigned int order); extern void free_pages(unsigned long addr, unsigned int order); -struct page_frag_cache; -void page_frag_cache_drain(struct page_frag_cache *nc); -extern void __page_frag_cache_drain(struct page *page, unsigned int count); -void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, - gfp_t gfp_mask, unsigned int align_mask); - -static inline void *page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align) -{ - WARN_ON_ONCE(!is_power_of_2(align)); - return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align); -} - -static inline void *page_frag_alloc(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask) -{ - return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); -} - -extern void page_frag_free(void *addr); - #define __free_page(page) __free_pages((page), 0) #define free_page(addr) free_pages((addr), 0) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 6e3bdf8e38bc..92314ef2d978 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -521,9 +521,6 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page)); */ #define STRUCT_PAGE_MAX_SHIFT (order_base_2(sizeof(struct page))) -#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) -#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) - /* * page_private can be used on tail pages. However, PagePrivate is only * checked by the VM on the head page. So page_private on the tail pages @@ -542,21 +539,6 @@ static inline void *folio_get_private(struct folio *folio) return folio->private; } -struct page_frag_cache { - void * va; -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - __u16 offset; - __u16 size; -#else - __u32 offset; -#endif - /* we maintain a pagecount bias, so that we dont dirty cache line - * containing page->_refcount every time we allocate a fragment. - */ - unsigned int pagecnt_bias; - bool pfmemalloc; -}; - typedef unsigned long vm_flags_t; /* diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h index bff5706b76e1..0ac6daebdd5c 100644 --- a/include/linux/mm_types_task.h +++ b/include/linux/mm_types_task.h @@ -8,6 +8,7 @@ * (These are defined separately to decouple sched.h from mm_types.h as much as possible.) 
*/ +#include #include #include @@ -43,6 +44,23 @@ struct page_frag { #endif }; +#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) +#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) +struct page_frag_cache { + void *va; +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + __u16 offset; + __u16 size; +#else + __u32 offset; +#endif + /* we maintain a pagecount bias, so that we dont dirty cache line + * containing page->_refcount every time we allocate a fragment. + */ + unsigned int pagecnt_bias; + bool pfmemalloc; +}; + /* Track pages that require TLB flushes */ struct tlbflush_unmap_batch { #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h new file mode 100644 index 000000000000..67ac8626ed9b --- /dev/null +++ b/include/linux/page_frag_cache.h @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _LINUX_PAGE_FRAG_CACHE_H +#define _LINUX_PAGE_FRAG_CACHE_H + +#include +#include +#include + +void page_frag_cache_drain(struct page_frag_cache *nc); +void __page_frag_cache_drain(struct page *page, unsigned int count); +void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, + gfp_t gfp_mask, unsigned int align_mask); + +static inline void *page_frag_alloc_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align) +{ + WARN_ON_ONCE(!is_power_of_2(align)); + return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align); +} + +static inline void *page_frag_alloc(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask) +{ + return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); +} + +void page_frag_free(void *addr); + +#endif diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 48f1e0fa2a13..7adca0fa2602 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -31,6 +31,7 @@ #include #include #include +#include #include #if IS_ENABLED(CONFIG_NF_CONNTRACK) #include diff --git a/mm/Makefile b/mm/Makefile index d5639b036166..dba52bb0da8a 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o obj-y += page-alloc.o +obj-y += page_frag_cache.o obj-y += init-mm.o obj-y += memblock.o obj-y += $(memory-hotplug-y) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 8afab64814dc..6ca2abce857b 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4836,142 +4836,6 @@ void free_pages(unsigned long addr, unsigned int order) EXPORT_SYMBOL(free_pages); -/* - * Page Fragment: - * An arbitrary-length arbitrary-offset area of memory which resides - * within a 0 or higher order page. Multiple fragments within that page - * are individually refcounted, in the page's reference counter. - * - * The page_frag functions below provide a simple allocation framework for - * page fragments. This is used by the network stack and network device - * drivers to provide a backing region of memory for use as either an - * sk_buff->head, or to be used in the "frags" portion of skb_shared_info. - */ -static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, - gfp_t gfp_mask) -{ - struct page *page = NULL; - gfp_t gfp = gfp_mask; - -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP | - __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; - page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, - PAGE_FRAG_CACHE_MAX_ORDER); - nc->size = page ? 
PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE; -#endif - if (unlikely(!page)) - page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); - - nc->va = page ? page_address(page) : NULL; - - return page; -} - -void page_frag_cache_drain(struct page_frag_cache *nc) -{ - if (!nc->va) - return; - - __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); - nc->va = NULL; -} -EXPORT_SYMBOL(page_frag_cache_drain); - -void __page_frag_cache_drain(struct page *page, unsigned int count) -{ - VM_BUG_ON_PAGE(page_ref_count(page) == 0, page); - - if (page_ref_sub_and_test(page, count)) - free_unref_page(page, compound_order(page)); -} -EXPORT_SYMBOL(__page_frag_cache_drain); - -void *__page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align_mask) -{ - unsigned int size = PAGE_SIZE; - struct page *page; - int offset; - - if (unlikely(!nc->va)) { -refill: - page = __page_frag_cache_refill(nc, gfp_mask); - if (!page) - return NULL; - -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif - /* Even if we own the page, we do not use atomic_set(). - * This would break get_page_unless_zero() users. - */ - page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); - - /* reset page count bias and offset to start of new frag */ - nc->pfmemalloc = page_is_pfmemalloc(page); - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - nc->offset = size; - } - - offset = nc->offset - fragsz; - if (unlikely(offset < 0)) { - page = virt_to_page(nc->va); - - if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) - goto refill; - - if (unlikely(nc->pfmemalloc)) { - free_unref_page(page, compound_order(page)); - goto refill; - } - -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif - /* OK, page count is 0, we can safely set it */ - set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); - - /* reset page count bias and offset to start of new frag */ - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - offset = size - fragsz; - if (unlikely(offset < 0)) { - /* - * The caller is trying to allocate a fragment - * with fragsz > PAGE_SIZE but the cache isn't big - * enough to satisfy the request, this may - * happen in low memory conditions. - * We don't release the cache page because - * it could make memory pressure worse - * so we simply return NULL here. - */ - return NULL; - } - } - - nc->pagecnt_bias--; - offset &= align_mask; - nc->offset = offset; - - return nc->va + offset; -} -EXPORT_SYMBOL(__page_frag_alloc_align); - -/* - * Frees a page fragment allocated out of either a compound or order 0 page. - */ -void page_frag_free(void *addr) -{ - struct page *page = virt_to_head_page(addr); - - if (unlikely(put_page_testzero(page))) - free_unref_page(page, compound_order(page)); -} -EXPORT_SYMBOL(page_frag_free); - static void *make_alloc_exact(unsigned long addr, unsigned int order, size_t size) { diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c new file mode 100644 index 000000000000..609a485cd02a --- /dev/null +++ b/mm/page_frag_cache.c @@ -0,0 +1,145 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Page fragment allocator + * + * Page Fragment: + * An arbitrary-length arbitrary-offset area of memory which resides within a + * 0 or higher order page. Multiple fragments within that page are + * individually refcounted, in the page's reference counter. + * + * The page_frag functions provide a simple allocation framework for page + * fragments. 
This is used by the network stack and network device drivers to + * provide a backing region of memory for use as either an sk_buff->head, or to + * be used in the "frags" portion of skb_shared_info. + */ + +#include +#include +#include +#include +#include +#include "internal.h" + +static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, + gfp_t gfp_mask) +{ + struct page *page = NULL; + gfp_t gfp = gfp_mask; + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP | + __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC; + page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, + PAGE_FRAG_CACHE_MAX_ORDER); + nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE; +#endif + if (unlikely(!page)) + page = alloc_pages_node(NUMA_NO_NODE, gfp, 0); + + nc->va = page ? page_address(page) : NULL; + + return page; +} + +void page_frag_cache_drain(struct page_frag_cache *nc) +{ + if (!nc->va) + return; + + __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); + nc->va = NULL; +} +EXPORT_SYMBOL(page_frag_cache_drain); + +void __page_frag_cache_drain(struct page *page, unsigned int count) +{ + VM_BUG_ON_PAGE(page_ref_count(page) == 0, page); + + if (page_ref_sub_and_test(page, count)) + free_unref_page(page, compound_order(page)); +} +EXPORT_SYMBOL(__page_frag_cache_drain); + +void *__page_frag_alloc_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask) +{ + unsigned int size = PAGE_SIZE; + struct page *page; + int offset; + + if (unlikely(!nc->va)) { +refill: + page = __page_frag_cache_refill(nc, gfp_mask); + if (!page) + return NULL; + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* if size can vary use size else just use PAGE_SIZE */ + size = nc->size; +#endif + /* Even if we own the page, we do not use atomic_set(). + * This would break get_page_unless_zero() users. + */ + page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); + + /* reset page count bias and offset to start of new frag */ + nc->pfmemalloc = page_is_pfmemalloc(page); + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + nc->offset = size; + } + + offset = nc->offset - fragsz; + if (unlikely(offset < 0)) { + page = virt_to_page(nc->va); + + if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) + goto refill; + + if (unlikely(nc->pfmemalloc)) { + free_unref_page(page, compound_order(page)); + goto refill; + } + +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + /* if size can vary use size else just use PAGE_SIZE */ + size = nc->size; +#endif + /* OK, page count is 0, we can safely set it */ + set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); + + /* reset page count bias and offset to start of new frag */ + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + offset = size - fragsz; + if (unlikely(offset < 0)) { + /* + * The caller is trying to allocate a fragment + * with fragsz > PAGE_SIZE but the cache isn't big + * enough to satisfy the request, this may + * happen in low memory conditions. + * We don't release the cache page because + * it could make memory pressure worse + * so we simply return NULL here. + */ + return NULL; + } + } + + nc->pagecnt_bias--; + offset &= align_mask; + nc->offset = offset; + + return nc->va + offset; +} +EXPORT_SYMBOL(__page_frag_alloc_align); + +/* + * Frees a page fragment allocated out of either a compound or order 0 page. 
+ */ +void page_frag_free(void *addr) +{ + struct page *page = virt_to_head_page(addr); + + if (unlikely(put_page_testzero(page))) + free_unref_page(page, compound_order(page)); +} +EXPORT_SYMBOL(page_frag_free); diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c index 912d97b99107..13c44133e009 100644 --- a/tools/testing/selftests/mm/page_frag/page_frag_test.c +++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c @@ -6,12 +6,12 @@ * Copyright (C) 2024 Yunsheng Lin */ -#include #include #include #include #include #include +#include <linux/page_frag_cache.h> #define TEST_FAILED_PREFIX "page_frag_test failed: "
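As a quick orientation to the relocated API, a minimal consumer looks like the sketch below (illustrative only, not part of the patch; the open-coded 'nc.va = NULL' initialization is what a later patch in this series wraps in page_frag_cache_init()):

#include <linux/gfp.h>
#include <linux/page_frag_cache.h>

static void page_frag_demo(void)
{
	struct page_frag_cache nc;
	void *frag;

	nc.va = NULL;	/* remaining fields are set up on the first refill */

	/* carve a 256-byte fragment, aligned to 64 bytes, out of the cache */
	frag = page_frag_alloc_align(&nc, 256, GFP_KERNEL, 64);
	if (frag)
		page_frag_free(frag);	/* drop the fragment's reference */

	page_frag_cache_drain(&nc);	/* drop the cache's own references */
}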
From patchwork Mon Oct 28 11:53:38 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13853361
From: Yunsheng Lin
CC: Alexander Duyck, Andrew Morton, Linux-MM
Subject: [PATCH net-next v23 3/7] mm: page_frag: use initial zero offset for page_frag_alloc_align()
Date: Mon, 28 Oct 2024 19:53:38 +0800
Message-ID: <20241028115343.3405838-4-linyunsheng@huawei.com>
In-Reply-To: <20241028115343.3405838-1-linyunsheng@huawei.com>
References: <20241028115343.3405838-1-linyunsheng@huawei.com>

We are about to use the page_frag_alloc_*() API to allocate memory not
just for skb->data, but also for skb frags.
Currently the page_frag implementation in the mm subsystem runs the
offset as a countdown rather than a count-up value; there may be
several advantages to that, as mentioned in [1], but it also has
disadvantages: for example, it may prevent skb frag coalescing and more
accurate cache prefetching.

There is a trade-off to make in order to have a unified implementation
and API for page_frag, so use an initial zero offset in this patch; the
following patch will try to optimize away the disadvantages as much as
possible.

1. https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/ CC: Alexander Duyck CC: Andrew Morton CC: Linux-MM Signed-off-by: Yunsheng Lin Reviewed-by: Alexander Duyck --- mm/page_frag_cache.c | 46 ++++++++++++++++++++++---------------------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index 609a485cd02a..4c8e04379cb3 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -63,9 +63,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align_mask) { +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) + unsigned int size = nc->size; +#else unsigned int size = PAGE_SIZE; +#endif + unsigned int offset; struct page *page; - int offset; if (unlikely(!nc->va)) { refill: @@ -85,11 +89,24 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, /* reset page count bias and offset to start of new frag */ nc->pfmemalloc = page_is_pfmemalloc(page); nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - nc->offset = size; + nc->offset = 0; } - offset = nc->offset - fragsz; - if (unlikely(offset < 0)) { + offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask); + if (unlikely(offset + fragsz > size)) { + if (unlikely(fragsz > PAGE_SIZE)) { + /* + * The caller is trying to allocate a fragment + * with fragsz > PAGE_SIZE but the cache isn't big + * enough to satisfy the request, this may + * happen in low memory conditions. + * We don't release the cache page because + * it could make memory pressure worse + * so we simply return NULL here. + */ + return NULL; + } + page = virt_to_page(nc->va); if (!page_ref_sub_and_test(page, nc->pagecnt_bias)) @@ -100,33 +117,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, goto refill; } -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) - /* if size can vary use size else just use PAGE_SIZE */ - size = nc->size; -#endif /* OK, page count is 0, we can safely set it */ set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); /* reset page count bias and offset to start of new frag */ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; - offset = size - fragsz; - if (unlikely(offset < 0)) { - /* - * The caller is trying to allocate a fragment - * with fragsz > PAGE_SIZE but the cache isn't big - * enough to satisfy the request, this may - * happen in low memory conditions. - * We don't release the cache page because - * it could make memory pressure worse - * so we simply return NULL here.
- */ - return NULL; - } + offset = 0; } nc->pagecnt_bias--; - offset &= align_mask; - nc->offset = offset; + nc->offset = offset + fragsz; return nc->va + offset; }
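To see why the count-up code computes the right offset: page_frag_alloc_align() passes '-align' down as align_mask, so for a power-of-two align, ~align_mask == align - 1 and __ALIGN_KERNEL_MASK() is the usual round-up idiom; for the unaligned page_frag_alloc() path, align_mask is ~0u and the offset is left unchanged. A standalone userspace illustration of the mask arithmetic (the macro mirrors include/uapi/linux/const.h):

#include <stdio.h>

#define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))

int main(void)
{
	unsigned int align = 64;		/* power of two, enforced by WARN_ON_ONCE() */
	unsigned int align_mask = -align;	/* 0xffffffc0, as passed by page_frag_alloc_align() */
	unsigned int offset = 100;		/* current nc->offset */

	/* ~align_mask == 63, so 100 rounds up to 128 */
	printf("aligned:   %u\n", __ALIGN_KERNEL_MASK(offset, ~align_mask));

	/* unaligned path: align_mask == ~0u makes the mask 0, so 100 stays 100 */
	printf("unaligned: %u\n", __ALIGN_KERNEL_MASK(offset, 0u));
	return 0;
}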
From patchwork Mon Oct 28 11:53:39 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13853362
From: Yunsheng Lin
CC: Alexander Duyck, Andrew Morton, Linux-MM, Chuck Lever, "Michael S. Tsirkin", Jason Wang, Eugenio Pérez, Eric Dumazet, Simon Horman, David Howells, Marc Dionne, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, Shuah Khan
Subject: [PATCH net-next v23 4/7] mm: page_frag: avoid caller accessing 'page_frag_cache' directly
Date: Mon, 28 Oct 2024 19:53:39 +0800
Message-ID: <20241028115343.3405838-5-linyunsheng@huawei.com>
In-Reply-To: <20241028115343.3405838-1-linyunsheng@huawei.com>
References: <20241028115343.3405838-1-linyunsheng@huawei.com>

Use the appropriate page_frag API instead of having callers access
'page_frag_cache' internals directly.
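Concretely, the conversion pattern is the one sketched below (driver and field names hypothetical; the hunks that follow apply the same change to vhost, rxrpc, sunrpc and the test module, and reads of nc->pfmemalloc likewise become page_frag_cache_is_pfmemalloc(nc)):

struct demo_priv {
	struct page_frag_cache pf_cache;
};

static void demo_init(struct demo_priv *p)
{
	/* was: p->pf_cache.va = NULL; */
	page_frag_cache_init(&p->pf_cache);
}

static void demo_destroy(struct demo_priv *p)
{
	/*
	 * was: if (p->pf_cache.va)
	 *		__page_frag_cache_drain(virt_to_head_page(p->pf_cache.va),
	 *					p->pf_cache.pagecnt_bias);
	 */
	page_frag_cache_drain(&p->pf_cache);
}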
CC: Alexander Duyck CC: Andrew Morton CC: Linux-MM Signed-off-by: Yunsheng Lin Reviewed-by: Alexander Duyck Acked-by: Chuck Lever --- drivers/vhost/net.c | 2 +- include/linux/page_frag_cache.h | 10 ++++++++++ net/core/skbuff.c | 6 +++--- net/rxrpc/conn_object.c | 4 +--- net/rxrpc/local_object.c | 4 +--- net/sunrpc/svcsock.c | 6 ++---- tools/testing/selftests/mm/page_frag/page_frag_test.c | 2 +- 7 files changed, 19 insertions(+), 15 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index f16279351db5..9ad37c012189 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -1325,7 +1325,7 @@ static int vhost_net_open(struct inode *inode, struct file *f) vqs[VHOST_NET_VQ_RX]); f->private_data = n; - n->pf_cache.va = NULL; + page_frag_cache_init(&n->pf_cache); return 0; } diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index 67ac8626ed9b..0a52f7a179c8 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -7,6 +7,16 @@ #include #include +static inline void page_frag_cache_init(struct page_frag_cache *nc) +{ + nc->va = NULL; +} + +static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc) +{ + return !!nc->pfmemalloc; +} + void page_frag_cache_drain(struct page_frag_cache *nc); void __page_frag_cache_drain(struct page *page, unsigned int count); void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 00afeb90c23a..6841e61a6bd0 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -753,14 +753,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len, if (in_hardirq() || irqs_disabled()) { nc = this_cpu_ptr(&netdev_alloc_cache); data = page_frag_alloc(nc, len, gfp_mask); - pfmemalloc = nc->pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(nc); } else { local_bh_disable(); local_lock_nested_bh(&napi_alloc_cache.bh_lock); nc = this_cpu_ptr(&napi_alloc_cache.page); data = page_frag_alloc(nc, len, gfp_mask); - pfmemalloc = nc->pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(nc); local_unlock_nested_bh(&napi_alloc_cache.bh_lock); local_bh_enable(); @@ -850,7 +850,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len) len = SKB_HEAD_ALIGN(len); data = page_frag_alloc(&nc->page, len, gfp_mask); - pfmemalloc = nc->page.pfmemalloc; + pfmemalloc = page_frag_cache_is_pfmemalloc(&nc->page); } local_unlock_nested_bh(&napi_alloc_cache.bh_lock); diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 1539d315afe7..694c4df7a1a3 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -337,9 +337,7 @@ static void rxrpc_clean_up_connection(struct work_struct *work) */ rxrpc_purge_queue(&conn->rx_queue); - if (conn->tx_data_alloc.va) - __page_frag_cache_drain(virt_to_page(conn->tx_data_alloc.va), - conn->tx_data_alloc.pagecnt_bias); + page_frag_cache_drain(&conn->tx_data_alloc); call_rcu(&conn->rcu, rxrpc_rcu_free_connection); } diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index f9623ace2201..2792d2304605 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -452,9 +452,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local) #endif rxrpc_purge_queue(&local->rx_queue); rxrpc_purge_client_connections(local); - if (local->tx_alloc.va) - __page_frag_cache_drain(virt_to_page(local->tx_alloc.va), - local->tx_alloc.pagecnt_bias); + page_frag_cache_drain(&local->tx_alloc); } /* diff --git a/net/sunrpc/svcsock.c 
index 825ec5357691..b785425c3315 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1608,7 +1608,6 @@ static void svc_tcp_sock_detach(struct svc_xprt *xprt)
 static void svc_sock_free(struct svc_xprt *xprt)
 {
 	struct svc_sock *svsk = container_of(xprt, struct svc_sock, sk_xprt);
-	struct page_frag_cache *pfc = &svsk->sk_frag_cache;
 	struct socket *sock = svsk->sk_sock;

 	trace_svcsock_free(svsk, sock);
@@ -1618,8 +1617,7 @@ static void svc_sock_free(struct svc_xprt *xprt)
 		sockfd_put(sock);
 	else
 		sock_release(sock);
-	if (pfc->va)
-		__page_frag_cache_drain(virt_to_head_page(pfc->va),
-					pfc->pagecnt_bias);
+
+	page_frag_cache_drain(&svsk->sk_frag_cache);
 	kfree(svsk);
 }

diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index 13c44133e009..e806c1866e36 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -126,7 +126,7 @@ static int __init page_frag_test_init(void)
 	u64 duration;
 	int ret;

-	test_nc.va = NULL;
+	page_frag_cache_init(&test_nc);
 	atomic_set(&nthreads, 2);
 	init_completion(&wait);
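For reference, all of the open-coded sequences removed above collapse into
the one drain helper; its body at this point in the series is presumably the
following (inferred from the removed callers, and matching the pre-conversion
code visible in the patch 6/7 diff below):

void page_frag_cache_drain(struct page_frag_cache *nc)
{
	if (!nc->va)
		return;

	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
	nc->va = NULL;
}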
From patchwork Mon Oct 28 11:53:40 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13853407
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v23 5/7] xtensa: remove the get_order() implementation
Date: Mon, 28 Oct 2024 19:53:40 +0800
Message-ID: <20241028115343.3405838-6-linyunsheng@huawei.com>
In-Reply-To: <20241028115343.3405838-1-linyunsheng@huawei.com>
References: <20241028115343.3405838-1-linyunsheng@huawei.com>
The get_order() implementation that xtensa provides for processors
supporting the 'nsau' instruction is equivalent to the generic
implementation in include/asm-generic/getorder.h when size is not a
constant, as the fls*() helpers the generic implementation calls also
compile down to the 'nsau' instruction on xtensa.

So remove the xtensa-specific get_order(): using the generic
implementation lets the compiler do the computation at compile time when
size is a constant, instead of computing it at runtime, and enables the
use of get_order() in the BUILD_BUG_ON() macro in the next patch.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin
Acked-by: Max Filippov
Reviewed-by: Alexander Duyck
---
 arch/xtensa/include/asm/page.h | 18 ------------------
 1 file changed, 18 deletions(-)

diff --git a/arch/xtensa/include/asm/page.h b/arch/xtensa/include/asm/page.h
index 4db56ef052d2..8665d57991dd 100644
--- a/arch/xtensa/include/asm/page.h
+++ b/arch/xtensa/include/asm/page.h
@@ -109,26 +109,8 @@ typedef struct page *pgtable_t;
 #define __pgd(x)	((pgd_t) { (x) } )
 #define __pgprot(x)	((pgprot_t) { (x) } )

-/*
- * Pure 2^n version of get_order
- * Use 'nsau' instructions if supported by the processor or the generic version.
- */
-
-#if XCHAL_HAVE_NSA
-
-static inline __attribute_const__ int get_order(unsigned long size)
-{
-	int lz;
-	asm ("nsau %0, %1" : "=r" (lz) : "r" ((size - 1) >> PAGE_SHIFT));
-	return 32 - lz;
-}
-
-#else
-
 # include

-#endif
-
 struct page;
 struct vm_area_struct;
 extern void clear_page(void *page);
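As a user-space illustration of the compile-time point above (a simplified
model of the generic pure-2^n computation, not the kernel macro itself;
MY_PAGE_SHIFT and my_get_order() are local names): with a constant size,
GCC/clang fold the clz builtin to a constant, so the result is usable where
a constant expression is required, which is the property the BUILD_BUG_ON()
usage in the next patch depends on.

#define MY_PAGE_SHIFT	12

/* for constant sizes only: double-evaluates 'size' */
#define my_get_order(size)						\
	((((size) - 1) >> MY_PAGE_SHIFT) == 0 ? 0 :			\
	 (int)(8 * sizeof(unsigned long)) -				\
	 __builtin_clzl(((size) - 1) >> MY_PAGE_SHIFT))

/* 32KB with 4KB pages is 8 pages, i.e. order 3 */
_Static_assert(my_get_order(32768UL) == 3, "constant-folded get_order");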
From patchwork Mon Oct 28 11:53:41 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13853363
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v23 6/7] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
Date: Mon, 28 Oct 2024 19:53:41 +0800
Message-ID: <20241028115343.3405838-7-linyunsheng@huawei.com>
In-Reply-To: <20241028115343.3405838-1-linyunsheng@huawei.com>
References: <20241028115343.3405838-1-linyunsheng@huawei.com>
Currently there is one 'struct page_frag' in every 'struct sock' and
'struct task_struct', and we are about to replace 'struct page_frag'
with 'struct page_frag_cache' in both of them. Before doing the
replacement, we need to ensure that 'struct page_frag_cache' is no
bigger than 'struct page_frag', as there may be tens of thousands of
'struct sock' and 'struct task_struct' instances in a system.

By OR'ing the page order and the pfmemalloc bit into the lower bits of
'va', instead of spending a 'u16' or 'u32' on the page size and a 'u8'
on pfmemalloc, we avoid 3 or 5 bytes of wasted space. Since the page
address, pfmemalloc bit and order are unchanged for the same page
within the same 'page_frag_cache' instance, it makes sense to pack
them together.

After this patch, 'struct page_frag_cache' is the same size as
'struct page_frag'.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin
Reviewed-by: Alexander Duyck
---
 include/linux/mm_types_task.h   | 19 +++++----
 include/linux/page_frag_cache.h | 24 ++++++++++-
 mm/page_frag_cache.c            | 70 ++++++++++++++++++++++-----------
 3 files changed, 81 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index 0ac6daebdd5c..a82aa80c0ba4 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -47,18 +47,21 @@ struct page_frag {
 #define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
 #define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
 struct page_frag_cache {
-	void *va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* encoded_page consists of the virtual address, pfmemalloc bit and
+	 * order of a page.
+	 */
+	unsigned long encoded_page;
+
+	/* we maintain a pagecount bias, so that we dont dirty cache line
+	 * containing page->_refcount every time we allocate a fragment.
+	 */
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
 	__u16 offset;
-	__u16 size;
+	__u16 pagecnt_bias;
 #else
 	__u32 offset;
+	__u32 pagecnt_bias;
 #endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
-	bool pfmemalloc;
 };

 /* Track pages that require TLB flushes */

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 0a52f7a179c8..41a91df82631 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -3,18 +3,38 @@
 #ifndef _LINUX_PAGE_FRAG_CACHE_H
 #define _LINUX_PAGE_FRAG_CACHE_H

+#include
 #include
 #include
 #include

+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+/* Use a full byte here to enable assembler optimization as the shift
+ * operation is usually expecting a byte.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK		GENMASK(7, 0)
+#else
+/* Compiler should be able to figure out we don't read things as any value
+ * ANDed with 0 is 0.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK		0
+#endif
+
+#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT		(PAGE_FRAG_CACHE_ORDER_MASK + 1)
+
+static inline bool encoded_page_decode_pfmemalloc(unsigned long encoded_page)
+{
+	return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
-	nc->va = NULL;
+	nc->encoded_page = 0;
 }

 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
-	return !!nc->pfmemalloc;
+	return encoded_page_decode_pfmemalloc(nc->encoded_page);
 }

 void page_frag_cache_drain(struct page_frag_cache *nc);

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 4c8e04379cb3..a36fd09bf275 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -12,6 +12,7 @@
  * be used in the "frags" portion of skb_shared_info.
  */

+#include
 #include
 #include
 #include
@@ -19,9 +20,36 @@
 #include
 #include "internal.h"

+static unsigned long encoded_page_create(struct page *page, unsigned int order,
+					 bool pfmemalloc)
+{
+	BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK);
+	BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_BIT >= PAGE_SIZE);
+
+	return (unsigned long)page_address(page) |
+		(order & PAGE_FRAG_CACHE_ORDER_MASK) |
+		((unsigned long)pfmemalloc * PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
+static unsigned long encoded_page_decode_order(unsigned long encoded_page)
+{
+	return encoded_page & PAGE_FRAG_CACHE_ORDER_MASK;
+}
+
+static void *encoded_page_decode_virt(unsigned long encoded_page)
+{
+	return (void *)(encoded_page & PAGE_MASK);
+}
+
+static struct page *encoded_page_decode_page(unsigned long encoded_page)
+{
+	return virt_to_page((void *)encoded_page);
+}
+
 static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 					     gfp_t gfp_mask)
 {
+	unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
 	struct page *page = NULL;
 	gfp_t gfp = gfp_mask;

@@ -30,23 +58,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
 #endif
-	if (unlikely(!page))
+	if (unlikely(!page)) {
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		order = 0;
+	}

-	nc->va = page ? page_address(page) : NULL;
+	nc->encoded_page = page ?
+		encoded_page_create(page, order, page_is_pfmemalloc(page)) : 0;

 	return page;
 }

 void page_frag_cache_drain(struct page_frag_cache *nc)
 {
-	if (!nc->va)
+	if (!nc->encoded_page)
 		return;

-	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
-	nc->va = NULL;
+	__page_frag_cache_drain(encoded_page_decode_page(nc->encoded_page),
+				nc->pagecnt_bias);
+	nc->encoded_page = 0;
 }
 EXPORT_SYMBOL(page_frag_cache_drain);

@@ -63,35 +94,29 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
 			      gfp_t gfp_mask, unsigned int align_mask)
 {
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	unsigned int size = nc->size;
-#else
-	unsigned int size = PAGE_SIZE;
-#endif
-	unsigned int offset;
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
 	struct page *page;

-	if (unlikely(!nc->va)) {
+	if (unlikely(!encoded_page)) {
refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
 		if (!page)
 			return NULL;

-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
+		encoded_page = nc->encoded_page;
+
 		/* Even if we own the page, we do not use atomic_set().
		 * This would break get_page_unless_zero() users.
		 */
		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);

 		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		nc->offset = 0;
 	}

+	size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
 	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
 	if (unlikely(offset + fragsz > size)) {
 		if (unlikely(fragsz > PAGE_SIZE)) {
@@ -107,13 +132,14 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			return NULL;
 		}

-		page = virt_to_page(nc->va);
+		page = encoded_page_decode_page(encoded_page);

 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
 			goto refill;

-		if (unlikely(nc->pfmemalloc)) {
-			free_unref_page(page, compound_order(page));
+		if (unlikely(encoded_page_decode_pfmemalloc(encoded_page))) {
+			free_unref_page(page,
+					encoded_page_decode_order(encoded_page));
 			goto refill;
 		}

@@ -128,7 +154,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 	nc->pagecnt_bias--;
 	nc->offset = offset + fragsz;

-	return nc->va + offset;
+	return encoded_page_decode_virt(encoded_page) + offset;
 }
 EXPORT_SYMBOL(__page_frag_alloc_align);
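A self-contained user-space model of the encoding above may help: a
page-aligned address has its PAGE_SHIFT low bits clear, so an 8-bit order
and the pfmemalloc bit fit in those bits for free. The constants mirror the
kernel ones, but all names here are local to the sketch.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MY_PAGE_SHIFT		12
#define MY_PAGE_MASK		(~((1UL << MY_PAGE_SHIFT) - 1))
#define MY_ORDER_MASK		0xffUL			/* GENMASK(7, 0) */
#define MY_PFMEMALLOC_BIT	(MY_ORDER_MASK + 1)	/* bit 8, below bit 12 */

static unsigned long encode(void *va, unsigned int order, bool pfmemalloc)
{
	return (unsigned long)(uintptr_t)va | (order & MY_ORDER_MASK) |
	       ((unsigned long)pfmemalloc * MY_PFMEMALLOC_BIT);
}

int main(void)
{
	void *va = (void *)0x7f0000000000UL;	/* stands in for page_address() */
	unsigned long ep = encode(va, 3, true);

	assert((void *)(ep & MY_PAGE_MASK) == va);	/* decode_virt */
	assert((ep & MY_ORDER_MASK) == 3);		/* decode_order */
	assert(ep & MY_PFMEMALLOC_BIT);			/* decode_pfmemalloc */
	return 0;
}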
From patchwork Mon Oct 28 11:53:42 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13853364
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v23 7/7] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
Date: Mon, 28 Oct 2024 19:53:42 +0800
Message-ID: <20241028115343.3405838-8-linyunsheng@huawei.com>
In-Reply-To: <20241028115343.3405838-1-linyunsheng@huawei.com>
References: <20241028115343.3405838-1-linyunsheng@huawei.com>
There is about a 24-byte binary size increase for
__page_frag_cache_refill() after the refactoring on an arm64 system
with a 64K PAGE_SIZE. Disassembling with gdb suggests the binary size
can be shrunk by more than 100 bytes by using __alloc_pages() to
replace alloc_pages_node(), as the latter performs an unnecessary
check for nid being NUMA_NO_NODE, which page_frag can avoid given
that it is part of the mm system itself.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin
Reviewed-by: Alexander Duyck
---
 mm/page_frag_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index a36fd09bf275..3f7a203d35c6 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -56,11 +56,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) |  __GFP_COMP |
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
+	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+			     numa_mem_id(), NULL);
 #endif
 	if (unlikely(!page)) {
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
 		order = 0;
 	}
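For context on where the text-size saving comes from: alloc_pages_node() is
roughly the wrapper sketched below (paraphrased from include/linux/gfp.h of
this era, simplified and not verbatim), so passing numa_mem_id() to
__alloc_pages() directly lets the caller drop the NUMA_NO_NODE fixup branch
that the wrapper would otherwise emit at every call site.

/* paraphrase of the wrapper this patch bypasses; not the actual kernel code */
static inline struct page *alloc_pages_node_sketch(int nid, gfp_t gfp_mask,
						   unsigned int order)
{
	if (nid == NUMA_NO_NODE)	/* the check avoided by the patch */
		nid = numa_mem_id();

	return __alloc_pages(gfp_mask, order, nid, NULL);
}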