From patchwork Wed Jan 3 09:56:44 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13509824
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin , Alexander Duyck , Andrew Morton , Eric Dumazet ,
Subject: [PATCH net-next 1/6] mm/page_alloc: modify page_frag_alloc_align() to
 accept align as an argument
Date: Wed, 3 Jan 2024 17:56:44 +0800
Message-ID: <20240103095650.25769-2-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240103095650.25769-1-linyunsheng@huawei.com>
References: <20240103095650.25769-1-linyunsheng@huawei.com>
napi_alloc_frag_align() and netdev_alloc_frag_align() accept align as
an argument, and they are thin wrappers around the
__napi_alloc_frag_align() and __netdev_alloc_frag_align() APIs doing
the align and align_mask conversion, in order to call
page_frag_alloc_align() directly.
As the __napi_alloc_frag_align() and __netdev_alloc_frag_align() APIs
are only used by the above thin wrappers, it seems that it makes more
sense to remove the align and align_mask conversion and call
page_frag_alloc_align() directly. By doing that, we can also avoid the
confusion between napi_alloc_frag_align() accepting align as an
argument and page_frag_alloc_align() accepting align_mask as an
argument when they both have the 'align' suffix.

Signed-off-by: Yunsheng Lin
CC: Alexander Duyck
---
 include/linux/gfp.h    |  4 ++--
 include/linux/skbuff.h | 22 ++++------------------
 mm/page_alloc.c        |  6 ++++--
 net/core/skbuff.c      | 14 +++++++-------
 4 files changed, 17 insertions(+), 29 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index de292a007138..bbd75976541e 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -314,12 +314,12 @@ struct page_frag_cache;
 extern void __page_frag_cache_drain(struct page *page, unsigned int count);
 extern void *page_frag_alloc_align(struct page_frag_cache *nc,
 				   unsigned int fragsz, gfp_t gfp_mask,
-				   unsigned int align_mask);
+				   unsigned int align);
 
 static inline void *page_frag_alloc(struct page_frag_cache *nc,
 				    unsigned int fragsz, gfp_t gfp_mask)
 {
-	return page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+	return page_frag_alloc_align(nc, fragsz, gfp_mask, 1);
 }
 
 extern void page_frag_free(void *addr);
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index a5ae952454c8..c0a3b44ef5da 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3192,7 +3192,7 @@ static inline void skb_queue_purge(struct sk_buff_head *list)
 unsigned int skb_rbtree_purge(struct rb_root *root);
 void skb_errqueue_purge(struct sk_buff_head *list);
 
-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
+void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align);
 
 /**
  * netdev_alloc_frag - allocate a page fragment
@@ -3203,14 +3203,7 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
  */
 static inline void *netdev_alloc_frag(unsigned int fragsz)
 {
-	return __netdev_alloc_frag_align(fragsz, ~0u);
-}
-
-static inline void *netdev_alloc_frag_align(unsigned int fragsz,
-					    unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __netdev_alloc_frag_align(fragsz, -align);
+	return netdev_alloc_frag_align(fragsz, 1);
 }
 
 struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int length,
@@ -3270,18 +3263,11 @@ static inline void skb_free_frag(void *addr)
 	page_frag_free(addr);
 }
 
-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
+void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align);
 
 static inline void *napi_alloc_frag(unsigned int fragsz)
 {
-	return __napi_alloc_frag_align(fragsz, ~0u);
-}
-
-static inline void *napi_alloc_frag_align(unsigned int fragsz,
-					  unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __napi_alloc_frag_align(fragsz, -align);
+	return napi_alloc_frag_align(fragsz, 1);
 }
 
 struct sk_buff *__napi_alloc_skb(struct napi_struct *napi,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 37ca4f4b62bf..9a16305cf985 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4718,12 +4718,14 @@ EXPORT_SYMBOL(__page_frag_cache_drain);
 
 void *page_frag_alloc_align(struct page_frag_cache *nc,
 			    unsigned int fragsz, gfp_t gfp_mask,
-			    unsigned int align_mask)
+			    unsigned int align)
 {
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
 
+	WARN_ON_ONCE(!is_power_of_2(align));
+
 	if (unlikely(!nc->va)) {
 refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
@@ -4782,7 +4784,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
+	offset &= -align;
 
 	nc->offset = offset;
 
 	return nc->va + offset;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 12d22c0b8551..84c29a48f1a8 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -291,17 +291,17 @@ void napi_get_frags_check(struct napi_struct *napi)
 	local_bh_enable();
 }
 
-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 
 	fragsz = SKB_DATA_ALIGN(fragsz);
 
-	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 }
-EXPORT_SYMBOL(__napi_alloc_frag_align);
+EXPORT_SYMBOL(napi_alloc_frag_align);
 
-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	void *data;
 
@@ -309,18 +309,18 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
 	if (in_hardirq() || irqs_disabled()) {
 		struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache);
 
-		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align);
 	} else {
 		struct napi_alloc_cache *nc;
 
 		local_bh_disable();
 		nc = this_cpu_ptr(&napi_alloc_cache);
-		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 		local_bh_enable();
 	}
 	return data;
 }
-EXPORT_SYMBOL(__netdev_alloc_frag_align);
+EXPORT_SYMBOL(netdev_alloc_frag_align);
 
 static struct sk_buff *napi_skb_cache_get(void)
 {