From patchwork Wed Jan 3 09:56:44 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 13509824
Subject: [PATCH net-next 1/6] mm/page_alloc: modify page_frag_alloc_align()
 to accept align as an argument
Date: Wed, 3 Jan 2024 17:56:44 +0800
Message-ID: <20240103095650.25769-2-linyunsheng@huawei.com>
In-Reply-To: <20240103095650.25769-1-linyunsheng@huawei.com>
References: <20240103095650.25769-1-linyunsheng@huawei.com>
CC: Yunsheng Lin, Alexander Duyck, Andrew Morton, Eric Dumazet

napi_alloc_frag_align() and netdev_alloc_frag_align() accept align as
an argument; they are thin wrappers around the __napi_alloc_frag_align()
and __netdev_alloc_frag_align() APIs that convert align to align_mask
before calling page_frag_alloc_align() directly.

As __napi_alloc_frag_align() and __netdev_alloc_frag_align() are only
used by those thin wrappers, it makes more sense to drop the align to
align_mask conversion and have the wrappers call page_frag_alloc_align()
directly. Doing so also removes the confusion of napi_alloc_frag_align()
taking align while page_frag_alloc_align() takes align_mask, even though
both carry the 'align' suffix.
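For background, here is a minimal userspace sketch (illustrative only,
not kernel code) of the conversion the old wrappers performed: for a
power-of-two align, the two's-complement negation -align has the same
bit pattern as the mask ~(align - 1), and align == 1 degenerates to
~0u, i.e. no alignment restriction at all.

#include <stdio.h>

int main(void)
{
	unsigned int aligns[] = { 1, 2, 8, 64 };
	unsigned int offset = 1003;

	for (int i = 0; i < 4; i++) {
		unsigned int align = aligns[i];
		/* the old wrappers passed -align as align_mask */
		unsigned int mask = -align;

		/* offset & mask rounds offset down to a multiple of
		 * align: 1003 -> 1003, 1002, 1000, 960 */
		printf("align=%2u mask=0x%08x %u -> %u\n",
		       align, mask, offset, offset & mask);
	}
	return 0;
}

After this patch the mask arithmetic still happens, but only once,
inside page_frag_alloc_align() itself, so callers deal exclusively in
the power-of-two align value.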
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
CC: Alexander Duyck
---
 include/linux/gfp.h    |  4 ++--
 include/linux/skbuff.h | 22 ++++------------------
 mm/page_alloc.c        |  6 ++++--
 net/core/skbuff.c      | 14 +++++++-------
 4 files changed, 17 insertions(+), 29 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index de292a007138..bbd75976541e 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -314,12 +314,12 @@ struct page_frag_cache;
 extern void __page_frag_cache_drain(struct page *page, unsigned int count);
 extern void *page_frag_alloc_align(struct page_frag_cache *nc,
 				   unsigned int fragsz, gfp_t gfp_mask,
-				   unsigned int align_mask);
+				   unsigned int align);
 
 static inline void *page_frag_alloc(struct page_frag_cache *nc,
 				    unsigned int fragsz, gfp_t gfp_mask)
 {
-	return page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+	return page_frag_alloc_align(nc, fragsz, gfp_mask, 1);
 }
 
 extern void page_frag_free(void *addr);
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index a5ae952454c8..c0a3b44ef5da 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3192,7 +3192,7 @@ static inline void skb_queue_purge(struct sk_buff_head *list)
 unsigned int skb_rbtree_purge(struct rb_root *root);
 void skb_errqueue_purge(struct sk_buff_head *list);
 
-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
+void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align);
 
 /**
  * netdev_alloc_frag - allocate a page fragment
@@ -3203,14 +3203,7 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
  */
 static inline void *netdev_alloc_frag(unsigned int fragsz)
 {
-	return __netdev_alloc_frag_align(fragsz, ~0u);
-}
-
-static inline void *netdev_alloc_frag_align(unsigned int fragsz,
-					    unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __netdev_alloc_frag_align(fragsz, -align);
+	return netdev_alloc_frag_align(fragsz, 1);
 }
 
 struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int length,
@@ -3270,18 +3263,11 @@ static inline void skb_free_frag(void *addr)
 	page_frag_free(addr);
 }
 
-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
+void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align);
 
 static inline void *napi_alloc_frag(unsigned int fragsz)
 {
-	return __napi_alloc_frag_align(fragsz, ~0u);
-}
-
-static inline void *napi_alloc_frag_align(unsigned int fragsz,
-					  unsigned int align)
-{
-	WARN_ON_ONCE(!is_power_of_2(align));
-	return __napi_alloc_frag_align(fragsz, -align);
+	return napi_alloc_frag_align(fragsz, 1);
 }
 
 struct sk_buff *__napi_alloc_skb(struct napi_struct *napi,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 37ca4f4b62bf..9a16305cf985 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4718,12 +4718,14 @@ EXPORT_SYMBOL(__page_frag_cache_drain);
 
 void *page_frag_alloc_align(struct page_frag_cache *nc,
 			    unsigned int fragsz, gfp_t gfp_mask,
-			    unsigned int align_mask)
+			    unsigned int align)
 {
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
 
+	WARN_ON_ONCE(!is_power_of_2(align));
+
 	if (unlikely(!nc->va)) {
 refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
@@ -4782,7 +4784,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	}
 
 	nc->pagecnt_bias--;
-	offset &= align_mask;
+	offset &= -align;
 	nc->offset = offset;
 
 	return nc->va + offset;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 12d22c0b8551..84c29a48f1a8 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -291,17 +291,17 @@ void napi_get_frags_check(struct napi_struct *napi)
 	local_bh_enable();
 }
 
-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 
 	fragsz = SKB_DATA_ALIGN(fragsz);
 
-	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 }
-EXPORT_SYMBOL(__napi_alloc_frag_align);
+EXPORT_SYMBOL(napi_alloc_frag_align);
 
-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	void *data;
 
@@ -309,18 +309,18 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
 	if (in_hardirq() || irqs_disabled()) {
 		struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache);
 
-		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align);
 	} else {
 		struct napi_alloc_cache *nc;
 
 		local_bh_disable();
 		nc = this_cpu_ptr(&napi_alloc_cache);
-		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 		local_bh_enable();
 	}
 	return data;
 }
-EXPORT_SYMBOL(__netdev_alloc_frag_align);
+EXPORT_SYMBOL(netdev_alloc_frag_align);
 
 static struct sk_buff *napi_skb_cache_get(void)
 {
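To illustrate the resulting calling convention, a hypothetical driver
helper (example_alloc_rx_buf is not from this series) now passes the
desired power-of-two alignment directly; the
WARN_ON_ONCE(!is_power_of_2(align)) sanity check lives once in
page_frag_alloc_align() rather than in every wrapper:

/* Illustrative only: request a cacheline-aligned fragment from the
 * per-CPU NAPI cache using the post-patch signature. */
static void *example_alloc_rx_buf(unsigned int fragsz)
{
	return napi_alloc_frag_align(fragsz, SMP_CACHE_BYTES);
}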
From patchwork Wed Jan 3 09:56:45 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 13509823
Subject: [PATCH net-next 2/6] page_frag: unify gfp bits for order 3 page
 allocation
Date: Wed, 3 Jan 2024 17:56:45 +0800
Message-ID: <20240103095650.25769-3-linyunsheng@huawei.com>
In-Reply-To: <20240103095650.25769-1-linyunsheng@huawei.com>
References: <20240103095650.25769-1-linyunsheng@huawei.com>
CC: Yunsheng Lin, Alexander Duyck, Michael S. Tsirkin, Jason Wang,
 Andrew Morton, Eric Dumazet

There are currently three page frag implementations, each of which
first tries to allocate an order-3 page and falls back to an order-0
page if that fails; each allows the order-3 allocation to fail under
certain conditions by using specific gfp bits.
The gfp bits used for the order-3 allocation differ between
implementations: __GFP_NOMEMALLOC is OR'ed in to forbid access to
emergency memory reserves in __page_frag_cache_refill() but not in the
other implementations, while __GFP_DIRECT_RECLAIM is masked off to
avoid direct reclaim in skb_page_frag_refill() but not in
__page_frag_cache_refill().

This patch unifies the gfp bits across implementations by both OR'ing
in __GFP_NOMEMALLOC and masking off __GFP_DIRECT_RECLAIM for the
order-3 allocation, to avoid putting unnecessary pressure on mm.
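As a sketch of the unified policy, the order-3 attempt in all three
implementations now builds its gfp mask the same way (order3_gfp is a
hypothetical helper used only for illustration; the actual patch
open-codes these bits at each call site):

/* Hypothetical helper showing the unified gfp bits for the order-3
 * attempt; assumes the kernel's gfp flag definitions. */
static gfp_t order3_gfp(gfp_t gfp_mask)
{
	/* no direct reclaim, compound page, silent single attempt,
	 * and stay out of the emergency reserves */
	return (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
	       __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
}

The order-0 fallback keeps the caller's original gfp_mask, so only the
opportunistic large allocation is made failure-tolerant.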
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
CC: Alexander Duyck
Reviewed-by: Alexander Duyck
---
 drivers/vhost/net.c | 2 +-
 mm/page_alloc.c     | 4 ++--
 net/core/sock.c     | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f2ed7167c848..e574e21cc0ca 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -670,7 +670,7 @@ static bool vhost_net_page_frag_refill(struct vhost_net *net, unsigned int sz,
 		/* Avoid direct reclaim but allow kswapd to wake */
 		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
 					  __GFP_COMP | __GFP_NOWARN |
-					  __GFP_NORETRY,
+					  __GFP_NORETRY | __GFP_NOMEMALLOC,
 					  SKB_FRAG_PAGE_ORDER);
 		if (likely(pfrag->page)) {
 			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9a16305cf985..1f0b36dd81b5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4693,8 +4693,8 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	gfp_t gfp = gfp_mask;
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
-		    __GFP_NOMEMALLOC;
+	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
+		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
 	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
diff --git a/net/core/sock.c b/net/core/sock.c
index 446e945f736b..d643332c3ee5 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2900,7 +2900,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
 		/* Avoid direct reclaim but allow kswapd to wake */
 		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
 					  __GFP_COMP | __GFP_NOWARN |
-					  __GFP_NORETRY,
+					  __GFP_NORETRY | __GFP_NOMEMALLOC,
 					  SKB_FRAG_PAGE_ORDER);
 		if (likely(pfrag->page)) {
 			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;

From patchwork Wed Jan 3 09:56:46 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 13509825
Subject: [PATCH net-next 3/6] mm/page_alloc: use initial zero offset for
 page_frag_alloc_align()
Date: Wed, 3 Jan 2024 17:56:46 +0800
Message-ID: <20240103095650.25769-4-linyunsheng@huawei.com>
In-Reply-To: <20240103095650.25769-1-linyunsheng@huawei.com>
References: <20240103095650.25769-1-linyunsheng@huawei.com>
CC: Yunsheng Lin, Alexander Duyck, Andrew Morton

The next patch is about to use page_frag_alloc_align() to replace
vhost_net_page_frag_refill(); the main difference between those two
page frag implementations is whether an initial zero offset is used.
An initial zero offset seems more natural, as it may enable more
effective cache prefetching and skb frag coalescing in the networking
stack, so change page_frag_alloc_align() to use an initial zero offset.
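A worked example of the new bottom-up logic, with assumed values
(cache size 32768, nc->offset currently 100, fragsz 256, align 64):

/*
 * offset = ALIGN(100, 64) = 128;   round the offset *up* to align
 * 128 + 256 = 384 <= 32768         the frag fits, no refill needed
 * nc->offset = 128 + 256 = 384;    the next frag starts here
 * return nc->va + 128;
 *
 * The old top-down scheme instead computed
 *   offset = (nc->offset - fragsz) & -align;
 * handing out fragments from the end of the page downwards.
 */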
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
CC: Alexander Duyck
---
 mm/page_alloc.c | 30 ++++++++++++++----------------
 1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1f0b36dd81b5..083e0c38fb62 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4720,7 +4720,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 			    unsigned int fragsz, gfp_t gfp_mask,
 			    unsigned int align)
 {
-	unsigned int size = PAGE_SIZE;
+	unsigned int size;
 	struct page *page;
 	int offset;
 
@@ -4732,10 +4732,6 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		if (!page)
 			return NULL;
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
@@ -4744,11 +4740,18 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
+		nc->offset = 0;
 	}
 
-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* if size can vary use size else just use PAGE_SIZE */
+	size = nc->size;
+#else
+	size = PAGE_SIZE;
+#endif
+
+	offset = ALIGN(nc->offset, align);
+	if (unlikely(offset + fragsz > size)) {
 		page = virt_to_page(nc->va);
 
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -4759,17 +4762,13 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 		}
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* OK, page count is 0, we can safely set it */
 		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
+		offset = 0;
+		if (unlikely(fragsz > size)) {
 			/*
 			 * The caller is trying to allocate a fragment
 			 * with fragsz > PAGE_SIZE but the cache isn't big
@@ -4784,8 +4783,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	}
 
 	nc->pagecnt_bias--;
-	offset &= -align;
-	nc->offset = offset;
+	nc->offset = offset + fragsz;
 
 	return nc->va + offset;
 }
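To see why the zero initial offset can help skb frag coalescing,
consider two consecutive allocations from the same cache (values
illustrative):

/*
 * a = page_frag_alloc(&nc, 256, GFP_ATOMIC);   returns nc->va + 0
 * b = page_frag_alloc(&nc, 256, GFP_ATOMIC);   returns nc->va + 256
 *
 * b starts exactly where a ends, which is the condition checks such
 * as skb_can_coalesce() look for (same page, off == prev_off +
 * prev_size). The old top-down scheme returned descending addresses
 * (b + 256 == a), which cannot satisfy that check.
 */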
From patchwork Wed Jan 3 09:56:48 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 13509826
Subject: [PATCH net-next 5/6] net: introduce page_frag_cache_drain()
Date: Wed, 3 Jan 2024 17:56:48 +0800
Message-ID: <20240103095650.25769-6-linyunsheng@huawei.com>
In-Reply-To: <20240103095650.25769-1-linyunsheng@huawei.com>
References: <20240103095650.25769-1-linyunsheng@huawei.com>
CC: Yunsheng Lin, Jason Wang, Jeroen de Borst, Praveen Kaligineedi,
 Shailend Chand, Eric Dumazet, Felix Fietkau, John Crispin, Sean Wang,
 Mark Lee, Lorenzo Bianconi, Matthias Brugger,
 AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig,
 Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Andrew Morton

When draining a page_frag_cache, most users perform the same sequence
of steps, so introduce an API to avoid that code duplication.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Acked-by: Jason Wang
Reviewed-by: Alexander Duyck
---
 drivers/net/ethernet/google/gve/gve_main.c | 11 ++---------
 drivers/net/ethernet/mediatek/mtk_wed_wo.c | 17 ++---------------
 drivers/nvme/host/tcp.c                    |  7 +------
 drivers/nvme/target/tcp.c                  |  4 +---
 drivers/vhost/net.c                        |  4 +---
 include/linux/gfp.h                        |  2 ++
 mm/page_alloc.c                            | 10 ++++++++++
 7 files changed, 19 insertions(+), 36 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 619bf63ec935..d976190b0f4d 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1278,17 +1278,10 @@ static void gve_unreg_xdp_info(struct gve_priv *priv)
 
 static void gve_drain_page_cache(struct gve_priv *priv)
 {
-	struct page_frag_cache *nc;
 	int i;
 
-	for (i = 0; i < priv->rx_cfg.num_queues; i++) {
-		nc = &priv->rx[i].page_cache;
-		if (nc->va) {
-			__page_frag_cache_drain(virt_to_page(nc->va),
-						nc->pagecnt_bias);
-			nc->va = NULL;
-		}
-	}
+	for (i = 0; i < priv->rx_cfg.num_queues; i++)
+		page_frag_cache_drain(&priv->rx[i].page_cache);
 }
 
 static int gve_open(struct net_device *dev)
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
index d58b07e7e123..7063c78bd35f 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
@@ -286,7 +286,6 @@ mtk_wed_wo_queue_free(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 static void
 mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 {
-	struct page *page;
 	int i;
 
 	for (i = 0; i < q->n_desc; i++) {
@@ -301,19 +300,12 @@ mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 		entry->buf = NULL;
 	}
 
-	if (!q->cache.va)
-		return;
-
-	page = virt_to_page(q->cache.va);
-	__page_frag_cache_drain(page, q->cache.pagecnt_bias);
-	memset(&q->cache, 0, sizeof(q->cache));
+	page_frag_cache_drain(&q->cache);
 }
 
 static void
 mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 {
-	struct page *page;
-
 	for (;;) {
 		void *buf = mtk_wed_wo_dequeue(wo, q, NULL, true);
 
@@ -323,12 +315,7 @@ mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 		skb_free_frag(buf);
 	}
 
-	if (!q->cache.va)
-		return;
-
-	page = virt_to_page(q->cache.va);
-	__page_frag_cache_drain(page, q->cache.pagecnt_bias);
-	memset(&q->cache, 0, sizeof(q->cache));
+	page_frag_cache_drain(&q->cache);
 }
 
 static void
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 08805f027810..c80037a78066 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1344,7 +1344,6 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
 
 static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid)
 {
-	struct page *page;
 	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
 	struct nvme_tcp_queue *queue = &ctrl->queues[qid];
 	unsigned int noreclaim_flag;
@@ -1355,11 +1354,7 @@ static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid)
 	if (queue->hdr_digest || queue->data_digest)
 		nvme_tcp_free_crypto(queue);
 
-	if (queue->pf_cache.va) {
-		page = virt_to_head_page(queue->pf_cache.va);
-		__page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
-		queue->pf_cache.va = NULL;
-	}
+	page_frag_cache_drain(&queue->pf_cache);
 
 	noreclaim_flag = memalloc_noreclaim_save();
 	/* ->sock will be released by fput() */
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 4cc27856aa8f..11237557cfc5 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -1576,7 +1576,6 @@ static void nvmet_tcp_free_cmd_data_in_buffers(struct nvmet_tcp_queue *queue)
 
 static void nvmet_tcp_release_queue_work(struct work_struct *w)
 {
-	struct page *page;
 	struct nvmet_tcp_queue *queue =
 		container_of(w, struct nvmet_tcp_queue, release_work);
 
@@ -1600,8 +1599,7 @@ static void nvmet_tcp_release_queue_work(struct work_struct *w)
 	if (queue->hdr_digest || queue->data_digest)
 		nvmet_tcp_free_crypto(queue);
 	ida_free(&nvmet_tcp_queue_ida, queue->idx);
-	page = virt_to_head_page(queue->pf_cache.va);
-	__page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
+	page_frag_cache_drain(&queue->pf_cache);
 	kfree(queue);
 }
 
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 805e11d598e4..4b2fcb228a0a 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1386,9 +1386,7 @@ static int vhost_net_release(struct inode *inode, struct file *f)
 	kfree(n->vqs[VHOST_NET_VQ_RX].rxq.queue);
 	kfree(n->vqs[VHOST_NET_VQ_TX].xdp);
 	kfree(n->dev.vqs);
-	if (n->pf_cache.va)
-		__page_frag_cache_drain(virt_to_head_page(n->pf_cache.va),
-					n->pf_cache.pagecnt_bias);
+	page_frag_cache_drain(&n->pf_cache);
 	kvfree(n);
 	return 0;
 }
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index bbd75976541e..03ba079655d3 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -316,6 +316,8 @@ extern void *page_frag_alloc_align(struct page_frag_cache *nc,
 				   unsigned int fragsz, gfp_t gfp_mask,
 				   unsigned int align);
 
+void page_frag_cache_drain(struct page_frag_cache *nc);
+
 static inline void *page_frag_alloc(struct page_frag_cache *nc,
 				    unsigned int fragsz, gfp_t gfp_mask)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 083e0c38fb62..5a0e68edcb05 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4716,6 +4716,16 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
+void page_frag_cache_drain(struct page_frag_cache *nc)
+{
+	if (!nc->va)
+		return;
+
+	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
+	nc->va = NULL;
+}
+EXPORT_SYMBOL(page_frag_cache_drain);
+
 void *page_frag_alloc_align(struct page_frag_cache *nc,
 			    unsigned int fragsz, gfp_t gfp_mask,
 			    unsigned int align)