From patchwork Fri Dec 1 12:02:02 2023
From: Yunsheng Lin
Subject: [PATCH RFC 1/6] mm/page_alloc: modify page_frag_alloc_align() to accept align as an argument
Date: Fri, 1 Dec 2023 20:02:02 +0800
Message-ID: <20231201120208.15080-2-linyunsheng@huawei.com>
In-Reply-To: <20231201120208.15080-1-linyunsheng@huawei.com>
References: <20231201120208.15080-1-linyunsheng@huawei.com>

napi_alloc_frag_align() and netdev_alloc_frag_align() accept align as an
argument, and they are thin wrappers around the __napi_alloc_frag_align()
and __netdev_alloc_frag_align() APIs, doing the align-to-align_mask
conversion so that page_frag_alloc_align() can be called directly.

As __napi_alloc_frag_align() and __netdev_alloc_frag_align() are only used
by those thin wrappers, it makes more sense to drop the align-to-align_mask
conversion and call page_frag_alloc_align() directly. Doing so also avoids
the confusion of napi_alloc_frag_align() accepting align as an argument
while page_frag_alloc_align() accepts align_mask, even though both carry
the 'align' suffix.
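For reference, the conversion being removed is just the negation of a
power-of-two alignment: with align a power of two, -align is the mask that
rounds an offset down to an align boundary. A minimal userspace sketch
(illustrative only, not kernel code; the names and values are made up):

/*
 * Standalone illustration: for a power-of-two align, -align is the mask
 * the old wrappers passed down as align_mask, so offset & -align rounds
 * offset down to an align boundary.
 */
#include <stdio.h>

int main(void)
{
	unsigned int align = 64;		/* must be a power of two */
	unsigned int align_mask = -align;	/* what the old wrappers computed */
	unsigned int offset = 100;

	printf("offset %u -> %u (mask 0x%x)\n",
	       offset, offset & align_mask, align_mask);
	return 0;
}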
Signed-off-by: Yunsheng Lin CC: Alexander Duyck --- include/linux/gfp.h | 4 ++-- include/linux/skbuff.h | 22 ++++------------------ mm/page_alloc.c | 6 ++++-- net/core/skbuff.c | 14 +++++++------- 4 files changed, 17 insertions(+), 29 deletions(-) diff --git a/include/linux/gfp.h b/include/linux/gfp.h index de292a007138..bbd75976541e 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -314,12 +314,12 @@ struct page_frag_cache; extern void __page_frag_cache_drain(struct page *page, unsigned int count); extern void *page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, - unsigned int align_mask); + unsigned int align); static inline void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask) { - return page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); + return page_frag_alloc_align(nc, fragsz, gfp_mask, 1); } extern void page_frag_free(void *addr); diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 27998f73183e..c27ed5ab6557 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -3182,7 +3182,7 @@ static inline void skb_queue_purge(struct sk_buff_head *list) unsigned int skb_rbtree_purge(struct rb_root *root); void skb_errqueue_purge(struct sk_buff_head *list); -void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask); +void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align); /** * netdev_alloc_frag - allocate a page fragment @@ -3193,14 +3193,7 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask); */ static inline void *netdev_alloc_frag(unsigned int fragsz) { - return __netdev_alloc_frag_align(fragsz, ~0u); -} - -static inline void *netdev_alloc_frag_align(unsigned int fragsz, - unsigned int align) -{ - WARN_ON_ONCE(!is_power_of_2(align)); - return __netdev_alloc_frag_align(fragsz, -align); + return netdev_alloc_frag_align(fragsz, 1); } struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int length, @@ -3260,18 +3253,11 @@ static inline void skb_free_frag(void *addr) page_frag_free(addr); } -void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask); +void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align); static inline void *napi_alloc_frag(unsigned int fragsz) { - return __napi_alloc_frag_align(fragsz, ~0u); -} - -static inline void *napi_alloc_frag_align(unsigned int fragsz, - unsigned int align) -{ - WARN_ON_ONCE(!is_power_of_2(align)); - return __napi_alloc_frag_align(fragsz, -align); + return napi_alloc_frag_align(fragsz, 1); } struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 37ca4f4b62bf..9a16305cf985 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4718,12 +4718,14 @@ EXPORT_SYMBOL(__page_frag_cache_drain); void *page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, - unsigned int align_mask) + unsigned int align) { unsigned int size = PAGE_SIZE; struct page *page; int offset; + WARN_ON_ONCE(!is_power_of_2(align)); + if (unlikely(!nc->va)) { refill: page = __page_frag_cache_refill(nc, gfp_mask); @@ -4782,7 +4784,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc, } nc->pagecnt_bias--; - offset &= align_mask; + offset &= -align; nc->offset = offset; return nc->va + offset; diff --git a/net/core/skbuff.c b/net/core/skbuff.c index b157efea5dea..b98d1da4004a 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -291,17 +291,17 @@ void napi_get_frags_check(struct 
napi_struct *napi)
 	local_bh_enable();
 }

-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *napi_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);

 	fragsz = SKB_DATA_ALIGN(fragsz);

-	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 }
-EXPORT_SYMBOL(__napi_alloc_frag_align);
+EXPORT_SYMBOL(napi_alloc_frag_align);

-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+void *netdev_alloc_frag_align(unsigned int fragsz, unsigned int align)
 {
 	void *data;

@@ -309,18 +309,18 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
 	if (in_hardirq() || irqs_disabled()) {
 		struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache);

-		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align);
 	} else {
 		struct napi_alloc_cache *nc;

 		local_bh_disable();
 		nc = this_cpu_ptr(&napi_alloc_cache);
-		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
+		data = page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
 		local_bh_enable();
 	}
 	return data;
 }
-EXPORT_SYMBOL(__netdev_alloc_frag_align);
+EXPORT_SYMBOL(netdev_alloc_frag_align);

 static struct sk_buff *napi_skb_cache_get(void)
 {

From patchwork Fri Dec 1 12:02:03 2023
From: Yunsheng Lin
Subject: [PATCH RFC 2/6] page_frag: unify gfp bit for order 3 page allocation
Date: Fri, 1 Dec 2023 20:02:03 +0800
Message-ID: <20231201120208.15080-3-linyunsheng@huawei.com>
In-Reply-To: <20231201120208.15080-1-linyunsheng@huawei.com>
References: <20231201120208.15080-1-linyunsheng@huawei.com>

Currently there are three page frag implementations, all of which first
try to allocate an order 3 page; if that fails, they fall back to an
order 0 page, and each of them allows the order 3 allocation to fail
under certain conditions by using specific gfp bits.

The gfp bits used for the order 3 allocation differ between the
implementations: __GFP_NOMEMALLOC is OR'ed in to forbid access to the
emergency memory reserves in __page_frag_cache_refill(), but not in the
other implementations, while __GFP_DIRECT_RECLAIM is masked off to avoid
direct reclaim in skb_page_frag_refill(), but not in
__page_frag_cache_refill().

This patch unifies the gfp bits used between the implementations by
OR'ing in __GFP_NOMEMALLOC and masking off __GFP_DIRECT_RECLAIM for the
order 3 page allocation, to avoid putting unnecessary pressure on mm.
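For clarity, all three call sites end up building the same gfp recipe for
the order 3 attempt: keep __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY, clear
__GFP_DIRECT_RECLAIM and set __GFP_NOMEMALLOC. A standalone sketch of that
bit manipulation (illustrative only; the flag values below are invented
for the example and are not the kernel's gfp flags):

/*
 * Toy illustration of the unified gfp recipe; the flag values are made
 * up and do not correspond to the real gfp flag encoding.
 */
#include <stdio.h>

#define T_DIRECT_RECLAIM 0x01u
#define T_COMP           0x02u
#define T_NOWARN         0x04u
#define T_NORETRY        0x08u
#define T_NOMEMALLOC     0x10u

int main(void)
{
	unsigned int gfp_mask = T_DIRECT_RECLAIM;	/* caller-supplied flags */
	unsigned int order3_gfp;

	/* clear direct reclaim, forbid emergency reserves, keep the rest */
	order3_gfp = (gfp_mask & ~T_DIRECT_RECLAIM) | T_COMP | T_NOWARN |
		     T_NORETRY | T_NOMEMALLOC;

	printf("input 0x%x -> order-3 attempt 0x%x\n", gfp_mask, order3_gfp);
	return 0;
}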
Signed-off-by: Yunsheng Lin
CC: Alexander Duyck
---
 drivers/vhost/net.c | 2 +-
 mm/page_alloc.c     | 4 ++--
 net/core/sock.c     | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f2ed7167c848..e574e21cc0ca 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -670,7 +670,7 @@ static bool vhost_net_page_frag_refill(struct vhost_net *net, unsigned int sz,
 		/* Avoid direct reclaim but allow kswapd to wake */
 		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
 					  __GFP_COMP | __GFP_NOWARN |
-					  __GFP_NORETRY,
+					  __GFP_NORETRY | __GFP_NOMEMALLOC,
 					  SKB_FRAG_PAGE_ORDER);
 		if (likely(pfrag->page)) {
 			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9a16305cf985..1f0b36dd81b5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4693,8 +4693,8 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	gfp_t gfp = gfp_mask;

 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
-		    __GFP_NOMEMALLOC;
+	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
+		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
 	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
diff --git a/net/core/sock.c b/net/core/sock.c
index fef349dd72fa..4efa9cae4b0d 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2904,7 +2904,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
 		/* Avoid direct reclaim but allow kswapd to wake */
 		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
 					  __GFP_COMP | __GFP_NOWARN |
-					  __GFP_NORETRY,
+					  __GFP_NORETRY | __GFP_NOMEMALLOC,
 					  SKB_FRAG_PAGE_ORDER);
 		if (likely(pfrag->page)) {
 			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;

From patchwork Fri Dec 1 12:02:04 2023
From: Yunsheng Lin
Subject: [PATCH RFC 3/6] mm/page_alloc: use initial zero offset for page_frag_alloc_align()
Date: Fri, 1 Dec 2023 20:02:04 +0800
Message-ID: <20231201120208.15080-4-linyunsheng@huawei.com>
In-Reply-To: <20231201120208.15080-1-linyunsheng@huawei.com>
References: <20231201120208.15080-1-linyunsheng@huawei.com>

The next patch in this series uses page_frag_alloc_align() to replace
vhost_net_page_frag_refill(); the main difference between those two page
frag implementations is whether an initial zero offset is used or not. It
seems more natural to use an initial zero offset, as it may enable more
effective cache prefetching and skb frag coalescing in the networking
stack, so change page_frag_alloc_align() to grow the offset from zero.
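To make the change concrete, the standalone sketch below contrasts the two
offset strategies over a toy buffer (illustrative only, not the kernel
implementation; names and sizes are invented). Growing the offset from zero
hands fragments out in ascending address order, which is what the commit
message expects to help cache prefetching and skb frag coalescing:

/*
 * Toy comparison of the two strategies: counting the offset down from
 * the end of the buffer (old behaviour) versus growing it up from zero
 * (new behaviour).
 */
#include <stdio.h>

#define CACHE_SIZE 4096u

static unsigned int offset_down = CACHE_SIZE;	/* old: start at the end */
static unsigned int offset_up;			/* new: start at zero */

static int alloc_down(unsigned int fragsz, unsigned int *off)
{
	if (fragsz > offset_down)
		return -1;			/* cache exhausted */
	offset_down -= fragsz;
	*off = offset_down;
	return 0;
}

static int alloc_up(unsigned int fragsz, unsigned int *off)
{
	if (offset_up + fragsz > CACHE_SIZE)
		return -1;			/* cache exhausted */
	*off = offset_up;
	offset_up += fragsz;
	return 0;
}

int main(void)
{
	unsigned int a, b;

	if (!alloc_down(100, &a) && !alloc_down(100, &b))
		printf("count-down: frag1 at %u, frag2 at %u (descending)\n", a, b);
	if (!alloc_up(100, &a) && !alloc_up(100, &b))
		printf("grow-up:    frag1 at %u, frag2 at %u (ascending)\n", a, b);
	return 0;
}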
Signed-off-by: Yunsheng Lin
CC: Alexander Duyck
---
 mm/page_alloc.c | 30 ++++++++++++++----------------
 1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1f0b36dd81b5..083e0c38fb62 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4720,7 +4720,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 			    unsigned int fragsz, gfp_t gfp_mask,
 			    unsigned int align)
 {
-	unsigned int size = PAGE_SIZE;
+	unsigned int size;
 	struct page *page;
 	int offset;

@@ -4732,10 +4732,6 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		if (!page)
 			return NULL;

-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
@@ -4744,11 +4740,18 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = size;
+		nc->offset = 0;
 	}

-	offset = nc->offset - fragsz;
-	if (unlikely(offset < 0)) {
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* if size can vary use size else just use PAGE_SIZE */
+	size = nc->size;
+#else
+	size = PAGE_SIZE;
+#endif
+
+	offset = ALIGN(nc->offset, align);
+	if (unlikely(offset + fragsz > size)) {
 		page = virt_to_page(nc->va);

 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -4759,17 +4762,13 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 		}

-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
 		/* OK, page count is 0, we can safely set it */
 		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);

 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		offset = size - fragsz;
-		if (unlikely(offset < 0)) {
+		offset = 0;
+		if (unlikely(fragsz > size)) {
 			/*
 			 * The caller is trying to allocate a fragment
 			 * with fragsz > PAGE_SIZE but the cache isn't big
@@ -4784,8 +4783,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	}

 	nc->pagecnt_bias--;
-	offset &= -align;
-	nc->offset = offset;
+	nc->offset = offset + fragsz;

 	return nc->va + offset;
 }

From patchwork Fri Dec 1 12:02:06 2023
From: Yunsheng Lin
Tsirkin" , Jason Wang , Andrew Morton , , , , , , Subject: [PATCH RFC 5/6] net: introduce page_frag_cache_drain() Date: Fri, 1 Dec 2023 20:02:06 +0800 Message-ID: <20231201120208.15080-6-linyunsheng@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20231201120208.15080-1-linyunsheng@huawei.com> References: <20231201120208.15080-1-linyunsheng@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To dggpemm500005.china.huawei.com (7.185.36.74) X-CFilter-Loop: Reflected X-Rspamd-Queue-Id: DE986A001E X-Rspam-User: X-Rspamd-Server: rspam11 X-Stat-Signature: 7sw7kaobu6qxt7x8dj6erx61efhyf38e X-HE-Tag: 1701432155-703842 X-HE-Meta: U2FsdGVkX19PKPi/ITM3y0Q6HJ8KYu6gP4gQCBtOJk1ZdNeXHCLREiWOy22ENz2WKXIXiqcYPD5ehUiAgZMF7eEti9ZrYJNvll0c3ULMPDMZK7ebMnxDaPIbA6ZNK9bYm0DqwFruS/IRn9r2ZBqB2O44X70MrZYM7uh+s4XvHOheNrMvhK6ug0o891nOyZrIrOJvvkoPercfhg42u0vNdva5ClHM9fgi/qvFrqOlu16BsqEMOBb9A9O55nDOmHFlXTCI/kI1KT72MF8pdGCIvctqKjIVl1G98MQB5zBfr7HqEz33rXfgSr3IbtH6TDSjSMTVXgi8GTysB/fbNUhqxC++ie4fOu9yROI2AQNo0Pl6eup9TkhqJzzT8o3MrmXX5HdP9O73W4JJJMWBpp74shRDx2+5Z+DnTlane6pFbWpZDpkXXLuKGMPw7xa13G+PZK+JgiRPhh590rnhVDstR/Nl/jFQfyVl0pPjnhWDPd5NubywLJc2GpCEViTxdJQzPc9UykLL02pqbmFXdwPlEd6e9H6lBPjKMgJKt3Yjsm+X0f3lQVYOfrg+90a9/GQ5lUGoknzzG/DJuFdTUgVa1h4mqd1stsXjB3f951zshiqhk65iIWD9TBk4J1Uh+uIbZVRzvOUkDzTAK4zPvEz1ZhwNxBv8owhOAIK1IP+7Wdi+OarEq6jjVh+QskhpIpkLfLb5h6Cd3HiOjNfqJR/XKBTEP+D9DJBONBNnlyszNWgXS5aSV3zCcgC4NsHXyqiBGSDDg68c6+3xPRVr45adpJLeQVIPGRZucjklrfdl03uEIk5XTRmmXucGKsC8wzipMoEnzl42N/4A9j8+GhA5Oec3s5Sl+OqK7u09w5HIATxzi9/98WFluvT3tWNF4ZR2DVZDpO4dmLnssH2RnKDRbz5wcxpEzcJcV77SWwhxZwD1f4NfaaGI3xEV6YKiLu5IIIebEfN02b3F2I55LTg XE4VT2b4 MIVzkXznRvUbQs/ZPPOK+vSQUYjT5RS7S2xH3rhWhoeJN0G7BagQWHg/zgmnsAmAoTqZoNEtLjiKhF97fy+It3+sNoEZhWkTLXzBTcfo5t2NSli8JD1VfhOUoYfqkd8BckRjWRFA7a8Vfsr30zwjH35+QhfmPcuVOGJXwLWhEOWXG/14Lnn085aeTvzawF9QwNXMTM/H6uE26EXcHPBkpgtVS9142Sv3k46F3heANL3oK40wNSMV+nxWJEg5Zdu01SsvZVzXn4WrM+ok= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: When draining a page_frag_cache, most user are doing the similar steps, so introduce an API to avoid code duplication. 
Signed-off-by: Yunsheng Lin --- drivers/net/ethernet/google/gve/gve_main.c | 11 ++--------- drivers/net/ethernet/mediatek/mtk_wed_wo.c | 17 ++--------------- drivers/nvme/host/tcp.c | 7 +------ drivers/nvme/target/tcp.c | 4 +--- drivers/vhost/net.c | 4 +--- include/linux/gfp.h | 2 ++ mm/page_alloc.c | 10 ++++++++++ 7 files changed, 19 insertions(+), 36 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 619bf63ec935..d976190b0f4d 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -1278,17 +1278,10 @@ static void gve_unreg_xdp_info(struct gve_priv *priv) static void gve_drain_page_cache(struct gve_priv *priv) { - struct page_frag_cache *nc; int i; - for (i = 0; i < priv->rx_cfg.num_queues; i++) { - nc = &priv->rx[i].page_cache; - if (nc->va) { - __page_frag_cache_drain(virt_to_page(nc->va), - nc->pagecnt_bias); - nc->va = NULL; - } - } + for (i = 0; i < priv->rx_cfg.num_queues; i++) + page_frag_cache_drain(&priv->rx[i].page_cache); } static int gve_open(struct net_device *dev) diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c index 7ffbd4fca881..df0a3ceaf59b 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c @@ -286,7 +286,6 @@ mtk_wed_wo_queue_free(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q) static void mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q) { - struct page *page; int i; for (i = 0; i < q->n_desc; i++) { @@ -298,19 +297,12 @@ mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q) entry->buf = NULL; } - if (!q->cache.va) - return; - - page = virt_to_page(q->cache.va); - __page_frag_cache_drain(page, q->cache.pagecnt_bias); - memset(&q->cache, 0, sizeof(q->cache)); + page_frag_cache_drain(&q->cache); } static void mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q) { - struct page *page; - for (;;) { void *buf = mtk_wed_wo_dequeue(wo, q, NULL, true); @@ -320,12 +312,7 @@ mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q) skb_free_frag(buf); } - if (!q->cache.va) - return; - - page = virt_to_page(q->cache.va); - __page_frag_cache_drain(page, q->cache.pagecnt_bias); - memset(&q->cache, 0, sizeof(q->cache)); + page_frag_cache_drain(&q->cache); } static void diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index 89661a9cf850..8d4f4a06f9d9 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -1338,7 +1338,6 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl) static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid) { - struct page *page; struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); struct nvme_tcp_queue *queue = &ctrl->queues[qid]; unsigned int noreclaim_flag; @@ -1349,11 +1348,7 @@ static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid) if (queue->hdr_digest || queue->data_digest) nvme_tcp_free_crypto(queue); - if (queue->pf_cache.va) { - page = virt_to_head_page(queue->pf_cache.va); - __page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias); - queue->pf_cache.va = NULL; - } + page_frag_cache_drain(&queue->pf_cache); noreclaim_flag = memalloc_noreclaim_save(); /* ->sock will be released by fput() */ diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c index 92b74d0b8686..f9a553d70a61 100644 --- a/drivers/nvme/target/tcp.c +++ b/drivers/nvme/target/tcp.c @@ -1576,7 
+1576,6 @@ static void nvmet_tcp_free_cmd_data_in_buffers(struct nvmet_tcp_queue *queue) static void nvmet_tcp_release_queue_work(struct work_struct *w) { - struct page *page; struct nvmet_tcp_queue *queue = container_of(w, struct nvmet_tcp_queue, release_work); @@ -1600,8 +1599,7 @@ static void nvmet_tcp_release_queue_work(struct work_struct *w) if (queue->hdr_digest || queue->data_digest) nvmet_tcp_free_crypto(queue); ida_free(&nvmet_tcp_queue_ida, queue->idx); - page = virt_to_head_page(queue->pf_cache.va); - __page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias); + page_frag_cache_drain(&queue->pf_cache); kfree(queue); } diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 805e11d598e4..4b2fcb228a0a 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -1386,9 +1386,7 @@ static int vhost_net_release(struct inode *inode, struct file *f) kfree(n->vqs[VHOST_NET_VQ_RX].rxq.queue); kfree(n->vqs[VHOST_NET_VQ_TX].xdp); kfree(n->dev.vqs); - if (n->pf_cache.va) - __page_frag_cache_drain(virt_to_head_page(n->pf_cache.va), - n->pf_cache.pagecnt_bias); + page_frag_cache_drain(&n->pf_cache); kvfree(n); return 0; } diff --git a/include/linux/gfp.h b/include/linux/gfp.h index bbd75976541e..03ba079655d3 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -316,6 +316,8 @@ extern void *page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align); +void page_frag_cache_drain(struct page_frag_cache *nc); + static inline void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask) { diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 083e0c38fb62..5a0e68edcb05 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4716,6 +4716,16 @@ void __page_frag_cache_drain(struct page *page, unsigned int count) } EXPORT_SYMBOL(__page_frag_cache_drain); +void page_frag_cache_drain(struct page_frag_cache *nc) +{ + if (!nc->va) + return; + + __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias); + nc->va = NULL; +} +EXPORT_SYMBOL(page_frag_cache_drain); + void *page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, unsigned int align)