From patchwork Fri Dec 6 12:25:24 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13897121
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v2 01/10] mm: page_frag: some minor refactoring
 before adding new API
Date: Fri, 6 Dec 2024 20:25:24 +0800
Message-ID: <20241206122533.3589947-2-linyunsheng@huawei.com>
In-Reply-To: <20241206122533.3589947-1-linyunsheng@huawei.com>
References: <20241206122533.3589947-1-linyunsheng@huawei.com>

Refactor common code from __page_frag_alloc_align() into
__page_frag_cache_prepare() and __page_frag_cache_commit(), so that the
new API can make use of them.
CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h | 34 ++++++++++++++++++++++++++--
 mm/page_frag_cache.c            | 40 ++++++++++++++++++++++++++-------
 2 files changed, 64 insertions(+), 10 deletions(-)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 41a91df82631..5ae97f93a0a1 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -5,6 +5,7 @@
 #include
 #include
+#include
 #include
 #include
@@ -39,8 +40,37 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
-			      gfp_t gfp_mask, unsigned int align_mask);
+void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
+				struct page_frag *pfrag, gfp_t gfp_mask,
+				unsigned int align_mask);
+unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
+					    struct page_frag *pfrag,
+					    unsigned int used_sz);
+
+static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
+						    struct page_frag *pfrag,
+						    unsigned int used_sz)
+{
+	VM_BUG_ON(!nc->pagecnt_bias);
+	nc->pagecnt_bias--;
+
+	return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
+}
+
+static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
+					    unsigned int fragsz, gfp_t gfp_mask,
+					    unsigned int align_mask)
+{
+	struct page_frag page_frag;
+	void *va;
+
+	va = __page_frag_cache_prepare(nc, fragsz, &page_frag, gfp_mask,
+				       align_mask);
+	if (likely(va))
+		__page_frag_cache_commit(nc, &page_frag, fragsz);
+
+	return va;
+}
 
 static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
					   unsigned int fragsz, gfp_t gfp_mask,
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 3f7a203d35c6..f55d34cf7d43 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -90,9 +90,31 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
-void *__page_frag_alloc_align(struct page_frag_cache *nc,
-			      unsigned int fragsz, gfp_t gfp_mask,
-			      unsigned int align_mask)
+unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
+					    struct page_frag *pfrag,
+					    unsigned int used_sz)
+{
+	unsigned int orig_offset;
+
+	VM_BUG_ON(used_sz > pfrag->size);
+	VM_BUG_ON(pfrag->page != encoded_page_decode_page(nc->encoded_page));
+	VM_BUG_ON(pfrag->offset + pfrag->size >
+		  (PAGE_SIZE << encoded_page_decode_order(nc->encoded_page)));
+
+	/* pfrag->offset might be bigger than the nc->offset due to alignment */
+	VM_BUG_ON(nc->offset > pfrag->offset);
+
+	orig_offset = nc->offset;
+	nc->offset = pfrag->offset + used_sz;
+
+	/* Return true size back to caller considering the offset alignment */
+	return nc->offset - orig_offset;
+}
+EXPORT_SYMBOL(__page_frag_cache_commit_noref);
+
+void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
+				struct page_frag *pfrag, gfp_t gfp_mask,
+				unsigned int align_mask)
 {
 	unsigned long encoded_page = nc->encoded_page;
 	unsigned int size, offset;
@@ -114,6 +136,8 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		nc->offset = 0;
+	} else {
+		page = encoded_page_decode_page(encoded_page);
 	}
 
 	size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
@@ -132,8 +156,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			return NULL;
 		}
 
-		page = encoded_page_decode_page(encoded_page);
-
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
 			goto refill;
@@ -148,15 +170,17 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+		nc->offset = 0;
 		offset = 0;
 	}
 
-	nc->pagecnt_bias--;
-	nc->offset = offset + fragsz;
+	pfrag->page = page;
+	pfrag->offset = offset;
+	pfrag->size = size - offset;
 
 	return encoded_page_decode_virt(encoded_page) + offset;
 }
-EXPORT_SYMBOL(__page_frag_alloc_align);
+EXPORT_SYMBOL(__page_frag_cache_prepare);
 
 /*
  * Frees a page fragment allocated out of either a compound or order 0 page.

From patchwork Fri Dec 6 12:25:25 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13897119
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v2 02/10] net: rename skb_copy_to_page_nocache()
 helper
Date: Fri, 6 Dec 2024 20:25:25 +0800
Message-ID: <20241206122533.3589947-3-linyunsheng@huawei.com>
In-Reply-To: <20241206122533.3589947-1-linyunsheng@huawei.com>
References: <20241206122533.3589947-1-linyunsheng@huawei.com>

Rename skb_copy_to_page_nocache() to skb_copy_to_frag_nocache() to avoid
calling virt_to_page() as we are about to pass the virtual address
directly.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin
Reviewed-by: Alexander Duyck
---
 include/net/sock.h | 9 ++++-----
 net/ipv4/tcp.c     | 7 +++----
 net/kcm/kcmsock.c  | 7 +++----
 3 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 7464e9f9f47c..cf037c870e3b 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -2203,15 +2203,14 @@ static inline int skb_add_data_nocache(struct sock *sk, struct sk_buff *skb,
 	return err;
 }
 
-static inline int skb_copy_to_page_nocache(struct sock *sk, struct iov_iter *from,
+static inline int skb_copy_to_frag_nocache(struct sock *sk,
+					   struct iov_iter *from,
 					   struct sk_buff *skb,
-					   struct page *page,
-					   int off, int copy)
+					   char *va, int copy)
 {
 	int err;
 
-	err = skb_do_copy_data_nocache(sk, skb, from, page_address(page) + off,
-				       copy, skb->len);
+	err = skb_do_copy_data_nocache(sk, skb, from, va, copy, skb->len);
 	if (err)
 		return err;
 
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 0d704bda6c41..0fbf1e222cda 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1219,10 +1219,9 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 			if (!copy)
 				goto wait_for_space;
 
-			err = skb_copy_to_page_nocache(sk, &msg->msg_iter, skb,
-						       pfrag->page,
-						       pfrag->offset,
-						       copy);
+			err = skb_copy_to_frag_nocache(sk, &msg->msg_iter, skb,
+						       page_address(pfrag->page) +
+						       pfrag->offset, copy);
 			if (err)
 				goto do_error;
 
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index 24aec295a51c..94719d4af5fa 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -856,10 +856,9 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 		if (!sk_wmem_schedule(sk, copy))
 			goto wait_for_memory;
 
-		err = skb_copy_to_page_nocache(sk, &msg->msg_iter, skb,
-					       pfrag->page,
-					       pfrag->offset,
-					       copy);
+		err = skb_copy_to_frag_nocache(sk, &msg->msg_iter, skb,
+					       page_address(pfrag->page) +
+					       pfrag->offset, copy);
 		if (err)
 			goto out_error;

From patchwork Fri Dec 6 12:25:26 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13897124
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v2 03/10] mm: page_frag: update documentation for
 page_frag
Date: Fri, 6 Dec 2024 20:25:26 +0800
Message-ID: <20241206122533.3589947-4-linyunsheng@huawei.com>
In-Reply-To: <20241206122533.3589947-1-linyunsheng@huawei.com>
References: <20241206122533.3589947-1-linyunsheng@huawei.com>

Update documentation about the design, implementation and API usage of
page_frag.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin
---
 Documentation/mm/page_frags.rst | 110 +++++++++++++++++++++++++++++++-
 include/linux/page_frag_cache.h |  54 ++++++++++++++++
 mm/page_frag_cache.c            |  12 +++-
 3 files changed, 173 insertions(+), 3 deletions(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 503ca6cdb804..34e654c2956e 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
 ==============
 Page fragments
 ==============
@@ -40,4 +42,110 @@ page via a single call. The advantage to doing this is that it allows for
 cleaning up the multiple references that were added to a page in order to
 avoid calling get_page per allocation.
 
-Alexander Duyck, Nov 29, 2016.
+
+Architecture overview
+=====================
+
+.. code-block:: none
+
+    +----------------------+
+    | page_frag API caller |
+    +----------------------+
+               |
+               v
+    +------------------------------------------------------------------+
+    |                      request page fragment                       |
+    +------------------------------------------------------------------+
+         |                       |                                |
+         |                Cache not enough                        |
+         |                       |                                |
+         |              +-----------------+                       |
+         |              | reuse old cache |-------Usable--------->|
+         |              +-----------------+                       |
+         |                       |                                |
+         |                   Not usable                           |
+         |                       |                                |
+         |                       v                                |
+    Cache empty         +-----------------+                       |
+         |              | drain old cache |                       |
+         |              +-----------------+                       |
+         |                       |                                |
+         v_______________________v                                |
+                     |                                            |
+                     |                                    Cache is enough
+         ____________v____________                                |
+         |                       |                                |
+    PAGE_SIZE <           PAGE_SIZE >=                            |
+    PAGE_FRAG_CACHE_MAX_SIZE  PAGE_FRAG_CACHE_MAX_SIZE            |
+         |                       |                                |
+         v                       |                                |
+    +----------------------------------+                          |
+    | refill cache with order > 0 page |                          |
+    +----------------------------------+                          |
+         |               |       |                                |
+         |         Refill failed |                                |
+         |               |       |                                |
+         |               v       v                                |
+         |   +--------------------------------+                   |
+         |   | refill cache with order 0 page |                   |
+         |   +--------------------------------+                   |
+         |                   |                                    |
+    Refill succeed     Refill succeed                             |
+         |                   |                                    |
+         v                   v                                    v
+    +------------------------------------------------------------------+
+    |                  allocate fragment from cache                    |
+    +------------------------------------------------------------------+
+
+API interface
+=============
+
+Depending on the alignment requirement, the page_frag API caller may call
+page_frag_*_align*() to ensure the returned virtual address or offset of
+the page is aligned according to the 'align/alignment' parameter. Note that
+the size of the allocated fragment is not aligned; the caller needs to
+provide an aligned fragsz if there is an alignment requirement for the size
+of the fragment.
+
+.. kernel-doc:: include/linux/page_frag_cache.h
+   :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
+                 __page_frag_alloc_align page_frag_alloc_align page_frag_alloc
+
+.. kernel-doc:: mm/page_frag_cache.c
+   :identifiers: page_frag_cache_drain page_frag_free
+
+Coding examples
+===============
+
+Initialization and draining API
+-------------------------------
+
+.. code-block:: c
+
+   page_frag_cache_init(nc);
+   ...
+   page_frag_cache_drain(nc);
+
+Allocation & freeing API
+------------------------
+
+.. code-block:: c
+
+   void *va;
+
+   va = page_frag_alloc_align(nc, size, gfp, align);
+   if (!va)
+       goto do_error;
+
+   err = do_something(va, size);
+   if (err)
+       goto do_error;
+
+   ...
+
+   page_frag_free(va);
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 5ae97f93a0a1..a2b1127e8ac8 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -28,11 +28,28 @@ static inline bool encoded_page_decode_pfmemalloc(unsigned long encoded_page)
 	return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
 }
 
+/**
+ * page_frag_cache_init() - Init page_frag cache.
+ * @nc: page_frag cache to be initialized
+ *
+ * Inline helper to initialize the page_frag cache.
+ */
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
 	nc->encoded_page = 0;
 }
 
+/**
+ * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
+ * @nc: page_frag cache from which to check
+ *
+ * Check if the current page in page_frag cache is allocated from the pfmemalloc
+ * reserves. It has the same calling context expectation as the allocation API.
+ *
+ * Return:
+ * true if the current page in page_frag cache is allocated from the pfmemalloc
+ * reserves, otherwise return false.
+ */
 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
 	return encoded_page_decode_pfmemalloc(nc->encoded_page);
@@ -57,6 +74,19 @@ static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
 	return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
 }
 
+/**
+ * __page_frag_alloc_align() - Allocate a page fragment with aligning
+ * requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the 'va'
+ *
+ * Allocate a page fragment from page_frag cache with aligning requirement.
+ *
+ * Return:
+ * Virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
					    unsigned int fragsz, gfp_t gfp_mask,
					    unsigned int align_mask)
@@ -72,6 +102,19 @@ static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
 	return va;
 }
 
+/**
+ * page_frag_alloc_align() - Allocate a page fragment with aligning requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before allocating a page fragment from
+ * page_frag cache with aligning requirement.
+ *
+ * Return:
+ * Virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
					   unsigned int fragsz, gfp_t gfp_mask,
					   unsigned int align)
@@ -80,6 +123,17 @@ static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
 }
 
+/**
+ * page_frag_alloc() - Allocate a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ *
+ * Allocate a page fragment from page_frag cache.
+ *
+ * Return:
+ * Virtual address of the page fragment, otherwise return NULL.
+ */ static inline void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask) { diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index f55d34cf7d43..d014130fb893 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -70,6 +70,10 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc, return page; } +/** + * page_frag_cache_drain - Drain the current page from page_frag cache. + * @nc: page_frag cache from which to drain + */ void page_frag_cache_drain(struct page_frag_cache *nc) { if (!nc->encoded_page) @@ -182,8 +186,12 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz, } EXPORT_SYMBOL(__page_frag_cache_prepare); -/* - * Frees a page fragment allocated out of either a compound or order 0 page. +/** + * page_frag_free - Free a page fragment. + * @addr: va of page fragment to be freed + * + * Free a page fragment allocated out of either a compound or order 0 page by + * virtual address. */ void page_frag_free(void *addr) { From patchwork Fri Dec 6 12:25:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yunsheng Lin X-Patchwork-Id: 13897120 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42DE9E77173 for ; Fri, 6 Dec 2024 12:32:36 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B2BBE6B025F; Fri, 6 Dec 2024 07:32:35 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id AB0A46B0261; Fri, 6 Dec 2024 07:32:35 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 978F26B0266; Fri, 6 Dec 2024 07:32:35 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by 
From: Yunsheng Lin
Subject: [PATCH net-next v2 04/10] mm: page_frag: introduce page_frag_alloc_abort() related API
Date: Fri, 6 Dec 2024 20:25:27 +0800
Message-ID: <20241206122533.3589947-5-linyunsheng@huawei.com>
In-Reply-To: <20241206122533.3589947-1-linyunsheng@huawei.com>
For cases like tun_build_skb() that do not need the more complicated prepare &
commit API, add an abort API to abort the operation of the page_frag_alloc_*()
related API for error handling, when it is known that no one else has taken an
extra reference to the just-allocated fragment; and add an abort_ref API to
abort only the reference counting of the allocated fragment when it is already
referenced by someone else.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin
---
 Documentation/mm/page_frags.rst |  7 +++++--
 include/linux/page_frag_cache.h | 20 ++++++++++++++++++++
 mm/page_frag_cache.c            | 21 +++++++++++++++++++++
 3 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 34e654c2956e..339e641beb53 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -114,9 +114,10 @@ fragsz if there is an alignment requirement for the size of the fragment.
 .. kernel-doc:: include/linux/page_frag_cache.h
    :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
                  __page_frag_alloc_align page_frag_alloc_align page_frag_alloc
+                 page_frag_alloc_abort

 .. kernel-doc:: mm/page_frag_cache.c
-   :identifiers: page_frag_cache_drain page_frag_free
+   :identifiers: page_frag_cache_drain page_frag_free page_frag_alloc_abort_ref

 Coding examples
 ===============
@@ -143,8 +144,10 @@ Allocation & freeing API
 		goto do_error;

 	err = do_something(va, size);
-	if (err)
+	if (err) {
+		page_frag_alloc_abort(nc, va, size);
 		goto do_error;
+	}

 	...

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index a2b1127e8ac8..c3347c97522c 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -141,5 +141,25 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc,
 }

 void page_frag_free(void *addr);
+void page_frag_alloc_abort_ref(struct page_frag_cache *nc, void *va,
+			       unsigned int fragsz);
+
+/**
+ * page_frag_alloc_abort - Abort the page fragment allocation.
+ * @nc: page_frag cache to which the page fragment is aborted back
+ * @va: virtual address of the page fragment to be aborted
+ * @fragsz: size of the page fragment to be aborted
+ *
+ * It is expected to be called from the same context as the allocation API.
+ * Mostly used in error handling to abort the fragment allocation when it is
+ * known that no one else has taken an extra reference to the just-aborted
+ * fragment, so that the aborted fragment can be reused.
+ */
+static inline void page_frag_alloc_abort(struct page_frag_cache *nc, void *va,
+					 unsigned int fragsz)
+{
+	page_frag_alloc_abort_ref(nc, va, fragsz);
+	nc->offset -= fragsz;
+}

 #endif
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index d014130fb893..8c3cfdbe8c2b 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -201,3 +201,24 @@ void page_frag_free(void *addr)
 	free_unref_page(page, compound_order(page));
 }
 EXPORT_SYMBOL(page_frag_free);
+
+/**
+ * page_frag_alloc_abort_ref - Abort the reference of an allocated fragment.
+ * @nc: page_frag cache to which the page fragment is aborted back
+ * @va: virtual address of the page fragment to be aborted
+ * @fragsz: size of the page fragment to be aborted
+ *
+ * It is expected to be called from the same context as the allocation API.
+ * Mostly used in error handling to abort only the reference count of an
+ * allocated fragment when the fragment is still referenced for other usages,
+ * avoiding the atomic operation of the page_frag_free() API.
+ */
+void page_frag_alloc_abort_ref(struct page_frag_cache *nc, void *va,
+			       unsigned int fragsz)
+{
+	VM_BUG_ON(va + fragsz !=
+		  encoded_page_decode_virt(nc->encoded_page) + nc->offset);
+
+	nc->pagecnt_bias++;
+}
+EXPORT_SYMBOL(page_frag_alloc_abort_ref);

From patchwork Fri Dec 6 12:25:28 2024
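[Editor's note] The abort semantics of the patch above can be modeled in userspace: the cache pre-holds a bias of page references and consumes one per fragment; abort_ref gives the reference back, while the full abort also rewinds the offset of the most recent allocation. All names below are illustrative, not the kernel's.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the abort semantics. The cache hands out fragments
 * by advancing 'offset' and holds 'pagecnt_bias' spare references on the
 * backing page, consuming one per allocated fragment. */
struct frag_cache {
	char buf[64];
	size_t offset;
	int pagecnt_bias;	/* references the cache still holds */
};

static void *frag_alloc(struct frag_cache *nc, size_t fragsz)
{
	void *va;

	if (nc->offset + fragsz > sizeof(nc->buf))
		return NULL;
	va = nc->buf + nc->offset;
	nc->offset += fragsz;
	nc->pagecnt_bias--;	/* one reference consumed by the fragment */
	return va;
}

/* Model of page_frag_alloc_abort_ref(): give back only the reference, for
 * when someone else still uses the fragment's bytes. */
static void frag_abort_ref(struct frag_cache *nc, size_t fragsz)
{
	(void)fragsz;		/* the kernel uses it for a VM_BUG_ON() check */
	nc->pagecnt_bias++;
}

/* Model of page_frag_alloc_abort(): undo the most recent allocation entirely
 * so the same bytes can be handed out again. Only valid for the last
 * fragment, mirroring the kernel's VM_BUG_ON() sanity check. */
static void frag_abort(struct frag_cache *nc, void *va, size_t fragsz)
{
	assert((char *)va + fragsz == nc->buf + nc->offset);
	frag_abort_ref(nc, fragsz);
	nc->offset -= fragsz;
}
```

The split into two helpers mirrors the patch: the reference rollback is reusable on its own, and the full abort composes it with the offset rewind.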
From: Yunsheng Lin
Subject: [PATCH net-next v2 05/10] mm: page_frag: introduce refill prepare & commit API
Date: Fri, 6 Dec 2024 20:25:28 +0800
Message-ID: <20241206122533.3589947-6-linyunsheng@huawei.com>
Currently page_frag only has an alloc API, which returns the virtual address
of a fragment of a specific size.

There are many use cases that need a minimum amount of memory in order to make
forward progress, but perform better when more memory is available, and that
expect to use the 'struct page' of the allocated fragment directly instead of
the virtual address.

Currently the skb_page_frag_refill() API is used for those use cases, but the
caller needs to know the internal details and access the data fields of
'struct page_frag' to meet the requirement, and its implementation is similar
to the one in the mm subsystem.

To unify those two page_frag implementations, introduce a prepare API to
ensure the minimum memory is satisfied and return how much memory is actually
available to the caller. The caller then either calls the commit API to report
how much memory it actually used, or does not do so if it decides not to use
any memory.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin
---
 Documentation/mm/page_frags.rst |  43 ++++++++++++-
 include/linux/page_frag_cache.h | 110 ++++++++++++++++++++++++++++++++
 2 files changed, 152 insertions(+), 1 deletion(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 339e641beb53..4cfdbe7db55a 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -111,10 +111,18 @@ page is aligned according to the 'align/alignment' parameter. Note the size of
 the allocated fragment is not aligned, the caller needs to provide an aligned
 fragsz if there is an alignment requirement for the size of the fragment.

+There is a use case that needs a minimum amount of memory in order to make
+forward progress, but performs better when more memory is available. By using
+the prepare and commit related API, the caller calls the prepare API to request
+the minimum memory it needs, and the prepare API returns the maximum size of
+the fragment.
+The caller needs to either call the commit API to report how much memory it
+actually uses, or not do so if it decides not to use any memory.
+
 .. kernel-doc:: include/linux/page_frag_cache.h
    :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
                  __page_frag_alloc_align page_frag_alloc_align page_frag_alloc
-                 page_frag_alloc_abort
+                 page_frag_alloc_abort __page_frag_refill_prepare_align
+                 page_frag_refill_prepare_align page_frag_refill_prepare

 .. kernel-doc:: mm/page_frag_cache.c
    :identifiers: page_frag_cache_drain page_frag_free page_frag_alloc_abort_ref
@@ -152,3 +160,36 @@ Allocation & freeing API
 	...

 	page_frag_free(va);
+
+
+Refill Preparation & committing API
+-----------------------------------
+
+.. code-block:: c
+
+	struct page_frag page_frag, *pfrag;
+	bool merge = true;
+
+	pfrag = &page_frag;
+	if (!page_frag_refill_prepare(nc, 32U, pfrag, GFP_KERNEL))
+		goto wait_for_space;
+
+	copy = min_t(unsigned int, copy, pfrag->size);
+	if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
+		if (i >= max_skb_frags)
+			goto new_segment;
+
+		merge = false;
+	}
+
+	copy = mem_schedule(copy);
+	if (!copy)
+		goto wait_for_space;
+
+	if (merge) {
+		skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+		page_frag_refill_commit_noref(nc, pfrag, copy);
+	} else {
+		skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
+		page_frag_refill_commit(nc, pfrag, copy);
+	}
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index c3347c97522c..1e699334646a 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -140,6 +140,116 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc,
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
 }

+/**
+ * __page_frag_refill_prepare_align() - Prepare refilling a page_frag with an
+ *	alignment requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested alignment for the fragment
+ *
+ * Prepare refilling a page_frag from the page_frag cache with an alignment
+ * requirement.
+ *
+ * Return:
+ * True if preparing the refill succeeds, otherwise return false.
+ */
+static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						    unsigned int fragsz,
+						    struct page_frag *pfrag,
+						    gfp_t gfp_mask,
+						    unsigned int align_mask)
+{
+	return !!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
+					   align_mask);
+}
+
+/**
+ * page_frag_refill_prepare_align() - Prepare refilling a page_frag with an
+ *	alignment requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested alignment for the fragment
+ *
+ * WARN_ON_ONCE() checks @align before preparing the refill of a page_frag
+ * from the page_frag cache with an alignment requirement.
+ *
+ * Return:
+ * True if preparing the refill succeeds, otherwise return false.
+ */
+static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
+						  unsigned int fragsz,
+						  struct page_frag *pfrag,
+						  gfp_t gfp_mask,
+						  unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+						-align);
+}
+
+/**
+ * page_frag_refill_prepare() - Prepare refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare refilling a page_frag from the page_frag cache.
+ *
+ * Return:
+ * True if the refill succeeds, otherwise return false.
+ */
+static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
+					    unsigned int fragsz,
+					    struct page_frag *pfrag,
+					    gfp_t gfp_mask)
+{
+	return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+						~0u);
+}
+
+/**
+ * page_frag_refill_commit - Commit a prepared refill.
+ * @nc: page_frag cache to which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the actual used size for the refill that was prepared.
+ *
+ * Return:
+ * The true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int page_frag_refill_commit(struct page_frag_cache *nc,
+						   struct page_frag *pfrag,
+						   unsigned int used_sz)
+{
+	return __page_frag_cache_commit(nc, pfrag, used_sz);
+}
+
+/**
+ * page_frag_refill_commit_noref - Commit a prepared refill without taking a
+ *	refcount.
+ * @nc: page_frag cache to which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the prepared refill by passing the actual used size, but without
+ * taking a refcount. Mostly used for the fragment coalescing case, when the
+ * current fragment can share the same refcount as the previous fragment.
+ *
+ * Return:
+ * The true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int
+page_frag_refill_commit_noref(struct page_frag_cache *nc,
+			      struct page_frag *pfrag, unsigned int used_sz)
+{
+	return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
+}
+
 void page_frag_free(void *addr);
 void page_frag_alloc_abort_ref(struct page_frag_cache *nc, void *va,
			       unsigned int fragsz);

From patchwork Fri Dec 6 12:25:29 2024
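[Editor's note] The prepare & commit pattern of the patch above — prepare guarantees at least the requested size but offers everything available, commit consumes only what was actually used — can be modeled in a few lines of userspace C. Names are illustrative, not the kernel API.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the refill prepare & commit pattern. */
struct frag_cache {
	char buf[96];
	size_t offset;
};

struct frag {
	void *va;
	size_t size;	/* maximum usable size, >= the requested fragsz */
};

/* Model of page_frag_refill_prepare(): succeed only when at least fragsz
 * bytes are available, but report everything available via pfrag->size so
 * the caller may use more than it asked for. Nothing is consumed yet. */
static int frag_refill_prepare(struct frag_cache *nc, size_t fragsz,
			       struct frag *pfrag)
{
	size_t avail = sizeof(nc->buf) - nc->offset;

	if (avail < fragsz)
		return 0;	/* caller must wait for space or refill */
	pfrag->va = nc->buf + nc->offset;
	pfrag->size = avail;	/* offer everything, not just fragsz */
	return 1;
}

/* Model of page_frag_refill_commit(): consume only the size the caller
 * actually used out of what prepare offered. */
static size_t frag_refill_commit(struct frag_cache *nc, size_t used_sz)
{
	nc->offset += used_sz;
	return used_sz;
}
```

This is the "minimum memory for forward progress, more if available" contract from the commit message: the gap between the requested 32 bytes and `pfrag->size` is what lets callers such as the skb-filling example coalesce larger copies without a second allocation.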
From: Yunsheng Lin
Subject: [PATCH net-next v2 06/10] mm: page_frag: introduce alloc_refill prepare & commit API
Date: Fri, 6 Dec 2024 20:25:29 +0800
Message-ID: <20241206122533.3589947-7-linyunsheng@huawei.com>
Currently the alloc related API returns the virtual address of the allocated
fragment, and the refill related API returns the page info of the allocated
fragment through 'struct page_frag'. There are use cases that need both the
virtual address and the page info of the allocated fragment. Introduce the
alloc_refill API for those use cases.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin
---
 Documentation/mm/page_frags.rst | 45 +++++++++++++++++++++
 include/linux/page_frag_cache.h | 71 +++++++++++++++++++++++++++++++++
 2 files changed, 116 insertions(+)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 4cfdbe7db55a..1c98f7090d92 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -111,6 +111,9 @@ page is aligned according to the 'align/alignment' parameter. Note the size of
 the allocated fragment is not aligned, the caller needs to provide an aligned
 fragsz if there is an alignment requirement for the size of the fragment.

+Depending on the use case, callers expecting to deal with the va, the page, or
+both the va and the page may call the alloc, refill, or alloc_refill API
+accordingly.
+
 There is a use case that needs minimum memory in order for forward progress, but
 more performant if more memory is available. By using the prepare and commit
 related API, the caller calls prepare API to requests the minimum memory it
@@ -123,6 +126,9 @@ uses, or not do so if deciding to not use any memory.
    __page_frag_alloc_align page_frag_alloc_align page_frag_alloc
    page_frag_alloc_abort __page_frag_refill_prepare_align
    page_frag_refill_prepare_align page_frag_refill_prepare
+   __page_frag_alloc_refill_prepare_align
+   page_frag_alloc_refill_prepare_align
+   page_frag_alloc_refill_prepare

 ..
kernel-doc:: mm/page_frag_cache.c
    :identifiers: page_frag_cache_drain page_frag_free page_frag_alloc_abort_ref
@@ -193,3 +199,42 @@ Refill Preparation & committing API
 		skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
 		page_frag_refill_commit(nc, pfrag, copy);
 	}
+
+
+Alloc_Refill Preparation & committing API
+-----------------------------------------
+
+.. code-block:: c
+
+	struct page_frag page_frag, *pfrag;
+	bool merge = true;
+	void *va;
+
+	pfrag = &page_frag;
+	va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
+	if (!va)
+		goto wait_for_space;
+
+	copy = min_t(unsigned int, copy, pfrag->size);
+	if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
+		if (i >= max_skb_frags)
+			goto new_segment;
+
+		merge = false;
+	}
+
+	copy = mem_schedule(copy);
+	if (!copy)
+		goto wait_for_space;
+
+	err = copy_from_iter_full_nocache(va, copy, iter);
+	if (err)
+		goto do_error;
+
+	if (merge) {
+		skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+		page_frag_refill_commit_noref(nc, pfrag, copy);
+	} else {
+		skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
+		page_frag_refill_commit(nc, pfrag, copy);
+	}
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 1e699334646a..329390afbe78 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -211,6 +211,77 @@ static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
						~0u);
 }

+/**
+ * __page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ *	refilling a page_frag with an alignment requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested alignment for the fragment
+ *
+ * Prepare allocating a fragment and refilling a page_frag from the page_frag
+ * cache.
+ *
+ * Return:
+ * Virtual address of the page fragment, otherwise return NULL.
+ */
+static inline void
+*__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+					unsigned int fragsz,
+					struct page_frag *pfrag,
+					gfp_t gfp_mask, unsigned int align_mask)
+{
+	return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
+}
+
+/**
+ * page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ *	refilling a page_frag with an alignment requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested alignment for the fragment
+ *
+ * WARN_ON_ONCE() checks @align before preparing the allocation of a fragment
+ * and the refill of a page_frag from the page_frag cache.
+ *
+ * Return:
+ * Virtual address of the page fragment, otherwise return NULL.
+ */
+static inline void
+*page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+				      unsigned int fragsz,
+				      struct page_frag *pfrag, gfp_t gfp_mask,
+				      unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, -align);
+}
+
+/**
+ * page_frag_alloc_refill_prepare() - Prepare allocating a fragment and
+ *	refilling a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare allocating a fragment and refilling a page_frag from the page_frag
+ * cache.
+ *
+ * Return:
+ * Virtual address of the page fragment, otherwise return NULL.
+ */
+static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
+						   unsigned int fragsz,
+						   struct page_frag *pfrag,
+						   gfp_t gfp_mask)
+{
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, ~0u);
+}
+
 /**
  * page_frag_refill_commit - Commit a prepare refilling.
  * @nc: page_frag cache from which to commit

From patchwork Fri Dec 6 12:25:30 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13897125
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v2 07/10] mm: page_frag: introduce probe related API
Date: Fri, 6 Dec 2024 20:25:30 +0800
Message-ID: <20241206122533.3589947-8-linyunsheng@huawei.com>
In-Reply-To: <20241206122533.3589947-1-linyunsheng@huawei.com>
References: <20241206122533.3589947-1-linyunsheng@huawei.com>

Some use cases may need a bigger fragment when the current fragment cannot
be coalesced with the previous one, because a new fragment may need extra
room for a header. So introduce probe related API to tell whether the cache
has at least the minimum remaining memory needed for coalescing with the
previous fragment, in order to save as much memory as possible.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 Documentation/mm/page_frags.rst | 10 +++++++-
 include/linux/page_frag_cache.h | 41 +++++++++++++++++++++++++++++++++
 mm/page_frag_cache.c            | 35 ++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 1c98f7090d92..3e34831a0029 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -119,7 +119,13 @@ more performant if more memory is available. By using the prepare and commit
 related API, the caller calls prepare API to requests the minimum memory it
 needs and prepare API will return the maximum size of the fragment returned.
 The caller needs to either call the commit API to report how much memory it actually
-uses, or not do so if deciding to not use any memory.
+uses, or not do so if deciding to not use any memory. Some use cases may need a
+bigger fragment if the current fragment can't be coalesced with the previous
+fragment because more space for some header may be needed if it is a new
+fragment. The probe related API can be used to tell whether there is the
+minimum remaining memory in the cache to be coalesced with the previous
+fragment, in order to save as much memory as possible.
+
 
 .. kernel-doc:: include/linux/page_frag_cache.h
    :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
@@ -129,9 +135,11 @@ uses, or not do so if deciding to not use any memory.
                  __page_frag_alloc_refill_prepare_align
                  page_frag_alloc_refill_prepare_align
                  page_frag_alloc_refill_prepare
+                 page_frag_alloc_refill_probe page_frag_refill_probe
 
 .. kernel-doc:: mm/page_frag_cache.c
    :identifiers: page_frag_cache_drain page_frag_free page_frag_alloc_abort_ref
+                 __page_frag_alloc_refill_probe_align
 
 Coding examples
 ===============
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 329390afbe78..0f7e8da91a67 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -63,6 +63,10 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 					    struct page_frag *pfrag,
 					    unsigned int used_sz);
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+					   unsigned int fragsz,
+					   struct page_frag *pfrag,
+					   unsigned int align_mask);
 
 static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
 						    struct page_frag *pfrag,
@@ -282,6 +286,43 @@ static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
 					      gfp_mask, ~0u);
 }
 
+/**
+ * page_frag_alloc_refill_probe() - Probe allocating a fragment and refilling
+ *	a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe allocating a fragment and refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
+static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
+						 unsigned int fragsz,
+						 struct page_frag *pfrag)
+{
+	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
+}
+
+/**
+ * page_frag_refill_probe() - Probe refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * True if refill succeeds, otherwise return false.
+ */
+static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
+					  unsigned int fragsz,
+					  struct page_frag *pfrag)
+{
+	return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
+}
+
 /**
  * page_frag_refill_commit - Commit a prepare refilling.
  * @nc: page_frag cache from which to commit
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 8c3cfdbe8c2b..ae40520d452a 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -116,6 +116,41 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_cache_commit_noref);
 
+/**
+ * __page_frag_alloc_refill_probe_align() - Probe allocating a fragment and
+ *	refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @align_mask: the requested aligning requirement for the fragment.
+ *
+ * Probe allocating a fragment and refilling a page_frag from page_frag cache
+ * with aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+					   unsigned int fragsz,
+					   struct page_frag *pfrag,
+					   unsigned int align_mask)
+{
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
+
+	size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
+	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+	if (unlikely(!encoded_page || offset + fragsz > size))
+		return NULL;
+
+	pfrag->page = encoded_page_decode_page(encoded_page);
+	pfrag->size = size - offset;
+	pfrag->offset = offset;
+
+	return encoded_page_decode_virt(encoded_page) + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align);
+
 void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 				struct page_frag *pfrag, gfp_t gfp_mask,
 				unsigned int align_mask)

From patchwork Fri Dec 6 12:25:31 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13897126
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v2 08/10] mm: page_frag: add testing for the newly added API
Date: Fri, 6 Dec 2024 20:25:31 +0800
Message-ID: <20241206122533.3589947-9-linyunsheng@huawei.com>
In-Reply-To: <20241206122533.3589947-1-linyunsheng@huawei.com>
References: <20241206122533.3589947-1-linyunsheng@huawei.com>

Add testing for the newly added prepare API, covering both the aligned and
non-aligned variants; the probe API is tested along with the prepare API.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 .../selftests/mm/page_frag/page_frag_test.c  | 76 +++++++++++++++++--
 tools/testing/selftests/mm/run_vmtests.sh    |  4 +
 tools/testing/selftests/mm/test_page_frag.sh | 27 +++++++
 3 files changed, 102 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index e806c1866e36..3b3c32389def 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -32,6 +32,10 @@ static bool test_align;
 module_param(test_align, bool, 0);
 MODULE_PARM_DESC(test_align, "use align API for testing");
 
+static bool test_prepare;
+module_param(test_prepare, bool, 0);
+MODULE_PARM_DESC(test_prepare, "use prepare API for testing");
+
 static int test_alloc_len = 2048;
 module_param(test_alloc_len, int, 0);
 MODULE_PARM_DESC(test_alloc_len, "alloc len for testing");
@@ -74,6 +78,21 @@ static int page_frag_pop_thread(void *arg)
 	return 0;
 }
 
+static void frag_frag_test_commit(struct page_frag_cache *nc,
+				  struct page_frag *prepare_pfrag,
+				  struct page_frag *probe_pfrag,
+				  unsigned int used_sz)
+{
+	if (prepare_pfrag->page !=
probe_pfrag->page ||
+	    prepare_pfrag->offset != probe_pfrag->offset ||
+	    prepare_pfrag->size != probe_pfrag->size) {
+		force_exit = true;
+		WARN_ONCE(true, TEST_FAILED_PREFIX "wrong probed info\n");
+	}
+
+	page_frag_refill_commit(nc, prepare_pfrag, used_sz);
+}
+
 static int page_frag_push_thread(void *arg)
 {
 	struct ptr_ring *ring = arg;
@@ -86,15 +105,61 @@ static int page_frag_push_thread(void *arg)
 		int ret;
 
 		if (test_align) {
-			va = page_frag_alloc_align(&test_nc, test_alloc_len,
-						   GFP_KERNEL, SMP_CACHE_BYTES);
+			if (test_prepare) {
+				struct page_frag prepare_frag, probe_frag;
+				void *probe_va;
+
+				va = page_frag_alloc_refill_prepare_align(&test_nc,
+									  test_alloc_len,
+									  &prepare_frag,
+									  GFP_KERNEL,
+									  SMP_CACHE_BYTES);
+
+				probe_va = __page_frag_alloc_refill_probe_align(&test_nc,
+										test_alloc_len,
+										&probe_frag,
+										-SMP_CACHE_BYTES);
+				if (va != probe_va) {
+					force_exit = true;
+					WARN_ONCE(true, TEST_FAILED_PREFIX "wrong va\n");
+				}
+
+				if (likely(va))
+					frag_frag_test_commit(&test_nc, &prepare_frag,
+							      &probe_frag, test_alloc_len);
+			} else {
+				va = page_frag_alloc_align(&test_nc,
+							   test_alloc_len,
+							   GFP_KERNEL,
+							   SMP_CACHE_BYTES);
+			}
 
 			if ((unsigned long)va & (SMP_CACHE_BYTES - 1)) {
 				force_exit = true;
 				WARN_ONCE(true, TEST_FAILED_PREFIX "unaligned va returned\n");
 			}
 		} else {
-			va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+			if (test_prepare) {
+				struct page_frag prepare_frag, probe_frag;
+				void *probe_va;
+
+				va = page_frag_alloc_refill_prepare(&test_nc, test_alloc_len,
+								    &prepare_frag, GFP_KERNEL);
+
+				probe_va = page_frag_alloc_refill_probe(&test_nc, test_alloc_len,
+									&probe_frag);
+
+				if (va != probe_va) {
+					force_exit = true;
+					WARN_ONCE(true, TEST_FAILED_PREFIX "wrong va\n");
+				}
+
+				if (likely(va))
+					frag_frag_test_commit(&test_nc, &prepare_frag,
+							      &probe_frag, test_alloc_len);
+			} else {
+				va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+			}
 		}
 
 		if (!va)
@@ -176,8 +241,9 @@ static int __init page_frag_test_init(void)
 	}
 
 	duration =
(u64)ktime_us_delta(ktime_get(), start);
-	pr_info("%d of iterations for %s testing took: %lluus\n", nr_test,
-		test_align ? "aligned" : "non-aligned", duration);
+	pr_info("%d of iterations for %s %s API testing took: %lluus\n", nr_test,
+		test_align ? "aligned" : "non-aligned",
+		test_prepare ? "prepare" : "alloc", duration);
 
 out:
 	ptr_ring_cleanup(&ptr_ring, NULL);
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index 2fc290d9430c..881c17803baf 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -466,6 +466,10 @@ CATEGORY="page_frag" run_test ./test_page_frag.sh aligned
 
 CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned
 
+CATEGORY="page_frag" run_test ./test_page_frag.sh aligned_prepare
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned_prepare
+
 echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix
 echo "1..${count_total}" | tap_output
diff --git a/tools/testing/selftests/mm/test_page_frag.sh b/tools/testing/selftests/mm/test_page_frag.sh
index f55b105084cf..1c757fd11844 100755
--- a/tools/testing/selftests/mm/test_page_frag.sh
+++ b/tools/testing/selftests/mm/test_page_frag.sh
@@ -43,6 +43,8 @@ check_test_failed_prefix() {
 SMOKE_PARAM="test_push_cpu=$TEST_CPU_0 test_pop_cpu=$TEST_CPU_1"
 NONALIGNED_PARAM="$SMOKE_PARAM test_alloc_len=75 nr_test=$NR_TEST"
 ALIGNED_PARAM="$NONALIGNED_PARAM test_align=1"
+NONALIGNED_PREPARE_PARAM="$NONALIGNED_PARAM test_prepare=1"
+ALIGNED_PREPARE_PARAM="$ALIGNED_PARAM test_prepare=1"
 
 check_test_requirements()
 {
@@ -77,6 +79,20 @@ run_aligned_check()
 	insmod $DRIVER $ALIGNED_PARAM > /dev/null 2>&1
 }
 
+run_nonaligned_prepare_check()
+{
+	echo "Run performance tests to evaluate how fast nonaligned prepare API is."
+
+	insmod $DRIVER $NONALIGNED_PREPARE_PARAM > /dev/null 2>&1
+}
+
+run_aligned_prepare_check()
+{
+	echo "Run performance tests to evaluate how fast aligned prepare API is."
+
+	insmod $DRIVER $ALIGNED_PREPARE_PARAM > /dev/null 2>&1
+}
+
 run_smoke_check()
 {
 	echo "Run smoke test."
@@ -87,6 +103,7 @@ run_smoke_check()
 usage()
 {
 	echo -n "Usage: $0 [ aligned ] | [ nonaligned ] | | [ smoke ] | "
+	echo "[ aligned_prepare ] | [ nonaligned_prepare ] | "
 	echo "manual parameters"
 	echo
 	echo "Valid tests and parameters:"
@@ -107,6 +124,12 @@ usage()
 	echo "# Performance testing for aligned alloc API"
 	echo "$0 aligned"
 	echo
+	echo "# Performance testing for nonaligned prepare API"
+	echo "$0 nonaligned_prepare"
+	echo
+	echo "# Performance testing for aligned prepare API"
+	echo "$0 aligned_prepare"
+	echo
 	exit 0
 }
 
@@ -158,6 +181,10 @@ function run_test()
 		run_nonaligned_check
 	elif [[ "$1" = "aligned" ]]; then
 		run_aligned_check
+	elif [[ "$1" = "nonaligned_prepare" ]]; then
+		run_nonaligned_prepare_check
+	elif [[ "$1" = "aligned_prepare" ]]; then
+		run_aligned_prepare_check
 	else
 		run_manual_check $@
 	fi

From patchwork Fri Dec 6 12:25:32 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13897162
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v2 09/10] net: replace page_frag with page_frag_cache
Date: Fri, 6 Dec 2024 20:25:32 +0800
Message-ID: <20241206122533.3589947-10-linyunsheng@huawei.com>
In-Reply-To: <20241206122533.3589947-1-linyunsheng@huawei.com>
References: <20241206122533.3589947-1-linyunsheng@huawei.com>
CC: Alexander Duyck, Andrew Morton, Linux-MM, Ayush Sawal, Andrew Lunn,
 Eric Dumazet, Willem de Bruijn, Jason Wang, Ingo Molnar, Peter Zijlstra,
 Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
 Mel Gorman, Valentin Schneider, Simon Horman, John Fastabend,
 Jakub Sitnicki, David Ahern, Matthieu Baerts, Mat Martineau, Geliang Tang,
 Boris Pismenny
Use the newly introduced prepare/probe/commit API to replace page_frag with
page_frag_cache for sk_page_frag().
CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 .../chelsio/inline_crypto/chtls/chtls.h       |   3 -
 .../chelsio/inline_crypto/chtls/chtls_io.c    | 101 +++++-------------
 .../chelsio/inline_crypto/chtls/chtls_main.c  |   3 -
 drivers/net/tun.c                             |  47 ++++----
 include/linux/sched.h                         |   2 +-
 include/net/sock.h                            |  21 ++--
 kernel/exit.c                                 |   3 +-
 kernel/fork.c                                 |   3 +-
 net/core/skbuff.c                             |  58 +++++-----
 net/core/skmsg.c                              |  12 ++-
 net/core/sock.c                               |  32 ++++--
 net/ipv4/ip_output.c                          |  28 +++--
 net/ipv4/tcp.c                                |  23 ++--
 net/ipv4/tcp_output.c                         |  25 +++--
 net/ipv6/ip6_output.c                         |  28 +++--
 net/kcm/kcmsock.c                             |  18 ++--
 net/mptcp/protocol.c                          |  47 ++++----
 net/tls/tls_device.c                          | 100 ++++++++++-------
 18 files changed, 293 insertions(+), 261 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
index 21e0dfeff158..85ce0b2f1f3f 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
@@ -234,7 +234,6 @@ struct chtls_dev {
 	struct list_head list_node;
 	struct list_head rcu_node;
 	struct list_head na_node;
-	unsigned int send_page_order;
 	int max_host_sndbuf;
 	u32 round_robin_cnt;
 	struct key_map kmap;
@@ -453,8 +452,6 @@ enum {
 /* The ULP mode/submode of an skbuff */
 #define skb_ulp_mode(skb) (ULP_SKB_CB(skb)->ulp_mode)
 
-#define TCP_PAGE(sk) (sk->sk_frag.page)
-#define TCP_OFF(sk) (sk->sk_frag.offset)
 
 static inline struct chtls_dev *to_chtls_dev(struct tls_toe_device *tlsdev)
 {
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
index d567e42e1760..7b1760ab55ba 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
@@ -825,12 +825,6 @@ void skb_entail(struct sock *sk, struct sk_buff *skb, int flags)
 	ULP_SKB_CB(skb)->flags = flags;
 	__skb_queue_tail(&csk->txq,
skb); sk->sk_wmem_queued += skb->truesize; - - if (TCP_PAGE(sk) && TCP_OFF(sk)) { - put_page(TCP_PAGE(sk)); - TCP_PAGE(sk) = NULL; - TCP_OFF(sk) = 0; - } } static struct sk_buff *get_tx_skb(struct sock *sk, int size) @@ -882,16 +876,12 @@ static void push_frames_if_head(struct sock *sk) chtls_push_frames(csk, 1); } -static int chtls_skb_copy_to_page_nocache(struct sock *sk, - struct iov_iter *from, - struct sk_buff *skb, - struct page *page, - int off, int copy) +static int chtls_skb_copy_to_va_nocache(struct sock *sk, struct iov_iter *from, + struct sk_buff *skb, char *va, int copy) { int err; - err = skb_do_copy_data_nocache(sk, skb, from, page_address(page) + - off, copy, skb->len); + err = skb_do_copy_data_nocache(sk, skb, from, va, copy, skb->len); if (err) return err; @@ -1114,82 +1104,45 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size) if (err) goto do_fault; } else { + struct page_frag_cache *nc = &sk->sk_frag; + struct page_frag page_frag, *pfrag; int i = skb_shinfo(skb)->nr_frags; - struct page *page = TCP_PAGE(sk); - int pg_size = PAGE_SIZE; - int off = TCP_OFF(sk); - bool merge; - - if (page) - pg_size = page_size(page); - if (off < pg_size && - skb_can_coalesce(skb, i, page, off)) { + bool merge = false; + void *va; + + pfrag = &page_frag; + va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, + sk->sk_allocation); + if (unlikely(!va)) + goto wait_for_memory; + + if (skb_can_coalesce(skb, i, pfrag->page, + pfrag->offset)) merge = true; - goto copy; - } - merge = false; - if (i == (is_tls_tx(csk) ? (MAX_SKB_FRAGS - 1) : - MAX_SKB_FRAGS)) + else if (i == (is_tls_tx(csk) ? 
(MAX_SKB_FRAGS - 1) : + MAX_SKB_FRAGS)) goto new_buf; - if (page && off == pg_size) { - put_page(page); - TCP_PAGE(sk) = page = NULL; - pg_size = PAGE_SIZE; - } - - if (!page) { - gfp_t gfp = sk->sk_allocation; - int order = cdev->send_page_order; - - if (order) { - page = alloc_pages(gfp | __GFP_COMP | - __GFP_NOWARN | - __GFP_NORETRY, - order); - if (page) - pg_size <<= order; - } - if (!page) { - page = alloc_page(gfp); - pg_size = PAGE_SIZE; - } - if (!page) - goto wait_for_memory; - off = 0; - } -copy: - if (copy > pg_size - off) - copy = pg_size - off; + copy = min_t(int, copy, pfrag->size); if (is_tls_tx(csk)) copy = min_t(int, copy, csk->tlshws.txleft); - err = chtls_skb_copy_to_page_nocache(sk, &msg->msg_iter, - skb, page, - off, copy); - if (unlikely(err)) { - if (!TCP_PAGE(sk)) { - TCP_PAGE(sk) = page; - TCP_OFF(sk) = 0; - } + err = chtls_skb_copy_to_va_nocache(sk, &msg->msg_iter, + skb, va, copy); + if (unlikely(err)) goto do_fault; - } + /* Update the skb. */ if (merge) { skb_frag_size_add( &skb_shinfo(skb)->frags[i - 1], copy); + page_frag_refill_commit_noref(nc, pfrag, copy); } else { - skb_fill_page_desc(skb, i, page, off, copy); - if (off + copy < pg_size) { - /* space left keep page */ - get_page(page); - TCP_PAGE(sk) = page; - } else { - TCP_PAGE(sk) = NULL; - } + skb_fill_page_desc(skb, i, pfrag->page, + pfrag->offset, copy); + page_frag_refill_commit(nc, pfrag, copy); } - TCP_OFF(sk) = off + copy; } if (unlikely(skb->len == mss)) tx_skb_finalize(skb); diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c index 96fd31d75dfd..7284269174c5 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c @@ -34,7 +34,6 @@ static DEFINE_MUTEX(notify_mutex); static RAW_NOTIFIER_HEAD(listen_notify_list); static struct proto chtls_cpl_prot, chtls_cpl_protv6; struct request_sock_ops 
chtls_rsk_ops, chtls_rsk_opsv6; -static uint send_page_order = (14 - PAGE_SHIFT < 0) ? 0 : 14 - PAGE_SHIFT; static void register_listen_notifier(struct notifier_block *nb) { @@ -273,8 +272,6 @@ static void *chtls_uld_add(const struct cxgb4_lld_info *info) INIT_WORK(&cdev->deferq_task, process_deferq); spin_lock_init(&cdev->listen_lock); spin_lock_init(&cdev->idr_lock); - cdev->send_page_order = min_t(uint, get_order(32768), - send_page_order); cdev->max_host_sndbuf = 48 * 1024; if (lldi->vr->key.size) diff --git a/drivers/net/tun.c b/drivers/net/tun.c index d7a865ef370b..4ca6590ef5fe 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -1599,21 +1599,19 @@ static bool tun_can_build_skb(struct tun_struct *tun, struct tun_file *tfile, } static struct sk_buff *__tun_build_skb(struct tun_file *tfile, - struct page_frag *alloc_frag, char *buf, - int buflen, int len, int pad) + char *buf, int buflen, int len, int pad) { struct sk_buff *skb = build_skb(buf, buflen); - if (!skb) + if (!skb) { + page_frag_free(buf); return ERR_PTR(-ENOMEM); + } skb_reserve(skb, pad); skb_put(skb, len); skb_set_owner_w(skb, tfile->socket.sk); - get_page(alloc_frag->page); - alloc_frag->offset += buflen; - return skb; } @@ -1661,8 +1659,8 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, struct virtio_net_hdr *hdr, int len, int *skb_xdp) { - struct page_frag *alloc_frag = ¤t->task_frag; struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx; + struct page_frag_cache *nc = ¤t->task_frag; struct bpf_prog *xdp_prog; int buflen = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); char *buf; @@ -1677,16 +1675,16 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, buflen += SKB_DATA_ALIGN(len + pad); rcu_read_unlock(); - alloc_frag->offset = ALIGN((u64)alloc_frag->offset, SMP_CACHE_BYTES); - if (unlikely(!skb_page_frag_refill(buflen, alloc_frag, GFP_KERNEL))) + buf = page_frag_alloc_align(nc, buflen, GFP_KERNEL, + SMP_CACHE_BYTES); + if (unlikely(!buf)) return 
ERR_PTR(-ENOMEM); - buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset; - copied = copy_page_from_iter(alloc_frag->page, - alloc_frag->offset + pad, - len, from); - if (copied != len) + copied = copy_from_iter(buf + pad, len, from); + if (copied != len) { + page_frag_alloc_abort(nc, buf, buflen); return ERR_PTR(-EFAULT); + } /* There's a small window that XDP may be set after the check * of xdp_prog above, this should be rare and for simplicity @@ -1694,8 +1692,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, */ if (hdr->gso_type || !xdp_prog) { *skb_xdp = 1; - return __tun_build_skb(tfile, alloc_frag, buf, buflen, len, - pad); + return __tun_build_skb(tfile, buf, buflen, len, pad); } *skb_xdp = 0; @@ -1712,21 +1709,23 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, xdp_prepare_buff(&xdp, buf, pad, len, false); act = bpf_prog_run_xdp(xdp_prog, &xdp); - if (act == XDP_REDIRECT || act == XDP_TX) { - get_page(alloc_frag->page); - alloc_frag->offset += buflen; - } err = tun_xdp_act(tun, xdp_prog, &xdp, act); if (err < 0) { - if (act == XDP_REDIRECT || act == XDP_TX) - put_page(alloc_frag->page); + if (act == XDP_REDIRECT || act == XDP_TX) { + page_frag_alloc_abort_ref(nc, buf, buflen); + goto out; + } + + page_frag_alloc_abort(nc, buf, buflen); goto out; } if (err == XDP_REDIRECT) xdp_do_flush(); - if (err != XDP_PASS) + if (err != XDP_PASS) { + page_frag_alloc_abort(nc, buf, buflen); goto out; + } pad = xdp.data - xdp.data_hard_start; len = xdp.data_end - xdp.data; @@ -1735,7 +1734,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, rcu_read_unlock(); local_bh_enable(); - return __tun_build_skb(tfile, alloc_frag, buf, buflen, len, pad); + return __tun_build_skb(tfile, buf, buflen, len, pad); out: bpf_net_ctx_clear(bpf_net_ctx); diff --git a/include/linux/sched.h b/include/linux/sched.h index d380bffee2ef..73c425bac58d 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1382,7 +1382,7 @@ 
struct task_struct { /* Cache last used pipe for splice(): */ struct pipe_inode_info *splice_pipe; - struct page_frag task_frag; + struct page_frag_cache task_frag; #ifdef CONFIG_TASK_DELAY_ACCT struct task_delay_info *delays; diff --git a/include/net/sock.h b/include/net/sock.h index cf037c870e3b..9b24f53c29e7 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -303,7 +303,7 @@ struct sk_filter; * @sk_stamp: time stamp of last packet received * @sk_stamp_seq: lock for accessing sk_stamp on 32 bit architectures only * @sk_tsflags: SO_TIMESTAMPING flags - * @sk_use_task_frag: allow sk_page_frag() to use current->task_frag. + * @sk_use_task_frag: allow sk_page_frag_cache() to use current->task_frag. * Sockets that can be used under memory reclaim should * set this to false. * @sk_bind_phc: SO_TIMESTAMPING bind PHC index of PTP virtual clock @@ -462,7 +462,7 @@ struct sock { struct sk_buff_head sk_write_queue; u32 sk_dst_pending_confirm; u32 sk_pacing_status; /* see enum sk_pacing */ - struct page_frag sk_frag; + struct page_frag_cache sk_frag; struct timer_list sk_timer; unsigned long sk_pacing_rate; /* bytes per second */ @@ -2491,22 +2491,22 @@ static inline void sk_stream_moderate_sndbuf(struct sock *sk) } /** - * sk_page_frag - return an appropriate page_frag + * sk_page_frag_cache - return an appropriate page_frag_cache * @sk: socket * - * Use the per task page_frag instead of the per socket one for + * Use the per task page_frag_cache instead of the per socket one for * optimization when we know that we're in process context and own * everything that's associated with %current. * * Both direct reclaim and page faults can nest inside other - * socket operations and end up recursing into sk_page_frag() - * while it's already in use: explicitly avoid task page_frag + * socket operations and end up recursing into sk_page_frag_cache() + * while it's already in use: explicitly avoid task page_frag_cache * when users disable sk_use_task_frag. 
* * Return: a per task page_frag if context allows that, * otherwise a per socket one. */ -static inline struct page_frag *sk_page_frag(struct sock *sk) +static inline struct page_frag_cache *sk_page_frag_cache(struct sock *sk) { if (sk->sk_use_task_frag) return ¤t->task_frag; @@ -2514,7 +2514,12 @@ static inline struct page_frag *sk_page_frag(struct sock *sk) return &sk->sk_frag; } -bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag); +bool sk_page_frag_refill_prepare(struct sock *sk, struct page_frag_cache *nc, + struct page_frag *pfrag); + +void *sk_page_frag_alloc_refill_prepare(struct sock *sk, + struct page_frag_cache *nc, + struct page_frag *pfrag); /* * Default write policy as shown to user space via poll/select/SIGIO diff --git a/kernel/exit.c b/kernel/exit.c index 1dcddfe537ee..010dc4a05dc5 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -973,8 +973,7 @@ void __noreturn do_exit(long code) if (tsk->splice_pipe) free_pipe_info(tsk->splice_pipe); - if (tsk->task_frag.page) - put_page(tsk->task_frag.page); + page_frag_cache_drain(&tsk->task_frag); exit_task_stack_account(tsk); diff --git a/kernel/fork.c b/kernel/fork.c index 1450b461d196..a0f7b2d9ce05 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -80,6 +80,7 @@ #include #include #include +#include #include #include #include @@ -1165,10 +1166,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) tsk->btrace_seq = 0; #endif tsk->splice_pipe = NULL; - tsk->task_frag.page = NULL; tsk->wake_q.next = NULL; tsk->worker_private = NULL; + page_frag_cache_init(&tsk->task_frag); kcov_task_init(tsk); kmsan_task_create(tsk); kmap_local_fork(tsk); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 6841e61a6bd0..684cd68ca4ab 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -3062,25 +3062,6 @@ static void sock_spd_release(struct splice_pipe_desc *spd, unsigned int i) put_page(spd->pages[i]); } -static struct page *linear_to_page(struct page *page, unsigned 
int *len, - unsigned int *offset, - struct sock *sk) -{ - struct page_frag *pfrag = sk_page_frag(sk); - - if (!sk_page_frag_refill(sk, pfrag)) - return NULL; - - *len = min_t(unsigned int, *len, pfrag->size - pfrag->offset); - - memcpy(page_address(pfrag->page) + pfrag->offset, - page_address(page) + *offset, *len); - *offset = pfrag->offset; - pfrag->offset += *len; - - return pfrag->page; -} - static bool spd_can_coalesce(const struct splice_pipe_desc *spd, struct page *page, unsigned int offset) @@ -3091,6 +3072,37 @@ static bool spd_can_coalesce(const struct splice_pipe_desc *spd, spd->partial[spd->nr_pages - 1].len == offset); } +static bool spd_fill_linear_page(struct splice_pipe_desc *spd, + struct page *page, unsigned int offset, + unsigned int *len, struct sock *sk) +{ + struct page_frag_cache *nc = sk_page_frag_cache(sk); + struct page_frag page_frag, *pfrag; + void *va; + + pfrag = &page_frag; + va = sk_page_frag_alloc_refill_prepare(sk, nc, pfrag); + if (!va) + return true; + + *len = min_t(unsigned int, *len, pfrag->size); + memcpy(va, page_address(page) + offset, *len); + + if (spd_can_coalesce(spd, pfrag->page, pfrag->offset)) { + spd->partial[spd->nr_pages - 1].len += *len; + page_frag_refill_commit_noref(nc, pfrag, *len); + return false; + } + + page_frag_refill_commit(nc, pfrag, *len); + spd->pages[spd->nr_pages] = pfrag->page; + spd->partial[spd->nr_pages].len = *len; + spd->partial[spd->nr_pages].offset = pfrag->offset; + spd->nr_pages++; + + return false; +} + /* * Fill page/offset/length into spd, if it can hold more pages. 
*/ @@ -3103,11 +3115,9 @@ static bool spd_fill_page(struct splice_pipe_desc *spd, if (unlikely(spd->nr_pages == MAX_SKB_FRAGS)) return true; - if (linear) { - page = linear_to_page(page, len, &offset, sk); - if (!page) - return true; - } + if (linear) + return spd_fill_linear_page(spd, page, offset, len, sk); + if (spd_can_coalesce(spd, page, offset)) { spd->partial[spd->nr_pages - 1].len += *len; return false; diff --git a/net/core/skmsg.c b/net/core/skmsg.c index e90fbab703b2..db53f619e69a 100644 --- a/net/core/skmsg.c +++ b/net/core/skmsg.c @@ -27,23 +27,25 @@ static bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce) int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len, int elem_first_coalesce) { - struct page_frag *pfrag = sk_page_frag(sk); + struct page_frag_cache *nc = sk_page_frag_cache(sk); u32 osize = msg->sg.size; int ret = 0; len -= msg->sg.size; while (len > 0) { + struct page_frag page_frag, *pfrag; struct scatterlist *sge; u32 orig_offset; int use, i; - if (!sk_page_frag_refill(sk, pfrag)) { + pfrag = &page_frag; + if (!sk_page_frag_refill_prepare(sk, nc, pfrag)) { ret = -ENOMEM; goto msg_trim; } orig_offset = pfrag->offset; - use = min_t(int, len, pfrag->size - orig_offset); + use = min_t(int, len, pfrag->size); if (!sk_wmem_schedule(sk, use)) { ret = -ENOMEM; goto msg_trim; @@ -57,6 +59,7 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len, sg_page(sge) == pfrag->page && sge->offset + sge->length == orig_offset) { sge->length += use; + page_frag_refill_commit_noref(nc, pfrag, use); } else { if (sk_msg_full(msg)) { ret = -ENOSPC; @@ -66,13 +69,12 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len, sge = &msg->sg.data[msg->sg.end]; sg_unmark_end(sge); sg_set_page(sge, pfrag->page, use, orig_offset); - get_page(pfrag->page); + page_frag_refill_commit(nc, pfrag, use); sk_msg_iter_next(msg, end); } sk_mem_charge(sk, use); msg->sg.size += use; - pfrag->offset += use; len -= use; } diff --git 
a/net/core/sock.c b/net/core/sock.c index 74729d20cd00..c186ef593426 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -2276,10 +2276,7 @@ static void __sk_destruct(struct rcu_head *head) pr_debug("%s: optmem leakage (%d bytes) detected\n", __func__, atomic_read(&sk->sk_omem_alloc)); - if (sk->sk_frag.page) { - put_page(sk->sk_frag.page); - sk->sk_frag.page = NULL; - } + page_frag_cache_drain(&sk->sk_frag); /* We do not need to acquire sk->sk_peer_lock, we are the last user. */ put_cred(sk->sk_peer_cred); @@ -3035,16 +3032,33 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp) } EXPORT_SYMBOL(skb_page_frag_refill); -bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag) +bool sk_page_frag_refill_prepare(struct sock *sk, struct page_frag_cache *nc, + struct page_frag *pfrag) { - if (likely(skb_page_frag_refill(32U, pfrag, sk->sk_allocation))) + if (likely(page_frag_refill_prepare(nc, 32U, pfrag, sk->sk_allocation))) return true; sk_enter_memory_pressure(sk); sk_stream_moderate_sndbuf(sk); return false; } -EXPORT_SYMBOL(sk_page_frag_refill); +EXPORT_SYMBOL(sk_page_frag_refill_prepare); + +void *sk_page_frag_alloc_refill_prepare(struct sock *sk, + struct page_frag_cache *nc, + struct page_frag *pfrag) +{ + void *va; + + va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, sk->sk_allocation); + if (likely(va)) + return va; + + sk_enter_memory_pressure(sk); + sk_stream_moderate_sndbuf(sk); + return NULL; +} +EXPORT_SYMBOL(sk_page_frag_alloc_refill_prepare); void __lock_sock(struct sock *sk) __releases(&sk->sk_lock.slock) @@ -3566,8 +3580,8 @@ void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid) sk->sk_error_report = sock_def_error_report; sk->sk_destruct = sock_def_destruct; - sk->sk_frag.page = NULL; - sk->sk_frag.offset = 0; + page_frag_cache_init(&sk->sk_frag); + sk->sk_peek_off = -1; sk->sk_peer_pid = NULL; diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index a59204a8d850..c94a428a5e37 
100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -953,7 +953,7 @@ static int __ip_append_data(struct sock *sk, struct flowi4 *fl4, struct sk_buff_head *queue, struct inet_cork *cork, - struct page_frag *pfrag, + struct page_frag_cache *nc, int getfrag(void *from, char *to, int offset, int len, int odd, struct sk_buff *skb), void *from, int length, int transhdrlen, @@ -1237,13 +1237,19 @@ static int __ip_append_data(struct sock *sk, copy = err; wmem_alloc_delta += copy; } else if (!zc) { + struct page_frag page_frag, *pfrag; int i = skb_shinfo(skb)->nr_frags; + void *va; err = -ENOMEM; - if (!sk_page_frag_refill(sk, pfrag)) + pfrag = &page_frag; + va = sk_page_frag_alloc_refill_prepare(sk, nc, pfrag); + if (!va) goto error; skb_zcopy_downgrade_managed(skb); + copy = min_t(int, copy, pfrag->size); + if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) { err = -EMSGSIZE; @@ -1251,19 +1257,19 @@ static int __ip_append_data(struct sock *sk, goto error; __skb_fill_page_desc(skb, i, pfrag->page, - pfrag->offset, 0); + pfrag->offset, copy); skb_shinfo(skb)->nr_frags = ++i; - get_page(pfrag->page); + page_frag_refill_commit(nc, pfrag, copy); + } else { + skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], + copy); + page_frag_refill_commit_noref(nc, pfrag, copy); } - copy = min_t(int, copy, pfrag->size - pfrag->offset); + if (INDIRECT_CALL_1(getfrag, ip_generic_getfrag, - from, - page_address(pfrag->page) + pfrag->offset, - offset, copy, skb->len, skb) < 0) + from, va, offset, copy, skb->len, skb) < 0) goto error_efault; - pfrag->offset += copy; - skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy); skb_len_add(skb, copy); wmem_alloc_delta += copy; } else { @@ -1378,7 +1384,7 @@ int ip_append_data(struct sock *sk, struct flowi4 *fl4, } return __ip_append_data(sk, fl4, &sk->sk_write_queue, &inet->cork.base, - sk_page_frag(sk), getfrag, + sk_page_frag_cache(sk), getfrag, from, length, transhdrlen, flags); } diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c 
index 0fbf1e222cda..24068f949c4f 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1193,9 +1193,13 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size) if (zc == 0) { bool merge = true; int i = skb_shinfo(skb)->nr_frags; - struct page_frag *pfrag = sk_page_frag(sk); + struct page_frag_cache *nc = sk_page_frag_cache(sk); + struct page_frag page_frag, *pfrag; + void *va; - if (!sk_page_frag_refill(sk, pfrag)) + pfrag = &page_frag; + va = sk_page_frag_alloc_refill_prepare(sk, nc, pfrag); + if (!va) goto wait_for_space; if (!skb_can_coalesce(skb, i, pfrag->page, @@ -1207,7 +1211,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size) merge = false; } - copy = min_t(int, copy, pfrag->size - pfrag->offset); + copy = min_t(int, copy, pfrag->size); if (unlikely(skb_zcopy_pure(skb) || skb_zcopy_managed(skb))) { if (tcp_downgrade_zcopy_pure(sk, skb)) @@ -1220,20 +1224,19 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size) goto wait_for_space; err = skb_copy_to_frag_nocache(sk, &msg->msg_iter, skb, - page_address(pfrag->page) + - pfrag->offset, copy); + va, copy); if (err) goto do_error; /* Update the skb. 
*/ if (merge) { skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy); + page_frag_refill_commit_noref(nc, pfrag, copy); } else { skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy); - page_ref_inc(pfrag->page); + page_frag_refill_commit(nc, pfrag, copy); } - pfrag->offset += copy; } else if (zc == MSG_ZEROCOPY) { /* First append to a fragless skb builds initial * pure zerocopy skb @@ -3393,11 +3396,7 @@ int tcp_disconnect(struct sock *sk, int flags) WARN_ON(inet->inet_num && !icsk->icsk_bind_hash); - if (sk->sk_frag.page) { - put_page(sk->sk_frag.page); - sk->sk_frag.page = NULL; - sk->sk_frag.offset = 0; - } + page_frag_cache_drain(&sk->sk_frag); sk_error_report(sk); return 0; } diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 5485a70b5fe5..d84b0d477a65 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -3968,9 +3968,11 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn) struct inet_connection_sock *icsk = inet_csk(sk); struct tcp_sock *tp = tcp_sk(sk); struct tcp_fastopen_request *fo = tp->fastopen_req; - struct page_frag *pfrag = sk_page_frag(sk); + struct page_frag_cache *nc = sk_page_frag_cache(sk); + struct page_frag page_frag, *pfrag; struct sk_buff *syn_data; int space, err = 0; + void *va; tp->rx_opt.mss_clamp = tp->advmss; /* If MSS is not cached */ if (!tcp_fastopen_cookie_check(sk, &tp->rx_opt.mss_clamp, &fo->cookie)) @@ -3989,21 +3991,25 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn) space = min_t(size_t, space, fo->size); - if (space && - !skb_page_frag_refill(min_t(size_t, space, PAGE_SIZE), - pfrag, sk->sk_allocation)) - goto fallback; + if (space) { + pfrag = &page_frag; + va = page_frag_alloc_refill_prepare(nc, + min_t(size_t, space, PAGE_SIZE), + pfrag, sk->sk_allocation); + if (!va) + goto fallback; + } + syn_data = tcp_stream_alloc_skb(sk, sk->sk_allocation, false); if (!syn_data) goto fallback; memcpy(syn_data->cb, syn->cb, sizeof(syn->cb)); if (space) { - 
space = min_t(size_t, space, pfrag->size - pfrag->offset); + space = min_t(size_t, space, pfrag->size); space = tcp_wmem_schedule(sk, space); } if (space) { - space = copy_page_from_iter(pfrag->page, pfrag->offset, - space, &fo->data->msg_iter); + space = _copy_from_iter(va, space, &fo->data->msg_iter); if (unlikely(!space)) { tcp_skb_tsorted_anchor_cleanup(syn_data); kfree_skb(syn_data); @@ -4011,8 +4017,7 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn) } skb_fill_page_desc(syn_data, 0, pfrag->page, pfrag->offset, space); - page_ref_inc(pfrag->page); - pfrag->offset += space; + page_frag_refill_commit(nc, pfrag, space); skb_len_add(syn_data, space); skb_zcopy_set(syn_data, fo->uarg, NULL); } diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index 3d672dea9f56..6e11dd8089e4 100644 --- a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -1416,7 +1416,7 @@ static int __ip6_append_data(struct sock *sk, struct sk_buff_head *queue, struct inet_cork_full *cork_full, struct inet6_cork *v6_cork, - struct page_frag *pfrag, + struct page_frag_cache *nc, int getfrag(void *from, char *to, int offset, int len, int odd, struct sk_buff *skb), void *from, size_t length, int transhdrlen, @@ -1764,13 +1764,19 @@ static int __ip6_append_data(struct sock *sk, copy = err; wmem_alloc_delta += copy; } else if (!zc) { + struct page_frag page_frag, *pfrag; int i = skb_shinfo(skb)->nr_frags; + void *va; err = -ENOMEM; - if (!sk_page_frag_refill(sk, pfrag)) + pfrag = &page_frag; + va = sk_page_frag_alloc_refill_prepare(sk, nc, pfrag); + if (!va) goto error; skb_zcopy_downgrade_managed(skb); + copy = min_t(int, copy, pfrag->size); + if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) { err = -EMSGSIZE; @@ -1778,19 +1784,19 @@ static int __ip6_append_data(struct sock *sk, goto error; __skb_fill_page_desc(skb, i, pfrag->page, - pfrag->offset, 0); + pfrag->offset, copy); skb_shinfo(skb)->nr_frags = ++i; - get_page(pfrag->page); + 
page_frag_refill_commit(nc, pfrag, copy); + } else { + skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], + copy); + page_frag_refill_commit_noref(nc, pfrag, copy); } - copy = min_t(int, copy, pfrag->size - pfrag->offset); + if (INDIRECT_CALL_1(getfrag, ip_generic_getfrag, - from, - page_address(pfrag->page) + pfrag->offset, - offset, copy, skb->len, skb) < 0) + from, va, offset, copy, skb->len, skb) < 0) goto error_efault; - pfrag->offset += copy; - skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy); skb->len += copy; skb->data_len += copy; skb->truesize += copy; @@ -1853,7 +1859,7 @@ int ip6_append_data(struct sock *sk, } return __ip6_append_data(sk, &sk->sk_write_queue, &inet->cork, - &np->cork, sk_page_frag(sk), getfrag, + &np->cork, sk_page_frag_cache(sk), getfrag, from, length, transhdrlen, flags, ipc6); } EXPORT_SYMBOL_GPL(ip6_append_data); diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c index 94719d4af5fa..8f241a7173ed 100644 --- a/net/kcm/kcmsock.c +++ b/net/kcm/kcmsock.c @@ -804,9 +804,13 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len) while (msg_data_left(msg)) { bool merge = true; int i = skb_shinfo(skb)->nr_frags; - struct page_frag *pfrag = sk_page_frag(sk); + struct page_frag_cache *nc = sk_page_frag_cache(sk); + struct page_frag page_frag, *pfrag; + void *va; - if (!sk_page_frag_refill(sk, pfrag)) + pfrag = &page_frag; + va = sk_page_frag_alloc_refill_prepare(sk, nc, pfrag); + if (!va) goto wait_for_memory; if (!skb_can_coalesce(skb, i, pfrag->page, @@ -851,14 +855,12 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len) if (head != skb) head->truesize += copy; } else { - copy = min_t(int, msg_data_left(msg), - pfrag->size - pfrag->offset); + copy = min_t(int, msg_data_left(msg), pfrag->size); if (!sk_wmem_schedule(sk, copy)) goto wait_for_memory; err = skb_copy_to_frag_nocache(sk, &msg->msg_iter, skb, - page_address(pfrag->page) + - pfrag->offset, copy); + va, copy); if (err) goto 
out_error; @@ -866,13 +868,13 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len) if (merge) { skb_frag_size_add( &skb_shinfo(skb)->frags[i - 1], copy); + page_frag_refill_commit_noref(nc, pfrag, copy); } else { skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy); - get_page(pfrag->page); + page_frag_refill_commit(nc, pfrag, copy); } - pfrag->offset += copy; } copied += copy; diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index 08a72242428c..815d4e48a44e 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -978,7 +978,6 @@ static bool mptcp_skb_can_collapse_to(u64 write_seq, } /* we can append data to the given data frag if: - * - there is space available in the backing page_frag * - the data frag tail matches the current page_frag free offset * - the data frag end sequence number matches the current write seq */ @@ -987,7 +986,6 @@ static bool mptcp_frag_can_collapse_to(const struct mptcp_sock *msk, const struct mptcp_data_frag *df) { return df && pfrag->page == df->page && - pfrag->size - pfrag->offset > 0 && pfrag->offset == (df->offset + df->data_len) && df->data_seq + df->data_len == msk->write_seq; } @@ -1103,14 +1101,20 @@ static void mptcp_enter_memory_pressure(struct sock *sk) /* ensure we get enough memory for the frag hdr, beyond some minimal amount of * data */ -static bool mptcp_page_frag_refill(struct sock *sk, struct page_frag *pfrag) +static void *mptcp_page_frag_alloc_refill_prepare(struct sock *sk, + struct page_frag_cache *nc, + struct page_frag *pfrag) { - if (likely(skb_page_frag_refill(32U + sizeof(struct mptcp_data_frag), - pfrag, sk->sk_allocation))) - return true; + unsigned int fragsz = 32U + sizeof(struct mptcp_data_frag); + void *va; + + va = page_frag_alloc_refill_prepare(nc, fragsz, pfrag, + sk->sk_allocation); + if (likely(va)) + return va; mptcp_enter_memory_pressure(sk); - return false; + return NULL; } static struct mptcp_data_frag * @@ -1813,7 +1817,7 @@ static u32 
mptcp_send_limit(const struct sock *sk) static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) { struct mptcp_sock *msk = mptcp_sk(sk); - struct page_frag *pfrag; + struct page_frag_cache *nc; size_t copied = 0; int ret = 0; long timeo; @@ -1847,14 +1851,16 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) if (unlikely(sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN))) goto do_error; - pfrag = sk_page_frag(sk); + nc = sk_page_frag_cache(sk); while (msg_data_left(msg)) { + struct page_frag page_frag, *pfrag; int total_ts, frag_truesize = 0; struct mptcp_data_frag *dfrag; bool dfrag_collapsed; - size_t psize, offset; u32 copy_limit; + size_t psize; + void *va; /* ensure fitting the notsent_lowat() constraint */ copy_limit = mptcp_send_limit(sk); @@ -1865,21 +1871,26 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) * page allocator */ dfrag = mptcp_pending_tail(sk); - dfrag_collapsed = mptcp_frag_can_collapse_to(msk, pfrag, dfrag); + pfrag = &page_frag; + va = page_frag_alloc_refill_probe(nc, 1, pfrag); + dfrag_collapsed = va && mptcp_frag_can_collapse_to(msk, pfrag, + dfrag); if (!dfrag_collapsed) { - if (!mptcp_page_frag_refill(sk, pfrag)) + va = mptcp_page_frag_alloc_refill_prepare(sk, nc, + pfrag); + if (!va) goto wait_for_memory; dfrag = mptcp_carve_data_frag(msk, pfrag, pfrag->offset); frag_truesize = dfrag->overhead; + va += dfrag->overhead; } /* we do not bound vs wspace, to allow a single packet. 
 	 * memory accounting will prevent execessive memory usage
 	 * anyway
 	 */
-	offset = dfrag->offset + dfrag->data_len;
-	psize = pfrag->size - offset;
+	psize = pfrag->size - frag_truesize;
 	psize = min_t(size_t, psize, msg_data_left(msg));
 	psize = min_t(size_t, psize, copy_limit);
 	total_ts = psize + frag_truesize;
@@ -1887,8 +1898,7 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 	if (!sk_wmem_schedule(sk, total_ts))
 		goto wait_for_memory;
 
-	ret = do_copy_data_nocache(sk, psize, &msg->msg_iter,
-				   page_address(dfrag->page) + offset);
+	ret = do_copy_data_nocache(sk, psize, &msg->msg_iter, va);
 	if (ret)
 		goto do_error;
 
@@ -1897,7 +1907,6 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 	copied += psize;
 	dfrag->data_len += psize;
 	frag_truesize += psize;
-	pfrag->offset += frag_truesize;
 	WRITE_ONCE(msk->write_seq, msk->write_seq + psize);
 
 	/* charge data on mptcp pending queue to the msk socket
@@ -1905,10 +1914,12 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 	 */
 	sk_wmem_queued_add(sk, frag_truesize);
 	if (!dfrag_collapsed) {
-		get_page(dfrag->page);
+		page_frag_refill_commit(nc, pfrag, frag_truesize);
 		list_add_tail(&dfrag->list, &msk->rtx_queue);
 		if (!msk->first_pending)
 			WRITE_ONCE(msk->first_pending, dfrag);
+	} else {
+		page_frag_refill_commit_noref(nc, pfrag, frag_truesize);
 	}
 
 	pr_debug("msk=%p dfrag at seq=%llu len=%u sent=%u new=%d\n", msk,
 		 dfrag->data_seq, dfrag->data_len, dfrag->already_sent,
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index dc063c2c7950..0f020293fe10 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -253,8 +253,8 @@ static void tls_device_resync_tx(struct sock *sk, struct tls_context *tls_ctx,
 }
 
 static void tls_append_frag(struct tls_record_info *record,
-			    struct page_frag *pfrag,
-			    int size)
+			    struct page_frag_cache *nc,
+			    struct page_frag *pfrag, int size)
 {
 	skb_frag_t *frag;
 
@@ -262,15 +262,34 @@ static void tls_append_frag(struct tls_record_info *record,
 	if (skb_frag_page(frag) == pfrag->page &&
 	    skb_frag_off(frag) + skb_frag_size(frag) == pfrag->offset) {
 		skb_frag_size_add(frag, size);
+		page_frag_refill_commit_noref(nc, pfrag, size);
 	} else {
 		++frag;
 		skb_frag_fill_page_desc(frag, pfrag->page, pfrag->offset,
 					size);
 		++record->num_frags;
+		page_frag_refill_commit(nc, pfrag, size);
+	}
+
+	record->len += size;
+}
+
+static void tls_append_dummy_frag(struct tls_record_info *record,
+				  struct page_frag *pfrag, int size)
+{
+	skb_frag_t *frag;
+
+	frag = &record->frags[record->num_frags - 1];
+	if (skb_frag_page(frag) == pfrag->page &&
+	    skb_frag_off(frag) + skb_frag_size(frag) == pfrag->offset) {
+		skb_frag_size_add(frag, size);
+	} else {
+		++frag;
+		skb_frag_fill_page_desc(frag, pfrag->page, pfrag->offset,
+					size);
+		++record->num_frags;
 		get_page(pfrag->page);
 	}
 
-	pfrag->offset += size;
 	record->len += size;
 }
 
@@ -311,11 +330,11 @@ static int tls_push_record(struct sock *sk,
 
 static void tls_device_record_close(struct sock *sk, struct tls_context *ctx,
 				    struct tls_record_info *record,
-				    struct page_frag *pfrag,
+				    struct page_frag_cache *nc,
 				    unsigned char record_type)
 {
 	struct tls_prot_info *prot = &ctx->prot_info;
-	struct page_frag dummy_tag_frag;
+	struct page_frag dummy_tag_frag, *pfrag;
 
 	/* append tag
 	 * device will fill in the tag, we just need to append a placeholder
 	 * (increases frag count)
 	 * if we can't allocate memory now use the dummy page
 	 */
-	if (unlikely(pfrag->size - pfrag->offset < prot->tag_size) &&
-	    !skb_page_frag_refill(prot->tag_size, pfrag, sk->sk_allocation)) {
+	pfrag = &dummy_tag_frag;
+	if (unlikely(!page_frag_refill_probe(nc, prot->tag_size, pfrag) &&
+		     !page_frag_refill_prepare(nc, prot->tag_size, pfrag,
+					       sk->sk_allocation))) {
 		dummy_tag_frag.page = dummy_page;
 		dummy_tag_frag.offset = 0;
-		pfrag = &dummy_tag_frag;
+		tls_append_dummy_frag(record, pfrag, prot->tag_size);
+	} else {
+		tls_append_frag(record, nc, pfrag, prot->tag_size);
 	}
-	tls_append_frag(record, pfrag, prot->tag_size);
 
 	/* fill prepend */
 	tls_fill_prepend(ctx, skb_frag_address(&record->frags[0]),
@@ -338,6 +360,7 @@ static void tls_device_record_close(struct sock *sk,
 }
 
 static int tls_create_new_record(struct tls_offload_context_tx *offload_ctx,
+				 struct page_frag_cache *nc,
 				 struct page_frag *pfrag,
 				 size_t prepend_size)
 {
@@ -352,8 +375,7 @@ static int tls_create_new_record(struct tls_offload_context_tx *offload_ctx,
 	skb_frag_fill_page_desc(frag, pfrag->page, pfrag->offset,
 				prepend_size);
-	get_page(pfrag->page);
-	pfrag->offset += prepend_size;
+	page_frag_refill_commit(nc, pfrag, prepend_size);
 
 	record->num_frags = 1;
 	record->len = prepend_size;
@@ -361,33 +383,34 @@ static int tls_create_new_record(struct tls_offload_context_tx *offload_ctx,
 	return 0;
 }
 
-static int tls_do_allocation(struct sock *sk,
-			     struct tls_offload_context_tx *offload_ctx,
-			     struct page_frag *pfrag,
-			     size_t prepend_size)
+static void *tls_do_allocation(struct sock *sk,
+			       struct tls_offload_context_tx *offload_ctx,
+			       struct page_frag_cache *nc,
+			       size_t prepend_size, struct page_frag *pfrag)
 {
 	int ret;
 
 	if (!offload_ctx->open_record) {
-		if (unlikely(!skb_page_frag_refill(prepend_size, pfrag,
-						   sk->sk_allocation))) {
+		void *va;
+
+		if (unlikely(!page_frag_refill_prepare(nc, prepend_size, pfrag,
+						       sk->sk_allocation))) {
 			READ_ONCE(sk->sk_prot)->enter_memory_pressure(sk);
 			sk_stream_moderate_sndbuf(sk);
-			return -ENOMEM;
+			return NULL;
 		}
 
-		ret = tls_create_new_record(offload_ctx, pfrag, prepend_size);
+		ret = tls_create_new_record(offload_ctx, nc, pfrag,
+					    prepend_size);
 		if (ret)
-			return ret;
+			return NULL;
 
-		if (pfrag->size > pfrag->offset)
-			return 0;
+		va = page_frag_alloc_refill_probe(nc, 1, pfrag);
+		if (va)
+			return va;
 	}
 
-	if (!sk_page_frag_refill(sk, pfrag))
-		return -ENOMEM;
-
-	return 0;
+	return sk_page_frag_alloc_refill_prepare(sk, nc, pfrag);
 }
 
 static int tls_device_copy_data(void *addr, size_t bytes, struct iov_iter *i)
@@ -424,8 +447,8 @@ static int tls_push_data(struct sock *sk,
 	struct tls_prot_info *prot = &tls_ctx->prot_info;
 	struct tls_offload_context_tx *ctx = tls_offload_ctx_tx(tls_ctx);
 	struct tls_record_info *record;
+	struct page_frag_cache *nc;
 	int tls_push_record_flags;
-	struct page_frag *pfrag;
 	size_t orig_size = size;
 	u32 max_open_record_len;
 	bool more = false;
@@ -454,7 +477,7 @@ static int tls_push_data(struct sock *sk,
 		return rc;
 	}
 
-	pfrag = sk_page_frag(sk);
+	nc = sk_page_frag_cache(sk);
 
 	/* TLS_HEADER_SIZE is not counted as part of the TLS record, and
 	 * we need to leave room for an authentication tag.
@@ -462,8 +485,12 @@ static int tls_push_data(struct sock *sk,
 	max_open_record_len = TLS_MAX_PAYLOAD_SIZE + prot->prepend_size;
 
 	do {
-		rc = tls_do_allocation(sk, ctx, pfrag, prot->prepend_size);
-		if (unlikely(rc)) {
+		struct page_frag page_frag, *pfrag;
+		void *va;
+
+		pfrag = &page_frag;
+		va = tls_do_allocation(sk, ctx, nc, prot->prepend_size, pfrag);
+		if (unlikely(!va)) {
 			rc = sk_stream_wait_memory(sk, &timeo);
 			if (!rc)
 				continue;
@@ -512,16 +539,15 @@ static int tls_push_data(struct sock *sk,
 			zc_pfrag.offset = off;
 			zc_pfrag.size = copy;
 
-			tls_append_frag(record, &zc_pfrag, copy);
+			tls_append_dummy_frag(record, &zc_pfrag, copy);
 		} else if (copy) {
-			copy = min_t(size_t, copy, pfrag->size - pfrag->offset);
+			copy = min_t(size_t, copy, pfrag->size);
 
-			rc = tls_device_copy_data(page_address(pfrag->page) +
-						  pfrag->offset, copy,
-						  iter);
+			rc = tls_device_copy_data(va, copy, iter);
 			if (rc)
 				goto handle_error;
-			tls_append_frag(record, pfrag, copy);
+
+			tls_append_frag(record, nc, pfrag, copy);
 		}
 
 		size -= copy;
@@ -539,7 +565,7 @@ static int tls_push_data(struct sock *sk,
 		if (done || record->len >= max_open_record_len ||
 		    (record->num_frags >= MAX_SKB_FRAGS - 1)) {
 			tls_device_record_close(sk, tls_ctx, record,
-						pfrag, record_type);
+						nc, record_type);
 
 			rc = tls_push_record(sk, tls_ctx,

From patchwork Fri Dec 6 12:25:33 2024
Content-Type:
text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13897127
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin , Alexander Duyck , Andrew Morton , Linux-MM
Subject: [PATCH net-next v2 10/10] mm: page_frag: add an entry in MAINTAINERS for page_frag
Date: Fri, 6 Dec 2024 20:25:33 +0800
Message-ID: <20241206122533.3589947-11-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20241206122533.3589947-1-linyunsheng@huawei.com>
References: <20241206122533.3589947-1-linyunsheng@huawei.com>

After this patchset, page_frag is a small subsystem/library on its own, so add
an entry in MAINTAINERS to indicate the new subsystem/library's maintainer,
maillist, status and
file lists of page_frag.

Alexander is the original author of page_frag, so add him to MAINTAINERS too.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin
---
 MAINTAINERS | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 0456a33ef657..7d3725bc40aa 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17585,6 +17585,18 @@
 F:	mm/page-writeback.c
 F:	mm/readahead.c
 F:	mm/truncate.c
+PAGE FRAG
+M:	Alexander Duyck
+M:	Yunsheng Lin
+L:	linux-mm@kvack.org
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	Documentation/mm/page_frags.rst
+F:	include/linux/page_frag_cache.h
+F:	mm/page_frag_cache.c
+F:	tools/testing/selftests/mm/page_frag/
+F:	tools/testing/selftests/mm/test_page_frag.sh
+
 PAGE POOL
 M:	Jesper Dangaard Brouer
 M:	Ilias Apalodimas