From patchwork Fri Dec 6 12:25:29 2024
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next v2 06/10] mm: page_frag: introduce alloc_refill
 prepare & commit API
Date: Fri, 6 Dec 2024 20:25:29 +0800
Message-ID: <20241206122533.3589947-7-linyunsheng@huawei.com>
In-Reply-To: <20241206122533.3589947-1-linyunsheng@huawei.com>
References: <20241206122533.3589947-1-linyunsheng@huawei.com>
Currently the alloc related API returns the virtual address of the
allocated fragment, while the refill related API returns the page info of
the allocated fragment through 'struct page_frag'.
There are use cases that need both the virtual address and the page info
of the allocated fragment. Introduce the alloc_refill API for those use
cases.

CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 Documentation/mm/page_frags.rst | 45 +++++++++++++++++++++
 include/linux/page_frag_cache.h | 71 +++++++++++++++++++++++++++++++++
 2 files changed, 116 insertions(+)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 4cfdbe7db55a..1c98f7090d92 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -111,6 +111,9 @@ page is aligned according to the 'align/alignment' parameter. Note the size of
 the allocated fragment is not aligned, the caller needs to provide an aligned
 fragsz if there is an alignment requirement for the size of the fragment.
 
+Depending on the use case, callers expecting to deal with the va, the page, or
+both va and page may call the alloc, refill or alloc_refill API accordingly.
+
 There is a use case that needs minimum memory in order for forward progress, but
 more performant if more memory is available. By using the prepare and commit
 related API, the caller calls the prepare API to request the minimum memory it
@@ -123,6 +126,9 @@ uses, or not do so if deciding to not use any memory.
    __page_frag_alloc_align page_frag_alloc_align page_frag_alloc
    page_frag_alloc_abort __page_frag_refill_prepare_align
    page_frag_refill_prepare_align page_frag_refill_prepare
+   __page_frag_alloc_refill_prepare_align
+   page_frag_alloc_refill_prepare_align
+   page_frag_alloc_refill_prepare
 
 .. kernel-doc:: mm/page_frag_cache.c
    :identifiers: page_frag_cache_drain page_frag_free page_frag_alloc_abort_ref
@@ -193,3 +199,42 @@ Refill Preparation & committing API
        skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
        page_frag_refill_commit(nc, pfrag, copy);
    }
+
+
+Alloc_Refill Preparation & committing API
+-----------------------------------------
+
+.. code-block:: c
+
+   struct page_frag page_frag, *pfrag;
+   bool merge = true;
+   void *va;
+
+   pfrag = &page_frag;
+   va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
+   if (!va)
+       goto wait_for_space;
+
+   copy = min_t(unsigned int, copy, pfrag->size);
+   if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
+       if (i >= max_skb_frags)
+           goto new_segment;
+
+       merge = false;
+   }
+
+   copy = mem_schedule(copy);
+   if (!copy)
+       goto wait_for_space;
+
+   err = copy_from_iter_full_nocache(va, copy, iter);
+   if (err)
+       goto do_error;
+
+   if (merge) {
+       skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+       page_frag_refill_commit_noref(nc, pfrag, copy);
+   } else {
+       skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
+       page_frag_refill_commit(nc, pfrag, copy);
+   }

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 1e699334646a..329390afbe78 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -211,6 +211,77 @@ static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
 					    ~0u);
 }
 
+/**
+ * __page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ *	refilling a page_frag with an alignment requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested alignment mask for the fragment
+ *
+ * Prepare allocating a fragment and refilling a page_frag from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment on success, otherwise NULL.
+ */
+static inline void
+*__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+					unsigned int fragsz,
+					struct page_frag *pfrag,
+					gfp_t gfp_mask, unsigned int align_mask)
+{
+	return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
+}
+
+/**
+ * page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ *	refilling a page_frag with an alignment requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested alignment for the fragment
+ *
+ * WARN_ON_ONCE() checking that @align is a power of two before preparing to
+ * allocate a fragment and refill a page_frag from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment on success, otherwise NULL.
+ */
+static inline void
+*page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+				      unsigned int fragsz,
+				      struct page_frag *pfrag, gfp_t gfp_mask,
+				      unsigned int align)
+{
+	WARN_ON_ONCE(!is_power_of_2(align));
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, -align);
+}
+
+/**
+ * page_frag_alloc_refill_prepare() - Prepare allocating a fragment and
+ *	refilling a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare allocating a fragment and refilling a page_frag from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment on success, otherwise NULL.
+ */
+static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
+						   unsigned int fragsz,
+						   struct page_frag *pfrag,
+						   gfp_t gfp_mask)
+{
+	return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+						      gfp_mask, ~0u);
+}
+
 /**
  * page_frag_refill_commit - Commit a prepare refilling.
  * @nc: page_frag cache from which to commit