From patchwork Fri Dec 6 12:25:30 2024
X-Patchwork-Submitter: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 13897125
From: Yunsheng Lin <linyunsheng@huawei.com>
Cc: Yunsheng Lin, Alexander Duyck, Andrew Morton, Linux-MM, Jonathan Corbet
Subject: [PATCH net-next v2 07/10] mm: page_frag: introduce probe related API
Date: Fri, 6 Dec 2024 20:25:30 +0800
Message-ID: <20241206122533.3589947-8-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20241206122533.3589947-1-linyunsheng@huawei.com>
References: <20241206122533.3589947-1-linyunsheng@huawei.com>
MIME-Version: 1.0

Some use cases may need a bigger fragment when the current fragment can't
be coalesced with the previous one, because a new fragment may need extra
space for a header. So introduce a probe related API that tells whether
there is enough remaining memory in the cache to be coalesced with the
previous fragment, in order to save as much memory as possible.
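To illustrate the intended calling pattern, a minimal sketch follows (not
part of this patch): append_data(), write_hdr() and hdr_len are
hypothetical, and the prepare helper's signature is assumed from the
prepare/commit API documented earlier in this series.

static void *append_data(struct page_frag_cache *nc, unsigned int len,
			 unsigned int hdr_len)
{
	struct page_frag pfrag;
	void *va;

	/* Enough remaining memory in the cache? Then the new data can be
	 * coalesced with the previous fragment and needs no new header.
	 */
	if (page_frag_refill_probe(nc, len, &pfrag))
		return page_frag_alloc_refill_prepare(nc, len, &pfrag,
						      GFP_KERNEL);

	/* Otherwise start a new, bigger fragment with room for a header. */
	va = page_frag_alloc_refill_prepare(nc, hdr_len + len, &pfrag,
					    GFP_KERNEL);
	if (va)
		write_hdr(va, hdr_len);	/* hypothetical helper */

	return va;
}

After writing its data, the caller would still report the memory actually
used through the commit API, as described in the documentation changed
below.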
CC: Alexander Duyck
CC: Andrew Morton
CC: Linux-MM
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 Documentation/mm/page_frags.rst | 10 +++++++-
 include/linux/page_frag_cache.h | 41 +++++++++++++++++++++++++++++++++
 mm/page_frag_cache.c            | 35 ++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 1c98f7090d92..3e34831a0029 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -119,7 +119,13 @@ more performant if more memory is available. By using the prepare and commit
 related API, the caller calls prepare API to requests the minimum memory it
 needs and prepare API will return the maximum size of the fragment returned.
 The caller needs to either call the commit API to report how much memory it
 actually
-uses, or not do so if deciding to not use any memory.
+uses, or not do so if deciding not to use any memory. Some use cases may need
+a bigger fragment if the current fragment can't be coalesced with the previous
+one, because a new fragment may need extra space for a header. The probe
+related API can be used to tell whether there is enough remaining memory in
+the cache to be coalesced with the previous fragment, in order to save as much
+memory as possible.
+
 .. kernel-doc:: include/linux/page_frag_cache.h
    :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
@@ -129,9 +135,11 @@ uses, or not do so if deciding to not use any memory.
                  __page_frag_alloc_refill_prepare_align
                  page_frag_alloc_refill_prepare_align
                  page_frag_alloc_refill_prepare
+                 page_frag_alloc_refill_probe page_frag_refill_probe
 
 .. kernel-doc:: mm/page_frag_cache.c
    :identifiers: page_frag_cache_drain page_frag_free page_frag_alloc_abort_ref
+                 __page_frag_alloc_refill_probe_align
 
 Coding examples
 ===============

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 329390afbe78..0f7e8da91a67 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -63,6 +63,10 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 					    struct page_frag *pfrag,
 					    unsigned int used_sz);
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+					   unsigned int fragsz,
+					   struct page_frag *pfrag,
+					   unsigned int align_mask);
 
 static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
 						    struct page_frag *pfrag,
@@ -282,6 +286,43 @@ static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
 					       gfp_mask, ~0u);
 }
 
+/**
+ * page_frag_alloc_refill_probe() - Probe allocating a fragment and refilling
+ * a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe allocating a fragment and refilling a page_frag from the cache.
+ *
+ * Return:
+ * Virtual address of the page fragment on success, otherwise NULL.
+ */
+static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
+						 unsigned int fragsz,
+						 struct page_frag *pfrag)
+{
+	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
+}
+
+/**
+ * page_frag_refill_probe() - Probe refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe refilling a page_frag from the cache.
+ *
+ * Return:
+ * True if the refill would succeed, otherwise false.
+ */
+static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
+					  unsigned int fragsz,
+					  struct page_frag *pfrag)
+{
+	return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
+}
+
 /**
  * page_frag_refill_commit - Commit a prepare refilling.
  * @nc: page_frag cache from which to commit
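[Editorial note on the two probe flavours above: page_frag_refill_probe()
only answers yes/no, while page_frag_alloc_refill_probe() also reports
where the next fragment would start and how much room remains. A minimal
sketch, where the helper name coalescable_room() is hypothetical:]

static unsigned int coalescable_room(struct page_frag_cache *nc)
{
	struct page_frag pfrag;

	/* Probe for at least one byte; a probe changes no cache state
	 * (no offset advance, no page reference), so there is nothing
	 * to commit or abort afterwards.
	 */
	if (!page_frag_alloc_refill_probe(nc, 1, &pfrag))
		return 0;

	/* pfrag.size is the space left between the current (aligned)
	 * offset and the end of the page backing the cache.
	 */
	return pfrag.size;
}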
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 8c3cfdbe8c2b..ae40520d452a 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -116,6 +116,41 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_cache_commit_noref);
 
+/**
+ * __page_frag_alloc_refill_probe_align() - Probe allocating a fragment and
+ * refilling a page_frag with an alignment requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @align_mask: the requested alignment mask for the fragment
+ *
+ * Probe allocating a fragment and refilling a page_frag from the cache with
+ * an alignment requirement.
+ *
+ * Return:
+ * Virtual address of the page fragment on success, otherwise NULL.
+ */
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+					   unsigned int fragsz,
+					   struct page_frag *pfrag,
+					   unsigned int align_mask)
+{
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
+
+	size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
+	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+	if (unlikely(!encoded_page || offset + fragsz > size))
+		return NULL;
+
+	pfrag->page = encoded_page_decode_page(encoded_page);
+	pfrag->size = size - offset;
+	pfrag->offset = offset;
+
+	return encoded_page_decode_virt(encoded_page) + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align);
+
 void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 				struct page_frag *pfrag, gfp_t gfp_mask,
 				unsigned int align_mask)
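[Editorial note: only the mask-based function above is added by this
patch. For completeness, a hedged sketch of how a power-of-two aligned
wrapper might look, mirroring the existing *_align helpers; the wrapper
name is illustrative:]

static inline void *
page_frag_alloc_refill_probe_aligned(struct page_frag_cache *nc,
				     unsigned int fragsz,
				     struct page_frag *pfrag,
				     unsigned int align)
{
	/* For a power-of-two align, -align == ~(align - 1), the mask
	 * form that __page_frag_alloc_refill_probe_align() expects:
	 * the probe rounds nc->offset up with
	 * __ALIGN_KERNEL_MASK(offset, ~align_mask).
	 */
	WARN_ON_ONCE(!is_power_of_2(align));
	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, -align);
}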