From patchwork Sat Oct 12 11:23:19 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13833401
From: Yunsheng Lin
CC: Yunsheng Lin, Alexander Duyck, Jonathan Corbet, Andrew Morton
Subject: [PATCH net-next v21 13/14] mm: page_frag: update documentation for
 page_frag
Date: Sat, 12 Oct 2024 19:23:19 +0800
Message-ID: <20241012112320.2503906-14-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20241012112320.2503906-1-linyunsheng@huawei.com>
References: <20241012112320.2503906-1-linyunsheng@huawei.com>

Update documentation about the design, implementation and API usage of
page_frag.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 Documentation/mm/page_frags.rst | 177 +++++++++++++++++++++-
 include/linux/page_frag_cache.h | 259 +++++++++++++++++++++++++++++++-
 mm/page_frag_cache.c            |  26 +++-
 3 files changed, 451 insertions(+), 11 deletions(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 503ca6cdb804..5eec04a3fe90 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
 ==============
 Page fragments
 ==============
@@ -40,4 +42,177 @@ page via a single call. The advantage to doing this is that it allows for
 cleaning up the multiple references that were added to a page in order to
 avoid calling get_page per allocation.
 
-Alexander Duyck, Nov 29, 2016.
+
+Architecture overview
+=====================
+
+.. code-block:: none
+
+    +----------------------+
+    | page_frag API caller |
+    +----------------------+
+                |
+                |
+                v
+    +------------------------------------------------------------------+
+    |                      request page fragment                       |
+    +------------------------------------------------------------------+
+         |                                 |                       |
+         |                                 |                       |
+         |                          Cache not enough               |
+         |                                 |                       |
+         |                      +-----------------+                |
+         |                      | reuse old cache |---Usable------>|
+         |                      +-----------------+                |
+         |                                 |                       |
+         |                             Not usable                  |
+         |                                 |                       |
+         |                                 v                       |
+    Cache empty                   +-----------------+              |
+         |                        | drain old cache |              |
+         |                        +-----------------+              |
+         |                                 |                       |
+         v_________________________________v                       |
+         |                                                         |
+         |                                                         |
+         |_________________________________                        |
+         |                                 |                Cache is enough
+         |                                 |                       |
+    PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE   |                       |
+         |                                 |                       |
+         |      PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE              |
+         v                                 |                       |
+    +----------------------------------+   |                       |
+    | refill cache with order > 0 page |   |                       |
+    +----------------------------------+   |                       |
+         |              |                  |                       |
+         |              |                  |                       |
+         |        Refill failed            |                       |
+         |              |                  |                       |
+         |              v                  v                       |
+         |     +--------------------------------+                  |
+         |     | refill cache with order 0 page |                  |
+         |     +--------------------------------+                  |
+         |                     |                                   |
+    Refill succeed             |                                   |
+         |              Refill succeed                             |
+         |                     |                                   |
+         v                     v                                   v
+    +------------------------------------------------------------------+
+    |                   allocate fragment from cache                    |
+    +------------------------------------------------------------------+
+
+API interface
+=============
+
+As the design and implementation of the page_frag API implies, the allocation
+side does not allow concurrent calling. It is assumed that the caller ensures
+there is no concurrent allocation from the same page_frag_cache instance,
+either by holding its own lock or by relying on a lockless guarantee such as
+NAPI softirq context.
+
+Depending on the alignment requirement, the page_frag API caller may call
+page_frag_*_align*() to ensure the returned virtual address or page offset is
+aligned according to the 'align/alignment' parameter. Note that the size of
+the allocated fragment is not aligned; the caller needs to provide an aligned
+fragsz if there is an alignment requirement for the size of the fragment.
+
+Depending on the use case, callers expecting to deal with the virtual address,
+the page, or both may call the page_frag_alloc, page_frag_refill, or
+page_frag_alloc_refill API accordingly.
+
+There is also the use case that needs a minimum amount of memory in order to
+make forward progress, but can perform better if more memory is available.
+Using the page_frag_*_prepare() and page_frag_commit*() related APIs, the
+caller requests the minimum memory it needs and the prepare API returns the
+maximum size of the fragment available. The caller then either calls the
+commit API to report how much memory it actually used, or skips the commit if
+it decides not to use any memory.
+
+.. kernel-doc:: include/linux/page_frag_cache.h
+   :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
+                 page_frag_cache_page_offset __page_frag_alloc_align
+                 page_frag_alloc_align page_frag_alloc
+                 __page_frag_refill_align page_frag_refill_align
+                 page_frag_refill __page_frag_refill_prepare_align
+                 page_frag_refill_prepare_align page_frag_refill_prepare
+                 __page_frag_alloc_refill_prepare_align
+                 page_frag_alloc_refill_prepare_align
+                 page_frag_alloc_refill_prepare page_frag_alloc_refill_probe
+                 page_frag_refill_probe page_frag_commit
+                 page_frag_commit_noref page_frag_alloc_abort
+
+.. kernel-doc:: mm/page_frag_cache.c
+   :identifiers: page_frag_cache_drain page_frag_free
+                 __page_frag_alloc_refill_probe_align
+
+Coding examples
+===============
+
+Init & Drain API
+----------------
+
+.. code-block:: c
+
+    page_frag_cache_init(nc);
+    ...
+    page_frag_cache_drain(nc);
+
+
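+A typical pattern is to embed the cache in a longer lived object, init it when
+that object is set up and drain it when the object is torn down. The sketch
+below assumes a hypothetical driver private struct and is only meant to show
+where the init and drain calls usually live:
+
+.. code-block:: c
+
+    struct foo_priv {
+            struct page_frag_cache nc;
+            ...
+    };
+
+    static int foo_open(struct foo_priv *priv)
+    {
+            page_frag_cache_init(&priv->nc);
+            return 0;
+    }
+
+    static void foo_close(struct foo_priv *priv)
+    {
+            /* Puts the page currently held by the cache, if any. */
+            page_frag_cache_drain(&priv->nc);
+    }
+
+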
+Alloc & Free API
+----------------
+
+.. code-block:: c
+
+    void *va;
+
+    va = page_frag_alloc_align(nc, size, gfp, align);
+    if (!va)
+            goto do_error;
+
+    err = do_something(va, size);
+    if (err) {
+            page_frag_alloc_abort(nc, size);
+            goto do_error;
+    }
+
+    ...
+
+    page_frag_free(va);
+
+
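+The aligned variants only align the starting address of the fragment. Below is
+a minimal sketch of the note above about also aligning the size, and of
+propagating the pfmemalloc state of the backing page to an skb; the skb setup
+and the surrounding error handling are assumed to be done by the caller:
+
+.. code-block:: c
+
+    void *va;
+
+    /* Aligning fragsz is the caller's job, only 'va' is aligned below. */
+    fragsz = ALIGN(fragsz, SMP_CACHE_BYTES);
+    va = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, SMP_CACHE_BYTES);
+    if (!va)
+            goto do_error;
+
+    if (page_frag_cache_is_pfmemalloc(nc))
+            skb->pfmemalloc = 1;
+
+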
+Prepare & Commit API
+--------------------
+
+.. code-block:: c
+
+    struct page_frag page_frag, *pfrag;
+    bool merge = true;
+    void *va;
+
+    pfrag = &page_frag;
+    va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
+    if (!va)
+            goto wait_for_space;
+
+    copy = min_t(unsigned int, copy, pfrag->size);
+    if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
+            if (i >= max_skb_frags)
+                    goto new_segment;
+
+            merge = false;
+    }
+
+    copy = mem_schedule(copy);
+    if (!copy)
+            goto wait_for_space;
+
+    if (!copy_from_iter_full_nocache(va, copy, iter))
+            goto do_error;
+
+    if (merge) {
+            skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+            page_frag_commit_noref(nc, pfrag, copy);
+    } else {
+            skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
+            page_frag_commit(nc, pfrag, copy);
+    }
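+
+The probe variants can be combined with the same commit step: they only
+succeed when the cache already holds enough memory, so they can be used to
+opportunistically consume what is left in the cache before falling back to a
+full prepare. A minimal, illustrative sketch, with do_something() standing in
+for the caller's own processing:
+
+.. code-block:: c
+
+    struct page_frag page_frag, *pfrag;
+    unsigned int used;
+    void *va;
+
+    pfrag = &page_frag;
+
+    /* Only use what is already in the cache, do not refill it. */
+    va = page_frag_alloc_refill_probe(nc, 32U, pfrag);
+    if (!va)
+            va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
+
+    if (!va)
+            goto wait_for_space;
+
+    used = do_something(va, pfrag->size);
+    if (!used)
+            goto do_error;
+
+    page_frag_commit(nc, pfrag, used);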
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index e21688e7edc8..f6f7e3e3a4bc 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -28,16 +28,43 @@ static inline bool encoded_page_decode_pfmemalloc(unsigned long encoded_page)
 	return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
 }
 
+/**
+ * page_frag_cache_init() - Init page_frag cache.
+ * @nc: page_frag cache to init
+ *
+ * Inline helper to init the page_frag cache.
+ */
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
 	nc->encoded_page = 0;
 }
 
+/**
+ * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
+ * @nc: page_frag cache to check
+ *
+ * Used to check if the current page in the page_frag cache is pfmemalloc'ed.
+ * It has the same calling context expectation as the alloc API.
+ *
+ * Return:
+ * true if the current page in the page_frag cache is pfmemalloc'ed,
+ * otherwise return false.
+ */
 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
 	return encoded_page_decode_pfmemalloc(nc->encoded_page);
 }
 
+/**
+ * page_frag_cache_page_offset() - Return the current page fragment's offset.
+ * @nc: page_frag cache to check
+ *
+ * The API is only used in net/sched/em_meta.c for historical reasons; do not
+ * use it in new callers unless there is a strong reason to.
+ *
+ * Return:
+ * the offset of the current page fragment in the page_frag cache.
+ */
 static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
 {
 	return nc->offset;
@@ -66,6 +93,19 @@ static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
 	return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
 }
 
+/**
+ * __page_frag_alloc_align() - Allocate a page fragment with alignment
+ * requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested alignment requirement for the 'va'
+ *
+ * Allocate a page fragment from the page_frag cache with an alignment
+ * requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
 					    unsigned int fragsz, gfp_t gfp_mask,
 					    unsigned int align_mask)
@@ -83,6 +123,19 @@ static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
 	return va;
 }
 
+/**
+ * page_frag_alloc_align() - Allocate a page fragment with alignment
+ * requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested alignment requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before allocating a page fragment from
+ * the page_frag cache with an alignment requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
 					   unsigned int fragsz, gfp_t gfp_mask,
 					   unsigned int align)
@@ -91,12 +144,36 @@ static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
 }
 
+/**
+ * page_frag_alloc() - Allocate a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Allocate a page fragment from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc(struct page_frag_cache *nc,
 				    unsigned int fragsz, gfp_t gfp_mask)
 {
 	return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
 }
 
+/**
+ * __page_frag_refill_align() - Refill a page_frag with alignment requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested alignment requirement for the fragment
+ *
+ * Refill a page_frag from the page_frag cache with an alignment requirement.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise return false.
+ */
 static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
 					    unsigned int fragsz,
 					    struct page_frag *pfrag,
@@ -111,6 +188,20 @@ static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
 	return true;
 }
 
+/**
+ * page_frag_refill_align() - Refill a page_frag with alignment requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested alignment requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before refilling a page_frag from the
+ * page_frag cache with an alignment requirement.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise return false.
+ */
 static inline bool page_frag_refill_align(struct page_frag_cache *nc,
 					  unsigned int fragsz,
 					  struct page_frag *pfrag,
@@ -120,6 +211,18 @@ static inline bool page_frag_refill_align(struct page_frag_cache *nc,
 	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
 }
 
+/**
+ * page_frag_refill() - Refill a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Refill a page_frag from the page_frag cache.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise return false.
+ */
 static inline bool page_frag_refill(struct page_frag_cache *nc,
 				    unsigned int fragsz,
 				    struct page_frag *pfrag, gfp_t gfp_mask)
@@ -127,6 +230,20 @@ static inline bool page_frag_refill(struct page_frag_cache *nc,
 	return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
 }
 
+/**
+ * __page_frag_refill_prepare_align() - Prepare refilling a page_frag with
+ * alignment requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested alignment requirement for the fragment
+ *
+ * Prepare to refill a page_frag from the page_frag cache with an alignment
+ * requirement.
+ *
+ * Return:
+ * true if preparing the refill succeeds, otherwise return false.
+ */
 static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
 						    unsigned int fragsz,
 						    struct page_frag *pfrag,
@@ -137,6 +254,21 @@ static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
 					      align_mask);
 }
 
+/**
+ * page_frag_refill_prepare_align() - Prepare refilling a page_frag with
+ * alignment requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested alignment requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before preparing to refill a page_frag
+ * from the page_frag cache with an alignment requirement.
+ *
+ * Return:
+ * true if preparing the refill succeeds, otherwise return false.
+ */
 static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
 						  unsigned int fragsz,
 						  struct page_frag *pfrag,
@@ -148,6 +280,18 @@ static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
 						  -align);
 }
 
+/**
+ * page_frag_refill_prepare() - Prepare refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare to refill a page_frag from the page_frag cache.
+ *
+ * Return:
+ * true if preparing the refill succeeds, otherwise return false.
+ */
 static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
 					    unsigned int fragsz,
 					    struct page_frag *pfrag,
@@ -157,6 +301,20 @@ static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
 					    ~0u);
 }
 
+/**
+ * __page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment
+ * and refilling a page_frag with alignment requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested alignment requirement for the fragment
+ *
+ * Prepare to allocate a fragment and refill a page_frag from the page_frag
+ * cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
 							    unsigned int fragsz,
 							    struct page_frag *pfrag,
@@ -166,6 +324,21 @@ static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cach
 	return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
 }
 
+/**
+ * page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ * refilling a page_frag with alignment requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested alignment requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before preparing to allocate a fragment
+ * and refill a page_frag from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
 							  unsigned int fragsz,
 							  struct page_frag *pfrag,
@@ -177,6 +350,19 @@ static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache
 							  gfp_mask, -align);
 }
 
+/**
+ * page_frag_alloc_refill_prepare() - Prepare allocating a fragment and
+ * refilling a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare to allocate a fragment and refill a page_frag from the page_frag
+ * cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
 						    unsigned int fragsz,
 						    struct page_frag *pfrag,
@@ -186,6 +372,18 @@ static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
 						    gfp_mask, ~0u);
 }
 
+/**
+ * page_frag_alloc_refill_probe() - Probe allocating a fragment and refilling
+ * a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe allocating a fragment and refilling a page_frag from the page_frag
+ * cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
 						 unsigned int fragsz,
 						 struct page_frag *pfrag)
@@ -193,6 +391,17 @@ static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
 	return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
 }
 
+/**
+ * page_frag_refill_probe() - Probe refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe refilling a page_frag from the page_frag cache.
+ *
+ * Return:
+ * true if the refill succeeds, otherwise return false.
+ */
 static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
 					  unsigned int fragsz,
 					  struct page_frag *pfrag)
@@ -200,20 +409,54 @@ static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
 	return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
 }
 
-static inline void page_frag_commit(struct page_frag_cache *nc,
-				    struct page_frag *pfrag,
-				    unsigned int used_sz)
+/**
+ * page_frag_commit - Commit the page fragment allocation.
+ * @nc: page_frag cache from which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the actual used size for the allocation that was either prepared
+ * or probed.
+ *
+ * Return:
+ * the true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int page_frag_commit(struct page_frag_cache *nc,
+					    struct page_frag *pfrag,
+					    unsigned int used_sz)
 {
-	__page_frag_cache_commit(nc, pfrag, used_sz);
+	return __page_frag_cache_commit(nc, pfrag, used_sz);
 }
 
-static inline void page_frag_commit_noref(struct page_frag_cache *nc,
-					   struct page_frag *pfrag,
-					   unsigned int used_sz)
+/**
+ * page_frag_commit_noref - Commit the page fragment allocation without
+ * taking page refcount.
+ * @nc: page_frag cache from which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the alloc preparing or probing by passing the actual used size, but
+ * without taking refcount. Mostly used for the fragment coalescing case when
+ * the current fragment can share the same refcount with the previous
+ * fragment.
+ *
+ * Return:
+ * the true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int page_frag_commit_noref(struct page_frag_cache *nc,
+						  struct page_frag *pfrag,
+						  unsigned int used_sz)
 {
-	__page_frag_cache_commit_noref(nc, pfrag, used_sz);
+	return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
 }
 
+/**
+ * page_frag_alloc_abort - Abort the page fragment allocation.
+ * @nc: page_frag cache to which the page fragment is aborted back
+ * @fragsz: size of the page fragment to be aborted
+ *
+ * It is expected to be called from the same context as the alloc API.
+ * Mostly used for error handling cases where the fragment is no longer
+ * needed.
+ */
 static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
 					 unsigned int fragsz)
 {
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 5ea4b663ab8e..51f4eb4b2169 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -70,6 +70,10 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	return page;
 }
 
+/**
+ * page_frag_cache_drain - Drain the current page from the page_frag cache.
+ * @nc: page_frag cache from which to drain
+ */
 void page_frag_cache_drain(struct page_frag_cache *nc)
 {
 	if (!nc->encoded_page)
@@ -112,6 +116,20 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_cache_commit_noref);
 
+/**
+ * __page_frag_alloc_refill_probe_align() - Probe allocating a fragment and
+ * refilling a page_frag with alignment requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ * @align_mask: the requested alignment requirement for the fragment
+ *
+ * Probe allocating a fragment and refilling a page_frag from the page_frag
+ * cache with an alignment requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
 					   unsigned int fragsz,
 					   struct page_frag *pfrag,
@@ -203,8 +221,12 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
 }
 EXPORT_SYMBOL(__page_frag_cache_prepare);
 
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
+/**
+ * page_frag_free - Free a page fragment.
+ * @addr: virtual address of the page fragment to be freed
+ *
+ * Free a page fragment allocated out of either a compound or order 0 page by
+ * virtual address.
  */
 void page_frag_free(void *addr)
 {