From patchwork Thu Mar 28 13:38:36 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 13608547
From: Yunsheng Lin <linyunsheng@huawei.com>
Cc: Yunsheng Lin, Andrew Morton
Subject: [PATCH RFC 07/10] mm: page_frag: reuse existing bit field of 'va' for pagecnt_bias
Date: Thu, 28 Mar 2024 21:38:36 +0800
Message-ID: <20240328133839.13620-8-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240328133839.13620-1-linyunsheng@huawei.com>
References: <20240328133839.13620-1-linyunsheng@huawei.com>

As 'va' is always aligned with the order of the page allocated, we can
reuse its LSB bits for the pagecount bias and remove the original space
needed by 'pagecnt_bias'. Also limit 'fragsz' to be at least the size of
'unsigned int' to match the limited pagecnt_bias.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/linux/page_frag_cache.h | 20 +++++++----
 mm/page_frag_alloc.c            | 63 +++++++++++++++++++--------------
 2 files changed, 50 insertions(+), 33 deletions(-)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 40a7d6da9ef0..a97a1ac017d6 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -9,7 +9,18 @@
 #define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
 
 struct page_frag_cache {
-	void *va;
+	union {
+		void *va;
+		/* we maintain a pagecount bias, so that we dont dirty cache
+		 * line containing page->_refcount every time we allocate a
+		 * fragment. As 'va' is always aligned with the order of the
+		 * page allocated, we can reuse the LSB bits for the pagecount
+		 * bias, and its bit width happens to be indicated by the
+		 * 'size_mask' below.
+		 */
+		unsigned long pagecnt_bias;
+
+	};
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	__u16 offset;
 	__u16 size_mask:15;
@@ -18,10 +29,6 @@ struct page_frag_cache {
 	__u32 offset:31;
 	__u32 pfmemalloc:1;
 #endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
 };
 
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
@@ -56,7 +63,8 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 					     gfp_t gfp_mask,
 					     unsigned int align)
 {
-	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE);
+	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE ||
+		     fragsz < sizeof(unsigned int));
 
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align);
 }
diff --git a/mm/page_frag_alloc.c b/mm/page_frag_alloc.c
index a02e57a439f0..ae1393d0619a 100644
--- a/mm/page_frag_alloc.c
+++ b/mm/page_frag_alloc.c
@@ -18,8 +18,8 @@
 #include
 #include "internal.h"
 
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
-					     gfp_t gfp_mask)
+static bool __page_frag_cache_refill(struct page_frag_cache *nc,
+				     gfp_t gfp_mask)
 {
 	struct page *page = NULL;
 	gfp_t gfp = gfp_mask;
@@ -35,9 +35,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	if (unlikely(!page))
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
 
-	nc->va = page ? page_address(page) : NULL;
+	if (unlikely(!page)) {
+		nc->va = NULL;
+		return false;
+	}
+
+	nc->va = page_address(page);
 
-	return page;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	VM_BUG_ON(nc->pagecnt_bias & nc->size_mask);
+	page_ref_add(page, nc->size_mask - 1);
+	nc->pagecnt_bias |= nc->size_mask;
+#else
+	VM_BUG_ON(nc->pagecnt_bias & (PAGE_SIZE - 1));
+	page_ref_add(page, PAGE_SIZE - 2);
+	nc->pagecnt_bias |= (PAGE_SIZE - 1);
+#endif
+
+	nc->pfmemalloc = page_is_pfmemalloc(page);
+	nc->offset = 0;
+	return true;
 }
 
 void page_frag_cache_drain(struct page_frag_cache *nc)
@@ -67,38 +84,31 @@ EXPORT_SYMBOL(__page_frag_cache_drain);
 void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 			 gfp_t gfp_mask)
 {
-	unsigned int size, offset;
+	unsigned long size_mask;
+	unsigned int offset;
 	struct page *page;
+	void *va;
 
 	if (unlikely(!nc->va)) {
refill:
-		page = __page_frag_cache_refill(nc, gfp_mask);
-		if (!page)
+		if (!__page_frag_cache_refill(nc, gfp_mask))
 			return NULL;
-
-		/* Even if we own the page, we do not use atomic_set().
-		 * This would break get_page_unless_zero() users.
-		 */
-		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = 0;
 	}
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	/* if size can vary use size else just use PAGE_SIZE */
-	size = nc->size_mask + 1;
+	size_mask = nc->size_mask;
 #else
-	size = PAGE_SIZE;
+	size_mask = PAGE_SIZE - 1;
 #endif
 
+	va = (void *)((unsigned long)nc->va & ~size_mask);
 	offset = nc->offset;
-	if (unlikely(offset + fragsz > size)) {
-		page = virt_to_page(nc->va);
-		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
+	if (unlikely(offset + fragsz > (size_mask + 1))) {
+		page = virt_to_page(va);
+
+		if (!page_ref_sub_and_test(page, nc->pagecnt_bias & size_mask))
 			goto refill;
 
 		if (unlikely(nc->pfmemalloc)) {
@@ -107,12 +117,11 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 		}
 
 		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		set_page_count(page, size_mask);
+		nc->pagecnt_bias |= size_mask;
 
-		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		offset = 0;
-		if (unlikely(fragsz > size)) {
+		if (unlikely(fragsz > (size_mask + 1))) {
 			/*
 			 * The caller is trying to allocate a fragment
 			 * with fragsz > PAGE_SIZE but the cache isn't big
@@ -129,7 +138,7 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 	nc->pagecnt_bias--;
 	nc->offset = offset + fragsz;
 
-	return nc->va + offset;
+	return va + offset;
 }
 EXPORT_SYMBOL(page_frag_alloc_va);
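
As a side note for readers who have not seen this trick before: because 'va'
always points at the start of a block whose size is a power of two, its low
bits are guaranteed to be zero and can double as storage for the pagecount
bias, which is what the union above relies on. The short userspace sketch
below is illustrative only and not part of the patch; BUF_SIZE, BIAS_MASK and
the printf output are made-up stand-ins. It walks through the same pack and
unpack arithmetic that the patch performs with 'nc->pagecnt_bias & size_mask'
and '(unsigned long)nc->va & ~size_mask':

/*
 * Illustrative userspace sketch (not from the patch): a buffer allocated
 * with a power-of-two size is aligned to that size, so its low log2(size)
 * address bits are zero and can carry a small counter, mirroring how the
 * patch folds 'pagecnt_bias' into 'va'. BUF_SIZE and BIAS_MASK stand in
 * for PAGE_SIZE and 'size_mask'.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BUF_SIZE  4096UL                /* stand-in for PAGE_SIZE */
#define BIAS_MASK (BUF_SIZE - 1)        /* stand-in for 'size_mask' */

int main(void)
{
        /* aligned_alloc() guarantees the low bits of 'buf' are zero */
        void *buf = aligned_alloc(BUF_SIZE, BUF_SIZE);
        uintptr_t packed;

        if (!buf)
                return 1;

        /* pack: pointer in the high bits, counter in the low bits */
        packed = (uintptr_t)buf | BIAS_MASK;

        /* unpack the pointer by masking off the counter bits ... */
        assert((void *)(packed & ~BIAS_MASK) == buf);

        /* ... and unpack the counter by masking off the pointer bits */
        printf("initial bias: %lu\n", (unsigned long)(packed & BIAS_MASK));

        /* handing out one fragment decrements the packed counter */
        packed--;
        printf("bias after one alloc: %lu\n",
               (unsigned long)(packed & BIAS_MASK));

        free(buf);
        return 0;
}

Masking with ~size_mask recovers the page address, masking with size_mask
recovers the bias, and a plain decrement of the packed word only touches the
bias bits as long as it never underflows; requiring fragsz to be at least
sizeof(unsigned int) keeps the number of fragments carved out of one page
well below the bias range, which appears to be why the WARN_ON_ONCE() above
checks it as well.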