From patchwork Tue Oct 8 11:20:40 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13826286
From: Yunsheng Lin <linyunsheng@huawei.com>
CC: Yunsheng Lin, Alexander Duyck, Andrew Morton
Subject: [PATCH net-next v20 06/14] mm: page_frag: reuse existing space for 'size'
 and 'pfmemalloc'
Date: Tue, 8 Oct 2024 19:20:40 +0800
Message-ID: <20241008112049.2279307-7-linyunsheng@huawei.com>
In-Reply-To: <20241008112049.2279307-1-linyunsheng@huawei.com>
References: <20241008112049.2279307-1-linyunsheng@huawei.com>

Currently there is one 'struct page_frag' for every 'struct sock' and
'struct task_struct', and we are about to replace that 'struct page_frag'
with a 'struct page_frag_cache' in both of them. Before starting the
replacement, we need to ensure that 'struct page_frag_cache' is not
bigger than 'struct page_frag', as there may be tens of thousands of
'struct sock' and 'struct task_struct' instances in a system.

By OR'ing the page order and the pfmemalloc bit into the lower bits of
'va', instead of using a 'u16' or 'u32' for the page size and a 'u8' for
pfmemalloc, we avoid wasting 3 or 5 bytes. Since the page address,
pfmemalloc bit and order stay the same for a given page within the same
'page_frag_cache' instance, it makes sense to pack them together.

After this patch, the size of 'struct page_frag_cache' should be the
same as the size of 'struct page_frag'.
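As an illustration of the encoding scheme (a standalone userspace sketch
with made-up ex_* names and a hard-coded 4K page size, not the kernel
helpers added below): because the page's virtual address is always
page-aligned, its low PAGE_SHIFT bits are zero and can carry the order
and the pfmemalloc flag instead.

#include <assert.h>
#include <stdbool.h>

#define EX_PAGE_SHIFT		12			/* assume 4K pages */
#define EX_PAGE_SIZE		(1UL << EX_PAGE_SHIFT)
#define EX_PAGE_MASK		(~(EX_PAGE_SIZE - 1))
#define EX_ORDER_MASK		0xffUL			/* low byte holds the order */
#define EX_PFMEMALLOC_BIT	(EX_ORDER_MASK + 1)	/* next free bit */

/* pack a page-aligned address, its order and the pfmemalloc flag into one word */
static unsigned long ex_encode(void *va, unsigned int order, bool pfmemalloc)
{
	return (unsigned long)va | (order & EX_ORDER_MASK) |
	       ((unsigned long)pfmemalloc * EX_PFMEMALLOC_BIT);
}

static void *ex_va(unsigned long encoded)
{
	return (void *)(encoded & EX_PAGE_MASK);
}

static unsigned int ex_order(unsigned long encoded)
{
	return encoded & EX_ORDER_MASK;
}

static bool ex_pfmemalloc(unsigned long encoded)
{
	return !!(encoded & EX_PFMEMALLOC_BIT);
}

int main(void)
{
	void *va = (void *)0x7f0000000000UL;	/* any page-aligned address */
	unsigned long encoded = ex_encode(va, 3, true);

	assert(ex_va(encoded) == va);
	assert(ex_order(encoded) == 3);
	assert(ex_pfmemalloc(encoded));
	return 0;
}

Reserving a full byte for the order mirrors the GENMASK(7, 0) mask used
below, which keeps the decode a simple byte-sized mask.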
CC: Alexander Duyck
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/linux/mm_types_task.h   | 19 +++++----
 include/linux/page_frag_cache.h | 24 ++++++++++-
 mm/page_frag_cache.c            | 75 +++++++++++++++++++++++----------
 3 files changed, 86 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index 0ac6daebdd5c..a82aa80c0ba4 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -47,18 +47,21 @@ struct page_frag {
 #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
 #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
 struct page_frag_cache {
-	void *va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* encoded_page consists of the virtual address, pfmemalloc bit and
+	 * order of a page.
+	 */
+	unsigned long encoded_page;
+
+	/* we maintain a pagecount bias, so that we dont dirty cache line
+	 * containing page->_refcount every time we allocate a fragment.
+	 */
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
 	__u16 offset;
-	__u16 size;
+	__u16 pagecnt_bias;
 #else
 	__u32 offset;
+	__u32 pagecnt_bias;
 #endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
-	bool pfmemalloc;
 };
 
 /* Track pages that require TLB flushes */
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 0a52f7a179c8..dba2268e451a 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -3,18 +3,38 @@
 #ifndef _LINUX_PAGE_FRAG_CACHE_H
 #define _LINUX_PAGE_FRAG_CACHE_H
 
+#include
 #include
 #include
 #include
 
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+/* Use a full byte here to enable assembler optimization as the shift
+ * operation is usually expecting a byte.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK	GENMASK(7, 0)
+#else
+/* Compiler should be able to figure out we don't read things as any value
+ * ANDed with 0 is 0.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK	0
+#endif
+
+#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT	(PAGE_FRAG_CACHE_ORDER_MASK + 1)
+
+static inline bool page_frag_encoded_page_pfmemalloc(unsigned long encoded_page)
+{
+	return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
-	nc->va = NULL;
+	nc->encoded_page = 0;
 }
 
 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
-	return !!nc->pfmemalloc;
+	return page_frag_encoded_page_pfmemalloc(nc->encoded_page);
 }
 
 void page_frag_cache_drain(struct page_frag_cache *nc);
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 4c8e04379cb3..4bff4de58808 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -12,6 +12,7 @@
  * be used in the "frags" portion of skb_shared_info.
  */
 
+#include
 #include
 #include
 #include
@@ -19,9 +20,41 @@
 #include
 #include "internal.h"
 
+static unsigned long page_frag_encode_page(struct page *page, unsigned int order,
+					   bool pfmemalloc)
+{
+	BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK);
+	BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_BIT >= PAGE_SIZE);
+
+	return (unsigned long)page_address(page) |
+		(order & PAGE_FRAG_CACHE_ORDER_MASK) |
+		((unsigned long)pfmemalloc * PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
+static unsigned long page_frag_encoded_page_order(unsigned long encoded_page)
+{
+	return encoded_page & PAGE_FRAG_CACHE_ORDER_MASK;
+}
+
+static void *page_frag_encoded_page_address(unsigned long encoded_page)
+{
+	return (void *)(encoded_page & PAGE_MASK);
+}
+
+static struct page *page_frag_encoded_page_ptr(unsigned long encoded_page)
+{
+	return virt_to_page((void *)encoded_page);
+}
+
+static unsigned int page_frag_cache_page_size(unsigned long encoded_page)
+{
+	return PAGE_SIZE << page_frag_encoded_page_order(encoded_page);
+}
+
 static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 					     gfp_t gfp_mask)
 {
+	unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
 	struct page *page = NULL;
 	gfp_t gfp = gfp_mask;
 
@@ -30,23 +63,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
 #endif
-	if (unlikely(!page))
+	if (unlikely(!page)) {
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		order = 0;
+	}
 
-	nc->va = page ? page_address(page) : NULL;
+	nc->encoded_page = page ?
+		page_frag_encode_page(page, order, page_is_pfmemalloc(page)) : 0;
 
 	return page;
 }
 
 void page_frag_cache_drain(struct page_frag_cache *nc)
 {
-	if (!nc->va)
+	if (!nc->encoded_page)
 		return;
 
-	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
-	nc->va = NULL;
+	__page_frag_cache_drain(page_frag_encoded_page_ptr(nc->encoded_page),
+				nc->pagecnt_bias);
+	nc->encoded_page = 0;
 }
 EXPORT_SYMBOL(page_frag_cache_drain);
 
@@ -63,35 +99,29 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			      unsigned int fragsz, gfp_t gfp_mask,
 			      unsigned int align_mask)
 {
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	unsigned int size = nc->size;
-#else
-	unsigned int size = PAGE_SIZE;
-#endif
-	unsigned int offset;
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
 	struct page *page;
 
-	if (unlikely(!nc->va)) {
+	if (unlikely(!encoded_page)) {
refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
 		if (!page)
 			return NULL;
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
+		encoded_page = nc->encoded_page;
+
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
 		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
 
 		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		nc->offset = 0;
 	}
 
+	size = page_frag_cache_page_size(encoded_page);
 	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
 	if (unlikely(offset + fragsz > size)) {
 		if (unlikely(fragsz > PAGE_SIZE)) {
@@ -107,13 +137,14 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			return NULL;
 		}
 
-		page = virt_to_page(nc->va);
+		page = page_frag_encoded_page_ptr(encoded_page);
 
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
 			goto refill;
 
-		if (unlikely(nc->pfmemalloc)) {
-			free_unref_page(page, compound_order(page));
+		if (unlikely(page_frag_encoded_page_pfmemalloc(encoded_page))) {
+			free_unref_page(page,
+					page_frag_encoded_page_order(encoded_page));
 			goto refill;
 		}
 
@@ -128,7 +159,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 	nc->pagecnt_bias--;
 	nc->offset = offset + fragsz;
 
-	return nc->va + offset;
+	return page_frag_encoded_page_address(encoded_page) + offset;
 }
 EXPORT_SYMBOL(__page_frag_alloc_align);
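To make the size argument from the commit message concrete, here is a
rough userspace mock of the layouts (made-up mock_* names, assuming a
typical 64-bit configuration with 4K pages; the real definitions live in
include/linux/mm_types_task.h):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* 16 bytes: a pointer plus two 32-bit fields, like 'struct page_frag' */
struct mock_page_frag {
	void *page;
	uint32_t offset;
	uint32_t size;
};

/* old layout: 8 + 2 + 2 + 4 + 1 bytes, padded out to 24 */
struct mock_page_frag_cache_old {
	void *va;
	uint16_t offset;
	uint16_t size;
	unsigned int pagecnt_bias;
	bool pfmemalloc;
};

/* new layout: one encoded word plus two 32-bit fields, back to 16 bytes */
struct mock_page_frag_cache_new {
	unsigned long encoded_page;	/* va | order | pfmemalloc bit */
	uint32_t offset;
	uint32_t pagecnt_bias;
};

int main(void)
{
	printf("page_frag:             %zu bytes\n", sizeof(struct mock_page_frag));
	printf("page_frag_cache (old): %zu bytes\n", sizeof(struct mock_page_frag_cache_old));
	printf("page_frag_cache (new): %zu bytes\n", sizeof(struct mock_page_frag_cache_new));
	return 0;
}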