From patchwork Tue Oct 1 07:58:49 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13895546
From: Yunsheng Lin <yunshenglin0825@gmail.com>
X-Google-Original-From: Yunsheng Lin <linyunsheng@huawei.com>
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Yunsheng Lin,
 Alexander Duyck, Andrew Morton, linux-mm@kvack.org
Subject: [PATCH net-next v19 06/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
Date: Tue, 1 Oct 2024 15:58:49 +0800
Message-Id: <20241001075858.48936-7-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20241001075858.48936-1-linyunsheng@huawei.com>
References: <20241001075858.48936-1-linyunsheng@huawei.com>

Currently there is one 'struct page_frag' for every 'struct sock' and
'struct task_struct', and we are about to replace the 'struct page_frag'
with 'struct page_frag_cache' for them. Before beginning the replacement,
we need to ensure that the size of 'struct page_frag_cache' is not bigger
than the size of 'struct page_frag', as there may be tens of thousands of
'struct sock' and 'struct task_struct' instances in the system.

By OR'ing the page order and the pfmemalloc bit into the lower bits of
'va', instead of using a 'u16' or 'u32' for the page size and a 'u8' for
pfmemalloc, we avoid wasting 3 or 5 bytes. And since the page address,
pfmemalloc bit and order stay unchanged for the same page in the same
'page_frag_cache' instance, it makes sense to pack them into a single
word.

After this patch, the size of 'struct page_frag_cache' is the same as
the size of 'struct page_frag'.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
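
Not part of the patch: below is a minimal userspace sketch of the
encoding scheme described above, assuming 4096-byte pages. All names
and constants are local to the example; EXAMPLE_ORDER_MASK and
EXAMPLE_PFMEMALLOC_BIT merely mirror the GENMASK(7, 0) and BIT(8)
values added by the patch.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define EXAMPLE_PAGE_SIZE	4096UL		/* assumed page size */
#define EXAMPLE_ORDER_MASK	0xffUL		/* like GENMASK(7, 0) */
#define EXAMPLE_PFMEMALLOC_BIT	(1UL << 8)	/* like BIT(8) */

/* pack a page-aligned address, its order and a flag into one word */
static unsigned long example_encode(void *va, unsigned int order,
				    bool pfmemalloc)
{
	/* the low 12 bits of a page-aligned address are zero, so reuse them */
	assert(((uintptr_t)va & (EXAMPLE_PAGE_SIZE - 1)) == 0);
	return (unsigned long)(uintptr_t)va |
	       (order & EXAMPLE_ORDER_MASK) |
	       (pfmemalloc ? EXAMPLE_PFMEMALLOC_BIT : 0);
}

int main(void)
{
	/* an order-3 "page": eight contiguous page-sized chunks */
	void *va = aligned_alloc(EXAMPLE_PAGE_SIZE, 8 * EXAMPLE_PAGE_SIZE);
	unsigned long encoded;

	if (!va)
		return 1;

	encoded = example_encode(va, 3, true);

	/* decoding is just masking the fields back out */
	printf("address:    %p\n", (void *)(encoded & ~(EXAMPLE_PAGE_SIZE - 1)));
	printf("order:      %lu\n", encoded & EXAMPLE_ORDER_MASK);
	printf("pfmemalloc: %d\n", !!(encoded & EXAMPLE_PFMEMALLOC_BIT));

	free(va);
	return 0;
}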

 include/linux/mm_types_task.h   | 19 +++++----
 include/linux/page_frag_cache.h | 26 +++++++++++-
 mm/page_frag_cache.c            | 75 +++++++++++++++++++++++----------
 3 files changed, 88 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index 0ac6daebdd5c..a82aa80c0ba4 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -47,18 +47,21 @@ struct page_frag {
 #define PAGE_FRAG_CACHE_MAX_SIZE	__ALIGN_MASK(32768, ~PAGE_MASK)
 #define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
 struct page_frag_cache {
-	void *va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	/* encoded_page consists of the virtual address, pfmemalloc bit and
+	 * order of a page.
+	 */
+	unsigned long encoded_page;
+
+	/* we maintain a pagecount bias, so that we dont dirty cache line
+	 * containing page->_refcount every time we allocate a fragment.
+	 */
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
 	__u16 offset;
-	__u16 size;
+	__u16 pagecnt_bias;
 #else
 	__u32 offset;
+	__u32 pagecnt_bias;
 #endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
-	bool pfmemalloc;
 };
 
 /* Track pages that require TLB flushes */
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 0a52f7a179c8..75aaad6eaea2 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -3,18 +3,40 @@
 #ifndef _LINUX_PAGE_FRAG_CACHE_H
 #define _LINUX_PAGE_FRAG_CACHE_H
 
+#include <linux/bits.h>
 #include <linux/log2.h>
 #include <linux/mm_types_task.h>
 #include <linux/types.h>
 
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+/* Use a full byte here to enable assembler optimization as the shift
+ * operation is usually expecting a byte.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK		GENMASK(7, 0)
+#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT	8
+#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT		BIT(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT)
+#else
+/* Compiler should be able to figure out we don't read things as any value
+ * ANDed with 0 is 0.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK		0
+#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT	0
+#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT		BIT(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT)
+#endif
+
+static inline bool page_frag_encoded_page_pfmemalloc(unsigned long encoded_page)
+{
+	return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
-	nc->va = NULL;
+	nc->encoded_page = 0;
 }
 
 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
-	return !!nc->pfmemalloc;
+	return page_frag_encoded_page_pfmemalloc(nc->encoded_page);
 }
 
 void page_frag_cache_drain(struct page_frag_cache *nc);
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 4c8e04379cb3..cf9375a81a64 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -12,6 +12,7 @@
  * be used in the "frags" portion of skb_shared_info.
  */
 
+#include <linux/build_bug.h>
 #include <linux/export.h>
 #include <linux/gfp_types.h>
 #include <linux/init.h>
@@ -19,9 +20,41 @@
 #include <linux/page_frag_cache.h>
 #include "internal.h"
 
+static unsigned long page_frag_encode_page(struct page *page, unsigned int order,
+					   bool pfmemalloc)
+{
+	BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK);
+	BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_BIT >= PAGE_SIZE);
+
+	return (unsigned long)page_address(page) |
+	       (order & PAGE_FRAG_CACHE_ORDER_MASK) |
+	       ((unsigned long)pfmemalloc << PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT);
+}
+
+static unsigned long page_frag_encoded_page_order(unsigned long encoded_page)
+{
+	return encoded_page & PAGE_FRAG_CACHE_ORDER_MASK;
+}
+
+static void *page_frag_encoded_page_address(unsigned long encoded_page)
+{
+	return (void *)(encoded_page & PAGE_MASK);
+}
+
+static struct page *page_frag_encoded_page_ptr(unsigned long encoded_page)
+{
+	return virt_to_page((void *)encoded_page);
+}
+
+static unsigned int page_frag_cache_page_size(unsigned long encoded_page)
+{
+	return PAGE_SIZE << page_frag_encoded_page_order(encoded_page);
+}
+
 static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 					     gfp_t gfp_mask)
 {
+	unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
 	struct page *page = NULL;
 	gfp_t gfp = gfp_mask;
 
@@ -30,23 +63,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
 #endif
-	if (unlikely(!page))
+	if (unlikely(!page)) {
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		order = 0;
+	}
 
-	nc->va = page ? page_address(page) : NULL;
+	nc->encoded_page = page ?
+		page_frag_encode_page(page, order, page_is_pfmemalloc(page)) : 0;
 
 	return page;
 }
 
 void page_frag_cache_drain(struct page_frag_cache *nc)
 {
-	if (!nc->va)
+	if (!nc->encoded_page)
 		return;
 
-	__page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
-	nc->va = NULL;
+	__page_frag_cache_drain(page_frag_encoded_page_ptr(nc->encoded_page),
+				nc->pagecnt_bias);
+	nc->encoded_page = 0;
 }
 EXPORT_SYMBOL(page_frag_cache_drain);
 
@@ -63,35 +99,29 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			      unsigned int fragsz, gfp_t gfp_mask,
 			      unsigned int align_mask)
 {
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-	unsigned int size = nc->size;
-#else
-	unsigned int size = PAGE_SIZE;
-#endif
-	unsigned int offset;
+	unsigned long encoded_page = nc->encoded_page;
+	unsigned int size, offset;
 	struct page *page;
 
-	if (unlikely(!nc->va)) {
+	if (unlikely(!encoded_page)) {
 refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
 		if (!page)
 			return NULL;
 
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-		/* if size can vary use size else just use PAGE_SIZE */
-		size = nc->size;
-#endif
+		encoded_page = nc->encoded_page;
+
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
 		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
 
 		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		nc->offset = 0;
 	}
 
+	size = page_frag_cache_page_size(encoded_page);
+
 	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
 	if (unlikely(offset + fragsz > size)) {
 		if (unlikely(fragsz > PAGE_SIZE)) {
@@ -107,13 +137,14 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			return NULL;
 		}
 
-		page = virt_to_page(nc->va);
+		page = page_frag_encoded_page_ptr(encoded_page);
 
 		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
 			goto refill;
 
-		if (unlikely(nc->pfmemalloc)) {
-			free_unref_page(page, compound_order(page));
+		if (unlikely(page_frag_encoded_page_pfmemalloc(encoded_page))) {
+			free_unref_page(page,
+					page_frag_encoded_page_order(encoded_page));
 			goto refill;
 		}
 
@@ -128,7 +159,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 	nc->pagecnt_bias--;
 	nc->offset = offset + fragsz;
 
-	return nc->va + offset;
+	return page_frag_encoded_page_address(encoded_page) + offset;
 }
 EXPORT_SYMBOL(__page_frag_alloc_align);
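
The commit message claims size parity between the two structs. Below is
an illustrative userspace check of that claim for an LP64 target (not
part of the patch; the mirror structs exist only for this example and
copy the layouts above, with a plain pointer standing in for
struct page *):

#include <stdint.h>
#include <stdio.h>

/* layout mirrors for size illustration only, not the kernel definitions */
struct example_page_frag {
	void *page;			/* stands in for struct page * */
	uint32_t offset;
	uint32_t size;
};

struct example_page_frag_cache {
	unsigned long encoded_page;	/* address | order | pfmemalloc bit */
	uint32_t offset;
	uint32_t pagecnt_bias;
};

/* both come out to 16 bytes on LP64: one 8-byte word plus two 4-byte fields */
_Static_assert(sizeof(struct example_page_frag_cache) ==
	       sizeof(struct example_page_frag),
	       "page_frag_cache must not outgrow page_frag");

int main(void)
{
	printf("page_frag:       %zu bytes\n", sizeof(struct example_page_frag));
	printf("page_frag_cache: %zu bytes\n",
	       sizeof(struct example_page_frag_cache));
	return 0;
}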