From patchwork Thu Mar 28 13:38:35 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13608546
From: Yunsheng Lin <linyunsheng@huawei.com>
Cc: Yunsheng Lin, Andrew Morton
Subject: [PATCH RFC 06/10] mm: page_frag: reuse MSB of 'size' field for pfmemalloc
Date: Thu, 28 Mar 2024 21:38:35 +0800
Message-ID: <20240328133839.13620-7-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240328133839.13620-1-linyunsheng@huawei.com>
References: <20240328133839.13620-1-linyunsheng@huawei.com>
MIME-Version: 1.0

The '(PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)' case is for systems with a
page size smaller than 32KB. 32KB is 0x8000 bytes, so storing the size
directly needs 16 bits; change 'size' to 'size_mask' (which holds
size - 1) so that it fits in 15 bits, and turn 'pfmemalloc' into a
1-bit field that reuses the freed MSB. This removes the separate space
originally needed for 'pfmemalloc'. For the other case, the MSB of
'offset' is reused for 'pfmemalloc', leaving a 31-bit offset.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/linux/page_frag_cache.h | 13 ++++++++-----
 mm/page_frag_alloc.c            |  5 +++--
 2 files changed, 11 insertions(+), 7 deletions(-)
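[ Illustrative note, not part of the patch: a minimal user-space sketch
  of the packing idea described above. Because PAGE_FRAG_CACHE_MAX_SIZE
  is 32KB, 'size - 1' is at most 0x7fff and fits in 15 bits, leaving the
  MSB of the same 16-bit word free for the pfmemalloc flag. The names
  EXAMPLE_PAGE_SIZE, EXAMPLE_FRAG_CACHE_MAX_SIZE and struct
  frag_cache_demo are stand-ins invented for this demo, not kernel
  symbols. ]

#include <stdint.h>
#include <stdio.h>

/* Stand-in constants, assuming 4KB pages; in the kernel these come
 * from PAGE_SIZE and PAGE_FRAG_CACHE_MAX_SIZE.
 */
#define EXAMPLE_PAGE_SIZE		4096u
#define EXAMPLE_FRAG_CACHE_MAX_SIZE	32768u	/* 32KB == 0x8000 */

/* Mirrors the layout from the patch: 15 bits of (size - 1) plus one
 * pfmemalloc bit packed into a single 16-bit word.
 */
struct frag_cache_demo {
	uint16_t size_mask:15;
	uint16_t pfmemalloc:1;
};

int main(void)
{
	struct frag_cache_demo demo;

	/* Largest cache: size - 1 == 0x7fff still fits in 15 bits... */
	demo.size_mask = EXAMPLE_FRAG_CACHE_MAX_SIZE - 1;
	/* ...so the freed MSB can carry the pfmemalloc flag. */
	demo.pfmemalloc = 1;

	/* Recover the usable size the same way the patch does. */
	printf("max:  size=%u pfmemalloc=%u\n",
	       demo.size_mask + 1, demo.pfmemalloc);

	/* Fallback case: a single page when the large allocation fails. */
	demo.size_mask = EXAMPLE_PAGE_SIZE - 1;
	demo.pfmemalloc = 0;
	printf("page: size=%u pfmemalloc=%u\n",
	       demo.size_mask + 1, demo.pfmemalloc);

	/* On common ABIs both bit-fields share one 16-bit storage unit. */
	printf("sizeof(struct frag_cache_demo) = %zu\n", sizeof(demo));
	return 0;
}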
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index fe5faa80b6c3..40a7d6da9ef0 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -12,15 +12,16 @@ struct page_frag_cache {
 	void *va;
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	__u16 offset;
-	__u16 size;
+	__u16 size_mask:15;
+	__u16 pfmemalloc:1;
 #else
-	__u32 offset;
+	__u32 offset:31;
+	__u32 pfmemalloc:1;
 #endif
 	/* we maintain a pagecount bias, so that we dont dirty cache line
 	 * containing page->_refcount every time we allocate a fragment.
 	 */
 	unsigned int pagecnt_bias;
-	bool pfmemalloc;
 };
 
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
@@ -43,7 +44,9 @@ static inline void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 					       gfp_t gfp_mask,
 					       unsigned int align)
 {
-	nc->offset = ALIGN(nc->offset, align);
+	unsigned int offset = nc->offset;
+
+	nc->offset = ALIGN(offset, align);
 
 	return page_frag_alloc_va(nc, fragsz, gfp_mask);
 }
@@ -53,7 +56,7 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 					     gfp_t gfp_mask,
 					     unsigned int align)
 {
-	WARN_ON_ONCE(!is_power_of_2(align));
+	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE);
 
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align);
 }
diff --git a/mm/page_frag_alloc.c b/mm/page_frag_alloc.c
index 7f639af4e518..a02e57a439f0 100644
--- a/mm/page_frag_alloc.c
+++ b/mm/page_frag_alloc.c
@@ -29,7 +29,8 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 		    __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+	nc->size_mask = page ? PAGE_FRAG_CACHE_MAX_SIZE - 1 : PAGE_SIZE - 1;
+	VM_BUG_ON(page && nc->size_mask != PAGE_FRAG_CACHE_MAX_SIZE - 1);
 #endif
 	if (unlikely(!page))
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
@@ -88,7 +89,7 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	/* if size can vary use size else just use PAGE_SIZE */
-	size = nc->size;
+	size = nc->size_mask + 1;
 #else
 	size = PAGE_SIZE;
 #endif
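[ Another illustrative aside, not part of the patch: the tightened
  WARN_ON_ONCE() above now also rejects align values of PAGE_SIZE or
  more. ALIGN() rounds the current offset up to the next multiple of
  'align', so aligning to a whole page (or more) can push the offset all
  the way to the next page boundary, presumably past what a single-page
  fragment cache can hold. The sketch below uses a simplified ALIGN_UP()
  stand-in and an assumed 4KB page size, both invented for the demo. ]

#include <stdio.h>

/* Simplified stand-in for the kernel's ALIGN(): round x up to the next
 * multiple of 'a', where 'a' is a power of two.
 */
#define ALIGN_UP(x, a)		(((x) + (a) - 1) & ~((a) - 1))

#define EXAMPLE_PAGE_SIZE	4096u	/* assumed 4KB pages */

int main(void)
{
	unsigned int offset = 100;

	/* A power-of-two alignment below PAGE_SIZE rounds 100 up to the
	 * next 64-byte boundary: 128.
	 */
	printf("ALIGN_UP(%u, 64)   = %u\n", offset, ALIGN_UP(offset, 64u));

	/* Aligning to a whole page jumps to the next page boundary
	 * (100 -> 4096), beyond the last usable offset of a page-sized
	 * cache; the patch's new check warns on such align values.
	 */
	printf("ALIGN_UP(%u, 4096) = %u\n", offset,
	       ALIGN_UP(offset, EXAMPLE_PAGE_SIZE));
	return 0;
}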