From patchwork Mon Jul 8 06:33:27 2024
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 13726194
From: alexs@kernel.org
To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, minchan@kernel.org, willy@infradead.org,
	senozhatsky@chromium.org, david@redhat.com, 42.hyeyoo@gmail.com,
	Yosry Ahmed, nphamcs@gmail.com
Cc: Alex Shi
Subject: [PATCH v3 06/20] mm/zsmalloc: convert create_page_chain() and its users to use zpdesc
Date: Mon, 8 Jul 2024 14:33:27 +0800
Message-ID: <20240708063344.1096626-7-alexs@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240708063344.1096626-1-alexs@kernel.org>
References: <20240708063344.1096626-1-alexs@kernel.org>
MIME-Version: 1.0

From: Alex Shi (Tencent)

Introduce a few helper functions and convert create_page_chain() to use
zpdesc, then use zpdesc in replace_sub_page() too.
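For reference, the linkage that create_page_chain() builds after this
conversion can be sketched in plain C as below. This is only an
illustrative userspace model, not part of the patch: the struct zpdesc
layout, the "first" flag, and the chain_zpdescs() name are simplified
stand-ins for the kernel types and the PG_private marker.

	#include <stdbool.h>
	#include <stdio.h>

	/* Simplified, hypothetical stand-ins for the kernel structures. */
	struct zspage;				/* opaque here */

	struct zpdesc {
		struct zspage *zspage;		/* back-pointer (was page->private) */
		struct zpdesc *next;		/* chain link   (was page->index)   */
		bool first;			/* models the PG_private marker     */
	};

	/* Mirrors the linking loop in create_page_chain(). */
	static void chain_zpdescs(struct zspage *zspage,
				  struct zpdesc *zpdescs[], int nr)
	{
		struct zpdesc *prev = NULL;

		for (int i = 0; i < nr; i++) {
			struct zpdesc *zpdesc = zpdescs[i];

			zpdesc->zspage = zspage;
			zpdesc->next = NULL;
			if (i == 0)
				zpdesc->first = true;	/* only the head is marked */
			else
				prev->next = zpdesc;	/* link after the previous one */
			prev = zpdesc;
		}
	}

	int main(void)
	{
		struct zpdesc d[3] = { 0 };
		struct zpdesc *descs[3] = { &d[0], &d[1], &d[2] };
		int n = 0;

		chain_zpdescs(NULL, descs, 3);
		for (struct zpdesc *p = descs[0]; p; p = p->next)
			n++;
		printf("chained %d zpdescs, head marked first=%d\n",
		       n, descs[0]->first);
		return 0;
	}

The point of the conversion is that the chain and the back-pointer now
live in named zpdesc fields (->next, ->zspage) instead of being
overloaded onto page->index and page->private.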
Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi (Tencent)
---
 mm/zpdesc.h   |   6 +++
 mm/zsmalloc.c | 115 +++++++++++++++++++++++++++++++++-----------------
 2 files changed, 82 insertions(+), 39 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 79ec40b03956..2293453f5d57 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -102,4 +102,10 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
 {
 	return page_zpdesc(pfn_to_page(pfn));
 }
+
+static inline void __zpdesc_set_movable(struct zpdesc *zpdesc,
+					const struct movable_operations *mops)
+{
+	__SetPageMovable(zpdesc_page(zpdesc), mops);
+}
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index bbc165cb587d..a8f390beeab8 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -248,6 +248,41 @@ static inline void *zpdesc_kmap_atomic(struct zpdesc *zpdesc)
 	return kmap_atomic(zpdesc_page(zpdesc));
 }
 
+static inline void zpdesc_set_zspage(struct zpdesc *zpdesc,
+				     struct zspage *zspage)
+{
+	zpdesc->zspage = zspage;
+}
+
+static inline void zpdesc_set_first(struct zpdesc *zpdesc)
+{
+	SetPagePrivate(zpdesc_page(zpdesc));
+}
+
+static inline void zpdesc_inc_zone_page_state(struct zpdesc *zpdesc)
+{
+	inc_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
+}
+
+static inline void zpdesc_dec_zone_page_state(struct zpdesc *zpdesc)
+{
+	dec_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
+}
+
+static inline struct zpdesc *alloc_zpdesc(gfp_t gfp)
+{
+	struct page *page = alloc_page(gfp);
+
+	return page_zpdesc(page);
+}
+
+static inline void free_zpdesc(struct zpdesc *zpdesc)
+{
+	struct page *page = zpdesc_page(zpdesc);
+
+	__free_page(page);
+}
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -954,35 +989,35 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 }
 
 static void create_page_chain(struct size_class *class, struct zspage *zspage,
-				struct page *pages[])
+				struct zpdesc *zpdescs[])
 {
 	int i;
-	struct page *page;
-	struct page *prev_page = NULL;
-	int nr_pages = class->pages_per_zspage;
+	struct zpdesc *zpdesc;
+	struct zpdesc *prev_zpdesc = NULL;
+	int nr_zpdescs = class->pages_per_zspage;
 
 	/*
 	 * Allocate individual pages and link them together as:
-	 * 1. all pages are linked together using page->index
-	 * 2. each sub-page point to zspage using page->private
+	 * 1. all pages are linked together using zpdesc->next
+	 * 2. each sub-page point to zspage using zpdesc->zspage
 	 *
-	 * we set PG_private to identify the first page (i.e. no other sub-page
+	 * we set PG_private to identify the first zpdesc (i.e. no other zpdesc
 	 * has this flag set).
 	 */
-	for (i = 0; i < nr_pages; i++) {
-		page = pages[i];
-		set_page_private(page, (unsigned long)zspage);
-		page->index = 0;
+	for (i = 0; i < nr_zpdescs; i++) {
+		zpdesc = zpdescs[i];
+		zpdesc_set_zspage(zpdesc, zspage);
+		zpdesc->next = NULL;
 		if (i == 0) {
-			zspage->first_zpdesc = page_zpdesc(page);
-			SetPagePrivate(page);
+			zspage->first_zpdesc = zpdesc;
+			zpdesc_set_first(zpdesc);
 			if (unlikely(class->objs_per_zspage == 1 &&
 					class->pages_per_zspage == 1))
 				SetZsHugePage(zspage);
 		} else {
-			prev_page->index = (unsigned long)page;
+			prev_zpdesc->next = zpdesc;
 		}
-		prev_page = page;
+		prev_zpdesc = zpdesc;
 	}
 }
 
@@ -994,7 +1029,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 					gfp_t gfp)
 {
 	int i;
-	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE];
+	struct zpdesc *zpdescs[ZS_MAX_PAGES_PER_ZSPAGE];
 	struct zspage *zspage = cache_alloc_zspage(pool, gfp);
 
 	if (!zspage)
@@ -1004,25 +1039,25 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 	migrate_lock_init(zspage);
 
 	for (i = 0; i < class->pages_per_zspage; i++) {
-		struct page *page;
+		struct zpdesc *zpdesc;
 
-		page = alloc_page(gfp);
-		if (!page) {
+		zpdesc = alloc_zpdesc(gfp);
+		if (!zpdesc) {
 			while (--i >= 0) {
-				dec_zone_page_state(pages[i], NR_ZSPAGES);
-				__ClearPageZsmalloc(pages[i]);
-				__free_page(pages[i]);
+				zpdesc_dec_zone_page_state(zpdescs[i]);
+				__ClearPageZsmalloc(zpdesc_page(zpdescs[i]));
+				free_zpdesc(zpdescs[i]);
 			}
 			cache_free_zspage(pool, zspage);
 			return NULL;
 		}
-		__SetPageZsmalloc(page);
+		__SetPageZsmalloc(zpdesc_page(zpdesc));
 
-		inc_zone_page_state(page, NR_ZSPAGES);
-		pages[i] = page;
+		zpdesc_inc_zone_page_state(zpdesc);
+		zpdescs[i] = zpdesc;
 	}
 
-	create_page_chain(class, zspage, pages);
+	create_page_chain(class, zspage, zpdescs);
 	init_zspage(class, zspage);
 	zspage->pool = pool;
 	zspage->class = class->index;
@@ -1753,26 +1788,28 @@ static void migrate_write_unlock(struct zspage *zspage)
 static const struct movable_operations zsmalloc_mops;
 
 static void replace_sub_page(struct size_class *class, struct zspage *zspage,
-				struct page *newpage, struct page *oldpage)
+				struct zpdesc *newzpdesc, struct zpdesc *oldzpdesc)
 {
-	struct page *page;
-	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE] = {NULL, };
+	struct zpdesc *zpdesc;
+	struct zpdesc *zpdescs[ZS_MAX_PAGES_PER_ZSPAGE] = {NULL, };
+	unsigned int first_obj_offset;
 	int idx = 0;
 
-	page = get_first_page(zspage);
+	zpdesc = get_first_zpdesc(zspage);
 	do {
-		if (page == oldpage)
-			pages[idx] = newpage;
+		if (zpdesc == oldzpdesc)
+			zpdescs[idx] = newzpdesc;
 		else
-			pages[idx] = page;
+			zpdescs[idx] = zpdesc;
 		idx++;
-	} while ((page = get_next_page(page)) != NULL);
+	} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
 
-	create_page_chain(class, zspage, pages);
-	set_first_obj_offset(newpage, get_first_obj_offset(oldpage));
+	create_page_chain(class, zspage, zpdescs);
+	first_obj_offset = get_first_obj_offset(zpdesc_page(oldzpdesc));
+	set_first_obj_offset(zpdesc_page(newzpdesc), first_obj_offset);
 	if (unlikely(ZsHugePage(zspage)))
-		newpage->index = oldpage->index;
-	__SetPageMovable(newpage, &zsmalloc_mops);
+		newzpdesc->handle = oldzpdesc->handle;
+	__zpdesc_set_movable(newzpdesc, &zsmalloc_mops);
 }
 
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
@@ -1845,7 +1882,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	}
 	kunmap_atomic(s_addr);
 
-	replace_sub_page(class, zspage, newpage, page);
+	replace_sub_page(class, zspage, page_zpdesc(newpage), page_zpdesc(page));
 	/*
 	 * Since we complete the data copy and set up new zspage structure,
 	 * it's okay to release migration_lock.