From patchwork Mon Jul 8 06:33:38 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 13726205
From: alexs@kernel.org
To: Vitaly Wool , Miaohe Lin , Andrew Morton ,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, minchan@kernel.org,
 willy@infradead.org, senozhatsky@chromium.org, david@redhat.com,
 42.hyeyoo@gmail.com, Yosry Ahmed , nphamcs@gmail.com
Cc: Alex Shi
Subject: [PATCH v3 17/20] mm/zsmalloc: convert get/set_first_obj_offset() to take zpdesc
Date: Mon, 8 Jul 2024 14:33:38 +0800
Message-ID: <20240708063344.1096626-18-alexs@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240708063344.1096626-1-alexs@kernel.org>
References: <20240708063344.1096626-1-alexs@kernel.org>

From: Alex Shi (Tencent)

Now that all users of get/set_first_obj_offset() have been converted to
use zpdesc, convert these helpers themselves to take zpdesc.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi (Tencent)
---
 mm/zpdesc.h   |  7 ++++++-
 mm/zsmalloc.c | 36 ++++++++++++++++++------------------
 2 files changed, 24 insertions(+), 19 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 72c8c072b4c8..f64e813f4847 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -15,6 +15,8 @@
  * @next:		Next zpdesc in a zspage in zsmalloc zpool
  * @handle:		For huge zspage in zsmalloc zpool
  * @zspage:		Pointer to zspage in zsmalloc
+ * @first_obj_offset:	First object offset in zsmalloc zpool
+ * @_refcount:		Indirectly use by page migration
  * @memcg_data:		Memory Control Group data.
  *
  * This struct overlays struct page for now. Do not modify without a good
@@ -31,7 +33,8 @@ struct zpdesc {
 		unsigned long handle;
 	};
 	struct zspage *zspage;
-	unsigned long _zp_pad_1;
+	unsigned int first_obj_offset;
+	atomic_t _refcount;
 #ifdef CONFIG_MEMCG
 	unsigned long memcg_data;
 #endif
@@ -45,6 +48,8 @@ ZPDESC_MATCH(mapping, mops);
 ZPDESC_MATCH(index, next);
 ZPDESC_MATCH(index, handle);
 ZPDESC_MATCH(private, zspage);
+ZPDESC_MATCH(page_type, first_obj_offset);
+ZPDESC_MATCH(_refcount, _refcount);
 #ifdef CONFIG_MEMCG
 ZPDESC_MATCH(memcg_data, memcg_data);
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 8b713ac03902..bb8b5f13a966 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -20,8 +20,8 @@
  *	zpdesc->next: links together all component pages of a zspage
  *		For the huge page, this is always 0, so we use this field
  *		to store handle.
- *	page->page_type: PG_zsmalloc, lower 16 bit locate the first object
- *		offset in a subpage of a zspage
+ *	zpdesc->first_obj_offset: PG_zsmalloc, lower 16 bit locate the first
+ *		object offset in a subpage of a zspage
  *
  * Usage of struct zpdesc(page) flags:
  *	PG_private: identifies the first component page
@@ -494,26 +494,26 @@ static struct zpdesc *get_first_zpdesc(struct zspage *zspage)
 
 #define FIRST_OBJ_PAGE_TYPE_MASK	0xffff
 
-static inline void reset_first_obj_offset(struct page *page)
+static inline void reset_first_obj_offset(struct zpdesc *zpdesc)
 {
-	VM_WARN_ON_ONCE(!PageZsmalloc(page));
-	page->page_type |= FIRST_OBJ_PAGE_TYPE_MASK;
+	VM_WARN_ON_ONCE(!PageZsmalloc(zpdesc_page(zpdesc)));
+	zpdesc->first_obj_offset |= FIRST_OBJ_PAGE_TYPE_MASK;
 }
 
-static inline unsigned int get_first_obj_offset(struct page *page)
+static inline unsigned int get_first_obj_offset(struct zpdesc *zpdesc)
 {
-	VM_WARN_ON_ONCE(!PageZsmalloc(page));
-	return page->page_type & FIRST_OBJ_PAGE_TYPE_MASK;
+	VM_WARN_ON_ONCE(!PageZsmalloc(zpdesc_page(zpdesc)));
+	return zpdesc->first_obj_offset & FIRST_OBJ_PAGE_TYPE_MASK;
 }
 
-static inline void set_first_obj_offset(struct page *page, unsigned int offset)
+static inline void set_first_obj_offset(struct zpdesc *zpdesc, unsigned int offset)
 {
 	/* With 16 bit available, we can support offsets into 64 KiB pages. */
 	BUILD_BUG_ON(PAGE_SIZE > SZ_64K);
-	VM_WARN_ON_ONCE(!PageZsmalloc(page));
+	VM_WARN_ON_ONCE(!PageZsmalloc(zpdesc_page(zpdesc)));
 	VM_WARN_ON_ONCE(offset & ~FIRST_OBJ_PAGE_TYPE_MASK);
-	page->page_type &= ~FIRST_OBJ_PAGE_TYPE_MASK;
-	page->page_type |= offset & FIRST_OBJ_PAGE_TYPE_MASK;
+	zpdesc->first_obj_offset &= ~FIRST_OBJ_PAGE_TYPE_MASK;
+	zpdesc->first_obj_offset |= offset & FIRST_OBJ_PAGE_TYPE_MASK;
 }
 
 static inline unsigned int get_freeobj(struct zspage *zspage)
@@ -850,7 +850,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
 	ClearPagePrivate(page);
 	zpdesc->zspage = NULL;
 	zpdesc->next = NULL;
-	reset_first_obj_offset(page);
+	reset_first_obj_offset(zpdesc);
 	__ClearPageZsmalloc(page);
 }
 
@@ -934,7 +934,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 		struct link_free *link;
 		void *vaddr;
 
-		set_first_obj_offset(zpdesc_page(zpdesc), off);
+		set_first_obj_offset(zpdesc, off);
 		vaddr = zpdesc_kmap_atomic(zpdesc);
 		link = (struct link_free *)vaddr + off / sizeof(*link);
 
@@ -1589,7 +1589,7 @@ static unsigned long find_alloced_obj(struct size_class *class,
 	unsigned long handle = 0;
 	void *addr = zpdesc_kmap_atomic(zpdesc);
 
-	offset = get_first_obj_offset(zpdesc_page(zpdesc));
+	offset = get_first_obj_offset(zpdesc);
 	offset += class->size * index;
 
 	while (offset < PAGE_SIZE) {
@@ -1784,8 +1784,8 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 	} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
 
 	create_page_chain(class, zspage, zpdescs);
-	first_obj_offset = get_first_obj_offset(zpdesc_page(oldzpdesc));
-	set_first_obj_offset(zpdesc_page(newzpdesc), first_obj_offset);
+	first_obj_offset = get_first_obj_offset(oldzpdesc);
+	set_first_obj_offset(newzpdesc, first_obj_offset);
 	if (unlikely(ZsHugePage(zspage)))
 		newzpdesc->handle = oldzpdesc->handle;
 	__zpdesc_set_movable(newzpdesc, &zsmalloc_mops);
@@ -1840,7 +1840,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	/* the migrate_write_lock protects zpage access via zs_map_object */
 	migrate_write_lock(zspage);
 
-	offset = get_first_obj_offset(zpdesc_page(zpdesc));
+	offset = get_first_obj_offset(zpdesc);
 	s_addr = zpdesc_kmap_atomic(zpdesc);
 
 	/*