From patchwork Wed May 22 21:03:38 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13671079
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton,
	"Matthew Wilcox (Oracle)", Mike Rapoport, Minchan Kim,
	Sergey Senozhatsky, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH RFC 3/6] mm/zsmalloc: use a proper page type
Date: Wed, 22 May 2024 23:03:38 +0200
Message-ID: <20240522210341.1030552-4-david@redhat.com>
In-Reply-To: <20240522210341.1030552-1-david@redhat.com>
References: <20240522210341.1030552-1-david@redhat.com>

Let's clean it up: use a proper page type and store our data (offset into
a page) in the lower 16 bit as documented. We'll have to restrict ourselves
to <= 64KB base page size (so the offset fits into 16 bit), which sounds
reasonable. Unfortunately, we don't have any space to store it elsewhere
for now.

Based on this, we should do a proper "struct zsdesc" conversion, as
proposed in [1]. This removes the last _mapcount/page_type offender.
[1] https://lore.kernel.org/all/20231130101242.2590384-1-42.hyeyoo@gmail.com/

Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: David Hildenbrand
---
 include/linux/page-flags.h |  3 +++
 mm/Kconfig                 |  1 +
 mm/zsmalloc.c              | 23 +++++++++++++++++++----
 3 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index ed9ac4b5233d..ccaf16656de9 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -959,6 +959,7 @@ PAGEFLAG_FALSE(HasHWPoisoned, has_hwpoisoned)
 #define PG_guard	0x00080000
 #define PG_hugetlb	0x00100000
 #define PG_slab		0x00200000
+#define PG_zsmalloc	0x00400000
 
 #define PageType(page, flag)						\
 	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
@@ -1073,6 +1074,8 @@ FOLIO_TYPE_OPS(hugetlb, hugetlb)
 FOLIO_TEST_FLAG_FALSE(hugetlb)
 #endif
 
+PAGE_TYPE_OPS(Zsmalloc, zsmalloc, zsmalloc)
+
 /**
  * PageHuge - Determine if the page belongs to hugetlbfs
  * @page: The page to test.
diff --git a/mm/Kconfig b/mm/Kconfig
index b4cb45255a54..0371d79b1b75 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -190,6 +190,7 @@ config ZSMALLOC
 	tristate
 	prompt "N:1 compression allocator (zsmalloc)" if ZSWAP
 	depends on MMU
+	depends on PAGE_SIZE_LESS_THAN_256KB # we want <= 64KB
 	help
 	  zsmalloc is a slab-based memory allocator designed to store
 	  pages of various compression levels efficiently. It achieves
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b42d3545ca85..6f0032e06242 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -20,7 +20,8 @@
  * page->index: links together all component pages of a zspage
  *	For the huge page, this is always 0, so we use this field
  *	to store handle.
- * page->page_type: first object offset in a subpage of zspage
+ * page->page_type: PG_zsmalloc, lower 16 bit locate the first object
+ *	offset in a subpage of a zspage
  *
  * Usage of struct page flags:
  * PG_private: identifies the first component page
@@ -450,14 +451,22 @@ static inline struct page *get_first_page(struct zspage *zspage)
 	return first_page;
 }
 
+static inline void reset_first_obj_offset(struct page *page)
+{
+	page->page_type |= 0xffff;
+}
+
 static inline unsigned int get_first_obj_offset(struct page *page)
 {
-	return page->page_type;
+	return page->page_type & 0xffff;
 }
 
 static inline void set_first_obj_offset(struct page *page, unsigned int offset)
 {
-	page->page_type = offset;
+	BUILD_BUG_ON(PAGE_SIZE & ~0xffff);
+	VM_WARN_ON_ONCE(offset & ~0xffff);
+	page->page_type &= ~0xffff;
+	page->page_type |= offset & 0xffff;
 }
 
 static inline unsigned int get_freeobj(struct zspage *zspage)
@@ -791,8 +800,9 @@ static void reset_page(struct page *page)
 	__ClearPageMovable(page);
 	ClearPagePrivate(page);
 	set_page_private(page, 0);
-	page_mapcount_reset(page);
 	page->index = 0;
+	reset_first_obj_offset(page);
+	__ClearPageZsmalloc(page);
 }
 
 static int trylock_zspage(struct zspage *zspage)
@@ -965,11 +975,13 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		if (!page) {
 			while (--i >= 0) {
 				dec_zone_page_state(pages[i], NR_ZSPAGES);
+				__ClearPageZsmalloc(pages[i]);
 				__free_page(pages[i]);
 			}
 			cache_free_zspage(pool, zspage);
 			return NULL;
 		}
 
+		__SetPageZsmalloc(page);
 		inc_zone_page_state(page, NR_ZSPAGES);
 		pages[i] = page;
@@ -1762,6 +1774,9 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
+	/* We're committed, tell the world that this is a Zsmalloc page. */
+	__SetPageZsmalloc(newpage);
+
 	/* The page is locked, so this pointer must remain valid */
 	zspage = get_zspage(page);
 	pool = zspage->pool;
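For readers following along, the bit layout the patch relies on can be exercised stand-alone. The sketch below is a minimal userspace model of the same idea, not the kernel code: the PAGE_TYPE_BASE value, the helper signatures, and the simulated page_type word are illustrative assumptions (the real helpers operate on page->page_type in place).

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of the patch's encoding: the type marker and PG_zsmalloc bit
 * occupy the upper bits of page_type, while the lower 16 bits carry
 * the first-object offset. Names/values here are assumptions for
 * illustration only.
 */
#define PAGE_TYPE_BASE  0xf0000000u	/* assumed "this is a type" pattern */
#define PG_ZSMALLOC     0x00400000u	/* mirrors PG_zsmalloc in the patch */
#define OBJ_OFFSET_MASK 0x0000ffffu	/* lower 16 bits: first object offset */

/* Clear the previous offset, store the new one; type bits are untouched. */
static uint32_t set_first_obj_offset(uint32_t page_type, uint32_t offset)
{
	/* Offset must fit in 16 bits, hence the <= 64KB page size limit. */
	assert((offset & ~OBJ_OFFSET_MASK) == 0);
	page_type &= ~OBJ_OFFSET_MASK;
	return page_type | (offset & OBJ_OFFSET_MASK);
}

static uint32_t get_first_obj_offset(uint32_t page_type)
{
	return page_type & OBJ_OFFSET_MASK;
}

/* Mirrors reset_first_obj_offset(): all-ones means "no offset stored". */
static uint32_t reset_first_obj_offset(uint32_t page_type)
{
	return page_type | OBJ_OFFSET_MASK;
}
```

Because the offset only ever touches the low 16 bits, `PageZsmalloc()`-style type checks against the upper bits keep working no matter which offset is stored, which is what lets one field serve both purposes.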