From patchwork Thu Jan 18 03:05:41 2024
X-Patchwork-Submitter: Chris Li
X-Patchwork-Id: 13522326
From: Chris Li
Date: Wed, 17 Jan 2024 19:05:41 -0800
Subject: [PATCH 1/2] mm: zswap.c: add xarray tree to zswap
MIME-Version: 1.0
Message-Id: <20240117-zswap-xarray-v1-1-6daa86c08fae@kernel.org>
References:
 <20240117-zswap-xarray-v1-0-6daa86c08fae@kernel.org>
In-Reply-To: <20240117-zswap-xarray-v1-0-6daa86c08fae@kernel.org>
To: Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Wei Xu, Yu Zhao,
 Greg Thelen, Chun-Tse Shao, Suren Baghdasaryan, Yosry Ahmed,
 Brian Geffon, Minchan Kim, Michal Hocko, Mel Gorman, Huang Ying,
 Nhat Pham, Johannes Weiner, Kairui Song, Zhongkun He, Kemeng Shi,
 Barry Song, "Matthew Wilcox (Oracle)", "Liam R. Howlett",
 Joel Fernandes, Chengming Zhou, Chris Li
X-Mailer: b4 0.12.3
The xarray is added alongside the zswap RB tree. Every lookup in the
xarray is checked against the result of the corresponding RB tree
operation. Rename the zswap RB tree functions to more generic names
without the RB part.

---
 mm/zswap.c | 60 ++++++++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 42 insertions(+), 18 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index f8bc9e089268..a40b0076722b 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -235,6 +235,7 @@ struct zswap_entry {
  */
 struct zswap_tree {
 	struct rb_root rbroot;
+	struct xarray xarray;
 	spinlock_t lock;
 };
 
@@ -462,9 +463,9 @@ static void zswap_lru_putback(struct list_lru *list_lru,
 /*********************************
 * rbtree functions
 **********************************/
-static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset)
+static struct zswap_entry *zswap_search(struct zswap_tree *tree, pgoff_t offset)
 {
-	struct rb_node *node = root->rb_node;
+	struct rb_node *node = tree->rbroot.rb_node;
 	struct zswap_entry *entry;
 	pgoff_t entry_offset;
 
@@ -475,8 +476,12 @@ static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset)
 			node = node->rb_left;
 		else if (entry_offset < offset)
 			node = node->rb_right;
-		else
+		else {
+			struct zswap_entry *e = xa_load(&tree->xarray, offset);
+
+			BUG_ON(entry != e);
 			return entry;
+		}
 	}
 	return NULL;
 }
@@ -485,13 +490,15 @@
  * In the case that a entry with the same offset is found, a pointer to
  * the existing entry is stored in dupentry and the function returns -EEXIST
  */
-static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
+static int zswap_insert(struct zswap_tree *tree, struct zswap_entry *entry,
 			struct zswap_entry **dupentry)
 {
+	struct rb_root *root = &tree->rbroot;
 	struct rb_node **link = &root->rb_node, *parent = NULL;
-	struct zswap_entry *myentry;
+	struct zswap_entry *myentry, *old;
 	pgoff_t myentry_offset, entry_offset = swp_offset(entry->swpentry);
 
+
 	while (*link) {
 		parent = *link;
 		myentry = rb_entry(parent, struct zswap_entry, rbnode);
@@ -501,19 +508,26 @@ static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
 		else if (myentry_offset < entry_offset)
 			link = &(*link)->rb_right;
 		else {
+			old = xa_load(&tree->xarray, entry_offset);
+			BUG_ON(old != myentry);
 			*dupentry = myentry;
 			return -EEXIST;
 		}
 	}
 	rb_link_node(&entry->rbnode, parent, link);
 	rb_insert_color(&entry->rbnode, root);
+	old = xa_store(&tree->xarray, entry_offset, entry, GFP_KERNEL);
 	return 0;
 }
 
-static bool zswap_rb_erase(struct rb_root *root, struct zswap_entry *entry)
+static bool zswap_erase(struct zswap_tree *tree, struct zswap_entry *entry)
 {
+	pgoff_t offset = swp_offset(entry->swpentry);
 	if (!RB_EMPTY_NODE(&entry->rbnode)) {
-		rb_erase(&entry->rbnode, root);
+		struct zswap_entry *old;
+		old = xa_erase(&tree->xarray, offset);
+		BUG_ON(old != entry);
+		rb_erase(&entry->rbnode, &tree->rbroot);
 		RB_CLEAR_NODE(&entry->rbnode);
 		return true;
 	}
@@ -575,12 +589,12 @@
 }
 
 /* caller must hold the tree lock */
-static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
+static struct zswap_entry *zswap_entry_find_get(struct zswap_tree *tree,
 				pgoff_t offset)
 {
 	struct zswap_entry *entry;
 
-	entry = zswap_rb_search(root, offset);
+	entry = zswap_search(tree, offset);
 	if (entry)
 		zswap_entry_get(entry);
 
@@ -845,7 +859,7 @@ static struct zswap_pool *zswap_pool_find_get(char *type, char *compressor)
 static void zswap_invalidate_entry(struct zswap_tree *tree,
 				   struct zswap_entry *entry)
 {
-	if (zswap_rb_erase(&tree->rbroot, entry))
+	if (zswap_erase(tree, entry))
 		zswap_entry_put(tree, entry);
 }
 
@@ -875,7 +889,7 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 	/* Check for invalidate() race */
 	spin_lock(&tree->lock);
-	if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
+	if (entry != zswap_search(tree, swpoffset))
 		goto unlock;
 
 	/* Hold a reference to prevent a free during writeback */
@@ -1407,6 +1421,8 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 				 struct zswap_tree *tree)
 {
 	swp_entry_t swpentry = entry->swpentry;
+	pgoff_t offset = swp_offset(swpentry);
+	struct zswap_entry *e;
 	struct folio *folio;
 	struct mempolicy *mpol;
 	bool folio_was_allocated;
@@ -1439,7 +1455,8 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	 * avoid overwriting a new swap folio with old compressed data.
 	 */
 	spin_lock(&tree->lock);
-	if (zswap_rb_search(&tree->rbroot, swp_offset(entry->swpentry)) != entry) {
+	e = zswap_search(tree, offset);
+	if (e != entry) {
 		spin_unlock(&tree->lock);
 		delete_from_swap_cache(folio);
 		return -ENOMEM;
@@ -1528,7 +1545,7 @@ bool zswap_store(struct folio *folio)
 	 * the tree, and it might be written back overriding the new data.
 	 */
 	spin_lock(&tree->lock);
-	dupentry = zswap_rb_search(&tree->rbroot, offset);
+	dupentry = zswap_search(tree, offset);
 	if (dupentry) {
 		zswap_duplicate_entry++;
 		zswap_invalidate_entry(tree, dupentry);
@@ -1671,7 +1688,7 @@ bool zswap_store(struct folio *folio)
 	 * found again here it means that something went wrong in the swap
 	 * cache.
 	 */
-	while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
+	while (zswap_insert(tree, entry, &dupentry) == -EEXIST) {
 		WARN_ON(1);
 		zswap_duplicate_entry++;
 		zswap_invalidate_entry(tree, dupentry);
@@ -1722,7 +1739,7 @@ bool zswap_load(struct folio *folio)
 
 	/* find */
 	spin_lock(&tree->lock);
-	entry = zswap_entry_find_get(&tree->rbroot, offset);
+	entry = zswap_entry_find_get(tree, offset);
 	if (!entry) {
 		spin_unlock(&tree->lock);
 		return false;
@@ -1762,7 +1779,7 @@ void zswap_invalidate(int type, pgoff_t offset)
 
 	/* find */
 	spin_lock(&tree->lock);
-	entry = zswap_rb_search(&tree->rbroot, offset);
+	entry = zswap_search(tree, offset);
 	if (!entry) {
 		/* entry was written back */
 		spin_unlock(&tree->lock);
@@ -1783,6 +1800,7 @@ void zswap_swapon(int type)
 	}
 
 	tree->rbroot = RB_ROOT;
+	xa_init(&tree->xarray);
 	spin_lock_init(&tree->lock);
 	zswap_trees[type] = tree;
 }
@@ -1790,15 +1808,21 @@ void zswap_swapon(int type)
 void zswap_swapoff(int type)
 {
 	struct zswap_tree *tree = zswap_trees[type];
-	struct zswap_entry *entry, *n;
+	struct zswap_entry *entry, *e, *n;
+	XA_STATE(xas, tree ? &tree->xarray : NULL, 0);
 
 	if (!tree)
 		return;
 
 	/* walk the tree and free everything */
 	spin_lock(&tree->lock);
+
+	xas_for_each(&xas, e, ULONG_MAX)
+		zswap_invalidate_entry(tree, e);
+
 	rbtree_postorder_for_each_entry_safe(entry, n, &tree->rbroot, rbnode)
-		zswap_free_entry(entry);
+		BUG_ON(entry);
+
 	tree->rbroot = RB_ROOT;
 	spin_unlock(&tree->lock);
 	kfree(tree);

From patchwork Thu Jan 18 03:05:42 2024
X-Patchwork-Submitter: Chris Li
X-Patchwork-Id: 13522324
From: Chris Li
Date: Wed, 17 Jan 2024 19:05:42 -0800
Subject: [PATCH 2/2] mm: zswap.c: remove RB tree
MIME-Version: 1.0
Message-Id: <20240117-zswap-xarray-v1-2-6daa86c08fae@kernel.org>
References: <20240117-zswap-xarray-v1-0-6daa86c08fae@kernel.org>
In-Reply-To: <20240117-zswap-xarray-v1-0-6daa86c08fae@kernel.org>
To: Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Wei Xu, Yu Zhao,
 Greg Thelen, Chun-Tse Shao, Suren Baghdasaryan, Yosry Ahmed,
 Brian Geffon, Minchan Kim, Michal Hocko, Mel Gorman, Huang Ying,
 Nhat Pham, Johannes Weiner, Kairui Song, Zhongkun He, Kemeng Shi,
 Barry Song, "Matthew Wilcox (Oracle)", "Liam R. Howlett",
 Joel Fernandes, Chengming Zhou, Chris Li
X-Mailer: b4 0.12.3
Remove the RB tree code and the RB tree data structure from zswap. The
xarray insert and erase code has been updated to use the XAS version of
the API, caching the lookup before the final xarray store. The zswap
tree spinlock hasn't been removed yet because it has users outside the
zswap tree; the zswap xarray functions themselves should work fine with
the xarray's internal lock and RCU, without the zswap tree lock.

This removes the RB node inside the zswap entry, saving about three
pointers per entry. Considering the extra overhead of the xarray lookup
tables, this should yield a net memory saving in zswap if the index is
dense. The zswap tree spinlock is still there to protect the zswap
entry; expect a follow-up change to merge it with the xarray lock.

---
 mm/zswap.c | 98 +++++++++++++++++++++++---------------------------------------
 1 file changed, 36 insertions(+), 62 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index a40b0076722b..555d5608d401 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -197,7 +197,6 @@ struct zswap_pool {
  * This structure contains the metadata for tracking a single compressed
  * page within zswap.
  *
- * rbnode - links the entry into red-black tree for the appropriate swap type
  * swpentry - associated swap entry, the offset indexes into the red-black tree
  * refcount - the number of outstanding reference to the entry. This is needed
  *            to protect against premature freeing of the entry by code
@@ -215,7 +214,6 @@
  * lru - handle to the pool's lru used to evict pages.
  */
 struct zswap_entry {
-	struct rb_node rbnode;
 	swp_entry_t swpentry;
 	int refcount;
 	unsigned int length;
@@ -234,7 +232,6 @@
  * - the refcount field of each entry in the tree
  */
 struct zswap_tree {
-	struct rb_root rbroot;
 	struct xarray xarray;
 	spinlock_t lock;
 };
@@ -357,7 +354,6 @@ static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
 	if (!entry)
 		return NULL;
 	entry->refcount = 1;
-	RB_CLEAR_NODE(&entry->rbnode);
 	return entry;
 }
 
@@ -465,25 +461,7 @@ static void zswap_lru_putback(struct list_lru *list_lru,
 **********************************/
 static struct zswap_entry *zswap_search(struct zswap_tree *tree, pgoff_t offset)
 {
-	struct rb_node *node = tree->rbroot.rb_node;
-	struct zswap_entry *entry;
-	pgoff_t entry_offset;
-
-	while (node) {
-		entry = rb_entry(node, struct zswap_entry, rbnode);
-		entry_offset = swp_offset(entry->swpentry);
-		if (entry_offset > offset)
-			node = node->rb_left;
-		else if (entry_offset < offset)
-			node = node->rb_right;
-		else {
-			struct zswap_entry *e = xa_load(&tree->xarray, offset);
-
-			BUG_ON(entry != e);
-			return entry;
-		}
-	}
-	return NULL;
+	return xa_load(&tree->xarray, offset);
 }
 
 /*
@@ -493,45 +471,47 @@ static struct zswap_entry *zswap_search(struct zswap_tree *tree, pgoff_t offset)
 static int zswap_insert(struct zswap_tree *tree, struct zswap_entry *entry,
 			struct zswap_entry **dupentry)
 {
-	struct rb_root *root = &tree->rbroot;
-	struct rb_node **link = &root->rb_node, *parent = NULL;
-	struct zswap_entry *myentry, *old;
-	pgoff_t myentry_offset, entry_offset = swp_offset(entry->swpentry);
-
-
-	while (*link) {
-		parent = *link;
-		myentry = rb_entry(parent, struct zswap_entry, rbnode);
-		myentry_offset = swp_offset(myentry->swpentry);
-		if (myentry_offset > entry_offset)
-			link = &(*link)->rb_left;
-		else if (myentry_offset < entry_offset)
-			link = &(*link)->rb_right;
-		else {
-			old = xa_load(&tree->xarray, entry_offset);
-			BUG_ON(old != myentry);
-			*dupentry = myentry;
+	struct zswap_entry *e;
+	pgoff_t offset = swp_offset(entry->swpentry);
+	XA_STATE(xas, &tree->xarray, offset);
+
+	do {
+		xas_lock_irq(&xas);
+		do {
+			e = xas_load(&xas);
+			if (xa_is_zero(e))
+				e = NULL;
+		} while (xas_retry(&xas, e));
+		if (xas_valid(&xas) && e) {
+			xas_unlock_irq(&xas);
+			*dupentry = e;
 			return -EEXIST;
 		}
-	}
-	rb_link_node(&entry->rbnode, parent, link);
-	rb_insert_color(&entry->rbnode, root);
-	old = xa_store(&tree->xarray, entry_offset, entry, GFP_KERNEL);
-	return 0;
+		xas_store(&xas, entry);
+		xas_unlock_irq(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
+	return xas_error(&xas);
 }
 
 static bool zswap_erase(struct zswap_tree *tree, struct zswap_entry *entry)
 {
+	struct zswap_entry *e;
 	pgoff_t offset = swp_offset(entry->swpentry);
-	if (!RB_EMPTY_NODE(&entry->rbnode)) {
-		struct zswap_entry *old;
-		old = xa_erase(&tree->xarray, offset);
-		BUG_ON(old != entry);
-		rb_erase(&entry->rbnode, &tree->rbroot);
-		RB_CLEAR_NODE(&entry->rbnode);
-		return true;
-	}
-	return false;
+	XA_STATE(xas, &tree->xarray, offset);
+
+	do {
+		xas_lock_irq(&xas);
+		do {
+			e = xas_load(&xas);
+		} while (xas_retry(&xas, e));
+		if (xas_valid(&xas) && e != entry) {
+			xas_unlock_irq(&xas);
+			return false;
+		}
+		xas_store(&xas, NULL);
+		xas_unlock_irq(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
+	return !xas_error(&xas);
 }
 
 static struct zpool *zswap_find_zpool(struct zswap_entry *entry)
@@ -583,7 +563,6 @@ static void zswap_entry_put(struct zswap_tree *tree,
 	WARN_ON_ONCE(refcount < 0);
 	if (refcount == 0) {
-		WARN_ON_ONCE(!RB_EMPTY_NODE(&entry->rbnode));
 		zswap_free_entry(entry);
 	}
 }
@@ -1799,7 +1778,6 @@ void zswap_swapon(int type)
 		return;
 	}
 
-	tree->rbroot = RB_ROOT;
 	xa_init(&tree->xarray);
 	spin_lock_init(&tree->lock);
 	zswap_trees[type] = tree;
@@ -1808,7 +1786,7 @@ void zswap_swapon(int type)
 void zswap_swapoff(int type)
 {
 	struct zswap_tree *tree = zswap_trees[type];
-	struct zswap_entry *entry, *e, *n;
+	struct zswap_entry *e;
 	XA_STATE(xas, tree ? &tree->xarray : NULL, 0);
 
 	if (!tree)
@@ -1820,10 +1798,6 @@ void zswap_swapoff(int type)
 	xas_for_each(&xas, e, ULONG_MAX)
 		zswap_invalidate_entry(tree, e);
 
-	rbtree_postorder_for_each_entry_safe(entry, n, &tree->rbroot, rbnode)
-		BUG_ON(entry);
-
-	tree->rbroot = RB_ROOT;
 	spin_unlock(&tree->lock);
 	kfree(tree);
 	zswap_trees[type] = NULL;
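The zswap_insert() rewrite in patch 2 hinges on one semantic point: the duplicate check (xas_load) and the store (xas_store) happen under a single hold of the xarray lock, so no other thread can slip an entry in between them. A minimal userspace sketch of just that semantics, with hypothetical names (toy_tree, toy_insert) and a fixed slot array plus a pthread mutex standing in for the kernel xarray and its internal lock, not the real allocation/retry machinery:

```c
#include <pthread.h>
#include <errno.h>
#include <stddef.h>

#define NSLOTS 64

/* Toy stand-in for the xarray: a fixed slot table guarded by one lock,
 * mirroring how xas_load()/xas_store() both run under xas_lock_irq(). */
struct toy_tree {
	void *slots[NSLOTS];
	pthread_mutex_t lock;
};

/* Mirrors zswap_insert()'s contract: if an entry already occupies the
 * offset, report it via *dupentry and fail with -EEXIST; otherwise
 * publish the new entry. Check and store share one lock hold. */
static int toy_insert(struct toy_tree *t, unsigned long offset,
		      void *entry, void **dupentry)
{
	int ret = 0;

	pthread_mutex_lock(&t->lock);
	if (t->slots[offset]) {		/* like xas_load() finding e */
		*dupentry = t->slots[offset];
		ret = -EEXIST;
	} else {
		t->slots[offset] = entry;	/* like xas_store() */
	}
	pthread_mutex_unlock(&t->lock);
	return ret;
}
```

The real kernel loop additionally retries via xas_nomem() when the xarray needs to allocate nodes, and drops/reacquires the lock around that allocation; the toy table has no allocation step, so that part has no analogue here.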