From patchwork Thu Jan 18 03:05:42 2024
X-Patchwork-Submitter: Chris Li <chrisl@kernel.org>
X-Patchwork-Id: 13522324
From: Chris Li <chrisl@kernel.org>
Date: Wed, 17 Jan 2024 19:05:42 -0800
Subject: [PATCH 2/2] mm: zswap.c: remove RB tree
MIME-Version: 1.0
Message-Id: <20240117-zswap-xarray-v1-2-6daa86c08fae@kernel.org>
References:
 <20240117-zswap-xarray-v1-0-6daa86c08fae@kernel.org>
In-Reply-To: <20240117-zswap-xarray-v1-0-6daa86c08fae@kernel.org>
To: Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Wei Xu, Yu Zhao,
 Greg Thelen, Chun-Tse Shao, Suren Baghdasaryan, Yosry Ahmed,
 Brian Geffon, Minchan Kim, Michal Hocko, Mel Gorman, Huang Ying,
 Nhat Pham, Johannes Weiner, Kairui Song, Zhongkun He, Kemeng Shi,
 Barry Song, "Matthew Wilcox (Oracle)", "Liam R. Howlett",
 Joel Fernandes, Chengming Zhou, Chris Li
X-Mailer: b4 0.12.3

Remove the RB tree code and the RB tree data structure from zswap.
The xarray insert and erase code has been updated to use the XAS version of
the API, which caches the lookup state before the final xarray store.

The zswap tree spinlock has not been removed yet because it still has users
outside of the tree itself. The zswap xarray functions are safe without it:
the xarray takes its internal lock and uses RCU for lookups.

This change removes the RB node embedded in the zswap entry, saving three
pointers per entry. Even considering the extra overhead of the xarray's own
lookup tables, this should be a net memory saving for zswap when the index
is dense.

The zswap tree spinlock is still there to protect the zswap entry; a
follow-up change is expected to merge it with the xarray lock.
---
 mm/zswap.c | 98 +++++++++++++++++++++++---------------------------------------
 1 file changed, 36 insertions(+), 62 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index a40b0076722b..555d5608d401 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -197,7 +196,6 @@ struct zswap_pool {
  * This structure contains the metadata for tracking a single compressed
  * page within zswap.
  *
- * rbnode - links the entry into red-black tree for the appropriate swap type
  * swpentry - associated swap entry, the offset indexes into the red-black tree
  * refcount - the number of outstanding reference to the entry. This is needed
  *            to protect against premature freeing of the entry by code
@@ -215,7 +214,6 @@ struct zswap_pool {
  * lru - handle to the pool's lru used to evict pages.
  */
 struct zswap_entry {
-	struct rb_node rbnode;
 	swp_entry_t swpentry;
 	int refcount;
 	unsigned int length;
@@ -234,7 +232,6 @@ struct zswap_entry {
  * - the refcount field of each entry in the tree
  */
 struct zswap_tree {
-	struct rb_root rbroot;
 	struct xarray xarray;
 	spinlock_t lock;
 };
@@ -357,7 +354,6 @@ static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
 	if (!entry)
 		return NULL;
 	entry->refcount = 1;
-	RB_CLEAR_NODE(&entry->rbnode);
 	return entry;
 }
 
@@ -465,25 +461,7 @@ static void zswap_lru_putback(struct list_lru *list_lru,
 **********************************/
 static struct zswap_entry *zswap_search(struct zswap_tree *tree, pgoff_t offset)
 {
-	struct rb_node *node = tree->rbroot.rb_node;
-	struct zswap_entry *entry;
-	pgoff_t entry_offset;
-
-	while (node) {
-		entry = rb_entry(node, struct zswap_entry, rbnode);
-		entry_offset = swp_offset(entry->swpentry);
-		if (entry_offset > offset)
-			node = node->rb_left;
-		else if (entry_offset < offset)
-			node = node->rb_right;
-		else {
-			struct zswap_entry *e = xa_load(&tree->xarray, offset);
-
-			BUG_ON(entry != e);
-			return entry;
-		}
-	}
-	return NULL;
+	return xa_load(&tree->xarray, offset);
 }
 
 /*
@@ -493,45 +471,47 @@ static struct zswap_entry *zswap_search(struct zswap_tree *tree, pgoff_t offset)
 static int zswap_insert(struct zswap_tree *tree, struct zswap_entry *entry,
 			struct zswap_entry **dupentry)
 {
-	struct rb_root *root = &tree->rbroot;
-	struct rb_node **link = &root->rb_node, *parent = NULL;
-	struct zswap_entry *myentry, *old;
-	pgoff_t myentry_offset, entry_offset = swp_offset(entry->swpentry);
-
-
-	while (*link) {
-		parent = *link;
-		myentry = rb_entry(parent, struct zswap_entry, rbnode);
-		myentry_offset = swp_offset(myentry->swpentry);
-		if (myentry_offset > entry_offset)
-			link = &(*link)->rb_left;
-		else if (myentry_offset < entry_offset)
-			link = &(*link)->rb_right;
-		else {
-			old = xa_load(&tree->xarray, entry_offset);
-			BUG_ON(old != myentry);
-			*dupentry = myentry;
+	struct zswap_entry *e;
+	pgoff_t offset = swp_offset(entry->swpentry);
+	XA_STATE(xas, &tree->xarray, offset);
+
+	do {
+		xas_lock_irq(&xas);
+		do {
+			e = xas_load(&xas);
+			if (xa_is_zero(e))
+				e = NULL;
+		} while (xas_retry(&xas, e));
+		if (xas_valid(&xas) && e) {
+			xas_unlock_irq(&xas);
+			*dupentry = e;
 			return -EEXIST;
 		}
-	}
-	rb_link_node(&entry->rbnode, parent, link);
-	rb_insert_color(&entry->rbnode, root);
-	old = xa_store(&tree->xarray, entry_offset, entry, GFP_KERNEL);
-	return 0;
+		xas_store(&xas, entry);
+		xas_unlock_irq(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
+	return xas_error(&xas);
 }
 
 static bool zswap_erase(struct zswap_tree *tree, struct zswap_entry *entry)
 {
+	struct zswap_entry *e;
 	pgoff_t offset = swp_offset(entry->swpentry);
-	if (!RB_EMPTY_NODE(&entry->rbnode)) {
-		struct zswap_entry *old;
-		old = xa_erase(&tree->xarray, offset);
-		BUG_ON(old != entry);
-		rb_erase(&entry->rbnode, &tree->rbroot);
-		RB_CLEAR_NODE(&entry->rbnode);
-		return true;
-	}
-	return false;
+	XA_STATE(xas, &tree->xarray, offset);
+
+	do {
+		xas_lock_irq(&xas);
+		do {
+			e = xas_load(&xas);
+		} while (xas_retry(&xas, e));
+		if (xas_valid(&xas) && e != entry) {
+			xas_unlock_irq(&xas);
+			return false;
+		}
+		xas_store(&xas, NULL);
+		xas_unlock_irq(&xas);
+	} while (xas_nomem(&xas, GFP_KERNEL));
+	return !xas_error(&xas);
 }
 
 static struct zpool *zswap_find_zpool(struct zswap_entry *entry)
@@ -583,7 +563,6 @@ static void zswap_entry_put(struct zswap_tree *tree,
 	WARN_ON_ONCE(refcount < 0);
 
 	if (refcount == 0) {
-		WARN_ON_ONCE(!RB_EMPTY_NODE(&entry->rbnode));
 		zswap_free_entry(entry);
 	}
 }
@@ -1799,7 +1778,6 @@ void zswap_swapon(int type)
 		return;
 	}
 
-	tree->rbroot = RB_ROOT;
 	xa_init(&tree->xarray);
 	spin_lock_init(&tree->lock);
 	zswap_trees[type] = tree;
@@ -1808,7 +1786,7 @@ void zswap_swapoff(int type)
 {
 	struct zswap_tree *tree = zswap_trees[type];
-	struct zswap_entry *entry, *e, *n;
+	struct zswap_entry *e;
 	XA_STATE(xas, tree ? &tree->xarray : NULL, 0);
 
 	if (!tree)
 		return;
@@ -1820,10 +1798,6 @@ void zswap_swapoff(int type)
 	xas_for_each(&xas, e, ULONG_MAX)
 		zswap_invalidate_entry(tree, e);
 
-	rbtree_postorder_for_each_entry_safe(entry, n, &tree->rbroot, rbnode)
-		BUG_ON(entry);
-
-	tree->rbroot = RB_ROOT;
 	spin_unlock(&tree->lock);
 	kfree(tree);
 	zswap_trees[type] = NULL;
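
For reviewers less familiar with the advanced XArray (XAS) API, the insert
path above boils down to a load-check-store loop. Below is a compacted
sketch of that pattern with a hypothetical example_insert() helper; it uses
kernel-only APIs, so it is an illustration rather than a standalone program
(the real logic is in zswap_insert() in the diff):

```c
/* Sketch only: kernel APIs, hypothetical helper, not buildable standalone. */
static int example_insert(struct xarray *xa, unsigned long index,
			  void *item, void **dup)
{
	XA_STATE(xas, xa, index);	/* cursor that caches the tree walk */
	void *e;

	do {
		xas_lock_irq(&xas);
		/* Walk to the slot once; skip over internal retry entries. */
		do {
			e = xas_load(&xas);
		} while (xas_retry(&xas, e));
		if (e) {
			/* Slot occupied: report the duplicate to the caller. */
			xas_unlock_irq(&xas);
			*dup = e;
			return -EEXIST;
		}
		/* Store reuses the cached walk; no second lookup needed. */
		xas_store(&xas, item);
		xas_unlock_irq(&xas);
		/* On ENOMEM, xas_nomem() allocates outside the lock and retries. */
	} while (xas_nomem(&xas, GFP_KERNEL));

	return xas_error(&xas);
}
```

This is the advantage over the plain xa_load()/xa_store() pair used before:
the XA_STATE cursor makes the lookup and the store a single walk, and the
xas_nomem() loop keeps allocation out of the locked section.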