From patchwork Wed Nov 27 22:53:23 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 13887410
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com, akpm@linux-foundation.org
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v1 1/2] mm: zswap: Modified zswap_store_page() to process multiple pages in a folio.
Date: Wed, 27 Nov 2024 14:53:23 -0800
Message-Id: <20241127225324.6770-2-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241127225324.6770-1-kanchana.p.sridhar@intel.com>
References: <20241127225324.6770-1-kanchana.p.sridhar@intel.com>
Modified zswap_store() to store the folio in batches of
SWAP_CRYPTO_BATCH_SIZE pages. Accordingly, refactored zswap_store_page()
into zswap_store_pages(), which processes a range of pages in the folio.
zswap_store_pages() is a vectorized version of zswap_store_page(). For
now, zswap_store_pages() sequentially compresses these pages with
zswap_compress().

These changes are a follow-up to code review comments received for [1],
and are intended to set up zswap_store() for batching with Intel IAA.
[1]: https://patchwork.kernel.org/project/linux-mm/patch/20241123070127.332773-11-kanchana.p.sridhar@intel.com/

Signed-off-by: Kanchana P Sridhar
---
 include/linux/zswap.h |   1 +
 mm/zswap.c            | 154 ++++++++++++++++++++++++------------------
 2 files changed, 88 insertions(+), 67 deletions(-)

diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index d961ead91bf1..05a81e750744 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -7,6 +7,7 @@

 struct lruvec;

+#define SWAP_CRYPTO_BATCH_SIZE 8UL
 extern atomic_long_t zswap_stored_pages;

 #ifdef CONFIG_ZSWAP
diff --git a/mm/zswap.c b/mm/zswap.c
index f6316b66fb23..b09d1023e775 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1409,78 +1409,96 @@ static void shrink_worker(struct work_struct *w)
 * main API
 **********************************/

-static ssize_t zswap_store_page(struct page *page,
-                                struct obj_cgroup *objcg,
-                                struct zswap_pool *pool)
+/*
+ * Store multiple pages in @folio, starting from the page at index @si up to
+ * and including the page at index @ei.
+ */
+static ssize_t zswap_store_pages(struct folio *folio,
+                                 long si,
+                                 long ei,
+                                 struct obj_cgroup *objcg,
+                                 struct zswap_pool *pool)
 {
-        swp_entry_t page_swpentry = page_swap_entry(page);
+        struct page *page;
+        swp_entry_t page_swpentry;
         struct zswap_entry *entry, *old;
+        size_t compressed_bytes = 0;
+        u8 nr_pages = ei - si + 1;
+        u8 i;
+
+        for (i = 0; i < nr_pages; ++i) {
+                page = folio_page(folio, si + i);
+                page_swpentry = page_swap_entry(page);
+
+                /* allocate entry */
+                entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
+                if (!entry) {
+                        zswap_reject_kmemcache_fail++;
+                        return -EINVAL;
+                }

-        /* allocate entry */
-        entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
-        if (!entry) {
-                zswap_reject_kmemcache_fail++;
-                return -EINVAL;
-        }
-
-        if (!zswap_compress(page, entry, pool))
-                goto compress_failed;
+                if (!zswap_compress(page, entry, pool))
+                        goto compress_failed;

-        old = xa_store(swap_zswap_tree(page_swpentry),
-                       swp_offset(page_swpentry),
-                       entry, GFP_KERNEL);
-        if (xa_is_err(old)) {
-                int err = xa_err(old);
+                old = xa_store(swap_zswap_tree(page_swpentry),
+                               swp_offset(page_swpentry),
+                               entry, GFP_KERNEL);
+                if (xa_is_err(old)) {
+                        int err = xa_err(old);

-                WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
-                zswap_reject_alloc_fail++;
-                goto store_failed;
-        }
+                        WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
+                        zswap_reject_alloc_fail++;
+                        goto store_failed;
+                }

-        /*
-         * We may have had an existing entry that became stale when
-         * the folio was redirtied and now the new version is being
-         * swapped out. Get rid of the old.
-         */
-        if (old)
-                zswap_entry_free(old);
+                /*
+                 * We may have had an existing entry that became stale when
+                 * the folio was redirtied and now the new version is being
+                 * swapped out. Get rid of the old.
+                 */
+                if (old)
+                        zswap_entry_free(old);

-        /*
-         * The entry is successfully compressed and stored in the tree, there is
-         * no further possibility of failure. Grab refs to the pool and objcg.
-         * These refs will be dropped by zswap_entry_free() when the entry is
-         * removed from the tree.
-         */
-        zswap_pool_get(pool);
-        if (objcg)
-                obj_cgroup_get(objcg);
+                /*
+                 * The entry is successfully compressed and stored in the tree, there is
+                 * no further possibility of failure. Grab refs to the pool and objcg.
+                 * These refs will be dropped by zswap_entry_free() when the entry is
+                 * removed from the tree.
+                 */
+                zswap_pool_get(pool);
+                if (objcg)
+                        obj_cgroup_get(objcg);

-        /*
-         * We finish initializing the entry while it's already in xarray.
-         * This is safe because:
-         *
-         * 1. Concurrent stores and invalidations are excluded by folio lock.
-         *
-         * 2. Writeback is excluded by the entry not being on the LRU yet.
-         *    The publishing order matters to prevent writeback from seeing
-         *    an incoherent entry.
-         */
-        entry->pool = pool;
-        entry->swpentry = page_swpentry;
-        entry->objcg = objcg;
-        entry->referenced = true;
-        if (entry->length) {
-                INIT_LIST_HEAD(&entry->lru);
-                zswap_lru_add(&zswap_list_lru, entry);
-        }
+                /*
+                 * We finish initializing the entry while it's already in xarray.
+                 * This is safe because:
+                 *
+                 * 1. Concurrent stores and invalidations are excluded by folio lock.
+                 *
+                 * 2. Writeback is excluded by the entry not being on the LRU yet.
+                 *    The publishing order matters to prevent writeback from seeing
+                 *    an incoherent entry.
+                 */
+                entry->pool = pool;
+                entry->swpentry = page_swpentry;
+                entry->objcg = objcg;
+                entry->referenced = true;
+                if (entry->length) {
+                        INIT_LIST_HEAD(&entry->lru);
+                        zswap_lru_add(&zswap_list_lru, entry);
+                }

-        return entry->length;
+                compressed_bytes += entry->length;
+                continue;

 store_failed:
-        zpool_free(pool->zpool, entry->handle);
+                zpool_free(pool->zpool, entry->handle);
 compress_failed:
-        zswap_entry_cache_free(entry);
-        return -EINVAL;
+                zswap_entry_cache_free(entry);
+                return -EINVAL;
+        }
+
+        return compressed_bytes;
 }

 bool zswap_store(struct folio *folio)
@@ -1492,7 +1510,7 @@ bool zswap_store(struct folio *folio)
         struct zswap_pool *pool;
         size_t compressed_bytes = 0;
         bool ret = false;
-        long index;
+        long si, ei, incr = SWAP_CRYPTO_BATCH_SIZE;

         VM_WARN_ON_ONCE(!folio_test_locked(folio));
         VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
@@ -1526,11 +1544,13 @@ bool zswap_store(struct folio *folio)
                 mem_cgroup_put(memcg);
         }

-        for (index = 0; index < nr_pages; ++index) {
-                struct page *page = folio_page(folio, index);
+        /* Store the folio in batches of SWAP_CRYPTO_BATCH_SIZE pages. */
+        for (si = 0, ei = min(si + incr - 1, nr_pages - 1);
+             ((si < nr_pages) && (ei < nr_pages));
+             si = ei + 1, ei = min(si + incr - 1, nr_pages - 1)) {
                 ssize_t bytes;

-                bytes = zswap_store_page(page, objcg, pool);
+                bytes = zswap_store_pages(folio, si, ei, objcg, pool);
                 if (bytes < 0)
                         goto put_pool;
                 compressed_bytes += bytes;
@@ -1565,9 +1585,9 @@ bool zswap_store(struct folio *folio)
                 struct zswap_entry *entry;
                 struct xarray *tree;

-                for (index = 0; index < nr_pages; ++index) {
-                        tree = swap_zswap_tree(swp_entry(type, offset + index));
-                        entry = xa_erase(tree, offset + index);
+                for (si = 0; si < nr_pages; ++si) {
+                        tree = swap_zswap_tree(swp_entry(type, offset + si));
+                        entry = xa_erase(tree, offset + si);
                         if (entry)
                                 zswap_entry_free(entry);
                 }

From patchwork Wed Nov 27 22:53:24 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 13887411
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com, akpm@linux-foundation.org
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v1 2/2] mm: zswap: zswap_store_pages() simplifications for batching.
Date: Wed, 27 Nov 2024 14:53:24 -0800
Message-Id: <20241127225324.6770-3-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241127225324.6770-1-kanchana.p.sridhar@intel.com>
References: <20241127225324.6770-1-kanchana.p.sridhar@intel.com>

In order to set up zswap_store_pages() to enable a clean batching implementation in [1], this patch implements the
following changes:

1) Addition of zswap_alloc_entries(), which allocates zswap entries for
   all pages in the specified range of the folio, upfront. If this
   fails, we return an error status to zswap_store().

2) Addition of zswap_compress_pages(), which calls zswap_compress() for
   each page and returns false if any zswap_compress() fails, so that
   zswap_store_pages() can clean up the resources allocated and return
   an error status to zswap_store().

3) A "store_pages_failed" label that is a catch-all for all failure
   points in zswap_store_pages(). This facilitates cleaner error
   handling within zswap_store_pages(), which will become important for
   IAA compress batching in [1].

[1]: https://patchwork.kernel.org/project/linux-mm/list/?series=911935

Signed-off-by: Kanchana P Sridhar
---
 mm/zswap.c | 93 +++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 71 insertions(+), 22 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index b09d1023e775..db80c66e2205 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1409,9 +1409,56 @@ static void shrink_worker(struct work_struct *w)
 * main API
 **********************************/

+static bool zswap_compress_pages(struct page *pages[],
+                                 struct zswap_entry *entries[],
+                                 u8 nr_pages,
+                                 struct zswap_pool *pool)
+{
+        u8 i;
+
+        for (i = 0; i < nr_pages; ++i) {
+                if (!zswap_compress(pages[i], entries[i], pool))
+                        return false;
+        }
+
+        return true;
+}
+
+/*
+ * Allocate @nr zswap entries for storing @nr pages in a folio.
+ * If any one of the entry allocations fails, delete all entries allocated
+ * thus far, and return false.
+ * If @nr entries are successfully allocated, set each entry's "handle"
+ * to "ERR_PTR(-EINVAL)" to denote that the handle has not yet been allocated.
+ */
+static bool zswap_alloc_entries(struct zswap_entry *entries[], int node_id, u8 nr)
+{
+        u8 i;
+
+        for (i = 0; i < nr; ++i) {
+                entries[i] = zswap_entry_cache_alloc(GFP_KERNEL, node_id);
+                if (!entries[i]) {
+                        u8 j;
+
+                        zswap_reject_kmemcache_fail++;
+                        for (j = 0; j < i; ++j)
+                                zswap_entry_cache_free(entries[j]);
+                        return false;
+                }
+
+                entries[i]->handle = (unsigned long)ERR_PTR(-EINVAL);
+        }
+
+        return true;
+}
+
 /*
  * Store multiple pages in @folio, starting from the page at index @si up to
  * and including the page at index @ei.
+ * The error handling from all failure points is handled by the
+ * "store_pages_failed" label, based on the initial ERR_PTR(-EINVAL) value for
+ * the zswap_entry's handle set by zswap_alloc_entries(), and the fact that the
+ * entry's handle is subsequently modified only upon a successful zpool_malloc().
  */
 static ssize_t zswap_store_pages(struct folio *folio,
                                  long si,
@@ -1419,26 +1466,25 @@ static ssize_t zswap_store_pages(struct folio *folio,
                                  struct obj_cgroup *objcg,
                                  struct zswap_pool *pool)
 {
-        struct page *page;
-        swp_entry_t page_swpentry;
-        struct zswap_entry *entry, *old;
+        struct zswap_entry *entries[SWAP_CRYPTO_BATCH_SIZE], *old;
+        struct page *pages[SWAP_CRYPTO_BATCH_SIZE];
         size_t compressed_bytes = 0;
         u8 nr_pages = ei - si + 1;
         u8 i;

-        for (i = 0; i < nr_pages; ++i) {
-                page = folio_page(folio, si + i);
-                page_swpentry = page_swap_entry(page);
+        /* allocate entries */
+        if (!zswap_alloc_entries(entries, folio_nid(folio), nr_pages))
+                return -EINVAL;

-                /* allocate entry */
-                entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
-                if (!entry) {
-                        zswap_reject_kmemcache_fail++;
-                        return -EINVAL;
-                }
+        for (i = 0; i < nr_pages; ++i)
+                pages[i] = folio_page(folio, si + i);

-                if (!zswap_compress(page, entry, pool))
-                        goto compress_failed;
+        if (!zswap_compress_pages(pages, entries, nr_pages, pool))
+                goto store_pages_failed;
+
+        for (i = 0; i < nr_pages; ++i) {
+                swp_entry_t page_swpentry = page_swap_entry(pages[i]);
+                struct zswap_entry *entry = entries[i];

                 old = xa_store(swap_zswap_tree(page_swpentry),
                                swp_offset(page_swpentry),
@@ -1448,7 +1494,7 @@ static ssize_t zswap_store_pages(struct folio *folio,

                         WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
                         zswap_reject_alloc_fail++;
-                        goto store_failed;
+                        goto store_pages_failed;
                 }

                 /*
@@ -1489,16 +1535,19 @@ static ssize_t zswap_store_pages(struct folio *folio,
                 }

                 compressed_bytes += entry->length;
-                continue;
-
-store_failed:
-                zpool_free(pool->zpool, entry->handle);
-compress_failed:
-                zswap_entry_cache_free(entry);
-                return -EINVAL;
         }

         return compressed_bytes;
+
+store_pages_failed:
+        for (i = 0; i < nr_pages; ++i) {
+                if (!IS_ERR_VALUE(entries[i]->handle))
+                        zpool_free(pool->zpool, entries[i]->handle);
+
+                zswap_entry_cache_free(entries[i]);
+        }
+
+        return -EINVAL;
 }

 bool zswap_store(struct folio *folio)