From patchwork Fri Feb 28 10:00:24 2025
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13996122
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
 yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev,
 usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
 ying.huang@linux.alibaba.com, akpm@linux-foundation.org,
 linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
 davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org,
 ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
 kanchana.p.sridhar@intel.com
Subject: [PATCH v7 15/15] mm: zswap: Compress batching with request chaining
 in zswap_store() of large folios.
Date: Fri, 28 Feb 2025 02:00:24 -0800
Message-Id: <20250228100024.332528-16-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20250228100024.332528-1-kanchana.p.sridhar@intel.com>
References: <20250228100024.332528-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0

This patch introduces zswap_batch_compress(), which takes an index within a
folio and sets up a request chain to compress multiple pages of that folio
as a batch.
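At a glance, the request chain that zswap_batch_compress() sets up looks like
the following sketch (condensed from the hunk added below; error handling and
the zpool stores are omitted, so this is an illustration rather than the exact
code):

    for (i = 0; i < nr_pages; ++i) {
            acomp_request_set_params(acomp_ctx->reqs[i], &inputs[i],
                                     &outputs[i], PAGE_SIZE, PAGE_SIZE);
            if (i)
                    /* Link request i behind the head of the chain. */
                    acomp_request_chain(acomp_ctx->reqs[i], acomp_ctx->reqs[0]);
            else
                    /* reqs[0] is the head request that gets submitted. */
                    acomp_reqchain_init(acomp_ctx->reqs[0], 0, crypto_req_done,
                                        &acomp_ctx->wait);
    }

    /* A single submission compresses the whole chain. */
    err = crypto_wait_req(crypto_acomp_compress(acomp_ctx->reqs[0]),
                          &acomp_ctx->wait);

    /* Completion status must still be checked per request. */
    for (i = 0; i < nr_pages; ++i)
            if (acomp_request_err(acomp_ctx->reqs[i]))
                    err = -EINVAL;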
When batch compressing a request chain in zswap_batch_compress(), the call
into the crypto layer is exactly the same as in zswap_compress().

zswap_store_folio() is modified to detect whether the pool's acomp_ctx has
more than one "nr_reqs", which is the case if the CPU onlining code has
allocated multiple batching resources in the acomp_ctx. If so, compress
batching can be used with a batch size of "acomp_ctx->nr_reqs".

If compress batching can be used, zswap_store_folio() invokes
zswap_batch_compress() to compress and store the folio in batches of
"acomp_ctx->nr_reqs" pages. With Intel IAA, the iaa_crypto driver compresses
each batch of pages in parallel in hardware.

Hence, zswap_batch_compress() performs the same computations for a batch as
zswap_compress() does for a single page, and returns true if the batch was
successfully compressed and stored, false otherwise.

If the pool does not support compress batching, or the folio has only one
page, zswap_store_folio() calls zswap_compress() for each individual page in
the folio, as before.

Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
 mm/zswap.c | 296 ++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 224 insertions(+), 72 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index ab9167220cb6..626574bd84f6 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1051,9 +1051,9 @@ static void acomp_ctx_put_unlock(struct crypto_acomp_ctx *acomp_ctx)
 }
 
 static bool zswap_compress(struct page *page, struct zswap_entry *entry,
-			   struct zswap_pool *pool)
+			   struct zswap_pool *pool,
+			   struct crypto_acomp_ctx *acomp_ctx)
 {
-	struct crypto_acomp_ctx *acomp_ctx;
 	struct scatterlist input, output;
 	int comp_ret = 0, alloc_ret = 0;
 	unsigned int dlen = PAGE_SIZE;
@@ -1063,7 +1063,8 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	gfp_t gfp;
 	u8 *dst;
 
-	acomp_ctx = acomp_ctx_get_cpu_lock(pool);
+	lockdep_assert_held(&acomp_ctx->mutex);
+
 	dst = acomp_ctx->buffers[0];
 	sg_init_table(&input, 1);
 	sg_set_page(&input, page, PAGE_SIZE, 0);
@@ -1091,7 +1092,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->reqs[0]), &acomp_ctx->wait);
 	dlen = acomp_ctx->reqs[0]->dlen;
 	if (comp_ret)
-		goto unlock;
+		goto check_errors;
 
 	zpool = pool->zpool;
 	gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
@@ -1099,7 +1100,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 		gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
 	alloc_ret = zpool_malloc(zpool, dlen, gfp, &handle);
 	if (alloc_ret)
-		goto unlock;
+		goto check_errors;
 
 	buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
 	memcpy(buf, dst, dlen);
@@ -1108,7 +1109,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	entry->handle = handle;
 	entry->length = dlen;
 
-unlock:
+check_errors:
 	if (comp_ret == -ENOSPC || alloc_ret == -ENOSPC)
 		zswap_reject_compress_poor++;
 	else if (comp_ret)
@@ -1116,7 +1117,6 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	else if (alloc_ret)
 		zswap_reject_alloc_fail++;
 
-	acomp_ctx_put_unlock(acomp_ctx);
 	return comp_ret == 0 && alloc_ret == 0;
 }
 
@@ -1580,6 +1580,106 @@ static void shrink_worker(struct work_struct *w)
 * main API
 **********************************/
 
+/*
+ * Batch compress multiple @nr_pages in @folio, starting from @index.
+ */
+static bool zswap_batch_compress(struct folio *folio,
+				 long index,
+				 unsigned int nr_pages,
+				 struct zswap_entry *entries[],
+				 struct zswap_pool *pool,
+				 struct crypto_acomp_ctx *acomp_ctx)
+{
+	struct scatterlist inputs[ZSWAP_MAX_BATCH_SIZE];
+	struct scatterlist outputs[ZSWAP_MAX_BATCH_SIZE];
+	unsigned int i;
+	int err = 0;
+
+	lockdep_assert_held(&acomp_ctx->mutex);
+
+	for (i = 0; i < nr_pages; ++i) {
+		struct page *page = folio_page(folio, index + i);
+
+		sg_init_table(&inputs[i], 1);
+		sg_set_page(&inputs[i], page, PAGE_SIZE, 0);
+
+		/*
+		 * Each dst buffer should be of size (PAGE_SIZE * 2).
+		 * Reflect same in sg_list.
+		 */
+		sg_init_one(&outputs[i], acomp_ctx->buffers[i], PAGE_SIZE * 2);
+		acomp_request_set_params(acomp_ctx->reqs[i], &inputs[i],
+					 &outputs[i], PAGE_SIZE, PAGE_SIZE);
+
+		/* Use acomp request chaining. */
+		if (i)
+			acomp_request_chain(acomp_ctx->reqs[i], acomp_ctx->reqs[0]);
+		else
+			acomp_reqchain_init(acomp_ctx->reqs[0], 0, crypto_req_done,
+					    &acomp_ctx->wait);
+	}
+
+	err = crypto_wait_req(crypto_acomp_compress(acomp_ctx->reqs[0]), &acomp_ctx->wait);
+
+	/*
+	 * Get the individual compress errors from request chaining.
+	 */
+	for (i = 0; i < nr_pages; ++i) {
+		if (unlikely(acomp_request_err(acomp_ctx->reqs[i]))) {
+			err = -EINVAL;
+			if (acomp_request_err(acomp_ctx->reqs[i]) == -ENOSPC)
+				zswap_reject_compress_poor++;
+			else
+				zswap_reject_compress_fail++;
+		}
+	}
+
+	if (likely(!err)) {
+		/*
+		 * All batch pages were successfully compressed.
+		 * Store the pages in zpool.
+		 */
+		struct zpool *zpool = pool->zpool;
+		gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
+
+		if (zpool_malloc_support_movable(zpool))
+			gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
+
+		for (i = 0; i < nr_pages; ++i) {
+			unsigned long handle;
+			char *buf;
+
+			err = zpool_malloc(zpool, acomp_ctx->reqs[i]->dlen, gfp, &handle);
+
+			if (err) {
+				if (err == -ENOSPC)
+					zswap_reject_compress_poor++;
+				else
+					zswap_reject_alloc_fail++;
+
+				break;
+			}
+
+			buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
+			memcpy(buf, acomp_ctx->buffers[i], acomp_ctx->reqs[i]->dlen);
+			zpool_unmap_handle(zpool, handle);
+
+			entries[i]->handle = handle;
+			entries[i]->length = acomp_ctx->reqs[i]->dlen;
+		}
+	}
+
+	/*
+	 * Request chaining cleanup:
+	 *
+	 * - Clear the CRYPTO_TFM_REQ_CHAIN bit on acomp_ctx->reqs[0].
+	 * - Reset the acomp_ctx->wait to notify acomp_ctx->reqs[0].
+	 */
+	acomp_reqchain_clear(acomp_ctx->reqs[0], &acomp_ctx->wait);
+
+	return !err;
+}
+
 /*
  * Store all pages in a folio.
  *
@@ -1588,95 +1688,146 @@ static void shrink_worker(struct work_struct *w)
  * handles to ERR_PTR(-EINVAL) at allocation time, and the fact that the
  * entry's handle is subsequently modified only upon a successful zpool_malloc()
  * after the page is compressed.
+ *
+ * For compressors that don't support batching, the following structure
+ * showed a performance regression with zstd using 64K as well as 2M folios:
+ *
+ * Batched stores:
+ * ---------------
+ * - Allocate all entries,
+ * - Compress all entries,
+ * - Store all entries in xarray/LRU.
+ *
+ * Hence, the above structure is maintained only for batched stores, and the
+ * following structure is implemented for sequential stores of large folio pages,
+ * that fixes the regression, while preserving common code paths for batched
+ * and sequential stores of a folio:
+ *
+ * Sequential stores:
+ * ------------------
+ * For each page in folio:
+ * - allocate an entry,
+ * - compress the page,
+ * - store the entry in xarray/LRU.
  */
 static bool zswap_store_folio(struct folio *folio,
 			      struct obj_cgroup *objcg,
 			      struct zswap_pool *pool)
 {
-	long index, from_index = 0, nr_pages = folio_nr_pages(folio);
+	long index = 0, from_index = 0, nr_pages, nr_folio_pages = folio_nr_pages(folio);
 	struct zswap_entry **entries = NULL;
+	struct crypto_acomp_ctx *acomp_ctx;
 	int node_id = folio_nid(folio);
+	unsigned int batch_size;
+	bool batching;
 
-	entries = kmalloc(nr_pages * sizeof(*entries), GFP_KERNEL);
+	entries = kmalloc(nr_folio_pages * sizeof(*entries), GFP_KERNEL);
 	if (!entries)
 		return false;
 
-	for (index = from_index; index < nr_pages; ++index) {
-		entries[index] = zswap_entry_cache_alloc(GFP_KERNEL, node_id);
+	acomp_ctx = acomp_ctx_get_cpu_lock(pool);
 
-		if (!entries[index]) {
-			zswap_reject_kmemcache_fail++;
-			nr_pages = index;
-			goto store_folio_failed;
-		}
+	batch_size = acomp_ctx->nr_reqs;
 
-		entries[index]->handle = (unsigned long)ERR_PTR(-EINVAL);
-	}
+	nr_pages = (batch_size > 1) ? nr_folio_pages : 1;
+	batching = (nr_pages > 1) ? true : false;
 
-	for (index = from_index; index < nr_pages; ++index) {
-		struct page *page = folio_page(folio, index);
-		swp_entry_t page_swpentry = page_swap_entry(page);
-		struct zswap_entry *old, *entry = entries[index];
+	while (1) {
+		for (index = from_index; index < nr_pages; ++index) {
+			entries[index] = zswap_entry_cache_alloc(GFP_KERNEL, node_id);
 
-		if (!zswap_compress(page, entry, pool)) {
-			from_index = index;
-			goto store_folio_failed;
-		}
+			if (!entries[index]) {
+				zswap_reject_kmemcache_fail++;
+				nr_pages = index;
+				goto store_folio_failed;
+			}
 
-		old = xa_store(swap_zswap_tree(page_swpentry),
-			       swp_offset(page_swpentry),
-			       entry, GFP_KERNEL);
-		if (xa_is_err(old)) {
-			int err = xa_err(old);
+			entries[index]->handle = (unsigned long)ERR_PTR(-EINVAL);
+		}
 
-			WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
-			zswap_reject_alloc_fail++;
-			from_index = index;
-			goto store_folio_failed;
+		if (batching) {
+			/* Batch compress the pages in the folio. */
+			for (index = from_index; index < nr_pages; index += batch_size) {
+
+				if (!zswap_batch_compress(folio, index,
+							  min((unsigned int)(nr_pages - index),
+							      batch_size),
+							  &entries[index], pool, acomp_ctx))
+					goto store_folio_failed;
+			}
+		} else {
+			/* Sequential compress the next page in the folio. */
+			struct page *page = folio_page(folio, from_index);
+
+			if (!zswap_compress(page, entries[from_index], pool, acomp_ctx))
+				goto store_folio_failed;
 		}
 
-		/*
-		 * We may have had an existing entry that became stale when
-		 * the folio was redirtied and now the new version is being
-		 * swapped out. Get rid of the old.
-		 */
-		if (old)
-			zswap_entry_free(old);
+		for (index = from_index; index < nr_pages; ++index) {
+			swp_entry_t page_swpentry = page_swap_entry(folio_page(folio, index));
+			struct zswap_entry *old, *entry = entries[index];
 
-		/*
-		 * The entry is successfully compressed and stored in the tree, there is
-		 * no further possibility of failure. Grab refs to the pool and objcg,
-		 * charge zswap memory, and increment zswap_stored_pages.
-		 * The opposite actions will be performed by zswap_entry_free()
-		 * when the entry is removed from the tree.
-		 */
-		zswap_pool_get(pool);
-		if (objcg) {
-			obj_cgroup_get(objcg);
-			obj_cgroup_charge_zswap(objcg, entry->length);
-		}
-		atomic_long_inc(&zswap_stored_pages);
+			old = xa_store(swap_zswap_tree(page_swpentry),
+				       swp_offset(page_swpentry),
+				       entry, GFP_KERNEL);
+			if (xa_is_err(old)) {
+				int err = xa_err(old);
 
-		/*
-		 * We finish initializing the entry while it's already in xarray.
-		 * This is safe because:
-		 *
-		 * 1. Concurrent stores and invalidations are excluded by folio lock.
-		 *
-		 * 2. Writeback is excluded by the entry not being on the LRU yet.
-		 *    The publishing order matters to prevent writeback from seeing
-		 *    an incoherent entry.
-		 */
-		entry->pool = pool;
-		entry->swpentry = page_swpentry;
-		entry->objcg = objcg;
-		entry->referenced = true;
-		if (entry->length) {
-			INIT_LIST_HEAD(&entry->lru);
-			zswap_lru_add(&zswap_list_lru, entry);
+				WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
+				zswap_reject_alloc_fail++;
+				from_index = index;
+				goto store_folio_failed;
+			}
+
+			/*
+			 * We may have had an existing entry that became stale when
+			 * the folio was redirtied and now the new version is being
+			 * swapped out. Get rid of the old.
+			 */
+			if (old)
+				zswap_entry_free(old);
+
+			/*
+			 * The entry is successfully compressed and stored in the tree, there is
+			 * no further possibility of failure. Grab refs to the pool and objcg,
+			 * charge zswap memory, and increment zswap_stored_pages.
+			 * The opposite actions will be performed by zswap_entry_free()
+			 * when the entry is removed from the tree.
			 */
+			zswap_pool_get(pool);
+			if (objcg) {
+				obj_cgroup_get(objcg);
+				obj_cgroup_charge_zswap(objcg, entry->length);
+			}
+			atomic_long_inc(&zswap_stored_pages);
+
+			/*
+			 * We finish initializing the entry while it's already in xarray.
+			 * This is safe because:
+			 *
+			 * 1. Concurrent stores and invalidations are excluded by folio lock.
+			 *
+			 * 2. Writeback is excluded by the entry not being on the LRU yet.
+			 *    The publishing order matters to prevent writeback from seeing
+			 *    an incoherent entry.
+			 */
+			entry->pool = pool;
+			entry->swpentry = page_swpentry;
+			entry->objcg = objcg;
+			entry->referenced = true;
+			if (entry->length) {
+				INIT_LIST_HEAD(&entry->lru);
+				zswap_lru_add(&zswap_list_lru, entry);
+			}
 		}
+
+		from_index = nr_pages++;
+
+		if (nr_pages > nr_folio_pages)
+			break;
 	}
 
+	acomp_ctx_put_unlock(acomp_ctx);
 	kfree(entries);
 	return true;
 
@@ -1688,6 +1839,7 @@ static bool zswap_store_folio(struct folio *folio,
 			zswap_entry_cache_free(entries[index]);
 	}
 
+	acomp_ctx_put_unlock(acomp_ctx);
 	kfree(entries);
 	return false;
 }