From patchwork Wed Nov 27 22:53:24 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 13887411
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev,
    usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
    akpm@linux-foundation.org
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
    kanchana.p.sridhar@intel.com
Subject: [PATCH v1 2/2] mm: zswap: zswap_store_pages() simplifications for batching.
Date: Wed, 27 Nov 2024 14:53:24 -0800
Message-Id: <20241127225324.6770-3-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20241127225324.6770-1-kanchana.p.sridhar@intel.com>
References: <20241127225324.6770-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0

In order to set up zswap_store_pages() to enable a clean batching
implementation in [1], this patch implements the following changes:

1) Add zswap_alloc_entries(), which allocates the zswap entries for all
   pages in the specified range of the folio upfront. If this fails, an
   error status is returned to zswap_store().
2) Add zswap_compress_pages(), which calls zswap_compress() for each page
   and returns false if any zswap_compress() fails, so that
   zswap_store_pages() can clean up the allocated resources and return an
   error status to zswap_store().

3) Add a "store_pages_failed" label that is a catch-all for all failure
   points in zswap_store_pages(). This facilitates cleaner error handling
   within zswap_store_pages(), which will become important for IAA
   compress batching in [1].

[1]: https://patchwork.kernel.org/project/linux-mm/list/?series=911935

Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
 mm/zswap.c | 93 +++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 71 insertions(+), 22 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index b09d1023e775..db80c66e2205 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1409,9 +1409,56 @@ static void shrink_worker(struct work_struct *w)
 * main API
 **********************************/
+static bool zswap_compress_pages(struct page *pages[],
+                                 struct zswap_entry *entries[],
+                                 u8 nr_pages,
+                                 struct zswap_pool *pool)
+{
+        u8 i;
+
+        for (i = 0; i < nr_pages; ++i) {
+                if (!zswap_compress(pages[i], entries[i], pool))
+                        return false;
+        }
+
+        return true;
+}
+
+/*
+ * Allocate @nr zswap entries for storing @nr pages in a folio.
+ * If any one of the entry allocations fails, delete all entries allocated
+ * thus far, and return false.
+ * If @nr entries are successfully allocated, set each entry's "handle"
+ * to "ERR_PTR(-EINVAL)" to denote that the handle has not yet been allocated.
+ */
+static bool zswap_alloc_entries(struct zswap_entry *entries[], int node_id, u8 nr)
+{
+        u8 i;
+
+        for (i = 0; i < nr; ++i) {
+                entries[i] = zswap_entry_cache_alloc(GFP_KERNEL, node_id);
+                if (!entries[i]) {
+                        u8 j;
+
+                        zswap_reject_kmemcache_fail++;
+                        for (j = 0; j < i; ++j)
+                                zswap_entry_cache_free(entries[j]);
+                        return false;
+                }
+
+                entries[i]->handle = (unsigned long)ERR_PTR(-EINVAL);
+        }
+
+        return true;
+}
+
 /*
  * Store multiple pages in @folio, starting from the page at index @si up to
  * and including the page at index @ei.
+ * The error handling from all failure points is handled by the
+ * "store_pages_failed" label, based on the initial ERR_PTR(-EINVAL) value for
+ * the zswap_entry's handle set by zswap_alloc_entries(), and the fact that the
+ * entry's handle is subsequently modified only upon a successful zpool_malloc().
  */
 static ssize_t zswap_store_pages(struct folio *folio,
                                  long si,
@@ -1419,26 +1466,25 @@ static ssize_t zswap_store_pages(struct folio *folio,
                                  struct obj_cgroup *objcg,
                                  struct zswap_pool *pool)
 {
-        struct page *page;
-        swp_entry_t page_swpentry;
-        struct zswap_entry *entry, *old;
+        struct zswap_entry *entries[SWAP_CRYPTO_BATCH_SIZE], *old;
+        struct page *pages[SWAP_CRYPTO_BATCH_SIZE];
         size_t compressed_bytes = 0;
         u8 nr_pages = ei - si + 1;
         u8 i;
 
-        for (i = 0; i < nr_pages; ++i) {
-                page = folio_page(folio, si + i);
-                page_swpentry = page_swap_entry(page);
+        /* allocate entries */
+        if (!zswap_alloc_entries(entries, folio_nid(folio), nr_pages))
+                return -EINVAL;
 
-                /* allocate entry */
-                entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
-                if (!entry) {
-                        zswap_reject_kmemcache_fail++;
-                        return -EINVAL;
-                }
+        for (i = 0; i < nr_pages; ++i)
+                pages[i] = folio_page(folio, si + i);
 
-                if (!zswap_compress(page, entry, pool))
-                        goto compress_failed;
+        if (!zswap_compress_pages(pages, entries, nr_pages, pool))
+                goto store_pages_failed;
+
+        for (i = 0; i < nr_pages; ++i) {
+                swp_entry_t page_swpentry = page_swap_entry(pages[i]);
+                struct zswap_entry *entry = entries[i];
 
                 old = xa_store(swap_zswap_tree(page_swpentry),
                                swp_offset(page_swpentry),
@@ -1448,7 +1494,7 @@ static ssize_t zswap_store_pages(struct folio *folio,
                         WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
                         zswap_reject_alloc_fail++;
-                        goto store_failed;
+                        goto store_pages_failed;
                 }
 
                 /*
@@ -1489,16 +1535,19 @@ static ssize_t zswap_store_pages(struct folio *folio,
                 }
 
                 compressed_bytes += entry->length;
-                continue;
-
-store_failed:
-                zpool_free(pool->zpool, entry->handle);
-compress_failed:
-                zswap_entry_cache_free(entry);
-                return -EINVAL;
         }
 
         return compressed_bytes;
+
+store_pages_failed:
+        for (i = 0; i < nr_pages; ++i) {
+                if (!IS_ERR_VALUE(entries[i]->handle))
+                        zpool_free(pool->zpool, entries[i]->handle);
+
+                zswap_entry_cache_free(entries[i]);
+        }
+
+        return -EINVAL;
+}
 
 bool zswap_store(struct folio *folio)
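
Editorial note: for readers who want to follow the new error-handling flow
outside of mm/zswap.c, below is a minimal standalone userspace sketch of the
same pattern the patch introduces: allocate all entries upfront, mark each
handle with an "invalid" sentinel, and unwind every failure from a single
label, freeing the backing allocation only for handles that were actually
assigned. This is an illustration under stated assumptions, not kernel code:
struct entry, BATCH_SIZE, INVALID_HANDLE, alloc_handle() and the fail_at knob
are hypothetical stand-ins for the zswap entry, zswap_alloc_entries() and the
zpool allocation, respectively.

    /* Userspace sketch of the "sentinel handle + single cleanup label" pattern. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define BATCH_SIZE 8
    #define INVALID_HANDLE UINTPTR_MAX  /* stand-in for ERR_PTR(-EINVAL) */

    struct entry {
            uintptr_t handle;  /* set only after a successful backing alloc */
    };

    /* Simulate the per-page backing ("zpool") allocation; fails at index fail_at. */
    static bool alloc_handle(struct entry *e, int i, int fail_at)
    {
            if (i == fail_at)
                    return false;
            e->handle = (uintptr_t)malloc(64);
            return e->handle != 0;
    }

    static int store_batch(int nr, int fail_at)
    {
            struct entry *entries[BATCH_SIZE];
            int i;

            /* 1) Allocate all entries upfront; every handle starts out invalid. */
            for (i = 0; i < nr; i++) {
                    entries[i] = malloc(sizeof(*entries[i]));
                    if (!entries[i]) {
                            while (i--)
                                    free(entries[i]);
                            return -1;
                    }
                    entries[i]->handle = INVALID_HANDLE;
            }

            /* 2) Any failure past this point jumps to the single cleanup label. */
            for (i = 0; i < nr; i++) {
                    if (!alloc_handle(entries[i], i, fail_at))
                            goto store_batch_failed;
            }

            /* Success: release everything normally. */
            for (i = 0; i < nr; i++) {
                    free((void *)entries[i]->handle);
                    free(entries[i]);
            }
            return 0;

    store_batch_failed:
            /* Free the backing allocation only where a handle was actually assigned. */
            for (i = 0; i < nr; i++) {
                    if (entries[i]->handle != INVALID_HANDLE)
                            free((void *)entries[i]->handle);
                    free(entries[i]);
            }
            return -1;
    }

    int main(void)
    {
            printf("no failure:    %d\n", store_batch(4, -1));
            printf("fail at idx 2: %d\n", store_batch(4, 2));
            return 0;
    }

Running it exercises both the clean path and a mid-batch failure; in both
cases every entry is freed exactly once, which is the property the single
"store_pages_failed" label relies on in zswap_store_pages().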