From patchwork Sat Nov 23 07:01:18 2024
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13883776
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev,
    usamaarif642@gmail.com, ryan.roberts@arm.com, ying.huang@intel.com,
    21cnbao@gmail.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org,
    herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com,
    ardb@kernel.org, ebiggers@google.com, surenb@google.com,
    kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v4 01/10] crypto: acomp - Define two new interfaces for compress/decompress batching.
Date: Fri, 22 Nov 2024 23:01:18 -0800
Message-Id: <20241123070127.332773-2-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>
References: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>

This commit adds batch_compress() and batch_decompress() interfaces to:

  struct acomp_alg
  struct crypto_acomp

This allows the iaa_crypto Intel IAA driver to register implementations of
the batch_compress() and batch_decompress() API, which can subsequently be
invoked from the kernel zswap/zram swap modules to
compress/decompress up to CRYPTO_BATCH_SIZE (i.e. 8) pages in parallel in
the IAA hardware accelerator, to improve swapout/swapin performance.

A new helper function acomp_has_async_batching() can be invoked to query
whether a crypto_acomp has registered these batch_compress and
batch_decompress interfaces.

Signed-off-by: Kanchana P Sridhar
---
 crypto/acompress.c                  |  2 +
 include/crypto/acompress.h          | 91 +++++++++++++++++++++++++++++
 include/crypto/internal/acompress.h | 16 +++++
 3 files changed, 109 insertions(+)

diff --git a/crypto/acompress.c b/crypto/acompress.c
index 6fdf0ff9f3c0..a506db499a37 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -71,6 +71,8 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
 
         acomp->compress = alg->compress;
         acomp->decompress = alg->decompress;
+        acomp->batch_compress = alg->batch_compress;
+        acomp->batch_decompress = alg->batch_decompress;
         acomp->dst_free = alg->dst_free;
         acomp->reqsize = alg->reqsize;
 
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index 54937b615239..4252bab3d0e1 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -37,12 +37,20 @@ struct acomp_req {
         void *__ctx[] CRYPTO_MINALIGN_ATTR;
 };
 
+/*
+ * The max compress/decompress batch size, for crypto algorithms
+ * that support batch_compress and batch_decompress API.
+ */
+#define CRYPTO_BATCH_SIZE 8UL
+
 /**
  * struct crypto_acomp - user-instantiated objects which encapsulate
  * algorithms and core processing logic
  *
  * @compress: Function performs a compress operation
  * @decompress: Function performs a de-compress operation
+ * @batch_compress: Function performs a batch compress operation
+ * @batch_decompress: Function performs a batch decompress operation
  * @dst_free: Frees destination buffer if allocated inside the
  *            algorithm
  * @reqsize: Context size for (de)compression requests
@@ -51,6 +59,20 @@ struct acomp_req {
 struct crypto_acomp {
         int (*compress)(struct acomp_req *req);
         int (*decompress)(struct acomp_req *req);
+        void (*batch_compress)(struct acomp_req *reqs[],
+                               struct crypto_wait *wait,
+                               struct page *pages[],
+                               u8 *dsts[],
+                               unsigned int dlens[],
+                               int errors[],
+                               int nr_pages);
+        void (*batch_decompress)(struct acomp_req *reqs[],
+                                 struct crypto_wait *wait,
+                                 u8 *srcs[],
+                                 struct page *pages[],
+                                 unsigned int slens[],
+                                 int errors[],
+                                 int nr_pages);
         void (*dst_free)(struct scatterlist *dst);
         unsigned int reqsize;
         struct crypto_tfm base;
@@ -142,6 +164,13 @@ static inline bool acomp_is_async(struct crypto_acomp *tfm)
                CRYPTO_ALG_ASYNC;
 }
 
+static inline bool acomp_has_async_batching(struct crypto_acomp *tfm)
+{
+        return (acomp_is_async(tfm) &&
+                (crypto_comp_alg_common(tfm)->base.cra_flags & CRYPTO_ALG_TYPE_ACOMPRESS) &&
+                tfm->batch_compress && tfm->batch_decompress);
+}
+
 static inline struct crypto_acomp *crypto_acomp_reqtfm(struct acomp_req *req)
 {
         return __crypto_acomp_tfm(req->base.tfm);
@@ -265,4 +294,66 @@ static inline int crypto_acomp_decompress(struct acomp_req *req)
         return crypto_acomp_reqtfm(req)->decompress(req);
 }
 
+/**
+ * crypto_acomp_batch_compress() -- Invoke asynchronous compress of
+ *                                  a batch of requests
+ *
+ * Function invokes the asynchronous batch compress operation
+ *
+ * @reqs: @nr_pages asynchronous compress requests.
+ * @wait: crypto_wait for synchronous acomp batch compress. If NULL, the
+ *        driver must provide a way to process completions asynchronously.
+ * @pages: Pages to be compressed.
+ * @dsts: Pre-allocated destination buffers to store results of compression.
+ * @dlens: Will contain the compressed lengths.
+ * @errors: zero on successful compression of the corresponding
+ *          req, or error code in case of error.
+ * @nr_pages: The number of pages, up to CRYPTO_BATCH_SIZE,
+ *            to be compressed.
+ */
+static inline void crypto_acomp_batch_compress(struct acomp_req *reqs[],
+                                               struct crypto_wait *wait,
+                                               struct page *pages[],
+                                               u8 *dsts[],
+                                               unsigned int dlens[],
+                                               int errors[],
+                                               int nr_pages)
+{
+        struct crypto_acomp *tfm = crypto_acomp_reqtfm(reqs[0]);
+
+        return tfm->batch_compress(reqs, wait, pages, dsts,
+                                   dlens, errors, nr_pages);
+}
+
+/**
+ * crypto_acomp_batch_decompress() -- Invoke asynchronous decompress of
+ *                                    a batch of requests
+ *
+ * Function invokes the asynchronous batch decompress operation
+ *
+ * @reqs: @nr_pages asynchronous decompress requests.
+ * @wait: crypto_wait for synchronous acomp batch decompress. If NULL, the
+ *        driver must provide a way to process completions asynchronously.
+ * @srcs: The src buffers to be decompressed.
+ * @pages: The pages to store the decompressed buffers.
+ * @slens: Compressed lengths of @srcs.
+ * @errors: zero on successful decompression of the corresponding
+ *          req, or error code in case of error.
+ * @nr_pages: The number of pages, up to CRYPTO_BATCH_SIZE,
+ *            to be decompressed.
+ */
+static inline void crypto_acomp_batch_decompress(struct acomp_req *reqs[],
+                                                 struct crypto_wait *wait,
+                                                 u8 *srcs[],
+                                                 struct page *pages[],
+                                                 unsigned int slens[],
+                                                 int errors[],
+                                                 int nr_pages)
+{
+        struct crypto_acomp *tfm = crypto_acomp_reqtfm(reqs[0]);
+
+        return tfm->batch_decompress(reqs, wait, srcs, pages,
+                                     slens, errors, nr_pages);
+}
+
 #endif
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index 8831edaafc05..acfe2d9d5a83 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -17,6 +17,8 @@
  *
  * @compress: Function performs a compress operation
  * @decompress: Function performs a de-compress operation
+ * @batch_compress: Function performs a batch compress operation
+ * @batch_decompress: Function performs a batch decompress operation
  * @dst_free: Frees destination buffer if allocated inside the algorithm
  * @init: Initialize the cryptographic transformation object.
  *        This function is used to initialize the cryptographic
@@ -37,6 +39,20 @@ struct acomp_alg {
         int (*compress)(struct acomp_req *req);
         int (*decompress)(struct acomp_req *req);
+        void (*batch_compress)(struct acomp_req *reqs[],
+                               struct crypto_wait *wait,
+                               struct page *pages[],
+                               u8 *dsts[],
+                               unsigned int dlens[],
+                               int errors[],
+                               int nr_pages);
+        void (*batch_decompress)(struct acomp_req *reqs[],
+                                 struct crypto_wait *wait,
+                                 u8 *srcs[],
+                                 struct page *pages[],
+                                 unsigned int slens[],
+                                 int errors[],
+                                 int nr_pages);
         void (*dst_free)(struct scatterlist *dst);
         int (*init)(struct crypto_acomp *tfm);
         void (*exit)(struct crypto_acomp *tfm);
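
To make the intended calling convention concrete, the following is a
minimal, illustrative sketch of how a caller might drive the new batch
compress interface. It assumes the caller has already allocated the
acomp_reqs, pages and "PAGE_SIZE * 2" destination buffers; the function
name example_batch_compress() and its parameters are placeholders for
illustration and are not part of this patch:

  #include <crypto/acompress.h>

  /*
   * Illustrative sketch (not part of this patch): compress "nr" pages,
   * nr <= CRYPTO_BATCH_SIZE, in one call. reqs[], pages[] and dsts[] are
   * assumed to have been set up by the caller beforehand.
   */
  static int example_batch_compress(struct acomp_req *reqs[],
                                    struct page *pages[],
                                    u8 *dsts[],
                                    unsigned int dlens[],
                                    int errors[],
                                    int nr)
  {
          struct crypto_wait wait;
          int i, ret = 0;

          crypto_init_wait(&wait);

          /* One call submits all nr pages; per-page status lands in errors[]. */
          crypto_acomp_batch_compress(reqs, &wait, pages, dsts,
                                      dlens, errors, nr);

          for (i = 0; i < nr; i++) {
                  if (errors[i]) {
                          /* e.g. fall back to handling pages[i] individually */
                          ret = errors[i];
                          continue;
                  }
                  /* dlens[i] now holds the compressed length of pages[i] */
          }

          return ret;
  }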
From patchwork Sat Nov 23 07:01:19 2024
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13883778
From: Kanchana P Sridhar
Subject: [PATCH v4 02/10] crypto: iaa - Add an acomp_req flag CRYPTO_ACOMP_REQ_POLL to enable async mode.
Date: Fri, 22 Nov 2024 23:01:19 -0800
Message-Id: <20241123070127.332773-3-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>

If the iaa_crypto driver has async_mode set to true, and use_irq set to
false, it can still be forced to use synchronous mode by turning off the
CRYPTO_ACOMP_REQ_POLL flag in req->flags.

All three of the following need to be true for a request to be processed
in fully async poll mode:

  1) async_mode should be "true"
  2) use_irq should be "false"
  3) req->flags & CRYPTO_ACOMP_REQ_POLL should be "true"

Suggested-by: Herbert Xu
Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 11 ++++++++++-
 include/crypto/acompress.h                 |  5 +++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 237f87000070..2edaecd42cc6 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -1510,6 +1510,10 @@ static int iaa_comp_acompress(struct acomp_req *req)
                 return -EINVAL;
         }
 
+        /* If the caller has requested no polling, disable async. */
+        if (!(req->flags & CRYPTO_ACOMP_REQ_POLL))
+                disable_async = true;
+
         cpu = get_cpu();
         wq = wq_table_next_wq(cpu);
         put_cpu();
@@ -1702,6 +1706,7 @@ static int iaa_comp_adecompress(struct acomp_req *req)
 {
         struct crypto_tfm *tfm = req->base.tfm;
         dma_addr_t src_addr, dst_addr;
+        bool disable_async = false;
         int nr_sgs, cpu, ret = 0;
         struct iaa_wq *iaa_wq;
         struct device *dev;
@@ -1717,6 +1722,10 @@ static int iaa_comp_adecompress(struct acomp_req *req)
                 return -EINVAL;
         }
 
+        /* If the caller has requested no polling, disable async. */
+        if (!(req->flags & CRYPTO_ACOMP_REQ_POLL))
+                disable_async = true;
+
         if (!req->dst)
                 return iaa_comp_adecompress_alloc_dest(req);
 
@@ -1765,7 +1774,7 @@ static int iaa_comp_adecompress(struct acomp_req *req)
                 req->dst, req->dlen, sg_dma_len(req->dst));
 
         ret = iaa_decompress(tfm, req, wq, src_addr, req->slen,
-                             dst_addr, &req->dlen, false);
+                             dst_addr, &req->dlen, disable_async);
         if (ret == -EINPROGRESS)
                 return ret;
 
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index 4252bab3d0e1..c1ed47405557 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -14,6 +14,11 @@
 #include 
 
 #define CRYPTO_ACOMP_ALLOC_OUTPUT 0x00000001
+/*
+ * If set, the driver must have a way to submit the req, then
+ * poll its completion status for success/error.
+ */
+#define CRYPTO_ACOMP_REQ_POLL 0x00000002
 #define CRYPTO_ACOMP_DST_MAX 131072
 
 /**
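
For clarity, a small illustrative sketch of the caller-visible effect of
this flag (example_compress_sync() is a made-up name, not part of this
patch): clearing CRYPTO_ACOMP_REQ_POLL on a request forces the driver
down the synchronous path even when async_mode is enabled, while leaving
it set allows the submission to return -EINPROGRESS so that the submitter
can poll for the completion later, as the batching code in the next patch
does internally:

  #include <crypto/acompress.h>

  /*
   * Illustrative only: force one request through the synchronous path by
   * clearing CRYPTO_ACOMP_REQ_POLL, then wait for the result.
   */
  static int example_compress_sync(struct acomp_req *req, struct crypto_wait *wait)
  {
          /* Poll mode requires async_mode && !use_irq && this flag; clearing
           * the flag therefore forces synchronous processing.
           */
          req->flags &= ~CRYPTO_ACOMP_REQ_POLL;

          acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                     crypto_req_done, wait);

          return crypto_wait_req(crypto_acomp_compress(req), wait);
  }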
From patchwork Sat Nov 23 07:01:20 2024
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13883779
From: Kanchana P Sridhar
Subject: [PATCH v4 03/10] crypto: iaa - Implement batch_compress(), batch_decompress() API in iaa_crypto.
Date: Fri, 22 Nov 2024 23:01:20 -0800
Message-Id: <20241123070127.332773-4-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>

This patch provides iaa_crypto driver implementations for the newly added
crypto_acomp batch_compress() and batch_decompress() interfaces.

This allows swap modules such as zswap/zram to invoke batch parallel
compression/decompression of pages on systems with Intel IAA, by invoking
these APIs, respectively:

  crypto_acomp_batch_compress(...);
  crypto_acomp_batch_decompress(...);

This enables zswap_batch_store() compress batching code to be developed in
a manner similar to the current single-page synchronous calls to:

  crypto_acomp_compress(...);
  crypto_acomp_decompress(...);

thereby facilitating an encapsulated and modular hand-off between the
kernel zswap/zram code and the crypto_acomp layer.
Suggested-by: Yosry Ahmed
Suggested-by: Herbert Xu
Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 337 +++++++++++++++++++++
 1 file changed, 337 insertions(+)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 2edaecd42cc6..cbf147a3c3cb 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -1797,6 +1797,341 @@ static void compression_ctx_init(struct iaa_compression_ctx *ctx)
         ctx->use_irq = use_irq;
 }
 
+static int iaa_comp_poll(struct acomp_req *req)
+{
+        struct idxd_desc *idxd_desc;
+        struct idxd_device *idxd;
+        struct iaa_wq *iaa_wq;
+        struct pci_dev *pdev;
+        struct device *dev;
+        struct idxd_wq *wq;
+        bool compress_op;
+        int ret;
+
+        idxd_desc = req->base.data;
+        if (!idxd_desc)
+                return -EAGAIN;
+
+        compress_op = (idxd_desc->iax_hw->opcode == IAX_OPCODE_COMPRESS);
+        wq = idxd_desc->wq;
+        iaa_wq = idxd_wq_get_private(wq);
+        idxd = iaa_wq->iaa_device->idxd;
+        pdev = idxd->pdev;
+        dev = &pdev->dev;
+
+        ret = check_completion(dev, idxd_desc->iax_completion, true, true);
+        if (ret == -EAGAIN)
+                return ret;
+        if (ret)
+                goto out;
+
+        req->dlen = idxd_desc->iax_completion->output_size;
+
+        /* Update stats */
+        if (compress_op) {
+                update_total_comp_bytes_out(req->dlen);
+                update_wq_comp_bytes(wq, req->dlen);
+        } else {
+                update_total_decomp_bytes_in(req->slen);
+                update_wq_decomp_bytes(wq, req->slen);
+        }
+
+        if (iaa_verify_compress && (idxd_desc->iax_hw->opcode == IAX_OPCODE_COMPRESS)) {
+                struct crypto_tfm *tfm = req->base.tfm;
+                dma_addr_t src_addr, dst_addr;
+                u32 compression_crc;
+
+                compression_crc = idxd_desc->iax_completion->crc;
+
+                dma_sync_sg_for_device(dev, req->dst, 1, DMA_FROM_DEVICE);
+                dma_sync_sg_for_device(dev, req->src, 1, DMA_TO_DEVICE);
+
+                src_addr = sg_dma_address(req->src);
+                dst_addr = sg_dma_address(req->dst);
+
+                ret = iaa_compress_verify(tfm, req, wq, src_addr, req->slen,
+                                          dst_addr, &req->dlen, compression_crc);
+        }
+out:
+        /* caller doesn't call crypto_wait_req, so no acomp_request_complete() */
+
+        dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+        dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+
+        idxd_free_desc(idxd_desc->wq, idxd_desc);
+
+        dev_dbg(dev, "%s: returning ret=%d\n", __func__, ret);
+
+        return ret;
+}
+
+static void iaa_set_req_poll(
+        struct acomp_req *reqs[],
+        int nr_reqs,
+        bool set_flag)
+{
+        int i;
+
+        for (i = 0; i < nr_reqs; ++i) {
+                set_flag ? (reqs[i]->flags |= CRYPTO_ACOMP_REQ_POLL) :
+                           (reqs[i]->flags &= ~CRYPTO_ACOMP_REQ_POLL);
+        }
+}
+
+/**
+ * This API provides IAA compress batching functionality for use by swap
+ * modules.
+ *
+ * @reqs: @nr_pages asynchronous compress requests.
+ * @wait: crypto_wait for synchronous acomp batch compress. If NULL, the
+ *        completions will be processed asynchronously.
+ * @pages: Pages to be compressed by IAA in parallel.
+ * @dsts: Pre-allocated destination buffers to store results of IAA
+ *        compression. Each element of @dsts must be of size "PAGE_SIZE * 2".
+ * @dlens: Will contain the compressed lengths.
+ * @errors: zero on successful compression of the corresponding
+ *          req, or error code in case of error.
+ * @nr_pages: The number of pages, up to CRYPTO_BATCH_SIZE,
+ *            to be compressed.
+ */
+static void iaa_comp_acompress_batch(
+        struct acomp_req *reqs[],
+        struct crypto_wait *wait,
+        struct page *pages[],
+        u8 *dsts[],
+        unsigned int dlens[],
+        int errors[],
+        int nr_pages)
+{
+        struct scatterlist inputs[CRYPTO_BATCH_SIZE];
+        struct scatterlist outputs[CRYPTO_BATCH_SIZE];
+        bool compressions_done = false;
+        bool poll = (async_mode && !use_irq);
+        int i;
+
+        BUG_ON(nr_pages > CRYPTO_BATCH_SIZE);
+        BUG_ON(!poll && !wait);
+
+        if (poll)
+                iaa_set_req_poll(reqs, nr_pages, true);
+        else
+                iaa_set_req_poll(reqs, nr_pages, false);
+
+        /*
+         * Prepare and submit acomp_reqs to IAA. IAA will process these
+         * compress jobs in parallel if async-poll mode is enabled.
+         * If IAA is used in sync mode, the jobs will be processed sequentially
+         * using "wait".
+         */
+        for (i = 0; i < nr_pages; ++i) {
+                sg_init_table(&inputs[i], 1);
+                sg_set_page(&inputs[i], pages[i], PAGE_SIZE, 0);
+
+                /*
+                 * Each dst buffer should be of size (PAGE_SIZE * 2).
+                 * Reflect same in sg_list.
+                 */
+                sg_init_one(&outputs[i], dsts[i], PAGE_SIZE * 2);
+                acomp_request_set_params(reqs[i], &inputs[i],
+                                         &outputs[i], PAGE_SIZE, dlens[i]);
+
+                /*
+                 * If poll is in effect, submit the request now, and poll for
+                 * a completion status later, after all descriptors have been
+                 * submitted. If polling is not enabled, submit the request
+                 * and wait for it to complete, i.e., synchronously, before
+                 * moving on to the next request.
+                 */
+                if (poll) {
+                        errors[i] = iaa_comp_acompress(reqs[i]);
+
+                        if (errors[i] != -EINPROGRESS)
+                                errors[i] = -EINVAL;
+                        else
+                                errors[i] = -EAGAIN;
+                } else {
+                        acomp_request_set_callback(reqs[i],
+                                                   CRYPTO_TFM_REQ_MAY_BACKLOG,
+                                                   crypto_req_done, wait);
+                        errors[i] = crypto_wait_req(iaa_comp_acompress(reqs[i]),
+                                                    wait);
+                        if (!errors[i])
+                                dlens[i] = reqs[i]->dlen;
+                }
+        }
+
+        /*
+         * If not doing async compressions, the batch has been processed at
+         * this point and we can return.
+         */
+        if (!poll)
+                goto reset_reqs_wait;
+
+        /*
+         * Poll for and process IAA compress job completions
+         * in out-of-order manner.
+         */
+        while (!compressions_done) {
+                compressions_done = true;
+
+                for (i = 0; i < nr_pages; ++i) {
+                        /*
+                         * Skip, if the compression has already completed
+                         * successfully or with an error.
+                         */
+                        if (errors[i] != -EAGAIN)
+                                continue;
+
+                        errors[i] = iaa_comp_poll(reqs[i]);
+
+                        if (errors[i]) {
+                                if (errors[i] == -EAGAIN)
+                                        compressions_done = false;
+                        } else {
+                                dlens[i] = reqs[i]->dlen;
+                        }
+                }
+        }
+
+reset_reqs_wait:
+        /*
+         * For the same 'reqs[]' and 'wait' to be usable by
+         * iaa_comp_acompress()/iaa_comp_adecompress():
+         * Clear the CRYPTO_ACOMP_REQ_POLL bit on the acomp_reqs.
+         * Reset the crypto_wait "wait" callback to reqs[0].
+         */
+        iaa_set_req_poll(reqs, nr_pages, false);
+        acomp_request_set_callback(reqs[0],
+                                   CRYPTO_TFM_REQ_MAY_BACKLOG,
+                                   crypto_req_done, wait);
+}
+
+/**
+ * This API provides IAA decompress batching functionality for use by swap
+ * modules.
+ *
+ * @reqs: @nr_pages asynchronous decompress requests.
+ * @wait: crypto_wait for synchronous acomp batch decompress. If NULL, the
+ *        driver must provide a way to process completions asynchronously.
+ * @srcs: The src buffers to be decompressed by IAA in parallel.
+ * @pages: The pages to store the decompressed buffers.
+ * @slens: Compressed lengths of @srcs.
+ * @errors: zero on successful decompression of the corresponding
+ *          req, or error code in case of error.
+ * @nr_pages: The number of pages, up to CRYPTO_BATCH_SIZE,
+ *            to be decompressed.
+ */
+static void iaa_comp_adecompress_batch(
+        struct acomp_req *reqs[],
+        struct crypto_wait *wait,
+        u8 *srcs[],
+        struct page *pages[],
+        unsigned int slens[],
+        int errors[],
+        int nr_pages)
+{
+        struct scatterlist inputs[CRYPTO_BATCH_SIZE];
+        struct scatterlist outputs[CRYPTO_BATCH_SIZE];
+        unsigned int dlens[CRYPTO_BATCH_SIZE];
+        bool decompressions_done = false;
+        bool poll = (async_mode && !use_irq);
+        int i;
+
+        BUG_ON(nr_pages > CRYPTO_BATCH_SIZE);
+        BUG_ON(!poll && !wait);
+
+        if (poll)
+                iaa_set_req_poll(reqs, nr_pages, true);
+        else
+                iaa_set_req_poll(reqs, nr_pages, false);
+
+        /*
+         * Prepare and submit acomp_reqs to IAA. IAA will process these
+         * decompress jobs in parallel if async-poll mode is enabled.
+         * If IAA is used in sync mode, the jobs will be processed sequentially
+         * using "wait".
+         */
+        for (i = 0; i < nr_pages; ++i) {
+                dlens[i] = PAGE_SIZE;
+                sg_init_one(&inputs[i], srcs[i], slens[i]);
+                sg_init_table(&outputs[i], 1);
+                sg_set_page(&outputs[i], pages[i], PAGE_SIZE, 0);
+                acomp_request_set_params(reqs[i], &inputs[i],
+                                         &outputs[i], slens[i], dlens[i]);
+                /*
+                 * If poll is in effect, submit the request now, and poll for
+                 * a completion status later, after all descriptors have been
+                 * submitted. If polling is not enabled, submit the request
+                 * and wait for it to complete, i.e., synchronously, before
+                 * moving on to the next request.
+                 */
+                if (poll) {
+                        errors[i] = iaa_comp_adecompress(reqs[i]);
+
+                        if (errors[i] != -EINPROGRESS)
+                                errors[i] = -EINVAL;
+                        else
+                                errors[i] = -EAGAIN;
+                } else {
+                        acomp_request_set_callback(reqs[i],
+                                                   CRYPTO_TFM_REQ_MAY_BACKLOG,
+                                                   crypto_req_done, wait);
+                        errors[i] = crypto_wait_req(iaa_comp_adecompress(reqs[i]),
+                                                    wait);
+                        if (!errors[i]) {
+                                dlens[i] = reqs[i]->dlen;
+                                BUG_ON(dlens[i] != PAGE_SIZE);
+                        }
+                }
+        }
+
+        /*
+         * If not doing async decompressions, the batch has been processed at
+         * this point and we can return.
+         */
+        if (!poll)
+                goto reset_reqs_wait;
+
+        /*
+         * Poll for and process IAA decompress job completions
+         * in out-of-order manner.
+         */
+        while (!decompressions_done) {
+                decompressions_done = true;
+
+                for (i = 0; i < nr_pages; ++i) {
+                        /*
+                         * Skip, if the decompression has already completed
+                         * successfully or with an error.
+                         */
+                        if (errors[i] != -EAGAIN)
+                                continue;
+
+                        errors[i] = iaa_comp_poll(reqs[i]);
+
+                        if (errors[i]) {
+                                if (errors[i] == -EAGAIN)
+                                        decompressions_done = false;
+                        } else {
+                                dlens[i] = reqs[i]->dlen;
+                                BUG_ON(dlens[i] != PAGE_SIZE);
+                        }
+                }
+        }
+
+reset_reqs_wait:
+        /*
+         * For the same 'reqs[]' and 'wait' to be usable by
+         * iaa_comp_acompress()/iaa_comp_adecompress():
+         * Clear the CRYPTO_ACOMP_REQ_POLL bit on the acomp_reqs.
+         * Reset the crypto_wait "wait" callback to reqs[0].
+         */
+        iaa_set_req_poll(reqs, nr_pages, false);
+        acomp_request_set_callback(reqs[0],
+                                   CRYPTO_TFM_REQ_MAY_BACKLOG,
+                                   crypto_req_done, wait);
+}
+
 static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
 {
         struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
@@ -1822,6 +2157,8 @@ static struct acomp_alg iaa_acomp_fixed_deflate = {
         .compress = iaa_comp_acompress,
         .decompress = iaa_comp_adecompress,
         .dst_free = dst_free,
+        .batch_compress = iaa_comp_acompress_batch,
+        .batch_decompress = iaa_comp_adecompress_batch,
         .base = {
                 .cra_name = "deflate",
                 .cra_driver_name = "deflate-iaa",
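
As a rough sketch of the hand-off this enables on the swap-module side
(illustrative only; example_compress_pages() and its arguments are not
part of this series), a caller can key off acomp_has_async_batching() to
choose between the batching path registered here by "deflate-iaa" and the
existing single-page path used for software compressors:

  #include <crypto/acompress.h>

  /*
   * Illustrative only: each reqs[i] is assumed to already have its
   * src/dst scatterlists and "wait" callback configured by the caller.
   */
  static void example_compress_pages(struct crypto_acomp *tfm,
                                     struct acomp_req *reqs[],
                                     struct crypto_wait *wait,
                                     struct page *pages[],
                                     u8 *dsts[],
                                     unsigned int dlens[],
                                     int errors[],
                                     int nr)
  {
          int i;

          if (acomp_has_async_batching(tfm)) {
                  /* deflate-iaa: up to CRYPTO_BATCH_SIZE pages compressed in parallel */
                  crypto_acomp_batch_compress(reqs, wait, pages, dsts,
                                              dlens, errors, nr);
                  return;
          }

          /* Software compressors: existing sequential, synchronous calls. */
          for (i = 0; i < nr; i++) {
                  errors[i] = crypto_wait_req(crypto_acomp_compress(reqs[i]), wait);
                  if (!errors[i])
                          dlens[i] = reqs[i]->dlen;
          }
  }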
From patchwork Sat Nov 23 07:01:21 2024
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13883780
From: Kanchana P Sridhar
Subject: [PATCH v4 04/10] crypto: iaa - Make async mode the default.
Date: Fri, 22 Nov 2024 23:01:21 -0800
Message-Id: <20241123070127.332773-5-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>

This patch makes the iaa_crypto driver load by default in the most
efficient/recommended "async" mode for parallel compressions and
decompressions, namely, asynchronous submission of descriptors followed by
polling for job completions. Earlier, "sync" mode used to be the default.

This way, anyone who wants to use IAA can do so after building the kernel,
without having to go through these steps to use async poll:

  1) disable all the IAA device/wq bindings that happen at boot time
  2) rmmod iaa_crypto
  3) modprobe iaa_crypto
  4) echo async > /sys/bus/dsa/drivers/crypto/sync_mode
  5) re-run initialization of the IAA devices and wqs

Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index cbf147a3c3cb..bd2db0b6f145 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -153,7 +153,7 @@ static DRIVER_ATTR_RW(verify_compress);
  */
 
 /* Use async mode */
-static bool async_mode;
+static bool async_mode = true;
 
 /* Use interrupts */
 static bool use_irq;
From patchwork Sat Nov 23 07:01:22 2024
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13883781
From: Kanchana P Sridhar
Subject: [PATCH v4 05/10] crypto: iaa - Disable iaa_verify_compress by default.
Date: Fri, 22 Nov 2024 23:01:22 -0800
Message-Id: <20241123070127.332773-6-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>

This patch makes the iaa_crypto driver load by default with
"iaa_verify_compress" disabled, to facilitate performance comparisons with
software compressors (which also do not run compress verification by
default). Earlier, iaa_crypto compress verification used to be enabled by
default.

With this patch, if users want to enable compress verification, they can
do so with these steps:

  1) disable all the IAA device/wq bindings that happen at boot time
  2) rmmod iaa_crypto
  3) modprobe iaa_crypto
  4) echo 1 > /sys/bus/dsa/drivers/crypto/verify_compress
  5) re-run initialization of the IAA devices and wqs

Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index bd2db0b6f145..a572803a53d0 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -94,7 +94,7 @@ static bool iaa_crypto_enabled;
 static bool iaa_crypto_registered;
 
 /* Verify results of IAA compress or not */
-static bool iaa_verify_compress = true;
+static bool iaa_verify_compress = false;
 
 static ssize_t verify_compress_show(struct device_driver *driver, char *buf)
 {
From patchwork Sat Nov 23 07:01:23 2024
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13883782
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v4 06/10] crypto: iaa - Re-organize the iaa_crypto driver code.
Date: Fri, 22 Nov 2024 23:01:23 -0800
Message-Id: <20241123070127.332773-7-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>
References: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0

This patch merely reorganizes the code in iaa_crypto_main.c so that functions are consolidated into logically related sub-sections. This should make the code more maintainable and make it easier to replace functional layers and/or add new features.

Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 540 +++++++++++----------
 1 file changed, 275 insertions(+), 265 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c index a572803a53d0..c2362e4525bd 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto_main.c +++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c @@ -24,6 +24,9 @@ #define IAA_ALG_PRIORITY 300 +/************************************** + * Driver internal global variables.
+ **************************************/ /* number of iaa instances probed */ static unsigned int nr_iaa; static unsigned int nr_cpus; @@ -36,55 +39,46 @@ static unsigned int cpus_per_iaa; static struct crypto_comp *deflate_generic_tfm; /* Per-cpu lookup table for balanced wqs */ -static struct wq_table_entry __percpu *wq_table; +static struct wq_table_entry __percpu *wq_table = NULL; -static struct idxd_wq *wq_table_next_wq(int cpu) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - - if (++entry->cur_wq >= entry->n_wqs) - entry->cur_wq = 0; - - if (!entry->wqs[entry->cur_wq]) - return NULL; - - pr_debug("%s: returning wq at idx %d (iaa wq %d.%d) from cpu %d\n", __func__, - entry->cur_wq, entry->wqs[entry->cur_wq]->idxd->id, - entry->wqs[entry->cur_wq]->id, cpu); - - return entry->wqs[entry->cur_wq]; -} - -static void wq_table_add(int cpu, struct idxd_wq *wq) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - - if (WARN_ON(entry->n_wqs == entry->max_wqs)) - return; - - entry->wqs[entry->n_wqs++] = wq; - - pr_debug("%s: added iaa wq %d.%d to idx %d of cpu %d\n", __func__, - entry->wqs[entry->n_wqs - 1]->idxd->id, - entry->wqs[entry->n_wqs - 1]->id, entry->n_wqs - 1, cpu); -} - -static void wq_table_free_entry(int cpu) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); +/* Verify results of IAA compress or not */ +static bool iaa_verify_compress = false; - kfree(entry->wqs); - memset(entry, 0, sizeof(*entry)); -} +/* + * The iaa crypto driver supports three 'sync' methods determining how + * compressions and decompressions are performed: + * + * - sync: the compression or decompression completes before + * returning. This is the mode used by the async crypto + * interface when the sync mode is set to 'sync' and by + * the sync crypto interface regardless of setting. + * + * - async: the compression or decompression is submitted and returns + * immediately. Completion interrupts are not used so + * the caller is responsible for polling the descriptor + * for completion. This mode is applicable to only the + * async crypto interface and is ignored for anything + * else. + * + * - async_irq: the compression or decompression is submitted and + * returns immediately. Completion interrupts are + * enabled so the caller can wait for the completion and + * yield to other threads. When the compression or + * decompression completes, the completion is signaled + * and the caller awakened. This mode is applicable to + * only the async crypto interface and is ignored for + * anything else. + * + * These modes can be set using the iaa_crypto sync_mode driver + * attribute. + */ -static void wq_table_clear_entry(int cpu) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); +/* Use async mode */ +static bool async_mode = true; +/* Use interrupts */ +static bool use_irq; - entry->n_wqs = 0; - entry->cur_wq = 0; - memset(entry->wqs, 0, entry->max_wqs * sizeof(struct idxd_wq *)); -} +static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX]; LIST_HEAD(iaa_devices); DEFINE_MUTEX(iaa_devices_lock); @@ -93,9 +87,9 @@ DEFINE_MUTEX(iaa_devices_lock); static bool iaa_crypto_enabled; static bool iaa_crypto_registered; -/* Verify results of IAA compress or not */ -static bool iaa_verify_compress = false; - +/************************************************** + * Driver attributes along with get/set functions. 
+ **************************************************/ static ssize_t verify_compress_show(struct device_driver *driver, char *buf) { return sprintf(buf, "%d\n", iaa_verify_compress); @@ -123,40 +117,6 @@ static ssize_t verify_compress_store(struct device_driver *driver, } static DRIVER_ATTR_RW(verify_compress); -/* - * The iaa crypto driver supports three 'sync' methods determining how - * compressions and decompressions are performed: - * - * - sync: the compression or decompression completes before - * returning. This is the mode used by the async crypto - * interface when the sync mode is set to 'sync' and by - * the sync crypto interface regardless of setting. - * - * - async: the compression or decompression is submitted and returns - * immediately. Completion interrupts are not used so - * the caller is responsible for polling the descriptor - * for completion. This mode is applicable to only the - * async crypto interface and is ignored for anything - * else. - * - * - async_irq: the compression or decompression is submitted and - * returns immediately. Completion interrupts are - * enabled so the caller can wait for the completion and - * yield to other threads. When the compression or - * decompression completes, the completion is signaled - * and the caller awakened. This mode is applicable to - * only the async crypto interface and is ignored for - * anything else. - * - * These modes can be set using the iaa_crypto sync_mode driver - * attribute. - */ - -/* Use async mode */ -static bool async_mode = true; -/* Use interrupts */ -static bool use_irq; - /** * set_iaa_sync_mode - Set IAA sync mode * @name: The name of the sync mode @@ -219,8 +179,9 @@ static ssize_t sync_mode_store(struct device_driver *driver, } static DRIVER_ATTR_RW(sync_mode); -static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX]; - +/**************************** + * Driver compression modes. + ****************************/ static int find_empty_iaa_compression_mode(void) { int i = -EINVAL; @@ -411,11 +372,6 @@ static void free_device_compression_mode(struct iaa_device *iaa_device, IDXD_OP_FLAG_WR_SRC2_AECS_COMP | \ IDXD_OP_FLAG_AECS_RW_TGLS) -static int check_completion(struct device *dev, - struct iax_completion_record *comp, - bool compress, - bool only_once); - static int init_device_compression_mode(struct iaa_device *iaa_device, struct iaa_compression_mode *mode, int idx, struct idxd_wq *wq) @@ -502,6 +458,10 @@ static void remove_device_compression_modes(struct iaa_device *iaa_device) } } +/*********************************************************** + * Functions for use in crypto probe and remove interfaces: + * allocate/init/query/deallocate devices/wqs. 
+ ***********************************************************/ static struct iaa_device *iaa_device_alloc(void) { struct iaa_device *iaa_device; @@ -614,16 +574,6 @@ static void del_iaa_wq(struct iaa_device *iaa_device, struct idxd_wq *wq) } } -static void clear_wq_table(void) -{ - int cpu; - - for (cpu = 0; cpu < nr_cpus; cpu++) - wq_table_clear_entry(cpu); - - pr_debug("cleared wq table\n"); -} - static void free_iaa_device(struct iaa_device *iaa_device) { if (!iaa_device) @@ -704,43 +654,6 @@ static int iaa_wq_put(struct idxd_wq *wq) return ret; } -static void free_wq_table(void) -{ - int cpu; - - for (cpu = 0; cpu < nr_cpus; cpu++) - wq_table_free_entry(cpu); - - free_percpu(wq_table); - - pr_debug("freed wq table\n"); -} - -static int alloc_wq_table(int max_wqs) -{ - struct wq_table_entry *entry; - int cpu; - - wq_table = alloc_percpu(struct wq_table_entry); - if (!wq_table) - return -ENOMEM; - - for (cpu = 0; cpu < nr_cpus; cpu++) { - entry = per_cpu_ptr(wq_table, cpu); - entry->wqs = kcalloc(max_wqs, sizeof(struct wq *), GFP_KERNEL); - if (!entry->wqs) { - free_wq_table(); - return -ENOMEM; - } - - entry->max_wqs = max_wqs; - } - - pr_debug("initialized wq table\n"); - - return 0; -} - static int save_iaa_wq(struct idxd_wq *wq) { struct iaa_device *iaa_device, *found = NULL; @@ -829,6 +742,87 @@ static void remove_iaa_wq(struct idxd_wq *wq) cpus_per_iaa = 1; } +/*************************************************************** + * Mapping IAA devices and wqs to cores with per-cpu wq_tables. + ***************************************************************/ +static void wq_table_free_entry(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); + + kfree(entry->wqs); + memset(entry, 0, sizeof(*entry)); +} + +static void wq_table_clear_entry(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); + + entry->n_wqs = 0; + entry->cur_wq = 0; + memset(entry->wqs, 0, entry->max_wqs * sizeof(struct idxd_wq *)); +} + +static void clear_wq_table(void) +{ + int cpu; + + for (cpu = 0; cpu < nr_cpus; cpu++) + wq_table_clear_entry(cpu); + + pr_debug("cleared wq table\n"); +} + +static void free_wq_table(void) +{ + int cpu; + + for (cpu = 0; cpu < nr_cpus; cpu++) + wq_table_free_entry(cpu); + + free_percpu(wq_table); + + pr_debug("freed wq table\n"); +} + +static int alloc_wq_table(int max_wqs) +{ + struct wq_table_entry *entry; + int cpu; + + wq_table = alloc_percpu(struct wq_table_entry); + if (!wq_table) + return -ENOMEM; + + for (cpu = 0; cpu < nr_cpus; cpu++) { + entry = per_cpu_ptr(wq_table, cpu); + entry->wqs = kcalloc(max_wqs, sizeof(struct wq *), GFP_KERNEL); + if (!entry->wqs) { + free_wq_table(); + return -ENOMEM; + } + + entry->max_wqs = max_wqs; + } + + pr_debug("initialized wq table\n"); + + return 0; +} + +static void wq_table_add(int cpu, struct idxd_wq *wq) +{ + struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); + + if (WARN_ON(entry->n_wqs == entry->max_wqs)) + return; + + entry->wqs[entry->n_wqs++] = wq; + + pr_debug("%s: added iaa wq %d.%d to idx %d of cpu %d\n", __func__, + entry->wqs[entry->n_wqs - 1]->idxd->id, + entry->wqs[entry->n_wqs - 1]->id, entry->n_wqs - 1, cpu); +} + static int wq_table_add_wqs(int iaa, int cpu) { struct iaa_device *iaa_device, *found_device = NULL; @@ -939,6 +933,29 @@ static void rebalance_wq_table(void) } } +/*************************************************************** + * Assign work-queues for driver ops using per-cpu wq_tables. 
+ ***************************************************************/ +static struct idxd_wq *wq_table_next_wq(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); + + if (++entry->cur_wq >= entry->n_wqs) + entry->cur_wq = 0; + + if (!entry->wqs[entry->cur_wq]) + return NULL; + + pr_debug("%s: returning wq at idx %d (iaa wq %d.%d) from cpu %d\n", __func__, + entry->cur_wq, entry->wqs[entry->cur_wq]->idxd->id, + entry->wqs[entry->cur_wq]->id, cpu); + + return entry->wqs[entry->cur_wq]; +} + +/************************************************* + * Core iaa_crypto compress/decompress functions. + *************************************************/ static inline int check_completion(struct device *dev, struct iax_completion_record *comp, bool compress, @@ -1010,13 +1027,130 @@ static int deflate_generic_decompress(struct acomp_req *req) static int iaa_remap_for_verify(struct device *dev, struct iaa_wq *iaa_wq, struct acomp_req *req, - dma_addr_t *src_addr, dma_addr_t *dst_addr); + dma_addr_t *src_addr, dma_addr_t *dst_addr) +{ + int ret = 0; + int nr_sgs; + + dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE); + dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE); + + nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_FROM_DEVICE); + if (nr_sgs <= 0 || nr_sgs > 1) { + dev_dbg(dev, "verify: couldn't map src sg for iaa device %d," + " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id, + iaa_wq->wq->id, ret); + ret = -EIO; + goto out; + } + *src_addr = sg_dma_address(req->src); + dev_dbg(dev, "verify: dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p," + " req->slen %d, sg_dma_len(sg) %d\n", *src_addr, nr_sgs, + req->src, req->slen, sg_dma_len(req->src)); + + nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_TO_DEVICE); + if (nr_sgs <= 0 || nr_sgs > 1) { + dev_dbg(dev, "verify: couldn't map dst sg for iaa device %d," + " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id, + iaa_wq->wq->id, ret); + ret = -EIO; + dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_FROM_DEVICE); + goto out; + } + *dst_addr = sg_dma_address(req->dst); + dev_dbg(dev, "verify: dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p," + " req->dlen %d, sg_dma_len(sg) %d\n", *dst_addr, nr_sgs, + req->dst, req->dlen, sg_dma_len(req->dst)); +out: + return ret; +} static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req, struct idxd_wq *wq, dma_addr_t src_addr, unsigned int slen, dma_addr_t dst_addr, unsigned int *dlen, - u32 compression_crc); + u32 compression_crc) +{ + struct iaa_device_compression_mode *active_compression_mode; + struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm); + struct iaa_device *iaa_device; + struct idxd_desc *idxd_desc; + struct iax_hw_desc *desc; + struct idxd_device *idxd; + struct iaa_wq *iaa_wq; + struct pci_dev *pdev; + struct device *dev; + int ret = 0; + + iaa_wq = idxd_wq_get_private(wq); + iaa_device = iaa_wq->iaa_device; + idxd = iaa_device->idxd; + pdev = idxd->pdev; + dev = &pdev->dev; + + active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode); + + idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK); + if (IS_ERR(idxd_desc)) { + dev_dbg(dev, "idxd descriptor allocation failed\n"); + dev_dbg(dev, "iaa compress failed: ret=%ld\n", + PTR_ERR(idxd_desc)); + return PTR_ERR(idxd_desc); + } + desc = idxd_desc->iax_hw; + + /* Verify (optional) - decompress and check crc, suppress dest write */ + + desc->flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CC; + desc->opcode = IAX_OPCODE_DECOMPRESS; + 
desc->decompr_flags = IAA_DECOMP_FLAGS | IAA_DECOMP_SUPPRESS_OUTPUT; + desc->priv = 0; + + desc->src1_addr = (u64)dst_addr; + desc->src1_size = *dlen; + desc->dst_addr = (u64)src_addr; + desc->max_dst_size = slen; + desc->completion_addr = idxd_desc->compl_dma; + + dev_dbg(dev, "(verify) compression mode %s," + " desc->src1_addr %llx, desc->src1_size %d," + " desc->dst_addr %llx, desc->max_dst_size %d," + " desc->src2_addr %llx, desc->src2_size %d\n", + active_compression_mode->name, + desc->src1_addr, desc->src1_size, desc->dst_addr, + desc->max_dst_size, desc->src2_addr, desc->src2_size); + + ret = idxd_submit_desc(wq, idxd_desc); + if (ret) { + dev_dbg(dev, "submit_desc (verify) failed ret=%d\n", ret); + goto err; + } + + ret = check_completion(dev, idxd_desc->iax_completion, false, false); + if (ret) { + dev_dbg(dev, "(verify) check_completion failed ret=%d\n", ret); + goto err; + } + + if (compression_crc != idxd_desc->iax_completion->crc) { + ret = -EINVAL; + dev_dbg(dev, "(verify) iaa comp/decomp crc mismatch:" + " comp=0x%x, decomp=0x%x\n", compression_crc, + idxd_desc->iax_completion->crc); + print_hex_dump(KERN_INFO, "cmp-rec: ", DUMP_PREFIX_OFFSET, + 8, 1, idxd_desc->iax_completion, 64, 0); + goto err; + } + + idxd_free_desc(wq, idxd_desc); +out: + return ret; +err: + idxd_free_desc(wq, idxd_desc); + dev_dbg(dev, "iaa compress failed: ret=%d\n", ret); + + goto out; +} static void iaa_desc_complete(struct idxd_desc *idxd_desc, enum idxd_complete_type comp_type, @@ -1235,133 +1369,6 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, goto out; } -static int iaa_remap_for_verify(struct device *dev, struct iaa_wq *iaa_wq, - struct acomp_req *req, - dma_addr_t *src_addr, dma_addr_t *dst_addr) -{ - int ret = 0; - int nr_sgs; - - dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE); - dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE); - - nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_FROM_DEVICE); - if (nr_sgs <= 0 || nr_sgs > 1) { - dev_dbg(dev, "verify: couldn't map src sg for iaa device %d," - " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id, - iaa_wq->wq->id, ret); - ret = -EIO; - goto out; - } - *src_addr = sg_dma_address(req->src); - dev_dbg(dev, "verify: dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p," - " req->slen %d, sg_dma_len(sg) %d\n", *src_addr, nr_sgs, - req->src, req->slen, sg_dma_len(req->src)); - - nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_TO_DEVICE); - if (nr_sgs <= 0 || nr_sgs > 1) { - dev_dbg(dev, "verify: couldn't map dst sg for iaa device %d," - " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id, - iaa_wq->wq->id, ret); - ret = -EIO; - dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_FROM_DEVICE); - goto out; - } - *dst_addr = sg_dma_address(req->dst); - dev_dbg(dev, "verify: dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p," - " req->dlen %d, sg_dma_len(sg) %d\n", *dst_addr, nr_sgs, - req->dst, req->dlen, sg_dma_len(req->dst)); -out: - return ret; -} - -static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req, - struct idxd_wq *wq, - dma_addr_t src_addr, unsigned int slen, - dma_addr_t dst_addr, unsigned int *dlen, - u32 compression_crc) -{ - struct iaa_device_compression_mode *active_compression_mode; - struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm); - struct iaa_device *iaa_device; - struct idxd_desc *idxd_desc; - struct iax_hw_desc *desc; - struct idxd_device *idxd; - struct iaa_wq *iaa_wq; - struct pci_dev *pdev; - struct device *dev; - int ret = 0; - 
- iaa_wq = idxd_wq_get_private(wq); - iaa_device = iaa_wq->iaa_device; - idxd = iaa_device->idxd; - pdev = idxd->pdev; - dev = &pdev->dev; - - active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode); - - idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK); - if (IS_ERR(idxd_desc)) { - dev_dbg(dev, "idxd descriptor allocation failed\n"); - dev_dbg(dev, "iaa compress failed: ret=%ld\n", - PTR_ERR(idxd_desc)); - return PTR_ERR(idxd_desc); - } - desc = idxd_desc->iax_hw; - - /* Verify (optional) - decompress and check crc, suppress dest write */ - - desc->flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CC; - desc->opcode = IAX_OPCODE_DECOMPRESS; - desc->decompr_flags = IAA_DECOMP_FLAGS | IAA_DECOMP_SUPPRESS_OUTPUT; - desc->priv = 0; - - desc->src1_addr = (u64)dst_addr; - desc->src1_size = *dlen; - desc->dst_addr = (u64)src_addr; - desc->max_dst_size = slen; - desc->completion_addr = idxd_desc->compl_dma; - - dev_dbg(dev, "(verify) compression mode %s," - " desc->src1_addr %llx, desc->src1_size %d," - " desc->dst_addr %llx, desc->max_dst_size %d," - " desc->src2_addr %llx, desc->src2_size %d\n", - active_compression_mode->name, - desc->src1_addr, desc->src1_size, desc->dst_addr, - desc->max_dst_size, desc->src2_addr, desc->src2_size); - - ret = idxd_submit_desc(wq, idxd_desc); - if (ret) { - dev_dbg(dev, "submit_desc (verify) failed ret=%d\n", ret); - goto err; - } - - ret = check_completion(dev, idxd_desc->iax_completion, false, false); - if (ret) { - dev_dbg(dev, "(verify) check_completion failed ret=%d\n", ret); - goto err; - } - - if (compression_crc != idxd_desc->iax_completion->crc) { - ret = -EINVAL; - dev_dbg(dev, "(verify) iaa comp/decomp crc mismatch:" - " comp=0x%x, decomp=0x%x\n", compression_crc, - idxd_desc->iax_completion->crc); - print_hex_dump(KERN_INFO, "cmp-rec: ", DUMP_PREFIX_OFFSET, - 8, 1, idxd_desc->iax_completion, 64, 0); - goto err; - } - - idxd_free_desc(wq, idxd_desc); -out: - return ret; -err: - idxd_free_desc(wq, idxd_desc); - dev_dbg(dev, "iaa compress failed: ret=%d\n", ret); - - goto out; -} - static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req, struct idxd_wq *wq, dma_addr_t src_addr, unsigned int slen, @@ -2132,6 +2139,9 @@ static void iaa_comp_adecompress_batch( crypto_req_done, wait); } +/********************************************* + * Interfaces to crypto_alg and crypto_acomp. 
+ *********************************************/ static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm) { struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
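As an aside, the three 'sync' methods documented in the comment block moved by this patch (sync, async, async_irq) reduce to the two driver booleans async_mode and use_irq. A minimal sketch of how the sync_mode attribute might encode them is shown below; this is derived only from the mode descriptions above and may differ from the driver's actual set_iaa_sync_mode().

#include <linux/errno.h>
#include <linux/string.h>

static bool async_mode = true;	/* submit and return immediately */
static bool use_irq;		/* signal completion via interrupt */

/* Illustrative mapping of the documented modes onto the two booleans. */
static int set_iaa_sync_mode_sketch(const char *name)
{
	if (sysfs_streq(name, "sync")) {
		async_mode = false;
		use_irq = false;
	} else if (sysfs_streq(name, "async")) {
		async_mode = true;
		use_irq = false;	/* caller polls the descriptor */
	} else if (sysfs_streq(name, "async_irq")) {
		async_mode = true;
		use_irq = true;		/* caller waits for the completion irq */
	} else {
		return -EINVAL;
	}

	return 0;
}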
From patchwork Sat Nov 23 07:01:24 2024
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13883783
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v4 07/10] crypto: iaa - Map IAA devices/wqs to cores based on packages instead of NUMA.
Date: Fri, 22 Nov 2024 23:01:24 -0800
Message-Id: <20241123070127.332773-8-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>
References: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0

This patch modifies the algorithm for mapping available IAA devices and wqs to cores, as they are being discovered, based on packages instead
of NUMA nodes. This leads to a more realistic mapping of IAA devices as compression/decompression resources for a package, rather than for a NUMA node. This also resolves problems that were observed during internal validation on Intel platforms with many more NUMA nodes than packages: for such cases, the earlier NUMA based allocation caused some IAAs to be over-subscribed and some to not be utilized at all. As a result of this change from NUMA to packages, some of the core functions used by the iaa_crypto driver's "probe" and "remove" API have been re-written. The new infrastructure maintains a static/global mapping of "local wqs" per IAA device, in the "struct iaa_device" itself. The earlier implementation would allocate memory per-cpu for this data, which never changes once the IAA devices/wqs have been initialized. Two main outcomes from this new iaa_crypto driver infrastructure are: 1) Resolves "task blocked for more than x seconds" errors observed during internal validation on Intel systems with the earlier NUMA node based mappings, which was root-caused to the non-optimal IAA-to-core mappings described earlier. 2) Results in a NUM_THREADS factor reduction in memory footprint cost of initializing IAA devices/wqs, due to eliminating the per-cpu copies of each IAA device's wqs. On a 384 cores Intel Granite Rapids server with 8 IAA devices, this saves 140MiB. Signed-off-by: Kanchana P Sridhar --- drivers/crypto/intel/iaa/iaa_crypto.h | 17 +- drivers/crypto/intel/iaa/iaa_crypto_main.c | 276 ++++++++++++--------- 2 files changed, 171 insertions(+), 122 deletions(-) diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h index 56985e395263..ca317c5aaf27 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto.h +++ b/drivers/crypto/intel/iaa/iaa_crypto.h @@ -46,6 +46,7 @@ struct iaa_wq { struct idxd_wq *wq; int ref; bool remove; + bool mapped; struct iaa_device *iaa_device; @@ -63,6 +64,13 @@ struct iaa_device_compression_mode { dma_addr_t aecs_comp_table_dma_addr; }; +struct wq_table_entry { + struct idxd_wq **wqs; + int max_wqs; + int n_wqs; + int cur_wq; +}; + /* Representation of IAA device with wqs, populated by probe */ struct iaa_device { struct list_head list; @@ -73,19 +81,14 @@ struct iaa_device { int n_wq; struct list_head wqs; + struct wq_table_entry *iaa_local_wqs; + atomic64_t comp_calls; atomic64_t comp_bytes; atomic64_t decomp_calls; atomic64_t decomp_bytes; }; -struct wq_table_entry { - struct idxd_wq **wqs; - int max_wqs; - int n_wqs; - int cur_wq; -}; - #define IAA_AECS_ALIGN 32 /* diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c index c2362e4525bd..28f2f5617bf0 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto_main.c +++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c @@ -30,8 +30,9 @@ /* number of iaa instances probed */ static unsigned int nr_iaa; static unsigned int nr_cpus; -static unsigned int nr_nodes; -static unsigned int nr_cpus_per_node; +static unsigned int nr_packages; +static unsigned int nr_cpus_per_package; +static unsigned int nr_iaa_per_package; /* Number of physical cpus sharing each iaa instance */ static unsigned int cpus_per_iaa; @@ -462,17 +463,46 @@ static void remove_device_compression_modes(struct iaa_device *iaa_device) * Functions for use in crypto probe and remove interfaces: * allocate/init/query/deallocate devices/wqs. 
***********************************************************/ -static struct iaa_device *iaa_device_alloc(void) +static struct iaa_device *iaa_device_alloc(struct idxd_device *idxd) { + struct wq_table_entry *local; struct iaa_device *iaa_device; iaa_device = kzalloc(sizeof(*iaa_device), GFP_KERNEL); if (!iaa_device) - return NULL; + goto err; + + iaa_device->idxd = idxd; + + /* IAA device's local wqs. */ + iaa_device->iaa_local_wqs = kzalloc(sizeof(struct wq_table_entry), GFP_KERNEL); + if (!iaa_device->iaa_local_wqs) + goto err; + + local = iaa_device->iaa_local_wqs; + + local->wqs = kzalloc(iaa_device->idxd->max_wqs * sizeof(struct wq *), GFP_KERNEL); + if (!local->wqs) + goto err; + + local->max_wqs = iaa_device->idxd->max_wqs; + local->n_wqs = 0; INIT_LIST_HEAD(&iaa_device->wqs); return iaa_device; + +err: + if (iaa_device) { + if (iaa_device->iaa_local_wqs) { + if (iaa_device->iaa_local_wqs->wqs) + kfree(iaa_device->iaa_local_wqs->wqs); + kfree(iaa_device->iaa_local_wqs); + } + kfree(iaa_device); + } + + return NULL; } static bool iaa_has_wq(struct iaa_device *iaa_device, struct idxd_wq *wq) @@ -491,12 +521,10 @@ static struct iaa_device *add_iaa_device(struct idxd_device *idxd) { struct iaa_device *iaa_device; - iaa_device = iaa_device_alloc(); + iaa_device = iaa_device_alloc(idxd); if (!iaa_device) return NULL; - iaa_device->idxd = idxd; - list_add_tail(&iaa_device->list, &iaa_devices); nr_iaa++; @@ -537,6 +565,7 @@ static int add_iaa_wq(struct iaa_device *iaa_device, struct idxd_wq *wq, iaa_wq->wq = wq; iaa_wq->iaa_device = iaa_device; idxd_wq_set_private(wq, iaa_wq); + iaa_wq->mapped = false; list_add_tail(&iaa_wq->list, &iaa_device->wqs); @@ -580,6 +609,13 @@ static void free_iaa_device(struct iaa_device *iaa_device) return; remove_device_compression_modes(iaa_device); + + if (iaa_device->iaa_local_wqs) { + if (iaa_device->iaa_local_wqs->wqs) + kfree(iaa_device->iaa_local_wqs->wqs); + kfree(iaa_device->iaa_local_wqs); + } + kfree(iaa_device); } @@ -716,9 +752,14 @@ static int save_iaa_wq(struct idxd_wq *wq) if (WARN_ON(nr_iaa == 0)) return -EINVAL; - cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa; + cpus_per_iaa = (nr_packages * nr_cpus_per_package) / nr_iaa; if (!cpus_per_iaa) cpus_per_iaa = 1; + + nr_iaa_per_package = nr_iaa / nr_packages; + if (!nr_iaa_per_package) + nr_iaa_per_package = 1; + out: return 0; } @@ -735,53 +776,45 @@ static void remove_iaa_wq(struct idxd_wq *wq) } if (nr_iaa) { - cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa; + cpus_per_iaa = (nr_packages * nr_cpus_per_package) / nr_iaa; if (!cpus_per_iaa) cpus_per_iaa = 1; - } else + + nr_iaa_per_package = nr_iaa / nr_packages; + if (!nr_iaa_per_package) + nr_iaa_per_package = 1; + } else { cpus_per_iaa = 1; + nr_iaa_per_package = 1; + } } /*************************************************************** * Mapping IAA devices and wqs to cores with per-cpu wq_tables. ***************************************************************/ -static void wq_table_free_entry(int cpu) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - - kfree(entry->wqs); - memset(entry, 0, sizeof(*entry)); -} - -static void wq_table_clear_entry(int cpu) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - - entry->n_wqs = 0; - entry->cur_wq = 0; - memset(entry->wqs, 0, entry->max_wqs * sizeof(struct idxd_wq *)); -} - -static void clear_wq_table(void) +/* + * Given a cpu, find the closest IAA instance. 
The idea is to try to + * choose the most appropriate IAA instance for a caller and spread + * available workqueues around to clients. + */ +static inline int cpu_to_iaa(int cpu) { - int cpu; - - for (cpu = 0; cpu < nr_cpus; cpu++) - wq_table_clear_entry(cpu); + int package_id, base_iaa, iaa = 0; - pr_debug("cleared wq table\n"); -} + if (!nr_packages || !nr_iaa_per_package) + return 0; -static void free_wq_table(void) -{ - int cpu; + package_id = topology_logical_package_id(cpu); + base_iaa = package_id * nr_iaa_per_package; + iaa = base_iaa + ((cpu % nr_cpus_per_package) / cpus_per_iaa); - for (cpu = 0; cpu < nr_cpus; cpu++) - wq_table_free_entry(cpu); + pr_debug("cpu = %d, package_id = %d, base_iaa = %d, iaa = %d", + cpu, package_id, base_iaa, iaa); - free_percpu(wq_table); + if (iaa >= 0 && iaa < nr_iaa) + return iaa; - pr_debug("freed wq table\n"); + return (nr_iaa - 1); } static int alloc_wq_table(int max_wqs) @@ -795,13 +828,11 @@ static int alloc_wq_table(int max_wqs) for (cpu = 0; cpu < nr_cpus; cpu++) { entry = per_cpu_ptr(wq_table, cpu); - entry->wqs = kcalloc(max_wqs, sizeof(struct wq *), GFP_KERNEL); - if (!entry->wqs) { - free_wq_table(); - return -ENOMEM; - } + entry->wqs = NULL; entry->max_wqs = max_wqs; + entry->n_wqs = 0; + entry->cur_wq = 0; } pr_debug("initialized wq table\n"); @@ -809,33 +840,27 @@ static int alloc_wq_table(int max_wqs) return 0; } -static void wq_table_add(int cpu, struct idxd_wq *wq) +static void wq_table_add(int cpu, struct wq_table_entry *iaa_local_wqs) { struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - if (WARN_ON(entry->n_wqs == entry->max_wqs)) - return; - - entry->wqs[entry->n_wqs++] = wq; + entry->wqs = iaa_local_wqs->wqs; + entry->max_wqs = iaa_local_wqs->max_wqs; + entry->n_wqs = iaa_local_wqs->n_wqs; + entry->cur_wq = 0; - pr_debug("%s: added iaa wq %d.%d to idx %d of cpu %d\n", __func__, + pr_debug("%s: cpu %d: added %d iaa local wqs up to wq %d.%d\n", __func__, + cpu, entry->n_wqs, entry->wqs[entry->n_wqs - 1]->idxd->id, - entry->wqs[entry->n_wqs - 1]->id, entry->n_wqs - 1, cpu); + entry->wqs[entry->n_wqs - 1]->id); } static int wq_table_add_wqs(int iaa, int cpu) { struct iaa_device *iaa_device, *found_device = NULL; - int ret = 0, cur_iaa = 0, n_wqs_added = 0; - struct idxd_device *idxd; - struct iaa_wq *iaa_wq; - struct pci_dev *pdev; - struct device *dev; + int ret = 0, cur_iaa = 0; list_for_each_entry(iaa_device, &iaa_devices, list) { - idxd = iaa_device->idxd; - pdev = idxd->pdev; - dev = &pdev->dev; if (cur_iaa != iaa) { cur_iaa++; @@ -843,7 +868,8 @@ static int wq_table_add_wqs(int iaa, int cpu) } found_device = iaa_device; - dev_dbg(dev, "getting wq from iaa_device %d, cur_iaa %d\n", + dev_dbg(&found_device->idxd->pdev->dev, + "getting wq from iaa_device %d, cur_iaa %d\n", found_device->idxd->id, cur_iaa); break; } @@ -858,29 +884,58 @@ static int wq_table_add_wqs(int iaa, int cpu) } cur_iaa = 0; - idxd = found_device->idxd; - pdev = idxd->pdev; - dev = &pdev->dev; - dev_dbg(dev, "getting wq from only iaa_device %d, cur_iaa %d\n", + dev_dbg(&found_device->idxd->pdev->dev, + "getting wq from only iaa_device %d, cur_iaa %d\n", found_device->idxd->id, cur_iaa); } - list_for_each_entry(iaa_wq, &found_device->wqs, list) { - wq_table_add(cpu, iaa_wq->wq); - pr_debug("rebalance: added wq for cpu=%d: iaa wq %d.%d\n", - cpu, iaa_wq->wq->idxd->id, iaa_wq->wq->id); - n_wqs_added++; + wq_table_add(cpu, found_device->iaa_local_wqs); + +out: + return ret; +} + +static int map_iaa_device_wqs(struct iaa_device *iaa_device) +{ + struct 
wq_table_entry *local; + int ret = 0, n_wqs_added = 0; + struct iaa_wq *iaa_wq; + + local = iaa_device->iaa_local_wqs; + + list_for_each_entry(iaa_wq, &iaa_device->wqs, list) { + if (iaa_wq->mapped && ++n_wqs_added) + continue; + + pr_debug("iaa_device %px: processing wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); + + if (WARN_ON(local->n_wqs == local->max_wqs)) + break; + + local->wqs[local->n_wqs++] = iaa_wq->wq; + pr_debug("iaa_device %px: added local wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); + + iaa_wq->mapped = true; + ++n_wqs_added; } - if (!n_wqs_added) { - pr_debug("couldn't find any iaa wqs!\n"); + if (!n_wqs_added && !iaa_device->n_wq) { + pr_debug("iaa_device %d: couldn't find any iaa wqs!\n", iaa_device->idxd->id); ret = -EINVAL; - goto out; } -out: + return ret; } +static void map_iaa_devices(void) +{ + struct iaa_device *iaa_device; + + list_for_each_entry(iaa_device, &iaa_devices, list) { + BUG_ON(map_iaa_device_wqs(iaa_device)); + } +} + /* * Rebalance the wq table so that given a cpu, it's easy to find the * closest IAA instance. The idea is to try to choose the most @@ -889,48 +944,42 @@ static int wq_table_add_wqs(int iaa, int cpu) */ static void rebalance_wq_table(void) { - const struct cpumask *node_cpus; - int node, cpu, iaa = -1; + int cpu, iaa; if (nr_iaa == 0) return; - pr_debug("rebalance: nr_nodes=%d, nr_cpus %d, nr_iaa %d, cpus_per_iaa %d\n", - nr_nodes, nr_cpus, nr_iaa, cpus_per_iaa); + map_iaa_devices(); - clear_wq_table(); + pr_debug("rebalance: nr_packages=%d, nr_cpus %d, nr_iaa %d, cpus_per_iaa %d\n", + nr_packages, nr_cpus, nr_iaa, cpus_per_iaa); - if (nr_iaa == 1) { - for (cpu = 0; cpu < nr_cpus; cpu++) { - if (WARN_ON(wq_table_add_wqs(0, cpu))) { - pr_debug("could not add any wqs for iaa 0 to cpu %d!\n", cpu); - return; - } + for (cpu = 0; cpu < nr_cpus; cpu++) { + iaa = cpu_to_iaa(cpu); + pr_debug("rebalance: cpu=%d iaa=%d\n", cpu, iaa); + + if (WARN_ON(iaa == -1)) { + pr_debug("rebalance (cpu_to_iaa(%d)) failed!\n", cpu); + return; } - return; + if (WARN_ON(wq_table_add_wqs(iaa, cpu))) { + pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu); + return; + } } - for_each_node_with_cpus(node) { - node_cpus = cpumask_of_node(node); - - for (cpu = 0; cpu < cpumask_weight(node_cpus); cpu++) { - int node_cpu = cpumask_nth(cpu, node_cpus); - - if (WARN_ON(node_cpu >= nr_cpu_ids)) { - pr_debug("node_cpu %d doesn't exist!\n", node_cpu); - return; - } - - if ((cpu % cpus_per_iaa) == 0) - iaa++; + pr_debug("Finished rebalance local wqs."); +} - if (WARN_ON(wq_table_add_wqs(iaa, node_cpu))) { - pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu); - return; - } - } +static void free_wq_tables(void) +{ + if (wq_table) { + free_percpu(wq_table); + wq_table = NULL; } + + pr_debug("freed local wq table\n"); } /*************************************************************** @@ -2281,7 +2330,7 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev) free_iaa_wq(idxd_wq_get_private(wq)); err_save: if (first_wq) - free_wq_table(); + free_wq_tables(); err_alloc: mutex_unlock(&iaa_devices_lock); idxd_drv_disable_wq(wq); @@ -2331,7 +2380,9 @@ static void iaa_crypto_remove(struct idxd_dev *idxd_dev) if (nr_iaa == 0) { iaa_crypto_enabled = false; - free_wq_table(); + free_wq_tables(); + BUG_ON(!list_empty(&iaa_devices)); + INIT_LIST_HEAD(&iaa_devices); module_put(THIS_MODULE); pr_info("iaa_crypto now DISABLED\n"); @@ -2357,16 +2408,11 @@ static struct idxd_device_driver iaa_crypto_driver = { static int __init 
iaa_crypto_init_module(void) { int ret = 0; - int node; + INIT_LIST_HEAD(&iaa_devices); nr_cpus = num_possible_cpus(); - for_each_node_with_cpus(node) - nr_nodes++; - if (!nr_nodes) { - pr_err("IAA couldn't find any nodes with cpus\n"); - return -ENODEV; - } - nr_cpus_per_node = nr_cpus / nr_nodes; + nr_cpus_per_package = topology_num_cores_per_package(); + nr_packages = topology_max_packages(); if (crypto_has_comp("deflate-generic", 0, 0)) deflate_generic_tfm = crypto_alloc_comp("deflate-generic", 0, 0);
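To make the package-based mapping concrete, here is a small standalone worked example of the cpu_to_iaa() arithmetic introduced by this patch. The topology values (2 packages, 56 cores per package, 4 IAA devices per package) and the package_of() helper are assumptions made only for illustration; the driver itself uses topology_logical_package_id() and the counts discovered at probe time.

#include <stdio.h>

static const int nr_cpus_per_package = 56;	/* physical cores per package */
static const int nr_iaa_per_package  = 4;
static const int cpus_per_iaa        = 14;	/* (2 * 56) / 8 */

/*
 * Stand-in for topology_logical_package_id(): assumes cpus 0-55 and their
 * hyperthread siblings 112-167 are on package 0, cpus 56-111/168-223 on
 * package 1.
 */
static int package_of(int cpu)
{
	return (cpu % 112) / 56;
}

static int cpu_to_iaa(int cpu)
{
	int base_iaa = package_of(cpu) * nr_iaa_per_package;

	return base_iaa + ((cpu % nr_cpus_per_package) / cpus_per_iaa);
}

int main(void)
{
	/* e.g. cpu 0 -> iaa 0, cpu 30 -> iaa 2, cpu 60 -> iaa 4, cpu 120 -> iaa 0 */
	for (int cpu = 0; cpu < 224; cpu += 30)
		printf("cpu %3d -> iaa %d\n", cpu, cpu_to_iaa(cpu));

	return 0;
}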
From patchwork Sat Nov 23 07:01:25 2024
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13883784
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v4 08/10] crypto: iaa - Distribute compress jobs from all cores to all IAAs on a package.
Date: Fri, 22 Nov 2024 23:01:25 -0800
Message-Id: <20241123070127.332773-9-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>
References: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0

This change enables processes running on any logical core on a package to use all the IAA devices enabled on that package for compress jobs. In other words, compressions originating from any process in a package will be distributed in a round-robin manner to the available IAA devices on the same package.

The main premise behind this change is to make sure that no compress engines on any IAA device are un-utilized/under-utilized/over-utilized. In other words, the compress engines on all IAA devices are considered a global resource for that package, thus maximizing compression throughput. This allows the use of all IAA devices present in a given package for (batched) compressions originating from zswap/zram, from all cores on this package.

A new per-cpu "global_wq_table" implements this in the iaa_crypto driver. We can think of the global WQ per IAA as a WQ to which all cores on that package can submit compress jobs.

To avail of this feature, the user must configure 2 WQs per IAA in order to enable distribution of compress jobs to multiple IAA devices. Each IAA will have 2 WQs:

 wq.0 (local WQ): Used for decompress jobs from cores mapped by the cpu_to_iaa() "even balancing of logical cores to IAA devices" algorithm.

 wq.1 (global WQ): Used for compress jobs from *all* logical cores on that package.

The iaa_crypto driver will place all global WQs from all same-package IAA devices in the global_wq_table per cpu on that package. When the driver receives a compress job, it will look up the "next" global WQ in the cpu's global_wq_table to submit the descriptor.
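A rough sketch of the per-cpu selection just described is shown below. It reuses the wq_table_entry fields and the g_consec_descs_per_gwq knob introduced by this series, but it is a simplified illustration rather than the driver's exact code: the real driver keeps the cursor and the consecutive-descriptor counter in per-cpu data.

struct idxd_wq;		/* from the idxd driver */

struct wq_table_entry {
	struct idxd_wq **wqs;
	int max_wqs;
	int n_wqs;
	int cur_wq;
};

static int g_consec_descs_per_gwq = 1;

/* Pick the global wq for the next compress job submitted on this cpu. */
static struct idxd_wq *next_global_wq(struct wq_table_entry *entry,
				      int *num_consec_descs)
{
	if (!entry->n_wqs)
		return NULL;

	/* After g_consec_descs_per_gwq jobs, advance to the next wq. */
	if (++(*num_consec_descs) > g_consec_descs_per_gwq) {
		if (++entry->cur_wq >= entry->n_wqs)
			entry->cur_wq = 0;
		*num_consec_descs = 1;
	}

	return entry->wqs[entry->cur_wq];
}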
The starting wq in the global_wq_table for each cpu is the global wq associated with the IAA nearest to it, so that we stagger the starting global wq for each process. This results in very uniform usage of all IAAs for compress jobs. Two new driver module parameters are added for this feature: g_wqs_per_iaa (default 0): /sys/bus/dsa/drivers/crypto/g_wqs_per_iaa This represents the number of global WQs that can be configured per IAA device. The recommended setting is 1 to enable the use of this feature once the user configures 2 WQs per IAA using higher level scripts as described in Documentation/driver-api/crypto/iaa/iaa-crypto.rst. g_consec_descs_per_gwq (default 1): /sys/bus/dsa/drivers/crypto/g_consec_descs_per_gwq This represents the number of consecutive compress jobs that will be submitted to the same global WQ (i.e. to the same IAA device) from a given core, before moving to the next global WQ. The default is 1, which is also the recommended setting to avail of this feature. The decompress jobs from any core will be sent to the "local" IAA, namely the one that the driver assigns with the cpu_to_iaa() mapping algorithm that evenly balances the assignment of logical cores to IAA devices on a package. On a 2-package Sapphire Rapids server where each package has 56 cores and 4 IAA devices, this is how the compress/decompress jobs will be mapped when the user configures 2 WQs per IAA device (which implies wq.1 will be added to the global WQ table for each logical core on that package): package(s): 2 package0 CPU(s): 0-55,112-167 package1 CPU(s): 56-111,168-223 Compress jobs: -------------- package 0: iaa_crypto will send compress jobs from all cpus (0-55,112-167) to all IAA devices on the package (iax1/iax3/iax5/iax7) in round-robin manner: iaa: iax1 iax3 iax5 iax7 package 1: iaa_crypto will send compress jobs from all cpus (56-111,168-223) to all IAA devices on the package (iax9/iax11/iax13/iax15) in round-robin manner: iaa: iax9 iax11 iax13 iax15 Decompress jobs: ---------------- package 0: cpu 0-13,112-125 14-27,126-139 28-41,140-153 42-55,154-167 iaa: iax1 iax3 iax5 iax7 package 1: cpu 56-69,168-181 70-83,182-195 84-97,196-209 98-111,210-223 iaa: iax9 iax11 iax13 iax15 Signed-off-by: Kanchana P Sridhar --- drivers/crypto/intel/iaa/iaa_crypto.h | 1 + drivers/crypto/intel/iaa/iaa_crypto_main.c | 385 ++++++++++++++++++++- 2 files changed, 378 insertions(+), 8 deletions(-) diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h index ca317c5aaf27..ca7326d6e9bf 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto.h +++ b/drivers/crypto/intel/iaa/iaa_crypto.h @@ -82,6 +82,7 @@ struct iaa_device { struct list_head wqs; struct wq_table_entry *iaa_local_wqs; + struct wq_table_entry *iaa_global_wqs; atomic64_t comp_calls; atomic64_t comp_bytes; diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c index 28f2f5617bf0..1cbf92d1b3e5 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto_main.c +++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c @@ -42,6 +42,18 @@ static struct crypto_comp *deflate_generic_tfm; /* Per-cpu lookup table for balanced wqs */ static struct wq_table_entry __percpu *wq_table = NULL; +static struct wq_table_entry **pkg_global_wq_tables = NULL; + +/* Per-cpu lookup table for global wqs shared by all cpus. 
*/ +static struct wq_table_entry __percpu *global_wq_table = NULL; + +/* + * Per-cpu counter of consecutive descriptors allocated to + * the same wq in the global_wq_table, so that we know + * when to switch to the next wq in the global_wq_table. + */ +static int __percpu *num_consec_descs_per_wq = NULL; + /* Verify results of IAA compress or not */ static bool iaa_verify_compress = false; @@ -79,6 +91,16 @@ static bool async_mode = true; /* Use interrupts */ static bool use_irq; +/* Number of global wqs per iaa*/ +static int g_wqs_per_iaa = 0; + +/* + * Number of consecutive descriptors to allocate from a + * given global wq before switching to the next wq in + * the global_wq_table. + */ +static int g_consec_descs_per_gwq = 1; + static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX]; LIST_HEAD(iaa_devices); @@ -180,6 +202,60 @@ static ssize_t sync_mode_store(struct device_driver *driver, } static DRIVER_ATTR_RW(sync_mode); +static ssize_t g_wqs_per_iaa_show(struct device_driver *driver, char *buf) +{ + return sprintf(buf, "%d\n", g_wqs_per_iaa); +} + +static ssize_t g_wqs_per_iaa_store(struct device_driver *driver, + const char *buf, size_t count) +{ + int ret = -EBUSY; + + mutex_lock(&iaa_devices_lock); + + if (iaa_crypto_enabled) + goto out; + + ret = kstrtoint(buf, 10, &g_wqs_per_iaa); + if (ret) + goto out; + + ret = count; +out: + mutex_unlock(&iaa_devices_lock); + + return ret; +} +static DRIVER_ATTR_RW(g_wqs_per_iaa); + +static ssize_t g_consec_descs_per_gwq_show(struct device_driver *driver, char *buf) +{ + return sprintf(buf, "%d\n", g_consec_descs_per_gwq); +} + +static ssize_t g_consec_descs_per_gwq_store(struct device_driver *driver, + const char *buf, size_t count) +{ + int ret = -EBUSY; + + mutex_lock(&iaa_devices_lock); + + if (iaa_crypto_enabled) + goto out; + + ret = kstrtoint(buf, 10, &g_consec_descs_per_gwq); + if (ret) + goto out; + + ret = count; +out: + mutex_unlock(&iaa_devices_lock); + + return ret; +} +static DRIVER_ATTR_RW(g_consec_descs_per_gwq); + /**************************** * Driver compression modes. ****************************/ @@ -465,7 +541,7 @@ static void remove_device_compression_modes(struct iaa_device *iaa_device) ***********************************************************/ static struct iaa_device *iaa_device_alloc(struct idxd_device *idxd) { - struct wq_table_entry *local; + struct wq_table_entry *local, *global; struct iaa_device *iaa_device; iaa_device = kzalloc(sizeof(*iaa_device), GFP_KERNEL); @@ -488,6 +564,20 @@ static struct iaa_device *iaa_device_alloc(struct idxd_device *idxd) local->max_wqs = iaa_device->idxd->max_wqs; local->n_wqs = 0; + /* IAA device's global wqs. 
*/ + iaa_device->iaa_global_wqs = kzalloc(sizeof(struct wq_table_entry), GFP_KERNEL); + if (!iaa_device->iaa_global_wqs) + goto err; + + global = iaa_device->iaa_global_wqs; + + global->wqs = kzalloc(iaa_device->idxd->max_wqs * sizeof(struct wq *), GFP_KERNEL); + if (!global->wqs) + goto err; + + global->max_wqs = iaa_device->idxd->max_wqs; + global->n_wqs = 0; + INIT_LIST_HEAD(&iaa_device->wqs); return iaa_device; @@ -499,6 +589,8 @@ static struct iaa_device *iaa_device_alloc(struct idxd_device *idxd) kfree(iaa_device->iaa_local_wqs->wqs); kfree(iaa_device->iaa_local_wqs); } + if (iaa_device->iaa_global_wqs) + kfree(iaa_device->iaa_global_wqs); kfree(iaa_device); } @@ -616,6 +708,12 @@ static void free_iaa_device(struct iaa_device *iaa_device) kfree(iaa_device->iaa_local_wqs); } + if (iaa_device->iaa_global_wqs) { + if (iaa_device->iaa_global_wqs->wqs) + kfree(iaa_device->iaa_global_wqs->wqs); + kfree(iaa_device->iaa_global_wqs); + } + kfree(iaa_device); } @@ -817,6 +915,58 @@ static inline int cpu_to_iaa(int cpu) return (nr_iaa - 1); } +static void free_global_wq_table(void) +{ + if (global_wq_table) { + free_percpu(global_wq_table); + global_wq_table = NULL; + } + + if (num_consec_descs_per_wq) { + free_percpu(num_consec_descs_per_wq); + num_consec_descs_per_wq = NULL; + } + + pr_debug("freed global wq table\n"); +} + +static int pkg_global_wq_tables_alloc(void) +{ + int i, j; + + pkg_global_wq_tables = kzalloc(nr_packages * sizeof(*pkg_global_wq_tables), GFP_KERNEL); + if (!pkg_global_wq_tables) + return -ENOMEM; + + for (i = 0; i < nr_packages; ++i) { + pkg_global_wq_tables[i] = kzalloc(sizeof(struct wq_table_entry), GFP_KERNEL); + + if (!pkg_global_wq_tables[i]) { + for (j = 0; j < i; ++j) + kfree(pkg_global_wq_tables[j]); + kfree(pkg_global_wq_tables); + pkg_global_wq_tables = NULL; + return -ENOMEM; + } + pkg_global_wq_tables[i]->wqs = NULL; + } + + return 0; +} + +static void pkg_global_wq_tables_dealloc(void) +{ + int i; + + for (i = 0; i < nr_packages; ++i) { + if (pkg_global_wq_tables[i]->wqs) + kfree(pkg_global_wq_tables[i]->wqs); + kfree(pkg_global_wq_tables[i]); + } + kfree(pkg_global_wq_tables); + pkg_global_wq_tables = NULL; +} + static int alloc_wq_table(int max_wqs) { struct wq_table_entry *entry; @@ -835,6 +985,35 @@ static int alloc_wq_table(int max_wqs) entry->cur_wq = 0; } + global_wq_table = alloc_percpu(struct wq_table_entry); + if (!global_wq_table) + return 0; + + for (cpu = 0; cpu < nr_cpus; cpu++) { + entry = per_cpu_ptr(global_wq_table, cpu); + + entry->wqs = NULL; + entry->max_wqs = max_wqs; + entry->n_wqs = 0; + entry->cur_wq = 0; + } + + num_consec_descs_per_wq = alloc_percpu(int); + if (!num_consec_descs_per_wq) { + free_global_wq_table(); + return 0; + } + + for (cpu = 0; cpu < nr_cpus; cpu++) { + int *num_consec_descs = per_cpu_ptr(num_consec_descs_per_wq, cpu); + *num_consec_descs = 0; + } + + if (pkg_global_wq_tables_alloc()) { + free_global_wq_table(); + return 0; + } + pr_debug("initialized wq table\n"); return 0; @@ -895,13 +1074,120 @@ static int wq_table_add_wqs(int iaa, int cpu) return ret; } +static void pkg_global_wq_tables_reinit(void) +{ + int i, cur_iaa = 0, pkg = 0, nr_pkg_wqs = 0; + struct iaa_device *iaa_device; + struct wq_table_entry *global; + + if (!pkg_global_wq_tables) + return; + + /* Reallocate per-package wqs. */ + list_for_each_entry(iaa_device, &iaa_devices, list) { + global = iaa_device->iaa_global_wqs; + nr_pkg_wqs += global->n_wqs; + + if (++cur_iaa == nr_iaa_per_package) { + nr_pkg_wqs = nr_pkg_wqs ? 
max_t(int, iaa_device->idxd->max_wqs, nr_pkg_wqs) : 0; + + if (pkg_global_wq_tables[pkg]->wqs) { + kfree(pkg_global_wq_tables[pkg]->wqs); + pkg_global_wq_tables[pkg]->wqs = NULL; + } + + if (nr_pkg_wqs) + pkg_global_wq_tables[pkg]->wqs = kzalloc(nr_pkg_wqs * + sizeof(struct wq *), + GFP_KERNEL); + + pkg_global_wq_tables[pkg]->n_wqs = 0; + pkg_global_wq_tables[pkg]->cur_wq = 0; + pkg_global_wq_tables[pkg]->max_wqs = nr_pkg_wqs; + + if (++pkg == nr_packages) + break; + cur_iaa = 0; + nr_pkg_wqs = 0; + } + } + + pkg = 0; + cur_iaa = 0; + + /* Re-initialize per-package wqs. */ + list_for_each_entry(iaa_device, &iaa_devices, list) { + global = iaa_device->iaa_global_wqs; + + if (pkg_global_wq_tables[pkg]->wqs) + for (i = 0; i < global->n_wqs; ++i) + pkg_global_wq_tables[pkg]->wqs[pkg_global_wq_tables[pkg]->n_wqs++] = global->wqs[i]; + + pr_debug("pkg_global_wq_tables[%d] has %d wqs", pkg, pkg_global_wq_tables[pkg]->n_wqs); + + if (++cur_iaa == nr_iaa_per_package) { + if (++pkg == nr_packages) + break; + cur_iaa = 0; + } + } +} + +static void global_wq_table_add(int cpu, struct wq_table_entry *pkg_global_wq_table) +{ + struct wq_table_entry *entry = per_cpu_ptr(global_wq_table, cpu); + + /* This could be NULL. */ + entry->wqs = pkg_global_wq_table->wqs; + entry->max_wqs = pkg_global_wq_table->max_wqs; + entry->n_wqs = pkg_global_wq_table->n_wqs; + entry->cur_wq = 0; + + if (entry->wqs) + pr_debug("%s: cpu %d: added %d iaa global wqs up to wq %d.%d\n", __func__, + cpu, entry->n_wqs, + entry->wqs[entry->n_wqs - 1]->idxd->id, + entry->wqs[entry->n_wqs - 1]->id); +} + +static void global_wq_table_set_start_wq(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(global_wq_table, cpu); + int start_wq = g_wqs_per_iaa * (cpu_to_iaa(cpu) % nr_iaa_per_package); + + if ((start_wq >= 0) && (start_wq < entry->n_wqs)) + entry->cur_wq = start_wq; +} + +static void global_wq_table_add_wqs(void) +{ + int cpu; + + if (!pkg_global_wq_tables) + return; + + for (cpu = 0; cpu < nr_cpus; cpu += nr_cpus_per_package) { + /* cpu's on the same package get the same global_wq_table. 
*/ + int package_id = topology_logical_package_id(cpu); + int pkg_cpu; + + for (pkg_cpu = cpu; pkg_cpu < cpu + nr_cpus_per_package; ++pkg_cpu) { + if (pkg_global_wq_tables[package_id]->n_wqs > 0) { + global_wq_table_add(pkg_cpu, pkg_global_wq_tables[package_id]); + global_wq_table_set_start_wq(pkg_cpu); + } + } + } +} + static int map_iaa_device_wqs(struct iaa_device *iaa_device) { - struct wq_table_entry *local; + struct wq_table_entry *local, *global; int ret = 0, n_wqs_added = 0; struct iaa_wq *iaa_wq; local = iaa_device->iaa_local_wqs; + global = iaa_device->iaa_global_wqs; list_for_each_entry(iaa_wq, &iaa_device->wqs, list) { if (iaa_wq->mapped && ++n_wqs_added) @@ -909,11 +1195,18 @@ static int map_iaa_device_wqs(struct iaa_device *iaa_device) pr_debug("iaa_device %px: processing wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); - if (WARN_ON(local->n_wqs == local->max_wqs)) - break; + if ((!n_wqs_added || ((n_wqs_added + g_wqs_per_iaa) < iaa_device->n_wq)) && + (local->n_wqs < local->max_wqs)) { + + local->wqs[local->n_wqs++] = iaa_wq->wq; + pr_debug("iaa_device %px: added local wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); + } else { + if (WARN_ON(global->n_wqs == global->max_wqs)) + break; - local->wqs[local->n_wqs++] = iaa_wq->wq; - pr_debug("iaa_device %px: added local wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); + global->wqs[global->n_wqs++] = iaa_wq->wq; + pr_debug("iaa_device %px: added global wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); + } iaa_wq->mapped = true; ++n_wqs_added; @@ -969,6 +1262,10 @@ static void rebalance_wq_table(void) } } + if (iaa_crypto_enabled && pkg_global_wq_tables) { + pkg_global_wq_tables_reinit(); + global_wq_table_add_wqs(); + } pr_debug("Finished rebalance local wqs."); } @@ -979,7 +1276,17 @@ static void free_wq_tables(void) wq_table = NULL; } - pr_debug("freed local wq table\n"); + if (global_wq_table) { + free_percpu(global_wq_table); + global_wq_table = NULL; + } + + if (num_consec_descs_per_wq) { + free_percpu(num_consec_descs_per_wq); + num_consec_descs_per_wq = NULL; + } + + pr_debug("freed wq tables\n"); } /*************************************************************** @@ -1002,6 +1309,35 @@ static struct idxd_wq *wq_table_next_wq(int cpu) return entry->wqs[entry->cur_wq]; } +/* + * Caller should make sure to call only if the + * per_cpu_ptr "global_wq_table" is non-NULL + * and has at least one wq configured. + */ +static struct idxd_wq *global_wq_table_next_wq(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(global_wq_table, cpu); + int *num_consec_descs = per_cpu_ptr(num_consec_descs_per_wq, cpu); + + /* + * Fall-back to local IAA's wq if there were no global wqs configured + * for any IAA device, or if there were problems in setting up global + * wqs for this cpu's package. + */ + if (!entry->wqs) + return wq_table_next_wq(cpu); + + if ((*num_consec_descs) == g_consec_descs_per_gwq) { + if (++entry->cur_wq >= entry->n_wqs) + entry->cur_wq = 0; + *num_consec_descs = 0; + } + + ++(*num_consec_descs); + + return entry->wqs[entry->cur_wq]; +} + /************************************************* * Core iaa_crypto compress/decompress functions. 
*************************************************/ @@ -1553,6 +1889,7 @@ static int iaa_comp_acompress(struct acomp_req *req) struct idxd_wq *wq; struct device *dev; int order = -1; + struct wq_table_entry *entry; compression_ctx = crypto_tfm_ctx(tfm); @@ -1571,8 +1908,15 @@ static int iaa_comp_acompress(struct acomp_req *req) disable_async = true; cpu = get_cpu(); - wq = wq_table_next_wq(cpu); + entry = per_cpu_ptr(global_wq_table, cpu); + + if (!entry || !entry->wqs || entry->n_wqs == 0) { + wq = wq_table_next_wq(cpu); + } else { + wq = global_wq_table_next_wq(cpu); + } put_cpu(); + if (!wq) { pr_debug("no wq configured for cpu=%d\n", cpu); return -ENODEV; @@ -2380,6 +2724,7 @@ static void iaa_crypto_remove(struct idxd_dev *idxd_dev) if (nr_iaa == 0) { iaa_crypto_enabled = false; + pkg_global_wq_tables_dealloc(); free_wq_tables(); BUG_ON(!list_empty(&iaa_devices)); INIT_LIST_HEAD(&iaa_devices); @@ -2449,6 +2794,20 @@ static int __init iaa_crypto_init_module(void) goto err_sync_attr_create; } + ret = driver_create_file(&iaa_crypto_driver.drv, + &driver_attr_g_wqs_per_iaa); + if (ret) { + pr_debug("IAA g_wqs_per_iaa attr creation failed\n"); + goto err_g_wqs_per_iaa_attr_create; + } + + ret = driver_create_file(&iaa_crypto_driver.drv, + &driver_attr_g_consec_descs_per_gwq); + if (ret) { + pr_debug("IAA g_consec_descs_per_gwq attr creation failed\n"); + goto err_g_consec_descs_per_gwq_attr_create; + } + if (iaa_crypto_debugfs_init()) pr_warn("debugfs init failed, stats not available\n"); @@ -2456,6 +2815,12 @@ static int __init iaa_crypto_init_module(void) out: return ret; +err_g_consec_descs_per_gwq_attr_create: + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_g_wqs_per_iaa); +err_g_wqs_per_iaa_attr_create: + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_sync_mode); err_sync_attr_create: driver_remove_file(&iaa_crypto_driver.drv, &driver_attr_verify_compress); @@ -2479,6 +2844,10 @@ static void __exit iaa_crypto_cleanup_module(void) &driver_attr_sync_mode); driver_remove_file(&iaa_crypto_driver.drv, &driver_attr_verify_compress); + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_g_wqs_per_iaa); + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_g_consec_descs_per_gwq); idxd_driver_unregister(&iaa_crypto_driver); iaa_aecs_cleanup_fixed(); crypto_free_comp(deflate_generic_tfm); From patchwork Sat Nov 23 07:01:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Sridhar, Kanchana P" X-Patchwork-Id: 13883785 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id DC923E6ADE9 for ; Sat, 23 Nov 2024 07:01:57 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DDDFB6B0099; Sat, 23 Nov 2024 02:01:39 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D3D666B009B; Sat, 23 Nov 2024 02:01:39 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B90596B009D; Sat, 23 Nov 2024 02:01:39 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 80A616B0099 for ; Sat, 23 Nov 2024 02:01:39 -0500 (EST) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc:
wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v4 09/10] mm: zswap: Allocate pool batching resources if the crypto_alg supports batching.
Date: Fri, 22 Nov 2024 23:01:26 -0800
Message-Id: <20241123070127.332773-10-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>
References: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0

This patch does the following:

1) Modifies the definition of "struct crypto_acomp_ctx" to represent a
   configurable number of acomp_reqs and buffers. Adds an "nr_reqs" field
   to "struct crypto_acomp_ctx" to hold the number of resources that will
   be allocated in the cpu onlining code.

2) The zswap_cpu_comp_prepare() cpu onlining code will detect whether the
   crypto_acomp created for the pool (in other words, the zswap compression
   algorithm) has registered implementations for batch_compress() and
   batch_decompress(). If so, it will set "nr_reqs" to
   SWAP_CRYPTO_BATCH_SIZE, allocate that many reqs/buffers, and record the
   count in acomp_ctx->nr_reqs. If the crypto_acomp does not support
   batching, "nr_reqs" defaults to 1.

3) Adds a "bool can_batch" to "struct zswap_pool" that step (2) will set to
   true if the batching API is present for the crypto_acomp.

SWAP_CRYPTO_BATCH_SIZE is set to 8, which will be the IAA compress batching "sub-batch" size when zswap_batch_store() is processing a large folio. This represents the number of buffers that can be compressed/decompressed in parallel by Intel IAA hardware.
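The cpu onlining decision in step (2) can be summarized by the following sketch; it condenses the zswap_cpu_comp_prepare() changes from the diff below (the per-buffer and per-request allocation loops, and all error handling, are omitted here):

	unsigned int nr_reqs = 1;

	if (acomp_has_async_batching(acomp)) {
		/* The compressor registered batch_compress()/batch_decompress(). */
		pool->can_batch = true;
		nr_reqs = SWAP_CRYPTO_BATCH_SIZE;	/* currently 8 */
	}

	/* nr_reqs dst buffers (2 * PAGE_SIZE each) and nr_reqs acomp requests. */
	acomp_ctx->buffers = kmalloc_node(nr_reqs * sizeof(u8 *), GFP_KERNEL,
					  cpu_to_node(cpu));
	acomp_ctx->reqs = kmalloc_node(nr_reqs * sizeof(struct acomp_req *),
				       GFP_KERNEL, cpu_to_node(cpu));
	acomp_ctx->nr_reqs = nr_reqs;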
Signed-off-by: Kanchana P Sridhar --- include/linux/zswap.h | 7 +++ mm/zswap.c | 120 +++++++++++++++++++++++++++++++----------- 2 files changed, 95 insertions(+), 32 deletions(-) diff --git a/include/linux/zswap.h b/include/linux/zswap.h index d961ead91bf1..9ad27ab3d222 100644 --- a/include/linux/zswap.h +++ b/include/linux/zswap.h @@ -7,6 +7,13 @@ struct lruvec; +/* + * For IAA compression batching: + * Maximum number of IAA acomp compress requests that will be processed + * in a batch: in parallel, if iaa_crypto async/no irq mode is enabled + * (the default); else sequentially, if iaa_crypto sync mode is in effect. + */ +#define SWAP_CRYPTO_BATCH_SIZE 8UL extern atomic_long_t zswap_stored_pages; #ifdef CONFIG_ZSWAP diff --git a/mm/zswap.c b/mm/zswap.c index f6316b66fb23..173f7632990e 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -143,9 +143,10 @@ bool zswap_never_enabled(void) struct crypto_acomp_ctx { struct crypto_acomp *acomp; - struct acomp_req *req; + struct acomp_req **reqs; + u8 **buffers; + unsigned int nr_reqs; struct crypto_wait wait; - u8 *buffer; struct mutex mutex; bool is_sleepable; }; @@ -158,6 +159,7 @@ struct crypto_acomp_ctx { */ struct zswap_pool { struct zpool *zpool; + bool can_batch; struct crypto_acomp_ctx __percpu *acomp_ctx; struct percpu_ref ref; struct list_head list; @@ -285,6 +287,8 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor) goto error; } + pool->can_batch = false; + ret = cpuhp_state_add_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node); if (ret) @@ -818,49 +822,90 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node) struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node); struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); struct crypto_acomp *acomp; - struct acomp_req *req; - int ret; + unsigned int nr_reqs = 1; + int ret = -ENOMEM; + int i, j; mutex_init(&acomp_ctx->mutex); - - acomp_ctx->buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu)); - if (!acomp_ctx->buffer) - return -ENOMEM; + acomp_ctx->nr_reqs = 0; acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu)); if (IS_ERR(acomp)) { pr_err("could not alloc crypto acomp %s : %ld\n", pool->tfm_name, PTR_ERR(acomp)); - ret = PTR_ERR(acomp); - goto acomp_fail; + return PTR_ERR(acomp); } acomp_ctx->acomp = acomp; acomp_ctx->is_sleepable = acomp_is_async(acomp); - req = acomp_request_alloc(acomp_ctx->acomp); - if (!req) { - pr_err("could not alloc crypto acomp_request %s\n", - pool->tfm_name); - ret = -ENOMEM; + /* + * Create the necessary batching resources if the crypto acomp alg + * implements the batch_compress and batch_decompress API. 
+ */ + if (acomp_has_async_batching(acomp)) { + pool->can_batch = true; + nr_reqs = SWAP_CRYPTO_BATCH_SIZE; + pr_info_once("Creating acomp_ctx with %d reqs for batching since crypto acomp %s\nhas registered batch_compress() and batch_decompress()\n", + nr_reqs, pool->tfm_name); + } + + acomp_ctx->buffers = kmalloc_node(nr_reqs * sizeof(u8 *), GFP_KERNEL, cpu_to_node(cpu)); + if (!acomp_ctx->buffers) + goto buf_fail; + + for (i = 0; i < nr_reqs; ++i) { + acomp_ctx->buffers[i] = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu)); + if (!acomp_ctx->buffers[i]) { + for (j = 0; j < i; ++j) + kfree(acomp_ctx->buffers[j]); + kfree(acomp_ctx->buffers); + ret = -ENOMEM; + goto buf_fail; + } + } + + acomp_ctx->reqs = kmalloc_node(nr_reqs * sizeof(struct acomp_req *), GFP_KERNEL, cpu_to_node(cpu)); + if (!acomp_ctx->reqs) goto req_fail; + + for (i = 0; i < nr_reqs; ++i) { + acomp_ctx->reqs[i] = acomp_request_alloc(acomp_ctx->acomp); + if (!acomp_ctx->reqs[i]) { + pr_err("could not alloc crypto acomp_request reqs[%d] %s\n", + i, pool->tfm_name); + for (j = 0; j < i; ++j) + acomp_request_free(acomp_ctx->reqs[j]); + kfree(acomp_ctx->reqs); + ret = -ENOMEM; + goto req_fail; + } } - acomp_ctx->req = req; + /* + * The crypto_wait is used only in fully synchronous, i.e., with scomp + * or non-poll mode of acomp, hence there is only one "wait" per + * acomp_ctx, with callback set to reqs[0], under the assumption that + * there is at least 1 request per acomp_ctx. + */ crypto_init_wait(&acomp_ctx->wait); /* * if the backend of acomp is async zip, crypto_req_done() will wakeup * crypto_wait_req(); if the backend of acomp is scomp, the callback * won't be called, crypto_wait_req() will return without blocking. */ - acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, + acomp_request_set_callback(acomp_ctx->reqs[0], CRYPTO_TFM_REQ_MAY_BACKLOG, crypto_req_done, &acomp_ctx->wait); + acomp_ctx->nr_reqs = nr_reqs; return 0; req_fail: + for (i = 0; i < nr_reqs; ++i) + kfree(acomp_ctx->buffers[i]); + kfree(acomp_ctx->buffers); +buf_fail: crypto_free_acomp(acomp_ctx->acomp); -acomp_fail: - kfree(acomp_ctx->buffer); + pool->can_batch = false; return ret; } @@ -870,11 +915,22 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node) struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); if (!IS_ERR_OR_NULL(acomp_ctx)) { - if (!IS_ERR_OR_NULL(acomp_ctx->req)) - acomp_request_free(acomp_ctx->req); + int i; + + for (i = 0; i < acomp_ctx->nr_reqs; ++i) + if (!IS_ERR_OR_NULL(acomp_ctx->reqs[i])) + acomp_request_free(acomp_ctx->reqs[i]); + kfree(acomp_ctx->reqs); + + for (i = 0; i < acomp_ctx->nr_reqs; ++i) + kfree(acomp_ctx->buffers[i]); + kfree(acomp_ctx->buffers); + if (!IS_ERR_OR_NULL(acomp_ctx->acomp)) crypto_free_acomp(acomp_ctx->acomp); - kfree(acomp_ctx->buffer); + + acomp_ctx->nr_reqs = 0; + acomp_ctx = NULL; } return 0; @@ -897,7 +953,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry, mutex_lock(&acomp_ctx->mutex); - dst = acomp_ctx->buffer; + dst = acomp_ctx->buffers[0]; sg_init_table(&input, 1); sg_set_page(&input, page, PAGE_SIZE, 0); @@ -907,7 +963,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry, * giving the dst buffer with enough length to avoid buffer overflow. 
*/ sg_init_one(&output, dst, PAGE_SIZE * 2); - acomp_request_set_params(acomp_ctx->req, &input, &output, PAGE_SIZE, dlen); + acomp_request_set_params(acomp_ctx->reqs[0], &input, &output, PAGE_SIZE, dlen); /* * it maybe looks a little bit silly that we send an asynchronous request, @@ -921,8 +977,8 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry, * but in different threads running on different cpu, we have different * acomp instance, so multiple threads can do (de)compression in parallel. */ - comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait); - dlen = acomp_ctx->req->dlen; + comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->reqs[0]), &acomp_ctx->wait); + dlen = acomp_ctx->reqs[0]->dlen; if (comp_ret) goto unlock; @@ -975,20 +1031,20 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio) */ if ((acomp_ctx->is_sleepable && !zpool_can_sleep_mapped(zpool)) || !virt_addr_valid(src)) { - memcpy(acomp_ctx->buffer, src, entry->length); - src = acomp_ctx->buffer; + memcpy(acomp_ctx->buffers[0], src, entry->length); + src = acomp_ctx->buffers[0]; zpool_unmap_handle(zpool, entry->handle); } sg_init_one(&input, src, entry->length); sg_init_table(&output, 1); sg_set_folio(&output, folio, PAGE_SIZE, 0); - acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE); - BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait)); - BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE); + acomp_request_set_params(acomp_ctx->reqs[0], &input, &output, entry->length, PAGE_SIZE); + BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->reqs[0]), &acomp_ctx->wait)); + BUG_ON(acomp_ctx->reqs[0]->dlen != PAGE_SIZE); mutex_unlock(&acomp_ctx->mutex); - if (src != acomp_ctx->buffer) + if (src != acomp_ctx->buffers[0]) zpool_unmap_handle(zpool, entry->handle); } From patchwork Sat Nov 23 07:01:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Sridhar, Kanchana P" X-Patchwork-Id: 13883786 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C849CE6ADE7 for ; Sat, 23 Nov 2024 07:02:00 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D43BB6B009C; Sat, 23 Nov 2024 02:01:40 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id CEE116B009E; Sat, 23 Nov 2024 02:01:40 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B42056B009F; Sat, 23 Nov 2024 02:01:40 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 894236B009C for ; Sat, 23 Nov 2024 02:01:40 -0500 (EST) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 35B88121B16 for ; Sat, 23 Nov 2024 07:01:40 +0000 (UTC) X-FDA: 82816464318.01.1EFFFD2 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) by imf02.hostedemail.com (Postfix) with ESMTP id 08DD380004 for ; Sat, 23 Nov 2024 07:01:36 +0000 (UTC) Authentication-Results: imf02.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=W3KD5BEk; dmarc=pass (policy=none) header.from=intel.com; spf=pass (imf02.hostedemail.com: domain of 
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v4 10/10] mm: zswap: Compress batching with Intel IAA in zswap_batch_store() of large folios.
Date: Fri, 22 Nov 2024 23:01:27 -0800
Message-Id: <20241123070127.332773-11-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>
References: <20241123070127.332773-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0

This patch adds two new zswap APIs:

 1) bool zswap_can_batch(void);
 2) void zswap_batch_store(struct folio_batch *batch, int *errors);

Higher level mm code, for instance swap_writepage(), can query whether the current zswap pool supports batching by calling zswap_can_batch(). If so, it can invoke zswap_batch_store() to swap out a large folio to zswap much more efficiently, instead of calling zswap_store(). Hence, on systems with Intel IAA hardware compress/decompress accelerators, swap_writepage() will invoke zswap_batch_store() for large folios.

zswap_batch_store() will call crypto_acomp_batch_compress() to compress up to SWAP_CRYPTO_BATCH_SIZE (i.e. 8) pages of large folios in parallel, using the multiple compress engines available in IAA. On platforms with multiple IAA devices per package, compress jobs from all cores in a package will be distributed among all IAA devices in the package by the iaa_crypto driver.

The newly added zswap_batch_store() follows the general structure of zswap_store(). Some restructuring and optimization is done to minimize failure points for a batch, fail early, and maximize the zswap store pipeline occupancy with SWAP_CRYPTO_BATCH_SIZE pages, potentially from multiple folios in the future. This is intended to maximize reclaim throughput with the IAA hardware parallel compressions.
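From the caller's perspective, the usage is roughly as follows; this is a condensed sketch of the mm/page_io.c change in the diff below. The caller pre-initializes the per-folio error status to a non-zero value, and zswap_batch_store() sets it to 0 on success:

	if (folio_test_large(folio) && zswap_can_batch()) {
		struct folio_batch batch;
		int error = -1;		/* pre-set to a non-0 status */

		folio_batch_init(&batch);
		folio_batch_add(&batch, folio);
		zswap_batch_store(&batch, &error);

		if (!error) {
			/* The folio was stored in zswap; account and finish. */
			count_mthp_stat(folio_order(folio), MTHP_STAT_ZSWPOUT);
			folio_unlock(folio);
			return 0;
		}
	}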
Suggested-by: Johannes Weiner Suggested-by: Yosry Ahmed Signed-off-by: Kanchana P Sridhar --- include/linux/zswap.h | 12 + mm/page_io.c | 16 +- mm/zswap.c | 639 ++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 666 insertions(+), 1 deletion(-) diff --git a/include/linux/zswap.h b/include/linux/zswap.h index 9ad27ab3d222..a05f59139a6e 100644 --- a/include/linux/zswap.h +++ b/include/linux/zswap.h @@ -4,6 +4,7 @@ #include #include +#include struct lruvec; @@ -33,6 +34,8 @@ struct zswap_lruvec_state { unsigned long zswap_total_pages(void); bool zswap_store(struct folio *folio); +bool zswap_can_batch(void); +void zswap_batch_store(struct folio_batch *batch, int *errors); bool zswap_load(struct folio *folio); void zswap_invalidate(swp_entry_t swp); int zswap_swapon(int type, unsigned long nr_pages); @@ -51,6 +54,15 @@ static inline bool zswap_store(struct folio *folio) return false; } +static inline bool zswap_can_batch(void) +{ + return false; +} + +static inline void zswap_batch_store(struct folio_batch *batch, int *errors) +{ +} + static inline bool zswap_load(struct folio *folio) { return false; diff --git a/mm/page_io.c b/mm/page_io.c index 4b4ea8e49cf6..271d3a40c0c1 100644 --- a/mm/page_io.c +++ b/mm/page_io.c @@ -276,7 +276,21 @@ int swap_writepage(struct page *page, struct writeback_control *wbc) */ swap_zeromap_folio_clear(folio); } - if (zswap_store(folio)) { + + if (folio_test_large(folio) && zswap_can_batch()) { + struct folio_batch batch; + int error = -1; + + folio_batch_init(&batch); + folio_batch_add(&batch, folio); + zswap_batch_store(&batch, &error); + + if (!error) { + count_mthp_stat(folio_order(folio), MTHP_STAT_ZSWPOUT); + folio_unlock(folio); + return 0; + } + } else if (zswap_store(folio)) { count_mthp_stat(folio_order(folio), MTHP_STAT_ZSWPOUT); folio_unlock(folio); return 0; diff --git a/mm/zswap.c b/mm/zswap.c index 173f7632990e..53c8e39b778b 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -229,6 +229,80 @@ static DEFINE_MUTEX(zswap_init_lock); /* init completed, but couldn't create the initial pool */ static bool zswap_has_pool; +/* + * struct zswap_batch_store_sub_batch: + * + * This represents a sub-batch of SWAP_CRYPTO_BATCH_SIZE pages during IAA + * compress batching of a folio or (conceptually, a reclaim batch of) folios. + * The new zswap_batch_store() API will break down the batch of folios being + * reclaimed into sub-batches of SWAP_CRYPTO_BATCH_SIZE pages, batch compress + * the pages by calling the iaa_crypto driver API crypto_acomp_batch_compress(); + * and storing the sub-batch in zpool/xarray before updating objcg/vm/zswap + * stats. + * + * Although the page itself is represented directly, the structure adds a + * "u8 folio_id" to represent an index for the folio in a conceptual + * "reclaim batch of folios" that can be passed to zswap_store(). Conceptually, + * this allows for up to 256 folios that can be passed to zswap_store(). + * Even though the folio_id seems redundant in the context of a single large + * folio being stored by zswap, it does simplify error handling and redundant + * computes/rewinding state, all of which can add latency. Since the + * zswap_batch_store() of a large folio can fail for any of these reasons -- + * compress errors, zpool malloc errors, xarray store errors -- the procedures + * that detect these errors for a sub-batch, can all call a single cleanup + * procedure, zswap_batch_cleanup(), which will de-allocate zpool memory and + * zswap_entries for the sub-batch and set the "errors[folio_id]" to -EINVAL. 
+ * All subsequent procedures that operate on a sub-batch will do nothing if the + * errors[folio_id] is non-0. Hence, the folio_id facilitates the use of the + * "errors" passed to zswap_batch_store() as a global folio error status for a + * single folio (which could also be a folio in the folio_batch). + * + * The sub-batch concept could be further evolved to use pipelining to + * overlap CPU computes with IAA computes. For instance, we could stage + * the post-compress computes for sub-batch "N-1" to happen in parallel with + * IAA batch compression of sub-batch "N". + * + * We begin by developing the concept of compress batching. Pipelining with + * overlap can be future work. + * + * @pages: The individual pages in the sub-batch. There are no assumptions + * about all of them belonging to the same folio. + * @dsts: The destination buffers for batch compress of the sub-batch. + * @dlens: The destination length constraints, and eventual compressed lengths + * of successful compressions. + * @comp_errors: The compress error status for each page in the sub-batch, set + * by crypto_acomp_batch_compress(). + * @folio_ids: The containing folio_id of each sub-batch page. + * @swpentries: The page_swap_entry() for each corresponding sub-batch page. + * @objcgs: The objcg for each corresponding sub-batch page. + * @entries: The zswap_entry for each corresponding sub-batch page. + * @nr_pages: Total number of pages in @sub_batch. + * @pool: A valid zswap_pool that can_batch. + * + * Note: + * The max sub-batch size is SWAP_CRYPTO_BATCH_SIZE, currently 8UL. + * Hence, if SWAP_CRYPTO_BATCH_SIZE exceeds 256, @nr_pages needs to become u16. + * The sub-batch representation is future-proofed to a small extent to be able + * to easily scale the zswap_batch_store() implementation to handle a conceptual + * "reclaim batch of folios"; without addding too much complexity, while + * benefiting from simpler error handling, localized sub-batch resources cleanup + * and avoiding expensive rewinding state. If this conceptual number of reclaim + * folios sent to zswap_batch_store() exceeds 256, @folio_ids needs to + * become u16. + */ +struct zswap_batch_store_sub_batch { + struct page *pages[SWAP_CRYPTO_BATCH_SIZE]; + u8 *dsts[SWAP_CRYPTO_BATCH_SIZE]; + unsigned int dlens[SWAP_CRYPTO_BATCH_SIZE]; + int comp_errors[SWAP_CRYPTO_BATCH_SIZE]; /* folio error status. */ + u8 folio_ids[SWAP_CRYPTO_BATCH_SIZE]; + swp_entry_t swpentries[SWAP_CRYPTO_BATCH_SIZE]; + struct obj_cgroup *objcgs[SWAP_CRYPTO_BATCH_SIZE]; + struct zswap_entry *entries[SWAP_CRYPTO_BATCH_SIZE]; + u8 nr_pages; + struct zswap_pool *pool; +}; + /********************************* * helpers and fwd declarations **********************************/ @@ -1705,6 +1779,571 @@ void zswap_invalidate(swp_entry_t swp) zswap_entry_free(entry); } +/****************************************************** + * zswap_batch_store() with compress batching. + ******************************************************/ + +/* + * Note: If SWAP_CRYPTO_BATCH_SIZE exceeds 256, change the + * u8 stack variables in the next several functions, to u16. + */ +bool zswap_can_batch(void) +{ + struct zswap_pool *pool; + bool ret = false; + + pool = zswap_pool_current_get(); + + if (!pool) + return ret; + + if (pool->can_batch) + ret = true; + + zswap_pool_put(pool); + + return ret; +} + +/* + * If the zswap store fails or zswap is disabled, we must invalidate + * the possibly stale entries which were previously stored at the + * offsets corresponding to each page of the folio. 
Otherwise, + * writeback could overwrite the new data in the swapfile. + */ +static void zswap_delete_stored_entries(struct folio *folio) +{ + swp_entry_t swp = folio->swap; + unsigned type = swp_type(swp); + pgoff_t offset = swp_offset(swp); + struct zswap_entry *entry; + struct xarray *tree; + long index; + + for (index = 0; index < folio_nr_pages(folio); ++index) { + tree = swap_zswap_tree(swp_entry(type, offset + index)); + entry = xa_erase(tree, offset + index); + if (entry) + zswap_entry_free(entry); + } +} + +static __always_inline void zswap_batch_reset(struct zswap_batch_store_sub_batch *sb) +{ + sb->nr_pages = 0; +} + +/* + * Upon encountering the first sub-batch page in a folio with an error due to + * any of the following: + * - compression + * - zpool malloc + * - xarray store + * , cleanup the sub-batch resources (zpool memory, zswap_entry) for all other + * sub_batch elements belonging to the same folio, using the "error_folio_id". + * + * Set the "errors[error_folio_id] to signify to all downstream computes in + * zswap_batch_store(), that no further processing is required for the folio + * with "error_folio_id" in the batch: this folio's zswap store status will + * be considered an error, and existing zswap_entries in the xarray will be + * deleted before zswap_batch_store() exits. + */ +static void zswap_batch_cleanup(struct zswap_batch_store_sub_batch *sb, + int *errors, + u8 error_folio_id) +{ + u8 i; + + if (errors[error_folio_id]) + return; + + for (i = 0; i < sb->nr_pages; ++i) { + if (sb->folio_ids[i] == error_folio_id) { + if (sb->entries[i]) { + if (!IS_ERR_VALUE(sb->entries[i]->handle)) + zpool_free(sb->pool->zpool, sb->entries[i]->handle); + + zswap_entry_cache_free(sb->entries[i]); + sb->entries[i] = NULL; + } + } + } + + errors[error_folio_id] = -EINVAL; +} + +/* + * Returns true if the entry was successfully + * stored in the xarray, and false otherwise. + */ +static bool zswap_store_entry(swp_entry_t page_swpentry, struct zswap_entry *entry) +{ + struct zswap_entry *old = xa_store(swap_zswap_tree(page_swpentry), + swp_offset(page_swpentry), + entry, GFP_KERNEL); + if (xa_is_err(old)) { + int err = xa_err(old); + + WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err); + zswap_reject_alloc_fail++; + return false; + } + + /* + * We may have had an existing entry that became stale when + * the folio was redirtied and now the new version is being + * swapped out. Get rid of the old. + */ + if (old) + zswap_entry_free(old); + + return true; +} + +/* + * The stats accounting makes no assumptions about all pages in the sub-batch + * belonging to the same folio, or having the same objcg; while still doing + * the updates in aggregation. + */ +static void zswap_batch_xarray_stats(struct zswap_batch_store_sub_batch *sb, + int *errors) +{ + int nr_objcg_pages = 0, nr_pages = 0; + struct obj_cgroup *objcg = NULL; + size_t compressed_bytes = 0; + u8 i; + + for (i = 0; i < sb->nr_pages; ++i) { + if (errors[sb->folio_ids[i]]) + continue; + + if (!zswap_store_entry(sb->swpentries[i], sb->entries[i])) { + zswap_batch_cleanup(sb, errors, sb->folio_ids[i]); + continue; + } + + /* + * The entry is successfully compressed and stored in the tree, + * there is no further possibility of failure. Grab refs to the + * pool and objcg. These refs will be dropped by + * zswap_entry_free() when the entry is removed from the tree. 
+ */ + zswap_pool_get(sb->pool); + if (sb->objcgs[i]) + obj_cgroup_get(sb->objcgs[i]); + + /* + * We finish initializing the entry while it's already in xarray. + * This is safe because: + * + * 1. Concurrent stores and invalidations are excluded by folio + * lock. + * + * 2. Writeback is excluded by the entry not being on the LRU yet. + * The publishing order matters to prevent writeback from seeing + * an incoherent entry. + */ + sb->entries[i]->pool = sb->pool; + sb->entries[i]->swpentry = sb->swpentries[i]; + sb->entries[i]->objcg = sb->objcgs[i]; + sb->entries[i]->referenced = true; + if (sb->entries[i]->length) { + INIT_LIST_HEAD(&(sb->entries[i]->lru)); + zswap_lru_add(&zswap_list_lru, sb->entries[i]); + } + + if (!objcg && sb->objcgs[i]) { + objcg = sb->objcgs[i]; + } else if (objcg && sb->objcgs[i] && (objcg != sb->objcgs[i])) { + obj_cgroup_charge_zswap(objcg, compressed_bytes); + count_objcg_events(objcg, ZSWPOUT, nr_objcg_pages); + compressed_bytes = 0; + nr_objcg_pages = 0; + objcg = sb->objcgs[i]; + } + + if (sb->objcgs[i]) { + compressed_bytes += sb->entries[i]->length; + ++nr_objcg_pages; + } + + ++nr_pages; + } /* for sub-batch pages. */ + + if (objcg) { + obj_cgroup_charge_zswap(objcg, compressed_bytes); + count_objcg_events(objcg, ZSWPOUT, nr_objcg_pages); + } + + atomic_long_add(nr_pages, &zswap_stored_pages); + count_vm_events(ZSWPOUT, nr_pages); +} + +static void zswap_batch_zpool_store(struct zswap_batch_store_sub_batch *sb, + int *errors) +{ + u8 i; + + for (i = 0; i < sb->nr_pages; ++i) { + struct zpool *zpool; + unsigned long handle; + char *buf; + gfp_t gfp; + int err; + + /* Skip pages belonging to folios that had compress errors. */ + if (errors[sb->folio_ids[i]]) + continue; + + zpool = sb->pool->zpool; + gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM; + if (zpool_malloc_support_movable(zpool)) + gfp |= __GFP_HIGHMEM | __GFP_MOVABLE; + err = zpool_malloc(zpool, sb->dlens[i], gfp, &handle); + + if (err) { + if (err == -ENOSPC) + zswap_reject_compress_poor++; + else + zswap_reject_alloc_fail++; + + /* + * A zpool malloc error should trigger cleanup for + * other same-folio pages in the sub-batch, and zpool + * resources/zswap_entries for those pages should be + * de-allocated. + */ + zswap_batch_cleanup(sb, errors, sb->folio_ids[i]); + continue; + } + + buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO); + memcpy(buf, sb->dsts[i], sb->dlens[i]); + zpool_unmap_handle(zpool, handle); + + sb->entries[i]->handle = handle; + sb->entries[i]->length = sb->dlens[i]; + } +} + +static void zswap_batch_proc_comp_errors(struct zswap_batch_store_sub_batch *sb, + int *errors) +{ + u8 i; + + for (i = 0; i < sb->nr_pages; ++i) { + if (sb->comp_errors[i]) { + if (sb->comp_errors[i] == -ENOSPC) + zswap_reject_compress_poor++; + else + zswap_reject_compress_fail++; + + if (!errors[sb->folio_ids[i]]) + zswap_batch_cleanup(sb, errors, sb->folio_ids[i]); + } + } +} + +/* + * Batch compress up to SWAP_CRYPTO_BATCH_SIZE pages with IAA. + * It is important to note that the SWAP_CRYPTO_BATCH_SIZE resources + * resources are allocated for the pool's per-cpu acomp_ctx during cpu + * hotplug only if the crypto_acomp has registered either + * batch_compress() and batch_decompress(). + * The iaa_crypto driver registers implementations for both these API. + * Hence, if IAA is the zswap compressor, the call to + * crypto_acomp_batch_compress() will compress the pages in parallel, + * resulting in significant performance improvements as compared to + * software compressors. 
+ */ +static void zswap_batch_compress(struct zswap_batch_store_sub_batch *sb, + int *errors) +{ + struct crypto_acomp_ctx *acomp_ctx = raw_cpu_ptr(sb->pool->acomp_ctx); + u8 i; + + mutex_lock(&acomp_ctx->mutex); + + BUG_ON(acomp_ctx->nr_reqs != SWAP_CRYPTO_BATCH_SIZE); + + for (i = 0; i < sb->nr_pages; ++i) { + sb->dsts[i] = acomp_ctx->buffers[i]; + sb->dlens[i] = PAGE_SIZE; + } + + /* + * Batch compress sub-batch "N". If IAA is the compressor, the + * hardware will compress multiple pages in parallel. + */ + crypto_acomp_batch_compress( + acomp_ctx->reqs, + &acomp_ctx->wait, + sb->pages, + sb->dsts, + sb->dlens, + sb->comp_errors, + sb->nr_pages); + + /* + * Scan the sub-batch for any compression errors, + * and invalidate pages with errors, along with other + * pages belonging to the same folio as the error page(s). + * Set the folio's error status in "errors" so that no + * further zswap_batch_store() processing is done for + * the folio(s) with compression errors. + */ + zswap_batch_proc_comp_errors(sb, errors); + + zswap_batch_zpool_store(sb, errors); + + mutex_unlock(&acomp_ctx->mutex); +} + +static void zswap_batch_add_pages(struct zswap_batch_store_sub_batch *sb, + struct folio *folio, + u8 folio_id, + struct obj_cgroup *objcg, + struct zswap_entry *entries[], + long start_idx, + u8 nr) +{ + long index; + + for (index = start_idx; index < (start_idx + nr); ++index) { + u8 i = sb->nr_pages; + struct page *page = folio_page(folio, index); + sb->pages[i] = page; + sb->swpentries[i] = page_swap_entry(page); + sb->folio_ids[i] = folio_id; + sb->objcgs[i] = objcg; + sb->entries[i] = entries[index - start_idx]; + sb->comp_errors[i] = 0; + ++sb->nr_pages; + } +} + +/* Allocate entries for the next sub-batch. */ +static int zswap_batch_alloc_entries(struct zswap_entry *entries[], int node_id, u8 nr) +{ + u8 i; + + for (i = 0; i < nr; ++i) { + entries[i] = zswap_entry_cache_alloc(GFP_KERNEL, node_id); + if (!entries[i]) { + u8 j; + + zswap_reject_kmemcache_fail++; + for (j = 0; j < i; ++j) + zswap_entry_cache_free(entries[j]); + return -EINVAL; + } + + entries[i]->handle = (unsigned long)ERR_PTR(-EINVAL); + } + + return 0; +} + +static bool zswap_batch_comp_folio(struct folio *folio, int *errors, u8 folio_id, + struct obj_cgroup *objcg, + struct zswap_batch_store_sub_batch *sub_batch, + bool last) +{ + long folio_start_idx = 0, nr_folio_pages = folio_nr_pages(folio); + struct zswap_entry *entries[SWAP_CRYPTO_BATCH_SIZE]; + int node_id = folio_nid(folio); + + /* + * Iterate over the pages in the folio passed in. Construct compress + * sub-batches of up to SWAP_CRYPTO_BATCH_SIZE pages. Process each + * sub-batch with IAA batch compression. Detect errors from batch + * compression and set the folio's error status. + */ + while (nr_folio_pages > 0) { + u8 add_nr_pages; + + /* + * If we have accumulated SWAP_CRYPTO_BATCH_SIZE + * pages, process the sub-batch. + */ + if (sub_batch->nr_pages == SWAP_CRYPTO_BATCH_SIZE) { + zswap_batch_compress(sub_batch, errors); + zswap_batch_xarray_stats(sub_batch, errors); + zswap_batch_reset(sub_batch); + /* + * Stop processing this folio if it had compress errors. + */ + if (errors[folio_id]) + goto ret_folio; + } + + /* Add pages from the folio to the compress sub-batch. */ + add_nr_pages = min3(( + (long)SWAP_CRYPTO_BATCH_SIZE - + (long)sub_batch->nr_pages), + nr_folio_pages, + (long)SWAP_CRYPTO_BATCH_SIZE); + + /* + * Allocate zswap entries for this sub-batch. If we get errors + * while doing so, we can fail early and flag an error for the + * folio. 
+ */ + if (zswap_batch_alloc_entries(entries, node_id, add_nr_pages)) { + zswap_batch_reset(sub_batch); + errors[folio_id] = -EINVAL; + goto ret_folio; + } + + zswap_batch_add_pages(sub_batch, folio, folio_id, objcg, + entries, folio_start_idx, add_nr_pages); + + nr_folio_pages -= add_nr_pages; + folio_start_idx += add_nr_pages; + } /* this folio has pages to be compressed. */ + + /* + * Process last sub-batch: it could contain pages from multiple folios. + */ + if (last && sub_batch->nr_pages) { + zswap_batch_compress(sub_batch, errors); + zswap_batch_xarray_stats(sub_batch, errors); + } + +ret_folio: + return (!errors[folio_id]); +} + +/* + * Store a large folio and/or a batch of any-order folio(s) in zswap + * using IAA compress batching API. + * + * This the main procedure for batching within large folios and for batching + * of folios. Each large folio will be broken into sub-batches of + * SWAP_CRYPTO_BATCH_SIZE pages, the sub-batch pages will be compressed by + * IAA hardware compress engines in parallel, then stored in zpool/xarray. + * + * This procedure should only be called if zswap supports batching of stores. + * Otherwise, the sequential implementation for storing folios as in the + * current zswap_store() should be used. The code handles the unlikely event + * that the zswap pool changes from batching to non-batching between + * swap_writepage() and the start of zswap_batch_store(). + * + * The signature of this procedure is meant to allow the calling function, + * (for instance, swap_writepage()) to pass a batch of folios @batch + * (the "reclaim batch") to be stored in zswap. + * + * @batch and @errors have folio_batch_count(@batch) number of entries, + * with one-one correspondence (@errors[i] represents the error status of + * @batch->folios[i], for i in folio_batch_count(@batch)). Please also + * see comments preceding "struct zswap_batch_store_sub_batch" definition + * above. + * + * The calling function (for instance, swap_writepage()) should initialize + * @errors[i] to a non-0 value. + * If zswap successfully stores @batch->folios[i], it will set @errors[i] to 0. + * If there is an error in zswap, it will set @errors[i] to -EINVAL. + * + * @batch: folio_batch of folios to be batch compressed. + * @errors: zswap_batch_store() error status for the folios in @batch. + */ +void zswap_batch_store(struct folio_batch *batch, int *errors) +{ + struct zswap_batch_store_sub_batch sub_batch; + struct zswap_pool *pool; + u8 i; + + /* + * If zswap is disabled, we must invalidate the possibly stale entry + * which was previously stored at this offset. Otherwise, writeback + * could overwrite the new data in the swapfile. + */ + if (!zswap_enabled) + goto check_old; + + pool = zswap_pool_current_get(); + + if (!pool) { + if (zswap_check_limits()) + queue_work(shrink_wq, &zswap_shrink_work); + goto check_old; + } + + if (!pool->can_batch) { + for (i = 0; i < folio_batch_count(batch); ++i) + if (zswap_store(batch->folios[i])) + errors[i] = 0; + else + errors[i] = -EINVAL; + /* + * Seems preferable to release the pool ref after the calls to + * zswap_store(), so that the non-batching pool cannot be + * deleted, can be used for sequential stores, and the zswap pool + * cannot morph into a batching pool. 
+ */ + zswap_pool_put(pool); + return; + } + + zswap_batch_reset(&sub_batch); + sub_batch.pool = pool; + + for (i = 0; i < folio_batch_count(batch); ++i) { + struct folio *folio = batch->folios[i]; + struct obj_cgroup *objcg = NULL; + struct mem_cgroup *memcg = NULL; + bool ret; + + VM_WARN_ON_ONCE(!folio_test_locked(folio)); + VM_WARN_ON_ONCE(!folio_test_swapcache(folio)); + + objcg = get_obj_cgroup_from_folio(folio); + if (objcg && !obj_cgroup_may_zswap(objcg)) { + memcg = get_mem_cgroup_from_objcg(objcg); + if (shrink_memcg(memcg)) { + mem_cgroup_put(memcg); + goto put_objcg; + } + mem_cgroup_put(memcg); + } + + if (zswap_check_limits()) + goto put_objcg; + + if (objcg) { + memcg = get_mem_cgroup_from_objcg(objcg); + if (memcg_list_lru_alloc(memcg, &zswap_list_lru, GFP_KERNEL)) { + mem_cgroup_put(memcg); + goto put_objcg; + } + mem_cgroup_put(memcg); + } + + /* + * By default, set zswap error status in "errors" to "success" + * for use in swap_writepage() when this returns. In case of + * errors encountered in any sub-batch in which this folio's + * pages are batch-compressed, a negative error number will + * over-write this when zswap_batch_cleanup() is called. + */ + errors[i] = 0; + ret = zswap_batch_comp_folio(folio, errors, i, objcg, &sub_batch, + (i == folio_batch_count(batch) - 1)); + +put_objcg: + obj_cgroup_put(objcg); + if (!ret && zswap_pool_reached_full) + queue_work(shrink_wq, &zswap_shrink_work); + } /* for batch folios */ + + zswap_pool_put(pool); + +check_old: + for (i = 0; i < folio_batch_count(batch); ++i) + if (errors[i]) + zswap_delete_stored_entries(batch->folios[i]); +} + int zswap_swapon(int type, unsigned long nr_pages) { struct xarray *trees, *tree;