From patchwork Mon Mar 3 08:47:11 2025
From: Kanchana P Sridhar
Subject: [PATCH v8 01/14] crypto: acomp - Add synchronous/asynchronous acomp request chaining.
Date: Mon, 3 Mar 2025 00:47:11 -0800
Message-Id: <20250303084724.6490-2-kanchana.p.sridhar@intel.com>

This patch is based on Herbert Xu's request chaining for ahash
("[PATCH 2/6] crypto: hash - Add request chaining API") [1].
The generic request chaining framework provided in the ahash
implementation has been used as a reference to develop a similar
synchronous request chaining framework for crypto_acomp. Furthermore,
this commit develops an asynchronous request chaining framework and API
that iaa_crypto can use for request chaining with parallelism, in order
to fully benefit from Intel IAA's multiple compress/decompress engines
in hardware. This gives significant latency improvements with IAA
batching as compared to synchronous request chaining.

Usage of the acomp request chaining API:
========================================

Any crypto_acomp compressor can use request chaining as follows:

Step 1: Create the request chain:

  Request 0 (the first req in the chain):

    void acomp_reqchain_init(struct acomp_req *req, u32 flags,
                             crypto_completion_t compl, void *data);

  Subsequent requests:

    void acomp_request_chain(struct acomp_req *req, struct acomp_req *head);

Step 2: Process the request chain using the specified compress/decompress
        "op":

  2.a) Synchronous: the chain of requests is processed in series:

       int acomp_do_req_chain(struct acomp_req *req,
                              int (*op)(struct acomp_req *req));

  2.b) Asynchronous: the chain of requests is processed in parallel using
       a submit-poll paradigm:

       int acomp_do_async_req_chain(struct acomp_req *req,
                                    int (*op_submit)(struct acomp_req *req),
                                    int (*op_poll)(struct acomp_req *req));

Request chaining will be used in subsequent patches to implement
compress/decompress batching in the iaa_crypto driver for the two
supported IAA driver sync_modes:

  sync_mode = 'sync' will use (2.a),
  sync_mode = 'async' will use (2.b).
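For illustration, the two steps combine as in the following minimal
sketch. This is hedged, not part of the patch: chain_and_compress() is a
hypothetical helper, the requests are assumed to be fully set up via
acomp_request_set_params(), "wait" is assumed to be initialized with
crypto_init_wait(), and the underlying driver is assumed to handle
chained requests (as iaa_crypto does later in this series):

  /* Hypothetical usage sketch of the chaining API above. */
  static int chain_and_compress(struct acomp_req *reqs[], int n,
                                struct crypto_wait *wait)
  {
          int i, err;

          /* Step 1: reqs[0] heads the chain; link the others to it. */
          acomp_reqchain_init(reqs[0], CRYPTO_TFM_REQ_MAY_BACKLOG,
                              crypto_req_done, wait);
          for (i = 1; i < n; i++)
                  acomp_request_chain(reqs[i], reqs[0]);

          /* Step 2: submit the head; the driver processes the chain. */
          err = crypto_wait_req(crypto_acomp_compress(reqs[0]), wait);

          /* Each request carries its own completion status. */
          for (i = 0; i < n; i++)
                  if (acomp_request_err(reqs[i]))
                          err = acomp_request_err(reqs[i]);

          return err;
  }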
These files are directly re-used from [1], which is not yet merged:

  include/crypto/algapi.h
  include/linux/crypto.h

Hence, I am adding Herbert as the co-developer of this acomp request
chaining patch.

[1]: https://lore.kernel.org/linux-crypto/677614fbdc70b31df2e26483c8d2cd1510c8af91.1730021644.git.herbert@gondor.apana.org.au/

Suggested-by: Herbert Xu
Signed-off-by: Kanchana P Sridhar
Co-developed-by: Herbert Xu
Signed-off-by:
---
 crypto/acompress.c                  | 284 ++++++++++++++++++++++++++++
 include/crypto/acompress.h          |  46 +++++
 include/crypto/algapi.h             |  10 +
 include/crypto/internal/acompress.h |  10 +
 include/linux/crypto.h              |  39 ++++
 5 files changed, 389 insertions(+)

diff --git a/crypto/acompress.c b/crypto/acompress.c
index 6fdf0ff9f3c0..cb6444d09dd7 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -23,6 +23,19 @@ struct crypto_scomp;
 
 static const struct crypto_type crypto_acomp_type;
 
+struct acomp_save_req_state {
+	struct list_head head;
+	struct acomp_req *req0;
+	struct acomp_req *cur;
+	int (*op)(struct acomp_req *req);
+	crypto_completion_t compl;
+	void *data;
+};
+
+static void acomp_reqchain_done(void *data, int err);
+static int acomp_save_req(struct acomp_req *req, crypto_completion_t cplt);
+static void acomp_restore_req(struct acomp_req *req);
+
 static inline struct acomp_alg *__crypto_acomp_alg(struct crypto_alg *alg)
 {
 	return container_of(alg, struct acomp_alg, calg.base);
@@ -123,6 +136,277 @@ struct crypto_acomp *crypto_alloc_acomp_node(const char *alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_acomp_node);
 
+static int acomp_save_req(struct acomp_req *req, crypto_completion_t cplt)
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct acomp_save_req_state *state;
+	gfp_t gfp;
+	u32 flags;
+
+	if (!acomp_is_async(tfm))
+		return 0;
+
+	flags = acomp_request_flags(req);
+	gfp = (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC;
+	state = kmalloc(sizeof(*state), gfp);
+	if (!state)
+		return -ENOMEM;
+
+	state->compl = req->base.complete;
+	state->data = req->base.data;
+	state->req0 = req;
+
+	req->base.complete = cplt;
+	req->base.data = state;
+
+	return 0;
+}
+
+static void acomp_restore_req(struct acomp_req *req)
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct acomp_save_req_state *state;
+
+	if (!acomp_is_async(tfm))
+		return;
+
+	state = req->base.data;
+
+	req->base.complete = state->compl;
+	req->base.data = state->data;
+	kfree(state);
+}
+
+static int acomp_reqchain_finish(struct acomp_save_req_state *state,
+				 int err, u32 mask)
+{
+	struct acomp_req *req0 = state->req0;
+	struct acomp_req *req = state->cur;
+	struct acomp_req *n;
+
+	req->base.err = err;
+
+	if (req == req0)
+		INIT_LIST_HEAD(&req->base.list);
+	else
+		list_add_tail(&req->base.list, &req0->base.list);
+
+	list_for_each_entry_safe(req, n, &state->head, base.list) {
+		list_del_init(&req->base.list);
+
+		req->base.flags &= mask;
+		req->base.complete = acomp_reqchain_done;
+		req->base.data = state;
+		state->cur = req;
+		err = state->op(req);
+
+		if (err == -EINPROGRESS) {
+			if (!list_empty(&state->head))
+				err = -EBUSY;
+			goto out;
+		}
+
+		if (err == -EBUSY)
+			goto out;
+
+		req->base.err = err;
+		list_add_tail(&req->base.list, &req0->base.list);
+	}
+
+	acomp_restore_req(req0);
+
+out:
+	return err;
+}
+
+static void acomp_reqchain_done(void *data, int err)
+{
+	struct acomp_save_req_state *state = data;
+	crypto_completion_t compl = state->compl;
+
+	data = state->data;
+
+	if (err == -EINPROGRESS) {
+		if (!list_empty(&state->head))
+			return;
+		goto notify;
+	}
+
+	err = acomp_reqchain_finish(state, err, CRYPTO_TFM_REQ_MAY_BACKLOG);
+	if (err == -EBUSY)
+		return;
+
+notify:
+	compl(data, err);
+}
+
+int acomp_do_req_chain(struct acomp_req *req,
+		       int (*op)(struct acomp_req *req))
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct acomp_save_req_state *state;
+	struct acomp_save_req_state state0;
+	int err = 0;
+
+	if (!acomp_request_chained(req) || list_empty(&req->base.list) ||
+	    !crypto_acomp_req_chain(tfm))
+		return op(req);
+
+	state = &state0;
+
+	if (acomp_is_async(tfm)) {
+		err = acomp_save_req(req, acomp_reqchain_done);
+		if (err) {
+			struct acomp_req *r2;
+
+			req->base.err = err;
+			list_for_each_entry(r2, &req->base.list, base.list)
+				r2->base.err = err;
+
+			return err;
+		}
+
+		state = req->base.data;
+	}
+
+	state->op = op;
+	state->cur = req;
+	INIT_LIST_HEAD(&state->head);
+	list_splice(&req->base.list, &state->head);
+
+	err = op(req);
+	if (err == -EBUSY || err == -EINPROGRESS)
+		return -EBUSY;
+
+	return acomp_reqchain_finish(state, err, ~0);
+}
+EXPORT_SYMBOL_GPL(acomp_do_req_chain);
+
+static void acomp_async_reqchain_done(struct acomp_req *req0,
+				      struct list_head *state,
+				      int (*op_poll)(struct acomp_req *req))
+{
+	struct acomp_req *req, *n;
+	bool req0_done = false;
+	int err;
+
+	while (!list_empty(state)) {
+
+		if (!req0_done) {
+			err = op_poll(req0);
+			if (!(err == -EAGAIN || err == -EINPROGRESS || err == -EBUSY)) {
+				req0->base.err = err;
+				req0_done = true;
+			}
+		}
+
+		list_for_each_entry_safe(req, n, state, base.list) {
+			err = op_poll(req);
+
+			if (err == -EAGAIN || err == -EINPROGRESS || err == -EBUSY)
+				continue;
+
+			req->base.err = err;
+			list_del_init(&req->base.list);
+			list_add_tail(&req->base.list, &req0->base.list);
+		}
+	}
+
+	while (!req0_done) {
+		err = op_poll(req0);
+		if (!(err == -EAGAIN || err == -EINPROGRESS || err == -EBUSY)) {
+			req0->base.err = err;
+			break;
+		}
+	}
+}
+
+static int acomp_async_reqchain_finish(struct acomp_req *req0,
+				       struct list_head *state,
+				       int (*op_submit)(struct acomp_req *req),
+				       int (*op_poll)(struct acomp_req *req))
+{
+	struct acomp_req *req, *n;
+	int err = 0;
+
+	INIT_LIST_HEAD(&req0->base.list);
+
+	list_for_each_entry_safe(req, n, state, base.list) {
+		BUG_ON(req == req0);
+
+		err = op_submit(req);
+
+		if (!(err == -EINPROGRESS || err == -EBUSY)) {
+			req->base.err = err;
+			list_del_init(&req->base.list);
+			list_add_tail(&req->base.list, &req0->base.list);
+		}
+	}
+
+	acomp_async_reqchain_done(req0, state, op_poll);
+
+	return req0->base.err;
+}
+
+int acomp_do_async_req_chain(struct acomp_req *req,
+			     int (*op_submit)(struct acomp_req *req),
+			     int (*op_poll)(struct acomp_req *req))
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct list_head state;
+	struct acomp_req *r2;
+	int err = 0;
+	void *req0_data = req->base.data;
+
+	if (!acomp_request_chained(req) || list_empty(&req->base.list) ||
+	    !acomp_is_async(tfm) || !crypto_acomp_req_chain(tfm)) {
+
+		err = op_submit(req);
+
+		if (err == -EINPROGRESS || err == -EBUSY) {
+			bool req0_done = false;
+
+			while (!req0_done) {
+				err = op_poll(req);
+				if (!(err == -EAGAIN || err == -EINPROGRESS || err == -EBUSY)) {
+					req->base.err = err;
+					break;
+				}
+			}
+		} else {
+			req->base.err = err;
+		}
+
+		req->base.data = req0_data;
+		if (acomp_is_async(tfm))
+			req->base.complete(req->base.data, req->base.err);
+
+		return err;
+	}
+
+	err = op_submit(req);
+	req->base.err = err;
+
+	if (err && !(err == -EINPROGRESS || err == -EBUSY))
+		goto err_prop;
+
+	INIT_LIST_HEAD(&state);
+	list_splice(&req->base.list, &state);
+
+	err = acomp_async_reqchain_finish(req, &state, op_submit, op_poll);
+	req->base.data = req0_data;
+	req->base.complete(req->base.data, req->base.err);
+
+	return err;
+
+err_prop:
+	list_for_each_entry(r2, &req->base.list, base.list)
+		r2->base.err = err;
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(acomp_do_async_req_chain);
+
 struct acomp_req *acomp_request_alloc(struct crypto_acomp *acomp)
 {
 	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index 54937b615239..e6783deba3ac 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -206,6 +206,7 @@ static inline void acomp_request_set_callback(struct acomp_req *req,
 	req->base.data = data;
 	req->base.flags &= CRYPTO_ACOMP_ALLOC_OUTPUT;
 	req->base.flags |= flgs & ~CRYPTO_ACOMP_ALLOC_OUTPUT;
+	req->base.flags &= ~CRYPTO_TFM_REQ_CHAIN;
 }
 
 /**
@@ -237,6 +238,51 @@ static inline void acomp_request_set_params(struct acomp_req *req,
 		req->flags |= CRYPTO_ACOMP_ALLOC_OUTPUT;
 }
 
+static inline u32 acomp_request_flags(struct acomp_req *req)
+{
+	return req->base.flags;
+}
+
+static inline void acomp_reqchain_init(struct acomp_req *req,
+				       u32 flags, crypto_completion_t compl,
+				       void *data)
+{
+	acomp_request_set_callback(req, flags, compl, data);
+	crypto_reqchain_init(&req->base);
+}
+
+static inline bool acomp_is_reqchain(struct acomp_req *req)
+{
+	return crypto_is_reqchain(&req->base);
+}
+
+static inline void acomp_reqchain_clear(struct acomp_req *req, void *data)
+{
+	struct crypto_wait *wait = (struct crypto_wait *)data;
+
+	reinit_completion(&wait->completion);
+	crypto_reqchain_clear(&req->base);
+	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+				   crypto_req_done, data);
+}
+
+static inline void acomp_request_chain(struct acomp_req *req,
+				       struct acomp_req *head)
+{
+	crypto_request_chain(&req->base, &head->base);
+}
+
+int acomp_do_req_chain(struct acomp_req *req,
+		       int (*op)(struct acomp_req *req));
+
+int acomp_do_async_req_chain(struct acomp_req *req,
+			     int (*op_submit)(struct acomp_req *req),
+			     int (*op_poll)(struct acomp_req *req));
+
+static inline int acomp_request_err(struct acomp_req *req)
+{
+	return req->base.err;
+}
+
 /**
  * crypto_acomp_compress() -- Invoke asynchronous compress operation
  *
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 156de41ca760..c5df380c7d08 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -271,4 +271,14 @@ static inline u32 crypto_tfm_alg_type(struct crypto_tfm *tfm)
 	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK;
 }
 
+static inline bool crypto_request_chained(struct crypto_async_request *req)
+{
+	return req->flags & CRYPTO_TFM_REQ_CHAIN;
+}
+
+static inline bool crypto_tfm_req_chain(struct crypto_tfm *tfm)
+{
+	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_REQ_CHAIN;
+}
+
 #endif	/* _CRYPTO_ALGAPI_H */
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index 8831edaafc05..53b4ef59b48c 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -84,6 +84,16 @@ static inline void __acomp_request_free(struct acomp_req *req)
 	kfree_sensitive(req);
 }
 
+static inline bool acomp_request_chained(struct acomp_req *req)
+{
+	return crypto_request_chained(&req->base);
+}
+
+static inline bool crypto_acomp_req_chain(struct crypto_acomp *tfm)
+{
+	return crypto_tfm_req_chain(&tfm->base);
+}
+
 /**
  * crypto_register_acomp() -- Register asynchronous compression algorithm
  *
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index b164da5e129e..f1bc282e1ed6 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -13,6 +13,8 @@
 #define _LINUX_CRYPTO_H
 
 #include
+#include
+#include
 #include
 #include
 #include
@@ -124,6 +126,9 @@
  */
 #define CRYPTO_ALG_FIPS_INTERNAL	0x00020000
 
+/* Set if the algorithm supports request chains. */
+#define CRYPTO_ALG_REQ_CHAIN		0x00040000
+
 /*
  * Transform masks and values (for crt_flags).
  */
@@ -133,6 +138,7 @@
 #define CRYPTO_TFM_REQ_FORBID_WEAK_KEYS	0x00000100
 #define CRYPTO_TFM_REQ_MAY_SLEEP	0x00000200
 #define CRYPTO_TFM_REQ_MAY_BACKLOG	0x00000400
+#define CRYPTO_TFM_REQ_CHAIN		0x00000800
 
 /*
  * Miscellaneous stuff.
@@ -174,6 +180,7 @@ struct crypto_async_request {
 
 	struct crypto_tfm *tfm;
 	u32 flags;
+	int err;
 };
 
 /**
@@ -391,6 +398,9 @@ void crypto_req_done(void *req, int err);
 
 static inline int crypto_wait_req(int err, struct crypto_wait *wait)
 {
+	if (!wait)
+		return err;
+
 	switch (err) {
 	case -EINPROGRESS:
 	case -EBUSY:
@@ -540,5 +550,34 @@ int crypto_comp_decompress(struct crypto_comp *tfm,
 			   const u8 *src, unsigned int slen,
 			   u8 *dst, unsigned int *dlen);
 
+static inline void crypto_reqchain_init(struct crypto_async_request *req)
+{
+	req->err = -EINPROGRESS;
+	req->flags |= CRYPTO_TFM_REQ_CHAIN;
+	INIT_LIST_HEAD(&req->list);
+}
+
+static inline bool crypto_is_reqchain(struct crypto_async_request *req)
+{
+	return req->flags & CRYPTO_TFM_REQ_CHAIN;
+}
+
+static inline void crypto_reqchain_clear(struct crypto_async_request *req)
+{
+	req->flags &= ~CRYPTO_TFM_REQ_CHAIN;
+}
+
+static inline void crypto_request_chain(struct crypto_async_request *req,
+					struct crypto_async_request *head)
+{
+	req->err = -EINPROGRESS;
+	list_add_tail(&req->list, &head->list);
+}
+
+static inline bool crypto_tfm_is_async(struct crypto_tfm *tfm)
+{
+	return tfm->__crt_alg->cra_flags & CRYPTO_ALG_ASYNC;
+}
+
 #endif	/* _LINUX_CRYPTO_H */

From patchwork Mon Mar 3 08:47:12 2025
From: Kanchana P Sridhar
Subject: [PATCH v8 02/14] crypto: acomp - New interfaces to facilitate batching support in acomp & drivers.
Date: Mon, 3 Mar 2025 00:47:12 -0800
Message-Id: <20250303084724.6490-3-kanchana.p.sridhar@intel.com>

This commit adds a get_batch_size() interface to:

  struct acomp_alg
  struct crypto_acomp

A crypto_acomp compression algorithm that supports batching of
compressions and decompressions must register and provide an
implementation for this API, so that higher level modules such as zswap
and zram can allocate resources for submitting multiple
compress/decompress jobs that can be batched. In addition, the
compression algorithm must register itself to use request chaining
(cra_flags |= CRYPTO_ALG_REQ_CHAIN).

A new helper function acomp_has_async_batching() can be invoked to query
if a crypto_acomp has registered this API.

Further, the newly added crypto_acomp API "crypto_acomp_batch_size()" is
provided for use by higher level modules like zswap and zram.
crypto_acomp_batch_size() returns 1 if the acomp has not provided an
implementation for get_batch_size().

For instance, zswap can call crypto_acomp_batch_size() to get the
maximum batch-size supported by the compressor. Based on this, zswap can
use the minimum of any zswap-specific upper limit for batch-size and the
compressor's max batch-size to allocate batching resources.

The way that zswap can use the compressor's batching capability is by
using request chaining to create a list of requests chained to a head
request. zswap can call crypto_acomp_compress() or
crypto_acomp_decompress() with the head request in the chain for
processing the chain as a batch. The call into crypto for
compress/decompress will thus remain the same from zswap's perspective
for both batching and sequential compressions/decompressions.
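As a concrete sketch of this sizing logic (hedged: ZSWAP_MAX_BATCH_SIZE
and the helper name below are hypothetical stand-ins, not part of this
patch):

  /* Hypothetical sizing sketch for a swap module's batching resources. */
  #define ZSWAP_MAX_BATCH_SIZE 8U

  static unsigned int compr_batch_size(struct crypto_acomp *tfm)
  {
          /* crypto_acomp_batch_size() returns 1 for non-batching acomps. */
          return min(ZSWAP_MAX_BATCH_SIZE, crypto_acomp_batch_size(tfm));
  }

The module would then allocate this many requests and buffers per CPU,
and fall back naturally to sequential operation when the result is 1.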
An acomp_is_reqchain() API is introduced that a driver can call to query
if a request received from compress/decompress represents a request
chain, and accordingly process the request chain using either one of:

  acomp_do_req_chain()
  acomp_do_async_req_chain()

These capabilities allow the iaa_crypto Intel IAA driver to register and
implement the get_batch_size() acomp_alg interface, which can
subsequently be invoked from the kernel zswap/zram modules to construct
a request chain to compress/decompress pages in parallel in the IAA
hardware accelerator to improve swapout/swapin performance.

Signed-off-by: Kanchana P Sridhar
---
 crypto/acompress.c                  |  1 +
 include/crypto/acompress.h          | 28 ++++++++++++++++++++++++++++
 include/crypto/internal/acompress.h |  4 ++++
 3 files changed, 33 insertions(+)

diff --git a/crypto/acompress.c b/crypto/acompress.c
index cb6444d09dd7..b2a6c06d7262 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -84,6 +84,7 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
 
 	acomp->compress = alg->compress;
 	acomp->decompress = alg->decompress;
+	acomp->get_batch_size = alg->get_batch_size;
 	acomp->dst_free = alg->dst_free;
 	acomp->reqsize = alg->reqsize;
 
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index e6783deba3ac..147f184b6bea 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -43,6 +43,9 @@ struct acomp_req {
  *
  * @compress:	Function performs a compress operation
  * @decompress:	Function performs a de-compress operation
+ * @get_batch_size: Maximum batch-size for batching compress/decompress
+ *		    operations. If registered, the acomp must provide
+ *		    a batching implementation using request chaining.
  * @dst_free:	Frees destination buffer if allocated inside the
  *		algorithm
  * @reqsize:	Context size for (de)compression requests
@@ -51,6 +54,7 @@ struct acomp_req {
 struct crypto_acomp {
 	int (*compress)(struct acomp_req *req);
 	int (*decompress)(struct acomp_req *req);
+	unsigned int (*get_batch_size)(void);
 	void (*dst_free)(struct scatterlist *dst);
 	unsigned int reqsize;
 	struct crypto_tfm base;
@@ -142,6 +146,13 @@ static inline bool acomp_is_async(struct crypto_acomp *tfm)
 	       CRYPTO_ALG_ASYNC;
 }
 
+static inline bool acomp_has_async_batching(struct crypto_acomp *tfm)
+{
+	return (acomp_is_async(tfm) &&
+		(crypto_comp_alg_common(tfm)->base.cra_flags & CRYPTO_ALG_TYPE_ACOMPRESS) &&
+		tfm->get_batch_size);
+}
+
 static inline struct crypto_acomp *crypto_acomp_reqtfm(struct acomp_req *req)
 {
 	return __crypto_acomp_tfm(req->base.tfm);
@@ -311,4 +322,21 @@ static inline int crypto_acomp_decompress(struct acomp_req *req)
 	return crypto_acomp_reqtfm(req)->decompress(req);
 }
 
+/**
+ * crypto_acomp_batch_size() -- Get the algorithm's batch size
+ *
+ * Function returns the algorithm's batch size for batching operations
+ *
+ * @tfm:	ACOMPRESS tfm handle allocated with crypto_alloc_acomp()
+ *
+ * Return:	crypto_acomp's batch size.
+ */
+static inline unsigned int crypto_acomp_batch_size(struct crypto_acomp *tfm)
+{
+	if (acomp_has_async_batching(tfm))
+		return tfm->get_batch_size();
+
+	return 1;
+}
+
 #endif
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index 53b4ef59b48c..24b63db56dfb 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -17,6 +17,9 @@
 *
 * @compress:	Function performs a compress operation
 * @decompress:	Function performs a de-compress operation
+ * @get_batch_size: Maximum batch-size for batching compress/decompress
+ *		    operations. If registered, the acomp must provide
+ *		    a batching implementation using request chaining.
 * @dst_free:	Frees destination buffer if allocated inside the algorithm
 * @init:	Initialize the cryptographic transformation object.
 *		This function is used to initialize the cryptographic
@@ -37,6 +40,7 @@ struct acomp_alg {
 
 	int (*compress)(struct acomp_req *req);
 	int (*decompress)(struct acomp_req *req);
+	unsigned int (*get_batch_size)(void);
 	void (*dst_free)(struct scatterlist *dst);
 	int (*init)(struct crypto_acomp *tfm);
 	void (*exit)(struct crypto_acomp *tfm);

From patchwork Mon Mar 3 08:47:13 2025
From: Kanchana P Sridhar
Subject: [PATCH v8 03/14] crypto: iaa - Add an acomp_req flag CRYPTO_ACOMP_REQ_POLL to enable async mode.
Date: Mon, 3 Mar 2025 00:47:13 -0800
Message-Id: <20250303084724.6490-4-kanchana.p.sridhar@intel.com>

If the iaa_crypto driver has async_mode set to true, and use_irq set to
false, it can still be forced to use synchronous mode by turning off the
CRYPTO_ACOMP_REQ_POLL flag in req->flags.

In other words, all three of the following need to be true for a request
to be processed in fully async poll mode:

  1) async_mode should be "true"
  2) use_irq should be "false"
  3) req->flags & CRYPTO_ACOMP_REQ_POLL should be "true"
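Restating that predicate as code, purely for illustration (the helper
name is hypothetical; async_mode and use_irq are the existing iaa_crypto
module parameters):

  /* Hypothetical helper: true only in fully async poll mode. */
  static bool iaa_req_uses_async_poll(struct acomp_req *req)
  {
          return async_mode && !use_irq &&
                 (req->flags & CRYPTO_ACOMP_REQ_POLL);
  }

If any of the three conditions is false, the driver falls back to
processing the request synchronously, as the hunks below implement via
the disable_async local.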
Suggested-by: Herbert Xu
Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 11 ++++++++++-
 include/crypto/acompress.h                 |  5 +++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index c3776b0de51d..d7983ab3c34a 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -1520,6 +1520,10 @@ static int iaa_comp_acompress(struct acomp_req *req)
 		return -EINVAL;
 	}
 
+	/* If the caller has requested no polling, disable async. */
+	if (!(req->flags & CRYPTO_ACOMP_REQ_POLL))
+		disable_async = true;
+
 	cpu = get_cpu();
 	wq = wq_table_next_wq(cpu);
 	put_cpu();
@@ -1712,6 +1716,7 @@ static int iaa_comp_adecompress(struct acomp_req *req)
 {
 	struct crypto_tfm *tfm = req->base.tfm;
 	dma_addr_t src_addr, dst_addr;
+	bool disable_async = false;
 	int nr_sgs, cpu, ret = 0;
 	struct iaa_wq *iaa_wq;
 	struct device *dev;
@@ -1727,6 +1732,10 @@ static int iaa_comp_adecompress(struct acomp_req *req)
 		return -EINVAL;
 	}
 
+	/* If the caller has requested no polling, disable async. */
+	if (!(req->flags & CRYPTO_ACOMP_REQ_POLL))
+		disable_async = true;
+
 	if (!req->dst)
 		return iaa_comp_adecompress_alloc_dest(req);
 
@@ -1775,7 +1784,7 @@ static int iaa_comp_adecompress(struct acomp_req *req)
 		req->dst, req->dlen, sg_dma_len(req->dst));
 
 	ret = iaa_decompress(tfm, req, wq, src_addr, req->slen,
-			     dst_addr, &req->dlen, false);
+			     dst_addr, &req->dlen, disable_async);
 	if (ret == -EINPROGRESS)
 		return ret;
 
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index 147f184b6bea..afadf84f236d 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -14,6 +14,11 @@
 #include
 
 #define CRYPTO_ACOMP_ALLOC_OUTPUT	0x00000001
+/*
+ * If set, the driver must have a way to submit the req, then
+ * poll its completion status for success/error.
+ */
+#define CRYPTO_ACOMP_REQ_POLL		0x00000002
 #define CRYPTO_ACOMP_DST_MAX		131072
 
 /**

From patchwork Mon Mar 3 08:47:14 2025
From: Kanchana P Sridhar
Subject: [PATCH v8 04/14] crypto: iaa - Implement batch compression/decompression with request chaining.
Date: Mon, 3 Mar 2025 00:47:14 -0800
Message-Id: <20250303084724.6490-5-kanchana.p.sridhar@intel.com>

This patch provides the iaa_crypto driver implementation for the newly
added crypto_acomp get_batch_size() interface, which will be called when
swap modules invoke crypto_acomp_batch_size() to query the maximum batch
size. It returns an IAA-driver-specific constant,
IAA_CRYPTO_MAX_BATCH_SIZE (currently set to 8U).

This allows swap modules such as zswap/zram to allocate the required
batching resources and then invoke fully asynchronous, parallel batch
compression/decompression of pages on systems with Intel IAA, by setting
up a request chain and calling crypto_acomp_compress() or
crypto_acomp_decompress() with the head request in the chain.

This enables zswap compress-batching code to be developed in a manner
similar to the current single-page synchronous calls to
crypto_acomp_compress() and crypto_acomp_decompress(), thereby
facilitating an encapsulated and modular hand-off between the kernel
zswap/zram code and the crypto_acomp layer.

This patch also provides implementations of IAA batching with request
chaining for both iaa_crypto sync modes: asynchronous/no-irq and fully
synchronous.

Since iaa_crypto supports the use of acomp request chaining, this patch
also adds CRYPTO_ALG_REQ_CHAIN to the iaa_acomp_fixed_deflate
algorithm's cra_flags.
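The submit/poll contract that the async batching path below relies on
can be sketched as follows (illustrative only, not part of this patch;
process_one_async() is a hypothetical stand-in for what
acomp_do_async_req_chain() does per request in the chain):

  /*
   * Sketch of the submit/poll contract: op_submit() returns
   * -EINPROGRESS/-EBUSY once the hardware has accepted a request;
   * op_poll() returns -EAGAIN/-EINPROGRESS/-EBUSY while the descriptor
   * is still in flight, then the final status (0 or -errno).
   */
  static int process_one_async(struct acomp_req *req,
                               int (*op_submit)(struct acomp_req *req),
                               int (*op_poll)(struct acomp_req *req))
  {
          int err = op_submit(req);

          if (err != -EINPROGRESS && err != -EBUSY)
                  return err;     /* completed or failed synchronously */

          do {
                  err = op_poll(req);     /* e.g. iaa_comp_poll() below */
          } while (err == -EAGAIN || err == -EINPROGRESS || err == -EBUSY);

          return err;
  }

Because descriptors are submitted before any polling starts, the
requests in a chain execute in parallel on IAA's multiple engines.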
Suggested-by: Herbert Xu
Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto.h      |   9 +
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 186 ++++++++++++++++++++-
 2 files changed, 192 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index 56985e395263..45d94a646636 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -39,6 +39,15 @@
 					IAA_DECOMP_CHECK_FOR_EOB | \
 					IAA_DECOMP_STOP_ON_EOB)
 
+/*
+ * The maximum compress/decompress batch size for IAA's implementation of
+ * batched compressions/decompressions.
+ * The IAA compression algorithms should provide the crypto_acomp
+ * get_batch_size() interface through a function that returns this
+ * constant.
+ */
+#define IAA_CRYPTO_MAX_BATCH_SIZE 8U
+
 /* Representation of IAA workqueue */
 struct iaa_wq {
 	struct list_head	list;
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index d7983ab3c34a..a9800b8f3575 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -1807,6 +1807,185 @@ static void compression_ctx_init(struct iaa_compression_ctx *ctx)
 	ctx->use_irq = use_irq;
 }
 
+static int iaa_comp_poll(struct acomp_req *req)
+{
+	struct idxd_desc *idxd_desc;
+	struct idxd_device *idxd;
+	struct iaa_wq *iaa_wq;
+	struct pci_dev *pdev;
+	struct device *dev;
+	struct idxd_wq *wq;
+	bool compress_op;
+	int ret;
+
+	idxd_desc = req->base.data;
+	if (!idxd_desc)
+		return -EAGAIN;
+
+	compress_op = (idxd_desc->iax_hw->opcode == IAX_OPCODE_COMPRESS);
+	wq = idxd_desc->wq;
+	iaa_wq = idxd_wq_get_private(wq);
+	idxd = iaa_wq->iaa_device->idxd;
+	pdev = idxd->pdev;
+	dev = &pdev->dev;
+
+	ret = check_completion(dev, idxd_desc->iax_completion, true, true);
+	if (ret == -EAGAIN)
+		return ret;
+	if (ret)
+		goto out;
+
+	req->dlen = idxd_desc->iax_completion->output_size;
+
+	/* Update stats */
+	if (compress_op) {
+		update_total_comp_bytes_out(req->dlen);
+		update_wq_comp_bytes(wq, req->dlen);
+	} else {
+		update_total_decomp_bytes_in(req->slen);
+		update_wq_decomp_bytes(wq, req->slen);
+	}
+
+	if (iaa_verify_compress && (idxd_desc->iax_hw->opcode == IAX_OPCODE_COMPRESS)) {
+		struct crypto_tfm *tfm = req->base.tfm;
+		dma_addr_t src_addr, dst_addr;
+		u32 compression_crc;
+
+		compression_crc = idxd_desc->iax_completion->crc;
+
+		dma_sync_sg_for_device(dev, req->dst, 1, DMA_FROM_DEVICE);
+		dma_sync_sg_for_device(dev, req->src, 1, DMA_TO_DEVICE);
+
+		src_addr = sg_dma_address(req->src);
+		dst_addr = sg_dma_address(req->dst);
+
+		ret = iaa_compress_verify(tfm, req, wq, src_addr, req->slen,
+					  dst_addr, &req->dlen, compression_crc);
+	}
+out:
+	/* caller doesn't call crypto_wait_req, so no acomp_request_complete() */
+
+	dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+	dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+
+	idxd_free_desc(idxd_desc->wq, idxd_desc);
+
+	dev_dbg(dev, "%s: returning ret=%d\n", __func__, ret);
+
+	return ret;
+}
+
+static unsigned int iaa_comp_get_batch_size(void)
+{
+	return IAA_CRYPTO_MAX_BATCH_SIZE;
+}
+
+static void iaa_set_reqchain_poll(struct acomp_req *req0, bool set_flag)
+{
+	struct acomp_req *req;
+
+	set_flag ? (req0->flags |= CRYPTO_ACOMP_REQ_POLL) :
+		   (req0->flags &= ~CRYPTO_ACOMP_REQ_POLL);
+
+	list_for_each_entry(req, &req0->base.list, base.list)
+		set_flag ? (req->flags |= CRYPTO_ACOMP_REQ_POLL) :
+			   (req->flags &= ~CRYPTO_ACOMP_REQ_POLL);
+}
+
+/**
+ * This API provides IAA compress batching functionality for use by swap
+ * modules. Batching is implemented using request chaining.
+ *
+ * @req: The head asynchronous compress request in the chain.
+ *
+ * Returns the compression error status (0 or -errno) of the last
+ * request that finishes. Caller should call acomp_request_err()
+ * for each request in the chain, to get its error status.
+ */
+static int iaa_comp_acompress_batch(struct acomp_req *req)
+{
+	bool async = (async_mode && !use_irq);
+	int err = 0;
+
+	if (likely(async))
+		iaa_set_reqchain_poll(req, true);
+	else
+		iaa_set_reqchain_poll(req, false);
+
+	if (likely(async))
+		/* Process the request chain in parallel. */
+		err = acomp_do_async_req_chain(req, iaa_comp_acompress, iaa_comp_poll);
+	else
+		/* Process the request chain in series. */
+		err = acomp_do_req_chain(req, iaa_comp_acompress);
+
+	/*
+	 * For the same request chain to be usable by
+	 * iaa_comp_acompress()/iaa_comp_adecompress() in synchronous mode,
+	 * clear the CRYPTO_ACOMP_REQ_POLL bit on all acomp_reqs.
+	 */
+	iaa_set_reqchain_poll(req, false);
+
+	return err;
+}
+
+/**
+ * This API provides IAA decompress batching functionality for use by swap
+ * modules. Batching is implemented using request chaining.
+ *
+ * @req: The head asynchronous decompress request in the chain.
+ *
+ * Returns the decompression error status (0 or -errno) of the last
+ * request that finishes. Caller should call acomp_request_err()
+ * for each request in the chain, to get its error status.
+ */
+static int iaa_comp_adecompress_batch(struct acomp_req *req)
+{
+	bool async = (async_mode && !use_irq);
+	int err = 0;
+
+	if (likely(async))
+		iaa_set_reqchain_poll(req, true);
+	else
+		iaa_set_reqchain_poll(req, false);
+
+	if (likely(async))
+		/* Process the request chain in parallel. */
+		err = acomp_do_async_req_chain(req, iaa_comp_adecompress, iaa_comp_poll);
+	else
+		/* Process the request chain in series. */
+		err = acomp_do_req_chain(req, iaa_comp_adecompress);
+
+	/*
+	 * For the same request chain to be usable by
+	 * iaa_comp_acompress()/iaa_comp_adecompress() in synchronous mode,
+	 * clear the CRYPTO_ACOMP_REQ_POLL bit on all acomp_reqs.
+	 */
+	iaa_set_reqchain_poll(req, false);
+
+	return err;
+}
+
+static int iaa_compress_main(struct acomp_req *req)
+{
+	if (acomp_is_reqchain(req))
+		return iaa_comp_acompress_batch(req);
+
+	return iaa_comp_acompress(req);
+}
+
+static int iaa_decompress_main(struct acomp_req *req)
+{
+	if (acomp_is_reqchain(req))
+		return iaa_comp_adecompress_batch(req);
+
+	return iaa_comp_adecompress(req);
+}
+
 static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
 {
 	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
@@ -1829,13 +2008,14 @@ static void dst_free(struct scatterlist *sgl)
 
 static struct acomp_alg iaa_acomp_fixed_deflate = {
 	.init			= iaa_comp_init_fixed,
-	.compress		= iaa_comp_acompress,
-	.decompress		= iaa_comp_adecompress,
+	.compress		= iaa_compress_main,
+	.decompress		= iaa_decompress_main,
 	.dst_free		= dst_free,
+	.get_batch_size		= iaa_comp_get_batch_size,
 	.base			= {
 		.cra_name		= "deflate",
 		.cra_driver_name	= "deflate-iaa",
-		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_flags		= CRYPTO_ALG_ASYNC | CRYPTO_ALG_REQ_CHAIN,
 		.cra_ctxsize		= sizeof(struct iaa_compression_ctx),
 		.cra_module		= THIS_MODULE,
 		.cra_priority		= IAA_ALG_PRIORITY,

From patchwork Mon Mar 3 08:47:15 2025
From patchwork Mon Mar 3 08:47:15 2025
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [PATCH v8 05/14] crypto: iaa - Enable async mode and make it the default.
Date: Mon, 3 Mar 2025 00:47:15 -0800
Message-Id: <20250303084724.6490-6-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch enables the 'async' sync_mode in the driver and makes it
the default. With this, the iaa_crypto driver loads by default in the
most efficient/recommended mode for parallel
compressions/decompressions: asynchronous submission of descriptors,
followed by polling for job completions, with or without request
chaining. Earlier, 'sync' was the default mode.
This way, anyone who wants to use IAA for zswap/zram can do so after
building the kernel, and without having to go through these steps to
use async mode:

1) disable all the IAA device/wq bindings that happen at boot time
2) rmmod iaa_crypto
3) modprobe iaa_crypto
4) echo async > /sys/bus/dsa/drivers/crypto/sync_mode
5) re-run initialization of the IAA devices and wqs

Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
 Documentation/driver-api/crypto/iaa/iaa-crypto.rst | 11 ++---------
 drivers/crypto/intel/iaa/iaa_crypto_main.c         |  4 ++--
 2 files changed, 4 insertions(+), 11 deletions(-)

diff --git a/Documentation/driver-api/crypto/iaa/iaa-crypto.rst b/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
index 8e50b900d51c..782da5230fcd 100644
--- a/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
+++ b/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
@@ -272,7 +272,7 @@ The available attributes are:
       echo async_irq > /sys/bus/dsa/drivers/crypto/sync_mode
 
     Async mode without interrupts (caller must poll) can be enabled by
-    writing 'async' to it (please see Caveat)::
+    writing 'async' to it::
 
       echo async > /sys/bus/dsa/drivers/crypto/sync_mode
 
@@ -281,14 +281,7 @@ The available attributes are:
 
       echo sync > /sys/bus/dsa/drivers/crypto/sync_mode
 
-    The default mode is 'sync'.
-
-    Caveat: since the only mechanism that iaa_crypto currently implements
-    for async polling without interrupts is via the 'sync' mode as
-    described earlier, writing 'async' to
-    '/sys/bus/dsa/drivers/crypto/sync_mode' will internally enable the
-    'sync' mode. This is to ensure correct iaa_crypto behavior until true
-    async polling without interrupts is enabled in iaa_crypto.
+    The default mode is 'async'.
 
 .. _iaa_default_config:

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index a9800b8f3575..4dac4852c113 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -153,7 +153,7 @@ static DRIVER_ATTR_RW(verify_compress);
  */
 
 /* Use async mode */
-static bool async_mode;
+static bool async_mode = true;
 
 /* Use interrupts */
 static bool use_irq;
@@ -173,7 +173,7 @@ static int set_iaa_sync_mode(const char *name)
 		async_mode = false;
 		use_irq = false;
 	} else if (sysfs_streq(name, "async")) {
-		async_mode = false;
+		async_mode = true;
 		use_irq = false;
 	} else if (sysfs_streq(name, "async_irq")) {
 		async_mode = true;
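Since 'async' mode means descriptor submission without interrupts, the
caller owns completion polling. Below is a hedged caller-side sketch of
that pattern: CRYPTO_ACOMP_REQ_POLL is the request flag used by this
series, while crypto_acomp_poll() is an assumed name for the
caller-visible poll entry point, not an interface confirmed by these
diffs.

	static int example_async_compress(struct acomp_req *req)
	{
		int err;

		/* Ask the driver to submit and let the caller poll. */
		req->flags |= CRYPTO_ACOMP_REQ_POLL;

		err = crypto_acomp_compress(req);
		if (err != -EINPROGRESS)
			return err;

		/* Descriptor submitted: busy-poll until the device completes. */
		do {
			err = crypto_acomp_poll(req);
		} while (err == -EAGAIN);

		return err;
	}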
From patchwork Mon Mar 3 08:47:16 2025
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [PATCH v8 06/14] crypto: iaa - Disable iaa_verify_compress by default.
Date: Mon, 3 Mar 2025 00:47:16 -0800
Message-Id: <20250303084724.6490-7-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch disables "iaa_verify_compress" by default, so that IAA
hardware acceleration in the iaa_crypto driver loads in a
configuration directly comparable to software compressors, which do
not run compress verification by default either. This facilitates
performance comparisons. Earlier, iaa_crypto compress verification
used to be enabled by default.
With this patch, if users want to enable compress verification, they
can do so with these steps:

1) disable all the IAA device/wq bindings that happen at boot time
2) rmmod iaa_crypto
3) modprobe iaa_crypto
4) echo 1 > /sys/bus/dsa/drivers/crypto/verify_compress
5) re-run initialization of the IAA devices and wqs

Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 4dac4852c113..5038fd7ced02 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -94,7 +94,7 @@ static bool iaa_crypto_enabled;
 static bool iaa_crypto_registered;
 
 /* Verify results of IAA compress or not */
-static bool iaa_verify_compress = true;
+static bool iaa_verify_compress = false;
 
 static ssize_t verify_compress_show(struct device_driver *driver, char *buf)
 {
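For clarity on what this attribute controls: when verify_compress is
enabled, the driver decompresses each freshly compressed buffer with
destination writes suppressed, and compares the CRC from the decompress
completion record against the CRC recorded during compression (see
iaa_compress_verify() quoted in a later patch of this series). A
minimal sketch of the final check, using hypothetical names:

	static int example_check_roundtrip_crc(u32 compression_crc,
					       u32 decompression_crc)
	{
		/* A mismatch means the compressed output failed verification. */
		if (compression_crc != decompression_crc)
			return -EINVAL;

		return 0;
	}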
From patchwork Mon Mar 3 08:47:17 2025
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [PATCH v8 07/14] crypto: iaa - Re-organize the iaa_crypto driver code.
Date: Mon, 3 Mar 2025 00:47:17 -0800
Message-Id: <20250303084724.6490-8-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch merely reorganizes the code in iaa_crypto_main.c so that
the functions are consolidated into logically related sub-sections of
code. This is expected to make the code more maintainable, and to make
it easier to replace functional layers and/or add new features.

Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 540 +++++++++++----------
 1 file changed, 275 insertions(+), 265 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 5038fd7ced02..abaee160e5ec 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -24,6 +24,9 @@
 
 #define IAA_ALG_PRIORITY 300
 
+/**************************************
+ * Driver internal global variables.
+ **************************************/ /* number of iaa instances probed */ static unsigned int nr_iaa; static unsigned int nr_cpus; @@ -36,55 +39,46 @@ static unsigned int cpus_per_iaa; static struct crypto_comp *deflate_generic_tfm; /* Per-cpu lookup table for balanced wqs */ -static struct wq_table_entry __percpu *wq_table; +static struct wq_table_entry __percpu *wq_table = NULL; -static struct idxd_wq *wq_table_next_wq(int cpu) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - - if (++entry->cur_wq >= entry->n_wqs) - entry->cur_wq = 0; - - if (!entry->wqs[entry->cur_wq]) - return NULL; - - pr_debug("%s: returning wq at idx %d (iaa wq %d.%d) from cpu %d\n", __func__, - entry->cur_wq, entry->wqs[entry->cur_wq]->idxd->id, - entry->wqs[entry->cur_wq]->id, cpu); - - return entry->wqs[entry->cur_wq]; -} - -static void wq_table_add(int cpu, struct idxd_wq *wq) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - - if (WARN_ON(entry->n_wqs == entry->max_wqs)) - return; - - entry->wqs[entry->n_wqs++] = wq; - - pr_debug("%s: added iaa wq %d.%d to idx %d of cpu %d\n", __func__, - entry->wqs[entry->n_wqs - 1]->idxd->id, - entry->wqs[entry->n_wqs - 1]->id, entry->n_wqs - 1, cpu); -} - -static void wq_table_free_entry(int cpu) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); +/* Verify results of IAA compress or not */ +static bool iaa_verify_compress = false; - kfree(entry->wqs); - memset(entry, 0, sizeof(*entry)); -} +/* + * The iaa crypto driver supports three 'sync' methods determining how + * compressions and decompressions are performed: + * + * - sync: the compression or decompression completes before + * returning. This is the mode used by the async crypto + * interface when the sync mode is set to 'sync' and by + * the sync crypto interface regardless of setting. + * + * - async: the compression or decompression is submitted and returns + * immediately. Completion interrupts are not used so + * the caller is responsible for polling the descriptor + * for completion. This mode is applicable to only the + * async crypto interface and is ignored for anything + * else. + * + * - async_irq: the compression or decompression is submitted and + * returns immediately. Completion interrupts are + * enabled so the caller can wait for the completion and + * yield to other threads. When the compression or + * decompression completes, the completion is signaled + * and the caller awakened. This mode is applicable to + * only the async crypto interface and is ignored for + * anything else. + * + * These modes can be set using the iaa_crypto sync_mode driver + * attribute. + */ -static void wq_table_clear_entry(int cpu) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); +/* Use async mode */ +static bool async_mode = true; +/* Use interrupts */ +static bool use_irq; - entry->n_wqs = 0; - entry->cur_wq = 0; - memset(entry->wqs, 0, entry->max_wqs * sizeof(struct idxd_wq *)); -} +static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX]; LIST_HEAD(iaa_devices); DEFINE_MUTEX(iaa_devices_lock); @@ -93,9 +87,9 @@ DEFINE_MUTEX(iaa_devices_lock); static bool iaa_crypto_enabled; static bool iaa_crypto_registered; -/* Verify results of IAA compress or not */ -static bool iaa_verify_compress = false; - +/************************************************** + * Driver attributes along with get/set functions. 
+ **************************************************/ static ssize_t verify_compress_show(struct device_driver *driver, char *buf) { return sprintf(buf, "%d\n", iaa_verify_compress); @@ -123,40 +117,6 @@ static ssize_t verify_compress_store(struct device_driver *driver, } static DRIVER_ATTR_RW(verify_compress); -/* - * The iaa crypto driver supports three 'sync' methods determining how - * compressions and decompressions are performed: - * - * - sync: the compression or decompression completes before - * returning. This is the mode used by the async crypto - * interface when the sync mode is set to 'sync' and by - * the sync crypto interface regardless of setting. - * - * - async: the compression or decompression is submitted and returns - * immediately. Completion interrupts are not used so - * the caller is responsible for polling the descriptor - * for completion. This mode is applicable to only the - * async crypto interface and is ignored for anything - * else. - * - * - async_irq: the compression or decompression is submitted and - * returns immediately. Completion interrupts are - * enabled so the caller can wait for the completion and - * yield to other threads. When the compression or - * decompression completes, the completion is signaled - * and the caller awakened. This mode is applicable to - * only the async crypto interface and is ignored for - * anything else. - * - * These modes can be set using the iaa_crypto sync_mode driver - * attribute. - */ - -/* Use async mode */ -static bool async_mode = true; -/* Use interrupts */ -static bool use_irq; - /** * set_iaa_sync_mode - Set IAA sync mode * @name: The name of the sync mode @@ -219,8 +179,9 @@ static ssize_t sync_mode_store(struct device_driver *driver, } static DRIVER_ATTR_RW(sync_mode); -static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX]; - +/**************************** + * Driver compression modes. + ****************************/ static int find_empty_iaa_compression_mode(void) { int i = -EINVAL; @@ -411,11 +372,6 @@ static void free_device_compression_mode(struct iaa_device *iaa_device, IDXD_OP_FLAG_WR_SRC2_AECS_COMP | \ IDXD_OP_FLAG_AECS_RW_TGLS) -static int check_completion(struct device *dev, - struct iax_completion_record *comp, - bool compress, - bool only_once); - static int init_device_compression_mode(struct iaa_device *iaa_device, struct iaa_compression_mode *mode, int idx, struct idxd_wq *wq) @@ -502,6 +458,10 @@ static void remove_device_compression_modes(struct iaa_device *iaa_device) } } +/*********************************************************** + * Functions for use in crypto probe and remove interfaces: + * allocate/init/query/deallocate devices/wqs. 
+ ***********************************************************/ static struct iaa_device *iaa_device_alloc(void) { struct iaa_device *iaa_device; @@ -614,16 +574,6 @@ static void del_iaa_wq(struct iaa_device *iaa_device, struct idxd_wq *wq) } } -static void clear_wq_table(void) -{ - int cpu; - - for (cpu = 0; cpu < nr_cpus; cpu++) - wq_table_clear_entry(cpu); - - pr_debug("cleared wq table\n"); -} - static void free_iaa_device(struct iaa_device *iaa_device) { if (!iaa_device) @@ -704,43 +654,6 @@ static int iaa_wq_put(struct idxd_wq *wq) return ret; } -static void free_wq_table(void) -{ - int cpu; - - for (cpu = 0; cpu < nr_cpus; cpu++) - wq_table_free_entry(cpu); - - free_percpu(wq_table); - - pr_debug("freed wq table\n"); -} - -static int alloc_wq_table(int max_wqs) -{ - struct wq_table_entry *entry; - int cpu; - - wq_table = alloc_percpu(struct wq_table_entry); - if (!wq_table) - return -ENOMEM; - - for (cpu = 0; cpu < nr_cpus; cpu++) { - entry = per_cpu_ptr(wq_table, cpu); - entry->wqs = kcalloc(max_wqs, sizeof(struct wq *), GFP_KERNEL); - if (!entry->wqs) { - free_wq_table(); - return -ENOMEM; - } - - entry->max_wqs = max_wqs; - } - - pr_debug("initialized wq table\n"); - - return 0; -} - static int save_iaa_wq(struct idxd_wq *wq) { struct iaa_device *iaa_device, *found = NULL; @@ -829,6 +742,87 @@ static void remove_iaa_wq(struct idxd_wq *wq) cpus_per_iaa = 1; } +/*************************************************************** + * Mapping IAA devices and wqs to cores with per-cpu wq_tables. + ***************************************************************/ +static void wq_table_free_entry(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); + + kfree(entry->wqs); + memset(entry, 0, sizeof(*entry)); +} + +static void wq_table_clear_entry(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); + + entry->n_wqs = 0; + entry->cur_wq = 0; + memset(entry->wqs, 0, entry->max_wqs * sizeof(struct idxd_wq *)); +} + +static void clear_wq_table(void) +{ + int cpu; + + for (cpu = 0; cpu < nr_cpus; cpu++) + wq_table_clear_entry(cpu); + + pr_debug("cleared wq table\n"); +} + +static void free_wq_table(void) +{ + int cpu; + + for (cpu = 0; cpu < nr_cpus; cpu++) + wq_table_free_entry(cpu); + + free_percpu(wq_table); + + pr_debug("freed wq table\n"); +} + +static int alloc_wq_table(int max_wqs) +{ + struct wq_table_entry *entry; + int cpu; + + wq_table = alloc_percpu(struct wq_table_entry); + if (!wq_table) + return -ENOMEM; + + for (cpu = 0; cpu < nr_cpus; cpu++) { + entry = per_cpu_ptr(wq_table, cpu); + entry->wqs = kcalloc(max_wqs, sizeof(struct wq *), GFP_KERNEL); + if (!entry->wqs) { + free_wq_table(); + return -ENOMEM; + } + + entry->max_wqs = max_wqs; + } + + pr_debug("initialized wq table\n"); + + return 0; +} + +static void wq_table_add(int cpu, struct idxd_wq *wq) +{ + struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); + + if (WARN_ON(entry->n_wqs == entry->max_wqs)) + return; + + entry->wqs[entry->n_wqs++] = wq; + + pr_debug("%s: added iaa wq %d.%d to idx %d of cpu %d\n", __func__, + entry->wqs[entry->n_wqs - 1]->idxd->id, + entry->wqs[entry->n_wqs - 1]->id, entry->n_wqs - 1, cpu); +} + static int wq_table_add_wqs(int iaa, int cpu) { struct iaa_device *iaa_device, *found_device = NULL; @@ -939,6 +933,29 @@ static void rebalance_wq_table(void) } } +/*************************************************************** + * Assign work-queues for driver ops using per-cpu wq_tables. 
+ ***************************************************************/ +static struct idxd_wq *wq_table_next_wq(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); + + if (++entry->cur_wq >= entry->n_wqs) + entry->cur_wq = 0; + + if (!entry->wqs[entry->cur_wq]) + return NULL; + + pr_debug("%s: returning wq at idx %d (iaa wq %d.%d) from cpu %d\n", __func__, + entry->cur_wq, entry->wqs[entry->cur_wq]->idxd->id, + entry->wqs[entry->cur_wq]->id, cpu); + + return entry->wqs[entry->cur_wq]; +} + +/************************************************* + * Core iaa_crypto compress/decompress functions. + *************************************************/ static inline int check_completion(struct device *dev, struct iax_completion_record *comp, bool compress, @@ -1020,13 +1037,130 @@ static int deflate_generic_decompress(struct acomp_req *req) static int iaa_remap_for_verify(struct device *dev, struct iaa_wq *iaa_wq, struct acomp_req *req, - dma_addr_t *src_addr, dma_addr_t *dst_addr); + dma_addr_t *src_addr, dma_addr_t *dst_addr) +{ + int ret = 0; + int nr_sgs; + + dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE); + dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE); + + nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_FROM_DEVICE); + if (nr_sgs <= 0 || nr_sgs > 1) { + dev_dbg(dev, "verify: couldn't map src sg for iaa device %d," + " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id, + iaa_wq->wq->id, ret); + ret = -EIO; + goto out; + } + *src_addr = sg_dma_address(req->src); + dev_dbg(dev, "verify: dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p," + " req->slen %d, sg_dma_len(sg) %d\n", *src_addr, nr_sgs, + req->src, req->slen, sg_dma_len(req->src)); + + nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_TO_DEVICE); + if (nr_sgs <= 0 || nr_sgs > 1) { + dev_dbg(dev, "verify: couldn't map dst sg for iaa device %d," + " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id, + iaa_wq->wq->id, ret); + ret = -EIO; + dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_FROM_DEVICE); + goto out; + } + *dst_addr = sg_dma_address(req->dst); + dev_dbg(dev, "verify: dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p," + " req->dlen %d, sg_dma_len(sg) %d\n", *dst_addr, nr_sgs, + req->dst, req->dlen, sg_dma_len(req->dst)); +out: + return ret; +} static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req, struct idxd_wq *wq, dma_addr_t src_addr, unsigned int slen, dma_addr_t dst_addr, unsigned int *dlen, - u32 compression_crc); + u32 compression_crc) +{ + struct iaa_device_compression_mode *active_compression_mode; + struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm); + struct iaa_device *iaa_device; + struct idxd_desc *idxd_desc; + struct iax_hw_desc *desc; + struct idxd_device *idxd; + struct iaa_wq *iaa_wq; + struct pci_dev *pdev; + struct device *dev; + int ret = 0; + + iaa_wq = idxd_wq_get_private(wq); + iaa_device = iaa_wq->iaa_device; + idxd = iaa_device->idxd; + pdev = idxd->pdev; + dev = &pdev->dev; + + active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode); + + idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK); + if (IS_ERR(idxd_desc)) { + dev_dbg(dev, "idxd descriptor allocation failed\n"); + dev_dbg(dev, "iaa compress failed: ret=%ld\n", + PTR_ERR(idxd_desc)); + return PTR_ERR(idxd_desc); + } + desc = idxd_desc->iax_hw; + + /* Verify (optional) - decompress and check crc, suppress dest write */ + + desc->flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CC; + desc->opcode = IAX_OPCODE_DECOMPRESS; + 
desc->decompr_flags = IAA_DECOMP_FLAGS | IAA_DECOMP_SUPPRESS_OUTPUT; + desc->priv = 0; + + desc->src1_addr = (u64)dst_addr; + desc->src1_size = *dlen; + desc->dst_addr = (u64)src_addr; + desc->max_dst_size = slen; + desc->completion_addr = idxd_desc->compl_dma; + + dev_dbg(dev, "(verify) compression mode %s," + " desc->src1_addr %llx, desc->src1_size %d," + " desc->dst_addr %llx, desc->max_dst_size %d," + " desc->src2_addr %llx, desc->src2_size %d\n", + active_compression_mode->name, + desc->src1_addr, desc->src1_size, desc->dst_addr, + desc->max_dst_size, desc->src2_addr, desc->src2_size); + + ret = idxd_submit_desc(wq, idxd_desc); + if (ret) { + dev_dbg(dev, "submit_desc (verify) failed ret=%d\n", ret); + goto err; + } + + ret = check_completion(dev, idxd_desc->iax_completion, false, false); + if (ret) { + dev_dbg(dev, "(verify) check_completion failed ret=%d\n", ret); + goto err; + } + + if (compression_crc != idxd_desc->iax_completion->crc) { + ret = -EINVAL; + dev_dbg(dev, "(verify) iaa comp/decomp crc mismatch:" + " comp=0x%x, decomp=0x%x\n", compression_crc, + idxd_desc->iax_completion->crc); + print_hex_dump(KERN_INFO, "cmp-rec: ", DUMP_PREFIX_OFFSET, + 8, 1, idxd_desc->iax_completion, 64, 0); + goto err; + } + + idxd_free_desc(wq, idxd_desc); +out: + return ret; +err: + idxd_free_desc(wq, idxd_desc); + dev_dbg(dev, "iaa compress failed: ret=%d\n", ret); + + goto out; +} static void iaa_desc_complete(struct idxd_desc *idxd_desc, enum idxd_complete_type comp_type, @@ -1245,133 +1379,6 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, goto out; } -static int iaa_remap_for_verify(struct device *dev, struct iaa_wq *iaa_wq, - struct acomp_req *req, - dma_addr_t *src_addr, dma_addr_t *dst_addr) -{ - int ret = 0; - int nr_sgs; - - dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE); - dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE); - - nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_FROM_DEVICE); - if (nr_sgs <= 0 || nr_sgs > 1) { - dev_dbg(dev, "verify: couldn't map src sg for iaa device %d," - " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id, - iaa_wq->wq->id, ret); - ret = -EIO; - goto out; - } - *src_addr = sg_dma_address(req->src); - dev_dbg(dev, "verify: dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p," - " req->slen %d, sg_dma_len(sg) %d\n", *src_addr, nr_sgs, - req->src, req->slen, sg_dma_len(req->src)); - - nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_TO_DEVICE); - if (nr_sgs <= 0 || nr_sgs > 1) { - dev_dbg(dev, "verify: couldn't map dst sg for iaa device %d," - " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id, - iaa_wq->wq->id, ret); - ret = -EIO; - dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_FROM_DEVICE); - goto out; - } - *dst_addr = sg_dma_address(req->dst); - dev_dbg(dev, "verify: dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p," - " req->dlen %d, sg_dma_len(sg) %d\n", *dst_addr, nr_sgs, - req->dst, req->dlen, sg_dma_len(req->dst)); -out: - return ret; -} - -static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req, - struct idxd_wq *wq, - dma_addr_t src_addr, unsigned int slen, - dma_addr_t dst_addr, unsigned int *dlen, - u32 compression_crc) -{ - struct iaa_device_compression_mode *active_compression_mode; - struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm); - struct iaa_device *iaa_device; - struct idxd_desc *idxd_desc; - struct iax_hw_desc *desc; - struct idxd_device *idxd; - struct iaa_wq *iaa_wq; - struct pci_dev *pdev; - struct device *dev; - int ret = 0; - 
- iaa_wq = idxd_wq_get_private(wq); - iaa_device = iaa_wq->iaa_device; - idxd = iaa_device->idxd; - pdev = idxd->pdev; - dev = &pdev->dev; - - active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode); - - idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK); - if (IS_ERR(idxd_desc)) { - dev_dbg(dev, "idxd descriptor allocation failed\n"); - dev_dbg(dev, "iaa compress failed: ret=%ld\n", - PTR_ERR(idxd_desc)); - return PTR_ERR(idxd_desc); - } - desc = idxd_desc->iax_hw; - - /* Verify (optional) - decompress and check crc, suppress dest write */ - - desc->flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CC; - desc->opcode = IAX_OPCODE_DECOMPRESS; - desc->decompr_flags = IAA_DECOMP_FLAGS | IAA_DECOMP_SUPPRESS_OUTPUT; - desc->priv = 0; - - desc->src1_addr = (u64)dst_addr; - desc->src1_size = *dlen; - desc->dst_addr = (u64)src_addr; - desc->max_dst_size = slen; - desc->completion_addr = idxd_desc->compl_dma; - - dev_dbg(dev, "(verify) compression mode %s," - " desc->src1_addr %llx, desc->src1_size %d," - " desc->dst_addr %llx, desc->max_dst_size %d," - " desc->src2_addr %llx, desc->src2_size %d\n", - active_compression_mode->name, - desc->src1_addr, desc->src1_size, desc->dst_addr, - desc->max_dst_size, desc->src2_addr, desc->src2_size); - - ret = idxd_submit_desc(wq, idxd_desc); - if (ret) { - dev_dbg(dev, "submit_desc (verify) failed ret=%d\n", ret); - goto err; - } - - ret = check_completion(dev, idxd_desc->iax_completion, false, false); - if (ret) { - dev_dbg(dev, "(verify) check_completion failed ret=%d\n", ret); - goto err; - } - - if (compression_crc != idxd_desc->iax_completion->crc) { - ret = -EINVAL; - dev_dbg(dev, "(verify) iaa comp/decomp crc mismatch:" - " comp=0x%x, decomp=0x%x\n", compression_crc, - idxd_desc->iax_completion->crc); - print_hex_dump(KERN_INFO, "cmp-rec: ", DUMP_PREFIX_OFFSET, - 8, 1, idxd_desc->iax_completion, 64, 0); - goto err; - } - - idxd_free_desc(wq, idxd_desc); -out: - return ret; -err: - idxd_free_desc(wq, idxd_desc); - dev_dbg(dev, "iaa compress failed: ret=%d\n", ret); - - goto out; -} - static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req, struct idxd_wq *wq, dma_addr_t src_addr, unsigned int slen, @@ -1986,6 +1993,9 @@ static int iaa_decompress_main(struct acomp_req *req) return iaa_comp_adecompress(req); } +/********************************************* + * Interfaces to crypto_alg and crypto_acomp. 
+ *********************************************/
 static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
 {
 	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);

From patchwork Mon Mar 3 08:47:18 2025
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Subject: [PATCH v8 08/14] crypto: iaa - Map IAA devices/wqs to cores based on packages instead of NUMA.
Date: Mon, 3 Mar 2025 00:47:18 -0800
Message-Id: <20250303084724.6490-9-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch modifies the algorithm for mapping available IAA devices
and wqs to cores, as they are being discovered, based on packages
instead of NUMA
nodes. This leads to a more realistic mapping of IAA devices as compression/decompression resources for a package, rather than for a NUMA node. This also resolves problems that were observed during internal validation on Intel platforms with many more NUMA nodes than packages: for such cases, the earlier NUMA based allocation caused some IAAs to be over-subscribed and some to not be utilized at all. As a result of this change from NUMA to packages, some of the core functions used by the iaa_crypto driver's "probe" and "remove" API have been re-written. The new infrastructure maintains a static/global mapping of "local wqs" per IAA device, in the "struct iaa_device" itself. The earlier implementation would allocate memory per-cpu for this data, which never changes once the IAA devices/wqs have been initialized. Two main outcomes from this new iaa_crypto driver infrastructure are: 1) Resolves "task blocked for more than x seconds" errors observed during internal validation on Intel systems with the earlier NUMA node based mappings, which was root-caused to the non-optimal IAA-to-core mappings described earlier. 2) Results in a NUM_THREADS factor reduction in memory footprint cost of initializing IAA devices/wqs, due to eliminating the per-cpu copies of each IAA device's wqs. On a 384 cores Intel Granite Rapids server with 8 IAA devices, this saves 140MiB. Signed-off-by: Kanchana P Sridhar --- drivers/crypto/intel/iaa/iaa_crypto.h | 17 +- drivers/crypto/intel/iaa/iaa_crypto_main.c | 276 ++++++++++++--------- 2 files changed, 171 insertions(+), 122 deletions(-) diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h index 45d94a646636..72ffdf55f7b3 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto.h +++ b/drivers/crypto/intel/iaa/iaa_crypto.h @@ -55,6 +55,7 @@ struct iaa_wq { struct idxd_wq *wq; int ref; bool remove; + bool mapped; struct iaa_device *iaa_device; @@ -72,6 +73,13 @@ struct iaa_device_compression_mode { dma_addr_t aecs_comp_table_dma_addr; }; +struct wq_table_entry { + struct idxd_wq **wqs; + int max_wqs; + int n_wqs; + int cur_wq; +}; + /* Representation of IAA device with wqs, populated by probe */ struct iaa_device { struct list_head list; @@ -82,19 +90,14 @@ struct iaa_device { int n_wq; struct list_head wqs; + struct wq_table_entry *iaa_local_wqs; + atomic64_t comp_calls; atomic64_t comp_bytes; atomic64_t decomp_calls; atomic64_t decomp_bytes; }; -struct wq_table_entry { - struct idxd_wq **wqs; - int max_wqs; - int n_wqs; - int cur_wq; -}; - #define IAA_AECS_ALIGN 32 /* diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c index abaee160e5ec..40751d7c83c0 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto_main.c +++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c @@ -30,8 +30,9 @@ /* number of iaa instances probed */ static unsigned int nr_iaa; static unsigned int nr_cpus; -static unsigned int nr_nodes; -static unsigned int nr_cpus_per_node; +static unsigned int nr_packages; +static unsigned int nr_cpus_per_package; +static unsigned int nr_iaa_per_package; /* Number of physical cpus sharing each iaa instance */ static unsigned int cpus_per_iaa; @@ -462,17 +463,46 @@ static void remove_device_compression_modes(struct iaa_device *iaa_device) * Functions for use in crypto probe and remove interfaces: * allocate/init/query/deallocate devices/wqs. 
***********************************************************/ -static struct iaa_device *iaa_device_alloc(void) +static struct iaa_device *iaa_device_alloc(struct idxd_device *idxd) { + struct wq_table_entry *local; struct iaa_device *iaa_device; iaa_device = kzalloc(sizeof(*iaa_device), GFP_KERNEL); if (!iaa_device) - return NULL; + goto err; + + iaa_device->idxd = idxd; + + /* IAA device's local wqs. */ + iaa_device->iaa_local_wqs = kzalloc(sizeof(struct wq_table_entry), GFP_KERNEL); + if (!iaa_device->iaa_local_wqs) + goto err; + + local = iaa_device->iaa_local_wqs; + + local->wqs = kzalloc(iaa_device->idxd->max_wqs * sizeof(struct wq *), GFP_KERNEL); + if (!local->wqs) + goto err; + + local->max_wqs = iaa_device->idxd->max_wqs; + local->n_wqs = 0; INIT_LIST_HEAD(&iaa_device->wqs); return iaa_device; + +err: + if (iaa_device) { + if (iaa_device->iaa_local_wqs) { + if (iaa_device->iaa_local_wqs->wqs) + kfree(iaa_device->iaa_local_wqs->wqs); + kfree(iaa_device->iaa_local_wqs); + } + kfree(iaa_device); + } + + return NULL; } static bool iaa_has_wq(struct iaa_device *iaa_device, struct idxd_wq *wq) @@ -491,12 +521,10 @@ static struct iaa_device *add_iaa_device(struct idxd_device *idxd) { struct iaa_device *iaa_device; - iaa_device = iaa_device_alloc(); + iaa_device = iaa_device_alloc(idxd); if (!iaa_device) return NULL; - iaa_device->idxd = idxd; - list_add_tail(&iaa_device->list, &iaa_devices); nr_iaa++; @@ -537,6 +565,7 @@ static int add_iaa_wq(struct iaa_device *iaa_device, struct idxd_wq *wq, iaa_wq->wq = wq; iaa_wq->iaa_device = iaa_device; idxd_wq_set_private(wq, iaa_wq); + iaa_wq->mapped = false; list_add_tail(&iaa_wq->list, &iaa_device->wqs); @@ -580,6 +609,13 @@ static void free_iaa_device(struct iaa_device *iaa_device) return; remove_device_compression_modes(iaa_device); + + if (iaa_device->iaa_local_wqs) { + if (iaa_device->iaa_local_wqs->wqs) + kfree(iaa_device->iaa_local_wqs->wqs); + kfree(iaa_device->iaa_local_wqs); + } + kfree(iaa_device); } @@ -716,9 +752,14 @@ static int save_iaa_wq(struct idxd_wq *wq) if (WARN_ON(nr_iaa == 0)) return -EINVAL; - cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa; + cpus_per_iaa = (nr_packages * nr_cpus_per_package) / nr_iaa; if (!cpus_per_iaa) cpus_per_iaa = 1; + + nr_iaa_per_package = nr_iaa / nr_packages; + if (!nr_iaa_per_package) + nr_iaa_per_package = 1; + out: return 0; } @@ -735,53 +776,45 @@ static void remove_iaa_wq(struct idxd_wq *wq) } if (nr_iaa) { - cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa; + cpus_per_iaa = (nr_packages * nr_cpus_per_package) / nr_iaa; if (!cpus_per_iaa) cpus_per_iaa = 1; - } else + + nr_iaa_per_package = nr_iaa / nr_packages; + if (!nr_iaa_per_package) + nr_iaa_per_package = 1; + } else { cpus_per_iaa = 1; + nr_iaa_per_package = 1; + } } /*************************************************************** * Mapping IAA devices and wqs to cores with per-cpu wq_tables. ***************************************************************/ -static void wq_table_free_entry(int cpu) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - - kfree(entry->wqs); - memset(entry, 0, sizeof(*entry)); -} - -static void wq_table_clear_entry(int cpu) -{ - struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - - entry->n_wqs = 0; - entry->cur_wq = 0; - memset(entry->wqs, 0, entry->max_wqs * sizeof(struct idxd_wq *)); -} - -static void clear_wq_table(void) +/* + * Given a cpu, find the closest IAA instance. 
The idea is to try to + * choose the most appropriate IAA instance for a caller and spread + * available workqueues around to clients. + */ +static inline int cpu_to_iaa(int cpu) { - int cpu; - - for (cpu = 0; cpu < nr_cpus; cpu++) - wq_table_clear_entry(cpu); + int package_id, base_iaa, iaa = 0; - pr_debug("cleared wq table\n"); -} + if (!nr_packages || !nr_iaa_per_package) + return 0; -static void free_wq_table(void) -{ - int cpu; + package_id = topology_logical_package_id(cpu); + base_iaa = package_id * nr_iaa_per_package; + iaa = base_iaa + ((cpu % nr_cpus_per_package) / cpus_per_iaa); - for (cpu = 0; cpu < nr_cpus; cpu++) - wq_table_free_entry(cpu); + pr_debug("cpu = %d, package_id = %d, base_iaa = %d, iaa = %d", + cpu, package_id, base_iaa, iaa); - free_percpu(wq_table); + if (iaa >= 0 && iaa < nr_iaa) + return iaa; - pr_debug("freed wq table\n"); + return (nr_iaa - 1); } static int alloc_wq_table(int max_wqs) @@ -795,13 +828,11 @@ static int alloc_wq_table(int max_wqs) for (cpu = 0; cpu < nr_cpus; cpu++) { entry = per_cpu_ptr(wq_table, cpu); - entry->wqs = kcalloc(max_wqs, sizeof(struct wq *), GFP_KERNEL); - if (!entry->wqs) { - free_wq_table(); - return -ENOMEM; - } + entry->wqs = NULL; entry->max_wqs = max_wqs; + entry->n_wqs = 0; + entry->cur_wq = 0; } pr_debug("initialized wq table\n"); @@ -809,33 +840,27 @@ static int alloc_wq_table(int max_wqs) return 0; } -static void wq_table_add(int cpu, struct idxd_wq *wq) +static void wq_table_add(int cpu, struct wq_table_entry *iaa_local_wqs) { struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu); - if (WARN_ON(entry->n_wqs == entry->max_wqs)) - return; - - entry->wqs[entry->n_wqs++] = wq; + entry->wqs = iaa_local_wqs->wqs; + entry->max_wqs = iaa_local_wqs->max_wqs; + entry->n_wqs = iaa_local_wqs->n_wqs; + entry->cur_wq = 0; - pr_debug("%s: added iaa wq %d.%d to idx %d of cpu %d\n", __func__, + pr_debug("%s: cpu %d: added %d iaa local wqs up to wq %d.%d\n", __func__, + cpu, entry->n_wqs, entry->wqs[entry->n_wqs - 1]->idxd->id, - entry->wqs[entry->n_wqs - 1]->id, entry->n_wqs - 1, cpu); + entry->wqs[entry->n_wqs - 1]->id); } static int wq_table_add_wqs(int iaa, int cpu) { struct iaa_device *iaa_device, *found_device = NULL; - int ret = 0, cur_iaa = 0, n_wqs_added = 0; - struct idxd_device *idxd; - struct iaa_wq *iaa_wq; - struct pci_dev *pdev; - struct device *dev; + int ret = 0, cur_iaa = 0; list_for_each_entry(iaa_device, &iaa_devices, list) { - idxd = iaa_device->idxd; - pdev = idxd->pdev; - dev = &pdev->dev; if (cur_iaa != iaa) { cur_iaa++; @@ -843,7 +868,8 @@ static int wq_table_add_wqs(int iaa, int cpu) } found_device = iaa_device; - dev_dbg(dev, "getting wq from iaa_device %d, cur_iaa %d\n", + dev_dbg(&found_device->idxd->pdev->dev, + "getting wq from iaa_device %d, cur_iaa %d\n", found_device->idxd->id, cur_iaa); break; } @@ -858,29 +884,58 @@ static int wq_table_add_wqs(int iaa, int cpu) } cur_iaa = 0; - idxd = found_device->idxd; - pdev = idxd->pdev; - dev = &pdev->dev; - dev_dbg(dev, "getting wq from only iaa_device %d, cur_iaa %d\n", + dev_dbg(&found_device->idxd->pdev->dev, + "getting wq from only iaa_device %d, cur_iaa %d\n", found_device->idxd->id, cur_iaa); } - list_for_each_entry(iaa_wq, &found_device->wqs, list) { - wq_table_add(cpu, iaa_wq->wq); - pr_debug("rebalance: added wq for cpu=%d: iaa wq %d.%d\n", - cpu, iaa_wq->wq->idxd->id, iaa_wq->wq->id); - n_wqs_added++; + wq_table_add(cpu, found_device->iaa_local_wqs); + +out: + return ret; +} + +static int map_iaa_device_wqs(struct iaa_device *iaa_device) +{ + struct 
wq_table_entry *local; + int ret = 0, n_wqs_added = 0; + struct iaa_wq *iaa_wq; + + local = iaa_device->iaa_local_wqs; + + list_for_each_entry(iaa_wq, &iaa_device->wqs, list) { + if (iaa_wq->mapped && ++n_wqs_added) + continue; + + pr_debug("iaa_device %px: processing wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); + + if (WARN_ON(local->n_wqs == local->max_wqs)) + break; + + local->wqs[local->n_wqs++] = iaa_wq->wq; + pr_debug("iaa_device %px: added local wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); + + iaa_wq->mapped = true; + ++n_wqs_added; } - if (!n_wqs_added) { - pr_debug("couldn't find any iaa wqs!\n"); + if (!n_wqs_added && !iaa_device->n_wq) { + pr_debug("iaa_device %d: couldn't find any iaa wqs!\n", iaa_device->idxd->id); ret = -EINVAL; - goto out; } -out: + return ret; } +static void map_iaa_devices(void) +{ + struct iaa_device *iaa_device; + + list_for_each_entry(iaa_device, &iaa_devices, list) { + BUG_ON(map_iaa_device_wqs(iaa_device)); + } +} + /* * Rebalance the wq table so that given a cpu, it's easy to find the * closest IAA instance. The idea is to try to choose the most @@ -889,48 +944,42 @@ static int wq_table_add_wqs(int iaa, int cpu) */ static void rebalance_wq_table(void) { - const struct cpumask *node_cpus; - int node, cpu, iaa = -1; + int cpu, iaa; if (nr_iaa == 0) return; - pr_debug("rebalance: nr_nodes=%d, nr_cpus %d, nr_iaa %d, cpus_per_iaa %d\n", - nr_nodes, nr_cpus, nr_iaa, cpus_per_iaa); + map_iaa_devices(); - clear_wq_table(); + pr_debug("rebalance: nr_packages=%d, nr_cpus %d, nr_iaa %d, cpus_per_iaa %d\n", + nr_packages, nr_cpus, nr_iaa, cpus_per_iaa); - if (nr_iaa == 1) { - for (cpu = 0; cpu < nr_cpus; cpu++) { - if (WARN_ON(wq_table_add_wqs(0, cpu))) { - pr_debug("could not add any wqs for iaa 0 to cpu %d!\n", cpu); - return; - } + for (cpu = 0; cpu < nr_cpus; cpu++) { + iaa = cpu_to_iaa(cpu); + pr_debug("rebalance: cpu=%d iaa=%d\n", cpu, iaa); + + if (WARN_ON(iaa == -1)) { + pr_debug("rebalance (cpu_to_iaa(%d)) failed!\n", cpu); + return; } - return; + if (WARN_ON(wq_table_add_wqs(iaa, cpu))) { + pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu); + return; + } } - for_each_node_with_cpus(node) { - node_cpus = cpumask_of_node(node); - - for (cpu = 0; cpu < cpumask_weight(node_cpus); cpu++) { - int node_cpu = cpumask_nth(cpu, node_cpus); - - if (WARN_ON(node_cpu >= nr_cpu_ids)) { - pr_debug("node_cpu %d doesn't exist!\n", node_cpu); - return; - } - - if ((cpu % cpus_per_iaa) == 0) - iaa++; + pr_debug("Finished rebalance local wqs."); +} - if (WARN_ON(wq_table_add_wqs(iaa, node_cpu))) { - pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu); - return; - } - } +static void free_wq_tables(void) +{ + if (wq_table) { + free_percpu(wq_table); + wq_table = NULL; } + + pr_debug("freed local wq table\n"); } /*************************************************************** @@ -2134,7 +2183,7 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev) free_iaa_wq(idxd_wq_get_private(wq)); err_save: if (first_wq) - free_wq_table(); + free_wq_tables(); err_alloc: mutex_unlock(&iaa_devices_lock); idxd_drv_disable_wq(wq); @@ -2184,7 +2233,9 @@ static void iaa_crypto_remove(struct idxd_dev *idxd_dev) if (nr_iaa == 0) { iaa_crypto_enabled = false; - free_wq_table(); + free_wq_tables(); + BUG_ON(!list_empty(&iaa_devices)); + INIT_LIST_HEAD(&iaa_devices); module_put(THIS_MODULE); pr_info("iaa_crypto now DISABLED\n"); @@ -2210,16 +2261,11 @@ static struct idxd_device_driver iaa_crypto_driver = { static int __init 
iaa_crypto_init_module(void) { int ret = 0; - int node; + INIT_LIST_HEAD(&iaa_devices); nr_cpus = num_possible_cpus(); - for_each_node_with_cpus(node) - nr_nodes++; - if (!nr_nodes) { - pr_err("IAA couldn't find any nodes with cpus\n"); - return -ENODEV; - } - nr_cpus_per_node = nr_cpus / nr_nodes; + nr_cpus_per_package = topology_num_cores_per_package(); + nr_packages = topology_max_packages(); if (crypto_has_comp("deflate-generic", 0, 0)) deflate_generic_tfm = crypto_alloc_comp("deflate-generic", 0, 0);
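[Editorial note: to make the package-based cpu_to_iaa() mapping above concrete, here is a small stand-alone sketch, not driver code. It reproduces the arithmetic for a hypothetical 2-package system with 56 cores and 4 IAA devices per package; the package-id formula is a stand-in for topology_logical_package_id() under that machine's cpu enumeration (0-55,112-167 on package 0; 56-111,168-223 on package 1).]

#include <stdio.h>

int main(void)
{
	int nr_iaa_per_package = 4, nr_cpus_per_package = 56;
	int cpus_per_iaa = nr_cpus_per_package / nr_iaa_per_package;	/* 14 */

	for (int cpu = 0; cpu < 224; cpu += 14) {
		/* Stand-in for topology_logical_package_id() on this enumeration. */
		int package_id = (cpu / 56) % 2;
		int base_iaa = package_id * nr_iaa_per_package;
		int iaa = base_iaa + (cpu % nr_cpus_per_package) / cpus_per_iaa;

		printf("cpu %3d -> iaa %d\n", cpu, iaa);
	}
	return 0;
}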
From patchwork Mon Mar 3 08:47:19 2025
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com, ying.huang@linux.alibaba.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v8 09/14] crypto: iaa - Distribute compress jobs from all cores to all IAAs on a package.
Date: Mon, 3 Mar 2025 00:47:19 -0800
Message-Id: <20250303084724.6490-10-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This change enables processes running on any logical core on a package to use all the IAA devices enabled on that package for compress jobs. In other words, compressions originating from any process in a package will be distributed in a round-robin manner to the available IAA devices on the same package. This is not the default behavior, and is recommended only for highly contended scenarios with significant swapout/swapin activity.

The commit log describes how to enable this feature through driver parameters; the key point is that it requires configuring 2 work-queues per IAA device (each with 64 entries), with one WQ used solely for decompress jobs and the other used solely for compress jobs. Hence the above recommendation.

The main premise behind this change is to ensure that no compress engine on any IAA device is left idle, under-utilized or over-utilized. In other words, the compress engines on all IAA devices are treated as a global resource for that package, thus maximizing compression throughput. This allows the use of all IAA devices present in a given package for (batched) compressions originating from zswap/zram, from all cores on this package.

A new per-cpu "global_wq_table" implements this in the iaa_crypto driver. We can think of the global WQ per IAA as a WQ to which all cores on that package can submit compress jobs. To use this feature, the user must configure 2 WQs per IAA, which enables distribution of compress jobs to multiple IAA devices. Each IAA will have 2 WQs:

wq.0 (local WQ): Used for decompress jobs from cores mapped by the cpu_to_iaa() "even balancing of logical cores to IAA devices" algorithm.

wq.1 (global WQ): Used for compress jobs from *all* logical cores on that package.
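[Editorial note: a small stand-alone sketch of this local/global split, a hypothetical illustration rather than driver code; the real logic is in map_iaa_device_wqs() in the diff below. With n_wq WQs per device and g_wqs_per_iaa of them reserved as package-global compress WQs, the first n_wq - g_wqs_per_iaa WQs stay in the per-cpu local (decompress) table and the rest go to the package-global (compress) table.]

#include <stdio.h>

int main(void)
{
	int n_wq = 2, g_wqs_per_iaa = 1;

	for (int i = 0; i < n_wq; i++) {
		if (i < n_wq - g_wqs_per_iaa)
			printf("wq.%d -> local table (decompress jobs)\n", i);
		else
			printf("wq.%d -> package-global table (compress jobs)\n", i);
	}
	return 0;
}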
The iaa_crypto driver will place all global WQs from all same-package IAA devices in the per-cpu global_wq_table on that package. When the driver receives a compress job, it will look up the "next" global WQ in the cpu's global_wq_table to submit the descriptor. The starting WQ in the global_wq_table for each cpu is the global WQ associated with the IAA nearest to it, so that the starting global WQ is staggered across processes. This results in very uniform usage of all IAAs for compress jobs.

Two new driver module parameters are added for this feature:

g_wqs_per_iaa (default 0): /sys/bus/dsa/drivers/crypto/g_wqs_per_iaa
This represents the number of global WQs that can be configured per IAA device. The recommended setting is 1 to enable the use of this feature once the user configures 2 WQs per IAA using higher level scripts, as described in Documentation/driver-api/crypto/iaa/iaa-crypto.rst.

g_consec_descs_per_gwq (default 1): /sys/bus/dsa/drivers/crypto/g_consec_descs_per_gwq
This represents the number of consecutive compress jobs that will be submitted to the same global WQ (i.e. to the same IAA device) from a given core before moving to the next global WQ. The default of 1 is also the recommended setting for this feature.

The decompress jobs from any core will be sent to the "local" IAA, namely the one that the driver assigns with the cpu_to_iaa() mapping algorithm that evenly balances the assignment of logical cores to IAA devices on a package.

On a 2-package Sapphire Rapids server where each package has 56 cores and 4 IAA devices, this is how the compress/decompress jobs will be mapped when the user configures 2 WQs per IAA device (which implies wq.1 will be added to the global WQ table for each logical core on that package):

package(s):      2
package0 CPU(s): 0-55,112-167
package1 CPU(s): 56-111,168-223

Compress jobs:
--------------
package 0: iaa_crypto will send compress jobs from all cpus (0-55,112-167) to all IAA devices on the package (iax1/iax3/iax5/iax7) in a round-robin manner:

iaa:  iax1  iax3  iax5  iax7

package 1: iaa_crypto will send compress jobs from all cpus (56-111,168-223) to all IAA devices on the package (iax9/iax11/iax13/iax15) in a round-robin manner:

iaa:  iax9  iax11  iax13  iax15

Decompress jobs:
----------------
package 0:
cpu:  0-13,112-125   14-27,126-139   28-41,140-153   42-55,154-167
iaa:  iax1            iax3            iax5            iax7

package 1:
cpu:  56-69,168-181   70-83,182-195   84-97,196-209   98-111,210-223
iaa:  iax9             iax11           iax13           iax15

Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto.h | 1 + drivers/crypto/intel/iaa/iaa_crypto_main.c | 385 ++++++++++++++++++++- 2 files changed, 378 insertions(+), 8 deletions(-) diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h index 72ffdf55f7b3..5f38f530c33d 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto.h +++ b/drivers/crypto/intel/iaa/iaa_crypto.h @@ -91,6 +91,7 @@ struct iaa_device { struct list_head wqs; struct wq_table_entry *iaa_local_wqs; + struct wq_table_entry *iaa_global_wqs; atomic64_t comp_calls; atomic64_t comp_bytes; diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c index 40751d7c83c0..cb96897e7fed 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto_main.c +++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c @@ -42,6 +42,18 @@ static struct crypto_comp *deflate_generic_tfm; /* Per-cpu lookup table for balanced wqs */ static struct wq_table_entry __percpu
*wq_table = NULL; +static struct wq_table_entry **pkg_global_wq_tables = NULL; + +/* Per-cpu lookup table for global wqs shared by all cpus. */ +static struct wq_table_entry __percpu *global_wq_table = NULL; + +/* + * Per-cpu counter of consecutive descriptors allocated to + * the same wq in the global_wq_table, so that we know + * when to switch to the next wq in the global_wq_table. + */ +static int __percpu *num_consec_descs_per_wq = NULL; + /* Verify results of IAA compress or not */ static bool iaa_verify_compress = false; @@ -79,6 +91,16 @@ static bool async_mode = true; /* Use interrupts */ static bool use_irq; +/* Number of global wqs per iaa*/ +static int g_wqs_per_iaa = 0; + +/* + * Number of consecutive descriptors to allocate from a + * given global wq before switching to the next wq in + * the global_wq_table. + */ +static int g_consec_descs_per_gwq = 1; + static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX]; LIST_HEAD(iaa_devices); @@ -180,6 +202,60 @@ static ssize_t sync_mode_store(struct device_driver *driver, } static DRIVER_ATTR_RW(sync_mode); +static ssize_t g_wqs_per_iaa_show(struct device_driver *driver, char *buf) +{ + return sprintf(buf, "%d\n", g_wqs_per_iaa); +} + +static ssize_t g_wqs_per_iaa_store(struct device_driver *driver, + const char *buf, size_t count) +{ + int ret = -EBUSY; + + mutex_lock(&iaa_devices_lock); + + if (iaa_crypto_enabled) + goto out; + + ret = kstrtoint(buf, 10, &g_wqs_per_iaa); + if (ret) + goto out; + + ret = count; +out: + mutex_unlock(&iaa_devices_lock); + + return ret; +} +static DRIVER_ATTR_RW(g_wqs_per_iaa); + +static ssize_t g_consec_descs_per_gwq_show(struct device_driver *driver, char *buf) +{ + return sprintf(buf, "%d\n", g_consec_descs_per_gwq); +} + +static ssize_t g_consec_descs_per_gwq_store(struct device_driver *driver, + const char *buf, size_t count) +{ + int ret = -EBUSY; + + mutex_lock(&iaa_devices_lock); + + if (iaa_crypto_enabled) + goto out; + + ret = kstrtoint(buf, 10, &g_consec_descs_per_gwq); + if (ret) + goto out; + + ret = count; +out: + mutex_unlock(&iaa_devices_lock); + + return ret; +} +static DRIVER_ATTR_RW(g_consec_descs_per_gwq); + /**************************** * Driver compression modes. ****************************/ @@ -465,7 +541,7 @@ static void remove_device_compression_modes(struct iaa_device *iaa_device) ***********************************************************/ static struct iaa_device *iaa_device_alloc(struct idxd_device *idxd) { - struct wq_table_entry *local; + struct wq_table_entry *local, *global; struct iaa_device *iaa_device; iaa_device = kzalloc(sizeof(*iaa_device), GFP_KERNEL); @@ -488,6 +564,20 @@ static struct iaa_device *iaa_device_alloc(struct idxd_device *idxd) local->max_wqs = iaa_device->idxd->max_wqs; local->n_wqs = 0; + /* IAA device's global wqs. 
*/ + iaa_device->iaa_global_wqs = kzalloc(sizeof(struct wq_table_entry), GFP_KERNEL); + if (!iaa_device->iaa_global_wqs) + goto err; + + global = iaa_device->iaa_global_wqs; + + global->wqs = kzalloc(iaa_device->idxd->max_wqs * sizeof(struct wq *), GFP_KERNEL); + if (!global->wqs) + goto err; + + global->max_wqs = iaa_device->idxd->max_wqs; + global->n_wqs = 0; + INIT_LIST_HEAD(&iaa_device->wqs); return iaa_device; @@ -499,6 +589,8 @@ static struct iaa_device *iaa_device_alloc(struct idxd_device *idxd) kfree(iaa_device->iaa_local_wqs->wqs); kfree(iaa_device->iaa_local_wqs); } + if (iaa_device->iaa_global_wqs) + kfree(iaa_device->iaa_global_wqs); kfree(iaa_device); } @@ -616,6 +708,12 @@ static void free_iaa_device(struct iaa_device *iaa_device) kfree(iaa_device->iaa_local_wqs); } + if (iaa_device->iaa_global_wqs) { + if (iaa_device->iaa_global_wqs->wqs) + kfree(iaa_device->iaa_global_wqs->wqs); + kfree(iaa_device->iaa_global_wqs); + } + kfree(iaa_device); } @@ -817,6 +915,58 @@ static inline int cpu_to_iaa(int cpu) return (nr_iaa - 1); } +static void free_global_wq_table(void) +{ + if (global_wq_table) { + free_percpu(global_wq_table); + global_wq_table = NULL; + } + + if (num_consec_descs_per_wq) { + free_percpu(num_consec_descs_per_wq); + num_consec_descs_per_wq = NULL; + } + + pr_debug("freed global wq table\n"); +} + +static int pkg_global_wq_tables_alloc(void) +{ + int i, j; + + pkg_global_wq_tables = kzalloc(nr_packages * sizeof(*pkg_global_wq_tables), GFP_KERNEL); + if (!pkg_global_wq_tables) + return -ENOMEM; + + for (i = 0; i < nr_packages; ++i) { + pkg_global_wq_tables[i] = kzalloc(sizeof(struct wq_table_entry), GFP_KERNEL); + + if (!pkg_global_wq_tables[i]) { + for (j = 0; j < i; ++j) + kfree(pkg_global_wq_tables[j]); + kfree(pkg_global_wq_tables); + pkg_global_wq_tables = NULL; + return -ENOMEM; + } + pkg_global_wq_tables[i]->wqs = NULL; + } + + return 0; +} + +static void pkg_global_wq_tables_dealloc(void) +{ + int i; + + for (i = 0; i < nr_packages; ++i) { + if (pkg_global_wq_tables[i]->wqs) + kfree(pkg_global_wq_tables[i]->wqs); + kfree(pkg_global_wq_tables[i]); + } + kfree(pkg_global_wq_tables); + pkg_global_wq_tables = NULL; +} + static int alloc_wq_table(int max_wqs) { struct wq_table_entry *entry; @@ -835,6 +985,35 @@ static int alloc_wq_table(int max_wqs) entry->cur_wq = 0; } + global_wq_table = alloc_percpu(struct wq_table_entry); + if (!global_wq_table) + return 0; + + for (cpu = 0; cpu < nr_cpus; cpu++) { + entry = per_cpu_ptr(global_wq_table, cpu); + + entry->wqs = NULL; + entry->max_wqs = max_wqs; + entry->n_wqs = 0; + entry->cur_wq = 0; + } + + num_consec_descs_per_wq = alloc_percpu(int); + if (!num_consec_descs_per_wq) { + free_global_wq_table(); + return 0; + } + + for (cpu = 0; cpu < nr_cpus; cpu++) { + int *num_consec_descs = per_cpu_ptr(num_consec_descs_per_wq, cpu); + *num_consec_descs = 0; + } + + if (pkg_global_wq_tables_alloc()) { + free_global_wq_table(); + return 0; + } + pr_debug("initialized wq table\n"); return 0; @@ -895,13 +1074,120 @@ static int wq_table_add_wqs(int iaa, int cpu) return ret; } +static void pkg_global_wq_tables_reinit(void) +{ + int i, cur_iaa = 0, pkg = 0, nr_pkg_wqs = 0; + struct iaa_device *iaa_device; + struct wq_table_entry *global; + + if (!pkg_global_wq_tables) + return; + + /* Reallocate per-package wqs. */ + list_for_each_entry(iaa_device, &iaa_devices, list) { + global = iaa_device->iaa_global_wqs; + nr_pkg_wqs += global->n_wqs; + + if (++cur_iaa == nr_iaa_per_package) { + nr_pkg_wqs = nr_pkg_wqs ? 
max_t(int, iaa_device->idxd->max_wqs, nr_pkg_wqs) : 0; + + if (pkg_global_wq_tables[pkg]->wqs) { + kfree(pkg_global_wq_tables[pkg]->wqs); + pkg_global_wq_tables[pkg]->wqs = NULL; + } + + if (nr_pkg_wqs) + pkg_global_wq_tables[pkg]->wqs = kzalloc(nr_pkg_wqs * + sizeof(struct wq *), + GFP_KERNEL); + + pkg_global_wq_tables[pkg]->n_wqs = 0; + pkg_global_wq_tables[pkg]->cur_wq = 0; + pkg_global_wq_tables[pkg]->max_wqs = nr_pkg_wqs; + + if (++pkg == nr_packages) + break; + cur_iaa = 0; + nr_pkg_wqs = 0; + } + } + + pkg = 0; + cur_iaa = 0; + + /* Re-initialize per-package wqs. */ + list_for_each_entry(iaa_device, &iaa_devices, list) { + global = iaa_device->iaa_global_wqs; + + if (pkg_global_wq_tables[pkg]->wqs) + for (i = 0; i < global->n_wqs; ++i) + pkg_global_wq_tables[pkg]->wqs[pkg_global_wq_tables[pkg]->n_wqs++] = global->wqs[i]; + + pr_debug("pkg_global_wq_tables[%d] has %d wqs", pkg, pkg_global_wq_tables[pkg]->n_wqs); + + if (++cur_iaa == nr_iaa_per_package) { + if (++pkg == nr_packages) + break; + cur_iaa = 0; + } + } +} + +static void global_wq_table_add(int cpu, struct wq_table_entry *pkg_global_wq_table) +{ + struct wq_table_entry *entry = per_cpu_ptr(global_wq_table, cpu); + + /* This could be NULL. */ + entry->wqs = pkg_global_wq_table->wqs; + entry->max_wqs = pkg_global_wq_table->max_wqs; + entry->n_wqs = pkg_global_wq_table->n_wqs; + entry->cur_wq = 0; + + if (entry->wqs) + pr_debug("%s: cpu %d: added %d iaa global wqs up to wq %d.%d\n", __func__, + cpu, entry->n_wqs, + entry->wqs[entry->n_wqs - 1]->idxd->id, + entry->wqs[entry->n_wqs - 1]->id); +} + +static void global_wq_table_set_start_wq(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(global_wq_table, cpu); + int start_wq = g_wqs_per_iaa * (cpu_to_iaa(cpu) % nr_iaa_per_package); + + if ((start_wq >= 0) && (start_wq < entry->n_wqs)) + entry->cur_wq = start_wq; +} + +static void global_wq_table_add_wqs(void) +{ + int cpu; + + if (!pkg_global_wq_tables) + return; + + for (cpu = 0; cpu < nr_cpus; cpu += nr_cpus_per_package) { + /* cpu's on the same package get the same global_wq_table. 
*/ + int package_id = topology_logical_package_id(cpu); + int pkg_cpu; + + for (pkg_cpu = cpu; pkg_cpu < cpu + nr_cpus_per_package; ++pkg_cpu) { + if (pkg_global_wq_tables[package_id]->n_wqs > 0) { + global_wq_table_add(pkg_cpu, pkg_global_wq_tables[package_id]); + global_wq_table_set_start_wq(pkg_cpu); + } + } + } +} + static int map_iaa_device_wqs(struct iaa_device *iaa_device) { - struct wq_table_entry *local; + struct wq_table_entry *local, *global; int ret = 0, n_wqs_added = 0; struct iaa_wq *iaa_wq; local = iaa_device->iaa_local_wqs; + global = iaa_device->iaa_global_wqs; list_for_each_entry(iaa_wq, &iaa_device->wqs, list) { if (iaa_wq->mapped && ++n_wqs_added) @@ -909,11 +1195,18 @@ static int map_iaa_device_wqs(struct iaa_device *iaa_device) pr_debug("iaa_device %px: processing wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); - if (WARN_ON(local->n_wqs == local->max_wqs)) - break; + if ((!n_wqs_added || ((n_wqs_added + g_wqs_per_iaa) < iaa_device->n_wq)) && + (local->n_wqs < local->max_wqs)) { + + local->wqs[local->n_wqs++] = iaa_wq->wq; + pr_debug("iaa_device %px: added local wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); + } else { + if (WARN_ON(global->n_wqs == global->max_wqs)) + break; - local->wqs[local->n_wqs++] = iaa_wq->wq; - pr_debug("iaa_device %px: added local wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); + global->wqs[global->n_wqs++] = iaa_wq->wq; + pr_debug("iaa_device %px: added global wq %d.%d\n", iaa_device, iaa_device->idxd->id, iaa_wq->wq->id); + } iaa_wq->mapped = true; ++n_wqs_added; @@ -969,6 +1262,10 @@ static void rebalance_wq_table(void) } } + if (iaa_crypto_enabled && pkg_global_wq_tables) { + pkg_global_wq_tables_reinit(); + global_wq_table_add_wqs(); + } pr_debug("Finished rebalance local wqs."); } @@ -979,7 +1276,17 @@ static void free_wq_tables(void) wq_table = NULL; } - pr_debug("freed local wq table\n"); + if (global_wq_table) { + free_percpu(global_wq_table); + global_wq_table = NULL; + } + + if (num_consec_descs_per_wq) { + free_percpu(num_consec_descs_per_wq); + num_consec_descs_per_wq = NULL; + } + + pr_debug("freed wq tables\n"); } /*************************************************************** @@ -1002,6 +1309,35 @@ static struct idxd_wq *wq_table_next_wq(int cpu) return entry->wqs[entry->cur_wq]; } +/* + * Caller should make sure to call only if the + * per_cpu_ptr "global_wq_table" is non-NULL + * and has at least one wq configured. + */ +static struct idxd_wq *global_wq_table_next_wq(int cpu) +{ + struct wq_table_entry *entry = per_cpu_ptr(global_wq_table, cpu); + int *num_consec_descs = per_cpu_ptr(num_consec_descs_per_wq, cpu); + + /* + * Fall-back to local IAA's wq if there were no global wqs configured + * for any IAA device, or if there were problems in setting up global + * wqs for this cpu's package. + */ + if (!entry->wqs) + return wq_table_next_wq(cpu); + + if ((*num_consec_descs) == g_consec_descs_per_gwq) { + if (++entry->cur_wq >= entry->n_wqs) + entry->cur_wq = 0; + *num_consec_descs = 0; + } + + ++(*num_consec_descs); + + return entry->wqs[entry->cur_wq]; +} + /************************************************* * Core iaa_crypto compress/decompress functions. 
*************************************************/ @@ -1563,6 +1899,7 @@ static int iaa_comp_acompress(struct acomp_req *req) struct idxd_wq *wq; struct device *dev; int order = -1; + struct wq_table_entry *entry; compression_ctx = crypto_tfm_ctx(tfm); @@ -1581,8 +1918,15 @@ static int iaa_comp_acompress(struct acomp_req *req) disable_async = true; cpu = get_cpu(); - wq = wq_table_next_wq(cpu); + entry = per_cpu_ptr(global_wq_table, cpu); + + if (!entry || !entry->wqs || entry->n_wqs == 0) { + wq = wq_table_next_wq(cpu); + } else { + wq = global_wq_table_next_wq(cpu); + } put_cpu(); + if (!wq) { pr_debug("no wq configured for cpu=%d\n", cpu); return -ENODEV; @@ -2233,6 +2577,7 @@ static void iaa_crypto_remove(struct idxd_dev *idxd_dev) if (nr_iaa == 0) { iaa_crypto_enabled = false; + pkg_global_wq_tables_dealloc(); free_wq_tables(); BUG_ON(!list_empty(&iaa_devices)); INIT_LIST_HEAD(&iaa_devices); @@ -2302,6 +2647,20 @@ static int __init iaa_crypto_init_module(void) goto err_sync_attr_create; } + ret = driver_create_file(&iaa_crypto_driver.drv, + &driver_attr_g_wqs_per_iaa); + if (ret) { + pr_debug("IAA g_wqs_per_iaa attr creation failed\n"); + goto err_g_wqs_per_iaa_attr_create; + } + + ret = driver_create_file(&iaa_crypto_driver.drv, + &driver_attr_g_consec_descs_per_gwq); + if (ret) { + pr_debug("IAA g_consec_descs_per_gwq attr creation failed\n"); + goto err_g_consec_descs_per_gwq_attr_create; + } + if (iaa_crypto_debugfs_init()) pr_warn("debugfs init failed, stats not available\n"); @@ -2309,6 +2668,12 @@ static int __init iaa_crypto_init_module(void) out: return ret; +err_g_consec_descs_per_gwq_attr_create: + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_g_wqs_per_iaa); +err_g_wqs_per_iaa_attr_create: + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_sync_mode); err_sync_attr_create: driver_remove_file(&iaa_crypto_driver.drv, &driver_attr_verify_compress); @@ -2332,6 +2697,10 @@ static void __exit iaa_crypto_cleanup_module(void) &driver_attr_sync_mode); driver_remove_file(&iaa_crypto_driver.drv, &driver_attr_verify_compress); + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_g_wqs_per_iaa); + driver_remove_file(&iaa_crypto_driver.drv, + &driver_attr_g_consec_descs_per_gwq); idxd_driver_unregister(&iaa_crypto_driver); iaa_aecs_cleanup_fixed(); crypto_free_comp(deflate_generic_tfm);
From patchwork Mon Mar 3 08:47:20 2025
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com, ying.huang@linux.alibaba.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc:
wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v8 10/14] crypto: iaa - Descriptor allocation timeouts with mitigations in iaa_crypto.
Date: Mon, 3 Mar 2025 00:47:20 -0800
Message-Id: <20250303084724.6490-11-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch modifies the descriptor allocation from blocking to non-blocking with bounded retries or "timeouts".

This is necessary to prevent task-blocked errors in high contention scenarios, for instance when the platform has only 1 IAA device enabled. With 1 IAA device enabled per package on a dual-package SPR with 56 cores/package, there are 112 logical cores mapped to this single IAA device. In this scenario, the task-blocked errors can occur because idxd_alloc_desc() is called with IDXD_OP_BLOCK. Any process that is able to obtain IAA_CRYPTO_MAX_BATCH_SIZE (8U) descriptors will cause contention for descriptor allocation in all other processes. Under IDXD_OP_BLOCK, this can cause compress/decompress jobs to stall in stress test scenarios (e.g. zswap_store() of 2M folios).

To make the iaa_crypto driver more fail-safe, this commit implements the following:

1) Change compress/decompress descriptor allocations to be non-blocking with retries ("timeouts").
2) Return a compress error to zswap if descriptor allocation with timeouts fails during compress ops. zswap_store() will return an error and the folio gets stored in the backing swap device.
3) Fall back to software decompress if descriptor allocation with timeouts fails during decompress ops.
4) Fix bugs so that the descriptor is freed consistently in all error cases.

With these fixes, there are no task-blocked errors seen under stress testing conditions, and no performance degradation is observed.
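[Editorial note: the bounded-retry pattern adopted here can be illustrated with a small stand-alone sketch, a hypothetical illustration rather than driver code. try_alloc() is a stand-in for idxd_alloc_desc(wq, IDXD_OP_NONBLOCK), and max_retries plays the role of the IAA_ALLOC_DESC_*_TIMEOUT constants added in the patch below.]

#include <errno.h>
#include <stdio.h>

/* Stand-in for idxd_alloc_desc(wq, IDXD_OP_NONBLOCK): pretend a
 * descriptor becomes available on the third attempt. */
static int try_alloc(int attempt)
{
	return attempt >= 3 ? 0 : -EAGAIN;
}

int main(void)
{
	int max_retries = 500;	/* role of IAA_ALLOC_DESC_DECOMP_TIMEOUT */
	int retries = 0, ret = -EAGAIN;

	while (ret == -EAGAIN && retries++ < max_retries)
		ret = try_alloc(retries);

	if (ret)
		printf("allocation failed after %d attempts: return error or fall back\n", retries);
	else
		printf("allocated after %d attempts\n", retries);
	return 0;
}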
Signed-off-by: Kanchana P Sridhar --- drivers/crypto/intel/iaa/iaa_crypto.h | 3 + drivers/crypto/intel/iaa/iaa_crypto_main.c | 74 ++++++++++++---------- 2 files changed, 45 insertions(+), 32 deletions(-) diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h index 5f38f530c33d..de14e5e2a017 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto.h +++ b/drivers/crypto/intel/iaa/iaa_crypto.h @@ -21,6 +21,9 @@ #define IAA_COMPLETION_TIMEOUT 1000000 +#define IAA_ALLOC_DESC_COMP_TIMEOUT 1000 +#define IAA_ALLOC_DESC_DECOMP_TIMEOUT 500 + #define IAA_ANALYTICS_ERROR 0x0a #define IAA_ERROR_DECOMP_BUF_OVERFLOW 0x0b #define IAA_ERROR_COMP_BUF_OVERFLOW 0x19 diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c index cb96897e7fed..7503fafca279 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto_main.c +++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c @@ -1406,6 +1406,7 @@ static int deflate_generic_decompress(struct acomp_req *req) void *src, *dst; int ret; + req->dlen = PAGE_SIZE; src = kmap_local_page(sg_page(req->src)) + req->src->offset; dst = kmap_local_page(sg_page(req->dst)) + req->dst->offset; @@ -1469,7 +1470,8 @@ static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req, struct iaa_device_compression_mode *active_compression_mode; struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm); struct iaa_device *iaa_device; - struct idxd_desc *idxd_desc; + struct idxd_desc *idxd_desc = ERR_PTR(-EAGAIN); + int alloc_desc_retries = 0; struct iax_hw_desc *desc; struct idxd_device *idxd; struct iaa_wq *iaa_wq; @@ -1485,7 +1487,11 @@ static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req, active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode); - idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK); + while ((idxd_desc == ERR_PTR(-EAGAIN)) && (alloc_desc_retries++ < IAA_ALLOC_DESC_DECOMP_TIMEOUT)) { + idxd_desc = idxd_alloc_desc(wq, IDXD_OP_NONBLOCK); + cpu_relax(); + } + if (IS_ERR(idxd_desc)) { dev_dbg(dev, "idxd descriptor allocation failed\n"); dev_dbg(dev, "iaa compress failed: ret=%ld\n", @@ -1661,7 +1667,8 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, struct iaa_device_compression_mode *active_compression_mode; struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm); struct iaa_device *iaa_device; - struct idxd_desc *idxd_desc; + struct idxd_desc *idxd_desc = ERR_PTR(-EAGAIN); + int alloc_desc_retries = 0; struct iax_hw_desc *desc; struct idxd_device *idxd; struct iaa_wq *iaa_wq; @@ -1677,7 +1684,11 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode); - idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK); + while ((idxd_desc == ERR_PTR(-EAGAIN)) && (alloc_desc_retries++ < IAA_ALLOC_DESC_COMP_TIMEOUT)) { + idxd_desc = idxd_alloc_desc(wq, IDXD_OP_NONBLOCK); + cpu_relax(); + } + if (IS_ERR(idxd_desc)) { dev_dbg(dev, "idxd descriptor allocation failed\n"); dev_dbg(dev, "iaa compress failed: ret=%ld\n", PTR_ERR(idxd_desc)); @@ -1753,15 +1764,10 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, *compression_crc = idxd_desc->iax_completion->crc; - if (!ctx->async_mode || disable_async) - idxd_free_desc(wq, idxd_desc); -out: - return ret; err: idxd_free_desc(wq, idxd_desc); - dev_dbg(dev, "iaa compress failed: ret=%d\n", ret); - - goto out; +out: + return ret; } static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req 
*req, @@ -1773,7 +1779,8 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req, struct iaa_device_compression_mode *active_compression_mode; struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm); struct iaa_device *iaa_device; - struct idxd_desc *idxd_desc; + struct idxd_desc *idxd_desc = ERR_PTR(-EAGAIN); + int alloc_desc_retries = 0; struct iax_hw_desc *desc; struct idxd_device *idxd; struct iaa_wq *iaa_wq; @@ -1789,12 +1796,18 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req, active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode); - idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK); + while ((idxd_desc == ERR_PTR(-EAGAIN)) && (alloc_desc_retries++ < IAA_ALLOC_DESC_DECOMP_TIMEOUT)) { + idxd_desc = idxd_alloc_desc(wq, IDXD_OP_NONBLOCK); + cpu_relax(); + } + if (IS_ERR(idxd_desc)) { dev_dbg(dev, "idxd descriptor allocation failed\n"); dev_dbg(dev, "iaa decompress failed: ret=%ld\n", PTR_ERR(idxd_desc)); - return PTR_ERR(idxd_desc); + ret = PTR_ERR(idxd_desc); + idxd_desc = NULL; + goto fallback_software_decomp; } desc = idxd_desc->iax_hw; @@ -1837,7 +1850,7 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req, ret = idxd_submit_desc(wq, idxd_desc); if (ret) { dev_dbg(dev, "submit_desc failed ret=%d\n", ret); - goto err; + goto fallback_software_decomp; } /* Update stats */ @@ -1851,19 +1864,20 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req, } ret = check_completion(dev, idxd_desc->iax_completion, false, false); + +fallback_software_decomp: if (ret) { - dev_dbg(dev, "%s: check_completion failed ret=%d\n", __func__, ret); - if (idxd_desc->iax_completion->status == IAA_ANALYTICS_ERROR) { + dev_dbg(dev, "%s: desc allocation/submission/check_completion failed ret=%d\n", __func__, ret); + if (idxd_desc && idxd_desc->iax_completion->status == IAA_ANALYTICS_ERROR) { pr_warn("%s: falling back to deflate-generic decompress, " "analytics error code %x\n", __func__, idxd_desc->iax_completion->error_code); - ret = deflate_generic_decompress(req); - if (ret) { - dev_dbg(dev, "%s: deflate-generic failed ret=%d\n", - __func__, ret); - goto err; - } - } else { + } + + ret = deflate_generic_decompress(req); + + if (ret) { + pr_err("%s: iaa decompress failed: fallback to deflate-generic software decompress error ret=%d\n", __func__, ret); goto err; } } else { @@ -1872,19 +1886,15 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req, *dlen = req->dlen; - if (!ctx->async_mode || disable_async) - idxd_free_desc(wq, idxd_desc); - /* Update stats */ update_total_decomp_bytes_in(slen); update_wq_decomp_bytes(wq, slen); + +err: + if (idxd_desc) + idxd_free_desc(wq, idxd_desc); out: return ret; -err: - idxd_free_desc(wq, idxd_desc); - dev_dbg(dev, "iaa decompress failed: ret=%d\n", ret); - - goto out; } static int iaa_comp_acompress(struct acomp_req *req)
From patchwork Mon Mar 3 08:47:21 2025
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com, ying.huang@linux.alibaba.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v8 11/14] crypto: iaa - Fix for "deflate_generic_tfm" global being accessed without locks.
Date: Mon, 3 Mar 2025 00:47:21 -0800
Message-Id: <20250303084724.6490-12-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

The mainline implementation of deflate_generic_decompress() has a bug in the usage of this global variable:

static struct crypto_comp *deflate_generic_tfm;

The "deflate_generic_tfm" is allocated at module init time and freed during module cleanup. Any call that falls back to software decompress, for instance when descriptor allocation or job submission fails, will trigger this bug in the deflate_generic_decompress() procedure. The problem is the unprotected access of "deflate_generic_tfm" in this procedure.

While stress testing workloads under high memory pressure, with 1 IAA device and "deflate-iaa" as the compressor, the descriptor allocation times out and the software fallback route is taken. With multiple processes calling:

ret = crypto_comp_decompress(deflate_generic_tfm, src, req->slen, dst, &req->dlen);

we end up with data corruption that results in req->dlen being larger than PAGE_SIZE. zswap_decompress() subsequently raises a kernel bug. Under high contention and memory pressure, this bug manifests with high likelihood.

This has been resolved by adding a mutex, which is locked before accessing "deflate_generic_tfm" and unlocked after the crypto_comp call is done.
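[Editorial note: the failure mode is the classic one of a single stateful context shared by concurrent users. The stand-alone sketch below, a hypothetical illustration and not kernel code, shows the shape of the race and of the fix: two threads mutate one shared scratch state, and only the mutex keeps each user's view consistent, just as deflate_generic_tfm_lock does for deflate_generic_tfm in the patch below. Compile with -pthread; with USE_LOCK set to 0, corruption messages typically appear.]

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define USE_LOCK 1

static pthread_mutex_t tfm_lock = PTHREAD_MUTEX_INITIALIZER;
static char scratch[32];	/* stand-in for the tfm's internal state */

static void *worker(void *arg)
{
	const char *tag = arg;

	for (int i = 0; i < 100000; i++) {
#if USE_LOCK
		pthread_mutex_lock(&tfm_lock);
#endif
		snprintf(scratch, sizeof(scratch), "%s-%d", tag, i);
		if (strncmp(scratch, tag, strlen(tag)) != 0)
			printf("%s saw corrupted state: %s\n", tag, scratch);
#if USE_LOCK
		pthread_mutex_unlock(&tfm_lock);
#endif
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, worker, "thread1");
	pthread_create(&t2, NULL, worker, "thread2");
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}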
Signed-off-by: Kanchana P Sridhar
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c index 7503fafca279..2a994f307679 100644 --- a/drivers/crypto/intel/iaa/iaa_crypto_main.c +++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c @@ -105,6 +105,7 @@ static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX]; LIST_HEAD(iaa_devices); DEFINE_MUTEX(iaa_devices_lock); +DEFINE_MUTEX(deflate_generic_tfm_lock); /* If enabled, IAA hw crypto algos are registered, unavailable otherwise */ static bool iaa_crypto_enabled; @@ -1407,6 +1408,9 @@ static int deflate_generic_decompress(struct acomp_req *req) int ret; req->dlen = PAGE_SIZE; + + mutex_lock(&deflate_generic_tfm_lock); + src = kmap_local_page(sg_page(req->src)) + req->src->offset; dst = kmap_local_page(sg_page(req->dst)) + req->dst->offset; @@ -1416,6 +1420,8 @@ static int deflate_generic_decompress(struct acomp_req *req) kunmap_local(src); kunmap_local(dst); + mutex_unlock(&deflate_generic_tfm_lock); + update_total_sw_decomp_calls(); return ret;
From patchwork Mon Mar 3 08:47:22 2025
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com, ying.huang@linux.alibaba.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v8 12/14] mm: zswap: Simplify acomp_ctx resource allocation/deletion and mutex lock usage.
Date: Mon, 3 Mar 2025 00:47:22 -0800
Message-Id: <20250303084724.6490-13-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch changes the lifetime of the acomp_ctx resources to span from pool creation to pool deletion. A "bool __online" and a "u8 nr_reqs" are added to "struct crypto_acomp_ctx", which simplify a few things:

1) zswap_pool_create() initializes all members of each per-CPU acomp_ctx to 0 or NULL, and only then initializes the mutex.

2) CPU hotplug sets nr_reqs to 1, allocates resources and sets __online to true, without locking the mutex.

3) CPU hotunplug locks the mutex before setting __online to false. It does not delete any resources.

4) acomp_ctx_get_cpu_lock() locks the mutex, then checks whether __online is true; if so, it returns the acomp_ctx with the mutex held, for use in the zswap compress and decompress ops.

5) CPU onlining after offlining simply checks whether either __online or nr_reqs is non-zero and, if so, returns 0 without re-allocating the resources.

6) zswap_pool_destroy() calls a newly added zswap_cpu_comp_dealloc() to delete the acomp_ctx resources.

7) Common resource-deletion code, for the zswap_cpu_comp_prepare() error paths and for use in zswap_cpu_comp_dealloc(), is factored into a new acomp_ctx_dealloc().

The CPU hot[un]plug callback functions are moved to "pool functions" accordingly.
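[The lifecycle above can be summarized as a rough user-space sketch; the names are hypothetical, with pthreads and malloc standing in for the kernel mutex and crypto/allocation APIs. This is a model of the protocol, not the zswap code.]

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>
#include <stdio.h>

struct ctx {
	pthread_mutex_t mutex;
	void *buffer;		/* stand-in for the acomp/reqs/buffers */
	unsigned char nr_reqs;
	bool __online;
};

/* 1) pool creation: zero all members first, then init the mutex. */
static void ctx_create(struct ctx *c)
{
	c->buffer = NULL;
	c->nr_reqs = 0;
	c->__online = false;
	pthread_mutex_init(&c->mutex, NULL);
}

/* 2) and 5) CPU onlining: no mutex taken; reuse resources that survived
 * an earlier offline, otherwise allocate them once. */
static int ctx_online(struct ctx *c)
{
	if (c->__online)
		return 0;
	if (c->nr_reqs) {
		c->__online = true;
		return 0;
	}
	c->buffer = malloc(4096);
	if (!c->buffer)
		return -1;
	c->nr_reqs = 1;
	c->__online = true;
	return 0;
}

/* 3) CPU offlining: flip the flag under the mutex; free nothing. */
static void ctx_offline(struct ctx *c)
{
	pthread_mutex_lock(&c->mutex);
	c->__online = false;
	pthread_mutex_unlock(&c->mutex);
}

/* 6) and 7) pool deletion: the only place resources are freed. */
static void ctx_dealloc(struct ctx *c)
{
	c->__online = false;
	free(c->buffer);
	c->buffer = NULL;
	c->nr_reqs = 0;
}

int main(void)
{
	struct ctx c;

	ctx_create(&c);
	ctx_online(&c);
	ctx_offline(&c);
	ctx_online(&c);		/* fast path: nr_reqs != 0, no reallocation */
	printf("online=%d nr_reqs=%d\n", c.__online, c.nr_reqs);
	ctx_dealloc(&c);
	return 0;
}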
The per-cpu memory cost of not deleting the acomp_ctx resources upon CPU offlining, and only deleting them when the pool is destroyed, is as follows: IAA with batching: 64.8 KB Software compressors: 8.2 KB I would appreciate code review comments on whether this memory cost is acceptable, for the latency improvement that it provides due to a faster reclaim restart after a CPU hotunplug-hotplug sequence - all that the hotplug code needs to do is to check if acomp_ctx->nr_reqs is non-0, and if so, set __online to true and return, and reclaim can proceed. Signed-off-by: Kanchana P Sridhar --- mm/zswap.c | 273 +++++++++++++++++++++++++++++++++++------------------ 1 file changed, 182 insertions(+), 91 deletions(-) diff --git a/mm/zswap.c b/mm/zswap.c index 10f2a16e7586..cff96df1df8b 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -144,10 +144,12 @@ bool zswap_never_enabled(void) struct crypto_acomp_ctx { struct crypto_acomp *acomp; struct acomp_req *req; - struct crypto_wait wait; u8 *buffer; + u8 nr_reqs; + struct crypto_wait wait; struct mutex mutex; bool is_sleepable; + bool __online; }; /* @@ -246,6 +248,122 @@ static inline struct xarray *swap_zswap_tree(swp_entry_t swp) **********************************/ static void __zswap_pool_empty(struct percpu_ref *ref); +static void acomp_ctx_dealloc(struct crypto_acomp_ctx *acomp_ctx) +{ + if (!IS_ERR_OR_NULL(acomp_ctx) && acomp_ctx->nr_reqs) { + + if (!IS_ERR_OR_NULL(acomp_ctx->req)) + acomp_request_free(acomp_ctx->req); + acomp_ctx->req = NULL; + + kfree(acomp_ctx->buffer); + acomp_ctx->buffer = NULL; + + if (!IS_ERR_OR_NULL(acomp_ctx->acomp)) + crypto_free_acomp(acomp_ctx->acomp); + + acomp_ctx->nr_reqs = 0; + } +} + +static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node) +{ + struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node); + struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); + int ret = -ENOMEM; + + /* + * Just to be even more fail-safe against changes in assumptions and/or + * implementation of the CPU hotplug code. + */ + if (acomp_ctx->__online) + return 0; + + if (acomp_ctx->nr_reqs) { + acomp_ctx->__online = true; + return 0; + } + + acomp_ctx->acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu)); + if (IS_ERR(acomp_ctx->acomp)) { + pr_err("could not alloc crypto acomp %s : %ld\n", + pool->tfm_name, PTR_ERR(acomp_ctx->acomp)); + ret = PTR_ERR(acomp_ctx->acomp); + goto fail; + } + + acomp_ctx->nr_reqs = 1; + + acomp_ctx->req = acomp_request_alloc(acomp_ctx->acomp); + if (!acomp_ctx->req) { + pr_err("could not alloc crypto acomp_request %s\n", + pool->tfm_name); + ret = -ENOMEM; + goto fail; + } + + acomp_ctx->buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu)); + if (!acomp_ctx->buffer) { + ret = -ENOMEM; + goto fail; + } + + crypto_init_wait(&acomp_ctx->wait); + + /* + * if the backend of acomp is async zip, crypto_req_done() will wakeup + * crypto_wait_req(); if the backend of acomp is scomp, the callback + * won't be called, crypto_wait_req() will return without blocking. 
+ */ + acomp_request_set_callback(acomp_ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG, + crypto_req_done, &acomp_ctx->wait); + + acomp_ctx->is_sleepable = acomp_is_async(acomp_ctx->acomp); + + acomp_ctx->__online = true; + + return 0; + +fail: + acomp_ctx_dealloc(acomp_ctx); + + return ret; +} + +static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node) +{ + struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node); + struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); + + mutex_lock(&acomp_ctx->mutex); + acomp_ctx->__online = false; + mutex_unlock(&acomp_ctx->mutex); + + return 0; +} + +static void zswap_cpu_comp_dealloc(unsigned int cpu, struct hlist_node *node) +{ + struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node); + struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); + + /* + * The lifetime of acomp_ctx resources is from pool creation to + * pool deletion. + * + * Reclaims should not be happening because, we get to this routine only + * in two scenarios: + * + * 1) pool creation failures before/during the pool ref initialization. + * 2) we are in the process of releasing the pool, it is off the + * zswap_pools list and has no references. + * + * Hence, there is no need for locks. + */ + acomp_ctx->__online = false; + acomp_ctx_dealloc(acomp_ctx); +} + static struct zswap_pool *zswap_pool_create(char *type, char *compressor) { struct zswap_pool *pool; @@ -285,13 +403,21 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor) goto error; } - for_each_possible_cpu(cpu) - mutex_init(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex); + for_each_possible_cpu(cpu) { + struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); + + acomp_ctx->acomp = NULL; + acomp_ctx->req = NULL; + acomp_ctx->buffer = NULL; + acomp_ctx->__online = false; + acomp_ctx->nr_reqs = 0; + mutex_init(&acomp_ctx->mutex); + } ret = cpuhp_state_add_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node); if (ret) - goto error; + goto ref_fail; /* being the current pool takes 1 ref; this func expects the * caller to always add the new pool as the current pool @@ -307,6 +433,9 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor) return pool; ref_fail: + for_each_possible_cpu(cpu) + zswap_cpu_comp_dealloc(cpu, &pool->node); + cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node); error: if (pool->acomp_ctx) @@ -361,8 +490,13 @@ static struct zswap_pool *__zswap_pool_create_fallback(void) static void zswap_pool_destroy(struct zswap_pool *pool) { + int cpu; + zswap_pool_debug("destroying", pool); + for_each_possible_cpu(cpu) + zswap_cpu_comp_dealloc(cpu, &pool->node); + cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node); free_percpu(pool->acomp_ctx); @@ -816,85 +950,6 @@ static void zswap_entry_free(struct zswap_entry *entry) /********************************* * compressed storage functions **********************************/ -static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node) -{ - struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node); - struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); - struct crypto_acomp *acomp = NULL; - struct acomp_req *req = NULL; - u8 *buffer = NULL; - int ret; - - buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu)); - if (!buffer) { - ret = -ENOMEM; - goto fail; - } - - acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu)); - if (IS_ERR(acomp)) { - 
pr_err("could not alloc crypto acomp %s : %ld\n", - pool->tfm_name, PTR_ERR(acomp)); - ret = PTR_ERR(acomp); - goto fail; - } - - req = acomp_request_alloc(acomp); - if (!req) { - pr_err("could not alloc crypto acomp_request %s\n", - pool->tfm_name); - ret = -ENOMEM; - goto fail; - } - - /* - * Only hold the mutex after completing allocations, otherwise we may - * recurse into zswap through reclaim and attempt to hold the mutex - * again resulting in a deadlock. - */ - mutex_lock(&acomp_ctx->mutex); - crypto_init_wait(&acomp_ctx->wait); - - /* - * if the backend of acomp is async zip, crypto_req_done() will wakeup - * crypto_wait_req(); if the backend of acomp is scomp, the callback - * won't be called, crypto_wait_req() will return without blocking. - */ - acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, - crypto_req_done, &acomp_ctx->wait); - - acomp_ctx->buffer = buffer; - acomp_ctx->acomp = acomp; - acomp_ctx->is_sleepable = acomp_is_async(acomp); - acomp_ctx->req = req; - mutex_unlock(&acomp_ctx->mutex); - return 0; - -fail: - if (acomp) - crypto_free_acomp(acomp); - kfree(buffer); - return ret; -} - -static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node) -{ - struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node); - struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); - - mutex_lock(&acomp_ctx->mutex); - if (!IS_ERR_OR_NULL(acomp_ctx)) { - if (!IS_ERR_OR_NULL(acomp_ctx->req)) - acomp_request_free(acomp_ctx->req); - acomp_ctx->req = NULL; - if (!IS_ERR_OR_NULL(acomp_ctx->acomp)) - crypto_free_acomp(acomp_ctx->acomp); - kfree(acomp_ctx->buffer); - } - mutex_unlock(&acomp_ctx->mutex); - - return 0; -} static struct crypto_acomp_ctx *acomp_ctx_get_cpu_lock(struct zswap_pool *pool) { @@ -902,16 +957,52 @@ static struct crypto_acomp_ctx *acomp_ctx_get_cpu_lock(struct zswap_pool *pool) for (;;) { acomp_ctx = raw_cpu_ptr(pool->acomp_ctx); - mutex_lock(&acomp_ctx->mutex); - if (likely(acomp_ctx->req)) - return acomp_ctx; /* - * It is possible that we were migrated to a different CPU after - * getting the per-CPU ctx but before the mutex was acquired. If - * the old CPU got offlined, zswap_cpu_comp_dead() could have - * already freed ctx->req (among other things) and set it to - * NULL. Just try again on the new CPU that we ended up on. + * If the CPU onlining code successfully allocates acomp_ctx resources, + * it sets acomp_ctx->__online to true. Until this happens, we have + * two options: + * + * 1. Return NULL and fail all stores on this CPU. + * 2. Retry, until onlining has finished allocating resources. + * + * In theory, option 1 could be more appropriate, because it + * allows the calling procedure to decide how it wants to handle + * reclaim racing with CPU hotplug. For instance, it might be Ok + * for compress to return an error for the backing swap device + * to store the folio. Decompress could wait until we get a + * valid and locked mutex after onlining has completed. For now, + * we go with option 2 because adding a do-while in + * zswap_decompress() adds latency for software compressors. + * + * Once initialized, the resources will be de-allocated only + * when the pool is destroyed. The acomp_ctx will hold on to the + * resources through CPU offlining/onlining at any time until + * the pool is destroyed. + * + * This prevents races/deadlocks between reclaim and CPU acomp_ctx + * resource allocation that are a dependency for reclaim. 
+	 * It further simplifies the interaction with CPU onlining and
+	 * offlining:
+	 *
+	 * - CPU onlining does not take the mutex. It only allocates
+	 *   resources and sets __online to true.
+	 * - CPU offlining acquires the mutex before setting
+	 *   __online to false. If reclaim has acquired the mutex,
+	 *   offlining will have to wait for reclaim to complete before
+	 *   hotunplug can proceed. Further, hotunplug merely sets
+	 *   __online to false. It does not delete the acomp_ctx
+	 *   resources.
+	 *
+	 * Option 1 is better than potentially not exiting the earlier
+	 * for (;;) loop because the system is running low on memory
+	 * and/or CPUs are getting offlined for whatever reason. At
+	 * least failing this store will prevent data loss by failing
+	 * zswap_store(), and saving the data in the backing swap device.
	 */
+		mutex_lock(&acomp_ctx->mutex);
+		if (likely(acomp_ctx->__online))
+			return acomp_ctx;
+		mutex_unlock(&acomp_ctx->mutex);
 	}
 }
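[A compact user-space model of the lock-and-recheck loop implemented above in acomp_ctx_get_cpu_lock(); names are hypothetical and pthreads stands in for the kernel mutex API.]

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct ctx {
	pthread_mutex_t mutex;
	bool __online;
};

static struct ctx percpu_ctx = { PTHREAD_MUTEX_INITIALIZER, true };

/* raw_cpu_ptr() stand-in: re-read the "current CPU's" context each try. */
static struct ctx *cur_ctx(void)
{
	return &percpu_ctx;
}

/*
 * Model of acomp_ctx_get_cpu_lock(): take the mutex, then trust the
 * context only if __online is still true with the lock held; otherwise
 * drop the lock and retry, since onlining may still be allocating
 * resources ("option 2" in the comment above).
 */
static struct ctx *ctx_get_lock(void)
{
	for (;;) {
		struct ctx *c = cur_ctx();

		pthread_mutex_lock(&c->mutex);
		if (c->__online)
			return c;	/* returned with the mutex held */
		pthread_mutex_unlock(&c->mutex);
	}
}

int main(void)
{
	struct ctx *c = ctx_get_lock();

	printf("got ctx, online=%d\n", c->__online);
	pthread_mutex_unlock(&c->mutex);
	return 0;
}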
From patchwork Mon Mar 3 08:47:23 2025
From: Kanchana P Sridhar
Subject: [PATCH v8 13/14] mm: zswap: Allocate pool batching resources if the compressor supports batching.
Date: Mon, 3 Mar 2025 00:47:23 -0800
Message-Id: <20250303084724.6490-14-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch adds support for the per-CPU acomp_ctx to track multiple compression/decompression requests and multiple compression destination buffers. The zswap_cpu_comp_prepare() CPU onlining code queries the maximum batch size the compressor supports and, if the compressor does support batching, allocates the necessary batching resources. However, zswap does not use more than one request yet. Follow-up patches will actually utilize the multiple acomp_ctx requests/buffers for batch compression/decompression of multiple pages.

The newly added ZSWAP_MAX_BATCH_SIZE limits the amount of extra memory used for batching. There is a small extra memory overhead of allocating the "reqs" and "buffers" arrays for compressors that do not support batching.

Signed-off-by: Kanchana P Sridhar
---
 mm/zswap.c | 99 +++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 69 insertions(+), 30 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index cff96df1df8b..fae59d6d5147 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -78,6 +78,16 @@ static bool zswap_pool_reached_full;
 
 #define ZSWAP_PARAM_UNSET ""
 
+/*
+ * For compression batching of large folios:
+ * Maximum number of acomp compress requests that will be processed
+ * in a batch, iff the zswap compressor supports batching.
+ * This limit exists because we preallocate enough requests and buffers
+ * in the per-cpu acomp_ctx accordingly. Hence, a higher limit means higher
+ * memory usage.
+ */ +#define ZSWAP_MAX_BATCH_SIZE 8U + static int zswap_setup(void); /* Enable/disable zswap */ @@ -143,8 +153,8 @@ bool zswap_never_enabled(void) struct crypto_acomp_ctx { struct crypto_acomp *acomp; - struct acomp_req *req; - u8 *buffer; + struct acomp_req **reqs; + u8 **buffers; u8 nr_reqs; struct crypto_wait wait; struct mutex mutex; @@ -251,13 +261,22 @@ static void __zswap_pool_empty(struct percpu_ref *ref); static void acomp_ctx_dealloc(struct crypto_acomp_ctx *acomp_ctx) { if (!IS_ERR_OR_NULL(acomp_ctx) && acomp_ctx->nr_reqs) { + u8 i; + + if (acomp_ctx->reqs) { + for (i = 0; i < acomp_ctx->nr_reqs; ++i) + if (!IS_ERR_OR_NULL(acomp_ctx->reqs[i])) + acomp_request_free(acomp_ctx->reqs[i]); + kfree(acomp_ctx->reqs); + acomp_ctx->reqs = NULL; + } - if (!IS_ERR_OR_NULL(acomp_ctx->req)) - acomp_request_free(acomp_ctx->req); - acomp_ctx->req = NULL; - - kfree(acomp_ctx->buffer); - acomp_ctx->buffer = NULL; + if (acomp_ctx->buffers) { + for (i = 0; i < acomp_ctx->nr_reqs; ++i) + kfree(acomp_ctx->buffers[i]); + kfree(acomp_ctx->buffers); + acomp_ctx->buffers = NULL; + } if (!IS_ERR_OR_NULL(acomp_ctx->acomp)) crypto_free_acomp(acomp_ctx->acomp); @@ -271,6 +290,7 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node) struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node); struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); int ret = -ENOMEM; + u8 i; /* * Just to be even more fail-safe against changes in assumptions and/or @@ -292,22 +312,41 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node) goto fail; } - acomp_ctx->nr_reqs = 1; + acomp_ctx->nr_reqs = min(ZSWAP_MAX_BATCH_SIZE, + crypto_acomp_batch_size(acomp_ctx->acomp)); - acomp_ctx->req = acomp_request_alloc(acomp_ctx->acomp); - if (!acomp_ctx->req) { - pr_err("could not alloc crypto acomp_request %s\n", - pool->tfm_name); - ret = -ENOMEM; + acomp_ctx->reqs = kcalloc_node(acomp_ctx->nr_reqs, sizeof(struct acomp_req *), + GFP_KERNEL, cpu_to_node(cpu)); + if (!acomp_ctx->reqs) goto fail; + + for (i = 0; i < acomp_ctx->nr_reqs; ++i) { + acomp_ctx->reqs[i] = acomp_request_alloc(acomp_ctx->acomp); + if (!acomp_ctx->reqs[i]) { + pr_err("could not alloc crypto acomp_request reqs[%d] %s\n", + i, pool->tfm_name); + goto fail; + } } - acomp_ctx->buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu)); - if (!acomp_ctx->buffer) { - ret = -ENOMEM; + acomp_ctx->buffers = kcalloc_node(acomp_ctx->nr_reqs, sizeof(u8 *), + GFP_KERNEL, cpu_to_node(cpu)); + if (!acomp_ctx->buffers) goto fail; + + for (i = 0; i < acomp_ctx->nr_reqs; ++i) { + acomp_ctx->buffers[i] = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, + cpu_to_node(cpu)); + if (!acomp_ctx->buffers[i]) + goto fail; } + /* + * The crypto_wait is used only in fully synchronous, i.e., with scomp + * or non-poll mode of acomp, hence there is only one "wait" per + * acomp_ctx, with callback set to reqs[0], under the assumption that + * there is at least 1 request per acomp_ctx. + */ crypto_init_wait(&acomp_ctx->wait); /* @@ -315,7 +354,7 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node) * crypto_wait_req(); if the backend of acomp is scomp, the callback * won't be called, crypto_wait_req() will return without blocking. 
 	 */
-	acomp_request_set_callback(acomp_ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+	acomp_request_set_callback(acomp_ctx->reqs[0], CRYPTO_TFM_REQ_MAY_BACKLOG,
 				   crypto_req_done, &acomp_ctx->wait);
 
 	acomp_ctx->is_sleepable = acomp_is_async(acomp_ctx->acomp);
@@ -407,8 +446,8 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
 		struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
 
 		acomp_ctx->acomp = NULL;
-		acomp_ctx->req = NULL;
-		acomp_ctx->buffer = NULL;
+		acomp_ctx->reqs = NULL;
+		acomp_ctx->buffers = NULL;
 		acomp_ctx->__online = false;
 		acomp_ctx->nr_reqs = 0;
 		mutex_init(&acomp_ctx->mutex);
@@ -1026,7 +1065,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	u8 *dst;
 
 	acomp_ctx = acomp_ctx_get_cpu_lock(pool);
-	dst = acomp_ctx->buffer;
+	dst = acomp_ctx->buffers[0];
 	sg_init_table(&input, 1);
 	sg_set_page(&input, page, PAGE_SIZE, 0);
 
@@ -1036,7 +1075,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	 * giving the dst buffer with enough length to avoid buffer overflow.
 	 */
 	sg_init_one(&output, dst, PAGE_SIZE * 2);
-	acomp_request_set_params(acomp_ctx->req, &input, &output, PAGE_SIZE, dlen);
+	acomp_request_set_params(acomp_ctx->reqs[0], &input, &output, PAGE_SIZE, dlen);
 
 	/*
 	 * it maybe looks a little bit silly that we send an asynchronous request,
 	 * then wait for its completion synchronously. This makes the process look
 	 * synchronous in fact.
 	 * Theoretically, acomp supports users send multiple acomp requests in one
 	 * acomp instance, then get those requests done simultaneously. but in this
 	 * case, zswap actually does store and load page by page, there is no
 	 * existing method to send the second page before the first page is done
 	 * in one thread doing zwap.
 	 * but in different threads running on different cpu, we have different
 	 * acomp instance, so multiple threads can do (de)compression in parallel.
 	 */
-	comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
-	dlen = acomp_ctx->req->dlen;
+	comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->reqs[0]), &acomp_ctx->wait);
+	dlen = acomp_ctx->reqs[0]->dlen;
 	if (comp_ret)
 		goto unlock;
 
@@ -1102,19 +1141,19 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
 	 */
 	if ((acomp_ctx->is_sleepable && !zpool_can_sleep_mapped(zpool)) ||
 	    !virt_addr_valid(src)) {
-		memcpy(acomp_ctx->buffer, src, entry->length);
-		src = acomp_ctx->buffer;
+		memcpy(acomp_ctx->buffers[0], src, entry->length);
+		src = acomp_ctx->buffers[0];
 		zpool_unmap_handle(zpool, entry->handle);
 	}
 
 	sg_init_one(&input, src, entry->length);
 	sg_init_table(&output, 1);
 	sg_set_folio(&output, folio, PAGE_SIZE, 0);
-	acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
-	BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
-	BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
+	acomp_request_set_params(acomp_ctx->reqs[0], &input, &output, entry->length, PAGE_SIZE);
+	BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->reqs[0]), &acomp_ctx->wait));
+	BUG_ON(acomp_ctx->reqs[0]->dlen != PAGE_SIZE);
 
-	if (src != acomp_ctx->buffer)
+	if (src != acomp_ctx->buffers[0])
 		zpool_unmap_handle(zpool, entry->handle);
 	acomp_ctx_put_unlock(acomp_ctx);
 }
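[A rough user-space sketch of the allocation pattern this patch introduces: clamp the compressor's batch size to a fixed cap, allocate parallel request/buffer arrays, and unwind everything on any partial failure. All names are hypothetical; plain malloc/calloc stand in for the kcalloc_node and acomp allocation APIs.]

#include <stdlib.h>

#define MAX_BATCH 8U	/* analogue of ZSWAP_MAX_BATCH_SIZE */

struct batch_ctx {
	void **reqs;			/* stand-ins for struct acomp_req * */
	unsigned char **buffers;
	unsigned char nr_reqs;
};

/* Analogue of acomp_ctx_dealloc(): safe on partially built contexts. */
static void batch_ctx_dealloc(struct batch_ctx *c)
{
	unsigned i;

	if (!c->nr_reqs)
		return;
	if (c->reqs) {
		for (i = 0; i < c->nr_reqs; i++)
			free(c->reqs[i]);
		free(c->reqs);
		c->reqs = NULL;
	}
	if (c->buffers) {
		for (i = 0; i < c->nr_reqs; i++)
			free(c->buffers[i]);
		free(c->buffers);
		c->buffers = NULL;
	}
	c->nr_reqs = 0;
}

/* Analogue of the batching part of zswap_cpu_comp_prepare(). */
static int batch_ctx_prepare(struct batch_ctx *c, unsigned compressor_batch)
{
	unsigned i, n = compressor_batch < MAX_BATCH ? compressor_batch : MAX_BATCH;

	c->nr_reqs = n;
	c->reqs = calloc(n, sizeof(*c->reqs));
	c->buffers = calloc(n, sizeof(*c->buffers));
	if (!c->reqs || !c->buffers)
		goto fail;
	for (i = 0; i < n; i++) {
		c->reqs[i] = malloc(64);		/* acomp_request_alloc() stand-in */
		c->buffers[i] = malloc(2 * 4096);	/* PAGE_SIZE * 2 dst buffer */
		if (!c->reqs[i] || !c->buffers[i])
			goto fail;
	}
	return 0;
fail:
	batch_ctx_dealloc(c);
	return -1;
}

int main(void)
{
	struct batch_ctx c = { 0 };

	if (batch_ctx_prepare(&c, 12) == 0)	/* clamped to MAX_BATCH */
		batch_ctx_dealloc(&c);
	return 0;
}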
From patchwork Mon Mar 3 08:47:24 2025
From: Kanchana P Sridhar
Subject: [PATCH v8 14/14] mm: zswap: Compress batching with request chaining in zswap_store() of large folios.
Date: Mon, 3 Mar 2025 00:47:24 -0800
Message-Id: <20250303084724.6490-15-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com>

This patch introduces three new procedures:

  zswap_store_folio()
  zswap_store_pages()
  zswap_batch_compress()

zswap_store_folio() stores a folio in chunks of "batch_size" pages. If the compressor supports batching and the per-CPU acomp_ctx has multiple requests (acomp_ctx->nr_reqs > 1), the batch_size will be acomp_ctx->nr_reqs. If the compressor does not support batching, the batch_size will be ZSWAP_MAX_BATCH_SIZE. Whether the compressor has multiple acomp_ctx requests is passed as a bool "batching" parameter to zswap_store_pages() and zswap_batch_compress().

This refactoring allows us to move the loop over the folio's pages from zswap_store() to zswap_store_folio(), and also enables batching.

zswap_store_pages() implements, for multiple pages in a folio (namely a "batch"), all the computations done earlier in zswap_store_page() for a single page.
zswap_store_pages() starts by allocating all zswap entries required to store the batch. Next, it calls zswap_batch_compress() to compress the batch. Finally, it adds the batch's zswap entries to the xarray and LRU, charges zswap memory and increments zswap stats. The error handling and cleanup required for all failure scenarios that can occur while storing a batch in zswap are consolidated to a single "store_pages_failed" label in zswap_store_pages(). And finally, this patch introduces zswap_batch_compress(), which does the following: - If the compressor supports batching, sets up a request chain for compressing the batch in one shot. If Intel IAA is the zswap compressor, the request chain will be compressed in parallel in hardware. If all requests in the chain are compressed without errors, the compressed buffers are then stored in zpool. - If the compressor does not support batching, each page in the batch is compressed and stored sequentially. zswap_batch_compress() replaces zswap_compress(), thereby eliminating code duplication and facilitating maintainability of the code with the introduction of batching. The call to the crypto layer is exactly the same in both cases: when batch compressing a request chain or when sequentially compressing each page in the batch. Signed-off-by: Kanchana P Sridhar --- mm/zswap.c | 396 ++++++++++++++++++++++++++++++++++++----------------- 1 file changed, 270 insertions(+), 126 deletions(-) diff --git a/mm/zswap.c b/mm/zswap.c index fae59d6d5147..135d5792ce50 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -1051,74 +1051,141 @@ static void acomp_ctx_put_unlock(struct crypto_acomp_ctx *acomp_ctx) mutex_unlock(&acomp_ctx->mutex); } -static bool zswap_compress(struct page *page, struct zswap_entry *entry, - struct zswap_pool *pool) +/* + * Unified code paths for compressors that do and do not support + * batching. This procedure will compress multiple @nr_pages in @folio, + * starting from @index. + * If @batching is set to true, it will create a request chain for + * compression batching. It is assumed that the caller has verified + * that the acomp_ctx->nr_reqs is at least @nr_pages. + * If @batching is set to false, it will process each page sequentially. + * In both cases, if all compressions were successful, it will proceed + * to store the compressed buffers in zpool. + */ +static bool zswap_batch_compress(struct folio *folio, + long index, + unsigned int nr_pages, + struct zswap_entry *entries[], + struct zswap_pool *pool, + struct crypto_acomp_ctx *acomp_ctx, + bool batching) { - struct crypto_acomp_ctx *acomp_ctx; - struct scatterlist input, output; - int comp_ret = 0, alloc_ret = 0; - unsigned int dlen = PAGE_SIZE; - unsigned long handle; - struct zpool *zpool; - char *buf; + struct scatterlist inputs[ZSWAP_MAX_BATCH_SIZE]; + struct scatterlist outputs[ZSWAP_MAX_BATCH_SIZE]; + struct zpool *zpool = pool->zpool; + int acomp_idx = 0, nr_to_store = 1; + unsigned int i, j; + int err = 0; gfp_t gfp; - u8 *dst; - acomp_ctx = acomp_ctx_get_cpu_lock(pool); - dst = acomp_ctx->buffers[0]; - sg_init_table(&input, 1); - sg_set_page(&input, page, PAGE_SIZE, 0); + lockdep_assert_held(&acomp_ctx->mutex); - /* - * We need PAGE_SIZE * 2 here since there maybe over-compression case, - * and hardware-accelerators may won't check the dst buffer size, so - * giving the dst buffer with enough length to avoid buffer overflow. 
- */ - sg_init_one(&output, dst, PAGE_SIZE * 2); - acomp_request_set_params(acomp_ctx->reqs[0], &input, &output, PAGE_SIZE, dlen); - - /* - * it maybe looks a little bit silly that we send an asynchronous request, - * then wait for its completion synchronously. This makes the process look - * synchronous in fact. - * Theoretically, acomp supports users send multiple acomp requests in one - * acomp instance, then get those requests done simultaneously. but in this - * case, zswap actually does store and load page by page, there is no - * existing method to send the second page before the first page is done - * in one thread doing zwap. - * but in different threads running on different cpu, we have different - * acomp instance, so multiple threads can do (de)compression in parallel. - */ - comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->reqs[0]), &acomp_ctx->wait); - dlen = acomp_ctx->reqs[0]->dlen; - if (comp_ret) - goto unlock; - - zpool = pool->zpool; gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM; if (zpool_malloc_support_movable(zpool)) gfp |= __GFP_HIGHMEM | __GFP_MOVABLE; - alloc_ret = zpool_malloc(zpool, dlen, gfp, &handle); - if (alloc_ret) - goto unlock; - buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO); - memcpy(buf, dst, dlen); - zpool_unmap_handle(zpool, handle); + for (i = 0; i < nr_pages; ++i) { + struct page *page = folio_page(folio, index + i); - entry->handle = handle; - entry->length = dlen; + sg_init_table(&inputs[acomp_idx], 1); + sg_set_page(&inputs[acomp_idx], page, PAGE_SIZE, 0); -unlock: - if (comp_ret == -ENOSPC || alloc_ret == -ENOSPC) - zswap_reject_compress_poor++; - else if (comp_ret) - zswap_reject_compress_fail++; - else if (alloc_ret) - zswap_reject_alloc_fail++; + /* + * Each dst buffer should be of size (PAGE_SIZE * 2). + * Reflect same in sg_list. + */ + sg_init_one(&outputs[acomp_idx], acomp_ctx->buffers[acomp_idx], PAGE_SIZE * 2); + acomp_request_set_params(acomp_ctx->reqs[acomp_idx], &inputs[acomp_idx], + &outputs[acomp_idx], PAGE_SIZE, PAGE_SIZE); + + if (batching) { + /* Add the acomp request to the chain. */ + if (likely(i)) + acomp_request_chain(acomp_ctx->reqs[acomp_idx], acomp_ctx->reqs[0]); + else + acomp_reqchain_init(acomp_ctx->reqs[0], 0, crypto_req_done, + &acomp_ctx->wait); + + if (i == (nr_pages - 1)) { + /* Process the request chain. */ + err = crypto_wait_req(crypto_acomp_compress(acomp_ctx->reqs[0]), &acomp_ctx->wait); + + /* + * Get the individual compress errors from request chaining. + */ + for (j = 0; j < nr_pages; ++j) { + if (unlikely(acomp_request_err(acomp_ctx->reqs[j]))) { + err = -EINVAL; + if (acomp_request_err(acomp_ctx->reqs[j]) == -ENOSPC) + zswap_reject_compress_poor++; + else + zswap_reject_compress_fail++; + } + } + /* + * Request chaining cleanup: + * + * - Clear the CRYPTO_TFM_REQ_CHAIN bit on acomp_ctx->reqs[0]. + * - Reset the acomp_ctx->wait to notify acomp_ctx->reqs[0]. + */ + acomp_reqchain_clear(acomp_ctx->reqs[0], &acomp_ctx->wait); + if (unlikely(err)) + return false; + j = 0; + nr_to_store = nr_pages; + goto store_zpool; + } + + ++acomp_idx; + continue; + } else { + err = crypto_wait_req(crypto_acomp_compress(acomp_ctx->reqs[0]), &acomp_ctx->wait); + + if (unlikely(err)) { + if (err == -ENOSPC) + zswap_reject_compress_poor++; + else + zswap_reject_compress_fail++; + return false; + } + j = i; + nr_to_store = 1; + } - acomp_ctx_put_unlock(acomp_ctx); - return comp_ret == 0 && alloc_ret == 0; +store_zpool: + /* + * All batch pages were successfully compressed. 
+ * Store the pages in zpool. + */ + acomp_idx = -1; + while (nr_to_store--) { + unsigned long handle; + char *buf; + + ++acomp_idx; + prefetchw(entries[j]); + err = zpool_malloc(zpool, acomp_ctx->reqs[acomp_idx]->dlen, gfp, &handle); + + if (unlikely(err)) { + if (err == -ENOSPC) + zswap_reject_compress_poor++; + else + zswap_reject_alloc_fail++; + + return false; + } + + buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO); + memcpy(buf, acomp_ctx->buffers[acomp_idx], acomp_ctx->reqs[acomp_idx]->dlen); + zpool_unmap_handle(zpool, handle); + + entries[j]->handle = handle; + entries[j]->length = acomp_ctx->reqs[acomp_idx]->dlen; + ++j; + } + } + + return true; } static void zswap_decompress(struct zswap_entry *entry, struct folio *folio) @@ -1581,84 +1648,165 @@ static void shrink_worker(struct work_struct *w) * main API **********************************/ -static bool zswap_store_page(struct page *page, - struct obj_cgroup *objcg, - struct zswap_pool *pool) +/* + * Store multiple pages in @folio, starting from the page at index @si up to + * and including the page at index @ei. + * + * The error handling from all failure points is consolidated to the + * "store_pages_failed" label, based on the initialization of the zswap entries' + * handles to ERR_PTR(-EINVAL) at allocation time, and the fact that the + * entry's handle is subsequently modified only upon a successful zpool_malloc() + * after the page is compressed. + */ +static bool zswap_store_pages(struct folio *folio, + long si, + long ei, + struct obj_cgroup *objcg, + struct zswap_pool *pool, + struct crypto_acomp_ctx *acomp_ctx, + bool batching) { - swp_entry_t page_swpentry = page_swap_entry(page); - struct zswap_entry *entry, *old; + struct zswap_entry *entries[ZSWAP_MAX_BATCH_SIZE]; + int node_id = folio_nid(folio); + u8 i, from_i = 0, nr_pages = ei - si + 1; - /* allocate entry */ - entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page)); - if (!entry) { - zswap_reject_kmemcache_fail++; - return false; + for (i = 0; i < nr_pages; ++i) { + entries[i] = zswap_entry_cache_alloc(GFP_KERNEL, node_id); + + if (unlikely(!entries[i])) { + zswap_reject_kmemcache_fail++; + nr_pages = i; + goto store_pages_failed; + } + + entries[i]->handle = (unsigned long)ERR_PTR(-EINVAL); } - if (!zswap_compress(page, entry, pool)) - goto compress_failed; + if (!zswap_batch_compress(folio, si, nr_pages, entries, pool, acomp_ctx, batching)) + goto store_pages_failed; - old = xa_store(swap_zswap_tree(page_swpentry), - swp_offset(page_swpentry), - entry, GFP_KERNEL); - if (xa_is_err(old)) { - int err = xa_err(old); + for (i = 0; i < nr_pages; ++i) { + swp_entry_t page_swpentry = page_swap_entry(folio_page(folio, si + i)); + struct zswap_entry *old, *entry = entries[i]; - WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err); - zswap_reject_alloc_fail++; - goto store_failed; - } + old = xa_store(swap_zswap_tree(page_swpentry), + swp_offset(page_swpentry), + entry, GFP_KERNEL); + if (unlikely(xa_is_err(old))) { + int err = xa_err(old); - /* - * We may have had an existing entry that became stale when - * the folio was redirtied and now the new version is being - * swapped out. Get rid of the old. - */ - if (old) - zswap_entry_free(old); + WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err); + zswap_reject_alloc_fail++; + from_i = i; + goto store_pages_failed; + } - /* - * The entry is successfully compressed and stored in the tree, there is - * no further possibility of failure. 
Grab refs to the pool and objcg, - * charge zswap memory, and increment zswap_stored_pages. - * The opposite actions will be performed by zswap_entry_free() - * when the entry is removed from the tree. - */ - zswap_pool_get(pool); - if (objcg) { - obj_cgroup_get(objcg); - obj_cgroup_charge_zswap(objcg, entry->length); - } - atomic_long_inc(&zswap_stored_pages); + /* + * We may have had an existing entry that became stale when + * the folio was redirtied and now the new version is being + * swapped out. Get rid of the old. + */ + if (unlikely(old)) + zswap_entry_free(old); - /* - * We finish initializing the entry while it's already in xarray. - * This is safe because: - * - * 1. Concurrent stores and invalidations are excluded by folio lock. - * - * 2. Writeback is excluded by the entry not being on the LRU yet. - * The publishing order matters to prevent writeback from seeing - * an incoherent entry. - */ - entry->pool = pool; - entry->swpentry = page_swpentry; - entry->objcg = objcg; - entry->referenced = true; - if (entry->length) { - INIT_LIST_HEAD(&entry->lru); - zswap_lru_add(&zswap_list_lru, entry); + /* + * The entry is successfully compressed and stored in the tree, there is + * no further possibility of failure. Grab refs to the pool and objcg, + * charge zswap memory, and increment zswap_stored_pages. + * The opposite actions will be performed by zswap_entry_free() + * when the entry is removed from the tree. + */ + zswap_pool_get(pool); + if (objcg) { + obj_cgroup_get(objcg); + obj_cgroup_charge_zswap(objcg, entry->length); + } + atomic_long_inc(&zswap_stored_pages); + + /* + * We finish initializing the entry while it's already in xarray. + * This is safe because: + * + * 1. Concurrent stores and invalidations are excluded by folio lock. + * + * 2. Writeback is excluded by the entry not being on the LRU yet. + * The publishing order matters to prevent writeback from seeing + * an incoherent entry. + */ + entry->pool = pool; + entry->swpentry = page_swpentry; + entry->objcg = objcg; + entry->referenced = true; + if (likely(entry->length)) { + INIT_LIST_HEAD(&entry->lru); + zswap_lru_add(&zswap_list_lru, entry); + } } return true; -store_failed: - zpool_free(pool->zpool, entry->handle); -compress_failed: - zswap_entry_cache_free(entry); +store_pages_failed: + for (i = from_i; i < nr_pages; ++i) { + if (!IS_ERR_VALUE(entries[i]->handle)) + zpool_free(pool->zpool, entries[i]->handle); + + zswap_entry_cache_free(entries[i]); + } + return false; } +/* + * Store all pages in a folio by calling zswap_batch_compress(). + * If the compressor supports batching, i.e., has multiple acomp requests, + * the folio will be compressed in batches of "acomp_ctx->nr_reqs" using + * request chaining. + * If the compressor has only one acomp request, the folio will be compressed + * in batches of ZSWAP_MAX_BATCH_SIZE pages, where each page in the batch is + * compressed sequentially. + */ +static bool zswap_store_folio(struct folio *folio, + struct obj_cgroup *objcg, + struct zswap_pool *pool) +{ + long nr_pages = folio_nr_pages(folio); + struct crypto_acomp_ctx *acomp_ctx; + unsigned int batch_size; + bool ret = true, batching; + long si, ei; + + acomp_ctx = acomp_ctx_get_cpu_lock(pool); + + batching = ((acomp_ctx->nr_reqs > 1) && (nr_pages > 1)); + + batch_size = batching ? acomp_ctx->nr_reqs : ZSWAP_MAX_BATCH_SIZE; + + if (!batching) + acomp_ctx_put_unlock(acomp_ctx); + + /* Store the folio in batches of "batch_size" pages. 
*/ + for (si = 0, ei = min(si + batch_size - 1, nr_pages - 1); + ((si < nr_pages) && (ei < nr_pages)); + si = ei + 1, ei = min(si + batch_size - 1, nr_pages - 1)) { + + if (!batching) + acomp_ctx = acomp_ctx_get_cpu_lock(pool); + + if (!zswap_store_pages(folio, si, ei, objcg, pool, acomp_ctx, batching)) { + ret = false; + break; + } + + if (!batching) + acomp_ctx_put_unlock(acomp_ctx); + } + + if (batching || !ret) + acomp_ctx_put_unlock(acomp_ctx); + + return ret; +} + bool zswap_store(struct folio *folio) { long nr_pages = folio_nr_pages(folio); @@ -1667,7 +1815,6 @@ bool zswap_store(struct folio *folio) struct mem_cgroup *memcg = NULL; struct zswap_pool *pool; bool ret = false; - long index; VM_WARN_ON_ONCE(!folio_test_locked(folio)); VM_WARN_ON_ONCE(!folio_test_swapcache(folio)); @@ -1701,12 +1848,8 @@ bool zswap_store(struct folio *folio) mem_cgroup_put(memcg); } - for (index = 0; index < nr_pages; ++index) { - struct page *page = folio_page(folio, index); - - if (!zswap_store_page(page, objcg, pool)) - goto put_pool; - } + if (!zswap_store_folio(folio, objcg, pool)) + goto put_pool; if (objcg) count_objcg_events(objcg, ZSWPOUT, nr_pages); @@ -1733,6 +1876,7 @@ bool zswap_store(struct folio *folio) pgoff_t offset = swp_offset(swp); struct zswap_entry *entry; struct xarray *tree; + long index; for (index = 0; index < nr_pages; ++index) { tree = swap_zswap_tree(swp_entry(type, offset + index));