From patchwork Thu Apr 13 06:24:17 2023
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13209808
From: "Herbert Xu"
Date: Thu, 13 Apr 2023 14:24:17 +0800
Subject: [PATCH 2/6] crypto: api - Add crypto_clone_tfm
To: Linux Crypto Mailing List, David Ahern, Eric Dumazet, Paolo Abeni,
    Jakub Kicinski, "David S. Miller", Dmitry Safonov, Andy Lutomirski,
    Ard Biesheuvel, Bob Gilligan, Dan Carpenter, David Laight,
    Dmitry Safonov <0x7f454c46@gmail.com>, Eric Biggers,
    "Eric W. Biederman", Francesco Ruggeri, Hideaki YOSHIFUJI,
    Ivan Delalande, Leonard Crestez, Salam Noureddine,
    netdev@vger.kernel.org
X-Mailing-List: netdev@vger.kernel.org

This patch adds the helper crypto_clone_tfm.  Its purpose is to
allocate a tfm object with GFP_ATOMIC.  As we cannot sleep, the object
has to be cloned from an existing tfm object.

This allows code paths that cannot otherwise allocate a crypto_tfm
object to do so.  Once a new tfm has been obtained, its key can then be
changed without impacting other users.
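As an illustration only (not part of this patch): a minimal sketch of how
a caller that already holds a tfm might use the new helper from a context
that cannot sleep.  The wrapper name clone_tfm_for_rekey() is hypothetical;
crypto_clone_tfm(), the ERR_PTR return convention and the frontend->tfmsize
offset are taken from the code added below.

    /* Hypothetical usage sketch, assuming access to crypto/internal.h. */
    #include <linux/err.h>
    #include "internal.h"

    static struct crypto_tfm *clone_tfm_for_rekey(const struct crypto_type *frontend,
                                                  struct crypto_tfm *otfm)
    {
            void *mem;

            /* Atomic allocation plus copy of crt_flags/exit from otfm. */
            mem = crypto_clone_tfm(frontend, otfm);
            if (IS_ERR(mem))
                    return ERR_CAST(mem);

            /*
             * As in crypto_create_tfm_node(), the crypto_tfm itself lives
             * frontend->tfmsize bytes into the returned memory block.
             */
            return (struct crypto_tfm *)((char *)mem + frontend->tfmsize);
    }

A real frontend would presumably also have to copy or reinitialise its own
per-transform state beyond struct crypto_tfm, which crypto_clone_tfm()
leaves to the caller; the clone's key can then be changed without affecting
users of otfm.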
Signed-off-by: Herbert Xu
Reviewed-by: Simon Horman
---
 crypto/api.c      | 59 +++++++++++++++++++++++++++++++++++++++++++++---------
 crypto/internal.h |  2 ++
 2 files changed, 52 insertions(+), 9 deletions(-)

diff --git a/crypto/api.c b/crypto/api.c
index f509d73fa682..d375e8cd770d 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -488,28 +488,44 @@ struct crypto_tfm *crypto_alloc_base(const char *alg_name, u32 type, u32 mask)
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_base);
 
-void *crypto_create_tfm_node(struct crypto_alg *alg,
-			     const struct crypto_type *frontend,
-			     int node)
+static void *crypto_alloc_tfmmem(struct crypto_alg *alg,
+				 const struct crypto_type *frontend, int node,
+				 gfp_t gfp)
 {
-	char *mem;
-	struct crypto_tfm *tfm = NULL;
+	struct crypto_tfm *tfm;
 	unsigned int tfmsize;
 	unsigned int total;
-	int err = -ENOMEM;
+	char *mem;
 
 	tfmsize = frontend->tfmsize;
 	total = tfmsize + sizeof(*tfm) + frontend->extsize(alg);
 
-	mem = kzalloc_node(total, GFP_KERNEL, node);
+	mem = kzalloc_node(total, gfp, node);
 	if (mem == NULL)
-		goto out_err;
+		return ERR_PTR(-ENOMEM);
 
 	tfm = (struct crypto_tfm *)(mem + tfmsize);
 	tfm->__crt_alg = alg;
 	tfm->node = node;
 	refcount_set(&tfm->refcnt, 1);
 
+	return mem;
+}
+
+void *crypto_create_tfm_node(struct crypto_alg *alg,
+			     const struct crypto_type *frontend,
+			     int node)
+{
+	struct crypto_tfm *tfm;
+	char *mem;
+	int err;
+
+	mem = crypto_alloc_tfmmem(alg, frontend, node, GFP_KERNEL);
+	if (IS_ERR(mem))
+		goto out;
+
+	tfm = (struct crypto_tfm *)(mem + frontend->tfmsize);
+
 	err = frontend->init_tfm(tfm);
 	if (err)
 		goto out_free_tfm;
@@ -525,13 +541,38 @@ void *crypto_create_tfm_node(struct crypto_alg *alg,
 	if (err == -EAGAIN)
 		crypto_shoot_alg(alg);
 	kfree(mem);
-out_err:
 	mem = ERR_PTR(err);
 out:
 	return mem;
 }
 EXPORT_SYMBOL_GPL(crypto_create_tfm_node);
 
+void *crypto_clone_tfm(const struct crypto_type *frontend,
+		       struct crypto_tfm *otfm)
+{
+	struct crypto_alg *alg = otfm->__crt_alg;
+	struct crypto_tfm *tfm;
+	char *mem;
+
+	mem = ERR_PTR(-ESTALE);
+	if (unlikely(!crypto_mod_get(alg)))
+		goto out;
+
+	mem = crypto_alloc_tfmmem(alg, frontend, otfm->node, GFP_ATOMIC);
+	if (IS_ERR(mem)) {
+		crypto_mod_put(alg);
+		goto out;
+	}
+
+	tfm = (struct crypto_tfm *)(mem + frontend->tfmsize);
+	tfm->crt_flags = otfm->crt_flags;
+	tfm->exit = otfm->exit;
+
+out:
+	return mem;
+}
+EXPORT_SYMBOL_GPL(crypto_clone_tfm);
+
 struct crypto_alg *crypto_find_alg(const char *alg_name,
 				   const struct crypto_type *frontend,
 				   u32 type, u32 mask)
diff --git a/crypto/internal.h b/crypto/internal.h
index 5eee009ee494..8dd746b1130b 100644
--- a/crypto/internal.h
+++ b/crypto/internal.h
@@ -106,6 +106,8 @@ struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
 			      u32 mask);
 void *crypto_create_tfm_node(struct crypto_alg *alg,
 			     const struct crypto_type *frontend, int node);
+void *crypto_clone_tfm(const struct crypto_type *frontend,
+		       struct crypto_tfm *otfm);
 
 static inline void *crypto_create_tfm(struct crypto_alg *alg,
 				      const struct crypto_type *frontend)