From patchwork Tue Jul 7 06:32:02 2020
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11647813
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v4 12/13] crypto: sahara - permit asynchronous skcipher as fallback
Date: Tue, 7 Jul 2020 09:32:02 +0300
Message-Id: <20200707063203.5018-13-ardb@kernel.org>
In-Reply-To: <20200707063203.5018-1-ardb@kernel.org>
References: <20200707063203.5018-1-ardb@kernel.org>
Cc: Eric Biggers, Corentin Labbe, Ayush Sawal, Jamie Iles, Fabio Estevam,
 Ard Biesheuvel, Herbert Xu, Horia Geantă, Rohit Maheshwari, Chen-Yu Tsai,
 NXP Linux Team, Tom Lendacky, Sascha Hauer, Vinay Kumar Yadav,
 Maxime Ripard, Matthias Brugger, linux-amlogic@lists.infradead.org,
 linux-omap@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 John Allen, Tero Kristo, Pengutronix Kernel Team, Shawn Guo,
 "David S. Miller"

Even though the sahara driver implements asynchronous versions of ecb(aes)
and cbc(aes), the fallbacks it allocates are required to be synchronous.
SIMD-based software implementations are usually asynchronous as well, even
though they rarely complete asynchronously (this typically only happens when
the request was made from softirq context while SIMD was already in use in
the task context that it interrupted), so they are disregarded, and either
the generic C version or another table-based version implemented in
assembler is selected instead.

Since falling back to synchronous AES is not only a performance issue but
potentially a security issue as well (table-based AES is not time
invariant), fix this by allocating an ordinary skcipher as the fallback,
and invoke it with the completion routine that was given to the outer
request.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Horia Geantă
---
 drivers/crypto/sahara.c | 96 +++++++++-----
 1 file changed, 45 insertions(+), 51 deletions(-)

diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
index 466e30bd529c..0c8cb23ae708 100644
--- a/drivers/crypto/sahara.c
+++ b/drivers/crypto/sahara.c
@@ -146,11 +146,12 @@ struct sahara_ctx {
 	/* AES-specific context */
 	int keylen;
 	u8 key[AES_KEYSIZE_128];
-	struct crypto_sync_skcipher *fallback;
+	struct crypto_skcipher *fallback;
 };
 
 struct sahara_aes_reqctx {
 	unsigned long mode;
+	struct skcipher_request fallback_req;	// keep at the end
 };
 
 /*
@@ -617,10 +618,10 @@ static int sahara_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	/*
 	 * The requested key size is not supported by HW, do a fallback.
 	 */
-	crypto_sync_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags &
+	crypto_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags &
 				       CRYPTO_TFM_REQ_MASK);
-	return crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
+	return crypto_skcipher_setkey(ctx->fallback, key, keylen);
 }
 
 static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode)
@@ -651,21 +652,19 @@ static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode)
 
 static int sahara_aes_ecb_encrypt(struct skcipher_request *req)
 {
+	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(
 		crypto_skcipher_reqtfm(req));
-	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
-
-		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
-		skcipher_request_set_callback(subreq, req->base.flags,
-					      NULL, NULL);
-		skcipher_request_set_crypt(subreq, req->src, req->dst,
-					   req->cryptlen, req->iv);
-		err = crypto_skcipher_encrypt(subreq);
-		skcipher_request_zero(subreq);
-		return err;
+		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+		skcipher_request_set_callback(&rctx->fallback_req,
+					      req->base.flags,
+					      req->base.complete,
+					      req->base.data);
+		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+					   req->dst, req->cryptlen, req->iv);
+		return crypto_skcipher_encrypt(&rctx->fallback_req);
 	}
 
 	return sahara_aes_crypt(req, FLAGS_ENCRYPT);
@@ -673,21 +672,19 @@ static int sahara_aes_ecb_encrypt(struct skcipher_request *req)
 
 static int sahara_aes_ecb_decrypt(struct skcipher_request *req)
 {
+	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(
 		crypto_skcipher_reqtfm(req));
-	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
-
-		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
-		skcipher_request_set_callback(subreq, req->base.flags,
-					      NULL, NULL);
-		skcipher_request_set_crypt(subreq, req->src, req->dst,
-					   req->cryptlen, req->iv);
-		err = crypto_skcipher_decrypt(subreq);
-		skcipher_request_zero(subreq);
-		return err;
+		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+		skcipher_request_set_callback(&rctx->fallback_req,
+					      req->base.flags,
+					      req->base.complete,
+					      req->base.data);
+		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+					   req->dst, req->cryptlen, req->iv);
+		return crypto_skcipher_decrypt(&rctx->fallback_req);
 	}
 
 	return sahara_aes_crypt(req, 0);
@@ -695,21 +692,19 @@ static int sahara_aes_ecb_decrypt(struct skcipher_request *req)
 
 static int sahara_aes_cbc_encrypt(struct skcipher_request *req)
 {
+	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(
 		crypto_skcipher_reqtfm(req));
-	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
-
-		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
-		skcipher_request_set_callback(subreq, req->base.flags,
-					      NULL, NULL);
-		skcipher_request_set_crypt(subreq, req->src, req->dst,
-					   req->cryptlen, req->iv);
-		err = crypto_skcipher_encrypt(subreq);
-		skcipher_request_zero(subreq);
-		return err;
+		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+		skcipher_request_set_callback(&rctx->fallback_req,
+					      req->base.flags,
+					      req->base.complete,
+					      req->base.data);
+		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+					   req->dst, req->cryptlen, req->iv);
+		return crypto_skcipher_encrypt(&rctx->fallback_req);
 	}
 
 	return sahara_aes_crypt(req, FLAGS_ENCRYPT | FLAGS_CBC);
@@ -717,21 +712,19 @@ static int sahara_aes_cbc_encrypt(struct skcipher_request *req)
 
 static int sahara_aes_cbc_decrypt(struct skcipher_request *req)
 {
+	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(
 		crypto_skcipher_reqtfm(req));
-	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
-
-		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
-		skcipher_request_set_callback(subreq, req->base.flags,
-					      NULL, NULL);
-		skcipher_request_set_crypt(subreq, req->src, req->dst,
-					   req->cryptlen, req->iv);
-		err = crypto_skcipher_decrypt(subreq);
-		skcipher_request_zero(subreq);
-		return err;
+		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+		skcipher_request_set_callback(&rctx->fallback_req,
+					      req->base.flags,
+					      req->base.complete,
+					      req->base.data);
+		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+					   req->dst, req->cryptlen, req->iv);
+		return crypto_skcipher_decrypt(&rctx->fallback_req);
 	}
 
 	return sahara_aes_crypt(req, FLAGS_CBC);
@@ -742,14 +735,15 @@ static int sahara_aes_init_tfm(struct crypto_skcipher *tfm)
 	const char *name = crypto_tfm_alg_name(&tfm->base);
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	ctx->fallback = crypto_alloc_sync_skcipher(name, 0,
+	ctx->fallback = crypto_alloc_skcipher(name, 0,
 					      CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(ctx->fallback)) {
 		pr_err("Error allocating fallback algo %s\n", name);
 		return PTR_ERR(ctx->fallback);
 	}
 
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct sahara_aes_reqctx));
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct sahara_aes_reqctx) +
+					 crypto_skcipher_reqsize(ctx->fallback));
 
 	return 0;
 }
@@ -758,7 +752,7 @@ static void sahara_aes_exit_tfm(struct crypto_skcipher *tfm)
 {
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	crypto_free_sync_skcipher(ctx->fallback);
+	crypto_free_skcipher(ctx->fallback);
 }
 
 static u32 sahara_sha_init_hdr(struct sahara_dev *dev,
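
For reference, a minimal sketch of the pattern this patch applies, outside
the sahara driver: the demo_* names below are hypothetical, while the crypto
API calls are the same kernel interfaces used in the diff above. The fallback
request is embedded as the last member of the driver's request context, the
reqsize is extended by the fallback's own reqsize so the tail space can hold
the fallback's request context, and the outer request's completion routine is
forwarded so the fallback may complete asynchronously.

#include <crypto/skcipher.h>
#include <linux/err.h>

/* Hypothetical driver context holding the fallback transform. */
struct demo_ctx {
	struct crypto_skcipher *fallback;
};

/* Per-request context; fallback_req must be the last member so that
 * the fallback's own reqctx can live in the tail space behind it. */
struct demo_reqctx {
	unsigned long mode;
	struct skcipher_request fallback_req;	/* keep at the end */
};

static int demo_init_tfm(struct crypto_skcipher *tfm)
{
	struct demo_ctx *ctx = crypto_skcipher_ctx(tfm);

	/* An ordinary (possibly asynchronous) skcipher may now serve
	 * as the fallback. */
	ctx->fallback = crypto_alloc_skcipher(crypto_tfm_alg_name(&tfm->base),
					      0, CRYPTO_ALG_NEED_FALLBACK);
	if (IS_ERR(ctx->fallback))
		return PTR_ERR(ctx->fallback);

	/* Reserve the driver's reqctx plus the fallback's reqsize. */
	crypto_skcipher_set_reqsize(tfm, sizeof(struct demo_reqctx) +
					 crypto_skcipher_reqsize(ctx->fallback));
	return 0;
}

static int demo_encrypt_fallback(struct skcipher_request *req)
{
	struct demo_reqctx *rctx = skcipher_request_ctx(req);
	struct demo_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));

	skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
	/* Forward the outer request's completion routine, so an
	 * asynchronous fallback completes straight to the caller
	 * instead of being awaited on the stack. */
	skcipher_request_set_callback(&rctx->fallback_req, req->base.flags,
				      req->base.complete, req->base.data);
	skcipher_request_set_crypt(&rctx->fallback_req, req->src, req->dst,
				   req->cryptlen, req->iv);
	return crypto_skcipher_encrypt(&rctx->fallback_req);
}

Compared to the SYNC_SKCIPHER_REQUEST_ON_STACK approach being removed, this
keeps the subrequest off the stack and lets SIMD-based implementations,
which are asynchronous, be selected as the fallback.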