From patchwork Tue Nov 2 16:47:22 2021
X-Patchwork-Submitter: Nicolas Toromanoff
X-Patchwork-Id: 12599365
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Nicolas Toromanoff
To: Herbert Xu, "David S. Miller", Maxime Coquelin, Alexandre Torgue
Cc: Marek Vasut, Nicolas Toromanoff, Ard Biesheuvel, Etienne Carriere
Subject: [PATCH v2 1/8] crypto: stm32/cryp - defer probe for reset controller
Date: Tue, 2 Nov 2021 17:47:22 +0100
Message-ID: <20211102164729.9957-2-nicolas.toromanoff@foss.st.com>
In-Reply-To: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com>
References: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Etienne Carriere

Change the stm32 CRYP driver to defer its probe operation when the
reset controller device is registered but has not been probed yet.

Signed-off-by: Etienne Carriere
Signed-off-by: Nicolas Toromanoff
---
 drivers/crypto/stm32/stm32-cryp.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
index 7389a0536ff0..dcdd313485de 100644
--- a/drivers/crypto/stm32/stm32-cryp.c
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -1973,7 +1973,11 @@ static int stm32_cryp_probe(struct platform_device *pdev)
 	pm_runtime_enable(dev);

 	rst = devm_reset_control_get(dev, NULL);
-	if (!IS_ERR(rst)) {
+	if (IS_ERR(rst)) {
+		ret = PTR_ERR(rst);
+		if (ret == -EPROBE_DEFER)
+			goto err_rst;
+	} else {
 		reset_control_assert(rst);
 		udelay(2);
 		reset_control_deassert(rst);
@@ -2024,7 +2028,7 @@ static int stm32_cryp_probe(struct platform_device *pdev)
 	spin_lock(&cryp_list.lock);
 	list_del(&cryp->list);
 	spin_unlock(&cryp_list.lock);
-
+err_rst:
 	pm_runtime_disable(dev);
 	pm_runtime_put_noidle(dev);
 	pm_runtime_disable(dev);
From patchwork Tue Nov 2 16:47:23 2021
X-Patchwork-Submitter: Nicolas Toromanoff
X-Patchwork-Id: 12599367
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Nicolas Toromanoff
To: Herbert Xu, "David S. Miller", Maxime Coquelin, Alexandre Torgue
Cc: Marek Vasut, Nicolas Toromanoff, Ard Biesheuvel, Etienne Carriere
Subject: [PATCH v2 2/8] crypto: stm32/cryp - don't print error on probe deferral
Date: Tue, 2 Nov 2021 17:47:23 +0100
Message-ID: <20211102164729.9957-3-nicolas.toromanoff@foss.st.com>
In-Reply-To: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com>
References: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Etienne Carriere

Change the driver so that it does not print an error message when the
device probe is deferred for a clock resource.

Signed-off-by: Etienne Carriere
Signed-off-by: Nicolas Toromanoff
---
 drivers/crypto/stm32/stm32-cryp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
index dcdd313485de..7b55ad6d2f1a 100644
--- a/drivers/crypto/stm32/stm32-cryp.c
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -1955,7 +1955,8 @@ static int stm32_cryp_probe(struct platform_device *pdev)

 	cryp->clk = devm_clk_get(dev, NULL);
 	if (IS_ERR(cryp->clk)) {
-		dev_err(dev, "Could not get clock\n");
+		dev_err_probe(dev, PTR_ERR(cryp->clk), "Could not get clock\n");
+
 		return PTR_ERR(cryp->clk);
 	}
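dev_err_probe() stays silent for -EPROBE_DEFER (it records the deferral
reason instead, visible in /sys/kernel/debug/devices_deferred) and it
returns the error code it is given, so the hunk above could equally be
collapsed into a single statement; a sketch of that equivalent form:

	cryp->clk = devm_clk_get(dev, NULL);
	if (IS_ERR(cryp->clk))
		/* logs at error level, but quiet on probe deferral */
		return dev_err_probe(dev, PTR_ERR(cryp->clk),
				     "Could not get clock\n");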
Miller" , Maxime Coquelin , Alexandre Torgue CC: Marek Vasut , Nicolas Toromanoff , Ard Biesheuvel , , , , Subject: [PATCH v2 3/8] crypto: stm32/cryp - fix CTR counter carry Date: Tue, 2 Nov 2021 17:47:24 +0100 Message-ID: <20211102164729.9957-4-nicolas.toromanoff@foss.st.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com> References: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com> MIME-Version: 1.0 X-Originating-IP: [10.75.127.46] X-ClientProxiedBy: SFHDAG1NODE3.st.com (10.75.127.3) To SFHDAG2NODE2.st.com (10.75.127.5) X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.425,FMLib:17.0.607.475 definitions=2021-11-02_08,2021-11-02_01,2020-04-07_01 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org STM32 CRYP hardware doesn't manage CTR counter bigger than max U32, as a workaround, at each block the current IV is saved, if the saved IV lower u32 is 0xFFFFFFFF, the full IV is manually incremented, and set in hardware. Fixes: bbb2832620ac ("crypto: stm32 - Fix sparse warnings") Signed-off-by: Nicolas Toromanoff --- drivers/crypto/stm32/stm32-cryp.c | 25 ++++++++++++------------- 1 file changed, 12 insertions(+), 13 deletions(-) diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c index 7b55ad6d2f1a..9d6ccf1eb4ce 100644 --- a/drivers/crypto/stm32/stm32-cryp.c +++ b/drivers/crypto/stm32/stm32-cryp.c @@ -163,7 +163,7 @@ struct stm32_cryp { struct scatter_walk in_walk; struct scatter_walk out_walk; - u32 last_ctr[4]; + __be32 last_ctr[4]; u32 gcm_ctr; }; @@ -1218,26 +1218,25 @@ static void stm32_cryp_check_ctr_counter(struct stm32_cryp *cryp) u32 cr; if (unlikely(cryp->last_ctr[3] == 0xFFFFFFFF)) { - cryp->last_ctr[3] = 0; - cryp->last_ctr[2]++; - if (!cryp->last_ctr[2]) { - cryp->last_ctr[1]++; - if (!cryp->last_ctr[1]) - cryp->last_ctr[0]++; - } + /* + * In this case, we need to increment manually the ctr counter, + * as HW doesn't handle the U32 carry. 
+ */ + crypto_inc((u8 *)cryp->last_ctr, sizeof(cryp->last_ctr)); cr = stm32_cryp_read(cryp, CRYP_CR); stm32_cryp_write(cryp, CRYP_CR, cr & ~CR_CRYPEN); - stm32_cryp_hw_write_iv(cryp, (__be32 *)cryp->last_ctr); + stm32_cryp_hw_write_iv(cryp, cryp->last_ctr); stm32_cryp_write(cryp, CRYP_CR, cr); } - cryp->last_ctr[0] = stm32_cryp_read(cryp, CRYP_IV0LR); - cryp->last_ctr[1] = stm32_cryp_read(cryp, CRYP_IV0RR); - cryp->last_ctr[2] = stm32_cryp_read(cryp, CRYP_IV1LR); - cryp->last_ctr[3] = stm32_cryp_read(cryp, CRYP_IV1RR); + /* The IV registers are BE */ + cryp->last_ctr[0] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV0LR)); + cryp->last_ctr[1] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV0RR)); + cryp->last_ctr[2] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV1LR)); + cryp->last_ctr[3] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV1RR)); } static bool stm32_cryp_irq_read_data(struct stm32_cryp *cryp) From patchwork Tue Nov 2 16:47:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicolas Toromanoff X-Patchwork-Id: 12599371 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BF8AEC433EF for ; Tue, 2 Nov 2021 16:49:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A9F8060EB8 for ; Tue, 2 Nov 2021 16:49:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234754AbhKBQvo (ORCPT ); Tue, 2 Nov 2021 12:51:44 -0400 Received: from mx07-00178001.pphosted.com ([185.132.182.106]:42078 "EHLO mx07-00178001.pphosted.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234948AbhKBQvW (ORCPT ); Tue, 2 Nov 2021 12:51:22 -0400 Received: from pps.filterd (m0046668.ppops.net [127.0.0.1]) by mx07-00178001.pphosted.com (8.16.1.2/8.16.1.2) with ESMTP id 1A2ExLrA015345; Tue, 2 Nov 2021 17:48:33 +0100 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=foss.st.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=selector1; bh=c3P0JBoOld0KDv09Q6IshVxFhYKHwNh/H1gEQQ26NpQ=; b=Axp26peVCEns6xCoBpdD1pmfYJufYz5n2RH3VKsPIeJYLS2E821pczRI2oue5Tsbu2U0 N1RQjAsGcowdAc/4Qh1Vi5ofoawMV4SO5LqwHKHJRrOC4cZcLgspkCHeFXMKCDC9NdK3 SO/EMokVCLI62hBgWOMoU2KNaCdyHpNOue8cASmo+ztkFeZmc4Ydf0oq38QhRDfyZwDv JScWD0mUkxDFlcMEIpq45ZVmEB9moSGEKXRio+NndKjAsqbd93QNKk2EsCR/8xrf7HYd epOsFKbgRx+B5eOhA6ZxxyvwzANPj4EWfYeQjKootGUHe3qvZJAAxrt1jzG4FvXaXAl2 DA== Received: from beta.dmz-eu.st.com (beta.dmz-eu.st.com [164.129.1.35]) by mx07-00178001.pphosted.com (PPS) with ESMTPS id 3c2yg3kuam-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 02 Nov 2021 17:48:33 +0100 Received: from euls16034.sgp.st.com (euls16034.sgp.st.com [10.75.44.20]) by beta.dmz-eu.st.com (STMicroelectronics) with ESMTP id 916D4100034; Tue, 2 Nov 2021 17:48:32 +0100 (CET) Received: from Webmail-eu.st.com (sfhdag2node2.st.com [10.75.127.5]) by euls16034.sgp.st.com (STMicroelectronics) with ESMTP id 88EB323B83F; Tue, 2 Nov 2021 17:48:32 +0100 (CET) Received: from localhost (10.75.127.45) by SFHDAG2NODE2.st.com (10.75.127.5) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Tue, 2 Nov 2021 17:48:32 +0100 From: Nicolas Toromanoff To: Herbert Xu , "David S . 
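crypto_inc() (declared in include/crypto/algapi.h) treats its argument
as one big-endian multi-byte integer and ripples the carry across the
whole array, which is why last_ctr becomes __be32 above. A byte-level
model of the call, for illustration only:

/*
 * Equivalent of crypto_inc((u8 *)ctr, 16): increment a 128-bit
 * big-endian counter, letting the carry propagate from the last byte.
 */
static void ctr128_inc(u8 ctr[16])
{
	int i;

	for (i = 15; i >= 0; i--)
		if (++ctr[i])		/* byte did not wrap to 0: carry stops */
			break;
}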
Miller" , Maxime Coquelin , Alexandre Torgue CC: Marek Vasut , Nicolas Toromanoff , Ard Biesheuvel , , , , Subject: [PATCH v2 4/8] crypto: stm32/cryp - fix race condition in crypto_engine requests Date: Tue, 2 Nov 2021 17:47:25 +0100 Message-ID: <20211102164729.9957-5-nicolas.toromanoff@foss.st.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com> References: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com> MIME-Version: 1.0 X-Originating-IP: [10.75.127.45] X-ClientProxiedBy: SFHDAG1NODE3.st.com (10.75.127.3) To SFHDAG2NODE2.st.com (10.75.127.5) X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.425,FMLib:17.0.607.475 definitions=2021-11-02_08,2021-11-02_01,2020-04-07_01 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Erase key after finalizing request. The key was reseted to 0 before the crypto_finalize_.*_request() call, in some running path a pending call could run with a key={ 0 }. Fixes: 9e054ec21ef8 ("crypto: stm32 - Support for STM32 CRYP crypto module") Signed-off-by: Nicolas Toromanoff --- drivers/crypto/stm32/stm32-cryp.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c index 9d6ccf1eb4ce..c0903025a4cc 100644 --- a/drivers/crypto/stm32/stm32-cryp.c +++ b/drivers/crypto/stm32/stm32-cryp.c @@ -666,6 +666,8 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err) free_pages((unsigned long)buf_out, pages); } + memset(cryp->ctx->key, 0, sizeof(cryp->ctx->key)); + pm_runtime_mark_last_busy(cryp->dev); pm_runtime_put_autosuspend(cryp->dev); @@ -674,8 +676,6 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err) else crypto_finalize_skcipher_request(cryp->engine, cryp->req, err); - - memset(cryp->ctx->key, 0, cryp->ctx->keylen); } static int stm32_cryp_cpu_start(struct stm32_cryp *cryp) From patchwork Tue Nov 2 16:47:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicolas Toromanoff X-Patchwork-Id: 12599373 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6E51EC433F5 for ; Tue, 2 Nov 2021 16:49:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 56E2B60EBB for ; Tue, 2 Nov 2021 16:49:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234880AbhKBQvp (ORCPT ); Tue, 2 Nov 2021 12:51:45 -0400 Received: from mx08-00178001.pphosted.com ([91.207.212.93]:51832 "EHLO mx07-00178001.pphosted.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S235033AbhKBQvX (ORCPT ); Tue, 2 Nov 2021 12:51:23 -0400 Received: from pps.filterd (m0046661.ppops.net [127.0.0.1]) by mx07-00178001.pphosted.com (8.16.1.2/8.16.1.2) with ESMTP id 1A2G39t4025654; Tue, 2 Nov 2021 17:48:36 +0100 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=foss.st.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=selector1; bh=AvLW6/k4bd6TATcPdoATdn/HWLAD3uHiP4K+O5Y0WxE=; b=gVE8kwwH+qCGHyygD5m0fvKDB9ch9+54Gm8BoHRgP0CP1c0+onrsXtwbnH2rBsB/CF0+ prX/gmlt9aKs/SxbRuwo7i14Ws62Gm5WLheSx1a98oLoRf66j3Us/tcHndN65KHjsZl/ 
From patchwork Tue Nov 2 16:47:26 2021
X-Patchwork-Submitter: Nicolas Toromanoff
X-Patchwork-Id: 12599373
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Nicolas Toromanoff
To: Herbert Xu, "David S. Miller", Maxime Coquelin, Alexandre Torgue
Cc: Marek Vasut, Nicolas Toromanoff, Ard Biesheuvel
Subject: [PATCH v2 5/8] crypto: stm32/cryp - check early input data
Date: Tue, 2 Nov 2021 17:47:26 +0100
Message-ID: <20211102164729.9957-6-nicolas.toromanoff@foss.st.com>
In-Reply-To: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com>
References: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Some crypto manager auto tests failed because the driver did not return
the expected error for some input sizes, IV values or tag sizes. Now:
Return 0 early for an empty input buffer (no need to start the engine
for an empty input buffer).
Accept any valid authsize for gcm(aes).
Return -EINVAL if the iv for ccm(aes) is invalid.
Return -EINVAL if the buffer size is not a multiple of the algorithm
block size.

Fixes: 9e054ec21ef8 ("crypto: stm32 - Support for STM32 CRYP crypto module")
Signed-off-by: Nicolas Toromanoff
---
 drivers/crypto/stm32/stm32-cryp.c | 114 +++++++++++++++++++++++++++++-
 1 file changed, 113 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
index c0903025a4cc..8cea7d3abf87 100644
--- a/drivers/crypto/stm32/stm32-cryp.c
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -801,7 +801,20 @@ static int stm32_cryp_aes_aead_setkey(struct crypto_aead *tfm, const u8 *key,
 static int stm32_cryp_aes_gcm_setauthsize(struct crypto_aead *tfm,
 					  unsigned int authsize)
 {
-	return authsize == AES_BLOCK_SIZE ? 0 : -EINVAL;
+	switch (authsize) {
+	case 4:
+	case 8:
+	case 12:
+	case 13:
+	case 14:
+	case 15:
+	case 16:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
 }

 static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm,
@@ -825,31 +838,61 @@ static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm,

 static int stm32_cryp_aes_ecb_encrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % AES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB | FLG_ENCRYPT);
 }

 static int stm32_cryp_aes_ecb_decrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % AES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_AES | FLG_ECB);
 }

 static int stm32_cryp_aes_cbc_encrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % AES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC | FLG_ENCRYPT);
 }

 static int stm32_cryp_aes_cbc_decrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % AES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_AES | FLG_CBC);
 }

 static int stm32_cryp_aes_ctr_encrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR | FLG_ENCRYPT);
 }

 static int stm32_cryp_aes_ctr_decrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_AES | FLG_CTR);
 }

@@ -863,53 +906,122 @@ static int stm32_cryp_aes_gcm_decrypt(struct aead_request *req)
 	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_GCM);
 }

+static inline int crypto_ccm_check_iv(const u8 *iv)
+{
+	/* 2 <= L <= 8, so 1 <= L' <= 7. */
+	if (iv[0] < 1 || iv[0] > 7)
+		return -EINVAL;
+
+	return 0;
+}
+
 static int stm32_cryp_aes_ccm_encrypt(struct aead_request *req)
 {
+	int err;
+
+	err = crypto_ccm_check_iv(req->iv);
+	if (err)
+		return err;
+
 	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM | FLG_ENCRYPT);
 }

 static int stm32_cryp_aes_ccm_decrypt(struct aead_request *req)
 {
+	int err;
+
+	err = crypto_ccm_check_iv(req->iv);
+	if (err)
+		return err;
+
 	return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM);
 }

 static int stm32_cryp_des_ecb_encrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % DES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB | FLG_ENCRYPT);
 }

 static int stm32_cryp_des_ecb_decrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % DES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_DES | FLG_ECB);
 }

 static int stm32_cryp_des_cbc_encrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % DES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC | FLG_ENCRYPT);
 }

 static int stm32_cryp_des_cbc_decrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % DES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_DES | FLG_CBC);
 }

 static int stm32_cryp_tdes_ecb_encrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % DES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB | FLG_ENCRYPT);
 }

 static int stm32_cryp_tdes_ecb_decrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % DES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB);
 }

 static int stm32_cryp_tdes_cbc_encrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % DES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC | FLG_ENCRYPT);
 }

 static int stm32_cryp_tdes_cbc_decrypt(struct skcipher_request *req)
 {
+	if (req->cryptlen % DES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->cryptlen == 0)
+		return 0;
+
 	return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC);
 }
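The iv[0] bound comes from the CCM nonce layout (RFC 3610): iv[0]
carries L' = L - 1, where L is the byte width of the message-length
field, and the nonce fills the remaining 15 - L bytes of the block.
A small helper illustrating the same rule (ccm_nonce_len() is
hypothetical, not part of the patch):

static int ccm_nonce_len(const u8 *iv)
{
	unsigned int l = iv[0] + 1;	/* recover L from the stored L' */

	if (l < 2 || l > 8)		/* same 1 <= L' <= 7 check as above */
		return -EINVAL;

	return 15 - l;			/* nonce bytes following iv[0] */
}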
From patchwork Tue Nov 2 16:47:27 2021
X-Patchwork-Submitter: Nicolas Toromanoff
X-Patchwork-Id: 12599375
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Nicolas Toromanoff
To: Herbert Xu, "David S. Miller", Maxime Coquelin, Alexandre Torgue
Cc: Marek Vasut, Nicolas Toromanoff, Ard Biesheuvel
Subject: [PATCH v2 6/8] crypto: stm32/cryp - fix double pm exit
Date: Tue, 2 Nov 2021 17:47:27 +0100
Message-ID: <20211102164729.9957-7-nicolas.toromanoff@foss.st.com>
In-Reply-To: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com>
References: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Delete extraneous lines in the probe error handling code: pm was
disabled twice.

Fixes: 65f9aa36ee47 ("crypto: stm32/cryp - Add power management support")
Reported-by: Marek Vasut
Signed-off-by: Nicolas Toromanoff
---
 drivers/crypto/stm32/stm32-cryp.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
index 8cea7d3abf87..d61608dc416a 100644
--- a/drivers/crypto/stm32/stm32-cryp.c
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -2143,8 +2143,6 @@ static int stm32_cryp_probe(struct platform_device *pdev)
 err_rst:
 	pm_runtime_disable(dev);
 	pm_runtime_put_noidle(dev);
-	pm_runtime_disable(dev);
-	pm_runtime_put_noidle(dev);

 	clk_disable_unprepare(cryp->clk);
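This matters because pm_runtime_disable() increments a disable depth
that only returns to zero after a matching number of
pm_runtime_enable() calls, so a doubled disable leaves runtime PM
permanently off for the device. A minimal sketch of the balanced error
path (example_probe() and example_setup() are illustrative):

#include <linux/pm_runtime.h>

static int example_probe(struct device *dev)
{
	pm_runtime_enable(dev);

	if (example_setup(dev)) {		/* hypothetical setup step */
		pm_runtime_disable(dev);	/* exactly one disable ... */
		pm_runtime_put_noidle(dev);	/* ... and one usage-count drop */
		return -EIO;
	}

	return 0;
}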
Miller" , Maxime Coquelin , Alexandre Torgue CC: Marek Vasut , Nicolas Toromanoff , Ard Biesheuvel , , , , Subject: [PATCH v2 7/8] crypto: stm32/cryp - fix bugs and crash in tests Date: Tue, 2 Nov 2021 17:47:28 +0100 Message-ID: <20211102164729.9957-8-nicolas.toromanoff@foss.st.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com> References: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com> MIME-Version: 1.0 X-Originating-IP: [10.75.127.44] X-ClientProxiedBy: SFHDAG1NODE3.st.com (10.75.127.3) To SFHDAG2NODE2.st.com (10.75.127.5) X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.425,FMLib:17.0.607.475 definitions=2021-11-02_08,2021-11-02_01,2020-04-07_01 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Extra crypto manager auto test were crashing or failling due to 2 reasons: - block in a dead loop (dues to issues in cipher end process management) - crash due to read/write unmapped memory (this crash was also reported when using openssl afalg engine) Rework interrupt management, interrupts are masked as soon as they are no more used: if input buffer is fully consumed, "Input FIFO not full" interrupt is masked and if output buffer is full, "Output FIFO not empty" interrupt is masked. And crypto request finish when input *and* outpout buffer are fully read/write. About the crash due to unmapped memory, using scatterwalk_copychunks() that will map and copy each block fix the issue. Using this api and copying full block will also fix unaligned data access, avoid early copy of in/out buffer, and make useless the extra alignment constraint. Fixes: 9e054ec21ef8 ("crypto: stm32 - Support for STM32 CRYP crypto module") Reported-by: Marek Vasut Signed-off-by: Nicolas Toromanoff --- drivers/crypto/stm32/stm32-cryp.c | 790 +++++++++--------------------- 1 file changed, 243 insertions(+), 547 deletions(-) diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c index d61608dc416a..5962fbb0bc91 100644 --- a/drivers/crypto/stm32/stm32-cryp.c +++ b/drivers/crypto/stm32/stm32-cryp.c @@ -37,7 +37,6 @@ /* Mode mask = bits [15..0] */ #define FLG_MODE_MASK GENMASK(15, 0) /* Bit [31..16] status */ -#define FLG_CCM_PADDED_WA BIT(16) /* Registers */ #define CRYP_CR 0x00000000 @@ -105,8 +104,6 @@ /* Misc */ #define AES_BLOCK_32 (AES_BLOCK_SIZE / sizeof(u32)) #define GCM_CTR_INIT 2 -#define _walked_in (cryp->in_walk.offset - cryp->in_sg->offset) -#define _walked_out (cryp->out_walk.offset - cryp->out_sg->offset) #define CRYP_AUTOSUSPEND_DELAY 50 struct stm32_cryp_caps { @@ -144,21 +141,11 @@ struct stm32_cryp { size_t authsize; size_t hw_blocksize; - size_t total_in; - size_t total_in_save; - size_t total_out; - size_t total_out_save; + size_t payload_in; + size_t header_in; + size_t payload_out; - struct scatterlist *in_sg; struct scatterlist *out_sg; - struct scatterlist *out_sg_save; - - struct scatterlist in_sgl; - struct scatterlist out_sgl; - bool sgs_copied; - - int in_sg_len; - int out_sg_len; struct scatter_walk in_walk; struct scatter_walk out_walk; @@ -262,6 +249,7 @@ static inline int stm32_cryp_wait_output(struct stm32_cryp *cryp) } static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp); +static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err); static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx) { @@ -283,103 +271,6 @@ static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx) return cryp; } -static 
int stm32_cryp_check_aligned(struct scatterlist *sg, size_t total, - size_t align) -{ - int len = 0; - - if (!total) - return 0; - - if (!IS_ALIGNED(total, align)) - return -EINVAL; - - while (sg) { - if (!IS_ALIGNED(sg->offset, sizeof(u32))) - return -EINVAL; - - if (!IS_ALIGNED(sg->length, align)) - return -EINVAL; - - len += sg->length; - sg = sg_next(sg); - } - - if (len != total) - return -EINVAL; - - return 0; -} - -static int stm32_cryp_check_io_aligned(struct stm32_cryp *cryp) -{ - int ret; - - ret = stm32_cryp_check_aligned(cryp->in_sg, cryp->total_in, - cryp->hw_blocksize); - if (ret) - return ret; - - ret = stm32_cryp_check_aligned(cryp->out_sg, cryp->total_out, - cryp->hw_blocksize); - - return ret; -} - -static void sg_copy_buf(void *buf, struct scatterlist *sg, - unsigned int start, unsigned int nbytes, int out) -{ - struct scatter_walk walk; - - if (!nbytes) - return; - - scatterwalk_start(&walk, sg); - scatterwalk_advance(&walk, start); - scatterwalk_copychunks(buf, &walk, nbytes, out); - scatterwalk_done(&walk, out, 0); -} - -static int stm32_cryp_copy_sgs(struct stm32_cryp *cryp) -{ - void *buf_in, *buf_out; - int pages, total_in, total_out; - - if (!stm32_cryp_check_io_aligned(cryp)) { - cryp->sgs_copied = 0; - return 0; - } - - total_in = ALIGN(cryp->total_in, cryp->hw_blocksize); - pages = total_in ? get_order(total_in) : 1; - buf_in = (void *)__get_free_pages(GFP_ATOMIC, pages); - - total_out = ALIGN(cryp->total_out, cryp->hw_blocksize); - pages = total_out ? get_order(total_out) : 1; - buf_out = (void *)__get_free_pages(GFP_ATOMIC, pages); - - if (!buf_in || !buf_out) { - dev_err(cryp->dev, "Can't allocate pages when unaligned\n"); - cryp->sgs_copied = 0; - return -EFAULT; - } - - sg_copy_buf(buf_in, cryp->in_sg, 0, cryp->total_in, 0); - - sg_init_one(&cryp->in_sgl, buf_in, total_in); - cryp->in_sg = &cryp->in_sgl; - cryp->in_sg_len = 1; - - sg_init_one(&cryp->out_sgl, buf_out, total_out); - cryp->out_sg_save = cryp->out_sg; - cryp->out_sg = &cryp->out_sgl; - cryp->out_sg_len = 1; - - cryp->sgs_copied = 1; - - return 0; -} - static void stm32_cryp_hw_write_iv(struct stm32_cryp *cryp, __be32 *iv) { if (!iv) @@ -481,16 +372,99 @@ static int stm32_cryp_gcm_init(struct stm32_cryp *cryp, u32 cfg) /* Wait for end of processing */ ret = stm32_cryp_wait_enable(cryp); - if (ret) + if (ret) { dev_err(cryp->dev, "Timeout (gcm init)\n"); + return ret; + } - return ret; + /* Prepare next phase */ + if (cryp->areq->assoclen) { + cfg |= CR_PH_HEADER; + stm32_cryp_write(cryp, CRYP_CR, cfg); + } else if (stm32_cryp_get_input_text_len(cryp)) { + cfg |= CR_PH_PAYLOAD; + stm32_cryp_write(cryp, CRYP_CR, cfg); + } + + return 0; +} + +static void stm32_crypt_gcmccm_end_header(struct stm32_cryp *cryp) +{ + u32 cfg; + int err; + + /* Check if whole header written */ + if (!cryp->header_in) { + /* Wait for completion */ + err = stm32_cryp_wait_busy(cryp); + if (err) { + dev_err(cryp->dev, "Timeout (gcm/ccm header)\n"); + stm32_cryp_write(cryp, CRYP_IMSCR, 0); + stm32_cryp_finish_req(cryp, err); + return; + } + + if (stm32_cryp_get_input_text_len(cryp)) { + /* Phase 3 : payload */ + cfg = stm32_cryp_read(cryp, CRYP_CR); + cfg &= ~CR_CRYPEN; + stm32_cryp_write(cryp, CRYP_CR, cfg); + + cfg &= ~CR_PH_MASK; + cfg |= CR_PH_PAYLOAD | CR_CRYPEN; + stm32_cryp_write(cryp, CRYP_CR, cfg); + } else { + /* + * Phase 4 : tag. 
+ * Nothing to read, nothing to write, caller have to + * end request + */ + } + } +} + +static void stm32_cryp_write_ccm_first_header(struct stm32_cryp *cryp) +{ + unsigned int i; + size_t written; + size_t len; + u32 alen = cryp->areq->assoclen; + u32 block[AES_BLOCK_32] = {0}; + u8 *b8 = (u8 *)block; + + if (alen <= 65280) { + /* Write first u32 of B1 */ + b8[0] = (alen >> 8) & 0xFF; + b8[1] = alen & 0xFF; + len = 2; + } else { + /* Build the two first u32 of B1 */ + b8[0] = 0xFF; + b8[1] = 0xFE; + b8[2] = alen & 0xFF000000 >> 24; + b8[3] = alen & 0x00FF0000 >> 16; + b8[4] = alen & 0x0000FF00 >> 8; + b8[5] = alen & 0x000000FF; + len = 6; + } + + written = min_t(size_t, AES_BLOCK_SIZE - len, alen); + + scatterwalk_copychunks((char *)block + len, &cryp->in_walk, written, 0); + for (i = 0; i < AES_BLOCK_32; i++) + stm32_cryp_write(cryp, CRYP_DIN, block[i]); + + cryp->header_in -= written; + + stm32_crypt_gcmccm_end_header(cryp); } static int stm32_cryp_ccm_init(struct stm32_cryp *cryp, u32 cfg) { int ret; - u8 iv[AES_BLOCK_SIZE], b0[AES_BLOCK_SIZE]; + u32 iv_32[AES_BLOCK_32], b0_32[AES_BLOCK_32]; + u8 *iv = (u8 *)iv_32, *b0 = (u8 *)b0_32; __be32 *bd; u32 *d; unsigned int i, textlen; @@ -531,17 +505,30 @@ static int stm32_cryp_ccm_init(struct stm32_cryp *cryp, u32 cfg) /* Wait for end of processing */ ret = stm32_cryp_wait_enable(cryp); - if (ret) + if (ret) { dev_err(cryp->dev, "Timeout (ccm init)\n"); + return ret; + } - return ret; + /* Prepare next phase */ + if (cryp->areq->assoclen) { + cfg |= CR_PH_HEADER | CR_CRYPEN; + stm32_cryp_write(cryp, CRYP_CR, cfg); + + /* Write first (special) block (may move to next phase [payload]) */ + stm32_cryp_write_ccm_first_header(cryp); + } else if (stm32_cryp_get_input_text_len(cryp)) { + cfg |= CR_PH_PAYLOAD; + stm32_cryp_write(cryp, CRYP_CR, cfg); + } + + return 0; } static int stm32_cryp_hw_init(struct stm32_cryp *cryp) { int ret; u32 cfg, hw_mode; - pm_runtime_resume_and_get(cryp->dev); /* Disable interrupt */ @@ -605,16 +592,6 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp) if (ret) return ret; - /* Phase 2 : header (authenticated data) */ - if (cryp->areq->assoclen) { - cfg |= CR_PH_HEADER; - } else if (stm32_cryp_get_input_text_len(cryp)) { - cfg |= CR_PH_PAYLOAD; - stm32_cryp_write(cryp, CRYP_CR, cfg); - } else { - cfg |= CR_PH_INIT; - } - break; case CR_DES_CBC: @@ -633,8 +610,6 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp) stm32_cryp_write(cryp, CRYP_CR, cfg); - cryp->flags &= ~FLG_CCM_PADDED_WA; - return 0; } @@ -647,25 +622,6 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err) if (!err && (!(is_gcm(cryp) || is_ccm(cryp)))) stm32_cryp_get_iv(cryp); - if (cryp->sgs_copied) { - void *buf_in, *buf_out; - int pages, len; - - buf_in = sg_virt(&cryp->in_sgl); - buf_out = sg_virt(&cryp->out_sgl); - - sg_copy_buf(buf_out, cryp->out_sg_save, 0, - cryp->total_out_save, 1); - - len = ALIGN(cryp->total_in_save, cryp->hw_blocksize); - pages = len ? get_order(len) : 1; - free_pages((unsigned long)buf_in, pages); - - len = ALIGN(cryp->total_out_save, cryp->hw_blocksize); - pages = len ? 
get_order(len) : 1; - free_pages((unsigned long)buf_out, pages); - } - memset(cryp->ctx->key, 0, sizeof(cryp->ctx->key)); pm_runtime_mark_last_busy(cryp->dev); @@ -1031,6 +987,7 @@ static int stm32_cryp_prepare_req(struct skcipher_request *req, struct stm32_cryp_ctx *ctx; struct stm32_cryp *cryp; struct stm32_cryp_reqctx *rctx; + struct scatterlist *in_sg; int ret; if (!req && !areq) @@ -1056,76 +1013,55 @@ static int stm32_cryp_prepare_req(struct skcipher_request *req, if (req) { cryp->req = req; cryp->areq = NULL; - cryp->total_in = req->cryptlen; - cryp->total_out = cryp->total_in; + cryp->header_in = 0; + cryp->payload_in = req->cryptlen; + cryp->payload_out = req->cryptlen; + cryp->authsize = 0; } else { /* * Length of input and output data: * Encryption case: - * INPUT = AssocData || PlainText + * INPUT = AssocData || PlainText * <- assoclen -> <- cryptlen -> - * <------- total_in -----------> * - * OUTPUT = AssocData || CipherText || AuthTag - * <- assoclen -> <- cryptlen -> <- authsize -> - * <---------------- total_out -----------------> + * OUTPUT = AssocData || CipherText || AuthTag + * <- assoclen -> <-- cryptlen --> <- authsize -> * * Decryption case: - * INPUT = AssocData || CipherText || AuthTag - * <- assoclen -> <--------- cryptlen ---------> - * <- authsize -> - * <---------------- total_in ------------------> + * INPUT = AssocData || CipherTex || AuthTag + * <- assoclen ---> <---------- cryptlen ----------> * - * OUTPUT = AssocData || PlainText - * <- assoclen -> <- crypten - authsize -> - * <---------- total_out -----------------> + * OUTPUT = AssocData || PlainText + * <- assoclen -> <- cryptlen - authsize -> */ cryp->areq = areq; cryp->req = NULL; cryp->authsize = crypto_aead_authsize(crypto_aead_reqtfm(areq)); - cryp->total_in = areq->assoclen + areq->cryptlen; - if (is_encrypt(cryp)) - /* Append auth tag to output */ - cryp->total_out = cryp->total_in + cryp->authsize; - else - /* No auth tag in output */ - cryp->total_out = cryp->total_in - cryp->authsize; + if (is_encrypt(cryp)) { + cryp->payload_in = areq->cryptlen; + cryp->header_in = areq->assoclen; + cryp->payload_out = areq->cryptlen; + } else { + cryp->payload_in = areq->cryptlen - cryp->authsize; + cryp->header_in = areq->assoclen; + cryp->payload_out = cryp->payload_in; + } } - cryp->total_in_save = cryp->total_in; - cryp->total_out_save = cryp->total_out; + in_sg = req ? req->src : areq->src; + scatterwalk_start(&cryp->in_walk, in_sg); - cryp->in_sg = req ? req->src : areq->src; cryp->out_sg = req ? 
req->dst : areq->dst; - cryp->out_sg_save = cryp->out_sg; - - cryp->in_sg_len = sg_nents_for_len(cryp->in_sg, cryp->total_in); - if (cryp->in_sg_len < 0) { - dev_err(cryp->dev, "Cannot get in_sg_len\n"); - ret = cryp->in_sg_len; - return ret; - } - - cryp->out_sg_len = sg_nents_for_len(cryp->out_sg, cryp->total_out); - if (cryp->out_sg_len < 0) { - dev_err(cryp->dev, "Cannot get out_sg_len\n"); - ret = cryp->out_sg_len; - return ret; - } - - ret = stm32_cryp_copy_sgs(cryp); - if (ret) - return ret; - - scatterwalk_start(&cryp->in_walk, cryp->in_sg); scatterwalk_start(&cryp->out_walk, cryp->out_sg); if (is_gcm(cryp) || is_ccm(cryp)) { /* In output, jump after assoc data */ - scatterwalk_advance(&cryp->out_walk, cryp->areq->assoclen); - cryp->total_out -= cryp->areq->assoclen; + scatterwalk_copychunks(NULL, &cryp->out_walk, cryp->areq->assoclen, 2); } + if (is_ctr(cryp)) + memset(cryp->last_ctr, 0, sizeof(cryp->last_ctr)); + ret = stm32_cryp_hw_init(cryp); return ret; } @@ -1173,8 +1109,7 @@ static int stm32_cryp_aead_one_req(struct crypto_engine *engine, void *areq) if (!cryp) return -ENODEV; - if (unlikely(!cryp->areq->assoclen && - !stm32_cryp_get_input_text_len(cryp))) { + if (unlikely(!cryp->payload_in && !cryp->header_in)) { /* No input data to process: get tag and finish */ stm32_cryp_finish_req(cryp, 0); return 0; @@ -1183,43 +1118,10 @@ static int stm32_cryp_aead_one_req(struct crypto_engine *engine, void *areq) return stm32_cryp_cpu_start(cryp); } -static u32 *stm32_cryp_next_out(struct stm32_cryp *cryp, u32 *dst, - unsigned int n) -{ - scatterwalk_advance(&cryp->out_walk, n); - - if (unlikely(cryp->out_sg->length == _walked_out)) { - cryp->out_sg = sg_next(cryp->out_sg); - if (cryp->out_sg) { - scatterwalk_start(&cryp->out_walk, cryp->out_sg); - return (sg_virt(cryp->out_sg) + _walked_out); - } - } - - return (u32 *)((u8 *)dst + n); -} - -static u32 *stm32_cryp_next_in(struct stm32_cryp *cryp, u32 *src, - unsigned int n) -{ - scatterwalk_advance(&cryp->in_walk, n); - - if (unlikely(cryp->in_sg->length == _walked_in)) { - cryp->in_sg = sg_next(cryp->in_sg); - if (cryp->in_sg) { - scatterwalk_start(&cryp->in_walk, cryp->in_sg); - return (sg_virt(cryp->in_sg) + _walked_in); - } - } - - return (u32 *)((u8 *)src + n); -} - static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp) { - u32 cfg, size_bit, *dst, d32; - u8 *d8; - unsigned int i, j; + u32 cfg, size_bit; + unsigned int i; int ret = 0; /* Update Config */ @@ -1242,7 +1144,7 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp) stm32_cryp_write(cryp, CRYP_DIN, size_bit); size_bit = is_encrypt(cryp) ? 
cryp->areq->cryptlen : - cryp->areq->cryptlen - AES_BLOCK_SIZE; + cryp->areq->cryptlen - cryp->authsize; size_bit *= 8; if (cryp->caps->swap_final) size_bit = (__force u32)cpu_to_be32(size_bit); @@ -1251,11 +1153,9 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp) stm32_cryp_write(cryp, CRYP_DIN, size_bit); } else { /* CCM: write CTR0 */ - u8 iv[AES_BLOCK_SIZE]; - u32 *iv32 = (u32 *)iv; - __be32 *biv; - - biv = (void *)iv; + u32 iv32[AES_BLOCK_32]; + u8 *iv = (u8 *)iv32; + __be32 *biv = (__be32 *)iv32; memcpy(iv, cryp->areq->iv, AES_BLOCK_SIZE); memset(iv + AES_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1); @@ -1277,39 +1177,18 @@ static int stm32_cryp_read_auth_tag(struct stm32_cryp *cryp) } if (is_encrypt(cryp)) { + u32 out_tag[AES_BLOCK_32]; + /* Get and write tag */ - dst = sg_virt(cryp->out_sg) + _walked_out; + for (i = 0; i < AES_BLOCK_32; i++) + out_tag[i] = stm32_cryp_read(cryp, CRYP_DOUT); - for (i = 0; i < AES_BLOCK_32; i++) { - if (cryp->total_out >= sizeof(u32)) { - /* Read a full u32 */ - *dst = stm32_cryp_read(cryp, CRYP_DOUT); - - dst = stm32_cryp_next_out(cryp, dst, - sizeof(u32)); - cryp->total_out -= sizeof(u32); - } else if (!cryp->total_out) { - /* Empty fifo out (data from input padding) */ - stm32_cryp_read(cryp, CRYP_DOUT); - } else { - /* Read less than an u32 */ - d32 = stm32_cryp_read(cryp, CRYP_DOUT); - d8 = (u8 *)&d32; - - for (j = 0; j < cryp->total_out; j++) { - *((u8 *)dst) = *(d8++); - dst = stm32_cryp_next_out(cryp, dst, 1); - } - cryp->total_out = 0; - } - } + scatterwalk_copychunks(out_tag, &cryp->out_walk, cryp->authsize, 1); } else { /* Get and check tag */ u32 in_tag[AES_BLOCK_32], out_tag[AES_BLOCK_32]; - scatterwalk_map_and_copy(in_tag, cryp->in_sg, - cryp->total_in_save - cryp->authsize, - cryp->authsize, 0); + scatterwalk_copychunks(in_tag, &cryp->in_walk, cryp->authsize, 0); for (i = 0; i < AES_BLOCK_32; i++) out_tag[i] = stm32_cryp_read(cryp, CRYP_DOUT); @@ -1351,92 +1230,37 @@ static void stm32_cryp_check_ctr_counter(struct stm32_cryp *cryp) cryp->last_ctr[3] = cpu_to_be32(stm32_cryp_read(cryp, CRYP_IV1RR)); } -static bool stm32_cryp_irq_read_data(struct stm32_cryp *cryp) +static void stm32_cryp_irq_read_data(struct stm32_cryp *cryp) { - unsigned int i, j; - u32 d32, *dst; - u8 *d8; - size_t tag_size; - - /* Do no read tag now (if any) */ - if (is_encrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp))) - tag_size = cryp->authsize; - else - tag_size = 0; - - dst = sg_virt(cryp->out_sg) + _walked_out; - - for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) { - if (likely(cryp->total_out - tag_size >= sizeof(u32))) { - /* Read a full u32 */ - *dst = stm32_cryp_read(cryp, CRYP_DOUT); + unsigned int i; + u32 block[AES_BLOCK_32]; - dst = stm32_cryp_next_out(cryp, dst, sizeof(u32)); - cryp->total_out -= sizeof(u32); - } else if (cryp->total_out == tag_size) { - /* Empty fifo out (data from input padding) */ - d32 = stm32_cryp_read(cryp, CRYP_DOUT); - } else { - /* Read less than an u32 */ - d32 = stm32_cryp_read(cryp, CRYP_DOUT); - d8 = (u8 *)&d32; - - for (j = 0; j < cryp->total_out - tag_size; j++) { - *((u8 *)dst) = *(d8++); - dst = stm32_cryp_next_out(cryp, dst, 1); - } - cryp->total_out = tag_size; - } - } + for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) + block[i] = stm32_cryp_read(cryp, CRYP_DOUT); - return !(cryp->total_out - tag_size) || !cryp->total_in; + scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize, + cryp->payload_out), 1); + cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, + 
cryp->payload_out); } static void stm32_cryp_irq_write_block(struct stm32_cryp *cryp) { - unsigned int i, j; - u32 *src; - u8 d8[4]; - size_t tag_size; - - /* Do no write tag (if any) */ - if (is_decrypt(cryp) && (is_gcm(cryp) || is_ccm(cryp))) - tag_size = cryp->authsize; - else - tag_size = 0; - - src = sg_virt(cryp->in_sg) + _walked_in; + unsigned int i; + u32 block[AES_BLOCK_32] = {0}; - for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) { - if (likely(cryp->total_in - tag_size >= sizeof(u32))) { - /* Write a full u32 */ - stm32_cryp_write(cryp, CRYP_DIN, *src); + scatterwalk_copychunks(block, &cryp->in_walk, min_t(size_t, cryp->hw_blocksize, + cryp->payload_in), 0); + for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) + stm32_cryp_write(cryp, CRYP_DIN, block[i]); - src = stm32_cryp_next_in(cryp, src, sizeof(u32)); - cryp->total_in -= sizeof(u32); - } else if (cryp->total_in == tag_size) { - /* Write padding data */ - stm32_cryp_write(cryp, CRYP_DIN, 0); - } else { - /* Write less than an u32 */ - memset(d8, 0, sizeof(u32)); - for (j = 0; j < cryp->total_in - tag_size; j++) { - d8[j] = *((u8 *)src); - src = stm32_cryp_next_in(cryp, src, 1); - } - - stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8); - cryp->total_in = tag_size; - } - } + cryp->payload_in -= min_t(size_t, cryp->hw_blocksize, cryp->payload_in); } static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp) { int err; - u32 cfg, tmp[AES_BLOCK_32]; - size_t total_in_ori = cryp->total_in; - struct scatterlist *out_sg_ori = cryp->out_sg; + u32 cfg, block[AES_BLOCK_32] = {0}; unsigned int i; /* 'Special workaround' procedure described in the datasheet */ @@ -1461,18 +1285,25 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp) /* b) pad and write the last block */ stm32_cryp_irq_write_block(cryp); - cryp->total_in = total_in_ori; + /* wait end of process */ err = stm32_cryp_wait_output(cryp); if (err) { - dev_err(cryp->dev, "Timeout (write gcm header)\n"); + dev_err(cryp->dev, "Timeout (write gcm last data)\n"); return stm32_cryp_finish_req(cryp, err); } /* c) get and store encrypted data */ - stm32_cryp_irq_read_data(cryp); - scatterwalk_map_and_copy(tmp, out_sg_ori, - cryp->total_in_save - total_in_ori, - total_in_ori, 0); + /* + * Same code as stm32_cryp_irq_read_data(), but we want to store + * block value + */ + for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) + block[i] = stm32_cryp_read(cryp, CRYP_DOUT); + + scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize, + cryp->payload_out), 1); + cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, + cryp->payload_out); /* d) change mode back to AES GCM */ cfg &= ~CR_ALGO_MASK; @@ -1485,19 +1316,13 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp) stm32_cryp_write(cryp, CRYP_CR, cfg); /* f) write padded data */ - for (i = 0; i < AES_BLOCK_32; i++) { - if (cryp->total_in) - stm32_cryp_write(cryp, CRYP_DIN, tmp[i]); - else - stm32_cryp_write(cryp, CRYP_DIN, 0); - - cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in); - } + for (i = 0; i < AES_BLOCK_32; i++) + stm32_cryp_write(cryp, CRYP_DIN, block[i]); /* g) Empty fifo out */ err = stm32_cryp_wait_output(cryp); if (err) { - dev_err(cryp->dev, "Timeout (write gcm header)\n"); + dev_err(cryp->dev, "Timeout (write gcm padded data)\n"); return stm32_cryp_finish_req(cryp, err); } @@ -1510,16 +1335,14 @@ static void stm32_cryp_irq_write_gcm_padded_data(struct stm32_cryp *cryp) static void stm32_cryp_irq_set_npblb(struct 
stm32_cryp *cryp) { - u32 cfg, payload_bytes; + u32 cfg; /* disable ip, set NPBLB and reneable ip */ cfg = stm32_cryp_read(cryp, CRYP_CR); cfg &= ~CR_CRYPEN; stm32_cryp_write(cryp, CRYP_CR, cfg); - payload_bytes = is_decrypt(cryp) ? cryp->total_in - cryp->authsize : - cryp->total_in; - cfg |= (cryp->hw_blocksize - payload_bytes) << CR_NBPBL_SHIFT; + cfg |= (cryp->hw_blocksize - cryp->payload_in) << CR_NBPBL_SHIFT; cfg |= CR_CRYPEN; stm32_cryp_write(cryp, CRYP_CR, cfg); } @@ -1528,13 +1351,11 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp) { int err = 0; u32 cfg, iv1tmp; - u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32], tmp[AES_BLOCK_32]; - size_t last_total_out, total_in_ori = cryp->total_in; - struct scatterlist *out_sg_ori = cryp->out_sg; + u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32]; + u32 block[AES_BLOCK_32] = {0}; unsigned int i; /* 'Special workaround' procedure described in the datasheet */ - cryp->flags |= FLG_CCM_PADDED_WA; /* a) disable ip */ stm32_cryp_write(cryp, CRYP_IMSCR, 0); @@ -1564,7 +1385,7 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp) /* b) pad and write the last block */ stm32_cryp_irq_write_block(cryp); - cryp->total_in = total_in_ori; + /* wait end of process */ err = stm32_cryp_wait_output(cryp); if (err) { dev_err(cryp->dev, "Timeout (wite ccm padded data)\n"); @@ -1572,13 +1393,16 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp) } /* c) get and store decrypted data */ - last_total_out = cryp->total_out; - stm32_cryp_irq_read_data(cryp); + /* + * Same code as stm32_cryp_irq_read_data(), but we want to store + * block value + */ + for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++) + block[i] = stm32_cryp_read(cryp, CRYP_DOUT); - memset(tmp, 0, sizeof(tmp)); - scatterwalk_map_and_copy(tmp, out_sg_ori, - cryp->total_out_save - last_total_out, - last_total_out, 0); + scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize, + cryp->payload_out), 1); + cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, cryp->payload_out); /* d) Load again CRYP_CSGCMCCMxR */ for (i = 0; i < ARRAY_SIZE(cstmp2); i++) @@ -1595,10 +1419,10 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp) stm32_cryp_write(cryp, CRYP_CR, cfg); /* g) XOR and write padded data */ - for (i = 0; i < ARRAY_SIZE(tmp); i++) { - tmp[i] ^= cstmp1[i]; - tmp[i] ^= cstmp2[i]; - stm32_cryp_write(cryp, CRYP_DIN, tmp[i]); + for (i = 0; i < ARRAY_SIZE(block); i++) { + block[i] ^= cstmp1[i]; + block[i] ^= cstmp2[i]; + stm32_cryp_write(cryp, CRYP_DIN, block[i]); } /* h) wait for completion */ @@ -1612,30 +1436,34 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp) static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp) { - if (unlikely(!cryp->total_in)) { + if (unlikely(!cryp->payload_in)) { dev_warn(cryp->dev, "No more data to process\n"); return; } - if (unlikely(cryp->total_in < AES_BLOCK_SIZE && + if (unlikely(cryp->payload_in < AES_BLOCK_SIZE && (stm32_cryp_get_hw_mode(cryp) == CR_AES_GCM) && is_encrypt(cryp))) { /* Padding for AES GCM encryption */ - if (cryp->caps->padding_wa) + if (cryp->caps->padding_wa) { /* Special case 1 */ - return stm32_cryp_irq_write_gcm_padded_data(cryp); + stm32_cryp_irq_write_gcm_padded_data(cryp); + return; + } /* Setting padding bytes (NBBLB) */ stm32_cryp_irq_set_npblb(cryp); } - if (unlikely((cryp->total_in - cryp->authsize < AES_BLOCK_SIZE) && + if (unlikely((cryp->payload_in < AES_BLOCK_SIZE) && 
@@ -1528,13 +1351,11 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
 {
 	int err = 0;
 	u32 cfg, iv1tmp;
-	u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32], tmp[AES_BLOCK_32];
-	size_t last_total_out, total_in_ori = cryp->total_in;
-	struct scatterlist *out_sg_ori = cryp->out_sg;
+	u32 cstmp1[AES_BLOCK_32], cstmp2[AES_BLOCK_32];
+	u32 block[AES_BLOCK_32] = {0};
 	unsigned int i;

 	/* 'Special workaround' procedure described in the datasheet */
-	cryp->flags |= FLG_CCM_PADDED_WA;

 	/* a) disable ip */
 	stm32_cryp_write(cryp, CRYP_IMSCR, 0);
@@ -1564,7 +1385,7 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)

 	/* b) pad and write the last block */
 	stm32_cryp_irq_write_block(cryp);
-	cryp->total_in = total_in_ori;
+
 	/* wait end of process */
 	err = stm32_cryp_wait_output(cryp);
 	if (err) {
 		dev_err(cryp->dev, "Timeout (write ccm padded data)\n");
@@ -1572,13 +1393,16 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
 	}

 	/* c) get and store decrypted data */
-	last_total_out = cryp->total_out;
-	stm32_cryp_irq_read_data(cryp);
+	/*
+	 * Same code as stm32_cryp_irq_read_data(), but we want to store
+	 * block value
+	 */
+	for (i = 0; i < cryp->hw_blocksize / sizeof(u32); i++)
+		block[i] = stm32_cryp_read(cryp, CRYP_DOUT);

-	memset(tmp, 0, sizeof(tmp));
-	scatterwalk_map_and_copy(tmp, out_sg_ori,
-				 cryp->total_out_save - last_total_out,
-				 last_total_out, 0);
+	scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, cryp->hw_blocksize,
+							     cryp->payload_out), 1);
+	cryp->payload_out -= min_t(size_t, cryp->hw_blocksize, cryp->payload_out);

 	/* d) Load again CRYP_CSGCMCCMxR */
 	for (i = 0; i < ARRAY_SIZE(cstmp2); i++)
@@ -1595,10 +1419,10 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)
 	stm32_cryp_write(cryp, CRYP_CR, cfg);

 	/* g) XOR and write padded data */
-	for (i = 0; i < ARRAY_SIZE(tmp); i++) {
-		tmp[i] ^= cstmp1[i];
-		tmp[i] ^= cstmp2[i];
-		stm32_cryp_write(cryp, CRYP_DIN, tmp[i]);
+	for (i = 0; i < ARRAY_SIZE(block); i++) {
+		block[i] ^= cstmp1[i];
+		block[i] ^= cstmp2[i];
+		stm32_cryp_write(cryp, CRYP_DIN, block[i]);
 	}

 	/* h) wait for completion */
@@ -1612,30 +1436,34 @@ static void stm32_cryp_irq_write_ccm_padded_data(struct stm32_cryp *cryp)

 static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp)
 {
-	if (unlikely(!cryp->total_in)) {
+	if (unlikely(!cryp->payload_in)) {
 		dev_warn(cryp->dev, "No more data to process\n");
 		return;
 	}

-	if (unlikely(cryp->total_in < AES_BLOCK_SIZE &&
+	if (unlikely(cryp->payload_in < AES_BLOCK_SIZE &&
 		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_GCM) &&
 		     is_encrypt(cryp))) {
 		/* Padding for AES GCM encryption */
-		if (cryp->caps->padding_wa)
+		if (cryp->caps->padding_wa) {
 			/* Special case 1 */
-			return stm32_cryp_irq_write_gcm_padded_data(cryp);
+			stm32_cryp_irq_write_gcm_padded_data(cryp);
+			return;
+		}

 		/* Setting padding bytes (NBBLB) */
 		stm32_cryp_irq_set_npblb(cryp);
 	}

-	if (unlikely((cryp->total_in - cryp->authsize < AES_BLOCK_SIZE) &&
+	if (unlikely((cryp->payload_in < AES_BLOCK_SIZE) &&
 		     (stm32_cryp_get_hw_mode(cryp) == CR_AES_CCM) &&
 		     is_decrypt(cryp))) {
 		/* Padding for AES CCM decryption */
-		if (cryp->caps->padding_wa)
+		if (cryp->caps->padding_wa) {
 			/* Special case 2 */
-			return stm32_cryp_irq_write_ccm_padded_data(cryp);
+			stm32_cryp_irq_write_ccm_padded_data(cryp);
+			return;
+		}

 		/* Setting padding bytes (NBBLB) */
 		stm32_cryp_irq_set_npblb(cryp);
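Step g) of the CCM workaround re-injects the padded block after XOR-ing the read-back data with the two CRYP_CSGCMCCMxR snapshots taken before and after the AES-CTR detour. Reduced to a pure, standalone function (illustrative model, not driver code):

#include <stdint.h>
#include <stddef.h>

/* Model of step g): combine the read-back block with the two register
 * snapshots; the caller then feeds the result back into the input FIFO. */
static void xor_padded_block(uint32_t *block, const uint32_t *cstmp1,
			     const uint32_t *cstmp2, size_t nwords)
{
	size_t i;

	for (i = 0; i < nwords; i++)
		block[i] ^= cstmp1[i] ^ cstmp2[i];
}

int main(void)
{
	uint32_t block[4] = {1, 2, 3, 4};
	const uint32_t k1[4] = {5, 6, 7, 8};
	const uint32_t k2[4] = {5, 6, 7, 8};

	xor_padded_block(block, k1, k2, 4);	/* identical snapshots cancel out */
	return block[0] != 1;
}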
@@ -1647,192 +1475,60 @@ static void stm32_cryp_irq_write_data(struct stm32_cryp *cryp)
 	stm32_cryp_irq_write_block(cryp);
 }

-static void stm32_cryp_irq_write_gcm_header(struct stm32_cryp *cryp)
+static void stm32_cryp_irq_write_gcmccm_header(struct stm32_cryp *cryp)
 {
-	int err;
-	unsigned int i, j;
-	u32 cfg, *src;
-
-	src = sg_virt(cryp->in_sg) + _walked_in;
-
-	for (i = 0; i < AES_BLOCK_32; i++) {
-		stm32_cryp_write(cryp, CRYP_DIN, *src);
-
-		src = stm32_cryp_next_in(cryp, src, sizeof(u32));
-		cryp->total_in -= min_t(size_t, sizeof(u32), cryp->total_in);
-
-		/* Check if whole header written */
-		if ((cryp->total_in_save - cryp->total_in) ==
-				cryp->areq->assoclen) {
-			/* Write padding if needed */
-			for (j = i + 1; j < AES_BLOCK_32; j++)
-				stm32_cryp_write(cryp, CRYP_DIN, 0);
-
-			/* Wait for completion */
-			err = stm32_cryp_wait_busy(cryp);
-			if (err) {
-				dev_err(cryp->dev, "Timeout (gcm header)\n");
-				return stm32_cryp_finish_req(cryp, err);
-			}
-
-			if (stm32_cryp_get_input_text_len(cryp)) {
-				/* Phase 3 : payload */
-				cfg = stm32_cryp_read(cryp, CRYP_CR);
-				cfg &= ~CR_CRYPEN;
-				stm32_cryp_write(cryp, CRYP_CR, cfg);
-
-				cfg &= ~CR_PH_MASK;
-				cfg |= CR_PH_PAYLOAD;
-				cfg |= CR_CRYPEN;
-				stm32_cryp_write(cryp, CRYP_CR, cfg);
-			} else {
-				/* Phase 4 : tag */
-				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
-				stm32_cryp_finish_req(cryp, 0);
-			}
-
-			break;
-		}
-
-		if (!cryp->total_in)
-			break;
-	}
-}
+	unsigned int i;
+	u32 block[AES_BLOCK_32] = {0};
+	size_t written;

-static void stm32_cryp_irq_write_ccm_header(struct stm32_cryp *cryp)
-{
-	int err;
-	unsigned int i = 0, j, k;
-	u32 alen, cfg, *src;
-	u8 d8[4];
-
-	src = sg_virt(cryp->in_sg) + _walked_in;
-	alen = cryp->areq->assoclen;
-
-	if (!_walked_in) {
-		if (cryp->areq->assoclen <= 65280) {
-			/* Write first u32 of B1 */
-			d8[0] = (alen >> 8) & 0xFF;
-			d8[1] = alen & 0xFF;
-			d8[2] = *((u8 *)src);
-			src = stm32_cryp_next_in(cryp, src, 1);
-			d8[3] = *((u8 *)src);
-			src = stm32_cryp_next_in(cryp, src, 1);
-
-			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
-			i++;
-
-			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
-		} else {
-			/* Build the two first u32 of B1 */
-			d8[0] = 0xFF;
-			d8[1] = 0xFE;
-			d8[2] = alen & 0xFF000000;
-			d8[3] = alen & 0x00FF0000;
-
-			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
-			i++;
-
-			d8[0] = alen & 0x0000FF00;
-			d8[1] = alen & 0x000000FF;
-			d8[2] = *((u8 *)src);
-			src = stm32_cryp_next_in(cryp, src, 1);
-			d8[3] = *((u8 *)src);
-			src = stm32_cryp_next_in(cryp, src, 1);
-
-			stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
-			i++;
-
-			cryp->total_in -= min_t(size_t, 2, cryp->total_in);
-		}
-	}
+	written = min_t(size_t, AES_BLOCK_SIZE, cryp->header_in);

-	/* Write next u32 */
-	for (; i < AES_BLOCK_32; i++) {
-		/* Build an u32 */
-		memset(d8, 0, sizeof(u32));
-		for (k = 0; k < sizeof(u32); k++) {
-			d8[k] = *((u8 *)src);
-			src = stm32_cryp_next_in(cryp, src, 1);
-
-			cryp->total_in -= min_t(size_t, 1, cryp->total_in);
-			if ((cryp->total_in_save - cryp->total_in) == alen)
-				break;
-		}
+	scatterwalk_copychunks(block, &cryp->in_walk, written, 0);
+	for (i = 0; i < AES_BLOCK_32; i++)
+		stm32_cryp_write(cryp, CRYP_DIN, block[i]);

-		stm32_cryp_write(cryp, CRYP_DIN, *(u32 *)d8);
-
-		if ((cryp->total_in_save - cryp->total_in) == alen) {
-			/* Write padding if needed */
-			for (j = i + 1; j < AES_BLOCK_32; j++)
-				stm32_cryp_write(cryp, CRYP_DIN, 0);
-
-			/* Wait for completion */
-			err = stm32_cryp_wait_busy(cryp);
-			if (err) {
-				dev_err(cryp->dev, "Timeout (ccm header)\n");
-				return stm32_cryp_finish_req(cryp, err);
-			}
-
-			if (stm32_cryp_get_input_text_len(cryp)) {
-				/* Phase 3 : payload */
-				cfg = stm32_cryp_read(cryp, CRYP_CR);
-				cfg &= ~CR_CRYPEN;
-				stm32_cryp_write(cryp, CRYP_CR, cfg);
-
-				cfg &= ~CR_PH_MASK;
-				cfg |= CR_PH_PAYLOAD;
-				cfg |= CR_CRYPEN;
-				stm32_cryp_write(cryp, CRYP_CR, cfg);
-			} else {
-				/* Phase 4 : tag */
-				stm32_cryp_write(cryp, CRYP_IMSCR, 0);
-				stm32_cryp_finish_req(cryp, 0);
-			}
+	cryp->header_in -= written;

-			break;
-		}
-	}
+	stm32_crypt_gcmccm_end_header(cryp);
 }

 static irqreturn_t stm32_cryp_irq_thread(int irq, void *arg)
 {
 	struct stm32_cryp *cryp = arg;
 	u32 ph;
+	u32 it_mask = stm32_cryp_read(cryp, CRYP_IMSCR);

 	if (cryp->irq_status & MISR_OUT)
 		/* Output FIFO IRQ: read data */
-		if (unlikely(stm32_cryp_irq_read_data(cryp))) {
-			/* All bytes processed, finish */
-			stm32_cryp_write(cryp, CRYP_IMSCR, 0);
-			stm32_cryp_finish_req(cryp, 0);
-			return IRQ_HANDLED;
-		}
+		stm32_cryp_irq_read_data(cryp);

 	if (cryp->irq_status & MISR_IN) {
-		if (is_gcm(cryp)) {
-			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
-			if (unlikely(ph == CR_PH_HEADER))
-				/* Write Header */
-				stm32_cryp_irq_write_gcm_header(cryp);
-			else
-				/* Input FIFO IRQ: write data */
-				stm32_cryp_irq_write_data(cryp);
-			cryp->gcm_ctr++;
-		} else if (is_ccm(cryp)) {
+		if (is_gcm(cryp) || is_ccm(cryp)) {
 			ph = stm32_cryp_read(cryp, CRYP_CR) & CR_PH_MASK;
 			if (unlikely(ph == CR_PH_HEADER))
 				/* Write Header */
-				stm32_cryp_irq_write_ccm_header(cryp);
+				stm32_cryp_irq_write_gcmccm_header(cryp);
 			else
 				/* Input FIFO IRQ: write data */
 				stm32_cryp_irq_write_data(cryp);
+			if (is_gcm(cryp))
+				cryp->gcm_ctr++;
 		} else {
 			/* Input FIFO IRQ: write data */
 			stm32_cryp_irq_write_data(cryp);
 		}
 	}

+	/* Mask useless interrupts */
+	if (!cryp->payload_in && !cryp->header_in)
+		it_mask &= ~IMSCR_IN;
+	if (!cryp->payload_out)
+		it_mask &= ~IMSCR_OUT;
+	stm32_cryp_write(cryp, CRYP_IMSCR, it_mask);
+
+	if (!cryp->payload_in && !cryp->header_in && !cryp->payload_out)
+		stm32_cryp_finish_req(cryp, 0);
+
 	return IRQ_HANDLED;
 }
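The reworked interrupt thread no longer decides completion inside the read path: it derives the new interrupt mask and the "request done" condition purely from the three remaining-byte counters. Extracted as a pure function for illustration (bit values and names are stand-ins, not the real IMSCR layout):

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define IMSCR_IN	0x1u	/* illustrative bits */
#define IMSCR_OUT	0x2u

/* Model of the tail of stm32_cryp_irq_thread(): mask interrupt sources
 * that can no longer be serviced and report whether the request is done. */
static uint32_t update_it_mask(uint32_t it_mask, size_t header_in,
			       size_t payload_in, size_t payload_out,
			       bool *done)
{
	if (!payload_in && !header_in)
		it_mask &= ~IMSCR_IN;
	if (!payload_out)
		it_mask &= ~IMSCR_OUT;

	*done = !payload_in && !header_in && !payload_out;
	return it_mask;
}

int main(void)
{
	bool done;

	/* header fully written, payload still flowing both ways */
	assert(update_it_mask(IMSCR_IN | IMSCR_OUT, 0, 32, 32, &done) ==
	       (IMSCR_IN | IMSCR_OUT) && !done);

	/* everything consumed: both sources masked, request finished */
	assert(update_it_mask(IMSCR_IN | IMSCR_OUT, 0, 0, 0, &done) == 0 && done);
	return 0;
}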
@@ -1853,7 +1549,7 @@ static struct skcipher_alg crypto_algs[] = {
 		.base.cra_flags = CRYPTO_ALG_ASYNC,
 		.base.cra_blocksize = AES_BLOCK_SIZE,
 		.base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
-		.base.cra_alignmask = 0xf,
+		.base.cra_alignmask = 0,
 		.base.cra_module = THIS_MODULE,

 		.init = stm32_cryp_init_tfm,
@@ -1870,7 +1566,7 @@ static struct skcipher_alg crypto_algs[] = {
 		.base.cra_flags = CRYPTO_ALG_ASYNC,
 		.base.cra_blocksize = AES_BLOCK_SIZE,
 		.base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
-		.base.cra_alignmask = 0xf,
+		.base.cra_alignmask = 0,
 		.base.cra_module = THIS_MODULE,

 		.init = stm32_cryp_init_tfm,
@@ -1888,7 +1584,7 @@ static struct skcipher_alg crypto_algs[] = {
 		.base.cra_flags = CRYPTO_ALG_ASYNC,
 		.base.cra_blocksize = 1,
 		.base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
-		.base.cra_alignmask = 0xf,
+		.base.cra_alignmask = 0,
 		.base.cra_module = THIS_MODULE,

 		.init = stm32_cryp_init_tfm,
@@ -1906,7 +1602,7 @@ static struct skcipher_alg crypto_algs[] = {
 		.base.cra_flags = CRYPTO_ALG_ASYNC,
 		.base.cra_blocksize = DES_BLOCK_SIZE,
 		.base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
-		.base.cra_alignmask = 0xf,
+		.base.cra_alignmask = 0,
 		.base.cra_module = THIS_MODULE,

 		.init = stm32_cryp_init_tfm,
@@ -1923,7 +1619,7 @@ static struct skcipher_alg crypto_algs[] = {
 		.base.cra_flags = CRYPTO_ALG_ASYNC,
 		.base.cra_blocksize = DES_BLOCK_SIZE,
 		.base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
-		.base.cra_alignmask = 0xf,
+		.base.cra_alignmask = 0,
 		.base.cra_module = THIS_MODULE,

 		.init = stm32_cryp_init_tfm,
@@ -1941,7 +1637,7 @@ static struct skcipher_alg crypto_algs[] = {
 		.base.cra_flags = CRYPTO_ALG_ASYNC,
 		.base.cra_blocksize = DES_BLOCK_SIZE,
 		.base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
-		.base.cra_alignmask = 0xf,
+		.base.cra_alignmask = 0,
 		.base.cra_module = THIS_MODULE,

 		.init = stm32_cryp_init_tfm,
@@ -1958,7 +1654,7 @@ static struct skcipher_alg crypto_algs[] = {
 		.base.cra_flags = CRYPTO_ALG_ASYNC,
 		.base.cra_blocksize = DES_BLOCK_SIZE,
 		.base.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
-		.base.cra_alignmask = 0xf,
+		.base.cra_alignmask = 0,
 		.base.cra_module = THIS_MODULE,

 		.init = stm32_cryp_init_tfm,
@@ -1988,7 +1684,7 @@ static struct aead_alg aead_algs[] = {
 			.cra_flags = CRYPTO_ALG_ASYNC,
 			.cra_blocksize = 1,
 			.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
-			.cra_alignmask = 0xf,
+			.cra_alignmask = 0,
 			.cra_module = THIS_MODULE,
 		},
 	},
@@ -2008,7 +1704,7 @@ static struct aead_alg aead_algs[] = {
 			.cra_flags = CRYPTO_ALG_ASYNC,
 			.cra_blocksize = 1,
 			.cra_ctxsize = sizeof(struct stm32_cryp_ctx),
-			.cra_alignmask = 0xf,
+			.cra_alignmask = 0,
 			.cra_module = THIS_MODULE,
 		},
 	},
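Dropping cra_alignmask from 0xf to 0 follows directly from the scatterwalk rework: the driver no longer dereferences sg_virt() pointers word by word, so the crypto core no longer needs to realign requests into 16-byte-aligned bounce buffers before they reach the driver. A hedged sketch of how this is visible through the standard kernel crypto API (error handling trimmed; illustration only):

#include <crypto/skcipher.h>
#include <linux/err.h>
#include <linux/printk.h>

/* Sketch only: with .cra_alignmask = 0 the core reports no alignment
 * requirement and passes scatterlists straight through instead of
 * copying misaligned data into aligned bounce buffers. */
static int report_alignmask(void)
{
	struct crypto_skcipher *tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);

	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	pr_info("cbc(aes) alignmask: 0x%x\n", crypto_skcipher_alignmask(tfm));

	crypto_free_skcipher(tfm);
	return 0;
}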
From patchwork Tue Nov 2 16:47:29 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nicolas Toromanoff
X-Patchwork-Id: 12599379
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Nicolas Toromanoff
To: Herbert Xu , "David S . Miller" , Maxime Coquelin , Alexandre Torgue
CC: Marek Vasut , Nicolas Toromanoff , Ard Biesheuvel , , , ,
Subject: [PATCH v2 8/8] crypto: stm32/cryp - reorder hw initialization
Date: Tue, 2 Nov 2021 17:47:29 +0100
Message-ID: <20211102164729.9957-9-nicolas.toromanoff@foss.st.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com>
References: <20211102164729.9957-1-nicolas.toromanoff@foss.st.com>
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

The CRYP IP checks the key it is given against the current configuration,
so it is safer to write the whole configuration to the hardware first and
only then the key: this avoids an unexpected key rejection.

Signed-off-by: Nicolas Toromanoff
---
 drivers/crypto/stm32/stm32-cryp.c | 39 ++++++++++++++++++++-----------
 1 file changed, 26 insertions(+), 13 deletions(-)
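In other words, the hardware validates the key at write time against whatever algorithm and key-size bits are already programmed, so the key must come after the final configuration and the enable bit last of all. A minimal userspace model of the safe ordering (register helpers are stand-ins for MMIO accessors; CR_CRYPEN is an illustrative bit, not the real layout):

#include <stdint.h>
#include <stdio.h>

static uint32_t cryp_cr;

static void write_cr(uint32_t v)	{ cryp_cr = v; printf("CR  <- %08x\n", v); }
static void write_key(void)		{ printf("KEY <- (after CR is final)\n"); }

#define CR_CRYPEN 0x8000u	/* illustrative */

/* Safe ordering from the patch: full configuration first, then the key
 * (checked against that configuration), then enable the IP. */
static void hw_init_model(uint32_t cfg)
{
	write_cr(cfg);			/* full config, CRYPEN still clear */
	write_key();			/* key validated against final config */
	write_cr(cryp_cr | CR_CRYPEN);	/* enable last */
}

int main(void)
{
	hw_init_model(0x12u);
	return 0;
}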
diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
index 5962fbb0bc91..d99eea9cb8cd 100644
--- a/drivers/crypto/stm32/stm32-cryp.c
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -232,6 +232,11 @@ static inline int stm32_cryp_wait_busy(struct stm32_cryp *cryp)
 			!(status & SR_BUSY), 10, 100000);
 }

+static inline void stm32_cryp_enable(struct stm32_cryp *cryp)
+{
+	writel_relaxed(readl_relaxed(cryp->regs + CRYP_CR) | CR_CRYPEN, cryp->regs + CRYP_CR);
+}
+
 static inline int stm32_cryp_wait_enable(struct stm32_cryp *cryp)
 {
 	u32 status;
@@ -534,9 +539,6 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
 	/* Disable interrupt */
 	stm32_cryp_write(cryp, CRYP_IMSCR, 0);

-	/* Set key */
-	stm32_cryp_hw_write_key(cryp);
-
 	/* Set configuration */
 	cfg = CR_DATA8 | CR_FFLUSH;

@@ -562,23 +564,36 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)

 	/* AES ECB/CBC decrypt: run key preparation first */
 	if (is_decrypt(cryp) &&
 	    ((hw_mode == CR_AES_ECB) || (hw_mode == CR_AES_CBC))) {
-		stm32_cryp_write(cryp, CRYP_CR, cfg | CR_AES_KP | CR_CRYPEN);
+		/* Configure in key preparation mode */
+		stm32_cryp_write(cryp, CRYP_CR, cfg | CR_AES_KP);
+		/* Set key only after full configuration done */
+		stm32_cryp_hw_write_key(cryp);
+
+		/* Start prepare key */
+		stm32_cryp_enable(cryp);
 		/* Wait for end of processing */
 		ret = stm32_cryp_wait_busy(cryp);
 		if (ret) {
 			dev_err(cryp->dev, "Timeout (key preparation)\n");
 			return ret;
 		}
-	}

-	cfg |= hw_mode;
+		cfg |= hw_mode | CR_DEC_NOT_ENC;

-	if (is_decrypt(cryp))
-		cfg |= CR_DEC_NOT_ENC;
+		/* Apply updated config (Decrypt + algo) and flush */
+		stm32_cryp_write(cryp, CRYP_CR, cfg);
+	} else {
+		cfg |= hw_mode;
+		if (is_decrypt(cryp))
+			cfg |= CR_DEC_NOT_ENC;

-	/* Apply config and flush (valid when CRYPEN = 0) */
-	stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+		/* Apply config and flush */
+		stm32_cryp_write(cryp, CRYP_CR, cfg);
+
+		/* Set key only after configuration done */
+		stm32_cryp_hw_write_key(cryp);
+	}

 	switch (hw_mode) {
 	case CR_AES_GCM:
@@ -606,9 +621,7 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp)
 	}

 	/* Enable now */
-	cfg |= CR_CRYPEN;
-
-	stm32_cryp_write(cryp, CRYP_CR, cfg);
+	stm32_cryp_enable(cryp);

 	return 0;
 }
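For the AES-ECB/CBC decrypt path the ordering rule applies twice: the key is written once the key-preparation configuration is in place, and the final decrypt configuration is applied only after the key schedule has finished; the prepared key stays loaded, so no second key write is needed. A standalone model of that sequence (helper names mirror the driver, bodies stubbed, bit encodings illustrative):

#include <stdint.h>
#include <stdio.h>

#define CR_AES_KP	0x1u	/* illustrative encodings */
#define CR_AES_CBC	0x2u
#define CR_DEC_NOT_ENC	0x4u

static void write_cr(uint32_t v)	{ printf("CR  <- %08x\n", v); }
static void write_key(void)		{ printf("KEY <- ...\n"); }
static void enable_ip(void)		{ printf("CR  |= CRYPEN\n"); }
static int  wait_not_busy(void)		{ return 0; /* poll SR.BUSY in the driver */ }

static int aes_decrypt_init_model(uint32_t cfg)
{
	write_cr(cfg | CR_AES_KP);	/* 1. key-preparation mode, full config */
	write_key();			/* 2. key only after configuration */
	enable_ip();			/* 3. run the key schedule */
	if (wait_not_busy())		/* 4. wait for the prepared key */
		return -1;

	/* 5. switch to the real decrypt configuration; the prepared key
	 *    remains loaded, so it is not written again here. */
	write_cr(cfg | CR_AES_CBC | CR_DEC_NOT_ENC);
	return 0;
}

int main(void)
{
	return aes_decrypt_init_model(0);
}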