From patchwork Sun Sep 1 20:35:24 2019
X-Patchwork-Id: 11125717
From: Hans de Goede
Subject: [PATCH 1/9] crypto: arm - Rename functions to avoid conflict with crypto/sha256.h
Date: Sun, 1 Sep 2019 22:35:24 +0200
Message-Id: <20190901203532.2615-2-hdegoede@redhat.com>

Rename static / file-local functions so that they do not conflict with the
functions declared in crypto/sha256.h.

This is a preparation patch for folding crypto/sha256.h into crypto/sha.h.
Signed-off-by: Hans de Goede
---
 arch/arm/crypto/sha256_glue.c      |  8 ++++----
 arch/arm/crypto/sha256_neon_glue.c | 24 ++++++++++++------------
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/arm/crypto/sha256_glue.c b/arch/arm/crypto/sha256_glue.c
index 70efa9656bff..215497f011f2 100644
--- a/arch/arm/crypto/sha256_glue.c
+++ b/arch/arm/crypto/sha256_glue.c
@@ -39,7 +39,7 @@ int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
 }
 EXPORT_SYMBOL(crypto_sha256_arm_update);
 
-static int sha256_final(struct shash_desc *desc, u8 *out)
+static int crypto_sha256_arm_final(struct shash_desc *desc, u8 *out)
 {
     sha256_base_do_finalize(desc,
                 (sha256_block_fn *)sha256_block_data_order);
@@ -51,7 +51,7 @@ int crypto_sha256_arm_finup(struct shash_desc *desc, const u8 *data,
 {
     sha256_base_do_update(desc, data, len,
                 (sha256_block_fn *)sha256_block_data_order);
-    return sha256_final(desc, out);
+    return crypto_sha256_arm_final(desc, out);
 }
 EXPORT_SYMBOL(crypto_sha256_arm_finup);
 
@@ -59,7 +59,7 @@ static struct shash_alg algs[] = { {
     .digestsize = SHA256_DIGEST_SIZE,
     .init = sha256_base_init,
     .update = crypto_sha256_arm_update,
-    .final = sha256_final,
+    .final = crypto_sha256_arm_final,
     .finup = crypto_sha256_arm_finup,
     .descsize = sizeof(struct sha256_state),
     .base = {
@@ -73,7 +73,7 @@ static struct shash_alg algs[] = { {
     .digestsize = SHA224_DIGEST_SIZE,
     .init = sha224_base_init,
     .update = crypto_sha256_arm_update,
-    .final = sha256_final,
+    .final = crypto_sha256_arm_final,
     .finup = crypto_sha256_arm_finup,
     .descsize = sizeof(struct sha256_state),
     .base = {
diff --git a/arch/arm/crypto/sha256_neon_glue.c b/arch/arm/crypto/sha256_neon_glue.c
index a7ce38a36006..38645e415196 100644
--- a/arch/arm/crypto/sha256_neon_glue.c
+++ b/arch/arm/crypto/sha256_neon_glue.c
@@ -25,8 +25,8 @@
 asmlinkage void sha256_block_data_order_neon(u32 *digest, const void *data,
                 unsigned int num_blks);
 
-static int sha256_update(struct shash_desc *desc, const u8 *data,
-            unsigned int len)
+static int crypto_sha256_neon_update(struct shash_desc *desc, const u8 *data,
+            unsigned int len)
 {
     struct sha256_state *sctx = shash_desc_ctx(desc);
 
@@ -42,8 +42,8 @@ static int sha256_update(struct shash_desc *desc, const u8 *data,
     return 0;
 }
 
-static int sha256_finup(struct shash_desc *desc, const u8 *data,
-            unsigned int len, u8 *out)
+static int crypto_sha256_neon_finup(struct shash_desc *desc, const u8 *data,
+            unsigned int len, u8 *out)
 {
     if (!crypto_simd_usable())
         return crypto_sha256_arm_finup(desc, data, len, out);
@@ -59,17 +59,17 @@ static int sha256_finup(struct shash_desc *desc, const u8 *data,
     return sha256_base_finish(desc, out);
 }
 
-static int sha256_final(struct shash_desc *desc, u8 *out)
+static int crypto_sha256_neon_final(struct shash_desc *desc, u8 *out)
 {
-    return sha256_finup(desc, NULL, 0, out);
+    return crypto_sha256_neon_finup(desc, NULL, 0, out);
 }
 
 struct shash_alg sha256_neon_algs[] = { {
     .digestsize = SHA256_DIGEST_SIZE,
     .init = sha256_base_init,
-    .update = sha256_update,
-    .final = sha256_final,
-    .finup = sha256_finup,
+    .update = crypto_sha256_neon_update,
+    .final = crypto_sha256_neon_final,
+    .finup = crypto_sha256_neon_finup,
     .descsize = sizeof(struct sha256_state),
     .base = {
         .cra_name = "sha256",
@@ -81,9 +81,9 @@ struct shash_alg sha256_neon_algs[] = { {
 }, {
     .digestsize = SHA224_DIGEST_SIZE,
     .init = sha224_base_init,
-    .update = sha256_update,
-    .final = sha256_final,
-    .finup = sha256_finup,
+    .update = crypto_sha256_neon_update,
+    .final = crypto_sha256_neon_final,
+    .finup = crypto_sha256_neon_finup,
     .descsize = sizeof(struct sha256_state),
     .base = {
         .cra_name = "sha224",
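For reference, the clash this series avoids can be sketched as follows. This is an
illustration assembled from the signatures that appear in this series, not code taken
from the patch itself:

    /* declared in crypto/sha256.h (folded into crypto/sha.h later in this series): */
    extern int sha256_final(struct sha256_state *sctx, u8 *hash);

    /* file-local helper in arch/arm/crypto/sha256_glue.c before this patch: */
    static int sha256_final(struct shash_desc *desc, u8 *out);

    /*
     * Once both declarations become visible in the same translation unit the
     * compiler reports conflicting types for 'sha256_final', hence the rename
     * to crypto_sha256_arm_final() above.
     */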
From patchwork Sun Sep 1 20:35:25 2019
X-Patchwork-Id: 11125715
From: Hans de Goede
Subject: [PATCH 2/9] crypto: arm64 - Rename functions to avoid conflict with crypto/sha256.h
Date: Sun, 1 Sep 2019 22:35:25 +0200
Message-Id: <20190901203532.2615-3-hdegoede@redhat.com>

Rename static / file-local functions so that they do not conflict with the
functions declared in crypto/sha256.h.

This is a preparation patch for folding crypto/sha256.h into crypto/sha.h.
Signed-off-by: Hans de Goede
---
 arch/arm64/crypto/sha256-glue.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index 04b9d17b0733..e273faca924f 100644
--- a/arch/arm64/crypto/sha256-glue.c
+++ b/arch/arm64/crypto/sha256-glue.c
@@ -30,15 +30,15 @@ EXPORT_SYMBOL(sha256_block_data_order);
 asmlinkage void sha256_block_neon(u32 *digest, const void *data,
                 unsigned int num_blks);
 
-static int sha256_update(struct shash_desc *desc, const u8 *data,
-            unsigned int len)
+static int crypto_sha256_arm64_update(struct shash_desc *desc, const u8 *data,
+            unsigned int len)
 {
     return sha256_base_do_update(desc, data, len,
                 (sha256_block_fn *)sha256_block_data_order);
 }
 
-static int sha256_finup(struct shash_desc *desc, const u8 *data,
-            unsigned int len, u8 *out)
+static int crypto_sha256_arm64_finup(struct shash_desc *desc, const u8 *data,
+            unsigned int len, u8 *out)
 {
     if (len)
         sha256_base_do_update(desc, data, len,
@@ -49,17 +49,17 @@ static int sha256_finup(struct shash_desc *desc, const u8 *data,
     return sha256_base_finish(desc, out);
 }
 
-static int sha256_final(struct shash_desc *desc, u8 *out)
+static int crypto_sha256_arm64_final(struct shash_desc *desc, u8 *out)
 {
-    return sha256_finup(desc, NULL, 0, out);
+    return crypto_sha256_arm64_finup(desc, NULL, 0, out);
 }
 
 static struct shash_alg algs[] = { {
     .digestsize = SHA256_DIGEST_SIZE,
     .init = sha256_base_init,
-    .update = sha256_update,
-    .final = sha256_final,
-    .finup = sha256_finup,
+    .update = crypto_sha256_arm64_update,
+    .final = crypto_sha256_arm64_final,
+    .finup = crypto_sha256_arm64_finup,
     .descsize = sizeof(struct sha256_state),
     .base.cra_name = "sha256",
     .base.cra_driver_name = "sha256-arm64",
@@ -69,9 +69,9 @@ static struct shash_alg algs[] = { {
 }, {
     .digestsize = SHA224_DIGEST_SIZE,
     .init = sha224_base_init,
-    .update = sha256_update,
-    .final = sha256_final,
-    .finup = sha256_finup,
+    .update = crypto_sha256_arm64_update,
+    .final = crypto_sha256_arm64_final,
+    .finup = crypto_sha256_arm64_finup,
     .descsize = sizeof(struct sha256_state),
     .base.cra_name = "sha224",
     .base.cra_driver_name = "sha224-arm64",
From patchwork Sun Sep 1 20:35:26 2019
X-Patchwork-Id: 11125713
From: Hans de Goede
Subject: [PATCH 3/9] crypto: s390 - Rename functions to avoid conflict with crypto/sha256.h
Date: Sun, 1 Sep 2019 22:35:26 +0200
Message-Id: <20190901203532.2615-4-hdegoede@redhat.com>

Rename static / file-local functions so that they do not conflict with the
functions declared in crypto/sha256.h.

This is a preparation patch for folding crypto/sha256.h into crypto/sha.h.

Signed-off-by: Hans de Goede
---
 arch/s390/crypto/sha256_s390.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/s390/crypto/sha256_s390.c b/arch/s390/crypto/sha256_s390.c
index af7505148f80..b52c87e44939 100644
--- a/arch/s390/crypto/sha256_s390.c
+++ b/arch/s390/crypto/sha256_s390.c
@@ -17,7 +17,7 @@
 #include "sha.h"
 
-static int sha256_init(struct shash_desc *desc)
+static int s390_sha256_init(struct shash_desc *desc)
 {
     struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
 
@@ -60,7 +60,7 @@ static int sha256_import(struct shash_desc *desc, const void *in)
 static struct shash_alg sha256_alg = {
     .digestsize = SHA256_DIGEST_SIZE,
-    .init = sha256_init,
+    .init = s390_sha256_init,
     .update = s390_sha_update,
     .final = s390_sha_final,
     .export = sha256_export,
@@ -76,7 +76,7 @@ static struct shash_alg sha256_alg = {
     }
 };
 
-static int sha224_init(struct shash_desc *desc)
+static int s390_sha224_init(struct shash_desc *desc)
 {
     struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
 
@@ -96,7 +96,7 @@ static int sha224_init(struct shash_desc *desc)
 static struct shash_alg sha224_alg = {
     .digestsize = SHA224_DIGEST_SIZE,
-    .init = sha224_init,
+    .init = s390_sha224_init,
     .update = s390_sha_update,
     .final = s390_sha_final,
     .export = sha256_export,
From patchwork Sun Sep 1 20:35:27 2019
X-Patchwork-Id: 11125711
From: Hans de Goede
Subject: [PATCH 4/9] crypto: x86 - Rename functions to avoid conflict with crypto/sha256.h
Date: Sun, 1 Sep 2019 22:35:27 +0200
Message-Id: <20190901203532.2615-5-hdegoede@redhat.com>

Rename static / file-local functions so that they do not conflict with the
functions declared in crypto/sha256.h.

This is a preparation patch for folding crypto/sha256.h into crypto/sha.h.
Signed-off-by: Hans de Goede
---
 arch/x86/crypto/sha256_ssse3_glue.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index 73867da3cbee..f9aff31fe59e 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -45,8 +45,8 @@ asmlinkage void sha256_transform_ssse3(u32 *digest, const char *data,
                 u64 rounds);
 typedef void (sha256_transform_fn)(u32 *digest, const char *data, u64 rounds);
 
-static int sha256_update(struct shash_desc *desc, const u8 *data,
-            unsigned int len, sha256_transform_fn *sha256_xform)
+static int _sha256_update(struct shash_desc *desc, const u8 *data,
+            unsigned int len, sha256_transform_fn *sha256_xform)
 {
     struct sha256_state *sctx = shash_desc_ctx(desc);
 
@@ -84,7 +84,7 @@ static int sha256_finup(struct shash_desc *desc, const u8 *data,
 static int sha256_ssse3_update(struct shash_desc *desc, const u8 *data,
                 unsigned int len)
 {
-    return sha256_update(desc, data, len, sha256_transform_ssse3);
+    return _sha256_update(desc, data, len, sha256_transform_ssse3);
 }
 
 static int sha256_ssse3_finup(struct shash_desc *desc, const u8 *data,
@@ -151,7 +151,7 @@ asmlinkage void sha256_transform_avx(u32 *digest, const char *data,
 static int sha256_avx_update(struct shash_desc *desc, const u8 *data,
                 unsigned int len)
 {
-    return sha256_update(desc, data, len, sha256_transform_avx);
+    return _sha256_update(desc, data, len, sha256_transform_avx);
 }
 
 static int sha256_avx_finup(struct shash_desc *desc, const u8 *data,
@@ -233,7 +233,7 @@ asmlinkage void sha256_transform_rorx(u32 *digest, const char *data,
 static int sha256_avx2_update(struct shash_desc *desc, const u8 *data,
                 unsigned int len)
 {
-    return sha256_update(desc, data, len, sha256_transform_rorx);
+    return _sha256_update(desc, data, len, sha256_transform_rorx);
 }
 
 static int sha256_avx2_finup(struct shash_desc *desc, const u8 *data,
@@ -313,7 +313,7 @@ asmlinkage void sha256_ni_transform(u32 *digest, const char *data,
 static int sha256_ni_update(struct shash_desc *desc, const u8 *data,
                 unsigned int len)
 {
-    return sha256_update(desc, data, len, sha256_ni_transform);
+    return _sha256_update(desc, data, len, sha256_ni_transform);
 }
 
 static int sha256_ni_finup(struct shash_desc *desc, const u8 *data,
From patchwork Sun Sep 1 20:35:28 2019
X-Patchwork-Id: 11125719
From: Hans de Goede
Subject: [PATCH 5/9] crypto: ccree - Rename arrays to avoid conflict with crypto/sha256.h
Date: Sun, 1 Sep 2019 22:35:28 +0200
Message-Id: <20190901203532.2615-6-hdegoede@redhat.com>

Rename the algo_init arrays to cc_algo_init so that they do not conflict
with the functions declared in crypto/sha256.h.

This is a preparation patch for folding crypto/sha256.h into crypto/sha.h.

Signed-off-by: Hans de Goede
Signed-off-by: Gilad Ben-Yossef
Acked-by: Gilad Ben-Yossef
---
 drivers/crypto/ccree/cc_hash.c | 153 +++++++++++++++++----------------
 1 file changed, 77 insertions(+), 76 deletions(-)

diff --git a/drivers/crypto/ccree/cc_hash.c b/drivers/crypto/ccree/cc_hash.c
index a6abe4e3bb0e..bc71bdf44a9f 100644
--- a/drivers/crypto/ccree/cc_hash.c
+++ b/drivers/crypto/ccree/cc_hash.c
@@ -25,27 +25,27 @@ struct cc_hash_handle {
     struct list_head hash_list;
 };
 
-static const u32 digest_len_init[] = {
+static const u32 cc_digest_len_init[] = {
     0x00000040, 0x00000000, 0x00000000, 0x00000000 };
-static const u32 md5_init[] = {
+static const u32 cc_md5_init[] = {
     SHA1_H3, SHA1_H2, SHA1_H1, SHA1_H0 };
-static const u32 sha1_init[] = {
+static const u32 cc_sha1_init[] = {
     SHA1_H4, SHA1_H3, SHA1_H2, SHA1_H1, SHA1_H0 };
-static const u32 sha224_init[] = {
+static const u32 cc_sha224_init[] = {
     SHA224_H7, SHA224_H6, SHA224_H5, SHA224_H4,
     SHA224_H3, SHA224_H2, SHA224_H1, SHA224_H0 };
-static const u32 sha256_init[] = {
+static const u32 cc_sha256_init[] = {
     SHA256_H7, SHA256_H6, SHA256_H5, SHA256_H4,
     SHA256_H3, SHA256_H2, SHA256_H1, SHA256_H0 };
-static const u32 digest_len_sha512_init[] = {
+static const u32 cc_digest_len_sha512_init[] = {
     0x00000080, 0x00000000, 0x00000000, 0x00000000 };
-static u64 sha384_init[] = {
+static u64 cc_sha384_init[] = {
     SHA384_H7, SHA384_H6, SHA384_H5, SHA384_H4,
     SHA384_H3, SHA384_H2, SHA384_H1, SHA384_H0 };
-static u64 sha512_init[] = {
+static u64 cc_sha512_init[] = {
     SHA512_H7, SHA512_H6, SHA512_H5, SHA512_H4,
     SHA512_H3, SHA512_H2, SHA512_H1, SHA512_H0 };
-static const u32 sm3_init[] = {
+static const u32 cc_sm3_init[] = {
     SM3_IVH, SM3_IVG, SM3_IVF, SM3_IVE,
     SM3_IVD, SM3_IVC, SM3_IVB, SM3_IVA };
 
@@ -144,10 +144,11 @@ static void cc_init_req(struct device *dev, struct ahash_req_ctx *state,
         if (ctx->hash_mode == DRV_HASH_SHA512 ||
             ctx->hash_mode == DRV_HASH_SHA384)
             memcpy(state->digest_bytes_len,
-                   digest_len_sha512_init,
+                   cc_digest_len_sha512_init,
                    ctx->hash_len);
         else
-            memcpy(state->digest_bytes_len, digest_len_init,
+            memcpy(state->digest_bytes_len,
+                   cc_digest_len_init,
                    ctx->hash_len);
     }
 
@@ -1873,26 +1874,26 @@ int cc_init_hash_sram(struct cc_drvdata *drvdata)
     int rc = 0;
 
     /* Copy-to-sram digest-len */
-    cc_set_sram_desc(digest_len_init, sram_buff_ofs,
-             ARRAY_SIZE(digest_len_init), larval_seq,
+    cc_set_sram_desc(cc_digest_len_init, sram_buff_ofs,
+             ARRAY_SIZE(cc_digest_len_init), larval_seq,
              &larval_seq_len);
     rc = send_request_init(drvdata, larval_seq, larval_seq_len);
     if (rc)
         goto init_digest_const_err;
 
-    sram_buff_ofs += sizeof(digest_len_init);
+    sram_buff_ofs += sizeof(cc_digest_len_init);
     larval_seq_len = 0;
 
     if (large_sha_supported) {
         /* Copy-to-sram digest-len for sha384/512 */
-        cc_set_sram_desc(digest_len_sha512_init, sram_buff_ofs,
-                 ARRAY_SIZE(digest_len_sha512_init),
+        cc_set_sram_desc(cc_digest_len_sha512_init, sram_buff_ofs,
+                 ARRAY_SIZE(cc_digest_len_sha512_init),
                  larval_seq, &larval_seq_len);
         rc = send_request_init(drvdata, larval_seq, larval_seq_len);
         if (rc)
             goto init_digest_const_err;
 
-        sram_buff_ofs += sizeof(digest_len_sha512_init);
+        sram_buff_ofs += sizeof(cc_digest_len_sha512_init);
         larval_seq_len = 0;
     }
 
@@ -1900,64 +1901,64 @@ int cc_init_hash_sram(struct cc_drvdata *drvdata)
     hash_handle->larval_digest_sram_addr = sram_buff_ofs;
 
     /* Copy-to-sram initial SHA* digests */
-    cc_set_sram_desc(md5_init, sram_buff_ofs, ARRAY_SIZE(md5_init),
+    cc_set_sram_desc(cc_md5_init, sram_buff_ofs, ARRAY_SIZE(cc_md5_init),
             larval_seq, &larval_seq_len);
    rc = send_request_init(drvdata, larval_seq, larval_seq_len);
    if (rc)
        goto init_digest_const_err;
-    sram_buff_ofs += sizeof(md5_init);
+    sram_buff_ofs += sizeof(cc_md5_init);
     larval_seq_len = 0;
 
-    cc_set_sram_desc(sha1_init, sram_buff_ofs,
-             ARRAY_SIZE(sha1_init), larval_seq,
+    cc_set_sram_desc(cc_sha1_init, sram_buff_ofs,
+             ARRAY_SIZE(cc_sha1_init), larval_seq,
             &larval_seq_len);
    rc = send_request_init(drvdata, larval_seq, larval_seq_len);
    if (rc)
        goto init_digest_const_err;
-    sram_buff_ofs += sizeof(sha1_init);
+    sram_buff_ofs += sizeof(cc_sha1_init);
     larval_seq_len = 0;
 
-    cc_set_sram_desc(sha224_init, sram_buff_ofs,
-             ARRAY_SIZE(sha224_init), larval_seq,
+    cc_set_sram_desc(cc_sha224_init, sram_buff_ofs,
+             ARRAY_SIZE(cc_sha224_init), larval_seq,
             &larval_seq_len);
    rc = send_request_init(drvdata, larval_seq, larval_seq_len);
    if (rc)
        goto init_digest_const_err;
-    sram_buff_ofs += sizeof(sha224_init);
+    sram_buff_ofs += sizeof(cc_sha224_init);
     larval_seq_len = 0;
 
-    cc_set_sram_desc(sha256_init, sram_buff_ofs,
-             ARRAY_SIZE(sha256_init), larval_seq,
+    cc_set_sram_desc(cc_sha256_init, sram_buff_ofs,
+             ARRAY_SIZE(cc_sha256_init), larval_seq,
             &larval_seq_len);
    rc = send_request_init(drvdata, larval_seq, larval_seq_len);
    if (rc)
        goto init_digest_const_err;
-    sram_buff_ofs += sizeof(sha256_init);
+    sram_buff_ofs += sizeof(cc_sha256_init);
     larval_seq_len = 0;
 
     if (sm3_supported) {
-        cc_set_sram_desc(sm3_init, sram_buff_ofs,
-                 ARRAY_SIZE(sm3_init), larval_seq,
+        cc_set_sram_desc(cc_sm3_init, sram_buff_ofs,
+                 ARRAY_SIZE(cc_sm3_init), larval_seq,
                 &larval_seq_len);
        rc = send_request_init(drvdata, larval_seq, larval_seq_len);
        if (rc)
            goto init_digest_const_err;
-        sram_buff_ofs += sizeof(sm3_init);
+        sram_buff_ofs += sizeof(cc_sm3_init);
         larval_seq_len = 0;
     }
 
     if (large_sha_supported) {
-        cc_set_sram_desc((u32 *)sha384_init, sram_buff_ofs,
-                 (ARRAY_SIZE(sha384_init) * 2), larval_seq,
+        cc_set_sram_desc((u32 *)cc_sha384_init, sram_buff_ofs,
+                 (ARRAY_SIZE(cc_sha384_init) * 2), larval_seq,
                 &larval_seq_len);
        rc = send_request_init(drvdata, larval_seq, larval_seq_len);
        if (rc)
            goto init_digest_const_err;
-        sram_buff_ofs += sizeof(sha384_init);
+        sram_buff_ofs += sizeof(cc_sha384_init);
         larval_seq_len = 0;
 
-        cc_set_sram_desc((u32 *)sha512_init, sram_buff_ofs,
-                 (ARRAY_SIZE(sha512_init) * 2), larval_seq,
+        cc_set_sram_desc((u32 *)cc_sha512_init, sram_buff_ofs,
+                 (ARRAY_SIZE(cc_sha512_init) * 2), larval_seq,
                 &larval_seq_len);
        rc = send_request_init(drvdata, larval_seq, larval_seq_len);
        if (rc)
@@ -1986,8 +1987,8 @@ static void __init cc_swap_dwords(u32 *buf, unsigned long size)
  */
 void __init cc_hash_global_init(void)
 {
-    cc_swap_dwords((u32 *)&sha384_init, (ARRAY_SIZE(sha384_init) * 2));
-    cc_swap_dwords((u32 *)&sha512_init, (ARRAY_SIZE(sha512_init) * 2));
+    cc_swap_dwords((u32 *)&cc_sha384_init, (ARRAY_SIZE(cc_sha384_init) * 2));
+    cc_swap_dwords((u32 *)&cc_sha512_init, (ARRAY_SIZE(cc_sha512_init) * 2));
 }
 
 int cc_hash_alloc(struct cc_drvdata *drvdata)
@@ -2006,18 +2007,18 @@ int cc_hash_alloc(struct cc_drvdata *drvdata)
     INIT_LIST_HEAD(&hash_handle->hash_list);
     drvdata->hash_handle = hash_handle;
 
-    sram_size_to_alloc = sizeof(digest_len_init) +
-            sizeof(md5_init) +
-            sizeof(sha1_init) +
-            sizeof(sha224_init) +
-            sizeof(sha256_init);
+    sram_size_to_alloc = sizeof(cc_digest_len_init) +
+            sizeof(cc_md5_init) +
+            sizeof(cc_sha1_init) +
+            sizeof(cc_sha224_init) +
+            sizeof(cc_sha256_init);
 
     if (drvdata->hw_rev >= CC_HW_REV_713)
-        sram_size_to_alloc += sizeof(sm3_init);
+        sram_size_to_alloc += sizeof(cc_sm3_init);
 
     if (drvdata->hw_rev >= CC_HW_REV_712)
-        sram_size_to_alloc += sizeof(digest_len_sha512_init) +
-            sizeof(sha384_init) + sizeof(sha512_init);
+        sram_size_to_alloc += sizeof(cc_digest_len_sha512_init) +
+            sizeof(cc_sha384_init) + sizeof(cc_sha512_init);
 
     sram_buff = cc_sram_alloc(drvdata, sram_size_to_alloc);
     if (sram_buff == NULL_SRAM_ADDR) {
@@ -2258,22 +2259,22 @@ static const void *cc_larval_digest(struct device *dev, u32 mode)
 {
     switch (mode) {
     case DRV_HASH_MD5:
-        return md5_init;
+        return cc_md5_init;
     case DRV_HASH_SHA1:
-        return sha1_init;
+        return cc_sha1_init;
     case DRV_HASH_SHA224:
-        return sha224_init;
+        return cc_sha224_init;
     case DRV_HASH_SHA256:
-        return sha256_init;
+        return cc_sha256_init;
     case DRV_HASH_SHA384:
-        return sha384_init;
+        return cc_sha384_init;
     case DRV_HASH_SHA512:
-        return sha512_init;
+        return cc_sha512_init;
     case DRV_HASH_SM3:
-        return sm3_init;
+        return cc_sm3_init;
     default:
         dev_err(dev, "Invalid hash mode (%d)\n", mode);
-        return md5_init;
+        return cc_md5_init;
     }
 }
 
@@ -2301,40 +2302,40 @@ cc_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode)
         return (hash_handle->larval_digest_sram_addr);
     case DRV_HASH_SHA1:
         return (hash_handle->larval_digest_sram_addr +
-            sizeof(md5_init));
+            sizeof(cc_md5_init));
     case DRV_HASH_SHA224:
         return (hash_handle->larval_digest_sram_addr +
-            sizeof(md5_init) +
-            sizeof(sha1_init));
+            sizeof(cc_md5_init) +
+            sizeof(cc_sha1_init));
     case DRV_HASH_SHA256:
         return (hash_handle->larval_digest_sram_addr +
-            sizeof(md5_init) +
-            sizeof(sha1_init) +
-            sizeof(sha224_init));
+            sizeof(cc_md5_init) +
+            sizeof(cc_sha1_init) +
+            sizeof(cc_sha224_init));
     case DRV_HASH_SM3:
         return (hash_handle->larval_digest_sram_addr +
-            sizeof(md5_init) +
-            sizeof(sha1_init) +
-            sizeof(sha224_init) +
-            sizeof(sha256_init));
+            sizeof(cc_md5_init) +
+            sizeof(cc_sha1_init) +
+            sizeof(cc_sha224_init) +
+            sizeof(cc_sha256_init));
     case DRV_HASH_SHA384:
         addr = (hash_handle->larval_digest_sram_addr +
-            sizeof(md5_init) +
-            sizeof(sha1_init) +
-            sizeof(sha224_init) +
-            sizeof(sha256_init));
+            sizeof(cc_md5_init) +
+            sizeof(cc_sha1_init) +
+            sizeof(cc_sha224_init) +
+            sizeof(cc_sha256_init));
         if (sm3_supported)
-            addr += sizeof(sm3_init);
+            addr += sizeof(cc_sm3_init);
         return addr;
     case DRV_HASH_SHA512:
         addr = (hash_handle->larval_digest_sram_addr +
-            sizeof(md5_init) +
-            sizeof(sha1_init) +
-            sizeof(sha224_init) +
-            sizeof(sha256_init) +
-            sizeof(sha384_init));
+            sizeof(cc_md5_init) +
+            sizeof(cc_sha1_init) +
+            sizeof(cc_sha224_init) +
+            sizeof(cc_sha256_init) +
+            sizeof(cc_sha384_init));
         if (sm3_supported)
-            addr += sizeof(sm3_init);
+            addr += sizeof(cc_sm3_init);
         return addr;
     default:
         dev_err(dev, "Invalid hash mode (%d)\n", mode);
@@ -2360,7 +2361,7 @@ cc_digest_len_addr(void *drvdata, u32 mode)
 #if (CC_DEV_SHA_MAX > 256)
     case DRV_HASH_SHA384:
     case DRV_HASH_SHA512:
-        return digest_len_addr + sizeof(digest_len_init);
+        return digest_len_addr + sizeof(cc_digest_len_init);
 #endif
     default:
         return digest_len_addr; /*to avoid kernel crash*/
From patchwork Sun Sep 1 20:35:29 2019
X-Patchwork-Id: 11125721
From: Hans de Goede
Subject: [PATCH 6/9] crypto: chelsio - Rename arrays to avoid conflict with crypto/sha256.h
Date: Sun, 1 Sep 2019 22:35:29 +0200
Message-Id: <20190901203532.2615-7-hdegoede@redhat.com>

Rename the sha*_init arrays to chcr_sha*_init so that they do not conflict
with the functions declared in crypto/sha256.h.

This is a preparation patch for folding crypto/sha256.h into crypto/sha.h.

Signed-off-by: Hans de Goede
---
 drivers/crypto/chelsio/chcr_algo.h | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.h b/drivers/crypto/chelsio/chcr_algo.h
index ee20dd899e83..d1e6b51df0ce 100644
--- a/drivers/crypto/chelsio/chcr_algo.h
+++ b/drivers/crypto/chelsio/chcr_algo.h
@@ -333,26 +333,26 @@ struct phys_sge_pairs {
 };
 
-static const u32 sha1_init[SHA1_DIGEST_SIZE / 4] = {
+static const u32 chcr_sha1_init[SHA1_DIGEST_SIZE / 4] = {
     SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4,
 };
 
-static const u32 sha224_init[SHA256_DIGEST_SIZE / 4] = {
+static const u32 chcr_sha224_init[SHA256_DIGEST_SIZE / 4] = {
     SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3,
     SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7,
 };
 
-static const u32 sha256_init[SHA256_DIGEST_SIZE / 4] = {
+static const u32 chcr_sha256_init[SHA256_DIGEST_SIZE / 4] = {
     SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
     SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7,
 };
 
-static const u64 sha384_init[SHA512_DIGEST_SIZE / 8] = {
+static const u64 chcr_sha384_init[SHA512_DIGEST_SIZE / 8] = {
     SHA384_H0, SHA384_H1, SHA384_H2, SHA384_H3,
     SHA384_H4, SHA384_H5, SHA384_H6, SHA384_H7,
 };
 
-static const u64 sha512_init[SHA512_DIGEST_SIZE / 8] = {
+static const u64 chcr_sha512_init[SHA512_DIGEST_SIZE / 8] = {
     SHA512_H0, SHA512_H1, SHA512_H2, SHA512_H3,
     SHA512_H4, SHA512_H5, SHA512_H6, SHA512_H7,
 };
 
@@ -362,21 +362,21 @@ static inline void copy_hash_init_values(char *key, int digestsize)
     u8 i;
     __be32 *dkey = (__be32 *)key;
     u64 *ldkey = (u64 *)key;
-    __be64 *sha384 = (__be64 *)sha384_init;
-    __be64 *sha512 = (__be64 *)sha512_init;
+    __be64 *sha384 = (__be64 *)chcr_sha384_init;
+    __be64 *sha512 = (__be64 *)chcr_sha512_init;
 
     switch (digestsize) {
     case SHA1_DIGEST_SIZE:
         for (i = 0; i < SHA1_INIT_STATE; i++)
-            dkey[i] = cpu_to_be32(sha1_init[i]);
+            dkey[i] = cpu_to_be32(chcr_sha1_init[i]);
         break;
     case SHA224_DIGEST_SIZE:
         for (i = 0; i < SHA224_INIT_STATE; i++)
-            dkey[i] = cpu_to_be32(sha224_init[i]);
+            dkey[i] = cpu_to_be32(chcr_sha224_init[i]);
         break;
     case SHA256_DIGEST_SIZE:
         for (i = 0; i < SHA256_INIT_STATE; i++)
-            dkey[i] = cpu_to_be32(sha256_init[i]);
+            dkey[i] = cpu_to_be32(chcr_sha256_init[i]);
         break;
     case SHA384_DIGEST_SIZE:
         for (i = 0; i < SHA384_INIT_STATE; i++)
From patchwork Sun Sep 1 20:35:30 2019
X-Patchwork-Id: 11125725
From: Hans de Goede
Subject: [PATCH 7/9] crypto: n2 - Rename arrays to avoid conflict with crypto/sha256.h
Date: Sun, 1 Sep 2019 22:35:30 +0200
Message-Id: <20190901203532.2615-8-hdegoede@redhat.com>

Rename the sha*_init arrays to n2_sha*_init so that they do not conflict
with the functions declared in crypto/sha256.h. Also rename md5_init to
n2_md5_init for consistency.

This is a preparation patch for folding crypto/sha256.h into crypto/sha.h.
Signed-off-by: Hans de Goede
---
 drivers/crypto/n2_core.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/n2_core.c b/drivers/crypto/n2_core.c
index 760e72a5893b..c4bf85fc9601 100644
--- a/drivers/crypto/n2_core.c
+++ b/drivers/crypto/n2_core.c
@@ -1295,20 +1295,20 @@ struct n2_hash_tmpl {
     u8 hmac_type;
 };
 
-static const u32 md5_init[MD5_HASH_WORDS] = {
+static const u32 n2_md5_init[MD5_HASH_WORDS] = {
     cpu_to_le32(MD5_H0),
     cpu_to_le32(MD5_H1),
     cpu_to_le32(MD5_H2),
     cpu_to_le32(MD5_H3),
 };
-static const u32 sha1_init[SHA1_DIGEST_SIZE / 4] = {
+static const u32 n2_sha1_init[SHA1_DIGEST_SIZE / 4] = {
     SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4,
 };
-static const u32 sha256_init[SHA256_DIGEST_SIZE / 4] = {
+static const u32 n2_sha256_init[SHA256_DIGEST_SIZE / 4] = {
     SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
     SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7,
 };
-static const u32 sha224_init[SHA256_DIGEST_SIZE / 4] = {
+static const u32 n2_sha224_init[SHA256_DIGEST_SIZE / 4] = {
     SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3,
     SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7,
 };
@@ -1316,7 +1316,7 @@ static const u32 sha224_init[SHA256_DIGEST_SIZE / 4] = {
 static const struct n2_hash_tmpl hash_tmpls[] = {
     { .name = "md5",
       .hash_zero = md5_zero_message_hash,
-      .hash_init = md5_init,
+      .hash_init = n2_md5_init,
       .auth_type = AUTH_TYPE_MD5,
       .hmac_type = AUTH_TYPE_HMAC_MD5,
       .hw_op_hashsz = MD5_DIGEST_SIZE,
@@ -1324,7 +1324,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
       .block_size = MD5_HMAC_BLOCK_SIZE },
     { .name = "sha1",
       .hash_zero = sha1_zero_message_hash,
-      .hash_init = sha1_init,
+      .hash_init = n2_sha1_init,
       .auth_type = AUTH_TYPE_SHA1,
       .hmac_type = AUTH_TYPE_HMAC_SHA1,
       .hw_op_hashsz = SHA1_DIGEST_SIZE,
@@ -1332,7 +1332,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
       .block_size = SHA1_BLOCK_SIZE },
     { .name = "sha256",
       .hash_zero = sha256_zero_message_hash,
-      .hash_init = sha256_init,
+      .hash_init = n2_sha256_init,
       .auth_type = AUTH_TYPE_SHA256,
       .hmac_type = AUTH_TYPE_HMAC_SHA256,
       .hw_op_hashsz = SHA256_DIGEST_SIZE,
@@ -1340,7 +1340,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
       .block_size = SHA256_BLOCK_SIZE },
     { .name = "sha224",
       .hash_zero = sha224_zero_message_hash,
-      .hash_init = sha224_init,
+      .hash_init = n2_sha224_init,
       .auth_type = AUTH_TYPE_SHA256,
       .hmac_type = AUTH_TYPE_RESERVED,
       .hw_op_hashsz = SHA256_DIGEST_SIZE,
From patchwork Sun Sep 1 20:35:31 2019
X-Patchwork-Id: 11125727
From: Hans de Goede
Subject: [PATCH 8/9] crypto: sha256 - Merge crypto/sha256.h into crypto/sha.h
Date: Sun, 1 Sep 2019 22:35:31 +0200
Message-Id: <20190901203532.2615-9-hdegoede@redhat.com>

The generic sha256 implementation from lib/crypto/sha256.c uses data structs
defined in crypto/sha.h, so let's move the function prototypes there too.

Signed-off-by: Hans de Goede
---
 arch/s390/purgatory/purgatory.c |  2 +-
 arch/x86/purgatory/purgatory.c  |  2 +-
 crypto/sha256_generic.c         |  1 -
 include/crypto/sha.h            | 21 ++++++++++++++++++++
 include/crypto/sha256.h         | 34 ---------------------------------
 lib/crypto/sha256.c             |  2 +-
 6 files changed, 24 insertions(+), 38 deletions(-)
 delete mode 100644 include/crypto/sha256.h

diff --git a/arch/s390/purgatory/purgatory.c b/arch/s390/purgatory/purgatory.c
index a80c78da9985..0a423bcf6746 100644
--- a/arch/s390/purgatory/purgatory.c
+++ b/arch/s390/purgatory/purgatory.c
@@ -9,7 +9,7 @@
 #include
 #include
-#include <crypto/sha256.h>
+#include <crypto/sha.h>
 #include
 
 int verify_sha256_digest(void)
diff --git a/arch/x86/purgatory/purgatory.c b/arch/x86/purgatory/purgatory.c
index 7f90a86eff49..3b95410ff0f8 100644
--- a/arch/x86/purgatory/purgatory.c
+++ b/arch/x86/purgatory/purgatory.c
@@ -9,7 +9,7 @@
  */
 #include
-#include <crypto/sha256.h>
+#include <crypto/sha.h>
 #include
 #include "../boot/string.h"
diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c
index eafd10f9bf86..f2d7095d4f2d 100644
--- a/crypto/sha256_generic.c
+++ b/crypto/sha256_generic.c
@@ -13,7 +13,6 @@
 #include
 #include
 #include
-#include <crypto/sha256.h>
 #include
 #include
 #include
diff --git a/include/crypto/sha.h b/include/crypto/sha.h
index 8a46202b1857..535955c84187 100644
--- a/include/crypto/sha.h
+++ b/include/crypto/sha.h
@@ -112,4 +112,25 @@ extern int crypto_sha512_update(struct shash_desc *desc, const u8 *data,
 extern int crypto_sha512_finup(struct shash_desc *desc, const u8 *data,
                 unsigned int len, u8 *hash);
 
+
+/*
+ * Stand-alone implementation of the SHA256 algorithm. It is designed to
+ * have as little dependencies as possible so it can be used in the
+ * kexec_file purgatory. In other cases you should generally use the
+ * hash APIs from include/crypto/hash.h. Especially when hashing large
+ * amounts of data as those APIs may be hw-accelerated.
+ *
+ * For details see lib/crypto/sha256.c
+ */
+
+extern int sha256_init(struct sha256_state *sctx);
+extern int sha256_update(struct sha256_state *sctx, const u8 *input,
+             unsigned int length);
+extern int sha256_final(struct sha256_state *sctx, u8 *hash);
+
+extern int sha224_init(struct sha256_state *sctx);
+extern int sha224_update(struct sha256_state *sctx, const u8 *input,
+             unsigned int length);
+extern int sha224_final(struct sha256_state *sctx, u8 *hash);
+
 #endif
diff --git a/include/crypto/sha256.h b/include/crypto/sha256.h
deleted file mode 100644
index a75998d65a41..000000000000
--- a/include/crypto/sha256.h
+++ /dev/null
@@ -1,34 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2014 Red Hat Inc.
- *
- * Author: Vivek Goyal
- */
-
-#ifndef SHA256_H
-#define SHA256_H
-
-#include
-#include
-
-/*
- * Stand-alone implementation of the SHA256 algorithm. It is designed to
- * have as little dependencies as possible so it can be used in the
- * kexec_file purgatory. In other cases you should generally use the
- * hash APIs from include/crypto/hash.h. Especially when hashing large
- * amounts of data as those APIs may be hw-accelerated.
- *
- * For details see lib/crypto/sha256.c
- */
-
-extern int sha256_init(struct sha256_state *sctx);
-extern int sha256_update(struct sha256_state *sctx, const u8 *input,
-             unsigned int length);
-extern int sha256_final(struct sha256_state *sctx, u8 *hash);
-
-extern int sha224_init(struct sha256_state *sctx);
-extern int sha224_update(struct sha256_state *sctx, const u8 *input,
-             unsigned int length);
-extern int sha224_final(struct sha256_state *sctx, u8 *hash);
-
-#endif /* SHA256_H */
diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index 42d75e490a97..220b74c2bbd8 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -15,7 +15,7 @@
 #include
 #include
 #include
-#include <crypto/sha256.h>
+#include <crypto/sha.h>
 #include
 
 static inline u32 Ch(u32 x, u32 y, u32 z)
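The prototypes moved into crypto/sha.h above are the stand-alone library API that
the kexec purgatory code relies on. A minimal usage sketch of that API, built only
from the prototypes shown in this patch (hash_blob is a hypothetical caller, not
part of the series), would look like:

    #include <crypto/sha.h>

    static int hash_blob(const u8 *buf, unsigned int len,
                         u8 digest[SHA256_DIGEST_SIZE])
    {
        struct sha256_state sctx;

        /* init -> update -> final, all operating on a caller-owned state */
        sha256_init(&sctx);
        sha256_update(&sctx, buf, len);
        return sha256_final(&sctx, digest);
    }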
From patchwork Sun Sep 1 20:35:32 2019
X-Patchwork-Id: 11125729
From: Hans de Goede
Subject: [PATCH 9/9] crypto: sha256 - Remove sha256/224_init code duplication
Date: Sun, 1 Sep 2019 22:35:32 +0200
Message-Id: <20190901203532.2615-10-hdegoede@redhat.com>

lib/crypto/sha256.c and include/crypto/sha256_base.h define 99% identical
functions to init a sha256_state struct for sha224 or sha256 use.

This commit moves the functions from lib/crypto/sha256.c to
include/crypto/sha.h (making them static inline) and turns the
sha224_base_init()/sha256_base_init() helpers from
include/crypto/sha256_base.h into wrappers around the new static inline
crypto/sha.h functions.

Signed-off-by: Hans de Goede
---
 include/crypto/sha.h         | 30 ++++++++++++++++++++++++++--
 include/crypto/sha256_base.h | 24 ++----------------------
 lib/crypto/sha256.c          | 32 --------------------------------
 3 files changed, 30 insertions(+), 56 deletions(-)

diff --git a/include/crypto/sha.h b/include/crypto/sha.h
index 535955c84187..5c2132c71900 100644
--- a/include/crypto/sha.h
+++ b/include/crypto/sha.h
@@ -123,12 +123,38 @@ extern int crypto_sha512_finup(struct shash_desc *desc, const u8 *data,
  * For details see lib/crypto/sha256.c
  */
 
-extern int sha256_init(struct sha256_state *sctx);
+static inline int sha256_init(struct sha256_state *sctx)
+{
+    sctx->state[0] = SHA256_H0;
+    sctx->state[1] = SHA256_H1;
+    sctx->state[2] = SHA256_H2;
+    sctx->state[3] = SHA256_H3;
+    sctx->state[4] = SHA256_H4;
+    sctx->state[5] = SHA256_H5;
+    sctx->state[6] = SHA256_H6;
+    sctx->state[7] = SHA256_H7;
+    sctx->count = 0;
+
+    return 0;
+}
 extern int sha256_update(struct sha256_state *sctx, const u8 *input,
             unsigned int length);
 extern int sha256_final(struct sha256_state *sctx, u8 *hash);
 
-extern int sha224_init(struct sha256_state *sctx);
+static inline int sha224_init(struct sha256_state *sctx)
+{
+    sctx->state[0] = SHA224_H0;
+    sctx->state[1] = SHA224_H1;
+    sctx->state[2] = SHA224_H2;
+    sctx->state[3] = SHA224_H3;
+    sctx->state[4] = SHA224_H4;
+    sctx->state[5] = SHA224_H5;
+    sctx->state[6] = SHA224_H6;
+    sctx->state[7] = SHA224_H7;
+    sctx->count = 0;
+
+    return 0;
+}
 extern int sha224_update(struct sha256_state *sctx, const u8 *input,
             unsigned int length);
 extern int sha224_final(struct sha256_state *sctx, u8 *hash);
diff --git a/include/crypto/sha256_base.h b/include/crypto/sha256_base.h
index 59159bc944f5..b8af853690b9 100644
--- a/include/crypto/sha256_base.h
+++ b/include/crypto/sha256_base.h
@@ -19,34 +19,14 @@ static inline int sha224_base_init(struct shash_desc *desc)
 {
     struct sha256_state *sctx = shash_desc_ctx(desc);
 
-    sctx->state[0] = SHA224_H0;
-    sctx->state[1] = SHA224_H1;
-    sctx->state[2] = SHA224_H2;
-    sctx->state[3] = SHA224_H3;
-    sctx->state[4] = SHA224_H4;
-    sctx->state[5] = SHA224_H5;
-    sctx->state[6] = SHA224_H6;
-    sctx->state[7] = SHA224_H7;
-    sctx->count = 0;
-
-    return 0;
+    return sha224_init(sctx);
 }
 
 static inline int sha256_base_init(struct shash_desc *desc)
 {
     struct sha256_state *sctx = shash_desc_ctx(desc);
 
-    sctx->state[0] = SHA256_H0;
-    sctx->state[1] = SHA256_H1;
-    sctx->state[2] = SHA256_H2;
-    sctx->state[3] = SHA256_H3;
-    sctx->state[4] = SHA256_H4;
-    sctx->state[5] = SHA256_H5;
-    sctx->state[6] = SHA256_H6;
-    sctx->state[7] = SHA256_H7;
-    sctx->count = 0;
-
-    return 0;
+    return sha256_init(sctx);
 }
 
 static inline int sha256_base_do_update(struct shash_desc *desc,
diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index 220b74c2bbd8..66cb04b0cf4e 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -206,38 +206,6 @@ static void sha256_transform(u32 *state, const u8 *input)
     memzero_explicit(W, 64 * sizeof(u32));
 }
 
-int sha256_init(struct sha256_state *sctx)
-{
-    sctx->state[0] = SHA256_H0;
-    sctx->state[1] = SHA256_H1;
-    sctx->state[2] = SHA256_H2;
-    sctx->state[3] = SHA256_H3;
-    sctx->state[4] = SHA256_H4;
-    sctx->state[5] = SHA256_H5;
-    sctx->state[6] = SHA256_H6;
-    sctx->state[7] = SHA256_H7;
-    sctx->count = 0;
-
-    return 0;
-}
-EXPORT_SYMBOL(sha256_init);
-
-int sha224_init(struct sha256_state *sctx)
-{
-    sctx->state[0] = SHA224_H0;
-    sctx->state[1] = SHA224_H1;
-    sctx->state[2] = SHA224_H2;
-    sctx->state[3] = SHA224_H3;
-    sctx->state[4] = SHA224_H4;
-    sctx->state[5] = SHA224_H5;
-    sctx->state[6] = SHA224_H6;
-    sctx->state[7] = SHA224_H7;
-    sctx->count = 0;
-
-    return 0;
-}
-EXPORT_SYMBOL(sha224_init);
-
 int sha256_update(struct sha256_state *sctx, const u8 *data, unsigned int len)
 {
     unsigned int partial, done;