From patchwork Tue Feb 13 10:13:44 2024
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 13554909
From: Arnd Bergmann
To: Herbert Xu, "David S. Miller", Russell King, Ard Biesheuvel
Miller" , Russell King , Ard Biesheuvel Cc: Arnd Bergmann , Nathan Chancellor , Nick Desaulniers , Bill Wendling , Justin Stitt , Jussi Kivilinna , linux-crypto@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, llvm@lists.linux.dev Subject: [PATCH] ARM: crypto: fix function cast warnings Date: Tue, 13 Feb 2024 11:13:44 +0100 Message-Id: <20240213101356.460376-1-arnd@kernel.org> X-Mailer: git-send-email 2.39.2 MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240213_021405_256783_9C5E81B1 X-CRM114-Status: GOOD ( 12.07 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Arnd Bergmann clang-16 warns about casting between incompatible function types: arch/arm/crypto/sha256_glue.c:37:5: error: cast from 'void (*)(u32 *, const void *, unsigned int)' (aka 'void (*)(unsigned int *, const void *, unsigned int)') to 'sha256_block_fn *' (aka 'void (*)(struct sha256_state *, const unsigned char *, int)') converts to incompatible function type [-Werror,-Wcast-function-type-strict] 37 | (sha256_block_fn *)sha256_block_data_order); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ arch/arm/crypto/sha512-glue.c:34:3: error: cast from 'void (*)(u64 *, const u8 *, int)' (aka 'void (*)(unsigned long long *, const unsigned char *, int)') to 'sha512_block_fn *' (aka 'void (*)(struct sha512_state *, const unsigned char *, int)') converts to incompatible function type [-Werror,-Wcast-function-type-strict] 34 | (sha512_block_fn *)sha512_block_data_order); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rework the sha256/sha512 code to instead go through a trivial helper function to preserve the calling conventions. 
Fixes: c80ae7ca3726 ("crypto: arm/sha512 - accelerated SHA-512 using ARM generic ASM and NEON")
Fixes: b59e2ae3690c ("crypto: arm/sha256 - move SHA-224/256 ASM/NEON implementation to base layer")
Signed-off-by: Arnd Bergmann
---
 arch/arm/crypto/sha256_glue.c | 18 ++++++++++--------
 arch/arm/crypto/sha512-glue.c | 11 ++++++++---
 2 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/arch/arm/crypto/sha256_glue.c b/arch/arm/crypto/sha256_glue.c
index 433ee4ddce6c..d80448d96ab3 100644
--- a/arch/arm/crypto/sha256_glue.c
+++ b/arch/arm/crypto/sha256_glue.c
@@ -27,29 +27,31 @@
 asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
 					unsigned int num_blks);
 
-int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
-			     unsigned int len)
+static void sha256_block_data_order_wrapper(struct sha256_state *sst, u8 const *src, int blocks)
 {
 	/* make sure casting to sha256_block_fn() is safe */
 	BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
 
-	return sha256_base_do_update(desc, data, len,
-				(sha256_block_fn *)sha256_block_data_order);
+	return sha256_block_data_order((u32 *)sst, src, blocks);
+}
+
+int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
+			     unsigned int len)
+{
+	return sha256_base_do_update(desc, data, len, sha256_block_data_order_wrapper);
 }
 EXPORT_SYMBOL(crypto_sha256_arm_update);
 
 static int crypto_sha256_arm_final(struct shash_desc *desc, u8 *out)
 {
-	sha256_base_do_finalize(desc,
-				(sha256_block_fn *)sha256_block_data_order);
+	sha256_base_do_finalize(desc, sha256_block_data_order_wrapper);
 	return sha256_base_finish(desc, out);
 }
 
 int crypto_sha256_arm_finup(struct shash_desc *desc, const u8 *data,
 			    unsigned int len, u8 *out)
 {
-	sha256_base_do_update(desc, data, len,
-			      (sha256_block_fn *)sha256_block_data_order);
+	sha256_base_do_update(desc, data, len, sha256_block_data_order_wrapper);
 	return crypto_sha256_arm_final(desc, out);
 }
 EXPORT_SYMBOL(crypto_sha256_arm_finup);
diff --git a/arch/arm/crypto/sha512-glue.c b/arch/arm/crypto/sha512-glue.c
index 0635a65aa488..1b2c9c0c8a5f 100644
--- a/arch/arm/crypto/sha512-glue.c
+++ b/arch/arm/crypto/sha512-glue.c
@@ -27,17 +27,22 @@ MODULE_ALIAS_CRYPTO("sha512-arm");
 
 asmlinkage void sha512_block_data_order(u64 *state, u8 const *src, int blocks);
 
+static void sha512_block_data_order_wrapper(struct sha512_state *sst, u8 const *src, int blocks)
+{
+	return sha512_block_data_order((u64 *)sst, src, blocks);
+}
+
 int sha512_arm_update(struct shash_desc *desc, const u8 *data,
 		      unsigned int len)
 {
 	return sha512_base_do_update(desc, data, len,
-			(sha512_block_fn *)sha512_block_data_order);
+			sha512_block_data_order_wrapper);
 }
 
 static int sha512_arm_final(struct shash_desc *desc, u8 *out)
 {
 	sha512_base_do_finalize(desc,
-			(sha512_block_fn *)sha512_block_data_order);
+			sha512_block_data_order_wrapper);
 	return sha512_base_finish(desc, out);
 }
 
@@ -45,7 +50,7 @@ int sha512_arm_finup(struct shash_desc *desc, const u8 *data,
 		     unsigned int len, u8 *out)
 {
 	sha512_base_do_update(desc, data, len,
-			(sha512_block_fn *)sha512_block_data_order);
+			sha512_block_data_order_wrapper);
 	return sha512_arm_final(desc, out);
 }
 
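
A note on the BUILD_BUG_ON in the sha256 wrapper: the (u32 *)sst cast
is only well defined because the digest words are the first member of
the state structure, and C guarantees that a pointer to a structure,
suitably converted, points to its first member. The compile-time check
pins that layout assumption down. Sketched below with a hypothetical
layout; the real struct sha256_state is defined in the kernel headers:

#include <stddef.h>

struct sha256_state_like {
	unsigned int state[8];		/* must stay the first member */
	unsigned long long count;
	unsigned char buf[64];
};

/* Equivalent of BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0) */
_Static_assert(offsetof(struct sha256_state_like, state) == 0,
	       "digest words must sit at offset 0 for the pointer cast");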