From patchwork Fri Oct 18 07:53:50 2024
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 13841341
Date: Fri, 18 Oct 2024 09:53:50 +0200
In-Reply-To:
 <20241018075347.2821102-5-ardb+git@google.com>
Mime-Version: 1.0
References: <20241018075347.2821102-5-ardb+git@google.com>
Message-ID: <20241018075347.2821102-7-ardb+git@google.com>
Subject: [PATCH v4 2/3] arm64/crc32: Reorganize bit/byte ordering macros
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
 herbert@gondor.apana.org.au, will@kernel.org, catalin.marinas@arm.com,
 Ard Biesheuvel, Eric Biggers, Kees Cook

From: Ard Biesheuvel

In preparation for a new user, reorganize the bit/byte ordering macros
that are used to parameterize the crc32 template code and instantiate
CRC-32, CRC-32c and 'big endian' CRC-32.
Signed-off-by: Ard Biesheuvel
Reviewed-by: Eric Biggers
---
 arch/arm64/lib/crc32.S | 91 +++++++++-----------
 1 file changed, 39 insertions(+), 52 deletions(-)

diff --git a/arch/arm64/lib/crc32.S b/arch/arm64/lib/crc32.S
index 22139691c7ae..f9920492f135 100644
--- a/arch/arm64/lib/crc32.S
+++ b/arch/arm64/lib/crc32.S
@@ -10,44 +10,48 @@
 	.arch		armv8-a+crc
 
-	.macro		byteorder, reg, be
-	.if		\be
-CPU_LE( rev		\reg, \reg	)
-	.else
-CPU_BE( rev		\reg, \reg	)
-	.endif
+	.macro		bitle, reg
 	.endm
 
-	.macro		byteorder16, reg, be
-	.if		\be
-CPU_LE( rev16		\reg, \reg	)
-	.else
-CPU_BE( rev16		\reg, \reg	)
-	.endif
+	.macro		bitbe, reg
+	rbit		\reg, \reg
 	.endm
 
-	.macro		bitorder, reg, be
-	.if		\be
-	rbit		\reg, \reg
-	.endif
+	.macro		bytele, reg
 	.endm
 
-	.macro		bitorder16, reg, be
-	.if		\be
+	.macro		bytebe, reg
 	rbit		\reg, \reg
-	lsr		\reg, \reg, #16
-	.endif
+	lsr		\reg, \reg, #24
+	.endm
+
+	.macro		hwordle, reg
+CPU_BE( rev16		\reg, \reg	)
 	.endm
 
-	.macro		bitorder8, reg, be
-	.if		\be
+	.macro		hwordbe, reg
+CPU_LE( rev		\reg, \reg	)
 	rbit		\reg, \reg
-	lsr		\reg, \reg, #24
-	.endif
+CPU_BE( lsr		\reg, \reg, #16	)
+	.endm
+
+	.macro		le, regs:vararg
+	.irp		r, \regs
+CPU_BE( rev		\r, \r		)
+	.endr
+	.endm
+
+	.macro		be, regs:vararg
+	.irp		r, \regs
+CPU_LE( rev		\r, \r		)
+	.endr
+	.irp		r, \regs
+	rbit		\r, \r
+	.endr
 	.endm
 
-	.macro		__crc32, c, be=0
-	bitorder	w0, \be
+	.macro		__crc32, c, order=le
+	bit\order	w0
 	cmp		x2, #16
 	b.lt		8f // less than 16 bytes
 
@@ -60,14 +64,7 @@ CPU_BE( rev16		\reg, \reg	)
 	add		x8, x8, x1
 	add		x1, x1, x7
 	ldp		x5, x6, [x8]
-	byteorder	x3, \be
-	byteorder	x4, \be
-	byteorder	x5, \be
-	byteorder	x6, \be
-	bitorder	x3, \be
-	bitorder	x4, \be
-	bitorder	x5, \be
-	bitorder	x6, \be
+	\order		x3, x4, x5, x6
 
 	tst		x7, #8
 	crc32\c\()x	w8, w0, x3
@@ -95,42 +92,32 @@ CPU_BE( rev16		\reg, \reg	)
 32:	ldp		x3, x4, [x1], #32
 	sub		x2, x2, #32
 	ldp		x5, x6, [x1, #-16]
-	byteorder	x3, \be
-	byteorder	x4, \be
-	byteorder	x5, \be
-	byteorder	x6, \be
-	bitorder	x3, \be
-	bitorder	x4, \be
-	bitorder	x5, \be
-	bitorder	x6, \be
+	\order		x3, x4, x5, x6
 	crc32\c\()x	w0, w0, x3
 	crc32\c\()x	w0, w0, x4
 	crc32\c\()x	w0, w0, x5
 	crc32\c\()x	w0, w0, x6
 	cbnz		x2, 32b
-0:	bitorder	w0, \be
+0:	bit\order	w0
 	ret
 
 8:	tbz		x2, #3, 4f
 	ldr		x3, [x1], #8
-	byteorder	x3, \be
-	bitorder	x3, \be
+	\order		x3
 	crc32\c\()x	w0, w0, x3
 4:	tbz		x2, #2, 2f
 	ldr		w3, [x1], #4
-	byteorder	w3, \be
-	bitorder	w3, \be
+	\order		w3
 	crc32\c\()w	w0, w0, w3
 2:	tbz		x2, #1, 1f
 	ldrh		w3, [x1], #2
-	byteorder16	w3, \be
-	bitorder16	w3, \be
+	hword\order	w3
 	crc32\c\()h	w0, w0, w3
 1:	tbz		x2, #0, 0f
 	ldrb		w3, [x1]
-	bitorder8	w3, \be
+	byte\order	w3
 	crc32\c\()b	w0, w0, w3
-0:	bitorder	w0, \be
+0:	bit\order	w0
 	ret
 	.endm
 
@@ -146,5 +133,5 @@ SYM_FUNC_END(crc32c_le_arm64)
 	.align		5
 SYM_FUNC_START(crc32_be_arm64)
-	__crc32		be=1
+	__crc32		order=be
 SYM_FUNC_END(crc32_be_arm64)