From patchwork Sun Dec 31 15:27:33 2023
X-Patchwork-Submitter: Jerry Shih
X-Patchwork-Id: 13507248
From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Subject: [PATCH v4 01/11] RISC-V: add helper function to read the vector VLEN
Date: Sun, 31 Dec 2023 23:27:33 +0800
Message-Id: <20231231152743.6304-2-jerry.shih@sifive.com>
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>

From: Heiko Stuebner

VLEN describes the length of each vector register, and some instructions need a specific minimal VLEN to work correctly. The vector code already includes a variable, riscv_v_vsize, that is filled during boot with the total size of "32 vector registers of vlenb length". vlenb is the value of the CSR_VLENB register and represents VLEN / 8.

So add riscv_vector_vlen() to return the actual VLEN value in bits for in-kernel users that need to check the available VLEN.

Signed-off-by: Heiko Stuebner
Reviewed-by: Eric Biggers
Signed-off-by: Jerry Shih
---
 arch/riscv/include/asm/vector.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h
index d69844906d51..b04ee0a50315 100644
--- a/arch/riscv/include/asm/vector.h
+++ b/arch/riscv/include/asm/vector.h
@@ -294,4 +294,15 @@ static inline bool riscv_v_vstate_ctrl_user_allowed(void) { return false; }
 
 #endif /* CONFIG_RISCV_ISA_V */
 
+/*
+ * Return the implementation's vlen value.
+ *
+ * riscv_v_vsize contains the value of "32 vector registers with vlenb length"
+ * so rebuild the vlen value in bits from it.
+ */
+static inline int riscv_vector_vlen(void)
+{
+	return riscv_v_vsize / 32 * 8;
+}
+
 #endif /* ! __ASM_RISCV_VECTOR_H */
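In-kernel users of this helper are expected to pair it with an extension check before registering vector-crypto code. A minimal sketch of that pattern, mirroring the check_aes_ext() helper added later in this series; the function name below is made up for illustration, and the exact header providing riscv_isa_extension_available() can differ between kernel versions:

#include <asm/vector.h>      /* riscv_vector_vlen(), added by this patch */
#include <asm/cpufeature.h>  /* riscv_isa_extension_available(); may be <asm/hwcap.h> on older trees */

/* Hypothetical gate: the Zvkned-based routines in this series need VLEN >= 128. */
static inline bool example_can_use_zvkned(void)
{
	return riscv_isa_extension_available(NULL, ZVKNED) &&
	       riscv_vector_vlen() >= 128;
}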
From patchwork Sun Dec 31 15:27:34 2023
X-Patchwork-Submitter: Jerry Shih
X-Patchwork-Id: 13507250
From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Subject: [PATCH v4 02/11] RISC-V: hook new crypto subdir into build-system
Date: Sun, 31 Dec 2023 23:27:34 +0800
Message-Id: <20231231152743.6304-3-jerry.shih@sifive.com>
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>

From: Heiko Stuebner

Create a crypto subdirectory to hold the accelerated cryptography routines added by later patches, and hook it into the riscv Kbuild and the main crypto Kconfig.
Signed-off-by: Heiko Stuebner
Reviewed-by: Eric Biggers
Signed-off-by: Jerry Shih
---
 arch/riscv/Kbuild          | 1 +
 arch/riscv/crypto/Kconfig  | 5 +++++
 arch/riscv/crypto/Makefile | 4 ++++
 crypto/Kconfig             | 3 +++
 4 files changed, 13 insertions(+)
 create mode 100644 arch/riscv/crypto/Kconfig
 create mode 100644 arch/riscv/crypto/Makefile

diff --git a/arch/riscv/Kbuild b/arch/riscv/Kbuild
index d25ad1c19f88..2c585f7a0b6e 100644
--- a/arch/riscv/Kbuild
+++ b/arch/riscv/Kbuild
@@ -2,6 +2,7 @@
 obj-y += kernel/ mm/ net/
 obj-$(CONFIG_BUILTIN_DTB) += boot/dts/
+obj-$(CONFIG_CRYPTO) += crypto/
 obj-y += errata/
 obj-$(CONFIG_KVM) += kvm/

diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig
new file mode 100644
index 000000000000..10d60edc0110
--- /dev/null
+++ b/arch/riscv/crypto/Kconfig
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+
+menu "Accelerated Cryptographic Algorithms for CPU (riscv)"
+
+endmenu

diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile
new file mode 100644
index 000000000000..b3b6332c9f6d
--- /dev/null
+++ b/arch/riscv/crypto/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# linux/arch/riscv/crypto/Makefile
+#

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 70661f58ee41..c8fd2b83e589 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1519,6 +1519,9 @@ endif
 if PPC
 source "arch/powerpc/crypto/Kconfig"
 endif
+if RISCV
+source "arch/riscv/crypto/Kconfig"
+endif
 if S390
 source "arch/s390/crypto/Kconfig"
 endif
From patchwork Sun Dec 31 15:27:35 2023
X-Patchwork-Submitter: Jerry Shih
X-Patchwork-Id: 13507251

From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Subject: [PATCH v4 03/11] RISC-V: add TOOLCHAIN_HAS_VECTOR_CRYPTO in kconfig
Date: Sun, 31 Dec 2023 23:27:35 +0800
Message-Id: <20231231152743.6304-4-jerry.shih@sifive.com>
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>

LLVM main and binutils master now both fully support v1.0 of the RISC-V vector crypto extensions. Check the assembler's capability so the vector crypto asm mnemonics can be used in the kernel.
Co-developed-by: Eric Biggers
Signed-off-by: Eric Biggers
Signed-off-by: Jerry Shih
---
 arch/riscv/Kconfig | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 0a03d72706b5..8647392ece0b 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -636,6 +636,14 @@ config TOOLCHAIN_NEEDS_OLD_ISA_SPEC
 	  versions of clang and GCC to be passed to GAS, which has the same
 	  result as passing zicsr and zifencei to -march.
 
+# This option indicates that the toolchain supports all v1.0 vector crypto
+# extensions, including Zvk*, Zvbb, and Zvbc.  The LLVM added all of these at
+# once.  The binutils added all except Zvkb, then added Zvkb.  So we just check
+# for Zvkb.
+config TOOLCHAIN_HAS_VECTOR_CRYPTO
+	def_bool $(as-instr, .option arch$(comma) +zvkb)
+	depends on AS_HAS_OPTION_ARCH
+
 config FPU
 	bool "FPU support"
 	default y
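Beyond gating the Kconfig entries of the later patches, the same symbol lets C code emit vector-crypto mnemonics through inline assembly only when the assembler understands them. A rough, hypothetical sketch (not part of this patch): the function and instruction choice are purely illustrative, and it assumes kernel_vector_begin()/kernel_vector_end() from the kernel-mode vector support that the rest of this series relies on:

#ifdef CONFIG_TOOLCHAIN_HAS_VECTOR_CRYPTO
#include <asm/vector.h>

/* Illustrative only: rotate four 32-bit elements using the Zvkb vror.vi mnemonic. */
static void example_zvkb_use(void)
{
	kernel_vector_begin();
	asm volatile(".option push\n\t"
		     ".option arch, +v, +zvkb\n\t"
		     "vsetivli zero, 4, e32, m1, ta, ma\n\t"
		     "vror.vi v1, v1, 8\n\t"
		     ".option pop"
		     ::: "memory");
	kernel_vector_end();
}
#endif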
From patchwork Sun Dec 31 15:27:36 2023
X-Patchwork-Submitter: Jerry Shih
X-Patchwork-Id: 13507252

From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Subject: [PATCH v4 04/11] RISC-V: crypto: add Zvkned accelerated AES implementation
Date: Sun, 31 Dec 2023 23:27:36 +0800
Message-Id: <20231231152743.6304-5-jerry.shih@sifive.com>
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>

Add an AES implementation using the Zvkned vector crypto extension, ported from OpenSSL (openssl/openssl#21923).

Co-developed-by: Christoph Müllner
Signed-off-by: Christoph Müllner
Co-developed-by: Heiko Stuebner
Signed-off-by: Heiko Stuebner
Co-developed-by: Phoebe Chen
Signed-off-by: Phoebe Chen
Signed-off-by: Jerry Shih
---
Changelog v4:
- Use asm mnemonics for the instructions in the vector crypto 1.0 extension.
Changelog v3:
- Rename aes_setkey() to aes_setkey_zvkned().
- Rename riscv64_aes_setkey() to riscv64_aes_setkey_zvkned().
- Use the generic AES software key expansion everywhere.
- Remove rv64i_zvkned_set_encrypt_key(). We still need to provide the decryption expanded key for the SW fallback path, which the Zvkned extension does not support directly. So we use the pure generic software key expansion everywhere to simplify the set_key flow.
- Use asm mnemonics for the instructions in the RVV 1.0 extension.
Changelog v2:
- Do not turn on the kconfig `AES_RISCV64` option by default.
- Turn to use the `crypto_aes_ctx` structure for the AES key.
- Use `Zvkned` extension for AES-128/256 key expanding. - Export riscv64_aes_* symbols for other modules. - Add `asmlinkage` qualifier for crypto asm function. - Reorder structure riscv64_aes_alg_zvkned members initialization in the order declared. --- arch/riscv/crypto/Kconfig | 11 + arch/riscv/crypto/Makefile | 11 + arch/riscv/crypto/aes-riscv64-glue.c | 137 +++++++ arch/riscv/crypto/aes-riscv64-glue.h | 18 + arch/riscv/crypto/aes-riscv64-zvkned.pl | 453 ++++++++++++++++++++++++ 5 files changed, 630 insertions(+) create mode 100644 arch/riscv/crypto/aes-riscv64-glue.c create mode 100644 arch/riscv/crypto/aes-riscv64-glue.h create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index 10d60edc0110..2a7c365f2a86 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -2,4 +2,15 @@ menu "Accelerated Cryptographic Algorithms for CPU (riscv)" +config CRYPTO_AES_RISCV64 + tristate "Ciphers: AES" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_ALGAPI + select CRYPTO_LIB_AES + help + Block ciphers: AES cipher algorithms (FIPS-197) + + Architecture: riscv64 using: + - Zvkned vector crypto extension + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index b3b6332c9f6d..90ca91d8df26 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -2,3 +2,14 @@ # # linux/arch/riscv/crypto/Makefile # + +obj-$(CONFIG_CRYPTO_AES_RISCV64) += aes-riscv64.o +aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o + +quiet_cmd_perlasm = PERLASM $@ + cmd_perlasm = $(PERL) $(<) void $(@) + +$(obj)/aes-riscv64-zvkned.S: $(src)/aes-riscv64-zvkned.pl + $(call cmd,perlasm) + +clean-files += aes-riscv64-zvkned.S diff --git a/arch/riscv/crypto/aes-riscv64-glue.c b/arch/riscv/crypto/aes-riscv64-glue.c new file mode 100644 index 000000000000..f29898c25652 --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-glue.c @@ -0,0 +1,137 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Port of the OpenSSL AES implementation for RISC-V + * + * Copyright (C) 2023 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "aes-riscv64-glue.h" + +/* aes cipher using zvkned vector crypto extension */ +asmlinkage void rv64i_zvkned_encrypt(const u8 *in, u8 *out, + const struct crypto_aes_ctx *key); +asmlinkage void rv64i_zvkned_decrypt(const u8 *in, u8 *out, + const struct crypto_aes_ctx *key); + +int riscv64_aes_setkey_zvkned(struct crypto_aes_ctx *ctx, const u8 *key, + unsigned int keylen) +{ + int ret; + + ret = aes_check_keylen(keylen); + if (ret < 0) + return -EINVAL; + + /* + * The RISC-V AES vector crypto key expanding doesn't support AES-192. + * So, we use the generic software key expanding here for all cases. 
+ */ + return aes_expandkey(ctx, key, keylen); +} +EXPORT_SYMBOL(riscv64_aes_setkey_zvkned); + +void riscv64_aes_encrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst, + const u8 *src) +{ + if (crypto_simd_usable()) { + kernel_vector_begin(); + rv64i_zvkned_encrypt(src, dst, ctx); + kernel_vector_end(); + } else { + aes_encrypt(ctx, dst, src); + } +} +EXPORT_SYMBOL(riscv64_aes_encrypt_zvkned); + +void riscv64_aes_decrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst, + const u8 *src) +{ + if (crypto_simd_usable()) { + kernel_vector_begin(); + rv64i_zvkned_decrypt(src, dst, ctx); + kernel_vector_end(); + } else { + aes_decrypt(ctx, dst, src); + } +} +EXPORT_SYMBOL(riscv64_aes_decrypt_zvkned); + +static int aes_setkey_zvkned(struct crypto_tfm *tfm, const u8 *key, + unsigned int keylen) +{ + struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); + + return riscv64_aes_setkey_zvkned(ctx, key, keylen); +} + +static void aes_encrypt_zvkned(struct crypto_tfm *tfm, u8 *dst, const u8 *src) +{ + const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); + + riscv64_aes_encrypt_zvkned(ctx, dst, src); +} + +static void aes_decrypt_zvkned(struct crypto_tfm *tfm, u8 *dst, const u8 *src) +{ + const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); + + riscv64_aes_decrypt_zvkned(ctx, dst, src); +} + +static struct crypto_alg riscv64_aes_alg_zvkned = { + .cra_flags = CRYPTO_ALG_TYPE_CIPHER, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_priority = 300, + .cra_name = "aes", + .cra_driver_name = "aes-riscv64-zvkned", + .cra_cipher = { + .cia_min_keysize = AES_MIN_KEY_SIZE, + .cia_max_keysize = AES_MAX_KEY_SIZE, + .cia_setkey = aes_setkey_zvkned, + .cia_encrypt = aes_encrypt_zvkned, + .cia_decrypt = aes_decrypt_zvkned, + }, + .cra_module = THIS_MODULE, +}; + +static inline bool check_aes_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKNED) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_aes_mod_init(void) +{ + if (check_aes_ext()) + return crypto_register_alg(&riscv64_aes_alg_zvkned); + + return -ENODEV; +} + +static void __exit riscv64_aes_mod_fini(void) +{ + crypto_unregister_alg(&riscv64_aes_alg_zvkned); +} + +module_init(riscv64_aes_mod_init); +module_exit(riscv64_aes_mod_fini); + +MODULE_DESCRIPTION("AES (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("aes"); diff --git a/arch/riscv/crypto/aes-riscv64-glue.h b/arch/riscv/crypto/aes-riscv64-glue.h new file mode 100644 index 000000000000..2b544125091e --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-glue.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef AES_RISCV64_GLUE_H +#define AES_RISCV64_GLUE_H + +#include +#include + +int riscv64_aes_setkey_zvkned(struct crypto_aes_ctx *ctx, const u8 *key, + unsigned int keylen); + +void riscv64_aes_encrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst, + const u8 *src); + +void riscv64_aes_decrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst, + const u8 *src); + +#endif /* AES_RISCV64_GLUE_H */ diff --git a/arch/riscv/crypto/aes-riscv64-zvkned.pl b/arch/riscv/crypto/aes-riscv64-zvkned.pl new file mode 100644 index 000000000000..583e87912e5d --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-zvkned.pl @@ -0,0 +1,453 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. 
All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Phoebe Chen +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector AES block cipher extension ('Zvkned') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +.text +.option arch, +zvkned +___ + +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +{ +################################################################################ +# void rv64i_zvkned_encrypt(const unsigned char *in, unsigned char *out, +# const AES_KEY *key); +my ($INP, $OUTP, $KEYP) = ("a0", "a1", "a2"); +my ($T0) = ("t0"); +my ($KEY_LEN) = ("a3"); + +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_encrypt +.type rv64i_zvkned_encrypt,\@function +rv64i_zvkned_encrypt: + # Load key length. + lwu $KEY_LEN, 480($KEYP) + + # Get proper routine for key length. 
+ li $T0, 32 + beq $KEY_LEN, $T0, L_enc_256 + li $T0, 24 + beq $KEY_LEN, $T0, L_enc_192 + li $T0, 16 + beq $KEY_LEN, $T0, L_enc_128 + + j L_fail_m2 +.size rv64i_zvkned_encrypt,.-rv64i_zvkned_encrypt +___ + +$code .= <<___; +.p2align 3 +L_enc_128: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + vle32.v $V10, ($KEYP) + vaesz.vs $V1, $V10 # with round key w[ 0, 3] + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + vaesem.vs $V1, $V11 # with round key w[ 4, 7] + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + vaesem.vs $V1, $V12 # with round key w[ 8,11] + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + vaesem.vs $V1, $V13 # with round key w[12,15] + addi $KEYP, $KEYP, 16 + vle32.v $V14, ($KEYP) + vaesem.vs $V1, $V14 # with round key w[16,19] + addi $KEYP, $KEYP, 16 + vle32.v $V15, ($KEYP) + vaesem.vs $V1, $V15 # with round key w[20,23] + addi $KEYP, $KEYP, 16 + vle32.v $V16, ($KEYP) + vaesem.vs $V1, $V16 # with round key w[24,27] + addi $KEYP, $KEYP, 16 + vle32.v $V17, ($KEYP) + vaesem.vs $V1, $V17 # with round key w[28,31] + addi $KEYP, $KEYP, 16 + vle32.v $V18, ($KEYP) + vaesem.vs $V1, $V18 # with round key w[32,35] + addi $KEYP, $KEYP, 16 + vle32.v $V19, ($KEYP) + vaesem.vs $V1, $V19 # with round key w[36,39] + addi $KEYP, $KEYP, 16 + vle32.v $V20, ($KEYP) + vaesef.vs $V1, $V20 # with round key w[40,43] + + vse32.v $V1, ($OUTP) + + ret +.size L_enc_128,.-L_enc_128 +___ + +$code .= <<___; +.p2align 3 +L_enc_192: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + vle32.v $V10, ($KEYP) + vaesz.vs $V1, $V10 + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + vaesem.vs $V1, $V11 + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + vaesem.vs $V1, $V12 + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + vaesem.vs $V1, $V13 + addi $KEYP, $KEYP, 16 + vle32.v $V14, ($KEYP) + vaesem.vs $V1, $V14 + addi $KEYP, $KEYP, 16 + vle32.v $V15, ($KEYP) + vaesem.vs $V1, $V15 + addi $KEYP, $KEYP, 16 + vle32.v $V16, ($KEYP) + vaesem.vs $V1, $V16 + addi $KEYP, $KEYP, 16 + vle32.v $V17, ($KEYP) + vaesem.vs $V1, $V17 + addi $KEYP, $KEYP, 16 + vle32.v $V18, ($KEYP) + vaesem.vs $V1, $V18 + addi $KEYP, $KEYP, 16 + vle32.v $V19, ($KEYP) + vaesem.vs $V1, $V19 + addi $KEYP, $KEYP, 16 + vle32.v $V20, ($KEYP) + vaesem.vs $V1, $V20 + addi $KEYP, $KEYP, 16 + vle32.v $V21, ($KEYP) + vaesem.vs $V1, $V21 + addi $KEYP, $KEYP, 16 + vle32.v $V22, ($KEYP) + vaesef.vs $V1, $V22 + + vse32.v $V1, ($OUTP) + ret +.size L_enc_192,.-L_enc_192 +___ + +$code .= <<___; +.p2align 3 +L_enc_256: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + vle32.v $V10, ($KEYP) + vaesz.vs $V1, $V10 + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + vaesem.vs $V1, $V11 + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + vaesem.vs $V1, $V12 + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + vaesem.vs $V1, $V13 + addi $KEYP, $KEYP, 16 + vle32.v $V14, ($KEYP) + vaesem.vs $V1, $V14 + addi $KEYP, $KEYP, 16 + vle32.v $V15, ($KEYP) + vaesem.vs $V1, $V15 + addi $KEYP, $KEYP, 16 + vle32.v $V16, ($KEYP) + vaesem.vs $V1, $V16 + addi $KEYP, $KEYP, 16 + vle32.v $V17, ($KEYP) + vaesem.vs $V1, $V17 + addi $KEYP, $KEYP, 16 + vle32.v $V18, ($KEYP) + vaesem.vs $V1, $V18 + addi $KEYP, $KEYP, 16 + vle32.v $V19, ($KEYP) + vaesem.vs $V1, $V19 + addi $KEYP, $KEYP, 16 + vle32.v $V20, ($KEYP) + vaesem.vs $V1, $V20 + addi $KEYP, $KEYP, 16 + vle32.v $V21, ($KEYP) + vaesem.vs $V1, $V21 + addi $KEYP, $KEYP, 16 + vle32.v $V22, ($KEYP) + vaesem.vs $V1, $V22 + addi $KEYP, $KEYP, 16 + vle32.v $V23, ($KEYP) + vaesem.vs $V1, $V23 + addi $KEYP, $KEYP, 16 + 
vle32.v $V24, ($KEYP) + vaesef.vs $V1, $V24 + + vse32.v $V1, ($OUTP) + ret +.size L_enc_256,.-L_enc_256 +___ + +################################################################################ +# void rv64i_zvkned_decrypt(const unsigned char *in, unsigned char *out, +# const AES_KEY *key); +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_decrypt +.type rv64i_zvkned_decrypt,\@function +rv64i_zvkned_decrypt: + # Load key length. + lwu $KEY_LEN, 480($KEYP) + + # Get proper routine for key length. + li $T0, 32 + beq $KEY_LEN, $T0, L_dec_256 + li $T0, 24 + beq $KEY_LEN, $T0, L_dec_192 + li $T0, 16 + beq $KEY_LEN, $T0, L_dec_128 + + j L_fail_m2 +.size rv64i_zvkned_decrypt,.-rv64i_zvkned_decrypt +___ + +$code .= <<___; +.p2align 3 +L_dec_128: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + addi $KEYP, $KEYP, 160 + vle32.v $V20, ($KEYP) + vaesz.vs $V1, $V20 # with round key w[40,43] + addi $KEYP, $KEYP, -16 + vle32.v $V19, ($KEYP) + vaesdm.vs $V1, $V19 # with round key w[36,39] + addi $KEYP, $KEYP, -16 + vle32.v $V18, ($KEYP) + vaesdm.vs $V1, $V18 # with round key w[32,35] + addi $KEYP, $KEYP, -16 + vle32.v $V17, ($KEYP) + vaesdm.vs $V1, $V17 # with round key w[28,31] + addi $KEYP, $KEYP, -16 + vle32.v $V16, ($KEYP) + vaesdm.vs $V1, $V16 # with round key w[24,27] + addi $KEYP, $KEYP, -16 + vle32.v $V15, ($KEYP) + vaesdm.vs $V1, $V15 # with round key w[20,23] + addi $KEYP, $KEYP, -16 + vle32.v $V14, ($KEYP) + vaesdm.vs $V1, $V14 # with round key w[16,19] + addi $KEYP, $KEYP, -16 + vle32.v $V13, ($KEYP) + vaesdm.vs $V1, $V13 # with round key w[12,15] + addi $KEYP, $KEYP, -16 + vle32.v $V12, ($KEYP) + vaesdm.vs $V1, $V12 # with round key w[ 8,11] + addi $KEYP, $KEYP, -16 + vle32.v $V11, ($KEYP) + vaesdm.vs $V1, $V11 # with round key w[ 4, 7] + addi $KEYP, $KEYP, -16 + vle32.v $V10, ($KEYP) + vaesdf.vs $V1, $V10 # with round key w[ 0, 3] + + vse32.v $V1, ($OUTP) + + ret +.size L_dec_128,.-L_dec_128 +___ + +$code .= <<___; +.p2align 3 +L_dec_192: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + addi $KEYP, $KEYP, 192 + vle32.v $V22, ($KEYP) + vaesz.vs $V1, $V22 # with round key w[48,51] + addi $KEYP, $KEYP, -16 + vle32.v $V21, ($KEYP) + vaesdm.vs $V1, $V21 # with round key w[44,47] + addi $KEYP, $KEYP, -16 + vle32.v $V20, ($KEYP) + vaesdm.vs $V1, $V20 # with round key w[40,43] + addi $KEYP, $KEYP, -16 + vle32.v $V19, ($KEYP) + vaesdm.vs $V1, $V19 # with round key w[36,39] + addi $KEYP, $KEYP, -16 + vle32.v $V18, ($KEYP) + vaesdm.vs $V1, $V18 # with round key w[32,35] + addi $KEYP, $KEYP, -16 + vle32.v $V17, ($KEYP) + vaesdm.vs $V1, $V17 # with round key w[28,31] + addi $KEYP, $KEYP, -16 + vle32.v $V16, ($KEYP) + vaesdm.vs $V1, $V16 # with round key w[24,27] + addi $KEYP, $KEYP, -16 + vle32.v $V15, ($KEYP) + vaesdm.vs $V1, $V15 # with round key w[20,23] + addi $KEYP, $KEYP, -16 + vle32.v $V14, ($KEYP) + vaesdm.vs $V1, $V14 # with round key w[16,19] + addi $KEYP, $KEYP, -16 + vle32.v $V13, ($KEYP) + vaesdm.vs $V1, $V13 # with round key w[12,15] + addi $KEYP, $KEYP, -16 + vle32.v $V12, ($KEYP) + vaesdm.vs $V1, $V12 # with round key w[ 8,11] + addi $KEYP, $KEYP, -16 + vle32.v $V11, ($KEYP) + vaesdm.vs $V1, $V11 # with round key w[ 4, 7] + addi $KEYP, $KEYP, -16 + vle32.v $V10, ($KEYP) + vaesdf.vs $V1, $V10 # with round key w[ 0, 3] + + vse32.v $V1, ($OUTP) + + ret +.size L_dec_192,.-L_dec_192 +___ + +$code .= <<___; +.p2align 3 +L_dec_256: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + addi $KEYP, $KEYP, 224 + vle32.v $V24, ($KEYP) + vaesz.vs $V1, $V24 # 
with round key w[56,59] + addi $KEYP, $KEYP, -16 + vle32.v $V23, ($KEYP) + vaesdm.vs $V1, $V23 # with round key w[52,55] + addi $KEYP, $KEYP, -16 + vle32.v $V22, ($KEYP) + vaesdm.vs $V1, $V22 # with round key w[48,51] + addi $KEYP, $KEYP, -16 + vle32.v $V21, ($KEYP) + vaesdm.vs $V1, $V21 # with round key w[44,47] + addi $KEYP, $KEYP, -16 + vle32.v $V20, ($KEYP) + vaesdm.vs $V1, $V20 # with round key w[40,43] + addi $KEYP, $KEYP, -16 + vle32.v $V19, ($KEYP) + vaesdm.vs $V1, $V19 # with round key w[36,39] + addi $KEYP, $KEYP, -16 + vle32.v $V18, ($KEYP) + vaesdm.vs $V1, $V18 # with round key w[32,35] + addi $KEYP, $KEYP, -16 + vle32.v $V17, ($KEYP) + vaesdm.vs $V1, $V17 # with round key w[28,31] + addi $KEYP, $KEYP, -16 + vle32.v $V16, ($KEYP) + vaesdm.vs $V1, $V16 # with round key w[24,27] + addi $KEYP, $KEYP, -16 + vle32.v $V15, ($KEYP) + vaesdm.vs $V1, $V15 # with round key w[20,23] + addi $KEYP, $KEYP, -16 + vle32.v $V14, ($KEYP) + vaesdm.vs $V1, $V14 # with round key w[16,19] + addi $KEYP, $KEYP, -16 + vle32.v $V13, ($KEYP) + vaesdm.vs $V1, $V13 # with round key w[12,15] + addi $KEYP, $KEYP, -16 + vle32.v $V12, ($KEYP) + vaesdm.vs $V1, $V12 # with round key w[ 8,11] + addi $KEYP, $KEYP, -16 + vle32.v $V11, ($KEYP) + vaesdm.vs $V1, $V11 # with round key w[ 4, 7] + addi $KEYP, $KEYP, -16 + vle32.v $V10, ($KEYP) + vaesdf.vs $V1, $V10 # with round key w[ 0, 3] + + vse32.v $V1, ($OUTP) + + ret +.size L_dec_256,.-L_dec_256 +___ +} + +$code .= <<___; +L_fail_m1: + li a0, -1 + ret +.size L_fail_m1,.-L_fail_m1 + +L_fail_m2: + li a0, -2 + ret +.size L_fail_m2,.-L_fail_m2 + +L_end: + ret +.size L_end,.-L_end +___ + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507254 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3165BC3DA6E for ; Sun, 31 Dec 2023 15:28:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=T8bTs2KUVH45YAz2aiHwwJrbrmQmMe/Bx4kd+eReAMY=; b=kYlwzL91xNc21U d6KbHIh2oXE8Nr2nl2gdrw2NEhWV0CAWxm6YMCKT8qVW6sLqwyTvHZ+1fs6TdOsO6BBTCmXRepR+J 2F3PDIU0yunqrsBMe71qcjost6uoIEXovVDXHaV49jJJfHfLOexkS+BFZalfTLNfT5tt1RSd5s/lw mNe/0+un8TpxgkQpwRjpAWOiK53VjOc3cpynAzWtT/oKoXRIiZpc6wT4oxFMK3UHL+3OHcM4f+kE9 VQDxFRHFVlUoBfGCJbb2ZmLMKoVeuBIUUYJJAdb2mYz/RgJ1qnVVu1sSIMJ51voSZNOp10cl6MxDz ipsUus3LM07Cdcw8Ytgg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjf-004mXH-1N; Sun, 31 Dec 2023 15:28:19 +0000 Received: from mail-pl1-x630.google.com ([2607:f8b0:4864:20::630]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjY-004mSk-0D for linux-riscv@lists.infradead.org; Sun, 31 Dec 2023 15:28:17 +0000 Received: by 
From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Subject: [PATCH v4 05/11] RISC-V: crypto: add accelerated AES-CBC/CTR/ECB/XTS implementations
Date: Sun, 31 Dec 2023 23:27:37 +0800
Message-Id: <20231231152743.6304-6-jerry.shih@sifive.com>
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>

Port the vector-crypto accelerated CBC, CTR, ECB and XTS block modes for the AES cipher from OpenSSL (openssl/openssl#21923). In addition, support the XTS-AES-192 mode, which does not exist in OpenSSL.
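No caller changes are needed to benefit from these drivers: existing users such as dm-crypt or fscrypt simply request the generic mode names, and the crypto API selects the highest-priority registered implementation, which on hardware with the required extensions is the riscv64 driver added here. A small usage sketch with standard crypto API calls; the function name is illustrative:

#include <linux/err.h>
#include <linux/printk.h>
#include <crypto/skcipher.h>

static int example_pick_xts_aes(void)
{
	struct crypto_skcipher *tfm;

	tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* Prints the selected driver name, e.g. the accelerated riscv64 one when available. */
	pr_info("xts(aes) backed by %s\n", crypto_skcipher_driver_name(tfm));

	crypto_free_skcipher(tfm);
	return 0;
}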
Co-developed-by: Phoebe Chen Signed-off-by: Phoebe Chen Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. - Revert the usage of simd skcipher. - Get `walksize` from `crypto_skcipher_alg()`. Changelog v3: - Update extension checking conditions in riscv64_aes_block_mod_init(). - Add `riscv64` prefix for all setkey, encrypt and decrypt functions. - Update xts_crypt() implementation. Use the similar approach as x86's aes-xts implementation. - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `AES_BLOCK_RISCV64` option by default. - Update asm function for using aes key in `crypto_aes_ctx` structure. - Turn to use simd skcipher interface for AES-CBC/CTR/ECB/XTS modes. We still have lots of discussions for kernel-vector implementation. Before the final version of kernel-vector, use simd skcipher interface to skip the fallback path for all aes modes in all kinds of contexts. If we could always enable kernel-vector in softirq in the future, we could make the original sync skcipher algorithm back. - Refine aes-xts comments for head and tail blocks handling. - Update VLEN constraint for aex-xts mode. - Add `asmlinkage` qualifier for crypto asm function. - Rename aes-riscv64-zvbb-zvkg-zvkned to aes-riscv64-zvkned-zvbb-zvkg. - Rename aes-riscv64-zvkb-zvkned to aes-riscv64-zvkned-zvkb. - Reorder structure riscv64_aes_algs_zvkned, riscv64_aes_alg_zvkned_zvkb and riscv64_aes_alg_zvkned_zvbb_zvkg members initialization in the order declared. --- arch/riscv/crypto/Kconfig | 21 + arch/riscv/crypto/Makefile | 11 + .../crypto/aes-riscv64-block-mode-glue.c | 459 +++++++++ .../crypto/aes-riscv64-zvkned-zvbb-zvkg.pl | 949 ++++++++++++++++++ arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl | 415 ++++++++ arch/riscv/crypto/aes-riscv64-zvkned.pl | 746 ++++++++++++++ 6 files changed, 2601 insertions(+) create mode 100644 arch/riscv/crypto/aes-riscv64-block-mode-glue.c create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index 2a7c365f2a86..2cee0f68f0c7 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -13,4 +13,25 @@ config CRYPTO_AES_RISCV64 Architecture: riscv64 using: - Zvkned vector crypto extension +config CRYPTO_AES_BLOCK_RISCV64 + tristate "Ciphers: AES, modes: ECB/CBC/CTR/XTS" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_AES_RISCV64 + select CRYPTO_SIMD + select CRYPTO_SKCIPHER + help + Length-preserving ciphers: AES cipher algorithms (FIPS-197) + with block cipher modes: + - ECB (Electronic Codebook) mode (NIST SP 800-38A) + - CBC (Cipher Block Chaining) mode (NIST SP 800-38A) + - CTR (Counter) mode (NIST SP 800-38A) + - XTS (XOR Encrypt XOR Tweakable Block Cipher with Ciphertext + Stealing) mode (NIST SP 800-38E and IEEE 1619) + + Architecture: riscv64 using: + - Zvkned vector crypto extension + - Zvbb vector extension (XTS) + - Zvkb vector crypto extension (CTR/XTS) + - Zvkg vector crypto extension (XTS) + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index 90ca91d8df26..9574b009762f 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -6,10 +6,21 @@ obj-$(CONFIG_CRYPTO_AES_RISCV64) += aes-riscv64.o aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o +obj-$(CONFIG_CRYPTO_AES_BLOCK_RISCV64) += aes-block-riscv64.o +aes-block-riscv64-y 
:= aes-riscv64-block-mode-glue.o aes-riscv64-zvkned-zvbb-zvkg.o aes-riscv64-zvkned-zvkb.o + quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $(<) void $(@) $(obj)/aes-riscv64-zvkned.S: $(src)/aes-riscv64-zvkned.pl $(call cmd,perlasm) +$(obj)/aes-riscv64-zvkned-zvbb-zvkg.S: $(src)/aes-riscv64-zvkned-zvbb-zvkg.pl + $(call cmd,perlasm) + +$(obj)/aes-riscv64-zvkned-zvkb.S: $(src)/aes-riscv64-zvkned-zvkb.pl + $(call cmd,perlasm) + clean-files += aes-riscv64-zvkned.S +clean-files += aes-riscv64-zvkned-zvbb-zvkg.S +clean-files += aes-riscv64-zvkned-zvkb.S diff --git a/arch/riscv/crypto/aes-riscv64-block-mode-glue.c b/arch/riscv/crypto/aes-riscv64-block-mode-glue.c new file mode 100644 index 000000000000..929c9948468a --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-block-mode-glue.c @@ -0,0 +1,459 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Port of the OpenSSL AES block mode implementations for RISC-V + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "aes-riscv64-glue.h" + +struct riscv64_aes_xts_ctx { + struct crypto_aes_ctx ctx1; + struct crypto_aes_ctx ctx2; +}; + +/* aes cbc block mode using zvkned vector crypto extension */ +asmlinkage void rv64i_zvkned_cbc_encrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key, + u8 *ivec); +asmlinkage void rv64i_zvkned_cbc_decrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key, + u8 *ivec); +/* aes ecb block mode using zvkned vector crypto extension */ +asmlinkage void rv64i_zvkned_ecb_encrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key); +asmlinkage void rv64i_zvkned_ecb_decrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key); + +/* aes ctr block mode using zvkb and zvkned vector crypto extension */ +/* This func operates on 32-bit counter. Caller has to handle the overflow. */ +asmlinkage void +rv64i_zvkb_zvkned_ctr32_encrypt_blocks(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key, + u8 *ivec); + +/* aes xts block mode using zvbb, zvkg and zvkned vector crypto extension */ +asmlinkage void +rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key, u8 *iv, + int update_iv); +asmlinkage void +rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key, u8 *iv, + int update_iv); + +/* ecb */ +static int riscv64_aes_setkey(struct crypto_skcipher *tfm, const u8 *in_key, + unsigned int key_len) +{ + struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + + return riscv64_aes_setkey_zvkned(ctx, in_key, key_len); +} + +static int riscv64_ecb_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int nbytes; + int err; + + /* If we have error here, the `nbytes` will be zero. 
*/ + err = skcipher_walk_virt(&walk, req, false); + while ((nbytes = walk.nbytes)) { + kernel_vector_begin(); + rv64i_zvkned_ecb_encrypt(walk.src.virt.addr, walk.dst.virt.addr, + nbytes & ~(AES_BLOCK_SIZE - 1), ctx); + kernel_vector_end(); + err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1)); + } + + return err; +} + +static int riscv64_ecb_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int nbytes; + int err; + + err = skcipher_walk_virt(&walk, req, false); + while ((nbytes = walk.nbytes)) { + kernel_vector_begin(); + rv64i_zvkned_ecb_decrypt(walk.src.virt.addr, walk.dst.virt.addr, + nbytes & ~(AES_BLOCK_SIZE - 1), ctx); + kernel_vector_end(); + err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1)); + } + + return err; +} + +/* cbc */ +static int riscv64_cbc_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int nbytes; + int err; + + err = skcipher_walk_virt(&walk, req, false); + while ((nbytes = walk.nbytes)) { + kernel_vector_begin(); + rv64i_zvkned_cbc_encrypt(walk.src.virt.addr, walk.dst.virt.addr, + nbytes & ~(AES_BLOCK_SIZE - 1), ctx, + walk.iv); + kernel_vector_end(); + err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1)); + } + + return err; +} + +static int riscv64_cbc_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int nbytes; + int err; + + err = skcipher_walk_virt(&walk, req, false); + while ((nbytes = walk.nbytes)) { + kernel_vector_begin(); + rv64i_zvkned_cbc_decrypt(walk.src.virt.addr, walk.dst.virt.addr, + nbytes & ~(AES_BLOCK_SIZE - 1), ctx, + walk.iv); + kernel_vector_end(); + err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1)); + } + + return err; +} + +/* ctr */ +static int riscv64_ctr_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int ctr32; + unsigned int nbytes; + unsigned int blocks; + unsigned int current_blocks; + unsigned int current_length; + int err; + + /* the ctr iv uses big endian */ + ctr32 = get_unaligned_be32(req->iv + 12); + err = skcipher_walk_virt(&walk, req, false); + while ((nbytes = walk.nbytes)) { + if (nbytes != walk.total) { + nbytes &= ~(AES_BLOCK_SIZE - 1); + blocks = nbytes / AES_BLOCK_SIZE; + } else { + /* This is the last walk. We should handle the tail data. */ + blocks = DIV_ROUND_UP(nbytes, AES_BLOCK_SIZE); + } + ctr32 += blocks; + + kernel_vector_begin(); + /* + * The `if` block below detects the overflow, which is then handled by + * limiting the amount of blocks to the exact overflow point. 
+ */ + if (ctr32 >= blocks) { + rv64i_zvkb_zvkned_ctr32_encrypt_blocks( + walk.src.virt.addr, walk.dst.virt.addr, nbytes, + ctx, req->iv); + } else { + /* use 2 ctr32 function calls for overflow case */ + current_blocks = blocks - ctr32; + current_length = + min(nbytes, current_blocks * AES_BLOCK_SIZE); + rv64i_zvkb_zvkned_ctr32_encrypt_blocks( + walk.src.virt.addr, walk.dst.virt.addr, + current_length, ctx, req->iv); + crypto_inc(req->iv, 12); + + if (ctr32) { + rv64i_zvkb_zvkned_ctr32_encrypt_blocks( + walk.src.virt.addr + + current_blocks * AES_BLOCK_SIZE, + walk.dst.virt.addr + + current_blocks * AES_BLOCK_SIZE, + nbytes - current_length, ctx, req->iv); + } + } + kernel_vector_end(); + + err = skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + + return err; +} + +/* xts */ +static int riscv64_xts_setkey(struct crypto_skcipher *tfm, const u8 *in_key, + unsigned int key_len) +{ + struct riscv64_aes_xts_ctx *ctx = crypto_skcipher_ctx(tfm); + unsigned int xts_single_key_len = key_len / 2; + int ret; + + ret = xts_verify_key(tfm, in_key, key_len); + if (ret) + return ret; + ret = riscv64_aes_setkey_zvkned(&ctx->ctx1, in_key, xts_single_key_len); + if (ret) + return ret; + return riscv64_aes_setkey_zvkned( + &ctx->ctx2, in_key + xts_single_key_len, xts_single_key_len); +} + +static int xts_crypt(struct skcipher_request *req, bool encrypt) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct riscv64_aes_xts_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_request sub_req; + struct scatterlist sg_src[2], sg_dst[2]; + struct scatterlist *src, *dst; + struct skcipher_walk walk; + unsigned int walk_size = crypto_skcipher_alg(tfm)->walksize; + unsigned int tail = req->cryptlen & (AES_BLOCK_SIZE - 1); + unsigned int nbytes; + unsigned int update_iv = 1; + int err; + + /* xts input size should be bigger than AES_BLOCK_SIZE */ + if (req->cryptlen < AES_BLOCK_SIZE) + return -EINVAL; + + riscv64_aes_encrypt_zvkned(&ctx->ctx2, req->iv, req->iv); + + if (unlikely(tail > 0 && req->cryptlen > walk_size)) { + /* + * Find the largest tail size which is small than `walk` size while the + * non-ciphertext-stealing parts still fit AES block boundary. 
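+		 * With `tail` adjusted this way, `cryptlen - tail` stays a multiple
+		 * of AES_BLOCK_SIZE for the main pass, and the final pass covers the
+		 * last full blocks plus the partial block so the assembly can perform
+		 * the ciphertext stealing step in a single call.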
+ */ + tail = walk_size + tail - AES_BLOCK_SIZE; + + skcipher_request_set_tfm(&sub_req, tfm); + skcipher_request_set_callback( + &sub_req, skcipher_request_flags(req), NULL, NULL); + skcipher_request_set_crypt(&sub_req, req->src, req->dst, + req->cryptlen - tail, req->iv); + req = &sub_req; + } else { + tail = 0; + } + + err = skcipher_walk_virt(&walk, req, false); + if (!walk.nbytes) + return err; + + while ((nbytes = walk.nbytes)) { + if (nbytes < walk.total) + nbytes &= ~(AES_BLOCK_SIZE - 1); + else + update_iv = (tail > 0); + + kernel_vector_begin(); + if (encrypt) + rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt( + walk.src.virt.addr, walk.dst.virt.addr, nbytes, + &ctx->ctx1, req->iv, update_iv); + else + rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt( + walk.src.virt.addr, walk.dst.virt.addr, nbytes, + &ctx->ctx1, req->iv, update_iv); + kernel_vector_end(); + + err = skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + + if (unlikely(tail > 0 && !err)) { + dst = src = scatterwalk_ffwd(sg_src, req->src, req->cryptlen); + if (req->dst != req->src) + dst = scatterwalk_ffwd(sg_dst, req->dst, req->cryptlen); + + skcipher_request_set_crypt(req, src, dst, tail, req->iv); + + err = skcipher_walk_virt(&walk, req, false); + if (err) + return err; + + kernel_vector_begin(); + if (encrypt) + rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt( + walk.src.virt.addr, walk.dst.virt.addr, + walk.nbytes, &ctx->ctx1, req->iv, 0); + else + rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt( + walk.src.virt.addr, walk.dst.virt.addr, + walk.nbytes, &ctx->ctx1, req->iv, 0); + kernel_vector_end(); + + err = skcipher_walk_done(&walk, 0); + } + + return err; +} + +static int riscv64_xts_encrypt(struct skcipher_request *req) +{ + return xts_crypt(req, true); +} + +static int riscv64_xts_decrypt(struct skcipher_request *req) +{ + return xts_crypt(req, false); +} + +static struct skcipher_alg riscv64_aes_algs_zvkned[] = { + { + .setkey = riscv64_aes_setkey, + .encrypt = riscv64_ecb_encrypt, + .decrypt = riscv64_ecb_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .walksize = AES_BLOCK_SIZE * 8, + .base = { + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_priority = 300, + .cra_name = "ecb(aes)", + .cra_driver_name = "ecb-aes-riscv64-zvkned", + .cra_module = THIS_MODULE, + }, + }, { + .setkey = riscv64_aes_setkey, + .encrypt = riscv64_cbc_encrypt, + .decrypt = riscv64_cbc_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .walksize = AES_BLOCK_SIZE * 8, + .base = { + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_priority = 300, + .cra_name = "cbc(aes)", + .cra_driver_name = "cbc-aes-riscv64-zvkned", + .cra_module = THIS_MODULE, + }, + } +}; + +static struct skcipher_alg riscv64_aes_alg_zvkned_zvkb = { + .setkey = riscv64_aes_setkey, + .encrypt = riscv64_ctr_encrypt, + .decrypt = riscv64_ctr_encrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .walksize = AES_BLOCK_SIZE * 8, + .base = { + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_priority = 300, + .cra_name = "ctr(aes)", + .cra_driver_name = "ctr-aes-riscv64-zvkned-zvkb", + .cra_module = THIS_MODULE, + }, +}; + +static struct skcipher_alg riscv64_aes_alg_zvkned_zvbb_zvkg = { + .setkey = riscv64_xts_setkey, + .encrypt = riscv64_xts_encrypt, + .decrypt = riscv64_xts_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + 
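+	/* XTS uses two independent AES keys, hence the doubled key sizes. */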
.max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .walksize = AES_BLOCK_SIZE * 8, + .base = { + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct riscv64_aes_xts_ctx), + .cra_priority = 300, + .cra_name = "xts(aes)", + .cra_driver_name = "xts-aes-riscv64-zvkned-zvbb-zvkg", + .cra_module = THIS_MODULE, + }, +}; + +static int __init riscv64_aes_block_mod_init(void) +{ + int ret = -ENODEV; + + if (riscv_isa_extension_available(NULL, ZVKNED) && + riscv_vector_vlen() >= 128 && riscv_vector_vlen() <= 2048) { + ret = crypto_register_skciphers( + riscv64_aes_algs_zvkned, + ARRAY_SIZE(riscv64_aes_algs_zvkned)); + if (ret) + return ret; + + if (riscv_isa_extension_available(NULL, ZVKB)) { + ret = crypto_register_skcipher(&riscv64_aes_alg_zvkned_zvkb); + if (ret) + goto unregister_zvkned; + } + + if (riscv_isa_extension_available(NULL, ZVBB) && + riscv_isa_extension_available(NULL, ZVKG)) { + ret = crypto_register_skcipher(&riscv64_aes_alg_zvkned_zvbb_zvkg); + if (ret) + goto unregister_zvkned_zvkb; + } + } + + return ret; + +unregister_zvkned_zvkb: + crypto_unregister_skcipher(&riscv64_aes_alg_zvkned_zvkb); +unregister_zvkned: + crypto_unregister_skciphers(riscv64_aes_algs_zvkned, + ARRAY_SIZE(riscv64_aes_algs_zvkned)); + + return ret; +} + +static void __exit riscv64_aes_block_mod_fini(void) +{ + crypto_unregister_skcipher(&riscv64_aes_alg_zvkned_zvbb_zvkg); + crypto_unregister_skcipher(&riscv64_aes_alg_zvkned_zvkb); + crypto_unregister_skciphers(riscv64_aes_algs_zvkned, + ARRAY_SIZE(riscv64_aes_algs_zvkned)); +} + +module_init(riscv64_aes_block_mod_init); +module_exit(riscv64_aes_block_mod_fini); + +MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS (RISC-V accelerated)"); +MODULE_AUTHOR("Jerry Shih "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("cbc(aes)"); +MODULE_ALIAS_CRYPTO("ctr(aes)"); +MODULE_ALIAS_CRYPTO("ecb(aes)"); +MODULE_ALIAS_CRYPTO("xts(aes)"); diff --git a/arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl b/arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl new file mode 100644 index 000000000000..bc7772a5944a --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl @@ -0,0 +1,949 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 && VLEN <= 2048 +# - RISC-V Vector AES block cipher extension ('Zvkned') +# - RISC-V Vector Bit-manipulation extension ('Zvbb') +# - RISC-V Vector GCM/GMAC extension ('Zvkg') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +.text +.option arch, +zvkned, +zvbb, +zvkg +___ + +{ +################################################################################ +# void rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt(const unsigned char *in, +# unsigned char *out, size_t length, +# const AES_KEY *key, +# unsigned char iv[16], +# int update_iv) +my ($INPUT, $OUTPUT, $LENGTH, $KEY, $IV, $UPDATE_IV) = ("a0", "a1", "a2", "a3", "a4", "a5"); +my ($TAIL_LENGTH) = ("a6"); +my ($VL) = ("a7"); +my ($T0, $T1, $T2, $T3) = ("t0", "t1", "t2", "t3"); +my ($STORE_LEN32) = ("t4"); +my ($LEN32) = ("t5"); +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +# load iv to v28 +sub load_xts_iv0 { + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V28, ($IV) +___ + + return $code; +} + +# prepare input data(v24), iv(v28), bit-reversed-iv(v16), bit-reversed-iv-multiplier(v20) +sub init_first_round { + my $code=<<___; + # load input + vsetvli $VL, $LEN32, e32, m4, ta, ma + vle32.v $V24, ($INPUT) + + li $T0, 5 + # We could simplify the initialization steps if we have `block<=1`. + blt $LEN32, $T0, 1f + + # Note: We use `vgmul` for GF(2^128) multiplication. The `vgmul` uses + # different order of coefficients. We should use`vbrev8` to reverse the + # data when we use `vgmul`. + vsetivli zero, 4, e32, m1, ta, ma + vbrev8.v $V0, $V28 + vsetvli zero, $LEN32, e32, m4, ta, ma + vmv.v.i $V16, 0 + # v16: [r-IV0, r-IV0, ...] + vaesz.vs $V16, $V0 + + # Prepare GF(2^128) multiplier [1, x, x^2, x^3, ...] in v8. + # We use `vwsll` to get power of 2 multipliers. Current rvv spec only + # supports `SEW<=64`. So, the maximum `VLEN` for this approach is `2048`. + # SEW64_BITS * AES_BLOCK_SIZE / LMUL + # = 64 * 128 / 4 = 2048 + # + # TODO: truncate the vl to `2048` for `vlen>2048` case. + slli $T0, $LEN32, 2 + vsetvli zero, $T0, e32, m1, ta, ma + # v2: [`1`, `1`, `1`, `1`, ...] + vmv.v.i $V2, 1 + # v3: [`0`, `1`, `2`, `3`, ...] + vid.v $V3 + vsetvli zero, $T0, e64, m2, ta, ma + # v4: [`1`, 0, `1`, 0, `1`, 0, `1`, 0, ...] + vzext.vf2 $V4, $V2 + # v6: [`0`, 0, `1`, 0, `2`, 0, `3`, 0, ...] 
+ vzext.vf2 $V6, $V3 + slli $T0, $LEN32, 1 + vsetvli zero, $T0, e32, m2, ta, ma + # v8: [1<<0=1, 0, 0, 0, 1<<1=x, 0, 0, 0, 1<<2=x^2, 0, 0, 0, ...] + vwsll.vv $V8, $V4, $V6 + + # Compute [r-IV0*1, r-IV0*x, r-IV0*x^2, r-IV0*x^3, ...] in v16 + vsetvli zero, $LEN32, e32, m4, ta, ma + vbrev8.v $V8, $V8 + vgmul.vv $V16, $V8 + + # Compute [IV0*1, IV0*x, IV0*x^2, IV0*x^3, ...] in v28. + # Reverse the bits order back. + vbrev8.v $V28, $V16 + + # Prepare the x^n multiplier in v20. The `n` is the aes-xts block number + # in a LMUL=4 register group. + # n = ((VLEN*LMUL)/(32*4)) = ((VLEN*4)/(32*4)) + # = (VLEN/32) + # We could use vsetvli with `e32, m1` to compute the `n` number. + vsetvli $T0, zero, e32, m1, ta, ma + li $T1, 1 + sll $T0, $T1, $T0 + vsetivli zero, 2, e64, m1, ta, ma + vmv.v.i $V0, 0 + vsetivli zero, 1, e64, m1, tu, ma + vmv.v.x $V0, $T0 + vsetivli zero, 2, e64, m1, ta, ma + vbrev8.v $V0, $V0 + vsetvli zero, $LEN32, e32, m4, ta, ma + vmv.v.i $V20, 0 + vaesz.vs $V20, $V0 + + j 2f +1: + vsetivli zero, 4, e32, m1, ta, ma + vbrev8.v $V16, $V28 +2: +___ + + return $code; +} + +# prepare xts enc last block's input(v24) and iv(v28) +sub handle_xts_enc_last_block { + my $code=<<___; + bnez $TAIL_LENGTH, 2f + + beqz $UPDATE_IV, 1f + ## Store next IV + addi $VL, $VL, -4 + vsetivli zero, 4, e32, m4, ta, ma + # multiplier + vslidedown.vx $V16, $V16, $VL + + # setup `x` multiplier with byte-reversed order + # 0b00000010 => 0b01000000 (0x40) + li $T0, 0x40 + vsetivli zero, 4, e32, m1, ta, ma + vmv.v.i $V28, 0 + vsetivli zero, 1, e8, m1, tu, ma + vmv.v.x $V28, $T0 + + # IV * `x` + vsetivli zero, 4, e32, m1, ta, ma + vgmul.vv $V16, $V28 + # Reverse the IV's bits order back to big-endian + vbrev8.v $V28, $V16 + + vse32.v $V28, ($IV) +1: + + ret +2: + # slidedown second to last block + addi $VL, $VL, -4 + vsetivli zero, 4, e32, m4, ta, ma + # ciphertext + vslidedown.vx $V24, $V24, $VL + # multiplier + vslidedown.vx $V16, $V16, $VL + + vsetivli zero, 4, e32, m1, ta, ma + vmv.v.v $V25, $V24 + + # load last block into v24 + # note: We should load the last block before store the second to last block + # for in-place operation. 
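+    #       Storing first would clobber tail bytes that have not been read
+    #       yet when the source and destination buffers overlap.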
+ vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma + vle8.v $V24, ($INPUT) + + # setup `x` multiplier with byte-reversed order + # 0b00000010 => 0b01000000 (0x40) + li $T0, 0x40 + vsetivli zero, 4, e32, m1, ta, ma + vmv.v.i $V28, 0 + vsetivli zero, 1, e8, m1, tu, ma + vmv.v.x $V28, $T0 + + # compute IV for last block + vsetivli zero, 4, e32, m1, ta, ma + vgmul.vv $V16, $V28 + vbrev8.v $V28, $V16 + + # store second to last block + vsetvli zero, $TAIL_LENGTH, e8, m1, ta, ma + vse8.v $V25, ($OUTPUT) +___ + + return $code; +} + +# prepare xts dec second to last block's input(v24) and iv(v29) and +# last block's and iv(v28) +sub handle_xts_dec_last_block { + my $code=<<___; + bnez $TAIL_LENGTH, 2f + + beqz $UPDATE_IV, 1f + ## Store next IV + # setup `x` multiplier with byte-reversed order + # 0b00000010 => 0b01000000 (0x40) + li $T0, 0x40 + vsetivli zero, 4, e32, m1, ta, ma + vmv.v.i $V28, 0 + vsetivli zero, 1, e8, m1, tu, ma + vmv.v.x $V28, $T0 + + beqz $LENGTH, 3f + addi $VL, $VL, -4 + vsetivli zero, 4, e32, m4, ta, ma + # multiplier + vslidedown.vx $V16, $V16, $VL + +3: + # IV * `x` + vsetivli zero, 4, e32, m1, ta, ma + vgmul.vv $V16, $V28 + # Reverse the IV's bits order back to big-endian + vbrev8.v $V28, $V16 + + vse32.v $V28, ($IV) +1: + + ret +2: + # load second to last block's ciphertext + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V24, ($INPUT) + addi $INPUT, $INPUT, 16 + + # setup `x` multiplier with byte-reversed order + # 0b00000010 => 0b01000000 (0x40) + li $T0, 0x40 + vsetivli zero, 4, e32, m1, ta, ma + vmv.v.i $V20, 0 + vsetivli zero, 1, e8, m1, tu, ma + vmv.v.x $V20, $T0 + + beqz $LENGTH, 1f + # slidedown third to last block + addi $VL, $VL, -4 + vsetivli zero, 4, e32, m4, ta, ma + # multiplier + vslidedown.vx $V16, $V16, $VL + + # compute IV for last block + vsetivli zero, 4, e32, m1, ta, ma + vgmul.vv $V16, $V20 + vbrev8.v $V28, $V16 + + # compute IV for second to last block + vgmul.vv $V16, $V20 + vbrev8.v $V29, $V16 + j 2f +1: + # compute IV for second to last block + vsetivli zero, 4, e32, m1, ta, ma + vgmul.vv $V16, $V20 + vbrev8.v $V29, $V16 +2: +___ + + return $code; +} + +# Load all 11 round keys to v1-v11 registers. +sub aes_128_load_key { + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V2, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V3, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V4, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V5, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V6, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V7, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V8, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V9, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V10, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V11, ($KEY) +___ + + return $code; +} + +# Load all 13 round keys to v1-v13 registers. +sub aes_192_load_key { + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V2, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V3, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V4, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V5, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V6, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V7, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V8, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V9, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V10, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V11, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V12, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V13, ($KEY) +___ + + return $code; +} + +# Load all 15 round keys to v1-v15 registers. 
+sub aes_256_load_key { + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V2, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V3, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V4, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V5, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V6, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V7, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V8, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V9, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V10, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V11, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V12, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V13, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V14, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V15, ($KEY) +___ + + return $code; +} + +# aes-128 enc with round keys v1-v11 +sub aes_128_enc { + my $code=<<___; + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesef.vs $V24, $V11 +___ + + return $code; +} + +# aes-128 dec with round keys v1-v11 +sub aes_128_dec { + my $code=<<___; + vaesz.vs $V24, $V11 + vaesdm.vs $V24, $V10 + vaesdm.vs $V24, $V9 + vaesdm.vs $V24, $V8 + vaesdm.vs $V24, $V7 + vaesdm.vs $V24, $V6 + vaesdm.vs $V24, $V5 + vaesdm.vs $V24, $V4 + vaesdm.vs $V24, $V3 + vaesdm.vs $V24, $V2 + vaesdf.vs $V24, $V1 +___ + + return $code; +} + +# aes-192 enc with round keys v1-v13 +sub aes_192_enc { + my $code=<<___; + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesem.vs $V24, $V11 + vaesem.vs $V24, $V12 + vaesef.vs $V24, $V13 +___ + + return $code; +} + +# aes-192 dec with round keys v1-v13 +sub aes_192_dec { + my $code=<<___; + vaesz.vs $V24, $V13 + vaesdm.vs $V24, $V12 + vaesdm.vs $V24, $V11 + vaesdm.vs $V24, $V10 + vaesdm.vs $V24, $V9 + vaesdm.vs $V24, $V8 + vaesdm.vs $V24, $V7 + vaesdm.vs $V24, $V6 + vaesdm.vs $V24, $V5 + vaesdm.vs $V24, $V4 + vaesdm.vs $V24, $V3 + vaesdm.vs $V24, $V2 + vaesdf.vs $V24, $V1 +___ + + return $code; +} + +# aes-256 enc with round keys v1-v15 +sub aes_256_enc { + my $code=<<___; + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesem.vs $V24, $V11 + vaesem.vs $V24, $V12 + vaesem.vs $V24, $V13 + vaesem.vs $V24, $V14 + vaesef.vs $V24, $V15 +___ + + return $code; +} + +# aes-256 dec with round keys v1-v15 +sub aes_256_dec { + my $code=<<___; + vaesz.vs $V24, $V15 + vaesdm.vs $V24, $V14 + vaesdm.vs $V24, $V13 + vaesdm.vs $V24, $V12 + vaesdm.vs $V24, $V11 + vaesdm.vs $V24, $V10 + vaesdm.vs $V24, $V9 + vaesdm.vs $V24, $V8 + vaesdm.vs $V24, $V7 + vaesdm.vs $V24, $V6 + vaesdm.vs $V24, $V5 + vaesdm.vs $V24, $V4 + vaesdm.vs $V24, $V3 + vaesdm.vs $V24, $V2 + vaesdf.vs $V24, $V1 +___ + + return $code; +} + +$code .= <<___; +.p2align 3 +.globl rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt +.type rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt,\@function +rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt: + @{[load_xts_iv0]} + + # aes block size is 16 + andi $TAIL_LENGTH, $LENGTH, 15 + mv $STORE_LEN32, $LENGTH + beqz $TAIL_LENGTH, 1f + sub $LENGTH, $LENGTH, $TAIL_LENGTH + addi $STORE_LEN32, $LENGTH, -16 +1: + # We make the `LENGTH` become e32 length here. 
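+    # (Byte counts shifted right by 2 become 32-bit element counts.)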
+ srli $LEN32, $LENGTH, 2 + srli $STORE_LEN32, $STORE_LEN32, 2 + + # Load key length. + lwu $T0, 480($KEY) + li $T1, 32 + li $T2, 24 + li $T3, 16 + beq $T0, $T1, aes_xts_enc_256 + beq $T0, $T2, aes_xts_enc_192 + beq $T0, $T3, aes_xts_enc_128 +.size rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt,.-rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt +___ + +$code .= <<___; +.p2align 3 +aes_xts_enc_128: + @{[init_first_round]} + @{[aes_128_load_key]} + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Lenc_blocks_128: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load plaintext into v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_128_enc]} + vxor.vv $V24, $V24, $V28 + + # store ciphertext + vsetvli zero, $STORE_LEN32, e32, m4, ta, ma + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + sub $STORE_LEN32, $STORE_LEN32, $VL + + bnez $LEN32, .Lenc_blocks_128 + + @{[handle_xts_enc_last_block]} + + # xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_128_enc]} + vxor.vv $V24, $V24, $V28 + + # store last block ciphertext + addi $OUTPUT, $OUTPUT, -16 + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_enc_128,.-aes_xts_enc_128 +___ + +$code .= <<___; +.p2align 3 +aes_xts_enc_192: + @{[init_first_round]} + @{[aes_192_load_key]} + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Lenc_blocks_192: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load plaintext into v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_192_enc]} + vxor.vv $V24, $V24, $V28 + + # store ciphertext + vsetvli zero, $STORE_LEN32, e32, m4, ta, ma + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + sub $STORE_LEN32, $STORE_LEN32, $VL + + bnez $LEN32, .Lenc_blocks_192 + + @{[handle_xts_enc_last_block]} + + # xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_192_enc]} + vxor.vv $V24, $V24, $V28 + + # store last block ciphertext + addi $OUTPUT, $OUTPUT, -16 + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_enc_192,.-aes_xts_enc_192 +___ + +$code .= <<___; +.p2align 3 +aes_xts_enc_256: + @{[init_first_round]} + @{[aes_256_load_key]} + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Lenc_blocks_256: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load plaintext into v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_256_enc]} + vxor.vv $V24, $V24, $V28 + + # store ciphertext + vsetvli zero, $STORE_LEN32, e32, m4, ta, ma + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + sub $STORE_LEN32, $STORE_LEN32, $VL + + bnez $LEN32, .Lenc_blocks_256 + + @{[handle_xts_enc_last_block]} + + # xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_256_enc]} + vxor.vv $V24, $V24, $V28 + + # store last block ciphertext + addi $OUTPUT, $OUTPUT, -16 + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_enc_256,.-aes_xts_enc_256 +___ + +################################################################################ +# void rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt(const unsigned char *in, +# unsigned char *out, size_t length, +# const 
AES_KEY *key, +# unsigned char iv[16], +# int update_iv) +$code .= <<___; +.p2align 3 +.globl rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt +.type rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt,\@function +rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt: + @{[load_xts_iv0]} + + # aes block size is 16 + andi $TAIL_LENGTH, $LENGTH, 15 + beqz $TAIL_LENGTH, 1f + sub $LENGTH, $LENGTH, $TAIL_LENGTH + addi $LENGTH, $LENGTH, -16 +1: + # We make the `LENGTH` become e32 length here. + srli $LEN32, $LENGTH, 2 + + # Load key length. + lwu $T0, 480($KEY) + li $T1, 32 + li $T2, 24 + li $T3, 16 + beq $T0, $T1, aes_xts_dec_256 + beq $T0, $T2, aes_xts_dec_192 + beq $T0, $T3, aes_xts_dec_128 +.size rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt,.-rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt +___ + +$code .= <<___; +.p2align 3 +aes_xts_dec_128: + @{[init_first_round]} + @{[aes_128_load_key]} + + beqz $LEN32, 2f + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Ldec_blocks_128: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load ciphertext into v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_128_dec]} + vxor.vv $V24, $V24, $V28 + + # store plaintext + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + + bnez $LEN32, .Ldec_blocks_128 + +2: + @{[handle_xts_dec_last_block]} + + ## xts second to last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V29 + @{[aes_128_dec]} + vxor.vv $V24, $V24, $V29 + vmv.v.v $V25, $V24 + + # load last block ciphertext + vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma + vle8.v $V24, ($INPUT) + + # store second to last block plaintext + addi $T0, $OUTPUT, 16 + vse8.v $V25, ($T0) + + ## xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_128_dec]} + vxor.vv $V24, $V24, $V28 + + # store second to last block plaintext + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_dec_128,.-aes_xts_dec_128 +___ + +$code .= <<___; +.p2align 3 +aes_xts_dec_192: + @{[init_first_round]} + @{[aes_192_load_key]} + + beqz $LEN32, 2f + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Ldec_blocks_192: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load ciphertext into v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_192_dec]} + vxor.vv $V24, $V24, $V28 + + # store plaintext + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + + bnez $LEN32, .Ldec_blocks_192 + +2: + @{[handle_xts_dec_last_block]} + + ## xts second to last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V29 + @{[aes_192_dec]} + vxor.vv $V24, $V24, $V29 + vmv.v.v $V25, $V24 + + # load last block ciphertext + vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma + vle8.v $V24, ($INPUT) + + # store second to last block plaintext + addi $T0, $OUTPUT, 16 + vse8.v $V25, ($T0) + + ## xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_192_dec]} + vxor.vv $V24, $V24, $V28 + + # store second to last block plaintext + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_dec_192,.-aes_xts_dec_192 +___ + +$code .= <<___; +.p2align 3 +aes_xts_dec_256: + @{[init_first_round]} + @{[aes_256_load_key]} + + beqz $LEN32, 2f + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Ldec_blocks_256: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load ciphertext into 
v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_256_dec]} + vxor.vv $V24, $V24, $V28 + + # store plaintext + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + + bnez $LEN32, .Ldec_blocks_256 + +2: + @{[handle_xts_dec_last_block]} + + ## xts second to last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V29 + @{[aes_256_dec]} + vxor.vv $V24, $V24, $V29 + vmv.v.v $V25, $V24 + + # load last block ciphertext + vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma + vle8.v $V24, ($INPUT) + + # store second to last block plaintext + addi $T0, $OUTPUT, 16 + vse8.v $V25, ($T0) + + ## xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_256_dec]} + vxor.vv $V24, $V24, $V28 + + # store second to last block plaintext + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_dec_256,.-aes_xts_dec_256 +___ +} + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; diff --git a/arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl b/arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl new file mode 100644 index 000000000000..39ce998039a2 --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl @@ -0,0 +1,415 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector AES block cipher extension ('Zvkned') +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +.text +.option arch, +zvkned, +zvkb +___ + +################################################################################ +# void rv64i_zvkb_zvkned_ctr32_encrypt_blocks(const unsigned char *in, +# unsigned char *out, size_t length, +# const void *key, +# unsigned char ivec[16]); +{ +my ($INP, $OUTP, $LEN, $KEYP, $IVP) = ("a0", "a1", "a2", "a3", "a4"); +my ($T0, $T1, $T2, $T3) = ("t0", "t1", "t2", "t3"); +my ($VL) = ("t4"); +my ($LEN32) = ("t5"); +my ($CTR) = ("t6"); +my ($MASK) = ("v0"); +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +# Prepare the AES ctr input data into v16. +sub init_aes_ctr_input { + my $code=<<___; + # Setup mask into v0 + # The mask pattern for 4*N-th elements + # mask v0: [000100010001....] + # Note: + # We could setup the mask just for the maximum element length instead of + # the VLMAX. + li $T0, 0b10001000 + vsetvli $T2, zero, e8, m1, ta, ma + vmv.v.x $MASK, $T0 + # Load IV. + # v31:[IV0, IV1, IV2, big-endian count] + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V31, ($IVP) + # Convert the big-endian counter into little-endian. + vsetivli zero, 4, e32, m1, ta, mu + vrev8.v $V31, $V31, $MASK.t + # Splat the IV to v16 + vsetvli zero, $LEN32, e32, m4, ta, ma + vmv.v.i $V16, 0 + vaesz.vs $V16, $V31 + # Prepare the ctr pattern into v20 + # v20: [x, x, x, 0, x, x, x, 1, x, x, x, 2, ...] + viota.m $V20, $MASK, $MASK.t + # v16:[IV0, IV1, IV2, count+0, IV0, IV1, IV2, count+1, ...] + vsetvli $VL, $LEN32, e32, m4, ta, mu + vadd.vv $V16, $V16, $V20, $MASK.t +___ + + return $code; +} + +$code .= <<___; +.p2align 3 +.globl rv64i_zvkb_zvkned_ctr32_encrypt_blocks +.type rv64i_zvkb_zvkned_ctr32_encrypt_blocks,\@function +rv64i_zvkb_zvkned_ctr32_encrypt_blocks: + # The aes block size is 16 bytes. + # We try to get the minimum aes block number including the tail data. + addi $T0, $LEN, 15 + # the minimum block number + srli $T0, $T0, 4 + # We make the block number become e32 length here. + slli $LEN32, $T0, 2 + + # Load key length. + lwu $T0, 480($KEYP) + li $T1, 32 + li $T2, 24 + li $T3, 16 + + beq $T0, $T1, ctr32_encrypt_blocks_256 + beq $T0, $T2, ctr32_encrypt_blocks_192 + beq $T0, $T3, ctr32_encrypt_blocks_128 + + ret +.size rv64i_zvkb_zvkned_ctr32_encrypt_blocks,.-rv64i_zvkb_zvkned_ctr32_encrypt_blocks +___ + +$code .= <<___; +.p2align 3 +ctr32_encrypt_blocks_128: + # Load all 11 round keys to v1-v11 registers. 
+ vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + + @{[init_aes_ctr_input]} + + ##### AES body + j 2f +1: + vsetvli $VL, $LEN32, e32, m4, ta, mu + # Increase ctr in v16. + vadd.vx $V16, $V16, $CTR, $MASK.t +2: + # Prepare the AES ctr input into v24. + # The ctr data uses big-endian form. + vmv.v.v $V24, $V16 + vrev8.v $V24, $V24, $MASK.t + srli $CTR, $VL, 2 + sub $LEN32, $LEN32, $VL + + # Load plaintext in bytes into v20. + vsetvli $T0, $LEN, e8, m4, ta, ma + vle8.v $V20, ($INP) + sub $LEN, $LEN, $T0 + add $INP, $INP, $T0 + + vsetvli zero, $VL, e32, m4, ta, ma + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesef.vs $V24, $V11 + + # ciphertext + vsetvli zero, $T0, e8, m4, ta, ma + vxor.vv $V24, $V24, $V20 + + # Store the ciphertext. + vse8.v $V24, ($OUTP) + add $OUTP, $OUTP, $T0 + + bnez $LEN, 1b + + ## store ctr iv + vsetivli zero, 4, e32, m1, ta, mu + # Increase ctr in v16. + vadd.vx $V16, $V16, $CTR, $MASK.t + # Convert ctr data back to big-endian. + vrev8.v $V16, $V16, $MASK.t + vse32.v $V16, ($IVP) + + ret +.size ctr32_encrypt_blocks_128,.-ctr32_encrypt_blocks_128 +___ + +$code .= <<___; +.p2align 3 +ctr32_encrypt_blocks_192: + # Load all 13 round keys to v1-v13 registers. + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + + @{[init_aes_ctr_input]} + + ##### AES body + j 2f +1: + vsetvli $VL, $LEN32, e32, m4, ta, mu + # Increase ctr in v16. + vadd.vx $V16, $V16, $CTR, $MASK.t +2: + # Prepare the AES ctr input into v24. + # The ctr data uses big-endian form. + vmv.v.v $V24, $V16 + vrev8.v $V24, $V24, $MASK.t + srli $CTR, $VL, 2 + sub $LEN32, $LEN32, $VL + + # Load plaintext in bytes into v20. + vsetvli $T0, $LEN, e8, m4, ta, ma + vle8.v $V20, ($INP) + sub $LEN, $LEN, $T0 + add $INP, $INP, $T0 + + vsetvli zero, $VL, e32, m4, ta, ma + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesem.vs $V24, $V11 + vaesem.vs $V24, $V12 + vaesef.vs $V24, $V13 + + # ciphertext + vsetvli zero, $T0, e8, m4, ta, ma + vxor.vv $V24, $V24, $V20 + + # Store the ciphertext. + vse8.v $V24, ($OUTP) + add $OUTP, $OUTP, $T0 + + bnez $LEN, 1b + + ## store ctr iv + vsetivli zero, 4, e32, m1, ta, mu + # Increase ctr in v16. 
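+    # ($CTR still holds the block count of the final loop iteration, so the
+    #  result is the counter value for the data that follows.)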
+ vadd.vx $V16, $V16, $CTR, $MASK.t + # Convert ctr data back to big-endian. + vrev8.v $V16, $V16, $MASK.t + vse32.v $V16, ($IVP) + + ret +.size ctr32_encrypt_blocks_192,.-ctr32_encrypt_blocks_192 +___ + +$code .= <<___; +.p2align 3 +ctr32_encrypt_blocks_256: + # Load all 15 round keys to v1-v15 registers. + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V14, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V15, ($KEYP) + + @{[init_aes_ctr_input]} + + ##### AES body + j 2f +1: + vsetvli $VL, $LEN32, e32, m4, ta, mu + # Increase ctr in v16. + vadd.vx $V16, $V16, $CTR, $MASK.t +2: + # Prepare the AES ctr input into v24. + # The ctr data uses big-endian form. + vmv.v.v $V24, $V16 + vrev8.v $V24, $V24, $MASK.t + srli $CTR, $VL, 2 + sub $LEN32, $LEN32, $VL + + # Load plaintext in bytes into v20. + vsetvli $T0, $LEN, e8, m4, ta, ma + vle8.v $V20, ($INP) + sub $LEN, $LEN, $T0 + add $INP, $INP, $T0 + + vsetvli zero, $VL, e32, m4, ta, ma + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesem.vs $V24, $V11 + vaesem.vs $V24, $V12 + vaesem.vs $V24, $V13 + vaesem.vs $V24, $V14 + vaesef.vs $V24, $V15 + + # ciphertext + vsetvli zero, $T0, e8, m4, ta, ma + vxor.vv $V24, $V24, $V20 + + # Store the ciphertext. + vse8.v $V24, ($OUTP) + add $OUTP, $OUTP, $T0 + + bnez $LEN, 1b + + ## store ctr iv + vsetivli zero, 4, e32, m1, ta, mu + # Increase ctr in v16. + vadd.vx $V16, $V16, $CTR, $MASK.t + # Convert ctr data back to big-endian. + vrev8.v $V16, $V16, $MASK.t + vse32.v $V16, ($IVP) + + ret +.size ctr32_encrypt_blocks_256,.-ctr32_encrypt_blocks_256 +___ +} + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; diff --git a/arch/riscv/crypto/aes-riscv64-zvkned.pl b/arch/riscv/crypto/aes-riscv64-zvkned.pl index 583e87912e5d..383d5fee4ff2 100644 --- a/arch/riscv/crypto/aes-riscv64-zvkned.pl +++ b/arch/riscv/crypto/aes-riscv64-zvkned.pl @@ -67,6 +67,752 @@ my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, ) = map("v$_",(0..31)); +# Load all 11 round keys to v1-v11 registers. +sub aes_128_load_key { + my $KEYP = shift; + + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) +___ + + return $code; +} + +# Load all 13 round keys to v1-v13 registers. 
+sub aes_192_load_key { + my $KEYP = shift; + + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) +___ + + return $code; +} + +# Load all 15 round keys to v1-v15 registers. +sub aes_256_load_key { + my $KEYP = shift; + + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V14, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V15, ($KEYP) +___ + + return $code; +} + +# aes-128 encryption with round keys v1-v11 +sub aes_128_encrypt { + my $code=<<___; + vaesz.vs $V24, $V1 # with round key w[ 0, 3] + vaesem.vs $V24, $V2 # with round key w[ 4, 7] + vaesem.vs $V24, $V3 # with round key w[ 8,11] + vaesem.vs $V24, $V4 # with round key w[12,15] + vaesem.vs $V24, $V5 # with round key w[16,19] + vaesem.vs $V24, $V6 # with round key w[20,23] + vaesem.vs $V24, $V7 # with round key w[24,27] + vaesem.vs $V24, $V8 # with round key w[28,31] + vaesem.vs $V24, $V9 # with round key w[32,35] + vaesem.vs $V24, $V10 # with round key w[36,39] + vaesef.vs $V24, $V11 # with round key w[40,43] +___ + + return $code; +} + +# aes-128 decryption with round keys v1-v11 +sub aes_128_decrypt { + my $code=<<___; + vaesz.vs $V24, $V11 # with round key w[40,43] + vaesdm.vs $V24, $V10 # with round key w[36,39] + vaesdm.vs $V24, $V9 # with round key w[32,35] + vaesdm.vs $V24, $V8 # with round key w[28,31] + vaesdm.vs $V24, $V7 # with round key w[24,27] + vaesdm.vs $V24, $V6 # with round key w[20,23] + vaesdm.vs $V24, $V5 # with round key w[16,19] + vaesdm.vs $V24, $V4 # with round key w[12,15] + vaesdm.vs $V24, $V3 # with round key w[ 8,11] + vaesdm.vs $V24, $V2 # with round key w[ 4, 7] + vaesdf.vs $V24, $V1 # with round key w[ 0, 3] +___ + + return $code; +} + +# aes-192 encryption with round keys v1-v13 +sub aes_192_encrypt { + my $code=<<___; + vaesz.vs $V24, $V1 # with round key w[ 0, 3] + vaesem.vs $V24, $V2 # with round key w[ 4, 7] + vaesem.vs $V24, $V3 # with round key w[ 8,11] + vaesem.vs $V24, $V4 # with round key w[12,15] + vaesem.vs $V24, $V5 # with round key w[16,19] + vaesem.vs $V24, $V6 # with round key w[20,23] + vaesem.vs $V24, $V7 # with round key w[24,27] + vaesem.vs $V24, $V8 # with round key w[28,31] + vaesem.vs $V24, $V9 # with round key w[32,35] + vaesem.vs $V24, $V10 # with round key w[36,39] + vaesem.vs $V24, $V11 # with round key w[40,43] + vaesem.vs $V24, $V12 # with round key w[44,47] + vaesef.vs $V24, $V13 
# with round key w[48,51] +___ + + return $code; +} + +# aes-192 decryption with round keys v1-v13 +sub aes_192_decrypt { + my $code=<<___; + vaesz.vs $V24, $V13 # with round key w[48,51] + vaesdm.vs $V24, $V12 # with round key w[44,47] + vaesdm.vs $V24, $V11 # with round key w[40,43] + vaesdm.vs $V24, $V10 # with round key w[36,39] + vaesdm.vs $V24, $V9 # with round key w[32,35] + vaesdm.vs $V24, $V8 # with round key w[28,31] + vaesdm.vs $V24, $V7 # with round key w[24,27] + vaesdm.vs $V24, $V6 # with round key w[20,23] + vaesdm.vs $V24, $V5 # with round key w[16,19] + vaesdm.vs $V24, $V4 # with round key w[12,15] + vaesdm.vs $V24, $V3 # with round key w[ 8,11] + vaesdm.vs $V24, $V2 # with round key w[ 4, 7] + vaesdf.vs $V24, $V1 # with round key w[ 0, 3] +___ + + return $code; +} + +# aes-256 encryption with round keys v1-v15 +sub aes_256_encrypt { + my $code=<<___; + vaesz.vs $V24, $V1 # with round key w[ 0, 3] + vaesem.vs $V24, $V2 # with round key w[ 4, 7] + vaesem.vs $V24, $V3 # with round key w[ 8,11] + vaesem.vs $V24, $V4 # with round key w[12,15] + vaesem.vs $V24, $V5 # with round key w[16,19] + vaesem.vs $V24, $V6 # with round key w[20,23] + vaesem.vs $V24, $V7 # with round key w[24,27] + vaesem.vs $V24, $V8 # with round key w[28,31] + vaesem.vs $V24, $V9 # with round key w[32,35] + vaesem.vs $V24, $V10 # with round key w[36,39] + vaesem.vs $V24, $V11 # with round key w[40,43] + vaesem.vs $V24, $V12 # with round key w[44,47] + vaesem.vs $V24, $V13 # with round key w[48,51] + vaesem.vs $V24, $V14 # with round key w[52,55] + vaesef.vs $V24, $V15 # with round key w[56,59] +___ + + return $code; +} + +# aes-256 decryption with round keys v1-v15 +sub aes_256_decrypt { + my $code=<<___; + vaesz.vs $V24, $V15 # with round key w[56,59] + vaesdm.vs $V24, $V14 # with round key w[52,55] + vaesdm.vs $V24, $V13 # with round key w[48,51] + vaesdm.vs $V24, $V12 # with round key w[44,47] + vaesdm.vs $V24, $V11 # with round key w[40,43] + vaesdm.vs $V24, $V10 # with round key w[36,39] + vaesdm.vs $V24, $V9 # with round key w[32,35] + vaesdm.vs $V24, $V8 # with round key w[28,31] + vaesdm.vs $V24, $V7 # with round key w[24,27] + vaesdm.vs $V24, $V6 # with round key w[20,23] + vaesdm.vs $V24, $V5 # with round key w[16,19] + vaesdm.vs $V24, $V4 # with round key w[12,15] + vaesdm.vs $V24, $V3 # with round key w[ 8,11] + vaesdm.vs $V24, $V2 # with round key w[ 4, 7] + vaesdf.vs $V24, $V1 # with round key w[ 0, 3] +___ + + return $code; +} + +{ +############################################################################### +# void rv64i_zvkned_cbc_encrypt(const unsigned char *in, unsigned char *out, +# size_t length, const AES_KEY *key, +# unsigned char *ivec, const int enc); +my ($INP, $OUTP, $LEN, $KEYP, $IVP, $ENC) = ("a0", "a1", "a2", "a3", "a4", "a5"); +my ($T0, $T1) = ("t0", "t1", "t2"); + +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_cbc_encrypt +.type rv64i_zvkned_cbc_encrypt,\@function +rv64i_zvkned_cbc_encrypt: + # check whether the length is a multiple of 16 and >= 16 + li $T1, 16 + blt $LEN, $T1, L_end + andi $T1, $LEN, 15 + bnez $T1, L_end + + # Load key length. + lwu $T0, 480($KEYP) + + # Get proper routine for key length. + li $T1, 16 + beq $T1, $T0, L_cbc_enc_128 + + li $T1, 24 + beq $T1, $T0, L_cbc_enc_192 + + li $T1, 32 + beq $T1, $T0, L_cbc_enc_256 + + ret +.size rv64i_zvkned_cbc_encrypt,.-rv64i_zvkned_cbc_encrypt +___ + +$code .= <<___; +.p2align 3 +L_cbc_enc_128: + # Load all 11 round keys to v1-v11 registers. + @{[aes_128_load_key $KEYP]} + + # Load IV. 
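+    # v16 supplies the chaining value only for the first block; afterwards
+    # the previous ciphertext block is already in v24 when the next
+    # plaintext block is XORed in.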
+ vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vxor.vv $V24, $V24, $V16 + j 2f + +1: + vle32.v $V17, ($INP) + vxor.vv $V24, $V24, $V17 + +2: + # AES body + @{[aes_128_encrypt]} + + vse32.v $V24, ($OUTP) + + addi $INP, $INP, 16 + addi $OUTP, $OUTP, 16 + addi $LEN, $LEN, -16 + + bnez $LEN, 1b + + vse32.v $V24, ($IVP) + + ret +.size L_cbc_enc_128,.-L_cbc_enc_128 +___ + +$code .= <<___; +.p2align 3 +L_cbc_enc_192: + # Load all 13 round keys to v1-v13 registers. + @{[aes_192_load_key $KEYP]} + + # Load IV. + vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vxor.vv $V24, $V24, $V16 + j 2f + +1: + vle32.v $V17, ($INP) + vxor.vv $V24, $V24, $V17 + +2: + # AES body + @{[aes_192_encrypt]} + + vse32.v $V24, ($OUTP) + + addi $INP, $INP, 16 + addi $OUTP, $OUTP, 16 + addi $LEN, $LEN, -16 + + bnez $LEN, 1b + + vse32.v $V24, ($IVP) + + ret +.size L_cbc_enc_192,.-L_cbc_enc_192 +___ + +$code .= <<___; +.p2align 3 +L_cbc_enc_256: + # Load all 15 round keys to v1-v15 registers. + @{[aes_256_load_key $KEYP]} + + # Load IV. + vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vxor.vv $V24, $V24, $V16 + j 2f + +1: + vle32.v $V17, ($INP) + vxor.vv $V24, $V24, $V17 + +2: + # AES body + @{[aes_256_encrypt]} + + vse32.v $V24, ($OUTP) + + addi $INP, $INP, 16 + addi $OUTP, $OUTP, 16 + addi $LEN, $LEN, -16 + + bnez $LEN, 1b + + vse32.v $V24, ($IVP) + + ret +.size L_cbc_enc_256,.-L_cbc_enc_256 +___ + +############################################################################### +# void rv64i_zvkned_cbc_decrypt(const unsigned char *in, unsigned char *out, +# size_t length, const AES_KEY *key, +# unsigned char *ivec, const int enc); +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_cbc_decrypt +.type rv64i_zvkned_cbc_decrypt,\@function +rv64i_zvkned_cbc_decrypt: + # check whether the length is a multiple of 16 and >= 16 + li $T1, 16 + blt $LEN, $T1, L_end + andi $T1, $LEN, 15 + bnez $T1, L_end + + # Load key length. + lwu $T0, 480($KEYP) + + # Get proper routine for key length. + li $T1, 16 + beq $T1, $T0, L_cbc_dec_128 + + li $T1, 24 + beq $T1, $T0, L_cbc_dec_192 + + li $T1, 32 + beq $T1, $T0, L_cbc_dec_256 + + ret +.size rv64i_zvkned_cbc_decrypt,.-rv64i_zvkned_cbc_decrypt +___ + +$code .= <<___; +.p2align 3 +L_cbc_dec_128: + # Load all 11 round keys to v1-v11 registers. + @{[aes_128_load_key $KEYP]} + + # Load IV. + vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + j 2f + +1: + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + addi $OUTP, $OUTP, 16 + +2: + # AES body + @{[aes_128_decrypt]} + + vxor.vv $V24, $V24, $V16 + vse32.v $V24, ($OUTP) + vmv.v.v $V16, $V17 + + addi $LEN, $LEN, -16 + addi $INP, $INP, 16 + + bnez $LEN, 1b + + vse32.v $V16, ($IVP) + + ret +.size L_cbc_dec_128,.-L_cbc_dec_128 +___ + +$code .= <<___; +.p2align 3 +L_cbc_dec_192: + # Load all 13 round keys to v1-v13 registers. + @{[aes_192_load_key $KEYP]} + + # Load IV. + vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + j 2f + +1: + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + addi $OUTP, $OUTP, 16 + +2: + # AES body + @{[aes_192_decrypt]} + + vxor.vv $V24, $V24, $V16 + vse32.v $V24, ($OUTP) + vmv.v.v $V16, $V17 + + addi $LEN, $LEN, -16 + addi $INP, $INP, 16 + + bnez $LEN, 1b + + vse32.v $V16, ($IVP) + + ret +.size L_cbc_dec_192,.-L_cbc_dec_192 +___ + +$code .= <<___; +.p2align 3 +L_cbc_dec_256: + # Load all 15 round keys to v1-v15 registers. + @{[aes_256_load_key $KEYP]} + + # Load IV. 
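+    # v16 carries the chaining value: the IV for the first block, then the
+    # previous ciphertext block, which is saved in v17 before each block is
+    # decrypted.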
+ vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + j 2f + +1: + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + addi $OUTP, $OUTP, 16 + +2: + # AES body + @{[aes_256_decrypt]} + + vxor.vv $V24, $V24, $V16 + vse32.v $V24, ($OUTP) + vmv.v.v $V16, $V17 + + addi $LEN, $LEN, -16 + addi $INP, $INP, 16 + + bnez $LEN, 1b + + vse32.v $V16, ($IVP) + + ret +.size L_cbc_dec_256,.-L_cbc_dec_256 +___ +} + +{ +############################################################################### +# void rv64i_zvkned_ecb_encrypt(const unsigned char *in, unsigned char *out, +# size_t length, const AES_KEY *key, +# const int enc); +my ($INP, $OUTP, $LEN, $KEYP, $ENC) = ("a0", "a1", "a2", "a3", "a4"); +my ($VL) = ("a5"); +my ($LEN32) = ("a6"); +my ($T0, $T1) = ("t0", "t1"); + +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_ecb_encrypt +.type rv64i_zvkned_ecb_encrypt,\@function +rv64i_zvkned_ecb_encrypt: + # Make the LEN become e32 length. + srli $LEN32, $LEN, 2 + + # Load key length. + lwu $T0, 480($KEYP) + + # Get proper routine for key length. + li $T1, 16 + beq $T1, $T0, L_ecb_enc_128 + + li $T1, 24 + beq $T1, $T0, L_ecb_enc_192 + + li $T1, 32 + beq $T1, $T0, L_ecb_enc_256 + + ret +.size rv64i_zvkned_ecb_encrypt,.-rv64i_zvkned_ecb_encrypt +___ + +$code .= <<___; +.p2align 3 +L_ecb_enc_128: + # Load all 11 round keys to v1-v11 registers. + @{[aes_128_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_128_encrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_enc_128,.-L_ecb_enc_128 +___ + +$code .= <<___; +.p2align 3 +L_ecb_enc_192: + # Load all 13 round keys to v1-v13 registers. + @{[aes_192_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_192_encrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_enc_192,.-L_ecb_enc_192 +___ + +$code .= <<___; +.p2align 3 +L_ecb_enc_256: + # Load all 15 round keys to v1-v15 registers. + @{[aes_256_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_256_encrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_enc_256,.-L_ecb_enc_256 +___ + +############################################################################### +# void rv64i_zvkned_ecb_decrypt(const unsigned char *in, unsigned char *out, +# size_t length, const AES_KEY *key, +# const int enc); +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_ecb_decrypt +.type rv64i_zvkned_ecb_decrypt,\@function +rv64i_zvkned_ecb_decrypt: + # Make the LEN become e32 length. + srli $LEN32, $LEN, 2 + + # Load key length. + lwu $T0, 480($KEYP) + + # Get proper routine for key length. + li $T1, 16 + beq $T1, $T0, L_ecb_dec_128 + + li $T1, 24 + beq $T1, $T0, L_ecb_dec_192 + + li $T1, 32 + beq $T1, $T0, L_ecb_dec_256 + + ret +.size rv64i_zvkned_ecb_decrypt,.-rv64i_zvkned_ecb_decrypt +___ + +$code .= <<___; +.p2align 3 +L_ecb_dec_128: + # Load all 11 round keys to v1-v11 registers. 
+ @{[aes_128_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_128_decrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_dec_128,.-L_ecb_dec_128 +___ + +$code .= <<___; +.p2align 3 +L_ecb_dec_192: + # Load all 13 round keys to v1-v13 registers. + @{[aes_192_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_192_decrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_dec_192,.-L_ecb_dec_192 +___ + +$code .= <<___; +.p2align 3 +L_ecb_dec_256: + # Load all 15 round keys to v1-v15 registers. + @{[aes_256_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_256_decrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_dec_256,.-L_ecb_dec_256 +___ +} + { ################################################################################ # void rv64i_zvkned_encrypt(const unsigned char *in, unsigned char *out, From patchwork Sun Dec 31 15:27:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507253 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 96614C47073 for ; Sun, 31 Dec 2023 15:28:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=jQLQXK1axfnqYMbD+XycV78D3frnhGsMoKBU8gTwgDI=; b=OyJJrh8A13HDEm nrsmdoJY4BXtjgDaET+ImZwJc/dzCzg/8CYlEfqG4/zuswU3A9TdD8h3At+BJNpP/5yrNognUOEz6 Omh08CouMssFbXOe4WlOc6Sesc7gNxnnmc9fkcgrPojfevQxNdmvwPkZqmakmZd4+h9QTTvZ0Jfgi mCY0EbXN/wRMPP16/ctYf6jyKtmq7mUlXeCtBcHPzmTPhZTkZjGuwER5PWvT6e+B/IM51KGeQlV2L THo++Mho/pCLfddcLQHxB53+tHr3oH9teYsHI6w8qKEpjbOdwGwQCOJZAKzjmz9IBgK2JT42aduQd uCAdbo5awJxz2wxzVetg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rJxje-004mWj-2U; Sun, 31 Dec 2023 15:28:18 +0000 Received: from mail-pl1-x631.google.com ([2607:f8b0:4864:20::631]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rJxja-004mUC-1O for linux-riscv@lists.infradead.org; Sun, 31 Dec 2023 15:28:16 +0000 Received: by mail-pl1-x631.google.com with SMTP id d9443c01a7336-1d2e6e14865so37470465ad.0 for ; Sun, 31 Dec 2023 07:28:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036493; x=1704641293; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to 
From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Subject: [PATCH v4 06/11] RISC-V: crypto: add Zvkg accelerated GCM GHASH implementation
Date: Sun, 31 Dec 2023 23:27:38 +0800
Message-Id: <20231231152743.6304-7-jerry.shih@sifive.com>
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>
References: <20231231152743.6304-1-jerry.shih@sifive.com>

Add a GCM GHASH implementation using the Zvkg extension from OpenSSL
(openssl/openssl#21923).

The perlasm here is different from the original implementation in OpenSSL:
OpenSSL assumes that H is stored in little-endian and therefore converts H
to big-endian before using the Zvkg instructions. In the kernel, H is
already big-endian, so no endian conversion is needed.
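To make the endianness point concrete, here is a minimal sketch (not part of
this patch; the structure and function names are illustrative only) of how a
setkey path can keep the 16-byte hash key as-is and hand the same be128 value
to gcm_ghash_rv64i_zvkg(), with no byte-reversal step:

	/*
	 * Sketch only, assuming the caller supplies H already in big-endian
	 * form, as the kernel GCM code does.
	 */
	#include <crypto/b128ops.h>
	#include <linux/string.h>
	#include <linux/types.h>

	struct ghash_key_sketch {
		be128 h;	/* big-endian hash key, usable directly by Zvkg */
	};

	static void ghash_set_h_sketch(struct ghash_key_sketch *ctx, const u8 *key)
	{
		/* Store H verbatim; no vrev8/byte swap is required. */
		memcpy(&ctx->h, key, sizeof(ctx->h));
	}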
Co-developed-by: Christoph Müllner Signed-off-by: Christoph Müllner Co-developed-by: Heiko Stuebner Signed-off-by: Heiko Stuebner Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. Changelog v3: - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `GHASH_RISCV64` option by default. - Add `asmlinkage` qualifier for crypto asm function. - Update the ghash fallback path in ghash_blocks(). - Rename structure riscv64_ghash_context to riscv64_ghash_tfm_ctx. - Fold ghash_update_zvkg() and ghash_final_zvkg(). - Reorder structure riscv64_ghash_alg_zvkg members initialization in the order declared. --- arch/riscv/crypto/Kconfig | 10 ++ arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/ghash-riscv64-glue.c | 175 ++++++++++++++++++++++++ arch/riscv/crypto/ghash-riscv64-zvkg.pl | 100 ++++++++++++++ 4 files changed, 292 insertions(+) create mode 100644 arch/riscv/crypto/ghash-riscv64-glue.c create mode 100644 arch/riscv/crypto/ghash-riscv64-zvkg.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index 2cee0f68f0c7..d73b89ceb1a3 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -34,4 +34,14 @@ config CRYPTO_AES_BLOCK_RISCV64 - Zvkb vector crypto extension (CTR/XTS) - Zvkg vector crypto extension (XTS) +config CRYPTO_GHASH_RISCV64 + tristate "Hash functions: GHASH" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_GCM + help + GCM GHASH function (NIST SP 800-38D) + + Architecture: riscv64 using: + - Zvkg vector crypto extension + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index 9574b009762f..94a7f8eaa8a7 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -9,6 +9,9 @@ aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o obj-$(CONFIG_CRYPTO_AES_BLOCK_RISCV64) += aes-block-riscv64.o aes-block-riscv64-y := aes-riscv64-block-mode-glue.o aes-riscv64-zvkned-zvbb-zvkg.o aes-riscv64-zvkned-zvkb.o +obj-$(CONFIG_CRYPTO_GHASH_RISCV64) += ghash-riscv64.o +ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o + quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $(<) void $(@) @@ -21,6 +24,10 @@ $(obj)/aes-riscv64-zvkned-zvbb-zvkg.S: $(src)/aes-riscv64-zvkned-zvbb-zvkg.pl $(obj)/aes-riscv64-zvkned-zvkb.S: $(src)/aes-riscv64-zvkned-zvkb.pl $(call cmd,perlasm) +$(obj)/ghash-riscv64-zvkg.S: $(src)/ghash-riscv64-zvkg.pl + $(call cmd,perlasm) + clean-files += aes-riscv64-zvkned.S clean-files += aes-riscv64-zvkned-zvbb-zvkg.S clean-files += aes-riscv64-zvkned-zvkb.S +clean-files += ghash-riscv64-zvkg.S diff --git a/arch/riscv/crypto/ghash-riscv64-glue.c b/arch/riscv/crypto/ghash-riscv64-glue.c new file mode 100644 index 000000000000..b01ab5714677 --- /dev/null +++ b/arch/riscv/crypto/ghash-riscv64-glue.c @@ -0,0 +1,175 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * RISC-V optimized GHASH routines + * + * Copyright (C) 2023 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. 
+ * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* ghash using zvkg vector crypto extension */ +asmlinkage void gcm_ghash_rv64i_zvkg(be128 *Xi, const be128 *H, const u8 *inp, + size_t len); + +struct riscv64_ghash_tfm_ctx { + be128 key; +}; + +struct riscv64_ghash_desc_ctx { + be128 shash; + u8 buffer[GHASH_BLOCK_SIZE]; + u32 bytes; +}; + +static inline void ghash_blocks(const struct riscv64_ghash_tfm_ctx *tctx, + struct riscv64_ghash_desc_ctx *dctx, + const u8 *src, size_t srclen) +{ + /* The srclen is nonzero and a multiple of 16. */ + if (crypto_simd_usable()) { + kernel_vector_begin(); + gcm_ghash_rv64i_zvkg(&dctx->shash, &tctx->key, src, srclen); + kernel_vector_end(); + } else { + do { + crypto_xor((u8 *)&dctx->shash, src, GHASH_BLOCK_SIZE); + gf128mul_lle(&dctx->shash, &tctx->key); + srclen -= GHASH_BLOCK_SIZE; + src += GHASH_BLOCK_SIZE; + } while (srclen); + } +} + +static int ghash_init(struct shash_desc *desc) +{ + struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc); + + *dctx = (struct riscv64_ghash_desc_ctx){}; + + return 0; +} + +static int ghash_update_zvkg(struct shash_desc *desc, const u8 *src, + unsigned int srclen) +{ + size_t len; + const struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); + struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc); + + if (dctx->bytes) { + if (dctx->bytes + srclen < GHASH_BLOCK_SIZE) { + memcpy(dctx->buffer + dctx->bytes, src, srclen); + dctx->bytes += srclen; + return 0; + } + memcpy(dctx->buffer + dctx->bytes, src, + GHASH_BLOCK_SIZE - dctx->bytes); + + ghash_blocks(tctx, dctx, dctx->buffer, GHASH_BLOCK_SIZE); + + src += GHASH_BLOCK_SIZE - dctx->bytes; + srclen -= GHASH_BLOCK_SIZE - dctx->bytes; + dctx->bytes = 0; + } + len = srclen & ~(GHASH_BLOCK_SIZE - 1); + + if (len) { + ghash_blocks(tctx, dctx, src, len); + src += len; + srclen -= len; + } + + if (srclen) { + memcpy(dctx->buffer, src, srclen); + dctx->bytes = srclen; + } + + return 0; +} + +static int ghash_final_zvkg(struct shash_desc *desc, u8 *out) +{ + const struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); + struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc); + int i; + + if (dctx->bytes) { + for (i = dctx->bytes; i < GHASH_BLOCK_SIZE; i++) + dctx->buffer[i] = 0; + + ghash_blocks(tctx, dctx, dctx->buffer, GHASH_BLOCK_SIZE); + } + + memcpy(out, &dctx->shash, GHASH_DIGEST_SIZE); + + return 0; +} + +static int ghash_setkey(struct crypto_shash *tfm, const u8 *key, + unsigned int keylen) +{ + struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(tfm); + + if (keylen != GHASH_BLOCK_SIZE) + return -EINVAL; + + memcpy(&tctx->key, key, GHASH_BLOCK_SIZE); + + return 0; +} + +static struct shash_alg riscv64_ghash_alg_zvkg = { + .init = ghash_init, + .update = ghash_update_zvkg, + .final = ghash_final_zvkg, + .setkey = ghash_setkey, + .descsize = sizeof(struct riscv64_ghash_desc_ctx), + .digestsize = GHASH_DIGEST_SIZE, + .base = { + .cra_blocksize = GHASH_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct riscv64_ghash_tfm_ctx), + .cra_priority = 303, + .cra_name = "ghash", + .cra_driver_name = "ghash-riscv64-zvkg", + .cra_module = THIS_MODULE, + }, +}; + +static inline bool check_ghash_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKG) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_ghash_mod_init(void) +{ + if (check_ghash_ext()) + return crypto_register_shash(&riscv64_ghash_alg_zvkg); + + return -ENODEV; +} + +static void __exit 
riscv64_ghash_mod_fini(void) +{ + crypto_unregister_shash(&riscv64_ghash_alg_zvkg); +} + +module_init(riscv64_ghash_mod_init); +module_exit(riscv64_ghash_mod_fini); + +MODULE_DESCRIPTION("GCM GHASH (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("ghash"); diff --git a/arch/riscv/crypto/ghash-riscv64-zvkg.pl b/arch/riscv/crypto/ghash-riscv64-zvkg.pl new file mode 100644 index 000000000000..f18824496573 --- /dev/null +++ b/arch/riscv/crypto/ghash-riscv64-zvkg.pl @@ -0,0 +1,100 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector GCM/GMAC extension ('Zvkg') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? 
shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +.text +.option arch, +zvkg +___ + +############################################################################### +# void gcm_ghash_rv64i_zvkg(be128 *Xi, const be128 *H, const u8 *inp, size_t len) +# +# input: Xi: current hash value +# H: hash key +# inp: pointer to input data +# len: length of input data in bytes (multiple of block size) +# output: Xi: Xi+1 (next hash value Xi) +{ +my ($Xi,$H,$inp,$len) = ("a0","a1","a2","a3"); +my ($vXi,$vH,$vinp,$Vzero) = ("v1","v2","v3","v4"); + +$code .= <<___; +.p2align 3 +.globl gcm_ghash_rv64i_zvkg +.type gcm_ghash_rv64i_zvkg,\@function +gcm_ghash_rv64i_zvkg: + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $vH, ($H) + vle32.v $vXi, ($Xi) + +Lstep: + vle32.v $vinp, ($inp) + add $inp, $inp, 16 + add $len, $len, -16 + vghsh.vv $vXi, $vH, $vinp + bnez $len, Lstep + + vse32.v $vXi, ($Xi) + ret + +.size gcm_ghash_rv64i_zvkg,.-gcm_ghash_rv64i_zvkg +___ +} + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507255 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5E74FC47079 for ; Sun, 31 Dec 2023 15:28:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=NIRRmuoZ7ZQC2+j1yI0//pMY83aXmxYh6jo/R3ukiss=; b=TtI9ySafBjy9N5 W3LNL2ZfP2wmHuaPKV4hPKAG0qxKZXE/v71ZbeDU/Tk8u/3TcBZZ3E5P9pB30gvczRajy8zncCrT+ tYuy+WkxqmqzPC3oCuSm6ko46khX9KW4M0iV9lKxDJOAlRtNuY8MickpngyQ+zuMg8WVFGJkmwPw8 l2skGKDMRo+8cBBOHi1+NbM4Kox9gTiwsSGKVoTSpoeHEy9bLoiJ22dt16c5GvLI8XWtN5Ba4icSv 8FLRCuOCgkqoQvTIMPOQAN3zkBjFU6buWDw3plLmIWqcTxae2DkYfM9CfIYRUJ2tjfS2UA9wrdiuI TC63aKfOr9GLTeeiOUoQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjk-004maj-1G; Sun, 31 Dec 2023 15:28:24 +0000 Received: from mail-pl1-x62c.google.com ([2607:f8b0:4864:20::62c]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjd-004mVx-30 for linux-riscv@lists.infradead.org; Sun, 31 Dec 2023 15:28:21 +0000 Received: by mail-pl1-x62c.google.com with SMTP id d9443c01a7336-1d4ba539f6cso506445ad.3 for ; Sun, 31 Dec 2023 07:28:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036497; x=1704641297; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=WNQBInVZby/ekPhWTRyt5y6cGlBZoZz5Zj+QnKJ0oEM=; b=ah5lMKBrZvJYdoh4ckGYRa8Dn8M/pdSUj+h2N6loPH0M3ZsqBYpsNoIHnTd3Wdbs3r mGkT+yUYqL/VO+2yhqB5WfkhzHv+eLJ/HFcfQEa0Dk9vXC93/jPOz1XpYth9lXdf1O5O Mps9NsqulzU9NG8eUWzSx2BTkP2ZxSMq+6ldAfp0UKAODNW3oncjYqeLdb6iQQqpppM8 
From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Subject: [PATCH v4 07/11] RISC-V: crypto: add Zvknha/b accelerated SHA224/256 implementations
Date: Sun, 31 Dec 2023 23:27:39 +0800
Message-Id: <20231231152743.6304-8-jerry.shih@sifive.com>
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>
References: <20231231152743.6304-1-jerry.shih@sifive.com>

Add SHA224 and SHA256 implementations using the Zvknha or Zvknhb vector
crypto extensions from OpenSSL (openssl/openssl#21923).

Co-developed-by: Charalampos Mitrodimas
Signed-off-by: Charalampos Mitrodimas
Co-developed-by: Heiko Stuebner
Signed-off-by: Heiko Stuebner
Co-developed-by: Phoebe Chen
Signed-off-by: Phoebe Chen
Signed-off-by: Jerry Shih
---
Changelog v4:
- Use asm mnemonics for the instructions in vector crypto 1.0 extension.
Changelog v3:
- Use `SYM_TYPED_FUNC_START` for sha256 indirect-call asm symbol (see the sketch below).
- Use asm mnemonics for the instructions in RVV 1.0 extension.
Changelog v2:
- Do not turn on kconfig `SHA256_RISCV64` option by default.
- Add `asmlinkage` qualifier for crypto asm function.
- Rename sha256-riscv64-zvkb-zvknha_or_zvknhb to sha256-riscv64-zvknha_or_zvknhb-zvkb.
- Reorder structure sha256_algs members initialization in the order declared.
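The `SYM_TYPED_FUNC_START` item above exists because the SHA-256 base helpers
reach the assembly routine through a function pointer, so kCFI checks the
symbol's type at the indirect call site. A rough sketch of that call shape
(illustrative only, not part of the patch; the typedef mirrors the one in
<crypto/sha256_base.h>):

	/*
	 * Sketch only: the block function is invoked indirectly, so the asm
	 * entry point must carry a matching CFI type annotation.
	 */
	#include <crypto/sha2.h>
	#include <linux/types.h>

	/* Mirrors the typedef in <crypto/sha256_base.h>. */
	typedef void (sha256_block_fn)(struct sha256_state *sst, u8 const *src,
				       int blocks);

	static void run_blocks_sketch(struct sha256_state *state, const u8 *data,
				      int blocks, sha256_block_fn *block_fn)
	{
		/* Indirect call into e.g. the Zvknha/b assembly routine. */
		block_fn(state, data, blocks);
	}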
--- arch/riscv/crypto/Kconfig | 11 + arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/sha256-riscv64-glue.c | 145 ++++++++ .../sha256-riscv64-zvknha_or_zvknhb-zvkb.pl | 317 ++++++++++++++++++ 4 files changed, 480 insertions(+) create mode 100644 arch/riscv/crypto/sha256-riscv64-glue.c create mode 100644 arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index d73b89ceb1a3..ff1dce4a2bcc 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -44,4 +44,15 @@ config CRYPTO_GHASH_RISCV64 Architecture: riscv64 using: - Zvkg vector crypto extension +config CRYPTO_SHA256_RISCV64 + tristate "Hash functions: SHA-224 and SHA-256" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_SHA256 + help + SHA-224 and SHA-256 secure hash algorithm (FIPS 180) + + Architecture: riscv64 using: + - Zvknha or Zvknhb vector crypto extensions + - Zvkb vector crypto extension + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index 94a7f8eaa8a7..e9d7717ec943 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -12,6 +12,9 @@ aes-block-riscv64-y := aes-riscv64-block-mode-glue.o aes-riscv64-zvkned-zvbb-zvk obj-$(CONFIG_CRYPTO_GHASH_RISCV64) += ghash-riscv64.o ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o +obj-$(CONFIG_CRYPTO_SHA256_RISCV64) += sha256-riscv64.o +sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o + quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $(<) void $(@) @@ -27,7 +30,11 @@ $(obj)/aes-riscv64-zvkned-zvkb.S: $(src)/aes-riscv64-zvkned-zvkb.pl $(obj)/ghash-riscv64-zvkg.S: $(src)/ghash-riscv64-zvkg.pl $(call cmd,perlasm) +$(obj)/sha256-riscv64-zvknha_or_zvknhb-zvkb.S: $(src)/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl + $(call cmd,perlasm) + clean-files += aes-riscv64-zvkned.S clean-files += aes-riscv64-zvkned-zvbb-zvkg.S clean-files += aes-riscv64-zvkned-zvkb.S clean-files += ghash-riscv64-zvkg.S +clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S diff --git a/arch/riscv/crypto/sha256-riscv64-glue.c b/arch/riscv/crypto/sha256-riscv64-glue.c new file mode 100644 index 000000000000..760d89031d1c --- /dev/null +++ b/arch/riscv/crypto/sha256-riscv64-glue.c @@ -0,0 +1,145 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Linux/riscv64 port of the OpenSSL SHA256 implementation for RISC-V 64 + * + * Copyright (C) 2022 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * sha256 using zvkb and zvknha/b vector crypto extension + * + * This asm function will just take the first 256-bit as the sha256 state from + * the pointer to `struct sha256_state`. + */ +asmlinkage void +sha256_block_data_order_zvkb_zvknha_or_zvknhb(struct sha256_state *digest, + const u8 *data, int num_blks); + +static int riscv64_sha256_update(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + int ret = 0; + + /* + * Make sure struct sha256_state begins directly with the SHA256 + * 256-bit internal state, as this is what the asm function expect. 
+ */ + BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0); + + if (crypto_simd_usable()) { + kernel_vector_begin(); + ret = sha256_base_do_update( + desc, data, len, + sha256_block_data_order_zvkb_zvknha_or_zvknhb); + kernel_vector_end(); + } else { + ret = crypto_sha256_update(desc, data, len); + } + + return ret; +} + +static int riscv64_sha256_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) +{ + if (crypto_simd_usable()) { + kernel_vector_begin(); + if (len) + sha256_base_do_update( + desc, data, len, + sha256_block_data_order_zvkb_zvknha_or_zvknhb); + sha256_base_do_finalize( + desc, sha256_block_data_order_zvkb_zvknha_or_zvknhb); + kernel_vector_end(); + + return sha256_base_finish(desc, out); + } + + return crypto_sha256_finup(desc, data, len, out); +} + +static int riscv64_sha256_final(struct shash_desc *desc, u8 *out) +{ + return riscv64_sha256_finup(desc, NULL, 0, out); +} + +static struct shash_alg sha256_algs[] = { + { + .init = sha256_base_init, + .update = riscv64_sha256_update, + .final = riscv64_sha256_final, + .finup = riscv64_sha256_finup, + .descsize = sizeof(struct sha256_state), + .digestsize = SHA256_DIGEST_SIZE, + .base = { + .cra_blocksize = SHA256_BLOCK_SIZE, + .cra_priority = 150, + .cra_name = "sha256", + .cra_driver_name = "sha256-riscv64-zvknha_or_zvknhb-zvkb", + .cra_module = THIS_MODULE, + }, + }, { + .init = sha224_base_init, + .update = riscv64_sha256_update, + .final = riscv64_sha256_final, + .finup = riscv64_sha256_finup, + .descsize = sizeof(struct sha256_state), + .digestsize = SHA224_DIGEST_SIZE, + .base = { + .cra_blocksize = SHA224_BLOCK_SIZE, + .cra_priority = 150, + .cra_name = "sha224", + .cra_driver_name = "sha224-riscv64-zvknha_or_zvknhb-zvkb", + .cra_module = THIS_MODULE, + }, + }, +}; + +static inline bool check_sha256_ext(void) +{ + /* + * From the spec: + * The Zvknhb ext supports both SHA-256 and SHA-512 and Zvknha only + * supports SHA-256. + */ + return (riscv_isa_extension_available(NULL, ZVKNHA) || + riscv_isa_extension_available(NULL, ZVKNHB)) && + riscv_isa_extension_available(NULL, ZVKB) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_sha256_mod_init(void) +{ + if (check_sha256_ext()) + return crypto_register_shashes(sha256_algs, + ARRAY_SIZE(sha256_algs)); + + return -ENODEV; +} + +static void __exit riscv64_sha256_mod_fini(void) +{ + crypto_unregister_shashes(sha256_algs, ARRAY_SIZE(sha256_algs)); +} + +module_init(riscv64_sha256_mod_init); +module_exit(riscv64_sha256_mod_fini); + +MODULE_DESCRIPTION("SHA-256 (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("sha224"); +MODULE_ALIAS_CRYPTO("sha256"); diff --git a/arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl b/arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl new file mode 100644 index 000000000000..22dd40d8c734 --- /dev/null +++ b/arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl @@ -0,0 +1,317 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). 
You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Phoebe Chen +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# The generated code of this file depends on the following RISC-V extensions: +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector SHA-2 Secure Hash extension ('Zvknha' or 'Zvknhb') +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? 
shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +#include + +.text +.option arch, +zvknha, +zvkb +___ + +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +my $K256 = "K256"; + +# Function arguments +my ($H, $INP, $LEN, $KT, $H2, $INDEX_PATTERN) = ("a0", "a1", "a2", "a3", "t3", "t4"); + +sub sha_256_load_constant { + my $code=<<___; + la $KT, $K256 # Load round constants K256 + vle32.v $V10, ($KT) + addi $KT, $KT, 16 + vle32.v $V11, ($KT) + addi $KT, $KT, 16 + vle32.v $V12, ($KT) + addi $KT, $KT, 16 + vle32.v $V13, ($KT) + addi $KT, $KT, 16 + vle32.v $V14, ($KT) + addi $KT, $KT, 16 + vle32.v $V15, ($KT) + addi $KT, $KT, 16 + vle32.v $V16, ($KT) + addi $KT, $KT, 16 + vle32.v $V17, ($KT) + addi $KT, $KT, 16 + vle32.v $V18, ($KT) + addi $KT, $KT, 16 + vle32.v $V19, ($KT) + addi $KT, $KT, 16 + vle32.v $V20, ($KT) + addi $KT, $KT, 16 + vle32.v $V21, ($KT) + addi $KT, $KT, 16 + vle32.v $V22, ($KT) + addi $KT, $KT, 16 + vle32.v $V23, ($KT) + addi $KT, $KT, 16 + vle32.v $V24, ($KT) + addi $KT, $KT, 16 + vle32.v $V25, ($KT) +___ + + return $code; +} + +################################################################################ +# void sha256_block_data_order_zvkb_zvknha_or_zvknhb(void *c, const void *p, size_t len) +$code .= <<___; +SYM_TYPED_FUNC_START(sha256_block_data_order_zvkb_zvknha_or_zvknhb) + vsetivli zero, 4, e32, m1, ta, ma + + @{[sha_256_load_constant]} + + # H is stored as {a,b,c,d},{e,f,g,h}, but we need {f,e,b,a},{h,g,d,c} + # The dst vtype is e32m1 and the index vtype is e8mf4. + # We use index-load with the following index pattern at v26. + # i8 index: + # 20, 16, 4, 0 + # Instead of setting the i8 index, we could use a single 32bit + # little-endian value to cover the 4xi8 index. + # i32 value: + # 0x 00 04 10 14 + li $INDEX_PATTERN, 0x00041014 + vsetivli zero, 1, e32, m1, ta, ma + vmv.v.x $V26, $INDEX_PATTERN + + addi $H2, $H, 8 + + # Use index-load to get {f,e,b,a},{h,g,d,c} + vsetivli zero, 4, e32, m1, ta, ma + vluxei8.v $V6, ($H), $V26 + vluxei8.v $V7, ($H2), $V26 + + # Setup v0 mask for the vmerge to replace the first word (idx==0) in key-scheduling. + # The AVL is 4 in SHA, so we could use a single e8(8 element masking) for masking. + vsetivli zero, 1, e8, m1, ta, ma + vmv.v.i $V0, 0x01 + + vsetivli zero, 4, e32, m1, ta, ma + +L_round_loop: + # Decrement length by 1 + add $LEN, $LEN, -1 + + # Keep the current state as we need it later: H' = H+{a',b',c',...,h'}. + vmv.v.v $V30, $V6 + vmv.v.v $V31, $V7 + + # Load the 512-bits of the message block in v1-v4 and perform + # an endian swap on each 4 bytes element. 
+ vle32.v $V1, ($INP) + vrev8.v $V1, $V1 + add $INP, $INP, 16 + vle32.v $V2, ($INP) + vrev8.v $V2, $V2 + add $INP, $INP, 16 + vle32.v $V3, ($INP) + vrev8.v $V3, $V3 + add $INP, $INP, 16 + vle32.v $V4, ($INP) + vrev8.v $V4, $V4 + add $INP, $INP, 16 + + # Quad-round 0 (+0, Wt from oldest to newest in v1->v2->v3->v4) + vadd.vv $V5, $V10, $V1 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V3, $V2, $V0 + vsha2ms.vv $V1, $V5, $V4 # Generate W[19:16] + + # Quad-round 1 (+1, v2->v3->v4->v1) + vadd.vv $V5, $V11, $V2 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V4, $V3, $V0 + vsha2ms.vv $V2, $V5, $V1 # Generate W[23:20] + + # Quad-round 2 (+2, v3->v4->v1->v2) + vadd.vv $V5, $V12, $V3 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V1, $V4, $V0 + vsha2ms.vv $V3, $V5, $V2 # Generate W[27:24] + + # Quad-round 3 (+3, v4->v1->v2->v3) + vadd.vv $V5, $V13, $V4 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V2, $V1, $V0 + vsha2ms.vv $V4, $V5, $V3 # Generate W[31:28] + + # Quad-round 4 (+0, v1->v2->v3->v4) + vadd.vv $V5, $V14, $V1 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V3, $V2, $V0 + vsha2ms.vv $V1, $V5, $V4 # Generate W[35:32] + + # Quad-round 5 (+1, v2->v3->v4->v1) + vadd.vv $V5, $V15, $V2 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V4, $V3, $V0 + vsha2ms.vv $V2, $V5, $V1 # Generate W[39:36] + + # Quad-round 6 (+2, v3->v4->v1->v2) + vadd.vv $V5, $V16, $V3 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V1, $V4, $V0 + vsha2ms.vv $V3, $V5, $V2 # Generate W[43:40] + + # Quad-round 7 (+3, v4->v1->v2->v3) + vadd.vv $V5, $V17, $V4 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V2, $V1, $V0 + vsha2ms.vv $V4, $V5, $V3 # Generate W[47:44] + + # Quad-round 8 (+0, v1->v2->v3->v4) + vadd.vv $V5, $V18, $V1 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V3, $V2, $V0 + vsha2ms.vv $V1, $V5, $V4 # Generate W[51:48] + + # Quad-round 9 (+1, v2->v3->v4->v1) + vadd.vv $V5, $V19, $V2 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V4, $V3, $V0 + vsha2ms.vv $V2, $V5, $V1 # Generate W[55:52] + + # Quad-round 10 (+2, v3->v4->v1->v2) + vadd.vv $V5, $V20, $V3 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V1, $V4, $V0 + vsha2ms.vv $V3, $V5, $V2 # Generate W[59:56] + + # Quad-round 11 (+3, v4->v1->v2->v3) + vadd.vv $V5, $V21, $V4 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V2, $V1, $V0 + vsha2ms.vv $V4, $V5, $V3 # Generate W[63:60] + + # Quad-round 12 (+0, v1->v2->v3->v4) + # Note that we stop generating new message schedule words (Wt, v1-13) + # as we already generated all the words we end up consuming (i.e., W[63:60]). + vadd.vv $V5, $V22, $V1 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + + # Quad-round 13 (+1, v2->v3->v4->v1) + vadd.vv $V5, $V23, $V2 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + + # Quad-round 14 (+2, v3->v4->v1->v2) + vadd.vv $V5, $V24, $V3 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + + # Quad-round 15 (+3, v4->v1->v2->v3) + vadd.vv $V5, $V25, $V4 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + + # H' = H+{a',b',c',...,h'} + vadd.vv $V6, $V30, $V6 + vadd.vv $V7, $V31, $V7 + bnez $LEN, L_round_loop + + # Store {f,e,b,a},{h,g,d,c} back to {a,b,c,d},{e,f,g,h}. 
+ vsuxei8.v $V6, ($H), $V26 + vsuxei8.v $V7, ($H2), $V26 + + ret +SYM_FUNC_END(sha256_block_data_order_zvkb_zvknha_or_zvknhb) + +.p2align 2 +.type $K256,\@object +$K256: + .word 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5 + .word 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5 + .word 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3 + .word 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174 + .word 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc + .word 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da + .word 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7 + .word 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967 + .word 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13 + .word 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85 + .word 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3 + .word 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070 + .word 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5 + .word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3 + .word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208 + .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 +.size $K256,.-$K256 +___ + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507256 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7AED7C3DA6E for ; Sun, 31 Dec 2023 15:28:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=W/GL7oG7eJDqtwy0LXBSR0/u4Cf+WhosYMzUKN9IBpQ=; b=agQvcFcsT7ia65 WsQ4wyg7BP+TqzUJb70ZszrnIRCiyPsM5+6FEgjl++ullVu21Y7uS7ORPaHgyG6izPwH1BkK+MPNJ i2ObddQeKdqu9TcT5T9EYYcDuzqCd1wspIzAhAf12k193CYqcV9JF5c1Bub6dwTTMNUoBdiGQmF3a hKG0h7/CvnE3r7ibrlaIzH9bmJhUe8TJwkgy5MUehfVxHDZ74T9jo2XiWLfJZjKa81YpINoNgXxdk Xltw1rc5KAj5y0qFl3dKSOiRS61LC7bizMILxhg3bWLd9g7MxJwpiDRLhkw4Znw1dqeODybtIMzii DFx9FzVLDyQTDSmaG2mg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjo-004mdD-1E; Sun, 31 Dec 2023 15:28:28 +0000 Received: from mail-pl1-x632.google.com ([2607:f8b0:4864:20::632]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjh-004mXg-0K for linux-riscv@lists.infradead.org; Sun, 31 Dec 2023 15:28:23 +0000 Received: by mail-pl1-x632.google.com with SMTP id d9443c01a7336-1d427518d52so41799885ad.0 for ; Sun, 31 Dec 2023 07:28:20 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036500; x=1704641300; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+/OIIXKk5C61VbRi9dkG/9FTypVu5kojGYvk9W3ZCc4=; b=C3CMCyUewg0tDRiZM4Bpx3OU+1qKXE3O1gknfC/pTURE19D5UFJUkH2xkvjZmqoatf 
From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Subject: [PATCH v4 08/11] RISC-V: crypto: add Zvknhb accelerated SHA384/512 implementations
Date: Sun, 31 Dec 2023 23:27:40 +0800
Message-Id: <20231231152743.6304-9-jerry.shih@sifive.com>
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>
References: <20231231152743.6304-1-jerry.shih@sifive.com>

Add SHA384 and SHA512 implementations using the Zvknhb vector crypto
extension from OpenSSL (openssl/openssl#21923).

Co-developed-by: Charalampos Mitrodimas
Signed-off-by: Charalampos Mitrodimas
Co-developed-by: Heiko Stuebner
Signed-off-by: Heiko Stuebner
Co-developed-by: Phoebe Chen
Signed-off-by: Phoebe Chen
Signed-off-by: Jerry Shih
---
Changelog v4:
- Use asm mnemonics for the instructions in vector crypto 1.0 extension.
Changelog v3:
- Use `SYM_TYPED_FUNC_START` for sha512 indirect-call asm symbol.
- Use asm mnemonics for the instructions in RVV 1.0 extension.
Changelog v2:
- Do not turn on kconfig `SHA512_RISCV64` option by default.
- Add `asmlinkage` qualifier for crypto asm function.
- Rename sha512-riscv64-zvkb-zvknhb to sha512-riscv64-zvknhb-zvkb.
- Reorder structure sha512_algs members initialization in the order declared. --- arch/riscv/crypto/Kconfig | 11 + arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/sha512-riscv64-glue.c | 139 +++++++++ .../crypto/sha512-riscv64-zvknhb-zvkb.pl | 265 ++++++++++++++++++ 4 files changed, 422 insertions(+) create mode 100644 arch/riscv/crypto/sha512-riscv64-glue.c create mode 100644 arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index ff1dce4a2bcc..1604782c0eed 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -55,4 +55,15 @@ config CRYPTO_SHA256_RISCV64 - Zvknha or Zvknhb vector crypto extensions - Zvkb vector crypto extension +config CRYPTO_SHA512_RISCV64 + tristate "Hash functions: SHA-384 and SHA-512" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_SHA512 + help + SHA-384 and SHA-512 secure hash algorithm (FIPS 180) + + Architecture: riscv64 using: + - Zvknhb vector crypto extension + - Zvkb vector crypto extension + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index e9d7717ec943..8aabef950ad3 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -15,6 +15,9 @@ ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o obj-$(CONFIG_CRYPTO_SHA256_RISCV64) += sha256-riscv64.o sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o +obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o +sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o + quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $(<) void $(@) @@ -33,8 +36,12 @@ $(obj)/ghash-riscv64-zvkg.S: $(src)/ghash-riscv64-zvkg.pl $(obj)/sha256-riscv64-zvknha_or_zvknhb-zvkb.S: $(src)/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl $(call cmd,perlasm) +$(obj)/sha512-riscv64-zvknhb-zvkb.S: $(src)/sha512-riscv64-zvknhb-zvkb.pl + $(call cmd,perlasm) + clean-files += aes-riscv64-zvkned.S clean-files += aes-riscv64-zvkned-zvbb-zvkg.S clean-files += aes-riscv64-zvkned-zvkb.S clean-files += ghash-riscv64-zvkg.S clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S +clean-files += sha512-riscv64-zvknhb-zvkb.S diff --git a/arch/riscv/crypto/sha512-riscv64-glue.c b/arch/riscv/crypto/sha512-riscv64-glue.c new file mode 100644 index 000000000000..3dd8e1c9d402 --- /dev/null +++ b/arch/riscv/crypto/sha512-riscv64-glue.c @@ -0,0 +1,139 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Linux/riscv64 port of the OpenSSL SHA512 implementation for RISC-V 64 + * + * Copyright (C) 2023 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * sha512 using zvkb and zvknhb vector crypto extension + * + * This asm function will just take the first 512-bit as the sha512 state from + * the pointer to `struct sha512_state`. + */ +asmlinkage void sha512_block_data_order_zvkb_zvknhb(struct sha512_state *digest, + const u8 *data, + int num_blks); + +static int riscv64_sha512_update(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + int ret = 0; + + /* + * Make sure struct sha512_state begins directly with the SHA512 + * 512-bit internal state, as this is what the asm function expect. 
+ */ + BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0); + + if (crypto_simd_usable()) { + kernel_vector_begin(); + ret = sha512_base_do_update( + desc, data, len, sha512_block_data_order_zvkb_zvknhb); + kernel_vector_end(); + } else { + ret = crypto_sha512_update(desc, data, len); + } + + return ret; +} + +static int riscv64_sha512_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) +{ + if (crypto_simd_usable()) { + kernel_vector_begin(); + if (len) + sha512_base_do_update( + desc, data, len, + sha512_block_data_order_zvkb_zvknhb); + sha512_base_do_finalize(desc, + sha512_block_data_order_zvkb_zvknhb); + kernel_vector_end(); + + return sha512_base_finish(desc, out); + } + + return crypto_sha512_finup(desc, data, len, out); +} + +static int riscv64_sha512_final(struct shash_desc *desc, u8 *out) +{ + return riscv64_sha512_finup(desc, NULL, 0, out); +} + +static struct shash_alg sha512_algs[] = { + { + .init = sha512_base_init, + .update = riscv64_sha512_update, + .final = riscv64_sha512_final, + .finup = riscv64_sha512_finup, + .descsize = sizeof(struct sha512_state), + .digestsize = SHA512_DIGEST_SIZE, + .base = { + .cra_blocksize = SHA512_BLOCK_SIZE, + .cra_priority = 150, + .cra_name = "sha512", + .cra_driver_name = "sha512-riscv64-zvknhb-zvkb", + .cra_module = THIS_MODULE, + }, + }, + { + .init = sha384_base_init, + .update = riscv64_sha512_update, + .final = riscv64_sha512_final, + .finup = riscv64_sha512_finup, + .descsize = sizeof(struct sha512_state), + .digestsize = SHA384_DIGEST_SIZE, + .base = { + .cra_blocksize = SHA384_BLOCK_SIZE, + .cra_priority = 150, + .cra_name = "sha384", + .cra_driver_name = "sha384-riscv64-zvknhb-zvkb", + .cra_module = THIS_MODULE, + }, + }, +}; + +static inline bool check_sha512_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKNHB) && + riscv_isa_extension_available(NULL, ZVKB) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_sha512_mod_init(void) +{ + if (check_sha512_ext()) + return crypto_register_shashes(sha512_algs, + ARRAY_SIZE(sha512_algs)); + + return -ENODEV; +} + +static void __exit riscv64_sha512_mod_fini(void) +{ + crypto_unregister_shashes(sha512_algs, ARRAY_SIZE(sha512_algs)); +} + +module_init(riscv64_sha512_mod_init); +module_exit(riscv64_sha512_mod_fini); + +MODULE_DESCRIPTION("SHA-512 (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("sha384"); +MODULE_ALIAS_CRYPTO("sha512"); diff --git a/arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl b/arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl new file mode 100644 index 000000000000..cab46ccd1fe2 --- /dev/null +++ b/arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl @@ -0,0 +1,265 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Phoebe Chen +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. 
Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# The generated code of this file depends on the following RISC-V extensions: +# - RV64I +# - RISC-V vector ('V') with VLEN >= 128 +# - RISC-V Vector SHA-2 Secure Hash extension ('Zvknhb') +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +#include + +.text +.option arch, +zvknhb, +zvkb +___ + +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +my $K512 = "K512"; + +# Function arguments +my ($H, $INP, $LEN, $KT, $H2, $INDEX_PATTERN) = ("a0", "a1", "a2", "a3", "t3", "t4"); + +################################################################################ +# void sha512_block_data_order_zvkb_zvknhb(void *c, const void *p, size_t len) +$code .= <<___; +SYM_TYPED_FUNC_START(sha512_block_data_order_zvkb_zvknhb) + vsetivli zero, 4, e64, m2, ta, ma + + # H is stored as {a,b,c,d},{e,f,g,h}, but we need {f,e,b,a},{h,g,d,c} + # The dst vtype is e64m2 and the index vtype is e8mf4. + # We use index-load with the following index pattern at v1. + # i8 index: + # 40, 32, 8, 0 + # Instead of setting the i8 index, we could use a single 32bit + # little-endian value to cover the 4xi8 index. + # i32 value: + # 0x 00 08 20 28 + li $INDEX_PATTERN, 0x00082028 + vsetivli zero, 1, e32, m1, ta, ma + vmv.v.x $V1, $INDEX_PATTERN + + addi $H2, $H, 16 + + # Use index-load to get {f,e,b,a},{h,g,d,c} + vsetivli zero, 4, e64, m2, ta, ma + vluxei8.v $V22, ($H), $V1 + vluxei8.v $V24, ($H2), $V1 + + # Setup v0 mask for the vmerge to replace the first word (idx==0) in key-scheduling. + # The AVL is 4 in SHA, so we could use a single e8(8 element masking) for masking. 
+ vsetivli zero, 1, e8, m1, ta, ma + vmv.v.i $V0, 0x01 + + vsetivli zero, 4, e64, m2, ta, ma + +L_round_loop: + # Load round constants K512 + la $KT, $K512 + + # Decrement length by 1 + addi $LEN, $LEN, -1 + + # Keep the current state as we need it later: H' = H+{a',b',c',...,h'}. + vmv.v.v $V26, $V22 + vmv.v.v $V28, $V24 + + # Load the 1024-bits of the message block in v10-v16 and perform the endian + # swap. + vle64.v $V10, ($INP) + vrev8.v $V10, $V10 + addi $INP, $INP, 32 + vle64.v $V12, ($INP) + vrev8.v $V12, $V12 + addi $INP, $INP, 32 + vle64.v $V14, ($INP) + vrev8.v $V14, $V14 + addi $INP, $INP, 32 + vle64.v $V16, ($INP) + vrev8.v $V16, $V16 + addi $INP, $INP, 32 + + .rept 4 + # Quad-round 0 (+0, v10->v12->v14->v16) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V10 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + vmerge.vvm $V18, $V14, $V12, $V0 + vsha2ms.vv $V10, $V18, $V16 + + # Quad-round 1 (+1, v12->v14->v16->v10) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V12 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + vmerge.vvm $V18, $V16, $V14, $V0 + vsha2ms.vv $V12, $V18, $V10 + + # Quad-round 2 (+2, v14->v16->v10->v12) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V14 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + vmerge.vvm $V18, $V10, $V16, $V0 + vsha2ms.vv $V14, $V18, $V12 + + # Quad-round 3 (+3, v16->v10->v12->v14) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V16 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + vmerge.vvm $V18, $V12, $V10, $V0 + vsha2ms.vv $V16, $V18, $V14 + .endr + + # Quad-round 16 (+0, v10->v12->v14->v16) + # Note that we stop generating new message schedule words (Wt, v10-16) + # as we already generated all the words we end up consuming (i.e., W[79:76]). + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V10 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + + # Quad-round 17 (+1, v12->v14->v16->v10) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V12 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + + # Quad-round 18 (+2, v14->v16->v10->v12) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V14 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + + # Quad-round 19 (+3, v16->v10->v12->v14) + vle64.v $V20, ($KT) + # No t1 increment needed. + vadd.vv $V18, $V20, $V16 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + + # H' = H+{a',b',c',...,h'} + vadd.vv $V22, $V26, $V22 + vadd.vv $V24, $V28, $V24 + bnez $LEN, L_round_loop + + # Store {f,e,b,a},{h,g,d,c} back to {a,b,c,d},{e,f,g,h}. 
+ vsuxei8.v $V22, ($H), $V1 + vsuxei8.v $V24, ($H2), $V1 + + ret +SYM_FUNC_END(sha512_block_data_order_zvkb_zvknhb) + +.p2align 3 +.type $K512,\@object +$K512: + .dword 0x428a2f98d728ae22, 0x7137449123ef65cd + .dword 0xb5c0fbcfec4d3b2f, 0xe9b5dba58189dbbc + .dword 0x3956c25bf348b538, 0x59f111f1b605d019 + .dword 0x923f82a4af194f9b, 0xab1c5ed5da6d8118 + .dword 0xd807aa98a3030242, 0x12835b0145706fbe + .dword 0x243185be4ee4b28c, 0x550c7dc3d5ffb4e2 + .dword 0x72be5d74f27b896f, 0x80deb1fe3b1696b1 + .dword 0x9bdc06a725c71235, 0xc19bf174cf692694 + .dword 0xe49b69c19ef14ad2, 0xefbe4786384f25e3 + .dword 0x0fc19dc68b8cd5b5, 0x240ca1cc77ac9c65 + .dword 0x2de92c6f592b0275, 0x4a7484aa6ea6e483 + .dword 0x5cb0a9dcbd41fbd4, 0x76f988da831153b5 + .dword 0x983e5152ee66dfab, 0xa831c66d2db43210 + .dword 0xb00327c898fb213f, 0xbf597fc7beef0ee4 + .dword 0xc6e00bf33da88fc2, 0xd5a79147930aa725 + .dword 0x06ca6351e003826f, 0x142929670a0e6e70 + .dword 0x27b70a8546d22ffc, 0x2e1b21385c26c926 + .dword 0x4d2c6dfc5ac42aed, 0x53380d139d95b3df + .dword 0x650a73548baf63de, 0x766a0abb3c77b2a8 + .dword 0x81c2c92e47edaee6, 0x92722c851482353b + .dword 0xa2bfe8a14cf10364, 0xa81a664bbc423001 + .dword 0xc24b8b70d0f89791, 0xc76c51a30654be30 + .dword 0xd192e819d6ef5218, 0xd69906245565a910 + .dword 0xf40e35855771202a, 0x106aa07032bbd1b8 + .dword 0x19a4c116b8d2d0c8, 0x1e376c085141ab53 + .dword 0x2748774cdf8eeb99, 0x34b0bcb5e19b48a8 + .dword 0x391c0cb3c5c95a63, 0x4ed8aa4ae3418acb + .dword 0x5b9cca4f7763e373, 0x682e6ff3d6b2b8a3 + .dword 0x748f82ee5defb2fc, 0x78a5636f43172f60 + .dword 0x84c87814a1f0ab72, 0x8cc702081a6439ec + .dword 0x90befffa23631e28, 0xa4506cebde82bde9 + .dword 0xbef9a3f7b2c67915, 0xc67178f2e372532b + .dword 0xca273eceea26619c, 0xd186b8c721c0c207 + .dword 0xeada7dd6cde0eb1e, 0xf57d4f7fee6ed178 + .dword 0x06f067aa72176fba, 0x0a637dc5a2c898a6 + .dword 0x113f9804bef90dae, 0x1b710b35131c471b + .dword 0x28db77f523047d84, 0x32caab7b40c72493 + .dword 0x3c9ebe0a15c9bebc, 0x431d67c49c100d4c + .dword 0x4cc5d4becb3e42b6, 0x597f299cfc657e2a + .dword 0x5fcb6fab3ad6faec, 0x6c44198c4a475817 +.size $K512,.-$K512 +___ + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507257 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0600FC47077 for ; Sun, 31 Dec 2023 15:28:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=eFYLNj/JQG01s38NSj8VYgzusGPzuhlY06fgojhqqVk=; b=1xbwnJOKHW6omd gfhH+0nc8hlQTKLfz2Bkttr8HgJZg3BTWnURw4lSxC1dwrQ43JTIYSO28GmdQwDwkgR372egIjdeL jjTi/J7PrqKIFhoFWi8RvYGAhamyHSb9Qk6rHICp8BMCFcbl5xSyaQ/+Uf24bcrxWb1rcRdAUD5mt H37jQPxd1xoCbHCmPdU9AM5TXss9DzYjTtWX8VYS8MJamVR+TMBr+nb2AQDawOBwUriKxbXAyP8vO 
hnWazyGSjIFLk/e5DQhjvhZgqapwluCsVB5lfGGITmheDzCD6JhFqmEu5l+9eSP7zKddr65FANEs9 6BcRBh5yQQ6Y2+uhYYBg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjp-004mdr-0R; Sun, 31 Dec 2023 15:28:29 +0000 Received: from mail-pl1-x62c.google.com ([2607:f8b0:4864:20::62c]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjj-004mZx-2X for linux-riscv@lists.infradead.org; Sun, 31 Dec 2023 15:28:26 +0000 Received: by mail-pl1-x62c.google.com with SMTP id d9443c01a7336-1d3e84fded7so39425155ad.1 for ; Sun, 31 Dec 2023 07:28:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036503; x=1704641303; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=hXPPBCQQI1J2vwyZpAmmynu+6ALlp8WBjY508kn9xmw=; b=TPVa97IDmW4JrbRReIJezPuXGTGuicf1hc0pbL9TbfI3Xx7tfY8Nb1Ds2bVXRXIUDf MKKV0XOo0gQlXioATv9ZISMrbKvKALVGz0964cn1XbYFBScjf60XdNlABiCy8ao1x7Bt 7c49medpgLTbV2rWvNin4jP745yed7SnVhz2WYccPHEzz4vxiQE5R1xaFQGrSLFiiLP3 9f9grYxnb25/41DJrKQ4uRwwZC2hK0MXKGd7PAHmFqSBOW1wcNXtowhPYO5699Ju8XPS ZL6lMWqWZ4LDREQCuwXmQu2KuVQtwJT1KxRJbQgOaBYIUx9BLNAxaMWp3ourQLCk4UTm k+cA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704036503; x=1704641303; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=hXPPBCQQI1J2vwyZpAmmynu+6ALlp8WBjY508kn9xmw=; b=oCKxaO8Y3egHpLsLxIfU6iUQqIN+PcmwvCz+giu0UpIeuIquBpgFJ0io1fblOb6TZ1 V99jlOnjVONFOxGHpdWDXlT/2nS0jc9JEhD73LelH5VwCsYG1UqiIaRi14oet2hlUwOd QCxnXheIhn+eeBL3zysPXC5GdkmGd7E/kdp1uwp+kn8LNz4l5TZPcOyG2gNjUxGBYhHn RU9UgZ59eUXR3caIUuk51oZg3nyEyHJ+CTYbnyIBF4yMqOM9cYkMTjYAKZNgpBlYcU1y cENw2ZbGNKJ9/gPgKh9o2d8GZOB/P5ZKfxbP5B3sNvkInUpkqYFJDwiOpHbpbyTgHa+b wIlQ== X-Gm-Message-State: AOJu0YyYQ+9UUH8oSwI3/pktSGzHqsj8a1utxq6MGAHpFNA0MZTc3lkh CScr2QqReZFfD4oS3LhALflBW+1H4+2SCA== X-Google-Smtp-Source: AGHT+IFTJ2mwpS/mm9mFWteGyyr+L6PbECUy4sUM62B2IvRa8N3vPhAbnopbiRnHLKV1dQMJGbEpAw== X-Received: by 2002:a17:902:d3c6:b0:1d3:d8e3:266 with SMTP id w6-20020a170902d3c600b001d3d8e30266mr5312184plb.65.1704036503238; Sun, 31 Dec 2023 07:28:23 -0800 (PST) Received: from localhost.localdomain ([49.216.222.63]) by smtp.gmail.com with ESMTPSA id n4-20020a170902e54400b001cc3c521affsm18624430plf.300.2023.12.31.07.28.20 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 31 Dec 2023 07:28:22 -0800 (PST) From: Jerry Shih To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH v4 09/11] RISC-V: crypto: add Zvksed accelerated SM4 implementation Date: Sun, 31 Dec 2023 23:27:41 +0800 Message-Id: <20231231152743.6304-10-jerry.shih@sifive.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com> References: <20231231152743.6304-1-jerry.shih@sifive.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231231_072823_839089_67618D54 X-CRM114-Status: GOOD ( 28.84 ) X-BeenThere: 
linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Add SM4 implementation using Zvksed vector crypto extension from OpenSSL (openssl/openssl#21923). The perlasm here is different from the original implementation in OpenSSL. In OpenSSL, SM4 has the separated set_encrypt_key and set_decrypt_key functions. In kernel, these set_key functions are merged into a single one in order to skip the redundant key expanding instructions. Co-developed-by: Christoph Müllner Signed-off-by: Christoph Müllner Co-developed-by: Heiko Stuebner Signed-off-by: Heiko Stuebner Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. Changelog v3: - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `SM4_RISCV64` option by default. - Add the missed `static` declaration for riscv64_sm4_zvksed_alg. - Add `asmlinkage` qualifier for crypto asm function. - Rename sm4-riscv64-zvkb-zvksed to sm4-riscv64-zvksed-zvkb. - Reorder structure riscv64_sm4_zvksed_zvkb_alg members initialization in the order declared. --- arch/riscv/crypto/Kconfig | 17 ++ arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/sm4-riscv64-glue.c | 121 +++++++++++ arch/riscv/crypto/sm4-riscv64-zvksed.pl | 268 ++++++++++++++++++++++++ 4 files changed, 413 insertions(+) create mode 100644 arch/riscv/crypto/sm4-riscv64-glue.c create mode 100644 arch/riscv/crypto/sm4-riscv64-zvksed.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index 1604782c0eed..cdf7fead0636 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -66,4 +66,21 @@ config CRYPTO_SHA512_RISCV64 - Zvknhb vector crypto extension - Zvkb vector crypto extension +config CRYPTO_SM4_RISCV64 + tristate "Ciphers: SM4 (ShangMi 4)" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_ALGAPI + select CRYPTO_SM4 + help + SM4 cipher algorithms (OSCCA GB/T 32907-2016, + ISO/IEC 18033-3:2010/Amd 1:2021) + + SM4 (GBT.32907-2016) is a cryptographic standard issued by the + Organization of State Commercial Administration of China (OSCCA) + as an authorized cryptographic algorithms for the use within China. 
+ + Architecture: riscv64 using: + - Zvksed vector crypto extension + - Zvkb vector crypto extension + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index 8aabef950ad3..8e34861bba34 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -18,6 +18,9 @@ sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o +obj-$(CONFIG_CRYPTO_SM4_RISCV64) += sm4-riscv64.o +sm4-riscv64-y := sm4-riscv64-glue.o sm4-riscv64-zvksed.o + quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $(<) void $(@) @@ -39,9 +42,13 @@ $(obj)/sha256-riscv64-zvknha_or_zvknhb-zvkb.S: $(src)/sha256-riscv64-zvknha_or_z $(obj)/sha512-riscv64-zvknhb-zvkb.S: $(src)/sha512-riscv64-zvknhb-zvkb.pl $(call cmd,perlasm) +$(obj)/sm4-riscv64-zvksed.S: $(src)/sm4-riscv64-zvksed.pl + $(call cmd,perlasm) + clean-files += aes-riscv64-zvkned.S clean-files += aes-riscv64-zvkned-zvbb-zvkg.S clean-files += aes-riscv64-zvkned-zvkb.S clean-files += ghash-riscv64-zvkg.S clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S clean-files += sha512-riscv64-zvknhb-zvkb.S +clean-files += sm4-riscv64-zvksed.S diff --git a/arch/riscv/crypto/sm4-riscv64-glue.c b/arch/riscv/crypto/sm4-riscv64-glue.c new file mode 100644 index 000000000000..9d9d24b67ee3 --- /dev/null +++ b/arch/riscv/crypto/sm4-riscv64-glue.c @@ -0,0 +1,121 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Linux/riscv64 port of the OpenSSL SM4 implementation for RISC-V 64 + * + * Copyright (C) 2023 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* sm4 using zvksed vector crypto extension */ +asmlinkage void rv64i_zvksed_sm4_encrypt(const u8 *in, u8 *out, const u32 *key); +asmlinkage void rv64i_zvksed_sm4_decrypt(const u8 *in, u8 *out, const u32 *key); +asmlinkage int rv64i_zvksed_sm4_set_key(const u8 *user_key, + unsigned int key_len, u32 *enc_key, + u32 *dec_key); + +static int riscv64_sm4_setkey_zvksed(struct crypto_tfm *tfm, const u8 *key, + unsigned int key_len) +{ + struct sm4_ctx *ctx = crypto_tfm_ctx(tfm); + int ret = 0; + + if (crypto_simd_usable()) { + kernel_vector_begin(); + if (rv64i_zvksed_sm4_set_key(key, key_len, ctx->rkey_enc, + ctx->rkey_dec)) + ret = -EINVAL; + kernel_vector_end(); + } else { + ret = sm4_expandkey(ctx, key, key_len); + } + + return ret; +} + +static void riscv64_sm4_encrypt_zvksed(struct crypto_tfm *tfm, u8 *dst, + const u8 *src) +{ + const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm); + + if (crypto_simd_usable()) { + kernel_vector_begin(); + rv64i_zvksed_sm4_encrypt(src, dst, ctx->rkey_enc); + kernel_vector_end(); + } else { + sm4_crypt_block(ctx->rkey_enc, dst, src); + } +} + +static void riscv64_sm4_decrypt_zvksed(struct crypto_tfm *tfm, u8 *dst, + const u8 *src) +{ + const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm); + + if (crypto_simd_usable()) { + kernel_vector_begin(); + rv64i_zvksed_sm4_decrypt(src, dst, ctx->rkey_dec); + kernel_vector_end(); + } else { + sm4_crypt_block(ctx->rkey_dec, dst, src); + } +} + +static struct crypto_alg riscv64_sm4_zvksed_zvkb_alg = { + .cra_flags = CRYPTO_ALG_TYPE_CIPHER, + .cra_blocksize = SM4_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct sm4_ctx), + .cra_priority = 300, + .cra_name = "sm4", + .cra_driver_name = "sm4-riscv64-zvksed-zvkb", + .cra_cipher = 
{ + .cia_min_keysize = SM4_KEY_SIZE, + .cia_max_keysize = SM4_KEY_SIZE, + .cia_setkey = riscv64_sm4_setkey_zvksed, + .cia_encrypt = riscv64_sm4_encrypt_zvksed, + .cia_decrypt = riscv64_sm4_decrypt_zvksed, + }, + .cra_module = THIS_MODULE, +}; + +static inline bool check_sm4_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKSED) && + riscv_isa_extension_available(NULL, ZVKB) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_sm4_mod_init(void) +{ + if (check_sm4_ext()) + return crypto_register_alg(&riscv64_sm4_zvksed_zvkb_alg); + + return -ENODEV; +} + +static void __exit riscv64_sm4_mod_fini(void) +{ + crypto_unregister_alg(&riscv64_sm4_zvksed_zvkb_alg); +} + +module_init(riscv64_sm4_mod_init); +module_exit(riscv64_sm4_mod_fini); + +MODULE_DESCRIPTION("SM4 (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("sm4"); diff --git a/arch/riscv/crypto/sm4-riscv64-zvksed.pl b/arch/riscv/crypto/sm4-riscv64-zvksed.pl new file mode 100644 index 000000000000..1873160aac2f --- /dev/null +++ b/arch/riscv/crypto/sm4-riscv64-zvksed.pl @@ -0,0 +1,268 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +# The generated code of this file depends on the following RISC-V extensions: +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector SM4 Block Cipher extension ('Zvksed') +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +.text +.option arch, +zvksed, +zvkb +___ + +#### +# int rv64i_zvksed_sm4_set_key(const u8 *user_key, unsigned int key_len, +# u32 *enc_key, u32 *dec_key); +# +{ +my ($ukey,$key_len,$enc_key,$dec_key)=("a0","a1","a2","a3"); +my ($fk,$stride)=("a4","a5"); +my ($t0,$t1)=("t0","t1"); +my ($vukey,$vfk,$vk0,$vk1,$vk2,$vk3,$vk4,$vk5,$vk6,$vk7)=("v1","v2","v3","v4","v5","v6","v7","v8","v9","v10"); +$code .= <<___; +.p2align 3 +.globl rv64i_zvksed_sm4_set_key +.type rv64i_zvksed_sm4_set_key,\@function +rv64i_zvksed_sm4_set_key: + li $t0, 16 + beq $t0, $key_len, 1f + li a0, 1 + ret +1: + + vsetivli zero, 4, e32, m1, ta, ma + + # Load the user key + vle32.v $vukey, ($ukey) + vrev8.v $vukey, $vukey + + # Load the FK. + la $fk, FK + vle32.v $vfk, ($fk) + + # Generate round keys. + vxor.vv $vukey, $vukey, $vfk + vsm4k.vi $vk0, $vukey, 0 # rk[0:3] + vsm4k.vi $vk1, $vk0, 1 # rk[4:7] + vsm4k.vi $vk2, $vk1, 2 # rk[8:11] + vsm4k.vi $vk3, $vk2, 3 # rk[12:15] + vsm4k.vi $vk4, $vk3, 4 # rk[16:19] + vsm4k.vi $vk5, $vk4, 5 # rk[20:23] + vsm4k.vi $vk6, $vk5, 6 # rk[24:27] + vsm4k.vi $vk7, $vk6, 7 # rk[28:31] + + # Store enc round keys + vse32.v $vk0, ($enc_key) # rk[0:3] + addi $enc_key, $enc_key, 16 + vse32.v $vk1, ($enc_key) # rk[4:7] + addi $enc_key, $enc_key, 16 + vse32.v $vk2, ($enc_key) # rk[8:11] + addi $enc_key, $enc_key, 16 + vse32.v $vk3, ($enc_key) # rk[12:15] + addi $enc_key, $enc_key, 16 + vse32.v $vk4, ($enc_key) # rk[16:19] + addi $enc_key, $enc_key, 16 + vse32.v $vk5, ($enc_key) # rk[20:23] + addi $enc_key, $enc_key, 16 + vse32.v $vk6, ($enc_key) # rk[24:27] + addi $enc_key, $enc_key, 16 + vse32.v $vk7, ($enc_key) # rk[28:31] + + # Store dec round keys in reverse order + addi $dec_key, $dec_key, 12 + li $stride, -4 + vsse32.v $vk7, ($dec_key), $stride # rk[31:28] + addi $dec_key, $dec_key, 16 + vsse32.v $vk6, ($dec_key), $stride # rk[27:24] + addi $dec_key, $dec_key, 16 + vsse32.v $vk5, ($dec_key), $stride # rk[23:20] + addi $dec_key, $dec_key, 16 + vsse32.v $vk4, ($dec_key), $stride # rk[19:16] + addi $dec_key, $dec_key, 16 + vsse32.v $vk3, ($dec_key), $stride # rk[15:12] + addi $dec_key, $dec_key, 16 + vsse32.v $vk2, ($dec_key), $stride # rk[11:8] + addi $dec_key, $dec_key, 16 + vsse32.v $vk1, ($dec_key), $stride # rk[7:4] + addi $dec_key, $dec_key, 16 + vsse32.v $vk0, ($dec_key), $stride # rk[3:0] + + li a0, 0 + ret +.size rv64i_zvksed_sm4_set_key,.-rv64i_zvksed_sm4_set_key +___ +} + +#### +# void rv64i_zvksed_sm4_encrypt(const unsigned char *in, unsigned char *out, +# const SM4_KEY *key); +# +{ +my ($in,$out,$keys,$stride)=("a0","a1","a2","t0"); +my ($vdata,$vk0,$vk1,$vk2,$vk3,$vk4,$vk5,$vk6,$vk7,$vgen)=("v1","v2","v3","v4","v5","v6","v7","v8","v9","v10"); +$code .= <<___; +.p2align 3 +.globl rv64i_zvksed_sm4_encrypt +.type rv64i_zvksed_sm4_encrypt,\@function +rv64i_zvksed_sm4_encrypt: + vsetivli zero, 4, 
e32, m1, ta, ma + + # Load input data + vle32.v $vdata, ($in) + vrev8.v $vdata, $vdata + + # Order of elements was adjusted in sm4_set_key() + # Encrypt with all keys + vle32.v $vk0, ($keys) # rk[0:3] + vsm4r.vs $vdata, $vk0 + addi $keys, $keys, 16 + vle32.v $vk1, ($keys) # rk[4:7] + vsm4r.vs $vdata, $vk1 + addi $keys, $keys, 16 + vle32.v $vk2, ($keys) # rk[8:11] + vsm4r.vs $vdata, $vk2 + addi $keys, $keys, 16 + vle32.v $vk3, ($keys) # rk[12:15] + vsm4r.vs $vdata, $vk3 + addi $keys, $keys, 16 + vle32.v $vk4, ($keys) # rk[16:19] + vsm4r.vs $vdata, $vk4 + addi $keys, $keys, 16 + vle32.v $vk5, ($keys) # rk[20:23] + vsm4r.vs $vdata, $vk5 + addi $keys, $keys, 16 + vle32.v $vk6, ($keys) # rk[24:27] + vsm4r.vs $vdata, $vk6 + addi $keys, $keys, 16 + vle32.v $vk7, ($keys) # rk[28:31] + vsm4r.vs $vdata, $vk7 + + # Save the ciphertext (in reverse element order) + vrev8.v $vdata, $vdata + li $stride, -4 + addi $out, $out, 12 + vsse32.v $vdata, ($out), $stride + + ret +.size rv64i_zvksed_sm4_encrypt,.-rv64i_zvksed_sm4_encrypt +___ +} + +#### +# void rv64i_zvksed_sm4_decrypt(const unsigned char *in, unsigned char *out, +# const SM4_KEY *key); +# +{ +my ($in,$out,$keys,$stride)=("a0","a1","a2","t0"); +my ($vdata,$vk0,$vk1,$vk2,$vk3,$vk4,$vk5,$vk6,$vk7,$vgen)=("v1","v2","v3","v4","v5","v6","v7","v8","v9","v10"); +$code .= <<___; +.p2align 3 +.globl rv64i_zvksed_sm4_decrypt +.type rv64i_zvksed_sm4_decrypt,\@function +rv64i_zvksed_sm4_decrypt: + vsetivli zero, 4, e32, m1, ta, ma + + # Load input data + vle32.v $vdata, ($in) + vrev8.v $vdata, $vdata + + # Order of key elements was adjusted in sm4_set_key() + # Decrypt with all keys + vle32.v $vk7, ($keys) # rk[31:28] + vsm4r.vs $vdata, $vk7 + addi $keys, $keys, 16 + vle32.v $vk6, ($keys) # rk[27:24] + vsm4r.vs $vdata, $vk6 + addi $keys, $keys, 16 + vle32.v $vk5, ($keys) # rk[23:20] + vsm4r.vs $vdata, $vk5 + addi $keys, $keys, 16 + vle32.v $vk4, ($keys) # rk[19:16] + vsm4r.vs $vdata, $vk4 + addi $keys, $keys, 16 + vle32.v $vk3, ($keys) # rk[15:11] + vsm4r.vs $vdata, $vk3 + addi $keys, $keys, 16 + vle32.v $vk2, ($keys) # rk[11:8] + vsm4r.vs $vdata, $vk2 + addi $keys, $keys, 16 + vle32.v $vk1, ($keys) # rk[7:4] + vsm4r.vs $vdata, $vk1 + addi $keys, $keys, 16 + vle32.v $vk0, ($keys) # rk[3:0] + vsm4r.vs $vdata, $vk0 + + # Save the ciphertext (in reverse element order) + vrev8.v $vdata, $vdata + li $stride, -4 + addi $out, $out, 12 + vsse32.v $vdata, ($out), $stride + + ret +.size rv64i_zvksed_sm4_decrypt,.-rv64i_zvksed_sm4_decrypt +___ +} + +$code .= <<___; +# Family Key (little-endian 32-bit chunks) +.p2align 3 +FK: + .word 0xA3B1BAC6, 0x56AA3350, 0x677D9197, 0xB27022DC +.size FK,.-FK +___ + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507258 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 067D7C3DA6E for ; Sun, 31 Dec 2023 15:28:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: 
List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=6wAiffyRffBLYNrQPjsxQ7tEK4ZIf86rF47IM/lJrQ0=; b=cwoe2uap0l+uWm JG+Y5FqnfbsLAadSbNwz8VvG0K0GXmEhq0j8B0Ttlfwq/fF5GcyQ4rCnQDzlwQG8mYKpad+NnYUXy f5io0/QP4n8xSoG8LCMLEhcESFxVsnpfqaSRASRE47pv08+DLATTr4La/PQlbhEh5ELVlAzNYjGBZ jVdQgvORSy6qyBr2mdEZ+2VDuRb8b00Mt24PXKULyfxgzn9WL3Ppo1U1ThgC3QCOBS2M9QA6thaHl SiKg+7ydQZiOjo9RrHeKSIsrgV7IZtdRDHfn3tM5uegPfSctvYmQdQnzAJNVx9q9nBiYNYCkdjjWY XCuMICpf0Yw3lCYxspSQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjt-004mgs-0E; Sun, 31 Dec 2023 15:28:33 +0000 Received: from mail-pf1-x429.google.com ([2607:f8b0:4864:20::429]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjn-004mcT-2A for linux-riscv@lists.infradead.org; Sun, 31 Dec 2023 15:28:30 +0000 Received: by mail-pf1-x429.google.com with SMTP id d2e1a72fcca58-6d9bee259c5so1765023b3a.1 for ; Sun, 31 Dec 2023 07:28:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036506; x=1704641306; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=4btUclgL8prP5a5xsfbrxrrvqeEp8OCXhpchFT4JJow=; b=H/SCASTVl4WXtlhKVLJAbCM1OVlzaQfv1Cjdo/z3vCYqsu0rma2JTPBLkmu1NrDMoe eQv/GDmftnP3x3MG2eQciz60iSW86mIHnZHQnQOY6ITuFpJX1TqNcS+po8SUM60V95SW JaR6KU4Rxii+T2E/3iV+OXnO54OBu7fXiiuVeTEapTmqSSmPFHuKX4pB/41VphxqnNUv PagsDrse4u4EcySUpUOrMdQBfZX2M5Zjo+pmCFr9bcfgad23FC2zlQoNT3kFr8tbcr4K Tgk7VmTY67nW0+dtn5uz0XixeAF1BdhLF7hJ+YDgFUTWG9Vsy7YUG+c9VFiZ6otQpbJ0 6GHg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704036506; x=1704641306; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=4btUclgL8prP5a5xsfbrxrrvqeEp8OCXhpchFT4JJow=; b=O1S3fiyd07mp0AwKjzM1TBrqxWaTHZarWzppqWxjqvzVHV/CUJbfzftahKDwlD7SOW 0IcqY6kEdamyBQ284OytyVogSeHjkudy9l9RzupPiNPgs2HuMALBGTVVaAtGlhroI6KI nIuwk8BS5HP3SjUwlz8EAyh9Kpu6+BRJgt1+NHdiQdRH6B0ay5jyanXCLXOAFUWCOKE9 CU54pqvfsgd3YANhTR2U9SDvI13NTenQRKxjB5qYNwhFNTqvdq06q+BouZgPNPts9xWe Z9b74idMdIoA6pFFGEnN/cfVBxfB5wYaEBq1I3mIdhf5z/xzxqeLKs8wPsoQEeCGaGxU Rh9w== X-Gm-Message-State: AOJu0YzF4FkH+AjAVmYCkMehDcXuNEyTlpXLJX9vNArE0mqxXQc+XyDY z7piBf68Ua8NWJRy89V62DuLr4FxbCbp2A== X-Google-Smtp-Source: AGHT+IGs+2VkVLO9wT3XYs3ZUo/QaAslGvl/LI9b9+V4HNtX175e8ul7UJLbAGqZqbTBd5UyWRSFTA== X-Received: by 2002:a05:6a20:5294:b0:196:5929:dcdd with SMTP id o20-20020a056a20529400b001965929dcddmr1930861pzg.6.1704036506477; Sun, 31 Dec 2023 07:28:26 -0800 (PST) Received: from localhost.localdomain ([49.216.222.63]) by smtp.gmail.com with ESMTPSA id n4-20020a170902e54400b001cc3c521affsm18624430plf.300.2023.12.31.07.28.23 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 31 Dec 2023 07:28:26 -0800 (PST) From: Jerry Shih To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, 
linux-crypto@vger.kernel.org Subject: [PATCH v4 10/11] RISC-V: crypto: add Zvksh accelerated SM3 implementation Date: Sun, 31 Dec 2023 23:27:42 +0800 Message-Id: <20231231152743.6304-11-jerry.shih@sifive.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com> References: <20231231152743.6304-1-jerry.shih@sifive.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231231_072827_716621_EFBFC8EF X-CRM114-Status: GOOD ( 32.00 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Add SM3 implementation using Zvksh vector crypto extension from OpenSSL (openssl/openssl#21923). Co-developed-by: Christoph Müllner Signed-off-by: Christoph Müllner Co-developed-by: Heiko Stuebner Signed-off-by: Heiko Stuebner Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. Changelog v3: - Use `SYM_TYPED_FUNC_START` for sm3 indirect-call asm symbol. - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `SM3_RISCV64` option by default. - Add `asmlinkage` qualifier for crypto asm function. - Rename sm3-riscv64-zvkb-zvksh to sm3-riscv64-zvksh-zvkb. - Reorder structure sm3_alg members initialization in the order declared. --- arch/riscv/crypto/Kconfig | 12 ++ arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/sm3-riscv64-glue.c | 124 ++++++++++++++ arch/riscv/crypto/sm3-riscv64-zvksh.pl | 227 +++++++++++++++++++++++++ 4 files changed, 370 insertions(+) create mode 100644 arch/riscv/crypto/sm3-riscv64-glue.c create mode 100644 arch/riscv/crypto/sm3-riscv64-zvksh.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index cdf7fead0636..81dcae72c477 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -66,6 +66,18 @@ config CRYPTO_SHA512_RISCV64 - Zvknhb vector crypto extension - Zvkb vector crypto extension +config CRYPTO_SM3_RISCV64 + tristate "Hash functions: SM3 (ShangMi 3)" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_HASH + select CRYPTO_SM3 + help + SM3 (ShangMi 3) secure hash function (OSCCA GM/T 0004-2012) + + Architecture: riscv64 using: + - Zvksh vector crypto extension + - Zvkb vector crypto extension + config CRYPTO_SM4_RISCV64 tristate "Ciphers: SM4 (ShangMi 4)" depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index 8e34861bba34..b1f857695c1c 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -18,6 +18,9 @@ sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o +obj-$(CONFIG_CRYPTO_SM3_RISCV64) += sm3-riscv64.o +sm3-riscv64-y := sm3-riscv64-glue.o sm3-riscv64-zvksh.o + obj-$(CONFIG_CRYPTO_SM4_RISCV64) += sm4-riscv64.o sm4-riscv64-y := sm4-riscv64-glue.o sm4-riscv64-zvksed.o @@ -42,6 +45,9 @@ $(obj)/sha256-riscv64-zvknha_or_zvknhb-zvkb.S: $(src)/sha256-riscv64-zvknha_or_z $(obj)/sha512-riscv64-zvknhb-zvkb.S: $(src)/sha512-riscv64-zvknhb-zvkb.pl $(call cmd,perlasm) +$(obj)/sm3-riscv64-zvksh.S: $(src)/sm3-riscv64-zvksh.pl + $(call cmd,perlasm) + 
$(obj)/sm4-riscv64-zvksed.S: $(src)/sm4-riscv64-zvksed.pl $(call cmd,perlasm) @@ -51,4 +57,5 @@ clean-files += aes-riscv64-zvkned-zvkb.S clean-files += ghash-riscv64-zvkg.S clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S clean-files += sha512-riscv64-zvknhb-zvkb.S +clean-files += sm3-riscv64-zvksh.S clean-files += sm4-riscv64-zvksed.S diff --git a/arch/riscv/crypto/sm3-riscv64-glue.c b/arch/riscv/crypto/sm3-riscv64-glue.c new file mode 100644 index 000000000000..0e5a2b84c930 --- /dev/null +++ b/arch/riscv/crypto/sm3-riscv64-glue.c @@ -0,0 +1,124 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Linux/riscv64 port of the OpenSSL SM3 implementation for RISC-V 64 + * + * Copyright (C) 2023 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * sm3 using zvksh vector crypto extension + * + * This asm function will just take the first 256-bit as the sm3 state from + * the pointer to `struct sm3_state`. + */ +asmlinkage void ossl_hwsm3_block_data_order_zvksh(struct sm3_state *digest, + u8 const *o, int num); + +static int riscv64_sm3_update(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + int ret = 0; + + /* + * Make sure struct sm3_state begins directly with the SM3 256-bit internal + * state, as this is what the asm function expect. + */ + BUILD_BUG_ON(offsetof(struct sm3_state, state) != 0); + + if (crypto_simd_usable()) { + kernel_vector_begin(); + ret = sm3_base_do_update(desc, data, len, + ossl_hwsm3_block_data_order_zvksh); + kernel_vector_end(); + } else { + sm3_update(shash_desc_ctx(desc), data, len); + } + + return ret; +} + +static int riscv64_sm3_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) +{ + struct sm3_state *ctx; + + if (crypto_simd_usable()) { + kernel_vector_begin(); + if (len) + sm3_base_do_update(desc, data, len, + ossl_hwsm3_block_data_order_zvksh); + sm3_base_do_finalize(desc, ossl_hwsm3_block_data_order_zvksh); + kernel_vector_end(); + + return sm3_base_finish(desc, out); + } + + ctx = shash_desc_ctx(desc); + if (len) + sm3_update(ctx, data, len); + sm3_final(ctx, out); + + return 0; +} + +static int riscv64_sm3_final(struct shash_desc *desc, u8 *out) +{ + return riscv64_sm3_finup(desc, NULL, 0, out); +} + +static struct shash_alg sm3_alg = { + .init = sm3_base_init, + .update = riscv64_sm3_update, + .final = riscv64_sm3_final, + .finup = riscv64_sm3_finup, + .descsize = sizeof(struct sm3_state), + .digestsize = SM3_DIGEST_SIZE, + .base = { + .cra_blocksize = SM3_BLOCK_SIZE, + .cra_priority = 150, + .cra_name = "sm3", + .cra_driver_name = "sm3-riscv64-zvksh-zvkb", + .cra_module = THIS_MODULE, + }, +}; + +static inline bool check_sm3_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKSH) && + riscv_isa_extension_available(NULL, ZVKB) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_sm3_mod_init(void) +{ + if (check_sm3_ext()) + return crypto_register_shash(&sm3_alg); + + return -ENODEV; +} + +static void __exit riscv64_sm3_mod_fini(void) +{ + crypto_unregister_shash(&sm3_alg); +} + +module_init(riscv64_sm3_mod_init); +module_exit(riscv64_sm3_mod_fini); + +MODULE_DESCRIPTION("SM3 (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("sm3"); diff --git a/arch/riscv/crypto/sm3-riscv64-zvksh.pl b/arch/riscv/crypto/sm3-riscv64-zvksh.pl new file mode 100644 index 000000000000..c94c99111a71 --- /dev/null 
+++ b/arch/riscv/crypto/sm3-riscv64-zvksh.pl @@ -0,0 +1,227 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# The generated code of this file depends on the following RISC-V extensions: +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector SM3 Secure Hash extension ('Zvksh') +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +#include + +.text +.option arch, +zvksh, +zvkb +___ + +################################################################################ +# ossl_hwsm3_block_data_order_zvksh(SM3_CTX *c, const void *p, size_t num); +{ +my ($CTX, $INPUT, $NUM) = ("a0", "a1", "a2"); +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +$code .= <<___; +SYM_TYPED_FUNC_START(ossl_hwsm3_block_data_order_zvksh) + vsetivli zero, 8, e32, m2, ta, ma + + # Load initial state of hash context (c->A-H). + vle32.v $V0, ($CTX) + vrev8.v $V0, $V0 + +L_sm3_loop: + # Copy the previous state to v2. + # It will be XOR'ed with the current state at the end of the round. + vmv.v.v $V2, $V0 + + # Load the 64B block in 2x32B chunks. 
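+ # With the e32/m2 setting above (eight 32-bit elements per load), each
+ # vle32.v fetches 32 bytes, so the two loads below cover one 64-byte block.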
+ vle32.v $V6, ($INPUT) # v6 := {w7, ..., w0} + addi $INPUT, $INPUT, 32 + + vle32.v $V8, ($INPUT) # v8 := {w15, ..., w8} + addi $INPUT, $INPUT, 32 + + addi $NUM, $NUM, -1 + + # As vsm3c consumes only w0, w1, w4, w5 we need to slide the input + # 2 elements down so we process elements w2, w3, w6, w7 + # This will be repeated for each odd round. + vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w7, ..., w2} + + vsm3c.vi $V0, $V6, 0 + vsm3c.vi $V0, $V4, 1 + + # Prepare a vector with {w11, ..., w4} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w7, ..., w4} + vslideup.vi $V4, $V8, 4 # v4 := {w11, w10, w9, w8, w7, w6, w5, w4} + + vsm3c.vi $V0, $V4, 2 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w11, w10, w9, w8, w7, w6} + vsm3c.vi $V0, $V4, 3 + + vsm3c.vi $V0, $V8, 4 + vslidedown.vi $V4, $V8, 2 # v4 := {X, X, w15, w14, w13, w12, w11, w10} + vsm3c.vi $V0, $V4, 5 + + vsm3me.vv $V6, $V8, $V6 # v6 := {w23, w22, w21, w20, w19, w18, w17, w16} + + # Prepare a register with {w19, w18, w17, w16, w15, w14, w13, w12} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w15, w14, w13, w12} + vslideup.vi $V4, $V6, 4 # v4 := {w19, w18, w17, w16, w15, w14, w13, w12} + + vsm3c.vi $V0, $V4, 6 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w19, w18, w17, w16, w15, w14} + vsm3c.vi $V0, $V4, 7 + + vsm3c.vi $V0, $V6, 8 + vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w23, w22, w21, w20, w19, w18} + vsm3c.vi $V0, $V4, 9 + + vsm3me.vv $V8, $V6, $V8 # v8 := {w31, w30, w29, w28, w27, w26, w25, w24} + + # Prepare a register with {w27, w26, w25, w24, w23, w22, w21, w20} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w23, w22, w21, w20} + vslideup.vi $V4, $V8, 4 # v4 := {w27, w26, w25, w24, w23, w22, w21, w20} + + vsm3c.vi $V0, $V4, 10 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w27, w26, w25, w24, w23, w22} + vsm3c.vi $V0, $V4, 11 + + vsm3c.vi $V0, $V8, 12 + vslidedown.vi $V4, $V8, 2 # v4 := {x, X, w31, w30, w29, w28, w27, w26} + vsm3c.vi $V0, $V4, 13 + + vsm3me.vv $V6, $V8, $V6 # v6 := {w32, w33, w34, w35, w36, w37, w38, w39} + + # Prepare a register with {w35, w34, w33, w32, w31, w30, w29, w28} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w31, w30, w29, w28} + vslideup.vi $V4, $V6, 4 # v4 := {w35, w34, w33, w32, w31, w30, w29, w28} + + vsm3c.vi $V0, $V4, 14 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w35, w34, w33, w32, w31, w30} + vsm3c.vi $V0, $V4, 15 + + vsm3c.vi $V0, $V6, 16 + vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w39, w38, w37, w36, w35, w34} + vsm3c.vi $V0, $V4, 17 + + vsm3me.vv $V8, $V6, $V8 # v8 := {w47, w46, w45, w44, w43, w42, w41, w40} + + # Prepare a register with {w43, w42, w41, w40, w39, w38, w37, w36} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w39, w38, w37, w36} + vslideup.vi $V4, $V8, 4 # v4 := {w43, w42, w41, w40, w39, w38, w37, w36} + + vsm3c.vi $V0, $V4, 18 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w43, w42, w41, w40, w39, w38} + vsm3c.vi $V0, $V4, 19 + + vsm3c.vi $V0, $V8, 20 + vslidedown.vi $V4, $V8, 2 # v4 := {X, X, w47, w46, w45, w44, w43, w42} + vsm3c.vi $V0, $V4, 21 + + vsm3me.vv $V6, $V8, $V6 # v6 := {w55, w54, w53, w52, w51, w50, w49, w48} + + # Prepare a register with {w51, w50, w49, w48, w47, w46, w45, w44} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w47, w46, w45, w44} + vslideup.vi $V4, $V6, 4 # v4 := {w51, w50, w49, w48, w47, w46, w45, w44} + + vsm3c.vi $V0, $V4, 22 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w51, w50, w49, w48, w47, w46} + vsm3c.vi $V0, $V4, 23 + + vsm3c.vi $V0, $V6, 24 + vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w55, w54, w53, w52, w51, w50} + vsm3c.vi $V0, $V4, 25 + + 
vsm3me.vv $V8, $V6, $V8 # v8 := {w63, w62, w61, w60, w59, w58, w57, w56} + + # Prepare a register with {w59, w58, w57, w56, w55, w54, w53, w52} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w55, w54, w53, w52} + vslideup.vi $V4, $V8, 4 # v4 := {w59, w58, w57, w56, w55, w54, w53, w52} + + vsm3c.vi $V0, $V4, 26 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w59, w58, w57, w56, w55, w54} + vsm3c.vi $V0, $V4, 27 + + vsm3c.vi $V0, $V8, 28 + vslidedown.vi $V4, $V8, 2 # v4 := {X, X, w63, w62, w61, w60, w59, w58} + vsm3c.vi $V0, $V4, 29 + + vsm3me.vv $V6, $V8, $V6 # v6 := {w71, w70, w69, w68, w67, w66, w65, w64} + + # Prepare a register with {w67, w66, w65, w64, w63, w62, w61, w60} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w63, w62, w61, w60} + vslideup.vi $V4, $V6, 4 # v4 := {w67, w66, w65, w64, w63, w62, w61, w60} + + vsm3c.vi $V0, $V4, 30 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w67, w66, w65, w64, w63, w62} + vsm3c.vi $V0, $V4, 31 + + # XOR in the previous state. + vxor.vv $V0, $V0, $V2 + + bnez $NUM, L_sm3_loop # Check if there are any more block to process +L_sm3_end: + vrev8.v $V0, $V0 + vse32.v $V0, ($CTX) + ret +SYM_FUNC_END(ossl_hwsm3_block_data_order_zvksh) +___ +} + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507259 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 53266C47073 for ; Sun, 31 Dec 2023 15:28:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=P6S9ntNzPPl9K/FIiV/Ci0K4ymV/HMz/gNMPtRCi02k=; b=vMu6kjZJ6fy0yk L7raiv4P30JBekWj3dH2VojtrMuBNEx9tldzvSc663FVRfLu7H0/PyJ3tV5wqd0AFfjjgQTGI59PE 5KEmBSGFLCsnn3Li3iVy7DEEoOEmimSlU9O2hyLsfnUvtXVKGIGwwMD43DIel8u77IXod/oQqZhzl QJ0aRbNyX03Usvx5Zvb+7sRZ2s3fg6qvXIGo1TOU6Us0RVYo3tVkrIPC4ZB+gWZZQ/N5yRsk24TCj iyvCZUNDU4U9TU4AzMidhmZ26b3WtFSgVB2dIVlmg8BLfyGS7U4/GpZPQ3tg17yku+vAI2DXu7GQ4 /TLZrk6xW/M+eJuXcKeA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjv-004miw-31; Sun, 31 Dec 2023 15:28:35 +0000 Received: from mail-pj1-x1036.google.com ([2607:f8b0:4864:20::1036]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rJxjr-004mex-0Z for linux-riscv@lists.infradead.org; Sun, 31 Dec 2023 15:28:34 +0000 Received: by mail-pj1-x1036.google.com with SMTP id 98e67ed59e1d1-28c7c422ad3so2182443a91.3 for ; Sun, 31 Dec 2023 07:28:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036510; x=1704641310; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; 
bh=rj7SSogxzCOROqn86O2lcjv5NKV2ZOXp3SBjo/z8Ga0=; b=gdSxt11pzbVmX+EvtvKu6LUBJSO82rzcw+12yjf7H3Cy0F+F0Be1cbhV57U0gNgjIl meIxp94eRNPegF8TXkAIVaa4tRekzKD8AN9kqdWfyW57sLQGuRoekMkB4w4tqZ7kuu1A b7dWooHIhApuF9YoLkJDLyb3oOrxJ/Imypo/ObxcKX1VKx2AOdge33yCHqRcM8CV1Xbd 5oAUPePdhs/oj+VAxCELAczXQMAEuJUTUK4p/AYZQYkgScvEKh3GfpVimzvfTnLXTlyV b5bGESO5AkMT8ximiEA5bS5yuwuHkHu1ZqxjpKQ+VjF4+AysSTQZ2geu2BodGUsahcnV 5cmw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704036510; x=1704641310; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=rj7SSogxzCOROqn86O2lcjv5NKV2ZOXp3SBjo/z8Ga0=; b=rX0ZCRDUCNKquRYzZqXmZvtaIf9zSwxiMHnH0ApsPBN3JuZja/21K3hnG1VDZFSO7g +6IMgXcFNbdbsVYI7ZEZdV+vIkVxNlmLOiuQBg8rNAhLDJ1zMZq+7933+SjSXGZdZega su4qOgnXhEQpWhBIvFlOuAc3ax6gyvsE3yFBhBq+NXuttKXPJtUAl16I/82hDRT1QiUj Lo5j/+q2Kr12akO2HXPwnrNlkMmT5ltDlHFG7a3uy71HG81D+0F9z59Ig3E3XmirnoH5 TuGomUMaeXljhJ6z6lEPDUoQ2zqqj6L8pjYTJYBjDswuyvxaW9GY8Z+vetn/JzvnWTgh 8+sg== X-Gm-Message-State: AOJu0YwJkFPrhdujGHwCky5F+SoX2pb8KsjBKt0AE8YHl7nBVUNXF63W SUXQUJZsFXtce0d64IidN6ScuZkJPZJCng== X-Google-Smtp-Source: AGHT+IGnclcxmMOJR8aaB5AN+Sgak0SsCFCASSKiW+guzFyQWbm4sKfwNn9xsx8Ll66Hl5DoG2jQ3w== X-Received: by 2002:a17:902:c411:b0:1d4:70c7:aabf with SMTP id k17-20020a170902c41100b001d470c7aabfmr5249294plk.62.1704036509945; Sun, 31 Dec 2023 07:28:29 -0800 (PST) Received: from localhost.localdomain ([49.216.222.63]) by smtp.gmail.com with ESMTPSA id n4-20020a170902e54400b001cc3c521affsm18624430plf.300.2023.12.31.07.28.26 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 31 Dec 2023 07:28:29 -0800 (PST) From: Jerry Shih To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH v4 11/11] RISC-V: crypto: add Zvkb accelerated ChaCha20 implementation Date: Sun, 31 Dec 2023 23:27:43 +0800 Message-Id: <20231231152743.6304-12-jerry.shih@sifive.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com> References: <20231231152743.6304-1-jerry.shih@sifive.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231231_072831_218008_FD5082F0 X-CRM114-Status: GOOD ( 26.67 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Add a ChaCha20 vector implementation from OpenSSL(openssl/openssl#21923). Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. - Revert the usage of simd skcipher. Changelog v3: - Rename kconfig CRYPTO_CHACHA20_RISCV64 to CRYPTO_CHACHA_RISCV64. - Rename chacha20_encrypt() to riscv64_chacha20_encrypt(). - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `CHACHA20_RISCV64` option by default. - Use simd skcipher interface. - Add `asmlinkage` qualifier for crypto asm function. 
- Reorder structure riscv64_chacha_alg_zvkb members initialization in the order declared. - Use smaller iv buffer instead of whole state matrix as chacha20's input. --- arch/riscv/crypto/Kconfig | 12 + arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/chacha-riscv64-glue.c | 109 ++++++++ arch/riscv/crypto/chacha-riscv64-zvkb.pl | 321 +++++++++++++++++++++++ 4 files changed, 449 insertions(+) create mode 100644 arch/riscv/crypto/chacha-riscv64-glue.c create mode 100644 arch/riscv/crypto/chacha-riscv64-zvkb.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index 81dcae72c477..2a756f92871f 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -34,6 +34,18 @@ config CRYPTO_AES_BLOCK_RISCV64 - Zvkb vector crypto extension (CTR/XTS) - Zvkg vector crypto extension (XTS) +config CRYPTO_CHACHA_RISCV64 + tristate "Ciphers: ChaCha" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_SIMD + select CRYPTO_SKCIPHER + select CRYPTO_LIB_CHACHA_GENERIC + help + Length-preserving ciphers: ChaCha20 stream cipher algorithm + + Architecture: riscv64 using: + - Zvkb vector crypto extension + config CRYPTO_GHASH_RISCV64 tristate "Hash functions: GHASH" depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index b1f857695c1c..31021eb3929c 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -9,6 +9,9 @@ aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o obj-$(CONFIG_CRYPTO_AES_BLOCK_RISCV64) += aes-block-riscv64.o aes-block-riscv64-y := aes-riscv64-block-mode-glue.o aes-riscv64-zvkned-zvbb-zvkg.o aes-riscv64-zvkned-zvkb.o +obj-$(CONFIG_CRYPTO_CHACHA_RISCV64) += chacha-riscv64.o +chacha-riscv64-y := chacha-riscv64-glue.o chacha-riscv64-zvkb.o + obj-$(CONFIG_CRYPTO_GHASH_RISCV64) += ghash-riscv64.o ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o @@ -36,6 +39,9 @@ $(obj)/aes-riscv64-zvkned-zvbb-zvkg.S: $(src)/aes-riscv64-zvkned-zvbb-zvkg.pl $(obj)/aes-riscv64-zvkned-zvkb.S: $(src)/aes-riscv64-zvkned-zvkb.pl $(call cmd,perlasm) +$(obj)/chacha-riscv64-zvkb.S: $(src)/chacha-riscv64-zvkb.pl + $(call cmd,perlasm) + $(obj)/ghash-riscv64-zvkg.S: $(src)/ghash-riscv64-zvkg.pl $(call cmd,perlasm) @@ -54,6 +60,7 @@ $(obj)/sm4-riscv64-zvksed.S: $(src)/sm4-riscv64-zvksed.pl clean-files += aes-riscv64-zvkned.S clean-files += aes-riscv64-zvkned-zvbb-zvkg.S clean-files += aes-riscv64-zvkned-zvkb.S +clean-files += chacha-riscv64-zvkb.S clean-files += ghash-riscv64-zvkg.S clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S clean-files += sha512-riscv64-zvknhb-zvkb.S diff --git a/arch/riscv/crypto/chacha-riscv64-glue.c b/arch/riscv/crypto/chacha-riscv64-glue.c new file mode 100644 index 000000000000..a7a2f0303afe --- /dev/null +++ b/arch/riscv/crypto/chacha-riscv64-glue.c @@ -0,0 +1,109 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Port of the OpenSSL ChaCha20 implementation for RISC-V 64 + * + * Copyright (C) 2023 SiFive, Inc. 
+ * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include + +/* chacha20 using zvkb vector crypto extension */ +asmlinkage void ChaCha20_ctr32_zvkb(u8 *out, const u8 *input, size_t len, + const u32 *key, const u32 *counter); + +static int riscv64_chacha20_encrypt(struct skcipher_request *req) +{ + u32 iv[CHACHA_IV_SIZE / sizeof(u32)]; + u8 block_buffer[CHACHA_BLOCK_SIZE]; + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int nbytes; + unsigned int tail_bytes; + int err; + + iv[0] = get_unaligned_le32(req->iv); + iv[1] = get_unaligned_le32(req->iv + 4); + iv[2] = get_unaligned_le32(req->iv + 8); + iv[3] = get_unaligned_le32(req->iv + 12); + + err = skcipher_walk_virt(&walk, req, false); + while (walk.nbytes) { + nbytes = walk.nbytes & (~(CHACHA_BLOCK_SIZE - 1)); + tail_bytes = walk.nbytes & (CHACHA_BLOCK_SIZE - 1); + kernel_vector_begin(); + if (nbytes) { + ChaCha20_ctr32_zvkb(walk.dst.virt.addr, + walk.src.virt.addr, nbytes, + ctx->key, iv); + iv[0] += nbytes / CHACHA_BLOCK_SIZE; + } + if (walk.nbytes == walk.total && tail_bytes > 0) { + memcpy(block_buffer, walk.src.virt.addr + nbytes, + tail_bytes); + ChaCha20_ctr32_zvkb(block_buffer, block_buffer, + CHACHA_BLOCK_SIZE, ctx->key, iv); + memcpy(walk.dst.virt.addr + nbytes, block_buffer, + tail_bytes); + tail_bytes = 0; + } + kernel_vector_end(); + + err = skcipher_walk_done(&walk, tail_bytes); + } + + return err; +} + +static struct skcipher_alg riscv64_chacha_alg_zvkb = { + .setkey = chacha20_setkey, + .encrypt = riscv64_chacha20_encrypt, + .decrypt = riscv64_chacha20_encrypt, + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = CHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .walksize = CHACHA_BLOCK_SIZE * 4, + .base = { + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct chacha_ctx), + .cra_priority = 300, + .cra_name = "chacha20", + .cra_driver_name = "chacha20-riscv64-zvkb", + .cra_module = THIS_MODULE, + }, +}; + +static inline bool check_chacha20_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKB) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_chacha_mod_init(void) +{ + if (check_chacha20_ext()) + return crypto_register_skcipher(&riscv64_chacha_alg_zvkb); + + return -ENODEV; +} + +static void __exit riscv64_chacha_mod_fini(void) +{ + crypto_unregister_skcipher(&riscv64_chacha_alg_zvkb); +} + +module_init(riscv64_chacha_mod_init); +module_exit(riscv64_chacha_mod_fini); + +MODULE_DESCRIPTION("ChaCha20 (RISC-V accelerated)"); +MODULE_AUTHOR("Jerry Shih "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("chacha20"); diff --git a/arch/riscv/crypto/chacha-riscv64-zvkb.pl b/arch/riscv/crypto/chacha-riscv64-zvkb.pl new file mode 100644 index 000000000000..279410d9e062 --- /dev/null +++ b/arch/riscv/crypto/chacha-riscv64-zvkb.pl @@ -0,0 +1,321 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023-2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You may not use +# this file except in compliance with the License. 
You can obtain a copy +# in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT, ">$output"; + +my $code = <<___; +.text +.option arch, +zvkb +___ + +# void ChaCha20_ctr32_zvkb(unsigned char *out, const unsigned char *inp, +# size_t len, const unsigned int key[8], +# const unsigned int counter[4]); +################################################################################ +my ( $OUTPUT, $INPUT, $LEN, $KEY, $COUNTER ) = ( "a0", "a1", "a2", "a3", "a4" ); +my ( $T0 ) = ( "t0" ); +my ( $CONST_DATA0, $CONST_DATA1, $CONST_DATA2, $CONST_DATA3 ) = + ( "a5", "a6", "a7", "t1" ); +my ( $KEY0, $KEY1, $KEY2,$KEY3, $KEY4, $KEY5, $KEY6, $KEY7, + $COUNTER0, $COUNTER1, $NONCE0, $NONCE1 +) = ( "s0", "s1", "s2", "s3", "s4", "s5", "s6", + "s7", "s8", "s9", "s10", "s11" ); +my ( $VL, $STRIDE, $CHACHA_LOOP_COUNT ) = ( "t2", "t3", "t4" ); +my ( + $V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, $V8, $V9, $V10, + $V11, $V12, $V13, $V14, $V15, $V16, $V17, $V18, $V19, $V20, $V21, + $V22, $V23, $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map( "v$_", ( 0 .. 
31 ) ); + +sub chacha_quad_round_group { + my ( + $A0, $B0, $C0, $D0, $A1, $B1, $C1, $D1, + $A2, $B2, $C2, $D2, $A3, $B3, $C3, $D3 + ) = @_; + + my $code = <<___; + # a += b; d ^= a; d <<<= 16; + vadd.vv $A0, $A0, $B0 + vadd.vv $A1, $A1, $B1 + vadd.vv $A2, $A2, $B2 + vadd.vv $A3, $A3, $B3 + vxor.vv $D0, $D0, $A0 + vxor.vv $D1, $D1, $A1 + vxor.vv $D2, $D2, $A2 + vxor.vv $D3, $D3, $A3 + vror.vi $D0, $D0, 32 - 16 + vror.vi $D1, $D1, 32 - 16 + vror.vi $D2, $D2, 32 - 16 + vror.vi $D3, $D3, 32 - 16 + # c += d; b ^= c; b <<<= 12; + vadd.vv $C0, $C0, $D0 + vadd.vv $C1, $C1, $D1 + vadd.vv $C2, $C2, $D2 + vadd.vv $C3, $C3, $D3 + vxor.vv $B0, $B0, $C0 + vxor.vv $B1, $B1, $C1 + vxor.vv $B2, $B2, $C2 + vxor.vv $B3, $B3, $C3 + vror.vi $B0, $B0, 32 - 12 + vror.vi $B1, $B1, 32 - 12 + vror.vi $B2, $B2, 32 - 12 + vror.vi $B3, $B3, 32 - 12 + # a += b; d ^= a; d <<<= 8; + vadd.vv $A0, $A0, $B0 + vadd.vv $A1, $A1, $B1 + vadd.vv $A2, $A2, $B2 + vadd.vv $A3, $A3, $B3 + vxor.vv $D0, $D0, $A0 + vxor.vv $D1, $D1, $A1 + vxor.vv $D2, $D2, $A2 + vxor.vv $D3, $D3, $A3 + vror.vi $D0, $D0, 32 - 8 + vror.vi $D1, $D1, 32 - 8 + vror.vi $D2, $D2, 32 - 8 + vror.vi $D3, $D3, 32 - 8 + # c += d; b ^= c; b <<<= 7; + vadd.vv $C0, $C0, $D0 + vadd.vv $C1, $C1, $D1 + vadd.vv $C2, $C2, $D2 + vadd.vv $C3, $C3, $D3 + vxor.vv $B0, $B0, $C0 + vxor.vv $B1, $B1, $C1 + vxor.vv $B2, $B2, $C2 + vxor.vv $B3, $B3, $C3 + vror.vi $B0, $B0, 32 - 7 + vror.vi $B1, $B1, 32 - 7 + vror.vi $B2, $B2, 32 - 7 + vror.vi $B3, $B3, 32 - 7 +___ + + return $code; +} + +$code .= <<___; +.p2align 3 +.globl ChaCha20_ctr32_zvkb +.type ChaCha20_ctr32_zvkb,\@function +ChaCha20_ctr32_zvkb: + srli $LEN, $LEN, 6 + beqz $LEN, .Lend + + addi sp, sp, -96 + sd s0, 0(sp) + sd s1, 8(sp) + sd s2, 16(sp) + sd s3, 24(sp) + sd s4, 32(sp) + sd s5, 40(sp) + sd s6, 48(sp) + sd s7, 56(sp) + sd s8, 64(sp) + sd s9, 72(sp) + sd s10, 80(sp) + sd s11, 88(sp) + + li $STRIDE, 64 + + #### chacha block data + # "expa" little endian + li $CONST_DATA0, 0x61707865 + # "nd 3" little endian + li $CONST_DATA1, 0x3320646e + # "2-by" little endian + li $CONST_DATA2, 0x79622d32 + # "te k" little endian + li $CONST_DATA3, 0x6b206574 + + lw $KEY0, 0($KEY) + lw $KEY1, 4($KEY) + lw $KEY2, 8($KEY) + lw $KEY3, 12($KEY) + lw $KEY4, 16($KEY) + lw $KEY5, 20($KEY) + lw $KEY6, 24($KEY) + lw $KEY7, 28($KEY) + + lw $COUNTER0, 0($COUNTER) + lw $COUNTER1, 4($COUNTER) + lw $NONCE0, 8($COUNTER) + lw $NONCE1, 12($COUNTER) + +.Lblock_loop: + vsetvli $VL, $LEN, e32, m1, ta, ma + + # init chacha const states + vmv.v.x $V0, $CONST_DATA0 + vmv.v.x $V1, $CONST_DATA1 + vmv.v.x $V2, $CONST_DATA2 + vmv.v.x $V3, $CONST_DATA3 + + # init chacha key states + vmv.v.x $V4, $KEY0 + vmv.v.x $V5, $KEY1 + vmv.v.x $V6, $KEY2 + vmv.v.x $V7, $KEY3 + vmv.v.x $V8, $KEY4 + vmv.v.x $V9, $KEY5 + vmv.v.x $V10, $KEY6 + vmv.v.x $V11, $KEY7 + + # init chacha key states + vid.v $V12 + vadd.vx $V12, $V12, $COUNTER0 + vmv.v.x $V13, $COUNTER1 + + # init chacha nonce states + vmv.v.x $V14, $NONCE0 + vmv.v.x $V15, $NONCE1 + + # load the top-half of input data + vlsseg8e32.v $V16, ($INPUT), $STRIDE + + li $CHACHA_LOOP_COUNT, 10 +.Lround_loop: + addi $CHACHA_LOOP_COUNT, $CHACHA_LOOP_COUNT, -1 + @{[chacha_quad_round_group + $V0, $V4, $V8, $V12, + $V1, $V5, $V9, $V13, + $V2, $V6, $V10, $V14, + $V3, $V7, $V11, $V15]} + @{[chacha_quad_round_group + $V0, $V5, $V10, $V15, + $V1, $V6, $V11, $V12, + $V2, $V7, $V8, $V13, + $V3, $V4, $V9, $V14]} + bnez $CHACHA_LOOP_COUNT, .Lround_loop + + # load the bottom-half of input data + addi $T0, $INPUT, 32 + vlsseg8e32.v $V24, 
($T0), $STRIDE + + # add chacha top-half initial block states + vadd.vx $V0, $V0, $CONST_DATA0 + vadd.vx $V1, $V1, $CONST_DATA1 + vadd.vx $V2, $V2, $CONST_DATA2 + vadd.vx $V3, $V3, $CONST_DATA3 + vadd.vx $V4, $V4, $KEY0 + vadd.vx $V5, $V5, $KEY1 + vadd.vx $V6, $V6, $KEY2 + vadd.vx $V7, $V7, $KEY3 + # xor with the top-half input + vxor.vv $V16, $V16, $V0 + vxor.vv $V17, $V17, $V1 + vxor.vv $V18, $V18, $V2 + vxor.vv $V19, $V19, $V3 + vxor.vv $V20, $V20, $V4 + vxor.vv $V21, $V21, $V5 + vxor.vv $V22, $V22, $V6 + vxor.vv $V23, $V23, $V7 + + # save the top-half of output + vssseg8e32.v $V16, ($OUTPUT), $STRIDE + + # add chacha bottom-half initial block states + vadd.vx $V8, $V8, $KEY4 + vadd.vx $V9, $V9, $KEY5 + vadd.vx $V10, $V10, $KEY6 + vadd.vx $V11, $V11, $KEY7 + vid.v $V0 + vadd.vx $V12, $V12, $COUNTER0 + vadd.vx $V13, $V13, $COUNTER1 + vadd.vx $V14, $V14, $NONCE0 + vadd.vx $V15, $V15, $NONCE1 + vadd.vv $V12, $V12, $V0 + # xor with the bottom-half input + vxor.vv $V24, $V24, $V8 + vxor.vv $V25, $V25, $V9 + vxor.vv $V26, $V26, $V10 + vxor.vv $V27, $V27, $V11 + vxor.vv $V29, $V29, $V13 + vxor.vv $V28, $V28, $V12 + vxor.vv $V30, $V30, $V14 + vxor.vv $V31, $V31, $V15 + + # save the bottom-half of output + addi $T0, $OUTPUT, 32 + vssseg8e32.v $V24, ($T0), $STRIDE + + # update counter + add $COUNTER0, $COUNTER0, $VL + sub $LEN, $LEN, $VL + # increase offset for `4 * 16 * VL = 64 * VL` + slli $T0, $VL, 6 + add $INPUT, $INPUT, $T0 + add $OUTPUT, $OUTPUT, $T0 + bnez $LEN, .Lblock_loop + + ld s0, 0(sp) + ld s1, 8(sp) + ld s2, 16(sp) + ld s3, 24(sp) + ld s4, 32(sp) + ld s5, 40(sp) + ld s6, 48(sp) + ld s7, 56(sp) + ld s8, 64(sp) + ld s9, 72(sp) + ld s10, 80(sp) + ld s11, 88(sp) + addi sp, sp, 96 + +.Lend: + ret +.size ChaCha20_ctr32_zvkb,.-ChaCha20_ctr32_zvkb +___ + +print $code; + +close STDOUT or die "error closing STDOUT: $!";
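For reference, a minimal sketch of how the chacha20-riscv64-zvkb skcipher registered above could be exercised from other kernel code through the crypto API. The function name chacha20_smoke_test and its buffers are illustrative only, not part of the series; the call sequence mirrors what the crypto self-tests do. It assumes buf points to kmalloc'ed memory and that key and iv are 32 and 16 bytes, as required by the glue code.

// SPDX-License-Identifier: GPL-2.0-only
/*
 * Illustrative sketch: drive the highest-priority "chacha20" skcipher
 * (e.g. chacha20-riscv64-zvkb when Zvkb is present and VLEN >= 128)
 * through the kernel crypto API.  Error handling trimmed to essentials.
 */
#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

static int chacha20_smoke_test(const u8 *key, u8 *iv, u8 *buf,
			       unsigned int len)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	/* Picks the highest-priority registered "chacha20" implementation. */
	tfm = crypto_alloc_skcipher("chacha20", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	err = crypto_skcipher_setkey(tfm, key, 32);
	if (err)
		goto out_free_req;

	/* In-place encryption; buf must be linearly mapped (kmalloc'ed). */
	sg_init_one(&sg, buf, len);
	skcipher_request_set_callback(req,
				      CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, len, iv);

	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

out_free_req:
	skcipher_request_free(req);
out_free_tfm:
	crypto_free_skcipher(tfm);
	return err;
}

Because the glue code routes any trailing partial block through a bounce buffer, len does not need to be a multiple of CHACHA_BLOCK_SIZE in this call sequence.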