From patchwork Thu Feb 10 23:28:06 2022 Date: Thu, 10 Feb 2022 23:28:06 +0000 In-Reply-To: <20220210232812.798387-1-nhuck@google.com> Message-Id: <20220210232812.798387-2-nhuck@google.com> References: <20220210232812.798387-1-nhuck@google.com> Subject: [RFC PATCH v2 1/7] crypto: xctr - Add XCTR support From: Nathan Huckleberry To: linux-crypto@vger.kernel.org Cc: Herbert Xu , "David S.
Miller" , linux-arm-kernel@lists.infradead.org, Paul Crowley , Eric Biggers , Sami Tolvanen , Ard Biesheuvel , Nathan Huckleberry Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Add a generic implementation of XCTR mode as a template. XCTR is a blockcipher mode similar to CTR mode. XCTR uses XORs and little-endian addition rather than big-endian arithmetic which has two advantages: It is slightly faster on little-endian CPUs and it is less likely to be implemented incorrect since integer overflows are not possible on practical input sizes. XCTR is used as a component to implement HCTR2. More information on XCTR mode can be found in the HCTR2 paper: https://eprint.iacr.org/2021/1441.pdf Signed-off-by: Nathan Huckleberry Changes since v1: * Restricted blocksize to 16-bytes * Removed xctr.h and u32_to_le_block * Use single crypto_template instead of array --- crypto/Kconfig | 9 + crypto/Makefile | 1 + crypto/tcrypt.c | 1 + crypto/testmgr.c | 6 + crypto/testmgr.h | 546 +++++++++++++++++++++++++++++++++++++++++++++++ crypto/xctr.c | 193 +++++++++++++++++ 6 files changed, 756 insertions(+) create mode 100644 crypto/xctr.c diff --git a/crypto/Kconfig b/crypto/Kconfig index fa1741bb568f..8543f34fa200 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -452,6 +452,15 @@ config CRYPTO_PCBC PCBC: Propagating Cipher Block Chaining mode This block cipher algorithm is required for RxRPC. +config CRYPTO_XCTR + tristate + select CRYPTO_SKCIPHER + select CRYPTO_MANAGER + help + XCTR: XOR Counter mode. This blockcipher mode is a variant of CTR mode + using XORs and little-endian addition rather than big-endian arithmetic. + XCTR mode is used to implement HCTR2. + config CRYPTO_XTS tristate "XTS support" select CRYPTO_SKCIPHER diff --git a/crypto/Makefile b/crypto/Makefile index d76bff8d0ffd..6b3fe3df1489 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -93,6 +93,7 @@ obj-$(CONFIG_CRYPTO_CTS) += cts.o obj-$(CONFIG_CRYPTO_LRW) += lrw.o obj-$(CONFIG_CRYPTO_XTS) += xts.o obj-$(CONFIG_CRYPTO_CTR) += ctr.o +obj-$(CONFIG_CRYPTO_XCTR) += xctr.o obj-$(CONFIG_CRYPTO_KEYWRAP) += keywrap.o obj-$(CONFIG_CRYPTO_ADIANTUM) += adiantum.o obj-$(CONFIG_CRYPTO_NHPOLY1305) += nhpoly1305.o diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index 2a808e843de5..b3a23dbf5b14 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -1556,6 +1556,7 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb) ret += tcrypt_test("rfc3686(ctr(aes))"); ret += tcrypt_test("ofb(aes)"); ret += tcrypt_test("cfb(aes)"); + ret += tcrypt_test("xctr(aes)"); break; case 11: diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 3a5a3e5cb77b..d2b42ff0b04a 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -5451,6 +5451,12 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .cipher = __VECS(xchacha20_tv_template) }, + }, { + .alg = "xctr(aes)", + .test = alg_test_skcipher, + .suite = { + .cipher = __VECS(aes_xctr_tv_template) + } }, { .alg = "xts(aes)", .generic_driver = "xts(ecb(aes-generic))", diff --git a/crypto/testmgr.h b/crypto/testmgr.h index a253d66ba1c1..e1ebbb3c4d4c 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -32800,4 +32800,550 @@ static const struct hash_testvec blakes2s_256_tv_template[] = {{ 0xd5, 0x06, 0xb5, 0x3a, 0x7c, 0x7a, 0x65, 0x1d, }, }}; +/* + * Test vectors generated using https://github.com/google/hctr2 + */ +static const struct cipher_testvec aes_xctr_tv_template[] = { + { + .key = "\x06\x20\x5d\xba\x50\xb5\x12\x8e" + "\xee\x65\x3c\x59\x80\xa1\xfe\xb1", 
+ .iv = "\x16\x52\x22\x0d\x1c\x76\x94\x9f" + "\x74\xba\x41\x0c\xc4\xc4\xaf\xb9", + .ptext = "\x02\x62\x54\x87\x28\x8f\xa1\xd3" + "\x8f\xd8\xc6\xab\x08\xef\xea\x83" + "\xa3\xbd\xf4\x85\x47\x66\x74\x11" + "\xf1\x58\x9f\x9f\xe8\xb9\x95\xc9", + .ctext = "\x11\xfe\xef\xb4\x9e\xed\x5b\xe5" + "\x92\x9b\x03\xa7\x6d\x8e\xf9\x7a" + "\xaa\xfa\x33\x4a\xf7\xd9\xb2\xeb" + "\x73\xa1\x85\xbc\x45\xbc\x42\x70", + .klen = 16, + .len = 32, + }, + { + .key = "\x19\x0e\xea\x30\x59\x8e\x39\x35" + "\x93\x63\xcc\x8b\x5f\x98\x4f\x43", + .iv = "\x4b\x9f\xf4\xd8\xaa\xcf\x99\xdc" + "\xc5\x07\xe0\xde\xb2\x6d\x85\x12", + .ptext = "\x23\x2d\x48\x15\x89\x34\x54\xf9" + "\x2b\x38\xd1\x62\x06\x98\x21\x59" + "\xd4\x3a\x45\x6f\x12\x27\x08\xa9" + "\x3e\x0f\x21\x3d\xda\x80\x92\x3f", + .ctext = "\x01\xa7\xe5\x9e\xf8\x49\xbb\x36" + "\x49\xb8\x59\x7a\x77\x3f\x5a\x10" + "\x2e\x8f\xe7\xc9\xc4\xb8\xdb\x86" + "\xe4\xc0\x6b\x60\x2f\x79\xa0\x91", + .klen = 16, + .len = 32, + }, + { + .key = "\x17\xa6\x01\x3d\x5d\xd6\xef\x2d" + "\x69\x8f\x4c\x54\x5b\xae\x43\xf0", + .iv = "\xa9\x1b\x47\x60\x26\x82\xf7\x1c" + "\x80\xf8\x88\xdd\xfb\x44\xd9\xda", + .ptext = "\xf7\x67\xcd\xa6\x04\x65\x53\x99" + "\x90\x5c\xa2\x56\x74\xd7\x9d\xf2" + "\x0b\x03\x7f\x4e\xa7\x84\x72\x2b" + "\xf0\xa5\xbf\xe6\x9a\x62\x3a\xfe" + "\x69\x5c\x93\x79\x23\x86\x64\x85" + "\xeb\x13\xb1\x5a\xd5\x48\x39\xa0" + "\x70\xfb\x06\x9a\xd7\x12\x5a\xb9" + "\xbe\xed\x2c\x81\x64\xf7\xcf\x80" + "\xee\xe6\x28\x32\x2d\x37\x4c\x32" + "\xf4\x1f\x23\x21\xe9\xc8\xc9\xbf" + "\x54\xbc\xcf\xb4\xc2\x65\x39\xdf" + "\xa5\xfb\x14\x11\xed\x62\x38\xcf" + "\x9b\x58\x11\xdd\xe9\xbd\x37\x57" + "\x75\x4c\x9e\xd5\x67\x0a\x48\xc6" + "\x0d\x05\x4e\xb1\x06\xd7\xec\x2e" + "\x9e\x59\xde\x4f\xab\x38\xbb\xe5" + "\x87\x04\x5a\x2c\x2a\xa2\x8f\x3c" + "\xe7\xe1\x46\xa9\x49\x9f\x24\xad" + "\x2d\xb0\x55\x40\x64\xd5\xda\x7e" + "\x1e\x77\xb8\x29\x72\x73\xc3\x84" + "\xcd\xf3\x94\x90\x58\x76\xc9\x2c" + "\x2a\xad\x56\xde\x33\x18\xb6\x3b" + "\x10\xe9\xe9\x8d\xf0\xa9\x7f\x05" + "\xf7\xb5\x8c\x13\x7e\x11\x3d\x1e" + "\x02\xbb\x5b\xea\x69\xff\x85\xcf" + "\x6a\x18\x97\x45\xe3\x96\xba\x4d" + "\x2d\x7a\x70\x78\x15\x2c\xe9\xdc" + "\x4e\x09\x92\x57\x04\xd8\x0b\xa6" + "\x20\x71\x76\x47\x76\x96\x89\xa0" + "\xd9\x29\xa2\x5a\x06\xdb\x56\x39" + "\x60\x33\x59\x04\x95\x89\xf6\x18" + "\x1d\x70\x75\x85\x3a\xb7\x6e", + .ctext = "\xe1\xe7\x3f\xd3\x6a\xb9\x2f\x64" + "\x37\xc5\xa4\xe9\xca\x0a\xa1\xd6" + "\xea\x7d\x39\xe5\xe6\xcc\x80\x54" + "\x74\x31\x2a\x04\x33\x79\x8c\x8e" + "\x4d\x47\x84\x28\x27\x9b\x3c\x58" + "\x54\x58\x20\x4f\x70\x01\x52\x5b" + "\xac\x95\x61\x49\x5f\xef\xba\xce" + "\xd7\x74\x56\xe7\xbb\xe0\x3c\xd0" + "\x7f\xa9\x23\x57\x33\x2a\xf6\xcb" + "\xbe\x42\x14\x95\xa8\xf9\x7a\x7e" + "\x12\x53\x3a\xe2\x13\xfe\x2d\x89" + "\xeb\xac\xd7\xa8\xa5\xf8\x27\xf3" + "\x74\x9a\x65\x63\xd1\x98\x3a\x7e" + "\x27\x7b\xc0\x20\x00\x4d\xf4\xe5" + "\x7b\x69\xa6\xa8\x06\x50\x85\xb6" + "\x7f\xac\x7f\xda\x1f\xf5\x37\x56" + "\x9b\x2f\xd3\x86\x6b\x70\xbd\x0e" + "\x55\x9a\x9d\x4b\x08\xb5\x5b\x7b" + "\xd4\x7c\xb4\x71\x49\x92\x4a\x1e" + "\xed\x6d\x11\x09\x47\x72\x32\x6a" + "\x97\x53\x36\xaf\xf3\x06\x06\x2c" + "\x69\xf1\x59\x00\x36\x95\x28\x2a" + "\xb6\xcd\x10\x21\x84\x73\x5c\x96" + "\x86\x14\x2c\x3d\x02\xdb\x53\x9a" + "\x61\xde\xea\x99\x84\x7a\x27\xf6" + "\xf7\xc8\x49\x73\x4b\xb8\xeb\xd3" + "\x41\x33\xdd\x09\x68\xe2\x64\xb8" + "\x5f\x75\x74\x97\x91\x54\xda\xc2" + "\x73\x2c\x1e\x5a\x84\x48\x01\x1a" + "\x0d\x8b\x0a\xdf\x07\x2e\xee\x77" + "\x1d\x17\x41\x7a\xc9\x33\x63\xfa" + "\x9f\xc3\x74\x57\x5f\x03\x4c", + .klen = 16, + .len = 255, + }, + { + .key = 
"\xd1\x87\xd3\xa1\x97\x6a\x4b\xf9" + "\x5d\xcb\x6c\x07\x6e\x2d\x48\xad", + .iv = "\xe9\x8c\x88\x40\xa9\x52\xe0\xbc" + "\x8a\x47\x3a\x09\x5d\x60\xdd\xb2", + .ptext = "\x67\x80\x86\x46\x18\xc6\xed\xd2" + "\x99\x0f\x7a\xc3\xa5\x0b\x80\xcb" + "\x8d\xe4\x0b\x4c\x1e\x4c\x98\x46" + "\x87\x8a\x8c\x76\x75\xce\x2c\x27" + "\x74\x88\xdc\x37\xaa\x77\x53\x14" + "\xd3\x01\xcf\xb5\xcb\xdd\xb4\x8e" + "\x6b\x54\x68\x01\xc3\xdf\xbc\xdd" + "\x1a\x08\x4c\x11\xab\x25\x4b\x69" + "\x25\x21\x78\xb1\x91\x1b\x75\xfa" + "\xd0\x10\xf3\x8a\x65\xd3\x8d\x2e" + "\xf8\xb6\xce\x29\xf9\x1e\x45\x5f" + "\x4e\x41\x63\x6f\xf9\xca\x59\xd7" + "\xc8\x9c\x97\xda\xff\xab\x42\x47" + "\xfb\x2b\xca\xed\xda\x6c\x96\xe4" + "\x59\x0d\xc6\x4a\x26\xde\xa8\x50" + "\xc5\xbb\x13\xf8\xd1\xb9\x6b\xf4" + "\x19\x30\xfb\xc0\x4f\x6b\x96\xc4" + "\x88\x0b\x57\xb3\x43\xbd\xdd\xe2" + "\x06\xae\x88\x44\x41\xdf\xa4\x29" + "\x31\xd3\x38\xeb\xe9\xf8\xa2\xe4" + "\x6a\x55\x2f\x56\x58\x19\xeb\xf7" + "\x5f\x4b\x15\x52\xe4\xaa\xdc\x31" + "\x4a\x32\xc9\x31\x96\x68\x3b\x80" + "\x20\x4f\xe5\x8f\x87\xc9\x37\x58" + "\x79\xfd\xc9\xc1\x9a\x83\xe3\x8b" + "\x6b\x57\x07\xef\x28\x8d\x55\xcb" + "\x4e\xb6\xa2\xb6\xd3\x4f\x8b\x10" + "\x70\x10\x02\xf6\x74\x71\x20\x5a" + "\xe2\x2f\xb6\x46\xc5\x22\xa3\x29" + "\xf5\xc1\x25\xb0\x4d\xda\xaf\x04" + "\xca\x83\xe6\x3f\x66\x6e\x3b\xa4" + "\x09\x40\x22\xd7\x97\x12\x1e", + .ctext = "\xd4\x6d\xfa\xc8\x6e\x54\x31\x69" + "\x47\x51\x0f\xb8\xfa\x03\xa2\xe1" + "\x57\xa8\x4f\x2d\xc5\x4e\x8d\xcd" + "\x92\x0f\x71\x08\xdd\xa4\x5b\xc7" + "\x69\x3a\x3d\x93\x29\x1d\x87\x2c" + "\xfa\x96\xd2\x4d\x72\x61\xb0\x9e" + "\xa7\xf5\xd5\x09\x3d\x43\x32\x82" + "\xd2\x9a\x58\xe3\x4c\x84\xc2\xad" + "\x33\x77\x9c\x5d\x37\xc1\x4f\x95" + "\x56\x55\xc6\x76\x62\x27\x6a\xc7" + "\x45\x80\x9e\x7c\x48\xc8\x14\xbb" + "\x32\xbf\x4a\xbb\x8d\xb4\x2c\x7c" + "\x01\xfa\xc8\xde\x10\x55\xa0\xae" + "\x29\xed\xe2\x3d\xd6\x26\xfa\x3c" + "\x7a\x81\xae\xfd\xc3\x2f\xe5\x3a" + "\x00\xa3\xf0\x66\x0f\x3a\xd2\xa3" + "\xaf\x0e\x75\xbb\x79\xad\xcc\xe0" + "\x98\x10\xfb\xf1\xc0\x0c\xb9\x03" + "\x07\xee\x46\x6a\xc0\xf6\x17\x8f" + "\x7f\xc9\xad\x16\x58\x54\xb0\xd5" + "\x67\x73\x9f\xce\xea\x4b\x60\x57" + "\x1d\x62\x72\xec\xab\xe3\xd8\x32" + "\x29\x48\x37\x1b\x5c\xd6\xd0\xb7" + "\xc3\x39\xef\xf6\x1b\x18\xf6\xd1" + "\x2d\x76\x7c\x68\x50\x37\xfa\x8f" + "\x16\x87\x5e\xf8\xb1\x79\x82\x52" + "\xc7\x3e\x0e\xa3\x61\xb9\x00\xe0" + "\x2e\x03\x80\x6e\xc0\xbf\x63\x78" + "\xdf\xab\xc2\x3b\xf0\x4c\xb0\xcb" + "\x91\x6a\x26\xe6\x3a\x86\xef\x1a" + "\x4e\x4d\x23\x2d\x59\x3a\x02\x3a" + "\xf3\xda\xd1\x9d\x68\xf6\xef", + .klen = 16, + .len = 255, + }, + { + .key = "\x17\xe6\xb1\x85\x40\x24\xbe\x80" + "\x99\xc7\xa1\x0c\x0f\x72\x31\xb8" + "\x10\xb5\x11\x21\x3a\x99\x9e\xc8", + .iv = "\x6b\x5f\xe1\x6a\xe1\x21\xfc\x62" + "\xd9\x85\x2e\x0b\xbd\x58\x79\xd1", + .ptext = "\xea\x3c\xad\x9d\x92\x05\x50\xa4" + "\x68\x56\x6b\x33\x95\xa8\x24\x6c" + "\xa0\x9d\x91\x15\x3a\x26\xb7\xeb" + "\xb4\x5d\xf7\x0c\xec\x91\xbe\x11", + .ctext = "\x6a\xac\xfc\x24\x64\x98\x28\x33" + "\xa4\x39\xfd\x72\x46\x56\x7e\xf7" + "\xd0\x7f\xee\x95\xd8\x68\x44\x67" + "\x70\x80\xd4\x69\x7a\xf5\x8d\xad", + .klen = 24, + .len = 32, + }, + { + .key = "\x02\x81\x0e\xb1\x97\xe0\x20\x0c" + "\x46\x8c\x7b\xde\xac\xe6\xe0\xb5" + "\x2e\xb3\xc0\x40\x0e\xb7\x3d\xd3", + .iv = "\x37\x15\x1c\x61\xab\x95\x8f\xf3" + "\x11\x3a\x79\xe2\xf7\x33\x96\xb3", + .ptext = "\x05\xd9\x7a\xc7\x08\x79\xba\xd8" + "\x4a\x63\x54\xf7\x4e\x0c\x98\x8a" + "\x5d\x40\x05\xe4\x7a\x7a\x14\x0c" + "\xa8\xa7\x53\xf4\x3e\x66\x81\x38", + .ctext = "\x43\x66\x70\x51\xd9\x7c\x6f\x80" + 
"\x82\x8e\x34\xda\x5d\x3c\x47\xd1" + "\xe0\x67\x76\xb5\x78\x98\x47\x26" + "\x41\x31\xfa\x97\xc9\x79\xeb\x15", + .klen = 24, + .len = 32, + }, + { + .key = "\x9a\xef\x58\x01\x4c\x1e\xa2\x33" + "\xce\x1f\x32\xae\xc8\x69\x1f\xf5" + "\x82\x1b\x74\xf4\x8b\x1b\xce\x30", + .iv = "\xb1\x72\x52\xa8\xc4\x8f\xb5\xec" + "\x95\x12\x14\x5f\xd2\x29\x14\x0f", + .ptext = "\x8a\xbc\x20\xbd\x67\x76\x8d\xd8" + "\xa6\x70\xf0\x74\x8c\x8d\x9c\x00" + "\xdd\xaf\xef\x28\x5d\x8d\xfa\x87" + "\x81\x39\x8c\xb1\x6e\x0a\xcf\x3c" + "\xe8\x3b\xc0\xff\x6e\xe7\xd1\xc6" + "\x70\xb8\xdf\x27\x62\x72\x8e\xb7" + "\x6b\xa7\xb2\x74\xdd\xc6\xb4\xc9" + "\x4c\xd8\x4f\x2c\x09\x75\x6e\xb7" + "\x41\xb3\x8f\x96\x09\x0d\x40\x8e" + "\x0f\x49\xc2\xad\xc4\xf7\x71\x0a" + "\x76\xfb\x45\x97\x29\x7a\xaa\x98" + "\x22\x55\x4f\x9c\x26\x01\xc8\xb9" + "\x41\x42\x51\x9d\x00\x5c\x7f\x02" + "\x9b\x00\xaa\xbd\x69\x47\x9c\x26" + "\x5b\xcb\x08\xf3\x46\x33\xf9\xeb" + "\x79\xdd\xfe\x38\x08\x84\x8c\x81" + "\xb8\x51\xbd\xcd\x72\x00\xdb\xbd" + "\xf5\xd6\xb4\x80\xf7\xd3\x49\xac" + "\x9e\xf9\xea\xd5\xad\xd4\xaa\x8f" + "\x97\x60\xce\x60\xa7\xdd\xc0\xb2" + "\x51\x80\x9b\xae\xab\x0d\x62\xab" + "\x78\x1a\xeb\x8c\x03\x6f\x30\xbf" + "\xe0\xe1\x20\x65\x74\x65\x54\x43" + "\x92\x57\xd2\x73\x8a\xeb\x99\x38" + "\xca\x78\xc8\x11\xd7\x92\x1a\x05" + "\x55\xb8\xfa\xa0\x82\xb7\xd6\x16" + "\x84\x4d\x25\xc4\xd5\xe4\x55\xf3" + "\x6c\xb3\xe4\x6e\x66\x31\x5c\x41" + "\x98\x46\x28\xd8\x71\x05\xf2\x3b" + "\xd1\x3e\x0f\x79\x7f\xf3\x30\x3f" + "\xbe\x36\xf4\x50\xbd\x0c\x89\xd5" + "\xcb\x53\x9f\xeb\x56\xf4\x3f", + .ctext = "\xee\x90\xe1\x45\xf5\xab\x04\x23" + "\x70\x0a\x54\x49\xac\x34\xb8\x69" + "\x3f\xa8\xce\xef\x6e\x63\xc1\x20" + "\x7a\x41\x43\x5d\xa2\x29\x71\x1d" + "\xd2\xbb\xb1\xca\xb4\x3a\x5a\xf3" + "\x0a\x68\x0b\x9d\x6f\x68\x60\x9e" + "\x9d\xb9\x23\x68\xbb\xdd\x12\x31" + "\xc6\xd6\xf9\xb3\x80\xe8\xb5\xab" + "\x84\x2a\x8e\x7b\xb2\x4f\xee\x31" + "\x83\xc4\x1c\x80\x89\xe4\xe7\xd2" + "\x00\x65\x98\xd1\x57\xcc\xf6\x87" + "\x14\xf1\x23\x22\x78\x61\xc7\xb6" + "\xf5\x90\x97\xdd\xcd\x90\x98\xd8" + "\xbb\x02\xfa\x2c\xf0\x89\xfc\x7e" + "\xe7\xcd\xee\x41\x3f\x73\x4a\x08" + "\xf8\x8f\xf3\xbf\x3a\xd5\xce\xb7" + "\x7a\xf4\x49\xcd\x3f\xc7\x1f\x77" + "\x98\xd0\x9d\x82\x20\x8a\x04\x5d" + "\x9f\x77\xcb\xf4\x38\x92\x47\xce" + "\x6d\xc3\x51\xc1\xd9\xf4\x2f\x65" + "\x67\x01\xf4\x46\x3b\xd2\x90\x5d" + "\x2a\xcb\xc5\x39\x1c\x72\xa5\xba" + "\xaf\x80\x9b\x87\x01\x85\xa1\x02" + "\xdf\x79\x4c\x27\x77\x3e\xfc\xb3" + "\x96\xbc\x42\xad\xdf\xa4\x16\x1e" + "\x77\xe7\x39\xcc\x78\x2c\xc1\x00" + "\xe5\xa6\xb5\x9b\x0c\x12\x19\xc5" + "\x8b\xbe\xae\x4b\xc3\xa3\x91\x8f" + "\x5b\x82\x0f\x20\x30\x35\x45\x26" + "\x29\x84\x2e\xc8\x2d\xce\xae\xac" + "\xbe\x93\x50\x7a\x6a\x01\x08\x38" + "\xf5\x49\x4d\x8b\x7e\x96\x70", + .klen = 24, + .len = 255, + }, + { + .key = "\x2c\x3c\x6c\x78\xaa\x83\xed\x14" + "\x4e\xe5\xe2\x3e\x1e\x89\xcb\x2f" + "\x19\x5a\x70\x50\x09\x81\x43\x75", + .iv = "\xa5\x57\x8e\x3c\xba\x52\x87\x4f" + "\xb7\x45\x26\xab\x31\xb9\x58\xfa", + .ptext = "\x43\x29\x69\x02\xf0\xc0\x64\xf3" + "\xe1\x85\x75\x25\x11\x5d\x18\xf8" + "\xdc\x96\x82\x1b\xee\x4d\x01\xd2" + "\x28\x83\xbb\xfe\xe1\x72\x14\x3c" + "\xe9\xe5\x9f\x8c\x40\xb5\x0a\xaa" + "\x9f\xb8\xc5\xf1\x01\x05\x65\x79" + "\x90\x05\xeb\xac\xa8\x52\x35\xc4" + "\x2d\x56\x0d\xe1\x37\x09\xb8\xec" + "\x51\xd8\x79\x13\x5b\x85\x8c\x14" + "\x77\xe3\x64\xea\x89\xb1\x04\x9d" + "\x6c\x58\x1b\x51\x54\x1f\xc7\x2f" + "\xc8\x3d\xa6\x93\x39\xce\x77\x3a" + "\x93\xc2\xaa\x88\xcc\x09\xfa\xc4" + "\x5e\x92\x3b\x46\xd2\xd6\xd4\x5d" + "\x31\x58\xc5\xc6\x30\xb8\x7f\x77" + 
"\x0f\x1b\xf8\x9a\x7d\x3f\x56\x90" + "\x61\x8f\x08\x8f\x61\x64\x8e\xf4" + "\xaa\x7c\xf8\x4c\x0b\xab\x47\x2a" + "\x0d\xa7\x24\x36\x59\xfe\x94\xfc" + "\x38\x38\x32\xdf\x73\x1b\x75\xb1" + "\x6f\xa2\xd8\x0b\xa1\xd4\x31\x58" + "\xaa\x24\x11\x22\xc9\xf7\x83\x3c" + "\x6e\xee\x75\xc0\xdd\x3b\x21\x99" + "\x9f\xde\x81\x9c\x2a\x70\xc4\xb8" + "\xc6\x27\x4e\x5d\x9a\x4a\xe1\x75" + "\x01\x95\x47\x87\x3f\x9a\x69\x20" + "\xb4\x66\x70\x1a\xe2\xb3\x6c\xfa" + "\x1f\x6e\xf9\xc3\x8a\x1f\x0b\x0b" + "\xc5\x92\xba\xd9\xf8\x27\x6b\x97" + "\x01\xe2\x38\x01\x7f\x06\xde\x54" + "\xb7\x78\xbc\x7d\x6a\xa1\xf2\x6f" + "\x62\x42\x30\xbf\xb1\x6d\xc7", + .ctext = "\x53\xc0\xb3\x13\x8f\xbf\x88\x1a" + "\x6f\xda\xad\x0b\x33\x8b\x82\x9d" + "\xca\x17\x32\x65\xaa\x72\x24\x1b" + "\x95\x33\xcc\x5b\x58\x5d\x08\x58" + "\xe5\x52\xc0\xb7\xc6\x97\x77\x66" + "\xbd\xf4\x50\xde\xe1\xf0\x70\x61" + "\xc2\x05\xce\xe0\x90\x2f\x7f\xb3" + "\x04\x7a\xee\xbe\xb3\xb7\xaf\xda" + "\x3c\xb8\x95\xb4\x20\xba\x66\x0b" + "\x97\xcc\x07\x3f\x22\x07\x0e\xea" + "\x76\xd8\x32\xf9\x34\x47\xcb\xaa" + "\xb3\x5a\x06\x68\xac\x94\x10\x39" + "\xf2\x70\xe1\x7b\x98\x5c\x0c\xcb" + "\x8f\xd8\x48\xfa\x2e\x15\xa1\xf1" + "\x2f\x85\x55\x39\xd8\x24\xe6\xc1" + "\x6f\xd7\x52\x97\x42\x7a\x2e\x14" + "\x39\x74\x16\xf3\x8b\xbd\x38\xb9" + "\x54\x20\xc6\x31\x1b\x4c\xb7\x26" + "\xd4\x71\x63\x97\xaa\xbf\xf5\xb7" + "\x17\x5e\xee\x14\x67\x38\x14\x11" + "\xf6\x98\x3c\x70\x4a\x89\xf4\x27" + "\xb4\x72\x7a\xc0\x5d\x58\x3d\x8b" + "\xf6\xf7\x80\x7b\xa9\xa7\x4d\xf8" + "\x1a\xbe\x07\x0c\x06\x97\x25\xc8" + "\x5a\x18\xae\x21\xa6\xe4\x77\x13" + "\x5a\xe5\xf5\xe0\xd5\x48\x73\x22" + "\x68\xde\x70\x05\xc4\xdf\xd5\x7c" + "\xa0\x2b\x99\x9c\xa8\x21\xd7\x6c" + "\x55\x97\x09\xd6\xb0\x62\x93\x90" + "\x14\xb1\xd1\x83\x5a\xb3\x17\xb9" + "\xc7\xcc\x6b\x51\x23\x44\x4b\xef" + "\x48\x0f\x0f\xf0\x0e\xa1\x8f", + .klen = 24, + .len = 255, + }, + { + .key = "\xed\xd1\xcf\x81\x1c\xf8\x9d\x56" + "\xd4\x3b\x86\x4b\x65\x96\xfe\xe8" + "\x8a\xd4\x3b\xd7\x76\x07\xab\xf4" + "\xe9\xae\xd1\x4d\x50\x9b\x94\x1c", + .iv = "\x09\x90\xf3\x7c\x15\x99\x7d\x94" + "\x88\xf4\x99\x19\xd1\x62\xc4\x65", + .ptext = "\xa2\x06\x41\x55\x60\x2c\xe3\x76" + "\xa9\xaf\xf9\xe1\xd7\x0d\x65\x49" + "\xda\x27\x0d\xf8\xec\xdc\x09\x2b" + "\x06\x24\xe4\xd5\x15\x29\x6b\x5f", + .ctext = "\xad\x5c\xd0\xc1\x03\x45\xba\x9d" + "\xab\x6d\x82\xae\xf7\x8e\x2b\x8b" + "\xd8\x61\xe6\x96\x5c\x5c\xe2\x70" + "\xe5\x19\x0a\x04\x60\xca\x45\xfc", + .klen = 32, + .len = 32, + }, + { + .key = "\xf8\x75\xa6\xba\x7b\x00\xf0\x71" + "\x24\x5d\xdf\x93\x8b\xa3\x7d\x6d" + "\x8e\x0f\x65\xf4\xe2\xbe\x2b\xaa" + "\x2a\x0d\x9e\x00\x6a\x94\x80\xa1", + .iv = "\xb9\xb7\x55\x26\x5f\x96\x16\x68" + "\x5c\x5f\x58\xbb\x4e\x5a\xe1\x3b", + .ptext = "\x2f\xd9\x2c\xc2\x98\x1e\x81\x5e" + "\x89\xc8\xec\x1f\x56\x3e\xd9\xa4" + "\x92\x48\xec\xfc\x5d\xeb\x7f\xad" + "\x7a\x47\xe6\xda\x71\x1b\x2e\xfa", + .ctext = "\x25\x5e\x38\x20\xcf\xbe\x4c\x6c" + "\xe6\xce\xfc\xe2\xca\x6a\xa1\x62" + "\x3a\xb7\xdf\x21\x3e\x49\xa6\xb8" + "\x22\xd2\xc8\x37\xa4\x55\x09\xe6", + .klen = 32, + .len = 32, + }, + { + .key = "\x32\x37\x2b\x8f\x7b\xb1\x23\x79" + "\x05\x52\xde\x05\xf1\x68\x3f\x6c" + "\xa4\xae\xbc\x21\xc2\xc6\xf0\xbd" + "\x0f\x20\xb7\xa4\xc5\x05\x7b\x64", + .iv = "\xff\x26\x4e\x67\x48\xdd\xcf\xfe" + "\x42\x09\x04\x98\x5f\x1e\xfa\x80", + .ptext = "\x99\xdc\x3b\x19\x41\xf9\xff\x6e" + "\x76\xb5\x03\xfa\x61\xed\xf8\x44" + "\x70\xb9\xf0\x83\x80\x6e\x31\x77" + "\x77\xe4\xc7\xb4\x77\x02\xab\x91" + "\x82\xc6\xf8\x7c\x46\x61\x03\x69" + "\x09\xa0\xf7\x12\xb7\x81\x6c\xa9" + "\x10\x5c\xbb\x55\xb3\x44\xed\xb5" + 
"\xa2\x52\x48\x71\x90\x5d\xda\x40" + "\x0b\x7f\x4a\x11\x6d\xa7\x3d\x8e" + "\x1b\xcd\x9d\x4e\x75\x8b\x7d\x87" + "\xe5\x39\x34\x32\x1e\xe6\x8d\x51" + "\xd4\x1f\xe3\x1d\x50\xa0\x22\x37" + "\x7c\xb0\xd9\xfb\xb6\xb2\x16\xf6" + "\x6d\x26\xa0\x4e\x8c\x6a\xe6\xb6" + "\xbe\x4c\x7c\xe3\x88\x10\x18\x90" + "\x11\x50\x19\x90\xe7\x19\x3f\xd0" + "\x31\x15\x0f\x06\x96\xfe\xa7\x7b" + "\xc3\x32\x88\x69\xa4\x12\xe3\x64" + "\x02\x30\x17\x74\x6c\x88\x7c\x9b" + "\xd6\x6d\x75\xdf\x11\x86\x70\x79" + "\x48\x7d\x34\x3e\x33\x58\x07\x8b" + "\xd2\x50\xac\x35\x15\x45\x05\xb4" + "\x4d\x31\x97\x19\x87\x23\x4b\x87" + "\x53\xdc\xa9\x19\x78\xf1\xbf\x35" + "\x30\x04\x14\xd4\xcf\xb2\x8c\x87" + "\x7d\xdb\x69\xc9\xcd\xfe\x40\x3e" + "\x8d\x66\x5b\x61\xe5\xf0\x2d\x87" + "\x93\x3a\x0c\x2b\x04\x98\x05\xc2" + "\x56\x4d\xc4\x6c\xcd\x7a\x98\x7e" + "\xe2\x2d\x79\x07\x91\x9f\xdf\x2f" + "\x72\xc9\x8f\xcb\x0b\x87\x1b\xb7" + "\x04\x86\xcb\x47\xfa\x5d\x03", + .ctext = "\x0b\x00\xf7\xf2\xc8\x6a\xba\x9a" + "\x0a\x97\x18\x7a\x00\xa0\xdb\xf4" + "\x5e\x8e\x4a\xb7\xe0\x51\xf1\x75" + "\x17\x8b\xb4\xf1\x56\x11\x05\x9f" + "\x2f\x2e\xba\x67\x04\xe1\xb4\xa5" + "\xfc\x7c\x8c\xad\xc6\xb9\xd1\x64" + "\xca\xbd\x5d\xaf\xdb\x65\x48\x4f" + "\x1b\xb3\x94\x5c\x0b\xd0\xee\xcd" + "\xb5\x7f\x43\x8a\xd8\x8b\x66\xde" + "\xd2\x9c\x13\x65\xa4\x47\xa7\x03" + "\xc5\xa1\x46\x8f\x2f\x84\xbc\xef" + "\x48\x9d\x9d\xb5\xbd\x43\xff\xd2" + "\xd2\x7a\x5a\x13\xbf\xb4\xf6\x05" + "\x17\xcd\x01\x12\xf0\x35\x27\x96" + "\xf4\xc1\x65\xf7\x69\xef\x64\x1b" + "\x6e\x4a\xe8\x77\xce\x83\x01\xb7" + "\x60\xe6\x45\x2a\xcd\x41\x4a\xb5" + "\x8e\xcc\x45\x93\xf1\xd6\x64\x5f" + "\x32\x60\xe4\x29\x4a\x82\x6c\x86" + "\x16\xe4\xcc\xdb\x5f\xc8\x11\xa6" + "\xfe\x88\xd6\xc3\xe5\x5c\xbb\x67" + "\xec\xa5\x7b\xf5\xa8\x4f\x77\x25" + "\x5d\x0c\x2a\x99\xf9\xb9\xd1\xae" + "\x3c\x83\x2a\x93\x9b\x66\xec\x68" + "\x2c\x93\x02\x8a\x8a\x1e\x2f\x50" + "\x09\x37\x19\x5c\x2a\x3a\xc2\xcb" + "\xcb\x89\x82\x81\xb7\xbb\xef\x73" + "\x8b\xc9\xae\x42\x96\xef\x70\xc0" + "\x89\xc7\x3e\x6a\x26\xc3\xe4\x39" + "\x53\xa9\xcf\x63\x7d\x05\xf3\xff" + "\x52\x04\xf6\x7f\x23\x96\xe9\xf7" + "\xff\xd6\x50\xa3\x0e\x20\x71", + .klen = 32, + .len = 255, + }, + { + .key = "\x49\x85\x84\x69\xd4\x5f\xf9\xdb" + "\xf2\xc4\x1c\x62\x20\x88\xea\x8a" + "\x5b\x69\xe6\x3b\xe2\x5c\xfe\xce" + "\xe1\x7a\x27\x7b\x1c\xc9\xb4\x43", + .iv = "\xae\x98\xdb\xef\x5c\x6b\xe9\x27" + "\x1a\x2f\x51\x17\x97\x7d\x4f\x10", + .ptext = "\xbe\xf2\x8f\x8a\x51\x9e\x3d\xff" + "\xd7\x68\x0f\xd2\xf2\x5b\xe3\xa5" + "\x59\x3e\xcd\xab\x46\xc6\xe9\x24" + "\x43\xbc\xb8\x37\x1f\x55\x7f\xb5" + "\xc0\xa6\x68\xdf\xbf\x21\x1e\xed" + "\x67\x73\xb7\x06\x47\xff\x67\x07" + "\x5b\x94\xab\xef\x43\x95\x52\xce" + "\xe7\x71\xbd\x72\x5b\x3a\x25\x01" + "\xed\x7d\x02\x2d\x72\xd6\xc4\x3d" + "\xd2\xf5\xe5\xb3\xf2\xd7\xa1\x8d" + "\x12\x0d\x3b\x4a\x58\xf4\x1b\xfd" + "\xcd\x2c\x13\x05\x07\x3d\x30\x8a" + "\x1f\xc6\xed\xfc\x7c\x3c\xa6\x1c" + "\x64\x2c\x36\xa8\x5d\xe2\xfa\x12" + "\xd7\x17\xa9\x39\x43\x63\xbf\x44" + "\xd0\xcb\x4c\xf0\xab\xe6\x75\xd6" + "\x60\xd1\x64\x9e\x01\x2b\x97\x52" + "\x97\x24\x32\xb0\xfa\x22\xf4\x04" + "\xe6\x98\x6a\xbc\xba\xe8\x65\xad" + "\x60\x08\xfc\xd7\x40\xf8\x2a\xf2" + "\x5e\x32\x32\x82\x24\x12\xda\xbc" + "\x8f\x1c\xd4\x06\x81\x08\x80\x35" + "\x20\xa5\xa8\x3a\x6e\x3e\x2f\x78" + "\xe4\x7d\x9e\x81\x43\xb8\xfe\xa7" + "\x3b\xa9\x9b\x1a\xe7\xce\xd2\x3d" + "\xc1\x27\x26\x22\x35\x12\xa2\xc6" + "\x59\x51\x22\x31\x7b\xc8\xca\xa6" + "\xa9\xf3\x16\x57\x72\x3d\xfa\x24" + "\x66\x56\x5d\x21\x29\x9e\xf2\xff" + "\xae\x0c\x71\xcf\xc5\xf0\x98\xe5" + "\xa1\x05\x96\x94\x3e\x36\xed\x97" + 
"\xc7\xee\xcd\xc2\x54\x35\x5c", + .ctext = "\xde\x7f\x5e\xac\x6f\xec\xed\x2a" + "\x3a\x3b\xb3\x36\x19\x46\x26\x27" + "\x09\x7b\x49\x47\x1b\x88\x43\xb7" + "\x65\x67\xef\x0b\xe4\xde\x0a\x97" + "\x7f\xab\x32\x7c\xa2\xde\x4e\xba" + "\x11\x9b\x19\x12\x7d\x03\x01\x15" + "\xa3\x90\x9f\x52\x9d\x29\x3d\x5c" + "\xc6\x71\x59\x2c\x44\x8f\xb7\x8c" + "\x0d\x75\x81\x76\xe2\x11\x96\x41" + "\xae\x48\x27\x0e\xbc\xaf\x1d\xf5" + "\x51\x68\x5a\x34\xe5\x6d\xdf\x60" + "\xc7\x9d\x4e\x1a\xaa\xb5\x1a\x57" + "\x58\x6a\xa4\x79\x0a\xa9\x50\x8d" + "\x93\x59\xef\x5b\x23\xdb\xc8\xb3" + "\x38\x96\x8c\xdf\x7d\x6a\x3d\x53" + "\x84\x9d\xb0\xf0\x07\x5f\xff\x67" + "\xff\x5b\x3c\x8b\x1f\xa2\x3b\xcf" + "\xf5\x86\x7c\xbc\x98\x38\x7a\xe5" + "\x96\x56\xba\x44\x85\x29\x4f\x3a" + "\x64\xde\xec\xc6\x53\xf0\x30\xca" + "\xa4\x90\x4f\x9c\x2e\x0e\xec\x2d" + "\x8c\x38\x1c\x93\x9a\x5d\x5d\x98" + "\xf9\x2c\xf7\x27\x71\x3c\x69\xa9" + "\x0b\xec\xd9\x9c\x6c\x69\x09\x47" + "\xd9\xc2\x84\x6e\x3e\x2d\x9f\x1f" + "\xb6\x13\x62\x4c\xf3\x33\x44\x13" + "\x6c\x43\x0a\xae\x8e\x89\xd6\x27" + "\xdd\xc3\x5b\x37\x62\x09\x47\x94" + "\xe3\xea\x7d\x08\x14\x70\xb1\x8e" + "\x83\x4a\xcb\xc0\xa9\xf2\xa3\x02" + "\xe9\xa0\x44\xfe\xcf\x5a\x15\x50" + "\xc4\x5a\x6f\xc8\xd6\xf1\x83", + .klen = 32, + .len = 255, + }, +}; + #endif /* _CRYPTO_TESTMGR_H */ diff --git a/crypto/xctr.c b/crypto/xctr.c new file mode 100644 index 000000000000..29e40e01f0b7 --- /dev/null +++ b/crypto/xctr.c @@ -0,0 +1,193 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * XCTR: XOR Counter mode - Adapted from ctr.c + * + * (C) Copyright IBM Corp. 2007 - Joy Latten + * Copyright 2021 Google LLC + */ + +/* + * XCTR mode is a blockcipher mode of operation used to implement HCTR2. XCTR is + * closely related to the CTR mode of operation; the main difference is that CTR + * generates the keystream using E(CTR + IV) whereas XCTR generates the + * keystream using E(CTR ^ IV). 
+ * + * See the HCTR2 paper for more details: + * Length-preserving encryption with HCTR2 + * (https://eprint.iacr.org/2021/1441.pdf) + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +// Limited to 16-byte blocks for simplicity +#define XCTR_BLOCKSIZE 16 + +static void crypto_xctr_crypt_final(struct skcipher_walk *walk, + struct crypto_cipher *tfm, u32 byte_ctr) +{ + unsigned long alignmask = crypto_cipher_alignmask(tfm); + u8 tmp[XCTR_BLOCKSIZE + MAX_CIPHER_ALIGNMASK]; + u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1); + u8 *src = walk->src.virt.addr; + u8 *dst = walk->dst.virt.addr; + unsigned int nbytes = walk->nbytes; + __le32 ctr32 = cpu_to_le32(byte_ctr / XCTR_BLOCKSIZE + 1); + + crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32)); + crypto_cipher_encrypt_one(tfm, keystream, walk->iv); + crypto_xor_cpy(dst, keystream, src, nbytes); + crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32)); +} + +static int crypto_xctr_crypt_segment(struct skcipher_walk *walk, + struct crypto_cipher *tfm, u32 byte_ctr) +{ + void (*fn)(struct crypto_tfm *, u8 *, const u8 *) = + crypto_cipher_alg(tfm)->cia_encrypt; + u8 *src = walk->src.virt.addr; + u8 *dst = walk->dst.virt.addr; + unsigned int nbytes = walk->nbytes; + __le32 ctr32 = cpu_to_le32(byte_ctr / XCTR_BLOCKSIZE + 1); + + do { + /* create keystream */ + crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32)); + fn(crypto_cipher_tfm(tfm), dst, walk->iv); + crypto_xor(dst, src, XCTR_BLOCKSIZE); + crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32)); + + ctr32++; + + src += XCTR_BLOCKSIZE; + dst += XCTR_BLOCKSIZE; + } while ((nbytes -= XCTR_BLOCKSIZE) >= XCTR_BLOCKSIZE); + + return nbytes; +} + +static int crypto_xctr_crypt_inplace(struct skcipher_walk *walk, + struct crypto_cipher *tfm, u32 byte_ctr) +{ + void (*fn)(struct crypto_tfm *, u8 *, const u8 *) = + crypto_cipher_alg(tfm)->cia_encrypt; + unsigned long alignmask = crypto_cipher_alignmask(tfm); + unsigned int nbytes = walk->nbytes; + u8 *src = walk->src.virt.addr; + u8 tmp[XCTR_BLOCKSIZE + MAX_CIPHER_ALIGNMASK]; + u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1); + __le32 ctr32 = cpu_to_le32(byte_ctr / XCTR_BLOCKSIZE + 1); + + do { + /* create keystream */ + crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32)); + fn(crypto_cipher_tfm(tfm), keystream, walk->iv); + crypto_xor(src, keystream, XCTR_BLOCKSIZE); + crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32)); + + ctr32++; + + src += XCTR_BLOCKSIZE; + } while ((nbytes -= XCTR_BLOCKSIZE) >= XCTR_BLOCKSIZE); + + return nbytes; +} + +static int crypto_xctr_crypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct crypto_cipher *cipher = skcipher_cipher_simple(tfm); + struct skcipher_walk walk; + unsigned int nbytes; + int err; + u32 byte_ctr = 0; + + err = skcipher_walk_virt(&walk, req, false); + + while (walk.nbytes >= XCTR_BLOCKSIZE) { + if (walk.src.virt.addr == walk.dst.virt.addr) + nbytes = crypto_xctr_crypt_inplace(&walk, cipher, + byte_ctr); + else + nbytes = crypto_xctr_crypt_segment(&walk, cipher, + byte_ctr); + + byte_ctr += walk.nbytes - nbytes; + err = skcipher_walk_done(&walk, nbytes); + } + + if (walk.nbytes) { + crypto_xctr_crypt_final(&walk, cipher, byte_ctr); + err = skcipher_walk_done(&walk, 0); + } + + return err; +} + +static int crypto_xctr_create(struct crypto_template *tmpl, struct rtattr **tb) +{ + struct skcipher_instance *inst; + struct crypto_alg *alg; + int err; + + inst = skcipher_alloc_instance_simple(tmpl, tb); + if (IS_ERR(inst)) + return 
PTR_ERR(inst); + alg = skcipher_ialg_simple(inst); + /* Block size must be 16 bytes. */ + err = -EINVAL; + if (alg->cra_blocksize != XCTR_BLOCKSIZE) + goto out_free_inst; + /* XCTR mode is a stream cipher. */ + inst->alg.base.cra_blocksize = 1; + /* + * To simplify the implementation, configure the skcipher walk to only + * give a partial block at the very end, never earlier. + */ + inst->alg.chunksize = alg->cra_blocksize; + inst->alg.encrypt = crypto_xctr_crypt; + inst->alg.decrypt = crypto_xctr_crypt; + err = skcipher_register_instance(tmpl, inst); + if (err) { +out_free_inst: + inst->free(inst); + } + return err; +} + static struct crypto_template crypto_xctr_tmpl = { + .name = "xctr", + .create = crypto_xctr_create, + .module = THIS_MODULE, +}; + static int __init crypto_xctr_module_init(void) +{ + return crypto_register_template(&crypto_xctr_tmpl); +} + static void __exit crypto_xctr_module_exit(void) +{ + crypto_unregister_template(&crypto_xctr_tmpl); +} + subsys_initcall(crypto_xctr_module_init); +module_exit(crypto_xctr_module_exit); + MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("XCTR block cipher mode of operation"); +MODULE_ALIAS_CRYPTO("xctr"); +MODULE_IMPORT_NS(CRYPTO_INTERNAL);
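For reference, the mode above is simple enough to cross-check from userspace against the aes_xctr_tv_template vectors. A minimal sketch, assuming the PyCA "cryptography" package for raw AES-ECB; this illustrates the construction and is not part of the patchset:

import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

XCTR_BLOCKSIZE = 16

def xctr(key: bytes, iv: bytes, data: bytes) -> bytes:
    # Keystream block i (1-based) is E_K(IV ^ LE32(i)), with LE32(i)
    # zero-padded to the block size; a trailing partial block uses a
    # truncated keystream block, as in crypto_xctr_crypt_final().
    ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    out = bytearray()
    for off in range(0, len(data), XCTR_BLOCKSIZE):
        ctr = struct.pack("<I", off // XCTR_BLOCKSIZE + 1)
        block = bytes(a ^ b for a, b in zip(iv, ctr.ljust(XCTR_BLOCKSIZE, b"\0")))
        keystream = ecb.update(block)
        out += bytes(p ^ k for p, k in zip(data[off:off + XCTR_BLOCKSIZE], keystream))
    return bytes(out)

Because the data is only ever XORed with the keystream, the same routine both encrypts and decrypts, which is why the template sets .encrypt and .decrypt to the same handler.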
From patchwork Thu Feb 10 23:28:07 2022 Date: Thu, 10 Feb 2022 23:28:07 +0000 In-Reply-To: <20220210232812.798387-1-nhuck@google.com> Message-Id: <20220210232812.798387-3-nhuck@google.com> References: <20220210232812.798387-1-nhuck@google.com> Subject: [RFC PATCH v2 2/7] crypto: polyval - Add POLYVAL support From: Nathan Huckleberry To: linux-crypto@vger.kernel.org Cc: Herbert Xu , "David S. Miller" , linux-arm-kernel@lists.infradead.org, Paul Crowley , Eric Biggers , Sami Tolvanen , Ard Biesheuvel , Nathan Huckleberry Add support for POLYVAL, an ε-∆-universal hash function similar to GHASH. POLYVAL is used as a component to implement HCTR2 mode. POLYVAL is implemented as an shash algorithm. The implementation is modified from ghash-generic.c. More information on POLYVAL can be found in the HCTR2 paper: https://eprint.iacr.org/2021/1441.pdf Signed-off-by: Nathan Huckleberry --- Changes since v1: * Added comments explaining the GHASH/POLYVAL trick * Fixed a bug on non-block-multiple messages crypto/Kconfig | 8 ++ crypto/Makefile | 1 + crypto/polyval-generic.c | 199 +++++++++++++++++++++++++++ crypto/tcrypt.c | 4 + crypto/testmgr.c | 6 + crypto/testmgr.h | 284 +++++++++++++++++++++++++++++++++++++++ include/crypto/polyval.h | 22 +++ 7 files changed, 524 insertions(+) create mode 100644 crypto/polyval-generic.c create mode 100644 include/crypto/polyval.h diff --git a/crypto/Kconfig b/crypto/Kconfig index 8543f34fa200..0c61d03530a6 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -760,6 +760,14 @@ config CRYPTO_GHASH GHASH is the hash function used in GCM (Galois/Counter Mode). It is not a general-purpose cryptographic hash function. +config CRYPTO_POLYVAL + tristate + select CRYPTO_GF128MUL + select CRYPTO_HASH + help + POLYVAL is the hash function used in HCTR2. It is not a general-purpose + cryptographic hash function. + config CRYPTO_POLY1305 tristate "Poly1305 authenticator algorithm" select CRYPTO_HASH diff --git a/crypto/Makefile b/crypto/Makefile index 6b3fe3df1489..561f901a91d4 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -169,6 +169,7 @@ UBSAN_SANITIZE_jitterentropy.o = n jitterentropy_rng-y := jitterentropy.o jitterentropy-kcapi.o obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o obj-$(CONFIG_CRYPTO_GHASH) += ghash-generic.o +obj-$(CONFIG_CRYPTO_POLYVAL) += polyval-generic.o obj-$(CONFIG_CRYPTO_USER_API) += af_alg.o obj-$(CONFIG_CRYPTO_USER_API_HASH) += algif_hash.o obj-$(CONFIG_CRYPTO_USER_API_SKCIPHER) += algif_skcipher.o diff --git a/crypto/polyval-generic.c b/crypto/polyval-generic.c new file mode 100644 index 000000000000..e8d7e34e5355 --- /dev/null +++ b/crypto/polyval-generic.c @@ -0,0 +1,199 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * POLYVAL: hash function for HCTR2.
+ * + * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen + * Copyright (c) 2009 Intel Corp. + * Author: Huang Ying + * Copyright 2021 Google LLC + */ + +/* + * Code based on crypto/ghash-generic.c + * + * POLYVAL is a keyed hash function similar to GHASH. POLYVAL uses a + * different modulus for finite field multiplication which makes hardware + * accelerated implementations on little-endian machines faster. + * + * Like GHASH, POLYVAL is not a cryptographic hash function and should + * not be used outside of crypto modes explicitly designed to use POLYVAL. + * + * This implementation uses a convenient trick involving the GHASH and POLYVAL + * fields. This trick allows multiplication in the POLYVAL field to be + * implemented by using multiplication in the GHASH field as a subroutine. An + * element of the POLYVAL field can be converted to an element of the GHASH + * field by computing x*REVERSE(a), where REVERSE reverses the byte-ordering of + * a. Similarly, an element of the GHASH field can be converted back to the + * POLYVAL field by computing REVERSE(x^{-1}*a). + * + * By using this trick, we do not need to implement the POLYVAL field for the + * generic implementation. + * + * Warning: this generic implementation is not intended to be used in practice + * and is not constant time. For practical use, a hardware accelerated + * implementation of POLYVAL should be used instead. + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct polyval_tfm_ctx { + struct gf128mul_4k *gf128; +}; + +static int polyval_init(struct shash_desc *desc) +{ + struct polyval_desc_ctx *dctx = shash_desc_ctx(desc); + + memset(dctx, 0, sizeof(*dctx)); + + return 0; +} + +static void reverse_block(u8 block[POLYVAL_BLOCK_SIZE]) +{ + u64 *p1 = (u64 *)block; + u64 *p2 = (u64 *)&block[8]; + u64 a = get_unaligned(p1); + u64 b = get_unaligned(p2); + + put_unaligned(swab64(a), p2); + put_unaligned(swab64(b), p1); +} + +static int polyval_setkey(struct crypto_shash *tfm, + const u8 *key, unsigned int keylen) +{ + struct polyval_tfm_ctx *ctx = crypto_shash_ctx(tfm); + be128 k; + + if (keylen != POLYVAL_BLOCK_SIZE) + return -EINVAL; + + gf128mul_free_4k(ctx->gf128); + + BUILD_BUG_ON(sizeof(k) != POLYVAL_BLOCK_SIZE); + // avoid violating alignment rules + memcpy(&k, key, POLYVAL_BLOCK_SIZE); + + reverse_block((u8 *)&k); + gf128mul_x_lle(&k, &k); + + ctx->gf128 = gf128mul_init_4k_lle(&k); + memzero_explicit(&k, POLYVAL_BLOCK_SIZE); + + if (!ctx->gf128) + return -ENOMEM; + + return 0; +} + +static int polyval_update(struct shash_desc *desc, + const u8 *src, unsigned int srclen) +{ + struct polyval_desc_ctx *dctx = shash_desc_ctx(desc); + const struct polyval_tfm_ctx *ctx = crypto_shash_ctx(desc->tfm); + u8 *dst = dctx->buffer; + u8 *pos; + u8 tmp[POLYVAL_BLOCK_SIZE]; + int n; + + if (dctx->bytes) { + n = min(srclen, dctx->bytes); + pos = dst + dctx->bytes - 1; + + dctx->bytes -= n; + srclen -= n; + + while (n--) + *pos-- ^= *src++; + + if (!dctx->bytes) + gf128mul_4k_lle((be128 *)dst, ctx->gf128); + } + + while (srclen >= POLYVAL_BLOCK_SIZE) { + memcpy(tmp, src, POLYVAL_BLOCK_SIZE); + reverse_block(tmp); + crypto_xor(dst, tmp, POLYVAL_BLOCK_SIZE); + gf128mul_4k_lle((be128 *)dst, ctx->gf128); + src += POLYVAL_BLOCK_SIZE; + srclen -= POLYVAL_BLOCK_SIZE; + } + + if (srclen) { + dctx->bytes = POLYVAL_BLOCK_SIZE - srclen; + pos = dst + POLYVAL_BLOCK_SIZE - 1; + while (srclen--) + *pos-- ^= *src++; + } + + return 0; +} + +static int polyval_final(struct shash_desc 
*desc, u8 *dst) +{ + struct polyval_desc_ctx *dctx = shash_desc_ctx(desc); + const struct polyval_tfm_ctx *ctx = crypto_shash_ctx(desc->tfm); + u8 *buf = dctx->buffer; + + if (dctx->bytes) + gf128mul_4k_lle((be128 *)buf, ctx->gf128); + dctx->bytes = 0; + + reverse_block(buf); + memcpy(dst, buf, POLYVAL_BLOCK_SIZE); + + return 0; +} + +static void polyval_exit_tfm(struct crypto_tfm *tfm) +{ + struct polyval_tfm_ctx *ctx = crypto_tfm_ctx(tfm); + + gf128mul_free_4k(ctx->gf128); +} + +static struct shash_alg polyval_alg = { + .digestsize = POLYVAL_DIGEST_SIZE, + .init = polyval_init, + .update = polyval_update, + .final = polyval_final, + .setkey = polyval_setkey, + .descsize = sizeof(struct polyval_desc_ctx), + .base = { + .cra_name = "polyval", + .cra_driver_name = "polyval-generic", + .cra_priority = 100, + .cra_blocksize = POLYVAL_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct polyval_tfm_ctx), + .cra_module = THIS_MODULE, + .cra_exit = polyval_exit_tfm, + }, +}; + +static int __init polyval_mod_init(void) +{ + return crypto_register_shash(&polyval_alg); +} + +static void __exit polyval_mod_exit(void) +{ + crypto_unregister_shash(&polyval_alg); +} + +subsys_initcall(polyval_mod_init); +module_exit(polyval_mod_exit); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("POLYVAL hash function"); +MODULE_ALIAS_CRYPTO("polyval"); +MODULE_ALIAS_CRYPTO("polyval-generic"); diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index b3a23dbf5b14..ced7467bb481 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -1730,6 +1730,10 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb) ret += tcrypt_test("ccm(sm4)"); break; + case 57: + ret += tcrypt_test("polyval"); + break; + case 100: ret += tcrypt_test("hmac(md5)"); break; diff --git a/crypto/testmgr.c b/crypto/testmgr.c index d2b42ff0b04a..3e54d17fe644 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -5245,6 +5245,12 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .hash = __VECS(poly1305_tv_template) } + }, { + .alg = "polyval", + .test = alg_test_hash, + .suite = { + .hash = __VECS(polyval_tv_template) + } }, { .alg = "rfc3686(ctr(aes))", .test = alg_test_skcipher, diff --git a/crypto/testmgr.h b/crypto/testmgr.h index e1ebbb3c4d4c..da3736e51982 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -33346,4 +33346,288 @@ static const struct cipher_testvec aes_xctr_tv_template[] = { }, }; +/* + * Test vectors generated using https://github.com/google/hctr2 + */ +static const struct hash_testvec polyval_tv_template[] = { + { + .key = "\x31\x07\x28\xd9\x91\x1f\x1f\x38" + "\x37\xb2\x43\x16\xc3\xfa\xb9\xa0", + .plaintext = "\x65\x78\x61\x6d\x70\x6c\x65\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x48\x65\x6c\x6c\x6f\x20\x77\x6f" + "\x72\x6c\x64\x00\x00\x00\x00\x00" + "\x38\x00\x00\x00\x00\x00\x00\x00" + "\x58\x00\x00\x00\x00\x00\x00\x00", + .digest = "\xad\x7f\xcf\x0b\x51\x69\x85\x16" + "\x62\x67\x2f\x3c\x5f\x95\x13\x8f", + .psize = 48, + .ksize = 16, + }, + { + .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a" + "\xc5\xdb\xc6\x98\x7a\xda\x73\x77", + .plaintext = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .digest = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .psize = 16, + .ksize = 16, + }, + { + .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a" + "\xc5\xdb\xc6\x98\x7a\xda\x73\x77", + .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x40\x00\x00\x00\x00\x00\x00\x00", + .digest = 
"\xeb\x93\xb7\x74\x09\x62\xc5\xe4" + "\x9d\x2a\x90\xa7\xdc\x5c\xec\x74", + .psize = 32, + .ksize = 16, + }, + { + .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a" + "\xc5\xdb\xc6\x98\x7a\xda\x73\x77", + .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x60\x00\x00\x00\x00\x00\x00\x00", + .digest = "\x48\xeb\x6c\x6c\x5a\x2d\xbe\x4a" + "\x1d\xde\x50\x8f\xee\x06\x36\x1b", + .psize = 32, + .ksize = 16, + }, + { + .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a" + "\xc5\xdb\xc6\x98\x7a\xda\x73\x77", + .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x02\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x01\x00\x00\x00\x00\x00\x00", + .digest = "\xce\x6e\xdc\x9a\x50\xb3\x6d\x9a" + "\x98\x98\x6b\xbf\x6a\x26\x1c\x3b", + .psize = 48, + .ksize = 16, + }, + { + .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a" + "\xc5\xdb\xc6\x98\x7a\xda\x73\x77", + .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x02\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x03\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x80\x01\x00\x00\x00\x00\x00\x00", + .digest = "\x81\x38\x87\x46\xbc\x22\xd2\x6b" + "\x2a\xbc\x3d\xcb\x15\x75\x42\x22", + .psize = 64, + .ksize = 16, + }, + { + .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a" + "\xc5\xdb\xc6\x98\x7a\xda\x73\x77", + .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x02\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x03\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x04\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x02\x00\x00\x00\x00\x00\x00", + .digest = "\x1e\x39\xb6\xd3\x34\x4d\x34\x8f" + "\x60\x44\xf8\x99\x35\xd1\xcf\x78", + .psize = 80, + .ksize = 16, + }, + { + .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a" + "\xc5\xdb\xc6\x98\x7a\xda\x73\x77", + .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x02\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x03\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x08\x00\x00\x00\x00\x00\x00\x00" + "\x00\x01\x00\x00\x00\x00\x00\x00", + .digest = "\x2c\xe7\xda\xaf\x7c\x89\x49\x08" + "\x22\x05\x12\x55\xb1\x2e\xca\x6b", + .psize = 64, + .ksize = 16, + }, + { + .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a" + "\xc5\xdb\xc6\x98\x7a\xda\x73\x77", + .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x02\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x03\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x04\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x08\x00\x00\x00\x00\x00\x00\x00" + "\x80\x01\x00\x00\x00\x00\x00\x00", + .digest = "\x9c\xa9\x87\x71\x5d\x69\xc1\x78" + "\x67\x11\xdf\xcd\x22\xf8\x30\xfc", + .psize = 80, + .ksize = 16, + }, + { + .key = "\xd9\xb3\x60\x27\x96\x94\x94\x1a" + "\xc5\xdb\xc6\x98\x7a\xda\x73\x77", + .plaintext = "\x01\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x02\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x03\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x04\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + 
"\x05\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x08\x00\x00\x00\x00\x00\x00\x00" + "\x00\x02\x00\x00\x00\x00\x00\x00", + .digest = "\xff\xcd\x05\xd5\x77\x0f\x34\xad" + "\x92\x67\xf0\xa5\x99\x94\xb1\x5a", + .psize = 96, + .ksize = 16, + }, + { + .key = "\x03\x6e\xe1\xfe\x2d\x79\x26\xaf" + "\x68\x89\x80\x95\xe5\x4e\x7b\x3c", + .plaintext = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .digest = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .psize = 16, + .ksize = 16, + }, + { + .key = "\x37\x24\xf5\x5f\x1d\x22\xac\x0a" + "\xb8\x30\xda\x0b\x6a\x99\x5d\x74", + .plaintext = "\x75\x76\xf7\x02\x8e\xc6\xeb\x5e" + "\xa7\xe2\x98\x34\x2a\x94\xd4\xb2" + "\x02\xb3\x70\xef\x97\x68\xec\x65" + "\x61\xc4\xfe\x6b\x7e\x72\x96\xfa" + "\x85\x9c\x21\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\xe4\x2a\x3c\x02\xc2\x5b\x64\x86" + "\x9e\x14\x6d\x7b\x23\x39\x87\xbd" + "\xdf\xc2\x40\x87\x1d\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x18\x01\x00\x00\x00\x00\x00\x00" + "\xa8\x00\x00\x00\x00\x00\x00\x00", + .digest = "\x4c\xbb\xa0\x90\xf0\x3f\x7d\x11" + "\x88\xea\x55\x74\x9f\xa6\xc7\xbd", + .psize = 96, + .ksize = 16, + }, + { + .key = "\x90\xcc\xac\xee\xba\xd7\xd4\x68" + "\x98\xa6\x79\x70\xdf\x66\x15\x6c", + .plaintext = "", + .digest = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .psize = 0, + .ksize = 16, + }, + { + .key = "\x89\xc9\x4b\xde\x40\xa6\xf9\x62" + "\x58\x04\x51\x26\xb4\xb1\x14\xe4", + .plaintext = "", + .digest = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .psize = 0, + .ksize = 16, + }, + { + .key = "\x37\xbe\x68\x16\x50\xb9\x4e\xb0" + "\x47\xde\xe2\xbd\xde\xe4\x48\x09", + .plaintext = "\x87\xfc\x68\x9f\xff\xf2\x4a\x1e" + "\x82\x3b\x73\x8f\xc1\xb2\x1b\x7a" + "\x6c\x4f\x81\xbc\x88\x9b\x6c\xa3" + "\x9c\xc2\xa5\xbc\x14\x70\x4c\x9b" + "\x0c\x9f\x59\x92\x16\x4b\x91\x3d" + "\x18\x55\x22\x68\x12\x8c\x63\xb2" + "\x51\xcb\x85\x4b\xd2\xae\x0b\x1c" + "\x5d\x28\x9d\x1d\xb1\xc8\xf0\x77" + "\xe9\xb5\x07\x4e\x06\xc8\xee\xf8" + "\x1b\xed\x72\x2a\x55\x7d\x16\xc9" + "\xf2\x54\xe7\xe9\xe0\x44\x5b\x33" + "\xb1\x49\xee\xff\x43\xfb\x82\xcd" + "\x4a\x70\x78\x81\xa4\x34\x36\xe8" + "\x4c\x28\x54\xa6\x6c\xc3\x6b\x78" + "\xe7\xc0\x5d\xc6\x5d\x81\xab\x70" + "\x08\x86\xa1\xfd\xf4\x77\x55\xfd" + "\xa3\xe9\xe2\x1b\xdf\x99\xb7\x80" + "\xf9\x0a\x4f\x72\x4a\xd3\xaf\xbb" + "\xb3\x3b\xeb\x08\x58\x0f\x79\xce" + "\xa5\x99\x05\x12\x34\xd4\xf4\x86" + "\x37\x23\x1d\xc8\x49\xc0\x92\xae" + "\xa6\xac\x9b\x31\x55\xed\x15\xc6" + "\x05\x17\x37\x8d\x90\x42\xe4\x87" + "\x89\x62\x88\x69\x1c\x6a\xfd\xe3" + "\x00\x2b\x47\x1a\x73\xc1\x51\xc2" + "\xc0\x62\x74\x6a\x9e\xb2\xe5\x21" + "\xbe\x90\xb5\xb0\x50\xca\x88\x68" + "\xe1\x9d\x7a\xdf\x6c\xb7\xb9\x98" + "\xee\x28\x62\x61\x8b\xd1\x47\xf9" + "\x04\x7a\x0b\x5d\xcd\x2b\x65\xf5" + "\x12\xa3\xfe\x1a\xaa\x2c\x78\x42" + "\xb8\xbe\x7d\x74\xeb\x59\xba\xba", + .digest = "\xae\x11\xd4\x60\x2a\x5f\x9e\x42" + "\x89\x04\xc2\x34\x8d\x55\x94\x0a", + .psize = 256, + .ksize = 16, + }, + { + .key = "\xc8\x53\xde\xaa\xb1\x4b\x6b\xd5" + "\x88\xd6\x4c\xe9\xba\x35\x3d\x5a", + .plaintext = "\xc1\xeb\xba\x8d\xb7\x20\x09\xe0" + "\x28\x4f\x29\xf3\xd8\x26\x50\x40" + "\xd9\x06\xa8\xa8\xc0\xbe\xf0\xfb" + "\x75\x7c\x02\x86\x16\x83\x9d\x65" + "\x8f\x5e\xc4\x58\xed\x6a\xb3\x10" + "\xd2\xf7\x23\xc2\x4a\xb0\x00\x6a" + "\x01\x7c\xf7\xf7\x69\x42\xb2\x12" + "\xb0\xeb\x65\x07\xd7\x8e\x2d\x27" + "\x67\xa2\x57\xf0\x49\x0f\x3f\x0e" + "\xc9\xf7\x1b\xe0\x5b\xdd\x87\xfb" + 
"\x89\xd1\xfa\xb1\x46\xaf\xa2\x93" + "\x01\x65\xb6\x6f\xbe\x29\x7d\x9f" + "\xfa\xf5\x58\xc6\xb5\x92\x55\x25" + "\x4c\xb5\x0c\xc2\x61\x9f\xc4\xb1" + "\x7f\xe3\x61\x18\x3f\x8c\xb2\xd6" + "\xfd\x9f\xd8\xe5\x3d\x03\x05\xa2" + "\x5d\x1a\xa8\xf0\x04\x41\xea\xa6" + "\x07\x67\x86\x00\xe8\x86\xfc\xb1" + "\xc3\x15\x3e\xc8\x84\x2e\x5e\x5f" + "\x7b\x75\x6a\xc4\x48\xb4\xee\x5f" + "\xe9\x76\xdf\xe6\x1a\xd4\x15\x92" + "\x23\x03\x06\xc1\x2d\x0f\x94\xcb" + "\xe6\x5e\x18\xa6\x3b\x38\x1f\xc2" + "\x28\x73\x8a\xbd\x3a\x6f\xb0\x95" + "\x0f\x1c\xc7\xdf\x10\x0b\x2a\x7d" + "\xf9\x6b\xe1\x4a\xfb\xe1\x07\xc9" + "\x69\x7b\x27\x65\xc0\x08\x49\xc0" + "\xf3\x0b\x5b\xa6\x8b\xf7\x1a\xfe" + "\xe3\x9f\x87\x1d\x68\x07\xf4\x53" + "\x8d\x54\xe9\x3f\xd5\x02\x3a\x09" + "\x72\xa9\x84\xdc\x25\xd3\xad\xdb" + "\x4e\x45\x4f\x7f\xe8\x02\x69\x45", + .digest = "\x7b\x4f\x29\xb3\x0b\x4d\x2b\xa3" + "\x40\xc8\x56\x5a\x0a\xcf\xbd\x9b", + .psize = 256, + .ksize = 16, + }, +}; + #endif /* _CRYPTO_TESTMGR_H */ diff --git a/include/crypto/polyval.h b/include/crypto/polyval.h new file mode 100644 index 000000000000..fd0c6e124b65 --- /dev/null +++ b/include/crypto/polyval.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Common values for the Polyval hash algorithm + * + * Copyright 2021 Google LLC + */ + +#ifndef _CRYPTO_POLYVAL_H +#define _CRYPTO_POLYVAL_H + +#include +#include + +#define POLYVAL_BLOCK_SIZE 16 +#define POLYVAL_DIGEST_SIZE 16 + +struct polyval_desc_ctx { + u8 buffer[POLYVAL_BLOCK_SIZE]; + u32 bytes; +}; + +#endif From patchwork Thu Feb 10 23:28:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Nathan Huckleberry X-Patchwork-Id: 12742588 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 254E9C433EF for ; Thu, 10 Feb 2022 23:28:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345663AbiBJX2t (ORCPT ); Thu, 10 Feb 2022 18:28:49 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:41296 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345656AbiBJX2n (ORCPT ); Thu, 10 Feb 2022 18:28:43 -0500 Received: from mail-ua1-x949.google.com (mail-ua1-x949.google.com [IPv6:2607:f8b0:4864:20::949]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1F93F116D for ; Thu, 10 Feb 2022 15:28:41 -0800 (PST) Received: by mail-ua1-x949.google.com with SMTP id a16-20020ab03c90000000b0033c71cc6a2cso3595882uax.0 for ; Thu, 10 Feb 2022 15:28:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc:content-transfer-encoding; bh=d9YF2gn6l8jputdnpO3x6JGJt1dtz0Ft//7QQRNUVJE=; b=WmyyULPumeHgxc1ERw2+3vLQ75rhVUdGfGLcSVxJrslYBQod71h0STLWsMb5erYCQY BgoVelMIu1cwLYWJRUeF+JqoT7uGLg7MacGTAv0VpGMULrkN+FOdYVfb+Kf5mdGOkxIw t0dDUQBv0+LFWQLq9vTsvLdiKAdNEIF4UdfE6cn7FYPVbXsYWpR2jw3fyM52ES0YVwku IPC2cogx0MCek6WVMtCzX2NXUuvIMcquiyKuE6GDdPELReTIgsaBIK+ZOBr0bpDv3YDA ZbC5q/zztpsQCzit+SqFoQcyIEiuiStwa+NomXyyoQbx1Iiw8c5lqM9iSN9lFeVFTSA3 h/6A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc:content-transfer-encoding; 
From patchwork Thu Feb 10 23:28:08 2022 Date: Thu, 10 Feb 2022 23:28:08 +0000 In-Reply-To: <20220210232812.798387-1-nhuck@google.com> Message-Id: <20220210232812.798387-4-nhuck@google.com> References: <20220210232812.798387-1-nhuck@google.com> Subject: [RFC PATCH v2 3/7] crypto: hctr2 - Add HCTR2 support From: Nathan Huckleberry To: linux-crypto@vger.kernel.org Cc: Herbert Xu , "David S. Miller" , linux-arm-kernel@lists.infradead.org, Paul Crowley , Eric Biggers , Sami Tolvanen , Ard Biesheuvel , Nathan Huckleberry Add support for HCTR2 as a template. HCTR2 is a length-preserving encryption mode that is efficient on processors with instructions to accelerate AES and carryless multiplication, e.g. x86 processors with AES-NI and CLMUL, and ARM processors with the ARMv8 Crypto Extensions. As a length-preserving encryption mode, HCTR2 is suitable for applications such as storage encryption where ciphertext expansion is not possible, and thus authenticated encryption cannot be used. Currently, such applications usually use XTS, or in some cases Adiantum. XTS has the disadvantage that it is a narrow-block mode: a bitflip will only change 16 bytes in the resulting ciphertext or plaintext. This reveals more information to an attacker than necessary. HCTR2 is a wide-block mode, so it provides a stronger security property: a bitflip will change the entire message. HCTR2 is somewhat similar to Adiantum, which is also a wide-block mode. However, HCTR2 is designed to take advantage of existing crypto instructions, while Adiantum targets devices without such hardware support. Adiantum is also designed with longer messages in mind, while HCTR2 is designed to be efficient even on short messages. HCTR2 requires POLYVAL and XCTR as components.
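As a structural sketch, HCTR2 encryption of P = M || N, where M is the first 16-byte block and N is the rest of the message, follows the paper's pseudocode roughly as below. This illustrates only the dataflow, not the kernel code: block_enc stands for a single-block E_K, xctr_fn for XCTR keyed with the same key, hash_fn for the POLYVAL-based hash of (tweak, message), and L for E_K(0x01 || 0^15) as derived in hctr2_setkey() below; all of these names are placeholders.

def hctr2_encrypt(block_enc, xctr_fn, hash_fn, L, tweak, pt):
    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))
    M, N = pt[:16], pt[16:]
    MM = xor(M, hash_fn(tweak, N))  # mask first block with hash of bulk
    UU = block_enc(MM)              # the single blockcipher invocation
    S = xor(xor(MM, UU), L)         # derived IV for XCTR
    V = xctr_fn(S, N)               # bulk encrypted with XCTR under IV S
    U = xor(UU, hash_fn(tweak, V))  # re-mask with hash of ciphertext bulk
    return U + V

Every byte of the output depends on every byte of the input through the two hash passes, which is the wide-block property described above.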
More information on HCTR2 can be found here:
Length-preserving encryption with HCTR2:
https://eprint.iacr.org/2021/1441.pdf

Signed-off-by: Nathan Huckleberry
---
Changes since v1:
 * Rename streamcipher -> xctr
 * Rename hash -> polyval
 * Use __le64 instead of u64 for little-endian length
 * memzero_explicit in set_key
 * Use crypto request length instead of scatterlist length for polyval
 * Add comments referencing the paper's pseudocode
 * Derive blockcipher name from xctr name
 * Pass IV through request context
 * Use .generic_driver
 * Make tests more comprehensive

 crypto/Kconfig   |  11 +
 crypto/Makefile  |   1 +
 crypto/hctr2.c   | 532 +++++++++++++++++++++++++++++++++++++
 crypto/tcrypt.c  |   5 +
 crypto/testmgr.c |   8 +
 crypto/testmgr.h | 670 +++++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 1227 insertions(+)
 create mode 100644 crypto/hctr2.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 0c61d03530a6..2a9029f51caf 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -524,6 +524,17 @@ config CRYPTO_ADIANTUM
 
 	  If unsure, say N.
 
+config CRYPTO_HCTR2
+	tristate "HCTR2 support"
+	select CRYPTO_XCTR
+	select CRYPTO_POLYVAL
+	select CRYPTO_MANAGER
+	help
+	  HCTR2 is a length-preserving encryption mode for storage encryption that
+	  is efficient on processors with instructions to accelerate AES and
+	  carryless multiplication, e.g. x86 processors with AES-NI and CLMUL, and
+	  ARM processors with the ARMv8 crypto extensions.
+
 config CRYPTO_ESSIV
 	tristate "ESSIV support for block encryption"
 	select CRYPTO_AUTHENC
diff --git a/crypto/Makefile b/crypto/Makefile
index 561f901a91d4..2dca9dbdede6 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -94,6 +94,7 @@ obj-$(CONFIG_CRYPTO_LRW) += lrw.o
 obj-$(CONFIG_CRYPTO_XTS) += xts.o
 obj-$(CONFIG_CRYPTO_CTR) += ctr.o
 obj-$(CONFIG_CRYPTO_XCTR) += xctr.o
+obj-$(CONFIG_CRYPTO_HCTR2) += hctr2.o
 obj-$(CONFIG_CRYPTO_KEYWRAP) += keywrap.o
 obj-$(CONFIG_CRYPTO_ADIANTUM) += adiantum.o
 obj-$(CONFIG_CRYPTO_NHPOLY1305) += nhpoly1305.o
diff --git a/crypto/hctr2.c b/crypto/hctr2.c
new file mode 100644
index 000000000000..6ccc2a2f9038
--- /dev/null
+++ b/crypto/hctr2.c
@@ -0,0 +1,532 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * HCTR2 length-preserving encryption mode
+ *
+ * Copyright 2021 Google LLC
+ */
+
+
+/*
+ * HCTR2 is a length-preserving encryption mode that is efficient on
+ * processors with instructions to accelerate AES and carryless
+ * multiplication, e.g. x86 processors with AES-NI and CLMUL, and ARM
+ * processors with the ARMv8 crypto extensions.
+ *
+ * For more details, see the paper: Length-preserving encryption with HCTR2
+ * (https://eprint.iacr.org/2021/1441.pdf)
+ */
+
+#include <crypto/internal/cipher.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/polyval.h>
+#include <crypto/scatterwalk.h>
+#include <linux/module.h>
+
+#define BLOCKCIPHER_BLOCK_SIZE	16
+
+/*
+ * The specification allows variable-length tweaks, but Linux's crypto API
+ * currently only allows algorithms to support a single length. The "natural"
+ * tweak length for HCTR2 is 16, since that fits into one POLYVAL block for
+ * the best performance. But longer tweaks are useful for fscrypt, to avoid
+ * needing to derive per-file keys. So instead we use two blocks, or 32 bytes.
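+ *
+ * The tweak length is also bound into the POLYVAL hash: hctr2_hash_tweak()
+ * absorbs a length block encoding TWEAK_SIZE in bits, doubled, plus 2 if
+ * the message is a multiple of the block size and plus 3 otherwise, before
+ * absorbing the tweak itself.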
+ */
+#define TWEAK_SIZE	32
+
+struct hctr2_instance_ctx {
+	struct crypto_cipher_spawn blockcipher_spawn;
+	struct crypto_skcipher_spawn xctr_spawn;
+	struct crypto_shash_spawn polyval_spawn;
+};
+
+struct hctr2_tfm_ctx {
+	struct crypto_cipher *blockcipher;
+	struct crypto_skcipher *xctr;
+	struct crypto_shash *polyval;
+	u8 L[BLOCKCIPHER_BLOCK_SIZE];
+};
+
+struct hctr2_request_ctx {
+	u8 first_block[BLOCKCIPHER_BLOCK_SIZE];
+	u8 xctr_iv[BLOCKCIPHER_BLOCK_SIZE];
+	struct scatterlist *bulk_part_dst;
+	struct scatterlist *bulk_part_src;
+	struct scatterlist sg_src[2];
+	struct scatterlist sg_dst[2];
+	/* Sub-requests, must be last */
+	union {
+		struct shash_desc hash_desc;
+		struct skcipher_request xctr_req;
+	} u;
+};
+
+static int hctr2_setkey(struct crypto_skcipher *tfm, const u8 *key,
+			unsigned int keylen)
+{
+	struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+	u8 hbar[BLOCKCIPHER_BLOCK_SIZE];
+	int err;
+
+	crypto_cipher_clear_flags(tctx->blockcipher, CRYPTO_TFM_REQ_MASK);
+	crypto_cipher_set_flags(tctx->blockcipher,
+				crypto_skcipher_get_flags(tfm) &
+				CRYPTO_TFM_REQ_MASK);
+	err = crypto_cipher_setkey(tctx->blockcipher, key, keylen);
+	if (err)
+		return err;
+
+	crypto_skcipher_clear_flags(tctx->xctr, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(tctx->xctr,
+				  crypto_skcipher_get_flags(tfm) &
+				  CRYPTO_TFM_REQ_MASK);
+	err = crypto_skcipher_setkey(tctx->xctr, key, keylen);
+	if (err)
+		return err;
+
+	memset(tctx->L, 0, sizeof(tctx->L));
+	memset(hbar, 0, sizeof(hbar));
+	tctx->L[0] = 0x01;
+	crypto_cipher_encrypt_one(tctx->blockcipher, tctx->L, tctx->L);
+	crypto_cipher_encrypt_one(tctx->blockcipher, hbar, hbar);
+
+	crypto_shash_clear_flags(tctx->polyval, CRYPTO_TFM_REQ_MASK);
+	crypto_shash_set_flags(tctx->polyval, crypto_skcipher_get_flags(tfm) &
+			       CRYPTO_TFM_REQ_MASK);
+	err = crypto_shash_setkey(tctx->polyval, hbar, BLOCKCIPHER_BLOCK_SIZE);
+	memzero_explicit(hbar, sizeof(hbar));
+	return err;
+}
+
+static int hctr2_hash_tweak(struct skcipher_request *req)
+{
+	__le64 tweak_length_block[2];
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	const struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+	struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+	struct shash_desc *hash_desc = &rctx->u.hash_desc;
+	int err;
+
+	memset(tweak_length_block, 0, sizeof(tweak_length_block));
+	if (req->cryptlen % POLYVAL_BLOCK_SIZE == 0)
+		tweak_length_block[0] = cpu_to_le64(TWEAK_SIZE * 8 * 2 + 2);
+	else
+		tweak_length_block[0] = cpu_to_le64(TWEAK_SIZE * 8 * 2 + 3);
+
+	hash_desc->tfm = tctx->polyval;
+	err = crypto_shash_init(hash_desc);
+	if (err)
+		return err;
+
+	err = crypto_shash_update(hash_desc, (u8 *)tweak_length_block,
+				  sizeof(tweak_length_block));
+	if (err)
+		return err;
+	return crypto_shash_update(hash_desc, req->iv, TWEAK_SIZE);
+}
+
+static int hctr2_hash_message(struct skcipher_request *req,
+			      struct scatterlist *sgl,
+			      u8 digest[POLYVAL_DIGEST_SIZE])
+{
+	u8 padding[BLOCKCIPHER_BLOCK_SIZE];
+	struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+	struct shash_desc *hash_desc = &rctx->u.hash_desc;
+	const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+	struct sg_mapping_iter miter;
+	unsigned int remainder = bulk_len % BLOCKCIPHER_BLOCK_SIZE;
+	int err = 0, i;
+	int n = 0;
+
+	sg_miter_start(&miter, sgl, sg_nents(sgl),
+		       SG_MITER_FROM_SG | SG_MITER_ATOMIC);
+	for (i = 0; i < bulk_len; i += n) {
+		sg_miter_next(&miter);
+		n = min_t(unsigned int, miter.length, bulk_len - i);
+		err = crypto_shash_update(hash_desc,
+					  miter.addr, n);
+		if (err)
+			break;
+	}
+	sg_miter_stop(&miter);
+
+	if (err)
+		return err;
+
+	if (remainder) {
+		memset(padding, 0, BLOCKCIPHER_BLOCK_SIZE);
+		padding[0] = 0x01;
+		err = crypto_shash_update(hash_desc, padding,
+					  BLOCKCIPHER_BLOCK_SIZE - remainder);
+		if (err)
+			return err;
+	}
+	return crypto_shash_final(hash_desc, digest);
+}
+
+static int hctr2_finish(struct skcipher_request *req)
+{
+	struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+	u8 digest[POLYVAL_DIGEST_SIZE];
+	int err;
+
+	// U = UU ^ H(T || V)
+	err = hctr2_hash_tweak(req);
+	if (err)
+		return err;
+	err = hctr2_hash_message(req, rctx->bulk_part_dst, digest);
+	if (err)
+		return err;
+	crypto_xor(rctx->first_block, digest, BLOCKCIPHER_BLOCK_SIZE);
+
+	// Copy U into dst scatterlist
+	scatterwalk_map_and_copy(rctx->first_block, req->dst,
+				 0, BLOCKCIPHER_BLOCK_SIZE, 1);
+	return 0;
+}
+
+static void hctr2_xctr_done(struct crypto_async_request *areq,
+			    int err)
+{
+	struct skcipher_request *req = areq->data;
+
+	if (!err)
+		err = hctr2_finish(req);
+
+	skcipher_request_complete(req, err);
+}
+
+static int hctr2_crypt(struct skcipher_request *req, bool enc)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	const struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+	struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+	u8 digest[POLYVAL_DIGEST_SIZE];
+	int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+	int err;
+
+	// Requests must be at least one block
+	if (req->cryptlen < BLOCKCIPHER_BLOCK_SIZE)
+		return -EINVAL;
+
+	// Copy M into a temporary buffer
+	scatterwalk_map_and_copy(rctx->first_block, req->src,
+				 0, BLOCKCIPHER_BLOCK_SIZE, 0);
+
+	// Create scatterlists for N and V
+	rctx->bulk_part_src = scatterwalk_ffwd(rctx->sg_src, req->src,
+					       BLOCKCIPHER_BLOCK_SIZE);
+	rctx->bulk_part_dst = scatterwalk_ffwd(rctx->sg_dst, req->dst,
+					       BLOCKCIPHER_BLOCK_SIZE);
+
+	// MM = M ^ H(T || N)
+	err = hctr2_hash_tweak(req);
+	if (err)
+		return err;
+	err = hctr2_hash_message(req, rctx->bulk_part_src, digest);
+	if (err)
+		return err;
+	crypto_xor(digest, rctx->first_block, BLOCKCIPHER_BLOCK_SIZE);
+
+	// UU = E(MM)
+	if (enc)
+		crypto_cipher_encrypt_one(tctx->blockcipher, rctx->first_block,
+					  digest);
+	else
+		crypto_cipher_decrypt_one(tctx->blockcipher, rctx->first_block,
+					  digest);
+
+	// S = MM ^ UU ^ L
+	crypto_xor(digest, rctx->first_block, BLOCKCIPHER_BLOCK_SIZE);
+	crypto_xor_cpy(rctx->xctr_iv, digest, tctx->L, BLOCKCIPHER_BLOCK_SIZE);
+
+	// V = XCTR(S, N)
+	skcipher_request_set_tfm(&rctx->u.xctr_req, tctx->xctr);
+	skcipher_request_set_crypt(&rctx->u.xctr_req, rctx->bulk_part_src,
+				   rctx->bulk_part_dst, bulk_len,
+				   rctx->xctr_iv);
+	skcipher_request_set_callback(&rctx->u.xctr_req,
+				      req->base.flags,
+				      hctr2_xctr_done, req);
+	return crypto_skcipher_encrypt(&rctx->u.xctr_req) ?:
+		hctr2_finish(req);
+}
+
+static int hctr2_encrypt(struct skcipher_request *req)
+{
+	return hctr2_crypt(req, true);
+}
+
+static int hctr2_decrypt(struct skcipher_request *req)
+{
+	return hctr2_crypt(req, false);
+}
+
+static int hctr2_init_tfm(struct crypto_skcipher *tfm)
+{
+	struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+	struct hctr2_instance_ctx *ictx = skcipher_instance_ctx(inst);
+	struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+	struct crypto_skcipher *xctr;
+	struct crypto_cipher *blockcipher;
+	struct crypto_shash *polyval;
+	unsigned int subreq_size;
+	int err;
+
+	xctr = crypto_spawn_skcipher(&ictx->xctr_spawn);
+	if (IS_ERR(xctr))
+		return PTR_ERR(xctr);
+
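+	/*
+	 * The inner transforms were grabbed as spawns in
+	 * hctr2_create_common(). The POLYVAL and XCTR sub-requests share
+	 * space in the request context (the union in struct
+	 * hctr2_request_ctx), since only one of them is outstanding at a
+	 * time; the reqsize below is sized for the larger of the two.
+	 */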
+	blockcipher = crypto_spawn_cipher(&ictx->blockcipher_spawn);
+	if (IS_ERR(blockcipher)) {
+		err = PTR_ERR(blockcipher);
+		goto err_free_xctr;
+	}
+
+	polyval = crypto_spawn_shash(&ictx->polyval_spawn);
+	if (IS_ERR(polyval)) {
+		err = PTR_ERR(polyval);
+		goto err_free_blockcipher;
+	}
+
+	tctx->xctr = xctr;
+	tctx->blockcipher = blockcipher;
+	tctx->polyval = polyval;
+
+	BUILD_BUG_ON(offsetofend(struct hctr2_request_ctx, u) !=
+		     sizeof(struct hctr2_request_ctx));
+	subreq_size = max(sizeof_field(struct hctr2_request_ctx, u.hash_desc) +
+			  crypto_shash_descsize(polyval),
+			  sizeof_field(struct hctr2_request_ctx, u.xctr_req) +
+			  crypto_skcipher_reqsize(xctr));
+
+	crypto_skcipher_set_reqsize(tfm, offsetof(struct hctr2_request_ctx, u) +
+				    subreq_size);
+	return 0;
+
+err_free_blockcipher:
+	crypto_free_cipher(blockcipher);
+err_free_xctr:
+	crypto_free_skcipher(xctr);
+	return err;
+}
+
+static void hctr2_exit_tfm(struct crypto_skcipher *tfm)
+{
+	struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+
+	crypto_free_cipher(tctx->blockcipher);
+	crypto_free_skcipher(tctx->xctr);
+	crypto_free_shash(tctx->polyval);
+}
+
+static void hctr2_free_instance(struct skcipher_instance *inst)
+{
+	struct hctr2_instance_ctx *ictx = skcipher_instance_ctx(inst);
+
+	crypto_drop_cipher(&ictx->blockcipher_spawn);
+	crypto_drop_skcipher(&ictx->xctr_spawn);
+	crypto_drop_shash(&ictx->polyval_spawn);
+	kfree(inst);
+}
+
+/*
+ * Check for a supported set of inner algorithms.
+ * See the comment at the beginning of this file.
+ */
+static bool hctr2_supported_algorithms(struct skcipher_alg *xctr_alg,
+				       struct crypto_alg *blockcipher_alg,
+				       struct shash_alg *polyval_alg)
+{
+	if (strncmp(xctr_alg->base.cra_name, "xctr(", 5) != 0)
+		return false;
+
+	if (blockcipher_alg->cra_blocksize != BLOCKCIPHER_BLOCK_SIZE)
+		return false;
+
+	if (strcmp(polyval_alg->base.cra_name, "polyval") != 0)
+		return false;
+
+	return true;
+}
+
+static int hctr2_create_common(struct crypto_template *tmpl,
+			       struct rtattr **tb,
+			       const char *blockcipher_name,
+			       const char *xctr_name,
+			       const char *polyval_name)
+{
+	u32 mask;
+	struct skcipher_instance *inst;
+	struct hctr2_instance_ctx *ictx;
+	struct skcipher_alg *xctr_alg;
+	struct crypto_alg *blockcipher_alg;
+	struct shash_alg *polyval_alg;
+	int err;
+
+	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER, &mask);
+	if (err)
+		return err;
+
+	inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
+	if (!inst)
+		return -ENOMEM;
+	ictx = skcipher_instance_ctx(inst);
+
+	/* Stream cipher, xctr(block_cipher) */
+	err = crypto_grab_skcipher(&ictx->xctr_spawn,
+				   skcipher_crypto_instance(inst),
+				   xctr_name, 0, mask);
+	if (err)
+		goto err_free_inst;
+	xctr_alg = crypto_spawn_skcipher_alg(&ictx->xctr_spawn);
+
+	/* Block cipher, e.g. "aes" */
"aes" */ + err = crypto_grab_cipher(&ictx->blockcipher_spawn, + skcipher_crypto_instance(inst), + blockcipher_name, 0, mask); + if (err) + goto err_free_inst; + blockcipher_alg = crypto_spawn_cipher_alg(&ictx->blockcipher_spawn); + + /* Polyval ε-∆U hash function */ + err = crypto_grab_shash(&ictx->polyval_spawn, + skcipher_crypto_instance(inst), + polyval_name, 0, mask); + if (err) + goto err_free_inst; + polyval_alg = crypto_spawn_shash_alg(&ictx->polyval_spawn); + + /* Check the set of algorithms */ + if (!hctr2_supported_algorithms(xctr_alg, blockcipher_alg, + polyval_alg)) { + pr_warn("Unsupported HCTR2 instantiation: (%s,%s,%s)\n", + xctr_alg->base.cra_name, blockcipher_alg->cra_name, + polyval_alg->base.cra_name); + err = -EINVAL; + goto err_free_inst; + } + + /* Instance fields */ + + err = -ENAMETOOLONG; + if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "hctr2(%s)", + blockcipher_alg->cra_name) >= CRYPTO_MAX_ALG_NAME) + goto err_free_inst; + if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME, + "hctr2_base(%s,%s)", + xctr_alg->base.cra_driver_name, + polyval_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME) + goto err_free_inst; + + inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE; + inst->alg.base.cra_ctxsize = sizeof(struct hctr2_tfm_ctx); + inst->alg.base.cra_alignmask = xctr_alg->base.cra_alignmask | + polyval_alg->base.cra_alignmask; + /* + * The hash function is called twice, so it is weighted higher than the + * xctr and blockcipher. + */ + inst->alg.base.cra_priority = (2 * xctr_alg->base.cra_priority + + 4 * polyval_alg->base.cra_priority + + blockcipher_alg->cra_priority) / 7; + + inst->alg.setkey = hctr2_setkey; + inst->alg.encrypt = hctr2_encrypt; + inst->alg.decrypt = hctr2_decrypt; + inst->alg.init = hctr2_init_tfm; + inst->alg.exit = hctr2_exit_tfm; + inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(xctr_alg); + inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(xctr_alg); + inst->alg.ivsize = TWEAK_SIZE; + + inst->free = hctr2_free_instance; + + err = skcipher_register_instance(tmpl, inst); + if (err) { +err_free_inst: + hctr2_free_instance(inst); + } + return err; +} + +static int hctr2_create_base(struct crypto_template *tmpl, struct rtattr **tb) +{ + const char *xctr_name; + const char *polyval_name; + char blockcipher_name[CRYPTO_MAX_ALG_NAME]; + int len; + + xctr_name = crypto_attr_alg_name(tb[1]); + if (IS_ERR(xctr_name)) + return PTR_ERR(xctr_name); + + if (!strncmp(xctr_name, "xctr(", 5)) { + len = strscpy(blockcipher_name, xctr_name + 5, + sizeof(blockcipher_name)); + + if (len < 1) + return -EINVAL; + + if (blockcipher_name[len - 1] != ')') + return -EINVAL; + + blockcipher_name[len - 1] = 0; + } else + return -EINVAL; + + polyval_name = crypto_attr_alg_name(tb[2]); + if (IS_ERR(polyval_name)) + return PTR_ERR(polyval_name); + + return hctr2_create_common(tmpl, tb, blockcipher_name, + xctr_name, polyval_name); +} + +static int hctr2_create(struct crypto_template *tmpl, struct rtattr **tb) +{ + const char *blockcipher_name; + char xctr_name[CRYPTO_MAX_ALG_NAME]; + + blockcipher_name = crypto_attr_alg_name(tb[1]); + if (IS_ERR(blockcipher_name)) + return PTR_ERR(blockcipher_name); + + if (snprintf(xctr_name, CRYPTO_MAX_ALG_NAME, "xctr(%s)", + blockcipher_name) >= CRYPTO_MAX_ALG_NAME) + return -ENAMETOOLONG; + return hctr2_create_common(tmpl, tb, blockcipher_name, + xctr_name, "polyval"); +} + +/* hctr2(blockcipher_name) */ +/* hctr2_base(xctr_name, polyval_name) */ +static struct crypto_template hctr2_tmpls[] = { + 
{ + .name = "hctr2_base", + .create = hctr2_create_base, + .module = THIS_MODULE, + }, { + .name = "hctr2", + .create = hctr2_create, + .module = THIS_MODULE, + } +}; + +static int __init hctr2_module_init(void) +{ + return crypto_register_templates(hctr2_tmpls, ARRAY_SIZE(hctr2_tmpls)); +} + +static void __exit hctr2_module_exit(void) +{ + return crypto_unregister_templates(hctr2_tmpls, + ARRAY_SIZE(hctr2_tmpls)); +} + +subsys_initcall(hctr2_module_init); +module_exit(hctr2_module_exit); + +MODULE_DESCRIPTION("HCTR2 length-preserving encryption mode"); +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS_CRYPTO("hctr2"); +MODULE_IMPORT_NS(CRYPTO_INTERNAL); diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index ced7467bb481..3a5cd6831e65 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -2191,6 +2191,11 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb) 16, 16, aead_speed_template_19, num_mb); break; + case 226: + test_cipher_speed("hctr2(aes)", ENCRYPT, sec, NULL, + 0, speed_template_32); + break; + case 300: if (alg) { test_hash_speed(alg, sec, generic_hash_speed_template); diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 3e54d17fe644..2e92a4a89285 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -4991,6 +4991,14 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .hash = __VECS(ghash_tv_template) } + }, { + .alg = "hctr2(aes)", + .generic_driver = + "hctr2_base(xctr(aes-generic),polyval-generic)", + .test = alg_test_skcipher, + .suite = { + .cipher = __VECS(aes_hctr2_tv_template) + } }, { .alg = "hmac(md5)", .test = alg_test_hash, diff --git a/crypto/testmgr.h b/crypto/testmgr.h index da3736e51982..a16b631730e9 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -33630,4 +33630,674 @@ static const struct hash_testvec polyval_tv_template[] = { }, }; +/* + * Test vectors generated using https://github.com/google/hctr2 + */ +static const struct cipher_testvec aes_hctr2_tv_template[] = { + { + .key = "\xe1\x15\x66\x3c\x8d\xc6\x3a\xff" + "\xef\x41\xd7\x47\xa2\xcc\x8a\xba", + .iv = "\xc3\xbe\x2a\xcb\xb5\x39\x86\xf1" + "\x91\xad\x6c\xf4\xde\x74\x45\x63" + "\x5c\x7a\xd5\xcc\x8b\x76\xef\x0e" + "\xcf\x2c\x60\x69\x37\xfd\x07\x96", + .ptext = "\x65\x75\xae\xd3\xe2\xbc\x43\x5c" + "\xb3\x1a\xd8\x05\xc3\xd0\x56\x29", + .ctext = "\x11\x91\xea\x74\x58\xcc\xd5\xa2" + "\xd0\x55\x9e\x3d\xfe\x7f\xc8\xfe", + .klen = 16, + .len = 16, + }, + { + .key = "\x50\xcc\x28\x5c\xaf\x62\xa2\x4e" + "\x02\xf0\xc0\x5e\xc1\x29\x80\xca", + .iv = "\x64\xa5\xd5\xf9\xf4\x68\x26\xea" + "\xce\xbb\x6c\xdd\xa5\xef\x39\xb5" + "\x5c\x93\xdf\x1b\x93\x21\xbe\x49" + "\xff\x9e\x86\x4f\x7c\x4d\x51\x15", + .ptext = "\x34\xc1\x08\x3e\x9c\x28\x0a\xcf" + "\x33\xdb\x3f\x0d\x05\x27\xa4\xed", + .ctext = "\x7c\xae\xbb\x37\x4a\x55\x94\x5b" + "\xc6\x6f\x8f\x9f\x68\x5f\xc7\x62", + .klen = 16, + .len = 16, + }, + { + .key = "\xda\xce\x30\x85\xe7\x06\xe6\x02" + "\x8f\x02\xbf\x9a\x82\x6e\x54\xde", + .iv = "\xf6\x7a\x28\xce\xfb\x6c\xb3\xc5" + "\x47\x81\x58\x69\x07\xe5\x22\xdb" + "\x66\x93\xd7\xe9\xbd\x5c\x7f\xf0" + "\x8a\x0b\x07\x09\xbb\xf1\x48\xc4", + .ptext = "\x01\xcd\xa4\x47\x8e\x4e\xbc\x7d" + "\xfd\xd8\xe9\xaa\xc7\x37\x25\x3d" + "\x56", + .ctext = "\xf3\xb2\x9e\xde\x96\x5d\xf0\xf6" + "\xb6\x43\x57\xc5\x53\xe8\xf9\x05" + "\x87", + .klen = 16, + .len = 17, + }, + { + .key = "\xe1\x22\xee\x5b\x3c\x92\x0e\x52" + "\xd7\x95\x88\xa3\x79\x6c\xf8\xd9", + .iv = "\xb8\xd1\xe7\x32\x36\x96\xd6\x44" + "\x9c\x36\xad\x31\x5c\xaa\xf0\x17" + "\x33\x2f\x29\x04\x31\xf5\x46\xc1" + "\x2f\x1b\xfa\xa1\xbd\x86\xc4\xd3", + 
.ptext = "\x87\xd7\xb8\x2d\x12\x62\xed\x41" + "\x30\x7e\xd4\x0c\xfd\xb9\x6d\x8e" + "\x30", + .ctext = "\xb6\x6a\x0c\x71\x96\x22\xb9\x40" + "\xa2\x04\x56\x14\x22\xae\xaa\x94" + "\x26", + .klen = 16, + .len = 17, + }, + { + .key = "\xd8\x4f\xbc\x25\x8d\x3b\x30\xb9" + "\x1a\xbc\x20\x6d\xae\xfd\xc8\x26" + "\xcd\x23\xb4\x86\x28\x07\x4c\x3e", + .iv = "\xeb\xd5\x97\xaf\x03\x85\x03\x83" + "\x0c\x6b\xa3\xab\xe1\x00\x15\xf2" + "\x4c\x7d\xfb\x98\x50\xcd\x19\x75" + "\x28\x27\xe8\x18\x02\xbc\xe0\xdd", + .ptext = "\x7e\x3a\x0e\x9c\xa8\x52\xa8\x3a" + "\x15\x53\xed\x5c\x0b\x2a\x96\x5c" + "\x71\x24\x82\xee\x53\xd4\xd5\xde" + "\x27\xcd\x36\x18\xf7\x91\x4f", + .ctext = "\xd0\x82\xa9\xdb\x77\x12\x3b\x90" + "\xe6\xd5\xdd\x26\x6f\x31\xeb\xdf" + "\xd4\x0c\x56\x2e\x84\x76\x77\x86" + "\x35\xb4\x0f\xfb\x1d\x5a\x15", + .klen = 24, + .len = 31, + }, + { + .key = "\xba\xb1\x52\xa3\x76\x5e\x83\xee" + "\x49\xe6\xcf\x01\xf6\x63\xa4\xba" + "\xa1\x87\xbd\x58\xbb\x20\x96\xa5", + .iv = "\x1d\x6a\x6d\x26\x40\x9c\xce\x76" + "\x5e\xb8\x22\x1a\x10\xb6\x1d\xf2" + "\x93\x1f\x87\x04\xb8\xb4\x6e\xf8" + "\x35\x51\x96\x1b\xee\x7f\x8a\x60", + .ptext = "\x67\xdf\x68\x07\xc0\xf9\x45\x4c" + "\x1a\xfe\xd3\xc9\xb6\x7d\xe5\x18" + "\x54\xf5\xb3\xae\xf6\xda\x52\x27" + "\x3e\x52\xe8\xed\x04\xd7\x80", + .ctext = "\x11\xa1\x00\x15\x7a\x46\x92\x82" + "\x07\x5b\x64\xf1\x61\x27\x25\xc5" + "\xc5\xaf\xa7\x2e\x61\x09\xb5\x5a" + "\x9a\x1d\xc9\x20\xf0\xab\x1e", + .klen = 24, + .len = 31, + }, + { + .key = "\xbf\xaf\xd7\x67\x8c\x47\xcf\x21" + "\x8a\xa5\xdd\x32\x25\x47\xbe\x4f" + "\xf1\x3a\x0b\xa6\xaa\x2d\xcf\x09", + .iv = "\xd9\xe8\xf0\x92\x4e\xfc\x1d\xf2" + "\x81\x37\x7c\x8f\xf1\x59\x09\x20" + "\xf4\x46\x51\x86\x4f\x54\x8b\x32" + "\x58\xd1\x99\x8b\x8c\x03\xeb\x5d", + .ptext = "\xcd\x64\x90\xf9\x7c\xe5\x0e\x5a" + "\x75\xe7\x8e\x39\x86\xec\x20\x43" + "\x8a\x49\x09\x15\x47\xf4\x3c\x89" + "\x21\xeb\xcf\x4e\xcf\x91\xb5\x40" + "\xcd\xe5\x4d\x5c\x6f\xf2\xd2\x80" + "\xfa\xab\xb3\x76\x9f\x7f\x84\x0a", + .ctext = "\x44\x98\x64\x15\xb7\x0b\x80\xa3" + "\xb9\xca\x23\xff\x3b\x0b\x68\x74" + "\xbb\x3e\x20\x19\x9f\x28\x71\x2a" + "\x48\x3c\x7c\xe2\xef\xb5\x10\xac" + "\x82\x9f\xcd\x08\x8f\x6b\x16\x6f" + "\xc3\xbb\x07\xfb\x3c\xb0\x1b\x27", + .klen = 24, + .len = 48, + }, + { + .key = "\xbe\xbb\x77\x46\x06\x9c\xf4\x4d" + "\x37\x9a\xe6\x3f\x27\xa7\x3b\x6e" + "\x7a\x36\xb8\xb3\xff\xba\x51\xcc", + .iv = "\x06\xbc\x8f\x66\x6a\xbe\xed\x5e" + "\x51\xf2\x72\x11\x3a\x56\x85\x21" + "\x44\xfe\xec\x47\x2b\x09\xb8\x6f" + "\x08\x85\x2a\x93\xa3\xc3\xab\x5e", + .ptext = "\xc7\x74\x42\xf1\xea\xc5\x37\x2d" + "\xc2\xa0\xf6\xd5\x5a\x9a\xbb\xa0" + "\xb2\xfd\x54\x8e\x98\xa0\xea\xc7" + "\x79\x09\x65\x63\xa0\x2e\x82\x4e" + "\x49\x9c\x39\x67\xd0\x0d\x80\x3e" + "\x1a\x86\x84\x2b\x20\x23\xdf\xa7", + .ctext = "\x5f\xa3\x11\xca\x93\xfa\x24\x3a" + "\x24\xb6\xcf\x1e\x76\xbc\xab\xc4" + "\xf3\x24\xa0\x27\xac\x90\xec\xe9" + "\x73\x28\x7d\x35\x67\xfe\x2e\xa8" + "\x89\x77\xac\xeb\xc3\x68\x36\xf4" + "\x8f\x80\x2c\xf1\x80\xef\x49\x49", + .klen = 24, + .len = 48, + }, + { + .key = "\xa5\x28\x24\x34\x1a\x3c\xd8\xf7" + "\x05\x91\x8f\xee\x85\x1f\x35\x7f" + "\x80\x3d\xfc\x9b\x94\xf6\xfc\x9e" + "\x19\x09\x00\xa9\x04\x31\x4f\x11", + .iv = "\xa1\xba\x49\x95\xff\x34\x6d\xb8" + "\xcd\x87\x5d\x5e\xfd\xea\x85\xdb" + "\x8a\x7b\x5e\xb2\x5d\x57\xdd\x62" + "\xac\xa9\x8c\x41\x42\x94\x75\xb7", + .ptext = "\x69\xb4\xe8\x8c\x37\xe8\x67\x82" + "\xf1\xec\x5d\x04\xe5\x14\x91\x13" + "\xdf\xf2\x87\x1b\x69\x81\x1d\x71" + "\x70\x9e\x9c\x3b\xde\x49\x70\x11" + "\xa0\xa3\xdb\x0d\x54\x4f\x66\x69" + "\xd7\xdb\x80\xa7\x70\x92\x68\xce" + 
"\x81\x04\x2c\xc6\xab\xae\xe5\x60" + "\x15\xe9\x6f\xef\xaa\x8f\xa7\xa7" + "\x63\x8f\xf2\xf0\x77\xf1\xa8\xea" + "\xe1\xb7\x1f\x9e\xab\x9e\x4b\x3f" + "\x07\x87\x5b\x6f\xcd\xa8\xaf\xb9" + "\xfa\x70\x0b\x52\xb8\xa8\xa7\x9e" + "\x07\x5f\xa6\x0e\xb3\x9b\x79\x13" + "\x79\xc3\x3e\x8d\x1c\x2c\x68\xc8" + "\x51\x1d\x3c\x7b\x7d\x79\x77\x2a" + "\x56\x65\xc5\x54\x23\x28\xb0\x03", + .ctext = "\xeb\xf9\x98\x86\x3c\x40\x9f\x16" + "\x84\x01\xf9\x06\x0f\xeb\x3c\xa9" + "\x4c\xa4\x8e\x5d\xc3\x8d\xe5\xd3" + "\xae\xa6\xe6\xcc\xd6\x2d\x37\x4f" + "\x99\xc8\xa3\x21\x46\xb8\x69\xf2" + "\xe3\x14\x89\xd7\xb9\xf5\x9e\x4e" + "\x07\x93\x6f\x78\x8e\x6b\xea\x8f" + "\xfb\x43\xb8\x3e\x9b\x4c\x1d\x7e" + "\x20\x9a\xc5\x87\xee\xaf\xf6\xf9" + "\x46\xc5\x18\x8a\xe8\x69\xe7\x96" + "\x52\x55\x5f\x00\x1e\x1a\xdc\xcc" + "\x13\xa5\xee\xff\x4b\x27\xca\xdc" + "\x10\xa6\x48\x76\x98\x43\x94\xa3" + "\xc7\xe2\xc9\x65\x9b\x08\x14\x26" + "\x1d\x68\xfb\x15\x0a\x33\x49\x84" + "\x84\x33\x5a\x1b\x24\x46\x31\x92", + .klen = 32, + .len = 128, + }, + { + .key = "\x91\x35\xf6\xba\x36\x94\x44\x6e" + "\xf5\x7b\xaf\xe7\x56\x15\x0c\x8d" + "\x98\x4b\x5d\xc0\x99\x39\xd9\x75" + "\x71\xa6\x6b\x80\xa1\x92\xde\x6b", + .iv = "\xda\xf3\x93\x88\x19\x70\xd2\x7a" + "\x8f\xe5\x7a\xbc\xec\x74\xc0\xf1" + "\x6b\x46\x37\x79\x92\x91\x1d\x15" + "\x3b\xe4\x89\x2c\xf9\x50\x7f\x5c", + .ptext = "\x66\xd2\xd9\xaa\x76\x91\x8d\x04" + "\x78\xd3\x93\xeb\xe4\x9d\x88\xad" + "\x14\x6b\x05\x96\x55\x60\x17\x04" + "\x9d\x4d\xf0\x0d\x49\x78\xcc\xfc" + "\xc7\x46\xf3\x3f\xf5\x21\x39\x51" + "\xd1\x88\x84\x3e\x34\xde\x86\x19" + "\xa4\x3b\x75\x18\x98\x89\x0a\x93" + "\xe9\x6e\xbf\x52\xa1\x63\xf8\xa2" + "\x77\xab\x57\xed\x5e\xc9\x64\xed" + "\x5c\x1a\x1d\xb6\x14\xbc\x7b\x26" + "\x27\xce\xf1\xfe\xc5\x74\xd0\x9d" + "\x60\x77\x87\x36\xfd\x70\x54\x03" + "\x8b\x9a\x36\x11\xf9\x0f\x7d\x1a" + "\x66\xc5\xf0\x21\xbb\xfc\x84\xcd" + "\x45\xbc\xdf\xc0\x81\xd3\xdf\x0f" + "\x14\x20\xff\x20\x05\x0c\x47\x38", + .ctext = "\xd4\x99\xdc\x4c\xba\xb9\x9c\xca" + "\x5c\x85\x98\x3b\x11\x5e\xfb\xdc" + "\xed\x49\x0f\x49\xf2\x45\x6c\x2c" + "\x16\x4d\x75\xbf\x9b\x28\x20\x38" + "\xea\xdf\xbe\x72\xea\xf8\x6e\x34" + "\x7a\x97\x7c\xe8\xa9\x4f\x2f\xb0" + "\x45\x48\x05\xcd\xd6\xc8\x59\xe5" + "\x5f\x51\x42\xc4\x4e\x12\x64\xce" + "\x99\xb1\xaf\x78\x13\x4d\x7e\x4a" + "\xa5\x01\x0c\xd1\xad\xfe\x31\xbb" + "\xbf\x1c\x02\x58\xa4\xd5\xd2\x70" + "\x8a\xf8\x7d\x8f\x5d\xdf\xe7\x10" + "\x09\xd6\xe1\x05\x50\xe8\x31\x7a" + "\xa7\xfc\x4b\xf7\xad\xd5\x10\x29" + "\x76\x60\x3e\x71\x99\x0a\x22\xe4" + "\x29\xba\x63\xb7\x16\x56\xa2\x37", + .klen = 32, + .len = 128, + }, + { + .key = "\x36\x45\x11\xa2\x98\x5f\x96\x7c" + "\xc6\xb4\x94\x31\x0a\x67\x09\x32" + "\x6c\x6f\x6f\x00\xf0\x17\xcb\xac" + "\xa5\xa9\x47\x9e\x2e\x85\x2f\xfa", + .iv = "\x28\x88\xaa\x9b\x59\x3b\x1e\x97" + "\x82\xe5\x5c\x9e\x6d\x14\x11\x19" + "\x6e\x38\x8f\xd5\x40\x2b\xca\xf9" + "\x7b\x4c\xe4\xa3\xd0\xd2\x8a\x13", + .ptext = "\x95\xd2\xf7\x71\x1b\xca\xa5\x86" + "\xd9\x48\x01\x93\x2f\x79\x55\x29" + "\x71\x13\x15\x0e\xe6\x12\xbc\x4d" + "\x8a\x31\xe3\x40\x2a\xc6\x5e\x0d" + "\x68\xbb\x4a\x62\x8d\xc7\x45\x77" + "\xd2\xb8\xc7\x1d\xf1\xd2\x5d\x97" + "\xcf\xac\x52\xe5\x32\x77\xb6\xda" + "\x30\x85\xcf\x2b\x98\xe9\xaa\x34" + "\x62\xb5\x23\x9e\xb7\xa6\xd4\xe0" + "\xb4\x58\x18\x8c\x4d\xde\x4d\x01" + "\x83\x89\x24\xca\xfb\x11\xd4\x82" + "\x30\x7a\x81\x35\xa0\xb4\xd4\xb6" + "\x84\xea\x47\x91\x8c\x19\x86\x25" + "\xa6\x06\x8d\x78\xe6\xed\x87\xeb" + "\xda\xea\x73\x7c\xbf\x66\xb8\x72" + "\xe3\x0a\xb8\x0c\xcb\x1a\x73\xf1" + "\xa7\xca\x0a\xde\x57\x2b\xbd\x2b" + "\xeb\x8b\x24\x38\x22\xd3\x0e\x1f" + 
"\x17\xa0\x84\x98\x31\x77\xfd\x34" + "\x6a\x4e\x3d\x84\x4c\x0e\xfb\xed" + "\xc8\x2a\x51\xfa\xd8\x73\x21\x8a" + "\xdb\xb5\xfe\x1f\xee\xc4\xe8\x65" + "\x54\x84\xdd\x96\x6d\xfd\xd3\x31" + "\x77\x36\x52\x6b\x80\x4f\x9e\xb4" + "\xa2\x55\xbf\x66\x41\x49\x4e\x87" + "\xa7\x0c\xca\xe7\xa5\xc5\xf6\x6f" + "\x27\x56\xe2\x48\x22\xdd\x5f\x59" + "\x3c\xf1\x9f\x83\xe5\x2d\xfb\x71" + "\xad\xd1\xae\x1b\x20\x5c\x47\xb7" + "\x3b\xd3\x14\xce\x81\x42\xb1\x0a" + "\xf0\x49\xfa\xc2\xe7\x86\xbf\xcd" + "\xb0\x95\x9f\x8f\x79\x41\x54", + .ctext = "\xf6\x57\x51\xc4\x25\x61\x2d\xfa" + "\xd6\xd9\x3f\x9a\x81\x51\xdd\x8e" + "\x3d\xe7\xaa\x2d\xb1\xda\xc8\xa6" + "\x9d\xaa\x3c\xab\x62\xf2\x80\xc3" + "\x2c\xe7\x58\x72\x1d\x44\xc5\x28" + "\x7f\xb4\xf9\xbc\x9c\xb2\xab\x8e" + "\xfa\xd1\x4d\x72\xd9\x79\xf5\xa0" + "\x24\x3e\x90\x25\x31\x14\x38\x45" + "\x59\xc8\xf6\xe2\xc6\xf6\xc1\xa7" + "\xb2\xf8\xa7\xa9\x2b\x6f\x12\x3a" + "\xb0\x81\xa4\x08\x57\x59\xb1\x56" + "\x4c\x8f\x18\x55\x33\x5f\xd6\x6a" + "\xc6\xa0\x4b\xd6\x6b\x64\x3e\x9e" + "\xfd\x66\x16\xe2\xdb\xeb\x5f\xb3" + "\x50\x50\x3e\xde\x8d\x72\x76\x01" + "\xbe\xcc\xc9\x52\x09\x2d\x8d\xe7" + "\xd6\xc3\x66\xdb\x36\x08\xd1\x77" + "\xc8\x73\x46\x26\x24\x29\xbf\x68" + "\x2d\x2a\x99\x43\x56\x55\xe4\x93" + "\xaf\xae\x4d\xe7\x55\x4a\xc0\x45" + "\x26\xeb\x3b\x12\x90\x7c\xdc\xd1" + "\xd5\x6f\x0a\xd0\xa9\xd7\x4b\x89" + "\x0b\x07\xd8\x86\xad\xa1\xc4\x69" + "\x1f\x5e\x8b\xc4\x9e\x91\x41\x25" + "\x56\x98\x69\x78\x3a\x9e\xae\x91" + "\xd8\xd9\xfa\xfb\xff\x81\x25\x09" + "\xfc\xed\x2d\x87\xbc\x04\x62\x97" + "\x35\xe1\x26\xc2\x46\x1c\xcf\xd7" + "\x14\xed\x02\x09\xa5\xb2\xb6\xaa" + "\x27\x4e\x61\xb3\x71\x6b\x47\x16" + "\xb7\xe8\xd4\xaf\x52\xeb\x6a\x6b" + "\xdb\x4c\x65\x21\x9e\x1c\x36", + .klen = 32, + .len = 255, + }, + { + .key = "\x56\x33\x37\x21\xc4\xea\x8b\x88" + "\x67\x5e\xee\xb8\x0b\x6c\x04\x43" + "\x17\xc5\x2b\x8a\x37\x17\x8b\x37" + "\x60\x57\x3f\xa7\x82\xcd\xb9\x09", + .iv = "\x88\xee\x9b\x35\x21\x2d\x41\xa1" + "\x16\x0d\x7f\xdf\x57\xc9\xb9\xc3" + "\xf6\x30\x53\xbf\x89\x46\xe6\x87" + "\x60\xc8\x5e\x59\xdd\x8a\x7b\xfe", + .ptext = "\x49\xe2\x0a\x4f\x7a\x60\x75\x9b" + "\x95\x98\x2c\xe7\x4f\xb4\x58\xb9" + "\x24\x54\x46\x34\xdf\x58\x31\xe7" + "\x23\xc6\xa2\x60\x4a\xd2\x59\xb6" + "\xeb\x3e\xc2\xf8\xe5\x14\x3c\x6d" + "\x4b\x72\xcb\x5f\xcb\xa7\x47\xb9" + "\x7a\x49\xfc\xf1\xad\x92\x76\x55" + "\xac\x59\xdc\x3a\xc6\x8b\x7c\xdb" + "\x06\xcd\xea\x6a\x34\x51\xb7\xb2" + "\xe5\x39\x3c\x87\x00\x90\xc2\xbb" + "\xb2\xa5\x2c\x58\xc2\x9b\xe3\x77" + "\x95\x82\x50\xcb\x23\xdc\x18\xd8" + "\x4e\xbb\x13\x5d\x35\x3d\x9a\xda" + "\xe4\x75\xa1\x75\x17\x59\x8c\x6a" + "\xb2\x76\x7e\xd4\x45\x31\x0a\x45" + "\x2e\x60\x83\x3d\xdc\x8d\x43\x20" + "\x58\x24\xb2\x9d\xd5\x59\x64\x32" + "\x4e\x6f\xb9\x9c\xde\x77\x4d\x65" + "\xdf\xc0\x7a\xeb\x40\x80\xe8\xe5" + "\xc7\xc1\x77\x3b\xae\x2b\x85\xce" + "\x56\xfa\x43\x41\x96\x23\x8e\xab" + "\xd3\xc8\x65\xef\x0b\xfe\x42\x4c" + "\x3a\x8a\x54\x55\xab\xa3\xf9\x62" + "\x9f\x8e\xbe\x33\x9a\xfe\x6b\x52" + "\xd4\x4c\x93\x84\x7c\x7e\xb1\x5e" + "\x32\xaf\x6e\x21\x44\xd2\x6b\x56" + "\xcd\x2c\x9d\x03\x3b\x50\x1f\x0a" + "\xc3\x98\xff\x3a\x1d\x36\x7e\x6d" + "\xcf\xbc\xe7\xe8\xfc\x24\x55\xfd" + "\x72\x3d\xa7\x3f\x09\xa7\x38\xe6" + "\x57\x8d\xc4\x74\x7f\xd3\x26\x75" + "\xda\xfa\x29\x35\xc1\x31\x82", + .ctext = "\x02\x23\x74\x02\x56\xf4\x7b\xc8" + "\x55\x61\xa0\x6b\x68\xff\xde\x87" + "\x9d\x66\x77\x86\x98\x63\xab\xd5" + "\xd6\xf4\x7e\x3b\xf4\xae\x97\x13" + "\x79\xc0\x96\x75\x87\x33\x2a\x0e" + "\xc2\x1a\x13\x90\x5f\x6e\x93\xed" + "\x54\xfe\xee\x05\x48\xae\x20\x2d" + "\xa9\x2b\x98\xa3\xc8\xaf\x17\x6b" + 
"\x82\x4a\x9a\x7f\xf0\xce\xd9\x26" + "\x16\x28\xeb\xf4\x4b\xab\x7d\x6e" + "\x96\x27\xd2\x90\xbb\x8d\x98\xdc" + "\xb8\x6f\x7a\x98\x67\xef\x1c\xfb" + "\xd0\x23\x1a\x2f\xc9\x58\x4e\xc6" + "\x38\x03\x53\x61\x8e\xff\x55\x46" + "\x47\xe8\x1f\x9d\x66\x95\x9b\x7f" + "\x26\xac\xf2\x61\xa4\x05\x15\xcb" + "\x62\xb6\x6b\x7c\x57\x95\x9d\x25" + "\x9e\x83\xb1\x88\x50\x39\xb5\x34" + "\x8a\x04\x2b\x76\x1b\xb8\x8c\x57" + "\x26\x21\x99\x2e\x93\xc8\x9b\xb2" + "\x31\xe1\xe3\x27\xde\xc8\xf2\xc5" + "\x01\x7a\x45\x38\x6f\xe7\xa0\x9d" + "\x8c\x41\x99\xec\x3d\xb6\xaf\x66" + "\x76\xac\xc8\x78\xb0\xdf\xcf\xce" + "\xa1\x29\x46\x6f\xe3\x35\x4a\x67" + "\x59\x27\x14\xcc\x04\xdb\xb3\x03" + "\xb7\x2d\x8d\xf9\x75\x9e\x59\x42" + "\xe3\xa4\xf8\xf4\x82\x27\xa3\xa9" + "\x79\xac\x6b\x8a\xd8\xdb\x29\x73" + "\x02\xbb\x6f\x85\x00\x92\xea\x59" + "\x30\x1b\x19\xf3\xab\x6e\x99\x9a" + "\xf2\x23\x27\xc6\x59\x5a\x9c", + .klen = 32, + .len = 255, + }, + { + .key = "\xd3\x81\x72\x18\x23\xff\x6f\x4a" + "\x25\x74\x29\x0d\x51\x8a\x0e\x13" + "\xc1\x53\x5d\x30\x8d\xee\x75\x0d" + "\x14\xd6\x69\xc9\x15\xa9\x0c\x60", + .iv = "\x65\x9b\xd4\xa8\x7d\x29\x1d\xf4" + "\xc4\xd6\x9b\x6a\x28\xab\x64\xe2" + "\x62\x81\x97\xc5\x81\xaa\xf9\x44" + "\xc1\x72\x59\x82\xaf\x16\xc8\x2c", + .ptext = "\xc7\x6b\x52\x6a\x10\xf0\xcc\x09" + "\xc1\x12\x1d\x6d\x21\xa6\x78\xf5" + "\x05\xa3\x69\x60\x91\x36\x98\x57" + "\xba\x0c\x14\xcc\xf3\x2d\x73\x03" + "\xc6\xb2\x5f\xc8\x16\x27\x37\x5d" + "\xd0\x0b\x87\xb2\x50\x94\x7b\x58" + "\x04\xf4\xe0\x7f\x6e\x57\x8e\xc9" + "\x41\x84\xc1\xb1\x7e\x4b\x91\x12" + "\x3a\x8b\x5d\x50\x82\x7b\xcb\xd9" + "\x9a\xd9\x4e\x18\x06\x23\x9e\xd4" + "\xa5\x20\x98\xef\xb5\xda\xe5\xc0" + "\x8a\x6a\x83\x77\x15\x84\x1e\xae" + "\x78\x94\x9d\xdf\xb7\xd1\xea\x67" + "\xaa\xb0\x14\x15\xfa\x67\x21\x84" + "\xd3\x41\x2a\xce\xba\x4b\x4a\xe8" + "\x95\x62\xa9\x55\xf0\x80\xad\xbd" + "\xab\xaf\xdd\x4f\xa5\x7c\x13\x36" + "\xed\x5e\x4f\x72\xad\x4b\xf1\xd0" + "\x88\x4e\xec\x2c\x88\x10\x5e\xea" + "\x12\xc0\x16\x01\x29\xa3\xa0\x55" + "\xaa\x68\xf3\xe9\x9d\x3b\x0d\x3b" + "\x6d\xec\xf8\xa0\x2d\xf0\x90\x8d" + "\x1c\xe2\x88\xd4\x24\x71\xf9\xb3" + "\xc1\x9f\xc5\xd6\x76\x70\xc5\x2e" + "\x9c\xac\xdb\x90\xbd\x83\x72\xba" + "\x6e\xb5\xa5\x53\x83\xa9\xa5\xbf" + "\x7d\x06\x0e\x3c\x2a\xd2\x04\xb5" + "\x1e\x19\x38\x09\x16\xd2\x82\x1f" + "\x75\x18\x56\xb8\x96\x0b\xa6\xf9" + "\xcf\x62\xd9\x32\x5d\xa9\xd7\x1d" + "\xec\xe4\xdf\x1b\xbe\xf1\x36\xee" + "\xe3\x7b\xb5\x2f\xee\xf8\x53\x3d" + "\x6a\xb7\x70\xa9\xfc\x9c\x57\x25" + "\xf2\x89\x10\xd3\xb8\xa8\x8c\x30" + "\xae\x23\x4f\x0e\x13\x66\x4f\xe1" + "\xb6\xc0\xe4\xf8\xef\x93\xbd\x6e" + "\x15\x85\x6b\xe3\x60\x81\x1d\x68" + "\xd7\x31\x87\x89\x09\xab\xd5\x96" + "\x1d\xf3\x6d\x67\x80\xca\x07\x31" + "\x5d\xa7\xe4\xfb\x3e\xf2\x9b\x33" + "\x52\x18\xc8\x30\xfe\x2d\xca\x1e" + "\x79\x92\x7a\x60\x5c\xb6\x58\x87" + "\xa4\x36\xa2\x67\x92\x8b\xa4\xb7" + "\xf1\x86\xdf\xdc\xc0\x7e\x8f\x63" + "\xd2\xa2\xdc\x78\xeb\x4f\xd8\x96" + "\x47\xca\xb8\x91\xf9\xf7\x94\x21" + "\x5f\x9a\x9f\x5b\xb8\x40\x41\x4b" + "\x66\x69\x6a\x72\xd0\xcb\x70\xb7" + "\x93\xb5\x37\x96\x05\x37\x4f\xe5" + "\x8c\xa7\x5a\x4e\x8b\xb7\x84\xea" + "\xc7\xfc\x19\x6e\x1f\x5a\xa1\xac" + "\x18\x7d\x52\x3b\xb3\x34\x62\x99" + "\xe4\x9e\x31\x04\x3f\xc0\x8d\x84" + "\x17\x7c\x25\x48\x52\x67\x11\x27" + "\x67\xbb\x5a\x85\xca\x56\xb2\x5c" + "\xe6\xec\xd5\x96\x3d\x15\xfc\xfb" + "\x22\x25\xf4\x13\xe5\x93\x4b\x9a" + "\x77\xf1\x52\x18\xfa\x16\x5e\x49" + "\x03\x45\xa8\x08\xfa\xb3\x41\x92" + "\x79\x50\x33\xca\xd0\xd7\x42\x55" + "\xc3\x9a\x0c\x4e\xd9\xa4\x3c\x86" + "\x80\x9f\x53\xd1\xa4\x2e\xd1\xbc" + 
"\xf1\x54\x6e\x93\xa4\x65\x99\x8e" + "\xdf\x29\xc0\x64\x63\x07\xbb\xea", + .ctext = "\x9f\x72\x87\xc7\x17\xfb\x20\x15" + "\x65\xb3\x55\xa8\x1c\x8e\x52\x32" + "\xb1\x82\x8d\xbf\xb5\x9f\x10\x0a" + "\xe8\x0c\x70\x62\xef\x89\xb6\x1f" + "\x73\xcc\xe4\xcc\x7a\x3a\x75\x4a" + "\x26\xe7\xf5\xd7\x7b\x17\x39\x2d" + "\xd2\x27\x6e\xf9\x2f\x9e\xe2\xf6" + "\xfa\x16\xc2\xf2\x49\x26\xa7\x5b" + "\xe7\xca\x25\x0e\x45\xa0\x34\xc2" + "\x9a\x37\x79\x7e\x7c\x58\x18\x94" + "\x10\xa8\x7c\x48\xa9\xd7\x63\x89" + "\x9e\x61\x4d\x26\x34\xd9\xf0\xb1" + "\x2d\x17\x2c\x6f\x7c\x35\x0e\xbe" + "\x77\x71\x7c\x17\x5b\xab\x70\xdb" + "\x2f\x54\x0f\xa9\xc8\xf4\xf5\xab" + "\x52\x04\x3a\xb8\x03\xa7\xfd\x57" + "\x45\x5e\xbc\x77\xe1\xee\x79\x8c" + "\x58\x7b\x1f\xf7\x75\xde\x68\x17" + "\x98\x85\x8a\x18\x5c\xd2\x39\x78" + "\x7a\x6f\x26\x6e\xe1\x13\x91\xdd" + "\xdf\x0e\x6e\x67\xcc\x51\x53\xd8" + "\x17\x5e\xce\xa7\xe4\xaf\xfa\xf3" + "\x4f\x9f\x01\x9b\x04\xe7\xfc\xf9" + "\x6a\xdc\x1d\x0c\x9a\xaa\x3a\x7a" + "\x73\x03\xdf\xbf\x3b\x82\xbe\xb0" + "\xb4\xa4\xcf\x07\xd7\xde\x71\x25" + "\xc5\x10\xee\x0a\x15\x96\x8b\x4f" + "\xfe\xb8\x28\xbd\x4a\xcd\xeb\x9f" + "\x5d\x00\xc1\xee\xe8\x16\x44\xec" + "\xe9\x7b\xd6\x85\x17\x29\xcf\x58" + "\x20\xab\xf7\xce\x6b\xe7\x71\x7d" + "\x4f\xa8\xb0\xe9\x7d\x70\xd6\x0b" + "\x2e\x20\xb1\x1a\x63\x37\xaa\x2c" + "\x94\xee\xd5\xf6\x58\x2a\xf4\x7a" + "\x4c\xba\xf5\xe9\x3c\x6f\x95\x13" + "\x5f\x96\x81\x5b\xb5\x62\xf2\xd7" + "\x8d\xbe\xa1\x31\x51\xe6\xfe\xc9" + "\x07\x7d\x0f\x00\x3a\x66\x8c\x4b" + "\x94\xaa\xe5\x56\xde\xcd\x74\xa7" + "\x48\x67\x6f\xed\xc9\x6a\xef\xaf" + "\x9a\xb7\xae\x60\xfa\xc0\x37\x39" + "\xa5\x25\xe5\x22\xea\x82\x55\x68" + "\x3e\x30\xc3\x5a\xb6\x29\x73\x7a" + "\xb6\xfb\x34\xee\x51\x7c\x54\xe5" + "\x01\x4d\x72\x25\x32\x4a\xa3\x68" + "\x80\x9a\x89\xc5\x11\x66\x4c\x8c" + "\x44\x50\xbe\xd7\xa0\xee\xa6\xbb" + "\x92\x0c\xe6\xd7\x83\x51\xb1\x69" + "\x63\x40\xf3\xf4\x92\x84\xc4\x38" + "\x29\xfb\xb4\x84\xa0\x19\x75\x16" + "\x60\xbf\x0a\x9c\x89\xee\xad\xb4" + "\x43\xf9\x71\x39\x45\x7c\x24\x83" + "\x30\xbb\xee\x28\xb0\x86\x7b\xec" + "\x93\xc1\xbf\xb9\x97\x1b\x96\xef" + "\xee\x58\x35\x61\x12\x19\xda\x25" + "\x77\xe5\x80\x1a\x31\x27\x9b\xe4" + "\xda\x8b\x7e\x51\x4d\xcb\x01\x19" + "\x4f\xdc\x92\x1a\x17\xd5\x6b\xf4" + "\x50\xe3\x06\xe4\x76\x9f\x65\x00" + "\xbd\x7a\xe2\x64\x26\xf2\xe4\x7e" + "\x40\xf2\x80\xab\x62\xd5\xef\x23" + "\x8b\xfb\x6f\x24\x6e\x9b\x66\x0e" + "\xf4\x1c\x24\x1e\x1d\x26\x95\x09" + "\x94\x3c\xb2\xb6\x02\xa7\xd9\x9a", + .klen = 32, + .len = 512, + }, + { + .key = "\x83\x8a\xa7\xd6\x31\x10\xb1\x67" + "\xbf\xed\xf6\x93\x1d\x2e\xc9\x4c" + "\x18\xab\x98\x2c\xed\x5a\x14\x30" + "\xc9\xe0\x4b\x67\xb5\x0d\x6c\xb4", + .iv = "\x79\x9a\xea\x92\x10\xd8\x0b\x6a" + "\xb4\xcf\x49\x29\xdb\x50\xce\x54" + "\xf2\x93\x09\x1d\xcc\xd6\x1a\xf7" + "\x80\x49\x74\x83\x76\x50\xaf\x2c", + .ptext = "\xce\x7a\x3c\xde\x95\x4b\x2f\x63" + "\x39\x5f\x50\x87\x39\xfb\x5e\x42" + "\x17\xcd\xff\x5e\x5c\x77\x67\x21" + "\x9c\xae\xad\xa6\xbf\x89\xc2\x7e" + "\x99\xfe\xec\x25\x3d\x94\x7f\xcf" + "\x43\x52\xad\x87\x9d\x12\x54\x08" + "\xc7\xb8\xe2\x5c\x4e\x4f\xc0\x6e" + "\x1c\xff\xc1\x30\x66\xd4\x2e\x60" + "\xe6\xc6\xfa\xf5\xc1\xc8\xb1\xd0" + "\x89\x83\x13\x00\x35\x52\x3f\x08" + "\xb7\x62\x77\xbd\x9b\x66\x35\xd3" + "\x57\x24\x94\xe6\x2c\x2e\x9e\xda" + "\x44\xf9\x6b\xae\x0b\xd7\x9f\x55" + "\x86\x4e\x1b\x4b\xe2\x32\x20\x9c" + "\x03\x15\xd1\x6e\x22\x56\xc7\x5c" + "\xe4\x51\xbc\xd8\x21\xd0\xc4\x19" + "\x18\xce\x62\x73\xad\x0c\x31\xa6" + "\x66\xed\x1a\x7d\x54\xcb\xa4\x7c" + "\xeb\xed\xdf\x80\x02\x8d\x26\x4b" + "\xd4\x97\x13\x9d\xeb\xe7\x0b\x09" + 
"\x99\x4d\xe6\xba\xb5\x38\x37\xff" + "\x7d\xc5\xf2\xb9\x8a\xa8\x00\x4d" + "\xff\x43\xb4\x22\xc0\x0b\x72\xea" + "\x5b\x3e\xc3\xdb\xc8\xa7\xb0\x50" + "\x48\x90\x6d\x8a\xf7\x30\x62\xd8" + "\x3a\xcf\xf9\xcd\x6a\x67\xab\x55" + "\x64\x70\x64\xda\x23\xed\x58\x26" + "\xf6\x90\x2a\x6e\x5a\x98\xd4\x8e" + "\x54\x6a\x9d\x1d\x29\xef\x84\xfa" + "\x3c\xba\x2b\x5e\x34\x45\x7d\xfc" + "\x45\x4f\x13\xb7\xdd\xd7\x2b\xb7" + "\x1a\xb4\x86\x5e\xcf\x35\x54\xc3" + "\xb6\x0d\xe7\xcd\x46\x44\xa4\xc4" + "\x48\x2f\xd0\xfe\x72\xe1\xf0\x92" + "\x1f\x53\xe4\x95\x45\x03\xb9\x9e" + "\xc8\xe0\xcc\x04\x9c\xdd\x19\x19" + "\xa3\xcf\x87\xec\xf1\x84\x0e\x65" + "\xbc\xc9\xe7\x12\x26\x45\xe6\x2e" + "\x9e\xe4\x79\x6c\xa0\x04\xdb\xca" + "\x72\x97\x29\xfc\x20\x43\xd0\x37" + "\x64\xf3\x33\x90\x14\xcf\x00\xa2" + "\xf9\x1b\xa4\x9b\x30\x4b\xd0\x7a" + "\x0d\x52\x2b\x1a\xd1\xea\xe8\x84" + "\x8b\x44\x61\xb1\xfd\x4d\xdb\xf7" + "\x0b\xd5\x55\x32\x83\xb2\x71\x42" + "\x8a\x7f\x80\xc6\xff\x94\x16\xdf" + "\xb5\xfe\x59\xe7\xb5\xa4\x58\x9c" + "\x88\xd2\xb4\x63\x8b\xcb\x9b\x9f" + "\xc6\x5c\x94\x1b\x41\x8b\xa2\x66" + "\xda\x0d\xbc\x9d\x3a\x59\xd8\x66" + "\xd0\x67\xfa\x50\x6f\xe6\xd0\x7a" + "\xd1\x06\x23\x42\x0e\x14\x20\x65" + "\x20\x73\xaa\x34\xac\xa7\x6d\xe5" + "\x23\x28\xa0\xcf\x57\x3e\x19\x00" + "\x3a\x85\x2f\x9d\x79\x15\x29\x4c" + "\x9f\xf7\x3d\xa3\x24\x3c\xa0\x68" + "\xc6\x4c\x44\x5a\x87\xe7\xbc\x0f" + "\xbb\x19\xea\x3e\x37\xc4\x3b\xcc" + "\x1e\xdd\xfa\xfa\x71\x0e\x37\xd5" + "\x3a\xc5\x1e\x90\x5e\xf0\x13\x1f" + "\x7a\x35\xb2\x63\x29\xb6\x27\xf2" + "\x0a\x57\x5c\x43\xe2\xc7\x02\x4a" + "\xc6\x56\xf0\xc1\xa7\xd8\xc6\x3c" + "\x81\xd4\x5e\x16\x5e\x2a\x77\x77", + .ctext = "\xd8\xb0\xf0\x69\xef\x35\x99\x52" + "\xf1\x05\xd6\x07\x09\x8f\x2a\xd2" + "\x69\xea\x3e\x3a\xc1\xa6\xbe\xdb" + "\x9a\x13\xa2\x19\x59\x6d\xc9\x52" + "\xf4\xf7\x3e\xed\xb2\xe2\xac\x2a" + "\x75\xfa\x63\x29\x7a\x28\x97\x2b" + "\xdb\xd2\xa4\xef\x5a\x92\x0a\xf0" + "\xb5\x83\x60\x4c\x14\x20\x68\x19" + "\x89\xee\xc6\x5b\xe9\x62\x58\x63" + "\x41\x3e\xca\xa4\x5b\xdb\x49\x9b" + "\xc0\x32\x10\x24\x19\xc2\xb1\x36" + "\x7d\x04\xec\xc2\x1a\xfd\x74\xe5" + "\x20\xe5\x2c\x0b\x9d\x70\x8b\x1a" + "\xb5\xaf\x57\xad\x88\x8c\xe8\x51" + "\x87\x0e\xca\x11\xfe\x93\x17\x6b" + "\xa3\x03\x72\x66\x5e\x73\x2f\x15" + "\xfd\xd3\xbb\x16\x44\x56\x73\x55" + "\x0e\xfb\xfa\x71\x4e\x21\x40\xe4" + "\xac\x77\x0a\x8a\x2a\x62\xd6\xcc" + "\x30\x11\x75\xbb\x9c\x7f\x70\x31" + "\x6d\xbc\x99\x33\xe6\x01\xfb\xb4" + "\xd6\x5b\x93\xaf\x7e\xb5\x60\x11" + "\x6c\x91\xa4\xd4\xa0\xeb\x9b\xa2" + "\x33\x66\x68\x6b\xb8\xe4\x4f\xb6" + "\x24\x89\x3e\xb7\xdc\xef\x6c\x6f" + "\x80\x8e\x1d\xa0\xbe\xbe\x51\x49" + "\xd7\x63\x62\x71\x37\x9e\x2d\x7f" + "\xa6\x8f\xb8\xee\x9c\x64\x73\xd4" + "\xd3\xe2\x3e\x42\x84\x31\xbb\x83" + "\x19\x15\xdd\xdd\x56\x04\x22\x48" + "\xd1\xb2\x0f\x65\x2f\x92\x56\x52" + "\xb6\x96\x25\x93\x2b\xf1\x86\x9f" + "\x30\x75\x23\xab\x48\x8e\x6c\x71" + "\x1a\x46\x65\xe1\x3b\x8d\x09\xfb" + "\xba\x9a\xb6\x08\xdc\xd6\x3e\x54" + "\x74\xe7\xd2\xe7\x6b\xeb\x01\x46" + "\x6b\xc9\x69\xc6\xaa\x3e\xd5\xe2" + "\xd9\x45\x36\x2e\xf0\x46\x2f\x6f" + "\xcb\x2b\xb7\xf0\xaf\x0a\xe7\x22" + "\xc7\x15\x8d\x7f\x52\x63\x50\x88" + "\x49\xbe\x52\xe7\x20\xf7\x65\x72" + "\x32\xce\x50\x44\x1c\x8b\xe0\x34" + "\x8c\x88\x0d\x2c\x7f\xde\xf9\xfd" + "\xaf\x11\x03\x31\xf8\x86\xba\xe9" + "\xc1\x61\x63\xeb\x64\x9f\x6c\x37" + "\x8e\x20\x93\xdb\x6d\x05\xf3\x3c" + "\x89\x3d\xbe\x8d\x3f\xee\x6d\xa0" + "\xb4\x4b\xc2\xe5\x66\x6a\xa8\xcc" + "\x8c\x06\xa6\xd2\xcd\x16\x73\xae" + "\x28\xf1\x03\x2d\x38\x1a\x1c\x44" + "\xb3\xf7\x35\x5f\xc8\xd1\xdb\xff" + 
"\x3e\xc6\x65\x19\x06\xfc\x94\x80" + "\x4f\xf5\xea\x33\x39\x02\x6a\x4d" + "\x0e\xa6\x57\xe9\xf6\x38\x32\x0a" + "\x9f\x97\x42\xb4\x23\x0e\xc8\x9d" + "\x32\xdc\xb5\x5a\x94\x8d\x64\xd6" + "\x4d\xcc\x42\xd0\xd8\xf6\x0b\x02" + "\xb3\x51\x37\x46\x61\x59\x58\x00" + "\x8c\x89\xa9\xbb\x00\x09\x4f\x1f" + "\x57\xa1\xf5\x25\x14\x69\x4d\x22" + "\x76\xa7\x5f\x7e\x12\xb5\x51\x5c" + "\x1a\xc6\x54\x95\x7c\x7c\xa3\x6e" + "\xba\xac\xa3\x2f\x13\xcb\x96\xd7" + "\x4f\x45\xd4\xb4\x97\x73\xe4\x43", + .klen = 32, + .len = 512, + }, +}; + #endif /* _CRYPTO_TESTMGR_H */ From patchwork Thu Feb 10 23:28:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nathan Huckleberry X-Patchwork-Id: 12742590 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id ED5E3C433F5 for ; Thu, 10 Feb 2022 23:28:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345624AbiBJX2t (ORCPT ); Thu, 10 Feb 2022 18:28:49 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:41304 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345662AbiBJX2n (ORCPT ); Thu, 10 Feb 2022 18:28:43 -0500 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2FB5155AD for ; Thu, 10 Feb 2022 15:28:43 -0800 (PST) Received: by mail-yb1-xb49.google.com with SMTP id b64-20020a256743000000b0061e169a5f19so14848559ybc.11 for ; Thu, 10 Feb 2022 15:28:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=soq/cMDceqpHK6l+rLfyragElya1D0Uy3uvOnbLDSs8=; b=N9Dp6vKH05G7p3rbZuDVzS+TfV9mqFDQlDNtQ5abx9+01Zc0we8vE8pGmYv+vwBU0R fXE/AdNvZNd53sJjD3VNh1QwfBAdMJsqppzdceVM8ftPG5YmhvQ3BQeb+BN8da7ZpCUJ 0LFXFrnh1CLRkk8dReU2CefhfnSNHvEexTYlyMm4hi90QuClvkpVYBdjpWsLuHjUJvBf AM5gCnAKheqsAFrM2yTQ+qt7hyGkJhGFO752Dw7DKLE/oFN7D1bTx24Xsstge2rSCCVK 2bWqf0qx5kXKkTnGRvQxnnBwQA0RRTN8XaqLJbvmrsulMfvczwje3EcSPWzouXXMnohm en4A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=soq/cMDceqpHK6l+rLfyragElya1D0Uy3uvOnbLDSs8=; b=20JAZmyNcjhVwpZwy6g8sO9Cf8tkQk9ZTEnEs+WK/3PZMQpn3z6Dxf38amFCZaBv1U AsXofQ3kigKyCrpbfBUBuF5cBhrnUbvrYXzQsmbPKK0dX/mawJvLOdNfy7FAACkvSggf epKLLbwOcLCpDhTjkvb0BTd9EfP4EKwh/xTCWQMVaA2n8WE+OWIG9lTRZ6nUMjnjPwly 5sGcG1EMSJAXiVclG8pW3XB9hDN9DnV3IU/ypqGRzzVqjlrgsEA0I8QwYmV+eVKvxlhX QjHcdUktzGi5QoFY1vJttExcxa634H0UPXyLxOddl5YLBhPB6rRXFy72/du43kqFbOfc dfRw== X-Gm-Message-State: AOAM530pu4uVtFOPb+KJoGtf0GQZIaD0i/FAIO6FzcNF9oJyNNOxyiyN FhhYFe3/vnlqG5cZ9Ua0EXp4XiCPuJy7B+EfjCejkIynhszYSqderjTwb3yy18115R6njE0hhqS 3Ah0EZxjQDFHcq50dQDFZR0pbe21aVoNMQVkTNve08o1VRmhOVwWy7U55puwZfsjwViU= X-Google-Smtp-Source: ABdhPJwNhJS8k8RzDgL7Swt/Z/IDZVUsu+Wmug64KapcqaducWVGvQvQJB0B0bJDfsJPg8fzZXDDlPOFgA== X-Received: from nhuck.c.googlers.com ([fda3:e722:ac3:cc00:14:4d90:c0a8:39cc]) (user=nhuck job=sendgmr) by 2002:a25:7415:: with SMTP id p21mr9191014ybc.162.1644535722334; Thu, 10 Feb 2022 15:28:42 -0800 (PST) Date: Thu, 10 Feb 2022 23:28:09 +0000 In-Reply-To: 
Message-Id: <20220210232812.798387-5-nhuck@google.com>
Mime-Version: 1.0
References: <20220210232812.798387-1-nhuck@google.com>
X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog
Subject: [RFC PATCH v2 4/7] crypto: x86/aesni-xctr: Add accelerated
 implementation of XCTR
From: Nathan Huckleberry
To: linux-crypto@vger.kernel.org
Cc: Herbert Xu , "David S. Miller" , linux-arm-kernel@lists.infradead.org,
 Paul Crowley , Eric Biggers , Sami Tolvanen , Ard Biesheuvel ,
 Nathan Huckleberry
Precedence: bulk
List-ID: X-Mailing-List: linux-crypto@vger.kernel.org

Add hardware accelerated versions of XCTR for x86-64 CPUs with AESNI
support. These implementations are modified versions of the CTR
implementations found in aesni-intel_asm.S and aes_ctrby8_avx-x86_64.S.

More information on XCTR can be found in the HCTR2 paper:
Length-preserving encryption with HCTR2:
https://eprint.iacr.org/2021/1441.pdf

Signed-off-by: Nathan Huckleberry
---
Changes since v1:
 * Changed ctr32 from u32 to __le32
 * Removed references to u32_to_le_block

 arch/x86/crypto/Makefile                 |   2 +-
 arch/x86/crypto/aes_xctrby8_avx-x86_64.S | 529 +++++++++++++++++++++++
 arch/x86/crypto/aesni-intel_asm.S        |  70 +++
 arch/x86/crypto/aesni-intel_glue.c       |  89 ++++
 4 files changed, 689 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/crypto/aes_xctrby8_avx-x86_64.S

diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
index 2831685adf6f..ee2df489b0d9 100644
--- a/arch/x86/crypto/Makefile
+++ b/arch/x86/crypto/Makefile
@@ -48,7 +48,7 @@ chacha-x86_64-$(CONFIG_AS_AVX512) += chacha-avx512vl-x86_64.o
 
 obj-$(CONFIG_CRYPTO_AES_NI_INTEL) += aesni-intel.o
 aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o
-aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o aes_ctrby8_avx-x86_64.o
+aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o aes_ctrby8_avx-x86_64.o aes_xctrby8_avx-x86_64.o
 
 obj-$(CONFIG_CRYPTO_SHA1_SSSE3) += sha1-ssse3.o
 sha1-ssse3-y := sha1_avx2_x86_64_asm.o sha1_ssse3_asm.o sha1_ssse3_glue.o
diff --git a/arch/x86/crypto/aes_xctrby8_avx-x86_64.S b/arch/x86/crypto/aes_xctrby8_avx-x86_64.S
new file mode 100644
index 000000000000..53d70cab9474
--- /dev/null
+++ b/arch/x86/crypto/aes_xctrby8_avx-x86_64.S
@@ -0,0 +1,529 @@
+/* SPDX-License-Identifier: GPL-2.0-only OR BSD-3-Clause */
+/*
+ * AES XCTR mode by8 optimization with AVX instructions. (x86_64)
+ *
+ * Copyright(c) 2014 Intel Corporation.
+ *
+ * Contact Information:
+ * James Guilford
+ * Sean Gulley
+ * Chandramouli Narayanan
+ */
+/*
+ * Implement AES XCTR mode with AVX instructions. This code is a modified
+ * version of the Linux kernel's AES CTR by8 implementation.
+ *
+ * This is the AES128/192/256 XCTR mode optimized implementation. It
+ * requires the support of Intel(R) AESNI and AVX instructions.
+ *
+ * This work was inspired by the AES XCTR mode optimization published
+ * in the Intel Optimized IPSEC Cryptographic library.
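+ *
+ * Unlike the CTR by8 code that this file is based on, the counter blocks
+ * are computed as (IV ^ little-endian block index) rather than by
+ * big-endian increment of the IV; see the "iv ^ block_index" steps in
+ * the do_aes macro below.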
+ *
+ * Additional information on the Intel IPSEC library can be found at:
+ * https://github.com/intel/intel-ipsec-mb
+ */
+
+#include <linux/linkage.h>
+
+#define VMOVDQ vmovdqu
+
+#define xdata0 %xmm0
+#define xdata1 %xmm1
+#define xdata2 %xmm2
+#define xdata3 %xmm3
+#define xdata4 %xmm4
+#define xdata5 %xmm5
+#define xdata6 %xmm6
+#define xdata7 %xmm7
+#define xiv %xmm8
+#define xbyteswap %xmm9
+#define xkey0 %xmm10
+#define xkey4 %xmm11
+#define xkey8 %xmm12
+#define xkey12 %xmm13
+#define xkeyA %xmm14
+#define xkeyB %xmm15
+
+#define p_in %rdi
+#define p_iv %rsi
+#define p_keys %rdx
+#define p_out %rcx
+#define num_bytes %r8
+#define counter %r9
+
+#define tmp %r10
+#define DDQ_DATA 0
+#define XDATA 1
+#define KEY_128 1
+#define KEY_192 2
+#define KEY_256 3
+
+.section .rodata
+.align 16
+
+byteswap_const:
+	.octa 0x000102030405060708090A0B0C0D0E0F
+ddq_low_msk:
+	.octa 0x0000000000000000FFFFFFFFFFFFFFFF
+ddq_high_add_1:
+	.octa 0x00000000000000010000000000000000
+ddq_add_1:
+	.octa 0x00000000000000000000000000000001
+ddq_add_2:
+	.octa 0x00000000000000000000000000000002
+ddq_add_3:
+	.octa 0x00000000000000000000000000000003
+ddq_add_4:
+	.octa 0x00000000000000000000000000000004
+ddq_add_5:
+	.octa 0x00000000000000000000000000000005
+ddq_add_6:
+	.octa 0x00000000000000000000000000000006
+ddq_add_7:
+	.octa 0x00000000000000000000000000000007
+ddq_add_8:
+	.octa 0x00000000000000000000000000000008
+
+.text
+
+/* generate a unique variable for ddq_add_x */
+
+/* generate a unique variable for xmm register */
+.macro setxdata n
+	var_xdata = %xmm\n
+.endm
+
+/* club the numeric 'id' to the symbol 'name' */
+
+.macro club name, id
+.altmacro
+	.if \name == XDATA
+		setxdata %\id
+	.endif
+.noaltmacro
+.endm
+
+/*
+ * do_aes num_in_par load_keys key_len
+ * This increments p_in, but not p_out
+ */
+.macro do_aes b, k, key_len
+	.set by, \b
+	.set load_keys, \k
+	.set klen, \key_len
+
+	.set i, 0
+	.rept (by)
+		club XDATA, i
+		movq counter, var_xdata
+		.set i, (i +1)
+	.endr
+
+	.if (load_keys)
+		vmovdqa 0*16(p_keys), xkey0
+	.endif
+
+	// next two blocks compute iv ^ block_index
+	.set i, 0
+	.rept (by)
+		club XDATA, i
+		vpaddq (ddq_add_1 + 16 * i)(%rip), var_xdata, var_xdata
+		.set i, (i +1)
+	.endr
+	.set i, 0
+	.rept (by)
+		club XDATA, i
+		vpxor xiv, var_xdata, var_xdata
+		.set i, (i +1)
+	.endr
+
+	vmovdqa 1*16(p_keys), xkeyA
+
+	vpxor xkey0, xdata0, xdata0
+	add $by, counter
+
+	.set i, 1
+	.rept (by - 1)
+		club XDATA, i
+		vpxor xkey0, var_xdata, var_xdata
+		.set i, (i +1)
+	.endr
+
+	vmovdqa 2*16(p_keys), xkeyB
+
+	.set i, 0
+	.rept by
+		club XDATA, i
+		vaesenc xkeyA, var_xdata, var_xdata	/* key 1 */
+		.set i, (i +1)
+	.endr
+
+	.if (klen == KEY_128)
+		.if (load_keys)
+			vmovdqa 3*16(p_keys), xkey4
+		.endif
+	.else
+		vmovdqa 3*16(p_keys), xkeyA
+	.endif
+
+	.set i, 0
+	.rept by
+		club XDATA, i
+		vaesenc xkeyB, var_xdata, var_xdata	/* key 2 */
+		.set i, (i +1)
+	.endr
+
+	add $(16*by), p_in
+
+	.if (klen == KEY_128)
+		vmovdqa 4*16(p_keys), xkeyB
+	.else
+		.if (load_keys)
+			vmovdqa 4*16(p_keys), xkey4
+		.endif
+	.endif
+
+	.set i, 0
+	.rept by
+		club XDATA, i
+		/* key 3 */
+		.if (klen == KEY_128)
+			vaesenc xkey4, var_xdata, var_xdata
+		.else
+			vaesenc xkeyA, var_xdata, var_xdata
+		.endif
+		.set i, (i +1)
+	.endr
+
+	vmovdqa 5*16(p_keys), xkeyA
+
+	.set i, 0
+	.rept by
+		club XDATA, i
+		/* key 4 */
+		.if (klen == KEY_128)
+			vaesenc xkeyB, var_xdata, var_xdata
+		.else
+			vaesenc xkey4, var_xdata, var_xdata
+		.endif
+		.set i, (i +1)
+	.endr
+
+	.if (klen == KEY_128)
+		.if (load_keys)
+			vmovdqa 6*16(p_keys), xkey8
+		.endif
+	.else
+		vmovdqa 6*16(p_keys), xkeyB
+	.endif
+
+	.set i, 0
+	.rept by
+		club XDATA, i
+		vaesenc xkeyA, var_xdata, var_xdata	/* key 5 */
+		.set i, (i +1)
+	.endr
+
+	vmovdqa 7*16(p_keys), xkeyA
+
+	.set i, 0
+	.rept by
+		club XDATA, i
+		/* key 6 */
+		.if (klen == KEY_128)
+			vaesenc xkey8, var_xdata, var_xdata
+		.else
+			vaesenc xkeyB, var_xdata, var_xdata
+		.endif
+		.set i, (i +1)
+	.endr
+
+	.if (klen == KEY_128)
+		vmovdqa 8*16(p_keys), xkeyB
+	.else
+		.if (load_keys)
+			vmovdqa 8*16(p_keys), xkey8
+		.endif
+	.endif
+
+	.set i, 0
+	.rept by
+		club XDATA, i
+		vaesenc xkeyA, var_xdata, var_xdata	/* key 7 */
+		.set i, (i +1)
+	.endr
+
+	.if (klen == KEY_128)
+		.if (load_keys)
+			vmovdqa 9*16(p_keys), xkey12
+		.endif
+	.else
+		vmovdqa 9*16(p_keys), xkeyA
+	.endif
+
+	.set i, 0
+	.rept by
+		club XDATA, i
+		/* key 8 */
+		.if (klen == KEY_128)
+			vaesenc xkeyB, var_xdata, var_xdata
+		.else
+			vaesenc xkey8, var_xdata, var_xdata
+		.endif
+		.set i, (i +1)
+	.endr
+
+	vmovdqa 10*16(p_keys), xkeyB
+
+	.set i, 0
+	.rept by
+		club XDATA, i
+		/* key 9 */
+		.if (klen == KEY_128)
+			vaesenc xkey12, var_xdata, var_xdata
+		.else
+			vaesenc xkeyA, var_xdata, var_xdata
+		.endif
+		.set i, (i +1)
+	.endr
+
+	.if (klen != KEY_128)
+		vmovdqa 11*16(p_keys), xkeyA
+	.endif
+
+	.set i, 0
+	.rept by
+		club XDATA, i
+		/* key 10 */
+		.if (klen == KEY_128)
+			vaesenclast xkeyB, var_xdata, var_xdata
+		.else
+			vaesenc xkeyB, var_xdata, var_xdata
+		.endif
+		.set i, (i +1)
+	.endr
+
+	.if (klen != KEY_128)
+		.if (load_keys)
+			vmovdqa 12*16(p_keys), xkey12
+		.endif
+
+		.set i, 0
+		.rept by
+			club XDATA, i
+			vaesenc xkeyA, var_xdata, var_xdata	/* key 11 */
+			.set i, (i +1)
+		.endr
+
+		.if (klen == KEY_256)
+			vmovdqa 13*16(p_keys), xkeyA
+		.endif
+
+		.set i, 0
+		.rept by
+			club XDATA, i
+			.if (klen == KEY_256)
+				/* key 12 */
+				vaesenc xkey12, var_xdata, var_xdata
+			.else
+				vaesenclast xkey12, var_xdata, var_xdata
+			.endif
+			.set i, (i +1)
+		.endr
+
+		.if (klen == KEY_256)
+			vmovdqa 14*16(p_keys), xkeyB
+
+			.set i, 0
+			.rept by
+				club XDATA, i
+				/* key 13 */
+				vaesenc xkeyA, var_xdata, var_xdata
+				.set i, (i +1)
+			.endr
+
+			.set i, 0
+			.rept by
+				club XDATA, i
+				/* key 14 */
+				vaesenclast xkeyB, var_xdata, var_xdata
+				.set i, (i +1)
+			.endr
+		.endif
+	.endif
+
+	.set i, 0
+	.rept (by / 2)
+		.set j, (i+1)
+		VMOVDQ (i*16 - 16*by)(p_in), xkeyA
+		VMOVDQ (j*16 - 16*by)(p_in), xkeyB
+		club XDATA, i
+		vpxor xkeyA, var_xdata, var_xdata
+		club XDATA, j
+		vpxor xkeyB, var_xdata, var_xdata
+		.set i, (i+2)
+	.endr
+
+	.if (i < by)
+		VMOVDQ (i*16 - 16*by)(p_in), xkeyA
+		club XDATA, i
+		vpxor xkeyA, var_xdata, var_xdata
+	.endif
+
+	.set i, 0
+	.rept by
+		club XDATA, i
+		VMOVDQ var_xdata, i*16(p_out)
+		.set i, (i+1)
+	.endr
+.endm
+
+.macro do_aes_load val, key_len
+	do_aes \val, 1, \key_len
+.endm
+
+.macro do_aes_noload val, key_len
+	do_aes \val, 0, \key_len
+.endm
+
+/* main body of aes xctr load */
+
+.macro do_aes_xctrmain key_len
+	andq $(~0xf), num_bytes
+	cmp $16, num_bytes
+	jb .Ldo_return2\key_len
+
+	vmovdqa byteswap_const(%rip), xbyteswap
+	shr $4, counter
+	vmovdqu (p_iv), xiv
+
+	mov num_bytes, tmp
+	and $(7*16), tmp
+	jz .Lmult_of_8_blks\key_len
+
+	/* 1 <= tmp <= 7 */
+	cmp $(4*16), tmp
+	jg .Lgt4\key_len
+	je .Leq4\key_len
+
+.Llt4\key_len:
+	cmp $(2*16), tmp
+	jg .Leq3\key_len
+	je .Leq2\key_len
+
+.Leq1\key_len:
+	do_aes_load 1, \key_len
+	add $(1*16), p_out
+	and $(~7*16), num_bytes
+	jz .Ldo_return2\key_len
+	jmp .Lmain_loop2\key_len
+
+.Leq2\key_len:
+	do_aes_load 2, \key_len
+	add $(2*16), p_out
+	and $(~7*16), num_bytes
+	jz .Ldo_return2\key_len
+	jmp .Lmain_loop2\key_len
+
+
+.Leq3\key_len:
+	do_aes_load 3, \key_len
+	add $(3*16), p_out
+	and $(~7*16), num_bytes
+	jz .Ldo_return2\key_len
+	jmp .Lmain_loop2\key_len
+
+.Leq4\key_len:
+	do_aes_load 4, \key_len
+	add $(4*16), p_out
+	and $(~7*16), num_bytes
+	jz .Ldo_return2\key_len
+	jmp .Lmain_loop2\key_len
+
+.Lgt4\key_len:
+	cmp $(6*16), tmp
+	jg .Leq7\key_len
+	je .Leq6\key_len
+
+.Leq5\key_len:
+	do_aes_load 5, \key_len
+	add $(5*16), p_out
+	and $(~7*16), num_bytes
+	jz .Ldo_return2\key_len
+	jmp .Lmain_loop2\key_len
+
+.Leq6\key_len:
+	do_aes_load 6, \key_len
+	add $(6*16), p_out
+	and $(~7*16), num_bytes
+	jz .Ldo_return2\key_len
+	jmp .Lmain_loop2\key_len
+
+.Leq7\key_len:
+	do_aes_load 7, \key_len
+	add $(7*16), p_out
+	and $(~7*16), num_bytes
+	jz .Ldo_return2\key_len
+	jmp .Lmain_loop2\key_len
+
+.Lmult_of_8_blks\key_len:
+	.if (\key_len != KEY_128)
+		vmovdqa 0*16(p_keys), xkey0
+		vmovdqa 4*16(p_keys), xkey4
+		vmovdqa 8*16(p_keys), xkey8
+		vmovdqa 12*16(p_keys), xkey12
+	.else
+		vmovdqa 0*16(p_keys), xkey0
+		vmovdqa 3*16(p_keys), xkey4
+		vmovdqa 6*16(p_keys), xkey8
+		vmovdqa 9*16(p_keys), xkey12
+	.endif
+.align 16
+.Lmain_loop2\key_len:
+	/* num_bytes is a multiple of 8 blocks and > 0 */
+	do_aes_noload 8, \key_len
+	add $(8*16), p_out
+	sub $(8*16), num_bytes
+	jne .Lmain_loop2\key_len
+
+.Ldo_return2\key_len:
+	RET
+.endm
+
+/*
+ * routine to do AES128 XCTR enc/decrypt "by8"
+ * XMM registers are clobbered.
+ * Saving/restoring must be done at a higher level
+ * aes_xctr_enc_128_avx_by8(const u8 *in, const u8 *iv, const aes_ctx *keys, u8
+ *	*out, unsigned int num_bytes, unsigned int byte_ctr)
+ */
+SYM_FUNC_START(aes_xctr_enc_128_avx_by8)
+	/* call the aes main loop */
+	do_aes_xctrmain KEY_128
+
+SYM_FUNC_END(aes_xctr_enc_128_avx_by8)
+
+/*
+ * routine to do AES192 XCTR enc/decrypt "by8"
+ * XMM registers are clobbered.
+ * Saving/restoring must be done at a higher level
+ * aes_xctr_enc_192_avx_by8(const u8 *in, const u8 *iv, const aes_ctx *keys, u8
+ *	*out, unsigned int num_bytes, unsigned int byte_ctr)
+ */
+SYM_FUNC_START(aes_xctr_enc_192_avx_by8)
+	/* call the aes main loop */
+	do_aes_xctrmain KEY_192
+
+SYM_FUNC_END(aes_xctr_enc_192_avx_by8)
+
+/*
+ * routine to do AES256 XCTR enc/decrypt "by8"
+ * XMM registers are clobbered.
+ * Saving/restoring must be done at a higher level + * aes_xctr_enc_256_avx_by8(const u8 *in, const u8 *iv, const aes_ctx *keys, u8 + * *out, unsigned int num_bytes, unsigned int byte_ctr) + */ +SYM_FUNC_START(aes_xctr_enc_256_avx_by8) + /* call the aes main loop */ + do_aes_xctrmain KEY_256 + +SYM_FUNC_END(aes_xctr_enc_256_avx_by8) diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S index 363699dd7220..ce17fe630150 100644 --- a/arch/x86/crypto/aesni-intel_asm.S +++ b/arch/x86/crypto/aesni-intel_asm.S @@ -2821,6 +2821,76 @@ SYM_FUNC_END(aesni_ctr_enc) #endif +#ifdef __x86_64__ +/* + * void aesni_xctr_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src, + * size_t len, u8 *iv, int byte_ctr) + */ +SYM_FUNC_START(aesni_xctr_enc) + FRAME_BEGIN + cmp $16, LEN + jb .Lxctr_ret + shr $4, %arg6 + movq %arg6, CTR + mov 480(KEYP), KLEN + movups (IVP), IV + cmp $64, LEN + jb .Lxctr_enc_loop1 +.align 4 +.Lxctr_enc_loop4: + movaps IV, STATE1 + vpaddq ONE(%rip), CTR, CTR + vpxor CTR, STATE1, STATE1 + movups (INP), IN1 + movaps IV, STATE2 + vpaddq ONE(%rip), CTR, CTR + vpxor CTR, STATE2, STATE2 + movups 0x10(INP), IN2 + movaps IV, STATE3 + vpaddq ONE(%rip), CTR, CTR + vpxor CTR, STATE3, STATE3 + movups 0x20(INP), IN3 + movaps IV, STATE4 + vpaddq ONE(%rip), CTR, CTR + vpxor CTR, STATE4, STATE4 + movups 0x30(INP), IN4 + call _aesni_enc4 + pxor IN1, STATE1 + movups STATE1, (OUTP) + pxor IN2, STATE2 + movups STATE2, 0x10(OUTP) + pxor IN3, STATE3 + movups STATE3, 0x20(OUTP) + pxor IN4, STATE4 + movups STATE4, 0x30(OUTP) + sub $64, LEN + add $64, INP + add $64, OUTP + cmp $64, LEN + jge .Lxctr_enc_loop4 + cmp $16, LEN + jb .Lxctr_ret +.align 4 +.Lxctr_enc_loop1: + movaps IV, STATE + vpaddq ONE(%rip), CTR, CTR + vpxor CTR, STATE1, STATE1 + movups (INP), IN + call _aesni_enc1 + pxor IN, STATE + movups STATE, (OUTP) + sub $16, LEN + add $16, INP + add $16, OUTP + cmp $16, LEN + jge .Lxctr_enc_loop1 +.Lxctr_ret: + FRAME_END + RET +SYM_FUNC_END(aesni_xctr_enc) + +#endif + .section .rodata.cst16.gf128mul_x_ble_mask, "aM", @progbits, 16 .align 16 .Lgf128mul_x_ble_mask: diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 41901ba9d3a2..74021bd524b6 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -112,6 +112,11 @@ asmlinkage void aesni_ctr_enc(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len, u8 *iv); DEFINE_STATIC_CALL(aesni_ctr_enc_tfm, aesni_ctr_enc); +asmlinkage void aesni_xctr_enc(struct crypto_aes_ctx *ctx, u8 *out, + const u8 *in, unsigned int len, u8 *iv, + unsigned int byte_ctr); +DEFINE_STATIC_CALL(aesni_xctr_enc_tfm, aesni_xctr_enc); + /* Scatter / Gather routines, with args similar to above */ asmlinkage void aesni_gcm_init(void *ctx, struct gcm_context_data *gdata, @@ -135,6 +140,16 @@ asmlinkage void aes_ctr_enc_192_avx_by8(const u8 *in, u8 *iv, void *keys, u8 *out, unsigned int num_bytes); asmlinkage void aes_ctr_enc_256_avx_by8(const u8 *in, u8 *iv, void *keys, u8 *out, unsigned int num_bytes); + +asmlinkage void aes_xctr_enc_128_avx_by8(const u8 *in, u8 *iv, void *keys, u8 + *out, unsigned int num_bytes, unsigned int byte_ctr); + +asmlinkage void aes_xctr_enc_192_avx_by8(const u8 *in, u8 *iv, void *keys, u8 + *out, unsigned int num_bytes, unsigned int byte_ctr); + +asmlinkage void aes_xctr_enc_256_avx_by8(const u8 *in, u8 *iv, void *keys, u8 + *out, unsigned int num_bytes, unsigned int byte_ctr); + /* * asmlinkage void aesni_gcm_init_avx_gen2() * gcm_data *my_ctx_data, 
context data @@ -527,6 +542,61 @@ static int ctr_crypt(struct skcipher_request *req) return err; } +static void aesni_xctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out, const u8 + *in, unsigned int len, u8 *iv, unsigned int + byte_ctr) +{ + if (ctx->key_length == AES_KEYSIZE_128) + aes_xctr_enc_128_avx_by8(in, iv, (void *)ctx, out, len, + byte_ctr); + else if (ctx->key_length == AES_KEYSIZE_192) + aes_xctr_enc_192_avx_by8(in, iv, (void *)ctx, out, len, + byte_ctr); + else + aes_xctr_enc_256_avx_by8(in, iv, (void *)ctx, out, len, + byte_ctr); +} + +static int xctr_crypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct crypto_aes_ctx *ctx = aes_ctx(crypto_skcipher_ctx(tfm)); + u8 keystream[AES_BLOCK_SIZE]; + u8 ctr[AES_BLOCK_SIZE]; + struct skcipher_walk walk; + unsigned int nbytes; + unsigned int byte_ctr = 0; + int err; + __le32 ctr32; + + err = skcipher_walk_virt(&walk, req, false); + + while ((nbytes = walk.nbytes) > 0) { + kernel_fpu_begin(); + if (nbytes & AES_BLOCK_MASK) + static_call(aesni_xctr_enc_tfm)(ctx, walk.dst.virt.addr, + walk.src.virt.addr, nbytes & AES_BLOCK_MASK, + walk.iv, byte_ctr); + nbytes &= ~AES_BLOCK_MASK; + byte_ctr += walk.nbytes - nbytes; + + if (walk.nbytes == walk.total && nbytes > 0) { + ctr32 = cpu_to_le32(byte_ctr / AES_BLOCK_SIZE + 1); + memcpy(ctr, walk.iv, AES_BLOCK_SIZE); + crypto_xor(ctr, (u8 *)&ctr32, sizeof(ctr32)); + aesni_enc(ctx, keystream, ctr); + crypto_xor_cpy(walk.dst.virt.addr + walk.nbytes - + nbytes, walk.src.virt.addr + walk.nbytes + - nbytes, keystream, nbytes); + byte_ctr += nbytes; + nbytes = 0; + } + kernel_fpu_end(); + err = skcipher_walk_done(&walk, nbytes); + } + return err; +} + static int rfc4106_set_hash_subkey(u8 *hash_subkey, const u8 *key, unsigned int key_len) { @@ -1026,6 +1096,23 @@ static struct skcipher_alg aesni_skciphers[] = { .setkey = aesni_skcipher_setkey, .encrypt = ctr_crypt, .decrypt = ctr_crypt, + }, { + .base = { + .cra_name = "__xctr(aes)", + .cra_driver_name = "__xctr-aes-aesni", + .cra_priority = 400, + .cra_flags = CRYPTO_ALG_INTERNAL, + .cra_blocksize = 1, + .cra_ctxsize = CRYPTO_AES_CTX_SIZE, + .cra_module = THIS_MODULE, + }, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .setkey = aesni_skcipher_setkey, + .encrypt = xctr_crypt, + .decrypt = xctr_crypt, #endif }, { .base = { @@ -1162,6 +1249,8 @@ static int __init aesni_init(void) /* optimize performance of ctr mode encryption transform */ static_call_update(aesni_ctr_enc_tfm, aesni_ctr_enc_avx_tfm); pr_info("AES CTR mode by8 optimization enabled\n"); + static_call_update(aesni_xctr_enc_tfm, aesni_xctr_enc_avx_tfm); + pr_info("AES XCTR mode by8 optimization enabled\n"); } #endif From patchwork Thu Feb 10 23:28:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nathan Huckleberry X-Patchwork-Id: 12742589 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0F0C5C4332F for ; Thu, 10 Feb 2022 23:28:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345650AbiBJX2u (ORCPT ); Thu, 10 Feb 2022 18:28:50 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:41322 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345690AbiBJX2o (ORCPT ); Thu, 10 Feb 2022 18:28:44 -0500 Received: from mail-vk1-xa49.google.com (mail-vk1-xa49.google.com [IPv6:2607:f8b0:4864:20::a49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C8C915F5B for ; Thu, 10 Feb 2022 15:28:44 -0800 (PST) Received: by mail-vk1-xa49.google.com with SMTP id w78-20020a1fad51000000b0032823aa16b6so996302vke.7 for ; Thu, 10 Feb 2022 15:28:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=LFn5HsQG6yE4p0N4NsfZ9UVbRekmJnl7KohBN08JwfM=; b=MqunerhqEQY9+Vidv+33xbfQP6h0LUfsliuqwFe5UCWlAJIQvm9mWUgQPr5rJlt6nX B8oivp6F3fRLW2rdD1fPPelbQDkU+QkCAuYm+QS0RfMOXLobiCGgxJx8QkyB22iKhnpy Z/gxNWtHVcCTQb5Gy+uEECOHKL1wXQIP3cwLq+Yj07ftJWRqXr2gDYPGpLiTMsA6YfDG vs3jnIJTqPNEYYJv0f7f9xEYFbiukxUSStYmx/u3cU987yxTzXlpWlOf9+p8i2fL7kMB 8IbunbtPrcYLZg4bE2EkkayGPE87+gw5eBHJmEwx2XzMz9igy5NBjNLzqYNZDETCPPyq MWug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=LFn5HsQG6yE4p0N4NsfZ9UVbRekmJnl7KohBN08JwfM=; b=qbFsHE+Vsv0fCPhwDJnUplWaL5xfdA9cZpC4mcBtlZ7z3o+PAx+L/Z8u40q9KfWq9O q0aeVdiDNymSUU1h3ZJoxgeCPkaRHQmUe4dT8bwlIEehYky1+HkC7cxPn61P48ooIBUP z5wyxzBArZcTlfoKIBfZihhzb6KnjFlEaQmoPLwfejiXWcd5zmr2b6SuJ8TKZGXikr2F R/zkKREiK6X/94xX1KeEvFhcQtDYRvrSbQNiN9KpmIyhstN86DK40P8GX+k1HwAu1eJO nQfPa9MLWBZU1Hw2G8szO+YCz23T2ey9uuXYqX9qGvVN/fenEUwRROCu01+5/sIdi+kV GRbg== X-Gm-Message-State: AOAM5302Xw/n902+dNnsIkfwu3l5d3H8WsfaqcMC7chccXpddT74kSzZ zokhNHSZj9oHPl4qZuiXTc8SXzn8SPhDmFNGiJru8FC7olHz5bYFKuqnpYW0S0bBrC3o8iPOvFg x+FyJ0FtNINJhUhSzJtj57EEq+Y2bhhHpo7pWGs5CS6Y24RR/vhdKcbBzV4phoLqZ9XM= X-Google-Smtp-Source: ABdhPJznK+CUJbs5EjuSA9KUzmEcqeaxALCXxOk9kaOi3iurlmCd8wfGHt7j8HzJjBdocTRDIjlGmWAG9w== X-Received: from nhuck.c.googlers.com ([fda3:e722:ac3:cc00:14:4d90:c0a8:39cc]) (user=nhuck job=sendgmr) by 2002:ab0:59e2:: with SMTP id k31mr3083066uad.117.1644535723875; Thu, 10 Feb 2022 15:28:43 -0800 (PST) Date: Thu, 10 Feb 2022 23:28:10 +0000 In-Reply-To: <20220210232812.798387-1-nhuck@google.com> Message-Id: <20220210232812.798387-6-nhuck@google.com> Mime-Version: 1.0 References: <20220210232812.798387-1-nhuck@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [RFC PATCH v2 5/7] crypto: arm64/aes-xctr: Add accelerated implementation of XCTR From: Nathan Huckleberry To: linux-crypto@vger.kernel.org Cc: Herbert Xu , "David S. Miller" , linux-arm-kernel@lists.infradead.org, Paul Crowley , Eric Biggers , Sami Tolvanen , Ard Biesheuvel , Nathan Huckleberry Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Add hardware accelerated version of XCTR for ARM64 CPUs with ARMv8 Crypto Extension support. This XCTR implementation is based on the CTR implementation in aes-modes.S. 
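In XCTR, keystream block i (1-based) is E_K(IV ^ le128(i)), and
encryption and decryption are the same operation. As a reference point
for the assembly below, here is a one-block-at-a-time C sketch. It is
illustrative only and not part of this patch; it assumes the generic
AES library primitive aes_encrypt() from <crypto/aes.h> and the XOR
helpers from <crypto/algapi.h>, and the function name is hypothetical:

	#include <crypto/aes.h>		/* struct crypto_aes_ctx, aes_encrypt() */
	#include <crypto/algapi.h>	/* crypto_xor(), crypto_xor_cpy() */
	#include <linux/minmax.h>	/* min_t() */
	#include <linux/string.h>	/* memcpy() */

	static void xctr_ref_crypt(const struct crypto_aes_ctx *ctx, u8 *dst,
				   const u8 *src, unsigned int nbytes,
				   const u8 *iv)
	{
		u8 ctrblk[AES_BLOCK_SIZE], keystream[AES_BLOCK_SIZE];
		u64 i = 1;

		while (nbytes > 0) {
			unsigned int n = min_t(unsigned int, nbytes,
					       AES_BLOCK_SIZE);
			__le64 ctr = cpu_to_le64(i++);

			/* ctrblk = IV ^ le128(i); the upper 64 bits of i
			 * are zero for all practical message lengths.
			 */
			memcpy(ctrblk, iv, AES_BLOCK_SIZE);
			crypto_xor(ctrblk, (const u8 *)&ctr, sizeof(ctr));
			aes_encrypt(ctx, keystream, ctrblk);
			crypto_xor_cpy(dst, src, keystream, n);
			src += n;
			dst += n;
			nbytes -= n;
		}
	}

The accelerated version below computes the same keystream, but up to
MAX_STRIDE blocks at a time using the ARMv8 AES instructions.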
More information on XCTR can be found in the HCTR2 paper: Length-preserving encryption with HCTR2: https://eprint.iacr.org/2021/1441.pdf Signed-off-by: Nathan Huckleberry --- Changes since v1: * Added STRIDE back to aes-glue.c arch/arm64/crypto/Kconfig | 4 +- arch/arm64/crypto/aes-glue.c | 72 ++++++++++++++++++- arch/arm64/crypto/aes-modes.S | 130 ++++++++++++++++++++++++++++++++++ 3 files changed, 202 insertions(+), 4 deletions(-) diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index 2a965aa0188d..897f9a4b5b67 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -84,13 +84,13 @@ config CRYPTO_AES_ARM64_CE_CCM select CRYPTO_LIB_AES config CRYPTO_AES_ARM64_CE_BLK - tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions" + tristate "AES in ECB/CBC/CTR/XTS/XCTR modes using ARMv8 Crypto Extensions" depends on KERNEL_MODE_NEON select CRYPTO_SKCIPHER select CRYPTO_AES_ARM64_CE config CRYPTO_AES_ARM64_NEON_BLK - tristate "AES in ECB/CBC/CTR/XTS modes using NEON instructions" + tristate "AES in ECB/CBC/CTR/XTS/XCTR modes using NEON instructions" depends on KERNEL_MODE_NEON select CRYPTO_SKCIPHER select CRYPTO_LIB_AES diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index 561dd2332571..dd04f3c5b0f1 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -24,6 +24,7 @@ #ifdef USE_V8_CRYPTO_EXTENSIONS #define MODE "ce" #define PRIO 300 +#define STRIDE 5 #define aes_expandkey ce_aes_expandkey #define aes_ecb_encrypt ce_aes_ecb_encrypt #define aes_ecb_decrypt ce_aes_ecb_decrypt @@ -34,13 +35,15 @@ #define aes_essiv_cbc_encrypt ce_aes_essiv_cbc_encrypt #define aes_essiv_cbc_decrypt ce_aes_essiv_cbc_decrypt #define aes_ctr_encrypt ce_aes_ctr_encrypt +#define aes_xctr_encrypt ce_aes_xctr_encrypt #define aes_xts_encrypt ce_aes_xts_encrypt #define aes_xts_decrypt ce_aes_xts_decrypt #define aes_mac_update ce_aes_mac_update -MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions"); +MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS/XCTR using ARMv8 Crypto Extensions"); #else #define MODE "neon" #define PRIO 200 +#define STRIDE 4 #define aes_ecb_encrypt neon_aes_ecb_encrypt #define aes_ecb_decrypt neon_aes_ecb_decrypt #define aes_cbc_encrypt neon_aes_cbc_encrypt @@ -50,16 +53,18 @@ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions"); #define aes_essiv_cbc_encrypt neon_aes_essiv_cbc_encrypt #define aes_essiv_cbc_decrypt neon_aes_essiv_cbc_decrypt #define aes_ctr_encrypt neon_aes_ctr_encrypt +#define aes_xctr_encrypt neon_aes_xctr_encrypt #define aes_xts_encrypt neon_aes_xts_encrypt #define aes_xts_decrypt neon_aes_xts_decrypt #define aes_mac_update neon_aes_mac_update -MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 NEON"); +MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS/XCTR using ARMv8 NEON"); #endif #if defined(USE_V8_CRYPTO_EXTENSIONS) || !IS_ENABLED(CONFIG_CRYPTO_AES_ARM64_BS) MODULE_ALIAS_CRYPTO("ecb(aes)"); MODULE_ALIAS_CRYPTO("cbc(aes)"); MODULE_ALIAS_CRYPTO("ctr(aes)"); MODULE_ALIAS_CRYPTO("xts(aes)"); +MODULE_ALIAS_CRYPTO("xctr(aes)"); #endif MODULE_ALIAS_CRYPTO("cts(cbc(aes))"); MODULE_ALIAS_CRYPTO("essiv(cbc(aes),sha256)"); @@ -89,6 +94,10 @@ asmlinkage void aes_cbc_cts_decrypt(u8 out[], u8 const in[], u32 const rk[], asmlinkage void aes_ctr_encrypt(u8 out[], u8 const in[], u32 const rk[], int rounds, int bytes, u8 ctr[]); +asmlinkage void aes_xctr_encrypt(u8 out[], u8 const in[], u32 const rk[], + int rounds, int bytes, u8 ctr[], u8 finalbuf[], + int byte_ctr); + asmlinkage 
void aes_xts_encrypt(u8 out[], u8 const in[], u32 const rk1[], int rounds, int bytes, u32 const rk2[], u8 iv[], int first); @@ -442,6 +451,49 @@ static int __maybe_unused essiv_cbc_decrypt(struct skcipher_request *req) return err ?: cbc_decrypt_walk(req, &walk); } +static int __maybe_unused xctr_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + int err, rounds = 6 + ctx->key_length / 4; + struct skcipher_walk walk; + unsigned int byte_ctr = 0; + + err = skcipher_walk_virt(&walk, req, false); + + while (walk.nbytes > 0) { + const u8 *src = walk.src.virt.addr; + unsigned int nbytes = walk.nbytes; + u8 *dst = walk.dst.virt.addr; + u8 buf[AES_BLOCK_SIZE]; + unsigned int tail; + + if (unlikely(nbytes < AES_BLOCK_SIZE)) + src = memcpy(buf, src, nbytes); + else if (nbytes < walk.total) + nbytes &= ~(AES_BLOCK_SIZE - 1); + + kernel_neon_begin(); + aes_xctr_encrypt(dst, src, ctx->key_enc, rounds, nbytes, + walk.iv, buf, byte_ctr); + kernel_neon_end(); + + tail = nbytes % (STRIDE * AES_BLOCK_SIZE); + if (tail > 0 && tail < AES_BLOCK_SIZE) + /* + * The final partial block could not be returned using + * an overlapping store, so it was passed via buf[] + * instead. + */ + memcpy(dst + nbytes - tail, buf, tail); + byte_ctr += nbytes; + + err = skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + + return err; +} + static int __maybe_unused ctr_encrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); @@ -669,6 +721,22 @@ static struct skcipher_alg aes_algs[] = { { .setkey = skcipher_aes_setkey, .encrypt = ctr_encrypt, .decrypt = ctr_encrypt, +}, { + .base = { + .cra_name = "xctr(aes)", + .cra_driver_name = "xctr-aes-" MODE, + .cra_priority = PRIO, + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_module = THIS_MODULE, + }, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .setkey = skcipher_aes_setkey, + .encrypt = xctr_encrypt, + .decrypt = xctr_encrypt, }, { .base = { .cra_name = "xts(aes)", diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-modes.S index dc35eb0245c5..3b5c7a5c21e4 100644 --- a/arch/arm64/crypto/aes-modes.S +++ b/arch/arm64/crypto/aes-modes.S @@ -479,6 +479,136 @@ ST5( mov v3.16b, v4.16b ) b .Lctrout AES_FUNC_END(aes_ctr_encrypt) + /* + * aes_xctr_encrypt(u8 out[], u8 const in[], u8 const rk[], int rounds, + * int bytes, u8 const ctr[], u8 finalbuf[], int + * byte_ctr) + */ + +AES_FUNC_START(aes_xctr_encrypt) + stp x29, x30, [sp, #-16]! 
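+	/* AAPCS64 argument registers: x0 = out, x1 = in, x2 = round keys,
+	 * w3 = rounds, w4 = bytes, x5 = ctr (the XCTR IV), x6 = finalbuf,
+	 * x7 = byte_ctr (in bytes; converted to a block counter below).
+	 */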
+ mov x29, sp + + enc_prepare w3, x2, x12 + ld1 {vctr.16b}, [x5] + + umov x12, vctr.d[0] /* keep ctr in reg */ + lsr x7, x7, #4 + add x11, x7, #1 + +.LxctrloopNx: + add w7, w4, #15 + sub w4, w4, #MAX_STRIDE << 4 + lsr w7, w7, #4 + mov w8, #MAX_STRIDE + cmp w7, w8 + csel w7, w7, w8, lt + add x11, x11, x7 + + mov v0.16b, vctr.16b + mov v1.16b, vctr.16b + mov v2.16b, vctr.16b + mov v3.16b, vctr.16b +ST5( mov v4.16b, vctr.16b ) + + sub x7, x11, #MAX_STRIDE + eor x7, x12, x7 + ins v0.d[0], x7 + sub x7, x11, #MAX_STRIDE - 1 + sub x8, x11, #MAX_STRIDE - 2 + eor x7, x7, x12 + sub x9, x11, #MAX_STRIDE - 3 + mov v1.d[0], x7 + eor x8, x8, x12 + eor x9, x9, x12 +ST5( sub x10, x11, #MAX_STRIDE - 4) + mov v2.d[0], x8 + eor x10, x10, x12 + mov v3.d[0], x9 +ST5( mov v4.d[0], x10 ) + tbnz w4, #31, .Lxctrtail + ld1 {v5.16b-v7.16b}, [x1], #48 +ST4( bl aes_encrypt_block4x ) +ST5( bl aes_encrypt_block5x ) + eor v0.16b, v5.16b, v0.16b +ST4( ld1 {v5.16b}, [x1], #16 ) + eor v1.16b, v6.16b, v1.16b +ST5( ld1 {v5.16b-v6.16b}, [x1], #32 ) + eor v2.16b, v7.16b, v2.16b + eor v3.16b, v5.16b, v3.16b +ST5( eor v4.16b, v6.16b, v4.16b ) + st1 {v0.16b-v3.16b}, [x0], #64 +ST5( st1 {v4.16b}, [x0], #16 ) + cbz w4, .Lxctrout + b .LxctrloopNx + +.Lxctrout: + ldp x29, x30, [sp], #16 + ret + +.Lxctrtail: + /* XOR up to MAX_STRIDE * 16 - 1 bytes of in/output with v0 ... v3/v4 */ + mov x17, #16 + ands x13, x4, #0xf + csel x13, x13, x17, ne + +ST5( cmp w4, #64 - (MAX_STRIDE << 4)) +ST5( csel x14, x17, xzr, gt ) + cmp w4, #48 - (MAX_STRIDE << 4) + csel x15, x17, xzr, gt + cmp w4, #32 - (MAX_STRIDE << 4) + csel x16, x17, xzr, gt + cmp w4, #16 - (MAX_STRIDE << 4) + ble .Lxctrtail1x + +ST5( mov v4.d[0], x10 ) + + adr_l x12, .Lcts_permute_table + add x12, x12, x13 + +ST5( ld1 {v5.16b}, [x1], x14 ) + ld1 {v6.16b}, [x1], x15 + ld1 {v7.16b}, [x1], x16 + +ST4( bl aes_encrypt_block4x ) +ST5( bl aes_encrypt_block5x ) + + ld1 {v8.16b}, [x1], x13 + ld1 {v9.16b}, [x1] + ld1 {v10.16b}, [x12] + +ST4( eor v6.16b, v6.16b, v0.16b ) +ST4( eor v7.16b, v7.16b, v1.16b ) +ST4( tbl v3.16b, {v3.16b}, v10.16b ) +ST4( eor v8.16b, v8.16b, v2.16b ) +ST4( eor v9.16b, v9.16b, v3.16b ) + +ST5( eor v5.16b, v5.16b, v0.16b ) +ST5( eor v6.16b, v6.16b, v1.16b ) +ST5( tbl v4.16b, {v4.16b}, v10.16b ) +ST5( eor v7.16b, v7.16b, v2.16b ) +ST5( eor v8.16b, v8.16b, v3.16b ) +ST5( eor v9.16b, v9.16b, v4.16b ) + +ST5( st1 {v5.16b}, [x0], x14 ) + st1 {v6.16b}, [x0], x15 + st1 {v7.16b}, [x0], x16 + add x13, x13, x0 + st1 {v9.16b}, [x13] // overlapping stores + st1 {v8.16b}, [x0] + b .Lxctrout + +.Lxctrtail1x: + // use finalbuf if less than a full block + csel x0, x0, x6, eq + ld1 {v5.16b}, [x1] +ST5( mov v3.16b, v4.16b ) + encrypt_block v3, w3, x2, x8, w7 + eor v5.16b, v5.16b, v3.16b + st1 {v5.16b}, [x0] + b .Lxctrout +AES_FUNC_END(aes_xctr_encrypt) + /* * aes_xts_encrypt(u8 out[], u8 const in[], u8 const rk1[], int rounds, From patchwork Thu Feb 10 23:28:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nathan Huckleberry X-Patchwork-Id: 12742592 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84769C43217 for ; Thu, 10 Feb 2022 23:28:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345654AbiBJX2x (ORCPT ); Thu, 10 Feb 2022 18:28:53 -0500 
Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:41378 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345659AbiBJX2s (ORCPT ); Thu, 10 Feb 2022 18:28:48 -0500 Received: from mail-qv1-xf4a.google.com (mail-qv1-xf4a.google.com [IPv6:2607:f8b0:4864:20::f4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 88F3B5F67 for ; Thu, 10 Feb 2022 15:28:46 -0800 (PST) Received: by mail-qv1-xf4a.google.com with SMTP id w14-20020a0cfc4e000000b0042c1ac91249so5129204qvp.4 for ; Thu, 10 Feb 2022 15:28:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=uvOO8ZNH1ZfcGkGuvFcfNS7K5FAXvaqzIa8ApkAjUlk=; b=KWviO3Q6aZAotjrk9w73/Dg+Jm3U5xwpsluhcSQmxk2NmT7heyhGNqgkvAeD3xVMR2 ku0pyHpq3/LitKCgEa2Cvyp/VzY7FZD4YM/PMIDnhivFg7yXDtZv79d1dG2WeCaKz3qg /pei6XJb2fQvMwF3wytez6tTklLcIRvAa+9a4Gsy5rEaNjVWc5yNq17tduHJuqOxsP3T q8+YiybRbvYupGxCALTsdWG8o1FY+nQ75sRXbhFPWVr3Ont3Xu8KXSmtbjwZn53n6S64 FXyd+mj4TIHs+Bi2KrKoFWcapOzquagxgmwJ+4LQy1+l4e2+EIsU+QJhZfFrohHUDgQX ezhQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=uvOO8ZNH1ZfcGkGuvFcfNS7K5FAXvaqzIa8ApkAjUlk=; b=8OvxNcOCUp0YxAljl2D9MLkRKI/Hdpu2VTNPp8vnkwqfWfUen8z+O+yabjANOtq44Y al5gbkg8ZxxNNET+b8HVZYSY3fyajfZOUfKePi0LdfIaJJOf9/uUMeYE8AwNFQTBe/Lr D9d/RXY/9pCc51lUauOhb77uWdiszw/eg4Bg3ZHKOSCDLoh3P3TLRUbsrXHl4111HDwZ +ovmJXs1YzcLEYWrkNfM3DFxrlvvLJ5GSYMetST3BAUZY3Uz099HesrVBYd3hwr0XYoZ JgsIRKOhayyBdDVTd6YfsYxrYtGiZpnoeuEu5l2PxpGaLKZWQIDI6aHIo5yGj8/9j3jV jLjg== X-Gm-Message-State: AOAM533ZxJC69E8cP//tzUC1kE04Ma/mIUW/0QmPNRLnb+dGuFJGLOGq pfLRzj36TNc53iKcTC0IQd9Ia9/K00RWf3ym1P1bHxb6rq9i8TjO8FR1VIqPEKiirec6/GoTGK4 as7teppoLGjoygEcXblyFkpD/TXFWED1FG3iRhYyE11OfwX9hMdI5Bp9Hvov2CANOmFA= X-Google-Smtp-Source: ABdhPJy1qb2FgD/k24haM+CpfYV7QoTJ0Jaa1kUdux1WeSG+locnVGiQL977gmiAc1sO4farcpPRsVEjZg== X-Received: from nhuck.c.googlers.com ([fda3:e722:ac3:cc00:14:4d90:c0a8:39cc]) (user=nhuck job=sendgmr) by 2002:a05:6214:20ca:: with SMTP id 10mr6898460qve.96.1644535725641; Thu, 10 Feb 2022 15:28:45 -0800 (PST) Date: Thu, 10 Feb 2022 23:28:11 +0000 In-Reply-To: <20220210232812.798387-1-nhuck@google.com> Message-Id: <20220210232812.798387-7-nhuck@google.com> Mime-Version: 1.0 References: <20220210232812.798387-1-nhuck@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [RFC PATCH v2 6/7] crypto: x86/polyval: Add PCLMULQDQ accelerated implementation of POLYVAL From: Nathan Huckleberry To: linux-crypto@vger.kernel.org Cc: Herbert Xu , "David S. Miller" , linux-arm-kernel@lists.infradead.org, Paul Crowley , Eric Biggers , Sami Tolvanen , Ard Biesheuvel , Nathan Huckleberry Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Add hardware accelerated version of POLYVAL for x86-64 CPUs with PCLMULQDQ support. This implementation is accelerated using PCLMULQDQ instructions to perform the finite field computations. For added efficiency, 8 blocks of the message are processed simultaneously by precomputing the first 8 powers of the key. Schoolbook multiplication is used instead of Karatsuba multiplication because it was found to be slightly faster on x86-64 machines. Montgomery reduction must be used instead of Barrett reduction due to the difference in modulus between POLYVAL's field and other finite fields. 
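Concretely, the per-block recurrence being batched is
acc = (acc ^ M_i) * h, where * is the Montgomery multiplication the
assembly implements (the x^{-128} factor is folded into every product).
A sketch of the equivalent one-block-at-a-time loop, expressed with
this patch's own single-block primitive clmul_polyval_mul()
(kernel_fpu_begin()/end() handling omitted; the wrapper name is
illustrative only):

	/* Reference loop equivalent to clmul_polyval_update(): per
	 * block, acc = (acc ^ M_i) * h, with * the Montgomery
	 * multiplication mod x^128 + x^127 + x^126 + x^121 + 1.
	 */
	static void polyval_ref_update(const u8 h[16], const u8 *in,
				       size_t nblocks, u8 accumulator[16])
	{
		while (nblocks--) {
			crypto_xor(accumulator, in, 16);   /* acc ^= M_i */
			clmul_polyval_mul(accumulator, h); /* acc *= h   */
			in += 16;
		}
	}

Unrolling k iterations gives h^k*acc ^ h^k*M_0 ^ ... ^ h^1*M_{k-1},
which is exactly the polynomial clmul_polyval_update() evaluates while
performing only one field reduction per 8 blocks.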
More information on POLYVAL can be found in the HCTR2 paper: Length-preserving encryption with HCTR2: https://eprint.iacr.org/2021/1441.pdf Signed-off-by: Nathan Huckleberry --- Changes since v1: * Rename C, D, EF to LO, MI, HI * Add comments explaining POLYVAL * Fix bug in handling of non-block-multiple messages * Wrap shash in ahash (as done in ghash) arch/x86/crypto/Makefile | 3 + arch/x86/crypto/polyval-clmulni_asm.S | 414 +++++++++++++++++++++++++ arch/x86/crypto/polyval-clmulni_glue.c | 365 ++++++++++++++++++++++ crypto/Kconfig | 10 + 4 files changed, 792 insertions(+) create mode 100644 arch/x86/crypto/polyval-clmulni_asm.S create mode 100644 arch/x86/crypto/polyval-clmulni_glue.c diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile index ee2df489b0d9..c0f13801afba 100644 --- a/arch/x86/crypto/Makefile +++ b/arch/x86/crypto/Makefile @@ -69,6 +69,9 @@ libblake2s-x86_64-y := blake2s-core.o blake2s-glue.o obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o ghash-clmulni-intel_glue.o +obj-$(CONFIG_CRYPTO_POLYVAL_CLMUL_NI) += polyval-clmulni.o +polyval-clmulni-y := polyval-clmulni_asm.o polyval-clmulni_glue.o + obj-$(CONFIG_CRYPTO_CRC32C_INTEL) += crc32c-intel.o crc32c-intel-y := crc32c-intel_glue.o crc32c-intel-$(CONFIG_64BIT) += crc32c-pcl-intel-asm_64.o diff --git a/arch/x86/crypto/polyval-clmulni_asm.S b/arch/x86/crypto/polyval-clmulni_asm.S new file mode 100644 index 000000000000..bec1a2046b18 --- /dev/null +++ b/arch/x86/crypto/polyval-clmulni_asm.S @@ -0,0 +1,414 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2021 Google LLC + * + * Use of this source code is governed by an MIT-style + * license that can be found in the LICENSE file or at + * https://opensource.org/licenses/MIT. + */ +/* + * This is an efficient implementation of POLYVAL using intel PCLMULQDQ-NI + * instructions. It works on 8 blocks at a time, by precomputing the first 8 + * keys powers h^8, ..., h^1 in the POLYVAL finite field. This precomputation + * allows us to split finite field multiplication into two steps. + * + * In the first step, we consider h^i, m_i as normal polynomials of degree less + * than 128. We then compute p(x) = h^8m_0 + ... + h^1m_7 where multiplication + * is simply polynomial multiplication. + * + * In the second step, we compute the reduction of p(x) modulo the finite field + * modulus g(x) = x^128 + x^127 + x^126 + x^121 + 1. + * + * This two step process is equivalent to computing h^8m_0 + ... + h^1m_7 where + * multiplication is finite field multiplication. The advantage is that the + * two-step process only requires 1 finite field reduction for every 8 + * polynomial multiplications. Further parallelism is gained by interleaving the + * multiplications and polynomial reductions. + */ + +#include +#include + +#define NUM_PRECOMPUTE_POWERS 8 + +#define GSTAR %xmm7 +#define PL %xmm8 +#define PH %xmm9 +#define T %xmm10 +#define V %xmm11 +#define LO %xmm12 +#define HI %xmm13 +#define MI %xmm14 +#define SUM %xmm15 + +#define BLOCKS_LEFT %rdx +#define OP1 %rdi +#define OP2 %r10 +#define IDX %r11 +#define TMP %rax + +.section .rodata.cst16.gstar, "aM", @progbits, 16 +.align 16 + +Lgstar: + .quad 0xc200000000000000, 0xc200000000000000 + +.text + +/* + * Performs schoolbook1_iteration on two lists of 128-bit polynomials of length + * b pointed to by OP1 and OP2. 
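+ *
+ * The partial products are only accumulated into LO, MI, HI here; the
+ * reduction is deferred, so one schoolbook2 + montgomery_reduction
+ * pass can be shared by all b multiplications.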
+ */
+.macro schoolbook1 b
+	.set by, \b
+	.set i, 0
+	.rept (by)
+		schoolbook1_iteration i 0
+		.set i, (i +1)
+	.endr
+.endm
+
+/*
+ * Computes the product of two 128-bit polynomials at the memory locations
+ * specified by (OP1 + 16*i) and (OP2 + 16*i) and XORs the components of the
+ * 256-bit product into LO, MI, HI.
+ *
+ * The multiplication produces four parts:
+ *   LOW: The polynomial given by performing carryless multiplication of the
+ *   bottom 64-bits of each polynomial
+ *   MID1: The polynomial given by performing carryless multiplication of the
+ *   bottom 64-bits of the first polynomial and the top 64-bits of the second
+ *   MID2: The polynomial given by performing carryless multiplication of the
+ *   bottom 64-bits of the second polynomial and the top 64-bits of the first
+ *   HIGH: The polynomial given by performing carryless multiplication of the
+ *   top 64-bits of each polynomial
+ *
+ * We compute:
+ *  LO ^= LOW
+ *  MI ^= MID1 ^ MID2
+ *  HI ^= HIGH
+ *
+ * Later, the 256-bit result can be extracted as:
+ *   [HI_H : HI_L ^ MI_H : LO_H ^ MI_L : LO_L]
+ * This step is done when computing the polynomial reduction for efficiency
+ * reasons.
+ *
+ * If xor_sum == 1 then XOR the value of SUM into m_0.
+ * This avoids an extra multiplication of SUM and h^N.
+ */
+.macro schoolbook1_iteration i xor_sum
+	.set i, \i
+	.set xor_sum, \xor_sum
+	movups (16*i)(OP1), %xmm0
+	.if(i == 0 && xor_sum == 1)
+		pxor SUM, %xmm0
+	.endif
+	vpclmulqdq $0x01, (16*i)(OP2), %xmm0, %xmm1
+	vpxor %xmm1, MI, MI
+	vpclmulqdq $0x00, (16*i)(OP2), %xmm0, %xmm2
+	vpxor %xmm2, LO, LO
+	vpclmulqdq $0x11, (16*i)(OP2), %xmm0, %xmm3
+	vpxor %xmm3, HI, HI
+	vpclmulqdq $0x10, (16*i)(OP2), %xmm0, %xmm4
+	vpxor %xmm4, MI, MI
+.endm
+
+/*
+ * Performs the same computation as schoolbook1_iteration, except we expect the
+ * arguments to already be loaded into xmm0 and xmm1.
+ */
+.macro schoolbook1_noload
+	vpclmulqdq $0x01, %xmm0, %xmm1, %xmm2
+	vpxor %xmm2, MI, MI
+	vpclmulqdq $0x00, %xmm0, %xmm1, %xmm3
+	vpxor %xmm3, LO, LO
+	vpclmulqdq $0x11, %xmm0, %xmm1, %xmm4
+	vpxor %xmm4, HI, HI
+	vpclmulqdq $0x10, %xmm0, %xmm1, %xmm5
+	vpxor %xmm5, MI, MI
+.endm
+
+/*
+ * Computes the 256-bit polynomial represented by LO, HI, MI. Stores
+ * the result in PL, PH.
+ * [PH :: PL] = [HI_H : HI_L ^ MI_H :: LO_H ^ MI_L : LO_L]
+ */
+.macro schoolbook2
+	vpslldq $8, MI, PL
+	vpsrldq $8, MI, PH
+	pxor LO, PL
+	pxor HI, PH
+.endm
+
+/*
+ * Computes the 128-bit reduction of PL, PH. Stores the result in PH.
+ *
+ * This macro computes p(x) mod g(x) where p(x) is in montgomery form and g(x) =
+ * x^128 + x^127 + x^126 + x^121 + 1.
+ *
+ * The montgomery form of a polynomial p(x) is p(x)x^{128}. Montgomery reduction
+ * works by simultaneously dividing by x^{128} and computing the modular
+ * reduction.
+ *
+ * Suppose we wish to reduce the montgomery form of p(x) = [P_3 : P_2 : P_1 :
+ * P_0] where P_i is a polynomial of degree at most 64 represented as 64-bits.
+ * Thus we would like to compute:
+ *	p(x) / x^{128} mod g(x)
+ *	= (P_3*x^{192} + P_2*x^{128} + P_1*x^{64} + P_0) / x^{128} mod g(x)
+ *
+ * We would like to divide by x^{128} efficiently. Since P_3*x^{128},
+ * P_2*x^{128} are multiples of x^{128}, we can simply bitshift right by 128.
+ *
+ * We now focus on dividing P_1*x^{64} + P_0 by x^{128}. We do this by making
+ * P_1*x^{64} + P_0 divisible by x^{128} then bitshifting. To ensure
+ * divisibility, we consider the polynomials mod x^{128}.
+ *
+ * Let c(x) = P_1*x^{64} + P_0.
+ *
+ * Now let m(x) = c(x) mod x^{128}
+ * and
+ * Let z(x) = [c(x) + m(x)g(x)] / x^{128}
+ *
+ * First notice that:
+ *	c(x) + m(x)g(x) = c(x) mod g(x).
+ * Furthermore, g(x) mod x^{128} = 1, so we have
+ *	c(x) + m(x)g(x) = c(x) + c(x) = 0 (mod x^{128}).
+ *
+ * Thus c(x) + m(x)g(x) is divisible by x^{128} and is equivalent to c(x) mod
+ * g(x).
+ *
+ * In practice we use a slight modification of this idea, by using g*(x) =
+ * x^{63} + x^{62} + x^{57}. This is because we can only multiply 64-bit
+ * polynomials. Notice that g(x) = x^128 + g*(x)*x^{64} + 1
+ *
+ * We do this by substituting g(x) = x^{128} + g*(x)x^{64} + 1
+ *	z(x) = [c(x) + c(x)*(x^{128} + g*(x)*x^{64} + 1)] / x^{128}
+ *	     = [P_1*x^{192} + P_0*x^{128} + P_1*g*(x)x^{128} + P_0*g*(x)*x^{64}]
+ *	       / x^{128}
+ *	     = P_1*x^{64} + P_0 + P_1*g*(x) + P_0*g*(x)*x^{-64}
+ *
+ * The only difficulty left in this expression is P_0*g*(x)x^{-64}.
+ * Let t(x) = P_0*g*(x) = [T_1 : T_0]
+ * Notice that we can repeat the above process:
+ *	g(x) mod x^{64} = 1
+ *	m'(x) = t(x) mod x^{64}
+ *	z'(x) = [t(x) + m'(x)g(x)] / x^64
+ * Thus we get
+ *	z'(x) = [t(x) + (x^{128} + g*(x)x^{64} + 1)T_0] / x^64
+ *	      = T_1 + T_0*x^{64} + g*(x)*T_0
+ *
+ * Recall that this is only the reduction for [P_1*x^{64} + P_0] / x^{64}. The
+ * full computation we need to make is:
+ *	p(x) / x^{128} = P_3*x^{64} + P_2 + P_1*x^{64} + P_0 + P_1*g*(x) +
+ *			 T_1 + T_0*x^{64} + g*(x)*T_0
+ *
+ * Thus we have:
+ *	t(x) = g*(x) * P_0 = [T_1 : T_0]
+ *	v(x) = g*(x) * (T_0 ^ P_1) = [V_1 : V_0]
+ *	p(x) / x^{128} mod g(x) = [P_3 ^ P_1 ^ V_1 ^ T_0 : P_2 ^ P_0 ^ V_0 ^ T_1]
+ */
+.macro montgomery_reduction
+	movdqa PL, T
+	pclmulqdq $0x00, GSTAR, T	# T = [P_0 * g*(x)]
+	pshufd $0b01001110, T, V	# V = [T_0 : T_1]
+	pxor V, PL			# PL = [P_1 ^ T_0 : P_0 ^ T_1]
+	pxor PL, PH			# PH = [P_1 ^ T_0 ^ P_3 : P_0 ^ T_1 ^ P_2]
+	pclmulqdq $0x11, GSTAR, PL	# PL = [(P_1 ^ T_0) * g*(x)]
+	pxor PL, PH
+.endm
+
+/*
+ * Compute schoolbook multiplication for 8 blocks
+ * M_0h^8 + ... + M_7h^1 (no constant term)
+ *
+ * If reduce is set, computes the montgomery reduction of the
+ * previous full_stride call and XORs with the first message block.
+ * (M_0 + REDUCE(PL, PH))h^8 + ...
+ M_7h^1 (no constant term) + * + * Sets PL, PH + * Clobbers LO, HI, MI + * + */ +.macro full_stride reduce + .set reduce, \reduce + mov %rsi, OP2 + pxor LO, LO + pxor HI, HI + pxor MI, MI + + schoolbook1_iteration 7 0 + .if(reduce) + movdqa PL, T + .endif + + schoolbook1_iteration 6 0 + .if(reduce) + pclmulqdq $0x00, GSTAR, T # T = [X0 * g*(x)] + .endif + + schoolbook1_iteration 5 0 + .if(reduce) + pshufd $0b01001110, T, V # V = [T0 : T1] + .endif + + schoolbook1_iteration 4 0 + .if(reduce) + pxor V, PL # PL = [X1 ^ T0 : X0 ^ T1] + .endif + + schoolbook1_iteration 3 0 + .if(reduce) + pxor PL, PH # PH = [X1 ^ T0 ^ X3 : X0 ^ T1 ^ X2] + .endif + + schoolbook1_iteration 2 0 + .if(reduce) + pclmulqdq $0x11, GSTAR, PL # PL = [X1 ^ T0 * g*(x)] + .endif + + schoolbook1_iteration 1 0 + .if(reduce) + pxor PL, PH + movdqa PH, SUM + .endif + + schoolbook1_iteration 0 1 + + addq $(8*16), OP1 + addq $(8*16), OP2 + schoolbook2 +.endm + +/* + * Compute poly on window size of %rdx blocks + * 0 < %rdx < NUM_PRECOMPUTE_POWERS + */ +.macro partial_stride + pxor LO, LO + pxor HI, HI + pxor MI, MI + mov BLOCKS_LEFT, TMP + shlq $4, TMP + mov %rsi, OP2 + addq $(16*NUM_PRECOMPUTE_POWERS), OP2 + subq TMP, OP2 + # Multiply sum by h^N + movups (OP2), %xmm0 + movdqa SUM, %xmm1 + schoolbook1_noload + schoolbook2 + montgomery_reduction + movdqa PH, SUM + pxor LO, LO + pxor HI, HI + pxor MI, MI + xor IDX, IDX +.LloopPartial: + cmpq BLOCKS_LEFT, IDX # IDX < rdx + jae .LloopExitPartial + + movq BLOCKS_LEFT, TMP + subq IDX, TMP # TMP = rdx - IDX + + cmp $4, TMP # TMP < 4 ? + jl .Llt4Partial + schoolbook1 4 + addq $4, IDX + addq $(4*16), OP1 + addq $(4*16), OP2 + jmp .LoutPartial +.Llt4Partial: + cmp $3, TMP # TMP < 3 ? + jl .Llt3Partial + schoolbook1 3 + addq $3, IDX + addq $(3*16), OP1 + addq $(3*16), OP2 + jmp .LoutPartial +.Llt3Partial: + cmp $2, TMP # TMP < 2 ? + jl .Llt2Partial + schoolbook1 2 + addq $2, IDX + addq $(2*16), OP1 + addq $(2*16), OP2 + jmp .LoutPartial +.Llt2Partial: + schoolbook1 1 # TMP < 1 ? + addq $1, IDX + addq $(1*16), OP1 + addq $(1*16), OP2 +.LoutPartial: + jmp .LloopPartial +.LloopExitPartial: + schoolbook2 + montgomery_reduction + pxor PH, SUM +.endm + +/* + * Perform montgomery multiplication in GF(2^128) and store result in op1. + * + * Computes op1*op2*x^{-128} mod x^128 + x^127 + x^126 + x^121 + 1 + * If op1, op2 are in montgomery form, this computes the montgomery + * form of op1*op2. + * + * void clmul_polyval_mul(u8 *op1, const u8 *op2); + */ +SYM_FUNC_START(clmul_polyval_mul) + FRAME_BEGIN + vmovdqa Lgstar(%rip), GSTAR + pxor LO, LO + pxor HI, HI + pxor MI, MI + movups (%rdi), %xmm0 + movups (%rsi), %xmm1 + schoolbook1_noload + schoolbook2 + montgomery_reduction + movups PH, (%rdi) + FRAME_END + ret +SYM_FUNC_END(clmul_polyval_mul) + +/* + * Perform polynomial evaluation as specified by POLYVAL. If nblocks = k, this + * routine multiplies the value stored at accumulator by h^k and XORs the + * evaluated polynomial into it. + * + * Computes h^k * accumulator + h^kM_0 + ... 
+ h^1M_{k-1} (No constant term) + * + * rdi (OP1) - pointer to message blocks + * rsi - pointer to precomputed key struct + * rdx - number of blocks to hash + * rcx - location to XOR with evaluated polynomial + * + * void clmul_polyval_update(const u8 *in, const struct polyhash_ctx* ctx, + * size_t nblocks, u8* accumulator); + */ +SYM_FUNC_START(clmul_polyval_update) + FRAME_BEGIN + vmovdqa Lgstar(%rip), GSTAR + movups (%rcx), SUM + cmpq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT + jb .LstrideLoopExit + full_stride 0 + subq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT +.LstrideLoop: + cmpq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT + jb .LstrideLoopExitReduce + full_stride 1 + subq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT + jmp .LstrideLoop +.LstrideLoopExitReduce: + montgomery_reduction + movdqa PH, SUM +.LstrideLoopExit: + test BLOCKS_LEFT, BLOCKS_LEFT + je .LskipPartial + partial_stride +.LskipPartial: + movups SUM, (%rcx) + FRAME_END + ret +SYM_FUNC_END(clmul_polyval_update) diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c new file mode 100644 index 000000000000..78cbb19658ac --- /dev/null +++ b/arch/x86/crypto/polyval-clmulni_glue.c @@ -0,0 +1,365 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Accelerated POLYVAL implementation with Intel PCLMULQDQ-NI + * instructions. This file contains glue code. + * + * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen + * Copyright (c) 2009 Intel Corp. + * Author: Huang Ying + * Copyright 2021 Google LLC + */ +/* + * Glue code based on ghash-clmulni-intel_glue.c. + * + * This implementation of POLYVAL uses montgomery multiplication + * accelerated by PCLMULQDQ-NI to implement the finite field + * operations. + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define POLYVAL_BLOCK_SIZE 16 +#define POLYVAL_DIGEST_SIZE 16 +#define NUM_PRECOMPUTE_POWERS 8 + +struct polyval_async_ctx { + struct cryptd_ahash *cryptd_tfm; +}; + +struct polyval_ctx { + /* + * These powers must be in the order h^8, ..., h^1. 
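+ * That is, key_powers[0] = h^8 and key_powers[7] = h^1, so the
+ * assembly can pair the highest power with the oldest message block.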
+ */ + u128 key_powers[NUM_PRECOMPUTE_POWERS]; +}; + +struct polyval_desc_ctx { + u8 buffer[POLYVAL_BLOCK_SIZE]; + u32 bytes; +}; + +asmlinkage void clmul_polyval_update(const u8 *in, struct polyval_ctx *keys, + size_t nblocks, u8 *accumulator); +asmlinkage void clmul_polyval_mul(u8 *op1, const u8 *op2); + +static int polyval_init(struct shash_desc *desc) +{ + struct polyval_desc_ctx *dctx = shash_desc_ctx(desc); + + memset(dctx, 0, sizeof(*dctx)); + + return 0; +} + +static int polyval_setkey(struct crypto_shash *tfm, + const u8 *key, unsigned int keylen) +{ + struct polyval_ctx *ctx = crypto_shash_ctx(tfm); + int i; + + if (keylen != POLYVAL_BLOCK_SIZE) + return -EINVAL; + + BUILD_BUG_ON(sizeof(u128) != POLYVAL_BLOCK_SIZE); + + memcpy(&ctx->key_powers[NUM_PRECOMPUTE_POWERS-1], key, sizeof(u128)); + + kernel_fpu_begin(); + for (i = NUM_PRECOMPUTE_POWERS-2; i >= 0; i--) { + memcpy(&ctx->key_powers[i], key, sizeof(u128)); + clmul_polyval_mul((u8 *)&ctx->key_powers[i], + (u8 *)&ctx->key_powers[i+1]); + } + kernel_fpu_end(); + + return 0; +} + +static int polyval_update(struct shash_desc *desc, + const u8 *src, unsigned int srclen) +{ + struct polyval_desc_ctx *dctx = shash_desc_ctx(desc); + struct polyval_ctx *ctx = crypto_shash_ctx(desc->tfm); + u8 *dst = dctx->buffer; + u8 *pos; + unsigned int nblocks; + int n; + + kernel_fpu_begin(); + if (dctx->bytes) { + n = min(srclen, dctx->bytes); + pos = dst + POLYVAL_BLOCK_SIZE - dctx->bytes; + + dctx->bytes -= n; + srclen -= n; + + while (n--) + *pos++ ^= *src++; + + if (!dctx->bytes) + clmul_polyval_mul(dst, + (u8 *)&ctx->key_powers[NUM_PRECOMPUTE_POWERS-1]); + } + + nblocks = srclen/POLYVAL_BLOCK_SIZE; + clmul_polyval_update(src, ctx, nblocks, dst); + srclen -= nblocks*POLYVAL_BLOCK_SIZE; + kernel_fpu_end(); + + if (srclen) { + dctx->bytes = POLYVAL_BLOCK_SIZE - srclen; + src += nblocks*POLYVAL_BLOCK_SIZE; + pos = dst; + while (srclen--) + *pos++ ^= *src++; + } + + return 0; +} + +static int polyval_final(struct shash_desc *desc, u8 *dst) +{ + struct polyval_desc_ctx *dctx = shash_desc_ctx(desc); + struct polyval_ctx *ctx = crypto_shash_ctx(desc->tfm); + u8 *buf = dctx->buffer; + + if (dctx->bytes) { + kernel_fpu_begin(); + clmul_polyval_mul((u8 *)buf, + (u8 *)&ctx->key_powers[NUM_PRECOMPUTE_POWERS-1]); + kernel_fpu_end(); + } + + dctx->bytes = 0; + memcpy(dst, buf, POLYVAL_BLOCK_SIZE); + + return 0; +} + +static struct shash_alg polyval_alg = { + .digestsize = POLYVAL_DIGEST_SIZE, + .init = polyval_init, + .update = polyval_update, + .final = polyval_final, + .setkey = polyval_setkey, + .descsize = sizeof(struct polyval_desc_ctx), + .base = { + .cra_name = "__polyval", + .cra_driver_name = "__polyval-pclmulqdqni", + .cra_priority = 0, + .cra_flags = CRYPTO_ALG_INTERNAL, + .cra_blocksize = POLYVAL_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct polyval_ctx), + .cra_module = THIS_MODULE, + }, +}; + +static int polyval_async_init(struct ahash_request *req) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm); + struct ahash_request *cryptd_req = ahash_request_ctx(req); + struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm; + struct shash_desc *desc = cryptd_shash_desc(cryptd_req); + struct crypto_shash *child = cryptd_ahash_child(cryptd_tfm); + + desc->tfm = child; + return crypto_shash_init(desc); +} + +static int polyval_async_update(struct ahash_request *req) +{ + struct ahash_request *cryptd_req = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct 
polyval_async_ctx *ctx = crypto_ahash_ctx(tfm); + struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm; + struct shash_desc *desc; + + if (!crypto_simd_usable() || + (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) { + memcpy(cryptd_req, req, sizeof(*req)); + ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base); + return crypto_ahash_update(cryptd_req); + } + desc = cryptd_shash_desc(cryptd_req); + + return shash_ahash_update(req, desc); +} + +static int polyval_async_final(struct ahash_request *req) +{ + struct ahash_request *cryptd_req = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm); + struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm; + struct shash_desc *desc; + + if (!crypto_simd_usable() || + (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) { + memcpy(cryptd_req, req, sizeof(*req)); + ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base); + return crypto_ahash_final(cryptd_req); + } + desc = cryptd_shash_desc(cryptd_req); + + return crypto_shash_final(desc, req->result); +} + +static int polyval_async_import(struct ahash_request *req, const void *in) +{ + struct ahash_request *cryptd_req = ahash_request_ctx(req); + struct shash_desc *desc = cryptd_shash_desc(cryptd_req); + struct polyval_desc_ctx *dctx = shash_desc_ctx(desc); + + polyval_async_init(req); + memcpy(dctx, in, sizeof(*dctx)); + return 0; + +} + +static int polyval_async_export(struct ahash_request *req, void *out) +{ + struct ahash_request *cryptd_req = ahash_request_ctx(req); + struct shash_desc *desc = cryptd_shash_desc(cryptd_req); + struct polyval_desc_ctx *dctx = shash_desc_ctx(desc); + + memcpy(out, dctx, sizeof(*dctx)); + return 0; + +} + +static int polyval_async_digest(struct ahash_request *req) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm); + struct ahash_request *cryptd_req = ahash_request_ctx(req); + struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm; + struct shash_desc *desc; + struct crypto_shash *child; + + if (!crypto_simd_usable() || + (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) { + memcpy(cryptd_req, req, sizeof(*req)); + ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base); + return crypto_ahash_digest(cryptd_req); + } + desc = cryptd_shash_desc(cryptd_req); + child = cryptd_ahash_child(cryptd_tfm); + + desc->tfm = child; + return shash_ahash_digest(req, desc); +} + +static int polyval_async_setkey(struct crypto_ahash *tfm, const u8 *key, + unsigned int keylen) +{ + struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm); + struct crypto_ahash *child = &ctx->cryptd_tfm->base; + + crypto_ahash_clear_flags(child, CRYPTO_TFM_REQ_MASK); + crypto_ahash_set_flags(child, crypto_ahash_get_flags(tfm) + & CRYPTO_TFM_REQ_MASK); + return crypto_ahash_setkey(child, key, keylen); +} + +static int polyval_async_init_tfm(struct crypto_tfm *tfm) +{ + struct cryptd_ahash *cryptd_tfm; + struct polyval_async_ctx *ctx = crypto_tfm_ctx(tfm); + + cryptd_tfm = cryptd_alloc_ahash("__polyval-pclmulqdqni", + CRYPTO_ALG_INTERNAL, + CRYPTO_ALG_INTERNAL); + if (IS_ERR(cryptd_tfm)) + return PTR_ERR(cryptd_tfm); + ctx->cryptd_tfm = cryptd_tfm; + crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), + sizeof(struct ahash_request) + + crypto_ahash_reqsize(&cryptd_tfm->base)); + + return 0; +} + +static void polyval_async_exit_tfm(struct crypto_tfm *tfm) +{ + struct polyval_async_ctx *ctx = crypto_tfm_ctx(tfm); + + cryptd_free_ahash(ctx->cryptd_tfm); +} + +static struct ahash_alg 
polyval_async_alg = { + .init = polyval_async_init, + .update = polyval_async_update, + .final = polyval_async_final, + .setkey = polyval_async_setkey, + .digest = polyval_async_digest, + .export = polyval_async_export, + .import = polyval_async_import, + .halg = { + .digestsize = POLYVAL_DIGEST_SIZE, + .statesize = sizeof(struct polyval_desc_ctx), + .base = { + .cra_name = "polyval", + .cra_driver_name = "polyval-clmulni", + .cra_priority = 200, + .cra_ctxsize = sizeof(struct polyval_async_ctx), + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = POLYVAL_BLOCK_SIZE, + .cra_module = THIS_MODULE, + .cra_init = polyval_async_init_tfm, + .cra_exit = polyval_async_exit_tfm, + }, + }, +}; + +static const struct x86_cpu_id pcmul_cpu_id[] = { + X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL), /* Pickle-Mickle-Duck */ + {} +}; +MODULE_DEVICE_TABLE(x86cpu, pcmul_cpu_id); + +static int __init polyval_pclmulqdqni_mod_init(void) +{ + int err; + + if (!x86_match_cpu(pcmul_cpu_id)) + return -ENODEV; + + err = crypto_register_shash(&polyval_alg); + if (err) + goto err_out; + err = crypto_register_ahash(&polyval_async_alg); + if (err) + goto err_shash; + + return 0; + +err_shash: + crypto_unregister_shash(&polyval_alg); +err_out: + return err; +} + +static void __exit polyval_pclmulqdqni_mod_exit(void) +{ + crypto_unregister_ahash(&polyval_async_alg); + crypto_unregister_shash(&polyval_alg); +} + +module_init(polyval_pclmulqdqni_mod_init); +module_exit(polyval_pclmulqdqni_mod_exit); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("POLYVAL hash function accelerated by PCLMULQDQ-NI"); +MODULE_ALIAS_CRYPTO("polyval"); diff --git a/crypto/Kconfig b/crypto/Kconfig index 2a9029f51caf..b539a5bdc45e 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -773,12 +773,22 @@ config CRYPTO_GHASH config CRYPTO_POLYVAL tristate + select CRYPTO_CRYPTD select CRYPTO_GF128MUL select CRYPTO_HASH help POLYVAL is the hash function used in HCTR2. It is not a general-purpose cryptographic hash function. +config CRYPTO_POLYVAL_CLMUL_NI + tristate "POLYVAL hash function (CLMUL-NI accelerated)" + depends on X86 && 64BIT + select CRYPTO_POLYVAL + help + This is the x86_64 CLMUL-NI accelerated implementation of POLYVAL. It is + used to efficiently implement HCTR2 on x86-64 processors that support + carry-less multiplication instructions. 
+ config CRYPTO_POLY1305 tristate "Poly1305 authenticator algorithm" select CRYPTO_HASH From patchwork Thu Feb 10 23:28:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nathan Huckleberry X-Patchwork-Id: 12742591 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 36171C433EF for ; Thu, 10 Feb 2022 23:28:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345649AbiBJX2u (ORCPT ); Thu, 10 Feb 2022 18:28:50 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:41358 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345661AbiBJX2t (ORCPT ); Thu, 10 Feb 2022 18:28:49 -0500 Received: from mail-vs1-xe49.google.com (mail-vs1-xe49.google.com [IPv6:2607:f8b0:4864:20::e49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0C9E35F6B for ; Thu, 10 Feb 2022 15:28:48 -0800 (PST) Received: by mail-vs1-xe49.google.com with SMTP id b12-20020a67fe8c000000b0031a490e8b9dso644536vsr.15 for ; Thu, 10 Feb 2022 15:28:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=aI+ZZN/XgFx9qNVpwESSjNwjrEMAlNEJZKE9wM7Vg8M=; b=szJN+mEd6nLTGyWwqNr7fYO0/2rktvhHXQztqi/tX90V+UH1y5L56DnLAVXvuM+dZj KZ6aBrbekNOSCcuB1MNM/4UXyLMoVCAw5jnWNZiXlpmrpCn2K00MUoT5nJ6k/BdBQ+1B 4tU4JvSvK2ie5atiyYMBkZY2iIxFQQzy7gH6OSY0toUs9PDHSIrr1susNs1Te5EXhJwN EgB6sVHg8+YTEoi2lzbS563wEaReTAkyWSF4zBxnEPhjKHqE4bU0uDPNWEutBEBHyvkq +Q9pmtChWIia2FBJ8PltcLBnY2ZSWjwBk40Wqx3OokYV0FZ7RwBL30C5YbZbBZFvjRPw pj8Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=aI+ZZN/XgFx9qNVpwESSjNwjrEMAlNEJZKE9wM7Vg8M=; b=mBBw44sTzlPI7K6bXXlMh+/Vj+yjhXaRTS6aM6sEgOscZf9i74uVlm6wtx8SivHV8K uSfaCsEa3IIABHCCy2aA8fLKn5fbmMnonTMpqpf3uEFDzzLN7w/D2rNyQc+NAR7ZkRc2 HkT+6hEqF3SNbLaqUs8ndGOFV3+AZgU1l/q3sGAgV23gUvqfP2Bli7vlU+kEBog2oNH4 8jGaECglvChfQLlXfTEPf9Tuc80cBa5z43ORvsFFdrM4ZjyHyOQ4Iy9DeJHjx/3cWCoR ZWykfidF8GJ8rzhlDPXbq1jTiBL3itKER63jsaZEWH8fG9ZAgh4+gvQ+L33FJWljwJG1 goKg== X-Gm-Message-State: AOAM533/KnetsFWDmePZmHj5UeNINiA/uI6a2RRZGUQirT/DCjTKk+k3 Ya1TY8JpvnR80LG11Du6Wh7CmGyKbPYk4bfNbLmQ3eDNEjnjTG0fuwtfMSN8bFG5MZ//a7i8d2X gGRx0hU3TVsCA3JkeLRHNGwtEdKOTW8NYtqZ/YlLmM3os6cAPybN95J00fR5/4rfRBQE= X-Google-Smtp-Source: ABdhPJzCBY0mrmEviuzKIse40qgMrfXkf3Pj774y8cx6D6m7AiAAbw5HhZBEYCXE4J1+fE0/KjzeZZYC5w== X-Received: from nhuck.c.googlers.com ([fda3:e722:ac3:cc00:14:4d90:c0a8:39cc]) (user=nhuck job=sendgmr) by 2002:a67:fb0b:: with SMTP id d11mr2991237vsr.35.1644535727137; Thu, 10 Feb 2022 15:28:47 -0800 (PST) Date: Thu, 10 Feb 2022 23:28:12 +0000 In-Reply-To: <20220210232812.798387-1-nhuck@google.com> Message-Id: <20220210232812.798387-8-nhuck@google.com> Mime-Version: 1.0 References: <20220210232812.798387-1-nhuck@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [RFC PATCH v2 7/7] crypto: arm64/polyval: Add PMULL accelerated implementation of POLYVAL From: Nathan Huckleberry To: linux-crypto@vger.kernel.org Cc: Herbert Xu , "David S. 
Miller" , linux-arm-kernel@lists.infradead.org, Paul Crowley , Eric Biggers , Sami Tolvanen , Ard Biesheuvel , Nathan Huckleberry Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Add hardware accelerated version of POLYVAL for ARM64 CPUs with Crypto Extension support. This implementation is accelerated using PMULL instructions to perform the finite field computations. For added efficiency, 8 blocks of the message are processed simultaneously by precomputing the first 8 powers of the key. Karatsuba multiplication is used instead of Schoolbook multiplication because it was found to be slightly faster on ARM64 CPUs. Montgomery reduction must be used instead of Barrett reduction due to the difference in modulus between POLYVAL's field and other finite fields. Signed-off-by: Nathan Huckleberry --- Changes since v1: * Rename C, D, E to LO, MI, HI * Add comments explaining POLYVAL * Fix bug in handling of non-block-multiple messages * Wrap shash in ahash (as done in ghash) arch/arm64/crypto/Kconfig | 7 + arch/arm64/crypto/Makefile | 3 + arch/arm64/crypto/polyval-ce-core.S | 405 ++++++++++++++++++++++++++++ arch/arm64/crypto/polyval-ce-glue.c | 365 +++++++++++++++++++++++++ 4 files changed, 780 insertions(+) create mode 100644 arch/arm64/crypto/polyval-ce-core.S create mode 100644 arch/arm64/crypto/polyval-ce-glue.c diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index 897f9a4b5b67..f7fbe8637e5c 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -60,6 +60,13 @@ config CRYPTO_GHASH_ARM64_CE select CRYPTO_GF128MUL select CRYPTO_LIB_AES +config CRYPTO_POLYVAL_ARM64_CE + tristate "POLYVAL using ARMv8 Crypto Extensions (for HCTR2)" + depends on KERNEL_MODE_NEON + select CRYPTO_CRYPTD + select CRYPTO_HASH + select CRYPTO_POLYVAL + config CRYPTO_CRCT10DIF_ARM64_CE tristate "CRCT10DIF digest algorithm using PMULL instructions" depends on KERNEL_MODE_NEON && CRC_T10DIF diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile index 09a805cc32d7..53f9af962b86 100644 --- a/arch/arm64/crypto/Makefile +++ b/arch/arm64/crypto/Makefile @@ -26,6 +26,9 @@ sm4-ce-y := sm4-ce-glue.o sm4-ce-core.o obj-$(CONFIG_CRYPTO_GHASH_ARM64_CE) += ghash-ce.o ghash-ce-y := ghash-ce-glue.o ghash-ce-core.o +obj-$(CONFIG_CRYPTO_POLYVAL_ARM64_CE) += polyval-ce.o +polyval-ce-y := polyval-ce-glue.o polyval-ce-core.o + obj-$(CONFIG_CRYPTO_CRCT10DIF_ARM64_CE) += crct10dif-ce.o crct10dif-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o diff --git a/arch/arm64/crypto/polyval-ce-core.S b/arch/arm64/crypto/polyval-ce-core.S new file mode 100644 index 000000000000..3b2a5adf8987 --- /dev/null +++ b/arch/arm64/crypto/polyval-ce-core.S @@ -0,0 +1,405 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2021 Google LLC + * + * Use of this source code is governed by an MIT-style + * license that can be found in the LICENSE file or at + * https://opensource.org/licenses/MIT. + */ +/* + * This is an efficient implementation of POLYVAL using ARMv8 Crypto Extension + * instructions. It works on 8 blocks at a time, by precomputing the first 8 + * keys powers h^8, ..., h^1 in the POLYVAL finite field. This precomputation + * allows us to split finite field multiplication into two steps. + * + * In the first step, we consider h^i, m_i as normal polynomials of degree less + * than 128. We then compute p(x) = h^8m_0 + ... + h^1m_7 where multiplication + * is simply polynomial multiplication. 
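+ * Here each 128-bit product is formed with Karatsuba multiplication
+ * (three 64-bit PMULLs per block; see karatsuba1 below), which was
+ * measured to be slightly faster than schoolbook multiplication on
+ * ARM64 cores.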
+ * + * In the second step, we compute the reduction of p(x) modulo the finite field + * modulus g(x) = x^128 + x^127 + x^126 + x^121 + 1. + * + * This two step process is equivalent to computing h^8m_0 + ... + h^1m_7 where + * multiplication is finite field multiplication. The advantage is that the + * two-step process only requires 1 finite field reduction for every 8 + * polynomial multiplications. Further parallelism is gained by interleaving the + * multiplications and polynomial reductions. + */ + +#include +#define NUM_PRECOMPUTE_POWERS 8 + +BLOCKS_LEFT .req x2 +OP1 .req x9 +KEY_START .req x10 +EXTRA_BYTES .req x11 +IND .req x12 +TMP .req x13 +PARTIAL_LEFT .req x14 + +M0 .req v0 +M1 .req v1 +M2 .req v2 +M3 .req v3 +M4 .req v4 +M5 .req v5 +M6 .req v6 +M7 .req v7 +KEY8 .req v8 +KEY7 .req v9 +KEY6 .req v10 +KEY5 .req v11 +KEY4 .req v12 +KEY3 .req v13 +KEY2 .req v14 +KEY1 .req v15 +PL .req v16 +PH .req v17 +T .req v18 +V .req v19 +LO .req v20 +MI .req v21 +HI .req v22 +SUM .req v23 +GSTAR .req v24 + + .text + .align 4 + + .arch armv8-a+crypto + .align 4 + +.Lgstar: + .quad 0xc200000000000000, 0xc200000000000000 + +/* + * Computes the product of two 128-bit polynomials in X and Y and XORs the + * components of the 256-bit product into LO, MI, HI. + * + * The multiplication produces four parts: + * LOW: The polynomial given by performing carryless multiplication of X_L and + * Y_L + * MID: The polynomial given by performing carryless multiplication of (X_L ^ + * X_H) and (Y_L ^ Y_H) + * HIGH: The polynomial given by performing carryless multiplication of X_H + * and Y_H + * + * We compute: + * LO ^= LOW + * MI ^= MID + * HI ^= HIGH + * + * Later, the 256-bit result can be extracted as: + * [HI_H : HI_L ^ HI_H ^ MI_H ^ LO_H :: LO_H ^ HI_L ^ MI_L ^ LO_L : LO_L] + * This step is done when computing the polynomial reduction for efficiency + * reasons. + */ +.macro karatsuba1 X Y + X .req \X + Y .req \Y + ext v25.16b, X.16b, Y.16b, #8 + eor v25.16b, v25.16b, X.16b + ext v26.16b, Y.16b, Y.16b, #8 + eor v26.16b, v26.16b, Y.16b + pmull v26.1q, v25.1d, v26.1d + pmull2 v25.1q, X.2d, Y.2d + pmull X.1q, X.1d, Y.1d + eor HI.16b, HI.16b, v26.16b + eor LO.16b, LO.16b, v25.16b + eor MI.16b, MI.16b, X.16b + .unreq X + .unreq Y +.endm + +/* + * Computes the 256-bit polynomial represented by LO, HI, MI. Stores + * the result in PL, PH. + * [PH :: PL] = [HI_H : HI_L ^ HI_H ^ MI_H ^ LO_H :: LO_H ^ HI_L ^ MI_L ^ LO_L + * : LO_L] + */ +.macro karatsuba2 + ext v4.16b, MI.16b, LO.16b, #8 + eor HI.16b, HI.16b, v4.16b //[HI1 ^ LO0 : HI0 ^ MI1] + eor v4.16b, LO.16b, MI.16b //[LO1 ^ MI1 : LO0 ^ MI0] + //[LO0 ^ LO1 ^ MI1 ^ HI1 : MI1 ^ LO0 ^ MI0 ^ HI0] + eor v4.16b, HI.16b, v4.16b + ext LO.16b, LO.16b, LO.16b, #8 // [LO0 : LO1] + ext MI.16b, MI.16b, MI.16b, #8 // [MI0 : MI1] + ext PH.16b, v4.16b, LO.16b, #8 //[LO1 : LO1 ^ MI1 ^ HI1 ^ LO0] + ext PL.16b, MI.16b, v4.16b, #8 //[MI1 ^ LO0 ^ MI0 ^ HI0 : MI0] +.endm + +/* + * Computes the 128-bit reduction of PL, PH. Stores the result in PH. + * + * This macro computes p(x) mod g(x) where p(x) is in montgomery form and g(x) = + * x^128 + x^127 + x^126 + x^121 + 1. + * + * The montgomery form of a polynomial p(x) is p(x)x^{128}. Montgomery reduction + * works by simultaneously dividing by x^{128} and computing the modular + * reduction. + * + * Suppose we wish to reduce the montgomery form of p(x) = [P_3 : P_2 : P_1 : + * P_0] where P_i is a polynomial of degree at most 64 represented as 64-bits. 
+ * Thus we would like to compute:
+ * p(x) / x^{128} mod g(x)
+ * = (P_3*x^{192} + P_2*x^{128} + P_1*x^{64} + P_0) / x^{128} mod g(x)
+ *
+ * We would like to divide by x^{128} efficiently. Since P_3*x^{192} and
+ * P_2*x^{128} are multiples of x^{128}, we can simply bitshift right by 128.
+ *
+ * We now focus on dividing P_1*x^{64} + P_0 by x^{128}. We do this by making
+ * P_1*x^{64} + P_0 divisible by x^{128} and then bitshifting. To get
+ * divisibility, we consider the polynomials mod x^{128}.
+ *
+ * Let c(x) = P_1*x^{64} + P_0.
+ *
+ * Now let m(x) = c(x) mod x^{128}
+ * and
+ * let z(x) = [c(x) + m(x)*g(x)] / x^{128}
+ *
+ * First notice that:
+ * c(x) + m(x)*g(x) = c(x) mod g(x).
+ * Furthermore, g(x) mod x^{128} = 1, so we have
+ * c(x) + m(x)*g(x) = c(x) + c(x) = 0 (mod x^{128}).
+ *
+ * Thus c(x) + m(x)*g(x) is divisible by x^{128} and is equivalent to c(x)
+ * mod g(x).
+ *
+ * In practice we use a slight modification of this idea, using g*(x) =
+ * x^{63} + x^{62} + x^{57}. This is because we can only multiply 64-bit
+ * polynomials. Notice that g(x) = x^{128} + g*(x)*x^{64} + 1.
+ *
+ * We do this by substituting g(x) = x^{128} + g*(x)*x^{64} + 1:
+ * z(x) = [c(x) + c(x)*(x^{128} + g*(x)*x^{64} + 1)] / x^{128}
+ *      = [P_1*x^{192} + P_0*x^{128} + P_1*g*(x)*x^{128} + P_0*g*(x)*x^{64}]
+ *        / x^{128}
+ *      = P_1*x^{64} + P_0 + P_1*g*(x) + P_0*g*(x)*x^{-64}
+ *
+ * The only difficulty left in this expression is P_0*g*(x)*x^{-64}.
+ * Let t(x) = P_0*g*(x) = [T_1 : T_0].
+ * Notice that we can repeat the above process:
+ * g(x) mod x^{64} = 1
+ * m'(x) = t(x) mod x^{64}
+ * z'(x) = [t(x) + m'(x)*g(x)] / x^{64}
+ * Thus we get
+ * z'(x) = [t(x) + (x^{128} + g*(x)*x^{64} + 1)*T_0] / x^{64}
+ *       = T_1 + T_0*x^{64} + g*(x)*T_0
+ *
+ * Recall that this is only the reduction of [P_1*x^{64} + P_0] / x^{128}.
+ * The full computation we need to make is:
+ * p(x) / x^{128} = P_3*x^{64} + P_2 + P_1*x^{64} + P_0 + P_1*g*(x)
+ *                + T_1 + T_0*x^{64} + g*(x)*T_0
+ *
+ * Thus we have:
+ * t(x) = g*(x) * P_0 = [T_1 : T_0]
+ * v(x) = g*(x) * (T_0 ^ P_1) = [V_1 : V_0]
+ * p(x) / x^{128} mod g(x) = [P_3 ^ P_1 ^ V_1 ^ T_0 : P_2 ^ P_0 ^ V_0 ^ T_1]
+ */
+.macro montgomery_reduction
+	pmull	T.1q, GSTAR.1d, PL.1d
+	ext	T.16b, T.16b, T.16b, #8
+	eor	PL.16b, PL.16b, T.16b
+	pmull2	V.1q, GSTAR.2d, PL.2d
+	eor	V.16b, PL.16b, V.16b
+	eor	PH.16b, PH.16b, V.16b
+.endm
+
+/*
+ * Compute POLYVAL on 8 blocks.
+ *
+ * If reduce is set, also performs the interleaved montgomery reduction
+ * of the previous full_stride iteration's PL, PH.
+ *
+ * Sets PL, PH.
+ */
+.macro full_stride reduce
+	.set reduce, \reduce
+	eor	LO.16b, LO.16b, LO.16b
+	eor	MI.16b, MI.16b, MI.16b
+	eor	HI.16b, HI.16b, HI.16b
+
+	ld1	{M0.16b, M1.16b, M2.16b, M3.16b}, [x0], #64
+	ld1	{M4.16b, M5.16b, M6.16b, M7.16b}, [x0], #64
+
+	karatsuba1 M7 KEY1
+	.if (reduce)
+	pmull	T.1q, GSTAR.1d, PL.1d
+	.endif
+
+	karatsuba1 M6 KEY2
+	.if (reduce)
+	ext	T.16b, T.16b, T.16b, #8
+	.endif
+
+	karatsuba1 M5 KEY3
+	.if (reduce)
+	eor	PL.16b, PL.16b, T.16b
+	.endif
+
+	karatsuba1 M4 KEY4
+	.if (reduce)
+	pmull2	V.1q, GSTAR.2d, PL.2d
+	.endif
+
+	karatsuba1 M3 KEY5
+	.if (reduce)
+	eor	V.16b, PL.16b, V.16b
+	.endif
+
+	karatsuba1 M2 KEY6
+	.if (reduce)
+	eor	PH.16b, PH.16b, V.16b
+	.endif
+
+	karatsuba1 M1 KEY7
+	.if (reduce)
+	mov	SUM.16b, PH.16b
+	.endif
+	eor	M0.16b, M0.16b, SUM.16b
+
+	karatsuba1 M0 KEY8
+
+	karatsuba2
+.endm
+
+/*
+ * Handle any extra blocks before the
+ * full_stride loop.
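+ *
+ * When nblocks is not a multiple of 8, the leading (nblocks mod 8) blocks
+ * are handled here: the accumulator is first multiplied by
+ * h^(nblocks mod 8), and the extra blocks are then multiplied against the
+ * low key powers h^(nblocks mod 8), ..., h^1 and folded in.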
+ */
+.macro partial_stride
+	eor	LO.16b, LO.16b, LO.16b
+	eor	MI.16b, MI.16b, MI.16b
+	eor	HI.16b, HI.16b, HI.16b
+	add	KEY_START, x1, #(NUM_PRECOMPUTE_POWERS << 4)
+	sub	KEY_START, KEY_START, PARTIAL_LEFT, lsl #4
+	ld1	{v0.16b}, [KEY_START]
+	mov	v1.16b, SUM.16b
+	karatsuba1 v0 v1
+	karatsuba2
+	montgomery_reduction
+	mov	SUM.16b, PH.16b
+	eor	LO.16b, LO.16b, LO.16b
+	eor	MI.16b, MI.16b, MI.16b
+	eor	HI.16b, HI.16b, HI.16b
+	mov	IND, XZR
+.LloopPartial:
+	cmp	IND, PARTIAL_LEFT
+	bge	.LloopExitPartial
+
+	sub	TMP, IND, PARTIAL_LEFT
+
+	cmp	TMP, #-4
+	bgt	.Lgt4Partial
+	ld1	{M0.16b, M1.16b, M2.16b, M3.16b}, [x0], #64
+	// Clobber key registers
+	ld1	{KEY8.16b, KEY7.16b, KEY6.16b, KEY5.16b}, [KEY_START], #64
+	karatsuba1 M0 KEY8
+	karatsuba1 M1 KEY7
+	karatsuba1 M2 KEY6
+	karatsuba1 M3 KEY5
+	add	IND, IND, #4
+	b	.LoutPartial
+
+.Lgt4Partial:
+	cmp	TMP, #-3
+	bgt	.Lgt3Partial
+	ld1	{M0.16b, M1.16b, M2.16b}, [x0], #48
+	// Clobber key registers
+	ld1	{KEY8.16b, KEY7.16b, KEY6.16b}, [KEY_START], #48
+	karatsuba1 M0 KEY8
+	karatsuba1 M1 KEY7
+	karatsuba1 M2 KEY6
+	add	IND, IND, #3
+	b	.LoutPartial
+
+.Lgt3Partial:
+	cmp	TMP, #-2
+	bgt	.Lgt2Partial
+	ld1	{M0.16b, M1.16b}, [x0], #32
+	// Clobber key registers
+	ld1	{KEY8.16b, KEY7.16b}, [KEY_START], #32
+	karatsuba1 M0 KEY8
+	karatsuba1 M1 KEY7
+	add	IND, IND, #2
+	b	.LoutPartial
+
+.Lgt2Partial:
+	ld1	{M0.16b}, [x0], #16
+	// Clobber key registers
+	ld1	{KEY8.16b}, [KEY_START], #16
+	karatsuba1 M0 KEY8
+	add	IND, IND, #1
+.LoutPartial:
+	b	.LloopPartial
+.LloopExitPartial:
+	karatsuba2
+	montgomery_reduction
+	eor	SUM.16b, SUM.16b, PH.16b
+.endm
+
+/*
+ * Perform montgomery multiplication in GF(2^128) and store result in op1.
+ *
+ * Computes op1*op2*x^{-128} mod x^128 + x^127 + x^126 + x^121 + 1.
+ * If op1, op2 are in montgomery form, this computes the montgomery
+ * form of op1*op2.
+ *
+ * void pmull_polyval_mul(u8 *op1, const u8 *op2);
+ */
+SYM_FUNC_START(pmull_polyval_mul)
+	adr	TMP, .Lgstar
+	ld1	{GSTAR.2d}, [TMP]
+	eor	LO.16b, LO.16b, LO.16b
+	eor	MI.16b, MI.16b, MI.16b
+	eor	HI.16b, HI.16b, HI.16b
+	ld1	{v0.16b}, [x0]
+	ld1	{v1.16b}, [x1]
+	karatsuba1 v0 v1
+	karatsuba2
+	montgomery_reduction
+	st1	{PH.16b}, [x0]
+	ret
+SYM_FUNC_END(pmull_polyval_mul)
+
+/*
+ * Perform polynomial evaluation as specified by POLYVAL. If nblocks = k, this
+ * routine multiplies the value stored at accumulator by h^k and XORs the
+ * evaluated polynomial into it.
+ *
+ * Computes h^k * accumulator + h^k*M_0 + ... + h^1*M_{k-1} (no constant term)
+ *
+ * x0 (OP1) - pointer to message blocks
+ * x1 - pointer to precomputed key struct
+ * x2 - number of blocks to hash
+ * x3 - location to XOR with evaluated polynomial
+ *
+ * void pmull_polyval_update(const u8 *in, const struct polyval_ctx *ctx,
+ *			     size_t nblocks, u8 *accumulator);
+ */
+SYM_FUNC_START(pmull_polyval_update)
+	adr	TMP, .Lgstar
+	ld1	{GSTAR.2d}, [TMP]
+	ld1	{SUM.16b}, [x3]
+	ands	PARTIAL_LEFT, BLOCKS_LEFT, #7
+	beq	.LskipPartial
+	partial_stride
+.LskipPartial:
+	subs	BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
+	blt	.LstrideLoopExit
+	ld1	{KEY8.16b, KEY7.16b, KEY6.16b, KEY5.16b}, [x1], #64
+	ld1	{KEY4.16b, KEY3.16b, KEY2.16b, KEY1.16b}, [x1], #64
+	full_stride 0
+	subs	BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
+	blt	.LstrideLoopExitReduce
+.LstrideLoop:
+	full_stride 1
+	subs	BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
+	bge	.LstrideLoop
+.LstrideLoopExitReduce:
+	montgomery_reduction
+	mov	SUM.16b, PH.16b
+.LstrideLoopExit:
+	st1	{SUM.16b}, [x3]
+	ret
+SYM_FUNC_END(pmull_polyval_update)
diff --git a/arch/arm64/crypto/polyval-ce-glue.c b/arch/arm64/crypto/polyval-ce-glue.c
new file mode 100644
index 000000000000..ca92e027b4ec
--- /dev/null
+++ b/arch/arm64/crypto/polyval-ce-glue.c
@@ -0,0 +1,365 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Accelerated POLYVAL implementation with ARMv8 Crypto Extension
+ * instructions. This file contains glue code.
+ *
+ * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen
+ * Copyright (c) 2009 Intel Corp.
+ *   Author: Huang Ying
+ * Copyright 2021 Google LLC
+ */
+/*
+ * Glue code based on ghash-clmulni-intel_glue.c.
+ *
+ * This implementation of POLYVAL uses montgomery multiplication accelerated
+ * by ARMv8 Crypto Extension instructions to implement the finite field
+ * operations.
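+ *
+ * The eight key powers h^8, ..., h^1 are precomputed at setkey time and
+ * stored highest power first, so key_powers[NUM_PRECOMPUTE_POWERS - 1]
+ * holds h itself.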
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define POLYVAL_BLOCK_SIZE	16
+#define POLYVAL_DIGEST_SIZE	16
+#define NUM_PRECOMPUTE_POWERS	8
+
+struct polyval_async_ctx {
+	struct cryptd_ahash *cryptd_tfm;
+};
+
+struct polyval_ctx {
+	be128 key_powers[NUM_PRECOMPUTE_POWERS];
+};
+
+struct polyval_desc_ctx {
+	u8 buffer[POLYVAL_BLOCK_SIZE];
+	u32 bytes;
+};
+
+asmlinkage void pmull_polyval_update(const u8 *in,
+				     const struct polyval_ctx *ctx,
+				     size_t nblocks, u8 *accumulator);
+asmlinkage void pmull_polyval_mul(u8 *op1, const u8 *op2);
+
+static int polyval_init(struct shash_desc *desc)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	memset(dctx, 0, sizeof(*dctx));
+
+	return 0;
+}
+
+static int polyval_setkey(struct crypto_shash *tfm,
+			  const u8 *key, unsigned int keylen)
+{
+	struct polyval_ctx *ctx = crypto_shash_ctx(tfm);
+	int i;
+
+	if (keylen != POLYVAL_BLOCK_SIZE)
+		return -EINVAL;
+
+	BUILD_BUG_ON(sizeof(u128) != POLYVAL_BLOCK_SIZE);
+
+	memcpy(&ctx->key_powers[NUM_PRECOMPUTE_POWERS-1], key, sizeof(be128));
+
+	kernel_neon_begin();
+	for (i = NUM_PRECOMPUTE_POWERS-2; i >= 0; i--) {
+		memcpy(&ctx->key_powers[i], key, sizeof(be128));
+		pmull_polyval_mul((u8 *)&ctx->key_powers[i],
+				  (u8 *)&ctx->key_powers[i+1]);
+	}
+	kernel_neon_end();
+
+	return 0;
+}
+
+static int polyval_update(struct shash_desc *desc,
+			  const u8 *src, unsigned int srclen)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+	struct polyval_ctx *ctx = crypto_shash_ctx(desc->tfm);
+	u8 *dst = dctx->buffer;
+	u8 *pos;
+	unsigned int nblocks;
+	unsigned int n;
+
+	kernel_neon_begin();
+	if (dctx->bytes) {
+		n = min(srclen, dctx->bytes);
+		pos = dst + POLYVAL_BLOCK_SIZE - dctx->bytes;
+
+		dctx->bytes -= n;
+		srclen -= n;
+
+		while (n--)
+			*pos++ ^= *src++;
+
+		if (!dctx->bytes)
+			pmull_polyval_mul(dst,
+				(u8 *)&ctx->key_powers[NUM_PRECOMPUTE_POWERS-1]);
+	}
+
+	nblocks = srclen/POLYVAL_BLOCK_SIZE;
+	pmull_polyval_update(src, ctx, nblocks, dst);
+	srclen -= nblocks*POLYVAL_BLOCK_SIZE;
+	kernel_neon_end();
+
+	if (srclen) {
+		dctx->bytes = POLYVAL_BLOCK_SIZE - srclen;
+		src += nblocks*POLYVAL_BLOCK_SIZE;
+		pos = dst;
+		while (srclen--)
+			*pos++ ^= *src++;
+	}
+
+	return 0;
+}
+
+static int polyval_final(struct shash_desc *desc, u8 *dst)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+	struct polyval_ctx *ctx = crypto_shash_ctx(desc->tfm);
+	u8 *buf = dctx->buffer;
+
+	if (dctx->bytes) {
+		kernel_neon_begin();
+		pmull_polyval_mul(buf,
+			(u8 *)&ctx->key_powers[NUM_PRECOMPUTE_POWERS-1]);
+		kernel_neon_end();
+	}
+
+	dctx->bytes = 0;
+	memcpy(dst, buf, POLYVAL_BLOCK_SIZE);
+
+	return 0;
+}
+
+static struct shash_alg polyval_alg = {
+	.digestsize	= POLYVAL_DIGEST_SIZE,
+	.init		= polyval_init,
+	.update		= polyval_update,
+	.final		= polyval_final,
+	.setkey		= polyval_setkey,
+	.descsize	= sizeof(struct polyval_desc_ctx),
+	.base		= {
+		.cra_name		= "__polyval",
+		.cra_driver_name	= "__polyval-ce",
+		.cra_priority		= 0,
+		.cra_flags		= CRYPTO_ALG_INTERNAL,
+		.cra_blocksize		= POLYVAL_BLOCK_SIZE,
+		.cra_ctxsize		= sizeof(struct polyval_ctx),
+		.cra_module		= THIS_MODULE,
+	},
+};
+
+static int polyval_async_init(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
+	struct crypto_shash *child = cryptd_ahash_child(cryptd_tfm);
+
+	desc->tfm = child;
+	return crypto_shash_init(desc);
+}
+
+static int polyval_async_update(struct ahash_request *req)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc;
+
+	if (!crypto_simd_usable() ||
+	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
+		memcpy(cryptd_req, req, sizeof(*req));
+		ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
+		return crypto_ahash_update(cryptd_req);
+	}
+	desc = cryptd_shash_desc(cryptd_req);
+
+	return shash_ahash_update(req, desc);
+}
+
+static int polyval_async_final(struct ahash_request *req)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc;
+
+	if (!crypto_simd_usable() ||
+	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
+		memcpy(cryptd_req, req, sizeof(*req));
+		ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
+		return crypto_ahash_final(cryptd_req);
+	}
+	desc = cryptd_shash_desc(cryptd_req);
+
+	return crypto_shash_final(desc, req->result);
+}
+
+static int polyval_async_import(struct ahash_request *req, const void *in)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	polyval_async_init(req);
+	memcpy(dctx, in, sizeof(*dctx));
+	return 0;
+}
+
+static int polyval_async_export(struct ahash_request *req, void *out)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	memcpy(out, dctx, sizeof(*dctx));
+	return 0;
+}
+
+static int polyval_async_digest(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc;
+	struct crypto_shash *child;
+
+	if (!crypto_simd_usable() ||
+	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
+		memcpy(cryptd_req, req, sizeof(*req));
+		ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
+		return crypto_ahash_digest(cryptd_req);
+	}
+	desc = cryptd_shash_desc(cryptd_req);
+	child = cryptd_ahash_child(cryptd_tfm);
+
+	desc->tfm = child;
+	return shash_ahash_digest(req, desc);
+}
+
+static int polyval_async_setkey(struct crypto_ahash *tfm, const u8 *key,
+				unsigned int keylen)
+{
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct crypto_ahash *child = &ctx->cryptd_tfm->base;
+
+	crypto_ahash_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+	crypto_ahash_set_flags(child, crypto_ahash_get_flags(tfm)
+				      & CRYPTO_TFM_REQ_MASK);
+	return crypto_ahash_setkey(child, key, keylen);
+}
+
+static int polyval_async_init_tfm(struct crypto_tfm *tfm)
+{
+	struct cryptd_ahash *cryptd_tfm;
+	struct polyval_async_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	cryptd_tfm = cryptd_alloc_ahash("__polyval-ce",
+					CRYPTO_ALG_INTERNAL,
+					CRYPTO_ALG_INTERNAL);
+	if (IS_ERR(cryptd_tfm))
+		return PTR_ERR(cryptd_tfm);
+	ctx->cryptd_tfm = cryptd_tfm;
	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
				 sizeof(struct ahash_request) +
				 crypto_ahash_reqsize(&cryptd_tfm->base));

	return 0;
}

static void polyval_async_exit_tfm(struct crypto_tfm *tfm)
{
	struct polyval_async_ctx *ctx = crypto_tfm_ctx(tfm);

	cryptd_free_ahash(ctx->cryptd_tfm);
}

static struct ahash_alg polyval_async_alg = {
	.init		= polyval_async_init,
	.update		= polyval_async_update,
	.final		= polyval_async_final,
	.setkey		= polyval_async_setkey,
	.digest		= polyval_async_digest,
	.export		= polyval_async_export,
	.import		= polyval_async_import,
	.halg = {
		.digestsize	= POLYVAL_DIGEST_SIZE,
		.statesize	= sizeof(struct polyval_desc_ctx),
		.base = {
			.cra_name		= "polyval",
			.cra_driver_name	= "polyval-ce",
			.cra_priority		= 200,
			.cra_ctxsize		= sizeof(struct polyval_async_ctx),
			.cra_flags		= CRYPTO_ALG_ASYNC,
			.cra_blocksize		= POLYVAL_BLOCK_SIZE,
			.cra_module		= THIS_MODULE,
			.cra_init		= polyval_async_init_tfm,
			.cra_exit		= polyval_async_exit_tfm,
		},
	},
};

static int __init polyval_ce_mod_init(void)
{
	int err;

	if (!cpu_have_named_feature(ASIMD))
		return -ENODEV;

	if (!cpu_have_named_feature(PMULL))
		return -ENODEV;

	err = crypto_register_shash(&polyval_alg);
	if (err)
		goto err_out;
	err = crypto_register_ahash(&polyval_async_alg);
	if (err)
		goto err_shash;

	return 0;

err_shash:
	crypto_unregister_shash(&polyval_alg);
err_out:
	return err;
}

static void __exit polyval_ce_mod_exit(void)
{
	crypto_unregister_ahash(&polyval_async_alg);
	crypto_unregister_shash(&polyval_alg);
}

static const struct cpu_feature polyval_cpu_feature[] = {
	{ cpu_feature(PMULL) }, { }
};
MODULE_DEVICE_TABLE(cpu, polyval_cpu_feature);

module_init(polyval_ce_mod_init);
module_exit(polyval_ce_mod_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("POLYVAL hash function accelerated by ARMv8 Crypto Extension");
MODULE_ALIAS_CRYPTO("polyval");
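For context, the snippet below is a minimal sketch (not part of this patch) of
how another kernel module could drive the resulting "polyval" ahash through
the standard crypto API. The function name polyval_digest_example is made up
for illustration; it assumes process context, a 16-byte key, a non-empty
message, and copies the message to the heap because scatterlists must not
reference stack memory.

#include <crypto/hash.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/string.h>

static int polyval_digest_example(const u8 key[16], const u8 *msg,
				  unsigned int len, u8 out[16])
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	u8 *buf;
	int err;

	/* Binds to the highest-priority "polyval" driver, e.g. polyval-ce. */
	tfm = crypto_alloc_ahash("polyval", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_ahash_setkey(tfm, key, 16);
	if (err)
		goto out_free_tfm;

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	/* Hash a heap copy, since sg entries cannot point at the stack. */
	buf = kmemdup(msg, len, GFP_KERNEL);
	if (!buf) {
		err = -ENOMEM;
		goto out_free_req;
	}
	sg_init_one(&sg, buf, len);

	/* The cryptd-backed tfm may complete asynchronously; wait for it. */
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				   CRYPTO_TFM_REQ_MAY_SLEEP,
				   crypto_req_done, &wait);
	ahash_request_set_crypt(req, &sg, out, len);
	err = crypto_wait_req(crypto_ahash_digest(req), &wait);

	kfree(buf);
out_free_req:
	ahash_request_free(req);
out_free_tfm:
	crypto_free_ahash(tfm);
	return err;
}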