From patchwork Tue Feb 20 07:48:04 2018
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 10229495
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: Herbert Xu, Jussi Kivilinna, Eric Biggers
Subject: [PATCH 06/30] crypto: x86/serpent-avx2 - remove LRW algorithm
Date: Mon, 19 Feb 2018 23:48:04 -0800
Message-Id: <20180220074828.2050-7-ebiggers3@gmail.com>
X-Mailer: git-send-email 2.16.2
In-Reply-To: <20180220074828.2050-1-ebiggers3@gmail.com>
References: <20180220074828.2050-1-ebiggers3@gmail.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Eric Biggers

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-serpent-avx2 algorithm which did this.  Users who request
lrw(serpent) and previously would have gotten lrw-serpent-avx2 will now
get lrw(ecb-serpent-avx2) instead, which is just as fast.

Signed-off-by: Eric Biggers
---
 arch/x86/crypto/serpent_avx2_glue.c | 176 +-----------------------------------
 crypto/Kconfig                      |   1 -
 2 files changed, 1 insertion(+), 176 deletions(-)
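Note: as a quick illustration (not part of this patch), here is a minimal,
hypothetical kernel-module sketch of how a crypto API user still obtains
lrw(serpent) after this change.  The crypto API calls are real; the demo
function and module names are invented.  An LRW key is the Serpent key
followed by one block-size (16-byte) tweak key, which is why the template
advertises keysizes of SERPENT_{MIN,MAX}_KEY_SIZE + SERPENT_BLOCK_SIZE.

#include <crypto/skcipher.h>
#include <linux/err.h>
#include <linux/module.h>

/* Hypothetical demo: report which driver now backs "lrw(serpent)". */
static int __init lrw_serpent_demo_init(void)
{
	struct crypto_skcipher *tfm;

	tfm = crypto_alloc_skcipher("lrw(serpent)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* On an AVX2-capable CPU this should print lrw(ecb-serpent-avx2). */
	pr_info("lrw(serpent) is backed by %s\n",
		crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm)));

	crypto_free_skcipher(tfm);
	return 0;
}

static void __exit lrw_serpent_demo_exit(void)
{
}

module_init(lrw_serpent_demo_init);
module_exit(lrw_serpent_demo_exit);
MODULE_LICENSE("GPL");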
diff --git a/arch/x86/crypto/serpent_avx2_glue.c b/arch/x86/crypto/serpent_avx2_glue.c
index 870f6d812a2dd..2bd0f0459db43 100644
--- a/arch/x86/crypto/serpent_avx2_glue.c
+++ b/arch/x86/crypto/serpent_avx2_glue.c
@@ -168,122 +168,6 @@ static int ctr_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 	return glue_ctr_crypt_128bit(&serpent_ctr, desc, dst, src, nbytes);
 }
 
-static inline bool serpent_fpu_begin(bool fpu_enabled, unsigned int nbytes)
-{
-	/* since reusing AVX functions, starts using FPU at 8 parallel blocks */
-	return glue_fpu_begin(SERPENT_BLOCK_SIZE, 8, NULL, fpu_enabled, nbytes);
-}
-
-static inline void serpent_fpu_end(bool fpu_enabled)
-{
-	glue_fpu_end(fpu_enabled);
-}
-
-struct crypt_priv {
-	struct serpent_ctx *ctx;
-	bool fpu_enabled;
-};
-
-static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
-{
-	const unsigned int bsize = SERPENT_BLOCK_SIZE;
-	struct crypt_priv *ctx = priv;
-	int i;
-
-	ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
-
-	if (nbytes >= SERPENT_AVX2_PARALLEL_BLOCKS * bsize) {
-		serpent_ecb_enc_16way(ctx->ctx, srcdst, srcdst);
-		srcdst += bsize * SERPENT_AVX2_PARALLEL_BLOCKS;
-		nbytes -= bsize * SERPENT_AVX2_PARALLEL_BLOCKS;
-	}
-
-	while (nbytes >= SERPENT_PARALLEL_BLOCKS * bsize) {
-		serpent_ecb_enc_8way_avx(ctx->ctx, srcdst, srcdst);
-		srcdst += bsize * SERPENT_PARALLEL_BLOCKS;
-		nbytes -= bsize * SERPENT_PARALLEL_BLOCKS;
-	}
-
-	for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
-		__serpent_encrypt(ctx->ctx, srcdst, srcdst);
-}
-
-static void decrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
-{
-	const unsigned int bsize = SERPENT_BLOCK_SIZE;
-	struct crypt_priv *ctx = priv;
-	int i;
-
-	ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
-
-	if (nbytes >= SERPENT_AVX2_PARALLEL_BLOCKS * bsize) {
-		serpent_ecb_dec_16way(ctx->ctx, srcdst, srcdst);
-		srcdst += bsize * SERPENT_AVX2_PARALLEL_BLOCKS;
-		nbytes -= bsize * SERPENT_AVX2_PARALLEL_BLOCKS;
-	}
-
-	while (nbytes >= SERPENT_PARALLEL_BLOCKS * bsize) {
-		serpent_ecb_dec_8way_avx(ctx->ctx, srcdst, srcdst);
-		srcdst += bsize * SERPENT_PARALLEL_BLOCKS;
-		nbytes -= bsize * SERPENT_PARALLEL_BLOCKS;
-	}
-
-	for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
-		__serpent_decrypt(ctx->ctx, srcdst, srcdst);
-}
-
-static int lrw_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
-		       struct scatterlist *src, unsigned int nbytes)
-{
-	struct serpent_lrw_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
-	be128 buf[SERPENT_AVX2_PARALLEL_BLOCKS];
-	struct crypt_priv crypt_ctx = {
-		.ctx = &ctx->serpent_ctx,
-		.fpu_enabled = false,
-	};
-	struct lrw_crypt_req req = {
-		.tbuf = buf,
-		.tbuflen = sizeof(buf),
-
-		.table_ctx = &ctx->lrw_table,
-		.crypt_ctx = &crypt_ctx,
-		.crypt_fn = encrypt_callback,
-	};
-	int ret;
-
-	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
-	ret = lrw_crypt(desc, dst, src, nbytes, &req);
-	serpent_fpu_end(crypt_ctx.fpu_enabled);
-
-	return ret;
-}
-
-static int lrw_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
-		       struct scatterlist *src, unsigned int nbytes)
-{
-	struct serpent_lrw_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
-	be128 buf[SERPENT_AVX2_PARALLEL_BLOCKS];
-	struct crypt_priv crypt_ctx = {
-		.ctx = &ctx->serpent_ctx,
-		.fpu_enabled = false,
-	};
-	struct lrw_crypt_req req = {
-		.tbuf = buf,
-		.tbuflen = sizeof(buf),
-
-		.table_ctx = &ctx->lrw_table,
-		.crypt_ctx = &crypt_ctx,
-		.crypt_fn = decrypt_callback,
-	};
-	int ret;
-
-	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
-	ret = lrw_crypt(desc, dst, src, nbytes, &req);
-	serpent_fpu_end(crypt_ctx.fpu_enabled);
-
-	return ret;
-}
-
 static int xts_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 		       struct scatterlist *src, unsigned int nbytes)
 {
@@ -304,7 +188,7 @@ static int xts_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 				     &ctx->tweak_ctx, &ctx->crypt_ctx);
 }
 
-static struct crypto_alg srp_algs[10] = { {
+static struct crypto_alg srp_algs[] = { {
 	.cra_name		= "__ecb-serpent-avx2",
 	.cra_driver_name	= "__driver-ecb-serpent-avx2",
 	.cra_priority		= 0,
@@ -315,7 +199,6 @@ static struct crypto_alg srp_algs[10] = { {
 	.cra_alignmask		= 0,
 	.cra_type		= &crypto_blkcipher_type,
 	.cra_module		= THIS_MODULE,
-	.cra_list		= LIST_HEAD_INIT(srp_algs[0].cra_list),
 	.cra_u = {
 		.blkcipher = {
 			.min_keysize	= SERPENT_MIN_KEY_SIZE,
@@ -336,7 +219,6 @@ static struct crypto_alg srp_algs[10] = { {
 	.cra_alignmask		= 0,
 	.cra_type		= &crypto_blkcipher_type,
 	.cra_module		= THIS_MODULE,
-	.cra_list		= LIST_HEAD_INIT(srp_algs[1].cra_list),
 	.cra_u = {
 		.blkcipher = {
 			.min_keysize	= SERPENT_MIN_KEY_SIZE,
@@ -357,7 +239,6 @@ static struct crypto_alg srp_algs[10] = { {
 	.cra_alignmask		= 0,
 	.cra_type		= &crypto_blkcipher_type,
 	.cra_module		= THIS_MODULE,
-	.cra_list		= LIST_HEAD_INIT(srp_algs[2].cra_list),
 	.cra_u = {
 		.blkcipher = {
 			.min_keysize	= SERPENT_MIN_KEY_SIZE,
@@ -368,31 +249,6 @@ static struct crypto_alg srp_algs[10] = { {
 			.decrypt	= ctr_crypt,
 		},
 	},
-}, {
-	.cra_name		= "__lrw-serpent-avx2",
-	.cra_driver_name	= "__driver-lrw-serpent-avx2",
-	.cra_priority		= 0,
-	.cra_flags		= CRYPTO_ALG_TYPE_BLKCIPHER |
-				  CRYPTO_ALG_INTERNAL,
-	.cra_blocksize		= SERPENT_BLOCK_SIZE,
-	.cra_ctxsize		= sizeof(struct serpent_lrw_ctx),
-	.cra_alignmask		= 0,
-	.cra_type		= &crypto_blkcipher_type,
-	.cra_module		= THIS_MODULE,
-	.cra_list		= LIST_HEAD_INIT(srp_algs[3].cra_list),
-	.cra_exit		= lrw_serpent_exit_tfm,
-	.cra_u = {
-		.blkcipher = {
-			.min_keysize	= SERPENT_MIN_KEY_SIZE +
-					  SERPENT_BLOCK_SIZE,
-			.max_keysize	= SERPENT_MAX_KEY_SIZE +
-					  SERPENT_BLOCK_SIZE,
-			.ivsize		= SERPENT_BLOCK_SIZE,
-			.setkey		= lrw_serpent_setkey,
-			.encrypt	= lrw_encrypt,
-			.decrypt	= lrw_decrypt,
-		},
-	},
 }, {
"__xts-serpent-avx2", .cra_driver_name = "__driver-xts-serpent-avx2", @@ -404,7 +260,6 @@ static struct crypto_alg srp_algs[10] = { { .cra_alignmask = 0, .cra_type = &crypto_blkcipher_type, .cra_module = THIS_MODULE, - .cra_list = LIST_HEAD_INIT(srp_algs[4].cra_list), .cra_u = { .blkcipher = { .min_keysize = SERPENT_MIN_KEY_SIZE * 2, @@ -425,7 +280,6 @@ static struct crypto_alg srp_algs[10] = { { .cra_alignmask = 0, .cra_type = &crypto_ablkcipher_type, .cra_module = THIS_MODULE, - .cra_list = LIST_HEAD_INIT(srp_algs[5].cra_list), .cra_init = ablk_init, .cra_exit = ablk_exit, .cra_u = { @@ -447,7 +301,6 @@ static struct crypto_alg srp_algs[10] = { { .cra_alignmask = 0, .cra_type = &crypto_ablkcipher_type, .cra_module = THIS_MODULE, - .cra_list = LIST_HEAD_INIT(srp_algs[6].cra_list), .cra_init = ablk_init, .cra_exit = ablk_exit, .cra_u = { @@ -470,7 +323,6 @@ static struct crypto_alg srp_algs[10] = { { .cra_alignmask = 0, .cra_type = &crypto_ablkcipher_type, .cra_module = THIS_MODULE, - .cra_list = LIST_HEAD_INIT(srp_algs[7].cra_list), .cra_init = ablk_init, .cra_exit = ablk_exit, .cra_u = { @@ -484,31 +336,6 @@ static struct crypto_alg srp_algs[10] = { { .geniv = "chainiv", }, }, -}, { - .cra_name = "lrw(serpent)", - .cra_driver_name = "lrw-serpent-avx2", - .cra_priority = 600, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = SERPENT_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct async_helper_ctx), - .cra_alignmask = 0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_list = LIST_HEAD_INIT(srp_algs[8].cra_list), - .cra_init = ablk_init, - .cra_exit = ablk_exit, - .cra_u = { - .ablkcipher = { - .min_keysize = SERPENT_MIN_KEY_SIZE + - SERPENT_BLOCK_SIZE, - .max_keysize = SERPENT_MAX_KEY_SIZE + - SERPENT_BLOCK_SIZE, - .ivsize = SERPENT_BLOCK_SIZE, - .setkey = ablk_set_key, - .encrypt = ablk_encrypt, - .decrypt = ablk_decrypt, - }, - }, }, { .cra_name = "xts(serpent)", .cra_driver_name = "xts-serpent-avx2", @@ -519,7 +346,6 @@ static struct crypto_alg srp_algs[10] = { { .cra_alignmask = 0, .cra_type = &crypto_ablkcipher_type, .cra_module = THIS_MODULE, - .cra_list = LIST_HEAD_INIT(srp_algs[9].cra_list), .cra_init = ablk_init, .cra_exit = ablk_exit, .cra_u = { diff --git a/crypto/Kconfig b/crypto/Kconfig index fba43e4de705d..259629c5bac84 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1488,7 +1488,6 @@ config CRYPTO_SERPENT_AVX2_X86_64 select CRYPTO_GLUE_HELPER_X86 select CRYPTO_SERPENT select CRYPTO_SERPENT_AVX_X86_64 - select CRYPTO_LRW select CRYPTO_XTS help Serpent cipher algorithm, by Anderson, Biham & Knudsen.