From patchwork Tue Feb 20 07:48:14 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 10229515
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: Herbert Xu, Jussi Kivilinna, Eric Biggers
Subject: [PATCH 16/30] crypto: x86/cast6-avx - remove LRW algorithm
Date: Mon, 19 Feb 2018 23:48:14 -0800
Message-Id: <20180220074828.2050-17-ebiggers3@gmail.com>
X-Mailer: git-send-email 2.16.2
In-Reply-To: <20180220074828.2050-1-ebiggers3@gmail.com>
References: <20180220074828.2050-1-ebiggers3@gmail.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Eric Biggers

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-cast6-avx algorithm which did this.  Users who request
lrw(cast6) and previously would have gotten lrw-cast6-avx will now get
lrw(ecb-cast6-avx) instead, which is just as fast.

Signed-off-by: Eric Biggers
---
 arch/x86/crypto/cast6_avx_glue.c | 180 +--------------------------------------
 crypto/Kconfig                   |   1 -
 2 files changed, 1 insertion(+), 180 deletions(-)

diff --git a/arch/x86/crypto/cast6_avx_glue.c b/arch/x86/crypto/cast6_avx_glue.c
index 50e684768c555..d2fbf2be771e4 100644
--- a/arch/x86/crypto/cast6_avx_glue.c
+++ b/arch/x86/crypto/cast6_avx_glue.c
@@ -34,9 +34,7 @@
 #include
 #include
 #include
-#include
 #include
-#include
 #include
 
 #define CAST6_PARALLEL_BLOCKS 8
@@ -189,134 +187,6 @@ static int ctr_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
        return glue_ctr_crypt_128bit(&cast6_ctr, desc, dst, src, nbytes);
 }
 
-static inline bool cast6_fpu_begin(bool fpu_enabled, unsigned int nbytes)
-{
-       return glue_fpu_begin(CAST6_BLOCK_SIZE, CAST6_PARALLEL_BLOCKS,
-                             NULL, fpu_enabled, nbytes);
-}
-
-static inline void cast6_fpu_end(bool fpu_enabled)
-{
-       glue_fpu_end(fpu_enabled);
-}
-
-struct crypt_priv {
-       struct cast6_ctx *ctx;
-       bool fpu_enabled;
-};
-
-static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
-{
-       const unsigned int bsize = CAST6_BLOCK_SIZE;
-       struct crypt_priv *ctx = priv;
-       int i;
-
-       ctx->fpu_enabled = cast6_fpu_begin(ctx->fpu_enabled, nbytes);
-
-       if (nbytes == bsize * CAST6_PARALLEL_BLOCKS) {
-               cast6_ecb_enc_8way(ctx->ctx, srcdst, srcdst);
-               return;
-       }
-
-       for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
-               __cast6_encrypt(ctx->ctx, srcdst, srcdst);
-}
-
-static void decrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
-{
-       const unsigned int bsize = CAST6_BLOCK_SIZE;
-       struct crypt_priv *ctx = priv;
-       int i;
-
-       ctx->fpu_enabled = cast6_fpu_begin(ctx->fpu_enabled, nbytes);
-
-       if (nbytes == bsize * CAST6_PARALLEL_BLOCKS) {
-               cast6_ecb_dec_8way(ctx->ctx, srcdst, srcdst);
-               return;
-       }
-
-       for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
-               __cast6_decrypt(ctx->ctx, srcdst, srcdst);
-}
-
-struct cast6_lrw_ctx {
-       struct lrw_table_ctx lrw_table;
-       struct cast6_ctx cast6_ctx;
-};
-
-static int lrw_cast6_setkey(struct crypto_tfm *tfm, const u8 *key,
-                           unsigned int keylen)
-{
-       struct cast6_lrw_ctx *ctx = crypto_tfm_ctx(tfm);
-       int err;
-
-       err = __cast6_setkey(&ctx->cast6_ctx, key, keylen - CAST6_BLOCK_SIZE,
-                            &tfm->crt_flags);
-       if (err)
-               return err;
-
-       return lrw_init_table(&ctx->lrw_table, key + keylen - CAST6_BLOCK_SIZE);
-}
-
-static int lrw_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
-                      struct scatterlist *src, unsigned int nbytes)
-{
-       struct cast6_lrw_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
-       be128 buf[CAST6_PARALLEL_BLOCKS];
-       struct crypt_priv crypt_ctx = {
-               .ctx = &ctx->cast6_ctx,
-               .fpu_enabled = false,
-       };
-       struct lrw_crypt_req req = {
-               .tbuf = buf,
-               .tbuflen = sizeof(buf),
-
-               .table_ctx = &ctx->lrw_table,
-               .crypt_ctx = &crypt_ctx,
-               .crypt_fn = encrypt_callback,
-       };
-       int ret;
-
-       desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
-       ret = lrw_crypt(desc, dst, src, nbytes, &req);
-       cast6_fpu_end(crypt_ctx.fpu_enabled);
-
-       return ret;
-}
-
-static int lrw_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
-                      struct scatterlist *src, unsigned int nbytes)
-{
-       struct cast6_lrw_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
-       be128 buf[CAST6_PARALLEL_BLOCKS];
-       struct crypt_priv crypt_ctx = {
-               .ctx = &ctx->cast6_ctx,
-               .fpu_enabled = false,
-       };
-       struct lrw_crypt_req req = {
-               .tbuf = buf,
-               .tbuflen = sizeof(buf),
-
-               .table_ctx = &ctx->lrw_table,
-               .crypt_ctx = &crypt_ctx,
-               .crypt_fn = decrypt_callback,
-       };
-       int ret;
-
-       desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
-       ret = lrw_crypt(desc, dst, src, nbytes, &req);
-       cast6_fpu_end(crypt_ctx.fpu_enabled);
-
-       return ret;
-}
-
-static void lrw_exit_tfm(struct crypto_tfm *tfm)
-{
-       struct cast6_lrw_ctx *ctx = crypto_tfm_ctx(tfm);
-
-       lrw_free_table(&ctx->lrw_table);
-}
-
 struct cast6_xts_ctx {
        struct cast6_ctx tweak_ctx;
        struct cast6_ctx crypt_ctx;
@@ -363,7 +233,7 @@ static int xts_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                                     &ctx->tweak_ctx, &ctx->crypt_ctx);
 }
 
-static struct crypto_alg cast6_algs[10] = { {
+static struct crypto_alg cast6_algs[] = { {
        .cra_name = "__ecb-cast6-avx",
        .cra_driver_name = "__driver-ecb-cast6-avx",
        .cra_priority = 0,
@@ -424,30 +294,6 @@ static struct crypto_alg cast6_algs[10] = { {
                        .decrypt = ctr_crypt,
                },
        },
-}, {
-       .cra_name = "__lrw-cast6-avx",
-       .cra_driver_name = "__driver-lrw-cast6-avx",
-       .cra_priority = 0,
-       .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
-                    CRYPTO_ALG_INTERNAL,
-       .cra_blocksize = CAST6_BLOCK_SIZE,
-       .cra_ctxsize = sizeof(struct cast6_lrw_ctx),
-       .cra_alignmask = 0,
-       .cra_type = &crypto_blkcipher_type,
-       .cra_module = THIS_MODULE,
-       .cra_exit = lrw_exit_tfm,
-       .cra_u = {
-               .blkcipher = {
-                       .min_keysize = CAST6_MIN_KEY_SIZE +
-                                      CAST6_BLOCK_SIZE,
-                       .max_keysize = CAST6_MAX_KEY_SIZE +
-                                      CAST6_BLOCK_SIZE,
-                       .ivsize = CAST6_BLOCK_SIZE,
-                       .setkey = lrw_cast6_setkey,
-                       .encrypt = lrw_encrypt,
-                       .decrypt = lrw_decrypt,
-               },
-       },
 }, {
        .cra_name = "__xts-cast6-avx",
        .cra_driver_name = "__driver-xts-cast6-avx",
@@ -535,30 +381,6 @@ static struct crypto_alg cast6_algs[10] = { {
                        .geniv = "chainiv",
                },
        },
-}, {
-       .cra_name = "lrw(cast6)",
-       .cra_driver_name = "lrw-cast6-avx",
-       .cra_priority = 200,
-       .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC,
-       .cra_blocksize = CAST6_BLOCK_SIZE,
-       .cra_ctxsize = sizeof(struct async_helper_ctx),
-       .cra_alignmask = 0,
-       .cra_type = &crypto_ablkcipher_type,
-       .cra_module = THIS_MODULE,
-       .cra_init = ablk_init,
-       .cra_exit = ablk_exit,
-       .cra_u = {
-               .ablkcipher = {
-                       .min_keysize = CAST6_MIN_KEY_SIZE +
-                                      CAST6_BLOCK_SIZE,
-                       .max_keysize = CAST6_MAX_KEY_SIZE +
-                                      CAST6_BLOCK_SIZE,
-                       .ivsize = CAST6_BLOCK_SIZE,
-                       .setkey = ablk_set_key,
-                       .encrypt = ablk_encrypt,
-                       .decrypt = ablk_decrypt,
-               },
-       },
 }, {
        .cra_name = "xts(cast6)",
        .cra_driver_name = "xts-cast6-avx",
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 20ac64afa02d3..4b4fbac8e1d50 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1266,7 +1266,6 @@ config CRYPTO_CAST6_AVX_X86_64
        select CRYPTO_GLUE_HELPER_X86
        select CRYPTO_CAST_COMMON
        select CRYPTO_CAST6
-       select CRYPTO_LRW
        select CRYPTO_XTS
        help
          The CAST6 encryption algorithm (synonymous with CAST-256) is
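
Not part of the patch, but for anyone who wants to double-check the fallback
described in the commit message: the sketch below is a throwaway, hypothetical
test module (the file and function names are made up for illustration) that
allocates "lrw(cast6)" through the skcipher API and prints which implementation
the crypto API actually instantiated.  After this change the driver name is
expected to be lrw(ecb-cast6-avx), i.e. the generic LRW template wrapped around
the AVX ECB implementation, instead of the removed lrw-cast6-avx.

/*
 * Hypothetical out-of-tree sanity check, not part of this patch:
 * allocate lrw(cast6) and report the driver backing it.
 */
#include <linux/module.h>
#include <linux/err.h>
#include <crypto/skcipher.h>

static int __init lrw_cast6_check_init(void)
{
        struct crypto_skcipher *tfm;

        tfm = crypto_alloc_skcipher("lrw(cast6)", 0, 0);
        if (IS_ERR(tfm)) {
                pr_err("lrw(cast6) unavailable: %ld\n", PTR_ERR(tfm));
                return PTR_ERR(tfm);
        }

        /* Expected to print lrw(ecb-cast6-avx) once lrw-cast6-avx is gone. */
        pr_info("lrw(cast6) resolved to %s\n",
                crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm)));

        crypto_free_skcipher(tfm);
        return 0;
}

static void __exit lrw_cast6_check_exit(void)
{
}

module_init(lrw_cast6_check_init);
module_exit(lrw_cast6_check_exit);
MODULE_LICENSE("GPL");

Loading it on a kernel with CONFIG_CRYPTO_CAST6_AVX_X86_64 and CONFIG_CRYPTO_LRW
enabled exercises the template instantiation path this patch relies on; the
existing crypto self-tests and tcrypt cover the actual correctness of
lrw(cast6).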