From patchwork Tue Feb 20 07:48:10 2018
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 10229505
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: Herbert Xu, Jussi Kivilinna, Eric Biggers
Subject: [PATCH 12/30] crypto: x86/twofish-avx - remove LRW algorithm
Date: Mon, 19 Feb 2018 23:48:10 -0800
Message-Id: <20180220074828.2050-13-ebiggers3@gmail.com>
X-Mailer: git-send-email 2.16.2
In-Reply-To: <20180220074828.2050-1-ebiggers3@gmail.com>
References: <20180220074828.2050-1-ebiggers3@gmail.com>

From: Eric Biggers

The LRW template now wraps an ECB mode algorithm rather than the block
cipher directly.  Therefore it is now redundant for crypto modules to
wrap their ECB code with generic LRW code themselves via lrw_crypt().

Remove the lrw-twofish-avx algorithm which did this.  Users who request
lrw(twofish) and previously would have gotten lrw-twofish-avx will now
get lrw(ecb-twofish-avx) instead, which is just as fast.

Signed-off-by: Eric Biggers
---
 arch/x86/crypto/twofish_avx_glue.c | 189 +------------------------------------
 crypto/Kconfig                     |   1 -
 2 files changed, 1 insertion(+), 189 deletions(-)

diff --git a/arch/x86/crypto/twofish_avx_glue.c b/arch/x86/crypto/twofish_avx_glue.c
index 3750b2cd30197..35de5e5a87b38 100644
--- a/arch/x86/crypto/twofish_avx_glue.c
+++ b/arch/x86/crypto/twofish_avx_glue.c
@@ -34,7 +34,6 @@
 #include
 #include
 #include
-#include <crypto/lrw.h>
 #include
 #include
 #include
@@ -227,144 +226,6 @@ static int ctr_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 	return glue_ctr_crypt_128bit(&twofish_ctr, desc, dst, src, nbytes);
 }
 
-static inline bool twofish_fpu_begin(bool fpu_enabled, unsigned int nbytes)
-{
-	return glue_fpu_begin(TF_BLOCK_SIZE, TWOFISH_PARALLEL_BLOCKS, NULL,
-			      fpu_enabled, nbytes);
-}
-
-static inline void twofish_fpu_end(bool fpu_enabled)
-{
-	glue_fpu_end(fpu_enabled);
-}
-
-struct crypt_priv {
-	struct twofish_ctx *ctx;
-	bool fpu_enabled;
-};
-
-static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
-{
-	const unsigned int bsize = TF_BLOCK_SIZE;
-	struct crypt_priv *ctx = priv;
-	int i;
-
-	ctx->fpu_enabled = twofish_fpu_begin(ctx->fpu_enabled, nbytes);
-
-	if (nbytes == bsize * TWOFISH_PARALLEL_BLOCKS) {
-		twofish_ecb_enc_8way(ctx->ctx, srcdst, srcdst);
-		return;
-	}
-
-	for (i = 0; i < nbytes / (bsize * 3); i++, srcdst += bsize * 3)
-		twofish_enc_blk_3way(ctx->ctx, srcdst, srcdst);
-
-	nbytes %= bsize * 3;
-
-	for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
-		twofish_enc_blk(ctx->ctx, srcdst, srcdst);
-}
-
-static void decrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
-{
-	const unsigned int bsize = TF_BLOCK_SIZE;
-	struct crypt_priv *ctx = priv;
-	int i;
-
-	ctx->fpu_enabled = twofish_fpu_begin(ctx->fpu_enabled, nbytes);
-
-	if (nbytes == bsize * TWOFISH_PARALLEL_BLOCKS) {
-		twofish_ecb_dec_8way(ctx->ctx, srcdst, srcdst);
-		return;
-	}
-
-	for (i = 0; i < nbytes / (bsize * 3); i++, srcdst += bsize * 3)
-		twofish_dec_blk_3way(ctx->ctx, srcdst, srcdst);
-
-	nbytes %= bsize * 3;
-
-	for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
-		twofish_dec_blk(ctx->ctx, srcdst, srcdst);
-}
-
-struct twofish_lrw_ctx {
-	struct lrw_table_ctx lrw_table;
-	struct twofish_ctx twofish_ctx;
-};
-
-static int lrw_twofish_setkey(struct crypto_tfm *tfm, const u8 *key,
-			      unsigned int keylen)
-{
-	struct twofish_lrw_ctx *ctx = crypto_tfm_ctx(tfm);
-	int err;
-
-	err = __twofish_setkey(&ctx->twofish_ctx, key, keylen - TF_BLOCK_SIZE,
-			       &tfm->crt_flags);
-	if (err)
-		return err;
-
-	return lrw_init_table(&ctx->lrw_table, key + keylen - TF_BLOCK_SIZE);
-}
-
-static void lrw_twofish_exit_tfm(struct crypto_tfm *tfm)
-{
-	struct twofish_lrw_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	lrw_free_table(&ctx->lrw_table);
-}
-
-static int lrw_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
-		       struct scatterlist *src, unsigned int nbytes)
-{
-	struct twofish_lrw_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
-	be128 buf[TWOFISH_PARALLEL_BLOCKS];
-	struct crypt_priv crypt_ctx = {
-		.ctx = &ctx->twofish_ctx,
-		.fpu_enabled = false,
-	};
-	struct lrw_crypt_req req = {
-		.tbuf = buf,
-		.tbuflen = sizeof(buf),
-
-		.table_ctx = &ctx->lrw_table,
-		.crypt_ctx = &crypt_ctx,
-		.crypt_fn = encrypt_callback,
-	};
-	int ret;
-
-	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
-	ret = lrw_crypt(desc, dst, src, nbytes, &req);
-	twofish_fpu_end(crypt_ctx.fpu_enabled);
-
-	return ret;
-}
-
-static int lrw_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
-		       struct scatterlist *src, unsigned int nbytes)
-{
-	struct twofish_lrw_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
-	be128 buf[TWOFISH_PARALLEL_BLOCKS];
-	struct crypt_priv crypt_ctx = {
-		.ctx = &ctx->twofish_ctx,
-		.fpu_enabled = false,
-	};
-	struct lrw_crypt_req req = {
-		.tbuf = buf,
-		.tbuflen = sizeof(buf),
-
-		.table_ctx = &ctx->lrw_table,
-		.crypt_ctx = &crypt_ctx,
-		.crypt_fn = decrypt_callback,
-	};
-	int ret;
-
-	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
-	ret = lrw_crypt(desc, dst, src, nbytes, &req);
-	twofish_fpu_end(crypt_ctx.fpu_enabled);
-
-	return ret;
-}
-
 static int xts_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 		       struct scatterlist *src, unsigned int nbytes)
 {
@@ -385,7 +246,7 @@ static int xts_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 				     &ctx->tweak_ctx, &ctx->crypt_ctx);
 }
 
-static struct crypto_alg twofish_algs[10] = { {
+static struct crypto_alg twofish_algs[] = { {
 	.cra_name		= "__ecb-twofish-avx",
 	.cra_driver_name	= "__driver-ecb-twofish-avx",
 	.cra_priority		= 0,
@@ -446,30 +307,6 @@ static struct crypto_alg twofish_algs[10] = { {
 			.decrypt	= ctr_crypt,
 		},
 	},
-}, {
-	.cra_name		= "__lrw-twofish-avx",
-	.cra_driver_name	= "__driver-lrw-twofish-avx",
-	.cra_priority		= 0,
-	.cra_flags		= CRYPTO_ALG_TYPE_BLKCIPHER |
-				  CRYPTO_ALG_INTERNAL,
-	.cra_blocksize		= TF_BLOCK_SIZE,
-	.cra_ctxsize		= sizeof(struct twofish_lrw_ctx),
-	.cra_alignmask		= 0,
-	.cra_type		= &crypto_blkcipher_type,
-	.cra_module		= THIS_MODULE,
-	.cra_exit		= lrw_twofish_exit_tfm,
-	.cra_u = {
-		.blkcipher = {
-			.min_keysize	= TF_MIN_KEY_SIZE +
-					  TF_BLOCK_SIZE,
-			.max_keysize	= TF_MAX_KEY_SIZE +
-					  TF_BLOCK_SIZE,
-			.ivsize		= TF_BLOCK_SIZE,
-			.setkey		= lrw_twofish_setkey,
-			.encrypt	= lrw_encrypt,
-			.decrypt	= lrw_decrypt,
-		},
-	},
 }, {
 	.cra_name		= "__xts-twofish-avx",
 	.cra_driver_name	= "__driver-xts-twofish-avx",
@@ -557,30 +394,6 @@ static struct crypto_alg twofish_algs[10] = { {
 			.geniv		= "chainiv",
 		},
 	},
-}, {
-	.cra_name		= "lrw(twofish)",
-	.cra_driver_name	= "lrw-twofish-avx",
-	.cra_priority		= 400,
-	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC,
-	.cra_blocksize		= TF_BLOCK_SIZE,
-	.cra_ctxsize		= sizeof(struct async_helper_ctx),
-	.cra_alignmask		= 0,
-	.cra_type		= &crypto_ablkcipher_type,
-	.cra_module		= THIS_MODULE,
-	.cra_init		= ablk_init,
-	.cra_exit		= ablk_exit,
-	.cra_u = {
-		.ablkcipher = {
-			.min_keysize	= TF_MIN_KEY_SIZE +
-					  TF_BLOCK_SIZE,
-			.max_keysize	= TF_MAX_KEY_SIZE +
-					  TF_BLOCK_SIZE,
-			.ivsize		= TF_BLOCK_SIZE,
-			.setkey		= ablk_set_key,
-			.encrypt	= ablk_encrypt,
-			.decrypt	= ablk_decrypt,
-		},
-	},
 }, {
 	.cra_name		= "xts(twofish)",
 	.cra_driver_name	= "xts-twofish-avx",
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 5bb476dd98e60..5533cd1810e3a 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1594,7 +1594,6 @@ config CRYPTO_TWOFISH_AVX_X86_64
 	select CRYPTO_TWOFISH_COMMON
 	select CRYPTO_TWOFISH_X86_64
 	select CRYPTO_TWOFISH_X86_64_3WAY
-	select CRYPTO_LRW
 	select CRYPTO_XTS
 	help
 	  Twofish cipher algorithm (x86_64/AVX).
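
For illustration only (not part of the patch): a minimal sketch of the user-visible effect for an in-kernel caller using the skcipher API, assuming CRYPTO_LRW is enabled so the generic template can be instantiated. The function name try_lrw_twofish and the zero test key are hypothetical; the point is that "lrw(twofish)" is still resolvable after lrw-twofish-avx is removed, now via the lrw template wrapped around the AVX ECB implementation (lrw(ecb-twofish-avx)).

#include <linux/err.h>
#include <linux/types.h>
#include <crypto/skcipher.h>
#include <crypto/twofish.h>

/*
 * Illustrative sketch: request lrw(twofish) exactly as before this patch.
 * The crypto API now satisfies the request with the generic lrw template
 * on top of ecb-twofish-avx instead of the removed lrw-twofish-avx driver.
 */
static int try_lrw_twofish(void)
{
	/* 32-byte Twofish key followed by the 16-byte LRW tweak key */
	static const u8 key[TF_MAX_KEY_SIZE + TF_BLOCK_SIZE];
	struct crypto_skcipher *tfm;
	int err;

	tfm = crypto_alloc_skcipher("lrw(twofish)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, sizeof(key));
	crypto_free_skcipher(tfm);
	return err;
}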