From patchwork Mon Sep 10 11:31:33 2018
From: Ondrej Mosnacek <omosnace@redhat.com>
To: Herbert Xu
Cc: linux-crypto@vger.kernel.org, dm-devel@redhat.com, Mikulas Patocka,
    Eric Biggers, Ondrej Mosnacek
Subject: [PATCH v2 1/2] crypto: lrw - Optimize tweak computation
Date: Mon, 10 Sep 2018 13:31:33 +0200
Message-Id: <20180910113134.24103-2-omosnace@redhat.com>
In-Reply-To: <20180910113134.24103-1-omosnace@redhat.com>
References: <20180910113134.24103-1-omosnace@redhat.com>

This patch rewrites the tweak computation to a slightly simpler method
that performs fewer bswaps.
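For illustration, here is a minimal standalone userspace sketch of the new
counter-based computation (not part of the patch; __builtin_ctz(~x) stands
in for the kernel's ffz(x)). Each 16-byte block consumes one next_index()
call, which yields the ruler sequence of mulinc[] indices without any
per-block byte swapping:

#include <stdint.h>
#include <stdio.h>

/* ffz() equivalent: position of the lowest zero bit in x
 * (like the kernel's ffz(), undefined for x == 0xffffffff). */
static int ffz32(uint32_t x)
{
	return __builtin_ctz(~x);
}

/* Counter-based variant from the patch: the 128-bit block index is
 * kept as four host-endian words. Returns the number of trailing one
 * bits of the counter, then increments it. */
static int next_index(uint32_t *counter)
{
	int i, res = 0;

	for (i = 0; i < 4; i++) {
		if (counter[i] + 1 != 0) {
			res += ffz32(counter[i]++);
			break;
		}
		counter[i] = 0;
		res += 32;
	}
	return res;
}

int main(void)
{
	uint32_t counter[4] = { 0, 0, 0, 0 };
	int i;

	for (i = 0; i < 16; i++)
		printf("%d ", next_index(counter)); /* 0 1 0 2 0 1 0 3 ... */
	printf("\n");
	return 0;
}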
Based on performance measurements the new code seems to provide slightly
better performance than the old one.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

Before:
       ALGORITHM  KEY (b)  DATA (B)  TIME ENC (ns)  TIME DEC (ns)
        lrw(aes)      256        64            204            286
        lrw(aes)      320        64            227            203
        lrw(aes)      384        64            208            204
        lrw(aes)      256       512            441            439
        lrw(aes)      320       512            456            455
        lrw(aes)      384       512            469            483
        lrw(aes)      256      4096           2136           2190
        lrw(aes)      320      4096           2161           2213
        lrw(aes)      384      4096           2295           2369
        lrw(aes)      256     16384           7692           7868
        lrw(aes)      320     16384           8230           8691
        lrw(aes)      384     16384           8971           8813
        lrw(aes)      256     32768          15336          15560
        lrw(aes)      320     32768          16410          16346
        lrw(aes)      384     32768          18023          17465

After:
       ALGORITHM  KEY (b)  DATA (B)  TIME ENC (ns)  TIME DEC (ns)
        lrw(aes)      256        64            200            203
        lrw(aes)      320        64            202            204
        lrw(aes)      384        64            204            205
        lrw(aes)      256       512            415            415
        lrw(aes)      320       512            432            440
        lrw(aes)      384       512            449            451
        lrw(aes)      256      4096           1838           1995
        lrw(aes)      320      4096           2123           1980
        lrw(aes)      384      4096           2100           2119
        lrw(aes)      256     16384           7183           6954
        lrw(aes)      320     16384           7844           7631
        lrw(aes)      384     16384           8256           8126
        lrw(aes)      256     32768          14772          14484
        lrw(aes)      320     32768          15281          15431
        lrw(aes)      384     32768          16469          16293

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
---
 crypto/lrw.c | 49 +++++++++++++++++++++++++------------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/crypto/lrw.c b/crypto/lrw.c
index 393a782679c7..b4f30b6f16d6 100644
--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -120,30 +120,19 @@ static int setkey(struct crypto_skcipher *parent, const u8 *key,
 	return 0;
 }
 
-static inline void inc(be128 *iv)
+static int next_index(u32 *counter)
 {
-	be64_add_cpu(&iv->b, 1);
-	if (!iv->b)
-		be64_add_cpu(&iv->a, 1);
-}
-
-/* this returns the number of consequative 1 bits starting
- * from the right, get_index128(00 00 00 00 00 00 ... 00 00 10 FB) = 2 */
-static inline int get_index128(be128 *block)
-{
-	int x;
-	__be32 *p = (__be32 *) block;
-
-	for (p += 3, x = 0; x < 128; p--, x += 32) {
-		u32 val = be32_to_cpup(p);
-
-		if (!~val)
-			continue;
+	int i, res = 0;
 
-		return x + ffz(val);
+	for (i = 0; i < 4; i++) {
+		if (counter[i] + 1 != 0) {
+			res += ffz(counter[i]++);
+			break;
+		}
+		counter[i] = 0;
+		res += 32;
 	}
-
-	return x;
+	return res;
 }
 
 static int post_crypt(struct skcipher_request *req)
@@ -209,8 +198,9 @@ static int pre_crypt(struct skcipher_request *req)
 	struct scatterlist *sg;
 	unsigned cryptlen;
 	unsigned offset;
-	be128 *iv;
 	bool more;
+	__u32 *iv;
+	u32 counter[4];
 	int err;
 
 	subreq = &rctx->subreq;
@@ -227,6 +217,11 @@ static int pre_crypt(struct skcipher_request *req)
 	err = skcipher_walk_virt(&w, subreq, false);
 	iv = w.iv;
 
+	counter[0] = be32_to_cpu(iv[3]);
+	counter[1] = be32_to_cpu(iv[2]);
+	counter[2] = be32_to_cpu(iv[1]);
+	counter[3] = be32_to_cpu(iv[0]);
+
 	while (w.nbytes) {
 		unsigned int avail = w.nbytes;
 		be128 *wsrc;
@@ -242,10 +237,16 @@ static int pre_crypt(struct skcipher_request *req)
 			/* T <- I*Key2, using the optimization
 			 * discussed in the specification */
 			be128_xor(&rctx->t, &rctx->t,
-				  &ctx->mulinc[get_index128(iv)]);
-			inc(iv);
+				  &ctx->mulinc[next_index(counter)]);
 		} while ((avail -= bs) >= bs);
 
+		if (w.nbytes == w.total) {
+			iv[0] = cpu_to_be32(counter[3]);
+			iv[1] = cpu_to_be32(counter[2]);
+			iv[2] = cpu_to_be32(counter[1]);
+			iv[3] = cpu_to_be32(counter[0]);
+		}
+
 		err = skcipher_walk_done(&w, avail);
 	}

From patchwork Mon Sep 10 11:31:34 2018
From: Ondrej Mosnacek <omosnace@redhat.com>
To: Herbert Xu
Cc: linux-crypto@vger.kernel.org, dm-devel@redhat.com, Mikulas Patocka,
    Eric Biggers, Ondrej Mosnacek
Subject: [PATCH v2 2/2] crypto: lrw - Do not use auxiliary buffer
Date: Mon, 10 Sep 2018 13:31:34 +0200
Message-Id: <20180910113134.24103-3-omosnace@redhat.com>
In-Reply-To: <20180910113134.24103-1-omosnace@redhat.com>
References: <20180910113134.24103-1-omosnace@redhat.com>

This patch simplifies the LRW template to recompute the LRW tweaks from
scratch in the second pass and thus also removes the need to allocate a
dynamic buffer using kmalloc(). As discussed at [1], the use of kmalloc()
causes deadlocks with dm-crypt.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

The results show that the new code has about the same performance as the
old code. For the 512-byte message size it seems to be even slightly
faster, but that might be just noise.
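Before the numbers, the recompute-in-second-pass idea can be illustrated
with a minimal standalone C sketch (toy stand-ins only: toy_ecb() replaces
the real ecb(aes) child transform, and the tweak schedule is simplified to
a block counter instead of the Key2 multiplication):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BS 16 /* LRW_BLOCK_SIZE */

/* Simplified tweak schedule: the real code derives T from the IV and
 * Key2; a counter is enough to show the two-pass control flow. */
static void tweak(uint8_t t[BS], uint64_t i)
{
	memset(t, 0, BS);
	memcpy(t, &i, sizeof(i));
}

/* Stand-in for the ecb(aes) child transform (in place). */
static void toy_ecb(uint8_t *buf, size_t len)
{
	size_t j;

	for (j = 0; j < len; j++)
		buf[j] = (uint8_t)(buf[j] + 42);
}

/* One pass of buf <- buf ^ T over all blocks; called twice, so the
 * tweaks are recomputed instead of being stored in a buffer between
 * the passes. */
static void xor_tweak(uint8_t *buf, size_t len)
{
	uint8_t t[BS];
	uint64_t i;
	int j;

	for (i = 0; i * BS < len; i++) {
		tweak(t, i);
		for (j = 0; j < BS; j++)
			buf[i * BS + j] ^= t[j];
	}
}

int main(void)
{
	uint8_t buf[4 * BS] = "sixty-four bytes of plaintext here";

	xor_tweak(buf, sizeof(buf)); /* first pass:  PP = P ^ T  */
	toy_ecb(buf, sizeof(buf));   /* child "ECB" encryption   */
	xor_tweak(buf, sizeof(buf)); /* second pass: C  = CC ^ T */

	printf("first ciphertext bytes: %02x %02x\n", buf[0], buf[1]);
	return 0;
}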
Before:
       ALGORITHM  KEY (b)  DATA (B)  TIME ENC (ns)  TIME DEC (ns)
        lrw(aes)      256        64            200            203
        lrw(aes)      320        64            202            204
        lrw(aes)      384        64            204            205
        lrw(aes)      256       512            415            415
        lrw(aes)      320       512            432            440
        lrw(aes)      384       512            449            451
        lrw(aes)      256      4096           1838           1995
        lrw(aes)      320      4096           2123           1980
        lrw(aes)      384      4096           2100           2119
        lrw(aes)      256     16384           7183           6954
        lrw(aes)      320     16384           7844           7631
        lrw(aes)      384     16384           8256           8126
        lrw(aes)      256     32768          14772          14484
        lrw(aes)      320     32768          15281          15431
        lrw(aes)      384     32768          16469          16293

After:
       ALGORITHM  KEY (b)  DATA (B)  TIME ENC (ns)  TIME DEC (ns)
        lrw(aes)      256        64            197            196
        lrw(aes)      320        64            200            197
        lrw(aes)      384        64            203            199
        lrw(aes)      256       512            385            380
        lrw(aes)      320       512            401            395
        lrw(aes)      384       512            415            415
        lrw(aes)      256      4096           1869           1846
        lrw(aes)      320      4096           2080           1981
        lrw(aes)      384      4096           2160           2109
        lrw(aes)      256     16384           7077           7127
        lrw(aes)      320     16384           7807           7766
        lrw(aes)      384     16384           8108           8357
        lrw(aes)      256     32768          14111          14454
        lrw(aes)      320     32768          15268          15082
        lrw(aes)      384     32768          16581          16250

[1] https://lkml.org/lkml/2018/8/23/1315

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
---
 crypto/lrw.c | 280 ++++++++++-----------------------------------------
 1 file changed, 51 insertions(+), 229 deletions(-)

diff --git a/crypto/lrw.c b/crypto/lrw.c
index b4f30b6f16d6..6315a6a59589 100644
--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -29,8 +29,6 @@
 #include <crypto/b128ops.h>
 #include <crypto/gf128mul.h>
 
-#define LRW_BUFFER_SIZE 128u
-
 #define LRW_BLOCK_SIZE 16
 
 struct priv {
@@ -56,19 +54,7 @@ struct priv {
 };
 
 struct rctx {
-	be128 buf[LRW_BUFFER_SIZE / sizeof(be128)];
-
-	be128 t;
-
-	be128 *ext;
-
-	struct scatterlist srcbuf[2];
-	struct scatterlist dstbuf[2];
-	struct scatterlist *src;
-	struct scatterlist *dst;
-
-	unsigned int left;
-
+	be128 t, orig_iv;
 	struct skcipher_request subreq;
 };
 
@@ -135,86 +121,31 @@ static int next_index(u32 *counter)
 	return res;
 }
 
-static int post_crypt(struct skcipher_request *req)
+/*
+ * We compute the tweak masks twice (both before and after the ECB encryption or
+ * decryption) to avoid having to allocate a temporary buffer and/or make
+ * multiple calls to the 'ecb(..)' instance, which usually would be slower than
+ * just doing the gf128mul_x_ble() calls again.
+ */
+static int xor_tweak(struct skcipher_request *req, bool second_pass)
 {
-	struct rctx *rctx = skcipher_request_ctx(req);
-	be128 *buf = rctx->ext ?: rctx->buf;
-	struct skcipher_request *subreq;
 	const int bs = LRW_BLOCK_SIZE;
-	struct skcipher_walk w;
-	struct scatterlist *sg;
-	unsigned offset;
-	int err;
-
-	subreq = &rctx->subreq;
-	err = skcipher_walk_virt(&w, subreq, false);
-
-	while (w.nbytes) {
-		unsigned int avail = w.nbytes;
-		be128 *wdst;
-
-		wdst = w.dst.virt.addr;
-
-		do {
-			be128_xor(wdst, buf++, wdst);
-			wdst++;
-		} while ((avail -= bs) >= bs);
-
-		err = skcipher_walk_done(&w, avail);
-	}
-
-	rctx->left -= subreq->cryptlen;
-
-	if (err || !rctx->left)
-		goto out;
-
-	rctx->dst = rctx->dstbuf;
-
-	scatterwalk_done(&w.out, 0, 1);
-	sg = w.out.sg;
-	offset = w.out.offset;
-
-	if (rctx->dst != sg) {
-		rctx->dst[0] = *sg;
-		sg_unmark_end(rctx->dst);
-		scatterwalk_crypto_chain(rctx->dst, sg_next(sg), 2);
-	}
-	rctx->dst[0].length -= offset - sg->offset;
-	rctx->dst[0].offset = offset;
-
-out:
-	return err;
-}
-
-static int pre_crypt(struct skcipher_request *req)
-{
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct rctx *rctx = skcipher_request_ctx(req);
 	struct priv *ctx = crypto_skcipher_ctx(tfm);
-	be128 *buf = rctx->ext ?: rctx->buf;
-	struct skcipher_request *subreq;
-	const int bs = LRW_BLOCK_SIZE;
+	struct rctx *rctx = skcipher_request_ctx(req);
+	be128 t = rctx->t;
 	struct skcipher_walk w;
-	struct scatterlist *sg;
-	unsigned cryptlen;
-	unsigned offset;
-	bool more;
 	__u32 *iv;
 	u32 counter[4];
 	int err;
 
-	subreq = &rctx->subreq;
-	skcipher_request_set_tfm(subreq, tfm);
-
-	cryptlen = subreq->cryptlen;
-	more = rctx->left > cryptlen;
-	if (!more)
-		cryptlen = rctx->left;
-
-	skcipher_request_set_crypt(subreq, rctx->src, rctx->dst,
-				   cryptlen, req->iv);
+	if (second_pass) {
+		req = &rctx->subreq;
+		/* set to our TFM to enforce correct alignment: */
+		skcipher_request_set_tfm(req, tfm);
+	}
 
-	err = skcipher_walk_virt(&w, subreq, false);
+	err = skcipher_walk_virt(&w, req, false);
 	iv = w.iv;
 
 	counter[0] = be32_to_cpu(iv[3]);
@@ -231,13 +162,11 @@ static int pre_crypt(struct skcipher_request *req)
 		wdst = w.dst.virt.addr;
 
 		do {
-			*buf++ = rctx->t;
-			be128_xor(wdst++, &rctx->t, wsrc++);
+			be128_xor(wdst++, &t, wsrc++);
 
 			/* T <- I*Key2, using the optimization
 			 * discussed in the specification */
-			be128_xor(&rctx->t, &rctx->t,
-				  &ctx->mulinc[next_index(counter)]);
+			be128_xor(&t, &t, &ctx->mulinc[next_index(counter)]);
 		} while ((avail -= bs) >= bs);
 
 		if (w.nbytes == w.total) {
@@ -250,175 +179,68 @@ static int pre_crypt(struct skcipher_request *req)
 		err = skcipher_walk_done(&w, avail);
 	}
 
-	skcipher_request_set_tfm(subreq, ctx->child);
-	skcipher_request_set_crypt(subreq, rctx->dst, rctx->dst,
-				   cryptlen, NULL);
-
-	if (err || !more)
-		goto out;
-
-	rctx->src = rctx->srcbuf;
-
-	scatterwalk_done(&w.in, 0, 1);
-	sg = w.in.sg;
-	offset = w.in.offset;
-
-	if (rctx->src != sg) {
-		rctx->src[0] = *sg;
-		sg_unmark_end(rctx->src);
-		scatterwalk_crypto_chain(rctx->src, sg_next(sg), 2);
-	}
-	rctx->src[0].length -= offset - sg->offset;
-	rctx->src[0].offset = offset;
-
-out:
 	return err;
 }
 
-static int init_crypt(struct skcipher_request *req, crypto_completion_t done)
-{
-	struct priv *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
-	struct rctx *rctx = skcipher_request_ctx(req);
-	struct skcipher_request *subreq;
-	gfp_t gfp;
-
-	subreq = &rctx->subreq;
-	skcipher_request_set_callback(subreq, req->base.flags, done, req);
-
-	gfp = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
-							   GFP_ATOMIC;
-	rctx->ext = NULL;
-
-	subreq->cryptlen = LRW_BUFFER_SIZE;
-	if (req->cryptlen > LRW_BUFFER_SIZE) {
-		unsigned int n = min(req->cryptlen, (unsigned int)PAGE_SIZE);
-
-		rctx->ext = kmalloc(n, gfp);
-		if (rctx->ext)
-			subreq->cryptlen = n;
-	}
-
-	rctx->src = req->src;
-	rctx->dst = req->dst;
-	rctx->left = req->cryptlen;
-
-	/* calculate first value of T */
-	memcpy(&rctx->t, req->iv, sizeof(rctx->t));
-
-	/* T <- I*Key2 */
-	gf128mul_64k_bbe(&rctx->t, ctx->table);
-
-	return 0;
-}
-
-static void exit_crypt(struct skcipher_request *req)
+static int xor_tweak_pre(struct skcipher_request *req)
 {
-	struct rctx *rctx = skcipher_request_ctx(req);
-
-	rctx->left = 0;
-
-	if (rctx->ext)
-		kzfree(rctx->ext);
+	return xor_tweak(req, false);
 }
 
-static int do_encrypt(struct skcipher_request *req, int err)
+static int xor_tweak_post(struct skcipher_request *req)
 {
-	struct rctx *rctx = skcipher_request_ctx(req);
-	struct skcipher_request *subreq;
-
-	subreq = &rctx->subreq;
-
-	while (!err && rctx->left) {
-		err = pre_crypt(req) ?:
-		      crypto_skcipher_encrypt(subreq) ?:
-		      post_crypt(req);
-
-		if (err == -EINPROGRESS || err == -EBUSY)
-			return err;
-	}
-
-	exit_crypt(req);
-	return err;
+	return xor_tweak(req, true);
 }
 
-static void encrypt_done(struct crypto_async_request *areq, int err)
+static void crypt_done(struct crypto_async_request *areq, int err)
 {
 	struct skcipher_request *req = areq->data;
-	struct skcipher_request *subreq;
-	struct rctx *rctx;
-
-	rctx = skcipher_request_ctx(req);
-
-	if (err == -EINPROGRESS) {
-		if (rctx->left != req->cryptlen)
-			return;
-		goto out;
-	}
 
-	subreq = &rctx->subreq;
-	subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
+	if (!err)
+		err = xor_tweak_post(req);
 
-	err = do_encrypt(req, err ?: post_crypt(req));
-	if (rctx->left)
-		return;
-
-out:
 	skcipher_request_complete(req, err);
 }
 
-static int encrypt(struct skcipher_request *req)
-{
-	return do_encrypt(req, init_crypt(req, encrypt_done));
-}
-
-static int do_decrypt(struct skcipher_request *req, int err)
+static void init_crypt(struct skcipher_request *req)
 {
+	struct priv *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
 	struct rctx *rctx = skcipher_request_ctx(req);
-	struct skcipher_request *subreq;
+	struct skcipher_request *subreq = &rctx->subreq;
 
-	subreq = &rctx->subreq;
-
-	while (!err && rctx->left) {
-		err = pre_crypt(req) ?:
-		      crypto_skcipher_decrypt(subreq) ?:
-		      post_crypt(req);
+	skcipher_request_set_tfm(subreq, ctx->child);
+	skcipher_request_set_callback(subreq, req->base.flags, crypt_done, req);
+	skcipher_request_set_crypt(subreq, req->dst, req->dst,
+				   req->cryptlen, &rctx->orig_iv);
 
-		if (err == -EINPROGRESS || err == -EBUSY)
-			return err;
-	}
+	/* calculate first value of T */
+	memcpy(&rctx->orig_iv, req->iv, sizeof(rctx->t));
+	rctx->t = rctx->orig_iv;
 
-	exit_crypt(req);
-	return err;
+	/* T <- I*Key2 */
+	gf128mul_64k_bbe(&rctx->t, ctx->table);
 }
 
-static void decrypt_done(struct crypto_async_request *areq, int err)
+static int encrypt(struct skcipher_request *req)
 {
-	struct skcipher_request *req = areq->data;
-	struct skcipher_request *subreq;
-	struct rctx *rctx;
-
-	rctx = skcipher_request_ctx(req);
-
-	if (err == -EINPROGRESS) {
-		if (rctx->left != req->cryptlen)
-			return;
-		goto out;
-	}
-
-	subreq = &rctx->subreq;
-	subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
-
-	err = do_decrypt(req, err ?: post_crypt(req));
-	if (rctx->left)
-		return;
+	struct rctx *rctx = skcipher_request_ctx(req);
+	struct skcipher_request *subreq = &rctx->subreq;
 
-out:
-	skcipher_request_complete(req, err);
+	init_crypt(req);
+	return xor_tweak_pre(req) ?:
+		crypto_skcipher_encrypt(subreq) ?:
+		xor_tweak_post(req);
 }
 
 static int decrypt(struct skcipher_request *req)
 {
-	return do_decrypt(req, init_crypt(req, decrypt_done));
+	struct rctx *rctx = skcipher_request_ctx(req);
+	struct skcipher_request *subreq = &rctx->subreq;
+
+	init_crypt(req);
+	return xor_tweak_pre(req) ?:
+		crypto_skcipher_decrypt(subreq) ?:
+		xor_tweak_post(req);
 }
 
 static int init_tfm(struct crypto_skcipher *tfm)
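As a usage note: once the template is registered, the resulting lrw(aes)
instance can be exercised from userspace through the kernel's AF_ALG
interface. A minimal sketch (assumes CONFIG_CRYPTO_USER_API_SKCIPHER is
enabled; error handling omitted; the all-zero key and IV are placeholders):

#include <linux/if_alg.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "skcipher",
		.salg_name   = "lrw(aes)",
	};
	/* lrw(aes) key = AES key || 16-byte tweak key; 32 bytes total
	 * here, i.e. the "256" key-size rows in the tables above. */
	unsigned char key[32] = { 0 };
	unsigned char iv[16] = { 0 }, buf[64] = "some plaintext";
	char cbuf[CMSG_SPACE(sizeof(__u32)) +
		  CMSG_SPACE(sizeof(struct af_alg_iv) + 16)] = { 0 };
	struct iovec io = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct msghdr msg = { 0 };
	struct cmsghdr *cmsg;
	struct af_alg_iv *aiv;
	int tfmfd, opfd;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
	setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
	opfd = accept(tfmfd, NULL, 0);

	msg.msg_control = cbuf;
	msg.msg_controllen = sizeof(cbuf);
	msg.msg_iov = &io;
	msg.msg_iovlen = 1;

	/* first control message: operation (encrypt) */
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_OP;
	cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
	*(__u32 *)CMSG_DATA(cmsg) = ALG_OP_ENCRYPT;

	/* second control message: the 16-byte LRW IV (block index) */
	cmsg = CMSG_NXTHDR(&msg, cmsg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_IV;
	cmsg->cmsg_len = CMSG_LEN(sizeof(struct af_alg_iv) + 16);
	aiv = (struct af_alg_iv *)CMSG_DATA(cmsg);
	aiv->ivlen = 16;
	memcpy(aiv->iv, iv, 16);

	sendmsg(opfd, &msg, 0);
	read(opfd, buf, sizeof(buf));
	printf("first ciphertext byte: %02x\n", buf[0]);

	close(opfd);
	close(tfmfd);
	return 0;
}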