From patchwork Fri May 22 08:31:04 2015
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 6462011
Subject: [v2 PATCH 13/13] crypto: algif_aead - Switch to new AEAD interface
References: <20150522082708.GA3507@gondor.apana.org.au>
To: Linux Crypto Mailing List, netdev@vger.kernel.org, "David S. Miller",
    Johannes Berg, Marcel Holtmann, Steffen Klassert, Stephan Mueller
From: Herbert Xu
Date: Fri, 22 May 2015 16:31:04 +0800
X-Mailing-List: linux-crypto@vger.kernel.org

This patch makes use of the new AEAD interface, which takes a single
scatterlist covering both the associated data (AD) and the plain text,
instead of separate lists for the two.

Signed-off-by: Herbert Xu
---
 crypto/algif_aead.c | 61 ++++++++++++++++++++++++++++++----------------------
 1 file changed, 36 insertions(+), 25 deletions(-)

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 53702e9..5674a33 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -26,7 +26,7 @@
 
 struct aead_sg_list {
 	unsigned int cur;
-	struct scatterlist sg[ALG_MAX_PAGES];
+	struct scatterlist sg[ALG_MAX_PAGES + 1];
 };
 
 struct aead_ctx {
@@ -357,7 +357,8 @@ static int aead_recvmsg(struct socket *sock, struct msghdr *msg, size_t ignored,
 	unsigned as = crypto_aead_authsize(crypto_aead_reqtfm(&ctx->aead_req));
 	struct aead_sg_list *sgl = &ctx->tsgl;
 	struct scatterlist *sg = NULL;
-	struct scatterlist assoc[ALG_MAX_PAGES];
+	struct scatterlist dstbuf[ALG_MAX_PAGES + 1];
+	struct scatterlist *dst = dstbuf;
 	size_t assoclen = 0;
 	unsigned int i = 0;
 	int err = -EINVAL;
@@ -453,7 +454,7 @@ static int aead_recvmsg(struct socket *sock, struct msghdr *msg, size_t ignored,
 	if (usedpages < outlen)
 		goto unlock;
 
-	sg_init_table(assoc, ALG_MAX_PAGES);
+	sg_mark_end(sgl->sg + sgl->cur);
 	assoclen = ctx->aead_assoclen;
 	/*
 	 * Split scatterlist into two: first part becomes AD, second part
@@ -465,35 +466,45 @@ static int aead_recvmsg(struct socket *sock, struct msghdr *msg, size_t ignored,
 		sg = sgl->sg + i;
 		if (sg->length <= assoclen) {
 			/* AD is larger than one page */
-			sg_set_page(assoc + i, sg_page(sg),
+			sg_set_page(dst + i, sg_page(sg),
 				    sg->length, sg->offset);
 			assoclen -= sg->length;
-			if (i >= ctx->tsgl.cur)
-				goto unlock;
-		} else if (!assoclen) {
-			/* current page is to start of plaintext / ciphertext */
-			if (i)
-				/* AD terminates at page boundary */
-				sg_mark_end(assoc + i - 1);
-			else
-				/* AD size is zero */
-				sg_mark_end(assoc);
-			break;
-		} else {
+			continue;
+		}
+
+		if (assoclen) {
 			/* AD does not terminate at page boundary */
-			sg_set_page(assoc + i, sg_page(sg),
+			sg_set_page(dst + i, sg_page(sg),
 				    assoclen, sg->offset);
-			sg_mark_end(assoc + i);
-			/* plaintext / ciphertext starts after AD */
-			sg->length -= assoclen;
-			sg->offset += assoclen;
-			break;
+			assoclen = 0;
+			i++;
 		}
+
+		break;
 	}
 
-	aead_request_set_assoc(&ctx->aead_req, assoc, ctx->aead_assoclen);
-	aead_request_set_crypt(&ctx->aead_req, sg, ctx->rsgl[0].sg, used,
-			       ctx->iv);
+	/* This should never happen because of aead_sufficient_data. */
+	if (WARN_ON_ONCE(assoclen))
+		goto unlock;
+
+	/* current page is the start of plaintext / ciphertext */
+	if (!i)
+		/* AD size is zero */
+		dst = ctx->rsgl[0].sg;
+	else if (outlen)
+		/* AD size is non-zero */
+		scatterwalk_crypto_chain(
+			dst, ctx->rsgl[0].sg,
+			sg_page(ctx->rsgl[0].sg) == sg_page(dst + i - 1) &&
+			ctx->rsgl[0].sg[0].offset == dst[i - 1].offset +
+						     dst[i - 1].length,
+			i + 1);
+	else
+		/* AD only */
+		sg_mark_end(dst + i);
+
+	aead_request_set_crypt(&ctx->aead_req, sgl->sg, dst, used, ctx->iv);
+	aead_request_set_ad(&ctx->aead_req, ctx->aead_assoclen, 0);
 
 	err = af_alg_wait_for_completion(ctx->enc ?
 					 crypto_aead_encrypt(&ctx->aead_req) :