From patchwork Tue Jan 23 20:19:15 2018
X-Patchwork-Submitter: Junaid Shahid
X-Patchwork-Id: 10181021
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Junaid Shahid
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, andreslc@google.com, davem@davemloft.net,
    gthelen@google.com, ebiggers3@gmail.com, smueller@chronox.de
Subject: [PATCH v2 3/4] crypto: aesni - Directly use kmap_atomic instead of scatter_walk object in gcm(aes)
Date: Tue, 23 Jan 2018 12:19:15 -0800
Message-Id: <20180123201916.102134-4-junaids@google.com>
In-Reply-To: <20180123201916.102134-1-junaids@google.com>
References: <20180123201916.102134-1-junaids@google.com>

gcmaes_crypt uses a scatter_walk object to map and unmap the crypto
request sglists. However, the only purpose this appears to serve here
is to allow the D-cache to be flushed at the end for pages that were
used as output. Since that is not applicable on x86, we can avoid
using the scatter_walk object for simplicity.

Signed-off-by: Junaid Shahid
---
 arch/x86/crypto/aesni-intel_glue.c | 36 +++++++++++++++---------------------
 1 file changed, 15 insertions(+), 21 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index aef6c82b9ca7..8615ecb0ff1e 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -750,6 +750,11 @@ static bool is_mappable(struct scatterlist *sgl, unsigned long len)
 	       (!PageHighMem(sg_page(sgl)) || sgl->offset + len <= PAGE_SIZE);
 }
 
+static u8 *map_buffer(struct scatterlist *sgl)
+{
+	return kmap_atomic(sg_page(sgl)) + sgl->offset;
+}
+
 /*
  * Maps the sglist buffer and returns a pointer to the mapped buffer in
  * data_buf.
@@ -762,14 +767,12 @@ static bool is_mappable(struct scatterlist *sgl, unsigned long len)
  * the data_buf and the bounce_buf should be freed using kfree().
  */
 static int get_request_buffer(struct scatterlist *sgl,
-			      struct scatter_walk *sg_walk,
 			      unsigned long bounce_buf_size,
 			      u8 **data_buf, u8 **bounce_buf, bool *mapped)
 {
 	if (sg_is_last(sgl) && is_mappable(sgl, sgl->length)) {
 		*mapped = true;
-		scatterwalk_start(sg_walk, sgl);
-		*data_buf = scatterwalk_map(sg_walk);
+		*data_buf = map_buffer(sgl);
 		return 0;
 	}
 
@@ -785,14 +788,10 @@ static int get_request_buffer(struct scatterlist *sgl,
 	return 0;
 }
 
-static void put_request_buffer(u8 *data_buf, unsigned long len, bool mapped,
-			       struct scatter_walk *sg_walk, bool output)
+static void put_request_buffer(u8 *data_buf, bool mapped)
 {
-	if (mapped) {
-		scatterwalk_unmap(data_buf);
-		scatterwalk_advance(sg_walk, len);
-		scatterwalk_done(sg_walk, output, 0);
-	}
+	if (mapped)
+		kunmap_atomic(data_buf);
 }
 
 /*
@@ -809,16 +808,14 @@ static int gcmaes_crypt(struct aead_request *req, unsigned int assoclen,
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
 	unsigned long data_len = req->cryptlen - (decrypt ? auth_tag_len : 0);
-	struct scatter_walk src_sg_walk;
-	struct scatter_walk dst_sg_walk = {};
 	int retval = 0;
 	unsigned long bounce_buf_size = data_len + auth_tag_len + req->assoclen;
 
 	if (auth_tag_len > 16)
 		return -EINVAL;
 
-	retval = get_request_buffer(req->src, &src_sg_walk, bounce_buf_size,
-				    &assoc, &bounce_buf, &src_mapped);
+	retval = get_request_buffer(req->src, bounce_buf_size, &assoc,
+				    &bounce_buf, &src_mapped);
 	if (retval)
 		goto exit;
 
@@ -828,9 +825,8 @@ static int gcmaes_crypt(struct aead_request *req, unsigned int assoclen,
 		dst = src;
 		dst_mapped = src_mapped;
 	} else {
-		retval = get_request_buffer(req->dst, &dst_sg_walk,
-					    bounce_buf_size, &dst, &bounce_buf,
-					    &dst_mapped);
+		retval = get_request_buffer(req->dst, bounce_buf_size, &dst,
+					    &bounce_buf, &dst_mapped);
 		if (retval)
 			goto exit;
 
@@ -866,11 +862,9 @@ static int gcmaes_crypt(struct aead_request *req, unsigned int assoclen,
 			  1);
 exit:
 	if (req->dst != req->src)
-		put_request_buffer(dst - req->assoclen, req->dst->length,
-				   dst_mapped, &dst_sg_walk, true);
+		put_request_buffer(dst - req->assoclen, dst_mapped);
 
-	put_request_buffer(assoc, req->src->length, src_mapped, &src_sg_walk,
-			   false);
+	put_request_buffer(assoc, src_mapped);
 
 	kfree(bounce_buf);
 	return retval;
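
For reviewers who want the mapping pattern at a glance, the sketch below
shows how the get_request_buffer()/put_request_buffer() fast path is meant
to be used. It is illustrative only and not part of the patch:
process_sgl_fast_path() is a hypothetical caller, and error handling and
the bounce-buffer slow path are elided. Note that kmap_atomic() disables
preemption, so nothing between the map and the unmap may sleep.

#include <linux/highmem.h>
#include <linux/scatterlist.h>

static int process_sgl_fast_path(struct scatterlist *sgl, unsigned long len)
{
	u8 *buf;

	/*
	 * Only a single sg entry whose data is in lowmem or fits entirely
	 * within one page can be mapped as one contiguous buffer.
	 */
	if (!sg_is_last(sgl) || !is_mappable(sgl, len))
		return -EINVAL;	/* caller falls back to a bounce buffer */

	/*
	 * Map the page and add the intra-page offset, as map_buffer()
	 * does in this patch.
	 */
	buf = kmap_atomic(sg_page(sgl)) + sgl->offset;

	/* ... operate on buf[0..len-1]; must not sleep here ... */

	/*
	 * kunmap_atomic() accepts any address within the mapped page,
	 * so passing back the offset pointer is fine.
	 */
	kunmap_atomic(buf);
	return 0;
}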