From patchwork Tue Jan 23 20:19:14 2018
X-Patchwork-Submitter: Junaid Shahid
X-Patchwork-Id: 10181027
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Junaid Shahid
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, andreslc@google.com, davem@davemloft.net,
    gthelen@google.com, ebiggers3@gmail.com, smueller@chronox.de
Subject: [PATCH v2 2/4] crypto: aesni - Enable one-sided zero copy for
 gcm(aes) request buffers
Date: Tue, 23 Jan 2018 12:19:14 -0800
Message-Id: <20180123201916.102134-3-junaids@google.com>
X-Mailer: git-send-email 2.16.0.rc1.238.g530d649a79-goog
In-Reply-To: <20180123201916.102134-1-junaids@google.com>
References: <20180123201916.102134-1-junaids@google.com>

gcmaes_encrypt/decrypt perform zero-copy crypto if both the source and
destination satisfy certain conditions (single sglist entry located in
low-mem or within a single high-mem page). But two copies are done
otherwise, even if one of source or destination still satisfies the
zero-copy conditions. This optimization is now extended to avoid the
copy on the side that does satisfy the zero-copy conditions.

Signed-off-by: Junaid Shahid
---
 arch/x86/crypto/aesni-intel_glue.c | 256 +++++++++++++++++++------------------
 1 file changed, 134 insertions(+), 122 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 3bf3dcf29825..aef6c82b9ca7 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -744,136 +744,148 @@ static int generic_gcmaes_set_authsize(struct crypto_aead *tfm,
 	return 0;
 }
 
+static bool is_mappable(struct scatterlist *sgl, unsigned long len)
+{
+	return sgl->length > 0 && len <= sgl->length &&
+	       (!PageHighMem(sg_page(sgl)) || sgl->offset + len <= PAGE_SIZE);
+}
+
+/*
+ * Maps the sglist buffer and returns a pointer to the mapped buffer in
+ * data_buf.
+ *
+ * If direct mapping is not feasible, then allocates a bounce buffer if one
+ * isn't already available in bounce_buf, and returns a pointer to the bounce
+ * buffer in data_buf.
+ *
+ * When the buffer is no longer needed, put_request_buffer() should be called
+ * on the data_buf and the bounce_buf should be freed using kfree().
+ */
+static int get_request_buffer(struct scatterlist *sgl,
+			      struct scatter_walk *sg_walk,
+			      unsigned long bounce_buf_size,
+			      u8 **data_buf, u8 **bounce_buf, bool *mapped)
+{
+	if (sg_is_last(sgl) && is_mappable(sgl, sgl->length)) {
+		*mapped = true;
+		scatterwalk_start(sg_walk, sgl);
+		*data_buf = scatterwalk_map(sg_walk);
+		return 0;
+	}
+
+	*mapped = false;
+
+	if (*bounce_buf == NULL) {
+		*bounce_buf = kmalloc(bounce_buf_size, GFP_ATOMIC);
+		if (unlikely(*bounce_buf == NULL))
+			return -ENOMEM;
+	}
+
+	*data_buf = *bounce_buf;
+	return 0;
+}
+
+static void put_request_buffer(u8 *data_buf, unsigned long len, bool mapped,
+			       struct scatter_walk *sg_walk, bool output)
+{
+	if (mapped) {
+		scatterwalk_unmap(data_buf);
+		scatterwalk_advance(sg_walk, len);
+		scatterwalk_done(sg_walk, output, 0);
+	}
+}
+
+/*
+ * Performs the encryption/decryption operation for the given request. The src
+ * and dst sglists in the request are directly mapped if possible. Otherwise, a
+ * bounce buffer is allocated and used to copy the data from the src or to the
+ * dst, or both.
+ */
+static int gcmaes_crypt(struct aead_request *req, unsigned int assoclen,
+			u8 *hash_subkey, u8 *iv, void *aes_ctx, bool decrypt)
+{
+	u8 *src, *dst, *assoc, *bounce_buf = NULL;
+	bool src_mapped = false, dst_mapped = false;
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
+	unsigned long data_len = req->cryptlen - (decrypt ? auth_tag_len : 0);
+	struct scatter_walk src_sg_walk;
+	struct scatter_walk dst_sg_walk = {};
+	int retval = 0;
+	unsigned long bounce_buf_size = data_len + auth_tag_len + req->assoclen;
+
+	if (auth_tag_len > 16)
+		return -EINVAL;
+
+	retval = get_request_buffer(req->src, &src_sg_walk, bounce_buf_size,
+				    &assoc, &bounce_buf, &src_mapped);
+	if (retval)
+		goto exit;
+
+	src = assoc + req->assoclen;
+
+	if (req->src == req->dst) {
+		dst = src;
+		dst_mapped = src_mapped;
+	} else {
+		retval = get_request_buffer(req->dst, &dst_sg_walk,
+					    bounce_buf_size, &dst, &bounce_buf,
+					    &dst_mapped);
+		if (retval)
+			goto exit;
+
+		dst += req->assoclen;
+	}
+
+	if (!src_mapped)
+		scatterwalk_map_and_copy(bounce_buf, req->src, 0,
+					 req->assoclen + req->cryptlen, 0);
+
+	kernel_fpu_begin();
+
+	if (decrypt) {
+		u8 gen_auth_tag[16];
+
+		aesni_gcm_dec_tfm(aes_ctx, dst, src, data_len, iv,
+				  hash_subkey, assoc, assoclen,
+				  gen_auth_tag, auth_tag_len);
+		/* Compare generated tag with passed in tag. */
+		if (crypto_memneq(src + data_len, gen_auth_tag, auth_tag_len))
+			retval = -EBADMSG;
+
+	} else
+		aesni_gcm_enc_tfm(aes_ctx, dst, src, data_len, iv,
+				  hash_subkey, assoc, assoclen,
+				  dst + data_len, auth_tag_len);
+
+	kernel_fpu_end();
+
+	if (!dst_mapped)
+		scatterwalk_map_and_copy(dst, req->dst, req->assoclen,
					 data_len + (decrypt ? 0 : auth_tag_len),
+					 1);
+exit:
+	if (req->dst != req->src)
+		put_request_buffer(dst - req->assoclen, req->dst->length,
+				   dst_mapped, &dst_sg_walk, true);
+
+	put_request_buffer(assoc, req->src->length, src_mapped, &src_sg_walk,
+			   false);
+
+	kfree(bounce_buf);
+	return retval;
+}
+
 static int gcmaes_encrypt(struct aead_request *req, unsigned int assoclen,
 			  u8 *hash_subkey, u8 *iv, void *aes_ctx)
 {
-	u8 one_entry_in_sg = 0;
-	u8 *src, *dst, *assoc;
-	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
-	struct scatter_walk src_sg_walk;
-	struct scatter_walk dst_sg_walk = {};
-
-	if (sg_is_last(req->src) &&
-	    (!PageHighMem(sg_page(req->src)) ||
-	    req->src->offset + req->src->length <= PAGE_SIZE) &&
-	    sg_is_last(req->dst) &&
-	    (!PageHighMem(sg_page(req->dst)) ||
-	    req->dst->offset + req->dst->length <= PAGE_SIZE)) {
-		one_entry_in_sg = 1;
-		scatterwalk_start(&src_sg_walk, req->src);
-		assoc = scatterwalk_map(&src_sg_walk);
-		src = assoc + req->assoclen;
-		dst = src;
-		if (unlikely(req->src != req->dst)) {
-			scatterwalk_start(&dst_sg_walk, req->dst);
-			dst = scatterwalk_map(&dst_sg_walk) + req->assoclen;
-		}
-	} else {
-		/* Allocate memory for src, dst, assoc */
-		assoc = kmalloc(req->cryptlen + auth_tag_len + req->assoclen,
-			GFP_ATOMIC);
-		if (unlikely(!assoc))
-			return -ENOMEM;
-		scatterwalk_map_and_copy(assoc, req->src, 0,
-					 req->assoclen + req->cryptlen, 0);
-		src = assoc + req->assoclen;
-		dst = src;
-	}
-
-	kernel_fpu_begin();
-	aesni_gcm_enc_tfm(aes_ctx, dst, src, req->cryptlen, iv,
-			  hash_subkey, assoc, assoclen,
-			  dst + req->cryptlen, auth_tag_len);
-	kernel_fpu_end();
-
-	/* The authTag (aka the Integrity Check Value) needs to be written
-	 * back to the packet. */
-	if (one_entry_in_sg) {
-		if (unlikely(req->src != req->dst)) {
-			scatterwalk_unmap(dst - req->assoclen);
-			scatterwalk_advance(&dst_sg_walk, req->dst->length);
-			scatterwalk_done(&dst_sg_walk, 1, 0);
-		}
-		scatterwalk_unmap(assoc);
-		scatterwalk_advance(&src_sg_walk, req->src->length);
-		scatterwalk_done(&src_sg_walk, req->src == req->dst, 0);
-	} else {
-		scatterwalk_map_and_copy(dst, req->dst, req->assoclen,
-					 req->cryptlen + auth_tag_len, 1);
-		kfree(assoc);
-	}
-	return 0;
+	return gcmaes_crypt(req, assoclen, hash_subkey, iv, aes_ctx, false);
 }
 
 static int gcmaes_decrypt(struct aead_request *req, unsigned int assoclen,
 			  u8 *hash_subkey, u8 *iv, void *aes_ctx)
 {
-	u8 one_entry_in_sg = 0;
-	u8 *src, *dst, *assoc;
-	unsigned long tempCipherLen = 0;
-	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
-	u8 authTag[16];
-	struct scatter_walk src_sg_walk;
-	struct scatter_walk dst_sg_walk = {};
-	int retval = 0;
-
-	tempCipherLen = (unsigned long)(req->cryptlen - auth_tag_len);
-
-	if (sg_is_last(req->src) &&
-	    (!PageHighMem(sg_page(req->src)) ||
-	    req->src->offset + req->src->length <= PAGE_SIZE) &&
-	    sg_is_last(req->dst) &&
-	    (!PageHighMem(sg_page(req->dst)) ||
-	    req->dst->offset + req->dst->length <= PAGE_SIZE)) {
-		one_entry_in_sg = 1;
-		scatterwalk_start(&src_sg_walk, req->src);
-		assoc = scatterwalk_map(&src_sg_walk);
-		src = assoc + req->assoclen;
-		dst = src;
-		if (unlikely(req->src != req->dst)) {
-			scatterwalk_start(&dst_sg_walk, req->dst);
-			dst = scatterwalk_map(&dst_sg_walk) + req->assoclen;
-		}
-	} else {
-		/* Allocate memory for src, dst, assoc */
-		assoc = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC);
-		if (!assoc)
-			return -ENOMEM;
-		scatterwalk_map_and_copy(assoc, req->src, 0,
-					 req->assoclen + req->cryptlen, 0);
-		src = assoc + req->assoclen;
-		dst = src;
-	}
-
-
-	kernel_fpu_begin();
-	aesni_gcm_dec_tfm(aes_ctx, dst, src, tempCipherLen, iv,
-			  hash_subkey, assoc, assoclen,
-			  authTag, auth_tag_len);
-	kernel_fpu_end();
-
-	/* Compare generated tag with passed in tag. */
-	retval = crypto_memneq(src + tempCipherLen, authTag, auth_tag_len) ?
-		-EBADMSG : 0;
-
-	if (one_entry_in_sg) {
-		if (unlikely(req->src != req->dst)) {
-			scatterwalk_unmap(dst - req->assoclen);
-			scatterwalk_advance(&dst_sg_walk, req->dst->length);
-			scatterwalk_done(&dst_sg_walk, 1, 0);
-		}
-		scatterwalk_unmap(assoc);
-		scatterwalk_advance(&src_sg_walk, req->src->length);
-		scatterwalk_done(&src_sg_walk, req->src == req->dst, 0);
-	} else {
-		scatterwalk_map_and_copy(dst, req->dst, req->assoclen,
-					 tempCipherLen, 1);
-		kfree(assoc);
-	}
-	return retval;
-
+	return gcmaes_crypt(req, assoclen, hash_subkey, iv, aes_ctx, true);
 }
 
 static int helper_rfc4106_encrypt(struct aead_request *req)
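
As an illustration of when the direct-mapping path in this patch applies, below
is a minimal caller-side sketch (not part of the patch): a single lowmem buffer
holding the associated data, the plaintext and room for the 16-byte tag, passed
as one scatterlist entry and used in place, satisfies the single-entry/mappable
condition checked by is_mappable(), so gcmaes_crypt() can map it directly
instead of allocating a bounce buffer. The function name gcm_zero_copy_example()
and its parameters are made up for illustration; the crypto API calls themselves
are the standard in-kernel AEAD interface.

#include <crypto/aead.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int gcm_zero_copy_example(const u8 *key, unsigned int keylen,
				 u8 *iv /* 12 bytes for gcm(aes) */,
				 unsigned int assoclen, unsigned int ptlen)
{
	struct crypto_aead *tfm;
	struct aead_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	unsigned int taglen = 16;
	u8 *buf;
	int err;

	tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_aead_setkey(tfm, key, keylen);
	if (err)
		goto out_free_tfm;
	err = crypto_aead_setauthsize(tfm, taglen);
	if (err)
		goto out_free_tfm;

	req = aead_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	/* One lowmem buffer: assoc data || plaintext || space for the tag. */
	buf = kmalloc(assoclen + ptlen + taglen, GFP_KERNEL);
	if (!buf) {
		err = -ENOMEM;
		goto out_free_req;
	}
	/* Caller fills buf[0..assoclen) with AD and buf[assoclen..) with plaintext. */

	sg_init_one(&sg, buf, assoclen + ptlen + taglen);
	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				  CRYPTO_TFM_REQ_MAY_SLEEP,
				  crypto_req_done, &wait);
	aead_request_set_ad(req, assoclen);
	/* In place (src == dst), single sg entry: eligible for direct mapping. */
	aead_request_set_crypt(req, &sg, &sg, ptlen, iv);

	err = crypto_wait_req(crypto_aead_encrypt(req), &wait);

	kfree(buf);
out_free_req:
	aead_request_free(req);
out_free_tfm:
	crypto_free_aead(tfm);
	return err;
}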