From patchwork Mon Nov 23 17:42:44 2015
X-Patchwork-Submitter: Dave Watson
X-Patchwork-Id: 7684571
X-Patchwork-Delegate: herbert@gondor.apana.org.au
Date: Mon, 23 Nov 2015 09:42:44 -0800
From: Dave Watson
To: Tom Herbert, netdev@vger.kernel.org, davem@davemloft.net,
	Sowmini Varadhan, Herbert Xu
Cc: linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-team@fb.com
Subject: [RFC PATCH 1/2] Crypto support aesni rfc5288
X-Mailing-List: linux-crypto@vger.kernel.org

Support rfc5288 using the Intel AES-NI routines; see also rfc5246.
The AAD length is 13 bytes, padded out to 16. The padding bytes
currently have to be passed in via the scatterlist, which probably
isn't quite the right fix.

The assoclen checks were moved to the individual rfc stubs, and the
common routines now support all assoc lengths.
---
 arch/x86/crypto/aesni-intel_asm.S        |   6 ++
 arch/x86/crypto/aesni-intel_avx-x86_64.S |   4 ++
 arch/x86/crypto/aesni-intel_glue.c       | 105 +++++++++++++++++++++++--------
 3 files changed, 88 insertions(+), 27 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
index 6bd2c6c..49667c4 100644
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -228,6 +228,9 @@ XMM2 XMM3 XMM4 XMMDst TMP6 TMP7 i i_seq operation
 	MOVADQ	SHUF_MASK(%rip), %xmm14
 	mov	arg7, %r10		# %r10 = AAD
 	mov	arg8, %r12		# %r12 = aadLen
+	add	$3, %r12
+	and	$~3, %r12
+	mov	%r12, %r11
 	pxor	%xmm\i, %xmm\i
@@ -453,6 +456,9 @@ XMM2 XMM3 XMM4 XMMDst TMP6 TMP7 i i_seq operation
 	MOVADQ	SHUF_MASK(%rip), %xmm14
 	mov	arg7, %r10		# %r10 = AAD
 	mov	arg8, %r12		# %r12 = aadLen
+	add	$3, %r12
+	and	$~3, %r12
+	mov	%r12, %r11
 	pxor	%xmm\i, %xmm\i
 _get_AAD_loop\num_initial_blocks\operation:
diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S
index 522ab68..0756e4a 100644
--- a/arch/x86/crypto/aesni-intel_avx-x86_64.S
+++ b/arch/x86/crypto/aesni-intel_avx-x86_64.S
@@ -360,6 +360,8 @@ VARIABLE_OFFSET = 16*8
 	mov	arg6, %r10	# r10 = AAD
 	mov	arg7, %r12	# r12 = aadLen
+	add	$3, %r12
+	and	$~3, %r12
 	mov	%r12, %r11
@@ -1619,6 +1621,8 @@ ENDPROC(aesni_gcm_dec_avx_gen2)
 	mov	arg6, %r10	# r10 = AAD
 	mov	arg7, %r12	# r12 = aadLen
+	add	$3, %r12
+	and	$~3, %r12
 	mov	%r12, %r11
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 3633ad6..00a42ca 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -949,12 +949,7 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
 	struct scatter_walk src_sg_walk;
 	struct scatter_walk dst_sg_walk;
 	unsigned int i;
-
-	/* Assuming we are supporting rfc4106 64-bit extended */
-	/* sequence numbers We need to have the AAD length equal */
-	/* to 16 or 20 bytes */
-	if (unlikely(req->assoclen != 16 && req->assoclen != 20))
-		return -EINVAL;
+	unsigned int padded_assoclen = (req->assoclen + 3) & ~3;
 
 	/* IV below built */
 	for (i = 0; i < 4; i++)
@@ -970,21 +965,21 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
 		one_entry_in_sg = 1;
 		scatterwalk_start(&src_sg_walk, req->src);
 		assoc = scatterwalk_map(&src_sg_walk);
-		src = assoc + req->assoclen;
+		src = assoc + padded_assoclen;
 		dst = src;
 		if (unlikely(req->src != req->dst)) {
 			scatterwalk_start(&dst_sg_walk, req->dst);
-			dst = scatterwalk_map(&dst_sg_walk) + req->assoclen;
+			dst = scatterwalk_map(&dst_sg_walk) + padded_assoclen;
 		}
 	} else {
 		/* Allocate memory for src, dst, assoc */
-		assoc = kmalloc(req->cryptlen + auth_tag_len + req->assoclen,
+		assoc = kmalloc(req->cryptlen + auth_tag_len + padded_assoclen,
 			GFP_ATOMIC);
 		if (unlikely(!assoc))
 			return -ENOMEM;
 		scatterwalk_map_and_copy(assoc, req->src, 0,
-					 req->assoclen + req->cryptlen, 0);
-		src = assoc + req->assoclen;
+					 padded_assoclen + req->cryptlen, 0);
+		src = assoc + padded_assoclen;
 		dst = src;
 	}
@@ -998,7 +993,7 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
 	 * back to the packet. */
 	if (one_entry_in_sg) {
 		if (unlikely(req->src != req->dst)) {
-			scatterwalk_unmap(dst - req->assoclen);
+			scatterwalk_unmap(dst - padded_assoclen);
 			scatterwalk_advance(&dst_sg_walk, req->dst->length);
 			scatterwalk_done(&dst_sg_walk, 1, 0);
 		}
@@ -1006,7 +1001,7 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
 		scatterwalk_advance(&src_sg_walk, req->src->length);
 		scatterwalk_done(&src_sg_walk, req->src == req->dst, 0);
 	} else {
-		scatterwalk_map_and_copy(dst, req->dst, req->assoclen,
+		scatterwalk_map_and_copy(dst, req->dst, padded_assoclen,
 					 req->cryptlen + auth_tag_len, 1);
 		kfree(assoc);
 	}
@@ -1029,13 +1024,7 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
 	struct scatter_walk src_sg_walk;
 	struct scatter_walk dst_sg_walk;
 	unsigned int i;
-
-	if (unlikely(req->assoclen != 16 && req->assoclen != 20))
-		return -EINVAL;
-
-	/* Assuming we are supporting rfc4106 64-bit extended */
-	/* sequence numbers We need to have the AAD length */
-	/* equal to 16 or 20 bytes */
+	unsigned int padded_assoclen = (req->assoclen + 3) & ~3;
 
 	tempCipherLen = (unsigned long)(req->cryptlen - auth_tag_len);
 	/* IV below built */
@@ -1052,21 +1041,21 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
 		one_entry_in_sg = 1;
 		scatterwalk_start(&src_sg_walk, req->src);
 		assoc = scatterwalk_map(&src_sg_walk);
-		src = assoc + req->assoclen;
+		src = assoc + padded_assoclen;
 		dst = src;
 		if (unlikely(req->src != req->dst)) {
 			scatterwalk_start(&dst_sg_walk, req->dst);
-			dst = scatterwalk_map(&dst_sg_walk) + req->assoclen;
+			dst = scatterwalk_map(&dst_sg_walk) + padded_assoclen;
 		}
 	} else {
 		/* Allocate memory for src, dst, assoc */
-		assoc = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC);
+		assoc = kmalloc(req->cryptlen + padded_assoclen, GFP_ATOMIC);
 		if (!assoc)
 			return -ENOMEM;
 		scatterwalk_map_and_copy(assoc, req->src, 0,
-					 req->assoclen + req->cryptlen, 0);
-		src = assoc + req->assoclen;
+					 padded_assoclen + req->cryptlen, 0);
+		src = assoc + padded_assoclen;
 		dst = src;
 	}
@@ -1082,7 +1071,7 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
 
 	if (one_entry_in_sg) {
 		if (unlikely(req->src != req->dst)) {
-			scatterwalk_unmap(dst - req->assoclen);
+			scatterwalk_unmap(dst - padded_assoclen);
 			scatterwalk_advance(&dst_sg_walk, req->dst->length);
 			scatterwalk_done(&dst_sg_walk, 1, 0);
 		}
@@ -1090,7 +1079,7 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
 		scatterwalk_advance(&src_sg_walk, req->src->length);
 		scatterwalk_done(&src_sg_walk, req->src == req->dst, 0);
 	} else {
-		scatterwalk_map_and_copy(dst, req->dst, req->assoclen,
+		scatterwalk_map_and_copy(dst, req->dst, padded_assoclen,
 					 tempCipherLen, 1);
 		kfree(assoc);
 	}
@@ -1107,6 +1096,12 @@ static int rfc4106_encrypt(struct aead_request *req)
 			cryptd_aead_child(cryptd_tfm) :
 			&cryptd_tfm->base);
 
+	/* Assuming we are supporting rfc4106 64-bit extended */
+	/* sequence numbers We need to have the AAD length */
+	/* equal to 16 or 20 bytes */
+	if (unlikely(req->assoclen != 16 && req->assoclen != 20))
+		return -EINVAL;
+
 	return crypto_aead_encrypt(req);
 }
@@ -1120,6 +1115,44 @@ static int rfc4106_decrypt(struct aead_request *req)
 			cryptd_aead_child(cryptd_tfm) :
 			&cryptd_tfm->base);
 
+	/* Assuming we are supporting rfc4106 64-bit extended */
+	/* sequence numbers We need to have the AAD length */
+	/* equal to 16 or 20 bytes */
+	if (unlikely(req->assoclen != 16 && req->assoclen != 20))
+		return -EINVAL;
+
+	return crypto_aead_decrypt(req);
+}
+
+static int rfc5288_encrypt(struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct cryptd_aead **ctx = crypto_aead_ctx(tfm);
+	struct cryptd_aead *cryptd_tfm = *ctx;
+
+	if (unlikely(req->assoclen != 21))
+		return -EINVAL;
+
+	aead_request_set_tfm(req, irq_fpu_usable() ?
+			cryptd_aead_child(cryptd_tfm) :
+			&cryptd_tfm->base);
+
+	return crypto_aead_encrypt(req);
+}
+
+static int rfc5288_decrypt(struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct cryptd_aead **ctx = crypto_aead_ctx(tfm);
+	struct cryptd_aead *cryptd_tfm = *ctx;
+
+	if (unlikely(req->assoclen != 21))
+		return -EINVAL;
+
+	aead_request_set_tfm(req, irq_fpu_usable() ?
+			cryptd_aead_child(cryptd_tfm) :
+			&cryptd_tfm->base);
+
 	return crypto_aead_decrypt(req);
 }
 #endif
@@ -1442,6 +1475,24 @@ static struct aead_alg aesni_aead_algs[] = { {
 		.cra_ctxsize		= sizeof(struct cryptd_aead *),
 		.cra_module		= THIS_MODULE,
 	},
+}, {
+	.init			= rfc4106_init,
+	.exit			= rfc4106_exit,
+	.setkey			= rfc4106_set_key,
+	.setauthsize		= rfc4106_set_authsize,
+	.encrypt		= rfc5288_encrypt,
+	.decrypt		= rfc5288_decrypt,
+	.ivsize			= 8,
+	.maxauthsize		= 16,
+	.base = {
+		.cra_name		= "rfc5288(gcm(aes))",
+		.cra_driver_name	= "rfc5288-gcm-aesni",
+		.cra_priority		= 400,
+		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_blocksize		= 1,
+		.cra_ctxsize		= sizeof(struct cryptd_aead *),
+		.cra_module		= THIS_MODULE,
+	},
 } };
 #else
 static struct aead_alg aesni_aead_algs[0];
-- 
2.4.6