From patchwork Tue Mar 22 01:58:24 2016
X-Patchwork-Submitter: Krzysztof Kozlowski
X-Patchwork-Id: 8637671
X-Patchwork-Delegate: herbert@gondor.apana.org.au
Miller" , Vladimir Zapolskiy , linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Krzysztof Kozlowski Subject: [PATCH v3 2/3] crypto: s5p-sss - Handle unaligned buffers Date: Tue, 22 Mar 2016 10:58:24 +0900 Message-id: <1458611905-30157-2-git-send-email-k.kozlowski@samsung.com> X-Mailer: git-send-email 2.5.0 In-reply-to: <1458611905-30157-1-git-send-email-k.kozlowski@samsung.com> References: <1458611905-30157-1-git-send-email-k.kozlowski@samsung.com> X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFuphluLIzCtJLcpLzFFi42I5/e/4Nd1Xyz6EGTx4LG0x53wLi0X3KxmL 8+c3sFvcv/eTyeLyrjlsFv9/NTM7sHlsWXmTyWPbAVWPTas62Tz+LZzC4vF5k1wAaxSXTUpq TmZZapG+XQJXxp9PW5kK7ptUPJ//n7GBsUWri5GTQ0LARGLJ7A8sELaYxIV769m6GLk4hASW MkqsPnCcFcL5zyhxv+MXO0gVm4CxxOblS8CqRAR2MEosvPqHGSTBLKAp0b1iKlARB4ewgJPE 7S1pIGEWAVWJ/1N/MILYvALuEieOTGUGKZEQkJNYcCEdJMwp4CHx+9ozNhBbCKhkz7LnjBMY eRcwMqxiFE0tTS4oTkrPNdIrTswtLs1L10vOz93ECAmjrzsYlx6zOsQowMGoxMPLsOp9mBBr YllxZe4hRgkOZiURXt7eD2FCvCmJlVWpRfnxRaU5qcWHGKU5WJTEeWfueh8iJJCeWJKanZpa kFoEk2Xi4JRqYBT0u//8b8wEWT4j5aLjPpbLHyoJT+09V/bGKLry1d72sEnPPJk/MZv/UXNi bZ6vKbx8H4/ajVUzrxnwh5T36p/Ov/rC0vG12zq3lCxJpfV8V8I39a6+77iLg19o1jstB9kV TY/dH53POFH8yGv3r14bxwYu+ZpN/61XLZixc3/SOTMmy2rHlUosxRmJhlrMRcWJAA5krIkf AgAA Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Spam-Status: No, score=-6.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_HI, RP_MATCHES_RCVD, UNPARSEABLE_RELAY autolearn=unavailable version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mail.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Krzysztof Kozlowski During crypto selftests on Odroid XU3 (Exynos5422) some of the algorithms failed because of passing AES-block unaligned source and destination buffers: alg: skcipher: encryption failed on chunk test 1 for ecb-aes-s5p: ret=22 Handle such case by copying the buffers to a new aligned and contiguous space. Signed-off-by: Krzysztof Kozlowski Acked-by: Vladimir Zapolskiy --- Changes since v2: 1. Fix infinite loop in s5p_is_sg_aligned(). Changes since v1: 1. Add Vladimir's acked-by. 
Driver tested on Exynos4412/Trats2 and Exynos5422/Odroid XU4 with
crypto manager self-tests and tcrypt:

	$ modprobe tcrypt sec=1 mode=500
---
 drivers/crypto/s5p-sss.c | 150 +++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 138 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
index 60f835455a41..3730fb0af4d8 100644
--- a/drivers/crypto/s5p-sss.c
+++ b/drivers/crypto/s5p-sss.c
@@ -28,6 +28,7 @@
 #include <crypto/algapi.h>
 #include <crypto/aes.h>
 #include <crypto/ctr.h>
+#include <crypto/scatterwalk.h>
 
 #define _SBF(s, v)			((v) << (s))
 #define _BIT(b)				_SBF(b, 1)
@@ -185,6 +186,10 @@ struct s5p_aes_dev {
 	struct scatterlist		*sg_src;
 	struct scatterlist		*sg_dst;
 
+	/* In case of unaligned access: */
+	struct scatterlist		*sg_src_cpy;
+	struct scatterlist		*sg_dst_cpy;
+
 	struct tasklet_struct		tasklet;
 	struct crypto_queue		queue;
 	bool				busy;
@@ -244,8 +249,45 @@ static void s5p_set_dma_outdata(struct s5p_aes_dev *dev, struct scatterlist *sg)
 	SSS_WRITE(dev, FCBTDMAL, sg_dma_len(sg));
 }
 
+static void s5p_free_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist **sg)
+{
+	int len;
+
+	if (!*sg)
+		return;
+
+	len = ALIGN(dev->req->nbytes, AES_BLOCK_SIZE);
+	free_pages((unsigned long)sg_virt(*sg), get_order(len));
+
+	kfree(*sg);
+	*sg = NULL;
+}
+
+static void s5p_sg_copy_buf(void *buf, struct scatterlist *sg,
+			    unsigned int nbytes, int out)
+{
+	struct scatter_walk walk;
+
+	if (!nbytes)
+		return;
+
+	scatterwalk_start(&walk, sg);
+	scatterwalk_copychunks(buf, &walk, nbytes, out);
+	scatterwalk_done(&walk, out, 0);
+}
+
 static void s5p_aes_complete(struct s5p_aes_dev *dev, int err)
 {
+	if (dev->sg_dst_cpy) {
+		dev_dbg(dev->dev,
+			"Copying %d bytes of output data back to original place\n",
+			dev->req->nbytes);
+		s5p_sg_copy_buf(sg_virt(dev->sg_dst_cpy), dev->req->dst,
+				dev->req->nbytes, 1);
+	}
+	s5p_free_sg_cpy(dev, &dev->sg_src_cpy);
+	s5p_free_sg_cpy(dev, &dev->sg_dst_cpy);
+
 	/* holding a lock outside */
 	dev->req->base.complete(&dev->req->base, err);
 	dev->busy = false;
@@ -261,14 +303,36 @@ static void s5p_unset_indata(struct s5p_aes_dev *dev)
 	dma_unmap_sg(dev->dev, dev->sg_src, 1, DMA_TO_DEVICE);
 }
 
+static int s5p_make_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist *src,
+			   struct scatterlist **dst)
+{
+	void *pages;
+	int len;
+
+	*dst = kmalloc(sizeof(**dst), GFP_ATOMIC);
+	if (!*dst)
+		return -ENOMEM;
+
+	len = ALIGN(dev->req->nbytes, AES_BLOCK_SIZE);
+	pages = (void *)__get_free_pages(GFP_ATOMIC, get_order(len));
+	if (!pages) {
+		kfree(*dst);
+		*dst = NULL;
+		return -ENOMEM;
+	}
+
+	s5p_sg_copy_buf(pages, src, dev->req->nbytes, 0);
+
+	sg_init_table(*dst, 1);
+	sg_set_buf(*dst, pages, len);
+
+	return 0;
+}
+
 static int s5p_set_outdata(struct s5p_aes_dev *dev, struct scatterlist *sg)
 {
 	int err;
 
-	if (!IS_ALIGNED(sg_dma_len(sg), AES_BLOCK_SIZE)) {
-		err = -EINVAL;
-		goto exit;
-	}
 	if (!sg_dma_len(sg)) {
 		err = -EINVAL;
 		goto exit;
@@ -291,10 +355,6 @@ static int s5p_set_indata(struct s5p_aes_dev *dev, struct scatterlist *sg)
 {
 	int err;
 
-	if (!IS_ALIGNED(sg_dma_len(sg), AES_BLOCK_SIZE)) {
-		err = -EINVAL;
-		goto exit;
-	}
 	if (!sg_dma_len(sg)) {
 		err = -EINVAL;
 		goto exit;
@@ -394,6 +454,71 @@ static void s5p_set_aes(struct s5p_aes_dev *dev,
 	memcpy_toio(keystart, key, keylen);
 }
 
+static bool s5p_is_sg_aligned(struct scatterlist *sg)
+{
+	while (sg) {
+		if (!IS_ALIGNED(sg_dma_len(sg), AES_BLOCK_SIZE))
+			return false;
+		sg = sg_next(sg);
+	}
+
+	return true;
+}
+
+static int s5p_set_indata_start(struct s5p_aes_dev *dev,
+				struct ablkcipher_request *req)
+{
+	struct scatterlist *sg;
+	int err;
+
+	dev->sg_src_cpy = NULL;
+	sg = req->src;
+	if (!s5p_is_sg_aligned(sg)) {
+		dev_dbg(dev->dev,
+			"At least one unaligned source scatter list, making a copy\n");
+		err = s5p_make_sg_cpy(dev, sg, &dev->sg_src_cpy);
+		if (err)
+			return err;
+
+		sg = dev->sg_src_cpy;
+	}
+
+	err = s5p_set_indata(dev, sg);
+	if (err) {
+		s5p_free_sg_cpy(dev, &dev->sg_src_cpy);
+		return err;
+	}
+
+	return 0;
+}
+
+static int s5p_set_outdata_start(struct s5p_aes_dev *dev,
+				 struct ablkcipher_request *req)
+{
+	struct scatterlist *sg;
+	int err;
+
+	dev->sg_dst_cpy = NULL;
+	sg = req->dst;
+	if (!s5p_is_sg_aligned(sg)) {
+		dev_dbg(dev->dev,
+			"At least one unaligned dest scatter list, making a copy\n");
+		err = s5p_make_sg_cpy(dev, sg, &dev->sg_dst_cpy);
+		if (err)
+			return err;
+
+		sg = dev->sg_dst_cpy;
+	}
+
+	err = s5p_set_outdata(dev, sg);
+	if (err) {
+		s5p_free_sg_cpy(dev, &dev->sg_dst_cpy);
+		return err;
+	}
+
+	return 0;
+}
+
 static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
 {
 	struct ablkcipher_request *req = dev->req;
@@ -430,19 +555,19 @@ static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
 			  SSS_FCINTENCLR_BTDMAINTENCLR | SSS_FCINTENCLR_BRDMAINTENCLR);
 	SSS_WRITE(dev, FCFIFOCTRL, 0x00);
 
-	err = s5p_set_indata(dev, req->src);
+	err = s5p_set_indata_start(dev, req);
 	if (err)
 		goto indata_error;
 
-	err = s5p_set_outdata(dev, req->dst);
+	err = s5p_set_outdata_start(dev, req);
 	if (err)
 		goto outdata_error;
 
 	SSS_AES_WRITE(dev, AES_CONTROL, aes_control);
 	s5p_set_aes(dev, dev->ctx->aes_key, req->info, dev->ctx->keylen);
 
-	s5p_set_dma_indata(dev, req->src);
-	s5p_set_dma_outdata(dev, req->dst);
+	s5p_set_dma_indata(dev, dev->sg_src);
+	s5p_set_dma_outdata(dev, dev->sg_dst);
 
 	SSS_WRITE(dev, FCINTENSET,
 		  SSS_FCINTENSET_BTDMAINTENSET | SSS_FCINTENSET_BRDMAINTENSET);
@@ -452,6 +577,7 @@ static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
 	return;
 
 outdata_error:
+	s5p_free_sg_cpy(dev, &dev->sg_src_cpy);
 	s5p_unset_indata(dev);
 
 indata_error:
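Aside, for reviewers: the core of the fix is the gather/scatter round
trip through a contiguous bounce buffer. Below is a self-contained
userspace model of what s5p_sg_copy_buf() does via
scatterwalk_copychunks(), using the same out-flag convention (0 gathers
scatterlist data into the linear buffer, 1 scatters it back); struct
chunk is a hypothetical stand-in for one scatterlist entry and the
values are illustrative, not taken from testmgr:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define AES_BLOCK_SIZE	16

struct chunk {			/* stand-in for one scatterlist entry */
	unsigned char *buf;
	size_t len;
};

/* Gather chunks into one contiguous buffer (out == 0) or scatter it
 * back into the chunks (out == 1). */
static void copy_chunks(unsigned char *lin, struct chunk *c, int n, int out)
{
	size_t off = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (out)
			memcpy(c[i].buf, lin + off, c[i].len);
		else
			memcpy(lin + off, c[i].buf, c[i].len);
		off += c[i].len;
	}
}

int main(void)
{
	/* Two entries of 13 and 19 bytes: neither is a block multiple,
	 * but the 32-byte total is exactly two AES blocks. */
	unsigned char a[13] = "unaligned ch", b[19] = "unks get bounced!!";
	struct chunk sg[] = { { a, sizeof(a) }, { b, sizeof(b) } };
	size_t total = sizeof(a) + sizeof(b);
	unsigned char *bounce = calloc(1, total);

	if (!bounce)
		return 1;
	copy_chunks(bounce, sg, 2, 0);	/* gather: like the src copy */
	/* ... the DMA engine would operate on 'bounce' here ... */
	copy_chunks(bounce, sg, 2, 1);	/* scatter: like the dst copy-back */
	printf("%s%s\n", a, b);		/* data round-trips intact */
	free(bounce);
	return 0;
}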