From patchwork Thu Mar 10 01:31:46 2016
X-Patchwork-Submitter: Krzysztof Kozlowski
X-Patchwork-Id: 8552321
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Krzysztof Kozlowski
To: Herbert Xu, "David S. Miller", linux-crypto@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Vladimir Zapolskiy, linux-samsung-soc@vger.kernel.org,
	Krzysztof Kozlowski
Subject: [PATCH v2 2/3] crypto: s5p-sss - Handle unaligned buffers
Date: Thu, 10 Mar 2016 10:31:46 +0900
Message-id: <1457573507-1687-2-git-send-email-k.kozlowski@samsung.com>
X-Mailer: git-send-email 2.5.0
In-reply-to: <1457573507-1687-1-git-send-email-k.kozlowski@samsung.com>
References: <1457573507-1687-1-git-send-email-k.kozlowski@samsung.com>
X-Mailing-List: linux-crypto@vger.kernel.org

During crypto selftests on Odroid XU3 (Exynos5422), some of the
algorithms failed because the source and destination buffers passed to
the driver were not aligned to the AES block size:

alg: skcipher: encryption failed on chunk test 1 for ecb-aes-s5p: ret=22

Handle this case by copying the buffers into a new, aligned and
contiguous area.

Signed-off-by: Krzysztof Kozlowski
Acked-by: Vladimir Zapolskiy
---
Changes since v1:
1. Add Vladimir's acked-by.
---
 drivers/crypto/s5p-sss.c | 149 +++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 137 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
index 60f835455a41..bac1f4593f98 100644
--- a/drivers/crypto/s5p-sss.c
+++ b/drivers/crypto/s5p-sss.c
@@ -28,6 +28,7 @@
 #include <crypto/algapi.h>
 #include <crypto/aes.h>
 #include <crypto/ctr.h>
+#include <crypto/scatterwalk.h>
 
 #define _SBF(s, v)			((v) << (s))
 #define _BIT(b)				_SBF(b, 1)
@@ -185,6 +186,10 @@ struct s5p_aes_dev {
 	struct scatterlist		*sg_src;
 	struct scatterlist		*sg_dst;
 
+	/* In case of unaligned access: */
+	struct scatterlist		*sg_src_cpy;
+	struct scatterlist		*sg_dst_cpy;
+
 	struct tasklet_struct		tasklet;
 	struct crypto_queue		queue;
 	bool				busy;
@@ -244,8 +249,45 @@ static void s5p_set_dma_outdata(struct s5p_aes_dev *dev, struct scatterlist *sg)
 	SSS_WRITE(dev, FCBTDMAL, sg_dma_len(sg));
 }
 
+static void s5p_free_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist **sg)
+{
+	int len;
+
+	if (!*sg)
+		return;
+
+	len = ALIGN(dev->req->nbytes, AES_BLOCK_SIZE);
+	free_pages((unsigned long)sg_virt(*sg), get_order(len));
+
+	kfree(*sg);
+	*sg = NULL;
+}
+
+static void s5p_sg_copy_buf(void *buf, struct scatterlist *sg,
+			    unsigned int nbytes, int out)
+{
+	struct scatter_walk walk;
+
+	if (!nbytes)
+		return;
+
+	scatterwalk_start(&walk, sg);
+	scatterwalk_copychunks(buf, &walk, nbytes, out);
+	scatterwalk_done(&walk, out, 0);
+}
+
 static void s5p_aes_complete(struct s5p_aes_dev *dev, int err)
 {
+	if (dev->sg_dst_cpy) {
+		dev_dbg(dev->dev,
+			"Copying %d bytes of output data back to original place\n",
+			dev->req->nbytes);
+		s5p_sg_copy_buf(sg_virt(dev->sg_dst_cpy), dev->req->dst,
+				dev->req->nbytes, 1);
+	}
+
+	s5p_free_sg_cpy(dev, &dev->sg_src_cpy);
+	s5p_free_sg_cpy(dev, &dev->sg_dst_cpy);
+
 	/* holding a lock outside */
 	dev->req->base.complete(&dev->req->base, err);
 	dev->busy = false;
@@ -261,14 +303,36 @@ static void s5p_unset_indata(struct s5p_aes_dev *dev)
 	dma_unmap_sg(dev->dev, dev->sg_src, 1, DMA_TO_DEVICE);
 }
 
+static int s5p_make_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist *src,
+			   struct scatterlist **dst)
+{
+	void *pages;
+	int len;
+
+	*dst = kmalloc(sizeof(**dst), GFP_ATOMIC);
+	if (!*dst)
+		return -ENOMEM;
+
+	len = ALIGN(dev->req->nbytes, AES_BLOCK_SIZE);
+	pages = (void *)__get_free_pages(GFP_ATOMIC, get_order(len));
+	if (!pages) {
+		kfree(*dst);
+		*dst = NULL;
+		return -ENOMEM;
+	}
+
+	s5p_sg_copy_buf(pages, src, dev->req->nbytes, 0);
+
+	sg_init_table(*dst, 1);
+	sg_set_buf(*dst, pages, len);
+
+	return 0;
+}
+
 static int s5p_set_outdata(struct s5p_aes_dev *dev, struct scatterlist *sg)
 {
 	int err;
 
-	if (!IS_ALIGNED(sg_dma_len(sg), AES_BLOCK_SIZE)) {
-		err = -EINVAL;
-		goto exit;
-	}
 	if (!sg_dma_len(sg)) {
 		err = -EINVAL;
 		goto exit;
@@ -291,10 +355,6 @@ static int s5p_set_indata(struct s5p_aes_dev *dev, struct scatterlist *sg)
 {
 	int err;
 
-	if (!IS_ALIGNED(sg_dma_len(sg), AES_BLOCK_SIZE)) {
-		err = -EINVAL;
-		goto exit;
-	}
 	if (!sg_dma_len(sg)) {
 		err = -EINVAL;
 		goto exit;
@@ -394,6 +454,70 @@ static void s5p_set_aes(struct s5p_aes_dev *dev,
 	memcpy_toio(keystart, key, keylen);
 }
 
+static bool s5p_is_sg_aligned(struct scatterlist *sg)
+{
+	do {
+		if (!IS_ALIGNED(sg_dma_len(sg), AES_BLOCK_SIZE))
+			return false;
+	} while ((sg = sg_next(sg)));
+
+	return true;
+}
+
+static int s5p_set_indata_start(struct s5p_aes_dev *dev,
+				struct ablkcipher_request *req)
+{
+	struct scatterlist *sg;
+	int err;
+
+	dev->sg_src_cpy = NULL;
+	sg = req->src;
+	if (!s5p_is_sg_aligned(sg)) {
+		dev_dbg(dev->dev,
+			"At least one unaligned source scatter list, making a copy\n");
+		err = s5p_make_sg_cpy(dev, sg, &dev->sg_src_cpy);
+		if (err)
+			return err;
+
+		sg = dev->sg_src_cpy;
+	}
+
+	err = s5p_set_indata(dev, sg);
+	if (err) {
+		s5p_free_sg_cpy(dev, &dev->sg_src_cpy);
+		return err;
+	}
+
+	return 0;
+}
+
+static int s5p_set_outdata_start(struct s5p_aes_dev *dev,
+				 struct ablkcipher_request *req)
+{
+	struct scatterlist *sg;
+	int err;
+
+	dev->sg_dst_cpy = NULL;
+	sg = req->dst;
+	if (!s5p_is_sg_aligned(sg)) {
+		dev_dbg(dev->dev,
+			"At least one unaligned dest scatter list, making a copy\n");
+		err = s5p_make_sg_cpy(dev, sg, &dev->sg_dst_cpy);
+		if (err)
+			return err;
+
+		sg = dev->sg_dst_cpy;
+	}
+
+	err = s5p_set_outdata(dev, sg);
+	if (err) {
+		s5p_free_sg_cpy(dev, &dev->sg_dst_cpy);
+		return err;
+	}
+
+	return 0;
+}
+
 static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
 {
 	struct ablkcipher_request *req = dev->req;
@@ -430,19 +554,19 @@ static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
 		  SSS_FCINTENCLR_BTDMAINTENCLR | SSS_FCINTENCLR_BRDMAINTENCLR);
 	SSS_WRITE(dev, FCFIFOCTRL, 0x00);
 
-	err = s5p_set_indata(dev, req->src);
+	err = s5p_set_indata_start(dev, req);
 	if (err)
 		goto indata_error;
 
-	err = s5p_set_outdata(dev, req->dst);
+	err = s5p_set_outdata_start(dev, req);
 	if (err)
 		goto outdata_error;
 
 	SSS_AES_WRITE(dev, AES_CONTROL, aes_control);
 	s5p_set_aes(dev, dev->ctx->aes_key, req->info, dev->ctx->keylen);
 
-	s5p_set_dma_indata(dev, req->src);
-	s5p_set_dma_outdata(dev, req->dst);
+	s5p_set_dma_indata(dev, dev->sg_src);
+	s5p_set_dma_outdata(dev, dev->sg_dst);
 
 	SSS_WRITE(dev, FCINTENSET,
 		  SSS_FCINTENSET_BTDMAINTENSET | SSS_FCINTENSET_BRDMAINTENSET);
@@ -452,6 +576,7 @@ static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
 	return;
 
 outdata_error:
+	s5p_free_sg_cpy(dev, &dev->sg_src_cpy);
 	s5p_unset_indata(dev);
 
 indata_error:
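
On completion the bounce is undone: if the destination was copied, the
processed data is scattered back into the caller's original, possibly
unaligned destination list, and both temporary buffers are freed. A
matching sketch of that copy-back step, mirroring s5p_aes_complete()
above (copy_back_bounce() is a made-up name):

static void copy_back_bounce(struct s5p_aes_dev *dev)
{
	/* out = 1: copy from the linear buffer back into the scatterlist */
	if (dev->sg_dst_cpy)
		s5p_sg_copy_buf(sg_virt(dev->sg_dst_cpy), dev->req->dst,
				dev->req->nbytes, 1);

	/* Release the bounce buffers, if any were allocated. */
	s5p_free_sg_cpy(dev, &dev->sg_src_cpy);
	s5p_free_sg_cpy(dev, &dev->sg_dst_cpy);
}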