From patchwork Tue Dec 26 16:21:17 2017
From: Antoine Tenart <antoine.tenart@free-electrons.com>
To: herbert@gondor.apana.org.au, davem@davemloft.net
Cc: thomas.petazzoni@free-electrons.com, gregory.clement@free-electrons.com,
    miquel.raynal@free-electrons.com, oferh@marvell.com, igall@marvell.com,
    nadavh@marvell.com, linux-crypto@vger.kernel.org
Subject: [PATCH 2/2] crypto: inside-secure - fix hash when length is a multiple of a block
Date: Tue, 26 Dec 2017 17:21:17 +0100
Message-Id: <20171226162117.4885-3-antoine.tenart@free-electrons.com>
In-Reply-To: <20171226162117.4885-1-antoine.tenart@free-electrons.com>
References: <20171226162117.4885-1-antoine.tenart@free-electrons.com>

This patch fixes the hash support in the SafeXcel driver when the
update size is a multiple of the block size, and a final call is then
made with a length of 0. In such cases the driver must cache the last
block from the update, to avoid handling 0-length data on the final
call (a hardware limitation).
Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com>
---
 drivers/crypto/inside-secure/safexcel_hash.c | 34 ++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
index fbc03d6e7db7..122a2a58e98f 100644
--- a/drivers/crypto/inside-secure/safexcel_hash.c
+++ b/drivers/crypto/inside-secure/safexcel_hash.c
@@ -189,17 +189,31 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
 	else
 		cache_len = queued - areq->nbytes;
 
-	/*
-	 * If this is not the last request and the queued data does not fit
-	 * into full blocks, cache it for the next send() call.
-	 */
-	extra = queued & (crypto_ahash_blocksize(ahash) - 1);
-	if (!req->last_req && extra) {
-		sg_pcopy_to_buffer(areq->src, sg_nents(areq->src),
-				   req->cache_next, extra, areq->nbytes - extra);
+	if (!req->last_req) {
+		/* If this is not the last request and the queued data does not
+		 * fit into full blocks, cache it for the next send() call.
+		 */
+		extra = queued & (crypto_ahash_blocksize(ahash) - 1);
+		if (!extra)
+			/* If this is not the last request and the queued data
+			 * is a multiple of a block, cache the last one for now.
+			 */
+			extra = queued - crypto_ahash_blocksize(ahash);
 
-		queued -= extra;
-		len -= extra;
+		if (extra) {
+			sg_pcopy_to_buffer(areq->src, sg_nents(areq->src),
+					   req->cache_next, extra,
+					   areq->nbytes - extra);
+
+			queued -= extra;
+			len -= extra;
+
+			if (!queued) {
+				*commands = 0;
+				*results = 0;
+				return 0;
+			}
+		}
 	}
 
 	spin_lock_bh(&priv->ring[ring].egress_lock);
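
For reviewers, below is a minimal sketch of the call sequence that
triggers the issue, written against the kernel ahash API. The
"safexcel-sha256" driver name, the SHA-256 constants and the function
name are assumptions for illustration (any hash exposed by the driver
hits the same path), and error/async-completion handling is trimmed
for brevity:

#include <crypto/hash.h>
#include <crypto/sha.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static int trigger_block_multiple_final(void)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	static u8 buf[SHA256_BLOCK_SIZE];	/* exactly one 64-byte block */
	static u8 digest[SHA256_DIGEST_SIZE];

	/* The driver name string is assumed here for illustration. */
	tfm = crypto_alloc_ahash("safexcel-sha256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		crypto_free_ahash(tfm);
		return -ENOMEM;
	}
	ahash_request_set_callback(req, 0, NULL, NULL);

	/* An update whose length is a multiple of the block size... */
	sg_init_one(&sg, buf, sizeof(buf));
	ahash_request_set_crypt(req, &sg, digest, sizeof(buf));
	crypto_ahash_init(req);
	crypto_ahash_update(req);

	/* ...followed by a final carrying 0 bytes of data. Without the
	 * fix, the last block is not cached by the update and the final
	 * request reaches the hardware with a 0-length payload.
	 */
	ahash_request_set_crypt(req, NULL, digest, 0);
	crypto_ahash_final(req);

	ahash_request_free(req);
	crypto_free_ahash(tfm);
	return 0;
}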