From patchwork Fri Jun  2 12:24:45 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Gstir <david@sigma-star.at>
X-Patchwork-Id: 9762445
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: David Gstir <david@sigma-star.at>
To: horia.geanta@nxp.com, dan.douglass@nxp.com, herbert@gondor.apana.org.au,
	davem@davemloft.net
Cc: richard@sigma-star.at, linux-crypto@vger.kernel.org,
	linux-kernel@vger.kernel.org, David Gstir <david@sigma-star.at>
Subject: [RFC PATCH 1/2] crypto: caam - properly set IV after {en,de}crypt
Date: Fri,  2 Jun 2017 14:24:45 +0200
Message-Id: <20170602122446.2427-2-david@sigma-star.at>
X-Mailer: git-send-email 2.13.0
In-Reply-To: <20170602122446.2427-1-david@sigma-star.at>
References: <20170602122446.2427-1-david@sigma-star.at>

Certain cipher modes like CTS expect the IV (req->info) of the
ablkcipher_request (or, equivalently, req->iv of the skcipher_request) to
contain the last ciphertext block when the {en,de}crypt operation is done.
The CAAM driver currently does not do this, which breaks e.g. cts(cbc(aes))
whenever the CAAM driver is enabled.

This patch fixes the CAAM driver to properly set the IV after the
{en,de}crypt operation of ablkcipher finishes.
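To illustrate the expectation described above, here is a rough sketch of a
caller that splits one cbc(aes) encryption across two requests while reusing
the same IV buffer. It is not part of this patch: the helper name
chained_encrypt() is made up, and the DECLARE_CRYPTO_WAIT()/crypto_wait_req()
completion helpers used here come from later kernels. The point is only that
the second call produces correct output solely if the driver wrote the last
ciphertext block of the first call back into 'iv'.

/*
 * Illustrative sketch only (not part of this patch). The helper name and the
 * use of the synchronous wait helpers are assumptions for a compact example.
 */
#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>

static int chained_encrypt(struct crypto_skcipher *tfm,
                           struct scatterlist *first, unsigned int first_len,
                           struct scatterlist *second, unsigned int second_len,
                           u8 *iv)
{
        DECLARE_CRYPTO_WAIT(wait);
        struct skcipher_request *req;
        int err;

        req = skcipher_request_alloc(tfm, GFP_KERNEL);
        if (!req)
                return -ENOMEM;

        skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
                                      CRYPTO_TFM_REQ_MAY_SLEEP,
                                      crypto_req_done, &wait);

        /* First chunk: encrypt in place with the caller-supplied IV. */
        skcipher_request_set_crypt(req, first, first, first_len, iv);
        err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
        if (err)
                goto out;

        /*
         * By now the driver must have copied the last ciphertext block of the
         * first chunk into 'iv'; otherwise this second call differs from
         * encrypting both chunks in a single request.
         */
        skcipher_request_set_crypt(req, second, second, second_len, iv);
        err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
out:
        skcipher_request_free(req);
        return err;
}

The cts(cbc(aes)) template relies on exactly this kind of IV chaining from its
inner cbc(aes) transform, which is why it breaks when the driver never updates
req->info.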
Signed-off-by: David Gstir <david@sigma-star.at>
---
 drivers/crypto/caam/caamalg.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 398807d1b77e..d13c1aee4427 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -882,10 +882,11 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 {
         struct ablkcipher_request *req = context;
         struct ablkcipher_edesc *edesc;
-#ifdef DEBUG
         struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
         int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+        int nents;
 
+#ifdef DEBUG
         dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 #endif
 
@@ -904,6 +905,19 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 #endif
 
         ablkcipher_unmap(jrdev, edesc, req);
+
+        if (req->src == req->dst)
+                nents = edesc->src_nents;
+        else
+                nents = edesc->dst_nents;
+
+        /*
+         * The crypto API expects us to set the IV (req->info) to the last
+         * ciphertext block. This is used e.g. by the CTS mode.
+         */
+        sg_pcopy_to_buffer(req->dst, nents, req->info, ivsize,
+                           req->nbytes - ivsize);
+
         kfree(edesc);
 
         ablkcipher_request_complete(req, err);
@@ -914,10 +928,10 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
 {
         struct ablkcipher_request *req = context;
         struct ablkcipher_edesc *edesc;
-#ifdef DEBUG
         struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
         int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
 
+#ifdef DEBUG
         dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 #endif
 
@@ -935,6 +949,14 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
 #endif
 
         ablkcipher_unmap(jrdev, edesc, req);
+
+        /*
+         * The crypto API expects us to set the IV (req->info) to the last
+         * ciphertext block.
+         */
+        sg_pcopy_to_buffer(req->src, edesc->src_nents, req->info, ivsize,
+                           req->nbytes - ivsize);
+
         kfree(edesc);
 
         ablkcipher_request_complete(req, err);
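For clarity, this is a minimal sketch of what the two sg_pcopy_to_buffer()
calls added above compute; the helper name is hypothetical and only serves to
spell out the offset arithmetic.

/*
 * Illustrative sketch only: copy the last 'ivsize' bytes of ciphertext out of
 * a scatterlist into the request's IV buffer (req->info).
 */
#include <linux/scatterlist.h>

static void copy_last_cipher_block(struct scatterlist *sg, int nents,
                                   unsigned int total, u8 *iv,
                                   unsigned int ivsize)
{
        /*
         * Skip the first (total - ivsize) bytes of the scatterlist and copy
         * the remaining ivsize bytes, i.e. the last ciphertext block. For
         * encryption the ciphertext lives in req->dst; for decryption it is
         * the original input in req->src.
         */
        sg_pcopy_to_buffer(sg, nents, iv, ivsize, total - ivsize);
}

sg_pcopy_to_buffer() returns the number of bytes actually copied; the calls in
the patch ignore that return value.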