From patchwork Thu Dec 14 14:26:53 2017
X-Patchwork-Submitter: Antoine Tenart <antoine.tenart@free-electrons.com>
X-Patchwork-Id: 10112339
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Antoine Tenart <antoine.tenart@free-electrons.com>
To: herbert@gondor.apana.org.au, davem@davemloft.net
Cc: Antoine Tenart <antoine.tenart@free-electrons.com>,
    thomas.petazzoni@free-electrons.com,
    gregory.clement@free-electrons.com,
    miquel.raynal@free-electrons.com,
    oferh@marvell.com, igall@marvell.com, nadavh@marvell.com,
    linux-crypto@vger.kernel.org
Subject: [PATCH 11/17] crypto: inside-secure - dequeue all requests at once
Date: Thu, 14 Dec 2017 15:26:53 +0100
Message-Id: <20171214142659.16987-12-antoine.tenart@free-electrons.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20171214142659.16987-1-antoine.tenart@free-electrons.com>
References: <20171214142659.16987-1-antoine.tenart@free-electrons.com>
X-Mailing-List: linux-crypto@vger.kernel.org

This patch updates the dequeueing logic to dequeue all requests at once.
Since many requests can sit in the queue, interrupt coalescing is kept,
so that the ring interrupt fires at most once every EIP197_MAX_BATCH_SZ
requests.

To allow dequeueing all requests at once while keeping reasonable
interrupt coalescing settings, the result handling function is updated
to set up the threshold interrupt when needed (i.e. when more than
EIP197_MAX_BATCH_SZ requests are in the queue). When this capability is
used the ring is marked as busy, so that the dequeue function can
enqueue new requests without setting the threshold interrupt itself.
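The interplay between the two paths can be summed up in a minimal
user-space C model (the ring_model type, the printf-based "interrupt"
and the main() driver below are hypothetical illustrations, not driver
code; the real implementation arms the coalescing threshold by writing
the EIP197_HIA_xDR_THRESH register instead):

/* Minimal model of the batching scheme described above. All names are
 * hypothetical; only the arithmetic mirrors the driver's logic.
 */
#include <stdio.h>

#define MAX_BATCH_SZ 8	/* stands in for EIP197_MAX_BATCH_SZ */

struct ring_model {
	int requests_left;	/* in-flight requests not yet covered */
	int busy;		/* a threshold interrupt is armed */
};

/* Mirror of safexcel_try_push_requests(): cover at most MAX_BATCH_SZ
 * of the pending requests and report how many were covered.
 */
static int try_push_requests(struct ring_model *ring, int reqs)
{
	int coal = reqs < MAX_BATCH_SZ ? reqs : MAX_BATCH_SZ;

	if (coal)
		printf("threshold armed for %d request(s)\n", coal);
	return coal;
}

/* Dequeue path: push nreq new requests to the ring. */
static void dequeue(struct ring_model *ring, int nreq)
{
	if (!ring->busy) {
		nreq -= try_push_requests(ring, nreq);
		if (nreq)
			ring->busy = 1;
	}
	ring->requests_left += nreq;
}

/* Result path: a threshold interrupt fired; re-arm if work remains. */
static void handle_results(struct ring_model *ring)
{
	int done = try_push_requests(ring, ring->requests_left);

	ring->requests_left -= done;
	if (!done && !ring->requests_left)
		ring->busy = 0;
}

int main(void)
{
	struct ring_model ring = { 0, 0 };

	dequeue(&ring, 20);	/* armed for 8, 12 left over */
	handle_results(&ring);	/* armed for the next 8 */
	handle_results(&ring);	/* armed for the last 4 */
	handle_results(&ring);	/* nothing left: busy is cleared */
	printf("busy=%d, requests_left=%d\n", ring.busy, ring.requests_left);
	return 0;
}

The 20-request run arms the threshold for 8, 8 and 4 requests in turn,
so the CPU takes three interrupts instead of twenty.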
Suggested-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com>
---
 drivers/crypto/inside-secure/safexcel.c | 60 ++++++++++++++++++++++++++-------
 drivers/crypto/inside-secure/safexcel.h |  8 +++++
 2 files changed, 56 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
index aa4943cad55e..db7ad9d3eeed 100644
--- a/drivers/crypto/inside-secure/safexcel.c
+++ b/drivers/crypto/inside-secure/safexcel.c
@@ -422,6 +422,23 @@ static int safexcel_hw_init(struct safexcel_crypto_priv *priv)
 	return 0;
 }
 
+/* Called with ring's lock taken */
+int safexcel_try_push_requests(struct safexcel_crypto_priv *priv, int ring,
+			       int reqs)
+{
+	int coal = min_t(int, reqs, EIP197_MAX_BATCH_SZ);
+
+	if (!coal)
+		return 0;
+
+	/* Configure when we want an interrupt */
+	writel(EIP197_HIA_RDR_THRESH_PKT_MODE |
+	       EIP197_HIA_RDR_THRESH_PROC_PKT(coal),
+	       priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_THRESH);
+
+	return coal;
+}
+
 void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
 {
 	struct crypto_async_request *req, *backlog;
@@ -429,7 +446,7 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
 	struct safexcel_request *request;
 	int ret, nreq = 0, cdesc = 0, rdesc = 0, commands, results;
 
-	do {
+	while (true) {
 		spin_lock_bh(&priv->ring[ring].queue_lock);
 		backlog = crypto_get_backlog(&priv->ring[ring].queue);
 		req = crypto_dequeue_request(&priv->ring[ring].queue);
@@ -463,18 +480,24 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
 
 		cdesc += commands;
 		rdesc += results;
-	} while (nreq++ < EIP197_MAX_BATCH_SZ);
+		nreq++;
+	}
 
 finalize:
 	if (!nreq)
 		return;
 
-	spin_lock_bh(&priv->ring[ring].lock);
+	spin_lock_bh(&priv->ring[ring].egress_lock);
 
-	/* Configure when we want an interrupt */
-	writel(EIP197_HIA_RDR_THRESH_PKT_MODE |
-	       EIP197_HIA_RDR_THRESH_PROC_PKT(nreq),
-	       priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_THRESH);
+	if (!priv->ring[ring].busy) {
+		nreq -= safexcel_try_push_requests(priv, ring, nreq);
+		if (nreq)
+			priv->ring[ring].busy = true;
+	}
+
+	priv->ring[ring].requests_left += nreq;
+
+	spin_unlock_bh(&priv->ring[ring].egress_lock);
 
 	/* let the RDR know we have pending descriptors */
 	writel((rdesc * priv->config.rd_offset) << 2,
@@ -483,8 +506,6 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
 	/* let the CDR know we have pending descriptors */
 	writel((cdesc * priv->config.cd_offset) << 2,
 	       priv->base + EIP197_HIA_CDR(ring) + EIP197_HIA_xDR_PREP_COUNT);
-
-	spin_unlock_bh(&priv->ring[ring].lock);
 }
 
 void safexcel_free_context(struct safexcel_crypto_priv *priv,
@@ -579,14 +600,14 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
 {
 	struct safexcel_request *sreq;
 	struct safexcel_context *ctx;
-	int ret, i, nreq, ndesc = 0;
+	int ret, i, nreq, ndesc = 0, done;
 	bool should_complete;
 
 	nreq = readl(priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_PROC_COUNT);
 	nreq >>= 24;
 	nreq &= GENMASK(6, 0);
 	if (!nreq)
-		return;
+		goto requests_left;
 
 	for (i = 0; i < nreq; i++) {
 		spin_lock_bh(&priv->ring[ring].egress_lock);
@@ -601,7 +622,7 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
 		if (ndesc < 0) {
 			kfree(sreq);
 			dev_err(priv->dev, "failed to handle result (%d)", ndesc);
-			return;
+			goto requests_left;
 		}
 
 		writel(EIP197_xDR_PROC_xD_PKT(1) |
@@ -616,6 +637,18 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
 
 		kfree(sreq);
 	}
+
+requests_left:
+	spin_lock_bh(&priv->ring[ring].egress_lock);
+
+	done = safexcel_try_push_requests(priv, ring,
+					  priv->ring[ring].requests_left);
+
+	priv->ring[ring].requests_left -= done;
+	if (!done && !priv->ring[ring].requests_left)
+		priv->ring[ring].busy = false;
+
+	spin_unlock_bh(&priv->ring[ring].egress_lock);
 }
 
 static void safexcel_dequeue_work(struct work_struct *work)
@@ -861,6 +894,9 @@ static int safexcel_probe(struct platform_device *pdev)
 			goto err_clk;
 		}
 
+		priv->ring[i].requests_left = 0;
+		priv->ring[i].busy = false;
+
 		crypto_init_queue(&priv->ring[i].queue,
 				  EIP197_DEFAULT_RING_SIZE);
 
diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
index fffddefb0d9b..531e3e9d8384 100644
--- a/drivers/crypto/inside-secure/safexcel.h
+++ b/drivers/crypto/inside-secure/safexcel.h
@@ -489,6 +489,14 @@ struct safexcel_crypto_priv {
 		/* queue */
 		struct crypto_queue queue;
 		spinlock_t queue_lock;
+
+		/* Number of requests in the engine that need the threshold
+		 * interrupt to be set up.
+		 */
+		int requests_left;
+
+		/* The ring is currently handling at least one request */
+		bool busy;
 	} ring[EIP197_MAX_RINGS];
 };