From patchwork Wed Jun 8 08:51:57 2016
Subject: Re: [PATCH 7/8] dmaengine: tegra20-apb-dma: Only calculate residue
 if txstate exists
From: Jon Hunter
To: Peter Griffin
Cc: dmaengine@vger.kernel.org, linux-tegra@vger.kernel.org,
 lee.jones@linaro.org, linuxppc-dev@lists.ozlabs.org,
 linux-arm-kernel@lists.infradead.org
Date: Wed, 8 Jun 2016 09:51:57 +0100
Message-ID: <5757DCAD.1090106@nvidia.com>
In-Reply-To: <1465321121-22238-8-git-send-email-peter.griffin@linaro.org>
References: <1465321121-22238-1-git-send-email-peter.griffin@linaro.org>
 <1465321121-22238-8-git-send-email-peter.griffin@linaro.org>

Hi Peter,

On 07/06/16 18:38, Peter Griffin wrote:
> There is no point calculating the residue if there is
> no txstate to store the value.
>
> Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
> ---
>  drivers/dma/tegra20-apb-dma.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
> index 01e316f..7f4af8c 100644
> --- a/drivers/dma/tegra20-apb-dma.c
> +++ b/drivers/dma/tegra20-apb-dma.c
> @@ -814,7 +814,7 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
>  	unsigned int residual;
>
>  	ret = dma_cookie_status(dc, cookie, txstate);
> -	if (ret == DMA_COMPLETE)
> +	if (ret == DMA_COMPLETE || !txstate)
>  		return ret;

Thanks for reporting this. I agree that we should not calculate the residue
when there is no txstate. However, looking at the code for Tegra, I am
wondering if this could change the actual state that is returned.
dma_cookie_status() calls dma_async_is_complete(), which returns either
DMA_COMPLETE or DMA_IN_PROGRESS. It is possible that the actual state of the
DMA transfer in the Tegra driver is DMA_ERROR, so I am wondering if we should
do something like the following ...
Cheers
Jon

diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index 01e316f73559..45edab7418d0 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -822,13 +822,8 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
 	/* Check on wait_ack desc status */
 	list_for_each_entry(dma_desc, &tdc->free_dma_desc, node) {
 		if (dma_desc->txd.cookie == cookie) {
-			residual = dma_desc->bytes_requested -
-				   (dma_desc->bytes_transferred %
-				    dma_desc->bytes_requested);
-			dma_set_residue(txstate, residual);
 			ret = dma_desc->dma_status;
-			spin_unlock_irqrestore(&tdc->lock, flags);
-			return ret;
+			goto found;
 		}
 	}
 
@@ -836,17 +831,23 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
 	list_for_each_entry(sg_req, &tdc->pending_sg_req, node) {
 		dma_desc = sg_req->dma_desc;
 		if (dma_desc->txd.cookie == cookie) {
-			residual = dma_desc->bytes_requested -
-				   (dma_desc->bytes_transferred %
-				    dma_desc->bytes_requested);
-			dma_set_residue(txstate, residual);
 			ret = dma_desc->dma_status;
-			spin_unlock_irqrestore(&tdc->lock, flags);
-			return ret;
+			goto found;
 		}
 	}
 
-	dev_dbg(tdc2dev(tdc), "cookie %d does not found\n", cookie);
+	dev_warn(tdc2dev(tdc), "cookie %d not found\n", cookie);
+	spin_unlock_irqrestore(&tdc->lock, flags);
+	return ret;
+
+found:
+	if (txstate) {
+		residual = dma_desc->bytes_requested -
+			   (dma_desc->bytes_transferred %
+			    dma_desc->bytes_requested);
+		dma_set_residue(txstate, residual);
+	}
+
 	spin_unlock_irqrestore(&tdc->lock, flags);
 	return ret;
 }
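
For reference, the core helpers discussed above look roughly as follows. This
is a paraphrased sketch of the 2016-era dmaengine code (dma_async_is_complete()
from include/linux/dmaengine.h and dma_cookie_status() from
drivers/dma/dmaengine.h), not verbatim kernel source; it only illustrates why
dma_cookie_status() on its own can never report a driver-private DMA_ERROR
state:

static inline enum dma_status dma_async_is_complete(dma_cookie_t cookie,
			dma_cookie_t last_complete, dma_cookie_t last_used)
{
	/* Compares the cookie against the channel's completed/used cookies
	 * and only ever reports DMA_COMPLETE or DMA_IN_PROGRESS. */
	if (last_complete <= last_used) {
		if ((cookie <= last_complete) || (cookie > last_used))
			return DMA_COMPLETE;
	} else {
		if ((cookie <= last_complete) && (cookie > last_used))
			return DMA_COMPLETE;
	}
	return DMA_IN_PROGRESS;
}

static inline enum dma_status dma_cookie_status(struct dma_chan *chan,
	dma_cookie_t cookie, struct dma_tx_state *state)
{
	dma_cookie_t used, complete;

	used = chan->cookie;
	complete = chan->completed_cookie;
	barrier();
	if (state) {
		/* Fills in the generic state; residue is left for the
		 * driver (e.g. tegra_dma_tx_status()) to compute. */
		state->last = complete;
		state->used = used;
		state->residue = 0;
	}
	/* Any error status tracked per-descriptor by the driver is not
	 * visible here, hence the suggestion to return
	 * dma_desc->dma_status from the Tegra driver itself. */
	return dma_async_is_complete(cookie, complete, used);
}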