From patchwork Tue May 6 21:22:21 2014
X-Patchwork-Submitter: Christopher Freeman
X-Patchwork-Id: 4124241
X-Patchwork-Delegate: vinod.koul@intel.com
From: Christopher Freeman
To: , , ,
CC: , , , Christopher Freeman
Subject: [PATCH v1 1/3] dma: tegra: finer granularity residual for tx_status
Date: Tue, 6 May 2014 14:22:21 -0700
Message-ID: <1399411343-12222-2-git-send-email-cfreeman@nvidia.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1399411343-12222-1-git-send-email-cfreeman@nvidia.com>
References: <1399411343-12222-1-git-send-email-cfreeman@nvidia.com>
X-Mailing-List: dmaengine@vger.kernel.org

Read the word transfer count from hardware so that tx_status() can report
the remaining transfer count (residue) with word-level granularity.
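
For the request currently in flight, the channel is briefly paused and the
number of words already moved is read back from hardware (the separate
word-transfer register where the chip supports it, otherwise the STATUS
register) and subtracted from the remaining byte count, so the residue is
no longer limited to whole-segment granularity.

For context only (not part of this patch), a minimal sketch of how a
dmaengine client would observe the reported residue; the helper name
bytes_left() and the chan/cookie variables are illustrative assumptions:

  #include <linux/dmaengine.h>

  static u32 bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
  {
  	struct dma_tx_state state;
  	enum dma_status status;

  	status = dmaengine_tx_status(chan, cookie, &state);
  	if (status == DMA_COMPLETE)
  		return 0;

  	/* with this patch the residue also reflects word-level progress
  	 * of the in-flight request, not just completed segments */
  	return state.residue;
  }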
Signed-off-by: Christopher Freeman
---
 drivers/dma/tegra20-apb-dma.c | 52 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index 03ad64e..cc6b2fd 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -779,15 +779,54 @@ skip_dma_stop:
 	spin_unlock_irqrestore(&tdc->lock, flags);
 }
 
+static int tegra_dma_wcount_in_bytes(struct dma_chan *dc)
+{
+	struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
+	struct tegra_dma_sg_req *sgreq;
+	unsigned long wcount = 0;
+	unsigned long status = 0;
+	int bytes = 0;
+
+	if (list_empty(&tdc->pending_sg_req) || !tdc->busy)
+		return 0;
+
+	tegra_dma_pause(tdc, true);
+
+	/* in case of interrupt, handle it and don't read wcount reg */
+	status = tdc_read(tdc, TEGRA_APBDMA_CHAN_STATUS);
+	if (status & TEGRA_APBDMA_STATUS_ISE_EOC) {
+		tdc_write(tdc, TEGRA_APBDMA_CHAN_STATUS, status);
+		dev_info(tdc2dev(tdc), "%s():handling isr\n", __func__);
+		tdc->isr_handler(tdc, false);
+		tegra_dma_resume(tdc);
+		return 0;
+	}
+
+	if (tdc->tdma->chip_data->support_separate_wcount_reg)
+		wcount = tdc_read(tdc, TEGRA_APBDMA_CHAN_WORD_TRANSFER);
+	else
+		wcount = tdc_read(tdc, TEGRA_APBDMA_CHAN_STATUS);
+
+	sgreq = list_first_entry(&tdc->pending_sg_req,
+					typeof(*sgreq), node);
+	bytes = get_current_xferred_count(tdc, sgreq, wcount);
+
+	tegra_dma_resume(tdc);
+
+	return bytes;
+}
+
 static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
 	dma_cookie_t cookie, struct dma_tx_state *txstate)
 {
 	struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
 	struct tegra_dma_desc *dma_desc;
 	struct tegra_dma_sg_req *sg_req;
+	struct tegra_dma_sg_req *first_entry = NULL;
 	enum dma_status ret;
 	unsigned long flags;
 	unsigned int residual;
+	unsigned int hw_byte_count = 0;
 
 	ret = dma_cookie_status(dc, cookie, txstate);
 	if (ret == DMA_COMPLETE)
@@ -812,9 +851,22 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
 	list_for_each_entry(sg_req, &tdc->pending_sg_req, node) {
 		dma_desc = sg_req->dma_desc;
 		if (dma_desc->txd.cookie == cookie) {
+			hw_byte_count = tegra_dma_wcount_in_bytes(dc);
+
+			if (!list_empty(&tdc->pending_sg_req))
+				first_entry =
+					list_first_entry(&tdc->pending_sg_req,
+						typeof(*first_entry), node);
+
 			residual = dma_desc->bytes_requested -
 				(dma_desc->bytes_transferred %
 					dma_desc->bytes_requested);
+
+			/* hw byte count only applies to current transaction */
+			if (first_entry &&
+				first_entry->dma_desc->txd.cookie == cookie)
+				residual -= hw_byte_count;
+
 			dma_set_residue(txstate, residual);
 			ret = dma_desc->dma_status;
 			spin_unlock_irqrestore(&tdc->lock, flags);