From patchwork Tue Dec  3 18:15:50 2013
X-Patchwork-Submitter: Stephen Warren
X-Patchwork-Id: 3278181
X-Patchwork-Delegate: vinod.koul@intel.com
From: Stephen Warren
To: Dan Williams, Vinod Koul
Cc: dmaengine@vger.kernel.org, linux-tegra@vger.kernel.org,
 Thierry Reding, Laxman Dewangan, kunala@nvidia.com, Stephen Warren
Subject: [PATCH V2] dma: tegra: add support for Tegra148/124
Date: Tue, 3 Dec 2013 11:15:50 -0700
Message-Id: <1386094550-5011-1-git-send-email-swarren@wwwdotorg.org>

From: Laxman Dewangan

Tegra148 introduces a few changes to the APB DMA HW registers. Update
the driver to cope with them. Tegra124 inherits these changes.

* The register address stride between DMA channels increases.
* A new per-channel WCOUNT register is introduced.

Signed-off-by: Laxman Dewangan
Signed-off-by: Kunal Agrawal
[swarren: remove .dts file change, rewrote commit description, removed
 some duplicate/unused code and register IO]
Signed-off-by: Stephen Warren
Reviewed-by: Thierry Reding
Tested-by: Thierry Reding
---
v2:
* Remove some unused #defines that had incorrect names or values.
* Remove unnecessary initialization of the wcount local variable.

This can be applied directly to the usual dmaengine branch; I've checked
that it doesn't conflict with the patches going through other trees which
convert the driver to use the standard reset framework or implement a DT
DMA provider.
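For reviewers without the TRM handy, the word-count change boils down to
where (len - 4) & 0xFFFC ends up being programmed. Below is a rough,
standalone sketch of that split (not part of the patch; the structs and
helper only mirror the driver's chip data and channel registers, and the
example is buildable on its own):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the relevant bits of struct tegra_dma_chip_data. */
struct chip_data {
	bool support_separate_wcount_reg;	/* true on Tegra148/124 */
};

/* Mirrors the relevant bits of struct tegra_dma_channel_regs. */
struct chan_regs {
	uint32_t csr;
	uint32_t wcount;
};

/*
 * Older SoCs fold (len - 4) & 0xFFFC into the CSR word count field;
 * Tegra148 programs the same value into the new per-channel WCOUNT
 * register instead.
 */
static void prep_wcount(const struct chip_data *cd, struct chan_regs *regs,
			uint32_t len)
{
	uint32_t len_field = (len - 4) & 0xFFFC;

	if (cd->support_separate_wcount_reg)
		regs->wcount = len_field;
	else
		regs->csr |= len_field;
}

int main(void)
{
	const struct chip_data tegra114 = { .support_separate_wcount_reg = false };
	const struct chip_data tegra148 = { .support_separate_wcount_reg = true };
	struct chan_regs a = { 0 }, b = { 0 };

	prep_wcount(&tegra114, &a, 4096);	/* value lands in CSR */
	prep_wcount(&tegra148, &b, 4096);	/* value lands in WCOUNT */
	printf("Tegra114: csr=0x%x wcount=0x%x\n", (unsigned)a.csr, (unsigned)a.wcount);
	printf("Tegra148: csr=0x%x wcount=0x%x\n", (unsigned)b.csr, (unsigned)b.wcount);
	return 0;
}

The diff below keeps the old CSR behaviour for Tegra20/30/114 and only
routes the value to WCOUNT when support_separate_wcount_reg is set.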
---
 drivers/dma/tegra20-apb-dma.c | 62 ++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 55 insertions(+), 7 deletions(-)

diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index 73654e33f13b..895ffd0bc9bb 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -99,6 +99,11 @@
 #define TEGRA_APBDMA_APBSEQ_DATA_SWAP		BIT(27)
 #define TEGRA_APBDMA_APBSEQ_WRAP_WORD_1		(1 << 16)

+/* Tegra148 specific registers */
+#define TEGRA_APBDMA_CHAN_WCOUNT		0x20
+
+#define TEGRA_APBDMA_CHAN_WORD_TRANSFER		0x24
+
 /*
  * If any burst is in flight and DMA paused then this is the time to complete
  * on-flight burst and update DMA status register.
@@ -108,21 +113,22 @@
 /* Channel base address offset from APBDMA base address */
 #define TEGRA_APBDMA_CHANNEL_BASE_ADD_OFFSET	0x1000

-/* DMA channel register space size */
-#define TEGRA_APBDMA_CHANNEL_REGISTER_SIZE	0x20
-
 struct tegra_dma;

 /*
  * tegra_dma_chip_data Tegra chip specific DMA data
  * @nr_channels: Number of channels available in the controller.
+ * @channel_reg_size: Channel register size/stride.
  * @max_dma_count: Maximum DMA transfer count supported by DMA controller.
  * @support_channel_pause: Support channel wise pause of dma.
+ * @support_separate_wcount_reg: Support separate word count register.
  */
 struct tegra_dma_chip_data {
 	int nr_channels;
+	int channel_reg_size;
 	int max_dma_count;
 	bool support_channel_pause;
+	bool support_separate_wcount_reg;
 };

 /* DMA channel registers */
@@ -132,6 +138,7 @@ struct tegra_dma_channel_regs {
 	unsigned long	apb_ptr;
 	unsigned long	ahb_seq;
 	unsigned long	apb_seq;
+	unsigned long	wcount;
 };

 /*
@@ -421,6 +428,8 @@ static void tegra_dma_start(struct tegra_dma_channel *tdc,
 	tdc_write(tdc, TEGRA_APBDMA_CHAN_APBPTR, ch_regs->apb_ptr);
 	tdc_write(tdc, TEGRA_APBDMA_CHAN_AHBSEQ, ch_regs->ahb_seq);
 	tdc_write(tdc, TEGRA_APBDMA_CHAN_AHBPTR, ch_regs->ahb_ptr);
+	if (tdc->tdma->chip_data->support_separate_wcount_reg)
+		tdc_write(tdc, TEGRA_APBDMA_CHAN_WCOUNT, ch_regs->wcount);

 	/* Start DMA */
 	tdc_write(tdc, TEGRA_APBDMA_CHAN_CSR,
@@ -460,6 +469,9 @@ static void tegra_dma_configure_for_next(struct tegra_dma_channel *tdc,
 	/* Safe to program new configuration */
 	tdc_write(tdc, TEGRA_APBDMA_CHAN_APBPTR, nsg_req->ch_regs.apb_ptr);
 	tdc_write(tdc, TEGRA_APBDMA_CHAN_AHBPTR, nsg_req->ch_regs.ahb_ptr);
+	if (tdc->tdma->chip_data->support_separate_wcount_reg)
+		tdc_write(tdc, TEGRA_APBDMA_CHAN_WCOUNT,
+						nsg_req->ch_regs.wcount);
 	tdc_write(tdc, TEGRA_APBDMA_CHAN_CSR,
 			nsg_req->ch_regs.csr | TEGRA_APBDMA_CSR_ENB);
 	nsg_req->configured = true;
@@ -713,6 +725,7 @@ static void tegra_dma_terminate_all(struct dma_chan *dc)
 	struct tegra_dma_desc *dma_desc;
 	unsigned long flags;
 	unsigned long status;
+	unsigned long wcount;
 	bool was_busy;

 	spin_lock_irqsave(&tdc->lock, flags);
@@ -733,6 +746,10 @@ static void tegra_dma_terminate_all(struct dma_chan *dc)
 		tdc->isr_handler(tdc, true);
 		status = tdc_read(tdc, TEGRA_APBDMA_CHAN_STATUS);
 	}
+	if (tdc->tdma->chip_data->support_separate_wcount_reg)
+		wcount = tdc_read(tdc, TEGRA_APBDMA_CHAN_WORD_TRANSFER);
+	else
+		wcount = status;

 	was_busy = tdc->busy;
 	tegra_dma_stop(tdc);
@@ -741,7 +758,7 @@ static void tegra_dma_terminate_all(struct dma_chan *dc)
 		sgreq = list_first_entry(&tdc->pending_sg_req,
 					typeof(*sgreq), node);
 		sgreq->dma_desc->bytes_transferred +=
-				get_current_xferred_count(tdc, sgreq, status);
+				get_current_xferred_count(tdc, sgreq, wcount);
 	}
 	tegra_dma_resume(tdc);

@@ -903,6 +920,17 @@ static int get_transfer_param(struct tegra_dma_channel *tdc,
 	return -EINVAL;
 }

+static void tegra_dma_prep_wcount(struct tegra_dma_channel *tdc,
+	struct tegra_dma_channel_regs *ch_regs, u32 len)
+{
+	u32 len_field = (len - 4) & 0xFFFC;
+
+	if (tdc->tdma->chip_data->support_separate_wcount_reg)
+		ch_regs->wcount = len_field;
+	else
+		ch_regs->csr |= len_field;
+}
+
 static struct dma_async_tx_descriptor *tegra_dma_prep_slave_sg(
 	struct dma_chan *dc, struct scatterlist *sgl, unsigned int sg_len,
 	enum dma_transfer_direction direction, unsigned long flags,
@@ -986,7 +1014,8 @@ static struct dma_async_tx_descriptor *tegra_dma_prep_slave_sg(

 		sg_req->ch_regs.apb_ptr = apb_ptr;
 		sg_req->ch_regs.ahb_ptr = mem;
-		sg_req->ch_regs.csr = csr | ((len - 4) & 0xFFFC);
+		sg_req->ch_regs.csr = csr;
+		tegra_dma_prep_wcount(tdc, &(sg_req->ch_regs), len);
 		sg_req->ch_regs.apb_seq = apb_seq;
 		sg_req->ch_regs.ahb_seq = ahb_seq;
 		sg_req->configured = false;
@@ -1115,7 +1144,8 @@ static struct dma_async_tx_descriptor *tegra_dma_prep_dma_cyclic(
 		ahb_seq |= get_burst_size(tdc, burst_size, slave_bw, len);
 		sg_req->ch_regs.apb_ptr = apb_ptr;
 		sg_req->ch_regs.ahb_ptr = mem;
-		sg_req->ch_regs.csr = csr | ((len - 4) & 0xFFFC);
+		sg_req->ch_regs.csr = csr;
+		tegra_dma_prep_wcount(tdc, &(sg_req->ch_regs), len);
 		sg_req->ch_regs.apb_seq = apb_seq;
 		sg_req->ch_regs.ahb_seq = ahb_seq;
 		sg_req->configured = false;
@@ -1210,27 +1240,45 @@ static void tegra_dma_free_chan_resources(struct dma_chan *dc)
 /* Tegra20 specific DMA controller information */
 static const struct tegra_dma_chip_data tegra20_dma_chip_data = {
 	.nr_channels = 16,
+	.channel_reg_size = 0x20,
 	.max_dma_count = 1024UL * 64,
 	.support_channel_pause = false,
+	.support_separate_wcount_reg = false,
 };

 /* Tegra30 specific DMA controller information */
 static const struct tegra_dma_chip_data tegra30_dma_chip_data = {
 	.nr_channels = 32,
+	.channel_reg_size = 0x20,
 	.max_dma_count = 1024UL * 64,
 	.support_channel_pause = false,
+	.support_separate_wcount_reg = false,
 };

 /* Tegra114 specific DMA controller information */
 static const struct tegra_dma_chip_data tegra114_dma_chip_data = {
 	.nr_channels = 32,
+	.channel_reg_size = 0x20,
 	.max_dma_count = 1024UL * 64,
 	.support_channel_pause = true,
+	.support_separate_wcount_reg = false,
+};
+
+/* Tegra148 specific DMA controller information */
+static const struct tegra_dma_chip_data tegra148_dma_chip_data = {
+	.nr_channels = 32,
+	.channel_reg_size = 0x40,
+	.max_dma_count = 1024UL * 64,
+	.support_channel_pause = true,
+	.support_separate_wcount_reg = true,
 };


 static const struct of_device_id tegra_dma_of_match[] = {
 	{
+		.compatible = "nvidia,tegra148-apbdma",
+		.data = &tegra148_dma_chip_data,
+	}, {
 		.compatible = "nvidia,tegra114-apbdma",
 		.data = &tegra114_dma_chip_data,
 	}, {
@@ -1318,7 +1366,7 @@ static int tegra_dma_probe(struct platform_device *pdev)
 		struct tegra_dma_channel *tdc = &tdma->channels[i];

 		tdc->chan_base_offset = TEGRA_APBDMA_CHANNEL_BASE_ADD_OFFSET +
-					i * TEGRA_APBDMA_CHANNEL_REGISTER_SIZE;
+					i * cdata->channel_reg_size;

 		res = platform_get_resource(pdev, IORESOURCE_IRQ, i);
 		if (!res) {
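
As a further aside for reviewers (again, not part of the patch): the other
HW change is the per-channel register stride, which the probe loop above
now takes from channel_reg_size instead of the fixed
TEGRA_APBDMA_CHANNEL_REGISTER_SIZE. A minimal standalone sketch of that
arithmetic, assuming the 0x20 vs 0x40 strides from the chip data in the
diff:

#include <stdio.h>

#define TEGRA_APBDMA_CHANNEL_BASE_ADD_OFFSET	0x1000

/* Per-SoC register stride between channels, as in tegra_dma_chip_data. */
static unsigned int chan_base_offset(unsigned int channel_reg_size,
				     unsigned int i)
{
	return TEGRA_APBDMA_CHANNEL_BASE_ADD_OFFSET + i * channel_reg_size;
}

int main(void)
{
	unsigned int i;

	/* Tegra20/30/114 keep the old 0x20 stride; Tegra148/124 use 0x40. */
	for (i = 0; i < 4; i++)
		printf("ch%u: old 0x%x, Tegra148 0x%x\n", i,
		       chan_base_offset(0x20, i), chan_base_offset(0x40, i));
	return 0;
}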