From patchwork Wed Aug 21 12:08:56 2013
X-Patchwork-Submitter: Daniel Mack
X-Patchwork-Id: 2847696
From: Daniel Mack <zonque@gmail.com>
To: vinod.koul@intel.com
Cc: wangx@marvell.com, linux@arm.linux.org.uk, nico@fluxnic.net,
 haojian.zhuang@gmail.com, Daniel Mack <zonque@gmail.com>,
 andy.shevchenko@gmail.com, cxie4@marvell.com,
 ezequiel.garcia@free-electrons.com, djbw@fb.com, eric.y.miao@gmail.com,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5 3/5] dma: mmp_pdma: add support for cyclic DMA descriptors
Date: Wed, 21 Aug 2013 14:08:56 +0200
Message-Id: <1377086938-29145-4-git-send-email-zonque@gmail.com>
In-Reply-To: <1377086938-29145-1-git-send-email-zonque@gmail.com>
References: <1377086938-29145-1-git-send-email-zonque@gmail.com>

Provide a callback to prepare cyclic DMA transfers. This is needed, for
instance, for audio channel transport.

Signed-off-by: Daniel Mack <zonque@gmail.com>
---
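A note on usage: a consumer (for instance an audio/ASoC platform driver)
drives this through the generic dmaengine cyclic API. The sketch below is
purely illustrative and not part of this patch; start_cyclic() and
period_elapsed() are made-up names, and channel request and DMA buffer
mapping are elided:

	#include <linux/dmaengine.h>

	/* illustrative only: with this patch, the callback runs from the
	 * driver's tasklet once per completed period, since every
	 * descriptor in the ring sets DCMD_ENDIRQEN */
	static void period_elapsed(void *arg)
	{
	}

	static int start_cyclic(struct dma_chan *chan, dma_addr_t buf,
				size_t buf_len, size_t period_len)
	{
		struct dma_async_tx_descriptor *desc;

		/* buf_len must be a multiple of period_len, and period_len
		 * must not exceed PDMA_MAX_DESC_BYTES, or prep returns NULL */
		desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len,
						 period_len, DMA_MEM_TO_DEV,
						 DMA_PREP_INTERRUPT);
		if (!desc)
			return -ENOMEM;

		desc->callback = period_elapsed;
		desc->callback_param = NULL;

		dmaengine_submit(desc);
		dma_async_issue_pending(chan);

		return 0;
	}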
 drivers/dma/mmp_pdma.c | 112 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 111 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index 31e9a71..d82a4f6 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -98,6 +98,9 @@ struct mmp_pdma_chan {
 	struct mmp_pdma_phy *phy;
 	enum dma_transfer_direction dir;
 
+	struct mmp_pdma_desc_sw *cyclic_first;	/* first desc_sw if channel
+						 * is in cyclic mode */
+
 	/* channel's basic info */
 	struct tasklet_struct tasklet;
 	u32 dcmd;
@@ -499,6 +502,8 @@ mmp_pdma_prep_memcpy(struct dma_chan *dchan,
 	new->desc.ddadr = DDADR_STOP;
 	new->desc.dcmd |= DCMD_ENDIRQEN;
 
+	chan->cyclic_first = NULL;
+
 	return &first->async_tx;
 
 fail:
@@ -574,6 +579,94 @@ mmp_pdma_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
 	new->desc.ddadr = DDADR_STOP;
 	new->desc.dcmd |= DCMD_ENDIRQEN;
 
+	chan->dir = dir;
+	chan->cyclic_first = NULL;
+
+	return &first->async_tx;
+
+fail:
+	if (first)
+		mmp_pdma_free_desc_list(chan, &first->tx_list);
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *mmp_pdma_prep_dma_cyclic(
+	struct dma_chan *dchan, dma_addr_t buf_addr, size_t len,
+	size_t period_len, enum dma_transfer_direction direction,
+	unsigned long flags, void *context)
+{
+	struct mmp_pdma_chan *chan;
+	struct mmp_pdma_desc_sw *first = NULL, *prev = NULL, *new;
+	dma_addr_t dma_src, dma_dst;
+
+	if (!dchan || !len || !period_len)
+		return NULL;
+
+	/* the buffer length must be a multiple of period_len */
+	if (len % period_len != 0)
+		return NULL;
+
+	if (period_len > PDMA_MAX_DESC_BYTES)
+		return NULL;
+
+	chan = to_mmp_pdma_chan(dchan);
+
+	switch (direction) {
+	case DMA_MEM_TO_DEV:
+		dma_src = buf_addr;
+		dma_dst = chan->dev_addr;
+		break;
+	case DMA_DEV_TO_MEM:
+		dma_dst = buf_addr;
+		dma_src = chan->dev_addr;
+		break;
+	default:
+		dev_err(chan->dev, "Unsupported direction for cyclic DMA\n");
+		return NULL;
+	}
+
+	chan->dir = direction;
+
+	do {
+		/* Allocate the link descriptor from DMA pool */
+		new = mmp_pdma_alloc_descriptor(chan);
+		if (!new) {
+			dev_err(chan->dev, "no memory for desc\n");
+			goto fail;
+		}
+
+		new->desc.dcmd = chan->dcmd | DCMD_ENDIRQEN |
+				 (DCMD_LENGTH & period_len);
+		new->desc.dsadr = dma_src;
+		new->desc.dtadr = dma_dst;
+
+		if (!first)
+			first = new;
+		else
+			prev->desc.ddadr = new->async_tx.phys;
+
+		new->async_tx.cookie = 0;
+		async_tx_ack(&new->async_tx);
+
+		prev = new;
+		len -= period_len;
+
+		if (chan->dir == DMA_MEM_TO_DEV)
+			dma_src += period_len;
+		else
+			dma_dst += period_len;
+
+		/* Insert the link descriptor to the LD ring */
+		list_add_tail(&new->node, &first->tx_list);
+	} while (len);
+
+	first->async_tx.flags = flags; /* client is in control of this ack */
+	first->async_tx.cookie = -EBUSY;
+
+	/* make the cyclic link */
+	new->desc.ddadr = first->async_tx.phys;
+	chan->cyclic_first = first;
+
 	return &first->async_tx;
 
 fail:
@@ -688,8 +781,23 @@ static void dma_do_tasklet(unsigned long data)
 	LIST_HEAD(chain_cleanup);
 	unsigned long flags;
 
-	/* submit pending list; callback for each desc; free desc */
+	if (chan->cyclic_first) {
+		dma_async_tx_callback cb = NULL;
+		void *cb_data = NULL;
+		spin_lock_irqsave(&chan->desc_lock, flags);
+		desc = chan->cyclic_first;
+		cb = desc->async_tx.callback;
+		cb_data = desc->async_tx.callback_param;
+		spin_unlock_irqrestore(&chan->desc_lock, flags);
+
+		if (cb)
+			cb(cb_data);
+
+		return;
+	}
+
+	/* submit pending list; callback for each desc; free desc */
 	spin_lock_irqsave(&chan->desc_lock, flags);
 
 	list_for_each_entry_safe(desc, _desc, &chan->chain_running, node) {
@@ -886,12 +994,14 @@ static int mmp_pdma_probe(struct platform_device *op)
 	dma_cap_set(DMA_SLAVE, pdev->device.cap_mask);
 	dma_cap_set(DMA_MEMCPY, pdev->device.cap_mask);
+	dma_cap_set(DMA_CYCLIC, pdev->device.cap_mask);
 	pdev->device.dev = &op->dev;
 	pdev->device.device_alloc_chan_resources = mmp_pdma_alloc_chan_resources;
 	pdev->device.device_free_chan_resources = mmp_pdma_free_chan_resources;
 	pdev->device.device_tx_status = mmp_pdma_tx_status;
 	pdev->device.device_prep_dma_memcpy = mmp_pdma_prep_memcpy;
 	pdev->device.device_prep_slave_sg = mmp_pdma_prep_slave_sg;
+	pdev->device.device_prep_dma_cyclic = mmp_pdma_prep_dma_cyclic;
 	pdev->device.device_issue_pending = mmp_pdma_issue_pending;
 	pdev->device.device_control = mmp_pdma_control;
 	pdev->device.copy_align = PDMA_ALIGNMENT;
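
With DMA_CYCLIC added to the capability mask above, a client can check for
cyclic support before preparing a transfer. A minimal sketch (chan being a
previously requested struct dma_chan; again illustrative, not part of this
patch):

	/* bail out if this channel cannot do cyclic transfers */
	if (!dma_has_cap(DMA_CYCLIC, chan->device->cap_mask))
		return -EINVAL;

Since cyclic descriptors never complete on their own, the client later
stops the stream with dmaengine_terminate_all(chan), which reaches the
device_control hook (mmp_pdma_control) registered above.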