From patchwork Sat Aug 10 16:52:25 2013
From: Daniel Mack <zonque@gmail.com>
X-Patchwork-Id: 2842484
To: vinod.koul@intel.com, djbw@fb.com
Cc: wangx@marvell.com, eric.y.miao@gmail.com, arnd@arndb.de, nico@fluxnic.net,
    haojian.zhuang@gmail.com, Daniel Mack, cxie4@marvell.com,
    ezequiel.garcia@free-electrons.com, linux@arm.linux.org.uk,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 11/11] dma: mmp_pdma: add support for cyclic DMA descriptors
Date: Sat, 10 Aug 2013 18:52:25 +0200
Message-Id: <1376153545-14361-12-git-send-email-zonque@gmail.com>
In-Reply-To: <1376153545-14361-1-git-send-email-zonque@gmail.com>
References: <1376153545-14361-1-git-send-email-zonque@gmail.com>
Provide a callback to prepare cyclic DMA transfers. This is needed, for
instance, for audio channel transport.

Signed-off-by: Daniel Mack <zonque@gmail.com>
---
 drivers/dma/mmp_pdma.c | 116 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 115 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index ab0303b..eba9d2d 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -97,6 +97,8 @@ struct mmp_pdma_chan {
 	struct dma_async_tx_descriptor desc;
 	struct mmp_pdma_phy *phy;
 	enum dma_transfer_direction dir;
+	struct mmp_pdma_desc_sw *cyclic_first;	/* first desc_sw if channel
+						 * is in cyclic mode */
 	/*
 	 * We memorize the original start address of the first descriptor as
 	 * well as the original total length so we can later determine the
@@ -531,6 +533,8 @@ mmp_pdma_prep_memcpy(struct dma_chan *dchan,
 	new->desc.ddadr = DDADR_STOP;
 	new->desc.dcmd |= DCMD_ENDIRQEN;
 
+	chan->cyclic_first = NULL;
+
 	return &first->async_tx;
 
 fail:
@@ -612,6 +616,96 @@ mmp_pdma_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
 	new->desc.ddadr = DDADR_STOP;
 	new->desc.dcmd |= DCMD_ENDIRQEN;
 
+	chan->dir = dir;
+	chan->cyclic_first = NULL;
+
+	return &first->async_tx;
+
+fail:
+	if (first)
+		mmp_pdma_free_desc_list(chan, &first->tx_list);
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *mmp_pdma_prep_dma_cyclic(
+	struct dma_chan *dchan, dma_addr_t buf_addr, size_t len,
+	size_t period_len, enum dma_transfer_direction direction,
+	unsigned long flags, void *context)
+{
+	struct mmp_pdma_chan *chan;
+	struct mmp_pdma_desc_sw *first = NULL, *prev = NULL, *new;
+	dma_addr_t dma_src, dma_dst;
+
+	if (!dchan || !len || !period_len)
+		return NULL;
+
+	/* the buffer length must be a multiple of period_len */
+	if (len % period_len != 0)
+		return NULL;
+
+	if (period_len > PDMA_MAX_DESC_BYTES)
+		return NULL;
+
+	chan = to_mmp_pdma_chan(dchan);
+
+	switch (direction) {
+	case DMA_MEM_TO_DEV:
+		dma_src = buf_addr;
+		dma_dst = chan->dev_addr;
+		break;
+	case DMA_DEV_TO_MEM:
+		dma_dst = buf_addr;
+		dma_src = chan->dev_addr;
+		break;
+	default:
+		dev_err(chan->dev, "Unsupported direction for cyclic DMA\n");
+		return NULL;
+	}
+
+	chan->start_addr = buf_addr;
+	chan->total_len = len;
+	chan->dir = direction;
+
+	do {
+		/* Allocate the link descriptor from DMA pool */
+		new = mmp_pdma_alloc_descriptor(chan);
+		if (!new) {
+			dev_err(chan->dev, "no memory for desc\n");
+			goto fail;
+		}
+
+		new->desc.dcmd = chan->dcmd | DCMD_ENDIRQEN |
+				 (DCMD_LENGTH & period_len);
+		new->desc.dsadr = dma_src;
+		new->desc.dtadr = dma_dst;
+
+		if (!first)
+			first = new;
+		else
+			prev->desc.ddadr = new->async_tx.phys;
+
+		new->async_tx.cookie = 0;
+		async_tx_ack(&new->async_tx);
+
+		prev = new;
+		len -= period_len;
+
+		if (chan->dir == DMA_MEM_TO_DEV)
+			dma_src += period_len;
+		else
+			dma_dst += period_len;
+
+		/* Insert the link descriptor to the LD ring */
+		list_add_tail(&new->node, &first->tx_list);
+	} while (len);
+
+	first->async_tx.flags = flags; /* client is in control of this ack */
+	first->async_tx.cookie = -EBUSY;
+
+	/* make the cyclic link */
+	new->desc.ddadr = first->async_tx.phys;
+	chan->cyclic_first = first;
+
 	return &first->async_tx;
 
 fail:
@@ -747,8 +841,25 @@ static void dma_do_tasklet(unsigned long data)
 	LIST_HEAD(chain_cleanup);
 	unsigned long flags;
 
-	/* submit pending list; callback for each desc; free desc */
+	if (chan->cyclic_first) {
+		dma_async_tx_callback cb = NULL;
+		void *cb_data = NULL;
+
+		spin_lock_irqsave(&chan->desc_lock, flags);
+		desc = chan->cyclic_first;
+		cb = desc->async_tx.callback;
+		cb_data = desc->async_tx.callback_param;
+		spin_unlock_irqrestore(&chan->desc_lock, flags);
+
+		start_pending_queue(chan);
+		if (cb)
+			cb(cb_data);
+
+		return;
+	}
+
+	/* submit pending list; callback for each desc; free desc */
 	spin_lock_irqsave(&chan->desc_lock, flags);
 
 	/* update the cookie if we have some descriptors to cleanup */
@@ -781,6 +892,7 @@ static void dma_do_tasklet(unsigned long data)
 		/* Remove from the list of transactions */
 		list_del(&desc->node);
+
 		/* Run the link descriptor callback function */
 		if (txd->callback)
 			txd->callback(txd->callback_param);
@@ -938,12 +1050,14 @@ static int mmp_pdma_probe(struct platform_device *op)
 	dma_cap_set(DMA_SLAVE, pdev->device.cap_mask);
 	dma_cap_set(DMA_MEMCPY, pdev->device.cap_mask);
+	dma_cap_set(DMA_CYCLIC, pdev->device.cap_mask);
 	pdev->device.dev = &op->dev;
 	pdev->device.device_alloc_chan_resources = mmp_pdma_alloc_chan_resources;
 	pdev->device.device_free_chan_resources = mmp_pdma_free_chan_resources;
 	pdev->device.device_tx_status = mmp_pdma_tx_status;
 	pdev->device.device_prep_dma_memcpy = mmp_pdma_prep_memcpy;
 	pdev->device.device_prep_slave_sg = mmp_pdma_prep_slave_sg;
+	pdev->device.device_prep_dma_cyclic = mmp_pdma_prep_dma_cyclic;
 	pdev->device.device_issue_pending = mmp_pdma_issue_pending;
 	pdev->device.device_control = mmp_pdma_control;
 	pdev->device.copy_align = PDMA_ALIGNMENT;
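
For context, below is a minimal sketch (not part of the patch) of how a client
such as an audio driver would exercise the new capability through the generic
dmaengine API; dmaengine_prep_dma_cyclic() reaches mmp_pdma_prep_dma_cyclic()
via the device_prep_dma_cyclic hook registered above. It assumes the channel
has already been configured with dmaengine_slave_config(); the function names
my_period_elapsed()/start_cyclic_playback() and the param argument are
hypothetical.

/*
 * Usage sketch (hypothetical client code, not part of this patch).
 * Assumes chan was requested and configured beforehand.
 */
#include <linux/dmaengine.h>

static void my_period_elapsed(void *param)
{
	/* e.g. call snd_pcm_period_elapsed() on the substream */
}

static int start_cyclic_playback(struct dma_chan *chan, dma_addr_t buf,
				 size_t buf_len, size_t period_len,
				 void *param)
{
	struct dma_async_tx_descriptor *desc;

	/*
	 * buf_len must be a multiple of period_len, and each period must
	 * fit into one hardware descriptor (PDMA_MAX_DESC_BYTES); the
	 * prepare callback returns NULL otherwise.
	 */
	desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
					 DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!desc)
		return -EINVAL;

	/* invoked from dma_do_tasklet() once per completed period */
	desc->callback = my_period_elapsed;
	desc->callback_param = param;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);

	return 0;
}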