From patchwork Thu Apr 3 03:17:00 2014
X-Patchwork-Submitter: Kuninori Morimoto
X-Patchwork-Id: 3931321
X-Patchwork-Delegate: vinod.koul@intel.com
Date: Wed, 02 Apr 2014 20:17:00 -0700 (PDT)
Message-ID: <87ioqrrtjn.wl%kuninori.morimoto.gx@gmail.com>
From: Kuninori Morimoto
Subject: [PATCH 2/2] DMA: shdma: add cyclic transfer support
To: Vinod Koul
Cc: Linux-SH, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org
In-Reply-To: <87lhvnrtkw.wl%kuninori.morimoto.gx@gmail.com>
References: <87lhvnrtkw.wl%kuninori.morimoto.gx@gmail.com>

From: Kuninori Morimoto

This patch adds cyclic transfer support and enables
dmaengine_prep_dma_cyclic().

Signed-off-by: Kuninori Morimoto
---
 drivers/dma/sh/shdma-base.c | 72 ++++++++++++++++++++++++++++++++++++++-----
 include/linux/shdma-base.h  |  1 +
 2 files changed, 66 insertions(+), 7 deletions(-)

diff --git a/drivers/dma/sh/shdma-base.c b/drivers/dma/sh/shdma-base.c
index 6786ecb..974794c 100644
--- a/drivers/dma/sh/shdma-base.c
+++ b/drivers/dma/sh/shdma-base.c
@@ -304,6 +304,7 @@ static dma_async_tx_callback __ld_cleanup(struct shdma_chan *schan, bool all)
 	dma_async_tx_callback callback = NULL;
 	void *param = NULL;
 	unsigned long flags;
+	LIST_HEAD(cyclic_list);
 
 	spin_lock_irqsave(&schan->chan_lock, flags);
 	list_for_each_entry_safe(desc, _desc, &schan->ld_queue, node) {
@@ -369,10 +370,16 @@ static dma_async_tx_callback __ld_cleanup(struct shdma_chan *schan, bool all)
 		if (((desc->mark == DESC_COMPLETED ||
 		      desc->mark == DESC_WAITING) &&
 		     async_tx_test_ack(&desc->async_tx)) || all) {
-			/* Remove from ld_queue list */
-			desc->mark = DESC_IDLE;
-			list_move(&desc->node, &schan->ld_free);
+			if (all || !desc->cyclic) {
+				/* Remove from ld_queue list */
+				desc->mark = DESC_IDLE;
+				list_move(&desc->node, &schan->ld_free);
+			} else {
+				/* reuse as cyclic */
+				desc->mark = DESC_SUBMITTED;
+				list_move_tail(&desc->node, &cyclic_list);
+			}
 
 			if (list_empty(&schan->ld_queue)) {
 				dev_dbg(schan->dev, "Bring down channel %d\n",
 					schan->id);
@@ -389,6 +396,8 @@ static dma_async_tx_callback __ld_cleanup(struct shdma_chan *schan, bool all)
 		 */
 		schan->dma_chan.completed_cookie = schan->dma_chan.cookie;
 
+	list_splice_tail(&cyclic_list, &schan->ld_queue);
+
 	spin_unlock_irqrestore(&schan->chan_lock, flags);
 
 	if (callback)
@@ -521,7 +530,7 @@ static struct shdma_desc *shdma_add_desc(struct shdma_chan *schan,
  */
 static struct dma_async_tx_descriptor *shdma_prep_sg(struct shdma_chan *schan,
 	struct scatterlist *sgl, unsigned int sg_len, dma_addr_t *addr,
-	enum dma_transfer_direction direction, unsigned long flags)
+	enum dma_transfer_direction direction, unsigned long flags, bool cyclic)
 {
 	struct scatterlist *sg;
 	struct shdma_desc *first = NULL, *new = NULL /* compiler... */;
@@ -569,7 +578,11 @@ static struct dma_async_tx_descriptor *shdma_prep_sg(struct shdma_chan *schan,
 			if (!new)
 				goto err_get_desc;
 
-			new->chunks = chunks--;
+			new->cyclic = cyclic;
+			if (cyclic)
+				new->chunks = 1;
+			else
+				new->chunks = chunks--;
 			list_add_tail(&new->node, &tx_list);
 		} while (len);
 	}
@@ -612,7 +625,8 @@ static struct dma_async_tx_descriptor *shdma_prep_memcpy(
 	sg_dma_address(&sg) = dma_src;
 	sg_dma_len(&sg) = len;
 
-	return shdma_prep_sg(schan, &sg, 1, &dma_dest, DMA_MEM_TO_MEM, flags);
+	return shdma_prep_sg(schan, &sg, 1, &dma_dest, DMA_MEM_TO_MEM,
+			     flags, false);
 }
 
 static struct dma_async_tx_descriptor *shdma_prep_slave_sg(
@@ -640,7 +654,50 @@ static struct dma_async_tx_descriptor *shdma_prep_slave_sg(
 	slave_addr = ops->slave_addr(schan);
 
 	return shdma_prep_sg(schan, sgl, sg_len, &slave_addr,
-			     direction, flags);
+			     direction, flags, false);
+}
+
+struct dma_async_tx_descriptor *shdma_prep_dma_cyclic(
+	struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
+	size_t period_len, enum dma_transfer_direction direction,
+	unsigned long flags, void *context)
+{
+	struct shdma_chan *schan = to_shdma_chan(chan);
+	struct shdma_dev *sdev = to_shdma_dev(schan->dma_chan.device);
+	const struct shdma_ops *ops = sdev->ops;
+	unsigned int sg_len = buf_len / period_len;
+	int slave_id = schan->slave_id;
+	dma_addr_t slave_addr;
+	struct scatterlist sgl[sg_len];
+	int i;
+
+	if (!chan)
+		return NULL;
+
+	BUG_ON(!schan->desc_num);
+
+	/* Someone calling slave DMA on a generic channel? */
+	if (slave_id < 0 || (buf_len < period_len)) {
+		dev_warn(schan->dev,
+			 "%s: bad parameter: buf_len=%zu, period_len=%zu, id=%d\n",
+			 __func__, buf_len, period_len, slave_id);
+		return NULL;
+	}
+
+	slave_addr = ops->slave_addr(schan);
+
+	sg_init_table(sgl, sg_len);
+	for (i = 0; i < sg_len; i++) {
+		dma_addr_t src = buf_addr + (period_len * i);
+
+		sg_set_page(&sgl[i], pfn_to_page(PFN_DOWN(src)), period_len,
+			    offset_in_page(src));
+		sg_dma_address(&sgl[i]) = src;
+		sg_dma_len(&sgl[i]) = period_len;
+	}
+
+	return shdma_prep_sg(schan, sgl, sg_len, &slave_addr,
+			     direction, flags, true);
 }
 
 static int shdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
@@ -915,6 +972,7 @@ int shdma_init(struct device *dev, struct shdma_dev *sdev,
 
 	/* Compulsory for DMA_SLAVE fields */
 	dma_dev->device_prep_slave_sg = shdma_prep_slave_sg;
+	dma_dev->device_prep_dma_cyclic = shdma_prep_dma_cyclic;
 	dma_dev->device_control = shdma_control;
 
 	dma_dev->dev = dev;
diff --git a/include/linux/shdma-base.h b/include/linux/shdma-base.h
index f92c0a4..abdf1f2 100644
--- a/include/linux/shdma-base.h
+++ b/include/linux/shdma-base.h
@@ -54,6 +54,7 @@ struct shdma_desc {
 	dma_cookie_t cookie;
 	int chunks;
 	int mark;
+	bool cyclic;	/* used as cyclic transfer */
 };
 
 struct shdma_chan {
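
For reference, the sketch below shows how a client driver might exercise the
new entry point through the generic dmaengine API once this patch is applied.
It is illustrative only and not part of the patch: the demo_* names, the
period and buffer sizes, and the FIFO address are invented for the example,
and error handling is abbreviated. The channel is assumed to be a slave
channel (slave_id >= 0), e.g. obtained with dma_request_slave_channel().

/*
 * Illustrative only -- not part of the patch above.  Minimal cyclic
 * transfer setup against a shdma slave channel.
 */
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>

#define DEMO_PERIOD_LEN	4096	/* bytes per period (arbitrary) */
#define DEMO_NR_PERIODS	4	/* periods in the ring (arbitrary) */

/* Runs once per completed period.  With this patch, __ld_cleanup()
 * re-marks completed cyclic descriptors DESC_SUBMITTED and splices them
 * back onto ld_queue, so the ring keeps running without re-submission. */
static void demo_period_done(void *arg)
{
}

static int demo_start_cyclic(struct device *dev, struct dma_chan *chan,
			     dma_addr_t fifo_addr)
{
	struct dma_slave_config cfg = {
		.direction	= DMA_MEM_TO_DEV,
		.dst_addr	= fifo_addr,
		.dst_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
	};
	size_t buf_len = DEMO_PERIOD_LEN * DEMO_NR_PERIODS;
	struct dma_async_tx_descriptor *desc;
	dma_addr_t dma_buf;
	void *buf;
	int ret;

	buf = dma_alloc_coherent(dev, buf_len, &dma_buf, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	ret = dmaengine_slave_config(chan, &cfg);
	if (ret)
		goto err;

	/* One descriptor covers the whole ring; shdma_prep_dma_cyclic()
	 * splits it into one sg entry per period, and DMA_PREP_INTERRUPT
	 * requests the per-period callback. */
	desc = dmaengine_prep_dma_cyclic(chan, dma_buf, buf_len,
					 DEMO_PERIOD_LEN, DMA_MEM_TO_DEV,
					 DMA_PREP_INTERRUPT);
	if (!desc) {
		ret = -EBUSY;
		goto err;
	}

	desc->callback = demo_period_done;
	desc->callback_param = NULL;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;

err:
	dma_free_coherent(dev, buf_len, buf, dma_buf);
	return ret;
}

Because each cyclic chunk is prepared with new->chunks = 1, every period
completes individually, which is what lets the callback fire once per period
while the descriptors cycle on ld_queue indefinitely (until the channel is
terminated, at which point the "all" path frees them as usual).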