From patchwork Wed Jun 25 12:00:33 2014
X-Patchwork-Submitter: Russell King - ARM Linux
X-Patchwork-Id: 4419891
X-Patchwork-Delegate: vinod.koul@intel.com
Date: Wed, 25 Jun 2014 13:00:33 +0100
From: Russell King - ARM Linux
To: Shawn Guo, Sascha Hauer, Mark Brown
Cc: Vinod Koul, Dan Williams, dmaengine@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH] Update imx-sdma cyclic handling to report residue
Message-ID: <20140625120032.GI32514@n2100.arm.linux.org.uk>
X-Mailing-List: dmaengine@vger.kernel.org

I received a report this morning from one of the Novena developers that the
behaviour of the iMX6 ASoC codec driver (using imx-pcm-dma.c) was sub-optimal
under high system load.

While issues relating to system load remain, reviewing the ASoC imx-pcm-dma.c
driver showed that it is not using residue support, because SDMA does not
provide it. The effect is that SDMA has to make multiple calls into the ASoC
and ALSA code, one for each period. Since ALSA's snd_pcm_period_elapsed() does
not need to be called multiple times - it is entirely sufficient to call it
once and let ALSA pick up the current buffer position via the pointer method -
we can do better here.
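As an aside, for illustration only (not part of this patch): below is a minimal
sketch of how a dmaengine-based PCM pointer callback can derive the current
buffer position from the reported residue, along the lines of what the generic
ALSA dmaengine helper does. The function name and the chan/cookie parameters
are hypothetical plumbing; a real driver would fetch them from its private
runtime data.

	#include <linux/dmaengine.h>
	#include <sound/pcm.h>

	static snd_pcm_uframes_t example_pcm_pointer(struct snd_pcm_substream *substream,
						     struct dma_chan *chan,
						     dma_cookie_t cookie)
	{
		struct dma_tx_state state;
		unsigned int buf_bytes, pos = 0;

		/* Ask the DMA engine how many bytes remain before the buffer end. */
		dmaengine_tx_status(chan, cookie, &state);

		buf_bytes = snd_pcm_lib_buffer_bytes(substream);
		if (state.residue > 0 && state.residue <= buf_bytes)
			pos = buf_bytes - state.residue;

		/* ALSA wants the position in frames, not bytes. */
		return bytes_to_frames(substream->runtime, pos);
	}

With residue reporting available, the PCM core can read the position whenever
it likes via the pointer method instead of relying on one callback per period;
this is essentially what snd_dmaengine_pcm_pointer() already does.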
We can also avoid stopping the DMA entirely, which is how real cyclic DMA
implementations behave. While this means that we replay some old samples, it
is nicer behaviour than having the audio stop and restart.

The changes to achieve this are relatively minor: imx-sdma.c can track where
the DMA is to the nearest descriptor boundary - it does this already when
deciding how many callbacks to issue. In doing this, buf_tail always points at
the descriptor which will complete next.

The residue is defined as the number of bytes remaining to the end of the
buffer, when the buffer is viewed as a single block of memory [start...end].
So, when we start out, there is a full buffer's worth of residue, and this
counts down as we approach the end of the buffer, eventually reaching zero at
the end, before returning to the full buffer's worth when we wrap back to the
start.

Moving the walking of the descriptors into the interrupt handler means that we
can update the BD_DONE flags at interrupt time, thus avoiding a delayed
tasklet stopping the cyclic DMA. It also means that the residue can be
calculated as (total descriptors - buf_tail) * descriptor size. This is what
the change below does.

We also update imx-pcm-dma.c to remove the NO_RESIDUE flag, since we now
provide the residue.

Signed-off-by: Russell King
Tested-by: Shawn Guo
---
IMHO, ASoC should encourage all DMA engine drivers to do this, in just the
same way that ASoC requires cyclic DMA support: if the DMA engine driver knows
how many times it needs to issue the callback (which is, strictly speaking,
incorrect behaviour according to the DMA engine documentation), then it has
enough information to provide the residue and avoid that loop.

Note that these two files can't be split into separate patches - both changes
are mutually dependent.
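As a worked example of the residue formula above (illustrative numbers only,
not part of the patch): for a cyclic transfer with num_bd = 4 descriptors of
period_len = 4096 bytes each, sdma_tx_status() would report:

	residue = (num_bd - buf_tail) * period_len

	buf_tail = 0:  (4 - 0) * 4096 = 16384   (full buffer, start of a cycle)
	buf_tail = 1:  (4 - 1) * 4096 = 12288
	buf_tail = 2:  (4 - 2) * 4096 =  8192
	buf_tail = 3:  (4 - 3) * 4096 =  4096   (last descriptor outstanding)

and the value returns to 16384 once buf_tail wraps back to 0, matching the
[start...end] view of the buffer described above.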
 drivers/dma/imx-sdma.c      | 22 ++++++++++++++++++----
 sound/soc/fsl/imx-pcm-dma.c |  1 -
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index 128714622bf5..14867e3ac8ff 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -255,6 +255,7 @@ struct sdma_channel {
 	enum dma_slave_buswidth		word_size;
 	unsigned int			buf_tail;
 	unsigned int			num_bd;
+	unsigned int			period_len;
 	struct sdma_buffer_descriptor	*bd;
 	dma_addr_t			bd_phys;
 	unsigned int			pc_from_device, pc_to_device;
@@ -593,6 +594,12 @@ static void sdma_event_disable(struct sdma_channel *sdmac, unsigned int event)
 
 static void sdma_handle_channel_loop(struct sdma_channel *sdmac)
 {
+	if (sdmac->desc.callback)
+		sdmac->desc.callback(sdmac->desc.callback_param);
+}
+
+static void sdma_update_channel_loop(struct sdma_channel *sdmac)
+{
 	struct sdma_buffer_descriptor *bd;
 
 	/*
@@ -611,9 +618,6 @@ static void sdma_handle_channel_loop(struct sdma_channel *sdmac)
 		bd->mode.status |= BD_DONE;
 		sdmac->buf_tail++;
 		sdmac->buf_tail %= sdmac->num_bd;
-
-		if (sdmac->desc.callback)
-			sdmac->desc.callback(sdmac->desc.callback_param);
 	}
 }
 
@@ -669,6 +673,9 @@ static irqreturn_t sdma_int_handler(int irq, void *dev_id)
 		int channel = fls(stat) - 1;
 		struct sdma_channel *sdmac = &sdma->channel[channel];
 
+		if (sdmac->flags & IMX_DMA_SG_LOOP)
+			sdma_update_channel_loop(sdmac);
+
 		tasklet_schedule(&sdmac->tasklet);
 
 		__clear_bit(channel, &stat);
@@ -1129,6 +1136,7 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
 	sdmac->status = DMA_IN_PROGRESS;
 
 	sdmac->buf_tail = 0;
+	sdmac->period_len = period_len;
 
 	sdmac->flags |= IMX_DMA_SG_LOOP;
 	sdmac->direction = direction;
@@ -1225,9 +1233,15 @@ static enum dma_status sdma_tx_status(struct dma_chan *chan,
 			      struct dma_tx_state *txstate)
 {
 	struct sdma_channel *sdmac = to_sdma_chan(chan);
+	u32 residue;
+
+	if (sdmac->flags & IMX_DMA_SG_LOOP)
+		residue = (sdmac->num_bd - sdmac->buf_tail) * sdmac->period_len;
+	else
+		residue = sdmac->chn_count - sdmac->chn_real_count;
 
 	dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie,
-			sdmac->chn_count - sdmac->chn_real_count);
+			 residue);
 
 	return sdmac->status;
 }
diff --git a/sound/soc/fsl/imx-pcm-dma.c b/sound/soc/fsl/imx-pcm-dma.c
index a545cbf8d16e..1f4ad080b907 100644
--- a/sound/soc/fsl/imx-pcm-dma.c
+++ b/sound/soc/fsl/imx-pcm-dma.c
@@ -59,7 +59,6 @@ int imx_pcm_dma_init(struct platform_device *pdev)
 {
 	return devm_snd_dmaengine_pcm_register(&pdev->dev,
 		&imx_dmaengine_pcm_config,
-		SND_DMAENGINE_PCM_FLAG_NO_RESIDUE |
 		SND_DMAENGINE_PCM_FLAG_COMPAT);
 }
 EXPORT_SYMBOL_GPL(imx_pcm_dma_init);