Message ID | 1386751756-12583-1-git-send-email-bandik@nvidia.com (mailing list archive) |
---|---|
State | Not Applicable |
Delegated to: | Vinod Koul |
On 12/11/2013 01:49 AM, Chaitanya Bandi wrote:
> Used runtime_pm APIs for clock enabling/disabling.
> Made changes such that clock is not enabled during
> idle. Also moved the usage of clk prepare/unprepare
> such that they are not called in isr context.

Hmm. This is going to cause conflicts with the patch I'm taking through
the Tegra tree which converts the driver to support the standard DMA DT
bindings. Perhaps this can wait until 3.15, or perhaps we can merge the
Tegra branch back into the DMA tree to resolve the conflict.

> diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c

> - * Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
> + * Copyright (c) 2012-13, NVIDIA CORPORATION. All rights reserved.

s/13/2013/

> @@ -580,6 +580,11 @@ static void handle_once_dma_done(struct tegra_dma_channel *tdc,
>  	list_add_tail(&sgreq->node, &tdc->free_sg_req);
>
>  	/* Do not start DMA if it is going to be terminate */
> +	if (list_empty(&tdc->pending_sg_req) && (!to_terminate)) {
> +		clk_disable(tdc->tdma->dma_clk);
> +		pm_runtime_put(tdc->tdma->dev);
> +	}
> +
>  	if (to_terminate || list_empty(&tdc->pending_sg_req))
>  		return;

Don't you want to insert the new code before the comment? Otherwise,
you're separating the existing comment and code.

Here and many other places, both pm_runtime_get/put *and*
clk_enable/disable are called. Why doesn't the code *just* call
pm_runtime_get/put, and let the implementation of those APIs perform the
clock_enable/disable? That'd be a lot more typical.

Most of the new calls to pm_runtime_*()/clk_{en,dis}able() are missing
error checking.

> @@ -682,12 +687,21 @@ static void tegra_dma_issue_pending(struct dma_chan *dc)

> +	pm_runtime_get(tdc->tdma->dev);

I think you need pm_runtime_get_sync() here to make sure the clock gets
turned on immediately? Perhaps that's why...

> +	ret = clk_enable(tdc->tdma->dma_clk);

... there's also direct manipulation of the clock everywhere.
Also, shouldn't the DMA core be calling pm_runtime_get(), rather than
each individual DMA driver?

--
To unsubscribe from this list: send the line "unsubscribe dmaengine" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Wed, Dec 11, 2013 at 01:49:29PM -0700, Stephen Warren wrote:
> On 12/11/2013 01:49 AM, Chaitanya Bandi wrote:
> > Used runtime_pm APIs for clock enabling/disabling.
> > Made changes such that clock is not enabled during
> > idle. Also moved the usage of clk prepare/unprepare
> > such that they are not called in isr context.
>
> Hmm. This is going to cause conflicts with the patch I'm taking through
> the Tegra tree which converts the driver to support the standard DMA DT
> bindings. Perhaps this can wait until 3.15, or perhaps we can merge the
> Tegra branch back into the DMA tree to resolve the conflict.

Ok pls do resubmit once your stuff is sorted out!

> > diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
>
> > - * Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
> > + * Copyright (c) 2012-13, NVIDIA CORPORATION. All rights reserved.
>
> s/13/2013/
>
> > @@ -580,6 +580,11 @@ static void handle_once_dma_done(struct tegra_dma_channel *tdc,
> >  	list_add_tail(&sgreq->node, &tdc->free_sg_req);
> >
> >  	/* Do not start DMA if it is going to be terminate */
> > +	if (list_empty(&tdc->pending_sg_req) && (!to_terminate)) {
> > +		clk_disable(tdc->tdma->dma_clk);
> > +		pm_runtime_put(tdc->tdma->dev);
> > +	}
> > +
> >  	if (to_terminate || list_empty(&tdc->pending_sg_req))
> >  		return;
>
> Don't you want to insert the new code before the comment? Otherwise,
> you're separating the existing comment and code.
>
> Here and many other places, both pm_runtime_get/put *and*
> clk_enable/disable are called. Why doesn't the code *just* call
> pm_runtime_get/put, and let the implementation of those APIs perform the
> clock_enable/disable? That'd be a lot more typical.

And that's what I was thinking too. Why don't we rely on runtime calls
for doing the ref counting, and then disable and enable the clock in the
runtime suspend & resume handlers?

> Most of the new calls to pm_runtime_*()/clk_{en,dis}able() are missing
> error checking.
> > @@ -682,12 +687,21 @@ static void tegra_dma_issue_pending(struct dma_chan *dc)
>
> > +	pm_runtime_get(tdc->tdma->dev);
>
> I think you need pm_runtime_get_sync() here to make sure the clock gets
> turned on immediately? Perhaps that's why...
>
> > +	ret = clk_enable(tdc->tdma->dma_clk);
>
> ... there's also direct manipulation of the clock everywhere.
>
> Also, shouldn't the DMA core be calling pm_runtime_get(), rather than
> each individual DMA driver?

Yes, I have been keen on that, not getting prioritized yet!

--
~Vinod
diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index 73654e3..355572d 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -1,7 +1,7 @@
 /*
  * DMA driver for Nvidia's Tegra20 APB DMA controller.
  *
- * Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2012-13, NVIDIA CORPORATION. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -580,6 +580,11 @@ static void handle_once_dma_done(struct tegra_dma_channel *tdc,
 	list_add_tail(&sgreq->node, &tdc->free_sg_req);
 
 	/* Do not start DMA if it is going to be terminate */
+	if (list_empty(&tdc->pending_sg_req) && (!to_terminate)) {
+		clk_disable(tdc->tdma->dma_clk);
+		pm_runtime_put(tdc->tdma->dev);
+	}
+
 	if (to_terminate || list_empty(&tdc->pending_sg_req))
 		return;
 
@@ -682,12 +687,21 @@ static void tegra_dma_issue_pending(struct dma_chan *dc)
 {
 	struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
 	unsigned long flags;
+	int ret;
 
 	spin_lock_irqsave(&tdc->lock, flags);
 	if (list_empty(&tdc->pending_sg_req)) {
 		dev_err(tdc2dev(tdc), "No DMA request\n");
 		goto end;
 	}
+
+	pm_runtime_get(tdc->tdma->dev);
+	ret = clk_enable(tdc->tdma->dma_clk);
+	if (ret < 0) {
+		dev_err(tdc2dev(tdc), "clk_enable failed: %d\n", ret);
+		return;
+	}
+
 	if (!tdc->busy) {
 		tdc_start_head_req(tdc);
@@ -744,6 +758,8 @@ static void tegra_dma_terminate_all(struct dma_chan *dc)
 			get_current_xferred_count(tdc, sgreq, status);
 	}
 	tegra_dma_resume(tdc);
+	clk_disable(tdc->tdma->dma_clk);
+	pm_runtime_put(tdc->tdma->dev);
 
 skip_dma_stop:
 	tegra_dma_abort_all(tdc);
@@ -1153,22 +1169,16 @@ static struct dma_async_tx_descriptor *tegra_dma_prep_dma_cyclic(
 static int tegra_dma_alloc_chan_resources(struct dma_chan *dc)
 {
 	struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
-	struct tegra_dma *tdma = tdc->tdma;
-	int ret;
 
+	clk_prepare(tdc->tdma->dma_clk);
 	dma_cookie_init(&tdc->dma_chan);
 	tdc->config_init = false;
-	ret = clk_prepare_enable(tdma->dma_clk);
-	if (ret < 0)
-		dev_err(tdc2dev(tdc), "clk_prepare_enable failed: %d\n", ret);
-	return ret;
+	return 0;
 }
 
 static void tegra_dma_free_chan_resources(struct dma_chan *dc)
 {
 	struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
-	struct tegra_dma *tdma = tdc->tdma;
-
 	struct tegra_dma_desc *dma_desc;
 	struct tegra_dma_sg_req *sg_req;
 	struct list_head dma_desc_list;
@@ -1182,7 +1192,7 @@ static void tegra_dma_free_chan_resources(struct dma_chan *dc)
 	if (tdc->busy)
 		tegra_dma_terminate_all(dc);
-
+	clk_unprepare(tdc->tdma->dma_clk);
 	spin_lock_irqsave(&tdc->lock, flags);
 	list_splice_init(&tdc->pending_sg_req, &sg_req_list);
 	list_splice_init(&tdc->free_sg_req, &sg_req_list);
@@ -1204,7 +1214,6 @@ static void tegra_dma_free_chan_resources(struct dma_chan *dc)
 		list_del(&sg_req->node);
 		kfree(sg_req);
 	}
-	clk_disable_unprepare(tdma->dma_clk);
 }
 
 /* Tegra20 specific DMA controller information */
@@ -1418,7 +1427,7 @@ static int tegra_dma_runtime_suspend(struct device *dev)
 	struct platform_device *pdev = to_platform_device(dev);
 	struct tegra_dma *tdma = platform_get_drvdata(pdev);
 
-	clk_disable_unprepare(tdma->dma_clk);
+	clk_disable(tdma->dma_clk);
 
 	return 0;
 }
@@ -1428,7 +1437,7 @@ static int tegra_dma_runtime_resume(struct device *dev)
 	struct tegra_dma *tdma = platform_get_drvdata(pdev);
 	int ret;
 
-	ret = clk_prepare_enable(tdma->dma_clk);
+	ret = clk_enable(tdma->dma_clk);
 	if (ret < 0) {
 		dev_err(dev, "clk_enable failed: %d\n", ret);
 		return ret;
Used runtime_pm APIs for clock enabling/disabling.
Made changes such that clock is not enabled during
idle. Also moved the usage of clk prepare/unprepare
such that they are not called in isr context.

Signed-off-by: Chaitanya Bandi <bandik@nvidia.com>
---
Verified with audio playback on Dalmore and checked runtime status.

 drivers/dma/tegra20-apb-dma.c | 35 ++++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 13 deletions(-)