From patchwork Tue May 17 03:48:51 2016
X-Patchwork-Submitter: "Wang, Jiada" <jiada_wang@mentor.com>
X-Patchwork-Id: 9108631
From: Jiada Wang <jiada_wang@mentor.com>
Subject: [PATCH 05/10] dma: imx-sdma: add flag to indicate SDMA channel state
Date: Tue, 17 May 2016 12:48:51 +0900
Message-ID: <1463456936-10634-6-git-send-email-jiada_wang@mentor.com>
In-Reply-To: <1463456936-10634-1-git-send-email-jiada_wang@mentor.com>
References: <1463456936-10634-1-git-send-email-jiada_wang@mentor.com>
X-Mailing-List: dmaengine@vger.kernel.org

There is a race between stopping an SDMA channel and the completion of an
SDMA transfer: sometimes, even after sdma_disable_channel() has been
called, the terminated channel's bit in the INTR register may still get
asserted, causing an extra SDMA tasklet to be scheduled.

Add an 'enabled' flag to each SDMA channel to indicate its state; the
interrupt handler schedules a tasklet only for channels that are in the
enabled state.

Signed-off-by: Jiada Wang <jiada_wang@mentor.com>
---
 drivers/dma/imx-sdma.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index 36f5e39..ef5d37c 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -327,6 +327,7 @@ struct sdma_channel {
 	unsigned int			chn_real_count;
 	struct tasklet_struct		tasklet;
 	struct imx_dma_data		data;
+	bool				enabled;
 };
 
 #define IMX_DMA_SG_LOOP		BIT(0)
@@ -562,7 +563,13 @@ static int sdma_config_ownership(struct sdma_channel *sdmac,
 
 static void sdma_enable_channel(struct sdma_engine *sdma, int channel)
 {
+	struct sdma_channel *sdmac = &sdma->channel[channel];
+	unsigned long flags;
+
+	spin_lock_irqsave(&sdmac->lock, flags);
+	sdmac->enabled = true;
 	writel(BIT(channel), sdma->regs + SDMA_H_START);
+	spin_unlock_irqrestore(&sdmac->lock, flags);
 }
 
 /*
@@ -740,9 +747,12 @@ static irqreturn_t sdma_int_handler(int irq, void *dev_id)
 		int channel = fls(stat) - 1;
 		struct sdma_channel *sdmac = &sdma->channel[channel];
 
-		tasklet_schedule(&sdmac->tasklet);
+		spin_lock(&sdmac->lock);
+		if (sdmac->enabled)
+			tasklet_schedule(&sdmac->tasklet);
 
 		__clear_bit(channel, &stat);
+		spin_unlock(&sdmac->lock);
 	}
 
 	return IRQ_HANDLED;
@@ -906,9 +916,13 @@ static int sdma_disable_channel(struct dma_chan *chan)
 	struct sdma_channel *sdmac = to_sdma_chan(chan);
 	struct sdma_engine *sdma = sdmac->sdma;
 	int channel = sdmac->channel;
+	unsigned long flags;
 
+	spin_lock_irqsave(&sdmac->lock, flags);
+	sdmac->enabled = false;
 	writel_relaxed(BIT(channel), sdma->regs + SDMA_H_STATSTOP);
 	sdmac->status = DMA_ERROR;
+	spin_unlock_irqrestore(&sdmac->lock, flags);
 
 	return 0;
 }
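
For reference only (not part of the patch): below is a minimal userspace
sketch of the locking pattern the patch introduces, with a pthread mutex
standing in for the channel spinlock and a printf standing in for
tasklet_schedule(). The struct and function names are made up for the
illustration; the point is simply that the "interrupt" path checks the
enabled flag under the same lock that the stop path uses to clear it, so a
late interrupt cannot queue stale work.

/* Illustrative model only -- not kernel code, not part of this patch. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct channel {
	pthread_mutex_t lock;	/* stands in for sdmac->lock */
	bool enabled;		/* stands in for sdmac->enabled */
};

/* Models sdma_enable_channel(): mark enabled, then start the hardware. */
static void channel_enable(struct channel *c)
{
	pthread_mutex_lock(&c->lock);
	c->enabled = true;
	/* ...hardware start would go here... */
	pthread_mutex_unlock(&c->lock);
}

/* Models sdma_disable_channel(): clear the flag while stopping. */
static void channel_disable(struct channel *c)
{
	pthread_mutex_lock(&c->lock);
	c->enabled = false;
	/* ...hardware stop would go here... */
	pthread_mutex_unlock(&c->lock);
}

/* Models sdma_int_handler(): schedule follow-up work only when enabled. */
static void channel_irq(struct channel *c)
{
	pthread_mutex_lock(&c->lock);
	if (c->enabled)
		printf("tasklet scheduled\n");
	else
		printf("late interrupt ignored\n");
	pthread_mutex_unlock(&c->lock);
}

int main(void)
{
	struct channel c = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.enabled = false,
	};

	channel_enable(&c);
	channel_irq(&c);	/* normal completion: work is scheduled */
	channel_disable(&c);
	channel_irq(&c);	/* interrupt racing with the stop: dropped */
	return 0;
}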