From patchwork Thu May  5 11:56:08 2022
X-Patchwork-Submitter: Amelie Delaunay
X-Patchwork-Id: 12839451
From: Amelie Delaunay
To: Vinod Koul, Maxime Coquelin, Alexandre Torgue
Cc: Amelie Delaunay
Subject: [PATCH 1/4] dmaengine: stm32-dma: introduce stm32_dma_sg_inc to
 manage chan->next_sg
Date: Thu, 5 May 2022 13:56:08 +0200
Message-ID: <20220505115611.38845-2-amelie.delaunay@foss.st.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220505115611.38845-1-amelie.delaunay@foss.st.com>
References: <20220505115611.38845-1-amelie.delaunay@foss.st.com>

chan->next_sg is used to know which transfer will start after the ongoing
one. It is incremented for each new transfer, either on transfer start for
non-cyclic transfers, or on Transfer Complete interrupt for cyclic
transfers. For cyclic transfers, when the last item is reached,
chan->next_sg must be reinitialized to the first item.

Signed-off-by: Amelie Delaunay
---
 drivers/dma/stm32-dma.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/dma/stm32-dma.c b/drivers/dma/stm32-dma.c
index d2365fab1b7a..5afe4205f57b 100644
--- a/drivers/dma/stm32-dma.c
+++ b/drivers/dma/stm32-dma.c
@@ -535,6 +535,13 @@ static void stm32_dma_dump_reg(struct stm32_dma_chan *chan)
 	dev_dbg(chan2dev(chan), "SFCR: 0x%08x\n", sfcr);
 }
 
+static void stm32_dma_sg_inc(struct stm32_dma_chan *chan)
+{
+	chan->next_sg++;
+	if (chan->desc->cyclic && (chan->next_sg == chan->desc->num_sgs))
+		chan->next_sg = 0;
+}
+
 static void stm32_dma_configure_next_sg(struct stm32_dma_chan *chan);
 
 static void stm32_dma_start_transfer(struct stm32_dma_chan *chan)
@@ -575,7 +582,7 @@ static void stm32_dma_start_transfer(struct stm32_dma_chan *chan)
 		stm32_dma_write(dmadev, STM32_DMA_SM1AR(chan->id), reg->dma_sm1ar);
 	stm32_dma_write(dmadev, STM32_DMA_SNDTR(chan->id), reg->dma_sndtr);
 
-	chan->next_sg++;
+	stm32_dma_sg_inc(chan);
 
 	/* Clear interrupt status if it is there */
 	status = stm32_dma_irq_status(chan);
@@ -606,9 +613,6 @@ static void stm32_dma_configure_next_sg(struct stm32_dma_chan *chan)
 	dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(id));
 	if (dma_scr & STM32_DMA_SCR_DBM) {
-		if (chan->next_sg == chan->desc->num_sgs)
-			chan->next_sg = 0;
-
 		sg_req = &chan->desc->sg_req[chan->next_sg];
 
 		if (dma_scr & STM32_DMA_SCR_CT) {
@@ -630,7 +634,7 @@ static void stm32_dma_handle_chan_done(struct stm32_dma_chan *chan)
 	if (chan->desc) {
 		if (chan->desc->cyclic) {
 			vchan_cyclic_callback(&chan->desc->vdesc);
-			chan->next_sg++;
+			stm32_dma_sg_inc(chan);
 			stm32_dma_configure_next_sg(chan);
 		} else {
 			chan->busy = false;
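As an illustration only (not part of the patch), the wrap-around behaviour of
stm32_dma_sg_inc() can be reproduced in a small standalone C sketch; the
demo_* types below are simplified stand-ins, not the driver's real
struct stm32_dma_chan/desc layout:

#include <assert.h>
#include <stdbool.h>

/* Reduced stand-ins for the driver structures, for illustration only. */
struct demo_desc {
	bool cyclic;
	unsigned int num_sgs;
};

struct demo_chan {
	struct demo_desc *desc;
	unsigned int next_sg;
};

/* Same logic as the stm32_dma_sg_inc() helper introduced above. */
static void demo_sg_inc(struct demo_chan *chan)
{
	chan->next_sg++;
	if (chan->desc->cyclic && (chan->next_sg == chan->desc->num_sgs))
		chan->next_sg = 0;
}

int main(void)
{
	struct demo_desc desc = { .cyclic = true, .num_sgs = 3 };
	struct demo_chan chan = { .desc = &desc, .next_sg = 0 };

	/* Cyclic: 0 -> 1 -> 2 -> 0, the first item is reused after the last one. */
	demo_sg_inc(&chan);
	assert(chan.next_sg == 1);
	demo_sg_inc(&chan);
	assert(chan.next_sg == 2);
	demo_sg_inc(&chan);
	assert(chan.next_sg == 0);

	/* Non-cyclic: the index keeps growing; the caller compares it to num_sgs. */
	desc.cyclic = false;
	chan.next_sg = 2;
	demo_sg_inc(&chan);
	assert(chan.next_sg == 3);

	return 0;
}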
From patchwork Thu May  5 11:56:09 2022
X-Patchwork-Submitter: Amelie Delaunay
X-Patchwork-Id: 12839452
From: Amelie Delaunay
To: Vinod Koul, Maxime Coquelin, Alexandre Torgue
Cc: Amelie Delaunay
Subject: [PATCH 2/4] dmaengine: stm32-dma: pass DMA_SxSCR value to
 stm32_dma_handle_chan_done()
Date: Thu, 5 May 2022 13:56:09 +0200
Message-ID: <20220505115611.38845-3-amelie.delaunay@foss.st.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220505115611.38845-1-amelie.delaunay@foss.st.com>
References: <20220505115611.38845-1-amelie.delaunay@foss.st.com>

stm32_dma_handle_chan_done() is called on Transfer Complete interrupt. As
the DMA_SxSCR register is already read in the interrupt handler, pass its
value as a parameter to stm32_dma_handle_chan_done().

Also return directly if chan->desc is null, to remove one indentation
level. Finally, stm32_dma_configure_next_sg() only does something when
Double-Buffer Mode (DBM) is enabled, so check that DBM is enabled before
calling it, which removes one indentation level in
stm32_dma_configure_next_sg() as well.

Signed-off-by: Amelie Delaunay
---
 drivers/dma/stm32-dma.c | 54 ++++++++++++++++++++---------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/drivers/dma/stm32-dma.c b/drivers/dma/stm32-dma.c
index 5afe4205f57b..eecd13795943 100644
--- a/drivers/dma/stm32-dma.c
+++ b/drivers/dma/stm32-dma.c
@@ -612,38 +612,38 @@ static void stm32_dma_configure_next_sg(struct stm32_dma_chan *chan)
 	id = chan->id;
 	dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(id));
-	if (dma_scr & STM32_DMA_SCR_DBM) {
-		sg_req = &chan->desc->sg_req[chan->next_sg];
-
-		if (dma_scr & STM32_DMA_SCR_CT) {
-			dma_sm0ar = sg_req->chan_reg.dma_sm0ar;
-			stm32_dma_write(dmadev, STM32_DMA_SM0AR(id), dma_sm0ar);
-			dev_dbg(chan2dev(chan), "CT=1 <=> SM0AR: 0x%08x\n",
-				stm32_dma_read(dmadev, STM32_DMA_SM0AR(id)));
-		} else {
-			dma_sm1ar = sg_req->chan_reg.dma_sm1ar;
-			stm32_dma_write(dmadev, STM32_DMA_SM1AR(id), dma_sm1ar);
-			dev_dbg(chan2dev(chan), "CT=0 <=> SM1AR: 0x%08x\n",
-				stm32_dma_read(dmadev, STM32_DMA_SM1AR(id)));
-		}
+	sg_req = &chan->desc->sg_req[chan->next_sg];
+
+	if (dma_scr & STM32_DMA_SCR_CT) {
+		dma_sm0ar = sg_req->chan_reg.dma_sm0ar;
+		stm32_dma_write(dmadev, STM32_DMA_SM0AR(id), dma_sm0ar);
+		dev_dbg(chan2dev(chan), "CT=1 <=> SM0AR: 0x%08x\n",
+			stm32_dma_read(dmadev, STM32_DMA_SM0AR(id)));
+	} else {
+		dma_sm1ar = sg_req->chan_reg.dma_sm1ar;
+		stm32_dma_write(dmadev, STM32_DMA_SM1AR(id), dma_sm1ar);
+		dev_dbg(chan2dev(chan), "CT=0 <=> SM1AR: 0x%08x\n",
+			stm32_dma_read(dmadev, STM32_DMA_SM1AR(id)));
 	}
 }
 
-static void stm32_dma_handle_chan_done(struct stm32_dma_chan *chan)
+static void stm32_dma_handle_chan_done(struct stm32_dma_chan *chan, u32 scr)
 {
-	if (chan->desc) {
-		if (chan->desc->cyclic) {
-			vchan_cyclic_callback(&chan->desc->vdesc);
-			stm32_dma_sg_inc(chan);
+	if (!chan->desc)
+		return;
+
+	if (chan->desc->cyclic) {
+		vchan_cyclic_callback(&chan->desc->vdesc);
+		stm32_dma_sg_inc(chan);
+		if (scr & STM32_DMA_SCR_DBM)
 			stm32_dma_configure_next_sg(chan);
-		} else {
-			chan->busy = false;
-			if (chan->next_sg == chan->desc->num_sgs) {
-				vchan_cookie_complete(&chan->desc->vdesc);
-				chan->desc = NULL;
-			}
-			stm32_dma_start_transfer(chan);
+	} else {
+		chan->busy = false;
+		if (chan->next_sg == chan->desc->num_sgs) {
+			vchan_cookie_complete(&chan->desc->vdesc);
+			chan->desc = NULL;
 		}
+		stm32_dma_start_transfer(chan);
 	}
 }
 
@@ -680,7 +680,7 @@ static irqreturn_t stm32_dma_chan_irq(int irq, void *devid)
 	if (status & STM32_DMA_TCI) {
 		stm32_dma_irq_clear(chan, STM32_DMA_TCI);
 		if (scr & STM32_DMA_SCR_TCIE)
-			stm32_dma_handle_chan_done(chan);
+			stm32_dma_handle_chan_done(chan, scr);
 		status &= ~STM32_DMA_TCI;
 	}
 
From patchwork Thu May  5 11:56:10 2022
X-Patchwork-Submitter: Amelie Delaunay
X-Patchwork-Id: 12839453
From: Amelie Delaunay
To: Vinod Koul, Maxime Coquelin, Alexandre Torgue
Cc: Amelie Delaunay
Subject: [PATCH 3/4] dmaengine: stm32-dma: rename pm ops before dma
 pause/resume introduction
Date: Thu, 5 May 2022 13:56:10 +0200
Message-ID: <20220505115611.38845-4-amelie.delaunay@foss.st.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220505115611.38845-1-amelie.delaunay@foss.st.com>
References: <20220505115611.38845-1-amelie.delaunay@foss.st.com>

The dmaengine framework offers device_pause and device_resume ops to pause
an ongoing transfer and resume it later. To avoid any confusion with the
system sleep pm ops, rename the pm ops to stm32_dma_pm_suspend and
stm32_dma_pm_resume.

Signed-off-by: Amelie Delaunay
---
 drivers/dma/stm32-dma.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/dma/stm32-dma.c b/drivers/dma/stm32-dma.c
index eecd13795943..0b35c5178501 100644
--- a/drivers/dma/stm32-dma.c
+++ b/drivers/dma/stm32-dma.c
@@ -1486,7 +1486,7 @@ static int stm32_dma_runtime_resume(struct device *dev)
 #endif
 
 #ifdef CONFIG_PM_SLEEP
-static int stm32_dma_suspend(struct device *dev)
+static int stm32_dma_pm_suspend(struct device *dev)
 {
 	struct stm32_dma_device *dmadev = dev_get_drvdata(dev);
 	int id, ret, scr;
@@ -1510,14 +1510,14 @@ static int stm32_dma_suspend(struct device *dev)
 	return 0;
 }
 
-static int stm32_dma_resume(struct device *dev)
+static int stm32_dma_pm_resume(struct device *dev)
 {
 	return pm_runtime_force_resume(dev);
 }
 #endif
 
 static const struct dev_pm_ops stm32_dma_pm_ops = {
-	SET_SYSTEM_SLEEP_PM_OPS(stm32_dma_suspend, stm32_dma_resume)
+	SET_SYSTEM_SLEEP_PM_OPS(stm32_dma_pm_suspend, stm32_dma_pm_resume)
 	SET_RUNTIME_PM_OPS(stm32_dma_runtime_suspend,
 			   stm32_dma_runtime_resume, NULL)
 };
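To make the naming distinction concrete, the two callback sets involved look
roughly as follows. This is a minimal sketch with hypothetical foo_* names,
not code from the driver: dev_pm_ops entries are invoked by the PM core on
the controller device, while device_pause/device_resume (added by the next
patch) are invoked per channel by DMA clients through
dmaengine_pause()/dmaengine_resume().

#include <linux/dmaengine.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

/* System sleep PM: acts on the whole controller *device* (hypothetical foo_* driver). */
static int foo_pm_suspend(struct device *dev)
{
	return pm_runtime_force_suspend(dev);
}

static int foo_pm_resume(struct device *dev)
{
	return pm_runtime_force_resume(dev);
}

static const struct dev_pm_ops foo_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(foo_pm_suspend, foo_pm_resume)
};

/* dmaengine pause/resume: acts on one *channel*, at a DMA client's request. */
static int foo_pause(struct dma_chan *chan)
{
	/* Stop the stream and record where it was interrupted. */
	return 0;
}

static int foo_resume(struct dma_chan *chan)
{
	/* Reprogram addresses/counters and re-enable the stream. */
	return 0;
}

static void foo_register_dma_ops(struct dma_device *dd)
{
	dd->device_pause = foo_pause;
	dd->device_resume = foo_resume;
}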
From patchwork Thu May  5 11:56:11 2022
X-Patchwork-Submitter: Amelie Delaunay
X-Patchwork-Id: 12839454
From: Amelie Delaunay
To: Vinod Koul, Maxime Coquelin, Alexandre Torgue
Cc: Amelie Delaunay
Subject: [PATCH 4/4] dmaengine: stm32-dma: add device_pause/device_resume
 support
Date: Thu, 5 May 2022 13:56:11 +0200
Message-ID: <20220505115611.38845-5-amelie.delaunay@foss.st.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220505115611.38845-1-amelie.delaunay@foss.st.com>
References: <20220505115611.38845-1-amelie.delaunay@foss.st.com>

At any time, a DMA transfer can be suspended to be restarted later, before
the end of the DMA transfer.

In order to restart from the point where the transfer was stopped,
DMA_SxNDTR has to be read after disabling the channel by clearing the EN
bit in the DMA_SxCR register, to know the number of data items already
collected. Peripheral and/or memory addresses have to be updated in order
to adjust the address pointers. The SxNDTR register has to be updated with
the remaining number of data items to be transferred (the value read when
the channel was disabled). Then the channel can be re-enabled to resume
the transfer from the point it was suspended.

If the channel was configured in circular or double-buffer mode, circular
or double-buffer mode must be disabled before re-enabling the channel, so
that the SxNDTR register can be reconfigured; it is re-activated on the
next Transfer Complete interrupt, where the channel is disabled by
hardware. This is needed because, on resume, re-writing the SxNDTR
register value updates the internal hardware auto-reload data counter,
which would otherwise truncate all subsequent transfers after a
pause/resume sequence.

Signed-off-by: Amelie Delaunay
---
 drivers/dma/stm32-dma.c | 247 +++++++++++++++++++++++++++++++++++++---
 1 file changed, 234 insertions(+), 13 deletions(-)

diff --git a/drivers/dma/stm32-dma.c b/drivers/dma/stm32-dma.c
index 0b35c5178501..adb25a11c70f 100644
--- a/drivers/dma/stm32-dma.c
+++ b/drivers/dma/stm32-dma.c
@@ -208,6 +208,7 @@ struct stm32_dma_chan {
 	u32 threshold;
 	u32 mem_burst;
 	u32 mem_width;
+	enum dma_status status;
 };
 
 struct stm32_dma_device {
@@ -485,6 +486,7 @@ static void stm32_dma_stop(struct stm32_dma_chan *chan)
 	}
 
 	chan->busy = false;
+	chan->status = DMA_COMPLETE;
 }
 
 static int stm32_dma_terminate_all(struct dma_chan *c)
@@ -595,11 +597,11 @@ static void stm32_dma_start_transfer(struct stm32_dma_chan *chan)
 	stm32_dma_dump_reg(chan);
 
 	/* Start DMA */
+	chan->busy = true;
+	chan->status = DMA_IN_PROGRESS;
 	reg->dma_scr |= STM32_DMA_SCR_EN;
 	stm32_dma_write(dmadev, STM32_DMA_SCR(chan->id), reg->dma_scr);
 
-	chan->busy = true;
-
 	dev_dbg(chan2dev(chan), "vchan %pK: started\n", &chan->vchan);
 }
 
@@ -627,6 +629,95 @@ static void stm32_dma_configure_next_sg(struct stm32_dma_chan *chan)
 	}
 }
 
+static void stm32_dma_handle_chan_paused(struct stm32_dma_chan *chan)
+{
+	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
+	u32 dma_scr;
+
+	/*
+	 * Read and store current remaining data items and peripheral/memory addresses to be
+	 * updated on resume
+	 */
+	dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id));
+	/*
+	 * Transfer can be paused while between a previous resume and reconfiguration on transfer
+	 * complete. If transfer is cyclic and CIRC and DBM have been deactivated for resume, need
+	 * to set it here in SCR backup to ensure a good reconfiguration on transfer complete.
+	 */
+	if (chan->desc && chan->desc->cyclic) {
+		if (chan->desc->num_sgs == 1)
+			dma_scr |= STM32_DMA_SCR_CIRC;
+		else
+			dma_scr |= STM32_DMA_SCR_DBM;
+	}
+	chan->chan_reg.dma_scr = dma_scr;
+
+	/*
+	 * Need to temporarily deactivate CIRC/DBM until next Transfer Complete interrupt,
+	 * otherwise on resume NDTR autoreload value will be wrong (lower than the initial period
+	 * length)
+	 */
+	if (chan->desc && chan->desc->cyclic) {
+		dma_scr &= ~(STM32_DMA_SCR_DBM | STM32_DMA_SCR_CIRC);
+		stm32_dma_write(dmadev, STM32_DMA_SCR(chan->id), dma_scr);
+	}
+
+	chan->chan_reg.dma_sndtr = stm32_dma_read(dmadev, STM32_DMA_SNDTR(chan->id));
+
+	dev_dbg(chan2dev(chan), "vchan %pK: paused\n", &chan->vchan);
+}
+
+static void stm32_dma_post_resume_reconfigure(struct stm32_dma_chan *chan)
+{
+	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
+	struct stm32_dma_sg_req *sg_req;
+	u32 dma_scr, status, id;
+
+	id = chan->id;
+	dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(id));
+
+	/* Clear interrupt status if it is there */
+	status = stm32_dma_irq_status(chan);
+	if (status)
+		stm32_dma_irq_clear(chan, status);
+
+	if (!chan->next_sg)
+		sg_req = &chan->desc->sg_req[chan->desc->num_sgs - 1];
+	else
+		sg_req = &chan->desc->sg_req[chan->next_sg - 1];
+
+	/* Reconfigure NDTR with the initial value */
+	stm32_dma_write(dmadev, STM32_DMA_SNDTR(chan->id), sg_req->chan_reg.dma_sndtr);
+
+	/* Restore SPAR */
+	stm32_dma_write(dmadev, STM32_DMA_SPAR(id), sg_req->chan_reg.dma_spar);
+
+	/* Restore SM0AR/SM1AR whatever DBM/CT as they may have been modified */
+	stm32_dma_write(dmadev, STM32_DMA_SM0AR(id), sg_req->chan_reg.dma_sm0ar);
+	stm32_dma_write(dmadev, STM32_DMA_SM1AR(id), sg_req->chan_reg.dma_sm1ar);
+
+	/* Reactivate CIRC/DBM if needed */
+	if (chan->chan_reg.dma_scr & STM32_DMA_SCR_DBM) {
+		dma_scr |= STM32_DMA_SCR_DBM;
+		/* Restore CT */
+		if (chan->chan_reg.dma_scr & STM32_DMA_SCR_CT)
+			dma_scr &= ~STM32_DMA_SCR_CT;
+		else
+			dma_scr |= STM32_DMA_SCR_CT;
+	} else if (chan->chan_reg.dma_scr & STM32_DMA_SCR_CIRC) {
+		dma_scr |= STM32_DMA_SCR_CIRC;
+	}
+	stm32_dma_write(dmadev, STM32_DMA_SCR(chan->id), dma_scr);
+
+	stm32_dma_configure_next_sg(chan);
+
+	stm32_dma_dump_reg(chan);
+
+	dma_scr |= STM32_DMA_SCR_EN;
+	stm32_dma_write(dmadev, STM32_DMA_SCR(chan->id), dma_scr);
+
+	dev_dbg(chan2dev(chan), "vchan %pK: reconfigured after pause/resume\n", &chan->vchan);
+}
+
 static void stm32_dma_handle_chan_done(struct stm32_dma_chan *chan, u32 scr)
 {
 	if (!chan->desc)
@@ -635,10 +726,14 @@ static void stm32_dma_handle_chan_done(struct stm32_dma_chan *chan, u32 scr)
 	if (chan->desc->cyclic) {
 		vchan_cyclic_callback(&chan->desc->vdesc);
 		stm32_dma_sg_inc(chan);
-		if (scr & STM32_DMA_SCR_DBM)
+		/* cyclic while CIRC/DBM disable => post resume reconfiguration needed */
+		if (!(scr & (STM32_DMA_SCR_CIRC | STM32_DMA_SCR_DBM)))
+			stm32_dma_post_resume_reconfigure(chan);
+		else if (scr & STM32_DMA_SCR_DBM)
 			stm32_dma_configure_next_sg(chan);
 	} else {
 		chan->busy = false;
+		chan->status = DMA_COMPLETE;
 		if (chan->next_sg == chan->desc->num_sgs) {
 			vchan_cookie_complete(&chan->desc->vdesc);
 			chan->desc = NULL;
@@ -679,8 +774,12 @@ static irqreturn_t stm32_dma_chan_irq(int irq, void *devid)
 
 	if (status & STM32_DMA_TCI) {
 		stm32_dma_irq_clear(chan, STM32_DMA_TCI);
-		if (scr & STM32_DMA_SCR_TCIE)
-			stm32_dma_handle_chan_done(chan, scr);
+		if (scr & STM32_DMA_SCR_TCIE) {
+			if (chan->status == DMA_PAUSED && !(scr & STM32_DMA_SCR_EN))
+				stm32_dma_handle_chan_paused(chan);
+			else
+				stm32_dma_handle_chan_done(chan, scr);
+		}
 		status &= ~STM32_DMA_TCI;
 	}
 
@@ -715,6 +814,107 @@ static void stm32_dma_issue_pending(struct dma_chan *c)
 	spin_unlock_irqrestore(&chan->vchan.lock, flags);
 }
 
+static int stm32_dma_pause(struct dma_chan *c)
+{
+	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
+	unsigned long flags;
+	int ret;
+
+	if (chan->status != DMA_IN_PROGRESS)
+		return -EPERM;
+
+	spin_lock_irqsave(&chan->vchan.lock, flags);
+	ret = stm32_dma_disable_chan(chan);
+	/*
+	 * A transfer complete flag is set to indicate the end of transfer due to the stream
+	 * interruption, so wait for interrupt
+	 */
+	if (!ret)
+		chan->status = DMA_PAUSED;
+	spin_unlock_irqrestore(&chan->vchan.lock, flags);
+
+	return ret;
+}
+
+static int stm32_dma_resume(struct dma_chan *c)
+{
+	struct stm32_dma_chan *chan = to_stm32_dma_chan(c);
+	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
+	struct stm32_dma_chan_reg chan_reg = chan->chan_reg;
+	u32 id = chan->id, scr, ndtr, offset, spar, sm0ar, sm1ar;
+	struct stm32_dma_sg_req *sg_req;
+	unsigned long flags;
+
+	if (chan->status != DMA_PAUSED)
+		return -EPERM;
+
+	scr = stm32_dma_read(dmadev, STM32_DMA_SCR(id));
+	if (WARN_ON(scr & STM32_DMA_SCR_EN))
+		return -EPERM;
+
+	spin_lock_irqsave(&chan->vchan.lock, flags);
+
+	/* sg_reg[prev_sg] contains original ndtr, sm0ar and sm1ar before pausing the transfer */
+	if (!chan->next_sg)
+		sg_req = &chan->desc->sg_req[chan->desc->num_sgs - 1];
+	else
+		sg_req = &chan->desc->sg_req[chan->next_sg - 1];
+
+	ndtr = sg_req->chan_reg.dma_sndtr;
+	offset = (ndtr - chan_reg.dma_sndtr) << STM32_DMA_SCR_PSIZE_GET(chan_reg.dma_scr);
+	spar = sg_req->chan_reg.dma_spar;
+	sm0ar = sg_req->chan_reg.dma_sm0ar;
+	sm1ar = sg_req->chan_reg.dma_sm1ar;
+
+	/*
+	 * The peripheral and/or memory addresses have to be updated in order to adjust the
+	 * address pointers. Need to check increment.
+	 */
+	if (chan_reg.dma_scr & STM32_DMA_SCR_PINC)
+		stm32_dma_write(dmadev, STM32_DMA_SPAR(id), spar + offset);
+	else
+		stm32_dma_write(dmadev, STM32_DMA_SPAR(id), spar);
+
+	if (!(chan_reg.dma_scr & STM32_DMA_SCR_MINC))
+		offset = 0;
+
+	/*
+	 * In case of DBM, the current target could be SM1AR.
+	 * Need to temporarily deactivate CIRC/DBM to finish the current transfer, so
+	 * SM0AR becomes the current target and must be updated with SM1AR + offset if CT=1.
+	 */
+	if ((chan_reg.dma_scr & STM32_DMA_SCR_DBM) && (chan_reg.dma_scr & STM32_DMA_SCR_CT))
+		stm32_dma_write(dmadev, STM32_DMA_SM1AR(id), sm1ar + offset);
+	else
+		stm32_dma_write(dmadev, STM32_DMA_SM0AR(id), sm0ar + offset);
+
+	/* NDTR must be restored otherwise internal HW counter won't be correctly reset */
+	stm32_dma_write(dmadev, STM32_DMA_SNDTR(id), chan_reg.dma_sndtr);
+
+	/*
+	 * Need to temporarily deactivate CIRC/DBM until next Transfer Complete interrupt,
+	 * otherwise NDTR autoreload value will be wrong (lower than the initial period length)
+	 */
+	if (chan_reg.dma_scr & (STM32_DMA_SCR_CIRC | STM32_DMA_SCR_DBM))
+		chan_reg.dma_scr &= ~(STM32_DMA_SCR_CIRC | STM32_DMA_SCR_DBM);
+
+	if (chan_reg.dma_scr & STM32_DMA_SCR_DBM)
+		stm32_dma_configure_next_sg(chan);
+
+	stm32_dma_dump_reg(chan);
+
+	/* The stream may then be re-enabled to restart transfer from the point it was stopped */
+	chan->status = DMA_IN_PROGRESS;
+	chan_reg.dma_scr |= STM32_DMA_SCR_EN;
+	stm32_dma_write(dmadev, STM32_DMA_SCR(id), chan_reg.dma_scr);
+
+	spin_unlock_irqrestore(&chan->vchan.lock, flags);
+
+	dev_dbg(chan2dev(chan), "vchan %pK: resumed\n", &chan->vchan);
+
+	return 0;
+}
+
 static int stm32_dma_set_xfer_param(struct stm32_dma_chan *chan,
 				    enum dma_transfer_direction direction,
 				    enum dma_slave_buswidth *buswidth,
@@ -982,10 +1182,12 @@ static struct dma_async_tx_descriptor *stm32_dma_prep_dma_cyclic(
 	}
 
 	/* Enable Circular mode or double buffer mode */
-	if (buf_len == period_len)
+	if (buf_len == period_len) {
 		chan->chan_reg.dma_scr |= STM32_DMA_SCR_CIRC;
-	else
+	} else {
 		chan->chan_reg.dma_scr |= STM32_DMA_SCR_DBM;
+		chan->chan_reg.dma_scr &= ~STM32_DMA_SCR_CT;
+	}
 
 	/* Clear periph ctrl if client set it */
 	chan->chan_reg.dma_scr &= ~STM32_DMA_SCR_PFCTRL;
@@ -1095,24 +1297,36 @@ static bool stm32_dma_is_current_sg(struct stm32_dma_chan *chan)
 {
 	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
 	struct stm32_dma_sg_req *sg_req;
-	u32 dma_scr, dma_smar, id;
+	u32 dma_scr, dma_smar, id, period_len;
 
 	id = chan->id;
 	dma_scr = stm32_dma_read(dmadev, STM32_DMA_SCR(id));
 
+	/* In cyclic CIRC but not DBM, CT is not used */
 	if (!(dma_scr & STM32_DMA_SCR_DBM))
 		return true;
 
 	sg_req = &chan->desc->sg_req[chan->next_sg];
+	period_len = sg_req->len;
 
+	/* DBM - take care of a previous pause/resume not yet post reconfigured */
 	if (dma_scr & STM32_DMA_SCR_CT) {
 		dma_smar = stm32_dma_read(dmadev, STM32_DMA_SM0AR(id));
-		return (dma_smar == sg_req->chan_reg.dma_sm0ar);
+		/*
+		 * If transfer has been pause/resumed,
+		 * SM0AR is in the range of [SM0AR:SM0AR+period_len]
+		 */
+		return (dma_smar >= sg_req->chan_reg.dma_sm0ar &&
+			dma_smar < sg_req->chan_reg.dma_sm0ar + period_len);
 	}
 
 	dma_smar = stm32_dma_read(dmadev, STM32_DMA_SM1AR(id));
-
-	return (dma_smar == sg_req->chan_reg.dma_sm1ar);
+	/*
+	 * If transfer has been pause/resumed,
+	 * SM1AR is in the range of [SM1AR:SM1AR+period_len]
+	 */
+	return (dma_smar >= sg_req->chan_reg.dma_sm1ar &&
+		dma_smar < sg_req->chan_reg.dma_sm1ar + period_len);
 }
 
@@ -1152,7 +1366,7 @@ static size_t stm32_dma_desc_residue(struct stm32_dma_chan *chan,
 
 	residue = stm32_dma_get_remaining_bytes(chan);
 
-	if (!stm32_dma_is_current_sg(chan)) {
+	if (chan->desc->cyclic && !stm32_dma_is_current_sg(chan)) {
 		n_sg++;
 		if (n_sg == chan->desc->num_sgs)
 			n_sg = 0;
@@ -1192,7 +1406,12 @@ static enum dma_status stm32_dma_tx_status(struct dma_chan *c,
 	u32 residue = 0;
 
 	status = dma_cookie_status(c, cookie, state);
-	if (status == DMA_COMPLETE || !state)
+	if (status == DMA_COMPLETE)
+		return status;
+
+	status = chan->status;
+
+	if (!state)
 		return status;
 
 	spin_lock_irqsave(&chan->vchan.lock, flags);
@@ -1381,6 +1600,8 @@ static int stm32_dma_probe(struct platform_device *pdev)
 	dd->device_prep_slave_sg = stm32_dma_prep_slave_sg;
 	dd->device_prep_dma_cyclic = stm32_dma_prep_dma_cyclic;
 	dd->device_config = stm32_dma_slave_config;
+	dd->device_pause = stm32_dma_pause;
+	dd->device_resume = stm32_dma_resume;
 	dd->device_terminate_all = stm32_dma_terminate_all;
 	dd->device_synchronize = stm32_dma_synchronize;
 	dd->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
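From a DMA client's point of view, the new ops are reached through the
generic dmaengine helpers. Below is a hedged sketch of how a client might
pause and resume a cyclic transfer once this series is applied; the
foo_pause_resume_demo() function, the "rx" channel name and the buffer
parameters are placeholders, dmaengine_slave_config() and most error
handling are omitted for brevity, and this is not code from the patch set.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>

static int foo_pause_resume_demo(struct device *dev, dma_addr_t buf,
				 size_t buf_len, size_t period_len)
{
	struct dma_async_tx_descriptor *desc;
	struct dma_tx_state state;
	struct dma_chan *chan;
	dma_cookie_t cookie;
	int ret;

	chan = dma_request_chan(dev, "rx");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
					 DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
	if (!desc) {
		ret = -EINVAL;
		goto release;
	}

	cookie = dmaengine_submit(desc);
	dma_async_issue_pending(chan);

	/* ... transfer runs; later the client decides to suspend it ... */
	ret = dmaengine_pause(chan);	/* routed to the driver's device_pause op */
	if (ret)
		goto release;

	/* Residue queried while paused reflects where the stream was stopped */
	dmaengine_tx_status(chan, cookie, &state);
	dev_info(dev, "paused, residue %u bytes\n", state.residue);

	ret = dmaengine_resume(chan);	/* routed to the driver's device_resume op */

release:
	dma_release_channel(chan);
	return ret;
}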