From patchwork Fri Jun 8 09:43:20 2018
X-Patchwork-Submitter: Robin Gong <yibin.gong@nxp.com>
X-Patchwork-Id: 10454025
From: Robin Gong <yibin.gong@nxp.com>
To: Sascha Hauer <s.hauer@pengutronix.de>
Subject: RE: [PATCH v2 0/5] add virt-dma support for imx-sdma
Date: Fri, 8 Jun 2018 09:43:20 +0000
References: <1528465490-19684-1-git-send-email-yibin.gong@nxp.com>
 <20180608084310.bzxeazqht5u6mgdw@pengutronix.de>
In-Reply-To: <20180608084310.bzxeazqht5u6mgdw@pengutronix.de>
Cc: linux-kernel@vger.kernel.org, vkoul@kernel.org, dl-linux-imx,
 dmaengine@vger.kernel.org, dan.j.williams@intel.com,
 linux-arm-kernel@lists.infradead.org

Thanks Sascha, I'll integrate it into v3.

-----Original Message-----
From: Sascha Hauer [mailto:s.hauer@pengutronix.de]
Sent: 8 June 2018 16:43
To: Robin Gong
Cc: vkoul@kernel.org; dan.j.williams@intel.com; dmaengine@vger.kernel.org;
 linux-kernel@vger.kernel.org; linux-arm-kernel@lists.infradead.org;
 dl-linux-imx
Subject: Re: [PATCH v2 0/5] add virt-dma support for imx-sdma

On Fri, Jun 08, 2018 at 09:44:45PM +0800, Robin Gong wrote:
> The legacy sdma driver has the following limitations and drawbacks:
> 1. It hardcodes the maximum number of BDs as "PAGE_SIZE / sizeof(*)" and
>    allocates a full page per channel, even though only a few BDs are
>    needed most of the time -- while in a few other cases even PAGE_SIZE
>    may not be enough.
> 2. An SDMA channel can't stop immediately once it is disabled, which
>    means an SDMA interrupt may still arrive after the channel has been
>    terminated. There are patches for this corner case, such as commit
>    2746e2c389f9, but they do not cover the non-cyclic case.
>
> The common virt-dma framework overcomes the above limitations. It can
> allocate BDs dynamically and free them once the transfer is done, so no
> memory is wasted and there is no upper limit beyond how much memory can
> be requested from the kernel. For No. 2, the issue can be worked around
> by checking whether a descriptor ("sdmac->desc") is still available when
> the unwanted interrupt arrives. Finally, the common virt-dma makes the
> sdma driver easier to maintain.
>
> Changes from v1:
> 1. Split the v1 patch into 5 patches.
> 2. Remove some unnecessary condition checks.
> 3. Remove the unnecessary 'pending' list.
>
> Robin Gong (5):
>   dmaengine: imx-sdma: add virt-dma support
>   Revert "dmaengine: imx-sdma: fix pagefault when channel is disabled
>     during interrupt"
>   dmaengine: imx-sdma: remove useless lock
>   dmaengine: imx-sdma: remove the maximum limitation for bd numbers
>   dmaengine: imx-sdma: add sdma_transfer_init to decrease code overlap
>
>  drivers/dma/Kconfig    |   1 +
>  drivers/dma/imx-sdma.c | 392 ++++++++++++++++++++++++++++---------------------
>  2 files changed, 227 insertions(+), 166 deletions(-)

Please put the attached patch in front of your series. It makes the
virt-dma support patch smaller and thus easier to review.

Sascha

--------------------------------8<----------------------------------

From a70ccdf780cc6fcddd2d06c4a3eb0123d4aba443 Mon Sep 17 00:00:00 2001
From: Sascha Hauer <s.hauer@pengutronix.de>
Date: Fri, 8 Jun 2018 10:20:18 +0200
Subject: [PATCH 1/2] dmaengine: imx-sdma: factor out a struct sdma_desc from
 struct sdma_channel

This is a preparation step to make adding virt-dma support easier. We
create a struct sdma_desc, move some fields from struct sdma_channel
there and add a pointer from the former to the latter. For now the data
is allocated statically in struct sdma_channel, but with virt-dma
support it will be allocated dynamically.
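To make the new layout easier to picture before diving into the diff, here is a
condensed sketch of the two structures after this change (an illustration added
for this write-up, not part of the patch; most fields are omitted and the
authoritative definitions are in the diff below):

/*
 * Illustrative sketch only -- trimmed versions of the structures touched
 * by this patch.  See the diff below for the complete definitions.
 */
#include <linux/types.h>
#include <linux/dmaengine.h>

struct sdma_buffer_descriptor;          /* hardware BD, defined in imx-sdma.c */

/* New: per-transfer state split out of struct sdma_channel. */
struct sdma_desc {
        unsigned int num_bd;                    /* BDs used by this transfer */
        dma_addr_t bd_phys;                     /* DMA address of the BD array */
        unsigned int buf_tail;                  /* next BD to complete (cyclic mode) */
        unsigned int chn_count;                 /* bytes set up for the transfer */
        unsigned int chn_real_count;            /* bytes reported back by the hardware */
        struct sdma_channel *sdmac;             /* owning channel */
        struct sdma_buffer_descriptor *bd;      /* the BD array itself */
};

/* The channel now only points at the active per-transfer state. */
struct sdma_channel {
        struct sdma_desc *desc;                 /* currently active transfer */
        struct sdma_desc _desc;                 /* static storage for now; virt-dma
                                                 * will allocate this dynamically */
        struct dma_async_tx_descriptor txdesc;  /* renamed from 'desc' to free the name */
        /* ... remaining channel-wide fields unchanged ... */
};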
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 drivers/dma/imx-sdma.c | 137 +++++++++++++++++++++++++++----------------------
 1 file changed, 83 insertions(+), 54 deletions(-)

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index ccd03c3cedfe..556d08712f4a 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -295,6 +295,30 @@ struct sdma_context_data {
 
 struct sdma_engine;
 
+/**
+ * struct sdma_desc - descriptor structor for one transfer
+ * @vd			descriptor for virt dma
+ * @num_bd		max NUM_BD. number of descriptors currently handling
+ * @buf_tail		ID of the buffer that was processed
+ * @buf_ptail		ID of the previous buffer that was processed
+ * @period_len		period length, used in cyclic.
+ * @chn_real_count	the real count updated from bd->mode.count
+ * @chn_count		the transfer count setuped
+ * @sdmac		sdma_channel pointer
+ * @bd			pointer of alloced bd
+ */
+struct sdma_desc {
+	unsigned int			num_bd;
+	dma_addr_t			bd_phys;
+	unsigned int			buf_tail;
+	unsigned int			buf_ptail;
+	unsigned int			period_len;
+	unsigned int			chn_real_count;
+	unsigned int			chn_count;
+	struct sdma_channel		*sdmac;
+	struct sdma_buffer_descriptor	*bd;
+};
+
 /**
  * struct sdma_channel - housekeeping for a SDMA channel
  *
@@ -305,11 +329,10 @@ struct sdma_engine;
  * @event_id0		aka dma request line
  * @event_id1		for channels that use 2 events
  * @word_size		peripheral access size
- * @buf_tail		ID of the buffer that was processed
- * @buf_ptail		ID of the previous buffer that was processed
- * @num_bd		max NUM_BD. number of descriptors currently handling
  */
 struct sdma_channel {
+	struct sdma_desc		*desc;
+	struct sdma_desc		_desc;
 	struct sdma_engine		*sdma;
 	unsigned int			channel;
 	enum dma_transfer_direction	direction;
@@ -317,12 +340,6 @@ struct sdma_channel {
 	unsigned int			event_id0;
 	unsigned int			event_id1;
 	enum dma_slave_buswidth		word_size;
-	unsigned int			buf_tail;
-	unsigned int			buf_ptail;
-	unsigned int			num_bd;
-	unsigned int			period_len;
-	struct sdma_buffer_descriptor	*bd;
-	dma_addr_t			bd_phys;
 	unsigned int			pc_from_device, pc_to_device;
 	unsigned int			device_to_device;
 	unsigned long			flags;
@@ -332,10 +349,8 @@ struct sdma_channel {
 	u32				shp_addr, per_addr;
 	struct dma_chan			chan;
 	spinlock_t			lock;
-	struct dma_async_tx_descriptor	desc;
+	struct dma_async_tx_descriptor	txdesc;
 	enum dma_status			status;
-	unsigned int			chn_count;
-	unsigned int			chn_real_count;
 	struct tasklet_struct		tasklet;
 	struct imx_dma_data		data;
 	bool				enabled;
@@ -398,6 +413,8 @@ struct sdma_engine {
 	u32				spba_start_addr;
 	u32				spba_end_addr;
 	unsigned int			irq;
+	dma_addr_t			bd0_phys;
+	struct sdma_buffer_descriptor	*bd0;
 };
 
 static struct sdma_driver_data sdma_imx31 = {
@@ -632,7 +649,7 @@ static int sdma_run_channel0(struct sdma_engine *sdma)
 static int sdma_load_script(struct sdma_engine *sdma, void *buf, int size,
 		u32 address)
 {
-	struct sdma_buffer_descriptor *bd0 = sdma->channel[0].bd;
+	struct sdma_buffer_descriptor *bd0 = sdma->bd0;
 	void *buf_virt;
 	dma_addr_t buf_phys;
 	int ret;
@@ -707,7 +724,9 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 	 * call callback function.
 	 */
 	while (1) {
-		bd = &sdmac->bd[sdmac->buf_tail];
+		struct sdma_desc *desc = sdmac->desc;
+
+		bd = &desc->bd[desc->buf_tail];
 
 		if (bd->mode.status & BD_DONE)
 			break;
@@ -723,11 +742,11 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 		 * the number of bytes present in the current buffer descriptor.
 		 */
 
-		sdmac->chn_real_count = bd->mode.count;
-		bd->mode.status |= BD_DONE;
-		bd->mode.count = sdmac->period_len;
-		sdmac->buf_ptail = sdmac->buf_tail;
-		sdmac->buf_tail = (sdmac->buf_tail + 1) % sdmac->num_bd;
+		desc->chn_real_count = bd->mode.count;
+		bd->mode.status |= BD_DONE;
+		bd->mode.count = desc->period_len;
+		desc->buf_ptail = desc->buf_tail;
+		desc->buf_tail = (desc->buf_tail + 1) % desc->num_bd;
 
 		/*
 		 * The callback is called from the interrupt context in order
@@ -736,7 +755,7 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 		 * executed.
 		 */
 
-		dmaengine_desc_get_callback_invoke(&sdmac->desc, NULL);
+		dmaengine_desc_get_callback_invoke(&sdmac->txdesc, NULL);
 
 		if (error)
 			sdmac->status = old_status;
@@ -749,17 +768,17 @@ static void mxc_sdma_handle_channel_normal(unsigned long data)
 	struct sdma_buffer_descriptor *bd;
 	int i, error = 0;
 
-	sdmac->chn_real_count = 0;
+	sdmac->desc->chn_real_count = 0;
 	/*
 	 * non loop mode. Iterate over all descriptors, collect
 	 * errors and call callback function
 	 */
-	for (i = 0; i < sdmac->num_bd; i++) {
-		bd = &sdmac->bd[i];
+	for (i = 0; i < sdmac->desc->num_bd; i++) {
+		bd = &sdmac->desc->bd[i];
 
 		if (bd->mode.status & (BD_DONE | BD_RROR))
 			error = -EIO;
-		sdmac->chn_real_count += bd->mode.count;
+		sdmac->desc->chn_real_count += bd->mode.count;
 	}
 
 	if (error)
@@ -767,9 +786,9 @@ static void mxc_sdma_handle_channel_normal(unsigned long data)
 	else
 		sdmac->status = DMA_COMPLETE;
 
-	dma_cookie_complete(&sdmac->desc);
+	dma_cookie_complete(&sdmac->txdesc);
 
-	dmaengine_desc_get_callback_invoke(&sdmac->desc, NULL);
+	dmaengine_desc_get_callback_invoke(&sdmac->txdesc, NULL);
 }
 
 static irqreturn_t sdma_int_handler(int irq, void *dev_id)
@@ -897,7 +916,7 @@ static int sdma_load_context(struct sdma_channel *sdmac)
 	int channel = sdmac->channel;
 	int load_address;
 	struct sdma_context_data *context = sdma->context;
-	struct sdma_buffer_descriptor *bd0 = sdma->channel[0].bd;
+	struct sdma_buffer_descriptor *bd0 = sdma->bd0;
 	int ret;
 	unsigned long flags;
 
@@ -1100,18 +1119,22 @@ static int sdma_set_channel_priority(struct sdma_channel *sdmac,
 static int sdma_request_channel(struct sdma_channel *sdmac)
 {
 	struct sdma_engine *sdma = sdmac->sdma;
+	struct sdma_desc *desc;
 	int channel = sdmac->channel;
 	int ret = -EBUSY;
 
-	sdmac->bd = dma_zalloc_coherent(NULL, PAGE_SIZE, &sdmac->bd_phys,
+	sdmac->desc = &sdmac->_desc;
+	desc = sdmac->desc;
+
+	desc->bd = dma_zalloc_coherent(NULL, PAGE_SIZE, &desc->bd_phys,
 					GFP_KERNEL);
-	if (!sdmac->bd) {
+	if (!desc->bd) {
 		ret = -ENOMEM;
 		goto out;
 	}
 
-	sdma->channel_control[channel].base_bd_ptr = sdmac->bd_phys;
-	sdma->channel_control[channel].current_bd_ptr = sdmac->bd_phys;
+	sdma->channel_control[channel].base_bd_ptr = desc->bd_phys;
+	sdma->channel_control[channel].current_bd_ptr = desc->bd_phys;
 
 	sdma_set_channel_priority(sdmac, MXC_SDMA_DEFAULT_PRIORITY);
 	return 0;
@@ -1176,10 +1199,10 @@ static int sdma_alloc_chan_resources(struct dma_chan *chan)
 	if (ret)
 		goto disable_clk_ahb;
 
-	dma_async_tx_descriptor_init(&sdmac->desc, chan);
-	sdmac->desc.tx_submit = sdma_tx_submit;
+	dma_async_tx_descriptor_init(&sdmac->txdesc, chan);
+	sdmac->txdesc.tx_submit = sdma_tx_submit;
 	/* txd.flags will be overwritten in prep funcs */
-	sdmac->desc.flags = DMA_CTRL_ACK;
+	sdmac->txdesc.flags = DMA_CTRL_ACK;
 
 	return 0;
 
@@ -1194,6 +1217,7 @@ static void sdma_free_chan_resources(struct dma_chan *chan)
 {
 	struct sdma_channel *sdmac = to_sdma_chan(chan);
 	struct sdma_engine *sdma = sdmac->sdma;
+	struct sdma_desc *desc = sdmac->desc;
 
 	sdma_disable_channel(chan);
 
@@ -1207,7 +1231,7 @@ static void sdma_free_chan_resources(struct dma_chan *chan)
 
 	sdma_set_channel_priority(sdmac, 0);
 
-	dma_free_coherent(NULL, PAGE_SIZE, sdmac->bd, sdmac->bd_phys);
+	dma_free_coherent(NULL, PAGE_SIZE, desc->bd, desc->bd_phys);
 
 	clk_disable(sdma->clk_ipg);
 	clk_disable(sdma->clk_ahb);
@@ -1223,6 +1247,7 @@ static struct dma_async_tx_descriptor *sdma_prep_slave_sg(
 	int ret, i, count;
 	int channel = sdmac->channel;
 	struct scatterlist *sg;
+	struct sdma_desc *desc = sdmac->desc;
 
 	if (sdmac->status == DMA_IN_PROGRESS)
 		return NULL;
@@ -1230,9 +1255,9 @@ static struct dma_async_tx_descriptor *sdma_prep_slave_sg(
 
 	sdmac->flags = 0;
 
-	sdmac->buf_tail = 0;
-	sdmac->buf_ptail = 0;
-	sdmac->chn_real_count = 0;
+	desc->buf_tail = 0;
+	desc->buf_ptail = 0;
+	desc->chn_real_count = 0;
 
 	dev_dbg(sdma->dev, "setting up %d entries for channel %d.\n",
 			sg_len, channel);
@@ -1249,9 +1274,9 @@ static struct dma_async_tx_descriptor *sdma_prep_slave_sg(
 		goto err_out;
 	}
 
-	sdmac->chn_count = 0;
+	desc->chn_count = 0;
 	for_each_sg(sgl, sg, sg_len, i) {
-		struct sdma_buffer_descriptor *bd = &sdmac->bd[i];
+		struct sdma_buffer_descriptor *bd = &desc->bd[i];
 		int param;
 
 		bd->buffer_addr = sg->dma_address;
@@ -1266,7 +1291,7 @@ static struct dma_async_tx_descriptor *sdma_prep_slave_sg(
 		}
 
 		bd->mode.count = count;
-		sdmac->chn_count += count;
+		desc->chn_count += count;
 
 		if (sdmac->word_size > DMA_SLAVE_BUSWIDTH_4_BYTES) {
 			ret = -EINVAL;
@@ -1307,10 +1332,10 @@ static struct dma_async_tx_descriptor *sdma_prep_slave_sg(
 		bd->mode.status = param;
 	}
 
-	sdmac->num_bd = sg_len;
-	sdma->channel_control[channel].current_bd_ptr = sdmac->bd_phys;
+	desc->num_bd = sg_len;
+	sdma->channel_control[channel].current_bd_ptr = desc->bd_phys;
 
-	return &sdmac->desc;
+	return &sdmac->txdesc;
 err_out:
 	sdmac->status = DMA_ERROR;
 	return NULL;
@@ -1326,6 +1351,7 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
 	int num_periods = buf_len / period_len;
 	int channel = sdmac->channel;
 	int ret, i = 0, buf = 0;
+	struct sdma_desc *desc = sdmac->desc;
 
 	dev_dbg(sdma->dev, "%s channel: %d\n", __func__, channel);
 
@@ -1334,10 +1360,10 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
 
 	sdmac->status = DMA_IN_PROGRESS;
 
-	sdmac->buf_tail = 0;
-	sdmac->buf_ptail = 0;
-	sdmac->chn_real_count = 0;
-	sdmac->period_len = period_len;
+	desc->buf_tail = 0;
+	desc->buf_ptail = 0;
+	desc->chn_real_count = 0;
+	desc->period_len = period_len;
 
 	sdmac->flags |= IMX_DMA_SG_LOOP;
 	sdmac->direction = direction;
@@ -1358,7 +1384,7 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
 	}
 
 	while (buf < buf_len) {
-		struct sdma_buffer_descriptor *bd = &sdmac->bd[i];
+		struct sdma_buffer_descriptor *bd = &desc->bd[i];
 		int param;
 
 		bd->buffer_addr = dma_addr;
@@ -1389,10 +1415,10 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
 		i++;
 	}
 
-	sdmac->num_bd = num_periods;
-	sdma->channel_control[channel].current_bd_ptr = sdmac->bd_phys;
+	desc->num_bd = num_periods;
+	sdma->channel_control[channel].current_bd_ptr = desc->bd_phys;
 
-	return &sdmac->desc;
+	return &sdmac->txdesc;
 err_out:
 	sdmac->status = DMA_ERROR;
 	return NULL;
@@ -1431,13 +1457,14 @@ static enum dma_status sdma_tx_status(struct dma_chan *chan,
 				      struct dma_tx_state *txstate)
 {
 	struct sdma_channel *sdmac = to_sdma_chan(chan);
+	struct sdma_desc *desc = sdmac->desc;
 	u32 residue;
 
 	if (sdmac->flags & IMX_DMA_SG_LOOP)
-		residue = (sdmac->num_bd - sdmac->buf_ptail) *
-			   sdmac->period_len - sdmac->chn_real_count;
+		residue = (desc->num_bd - desc->buf_ptail) *
+			   desc->period_len - desc->chn_real_count;
 	else
-		residue = sdmac->chn_count - sdmac->chn_real_count;
+		residue = desc->chn_count - desc->chn_real_count;
 
 	dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie,
 			 residue);
@@ -1661,6 +1688,8 @@ static int sdma_init(struct sdma_engine *sdma)
 	if (ret)
 		goto err_dma_alloc;
 
+	sdma->bd0 = sdma->channel[0].desc->bd;
+
 	sdma_config_ownership(&sdma->channel[0], false, true, false);
 
 	/* Set Command Channel (Channel Zero) */
-- 
2.17.1

-- 
Pengutronix e.K.                           |                             |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
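The bookkeeping that moves into struct sdma_desc is small enough to model
outside the kernel. The stand-alone program below (an illustration with made-up
example numbers, not driver code) mimics how sdma_update_channel_loop() advances
the cyclic BD ring and how the cyclic branch of sdma_tx_status() derives the
residue from the same fields:

/* Stand-alone model of the per-descriptor bookkeeping shown in the patch.
 * Build with: cc -o sdma_model sdma_model.c && ./sdma_model
 */
#include <stdio.h>

struct model_desc {
        unsigned int num_bd;            /* descriptors in the cyclic ring */
        unsigned int period_len;        /* bytes per period (cyclic mode) */
        unsigned int buf_tail;          /* next descriptor to complete */
        unsigned int buf_ptail;         /* previously completed descriptor */
        unsigned int chn_real_count;    /* bytes the hardware reported for it */
};

/* Mirrors the IMX_DMA_SG_LOOP branch of sdma_tx_status() in the patch. */
static unsigned int cyclic_residue(const struct model_desc *d)
{
        return (d->num_bd - d->buf_ptail) * d->period_len - d->chn_real_count;
}

int main(void)
{
        /* Example numbers only: a ring of 4 periods of 1024 bytes each. */
        struct model_desc d = { .num_bd = 4, .period_len = 1024 };
        unsigned int i;

        for (i = 0; i < 6; i++) {
                /* One period completes: record its byte count, remember it as
                 * ptail and advance the tail, as sdma_update_channel_loop() does. */
                d.chn_real_count = d.period_len;
                d.buf_ptail = d.buf_tail;
                d.buf_tail = (d.buf_tail + 1) % d.num_bd;
                printf("period %u done: buf_tail=%u residue=%u bytes\n",
                       i, d.buf_tail, cyclic_residue(&d));
        }
        return 0;
}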