From patchwork Fri Mar 30 07:22:43 2018
X-Patchwork-Submitter: Sergey Suloev <ssuloev@orpaltech.com>
X-Patchwork-Id: 10317229
From: Sergey Suloev <ssuloev@orpaltech.com>
To: Mark Brown, Maxime Ripard, Chen-Yu Tsai
Subject: [PATCH 6/6] spi: sun6i: add DMA transfers support
Date: Fri, 30 Mar 2018 10:22:43 +0300
Message-Id: <20180330072243.19368-7-ssuloev@orpaltech.com>
X-Mailer: git-send-email 2.16.2
In-Reply-To: <20180330072243.19368-1-ssuloev@orpaltech.com>
References: <20180330072243.19368-1-ssuloev@orpaltech.com>
Cc: Sergey Suloev <ssuloev@orpaltech.com>, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-spi@vger.kernel.org

DMA transfers are now available for sun6i and sun8i SoCs. The DMA mode is
used automatically whenever the requested transfer length exceeds the FIFO
depth.

Signed-off-by: Sergey Suloev <ssuloev@orpaltech.com>
---
 drivers/spi/spi-sun6i.c | 296 ++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 275 insertions(+), 21 deletions(-)

diff --git a/drivers/spi/spi-sun6i.c b/drivers/spi/spi-sun6i.c
index a6e6812..e74fa3d 100644
--- a/drivers/spi/spi-sun6i.c
+++ b/drivers/spi/spi-sun6i.c
@@ -14,6 +14,8 @@
 #include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/module.h>
@@ -55,11 +57,14 @@
 
 #define SUN6I_FIFO_CTL_REG			0x18
 #define SUN6I_FIFO_CTL_RF_RDY_TRIG_LEVEL_MASK	0xff
-#define SUN6I_FIFO_CTL_RF_RDY_TRIG_LEVEL_BITS	0
+#define SUN6I_FIFO_CTL_RF_RDY_TRIG_LEVEL_POS	0
+#define SUN6I_FIFO_CTL_RF_DRQ_EN		BIT(8)
 #define SUN6I_FIFO_CTL_RF_RST			BIT(15)
 #define SUN6I_FIFO_CTL_TF_ERQ_TRIG_LEVEL_MASK	0xff
-#define SUN6I_FIFO_CTL_TF_ERQ_TRIG_LEVEL_BITS	16
+#define SUN6I_FIFO_CTL_TF_ERQ_TRIG_LEVEL_POS	16
+#define SUN6I_FIFO_CTL_TF_DRQ_EN		BIT(24)
 #define SUN6I_FIFO_CTL_TF_RST			BIT(31)
+#define SUN6I_FIFO_CTL_DMA_DEDICATE		(BIT(9) | BIT(25))
 
 #define SUN6I_FIFO_STA_REG			0x1c
 #define SUN6I_FIFO_STA_RF_CNT_MASK		0x7f
@@ -177,6 +182,15 @@ static inline void sun6i_spi_fill_fifo(struct sun6i_spi *sspi, int len)
 	}
 }
 
+static bool sun6i_spi_can_dma(struct spi_master *master,
+			      struct spi_device *spi,
+			      struct spi_transfer *tfr)
+{
+	struct sun6i_spi *sspi = spi_master_get_devdata(master);
+
+	return tfr->len > sspi->fifo_depth;
+}
+
 static void sun6i_spi_set_cs(struct spi_device *spi, bool enable)
 {
 	struct sun6i_spi *sspi = spi_master_get_devdata(spi->master);
@@ -208,6 +222,9 @@ static size_t sun6i_spi_max_transfer_size(struct spi_device *spi)
 	struct spi_master *master = spi->master;
 	struct sun6i_spi *sspi = spi_master_get_devdata(master);
 
+	if (master->can_dma)
+		return SUN6I_MAX_XFER_SIZE;
+
 	return sspi->fifo_depth;
 }
 
@@ -277,15 +294,174 @@ static int sun6i_spi_wait_for_transfer(struct spi_device *spi,
 	return 0;
 }
 
+static void sun6i_spi_dma_callback(void *param)
+{
+	struct spi_master *master = param;
+
+	dev_dbg(&master->dev, "DMA transfer complete\n");
+	spi_finalize_current_transfer(master);
+}
+
+static int sun6i_spi_dmap_prep_tx(struct spi_master *master,
+				  struct spi_transfer *tfr,
+				  dma_cookie_t *cookie)
+{
+	struct dma_async_tx_descriptor *chan_desc = NULL;
+
+	chan_desc = dmaengine_prep_slave_sg(master->dma_tx,
+					tfr->tx_sg.sgl, tfr->tx_sg.nents,
+					DMA_TO_DEVICE,
+					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!chan_desc) {
+		dev_err(&master->dev,
+			"Couldn't prepare TX DMA slave\n");
+		return -EIO;
+	}
+
+	chan_desc->callback = sun6i_spi_dma_callback;
+	chan_desc->callback_param = master;
+
+	*cookie = dmaengine_submit(chan_desc);
+	dma_async_issue_pending(master->dma_tx);
+
+	return 0;
+}
+
+static int sun6i_spi_dmap_prep_rx(struct spi_master *master,
+				  struct spi_transfer *tfr,
+				  dma_cookie_t *cookie)
+{
+	struct dma_async_tx_descriptor *chan_desc = NULL;
+
+	chan_desc = dmaengine_prep_slave_sg(master->dma_rx,
+					tfr->rx_sg.sgl, tfr->rx_sg.nents,
+					DMA_FROM_DEVICE,
+					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!chan_desc) {
+		dev_err(&master->dev,
+			"Couldn't prepare RX DMA slave\n");
+		return -EIO;
+	}
+
+	chan_desc->callback = sun6i_spi_dma_callback;
+	chan_desc->callback_param = master;
+
+	*cookie = dmaengine_submit(chan_desc);
+	dma_async_issue_pending(master->dma_rx);
+
+	return 0;
+}
+
+static int sun6i_spi_transfer_one_dma(struct spi_device *spi,
+				      struct spi_transfer *tfr)
+{
+	struct spi_master *master = spi->master;
+	struct sun6i_spi *sspi = spi_master_get_devdata(master);
+	dma_cookie_t tx_cookie = 0, rx_cookie = 0;
+	enum dma_status status;
+	int ret;
+	u32 reg, trig_level = 0;
+
+	dev_dbg(&master->dev, "Using DMA mode for transfer\n");
+
+	reg = sun6i_spi_read(sspi, SUN6I_FIFO_CTL_REG);
+
+	if (sspi->tx_buf) {
+		ret = sun6i_spi_dmap_prep_tx(master, tfr, &tx_cookie);
+		if (ret)
+			goto out;
+
+		reg |= SUN6I_FIFO_CTL_TF_DRQ_EN;
+
+		trig_level = sspi->fifo_depth;
+		reg &= ~SUN6I_FIFO_CTL_TF_ERQ_TRIG_LEVEL_MASK;
+		reg |= (trig_level << SUN6I_FIFO_CTL_TF_ERQ_TRIG_LEVEL_POS);
+	}
+
+	if (sspi->rx_buf) {
+		ret = sun6i_spi_dmap_prep_rx(master, tfr, &rx_cookie);
+		if (ret)
+			goto out;
+
+		reg |= SUN6I_FIFO_CTL_RF_DRQ_EN;
+
+		trig_level = 1;
+		reg &= ~SUN6I_FIFO_CTL_RF_RDY_TRIG_LEVEL_MASK;
+		reg |= (trig_level << SUN6I_FIFO_CTL_RF_RDY_TRIG_LEVEL_POS);
+	}
+
+	/* Enable Dedicated DMA requests */
+	sun6i_spi_write(sspi, SUN6I_FIFO_CTL_REG,
+			reg | SUN6I_FIFO_CTL_DMA_DEDICATE);
+
+	/* Start transfer */
+	sun6i_spi_set(sspi, SUN6I_TFR_CTL_REG, SUN6I_TFR_CTL_XCH);
+
+	ret = sun6i_spi_wait_for_transfer(spi, tfr);
+	if (ret)
+		goto out;
+
+	if (sspi->tx_buf && (status = dma_async_is_tx_complete(master->dma_tx,
+					tx_cookie, NULL, NULL))) {
+		dev_warn(&master->dev,
+			 "DMA returned completion status of: %s\n",
+			 status == DMA_ERROR ? "error" : "in progress");
+	}
+	if (sspi->rx_buf && (status = dma_async_is_tx_complete(master->dma_rx,
+					rx_cookie, NULL, NULL))) {
+		dev_warn(&master->dev,
+			 "DMA returned completion status of: %s\n",
+			 status == DMA_ERROR ? "error" : "in progress");
"error" : "in progress"); + } + +out: + if (ret) { + dev_dbg(&master->dev, "DMA channel teardown\n"); + if (sspi->tx_buf) + dmaengine_terminate_sync(master->dma_tx); + if (sspi->rx_buf) + dmaengine_terminate_sync(master->dma_rx); + } + + sun6i_spi_drain_fifo(sspi, sspi->fifo_depth); + + sun6i_spi_write(sspi, SUN6I_INT_CTL_REG, 0); + + return ret; +} + +static int sun6i_spi_transfer_one_pio(struct spi_device *spi, + struct spi_transfer *tfr) +{ + struct spi_master *master = spi->master; + struct sun6i_spi *sspi = spi_master_get_devdata(master); + int ret; + + /* Disable DMA requests */ + sun6i_spi_write(sspi, SUN6I_FIFO_CTL_REG, 0); + + sun6i_spi_fill_fifo(sspi, sspi->fifo_depth); + + /* Enable transfer complete IRQ */ + sun6i_spi_set(sspi, SUN6I_INT_CTL_REG, SUN6I_INT_CTL_TC); + + /* Start transfer */ + sun6i_spi_set(sspi, SUN6I_TFR_CTL_REG, SUN6I_TFR_CTL_XCH); + + ret = sun6i_spi_wait_for_transfer(spi, tfr); + + sun6i_spi_write(sspi, SUN6I_INT_CTL_REG, 0); + + return ret; +} + static int sun6i_spi_transfer_one(struct spi_master *master, struct spi_device *spi, struct spi_transfer *tfr) { struct sun6i_spi *sspi = spi_master_get_devdata(master); - unsigned int mclk_rate, div, timeout; - unsigned int start, end, tx_time; + unsigned int mclk_rate, div; unsigned int tx_len = 0; - int ret = 0; u32 reg; /* A zero length transfer never finishes if programmed @@ -293,10 +469,15 @@ static int sun6i_spi_transfer_one(struct spi_master *master, if (!tfr->len) return 0; - /* Don't support transfer larger than the FIFO */ - if (tfr->len > sspi->fifo_depth) + if (tfr->len > SUN6I_MAX_XFER_SIZE) return -EMSGSIZE; + if (!master->can_dma) { + /* Don't support transfer larger than the FIFO */ + if (tfr->len > sspi->fifo_depth) + return -EMSGSIZE; + } + sspi->tx_buf = tfr->tx_buf; sspi->rx_buf = tfr->rx_buf; sspi->len = tfr->len; @@ -353,21 +534,10 @@ static int sun6i_spi_transfer_one(struct spi_master *master, sun6i_spi_write(sspi, SUN6I_BURST_CTL_CNT_REG, SUN6I_BURST_CTL_CNT_STC(tx_len)); - /* Fill the TX FIFO */ - sun6i_spi_fill_fifo(sspi, sspi->fifo_depth); - - /* Enable transfer complete interrupt */ - sun6i_spi_set(sspi, SUN6I_INT_CTL_REG, SUN6I_INT_CTL_TC); - - /* Start the transfer */ - sun6i_spi_set(sspi, SUN6I_TFR_CTL_REG, SUN6I_TFR_CTL_XCH); - - /* Wait for completion */ - ret = sun6i_spi_wait_for_transfer(spi, tfr); - - sun6i_spi_write(sspi, SUN6I_INT_CTL_REG, 0); + if (sun6i_spi_can_dma(master, spi, tfr)) + return sun6i_spi_transfer_one_dma(spi, tfr); - return ret; + return sun6i_spi_transfer_one_pio(spi, tfr); } static irqreturn_t sun6i_spi_handler(int irq, void *dev_id) @@ -389,6 +559,76 @@ static irqreturn_t sun6i_spi_handler(int irq, void *dev_id) return IRQ_NONE; } +static int sun6i_spi_dma_setup(struct platform_device *pdev, + struct resource *res) +{ + struct spi_master *master = platform_get_drvdata(pdev); + struct dma_slave_config dma_sconf; + int ret; + + master->dma_tx = dma_request_slave_channel_reason(&pdev->dev, "tx"); + if (IS_ERR(master->dma_tx)) { + dev_err(&pdev->dev, "Unable to acquire DMA TX channel\n"); + ret = PTR_ERR(master->dma_tx); + goto out; + } + + dma_sconf.direction = DMA_MEM_TO_DEV; + dma_sconf.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; + dma_sconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; + dma_sconf.dst_addr = res->start + SUN6I_TXDATA_REG; + dma_sconf.src_maxburst = 1; + dma_sconf.dst_maxburst = 1; + + ret = dmaengine_slave_config(master->dma_tx, &dma_sconf); + if (ret) { + dev_err(&pdev->dev, "Unable to configure DMA TX slave\n"); + goto err_rel_tx; + } + + 
+	master->dma_rx = dma_request_slave_channel_reason(&pdev->dev, "rx");
+	if (IS_ERR(master->dma_rx)) {
+		dev_err(&pdev->dev, "Unable to acquire DMA RX channel\n");
+		ret = PTR_ERR(master->dma_rx);
+		goto err_rel_tx;
+	}
+
+	dma_sconf.direction = DMA_DEV_TO_MEM;
+	dma_sconf.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+	dma_sconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+	dma_sconf.src_addr = res->start + SUN6I_RXDATA_REG;
+	dma_sconf.src_maxburst = 1;
+	dma_sconf.dst_maxburst = 1;
+
+	ret = dmaengine_slave_config(master->dma_rx, &dma_sconf);
+	if (ret) {
+		dev_err(&pdev->dev, "Unable to configure DMA RX slave\n");
+		goto err_rel_rx;
+	}
+
+	/* don't set can_dma unless both channels are valid */
+	master->can_dma = sun6i_spi_can_dma;
+
+	return 0;
+
+err_rel_rx:
+	dma_release_channel(master->dma_rx);
+err_rel_tx:
+	dma_release_channel(master->dma_tx);
+out:
+	master->dma_tx = NULL;
+	master->dma_rx = NULL;
+	return ret;
+}
+
+static void sun6i_spi_dma_release(struct spi_master *master)
+{
+	if (master->can_dma) {
+		dma_release_channel(master->dma_rx);
+		dma_release_channel(master->dma_tx);
+	}
+}
+
 static int sun6i_spi_runtime_resume(struct device *dev)
 {
 	struct spi_master *master = dev_get_drvdata(dev);
@@ -510,6 +750,15 @@ static int sun6i_spi_probe(struct platform_device *pdev)
 		goto err_free_master;
 	}
 
+	ret = sun6i_spi_dma_setup(pdev, res);
+	if (ret) {
+		if (ret == -EPROBE_DEFER) {
+			/* wait for the dma driver to load */
+			goto err_free_master;
+		}
+		dev_warn(&pdev->dev, "DMA transfer not supported\n");
+	}
+
 	/*
 	 * This wake-up/shutdown pattern is to be able to have the
 	 * device woken up, even if runtime_pm is disabled
@@ -536,14 +785,19 @@ err_pm_disable:
 	pm_runtime_disable(&pdev->dev);
 	sun6i_spi_runtime_suspend(&pdev->dev);
 err_free_master:
+	sun6i_spi_dma_release(master);
 	spi_master_put(master);
 	return ret;
 }
 
 static int sun6i_spi_remove(struct platform_device *pdev)
 {
+	struct spi_master *master = platform_get_drvdata(pdev);
+
 	pm_runtime_force_suspend(&pdev->dev);
 
+	sun6i_spi_dma_release(master);
+
 	return 0;
 }
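Note on usage (illustrative only, not part of the patch): with this change, any
transfer longer than the controller FIFO is routed through the DMA path by the
can_dma() callback, while shorter transfers stay on PIO. A minimal, hypothetical
client-side sketch (the function name, device, and 512-byte length are assumed
for illustration) would look like this, using the standard spi_sync_transfer()
API:

	/* Hypothetical client example -- not part of this patch. */
	#include <linux/slab.h>
	#include <linux/spi/spi.h>

	static int example_read_block(struct spi_device *spi)
	{
		struct spi_transfer xfer = { };
		void *buf;
		int ret;

		/* kmalloc'd memory is DMA-safe; stack buffers are not */
		buf = kzalloc(512, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		xfer.rx_buf = buf;
		xfer.len = 512;	/* larger than the FIFO, so the DMA path is taken */

		ret = spi_sync_transfer(spi, &xfer, 1);

		kfree(buf);
		return ret;
	}

When the DMA channels are available, transfers up to SUN6I_MAX_XFER_SIZE are
accepted; without them the driver still rejects anything larger than the FIFO
with -EMSGSIZE, as before.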