From patchwork Tue Sep 4 02:40:15 2012
X-Patchwork-Submitter: Marek Vasut
X-Patchwork-Id: 1401051
From: Marek Vasut
To: spi-devel-general@lists.sourceforge.net
Subject: [PATCH 1/4] mxs/spi: Fix issues when doing long continuous transfer
Date: Tue, 4 Sep 2012 04:40:15 +0200
Message-Id: <1346726418-2856-2-git-send-email-marex@denx.de>
In-Reply-To: <1346726418-2856-1-git-send-email-marex@denx.de>
References:
<1346726418-2856-1-git-send-email-marex@denx.de>
Cc: Marek Vasut, Fabio Estevam, Shawn Guo, Mark Brown, Chris Ball,
    linux-arm-kernel@lists.infradead.org
List-Id: Linux SPI core/device drivers discussion

When doing a long continuous transfer, e.g. from SPI flash via /dev/mtd,
the driver dies. This is caused by a bug in the DMA chaining. Rework the
DMA transfer code so that this issue no longer happens. This involves
allocating the correct number of sg-list members and creating the DMA
descriptors properly.

There is an important catch here: the data transfer descriptors must be
interleaved with PIO register write descriptors, otherwise the transfer
stalls. This could be done in a single descriptor, but a limitation of
the DMA API makes that impossible.

It turns out that in order for the SPI DMA to properly support continuous
transfers longer than 65280 bytes, some very important details were left
out of the documentation about the PIO transfer that is used.

Firstly, the XFER_SIZE register is not written once with the whole length
of the transfer; instead, each and every chained descriptor writes it with
the length of that descriptor's data buffer.

Next, unlike the demo code supplied by FSL, which writes only one PIO word
per descriptor, this does not apply when the descriptors are chained, since
the XFER_SIZE register must be written by each of them.
Therefore, it is essential to use four PIO words per descriptor: CTRL0,
CMD0, CMD1 and XFER_SIZE. CMD0 and CMD1 are written with zero, since they
do not apply here; the DMA programs the PIO words in incrementing order,
so all four must be supplied to reach XFER_SIZE.

Finally, unlike the demo code supplied by FSL, SSP_CTRL0_IGNORE_CRC must
not be set during the whole transfer, but only on the last descriptor in
the chain.

Lastly, this patch borrows code from drivers/mtd/nand/omap2.c, which
handles the case where the buffer supplied to the DMA transfer was
vmalloc()'d. With this patch, it is safe to use the /dev/mtdblockX
interface again.

Signed-off-by: Marek Vasut
Cc: Chris Ball
Cc: Fabio Estevam
Cc: Grant Likely
Cc: Mark Brown
Cc: Shawn Guo
---
 drivers/spi/spi-mxs.c | 141 ++++++++++++++++++++++++++++++-------------------
 1 file changed, 88 insertions(+), 53 deletions(-)

diff --git a/drivers/spi/spi-mxs.c b/drivers/spi/spi-mxs.c
index d49634a..3add52e 100644
--- a/drivers/spi/spi-mxs.c
+++ b/drivers/spi/spi-mxs.c
@@ -53,9 +53,9 @@
 
 #define DRIVER_NAME		"mxs-spi"
 
-#define SSP_TIMEOUT		1000	/* 1000 ms */
+/* Use 10S timeout for very long transfers, it should suffice. */
+#define SSP_TIMEOUT		10000
 
-#define SG_NUM			4
 #define SG_MAXLEN		0xff00
 
 struct mxs_spi {
@@ -219,61 +219,94 @@ static int mxs_spi_txrx_dma(struct mxs_spi *spi, int cs,
 			    int *first, int *last, int write)
 {
 	struct mxs_ssp *ssp = &spi->ssp;
-	struct dma_async_tx_descriptor *desc;
-	struct scatterlist sg[SG_NUM];
+	struct dma_async_tx_descriptor *desc = NULL;
+	const bool vmalloced_buf = is_vmalloc_addr(buf);
+	const int desc_len = vmalloced_buf ?
+				PAGE_SIZE : SG_MAXLEN;
+	const int sgs = DIV_ROUND_UP(len, desc_len);
 	int sg_count;
-	uint32_t pio = BM_SSP_CTRL0_DATA_XFER | mxs_spi_cs_to_reg(cs);
-	int ret;
-
-	if (len > SG_NUM * SG_MAXLEN) {
-		dev_err(ssp->dev, "Data chunk too big for DMA\n");
+	int min, ret;
+	uint32_t ctrl0;
+	struct page *vm_page;
+	void *sg_buf;
+	struct {
+		uint32_t		pio[4];
+		struct scatterlist	sg;
+	} *dma_xfer;
+
+	if (!len)
 		return -EINVAL;
-	}
+
+	dma_xfer = kzalloc(sizeof(*dma_xfer) * sgs, GFP_KERNEL);
+	if (!dma_xfer)
+		return -ENOMEM;
 
 	INIT_COMPLETION(spi->c);
 
+	ctrl0 = readl(ssp->base + HW_SSP_CTRL0);
+	ctrl0 |= BM_SSP_CTRL0_DATA_XFER | mxs_spi_cs_to_reg(cs);
+
 	if (*first)
-		pio |= BM_SSP_CTRL0_LOCK_CS;
-	if (*last)
-		pio |= BM_SSP_CTRL0_IGNORE_CRC;
+		ctrl0 |= BM_SSP_CTRL0_LOCK_CS;
 	if (!write)
-		pio |= BM_SSP_CTRL0_READ;
-
-	if (ssp->devid == IMX23_SSP)
-		pio |= len;
-	else
-		writel(len, ssp->base + HW_SSP_XFER_SIZE);
-
-	/* Queue the PIO register write transfer. */
-	desc = dmaengine_prep_slave_sg(ssp->dmach,
-			(struct scatterlist *)&pio,
-			1, DMA_TRANS_NONE, 0);
-	if (!desc) {
-		dev_err(ssp->dev,
-			"Failed to get PIO reg. write descriptor.\n");
-		return -EINVAL;
-	}
+		ctrl0 |= BM_SSP_CTRL0_READ;
 
 	/* Queue the DMA data transfer. */
-	sg_init_table(sg, (len / SG_MAXLEN) + 1);
-	sg_count = 0;
-	while (len) {
-		sg_set_buf(&sg[sg_count++], buf, min(len, SG_MAXLEN));
-		len -= min(len, SG_MAXLEN);
-		buf += min(len, SG_MAXLEN);
-	}
-	dma_map_sg(ssp->dev, sg, sg_count,
-		write ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
-
-	desc = dmaengine_prep_slave_sg(ssp->dmach, sg, sg_count,
-			write ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM,
-			DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
-
-	if (!desc) {
-		dev_err(ssp->dev,
-			"Failed to get DMA data write descriptor.\n");
-		ret = -EINVAL;
-		goto err;
+	for (sg_count = 0; sg_count < sgs; sg_count++) {
+		min = min(len, desc_len);
+
+		/* Prepare the transfer descriptor. */
+		if ((sg_count + 1 == sgs) && *last)
+			ctrl0 |= BM_SSP_CTRL0_IGNORE_CRC;
+
+		if (ssp->devid == IMX23_SSP)
+			ctrl0 |= min;
+
+		dma_xfer[sg_count].pio[0] = ctrl0;
+		dma_xfer[sg_count].pio[3] = min;
+
+		if (vmalloced_buf) {
+			vm_page = vmalloc_to_page(buf);
+			if (!vm_page) {
+				ret = -ENOMEM;
+				goto err_vmalloc;
+			}
+			sg_buf = page_address(vm_page) +
+				((size_t)buf & ~PAGE_MASK);
+		} else {
+			sg_buf = buf;
+		}
+
+		sg_init_one(&dma_xfer[sg_count].sg, sg_buf, min);
+		ret = dma_map_sg(ssp->dev, &dma_xfer[sg_count].sg, 1,
+			write ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
+
+		len -= min;
+		buf += min;
+
+		/* Queue the PIO register write transfer. */
+		desc = dmaengine_prep_slave_sg(ssp->dmach,
+				(struct scatterlist *)dma_xfer[sg_count].pio,
+				(ssp->devid == IMX23_SSP) ? 1 : 4,
+				DMA_TRANS_NONE,
+				sg_count ? DMA_PREP_INTERRUPT : 0);
+		if (!desc) {
+			dev_err(ssp->dev,
+				"Failed to get PIO reg. write descriptor.\n");
+			ret = -EINVAL;
+			goto err_mapped;
+		}
+
+		desc = dmaengine_prep_slave_sg(ssp->dmach,
+				&dma_xfer[sg_count].sg, 1,
+				write ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM,
+				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+
+		if (!desc) {
+			dev_err(ssp->dev,
+				"Failed to get DMA data write descriptor.\n");
+			ret = -EINVAL;
+			goto err_mapped;
+		}
 	}
 
 	/*
@@ -289,21 +322,23 @@ static int mxs_spi_txrx_dma(struct mxs_spi *spi, int cs,
 
 	ret = wait_for_completion_timeout(&spi->c,
 				msecs_to_jiffies(SSP_TIMEOUT));
-
 	if (!ret) {
 		dev_err(ssp->dev, "DMA transfer timeout\n");
 		ret = -ETIMEDOUT;
-		goto err;
+		goto err_vmalloc;
 	}
 
 	ret = 0;
 
-err:
-	for (--sg_count; sg_count >= 0; sg_count--) {
-		dma_unmap_sg(ssp->dev, &sg[sg_count], 1,
+err_vmalloc:
+	while (--sg_count >= 0) {
+err_mapped:
+		dma_unmap_sg(ssp->dev, &dma_xfer[sg_count].sg, 1,
 			write ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
 	}
 
+	kfree(dma_xfer);
+
 	return ret;
 }