From patchwork Mon Sep 22 04:27:28 2014
X-Patchwork-Submitter: Andy Gross
X-Patchwork-Id: 4945431
From: Andy Gross
To: Mark Brown
Cc: Bjorn Andersson, linux-arm-msm@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-spi@vger.kernel.org,
 "Ivan T. Ivanov", Andy Gross, linux-arm-kernel@lists.infradead.org
Subject: [PATCH] spi: qup: Fix incorrect block transfers
Date: Sun, 21 Sep 2014 23:27:28 -0500
Message-Id: <1411360048-3388-1-git-send-email-agross@codeaurora.org>

This patch fixes a number of errors with the QUP block transfer mode.  The
errors manifested themselves as input underruns, output overruns, and
timed-out transactions.

The block mode does not require the priming that occurs in FIFO mode.  At
the moment the QUP is placed into the RUN state, the QUP may immediately
raise an interrupt if the request is a write.
Therefore, there is no need to prime the pump.

In addition, block transfers require that whole blocks of data are read or
written at a time.  The last block of data that completes a transaction may
contain less than a full block's worth of data.  Each block of data results
in an input/output service interrupt, accompanied by the corresponding
input/output block flag being set.

Additional block reads/writes require clearing of the service flag.  It is
OK to check for additional blocks of data in the ISR, but every block that
is transferred has to be acked.  Imbalanced acks cause completed transactions
to return early with interrupts still pending that have yet to be acked, and
those pending interrupts can affect the next transaction.

Signed-off-by: Andy Gross
---
 drivers/spi/spi-qup.c | 194 +++++++++++++++++++++++++++++++++++--------------
 1 file changed, 141 insertions(+), 53 deletions(-)

diff --git a/drivers/spi/spi-qup.c b/drivers/spi/spi-qup.c
index 9f83d29..9c4c745 100644
--- a/drivers/spi/spi-qup.c
+++ b/drivers/spi/spi-qup.c
@@ -80,6 +80,8 @@
 #define QUP_IO_M_MODE_BAM		3
 
 /* QUP_OPERATIONAL fields */
+#define QUP_OP_IN_BLOCK_READ_REQ	BIT(13)
+#define QUP_OP_OUT_BLOCK_WRITE_REQ	BIT(12)
 #define QUP_OP_MAX_INPUT_DONE_FLAG	BIT(11)
 #define QUP_OP_MAX_OUTPUT_DONE_FLAG	BIT(10)
 #define QUP_OP_IN_SERVICE_FLAG		BIT(9)
@@ -143,6 +145,7 @@ struct spi_qup {
 	int			tx_bytes;
 	int			rx_bytes;
 	int			qup_v1;
+	int			mode;
 };
 
@@ -198,30 +201,16 @@ static int spi_qup_set_state(struct spi_qup *controller, u32 state)
 	return 0;
 }
 
-
-static void spi_qup_fifo_read(struct spi_qup *controller,
-			    struct spi_transfer *xfer)
+static void qup_fill_read_buffer(struct spi_qup *controller,
+	struct spi_transfer *xfer, u32 data)
 {
 	u8 *rx_buf = xfer->rx_buf;
-	u32 word, state;
-	int idx, shift, w_size;
-
-	w_size = controller->w_size;
-
-	while (controller->rx_bytes < xfer->len) {
-
-		state = readl_relaxed(controller->base + QUP_OPERATIONAL);
-		if (0 == (state & QUP_OP_IN_FIFO_NOT_EMPTY))
-			break;
+	int idx, shift;
+	int read_len = min_t(int, xfer->len - controller->rx_bytes,
+				controller->w_size);
 
-		word = readl_relaxed(controller->base + QUP_INPUT_FIFO);
-
-		if (!rx_buf) {
-			controller->rx_bytes += w_size;
-			continue;
-		}
-
-		for (idx = 0; idx < w_size; idx++, controller->rx_bytes++) {
+	if (rx_buf)
+		for (idx = 0; idx < read_len; idx++) {
 			/*
 			 * The data format depends on bytes per SPI word:
 			 * 4 bytes: 0x12345678
@@ -229,40 +218,129 @@ static void spi_qup_fifo_read(struct spi_qup *controller,
 			 * 1 byte : 0x00000012
 			 */
 			shift = BITS_PER_BYTE;
-			shift *= (w_size - idx - 1);
-			rx_buf[controller->rx_bytes] = word >> shift;
+			shift *= (controller->w_size - idx - 1);
+			rx_buf[controller->rx_bytes + idx] = data >> shift;
 		}
-	}
+
+	controller->rx_bytes += read_len;
 }
 
-static void spi_qup_fifo_write(struct spi_qup *controller,
-			    struct spi_transfer *xfer)
+static void qup_prepare_write_data(struct spi_qup *controller,
+	struct spi_transfer *xfer, u32 *data)
 {
 	const u8 *tx_buf = xfer->tx_buf;
-	u32 word, state, data;
-	int idx, w_size;
+	u32 val;
+	int idx;
+	int write_len = min_t(int, xfer->len - controller->tx_bytes,
+				controller->w_size);
 
-	w_size = controller->w_size;
+	*data = 0;
 
-	while (controller->tx_bytes < xfer->len) {
+	if (tx_buf)
+		for (idx = 0; idx < write_len; idx++) {
+			val = tx_buf[controller->tx_bytes + idx];
+			*data |= val << (BITS_PER_BYTE * (3 - idx));
+		}
 
-		state = readl_relaxed(controller->base + QUP_OPERATIONAL);
-		if (state & QUP_OP_OUT_FIFO_FULL)
-			break;
+	controller->tx_bytes += write_len;
+}
 
-		word = 0;
-		for (idx = 0; idx < w_size; idx++, controller->tx_bytes++) {
+static void spi_qup_service_block(struct spi_qup *controller,
+	struct spi_transfer *xfer, bool is_read)
+{
+	u32 data, words_per_blk, num_words, ack_flag, op_flag;
+	int i;
+
+	if (is_read) {
+		op_flag = QUP_OP_IN_BLOCK_READ_REQ;
+		ack_flag = QUP_OP_IN_SERVICE_FLAG;
+		num_words = DIV_ROUND_UP(xfer->len - controller->rx_bytes,
+					controller->w_size);
+		words_per_blk = controller->in_blk_sz >> 2;
+	} else {
+		op_flag = QUP_OP_OUT_BLOCK_WRITE_REQ;
+		ack_flag = QUP_OP_OUT_SERVICE_FLAG;
+		num_words = DIV_ROUND_UP(xfer->len - controller->tx_bytes,
+					controller->w_size);
+		words_per_blk = controller->out_blk_sz >> 2;
+	}
 
-		if (!tx_buf) {
-			controller->tx_bytes += w_size;
-			break;
+	do {
+		/* ACK by clearing service flag */
+		writel_relaxed(ack_flag, controller->base + QUP_OPERATIONAL);
+
+		/* transfer up to a block size of data in a single pass */
+		for (i = 0; num_words && i < words_per_blk; i++, num_words--) {
+
+			if (is_read) {
+				/* read data and fill up rx buffer */
+				data = readl_relaxed(controller->base +
+							QUP_INPUT_FIFO);
+				qup_fill_read_buffer(controller, xfer, data);
+			} else {
+				/* swizzle the bytes for output and write out */
+				qup_prepare_write_data(controller, xfer, &data);
+				writel_relaxed(data,
+					controller->base + QUP_OUTPUT_FIFO);
 			}
-
-			data = tx_buf[controller->tx_bytes];
-			word |= data << (BITS_PER_BYTE * (3 - idx));
 		}
 
-		writel_relaxed(word, controller->base + QUP_OUTPUT_FIFO);
+		/* check to see if next block is ready */
+		if (!(readl_relaxed(controller->base + QUP_OPERATIONAL) &
+			op_flag))
+			break;
+
+	} while (num_words);
+
+	/*
+	 * Due to extra stickiness of the QUP_OP_IN_SERVICE_FLAG during block
+	 * reads, it has to be cleared again at the very end
+	 */
+	if (is_read && (readl_relaxed(controller->base + QUP_OPERATIONAL) &
+		QUP_OP_MAX_INPUT_DONE_FLAG))
+		writel_relaxed(ack_flag, controller->base + QUP_OPERATIONAL);
+
+}
+
+
+static void spi_qup_fifo_read(struct spi_qup *controller,
+			   struct spi_transfer *xfer)
+{
+	u32 data;
+
+	/* clear service request */
+	writel_relaxed(QUP_OP_IN_SERVICE_FLAG,
+			controller->base + QUP_OPERATIONAL);
+
+	while (controller->rx_bytes < xfer->len) {
+		if (!(readl_relaxed(controller->base + QUP_OPERATIONAL) &
+			QUP_OP_IN_FIFO_NOT_EMPTY))
+			break;
+
+		data = readl_relaxed(controller->base + QUP_INPUT_FIFO);
+
+		qup_fill_read_buffer(controller, xfer, data);
+	}
+
+}
+
+static void spi_qup_fifo_write(struct spi_qup *controller,
+			    struct spi_transfer *xfer)
+{
+	u32 data;
+
+	/* clear service request */
+	writel_relaxed(QUP_OP_OUT_SERVICE_FLAG,
+		controller->base + QUP_OPERATIONAL);
+
+	while (controller->tx_bytes < xfer->len) {
+
+		if (readl_relaxed(controller->base + QUP_OPERATIONAL) &
+				QUP_OP_OUT_FIFO_FULL)
+			break;
+
+		qup_prepare_write_data(controller, xfer, &data);
+		writel_relaxed(data, controller->base + QUP_OUTPUT_FIFO);
 	}
 }
@@ -285,9 +363,9 @@ static irqreturn_t spi_qup_qup_irq(int irq, void *dev_id)
 
 	writel_relaxed(qup_err, controller->base + QUP_ERROR_FLAGS);
 	writel_relaxed(spi_err, controller->base + SPI_ERROR_FLAGS);
-	writel_relaxed(opflags, controller->base + QUP_OPERATIONAL);
 
 	if (!xfer) {
+		writel_relaxed(opflags, controller->base + QUP_OPERATIONAL);
 		dev_err_ratelimited(controller->dev, "unexpected irq %08x %08x %08x\n",
 				    qup_err, spi_err, opflags);
 		return IRQ_HANDLED;
@@ -315,11 +393,19 @@ static irqreturn_t spi_qup_qup_irq(int irq, void *dev_id)
 		error = -EIO;
 	}
 
-	if (opflags & QUP_OP_IN_SERVICE_FLAG)
-		spi_qup_fifo_read(controller, xfer);
+	if (opflags & QUP_OP_IN_SERVICE_FLAG) {
+		if (opflags & QUP_OP_IN_BLOCK_READ_REQ)
+			spi_qup_service_block(controller, xfer, 1);
+		else
+			spi_qup_fifo_read(controller, xfer);
+	}
 
-	if (opflags & QUP_OP_OUT_SERVICE_FLAG)
-		spi_qup_fifo_write(controller, xfer);
+	if (opflags & QUP_OP_OUT_SERVICE_FLAG) {
+		if (opflags & QUP_OP_OUT_BLOCK_WRITE_REQ)
+			spi_qup_service_block(controller, xfer, 0);
+		else
+			spi_qup_fifo_write(controller, xfer);
+	}
 
 	spin_lock_irqsave(&controller->lock, flags);
 	controller->error = error;
@@ -337,7 +423,7 @@ static irqreturn_t spi_qup_qup_irq(int irq, void *dev_id)
 static int spi_qup_io_config(struct spi_device *spi, struct spi_transfer *xfer)
 {
 	struct spi_qup *controller = spi_master_get_devdata(spi->master);
-	u32 config, iomode, mode;
+	u32 config, iomode;
 	int ret, n_words, w_size;
 
 	if (spi->mode & SPI_LOOP && xfer->len > controller->in_fifo_sz) {
@@ -368,14 +454,14 @@ static int spi_qup_io_config(struct spi_device *spi, struct spi_transfer *xfer)
 	controller->w_size = w_size;
 
 	if (n_words <= (controller->in_fifo_sz / sizeof(u32))) {
-		mode = QUP_IO_M_MODE_FIFO;
+		controller->mode = QUP_IO_M_MODE_FIFO;
 		writel_relaxed(n_words, controller->base + QUP_MX_READ_CNT);
 		writel_relaxed(n_words, controller->base + QUP_MX_WRITE_CNT);
 		/* must be zero for FIFO */
 		writel_relaxed(0, controller->base + QUP_MX_INPUT_CNT);
 		writel_relaxed(0, controller->base + QUP_MX_OUTPUT_CNT);
 	} else {
-		mode = QUP_IO_M_MODE_BLOCK;
+		controller->mode = QUP_IO_M_MODE_BLOCK;
 		writel_relaxed(n_words, controller->base + QUP_MX_INPUT_CNT);
 		writel_relaxed(n_words, controller->base + QUP_MX_OUTPUT_CNT);
 		/* must be zero for BLOCK and BAM */
@@ -387,8 +473,8 @@ static int spi_qup_io_config(struct spi_device *spi, struct spi_transfer *xfer)
 
 	/* Set input and output transfer mode */
 	iomode &= ~(QUP_IO_M_INPUT_MODE_MASK | QUP_IO_M_OUTPUT_MODE_MASK);
 	iomode &= ~(QUP_IO_M_PACK_EN | QUP_IO_M_UNPACK_EN);
-	iomode |= (mode << QUP_IO_M_OUTPUT_MODE_MASK_SHIFT);
-	iomode |= (mode << QUP_IO_M_INPUT_MODE_MASK_SHIFT);
+	iomode |= (controller->mode << QUP_IO_M_OUTPUT_MODE_MASK_SHIFT);
+	iomode |= (controller->mode << QUP_IO_M_INPUT_MODE_MASK_SHIFT);
 
 	writel_relaxed(iomode, controller->base + QUP_IO_M_MODES);
@@ -462,7 +548,8 @@ static int spi_qup_transfer_one(struct spi_master *master,
 		goto exit;
 	}
 
-	spi_qup_fifo_write(controller, xfer);
+	if (controller->mode == QUP_IO_M_MODE_FIFO)
+		spi_qup_fifo_write(controller, xfer);
 
 	if (spi_qup_set_state(controller, QUP_STATE_RUN)) {
 		dev_warn(controller->dev, "cannot set EXECUTE state\n");
@@ -478,6 +565,7 @@ exit:
 
 	if (!ret)
 		ret = controller->error;
 	spin_unlock_irqrestore(&controller->lock, flags);
+
 	return ret;
 }
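
As an aside, the ack-balancing rule described in the commit message can be
illustrated with a minimal, self-contained user-space mock. The MOCK_*
constants and mock_* helpers below are hypothetical stand-ins for the
QUP_OPERATIONAL bits, not the driver's real API; the point is only that
every block the service loop transfers is matched by exactly one clearing
of the service flag.

	/* Illustrative sketch only -- not part of the patch. */
	#include <stdint.h>
	#include <stdio.h>

	#define MOCK_IN_SERVICE_FLAG	(1u << 9)	/* hypothetical stand-ins for */
	#define MOCK_IN_BLOCK_READ_REQ	(1u << 13)	/* the QUP_OPERATIONAL bits   */

	static int mock_blocks_pending;	/* blocks the mock "hardware" has queued */

	static uint32_t mock_read_operational(void)
	{
		/* the block-read request stays asserted until every block is acked */
		return mock_blocks_pending ?
			(MOCK_IN_SERVICE_FLAG | MOCK_IN_BLOCK_READ_REQ) : 0;
	}

	static void mock_ack_service_flag(void)
	{
		/* clearing the service flag consumes exactly one block request */
		if (mock_blocks_pending)
			mock_blocks_pending--;
	}

	int main(void)
	{
		int blocks = 0, acks = 0;

		mock_blocks_pending = 3;	/* e.g. 48 bytes with a 16-byte block size */

		/* ISR-style loop: one ack per block actually transferred */
		while (mock_read_operational() & MOCK_IN_BLOCK_READ_REQ) {
			mock_ack_service_flag();
			acks++;
			/* a real handler would move one block of FIFO words here */
			blocks++;
		}

		printf("blocks transferred = %d, acks issued = %d\n", blocks, acks);
		return 0;
	}

Built with any C compiler, this prints "blocks transferred = 3, acks issued
= 3". If the loop skipped an ack, a block request would remain pending after
the transaction completed and would leak into the next one, which is the
failure mode the patch avoids.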