From patchwork Thu Jan 16 12:28:50 2014
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 3498191
From: Mark Brown
To: linux-spi@vger.kernel.org
Cc: linaro-kernel@lists.linaro.org, linux-samsung-soc@vger.kernel.org, Mark Brown
Date: Thu, 16 Jan 2014 12:28:50 +0000
Message-Id: <1389875331-13692-1-git-send-email-broonie@kernel.org>
Subject: [PATCH 1/2] spi: Provide core support for DMA mapping transfers

From: Mark Brown

The process of DMA mapping buffers for SPI transfers does not vary
between devices, so it can be factored out into the core rather than
duplicated in drivers. This also allows it to be integrated with the
ongoing work on factoring out the common elements of the data path,
including more sharing of dmaengine code.

To use this, masters need to provide a can_dma() operation and ensure
that DMA channels are provided in dma_tx and dma_rx while the hardware
is prepared. The core will then ensure that the buffers are mapped for
DMA prior to calling transfer_one_message().

Currently the cleanup on error is not complete; this needs to be
improved.

Signed-off-by: Mark Brown
---
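For illustration only (not part of the patch): a minimal driver-side
sketch of the contract described above, for a hypothetical "foo"
master. The foo_* names, the FOO_DMA_MIN_BYTES threshold and the use of
dma_request_slave_channel() with "tx"/"rx" channel names are
assumptions made for the example; transfer_one_message() is sketched
after the diff below.

/*
 * Illustrative sketch, not part of this patch: a hypothetical "foo"
 * master opting in to the core DMA mapping.  Setup of bus_num,
 * num_chipselect etc. and most error handling are omitted.
 */
#include <linux/dmaengine.h>
#include <linux/platform_device.h>
#include <linux/spi/spi.h>

#define FOO_DMA_MIN_BYTES	32	/* assumed PIO/DMA cut-over point */

static int foo_spi_transfer_one_message(struct spi_master *master,
					struct spi_message *msg);

/* Tell the core which transfers it should map for DMA */
static bool foo_spi_can_dma(struct spi_master *master,
			    struct spi_device *spi,
			    struct spi_transfer *xfer)
{
	return xfer->len > FOO_DMA_MIN_BYTES;
}

/* dma_tx and dma_rx must be valid while the hardware is prepared */
static int foo_spi_prepare_hardware(struct spi_master *master)
{
	master->dma_tx = dma_request_slave_channel(master->dev.parent, "tx");
	master->dma_rx = dma_request_slave_channel(master->dev.parent, "rx");
	if (!master->dma_tx || !master->dma_rx) {
		if (master->dma_tx)
			dma_release_channel(master->dma_tx);
		if (master->dma_rx)
			dma_release_channel(master->dma_rx);
		return -ENODEV;	/* a real driver might fall back to PIO */
	}

	return 0;
}

static int foo_spi_unprepare_hardware(struct spi_master *master)
{
	dma_release_channel(master->dma_rx);
	dma_release_channel(master->dma_tx);

	return 0;
}

static int foo_spi_probe(struct platform_device *pdev)
{
	struct spi_master *master = spi_alloc_master(&pdev->dev, 0);

	if (!master)
		return -ENOMEM;

	master->can_dma = foo_spi_can_dma;
	master->prepare_transfer_hardware = foo_spi_prepare_hardware;
	master->unprepare_transfer_hardware = foo_spi_unprepare_hardware;
	master->transfer_one_message = foo_spi_transfer_one_message;

	return devm_spi_register_master(&pdev->dev, master);
}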
 drivers/spi/spi.c       | 79 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/spi/spi.h | 18 +++++++++++
 2 files changed, 97 insertions(+)

diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index a86569e1f178..314d326bce9f 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -24,6 +24,8 @@
 #include
 #include
 #include
+#include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
 #include
 #include
 #include
@@ -570,6 +572,74 @@ static void spi_set_cs(struct spi_device *spi, bool enable)
 		spi->master->set_cs(spi, !enable);
 }
 
+static int spi_dma_map_msg(struct spi_master *master, struct spi_message *msg)
+{
+	struct device *dev = master->dev.parent;
+	struct device *tx_dev, *rx_dev;
+	struct spi_transfer *xfer;
+
+	if (msg->is_dma_mapped || !master->can_dma)
+		return 0;
+
+	tx_dev = &master->dma_tx->dev->device;
+	rx_dev = &master->dma_rx->dev->device;
+
+	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+		if (!master->can_dma(master, msg->spi, xfer))
+			continue;
+
+		if (xfer->tx_buf != NULL) {
+			xfer->tx_dma = dma_map_single(tx_dev,
+						      (void *)xfer->tx_buf,
+						      xfer->len,
+						      DMA_TO_DEVICE);
+			if (dma_mapping_error(dev, xfer->tx_dma)) {
+				dev_err(dev, "dma_map_single Tx failed\n");
+				return -ENOMEM;
+			}
+		}
+
+		if (xfer->rx_buf != NULL) {
+			xfer->rx_dma = dma_map_single(rx_dev,
+						      xfer->rx_buf, xfer->len,
+						      DMA_FROM_DEVICE);
+			if (dma_mapping_error(dev, xfer->rx_dma)) {
+				dev_err(dev, "dma_map_single Rx failed\n");
+				dma_unmap_single(tx_dev, xfer->tx_dma,
+						 xfer->len, DMA_TO_DEVICE);
+				return -ENOMEM;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int spi_dma_unmap_msg(struct spi_master *master,
+			     struct spi_message *msg)
+{
+	struct spi_transfer *xfer;
+	struct device *tx_dev, *rx_dev;
+
+	if (!master->cur_msg_mapped || msg->is_dma_mapped || !master->can_dma)
+		return 0;
+
+	tx_dev = &master->dma_tx->dev->device;
+	rx_dev = &master->dma_rx->dev->device;
+
+	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+		if (!master->can_dma(master, msg->spi, xfer))
+			continue;
+
+		dma_unmap_single(rx_dev, xfer->rx_dma, xfer->len,
+				 DMA_FROM_DEVICE);
+		dma_unmap_single(tx_dev, xfer->tx_dma, xfer->len,
+				 DMA_TO_DEVICE);
+	}
+
+	return 0;
+}
+
 /*
  * spi_transfer_one_message - Default implementation of transfer_one_message()
  *
@@ -740,6 +810,13 @@ static void spi_pump_messages(struct kthread_work *work)
 		master->cur_msg_prepared = true;
 	}
 
+	ret = spi_dma_map_msg(master, master->cur_msg);
+	if (ret) {
+		master->cur_msg->status = ret;
+		spi_finalize_current_message(master);
+		return;
+	}
+
 	ret = master->transfer_one_message(master, master->cur_msg);
 	if (ret) {
 		dev_err(&master->dev,
@@ -829,6 +906,8 @@ void spi_finalize_current_message(struct spi_master *master)
 	queue_kthread_work(&master->kworker, &master->pump_messages);
 	spin_unlock_irqrestore(&master->queue_lock, flags);
 
+	spi_dma_unmap_msg(master, mesg);
+
 	if (master->cur_msg_prepared && master->unprepare_message) {
 		ret = master->unprepare_message(master, mesg);
 		if (ret) {
diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
index 21a7251d85ee..14380043491b 100644
--- a/include/linux/spi/spi.h
+++ b/include/linux/spi/spi.h
@@ -25,6 +25,8 @@
 #include
 #include
 
+struct dma_chan;
+
 /*
  * INTERFACES between SPI master-side drivers and SPI infrastructure.
  * (There's no SPI slave support for Linux yet...)
@@ -385,6 +387,17 @@ struct spi_master {
 	void			(*cleanup)(struct spi_device *spi);
 
 	/*
+	 * Used to enable core support for DMA handling, if can_dma()
+	 * exists and returns true then the transfer will be mapped
+	 * prior to transfer_one() being called.  The driver should
+	 * not modify or store xfer and dma_tx and dma_rx must be set
+	 * while the device is prepared.
+	 */
+	bool			(*can_dma)(struct spi_master *master,
+					   struct spi_device *spi,
+					   struct spi_transfer *xfer);
+
+	/*
 	 * These hooks are for drivers that want to use the generic
 	 * master transfer queueing mechanism. If these are used, the
 	 * transfer() function above must NOT be specified by the driver.
@@ -402,6 +415,7 @@ struct spi_master {
 	bool			rt;
 	bool			auto_runtime_pm;
 	bool			cur_msg_prepared;
+	bool			cur_msg_mapped;
 	struct completion	xfer_completion;
 
 	int (*prepare_transfer_hardware)(struct spi_master *master);
@@ -423,6 +437,10 @@ struct spi_master {
 
 	/* gpio chip select */
 	int			*cs_gpios;
+
+	/* DMA channels for use with core dmaengine helpers */
+	struct dma_chan		*dma_tx;
+	struct dma_chan		*dma_rx;
 };
 
 static inline void *spi_master_get_devdata(struct spi_master *master)
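For illustration only (not part of the patch): the sketch below
continues the hypothetical "foo" driver from the note above the diff
and shows how a transfer_one_message() implementation might consume the
addresses the core has already mapped. DMA completion handling, slave
channel configuration and the receive direction are trimmed to keep the
example short; the dmaengine calls shown are standard helpers, not
anything added by this patch.

/*
 * Illustrative sketch, not part of this patch: consuming the
 * pre-mapped xfer->tx_dma in a transfer_one_message() implementation.
 */
static int foo_spi_transfer_one_message(struct spi_master *master,
					struct spi_message *msg)
{
	struct spi_transfer *xfer;
	struct dma_async_tx_descriptor *desc;
	int ret = 0;

	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
		if (xfer->tx_buf && master->can_dma(master, msg->spi, xfer)) {
			/* xfer->tx_dma was filled in by spi_dma_map_msg() */
			desc = dmaengine_prep_slave_single(master->dma_tx,
							   xfer->tx_dma,
							   xfer->len,
							   DMA_MEM_TO_DEV,
							   DMA_PREP_INTERRUPT);
			if (!desc) {
				ret = -EIO;
				break;
			}
			dmaengine_submit(desc);
			dma_async_issue_pending(master->dma_tx);
			/* ... wait for the DMA completion callback ... */
		} else {
			/* PIO path for transfers can_dma() rejected */
		}

		msg->actual_length += xfer->len;
	}

	msg->status = ret;
	/* The core unmaps the buffers in spi_finalize_current_message() */
	spi_finalize_current_message(master);

	return ret;
}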