From patchwork Sat Aug 3 10:10:00 2019
X-Patchwork-Submitter: Lukas Wunner
X-Patchwork-Id: 11074359
Message-Id: <7b67f8d95154d25b3a360f580f1ca61f83f79a6f.1564825752.git.lukas@wunner.de>
From: Lukas Wunner
Date: Sat, 3 Aug 2019 12:10:00 +0200
Subject: [PATCH 01/10] dmaengine: bcm2835: Allow reusable descriptors
To: Mark Brown, Vinod Koul, Stefan Wahren, linux-spi@vger.kernel.org, dmaengine@vger.kernel.org, linux-rpi-kernel@lists.infradead.org, bcm-kernel-feedback-list@broadcom.com
Cc: Eric Anholt, Nuno Sa, Martin Sperl, Noralf Tronnes, Robert Jarzmik, Florian Kauer, Florian Fainelli, Ray Jui, Scott Branden

The DMA engine API requires DMA drivers to explicitly allow that descriptors are prepared once and reused multiple times. Only a single driver makes use of this functionality so far (pxa_dma.c, to speed up pxa_camera.c).

We're about to add another use case for reusable descriptors in the BCM2835 SPI driver, so allow that in the BCM2835 DMA driver.
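
(For context, a minimal client-side sketch of the descriptor-reuse flow this enables -- illustrative only, not part of the patch. The channel, buffer address and lengths are placeholders; only standard dmaengine helpers from <linux/dmaengine.h> are used.)

    static int example_reuse_cyclic(struct dma_chan *chan, dma_addr_t buf,
                                    size_t buf_len, size_t period_len)
    {
            struct dma_async_tx_descriptor *desc;
            dma_cookie_t cookie;

            /* prepare the descriptor once */
            desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
                                             DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
            if (!desc)
                    return -EIO;

            /* returns -EPERM unless the DMA driver sets ddev->descriptor_reuse */
            if (dmaengine_desc_set_reuse(desc))
                    return -EPERM;

            /* submit and start; the same descriptor may be resubmitted later */
            cookie = dmaengine_submit(desc);
            if (dma_submit_error(cookie))
                    return -EIO;
            dma_async_issue_pending(chan);

            /* ... when no longer needed: stop the channel, then free it ... */
            dmaengine_terminate_sync(chan);
            dmaengine_desc_free(desc);      /* reusable descriptors need explicit free */
            return 0;
    }

dmaengine_desc_set_reuse() only succeeds once the DMA driver advertises descriptor_reuse, which is exactly what this patch adds for bcm2835-dma.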
Tested-by: Nuno Sá Signed-off-by: Lukas Wunner Cc: Martin Sperl Cc: Florian Kauer Cc: Robert Jarzmik Acked-by: Vinod Koul --- drivers/dma/bcm2835-dma.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c index 8101ff2f05c1..523c507ad69e 100644 --- a/drivers/dma/bcm2835-dma.c +++ b/drivers/dma/bcm2835-dma.c @@ -907,6 +907,7 @@ static int bcm2835_dma_probe(struct platform_device *pdev) od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV) | BIT(DMA_MEM_TO_MEM); od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; + od->ddev.descriptor_reuse = true; od->ddev.dev = &pdev->dev; INIT_LIST_HEAD(&od->ddev.channels); From patchwork Sat Aug 3 10:10:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 11074355 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7A0291398 for ; Sat, 3 Aug 2019 10:33:17 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6C40B288C7 for ; Sat, 3 Aug 2019 10:33:17 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 5F8FC288E6; Sat, 3 Aug 2019 10:33:17 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E8C22288C7 for ; Sat, 3 Aug 2019 10:33:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388475AbfHCKdO (ORCPT ); Sat, 3 Aug 2019 06:33:14 -0400 Received: from mailout3.hostsharing.net ([176.9.242.54]:55441 "EHLO mailout3.hostsharing.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388423AbfHCKdO (ORCPT ); Sat, 3 Aug 2019 06:33:14 -0400 Received: from h08.hostsharing.net (h08.hostsharing.net [IPv6:2a01:37:1000::53df:5f1c:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.hostsharing.net", Issuer "COMODO RSA Domain Validation Secure Server CA" (not verified)) by mailout3.hostsharing.net (Postfix) with ESMTPS id 4663B1033A716; Sat, 3 Aug 2019 12:33:12 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id D4ADB618F189; Sat, 3 Aug 2019 12:33:11 +0200 (CEST) X-Mailbox-Line: From 037b83d2ae8aa92448b9d647436012c06f4c0561 Mon Sep 17 00:00:00 2001 Message-Id: <037b83d2ae8aa92448b9d647436012c06f4c0561.1564825752.git.lukas@wunner.de> In-Reply-To: References: From: Lukas Wunner Date: Sat, 3 Aug 2019 12:10:00 +0200 Subject: [PATCH 02/10] dmaengine: bcm2835: Allow cyclic transactions without interrupt MIME-Version: 1.0 To: Mark Brown , Vinod Koul , Stefan Wahren , linux-spi@vger.kernel.org, dmaengine@vger.kernel.org, linux-rpi-kernel@lists.infradead.org, bcm-kernel-feedback-list@broadcom.com Cc: Eric Anholt , Nuno Sa , Martin Sperl , Noralf Tronnes , Robert Jarzmik , Florian Kauer , Florian Fainelli , Ray Jui , Scott Branden Sender: dmaengine-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org 
X-Virus-Scanned: ClamAV using ClamSMTP The BCM2835 DMA driver currently requests an interrupt from the controller regardless whether or not the client has passed in the DMA_PREP_INTERRUPT flag. This causes unnecessary overhead for cyclic transactions which do not need an interrupt after each period. We're about to add such a use case, namely cyclic clearing of the SPI controller's RX FIFO, so amend the DMA driver to request an interrupt only if DMA_PREP_INTERRUPT was passed in. Ignore the period_len for such transactions and set it to the buffer length to make the driver's calculations work. Tested-by: Nuno Sá Signed-off-by: Lukas Wunner Cc: Martin Sperl Cc: Florian Kauer Acked-by: Vinod Koul --- drivers/dma/bcm2835-dma.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c index 523c507ad69e..a65514fcb7f2 100644 --- a/drivers/dma/bcm2835-dma.c +++ b/drivers/dma/bcm2835-dma.c @@ -691,7 +691,7 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic( struct bcm2835_desc *d; dma_addr_t src, dst; u32 info = BCM2835_DMA_WAIT_RESP; - u32 extra = BCM2835_DMA_INT_EN; + u32 extra = 0; size_t max_len = bcm2835_dma_max_frame_length(c); size_t frames; @@ -707,6 +707,11 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic( return NULL; } + if (flags & DMA_PREP_INTERRUPT) + extra |= BCM2835_DMA_INT_EN; + else + period_len = buf_len; + /* * warn if buf_len is not a multiple of period_len - this may leed * to unexpected latencies for interrupts and thus audiable clicks @@ -778,7 +783,10 @@ static int bcm2835_dma_terminate_all(struct dma_chan *chan) /* stop DMA activity */ if (c->desc) { - vchan_terminate_vdesc(&c->desc->vd); + if (c->desc->vd.tx.flags & DMA_PREP_INTERRUPT) + vchan_terminate_vdesc(&c->desc->vd); + else + vchan_vdesc_fini(&c->desc->vd); c->desc = NULL; bcm2835_dma_abort(c); } From patchwork Sat Aug 3 10:10:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 11074377 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D5DEC1398 for ; Sat, 3 Aug 2019 10:43:52 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C5A5128894 for ; Sat, 3 Aug 2019 10:43:52 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B9C86288EC; Sat, 3 Aug 2019 10:43:52 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5129D28894 for ; Sat, 3 Aug 2019 10:43:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388848AbfHCKnw (ORCPT ); Sat, 3 Aug 2019 06:43:52 -0400 Received: from mailout1.hostsharing.net ([83.223.95.204]:46141 "EHLO mailout1.hostsharing.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388809AbfHCKnv (ORCPT ); Sat, 3 Aug 2019 06:43:51 -0400 X-Greylist: delayed 471 seconds by postgrey-1.27 at vger.kernel.org; Sat, 03 Aug 2019 06:43:49 EDT Received: from h08.hostsharing.net (h08.hostsharing.net [83.223.95.28]) 
(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.hostsharing.net", Issuer "COMODO RSA Domain Validation Secure Server CA" (not verified)) by mailout1.hostsharing.net (Postfix) with ESMTPS id 0B87C101A7220; Sat, 3 Aug 2019 12:35:57 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id B9C94618F189; Sat, 3 Aug 2019 12:35:56 +0200 (CEST) X-Mailbox-Line: From 69faaeb305582ab7dca65fa08635edfa1a39b3fa Mon Sep 17 00:00:00 2001 Message-Id: <69faaeb305582ab7dca65fa08635edfa1a39b3fa.1564825752.git.lukas@wunner.de> In-Reply-To: References: From: Lukas Wunner Date: Sat, 3 Aug 2019 12:10:00 +0200 Subject: [PATCH 03/10] spi: Guarantee cacheline alignment of driver-private data MIME-Version: 1.0 To: Mark Brown , Vinod Koul , Stefan Wahren , linux-spi@vger.kernel.org, dmaengine@vger.kernel.org, linux-rpi-kernel@lists.infradead.org, bcm-kernel-feedback-list@broadcom.com Cc: Eric Anholt , Nuno Sa , Martin Sperl , Noralf Tronnes , Robert Jarzmik , Florian Kauer , Florian Fainelli , Ray Jui , Scott Branden Sender: dmaengine-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP __spi_alloc_controller() uses a single allocation to accommodate struct spi_controller and the driver-private data, but places the latter behind the former. This order does not guarantee cacheline alignment of the driver-private data. It does guarantee cacheline alignment of struct spi_controller but the structure doesn't make any use of that property. Reverse the order. A forthcoming commit leverages this to grant DMA access to driver-private data of the BCM2835 SPI master. An alternative approach would be to round up struct spi_controller to cacheline size, at the expense of some wasted memory. Tested-by: Nuno Sá Signed-off-by: Lukas Wunner --- drivers/spi/spi.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c index 8e83c9567353..8f692f657690 100644 --- a/drivers/spi/spi.c +++ b/drivers/spi/spi.c @@ -2071,9 +2071,11 @@ static inline void acpi_register_spi_devices(struct spi_controller *ctlr) {} static void spi_controller_release(struct device *dev) { struct spi_controller *ctlr; + void *devdata; ctlr = container_of(dev, struct spi_controller, dev); - kfree(ctlr); + devdata = spi_controller_get_devdata(ctlr); + kfree(devdata); } static struct class spi_master_class = { @@ -2187,8 +2189,10 @@ extern struct class spi_slave_class; /* dummy */ * __spi_alloc_controller - allocate an SPI master or slave controller * @dev: the controller, possibly using the platform_bus * @size: how much zeroed driver-private data to allocate; the pointer to this - * memory is in the driver_data field of the returned device, - * accessible with spi_controller_get_devdata(). + * memory is in the driver_data field of the returned device, accessible + * with spi_controller_get_devdata(); the memory is cacheline aligned; + * drivers granting DMA access to portions of their private data need to + * round up @size using ALIGN(size, dma_get_cache_alignment()). 
* @slave: flag indicating whether to allocate an SPI master (false) or SPI * slave (true) controller * Context: can sleep @@ -2210,14 +2214,16 @@ struct spi_controller *__spi_alloc_controller(struct device *dev, unsigned int size, bool slave) { struct spi_controller *ctlr; + void *devdata; if (!dev) return NULL; - ctlr = kzalloc(size + sizeof(*ctlr), GFP_KERNEL); - if (!ctlr) + devdata = kzalloc(size + sizeof(*ctlr), GFP_KERNEL); + if (!devdata) return NULL; + ctlr = devdata + size; device_initialize(&ctlr->dev); ctlr->bus_num = -1; ctlr->num_chipselect = 1; @@ -2228,7 +2234,7 @@ struct spi_controller *__spi_alloc_controller(struct device *dev, ctlr->dev.class = &spi_master_class; ctlr->dev.parent = dev; pm_suspend_ignore_children(&ctlr->dev, true); - spi_controller_set_devdata(ctlr, &ctlr[1]); + spi_controller_set_devdata(ctlr, devdata); return ctlr; } From patchwork Sat Aug 3 10:10:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 11074363 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4380B17E0 for ; Sat, 3 Aug 2019 10:38:44 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3502E288AE for ; Sat, 3 Aug 2019 10:38:44 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1F799288E5; Sat, 3 Aug 2019 10:38:44 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id AA3E0288E5 for ; Sat, 3 Aug 2019 10:38:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388796AbfHCKin (ORCPT ); Sat, 3 Aug 2019 06:38:43 -0400 Received: from mailout3.hostsharing.net ([176.9.242.54]:58061 "EHLO mailout3.hostsharing.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727123AbfHCKin (ORCPT ); Sat, 3 Aug 2019 06:38:43 -0400 Received: from h08.hostsharing.net (h08.hostsharing.net [83.223.95.28]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.hostsharing.net", Issuer "COMODO RSA Domain Validation Secure Server CA" (not verified)) by mailout3.hostsharing.net (Postfix) with ESMTPS id 1E0861033B348; Sat, 3 Aug 2019 12:38:40 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id B4394618F189; Sat, 3 Aug 2019 12:38:39 +0200 (CEST) X-Mailbox-Line: From 08a6fa2163fcbeacd50e647072d1eb39bafb8d31 Mon Sep 17 00:00:00 2001 Message-Id: <08a6fa2163fcbeacd50e647072d1eb39bafb8d31.1564825752.git.lukas@wunner.de> In-Reply-To: References: From: Lukas Wunner Date: Sat, 3 Aug 2019 12:10:00 +0200 Subject: [PATCH 04/10] spi: bcm2835: Drop dma_pending flag MIME-Version: 1.0 To: Mark Brown , Vinod Koul , Stefan Wahren , linux-spi@vger.kernel.org, dmaengine@vger.kernel.org, linux-rpi-kernel@lists.infradead.org, bcm-kernel-feedback-list@broadcom.com Cc: Eric Anholt , Nuno Sa , Martin Sperl , Noralf Tronnes , Robert Jarzmik , Florian Kauer , Florian Fainelli 
, Ray Jui , Scott Branden Sender: dmaengine-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The BCM2835 SPI driver uses a flag to keep track of whether a DMA transfer is in progress. The flag is used to avoid terminating DMA channels multiple times if a transfer finishes orderly while simultaneously the SPI core invokes the ->handle_err() callback because the transfer took too long. However terminating DMA channels multiple times is perfectly fine, so the flag is unnecessary for this particular purpose. The flag is also used to avoid invoking bcm2835_spi_undo_prologue() multiple times under this race condition. However multiple *concurrent* invocations can no longer happen since commit 2527704d8411 ("spi: bcm2835: Synchronize with callback on DMA termination") because the ->handle_err() callback now uses the _sync() variant when terminating DMA channels. The only raison d'être of the flag is therefore that bcm2835_spi_undo_prologue() cannot cope with multiple *sequential* invocations. Achieve that by setting tx_prologue to 0 at the end of the function. Subsequent invocations thus become no-ops. With that, the dma_pending flag becomes unnecessary, so drop it. Tested-by: Nuno Sá Signed-off-by: Lukas Wunner Cc: Martin Sperl Cc: Noralf Trønnes --- drivers/spi/spi-bcm2835.c | 23 ++++++++--------------- 1 file changed, 8 insertions(+), 15 deletions(-) diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c index 4b89e0a04ffd..2bf725e909fd 100644 --- a/drivers/spi/spi-bcm2835.c +++ b/drivers/spi/spi-bcm2835.c @@ -92,7 +92,6 @@ MODULE_PARM_DESC(polling_limit_us, * @rx_prologue: bytes received without DMA if first RX sglist entry's * length is not a multiple of 4 (to overcome hardware limitation) * @tx_spillover: whether @tx_prologue spills over to second TX sglist entry - * @dma_pending: whether a DMA transfer is in progress * @debugfs_dir: the debugfs directory - neede to remove debugfs when * unloading the module * @count_transfer_polling: count of how often polling mode is used @@ -115,7 +114,6 @@ struct bcm2835_spi { int tx_prologue; int rx_prologue; unsigned int tx_spillover; - unsigned int dma_pending; struct dentry *debugfs_dir; u64 count_transfer_polling; @@ -539,6 +537,8 @@ static void bcm2835_spi_undo_prologue(struct bcm2835_spi *bs) sg_dma_address(&tfr->tx_sg.sgl[1]) -= 4; sg_dma_len(&tfr->tx_sg.sgl[1]) += 4; } + + bs->tx_prologue = 0; } static void bcm2835_spi_dma_done(void *data) @@ -554,10 +554,8 @@ static void bcm2835_spi_dma_done(void *data) * is called the tx-dma must have finished - can't get to this * situation otherwise... 
*/ - if (cmpxchg(&bs->dma_pending, true, false)) { - dmaengine_terminate_async(ctlr->dma_tx); - bcm2835_spi_undo_prologue(bs); - } + dmaengine_terminate_async(ctlr->dma_tx); + bcm2835_spi_undo_prologue(bs); /* and mark as completed */; complete(&ctlr->xfer_completion); @@ -632,9 +630,6 @@ static int bcm2835_spi_transfer_one_dma(struct spi_controller *ctlr, /* start TX early */ dma_async_issue_pending(ctlr->dma_tx); - /* mark as dma pending */ - bs->dma_pending = 1; - /* set the DMA length */ bcm2835_wr(bs, BCM2835_SPI_DLEN, bs->tx_len); @@ -650,7 +645,6 @@ static int bcm2835_spi_transfer_one_dma(struct spi_controller *ctlr, if (ret) { /* need to reset on errors */ dmaengine_terminate_sync(ctlr->dma_tx); - bs->dma_pending = false; goto err_reset_hw; } @@ -915,11 +909,10 @@ static void bcm2835_spi_handle_err(struct spi_controller *ctlr, struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr); /* if an error occurred and we have an active dma, then terminate */ - if (cmpxchg(&bs->dma_pending, true, false)) { - dmaengine_terminate_sync(ctlr->dma_tx); - dmaengine_terminate_sync(ctlr->dma_rx); - bcm2835_spi_undo_prologue(bs); - } + dmaengine_terminate_sync(ctlr->dma_tx); + dmaengine_terminate_sync(ctlr->dma_rx); + bcm2835_spi_undo_prologue(bs); + /* and reset */ bcm2835_spi_reset_hw(ctlr); } From patchwork Sat Aug 3 10:10:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 11074365 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B547417E0 for ; Sat, 3 Aug 2019 10:40:53 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9C6A7286DE for ; Sat, 3 Aug 2019 10:40:53 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8726E286CF; Sat, 3 Aug 2019 10:40:53 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 21A2F286AA for ; Sat, 3 Aug 2019 10:40:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388807AbfHCKkw (ORCPT ); Sat, 3 Aug 2019 06:40:52 -0400 Received: from mailout3.hostsharing.net ([176.9.242.54]:41163 "EHLO mailout3.hostsharing.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727123AbfHCKkw (ORCPT ); Sat, 3 Aug 2019 06:40:52 -0400 Received: from h08.hostsharing.net (h08.hostsharing.net [IPv6:2a01:37:1000::53df:5f1c:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.hostsharing.net", Issuer "COMODO RSA Domain Validation Secure Server CA" (not verified)) by mailout3.hostsharing.net (Postfix) with ESMTPS id 4EC491034071B; Sat, 3 Aug 2019 12:40:49 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id E146B618F189; Sat, 3 Aug 2019 12:40:48 +0200 (CEST) X-Mailbox-Line: From edb004dff4af6106f6bfcb89e1a96391e96eb857 Mon Sep 17 00:00:00 2001 Message-Id: In-Reply-To: References: From: Lukas Wunner Date: Sat, 3 
Aug 2019 12:10:00 +0200 Subject: [PATCH 05/10] spi: bcm2835: Work around DONE bit erratum MIME-Version: 1.0 To: Mark Brown , Vinod Koul , Stefan Wahren , linux-spi@vger.kernel.org, dmaengine@vger.kernel.org, linux-rpi-kernel@lists.infradead.org, bcm-kernel-feedback-list@broadcom.com Cc: Eric Anholt , Nuno Sa , Martin Sperl , Noralf Tronnes , Robert Jarzmik , Florian Kauer , Florian Fainelli , Ray Jui , Scott Branden Sender: dmaengine-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Commit 3bd7f6589f67 ("spi: bcm2835: Overcome sglist entry length limitation") amended the BCM2835 SPI driver with support for DMA transfers whose buffers are not aligned to 4 bytes and require more than one sglist entry. When testing this feature with upcoming commits to speed up TX-only and RX-only transfers, I noticed that SPI transmission sometimes breaks. A function introduced by the commit, bcm2835_spi_transfer_prologue(), performs one or two PIO transmissions as a prologue to the actual DMA transmission. It turns out that the breakage goes away if the DONE bit in the CS register is set when ending such a PIO transmission. The DONE bit signifies emptiness of the TX FIFO. According to the spec, the bit is of type RO, so writing it should never have any effect. Perhaps the spec is wrong and the bit is actually of type RW1C. E.g. the I2C controller on the BCM2835 does have an RW1C DONE bit which needs to be cleared by the driver. Another, possibly more likely explanation is that it's a hardware erratum since the issue does not occur consistently. Either way, amend bcm2835_spi_transfer_prologue() to always write the DONE bit. Usually a transmission is ended by bcm2835_spi_reset_hw(). If the transmission was successful, the TX FIFO is empty and thus the DONE bit is set when bcm2835_spi_reset_hw() reads the CS register. The bit is then written back to the register, so we happen to do the right thing. However if DONE is not set, e.g. because transmission is aborted with a non-empty TX FIFO, the bit won't be written by bcm2835_spi_reset_hw() and it seems possible that transmission might subsequently break. To be on the safe side, likewise amend bcm2835_spi_reset_hw() to always write the bit. Tested-by: Nuno Sá Signed-off-by: Lukas Wunner Cc: Martin Sperl Cc: Noralf Trønnes --- drivers/spi/spi-bcm2835.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c index 2bf725e909fd..f6391c302363 100644 --- a/drivers/spi/spi-bcm2835.c +++ b/drivers/spi/spi-bcm2835.c @@ -317,6 +317,13 @@ static void bcm2835_spi_reset_hw(struct spi_controller *ctlr) BCM2835_SPI_CS_INTD | BCM2835_SPI_CS_DMAEN | BCM2835_SPI_CS_TA); + /* + * Transmission sometimes breaks unless the DONE bit is written at the + * end of every transfer. The spec says it's a RO bit. Either the + * spec is wrong and the bit is actually of type RW1C, or it's a + * hardware erratum. 
+ */ + cs |= BCM2835_SPI_CS_DONE; /* and reset RX/TX FIFOS */ cs |= BCM2835_SPI_CS_CLEAR_RX | BCM2835_SPI_CS_CLEAR_TX; @@ -475,7 +482,9 @@ static void bcm2835_spi_transfer_prologue(struct spi_controller *ctlr, bcm2835_wr_fifo_count(bs, bs->rx_prologue); bcm2835_wait_tx_fifo_empty(bs); bcm2835_rd_fifo_count(bs, bs->rx_prologue); - bcm2835_spi_reset_hw(ctlr); + bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_CLEAR_RX + | BCM2835_SPI_CS_CLEAR_TX + | BCM2835_SPI_CS_DONE); dma_sync_single_for_device(ctlr->dma_rx->device->dev, sg_dma_address(&tfr->rx_sg.sgl[0]), @@ -496,7 +505,8 @@ static void bcm2835_spi_transfer_prologue(struct spi_controller *ctlr, | BCM2835_SPI_CS_DMAEN); bcm2835_wr_fifo_count(bs, tx_remaining); bcm2835_wait_tx_fifo_empty(bs); - bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_CLEAR_TX); + bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_CLEAR_TX + | BCM2835_SPI_CS_DONE); } if (likely(!bs->tx_spillover)) { From patchwork Sat Aug 3 10:10:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 11074369 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A0E291398 for ; Sat, 3 Aug 2019 10:41:54 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8BA7D288E5 for ; Sat, 3 Aug 2019 10:41:54 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 784C4288F4; Sat, 3 Aug 2019 10:41:54 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D2BA028894 for ; Sat, 3 Aug 2019 10:41:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388832AbfHCKlx (ORCPT ); Sat, 3 Aug 2019 06:41:53 -0400 Received: from mailout3.hostsharing.net ([176.9.242.54]:54533 "EHLO mailout3.hostsharing.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388809AbfHCKlx (ORCPT ); Sat, 3 Aug 2019 06:41:53 -0400 Received: from h08.hostsharing.net (h08.hostsharing.net [IPv6:2a01:37:1000::53df:5f1c:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.hostsharing.net", Issuer "COMODO RSA Domain Validation Secure Server CA" (not verified)) by mailout3.hostsharing.net (Postfix) with ESMTPS id 2D88B1034071B; Sat, 3 Aug 2019 12:41:50 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id BAD30618F189; Sat, 3 Aug 2019 12:41:49 +0200 (CEST) X-Mailbox-Line: From 3f471dcb5da331d0efb831d6e3117e0c127ebea7 Mon Sep 17 00:00:00 2001 Message-Id: <3f471dcb5da331d0efb831d6e3117e0c127ebea7.1564825752.git.lukas@wunner.de> In-Reply-To: References: From: Lukas Wunner Date: Sat, 3 Aug 2019 12:10:00 +0200 Subject: [PATCH 06/10] spi: bcm2835: Cache CS register value for ->prepare_message() MIME-Version: 1.0 To: Mark Brown , Vinod Koul , Stefan Wahren , linux-spi@vger.kernel.org, dmaengine@vger.kernel.org, linux-rpi-kernel@lists.infradead.org, 
bcm-kernel-feedback-list@broadcom.com Cc: Eric Anholt , Nuno Sa , Martin Sperl , Noralf Tronnes , Robert Jarzmik , Florian Kauer , Florian Fainelli , Ray Jui , Scott Branden Sender: dmaengine-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The BCM2835 SPI driver needs to set up the clock polarity in its ->prepare_message() hook before spi_transfer_one_message() asserts chip select to avoid a gratuitous clock signal edge (cf. commit acace73df2c1 ("spi: bcm2835: set up spi-mode before asserting cs-gpio")). Precalculate the CS register value (which selects the clock polarity) once in ->setup() and use that cached value in ->prepare_message() and ->transfer_one(). This avoids one MMIO read per message and one per transfer, yielding a small latency improvement. Additionally, a forthcoming commit will use the precalculated value to derive the register value for clearing the RX FIFO, which will eliminate the need for an RX dummy buffer when performing TX-only DMA transfers. Tested-by: Nuno Sá Signed-off-by: Lukas Wunner Cc: Martin Sperl Cc: Noralf Trønnes --- drivers/spi/spi-bcm2835.c | 47 ++++++++++++++++++++++----------------- 1 file changed, 27 insertions(+), 20 deletions(-) diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c index f6391c302363..fcf203c30455 100644 --- a/drivers/spi/spi-bcm2835.c +++ b/drivers/spi/spi-bcm2835.c @@ -66,6 +66,7 @@ #define BCM2835_SPI_FIFO_SIZE 64 #define BCM2835_SPI_FIFO_SIZE_3_4 48 #define BCM2835_SPI_DMA_MIN_LENGTH 96 +#define BCM2835_SPI_NUM_CS 3 /* raise as necessary */ #define BCM2835_SPI_MODE_BITS (SPI_CPOL | SPI_CPHA | SPI_CS_HIGH \ | SPI_NO_CS | SPI_3WIRE) @@ -92,6 +93,8 @@ MODULE_PARM_DESC(polling_limit_us, * @rx_prologue: bytes received without DMA if first RX sglist entry's * length is not a multiple of 4 (to overcome hardware limitation) * @tx_spillover: whether @tx_prologue spills over to second TX sglist entry + * @prepare_cs: precalculated CS register value for ->prepare_message() + * (uses slave-specific clock polarity and phase settings) * @debugfs_dir: the debugfs directory - neede to remove debugfs when * unloading the module * @count_transfer_polling: count of how often polling mode is used @@ -114,6 +117,7 @@ struct bcm2835_spi { int tx_prologue; int rx_prologue; unsigned int tx_spillover; + u32 prepare_cs[BCM2835_SPI_NUM_CS]; struct dentry *debugfs_dir; u64 count_transfer_polling; @@ -816,7 +820,7 @@ static int bcm2835_spi_transfer_one(struct spi_controller *ctlr, struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr); unsigned long spi_hz, clk_hz, cdiv, spi_used_hz; unsigned long hz_per_byte, byte_limit; - u32 cs = bcm2835_rd(bs, BCM2835_SPI_CS); + u32 cs = bs->prepare_cs[spi->chip_select]; /* set clock */ spi_hz = tfr->speed_hz; @@ -841,15 +845,6 @@ static int bcm2835_spi_transfer_one(struct spi_controller *ctlr, if (spi->mode & SPI_3WIRE && tfr->rx_buf && tfr->rx_buf != ctlr->dummy_rx) cs |= BCM2835_SPI_CS_REN; - else - cs &= ~BCM2835_SPI_CS_REN; - - /* - * The driver always uses software-controlled GPIO Chip Select. - * Set the hardware-controlled native Chip Select to an invalid - * value to prevent it from interfering. 
- */ - cs |= BCM2835_SPI_CS_CS_10 | BCM2835_SPI_CS_CS_01; /* set transmit buffers and length */ bs->tx_buf = tfr->tx_buf; @@ -886,7 +881,6 @@ static int bcm2835_spi_prepare_message(struct spi_controller *ctlr, { struct spi_device *spi = msg->spi; struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr); - u32 cs = bcm2835_rd(bs, BCM2835_SPI_CS); int ret; if (ctlr->can_dma) { @@ -901,14 +895,11 @@ static int bcm2835_spi_prepare_message(struct spi_controller *ctlr, return ret; } - cs &= ~(BCM2835_SPI_CS_CPOL | BCM2835_SPI_CS_CPHA); - - if (spi->mode & SPI_CPOL) - cs |= BCM2835_SPI_CS_CPOL; - if (spi->mode & SPI_CPHA) - cs |= BCM2835_SPI_CS_CPHA; - - bcm2835_wr(bs, BCM2835_SPI_CS, cs); + /* + * Set up clock polarity before spi_transfer_one_message() asserts + * chip select to avoid a gratuitous clock signal edge. + */ + bcm2835_wr(bs, BCM2835_SPI_CS, bs->prepare_cs[spi->chip_select]); return 0; } @@ -934,8 +925,24 @@ static int chip_match_name(struct gpio_chip *chip, void *data) static int bcm2835_spi_setup(struct spi_device *spi) { + struct bcm2835_spi *bs = spi_controller_get_devdata(spi->controller); int err; struct gpio_chip *chip; + u32 cs; + + /* + * Precalculate SPI slave's CS register value for ->prepare_message(): + * The driver always uses software-controlled GPIO chip select, hence + * set the hardware-controlled native chip select to an invalid value + * to prevent it from interfering. + */ + cs = BCM2835_SPI_CS_CS_10 | BCM2835_SPI_CS_CS_01; + if (spi->mode & SPI_CPOL) + cs |= BCM2835_SPI_CS_CPOL; + if (spi->mode & SPI_CPHA) + cs |= BCM2835_SPI_CS_CPHA; + bs->prepare_cs[spi->chip_select] = cs; + /* * sanity checking the native-chipselects */ @@ -994,7 +1001,7 @@ static int bcm2835_spi_probe(struct platform_device *pdev) ctlr->mode_bits = BCM2835_SPI_MODE_BITS; ctlr->bits_per_word_mask = SPI_BPW_MASK(8); - ctlr->num_chipselect = 3; + ctlr->num_chipselect = BCM2835_SPI_NUM_CS; ctlr->setup = bcm2835_spi_setup; ctlr->transfer_one = bcm2835_spi_transfer_one; ctlr->handle_err = bcm2835_spi_handle_err; From patchwork Sat Aug 3 10:10:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 11074373 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BA7651395 for ; Sat, 3 Aug 2019 10:43:33 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A836D28894 for ; Sat, 3 Aug 2019 10:43:33 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 9A6BB288F4; Sat, 3 Aug 2019 10:43:33 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3534128894 for ; Sat, 3 Aug 2019 10:43:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388852AbfHCKnc (ORCPT ); Sat, 3 Aug 2019 06:43:32 -0400 Received: from mailout2.hostsharing.net ([83.223.78.233]:47897 "EHLO mailout2.hostsharing.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388809AbfHCKnb (ORCPT ); Sat, 3 Aug 2019 06:43:31 -0400 Received: from h08.hostsharing.net 
(h08.hostsharing.net [IPv6:2a01:37:1000::53df:5f1c:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.hostsharing.net", Issuer "COMODO RSA Domain Validation Secure Server CA" (not verified)) by mailout2.hostsharing.net (Postfix) with ESMTPS id 6F34E10189B9A; Sat, 3 Aug 2019 12:43:26 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id 200C1618F189; Sat, 3 Aug 2019 12:43:26 +0200 (CEST) X-Mailbox-Line: From e264f405f6fd528f860f028d0e16e6d8f9d004be Mon Sep 17 00:00:00 2001 Message-Id: In-Reply-To: References: From: Lukas Wunner Date: Sat, 3 Aug 2019 12:10:00 +0200 Subject: [PATCH 07/10] spi: bcm2835: Speed up TX-only DMA transfers by clearing RX FIFO MIME-Version: 1.0 To: Mark Brown , Vinod Koul , Stefan Wahren , linux-spi@vger.kernel.org, dmaengine@vger.kernel.org, linux-rpi-kernel@lists.infradead.org, bcm-kernel-feedback-list@broadcom.com Cc: Eric Anholt , Nuno Sa , Martin Sperl , Noralf Tronnes , Robert Jarzmik , Florian Kauer , Florian Fainelli , Ray Jui , Scott Branden Sender: dmaengine-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The BCM2835 SPI driver currently sets the SPI_CONTROLLER_MUST_RX flag. When performing a TX-only transfer, this flag causes the SPI core to allocate and DMA-map a dummy buffer into which the RX FIFO contents are copied. The dummy buffer is necessary because the chip is not capable of disabling the receiver or automatically throwing away received data. Not reading the RX FIFO isn't an option either since transmission is halted once it's full. Avoid the overhead induced by the dummy buffer by preallocating a reusable DMA transaction which cyclically clears the RX FIFO. The transaction requires very little CPU time to submit and generates no interrupts while running. Specifics are provided in kerneldoc comments. With a ks8851 Ethernet chip attached to the SPI controller, I am seeing a 30 us reduction in ping time with this commit (1.819 ms vs. 1.849 ms, average of 100,000 packets) as well as a 2% reduction in CPU time (75:08 vs. 76:39 for transmission of 5 GByte over the SPI bus). The commit uses the TX DMA interrupt to signal completion of a transfer. This interrupt is raised once all bytes have been written to the TX FIFO and it is then necessary to busy-wait for the TX FIFO to become empty before the transfer can be finalized. As an alternative approach, I have explored using the SPI controller's DONE interrupt to detect completion. This interrupt is signaled when the TX FIFO becomes empty, avoiding the need to busy-wait. However latency deteriorates compared to the present commit and surprisingly, CPU time is slightly higher as well: It turns out that in 45% of the cases, no busy-waiting is needed at all and in 76% of the cases, less than 10 busy-wait iterations are sufficient for the TX FIFO to drain. This was measured on an RT kernel. On a vanilla kernel, wakeup latency is worse and thus fewer iterations are needed. The measurements were made with an SPI clock of 20 MHz, they may differ slightly for slower or faster clock speeds. Previously we always used the RX DMA interrupt to signal completion of a transfer. Using the TX DMA interrupt now introduces a race condition: TX DMA is always started before RX DMA so that bytes are already clocked out while RX DMA is still being set up. 
But if a TX-only transfer is very short, then the TX DMA interrupt may occur before RX DMA is set up. If the interrupt happens to occur on the same CPU, setup of RX DMA may even be delayed until after the interrupt was handled. I've solved this by having the TX DMA callback clear the RX FIFO while busy-waiting for the TX FIFO to drain, thus avoiding a dependency on setup of RX DMA. Additionally, I am using a lock-free mechanism with two flags, tx_dma_active and rx_dma_active plus memory barriers to terminate RX DMA either by the TX DMA callback or immediately after setting it up, whichever wins the race. I've explored an alternative approach which temporarily disables the TX DMA callback until RX DMA has been set up (using tasklet_disable(), local_bh_disable() or local_irq_save()), but the performance was minimally worse. [Nathan Chancellor contributed a DMA mapping fixup for an early version of this commit, hence his Signed-off-by.] Tested-by: Nuno Sá Signed-off-by: Nathan Chancellor Signed-off-by: Lukas Wunner Cc: Martin Sperl Cc: Noralf Trønnes Cc: Robert Jarzmik --- drivers/spi/spi-bcm2835.c | 241 ++++++++++++++++++++++++++++++++++---- 1 file changed, 218 insertions(+), 23 deletions(-) diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c index fcf203c30455..9630a25c0304 100644 --- a/drivers/spi/spi-bcm2835.c +++ b/drivers/spi/spi-bcm2835.c @@ -104,6 +104,16 @@ MODULE_PARM_DESC(polling_limit_us, * These are counted as well in @count_transfer_polling and * @count_transfer_irq * @count_transfer_dma: count how often dma mode is used + * @chip_select: SPI slave currently selected + * (used by bcm2835_spi_dma_tx_done() to write @clear_rx_cs) + * @tx_dma_active: whether a TX DMA descriptor is in progress + * @rx_dma_active: whether a RX DMA descriptor is in progress + * (used by bcm2835_spi_dma_tx_done() to handle a race) + * @clear_rx_desc: preallocated RX DMA descriptor used for TX-only transfers + * (cyclically clears RX FIFO by writing @clear_rx_cs to CS register) + * @clear_rx_addr: bus address of @clear_rx_cs + * @clear_rx_cs: precalculated CS register value to clear RX FIFO + * (uses slave-specific clock polarity and phase settings) */ struct bcm2835_spi { void __iomem *regs; @@ -124,6 +134,13 @@ struct bcm2835_spi { u64 count_transfer_irq; u64 count_transfer_irq_after_polling; u64 count_transfer_dma; + + u8 chip_select; + unsigned int tx_dma_active; + unsigned int rx_dma_active; + struct dma_async_tx_descriptor *clear_rx_desc[BCM2835_SPI_NUM_CS]; + dma_addr_t clear_rx_addr; + u32 clear_rx_cs[BCM2835_SPI_NUM_CS] ____cacheline_aligned; }; #if defined(CONFIG_DEBUG_FS) @@ -460,7 +477,7 @@ static void bcm2835_spi_transfer_prologue(struct spi_controller *ctlr, if (!sg_is_last(&tfr->tx_sg.sgl[0])) bs->tx_prologue = sg_dma_len(&tfr->tx_sg.sgl[0]) & 3; - if (!sg_is_last(&tfr->rx_sg.sgl[0])) { + if (bs->rx_buf && !sg_is_last(&tfr->rx_sg.sgl[0])) { bs->rx_prologue = sg_dma_len(&tfr->rx_sg.sgl[0]) & 3; if (bs->rx_prologue > bs->tx_prologue) { @@ -555,7 +572,13 @@ static void bcm2835_spi_undo_prologue(struct bcm2835_spi *bs) bs->tx_prologue = 0; } -static void bcm2835_spi_dma_done(void *data) +/** + * bcm2835_spi_dma_rx_done() - callback for DMA RX channel + * @data: SPI master controller + * + * Used for bidirectional and RX-only transfers. + */ +static void bcm2835_spi_dma_rx_done(void *data) { struct spi_controller *ctlr = data; struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr); @@ -569,14 +592,61 @@ static void bcm2835_spi_dma_done(void *data) * situation otherwise... 
*/ dmaengine_terminate_async(ctlr->dma_tx); + bs->tx_dma_active = false; + bs->rx_dma_active = false; bcm2835_spi_undo_prologue(bs); /* and mark as completed */; complete(&ctlr->xfer_completion); } +/** + * bcm2835_spi_dma_tx_done() - callback for DMA TX channel + * @data: SPI master controller + * + * Used for TX-only transfers. + */ +static void bcm2835_spi_dma_tx_done(void *data) +{ + struct spi_controller *ctlr = data; + struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr); + + /* busy-wait for TX FIFO to empty */ + while (!(bcm2835_rd(bs, BCM2835_SPI_CS) & BCM2835_SPI_CS_DONE)) + bcm2835_wr(bs, BCM2835_SPI_CS, + bs->clear_rx_cs[bs->chip_select]); + + bs->tx_dma_active = false; + smp_wmb(); + + /* + * In case of a very short transfer, RX DMA may not have been + * issued yet. The onus is then on bcm2835_spi_transfer_one_dma() + * to terminate it immediately after issuing. + */ + if (cmpxchg(&bs->rx_dma_active, true, false)) + dmaengine_terminate_async(ctlr->dma_rx); + + bcm2835_spi_undo_prologue(bs); + bcm2835_spi_reset_hw(ctlr); + complete(&ctlr->xfer_completion); +} + +/** + * bcm2835_spi_prepare_sg() - prepare and submit DMA descriptor for sglist + * @ctlr: SPI master controller + * @spi: SPI slave + * @tfr: SPI transfer + * @bs: BCM2835 SPI controller + * @is_tx: whether to submit DMA descriptor for TX or RX sglist + * + * Prepare and submit a DMA descriptor for the TX or RX sglist of @tfr. + * Return 0 on success or a negative error number. + */ static int bcm2835_spi_prepare_sg(struct spi_controller *ctlr, + struct spi_device *spi, struct spi_transfer *tfr, + struct bcm2835_spi *bs, bool is_tx) { struct dma_chan *chan; @@ -593,8 +663,7 @@ static int bcm2835_spi_prepare_sg(struct spi_controller *ctlr, chan = ctlr->dma_tx; nents = tfr->tx_sg.nents; sgl = tfr->tx_sg.sgl; - flags = 0 /* no tx interrupt */; - + flags = tfr->rx_buf ? 0 : DMA_PREP_INTERRUPT; } else { dir = DMA_DEV_TO_MEM; chan = ctlr->dma_rx; @@ -607,10 +676,17 @@ static int bcm2835_spi_prepare_sg(struct spi_controller *ctlr, if (!desc) return -EINVAL; - /* set callback for rx */ + /* + * Completion is signaled by the RX channel for bidirectional and + * RX-only transfers; else by the TX channel for TX-only transfers. + */ if (!is_tx) { - desc->callback = bcm2835_spi_dma_done; + desc->callback = bcm2835_spi_dma_rx_done; + desc->callback_param = ctlr; + } else if (!tfr->rx_buf) { + desc->callback = bcm2835_spi_dma_tx_done; desc->callback_param = ctlr; + bs->chip_select = spi->chip_select; } /* submit it to DMA-engine */ @@ -619,12 +695,42 @@ static int bcm2835_spi_prepare_sg(struct spi_controller *ctlr, return dma_submit_error(cookie); } +/** + * bcm2835_spi_transfer_one_dma() - perform SPI transfer using DMA engine + * @ctlr: SPI master controller + * @spi: SPI slave + * @tfr: SPI transfer + * @cs: CS register + * + * For *bidirectional* transfers (both tx_buf and rx_buf are non-%NULL), set up + * the TX and RX DMA channel to copy between memory and FIFO register. + * + * For *TX-only* transfers (rx_buf is %NULL), copying the RX FIFO's contents to + * memory is pointless. However not reading the RX FIFO isn't an option either + * because transmission is halted once it's full. As a workaround, cyclically + * clear the RX FIFO by setting the CLEAR_RX bit in the CS register. + * + * The CS register value is precalculated in bcm2835_spi_setup(). Normally + * this is called only once, on slave registration. A DMA descriptor to write + * this value is preallocated in bcm2835_dma_init(). 
All that's left to do + * when performing a TX-only transfer is to submit this descriptor to the RX + * DMA channel. Latency is thereby minimized. The descriptor does not + * generate any interrupts while running. It must be terminated once the + * TX DMA channel is done. + * + * Clearing the RX FIFO is paced by the DREQ signal. The signal is asserted + * when the RX FIFO becomes half full, i.e. 32 bytes. (Tuneable with the DC + * register.) Reading 32 bytes from the RX FIFO would normally require 8 bus + * accesses, whereas clearing it requires only 1 bus access. So an 8-fold + * reduction in bus traffic and thus energy consumption is achieved. + */ static int bcm2835_spi_transfer_one_dma(struct spi_controller *ctlr, struct spi_device *spi, struct spi_transfer *tfr, u32 cs) { struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr); + dma_cookie_t cookie; int ret; /* update usage statistics */ @@ -637,13 +743,10 @@ static int bcm2835_spi_transfer_one_dma(struct spi_controller *ctlr, bcm2835_spi_transfer_prologue(ctlr, tfr, bs, cs); /* setup tx-DMA */ - ret = bcm2835_spi_prepare_sg(ctlr, tfr, true); + ret = bcm2835_spi_prepare_sg(ctlr, spi, tfr, bs, true); if (ret) goto err_reset_hw; - /* start TX early */ - dma_async_issue_pending(ctlr->dma_tx); - /* set the DMA length */ bcm2835_wr(bs, BCM2835_SPI_DLEN, bs->tx_len); @@ -651,19 +754,43 @@ static int bcm2835_spi_transfer_one_dma(struct spi_controller *ctlr, bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_TA | BCM2835_SPI_CS_DMAEN); + bs->tx_dma_active = true; + smp_wmb(); + + /* start TX early */ + dma_async_issue_pending(ctlr->dma_tx); + /* setup rx-DMA late - to run transfers while * mapping of the rx buffers still takes place * this saves 10us or more. */ - ret = bcm2835_spi_prepare_sg(ctlr, tfr, false); + if (bs->rx_buf) { + ret = bcm2835_spi_prepare_sg(ctlr, spi, tfr, bs, false); + } else { + cookie = dmaengine_submit(bs->clear_rx_desc[spi->chip_select]); + ret = dma_submit_error(cookie); + } if (ret) { /* need to reset on errors */ dmaengine_terminate_sync(ctlr->dma_tx); + bs->tx_dma_active = false; goto err_reset_hw; } /* start rx dma late */ dma_async_issue_pending(ctlr->dma_rx); + bs->rx_dma_active = true; + smp_mb(); + + /* + * In case of a very short TX-only transfer, bcm2835_spi_dma_tx_done() + * may run before RX DMA is issued. Terminate RX DMA if so. 
+ */ + if (!bs->rx_buf && !bs->tx_dma_active && + cmpxchg(&bs->rx_dma_active, true, false)) { + dmaengine_terminate_async(ctlr->dma_rx); + bcm2835_spi_reset_hw(ctlr); + } /* wait for wakeup in framework */ return 1; @@ -686,26 +813,42 @@ static bool bcm2835_spi_can_dma(struct spi_controller *ctlr, return true; } -static void bcm2835_dma_release(struct spi_controller *ctlr) +static void bcm2835_dma_release(struct spi_controller *ctlr, + struct bcm2835_spi *bs) { + int i; + if (ctlr->dma_tx) { dmaengine_terminate_sync(ctlr->dma_tx); dma_release_channel(ctlr->dma_tx); ctlr->dma_tx = NULL; } + if (ctlr->dma_rx) { dmaengine_terminate_sync(ctlr->dma_rx); + + for (i = 0; i < BCM2835_SPI_NUM_CS; i++) + if (bs->clear_rx_desc[i]) + dmaengine_desc_free(bs->clear_rx_desc[i]); + + if (bs->clear_rx_addr) + dma_unmap_single(ctlr->dma_rx->device->dev, + bs->clear_rx_addr, + sizeof(bs->clear_rx_cs), + DMA_TO_DEVICE); + dma_release_channel(ctlr->dma_rx); ctlr->dma_rx = NULL; } } -static void bcm2835_dma_init(struct spi_controller *ctlr, struct device *dev) +static void bcm2835_dma_init(struct spi_controller *ctlr, struct device *dev, + struct bcm2835_spi *bs) { struct dma_slave_config slave_config; const __be32 *addr; dma_addr_t dma_reg_base; - int ret; + int ret, i; /* base address in dma-space */ addr = of_get_address(ctlr->dev.of_node, 0, NULL, NULL); @@ -735,17 +878,51 @@ static void bcm2835_dma_init(struct spi_controller *ctlr, struct device *dev) if (ret) goto err_config; + /* + * The RX DMA channel is used bidirectionally: It either reads the + * RX FIFO or, in case of a TX-only transfer, cyclically writes a + * precalculated value to the CS register to clear the RX FIFO. + */ slave_config.src_addr = (u32)(dma_reg_base + BCM2835_SPI_FIFO); slave_config.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; + slave_config.dst_addr = (u32)(dma_reg_base + BCM2835_SPI_CS); + slave_config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; ret = dmaengine_slave_config(ctlr->dma_rx, &slave_config); if (ret) goto err_config; + bs->clear_rx_addr = dma_map_single(ctlr->dma_rx->device->dev, + bs->clear_rx_cs, + sizeof(bs->clear_rx_cs), + DMA_TO_DEVICE); + if (dma_mapping_error(ctlr->dma_rx->device->dev, bs->clear_rx_addr)) { + dev_err(dev, "cannot map clear_rx_cs - not using DMA mode\n"); + bs->clear_rx_addr = 0; + goto err_release; + } + + for (i = 0; i < BCM2835_SPI_NUM_CS; i++) { + bs->clear_rx_desc[i] = dmaengine_prep_dma_cyclic(ctlr->dma_rx, + bs->clear_rx_addr + i * sizeof(u32), + sizeof(u32), 0, + DMA_MEM_TO_DEV, 0); + if (!bs->clear_rx_desc[i]) { + dev_err(dev, "cannot prepare clear_rx_desc - not using DMA mode\n"); + goto err_release; + } + + ret = dmaengine_desc_set_reuse(bs->clear_rx_desc[i]); + if (ret) { + dev_err(dev, "cannot reuse clear_rx_desc - not using DMA mode\n"); + goto err_release; + } + } + /* all went well, so set can_dma */ ctlr->can_dma = bcm2835_spi_can_dma; - /* need to do TX AND RX DMA, so we need dummy buffers */ - ctlr->flags = SPI_CONTROLLER_MUST_RX | SPI_CONTROLLER_MUST_TX; + /* need to do TX DMA, so we need a dummy buffer */ + ctlr->flags = SPI_CONTROLLER_MUST_TX; return; @@ -753,7 +930,7 @@ static void bcm2835_dma_init(struct spi_controller *ctlr, struct device *dev) dev_err(dev, "issue configuring dma: %d - not using DMA mode\n", ret); err_release: - bcm2835_dma_release(ctlr); + bcm2835_dma_release(ctlr, bs); err: return; } @@ -842,8 +1019,7 @@ static int bcm2835_spi_transfer_one(struct spi_controller *ctlr, bcm2835_wr(bs, BCM2835_SPI_CLK, cdiv); /* handle all the 3-wire mode */ - if 
(spi->mode & SPI_3WIRE && tfr->rx_buf && - tfr->rx_buf != ctlr->dummy_rx) + if (spi->mode & SPI_3WIRE && tfr->rx_buf) cs |= BCM2835_SPI_CS_REN; /* set transmit buffers and length */ @@ -911,7 +1087,9 @@ static void bcm2835_spi_handle_err(struct spi_controller *ctlr, /* if an error occurred and we have an active dma, then terminate */ dmaengine_terminate_sync(ctlr->dma_tx); + bs->tx_dma_active = false; dmaengine_terminate_sync(ctlr->dma_rx); + bs->rx_dma_active = false; bcm2835_spi_undo_prologue(bs); /* and reset */ @@ -925,7 +1103,8 @@ static int chip_match_name(struct gpio_chip *chip, void *data) static int bcm2835_spi_setup(struct spi_device *spi) { - struct bcm2835_spi *bs = spi_controller_get_devdata(spi->controller); + struct spi_controller *ctlr = spi->controller; + struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr); int err; struct gpio_chip *chip; u32 cs; @@ -943,6 +1122,21 @@ static int bcm2835_spi_setup(struct spi_device *spi) cs |= BCM2835_SPI_CS_CPHA; bs->prepare_cs[spi->chip_select] = cs; + /* + * Precalculate SPI slave's CS register value to clear RX FIFO + * in case of a TX-only DMA transfer. + */ + if (ctlr->dma_rx) { + bs->clear_rx_cs[spi->chip_select] = cs | + BCM2835_SPI_CS_TA | + BCM2835_SPI_CS_DMAEN | + BCM2835_SPI_CS_CLEAR_RX; + dma_sync_single_for_device(ctlr->dma_rx->device->dev, + bs->clear_rx_addr, + sizeof(bs->clear_rx_cs), + DMA_TO_DEVICE); + } + /* * sanity checking the native-chipselects */ @@ -993,7 +1187,8 @@ static int bcm2835_spi_probe(struct platform_device *pdev) struct resource *res; int err; - ctlr = spi_alloc_master(&pdev->dev, sizeof(*bs)); + ctlr = spi_alloc_master(&pdev->dev, ALIGN(sizeof(*bs), + dma_get_cache_alignment())); if (!ctlr) return -ENOMEM; @@ -1032,7 +1227,7 @@ static int bcm2835_spi_probe(struct platform_device *pdev) clk_prepare_enable(bs->clk); - bcm2835_dma_init(ctlr, &pdev->dev); + bcm2835_dma_init(ctlr, &pdev->dev, bs); /* initialise the hardware with the default polarities */ bcm2835_wr(bs, BCM2835_SPI_CS, @@ -1076,7 +1271,7 @@ static int bcm2835_spi_remove(struct platform_device *pdev) clk_disable_unprepare(bs->clk); - bcm2835_dma_release(ctlr); + bcm2835_dma_release(ctlr, bs); return 0; } From patchwork Sat Aug 3 10:10:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 11074381 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D007F1395 for ; Sat, 3 Aug 2019 10:44:52 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BB4CA28894 for ; Sat, 3 Aug 2019 10:44:52 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A8EAA288FC; Sat, 3 Aug 2019 10:44:52 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9819D28894 for ; Sat, 3 Aug 2019 10:44:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388889AbfHCKov (ORCPT ); Sat, 3 Aug 2019 06:44:51 -0400 Received: from mailout3.hostsharing.net ([176.9.242.54]:47313 "EHLO mailout3.hostsharing.net" 
rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388809AbfHCKov (ORCPT ); Sat, 3 Aug 2019 06:44:51 -0400 Received: from h08.hostsharing.net (h08.hostsharing.net [IPv6:2a01:37:1000::53df:5f1c:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.hostsharing.net", Issuer "COMODO RSA Domain Validation Secure Server CA" (not verified)) by mailout3.hostsharing.net (Postfix) with ESMTPS id ECC4510340734; Sat, 3 Aug 2019 12:44:48 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id 992D8618F189; Sat, 3 Aug 2019 12:44:48 +0200 (CEST) X-Mailbox-Line: From f2a812b579c0b99d7fadf61a30a6263a9cc19bc4 Mon Sep 17 00:00:00 2001 Message-Id: In-Reply-To: References: From: Lukas Wunner Date: Sat, 3 Aug 2019 12:10:00 +0200 Subject: [PATCH 08/10] dmaengine: bcm2835: Document struct bcm2835_dmadev MIME-Version: 1.0 To: Mark Brown , Vinod Koul , Stefan Wahren , linux-spi@vger.kernel.org, dmaengine@vger.kernel.org, linux-rpi-kernel@lists.infradead.org, bcm-kernel-feedback-list@broadcom.com Cc: Eric Anholt , Nuno Sa , Martin Sperl , Noralf Tronnes , Robert Jarzmik , Florian Kauer , Florian Fainelli , Ray Jui , Scott Branden Sender: dmaengine-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Document the BCM2835 DMA driver's device data structure so that upcoming commits may add further members with proper kerneldoc. Tested-by: Nuno Sá Signed-off-by: Lukas Wunner Cc: Martin Sperl Cc: Florian Kauer Acked-by: Vinod Koul --- drivers/dma/bcm2835-dma.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c index a65514fcb7f2..14358faf3bff 100644 --- a/drivers/dma/bcm2835-dma.c +++ b/drivers/dma/bcm2835-dma.c @@ -37,6 +37,12 @@ #define BCM2835_DMA_MAX_DMA_CHAN_SUPPORTED 14 #define BCM2835_DMA_CHAN_NAME_SIZE 8 +/** + * struct bcm2835_dmadev - BCM2835 DMA controller + * @ddev: DMA device + * @base: base address of register map + * @dma_parms: DMA parameters (to convey 1 GByte max segment size to clients) + */ struct bcm2835_dmadev { struct dma_device ddev; void __iomem *base; From patchwork Sat Aug 3 10:10:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 11074385 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 15C231510 for ; Sat, 3 Aug 2019 10:46:43 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 06DCC288FC for ; Sat, 3 Aug 2019 10:46:43 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id EF05A288EC; Sat, 3 Aug 2019 10:46:42 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 80656288E5 for ; Sat, 3 Aug 2019 10:46:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388901AbfHCKqm (ORCPT ); Sat, 3 Aug 
2019 06:46:42 -0400 Received: from mailout2.hostsharing.net ([83.223.78.233]:58105 "EHLO mailout2.hostsharing.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388809AbfHCKql (ORCPT ); Sat, 3 Aug 2019 06:46:41 -0400 Received: from h08.hostsharing.net (h08.hostsharing.net [IPv6:2a01:37:1000::53df:5f1c:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.hostsharing.net", Issuer "COMODO RSA Domain Validation Secure Server CA" (not verified)) by mailout2.hostsharing.net (Postfix) with ESMTPS id C572E10189CBA; Sat, 3 Aug 2019 12:46:38 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id 7960C618F189; Sat, 3 Aug 2019 12:46:38 +0200 (CEST) X-Mailbox-Line: From a8efa43470bc5092b8727a93c9cf694c80e0c8c4 Mon Sep 17 00:00:00 2001 Message-Id: In-Reply-To: References: From: Lukas Wunner Date: Sat, 3 Aug 2019 12:10:00 +0200 Subject: [PATCH 09/10] dmaengine: bcm2835: Avoid accessing memory when copying zeroes MIME-Version: 1.0 To: Mark Brown , Vinod Koul , Stefan Wahren , linux-spi@vger.kernel.org, dmaengine@vger.kernel.org, linux-rpi-kernel@lists.infradead.org, bcm-kernel-feedback-list@broadcom.com Cc: Eric Anholt , Nuno Sa , Martin Sperl , Noralf Tronnes , Robert Jarzmik , Florian Kauer , Florian Fainelli , Ray Jui , Scott Branden Sender: dmaengine-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The BCM2835 DMA controller is capable of synthesizing zeroes instead of copying them from a source address. The feature is enabled by setting the SRC_IGNORE bit in the Transfer Information field of a Control Block: "Do not perform source reads. In addition, destination writes will zero all the write strobes. This is used for fast cache fill operations." https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf The feature is only available on 8 of the 16 channels. The others are so-called "lite" channels with a limited feature set and performance. Enable the feature if a cyclic transaction copies from the zero page. This reduces traffic on the memory bus. A forthcoming use case is the BCM2835 SPI driver, which will cyclically copy from the zero page to the TX FIFO. 
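The change boils down to a single conditional in the cyclic prep callback. Below is a condensed sketch of the hunk in the diff further down, taken from the DMA_MEM_TO_DEV branch where src has already been set to buf_addr:

	/*
	 * Non-"lite" channels can synthesize zeroes instead of reading
	 * them from memory: if the cyclic source is the bus address of
	 * the zero page, skip the source reads altogether.
	 */
	if (buf_addr == od->zero_page && !c->is_lite_channel)
		info |= BCM2835_DMA_S_IGNORE;	/* SRC_IGNORE in the TI field */
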
The idea to use SRC_IGNORE was taken from an ancient GitHub conversation between Martin and Noralf: https://github.com/msperl/spi-bcm2835/issues/13#issuecomment-98180451 Tested-by: Nuno Sá Signed-off-by: Lukas Wunner Cc: Martin Sperl Cc: Noralf Trønnes Cc: Florian Kauer Acked-by: Vinod Koul --- drivers/dma/bcm2835-dma.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c index 14358faf3bff..67100e4e1083 100644 --- a/drivers/dma/bcm2835-dma.c +++ b/drivers/dma/bcm2835-dma.c @@ -42,11 +42,14 @@ * @ddev: DMA device * @base: base address of register map * @dma_parms: DMA parameters (to convey 1 GByte max segment size to clients) + * @zero_page: bus address of zero page (to detect transactions copying from + * zero page and avoid accessing memory if so) */ struct bcm2835_dmadev { struct dma_device ddev; void __iomem *base; struct device_dma_parameters dma_parms; + dma_addr_t zero_page; }; struct bcm2835_dma_cb { @@ -693,6 +696,7 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic( size_t period_len, enum dma_transfer_direction direction, unsigned long flags) { + struct bcm2835_dmadev *od = to_bcm2835_dma_dev(chan->device); struct bcm2835_chan *c = to_bcm2835_dma_chan(chan); struct bcm2835_desc *d; dma_addr_t src, dst; @@ -743,6 +747,10 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic( dst = c->cfg.dst_addr; src = buf_addr; info |= BCM2835_DMA_D_DREQ | BCM2835_DMA_S_INC; + + /* non-lite channels can write zeroes w/o accessing memory */ + if (buf_addr == od->zero_page && !c->is_lite_channel) + info |= BCM2835_DMA_S_IGNORE; } /* calculate number of frames */ @@ -845,6 +853,9 @@ static void bcm2835_dma_free(struct bcm2835_dmadev *od) list_del(&c->vc.chan.device_node); tasklet_kill(&c->vc.task); } + + dma_unmap_page_attrs(od->ddev.dev, od->zero_page, PAGE_SIZE, + DMA_TO_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); } static const struct of_device_id bcm2835_dma_of_match[] = { @@ -927,6 +938,14 @@ static int bcm2835_dma_probe(struct platform_device *pdev) platform_set_drvdata(pdev, od); + od->zero_page = dma_map_page_attrs(od->ddev.dev, ZERO_PAGE(0), 0, + PAGE_SIZE, DMA_TO_DEVICE, + DMA_ATTR_SKIP_CPU_SYNC); + if (dma_mapping_error(od->ddev.dev, od->zero_page)) { + dev_err(&pdev->dev, "Failed to map zero page\n"); + return -ENOMEM; + } + /* Request DMA channel mask from device tree */ if (of_property_read_u32(pdev->dev.of_node, "brcm,dma-channel-mask", From patchwork Sat Aug 3 10:10:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 11074391 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C35C914E5 for ; Sat, 3 Aug 2019 10:48:28 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id AF6D3288FC for ; Sat, 3 Aug 2019 10:48:28 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A3A01288EC; Sat, 3 Aug 2019 10:48:28 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 
013EB288E5 for ; Sat, 3 Aug 2019 10:48:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389176AbfHCKs1 (ORCPT ); Sat, 3 Aug 2019 06:48:27 -0400 Received: from mailout2.hostsharing.net ([83.223.78.233]:35109 "EHLO mailout2.hostsharing.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389162AbfHCKs1 (ORCPT ); Sat, 3 Aug 2019 06:48:27 -0400 Received: from h08.hostsharing.net (h08.hostsharing.net [IPv6:2a01:37:1000::53df:5f1c:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.hostsharing.net", Issuer "COMODO RSA Domain Validation Secure Server CA" (not verified)) by mailout2.hostsharing.net (Postfix) with ESMTPS id 4226D10189B21; Sat, 3 Aug 2019 12:48:22 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id F03C6618F189; Sat, 3 Aug 2019 12:48:21 +0200 (CEST) X-Mailbox-Line: From fe12893a7521a162001a1f52d2a98f07592c811c Mon Sep 17 00:00:00 2001 Message-Id: In-Reply-To: References: From: Lukas Wunner Date: Sat, 3 Aug 2019 12:10:00 +0200 Subject: [PATCH 10/10] spi: bcm2835: Speed up RX-only DMA transfers by zero-filling TX FIFO MIME-Version: 1.0 To: Mark Brown , Vinod Koul , Stefan Wahren , linux-spi@vger.kernel.org, dmaengine@vger.kernel.org, linux-rpi-kernel@lists.infradead.org, bcm-kernel-feedback-list@broadcom.com Cc: Eric Anholt , Nuno Sa , Martin Sperl , Noralf Tronnes , Robert Jarzmik , Florian Kauer , Florian Fainelli , Ray Jui , Scott Branden Sender: dmaengine-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The BCM2835 SPI driver currently sets the SPI_CONTROLLER_MUST_TX flag. When performing an RX-only transfer, this flag causes the SPI core to allocate and DMA-map a dummy buffer which is copied to the TX FIFO. The dummy buffer is necessary because the chip is not capable of automatically clocking out null bytes. Avoid the overhead induced by the dummy buffer by preallocating a reusable DMA transaction which fills the TX FIFO by cyclically copying from the zero page. The transaction requires very little CPU time to submit and generates no interrupts while running. Specifics are provided in kerneldoc comments. [Nathan Chancellor contributed a DMA mapping fixup for an early version of this commit, hence his Signed-off-by.] 
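The setup cost is paid once in bcm2835_dma_init(). The sketch below condenses the corresponding hunks from the diff, with error handling elided: the zero page is mapped for the TX DMA channel, a cyclic MEM_TO_DEV descriptor covering a single 32-bit word is prepared from it, and that descriptor is marked reusable so it can be resubmitted for every RX-only transfer without being prepared again.

	/* map the zero page once; the CPU never writes it, so skip syncing */
	bs->fill_tx_addr = dma_map_page_attrs(ctlr->dma_tx->device->dev,
					      ZERO_PAGE(0), 0, sizeof(u32),
					      DMA_TO_DEVICE,
					      DMA_ATTR_SKIP_CPU_SYNC);

	/* cyclically copy that zero word to the TX FIFO, no interrupts */
	bs->fill_tx_desc = dmaengine_prep_dma_cyclic(ctlr->dma_tx,
						     bs->fill_tx_addr,
						     sizeof(u32), 0,
						     DMA_MEM_TO_DEV, 0);

	/* allow the descriptor to be submitted again after termination */
	dmaengine_desc_set_reuse(bs->fill_tx_desc);
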
Tested-by: Nuno Sá Signed-off-by: Nathan Chancellor Signed-off-by: Lukas Wunner Cc: Martin Sperl Cc: Noralf Trønnes Cc: Robert Jarzmik --- drivers/spi/spi-bcm2835.c | 93 ++++++++++++++++++++++++++++++++++----- 1 file changed, 82 insertions(+), 11 deletions(-) diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c index 9630a25c0304..a57d72fca66e 100644 --- a/drivers/spi/spi-bcm2835.c +++ b/drivers/spi/spi-bcm2835.c @@ -109,6 +109,9 @@ MODULE_PARM_DESC(polling_limit_us, * @tx_dma_active: whether a TX DMA descriptor is in progress * @rx_dma_active: whether a RX DMA descriptor is in progress * (used by bcm2835_spi_dma_tx_done() to handle a race) + * @fill_tx_desc: preallocated TX DMA descriptor used for RX-only transfers + * (cyclically copies from zero page to TX FIFO) + * @fill_tx_addr: bus address of zero page * @clear_rx_desc: preallocated RX DMA descriptor used for TX-only transfers * (cyclically clears RX FIFO by writing @clear_rx_cs to CS register) * @clear_rx_addr: bus address of @clear_rx_cs @@ -138,6 +141,8 @@ struct bcm2835_spi { u8 chip_select; unsigned int tx_dma_active; unsigned int rx_dma_active; + struct dma_async_tx_descriptor *fill_tx_desc; + dma_addr_t fill_tx_addr; struct dma_async_tx_descriptor *clear_rx_desc[BCM2835_SPI_NUM_CS]; dma_addr_t clear_rx_addr; u32 clear_rx_cs[BCM2835_SPI_NUM_CS] ____cacheline_aligned; @@ -474,14 +479,14 @@ static void bcm2835_spi_transfer_prologue(struct spi_controller *ctlr, bs->rx_prologue = 0; bs->tx_spillover = false; - if (!sg_is_last(&tfr->tx_sg.sgl[0])) + if (bs->tx_buf && !sg_is_last(&tfr->tx_sg.sgl[0])) bs->tx_prologue = sg_dma_len(&tfr->tx_sg.sgl[0]) & 3; if (bs->rx_buf && !sg_is_last(&tfr->rx_sg.sgl[0])) { bs->rx_prologue = sg_dma_len(&tfr->rx_sg.sgl[0]) & 3; if (bs->rx_prologue > bs->tx_prologue) { - if (sg_is_last(&tfr->tx_sg.sgl[0])) { + if (!bs->tx_buf || sg_is_last(&tfr->tx_sg.sgl[0])) { bs->tx_prologue = bs->rx_prologue; } else { bs->tx_prologue += 4; @@ -515,6 +520,9 @@ static void bcm2835_spi_transfer_prologue(struct spi_controller *ctlr, sg_dma_len(&tfr->rx_sg.sgl[0]) -= bs->rx_prologue; } + if (!bs->tx_buf) + return; + /* * Write remaining TX prologue. Adjust first entry in TX sglist. * Also adjust second entry if prologue spills over to it. @@ -560,6 +568,9 @@ static void bcm2835_spi_undo_prologue(struct bcm2835_spi *bs) sg_dma_len(&tfr->rx_sg.sgl[0]) += bs->rx_prologue; } + if (!bs->tx_buf) + goto out; + if (likely(!bs->tx_spillover)) { sg_dma_address(&tfr->tx_sg.sgl[0]) -= bs->tx_prologue; sg_dma_len(&tfr->tx_sg.sgl[0]) += bs->tx_prologue; @@ -568,7 +579,7 @@ static void bcm2835_spi_undo_prologue(struct bcm2835_spi *bs) sg_dma_address(&tfr->tx_sg.sgl[1]) -= 4; sg_dma_len(&tfr->tx_sg.sgl[1]) += 4; } - +out: bs->tx_prologue = 0; } @@ -583,10 +594,7 @@ static void bcm2835_spi_dma_rx_done(void *data) struct spi_controller *ctlr = data; struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr); - /* reset fifo and HW */ - bcm2835_spi_reset_hw(ctlr); - - /* and terminate tx-dma as we do not have an irq for it + /* terminate tx-dma as we do not have an irq for it * because when the rx dma will terminate and this callback * is called the tx-dma must have finished - can't get to this * situation otherwise... 
@@ -596,6 +604,9 @@ static void bcm2835_spi_dma_rx_done(void *data) bs->rx_dma_active = false; bcm2835_spi_undo_prologue(bs); + /* reset fifo and HW */ + bcm2835_spi_reset_hw(ctlr); + /* and mark as completed */; complete(&ctlr->xfer_completion); } @@ -723,6 +734,24 @@ static int bcm2835_spi_prepare_sg(struct spi_controller *ctlr, * register.) Reading 32 bytes from the RX FIFO would normally require 8 bus * accesses, whereas clearing it requires only 1 bus access. So an 8-fold * reduction in bus traffic and thus energy consumption is achieved. + * + * For *RX-only* transfers (tx_buf is %NULL), fill the TX FIFO by cyclically + * copying from the zero page. The DMA descriptor to do this is preallocated + * in bcm2835_dma_init(). It must be terminated once the RX DMA channel is + * done and can then be reused. + * + * The BCM2835 DMA driver autodetects when a transaction copies from the zero + * page and utilizes the DMA controller's ability to synthesize zeroes instead + * of copying them from memory. This reduces traffic on the memory bus. The + * feature is not available on so-called "lite" channels, but normally TX DMA + * is backed by a full-featured channel. + * + * Zero-filling the TX FIFO is paced by the DREQ signal. Unfortunately the + * BCM2835 SPI controller continues to assert DREQ even after the DLEN register + * has been counted down to zero (hardware erratum). Thus, when the transfer + * has finished, the DMA engine zero-fills the TX FIFO until it is half full. + * (Tuneable with the DC register.) So up to 9 gratuitous bus accesses are + * performed at the end of an RX-only transfer. */ static int bcm2835_spi_transfer_one_dma(struct spi_controller *ctlr, struct spi_device *spi, @@ -743,7 +772,12 @@ static int bcm2835_spi_transfer_one_dma(struct spi_controller *ctlr, bcm2835_spi_transfer_prologue(ctlr, tfr, bs, cs); /* setup tx-DMA */ - ret = bcm2835_spi_prepare_sg(ctlr, spi, tfr, bs, true); + if (bs->tx_buf) { + ret = bcm2835_spi_prepare_sg(ctlr, spi, tfr, bs, true); + } else { + cookie = dmaengine_submit(bs->fill_tx_desc); + ret = dma_submit_error(cookie); + } if (ret) goto err_reset_hw; @@ -820,6 +854,16 @@ static void bcm2835_dma_release(struct spi_controller *ctlr, if (ctlr->dma_tx) { dmaengine_terminate_sync(ctlr->dma_tx); + + if (bs->fill_tx_desc) + dmaengine_desc_free(bs->fill_tx_desc); + + if (bs->fill_tx_addr) + dma_unmap_page_attrs(ctlr->dma_tx->device->dev, + bs->fill_tx_addr, sizeof(u32), + DMA_TO_DEVICE, + DMA_ATTR_SKIP_CPU_SYNC); + dma_release_channel(ctlr->dma_tx); ctlr->dma_tx = NULL; } @@ -870,7 +914,11 @@ static void bcm2835_dma_init(struct spi_controller *ctlr, struct device *dev, goto err_release; } - /* configure DMAs */ + /* + * The TX DMA channel either copies a transfer's TX buffer to the FIFO + * or, in case of an RX-only transfer, cyclically copies from the zero + * page to the FIFO using a preallocated, reusable descriptor. 
+ */ slave_config.dst_addr = (u32)(dma_reg_base + BCM2835_SPI_FIFO); slave_config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; @@ -878,6 +926,31 @@ static void bcm2835_dma_init(struct spi_controller *ctlr, struct device *dev, if (ret) goto err_config; + bs->fill_tx_addr = dma_map_page_attrs(ctlr->dma_tx->device->dev, + ZERO_PAGE(0), 0, sizeof(u32), + DMA_TO_DEVICE, + DMA_ATTR_SKIP_CPU_SYNC); + if (dma_mapping_error(ctlr->dma_tx->device->dev, bs->fill_tx_addr)) { + dev_err(dev, "cannot map zero page - not using DMA mode\n"); + bs->fill_tx_addr = 0; + goto err_release; + } + + bs->fill_tx_desc = dmaengine_prep_dma_cyclic(ctlr->dma_tx, + bs->fill_tx_addr, + sizeof(u32), 0, + DMA_MEM_TO_DEV, 0); + if (!bs->fill_tx_desc) { + dev_err(dev, "cannot prepare fill_tx_desc - not using DMA mode\n"); + goto err_release; + } + + ret = dmaengine_desc_set_reuse(bs->fill_tx_desc); + if (ret) { + dev_err(dev, "cannot reuse fill_tx_desc - not using DMA mode\n"); + goto err_release; + } + /* * The RX DMA channel is used bidirectionally: It either reads the * RX FIFO or, in case of a TX-only transfer, cyclically writes a @@ -921,8 +994,6 @@ static void bcm2835_dma_init(struct spi_controller *ctlr, struct device *dev, /* all went well, so set can_dma */ ctlr->can_dma = bcm2835_spi_can_dma; - /* need to do TX DMA, so we need a dummy buffer */ - ctlr->flags = SPI_CONTROLLER_MUST_TX; return;
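With the descriptor in place, the per-transfer cost of an RX-only DMA transfer is reduced to submitting it and terminating it again once the RX channel completes. A condensed sketch of that lifecycle, based on bcm2835_spi_transfer_one_dma() above; the terminate call in the RX completion callback is assumed to be dmaengine_terminate_async(), in line with the "terminate tx-dma" comment, since that part of the callback is not visible in this hunk:

	/*
	 * bcm2835_spi_transfer_one_dma(): an RX-only transfer has no TX
	 * buffer, so submit the preallocated zero-fill descriptor instead
	 * of preparing a scatterlist for a dummy buffer.
	 */
	if (!bs->tx_buf) {
		cookie = dmaengine_submit(bs->fill_tx_desc);
		ret = dma_submit_error(cookie);
	}

	/*
	 * bcm2835_spi_dma_rx_done(): the cyclic zero-fill raises no
	 * interrupt of its own, so stop it once RX is finished; being
	 * reusable, it is simply submitted again on the next transfer.
	 */
	dmaengine_terminate_async(ctlr->dma_tx);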