From patchwork Wed Jan 30 07:00:27 2013
X-Patchwork-Submitter: Matt Porter
X-Patchwork-Id: 2065801
From: Matt Porter
To: Tony Lindgren, Sekhar Nori, Grant Likely, Mark Brown, Benoit Cousson,
	Russell King, Vinod Koul, Rob Landley, Chris Ball
Cc: Devicetree Discuss, Linux OMAP List, Linux ARM Kernel List,
	Linux DaVinci Kernel List, Linux Kernel Mailing List,
	Linux Documentation List, Linux MMC List, Linux SPI Devel List,
	Arnd Bergmann, Dan Williams, Rob Herring
Subject: [PATCH v6 08/10] spi: omap2-mcspi: convert to
	dma_request_slave_channel_compat()
Date: Wed, 30 Jan 2013 02:00:27 -0500
Message-Id: <1359529229-22207-9-git-send-email-mporter@ti.com>
In-Reply-To: <1359529229-22207-1-git-send-email-mporter@ti.com>
References: <1359529229-22207-1-git-send-email-mporter@ti.com>

Convert dmaengine channel requests to use
dma_request_slave_channel_compat(). This supports the DT case, where
platforms must select a channel from either the OMAP DMA or the EDMA
engine. AM33xx boots only from DT and is the only user of EDMA, so in
the !DT case we can default to the OMAP DMA filter.
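For reference, the conversion relies on the fallback behaviour of
dma_request_slave_channel_compat(): try a DT dma-names lookup against
the device first, and only fall back to the legacy filter-function
request when no channel is found. The helper below is only a rough
sketch of that lookup order for illustration; the wrapper name and
prototype are made up for this note and are not part of the patch:

	#include <linux/dmaengine.h>

	/*
	 * Illustrative only: shows the lookup order the changelog relies
	 * on, not the real dma_request_slave_channel_compat() definition.
	 */
	static struct dma_chan *mcspi_request_chan_sketch(struct device *dev,
							  char *name,
							  dma_filter_fn fn,
							  void *fn_param)
	{
		dma_cap_mask_t mask;
		struct dma_chan *chan;

		dma_cap_zero(mask);
		dma_cap_set(DMA_SLAVE, mask);

		/* DT case: resolve "rxN"/"txN" against the node's dma-names */
		chan = dma_request_slave_channel(dev, name);
		if (chan)
			return chan;

		/* !DT case: fall back to the platform filter (OMAP DMA here) */
		return dma_request_channel(mask, fn, fn_param);
	}

In the probe changes below, the per-chip-select "rx%d"/"tx%d" strings do
double duty: they are the names handed to the DT lookup and, in the
legacy case, the IORESOURCE_DMA resource names.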
Signed-off-by: Matt Porter
Acked-by: Mark Brown
---
 drivers/spi/spi-omap2-mcspi.c | 65 ++++++++++++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 20 deletions(-)

diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
index b610f52..2c02c02 100644
--- a/drivers/spi/spi-omap2-mcspi.c
+++ b/drivers/spi/spi-omap2-mcspi.c
@@ -102,6 +102,9 @@ struct omap2_mcspi_dma {
 
 	struct completion dma_tx_completion;
 	struct completion dma_rx_completion;
+
+	char dma_rx_ch_name[14];
+	char dma_tx_ch_name[14];
 };
 
 /* use PIO for small transfers, avoiding DMA setup/teardown overhead and
@@ -822,14 +825,23 @@
 	dma_cap_zero(mask);
 	dma_cap_set(DMA_SLAVE, mask);
 	sig = mcspi_dma->dma_rx_sync_dev;
-	mcspi_dma->dma_rx = dma_request_channel(mask, omap_dma_filter_fn, &sig);
+
+	mcspi_dma->dma_rx =
+		dma_request_slave_channel_compat(mask, omap_dma_filter_fn,
+						 &sig, &master->dev,
+						 mcspi_dma->dma_rx_ch_name);
+
 	if (!mcspi_dma->dma_rx) {
 		dev_err(&spi->dev, "no RX DMA engine channel for McSPI\n");
 		return -EAGAIN;
 	}
 
 	sig = mcspi_dma->dma_tx_sync_dev;
-	mcspi_dma->dma_tx = dma_request_channel(mask, omap_dma_filter_fn, &sig);
+	mcspi_dma->dma_tx =
+		dma_request_slave_channel_compat(mask, omap_dma_filter_fn,
+						 &sig, &master->dev,
+						 mcspi_dma->dma_tx_ch_name);
+
 	if (!mcspi_dma->dma_tx) {
 		dev_err(&spi->dev, "no TX DMA engine channel for McSPI\n");
 		dma_release_channel(mcspi_dma->dma_rx);
@@ -1223,29 +1235,42 @@
 		goto free_master;
 
 	for (i = 0; i < master->num_chipselect; i++) {
-		char dma_ch_name[14];
+		char *dma_rx_ch_name = mcspi->dma_channels[i].dma_rx_ch_name;
+		char *dma_tx_ch_name = mcspi->dma_channels[i].dma_tx_ch_name;
 		struct resource *dma_res;
 
-		sprintf(dma_ch_name, "rx%d", i);
-		dma_res = platform_get_resource_byname(pdev, IORESOURCE_DMA,
-					dma_ch_name);
-		if (!dma_res) {
-			dev_dbg(&pdev->dev, "cannot get DMA RX channel\n");
-			status = -ENODEV;
-			break;
-		}
+		sprintf(dma_rx_ch_name, "rx%d", i);
+		if (!pdev->dev.of_node) {
+			dma_res =
+				platform_get_resource_byname(pdev,
							     IORESOURCE_DMA,
							     dma_rx_ch_name);
+			if (!dma_res) {
+				dev_dbg(&pdev->dev,
					"cannot get DMA RX channel\n");
+				status = -ENODEV;
+				break;
+			}
 
-		mcspi->dma_channels[i].dma_rx_sync_dev = dma_res->start;
-		sprintf(dma_ch_name, "tx%d", i);
-		dma_res = platform_get_resource_byname(pdev, IORESOURCE_DMA,
-					dma_ch_name);
-		if (!dma_res) {
-			dev_dbg(&pdev->dev, "cannot get DMA TX channel\n");
-			status = -ENODEV;
-			break;
+			mcspi->dma_channels[i].dma_rx_sync_dev =
+				dma_res->start;
 		}
+		sprintf(dma_tx_ch_name, "tx%d", i);
+		if (!pdev->dev.of_node) {
+			dma_res =
+				platform_get_resource_byname(pdev,
							     IORESOURCE_DMA,
							     dma_tx_ch_name);
+			if (!dma_res) {
+				dev_dbg(&pdev->dev,
					"cannot get DMA TX channel\n");
+				status = -ENODEV;
+				break;
+			}
 
-		mcspi->dma_channels[i].dma_tx_sync_dev = dma_res->start;
+			mcspi->dma_channels[i].dma_tx_sync_dev =
+				dma_res->start;
+		}
 	}
 
 	if (status < 0)