From patchwork Wed Mar  6 16:15:33 2013
X-Patchwork-Submitter: Matt Porter
X-Patchwork-Id: 2226991
From: Matt Porter
To: Tony Lindgren, Sekhar Nori, Grant Likely, Rob Herring, Vinod Koul,
 Mark Brown, Benoit Cousson, Russell King, Rob Landley, Andrew Morton
Cc: Devicetree Discuss, Linux OMAP List, Linux ARM Kernel List,
 Linux DaVinci Kernel List, Linux Kernel Mailing List,
 Linux Documentation List, Linux MMC List, Linux SPI Devel List,
 Arnd Bergmann
Subject: [PATCH v9 3/9] ARM: edma: add AM33XX support to the private EDMA API
Date: Wed, 6 Mar 2013 11:15:33 -0500
Message-Id: <1362586540-10393-4-git-send-email-mporter@ti.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1362586540-10393-1-git-send-email-mporter@ti.com>
References: <1362586540-10393-1-git-send-email-mporter@ti.com>

Adds support for parsing the TI EDMA DT data into the required EDMA
private API platform data. Enables runtime PM support to initialize
the EDMA hwmod. Adds AM33XX EDMA crossbar event mux support. Enables
build on OMAP.
Signed-off-by: Matt Porter
Acked-by: Sekhar Nori
---
 arch/arm/common/edma.c             | 300 ++++++++++++++++++++++++++++++++++--
 arch/arm/mach-omap2/Kconfig        |   1 +
 include/linux/platform_data/edma.h |   1 +
 3 files changed, 292 insertions(+), 10 deletions(-)

diff --git a/arch/arm/common/edma.c b/arch/arm/common/edma.c
index a1db6cd..e68ac38 100644
--- a/arch/arm/common/edma.c
+++ b/arch/arm/common/edma.c
@@ -24,6 +24,13 @@
 #include <linux/platform_device.h>
 #include <linux/io.h>
 #include <linux/slab.h>
+#include <linux/edma.h>
+#include <linux/err.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_dma.h>
+#include <linux/of_irq.h>
+#include <linux/pm_runtime.h>
 
 #include <linux/platform_data/edma.h>
 
@@ -1369,31 +1376,278 @@ void edma_clear_event(unsigned channel)
 EXPORT_SYMBOL(edma_clear_event);
 
 /*-----------------------------------------------------------------------*/
 
+static int edma_of_read_u32_to_s8_array(const struct device_node *np,
+					const char *propname, s8 *out_values,
+					size_t sz)
+{
+	int ret;
+
+	ret = of_property_read_u8_array(np, propname, (u8 *)out_values, sz);
+	if (ret)
+		return ret;
+
+	/* Terminate it */
+	*out_values++ = -1;
+	*out_values++ = -1;
+
+	return 0;
+}
+
+static int edma_of_read_u32_to_s16_array(const struct device_node *np,
+					 const char *propname, s16 *out_values,
+					 size_t sz)
+{
+	int ret;
+
+	ret = of_property_read_u16_array(np, propname, (u16 *)out_values, sz);
+	if (ret)
+		return ret;
+
+	/* Terminate it */
+	*out_values++ = -1;
+	*out_values++ = -1;
+
+	return 0;
+}
+
+static int edma_xbar_event_map(struct device *dev,
+			       struct device_node *node,
+			       struct edma_soc_info *pdata, int len)
+{
+	int ret = 0;
+	int i;
+	struct resource res;
+	void *xbar;
+	const s16 (*xbar_chans)[2];
+	u32 shift, offset, mux;
+
+	xbar_chans = devm_kzalloc(dev,
+				  len/sizeof(s16) + 2*sizeof(s16),
+				  GFP_KERNEL);
+	if (!xbar_chans)
+		return -ENOMEM;
+
+	ret = of_address_to_resource(node, 1, &res);
+	if (ret)
+		return -EIO;
+
+	xbar = devm_ioremap(dev, res.start, resource_size(&res));
+	if (!xbar)
+		return -ENOMEM;
+
+	ret = edma_of_read_u32_to_s16_array(node,
+					    "ti,edma-xbar-event-map",
+					    (s16 *)xbar_chans,
+					    len/sizeof(u32));
+	if (ret)
+		return -EIO;
+
+	for (i = 0; xbar_chans[i][0] != -1; i++) {
+		shift = (xbar_chans[i][1] % 4) * 8;
+		offset = xbar_chans[i][1] >> 2;
+		offset <<= 2;
+		mux = readl((void *)((u32)xbar + offset));
+		mux &= ~(0xff << shift);
+		mux |= xbar_chans[i][0] << shift;
+		writel(mux, (void *)((u32)xbar + offset));
+	}
+
+	pdata->xbar_chans = xbar_chans;
+
+	return 0;
+}
+
+static int edma_of_parse_dt(struct device *dev,
+			    struct device_node *node,
+			    struct edma_soc_info *pdata)
+{
+	int ret = 0;
+	u32 value;
+	struct property *prop;
+	size_t sz;
+	struct edma_rsv_info *rsv_info;
+	const s16 (*rsv_chans)[2], (*rsv_slots)[2];
+	const s8 (*queue_tc_map)[2], (*queue_priority_map)[2];
+
+	memset(pdata, 0, sizeof(struct edma_soc_info));
+
+	ret = of_property_read_u32(node, "dma-channels", &value);
+	if (ret < 0)
+		return ret;
+	pdata->n_channel = value;
+
+	ret = of_property_read_u32(node, "ti,edma-regions", &value);
+	if (ret < 0)
+		return ret;
+	pdata->n_region = value;
+
+	ret = of_property_read_u32(node, "ti,edma-slots", &value);
+	if (ret < 0)
+		return ret;
+	pdata->n_slot = value;
+
+	pdata->n_cc = 1;
+	pdata->n_tc = 3;
+
+	rsv_info =
+		devm_kzalloc(dev, sizeof(struct edma_rsv_info), GFP_KERNEL);
+	if (!rsv_info)
+		return -ENOMEM;
+	pdata->rsv = rsv_info;
+
+	/* Build the reserved channel/slots arrays */
+	prop = of_find_property(node, "ti,edma-reserved-channels", &sz);
+	if (prop) {
+		rsv_chans = devm_kzalloc(dev,
+					 sz/sizeof(s16) + 2*sizeof(s16),
+					 GFP_KERNEL);
+		if (!rsv_chans)
+			return -ENOMEM;
+		pdata->rsv->rsv_chans = rsv_chans;
+
+		ret = edma_of_read_u32_to_s16_array(node,
+						    "ti,edma-reserved-channels",
+						    (s16 *)rsv_chans,
+						    sz/sizeof(u32));
+		if (ret < 0)
+			return ret;
+	}
 
-static int __init edma_probe(struct platform_device *pdev)
+	prop = of_find_property(node, "ti,edma-reserved-slots", &sz);
+	if (prop) {
+		rsv_slots = devm_kzalloc(dev,
+					 sz/sizeof(s16) + 2*sizeof(s16),
+					 GFP_KERNEL);
+		if (!rsv_slots)
+			return -ENOMEM;
+		pdata->rsv->rsv_slots = rsv_slots;
+
+		ret = edma_of_read_u32_to_s16_array(node,
+						    "ti,edma-reserved-slots",
+						    (s16 *)rsv_slots,
+						    sz/sizeof(u32));
+		if (ret < 0)
+			return ret;
+	}
+
+	prop = of_find_property(node, "ti,edma-queue-tc-map", &sz);
+	if (!prop)
+		return -EINVAL;
+
+	queue_tc_map = devm_kzalloc(dev,
+				    sz/sizeof(s8) + 2*sizeof(s8),
+				    GFP_KERNEL);
+	if (!queue_tc_map)
+		return -ENOMEM;
+	pdata->queue_tc_mapping = queue_tc_map;
+
+	ret = edma_of_read_u32_to_s8_array(node,
+					   "ti,edma-queue-tc-map",
+					   (s8 *)queue_tc_map,
+					   sz/sizeof(u32));
+	if (ret < 0)
+		return ret;
+
+	prop = of_find_property(node, "ti,edma-queue-priority-map", &sz);
+	if (!prop)
+		return -EINVAL;
+
+	queue_priority_map = devm_kzalloc(dev,
+					  sz/sizeof(s8) + 2*sizeof(s8),
+					  GFP_KERNEL);
+	if (!queue_priority_map)
+		return -ENOMEM;
+	pdata->queue_priority_mapping = queue_priority_map;
+
+	ret = edma_of_read_u32_to_s8_array(node,
+					   "ti,edma-queue-priority-map",
+					   (s8 *)queue_priority_map,
+					   sz/sizeof(u32));
+	if (ret < 0)
+		return ret;
+
+	ret = of_property_read_u32(node, "ti,edma-default-queue", &value);
+	if (ret < 0)
+		return ret;
+	pdata->default_queue = value;
+
+	prop = of_find_property(node, "ti,edma-xbar-event-map", &sz);
+	if (prop)
+		ret = edma_xbar_event_map(dev, node, pdata, sz);
+
+	return ret;
+}
+
+static struct of_dma_filter_info edma_filter_info = {
+	.filter_fn = edma_filter_fn,
+};
+
+static int edma_probe(struct platform_device *pdev)
 {
 	struct edma_soc_info	**info = pdev->dev.platform_data;
+	struct edma_soc_info	*ninfo[EDMA_MAX_CC] = {NULL, NULL};
+	struct edma_soc_info	tmpinfo;
 	const s8		(*queue_priority_mapping)[2];
 	const s8		(*queue_tc_mapping)[2];
 	int			i, j, off, ln, found = 0;
 	int			status = -1;
 	const s16		(*rsv_chans)[2];
 	const s16		(*rsv_slots)[2];
+	const s16		(*xbar_chans)[2];
 	int			irq[EDMA_MAX_CC] = {0, 0};
 	int			err_irq[EDMA_MAX_CC] = {0, 0};
-	struct resource		*r[EDMA_MAX_CC] = {NULL};
+	struct resource		*r[EDMA_MAX_CC] = {NULL, NULL};
+	struct resource		res[EDMA_MAX_CC];
 	resource_size_t		len[EDMA_MAX_CC];
 	char			res_name[10];
 	char			irq_name[10];
+	struct device_node	*node = pdev->dev.of_node;
+	struct device		*dev = &pdev->dev;
+	int			ret;
+
+	if (node) {
+		/* Check if this is a second instance registered */
+		if (arch_num_cc) {
+			dev_err(dev, "only one EDMA instance is supported via DT\n");
+			return -ENODEV;
+		}
+		info = ninfo;
+		edma_of_parse_dt(dev, node, &tmpinfo);
+		info[0] = &tmpinfo;
+
+		dma_cap_set(DMA_SLAVE, edma_filter_info.dma_cap);
+		of_dma_controller_register(dev->of_node,
+					   of_dma_simple_xlate,
+					   &edma_filter_info);
+	}
 
 	if (!info)
 		return -ENODEV;
 
+	pm_runtime_enable(dev);
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+		dev_err(dev, "pm_runtime_get_sync() failed\n");
+		return ret;
+	}
+
 	for (j = 0; j < EDMA_MAX_CC; j++) {
-		sprintf(res_name, "edma_cc%d", j);
-		r[j] = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+		if (!info[j]) {
+			if (!found)
+				return -ENODEV;
+			break;
+		}
+		if (node) {
+			ret = of_address_to_resource(node, j, &res[j]);
+			if (!ret)
+				r[j] = &res[j];
+		} else {
+			sprintf(res_name, "edma_cc%d", j);
+			r[j] = platform_get_resource_byname(pdev,
+							    IORESOURCE_MEM,
 							res_name);
-		if (!r[j] || !info[j]) {
+		}
+		if (!r[j]) {
 			if (found)
 				break;
 			else
@@ -1468,8 +1722,22 @@ static int __init edma_probe(struct platform_device *pdev)
 			}
 		}
 
-		sprintf(irq_name, "edma%d", j);
-		irq[j] = platform_get_irq_byname(pdev, irq_name);
+		/* Clear the xbar mapped channels in unused list */
+		xbar_chans = info[j]->xbar_chans;
+		if (xbar_chans) {
+			for (i = 0; xbar_chans[i][1] != -1; i++) {
+				off = xbar_chans[i][1];
+				clear_bits(off, 1,
+					   edma_cc[j]->edma_unused);
+			}
+		}
+
+		if (node)
+			irq[j] = irq_of_parse_and_map(node, 0);
+		else {
+			sprintf(irq_name, "edma%d", j);
+			irq[j] = platform_get_irq_byname(pdev, irq_name);
+		}
 		edma_cc[j]->irq_res_start = irq[j];
 		status = request_irq(irq[j], dma_irq_handler, 0, "edma",
 					&pdev->dev);
@@ -1479,8 +1747,12 @@ static int __init edma_probe(struct platform_device *pdev)
 			goto fail;
 		}
 
-		sprintf(irq_name, "edma%d_err", j);
-		err_irq[j] = platform_get_irq_byname(pdev, irq_name);
+		if (node)
+			err_irq[j] = irq_of_parse_and_map(node, 2);
+		else {
+			sprintf(irq_name, "edma%d_err", j);
+			err_irq[j] = platform_get_irq_byname(pdev, irq_name);
+		}
 		edma_cc[j]->irq_res_end = err_irq[j];
 		status = request_irq(err_irq[j], dma_ccerr_handler, 0,
 					"edma_error", &pdev->dev);
@@ -1541,9 +1813,17 @@ fail1:
 	return status;
 }
 
+static const struct of_device_id edma_of_ids[] = {
+	{ .compatible = "ti,edma3", },
+	{}
+};
+
 static struct platform_driver edma_driver = {
-	.driver.name	= "edma",
+	.driver = {
+		.name = "edma",
+		.of_match_table = edma_of_ids,
+	},
+	.probe = edma_probe,
 };
 
 static int __init edma_init(void)
diff --git a/arch/arm/mach-omap2/Kconfig b/arch/arm/mach-omap2/Kconfig
index 49ac3df..d3b433d 100644
--- a/arch/arm/mach-omap2/Kconfig
+++ b/arch/arm/mach-omap2/Kconfig
@@ -16,6 +16,7 @@ config ARCH_OMAP2PLUS
 	select PINCTRL
 	select PROC_DEVICETREE if PROC_FS
 	select SPARSE_IRQ
+	select TI_PRIV_EDMA
 	select USE_OF
 	help
 	  Systems based on OMAP2, OMAP3, OMAP4 or OMAP5
diff --git a/include/linux/platform_data/edma.h b/include/linux/platform_data/edma.h
index 2344ea2..ffc1fb2 100644
--- a/include/linux/platform_data/edma.h
+++ b/include/linux/platform_data/edma.h
@@ -177,6 +177,7 @@ struct edma_soc_info {
 
 	const s8	(*queue_tc_mapping)[2];
 	const s8	(*queue_priority_mapping)[2];
+	const s16	(*xbar_chans)[2];
 };
 
 #endif
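
For readers unfamiliar with the binding, here is a minimal sketch of the kind of
device tree node edma_of_parse_dt() consumes. The "ti,edma3" compatible string,
the property names, the second reg entry for the crossbar mux (see
of_address_to_resource(node, 1, ...)), the interrupt indices (0 for completion,
2 for error), and the single-cell DMA specifier implied by of_dma_simple_xlate()
are all taken from the code above; the addresses, interrupt numbers, and
property values are illustrative placeholders, not values from this series:

	edma: edma@49000000 {
		compatible = "ti,edma3";
		/* entry 0: channel controller, entry 1: crossbar event mux */
		reg = <0x49000000 0x10000>,
		      <0x44e10f90 0x10>;
		/* index 0: completion irq, index 2: error irq */
		interrupts = <12 13 14>;
		#dma-cells = <1>;
		dma-channels = <64>;
		ti,edma-regions = <4>;
		ti,edma-slots = <256>;
		ti,edma-default-queue = <0>;
		ti,edma-queue-tc-map = <0 0>, <1 1>, <2 2>;
		ti,edma-queue-priority-map = <0 0>, <1 1>, <2 2>;
		/* optional: route crossbar event 1 to DMA channel 12 */
		ti,edma-xbar-event-map = <1 12>;
	};

For each {event, channel} pair in ti,edma-xbar-event-map, edma_xbar_event_map()
read-modify-writes the 8-bit event number into byte lane (channel % 4) of the
32-bit mux register at word-aligned byte offset (channel >> 2) << 2 from the
second reg base; the <1 12> pair above therefore lands in byte 0 of the mux
register at offset 12.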