From patchwork Tue Nov 17 10:56:38 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 11911991
From: Peter Ujfalusi
Subject: [PATCH v2 01/19] dmaengine: ti: k3-udma: Correct normal channel offset when uchan_cnt is not 0
Date: Tue, 17 Nov 2020 12:56:38 +0200
Message-ID: <20201117105656.5236-2-peter.ujfalusi@ti.com>
In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

According to different sections of the TRM, the hchan_cnt of CAP3 already
includes the number of uchan in UDMA, thus the start offset of the normal
channels is hchan_cnt.
Fixes: daf4ad0499aa4 ("dmaengine: ti: k3-udma: Query throughput level information from hardware")
Signed-off-by: Peter Ujfalusi
---
 drivers/dma/ti/k3-udma.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index e508280b3d70..0e8426dd18a7 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -3199,8 +3199,7 @@ static int udma_setup_resources(struct udma_dev *ud)
 	} else if (UDMA_CAP3_UCHAN_CNT(cap3)) {
 		ud->tpl_levels = 3;
 		ud->tpl_start_idx[1] = UDMA_CAP3_UCHAN_CNT(cap3);
-		ud->tpl_start_idx[0] = ud->tpl_start_idx[1] +
-				       UDMA_CAP3_HCHAN_CNT(cap3);
+		ud->tpl_start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3);
 	} else if (UDMA_CAP3_HCHAN_CNT(cap3)) {
 		ud->tpl_levels = 2;
 		ud->tpl_start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3);
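To make the offset change concrete, here is a small standalone sketch (not part of the
patch; the CAP3 counts are made-up values) of the corrected tier setup. With hchan_cnt
already including the ultra-high channels, a hypothetical UDMA with 2 uchan and 8 hchan
starts its normal channels at index 8, not at 10 as the old code computed:

#include <stdio.h>

int main(void)
{
	unsigned int uchan_cnt = 2;	/* stands in for UDMA_CAP3_UCHAN_CNT(cap3) */
	unsigned int hchan_cnt = 8;	/* stands in for UDMA_CAP3_HCHAN_CNT(cap3), uchan included */
	unsigned int tpl_start_idx[2];

	tpl_start_idx[1] = uchan_cnt;	/* high throughput tier starts after the uchans */
	tpl_start_idx[0] = hchan_cnt;	/* normal tier starts after all hchans (uchan included) */

	printf("high tier starts at %u, normal tier starts at %u\n",
	       tpl_start_idx[1], tpl_start_idx[0]);
	return 0;
}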
From patchwork Tue Nov 17 10:56:39 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 11911983
From: Peter Ujfalusi
Subject: [PATCH v2 02/19] dmaengine: ti: k3-udma: Wait for peer teardown completion if supported
Date: Tue, 17 Nov 2020 12:56:39 +0200
Message-ID: <20201117105656.5236-3-peter.ujfalusi@ti.com>
In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

Set the TDTYPE if it is supported on the platform (j721e), which causes UDMAP to
wait for the remote peer to finish its teardown before returning the teardown
completed message.

Signed-off-by: Peter Ujfalusi
Tested-by: Keerthy
Reviewed-by: Grygorii Strashko
---
 drivers/dma/ti/k3-udma.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index 0e8426dd18a7..eee43757e774 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -86,6 +86,7 @@ struct udma_rchan {
 
 #define UDMA_FLAG_PDMA_ACC32		BIT(0)
 #define UDMA_FLAG_PDMA_BURST		BIT(1)
+#define UDMA_FLAG_TDTYPE		BIT(2)
 
 struct udma_match_data {
 	u32 psil_base;
@@ -1589,6 +1590,13 @@ static int udma_tisci_tx_channel_config(struct udma_chan *uc)
 	req_tx.tx_fetch_size = fetch_size >> 2;
 	req_tx.txcq_qnum = tc_ring;
 	req_tx.tx_atype = uc->config.atype;
+	if (uc->config.ep_type == PSIL_EP_PDMA_XY &&
+	    ud->match_data->flags & UDMA_FLAG_TDTYPE) {
+		/* wait for peer to complete the teardown for PDMAs */
+		req_tx.valid_params |=
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_TDTYPE_VALID;
+		req_tx.tx_tdtype = 1;
+	}
 
 	ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx);
 	if (ret)
@@ -3105,14 +3113,14 @@ static struct udma_match_data am654_mcu_data = {
 static struct udma_match_data j721e_main_data = {
 	.psil_base = 0x1000,
 	.enable_memcpy_support = true,
-	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST,
+	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
 	.statictr_z_mask = GENMASK(23, 0),
 };
 
 static struct udma_match_data j721e_mcu_data = {
 	.psil_base = 0x6000,
 	.enable_memcpy_support = false, /* MEM_TO_MEM is slow via MCU UDMA */
-	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST,
+	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
 	.statictr_z_mask = GENMASK(23, 0),
 };
From patchwork Tue Nov 17 10:56:40 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 11911955
From: Peter Ujfalusi
Subject: [PATCH v2 03/19] dmaengine: ti: k3-udma: Add support for second resource range from sysfw
Date: Tue, 17 Nov 2020 12:56:40 +0200
Message-ID: <20201117105656.5236-4-peter.ujfalusi@ti.com>
In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

Resource allocation via sysfw can use up to two ranges per resource subtype to
support more complex resource assignment, mainly for DMA channels. Also take the
second range into consideration when setting up the maps for available resources.
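Before the diff that follows, a small standalone illustration (not part of the patch) of
what taking the second range into account means in practice. The range values are made up
and bitmap_clear() is modelled with plain bit operations so the snippet runs in user space;
the field names mirror the two-range layout of struct ti_sci_resource_desc used below:

#include <stdio.h>

struct range_desc {
	unsigned int start, num;		/* primary range   */
	unsigned int start_sec, num_sec;	/* secondary range */
};

static void clear_range(unsigned long *map, unsigned int start, unsigned int num)
{
	for (unsigned int i = start; i < start + num; i++)
		*map &= ~(1UL << i);		/* stand-in for bitmap_clear() */
}

int main(void)
{
	unsigned long map = ~0UL;	/* all channels initially marked unavailable */
	struct range_desc rm = { .start = 4, .num = 4, .start_sec = 20, .num_sec = 2 };

	/* both ranges become usable for this host, as udma_mark_resource_ranges() does */
	clear_range(&map, rm.start, rm.num);
	clear_range(&map, rm.start_sec, rm.num_sec);

	printf("available map: 0x%08lx\n", map & 0xffffffffUL);
	return 0;
}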
Signed-off-by: Peter Ujfalusi --- drivers/dma/ti/k3-udma.c | 55 ++++++++++++++++++++++------------------ 1 file changed, 31 insertions(+), 24 deletions(-) diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c index eee43757e774..b89afa602532 100644 --- a/drivers/dma/ti/k3-udma.c +++ b/drivers/dma/ti/k3-udma.c @@ -3174,12 +3174,22 @@ static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud) return 0; } +static void udma_mark_resource_ranges(struct udma_dev *ud, unsigned long *map, + struct ti_sci_resource_desc *rm_desc, + char *name) +{ + bitmap_clear(map, rm_desc->start, rm_desc->num); + bitmap_clear(map, rm_desc->start_sec, rm_desc->num_sec); + dev_dbg(ud->dev, "ti_sci resource range for %s: %d:%d | %d:%d\n", name, + rm_desc->start, rm_desc->num, rm_desc->start_sec, + rm_desc->num_sec); +} + static int udma_setup_resources(struct udma_dev *ud) { struct device *dev = ud->dev; int ch_count, ret, i, j; u32 cap2, cap3; - struct ti_sci_resource_desc *rm_desc; struct ti_sci_resource *rm_res, irq_res; struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; static const char * const range_names[] = { "ti,sci-rm-range-tchan", @@ -3264,13 +3274,9 @@ static int udma_setup_resources(struct udma_dev *ud) bitmap_zero(ud->tchan_map, ud->tchan_cnt); } else { bitmap_fill(ud->tchan_map, ud->tchan_cnt); - for (i = 0; i < rm_res->sets; i++) { - rm_desc = &rm_res->desc[i]; - bitmap_clear(ud->tchan_map, rm_desc->start, - rm_desc->num); - dev_dbg(dev, "ti-sci-res: tchan: %d:%d\n", - rm_desc->start, rm_desc->num); - } + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->tchan_map, + &rm_res->desc[i], "tchan"); } irq_res.sets = rm_res->sets; @@ -3280,13 +3286,9 @@ static int udma_setup_resources(struct udma_dev *ud) bitmap_zero(ud->rchan_map, ud->rchan_cnt); } else { bitmap_fill(ud->rchan_map, ud->rchan_cnt); - for (i = 0; i < rm_res->sets; i++) { - rm_desc = &rm_res->desc[i]; - bitmap_clear(ud->rchan_map, rm_desc->start, - rm_desc->num); - dev_dbg(dev, "ti-sci-res: rchan: %d:%d\n", - rm_desc->start, rm_desc->num); - } + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->rchan_map, + &rm_res->desc[i], "rchan"); } irq_res.sets += rm_res->sets; @@ -3295,12 +3297,21 @@ static int udma_setup_resources(struct udma_dev *ud) for (i = 0; i < rm_res->sets; i++) { irq_res.desc[i].start = rm_res->desc[i].start; irq_res.desc[i].num = rm_res->desc[i].num; + irq_res.desc[i].start_sec = rm_res->desc[i].start_sec; + irq_res.desc[i].num_sec = rm_res->desc[i].num_sec; } rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN]; for (j = 0; j < rm_res->sets; j++, i++) { - irq_res.desc[i].start = rm_res->desc[j].start + + if (rm_res->desc[j].num) { + irq_res.desc[i].start = rm_res->desc[j].start + ud->soc_data->rchan_oes_offset; - irq_res.desc[i].num = rm_res->desc[j].num; + irq_res.desc[i].num = rm_res->desc[j].num; + } + if (rm_res->desc[j].num_sec) { + irq_res.desc[i].start_sec = rm_res->desc[j].start_sec + + ud->soc_data->rchan_oes_offset; + irq_res.desc[i].num_sec = rm_res->desc[j].num_sec; + } } ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res); kfree(irq_res.desc); @@ -3316,13 +3327,9 @@ static int udma_setup_resources(struct udma_dev *ud) bitmap_clear(ud->rflow_gp_map, ud->rchan_cnt, ud->rflow_cnt - ud->rchan_cnt); } else { - for (i = 0; i < rm_res->sets; i++) { - rm_desc = &rm_res->desc[i]; - bitmap_clear(ud->rflow_gp_map, rm_desc->start, - rm_desc->num); - dev_dbg(dev, "ti-sci-res: rflow: %d:%d\n", - rm_desc->start, rm_desc->num); - } + for (i = 0; i < rm_res->sets; 
i++) + udma_mark_resource_ranges(ud, ud->rflow_gp_map, + &rm_res->desc[i], "gp-rflow"); } ch_count -= bitmap_weight(ud->tchan_map, ud->tchan_cnt);

From patchwork Tue Nov 17 10:56:41 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 11911979
From: Peter Ujfalusi
Subject: [PATCH v2 04/19] dmaengine: ti: k3-udma-glue: Add function to get device pointer for DMA API
Date: Tue, 17 Nov 2020 12:56:41 +0200
Message-ID: <20201117105656.5236-5-peter.ujfalusi@ti.com>
In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

Glue layer users should use
the device of the DMA for DMA mapping and allocations as it is the DMA which accesses to descriptors and buffers, not the clients Signed-off-by: Peter Ujfalusi --- drivers/dma/ti/k3-udma-glue.c | 14 ++++++++++++++ drivers/dma/ti/k3-udma-private.c | 6 ++++++ drivers/dma/ti/k3-udma.h | 1 + include/linux/dma/k3-udma-glue.h | 4 ++++ 4 files changed, 25 insertions(+) diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c index dfb65e382ab9..29d1524d1916 100644 --- a/drivers/dma/ti/k3-udma-glue.c +++ b/drivers/dma/ti/k3-udma-glue.c @@ -493,6 +493,13 @@ int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn) } EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_irq); +struct device * + k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn) +{ + return xudma_get_device(tx_chn->common.udmax); +} +EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_dma_device); + static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn) { const struct udma_tisci_rm *tisci_rm = rx_chn->common.tisci_rm; @@ -1201,3 +1208,10 @@ int k3_udma_glue_rx_get_irq(struct k3_udma_glue_rx_channel *rx_chn, return flow->virq; } EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_irq); + +struct device * + k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn) +{ + return xudma_get_device(rx_chn->common.udmax); +} +EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_dma_device); diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c index 8563a392f30b..c9fb1d832581 100644 --- a/drivers/dma/ti/k3-udma-private.c +++ b/drivers/dma/ti/k3-udma-private.c @@ -50,6 +50,12 @@ struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property) } EXPORT_SYMBOL(of_xudma_dev_get); +struct device *xudma_get_device(struct udma_dev *ud) +{ + return ud->dev; +} +EXPORT_SYMBOL(xudma_get_device); + u32 xudma_dev_get_psil_base(struct udma_dev *ud) { return ud->psil_base; diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h index 09c4529e013d..d1cace0cb43b 100644 --- a/drivers/dma/ti/k3-udma.h +++ b/drivers/dma/ti/k3-udma.h @@ -112,6 +112,7 @@ int xudma_navss_psil_unpair(struct udma_dev *ud, u32 src_thread, u32 dst_thread); struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property); +struct device *xudma_get_device(struct udma_dev *ud); void xudma_dev_put(struct udma_dev *ud); u32 xudma_dev_get_psil_base(struct udma_dev *ud); struct udma_tisci_rm *xudma_dev_get_tisci_rm(struct udma_dev *ud); diff --git a/include/linux/dma/k3-udma-glue.h b/include/linux/dma/k3-udma-glue.h index 5eb34ad973a7..d7c12f31377c 100644 --- a/include/linux/dma/k3-udma-glue.h +++ b/include/linux/dma/k3-udma-glue.h @@ -41,6 +41,8 @@ void k3_udma_glue_reset_tx_chn(struct k3_udma_glue_tx_channel *tx_chn, u32 k3_udma_glue_tx_get_hdesc_size(struct k3_udma_glue_tx_channel *tx_chn); u32 k3_udma_glue_tx_get_txcq_id(struct k3_udma_glue_tx_channel *tx_chn); int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn); +struct device * + k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn); enum { K3_UDMA_GLUE_SRC_TAG_LO_KEEP = 0, @@ -130,5 +132,7 @@ int k3_udma_glue_rx_flow_enable(struct k3_udma_glue_rx_channel *rx_chn, u32 flow_idx); int k3_udma_glue_rx_flow_disable(struct k3_udma_glue_rx_channel *rx_chn, u32 flow_idx); +struct device * + k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn); #endif /* K3_UDMA_GLUE_H_ */ From patchwork Tue Nov 17 10:56:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 11911985
From: Peter Ujfalusi
Subject: [PATCH v2 05/19] dmaengine: ti: k3-udma-glue: Configure the dma_dev for rings
Date: Tue, 17 Nov 2020 12:56:42 +0200
Message-ID: <20201117105656.5236-6-peter.ujfalusi@ti.com>
In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

Rings in RING mode should use the DMA device for the DMA API, since in this mode
the ringacc does not access the ring memory in any way, but the DMA does.

Fix up the ring configuration: set the dma_dev unconditionally and let the ringacc
driver select the correct device to use for the DMA API.
Signed-off-by: Peter Ujfalusi
---
 drivers/dma/ti/k3-udma-glue.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index 29d1524d1916..ddf0730aa2bc 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -280,6 +280,10 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
 		goto err;
 	}
 
+	/* Set the dma_dev for the rings to be configured */
+	cfg->tx_cfg.dma_dev = k3_udma_glue_tx_get_dma_device(tx_chn);
+	cfg->txcq_cfg.dma_dev = cfg->tx_cfg.dma_dev;
+
 	ret = k3_ringacc_ring_cfg(tx_chn->ringtx, &cfg->tx_cfg);
 	if (ret) {
 		dev_err(dev, "Failed to cfg ringtx %d\n", ret);
@@ -595,6 +599,10 @@ static int k3_udma_glue_cfg_rx_flow(struct k3_udma_glue_rx_channel *rx_chn,
 		goto err_rflow_put;
 	}
 
+	/* Set the dma_dev for the rings to be configured */
+	flow_cfg->rx_cfg.dma_dev = k3_udma_glue_rx_get_dma_device(rx_chn);
+	flow_cfg->rxfdq_cfg.dma_dev = flow_cfg->rx_cfg.dma_dev;
+
 	ret = k3_ringacc_ring_cfg(flow->ringrx, &flow_cfg->rx_cfg);
 	if (ret) {
 		dev_err(dev, "Failed to cfg ringrx %d\n", ret);
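Taken together with patch 4, which added k3_udma_glue_tx_get_dma_device() and
k3_udma_glue_rx_get_dma_device(), glue-layer clients are expected to map their buffers
and descriptors against the returned device rather than their own. A minimal sketch of
the client side (the tx_chn handle, buffer and error convention are hypothetical
placeholders, not taken from an in-tree driver):

#include <linux/dma-mapping.h>
#include <linux/dma/k3-udma-glue.h>

/* map one TX buffer against the DMA's device, as the series recommends */
static dma_addr_t example_map_tx_buf(struct k3_udma_glue_tx_channel *tx_chn,
				     void *buf, size_t len)
{
	struct device *dma_dev = k3_udma_glue_tx_get_dma_device(tx_chn);
	dma_addr_t addr;

	addr = dma_map_single(dma_dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dma_dev, addr))
		return 0;	/* caller treats 0 as a mapping failure in this sketch */

	return addr;
}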
From patchwork Tue Nov 17 10:56:43 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 11911973
From: Peter Ujfalusi
Subject: [PATCH v2 06/19] dmaengine: of-dma: Add support for optional router configuration callback
Date: Tue, 17 Nov 2020 12:56:43 +0200
Message-ID: <20201117105656.5236-7-peter.ujfalusi@ti.com>
In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

A channel may need additional configuration for the DMA event router, and this
cannot be done during the device_alloc_chan_resources callback since the router
information is not yet available to the driver at that point. If such additional
configuration is needed when a DMA router is in use, the driver can implement the
optional device_router_config callback.

Signed-off-by: Peter Ujfalusi
---
 drivers/dma/of-dma.c      | 10 ++++++++++
 include/linux/dmaengine.h |  2 ++
 2 files changed, 12 insertions(+)

diff --git a/drivers/dma/of-dma.c b/drivers/dma/of-dma.c
index 8a4f608904b9..ec00b20ae8e4 100644
--- a/drivers/dma/of-dma.c
+++ b/drivers/dma/of-dma.c
@@ -75,8 +75,18 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
 		ofdma->dma_router->route_free(ofdma->dma_router->dev,
 					      route_data);
 	} else {
+		int ret = 0;
+
 		chan->router = ofdma->dma_router;
 		chan->route_data = route_data;
+
+		if (chan->device->device_router_config)
+			ret = chan->device->device_router_config(chan);
+
+		if (ret) {
+			dma_release_channel(chan);
+			chan = ERR_PTR(ret);
+		}
 	}
 
 	/*
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index dd357a747780..d6197fe875af 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -800,6 +800,7 @@ struct dma_filter {
  *	by tx_status
  * @device_alloc_chan_resources: allocate resources and return the
  *	number of allocated descriptors
+ * @device_router_config: optional callback for DMA router configuration
  * @device_free_chan_resources: release DMA channel's resources
  * @device_prep_dma_memcpy: prepares a memcpy operation
  * @device_prep_dma_xor: prepares a xor operation
@@ -874,6 +875,7 @@ struct dma_device {
 	enum dma_residue_granularity residue_granularity;
 
 	int (*device_alloc_chan_resources)(struct dma_chan *chan);
+	int (*device_router_config)(struct dma_chan *chan);
 	void (*device_free_chan_resources)(struct dma_chan *chan);
 	struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)(
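As an illustration of how a DMA controller driver might hook into the new callback, a
minimal sketch follows. The example_* names and the assumed route_data layout are
hypothetical and not taken from this series; the only real interface used is the
device_router_config member added above:

#include <linux/dmaengine.h>
#include <linux/errno.h>

/* hypothetical channel wrapper for a driver whose router decides a trigger id */
struct example_chan {
	struct dma_chan vc;	/* embedded dmaengine channel */
	int trigger;		/* trigger selected by the event router */
};

static int example_router_config(struct dma_chan *chan)
{
	struct example_chan *echan = container_of(chan, struct example_chan, vc);

	/*
	 * By the time this runs, of_dma_router_xlate() has stored the router's
	 * route_data on the channel, so the driver can pick up what it decided.
	 */
	if (!chan->route_data)
		return -EINVAL;

	echan->trigger = *(int *)chan->route_data;	/* assumed route_data layout */
	return 0;
}

/* registered alongside the other dma_device callbacks at probe time */
static void example_register_callbacks(struct dma_device *dd)
{
	dd->device_router_config = example_router_config;
}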
From patchwork Tue Nov 17 10:56:44 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 11911969
From: Peter Ujfalusi
Subject: [PATCH v2 07/19] dmaengine: Add support for per channel coherency handling
Date: Tue, 17 Nov 2020 12:56:44 +0200
Message-ID: <20201117105656.5236-8-peter.ujfalusi@ti.com>
In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

If the DMA device supports per-channel coherency configuration (a channel can be
configured to have a coherent or a non-coherent view), then a single device (the
DMA controller's device) cannot be used for the DMA API for all channels, as the
channels can have different coherency.

Introduce the custom_dma_mapping flag for the dma_chan and a new helper to get the
device pointer to be used for the DMA API for the given channel.
Client drivers should be updated to be able to support per channel coherency by:

-	dma_map_single(chan->device->dev, ptr, size, DMA_TO_DEVICE);
+	struct device *dma_dev = dmaengine_get_dma_device(chan);
+
+	dma_map_single(dma_dev, ptr, size, DMA_TO_DEVICE);

Signed-off-by: Peter Ujfalusi
---
 include/linux/dmaengine.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index d6197fe875af..182a1a2e7793 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -357,11 +357,14 @@ struct dma_chan {
  * @chan: driver channel device
  * @device: sysfs device
  * @dev_id: parent dma_device dev_id
+ * @chan_dma_dev: The channel is using custom/different dma-mapping
+ *	compared to the parent dma_device
  */
 struct dma_chan_dev {
 	struct dma_chan *chan;
 	struct device device;
 	int dev_id;
+	bool chan_dma_dev;
 };
 
 /**
@@ -1613,4 +1616,13 @@ dmaengine_get_direction_text(enum dma_transfer_direction dir)
 		return "invalid";
 	}
 }
+
+static inline struct device *dmaengine_get_dma_device(struct dma_chan *chan)
+{
+	if (chan->dev->chan_dma_dev)
+		return &chan->dev->device;
+
+	return chan->device->dev;
+}
+
 #endif /* DMAENGINE_H */
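On the provider side, a controller driver that knows a given channel's coherency differs
from its parent's would be expected to flag the channel's sysfs device and make sure that
device carries the right DMA settings. A hedged sketch, assuming the per-channel device
can simply inherit its DMA configuration from the controller's DT node via
of_dma_configure(); the example_* name is invented and this is not the mechanism any
specific in-tree driver is confirmed to use:

#include <linux/dmaengine.h>
#include <linux/of_device.h>

/*
 * Mark one channel as having its own DMA-mapping view, so that
 * dmaengine_get_dma_device() returns the per-channel device.
 */
static int example_mark_chan_dma_dev(struct dma_chan *chan, struct device *parent)
{
	chan->dev->chan_dma_dev = true;

	/* give the per-channel device suitable dma_ops/mask/coherency settings */
	return of_dma_configure(&chan->dev->device, parent->of_node, true);
}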
From patchwork Tue Nov 17 10:56:45 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 11911961
From: Peter Ujfalusi
Subject: [PATCH v2 08/19] dmaengine: doc: client: Update for dmaengine_get_dma_device() usage
Date: Tue, 17 Nov 2020 12:56:45 +0200
Message-ID: <20201117105656.5236-9-peter.ujfalusi@ti.com>
In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

Client drivers should use dmaengine_get_dma_device(chan) to get the device pointer
which should be used with the DMA API for allocations and mapping.

Signed-off-by: Peter Ujfalusi
---
 Documentation/driver-api/dmaengine/client.rst | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/Documentation/driver-api/dmaengine/client.rst b/Documentation/driver-api/dmaengine/client.rst
index 09a3f66dcd26..bfd057b21a00 100644
--- a/Documentation/driver-api/dmaengine/client.rst
+++ b/Documentation/driver-api/dmaengine/client.rst
@@ -120,7 +120,9 @@ The details of these operations are:
 
    .. code-block:: c
 
-      nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len);
+      struct device *dma_dev = dmaengine_get_dma_device(chan);
+
+      nr_sg = dma_map_sg(dma_dev, sgl, sg_len);
       if (nr_sg == 0)
 	/* error */
From patchwork Tue Nov 17 10:56:46 2020
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 11911957
From: Peter Ujfalusi
Subject: [PATCH v2 09/19] dmaengine: dmatest: Use dmaengine_get_dma_device
Date: Tue, 17 Nov 2020 12:56:46 +0200
Message-ID: <20201117105656.5236-10-peter.ujfalusi@ti.com>
In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

By using dmaengine_get_dma_device() to get the device for DMA API use, dmatest can
support per-channel coherency if the DMA controller supports it.
Signed-off-by: Peter Ujfalusi --- drivers/dma/dmatest.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c index a3a172173e34..f696246f57fd 100644 --- a/drivers/dma/dmatest.c +++ b/drivers/dma/dmatest.c @@ -573,6 +573,7 @@ static int dmatest_func(void *data) struct dmatest_params *params; struct dma_chan *chan; struct dma_device *dev; + struct device *dma_dev; unsigned int error_count; unsigned int failed_tests = 0; unsigned int total_tests = 0; @@ -606,6 +607,8 @@ static int dmatest_func(void *data) params = &info->params; chan = thread->chan; dev = chan->device; + dma_dev = dmaengine_get_dma_device(chan); + src = &thread->src; dst = &thread->dst; if (thread->type == DMA_MEMCPY) { @@ -730,7 +733,7 @@ static int dmatest_func(void *data) filltime = ktime_add(filltime, diff); } - um = dmaengine_get_unmap_data(dev->dev, src->cnt + dst->cnt, + um = dmaengine_get_unmap_data(dma_dev, src->cnt + dst->cnt, GFP_KERNEL); if (!um) { failed_tests++; @@ -745,10 +748,10 @@ static int dmatest_func(void *data) struct page *pg = virt_to_page(buf); unsigned long pg_off = offset_in_page(buf); - um->addr[i] = dma_map_page(dev->dev, pg, pg_off, + um->addr[i] = dma_map_page(dma_dev, pg, pg_off, um->len, DMA_TO_DEVICE); srcs[i] = um->addr[i] + src->off; - ret = dma_mapping_error(dev->dev, um->addr[i]); + ret = dma_mapping_error(dma_dev, um->addr[i]); if (ret) { result("src mapping error", total_tests, src->off, dst->off, len, ret); @@ -763,9 +766,9 @@ static int dmatest_func(void *data) struct page *pg = virt_to_page(buf); unsigned long pg_off = offset_in_page(buf); - dsts[i] = dma_map_page(dev->dev, pg, pg_off, um->len, + dsts[i] = dma_map_page(dma_dev, pg, pg_off, um->len, DMA_BIDIRECTIONAL); - ret = dma_mapping_error(dev->dev, dsts[i]); + ret = dma_mapping_error(dma_dev, dsts[i]); if (ret) { result("dst mapping error", total_tests, src->off, dst->off, len, ret); From patchwork Tue Nov 17 10:56:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 11911971 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DEEE7C6379D for ; Tue, 17 Nov 2020 10:57:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7D6A320781 for ; Tue, 17 Nov 2020 10:57:31 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="JmQcRaX+" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728155AbgKQK4r (ORCPT ); Tue, 17 Nov 2020 05:56:47 -0500 Received: from fllv0015.ext.ti.com ([198.47.19.141]:59680 "EHLO fllv0015.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728143AbgKQK4r (ORCPT ); Tue, 17 Nov 2020 05:56:47 -0500 Received: from lelv0265.itg.ti.com ([10.180.67.224]) by fllv0015.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAuedX019712; Tue, 17 Nov 2020 04:56:40 -0600 DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1605610600; bh=/4QhnmGYetu7mOrPwRhCMxnc8g/UcVArKeaf7KTL62w=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=JmQcRaX+93hD91db972LZJw3PuNUkTa6UUEp6u8/LI8sQ+rhtetEmayyjTm5J1hF1 JeVGaBdyPJpXWGlUwUN5OFdccbI12VzKqmRVYvISIiJIE/Oy4phugeXubcawhl2Tc4 GwdFGyMPo9795wikvKL3xPkJ6WeJmFZ1peNrgLAo= Received: from DFLE105.ent.ti.com (dfle105.ent.ti.com [10.64.6.26]) by lelv0265.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 0AHAueSC005469 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Tue, 17 Nov 2020 04:56:40 -0600 Received: from DFLE108.ent.ti.com (10.64.6.29) by DFLE105.ent.ti.com (10.64.6.26) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Tue, 17 Nov 2020 04:56:40 -0600 Received: from lelv0327.itg.ti.com (10.180.67.183) by DFLE108.ent.ti.com (10.64.6.29) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Tue, 17 Nov 2020 04:56:40 -0600 Received: from feketebors.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAu6tu087311; Tue, 17 Nov 2020 04:56:37 -0600 From: Peter Ujfalusi To: , , , CC: , , , , , , , Subject: [PATCH v2 10/19] dt-bindings: dma: ti: Add document for K3 BCDMA Date: Tue, 17 Nov 2020 12:56:47 +0200 Message-ID: <20201117105656.5236-11-peter.ujfalusi@ti.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com> References: <20201117105656.5236-1-peter.ujfalusi@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org New binding document for Texas Instruments K3 Block Copy DMA (BCDMA). BCDMA is introduced as part of AM64. Signed-off-by: Peter Ujfalusi --- .../devicetree/bindings/dma/ti/k3-bcdma.yaml | 175 ++++++++++++++++++ 1 file changed, 175 insertions(+) create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml diff --git a/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml new file mode 100644 index 000000000000..c6d76641ebec --- /dev/null +++ b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml @@ -0,0 +1,175 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/dma/ti/k3-bcdma.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Texas Instruments K3 DMSS BCDMA Device Tree Bindings + +maintainers: + - Peter Ujfalusi + +description: | + The Block Copy DMA (BCDMA) is intended to perform similar functions as the TR + mode channels of K3 UDMA-P. + BCDMA includes block copy channels and Split channels. + + Block copy channels mainly used for memory to memory transfers, but with + optional triggers a block copy channel can service peripherals by accessing + directly to memory mapped registers or area. + + Split channels can be used to service PSI-L based peripherals. + The peripherals can be PSI-L native or legacy, non PSI-L native peripherals + with PDMAs. PDMA is tasked to act as a bridge between the PSI-L fabric and the + legacy peripheral. + + PDMAs can be configured via BCDMA split channel's peer registers to match with + the configuration of the legacy peripheral. 
+ +allOf: + - $ref: /schemas/dma/dma-controller.yaml# + +properties: + "#dma-cells": + const: 3 + description: | + cell 1: type of the BCDMA channel to be used to service the peripheral: + 0 - split channel + 1 - block copy channel using global trigger 1 + 2 - block copy channel using global trigger 2 + 3 - block copy channel using local trigger + + cell 2: parameter for the channel: + if cell 1 is 0 (split channel): + PSI-L thread ID of the remote (to BCDMA) end. + Valid ranges for thread ID depends on the data movement direction: + for source thread IDs (rx): 0 - 0x7fff + for destination thread IDs (tx): 0x8000 - 0xffff + + Please refer to the device documentation for the PSI-L thread map and + also the PSI-L peripheral chapter for the correct thread ID. + if cell 1 is 1 or 2 (block copy channel using global trigger): + Unused, ignored + + The trigger must be configured for the channel externally to BCDMA, + channels using global triggers should not be requested directly, but + via DMA event router. + if cell 1 is 3 (block copy channel using local trigger): + bchan number of the locally triggered channel + + cell 3: ASEL value for the channel + + compatible: + enum: + - ti,am64-dmss-bcdma + + "#address-cells": + const: 2 + + "#size-cells": + const: 2 + + reg: + maxItems: 5 + + reg-names: + items: + - const: gcfg + - const: bchanrt + - const: rchanrt + - const: tchanrt + - const: ringrt + + msi-parent: true + + ti,asel: + $ref: /schemas/types.yaml#/definitions/uint32 + description: ASEL value for non slave channels + + ti,sci-rm-range-bchan: + $ref: /schemas/types.yaml#/definitions/uint32-array + description: | + Array of BCDMA block-copy channel resource subtypes for resource + allocation for this host + minItems: 1 + # Should be enough + maxItems: 255 + items: + maximum: 0x3f + + ti,sci-rm-range-tchan: + $ref: /schemas/types.yaml#/definitions/uint32-array + description: | + Array of BCDMA split tx channel resource subtypes for resource allocation + for this host + minItems: 1 + # Should be enough + maxItems: 255 + items: + maximum: 0x3f + + ti,sci-rm-range-rchan: + $ref: /schemas/types.yaml#/definitions/uint32-array + description: | + Array of BCDMA split rx channel resource subtypes for resource allocation + for this host + minItems: 1 + # Should be enough + maxItems: 255 + items: + maximum: 0x3f + +required: + - compatible + - "#address-cells" + - "#size-cells" + - "#dma-cells" + - reg + - reg-names + - msi-parent + - ti,sci + - ti,sci-dev-id + - ti,sci-rm-range-bchan + - ti,sci-rm-range-tchan + - ti,sci-rm-range-rchan + +unevaluatedProperties: false + +examples: + - |+ + cbass_main { + #address-cells = <2>; + #size-cells = <2>; + + main_dmss { + compatible = "simple-mfd"; + #address-cells = <2>; + #size-cells = <2>; + dma-ranges; + ranges; + + ti,sci-dev-id = <25>; + + main_bcdma: dma-controller@485c0100 { + compatible = "ti,am64-dmss-bcdma"; + #address-cells = <2>; + #size-cells = <2>; + + reg = <0x0 0x485c0100 0x0 0x100>, + <0x0 0x4c000000 0x0 0x20000>, + <0x0 0x4a820000 0x0 0x20000>, + <0x0 0x4aa40000 0x0 0x20000>, + <0x0 0x4bc00000 0x0 0x100000>; + reg-names = "gcfg", "bchanrt", "rchanrt", "tchanrt", "ringrt"; + msi-parent = <&inta_main_dmss>; + #dma-cells = <3>; + + ti,sci = <&dmsc>; + ti,sci-dev-id = <26>; + + ti,sci-rm-range-bchan = <0x20>; /* BLOCK_COPY_CHAN */ + ti,sci-rm-range-rchan = <0x21>; /* SPLIT_TR_RX_CHAN */ + ti,sci-rm-range-tchan = <0x22>; /* SPLIT_TR_TX_CHAN */ + }; + }; + }; From patchwork Tue Nov 17 10:56:48 2020 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 11911959 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DEA29C6379F for ; Tue, 17 Nov 2020 10:57:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8331120781 for ; Tue, 17 Nov 2020 10:57:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="bsUeR3Gn" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728190AbgKQK4y (ORCPT ); Tue, 17 Nov 2020 05:56:54 -0500 Received: from fllv0016.ext.ti.com ([198.47.19.142]:34146 "EHLO fllv0016.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728186AbgKQK4x (ORCPT ); Tue, 17 Nov 2020 05:56:53 -0500 Received: from fllv0035.itg.ti.com ([10.64.41.0]) by fllv0016.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAuhXd117362; Tue, 17 Nov 2020 04:56:43 -0600 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1605610603; bh=9o/pzPysNXhm8GFEdjeTAg4i/9Rt9N3Tow0i0r4/gwM=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=bsUeR3Gn+IGU3jE9BJayFDTZj5M3q84+32Tpo5n5CPSe/An08ZVU+66Uu61Nd2ms4 JXLiTeyJpuFkRd4Sz+9QEeXZpo+PcArsM1p9RDRnJNaMexLo0IHtPhvtAhZig0UD7b yHvijep5PeL4HcLzJet9D6zfm5JLkyBEkSxSjaKA= Received: from DLEE103.ent.ti.com (dlee103.ent.ti.com [157.170.170.33]) by fllv0035.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 0AHAuh0V019659 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Tue, 17 Nov 2020 04:56:43 -0600 Received: from DLEE114.ent.ti.com (157.170.170.25) by DLEE103.ent.ti.com (157.170.170.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Tue, 17 Nov 2020 04:56:43 -0600 Received: from lelv0327.itg.ti.com (10.180.67.183) by DLEE114.ent.ti.com (157.170.170.25) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Tue, 17 Nov 2020 04:56:43 -0600 Received: from feketebors.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAu6tv087311; Tue, 17 Nov 2020 04:56:40 -0600 From: Peter Ujfalusi To: , , , CC: , , , , , , , Subject: [PATCH v2 11/19] dt-bindings: dma: ti: Add document for K3 PKTDMA Date: Tue, 17 Nov 2020 12:56:48 +0200 Message-ID: <20201117105656.5236-12-peter.ujfalusi@ti.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com> References: <20201117105656.5236-1-peter.ujfalusi@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org New binding document for Texas Instruments K3 Packet DMA (PKTDMA). PKTDMA is introduced as part of AM64. 
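For context only (this is not part of the submitted patch): the controller bindings in this series describe the DMA side, while a peripheral serviced by a DMSS split channel is reached through the generic dmaengine consumer API. Below is a minimal sketch of that consumer side; the "tx" dma-names entry and the FIFO address are hypothetical placeholders for whatever the consumer node actually defines.

#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/err.h>
#include <linux/types.h>

/*
 * Minimal consumer-side sketch (illustrative only): request a TX channel that
 * the device tree maps to a DMSS split channel and apply the usual slave
 * configuration. The "tx" dma-names entry and fifo_addr are hypothetical.
 */
static int example_request_tx_dma(struct device *dev, dma_addr_t fifo_addr)
{
	struct dma_slave_config cfg = {
		.direction = DMA_MEM_TO_DEV,
		.dst_addr = fifo_addr,
		.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
		.dst_maxburst = 8,
	};
	struct dma_chan *chan;
	int ret;

	chan = dma_request_chan(dev, "tx");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	ret = dmaengine_slave_config(chan, &cfg);
	if (ret) {
		dma_release_channel(chan);
		return ret;
	}

	/* dmaengine_prep_slave_*() / dmaengine_submit() would follow here */
	dma_release_channel(chan);
	return 0;
}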
Signed-off-by: Peter Ujfalusi --- .../devicetree/bindings/dma/ti/k3-pktdma.yaml | 183 ++++++++++++++++++ 1 file changed, 183 insertions(+) create mode 100644 Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml diff --git a/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml b/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml new file mode 100644 index 000000000000..bf49b0135fbe --- /dev/null +++ b/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml @@ -0,0 +1,183 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/dma/ti/k3-pktdma.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Texas Instruments K3 DMSS PKTDMA Device Tree Bindings + +maintainers: + - Peter Ujfalusi + +description: | + The Packet DMA (PKTDMA) is intended to perform similar functions as the packet + mode channels of K3 UDMA-P. + PKTDMA only includes Split channels to service PSI-L based peripherals. + + The peripherals can be PSI-L native or legacy, non PSI-L native peripherals + with PDMAs. PDMA is tasked to act as a bridge between the PSI-L fabric and the + legacy peripheral. + + PDMAs can be configured via PKTDMA split channel's peer registers to match + with the configuration of the legacy peripheral. + +allOf: + - $ref: /schemas/dma/dma-controller.yaml# + +properties: + "#dma-cells": + const: 2 + description: | + The first cell is the PSI-L thread ID of the remote (to PKTDMA) end. + Valid ranges for thread ID depends on the data movement direction: + for source thread IDs (rx): 0 - 0x7fff + for destination thread IDs (tx): 0x8000 - 0xffff + + Please refer to the device documentation for the PSI-L thread map and also + the PSI-L peripheral chapter for the correct thread ID. + + The second cell is the ASEL value for the channel + + compatible: + enum: + - ti,am64-dmss-pktdma + + "#address-cells": + const: 2 + + "#size-cells": + const: 2 + + reg: + maxItems: 4 + + reg-names: + items: + - const: gcfg + - const: rchanrt + - const: tchanrt + - const: ringrt + + msi-parent: true + + ti,sci-rm-range-tchan: + $ref: /schemas/types.yaml#/definitions/uint32-array + description: | + Array of PKTDMA split tx channel resource subtypes for resource allocation + for this host + minItems: 1 + # Should be enough + maxItems: 255 + items: + maximum: 0x3f + + ti,sci-rm-range-tflow: + $ref: /schemas/types.yaml#/definitions/uint32-array + description: | + Array of PKTDMA split tx flow resource subtypes for resource allocation + for this host + minItems: 1 + # Should be enough + maxItems: 255 + items: + maximum: 0x3f + + ti,sci-rm-range-rchan: + $ref: /schemas/types.yaml#/definitions/uint32-array + description: | + Array of PKTDMA split rx channel resource subtypes for resource allocation + for this host + minItems: 1 + # Should be enough + maxItems: 255 + items: + maximum: 0x3f + + ti,sci-rm-range-rflow: + $ref: /schemas/types.yaml#/definitions/uint32-array + description: | + Array of PKTDMA split rx flow resource subtypes for resource allocation + for this host + minItems: 1 + # Should be enough + maxItems: 255 + items: + maximum: 0x3f + +required: + - compatible + - "#address-cells" + - "#size-cells" + - "#dma-cells" + - reg + - reg-names + - msi-parent + - ti,sci + - ti,sci-dev-id + - ti,sci-rm-range-tchan + - ti,sci-rm-range-tflow + - ti,sci-rm-range-rchan + - ti,sci-rm-range-rflow + +unevaluatedProperties: false + +examples: + - |+ + cbass_main { + #address-cells = <2>; + #size-cells = <2>; + + main_dmss { + compatible = "simple-mfd"; + 
#address-cells = <2>; + #size-cells = <2>; + dma-ranges; + ranges; + + ti,sci-dev-id = <25>; + + main_pktdma: dma-controller@485c0000 { + compatible = "ti,am64-dmss-pktdma"; + #address-cells = <2>; + #size-cells = <2>; + + reg = <0x0 0x485c0000 0x0 0x100>, + <0x0 0x4a800000 0x0 0x20000>, + <0x0 0x4aa00000 0x0 0x40000>, + <0x0 0x4b800000 0x0 0x400000>; + reg-names = "gcfg", "rchanrt", "tchanrt", "ringrt"; + msi-parent = <&inta_main_dmss>; + #dma-cells = <2>; + + ti,sci = <&dmsc>; + ti,sci-dev-id = <30>; + + ti,sci-rm-range-tchan = <0x23>, /* UNMAPPED_TX_CHAN */ + <0x24>, /* CPSW_TX_CHAN */ + <0x25>, /* SAUL_TX_0_CHAN */ + <0x26>, /* SAUL_TX_1_CHAN */ + <0x27>, /* ICSSG_0_TX_CHAN */ + <0x28>; /* ICSSG_1_TX_CHAN */ + ti,sci-rm-range-tflow = <0x10>, /* RING_UNMAPPED_TX_CHAN */ + <0x11>, /* RING_CPSW_TX_CHAN */ + <0x12>, /* RING_SAUL_TX_0_CHAN */ + <0x13>, /* RING_SAUL_TX_1_CHAN */ + <0x14>, /* RING_ICSSG_0_TX_CHAN */ + <0x15>; /* RING_ICSSG_1_TX_CHAN */ + ti,sci-rm-range-rchan = <0x29>, /* UNMAPPED_RX_CHAN */ + <0x2b>, /* CPSW_RX_CHAN */ + <0x2d>, /* SAUL_RX_0_CHAN */ + <0x2f>, /* SAUL_RX_1_CHAN */ + <0x31>, /* SAUL_RX_2_CHAN */ + <0x33>, /* SAUL_RX_3_CHAN */ + <0x35>, /* ICSSG_0_RX_CHAN */ + <0x37>; /* ICSSG_1_RX_CHAN */ + ti,sci-rm-range-rflow = <0x2a>, /* FLOW_UNMAPPED_RX_CHAN */ + <0x2c>, /* FLOW_CPSW_RX_CHAN */ + <0x2e>, /* FLOW_SAUL_RX_0/1_CHAN */ + <0x32>, /* FLOW_SAUL_RX_2/3_CHAN */ + <0x36>, /* FLOW_ICSSG_0_RX_CHAN */ + <0x38>; /* FLOW_ICSSG_1_RX_CHAN */ + }; + }; + }; From patchwork Tue Nov 17 10:56:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 11911953 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 66E06C63777 for ; Tue, 17 Nov 2020 10:57:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 065A020781 for ; Tue, 17 Nov 2020 10:57:31 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="N57a+EYm" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728139AbgKQK4y (ORCPT ); Tue, 17 Nov 2020 05:56:54 -0500 Received: from fllv0016.ext.ti.com ([198.47.19.142]:34142 "EHLO fllv0016.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728143AbgKQK4w (ORCPT ); Tue, 17 Nov 2020 05:56:52 -0500 Received: from fllv0034.itg.ti.com ([10.64.40.246]) by fllv0016.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAuka1117372; Tue, 17 Nov 2020 04:56:46 -0600 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1605610606; bh=KrUTk8ul87troFJrpjX2nEBHGpMAgPlBoUHLfqkKttI=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=N57a+EYmGunHmUk0CrK3SaBvFLQyP9HZIozZ2Jsp6erWv6bhA9MyAbf3zOtkEYCub dQv5h/NRlz3UdZPBva4gW0MawvYutWCkbmwYISiOLgdUbWCUx+BqTlr0L3+ZC/vXMr TyAD9CY6wUpQT8t3dqvv9YTm5729JvR7R5PlexVI= Received: from DFLE102.ent.ti.com (dfle102.ent.ti.com [10.64.6.23]) by fllv0034.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 
0AHAukCA010356 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Tue, 17 Nov 2020 04:56:46 -0600 Received: from DFLE110.ent.ti.com (10.64.6.31) by DFLE102.ent.ti.com (10.64.6.23) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Tue, 17 Nov 2020 04:56:46 -0600 Received: from lelv0327.itg.ti.com (10.180.67.183) by DFLE110.ent.ti.com (10.64.6.31) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Tue, 17 Nov 2020 04:56:46 -0600 Received: from feketebors.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAu6tw087311; Tue, 17 Nov 2020 04:56:43 -0600 From: Peter Ujfalusi To: , , , CC: , , , , , , , Subject: [PATCH v2 12/19] dmaengine: ti: k3-psil: Extend psil_endpoint_config for K3 PKTDMA Date: Tue, 17 Nov 2020 12:56:49 +0200 Message-ID: <20201117105656.5236-13-peter.ujfalusi@ti.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com> References: <20201117105656.5236-1-peter.ujfalusi@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Additional fields needed for K3 PKTDMA to be able to handle the mapped channels (channels are locked to handle specific threads) and flow ranges for these mapped threads. PKTDMA also introduces tflow for tx channels which can not be found in K3 UDMA architecture. Signed-off-by: Peter Ujfalusi --- include/linux/dma/k3-psil.h | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/include/linux/dma/k3-psil.h b/include/linux/dma/k3-psil.h index 1962f75fa2d3..36e22c5a0f29 100644 --- a/include/linux/dma/k3-psil.h +++ b/include/linux/dma/k3-psil.h @@ -50,6 +50,15 @@ enum psil_endpoint_type { * @channel_tpl: Desired throughput level for the channel * @pdma_acc32: ACC32 must be enabled on the PDMA side * @pdma_burst: BURST must be enabled on the PDMA side + * @mapped_channel_id: PKTDMA thread to channel mapping for mapped channels. + * The thread must be serviced by the specified channel if + * mapped_channel_id is >= 0 in case of PKTDMA + * @flow_start: PKDMA flow range start of mapped channel. Unmapped + * channels use flow_id == chan_id + * @flow_num: PKDMA flow count of mapped channel. Unmapped channels + * use flow_id == chan_id + * @default_flow_id: PKDMA default (r)flow index of mapped channel. + * Must be within the flow range of the mapped channel. 
*/ struct psil_endpoint_config { enum psil_endpoint_type ep_type; @@ -63,6 +72,13 @@ struct psil_endpoint_config { /* PDMA properties, valid for PSIL_EP_PDMA_* */ unsigned pdma_acc32:1; unsigned pdma_burst:1; + + /* PKDMA mapped channel */ + int mapped_channel_id; + /* PKTDMA tflow and rflow ranges for mapped channel */ + u16 flow_start; + u16 flow_num; + u16 default_flow_id; }; int psil_set_new_ep_config(struct device *dev, const char *name, From patchwork Tue Nov 17 10:56:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 11911977 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D084EC64E7B for ; Tue, 17 Nov 2020 10:57:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8148E20781 for ; Tue, 17 Nov 2020 10:57:33 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="DNy/+n2i" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728209AbgKQK5A (ORCPT ); Tue, 17 Nov 2020 05:57:00 -0500 Received: from fllv0016.ext.ti.com ([198.47.19.142]:34160 "EHLO fllv0016.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728186AbgKQK5A (ORCPT ); Tue, 17 Nov 2020 05:57:00 -0500 Received: from lelv0266.itg.ti.com ([10.180.67.225]) by fllv0016.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAunrt117383; Tue, 17 Nov 2020 04:56:49 -0600 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1605610609; bh=gWwHItEoo4DJVr7FXCMLBP6bV/xJktpY4mjYFaYMosU=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=DNy/+n2iOuq8bLOtdTYZ9s6iDVtaEBKjzTxV5pbvDHy6mIeyKD67L+orRkP7lulbS 1tzWXw461y8QpetKN+wEfNo4/1nhl5iL8r5nYcnNbn4l0d8Stpw44CyShOR3ZNhYBY WiLAHXhx+pUJon2TatAdFaht37VEqg1k75kNul7M= Received: from DFLE100.ent.ti.com (dfle100.ent.ti.com [10.64.6.21]) by lelv0266.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 0AHAunVd085888 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Tue, 17 Nov 2020 04:56:49 -0600 Received: from DFLE103.ent.ti.com (10.64.6.24) by DFLE100.ent.ti.com (10.64.6.21) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Tue, 17 Nov 2020 04:56:49 -0600 Received: from lelv0327.itg.ti.com (10.180.67.183) by DFLE103.ent.ti.com (10.64.6.24) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Tue, 17 Nov 2020 04:56:49 -0600 Received: from feketebors.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAu6tx087311; Tue, 17 Nov 2020 04:56:46 -0600 From: Peter Ujfalusi To: , , , CC: , , , , , , , Subject: [PATCH v2 13/19] dmaengine: ti: k3-psil: Add initial map for AM64 Date: Tue, 17 Nov 2020 12:56:50 +0200 Message-ID: <20201117105656.5236-14-peter.ujfalusi@ti.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: 
<20201117105656.5236-1-peter.ujfalusi@ti.com> References: <20201117105656.5236-1-peter.ujfalusi@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Add initial PSI-L map file for AM64. Signed-off-by: Peter Ujfalusi --- drivers/dma/ti/Makefile | 3 +- drivers/dma/ti/k3-psil-am64.c | 75 +++++++++++++++++++++++++++++++++++ drivers/dma/ti/k3-psil-priv.h | 1 + drivers/dma/ti/k3-psil.c | 1 + 4 files changed, 79 insertions(+), 1 deletion(-) create mode 100644 drivers/dma/ti/k3-psil-am64.c diff --git a/drivers/dma/ti/Makefile b/drivers/dma/ti/Makefile index 0c67254caee6..bd496efadff7 100644 --- a/drivers/dma/ti/Makefile +++ b/drivers/dma/ti/Makefile @@ -7,5 +7,6 @@ obj-$(CONFIG_TI_K3_UDMA_GLUE_LAYER) += k3-udma-glue.o obj-$(CONFIG_TI_K3_PSIL) += k3-psil.o \ k3-psil-am654.o \ k3-psil-j721e.o \ - k3-psil-j7200.o + k3-psil-j7200.o \ + k3-psil-am64.o obj-$(CONFIG_TI_DMA_CROSSBAR) += dma-crossbar.o diff --git a/drivers/dma/ti/k3-psil-am64.c b/drivers/dma/ti/k3-psil-am64.c new file mode 100644 index 000000000000..e88f57a36ac1 --- /dev/null +++ b/drivers/dma/ti/k3-psil-am64.c @@ -0,0 +1,75 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020 Texas Instruments Incorporated - https://www.ti.com + * Author: Peter Ujfalusi + */ + +#include + +#include "k3-psil-priv.h" + +#define PSIL_ETHERNET(x, ch, flow_base, flow_cnt) \ + { \ + .thread_id = x, \ + .ep_config = { \ + .ep_type = PSIL_EP_NATIVE, \ + .pkt_mode = 1, \ + .needs_epib = 1, \ + .psd_size = 16, \ + .mapped_channel_id = ch, \ + .flow_start = flow_base, \ + .flow_num = flow_cnt, \ + .default_flow_id = flow_base, \ + }, \ + } + +#define PSIL_SAUL(x, ch, flow_base, flow_cnt, default_flow, tx) \ + { \ + .thread_id = x, \ + .ep_config = { \ + .ep_type = PSIL_EP_NATIVE, \ + .pkt_mode = 1, \ + .needs_epib = 1, \ + .psd_size = 64, \ + .mapped_channel_id = ch, \ + .flow_start = flow_base, \ + .flow_num = flow_cnt, \ + .default_flow_id = default_flow, \ + .notdpkt = tx, \ + }, \ + } + +/* PSI-L source thread IDs, used for RX (DMA_DEV_TO_MEM) */ +static struct psil_ep am64_src_ep_map[] = { + /* SA2UL */ + PSIL_SAUL(0x4000, 17, 32, 8, 32, 0), + PSIL_SAUL(0x4001, 18, 32, 8, 33, 0), + PSIL_SAUL(0x4002, 19, 40, 8, 40, 0), + PSIL_SAUL(0x4003, 20, 40, 8, 41, 0), + /* CPSW3G */ + PSIL_ETHERNET(0x4500, 16, 16, 16), +}; + +/* PSI-L destination thread IDs, used for TX (DMA_MEM_TO_DEV) */ +static struct psil_ep am64_dst_ep_map[] = { + /* SA2UL */ + PSIL_SAUL(0xc000, 24, 80, 8, 80, 1), + PSIL_SAUL(0xc001, 25, 88, 8, 88, 1), + /* CPSW3G */ + PSIL_ETHERNET(0xc500, 16, 16, 8), + PSIL_ETHERNET(0xc501, 17, 24, 8), + PSIL_ETHERNET(0xc502, 18, 32, 8), + PSIL_ETHERNET(0xc503, 19, 40, 8), + PSIL_ETHERNET(0xc504, 20, 48, 8), + PSIL_ETHERNET(0xc505, 21, 56, 8), + PSIL_ETHERNET(0xc506, 22, 64, 8), + PSIL_ETHERNET(0xc507, 23, 72, 8), +}; + +struct psil_ep_map am64_ep_map = { + .name = "am64", + .src = am64_src_ep_map, + .src_count = ARRAY_SIZE(am64_src_ep_map), + .dst = am64_dst_ep_map, + .dst_count = ARRAY_SIZE(am64_dst_ep_map), +}; diff --git a/drivers/dma/ti/k3-psil-priv.h b/drivers/dma/ti/k3-psil-priv.h index b4b0fb359eff..b74e192e3c2d 100644 --- a/drivers/dma/ti/k3-psil-priv.h +++ b/drivers/dma/ti/k3-psil-priv.h @@ -40,5 +40,6 @@ struct psil_endpoint_config *psil_get_ep_config(u32 thread_id); extern struct psil_ep_map am654_ep_map; extern struct psil_ep_map j721e_ep_map; extern struct psil_ep_map j7200_ep_map; +extern struct psil_ep_map am64_ep_map; #endif /* 
K3_PSIL_PRIV_H_ */ diff --git a/drivers/dma/ti/k3-psil.c b/drivers/dma/ti/k3-psil.c index 837853aab95a..13ce7367d870 100644 --- a/drivers/dma/ti/k3-psil.c +++ b/drivers/dma/ti/k3-psil.c @@ -20,6 +20,7 @@ static const struct soc_device_attribute k3_soc_devices[] = { { .family = "AM65X", .data = &am654_ep_map }, { .family = "J721E", .data = &j721e_ep_map }, { .family = "J7200", .data = &j7200_ep_map }, + { .family = "AM64X", .data = &am64_ep_map }, { /* sentinel */ } }; From patchwork Tue Nov 17 10:56:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 11911981 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3E927C64E90 for ; Tue, 17 Nov 2020 10:57:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CC876246BE for ; Tue, 17 Nov 2020 10:57:37 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="iu2PLKVn" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728215AbgKQK5B (ORCPT ); Tue, 17 Nov 2020 05:57:01 -0500 Received: from fllv0016.ext.ti.com ([198.47.19.142]:34162 "EHLO fllv0016.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728208AbgKQK47 (ORCPT ); Tue, 17 Nov 2020 05:56:59 -0500 Received: from lelv0265.itg.ti.com ([10.180.67.224]) by fllv0016.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAuqfn117397; Tue, 17 Nov 2020 04:56:52 -0600 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1605610612; bh=r6BZcfFspKekPk2CsVfLnHhDTFjI4u11uU/6NZJdVNU=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=iu2PLKVnQRaNSmhkyiovnJjARHlRHbE8pdRD8FjXrHFL0PeE4J+w+HEq+caoxPUih +46Bmz84oPBRrYMU99ljah3krjhMAda5A9eEyUwx/A/Z0WibHgxJnO8QBvhNtwVH8z o/ecDiHgb064244/RrkbRZvaAYghcGtoxESmezRI= Received: from DFLE105.ent.ti.com (dfle105.ent.ti.com [10.64.6.26]) by lelv0265.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 0AHAuqd9005553 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Tue, 17 Nov 2020 04:56:52 -0600 Received: from DFLE108.ent.ti.com (10.64.6.29) by DFLE105.ent.ti.com (10.64.6.26) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Tue, 17 Nov 2020 04:56:52 -0600 Received: from lelv0327.itg.ti.com (10.180.67.183) by DFLE108.ent.ti.com (10.64.6.29) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Tue, 17 Nov 2020 04:56:52 -0600 Received: from feketebors.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAu6u0087311; Tue, 17 Nov 2020 04:56:49 -0600 From: Peter Ujfalusi To: , , , CC: , , , , , , , Subject: [PATCH v2 14/19] dmaengine: ti: Add support for k3 event routers Date: Tue, 17 Nov 2020 12:56:51 +0200 Message-ID: <20201117105656.5236-15-peter.ujfalusi@ti.com> X-Mailer: git-send-email 2.29.2 
In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com> References: <20201117105656.5236-1-peter.ujfalusi@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org In the k3 architecture a DMA channel (in TR mode) can be triggered by global events originating from different modules. The events for triggers can be sent from any module connected to the PSI-L fabric, but the event number to be sent is DMA channel specific; it is only known after the channel itself is requested. The router operation needs to be split up: - route_allocate: configure the dma_spec for the DMA and store the configuration which is needed for the router's input - set_event: callback used by the DMA driver to set the event number for the channel and enable the routing Signed-off-by: Peter Ujfalusi --- include/linux/dma/k3-event-router.h | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) create mode 100644 include/linux/dma/k3-event-router.h diff --git a/include/linux/dma/k3-event-router.h b/include/linux/dma/k3-event-router.h new file mode 100644 index 000000000000..e3f88b2f87be --- /dev/null +++ b/include/linux/dma/k3-event-router.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020 Texas Instruments Incorporated - https://www.ti.com + */ + +#ifndef K3_EVENT_ROUTER_ +#define K3_EVENT_ROUTER_ + +#include + +struct k3_event_route_data { + void *priv; + int (*set_event)(void *priv, u32 event); +}; + +#endif /* K3_EVENT_ROUTER_ */ From patchwork Tue Nov 17 10:56:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 11911963 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28809C6379D for ; Tue, 17 Nov 2020 10:57:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BDEDD238E6 for ; Tue, 17 Nov 2020 10:57:36 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="w64IcI2/" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728208AbgKQK5E (ORCPT ); Tue, 17 Nov 2020 05:57:04 -0500 Received: from fllv0015.ext.ti.com ([198.47.19.141]:59714 "EHLO fllv0015.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728226AbgKQK5E (ORCPT ); Tue, 17 Nov 2020 05:57:04 -0500 Received: from fllv0035.itg.ti.com ([10.64.41.0]) by fllv0015.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAuum5019752; Tue, 17 Nov 2020 04:56:56 -0600 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1605610616; bh=jAnYGwH4rnu9nXzQHqVNZUHdDj2HcYr2aoiJGxEMNyE=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=w64IcI2/bisR+dqN8S9YD70PEp7IvG5XN/bBpaL4txDFYWyG6V9SzsB+YdlWC3i4s m0C2Meqvg4hORKjcqdY1cE/KsjPXvZ3EYmvv6pIx+BeFzTGAfpCnsZbQNEZkYdB+6q pRjrUpJlcP/6GrViqdKlUWVzNu6v05Jpi4W6YKws= Received: from DLEE108.ent.ti.com (dlee108.ent.ti.com
[157.170.170.38]) by fllv0035.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 0AHAuuNA019757 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Tue, 17 Nov 2020 04:56:56 -0600 Received: from DLEE107.ent.ti.com (157.170.170.37) by DLEE108.ent.ti.com (157.170.170.38) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Tue, 17 Nov 2020 04:56:55 -0600 Received: from lelv0327.itg.ti.com (10.180.67.183) by DLEE107.ent.ti.com (157.170.170.37) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Tue, 17 Nov 2020 04:56:55 -0600 Received: from feketebors.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAu6u1087311; Tue, 17 Nov 2020 04:56:52 -0600 From: Peter Ujfalusi To: , , , CC: , , , , , , , Subject: [PATCH v2 15/19] soc: ti: k3-ringacc: add AM64 DMA rings support. Date: Tue, 17 Nov 2020 12:56:52 +0200 Message-ID: <20201117105656.5236-16-peter.ujfalusi@ti.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com> References: <20201117105656.5236-1-peter.ujfalusi@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org From: Grygorii Strashko The DMAs in AM64 have built-in rings, unlike AM654/J721e/J7200 where a separate, generic ringacc is used. The ring SW interface is similar to ringacc, with some major architectural differences: the rings are part of the DMA (BCDMA or PKTDMA), and they are dual-mode rings modeled as a pair of ring objects which share a common configuration and memory buffer but have separate real-time control register sets for each direction, mem2dev (forward) and dev2mem (reverse). The ringacc driver must be initialized for DMA ring use with k3_ringacc_dmarings_init(), as it is not an independent device the way ringacc is. AM64 rings must be requested only using k3_ringacc_request_rings_pair(), and the forward ring must always be initialized/configured. After this, any other ringacc API can be used without any caller changes.
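For context only (this is not part of the submitted patch): a minimal sketch of the call flow described above, assuming placeholder values for the TISCI device id, ring count and forward ring index.

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/soc/ti/k3-ringacc.h>

/*
 * Minimal sketch (illustrative only): bring up the DMA's built-in rings and
 * configure one forward/completion pair. The TISCI handle comes from the DMA
 * driver; the device id, ring count and forward ring index are placeholders.
 */
static int example_dma_rings_setup(struct platform_device *pdev,
				   const struct ti_sci_handle *tisci)
{
	struct k3_ringacc_init_data init_data = {
		.tisci = tisci,
		.tisci_dev_id = 26,	/* placeholder TISCI device id */
		.num_rings = 32,	/* placeholder ring count */
	};
	struct k3_ring_cfg ring_cfg = {
		.size = 16,				/* placeholder element count */
		.elm_size = K3_RINGACC_RING_ELSIZE_8,	/* DMA rings use 8-byte elements */
		.mode = K3_RINGACC_RING_MODE_RING,	/* only RING mode is valid here */
		.dma_dev = &pdev->dev,
	};
	struct k3_ring *fwd_ring, *compl_ring;
	struct k3_ringacc *ringacc;
	int ret;

	/* Not a standalone ringacc device: initialized by the DMA driver itself */
	ringacc = k3_ringacc_dmarings_init(pdev, &init_data);
	if (IS_ERR(ringacc))
		return PTR_ERR(ringacc);

	/* DMA rings are requested as a pair, addressed by the forward ring id */
	ret = k3_ringacc_request_rings_pair(ringacc, 0, -1, &fwd_ring, &compl_ring);
	if (ret)
		return ret;

	/*
	 * Only the forward ring needs to be configured; the reverse
	 * (completion) ring shares its memory and configuration.
	 */
	return k3_ringacc_ring_cfg(fwd_ring, &ring_cfg);
}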
Signed-off-by: Grygorii Strashko Signed-off-by: Peter Ujfalusi --- drivers/soc/ti/k3-ringacc.c | 325 +++++++++++++++++++++++++++++- include/linux/soc/ti/k3-ringacc.h | 17 ++ 2 files changed, 335 insertions(+), 7 deletions(-) diff --git a/drivers/soc/ti/k3-ringacc.c b/drivers/soc/ti/k3-ringacc.c index 7fdb688452f7..44879f45575d 100644 --- a/drivers/soc/ti/k3-ringacc.c +++ b/drivers/soc/ti/k3-ringacc.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -21,6 +22,7 @@ static LIST_HEAD(k3_ringacc_list); static DEFINE_MUTEX(k3_ringacc_list_lock); #define K3_RINGACC_CFG_RING_SIZE_ELCNT_MASK GENMASK(19, 0) +#define K3_DMARING_CFG_RING_SIZE_ELCNT_MASK GENMASK(15, 0) /** * struct k3_ring_rt_regs - The RA realtime Control/Status Registers region @@ -43,7 +45,13 @@ struct k3_ring_rt_regs { u32 hwindx; }; -#define K3_RINGACC_RT_REGS_STEP 0x1000 +#define K3_RINGACC_RT_REGS_STEP 0x1000 +#define K3_DMARING_RT_REGS_STEP 0x2000 +#define K3_DMARING_RT_REGS_REVERSE_OFS 0x1000 +#define K3_RINGACC_RT_OCC_MASK GENMASK(20, 0) +#define K3_DMARING_RT_OCC_TDOWN_COMPLETE BIT(31) +#define K3_DMARING_RT_DB_ENTRY_MASK GENMASK(7, 0) +#define K3_DMARING_RT_DB_TDOWN_ACK BIT(31) /** * struct k3_ring_fifo_regs - The Ring Accelerator Queues Registers region @@ -122,6 +130,7 @@ struct k3_ring_state { u32 occ; u32 windex; u32 rindex; + u32 tdown_complete:1; }; /** @@ -142,6 +151,7 @@ struct k3_ring_state { * @use_count: Use count for shared rings * @proxy_id: RA Ring Proxy Id (only if @K3_RINGACC_RING_USE_PROXY) * @dma_dev: device to be used for DMA API (allocation, mapping) + * @asel: Address Space Select value for physical addresses */ struct k3_ring { struct k3_ring_rt_regs __iomem *rt; @@ -156,12 +166,15 @@ struct k3_ring { u32 flags; #define K3_RING_FLAG_BUSY BIT(1) #define K3_RING_FLAG_SHARED BIT(2) +#define K3_RING_FLAG_REVERSE BIT(3) struct k3_ring_state state; u32 ring_id; struct k3_ringacc *parent; u32 use_count; int proxy_id; struct device *dma_dev; + u32 asel; +#define K3_ADDRESS_ASEL_SHIFT 48 }; struct k3_ringacc_ops { @@ -187,6 +200,7 @@ struct k3_ringacc_ops { * @tisci_ring_ops: ti-sci rings ops * @tisci_dev_id: ti-sci device id * @ops: SoC specific ringacc operation + * @dma_rings: indicate DMA ring (dual ring within BCDMA/PKTDMA) */ struct k3_ringacc { struct device *dev; @@ -209,6 +223,7 @@ struct k3_ringacc { u32 tisci_dev_id; const struct k3_ringacc_ops *ops; + bool dma_rings; }; /** @@ -220,6 +235,21 @@ struct k3_ringacc_soc_data { unsigned dma_ring_reset_quirk:1; }; +static int k3_ringacc_ring_read_occ(struct k3_ring *ring) +{ + return readl(&ring->rt->occ) & K3_RINGACC_RT_OCC_MASK; +} + +static void k3_ringacc_ring_update_occ(struct k3_ring *ring) +{ + u32 val; + + val = readl(&ring->rt->occ); + + ring->state.occ = val & K3_RINGACC_RT_OCC_MASK; + ring->state.tdown_complete = !!(val & K3_DMARING_RT_OCC_TDOWN_COMPLETE); +} + static long k3_ringacc_ring_get_fifo_pos(struct k3_ring *ring) { return K3_RINGACC_FIFO_WINDOW_SIZE_BYTES - @@ -233,12 +263,24 @@ static void *k3_ringacc_get_elm_addr(struct k3_ring *ring, u32 idx) static int k3_ringacc_ring_push_mem(struct k3_ring *ring, void *elem); static int k3_ringacc_ring_pop_mem(struct k3_ring *ring, void *elem); +static int k3_dmaring_fwd_pop(struct k3_ring *ring, void *elem); +static int k3_dmaring_reverse_pop(struct k3_ring *ring, void *elem); static struct k3_ring_ops k3_ring_mode_ring_ops = { .push_tail = k3_ringacc_ring_push_mem, .pop_head = k3_ringacc_ring_pop_mem, }; +static struct k3_ring_ops k3_dmaring_fwd_ops = { + 
.push_tail = k3_ringacc_ring_push_mem, + .pop_head = k3_dmaring_fwd_pop, +}; + +static struct k3_ring_ops k3_dmaring_reverse_ops = { + /* Reverse side of the DMA ring can only be popped by SW */ + .pop_head = k3_dmaring_reverse_pop, +}; + static int k3_ringacc_ring_push_io(struct k3_ring *ring, void *elem); static int k3_ringacc_ring_pop_io(struct k3_ring *ring, void *elem); static int k3_ringacc_ring_push_head_io(struct k3_ring *ring, void *elem); @@ -341,6 +383,40 @@ struct k3_ring *k3_ringacc_request_ring(struct k3_ringacc *ringacc, } EXPORT_SYMBOL_GPL(k3_ringacc_request_ring); +static int k3_dmaring_request_dual_ring(struct k3_ringacc *ringacc, int fwd_id, + struct k3_ring **fwd_ring, + struct k3_ring **compl_ring) +{ + int ret = 0; + + /* + * DMA rings must be requested by ID, completion ring is the reverse + * side of the forward ring + */ + if (fwd_id < 0) + return -EINVAL; + + mutex_lock(&ringacc->req_lock); + + if (test_bit(fwd_id, ringacc->rings_inuse)) { + ret = -EBUSY; + goto error; + } + + *fwd_ring = &ringacc->rings[fwd_id]; + *compl_ring = &ringacc->rings[fwd_id + ringacc->num_rings]; + set_bit(fwd_id, ringacc->rings_inuse); + ringacc->rings[fwd_id].use_count++; + dev_dbg(ringacc->dev, "Giving ring#%d\n", fwd_id); + + mutex_unlock(&ringacc->req_lock); + return 0; + +error: + mutex_unlock(&ringacc->req_lock); + return ret; +} + int k3_ringacc_request_rings_pair(struct k3_ringacc *ringacc, int fwd_id, int compl_id, struct k3_ring **fwd_ring, @@ -351,6 +427,10 @@ int k3_ringacc_request_rings_pair(struct k3_ringacc *ringacc, if (!fwd_ring || !compl_ring) return -EINVAL; + if (ringacc->dma_rings) + return k3_dmaring_request_dual_ring(ringacc, fwd_id, + fwd_ring, compl_ring); + *fwd_ring = k3_ringacc_request_ring(ringacc, fwd_id, 0); if (!(*fwd_ring)) return -ENODEV; @@ -420,7 +500,7 @@ void k3_ringacc_ring_reset_dma(struct k3_ring *ring, u32 occ) goto reset; if (!occ) - occ = readl(&ring->rt->occ); + occ = k3_ringacc_ring_read_occ(ring); if (occ) { u32 db_ring_cnt, db_ring_cnt_cur; @@ -495,6 +575,13 @@ int k3_ringacc_ring_free(struct k3_ring *ring) ringacc = ring->parent; + /* + * DMA rings: rings shared memory and configuration, only forward ring + * is configured and reverse ring considered as slave. 
+ */ + if (ringacc->dma_rings && (ring->flags & K3_RING_FLAG_REVERSE)) + return 0; + dev_dbg(ring->parent->dev, "flags: 0x%08x\n", ring->flags); if (!test_bit(ring->ring_id, ringacc->rings_inuse)) @@ -516,6 +603,8 @@ int k3_ringacc_ring_free(struct k3_ring *ring) ring->flags = 0; ring->ops = NULL; ring->dma_dev = NULL; + ring->asel = 0; + if (ring->proxy_id != K3_RINGACC_PROXY_NOT_USED) { clear_bit(ring->proxy_id, ringacc->proxy_inuse); ring->proxy = NULL; @@ -580,6 +669,7 @@ static int k3_ringacc_ring_cfg_sci(struct k3_ring *ring) ring_cfg.count = ring->size; ring_cfg.mode = ring->mode; ring_cfg.size = ring->elm_size; + ring_cfg.asel = ring->asel; ret = ringacc->tisci_ring_ops->set_cfg(ringacc->tisci, &ring_cfg); if (ret) @@ -589,6 +679,90 @@ static int k3_ringacc_ring_cfg_sci(struct k3_ring *ring) return ret; } +static int k3_dmaring_cfg(struct k3_ring *ring, struct k3_ring_cfg *cfg) +{ + struct k3_ringacc *ringacc; + struct k3_ring *reverse_ring; + int ret = 0; + + if (cfg->elm_size != K3_RINGACC_RING_ELSIZE_8 || + cfg->mode != K3_RINGACC_RING_MODE_RING || + cfg->size & ~K3_DMARING_CFG_RING_SIZE_ELCNT_MASK) + return -EINVAL; + + ringacc = ring->parent; + + /* + * DMA rings: rings shared memory and configuration, only forward ring + * is configured and reverse ring considered as slave. + */ + if (ringacc->dma_rings && (ring->flags & K3_RING_FLAG_REVERSE)) + return 0; + + if (!test_bit(ring->ring_id, ringacc->rings_inuse)) + return -EINVAL; + + ring->size = cfg->size; + ring->elm_size = cfg->elm_size; + ring->mode = cfg->mode; + ring->asel = cfg->asel; + ring->dma_dev = cfg->dma_dev; + if (!ring->dma_dev) { + dev_warn(ringacc->dev, "dma_dev is not provided for ring%d\n", + ring->ring_id); + ring->dma_dev = ringacc->dev; + } + + memset(&ring->state, 0, sizeof(ring->state)); + + ring->ops = &k3_dmaring_fwd_ops; + + ring->ring_mem_virt = dma_alloc_coherent(ring->dma_dev, + ring->size * (4 << ring->elm_size), + &ring->ring_mem_dma, GFP_KERNEL); + if (!ring->ring_mem_virt) { + dev_err(ringacc->dev, "Failed to alloc ring mem\n"); + ret = -ENOMEM; + goto err_free_ops; + } + + ret = k3_ringacc_ring_cfg_sci(ring); + if (ret) + goto err_free_mem; + + ring->flags |= K3_RING_FLAG_BUSY; + + k3_ringacc_ring_dump(ring); + + /* DMA rings: configure reverse ring */ + reverse_ring = &ringacc->rings[ring->ring_id + ringacc->num_rings]; + reverse_ring->size = cfg->size; + reverse_ring->elm_size = cfg->elm_size; + reverse_ring->mode = cfg->mode; + reverse_ring->asel = cfg->asel; + memset(&reverse_ring->state, 0, sizeof(reverse_ring->state)); + reverse_ring->ops = &k3_dmaring_reverse_ops; + + reverse_ring->ring_mem_virt = ring->ring_mem_virt; + reverse_ring->ring_mem_dma = ring->ring_mem_dma; + reverse_ring->flags |= K3_RING_FLAG_BUSY; + k3_ringacc_ring_dump(reverse_ring); + + return 0; + +err_free_mem: + dma_free_coherent(ring->dma_dev, + ring->size * (4 << ring->elm_size), + ring->ring_mem_virt, + ring->ring_mem_dma); +err_free_ops: + ring->ops = NULL; + ring->proxy = NULL; + ring->dma_dev = NULL; + ring->asel = 0; + return ret; +} + int k3_ringacc_ring_cfg(struct k3_ring *ring, struct k3_ring_cfg *cfg) { struct k3_ringacc *ringacc; @@ -596,8 +770,12 @@ int k3_ringacc_ring_cfg(struct k3_ring *ring, struct k3_ring_cfg *cfg) if (!ring || !cfg) return -EINVAL; + ringacc = ring->parent; + if (ringacc->dma_rings) + return k3_dmaring_cfg(ring, cfg); + if (cfg->elm_size > K3_RINGACC_RING_ELSIZE_256 || cfg->mode >= K3_RINGACC_RING_MODE_INVALID || cfg->size & ~K3_RINGACC_CFG_RING_SIZE_ELCNT_MASK || @@ -704,7 +882,7 
@@ u32 k3_ringacc_ring_get_free(struct k3_ring *ring) return -EINVAL; if (!ring->state.free) - ring->state.free = ring->size - readl(&ring->rt->occ); + ring->state.free = ring->size - k3_ringacc_ring_read_occ(ring); return ring->state.free; } @@ -715,7 +893,7 @@ u32 k3_ringacc_ring_get_occ(struct k3_ring *ring) if (!ring || !(ring->flags & K3_RING_FLAG_BUSY)) return -EINVAL; - return readl(&ring->rt->occ); + return k3_ringacc_ring_read_occ(ring); } EXPORT_SYMBOL_GPL(k3_ringacc_ring_get_occ); @@ -891,6 +1069,72 @@ static int k3_ringacc_ring_pop_tail_io(struct k3_ring *ring, void *elem) K3_RINGACC_ACCESS_MODE_POP_HEAD); } +/* + * The element is 48 bits of address + ASEL bits in the ring. + * ASEL is used by the DMAs and should be removed for the kernel as it is not + * part of the physical memory address. + */ +static void k3_dmaring_remove_asel_from_elem(u64 *elem) +{ + *elem &= GENMASK_ULL(K3_ADDRESS_ASEL_SHIFT - 1, 0); +} + +static int k3_dmaring_fwd_pop(struct k3_ring *ring, void *elem) +{ + void *elem_ptr; + u32 elem_idx; + + /* + * DMA rings: forward ring is always tied DMA channel and HW does not + * maintain any state data required for POP operation and its unknown + * how much elements were consumed by HW. So, to actually + * do POP, the read pointer has to be recalculated every time. + */ + ring->state.occ = k3_ringacc_ring_read_occ(ring); + if (ring->state.windex >= ring->state.occ) + elem_idx = ring->state.windex - ring->state.occ; + else + elem_idx = ring->size - (ring->state.occ - ring->state.windex); + + elem_ptr = k3_ringacc_get_elm_addr(ring, elem_idx); + memcpy(elem, elem_ptr, (4 << ring->elm_size)); + k3_dmaring_remove_asel_from_elem(elem); + + ring->state.occ--; + writel(-1, &ring->rt->db); + + dev_dbg(ring->parent->dev, "%s: occ%d Windex%d Rindex%d pos_ptr%px\n", + __func__, ring->state.occ, ring->state.windex, elem_idx, + elem_ptr); + return 0; +} + +static int k3_dmaring_reverse_pop(struct k3_ring *ring, void *elem) +{ + void *elem_ptr; + + elem_ptr = k3_ringacc_get_elm_addr(ring, ring->state.rindex); + + if (ring->state.occ) { + memcpy(elem, elem_ptr, (4 << ring->elm_size)); + k3_dmaring_remove_asel_from_elem(elem); + + ring->state.rindex = (ring->state.rindex + 1) % ring->size; + ring->state.occ--; + writel(-1 & K3_DMARING_RT_DB_ENTRY_MASK, &ring->rt->db); + } else if (ring->state.tdown_complete) { + dma_addr_t *value = elem; + + *value = CPPI5_TDCM_MARKER; + writel(K3_DMARING_RT_DB_TDOWN_ACK, &ring->rt->db); + ring->state.tdown_complete = false; + } + + dev_dbg(ring->parent->dev, "%s: occ%d index%d pos_ptr%px\n", + __func__, ring->state.occ, ring->state.rindex, elem_ptr); + return 0; +} + static int k3_ringacc_ring_push_mem(struct k3_ring *ring, void *elem) { void *elem_ptr; @@ -898,6 +1142,11 @@ static int k3_ringacc_ring_push_mem(struct k3_ring *ring, void *elem) elem_ptr = k3_ringacc_get_elm_addr(ring, ring->state.windex); memcpy(elem_ptr, elem, (4 << ring->elm_size)); + if (ring->parent->dma_rings) { + u64 *addr = elem_ptr; + + *addr |= ((u64)ring->asel << K3_ADDRESS_ASEL_SHIFT); + } ring->state.windex = (ring->state.windex + 1) % ring->size; ring->state.free--; @@ -974,12 +1223,12 @@ int k3_ringacc_ring_pop(struct k3_ring *ring, void *elem) return -EINVAL; if (!ring->state.occ) - ring->state.occ = k3_ringacc_ring_get_occ(ring); + k3_ringacc_ring_update_occ(ring); dev_dbg(ring->parent->dev, "ring_pop: occ%d index%d\n", ring->state.occ, ring->state.rindex); - if (!ring->state.occ) + if (!ring->state.occ && !ring->state.tdown_complete) return -ENODATA; if 
(ring->ops && ring->ops->pop_head) @@ -997,7 +1246,7 @@ int k3_ringacc_ring_pop_tail(struct k3_ring *ring, void *elem) return -EINVAL; if (!ring->state.occ) - ring->state.occ = k3_ringacc_ring_get_occ(ring); + k3_ringacc_ring_update_occ(ring); dev_dbg(ring->parent->dev, "ring_pop_tail: occ%d index%d\n", ring->state.occ, ring->state.rindex); @@ -1202,6 +1451,68 @@ static const struct of_device_id k3_ringacc_of_match[] = { {}, }; +struct k3_ringacc *k3_ringacc_dmarings_init(struct platform_device *pdev, + struct k3_ringacc_init_data *data) +{ + struct device *dev = &pdev->dev; + struct k3_ringacc *ringacc; + void __iomem *base_rt; + struct resource *res; + int i; + + ringacc = devm_kzalloc(dev, sizeof(*ringacc), GFP_KERNEL); + if (!ringacc) + return ERR_PTR(-ENOMEM); + + ringacc->dev = dev; + ringacc->dma_rings = true; + ringacc->num_rings = data->num_rings; + ringacc->tisci = data->tisci; + ringacc->tisci_dev_id = data->tisci_dev_id; + + mutex_init(&ringacc->req_lock); + + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ringrt"); + base_rt = devm_ioremap_resource(dev, res); + if (IS_ERR(base_rt)) + return base_rt; + + ringacc->rings = devm_kzalloc(dev, + sizeof(*ringacc->rings) * + ringacc->num_rings * 2, + GFP_KERNEL); + ringacc->rings_inuse = devm_kcalloc(dev, + BITS_TO_LONGS(ringacc->num_rings), + sizeof(unsigned long), GFP_KERNEL); + + if (!ringacc->rings || !ringacc->rings_inuse) + return ERR_PTR(-ENOMEM); + + for (i = 0; i < ringacc->num_rings; i++) { + struct k3_ring *ring = &ringacc->rings[i]; + + ring->rt = base_rt + K3_DMARING_RT_REGS_STEP * i; + ring->parent = ringacc; + ring->ring_id = i; + ring->proxy_id = K3_RINGACC_PROXY_NOT_USED; + + ring = &ringacc->rings[ringacc->num_rings + i]; + ring->rt = base_rt + K3_DMARING_RT_REGS_STEP * i + + K3_DMARING_RT_REGS_REVERSE_OFS; + ring->parent = ringacc; + ring->ring_id = i; + ring->proxy_id = K3_RINGACC_PROXY_NOT_USED; + ring->flags = K3_RING_FLAG_REVERSE; + } + + ringacc->tisci_ring_ops = &ringacc->tisci->ops.rm_ring_ops; + + dev_info(dev, "Number of rings: %u\n", ringacc->num_rings); + + return ringacc; +} +EXPORT_SYMBOL_GPL(k3_ringacc_dmarings_init); + static int k3_ringacc_probe(struct platform_device *pdev) { const struct ringacc_match_data *match_data; diff --git a/include/linux/soc/ti/k3-ringacc.h b/include/linux/soc/ti/k3-ringacc.h index 658dc71d2901..39b022b92598 100644 --- a/include/linux/soc/ti/k3-ringacc.h +++ b/include/linux/soc/ti/k3-ringacc.h @@ -70,6 +70,7 @@ struct k3_ring; * @dma_dev: Master device which is using and accessing to the ring * memory when the mode is K3_RINGACC_RING_MODE_RING. Memory allocations * should be done using this device. 
+ * @asel: Address Space Select value for physical addresses */ struct k3_ring_cfg { u32 size; @@ -79,6 +80,7 @@ struct k3_ring_cfg { u32 flags; struct device *dma_dev; + u32 asel; }; #define K3_RINGACC_RING_ID_ANY (-1) @@ -250,4 +252,19 @@ int k3_ringacc_ring_pop_tail(struct k3_ring *ring, void *elem); u32 k3_ringacc_get_tisci_dev_id(struct k3_ring *ring); +/* DMA ring support */ +struct ti_sci_handle; + +/** + * struct struct k3_ringacc_init_data - Initialization data for DMA rings + */ +struct k3_ringacc_init_data { + const struct ti_sci_handle *tisci; + u32 tisci_dev_id; + u32 num_rings; +}; + +struct k3_ringacc *k3_ringacc_dmarings_init(struct platform_device *pdev, + struct k3_ringacc_init_data *data); + #endif /* __SOC_TI_K3_RINGACC_API_H_ */ From patchwork Tue Nov 17 10:56:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 11911967 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3228AC83008 for ; Tue, 17 Nov 2020 10:57:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C1E4D20781 for ; Tue, 17 Nov 2020 10:57:39 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="DrZw/eVH" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728265AbgKQK5P (ORCPT ); Tue, 17 Nov 2020 05:57:15 -0500 Received: from fllv0016.ext.ti.com ([198.47.19.142]:34224 "EHLO fllv0016.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728218AbgKQK5P (ORCPT ); Tue, 17 Nov 2020 05:57:15 -0500 Received: from fllv0035.itg.ti.com ([10.64.41.0]) by fllv0016.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAuxXc117415; Tue, 17 Nov 2020 04:56:59 -0600 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1605610619; bh=ebCGEHdXsLlrVZu0dX4dqHyHwO6AiToKt0J7HalaSRo=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=DrZw/eVH5LT3sGNJCboO+hsycHhgWReR+HsPbUFvt0Gl3qJZ4i40TMmq4ai/po3sk vJkorj+EUVc0Y1XyaNN5tYE0RyPChYj0QToOIVwvmxyQtZ75gFCmMI+blW1s6v5Gkv +5JiKzIwmbjduJq0Exy5ts4QJUZtpIPg0TEsAfSQ= Received: from DLEE115.ent.ti.com (dlee115.ent.ti.com [157.170.170.26]) by fllv0035.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 0AHAux7E019796 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Tue, 17 Nov 2020 04:56:59 -0600 Received: from DLEE106.ent.ti.com (157.170.170.36) by DLEE115.ent.ti.com (157.170.170.26) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Tue, 17 Nov 2020 04:56:58 -0600 Received: from lelv0327.itg.ti.com (10.180.67.183) by DLEE106.ent.ti.com (157.170.170.36) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Tue, 17 Nov 2020 04:56:58 -0600 Received: from feketebors.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 
0AHAu6u2087311; Tue, 17 Nov 2020 04:56:55 -0600 From: Peter Ujfalusi To: , , , CC: , , , , , , , Subject: [PATCH v2 16/19] dmaengine: ti: k3-udma: Initial support for K3 BCDMA Date: Tue, 17 Nov 2020 12:56:53 +0200 Message-ID: <20201117105656.5236-17-peter.ujfalusi@ti.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com> References: <20201117105656.5236-1-peter.ujfalusi@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org One of the DMAs introduced with AM64 is the Block Copy DMA (BCDMA). It serves a similar purpose as K3 UDMAP channels in TR mode. The rings for the BCDMA are integrated within the DMA itself instead of using rings from the general purpose ringacc. A BCDMA has two different types of channels: - Block Copy Channels (bchan) - Split Channels (tchan and rchan) tchan and rchan can be used to service PSI-L peripherals, similarly to K3 UDMA channels. bchan can only be used for block copy operations (TR type15), like a paired K3 UDMA tchan/rchan configured in block copy mode. bchans can also be used to service peripherals directly if an external trigger is selected for the channel. Most of the driver code can be reused for BCDMA bchan/tchan/rchan support, but new setup and allocation functions are needed to handle the differences between the DMAs. Signed-off-by: Peter Ujfalusi --- drivers/dma/ti/k3-udma.c | 1384 ++++++++++++++++++++++++++++++++++---- drivers/dma/ti/k3-udma.h | 12 +- 2 files changed, 1245 insertions(+), 151 deletions(-) diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c index b89afa602532..f327d436f8ad 100644 --- a/drivers/dma/ti/k3-udma.c +++ b/drivers/dma/ti/k3-udma.c @@ -26,6 +26,7 @@ #include #include #include +#include #include #include "../virt-dma.h" @@ -55,14 +56,25 @@ struct udma_static_tr { struct udma_chan; +enum k3_dma_type { + DMA_TYPE_UDMA = 0, + DMA_TYPE_BCDMA, +}; + enum udma_mmr { MMR_GCFG = 0, + MMR_BCHANRT, MMR_RCHANRT, MMR_TCHANRT, MMR_LAST, }; -static const char * const mmr_names[] = { "gcfg", "rchanrt", "tchanrt" }; +static const char * const mmr_names[] = { + [MMR_GCFG] = "gcfg", + [MMR_BCHANRT] = "bchanrt", + [MMR_RCHANRT] = "rchanrt", + [MMR_TCHANRT] = "tchanrt", +}; struct udma_tchan { void __iomem *reg_rt; @@ -72,6 +84,8 @@ struct udma_tchan { struct k3_ring *tc_ring; /* Transmit Completion ring */ }; +#define udma_bchan udma_tchan + struct udma_rflow { int id; struct k3_ring *fd_ring; /* Free Descriptor ring */ @@ -84,11 +98,25 @@ struct udma_rchan { int id; }; +struct udma_oes_offsets { + /* K3 UDMA Output Event Offset */ + u32 udma_rchan; + + /* BCDMA Output Event Offsets */ + u32 bcdma_bchan_data; + u32 bcdma_bchan_ring; + u32 bcdma_tchan_data; + u32 bcdma_tchan_ring; + u32 bcdma_rchan_data; + u32 bcdma_rchan_ring; +}; + #define UDMA_FLAG_PDMA_ACC32 BIT(0) #define UDMA_FLAG_PDMA_BURST BIT(1) #define UDMA_FLAG_TDTYPE BIT(2) struct udma_match_data { + enum k3_dma_type type; u32 psil_base; bool enable_memcpy_support; u32 flags; @@ -96,7 +124,8 @@ struct udma_match_data { }; struct udma_soc_data { - u32 rchan_oes_offset; + struct udma_oes_offsets oes; + u32 bcdma_trigger_event_offset; }; struct udma_hwdesc { @@ -139,16 +168,19 @@ struct udma_dev { struct udma_rx_flush rx_flush; + int bchan_cnt; int tchan_cnt; int echan_cnt; int rchan_cnt; int rflow_cnt; + unsigned long *bchan_map; unsigned long *tchan_map; unsigned long *rchan_map; unsigned long *rflow_gp_map; unsigned long
*rflow_gp_map_allocated; unsigned long *rflow_in_use; + struct udma_bchan *bchans; struct udma_tchan *tchans; struct udma_rchan *rchans; struct udma_rflow *rflows; @@ -156,6 +188,7 @@ struct udma_dev { struct udma_chan *channels; u32 psil_base; u32 atype; + u32 asel; }; struct udma_desc { @@ -200,6 +233,7 @@ struct udma_chan_config { bool notdpkt; /* Suppress sending TDC packet */ int remote_thread_id; u32 atype; + u32 asel; u32 src_thread; u32 dst_thread; enum psil_endpoint_type ep_type; @@ -207,6 +241,8 @@ struct udma_chan_config { bool enable_burst; enum udma_tp_level channel_tpl; /* Channel Throughput Level */ + u32 tr_trigger_type; + enum dma_transfer_direction dir; }; @@ -214,11 +250,13 @@ struct udma_chan { struct virt_dma_chan vc; struct dma_slave_config cfg; struct udma_dev *ud; + struct device *dma_dev; struct udma_desc *desc; struct udma_desc *terminated_desc; struct udma_static_tr static_tr; char *name; + struct udma_bchan *bchan; struct udma_tchan *tchan; struct udma_rchan *rchan; struct udma_rflow *rflow; @@ -354,6 +392,30 @@ static int navss_psil_unpair(struct udma_dev *ud, u32 src_thread, src_thread, dst_thread); } +static void k3_configure_chan_coherency(struct dma_chan *chan, u32 asel) +{ + struct device *chan_dev = &chan->dev->device; + + if (asel == 0) { + /* No special handling for the channel */ + chan->dev->chan_dma_dev = false; + + chan_dev->dma_coherent = false; + chan_dev->dma_parms = NULL; + } else if (asel == 14 || asel == 15) { + chan->dev->chan_dma_dev = true; + + chan_dev->dma_coherent = true; + dma_coerce_mask_and_coherent(chan_dev, DMA_BIT_MASK(48)); + chan_dev->dma_parms = chan_dev->parent->dma_parms; + } else { + dev_warn(chan->device->dev, "Invalid ASEL value: %u\n", asel); + + chan_dev->dma_coherent = false; + chan_dev->dma_parms = NULL; + } +} + static void udma_reset_uchan(struct udma_chan *uc) { memset(&uc->config, 0, sizeof(uc->config)); @@ -440,9 +502,7 @@ static void udma_free_hwdesc(struct udma_chan *uc, struct udma_desc *d) d->hwdesc[i].cppi5_desc_vaddr = NULL; } } else if (d->hwdesc[0].cppi5_desc_vaddr) { - struct udma_dev *ud = uc->ud; - - dma_free_coherent(ud->dev, d->hwdesc[0].cppi5_desc_size, + dma_free_coherent(uc->dma_dev, d->hwdesc[0].cppi5_desc_size, d->hwdesc[0].cppi5_desc_vaddr, d->hwdesc[0].cppi5_desc_paddr); @@ -671,8 +731,10 @@ static void udma_reset_counters(struct udma_chan *uc) val = udma_tchanrt_read(uc, UDMA_CHAN_RT_PCNT_REG); udma_tchanrt_write(uc, UDMA_CHAN_RT_PCNT_REG, val); - val = udma_tchanrt_read(uc, UDMA_CHAN_RT_PEER_BCNT_REG); - udma_tchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val); + if (!uc->bchan) { + val = udma_tchanrt_read(uc, UDMA_CHAN_RT_PEER_BCNT_REG); + udma_tchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val); + } } if (uc->rchan) { @@ -1233,6 +1295,42 @@ static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud, \ UDMA_RESERVE_RESOURCE(tchan); UDMA_RESERVE_RESOURCE(rchan); +static struct udma_bchan *__bcdma_reserve_bchan(struct udma_dev *ud, int id) +{ + if (id >= 0) { + if (test_bit(id, ud->bchan_map)) { + dev_err(ud->dev, "bchan%d is in use\n", id); + return ERR_PTR(-ENOENT); + } + } else { + id = find_next_zero_bit(ud->bchan_map, ud->bchan_cnt, 0); + if (id == ud->bchan_cnt) + return ERR_PTR(-ENOENT); + } + + set_bit(id, ud->bchan_map); + return &ud->bchans[id]; +} + +static int bcdma_get_bchan(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + + if (uc->bchan) { + dev_dbg(ud->dev, "chan%d: already have bchan%d allocated\n", + uc->id, uc->bchan->id); + return 0; + } + + uc->bchan = 
__bcdma_reserve_bchan(ud, -1); + if (IS_ERR(uc->bchan)) + return PTR_ERR(uc->bchan); + + uc->tchan = uc->bchan; + + return 0; +} + static int udma_get_tchan(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; @@ -1325,6 +1423,19 @@ static int udma_get_rflow(struct udma_chan *uc, int flow_id) return PTR_ERR_OR_ZERO(uc->rflow); } +static void bcdma_put_bchan(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + + if (uc->bchan) { + dev_dbg(ud->dev, "chan%d: put bchan%d\n", uc->id, + uc->bchan->id); + clear_bit(uc->bchan->id, ud->bchan_map); + uc->bchan = NULL; + uc->tchan = NULL; + } +} + static void udma_put_rchan(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; @@ -1361,6 +1472,65 @@ static void udma_put_rflow(struct udma_chan *uc) } } +static void bcdma_free_bchan_resources(struct udma_chan *uc) +{ + if (!uc->bchan) + return; + + k3_ringacc_ring_free(uc->bchan->tc_ring); + k3_ringacc_ring_free(uc->bchan->t_ring); + uc->bchan->tc_ring = NULL; + uc->bchan->t_ring = NULL; + k3_configure_chan_coherency(&uc->vc.chan, 0); + + bcdma_put_bchan(uc); +} + +static int bcdma_alloc_bchan_resources(struct udma_chan *uc) +{ + struct k3_ring_cfg ring_cfg; + struct udma_dev *ud = uc->ud; + int ret; + + ret = bcdma_get_bchan(uc); + if (ret) + return ret; + + ret = k3_ringacc_request_rings_pair(ud->ringacc, uc->bchan->id, -1, + &uc->bchan->t_ring, + &uc->bchan->tc_ring); + if (ret) { + ret = -EBUSY; + goto err_ring; + } + + memset(&ring_cfg, 0, sizeof(ring_cfg)); + ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; + ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8; + ring_cfg.mode = K3_RINGACC_RING_MODE_RING; + + k3_configure_chan_coherency(&uc->vc.chan, ud->asel); + ring_cfg.asel = ud->asel; + ring_cfg.dma_dev = dmaengine_get_dma_device(&uc->vc.chan); + + ret = k3_ringacc_ring_cfg(uc->bchan->t_ring, &ring_cfg); + if (ret) + goto err_ringcfg; + + return 0; + +err_ringcfg: + k3_ringacc_ring_free(uc->bchan->tc_ring); + uc->bchan->tc_ring = NULL; + k3_ringacc_ring_free(uc->bchan->t_ring); + uc->bchan->t_ring = NULL; + k3_configure_chan_coherency(&uc->vc.chan, 0); +err_ring: + bcdma_put_bchan(uc); + + return ret; +} + static void udma_free_tx_resources(struct udma_chan *uc) { if (!uc->tchan) @@ -1378,15 +1548,19 @@ static int udma_alloc_tx_resources(struct udma_chan *uc) { struct k3_ring_cfg ring_cfg; struct udma_dev *ud = uc->ud; - int ret; + struct udma_tchan *tchan; + int ring_idx, ret; ret = udma_get_tchan(uc); if (ret) return ret; - ret = k3_ringacc_request_rings_pair(ud->ringacc, uc->tchan->id, -1, - &uc->tchan->t_ring, - &uc->tchan->tc_ring); + tchan = uc->tchan; + ring_idx = ud->bchan_cnt + tchan->id; + + ret = k3_ringacc_request_rings_pair(ud->ringacc, ring_idx, -1, + &tchan->t_ring, + &tchan->tc_ring); if (ret) { ret = -EBUSY; goto err_ring; @@ -1395,10 +1569,18 @@ static int udma_alloc_tx_resources(struct udma_chan *uc) memset(&ring_cfg, 0, sizeof(ring_cfg)); ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8; - ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE; + if (ud->match_data->type == DMA_TYPE_UDMA) { + ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE; + } else { + ring_cfg.mode = K3_RINGACC_RING_MODE_RING; + + k3_configure_chan_coherency(&uc->vc.chan, uc->config.asel); + ring_cfg.asel = uc->config.asel; + ring_cfg.dma_dev = dmaengine_get_dma_device(&uc->vc.chan); + } - ret = k3_ringacc_ring_cfg(uc->tchan->t_ring, &ring_cfg); - ret |= k3_ringacc_ring_cfg(uc->tchan->tc_ring, &ring_cfg); + ret = k3_ringacc_ring_cfg(tchan->t_ring, &ring_cfg); + ret |= 
k3_ringacc_ring_cfg(tchan->tc_ring, &ring_cfg); if (ret) goto err_ringcfg; @@ -1458,7 +1640,8 @@ static int udma_alloc_rx_resources(struct udma_chan *uc) } rflow = uc->rflow; - fd_ring_id = ud->tchan_cnt + ud->echan_cnt + uc->rchan->id; + fd_ring_id = ud->bchan_cnt + ud->tchan_cnt + ud->echan_cnt + + uc->rchan->id; ret = k3_ringacc_request_rings_pair(ud->ringacc, fd_ring_id, -1, &rflow->fd_ring, &rflow->r_ring); if (ret) { @@ -1468,15 +1651,25 @@ static int udma_alloc_rx_resources(struct udma_chan *uc) memset(&ring_cfg, 0, sizeof(ring_cfg)); - if (uc->config.pkt_mode) - ring_cfg.size = SG_MAX_SEGMENTS; - else + ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8; + if (ud->match_data->type == DMA_TYPE_UDMA) { + if (uc->config.pkt_mode) + ring_cfg.size = SG_MAX_SEGMENTS; + else + ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; + + ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE; + } else { ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; + ring_cfg.mode = K3_RINGACC_RING_MODE_RING; - ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8; - ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE; + k3_configure_chan_coherency(&uc->vc.chan, uc->config.asel); + ring_cfg.asel = uc->config.asel; + ring_cfg.dma_dev = dmaengine_get_dma_device(&uc->vc.chan); + } ret = k3_ringacc_ring_cfg(rflow->fd_ring, &ring_cfg); + ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; ret |= k3_ringacc_ring_cfg(rflow->r_ring, &ring_cfg); @@ -1498,7 +1691,18 @@ static int udma_alloc_rx_resources(struct udma_chan *uc) return ret; } -#define TISCI_TCHAN_VALID_PARAMS ( \ +#define TISCI_BCDMA_BCHAN_VALID_PARAMS ( \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_EXTENDED_CH_TYPE_VALID) + +#define TISCI_BCDMA_TCHAN_VALID_PARAMS ( \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_SUPR_TDPKT_VALID) + +#define TISCI_BCDMA_RCHAN_VALID_PARAMS ( \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID) + +#define TISCI_UDMA_TCHAN_VALID_PARAMS ( \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_EINFO_VALID | \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_PSWORDS_VALID | \ @@ -1508,7 +1712,7 @@ static int udma_alloc_rx_resources(struct udma_chan *uc) TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID | \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_ATYPE_VALID) -#define TISCI_RCHAN_VALID_PARAMS ( \ +#define TISCI_UDMA_RCHAN_VALID_PARAMS ( \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID | \ TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID | \ @@ -1533,7 +1737,7 @@ static int udma_tisci_m2m_channel_config(struct udma_chan *uc) struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 }; struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 }; - req_tx.valid_params = TISCI_TCHAN_VALID_PARAMS; + req_tx.valid_params = TISCI_UDMA_TCHAN_VALID_PARAMS; req_tx.nav_id = tisci_rm->tisci_dev_id; req_tx.index = tchan->id; req_tx.tx_chan_type = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR; @@ -1547,7 +1751,7 @@ static int udma_tisci_m2m_channel_config(struct udma_chan *uc) return ret; } - req_rx.valid_params = TISCI_RCHAN_VALID_PARAMS; + req_rx.valid_params = TISCI_UDMA_RCHAN_VALID_PARAMS; req_rx.nav_id = tisci_rm->tisci_dev_id; req_rx.index = rchan->id; req_rx.rx_fetch_size = sizeof(struct cppi5_desc_hdr_t) >> 2; @@ -1562,6 +1766,27 @@ static int udma_tisci_m2m_channel_config(struct udma_chan *uc) return ret; } +static int bcdma_tisci_m2m_channel_config(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct 
ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops; + struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 }; + struct udma_bchan *bchan = uc->bchan; + int ret = 0; + + req_tx.valid_params = TISCI_BCDMA_BCHAN_VALID_PARAMS; + req_tx.nav_id = tisci_rm->tisci_dev_id; + req_tx.extended_ch_type = TI_SCI_RM_BCDMA_EXTENDED_CH_TYPE_BCHAN; + req_tx.index = bchan->id; + + ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx); + if (ret) + dev_err(ud->dev, "bchan%d cfg failed %d\n", bchan->id, ret); + + return ret; +} + static int udma_tisci_tx_channel_config(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; @@ -1582,7 +1807,7 @@ static int udma_tisci_tx_channel_config(struct udma_chan *uc) fetch_size = sizeof(struct cppi5_desc_hdr_t); } - req_tx.valid_params = TISCI_TCHAN_VALID_PARAMS; + req_tx.valid_params = TISCI_UDMA_TCHAN_VALID_PARAMS; req_tx.nav_id = tisci_rm->tisci_dev_id; req_tx.index = tchan->id; req_tx.tx_chan_type = mode; @@ -1605,6 +1830,33 @@ static int udma_tisci_tx_channel_config(struct udma_chan *uc) return ret; } +static int bcdma_tisci_tx_channel_config(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops; + struct udma_tchan *tchan = uc->tchan; + struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 }; + int ret = 0; + + req_tx.valid_params = TISCI_BCDMA_TCHAN_VALID_PARAMS; + req_tx.nav_id = tisci_rm->tisci_dev_id; + req_tx.index = tchan->id; + req_tx.tx_supr_tdpkt = uc->config.notdpkt; + if (ud->match_data->flags & UDMA_FLAG_TDTYPE) { + /* wait for peer to complete the teardown for PDMAs */ + req_tx.valid_params |= + TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_TDTYPE_VALID; + req_tx.tx_tdtype = 1; + } + + ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx); + if (ret) + dev_err(ud->dev, "tchan%d cfg failed %d\n", tchan->id, ret); + + return ret; +} + static int udma_tisci_rx_channel_config(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; @@ -1627,7 +1879,7 @@ static int udma_tisci_rx_channel_config(struct udma_chan *uc) fetch_size = sizeof(struct cppi5_desc_hdr_t); } - req_rx.valid_params = TISCI_RCHAN_VALID_PARAMS; + req_rx.valid_params = TISCI_UDMA_RCHAN_VALID_PARAMS; req_rx.nav_id = tisci_rm->tisci_dev_id; req_rx.index = rchan->id; req_rx.rx_fetch_size = fetch_size >> 2; @@ -1686,6 +1938,26 @@ static int udma_tisci_rx_channel_config(struct udma_chan *uc) return 0; } +static int bcdma_tisci_rx_channel_config(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops; + struct udma_rchan *rchan = uc->rchan; + struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 }; + int ret = 0; + + req_rx.valid_params = TISCI_BCDMA_RCHAN_VALID_PARAMS; + req_rx.nav_id = tisci_rm->tisci_dev_id; + req_rx.index = rchan->id; + + ret = tisci_ops->rx_ch_cfg(tisci_rm->tisci, &req_rx); + if (ret) + dev_err(ud->dev, "rchan%d cfg failed %d\n", rchan->id, ret); + + return ret; +} + static int udma_alloc_chan_resources(struct dma_chan *chan) { struct udma_chan *uc = to_udma_chan(chan); @@ -1695,6 +1967,8 @@ static int udma_alloc_chan_resources(struct dma_chan *chan) u32 irq_udma_idx; int ret; + uc->dma_dev = ud->dev; + if (uc->config.pkt_mode || uc->config.dir == DMA_MEM_TO_MEM) { uc->use_dma_pool = true; /* in case of MEM_TO_MEM we have maximum of two TRs */ @@ -1790,7 +2064,7 @@ static int udma_alloc_chan_resources(struct dma_chan *chan) 
K3_PSIL_DST_THREAD_ID_OFFSET; irq_ring = uc->rflow->r_ring; - irq_udma_idx = soc_data->rchan_oes_offset + uc->rchan->id; + irq_udma_idx = soc_data->oes.udma_rchan + uc->rchan->id; ret = udma_tisci_rx_channel_config(uc); break; @@ -1890,81 +2164,293 @@ static int udma_alloc_chan_resources(struct dma_chan *chan) return ret; } -static int udma_slave_config(struct dma_chan *chan, - struct dma_slave_config *cfg) +static int bcdma_alloc_chan_resources(struct dma_chan *chan) { struct udma_chan *uc = to_udma_chan(chan); + struct udma_dev *ud = to_udma_dev(chan->device); + const struct udma_oes_offsets *oes = &ud->soc_data->oes; + u32 irq_udma_idx, irq_ring_idx; + int ret; - memcpy(&uc->cfg, cfg, sizeof(uc->cfg)); + /* Only TR mode is supported */ + uc->config.pkt_mode = false; - return 0; -} + /* + * Make sure that the completion is in a known state: + * No teardown, the channel is idle + */ + reinit_completion(&uc->teardown_completed); + complete_all(&uc->teardown_completed); + uc->state = UDMA_CHAN_IS_IDLE; -static struct udma_desc *udma_alloc_tr_desc(struct udma_chan *uc, - size_t tr_size, int tr_count, - enum dma_transfer_direction dir) -{ - struct udma_hwdesc *hwdesc; - struct cppi5_desc_hdr_t *tr_desc; - struct udma_desc *d; - u32 reload_count = 0; - u32 ring_id; + switch (uc->config.dir) { + case DMA_MEM_TO_MEM: + /* Non synchronized - mem to mem type of transfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as MEM-to-MEM\n", __func__, + uc->id); - switch (tr_size) { - case 16: - case 32: - case 64: - case 128: - break; - default: - dev_err(uc->ud->dev, "Unsupported TR size of %zu\n", tr_size); - return NULL; - } + ret = bcdma_alloc_bchan_resources(uc); + if (ret) + return ret; - /* We have only one descriptor containing multiple TRs */ - d = kzalloc(sizeof(*d) + sizeof(d->hwdesc[0]), GFP_NOWAIT); - if (!d) - return NULL; + irq_ring_idx = uc->bchan->id + oes->bcdma_bchan_ring; + irq_udma_idx = uc->bchan->id + oes->bcdma_bchan_data; - d->sglen = tr_count; + ret = bcdma_tisci_m2m_channel_config(uc); + break; + case DMA_MEM_TO_DEV: + /* Slave transfer synchronized - mem to dev (TX) trasnfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as MEM-to-DEV\n", __func__, + uc->id); - d->hwdesc_count = 1; - hwdesc = &d->hwdesc[0]; + ret = udma_alloc_tx_resources(uc); + if (ret) { + uc->config.remote_thread_id = -1; + return ret; + } - /* Allocate memory for DMA ring descriptor */ - if (uc->use_dma_pool) { - hwdesc->cppi5_desc_size = uc->config.hdesc_size; - hwdesc->cppi5_desc_vaddr = dma_pool_zalloc(uc->hdesc_pool, - GFP_NOWAIT, - &hwdesc->cppi5_desc_paddr); - } else { - hwdesc->cppi5_desc_size = cppi5_trdesc_calc_size(tr_size, - tr_count); - hwdesc->cppi5_desc_size = ALIGN(hwdesc->cppi5_desc_size, - uc->ud->desc_align); - hwdesc->cppi5_desc_vaddr = dma_alloc_coherent(uc->ud->dev, - hwdesc->cppi5_desc_size, - &hwdesc->cppi5_desc_paddr, - GFP_NOWAIT); - } + uc->config.src_thread = ud->psil_base + uc->tchan->id; + uc->config.dst_thread = uc->config.remote_thread_id; + uc->config.dst_thread |= K3_PSIL_DST_THREAD_ID_OFFSET; - if (!hwdesc->cppi5_desc_vaddr) { - kfree(d); - return NULL; - } + irq_ring_idx = uc->tchan->id + oes->bcdma_tchan_ring; + irq_udma_idx = uc->tchan->id + oes->bcdma_tchan_data; - /* Start of the TR req records */ - hwdesc->tr_req_base = hwdesc->cppi5_desc_vaddr + tr_size; - /* Start address of the TR response array */ - hwdesc->tr_resp_base = hwdesc->tr_req_base + tr_size * tr_count; + ret = bcdma_tisci_tx_channel_config(uc); + break; + case DMA_DEV_TO_MEM: + /* Slave transfer synchronized - dev to 
mem (RX) trasnfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as DEV-to-MEM\n", __func__, + uc->id); - tr_desc = hwdesc->cppi5_desc_vaddr; + ret = udma_alloc_rx_resources(uc); + if (ret) { + uc->config.remote_thread_id = -1; + return ret; + } - if (uc->cyclic) - reload_count = CPPI5_INFO0_TRDESC_RLDCNT_INFINITE; + uc->config.src_thread = uc->config.remote_thread_id; + uc->config.dst_thread = (ud->psil_base + uc->rchan->id) | + K3_PSIL_DST_THREAD_ID_OFFSET; - if (dir == DMA_DEV_TO_MEM) - ring_id = k3_ringacc_get_ring_id(uc->rflow->r_ring); + irq_ring_idx = uc->rchan->id + oes->bcdma_rchan_ring; + irq_udma_idx = uc->rchan->id + oes->bcdma_rchan_data; + + ret = bcdma_tisci_rx_channel_config(uc); + break; + default: + /* Can not happen */ + dev_err(uc->ud->dev, "%s: chan%d invalid direction (%u)\n", + __func__, uc->id, uc->config.dir); + return -EINVAL; + } + + /* check if the channel configuration was successful */ + if (ret) + goto err_res_free; + + if (udma_is_chan_running(uc)) { + dev_warn(ud->dev, "chan%d: is running!\n", uc->id); + udma_reset_chan(uc, false); + if (udma_is_chan_running(uc)) { + dev_err(ud->dev, "chan%d: won't stop!\n", uc->id); + ret = -EBUSY; + goto err_res_free; + } + } + + uc->dma_dev = dmaengine_get_dma_device(chan); + if (uc->config.dir == DMA_MEM_TO_MEM && !uc->config.tr_trigger_type) { + uc->config.hdesc_size = cppi5_trdesc_calc_size( + sizeof(struct cppi5_tr_type15_t), 2); + + uc->hdesc_pool = dma_pool_create(uc->name, ud->ddev.dev, + uc->config.hdesc_size, + ud->desc_align, + 0); + if (!uc->hdesc_pool) { + dev_err(ud->ddev.dev, + "Descriptor pool allocation failed\n"); + uc->use_dma_pool = false; + return -ENOMEM; + } + + uc->use_dma_pool = true; + } else if (uc->config.dir != DMA_MEM_TO_MEM) { + /* PSI-L pairing */ + ret = navss_psil_pair(ud, uc->config.src_thread, + uc->config.dst_thread); + if (ret) { + dev_err(ud->dev, + "PSI-L pairing failed: 0x%04x -> 0x%04x\n", + uc->config.src_thread, uc->config.dst_thread); + goto err_res_free; + } + + uc->psil_paired = true; + } + + uc->irq_num_ring = ti_sci_inta_msi_get_virq(ud->dev, irq_ring_idx); + if (uc->irq_num_ring <= 0) { + dev_err(ud->dev, "Failed to get ring irq (index: %u)\n", + irq_ring_idx); + ret = -EINVAL; + goto err_psi_free; + } + + ret = request_irq(uc->irq_num_ring, udma_ring_irq_handler, + IRQF_TRIGGER_HIGH, uc->name, uc); + if (ret) { + dev_err(ud->dev, "chan%d: ring irq request failed\n", uc->id); + goto err_irq_free; + } + + /* Event from BCDMA (TR events) only needed for slave channels */ + if (is_slave_direction(uc->config.dir)) { + uc->irq_num_udma = ti_sci_inta_msi_get_virq(ud->dev, + irq_udma_idx); + if (uc->irq_num_udma <= 0) { + dev_err(ud->dev, "Failed to get bcdma irq (index: %u)\n", + irq_udma_idx); + free_irq(uc->irq_num_ring, uc); + ret = -EINVAL; + goto err_irq_free; + } + + ret = request_irq(uc->irq_num_udma, udma_udma_irq_handler, 0, + uc->name, uc); + if (ret) { + dev_err(ud->dev, "chan%d: BCDMA irq request failed\n", + uc->id); + free_irq(uc->irq_num_ring, uc); + goto err_irq_free; + } + } else { + uc->irq_num_udma = 0; + } + + udma_reset_rings(uc); + + INIT_DELAYED_WORK_ONSTACK(&uc->tx_drain.work, + udma_check_tx_completion); + return 0; + +err_irq_free: + uc->irq_num_ring = 0; + uc->irq_num_udma = 0; +err_psi_free: + if (uc->psil_paired) + navss_psil_unpair(ud, uc->config.src_thread, + uc->config.dst_thread); + uc->psil_paired = false; +err_res_free: + bcdma_free_bchan_resources(uc); + udma_free_tx_resources(uc); + udma_free_rx_resources(uc); + + udma_reset_uchan(uc); + + if 
(uc->use_dma_pool) { + dma_pool_destroy(uc->hdesc_pool); + uc->use_dma_pool = false; + } + + return ret; +} + +static int bcdma_router_config(struct dma_chan *chan) +{ + struct k3_event_route_data *router_data = chan->route_data; + struct udma_chan *uc = to_udma_chan(chan); + u32 trigger_event; + + if (!uc->bchan) + return -EINVAL; + + if (uc->config.tr_trigger_type != 1 && uc->config.tr_trigger_type != 2) + return -EINVAL; + + trigger_event = uc->ud->soc_data->bcdma_trigger_event_offset; + trigger_event += (uc->bchan->id * 2) + uc->config.tr_trigger_type - 1; + + return router_data->set_event(router_data->priv, trigger_event); +} + +static int udma_slave_config(struct dma_chan *chan, + struct dma_slave_config *cfg) +{ + struct udma_chan *uc = to_udma_chan(chan); + + memcpy(&uc->cfg, cfg, sizeof(uc->cfg)); + + return 0; +} + +static struct udma_desc *udma_alloc_tr_desc(struct udma_chan *uc, + size_t tr_size, int tr_count, + enum dma_transfer_direction dir) +{ + struct udma_hwdesc *hwdesc; + struct cppi5_desc_hdr_t *tr_desc; + struct udma_desc *d; + u32 reload_count = 0; + u32 ring_id; + + switch (tr_size) { + case 16: + case 32: + case 64: + case 128: + break; + default: + dev_err(uc->ud->dev, "Unsupported TR size of %zu\n", tr_size); + return NULL; + } + + /* We have only one descriptor containing multiple TRs */ + d = kzalloc(sizeof(*d) + sizeof(d->hwdesc[0]), GFP_NOWAIT); + if (!d) + return NULL; + + d->sglen = tr_count; + + d->hwdesc_count = 1; + hwdesc = &d->hwdesc[0]; + + /* Allocate memory for DMA ring descriptor */ + if (uc->use_dma_pool) { + hwdesc->cppi5_desc_size = uc->config.hdesc_size; + hwdesc->cppi5_desc_vaddr = dma_pool_zalloc(uc->hdesc_pool, + GFP_NOWAIT, + &hwdesc->cppi5_desc_paddr); + } else { + hwdesc->cppi5_desc_size = cppi5_trdesc_calc_size(tr_size, + tr_count); + hwdesc->cppi5_desc_size = ALIGN(hwdesc->cppi5_desc_size, + uc->ud->desc_align); + hwdesc->cppi5_desc_vaddr = dma_alloc_coherent(uc->ud->dev, + hwdesc->cppi5_desc_size, + &hwdesc->cppi5_desc_paddr, + GFP_NOWAIT); + } + + if (!hwdesc->cppi5_desc_vaddr) { + kfree(d); + return NULL; + } + + /* Start of the TR req records */ + hwdesc->tr_req_base = hwdesc->cppi5_desc_vaddr + tr_size; + /* Start address of the TR response array */ + hwdesc->tr_resp_base = hwdesc->tr_req_base + tr_size * tr_count; + + tr_desc = hwdesc->cppi5_desc_vaddr; + + if (uc->cyclic) + reload_count = CPPI5_INFO0_TRDESC_RLDCNT_INFINITE; + + if (dir == DMA_DEV_TO_MEM) + ring_id = k3_ringacc_get_ring_id(uc->rflow->r_ring); else ring_id = k3_ringacc_get_ring_id(uc->tchan->tc_ring); @@ -2034,6 +2520,7 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl, size_t tr_size; int num_tr = 0; int tr_idx = 0; + u64 asel; /* estimate the number of TRs we will need */ for_each_sg(sgl, sgent, sglen, i) { @@ -2051,6 +2538,11 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl, d->sglen = sglen; + if (uc->ud->match_data->type == DMA_TYPE_UDMA) + asel = 0; + else + asel = (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT; + tr_req = d->hwdesc[0].tr_req_base; for_each_sg(sgl, sgent, sglen, i) { dma_addr_t sg_addr = sg_dma_address(sgent); @@ -2069,6 +2561,7 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl, false, CPPI5_TR_EVENT_SIZE_COMPLETION, 0); cppi5_tr_csf_set(&tr_req[tr_idx].flags, CPPI5_TR_CSF_SUPR_EVT); + sg_addr |= asel; tr_req[tr_idx].addr = sg_addr; tr_req[tr_idx].icnt0 = tr0_cnt0; tr_req[tr_idx].icnt1 = tr0_cnt1; @@ -2098,6 +2591,205 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct 
scatterlist *sgl, return d; } +static struct udma_desc * +udma_prep_slave_sg_triggered_tr(struct udma_chan *uc, struct scatterlist *sgl, + unsigned int sglen, + enum dma_transfer_direction dir, + unsigned long tx_flags, void *context) +{ + struct scatterlist *sgent; + struct cppi5_tr_type15_t *tr_req = NULL; + enum dma_slave_buswidth dev_width; + u16 tr_cnt0, tr_cnt1; + dma_addr_t dev_addr; + struct udma_desc *d; + unsigned int i; + size_t tr_size, sg_len; + int num_tr = 0; + int tr_idx = 0; + u32 burst, trigger_size, port_window; + u64 asel; + + if (dir == DMA_DEV_TO_MEM) { + dev_addr = uc->cfg.src_addr; + dev_width = uc->cfg.src_addr_width; + burst = uc->cfg.src_maxburst; + port_window = uc->cfg.src_port_window_size; + } else if (dir == DMA_MEM_TO_DEV) { + dev_addr = uc->cfg.dst_addr; + dev_width = uc->cfg.dst_addr_width; + burst = uc->cfg.dst_maxburst; + port_window = uc->cfg.dst_port_window_size; + } else { + dev_err(uc->ud->dev, "%s: bad direction?\n", __func__); + return NULL; + } + + if (!burst) + burst = 1; + + if (port_window) { + if (port_window != burst) { + dev_err(uc->ud->dev, + "The burst must be equal to port_window\n"); + return NULL; + } + + tr_cnt0 = dev_width * port_window; + tr_cnt1 = 1; + } else { + tr_cnt0 = dev_width; + tr_cnt1 = burst; + } + trigger_size = tr_cnt0 * tr_cnt1; + + /* estimate the number of TRs we will need */ + for_each_sg(sgl, sgent, sglen, i) { + sg_len = sg_dma_len(sgent); + + if (sg_len % trigger_size) { + dev_err(uc->ud->dev, + "Not aligned SG entry (%zu for %u)\n", sg_len, + trigger_size); + return NULL; + } + + if (sg_len / trigger_size < SZ_64K) + num_tr++; + else + num_tr += 2; + } + + /* Now allocate and setup the descriptor. */ + tr_size = sizeof(struct cppi5_tr_type15_t); + d = udma_alloc_tr_desc(uc, tr_size, num_tr, dir); + if (!d) + return NULL; + + d->sglen = sglen; + + if (uc->ud->match_data->type == DMA_TYPE_UDMA) { + asel = 0; + } else { + asel = (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT; + dev_addr |= asel; + } + + tr_req = d->hwdesc[0].tr_req_base; + for_each_sg(sgl, sgent, sglen, i) { + u16 tr0_cnt2, tr0_cnt3, tr1_cnt2; + dma_addr_t sg_addr = sg_dma_address(sgent); + + sg_len = sg_dma_len(sgent); + num_tr = udma_get_tr_counters(sg_len / trigger_size, 0, + &tr0_cnt2, &tr0_cnt3, &tr1_cnt2); + if (num_tr < 0) { + dev_err(uc->ud->dev, "size %zu is not supported\n", + sg_len); + udma_free_hwdesc(uc, d); + kfree(d); + return NULL; + } + + cppi5_tr_init(&tr_req[tr_idx].flags, CPPI5_TR_TYPE15, false, + true, CPPI5_TR_EVENT_SIZE_COMPLETION, 0); + cppi5_tr_csf_set(&tr_req[tr_idx].flags, CPPI5_TR_CSF_SUPR_EVT); + cppi5_tr_set_trigger(&tr_req[tr_idx].flags, + uc->config.tr_trigger_type, + CPPI5_TR_TRIGGER_TYPE_ICNT2_DEC, 0, 0); + + sg_addr |= asel; + if (dir == DMA_DEV_TO_MEM) { + tr_req[tr_idx].addr = dev_addr; + tr_req[tr_idx].icnt0 = tr_cnt0; + tr_req[tr_idx].icnt1 = tr_cnt1; + tr_req[tr_idx].icnt2 = tr0_cnt2; + tr_req[tr_idx].icnt3 = tr0_cnt3; + tr_req[tr_idx].dim1 = (-1) * tr_cnt0; + + tr_req[tr_idx].daddr = sg_addr; + tr_req[tr_idx].dicnt0 = tr_cnt0; + tr_req[tr_idx].dicnt1 = tr_cnt1; + tr_req[tr_idx].dicnt2 = tr0_cnt2; + tr_req[tr_idx].dicnt3 = tr0_cnt3; + tr_req[tr_idx].ddim1 = tr_cnt0; + tr_req[tr_idx].ddim2 = trigger_size; + tr_req[tr_idx].ddim3 = trigger_size * tr0_cnt2; + } else { + tr_req[tr_idx].addr = sg_addr; + tr_req[tr_idx].icnt0 = tr_cnt0; + tr_req[tr_idx].icnt1 = tr_cnt1; + tr_req[tr_idx].icnt2 = tr0_cnt2; + tr_req[tr_idx].icnt3 = tr0_cnt3; + tr_req[tr_idx].dim1 = tr_cnt0; + tr_req[tr_idx].dim2 = trigger_size; + 
tr_req[tr_idx].dim3 = trigger_size * tr0_cnt2; + + tr_req[tr_idx].daddr = dev_addr; + tr_req[tr_idx].dicnt0 = tr_cnt0; + tr_req[tr_idx].dicnt1 = tr_cnt1; + tr_req[tr_idx].dicnt2 = tr0_cnt2; + tr_req[tr_idx].dicnt3 = tr0_cnt3; + tr_req[tr_idx].ddim1 = (-1) * tr_cnt0; + } + + tr_idx++; + + if (num_tr == 2) { + cppi5_tr_init(&tr_req[tr_idx].flags, CPPI5_TR_TYPE15, + false, true, + CPPI5_TR_EVENT_SIZE_COMPLETION, 0); + cppi5_tr_csf_set(&tr_req[tr_idx].flags, + CPPI5_TR_CSF_SUPR_EVT); + cppi5_tr_set_trigger(&tr_req[tr_idx].flags, + uc->config.tr_trigger_type, + CPPI5_TR_TRIGGER_TYPE_ICNT2_DEC, + 0, 0); + + sg_addr += trigger_size * tr0_cnt2 * tr0_cnt3; + if (dir == DMA_DEV_TO_MEM) { + tr_req[tr_idx].addr = dev_addr; + tr_req[tr_idx].icnt0 = tr_cnt0; + tr_req[tr_idx].icnt1 = tr_cnt1; + tr_req[tr_idx].icnt2 = tr1_cnt2; + tr_req[tr_idx].icnt3 = 1; + tr_req[tr_idx].dim1 = (-1) * tr_cnt0; + + tr_req[tr_idx].daddr = sg_addr; + tr_req[tr_idx].dicnt0 = tr_cnt0; + tr_req[tr_idx].dicnt1 = tr_cnt1; + tr_req[tr_idx].dicnt2 = tr1_cnt2; + tr_req[tr_idx].dicnt3 = 1; + tr_req[tr_idx].ddim1 = tr_cnt0; + tr_req[tr_idx].ddim2 = trigger_size; + } else { + tr_req[tr_idx].addr = sg_addr; + tr_req[tr_idx].icnt0 = tr_cnt0; + tr_req[tr_idx].icnt1 = tr_cnt1; + tr_req[tr_idx].icnt2 = tr1_cnt2; + tr_req[tr_idx].icnt3 = 1; + tr_req[tr_idx].dim1 = tr_cnt0; + tr_req[tr_idx].dim2 = trigger_size; + + tr_req[tr_idx].daddr = dev_addr; + tr_req[tr_idx].dicnt0 = tr_cnt0; + tr_req[tr_idx].dicnt1 = tr_cnt1; + tr_req[tr_idx].dicnt2 = tr1_cnt2; + tr_req[tr_idx].dicnt3 = 1; + tr_req[tr_idx].ddim1 = (-1) * tr_cnt0; + } + tr_idx++; + } + + d->residue += sg_len; + } + + cppi5_tr_csf_set(&tr_req[tr_idx - 1].flags, + CPPI5_TR_CSF_SUPR_EVT | CPPI5_TR_CSF_EOP); + + return d; +} + static int udma_configure_statictr(struct udma_chan *uc, struct udma_desc *d, enum dma_slave_buswidth dev_width, u16 elcnt) @@ -2339,7 +3031,8 @@ udma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, struct udma_desc *d; u32 burst; - if (dir != uc->config.dir) { + if (dir != uc->config.dir && + (uc->config.dir == DMA_MEM_TO_MEM && !uc->config.tr_trigger_type)) { dev_err(chan->device->dev, "%s: chan%d is for %s, not supporting %s\n", __func__, uc->id, @@ -2365,9 +3058,12 @@ udma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, if (uc->config.pkt_mode) d = udma_prep_slave_sg_pkt(uc, sgl, sglen, dir, tx_flags, context); - else + else if (is_slave_direction(uc->config.dir)) d = udma_prep_slave_sg_tr(uc, sgl, sglen, dir, tx_flags, context); + else + d = udma_prep_slave_sg_triggered_tr(uc, sgl, sglen, dir, + tx_flags, context); if (!d) return NULL; @@ -2421,7 +3117,12 @@ udma_prep_dma_cyclic_tr(struct udma_chan *uc, dma_addr_t buf_addr, return NULL; tr_req = d->hwdesc[0].tr_req_base; - period_addr = buf_addr; + if (uc->ud->match_data->type == DMA_TYPE_UDMA) + period_addr = buf_addr; + else + period_addr = buf_addr | + ((u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT); + for (i = 0; i < periods; i++) { int tr_idx = i * num_tr; @@ -2627,6 +3328,11 @@ udma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, d->tr_idx = 0; d->residue = len; + if (uc->ud->match_data->type != DMA_TYPE_UDMA) { + src |= (u64)uc->ud->asel << K3_ADDRESS_ASEL_SHIFT; + dest |= (u64)uc->ud->asel << K3_ADDRESS_ASEL_SHIFT; + } + tr_req = d->hwdesc[0].tr_req_base; cppi5_tr_init(&tr_req[0].flags, CPPI5_TR_TYPE15, false, true, @@ -2984,6 +3690,7 @@ static void udma_free_chan_resources(struct dma_chan *chan) vchan_free_chan_resources(&uc->vc); 
tasklet_kill(&uc->vc.task); + bcdma_free_bchan_resources(uc); udma_free_tx_resources(uc); udma_free_rx_resources(uc); udma_reset_uchan(uc); @@ -2995,10 +3702,13 @@ static void udma_free_chan_resources(struct dma_chan *chan) } static struct platform_driver udma_driver; +static struct platform_driver bcdma_driver; struct udma_filter_param { int remote_thread_id; u32 atype; + u32 asel; + u32 tr_trigger_type; }; static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) @@ -3009,7 +3719,8 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) struct udma_chan *uc; struct udma_dev *ud; - if (chan->device->dev->driver != &udma_driver.driver) + if (chan->device->dev->driver != &udma_driver.driver && + chan->device->dev->driver != &bcdma_driver.driver) return false; uc = to_udma_chan(chan); @@ -3023,13 +3734,25 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) return false; } + if (filter_param->asel > 15) { + dev_err(ud->dev, "Invalid channel asel: %u\n", + filter_param->asel); + return false; + } + ucc->remote_thread_id = filter_param->remote_thread_id; ucc->atype = filter_param->atype; + ucc->asel = filter_param->asel; + ucc->tr_trigger_type = filter_param->tr_trigger_type; - if (ucc->remote_thread_id & K3_PSIL_DST_THREAD_ID_OFFSET) + if (ucc->tr_trigger_type) { + ucc->dir = DMA_MEM_TO_MEM; + goto triggered_bchan; + } else if (ucc->remote_thread_id & K3_PSIL_DST_THREAD_ID_OFFSET) { ucc->dir = DMA_MEM_TO_DEV; - else + } else { ucc->dir = DMA_DEV_TO_MEM; + } ep_config = psil_get_ep_config(ucc->remote_thread_id); if (IS_ERR(ep_config)) { @@ -3038,6 +3761,19 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) ucc->dir = DMA_MEM_TO_MEM; ucc->remote_thread_id = -1; ucc->atype = 0; + ucc->asel = 0; + return false; + } + + if (ud->match_data->type == DMA_TYPE_BCDMA && + ep_config->pkt_mode) { + dev_err(ud->dev, + "Only TR mode is supported (psi-l thread 0x%04x)\n", + ucc->remote_thread_id); + ucc->dir = DMA_MEM_TO_MEM; + ucc->remote_thread_id = -1; + ucc->atype = 0; + ucc->asel = 0; return false; } @@ -3069,6 +3805,13 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) ucc->remote_thread_id, dmaengine_get_direction_text(ucc->dir)); return true; + +triggered_bchan: + dev_dbg(ud->dev, "chan%d: triggered channel (type: %u)\n", uc->id, + ucc->tr_trigger_type); + + return true; + } static struct dma_chan *udma_of_xlate(struct of_phandle_args *dma_spec, @@ -3079,14 +3822,33 @@ static struct dma_chan *udma_of_xlate(struct of_phandle_args *dma_spec, struct udma_filter_param filter_param; struct dma_chan *chan; - if (dma_spec->args_count != 1 && dma_spec->args_count != 2) - return NULL; + if (ud->match_data->type == DMA_TYPE_BCDMA) { + if (dma_spec->args_count != 3) + return NULL; - filter_param.remote_thread_id = dma_spec->args[0]; - if (dma_spec->args_count == 2) - filter_param.atype = dma_spec->args[1]; - else + filter_param.tr_trigger_type = dma_spec->args[0]; + filter_param.remote_thread_id = dma_spec->args[1]; + filter_param.asel = dma_spec->args[2]; filter_param.atype = 0; + } else { + if (dma_spec->args_count != 1 && dma_spec->args_count != 2) + return NULL; + + filter_param.remote_thread_id = dma_spec->args[0]; + filter_param.tr_trigger_type = 0; + if (dma_spec->args_count == 2) { + if (ud->match_data->type == DMA_TYPE_UDMA) { + filter_param.atype = dma_spec->args[1]; + filter_param.asel = 0; + } else { + filter_param.atype = 0; + filter_param.asel = dma_spec->args[1]; + } + } else { + filter_param.atype = 0; + 
filter_param.asel = 0; + } + } chan = __dma_request_channel(&mask, udma_dma_filter_fn, &filter_param, ofdma->of_node); @@ -3099,18 +3861,21 @@ static struct dma_chan *udma_of_xlate(struct of_phandle_args *dma_spec, } static struct udma_match_data am654_main_data = { + .type = DMA_TYPE_UDMA, .psil_base = 0x1000, .enable_memcpy_support = true, .statictr_z_mask = GENMASK(11, 0), }; static struct udma_match_data am654_mcu_data = { + .type = DMA_TYPE_UDMA, .psil_base = 0x6000, .enable_memcpy_support = false, .statictr_z_mask = GENMASK(11, 0), }; static struct udma_match_data j721e_main_data = { + .type = DMA_TYPE_UDMA, .psil_base = 0x1000, .enable_memcpy_support = true, .flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE, @@ -3118,12 +3883,21 @@ static struct udma_match_data j721e_main_data = { }; static struct udma_match_data j721e_mcu_data = { + .type = DMA_TYPE_UDMA, .psil_base = 0x6000, .enable_memcpy_support = false, /* MEM_TO_MEM is slow via MCU UDMA */ .flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE, .statictr_z_mask = GENMASK(23, 0), }; +static struct udma_match_data am64_bcdma_data = { + .type = DMA_TYPE_BCDMA, + .psil_base = 0x2000, /* for tchan and rchan, not applicable to bchan */ + .enable_memcpy_support = true, /* Supported via bchan */ + .flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE, + .statictr_z_mask = GENMASK(23, 0), +}; + static const struct of_device_id udma_of_match[] = { { .compatible = "ti,am654-navss-main-udmap", @@ -3142,30 +3916,88 @@ static const struct of_device_id udma_of_match[] = { { /* Sentinel */ }, }; +static const struct of_device_id bcdma_of_match[] = { + { + .compatible = "ti,am64-dmss-bcdma", + .data = &am64_bcdma_data, + }, + { /* Sentinel */ }, +}; + static struct udma_soc_data am654_soc_data = { - .rchan_oes_offset = 0x200, + .oes = { + .udma_rchan = 0x200, + }, }; static struct udma_soc_data j721e_soc_data = { - .rchan_oes_offset = 0x400, + .oes = { + .udma_rchan = 0x400, + }, }; static struct udma_soc_data j7200_soc_data = { - .rchan_oes_offset = 0x80, + .oes = { + .udma_rchan = 0x80, + }, +}; + +static struct udma_soc_data am64_soc_data = { + .oes = { + .bcdma_bchan_data = 0x2200, + .bcdma_bchan_ring = 0x2400, + .bcdma_tchan_data = 0x2800, + .bcdma_tchan_ring = 0x2a00, + .bcdma_rchan_data = 0x2e00, + .bcdma_rchan_ring = 0x3000, + }, + .bcdma_trigger_event_offset = 0xc400, }; static const struct soc_device_attribute k3_soc_devices[] = { { .family = "AM65X", .data = &am654_soc_data }, { .family = "J721E", .data = &j721e_soc_data }, { .family = "J7200", .data = &j7200_soc_data }, + { .family = "AM64X", .data = &am64_soc_data }, { /* sentinel */ } }; static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud) { + u32 cap2, cap3; int i; - for (i = 0; i < MMR_LAST; i++) { + ud->mmrs[MMR_GCFG] = devm_platform_ioremap_resource_byname(pdev, mmr_names[MMR_GCFG]); + if (IS_ERR(ud->mmrs[MMR_GCFG])) + return PTR_ERR(ud->mmrs[MMR_GCFG]); + + cap2 = udma_read(ud->mmrs[MMR_GCFG], 0x28); + cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c); + + switch (ud->match_data->type) { + case DMA_TYPE_UDMA: + ud->rflow_cnt = UDMA_CAP3_RFLOW_CNT(cap3); + ud->tchan_cnt = UDMA_CAP2_TCHAN_CNT(cap2); + ud->echan_cnt = UDMA_CAP2_ECHAN_CNT(cap2); + ud->rchan_cnt = UDMA_CAP2_RCHAN_CNT(cap2); + break; + case DMA_TYPE_BCDMA: + ud->bchan_cnt = BCDMA_CAP2_BCHAN_CNT(cap2); + ud->tchan_cnt = BCDMA_CAP2_TCHAN_CNT(cap2); + ud->rchan_cnt = BCDMA_CAP2_RCHAN_CNT(cap2); + break; + default: + return -EINVAL; + } + + for 
(i = 1; i < MMR_LAST; i++) { + if (i == MMR_BCHANRT && ud->bchan_cnt == 0) + continue; + if (i == MMR_TCHANRT && ud->tchan_cnt == 0) + continue; + if (i == MMR_RCHANRT && ud->rchan_cnt == 0) + continue; + ud->mmrs[i] = devm_platform_ioremap_resource_byname(pdev, mmr_names[i]); if (IS_ERR(ud->mmrs[i])) return PTR_ERR(ud->mmrs[i]); @@ -3185,27 +4017,23 @@ static void udma_mark_resource_ranges(struct udma_dev *ud, unsigned long *map, rm_desc->num_sec); } +static const char * const range_names[] = { + [RM_RANGE_BCHAN] = "ti,sci-rm-range-bchan", + [RM_RANGE_TCHAN] = "ti,sci-rm-range-tchan", + [RM_RANGE_RCHAN] = "ti,sci-rm-range-rchan", + [RM_RANGE_RFLOW] = "ti,sci-rm-range-rflow" +}; + static int udma_setup_resources(struct udma_dev *ud) { + int ret, i, j; struct device *dev = ud->dev; - int ch_count, ret, i, j; - u32 cap2, cap3; struct ti_sci_resource *rm_res, irq_res; struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; - static const char * const range_names[] = { "ti,sci-rm-range-tchan", - "ti,sci-rm-range-rchan", - "ti,sci-rm-range-rflow" }; - - cap2 = udma_read(ud->mmrs[MMR_GCFG], UDMA_CAP_REG(2)); - cap3 = udma_read(ud->mmrs[MMR_GCFG], UDMA_CAP_REG(3)); - - ud->rflow_cnt = UDMA_CAP3_RFLOW_CNT(cap3); - ud->tchan_cnt = UDMA_CAP2_TCHAN_CNT(cap2); - ud->echan_cnt = UDMA_CAP2_ECHAN_CNT(cap2); - ud->rchan_cnt = UDMA_CAP2_RCHAN_CNT(cap2); - ch_count = ud->tchan_cnt + ud->rchan_cnt; + u32 cap3; /* Set up the throughput level start indexes */ + cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c); if (of_device_is_compatible(dev->of_node, "ti,am654-navss-main-udmap")) { ud->tpl_levels = 2; @@ -3262,11 +4090,15 @@ static int udma_setup_resources(struct udma_dev *ud) bitmap_set(ud->rflow_gp_map, 0, ud->rflow_cnt); /* Get resource ranges from tisci */ - for (i = 0; i < RM_RANGE_LAST; i++) + for (i = 0; i < RM_RANGE_LAST; i++) { + if (i == RM_RANGE_BCHAN) + continue; + tisci_rm->rm_ranges[i] = devm_ti_sci_get_of_resource(tisci_rm->tisci, dev, tisci_rm->tisci_dev_id, (char *)range_names[i]); + } /* tchan ranges */ rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; @@ -3304,12 +4136,12 @@ static int udma_setup_resources(struct udma_dev *ud) for (j = 0; j < rm_res->sets; j++, i++) { if (rm_res->desc[j].num) { irq_res.desc[i].start = rm_res->desc[j].start + - ud->soc_data->rchan_oes_offset; + ud->soc_data->oes.udma_rchan; irq_res.desc[i].num = rm_res->desc[j].num; } if (rm_res->desc[j].num_sec) { irq_res.desc[i].start_sec = rm_res->desc[j].start_sec + - ud->soc_data->rchan_oes_offset; + ud->soc_data->oes.udma_rchan; irq_res.desc[i].num_sec = rm_res->desc[j].num_sec; } } @@ -3332,6 +4164,174 @@ static int udma_setup_resources(struct udma_dev *ud) &rm_res->desc[i], "gp-rflow"); } + return 0; +} + +static int bcdma_setup_resources(struct udma_dev *ud) +{ + int ret, i, j; + struct device *dev = ud->dev; + struct ti_sci_resource *rm_res, irq_res; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct udma_oes_offsets *oes = &ud->soc_data->oes; + + ud->bchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->bchan_cnt), + sizeof(unsigned long), GFP_KERNEL); + ud->bchans = devm_kcalloc(dev, ud->bchan_cnt, sizeof(*ud->bchans), + GFP_KERNEL); + ud->tchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tchan_cnt), + sizeof(unsigned long), GFP_KERNEL); + ud->tchans = devm_kcalloc(dev, ud->tchan_cnt, sizeof(*ud->tchans), + GFP_KERNEL); + ud->rchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->rchan_cnt), + sizeof(unsigned long), GFP_KERNEL); + ud->rchans = devm_kcalloc(dev, ud->rchan_cnt, sizeof(*ud->rchans), + GFP_KERNEL); + 
/* BCDMA do not really have flows, but the driver expect it */ + ud->rflow_in_use = devm_kcalloc(dev, BITS_TO_LONGS(ud->rchan_cnt), + sizeof(unsigned long), + GFP_KERNEL); + ud->rflows = devm_kcalloc(dev, ud->rchan_cnt, sizeof(*ud->rflows), + GFP_KERNEL); + + if (!ud->bchan_map || !ud->tchan_map || !ud->rchan_map || + !ud->rflow_in_use || !ud->bchans || !ud->tchans || !ud->rchans || + !ud->rflows) + return -ENOMEM; + + /* TPL is not yet supported for BCDMA */ + ud->tpl_levels = 1; + + /* Get resource ranges from tisci */ + for (i = 0; i < RM_RANGE_LAST; i++) { + if (i == RM_RANGE_RFLOW) + continue; + if (i == RM_RANGE_BCHAN && ud->bchan_cnt == 0) + continue; + if (i == RM_RANGE_TCHAN && ud->tchan_cnt == 0) + continue; + if (i == RM_RANGE_RCHAN && ud->rchan_cnt == 0) + continue; + + tisci_rm->rm_ranges[i] = + devm_ti_sci_get_of_resource(tisci_rm->tisci, dev, + tisci_rm->tisci_dev_id, + (char *)range_names[i]); + } + + irq_res.sets = 0; + + /* bchan ranges */ + if (ud->bchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN]; + if (IS_ERR(rm_res)) { + bitmap_zero(ud->bchan_map, ud->bchan_cnt); + } else { + bitmap_fill(ud->bchan_map, ud->bchan_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->bchan_map, + &rm_res->desc[i], + "bchan"); + } + irq_res.sets += rm_res->sets; + } + + /* tchan ranges */ + if (ud->tchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; + if (IS_ERR(rm_res)) { + bitmap_zero(ud->tchan_map, ud->tchan_cnt); + } else { + bitmap_fill(ud->tchan_map, ud->tchan_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->tchan_map, + &rm_res->desc[i], + "tchan"); + } + irq_res.sets += rm_res->sets * 2; + } + + /* rchan ranges */ + if (ud->rchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN]; + if (IS_ERR(rm_res)) { + bitmap_zero(ud->rchan_map, ud->rchan_cnt); + } else { + bitmap_fill(ud->rchan_map, ud->rchan_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->rchan_map, + &rm_res->desc[i], + "rchan"); + } + irq_res.sets += rm_res->sets * 2; + } + + irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL); + if (ud->bchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN]; + for (i = 0; i < rm_res->sets; i++) { + irq_res.desc[i].start = rm_res->desc[i].start + + oes->bcdma_bchan_ring; + irq_res.desc[i].num = rm_res->desc[i].num; + } + } + if (ud->tchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; + for (j = 0; j < rm_res->sets; j++, i += 2) { + irq_res.desc[i].start = rm_res->desc[j].start + + oes->bcdma_tchan_data; + irq_res.desc[i].num = rm_res->desc[j].num; + + irq_res.desc[i + 1].start = rm_res->desc[j].start + + oes->bcdma_tchan_ring; + irq_res.desc[i + 1].num = rm_res->desc[j].num; + } + } + if (ud->rchan_cnt) { + rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN]; + for (j = 0; j < rm_res->sets; j++, i += 2) { + irq_res.desc[i].start = rm_res->desc[j].start + + oes->bcdma_rchan_data; + irq_res.desc[i].num = rm_res->desc[j].num; + + irq_res.desc[i + 1].start = rm_res->desc[j].start + + oes->bcdma_rchan_ring; + irq_res.desc[i + 1].num = rm_res->desc[j].num; + } + } + + ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res); + kfree(irq_res.desc); + if (ret) { + dev_err(ud->dev, "Failed to allocate MSI interrupts\n"); + return ret; + } + + return 0; +} + +static int setup_resources(struct udma_dev *ud) +{ + struct device *dev = ud->dev; + int ch_count, ret; + + switch (ud->match_data->type) { + case DMA_TYPE_UDMA: + ret = udma_setup_resources(ud); + break; + 
case DMA_TYPE_BCDMA: + ret = bcdma_setup_resources(ud); + break; + default: + return -EINVAL; + } + + if (ret) + return ret; + + ch_count = ud->bchan_cnt + ud->tchan_cnt + ud->rchan_cnt; + if (ud->bchan_cnt) + ch_count -= bitmap_weight(ud->bchan_map, ud->bchan_cnt); ch_count -= bitmap_weight(ud->tchan_map, ud->tchan_cnt); ch_count -= bitmap_weight(ud->rchan_map, ud->rchan_cnt); if (!ch_count) @@ -3342,12 +4342,32 @@ static int udma_setup_resources(struct udma_dev *ud) if (!ud->channels) return -ENOMEM; - dev_info(dev, "Channels: %d (tchan: %u, rchan: %u, gp-rflow: %u)\n", - ch_count, - ud->tchan_cnt - bitmap_weight(ud->tchan_map, ud->tchan_cnt), - ud->rchan_cnt - bitmap_weight(ud->rchan_map, ud->rchan_cnt), - ud->rflow_cnt - bitmap_weight(ud->rflow_gp_map, - ud->rflow_cnt)); + switch (ud->match_data->type) { + case DMA_TYPE_UDMA: + dev_info(dev, + "Channels: %d (tchan: %u, rchan: %u, gp-rflow: %u)\n", + ch_count, + ud->tchan_cnt - bitmap_weight(ud->tchan_map, + ud->tchan_cnt), + ud->rchan_cnt - bitmap_weight(ud->rchan_map, + ud->rchan_cnt), + ud->rflow_cnt - bitmap_weight(ud->rflow_gp_map, + ud->rflow_cnt)); + break; + case DMA_TYPE_BCDMA: + dev_info(dev, + "Channels: %d (bchan: %u, tchan: %u, rchan: %u)\n", + ch_count, + ud->bchan_cnt - bitmap_weight(ud->bchan_map, + ud->bchan_cnt), + ud->tchan_cnt - bitmap_weight(ud->tchan_map, + ud->tchan_cnt), + ud->rchan_cnt - bitmap_weight(ud->rchan_map, + ud->rchan_cnt)); + break; + default: + break; + } return ch_count; } @@ -3456,10 +4476,19 @@ static void udma_dbg_summary_show_chan(struct seq_file *s, seq_printf(s, " %-13s| %s", dma_chan_name(chan), chan->dbg_client_name ?: "in-use"); - seq_printf(s, " (%s, ", dmaengine_get_direction_text(uc->config.dir)); + if (ucc->tr_trigger_type) + seq_puts(s, " (triggered, "); + else + seq_printf(s, " (%s, ", + dmaengine_get_direction_text(uc->config.dir)); switch (uc->config.dir) { case DMA_MEM_TO_MEM: + if (uc->ud->match_data->type == DMA_TYPE_BCDMA) { + seq_printf(s, "bchan%d)\n", uc->bchan->id); + return; + } + seq_printf(s, "chan%d pair [0x%04x -> 0x%04x], ", uc->tchan->id, ucc->src_thread, ucc->dst_thread); break; @@ -3531,6 +4560,23 @@ static int udma_probe(struct platform_device *pdev) if (!ud) return -ENOMEM; + match = of_match_node(udma_of_match, dev->of_node); + if (!match) { + match = of_match_node(bcdma_of_match, dev->of_node); + if (!match) { + dev_err(dev, "No compatible match found\n"); + return -ENODEV; + } + } + ud->match_data = match->data; + + soc = soc_device_match(k3_soc_devices); + if (!soc) { + dev_err(dev, "No compatible SoC found\n"); + return -ENODEV; + } + ud->soc_data = soc->data; + ret = udma_get_mmrs(pdev, ud); if (ret) return ret; @@ -3554,16 +4600,38 @@ static int udma_probe(struct platform_device *pdev) return ret; } - ret = of_property_read_u32(dev->of_node, "ti,udma-atype", &ud->atype); - if (!ret && ud->atype > 2) { - dev_err(dev, "Invalid atype: %u\n", ud->atype); - return -EINVAL; + if (ud->match_data->type == DMA_TYPE_UDMA) { + ret = of_property_read_u32(dev->of_node, "ti,udma-atype", + &ud->atype); + if (!ret && ud->atype > 2) { + dev_err(dev, "Invalid atype: %u\n", ud->atype); + return -EINVAL; + } + } else { + ret = of_property_read_u32(dev->of_node, "ti,asel", + &ud->asel); + if (!ret && ud->asel > 15) { + dev_err(dev, "Invalid asel: %u\n", ud->asel); + return -EINVAL; + } } ud->tisci_rm.tisci_udmap_ops = &ud->tisci_rm.tisci->ops.rm_udmap_ops; ud->tisci_rm.tisci_psil_ops = &ud->tisci_rm.tisci->ops.rm_psil_ops; - ud->ringacc = 
of_k3_ringacc_get_by_phandle(dev->of_node, "ti,ringacc"); + if (ud->match_data->type == DMA_TYPE_UDMA) { + ud->ringacc = of_k3_ringacc_get_by_phandle(dev->of_node, "ti,ringacc"); + } else { + struct k3_ringacc_init_data ring_init_data; + + ring_init_data.tisci = ud->tisci_rm.tisci; + ring_init_data.tisci_dev_id = ud->tisci_rm.tisci_dev_id; + ring_init_data.num_rings = ud->bchan_cnt + ud->tchan_cnt + + ud->rchan_cnt; + + ud->ringacc = k3_ringacc_dmarings_init(pdev, &ring_init_data); + } + if (IS_ERR(ud->ringacc)) return PTR_ERR(ud->ringacc); @@ -3574,24 +4642,9 @@ static int udma_probe(struct platform_device *pdev) return -EPROBE_DEFER; } - match = of_match_node(udma_of_match, dev->of_node); - if (!match) { - dev_err(dev, "No compatible match found\n"); - return -ENODEV; - } - ud->match_data = match->data; - - soc = soc_device_match(k3_soc_devices); - if (!soc) { - dev_err(dev, "No compatible SoC found\n"); - return -ENODEV; - } - ud->soc_data = soc->data; - dma_cap_set(DMA_SLAVE, ud->ddev.cap_mask); dma_cap_set(DMA_CYCLIC, ud->ddev.cap_mask); - ud->ddev.device_alloc_chan_resources = udma_alloc_chan_resources; ud->ddev.device_config = udma_slave_config; ud->ddev.device_prep_slave_sg = udma_prep_slave_sg; ud->ddev.device_prep_dma_cyclic = udma_prep_dma_cyclic; @@ -3605,7 +4658,21 @@ static int udma_probe(struct platform_device *pdev) ud->ddev.dbg_summary_show = udma_dbg_summary_show; #endif + switch (ud->match_data->type) { + case DMA_TYPE_UDMA: + ud->ddev.device_alloc_chan_resources = + udma_alloc_chan_resources; + break; + case DMA_TYPE_BCDMA: + ud->ddev.device_alloc_chan_resources = + bcdma_alloc_chan_resources; + ud->ddev.device_router_config = bcdma_router_config; + break; + default: + return -EINVAL; + } ud->ddev.device_free_chan_resources = udma_free_chan_resources; + ud->ddev.src_addr_widths = TI_UDMAC_BUSWIDTHS; ud->ddev.dst_addr_widths = TI_UDMAC_BUSWIDTHS; ud->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); @@ -3613,7 +4680,8 @@ static int udma_probe(struct platform_device *pdev) ud->ddev.copy_align = DMAENGINE_ALIGN_8_BYTES; ud->ddev.desc_metadata_modes = DESC_METADATA_CLIENT | DESC_METADATA_ENGINE; - if (ud->match_data->enable_memcpy_support) { + if (ud->match_data->enable_memcpy_support && + !(ud->match_data->type == DMA_TYPE_BCDMA && ud->bchan_cnt == 0)) { dma_cap_set(DMA_MEMCPY, ud->ddev.cap_mask); ud->ddev.device_prep_dma_memcpy = udma_prep_dma_memcpy; ud->ddev.directions |= BIT(DMA_MEM_TO_MEM); @@ -3626,7 +4694,7 @@ static int udma_probe(struct platform_device *pdev) INIT_LIST_HEAD(&ud->ddev.channels); INIT_LIST_HEAD(&ud->desc_to_purge); - ch_count = udma_setup_resources(ud); + ch_count = setup_resources(ud); if (ch_count <= 0) return ch_count; @@ -3641,6 +4709,13 @@ static int udma_probe(struct platform_device *pdev) if (ret) return ret; + for (i = 0; i < ud->bchan_cnt; i++) { + struct udma_bchan *bchan = &ud->bchans[i]; + + bchan->id = i; + bchan->reg_rt = ud->mmrs[MMR_BCHANRT] + i * 0x1000; + } + for (i = 0; i < ud->tchan_cnt; i++) { struct udma_tchan *tchan = &ud->tchans[i]; @@ -3667,6 +4742,7 @@ static int udma_probe(struct platform_device *pdev) uc->ud = ud; uc->vc.desc_free = udma_desc_free; uc->id = i; + uc->bchan = NULL; uc->tchan = NULL; uc->rchan = NULL; uc->config.remote_thread_id = -1; @@ -3708,5 +4784,15 @@ static struct platform_driver udma_driver = { }; builtin_platform_driver(udma_driver); +static struct platform_driver bcdma_driver = { + .driver = { + .name = "ti-bcdma", + .of_match_table = bcdma_of_match, + .suppress_bind_attrs = true, + }, + 
.probe = udma_probe, +}; +builtin_platform_driver(bcdma_driver); + /* Private interfaces to UDMA */ #include "k3-udma-private.c" diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h index d1cace0cb43b..bf78ad94354a 100644 --- a/drivers/dma/ti/k3-udma.h +++ b/drivers/dma/ti/k3-udma.h @@ -18,7 +18,7 @@ #define UDMA_RX_FLOW_ID_FW_OES_REG 0x80 #define UDMA_RX_FLOW_ID_FW_STATUS_REG 0x88 -/* TCHANRT/RCHANRT registers */ +/* BCHANRT/TCHANRT/RCHANRT registers */ #define UDMA_CHAN_RT_CTL_REG 0x0 #define UDMA_CHAN_RT_SWTRIG_REG 0x8 #define UDMA_CHAN_RT_STDATA_REG 0x80 @@ -45,6 +45,10 @@ #define UDMA_CAP3_HCHAN_CNT(val) (((val) >> 14) & 0x1ff) #define UDMA_CAP3_UCHAN_CNT(val) (((val) >> 23) & 0x1ff) +#define BCDMA_CAP2_BCHAN_CNT(val) ((val) & 0x1ff) +#define BCDMA_CAP2_TCHAN_CNT(val) (((val) >> 9) & 0x1ff) +#define BCDMA_CAP2_RCHAN_CNT(val) (((val) >> 18) & 0x1ff) + /* UDMA_CHAN_RT_CTL_REG */ #define UDMA_CHAN_RT_CTL_EN BIT(31) #define UDMA_CHAN_RT_CTL_TDOWN BIT(30) @@ -82,13 +86,17 @@ */ #define PDMA_STATIC_TR_Z(x, mask) ((x) & (mask)) +/* Address Space Select */ +#define K3_ADDRESS_ASEL_SHIFT 48 + struct udma_dev; struct udma_tchan; struct udma_rchan; struct udma_rflow; enum udma_rm_range { - RM_RANGE_TCHAN = 0, + RM_RANGE_BCHAN = 0, + RM_RANGE_TCHAN, RM_RANGE_RCHAN, RM_RANGE_RFLOW, RM_RANGE_LAST, From patchwork Tue Nov 17 10:56:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 11911965 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0CA8C64EBC for ; Tue, 17 Nov 2020 10:57:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8F6C620781 for ; Tue, 17 Nov 2020 10:57:38 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="SsCqT6zt" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728254AbgKQK5N (ORCPT ); Tue, 17 Nov 2020 05:57:13 -0500 Received: from fllv0015.ext.ti.com ([198.47.19.141]:59728 "EHLO fllv0015.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728251AbgKQK5M (ORCPT ); Tue, 17 Nov 2020 05:57:12 -0500 Received: from lelv0265.itg.ti.com ([10.180.67.224]) by fllv0015.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAv35k019782; Tue, 17 Nov 2020 04:57:03 -0600 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1605610623; bh=A5OrLu3ojbYpG4gq/4/5UqViwqExPA+nK9Ht2JX2wy4=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=SsCqT6ztKejISY9Qbvn6w5QkK46hXQc4RZgaGnsS6Y8MMioRzMymmM2c3y0QRRk1Q u4Wa4A6kHizgsRKewRd+QthpjuWtUV/I+QFigt1Ph/NB0RFgHf/1NXpdZ+qCMr/PwT +ySQRsHF89aRyS5Fg1oyAdb+aqLhgEtdgX0BkVYw= Received: from DLEE113.ent.ti.com (dlee113.ent.ti.com [157.170.170.24]) by lelv0265.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 0AHAv2Y6005701 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Tue, 17 Nov 2020 04:57:02 -0600 Received: from DLEE115.ent.ti.com (157.170.170.26) by 
DLEE113.ent.ti.com (157.170.170.24) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Tue, 17 Nov 2020 04:57:02 -0600 Received: from lelv0327.itg.ti.com (10.180.67.183) by DLEE115.ent.ti.com (157.170.170.26) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Tue, 17 Nov 2020 04:57:02 -0600 Received: from feketebors.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAu6u3087311; Tue, 17 Nov 2020 04:56:59 -0600 From: Peter Ujfalusi To: , , , CC: , , , , , , , Subject: [PATCH v2 17/19] dmaengine: ti: k3-udma: Add support for BCDMA channel TPL handling Date: Tue, 17 Nov 2020 12:56:54 +0200 Message-ID: <20201117105656.5236-18-peter.ujfalusi@ti.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com> References: <20201117105656.5236-1-peter.ujfalusi@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org Unlike UDMAP the BCDMA defines the channel TPL levels per channel type. In UDMAP the number of high and ultra-high channels applies to both tchan and rchan. BCDMA defines the TPL per channel types: bchan, tchan and rchan can have different number of high and ultra-high channels. In order to support BCDMA channel TPL we need to move the tpl information as per channel type property for the DMAs. Signed-off-by: Peter Ujfalusi --- drivers/dma/ti/k3-udma.c | 117 ++++++++++++++++++++++++++------------- drivers/dma/ti/k3-udma.h | 6 ++ 2 files changed, 85 insertions(+), 38 deletions(-) diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c index f327d436f8ad..0dc3430fea40 100644 --- a/drivers/dma/ti/k3-udma.c +++ b/drivers/dma/ti/k3-udma.c @@ -146,6 +146,11 @@ struct udma_rx_flush { dma_addr_t buffer_paddr; }; +struct udma_tpl { + u8 levels; + u32 start_idx[3]; +}; + struct udma_dev { struct dma_device ddev; struct device *dev; @@ -153,8 +158,9 @@ struct udma_dev { const struct udma_match_data *match_data; const struct udma_soc_data *soc_data; - u8 tpl_levels; - u32 tpl_start_idx[3]; + struct udma_tpl bchan_tpl; + struct udma_tpl tchan_tpl; + struct udma_tpl rchan_tpl; size_t desc_align; /* alignment to use for descriptors */ @@ -1276,10 +1282,10 @@ static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud, \ } else { \ int start; \ \ - if (tpl >= ud->tpl_levels) \ - tpl = ud->tpl_levels - 1; \ + if (tpl >= ud->res##_tpl.levels) \ + tpl = ud->res##_tpl.levels - 1; \ \ - start = ud->tpl_start_idx[tpl]; \ + start = ud->res##_tpl.start_idx[tpl]; \ \ id = find_next_zero_bit(ud->res##_map, ud->res##_cnt, \ start); \ @@ -1292,29 +1298,14 @@ static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud, \ return &ud->res##s[id]; \ } +UDMA_RESERVE_RESOURCE(bchan); UDMA_RESERVE_RESOURCE(tchan); UDMA_RESERVE_RESOURCE(rchan); -static struct udma_bchan *__bcdma_reserve_bchan(struct udma_dev *ud, int id) -{ - if (id >= 0) { - if (test_bit(id, ud->bchan_map)) { - dev_err(ud->dev, "bchan%d is in use\n", id); - return ERR_PTR(-ENOENT); - } - } else { - id = find_next_zero_bit(ud->bchan_map, ud->bchan_cnt, 0); - if (id == ud->bchan_cnt) - return ERR_PTR(-ENOENT); - } - - set_bit(id, ud->bchan_map); - return &ud->bchans[id]; -} - static int bcdma_get_bchan(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; + enum udma_tp_level tpl; if (uc->bchan) { dev_dbg(ud->dev, 
"chan%d: already have bchan%d allocated\n", @@ -1322,7 +1313,16 @@ static int bcdma_get_bchan(struct udma_chan *uc) return 0; } - uc->bchan = __bcdma_reserve_bchan(ud, -1); + /* + * Use normal channels for peripherals, and highest TPL channel for + * mem2mem + */ + if (uc->config.tr_trigger_type) + tpl = 0; + else + tpl = ud->bchan_tpl.levels - 1; + + uc->bchan = __udma_reserve_bchan(ud, tpl, -1); if (IS_ERR(uc->bchan)) return PTR_ERR(uc->bchan); @@ -1384,8 +1384,11 @@ static int udma_get_chan_pair(struct udma_chan *uc) /* Can be optimized, but let's have it like this for now */ end = min(ud->tchan_cnt, ud->rchan_cnt); - /* Try to use the highest TPL channel pair for MEM_TO_MEM channels */ - chan_id = ud->tpl_start_idx[ud->tpl_levels - 1]; + /* + * Try to use the highest TPL channel pair for MEM_TO_MEM channels + * Note: in UDMAP the channel TPL is symmetric between tchan and rchan + */ + chan_id = ud->tchan_tpl.start_idx[ud->tchan_tpl.levels - 1]; for (; chan_id < end; chan_id++) { if (!test_bit(chan_id, ud->tchan_map) && !test_bit(chan_id, ud->rchan_map)) @@ -4036,23 +4039,27 @@ static int udma_setup_resources(struct udma_dev *ud) cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c); if (of_device_is_compatible(dev->of_node, "ti,am654-navss-main-udmap")) { - ud->tpl_levels = 2; - ud->tpl_start_idx[0] = 8; + ud->tchan_tpl.levels = 2; + ud->tchan_tpl.start_idx[0] = 8; } else if (of_device_is_compatible(dev->of_node, "ti,am654-navss-mcu-udmap")) { - ud->tpl_levels = 2; - ud->tpl_start_idx[0] = 2; + ud->tchan_tpl.levels = 2; + ud->tchan_tpl.start_idx[0] = 2; } else if (UDMA_CAP3_UCHAN_CNT(cap3)) { - ud->tpl_levels = 3; - ud->tpl_start_idx[1] = UDMA_CAP3_UCHAN_CNT(cap3); - ud->tpl_start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3); + ud->tchan_tpl.levels = 3; + ud->tchan_tpl.start_idx[1] = UDMA_CAP3_UCHAN_CNT(cap3); + ud->tchan_tpl.start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3); } else if (UDMA_CAP3_HCHAN_CNT(cap3)) { - ud->tpl_levels = 2; - ud->tpl_start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3); + ud->tchan_tpl.levels = 2; + ud->tchan_tpl.start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3); } else { - ud->tpl_levels = 1; + ud->tchan_tpl.levels = 1; } + ud->rchan_tpl.levels = ud->tchan_tpl.levels; + ud->rchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[0]; + ud->rchan_tpl.start_idx[1] = ud->tchan_tpl.start_idx[1]; + ud->tchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tchan_cnt), sizeof(unsigned long), GFP_KERNEL); ud->tchans = devm_kcalloc(dev, ud->tchan_cnt, sizeof(*ud->tchans), @@ -4174,6 +4181,43 @@ static int bcdma_setup_resources(struct udma_dev *ud) struct ti_sci_resource *rm_res, irq_res; struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; const struct udma_oes_offsets *oes = &ud->soc_data->oes; + u32 cap; + + /* Set up the throughput level start indexes */ + cap = udma_read(ud->mmrs[MMR_GCFG], 0x2c); + if (BCDMA_CAP3_UBCHAN_CNT(cap)) { + ud->bchan_tpl.levels = 3; + ud->bchan_tpl.start_idx[1] = BCDMA_CAP3_UBCHAN_CNT(cap); + ud->bchan_tpl.start_idx[0] = BCDMA_CAP3_HBCHAN_CNT(cap); + } else if (BCDMA_CAP3_HBCHAN_CNT(cap)) { + ud->bchan_tpl.levels = 2; + ud->bchan_tpl.start_idx[0] = BCDMA_CAP3_HBCHAN_CNT(cap); + } else { + ud->bchan_tpl.levels = 1; + } + + cap = udma_read(ud->mmrs[MMR_GCFG], 0x30); + if (BCDMA_CAP4_URCHAN_CNT(cap)) { + ud->rchan_tpl.levels = 3; + ud->rchan_tpl.start_idx[1] = BCDMA_CAP4_URCHAN_CNT(cap); + ud->rchan_tpl.start_idx[0] = BCDMA_CAP4_HRCHAN_CNT(cap); + } else if (BCDMA_CAP4_HRCHAN_CNT(cap)) { + ud->rchan_tpl.levels = 2; + ud->rchan_tpl.start_idx[0] = BCDMA_CAP4_HRCHAN_CNT(cap); + } else { + 
ud->rchan_tpl.levels = 1; + } + + if (BCDMA_CAP4_UTCHAN_CNT(cap)) { + ud->tchan_tpl.levels = 3; + ud->tchan_tpl.start_idx[1] = BCDMA_CAP4_UTCHAN_CNT(cap); + ud->tchan_tpl.start_idx[0] = BCDMA_CAP4_HTCHAN_CNT(cap); + } else if (BCDMA_CAP4_HTCHAN_CNT(cap)) { + ud->tchan_tpl.levels = 2; + ud->tchan_tpl.start_idx[0] = BCDMA_CAP4_HTCHAN_CNT(cap); + } else { + ud->tchan_tpl.levels = 1; + } ud->bchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->bchan_cnt), sizeof(unsigned long), GFP_KERNEL); @@ -4199,9 +4243,6 @@ static int bcdma_setup_resources(struct udma_dev *ud) !ud->rflows) return -ENOMEM; - /* TPL is not yet supported for BCDMA */ - ud->tpl_levels = 1; - /* Get resource ranges from tisci */ for (i = 0; i < RM_RANGE_LAST; i++) { if (i == RM_RANGE_RFLOW) diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h index bf78ad94354a..8cb32681afaf 100644 --- a/drivers/dma/ti/k3-udma.h +++ b/drivers/dma/ti/k3-udma.h @@ -48,6 +48,12 @@ #define BCDMA_CAP2_BCHAN_CNT(val) ((val) & 0x1ff) #define BCDMA_CAP2_TCHAN_CNT(val) (((val) >> 9) & 0x1ff) #define BCDMA_CAP2_RCHAN_CNT(val) (((val) >> 18) & 0x1ff) +#define BCDMA_CAP3_HBCHAN_CNT(val) (((val) >> 14) & 0x1ff) +#define BCDMA_CAP3_UBCHAN_CNT(val) (((val) >> 23) & 0x1ff) +#define BCDMA_CAP4_HRCHAN_CNT(val) ((val) & 0xff) +#define BCDMA_CAP4_URCHAN_CNT(val) (((val) >> 8) & 0xff) +#define BCDMA_CAP4_HTCHAN_CNT(val) (((val) >> 16) & 0xff) +#define BCDMA_CAP4_UTCHAN_CNT(val) (((val) >> 24) & 0xff) /* UDMA_CHAN_RT_CTL_REG */ #define UDMA_CHAN_RT_CTL_EN BIT(31) From patchwork Tue Nov 17 10:56:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 11911975 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E21AAC8300E for ; Tue, 17 Nov 2020 10:57:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 819E920781 for ; Tue, 17 Nov 2020 10:57:41 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="lFC12utB" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728292AbgKQK5Y (ORCPT ); Tue, 17 Nov 2020 05:57:24 -0500 Received: from fllv0016.ext.ti.com ([198.47.19.142]:34234 "EHLO fllv0016.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728218AbgKQK5U (ORCPT ); Tue, 17 Nov 2020 05:57:20 -0500 Received: from lelv0266.itg.ti.com ([10.180.67.225]) by fllv0016.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAv5JZ117615; Tue, 17 Nov 2020 04:57:05 -0600 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1605610625; bh=RVWSwOK8Hw5uiIDjvVTBd7PHFohu3wsbWtGDCjuWBhA=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=lFC12utBjhwqbYPDrK0bIKhkJsRLB/PuuVnw+MwxlhOvMKBnCF7yxp0c0mC/Cbvop 5x6DIt7C4bvBSfBQjUMeKuTv5RM9cGAX86xJBTZQ9S3GAldWCD5xQ0pF9Is3YjGPel D6vbZLBXmaDoeHD+fd1q04yy/W9INdQvkGDlXyEM= Received: from DFLE115.ent.ti.com (dfle115.ent.ti.com [10.64.6.36]) by lelv0266.itg.ti.com 
(8.15.2/8.15.2) with ESMTPS id 0AHAv5I5086748 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Tue, 17 Nov 2020 04:57:05 -0600 Received: from DFLE100.ent.ti.com (10.64.6.21) by DFLE115.ent.ti.com (10.64.6.36) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Tue, 17 Nov 2020 04:57:05 -0600 Received: from lelv0327.itg.ti.com (10.180.67.183) by DFLE100.ent.ti.com (10.64.6.21) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Tue, 17 Nov 2020 04:57:05 -0600 Received: from feketebors.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAu6u4087311; Tue, 17 Nov 2020 04:57:02 -0600 From: Peter Ujfalusi To: , , , CC: , , , , , , , Subject: [PATCH v2 18/19] dmaengine: ti: k3-udma: Initial support for K3 PKTDMA Date: Tue, 17 Nov 2020 12:56:55 +0200 Message-ID: <20201117105656.5236-19-peter.ujfalusi@ti.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com> References: <20201117105656.5236-1-peter.ujfalusi@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org
One of the DMAs introduced with AM64 is the Packet DMA (PKTDMA). It serves a similar purpose to K3 UDMAP channels in packet mode, but with notable differences, such as tflow support and channels being allocated to service specific peripherals. The rings for PKTDMA are integrated within the DMA itself instead of coming from the general purpose ringacc. PKTDMA can be used to service PSI-L peripherals, similarly to K3 UDMA channels. Most of the driver code can be reused for PKTDMA tchan/rchan support, but new setup and allocation functions are needed to handle the differences between the DMAs.
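To illustrate the mapped-channel/tflow point above, here is a minimal, self-contained C sketch (an editorial illustration under simplified assumptions, not part of the patch) of how udma_alloc_tx_resources() below picks the TX ring index once PKTDMA support is in place; the sketch_* names are stand-ins for the driver's struct udma_dev / struct udma_tchan.

/* Stand-ins for the relevant udma_dev / udma_tchan fields. */
struct sketch_tchan {
	int id;
	int tflow_id;	/* dedicated tflow on PKTDMA, -1 on UDMA/BCDMA */
};

struct sketch_dev {
	int bchan_cnt;	/* number of block-copy channels (BCDMA only) */
};

/*
 * PKTDMA rings are integrated in the DMA and indexed by the channel's
 * tflow; UDMA/BCDMA keep the ringacc-based bchan_cnt + tchan->id scheme.
 */
static int sketch_tx_ring_idx(const struct sketch_dev *ud,
			      const struct sketch_tchan *tchan)
{
	if (tchan->tflow_id >= 0)
		return tchan->tflow_id;

	return ud->bchan_cnt + tchan->id;
}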
Signed-off-by: Peter Ujfalusi --- drivers/dma/ti/k3-udma-private.c | 9 + drivers/dma/ti/k3-udma.c | 545 +++++++++++++++++++++++++++++-- drivers/dma/ti/k3-udma.h | 4 + 3 files changed, 533 insertions(+), 25 deletions(-) diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c index c9fb1d832581..87cbcfe96d5f 100644 --- a/drivers/dma/ti/k3-udma-private.c +++ b/drivers/dma/ti/k3-udma-private.c @@ -82,6 +82,9 @@ EXPORT_SYMBOL(xudma_free_gp_rflow_range); bool xudma_rflow_is_gp(struct udma_dev *ud, int id) { + if (!ud->rflow_gp_map) + return false; + return !test_bit(id, ud->rflow_gp_map); } EXPORT_SYMBOL(xudma_rflow_is_gp); @@ -113,6 +116,12 @@ void xudma_rflow_put(struct udma_dev *ud, struct udma_rflow *p) } EXPORT_SYMBOL(xudma_rflow_put); +int xudma_get_rflow_ring_offset(struct udma_dev *ud) +{ + return ud->tflow_cnt; +} +EXPORT_SYMBOL(xudma_get_rflow_ring_offset); + #define XUDMA_GET_RESOURCE_ID(res) \ int xudma_##res##_get_id(struct udma_##res *p) \ { \ diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c index 0dc3430fea40..87157cbae1b8 100644 --- a/drivers/dma/ti/k3-udma.c +++ b/drivers/dma/ti/k3-udma.c @@ -59,6 +59,7 @@ struct udma_chan; enum k3_dma_type { DMA_TYPE_UDMA = 0, DMA_TYPE_BCDMA, + DMA_TYPE_PKTDMA, }; enum udma_mmr { @@ -82,6 +83,8 @@ struct udma_tchan { int id; struct k3_ring *t_ring; /* Transmit ring */ struct k3_ring *tc_ring; /* Transmit Completion ring */ + int tflow_id; /* applicable only for PKTDMA */ + }; #define udma_bchan udma_tchan @@ -109,6 +112,10 @@ struct udma_oes_offsets { u32 bcdma_tchan_ring; u32 bcdma_rchan_data; u32 bcdma_rchan_ring; + + /* PKTDMA Output Event Offsets */ + u32 pktdma_tchan_flow; + u32 pktdma_rchan_flow; }; #define UDMA_FLAG_PDMA_ACC32 BIT(0) @@ -179,12 +186,14 @@ struct udma_dev { int echan_cnt; int rchan_cnt; int rflow_cnt; + int tflow_cnt; unsigned long *bchan_map; unsigned long *tchan_map; unsigned long *rchan_map; unsigned long *rflow_gp_map; unsigned long *rflow_gp_map_allocated; unsigned long *rflow_in_use; + unsigned long *tflow_map; struct udma_bchan *bchans; struct udma_tchan *tchans; @@ -249,6 +258,11 @@ struct udma_chan_config { u32 tr_trigger_type; + /* PKDMA mapped channel */ + int mapped_channel_id; + /* PKTDMA default tflow or rflow for mapped channel */ + int default_flow_id; + enum dma_transfer_direction dir; }; @@ -426,6 +440,8 @@ static void udma_reset_uchan(struct udma_chan *uc) { memset(&uc->config, 0, sizeof(uc->config)); uc->config.remote_thread_id = -1; + uc->config.mapped_channel_id = -1; + uc->config.default_flow_id = -1; uc->state = UDMA_CHAN_IS_IDLE; } @@ -815,10 +831,16 @@ static void udma_start_desc(struct udma_chan *uc) { struct udma_chan_config *ucc = &uc->config; - if (ucc->pkt_mode && (uc->cyclic || ucc->dir == DMA_DEV_TO_MEM)) { + if (uc->ud->match_data->type == DMA_TYPE_UDMA && ucc->pkt_mode && + (uc->cyclic || ucc->dir == DMA_DEV_TO_MEM)) { int i; - /* Push all descriptors to ring for packet mode cyclic or RX */ + /* + * UDMA only: Push all descriptors to ring for packet mode + * cyclic or RX + * PKTDMA supports pre-linked descriptor and cyclic is not + * supported + */ for (i = 0; i < uc->desc->sglen; i++) udma_push_to_ring(uc, i); } else { @@ -1248,10 +1270,12 @@ static struct udma_rflow *__udma_get_rflow(struct udma_dev *ud, int id) if (test_bit(id, ud->rflow_in_use)) return ERR_PTR(-ENOENT); - /* GP rflow has to be allocated first */ - if (!test_bit(id, ud->rflow_gp_map) && - !test_bit(id, ud->rflow_gp_map_allocated)) - return ERR_PTR(-EINVAL); + if 
(ud->rflow_gp_map) { + /* GP rflow has to be allocated first */ + if (!test_bit(id, ud->rflow_gp_map) && + !test_bit(id, ud->rflow_gp_map_allocated)) + return ERR_PTR(-EINVAL); + } dev_dbg(ud->dev, "get rflow%d\n", id); set_bit(id, ud->rflow_in_use); @@ -1341,9 +1365,39 @@ static int udma_get_tchan(struct udma_chan *uc) return 0; } - uc->tchan = __udma_reserve_tchan(ud, uc->config.channel_tpl, -1); + /* + * mapped_channel_id is -1 for UDMA, BCDMA and PKTDMA unmapped channels. + * For PKTDMA mapped channels it is configured to a channel which must + * be used to service the peripheral. + */ + uc->tchan = __udma_reserve_tchan(ud, uc->config.channel_tpl, + uc->config.mapped_channel_id); + if (IS_ERR(uc->tchan)) + return PTR_ERR(uc->tchan); + + if (ud->tflow_cnt) { + int tflow_id; + + /* Only PKTDMA have support for tx flows */ + if (uc->config.default_flow_id >= 0) + tflow_id = uc->config.default_flow_id; + else + tflow_id = uc->tchan->id; + + if (test_bit(tflow_id, ud->tflow_map)) { + dev_err(ud->dev, "tflow%d is in use\n", tflow_id); + clear_bit(uc->tchan->id, ud->tchan_map); + uc->tchan = NULL; + return -ENOENT; + } + + uc->tchan->tflow_id = tflow_id; + set_bit(tflow_id, ud->tflow_map); + } else { + uc->tchan->tflow_id = -1; + } - return PTR_ERR_OR_ZERO(uc->tchan); + return 0; } static int udma_get_rchan(struct udma_chan *uc) @@ -1356,7 +1410,13 @@ static int udma_get_rchan(struct udma_chan *uc) return 0; } - uc->rchan = __udma_reserve_rchan(ud, uc->config.channel_tpl, -1); + /* + * mapped_channel_id is -1 for UDMA, BCDMA and PKTDMA unmapped channels. + * For PKTDMA mapped channels it is configured to a channel which must + * be used to service the peripheral. + */ + uc->rchan = __udma_reserve_rchan(ud, uc->config.channel_tpl, + uc->config.mapped_channel_id); return PTR_ERR_OR_ZERO(uc->rchan); } @@ -1403,6 +1463,9 @@ static int udma_get_chan_pair(struct udma_chan *uc) uc->tchan = &ud->tchans[chan_id]; uc->rchan = &ud->rchans[chan_id]; + /* UDMA does not use tx flows */ + uc->tchan->tflow_id = -1; + return 0; } @@ -1459,6 +1522,10 @@ static void udma_put_tchan(struct udma_chan *uc) dev_dbg(ud->dev, "chan%d: put tchan%d\n", uc->id, uc->tchan->id); clear_bit(uc->tchan->id, ud->tchan_map); + + if (uc->tchan->tflow_id >= 0) + clear_bit(uc->tchan->tflow_id, ud->tflow_map); + uc->tchan = NULL; } } @@ -1559,7 +1626,10 @@ static int udma_alloc_tx_resources(struct udma_chan *uc) return ret; tchan = uc->tchan; - ring_idx = ud->bchan_cnt + tchan->id; + if (tchan->tflow_id >= 0) + ring_idx = tchan->tflow_id; + else + ring_idx = ud->bchan_cnt + tchan->id; ret = k3_ringacc_request_rings_pair(ud->ringacc, ring_idx, -1, &tchan->t_ring, @@ -1636,15 +1706,23 @@ static int udma_alloc_rx_resources(struct udma_chan *uc) if (uc->config.dir == DMA_MEM_TO_MEM) return 0; - ret = udma_get_rflow(uc, uc->rchan->id); + if (uc->config.default_flow_id >= 0) + ret = udma_get_rflow(uc, uc->config.default_flow_id); + else + ret = udma_get_rflow(uc, uc->rchan->id); + if (ret) { ret = -EBUSY; goto err_rflow; } rflow = uc->rflow; - fd_ring_id = ud->bchan_cnt + ud->tchan_cnt + ud->echan_cnt + - uc->rchan->id; + if (ud->tflow_cnt) + fd_ring_id = ud->tflow_cnt + rflow->id; + else + fd_ring_id = ud->bchan_cnt + ud->tchan_cnt + ud->echan_cnt + + uc->rchan->id; + ret = k3_ringacc_request_rings_pair(ud->ringacc, fd_ring_id, -1, &rflow->fd_ring, &rflow->r_ring); if (ret) { @@ -1860,6 +1938,8 @@ static int bcdma_tisci_tx_channel_config(struct udma_chan *uc) return ret; } +#define pktdma_tisci_tx_channel_config 
bcdma_tisci_tx_channel_config + static int udma_tisci_rx_channel_config(struct udma_chan *uc) { struct udma_dev *ud = uc->ud; @@ -1961,6 +2041,52 @@ static int bcdma_tisci_rx_channel_config(struct udma_chan *uc) return ret; } +static int pktdma_tisci_rx_channel_config(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops; + struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 }; + struct ti_sci_msg_rm_udmap_flow_cfg flow_req = { 0 }; + int ret = 0; + + req_rx.valid_params = TISCI_BCDMA_RCHAN_VALID_PARAMS; + req_rx.nav_id = tisci_rm->tisci_dev_id; + req_rx.index = uc->rchan->id; + + ret = tisci_ops->rx_ch_cfg(tisci_rm->tisci, &req_rx); + if (ret) { + dev_err(ud->dev, "rchan%d cfg failed %d\n", uc->rchan->id, ret); + return ret; + } + + flow_req.valid_params = + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_EINFO_PRESENT_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PSINFO_PRESENT_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_ERROR_HANDLING_VALID; + + flow_req.nav_id = tisci_rm->tisci_dev_id; + flow_req.flow_index = uc->rflow->id; + + if (uc->config.needs_epib) + flow_req.rx_einfo_present = 1; + else + flow_req.rx_einfo_present = 0; + if (uc->config.psd_size) + flow_req.rx_psinfo_present = 1; + else + flow_req.rx_psinfo_present = 0; + flow_req.rx_error_handling = 1; + + ret = tisci_ops->rx_flow_cfg(tisci_rm->tisci, &flow_req); + + if (ret) + dev_err(ud->dev, "flow%d config failed: %d\n", uc->rflow->id, + ret); + + return ret; +} + static int udma_alloc_chan_resources(struct dma_chan *chan) { struct udma_chan *uc = to_udma_chan(chan); @@ -2379,6 +2505,157 @@ static int bcdma_router_config(struct dma_chan *chan) return router_data->set_event(router_data->priv, trigger_event); } +static int pktdma_alloc_chan_resources(struct dma_chan *chan) +{ + struct udma_chan *uc = to_udma_chan(chan); + struct udma_dev *ud = to_udma_dev(chan->device); + const struct udma_oes_offsets *oes = &ud->soc_data->oes; + u32 irq_ring_idx; + int ret; + + /* + * Make sure that the completion is in a known state: + * No teardown, the channel is idle + */ + reinit_completion(&uc->teardown_completed); + complete_all(&uc->teardown_completed); + uc->state = UDMA_CHAN_IS_IDLE; + + switch (uc->config.dir) { + case DMA_MEM_TO_DEV: + /* Slave transfer synchronized - mem to dev (TX) trasnfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as MEM-to-DEV\n", __func__, + uc->id); + + ret = udma_alloc_tx_resources(uc); + if (ret) { + uc->config.remote_thread_id = -1; + return ret; + } + + uc->config.src_thread = ud->psil_base + uc->tchan->id; + uc->config.dst_thread = uc->config.remote_thread_id; + uc->config.dst_thread |= K3_PSIL_DST_THREAD_ID_OFFSET; + + irq_ring_idx = uc->tchan->tflow_id + oes->pktdma_tchan_flow; + + ret = pktdma_tisci_tx_channel_config(uc); + break; + case DMA_DEV_TO_MEM: + /* Slave transfer synchronized - dev to mem (RX) trasnfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as DEV-to-MEM\n", __func__, + uc->id); + + ret = udma_alloc_rx_resources(uc); + if (ret) { + uc->config.remote_thread_id = -1; + return ret; + } + + uc->config.src_thread = uc->config.remote_thread_id; + uc->config.dst_thread = (ud->psil_base + uc->rchan->id) | + K3_PSIL_DST_THREAD_ID_OFFSET; + + irq_ring_idx = uc->rflow->id + oes->pktdma_rchan_flow; + + ret = pktdma_tisci_rx_channel_config(uc); + break; + default: + /* Can not happen */ + dev_err(uc->ud->dev, "%s: chan%d invalid direction (%u)\n", + __func__, uc->id, uc->config.dir); + return -EINVAL; + } + + 
/* check if the channel configuration was successful */ + if (ret) + goto err_res_free; + + if (udma_is_chan_running(uc)) { + dev_warn(ud->dev, "chan%d: is running!\n", uc->id); + udma_reset_chan(uc, false); + if (udma_is_chan_running(uc)) { + dev_err(ud->dev, "chan%d: won't stop!\n", uc->id); + ret = -EBUSY; + goto err_res_free; + } + } + + uc->dma_dev = dmaengine_get_dma_device(chan); + uc->hdesc_pool = dma_pool_create(uc->name, uc->dma_dev, + uc->config.hdesc_size, ud->desc_align, + 0); + if (!uc->hdesc_pool) { + dev_err(ud->ddev.dev, + "Descriptor pool allocation failed\n"); + uc->use_dma_pool = false; + ret = -ENOMEM; + goto err_res_free; + } + + uc->use_dma_pool = true; + + /* PSI-L pairing */ + ret = navss_psil_pair(ud, uc->config.src_thread, uc->config.dst_thread); + if (ret) { + dev_err(ud->dev, "PSI-L pairing failed: 0x%04x -> 0x%04x\n", + uc->config.src_thread, uc->config.dst_thread); + goto err_res_free; + } + + uc->psil_paired = true; + + uc->irq_num_ring = ti_sci_inta_msi_get_virq(ud->dev, irq_ring_idx); + if (uc->irq_num_ring <= 0) { + dev_err(ud->dev, "Failed to get ring irq (index: %u)\n", + irq_ring_idx); + ret = -EINVAL; + goto err_psi_free; + } + + ret = request_irq(uc->irq_num_ring, udma_ring_irq_handler, + IRQF_TRIGGER_HIGH, uc->name, uc); + if (ret) { + dev_err(ud->dev, "chan%d: ring irq request failed\n", uc->id); + goto err_irq_free; + } + + uc->irq_num_udma = 0; + + udma_reset_rings(uc); + + INIT_DELAYED_WORK_ONSTACK(&uc->tx_drain.work, + udma_check_tx_completion); + + if (uc->tchan) + dev_dbg(ud->dev, + "chan%d: tchan%d, tflow%d, Remote thread: 0x%04x\n", + uc->id, uc->tchan->id, uc->tchan->tflow_id, + uc->config.remote_thread_id); + else if (uc->rchan) + dev_dbg(ud->dev, + "chan%d: rchan%d, rflow%d, Remote thread: 0x%04x\n", + uc->id, uc->rchan->id, uc->rflow->id, + uc->config.remote_thread_id); + return 0; + +err_irq_free: + uc->irq_num_ring = 0; +err_psi_free: + navss_psil_unpair(ud, uc->config.src_thread, uc->config.dst_thread); + uc->psil_paired = false; +err_res_free: + udma_free_tx_resources(uc); + udma_free_rx_resources(uc); + + udma_reset_uchan(uc); + + dma_pool_destroy(uc->hdesc_pool); + uc->use_dma_pool = false; + + return ret; +} + static int udma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg) { @@ -2857,6 +3134,7 @@ udma_prep_slave_sg_pkt(struct udma_chan *uc, struct scatterlist *sgl, struct udma_desc *d; u32 ring_id; unsigned int i; + u64 asel; d = kzalloc(struct_size(d, hwdesc, sglen), GFP_NOWAIT); if (!d) @@ -2870,6 +3148,11 @@ udma_prep_slave_sg_pkt(struct udma_chan *uc, struct scatterlist *sgl, else ring_id = k3_ringacc_get_ring_id(uc->tchan->tc_ring); + if (uc->ud->match_data->type == DMA_TYPE_UDMA) + asel = 0; + else + asel = (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT; + for_each_sg(sgl, sgent, sglen, i) { struct udma_hwdesc *hwdesc = &d->hwdesc[i]; dma_addr_t sg_addr = sg_dma_address(sgent); @@ -2904,14 +3187,16 @@ udma_prep_slave_sg_pkt(struct udma_chan *uc, struct scatterlist *sgl, } /* attach the sg buffer to the descriptor */ + sg_addr |= asel; cppi5_hdesc_attach_buf(desc, sg_addr, sg_len, sg_addr, sg_len); /* Attach link as host buffer descriptor */ if (h_desc) cppi5_hdesc_link_hbdesc(h_desc, - hwdesc->cppi5_desc_paddr); + hwdesc->cppi5_desc_paddr | asel); - if (dir == DMA_MEM_TO_DEV) + if (uc->ud->match_data->type == DMA_TYPE_PKTDMA || + dir == DMA_MEM_TO_DEV) h_desc = desc; } @@ -3190,6 +3475,9 @@ udma_prep_dma_cyclic_pkt(struct udma_chan *uc, dma_addr_t buf_addr, else ring_id = 
k3_ringacc_get_ring_id(uc->tchan->tc_ring); + if (uc->ud->match_data->type != DMA_TYPE_UDMA) + buf_addr |= (u64)uc->config.asel << K3_ADDRESS_ASEL_SHIFT; + for (i = 0; i < periods; i++) { struct udma_hwdesc *hwdesc = &d->hwdesc[i]; dma_addr_t period_addr = buf_addr + (period_len * i); @@ -3706,6 +3994,7 @@ static void udma_free_chan_resources(struct dma_chan *chan) static struct platform_driver udma_driver; static struct platform_driver bcdma_driver; +static struct platform_driver pktdma_driver; struct udma_filter_param { int remote_thread_id; @@ -3723,7 +4012,8 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) struct udma_dev *ud; if (chan->device->dev->driver != &udma_driver.driver && - chan->device->dev->driver != &bcdma_driver.driver) + chan->device->dev->driver != &bcdma_driver.driver && + chan->device->dev->driver != &pktdma_driver.driver) return false; uc = to_udma_chan(chan); @@ -3785,6 +4075,15 @@ static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) ucc->notdpkt = ep_config->notdpkt; ucc->ep_type = ep_config->ep_type; + if (ud->match_data->type == DMA_TYPE_PKTDMA && + ep_config->mapped_channel_id >= 0) { + ucc->mapped_channel_id = ep_config->mapped_channel_id; + ucc->default_flow_id = ep_config->default_flow_id; + } else { + ucc->mapped_channel_id = -1; + ucc->default_flow_id = -1; + } + if (ucc->ep_type != PSIL_EP_NATIVE) { const struct udma_match_data *match_data = ud->match_data; @@ -3901,6 +4200,14 @@ static struct udma_match_data am64_bcdma_data = { .statictr_z_mask = GENMASK(23, 0), }; +static struct udma_match_data am64_pktdma_data = { + .type = DMA_TYPE_PKTDMA, + .psil_base = 0x1000, + .enable_memcpy_support = false, /* PKTDMA does not support MEM_TO_MEM */ + .flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE, + .statictr_z_mask = GENMASK(23, 0), +}; + static const struct of_device_id udma_of_match[] = { { .compatible = "ti,am654-navss-main-udmap", @@ -3927,6 +4234,14 @@ static const struct of_device_id bcdma_of_match[] = { { /* Sentinel */ }, }; +static const struct of_device_id pktdma_of_match[] = { + { + .compatible = "ti,am64-dmss-pktdma", + .data = &am64_pktdma_data, + }, + { /* Sentinel */ }, +}; + static struct udma_soc_data am654_soc_data = { .oes = { .udma_rchan = 0x200, @@ -3953,6 +4268,8 @@ static struct udma_soc_data am64_soc_data = { .bcdma_tchan_ring = 0x2a00, .bcdma_rchan_data = 0x2e00, .bcdma_rchan_ring = 0x3000, + .pktdma_tchan_flow = 0x1200, + .pktdma_rchan_flow = 0x1600, }, .bcdma_trigger_event_offset = 0xc400, }; @@ -3967,7 +4284,7 @@ static const struct soc_device_attribute k3_soc_devices[] = { static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud) { - u32 cap2, cap3; + u32 cap2, cap3, cap4; int i; ud->mmrs[MMR_GCFG] = devm_platform_ioremap_resource_byname(pdev, mmr_names[MMR_GCFG]); @@ -3989,6 +4306,13 @@ static int udma_get_mmrs(struct platform_device *pdev, struct udma_dev *ud) ud->tchan_cnt = BCDMA_CAP2_TCHAN_CNT(cap2); ud->rchan_cnt = BCDMA_CAP2_RCHAN_CNT(cap2); break; + case DMA_TYPE_PKTDMA: + cap4 = udma_read(ud->mmrs[MMR_GCFG], 0x30); + ud->tchan_cnt = UDMA_CAP2_TCHAN_CNT(cap2); + ud->rchan_cnt = UDMA_CAP2_RCHAN_CNT(cap2); + ud->rflow_cnt = UDMA_CAP3_RFLOW_CNT(cap3); + ud->tflow_cnt = PKTDMA_CAP4_TFLOW_CNT(cap4); + break; default: return -EINVAL; } @@ -4024,7 +4348,8 @@ static const char * const range_names[] = { [RM_RANGE_BCHAN] = "ti,sci-rm-range-bchan", [RM_RANGE_TCHAN] = "ti,sci-rm-range-tchan", [RM_RANGE_RCHAN] = "ti,sci-rm-range-rchan", - [RM_RANGE_RFLOW] = 
"ti,sci-rm-range-rflow" + [RM_RANGE_RFLOW] = "ti,sci-rm-range-rflow", + [RM_RANGE_TFLOW] = "ti,sci-rm-range-tflow", }; static int udma_setup_resources(struct udma_dev *ud) @@ -4098,7 +4423,7 @@ static int udma_setup_resources(struct udma_dev *ud) /* Get resource ranges from tisci */ for (i = 0; i < RM_RANGE_LAST; i++) { - if (i == RM_RANGE_BCHAN) + if (i == RM_RANGE_BCHAN || i == RM_RANGE_TFLOW) continue; tisci_rm->rm_ranges[i] = @@ -4245,7 +4570,7 @@ static int bcdma_setup_resources(struct udma_dev *ud) /* Get resource ranges from tisci */ for (i = 0; i < RM_RANGE_LAST; i++) { - if (i == RM_RANGE_RFLOW) + if (i == RM_RANGE_RFLOW || i == RM_RANGE_TFLOW) continue; if (i == RM_RANGE_BCHAN && ud->bchan_cnt == 0) continue; @@ -4351,6 +4676,134 @@ static int bcdma_setup_resources(struct udma_dev *ud) return 0; } +static int pktdma_setup_resources(struct udma_dev *ud) +{ + int ret, i, j; + struct device *dev = ud->dev; + struct ti_sci_resource *rm_res, irq_res; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct udma_oes_offsets *oes = &ud->soc_data->oes; + u32 cap3; + + /* Set up the throughput level start indexes */ + cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c); + if (UDMA_CAP3_UCHAN_CNT(cap3)) { + ud->tchan_tpl.levels = 3; + ud->tchan_tpl.start_idx[1] = UDMA_CAP3_UCHAN_CNT(cap3); + ud->tchan_tpl.start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3); + } else if (UDMA_CAP3_HCHAN_CNT(cap3)) { + ud->tchan_tpl.levels = 2; + ud->tchan_tpl.start_idx[0] = UDMA_CAP3_HCHAN_CNT(cap3); + } else { + ud->tchan_tpl.levels = 1; + } + + ud->tchan_tpl.levels = ud->tchan_tpl.levels; + ud->tchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[0]; + ud->tchan_tpl.start_idx[1] = ud->tchan_tpl.start_idx[1]; + + ud->tchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tchan_cnt), + sizeof(unsigned long), GFP_KERNEL); + ud->tchans = devm_kcalloc(dev, ud->tchan_cnt, sizeof(*ud->tchans), + GFP_KERNEL); + ud->rchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->rchan_cnt), + sizeof(unsigned long), GFP_KERNEL); + ud->rchans = devm_kcalloc(dev, ud->rchan_cnt, sizeof(*ud->rchans), + GFP_KERNEL); + ud->rflow_in_use = devm_kcalloc(dev, BITS_TO_LONGS(ud->rflow_cnt), + sizeof(unsigned long), + GFP_KERNEL); + ud->rflows = devm_kcalloc(dev, ud->rflow_cnt, sizeof(*ud->rflows), + GFP_KERNEL); + ud->tflow_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tflow_cnt), + sizeof(unsigned long), GFP_KERNEL); + + if (!ud->tchan_map || !ud->rchan_map || !ud->tflow_map || !ud->tchans || + !ud->rchans || !ud->rflows || !ud->rflow_in_use) + return -ENOMEM; + + /* Get resource ranges from tisci */ + for (i = 0; i < RM_RANGE_LAST; i++) { + if (i == RM_RANGE_BCHAN) + continue; + + tisci_rm->rm_ranges[i] = + devm_ti_sci_get_of_resource(tisci_rm->tisci, dev, + tisci_rm->tisci_dev_id, + (char *)range_names[i]); + } + + /* tchan ranges */ + rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; + if (IS_ERR(rm_res)) { + bitmap_zero(ud->tchan_map, ud->tchan_cnt); + } else { + bitmap_fill(ud->tchan_map, ud->tchan_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->tchan_map, + &rm_res->desc[i], "tchan"); + } + + /* rchan ranges */ + rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN]; + if (IS_ERR(rm_res)) { + bitmap_zero(ud->rchan_map, ud->rchan_cnt); + } else { + bitmap_fill(ud->rchan_map, ud->rchan_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->rchan_map, + &rm_res->desc[i], "rchan"); + } + + /* rflow ranges */ + rm_res = tisci_rm->rm_ranges[RM_RANGE_RFLOW]; + if (IS_ERR(rm_res)) { + /* all rflows are assigned 
exclusively to Linux */ + bitmap_zero(ud->rflow_in_use, ud->rflow_cnt); + } else { + bitmap_fill(ud->rflow_in_use, ud->rflow_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->rflow_in_use, + &rm_res->desc[i], "rflow"); + } + irq_res.sets = rm_res->sets; + + /* tflow ranges */ + rm_res = tisci_rm->rm_ranges[RM_RANGE_TFLOW]; + if (IS_ERR(rm_res)) { + /* all tflows are assigned exclusively to Linux */ + bitmap_zero(ud->tflow_map, ud->tflow_cnt); + } else { + bitmap_fill(ud->tflow_map, ud->tflow_cnt); + for (i = 0; i < rm_res->sets; i++) + udma_mark_resource_ranges(ud, ud->tflow_map, + &rm_res->desc[i], "tflow"); + } + irq_res.sets += rm_res->sets; + + irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL); + rm_res = tisci_rm->rm_ranges[RM_RANGE_TFLOW]; + for (i = 0; i < rm_res->sets; i++) { + irq_res.desc[i].start = rm_res->desc[i].start + + oes->pktdma_tchan_flow; + irq_res.desc[i].num = rm_res->desc[i].num; + } + rm_res = tisci_rm->rm_ranges[RM_RANGE_RFLOW]; + for (j = 0; j < rm_res->sets; j++, i++) { + irq_res.desc[i].start = rm_res->desc[j].start + + oes->pktdma_rchan_flow; + irq_res.desc[i].num = rm_res->desc[j].num; + } + ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res); + kfree(irq_res.desc); + if (ret) { + dev_err(ud->dev, "Failed to allocate MSI interrupts\n"); + return ret; + } + + return 0; +} + static int setup_resources(struct udma_dev *ud) { struct device *dev = ud->dev; @@ -4363,6 +4816,9 @@ static int setup_resources(struct udma_dev *ud) case DMA_TYPE_BCDMA: ret = bcdma_setup_resources(ud); break; + case DMA_TYPE_PKTDMA: + ret = pktdma_setup_resources(ud); + break; default: return -EINVAL; } @@ -4406,6 +4862,14 @@ static int setup_resources(struct udma_dev *ud) ud->rchan_cnt - bitmap_weight(ud->rchan_map, ud->rchan_cnt)); break; + case DMA_TYPE_PKTDMA: + dev_info(dev, + "Channels: %d (tchan: %u, rchan: %u)\n", + ch_count, + ud->tchan_cnt - bitmap_weight(ud->tchan_map, + ud->tchan_cnt), + ud->rchan_cnt - bitmap_weight(ud->rchan_map, + ud->rchan_cnt)); default: break; } @@ -4536,10 +5000,14 @@ static void udma_dbg_summary_show_chan(struct seq_file *s, case DMA_DEV_TO_MEM: seq_printf(s, "rchan%d [0x%04x -> 0x%04x], ", uc->rchan->id, ucc->src_thread, ucc->dst_thread); + if (uc->ud->match_data->type == DMA_TYPE_PKTDMA) + seq_printf(s, "rflow%d, ", uc->rflow->id); break; case DMA_MEM_TO_DEV: seq_printf(s, "tchan%d [0x%04x -> 0x%04x], ", uc->tchan->id, ucc->src_thread, ucc->dst_thread); + if (uc->ud->match_data->type == DMA_TYPE_PKTDMA) + seq_printf(s, "tflow%d, ", uc->tchan->tflow_id); break; default: seq_printf(s, ")\n"); @@ -4602,8 +5070,10 @@ static int udma_probe(struct platform_device *pdev) return -ENOMEM; match = of_match_node(udma_of_match, dev->of_node); - if (!match) { + if (!match) match = of_match_node(bcdma_of_match, dev->of_node); + if (!match) { + match = of_match_node(pktdma_of_match, dev->of_node); if (!match) { dev_err(dev, "No compatible match found\n"); return -ENODEV; @@ -4667,8 +5137,14 @@ static int udma_probe(struct platform_device *pdev) ring_init_data.tisci = ud->tisci_rm.tisci; ring_init_data.tisci_dev_id = ud->tisci_rm.tisci_dev_id; - ring_init_data.num_rings = ud->bchan_cnt + ud->tchan_cnt + - ud->rchan_cnt; + if (ud->match_data->type == DMA_TYPE_BCDMA) { + ring_init_data.num_rings = ud->bchan_cnt + + ud->tchan_cnt + + ud->rchan_cnt; + } else { + ring_init_data.num_rings = ud->rflow_cnt + + ud->tflow_cnt; + } ud->ringacc = k3_ringacc_dmarings_init(pdev, &ring_init_data); } @@ -4684,11 +5160,14 @@ 
static int udma_probe(struct platform_device *pdev) } dma_cap_set(DMA_SLAVE, ud->ddev.cap_mask); - dma_cap_set(DMA_CYCLIC, ud->ddev.cap_mask); + /* cyclic operation is not supported via PKTDMA */ + if (ud->match_data->type != DMA_TYPE_PKTDMA) { + dma_cap_set(DMA_CYCLIC, ud->ddev.cap_mask); + ud->ddev.device_prep_dma_cyclic = udma_prep_dma_cyclic; + } ud->ddev.device_config = udma_slave_config; ud->ddev.device_prep_slave_sg = udma_prep_slave_sg; - ud->ddev.device_prep_dma_cyclic = udma_prep_dma_cyclic; ud->ddev.device_issue_pending = udma_issue_pending; ud->ddev.device_tx_status = udma_tx_status; ud->ddev.device_pause = udma_pause; @@ -4709,6 +5188,10 @@ static int udma_probe(struct platform_device *pdev) bcdma_alloc_chan_resources; ud->ddev.device_router_config = bcdma_router_config; break; + case DMA_TYPE_PKTDMA: + ud->ddev.device_alloc_chan_resources = + pktdma_alloc_chan_resources; + break; default: return -EINVAL; } @@ -4787,6 +5270,8 @@ static int udma_probe(struct platform_device *pdev) uc->tchan = NULL; uc->rchan = NULL; uc->config.remote_thread_id = -1; + uc->config.mapped_channel_id = -1; + uc->config.default_flow_id = -1; uc->config.dir = DMA_MEM_TO_MEM; uc->name = devm_kasprintf(dev, GFP_KERNEL, "%s chan%d", dev_name(dev), i); @@ -4835,5 +5320,15 @@ static struct platform_driver bcdma_driver = { }; builtin_platform_driver(bcdma_driver); +static struct platform_driver pktdma_driver = { + .driver = { + .name = "ti-pktdma", + .of_match_table = pktdma_of_match, + .suppress_bind_attrs = true, + }, + .probe = udma_probe, +}; +builtin_platform_driver(pktdma_driver); + /* Private interfaces to UDMA */ #include "k3-udma-private.c" diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h index 8cb32681afaf..078cc3aa4126 100644 --- a/drivers/dma/ti/k3-udma.h +++ b/drivers/dma/ti/k3-udma.h @@ -55,6 +55,8 @@ #define BCDMA_CAP4_HTCHAN_CNT(val) (((val) >> 16) & 0xff) #define BCDMA_CAP4_UTCHAN_CNT(val) (((val) >> 24) & 0xff) +#define PKTDMA_CAP4_TFLOW_CNT(val) ((val) & 0x3fff) + /* UDMA_CHAN_RT_CTL_REG */ #define UDMA_CHAN_RT_CTL_EN BIT(31) #define UDMA_CHAN_RT_CTL_TDOWN BIT(30) @@ -105,6 +107,7 @@ enum udma_rm_range { RM_RANGE_TCHAN, RM_RANGE_RCHAN, RM_RANGE_RFLOW, + RM_RANGE_TFLOW, RM_RANGE_LAST, }; @@ -151,5 +154,6 @@ void xudma_tchanrt_write(struct udma_tchan *tchan, int reg, u32 val); u32 xudma_rchanrt_read(struct udma_rchan *rchan, int reg); void xudma_rchanrt_write(struct udma_rchan *rchan, int reg, u32 val); bool xudma_rflow_is_gp(struct udma_dev *ud, int id); +int xudma_get_rflow_ring_offset(struct udma_dev *ud); #endif /* K3_UDMA_H_ */ From patchwork Tue Nov 17 10:56:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 11911989 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54E8CC8300C for ; Tue, 17 Nov 2020 10:57:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0CB8F20781 for ; Tue, 17 Nov 2020 10:57:41 +0000 (UTC) 
Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="uEKw8K17" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728290AbgKQK5Y (ORCPT ); Tue, 17 Nov 2020 05:57:24 -0500 Received: from fllv0016.ext.ti.com ([198.47.19.142]:34236 "EHLO fllv0016.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728274AbgKQK5U (ORCPT ); Tue, 17 Nov 2020 05:57:20 -0500 Received: from lelv0265.itg.ti.com ([10.180.67.224]) by fllv0016.ext.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAv82R117629; Tue, 17 Nov 2020 04:57:08 -0600 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1605610628; bh=k5sXBOdLk/LmrwNBnsytBkYJ3nZn2gLIOQ4grr9iRII=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=uEKw8K175Gl/StjTjkxWksSlojgXW8tfLI151yEUnH/x0WyhTdk3BaUuP0kXZ+zpF lfpJra0XzkdlG42NyKzLrvDKbt0zPe5AGsyB5SYY/Ig6wTSan7Sq1tXRvRnH+/jKyq GTJ0MP5tw/fYEvic4KiCHNGs2crQ3xrUqLNZft24= Received: from DLEE107.ent.ti.com (dlee107.ent.ti.com [157.170.170.37]) by lelv0265.itg.ti.com (8.15.2/8.15.2) with ESMTPS id 0AHAv8Zg006817 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=FAIL); Tue, 17 Nov 2020 04:57:08 -0600 Received: from DLEE100.ent.ti.com (157.170.170.30) by DLEE107.ent.ti.com (157.170.170.37) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Tue, 17 Nov 2020 04:57:08 -0600 Received: from lelv0327.itg.ti.com (10.180.67.183) by DLEE100.ent.ti.com (157.170.170.30) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Tue, 17 Nov 2020 04:57:08 -0600 Received: from feketebors.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by lelv0327.itg.ti.com (8.15.2/8.15.2) with ESMTP id 0AHAu6u5087311; Tue, 17 Nov 2020 04:57:05 -0600 From: Peter Ujfalusi To: , , , CC: , , , , , , , Subject: [PATCH v2 19/19] dmaengine: ti: k3-udma-glue: Add support for K3 PKTDMA Date: Tue, 17 Nov 2020 12:56:56 +0200 Message-ID: <20201117105656.5236-20-peter.ujfalusi@ti.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117105656.5236-1-peter.ujfalusi@ti.com> References: <20201117105656.5236-1-peter.ujfalusi@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Precedence: bulk List-ID: X-Mailing-List: dmaengine@vger.kernel.org From: Vignesh Raghavendra
Add support for PKTDMA in the k3-udma glue driver. Use the new psil_endpoint_config struct to get static data for a given channel or a flow during setup. Make sure that the RX flows being mapped to an RX channel are within the range of flows that has been allocated to that RX channel.
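As an editorial illustration of the flow-range rule above (a hedged sketch under simplified assumptions, not part of the patch): the check added to k3_udma_glue_request_rx_chn_priv() in this patch behaves like the helper below, where sketch_ep_cfg stands in for the relevant psil_endpoint_config fields and a request of -1 means "use the endpoint's default flow range".

#include <linux/errno.h>	/* for EINVAL; use <errno.h> outside the kernel */

/* Stand-in for the psil_endpoint_config flow mapping of a channel. */
struct sketch_ep_cfg {
	int flow_start;	/* first RX flow mapped to this channel */
	int flow_num;	/* number of RX flows mapped to it */
};

/* Returns the validated flow_id_base, or -EINVAL if out of range. */
static int sketch_pick_flow_id_base(const struct sketch_ep_cfg *ep,
				    int req_start, int req_num)
{
	int flow_end;

	if (req_start == -1)
		req_start = ep->flow_start;

	flow_end = req_start + req_num - 1;
	if (req_start < ep->flow_start ||
	    flow_end > ep->flow_start + ep->flow_num - 1)
		return -EINVAL;

	return req_start;	/* becomes rx_chn->flow_id_base */
}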
Signed-off-by: Vignesh Raghavendra Signed-off-by: Peter Ujfalusi --- drivers/dma/ti/k3-udma-glue.c | 272 ++++++++++++++++++++++++++----- drivers/dma/ti/k3-udma-private.c | 24 +++ drivers/dma/ti/k3-udma.h | 4 + include/linux/dma/k3-udma-glue.h | 8 + 4 files changed, 270 insertions(+), 38 deletions(-) diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c index ddf0730aa2bc..2d0b26a7bf78 100644 --- a/drivers/dma/ti/k3-udma-glue.c +++ b/drivers/dma/ti/k3-udma-glue.c @@ -22,6 +22,7 @@ struct k3_udma_glue_common { struct device *dev; + struct device chan_dev; struct udma_dev *udmax; const struct udma_tisci_rm *tisci_rm; struct k3_ringacc *ringacc; @@ -32,7 +33,8 @@ struct k3_udma_glue_common { bool epib; u32 psdata_size; u32 swdata_size; - u32 atype; + u32 atype_asel; + struct psil_endpoint_config *ep_config; }; struct k3_udma_glue_tx_channel { @@ -53,6 +55,8 @@ struct k3_udma_glue_tx_channel { bool tx_filt_einfo; bool tx_filt_pswords; bool tx_supr_tdpkt; + + int udma_tflow_id; }; struct k3_udma_glue_rx_flow { @@ -104,7 +108,6 @@ static int of_k3_udma_glue_parse_chn(struct device_node *chn_np, const char *name, struct k3_udma_glue_common *common, bool tx_chn) { - struct psil_endpoint_config *ep_config; struct of_phandle_args dma_spec; u32 thread_id; int ret = 0; @@ -121,15 +124,26 @@ static int of_k3_udma_glue_parse_chn(struct device_node *chn_np, &dma_spec)) return -ENOENT; + ret = of_k3_udma_glue_parse(dma_spec.np, common); + if (ret) + goto out_put_spec; + thread_id = dma_spec.args[0]; if (dma_spec.args_count == 2) { - if (dma_spec.args[1] > 2) { + if (dma_spec.args[1] > 2 && !xudma_is_pktdma(common->udmax)) { dev_err(common->dev, "Invalid channel atype: %u\n", dma_spec.args[1]); ret = -EINVAL; goto out_put_spec; } - common->atype = dma_spec.args[1]; + if (dma_spec.args[1] > 15 && xudma_is_pktdma(common->udmax)) { + dev_err(common->dev, "Invalid channel asel: %u\n", + dma_spec.args[1]); + ret = -EINVAL; + goto out_put_spec; + } + + common->atype_asel = dma_spec.args[1]; } if (tx_chn && !(thread_id & K3_PSIL_DST_THREAD_ID_OFFSET)) { @@ -143,25 +157,23 @@ static int of_k3_udma_glue_parse_chn(struct device_node *chn_np, } /* get psil endpoint config */ - ep_config = psil_get_ep_config(thread_id); - if (IS_ERR(ep_config)) { + common->ep_config = psil_get_ep_config(thread_id); + if (IS_ERR(common->ep_config)) { dev_err(common->dev, "No configuration for psi-l thread 0x%04x\n", thread_id); - ret = PTR_ERR(ep_config); + ret = PTR_ERR(common->ep_config); goto out_put_spec; } - common->epib = ep_config->needs_epib; - common->psdata_size = ep_config->psd_size; + common->epib = common->ep_config->needs_epib; + common->psdata_size = common->ep_config->psd_size; if (tx_chn) common->dst_thread = thread_id; else common->src_thread = thread_id; - ret = of_k3_udma_glue_parse(dma_spec.np, common); - out_put_spec: of_node_put(dma_spec.np); return ret; @@ -227,7 +239,7 @@ static int k3_udma_glue_cfg_tx_chn(struct k3_udma_glue_tx_channel *tx_chn) req.tx_supr_tdpkt = 1; req.tx_fetch_size = tx_chn->common.hdesc_size >> 2; req.txcq_qnum = k3_ringacc_get_ring_id(tx_chn->ringtxcq); - req.tx_atype = tx_chn->common.atype; + req.tx_atype = tx_chn->common.atype_asel; return tisci_rm->tisci_udmap_ops->tx_ch_cfg(tisci_rm->tisci, &req); } @@ -259,8 +271,14 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev, tx_chn->common.psdata_size, tx_chn->common.swdata_size); + if (xudma_is_pktdma(tx_chn->common.udmax)) + tx_chn->udma_tchan_id = 
tx_chn->common.ep_config->mapped_channel_id; + else + tx_chn->udma_tchan_id = -1; + /* request and cfg UDMAP TX channel */ - tx_chn->udma_tchanx = xudma_tchan_get(tx_chn->common.udmax, -1); + tx_chn->udma_tchanx = xudma_tchan_get(tx_chn->common.udmax, + tx_chn->udma_tchan_id); if (IS_ERR(tx_chn->udma_tchanx)) { ret = PTR_ERR(tx_chn->udma_tchanx); dev_err(dev, "UDMAX tchanx get err %d\n", ret); @@ -268,11 +286,33 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev, } tx_chn->udma_tchan_id = xudma_tchan_get_id(tx_chn->udma_tchanx); + tx_chn->common.chan_dev.parent = xudma_get_device(tx_chn->common.udmax); + dev_set_name(&tx_chn->common.chan_dev, "tchan%d-0x%04x", + tx_chn->udma_tchan_id, tx_chn->common.dst_thread); + ret = device_register(&tx_chn->common.chan_dev); + if (ret) { + dev_err(dev, "Channel Device registration failed %d\n", ret); + tx_chn->common.chan_dev.parent = NULL; + goto err; + } + + if (xudma_is_pktdma(tx_chn->common.udmax)) { + /* prepare the channel device as coherent */ + tx_chn->common.chan_dev.dma_coherent = true; + dma_coerce_mask_and_coherent(&tx_chn->common.chan_dev, + DMA_BIT_MASK(48)); + } + atomic_set(&tx_chn->free_pkts, cfg->txcq_cfg.size); + if (xudma_is_pktdma(tx_chn->common.udmax)) + tx_chn->udma_tflow_id = tx_chn->common.ep_config->default_flow_id; + else + tx_chn->udma_tflow_id = tx_chn->udma_tchan_id; + /* request and cfg rings */ ret = k3_ringacc_request_rings_pair(tx_chn->common.ringacc, - tx_chn->udma_tchan_id, -1, + tx_chn->udma_tflow_id, -1, &tx_chn->ringtx, &tx_chn->ringtxcq); if (ret) { @@ -284,6 +324,12 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev, cfg->tx_cfg.dma_dev = k3_udma_glue_tx_get_dma_device(tx_chn); cfg->txcq_cfg.dma_dev = cfg->tx_cfg.dma_dev; + /* Set the ASEL value for DMA rings of PKTDMA */ + if (xudma_is_pktdma(tx_chn->common.udmax)) { + cfg->tx_cfg.asel = tx_chn->common.atype_asel; + cfg->txcq_cfg.asel = tx_chn->common.atype_asel; + } + ret = k3_ringacc_ring_cfg(tx_chn->ringtx, &cfg->tx_cfg); if (ret) { dev_err(dev, "Failed to cfg ringtx %d\n", ret); @@ -335,6 +381,11 @@ void k3_udma_glue_release_tx_chn(struct k3_udma_glue_tx_channel *tx_chn) if (tx_chn->ringtx) k3_ringacc_ring_free(tx_chn->ringtx); + + if (tx_chn->common.chan_dev.parent) { + device_unregister(&tx_chn->common.chan_dev); + tx_chn->common.chan_dev.parent = NULL; + } } EXPORT_SYMBOL_GPL(k3_udma_glue_release_tx_chn); @@ -447,13 +498,10 @@ void k3_udma_glue_reset_tx_chn(struct k3_udma_glue_tx_channel *tx_chn, void *data, void (*cleanup)(void *data, dma_addr_t desc_dma)) { + struct device *dev = tx_chn->common.dev; dma_addr_t desc_dma; int occ_tx, i, ret; - /* reset TXCQ as it is not input for udma - expected to be empty */ - if (tx_chn->ringtxcq) - k3_ringacc_ring_reset(tx_chn->ringtxcq); - /* * TXQ reset need to be special way as it is input for udma and its * state cached by udma, so: @@ -462,17 +510,20 @@ void k3_udma_glue_reset_tx_chn(struct k3_udma_glue_tx_channel *tx_chn, * 3) reset TXQ in a special way */ occ_tx = k3_ringacc_ring_get_occ(tx_chn->ringtx); - dev_dbg(tx_chn->common.dev, "TX reset occ_tx %u\n", occ_tx); + dev_dbg(dev, "TX reset occ_tx %u\n", occ_tx); for (i = 0; i < occ_tx; i++) { ret = k3_ringacc_ring_pop(tx_chn->ringtx, &desc_dma); if (ret) { - dev_err(tx_chn->common.dev, "TX reset pop %d\n", ret); + if (ret != -ENODATA) + dev_err(dev, "TX reset pop %d\n", ret); break; } cleanup(data, desc_dma); } + /* reset TXCQ as it is not input for udma - expected to be empty */ + 
k3_ringacc_ring_reset(tx_chn->ringtxcq); k3_ringacc_ring_reset_dma(tx_chn->ringtx, occ_tx); } EXPORT_SYMBOL_GPL(k3_udma_glue_reset_tx_chn); @@ -491,7 +542,12 @@ EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_txcq_id); int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn) { - tx_chn->virq = k3_ringacc_get_ring_irq_num(tx_chn->ringtxcq); + if (xudma_is_pktdma(tx_chn->common.udmax)) { + tx_chn->virq = xudma_pktdma_tflow_get_irq(tx_chn->common.udmax, + tx_chn->udma_tflow_id); + } else { + tx_chn->virq = k3_ringacc_get_ring_irq_num(tx_chn->ringtxcq); + } return tx_chn->virq; } @@ -500,10 +556,36 @@ EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_irq); struct device * k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn) { + if (xudma_is_pktdma(tx_chn->common.udmax) && + (tx_chn->common.atype_asel == 14 || tx_chn->common.atype_asel == 15)) + return &tx_chn->common.chan_dev; + return xudma_get_device(tx_chn->common.udmax); } EXPORT_SYMBOL_GPL(k3_udma_glue_tx_get_dma_device); +void k3_udma_glue_tx_dma_to_cppi5_addr(struct k3_udma_glue_tx_channel *tx_chn, + dma_addr_t *addr) +{ + if (!xudma_is_pktdma(tx_chn->common.udmax) || + !tx_chn->common.atype_asel) + return; + + *addr |= (u64)tx_chn->common.atype_asel << K3_ADDRESS_ASEL_SHIFT; +} +EXPORT_SYMBOL_GPL(k3_udma_glue_tx_dma_to_cppi5_addr); + +void k3_udma_glue_tx_cppi5_to_dma_addr(struct k3_udma_glue_tx_channel *tx_chn, + dma_addr_t *addr) +{ + if (!xudma_is_pktdma(tx_chn->common.udmax) || + !tx_chn->common.atype_asel) + return; + + *addr &= (u64)GENMASK(K3_ADDRESS_ASEL_SHIFT - 1, 0); +} +EXPORT_SYMBOL_GPL(k3_udma_glue_tx_cppi5_to_dma_addr); + static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn) { const struct udma_tisci_rm *tisci_rm = rx_chn->common.tisci_rm; @@ -515,8 +597,6 @@ static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn) req.valid_params = TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID | TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID | TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID | - TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID | - TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID | TI_SCI_MSG_VALUE_RM_UDMAP_CH_ATYPE_VALID; req.nav_id = tisci_rm->tisci_dev_id; @@ -528,13 +608,16 @@ static int k3_udma_glue_cfg_rx_chn(struct k3_udma_glue_rx_channel *rx_chn) * req.rxcq_qnum = k3_ringacc_get_ring_id(rx_chn->flows[0].ringrx); */ req.rxcq_qnum = 0xFFFF; - if (rx_chn->flow_num && rx_chn->flow_id_base != rx_chn->udma_rchan_id) { + if (!xudma_is_pktdma(rx_chn->common.udmax) && rx_chn->flow_num && + rx_chn->flow_id_base != rx_chn->udma_rchan_id) { /* Default flow + extra ones */ + req.valid_params |= TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID; req.flowid_start = rx_chn->flow_id_base; req.flowid_cnt = rx_chn->flow_num; } req.rx_chan_type = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR; - req.rx_atype = rx_chn->common.atype; + req.rx_atype = rx_chn->common.atype_asel; ret = tisci_rm->tisci_udmap_ops->rx_ch_cfg(tisci_rm->tisci, &req); if (ret) @@ -588,10 +671,18 @@ static int k3_udma_glue_cfg_rx_flow(struct k3_udma_glue_rx_channel *rx_chn, goto err_rflow_put; } + if (xudma_is_pktdma(rx_chn->common.udmax)) { + rx_ring_id = flow->udma_rflow_id + + xudma_get_rflow_ring_offset(rx_chn->common.udmax); + rx_ringfdq_id = 0; + } else { + rx_ring_id = flow_cfg->ring_rxq_id; + rx_ringfdq_id = flow_cfg->ring_rxfdq0_id; + } + /* request and cfg rings */ ret = k3_ringacc_request_rings_pair(rx_chn->common.ringacc, - flow_cfg->ring_rxfdq0_id, - 
flow_cfg->ring_rxq_id, + rx_ringfdq_id, rx_ring_id, &flow->ringrxfdq, &flow->ringrx); if (ret) { @@ -603,6 +694,12 @@ static int k3_udma_glue_cfg_rx_flow(struct k3_udma_glue_rx_channel *rx_chn, flow_cfg->rx_cfg.dma_dev = k3_udma_glue_rx_get_dma_device(rx_chn); flow_cfg->rxfdq_cfg.dma_dev = flow_cfg->rx_cfg.dma_dev; + /* Set the ASEL value for DMA rings of PKTDMA */ + if (xudma_is_pktdma(rx_chn->common.udmax)) { + flow_cfg->rx_cfg.asel = rx_chn->common.atype_asel; + flow_cfg->rxfdq_cfg.asel = rx_chn->common.atype_asel; + } + ret = k3_ringacc_ring_cfg(flow->ringrx, &flow_cfg->rx_cfg); if (ret) { dev_err(dev, "Failed to cfg ringrx %d\n", ret); @@ -761,6 +858,7 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name, struct k3_udma_glue_rx_channel_cfg *cfg) { struct k3_udma_glue_rx_channel *rx_chn; + struct psil_endpoint_config *ep_cfg; int ret, i; if (cfg->flow_id_num <= 0) @@ -788,8 +886,16 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name, rx_chn->common.psdata_size, rx_chn->common.swdata_size); + ep_cfg = rx_chn->common.ep_config; + + if (xudma_is_pktdma(rx_chn->common.udmax)) + rx_chn->udma_rchan_id = ep_cfg->mapped_channel_id; + else + rx_chn->udma_rchan_id = -1; + /* request and cfg UDMAP RX channel */ - rx_chn->udma_rchanx = xudma_rchan_get(rx_chn->common.udmax, -1); + rx_chn->udma_rchanx = xudma_rchan_get(rx_chn->common.udmax, + rx_chn->udma_rchan_id); if (IS_ERR(rx_chn->udma_rchanx)) { ret = PTR_ERR(rx_chn->udma_rchanx); dev_err(dev, "UDMAX rchanx get err %d\n", ret); @@ -797,12 +903,47 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name, } rx_chn->udma_rchan_id = xudma_rchan_get_id(rx_chn->udma_rchanx); - rx_chn->flow_num = cfg->flow_id_num; - rx_chn->flow_id_base = cfg->flow_id_base; + rx_chn->common.chan_dev.parent = xudma_get_device(rx_chn->common.udmax); + dev_set_name(&rx_chn->common.chan_dev, "rchan%d-0x%04x", + rx_chn->udma_rchan_id, rx_chn->common.src_thread); + ret = device_register(&rx_chn->common.chan_dev); + if (ret) { + dev_err(dev, "Channel Device registration failed %d\n", ret); + rx_chn->common.chan_dev.parent = NULL; + goto err; + } - /* Use RX channel id as flow id: target dev can't generate flow_id */ - if (cfg->flow_id_use_rxchan_id) - rx_chn->flow_id_base = rx_chn->udma_rchan_id; + if (xudma_is_pktdma(rx_chn->common.udmax)) { + /* prepare the channel device as coherent */ + rx_chn->common.chan_dev.dma_coherent = true; + dma_coerce_mask_and_coherent(&rx_chn->common.chan_dev, + DMA_BIT_MASK(48)); + } + + if (xudma_is_pktdma(rx_chn->common.udmax)) { + int flow_start = cfg->flow_id_base; + int flow_end; + + if (flow_start == -1) + flow_start = ep_cfg->flow_start; + + flow_end = flow_start + cfg->flow_id_num - 1; + if (flow_start < ep_cfg->flow_start || + flow_end > (ep_cfg->flow_start + ep_cfg->flow_num - 1)) { + dev_err(dev, "Invalid flow range requested\n"); + ret = -EINVAL; + goto err; + } + rx_chn->flow_id_base = flow_start; + } else { + rx_chn->flow_id_base = cfg->flow_id_base; + + /* Use RX channel id as flow id: target dev can't generate flow_id */ + if (cfg->flow_id_use_rxchan_id) + rx_chn->flow_id_base = rx_chn->udma_rchan_id; + } + + rx_chn->flow_num = cfg->flow_id_num; rx_chn->flows = devm_kcalloc(dev, rx_chn->flow_num, sizeof(*rx_chn->flows), GFP_KERNEL); @@ -892,6 +1033,23 @@ k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name, goto err; } + rx_chn->common.chan_dev.parent = xudma_get_device(rx_chn->common.udmax); + dev_set_name(&rx_chn->common.chan_dev, 
"rchan_remote-0x%04x", + rx_chn->common.src_thread); + ret = device_register(&rx_chn->common.chan_dev); + if (ret) { + dev_err(dev, "Channel Device registration failed %d\n", ret); + rx_chn->common.chan_dev.parent = NULL; + goto err; + } + + if (xudma_is_pktdma(rx_chn->common.udmax)) { + /* prepare the channel device as coherent */ + rx_chn->common.chan_dev.dma_coherent = true; + dma_coerce_mask_and_coherent(&rx_chn->common.chan_dev, + DMA_BIT_MASK(48)); + } + ret = k3_udma_glue_allocate_rx_flows(rx_chn, cfg); if (ret) goto err; @@ -944,6 +1102,11 @@ void k3_udma_glue_release_rx_chn(struct k3_udma_glue_rx_channel *rx_chn) if (!IS_ERR_OR_NULL(rx_chn->udma_rchanx)) xudma_rchan_put(rx_chn->common.udmax, rx_chn->udma_rchanx); + + if (rx_chn->common.chan_dev.parent) { + device_unregister(&rx_chn->common.chan_dev); + rx_chn->common.chan_dev.parent = NULL; + } } EXPORT_SYMBOL_GPL(k3_udma_glue_release_rx_chn); @@ -1155,12 +1318,10 @@ void k3_udma_glue_reset_rx_chn(struct k3_udma_glue_rx_channel *rx_chn, /* reset RXCQ as it is not input for udma - expected to be empty */ occ_rx = k3_ringacc_ring_get_occ(flow->ringrx); dev_dbg(dev, "RX reset flow %u occ_rx %u\n", flow_num, occ_rx); - if (flow->ringrx) - k3_ringacc_ring_reset(flow->ringrx); /* Skip RX FDQ in case one FDQ is used for the set of flows */ if (skip_fdq) - return; + goto do_reset; /* * RX FDQ reset need to be special way as it is input for udma and its @@ -1175,13 +1336,17 @@ void k3_udma_glue_reset_rx_chn(struct k3_udma_glue_rx_channel *rx_chn, for (i = 0; i < occ_rx; i++) { ret = k3_ringacc_ring_pop(flow->ringrxfdq, &desc_dma); if (ret) { - dev_err(dev, "RX reset pop %d\n", ret); + if (ret != -ENODATA) + dev_err(dev, "RX reset pop %d\n", ret); break; } cleanup(data, desc_dma); } k3_ringacc_ring_reset_dma(flow->ringrxfdq, occ_rx); + +do_reset: + k3_ringacc_ring_reset(flow->ringrx); } EXPORT_SYMBOL_GPL(k3_udma_glue_reset_rx_chn); @@ -1211,7 +1376,12 @@ int k3_udma_glue_rx_get_irq(struct k3_udma_glue_rx_channel *rx_chn, flow = &rx_chn->flows[flow_num]; - flow->virq = k3_ringacc_get_ring_irq_num(flow->ringrx); + if (xudma_is_pktdma(rx_chn->common.udmax)) { + flow->virq = xudma_pktdma_rflow_get_irq(rx_chn->common.udmax, + flow->udma_rflow_id); + } else { + flow->virq = k3_ringacc_get_ring_irq_num(flow->ringrx); + } return flow->virq; } @@ -1220,6 +1390,32 @@ EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_irq); struct device * k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn) { + if (xudma_is_pktdma(rx_chn->common.udmax) && + (rx_chn->common.atype_asel == 14 || rx_chn->common.atype_asel == 15)) + return &rx_chn->common.chan_dev; + return xudma_get_device(rx_chn->common.udmax); } EXPORT_SYMBOL_GPL(k3_udma_glue_rx_get_dma_device); + +void k3_udma_glue_rx_dma_to_cppi5_addr(struct k3_udma_glue_rx_channel *rx_chn, + dma_addr_t *addr) +{ + if (!xudma_is_pktdma(rx_chn->common.udmax) || + !rx_chn->common.atype_asel) + return; + + *addr |= (u64)rx_chn->common.atype_asel << K3_ADDRESS_ASEL_SHIFT; +} +EXPORT_SYMBOL_GPL(k3_udma_glue_rx_dma_to_cppi5_addr); + +void k3_udma_glue_rx_cppi5_to_dma_addr(struct k3_udma_glue_rx_channel *rx_chn, + dma_addr_t *addr) +{ + if (!xudma_is_pktdma(rx_chn->common.udmax) || + !rx_chn->common.atype_asel) + return; + + *addr &= (u64)GENMASK(K3_ADDRESS_ASEL_SHIFT - 1, 0); +} +EXPORT_SYMBOL_GPL(k3_udma_glue_rx_cppi5_to_dma_addr); diff --git a/drivers/dma/ti/k3-udma-private.c b/drivers/dma/ti/k3-udma-private.c index 87cbcfe96d5f..c6738b0cb706 100644 --- a/drivers/dma/ti/k3-udma-private.c +++ 
b/drivers/dma/ti/k3-udma-private.c @@ -151,3 +151,27 @@ void xudma_##res##rt_write(struct udma_##res *p, int reg, u32 val) \ EXPORT_SYMBOL(xudma_##res##rt_write) XUDMA_RT_IO_FUNCTIONS(tchan); XUDMA_RT_IO_FUNCTIONS(rchan); + +int xudma_is_pktdma(struct udma_dev *ud) +{ + return ud->match_data->type == DMA_TYPE_PKTDMA; +} +EXPORT_SYMBOL(xudma_is_pktdma); + +int xudma_pktdma_tflow_get_irq(struct udma_dev *ud, int udma_tflow_id) +{ + const struct udma_oes_offsets *oes = &ud->soc_data->oes; + + return ti_sci_inta_msi_get_virq(ud->dev, udma_tflow_id + + oes->pktdma_tchan_flow); +} +EXPORT_SYMBOL(xudma_pktdma_tflow_get_irq); + +int xudma_pktdma_rflow_get_irq(struct udma_dev *ud, int udma_rflow_id) +{ + const struct udma_oes_offsets *oes = &ud->soc_data->oes; + + return ti_sci_inta_msi_get_virq(ud->dev, udma_rflow_id + + oes->pktdma_rchan_flow); +} +EXPORT_SYMBOL(xudma_pktdma_rflow_get_irq); diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h index 078cc3aa4126..c02080bb5866 100644 --- a/drivers/dma/ti/k3-udma.h +++ b/drivers/dma/ti/k3-udma.h @@ -156,4 +156,8 @@ void xudma_rchanrt_write(struct udma_rchan *rchan, int reg, u32 val); bool xudma_rflow_is_gp(struct udma_dev *ud, int id); int xudma_get_rflow_ring_offset(struct udma_dev *ud); +int xudma_is_pktdma(struct udma_dev *ud); + +int xudma_pktdma_tflow_get_irq(struct udma_dev *ud, int udma_tflow_id); +int xudma_pktdma_rflow_get_irq(struct udma_dev *ud, int udma_rflow_id); #endif /* K3_UDMA_H_ */ diff --git a/include/linux/dma/k3-udma-glue.h b/include/linux/dma/k3-udma-glue.h index d7c12f31377c..e443be4d3b4b 100644 --- a/include/linux/dma/k3-udma-glue.h +++ b/include/linux/dma/k3-udma-glue.h @@ -43,6 +43,10 @@ u32 k3_udma_glue_tx_get_txcq_id(struct k3_udma_glue_tx_channel *tx_chn); int k3_udma_glue_tx_get_irq(struct k3_udma_glue_tx_channel *tx_chn); struct device * k3_udma_glue_tx_get_dma_device(struct k3_udma_glue_tx_channel *tx_chn); +void k3_udma_glue_tx_dma_to_cppi5_addr(struct k3_udma_glue_tx_channel *tx_chn, + dma_addr_t *addr); +void k3_udma_glue_tx_cppi5_to_dma_addr(struct k3_udma_glue_tx_channel *tx_chn, + dma_addr_t *addr); enum { K3_UDMA_GLUE_SRC_TAG_LO_KEEP = 0, @@ -134,5 +138,9 @@ int k3_udma_glue_rx_flow_disable(struct k3_udma_glue_rx_channel *rx_chn, u32 flow_idx); struct device * k3_udma_glue_rx_get_dma_device(struct k3_udma_glue_rx_channel *rx_chn); +void k3_udma_glue_rx_dma_to_cppi5_addr(struct k3_udma_glue_rx_channel *rx_chn, + dma_addr_t *addr); +void k3_udma_glue_rx_cppi5_to_dma_addr(struct k3_udma_glue_rx_channel *rx_chn, + dma_addr_t *addr); #endif /* K3_UDMA_GLUE_H_ */
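A short, hedged usage sketch of the new address helpers declared above (editorial illustration, not part of the patch): a client driver draining TX completions on PKTDMA has to strip the ASEL bits from the popped descriptor address before using it as a real DMA address. k3_udma_glue_pop_tx_chn() is assumed from the pre-existing glue API; sketch_free_desc() is a hypothetical placeholder for the client's own cleanup.

#include <linux/dma/k3-udma-glue.h>

/* Hypothetical placeholder for the client driver's descriptor cleanup. */
static void sketch_free_desc(dma_addr_t desc_dma)
{
	/* dma_unmap and free the completed descriptor here */
}

static void sketch_tx_completion(struct k3_udma_glue_tx_channel *tx_chn)
{
	dma_addr_t desc_dma;

	while (!k3_udma_glue_pop_tx_chn(tx_chn, &desc_dma)) {
		/* On PKTDMA the popped address still carries the ASEL bits */
		k3_udma_glue_tx_cppi5_to_dma_addr(tx_chn, &desc_dma);
		sketch_free_desc(desc_dma);
	}
}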