From patchwork Tue Dec 12 11:10:11 2023
X-Patchwork-Submitter: s-vadapalli
X-Patchwork-Id: 13489075
From: Siddharth Vadapalli
Subject: [PATCH v2 4/4] dmaengine: ti: k3-udma-glue: Add function to request RX channel by ID
Date: Tue, 12 Dec 2023 16:40:11 +0530
Message-ID: <20231212111011.1401641-5-s-vadapalli@ti.com>
In-Reply-To: <20231212111011.1401641-1-s-vadapalli@ti.com>
References: <20231212111011.1401641-1-s-vadapalli@ti.com>
List-Id: <linux-arm-kernel.lists.infradead.org>

The existing function k3_udma_glue_request_remote_rx_chn() supports
requesting an RX DMA channel and flow by the name of the RX DMA channel.
Add support for requesting an RX channel by its thread ID via a new
function, k3_udma_glue_request_remote_rx_chn_by_id(), and export it for
use by drivers which are probed by alternate (non-device-tree) methods
but still wish to make use of the existing DMA APIs. Such drivers could,
for example, be informed of the RX channel to use over RPMsg.

Since the implementation of k3_udma_glue_request_remote_rx_chn_by_id()
reuses most of the code in k3_udma_glue_request_remote_rx_chn(), create
a new function, k3_udma_glue_request_remote_rx_chn_common(), for the
common code.

Signed-off-by: Siddharth Vadapalli
---
Changes since v1:
- Updated commit message indicating the use-case for which the patch is
  being added.
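For context, a rough caller-side sketch (not part of this patch) of how a
client driver that learns its channel parameters at runtime, e.g. over
RPMsg, might use the new API. The function my_drv_setup_rx and the
parameters udmax_np, rx_thread_id, rx_flow_base and the swdata_size value
are hypothetical placeholders:

#include <linux/err.h>
#include <linux/dma/k3-udma-glue.h>

static int my_drv_setup_rx(struct device *dev, struct device_node *udmax_np,
                           u32 rx_thread_id, u32 rx_flow_base)
{
        struct k3_udma_glue_rx_channel_cfg cfg = { };
        struct k3_udma_glue_rx_channel *rx_chn;

        /*
         * Remote RX channel: Linux only manages dedicated flows, so
         * flow_id_num must be > 0, flow_id_base must be >= 0, and
         * flow_id_use_rxchan_id/def_flow_cfg must stay unset.
         */
        cfg.swdata_size = 16;           /* example swdata size */
        cfg.flow_id_num = 1;
        cfg.flow_id_base = rx_flow_base;

        rx_chn = k3_udma_glue_request_remote_rx_chn_by_id(dev, udmax_np,
                                                          &cfg, rx_thread_id);
        if (IS_ERR(rx_chn))
                return PTR_ERR(rx_chn);

        /* ... set up flows/rings, then enable the channel ... */

        return 0;
}

Teardown would go through the existing k3_udma_glue_release_rx_chn(), as
with the name-based variant.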
 drivers/dma/ti/k3-udma-glue.c    | 136 ++++++++++++++++++++++---------
 include/linux/dma/k3-udma-glue.h |   4 +
 2 files changed, 101 insertions(+), 39 deletions(-)

diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index ea5119a5e8eb..996614b60675 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -1072,52 +1072,21 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name,
 	return ERR_PTR(ret);
 }
 
-static struct k3_udma_glue_rx_channel *
-k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name,
-				   struct k3_udma_glue_rx_channel_cfg *cfg)
+static int
+k3_udma_glue_request_remote_rx_chn_common(struct k3_udma_glue_rx_channel *rx_chn,
+					  struct k3_udma_glue_rx_channel_cfg *cfg,
+					  struct device *dev)
 {
-	struct k3_udma_glue_rx_channel *rx_chn;
 	int ret, i;
 
-	if (cfg->flow_id_num <= 0 ||
-	    cfg->flow_id_use_rxchan_id ||
-	    cfg->def_flow_cfg ||
-	    cfg->flow_id_base < 0)
-		return ERR_PTR(-EINVAL);
-
-	/*
-	 * Remote RX channel is under control of Remote CPU core, so
-	 * Linux can only request and manipulate by dedicated RX flows
-	 */
-
-	rx_chn = devm_kzalloc(dev, sizeof(*rx_chn), GFP_KERNEL);
-	if (!rx_chn)
-		return ERR_PTR(-ENOMEM);
-
-	rx_chn->common.dev = dev;
-	rx_chn->common.swdata_size = cfg->swdata_size;
-	rx_chn->remote = true;
-	rx_chn->udma_rchan_id = -1;
-	rx_chn->flow_num = cfg->flow_id_num;
-	rx_chn->flow_id_base = cfg->flow_id_base;
-	rx_chn->psil_paired = false;
-
-	/* parse of udmap channel */
-	ret = of_k3_udma_glue_parse_chn(dev->of_node, name,
-					&rx_chn->common, false);
-	if (ret)
-		goto err;
-
 	rx_chn->common.hdesc_size = cppi5_hdesc_calc_size(rx_chn->common.epib,
 							  rx_chn->common.psdata_size,
 							  rx_chn->common.swdata_size);
 
 	rx_chn->flows = devm_kcalloc(dev, rx_chn->flow_num,
 				     sizeof(*rx_chn->flows), GFP_KERNEL);
-	if (!rx_chn->flows) {
-		ret = -ENOMEM;
-		goto err;
-	}
+	if (!rx_chn->flows)
+		return -ENOMEM;
 
 	rx_chn->common.chan_dev.class = &k3_udma_glue_devclass;
 	rx_chn->common.chan_dev.parent = xudma_get_device(rx_chn->common.udmax);
@@ -1128,7 +1097,7 @@ k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name,
 		dev_err(dev, "Channel Device registration failed %d\n", ret);
 		put_device(&rx_chn->common.chan_dev);
 		rx_chn->common.chan_dev.parent = NULL;
-		goto err;
+		return ret;
 	}
 
 	if (xudma_is_pktdma(rx_chn->common.udmax)) {
@@ -1140,19 +1109,108 @@ k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name,
 
 	ret = k3_udma_glue_allocate_rx_flows(rx_chn, cfg);
 	if (ret)
-		goto err;
+		return ret;
 
 	for (i = 0; i < rx_chn->flow_num; i++)
 		rx_chn->flows[i].udma_rflow_id = rx_chn->flow_id_base + i;
 
 	k3_udma_glue_dump_rx_chn(rx_chn);
 
+	return 0;
+}
+
+static struct k3_udma_glue_rx_channel *
+k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name,
+				   struct k3_udma_glue_rx_channel_cfg *cfg)
+{
+	struct k3_udma_glue_rx_channel *rx_chn;
+	int ret;
+
+	if (cfg->flow_id_num <= 0 ||
+	    cfg->flow_id_use_rxchan_id ||
+	    cfg->def_flow_cfg ||
+	    cfg->flow_id_base < 0)
+		return ERR_PTR(-EINVAL);
+
+	/*
+	 * Remote RX channel is under control of Remote CPU core, so
+	 * Linux can only request and manipulate by dedicated RX flows
+	 */
+
+	rx_chn = devm_kzalloc(dev, sizeof(*rx_chn), GFP_KERNEL);
+	if (!rx_chn)
+		return ERR_PTR(-ENOMEM);
+
+	rx_chn->common.dev = dev;
+	rx_chn->common.swdata_size = cfg->swdata_size;
+	rx_chn->remote = true;
+	rx_chn->udma_rchan_id = -1;
+	rx_chn->flow_num = cfg->flow_id_num;
+	rx_chn->flow_id_base = cfg->flow_id_base;
+	rx_chn->psil_paired = false;
+
+	/* parse of udmap channel */
+	ret = of_k3_udma_glue_parse_chn(dev->of_node, name,
+					&rx_chn->common, false);
+	if (ret)
+		goto err;
+
+	ret = k3_udma_glue_request_remote_rx_chn_common(rx_chn, cfg, dev);
+	if (ret)
+		goto err;
+
+	return rx_chn;
+
+err:
+	k3_udma_glue_release_rx_chn(rx_chn);
+	return ERR_PTR(ret);
+}
+
+struct k3_udma_glue_rx_channel *
+k3_udma_glue_request_remote_rx_chn_by_id(struct device *dev, struct device_node *udmax_np,
+					 struct k3_udma_glue_rx_channel_cfg *cfg, u32 thread_id)
+{
+	struct k3_udma_glue_rx_channel *rx_chn;
+	int ret;
+
+	if (cfg->flow_id_num <= 0 ||
+	    cfg->flow_id_use_rxchan_id ||
+	    cfg->def_flow_cfg ||
+	    cfg->flow_id_base < 0)
+		return ERR_PTR(-EINVAL);
+
+	/*
+	 * Remote RX channel is under control of Remote CPU core, so
+	 * Linux can only request and manipulate by dedicated RX flows
+	 */
+
+	rx_chn = devm_kzalloc(dev, sizeof(*rx_chn), GFP_KERNEL);
+	if (!rx_chn)
+		return ERR_PTR(-ENOMEM);
+
+	rx_chn->common.dev = dev;
+	rx_chn->common.swdata_size = cfg->swdata_size;
+	rx_chn->remote = true;
+	rx_chn->udma_rchan_id = -1;
+	rx_chn->flow_num = cfg->flow_id_num;
+	rx_chn->flow_id_base = cfg->flow_id_base;
+	rx_chn->psil_paired = false;
+
+	ret = of_k3_udma_glue_parse_chn_by_id(udmax_np, &rx_chn->common, false, thread_id);
+	if (ret)
+		goto err;
+
+	ret = k3_udma_glue_request_remote_rx_chn_common(rx_chn, cfg, dev);
+	if (ret)
+		goto err;
+
 	return rx_chn;
 
 err:
 	k3_udma_glue_release_rx_chn(rx_chn);
 	return ERR_PTR(ret);
 }
+EXPORT_SYMBOL_GPL(k3_udma_glue_request_remote_rx_chn_by_id);
 
 struct k3_udma_glue_rx_channel *
 k3_udma_glue_request_rx_chn(struct device *dev, const char *name,
diff --git a/include/linux/dma/k3-udma-glue.h b/include/linux/dma/k3-udma-glue.h
index 6205d84430ca..89dc59d7c5e2 100644
--- a/include/linux/dma/k3-udma-glue.h
+++ b/include/linux/dma/k3-udma-glue.h
@@ -113,6 +113,10 @@ struct k3_udma_glue_rx_channel *k3_udma_glue_request_rx_chn(
 		const char *name,
 		struct k3_udma_glue_rx_channel_cfg *cfg);
 
+struct k3_udma_glue_rx_channel *
+k3_udma_glue_request_remote_rx_chn_by_id(struct device *dev, struct device_node *udmax_np,
+					 struct k3_udma_glue_rx_channel_cfg *cfg, u32 thread_id);
+
 void k3_udma_glue_release_rx_chn(struct k3_udma_glue_rx_channel *rx_chn);
 int k3_udma_glue_enable_rx_chn(struct k3_udma_glue_rx_channel *rx_chn);
 void k3_udma_glue_disable_rx_chn(struct k3_udma_glue_rx_channel *rx_chn);