Message ID: 20231114083906.3143548-1-s-vadapalli@ti.com (mailing list archive)
Series: Add APIs to request TX/RX DMA channels by ID
On 14/11/2023 10:39, Siddharth Vadapalli wrote:
> The existing APIs for requesting TX and RX DMA channels rely on parsing
> a device-tree node to obtain the Channel/Thread IDs from their names.

Yes, since it is a DMA device and it is using the standard DMA mapping.
It is by design that the standard DMAengine and the custom glue layer
(which should have been a temporary solution) use the same standard DMA
binding, to make sure that we do not deviate from the standard and are
able to move the glue users to DMAengine (which would need core changes).

> However, it is possible to know the thread IDs by alternative means such
> as being informed by Firmware on a remote core regarding the allocated
> TX/RX DMA channel IDs. Thus, add APIs to support such use cases.

I see, so the TISCI resource manager is going to manage the channels/flows
for some peripherals?

What is the API, and what parameters are needed to get these channels?

I would really like to follow a standard binding: what will happen if the
firmware starts to provision channels/flows for DMAengine users? It is not
that simple to hack that around.

My initial take is that this can be implemented via the existing DMA
crossbar support. It was created exactly for this sort of purpose.
I'm sure you need to provide some parameters to TISCI to get the
channel/rflow provisioned for the requesting host, right?
The crossbar implements the binding with the given set of parameters,
does the needed 'black magic' to get the information needed for the
target DMA, crafts the binding for it, and gets the channel.

If you take a look at drivers/dma/ti/dma-crossbar.c, it implements two
types of crossbars.

For DMAengine, it would be relatively simple to write a new one for TISCI.
The glue layer might need a bit more work as it does not rely on the core,
but I do not think it would be that complicated to extend it to handle a
crossbar binding.

The benefit is that none of the clients need to know how the channel is
looked up; they just request an RX channel and, depending on the binding,
they will get it directly from the DMA or get the translation via the
crossbar to be able to fetch the channel.

Can you check if this would be doable?

For reference:
Documentation/devicetree/bindings/dma/dma-router.yaml
Documentation/devicetree/bindings/dma/ti-dma-crossbar.txt
drivers/dma/ti/dma-crossbar.c

> Additionally, since the name of the device for the remote RX channel is
> being set purely on the basis of the RX channel ID itself, it can result
> in duplicate names when multiple flows are used on the same channel.
> Avoid name duplication by including the flow in the name.

Makes sense.

> Series is based on linux-next tagged next-20231114.
>
> RFC Series:
> https://lore.kernel.org/r/20231111121555.2656760-1-s-vadapalli@ti.com/
>
> Changes since RFC Series:
> - Rebased patches 1, 2 and 3 on linux-next tagged next-20231114.
> - Added patch 4 to the series.
>
> Regards,
> Siddharth.
>
> Siddharth Vadapalli (4):
>   dmaengine: ti: k3-udma-glue: Add function to parse channel by ID
>   dmaengine: ti: k3-udma-glue: Add function to request TX channel by ID
>   dmaengine: ti: k3-udma-glue: Add function to request RX channel by ID
>   dmaengine: ti: k3-udma-glue: Update name for remote RX channel device
>
>  drivers/dma/ti/k3-udma-glue.c    | 306 ++++++++++++++++++++++---------
>  include/linux/dma/k3-udma-glue.h |   8 +
>  2 files changed, 228 insertions(+), 86 deletions(-)
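[Editorial note: for illustration only, the crossbar/router approach Péter describes could look roughly like the sketch below. The compatible string, cell layout, and node names are invented for this example, modeled loosely on the existing ti-dma-crossbar binding; they are not taken from any real binding or from this series.]

```dts
/* Hypothetical router node that translates firmware-provisioned
 * resources into specifiers for the real DMA controller.
 */
tisci_dma_xbar: dma-router {
	compatible = "ti,k3-tisci-dma-crossbar";	/* invented for this sketch */
	#dma-cells = <2>;				/* e.g. remote resource type + index */
	dma-masters = <&main_pktdma>;
};

client-device {
	/* The client requests channels as usual; the router performs the
	 * TISCI query and crafts the binding for the target DMA.
	 */
	dmas = <&tisci_dma_xbar 0 1>, <&tisci_dma_xbar 1 1>;
	dma-names = "tx", "rx";
};
```

With such a router, clients would keep using the standard `dmas`/`dma-names` lookup regardless of whether the channel comes straight from the DMA controller or via a firmware-managed translation.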
Hello Péter,

On 16/11/23 01:29, Péter Ujfalusi wrote:
> On 14/11/2023 10:39, Siddharth Vadapalli wrote:
>> The existing APIs for requesting TX and RX DMA channels rely on parsing
>> a device-tree node to obtain the Channel/Thread IDs from their names.
>
> Yes, since it is a DMA device and it is using the standard DMA mapping.
> It is by design that the standard DMAengine and the custom glue layer
> (which should have been a temporary solution) use the same standard DMA
> binding, to make sure that we do not deviate from the standard and are
> able to move the glue users to DMAengine (which would need core changes).
>
>> However, it is possible to know the thread IDs by alternative means such
>> as being informed by Firmware on a remote core regarding the allocated
>> TX/RX DMA channel IDs. Thus, add APIs to support such use cases.
>
> I see, so the TISCI resource manager is going to manage the channels/flows
> for some peripherals?
>
> What is the API, and what parameters are needed to get these channels?
>
> I would really like to follow a standard binding: what will happen if the
> firmware starts to provision channels/flows for DMAengine users? It is not
> that simple to hack that around.

Please consider the following use-case for which the APIs are being added by
this series. I apologize for not explaining the idea behind the APIs in more
detail earlier.

Firmware running on a remote core is in control of a peripheral (the CPSW
Ethernet Switch, for example) and shares the peripheral across software
running on different cores. The control path between the Firmware and the
Clients on the various cores is via RPMsg, while the data path used by the
Clients is the DMA channels. In the example where Clients send data to the
shared peripheral over DMA, the Clients send RPMsg based requests to the
Firmware to obtain the allocated thread IDs. The Firmware allocates the
thread IDs by making a request to the TISCI Resource Manager and then shares
the thread IDs with the Clients.

In such use cases, the Linux Client is probed by RPMsg endpoint discovery
over the RPMsg bus. Therefore, there is no device-tree node corresponding to
the Client device. The Client knows the DMA channel IDs as well as the RX
flow details from the Firmware. Knowing these details, the Client can
request the configuration of the TX and RX channels/flows by using the DMA
APIs which this series adds.

Please let me know in case of any suggestions for an implementation which
shall address the above use-case.

> My initial take is that this can be implemented via the existing DMA
> crossbar support. It was created exactly for this sort of purpose.
> I'm sure you need to provide some parameters to TISCI to get the
> channel/rflow provisioned for the requesting host, right?
> The crossbar implements the binding with the given set of parameters,
> does the needed 'black magic' to get the information needed for the
> target DMA, crafts the binding for it, and gets the channel.
>
> If you take a look at drivers/dma/ti/dma-crossbar.c, it implements two
> types of crossbars.
>
> For DMAengine, it would be relatively simple to write a new one for TISCI.
> The glue layer might need a bit more work as it does not rely on the core,
> but I do not think it would be that complicated to extend it to handle a
> crossbar binding.
>
> The benefit is that none of the clients need to know how the channel is
> looked up; they just request an RX channel and, depending on the binding,
> they will get it directly from the DMA or get the translation via the
> crossbar to be able to fetch the channel.
>
> Can you check if this would be doable?
>
> For reference:
> Documentation/devicetree/bindings/dma/dma-router.yaml
> Documentation/devicetree/bindings/dma/ti-dma-crossbar.txt
> drivers/dma/ti/dma-crossbar.c
>
>> Additionally, since the name of the device for the remote RX channel is
>> being set purely on the basis of the RX channel ID itself, it can result
>> in duplicate names when multiple flows are used on the same channel.
>> Avoid name duplication by including the flow in the name.
>
> Makes sense.

May I post that patch separately in that case?

>> Series is based on linux-next tagged next-20231114.
[...]
Hi Siddharth,

On 17/11/2023 07:55, Siddharth Vadapalli wrote:
>> I would really like to follow a standard binding: what will happen if the
>> firmware starts to provision channels/flows for DMAengine users? It is
>> not that simple to hack that around.
>
> Please consider the following use-case for which the APIs are being added
> by this series. I apologize for not explaining the idea behind the APIs in
> more detail earlier.
>
> Firmware running on a remote core is in control of a peripheral (the CPSW
> Ethernet Switch, for example) and shares the peripheral across software
> running on different cores. The control path between the Firmware and the
> Clients on the various cores is via RPMsg, while the data path used by the
> Clients is the DMA channels. In the example where Clients send data to the
> shared peripheral over DMA, the Clients send RPMsg based requests to the
> Firmware to obtain the allocated thread IDs. The Firmware allocates the
> thread IDs by making a request to the TISCI Resource Manager and then
> shares the thread IDs with the Clients.
>
> In such use cases, the Linux Client is probed by RPMsg endpoint discovery
> over the RPMsg bus. Therefore, there is no device-tree node corresponding
> to the Client device. The Client knows the DMA channel IDs as well as the
> RX flow details from the Firmware. Knowing these details, the Client can
> request the configuration of the TX and RX channels/flows by using the DMA
> APIs which this series adds.

I see, so the CPSW will be probed in a similar way to USB peripherals, for
example? The CPSW does not have a DT entry at all? Is this correct?

> Please let me know in case of any suggestions for an implementation which
> shall address the above use-case.

How does the driver know how to request a DMA resource from the remote core?
How does that scale with different SoCs, and even with changes in the
firmware?

You are right, this is in a grey area. The DMA channel is controlled by the
remote processor, which lends a thread to clients on other cores (like
Linux) via RPMsg.
Well, it is similar to how non-DT probing works, in a way.

This CPSW type is not yet supported in mainline, right?
Hello Péter,

On 22/11/23 20:52, Péter Ujfalusi wrote:
> Hi Siddharth,
>
> On 17/11/2023 07:55, Siddharth Vadapalli wrote:
>>>> I would really like to follow a standard binding: what will happen if
>>>> the firmware starts to provision channels/flows for DMAengine users?
>>>> It is not that simple to hack that around.
>>
>> Please consider the following use-case for which the APIs are being added
>> by this series. I apologize for not explaining the idea behind the APIs
>> in more detail earlier.
[...]
> I see, so the CPSW will be probed in a similar way to USB peripherals, for
> example? The CPSW does not have a DT entry at all? Is this correct?

I apologize for the delayed response. Yes, the CPSW instance which shall be
in control of the Firmware running on the remote core will not have a DT
entry. The Linux Client driver shall be probed when the Firmware announces
its endpoint over the RPMsg bus; the Client driver registers this endpoint
with the RPMsg framework.

>> Please let me know in case of any suggestions for an implementation which
>> shall address the above use-case.
>
> How does the driver know how to request a DMA resource from the remote
> core? How does that scale with different SoCs, and even with changes in
> the firmware?

After getting probed, the Client driver communicates with the Firmware via
RPMsg, requesting details of the allocated resources, including the TX
channels and RX flows. Knowing these parameters, the Client driver can use
the newly added DMA APIs to request the TX channels and RX flows by their
IDs. The only dependency here is that the Client driver needs to know which
DMA instance to request these resources from. That information is hard-coded
in the driver's data in the form of the compatible used for the DMA
instance, thereby allowing the Client driver to get a reference to the DMA
controller node using the of_find_compatible_node() API.

Since all the resource allocation information comes from the Firmware, the
device-specific details will be hard-coded in the Firmware, while the Client
driver can be used across all K3 SoCs which have the same DMA APIs.

> You are right, this is in a grey area. The DMA channel is controlled by
> the remote processor, which lends a thread to clients on other cores
> (like Linux) via RPMsg.
> Well, it is similar to how non-DT probing works, in a way.
>
> This CPSW type is not yet supported in mainline, right?

Yes, it is not yet supported in mainline. This series is a dependency for
upstreaming the Client driver.
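[Editorial note: to make the described flow concrete, a DT-less RPMsg client could look roughly like the kernel-style sketch below. The glue function name follows the "request TX channel by ID" pattern this series adds, but its exact signature, the cfg setup, and the compatible string are illustrative assumptions, not verified against the actual patches.]

```c
#include <linux/of.h>
#include <linux/err.h>
#include <linux/dma/k3-udma-glue.h>

static struct k3_udma_glue_tx_channel *
client_request_tx_chn(struct device *dev, u32 tx_thread_id,
		      struct k3_udma_glue_tx_channel_cfg *cfg)
{
	struct device_node *dma_np;
	struct k3_udma_glue_tx_channel *tx_chn;

	/* The DMA instance is identified by a compatible hard-coded in the
	 * client driver's data (example value below; actual value depends
	 * on the SoC).
	 */
	dma_np = of_find_compatible_node(NULL, NULL, "ti,am64-dmss-pktdma");
	if (!dma_np)
		return ERR_PTR(-ENODEV);

	/* Request the TX channel using the thread ID that the Firmware
	 * handed over via RPMsg (function name/signature approximate).
	 */
	tx_chn = k3_udma_glue_request_tx_chn_for_thread_id(dev, cfg, dma_np,
							   tx_thread_id);
	of_node_put(dma_np);
	return tx_chn;
}
```

The same pattern would apply on the RX side, with the flow ID supplied alongside the channel ID.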
If there are no concerns, may I post v2 of this series, rebasing it on the
latest linux-next tree, with minor code cleanup and reordering of the
patches?

On 04/12/23 13:51, Siddharth Vadapalli wrote:
> Hello Péter,
>
> On 22/11/23 20:52, Péter Ujfalusi wrote:
>> Hi Siddharth,
>>
>> On 17/11/2023 07:55, Siddharth Vadapalli wrote:
>>>> I would really like to follow a standard binding: what will happen if
>>>> the firmware starts to provision channels/flows for DMAengine users?
>>>> It is not that simple to hack that around.
[...]
>> I see, so the CPSW will be probed in a similar way to USB peripherals,
>> for example? The CPSW does not have a DT entry at all? Is this correct?
>
> I apologize for the delayed response. Yes, the CPSW instance which shall
> be in control of the Firmware running on the remote core will not have a
> DT entry. The Linux Client driver shall be probed when the Firmware
> announces its endpoint over the RPMsg bus; the Client driver registers
> this endpoint with the RPMsg framework.
>
>> How does the driver know how to request a DMA resource from the remote
>> core? How does that scale with different SoCs, and even with changes in
>> the firmware?
[...]
>> This CPSW type is not yet supported in mainline, right?
>
> Yes, it is not yet supported in mainline. This series is a dependency for
> upstreaming the Client driver.