Message ID: 20250306230203.1550314-1-nikolay@enfabrica.net (mailing list archive)
Series: Ultra Ethernet driver introduction
On Fri, Mar 07, 2025 at 01:01:50AM +0200, Nikolay Aleksandrov wrote:
> Hi all,

<...>

> Ultra Ethernet is a new RDMA transport.

Awesome, and now please explain why a new subsystem is needed when
drivers/infiniband already supports at least 5 different RDMA transports
(OmniPath, iWARP, InfiniBand, RoCE v1 and RoCE v2).

Maybe after this discussion it will be very clear that a new subsystem is
needed, but at least it needs to be stated clearly.

And please CC the RDMA maintainers on any Ultra Ethernet related
discussions, as it is more RDMA than Ethernet.

Thanks
> From: Leon Romanovsky <leon@kernel.org>
> Sent: Sunday, March 9, 2025 12:17 AM
>
> On Fri, Mar 07, 2025 at 01:01:50AM +0200, Nikolay Aleksandrov wrote:
> > Hi all,
>
> <...>
>
> > Ultra Ethernet is a new RDMA transport.
>
> Awesome, and now please explain why a new subsystem is needed when
> drivers/infiniband already supports at least 5 different RDMA transports
> (OmniPath, iWARP, InfiniBand, RoCE v1 and RoCE v2).

6th transport is drivers/infiniband/hw/efa (srd).

> Maybe after this discussion it will be very clear that a new subsystem is
> needed, but at least it needs to be stated clearly.
>
> And please CC the RDMA maintainers on any Ultra Ethernet related
> discussions, as it is more RDMA than Ethernet.
>
> Thanks
> -----Original Message-----
> From: Parav Pandit <parav@nvidia.com>
> Sent: Sunday, March 9, 2025 4:22 AM
> To: Leon Romanovsky <leon@kernel.org>; Nikolay Aleksandrov <nikolay@enfabrica.net>
> Cc: netdev@vger.kernel.org; shrijeet@enfabrica.net; alex.badea@keysight.com;
> eric.davis@broadcom.com; rip.sohan@amd.com; dsahern@kernel.org;
> Bernard Metzler <BMT@zurich.ibm.com>; roland@enfabrica.net;
> winston.liu@keysight.com; dan.mihailescu@keysight.com;
> Kamal Heib <kheib@redhat.com>; parth.v.parikh@keysight.com;
> Dave Miller <davem@redhat.com>; ian.ziemba@hpe.com;
> andrew.tauferner@cornelisnetworks.com; welch@hpe.com;
> rakhahari.bhunia@keysight.com; kingshuk.mandal@keysight.com;
> linux-rdma@vger.kernel.org; kuba@kernel.org; Paolo Abeni <pabeni@redhat.com>;
> Jason Gunthorpe <jgg@nvidia.com>
> Subject: [EXTERNAL] RE: [RFC PATCH 00/13] Ultra Ethernet driver introduction
>
> > From: Leon Romanovsky <leon@kernel.org>
> > Sent: Sunday, March 9, 2025 12:17 AM
> >
> > On Fri, Mar 07, 2025 at 01:01:50AM +0200, Nikolay Aleksandrov wrote:
> > > Hi all,
> >
> > <...>
> >
> > > Ultra Ethernet is a new RDMA transport.
> >
> > Awesome, and now please explain why a new subsystem is needed when
> > drivers/infiniband already supports at least 5 different RDMA transports
> > (OmniPath, iWARP, InfiniBand, RoCE v1 and RoCE v2).
>
> 6th transport is drivers/infiniband/hw/efa (srd).
>
> > Maybe after this discussion it will be very clear that a new subsystem is
> > needed, but at least it needs to be stated clearly.

I am not sure if a new subsystem is what this RFC calls for, but rather a
discussion about the proper integration of a new RDMA transport into the
Linux kernel.

Ultra Ethernet Transport is probably not just another transport up for easy
integration into the current RDMA subsystem. First of all, its design does
not follow the well-known RDMA verbs model inherited from InfiniBand, which
has largely shaped the current structure of the RDMA subsystem.
While having send, receive and completion queues (and completion counters)
to steer message exchange, there is no concept of a queue pair. Endpoints
can span multiple queues and can have multiple peer addresses. Sharing of
communication resources is controlled in a different way than within
protection domains. Connections are ephemeral, created and released by the
provider as needed. There are more differences. In a nutshell, the UET
communication model is trimmed for extreme scalability. Its API semantics
follow libfabric, not RDMA verbs.

I think Nik gave us a first, still incomplete, look at the UET protocol
engine to help us understand some of the specifics. It's just the lower
part (packet delivery). The implementation of the upper part (resource
management, communication semantics, job management) may largely depend on
the environment we all choose.

IMO, integrating UET with the current RDMA subsystem would ask for its
extension to allow exposing all of UET's intended functionality, probably
starting with a more generic RDMA device model than the current ib_device.

The different API semantics of UET may further call for either extending
verbs to cover it as well, or exposing a new non-verbs API (libfabric), or
both.

Thanks,
Bernard.

> > And please CC the RDMA maintainers on any Ultra Ethernet related
> > discussions, as it is more RDMA than Ethernet.
> >
> > Thanks
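[Editorial aside: the ephemeral-connection model Bernard describes — connections created and released by the provider as needed, rather than application-managed queue pairs — can be modeled by a provider-owned context table keyed on an address pair, created implicitly on first use and reaped when idle. The userspace C sketch below is purely illustrative; all names (`pdc_get`, `pdc_send`, `pdc_tick`) and the idle-reaping policy are invented here and come from neither the UET spec nor this series.]

```c
/* Toy model of ephemeral packet-delivery contexts: a context is created
 * implicitly on first use of a (local, peer) address pair and reclaimed
 * once idle, instead of being a long-lived, explicitly created queue
 * pair.  All names and fields are hypothetical.
 */
#include <assert.h>
#include <string.h>

#define MAX_PDC 8

struct pdc {
	int in_use;
	unsigned local;	/* toy addresses standing in for fabric endpoints */
	unsigned peer;
	unsigned idle;	/* "ticks" since last message */
};

static struct pdc table[MAX_PDC];

/* Find, or implicitly create, the context for an address pair. */
static struct pdc *pdc_get(unsigned local, unsigned peer)
{
	struct pdc *free_slot = NULL;

	for (int i = 0; i < MAX_PDC; i++) {
		if (table[i].in_use && table[i].local == local &&
		    table[i].peer == peer)
			return &table[i];
		if (!table[i].in_use && !free_slot)
			free_slot = &table[i];
	}
	if (!free_slot)
		return NULL; /* out of contexts */
	free_slot->in_use = 1;
	free_slot->local = local;
	free_slot->peer = peer;
	free_slot->idle = 0;
	return free_slot;
}

/* Sending refreshes the context; the provider, not the app, owns setup. */
static void pdc_send(unsigned local, unsigned peer)
{
	struct pdc *c = pdc_get(local, peer);

	assert(c);
	c->idle = 0;
}

/* Periodic reap: contexts idle past the threshold are torn down. */
static void pdc_tick(unsigned max_idle)
{
	for (int i = 0; i < MAX_PDC; i++)
		if (table[i].in_use && ++table[i].idle > max_idle)
			memset(&table[i], 0, sizeof(table[i]));
}
```

The point of the sketch is only the lifecycle: the application never issues a connect or disconnect, yet a context for a busy pair survives while an idle one disappears and is transparently recreated on next use.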
On Tue, Mar 11, 2025 at 02:20:07PM +0000, Bernard Metzler wrote:
> > -----Original Message-----
> > From: Parav Pandit <parav@nvidia.com>
> > Sent: Sunday, March 9, 2025 4:22 AM
>
> <...>
>
> I am not sure if a new subsystem is what this RFC calls
> for, but rather a discussion about the proper integration of
> a new RDMA transport into the Linux kernel.
<...>

> The different API semantics of UET may further call
> for either extending verbs to cover it as well, or exposing a
> new non-verbs API (libfabric), or both.

So you should start from there (UAPI) by presenting the device model and
how the verbs API needs to be extended, so it will be possible to evaluate
how to fit that model into the existing Linux kernel codebase.

The RDMA subsystem provides multiple types of QPs and operational models;
some of them indeed follow the IB style, but not all of them (SRD, DC,
etc.).

Thanks

> Thanks,
> Bernard.
>
> > > And please CC the RDMA maintainers on any Ultra Ethernet related
> > > discussions, as it is more RDMA than Ethernet.
> > >
> > > Thanks
> I am not sure if a new subsystem is what this RFC calls for, but rather a
> discussion about the proper integration of a new RDMA transport into the
> Linux kernel.

<...>

> The different API semantics of UET may further call for either extending
> verbs to cover it as well, or exposing a new non-verbs API (libfabric),
> or both.

Reading through the submissions, what I found lacking is a description of
some higher-level plan. I don't easily see how to relate this series to
NICs that may implement UET in HW.
Should the PDS be viewed as a partial implementation of a SW UET 'device',
similar to soft RoCE or iWARP? If so, having a description of a proposed
device model seems like a necessary first step.

If, instead, the PDS should be viewed more along the lines of a partial
RDS-like path, then that changes the uapi.

Or, am I not viewing this series as intended at all?

It is almost guaranteed that there will be NICs which will support both
RoCE and UET, and it's not farfetched to think that an app may use both
simultaneously. IMO, a common device model is ideal, assuming exposing a
device model is the intent.

I agree that different transport models should not be forced together
unnaturally, but I think that's solvable. In the end, the application
developer is exposed to libfabric naming anyway. Besides, even a
repurposed RDMA name is still better than the naming used within
OpenMPI. :)

- Sean
On 3/11/25 7:11 PM, Sean Hefty wrote:
>> I am not sure if a new subsystem is what this RFC calls for, but rather
>> a discussion about the proper integration of a new RDMA transport into
>> the Linux kernel.
>
> <...>
>
> Reading through the submissions, what I found lacking is a description
> of some higher-level plan. I don't easily see how to relate this series
> to NICs that may implement UET in HW.
> Should the PDS be viewed as a partial implementation of a SW UET
> 'device', similar to soft RoCE or iWARP? If so, having a description of
> a proposed device model seems like a necessary first step.

Hi Sean,

To quote the cover letter:
"...As there isn't any UET hardware available yet, we introduce a software
device model which implements the lowest sublayer of the spec - PDS..."
and
"The plan is to have that split into core Ultra Ethernet module
(ultraeth.ko) which is responsible for managing the UET contexts, jobs and
all other common/generic UET configuration, and the software UET device
model (uecon.ko) which implements the UET protocols for communication in
software (e.g. the PDS will be a part of uecon) and is represented by a
UDP tunnel network device."

So as I said, it is in a very early stage, but we plan to split this into
core UET code and a uecon software device model that implements the UEC
specs.

> If, instead, the PDS should be viewed more along the lines of a partial
> RDS-like path, then that changes the uapi.
>
> Or, am I not viewing this series as intended at all?
>
> It is almost guaranteed that there will be NICs which will support both
> RoCE and UET, and it's not farfetched to think that an app may use both
> simultaneously. IMO, a common device model is ideal, assuming exposing a
> device model is the intent.

That is the goal, and we're working on a UET kernel device API as I've
noted in the cover letter.

> I agree that different transport models should not be forced together
> unnaturally, but I think that's solvable. In the end, the application
> developer is exposed to libfabric naming anyway. Besides, even a
> repurposed RDMA name is still better than the naming used within
> OpenMPI. :)
>
> - Sean

Cheers,
Nik
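[Editorial aside: the uecon.ko plan quoted above — a software UET device represented by a UDP tunnel network device — implies that at the lowest layer, PDS frames are built and carried as UDP payload. The sketch below illustrates only that encapsulation idea; the 4-byte header (type, flags, 16-bit PSN) is invented for illustration and is NOT the on-wire format defined by the UEC spec.]

```c
/* Toy encapsulation for a software UET device carried over a UDP tunnel.
 * The header layout here is hypothetical; in a real tunnel device the
 * resulting frame would become the payload of the encapsulating UDP
 * socket.
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PDS_HDR_LEN 4

/* Write the toy header followed by the payload into buf; returns the
 * total frame length, or 0 if buf is too small.
 */
static size_t pds_encap(uint8_t *buf, size_t buflen, uint8_t type,
			uint8_t flags, uint16_t psn,
			const void *payload, size_t plen)
{
	if (buflen < PDS_HDR_LEN + plen)
		return 0;
	buf[0] = type;
	buf[1] = flags;
	buf[2] = (uint8_t)(psn >> 8);	/* network byte order */
	buf[3] = (uint8_t)(psn & 0xff);
	memcpy(buf + PDS_HDR_LEN, payload, plen);
	return PDS_HDR_LEN + plen;
}

/* Inverse: parse the header and return a pointer to the payload, or NULL
 * on a short frame.
 */
static const uint8_t *pds_decap(const uint8_t *frame, size_t flen,
				uint8_t *type, uint16_t *psn, size_t *plen)
{
	if (flen < PDS_HDR_LEN)
		return NULL;
	*type = frame[0];
	*psn = (uint16_t)((frame[2] << 8) | frame[3]);
	*plen = flen - PDS_HDR_LEN;
	return frame + PDS_HDR_LEN;
}
```

A sequence number in the delivery header is what lets the software PDS track per-connection packet state; everything above this (semantics, jobs, resource management) belongs to the upper layers the thread discusses.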
On 3/8/25 8:46 PM, Leon Romanovsky wrote:
> On Fri, Mar 07, 2025 at 01:01:50AM +0200, Nikolay Aleksandrov wrote:
>> Hi all,
>
> <...>
>
>> Ultra Ethernet is a new RDMA transport.
>
> Awesome, and now please explain why a new subsystem is needed when
> drivers/infiniband already supports at least 5 different RDMA
> transports (OmniPath, iWARP, InfiniBand, RoCE v1 and RoCE v2).

As Bernard commented, we're not trying to add a new subsystem, but to
start a discussion on where UEC should live, because it has multiple
objects and semantics that don't map well to the current infrastructure.
For example, from this set: managing contexts, jobs and fabric endpoints.
Also we have the ephemeral PDC connections that come and go as needed.
There are more such objects coming, with more state, configuration and
lifecycle management. That is why we added a separate netlink family to
cleanly manage them, without trying to fit a square peg in a round hole,
so to speak. In the next version I'll make sure to expand much more on
this topic.

By the way, I believe Sean is working on the verbs mapping for parts of
UEC; he can probably also share more details. We definitely want to
re-use as much as possible from the current infrastructure; no one is
trying to reinvent the wheel.

> Maybe after this discussion it will be very clear that a new subsystem
> is needed, but at least it needs to be stated clearly.
>
> And please CC the RDMA maintainers on any Ultra Ethernet related
> discussions, as it is more RDMA than Ethernet.

Of course it's RDMA, that's stated in the first few sentences. I made a
mistake with the "To", but I did add linux-rdma@ to the recipient list.
I'll make sure to also add the RDMA maintainers personally for the next
version and change the "To".

> Thanks

Cheers,
Nik
On Wed, Mar 12, 2025 at 11:40:05AM +0200, Nikolay Aleksandrov wrote:
> On 3/8/25 8:46 PM, Leon Romanovsky wrote:
> > On Fri, Mar 07, 2025 at 01:01:50AM +0200, Nikolay Aleksandrov wrote:
> >> Hi all,
> >
> > <...>
> >
> >> Ultra Ethernet is a new RDMA transport.
> >
> > Awesome, and now please explain why a new subsystem is needed when
> > drivers/infiniband already supports at least 5 different RDMA
> > transports (OmniPath, iWARP, InfiniBand, RoCE v1 and RoCE v2).
>
> As Bernard commented, we're not trying to add a new subsystem,

So why did you create a new drivers/ultraeth/ folder?

> but start a discussion on where UEC should live because it has multiple
> objects and semantics that don't map well to the current
> infrastructure. For example from this set - managing contexts, jobs and
> fabric endpoints.

It is just different names, which libfabric uses in order not to use
traditional verbs naming. There is nothing in the stack which prevents a
QP from having the same properties as "fabric endpoints" have.

> Also we have the ephemeral PDC connections
> that come and go as needed. There are more such objects coming with more
> state, configuration and lifecycle management. That is why we added a
> separate netlink family to cleanly manage them without trying to fit
> a square peg in a round hole so to speak.

Yeah, I saw that you are planning to use netlink to manage objects, which
is very questionable. It is slow, unreliable, requires sockets, and needs
more parsing logic, etc. To avoid all this overhead, RDMA uses
netlink-like ioctl calls, which fit better for object configuration.

Thanks
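[Editorial aside: for readers unfamiliar with the trade-off Leon raises, netlink carries configuration as type-length-value (TLV) attributes that the receiver must walk, bounds-check and validate — the "parsing logic" he refers to. Below is a standalone userspace sketch of that framing; it mirrors the spirit of the kernel's `struct nlattr` layout (16-bit length including the 4-byte header, 16-bit type, payload padded to 4 bytes) but is simplified and not the kernel API.]

```c
/* Minimal netlink-style TLV walk.  Each attribute is a 4-byte header
 * (length including the header, then type) followed by a payload, with
 * the next attribute starting at the next 4-byte boundary.
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct tlv_hdr {
	uint16_t len;	/* header + payload, excluding padding */
	uint16_t type;
};

#define TLV_ALIGN(n) (((n) + 3u) & ~3u)

/* Walk the buffer and copy out the payload of the first attribute with
 * the given type; returns its payload length, or -1 if not found or
 * malformed.
 */
static int tlv_find(const uint8_t *buf, size_t buflen, uint16_t type,
		    void *out, size_t outlen)
{
	size_t off = 0;

	while (off + sizeof(struct tlv_hdr) <= buflen) {
		struct tlv_hdr h;

		memcpy(&h, buf + off, sizeof(h));
		if (h.len < sizeof(h) || off + h.len > buflen)
			return -1; /* malformed attribute */
		if (h.type == type) {
			size_t plen = h.len - sizeof(h);

			if (plen > outlen)
				return -1;
			memcpy(out, buf + off + sizeof(h), plen);
			return (int)plen;
		}
		off += TLV_ALIGN(h.len);
	}
	return -1;
}
```

Every receiver of such a message repeats this walk plus per-attribute validation; a netlink-like ioctl interface, as used by the RDMA subsystem, keeps comparable TLV flexibility while avoiding the socket round-trips.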