Message ID | 20221025135958.6242-1-aaptel@nvidia.com (mailing list archive)
---|---
Series | nvme-tcp receive offloads
On Tue, Oct 25, 2022 at 04:59:35PM +0300, Aurelien Aptel wrote:
> The feature will also be presented in netdev this week
> https://netdevconf.info/0x16/session.html?NVMeTCP-Offload-%E2%80%93-Implementation-and-Performance-Gains

That seems to miss slides.

> Currently the series is aligned to net-next, please update us if you will prefer otherwise.

Please also point to a git tree for a huge series with a dependency
on some tree, otherwise there's no good way to review it.
On Tue, Oct 25, 2022 at 7:04 PM Christoph Hellwig <hch@lst.de> wrote:
> On Tue, Oct 25, 2022 at 04:59:35PM +0300, Aurelien Aptel wrote:
> > The feature will also be presented in netdev this week
> > https://netdevconf.info/0x16/session.html?NVMeTCP-Offload-%E2%80%93-Implementation-and-Performance-Gains
>
> That seems to miss slides.

It will be presented on Friday this week; AFAIK the slides are uploaded a little later.
The design/principles were presented last year:
https://netdevconf.info/0x15/session.html?Autonomous-NVMe-TCP-offload
Hi Christoph,

>> Currently the series is aligned to net-next, please update us if you will prefer otherwise.
> Please also point to a git tree for a huge series with a dependency
> on some tree, otherwise there's no good way to review it.

This series is based on top of yesterday's net-next [1] and I created a
github tree if that's easier to use [2].

1: https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
   branch 'main'
   commit d6dd508080a3 ("bnx2: Use kmalloc_size_roundup() to match ksize() usage")

2: Github: https://github.com/aaptel/linux/tree/nvme-rx-offload-v7
   Git repo: https://github.com/aaptel/linux.git branch nvme-rx-offload-v7
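For anyone who wants to check out the tree locally, fetching it could look something like the sketch below. The remote name "aaptel" is arbitrary; the URLs, branch, and base commit are the ones listed above.

  # Add the GitHub mirror as a remote (remote name is arbitrary) and check out the v7 branch
  git remote add aaptel https://github.com/aaptel/linux.git
  git fetch aaptel nvme-rx-offload-v7
  git checkout -b nvme-rx-offload-v7 aaptel/nvme-rx-offload-v7

  # Optionally confirm the stated net-next base commit is an ancestor of the branch
  git fetch https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git main
  git merge-base --is-ancestor d6dd508080a3 nvme-rx-offload-v7 && echo "based on the stated net-next commit"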
> Hi,
>
> The nvme-tcp receive offloads series v7 was sent to both net-next and
> nvme. It is the continuation of v5 which was sent on July 2021
> https://lore.kernel.org/netdev/20210722110325.371-1-borisp@nvidia.com/ .
> V7 is now working on a real HW.
>
> The feature will also be presented in netdev this week
> https://netdevconf.info/0x16/session.html?NVMeTCP-Offload-%E2%80%93-Implementation-and-Performance-Gains
>
> Currently the series is aligned to net-next, please update us if you will prefer otherwise.
>
> Thanks,
> Shai, Aurelien

Hey Shai & Aurelien,

Can you please add, next time, documentation of the limitations
that this offload has in terms of compatibility? i.e. for example (from
my own imagination):
1. bonding/teaming/other-stacking?
2. TLS (sw/hw)?
3. any sort of tunneling/overlay?
4. VF/PF?
5. any nvme features?
6. ...

And what are your plans to address each, if at all?

Also, does this have a path to userspace? For example, almost all
of the nvme-tcp targets live in userspace.

I don't think I see in the code any limits like the maximum number of
connections that can be offloaded on a single device/port. Can
you share some details on this?

Thanks.
On Thu, 27 Oct 2022 at 11:35, Sagi Grimberg <sagi@grimberg.me> wrote:
> > Hi,
> >
> > The nvme-tcp receive offloads series v7 was sent to both net-next and
> > nvme. It is the continuation of v5 which was sent on July 2021
> > https://lore.kernel.org/netdev/20210722110325.371-1-borisp@nvidia.com/ .
> > V7 is now working on a real HW.
> >
> > The feature will also be presented in netdev this week
> > https://netdevconf.info/0x16/session.html?NVMeTCP-Offload-%E2%80%93-Implementation-and-Performance-Gains
> >
> > Currently the series is aligned to net-next, please update us if you will prefer otherwise.
> >
> > Thanks,
> > Shai, Aurelien
>
> Hey Shai & Aurelien
>
> Can you please add in the next time documentation of the limitations
> that this offload has in terms of compatibility? i.e. for example (from
> my own imagination):
> 1. bonding/teaming/other-stacking?
> 2. TLS (sw/hw)?
> 3. any sort of tunneling/overlay?
> 4. VF/PF?
> 5. any nvme features?
> 6. ...
>
> And what are your plans to address each if at all.
>
> Also, does this have a path to userspace? for example almost all
> of the nvme-tcp targets live in userspace.
>
> I don't think I see in the code any limits like the maximum
> connections that can be offloaded on a single device/port. Can
> you share some details on this?
>
> Thanks.

Sure, we will add it.