Message ID: 160408357912.912050.17005584526266191420.stgit@djiang5-desk3.ch.intel.com
Series: Add VFIO mediated device support and DEV-MSI support for the idxd driver
On Fri, Oct 30, 2020 at 11:50:47AM -0700, Dave Jiang wrote:
>  .../ABI/stable/sysfs-driver-dma-idxd | 6 +
>  Documentation/driver-api/vfio/mdev-idxd.rst | 404 ++++++
>  MAINTAINERS | 1 +
>  drivers/dma/Kconfig | 9 +
>  drivers/dma/idxd/Makefile | 2 +
>  drivers/dma/idxd/cdev.c | 6 +-
>  drivers/dma/idxd/device.c | 294 ++++-
>  drivers/dma/idxd/idxd.h | 67 +-
>  drivers/dma/idxd/init.c | 86 ++
>  drivers/dma/idxd/irq.c | 6 +-
>  drivers/dma/idxd/mdev.c | 1121 +++++++++++++++++
>  drivers/dma/idxd/mdev.h | 116 ++

Again, a subsystem driver belongs in the directory hierarchy of the
subsystem, not in other random places. All this mdev stuff belongs
under drivers/vfio.

Jason
On 10/30/2020 11:58 AM, Jason Gunthorpe wrote:
> On Fri, Oct 30, 2020 at 11:50:47AM -0700, Dave Jiang wrote:
> >  .../ABI/stable/sysfs-driver-dma-idxd | 6 +
> >  Documentation/driver-api/vfio/mdev-idxd.rst | 404 ++++++
> >  MAINTAINERS | 1 +
> >  drivers/dma/Kconfig | 9 +
> >  drivers/dma/idxd/Makefile | 2 +
> >  drivers/dma/idxd/cdev.c | 6 +-
> >  drivers/dma/idxd/device.c | 294 ++++-
> >  drivers/dma/idxd/idxd.h | 67 +-
> >  drivers/dma/idxd/init.c | 86 ++
> >  drivers/dma/idxd/irq.c | 6 +-
> >  drivers/dma/idxd/mdev.c | 1121 +++++++++++++++++
> >  drivers/dma/idxd/mdev.h | 116 ++
>
> Again, a subsystem driver belongs in the directory hierarchy of the
> subsystem, not in other random places. All this mdev stuff belongs
> under drivers/vfio

Alex seems to have disagreed last time....
https://lore.kernel.org/dmaengine/20200917113016.425dcde7@x1.home/

And I do agree with his perspective. The mdev is an extension of the PF
driver. It's a bit awkward for it to be a standalone mdev driver under
vfio/mdev/.
On Fri, Oct 30, 2020 at 12:13:48PM -0700, Dave Jiang wrote:
> On 10/30/2020 11:58 AM, Jason Gunthorpe wrote:
> > On Fri, Oct 30, 2020 at 11:50:47AM -0700, Dave Jiang wrote:
> > >  .../ABI/stable/sysfs-driver-dma-idxd | 6 +
> > >  Documentation/driver-api/vfio/mdev-idxd.rst | 404 ++++++
> > >  MAINTAINERS | 1 +
> > >  drivers/dma/Kconfig | 9 +
> > >  drivers/dma/idxd/Makefile | 2 +
> > >  drivers/dma/idxd/cdev.c | 6 +-
> > >  drivers/dma/idxd/device.c | 294 ++++-
> > >  drivers/dma/idxd/idxd.h | 67 +-
> > >  drivers/dma/idxd/init.c | 86 ++
> > >  drivers/dma/idxd/irq.c | 6 +-
> > >  drivers/dma/idxd/mdev.c | 1121 +++++++++++++++++
> > >  drivers/dma/idxd/mdev.h | 116 ++
> >
> > Again, a subsystem driver belongs in the directory hierarchy of the
> > subsystem, not in other random places. All this mdev stuff belongs
> > under drivers/vfio
>
> Alex seems to have disagreed last time....
> https://lore.kernel.org/dmaengine/20200917113016.425dcde7@x1.home/

Nobody else in the kernel is splitting subsystems up anymore.

> And I do agree with his perspective. The mdev is an extension of the PF
> driver. It's a bit awkward for it to be a standalone mdev driver under
> vfio/mdev/.

By this logic we'd have gigantic drivers under drivers/ethernet
touching netdev, rdma, scsi, vdpa, etc., just because that is where the
PF driver came from. It is not how the kernel works. Subsystem owners
are responsible for their subsystem; drivers implementing a subsystem
are under the subsystem directory.

Jason
On Fri, Oct 30, 2020 at 04:17:06PM -0300, Jason Gunthorpe wrote:
> On Fri, Oct 30, 2020 at 12:13:48PM -0700, Dave Jiang wrote:
> > On 10/30/2020 11:58 AM, Jason Gunthorpe wrote:
> > > On Fri, Oct 30, 2020 at 11:50:47AM -0700, Dave Jiang wrote:
> > > >  .../ABI/stable/sysfs-driver-dma-idxd | 6 +
> > > >  Documentation/driver-api/vfio/mdev-idxd.rst | 404 ++++++
> > > >  MAINTAINERS | 1 +
> > > >  drivers/dma/Kconfig | 9 +
> > > >  drivers/dma/idxd/Makefile | 2 +
> > > >  drivers/dma/idxd/cdev.c | 6 +-
> > > >  drivers/dma/idxd/device.c | 294 ++++-
> > > >  drivers/dma/idxd/idxd.h | 67 +-
> > > >  drivers/dma/idxd/init.c | 86 ++
> > > >  drivers/dma/idxd/irq.c | 6 +-
> > > >  drivers/dma/idxd/mdev.c | 1121 +++++++++++++++++
> > > >  drivers/dma/idxd/mdev.h | 116 ++
> > >
> > > Again, a subsystem driver belongs in the directory hierarchy of the
> > > subsystem, not in other random places. All this mdev stuff belongs
> > > under drivers/vfio
> >
> > Alex seems to have disagreed last time....
> > https://lore.kernel.org/dmaengine/20200917113016.425dcde7@x1.home/
>
> Nobody else in the kernel is splitting subsystems up anymore
>
> > And I do agree with his perspective. The mdev is an extension of the PF
> > driver. It's a bit awkward for it to be a standalone mdev driver under
> > vfio/mdev/.
>
> By this logic we'd have gigantic drivers under drivers/ethernet
> touching netdev, rdma, scsi, vdpa, etc., just because that is where the
> PF driver came from.

What makes you think this is providing services like scsi/rdma/vdpa,
etc.?

For DSA this plays the exact same role, not a different function as you
highlight above. These mdevs are creating DSA instances for
virtualization use. They aren't providing a completely different role
or subsystem per se.

Cheers,
Ashok
On Fri, Oct 30, 2020 at 12:23:25PM -0700, Raj, Ashok wrote:
> On Fri, Oct 30, 2020 at 04:17:06PM -0300, Jason Gunthorpe wrote:
> > On Fri, Oct 30, 2020 at 12:13:48PM -0700, Dave Jiang wrote:
> > > On 10/30/2020 11:58 AM, Jason Gunthorpe wrote:
> > > > On Fri, Oct 30, 2020 at 11:50:47AM -0700, Dave Jiang wrote:
> > > > >  .../ABI/stable/sysfs-driver-dma-idxd | 6 +
> > > > >  Documentation/driver-api/vfio/mdev-idxd.rst | 404 ++++++
> > > > >  MAINTAINERS | 1 +
> > > > >  drivers/dma/Kconfig | 9 +
> > > > >  drivers/dma/idxd/Makefile | 2 +
> > > > >  drivers/dma/idxd/cdev.c | 6 +-
> > > > >  drivers/dma/idxd/device.c | 294 ++++-
> > > > >  drivers/dma/idxd/idxd.h | 67 +-
> > > > >  drivers/dma/idxd/init.c | 86 ++
> > > > >  drivers/dma/idxd/irq.c | 6 +-
> > > > >  drivers/dma/idxd/mdev.c | 1121 +++++++++++++++++
> > > > >  drivers/dma/idxd/mdev.h | 116 ++
> > > >
> > > > Again, a subsystem driver belongs in the directory hierarchy of the
> > > > subsystem, not in other random places. All this mdev stuff belongs
> > > > under drivers/vfio
> > >
> > > Alex seems to have disagreed last time....
> > > https://lore.kernel.org/dmaengine/20200917113016.425dcde7@x1.home/
> >
> > Nobody else in the kernel is splitting subsystems up anymore
> >
> > > And I do agree with his perspective. The mdev is an extension of the PF
> > > driver. It's a bit awkward for it to be a standalone mdev driver under
> > > vfio/mdev/.
> >
> > By this logic we'd have gigantic drivers under drivers/ethernet
> > touching netdev, rdma, scsi, vdpa, etc., just because that is where the
> > PF driver came from.
>
> What makes you think this is providing services like scsi/rdma/vdpa,
> etc.?
>
> For DSA this plays the exact same role, not a different function as you
> highlight above. These mdevs are creating DSA instances for
> virtualization use. They aren't providing a completely different role
> or subsystem per se.

It is a different subsystem, different maintainer, and different
reviewers.

It is a development process problem, it doesn't matter what it is
doing.

Jason
On Fri, Oct 30, 2020 at 04:30:45PM -0300, Jason Gunthorpe wrote:
> On Fri, Oct 30, 2020 at 12:23:25PM -0700, Raj, Ashok wrote:
> > On Fri, Oct 30, 2020 at 04:17:06PM -0300, Jason Gunthorpe wrote:
> > > On Fri, Oct 30, 2020 at 12:13:48PM -0700, Dave Jiang wrote:
> > > > On 10/30/2020 11:58 AM, Jason Gunthorpe wrote:
> > > > > On Fri, Oct 30, 2020 at 11:50:47AM -0700, Dave Jiang wrote:
> > > > > >  .../ABI/stable/sysfs-driver-dma-idxd | 6 +
> > > > > >  Documentation/driver-api/vfio/mdev-idxd.rst | 404 ++++++
> > > > > >  MAINTAINERS | 1 +
> > > > > >  drivers/dma/Kconfig | 9 +
> > > > > >  drivers/dma/idxd/Makefile | 2 +
> > > > > >  drivers/dma/idxd/cdev.c | 6 +-
> > > > > >  drivers/dma/idxd/device.c | 294 ++++-
> > > > > >  drivers/dma/idxd/idxd.h | 67 +-
> > > > > >  drivers/dma/idxd/init.c | 86 ++
> > > > > >  drivers/dma/idxd/irq.c | 6 +-
> > > > > >  drivers/dma/idxd/mdev.c | 1121 +++++++++++++++++
> > > > > >  drivers/dma/idxd/mdev.h | 116 ++
> > > > >
> > > > > Again, a subsystem driver belongs in the directory hierarchy of the
> > > > > subsystem, not in other random places. All this mdev stuff belongs
> > > > > under drivers/vfio
> > > >
> > > > Alex seems to have disagreed last time....
> > > > https://lore.kernel.org/dmaengine/20200917113016.425dcde7@x1.home/
> > >
> > > Nobody else in the kernel is splitting subsystems up anymore
> > >
> > > > And I do agree with his perspective. The mdev is an extension of the PF
> > > > driver. It's a bit awkward for it to be a standalone mdev driver under
> > > > vfio/mdev/.
> > >
> > > By this logic we'd have gigantic drivers under drivers/ethernet
> > > touching netdev, rdma, scsi, vdpa, etc., just because that is where the
> > > PF driver came from.
> >
> > What makes you think this is providing services like scsi/rdma/vdpa,
> > etc.?
> >
> > For DSA this plays the exact same role, not a different function as you
> > highlight above. These mdevs are creating DSA instances for
> > virtualization use. They aren't providing a completely different role
> > or subsystem per se.
>
> It is a different subsystem, different maintainer, and different
> reviewers.
>
> It is a development process problem, it doesn't matter what it is
> doing.

So, drawing that parallel, do you expect all drivers that call
pci_register_driver() to be located in drivers/pci? Aren't they
scattered all over the place: ata, scsi, platform drivers and such?

As Alex pointed out, i915 and a handful of s390 drivers that are mdev
users are not in drivers/vfio. Are you saying those drivers don't get
reviewed? This is no different than a PF driver offering VF services;
it's a logical extension. Reviews happen for mdev users today. What you
suggest seems like cutting the feet to fit the shoe.

Unless the maintainers are asking for things to be split just because
they call mdev_register_device(), that practice doesn't exist, and it
would be totally weird to want to move all callers of
pci_register_driver(). Your argument seems interesting, even
entertaining :-), but honestly I'm not finding it practical :-). So
every caller of mmu_notifier_register() needs to be in mm?

What you mention for different functions makes absolute sense; I'm not
arguing against that. But this isn't that, and we just follow the asks
of the maintainer.

I know you aren't going to give up, but there is little we can do. I
want the maintainers to make that call, and I won't add more noise to
this.

Cheers,
Ashok
On Fri, Oct 30 2020 at 11:50, Dave Jiang wrote:
> The code has dependency on Thomas’s MSI restructuring patch series:
> https://lore.kernel.org/lkml/20200826111628.794979401@linutronix.de/

which is outdated and no longer applicable.

Thanks,

        tglx
On 10/30/2020 1:48 PM, Thomas Gleixner wrote:
> On Fri, Oct 30 2020 at 11:50, Dave Jiang wrote:
> > The code has dependency on Thomas’s MSI restructuring patch series:
> > https://lore.kernel.org/lkml/20200826111628.794979401@linutronix.de/
>
> which is outdated and no longer applicable.

Yes.... I wasn't sure how to point to these patches from you as a
dependency:

  irqdomain/msi: Provide msi_alloc/free_store() callbacks
  platform-msi: Add device MSI infrastructure
  genirq/msi: Provide and use msi_domain_set_default_info_flags()
  genirq/proc: Take buslock on affinity write
  platform-msi: Provide default irq_chip::ack
  x86/msi: Rename and rework pci_msi_prepare() to cover non-PCI MSI
  x86/irq: Add DEV_MSI allocation type

Do I need to include these patches in my series? Thanks!
On Fri, Oct 30 2020 at 13:59, Dave Jiang wrote:
> On 10/30/2020 1:48 PM, Thomas Gleixner wrote:
> > On Fri, Oct 30 2020 at 11:50, Dave Jiang wrote:
> > > The code has dependency on Thomas’s MSI restructuring patch series:
> > > https://lore.kernel.org/lkml/20200826111628.794979401@linutronix.de/
> >
> > which is outdated and no longer applicable.
>
> Yes.... I wasn't sure how to point to these patches from you as a
> dependency:
>
>   irqdomain/msi: Provide msi_alloc/free_store() callbacks
>   platform-msi: Add device MSI infrastructure
>   genirq/msi: Provide and use msi_domain_set_default_info_flags()
>   genirq/proc: Take buslock on affinity write
>   platform-msi: Provide default irq_chip::ack
>   x86/msi: Rename and rework pci_msi_prepare() to cover non-PCI MSI
>   x86/irq: Add DEV_MSI allocation type

How can you point at something which is no longer applicable?

> Do I need to include these patches in my series? Thanks!

No. They are NOT part of this series. Prerequisites are separate
entities, and your series can be based on them.

So for one, you want to make sure that the prerequisites for your IDXD
stuff are going to be merged into the relevant maintainer trees.

To allow people to work with your stuff, you simply provide an
aggregation git tree which contains all the collected prerequisites.
This aggregation tree needs to be rebased when the prerequisites change
during review or are merged into a maintainer tree/branch.

It's not rocket science, and a lot of people do exactly this all the
time in order to coordinate changes which have dependencies over
multiple subsystems.

Thanks,

        tglx
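[Editor's note: the aggregation-tree workflow described above can be sketched with plain git commands. Everything below is illustrative — the branch names and commit subjects are placeholders, not the actual trees or series.]

```shell
# Sketch of an aggregation tree for a series with external prerequisites.
# All branch names and commit subjects are placeholders.
set -e
cd "$(mktemp -d)"
git init -q idxd-work && cd idxd-work
git config user.name dev && git config user.email dev@example.com
git commit -q --allow-empty -m "mainline snapshot"
base=$(git rev-parse HEAD)

# One topic branch per prerequisite series, each based on mainline.
git checkout -q -b msi-prereqs "$base"
git commit -q --allow-empty -m "platform-msi: add device MSI infrastructure"
git checkout -q -b ims-prereqs "$base"
git commit -q --allow-empty -m "x86/irq: add DEV_MSI allocation type"

# Aggregation branch: mainline plus all prerequisites merged together,
# with the actual series applied on top (normally via 'git am').
git checkout -q -b aggregation "$base"
git merge -q --no-edit msi-prereqs ims-prereqs
git commit -q --allow-empty -m "dmaengine: idxd: add mdev support"

# When a prerequisite is reworked or lands in a maintainer tree, the
# aggregation branch is rebuilt: reset to the new base, re-merge the
# remaining prerequisites, and reapply the series.
git merge-base --is-ancestor msi-prereqs aggregation && echo "prereqs included"
```

Reviewers can then base their testing on the aggregation branch, while each prerequisite series still goes to its own maintainer independently.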
On Fri, Oct 30, 2020 at 01:43:07PM -0700, Raj, Ashok wrote:
> So, drawing that parallel, do you expect all drivers that call
> pci_register_driver() to be located in drivers/pci? Aren't they
> scattered all over the place: ata, scsi, platform drivers and such?

The subsystem is the thing that calls device_register().
pci_register_driver() doesn't do that.

> As Alex pointed out, i915 and a handful of s390 drivers that are mdev
> users are not in drivers/vfio. Are you saying those drivers don't get
> reviewed?

Past mistakes do not justify continuing to do it wrong. ARM and PPC
went through a huge multi-year cleanup moving code out of arch/ and
into the proper drivers/ directories. We know this is the correct way
to work the development process.

> Your argument seems interesting, even entertaining :-), but honestly
> I'm not finding it practical :-). So every caller of
> mmu_notifier_register() needs to be in mm?

mmu notifiers are not a subsystem, they are core library code. You seem
to completely not understand what a subsystem is. :(

> I know you aren't going to give up, but there is little we can do. I
> want the maintainers to make that call, and I won't add more noise to
> this.

Well, hopefully Vinod will insist on following kernel norms here.

Jason
Ashok,

On Fri, Oct 30 2020 at 13:43, Ashok Raj wrote:
> On Fri, Oct 30, 2020 at 04:30:45PM -0300, Jason Gunthorpe wrote:
> > It is a different subsystem, different maintainer, and different
> > reviewers.
> >
> > It is a development process problem, it doesn't matter what it is
> > doing.

< skip a lot of non-sensical arguments >

> I know you aren't going to give up, but there is little we can do. I
> want the maintainers to make that call, and I won't add more noise to
> this.

Jason is absolutely right.

Just because there is historical precedence which does not care about
the differentiation of subsystems is not an argument at all to make the
same mistakes which were made years ago.

IDXD is just infrastructure which provides the base for a variety of
different functionalities. It is very similar to what multi-function
devices provide; in fact, IDXD is pretty much an MFD facility.

Sticking all of it into dmaengine is sloppy at best. The dma engine
part is only one piece of the overall IDXD functionality.

I'm well aware that it is convenient to just throw everything into
drivers/myturf/, but that makes it neither reviewable nor maintainable.

What's the problem with restructuring your code in a way which makes it
fit into existing subsystems?

The whole thing, as I pointed out to Dave earlier, is based on 'works
for me' wishful thinking, with a blissful ignorance of the development
process and of the requirement to split a large problem into the proper
bits and pieces, aka engineering 101.

Thanks,

        tglx
Hi Thomas,

On Sat, Oct 31, 2020 at 03:50:43AM +0100, Thomas Gleixner wrote:
> Ashok,
>
> < skip a lot of non-sensical arguments >

Ouch! Didn't mean to awaken you like this :-).. apologies, profusely!

> Just because there is historical precedence which does not care about
> the differentiation of subsystems is not an argument at all to make the
> same mistakes which were made years ago.
>
> IDXD is just infrastructure which provides the base for a variety of
> different functionalities. It is very similar to what multi-function
> devices provide; in fact, IDXD is pretty much an MFD facility.

I'm only asking this to better understand the thought process. I don't
intend to be defensive; my hands are tied back, so we will do whatever
you say fits best, per your recommendation. It's not my intent to dig a
deeper hole than I have already dug! :-(

IDXD is just a glorified DMA engine, a data mover. It also does a few
other things, and in that sense it is a multi-function facility. But it
doesn't offer distinct functional pieces the way a PCIe multi-function
device does; i.e., it doesn't also do storage and networking.

> Sticking all of it into dmaengine is sloppy at best. The dma engine
> part is only one piece of the overall IDXD functionality.

dmaengine is the basic non-transformational data mover. Doing other
operations or transformations is just the glorified data-mover part,
but fundamentally it is not different.

> I'm well aware that it is convenient to just throw everything into
> drivers/myturf/, but that makes it neither reviewable nor
> maintainable.

That's true when we add a lot of functionality in one place. But IDXD
doing mdev support is not offering new functionality. SRIOV PF drivers
that support PF/VF mailboxes are part of the PF drivers today, and IDXD
mdev is precisely playing that exact role.

If we are doing this just to improve review effectiveness, now we would
need some parent driver, and these sub-drivers registering with it
seemed like a bit of over-engineering when the sub-drivers are actually
an extension of the base driver and offer nothing more than extending
sub-device partitions of IDXD for guest drivers. These look and feel
like IDXD, not another device interface. In that sense, moving the
PF/VF mailboxes into separate drivers feels a bit odd.

Please don't take it the wrong way.

Cheers,
Ashok
On Sat, Oct 31, 2020 at 04:53:59PM -0700, Raj, Ashok wrote:
> If we are doing this just to improve review effectiveness, now we
> would need some parent driver, and these sub-drivers registering with
> it seemed like a bit of over-engineering when the sub-drivers are
> actually an extension of the base driver and offer nothing more than
> extending sub-device partitions of IDXD for guest drivers. These look
> and feel like IDXD, not another device interface. In that sense,
> moving the PF/VF mailboxes into separate drivers feels a bit odd.

You need this split anyhow; putting VFIO calls into the main idxd
module is not OK.

Plugging in a PCI device should not auto-load VFIO modules.

Jason
Hi Jason,

On Mon, Nov 02, 2020 at 09:20:36AM -0400, Jason Gunthorpe wrote:
> > These look and feel like IDXD, not another device interface. In that
> > sense, moving the PF/VF mailboxes into separate drivers feels a bit
> > odd.
>
> You need this split anyhow; putting VFIO calls into the main idxd
> module is not OK.
>
> Plugging in a PCI device should not auto-load VFIO modules.

Yes, I agree that would be a good reason to separate them completely
and glue the functionality together with private APIs between the two
modules:

- Separate mdev code from base idxd.
- Separate maintainers, so it's easy to review and include. (But
  remember they are heavily interdependent; they have to move
  together.)

Almost all SRIOV drivers today are just configured with some form of
Kconfig, and the relevant files are compiled into the same module. I
think in *most* applications idxd would be operating in that mode,
where you have the base driver and the mdev parts (like a VF) compiled
in if so configured.

Creating these private interfaces for intra-module use is just 1:1 and
not general purpose, and every accelerator would need to create such
instances. I wasn't sure forcibly creating this firewall between the
PF/VF interfaces is actually worth the work every driver is going to
require.

I can see where this is required when devices offer separate functional
interfaces, in the more confined definition of multi-function we use
today. idxd mdevs are purely a VF extension; they don't provide any
different function. Contrast that with, e.g., an RDMA device that can
provide iWarp, ipoib, or even multiplex storage over IB. IDXD is a
fixed-function interface.

Sure, having separate modules helps with that isolation. But I'm not
convinced it simplifies, rather than complicates, things beyond what is
required for these device types.

Cheers,
Ashok
On Mon, Nov 02, 2020 at 08:20:43AM -0800, Raj, Ashok wrote:
> Creating these private interfaces for intra-module use is just 1:1 and
> not general purpose, and every accelerator would need to create such
> instances.

This is where we are going: auxiliary bus should be merged soon, and it
exists specifically to connect these kinds of devices across
subsystems.

Jason
On 11/2/2020 10:19 AM, Jason Gunthorpe wrote:
> On Mon, Nov 02, 2020 at 08:20:43AM -0800, Raj, Ashok wrote:
> > Creating these private interfaces for intra-module use is just 1:1
> > and not general purpose, and every accelerator would need to create
> > such instances.
>
> This is where we are going: auxiliary bus should be merged soon, and
> it exists specifically to connect these kinds of devices across
> subsystems.

I think this resolves the aux device probe/remove issue via a common
bus. But it does not help with the mdev device needing a lot of the
device-handling calls from the parent driver, as it shares the same
handling as the parent device. My plan is to export all the needed
calls via EXPORT_SYMBOL_NS() so the calls can be shared in their own
namespace between the modules. Do you have any objection to that?
On Mon, Nov 02, 2020 at 11:18:33AM -0700, Dave Jiang wrote:
> On 11/2/2020 10:19 AM, Jason Gunthorpe wrote:
> > On Mon, Nov 02, 2020 at 08:20:43AM -0800, Raj, Ashok wrote:
> > > Creating these private interfaces for intra-module use is just 1:1
> > > and not general purpose, and every accelerator would need to
> > > create such instances.
> >
> > This is where we are going: auxiliary bus should be merged soon, and
> > it exists specifically to connect these kinds of devices across
> > subsystems.
>
> I think this resolves the aux device probe/remove issue via a common
> bus. But it does not help with the mdev device needing a lot of the
> device-handling calls from the parent driver, as it shares the same
> handling as the parent device.

The intention of auxiliary bus is that the two parts will tightly
couple across some exported function interface.

> My plan is to export all the needed calls via EXPORT_SYMBOL_NS() so
> the calls can be shared in their own namespace between the modules.
> Do you have any objection to that?

I think you will be the first to use the namespace stuff for this. It
seems like a good idea, and others should probably do so as well.

Jason
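[Editor's note: the namespaced-export split discussed above can be sketched in kernel-style C. The helper name and types below are hypothetical; only the EXPORT_SYMBOL_NS()/MODULE_IMPORT_NS() macros themselves are real kernel interfaces.]

```c
/* In the idxd (parent) module: export shared device-handling helpers
 * into a dedicated symbol namespace.  idxd_wq_get() and the structs
 * are made-up names for illustration only. */
#include <linux/module.h>

struct idxd_device;
struct idxd_wq;

struct idxd_wq *idxd_wq_get(struct idxd_device *idxd, int wq_id)
{
	/* ... look up and take a reference on the workqueue ... */
	return NULL; /* placeholder body for the sketch */
}
EXPORT_SYMBOL_NS(idxd_wq_get, IDXD);

/* In the idxd-mdev module: the namespace must be imported explicitly,
 * so no unrelated module can link against these symbols by accident. */
MODULE_IMPORT_NS(IDXD);
```

Modprobe and the module loader enforce the namespace at load time, which is what makes this stronger than plain EXPORT_SYMBOL() for a private two-module interface.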
On Mon, Nov 2, 2020 at 10:26 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
> On Mon, Nov 02, 2020 at 11:18:33AM -0700, Dave Jiang wrote:
> > I think this resolves the aux device probe/remove issue via a common
> > bus. But it does not help with the mdev device needing a lot of the
> > device-handling calls from the parent driver, as it shares the same
> > handling as the parent device.
>
> The intention of auxiliary bus is that the two parts will tightly
> couple across some exported function interface.
>
> > My plan is to export all the needed calls via EXPORT_SYMBOL_NS() so
> > the calls can be shared in their own namespace between the modules.
> > Do you have any objection to that?
>
> I think you will be the first to use the namespace stuff for this. It
> seems like a good idea, and others should probably do so as well.

I was thinking either EXPORT_SYMBOL_NS or auxiliary bus, because you
should be able to export an ops structure with all the necessary
callbacks. Aux bus seems cleaner because the lifetime rules and
ownership concerns are clearer.
On Mon, Nov 02, 2020 at 10:38:28AM -0800, Dan Williams wrote:
> > I think you will be the first to use the namespace stuff for this.
> > It seems like a good idea, and others should probably do so as well.
>
> I was thinking either EXPORT_SYMBOL_NS, or auxiliary bus, because you
> should be able to export an ops structure with all the necessary
> callbacks.

'or'?

Auxiliary bus should not be used with huge arrays of function
pointers... The module providing the device should export a normal
linkable function interface. Putting that in a namespace makes a lot of
sense.

Jason
On Mon, Nov 2, 2020 at 10:52 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
> On Mon, Nov 02, 2020 at 10:38:28AM -0800, Dan Williams wrote:
> > I was thinking either EXPORT_SYMBOL_NS, or auxiliary bus, because
> > you should be able to export an ops structure with all the necessary
> > callbacks.
>
> 'or'?
>
> Auxiliary bus should not be used with huge arrays of function
> pointers... The module providing the device should export a normal
> linkable function interface. Putting that in a namespace makes a lot
> of sense.

True, probably needs to be a mixture of both.
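[Editor's note: the "mixture of both" can be sketched as follows, assuming the auxiliary bus API as it was merged in v5.11. The device name and the exact split are illustrative, not the shipped idxd design: the parent module registers an auxiliary device, and the separately loadable mdev module binds to it and then calls the parent's namespaced exports.]

```c
#include <linux/auxiliary_bus.h>
#include <linux/module.h>

/* --- parent (idxd) module side: publish the sub-device -------------- */

static void idxd_adev_release(struct device *dev)
{
	/* free per-device state here */
}

static int idxd_publish_mdev(struct device *parent,
			     struct auxiliary_device *adev)
{
	int ret;

	adev->name = "mdev";		/* matched below as "idxd.mdev" */
	adev->dev.parent = parent;
	adev->dev.release = idxd_adev_release;

	ret = auxiliary_device_init(adev);
	if (ret)
		return ret;
	return auxiliary_device_add(adev);  /* triggers probe in idxd-mdev */
}

/* --- consumer (idxd-mdev) module side -------------------------------- */

static int idxd_mdev_probe(struct auxiliary_device *adev,
			   const struct auxiliary_device_id *id)
{
	/* set up VFIO/mdev here, calling the parent's namespaced exports */
	return 0;
}

static const struct auxiliary_device_id idxd_mdev_id_table[] = {
	{ .name = "idxd.mdev" },	/* "<modname>.<devname>" */
	{}
};

static struct auxiliary_driver idxd_mdev_driver = {
	.probe    = idxd_mdev_probe,
	.id_table = idxd_mdev_id_table,
};
module_auxiliary_driver(idxd_mdev_driver);
```

The bus handles lifetime and probe/remove ordering between the two modules, while the actual device-handling calls stay as plain namespaced function exports from the parent.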