Message ID: 20200210035608.10002-4-jasowang@redhat.com
State: New, archived
Series: vDPA support
On Mon, Feb 10, 2020 at 11:56:06AM +0800, Jason Wang wrote:
> +/**
> + * vdpa_register_device - register a vDPA device
> + * Callers must have a succeed call of vdpa_init_device() before.
> + * @vdev: the vdpa device to be registered to vDPA bus
> + *
> + * Returns an error when fail to add to vDPA bus
> + */
> +int vdpa_register_device(struct vdpa_device *vdev)
> +{
> +        int err = device_add(&vdev->dev);
> +
> +        if (err) {
> +                put_device(&vdev->dev);
> +                ida_simple_remove(&vdpa_index_ida, vdev->index);
> +        }

This is a very dangerous construction, I've seen it lead to driver
bugs. Better to require the driver to always do the put_device on
error unwind.

The ida_simple_remove should probably be part of the class release
function to make everything work right.

> +/**
> + * vdpa_unregister_driver - unregister a vDPA device driver
> + * @drv: the vdpa device driver to be unregistered
> + */
> +void vdpa_unregister_driver(struct vdpa_driver *drv)
> +{
> +        driver_unregister(&drv->driver);
> +}
> +EXPORT_SYMBOL_GPL(vdpa_unregister_driver);
> +
> +static int vdpa_init(void)
> +{
> +        if (bus_register(&vdpa_bus) != 0)
> +                panic("virtio bus registration failed");
> +        return 0;
> +}

Linus will tell you not to kill the kernel - return the error code and
propagate it up to the module init function.

> +/**
> + * vDPA device - representation of a vDPA device
> + * @dev: underlying device
> + * @dma_dev: the actual device that is performing DMA
> + * @config: the configuration ops for this device.
> + * @index: device index
> + */
> +struct vdpa_device {
> +        struct device dev;
> +        struct device *dma_dev;
> +        const struct vdpa_config_ops *config;
> +        int index;

unsigned values should be unsigned int

Jason
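To make the suggested alternative concrete, here is a minimal sketch (not the posted code) in which the caller keeps responsibility for put_device() on error and the index goes back to the ida from a release callback; vdpa_release_dev() is a hypothetical name:

        static void vdpa_release_dev(struct device *d)
        {
                struct vdpa_device *vdev = dev_to_vdpa(d);

                /* Runs only when the last reference is dropped, so the index
                 * cannot be reused while the device is still reachable.
                 */
                ida_simple_remove(&vdpa_index_ida, vdev->index);
        }

        /* vdpa_init_device() would set vdev->dev.release = vdpa_release_dev
         * before calling device_initialize().
         */

        int vdpa_register_device(struct vdpa_device *vdev)
        {
                /* No unwind here: on failure the caller is expected to call
                 * put_device(&vdev->dev), which ends in vdpa_release_dev().
                 */
                return device_add(&vdev->dev);
        }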
On 2020/2/11 下午9:47, Jason Gunthorpe wrote: > On Mon, Feb 10, 2020 at 11:56:06AM +0800, Jason Wang wrote: >> +/** >> + * vdpa_register_device - register a vDPA device >> + * Callers must have a succeed call of vdpa_init_device() before. >> + * @vdev: the vdpa device to be registered to vDPA bus >> + * >> + * Returns an error when fail to add to vDPA bus >> + */ >> +int vdpa_register_device(struct vdpa_device *vdev) >> +{ >> + int err = device_add(&vdev->dev); >> + >> + if (err) { >> + put_device(&vdev->dev); >> + ida_simple_remove(&vdpa_index_ida, vdev->index); >> + } > This is a very dangerous construction, I've seen it lead to driver > bugs. Better to require the driver to always do the put_device on > error unwind Ok. > > The ida_simple_remove should probably be part of the class release > function to make everything work right It looks to me bus instead of class is the correct abstraction here since the devices share a set of programming interface but not the semantics. Or do you actually mean type here? > >> +/** >> + * vdpa_unregister_driver - unregister a vDPA device driver >> + * @drv: the vdpa device driver to be unregistered >> + */ >> +void vdpa_unregister_driver(struct vdpa_driver *drv) >> +{ >> + driver_unregister(&drv->driver); >> +} >> +EXPORT_SYMBOL_GPL(vdpa_unregister_driver); >> + >> +static int vdpa_init(void) >> +{ >> + if (bus_register(&vdpa_bus) != 0) >> + panic("virtio bus registration failed"); >> + return 0; >> +} > Linus will tell you not to kill the kernel - return the error code and > propagate it up to the module init function. Yes, will fix. > >> +/** >> + * vDPA device - representation of a vDPA device >> + * @dev: underlying device >> + * @dma_dev: the actual device that is performing DMA >> + * @config: the configuration ops for this device. >> + * @index: device index >> + */ >> +struct vdpa_device { >> + struct device dev; >> + struct device *dma_dev; >> + const struct vdpa_config_ops *config; >> + int index; > unsigned values shuld be unsigned int > > Jason Will fix. Thanks >
On Wed, Feb 12, 2020 at 03:55:31PM +0800, Jason Wang wrote: > > The ida_simple_remove should probably be part of the class release > > function to make everything work right > > It looks to me bus instead of class is the correct abstraction here since > the devices share a set of programming interface but not the semantics. device_release() doesn't call the bus release? You have dev, type or class to choose from. Type is rarely used and doesn't seem to be used by vdpa, so class seems the right choice Jason
On 2020/2/12 at 8:51 PM, Jason Gunthorpe wrote:
> On Wed, Feb 12, 2020 at 03:55:31PM +0800, Jason Wang wrote:
> > > The ida_simple_remove should probably be part of the class release
> > > function to make everything work right
> > It looks to me bus instead of class is the correct abstraction here since
> > the devices share a set of programming interface but not the semantics.
> device_release() doesn't call the bus release?

What it does is:

        if (dev->release)
                dev->release(dev);
        else if (dev->type && dev->type->release)
                dev->type->release(dev);
        else if (dev->class && dev->class->dev_release)
                dev->class->dev_release(dev);
        else
                WARN(1, KERN_ERR "Device '%s' does not have a release() function, it is broken and must be fixed. See Documentation/kobject.txt.\n",
                     dev_name(dev));

So it looks like it does not.

> You have dev, type or
> class to choose from. Type is rarely used and doesn't seem to be used
> by vdpa, so class seems the right choice
>
> Jason

Yes, but my understanding is that class and bus are mutually exclusive, so we
can't add a class to a device which is already attached to a bus.

Thanks
On Thu, Feb 13, 2020 at 11:34:10AM +0800, Jason Wang wrote: > > You have dev, type or > > class to choose from. Type is rarely used and doesn't seem to be used > > by vdpa, so class seems the right choice > > > > Jason > > Yes, but my understanding is class and bus are mutually exclusive. So we > can't add a class to a device which is already attached on a bus. While I suppose there are variations, typically 'class' devices are user facing things and 'bus' devices are internal facing (ie like a PCI device) So why is this using a bus? VDPA is a user facing object, so the driver should create a class vhost_vdpa device directly, and that driver should live in the drivers/vhost/ directory. For the PCI VF case this driver would bind to a PCI device like everything else For our future SF/ADI cases the driver would bind to some SF/ADI/whatever device on a bus. I don't see a reason for VDPA to be creating busses.. Jason
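For illustration of the class-based direction being suggested here, a rough sketch follows; the vhost_vdpa class, the helper, and the fixed-major MKDEV() usage are all hypothetical, not existing code:

        static struct class *vhost_vdpa_class;

        static int __init vhost_vdpa_init(void)
        {
                vhost_vdpa_class = class_create(THIS_MODULE, "vhost-vdpa");
                return PTR_ERR_OR_ZERO(vhost_vdpa_class);
        }

        /* Called by a HW driver (e.g. from its PCI VF probe) for each
         * device it wants to expose to userspace.
         */
        static struct device *vhost_vdpa_create(struct device *parent, int idx)
        {
                return device_create(vhost_vdpa_class, parent, MKDEV(0, idx),
                                     NULL, "vhost-vdpa%d", idx);
        }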
On 2020/2/13 下午9:41, Jason Gunthorpe wrote: > On Thu, Feb 13, 2020 at 11:34:10AM +0800, Jason Wang wrote: > >>> You have dev, type or >>> class to choose from. Type is rarely used and doesn't seem to be used >>> by vdpa, so class seems the right choice >>> >>> Jason >> Yes, but my understanding is class and bus are mutually exclusive. So we >> can't add a class to a device which is already attached on a bus. > While I suppose there are variations, typically 'class' devices are > user facing things and 'bus' devices are internal facing (ie like a > PCI device) Though all vDPA devices have the same programming interface, but the semantic is different. So it looks to me that use bus complies what class.rst said: " Each device class defines a set of semantics and a programming interface that devices of that class adhere to. Device drivers are the implementation of that programming interface for a particular device on a particular bus. " > > So why is this using a bus? VDPA is a user facing object, so the > driver should create a class vhost_vdpa device directly, and that > driver should live in the drivers/vhost/ directory. This is because we want vDPA to be generic for being used by different drivers which is not limited to vhost-vdpa. E.g in this series, it allows vDPA to be used by kernel virtio drivers. And in the future, we will probably introduce more drivers in the future. > > For the PCI VF case this driver would bind to a PCI device like > everything else > > For our future SF/ADI cases the driver would bind to some > SF/ADI/whatever device on a bus. All these driver will still be bound to their own bus (PCI or other). And what the driver needs is to present a vDPA device to virtual vDPA bus on top. Thanks > > I don't see a reason for VDPA to be creating busses.. > > Jason >
On Thu, Feb 13, 2020 at 10:58:44PM +0800, Jason Wang wrote: > > On 2020/2/13 下午9:41, Jason Gunthorpe wrote: > > On Thu, Feb 13, 2020 at 11:34:10AM +0800, Jason Wang wrote: > > > > > > You have dev, type or > > > > class to choose from. Type is rarely used and doesn't seem to be used > > > > by vdpa, so class seems the right choice > > > > > > > > Jason > > > Yes, but my understanding is class and bus are mutually exclusive. So we > > > can't add a class to a device which is already attached on a bus. > > While I suppose there are variations, typically 'class' devices are > > user facing things and 'bus' devices are internal facing (ie like a > > PCI device) > > > Though all vDPA devices have the same programming interface, but the > semantic is different. So it looks to me that use bus complies what > class.rst said: > > " > > Each device class defines a set of semantics and a programming interface > that devices of that class adhere to. Device drivers are the > implementation of that programming interface for a particular device on > a particular bus. > > " Here we are talking about the /dev/XX node that provides the programming interface. All the vdpa devices have the same basic chardev interface and discover any semantic variations 'in band' > > So why is this using a bus? VDPA is a user facing object, so the > > driver should create a class vhost_vdpa device directly, and that > > driver should live in the drivers/vhost/ directory. > > This is because we want vDPA to be generic for being used by different > drivers which is not limited to vhost-vdpa. E.g in this series, it allows > vDPA to be used by kernel virtio drivers. And in the future, we will > probably introduce more drivers in the future. I don't see how that connects with using a bus. Every class of virtio traffic is going to need a special HW driver to enable VDPA, that special driver can create the correct vhost side class device. > > For the PCI VF case this driver would bind to a PCI device like > > everything else > > > > For our future SF/ADI cases the driver would bind to some > > SF/ADI/whatever device on a bus. > > All these driver will still be bound to their own bus (PCI or other). And > what the driver needs is to present a vDPA device to virtual vDPA bus on > top. Again, I can't see any reason to inject a 'vdpa virtual bus' on top. That seems like mis-using the driver core. Jason
On Thu, Feb 13, 2020 at 11:05:42AM -0400, Jason Gunthorpe wrote: > On Thu, Feb 13, 2020 at 10:58:44PM +0800, Jason Wang wrote: > > > > On 2020/2/13 下午9:41, Jason Gunthorpe wrote: > > > On Thu, Feb 13, 2020 at 11:34:10AM +0800, Jason Wang wrote: > > > > > > > > You have dev, type or > > > > > class to choose from. Type is rarely used and doesn't seem to be used > > > > > by vdpa, so class seems the right choice > > > > > > > > > > Jason > > > > Yes, but my understanding is class and bus are mutually exclusive. So we > > > > can't add a class to a device which is already attached on a bus. > > > While I suppose there are variations, typically 'class' devices are > > > user facing things and 'bus' devices are internal facing (ie like a > > > PCI device) > > > > > > Though all vDPA devices have the same programming interface, but the > > semantic is different. So it looks to me that use bus complies what > > class.rst said: > > > > " > > > > Each device class defines a set of semantics and a programming interface > > that devices of that class adhere to. Device drivers are the > > implementation of that programming interface for a particular device on > > a particular bus. > > > > " > > Here we are talking about the /dev/XX node that provides the > programming interface. All the vdpa devices have the same basic > chardev interface and discover any semantic variations 'in band' > > > > So why is this using a bus? VDPA is a user facing object, so the > > > driver should create a class vhost_vdpa device directly, and that > > > driver should live in the drivers/vhost/ directory. > > > > This is because we want vDPA to be generic for being used by different > > drivers which is not limited to vhost-vdpa. E.g in this series, it allows > > vDPA to be used by kernel virtio drivers. And in the future, we will > > probably introduce more drivers in the future. > > I don't see how that connects with using a bus. > > Every class of virtio traffic is going to need a special HW driver to > enable VDPA, that special driver can create the correct vhost side > class device. That's just a ton of useless code duplication, and a good chance to have minor variations in implementations confusing userspace. Instead, each device implement the same interface, and then vhost sits on top. > > > For the PCI VF case this driver would bind to a PCI device like > > > everything else > > > > > > For our future SF/ADI cases the driver would bind to some > > > SF/ADI/whatever device on a bus. > > > > All these driver will still be bound to their own bus (PCI or other). And > > what the driver needs is to present a vDPA device to virtual vDPA bus on > > top. > > Again, I can't see any reason to inject a 'vdpa virtual bus' on > top. That seems like mis-using the driver core. > > Jason That bus is exactly what Greg KH proposed. There are other ways to solve this I guess but this bikeshedding is getting tiring. Come on it's an internal kernel interface, if we feel it was a wrong direction to take we can change our minds later. Main thing is getting UAPI right.
On Thu, Feb 13, 2020 at 10:41:06AM -0500, Michael S. Tsirkin wrote: > On Thu, Feb 13, 2020 at 11:05:42AM -0400, Jason Gunthorpe wrote: > > On Thu, Feb 13, 2020 at 10:58:44PM +0800, Jason Wang wrote: > > > > > > On 2020/2/13 下午9:41, Jason Gunthorpe wrote: > > > > On Thu, Feb 13, 2020 at 11:34:10AM +0800, Jason Wang wrote: > > > > > > > > > > You have dev, type or > > > > > > class to choose from. Type is rarely used and doesn't seem to be used > > > > > > by vdpa, so class seems the right choice > > > > > > > > > > > > Jason > > > > > Yes, but my understanding is class and bus are mutually exclusive. So we > > > > > can't add a class to a device which is already attached on a bus. > > > > While I suppose there are variations, typically 'class' devices are > > > > user facing things and 'bus' devices are internal facing (ie like a > > > > PCI device) > > > > > > > > > Though all vDPA devices have the same programming interface, but the > > > semantic is different. So it looks to me that use bus complies what > > > class.rst said: > > > > > > " > > > > > > Each device class defines a set of semantics and a programming interface > > > that devices of that class adhere to. Device drivers are the > > > implementation of that programming interface for a particular device on > > > a particular bus. > > > > > > " > > > > Here we are talking about the /dev/XX node that provides the > > programming interface. All the vdpa devices have the same basic > > chardev interface and discover any semantic variations 'in band' > > > > > > So why is this using a bus? VDPA is a user facing object, so the > > > > driver should create a class vhost_vdpa device directly, and that > > > > driver should live in the drivers/vhost/ directory. > > > > > > This is because we want vDPA to be generic for being used by different > > > drivers which is not limited to vhost-vdpa. E.g in this series, it allows > > > vDPA to be used by kernel virtio drivers. And in the future, we will > > > probably introduce more drivers in the future. > > > > I don't see how that connects with using a bus. > > > > Every class of virtio traffic is going to need a special HW driver to > > enable VDPA, that special driver can create the correct vhost side > > class device. > > That's just a ton of useless code duplication, and a good chance > to have minor variations in implementations confusing > userspace. What? Why? This is how almost every user of the driver core works. I don't see how you get any duplication unless the subsystem core is badly done wrong. The 'class' is supposed to provide all the library functions to remove this duplication. Instead of plugging the HW driver in via some bus scheme every subsystem has its own 'ops' that the HW driver provides to the subsystem's class via subsystem_register() This is the *standard* pattern to use the driver core. This is almost there, it just has this extra bus part to convey the HW ops instead of directly. > Instead, each device implement the same interface, and then > vhost sits on top. Sure, but plugging in via ops/etc not via a bus and another struct device. > That bus is exactly what Greg KH proposed. There are other ways > to solve this I guess but this bikeshedding is getting tiring. This discussion was for a different goal, IMHO. > Come on it's an internal kernel interface, if we feel > it was a wrong direction to take we can change our minds later. > Main thing is getting UAPI right. This discussion started because the refcounting has been busted up in every version posted so far. 
It is not bikeshedding when these bugs are actually being caused by trying to abuse the driver core and shoehorn in a bus that isn't needed when a class is the correct thing to use. Jason
On Thu, Feb 13, 2020 at 11:51:54AM -0400, Jason Gunthorpe wrote: > > That bus is exactly what Greg KH proposed. There are other ways > > to solve this I guess but this bikeshedding is getting tiring. > > This discussion was for a different goal, IMHO. Hmm couldn't find it anymore. What was the goal there in your opinion?
On Thu, Feb 13, 2020 at 11:51:54AM -0400, Jason Gunthorpe wrote: > The 'class' is supposed to provide all the library functions to remove > this duplication. Instead of plugging the HW driver in via some bus > scheme every subsystem has its own 'ops' that the HW driver provides > to the subsystem's class via subsystem_register() Hmm I'm not familiar with subsystem_register. A grep didn't find it in the kernel either ...
On Thu, Feb 13, 2020 at 10:59:34AM -0500, Michael S. Tsirkin wrote: > On Thu, Feb 13, 2020 at 11:51:54AM -0400, Jason Gunthorpe wrote: > > The 'class' is supposed to provide all the library functions to remove > > this duplication. Instead of plugging the HW driver in via some bus > > scheme every subsystem has its own 'ops' that the HW driver provides > > to the subsystem's class via subsystem_register() > > Hmm I'm not familiar with subsystem_register. A grep didn't find it > in the kernel either ... I mean it is the registration function provided by the subsystem that owns the class, for instance tpm_chip_register(), ib_register_device(), register_netdev(), rtc_register_device() etc So if you have some vhost (vhost net?) class then you'd have some vhost_vdpa_init/alloc(); vhost_vdpa_register(), sequence presumably. (vs trying to do it with a bus matcher) I recommend to look at rtc and tpm for fairly simple easy to follow patterns for creating a subsystem in the kernel. A subsystem owns a class, allows HW drivers to plug in to it, and provides a consistent user API via a cdev/sysfs/etc. The driver model class should revolve around the char dev and sysfs uABI - if you enumerate the devices on the class then they should all follow the char dev and sysfs interfaces contract of that class. Those examples show how to do all the refcounting semi-sanely, introduce sysfs, cdevs, etc. I thought the latest proposal was to use the existing vhost class and largely the existing vhost API, so it probably just needs to make sure the common class-wide stuff is split from the 'driver' stuff of the existing vhost to netdev. Jason
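A hedged sketch of what that alloc/register sequence could look like on the vhost side; the vhost_vdpa_* names below are hypothetical, chosen only to mirror the rtc/tpm style mentioned above and do not exist in the posted series:

        /* Hypothetical subsystem-side API, analogous to rtc_register_device()
         * or tpm_chip_register().
         */
        struct vhost_vdpa *vhost_vdpa_alloc(struct device *parent,
                                            const struct vhost_vdpa_ops *ops);
        int vhost_vdpa_register(struct vhost_vdpa *vd);   /* creates cdev + sysfs */
        void vhost_vdpa_unregister(struct vhost_vdpa *vd);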
On Thu, Feb 13, 2020 at 10:56:00AM -0500, Michael S. Tsirkin wrote:
> On Thu, Feb 13, 2020 at 11:51:54AM -0400, Jason Gunthorpe wrote:
> > > That bus is exactly what Greg KH proposed. There are other ways
> > > to solve this I guess but this bikeshedding is getting tiring.
> >
> > This discussion was for a different goal, IMHO.
>
> Hmm couldn't find it anymore. What was the goal there in your opinion?

I think it was largely talking about how to model things like
ADI/SF/etc, plus stuff got very confused when the discussion tried to
explain what mdev's role was vs the driver core.

The standard driver model is that a 'bus' driver provides the HW access
(think PCI level things), and a 'hw driver' attaches to the bus
device, and instantiates a 'subsystem device' (think netdev, rdma,
etc) using some per-subsystem XXX_register(). The 'hw driver' pulls in
functions from the 'subsystem' using a combination of callbacks and
library-style calls so there is no code duplication.

As a subsystem, vhost&vdpa should expect its 'HW driver' to bind to
devices on busses, for instance I would expect:

 - A future SF/ADI/'virtual bus' as a child of a multi-functional PCI device.
   Exactly how this works is still under active discussion and is
   one place where Greg said 'use a bus'.

 - An existing PCI, platform, or other bus and device. No need for an
   extra bus here, PCI is the bus.

 - No bus, ie for a simulator or binding to a netdev. (existing vhost?)

The point is that the HW driver's job is to adapt from the bus level
interfaces (eg readl/writel) to the subsystem level (eg something like
the vdpa_ops).

For instance that Intel driver should be a pci_driver to bind to a
struct pci_device for its VF and then call some 'vhost&vdpa'
_register() function to pass its ops to the subsystem, which in turn
creates the struct device of the subsystem, common char devices,
sysfs, etc and calls the driver's ops in response to uAPI calls.

This is already almost how things were setup in v2 of the patches,
near as I can see, just that a bus was inserted somehow instead of
having only the vhost class. So it was confusing and the lifetime
model becomes too complicated to implement correctly...

Jason
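A rough sketch, under the same assumptions, of the HW-driver side for the PCI VF case described above; the ifcvf_* names, ifcvf_ops, and the vhost_vdpa_alloc()/vhost_vdpa_register() helpers are illustrative only:

        static int ifcvf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        {
                struct vhost_vdpa *vd;
                int err;

                err = pcim_enable_device(pdev);
                if (err)
                        return err;

                /* Adapt readl/writel-level access to the subsystem's ops and
                 * hand the device over; the subsystem creates the char dev,
                 * sysfs entries, etc.
                 */
                vd = vhost_vdpa_alloc(&pdev->dev, &ifcvf_ops);
                if (IS_ERR(vd))
                        return PTR_ERR(vd);

                return vhost_vdpa_register(vd);
        }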
On 2020/2/13 下午11:05, Jason Gunthorpe wrote: > On Thu, Feb 13, 2020 at 10:58:44PM +0800, Jason Wang wrote: >> On 2020/2/13 下午9:41, Jason Gunthorpe wrote: >>> On Thu, Feb 13, 2020 at 11:34:10AM +0800, Jason Wang wrote: >>> >>>>> You have dev, type or >>>>> class to choose from. Type is rarely used and doesn't seem to be used >>>>> by vdpa, so class seems the right choice >>>>> >>>>> Jason >>>> Yes, but my understanding is class and bus are mutually exclusive. So we >>>> can't add a class to a device which is already attached on a bus. >>> While I suppose there are variations, typically 'class' devices are >>> user facing things and 'bus' devices are internal facing (ie like a >>> PCI device) >> >> Though all vDPA devices have the same programming interface, but the >> semantic is different. So it looks to me that use bus complies what >> class.rst said: >> >> " >> >> Each device class defines a set of semantics and a programming interface >> that devices of that class adhere to. Device drivers are the >> implementation of that programming interface for a particular device on >> a particular bus. >> >> " > Here we are talking about the /dev/XX node that provides the > programming interface. I'm confused here, are you suggesting to use class to create char device in vhost-vdpa? That's fine but the comment should go for vhost-vdpa patch. > All the vdpa devices have the same basic > chardev interface and discover any semantic variations 'in band' That's not true, char interface is only used for vhost. Kernel virtio driver does not need char dev but a device on the virtio bus. > >>> So why is this using a bus? VDPA is a user facing object, so the >>> driver should create a class vhost_vdpa device directly, and that >>> driver should live in the drivers/vhost/ directory. >> >> This is because we want vDPA to be generic for being used by different >> drivers which is not limited to vhost-vdpa. E.g in this series, it allows >> vDPA to be used by kernel virtio drivers. And in the future, we will >> probably introduce more drivers in the future. > I don't see how that connects with using a bus. This is demonstrated in the virito-vdpa driver. So if you want to use kernel virito driver for vDPA device, a bus is most straight forward. > > Every class of virtio traffic is going to need a special HW driver to > enable VDPA, that special driver can create the correct vhost side > class device. Are you saying, e.g it's the charge of IFCVF driver to create vhost char dev and other stuffs? > >>> For the PCI VF case this driver would bind to a PCI device like >>> everything else >>> >>> For our future SF/ADI cases the driver would bind to some >>> SF/ADI/whatever device on a bus. >> All these driver will still be bound to their own bus (PCI or other). And >> what the driver needs is to present a vDPA device to virtual vDPA bus on >> top. > Again, I can't see any reason to inject a 'vdpa virtual bus' on > top. That seems like mis-using the driver core. I don't think so. Vhost is not the only programming interface for vDPA. We don't want a device that can only work for userspace drivers and only have a single set of userspace APIs. Thanks > > Jason >
On 2020/2/14 上午12:24, Jason Gunthorpe wrote: > On Thu, Feb 13, 2020 at 10:56:00AM -0500, Michael S. Tsirkin wrote: >> On Thu, Feb 13, 2020 at 11:51:54AM -0400, Jason Gunthorpe wrote: >>>> That bus is exactly what Greg KH proposed. There are other ways >>>> to solve this I guess but this bikeshedding is getting tiring. >>> This discussion was for a different goal, IMHO. >> Hmm couldn't find it anymore. What was the goal there in your opinion? > I think it was largely talking about how to model things like > ADI/SF/etc, plus stuff got very confused when the discussion tried to > explain what mdev's role was vs the driver core. > > The standard driver model is a 'bus' driver provides the HW access > (think PCI level things), and a 'hw driver' attaches to the bus > device, This is not true, kernel had already had plenty virtual bus where virtual devices and drivers could be attached, besides mdev and virtio, you can see vop, rpmsg, visorbus etc. > and instantiates a 'subsystem device' (think netdev, rdma, > etc) using some per-subsystem XXX_register(). Well, if you go through virtio spec, we support ~20 types of different devices. Classes like netdev and rdma are correct since they have a clear set of semantics their own. But grouping network and scsi into a single class looks wrong, that's the work of a virtual bus. The class should be done on top of vDPA device instead of vDPA device itself: - For kernel driver, netdev, blk dev could be done on top - For userspace driver, the class could be done by the drivers inside VM or userspace (dpdk) > The 'hw driver' pulls in > functions from the 'subsystem' using a combination of callbacks and > library-style calls so there is no code duplication. The point is we want vDPA devices to be used by different subsystems, not only vhost, but also netdev, blk, crypto (every subsystem that can use virtio devices). That's why we introduce vDPA bus and introduce different drivers on top. > > As a subsystem, vhost&vdpa should expect its 'HW driver' to bind to > devices on busses, for instance I would expect: > > - A future SF/ADI/'virtual bus' as a child of multi-functional PCI device > Exactly how this works is still under active discussion and is > one place where Greg said 'use a bus'. That's ok but it's something that is not directly related to vDPA which can be implemented by any kinds of devices/buses: struct XXX_device { struct vdpa_device vdpa; struct adi_device/pci_device *lowerdev; } ... > - An existing PCI, platform, or other bus and device. No need for an > extra bus here, PCI is the bus. There're several examples that a bus is needed on top. A good example is Mellanox TmFIFO driver which is a platform device driver but register itself as a virtio device in order to be used by virito-console driver on the virtio bus. But it's a pity that the device can not be used by userspace driver due to the limitation of virito bus which is designed for kernel driver. That's why vDPA bus is introduced which abstract the common requirements of both kernel and userspace drivers which allow the a single HW driver to be used by kernel drivers (and the subsystems on top) and userspace drivers. > - No bus, ie for a simulator or binding to a netdev. (existing vhost?) Note, simulator can have its own class (sysfs etc.). > > They point is that the HW driver's job is to adapt from the bus level > interfaces (eg readl/writel) to the subsystem level (eg something like > the vdpa_ops). 
> > For instance that Intel driver should be a pci_driver to bind to a > struct pci_device for its VF and then call some 'vhost&vdpa' > _register() function to pass its ops to the subsystem which in turn > creates the struct device of the subsystem calls, common char devices, > sysfs, etc and calls the driver's ops in response to uAPI calls. > > This is already almost how things were setup in v2 of the patches, > near as I can see, just that a bus was inserted somehow instead of > having only the vhost class. Well the series (plus mdev part) uses a bus since day 0. It's not something new. Thanks > So it iwas confusing and the lifetime > model becomes too complicated to implement correctly... > > Jason >
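Filling out the struct sketched a few lines up in this message, a hedged example of how a HW driver could embed a vdpa_device and register it on the vDPA bus; the my_* names and my_config_ops are hypothetical, only vdpa_init_device() and vdpa_register_device() come from the patch:

        struct my_vdpa_adapter {
                struct vdpa_device vdpa;        /* registered on the vDPA bus */
                struct pci_dev *lowerdev;       /* the real device underneath */
        };

        static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        {
                struct my_vdpa_adapter *a;
                int err;

                a = kzalloc(sizeof(*a), GFP_KERNEL);
                if (!a)
                        return -ENOMEM;
                a->lowerdev = pdev;

                err = vdpa_init_device(&a->vdpa, &pdev->dev, &pdev->dev,
                                       &my_config_ops);
                if (err) {
                        kfree(a);
                        return err;
                }

                err = vdpa_register_device(&a->vdpa);
                /* Error unwind deliberately not shown; how to unwind this
                 * correctly is exactly what is being debated in this thread.
                 */
                return err;
        }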
On 2020/2/14 上午12:13, Jason Gunthorpe wrote: > On Thu, Feb 13, 2020 at 10:59:34AM -0500, Michael S. Tsirkin wrote: >> On Thu, Feb 13, 2020 at 11:51:54AM -0400, Jason Gunthorpe wrote: >>> The 'class' is supposed to provide all the library functions to remove >>> this duplication. Instead of plugging the HW driver in via some bus >>> scheme every subsystem has its own 'ops' that the HW driver provides >>> to the subsystem's class via subsystem_register() >> Hmm I'm not familiar with subsystem_register. A grep didn't find it >> in the kernel either ... > I mean it is the registration function provided by the subsystem that > owns the class, for instance tpm_chip_register(), > ib_register_device(), register_netdev(), rtc_register_device() etc > > So if you have some vhost (vhost net?) class then you'd have some > vhost_vdpa_init/alloc(); vhost_vdpa_register(), sequence > presumably. (vs trying to do it with a bus matcher) > > I recommend to look at rtc and tpm for fairly simple easy to follow > patterns for creating a subsystem in the kernel. A subsystem owns a class, > allows HW drivers to plug in to it, and provides a consistent user > API via a cdev/sysfs/etc. > > The driver model class should revolve around the char dev and sysfs > uABI - if you enumerate the devices on the class then they should all > follow the char dev and sysfs interfaces contract of that class. > > Those examples show how to do all the refcounting semi-sanely, > introduce sysfs, cdevs, etc. > > I thought the latest proposal was to use the existing vhost class and > largely the existing vhost API, so it probably just needs to make sure > the common class-wide stuff is split from the 'driver' stuff of the > existing vhost to netdev. Still, netdev is only one of the type we want to support. And we can not guarantee or forecast that vhost is the only API that is used. Let's take virtio as an example, it is implemented through a bus which allows different subsystems on top. And it can provide a variety of different uAPIs. For best performance, VFIO could be used for userspace drivers, but it requires the bus has support from VFIO. For vDPA devices, it's just the same logic. A bus allows different drivers and subsystems on top. One of the subsystem could be vhost that provides a unified API for userspace driver. Thanks > > Jason >
On Fri, Feb 14, 2020 at 11:23:27AM +0800, Jason Wang wrote: > > > Though all vDPA devices have the same programming interface, but the > > > semantic is different. So it looks to me that use bus complies what > > > class.rst said: > > > > > > " > > > > > > Each device class defines a set of semantics and a programming interface > > > that devices of that class adhere to. Device drivers are the > > > implementation of that programming interface for a particular device on > > > a particular bus. > > > > > > " > > Here we are talking about the /dev/XX node that provides the > > programming interface. > > > I'm confused here, are you suggesting to use class to create char device in > vhost-vdpa? That's fine but the comment should go for vhost-vdpa patch. Certainly yes, something creating many char devs should have a class. That makes the sysfs work as expected I suppose this is vhost user? I admit I don't really see how this vhost stuff works, all I see are global misc devices? Very unusual for a new subsystem to be using global misc devices.. I would have expected that a single VDPA device comes out as a single char dev linked to only that VDPA device. > > All the vdpa devices have the same basic > > chardev interface and discover any semantic variations 'in band' > > That's not true, char interface is only used for vhost. Kernel virtio driver > does not need char dev but a device on the virtio bus. Okay, this is fine, but why do you need two busses to accomplish this? Shouldn't the 'struct virito_device' be the plug in point for HW drivers I was talking about - and from there a vhost-user can connect to the struct virtio_device to give it a char dev or a kernel driver can connect to link it to another subsystem? It is easy to see something is going wrong with this design because the drivers/virtio/virtio_vdpa.c mainly contains a bunch of trampoline functions reflecting identical calls from one ops struct to a different ops struct. This suggests the 'vdpa' is some subclass of 'virtio' and it is possibly better to model it by extending 'struct virito_device' to include the vdpa specific stuff. Where does the vhost-user char dev get invovled in with the v2 series? Is that included? > > Every class of virtio traffic is going to need a special HW driver to > > enable VDPA, that special driver can create the correct vhost side > > class device. > > Are you saying, e.g it's the charge of IFCVF driver to create vhost char dev > and other stuffs? No. Jason
On Fri, Feb 14, 2020 at 12:05:32PM +0800, Jason Wang wrote: > > The standard driver model is a 'bus' driver provides the HW access > > (think PCI level things), and a 'hw driver' attaches to the bus > > device, > > This is not true, kernel had already had plenty virtual bus where virtual > devices and drivers could be attached, besides mdev and virtio, you can see > vop, rpmsg, visorbus etc. Sure, but those are not connecting HW into the kernel.. > > and instantiates a 'subsystem device' (think netdev, rdma, > > etc) using some per-subsystem XXX_register(). > > > Well, if you go through virtio spec, we support ~20 types of different > devices. Classes like netdev and rdma are correct since they have a clear > set of semantics their own. But grouping network and scsi into a single > class looks wrong, that's the work of a virtual bus. rdma also has about 20 different types of things it supports on top of the generic ib_device. The central point in RDMA is the 'struct ib_device' which is a device class. You can discover all RDMA devices by looking in /sys/class/infiniband/ It has an internal bus like thing (which probably should have been an actual bus, but this was done 15 years ago) which allows other subsystems to have drivers to match and bind their own drivers to the struct ib_device. So you'd have a chain like: struct pci_device -> struct ib_device -> [ib client bus thing] -> struct net_device And the various char devs are created by clients connecting to the ib_device and creating char devs on their own classes. Since ib_devices are multi-queue we can have all 20 devices running concurrently and there are various schemes to manage when the various things are created. > > The 'hw driver' pulls in > > functions from the 'subsystem' using a combination of callbacks and > > library-style calls so there is no code duplication. > > The point is we want vDPA devices to be used by different subsystems, not > only vhost, but also netdev, blk, crypto (every subsystem that can use > virtio devices). That's why we introduce vDPA bus and introduce different > drivers on top. See the other mail, it seems struct virtio_device serves this purpose already, confused why a struct vdpa_device and another bus is being introduced > There're several examples that a bus is needed on top. > > A good example is Mellanox TmFIFO driver which is a platform device driver > but register itself as a virtio device in order to be used by virito-console > driver on the virtio bus. How is that another bus? The platform bus is the HW bus, the TmFIFO is the HW driver, and virtio_device is the subsystem. This seems reasonable/normal so far.. > But it's a pity that the device can not be used by userspace driver due to > the limitation of virito bus which is designed for kernel driver. That's why > vDPA bus is introduced which abstract the common requirements of both kernel > and userspace drivers which allow the a single HW driver to be used by > kernel drivers (and the subsystems on top) and userspace drivers. Ah! Maybe this is the source of all this strangeness - the userspace driver is something parallel to the struct virtio_device instead of being a consumer of it?? That certianly would mess up the driver model quite a lot. Then you want to add another bus to switch between vhost and struct virtio_device? But only for vdpa? But as you point out something like TmFIFO is left hanging. Seems like the wrong abstraction point.. Jason
On 2020/2/14 下午10:04, Jason Gunthorpe wrote: > On Fri, Feb 14, 2020 at 12:05:32PM +0800, Jason Wang wrote: > >>> The standard driver model is a 'bus' driver provides the HW access >>> (think PCI level things), and a 'hw driver' attaches to the bus >>> device, >> This is not true, kernel had already had plenty virtual bus where virtual >> devices and drivers could be attached, besides mdev and virtio, you can see >> vop, rpmsg, visorbus etc. > Sure, but those are not connecting HW into the kernel.. Well the virtual devices are normally implemented via a real HW driver. E.g for virio bus, its transport driver could be driver of real hardware (e.g PCI). > >>> and instantiates a 'subsystem device' (think netdev, rdma, >>> etc) using some per-subsystem XXX_register(). >> >> Well, if you go through virtio spec, we support ~20 types of different >> devices. Classes like netdev and rdma are correct since they have a clear >> set of semantics their own. But grouping network and scsi into a single >> class looks wrong, that's the work of a virtual bus. > rdma also has about 20 different types of things it supports on top of > the generic ib_device. > > The central point in RDMA is the 'struct ib_device' which is a device > class. You can discover all RDMA devices by looking in /sys/class/infiniband/ > > It has an internal bus like thing (which probably should have been an > actual bus, but this was done 15 years ago) which allows other > subsystems to have drivers to match and bind their own drivers to the > struct ib_device. Right. > > So you'd have a chain like: > > struct pci_device -> struct ib_device -> [ib client bus thing] -> struct net_device So for vDPA we want to have: kernel datapath: struct pci_device -> struct vDPA device -> [ vDPA bus] -> struct virtio_device -> [virtio bus] -> struct net_device userspace datapath: struct pci_device -> struct vDPA device -> [ vDPA bus] -> struct vhost_device -> UAPI -> userspace driver > > And the various char devs are created by clients connecting to the > ib_device and creating char devs on their own classes. > > Since ib_devices are multi-queue we can have all 20 devices running > concurrently and there are various schemes to manage when the various > things are created. > >>> The 'hw driver' pulls in >>> functions from the 'subsystem' using a combination of callbacks and >>> library-style calls so there is no code duplication. >> The point is we want vDPA devices to be used by different subsystems, not >> only vhost, but also netdev, blk, crypto (every subsystem that can use >> virtio devices). That's why we introduce vDPA bus and introduce different >> drivers on top. > See the other mail, it seems struct virtio_device serves this purpose > already, confused why a struct vdpa_device and another bus is being > introduced > >> There're several examples that a bus is needed on top. >> >> A good example is Mellanox TmFIFO driver which is a platform device driver >> but register itself as a virtio device in order to be used by virito-console >> driver on the virtio bus. > How is that another bus? The platform bus is the HW bus, the TmFIFO is > the HW driver, and virtio_device is the subsystem. > > This seems reasonable/normal so far.. Yes, that's reasonable. This example is to answer the question why bus is used instead of class here. > >> But it's a pity that the device can not be used by userspace driver due to >> the limitation of virito bus which is designed for kernel driver. 
That's why >> vDPA bus is introduced which abstract the common requirements of both kernel >> and userspace drivers which allow the a single HW driver to be used by >> kernel drivers (and the subsystems on top) and userspace drivers. > Ah! Maybe this is the source of all this strangeness - the userspace > driver is something parallel to the struct virtio_device instead of > being a consumer of it?? userspace driver is not parallel to virtio_device. The vhost_device is parallel to virtio_device actually. > That certianly would mess up the driver model > quite a lot. > > Then you want to add another bus to switch between vhost and struct > virtio_device? But only for vdpa? Still, vhost works on top of vDPA bus directly (see the reply above). > > But as you point out something like TmFIFO is left hanging. Seems like > the wrong abstraction point.. You know, even refactoring virtio-bus is not for free, TmFIFO driver needs changes anyhow. Thanks > > Jason >
On 2020/2/14 下午9:52, Jason Gunthorpe wrote: > On Fri, Feb 14, 2020 at 11:23:27AM +0800, Jason Wang wrote: > >>>> Though all vDPA devices have the same programming interface, but the >>>> semantic is different. So it looks to me that use bus complies what >>>> class.rst said: >>>> >>>> " >>>> >>>> Each device class defines a set of semantics and a programming interface >>>> that devices of that class adhere to. Device drivers are the >>>> implementation of that programming interface for a particular device on >>>> a particular bus. >>>> >>>> " >>> Here we are talking about the /dev/XX node that provides the >>> programming interface. >> >> I'm confused here, are you suggesting to use class to create char device in >> vhost-vdpa? That's fine but the comment should go for vhost-vdpa patch. > Certainly yes, something creating many char devs should have a > class. That makes the sysfs work as expected > > I suppose this is vhost user? Actually not. Vhost-user is the vhost protocol that is used for userspace vhost backend (usually though a UNIX domain socket). What's being done in the vhost-vpda is a new type of the vhost in kernel. > I admit I don't really see how this > vhost stuff works, all I see are global misc devices? Very unusual for > a new subsystem to be using global misc devices.. Vhost is not a subsystem right now, e.g for it's net implementation, it was loosely coupled with a socket. I thought you were copied in the patch [1], maybe we can move vhost related discussion there to avoid confusion. [1] https://lwn.net/Articles/811210/ > > I would have expected that a single VDPA device comes out as a single > char dev linked to only that VDPA device. > >>> All the vdpa devices have the same basic >>> chardev interface and discover any semantic variations 'in band' >> That's not true, char interface is only used for vhost. Kernel virtio driver >> does not need char dev but a device on the virtio bus. > Okay, this is fine, but why do you need two busses to accomplish this? The reasons are: - vDPA ops is designed to be functional as a software assisted transport (control path) for virtio, so it's fit for a new transport driver but not directly into virtio bus. VOP use similar design. - virtio bus is designed for kernel drivers but not userspace, and it can not be easily extended to support userspace driver but requires some major refactoring. E.g the virtio bus operations requires the virtqueue to be allocated by the transport driver. So it's cheaper and simpler to introduce a new bus instead of refactoring a well known bus and API where brunches of drivers and devices had been implemented for years. > > Shouldn't the 'struct virito_device' be the plug in point for HW > drivers I was talking about - and from there a vhost-user can connect > to the struct virtio_device to give it a char dev or a kernel driver > can connect to link it to another subsystem? From vhost point of view, it would only need to connect vDPA bus, no need to go for virtio bus. Vhost device talks to vDPA device through vDPA bus. Virito device talks to vDPA device through a new vDPA transport driver. > > It is easy to see something is going wrong with this design because > the drivers/virtio/virtio_vdpa.c mainly contains a bunch of trampoline > functions reflecting identical calls from one ops struct to a > different ops struct. That's pretty normal, since part of the virtio ops could be 1:1 mapped to some device function. If you see MMIO and PCI transport, you can see something similar. 
The only difference is that in the case of VDPA the function is assisted or emulated by hardware vDPA driver. > This suggests the 'vdpa' is some subclass of > 'virtio' and it is possibly better to model it by extending 'struct > virito_device' to include the vdpa specific stuff. Going for such kind of modeling, virtio-pci and virtio-mmio could be also treated as a subclass of virtio as well, they were all implemented via a dedicated transport driver. > > Where does the vhost-user char dev get invovled in with the v2 series? > Is that included? We're working on the a new version, but for the bus/driver part it should be the same as version 1. Thanks > >>> Every class of virtio traffic is going to need a special HW driver to >>> enable VDPA, that special driver can create the correct vhost side >>> class device. >> Are you saying, e.g it's the charge of IFCVF driver to create vhost char dev >> and other stuffs? > No. > > Jason >
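For what it's worth, the kind of 1:1 mapping being described would look roughly like this (shape only; the vd_get_vdpa() helper is a stand-in, not the real virtio_vdpa.c code):

        static u8 virtio_vdpa_get_status(struct virtio_device *vdev)
        {
                struct vdpa_device *vdpa = vd_get_vdpa(vdev);

                /* The transport op forwards 1:1 to the vDPA config op. */
                return vdpa->config->get_status(vdpa);
        }

        static void virtio_vdpa_set_status(struct virtio_device *vdev, u8 status)
        {
                struct vdpa_device *vdpa = vd_get_vdpa(vdev);

                vdpa->config->set_status(vdpa, status);
        }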
On Mon, Feb 17, 2020 at 02:08:03PM +0800, Jason Wang wrote:
> I thought you were copied in the patch [1], maybe we can move vhost related
> discussion there to avoid confusion.
>
> [1] https://lwn.net/Articles/811210/

Wow, that is .. confusing.

So this is supposed to duplicate the uAPI of vhost-user? But it is
open coded and duplicated because .. vdpa?

> So it's cheaper and simpler to introduce a new bus instead of refactoring a
> well known bus and API where brunches of drivers and devices had been
> implemented for years.

If your reason for this approach is to ease the implementation then you
should talk about it in the cover letters/etc

Maybe it is reasonable to do this because the rework is too great, I
don't know, but to me this whole thing looks rather messy.

Remember this stuff is all uAPI as it shows up in sysfs, so you can
easily get stuck with it forever.

Jason
On Tue, Feb 18, 2020 at 01:56:12PM +0000, Jason Gunthorpe wrote: > On Mon, Feb 17, 2020 at 02:08:03PM +0800, Jason Wang wrote: > > > I thought you were copied in the patch [1], maybe we can move vhost related > > discussion there to avoid confusion. > > > > [1] https://lwn.net/Articles/811210/ > > Wow, that is .. confusing. > > So this is supposed to duplicate the uAPI of vhost-user? But it is > open coded and duplicated because .. vdpa? Do you mean the vhost-user in DPDK? There is no vhost-user in Linux kernel. Thanks, Tiwei > > > So it's cheaper and simpler to introduce a new bus instead of refactoring a > > well known bus and API where brunches of drivers and devices had been > > implemented for years. > > If you reason for this approach is to ease the implementation then you > should talk about it in the cover letters/etc > > Maybe it is reasonable to do this because the rework is too great, I > don't know, but to me this whole thing looks rather messy. > > Remember this stuff is all uAPI as it shows up in sysfs, so you can > easilly get stuck with it forever. > > Jason
On 2020/2/18 下午9:56, Jason Gunthorpe wrote: > On Mon, Feb 17, 2020 at 02:08:03PM +0800, Jason Wang wrote: > >> I thought you were copied in the patch [1], maybe we can move vhost related >> discussion there to avoid confusion. >> >> [1] https://lwn.net/Articles/811210/ > Wow, that is .. confusing. > > So this is supposed to duplicate the uAPI of vhost-user? It tries to reuse the uAPI of vhost with some extension. > But it is > open coded and duplicated because .. vdpa? I'm not sure I get here, vhost module is reused for vhost-vdpa and all current vhost device (e.g net) uses their own char device. > >> So it's cheaper and simpler to introduce a new bus instead of refactoring a >> well known bus and API where brunches of drivers and devices had been >> implemented for years. > If you reason for this approach is to ease the implementation then you > should talk about it in the cover letters/etc I will add more rationale in both cover letter and this patch. Thanks > > Maybe it is reasonable to do this because the rework is too great, I > don't know, but to me this whole thing looks rather messy. > > Remember this stuff is all uAPI as it shows up in sysfs, so you can > easilly get stuck with it forever. > > Jason >
On Wed, Feb 19, 2020 at 01:35:25PM +0800, Jason Wang wrote: > > But it is > > open coded and duplicated because .. vdpa? > > > I'm not sure I get here, vhost module is reused for vhost-vdpa and all > current vhost device (e.g net) uses their own char device. I mean there shouldn't be two fops implementing the same uAPI Jason
diff --git a/MAINTAINERS b/MAINTAINERS index d4bda9c900fa..578d2a581e3b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -17540,6 +17540,7 @@ F: tools/virtio/ F: drivers/net/virtio_net.c F: drivers/block/virtio_blk.c F: include/linux/virtio*.h +F: include/linux/vdpa.h F: include/uapi/linux/virtio_*.h F: drivers/crypto/virtio/ F: mm/balloon_compaction.c diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig index 078615cf2afc..9c4fdb64d9ac 100644 --- a/drivers/virtio/Kconfig +++ b/drivers/virtio/Kconfig @@ -96,3 +96,5 @@ config VIRTIO_MMIO_CMDLINE_DEVICES If unsure, say 'N'. endif # VIRTIO_MENU + +source "drivers/virtio/vdpa/Kconfig" diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile index 3a2b5c5dcf46..fdf5eacd0d0a 100644 --- a/drivers/virtio/Makefile +++ b/drivers/virtio/Makefile @@ -6,3 +6,4 @@ virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o +obj-$(CONFIG_VDPA) += vdpa/ diff --git a/drivers/virtio/vdpa/Kconfig b/drivers/virtio/vdpa/Kconfig new file mode 100644 index 000000000000..7a99170e6c30 --- /dev/null +++ b/drivers/virtio/vdpa/Kconfig @@ -0,0 +1,9 @@ +# SPDX-License-Identifier: GPL-2.0-only +config VDPA + tristate + default m + help + Enable this module to support vDPA device that uses a + datapath which complies with virtio specifications with + vendor specific control path. + diff --git a/drivers/virtio/vdpa/Makefile b/drivers/virtio/vdpa/Makefile new file mode 100644 index 000000000000..ee6a35e8a4fb --- /dev/null +++ b/drivers/virtio/vdpa/Makefile @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_VDPA) += vdpa.o diff --git a/drivers/virtio/vdpa/vdpa.c b/drivers/virtio/vdpa/vdpa.c new file mode 100644 index 000000000000..4003d90f5df3 --- /dev/null +++ b/drivers/virtio/vdpa/vdpa.c @@ -0,0 +1,160 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * vDPA bus. + * + * Copyright (c) 2020, Red Hat. All rights reserved. + * Author: Jason Wang <jasowang@redhat.com> + * + */ + +#include <linux/module.h> +#include <linux/idr.h> +#include <linux/vdpa.h> + +static DEFINE_IDA(vdpa_index_ida); + +static int vdpa_dev_probe(struct device *d) +{ + struct vdpa_device *vdev = dev_to_vdpa(d); + struct vdpa_driver *drv = drv_to_vdpa(vdev->dev.driver); + int ret = 0; + + if (drv && drv->probe) + ret = drv->probe(vdev); + + return ret; +} + +static int vdpa_dev_remove(struct device *d) +{ + struct vdpa_device *vdev = dev_to_vdpa(d); + struct vdpa_driver *drv = drv_to_vdpa(vdev->dev.driver); + + if (drv && drv->remove) + drv->remove(vdev); + + return 0; +} + +static struct bus_type vdpa_bus = { + .name = "vdpa", + .probe = vdpa_dev_probe, + .remove = vdpa_dev_remove, +}; + +/** + * vdpa_init_device - initilaize a vDPA device + * This allows driver to some prepartion between after device is + * initialized but before vdpa_register_device() + * @vdev: the vdpa device to be initialized + * @parent: the paretn device + * @dma_dev: the actual device that is performing DMA + * @config: the bus operations support by this device + * + * Returns an error when parent/config/dma_dev is not set or fail to get + * ida. 
+ */ +int vdpa_init_device(struct vdpa_device *vdev, struct device *parent, + struct device *dma_dev, + const struct vdpa_config_ops *config) +{ + int err; + + if (!parent || !dma_dev || !config) + return -EINVAL; + + err = ida_simple_get(&vdpa_index_ida, 0, 0, GFP_KERNEL); + if (err < 0) + return -EFAULT; + + vdev->dev.bus = &vdpa_bus; + vdev->dev.parent = parent; + + device_initialize(&vdev->dev); + + vdev->index = err; + vdev->dma_dev = dma_dev; + vdev->config = config; + + dev_set_name(&vdev->dev, "vdpa%u", vdev->index); + + return 0; +} +EXPORT_SYMBOL_GPL(vdpa_init_device); + +/** + * vdpa_register_device - register a vDPA device + * Callers must have a succeed call of vdpa_init_device() before. + * @vdev: the vdpa device to be registered to vDPA bus + * + * Returns an error when fail to add to vDPA bus + */ +int vdpa_register_device(struct vdpa_device *vdev) +{ + int err = device_add(&vdev->dev); + + if (err) { + put_device(&vdev->dev); + ida_simple_remove(&vdpa_index_ida, vdev->index); + } + + return err; +} +EXPORT_SYMBOL_GPL(vdpa_register_device); + +/** + * vdpa_unregister_device - unregister a vDPA device + * @vdev: the vdpa device to be unregisted from vDPA bus + */ +void vdpa_unregister_device(struct vdpa_device *vdev) +{ + int index = vdev->index; + + device_unregister(&vdev->dev); + ida_simple_remove(&vdpa_index_ida, index); +} +EXPORT_SYMBOL_GPL(vdpa_unregister_device); + +/** + * __vdpa_register_driver - register a vDPA device driver + * @drv: the vdpa device driver to be registered + * @owner: module owner of the driver + * + * Returns an err when fail to do the registration + */ +int __vdpa_register_driver(struct vdpa_driver *drv, struct module *owner) +{ + drv->driver.bus = &vdpa_bus; + drv->driver.owner = owner; + + return driver_register(&drv->driver); +} +EXPORT_SYMBOL_GPL(__vdpa_register_driver); + +/** + * vdpa_unregister_driver - unregister a vDPA device driver + * @drv: the vdpa device driver to be unregistered + */ +void vdpa_unregister_driver(struct vdpa_driver *drv) +{ + driver_unregister(&drv->driver); +} +EXPORT_SYMBOL_GPL(vdpa_unregister_driver); + +static int vdpa_init(void) +{ + if (bus_register(&vdpa_bus) != 0) + panic("virtio bus registration failed"); + return 0; +} + +static void __exit vdpa_exit(void) +{ + bus_unregister(&vdpa_bus); + ida_destroy(&vdpa_index_ida); +} +core_initcall(vdpa_init); +module_exit(vdpa_exit); + +MODULE_AUTHOR("Jason Wang <jasowang@redhat.com>"); +MODULE_LICENSE("GPL v2"); diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h new file mode 100644 index 000000000000..52543a862801 --- /dev/null +++ b/include/linux/vdpa.h @@ -0,0 +1,233 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_VDPA_H +#define _LINUX_VDPA_H + +#include <linux/device.h> +#include <linux/interrupt.h> +#include <linux/vhost_iotlb.h> + +/** + * vDPA callback definition. + * @callback: interrupt callback function + * @private: the data passed to the callback function + */ +struct vdpa_callback { + irqreturn_t (*callback)(void *data); + void *private; +}; + +/** + * vDPA device - representation of a vDPA device + * @dev: underlying device + * @dma_dev: the actual device that is performing DMA + * @config: the configuration ops for this device. + * @index: device index + */ +struct vdpa_device { + struct device dev; + struct device *dma_dev; + const struct vdpa_config_ops *config; + int index; +}; + +/** + * vDPA_config_ops - operations for configuring a vDPA device. 
+ * Note: vDPA device drivers are required to implement all of the
+ * operations unless it is mentioned to be optional in the following
+ * list.
+ *
+ * @set_vq_address:		Set the address of virtqueue
+ *				@vdev: vdpa device
+ *				@idx: virtqueue index
+ *				@desc_area: address of desc area
+ *				@driver_area: address of driver area
+ *				@device_area: address of device area
+ *				Returns integer: success (0) or error (< 0)
+ * @set_vq_num:			Set the size of virtqueue
+ *				@vdev: vdpa device
+ *				@idx: virtqueue index
+ *				@num: the size of virtqueue
+ * @kick_vq:			Kick the virtqueue
+ *				@vdev: vdpa device
+ *				@idx: virtqueue index
+ * @set_vq_cb:			Set the interrupt callback function for
+ *				a virtqueue
+ *				@vdev: vdpa device
+ *				@idx: virtqueue index
+ *				@cb: virtio-vdev interrupt callback structure
+ * @set_vq_ready:		Set ready status for a virtqueue
+ *				@vdev: vdpa device
+ *				@idx: virtqueue index
+ *				@ready: ready (true) or not ready (false)
+ * @get_vq_ready:		Get ready status for a virtqueue
+ *				@vdev: vdpa device
+ *				@idx: virtqueue index
+ *				Returns boolean: ready (true) or not (false)
+ * @set_vq_state:		Set the state for a virtqueue
+ *				@vdev: vdpa device
+ *				@idx: virtqueue index
+ *				@state: virtqueue state (last_avail_idx)
+ *				Returns integer: success (0) or error (< 0)
+ * @get_vq_state:		Get the state for a virtqueue
+ *				@vdev: vdpa device
+ *				@idx: virtqueue index
+ *				Returns virtqueue state (last_avail_idx)
+ * @get_vq_align:		Get the virtqueue align requirement
+ *				for the device
+ *				@vdev: vdpa device
+ *				Returns virtqueue align requirement
+ * @get_features:		Get virtio features supported by the device
+ *				@vdev: vdpa device
+ *				Returns the virtio features supported by the
+ *				device
+ * @set_features:		Set virtio features supported by the driver
+ *				@vdev: vdpa device
+ *				@features: features supported by the driver
+ *				Returns integer: success (0) or error (< 0)
+ * @set_config_cb:		Set the config interrupt callback
+ *				@vdev: vdpa device
+ *				@cb: virtio-vdev interrupt callback structure
+ * @get_vq_num_max:		Get the max size of virtqueue
+ *				@vdev: vdpa device
+ *				Returns u16: max size of virtqueue
+ * @get_device_id:		Get virtio device id
+ *				@vdev: vdpa device
+ *				Returns u32: virtio device id
+ * @get_vendor_id:		Get id for the vendor that provides this device
+ *				@vdev: vdpa device
+ *				Returns u32: virtio vendor id
+ * @get_status:			Get the device status
+ *				@vdev: vdpa device
+ *				Returns u8: virtio device status
+ * @set_status:			Set the device status
+ *				@vdev: vdpa device
+ *				@status: virtio device status
+ * @get_config:			Read from device specific configuration space
+ *				@vdev: vdpa device
+ *				@offset: offset from the beginning of
+ *				configuration space
+ *				@buf: buffer used to read to
+ *				@len: the length to read from
+ *				configuration space
+ * @set_config:			Write to device specific configuration space
+ *				@vdev: vdpa device
+ *				@offset: offset from the beginning of
+ *				configuration space
+ *				@buf: buffer used to write from
+ *				@len: the length to write to
+ *				configuration space
+ * @get_generation:		Get device config generation (optional)
+ *				@vdev: vdpa device
+ *				Returns u32: device generation
+ * @set_map:			Set device memory mapping (optional),
+ *				only needed for devices that use device
+ *				specific DMA translation (on-chip IOMMU)
+ *				@vdev: vdpa device
+ *				@iotlb: vhost memory mapping to be
+ *				used by the vDPA
+ *				Returns integer: success (0) or error (< 0)
+ * @dma_map:			Map an area of PA to IOVA
+ *				@vdev: vdpa device
+ *				@iova: iova to be mapped
+ *				@size: size of the area
+ *				@pa: physical address for the map
+ *				@perm: device access permission (VHOST_MAP_XX)
+ *				Returns integer: success (0) or error (< 0)
+ * @dma_unmap:			Unmap an area of IOVA
+ *				@vdev: vdpa device
+ *				@iova: iova to be unmapped
+ *				@size: size of the area
+ *				Returns integer: success (0) or error (< 0)
+ */
+struct vdpa_config_ops {
+	/* Virtqueue ops */
+	int (*set_vq_address)(struct vdpa_device *vdev,
+			      u16 idx, u64 desc_area, u64 driver_area,
+			      u64 device_area);
+	void (*set_vq_num)(struct vdpa_device *vdev, u16 idx, u32 num);
+	void (*kick_vq)(struct vdpa_device *vdev, u16 idx);
+	void (*set_vq_cb)(struct vdpa_device *vdev, u16 idx,
+			  struct vdpa_callback *cb);
+	void (*set_vq_ready)(struct vdpa_device *vdev, u16 idx, bool ready);
+	bool (*get_vq_ready)(struct vdpa_device *vdev, u16 idx);
+	int (*set_vq_state)(struct vdpa_device *vdev, u16 idx, u64 state);
+	u64 (*get_vq_state)(struct vdpa_device *vdev, u16 idx);
+
+	/* Device ops */
+	u16 (*get_vq_align)(struct vdpa_device *vdev);
+	u64 (*get_features)(struct vdpa_device *vdev);
+	int (*set_features)(struct vdpa_device *vdev, u64 features);
+	void (*set_config_cb)(struct vdpa_device *vdev,
+			      struct vdpa_callback *cb);
+	u16 (*get_vq_num_max)(struct vdpa_device *vdev);
+	u32 (*get_device_id)(struct vdpa_device *vdev);
+	u32 (*get_vendor_id)(struct vdpa_device *vdev);
+	u8 (*get_status)(struct vdpa_device *vdev);
+	void (*set_status)(struct vdpa_device *vdev, u8 status);
+	void (*get_config)(struct vdpa_device *vdev, unsigned int offset,
+			   void *buf, unsigned int len);
+	void (*set_config)(struct vdpa_device *vdev, unsigned int offset,
+			   const void *buf, unsigned int len);
+	u32 (*get_generation)(struct vdpa_device *vdev);
+
+	/* DMA ops */
+	int (*set_map)(struct vdpa_device *vdev, struct vhost_iotlb *iotlb);
+	int (*dma_map)(struct vdpa_device *vdev, u64 iova, u64 size,
+		       u64 pa, u32 perm);
+	int (*dma_unmap)(struct vdpa_device *vdev, u64 iova, u64 size);
+};
+
+int vdpa_init_device(struct vdpa_device *vdev,
+		     struct device *parent,
+		     struct device *dma_dev,
+		     const struct vdpa_config_ops *config);
+int vdpa_register_device(struct vdpa_device *vdev);
+void vdpa_unregister_device(struct vdpa_device *vdev);
+
+/**
+ * vdpa_driver - operations for a vDPA driver
+ * @driver: underlying device driver
+ * @probe: the function to call when a device is found. Returns 0 or -errno.
+ * @remove: the function to call when a device is removed.
+ */
+struct vdpa_driver {
+	struct device_driver driver;
+	int (*probe)(struct vdpa_device *vdev);
+	void (*remove)(struct vdpa_device *vdev);
+};
+
+#define vdpa_register_driver(drv) \
+	__vdpa_register_driver(drv, THIS_MODULE)
+int __vdpa_register_driver(struct vdpa_driver *drv, struct module *owner);
+void vdpa_unregister_driver(struct vdpa_driver *drv);
+
+#define module_vdpa_driver(__vdpa_driver) \
+	module_driver(__vdpa_driver, vdpa_register_driver, \
+		      vdpa_unregister_driver)
+
+static inline struct vdpa_driver *drv_to_vdpa(struct device_driver *driver)
+{
+	return container_of(driver, struct vdpa_driver, driver);
+}
+
+static inline struct vdpa_device *dev_to_vdpa(struct device *_dev)
+{
+	return container_of(_dev, struct vdpa_device, dev);
+}
+
+static inline void *vdpa_get_drvdata(const struct vdpa_device *vdev)
+{
+	return dev_get_drvdata(&vdev->dev);
+}
+
+static inline void vdpa_set_drvdata(struct vdpa_device *vdev, void *data)
+{
+	dev_set_drvdata(&vdev->dev, data);
+}
+
+static inline struct device *vdpa_get_dma_dev(struct vdpa_device *vdev)
+{
+	return vdev->dma_dev;
+}
+#endif /* _LINUX_VDPA_H */
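
[Editor's note] To make the consumer side of this interface concrete, below is a
minimal sketch of a vDPA bus driver registering against the bus defined above.
The demo_* names are hypothetical and only illustrate the flow: the driver is
bound through module_vdpa_driver() and talks to the device exclusively through
the unified vdpa_config_ops.

/* Hypothetical bus-driver sketch; the demo_* names are illustrative only. */
#include <linux/module.h>
#include <linux/vdpa.h>

static int demo_vdpa_probe(struct vdpa_device *vdev)
{
	const struct vdpa_config_ops *ops = vdev->config;

	/* Query device identity through the unified config ops */
	dev_info(&vdev->dev, "device id %u, vendor id %u, max vq size %u\n",
		 ops->get_device_id(vdev), ops->get_vendor_id(vdev),
		 ops->get_vq_num_max(vdev));
	return 0;
}

static void demo_vdpa_remove(struct vdpa_device *vdev)
{
	/* Nothing to tear down in this sketch */
}

static struct vdpa_driver demo_vdpa_driver = {
	.driver = {
		.name = "demo_vdpa",
	},
	.probe  = demo_vdpa_probe,
	.remove = demo_vdpa_remove,
};

module_vdpa_driver(demo_vdpa_driver);
MODULE_LICENSE("GPL");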
A vDPA device is a device that uses a datapath which complies with the virtio
specification, with a vendor specific control path. vDPA devices can either be
physically located on the hardware or emulated by software. vDPA hardware
devices are usually implemented through PCIE with the following types:

- PF (Physical Function) - A single Physical Function
- VF (Virtual Function) - Device that supports single root I/O virtualization
  (SR-IOV). Its Virtual Function (VF) represents a virtualized instance of the
  device that can be assigned to different partitions
- ADI (Assignable Device Interface) and its equivalents - With technologies
  such as Intel Scalable IOV, a virtual device (VDEV) is composed by the host
  OS utilizing one or more ADIs. An equivalent is the SF (Sub Function) from
  Mellanox.

From a driver's perspective, depending on how and where the DMA translation
is done, vDPA devices are split into two types:

- Platform specific DMA translation - From the driver's perspective, the
  device can be used on a platform where device access to data in memory is
  limited and/or translated. An example is a PCIE vDPA whose DMA request is
  tagged in a bus (e.g. PCIE) specific way. DMA translation and protection are
  done at the PCIE bus IOMMU level.
- Device specific DMA translation - The device implements DMA isolation and
  protection through its own logic. An example is a vDPA device which uses an
  on-chip IOMMU.

To hide the differences and complexity of the above device/IOMMU options, and
in order to present a generic virtio device to the upper layer, a device
agnostic framework is required.

This patch introduces a software vDPA bus which abstracts the common
attributes of a vDPA device, the vDPA bus driver, and the communication
method (vdpa_config_ops) between the vDPA device abstraction and the vDPA
bus driver.

With the abstraction of the vDPA bus and vDPA bus operations, the differences
and complexity of the underlying hardware are hidden from the upper layer. The
vDPA bus drivers on top can use a unified vdpa_config_ops to control different
types of vDPA devices.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 MAINTAINERS                  |   1 +
 drivers/virtio/Kconfig       |   2 +
 drivers/virtio/Makefile      |   1 +
 drivers/virtio/vdpa/Kconfig  |   9 ++
 drivers/virtio/vdpa/Makefile |   2 +
 drivers/virtio/vdpa/vdpa.c   | 160 ++++++++++++++++++++++++
 include/linux/vdpa.h         | 233 +++++++++++++++++++++++++++++++++++
 7 files changed, 408 insertions(+)
 create mode 100644 drivers/virtio/vdpa/Kconfig
 create mode 100644 drivers/virtio/vdpa/Makefile
 create mode 100644 drivers/virtio/vdpa/vdpa.c
 create mode 100644 include/linux/vdpa.h
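
[Editor's note] For completeness, the device (vendor driver) side of this
abstraction would look roughly like the sketch below: a hardware driver
initializes an embedded vdpa_device with its own vdpa_config_ops table and
registers it on the vDPA bus. The my_hw_* names and the empty ops table are
assumptions for illustration, and error unwinding is intentionally elided.

/* Hypothetical vendor-side sketch; my_hw_* names are illustrative only. */
#include <linux/slab.h>
#include <linux/vdpa.h>

struct my_hw_vdpa {
	struct vdpa_device vdpa;	/* embedded vDPA device */
	/* vendor specific state would follow */
};

static const struct vdpa_config_ops my_hw_vdpa_ops = {
	/* .set_vq_address = ..., .get_features = ..., .set_status = ..., etc. */
};

static int my_hw_create(struct device *parent, struct device *dma_dev)
{
	struct my_hw_vdpa *hw;
	int err;

	hw = kzalloc(sizeof(*hw), GFP_KERNEL);
	if (!hw)
		return -ENOMEM;

	err = vdpa_init_device(&hw->vdpa, parent, dma_dev, &my_hw_vdpa_ops);
	if (err) {
		kfree(hw);	/* sketch only: real unwinding must respect device refcounting */
		return err;
	}

	/* Make the device visible to vDPA bus drivers */
	return vdpa_register_device(&hw->vdpa);
}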