Message ID | 20231218083755.96281-10-yishaih@nvidia.com (mailing list archive)
---|---
State | New, archived
Series | Introduce a vfio driver over virtio devices
On Mon, 18 Dec 2023 10:37:55 +0200
Yishai Hadas <yishaih@nvidia.com> wrote:

> Introduce a vfio driver over virtio devices to support the legacy
> interface functionality for VFs.
>
> Background, from the virtio spec [1].
> --------------------------------------------------------------------
> In some systems, there is a need to support a virtio legacy driver with
> a device that does not directly support the legacy interface. In such
> scenarios, a group owner device can provide the legacy interface
> functionality for the group member devices. The driver of the owner
> device can then access the legacy interface of a member device on behalf
> of the legacy member device driver.
>
> For example, with the SR-IOV group type, group members (VFs) can not
> present the legacy interface in an I/O BAR in BAR0 as expected by the
> legacy pci driver. If the legacy driver is running inside a virtual
> machine, the hypervisor executing the virtual machine can present a
> virtual device with an I/O BAR in BAR0. The hypervisor intercepts the
> legacy driver accesses to this I/O BAR and forwards them to the group
> owner device (PF) using group administration commands.
> --------------------------------------------------------------------
>
> Specifically, this driver adds support for exposing a virtio-net VF as
> a transitional device to a guest driver, providing the legacy I/O BAR
> functionality on top.
>
> This allows a VM that uses a legacy virtio-net driver in the guest to
> work transparently over a VF whose host driver is this new driver.
>
> The driver can easily be extended to support other virtio device types
> (e.g. virtio-blk) by adding the type-specific properties in a few
> places, as was done for virtio-net.
>
> For now, only the virtio-net use case has been tested, so support is
> introduced only for that device type.
>
> Practically, upon probing a VF of a virtio-net device, if its PF
> supports legacy access over the virtio admin commands and the VF does
> not have BAR 0, we install specific 'vfio_device_ops' that simulate in
> software a transitional device with an I/O BAR in BAR 0.
>
> The existence of the simulated I/O BAR is reported later on by
> overriding the VFIO_DEVICE_GET_REGION_INFO ioctl, and the device
> exposes itself as a transitional device by overriding some properties
> when its config space is read.
>
> Once we report the existence of the I/O BAR as BAR 0, a legacy driver
> in the guest may use it via read/write calls according to the virtio
> specification.
>
> Any read/write towards the control parts of the BAR is captured by the
> new driver and translated into admin commands towards the device.
>
> In addition, any data path read/write access (i.e. virtio driver
> notifications) is captured by the driver and forwarded to the physical
> BAR whose properties were supplied by the VIRTIO_ADMIN_CMD_LEGACY_NOTIFY_INFO
> admin command during the probing/init flow.
>
> With that code in place, a legacy driver in the guest has the look and
> feel of a transitional device with legacy support for both its control
> and data path flows.
> 
> [1]
> https://github.com/oasis-tcs/virtio-spec/commit/03c2d32e5093ca9f2a17797242fbef88efe94b8c
> 
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
> ---
>  MAINTAINERS                      |   7 +
>  drivers/vfio/pci/Kconfig         |   2 +
>  drivers/vfio/pci/Makefile        |   2 +
>  drivers/vfio/pci/virtio/Kconfig  |  15 +
>  drivers/vfio/pci/virtio/Makefile |   4 +
>  drivers/vfio/pci/virtio/main.c   | 576 +++++++++++++++++++++++++++++++
>  6 files changed, 606 insertions(+)
>  create mode 100644 drivers/vfio/pci/virtio/Kconfig
>  create mode 100644 drivers/vfio/pci/virtio/Makefile
>  create mode 100644 drivers/vfio/pci/virtio/main.c
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 012df8ccf34e..b246b769092d 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -22872,6 +22872,13 @@ L:	kvm@vger.kernel.org
>  S:	Maintained
>  F:	drivers/vfio/pci/mlx5/
>  
> +VFIO VIRTIO PCI DRIVER
> +M:	Yishai Hadas <yishaih@nvidia.com>
> +L:	kvm@vger.kernel.org
> +L:	virtualization@lists.linux-foundation.org
> +S:	Maintained
> +F:	drivers/vfio/pci/virtio
> +
>  VFIO PCI DEVICE SPECIFIC DRIVERS
>  R:	Jason Gunthorpe <jgg@nvidia.com>
>  R:	Yishai Hadas <yishaih@nvidia.com>
> diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
> index 8125e5f37832..18c397df566d 100644
> --- a/drivers/vfio/pci/Kconfig
> +++ b/drivers/vfio/pci/Kconfig
> @@ -65,4 +65,6 @@ source "drivers/vfio/pci/hisilicon/Kconfig"
>  
>  source "drivers/vfio/pci/pds/Kconfig"
>  
> +source "drivers/vfio/pci/virtio/Kconfig"
> +
>  endmenu
> diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
> index 45167be462d8..046139a4eca5 100644
> --- a/drivers/vfio/pci/Makefile
> +++ b/drivers/vfio/pci/Makefile
> @@ -13,3 +13,5 @@ obj-$(CONFIG_MLX5_VFIO_PCI) += mlx5/
>  obj-$(CONFIG_HISI_ACC_VFIO_PCI) += hisilicon/
>  
>  obj-$(CONFIG_PDS_VFIO_PCI) += pds/
> +
> +obj-$(CONFIG_VIRTIO_VFIO_PCI) += virtio/
> diff --git a/drivers/vfio/pci/virtio/Kconfig b/drivers/vfio/pci/virtio/Kconfig
> new file mode 100644
> index 000000000000..a3e5d8ea22a0
> --- /dev/null
> +++ b/drivers/vfio/pci/virtio/Kconfig
> @@ -0,0 +1,15 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +config VIRTIO_VFIO_PCI
> +	tristate "VFIO support for VIRTIO NET PCI devices"
> +	depends on X86 && VIRTIO_PCI

I'd really prefer if this was (X86 || COMPILE_TEST) but the legacy admin
interfaces are also hard linked to X86, so I think instead the following
should be factored into the series:

diff --git a/drivers/vfio/pci/virtio/Kconfig b/drivers/vfio/pci/virtio/Kconfig
index a3e5d8ea22a0..244b09eff67c 100644
--- a/drivers/vfio/pci/virtio/Kconfig
+++ b/drivers/vfio/pci/virtio/Kconfig
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 config VIRTIO_VFIO_PCI
 	tristate "VFIO support for VIRTIO NET PCI devices"
-	depends on X86 && VIRTIO_PCI
+	depends on VIRTIO_PCI_ADMIN_LEGACY
 	select VFIO_PCI_CORE
 	help
 	  This provides support for exposing VIRTIO NET VF devices which support
diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index 0a53a61231c2..259e7742a442 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -60,6 +60,11 @@ config VIRTIO_PCI
 
 	  If unsure, say M.
 
+config VIRTIO_PCI_ADMIN_LEGACY
+	bool
+	depends on VIRTIO_PCI && (X86 || COMPILE_TEST)
+	default y
+
 config VIRTIO_PCI_LEGACY
 	bool "Support for legacy virtio draft 0.9.X and older devices"
 	default y
diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
index a73358bb4ebb..73ace62af440 100644
--- a/drivers/virtio/Makefile
+++ b/drivers/virtio/Makefile
@@ -7,7 +7,7 @@ obj-$(CONFIG_VIRTIO_MMIO) += virtio_mmio.o
 obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o
 virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
 virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
-virtio_pci-$(CONFIG_X86) += virtio_pci_admin_legacy_io.o
+virtio_pci-$(CONFIG_VIRTIO_PCI_ADMIN_LEGACY) += virtio_pci_admin_legacy_io.o
 obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
 obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o
 obj-$(CONFIG_VIRTIO_VDPA) += virtio_vdpa.o
diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
index ff51c8053520..7fef52bee455 100644
--- a/drivers/virtio/virtio_pci_common.h
+++ b/drivers/virtio/virtio_pci_common.h
@@ -170,7 +170,7 @@ struct virtio_device *virtio_pci_vf_get_pf_dev(struct pci_dev *pdev);
  * on ARM, use big endian on PPC, etc. X86 drivers are mostly ok though, more
  * or less by chance. For now, only support legacy IO on X86.
  */
-#ifdef CONFIG_X86
+#ifdef CONFIG_VIRTIO_PCI_ADMIN_LEGACY
 #define VIRTIO_ADMIN_CMD_BITMAP VIRTIO_LEGACY_ADMIN_CMD_BITMAP
 #else
 #define VIRTIO_ADMIN_CMD_BITMAP 0
diff --git a/include/linux/virtio_pci_admin.h b/include/linux/virtio_pci_admin.h
index 0c9c1f336d3f..f4a100a0fe2e 100644
--- a/include/linux/virtio_pci_admin.h
+++ b/include/linux/virtio_pci_admin.h
@@ -5,7 +5,7 @@
 #include <linux/types.h>
 #include <linux/pci.h>
 
-#ifdef CONFIG_X86
+#ifdef CONFIG_VIRTIO_PCI_ADMIN_LEGACY
 bool virtio_pci_admin_has_legacy_io(struct pci_dev *pdev);
 int virtio_pci_admin_legacy_common_io_write(struct pci_dev *pdev, u8 offset,
 					    u8 size, u8 *buf);

It might even be preferable if virtio_pci_admin_legacy_io were only built
if VIRTIO_VFIO_PCI were selected, but I can't quickly get that to work.

> +	select VFIO_PCI_CORE
> +	help
> +	  This provides support for exposing VIRTIO NET VF devices which support
> +	  legacy IO access, using the VFIO framework that can work with a legacy
> +	  virtio driver in the guest.
> +	  Based on PCIe spec, VFs do not support I/O Space.
> +	  As of that this driver emulates I/O BAR in software to let a VF be
> +	  seen as a transitional device by its users and let it work with
> +	  a legacy driver.
> +
> +	  If you don't know what to do here, say N.
> diff --git a/drivers/vfio/pci/virtio/Makefile b/drivers/vfio/pci/virtio/Makefile
> new file mode 100644
> index 000000000000..2039b39fb723
> --- /dev/null
> +++ b/drivers/vfio/pci/virtio/Makefile
> @@ -0,0 +1,4 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +obj-$(CONFIG_VIRTIO_VFIO_PCI) += virtio-vfio-pci.o
> +virtio-vfio-pci-y := main.o
> +

Extraneous blank line at the end of this file.

Thanks,
Alex

> diff --git a/drivers/vfio/pci/virtio/main.c b/drivers/vfio/pci/virtio/main.c
> new file mode 100644
> index 000000000000..291c55b641f1
> --- /dev/null
> +++ b/drivers/vfio/pci/virtio/main.c
> @@ -0,0 +1,576 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES.
All rights reserved > + */ > + > +#include <linux/device.h> > +#include <linux/module.h> > +#include <linux/mutex.h> > +#include <linux/pci.h> > +#include <linux/pm_runtime.h> > +#include <linux/types.h> > +#include <linux/uaccess.h> > +#include <linux/vfio.h> > +#include <linux/vfio_pci_core.h> > +#include <linux/virtio_pci.h> > +#include <linux/virtio_net.h> > +#include <linux/virtio_pci_admin.h> > + > +struct virtiovf_pci_core_device { > + struct vfio_pci_core_device core_device; > + u8 *bar0_virtual_buf; > + /* synchronize access to the virtual buf */ > + struct mutex bar_mutex; > + void __iomem *notify_addr; > + u64 notify_offset; > + __le32 pci_base_addr_0; > + __le16 pci_cmd; > + u8 bar0_virtual_buf_size; > + u8 notify_bar; > +}; > + > +static int > +virtiovf_issue_legacy_rw_cmd(struct virtiovf_pci_core_device *virtvdev, > + loff_t pos, char __user *buf, > + size_t count, bool read) > +{ > + bool msix_enabled = > + (virtvdev->core_device.irq_type == VFIO_PCI_MSIX_IRQ_INDEX); > + struct pci_dev *pdev = virtvdev->core_device.pdev; > + u8 *bar0_buf = virtvdev->bar0_virtual_buf; > + bool common; > + u8 offset; > + int ret; > + > + common = pos < VIRTIO_PCI_CONFIG_OFF(msix_enabled); > + /* offset within the relevant configuration area */ > + offset = common ? pos : pos - VIRTIO_PCI_CONFIG_OFF(msix_enabled); > + mutex_lock(&virtvdev->bar_mutex); > + if (read) { > + if (common) > + ret = virtio_pci_admin_legacy_common_io_read(pdev, offset, > + count, bar0_buf + pos); > + else > + ret = virtio_pci_admin_legacy_device_io_read(pdev, offset, > + count, bar0_buf + pos); > + if (ret) > + goto out; > + if (copy_to_user(buf, bar0_buf + pos, count)) > + ret = -EFAULT; > + } else { > + if (copy_from_user(bar0_buf + pos, buf, count)) { > + ret = -EFAULT; > + goto out; > + } > + > + if (common) > + ret = virtio_pci_admin_legacy_common_io_write(pdev, offset, > + count, bar0_buf + pos); > + else > + ret = virtio_pci_admin_legacy_device_io_write(pdev, offset, > + count, bar0_buf + pos); > + } > +out: > + mutex_unlock(&virtvdev->bar_mutex); > + return ret; > +} > + > +static int > +virtiovf_pci_bar0_rw(struct virtiovf_pci_core_device *virtvdev, > + loff_t pos, char __user *buf, > + size_t count, bool read) > +{ > + struct vfio_pci_core_device *core_device = &virtvdev->core_device; > + struct pci_dev *pdev = core_device->pdev; > + u16 queue_notify; > + int ret; > + > + if (!(le16_to_cpu(virtvdev->pci_cmd) & PCI_COMMAND_IO)) > + return -EIO; > + > + if (pos + count > virtvdev->bar0_virtual_buf_size) > + return -EINVAL; > + > + ret = pm_runtime_resume_and_get(&pdev->dev); > + if (ret) { > + pci_info_ratelimited(pdev, "runtime resume failed %d\n", ret); > + return -EIO; > + } > + > + switch (pos) { > + case VIRTIO_PCI_QUEUE_NOTIFY: > + if (count != sizeof(queue_notify)) { > + ret = -EINVAL; > + goto end; > + } > + if (read) { > + ret = vfio_pci_core_ioread16(core_device, true, &queue_notify, > + virtvdev->notify_addr); > + if (ret) > + goto end; > + if (copy_to_user(buf, &queue_notify, > + sizeof(queue_notify))) { > + ret = -EFAULT; > + goto end; > + } > + } else { > + if (copy_from_user(&queue_notify, buf, count)) { > + ret = -EFAULT; > + goto end; > + } > + ret = vfio_pci_core_iowrite16(core_device, true, queue_notify, > + virtvdev->notify_addr); > + } > + break; > + default: > + ret = virtiovf_issue_legacy_rw_cmd(virtvdev, pos, buf, count, > + read); > + } > + > +end: > + pm_runtime_put(&pdev->dev); > + return ret ? 
ret : count; > +} > + > +static bool range_intersect_range(loff_t range1_start, size_t count1, > + loff_t range2_start, size_t count2, > + loff_t *start_offset, > + size_t *intersect_count, > + size_t *register_offset) > +{ > + if (range1_start <= range2_start && > + range1_start + count1 > range2_start) { > + *start_offset = range2_start - range1_start; > + *intersect_count = min_t(size_t, count2, > + range1_start + count1 - range2_start); > + *register_offset = 0; > + return true; > + } > + > + if (range1_start > range2_start && > + range1_start < range2_start + count2) { > + *start_offset = 0; > + *intersect_count = min_t(size_t, count1, > + range2_start + count2 - range1_start); > + *register_offset = range1_start - range2_start; > + return true; > + } > + > + return false; > +} > + > +static ssize_t virtiovf_pci_read_config(struct vfio_device *core_vdev, > + char __user *buf, size_t count, > + loff_t *ppos) > +{ > + struct virtiovf_pci_core_device *virtvdev = container_of( > + core_vdev, struct virtiovf_pci_core_device, core_device.vdev); > + loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK; > + size_t register_offset; > + loff_t copy_offset; > + size_t copy_count; > + __le32 val32; > + __le16 val16; > + u8 val8; > + int ret; > + > + ret = vfio_pci_core_read(core_vdev, buf, count, ppos); > + if (ret < 0) > + return ret; > + > + if (range_intersect_range(pos, count, PCI_DEVICE_ID, sizeof(val16), > + ©_offset, ©_count, ®ister_offset)) { > + val16 = cpu_to_le16(VIRTIO_TRANS_ID_NET); > + if (copy_to_user(buf + copy_offset, (void *)&val16 + register_offset, copy_count)) > + return -EFAULT; > + } > + > + if ((le16_to_cpu(virtvdev->pci_cmd) & PCI_COMMAND_IO) && > + range_intersect_range(pos, count, PCI_COMMAND, sizeof(val16), > + ©_offset, ©_count, ®ister_offset)) { > + if (copy_from_user((void *)&val16 + register_offset, buf + copy_offset, > + copy_count)) > + return -EFAULT; > + val16 |= cpu_to_le16(PCI_COMMAND_IO); > + if (copy_to_user(buf + copy_offset, (void *)&val16 + register_offset, > + copy_count)) > + return -EFAULT; > + } > + > + if (range_intersect_range(pos, count, PCI_REVISION_ID, sizeof(val8), > + ©_offset, ©_count, ®ister_offset)) { > + /* Transional needs to have revision 0 */ > + val8 = 0; > + if (copy_to_user(buf + copy_offset, &val8, copy_count)) > + return -EFAULT; > + } > + > + if (range_intersect_range(pos, count, PCI_BASE_ADDRESS_0, sizeof(val32), > + ©_offset, ©_count, ®ister_offset)) { > + u32 bar_mask = ~(virtvdev->bar0_virtual_buf_size - 1); > + u32 pci_base_addr_0 = le32_to_cpu(virtvdev->pci_base_addr_0); > + > + val32 = cpu_to_le32((pci_base_addr_0 & bar_mask) | PCI_BASE_ADDRESS_SPACE_IO); > + if (copy_to_user(buf + copy_offset, (void *)&val32 + register_offset, copy_count)) > + return -EFAULT; > + } > + > + if (range_intersect_range(pos, count, PCI_SUBSYSTEM_ID, sizeof(val16), > + ©_offset, ©_count, ®ister_offset)) { > + /* > + * Transitional devices use the PCI subsystem device id as > + * virtio device id, same as legacy driver always did. 
> + */ > + val16 = cpu_to_le16(VIRTIO_ID_NET); > + if (copy_to_user(buf + copy_offset, (void *)&val16 + register_offset, > + copy_count)) > + return -EFAULT; > + } > + > + if (range_intersect_range(pos, count, PCI_SUBSYSTEM_VENDOR_ID, sizeof(val16), > + ©_offset, ©_count, ®ister_offset)) { > + val16 = cpu_to_le16(PCI_VENDOR_ID_REDHAT_QUMRANET); > + if (copy_to_user(buf + copy_offset, (void *)&val16 + register_offset, > + copy_count)) > + return -EFAULT; > + } > + > + return count; > +} > + > +static ssize_t > +virtiovf_pci_core_read(struct vfio_device *core_vdev, char __user *buf, > + size_t count, loff_t *ppos) > +{ > + struct virtiovf_pci_core_device *virtvdev = container_of( > + core_vdev, struct virtiovf_pci_core_device, core_device.vdev); > + unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos); > + loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK; > + > + if (!count) > + return 0; > + > + if (index == VFIO_PCI_CONFIG_REGION_INDEX) > + return virtiovf_pci_read_config(core_vdev, buf, count, ppos); > + > + if (index == VFIO_PCI_BAR0_REGION_INDEX) > + return virtiovf_pci_bar0_rw(virtvdev, pos, buf, count, true); > + > + return vfio_pci_core_read(core_vdev, buf, count, ppos); > +} > + > +static ssize_t virtiovf_pci_write_config(struct vfio_device *core_vdev, > + const char __user *buf, size_t count, > + loff_t *ppos) > +{ > + struct virtiovf_pci_core_device *virtvdev = container_of( > + core_vdev, struct virtiovf_pci_core_device, core_device.vdev); > + loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK; > + size_t register_offset; > + loff_t copy_offset; > + size_t copy_count; > + > + if (range_intersect_range(pos, count, PCI_COMMAND, sizeof(virtvdev->pci_cmd), > + ©_offset, ©_count, > + ®ister_offset)) { > + if (copy_from_user((void *)&virtvdev->pci_cmd + register_offset, > + buf + copy_offset, > + copy_count)) > + return -EFAULT; > + } > + > + if (range_intersect_range(pos, count, PCI_BASE_ADDRESS_0, > + sizeof(virtvdev->pci_base_addr_0), > + ©_offset, ©_count, > + ®ister_offset)) { > + if (copy_from_user((void *)&virtvdev->pci_base_addr_0 + register_offset, > + buf + copy_offset, > + copy_count)) > + return -EFAULT; > + } > + > + return vfio_pci_core_write(core_vdev, buf, count, ppos); > +} > + > +static ssize_t > +virtiovf_pci_core_write(struct vfio_device *core_vdev, const char __user *buf, > + size_t count, loff_t *ppos) > +{ > + struct virtiovf_pci_core_device *virtvdev = container_of( > + core_vdev, struct virtiovf_pci_core_device, core_device.vdev); > + unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos); > + loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK; > + > + if (!count) > + return 0; > + > + if (index == VFIO_PCI_CONFIG_REGION_INDEX) > + return virtiovf_pci_write_config(core_vdev, buf, count, ppos); > + > + if (index == VFIO_PCI_BAR0_REGION_INDEX) > + return virtiovf_pci_bar0_rw(virtvdev, pos, (char __user *)buf, count, false); > + > + return vfio_pci_core_write(core_vdev, buf, count, ppos); > +} > + > +static int > +virtiovf_pci_ioctl_get_region_info(struct vfio_device *core_vdev, > + unsigned int cmd, unsigned long arg) > +{ > + struct virtiovf_pci_core_device *virtvdev = container_of( > + core_vdev, struct virtiovf_pci_core_device, core_device.vdev); > + unsigned long minsz = offsetofend(struct vfio_region_info, offset); > + void __user *uarg = (void __user *)arg; > + struct vfio_region_info info = {}; > + > + if (copy_from_user(&info, uarg, minsz)) > + return -EFAULT; > + > + if (info.argsz < minsz) > + return -EINVAL; > + > + switch (info.index) { > + case VFIO_PCI_BAR0_REGION_INDEX: > + 
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index); > + info.size = virtvdev->bar0_virtual_buf_size; > + info.flags = VFIO_REGION_INFO_FLAG_READ | > + VFIO_REGION_INFO_FLAG_WRITE; > + return copy_to_user(uarg, &info, minsz) ? -EFAULT : 0; > + default: > + return vfio_pci_core_ioctl(core_vdev, cmd, arg); > + } > +} > + > +static long > +virtiovf_vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd, > + unsigned long arg) > +{ > + switch (cmd) { > + case VFIO_DEVICE_GET_REGION_INFO: > + return virtiovf_pci_ioctl_get_region_info(core_vdev, cmd, arg); > + default: > + return vfio_pci_core_ioctl(core_vdev, cmd, arg); > + } > +} > + > +static int > +virtiovf_set_notify_addr(struct virtiovf_pci_core_device *virtvdev) > +{ > + struct vfio_pci_core_device *core_device = &virtvdev->core_device; > + int ret; > + > + /* > + * Setup the BAR where the 'notify' exists to be used by vfio as well > + * This will let us mmap it only once and use it when needed. > + */ > + ret = vfio_pci_core_setup_barmap(core_device, > + virtvdev->notify_bar); > + if (ret) > + return ret; > + > + virtvdev->notify_addr = core_device->barmap[virtvdev->notify_bar] + > + virtvdev->notify_offset; > + return 0; > +} > + > +static int virtiovf_pci_open_device(struct vfio_device *core_vdev) > +{ > + struct virtiovf_pci_core_device *virtvdev = container_of( > + core_vdev, struct virtiovf_pci_core_device, core_device.vdev); > + struct vfio_pci_core_device *vdev = &virtvdev->core_device; > + int ret; > + > + ret = vfio_pci_core_enable(vdev); > + if (ret) > + return ret; > + > + if (virtvdev->bar0_virtual_buf) { > + /* > + * Upon close_device() the vfio_pci_core_disable() is called > + * and will close all the previous mmaps, so it seems that the > + * valid life cycle for the 'notify' addr is per open/close. 
> + */ > + ret = virtiovf_set_notify_addr(virtvdev); > + if (ret) { > + vfio_pci_core_disable(vdev); > + return ret; > + } > + } > + > + vfio_pci_core_finish_enable(vdev); > + return 0; > +} > + > +static int virtiovf_get_device_config_size(unsigned short device) > +{ > + /* Network card */ > + return offsetofend(struct virtio_net_config, status); > +} > + > +static int virtiovf_read_notify_info(struct virtiovf_pci_core_device *virtvdev) > +{ > + u64 offset; > + int ret; > + u8 bar; > + > + ret = virtio_pci_admin_legacy_io_notify_info(virtvdev->core_device.pdev, > + VIRTIO_ADMIN_CMD_NOTIFY_INFO_FLAGS_OWNER_MEM, > + &bar, &offset); > + if (ret) > + return ret; > + > + virtvdev->notify_bar = bar; > + virtvdev->notify_offset = offset; > + return 0; > +} > + > +static int virtiovf_pci_init_device(struct vfio_device *core_vdev) > +{ > + struct virtiovf_pci_core_device *virtvdev = container_of( > + core_vdev, struct virtiovf_pci_core_device, core_device.vdev); > + struct pci_dev *pdev; > + int ret; > + > + ret = vfio_pci_core_init_dev(core_vdev); > + if (ret) > + return ret; > + > + pdev = virtvdev->core_device.pdev; > + ret = virtiovf_read_notify_info(virtvdev); > + if (ret) > + return ret; > + > + virtvdev->bar0_virtual_buf_size = VIRTIO_PCI_CONFIG_OFF(true) + > + virtiovf_get_device_config_size(pdev->device); > + BUILD_BUG_ON(!is_power_of_2(virtvdev->bar0_virtual_buf_size)); > + virtvdev->bar0_virtual_buf = kzalloc(virtvdev->bar0_virtual_buf_size, > + GFP_KERNEL); > + if (!virtvdev->bar0_virtual_buf) > + return -ENOMEM; > + mutex_init(&virtvdev->bar_mutex); > + return 0; > +} > + > +static void virtiovf_pci_core_release_dev(struct vfio_device *core_vdev) > +{ > + struct virtiovf_pci_core_device *virtvdev = container_of( > + core_vdev, struct virtiovf_pci_core_device, core_device.vdev); > + > + kfree(virtvdev->bar0_virtual_buf); > + vfio_pci_core_release_dev(core_vdev); > +} > + > +static const struct vfio_device_ops virtiovf_vfio_pci_tran_ops = { > + .name = "virtio-vfio-pci-trans", > + .init = virtiovf_pci_init_device, > + .release = virtiovf_pci_core_release_dev, > + .open_device = virtiovf_pci_open_device, > + .close_device = vfio_pci_core_close_device, > + .ioctl = virtiovf_vfio_pci_core_ioctl, > + .device_feature = vfio_pci_core_ioctl_feature, > + .read = virtiovf_pci_core_read, > + .write = virtiovf_pci_core_write, > + .mmap = vfio_pci_core_mmap, > + .request = vfio_pci_core_request, > + .match = vfio_pci_core_match, > + .bind_iommufd = vfio_iommufd_physical_bind, > + .unbind_iommufd = vfio_iommufd_physical_unbind, > + .attach_ioas = vfio_iommufd_physical_attach_ioas, > + .detach_ioas = vfio_iommufd_physical_detach_ioas, > +}; > + > +static const struct vfio_device_ops virtiovf_vfio_pci_ops = { > + .name = "virtio-vfio-pci", > + .init = vfio_pci_core_init_dev, > + .release = vfio_pci_core_release_dev, > + .open_device = virtiovf_pci_open_device, > + .close_device = vfio_pci_core_close_device, > + .ioctl = vfio_pci_core_ioctl, > + .device_feature = vfio_pci_core_ioctl_feature, > + .read = vfio_pci_core_read, > + .write = vfio_pci_core_write, > + .mmap = vfio_pci_core_mmap, > + .request = vfio_pci_core_request, > + .match = vfio_pci_core_match, > + .bind_iommufd = vfio_iommufd_physical_bind, > + .unbind_iommufd = vfio_iommufd_physical_unbind, > + .attach_ioas = vfio_iommufd_physical_attach_ioas, > + .detach_ioas = vfio_iommufd_physical_detach_ioas, > +}; > + > +static bool virtiovf_bar0_exists(struct pci_dev *pdev) > +{ > + struct resource *res = pdev->resource; > + > + return res->flags; 
> +} > + > +static int virtiovf_pci_probe(struct pci_dev *pdev, > + const struct pci_device_id *id) > +{ > + const struct vfio_device_ops *ops = &virtiovf_vfio_pci_ops; > + struct virtiovf_pci_core_device *virtvdev; > + int ret; > + > + if (pdev->is_virtfn && virtio_pci_admin_has_legacy_io(pdev) && > + !virtiovf_bar0_exists(pdev)) > + ops = &virtiovf_vfio_pci_tran_ops; > + > + virtvdev = vfio_alloc_device(virtiovf_pci_core_device, core_device.vdev, > + &pdev->dev, ops); > + if (IS_ERR(virtvdev)) > + return PTR_ERR(virtvdev); > + > + dev_set_drvdata(&pdev->dev, &virtvdev->core_device); > + ret = vfio_pci_core_register_device(&virtvdev->core_device); > + if (ret) > + goto out; > + return 0; > +out: > + vfio_put_device(&virtvdev->core_device.vdev); > + return ret; > +} > + > +static void virtiovf_pci_remove(struct pci_dev *pdev) > +{ > + struct virtiovf_pci_core_device *virtvdev = dev_get_drvdata(&pdev->dev); > + > + vfio_pci_core_unregister_device(&virtvdev->core_device); > + vfio_put_device(&virtvdev->core_device.vdev); > +} > + > +static const struct pci_device_id virtiovf_pci_table[] = { > + /* Only virtio-net is supported/tested so far */ > + { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_REDHAT_QUMRANET, 0x1041) }, > + {} > +}; > + > +MODULE_DEVICE_TABLE(pci, virtiovf_pci_table); > + > +void virtiovf_pci_aer_reset_done(struct pci_dev *pdev) > +{ > + struct virtiovf_pci_core_device *virtvdev = dev_get_drvdata(&pdev->dev); > + > + virtvdev->pci_cmd = 0; > +} > + > +static const struct pci_error_handlers virtiovf_err_handlers = { > + .reset_done = virtiovf_pci_aer_reset_done, > + .error_detected = vfio_pci_core_aer_err_detected, > +}; > + > +static struct pci_driver virtiovf_pci_driver = { > + .name = KBUILD_MODNAME, > + .id_table = virtiovf_pci_table, > + .probe = virtiovf_pci_probe, > + .remove = virtiovf_pci_remove, > + .err_handler = &virtiovf_err_handlers, > + .driver_managed_dma = true, > +}; > + > +module_pci_driver(virtiovf_pci_driver); > + > +MODULE_LICENSE("GPL"); > +MODULE_AUTHOR("Yishai Hadas <yishaih@nvidia.com>"); > +MODULE_DESCRIPTION( > + "VIRTIO VFIO PCI - User Level meta-driver for VIRTIO NET devices");
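For context, a minimal userspace sketch (not part of this patch or of the review) of how a VMM that already holds a VFIO device fd could exercise the emulated BAR 0 described above. It assumes the device fd was obtained through the usual VFIO group/cdev path and that I/O space was already enabled through the emulated PCI command register (the driver returns -EIO for BAR 0 accesses otherwise); the helper name is made up for illustration.

/* Illustrative only: touch the software-emulated legacy I/O BAR 0 */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <linux/virtio_pci.h>

static int poke_emulated_bar0(int device_fd)
{
	struct vfio_region_info info = { .argsz = sizeof(info) };
	uint16_t notify = 0;	/* notify queue index 0 */
	uint8_t status;

	/* The driver reports BAR 0 as an ordinary read/write VFIO region */
	info.index = VFIO_PCI_BAR0_REGION_INDEX;
	if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info))
		return -1;

	/* A read of the legacy device status register is served via admin commands */
	if (pread(device_fd, &status, sizeof(status),
		  info.offset + VIRTIO_PCI_STATUS) != sizeof(status))
		return -1;
	printf("legacy device status: 0x%02x\n", status);

	/* A 2-byte write to the queue notify register is forwarded to the notify BAR */
	if (pwrite(device_fd, &notify, sizeof(notify),
		   info.offset + VIRTIO_PCI_QUEUE_NOTIFY) != sizeof(notify))
		return -1;
	return 0;
}

In a real VMM these accesses would sit behind the guest's trapped port I/O to BAR 0; the point is only that the emulated BAR behaves like any other VFIO region, so no legacy-specific userspace plumbing is required beyond forwarding the guest's BAR 0 accesses.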