| Message ID | 20230113171132.86057-1-mjrosato@linux.ibm.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [v3] vfio: fix potential deadlock on vfio group lock |
On Fri, Jan 13, 2023 at 12:11:32PM -0500, Matthew Rosato wrote:

> @@ -462,9 +520,19 @@ static inline void vfio_device_pm_runtime_put(struct vfio_device *device)
>  static int vfio_device_fops_release(struct inode *inode, struct file *filep)
>  {
>  	struct vfio_device *device = filep->private_data;
> +	struct kvm *kvm = NULL;
>
>  	vfio_device_group_close(device);
>
> +	mutex_lock(&device->dev_set->lock);
> +	if (device->open_count == 0 && device->kvm) {
> +		kvm = device->kvm;
> +		device->kvm = NULL;
> +	}
> +	mutex_unlock(&device->dev_set->lock);

This still doesn't seem right; another thread could have incremented
open_count already.

This has to be done at the moment open_count is decremented to zero,
while still under the original lock.

Jason
On 1/13/23 1:52 PM, Jason Gunthorpe wrote:
> On Fri, Jan 13, 2023 at 12:11:32PM -0500, Matthew Rosato wrote:
>> @@ -462,9 +520,19 @@ static inline void vfio_device_pm_runtime_put(struct vfio_device *device)
>>  static int vfio_device_fops_release(struct inode *inode, struct file *filep)
>>  {
>>  	struct vfio_device *device = filep->private_data;
>> +	struct kvm *kvm = NULL;
>>
>>  	vfio_device_group_close(device);
>>
>> +	mutex_lock(&device->dev_set->lock);
>> +	if (device->open_count == 0 && device->kvm) {
>> +		kvm = device->kvm;
>> +		device->kvm = NULL;
>> +	}
>> +	mutex_unlock(&device->dev_set->lock);
>
> This still doesn't seem right; another thread could have incremented
> open_count already.
>
> This has to be done at the moment open_count is decremented to zero,
> while still under the original lock.

Hmm.. Fair. Well, we can go back to clearing device->kvm in
vfio_device_last_close(), but the group lock is held there, so we can't
immediately do the kvm_put at that time -- unless we go back to the
notion of deferring the kvm_put to a workqueue, but now from vfio. If
we do that, I think we also have to scrap the idea of putting the
kvm_put_kvm function pointer into device->put_kvm too (or otherwise
stash it along with the kvm value to be picked up by the scheduled
work).

Another thought would be to have vfio_close_device() /
vfio_device_group_close() pass back the device->open_count that was
read while holding dev_set->lock, as an indicator of whether
vfio_device_last_close() was called. Then you could use the stashed kvm
value: it doesn't matter what's currently in device->kvm or what the
current device->open_count is, you know that kvm reference needs to be
put. e.g.:

	struct kvm *kvm = device->kvm;
	void (*put)(struct kvm *kvm) = device->put_kvm;
	int opened;

	opened = vfio_device_group_close(device);
	if (opened == 0 && kvm)
		put(kvm);
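(For illustration, a rough sketch of the callee side of that idea, assuming vfio_device_close() is the place where open_count is dropped under dev_set->lock; the _sketch name and the return-value plumbing are hypothetical and not part of the posted patch.)

	/*
	 * Hypothetical: report the open_count observed under dev_set->lock so
	 * the fd-release path knows whether vfio_device_last_close() ran.
	 */
	static int vfio_device_close_sketch(struct vfio_device *device,
					    struct iommufd_ctx *iommufd)
	{
		int open_count;

		mutex_lock(&device->dev_set->lock);
		vfio_assert_device_open(device);
		if (device->open_count == 1)
			vfio_device_last_close(device, iommufd);
		device->open_count--;
		open_count = device->open_count;
		mutex_unlock(&device->dev_set->lock);

		return open_count;	/* 0 => last_close ran, caller puts kvm */
	}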
On Fri, Jan 13, 2023 at 03:09:01PM -0500, Matthew Rosato wrote:
> > This still doesn't seem right; another thread could have incremented
> > open_count already.
> >
> > This has to be done at the moment open_count is decremented to zero,
> > while still under the original lock.
>
> Hmm.. Fair. Well, we can go back to clearing device->kvm in
> vfio_device_last_close(), but the group lock is held there, so we
> can't immediately do the kvm_put at that time -- unless we go back to
> the notion of deferring the kvm_put to a workqueue, but now from vfio.
> If we do that, I think we also have to scrap the idea of putting the
> kvm_put_kvm function pointer into device->put_kvm too (or otherwise
> stash it along with the kvm value to be picked up by the scheduled
> work).

Well, you have to keep the same sort of design: vfio_device_last_close()
has to put the kvm on the stack until the group lock is unlocked. It is
messy due to how the functions are nested, but not hard.

Jason
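(A minimal sketch of the nesting Jason describes, assuming the kvm pointer is carried out of the close path on the stack; the _sketch function is illustrative, simplifies the real call layering, and is not the final patch.)

	/*
	 * Illustrative: detach kvm while dev_set->lock is held, but carry the
	 * pointer on the stack and put it only after the group lock and every
	 * other vfio lock have been released.
	 */
	static void vfio_device_group_close_sketch(struct vfio_device *device)
	{
		struct kvm *kvm = NULL;

		mutex_lock(&device->group->group_lock);
		mutex_lock(&device->dev_set->lock);

		if (device->open_count == 1) {
			/* last close: tear down and detach kvm under the locks */
			if (device->ops->close_device)
				device->ops->close_device(device);
			kvm = device->kvm;
			device->kvm = NULL;
		}
		device->open_count--;

		mutex_unlock(&device->dev_set->lock);
		mutex_unlock(&device->group->group_lock);

		if (kvm && device->put_kvm)
			device->put_kvm(kvm);	/* no vfio locks held: no deadlock */
	}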
diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
index bb24b2f0271e..2b0da82f82f4 100644
--- a/drivers/vfio/group.c
+++ b/drivers/vfio/group.c
@@ -165,9 +165,9 @@ static int vfio_device_group_open(struct vfio_device *device)
 	}
 
 	/*
-	 * Here we pass the KVM pointer with the group under the lock.  If the
-	 * device driver will use it, it must obtain a reference and release it
-	 * during close_device.
+	 * Here we pass the KVM pointer with the group under the lock.  A
+	 * reference will be obtained the first time the device is opened and
+	 * will be held until the device fd is closed.
 	 */
 	ret = vfio_device_open(device, device->group->iommufd,
 			       device->group->kvm);
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 5177bb061b17..dbdf16903d52 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -16,6 +16,9 @@
 #include <linux/fs.h>
 #include <linux/idr.h>
 #include <linux/iommu.h>
+#ifdef CONFIG_HAVE_KVM
+#include <linux/kvm_host.h>
+#endif
 #include <linux/list.h>
 #include <linux/miscdevice.h>
 #include <linux/module.h>
@@ -344,6 +347,57 @@ static bool vfio_assert_device_open(struct vfio_device *device)
 	return !WARN_ON_ONCE(!READ_ONCE(device->open_count));
 }
 
+#ifdef CONFIG_HAVE_KVM
+static bool vfio_kvm_get_kvm_safe(struct vfio_device *device, struct kvm *kvm)
+{
+	void (*pfn)(struct kvm *kvm);
+	bool (*fn)(struct kvm *kvm);
+	bool ret;
+
+	pfn = symbol_get(kvm_put_kvm);
+	if (WARN_ON(!pfn))
+		return false;
+
+	fn = symbol_get(kvm_get_kvm_safe);
+	if (WARN_ON(!fn)) {
+		symbol_put(kvm_put_kvm);
+		return false;
+	}
+
+	ret = fn(kvm);
+	if (ret)
+		device->put_kvm = pfn;
+	else
+		symbol_put(kvm_put_kvm);
+
+	symbol_put(kvm_get_kvm_safe);
+
+	return ret;
+}
+
+static void vfio_kvm_put_kvm(struct vfio_device *device, struct kvm *kvm)
+{
+	if (WARN_ON(!device->put_kvm))
+		return;
+
+	device->put_kvm(kvm);
+
+	device->put_kvm = NULL;
+
+	symbol_put(kvm_put_kvm);
+}
+#else
+static bool vfio_kvm_get_kvm_safe(struct vfio_device *device, struct kvm *kvm)
+{
+	return false;
+}
+
+static void vfio_kvm_put_kvm(struct vfio_device *device, struct kvm *kvm)
+{
+
+}
+#endif
+
 static int vfio_device_first_open(struct vfio_device *device,
 				  struct iommufd_ctx *iommufd, struct kvm *kvm)
 {
@@ -361,16 +415,21 @@ static int vfio_device_first_open(struct vfio_device *device,
 	if (ret)
 		goto err_module_put;
 
-	device->kvm = kvm;
+	if (kvm && vfio_kvm_get_kvm_safe(device, kvm))
+		device->kvm = kvm;
+
 	if (device->ops->open_device) {
 		ret = device->ops->open_device(device);
 		if (ret)
-			goto err_unuse_iommu;
+			goto err_put_kvm;
 	}
 	return 0;
 
-err_unuse_iommu:
-	device->kvm = NULL;
+err_put_kvm:
+	if (device->kvm) {
+		vfio_kvm_put_kvm(device, device->kvm);
+		device->kvm = NULL;
+	}
 	if (iommufd)
 		vfio_iommufd_unbind(device);
 	else
@@ -387,7 +446,6 @@ static void vfio_device_last_close(struct vfio_device *device,
 
 	if (device->ops->close_device)
 		device->ops->close_device(device);
-	device->kvm = NULL;
 	if (iommufd)
 		vfio_iommufd_unbind(device);
 	else
@@ -462,9 +520,19 @@ static inline void vfio_device_pm_runtime_put(struct vfio_device *device)
 static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 {
 	struct vfio_device *device = filep->private_data;
+	struct kvm *kvm = NULL;
 
 	vfio_device_group_close(device);
 
+	mutex_lock(&device->dev_set->lock);
+	if (device->open_count == 0 && device->kvm) {
+		kvm = device->kvm;
+		device->kvm = NULL;
+	}
+	mutex_unlock(&device->dev_set->lock);
+	if (kvm)
+		vfio_kvm_put_kvm(device, kvm);
+
 	vfio_device_put_registration(device);
 
 	return 0;
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 35be78e9ae57..87ff862ff555 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -46,7 +46,6 @@ struct vfio_device {
 	struct vfio_device_set	*dev_set;
 	struct list_head	dev_set_list;
 	unsigned int		migration_flags;
-	/* Driver must reference the kvm during open_device or never touch it */
 	struct kvm		*kvm;
 
 	/* Members below here are private, not for driver use */
@@ -58,6 +57,7 @@ struct vfio_device {
 	struct list_head	group_next;
 	struct list_head	iommu_entry;
 	struct iommufd_access	*iommufd_access;
+	void (*put_kvm)(struct kvm *kvm);
 #if IS_ENABLED(CONFIG_IOMMUFD)
 	struct iommufd_device	*iommufd_device;
 	struct iommufd_ctx	*iommufd_ictx;
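(A brief aside on the symbol_get()/symbol_put() pattern the new helpers rely on: symbol_get() resolves an exported symbol at run time, returning NULL if the module providing it is not loaded, and pins that module until the matching symbol_put(), which is what lets vfio avoid a hard link-time dependency on kvm. The standalone example below is illustrative only and not part of the patch.)

	#include <linux/module.h>
	#include <linux/kvm_host.h>

	/*
	 * Illustrative: briefly take and drop a kvm reference through
	 * run-time resolved symbols, without linking against kvm.
	 */
	static bool example_vm_still_alive(struct kvm *kvm)
	{
		bool (*get_safe)(struct kvm *kvm);
		void (*put)(struct kvm *kvm);
		bool alive = false;

		get_safe = symbol_get(kvm_get_kvm_safe);
		put = symbol_get(kvm_put_kvm);

		if (get_safe && put && get_safe(kvm)) {
			alive = true;	/* reference taken successfully */
			put(kvm);	/* drop it again for this example */
		}

		if (put)
			symbol_put(kvm_put_kvm);
		if (get_safe)
			symbol_put(kvm_get_kvm_safe);

		return alive;
	}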
Currently it is possible that the final put of a KVM reference comes from
vfio during its device close operation. This occurs while the vfio group
lock is held; however, if the vfio device is still in the kvm device list,
then the following call chain could result in a deadlock:

kvm_put_kvm
 -> kvm_destroy_vm
  -> kvm_destroy_devices
   -> kvm_vfio_destroy
    -> kvm_vfio_file_set_kvm
     -> vfio_file_set_kvm
      -> group->group_lock/group_rwsem

Avoid this scenario by having vfio core code acquire a KVM reference the
first time a device is opened and hold that reference until the device fd
is closed, at a point after the group lock has been released.

Fixes: 421cfe6596f6 ("vfio: remove VFIO_GROUP_NOTIFY_SET_KVM")
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
---
Changes from v2:
* Re-arrange vfio_kvm_set_kvm_safe error path to still trigger
  device_open with device->kvm=NULL (Alex)
* get device->dev_set->lock when checking device->open_count (Alex)
* but don't hold it over the kvm_put_kvm (Jason)
* get kvm_put symbol upfront and stash it in device until close (Jason)
* check CONFIG_HAVE_KVM to avoid build errors on architectures without
  KVM support

Changes from v1:
* Re-write using symbol get logic to get kvm ref during first device
  open, release the ref during device fd close after group lock is
  released
* Drop kvm get/put changes to drivers; now that vfio core holds a kvm
  ref until sometime after the device_close op is called, it should be
  fine for drivers to get and put their own references to it.
---
 drivers/vfio/group.c     |  6 ++--
 drivers/vfio/vfio_main.c | 78 +++++++++++++++++++++++++++++++++++++---
 include/linux/vfio.h     |  2 +-
 3 files changed, 77 insertions(+), 9 deletions(-)
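(For reference, the lock acquisition that closes the cycle described in the commit message above looks roughly like the simplified sketch below; depending on kernel version the lock is group_rwsem or group_lock, and the real vfio_file_set_kvm() additionally checks that the file is a vfio group file.)

	/*
	 * Simplified: vfio_file_set_kvm() takes the same group lock that the
	 * device close path already holds, so a final kvm_put_kvm() issued
	 * under that lock recurses back onto it via kvm_vfio_destroy().
	 */
	void vfio_file_set_kvm(struct file *file, struct kvm *kvm)
	{
		struct vfio_group *group = file->private_data;

		mutex_lock(&group->group_lock);
		group->kvm = kvm;
		mutex_unlock(&group->group_lock);
	}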