Message ID | 20230321154804.184577-4-sgarzare@redhat.com (mailing list archive)
-----------|-------------------------------------------------------------------
State      | New, archived
Series     | vdpa_sim: add support for user VA
On Tue, Mar 21, 2023 at 11:48 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> The new "use_va" module parameter (default: true) is used in
> vdpa_alloc_device() to inform the vDPA framework that the device
> supports VA.
>
> vringh is initialized to use VA only when "use_va" is true and the
> user's mm has been bound. So, only when the bus supports user VA
> (e.g. vhost-vdpa).
>
> vdpasim_mm_work_fn work is used to serialize the binding to a new
> address space when the .bind_mm callback is invoked, and unbinding
> when the .unbind_mm callback is invoked.
>
> Call mmget_not_zero()/kthread_use_mm() inside the worker function
> to pin the address space only as long as needed, following the
> documentation of mmget() in include/linux/sched/mm.h:
>
>  * Never use this function to pin this address space for an
>  * unbounded/indefinite amount of time.

I wonder if everything would be simplified if we just allow the parent
to advertise whether or not it requires the address space.

Then when vhost-vDPA probes the device it can simply advertise
use_work as true so vhost core can use get_task_mm() in this case?

Thanks

> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> ---
>
> [...]
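To make this alternative concrete, a hypothetical sketch of a parent
advertising that it needs the owner's address space so the bus can take it
with get_task_mm(). The flag name and helper below are invented purely for
illustration; only get_task_mm()/mmput() are real kernel APIs, and nothing
like this exists in the posted series:

#include <linux/sched/mm.h>

/* Invented flag: the parent says "I consume user VA, give me the mm". */
struct hypothetical_vdpa_device {
	bool need_mm;
};

/*
 * Hypothetical bus-side logic at ownership time: take a full reference
 * on the caller's address space only if the parent asked for it.
 * get_task_mm() uses mmget() underneath, which is exactly the
 * long-term-pinning concern raised in the reply below.
 */
static struct mm_struct *
example_take_owner_mm(struct hypothetical_vdpa_device *vdpa)
{
	if (!vdpa->need_mm)
		return NULL;

	/* NULL if current has no mm (e.g. a kernel thread). */
	return get_task_mm(current);
}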
On Thu, Mar 23, 2023 at 11:42:07AM +0800, Jason Wang wrote:
> On Tue, Mar 21, 2023 at 11:48 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
> >
> > [...]
> >
> > Call mmget_not_zero()/kthread_use_mm() inside the worker function
> > to pin the address space only as long as needed, following the
> > documentation of mmget() in include/linux/sched/mm.h:
> >
> >  * Never use this function to pin this address space for an
> >  * unbounded/indefinite amount of time.
>
> I wonder if everything would be simplified if we just allow the parent
> to advertise whether or not it requires the address space.
>
> Then when vhost-vDPA probes the device it can simply advertise
> use_work as true so vhost core can use get_task_mm() in this case?

IIUC setting user_worker to true also creates the kthread in the vhost
core (but we can add another variable to avoid this).

My biggest concern is the comment in include/linux/sched/mm.h.
get_task_mm() uses mmget(), but the documentation advises against
pinning the address space indefinitely, so I preferred keeping
mmgrab() in the vhost core and calling mmget_not_zero() in the worker
only while it is running.

In the future maybe the mm will be used differently by the parent if
somehow it is supported by the iommu, so I would leave it to the parent
to handle this.

Thanks,
Stefano
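For readers outside the thread, the distinction Stefano leans on can be
condensed into a small sketch (minimal and self-contained, not the actual
vhost code): mmgrab() pins only the mm_struct allocation and is safe to hold
long-term, while mmget_not_zero() briefly revives a full mm_users reference
so the page tables can actually be used:

#include <linux/errno.h>
#include <linux/kthread.h>
#include <linux/sched/mm.h>

/*
 * Long-term holder (e.g. the vhost core at VHOST_SET_OWNER time):
 * mmgrab() keeps the mm_struct allocation alive but does NOT pin the
 * address space, so holding it indefinitely is fine.  Pair with
 * mmdrop() at release time.
 */
static void hold_mm_long_term(struct mm_struct *mm)
{
	mmgrab(mm);
}

/*
 * Short-term user (e.g. a worker about to touch user memory): take a
 * real mm_users reference only for the duration of the work.
 */
static int run_in_user_mm(struct mm_struct *mm, void (*fn)(void *), void *arg)
{
	if (!mmget_not_zero(mm))
		return -ESRCH;	/* the owner already exited */

	kthread_use_mm(mm);
	fn(arg);
	kthread_unuse_mm(mm);

	mmput(mm);		/* drop the temporary reference */
	return 0;
}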
On Thu, Mar 23, 2023 at 10:50:06AM +0100, Stefano Garzarella wrote:
> On Thu, Mar 23, 2023 at 11:42:07AM +0800, Jason Wang wrote:
> > [...]
> >
> > Then when vhost-vDPA probes the device it can simply advertise
> > use_work as true so vhost core can use get_task_mm() in this case?
>
> [...]
>
> In the future maybe the mm will be used differently by the parent if
> somehow it is supported by the iommu, so I would leave it to the parent
> to handle this.
>
> Thanks,
> Stefano

I think iommufd is supposed to handle all this detail, yes.
On Thu, Mar 23, 2023 at 5:50 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> On Thu, Mar 23, 2023 at 11:42:07AM +0800, Jason Wang wrote:
> > [...]
>
> IIUC setting user_worker to true also creates the kthread in the vhost
> core (but we can add another variable to avoid this).
>
> My biggest concern is the comment in include/linux/sched/mm.h.
> get_task_mm() uses mmget(), but the documentation advises against
> pinning the address space indefinitely, so I preferred keeping
> mmgrab() in the vhost core and calling mmget_not_zero() in the worker
> only while it is running.

Ok.

> In the future maybe the mm will be used differently by the parent if
> somehow it is supported by the iommu, so I would leave it to the parent
> to handle this.

This should be possible, I was told by Intel that their IOMMU can
access the process page table for shared virtual memory.

Thanks
On 2023/3/21 23:48, Stefano Garzarella wrote:
> The new "use_va" module parameter (default: true) is used in
> vdpa_alloc_device() to inform the vDPA framework that the device
> supports VA.
>
> [...]
>
> @@ -130,8 +166,20 @@ static const struct vdpa_config_ops vdpasim_batch_config_ops;
>  static void vdpasim_work_fn(struct kthread_work *work)
>  {
>  	struct vdpasim *vdpasim = container_of(work, struct vdpasim, work);
> +	struct mm_struct *mm = vdpasim->mm_bound;
> +
> +	if (mm) {
> +		if (!mmget_not_zero(mm))
> +			return;

Do we need to check use_va here?

Other than this,

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> +		kthread_use_mm(mm);
> +	}
>
>  	vdpasim->dev_attr.work_fn(vdpasim);
>
> [...]
On Fri, Mar 24, 2023 at 10:54:39AM +0800, Jason Wang wrote:
> On Thu, Mar 23, 2023 at 5:50 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
> > [...]
> >
> > In the future maybe the mm will be used differently by the parent if
> > somehow it is supported by the iommu, so I would leave it to the parent
> > to handle this.
>
> This should be possible, I was told by Intel that their IOMMU can
> access the process page table for shared virtual memory.

Cool, we should investigate this. Do you have any pointers to their
documentation?

Thanks,
Stefano
On Fri, Mar 24, 2023 at 11:49:32AM +0800, Jason Wang wrote:
>
> On 2023/3/21 23:48, Stefano Garzarella wrote:
> > [...]
> >
> >  static void vdpasim_work_fn(struct kthread_work *work)
> >  {
> >  	struct vdpasim *vdpasim = container_of(work, struct vdpasim, work);
> > +	struct mm_struct *mm = vdpasim->mm_bound;
> > +
> > +	if (mm) {
> > +		if (!mmget_not_zero(mm))
> > +			return;
>
> Do we need to check use_va here?

Yep, right!

> Other than this,
>
> Acked-by: Jason Wang <jasowang@redhat.com>

Thanks for the reviews,
Stefano
On Fri, Mar 24, 2023 at 10:43 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> On Fri, Mar 24, 2023 at 10:54:39AM +0800, Jason Wang wrote:
> > [...]
> >
> > This should be possible, I was told by Intel that their IOMMU can
> > access the process page table for shared virtual memory.
>
> Cool, we should investigate this. Do you have any pointers to their
> documentation?

The vtd-spec I think.

Thanks
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.h b/drivers/vdpa/vdpa_sim/vdpa_sim.h
index 4774292fba8c..3a42887d05d9 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.h
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.h
@@ -59,6 +59,7 @@ struct vdpasim {
 	struct vdpasim_virtqueue *vqs;
 	struct kthread_worker *worker;
 	struct kthread_work work;
+	struct mm_struct *mm_bound;
 	struct vdpasim_dev_attr dev_attr;
 	/* mutex to synchronize virtqueue state */
 	struct mutex mutex;
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index ab4cfb82c237..23c891cdcd54 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -35,10 +35,44 @@ module_param(max_iotlb_entries, int, 0444);
 MODULE_PARM_DESC(max_iotlb_entries,
 		 "Maximum number of iotlb entries for each address space. 0 means unlimited. (default: 2048)");
 
+static bool use_va = true;
+module_param(use_va, bool, 0444);
+MODULE_PARM_DESC(use_va, "Enable/disable the device's ability to use VA");
+
 #define VDPASIM_QUEUE_ALIGN PAGE_SIZE
 #define VDPASIM_QUEUE_MAX 256
 #define VDPASIM_VENDOR_ID 0
 
+struct vdpasim_mm_work {
+	struct kthread_work work;
+	struct vdpasim *vdpasim;
+	struct mm_struct *mm_to_bind;
+	int ret;
+};
+
+static void vdpasim_mm_work_fn(struct kthread_work *work)
+{
+	struct vdpasim_mm_work *mm_work =
+		container_of(work, struct vdpasim_mm_work, work);
+	struct vdpasim *vdpasim = mm_work->vdpasim;
+
+	mm_work->ret = 0;
+
+	//TODO: should we attach the cgroup of the mm owner?
+	vdpasim->mm_bound = mm_work->mm_to_bind;
+}
+
+static void vdpasim_worker_change_mm_sync(struct vdpasim *vdpasim,
+					  struct vdpasim_mm_work *mm_work)
+{
+	struct kthread_work *work = &mm_work->work;
+
+	kthread_init_work(work, vdpasim_mm_work_fn);
+	kthread_queue_work(vdpasim->worker, work);
+
+	kthread_flush_work(work);
+}
+
 static struct vdpasim *vdpa_to_sim(struct vdpa_device *vdpa)
 {
 	return container_of(vdpa, struct vdpasim, vdpa);
@@ -59,8 +93,10 @@ static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx)
 {
 	struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx];
 	uint16_t last_avail_idx = vq->vring.last_avail_idx;
+	bool va_enabled = use_va && vdpasim->mm_bound;
 
-	vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, true, false,
+	vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, true,
+			  va_enabled,
 			  (struct vring_desc *)(uintptr_t)vq->desc_addr,
 			  (struct vring_avail *)
 			  (uintptr_t)vq->driver_addr,
@@ -130,8 +166,20 @@ static const struct vdpa_config_ops vdpasim_batch_config_ops;
 static void vdpasim_work_fn(struct kthread_work *work)
 {
 	struct vdpasim *vdpasim = container_of(work, struct vdpasim, work);
+	struct mm_struct *mm = vdpasim->mm_bound;
+
+	if (mm) {
+		if (!mmget_not_zero(mm))
+			return;
+		kthread_use_mm(mm);
+	}
 
 	vdpasim->dev_attr.work_fn(vdpasim);
+
+	if (mm) {
+		kthread_unuse_mm(mm);
+		mmput(mm);
+	}
 }
 
 struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
@@ -162,7 +210,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 	vdpa = __vdpa_alloc_device(NULL, ops,
 				   dev_attr->ngroups, dev_attr->nas,
 				   dev_attr->alloc_size,
-				   dev_attr->name, false);
+				   dev_attr->name, use_va);
 	if (IS_ERR(vdpa)) {
 		ret = PTR_ERR(vdpa);
 		goto err_alloc;
@@ -582,6 +630,30 @@ static int vdpasim_set_map(struct vdpa_device *vdpa, unsigned int asid,
 	return ret;
 }
 
+static int vdpasim_bind_mm(struct vdpa_device *vdpa, struct mm_struct *mm)
+{
+	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+	struct vdpasim_mm_work mm_work;
+
+	mm_work.vdpasim = vdpasim;
+	mm_work.mm_to_bind = mm;
+
+	vdpasim_worker_change_mm_sync(vdpasim, &mm_work);
+
+	return mm_work.ret;
+}
+
+static void vdpasim_unbind_mm(struct vdpa_device *vdpa)
+{
+	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+	struct vdpasim_mm_work mm_work;
+
+	mm_work.vdpasim = vdpasim;
+	mm_work.mm_to_bind = NULL;
+
+	vdpasim_worker_change_mm_sync(vdpasim, &mm_work);
+}
+
 static int vdpasim_dma_map(struct vdpa_device *vdpa, unsigned int asid,
 			   u64 iova, u64 size,
 			   u64 pa, u32 perm, void *opaque)
@@ -678,6 +750,8 @@ static const struct vdpa_config_ops vdpasim_config_ops = {
 	.set_group_asid         = vdpasim_set_group_asid,
 	.dma_map                = vdpasim_dma_map,
 	.dma_unmap              = vdpasim_dma_unmap,
+	.bind_mm                = vdpasim_bind_mm,
+	.unbind_mm              = vdpasim_unbind_mm,
 	.free                   = vdpasim_free,
 };
 
@@ -712,6 +786,8 @@ static const struct vdpa_config_ops vdpasim_batch_config_ops = {
 	.get_iova_range         = vdpasim_get_iova_range,
 	.set_group_asid         = vdpasim_set_group_asid,
 	.set_map                = vdpasim_set_map,
+	.bind_mm                = vdpasim_bind_mm,
+	.unbind_mm              = vdpasim_unbind_mm,
 	.free                   = vdpasim_free,
 };
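For context on how the two new callbacks are meant to be consumed, a sketch
of a bus-side caller follows. The helper names are invented; only the
.bind_mm/.unbind_mm signatures and the vdpa->config ops pointer come from the
patch and the vdpa core, so treat this as illustration rather than the actual
vhost-vdpa code:

/*
 * Illustrative only: how a bus driver such as vhost-vdpa could hand the
 * owner's mm to the parent.  bind_mm/unbind_mm are optional, so a parent
 * that does not support user VA is simply left alone.
 */
static int example_bind_owner_mm(struct vdpa_device *vdpa,
				 struct mm_struct *mm)
{
	const struct vdpa_config_ops *ops = vdpa->config;

	if (!ops->bind_mm)
		return 0;

	return ops->bind_mm(vdpa, mm);
}

static void example_unbind_owner_mm(struct vdpa_device *vdpa)
{
	const struct vdpa_config_ops *ops = vdpa->config;

	if (ops->unbind_mm)
		ops->unbind_mm(vdpa);
}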
The new "use_va" module parameter (default: true) is used in vdpa_alloc_device() to inform the vDPA framework that the device supports VA. vringh is initialized to use VA only when "use_va" is true and the user's mm has been bound. So, only when the bus supports user VA (e.g. vhost-vdpa). vdpasim_mm_work_fn work is used to serialize the binding to a new address space when the .bind_mm callback is invoked, and unbinding when the .unbind_mm callback is invoked. Call mmget_not_zero()/kthread_use_mm() inside the worker function to pin the address space only as long as needed, following the documentation of mmget() in include/linux/sched/mm.h: * Never use this function to pin this address space for an * unbounded/indefinite amount of time. Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> --- Notes: v3: - called mmget_not_zero() before kthread_use_mm() [Jason] As the documentation of mmget() in include/linux/sched/mm.h says: * Never use this function to pin this address space for an * unbounded/indefinite amount of time. I moved mmget_not_zero/kthread_use_mm inside the worker function, this way we pin the address space only as long as needed. This is similar to what vfio_iommu_type1_dma_rw_chunk() does in drivers/vfio/vfio_iommu_type1.c - simplified the mm bind/unbind [Jason] - renamed vdpasim_worker_change_mm_sync() [Jason] - fix commit message (s/default: false/default: true) v2: - `use_va` set to true by default [Eugenio] - supported the new unbind_mm callback [Jason] - removed the unbind_mm call in vdpasim_do_reset() [Jason] - avoided to release the lock while call kthread_flush_work() since we are now using a mutex to protect the device state drivers/vdpa/vdpa_sim/vdpa_sim.h | 1 + drivers/vdpa/vdpa_sim/vdpa_sim.c | 80 +++++++++++++++++++++++++++++++- 2 files changed, 79 insertions(+), 2 deletions(-)