Message ID | 20210525180600.6349-10-michael.christie@oracle.com |
---|---|
State | New, archived |
Series | [1/9] vhost: move worker thread fields to new struct |
On Tue, May 25, 2021 at 01:06:00PM -0500, Mike Christie wrote:
> This allows a worker to handle multiple devices' vqs.
>
> TODO:
> - The worker is attached to the cgroup of the device that created it. In
>   this patch you can share workers with devices with different owners which
>   could be in different cgroups. Do we want to restrict sharing workers to
>   devices that have the same owner (dev->mm value)?

Question for Michael or Jason.
On 2021/6/3 10:32 PM, Stefan Hajnoczi wrote:
> On Tue, May 25, 2021 at 01:06:00PM -0500, Mike Christie wrote:
>> This allows a worker to handle multiple devices' vqs.
>>
>> TODO:
>> - The worker is attached to the cgroup of the device that created it. In
>>   this patch you can share workers with devices with different owners which
>>   could be in different cgroups. Do we want to restrict sharing workers to
>>   devices that have the same owner (dev->mm value)?
> Question for Michael or Jason.

I think sharing workers within a cgroup should be fine. The difference is that if we restrict sharing to devices with the same owner, it may only work in the case where a VM has multiple vhost devices.

Thanks
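If the same-owner restriction discussed above were adopted, the natural
place for it is the lookup path this patch touches. A minimal sketch,
assuming the worker keeps a pointer to the mm of the device that created
it; the owner_mm field is hypothetical and not something this patch adds:

/* Hypothetical same-owner check: only reuse a worker when the
 * requesting device shares the creating device's mm, i.e. both
 * devices belong to the same process. owner_mm is an assumed field.
 */
static struct vhost_worker *vhost_worker_find(struct vhost_dev *dev, pid_t pid)
{
	struct vhost_worker *worker, *found_worker = NULL;

	spin_lock(&vhost_workers_lock);
	hash_for_each_possible(vhost_workers, worker, h_node, pid) {
		if (worker->task->pid == pid) {
			if (worker->owner_mm != dev->mm)
				break;	/* different owner: do not share */

			found_worker = worker;
			refcount_inc(&worker->refcount);
			break;
		}
	}
	spin_unlock(&vhost_workers_lock);

	return found_worker;
}

This would also largely avoid cross-cgroup sharing in practice, since one
process's vhost devices normally live in the same cgroup.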
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index eb16eb2bbee0..c32f72b1901c 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -388,12 +388,10 @@ static void vhost_vq_reset(struct vhost_dev *dev,
 static int vhost_worker(void *data)
 {
 	struct vhost_worker *worker = data;
-	struct vhost_dev *dev = worker->dev;
 	struct vhost_work *work, *work_next;
+	struct vhost_dev *dev;
 	struct llist_node *node;
 
-	kthread_use_mm(dev->mm);
-
 	for (;;) {
 		/* mb paired w/ kthread_stop */
 		set_current_state(TASK_INTERRUPTIBLE);
@@ -412,15 +410,20 @@ static int vhost_worker(void *data)
 			smp_wmb();
 			llist_for_each_entry_safe(work, work_next, node, node) {
 				clear_bit(VHOST_WORK_QUEUED, &work->flags);
+				dev = work->dev;
+
+				kthread_use_mm(dev->mm);
+
 				__set_current_state(TASK_RUNNING);
 				kcov_remote_start_common(dev->kcov_handle);
 				work->fn(work);
 				kcov_remote_stop();
 				if (need_resched())
 					schedule();
+
+				kthread_unuse_mm(dev->mm);
 			}
 		}
-	kthread_unuse_mm(dev->mm);
 	return 0;
 }
@@ -667,7 +670,6 @@ static struct vhost_worker *vhost_worker_create(struct vhost_dev *dev)
 		return NULL;
 
 	worker->id = dev->num_workers;
-	worker->dev = dev;
 	init_llist_head(&worker->work_list);
 	INIT_HLIST_NODE(&worker->h_node);
 	refcount_set(&worker->refcount, 1);
@@ -702,10 +704,6 @@ static struct vhost_worker *vhost_worker_find(struct vhost_dev *dev, pid_t pid)
 	spin_lock(&vhost_workers_lock);
 	hash_for_each_possible(vhost_workers, worker, h_node, pid) {
 		if (worker->task->pid == pid) {
-			/* tmp - next patch allows sharing across devs */
-			if (worker->dev != dev)
-				break;
-
 			found_worker = worker;
 			refcount_inc(&worker->refcount);
 			break;
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 75ad3aa5adca..40c400172a84 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -32,7 +32,6 @@ struct vhost_worker {
 	struct llist_head work_list;
 	struct hlist_node h_node;
 	refcount_t refcount;
-	struct vhost_dev *dev;
 	int id;
 };
This allows a worker to handle multiple devices' vqs.

TODO:
- The worker is attached to the cgroup of the device that created it. In
  this patch you can share workers with devices with different owners which
  could be in different cgroups. Do we want to restrict sharing workers to
  devices that have the same owner (dev->mm value)?

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/vhost.c | 16 +++++++---------
 drivers/vhost/vhost.h |  1 -
 2 files changed, 7 insertions(+), 10 deletions(-)
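For readers skimming the diff: the behavioral core of the patch is that
vhost_worker() no longer pins one device's mm for the thread's lifetime
but adopts each work item's owner mm around the callback. Restated as a
simplified loop (kcov and resched handling omitted; work->dev is assumed
to be populated when the work is queued, elsewhere in this series):

	/* A shared worker may service vqs from devices owned by
	 * different processes, so attach the owner's address space
	 * per work item instead of once per worker lifetime.
	 */
	llist_for_each_entry_safe(work, work_next, node, node) {
		clear_bit(VHOST_WORK_QUEUED, &work->flags);
		dev = work->dev;	/* may differ between items */

		kthread_use_mm(dev->mm);
		__set_current_state(TASK_RUNNING);
		work->fn(work);
		kthread_unuse_mm(dev->mm);
	}

The visible cost is a kthread_use_mm()/kthread_unuse_mm() pair per work
item rather than per worker, in exchange for letting one thread serve
multiple devices.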