Message ID | 1605223150-10888-1-git-send-email-michael.christie@oracle.com (mailing list archive)
---|---
Series | vhost/qemu: thread per IO SCSI vq
On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
> The following kernel patches were made over Michael's vhost branch:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
>
> and the vhost-scsi bug fix patchset:
>
> https://lore.kernel.org/linux-scsi/20201112170008.GB1555653@stefanha-x1.localdomain/T/#t
>
> And the qemu patch was made over the qemu master branch.
>
> vhost-scsi currently supports multiple queues with the num_queues
> setting, but we end up with a setup where the guest's scsi/block
> layer can do a queue per vCPU and the layers below vhost can do
> a queue per CPU. vhost-scsi will then do num_queues virtqueues,
> but all IO gets sent on and completed on a single vhost-scsi thread.
> After 2 - 4 vqs this becomes a bottleneck.
>
> This patchset allows us to create a worker thread per IO vq, so we
> can better utilize multiple CPUs with the multiple queues. It
> implements Jason's suggestion to create the initial worker like
> normal, then create the extra workers for IO vqs with the
> VHOST_SET_VRING_ENABLE ioctl command added in this patchset.

How does userspace find out the tids and set their CPU affinity?

What is the meaning of the new VHOST_SET_VRING_ENABLE ioctl? It doesn't
really "enable" or "disable" the vq; requests are processed regardless.

The purpose of the ioctl isn't clear to me because the kernel could
automatically create 1 thread per vq without a new ioctl. On the other
hand, if userspace is supposed to control worker threads then a
different interface would be more powerful:

struct vhost_vq_worker_info {
	/*
	 * The pid of an existing vhost worker that this vq will be
	 * assigned to. When pid is 0 the virtqueue is assigned to the
	 * default vhost worker. When pid is -1 a new worker thread is
	 * created for this virtqueue. When pid is -2 the virtqueue's
	 * worker thread is unchanged.
	 *
	 * If a vhost worker no longer has any virtqueues assigned to it
	 * then it will terminate.
	 *
	 * The pid of the vhost worker is stored to this field when the
	 * ioctl completes successfully. Use pid -2 to query the current
	 * vhost worker pid.
	 */
	__kernel_pid_t pid;  /* in/out */

	/* The virtqueue index */
	unsigned int vq_idx; /* in */
};

ioctl(vhost_fd, VHOST_SET_VQ_WORKER, &info);

Stefan
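To make the proposal above concrete, here is a rough sketch of how userspace might drive such an interface. Only the struct comes from the proposal; the ioctl request number and the idea of feeding the returned pid to an affinity policy are assumptions for illustration, not an existing kernel API.

/*
 * Hypothetical usage of the proposed VHOST_SET_VQ_WORKER interface.
 * 0xAF is the vhost ioctl magic; the request number 0x70 is made up.
 */
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>
#include <linux/types.h>

struct vhost_vq_worker_info {
	__kernel_pid_t pid;	/* in/out: 0 = default worker, -1 = new
				 * worker, -2 = leave unchanged/query */
	unsigned int vq_idx;	/* in: virtqueue index */
};

#define VHOST_SET_VQ_WORKER _IOWR(0xAF, 0x70, struct vhost_vq_worker_info)

/* Give each IO vq its own worker and report the resulting tids. */
static int assign_workers(int vhost_fd, unsigned int first_io_vq,
			  unsigned int num_vqs)
{
	unsigned int i;

	for (i = first_io_vq; i < num_vqs; i++) {
		struct vhost_vq_worker_info info = {
			.pid = -1,	/* create a dedicated worker */
			.vq_idx = i,
		};

		if (ioctl(vhost_fd, VHOST_SET_VQ_WORKER, &info) < 0)
			return -1;

		/* info.pid now holds the worker's tid; an affinity
		 * policy could be applied to it from here. */
		printf("vq %u -> worker tid %d\n", i, (int)info.pid);
	}
	return 0;
}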
On 11/17/20 10:40 AM, Stefan Hajnoczi wrote:
> On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
>> [...]
>
> How does userspace find out the tids and set their CPU affinity?

When we create the worker thread we add it to the device owner's cgroup,
so we end up inheriting those settings like affinity.

However, are you more asking about finer control like if the guest is
doing mq, and the mq hw queue is bound to cpu0, it would perform better
if we could bind the vhost vq's worker thread to cpu0? I think the
problem might be that if you are in the cgroup then we can't set a
specific thread's CPU affinity to just one specific CPU. So you can
either do cgroups or not.

> What is the meaning of the new VHOST_SET_VRING_ENABLE ioctl? It doesn't
> really "enable" or "disable" the vq, requests are processed regardless.

Yeah, I agree. The problem I've mentioned before is:

1. For net and vsock, it's not useful because the vqs are hard coded in
the kernel and userspace, so you can't disable a vq and you never need
to enable one.

2. vdpa has its own enable ioctl.

3. For scsi, because we already are doing multiple vqs based on the
num_queues value, we have to have some sort of compat support and code
to detect if userspace is even going to send the new ioctl. In this
patchset, compat just meant enable/disable the extra functionality of
extra worker threads for a vq. We will still use the vq if userspace
set it up.

> The purpose of the ioctl isn't clear to me because the kernel could
> automatically create 1 thread per vq without a new ioctl. On the other
> hand, if userspace is supposed to control worker threads then a
> different interface would be more powerful:

My preference has been:

1. If we were to ditch cgroups, then add a new interface that would
allow us to bind threads to a specific CPU, so that it lines up with
the guest's mq to CPU mapping.

2. If we continue with cgroups then I think just creating the worker
threads from vhost_scsi_set_endpoint is best, because that is the point
we do the other final vq setup ops vhost_vq_set_backend and
vhost_vq_init_access.

For option number 2 it would be simple. Instead of the vring enable
patches:

[PATCH 08/10] vhost: move msg_handler to new ops struct
[PATCH 09/10] vhost: add VHOST_SET_VRING_ENABLE support
[PATCH 10/10] vhost-scsi: create a worker per IO vq

and

[PATCH 1/1] qemu vhost scsi: add VHOST_SET_VRING_ENABLE support

we could do this patch like I had done in previous versions:

From bcc4c29c28daf04679ce6566d06845b9e1b31eb4 Mon Sep 17 00:00:00 2001
From: Mike Christie <michael.christie@oracle.com>
Date: Wed, 11 Nov 2020 22:50:56 -0600
Subject: vhost scsi: multiple worker support

This patch creates a worker per IO vq to fix an issue where after 2 vqs
and/or multiple luns the single worker thread becomes a bottleneck due
to the multiple queues/luns trying to execute/complete their IO on the
same thread/CPU. This patch allows us to better match the guest and
lower levels multiqueue setups.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c | 41 ++++++++++++++++++++++++++++++++---------
 1 file changed, 32 insertions(+), 9 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 44c108a..2c119d3 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1640,9 +1640,18 @@ static int vhost_scsi_setup_vq_cmds(struct vhost_virtqueue *vq, int max_cmds)
 		vq = &vs->vqs[i].vq;
 		if (!vhost_vq_is_setup(vq))
 			continue;
+		/*
+		 * For compat, we have the evt, ctl and first IO vq
+		 * share worker0 like is setup by default. Additional
+		 * vqs get their own worker.
+		 */
+		if (i > VHOST_SCSI_VQ_IO) {
+			if (vhost_vq_worker_add(&vs->dev, vq))
+				goto cleanup_vq;
+		}

 		if (vhost_scsi_setup_vq_cmds(vq, vq->num))
-			goto destroy_vq_cmds;
+			goto cleanup_vq;
 	}

 	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
@@ -1666,10 +1675,14 @@ static int vhost_scsi_setup_vq_cmds(struct vhost_virtqueue *vq, int max_cmds)
 	vs->vs_tpg = vs_tpg;
 	goto out;

-destroy_vq_cmds:
-	for (i--; i >= VHOST_SCSI_VQ_IO; i--) {
-		if (!vhost_vq_get_backend(&vs->vqs[i].vq))
-			vhost_scsi_destroy_vq_cmds(&vs->vqs[i].vq);
+cleanup_vq:
+	for (; i >= VHOST_SCSI_VQ_IO; i--) {
+		if (vhost_vq_get_backend(&vs->vqs[i].vq))
+			continue;
+
+		if (i > VHOST_SCSI_VQ_IO)
+			vhost_vq_worker_remove(&vs->dev, &vs->vqs[i].vq);
+		vhost_scsi_destroy_vq_cmds(&vs->vqs[i].vq);
 	}
 undepend:
 	for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) {
@@ -1752,14 +1765,24 @@ static int vhost_scsi_setup_vq_cmds(struct vhost_virtqueue *vq, int max_cmds)
 		mutex_lock(&vq->mutex);
 		vhost_vq_set_backend(vq, NULL);
 		mutex_unlock(&vq->mutex);
+	}
+	vhost_scsi_flush(vs);
+
+	for (i = VHOST_SCSI_VQ_IO; i < VHOST_SCSI_MAX_VQ; i++) {
+		vq = &vs->vqs[i].vq;
+		if (!vhost_vq_is_setup(vq))
+			continue;
 		/*
-		 * Make sure cmds are not running before tearing them
-		 * down.
-		 */
-		vhost_scsi_flush(vs);
+		 * We only remove the extra workers we created in case
+		 * this is for a reboot. The default worker will be
+		 * removed at dev cleanup.
+		 */
+		if (i > VHOST_SCSI_VQ_IO)
+			vhost_vq_worker_remove(&vs->dev, vq);
 		vhost_scsi_destroy_vq_cmds(vq);
 	}
 }
+
 	/*
 	 * Act as synchronize_rcu to make sure access to
 	 * old vs->vs_tpg is finished.
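The vhost_vq_worker_add()/_remove() helpers the patch above calls are introduced elsewhere in the series. A minimal sketch of what such helpers could look like follows; the vhost_worker layout, the vq->worker field, and the cgroup handling are assumptions for illustration, not the actual code.

/*
 * Hedged sketch of per-vq worker add/remove helpers for
 * drivers/vhost/vhost.c; names and fields are assumptions.
 */
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/cgroup.h>
#include <linux/llist.h>

struct vhost_worker {
	struct task_struct	*task;
	struct llist_head	work_list;
};

static int vhost_vq_worker_add(struct vhost_dev *dev,
			       struct vhost_virtqueue *vq)
{
	struct vhost_worker *worker;
	int ret;

	worker = kzalloc(sizeof(*worker), GFP_KERNEL);
	if (!worker)
		return -ENOMEM;

	init_llist_head(&worker->work_list);
	/* Run the same loop the default worker runs, but against this
	 * worker's work_list (assumes vhost_worker() is parameterized). */
	worker->task = kthread_create(vhost_worker, worker, "vhost-%d",
				      current->pid);
	if (IS_ERR(worker->task)) {
		ret = PTR_ERR(worker->task);
		kfree(worker);
		return ret;
	}

	/* Inherit the device owner's cgroups, matching what
	 * VHOST_SET_OWNER does for the default worker. */
	ret = cgroup_attach_task_all(current, worker->task);
	if (ret) {
		kthread_stop(worker->task);
		kfree(worker);
		return ret;
	}

	vq->worker = worker;
	wake_up_process(worker->task);
	return 0;
}

static void vhost_vq_worker_remove(struct vhost_dev *dev,
				   struct vhost_virtqueue *vq)
{
	/* A flush would go here so queued work is not lost; the exact
	 * flush helper depends on the rest of the series. */
	kthread_stop(vq->worker->task);
	kfree(vq->worker);
	vq->worker = NULL;
}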
On 2020/11/18 12:40 AM, Stefan Hajnoczi wrote:
> On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
>> [...]
>
> How does userspace find out the tids and set their CPU affinity?
>
> What is the meaning of the new VHOST_SET_VRING_ENABLE ioctl? It doesn't
> really "enable" or "disable" the vq, requests are processed regardless.

Actually I think it should do the real "enable/disable" that tries to
follow the virtio spec. (E.g. both PCI and MMIO have something similar.)

> The purpose of the ioctl isn't clear to me because the kernel could
> automatically create 1 thread per vq without a new ioctl.

It's not necessary to create or destroy kthreads according to
VRING_ENABLE, but it could be a hint.

> On the other hand, if userspace is supposed to control worker threads
> then a different interface would be more powerful:
>
> [...]
>
> ioctl(vhost_fd, VHOST_SET_VQ_WORKER, &info);

This seems to leave the question to userspace, which I'm not sure is
good since it tries to introduce another scheduling layer.

A per-vq worker seems good enough to start with.

Thanks
On 11/17/20 11:17 PM, Jason Wang wrote:
> On 2020/11/18 12:40 AM, Stefan Hajnoczi wrote:
>> [...]
>>
>> What is the meaning of the new VHOST_SET_VRING_ENABLE ioctl? It doesn't
>> really "enable" or "disable" the vq, requests are processed regardless.
>
> Actually I think it should do the real "enable/disable" that tries to
> follow the virtio spec.

What does real mean here? For the vdpa enable call for example, would it
be like ifcvf_vdpa_set_vq_ready where it sets the ready bit, or more
like mlx5_vdpa_set_vq_ready where it can do some more work in the
disable case?

For net and something like ifcvf_vdpa_set_vq_ready's design, would we
have vhost_ring_ioctl() set some vhost_virtqueue enable bit? We then
have some helper vhost_vq_is_enabled() and some code to detect if
userspace supports the new ioctl. And then in vhost_net_set_backend do
we call vhost_vq_is_enabled()? What is done for disable then? It doesn't
seem to buy a lot of new functionality. Is it just so we follow the
spec?

Or do you want it to work more like mlx5_vdpa_set_vq_ready? For this, in
vhost_ring_ioctl when we get the new ioctl we would call into the
drivers and have them start queues and stop queues? For enable, what
would you do for net in this case? For disable, would you do something
like vhost_net_stop_vq (we don't free up anything allocated in
vhost_vring_ioctl calls, but we can stop what we setup in the net
driver)? Is this useful for the current net mq design or is this for
something like where you would do one vhost net device with multiple
vqs?

My issue/concern is that in general these calls seem useful, but we
don't really need them for scsi because vhost scsi is already stuck
creating vqs like how it does due to existing users. If we do the
ifcvf_vdpa_set_vq_ready type of design where we just set some bit, then
the new ioctl does not give us a lot. It's just an extra check and
extra code.

And for the mlx5_vdpa_set_vq_ready type of design, it doesn't seem like
it's going to happen a lot where the admin is going to want to remove
vqs from a running device. And for both addition/removal for scsi we
would need code in virtio scsi to handle hot plug removal/addition of a
queue and then redoing the multiqueue mappings, which would be difficult
to add with no one requesting it.
On 11/18/20 12:57 AM, Mike Christie wrote:
> On 11/17/20 11:17 PM, Jason Wang wrote:
>> [...]
>
> [...]
>
> And for the mlx5_vdpa_set_vq_ready type of design, it doesn't seem like
> it's going to happen a lot where the admin is going to want to remove
> vqs from a running device. And for both addition/removal for scsi we
> would need code in virtio scsi to handle hot plug removal/addition of a
> queue and then redoing the multiqueue mappings, which would be
> difficult to add with no one requesting it.

Actually I want to half take this last chunk back. When I said in
general these calls seem useful, I meant for the mlx5_vdpa_set_vq_ready
type of design. For example, if a user was going to remove/add vCPUs
then this functionality where we are completely adding/removing
virtqueues would be useful. We would need a lot more than just the new
ioctl though, because we would want to completely create/setup a new
virtqueue.

I do not have any of our users asking for this. You guys work on this
more so you know better.

Another option is to kick it down the road again since I'm not sure my
patches here have a lot to do with this. We could also just do the
kernel only approach (no new ioctl) and then add some new design when
we have users asking for it.
On 2020/11/18 2:57 PM, Mike Christie wrote:
> On 11/17/20 11:17 PM, Jason Wang wrote:
>> [...]
>>
>> Actually I think it should do the real "enable/disable" that tries to
>> follow the virtio spec.
>
> What does real mean here?

I think it means when a vq is disabled, vhost won't process any request
from that virtqueue.

> For the vdpa enable call for example, would it be like
> ifcvf_vdpa_set_vq_ready where it sets the ready bit, or more like
> mlx5_vdpa_set_vq_ready where it can do some more work in the disable
> case?

For vDPA, it would be more complicated.

E.g. for IFCVF, it just delays the setting of queue_enable until it gets
DRIVER_OK. Technically it can pass queue_enable through to the hardware
as mlx5e did.

> For net and something like ifcvf_vdpa_set_vq_ready's design, would we
> have vhost_ring_ioctl() set some vhost_virtqueue enable bit? We then
> have some helper vhost_vq_is_enabled() and some code to detect if
> userspace supports the new ioctl.

Yes, vhost supports backend capabilities. When userspace negotiates the
new capability, we should depend on SET_VRING_ENABLE; if not, we can do
vhost_vq_is_enabled().

> And then in vhost_net_set_backend do we call vhost_vq_is_enabled()?
> What is done for disable then?

It needs more thought, but the question is not specific to
SET_VRING_ENABLE. Consider that the guest may zero the ring address as
well.

For disabling, we can simply flush the work and disable all the polls.

> It doesn't seem to buy a lot of new functionality. Is it just so we
> follow the spec?

My understanding is that, since the spec defines queue_enable, we should
support it in vhost. And we can piggyback the delayed vq creation with
this feature. Otherwise we will duplicate the function if we want to
support queue_enable.

> Or do you want it to work more like mlx5_vdpa_set_vq_ready? For this,
> in vhost_ring_ioctl when we get the new ioctl we would call into the
> drivers and have them start queues and stop queues? For enable, what
> would you do for net in this case?

Net is something different; we can simply use SET_BACKEND to disable a
specific virtqueue without introducing new ioctls. Notice that net mq is
kind of different from scsi, which has a per queue pair vhost device,
and the API allows us to set the backend for a specific virtqueue.

> For disable, would you do something like vhost_net_stop_vq (we don't
> free up anything allocated in vhost_vring_ioctl calls, but we can stop
> what we setup in the net driver)?

It's up to you; if you think you should free the resources you can do
that.

> Is this useful for the current net mq design or is this for something
> like where you would do one vhost net device with multiple vqs?

I think SET_VRING_ENABLE is more useful for SCSI since it has a model of
multiple vqs per vhost device.

> My issue/concern is that in general these calls seem useful, but we
> don't really need them for scsi because vhost scsi is already stuck
> creating vqs like how it does due to existing users. If we do the
> ifcvf_vdpa_set_vq_ready type of design where we just set some bit, then
> the new ioctl does not give us a lot. It's just an extra check and
> extra code.
>
> And for the mlx5_vdpa_set_vq_ready type of design, it doesn't seem like
> it's going to happen a lot where the admin is going to want to remove
> vqs from a running device.

In this case, qemu may just disable the queues of vhost-scsi via
SET_VRING_ENABLE and then we can free resources?

> And for both addition/removal for scsi we would need code in virtio
> scsi to handle hot plug removal/addition of a queue and then redoing
> the multiqueue mappings, which would be difficult to add with no one
> requesting it.

Thanks
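As an illustration of the compat check discussed above, a sketch of what vhost_vq_is_enabled() might look like. The backend feature bit, the enabled flag, and the backend_features field are assumptions, not an existing kernel API; vhost_vq_is_setup() is from the vhost-scsi bug fix series referenced in the cover letter.

/* Hypothetical feature bit for negotiating SET_VRING_ENABLE support. */
#define VHOST_BACKEND_F_VRING_ENABLE 0x2

static bool vhost_vq_is_enabled(struct vhost_virtqueue *vq)
{
	struct vhost_dev *dev = vq->dev;

	/* New userspace negotiated the capability, so trust the state
	 * set by VHOST_SET_VRING_ENABLE. */
	if (dev->backend_features & BIT_ULL(VHOST_BACKEND_F_VRING_ENABLE))
		return vq->enabled;

	/* Compat path: old userspace never sends the ioctl, so treat a
	 * fully configured vq as enabled. */
	return vhost_vq_is_setup(vq);
}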
On Tue, Nov 17, 2020 at 01:13:14PM -0600, Mike Christie wrote:
> On 11/17/20 10:40 AM, Stefan Hajnoczi wrote:
>> [...]
>>
>> How does userspace find out the tids and set their CPU affinity?
>
> When we create the worker thread we add it to the device owner's
> cgroup, so we end up inheriting those settings like affinity.
>
> However, are you more asking about finer control like if the guest is
> doing mq, and the mq hw queue is bound to cpu0, it would perform better
> if we could bind the vhost vq's worker thread to cpu0? I think the
> problem might be that if you are in the cgroup then we can't set a
> specific thread's CPU affinity to just one specific CPU. So you can
> either do cgroups or not.

Something we wanted to try for a while is to allow userspace to create
threads for us, then specify which vqs each processes. That would
address this set of concerns ...
On Tue, Nov 17, 2020 at 01:13:14PM -0600, Mike Christie wrote:
> On 11/17/20 10:40 AM, Stefan Hajnoczi wrote:
>> [...]
>>
>> What is the meaning of the new VHOST_SET_VRING_ENABLE ioctl? It
>> doesn't really "enable" or "disable" the vq, requests are processed
>> regardless.
>
> Yeah, I agree. The problem I've mentioned before is:
>
> 1. For net and vsock, it's not useful because the vqs are hard coded in
> the kernel and userspace, so you can't disable a vq and you never need
> to enable one.
>
> 2. vdpa has its own enable ioctl.
>
> 3. For scsi, because we already are doing multiple vqs based on the
> num_queues value, we have to have some sort of compat support and code
> to detect if userspace is even going to send the new ioctl. In this
> patchset, compat just meant enable/disable the extra functionality of
> extra worker threads for a vq. We will still use the vq if userspace
> set it up.

The main request I have is to clearly define the meaning of the
VHOST_SET_VRING_ENABLE ioctl. If you want to keep it as-is for now and
the vhost maintainers are happy with that, that's okay. It should just
be documented so that userspace and other vhost driver authors
understand what it's supposed to do.

> My preference has been:
>
> 1. If we were to ditch cgroups, then add a new interface that would
> allow us to bind threads to a specific CPU, so that it lines up with
> the guest's mq to CPU mapping.

A 1:1 vCPU/vq->CPU mapping isn't desirable in all cases.

The CPU affinity is a userspace policy decision. The host kernel should
provide a mechanism but not the policy. That way userspace can decide
which workers are shared by multiple vqs and on which physical CPUs they
should run.

Stefan
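The mechanism/policy split is straightforward on the userspace side: once a worker's tid is known (however the final interface exposes it), pinning it is plain sched_setaffinity(2). Only the source of the tid is hypothetical here.

/* Pin a vhost worker thread to one CPU from userspace. */
#define _GNU_SOURCE
#include <sched.h>
#include <sys/types.h>

static int pin_worker(pid_t tid, int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	/* A tid can be passed where a pid is expected; the affinity
	 * then applies to just that thread. */
	return sched_setaffinity(tid, sizeof(set), &set);
}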
On 11/18/20 1:54 AM, Jason Wang wrote:
> On 2020/11/18 2:57 PM, Mike Christie wrote:
>> [...]
>
> [...]
>
> My understanding is that, since the spec defines queue_enable, we
> should support it in vhost. And we can piggyback the delayed vq
> creation with this feature. Otherwise we will duplicate the function if
> we want to support queue_enable.

I had actually given up on the delayed vq creation goal. I'm still not
sure how it's related to ENABLE and I think it gets pretty gross.

1. If we started from a semi-clean slate, and used the ENABLE ioctl more
like a CREATE ioctl, and did the ENABLE after vhost dev open() but
before any other ioctls, we can allocate the vq when we get the ENABLE
ioctl. This fixes the issue where vhost scsi is allocating 128 vqs at
open() time. We can then allocate metadata like the iovecs at ENABLE
time or when we get a setup ioctl that is related to the metadata, so it
fixes that too.

That makes sense how ENABLE is related to delayed vq allocation and why
we would want it.

If we now need to support old tools though, then you lose me. To try and
keep the code paths using the same code, then at vhost dev open() time
do we start vhost_dev_init with zero vqs like with the allocate at
ENABLE time case? Then when we get the first vring or dev ioctl, do we
allocate the vq and related metadata? If so, the ENABLE does not buy us
a lot since we get the delayed allocation from the compat code. Also
this compat case gets really messy when we are delaying the actual vq
and not just the metadata.

If for the compat case, we keep the code that before/during
vhost_dev_init allocates all the vqs and does the initialization, then
we end up with 2 very very different code paths. And we also need a new
modparam or something to tell the drivers to do the old or new open()
behavior.

2. If we do an approach that is less invasive to the kernel for the
compat case, and do the ENABLE ioctl after other vring ioctl calls, then
that would not work for the delayed vq allocation goal since the ENABLE
call is too late.

> Net is something different; we can simply use SET_BACKEND to disable a
> specific virtqueue without introducing new ioctls. Notice that net mq
> is kind of different from scsi, which has a per queue pair vhost
> device, and the API allows us to set the backend for a specific
> virtqueue.

That's one of the things I am trying to understand. It sounds like
ENABLE is not useful to net. Will net even use/implement the ENABLE
ioctl or just use the SET_BACKEND? What about vsock?

For net it sounds like it's just going to add an extra code path if you
support it.

> I think SET_VRING_ENABLE is more useful for SCSI since it has a model
> of multiple vqs per vhost device.

That is why I was asking about if you were going to change net.

It would have been useful for scsi if we had it when mq support was
added and we didn't have to support old tools. But now, if enable=true
is only going to be something where we set some bit so that later, when
VHOST_SCSI_SET_ENDPOINT is run, we can do what we are already doing,
it's just extra code. This patch:

https://www.spinics.net/lists/linux-scsi/msg150151.html

would work without the ENABLE ioctl I mean.

And if you guys want to do the completely new interface, then none of
this matters I guess :)

For disable see below.

> In this case, qemu may just disable the queues of vhost-scsi via
> SET_VRING_ENABLE and then we can free resources?

Some SCSI background in case it doesn't work like net:
-------
When the user sets up mq for vhost-scsi/virtio-scsi, for max perf and no
cares about mem use they would normally set num_queues based on the
number of vCPUs and MSI-x vectors. I think the default in qemu now is to
try and detect that value.

When the virtio_scsi driver is loaded into the guest kernel, it takes
the num_queues value and tells the scsi/block mq layer to create
num_queues multiqueue hw queues.
------

I was trying to say in the previous email that if all we do is set some
bits to indicate the queue is disabled, free its resources, stop
polling/queueing in the scsi/target layer, flush etc, it does not seem
useful. I was trying to ask when would a user only want this behavior?

I think we need an extra piece where the guest needs to be modified to
handle the queue removal, or the block/scsi layers would still send IO
and we would get IO errors. Without this it seems like some extra code
that we will not use.

And then if we are going to make disable useful like this, what about
enable? We would want to do the reverse where we add the queue and the
guest remaps the mq to hw queue layout. To do this, enable has to do
more than just set some bits. There is also an issue with how it would
need to interact with the SET_BACKEND
(VHOST_SCSI_SET_ENDPOINT/VHOST_SCSI_CLEAR_ENDPOINT for scsi) calls.

I think if we wanted the ENABLE ioctl to work like this then that is not
related to my patches, and like I've written before I think my patches
do not need the ENABLE ioctl in general. We could add the patch where we
create the worker threads from VHOST_SCSI_SET_ENDPOINT. And if we ever
add this queue hotplug type of code, then the worker thread would just
get moved/rearranged with the other vq modification code in
vhost_scsi_set_endpoint/vhost_scsi_clear_endpoint.

We could also go the new threading interface route, and also do the
ENABLE ioctl separately.
On 2020/11/19 4:06 AM, Mike Christie wrote:
> On 11/18/20 1:54 AM, Jason Wang wrote:
>> [...]
>
> I had actually given up on the delayed vq creation goal. I'm still not
> sure how it's related to ENABLE and I think it gets pretty gross.
>
> [...]
>
> If for the compat case, we keep the code that before/during
> vhost_dev_init allocates all the vqs and does the initialization, then
> we end up with 2 very very different code paths. And we also need a new
> modparam or something to tell the drivers to do the old or new open()
> behavior.

Right, so I think maybe we can take a step back. Instead of depending on
an explicit new ioctl which may cause a lot of issues, can we do
something similar to vhost_vq_is_setup()? That means, let's
create/destroy new workers on SET_VRING_ADDR?

> That's one of the things I am trying to understand. It sounds like
> ENABLE is not useful to net. Will net even use/implement the ENABLE
> ioctl or just use the SET_BACKEND?

I think SET_BACKEND is sufficient for net.

> What about vsock?

For vsock (and scsi as well), the backend is per virtqueue, but the
actual issue is there's no uAPI to configure it per vq. The current uAPI
is per device.

> For net it sounds like it's just going to add an extra code path if you
> support it.

Yes, so if we really want one (which is still questionable given our
discussion), we can start from a SCSI-specific one (or an alias of the
vDPA one).

> It would have been useful for scsi if we had it when mq support was
> added and we didn't have to support old tools. But now, if enable=true
> is only going to be something where we set some bit so that later, when
> VHOST_SCSI_SET_ENDPOINT is run, we can do what we are already doing,
> it's just extra code. This patch:
>
> https://www.spinics.net/lists/linux-scsi/msg150151.html
>
> would work without the ENABLE ioctl I mean.

That seems to pre-allocate all workers. If we don't care about the
resource consumption (127 workers) it could be fine.

> Some SCSI background in case it doesn't work like net:
> -------
> When the user sets up mq for vhost-scsi/virtio-scsi, for max perf and
> no cares about mem use they would normally set num_queues based on the
> number of vCPUs and MSI-x vectors. I think the default in qemu now is
> to try and detect that value.
>
> When the virtio_scsi driver is loaded into the guest kernel, it takes
> the num_queues value and tells the scsi/block mq layer to create
> num_queues multiqueue hw queues.

If I read the code correctly, for modern devices the guest will set
queue_enable for the queues that it wants to use. So in this ideal case,
qemu can forward them to VRING_ENABLE and reset VRING_ENABLE during
device reset.

But it would be complicated to support legacy devices and qemu.

> I was trying to say in the previous email that if all we do is set some
> bits to indicate the queue is disabled, free its resources, stop
> polling/queueing in the scsi/target layer, flush etc, it does not seem
> useful. I was trying to ask when would a user only want this behavior?

I think it's device reset; the semantic is that unless the queue is
enabled, we should treat it as disabled.

> I think if we wanted the ENABLE ioctl to work like this then that is
> not related to my patches, and like I've written before I think my
> patches do not need the ENABLE ioctl in general. We could add the patch
> where we create the worker threads from VHOST_SCSI_SET_ENDPOINT. And if
> we ever add this queue hotplug type of code, then the worker thread
> would just get moved/rearranged with the other vq modification code in
> vhost_scsi_set_endpoint/vhost_scsi_clear_endpoint.
>
> We could also go the new threading interface route, and also do the
> ENABLE ioctl separately.

Right, my original idea is to try to make queue_enable (in the spec)
work for SCSI, and we can use that for any delayed stuff (vqs or
workers). But it looks not as easy as I imagined.

Thanks
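One possible reading of the SET_VRING_ADDR suggestion above, reusing the helpers sketched earlier in this thread; this is speculation, not code from the series.

/* Speculative hook: create a per-vq worker when the ring address is
 * set, instead of adding a new ioctl. Assumes the vhost_vq_worker_add()
 * sketch and the vq->worker field from earlier. */
static long vhost_vring_set_addr_hook(struct vhost_dev *dev,
				      struct vhost_virtqueue *vq,
				      unsigned int idx)
{
	/* Once the ring address is set the vq is really going to be
	 * used; IO vqs past the first get their own worker, mirroring
	 * the compat rule in the patch earlier in the thread. */
	if (idx > VHOST_SCSI_VQ_IO && vhost_vq_is_setup(vq) && !vq->worker)
		return vhost_vq_worker_add(dev, vq);

	return 0;
}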
On Wed, Nov 18, 2020 at 04:54:07AM -0500, Michael S. Tsirkin wrote:
> On Tue, Nov 17, 2020 at 01:13:14PM -0600, Mike Christie wrote:
>> [...]
>>
>> However, are you more asking about finer control like if the guest is
>> doing mq, and the mq hw queue is bound to cpu0, it would perform
>> better if we could bind the vhost vq's worker thread to cpu0? I think
>> the problem might be that if you are in the cgroup then we can't set a
>> specific thread's CPU affinity to just one specific CPU. So you can
>> either do cgroups or not.
>
> Something we wanted to try for a while is to allow userspace
> to create threads for us, then specify which vqs it processes.

Do you mean an interface like a blocking ioctl(vhost_fd,
VHOST_WORKER_RUN) where the vhost processing is done in the context of
the caller's userspace thread?

What is neat about this is that it removes thread configuration from the
kernel vhost code. On the other hand, userspace still needs an interface
indicating which vqs should be processed. Maybe it would even require an
int worker_fd = ioctl(vhost_fd, VHOST_WORKER_CREATE) and then
ioctl(worker_fd, VHOST_WORKER_BIND_VQ, vq_idx)? So then it becomes
complex again...

Stefan
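A hypothetical userspace flow for the interface Stefan sketches above might look as follows; none of these ioctls exist, and the request numbers are invented for illustration.

/* Invented ioctls for the worker-fd model described above. */
#include <sys/ioctl.h>
#include <linux/ioctl.h>
#include <pthread.h>

#define VHOST_WORKER_CREATE  _IO(0xAF, 0x71)                /* made up */
#define VHOST_WORKER_BIND_VQ _IOW(0xAF, 0x72, unsigned int) /* made up */
#define VHOST_WORKER_RUN     _IO(0xAF, 0x73)                /* made up */

static void *worker_thread(void *arg)
{
	int worker_fd = *(int *)arg;

	/* Blocks in the kernel, running something like vhost_worker()
	 * in this userspace thread's context until interrupted. */
	ioctl(worker_fd, VHOST_WORKER_RUN);
	return NULL;
}

static int run_vq_in_own_thread(int vhost_fd, unsigned int vq_idx,
				pthread_t *thread, int *worker_fd)
{
	/* Assumes VHOST_WORKER_CREATE returns a new worker fd. */
	*worker_fd = ioctl(vhost_fd, VHOST_WORKER_CREATE);
	if (*worker_fd < 0)
		return -1;

	if (ioctl(*worker_fd, VHOST_WORKER_BIND_VQ, &vq_idx) < 0)
		return -1;

	return pthread_create(thread, NULL, worker_thread, worker_fd);
}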
On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote:
>> My preference has been:
>>
>> 1. If we were to ditch cgroups, then add a new interface that would
>> allow us to bind threads to a specific CPU, so that it lines up with
>> the guest's mq to CPU mapping.
>
> A 1:1 vCPU/vq->CPU mapping isn't desirable in all cases.
>
> The CPU affinity is a userspace policy decision. The host kernel should
> provide a mechanism but not the policy. That way userspace can decide
> which workers are shared by multiple vqs and on which physical CPUs
> they should run.

So if we let userspace dictate the threading policy then I think binding
vqs to userspace threads and running there makes the most sense, no need
to create the threads.
On 11/18/20 10:35 PM, Jason Wang wrote:
>> it's just extra code. This patch:
>>
>> https://www.spinics.net/lists/linux-scsi/msg150151.html
>>
>> would work without the ENABLE ioctl I mean.
>
> That seems to pre-allocate all workers. If we don't care about the
> resource consumption (127 workers) it could be fine.

It only creates what the user requested via num_queues. That patch will:

1. For the default case of num_queues=1 we use the single worker created
from the SET_OWNER ioctl.

2. If num_queues > 1, then it creates a worker thread for each IO vq
beyond the first.

> [...]
>
>> When the virtio_scsi driver is loaded into the guest kernel, it takes
>> the num_queues value and tells the scsi/block mq layer to create
>> num_queues multiqueue hw queues.
>
> If I read the code correctly, for modern devices the guest will set
> queue_enable for the queues that it wants to use. So in this ideal
> case, qemu can forward them to VRING_ENABLE and reset VRING_ENABLE
> during device reset.

I was thinking more that you want an event like when a device/LUN is
added/removed to a host. Instead of kicking off a device scan, you could
call the block helper to remap queues. It would then not be too invasive
to running IO.

I'll look into reset some more.

> But it would be complicated to support legacy devices and qemu.
>
>> I was trying to say in the previous email that if all we do is set
>> some bits to indicate the queue is disabled, free its resources, stop
>> polling/queueing in the scsi/target layer, flush etc, it does not seem
>> useful. I was trying to ask when would a user only want this behavior?
>
> I think it's device reset; the semantic is that unless the queue is
> enabled, we should treat it as disabled.

Ah ok. I'll look into that some more.

A funny thing is that I was trying to test that a while ago, but it
wasn't helpful. I'm guessing it didn't work because it didn't implement
what you wanted for disable right now :)
On 11/19/20 8:46 AM, Michael S. Tsirkin wrote: > On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote: >>> My preference has been: >>> >>> 1. If we were to ditch cgroups, then add a new interface that would allow >>> us to bind threads to a specific CPU, so that it lines up with the guest's >>> mq to CPU mapping. >> >> A 1:1 vCPU/vq->CPU mapping isn't desirable in all cases. >> >> The CPU affinity is a userspace policy decision. The host kernel should >> provide a mechanism but not the policy. That way userspace can decide >> which workers are shared by multiple vqs and on which physical CPUs they >> should run. > > So if we let userspace dictate the threading policy then I think binding > vqs to userspace threads and running there makes the most sense, > no need to create the threads. > Just to make sure I am on the same page, in one of the first postings of this set at the bottom of the mail: https://www.spinics.net/lists/linux-scsi/msg148322.html I asked about a new interface and had done something more like what Stefan posted: struct vhost_vq_worker_info { /* * The pid of an existing vhost worker that this vq will be * assigned to. When pid is 0 the virtqueue is assigned to the * default vhost worker. When pid is -1 a new worker thread is * created for this virtqueue. When pid is -2 the virtqueue's * worker thread is unchanged. * * If a vhost worker no longer has any virtqueues assigned to it * then it will terminate. * * The pid of the vhost worker is stored to this field when the * ioctl completes successfully. Use pid -2 to query the current * vhost worker pid. */ __kernel_pid_t pid; /* in/out */ /* The virtqueue index*/ unsigned int vq_idx; /* in */ }; This approach is simple and it allowed me to have userspace map queues and threads optimally for our setups. Note: Stefan, in response to your previous comment, I am just using my 1:1 mapping as an example and would make it configurable from userspace. In the email above are you guys suggesting to execute the SCSI/vhost requests in userspace? We should not do that because: 1. It negates part of what makes vhost fast where we do not have to kick out to userspace then back to the kernel. 2. It's not doable or becomes a crazy mess because vhost-scsi is tied to the scsi/target layer in the kernel. You can't process the scsi command in userspace since the scsi state machine and all its configuration info is in the kernel's scsi/target layer. For example, I was just the maintainer of the target_core_user module that hooks into LIO/target on the backend (vhost-scsi hooks in on the front end) and passes commands to userspace and there we have a semi-shadow state machine. It gets nasty to try and maintain/sync state between lio/target core in the kernel and in userspace. We also see the perf loss I mentioned in #1.
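For illustration, userspace consumption of that interface might look like the sketch below: each IO vq gets a fresh worker and the returned pid is pinned with sched_setaffinity(2). The VHOST_SET_VQ_WORKER ioctl is the one proposed in this thread and does not exist yet, so its definition is assumed; note also that cgroup cpuset limits on the vhost device owner could still constrain the mask.

        #define _GNU_SOURCE
        #include <sched.h>
        #include <sys/ioctl.h>

        /* Assign vq 'vq_idx' to a brand new worker and pin it to 'cpu'.
         * Uses struct vhost_vq_worker_info from the proposal above. */
        static int assign_and_pin_vq(int vhost_fd, unsigned vq_idx, int cpu)
        {
                struct vhost_vq_worker_info info = {
                        .pid = -1,      /* -1: create a new worker for this vq */
                        .vq_idx = vq_idx,
                };
                cpu_set_t mask;

                if (ioctl(vhost_fd, VHOST_SET_VQ_WORKER, &info) < 0)
                        return -1;

                /* on success, info.pid holds the new worker's pid */
                CPU_ZERO(&mask);
                CPU_SET(cpu, &mask);
                return sched_setaffinity(info.pid, sizeof(mask), &mask);
        }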
On Thu, Nov 19, 2020 at 4:13 PM Mike Christie <michael.christie@oracle.com> wrote: > > On 11/19/20 8:46 AM, Michael S. Tsirkin wrote: > > On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote: > >>> My preference has been: > >>> > >>> 1. If we were to ditch cgroups, then add a new interface that would allow > >>> us to bind threads to a specific CPU, so that it lines up with the guest's > >>> mq to CPU mapping. > >> > >> A 1:1 vCPU/vq->CPU mapping isn't desirable in all cases. > >> > >> The CPU affinity is a userspace policy decision. The host kernel should > >> provide a mechanism but not the policy. That way userspace can decide > >> which workers are shared by multiple vqs and on which physical CPUs they > >> should run. > > > > So if we let userspace dictate the threading policy then I think binding > > vqs to userspace threads and running there makes the most sense, > > no need to create the threads. > > > > Just to make sure I am on the same page, in one of the first postings of > this set at the bottom of the mail: > > https://www.spinics.net/lists/linux-scsi/msg148322.html > > I asked about a new interface and had done something more like what > Stefan posted: > > struct vhost_vq_worker_info { > /* > * The pid of an existing vhost worker that this vq will be > * assigned to. When pid is 0 the virtqueue is assigned to the > * default vhost worker. When pid is -1 a new worker thread is > * created for this virtqueue. When pid is -2 the virtqueue's > * worker thread is unchanged. > * > * If a vhost worker no longer has any virtqueues assigned to it > * then it will terminate. > * > * The pid of the vhost worker is stored to this field when the > * ioctl completes successfully. Use pid -2 to query the current > * vhost worker pid. > */ > __kernel_pid_t pid; /* in/out */ > > /* The virtqueue index*/ > unsigned int vq_idx; /* in */ > }; > > This approach is simple and it allowed me to have userspace map queues > and threads optimally for our setups. > > Note: Stefan, in response to your previous comment, I am just using my > 1:1 mapping as an example and would make it configurable from userspace. > > In the email above are you guys suggesting to execute the SCSI/vhost > requests in userspace? We should not do that because: > > 1. It negates part of what makes vhost fast where we do not have to kick > out to userspace then back to the kernel. > > 2. It's not doable or becomes a crazy mess because vhost-scsi is tied to > the scsi/target layer in the kernel. You can't process the scsi command > in userspace since the scsi state machine and all its configuration info > is in the kernel's scsi/target layer. > > For example, I was just the maintainer of the target_core_user module > that hooks into LIO/target on the backend (vhost-scsi hooks in on the > front end) and passes commands to userspace and there we have a > semi-shadow state machine. It gets nasty to try and maintain/sync state > between lio/target core in the kernel and in userspace. We also see the > perf loss I mentioned in #1. No, if I understand Michael correctly he has suggested a different approach. My suggestion was that the kernel continues to manage the worker threads but an ioctl allows userspace to control the policy. I think Michael is saying that the kernel shouldn't manage/create threads. Userspace should create threads and then invoke an ioctl from those threads. The ioctl will call into the vhost driver where it will execute something similar to vhost_worker(). 
So this ioctl will block while the kernel is using the thread to process vqs. What isn't clear to me is how to tell the kernel which vqs are processed by a thread. We could try to pass that information into the ioctl. I'm not sure what the cleanest solution is here. Maybe something like: struct vhost_run_worker_info { struct timespec *timeout; sigset_t *sigmask; /* List of virtqueues to process */ unsigned nvqs; unsigned vqs[]; }; /* This blocks until the timeout is reached, a signal is received, or the vhost device is destroyed */ int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info); As you can see, userspace isn't involved with dealing with the requests. It just acts as a thread donor to the vhost driver. We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the penalty of switching into the kernel, copying in the arguments, etc. Michael: is this the kind of thing you were thinking of? Stefan
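As a sketch of how a "thread donor" might look from userspace, assuming the VHOST_RUN_WORKER proposal above (the allocation detail follows from vqs[] being a flexible array member; the vq indices and helper names are invented for the example):

        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sched.h>
        #include <stdlib.h>
        #include <sys/ioctl.h>

        struct donor_args { int vhost_fd; int cpu; };

        /* A userspace-created thread that pins itself, then lends itself
         * to vhost. The kernel-side behavior is the proposal, not merged code. */
        static void *worker_donor(void *opaque)
        {
                struct donor_args *args = opaque;
                struct vhost_run_worker_info *info;
                cpu_set_t mask;

                /* pin this donor thread before handing it to the kernel */
                CPU_ZERO(&mask);
                CPU_SET(args->cpu, &mask);
                pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);

                /* vqs[] is a flexible array member, so allocate it trailing */
                info = calloc(1, sizeof(*info) + 2 * sizeof(unsigned));
                info->timeout = NULL;   /* block until a signal or device destroy */
                info->sigmask = NULL;
                info->nvqs = 2;
                info->vqs[0] = 2;       /* e.g. the two IO vqs of a vhost-scsi dev */
                info->vqs[1] = 3;

                /* the thread "disappears" into the kernel and processes vq work */
                ioctl(args->vhost_fd, VHOST_RUN_WORKER, info);

                free(info);
                return NULL;
        }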
On 11/19/20 10:24 AM, Stefan Hajnoczi wrote: > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie > <michael.christie@oracle.com> wrote: >> >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote: >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote: >>>>> My preference has been: >>>>> >>>>> 1. If we were to ditch cgroups, then add a new interface that would allow >>>>> us to bind threads to a specific CPU, so that it lines up with the guest's >>>>> mq to CPU mapping. >>>> >>>> A 1:1 vCPU/vq->CPU mapping isn't desirable in all cases. >>>> >>>> The CPU affinity is a userspace policy decision. The host kernel should >>>> provide a mechanism but not the policy. That way userspace can decide >>>> which workers are shared by multiple vqs and on which physical CPUs they >>>> should run. >>> >>> So if we let userspace dictate the threading policy then I think binding >>> vqs to userspace threads and running there makes the most sense, >>> no need to create the threads. >>> >> >> Just to make sure I am on the same page, in one of the first postings of >> this set at the bottom of the mail: >> >> https://urldefense.com/v3/__https://www.spinics.net/lists/linux-scsi/msg148322.html__;!!GqivPVa7Brio!PdGIFdzqcAb6DW8twtjX3r7xOcM7XbTh7Ndkhxhb-1fV1VNB4lXjFzwFVE1zczUIE2Mp$ >> >> I asked about a new interface and had done something more like what >> Stefan posted: >> >> struct vhost_vq_worker_info { >> /* >> * The pid of an existing vhost worker that this vq will be >> * assigned to. When pid is 0 the virtqueue is assigned to the >> * default vhost worker. When pid is -1 a new worker thread is >> * created for this virtqueue. When pid is -2 the virtqueue's >> * worker thread is unchanged. >> * >> * If a vhost worker no longer has any virtqueues assigned to it >> * then it will terminate. >> * >> * The pid of the vhost worker is stored to this field when the >> * ioctl completes successfully. Use pid -2 to query the current >> * vhost worker pid. >> */ >> __kernel_pid_t pid; /* in/out */ >> >> /* The virtqueue index*/ >> unsigned int vq_idx; /* in */ >> }; >> >> This approach is simple and it allowed me to have userspace map queues >> and threads optimally for our setups. >> >> Note: Stefan, in response to your previous comment, I am just using my >> 1:1 mapping as an example and would make it configurable from userspace. >> >> In the email above are you guys suggesting to execute the SCSI/vhost >> requests in userspace? We should not do that because: >> >> 1. It negates part of what makes vhost fast where we do not have to kick >> out to userspace then back to the kernel. >> >> 2. It's not doable or becomes a crazy mess because vhost-scsi is tied to >> the scsi/target layer in the kernel. You can't process the scsi command >> in userspace since the scsi state machine and all its configuration info >> is in the kernel's scsi/target layer. >> >> For example, I was just the maintainer of the target_core_user module >> that hooks into LIO/target on the backend (vhost-scsi hooks in on the >> front end) and passes commands to userspace and there we have a >> semi-shadow state machine. It gets nasty to try and maintain/sync state >> between lio/target core in the kernel and in userspace. We also see the >> perf loss I mentioned in #1. > > No, if I understand Michael correctly he has suggested a different approach. > > My suggestion was that the kernel continues to manage the worker > threads but an ioctl allows userspace to control the policy. 
> > I think Michael is saying that the kernel shouldn't manage/create > threads. Userspace should create threads and then invoke an ioctl from > those threads. > > The ioctl will call into the vhost driver where it will execute > something similar to vhost_worker(). So this ioctl will block while > the kernel is using the thread to process vqs. > > What isn't clear to me is how to tell the kernel which vqs are > processed by a thread. We could try to pass that information into the > ioctl. I'm not sure what the cleanest solution is here. > > Maybe something like: > > struct vhost_run_worker_info { > struct timespec *timeout; > sigset_t *sigmask; > > /* List of virtqueues to process */ > unsigned nvqs; > unsigned vqs[]; > }; > > /* This blocks until the timeout is reached, a signal is received, or > the vhost device is destroyed */ > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info); > > As you can see, userspace isn't involved with dealing with the > requests. It just acts as a thread donor to the vhost driver. > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the > penalty of switching into the kernel, copying in the arguments, etc. I didn't get this part. Why have the timeout? When the timeout expires, does userspace just call right back down to the kernel, or does it do some sort of processing/operation? You could have the worker function that runs from that ioctl wait for a signal or a wake-up call from the vhost_work/poll functions. > > Michael: is this the kind of thing you were thinking of? > > Stefan >
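What Mike describes above would make the ioctl handler essentially the existing vhost_worker() loop, just running in the donor thread and returning on a signal instead of on kthread_should_stop(). A hedged sketch, modeled on drivers/vhost/vhost.c (the vhost_run_worker() name is invented; work_list, VHOST_WORK_QUEUED and the llist helpers are the ones vhost already uses):

        static long vhost_run_worker(struct vhost_dev *dev)
        {
                struct vhost_work *work, *work_next;
                struct llist_node *node;

                for (;;) {
                        set_current_state(TASK_INTERRUPTIBLE);

                        if (signal_pending(current)) {
                                __set_current_state(TASK_RUNNING);
                                return -EINTR;  /* hand the thread back to userspace */
                        }

                        node = llist_del_all(&dev->work_list);
                        if (!node)
                                schedule();     /* sleep until vhost_work_queue() wakes us */

                        node = llist_reverse_order(node);
                        llist_for_each_entry_safe(work, work_next, node, node) {
                                clear_bit(VHOST_WORK_QUEUED, &work->flags);
                                __set_current_state(TASK_RUNNING);
                                work->fn(work); /* run the queued vq work item */
                                cond_resched();
                        }
                }
        }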
On Thu, Nov 19, 2020 at 4:43 PM Mike Christie <michael.christie@oracle.com> wrote: > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote: > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie > > <michael.christie@oracle.com> wrote: > >> > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote: > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote: > > struct vhost_run_worker_info { > > struct timespec *timeout; > > sigset_t *sigmask; > > > > /* List of virtqueues to process */ > > unsigned nvqs; > > unsigned vqs[]; > > }; > > > > /* This blocks until the timeout is reached, a signal is received, or > > the vhost device is destroyed */ > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info); > > > > As you can see, userspace isn't involved with dealing with the > > requests. It just acts as a thread donor to the vhost driver. > > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the > > penalty of switching into the kernel, copying in the arguments, etc. > > I didn't get this part. Why have the timeout? When the timeout expires, > does userspace just call right back down to the kernel or does it do > some sort of processing/operation? > > You could have your worker function run from that ioctl wait for a > signal or a wake up call from the vhost_work/poll functions. An optional timeout argument is common in blocking interfaces like poll(2), recvmmsg(2), etc. Although something can send a signal to the thread instead, implementing that in an application is more awkward than passing a struct timespec. Compared to other blocking calls we don't expect ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will rarely be used and can be dropped from the interface. BTW the code I posted wasn't a carefully thought out proposal :). The details still need to be considered and I'm going to be offline for the next week so maybe someone else can think it through in the meantime. Stefan
On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote: > > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie > <michael.christie@oracle.com> wrote: > > > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote: > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie > > > <michael.christie@oracle.com> wrote: > > >> > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote: > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote: > > > struct vhost_run_worker_info { > > > struct timespec *timeout; > > > sigset_t *sigmask; > > > > > > /* List of virtqueues to process */ > > > unsigned nvqs; > > > unsigned vqs[]; > > > }; > > > > > > /* This blocks until the timeout is reached, a signal is received, or > > > the vhost device is destroyed */ > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info); > > > > > > As you can see, userspace isn't involved with dealing with the > > > requests. It just acts as a thread donor to the vhost driver. > > > > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the > > > penalty of switching into the kernel, copying in the arguments, etc. > > > > I didn't get this part. Why have the timeout? When the timeout expires, > > does userspace just call right back down to the kernel or does it do > > some sort of processing/operation? > > > > You could have your worker function run from that ioctl wait for a > > signal or a wake up call from the vhost_work/poll functions. > > An optional timeout argument is common in blocking interfaces like > poll(2), recvmmsg(2), etc. > > Although something can send a signal to the thread instead, > implementing that in an application is more awkward than passing a > struct timespec. > > Compared to other blocking calls we don't expect > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will > rarely be used and can be dropped from the interface. > > BTW the code I posted wasn't a carefully thought out proposal :). The > details still need to be considered and I'm going to be offline for > the next week so maybe someone else can think it through in the > meantime. One final thought before I'm offline for a week. If ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance then it's hard to support poll-mode (busy waiting) workers because each device instance consumes a whole CPU. If we stick to an interface where the kernel manages the worker threads then it's easier to share workers between devices for polling. I have CCed Stefano Garzarella, who is looking at similar designs for vDPA software device implementations. Stefan
On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote: > On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote: > > > > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie > > <michael.christie@oracle.com> wrote: > > > > > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote: > > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie > > > > <michael.christie@oracle.com> wrote: > > > >> > > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote: > > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote: > > > > struct vhost_run_worker_info { > > > > struct timespec *timeout; > > > > sigset_t *sigmask; > > > > > > > > /* List of virtqueues to process */ > > > > unsigned nvqs; > > > > unsigned vqs[]; > > > > }; > > > > > > > > /* This blocks until the timeout is reached, a signal is received, or > > > > the vhost device is destroyed */ > > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info); > > > > > > > > As you can see, userspace isn't involved with dealing with the > > > > requests. It just acts as a thread donor to the vhost driver. > > > > > > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the > > > > penalty of switching into the kernel, copying in the arguments, etc. > > > > > > I didn't get this part. Why have the timeout? When the timeout expires, > > > does userspace just call right back down to the kernel or does it do > > > some sort of processing/operation? > > > > > > You could have your worker function run from that ioctl wait for a > > > signal or a wake up call from the vhost_work/poll functions. > > > > An optional timeout argument is common in blocking interfaces like > > poll(2), recvmmsg(2), etc. > > > > Although something can send a signal to the thread instead, > > implementing that in an application is more awkward than passing a > > struct timespec. > > > > Compared to other blocking calls we don't expect > > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will > > rarely be used and can be dropped from the interface. > > > > BTW the code I posted wasn't a carefully thought out proposal :). The > > details still need to be considered and I'm going to be offline for > > the next week so maybe someone else can think it through in the > > meantime. > > One final thought before I'm offline for a week. If > ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance > then it's hard to support poll-mode (busy waiting) workers because > each device instance consumes a whole CPU. If we stick to an interface > where the kernel manages the worker threads then it's easier to share > workers between devices for polling. Yes, that is the reason vhost did its own threading in the first place. I am vaguely thinking about poll(2) or a similar interface, which can wait for an event on multiple FDs. > I have CCed Stefano Garzarella, who is looking at similar designs for > vDPA software device implementations. > > Stefan
On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote: >On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote: >> >> On Thu, Nov 19, 2020 at 4:43 PM Mike Christie >> <michael.christie@oracle.com> wrote: >> > >> > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote: >> > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie >> > > <michael.christie@oracle.com> wrote: >> > >> >> > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote: >> > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote: >> > > struct vhost_run_worker_info { >> > > struct timespec *timeout; >> > > sigset_t *sigmask; >> > > >> > > /* List of virtqueues to process */ >> > > unsigned nvqs; >> > > unsigned vqs[]; >> > > }; >> > > >> > > /* This blocks until the timeout is reached, a signal is received, or >> > > the vhost device is destroyed */ >> > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info); >> > > >> > > As you can see, userspace isn't involved with dealing with the >> > > requests. It just acts as a thread donor to the vhost driver. >> > > >> > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the >> > > penalty of switching into the kernel, copying in the arguments, etc. >> > >> > I didn't get this part. Why have the timeout? When the timeout expires, >> > does userspace just call right back down to the kernel or does it do >> > some sort of processing/operation? >> > >> > You could have your worker function run from that ioctl wait for a >> > signal or a wake up call from the vhost_work/poll functions. >> >> An optional timeout argument is common in blocking interfaces like >> poll(2), recvmmsg(2), etc. >> >> Although something can send a signal to the thread instead, >> implementing that in an application is more awkward than passing a >> struct timespec. >> >> Compared to other blocking calls we don't expect >> ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will >> rarely be used and can be dropped from the interface. >> >> BTW the code I posted wasn't a carefully thought out proposal :). The >> details still need to be considered and I'm going to be offline for >> the next week so maybe someone else can think it through in the >> meantime. > >One final thought before I'm offline for a week. If >ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance >then it's hard to support poll-mode (busy waiting) workers because >each device instance consumes a whole CPU. If we stick to an interface >where the kernel manages the worker threads then it's easier to share >workers between devices for polling. Agreed, ioctl(VHOST_RUN_WORKER) is interesting and perhaps simplifies thread management (pinning, etc.), but with kthreads it would be easier to implement polling that shares a worker among multiple devices. > >I have CCed Stefano Garzarella, who is looking at similar designs for >vDPA software device implementations. Thanks. Mike, please can you keep me in CC for this work? It's really interesting, since I'll have similar issues to solve with vDPA software devices. Thanks, Stefano
On Fri, Nov 20, 2020 at 07:31:08AM -0500, Michael S. Tsirkin wrote: > On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote: > > On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote: > > > > > > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie > > > <michael.christie@oracle.com> wrote: > > > > > > > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote: > > > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie > > > > > <michael.christie@oracle.com> wrote: > > > > >> > > > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote: > > > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote: > > > > > struct vhost_run_worker_info { > > > > > struct timespec *timeout; > > > > > sigset_t *sigmask; > > > > > > > > > > /* List of virtqueues to process */ > > > > > unsigned nvqs; > > > > > unsigned vqs[]; > > > > > }; > > > > > > > > > > /* This blocks until the timeout is reached, a signal is received, or > > > > > the vhost device is destroyed */ > > > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info); > > > > > > > > > > As you can see, userspace isn't involved with dealing with the > > > > > requests. It just acts as a thread donor to the vhost driver. > > > > > > > > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the > > > > > penalty of switching into the kernel, copying in the arguments, etc. > > > > > > > > I didn't get this part. Why have the timeout? When the timeout expires, > > > > does userspace just call right back down to the kernel or does it do > > > > some sort of processing/operation? > > > > > > > > You could have your worker function run from that ioctl wait for a > > > > signal or a wake up call from the vhost_work/poll functions. > > > > > > An optional timeout argument is common in blocking interfaces like > > > poll(2), recvmmsg(2), etc. > > > > > > Although something can send a signal to the thread instead, > > > implementing that in an application is more awkward than passing a > > > struct timespec. > > > > > > Compared to other blocking calls we don't expect > > > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will > > > rarely be used and can be dropped from the interface. > > > > > > BTW the code I posted wasn't a carefully thought out proposal :). The > > > details still need to be considered and I'm going to be offline for > > > the next week so maybe someone else can think it through in the > > > meantime. > > > > One final thought before I'm offline for a week. If > > ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance > > then it's hard to support poll-mode (busy waiting) workers because > > each device instance consumes a whole CPU. If we stick to an interface > > where the kernel manages the worker threads then it's easier to share > > workers between devices for polling. > > > Yes that is the reason vhost did its own reason in the first place. > > > I am vaguely thinking about poll(2) or a similar interface, > which can wait for an event on multiple FDs. I can imagine how using poll(2) would work from a userspace perspective, but on the kernel side I don't think it can be implemented cleanly. poll(2) is tied to the file_operations->poll() callback and read/write/error events. Not to mention there isn't a way to substitute the vhost worker thread function instead of scheduling out the current thread while waiting for poll fd events.
But maybe ioctl(VHOST_WORKER_RUN) can do it: struct vhost_run_worker_dev { int vhostfd; /* /dev/vhost-TYPE fd */ unsigned nvqs; /* number of virtqueues in vqs[] */ unsigned vqs[]; /* virtqueues to process */ }; struct vhost_run_worker_info { struct timespec *timeout; sigset_t *sigmask; unsigned ndevices; struct vhost_run_worker_dev *devices[]; }; In the simple case userspace sets ndevices to 1 and we just handle virtqueues for the current device. In the fancier shared worker thread case the userspace process has the vhost fds of all the devices it is processing and passes them to ioctl(VHOST_WORKER_RUN) via struct vhost_run_worker_dev elements. From a security perspective it means the userspace thread has access to all vhost devices (because it has their fds). I'm not sure how the mm is supposed to work. The devices might be associated with different userspace processes (guests) and therefore have different virtual memory. Just wanted to push this discussion along a little further. I'm buried under emails and probably won't be very active over the next few days. Stefan
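To illustrate the shared-worker case, a userspace sketch that donates one polling thread to two devices might look like this (the structs are the proposal above; the VHOST_WORKER_RUN request number does not exist, and which fd the ioctl should be issued on is exactly the open question raised in the reply below):

        #include <stdlib.h>
        #include <sys/ioctl.h>

        static struct vhost_run_worker_dev *make_dev(int fd, unsigned nvqs)
        {
                struct vhost_run_worker_dev *dev;
                unsigned i;

                /* vqs[] is a flexible array member, so allocate it trailing */
                dev = calloc(1, sizeof(*dev) + nvqs * sizeof(unsigned));
                dev->vhostfd = fd;
                dev->nvqs = nvqs;
                for (i = 0; i < nvqs; i++)
                        dev->vqs[i] = i;        /* process every vq of this device */
                return dev;
        }

        int run_shared_worker(int worker_fd, int scsi_fd, int net_fd)
        {
                struct vhost_run_worker_info *info;

                info = calloc(1, sizeof(*info) + 2 * sizeof(void *));
                info->ndevices = 2;
                info->devices[0] = make_dev(scsi_fd, 4);        /* e.g. 4 vqs */
                info->devices[1] = make_dev(net_fd, 2);

                /* one donated thread now processes/polls both devices */
                return ioctl(worker_fd, VHOST_WORKER_RUN, info);
        }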
On Tue, Dec 01, 2020 at 12:59:43PM +0000, Stefan Hajnoczi wrote: >On Fri, Nov 20, 2020 at 07:31:08AM -0500, Michael S. Tsirkin wrote: >> On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote: >> > On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote: >> > > >> > > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie >> > > <michael.christie@oracle.com> wrote: >> > > > >> > > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote: >> > > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie >> > > > > <michael.christie@oracle.com> wrote: >> > > > >> >> > > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote: >> > > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote: >> > > > > struct vhost_run_worker_info { >> > > > > struct timespec *timeout; >> > > > > sigset_t *sigmask; >> > > > > >> > > > > /* List of virtqueues to process */ >> > > > > unsigned nvqs; >> > > > > unsigned vqs[]; >> > > > > }; >> > > > > >> > > > > /* This blocks until the timeout is reached, a signal is received, or >> > > > > the vhost device is destroyed */ >> > > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info); >> > > > > >> > > > > As you can see, userspace isn't involved with dealing with the >> > > > > requests. It just acts as a thread donor to the vhost driver. >> > > > > >> > > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the >> > > > > penalty of switching into the kernel, copying in the arguments, etc. >> > > > >> > > > I didn't get this part. Why have the timeout? When the timeout expires, >> > > > does userspace just call right back down to the kernel or does it do >> > > > some sort of processing/operation? >> > > > >> > > > You could have your worker function run from that ioctl wait for a >> > > > signal or a wake up call from the vhost_work/poll functions. >> > > >> > > An optional timeout argument is common in blocking interfaces like >> > > poll(2), recvmmsg(2), etc. >> > > >> > > Although something can send a signal to the thread instead, >> > > implementing that in an application is more awkward than passing a >> > > struct timespec. >> > > >> > > Compared to other blocking calls we don't expect >> > > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will >> > > rarely be used and can be dropped from the interface. >> > > >> > > BTW the code I posted wasn't a carefully thought out proposal :). The >> > > details still need to be considered and I'm going to be offline for >> > > the next week so maybe someone else can think it through in the >> > > meantime. >> > >> > One final thought before I'm offline for a week. If >> > ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance >> > then it's hard to support poll-mode (busy waiting) workers because >> > each device instance consumes a whole CPU. If we stick to an interface >> > where the kernel manages the worker threads then it's easier to share >> > workers between devices for polling. >> >> >> Yes that is the reason vhost did its own reason in the first place. >> >> >> I am vaguely thinking about poll(2) or a similar interface, >> which can wait for an event on multiple FDs. > >I can imagine how using poll(2) would work from a userspace perspective, >but on the kernel side I don't think it can be implemented cleanly. >poll(2) is tied to the file_operations->poll() callback and >read/write/error events. Not to mention there isn't a way to substitue >the vhost worker thread function instead of scheduling out the current >thread while waiting for poll fd events. 
> >But maybe ioctl(VHOST_WORKER_RUN) can do it: > > struct vhost_run_worker_dev { > int vhostfd; /* /dev/vhost-TYPE fd */ > unsigned nvqs; /* number of virtqueues in vqs[] */ > unsigned vqs[]; /* virtqueues to process */ > }; > > struct vhost_run_worker_info { > struct timespec *timeout; > sigset_t *sigmask; > > unsigned ndevices; > struct vhost_run_worker_dev *devices[]; > }; > >In the simple case userspace sets ndevices to 1 and we just handle >virtqueues for the current device. > >In the fancier shared worker thread case the userspace process has the >vhost fds of all the devices it is processing and passes them to >ioctl(VHOST_WORKER_RUN) via struct vhost_run_worker_dev elements. Which fd will be used for this IOCTL? One of the 'vhostfd's, or should we create a new /dev/vhost-workers (or something similar)? Maybe a new device would be cleaner and could also be reused for other stuff (I'm thinking about vDPA software devices). > >From a security perspective it means the userspace thread has access to >all vhost devices (because it has their fds). > >I'm not sure how the mm is supposed to work. The devices might be >associated with different userspace processes (guests) and therefore >have different virtual memory. Maybe in this case we should do something similar to the io_uring SQPOLL kthread, where kthread_use_mm()/kthread_unuse_mm() is used to switch virtual memory spaces. After writing this, I saw that we already do this in vhost_worker() in drivers/vhost/vhost.c. > >Just wanted to push this discussion along a little further. I'm buried >under emails and probably wont be very active over the next few days. > I think ioctl(VHOST_WORKER_RUN) might be the right way and also maybe the least difficult one. Thanks, Stefano
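A sketch of the mm switching Stefano describes, for a worker shared across devices: adopt each device owner's address space before touching its vrings, in the style of io_uring's SQPOLL thread. kthread_use_mm()/kthread_unuse_mm() and the vhost_dev->mm field are real; the surrounding loop, the struct vhost_run_ctx type, and the process-one-device helper are illustrative only:

        static void shared_worker_process(struct vhost_run_ctx *ctx)
        {
                int i;

                for (i = 0; i < ctx->ndevices; i++) {
                        struct vhost_dev *dev = ctx->devices[i];

                        /* adopt the owning process's address space... */
                        kthread_use_mm(dev->mm);

                        /* ...run this device's queued vq work (invented helper)... */
                        vhost_dev_run_work(dev);

                        /* ...and drop the mm before moving to the next device */
                        kthread_unuse_mm(dev->mm);
                }
        }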
On Tue, Dec 01, 2020 at 02:45:18PM +0100, Stefano Garzarella wrote: > On Tue, Dec 01, 2020 at 12:59:43PM +0000, Stefan Hajnoczi wrote: > > On Fri, Nov 20, 2020 at 07:31:08AM -0500, Michael S. Tsirkin wrote: > > > On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote: > > > > On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote: > > > > > > > > > > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie > > > > > <michael.christie@oracle.com> wrote: > > > > > > > > > > > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote: > > > > > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie > > > > > > > <michael.christie@oracle.com> wrote: > > > > > > >> > > > > > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote: > > > > > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote: > > > > > > > struct vhost_run_worker_info { > > > > > > > struct timespec *timeout; > > > > > > > sigset_t *sigmask; > > > > > > > > > > > > > > /* List of virtqueues to process */ > > > > > > > unsigned nvqs; > > > > > > > unsigned vqs[]; > > > > > > > }; > > > > > > > > > > > > > > /* This blocks until the timeout is reached, a signal is received, or > > > > > > > the vhost device is destroyed */ > > > > > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info); > > > > > > > > > > > > > > As you can see, userspace isn't involved with dealing with the > > > > > > > requests. It just acts as a thread donor to the vhost driver. > > > > > > > > > > > > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the > > > > > > > penalty of switching into the kernel, copying in the arguments, etc. > > > > > > > > > > > > I didn't get this part. Why have the timeout? When the timeout expires, > > > > > > does userspace just call right back down to the kernel or does it do > > > > > > some sort of processing/operation? > > > > > > > > > > > > You could have your worker function run from that ioctl wait for a > > > > > > signal or a wake up call from the vhost_work/poll functions. > > > > > > > > > > An optional timeout argument is common in blocking interfaces like > > > > > poll(2), recvmmsg(2), etc. > > > > > > > > > > Although something can send a signal to the thread instead, > > > > > implementing that in an application is more awkward than passing a > > > > > struct timespec. > > > > > > > > > > Compared to other blocking calls we don't expect > > > > > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will > > > > > rarely be used and can be dropped from the interface. > > > > > > > > > > BTW the code I posted wasn't a carefully thought out proposal :). The > > > > > details still need to be considered and I'm going to be offline for > > > > > the next week so maybe someone else can think it through in the > > > > > meantime. > > > > > > > > One final thought before I'm offline for a week. If > > > > ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance > > > > then it's hard to support poll-mode (busy waiting) workers because > > > > each device instance consumes a whole CPU. If we stick to an interface > > > > where the kernel manages the worker threads then it's easier to share > > > > workers between devices for polling. > > > > > > > > > Yes that is the reason vhost did its own reason in the first place. > > > > > > > > > I am vaguely thinking about poll(2) or a similar interface, > > > which can wait for an event on multiple FDs. 
> > > > I can imagine how using poll(2) would work from a userspace perspective, > > but on the kernel side I don't think it can be implemented cleanly. > > poll(2) is tied to the file_operations->poll() callback and > > read/write/error events. Not to mention there isn't a way to substitue > > the vhost worker thread function instead of scheduling out the current > > thread while waiting for poll fd events. > > > > But maybe ioctl(VHOST_WORKER_RUN) can do it: > > > > struct vhost_run_worker_dev { > > int vhostfd; /* /dev/vhost-TYPE fd */ > > unsigned nvqs; /* number of virtqueues in vqs[] */ > > unsigned vqs[]; /* virtqueues to process */ > > }; > > > > struct vhost_run_worker_info { > > struct timespec *timeout; > > sigset_t *sigmask; > > > > unsigned ndevices; > > struct vhost_run_worker_dev *devices[]; > > }; > > > > In the simple case userspace sets ndevices to 1 and we just handle > > virtqueues for the current device. > > > > In the fancier shared worker thread case the userspace process has the > > vhost fds of all the devices it is processing and passes them to > > ioctl(VHOST_WORKER_RUN) via struct vhost_run_worker_dev elements. > > Which fd will be used for this IOCTL? One of the 'vhostfd' or we should > create a new /dev/vhost-workers (or something similar)? > > Maybe the new device will be cleaner and can be reused also for other stuff > (I'm thinking about vDPA software devices). > > > > > From a security perspective it means the userspace thread has access to > > all vhost devices (because it has their fds). > > > > I'm not sure how the mm is supposed to work. The devices might be > > associated with different userspace processes (guests) and therefore > > have different virtual memory. > > Maybe in this case we should do something similar to io_uring SQPOLL kthread > where kthread_use_mm()/kthread_unuse_mm() is used to switch virtual memory > spaces. > > After writing, I saw that we already do it this in the vhost_worker() in > drivers/vhost/vhost.c > > > > > Just wanted to push this discussion along a little further. I'm buried > > under emails and probably wont be very active over the next few days. > > > > I think ioctl(VHOST_WORKER_RUN) might be the right way and also maybe the > least difficult one. Sending an ioctl API proposal email could help progress this discussion. Interesting questions: 1. How to specify which virtqueues to process (Mike's use case)? 2. How to process multiple devices? Stefan
On Tue, Dec 01, 2020 at 05:43:38PM +0000, Stefan Hajnoczi wrote: >On Tue, Dec 01, 2020 at 02:45:18PM +0100, Stefano Garzarella wrote: >> On Tue, Dec 01, 2020 at 12:59:43PM +0000, Stefan Hajnoczi wrote: >> > On Fri, Nov 20, 2020 at 07:31:08AM -0500, Michael S. Tsirkin wrote: >> > > On Fri, Nov 20, 2020 at 08:45:49AM +0000, Stefan Hajnoczi wrote: >> > > > On Thu, Nov 19, 2020 at 5:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote: >> > > > > >> > > > > On Thu, Nov 19, 2020 at 4:43 PM Mike Christie >> > > > > <michael.christie@oracle.com> wrote: >> > > > > > >> > > > > > On 11/19/20 10:24 AM, Stefan Hajnoczi wrote: >> > > > > > > On Thu, Nov 19, 2020 at 4:13 PM Mike Christie >> > > > > > > <michael.christie@oracle.com> wrote: >> > > > > > >> >> > > > > > >> On 11/19/20 8:46 AM, Michael S. Tsirkin wrote: >> > > > > > >>> On Wed, Nov 18, 2020 at 11:31:17AM +0000, Stefan Hajnoczi wrote: >> > > > > > > struct vhost_run_worker_info { >> > > > > > > struct timespec *timeout; >> > > > > > > sigset_t *sigmask; >> > > > > > > >> > > > > > > /* List of virtqueues to process */ >> > > > > > > unsigned nvqs; >> > > > > > > unsigned vqs[]; >> > > > > > > }; >> > > > > > > >> > > > > > > /* This blocks until the timeout is reached, a signal is received, or >> > > > > > > the vhost device is destroyed */ >> > > > > > > int ret = ioctl(vhost_fd, VHOST_RUN_WORKER, &info); >> > > > > > > >> > > > > > > As you can see, userspace isn't involved with dealing with the >> > > > > > > requests. It just acts as a thread donor to the vhost driver. >> > > > > > > >> > > > > > > We would want the VHOST_RUN_WORKER calls to be infrequent to avoid the >> > > > > > > penalty of switching into the kernel, copying in the arguments, etc. >> > > > > > >> > > > > > I didn't get this part. Why have the timeout? When the timeout expires, >> > > > > > does userspace just call right back down to the kernel or does it do >> > > > > > some sort of processing/operation? >> > > > > > >> > > > > > You could have your worker function run from that ioctl wait for a >> > > > > > signal or a wake up call from the vhost_work/poll functions. >> > > > > >> > > > > An optional timeout argument is common in blocking interfaces like >> > > > > poll(2), recvmmsg(2), etc. >> > > > > >> > > > > Although something can send a signal to the thread instead, >> > > > > implementing that in an application is more awkward than passing a >> > > > > struct timespec. >> > > > > >> > > > > Compared to other blocking calls we don't expect >> > > > > ioctl(VHOST_RUN_WORKER) to return soon, so maybe the timeout will >> > > > > rarely be used and can be dropped from the interface. >> > > > > >> > > > > BTW the code I posted wasn't a carefully thought out proposal >> > > > > :). The >> > > > > details still need to be considered and I'm going to be offline for >> > > > > the next week so maybe someone else can think it through in the >> > > > > meantime. >> > > > >> > > > One final thought before I'm offline for a week. If >> > > > ioctl(VHOST_RUN_WORKER) is specific to a single vhost device instance >> > > > then it's hard to support poll-mode (busy waiting) workers because >> > > > each device instance consumes a whole CPU. If we stick to an interface >> > > > where the kernel manages the worker threads then it's easier to >> > > > share >> > > > workers between devices for polling. >> > > >> > > >> > > Yes that is the reason vhost did its own reason in the first place. 
>> > > >> > > >> > > I am vaguely thinking about poll(2) or a similar interface, >> > > which can wait for an event on multiple FDs. >> > >> > I can imagine how using poll(2) would work from a userspace perspective, >> > but on the kernel side I don't think it can be implemented cleanly. >> > poll(2) is tied to the file_operations->poll() callback and >> > read/write/error events. Not to mention there isn't a way to substitue >> > the vhost worker thread function instead of scheduling out the current >> > thread while waiting for poll fd events. >> > >> > But maybe ioctl(VHOST_WORKER_RUN) can do it: >> > >> > struct vhost_run_worker_dev { >> > int vhostfd; /* /dev/vhost-TYPE fd */ >> > unsigned nvqs; /* number of virtqueues in vqs[] */ >> > unsigned vqs[]; /* virtqueues to process */ >> > }; >> > >> > struct vhost_run_worker_info { >> > struct timespec *timeout; >> > sigset_t *sigmask; >> > >> > unsigned ndevices; >> > struct vhost_run_worker_dev *devices[]; >> > }; >> > >> > In the simple case userspace sets ndevices to 1 and we just handle >> > virtqueues for the current device. >> > >> > In the fancier shared worker thread case the userspace process has the >> > vhost fds of all the devices it is processing and passes them to >> > ioctl(VHOST_WORKER_RUN) via struct vhost_run_worker_dev elements. >> >> Which fd will be used for this IOCTL? One of the 'vhostfd' or we should >> create a new /dev/vhost-workers (or something similar)? >> >> Maybe the new device will be cleaner and can be reused also for other stuff >> (I'm thinking about vDPA software devices). >> >> > >> > From a security perspective it means the userspace thread has access to >> > all vhost devices (because it has their fds). >> > >> > I'm not sure how the mm is supposed to work. The devices might be >> > associated with different userspace processes (guests) and therefore >> > have different virtual memory. >> >> Maybe in this case we should do something similar to io_uring SQPOLL kthread >> where kthread_use_mm()/kthread_unuse_mm() is used to switch virtual memory >> spaces. >> >> After writing, I saw that we already do it this in the vhost_worker() in >> drivers/vhost/vhost.c >> >> > >> > Just wanted to push this discussion along a little further. I'm buried >> > under emails and probably wont be very active over the next few days. >> > >> >> I think ioctl(VHOST_WORKER_RUN) might be the right way and also maybe the >> least difficult one. > >Sending an ioctl API proposal email could help progress this >discussion. > >Interesting questions: >1. How to specify which virtqueues to process (Mike's use case)? >2. How to process multiple devices? > Okay, I'll try to prepare a tentative proposal next week with those questions in mind :-) Thanks, Stefano
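For reference, a tentative uapi shape answering both of Stefan's questions might look roughly like the following. Everything here is a discussion sketch, not an existing interface: the structs restate the earlier proposal (per-device vq lists answer question 1, the ndevices array answers question 2), VHOST_VIRTIO is the existing 0xAF vhost ioctl magic, but the 0x31 request number is made up.

        #include <signal.h>
        #include <time.h>
        #include <linux/vhost.h>        /* for the VHOST_VIRTIO (0xAF) magic */

        struct vhost_run_worker_dev {
                int vhostfd;            /* /dev/vhost-TYPE fd */
                unsigned nvqs;          /* number of entries in vqs[] */
                unsigned vqs[];         /* which of that device's vqs to process */
        };

        struct vhost_run_worker_info {
                struct timespec *timeout;       /* NULL: block indefinitely */
                sigset_t *sigmask;              /* NULL: keep the current mask */
                unsigned ndevices;              /* how many devices to process */
                struct vhost_run_worker_dev *devices[];
        };

        /* request number 0x31 is a placeholder, not an allocated ioctl */
        #define VHOST_RUN_WORKER _IOW(VHOST_VIRTIO, 0x31, struct vhost_run_worker_info)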