Message ID | 1459258923-10319-3-git-send-email-mst@redhat.com (mailing list archive)
---|---
State | New, archived
On 29/03/2016 15:42, Michael S. Tsirkin wrote:
> @@ -262,6 +274,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
>
>      /* Stop notifications for new requests from guest */
>      virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);

I think that this should have been ", true, false" even before your
patch; I'd prefer to fix it even if the problem is latent.

The patch looks good, and it might even be an API improvement
independent of Conny's patches. The reentrancy _is_ hard to understand,
and eliminating it makes the new API not just a hack.

In that case I would unify the new function with
virtio_queue_aio_set_host_notifier_handler. In other words

- virtio_queue_aio_set_host_notifier_handler(vq, ctx, NULL) is the same as

      virtio_set_queue_aio(s->vq, NULL);
      virtio_queue_aio_set_host_notifier_handler(vq, ctx, true, false);

- virtio_queue_aio_set_host_notifier_handler(vq, ctx, fn) is the same as

      virtio_queue_aio_set_host_notifier_handler(vq, ctx, true, true);
      virtio_set_queue_aio(vq, fn);

Thanks,

Paolo

> +    virtio_set_queue_aio(s->vq, NULL);
>
>      /* Drain and switch bs back to the QEMU main loop */
>      blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
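A minimal sketch of the unified entry point Paolo describes could look like
the following. This is only an illustration of the proposal: the _full suffix
on the old four-argument helper and the VirtIOHandleAIOOutput typedef are
placeholder names invented here, not the actual QEMU API.

    /* Sketch only: assumes the existing four-argument helper is kept around
     * under a hypothetical _full name and that a handler typedef exists. */
    typedef void (*VirtIOHandleAIOOutput)(VirtIODevice *vdev, VirtQueue *vq);

    void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq,
                                                    AioContext *ctx,
                                                    VirtIOHandleAIOOutput fn)
    {
        if (!fn) {
            /* Tear down: clear the aio handler first, then keep the
             * ioeventfd assigned but stop handling it in ctx. */
            virtio_set_queue_aio(vq, NULL);
            virtio_queue_aio_set_host_notifier_handler_full(vq, ctx, true, false);
        } else {
            /* Set up: start handling the ioeventfd in ctx, then install
             * the aio handler that will service the queue. */
            virtio_queue_aio_set_host_notifier_handler_full(vq, ctx, true, true);
            virtio_set_queue_aio(vq, fn);
        }
    }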
On Tue, Mar 29, 2016 at 03:56:18PM +0200, Paolo Bonzini wrote:
> On 29/03/2016 15:42, Michael S. Tsirkin wrote:
> > @@ -262,6 +274,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
> >
> >      /* Stop notifications for new requests from guest */
> >      virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
>
> I think that this should have been ", true, false" even before your
> patch; I'd prefer to fix it even if the problem is latent.

Makes sense - post a patch?

> The patch looks good, and it might even be an API improvement
> independent of Conny's patches. The reentrancy _is_ hard to understand,
> and eliminating it makes the new API not just a hack.
>
> In that case I would unify the new function with
> virtio_queue_aio_set_host_notifier_handler. In other words
>
> - virtio_queue_aio_set_host_notifier_handler(vq, ctx, NULL) is the same as
>
>       virtio_set_queue_aio(s->vq, NULL);
>       virtio_queue_aio_set_host_notifier_handler(vq, ctx, true, false);
>
> - virtio_queue_aio_set_host_notifier_handler(vq, ctx, fn) is the same as
>
>       virtio_queue_aio_set_host_notifier_handler(vq, ctx, true, true);
>       virtio_set_queue_aio(vq, fn);
>
> Thanks,
>
> Paolo

In that case, we'll have to also change scsi to use the new API.
A bit more work, to be sure.
Does scsi have the same problem as blk?

> > +    virtio_set_queue_aio(s->vq, NULL);
> >
> >      /* Drain and switch bs back to the QEMU main loop */
> >      blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
On 29/03/2016 15:58, Michael S. Tsirkin wrote:
> In that case, we'll have to also change scsi to use the new API.
> A bit more work, to be sure.
> Does scsi have the same problem as blk?

Yes. The bug is in the virtio core.

Paolo
On 29/03/2016 15:42, Michael S. Tsirkin wrote:
> +    if (s->dataplane) {
> +        /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
> +         * dataplane here instead of waiting for .set_status().
> +         */
> +        if (!s->dataplane_started) {
> +            virtio_blk_data_plane_start(s->dataplane);
> +        }
> +        return;
> +    }
> +
> +    virtio_blk_handle_vq(s, vq);

Another small comment, this can be written simply as

    if (s->dataplane) {
        virtio_blk_data_plane_start(s->dataplane);
    } else {
        virtio_blk_handle_vq(s, vq);
    }

Paolo
On Tue, Mar 29, 2016 at 04:05:46PM +0200, Paolo Bonzini wrote:
> On 29/03/2016 15:42, Michael S. Tsirkin wrote:
> > +    if (s->dataplane) {
> > +        /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
> > +         * dataplane here instead of waiting for .set_status().
> > +         */
> > +        if (!s->dataplane_started) {
> > +            virtio_blk_data_plane_start(s->dataplane);
> > +        }
> > +        return;
> > +    }
> > +
> > +    virtio_blk_handle_vq(s, vq);
>
> Another small comment, this can be written simply as
>
>     if (s->dataplane) {
>         virtio_blk_data_plane_start(s->dataplane);

True, it's best not to poke at dataplane_started.

>     } else {
>         virtio_blk_handle_vq(s, vq);
>     }

I prefer the return style I think, to stress the fact that this is an
unusual, unexpected case.

> Paolo
On 29/03/2016 16:09, Michael S. Tsirkin wrote:
>> Another small comment, this can be written simply as
>>
>>     if (s->dataplane) {
>>         virtio_blk_data_plane_start(s->dataplane);
>
> True, it's best not to poke at dataplane_started.
>
>>     } else {
>>         virtio_blk_handle_vq(s, vq);
>>     }
>
> I prefer the return style I think, to stress the fact that this is an
> unusual, unexpected case.

We're getting dangerously close to personal preference, but I've noticed
in virtio-scsi that you get a very common pattern of:

    void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq)
    {
        ... virtqueue_pop ...
    }

    static void virtio_scsi_data_plane_handle_ctrl(VirtIODevice *vdev,
                                                   VirtQueue *vq)
    {
        VirtIOSCSI *s = VIRTIO_SCSI(vdev);

        assert(dataplane active and started);
        virtio_scsi_handle_ctrl_vq(s, vq);
    }

    static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
    {
        VirtIOSCSI *s = VIRTIO_SCSI(vdev);

        if (dataplane active) {
            virtio_scsi_dataplane_start(s);
        } else {
            virtio_scsi_handle_ctrl_vq(s, vq);
        }
    }

so it's not really an unusual, unexpected case but a complete separation
between the dataplane case (handle_output starts dataplane) and the
non-dataplane case (handle_output just does a cast and calls the actual
workhorse).

Paolo
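Made concrete, that split might look roughly as follows. This is only a
sketch: the s->ctx and s->dataplane_started fields and the function names are
carried over from Paolo's pseudocode and the virtio-blk patch below, not
quoted from the actual virtio-scsi sources.

    /* Sketch only: field and function names are assumptions, not the real
     * virtio-scsi implementation. */
    static void virtio_scsi_data_plane_handle_ctrl(VirtIODevice *vdev,
                                                   VirtQueue *vq)
    {
        VirtIOSCSI *s = VIRTIO_SCSI(vdev);

        /* Only reached via the aio handler installed when dataplane starts,
         * so dataplane must already be active here. */
        assert(s->ctx && s->dataplane_started);
        virtio_scsi_handle_ctrl_vq(s, vq);
    }

    static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
    {
        VirtIOSCSI *s = VIRTIO_SCSI(vdev);

        if (s->ctx) {
            /* Dataplane configured: the guest kicked before .set_status(),
             * so start dataplane and let its handler pick up the request. */
            virtio_scsi_dataplane_start(s);
            return;
        }
        virtio_scsi_handle_ctrl_vq(s, vq);
    }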
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index ae84d92..df517ff 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -85,4 +85,6 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb);
 
 void virtio_blk_submit_multireq(BlockBackend *blk, MultiReqBuffer *mrb);
 
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq);
+
 #endif
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 36f3d2b..7d1f3b2 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -184,6 +184,17 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
     g_free(s);
 }
 
+static void virtio_blk_data_plane_handle_output(VirtIODevice *vdev,
+                                                VirtQueue *vq)
+{
+    VirtIOBlock *s = VIRTIO_BLK(vdev);
+
+    assert(s->dataplane);
+    assert(s->dataplane_started);
+
+    virtio_blk_handle_vq(s, vq);
+}
+
 /* Context: QEMU global mutex held */
 void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
 {
@@ -226,6 +237,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
 
     /* Get this show started by hooking up our callbacks */
     aio_context_acquire(s->ctx);
+    virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
     virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
     aio_context_release(s->ctx);
     return;
@@ -262,6 +274,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
 
     /* Stop notifications for new requests from guest */
     virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
+    virtio_set_queue_aio(s->vq, NULL);
 
     /* Drain and switch bs back to the QEMU main loop */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index cb710f1..5717f09 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -577,20 +577,11 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
     }
 }
 
-static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
 {
-    VirtIOBlock *s = VIRTIO_BLK(vdev);
     VirtIOBlockReq *req;
     MultiReqBuffer mrb = {};
 
-    /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
-     * dataplane here instead of waiting for .set_status().
-     */
-    if (s->dataplane && !s->dataplane_started) {
-        virtio_blk_data_plane_start(s->dataplane);
-        return;
-    }
-
     blk_io_plug(s->blk);
 
     while ((req = virtio_blk_get_request(s))) {
@@ -604,6 +595,23 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
     blk_io_unplug(s->blk);
 }
 
+static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIOBlock *s = VIRTIO_BLK(vdev);
+
+    if (s->dataplane) {
+        /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
+         * dataplane here instead of waiting for .set_status().
+         */
+        if (!s->dataplane_started) {
+            virtio_blk_data_plane_start(s->dataplane);
+        }
+        return;
+    }
+
+    virtio_blk_handle_vq(s, vq);
+}
+
 static void virtio_blk_dma_restart_bh(void *opaque)
 {
     VirtIOBlock *s = opaque;
In addition to handling IO in the vcpu thread and in the io thread, blk
dataplane introduces yet another mode: handling it by aio. This reuses the
same handler as the previous modes, which triggers races as these were not
designed to be reentrant.

As a temporary fix, use a separate handler just for aio, and disable the
regular handlers when dataplane is active.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/hw/virtio/virtio-blk.h  |  2 ++
 hw/block/dataplane/virtio-blk.c | 13 +++++++++++++
 hw/block/virtio-blk.c           | 28 ++++++++++++++++++----------
 3 files changed, 33 insertions(+), 10 deletions(-)
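The virtio.c side that introduces virtio_set_queue_aio() is not shown in this
hunk; a minimal sketch of what such a helper could look like, assuming
VirtQueue gains a dedicated handle_aio_output callback (an assumption made
here for illustration, not taken from the patch), is:

    /* Sketch only: assumes struct VirtQueue carries a separate
     * handle_aio_output callback next to the existing handle_output; the
     * real change lives in another patch of this series and may differ. */
    void virtio_set_queue_aio(VirtQueue *vq,
                              void (*handle_output)(VirtIODevice *vdev,
                                                    VirtQueue *vq))
    {
        vq->handle_aio_output = handle_output;
    }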