| Message ID | 20210610224837.670192-9-vivek.kasireddy@intel.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | virtio-gpu: Add a default synchronization mechanism for blobs |
Hi,

> -    if (!cmd->finished) {
> +    if (!cmd->finished && !(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
>          virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error ? cmd->error :
>                                          VIRTIO_GPU_RESP_OK_NODATA);
>      }

My idea would be more along the lines of ...

    if (!cmd->finished) {
        if (renderer_blocked) {
            g->pending_completion = cmd;
        } else {
            virtio_gpu_ctrl_response_nodata(...)
        }
    }

Then, when resuming processing after unblock, check pending_completion
and call virtio_gpu_ctrl_response_nodata if needed.

Workflow:

virtio_gpu_simple_process_cmd()
  -> virtio_gpu_resource_flush()
  -> dpy_gfx_update()
  -> gd_gl_area_update()
     call graphic_hw_gl_block(true), create fence.

virtio_gpu_simple_process_cmd()
  -> will see renderer_blocked and delays RESOURCE_FLUSH completion.

Then, when the fence is ready, gtk will:
 - call graphic_hw_gl_block(false)
 - call graphic_hw_gl_flush()
   -> virtio-gpu resumes processing the cmd queue.

When you use the existing block/unblock functionality, the fence can be a
gtk internal detail: virtio-gpu doesn't need to know that gtk uses a
fence to wait for the moment when it can unblock virtio queue processing
(the egl fence helpers still make sense).

take care,
  Gerd
Hi Gerd,

> > -    if (!cmd->finished) {
> > +    if (!cmd->finished && !(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
> >          virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error ? cmd->error :
> >                                          VIRTIO_GPU_RESP_OK_NODATA);
> >      }
>
> My idea would be more along the lines of ...
>
>     if (!cmd->finished) {
>         if (renderer_blocked) {
>             g->pending_completion = cmd;
>         } else {
>             virtio_gpu_ctrl_response_nodata(...)
>         }
>     }
>
> Then, when resuming processing after unblock, check pending_completion
> and call virtio_gpu_ctrl_response_nodata if needed.
>
> Workflow:
>
> virtio_gpu_simple_process_cmd()
>   -> virtio_gpu_resource_flush()
>   -> dpy_gfx_update()
>   -> gd_gl_area_update()
>      call graphic_hw_gl_block(true), create fence.
[Kasireddy, Vivek] So, with blobs, as you know, we call dpy_gl_update(), and
this call just "queues" the render/redraw. GTK then later calls the render
signal callback, which in this case would be gd_gl_area_draw(); that is where
the actual blit happens, followed by glFlush, and only after that can we
create a fence.

> virtio_gpu_simple_process_cmd()
>   -> will see renderer_blocked and delays RESOURCE_FLUSH completion.
>
> Then, when the fence is ready, gtk will:
>  - call graphic_hw_gl_block(false)
>  - call graphic_hw_gl_flush()
>    -> virtio-gpu resumes processing the cmd queue.
[Kasireddy, Vivek] Yeah, I think this can be done.

> When you use the existing block/unblock functionality, the fence can be a
> gtk internal detail: virtio-gpu doesn't need to know that gtk uses a
> fence to wait for the moment when it can unblock virtio queue processing
> (the egl fence helpers still make sense).
[Kasireddy, Vivek] Ok, I'll try to include your suggestions in v3.

Thanks,
Vivek

> take care,
>   Gerd
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 4d549377cb..bd96332973 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -982,7 +982,7 @@ void virtio_gpu_simple_process_cmd(VirtIOGPU *g,
         cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
         break;
     }
-    if (!cmd->finished) {
+    if (!cmd->finished && !(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
         virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error ? cmd->error :
                                         VIRTIO_GPU_RESP_OK_NODATA);
     }
@@ -1040,6 +1040,46 @@ void virtio_gpu_process_cmdq(VirtIOGPU *g)
     g->processing_cmdq = false;
 }
 
+static void virtio_gpu_signal_fence(VirtIOGPU *g,
+                                    struct virtio_gpu_ctrl_command *cmd,
+                                    enum virtio_gpu_ctrl_type type)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_flush rf;
+
+    VIRTIO_GPU_FILL_CMD(rf);
+    virtio_gpu_bswap_32(&rf, sizeof(rf));
+    res = virtio_gpu_find_check_resource(g, rf.resource_id, true,
+                                         __func__, &cmd->error);
+    if (res) {
+        virtio_gpu_resource_wait_sync(g, res);
+    }
+    virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
+}
+
+static void virtio_gpu_process_fenceq(VirtIOGPU *g)
+{
+    struct virtio_gpu_ctrl_command *cmd, *tmp;
+
+    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
+        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
+        virtio_gpu_signal_fence(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
+        QTAILQ_REMOVE(&g->fenceq, cmd, next);
+        g_free(cmd);
+        g->inflight--;
+        if (virtio_gpu_stats_enabled(g->parent_obj.conf)) {
+            fprintf(stderr, "inflight: %3d (-)\r", g->inflight);
+        }
+    }
+}
+
+static void virtio_gpu_handle_gl_flushed(VirtIOGPUBase *b)
+{
+    VirtIOGPU *g = container_of(b, VirtIOGPU, parent_obj);
+
+    virtio_gpu_process_fenceq(g);
+}
+
 static void virtio_gpu_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
 {
     VirtIOGPU *g = VIRTIO_GPU(vdev);
@@ -1398,10 +1438,12 @@ static void virtio_gpu_class_init(ObjectClass *klass, void *data)
     DeviceClass *dc = DEVICE_CLASS(klass);
     VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
     VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
+    VirtIOGPUBaseClass *vgbc = &vgc->parent;
 
     vgc->handle_ctrl = virtio_gpu_handle_ctrl;
     vgc->process_cmd = virtio_gpu_simple_process_cmd;
     vgc->update_cursor_data = virtio_gpu_update_cursor_data;
+    vgbc->gl_flushed = virtio_gpu_handle_gl_flushed;
 
     vdc->realize = virtio_gpu_device_realize;
     vdc->reset = virtio_gpu_reset;
Adding this callback provides a way to determine when the UI has
submitted the buffer to the Host windowing system. Making the guest
wait for this event will ensure that the dmabuf/buffer updates are
synchronized.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 hw/display/virtio-gpu.c | 44 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 43 insertions(+), 1 deletion(-)