Message ID | 20200205181955.202485-10-olvaffe@gmail.com (mailing list archive)
---|---
State | New, archived
Series | drm/virtio: fixes and cleanups for vbuf queuing
On Wed, Feb 05, 2020 at 10:19:53AM -0800, Chia-I Wu wrote:
> Make sure elemcnt does not exceed the maximum element count in
> virtio_gpu_queue_ctrl_sgs. We should improve our error handling or
> impose a size limit on execbuffer, which are TODOs.

Hmm, virtio supports indirect ring entries, so large execbuffers should
not be a problem ...

So I've waded through the virtio code. Figured our logic is wrong.
Luckily we err on the safe side (waiting for more free entries than we
actually need). The patch below should fix that (not tested yet).

cheers,
  Gerd

diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index aa25e8781404..535399b3a3ea 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -328,7 +328,7 @@ static bool virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
 {
 	struct virtqueue *vq = vgdev->ctrlq.vq;
 	bool notify = false;
-	int ret;
+	int vqcnt, ret;
 
 again:
 	spin_lock(&vgdev->ctrlq.qlock);
@@ -341,9 +341,10 @@ static bool virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
 		return notify;
 	}
 
-	if (vq->num_free < elemcnt) {
+	vqcnt = virtqueue_use_indirect(vq, elemcnt) ? 1 : elemcnt;
+	if (vq->num_free < vqcnt) {
 		spin_unlock(&vgdev->ctrlq.qlock);
-		wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= elemcnt);
+		wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= vqcnt);
 		goto again;
 	}
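[For readers unfamiliar with the mechanism: with VIRTIO_RING_F_INDIRECT_DESC
negotiated, a chain of many sg elements is packed into a separately allocated
indirect descriptor table, so the whole request occupies a single slot in the
ring. A minimal sketch of the slot accounting the patch above aims for;
required_free_slots() and the use_indirect flag are illustrative names, not
kernel code, and virtqueue_use_indirect() itself is a private helper inside
drivers/virtio/virtio_ring.c that would need to be exposed for a driver to
call it as the patch does:]

/*
 * Illustrative sketch only: how many ring slots a request actually
 * needs.  With indirect descriptors the entire sg chain hangs off a
 * single descriptor; without them, every element needs its own slot.
 * "use_indirect" stands in for the result of virtqueue_use_indirect().
 */
static unsigned int required_free_slots(unsigned int elemcnt,
					bool use_indirect)
{
	return (use_indirect && elemcnt > 1) ? 1 : elemcnt;
}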
On Thu, Feb 6, 2020 at 1:49 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Wed, Feb 05, 2020 at 10:19:53AM -0800, Chia-I Wu wrote:
> > Make sure elemcnt does not exceed the maximum element count in
> > virtio_gpu_queue_ctrl_sgs. We should improve our error handling or
> > impose a size limit on execbuffer, which are TODOs.
>
> Hmm, virtio supports indirect ring entries, so large execbuffers should
> not be a problem ...
>
> So I've waded through the virtio code. Figured our logic is wrong.
> Luckily we err on the safe side (waiting for more free entries than we
> actually need). The patch below should fix that (not tested yet).

That is good to know! I was not sure if we have
VIRTIO_RING_F_INDIRECT_DESC, so I kept our logic. I will drop this
patch in v2.

> cheers,
>   Gerd
>
> [...]
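[For reference, a driver can tell whether the transport negotiated indirect
descriptors by testing the feature bit. A minimal sketch follows;
virtio_gpu_has_indirect() is a hypothetical helper name, while
virtio_has_feature() and VIRTIO_RING_F_INDIRECT_DESC are the existing kernel
API:]

#include <linux/virtio_config.h>
#include <linux/virtio_ring.h>
#include "virtgpu_drv.h"

/* Hypothetical helper, not part of the driver: true when the device
 * negotiated VIRTIO_RING_F_INDIRECT_DESC, i.e. large sg chains can be
 * queued through indirect descriptor tables. */
static bool virtio_gpu_has_indirect(struct virtio_gpu_device *vgdev)
{
	return virtio_has_feature(vgdev->vdev, VIRTIO_RING_F_INDIRECT_DESC);
}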
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 7e69c06e168ea..f7520feb39d4b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -143,6 +143,7 @@ struct virtio_gpu_framebuffer {
 
 struct virtio_gpu_queue {
 	struct virtqueue *vq;
+	unsigned int max_free;
 	spinlock_t qlock;
 	wait_queue_head_t ack_queue;
 	struct work_struct dequeue_work;
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 205ec4abae2b9..0954f61d2000f 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -132,6 +132,9 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 		goto out_unused_fd;
 	}
 
+	/* XXX virtio_gpu_cmd_submit may fail silently when exbuf->size is
+	 * huge
+	 */
 	buf = vmemdup_user(u64_to_user_ptr(exbuf->command), exbuf->size);
 	if (IS_ERR(buf)) {
 		ret = PTR_ERR(buf);
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 2f5773e43557c..e7d5840e432dc 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -170,7 +170,9 @@ int virtio_gpu_init(struct drm_device *dev)
 		goto err_vqs;
 	}
 	vgdev->ctrlq.vq = vqs[0];
+	vgdev->ctrlq.max_free = vqs[0]->num_free;
 	vgdev->cursorq.vq = vqs[1];
+	vgdev->cursorq.max_free = vqs[1]->num_free;
 	ret = virtio_gpu_alloc_vbufs(vgdev);
 	if (ret) {
 		DRM_ERROR("failed to alloc vbufs\n");
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 0bf82cff8da37..725cfe93bcef8 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -333,7 +333,7 @@ static bool virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
 again:
 	spin_lock(&vgdev->ctrlq.qlock);
 
-	if (!vgdev->vqs_ready) {
+	if (unlikely(!vgdev->vqs_ready || elemcnt > vgdev->ctrlq.max_free)) {
 		spin_unlock(&vgdev->ctrlq.qlock);
 
 		if (fence && vbuf->objs)
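[The XXX comment above marks the error-handling TODO from the commit message.
One possible shape for that execbuffer size limit, purely as a sketch: the
check, the constant, and its placement before the vmemdup_user() call are
assumptions for illustration, not part of this patch:]

/* Hypothetical cap, not in this patch: reject oversized execbuffers up
 * front with a clear error instead of letting virtio_gpu_cmd_submit()
 * drop them silently.  The limit value is an arbitrary placeholder. */
#define VIRTIO_GPU_MAX_EXBUF_SIZE	SZ_256K	/* from <linux/sizes.h> */

	if (!exbuf->size || exbuf->size > VIRTIO_GPU_MAX_EXBUF_SIZE) {
		ret = -EINVAL;
		goto out_unused_fd;
	}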
Make sure elemcnt does not exceed the maximum element count in
virtio_gpu_queue_ctrl_sgs. We should improve our error handling or
impose a size limit on execbuffer, which are TODOs.

Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Cc: David Riley <davidriley@chromium.org>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   | 1 +
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 3 +++
 drivers/gpu/drm/virtio/virtgpu_kms.c   | 2 ++
 drivers/gpu/drm/virtio/virtgpu_vq.c    | 2 +-
 4 files changed, 7 insertions(+), 1 deletion(-)