[0/7] dma-buf: Add an API for exporting sync files (v11)

Message ID: 20210525211753.1086069-1-jason@jlekstrand.net

Message

Jason Ekstrand May 25, 2021, 9:17 p.m. UTC
Modern userspace APIs like Vulkan are built on an explicit
synchronization model.  This doesn't always play nicely with the
implicit synchronization used in the kernel and assumed by X11 and
Wayland.  The client -> compositor half of the synchronization isn't too
bad, at least on Intel, because we can control whether or not i915
synchronizes on the buffer and whether or not it's considered written.

The harder part is the compositor -> client synchronization when we get
the buffer back from the compositor.  We're required to be able to
provide the client with a VkSemaphore and VkFence representing the point
in time where the window system (compositor and/or display) finished
using the buffer.  With current APIs, it's very hard to do this in such
a way that we don't get confused by the Vulkan driver's access of the
buffer.  In particular, once we tell the kernel that we're rendering to
the buffer again, any CPU waits on the buffer or GPU dependencies will
wait on some of the client rendering and not just the compositor.

This new IOCTL solves this problem by allowing us to get a snapshot of
the implicit synchronization state of a given dma-buf in the form of a
sync file.  It's effectively the same as a poll() or I915_GEM_WAIT,
only instead of waiting on the CPU directly, it encapsulates the wait
operation, at the current moment in time, in a sync_file so we can
check/wait on it later.  As long as the Vulkan driver does the
sync_file export from the dma-buf before we re-introduce it for
rendering, it will only contain fences from the compositor or display.
This allows us to accurately turn it into a VkFence or VkSemaphore
without any over-synchronization.
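
As a concrete sketch (using the uapi proposed in this series; error
handling trimmed for brevity), the export looks like this from
userspace:

  #include <linux/dma-buf.h>
  #include <sys/ioctl.h>

  /* Snapshot the implicit-sync state of a dma-buf as a sync_file fd.
   * DMA_BUF_SYNC_WRITE requests all fences (readers and writers),
   * which is what we want before writing to the buffer again.
   * Returns the sync_file fd, or -1 on error.
   */
  static int export_sync_file(int dmabuf_fd)
  {
          struct dma_buf_export_sync_file arg = {
                  .flags = DMA_BUF_SYNC_WRITE,
                  .fd = -1,
          };

          if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
                  return -1;

          /* arg.fd can now back a VkFence or VkSemaphore; as long as
           * the export happens before the buffer is re-used for
           * rendering, it only contains compositor/display fences.
           */
          return arg.fd;
  }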

This patch series actually contains two new ioctls.  There is the export
one mentioned above as well as an RFC for an import ioctl which provides
the other half.  The intention is to land the export ioctl since it seems
like there's no real disagreement on that one.  The import ioctl, however,
has a lot of debate around it so it's intended to be RFC-only for now.
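
For reference, the RFC import half is roughly the mirror image.  A
sketch against the uapi as currently proposed (subject to change given
the ongoing debate):

  /* Add a sync_file's fence to a dma-buf's implicit-sync state.
   * With DMA_BUF_SYNC_WRITE the fence is added as the exclusive
   * (write) fence, so later implicit-sync users will wait on it.
   */
  static int import_sync_file(int dmabuf_fd, int sync_file_fd)
  {
          struct dma_buf_import_sync_file arg = {
                  .flags = DMA_BUF_SYNC_WRITE,
                  .fd = sync_file_fd,
          };

          return ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &arg);
  }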

Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
IGT tests: https://patchwork.freedesktop.org/series/90490/

v10 (Jason Ekstrand, Daniel Vetter):
 - Add reviews/acks
 - Add a patch to rename _rcu to _unlocked
 - Split things better so import is clearly RFC status

v11 (Daniel Vetter):
 - Add more CCs to try and get maintainers
 - Add a patch to document DMA_BUF_IOCTL_SYNC (usage sketch below)
 - Generally better docs
 - Use separate structs for import/export (easier to document)
 - Fix an issue in the import patch
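
For context, DMA_BUF_IOCTL_SYNC is the existing (pre-series) ioctl for
bracketing CPU access to a dma-buf; the new documentation covers usage
along these lines:

  struct dma_buf_sync sync = {
          .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
  };

  /* Begin CPU access: flush/invalidate caches as needed. */
  ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

  /* ... CPU reads/writes through an mmap() of the dma-buf ... */

  sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
  ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);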

Cc: Christian König <christian.koenig@amd.com>
Cc: Michel Dänzer <michel@daenzer.net>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Cc: Daniel Stone <daniels@collabora.com>
Cc: mesa-dev@lists.freedesktop.org
Cc: wayland-devel@lists.freedesktop.org
Test-with: 20210524205225.872316-1-jason@jlekstrand.net

Christian König (1):
  dma-buf: Add dma_fence_array_for_each (v2)

Jason Ekstrand (6):
  dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
  dma-buf: Add dma_resv_get_singleton_unlocked (v5)
  dma-buf: Document DMA_BUF_IOCTL_SYNC
  dma-buf: Add an API for exporting sync files (v11)
  RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
  RFC: dma-buf: Add an API for importing sync files (v7)

 Documentation/driver-api/dma-buf.rst          |   8 +
 drivers/dma-buf/dma-buf.c                     | 107 +++++++++++++-
 drivers/dma-buf/dma-fence-array.c             |  27 ++++
 drivers/dma-buf/dma-resv.c                    | 139 ++++++++++++++++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        |  14 +-
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |   6 +-
 drivers/gpu/drm/drm_gem.c                     |  10 +-
 drivers/gpu/drm/drm_gem_atomic_helper.c       |   2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem.c         |   7 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |   8 +-
 drivers/gpu/drm/i915/display/intel_display.c  |   2 +-
 drivers/gpu/drm/i915/dma_resv_utils.c         |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_busy.c      |   2 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |   4 +-
 drivers/gpu/drm/i915/gem/i915_gem_wait.c      |  10 +-
 drivers/gpu/drm/i915/i915_request.c           |   6 +-
 drivers/gpu/drm/i915/i915_sw_fence.c          |   4 +-
 drivers/gpu/drm/msm/msm_gem.c                 |   3 +-
 drivers/gpu/drm/nouveau/dispnv50/wndw.c       |   2 +-
 drivers/gpu/drm/nouveau/nouveau_gem.c         |   4 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c       |   4 +-
 drivers/gpu/drm/panfrost/panfrost_job.c       |   2 +-
 drivers/gpu/drm/radeon/radeon_gem.c           |   6 +-
 drivers/gpu/drm/radeon/radeon_mn.c            |   4 +-
 drivers/gpu/drm/ttm/ttm_bo.c                  |  18 +--
 drivers/gpu/drm/vgem/vgem_fence.c             |   4 +-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c        |   6 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |   2 +-
 include/linux/dma-fence-array.h               |  17 +++
 include/linux/dma-resv.h                      |  21 +--
 include/uapi/linux/dma-buf.h                  |  89 ++++++++++-
 40 files changed, 465 insertions(+), 111 deletions(-)

Comments

Chia-I Wu June 10, 2021, 8:10 p.m. UTC | #1
On Tue, May 25, 2021 at 2:18 PM Jason Ekstrand <jason@jlekstrand.net> wrote:
> Modern userspace APIs like Vulkan are built on an explicit
> synchronization model.  This doesn't always play nicely with the
> implicit synchronization used in the kernel and assumed by X11 and
> Wayland.  The client -> compositor half of the synchronization isn't too
> bad, at least on intel, because we can control whether or not i915
> synchronizes on the buffer and whether or not it's considered written.
We might have an important use case for this half with virtio-gpu and Chrome OS.

When the guest compositor acts as a proxy to connect guest apps to the
host compositor, implicit fencing requires the guest compositor to do
a wait before forwarding the buffer to the host compositor.  With this
patch, the guest compositor can extract the dma-fence from the buffer,
and if the fence is a virtio-gpu fence, forward both the fence and the
buffer to the host compositor.  It will allow us to convert a
guest-side wait into a host-side wait.
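
A rough sketch of that guest-side flow, assuming the export uapi from
this series (forward_to_host() is a hypothetical stand-in for the
virtio-gpu transport):

  /* Guest compositor: instead of blocking until the app's rendering
   * is done, snapshot the fences a reader must wait on
   * (DMA_BUF_SYNC_READ) and hand them to the host.
   */
  static int hand_off_buffer(int dmabuf_fd)
  {
          struct dma_buf_export_sync_file arg = {
                  .flags = DMA_BUF_SYNC_READ,
                  .fd = -1,
          };

          if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
                  return -1;

          /* forward_to_host() is hypothetical; if arg.fd wraps a
           * virtio-gpu fence, the host can wait instead of the guest.
           */
          return forward_to_host(dmabuf_fd, arg.fd);
  }
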
Jason Ekstrand June 10, 2021, 8:26 p.m. UTC | #2
On Thu, Jun 10, 2021 at 3:11 PM Chia-I Wu <olvaffe@gmail.com> wrote:
>
> On Tue, May 25, 2021 at 2:18 PM Jason Ekstrand <jason@jlekstrand.net> wrote:
> > Modern userspace APIs like Vulkan are built on an explicit
> > synchronization model.  This doesn't always play nicely with the
> > implicit synchronization used in the kernel and assumed by X11 and
> > Wayland.  The client -> compositor half of the synchronization isn't too
> > bad, at least on intel, because we can control whether or not i915
> > synchronizes on the buffer and whether or not it's considered written.
> We might have an important use case for this half, for virtio-gpu and Chrome OS.
>
> When the guest compositor acts as a proxy to connect guest apps to the
> host compositor, implicit fencing requires the guest compositor to do
> a wait before forwarding the buffer to the host compositor.  With this
> patch, the guest compositor can extract the dma-fence from the buffer,
> and if the fence is a virtio-gpu fence, forward both the fence and the
> buffer to the host compositor.  It will allow us to convert a
> guest-side wait into a host-side wait.

Yeah, I think the first half solves a lot of problems.  I'm rebasing
it now and will send a v12 series shortly.  I don't think there's a
lot standing between the first few patches and merging.  I've got IGT
tests and I'm pretty sure the code is good.  The last review cycle got
distracted with some renaming fun.

--Jason