Message ID | 20210525211753.1086069-6-jason@jlekstrand.net (mailing list archive)
State      | New, archived
Series     | dma-buf: Add an API for exporting sync files (v11)
On 25.05.21 at 23:17 Jason Ekstrand wrote:
> Modern userspace APIs like Vulkan are built on an explicit
> synchronization model. This doesn't always play nicely with the
> implicit synchronization used in the kernel and assumed by X11 and
> Wayland. The client -> compositor half of the synchronization isn't too
> bad, at least on intel, because we can control whether or not i915
> synchronizes on the buffer and whether or not it's considered written.
>
> The harder part is the compositor -> client synchronization when we get
> the buffer back from the compositor. We're required to be able to
> provide the client with a VkSemaphore and VkFence representing the point
> in time where the window system (compositor and/or display) finished
> using the buffer. With current APIs, it's very hard to do this in such
> a way that we don't get confused by the Vulkan driver's access of the
> buffer. In particular, once we tell the kernel that we're rendering to
> the buffer again, any CPU waits on the buffer or GPU dependencies will
> wait on some of the client rendering and not just the compositor.
>
> This new IOCTL solves this problem by allowing us to get a snapshot of
> the implicit synchronization state of a given dma-buf in the form of a
> sync file. It's effectively the same as a poll() or I915_GEM_WAIT only,
> instead of CPU waiting directly, it encapsulates the wait operation, at
> the current moment in time, in a sync_file so we can check/wait on it
> later. As long as the Vulkan driver does the sync_file export from the
> dma-buf before we re-introduce it for rendering, it will only contain
> fences from the compositor or display. This allows us to accurately turn
> it into a VkFence or VkSemaphore without any over-synchronization.

Regarding that, why do we actually use a syncfile and not a drm_syncobj here?

The latter should be much closer to a Vulkan timeline semaphore.

Christian.
> > v2 (Jason Ekstrand):
> > - Use a wrapper dma_fence_array of all fences including the new one
> >   when importing an exclusive fence.
> >
> > v3 (Jason Ekstrand):
> > - Lock around setting shared fences as well as exclusive
> > - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
> > - Initialize ret to 0 in dma_buf_wait_sync_file
> >
> > v4 (Jason Ekstrand):
> > - Use the new dma_resv_get_singleton helper
> >
> > v5 (Jason Ekstrand):
> > - Rename the IOCTLs to import/export rather than wait/signal
> > - Drop the WRITE flag and always get/set the exclusive fence
> >
> > v6 (Jason Ekstrand):
> > - Drop the sync_file import as it was all-around sketchy and not nearly
> >   as useful as export.
> > - Re-introduce READ/WRITE flag support for export
> > - Rework the commit message
> >
> > v7 (Jason Ekstrand):
> > - Require at least one sync flag
> > - Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference
> > - Use _rcu helpers since we're accessing the dma_resv read-only
> >
> > v8 (Jason Ekstrand):
> > - Return -ENOMEM if the sync_file_create fails
> > - Predicate support on IS_ENABLED(CONFIG_SYNC_FILE)
> >
> > v9 (Jason Ekstrand):
> > - Add documentation for the new ioctl
> >
> > v10 (Jason Ekstrand):
> > - Go back to dma_buf_sync_file as the ioctl struct name
> >
> > v11 (Daniel Vetter):
> > - Go back to dma_buf_export_sync_file as the ioctl struct name
> > - Better kerneldoc describing what the read/write flags do
> >
> > Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
> > Acked-by: Simon Ser <contact@emersion.fr>
> > Acked-by: Christian König <christian.koenig@amd.com>
> > Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Sumit Semwal <sumit.semwal@linaro.org>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > ---
> >  drivers/dma-buf/dma-buf.c    | 67 ++++++++++++++++++++++++++++++++++++
> >  include/uapi/linux/dma-buf.h | 35 +++++++++++++++++++
> >  2 files changed, 102 insertions(+)
> >
> > [quoted patch diff trimmed; the unquoted diff appears at the end of this page]
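To make the new uAPI concrete, here is a minimal userspace sketch of calling the export ioctl. The struct, flag values, and ioctl number are mirrored from the uapi header added by the patch; the helper name `export_implicit_sync_file()` and its error handling are illustrative assumptions, not part of the patch.

```c
/* Sketch: snapshot the implicit fences on a dma-buf as a sync_file.
 * The struct and defines mirror include/uapi/linux/dma-buf.h as added
 * by this patch; export_implicit_sync_file() is a hypothetical helper. */
#include <linux/types.h>
#include <sys/ioctl.h>

struct dma_buf_export_sync_file {
	__u32 flags;	/* DMA_BUF_SYNC_READ, DMA_BUF_SYNC_WRITE, or both */
	__s32 fd;	/* returned sync_file fd */
};

#define DMA_BUF_SYNC_READ	(1 << 0)
#define DMA_BUF_SYNC_WRITE	(2 << 0)
#define DMA_BUF_BASE		'b'
#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE \
	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)

/* Returns a sync_file fd wrapping the dma-buf's current fences, or -1
 * with errno set. Per the commit message, call this *before*
 * re-submitting the buffer for rendering so the snapshot only contains
 * compositor/display fences. */
int export_implicit_sync_file(int dmabuf_fd, __u32 flags)
{
	struct dma_buf_export_sync_file arg = { .flags = flags, .fd = -1 };

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
		return -1;
	return arg.fd;
}
```

A Vulkan driver would then hand the returned fd to something like vkImportSemaphoreFdKHR() as a SYNC_FD, which is the flow the commit message describes.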
Hi Christian,

On Wed, 26 May 2021 at 12:02, Christian König <christian.koenig@amd.com> wrote:
> On 25.05.21 at 23:17 Jason Ekstrand wrote:
> > This new IOCTL solves this problem by allowing us to get a snapshot of
> > the implicit synchronization state of a given dma-buf in the form of a
> > sync file. [...]
>
> Regarding that, why do we actually use a syncfile and not a drm_syncobj
> here?
>
> The latter should be much closer to a Vulkan timeline semaphore.

How would we insert a syncobj+val into a resv though? Like, if we pass an unmaterialised syncobj+val here to insert into the resv, then an implicit-only media user (or KMS) goes to sync against the resv, what happens?

Cheers,
Daniel
On 26.05.21 at 13:31 Daniel Stone wrote:
> Hi Christian,
>
> On Wed, 26 May 2021 at 12:02, Christian König <christian.koenig@amd.com> wrote:
> > Regarding that, why do we actually use a syncfile and not a drm_syncobj
> > here?
> >
> > The latter should be much closer to a Vulkan timeline semaphore.
>
> How would we insert a syncobj+val into a resv though? Like, if we pass
> an unmaterialised syncobj+val here to insert into the resv, then an
> implicit-only media user (or KMS) goes to sync against the resv, what
> happens?

Well, this is for exporting, not importing. So we don't need to worry about that.

It's just my thinking, because the drm_syncobj is the backing object of VkSemaphore implementations these days, isn't it?

Christian.
Hi,

On Wed, 26 May 2021 at 13:46, Christian König <christian.koenig@amd.com> wrote:
> On 26.05.21 at 13:31 Daniel Stone wrote:
> > How would we insert a syncobj+val into a resv though? [...]
>
> Well this is for exporting, not importing. So we don't need to worry
> about that.
>
> It's just my thinking because the drm_syncobj is the backing object on
> VkSemaphore implementations these days, isn't it?

Yeah, I can see that to an extent. But then binary vs. timeline syncobjs are very different in use (which is unfortunate, tbh), and then we have an asymmetry between syncobj export and sync_file import.

You're right that we do want a syncobj, though. This is probably not practical due to smashing uAPI to bits, but if we could wind the clock back a couple of years, I suspect the interface we want is that export can either export a sync_file or a binary syncobj, and further that binary syncobjs could transparently act as timeline semaphores by mapping any value (either wait or signal) to the binary signal. In hindsight, we should probably just never have had binary syncobj. Oh well.

Cheers,
Daniel
On 26.05.21 at 15:12 Daniel Stone wrote:
> You're right that we do want a syncobj though. This is probably not
> practical due to smashing uAPI to bits, but if we could wind the clock
> back a couple of years, I suspect the interface we want is that export
> can either export a sync_file or a binary syncobj, and further that
> binary syncobjs could transparently act as timeline semaphores by
> mapping any value (either wait or signal) to the binary signal. In
> hindsight, we should probably just never have had binary syncobj. Oh
> well.

Well, the latter is the case IIRC. Don't ask me for the detailed semantics, but in general the drm_syncobj in timeline mode is compatible with the binary mode.

The sync_file is also import/exportable to a certain drm_syncobj timeline point (or as a binary signal). So no big deal, we are all compatible here :)

I just thought that it might be more appropriate to return a drm_syncobj directly instead of a sync_file.

Regards,
Christian.
On Wed, May 26, 2021 at 03:23:01PM +0200, Christian König wrote:
> On 26.05.21 at 15:12 Daniel Stone wrote:
> > [...]
>
> Well the latter is the case IIRC. Don't ask me for the detail semantics,
> but in general the drm_syncobj in timeline mode is compatible to the
> binary mode.
>
> The sync_file is also import/exportable to a certain drm_syncobj timeline
> point (or as binary signal). So no big deal, we are all compatible here :)
>
> I just thought that it might be more appropriate to return a drm_syncobj
> directly instead of a sync_file.

I think another big potential user for this is a vk-based compositor which needs to interact with, and support, implicitly synced clients. And the compositor world is, I think, right now still more sync_file-oriented (because that's where we started with the atomic KMS ioctl).

The other slight nudge is that drm_syncobj is a drm thing, so we'd first need to lift it out into drivers/dma-buf (and hand-wave the DRM prefix away) for it to be a non-awkward fit for dma-buf.

Plus you can convert them to the other form anyway, so it really doesn't matter much. But for the above reasons I'm leaning slightly towards sync_file. Except if compositor folks tell me I'm a fool and why :-)

-Daniel
On Thursday, May 27th, 2021 at 12:33 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
> > The sync_file is also import/exportable to a certain drm_syncobj timeline
> > point (or as binary signal). So no big deal, we are all compatible here :)
> >
> > I just thought that it might be more appropriate to return a drm_syncobj
> > directly instead of a sync_file.
>
> I think another big potential user for this is a vk-based compositor which
> needs to interact/support implicit synced clients. And compositor world I
> think is right now still more sync_file (because that's where we started
> with atomic kms ioctl).

Yeah, right now compositors can only deal with sync_file. I have a Vulkan pull request for wlroots [1] that would benefit from this, I think.

Also, it seems like drm_syncobj isn't necessarily going to be the sync primitive to end them all, given all the user-space fence discussion, so long-term I guess we'll need a conversion anyway.

[1]: https://github.com/swaywm/wlroots/pull/2771

> Plus you can convert them to the other form anyway, so really doesn't
> matter much. But for the above reasons I'm leaning slightly towards
> sync_file. Except if compositor folks tell me I'm a fool and why :-)

sync_file is fine from my PoV.
On 27.05.21 at 12:33 Daniel Vetter wrote:
> On Wed, May 26, 2021 at 03:23:01PM +0200, Christian König wrote:
> > [...]
> >
> > I just thought that it might be more appropriate to return a drm_syncobj
> > directly instead of a sync_file.
>
> I think another big potential user for this is a vk-based compositor which
> needs to interact/support implicit synced clients. And compositor world I
> think is right now still more sync_file (because that's where we started
> with atomic kms ioctl).
>
> The other slight nudge is that drm_syncobj is a drm thing, so we'd first
> need to lift it out into drivers/dma-buf (and hand-wave the DRM prefix
> away) for it to be a non-awkward fit for dma-buf.
>
> Plus you can convert them to the other form anyway, so really doesn't
> matter much. But for the above reasons I'm leaning slightly towards
> sync_file. Except if compositor folks tell me I'm a fool and why :-)

Yeah, as discussed with Jason already, that drm_syncobj is DRM-specific is the killer argument here. So sync_file is fine with me.

Christian.
On Thu, May 27, 2021 at 4:49 AM Christian König <christian.koenig@amd.com> wrote:
>
> On 26.05.21 at 19:42 Jason Ekstrand wrote:
> > On Wed, May 26, 2021 at 6:02 AM Christian König
> > <christian.koenig@amd.com> wrote:
> > > Regarding that, why do we actually use a syncfile and not a drm_syncobj
> > > here?
> >
> > A sync file is a userspace handle to a dma_fence, which is exactly what
> > we're grabbing from the dma-buf, so it's a better match. A syncobj, on
> > the other hand, is a container for fences. If we really want it in a
> > syncobj (which we may), we can use another ioctl to stuff it in there.
> > The only thing making this work in terms of syncobj would save us is a
> > little ioctl overhead. In my Mesa patches, we do stuff it in a
> > syncobj but, instead of acting on the syncobj directly, we go through
> > vkSemaphoreImportFd() which is way more convenient for generic WSI
> > code.
>
> Yeah, that is a really good argument.
>
> > It also works nicer because a sync_file is a pretty generic construct
> > but a syncobj is a DRM construct. This lets us do the export in an
> > entirely driver-generic way without even having access to a DRM fd.
> > It all happens as an ioctl on the dma-buf.
>
> Well, that's an even better argument and I think the killer argument here.

Cool.

> We should probably mention that on the commit message as a single
> sentence somewhere.

Happy to do so. How does this sound:

    By making this an ioctl on the dma-buf itself, it allows this new
    functionality to be used in an entirely driver-agnostic way without
    having access to a DRM fd. This makes it ideal for use in
    driver-generic code in Mesa or in a client such as a compositor
    where the DRM fd may be hard to reach.

> BTW: You replied exclusively to me. Was that intentional? I don't
> think so :)

Oops... I've re-added the whole lot in this reply.

--Jason

> Regards,
> Christian.
>
> > On Wed, May 26, 2021 at 8:23 AM Christian König
> > <christian.koenig@amd.com> wrote:
> > > On 26.05.21 at 15:12 Daniel Stone wrote:
> > > > You're right that we do want a syncobj though. [...]
> >
> > I'm not convinced. Some people seem to think that we have a direct
> > progression of superiority: sync_file < syncobj < timeline syncobj. I
> > don't think this is true. They each serve different purposes. A
> > sync_file is a handle to a single immutable fence operation (a
> > dma_fence*). A syncobj is a container for a fence and is, by
> > definition, mutable (a dma_fence**). A timeline syncobj is a
> > construct that lets us implement VK_KHR_timeline_semaphore with only
> > immutable finite-time (not future) fences.
> >
> > From a WSI protocol PoV, it's sometimes nicer to pass a container
> > object once and then pass a serial or a simple "I just updated it"
> > signal every time instead of a new FD. It certainly makes all the
> > "who closes this, again?" semantics easier. But we shouldn't think of
> > syncobj as being better than sync_file. With binary syncobj, it
> > really is the difference between passing a dma_fence* vs. a
> > dma_fence**. Sometimes you want one and sometimes you want the other.
> > The real pity, IMO, is that the uAPI is scattered everywhere.
> >
> > > I just thought that it might be more appropriate to return a drm_syncobj
> > > directly instead of a sync_file.
> >
> > Maybe. I'm not convinced that's better. In the current Mesa WSI
> > code, it'd actually be quite a pain.
> >
> > --Jason
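Since Jason mentions that userspace can stuff the exported sync_file into a syncobj "with another ioctl", here is a hedged sketch of that direction using the pre-existing DRM uapi. The struct, flag, and ioctl number below are mirrored from include/uapi/drm/drm.h (not from this patch), and the helper name is invented for illustration; double-check the definitions against your kernel headers.

```c
/* Sketch: import a sync_file fd (e.g. one returned by
 * DMA_BUF_IOCTL_EXPORT_SYNC_FILE) into an existing binary drm_syncobj.
 * Definitions mirrored from include/uapi/drm/drm.h. */
#include <linux/types.h>
#include <sys/ioctl.h>

struct drm_syncobj_handle {
	__u32 handle;
	__u32 flags;
	__s32 fd;
	__u32 pad;
};

#define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE	(1 << 0)
#define DRM_IOCTL_BASE		'd'
#define DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE \
	_IOWR(DRM_IOCTL_BASE, 0xC2, struct drm_syncobj_handle)

/* Replace the fence in @syncobj with the one carried by @sync_file_fd.
 * Returns 0 on success, -1 with errno set on failure. */
int import_sync_file_to_syncobj(int drm_fd, __u32 syncobj, int sync_file_fd)
{
	struct drm_syncobj_handle args = {
		.handle = syncobj,
		.flags = DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE,
		.fd = sync_file_fd,
	};

	return ioctl(drm_fd, DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE, &args);
}
```

In practice userspace would go through libdrm's drmSyncobjImportSyncFile() wrapper (or, as in Jason's Mesa patches, through vkImportSemaphoreFdKHR()) rather than the raw ioctl; this just shows why the extra hop is only "a little ioctl overhead".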
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index ed6451d55d663..65a9574ee04ed 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -20,6 +20,7 @@
 #include <linux/debugfs.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
+#include <linux/sync_file.h>
 #include <linux/poll.h>
 #include <linux/dma-resv.h>
 #include <linux/mm.h>
@@ -191,6 +192,9 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
  * Note that this only signals the completion of the respective fences, i.e. the
  * DMA transfers are complete. Cache flushing and any other necessary
  * preparations before CPU access can begin still need to happen.
+ *
+ * As an alternative to poll(), the set of fences on DMA buffer can be
+ * exported as a &sync_file using &dma_buf_sync_file_export.
  */

 static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
@@ -362,6 +366,64 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
 	return ret;
 }

+#if IS_ENABLED(CONFIG_SYNC_FILE)
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
+				     void __user *user_data)
+{
+	struct dma_buf_export_sync_file arg;
+	struct dma_fence *fence = NULL;
+	struct sync_file *sync_file;
+	int fd, ret;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags & ~DMA_BUF_SYNC_RW)
+		return -EINVAL;
+
+	if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	if (arg.flags & DMA_BUF_SYNC_WRITE) {
+		fence = dma_resv_get_singleton_unlocked(dmabuf->resv);
+		if (IS_ERR(fence)) {
+			ret = PTR_ERR(fence);
+			goto err_put_fd;
+		}
+	} else if (arg.flags & DMA_BUF_SYNC_READ) {
+		fence = dma_resv_get_excl_unlocked(dmabuf->resv);
+	}
+
+	if (!fence)
+		fence = dma_fence_get_stub();
+
+	sync_file = sync_file_create(fence);
+
+	dma_fence_put(fence);
+
+	if (!sync_file) {
+		ret = -ENOMEM;
+		goto err_put_fd;
+	}
+
+	fd_install(fd, sync_file->file);
+
+	arg.fd = fd;
+	if (copy_to_user(user_data, &arg, sizeof(arg)))
+		return -EFAULT;
+
+	return 0;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return ret;
+}
+#endif
+
 static long dma_buf_ioctl(struct file *file,
 			  unsigned int cmd, unsigned long arg)
 {
@@ -405,6 +467,11 @@ static long dma_buf_ioctl(struct file *file,
 	case DMA_BUF_SET_NAME_B:
 		return dma_buf_set_name(dmabuf, (const char __user *)arg);

+#if IS_ENABLED(CONFIG_SYNC_FILE)
+	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
+		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+#endif
+
 	default:
 		return -ENOTTY;
 	}
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 1f67ced853b14..aeba45180b028 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -67,6 +67,40 @@ struct dma_buf_sync {

 #define DMA_BUF_NAME_LEN	32

+/**
+ * struct dma_buf_export_sync_file - Get a sync_file from a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_EXPORT_SYNC_FILE to retrieve the
+ * current set of fences on a dma-buf file descriptor as a sync_file. CPU
+ * waits via poll() or other driver-specific mechanisms typically wait on
+ * whatever fences are on the dma-buf at the time the wait begins. This
+ * is similar except that it takes a snapshot of the current fences on the
+ * dma-buf for waiting later instead of waiting immediately. This is
+ * useful for modern graphics APIs such as Vulkan which assume an explicit
+ * synchronization model but still need to inter-operate with dma-buf.
+ */
+struct dma_buf_export_sync_file {
+	/**
+	 * @flags: Read/write flags
+	 *
+	 * Must be DMA_BUF_SYNC_READ, DMA_BUF_SYNC_WRITE, or both.
+	 *
+	 * If DMA_BUF_SYNC_READ is set and DMA_BUF_SYNC_WRITE is not set,
+	 * the returned sync file waits on any writers of the dma-buf to
+	 * complete. Waiting on the returned sync file is equivalent to
+	 * poll() with POLLIN.
+	 *
+	 * If DMA_BUF_SYNC_WRITE is set, the returned sync file waits on
+	 * any users of the dma-buf (read or write) to complete. Waiting
+	 * on the returned sync file is equivalent to poll() with POLLOUT.
+	 * If both DMA_BUF_SYNC_WRITE and DMA_BUF_SYNC_READ are set, this
+	 * is equivalent to just DMA_BUF_SYNC_WRITE.
+	 */
+	__u32 flags;
+	/** @fd: Returned sync file descriptor */
+	__s32 fd;
+};
+
 #define DMA_BUF_BASE		'b'
 #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)

@@ -76,5 +110,6 @@ struct dma_buf_sync {
 #define DMA_BUF_SET_NAME	_IOW(DMA_BUF_BASE, 1, const char *)
 #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
+#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)

 #endif