Message ID | 20200905103420.3021852-1-mperttunen@nvidia.com (mailing list archive) |
---|---|
Series | Host1x/TegraDRM UAPI |
On 05.09.2020 13:34, Mikko Perttunen wrote:
> Hi all,
>
> here's a second revision of the Host1x/TegraDRM UAPI proposal,
> hopefully with most issues from v1 resolved, and also with
> an implementation. There are still open issues with the
> implementation:

Could you please clarify the current status of the DMA heaps? Are we
still going to use DMA heaps?
On 05.09.2020 13:34, Mikko Perttunen wrote:
> Hi all,
>
> here's a second revision of the Host1x/TegraDRM UAPI proposal,
> hopefully with most issues from v1 resolved, and also with
> an implementation. There are still open issues with the
> implementation:
>
> * Relocs are now handled on TegraDRM side instead of Host1x,
> so the firewall is not aware of them, causing submission
> failure where the firewall is enabled. Proposed solution
> is to move the firewall to TegraDRM side, but this hasn't
> been done yet.
> * For the new UAPI, syncpoint recovery on job timeout is
> disabled. What this means is that upon job timeout,
> all further jobs using that syncpoint are cancelled,
> and the syncpoint is marked unusable until it is freed.
> However, there is currently a race between the timeout
> handler and job submission, where submission can observe
> the syncpoint in non-locked state and yet the job
> cancellations won't cancel the new job.
> * Waiting for DMA reservation fences is not implemented yet.
> * I have only tested on Tegra186.
>
> The series consists of three parts:
>
> * The first part contains some fixes and improvements to
> the Host1x driver of more general nature,
> * The second part adds the Host1x side UAPI, as well as
> Host1x-side changes needed for the new TegraDRM UAPI,
> * The third part adds the new TegraDRM UAPI.
>
> I have written some tests to test the new interface,
> see https://github.com/cyndis/uapi-test. Porting of proper
> userspace (e.g. opentegra, vdpau-tegra) will come once
> there is some degree of conclusion on the UAPI definition.

Could you please enumerate all the currently open questions?
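[Editorial note: as a side note on the syncpoint-timeout race described in the cover letter above, a common way to close this kind of window is to have the submission path re-check the syncpoint state under the same lock that the timeout handler takes when it marks the syncpoint unusable and cancels pending jobs. The sketch below only illustrates that pattern; all structure and function names are hypothetical, and it is not code from this series.]

```c
/* Illustrative sketch only -- hypothetical names, not from this series. */
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct host1x_job {
	struct list_head list;	/* minimal stand-in for the real job struct */
};

struct host1x_syncpt {
	spinlock_t lock;	/* protects 'locked' and 'jobs' */
	bool locked;		/* set once a job using this syncpoint timed out */
	struct list_head jobs;	/* jobs queued against this syncpoint */
};

/* Hypothetical hook; real code would error out fences, free resources, etc. */
static void cancel_job(struct host1x_job *job)
{
	list_del(&job->list);
}

static int submit_job(struct host1x_syncpt *sp, struct host1x_job *job)
{
	int err = 0;

	spin_lock(&sp->lock);
	if (sp->locked)
		err = -EPIPE;	/* arbitrary choice: syncpoint unusable until freed */
	else
		list_add_tail(&job->list, &sp->jobs);
	spin_unlock(&sp->lock);

	return err;
}

static void timeout_handler(struct host1x_syncpt *sp)
{
	struct host1x_job *job, *tmp;

	spin_lock(&sp->lock);
	sp->locked = true;	/* block any further submissions... */
	list_for_each_entry_safe(job, tmp, &sp->jobs, list)
		cancel_job(job);	/* ...and cancel everything already queued */
	spin_unlock(&sp->lock);
}
```

Because both paths serialize on the same lock, a submission can never observe the syncpoint in the non-locked state after the timeout handler has started cancelling jobs.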
On 9/9/20 2:36 AM, Dmitry Osipenko wrote:
> On 05.09.2020 13:34, Mikko Perttunen wrote:
>> Hi all,
>>
>> here's a second revision of the Host1x/TegraDRM UAPI proposal,
>> hopefully with most issues from v1 resolved, and also with
>> an implementation. There are still open issues with the
>> implementation:
> Could you please clarify the current status of the DMA heaps? Are we
> still going to use DMA heaps?
>

Sorry, I should have mentioned the status in the cover letter. I sent an
email to dri-devel about how DMA heaps should be used -- I believe the
conclusion was that it's not entirely clear, but dma-bufs should only be
used for buffers shared between engines. So for the time being, we
should still implement GEM for intra-TegraDRM buffers. There seems to be
some planning ongoing to see if the different subsystem allocators can
be unified (see the dma-buf heaps talk from the Linux Plumbers
Conference), but for now we should go for GEM.

Mikko
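[Editorial note: to make the GEM vs. dma-buf split above concrete, here is a rough userspace sketch of what the conclusion implies: buffers that stay within TegraDRM are plain GEM objects, and a dma-buf fd is only created when a buffer must be shared with another device. It uses the pre-existing DRM_IOCTL_TEGRA_GEM_CREATE and libdrm's drmPrimeHandleToFD purely for illustration; the v2 UAPI proposed in this series may name things differently.]

```c
#include <stdint.h>
#include <sys/ioctl.h>
#include <xf86drm.h>		/* drmPrimeHandleToFD() */
#include <drm/tegra_drm.h>	/* DRM_IOCTL_TEGRA_GEM_CREATE (existing UAPI) */

/* Allocate a buffer that stays inside TegraDRM: a plain GEM handle. */
static int alloc_gem(int drm_fd, uint64_t size, uint32_t *handle)
{
	struct drm_tegra_gem_create create = {
		.size = size,
		.flags = 0,
	};

	if (ioctl(drm_fd, DRM_IOCTL_TEGRA_GEM_CREATE, &create) < 0)
		return -1;

	*handle = create.handle;
	return 0;
}

/*
 * Only when the buffer needs to be shared with another engine or device
 * is it exported as a dma-buf fd via PRIME.
 */
static int export_for_sharing(int drm_fd, uint32_t handle, int *dmabuf_fd)
{
	return drmPrimeHandleToFD(drm_fd, handle, DRM_CLOEXEC, dmabuf_fd);
}
```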
On 9/9/20 5:20 AM, Dmitry Osipenko wrote:
> On 05.09.2020 13:34, Mikko Perttunen wrote:
>> Hi all,
>>
>> here's a second revision of the Host1x/TegraDRM UAPI proposal,
>> hopefully with most issues from v1 resolved, and also with
>> an implementation. There are still open issues with the
>> implementation:
>>
>> * Relocs are now handled on TegraDRM side instead of Host1x,
>> so the firewall is not aware of them, causing submission
>> failure where the firewall is enabled. Proposed solution
>> is to move the firewall to TegraDRM side, but this hasn't
>> been done yet.
>> * For the new UAPI, syncpoint recovery on job timeout is
>> disabled. What this means is that upon job timeout,
>> all further jobs using that syncpoint are cancelled,
>> and the syncpoint is marked unusable until it is freed.
>> However, there is currently a race between the timeout
>> handler and job submission, where submission can observe
>> the syncpoint in non-locked state and yet the job
>> cancellations won't cancel the new job.
>> * Waiting for DMA reservation fences is not implemented yet.
>> * I have only tested on Tegra186.
>>
>> The series consists of three parts:
>>
>> * The first part contains some fixes and improvements to
>> the Host1x driver of more general nature,
>> * The second part adds the Host1x side UAPI, as well as
>> Host1x-side changes needed for the new TegraDRM UAPI,
>> * The third part adds the new TegraDRM UAPI.
>>
>> I have written some tests to test the new interface,
>> see https://github.com/cyndis/uapi-test. Porting of proper
>> userspace (e.g. opentegra, vdpau-tegra) will come once
>> there is some degree of conclusion on the UAPI definition.
>
> Could you please enumerate all the currently open questions?
>

Which open questions do you refer to? The open items of v1 should be
closed now; for fences we set up an SW timeout to prevent them from
sticking around forever, and regarding GEM the GEM IOCTLs are again
being used.
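[Editorial note: the "SW timeout" mentioned above is essentially a fallback that force-completes a fence if hardware never signals it, so waiters cannot block forever. Below is only a minimal kernel-side sketch of that general pattern using delayed work and the standard dma_fence helpers; the actual mechanism, names, and timeout value in this series may differ, and the sketch omits cancelling the work when the fence signals normally.]

```c
#include <linux/dma-fence.h>
#include <linux/errno.h>
#include <linux/workqueue.h>

/* Hypothetical container pairing a fence with its software timeout. */
struct sw_timeout_fence {
	struct dma_fence *fence;
	struct delayed_work timeout_work;
};

static void sw_timeout_expired(struct work_struct *work)
{
	struct sw_timeout_fence *stf =
		container_of(work, struct sw_timeout_fence, timeout_work.work);

	/*
	 * Hardware never signalled the fence: complete it with an error so
	 * that waiters are released. Real code must cancel this work when
	 * the fence signals normally, since setting an error on an
	 * already-signalled fence is not allowed.
	 */
	dma_fence_set_error(stf->fence, -ETIMEDOUT);
	dma_fence_signal(stf->fence);
}

/* Arm the software timeout when the fence is created/armed. */
static void sw_timeout_arm(struct sw_timeout_fence *stf,
			   unsigned long timeout_jiffies)
{
	INIT_DELAYED_WORK(&stf->timeout_work, sw_timeout_expired);
	schedule_delayed_work(&stf->timeout_work, timeout_jiffies);
}
```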
On 09.09.2020 11:44, Mikko Perttunen wrote:
...
>> Could you please enumerate all the currently open questions?
>>
>
> Which open questions do you refer to?

Anything related to the UAPI definition that needs more thought. If
there is nothing outstanding, then good!

> The open items of v1 should be
> closed now; for fences we set up an SW timeout to prevent them from
> sticking around forever, and regarding GEM the GEM IOCTLs are again
> being used.
>

We'll see how it turns out in practice! For now it's a bit difficult to
decide what is good and what needs more improvement.
On 09.09.2020 11:40, Mikko Perttunen wrote:
> On 9/9/20 2:36 AM, Dmitry Osipenko wrote:
>> On 05.09.2020 13:34, Mikko Perttunen wrote:
>>> Hi all,
>>>
>>> here's a second revision of the Host1x/TegraDRM UAPI proposal,
>>> hopefully with most issues from v1 resolved, and also with
>>> an implementation. There are still open issues with the
>>> implementation:
>> Could you please clarify the current status of the DMA heaps? Are we
>> still going to use DMA heaps?
>>
>
> Sorry, I should have mentioned the status in the cover letter. I sent an
> email to dri-devel about how DMA heaps should be used -- I believe the
> conclusion was that it's not entirely clear, but dma-bufs should only be
> used for buffers shared between engines. So for the time being, we
> should still implement GEM for intra-TegraDRM buffers. There seems to be
> some planning ongoing to see if the different subsystem allocators can
> be unified (see the dma-buf heaps talk from the Linux Plumbers
> Conference), but for now we should go for GEM.

Thanks!