| Message ID | 20240826093916.29065-2-pstanner@redhat.com |
|---|---|
| State | New, archived |
| Series | drm/sched: Document drm_sched_job_arm()'s effect on fences |
On Mon, Aug 26, 2024 at 11:39:17AM +0200, Philipp Stanner wrote:
> The GPU Scheduler's job initialization is split into two steps,
> drm_sched_job_init() and drm_sched_job_arm(). One reason for this is
> that actually arming a job results in the job's fences getting
> initialized (armed).
>
> Currently, the documentation does not explicitly state what
> drm_sched_job_arm() does in this regard and which rules the API-User has
> to follow once the function has been called.
>
> Add a section to drm_sched_job_arm()'s docstring which details the
> function's consequences regarding the job's fences.
>
> Signed-off-by: Philipp Stanner <pstanner@redhat.com>
> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 7e90c9f95611..e563eff4887c 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -831,6 +831,12 @@ EXPORT_SYMBOL(drm_sched_job_init);
>   * Refer to drm_sched_entity_push_job() documentation for locking
>   * considerations.
>   *
> + * drm_sched_job_cleanup() can be used to disarm the job again - but only
> + * _before_ the job's fences have been published. Once a drm_sched_fence was
> + * published, the associated job needs to be submitted to and processed by the
> + * scheduler to avoid potential deadlocks on the DMA fences encapsulated by
> + * drm_sched_fence.

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

> + *
>   * This can only be called if drm_sched_job_init() succeeded.
>   */
>  void drm_sched_job_arm(struct drm_sched_job *job)
> --
> 2.46.0
>
```diff
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 7e90c9f95611..e563eff4887c 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -831,6 +831,12 @@ EXPORT_SYMBOL(drm_sched_job_init);
  * Refer to drm_sched_entity_push_job() documentation for locking
  * considerations.
  *
+ * drm_sched_job_cleanup() can be used to disarm the job again - but only
+ * _before_ the job's fences have been published. Once a drm_sched_fence was
+ * published, the associated job needs to be submitted to and processed by the
+ * scheduler to avoid potential deadlocks on the DMA fences encapsulated by
+ * drm_sched_fence.
+ *
  * This can only be called if drm_sched_job_init() succeeded.
  */
 void drm_sched_job_arm(struct drm_sched_job *job)
```
The GPU Scheduler's job initialization is split into two steps, drm_sched_job_init() and drm_sched_job_arm(). One reason for this is that actually arming a job results in the job's fences getting initialized (armed).

Currently, the documentation does not explicitly state what drm_sched_job_arm() does in this regard and which rules the API user has to follow once the function has been called.

Add a section to drm_sched_job_arm()'s docstring which details the function's consequences regarding the job's fences.

Signed-off-by: Philipp Stanner <pstanner@redhat.com>
---
 drivers/gpu/drm/scheduler/sched_main.c | 6 ++++++
 1 file changed, 6 insertions(+)