
[v7,1/3] drm/sched: Adjust outdated docu for run_job()

Message ID 20250305130551.136682-3-phasta@kernel.org (mailing list archive)
State New
Series drm/sched: Documentation and refcount improvements

Commit Message

Philipp Stanner March 5, 2025, 1:05 p.m. UTC
The documentation for drm_sched_backend_ops.run_job() mentions a
function called drm_sched_job_recovery(). This function does not exist.
What's actually meant is drm_sched_resubmit_jobs(), which by now is
also deprecated.

Furthermore, the scheduler expects to "inherit" a reference on the fence
from the run_job() callback. This, so far, is also not documented.

Remove the mention of the removed function.

Discourage the behavior of drm_sched_backend_ops.run_job() being called
multiple times for the same job.

Document the necessity of incrementing the refcount in run_job().
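
For illustration only (this sketch is not part of the patch; my_job,
my_hw_submit() and the hw_fence field are made-up driver names), a
run_job() implementation following that rule could look roughly like:

static struct dma_fence *my_run_job(struct drm_sched_job *sched_job)
{
	/* Hypothetical driver job structure embedding a drm_sched_job. */
	struct my_job *job = container_of(sched_job, struct my_job, base);

	my_hw_submit(job);

	/*
	 * One reference goes to the scheduler. The reference the driver
	 * took on job->hw_fence at job creation remains its own.
	 */
	return dma_fence_get(job->hw_fence);
}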

Signed-off-by: Philipp Stanner <phasta@kernel.org>
---
 include/drm/gpu_scheduler.h | 34 ++++++++++++++++++++++++++++++----
 1 file changed, 30 insertions(+), 4 deletions(-)

Comments

Bagas Sanjaya March 5, 2025, 1:45 p.m. UTC | #1
On Wed, Mar 05, 2025 at 02:05:50PM +0100, Philipp Stanner wrote:
>  	/**
> -         * @run_job: Called to execute the job once all of the dependencies
> -         * have been resolved.  This may be called multiple times, if
> -	 * timedout_job() has happened and drm_sched_job_recovery()
> -	 * decides to try it again.
> +	 * @run_job: Called to execute the job once all of the dependencies
> +	 * have been resolved.
> +	 *
> +	 * @sched_job: the job to run
> +	 *
> +	 * The deprecated drm_sched_resubmit_jobs() (called by &struct
> +	 * drm_sched_backend_ops.timedout_job) can invoke this again with the
> +	 * same parameters. Using this is discouraged because it violates
> +	 * dma_fence rules, notably dma_fence_init() has to be called on
> +	 * already initialized fences for a second time. Moreover, this is
> +	 * dangerous because attempts to allocate memory might deadlock with
> +	 * memory management code waiting for the reset to complete.
> +	 *
> +	 * TODO: Document what drivers should do / use instead.

No replacement? Or bespoke/roll-your-own functionality as a must?

Confused...
Philipp Stanner March 5, 2025, 2:24 p.m. UTC | #2
On Wed, 2025-03-05 at 20:45 +0700, Bagas Sanjaya wrote:
> On Wed, Mar 05, 2025 at 02:05:50PM +0100, Philipp Stanner wrote:
> >  	/**
> > -         * @run_job: Called to execute the job once all of the dependencies
> > -         * have been resolved.  This may be called multiple times, if
> > -	 * timedout_job() has happened and drm_sched_job_recovery()
> > -	 * decides to try it again.
> > +	 * @run_job: Called to execute the job once all of the dependencies
> > +	 * have been resolved.
> > +	 *
> > +	 * @sched_job: the job to run
> > +	 *
> > +	 * The deprecated drm_sched_resubmit_jobs() (called by &struct
> > +	 * drm_sched_backend_ops.timedout_job) can invoke this again with the
> > +	 * same parameters. Using this is discouraged because it violates
> > +	 * dma_fence rules, notably dma_fence_init() has to be called on
> > +	 * already initialized fences for a second time. Moreover, this is
> > +	 * dangerous because attempts to allocate memory might deadlock with
> > +	 * memory management code waiting for the reset to complete.
> > +	 *
> > +	 * TODO: Document what drivers should do / use instead.
> 
> No replacement? Or bespoke/roll-your-own functionality as a must?
> 
> Confused...

We will document this in a follow-up. I have been trying for two months
now [1] just to fix up some broken, outdated documentation, and that in
a component that *I* am maintaining.

It's very difficult to reach the relevant stakeholders, and I really
want to unblock this series.

Feel free to provide a proposal for the TODO based on this series or
jump into the discussion here [2].

Otherwise, I will propose a fix for the TODO sometime in the next few weeks.

P.


[1] https://lore.kernel.org/dri-devel/20250109133710.39404-2-phasta@kernel.org/
[2] https://lore.kernel.org/dri-devel/688b5665-496d-470d-9835-0c6eadfa5569@gmail.com/

Patch

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 50928a7ae98e..6381baae8024 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -410,10 +410,36 @@  struct drm_sched_backend_ops {
 					 struct drm_sched_entity *s_entity);
 
 	/**
-         * @run_job: Called to execute the job once all of the dependencies
-         * have been resolved.  This may be called multiple times, if
-	 * timedout_job() has happened and drm_sched_job_recovery()
-	 * decides to try it again.
+	 * @run_job: Called to execute the job once all of the dependencies
+	 * have been resolved.
+	 *
+	 * @sched_job: the job to run
+	 *
+	 * The deprecated drm_sched_resubmit_jobs() (called by &struct
+	 * drm_sched_backend_ops.timedout_job) can invoke this again with the
+	 * same parameters. Using this is discouraged because it violates
+	 * dma_fence rules, notably dma_fence_init() has to be called on
+	 * already initialized fences for a second time. Moreover, this is
+	 * dangerous because attempts to allocate memory might deadlock with
+	 * memory management code waiting for the reset to complete.
+	 *
+	 * TODO: Document what drivers should do / use instead.
+	 *
+	 * This method is called in a workqueue context - either from the
+	 * submit_wq the driver passed through drm_sched_init(), or, if the
+	 * driver passed NULL, a separate, ordered workqueue the scheduler
+	 * allocated.
+	 *
+	 * Note that the scheduler expects to 'inherit' its own reference to
+	 * this fence from the callback. It does not invoke an extra
+	 * dma_fence_get() on it. Consequently, this callback must take a
+	 * reference for the scheduler, and additional ones for the driver's
+	 * respective needs.
+	 *
+	 * Return:
+	 * * On success: dma_fence the driver must signal once the hardware has
+	 * completed the job ("hardware fence").
+	 * * On failure: NULL or an ERR_PTR.
 	 */
 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
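
As a rough, hypothetical illustration of the driver side of this contract
(my_job_done_irq(), my_sched_ops and the other my_* callbacks are made-up
names, not from this patch): the hardware fence returned by run_job() is
signaled by the driver once the hardware has finished, for instance from
its completion interrupt, and the callback is hooked up through the usual
ops table:

static irqreturn_t my_job_done_irq(int irq, void *data)
{
	struct my_job *job = data;

	/* The "hardware fence" that run_job() handed to the scheduler. */
	dma_fence_signal(job->hw_fence);

	return IRQ_HANDLED;
}

static const struct drm_sched_backend_ops my_sched_ops = {
	.run_job	= my_run_job,
	.timedout_job	= my_timedout_job,
	.free_job	= my_free_job,
};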