Message ID | 20240906180618.12180-3-tursulin@igalia.com (mailing list archive) |
---|---|
State | New, archived |
Series | DRM scheduler fixes, or not, or incorrect kind |
On Fri, 2024-09-06 at 19:06 +0100, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>
> Since drm_sched_entity_modify_sched() can modify the entities run queue
> lets make sure to only derefernce the pointer once so both adding and
> waking up are guaranteed to be consistent.
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
> Fixes: b37aced31eb0 ("drm/scheduler: implement a function to modify sched list")
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Alex Deucher <alexander.deucher@amd.com>
> Cc: Luben Tuikov <ltuikov89@gmail.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: dri-devel@lists.freedesktop.org
> Cc: <stable@vger.kernel.org> # v5.7+
> ---
>  drivers/gpu/drm/scheduler/sched_entity.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> index ae8be30472cd..62b07ef7630a 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -599,6 +599,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
>
>          /* first job wakes up scheduler */
>          if (first) {
> +                struct drm_sched_rq *rq;
> +
>                  /* Add the entity to the run queue */
>                  spin_lock(&entity->rq_lock);
>                  if (entity->stopped) {
> @@ -608,13 +610,15 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
>                          return;
>                  }
>
> -                drm_sched_rq_add_entity(entity->rq, entity);
> +                rq = entity->rq;
> +
> +                drm_sched_rq_add_entity(rq, entity);
>                  spin_unlock(&entity->rq_lock);
>
>                  if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
>                          drm_sched_rq_update_fifo(entity, submit_ts);
>
> -                drm_sched_wakeup(entity->rq->sched, entity);
> +                drm_sched_wakeup(rq->sched, entity);

OK, I think that makes sense. But I'd mention that the more readable
solution of moving the spin_unlock() down here cannot be done because
drm_sched_rq_update_fifo() needs that same lock.

P.

>          }
>  }
>  EXPORT_SYMBOL(drm_sched_entity_push_job);
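For illustration, a minimal sketch of the "more readable" alternative P. rules out, under the assumption (implied by the reply) that drm_sched_rq_update_fifo() acquires entity->rq_lock internally in this version of the scheduler; this is not part of the applied patch:

        /*
         * Hypothetical variant: hold entity->rq_lock across the whole
         * sequence so entity->rq cannot change between the add and the
         * wakeup.
         */
        spin_lock(&entity->rq_lock);
        drm_sched_rq_add_entity(entity->rq, entity);

        /*
         * Assuming drm_sched_rq_update_fifo() re-acquires entity->rq_lock,
         * this call would self-deadlock on the non-recursive spinlock,
         * which is why the patch caches the rq pointer instead of
         * extending the critical section.
         */
        if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
                drm_sched_rq_update_fifo(entity, submit_ts);

        drm_sched_wakeup(entity->rq->sched, entity);
        spin_unlock(&entity->rq_lock);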
On 06.09.24 at 20:06, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>
> Since drm_sched_entity_modify_sched() can modify the entities run queue
> lets make sure to only derefernce the pointer once so both adding and
> waking up are guaranteed to be consistent.
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
> Fixes: b37aced31eb0 ("drm/scheduler: implement a function to modify sched list")
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Alex Deucher <alexander.deucher@amd.com>
> Cc: Luben Tuikov <ltuikov89@gmail.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: dri-devel@lists.freedesktop.org
> Cc: <stable@vger.kernel.org> # v5.7+
> ---
>  drivers/gpu/drm/scheduler/sched_entity.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> index ae8be30472cd..62b07ef7630a 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -599,6 +599,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
>
>          /* first job wakes up scheduler */
>          if (first) {
> +                struct drm_sched_rq *rq;
> +

I think we should go a step further and keep the scheduler to wake up
and not the rq.

Regards,
Christian.

>                  /* Add the entity to the run queue */
>                  spin_lock(&entity->rq_lock);
>                  if (entity->stopped) {
> @@ -608,13 +610,15 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
>                          return;
>                  }
>
> -                drm_sched_rq_add_entity(entity->rq, entity);
> +                rq = entity->rq;
> +
> +                drm_sched_rq_add_entity(rq, entity);
>                  spin_unlock(&entity->rq_lock);
>
>                  if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
>                          drm_sched_rq_update_fifo(entity, submit_ts);
>
> -                drm_sched_wakeup(entity->rq->sched, entity);
> +                drm_sched_wakeup(rq->sched, entity);
>          }
>  }
>  EXPORT_SYMBOL(drm_sched_entity_push_job);
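A minimal sketch of the direction Christian suggests, latching the scheduler rather than the run queue while the lock is held; it mirrors the patch context above but is an alternative shape, not the change posted in this revision:

        struct drm_gpu_scheduler *sched;

        spin_lock(&entity->rq_lock);
        /* entity->stopped handling as in the original, elided here */
        drm_sched_rq_add_entity(entity->rq, entity);
        sched = entity->rq->sched;      /* read once, under the lock */
        spin_unlock(&entity->rq_lock);

        if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
                drm_sched_rq_update_fifo(entity, submit_ts);

        drm_sched_wakeup(sched, entity);        /* cannot race with an rq change */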
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index ae8be30472cd..62b07ef7630a 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -599,6 +599,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 
         /* first job wakes up scheduler */
         if (first) {
+                struct drm_sched_rq *rq;
+
                 /* Add the entity to the run queue */
                 spin_lock(&entity->rq_lock);
                 if (entity->stopped) {
@@ -608,13 +610,15 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
                         return;
                 }
 
-                drm_sched_rq_add_entity(entity->rq, entity);
+                rq = entity->rq;
+
+                drm_sched_rq_add_entity(rq, entity);
                 spin_unlock(&entity->rq_lock);
 
                 if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
                         drm_sched_rq_update_fifo(entity, submit_ts);
 
-                drm_sched_wakeup(entity->rq->sched, entity);
+                drm_sched_wakeup(rq->sched, entity);
         }
 }
 EXPORT_SYMBOL(drm_sched_entity_push_job);
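To make the commit message concrete, here is a sketch of the window the patch closes; the interleaving is illustrative only, and the claim that entity->rq can change concurrently is taken from the commit message rather than traced through the callers here:

        /*
         *   CPU 0: drm_sched_entity_push_job()           CPU 1: concurrent rq update
         *
         *   spin_lock(&entity->rq_lock);
         *   drm_sched_rq_add_entity(entity->rq, entity);   // added to the old rq
         *   spin_unlock(&entity->rq_lock);
         *                                                   entity->rq = <new rq>;
         *   drm_sched_wakeup(entity->rq->sched, entity);   // wakes the new scheduler,
         *                                                   // job sits on the old rq
         *
         * Reading entity->rq once into rq keeps the add and the wakeup
         * operating on the same run queue.
         */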