Message ID | 20240913202301.16772-1-robdclark@gmail.com (mailing list archive) |
---|---|
State | New, archived |
Series | [v2] drm/sched: Fix dynamic job-flow control race |
On Fri, Sep 13, 2024 at 01:23:01PM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> Fixes a race condition reported here:
> https://github.com/AsahiLinux/linux/issues/309#issuecomment-2238968609
>
> The whole premise of lockless access to a single-producer-single-
> consumer queue is that there is just a single producer and single
> consumer. That means we can't call drm_sched_can_queue() (which is
> about queueing more work to the hw, not to the spsc queue) from
> anywhere other than the consumer (wq).
>
> This call in the producer is just an optimization to avoid scheduling
> the consuming worker if it cannot yet queue more work to the hw. It
> is safe to drop this optimization to avoid the race condition.
>
> Suggested-by: Asahi Lina <lina@asahilina.net>
> Fixes: a78422e9dff3 ("drm/sched: implement dynamic job-flow control")
> Closes: https://github.com/AsahiLinux/linux/issues/309
> Cc: stable@vger.kernel.org
> Signed-off-by: Rob Clark <robdclark@chromium.org>

Reviewed-by: Danilo Krummrich <dakr@kernel.org>

> ---
>  drivers/gpu/drm/scheduler/sched_entity.c | 4 ++--
>  drivers/gpu/drm/scheduler/sched_main.c   | 7 ++-----
>  include/drm/gpu_scheduler.h              | 2 +-
>  3 files changed, 5 insertions(+), 8 deletions(-)
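To make the invariant concrete, below is a minimal userspace sketch of a dummy-node SPSC queue (illustrative only — this is not the kernel's spsc_queue; the names, memory orderings, and the credits field are this sketch's own). The point it demonstrates: peek() is only well-defined in the consumer context, because pop() frees nodes that a concurrent producer-side peek() may still be dereferencing. drm_sched_can_queue() peeks the head of the entity's job queue to learn the next job's credit cost, so reaching it from the push path (the producer) is exactly this hazard.

/* Minimal dummy-node SPSC queue sketch (userspace, illustrative only). */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct job {
	_Atomic(struct job *) next;
	int credits;			/* stand-in for a job's credit cost */
};

struct spsc {
	struct job *head;		/* consumer-owned: peek/pop side */
	struct job *tail;		/* producer-owned: push side */
};

static void spsc_init(struct spsc *q)
{
	struct job *dummy = calloc(1, sizeof(*dummy));

	atomic_store_explicit(&dummy->next, NULL, memory_order_relaxed);
	q->head = q->tail = dummy;
}

/* Producer side: publish a job. Safe concurrently with the consumer. */
static void spsc_push(struct spsc *q, struct job *j)
{
	atomic_store_explicit(&j->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&q->tail->next, j, memory_order_release);
	q->tail = j;
}

/*
 * Consumer side ONLY: return the oldest job without removing it.
 * Called from the producer this is a use-after-free waiting to happen:
 * q->head is consumer-owned, and the node it points at is freed by
 * spsc_pop() below.  This is the analogue of drm_sched_can_queue()
 * being reached from drm_sched_entity_push_job().
 */
static struct job *spsc_peek(struct spsc *q)
{
	return atomic_load_explicit(&q->head->next, memory_order_acquire);
}

/* Consumer side ONLY: dequeue the oldest job; its cost goes to *credits. */
static bool spsc_pop(struct spsc *q, int *credits)
{
	struct job *j = spsc_peek(q);

	if (!j)
		return false;
	*credits = j->credits;	/* copy out before retiring any node */
	free(q->head);		/* the old dummy dies here ... */
	q->head = j;		/* ... and j serves as the next dummy */
	return true;
}

int main(void)
{
	struct spsc q;
	int credits;

	spsc_init(&q);

	/* Producer context: publish one job. */
	struct job *j = calloc(1, sizeof(*j));
	j->credits = 4;
	spsc_push(&q, j);

	/* Consumer context (the scheduler's work item, in kernel terms)
	 * is the only place peek/pop may run. */
	if (spsc_pop(&q, &credits))
		printf("popped job costing %d credits\n", credits);

	free(q.head);		/* release the remaining dummy node */
	return 0;
}

Note the asymmetry: push() touches only tail, while peek()/pop() touch only head. The moment a second thread reads head, the single-consumer premise that the commit message relies on is gone.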
On Fri, Sep 13, 2024 at 01:23:01PM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> Fixes a race condition reported here:
> https://github.com/AsahiLinux/linux/issues/309#issuecomment-2238968609
> [...]
> Signed-off-by: Rob Clark <robdclark@chromium.org>

Tested for several hours with CONFIG_PREEMPT=y and KASAN, with a
workload similar to the one in the GitHub issue, with no reports or
oopses. Feel free to add

Tested-by: Janne Grunau <j@jannau.net>

thanks,
Janne
On Fri, Sep 13, 2024 at 01:23:01PM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> Fixes a race condition reported here:
> https://github.com/AsahiLinux/linux/issues/309#issuecomment-2238968609
> [...]
> Signed-off-by: Rob Clark <robdclark@chromium.org>

Applied to drm-misc-fixes, thanks!
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 58c8161289fe..567e5ace6d0c 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -380,7 +380,7 @@ static void drm_sched_entity_wakeup(struct dma_fence *f,
 		container_of(cb, struct drm_sched_entity, cb);
 
 	drm_sched_entity_clear_dep(f, cb);
-	drm_sched_wakeup(entity->rq->sched, entity);
+	drm_sched_wakeup(entity->rq->sched);
 }
 
 /**
@@ -612,7 +612,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 		if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
 			drm_sched_rq_update_fifo(entity, submit_ts);
 
-		drm_sched_wakeup(entity->rq->sched, entity);
+		drm_sched_wakeup(entity->rq->sched);
 	}
 }
 EXPORT_SYMBOL(drm_sched_entity_push_job);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index ab53ab486fe6..6f27cab0b76d 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1013,15 +1013,12 @@ EXPORT_SYMBOL(drm_sched_job_cleanup);
 /**
  * drm_sched_wakeup - Wake up the scheduler if it is ready to queue
  * @sched: scheduler instance
- * @entity: the scheduler entity
  *
  * Wake up the scheduler if we can queue jobs.
  */
-void drm_sched_wakeup(struct drm_gpu_scheduler *sched,
-		      struct drm_sched_entity *entity)
+void drm_sched_wakeup(struct drm_gpu_scheduler *sched)
 {
-	if (drm_sched_can_queue(sched, entity))
-		drm_sched_run_job_queue(sched);
+	drm_sched_run_job_queue(sched);
 }
 
 /**
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index fe8edb917360..9c437a057e5d 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -574,7 +574,7 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
 
 void drm_sched_tdr_queue_imm(struct drm_gpu_scheduler *sched);
 void drm_sched_job_cleanup(struct drm_sched_job *job);
-void drm_sched_wakeup(struct drm_gpu_scheduler *sched, struct drm_sched_entity *entity);
+void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
 bool drm_sched_wqueue_ready(struct drm_gpu_scheduler *sched);
 void drm_sched_wqueue_stop(struct drm_gpu_scheduler *sched);
 void drm_sched_wqueue_start(struct drm_gpu_scheduler *sched);
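For contrast, here is a compile-able toy of the post-patch shape (stand-in names only; in the kernel the path runs drm_sched_wakeup() → drm_sched_run_job_queue() → the scheduler's run-job work item). The producer no longer inspects any shared state — it just asks the worker to run, and the credit check happens only in the single consumer context.

/* Toy model of the fixed control flow (illustrative stand-ins only). */
#include <stdbool.h>
#include <stdio.h>

struct sched {
	int available_credits;	/* flow-control state, consumer-owned */
	int next_job_cost;
};

/* Consumer-side check, analogous to drm_sched_can_queue(): only the
 * worker ever reads this state now. */
static bool can_queue(const struct sched *s)
{
	return s->available_credits >= s->next_job_cost;
}

/* The run-job worker: decides for itself whether the hw has room. */
static void run_job_work(struct sched *s)
{
	if (!can_queue(s))
		return;	/* re-run when a job completes and frees credits */
	s->available_credits -= s->next_job_cost;
	printf("job submitted, %d credits left\n", s->available_credits);
}

/* drm_sched_wakeup() after the patch: unconditional kick, no peeking.
 * (In the kernel this is a queue_work() on the scheduler's workqueue
 * rather than a direct call.) */
static void wakeup(struct sched *s)
{
	run_job_work(s);
}

int main(void)
{
	struct sched s = { .available_credits = 2, .next_job_cost = 1 };

	wakeup(&s);	/* producer-side push path */
	wakeup(&s);
	wakeup(&s);	/* third kick: no credits left, worker just returns */
	return 0;
}

The cost of the fix is at most a spurious worker wakeup: the worker bails out cheaply when the hardware has no credits, which is why the commit message can call the dropped producer-side check a mere optimization.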