
[RFC,03/18] drm/sched: Remove one local variable

Message ID 20250108183528.41007-4-tvrtko.ursulin@igalia.com (mailing list archive)
State New
Series Deadline scheduler and other ideas

Commit Message

Tvrtko Ursulin Jan. 8, 2025, 6:35 p.m. UTC
It does not help readability, nor is it required to keep the line length
in check.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Philipp Stanner <pstanner@redhat.com>
---
 drivers/gpu/drm/scheduler/sched_main.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

Comments

Christian König Jan. 9, 2025, 12:49 p.m. UTC | #1
On 08.01.25 at 19:35, Tvrtko Ursulin wrote:
> It does not help readability, nor is it required to keep the line length
> in check.
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Danilo Krummrich <dakr@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Philipp Stanner <pstanner@redhat.com>
> ---
>   drivers/gpu/drm/scheduler/sched_main.c | 5 +----
>   1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 1734c17aeea5..01e0d6e686d1 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1175,7 +1175,6 @@ static void drm_sched_run_job_work(struct work_struct *w)
>   		container_of(w, struct drm_gpu_scheduler, work_run_job);
>   	struct drm_sched_entity *entity;
>   	struct dma_fence *fence;
> -	struct drm_sched_fence *s_fence;
>   	struct drm_sched_job *sched_job;
>   	int r;
>   
> @@ -1194,15 +1193,13 @@ static void drm_sched_run_job_work(struct work_struct *w)
>   		return;
>   	}
>   
> -	s_fence = sched_job->s_fence;
> -
>   	atomic_add(sched_job->credits, &sched->credit_count);
>   	drm_sched_job_begin(sched_job);
>   
>   	trace_drm_run_job(sched_job, entity);
>   	fence = sched->ops->run_job(sched_job);
>   	complete_all(&entity->entity_idle);
> -	drm_sched_fence_scheduled(s_fence, fence);
> +	drm_sched_fence_scheduled(sched_job->s_fence, fence);

Originally that was not for readability but for correctness.

As soon as complete_all(&entity->entity_idle); was called the sched_job 
could have been released.

But we changed that so that the sched_job is released from a separate 
worker instead, so that is most likely not necessary any more.
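
Put differently, the old pattern was guarding against roughly this (a
reduced sketch built only from the hunk above, not the full function):

	s_fence = sched_job->s_fence;	/* take the pointer while sched_job is known to be valid */

	fence = sched->ops->run_job(sched_job);
	complete_all(&entity->entity_idle);	/* with the old free path, sched_job
						 * could be released from here on */
	drm_sched_fence_scheduled(s_fence, fence);	/* still safe: only the cached pointer is used */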

Regards,
Christian.

>   
>   	if (!IS_ERR_OR_NULL(fence)) {
>   		/* Drop for original kref_init of the fence */
Tvrtko Ursulin Jan. 9, 2025, 1:20 p.m. UTC | #2
On 09/01/2025 12:49, Christian König wrote:
> On 08.01.25 at 19:35, Tvrtko Ursulin wrote:
>> It does not help readability, nor is it required to keep the line length
>> in check.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>> Cc: Christian König <christian.koenig@amd.com>
>> Cc: Danilo Krummrich <dakr@redhat.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Philipp Stanner <pstanner@redhat.com>
>> ---
>>   drivers/gpu/drm/scheduler/sched_main.c | 5 +----
>>   1 file changed, 1 insertion(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
>> b/drivers/gpu/drm/scheduler/sched_main.c
>> index 1734c17aeea5..01e0d6e686d1 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -1175,7 +1175,6 @@ static void drm_sched_run_job_work(struct 
>> work_struct *w)
>>           container_of(w, struct drm_gpu_scheduler, work_run_job);
>>       struct drm_sched_entity *entity;
>>       struct dma_fence *fence;
>> -    struct drm_sched_fence *s_fence;
>>       struct drm_sched_job *sched_job;
>>       int r;
>> @@ -1194,15 +1193,13 @@ static void drm_sched_run_job_work(struct 
>> work_struct *w)
>>           return;
>>       }
>> -    s_fence = sched_job->s_fence;
>> -
>>       atomic_add(sched_job->credits, &sched->credit_count);
>>       drm_sched_job_begin(sched_job);
>>       trace_drm_run_job(sched_job, entity);
>>       fence = sched->ops->run_job(sched_job);
>>       complete_all(&entity->entity_idle);
>> -    drm_sched_fence_scheduled(s_fence, fence);
>> +    drm_sched_fence_scheduled(sched_job->s_fence, fence);
> 
> Originally that was not for readability but for correctness.
> 
> As soon as complete_all(&entity->entity_idle); was called the sched_job 
> could have been released.

And without a comment ouch.

> But we changed that so that the sched_job is released from a separate 
> worker instead, so that is most likely not necessary any more.

Very subtle. Especially given that some drivers use an unordered queue.

And for them sched_job is dereferenced a few more times in the block 
below so not sure how it is safe.

Move complete_all() to the end of it all?

Regards,

Tvrtko

>>       if (!IS_ERR_OR_NULL(fence)) {
>>           /* Drop for original kref_init of the fence */
>
Christian König Jan. 9, 2025, 2:17 p.m. UTC | #3
On 09.01.25 at 14:20, Tvrtko Ursulin wrote:
>
> On 09/01/2025 12:49, Christian König wrote:
>> On 08.01.25 at 19:35, Tvrtko Ursulin wrote:
>>> It does not help readability, nor is it required to keep the line length
>>> in check.
>>>
>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>>> Cc: Christian König <christian.koenig@amd.com>
>>> Cc: Danilo Krummrich <dakr@redhat.com>
>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>> Cc: Philipp Stanner <pstanner@redhat.com>
>>> ---
>>>   drivers/gpu/drm/scheduler/sched_main.c | 5 +----
>>>   1 file changed, 1 insertion(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
>>> b/drivers/gpu/drm/scheduler/sched_main.c
>>> index 1734c17aeea5..01e0d6e686d1 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>> @@ -1175,7 +1175,6 @@ static void drm_sched_run_job_work(struct 
>>> work_struct *w)
>>>           container_of(w, struct drm_gpu_scheduler, work_run_job);
>>>       struct drm_sched_entity *entity;
>>>       struct dma_fence *fence;
>>> -    struct drm_sched_fence *s_fence;
>>>       struct drm_sched_job *sched_job;
>>>       int r;
>>> @@ -1194,15 +1193,13 @@ static void drm_sched_run_job_work(struct 
>>> work_struct *w)
>>>           return;
>>>       }
>>> -    s_fence = sched_job->s_fence;
>>> -
>>>       atomic_add(sched_job->credits, &sched->credit_count);
>>>       drm_sched_job_begin(sched_job);
>>>       trace_drm_run_job(sched_job, entity);
>>>       fence = sched->ops->run_job(sched_job);
>>>       complete_all(&entity->entity_idle);
>>> -    drm_sched_fence_scheduled(s_fence, fence);
>>> +    drm_sched_fence_scheduled(sched_job->s_fence, fence);
>>
>> Originally that was not for readability but for correctness.
>>
>> As soon as complete_all(&entity->entity_idle); was called the 
>> sched_job could have been released.
>
> And without a comment ouch.

That changed a long, long time ago and IIRC we did have a comment for that.

>
>> But we changed that so that the sched_job is released from a separate 
>> worker instead, so that is most likely not necessary any more.
>
> Very subtle. Especially given that some drivers use an unordered queue.

Huh? Unordered queue? How should that work?

Job submission ordering is a mandatory requirement of the dma_fence.

>
> And for them sched_job is dereferenced a few more times in the block 
> below so not sure how it is safe.
>
> Move complete_all() to the end of it all?

Most likely good idea, yes.

Regards,
Christian.

>
> Regards,
>
> Tvrtko
>
>>>       if (!IS_ERR_OR_NULL(fence)) {
>>>           /* Drop for original kref_init of the fence */
>>
Tvrtko Ursulin Jan. 9, 2025, 4:13 p.m. UTC | #4
On 09/01/2025 14:17, Christian König wrote:
> On 09.01.25 at 14:20, Tvrtko Ursulin wrote:
>>
>> On 09/01/2025 12:49, Christian König wrote:
>>> On 08.01.25 at 19:35, Tvrtko Ursulin wrote:
>>>> It does not help readability, nor is it required to keep the line length
>>>> in check.
>>>>
>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>>>> Cc: Christian König <christian.koenig@amd.com>
>>>> Cc: Danilo Krummrich <dakr@redhat.com>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>> Cc: Philipp Stanner <pstanner@redhat.com>
>>>> ---
>>>>   drivers/gpu/drm/scheduler/sched_main.c | 5 +----
>>>>   1 file changed, 1 insertion(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
>>>> b/drivers/gpu/drm/scheduler/sched_main.c
>>>> index 1734c17aeea5..01e0d6e686d1 100644
>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>> @@ -1175,7 +1175,6 @@ static void drm_sched_run_job_work(struct 
>>>> work_struct *w)
>>>>           container_of(w, struct drm_gpu_scheduler, work_run_job);
>>>>       struct drm_sched_entity *entity;
>>>>       struct dma_fence *fence;
>>>> -    struct drm_sched_fence *s_fence;
>>>>       struct drm_sched_job *sched_job;
>>>>       int r;
>>>> @@ -1194,15 +1193,13 @@ static void drm_sched_run_job_work(struct 
>>>> work_struct *w)
>>>>           return;
>>>>       }
>>>> -    s_fence = sched_job->s_fence;
>>>> -
>>>>       atomic_add(sched_job->credits, &sched->credit_count);
>>>>       drm_sched_job_begin(sched_job);
>>>>       trace_drm_run_job(sched_job, entity);
>>>>       fence = sched->ops->run_job(sched_job);
>>>>       complete_all(&entity->entity_idle);
>>>> -    drm_sched_fence_scheduled(s_fence, fence);
>>>> +    drm_sched_fence_scheduled(sched_job->s_fence, fence);
>>>
>>> Originally that was not for readability but for correctness.
>>>
>>> As soon as complete_all(&entity->entity_idle); was called the 
>>> sched_job could have been released.
>>
>> And without a comment ouch.
> 
> That changed a long, long time ago and IIRC we did have a comment for that.
> 
>>
>>> But we changed that so that the sched_job is released from a separate 
>>> worker instead, so that is most likely not necessary any more.
>>
>> Very subtle. Especially given that some drivers use an unordered queue.
> 
> Huh? Unordered queue? How should that work?
> 
> Job submission ordering is a mandatory requirement of the dma_fence.

I think it is fine for submission since it is a single work item which is
still serialized with respect to itself. But the free work can then
overtake it on drivers which pass in an unordered queue.
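
To illustrate the distinction (a sketch only; the queue name is made up):
an ordered workqueue executes one work item at a time, so the free worker
cannot run while the run-job worker is still executing, whereas a plain
workqueue gives no such guarantee.

	#include <linux/workqueue.h>

	/* Sketch of the two kinds of submit queue a driver might pass in. */
	static struct workqueue_struct *example_submit_wq(bool ordered)
	{
		if (ordered)
			/* one item at a time: free-job cannot overtake run-job */
			return alloc_ordered_workqueue("example-sched-submit", 0);

		/* default concurrency: free-job may run alongside run-job */
		return alloc_workqueue("example-sched-submit", 0, 0);
	}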

I think Matt promised to document the ordered vs unordered 
criteria/requirements some time ago and maybe forgot*.

In any case it seems like moving the complete_all() call to be last is the
safest option. I'll rework this patch to that effect for v3.
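
Roughly along these lines (a sketch only; the branches not visible in the
hunk above are summarised as comments, and the actual v3 may differ):

	trace_drm_run_job(sched_job, entity);
	fence = sched->ops->run_job(sched_job);
	drm_sched_fence_scheduled(sched_job->s_fence, fence);

	if (!IS_ERR_OR_NULL(fence)) {
		/* ... drop the initial fence reference and install the
		 * job-done callback ... */
	} else {
		/* ... complete the job with the error returned by run_job() ... */
	}

	/*
	 * sched_job is not touched past this point, so only now signal the
	 * entity as idle.
	 */
	complete_all(&entity->entity_idle);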

Regards,

Tvrtko

*)
https://lore.kernel.org/all/ZjlmZHBMfK9fld9c@DUT025-TGLU.fm.intel.com/T/

>> And for them sched_job is dereferenced a few more times in the block 
>> below so not sure how it is safe.
>>
>> Move complete_all() to the end of it all?
> 
> Most likely good idea, yes.
> 
> Regards,
> Christian.
> 
>>
>> Regards,
>>
>> Tvrtko
>>
>>>>       if (!IS_ERR_OR_NULL(fence)) {
>>>>           /* Drop for original kref_init of the fence */
>>>
>

Patch

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 1734c17aeea5..01e0d6e686d1 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1175,7 +1175,6 @@  static void drm_sched_run_job_work(struct work_struct *w)
 		container_of(w, struct drm_gpu_scheduler, work_run_job);
 	struct drm_sched_entity *entity;
 	struct dma_fence *fence;
-	struct drm_sched_fence *s_fence;
 	struct drm_sched_job *sched_job;
 	int r;
 
@@ -1194,15 +1193,13 @@  static void drm_sched_run_job_work(struct work_struct *w)
 		return;
 	}
 
-	s_fence = sched_job->s_fence;
-
 	atomic_add(sched_job->credits, &sched->credit_count);
 	drm_sched_job_begin(sched_job);
 
 	trace_drm_run_job(sched_job, entity);
 	fence = sched->ops->run_job(sched_job);
 	complete_all(&entity->entity_idle);
-	drm_sched_fence_scheduled(s_fence, fence);
+	drm_sched_fence_scheduled(sched_job->s_fence, fence);
 
 	if (!IS_ERR_OR_NULL(fence)) {
 		/* Drop for original kref_init of the fence */