
[RFC,1/8] sched: Add nice value change notifier

Message ID 20211004143650.699120-2-tvrtko.ursulin@linux.intel.com (mailing list archive)
State New, archived
Series CPU + GPU synchronised priority scheduling

Commit Message

Tvrtko Ursulin Oct. 4, 2021, 2:36 p.m. UTC
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Implement a simple notifier chain via which interested parties can
track when a process' nice value changes. Simple because it is global,
so each user has to track which tasks it is interested in.

The first intended use case is GPU drivers using task nice as a
priority hint when scheduling GPU contexts belonging to their
respective clients.

The register_user_nice_notifier and unregister_user_nice_notifier
functions are provided for hooking into the chain, and the callbacks
receive the new nice value together with a pointer to the task_struct
being modified.
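
For illustration, a consumer of the chain would do something along
these lines (the foo_* names are placeholders, not part of this patch):

  #include <linux/notifier.h>
  #include <linux/sched.h>

  static int foo_nice_change(struct notifier_block *nb,
                             unsigned long nice, void *data)
  {
          struct task_struct *tsk = data;

          /* Filter for the tasks this driver tracks, then forward the
           * new nice value to the device scheduling backend
           * (foo_update_gpu_priority() is a hypothetical helper). */
          foo_update_gpu_priority(tsk, (long)nice);

          /* set_user_nice() warns unless NOTIFY_DONE is returned. */
          return NOTIFY_DONE;
  }

  static struct notifier_block foo_nice_nb = {
          .notifier_call = foo_nice_change,
  };

  /* On driver load and unload respectively: */
  register_user_nice_notifier(&foo_nice_nb);
  unregister_user_nice_notifier(&foo_nice_nb);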

v2:
 * Move the notifier chain outside task_rq_lock. (Peter)

Opens:
 * Security. Would some sort of a per-process mechanism be better and
   feasible?
     x Peter Zijlstra thinks it may be passable now that it is outside
       core scheduler locks.
 * Put it all behind kconfig to be selected by interested drivers?

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
---
 include/linux/sched.h |  5 +++++
 kernel/sched/core.c   | 37 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 41 insertions(+), 1 deletion(-)

Comments

Wanghui (John) Oct. 6, 2021, 4:10 a.m. UTC | #1
Hi Tvrtko

On 2021/10/4 22:36, Tvrtko Ursulin wrote:
>   void set_user_nice(struct task_struct *p, long nice)
>   {
>   	bool queued, running;
> -	int old_prio;
> +	int old_prio, ret;
>   	struct rq_flags rf;
>   	struct rq *rq;
>   
> @@ -6915,6 +6947,9 @@ void set_user_nice(struct task_struct *p, long nice)
>   
>   out_unlock:
>   	task_rq_unlock(rq, p, &rf);
> +
> +	ret = atomic_notifier_call_chain(&user_nice_notifier_list, nice, p);
> +	WARN_ON_ONCE(ret != NOTIFY_DONE);
>   }
How about adding a new "io_nice" to task_struct, and moving the call
chain to sched_setattr/getattr (rough sketch below)? There are two
benefits:

1. It is decoupled from the fair scheduler. In our use case, high
    priority tasks often use the rt scheduler.
2. The range of values doesn't need to be bound to -20~19 or 0~139.
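
Roughly something like this (untested, names are only placeholders for
the idea): add an "int io_nice;" field to struct task_struct and then,
from sched_setattr() and friends, do:

  static void set_task_io_nice(struct task_struct *p, int io_nice)
  {
          WRITE_ONCE(p->io_nice, io_nice);
          atomic_notifier_call_chain(&user_nice_notifier_list,
                                     io_nice, p);
  }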
Barry Song Oct. 6, 2021, 7:58 a.m. UTC | #2
On Wed, Oct 6, 2021 at 5:15 PM Wanghui (John) <john.wanghui@huawei.com> wrote:
>
> HI Tvrtko
>
> On 2021/10/4 22:36, Tvrtko Ursulin wrote:
> >   void set_user_nice(struct task_struct *p, long nice)
> >   {
> >       bool queued, running;
> > -     int old_prio;
> > +     int old_prio, ret;
> >       struct rq_flags rf;
> >       struct rq *rq;
> >
> > @@ -6915,6 +6947,9 @@ void set_user_nice(struct task_struct *p, long nice)
> >
> >   out_unlock:
> >       task_rq_unlock(rq, p, &rf);
> > +
> > +     ret = atomic_notifier_call_chain(&user_nice_notifier_list, nice, p);
> > +     WARN_ON_ONCE(ret != NOTIFY_DONE);
> >   }
> How about adding a new "io_nice" to task_struct,and move the call chain to
> sched_setattr/getattr, there are two benefits:

We already have an ionice for the block io scheduler, so this new
io_nice can hardly be generic to all I/O. It seems the patchset is
trying to link a process' nice with the GPU's scheduler; to some
extent that makes more sense than having a common ionice, because we
have a lot of IO devices in the system and we don't know which I/O the
ionice of task_struct should be applied to.

Maybe we could have an ionice dedicated to the GPU, just like the
ionice used by the CFQ bio/request scheduler.

>
> 1. Decoupled with fair scheduelr. In our use case, high priority tasks often
>     use rt scheduler.

Is it possible to tell the GPU about RT priority in the same way we
are telling it about CFS nice?

> 2. The range of value don't need to be bound to -20~19 or 0~139
>

We could build a mapping between process and GPU priorities. It
doesn't seem like a big deal.
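
Something as simple as this would do as a starting point (the
[-1023, 1023] GPU range is only an illustration, not any particular
driver's uapi):

  /* Linear map from nice [-20, 19] to a GPU priority, higher wins. */
  static int nice_to_gpu_prio(long nice)
  {
          return (int)(-nice * 1023 / 20); /* -20 -> 1023, 19 -> -971 */
  }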

Thanks
barry
Tvrtko Ursulin Oct. 6, 2021, 1:44 p.m. UTC | #3
Hi,

On 06/10/2021 08:58, Barry Song wrote:
> On Wed, Oct 6, 2021 at 5:15 PM Wanghui (John) <john.wanghui@huawei.com> wrote:
>>
>> HI Tvrtko
>>
>> On 2021/10/4 22:36, Tvrtko Ursulin wrote:
>>>    void set_user_nice(struct task_struct *p, long nice)
>>>    {
>>>        bool queued, running;
>>> -     int old_prio;
>>> +     int old_prio, ret;
>>>        struct rq_flags rf;
>>>        struct rq *rq;
>>>
>>> @@ -6915,6 +6947,9 @@ void set_user_nice(struct task_struct *p, long nice)
>>>
>>>    out_unlock:
>>>        task_rq_unlock(rq, p, &rf);
>>> +
>>> +     ret = atomic_notifier_call_chain(&user_nice_notifier_list, nice, p);
>>> +     WARN_ON_ONCE(ret != NOTIFY_DONE);
>>>    }
>> How about adding a new "io_nice" to task_struct,and move the call chain to
>> sched_setattr/getattr, there are two benefits:
> 
> We already have an ionice for block io scheduler. hardly can this new io_nice
> be generic to all I/O. it seems the patchset is trying to link
> process' nice with
> GPU's scheduler, to some extent, it makes more senses than having a
> common ionice because we have a lot of IO devices in the systems, we don't
> know which I/O the ionice of task_struct should be applied to.
> 
> Maybe we could have an ionice dedicated for GPU just like ionice for CFQ
> of bio/request scheduler.

The thought crossed my mind, but I couldn't see the practicality of a
3rd nice concept. Even to start with I struggle a bit with the
usefulness of the existing ionice vs nice, like coming up with
practical examples of use cases where it makes sense to decouple the
two priorities.

From a different angle, I did think inheriting CPU nice makes sense
for GPU workloads. This is because today, and more so in the future,
computations on the same data set do flow from one to the other.

Take maybe a simple example of batch image processing where the CPU
decodes, the GPU does a transform and then the CPU encodes. Or a
different mix, it doesn't really matter, since the main point is that
it is one computing pipeline from the user's point of view.

In this example perhaps everything could be handled in userspace so 
that's another argument to be had. Userspace could query the current 
scheduling attributes before submitting work to the processing pipeline 
and adjust using respective uapi.

The downside would be the inability to react to changes after the work
is already running, which may not be too serious a limitation outside
the world of multi-minute compute workloads. And the latter are
probably special-case enough that they would be configured explicitly.

>>
>> 1. Decoupled with fair scheduelr. In our use case, high priority tasks often
>>      use rt scheduler.
> 
> Is it possible to tell GPU RT as we are telling them CFS nice?

Yes of course. We could create a common notification "data packet" which 
would be sent from both entry points and provide more data than just the 
nice value. Consumers (of the notifier chain) could then decide for 
themselves what they want to do with the data.
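
For instance (purely illustrative, names invented on the spot):

  struct user_prio_notify_data {
          struct task_struct *task;
          int policy;      /* SCHED_NORMAL, SCHED_FIFO, ... */
          int nice;        /* meaningful for fair policies */
          int rt_priority; /* meaningful for RT policies */
  };

Both set_user_nice() and the sched_setattr() path would fill one of
these in and pass it as the notifier data pointer, instead of passing
just the nice value and the task as today.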

Regards,

Tvrtko

> 
>> 2. The range of value don't need to be bound to -20~19 or 0~139
>>
> 
> could build a mapping between the priorities of process and GPU. It seems
> not a big deal.
> 
> Thanks
> barry
>
Barry Song Oct. 6, 2021, 8:21 p.m. UTC | #4
On Thu, Oct 7, 2021 at 2:44 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> Hi,
>
> On 06/10/2021 08:58, Barry Song wrote:
> > On Wed, Oct 6, 2021 at 5:15 PM Wanghui (John) <john.wanghui@huawei.com> wrote:
> >>
> >> HI Tvrtko
> >>
> >> On 2021/10/4 22:36, Tvrtko Ursulin wrote:
> >>>    void set_user_nice(struct task_struct *p, long nice)
> >>>    {
> >>>        bool queued, running;
> >>> -     int old_prio;
> >>> +     int old_prio, ret;
> >>>        struct rq_flags rf;
> >>>        struct rq *rq;
> >>>
> >>> @@ -6915,6 +6947,9 @@ void set_user_nice(struct task_struct *p, long nice)
> >>>
> >>>    out_unlock:
> >>>        task_rq_unlock(rq, p, &rf);
> >>> +
> >>> +     ret = atomic_notifier_call_chain(&user_nice_notifier_list, nice, p);
> >>> +     WARN_ON_ONCE(ret != NOTIFY_DONE);
> >>>    }
> >> How about adding a new "io_nice" to task_struct,and move the call chain to
> >> sched_setattr/getattr, there are two benefits:
> >
> > We already have an ionice for block io scheduler. hardly can this new io_nice
> > be generic to all I/O. it seems the patchset is trying to link
> > process' nice with
> > GPU's scheduler, to some extent, it makes more senses than having a
> > common ionice because we have a lot of IO devices in the systems, we don't
> > know which I/O the ionice of task_struct should be applied to.
> >
> > Maybe we could have an ionice dedicated for GPU just like ionice for CFQ
> > of bio/request scheduler.
>
> Thought crossed my mind but I couldn't see the practicality of a 3rd
> nice concept. I mean even to start with I struggle a bit with the
> usefulness of existing ionice vs nice. Like coming up with practical
> examples of usecases where it makes sense to decouple the two priorities.
>
>  From a different angle I did think inheriting CPU nice makes sense for
> GPU workloads. This is because today, and more so in the future,
> computations on a same data set do flow from one to the other.
>
> Like maybe a simple example of batch image processing where CPU decodes,
> GPU does a transform and then CPU encodes. Or a different mix, doesn't
> really matter, since the main point it is one computing pipeline from
> users point of view.
>

I am on it, but I am also seeing two problems here:
1. nice is not global in Linux. For example, if you have two cgroups
and cgroup A has more quota than cgroup B, tasks in B won't win even
if they have a lower nice. cgroups do proportional-weight, time-based
division of CPU.

2. Historically we had dynamic nice, which was adjusted based on the
average sleep/running time; right now we don't have dynamic nice, but
virtual time still makes tasks which sleep more preempt other tasks
with the same or even a lower nice:
    virtual time += physical time / weight derived from nice
so a static nice number doesn't always decide preemption.
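
To put rough numbers on the second point (using the kernel's
sched_prio_to_weight table, so treat these as approximate): a nice 0
task has weight 1024 and a nice 5 task has weight 335, so for the same
10ms of CPU time the nice 0 task accrues ~10ms of vruntime while the
nice 5 task accrues ~10ms * 1024/335 ~= 31ms; and a task that slept,
and therefore has a lower vruntime, can preempt either of them
regardless of its static nice.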

So it seems your patch only works in some simple situations, for
example no cgroups and tasks with similar sleep/running time.

> In this example perhaps everything could be handled in userspace so
> that's another argument to be had. Userspace could query the current
> scheduling attributes before submitting work to the processing pipeline
> and adjust using respective uapi.
>
> Downside would be inability to react to changes after the work is
> already running which may not be too serious limitation outside the
> world of multi-minute compute workloads. And latter are probably special
> case enough that would be configured explicitly.
>
> >>
> >> 1. Decoupled with fair scheduelr. In our use case, high priority tasks often
> >>      use rt scheduler.
> >
> > Is it possible to tell GPU RT as we are telling them CFS nice?
>
> Yes of course. We could create a common notification "data packet" which
> would be sent from both entry points and provide more data than just the
> nice value. Consumers (of the notifier chain) could then decide for
> themselves what they want to do with the data.

RT should have the same problem as CFS once we have cgroups.

>
> Regards,
>
> Tvrtko
>
> >
> >> 2. The range of value don't need to be bound to -20~19 or 0~139
> >>
> >
> > could build a mapping between the priorities of process and GPU. It seems
> > not a big deal.
> >
> > Thanks
> > barry
> >

Thanks
barry
Tvrtko Ursulin Oct. 7, 2021, 8:50 a.m. UTC | #5
On 06/10/2021 21:21, Barry Song wrote:
> On Thu, Oct 7, 2021 at 2:44 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>>
>> Hi,
>>
>> On 06/10/2021 08:58, Barry Song wrote:
>>> On Wed, Oct 6, 2021 at 5:15 PM Wanghui (John) <john.wanghui@huawei.com> wrote:
>>>>
>>>> HI Tvrtko
>>>>
>>>> On 2021/10/4 22:36, Tvrtko Ursulin wrote:
>>>>>     void set_user_nice(struct task_struct *p, long nice)
>>>>>     {
>>>>>         bool queued, running;
>>>>> -     int old_prio;
>>>>> +     int old_prio, ret;
>>>>>         struct rq_flags rf;
>>>>>         struct rq *rq;
>>>>>
>>>>> @@ -6915,6 +6947,9 @@ void set_user_nice(struct task_struct *p, long nice)
>>>>>
>>>>>     out_unlock:
>>>>>         task_rq_unlock(rq, p, &rf);
>>>>> +
>>>>> +     ret = atomic_notifier_call_chain(&user_nice_notifier_list, nice, p);
>>>>> +     WARN_ON_ONCE(ret != NOTIFY_DONE);
>>>>>     }
>>>> How about adding a new "io_nice" to task_struct,and move the call chain to
>>>> sched_setattr/getattr, there are two benefits:
>>>
>>> We already have an ionice for block io scheduler. hardly can this new io_nice
>>> be generic to all I/O. it seems the patchset is trying to link
>>> process' nice with
>>> GPU's scheduler, to some extent, it makes more senses than having a
>>> common ionice because we have a lot of IO devices in the systems, we don't
>>> know which I/O the ionice of task_struct should be applied to.
>>>
>>> Maybe we could have an ionice dedicated for GPU just like ionice for CFQ
>>> of bio/request scheduler.
>>
>> Thought crossed my mind but I couldn't see the practicality of a 3rd
>> nice concept. I mean even to start with I struggle a bit with the
>> usefulness of existing ionice vs nice. Like coming up with practical
>> examples of usecases where it makes sense to decouple the two priorities.
>>
>>   From a different angle I did think inheriting CPU nice makes sense for
>> GPU workloads. This is because today, and more so in the future,
>> computations on a same data set do flow from one to the other.
>>
>> Like maybe a simple example of batch image processing where CPU decodes,
>> GPU does a transform and then CPU encodes. Or a different mix, doesn't
>> really matter, since the main point it is one computing pipeline from
>> users point of view.
>>
> 
> I am on it. but I am also seeing two problems here:
> 1. nice is not global in linux. For example, if you have two cgroups, cgroup A
> has more quota then cgroup B. Tasks in B won't win even if it has a lower nice.
> cgroups will run proportional-weight time-based division of CPU.
> 
> 2. Historically, we had dynamic nice which was adjusted based on the average
> sleep/running time; right now, we don't have dynamic nice, but virtual time
> still make tasks which sleep more preempt other tasks with the same nice
> or even lower nice.
> virtual time += physical time/weight by nice
> so, static nice number doesn't always make sense to decide preemption.
> 
> So it seems your patch only works under some simple situation for example
> no cgroups, tasks have similar sleep/running time.

Yes, I broadly agree with your assessment. Although there are plans
for adding cgroup support to i915 scheduling, I doubt control as fine
grained, and semantics as exact, as on the CPU side will happen.

Mostly because the drive seems to be towards more micro-controller
managed scheduling, which adds further challenges in connecting the
two sides together.

But when you say it is a problem, I would characterize it more as a
weakness in terms of being only a subset of possible control. It is
still richer (better?) than what currently exists and, as demonstrated
with the benchmarks in my cover letter, it can deliver improvements in
user experience. If in the mid-term future we can extend it with
cgroup support, then the concept should still apply and get closer to
how you described nice working in the CPU world.

The main question in my mind is whether the idea of adding the
sched_attr/priority notifier to the kernel can be justified. Because,
as mentioned before, everything apart from adjusting currently running
GPU jobs could be done purely in userspace. Stack changes would be
quite extensive and all, but that is not usually a good enough reason
to put something in the kernel. That's why it is an RFC, an invitation
to discuss.

Even ionice inherits from nice (see task_nice_ioprio()), so I think
the argument can be made for drivers as well.
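
For reference, the mapping there is essentially this (modulo kernel
version differences):

  /* include/linux/ioprio.h */
  static inline int task_nice_ioprio(struct task_struct *task)
  {
          return (task_nice(task) + 20) / 5;
  }

i.e. nice [-20, 19] is folded onto the eight ioprio levels.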

Regards,

Tvrtko

>> In this example perhaps everything could be handled in userspace so
>> that's another argument to be had. Userspace could query the current
>> scheduling attributes before submitting work to the processing pipeline
>> and adjust using respective uapi.
>>
>> Downside would be inability to react to changes after the work is
>> already running which may not be too serious limitation outside the
>> world of multi-minute compute workloads. And latter are probably special
>> case enough that would be configured explicitly.
>>
>>>>
>>>> 1. Decoupled with fair scheduelr. In our use case, high priority tasks often
>>>>       use rt scheduler.
>>>
>>> Is it possible to tell GPU RT as we are telling them CFS nice?
>>
>> Yes of course. We could create a common notification "data packet" which
>> would be sent from both entry points and provide more data than just the
>> nice value. Consumers (of the notifier chain) could then decide for
>> themselves what they want to do with the data.
> 
> RT should have the same problem with CFS once we have cgroups.
> 
>>
>> Regards,
>>
>> Tvrtko
>>
>>>
>>>> 2. The range of value don't need to be bound to -20~19 or 0~139
>>>>
>>>
>>> could build a mapping between the priorities of process and GPU. It seems
>>> not a big deal.
>>>
>>> Thanks
>>> barry
>>>
> 
> Thanks
> barry
>
Tvrtko Ursulin Oct. 7, 2021, 9:09 a.m. UTC | #6
On 07/10/2021 09:50, Tvrtko Ursulin wrote:
> 
> On 06/10/2021 21:21, Barry Song wrote:
>> On Thu, Oct 7, 2021 at 2:44 AM Tvrtko Ursulin
>> <tvrtko.ursulin@linux.intel.com> wrote:
>>>
>>>
>>> Hi,
>>>
>>> On 06/10/2021 08:58, Barry Song wrote:
>>>> On Wed, Oct 6, 2021 at 5:15 PM Wanghui (John) 
>>>> <john.wanghui@huawei.com> wrote:
>>>>>
>>>>> HI Tvrtko
>>>>>
>>>>> On 2021/10/4 22:36, Tvrtko Ursulin wrote:
>>>>>>     void set_user_nice(struct task_struct *p, long nice)
>>>>>>     {
>>>>>>         bool queued, running;
>>>>>> -     int old_prio;
>>>>>> +     int old_prio, ret;
>>>>>>         struct rq_flags rf;
>>>>>>         struct rq *rq;
>>>>>>
>>>>>> @@ -6915,6 +6947,9 @@ void set_user_nice(struct task_struct *p, 
>>>>>> long nice)
>>>>>>
>>>>>>     out_unlock:
>>>>>>         task_rq_unlock(rq, p, &rf);
>>>>>> +
>>>>>> +     ret = atomic_notifier_call_chain(&user_nice_notifier_list, 
>>>>>> nice, p);
>>>>>> +     WARN_ON_ONCE(ret != NOTIFY_DONE);
>>>>>>     }
>>>>> How about adding a new "io_nice" to task_struct,and move the call 
>>>>> chain to
>>>>> sched_setattr/getattr, there are two benefits:
>>>>
>>>> We already have an ionice for block io scheduler. hardly can this 
>>>> new io_nice
>>>> be generic to all I/O. it seems the patchset is trying to link
>>>> process' nice with
>>>> GPU's scheduler, to some extent, it makes more senses than having a
>>>> common ionice because we have a lot of IO devices in the systems, we 
>>>> don't
>>>> know which I/O the ionice of task_struct should be applied to.
>>>>
>>>> Maybe we could have an ionice dedicated for GPU just like ionice for 
>>>> CFQ
>>>> of bio/request scheduler.
>>>
>>> Thought crossed my mind but I couldn't see the practicality of a 3rd
>>> nice concept. I mean even to start with I struggle a bit with the
>>> usefulness of existing ionice vs nice. Like coming up with practical
>>> examples of usecases where it makes sense to decouple the two 
>>> priorities.
>>>
>>>   From a different angle I did think inheriting CPU nice makes sense for
>>> GPU workloads. This is because today, and more so in the future,
>>> computations on a same data set do flow from one to the other.
>>>
>>> Like maybe a simple example of batch image processing where CPU decodes,
>>> GPU does a transform and then CPU encodes. Or a different mix, doesn't
>>> really matter, since the main point it is one computing pipeline from
>>> users point of view.
>>>
>>
>> I am on it. but I am also seeing two problems here:
>> 1. nice is not global in linux. For example, if you have two cgroups, 
>> cgroup A
>> has more quota then cgroup B. Tasks in B won't win even if it has a 
>> lower nice.
>> cgroups will run proportional-weight time-based division of CPU.
>>
>> 2. Historically, we had dynamic nice which was adjusted based on the 
>> average
>> sleep/running time; right now, we don't have dynamic nice, but virtual 
>> time
>> still make tasks which sleep more preempt other tasks with the same nice
>> or even lower nice.
>> virtual time += physical time/weight by nice
>> so, static nice number doesn't always make sense to decide preemption.
>>
>> So it seems your patch only works under some simple situation for example
>> no cgroups, tasks have similar sleep/running time.
> 
> Yes, I broadly agree with your assessment. Although there are plans for 
> adding cgroup support to i915 scheduling, I doubt as fine grained 
> control and exact semantics as there are on the CPU side will happen.
> 
> Mostly because the drive seems to be for more micro-controller managed 
> scheduling which adds further challenges in connecting the two sides 
> together.
> 
> But when you say it is a problem, I would characterize it more a 
> weakness in terms of being only a subset of possible control. It is 
> still richer (better?) than what currently exists and as demonstrated 
> with benchmarks in my cover letter it can deliver improvements in user 
> experience. If in the mid term future we can extend it with cgroup 
> support then the concept should still apply and get closer to how you 
> described nice works in the CPU world.
> 
> Main question in my mind is whether the idea of adding the 
> sched_attr/priority notifier to the kernel can be justified. Because as 
> mentioned before, everything apart from adjusting currently running GPU 
> jobs could be done purely in userspace. Stack changes would be quite 
> extensive and all, but that is not usually a good enough reason to put 
> something in the kernel. That's why it is an RFC an invitation to discuss.
> 
> Even ionice inherits from nice (see task_nice_ioprio()) so I think 
> argument can be made for drivers as well.

Now that I have written this down, I had a bit of a light bulb moment.
If I abandon the idea of adjusting the priority of already submitted
work items, then I can do much of what I want purely from within the
confines of i915.

I simply add code to inherit from the current task's nice on every new
work item submission. This should probably bring the majority of the
benefit I measured.
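
Hand-waving the i915 specifics, at submission time it boils down to
something like this (the struct and field names here are made up, not
the actual driver code):

  /* Runs in process context when a new work item is queued. */
  static void inherit_submitter_nice(struct my_gpu_request *rq)
  {
          long nice = task_nice(current);

          /* Map nice [-20, 19] onto the GPU scheduler's priority
           * range, higher meaning more important. */
          rq->priority = (int)(-nice * 1023 / 20);
  }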

Regards,

Tvrtko
Barry Song Oct. 7, 2021, 10 a.m. UTC | #7
On Thu, Oct 7, 2021 at 10:09 PM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 07/10/2021 09:50, Tvrtko Ursulin wrote:
> >
> > On 06/10/2021 21:21, Barry Song wrote:
> >> On Thu, Oct 7, 2021 at 2:44 AM Tvrtko Ursulin
> >> <tvrtko.ursulin@linux.intel.com> wrote:
> >>>
> >>>
> >>> Hi,
> >>>
> >>> On 06/10/2021 08:58, Barry Song wrote:
> >>>> On Wed, Oct 6, 2021 at 5:15 PM Wanghui (John)
> >>>> <john.wanghui@huawei.com> wrote:
> >>>>>
> >>>>> HI Tvrtko
> >>>>>
> >>>>> On 2021/10/4 22:36, Tvrtko Ursulin wrote:
> >>>>>>     void set_user_nice(struct task_struct *p, long nice)
> >>>>>>     {
> >>>>>>         bool queued, running;
> >>>>>> -     int old_prio;
> >>>>>> +     int old_prio, ret;
> >>>>>>         struct rq_flags rf;
> >>>>>>         struct rq *rq;
> >>>>>>
> >>>>>> @@ -6915,6 +6947,9 @@ void set_user_nice(struct task_struct *p,
> >>>>>> long nice)
> >>>>>>
> >>>>>>     out_unlock:
> >>>>>>         task_rq_unlock(rq, p, &rf);
> >>>>>> +
> >>>>>> +     ret = atomic_notifier_call_chain(&user_nice_notifier_list,
> >>>>>> nice, p);
> >>>>>> +     WARN_ON_ONCE(ret != NOTIFY_DONE);
> >>>>>>     }
> >>>>> How about adding a new "io_nice" to task_struct,and move the call
> >>>>> chain to
> >>>>> sched_setattr/getattr, there are two benefits:
> >>>>
> >>>> We already have an ionice for block io scheduler. hardly can this
> >>>> new io_nice
> >>>> be generic to all I/O. it seems the patchset is trying to link
> >>>> process' nice with
> >>>> GPU's scheduler, to some extent, it makes more senses than having a
> >>>> common ionice because we have a lot of IO devices in the systems, we
> >>>> don't
> >>>> know which I/O the ionice of task_struct should be applied to.
> >>>>
> >>>> Maybe we could have an ionice dedicated for GPU just like ionice for
> >>>> CFQ
> >>>> of bio/request scheduler.
> >>>
> >>> Thought crossed my mind but I couldn't see the practicality of a 3rd
> >>> nice concept. I mean even to start with I struggle a bit with the
> >>> usefulness of existing ionice vs nice. Like coming up with practical
> >>> examples of usecases where it makes sense to decouple the two
> >>> priorities.
> >>>
> >>>   From a different angle I did think inheriting CPU nice makes sense for
> >>> GPU workloads. This is because today, and more so in the future,
> >>> computations on a same data set do flow from one to the other.
> >>>
> >>> Like maybe a simple example of batch image processing where CPU decodes,
> >>> GPU does a transform and then CPU encodes. Or a different mix, doesn't
> >>> really matter, since the main point it is one computing pipeline from
> >>> users point of view.
> >>>
> >>
> >> I am on it. but I am also seeing two problems here:
> >> 1. nice is not global in linux. For example, if you have two cgroups,
> >> cgroup A
> >> has more quota then cgroup B. Tasks in B won't win even if it has a
> >> lower nice.
> >> cgroups will run proportional-weight time-based division of CPU.
> >>
> >> 2. Historically, we had dynamic nice which was adjusted based on the
> >> average
> >> sleep/running time; right now, we don't have dynamic nice, but virtual
> >> time
> >> still make tasks which sleep more preempt other tasks with the same nice
> >> or even lower nice.
> >> virtual time += physical time/weight by nice
> >> so, static nice number doesn't always make sense to decide preemption.
> >>
> >> So it seems your patch only works under some simple situation for example
> >> no cgroups, tasks have similar sleep/running time.
> >
> > Yes, I broadly agree with your assessment. Although there are plans for
> > adding cgroup support to i915 scheduling, I doubt as fine grained
> > control and exact semantics as there are on the CPU side will happen.
> >
> > Mostly because the drive seems to be for more micro-controller managed
> > scheduling which adds further challenges in connecting the two sides
> > together.
> >
> > But when you say it is a problem, I would characterize it more a
> > weakness in terms of being only a subset of possible control. It is
> > still richer (better?) than what currently exists and as demonstrated
> > with benchmarks in my cover letter it can deliver improvements in user
> > experience. If in the mid term future we can extend it with cgroup
> > support then the concept should still apply and get closer to how you
> > described nice works in the CPU world.
> >
> > Main question in my mind is whether the idea of adding the
> > sched_attr/priority notifier to the kernel can be justified. Because as
> > mentioned before, everything apart from adjusting currently running GPU
> > jobs could be done purely in userspace. Stack changes would be quite
> > extensive and all, but that is not usually a good enough reason to put
> > something in the kernel. That's why it is an RFC an invitation to discuss.
> >
> > Even ionice inherits from nice (see task_nice_ioprio()) so I think
> > argument can be made for drivers as well.
>
> Now that I wrote this, I had a little bit of a light bulb moment. If I
> abandon the idea of adjusting the priority of already submitted work
> items, then I can do much of what I want purely from within the confines
> of i915.
>
> I simply add code to inherit from current task nice on every new work
> item submission. This should probably bring the majority of the benefit
> I measured.

I think the idea of linking the process' priority with the GPU's
scheduler makes sense. I have no doubt about this. My question is more
about what the best way to implement it is.

Android has a bg_non_interactive cgroup with a much lower weight for
background processes. Interactive tasks, on the other hand, are placed
in another cgroup with a much higher weight. So Android depends on
cgroups to improve user experience.

The Chrome browser in your cover letter uses nice to de-prioritise
background tabs. This works perfectly as the whole of Chrome should be
in the same cgroup, so changing nice will increase/decrease the
resources received by tasks within that cgroup. But once we have two
cgroups, carrying a nice value which is only meaningful within its
cgroup over to the global GPU scheduler will somewhat break the aim.

For example, if we have two cgroups, A and B:
/sys/fs/cgroup/cpu$ sudo sh -c 'echo 4096 > A/cpu.shares'
/sys/fs/cgroup/cpu$ sudo sh -c 'echo 512 > B/cpu.shares'

a task in B with a lower nice will get more GPU than a task in A, even
though group A actually has 8x the weight of B. So the result seems
wrong, especially since real users like Android do depend on cgroups.
I don't know how to overcome this "weakness"; it seems not easy.

>
> Regards,
>
> Tvrtko

Thanks
barry

Patch

diff --git a/include/linux/sched.h b/include/linux/sched.h
index c1a927ddec64..1fcec88e5dbc 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2309,4 +2309,9 @@  static inline void sched_core_free(struct task_struct *tsk) { }
 static inline void sched_core_fork(struct task_struct *p) { }
 #endif
 
+struct notifier_block;
+
+extern int register_user_nice_notifier(struct notifier_block *);
+extern int unregister_user_nice_notifier(struct notifier_block *);
+
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1bba4128a3e6..fc90b603bb6f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6864,10 +6864,42 @@  static inline int rt_effective_prio(struct task_struct *p, int prio)
 }
 #endif
 
+ATOMIC_NOTIFIER_HEAD(user_nice_notifier_list);
+
+/**
+ * register_user_nice_notifier - Register function to be called when task nice changes
+ * @nb: Info about notifier function to be called
+ *
+ * Registers a function with the list of functions to be called when task nice
+ * value changes.
+ *
+ * Currently always returns zero, as atomic_notifier_chain_register()
+ * always returns zero.
+ */
+int register_user_nice_notifier(struct notifier_block *nb)
+{
+	return atomic_notifier_chain_register(&user_nice_notifier_list, nb);
+}
+EXPORT_SYMBOL(register_user_nice_notifier);
+
+/**
+ * unregister_user_nice_notifier - Unregister previously registered user nice notifier
+ * @nb: Hook to be unregistered
+ *
+ * Unregisters a previously registered user nice notifier function.
+ *
+ * Returns zero on success, or %-ENOENT on failure.
+ */
+int unregister_user_nice_notifier(struct notifier_block *nb)
+{
+	return atomic_notifier_chain_unregister(&user_nice_notifier_list, nb);
+}
+EXPORT_SYMBOL(unregister_user_nice_notifier);
+
 void set_user_nice(struct task_struct *p, long nice)
 {
 	bool queued, running;
-	int old_prio;
+	int old_prio, ret;
 	struct rq_flags rf;
 	struct rq *rq;
 
@@ -6915,6 +6947,9 @@  void set_user_nice(struct task_struct *p, long nice)
 
 out_unlock:
 	task_rq_unlock(rq, p, &rf);
+
+	ret = atomic_notifier_call_chain(&user_nice_notifier_list, nice, p);
+	WARN_ON_ONCE(ret != NOTIFY_DONE);
 }
 EXPORT_SYMBOL(set_user_nice);