Message ID | 20240429084633.9800-1-xuewen.yan@unisoc.com (mailing list archive) |
---|---|
State | New, archived |
Series | sched/proc: Print user_cpus_ptr for task status |
On Mon, Apr 29, 2024 at 04:46:33PM +0800, Xuewen Yan wrote:
> The commit 851a723e45d1c ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()")
> clears user_cpus_ptr whenever do_set_cpus_allowed() is called.
>
> In order to determine whether the user_cpus_ptr is taking effect,
> it is better to print the task's user_cpus_ptr.

This is an ABI change and would mandate we forever more have this
distinction. I don't think your change justifies things sufficiently
for this.
Hi Peter

On Mon, Apr 29, 2024 at 8:10 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Mon, Apr 29, 2024 at 04:46:33PM +0800, Xuewen Yan wrote:
> > The commit 851a723e45d1c ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()")
> > clears user_cpus_ptr whenever do_set_cpus_allowed() is called.
> >
> > In order to determine whether the user_cpus_ptr is taking effect,
> > it is better to print the task's user_cpus_ptr.
>
> This is an ABI change and would mandate we forever more have this
> distinction. I don't think your change justifies things sufficiently
> for this.

I added this mainly because onlining/offlining a CPU produces different
results for a task in a !top-cpuset.

For example:

If the task was running, offlining the task's CPUs leads to its user mask
being cleared.

unisoc:/ # while true; do sleep 600; done&
[1] 6786
unisoc:/ # echo 6786 > /dev/cpuset/top-app/tasks
unisoc:/ # cat /dev/cpuset/top-app/cpus
0-7
unisoc:/ # cat /proc/6786/status | grep Cpus
Cpus_allowed:           ff
Cpus_allowed_list:      0-7
Cpus_user_allowed:      (null)
Cpus_user_allowed_list: (null)

unisoc:/ # taskset -p c0 6786
pid 6786's current affinity mask: ff
pid 6786's new affinity mask: c0
unisoc:/ # cat /proc/6786/status | grep Cpus
Cpus_allowed:           c0
Cpus_allowed_list:      6-7
Cpus_user_allowed:      c0
Cpus_user_allowed_list: 6-7

After offlining cpu6 and cpu7, the user mask is cleared:

unisoc:/ # echo 0 > /sys/devices/system/cpu/cpu7/online
unisoc:/ # cat /proc/6786/status | grep Cpus
Cpus_allowed:           40
Cpus_allowed_list:      6
Cpus_user_allowed:      c0
Cpus_user_allowed_list: 6-7
ums9621_1h10:/ # echo 0 > /sys/devices/system/cpu/cpu6/online
ums9621_1h10:/ # cat /proc/6786/status | grep Cpus
Cpus_allowed:           3f
Cpus_allowed_list:      0-5
Cpus_user_allowed:      (null)
Cpus_user_allowed_list: (null)

When cpu6/7 are brought back online, the user mask does not come back:

unisoc:/ # echo 1 > /sys/devices/system/cpu/cpu6/online
unisoc:/ # cat /proc/6786/status | grep Cpus
Cpus_allowed:           7f
Cpus_allowed_list:      0-6
Cpus_user_allowed:      (null)
Cpus_user_allowed_list: (null)
unisoc:/ # echo 1 > /sys/devices/system/cpu/cpu7/online
unisoc:/ # cat /proc/6786/status | grep Cpus
Cpus_allowed:           ff
Cpus_allowed_list:      0-7
Cpus_user_allowed:      (null)
Cpus_user_allowed_list: (null)

However, if we offline the CPUs while the task is sleeping, the fallback
path is not called for it, so its user mask is not cleared:

unisoc:/ # while true; do sleep 600; done&
[1] 5990
unisoc:/ # echo 5990 > /dev/cpuset/top-app/tasks
unisoc:/ # cat /proc/5990/status | grep Cpus
Cpus_allowed:           ff
Cpus_allowed_list:      0-7
Cpus_user_allowed:      (null)
Cpus_user_allowed_list: (null)

unisoc:/ # taskset -p c0 5990
pid 5990's current affinity mask: ff
pid 5990's new affinity mask: c0
unisoc:/ # cat /proc/5990/status | grep Cpus
Cpus_allowed:           c0
Cpus_allowed_list:      6-7
Cpus_user_allowed:      c0
Cpus_user_allowed_list: 6-7

unisoc:/ # echo 0 > /sys/devices/system/cpu/cpu6/online
unisoc:/ # cat /proc/5990/status | grep Cpus
Cpus_allowed:           80
Cpus_allowed_list:      7
Cpus_user_allowed:      c0
Cpus_user_allowed_list: 6-7
unisoc:/ # echo 0 > /sys/devices/system/cpu/cpu7/online
unisoc:/ # cat /proc/5990/status | grep Cpus
Cpus_allowed:           3f
Cpus_allowed_list:      0-5
Cpus_user_allowed:      c0
Cpus_user_allowed_list: 6-7

After 10 minutes, when it wakes up, it still keeps its user mask:

ums9621_1h10:/ # cat /proc/5990/status | grep Cpus
Cpus_allowed:           3f
Cpus_allowed_list:      0-5
Cpus_user_allowed:      c0
Cpus_user_allowed_list: 6-7

In order to solve the above problem, I wrote the following patch.
With it, for a !top-cpuset task, the cpu mask is maintained regardless of
whether the task was running when the CPU went offline.
However, this patch may not be perfect yet, so I am sending the "Print
user_cpus_ptr for task status" patch first to make debugging more
convenient.

--->

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 68cfa656b9b1..00879b6de8d4 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1870,7 +1870,7 @@ extern void dl_bw_free(int cpu, u64 dl_bw);
 #ifdef CONFIG_SMP

 /* do_set_cpus_allowed() - consider using set_cpus_allowed_ptr() instead */
-extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask);
+extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, bool keep_user);

 /**
  * set_cpus_allowed_ptr - set CPU affinity mask of a task
@@ -1886,7 +1886,7 @@ extern int dl_task_check_affinity(struct task_struct *p, const struct cpumask *m
 extern void force_compatible_cpus_allowed_ptr(struct task_struct *p);
 extern void relax_compatible_cpus_allowed_ptr(struct task_struct *p);
 #else
-static inline void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+static inline void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, bool keep_user)
 {
 }
 static inline int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 7ee9994aee40..0c448f8a3829 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -4005,9 +4005,14 @@ bool cpuset_cpus_allowed_fallback(struct task_struct *tsk)

 	rcu_read_lock();
 	cs_mask = task_cs(tsk)->cpus_allowed;
-	if (is_in_v2_mode() && cpumask_subset(cs_mask, possible_mask)) {
-		do_set_cpus_allowed(tsk, cs_mask);
-		changed = true;
+	if (cpumask_subset(cs_mask, possible_mask)) {
+		if (is_in_v2_mode()) {
+			do_set_cpus_allowed(tsk, cs_mask, false);
+			changed = true;
+		} else if (task_cs(tsk) != &top_cpuset) {
+			do_set_cpus_allowed(tsk, cs_mask, true);
+			changed = true;
+		}
 	}
 	rcu_read_unlock();

diff --git a/kernel/kthread.c b/kernel/kthread.c
index 7a7aa5f93c0c..7ede27630088 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -527,7 +527,7 @@ static void __kthread_bind_mask(struct task_struct *p, const struct cpumask *mas

 	/* It's safe because the task is inactive. */
 	raw_spin_lock_irqsave(&p->pi_lock, flags);
-	do_set_cpus_allowed(p, mask);
+	do_set_cpus_allowed(p, mask, false);
 	p->flags |= PF_NO_SETAFFINITY;
 	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
 }
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 33cfd522fc7c..623f89e65e6c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2855,18 +2855,21 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
  * Used for kthread_bind() and select_fallback_rq(), in both cases the user
  * affinity (if any) should be destroyed too.
  */
-void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, bool keep_user)
 {
 	struct affinity_context ac = {
 		.new_mask  = new_mask,
 		.user_mask = NULL,
-		.flags     = SCA_USER,	/* clear the user requested mask */
+		.flags     = 0,	/* clear the user requested mask */
 	};
 	union cpumask_rcuhead {
 		cpumask_t cpumask;
 		struct rcu_head rcu;
 	};

+	if (!keep_user)
+		ac.flags = SCA_USER;
+
 	__do_set_cpus_allowed(p, &ac);

 	/*
@@ -2874,7 +2877,8 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
 	 * kfree_rcu().
 	 */
-	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
+	if (!keep_user)
+		kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
 }

 static cpumask_t *alloc_user_cpus_ptr(int node)
@@ -3664,7 +3668,7 @@ int select_fallback_rq(int cpu, struct task_struct *p)
 		 *
 		 * More yuck to audit.
 		 */
-		do_set_cpus_allowed(p, task_cpu_possible_mask(p));
+		do_set_cpus_allowed(p, task_cpu_possible_mask(p), false);
 		state = fail;
 		break;
 	case fail:

---
xuewen
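For context, here is a condensed sketch of the path being described, based on the select_fallback_rq() hunk quoted above; it is a simplified illustration, not the complete kernel function, and the helper name is hypothetical:

/*
 * Condensed sketch of the relevant part of select_fallback_rq().  Only a task
 * that is runnable while its last allowed CPU goes offline ends up here.
 */
static int fallback_sketch(struct task_struct *p)
{
	int dest_cpu;

	/* If any allowed CPU is still usable, just pick it ... */
	for_each_cpu(dest_cpu, p->cpus_ptr) {
		if (cpu_active(dest_cpu))
			return dest_cpu;
	}

	/*
	 * ... otherwise widen the affinity.  Since commit 851a723e45d1c this
	 * call also frees p->user_cpus_ptr, which is what shows up above as
	 * "Cpus_user_allowed: (null)".  A task that sleeps through the hotplug
	 * never reaches this point, so its user mask survives.
	 */
	do_set_cpus_allowed(p, task_cpu_possible_mask(p));

	return cpumask_any(p->cpus_ptr);
}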
On 5/6/24 04:04, Xuewen Yan wrote:
> In order to solve the above problem, I wrote the following patch.
> With it, for a !top-cpuset task, the cpu mask is maintained regardless of
> whether the task was running when the CPU went offline.
> However, this patch may not be perfect yet, so I am sending the "Print
> user_cpus_ptr for task status" patch first to make debugging more
> convenient.
These changes essentially revert commit 851a723e45d1c ("sched: Always
clear user_cpus_ptr in do_set_cpus_allowed()") except for the additional
caller in the cpuset code.

How about the following less invasive change?

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7019a40457a6..646837eab70c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2796,21 +2796,24 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
 }

 /*
- * Used for kthread_bind() and select_fallback_rq(), in both cases the user
- * affinity (if any) should be destroyed too.
+ * Used for kthread_bind() and select_fallback_rq(). Destroy user affinity
+ * if no intersection with the new mask.
  */
 void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 {
 	struct affinity_context ac = {
 		.new_mask  = new_mask,
 		.user_mask = NULL,
-		.flags     = SCA_USER,	/* clear the user requested mask */
+		.flags     = 0,
 	};
 	union cpumask_rcuhead {
 		cpumask_t cpumask;
 		struct rcu_head rcu;
 	};

+	if (current->user_cpus_ptr && !cpumask_intersects(current->user_cpus_ptr, new_mask))
+		ac.flags = SCA_USER;	/* clear the user requested mask */
+
 	__do_set_cpus_allowed(p, &ac);

 	/*

No compilation test done. Note that there is a null check inside
kfree_rcu() with no need for additional check.

Regards,
Longman
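To make the effect of that check concrete with the masks from the earlier example, here is a hypothetical helper (not part of either patch) that expresses the condition being proposed:

/*
 * Hypothetical helper: the user-requested mask is destroyed only when it no
 * longer overlaps the mask being imposed.  With the example above (user mask
 * 0xc0 = CPUs 6-7), falling back to CPUs 0-5 would clear it, while any new
 * mask that still contains CPU 6 or 7 would keep it.
 */
static bool should_clear_user_mask(const struct cpumask *user_mask,
				   const struct cpumask *new_mask)
{
	return user_mask && !cpumask_intersects(user_mask, new_mask);
}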
Hi Waiman

On Tue, May 7, 2024 at 2:04 AM Waiman Long <longman@redhat.com> wrote:
>
> These changes essentially revert commit 851a723e45d1c ("sched: Always
> clear user_cpus_ptr in do_set_cpus_allowed()") except for the additional
> caller in the cpuset code.
>
> How about the following less invasive change?
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 7019a40457a6..646837eab70c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2796,21 +2796,24 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
>  }
>
>  /*
> - * Used for kthread_bind() and select_fallback_rq(), in both cases the user
> - * affinity (if any) should be destroyed too.
> + * Used for kthread_bind() and select_fallback_rq(). Destroy user affinity
> + * if no intersection with the new mask.
>   */
>  void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
>  {
>  	struct affinity_context ac = {
>  		.new_mask  = new_mask,
>  		.user_mask = NULL,
> -		.flags     = SCA_USER,	/* clear the user requested mask */
> +		.flags     = 0,
>  	};
>  	union cpumask_rcuhead {
>  		cpumask_t cpumask;
>  		struct rcu_head rcu;
>  	};
>
> +	if (current->user_cpus_ptr &&
> +	    !cpumask_intersects(current->user_cpus_ptr, new_mask))

Thanks for your suggestion; I tried it and it works well for me, but I
changed "current" to "p". I think "current" is inappropriate here because
what is being changed is the mask of p; it is possible that "p" and
"current" are not the same task.

I will send the next version later and add your Suggested-by. Thanks
again for your advice!

BR
---
xuewen

> +		ac.flags = SCA_USER;	/* clear the user requested mask */
> +
>  	__do_set_cpus_allowed(p, &ac);
>
>  	/*
>
> No compilation test done. Note that there is a null check inside
> kfree_rcu() with no need for additional check.
>
> Regards,
> Longman
On 5/7/24 02:57, Xuewen Yan wrote:
>> +	if (current->user_cpus_ptr &&
>> +	    !cpumask_intersects(current->user_cpus_ptr, new_mask))
>
> Thanks for your suggestion; I tried it and it works well for me, but I
> changed "current" to "p". I think "current" is inappropriate here because
> what is being changed is the mask of p; it is possible that "p" and
> "current" are not the same task.
>
> I will send the next version later and add your Suggested-by. Thanks
> again for your advice!

You are right. It should be "p" instead of "current".

Thanks,
Longman
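Putting Waiman's suggestion and the "current" to "p" correction together, the next version of do_set_cpus_allowed() would presumably look roughly like the sketch below. This is an untested illustration of where the thread converged, not the patch that was actually posted:

void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
{
	struct affinity_context ac = {
		.new_mask  = new_mask,
		.user_mask = NULL,
		.flags     = 0,
	};
	union cpumask_rcuhead {
		cpumask_t cpumask;
		struct rcu_head rcu;
	};

	/* Destroy the user-requested mask only if it cannot be honoured at all. */
	if (p->user_cpus_ptr && !cpumask_intersects(p->user_cpus_ptr, new_mask))
		ac.flags = SCA_USER;	/* clear the user requested mask */

	__do_set_cpus_allowed(p, &ac);

	/*
	 * Called with p->pi_lock held, so kfree() cannot be used directly when
	 * PREEMPT_RT=y; punt to kfree_rcu().  kfree_rcu() tolerates a NULL
	 * pointer, so no extra check is needed when the user mask was kept.
	 */
	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
}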
diff --git a/fs/proc/array.c b/fs/proc/array.c
index 34a47fb0c57f..084bee2a2e2b 100644
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -409,6 +409,10 @@ static void task_cpus_allowed(struct seq_file *m, struct task_struct *task)
 		   cpumask_pr_args(&task->cpus_mask));
 	seq_printf(m, "Cpus_allowed_list:\t%*pbl\n",
 		   cpumask_pr_args(&task->cpus_mask));
+	seq_printf(m, "Cpus_user_allowed:\t%*pb\n",
+		   cpumask_pr_args(task->user_cpus_ptr));
+	seq_printf(m, "Cpus_user_allowed_list:\t%*pbl\n",
+		   cpumask_pr_args(task->user_cpus_ptr));
 }

 static inline void task_core_dumping(struct seq_file *m, struct task_struct *task)
The commit 851a723e45d1c ("sched: Always clear user_cpus_ptr in
do_set_cpus_allowed()") clears user_cpus_ptr whenever
do_set_cpus_allowed() is called.

In order to determine whether the user_cpus_ptr is taking effect,
it is better to print the task's user_cpus_ptr.

For top-cpuset:

ums9621_1h10:/ # while true; do done&
[1] 6786
ums9621_1h10:/ # cat /proc/6786/status | grep Cpus
Cpus_allowed:           ff
Cpus_allowed_list:      0-7
Cpus_user_allowed:      (null)
Cpus_user_allowed_list: (null)

Bind the task to CPUs 6-7:

ums9621_1h10:/ # taskset -p c0 6786
pid 6786's current affinity mask: ff
pid 6786's new affinity mask: c0
ums9621_1h10:/ # cat /proc/6786/status | grep Cpus
Cpus_allowed:           c0
Cpus_allowed_list:      6-7
Cpus_user_allowed:      c0
Cpus_user_allowed_list: 6-7

Offline cpu7:

ums9621_1h10:/ # echo 0 > /sys/devices/system/cpu/cpu7/online
ums9621_1h10:/ # cat /proc/6786/status | grep Cpus
Cpus_allowed:           c0
Cpus_allowed_list:      6-7
Cpus_user_allowed:      c0
Cpus_user_allowed_list: 6-7

Offline cpu6; do_set_cpus_allowed() then clears the user_cpus_ptr:

ums9621_1h10:/ # echo 0 > /sys/devices/system/cpu/cpu6/online
ums9621_1h10:/ # cat /proc/6786/status | grep Cpus
Cpus_allowed:           ff
Cpus_allowed_list:      0-7
Cpus_user_allowed:      (null)
Cpus_user_allowed_list: (null)

Bring cpu6 and cpu7 back up:

ums9621_1h10:/ # echo 1 > /sys/devices/system/cpu/cpu6/online
ums9621_1h10:/ # echo 1 > /sys/devices/system/cpu/cpu7/online
ums9621_1h10:/ # cat /proc/6786/status | grep Cpus
Cpus_allowed:           ff
Cpus_allowed_list:      0-7
Cpus_user_allowed:      (null)
Cpus_user_allowed_list: (null)

Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
---
 fs/proc/array.c | 4 ++++
 1 file changed, 4 insertions(+)
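For reference, a minimal userspace reader for the new fields; this is a hypothetical example assuming the patch above is applied, with the Cpus_user_allowed* field names taken from the diff:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/self/status", "r");

	if (!f)
		return 1;
	/* Print Cpus_allowed*, Cpus_user_allowed* and their _list variants. */
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "Cpus_", 5))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}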