
[1/1] sched/core: Fix stuck on completion for affine_move_task() when stopper disable

Message ID 20230927033431.12406-1-kuyo.chang@mediatek.com (mailing list archive)
State New, archived
Series [1/1] sched/core: Fix stuck on completion for affine_move_task() when stopper disable

Commit Message

Kuyo Chang (張建文) Sept. 27, 2023, 3:34 a.m. UTC
From: kuyo chang <kuyo.chang@mediatek.com>

[Syndrome] The hung task detector shows the warning message below:
[ 4320.666557] [   T56] khungtaskd: [name:hung_task&]INFO: task stressapptest:17803 blocked for more than 3600 seconds.
[ 4320.666589] [   T56] khungtaskd: [name:core&]task:stressapptest   state:D stack:0     pid:17803 ppid:17579  flags:0x04000008
[ 4320.666601] [   T56] khungtaskd: Call trace:
[ 4320.666607] [   T56] khungtaskd:  __switch_to+0x17c/0x338
[ 4320.666642] [   T56] khungtaskd:  __schedule+0x54c/0x8ec
[ 4320.666651] [   T56] khungtaskd:  schedule+0x74/0xd4
[ 4320.666656] [   T56] khungtaskd:  schedule_timeout+0x34/0x108
[ 4320.666672] [   T56] khungtaskd:  do_wait_for_common+0xe0/0x154
[ 4320.666678] [   T56] khungtaskd:  wait_for_completion+0x44/0x58
[ 4320.666681] [   T56] khungtaskd:  __set_cpus_allowed_ptr_locked+0x344/0x730
[ 4320.666702] [   T56] khungtaskd:  __sched_setaffinity+0x118/0x160
[ 4320.666709] [   T56] khungtaskd:  sched_setaffinity+0x10c/0x248
[ 4320.666715] [   T56] khungtaskd:  __arm64_sys_sched_setaffinity+0x15c/0x1c0
[ 4320.666719] [   T56] khungtaskd:  invoke_syscall+0x3c/0xf8
[ 4320.666743] [   T56] khungtaskd:  el0_svc_common+0xb0/0xe8
[ 4320.666749] [   T56] khungtaskd:  do_el0_svc+0x28/0xa8
[ 4320.666755] [   T56] khungtaskd:  el0_svc+0x28/0x9c
[ 4320.666761] [   T56] khungtaskd:  el0t_64_sync_handler+0x7c/0xe4
[ 4320.666766] [   T56] khungtaskd:  el0t_64_sync+0x18c/0x190

[Analysis]

After adding some debug footprint messages, I found this issue happens in the
stopper-disabled case.
The stopper cannot execute migration_cpu_stop() to complete the migration,
which causes the caller to get stuck on wait_for_completion.
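
For clarity, a rough sketch of the failing path (simplified; the call chain is
inlined per the trace above and the stop-machine details are paraphrased, so
treat this as an illustration rather than exact code):

/*
 * sched_setaffinity()
 *   __set_cpus_allowed_ptr_locked()
 *     affine_move_task()
 *       stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop, ...)
 *         cpu_stop_queue_work()
 *           stopper->enabled == false        // target CPU going through hotplug
 *           -> work is silently discarded
 *       wait_for_completion(&pending->done)  // never completed -> D-state hang
 */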

Signed-off-by: kuyo chang <kuyo.chang@mediatek.com>
---
 kernel/sched/core.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

Comments

Peter Zijlstra Sept. 27, 2023, 8:08 a.m. UTC | #1
On Wed, Sep 27, 2023 at 11:34:28AM +0800, Kuyo Chang wrote:
> From: kuyo chang <kuyo.chang@mediatek.com>
> 
> [Syndrome] The hung task detector shows the warning message below:
> [ 4320.666557] [   T56] khungtaskd: [name:hung_task&]INFO: task stressapptest:17803 blocked for more than 3600 seconds.
> [ 4320.666589] [   T56] khungtaskd: [name:core&]task:stressapptest   state:D stack:0     pid:17803 ppid:17579  flags:0x04000008
> [ 4320.666601] [   T56] khungtaskd: Call trace:
> [ 4320.666607] [   T56] khungtaskd:  __switch_to+0x17c/0x338
> [ 4320.666642] [   T56] khungtaskd:  __schedule+0x54c/0x8ec
> [ 4320.666651] [   T56] khungtaskd:  schedule+0x74/0xd4
> [ 4320.666656] [   T56] khungtaskd:  schedule_timeout+0x34/0x108
> [ 4320.666672] [   T56] khungtaskd:  do_wait_for_common+0xe0/0x154
> [ 4320.666678] [   T56] khungtaskd:  wait_for_completion+0x44/0x58
> [ 4320.666681] [   T56] khungtaskd:  __set_cpus_allowed_ptr_locked+0x344/0x730
> [ 4320.666702] [   T56] khungtaskd:  __sched_setaffinity+0x118/0x160
> [ 4320.666709] [   T56] khungtaskd:  sched_setaffinity+0x10c/0x248
> [ 4320.666715] [   T56] khungtaskd:  __arm64_sys_sched_setaffinity+0x15c/0x1c0
> [ 4320.666719] [   T56] khungtaskd:  invoke_syscall+0x3c/0xf8
> [ 4320.666743] [   T56] khungtaskd:  el0_svc_common+0xb0/0xe8
> [ 4320.666749] [   T56] khungtaskd:  do_el0_svc+0x28/0xa8
> [ 4320.666755] [   T56] khungtaskd:  el0_svc+0x28/0x9c
> [ 4320.666761] [   T56] khungtaskd:  el0t_64_sync_handler+0x7c/0xe4
> [ 4320.666766] [   T56] khungtaskd:  el0t_64_sync+0x18c/0x190
> 
> [Analysis]
> 
> After adding some debug footprint messages, I found this issue happens in the
> stopper-disabled case.
> The stopper cannot execute migration_cpu_stop() to complete the migration,
> which causes the caller to get stuck on wait_for_completion.

How did you get in this situation?

> Signed-off-by: kuyo chang <kuyo.chang@mediatek.com>
> ---
>  kernel/sched/core.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 1dc0b0287e30..98c217a1caa0 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3041,8 +3041,9 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
>  		task_rq_unlock(rq, p, rf);
>  
>  		if (!stop_pending) {
> -			stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
> -					    &pending->arg, &pending->stop_work);
> +			if (!stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
> +					    &pending->arg, &pending->stop_work))
> +				return -ENOENT;

And -ENOENT is the right return code for when the target CPU is not
available?

I suspect you're missing more than half the picture and this is a
band-aid solution at best. Please try harder.

>  		}
>  
>  		if (flags & SCA_MIGRATE_ENABLE)
> -- 
> 2.18.0
>
Kuyo Chang (張建文) Sept. 27, 2023, 3:57 p.m. UTC | #2
On Wed, 2023-09-27 at 10:08 +0200, Peter Zijlstra wrote:
>  On Wed, Sep 27, 2023 at 11:34:28AM +0800, Kuyo Chang wrote:
> > From: kuyo chang <kuyo.chang@mediatek.com>
> > 
> > [Syndrome] The hung task detector shows the warning message below:
> > [ 4320.666557] [   T56] khungtaskd: [name:hung_task&]INFO: task
> stressapptest:17803 blocked for more than 3600 seconds.
> > [ 4320.666589] [   T56] khungtaskd:
> [name:core&]task:stressapptest   state:D stack:0     pid:17803
> ppid:17579  flags:0x04000008
> > [ 4320.666601] [   T56] khungtaskd: Call trace:
> > [ 4320.666607] [   T56] khungtaskd:  __switch_to+0x17c/0x338
> > [ 4320.666642] [   T56] khungtaskd:  __schedule+0x54c/0x8ec
> > [ 4320.666651] [   T56] khungtaskd:  schedule+0x74/0xd4
> > [ 4320.666656] [   T56] khungtaskd:  schedule_timeout+0x34/0x108
> > [ 4320.666672] [   T56] khungtaskd:  do_wait_for_common+0xe0/0x154
> > [ 4320.666678] [   T56] khungtaskd:  wait_for_completion+0x44/0x58
> > [ 4320.666681] [   T56]
> khungtaskd:  __set_cpus_allowed_ptr_locked+0x344/0x730
> > [ 4320.666702] [   T56]
> khungtaskd:  __sched_setaffinity+0x118/0x160
> > [ 4320.666709] [   T56] khungtaskd:  sched_setaffinity+0x10c/0x248
> > [ 4320.666715] [   T56]
> khungtaskd:  __arm64_sys_sched_setaffinity+0x15c/0x1c0
> > [ 4320.666719] [   T56] khungtaskd:  invoke_syscall+0x3c/0xf8
> > [ 4320.666743] [   T56] khungtaskd:  el0_svc_common+0xb0/0xe8
> > [ 4320.666749] [   T56] khungtaskd:  do_el0_svc+0x28/0xa8
> > [ 4320.666755] [   T56] khungtaskd:  el0_svc+0x28/0x9c
> > [ 4320.666761] [   T56] khungtaskd:  el0t_64_sync_handler+0x7c/0xe4
> > [ 4320.666766] [   T56] khungtaskd:  el0t_64_sync+0x18c/0x190
> > 
> > [Analysis]
> > 
> > After adding some debug footprint messages, I found this issue happens in
> > the stopper-disabled case.
> > The stopper cannot execute migration_cpu_stop() to complete the migration,
> > which causes the caller to get stuck on wait_for_completion.
> 
> How did you get in this situation?
> 

This issue occurs during a CPU hotplug/set_affinity stress test.
The reproduction rate is very low (about once a week).

So I added some debug messages to snapshot the task status while it was
stuck on wait_for_completion.

Below is the snapshot of the status when the issue happened:

cpu_active_mask is 0xFC
new_mask is 0x8
pending->arg.dest_cpu is 0x3
task_on_cpu(rq,p) is 1
task_cpu is 0x2
p->__state = TASK_RUNNING
flags is SCA_CHECK|SCA_USER
stop_one_cpu_nowait() return value is false (stopper->enabled is false).

I also recorded a footprint in migration_cpu_stop().
It shows that migration_cpu_stop() was not executed.
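
For reference, the reason the work is silently dropped looks roughly like the
below (a simplified paraphrase of cpu_stop_queue_work() in
kernel/stop_machine.c, not the exact code in this tree):

static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
{
	struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
	DEFINE_WAKE_Q(wakeq);
	unsigned long flags;
	bool enabled;

	raw_spin_lock_irqsave(&stopper->lock, flags);
	enabled = stopper->enabled;
	if (enabled)
		__cpu_stop_queue_work(stopper, work, &wakeq); /* queue + wake the stopper */
	/* else: the work is dropped and migration_cpu_stop() never runs */
	raw_spin_unlock_irqrestore(&stopper->lock, flags);
	wake_up_q(&wakeq);

	return enabled;
}

Since affine_move_task() ignores this return value, pending->done is never
completed and the caller sleeps forever.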


> > Signed-off-by: kuyo chang <kuyo.chang@mediatek.com>
> > ---
> >  kernel/sched/core.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> > 
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 1dc0b0287e30..98c217a1caa0 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -3041,8 +3041,9 @@ static int affine_move_task(struct rq *rq,
> struct task_struct *p, struct rq_flag
> >  task_rq_unlock(rq, p, rf);
> >  
> >  if (!stop_pending) {
> > -stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
> > -    &pending->arg, &pending->stop_work);
> > +if (!stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
> > +    &pending->arg, &pending->stop_work))
> > +return -ENOENT;
> 
> And -ENOENT is the right return code for when the target CPU is not
> available?
> 
> I suspect you're missing more than half the picture and this is a
> band-aid solution at best. Please try harder.
> 

I think -ENOENT means the stopper did not execute?
Perhaps the error code is misused; could you kindly give me some
suggestions?

Thanks,
Kuyo

> >  }
> >  
> >  if (flags & SCA_MIGRATE_ENABLE)
> > -- 
> > 2.18.0
> >
Peter Zijlstra Sept. 28, 2023, 3:16 p.m. UTC | #3
On Wed, Sep 27, 2023 at 03:57:35PM +0000, Kuyo Chang (張建文) wrote:
> On Wed, 2023-09-27 at 10:08 +0200, Peter Zijlstra wrote:
> >  On Wed, Sep 27, 2023 at 11:34:28AM +0800, Kuyo Chang wrote:
> > > From: kuyo chang <kuyo.chang@mediatek.com>
> > > 
> > > [Syndrome] The hung task detector shows the warning message below:
> > > [ 4320.666557] [   T56] khungtaskd: [name:hung_task&]INFO: task
> > stressapptest:17803 blocked for more than 3600 seconds.
> > > [ 4320.666589] [   T56] khungtaskd:
> > [name:core&]task:stressapptest   state:D stack:0     pid:17803
> > ppid:17579  flags:0x04000008
> > > [ 4320.666601] [   T56] khungtaskd: Call trace:
> > > [ 4320.666607] [   T56] khungtaskd:  __switch_to+0x17c/0x338
> > > [ 4320.666642] [   T56] khungtaskd:  __schedule+0x54c/0x8ec
> > > [ 4320.666651] [   T56] khungtaskd:  schedule+0x74/0xd4
> > > [ 4320.666656] [   T56] khungtaskd:  schedule_timeout+0x34/0x108
> > > [ 4320.666672] [   T56] khungtaskd:  do_wait_for_common+0xe0/0x154
> > > [ 4320.666678] [   T56] khungtaskd:  wait_for_completion+0x44/0x58
> > > [ 4320.666681] [   T56]
> > khungtaskd:  __set_cpus_allowed_ptr_locked+0x344/0x730
> > > [ 4320.666702] [   T56]
> > khungtaskd:  __sched_setaffinity+0x118/0x160
> > > [ 4320.666709] [   T56] khungtaskd:  sched_setaffinity+0x10c/0x248
> > > [ 4320.666715] [   T56]
> > khungtaskd:  __arm64_sys_sched_setaffinity+0x15c/0x1c0
> > > [ 4320.666719] [   T56] khungtaskd:  invoke_syscall+0x3c/0xf8
> > > [ 4320.666743] [   T56] khungtaskd:  el0_svc_common+0xb0/0xe8
> > > [ 4320.666749] [   T56] khungtaskd:  do_el0_svc+0x28/0xa8
> > > [ 4320.666755] [   T56] khungtaskd:  el0_svc+0x28/0x9c
> > > [ 4320.666761] [   T56] khungtaskd:  el0t_64_sync_handler+0x7c/0xe4
> > > [ 4320.666766] [   T56] khungtaskd:  el0t_64_sync+0x18c/0x190
> > > 
> > > [Analysis]
> > > 
> > > After adding some debug footprint messages, I found this issue happens in
> > > the stopper-disabled case.
> > > The stopper cannot execute migration_cpu_stop() to complete the migration,
> > > which causes the caller to get stuck on wait_for_completion.
> > 
> > How did you get in this situation?
> > 
> 
> This issue occurs at CPU hotplug/set_affinity stress test.
> The reproduce ratio is very low(about once a week).
> 
> So I add/record some debug message to snapshot the task status while it
> stuck on wait_for_completion.
> 
> Below is the snapshot status while issue happened:
> 
> cpu_active_mask is 0xFC
> new_mask is 0x8
> pending->arg.dest_cpu is 0x3
> task_on_cpu(rq,p) is 1
> task_cpu is 0x2
> p->__state = TASK_RUNNING
> flags is SCA_CHECK|SCA_USER
> stop_one_cpu_nowait() return value is false (stopper->enabled is false).
> 
> I also recorded a footprint in migration_cpu_stop().
> It shows that migration_cpu_stop() was not executed.

AFAICT this is migrate_enable(), which acts on current, so how can the
CPU that current runs on go away?

That is completely unexplained. You've not given a proper description of
the race scenario. And because you've not, we can't even begin to talk
about how best to address the issue.

> > struct task_struct *p, struct rq_flag
> > >  task_rq_unlock(rq, p, rf);
> > >  
> > >  if (!stop_pending) {
> > > -stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
> > > -    &pending->arg, &pending->stop_work);
> > > +if (!stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
> > > +    &pending->arg, &pending->stop_work))
> > > +return -ENOENT;
> > 
> > And -ENOENT is the right return code for when the target CPU is not
> > available?
> > 
> > I suspect you're missing more than half the picture and this is a
> > band-aid solution at best. Please try harder.
> > 
> 
> I think -ENOENT means the stopper did not execute?
> Perhaps the error code is misused; could you kindly give me some
> suggestions?

Well, at this point you're leaving the whole affine_move_task()
machinery in an undefined state, which is a much bigger problem than the
weird return value.

Please read through that function and its comments a number of times. If
you're not a little nervous, you've not understood the thing.

Your patch has at least one very obvious resource leak.
Peter Zijlstra Sept. 28, 2023, 3:19 p.m. UTC | #4
On Thu, Sep 28, 2023 at 05:16:16PM +0200, Peter Zijlstra wrote:

> AFAICT this is migrate_enable(), which acts on current, so how can the
> CPU that current runs on go away?

> Your patch has at least one very obvious resource leak.

Sorry those are not so, I ended up staring at the wrong
stop_one_cpu_nowait() :-/

Still, the rest very much stands: if you can't describe the exact race
scenario, you can't be talking about a solution.
Peter Zijlstra Sept. 29, 2023, 10:21 a.m. UTC | #5
On Wed, Sep 27, 2023 at 03:57:35PM +0000, Kuyo Chang (張建文) wrote:

> This issue occurs at CPU hotplug/set_affinity stress test.
> The reproduce ratio is very low(about once a week).

I'm assuming you're running an arm64 kernel with preempt_full=y (the
default for arm64).

Could you please test the below?

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d8fd29d66b24..079a63b8a954 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2645,9 +2645,11 @@ static int migration_cpu_stop(void *data)
 		 * it.
 		 */
 		WARN_ON_ONCE(!pending->stop_pending);
+		preempt_disable();
 		task_rq_unlock(rq, p, &rf);
 		stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
 				    &pending->arg, &pending->stop_work);
+		preempt_enable();
 		return 0;
 	}
 out:
@@ -2967,12 +2969,13 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
 			complete = true;
 		}
 
+		preempt_disable();
 		task_rq_unlock(rq, p, rf);
-
 		if (push_task) {
 			stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
 					    p, &rq->push_work);
 		}
+		preempt_enable();
 
 		if (complete)
 			complete_all(&pending->done);
@@ -3038,12 +3041,13 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
 		if (flags & SCA_MIGRATE_ENABLE)
 			p->migration_flags &= ~MDF_PUSH;
 
+		preempt_disable();
 		task_rq_unlock(rq, p, rf);
-
 		if (!stop_pending) {
 			stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
 					    &pending->arg, &pending->stop_work);
 		}
+		preempt_enable();
 
 		if (flags & SCA_MIGRATE_ENABLE)
 			return 0;
@@ -9459,6 +9461,7 @@ static void balance_push(struct rq *rq)
 	 * Temporarily drop rq->lock such that we can wake-up the stop task.
 	 * Both preemption and IRQs are still disabled.
 	 */
+	preempt_disable();
 	raw_spin_rq_unlock(rq);
 	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
 			    this_cpu_ptr(&push_work));
@@ -9468,6 +9471,7 @@ static void balance_push(struct rq *rq)
 	 * which kthread_is_per_cpu() and will push this task away.
 	 */
 	raw_spin_rq_lock(rq);
+	preempt_enable();
 }
 
 static void balance_push_set(int cpu, bool on)
Kuyo Chang (張建文) Oct. 1, 2023, 3:15 p.m. UTC | #6
On Fri, 2023-09-29 at 12:21 +0200, Peter Zijlstra wrote:
>  On Wed, Sep 27, 2023 at 03:57:35PM +0000, Kuyo Chang (張建文) wrote:
> 
> > This issue occurs at CPU hotplug/set_affinity stress test.
> > The reproduce ratio is very low(about once a week).
> 
> I'm assuming you're running an arm64 kernel with preempt_full=y (the
> default for arm64).

Yes, the test platform is arm64, with the kernel config as below:

CONFIG_PREEMPT_BUILD=y
CONFIG_PREEMPT=y
CONFIG_PREEMPT_COUNT=y
CONFIG_PREEMPTION=y
CONFIG_PREEMPT_RCU=y
CONFIG_HAVE_PREEMPT_DYNAMIC=y
CONFIG_HAVE_PREEMPT_DYNAMIC_KEY=y
CONFIG_PREEMPT_NOTIFIERS=y

> Could you please test the below?

Ok, let me run it and report.

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d8fd29d66b24..079a63b8a954 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2645,9 +2645,11 @@ static int migration_cpu_stop(void *data)
>   * it.
>   */
>  WARN_ON_ONCE(!pending->stop_pending);
> +preempt_disable();
>  task_rq_unlock(rq, p, &rf);
>  stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
>      &pending->arg, &pending->stop_work);
> +preempt_enable();
>  return 0;
>  }
>  out:
> @@ -2967,12 +2969,13 @@ static int affine_move_task(struct rq *rq,
> struct task_struct *p, struct rq_flag
>  complete = true;
>  }
>  
> +preempt_disable();
>  task_rq_unlock(rq, p, rf);
> -
>  if (push_task) {
>  stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
>      p, &rq->push_work);
>  }
> +preempt_enable();
>  
>  if (complete)
>  complete_all(&pending->done);
> @@ -3038,12 +3041,13 @@ static int affine_move_task(struct rq *rq,
> struct task_struct *p, struct rq_flag
>  if (flags & SCA_MIGRATE_ENABLE)
>  p->migration_flags &= ~MDF_PUSH;
>  
> +preempt_disable();
>  task_rq_unlock(rq, p, rf);
> -
>  if (!stop_pending) {
>  stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
>      &pending->arg, &pending->stop_work);
>  }
> +preempt_enable();
>  
>  if (flags & SCA_MIGRATE_ENABLE)
>  return 0;
> @@ -9459,6 +9461,7 @@ static void balance_push(struct rq *rq)
>   * Temporarily drop rq->lock such that we can wake-up the stop task.
>   * Both preemption and IRQs are still disabled.
>   */
> +preempt_disable();
>  raw_spin_rq_unlock(rq);
>  stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
>      this_cpu_ptr(&push_work));
> @@ -9468,6 +9471,7 @@ static void balance_push(struct rq *rq)
>   * which kthread_is_per_cpu() and will push this task away.
>   */
>  raw_spin_rq_lock(rq);
> +preempt_enable();
>  }
>  
>  static void balance_push_set(int cpu, bool on)
Kuyo Chang (張建文) Oct. 10, 2023, 2:40 p.m. UTC | #7
On Fri, 2023-09-29 at 12:21 +0200, Peter Zijlstra wrote:
>  On Wed, Sep 27, 2023 at 03:57:35PM +0000, Kuyo Chang (張建文) wrote:
> 
> > This issue occurs at CPU hotplug/set_affinity stress test.
> > The reproduce ratio is very low(about once a week).
> 
> I'm assuming you're running an arm64 kernel with preempt_full=y (the
> default for arm64).
> 
> Could you please test the below?
> 

It has been running well so far (more than a week) on the hotplug/set_affinity
stress test. I will keep testing and report back if it happens again.

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d8fd29d66b24..079a63b8a954 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2645,9 +2645,11 @@ static int migration_cpu_stop(void *data)
>   * it.
>   */
>  WARN_ON_ONCE(!pending->stop_pending);
> +preempt_disable();
>  task_rq_unlock(rq, p, &rf);
>  stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
>      &pending->arg, &pending->stop_work);
> +preempt_enable();
>  return 0;
>  }
>  out:
> @@ -2967,12 +2969,13 @@ static int affine_move_task(struct rq *rq,
> struct task_struct *p, struct rq_flag
>  complete = true;
>  }
>  
> +preempt_disable();
>  task_rq_unlock(rq, p, rf);
> -
>  if (push_task) {
>  stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
>      p, &rq->push_work);
>  }
> +preempt_enable();
>  
>  if (complete)
>  complete_all(&pending->done);
> @@ -3038,12 +3041,13 @@ static int affine_move_task(struct rq *rq,
> struct task_struct *p, struct rq_flag
>  if (flags & SCA_MIGRATE_ENABLE)
>  p->migration_flags &= ~MDF_PUSH;
>  
> +preempt_disable();
>  task_rq_unlock(rq, p, rf);
> -
>  if (!stop_pending) {
>  stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
>      &pending->arg, &pending->stop_work);
>  }
> +preempt_enable();
>  
>  if (flags & SCA_MIGRATE_ENABLE)
>  return 0;
> @@ -9459,6 +9461,7 @@ static void balance_push(struct rq *rq)
>   * Temporarily drop rq->lock such that we can wake-up the stop task.
>   * Both preemption and IRQs are still disabled.
>   */
> +preempt_disable();
>  raw_spin_rq_unlock(rq);
>  stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
>      this_cpu_ptr(&push_work));
> @@ -9468,6 +9471,7 @@ static void balance_push(struct rq *rq)
>   * which kthread_is_per_cpu() and will push this task away.
>   */
>  raw_spin_rq_lock(rq);
> +preempt_enable();
>  }
>  
>  static void balance_push_set(int cpu, bool on)
Peter Zijlstra Oct. 10, 2023, 2:57 p.m. UTC | #8
On Tue, Oct 10, 2023 at 02:40:22PM +0000, Kuyo Chang (張建文) wrote:
> On Fri, 2023-09-29 at 12:21 +0200, Peter Zijlstra wrote:
> >  On Wed, Sep 27, 2023 at 03:57:35PM +0000, Kuyo Chang (張建文) wrote:
> > 
> > > This issue occurs at CPU hotplug/set_affinity stress test.
> > > The reproduce ratio is very low(about once a week).
> > 
> > I'm assuming you're running an arm64 kernel with preempt_full=y (the
> > default for arm64).
> > 
> > Could you please test the below?
> > 
> 
> It has been running well so far (more than a week) on the hotplug/set_affinity
> stress test. I will keep testing and report back if it happens again.

OK, I suppose I should look at writing a coherent Changelog for this
then...
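
A first rough sketch of that changelog, reconstructed from the data in this
thread (the exact interleaving is an assumption, not something that has been
traced end to end):

/*
 * Caller CPU                                  Target CPU (cpu_of(rq))
 * ----------                                  -----------------------
 * affine_move_task()
 *   task_rq_unlock(rq, p, rf);
 *   <gets preempted>                          reaches the hotplug state in
 *                                             which its stopper is disabled
 *   stop_one_cpu_nowait(cpu_of(rq), ...)
 *     -> stopper->enabled == false,
 *        work silently discarded
 *   wait_for_completion(&pending->done);      // never completed -> hang
 */

Keeping preemption disabled from before task_rq_unlock() until after
stop_one_cpu_nowait() should close that window: taking a CPU down goes through
stop_machine(), which needs the stopper to run on every online CPU, and that
presumably cannot happen while the caller sits in a preempt-disabled region.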

Patch

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1dc0b0287e30..98c217a1caa0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3041,8 +3041,9 @@  static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
 		task_rq_unlock(rq, p, rf);
 
 		if (!stop_pending) {
-			stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
-					    &pending->arg, &pending->stop_work);
+			if (!stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
+					    &pending->arg, &pending->stop_work))
+				return -ENOENT;
 		}
 
 		if (flags & SCA_MIGRATE_ENABLE)