[1/2] arm64/traps: Replace this_cpu_* with raw_cpu_* in panic_bad_stack()

Message ID 20220825063154.69-2-thunder.leizhen@huawei.com (mailing list archive)
State New, archived
Headers show
Series arm: Replace this_cpu_* with raw_cpu_* in panic_bad_stack() | expand

Commit Message

Leizhen (ThunderTown) Aug. 25, 2022, 6:31 a.m. UTC
The hardware automatically disables IRQs before jumping to the
interrupt or exception vector. Therefore, the preempt_disable()
performed by this_cpu_read() after macro expansion is unnecessary
here. In fact, before commit 8168f098867f ("arm64: entry: split bad
stack entry"), this_cpu_read() ran before arm64_enter_nmi(). If
set_preempt_need_resched() had been called before the stack overflow,
this_cpu_read() could trigger scheduling; see the pseudocode below.

Pseudocode of this_cpu_read(xx) when CONFIG_PREEMPTION=y:
preempt_disable_notrace();
raw_cpu_read(xx);
if (unlikely(__preempt_count_dec_and_test()))
	__preempt_schedule_notrace();

Therefore, use raw_cpu_* instead of this_cpu_* to eliminate the
potential hazard. At the very least, it saves a few lines of assembly
code.
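
A slightly fuller sketch of that expansion, paraphrased (not copied
verbatim) from arch/arm64/include/asm/percpu.h and
include/linux/preempt.h:

/* this_cpu_read(xx), roughly: */
({
	typeof(xx) __ret;
	preempt_disable_notrace();	/* __preempt_count_add(1) + barrier */
	__ret = raw_cpu_read(xx);	/* the plain per-CPU load */
	preempt_enable_notrace();	/* may reschedule, see below */
	__ret;
})

/* preempt_enable_notrace() with CONFIG_PREEMPTION=y: */
do {
	barrier();
	if (unlikely(__preempt_count_dec_and_test()))
		__preempt_schedule_notrace();
} while (0)

raw_cpu_read(xx), by contrast, is the plain per-CPU load alone, with
no preempt-count bookkeeping around it.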

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 arch/arm64/kernel/traps.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Mark Rutland Aug. 25, 2022, 1:29 p.m. UTC | #1
On Thu, Aug 25, 2022 at 02:31:53PM +0800, Zhen Lei wrote:
> The hardware automatically disables IRQs before jumping to the
> interrupt or exception vector. Therefore, the preempt_disable()
> performed by this_cpu_read() after macro expansion is unnecessary
> here. In fact, before commit 8168f098867f ("arm64: entry: split bad
> stack entry"), this_cpu_read() ran before arm64_enter_nmi(). If
> set_preempt_need_resched() had been called before the stack overflow,
> this_cpu_read() could trigger scheduling; see the pseudocode below.
> 
> Pseudocode of this_cpu_read(xx) when CONFIG_PREEMPTION=y:
> preempt_disable_notrace();
> raw_cpu_read(xx);
> if (unlikely(__preempt_count_dec_and_test()))
> 	__preempt_schedule_notrace();

Ok, but in mainline we have commit 8168f098867f; so we cannot reach here
without having fiddled with the preempt count.

Are you saying that some stable kernel is broken because it lacks commit
8168f098867f? If so, I think the right fix is to backport commit 8168f098867f,
and that case is then irrelevant to this change.
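
For context, the mainline flow after that commit looks roughly like
this (a simplified sketch of handle_bad_stack() in
arch/arm64/kernel/entry-common.c, not a verbatim copy):

asmlinkage void noinstr handle_bad_stack(struct pt_regs *regs)
{
	unsigned long esr = read_sysreg(esr_el1);
	unsigned long far = read_sysreg(far_el1);

	/* Enters NMI context, which bumps the preempt count ... */
	arm64_enter_nmi(regs);
	/* ... before any this_cpu_*() access in panic_bad_stack(). */
	panic_bad_stack(regs, esr, far);
}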

> Therefore, use raw_cpu_* instead of this_cpu_* to eliminate the
> potential hazard. At the very least, it saves a few lines of assembly
> code.

I'm happy to use raw_cpu_*() here, to minimize the work we have to do, and any
risks with e.g. instrumentation, but as above I don't think the case mentioned
in the commit message is relevant.

Thanks,
Mark.

> 
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
> ---
>  arch/arm64/kernel/traps.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index b7fed33981f7b76..e6b6f4650e3d895 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -871,8 +871,8 @@ DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
>  void panic_bad_stack(struct pt_regs *regs, unsigned long esr, unsigned long far)
>  {
>  	unsigned long tsk_stk = (unsigned long)current->stack;
> -	unsigned long irq_stk = (unsigned long)this_cpu_read(irq_stack_ptr);
> -	unsigned long ovf_stk = (unsigned long)this_cpu_ptr(overflow_stack);
> +	unsigned long irq_stk = (unsigned long)raw_cpu_read(irq_stack_ptr);
> +	unsigned long ovf_stk = (unsigned long)raw_cpu_ptr(overflow_stack);
>  
>  	console_verbose();
>  	pr_emerg("Insufficient stack space to handle exception!");
> -- 
> 2.25.1
>
Leizhen (ThunderTown) Aug. 26, 2022, 3:25 a.m. UTC | #2
On 2022/8/25 21:29, Mark Rutland wrote:
> On Thu, Aug 25, 2022 at 02:31:53PM +0800, Zhen Lei wrote:
>> The hardware automatically disables IRQs before jumping to the
>> interrupt or exception vector. Therefore, the preempt_disable()
>> performed by this_cpu_read() after macro expansion is unnecessary
>> here. In fact, before commit 8168f098867f ("arm64: entry: split bad
>> stack entry"), this_cpu_read() ran before arm64_enter_nmi(). If
>> set_preempt_need_resched() had been called before the stack overflow,
>> this_cpu_read() could trigger scheduling; see the pseudocode below.
>>
>> Pseudocode of this_cpu_read(xx) when CONFIG_PREEMPTION=y:
>> preempt_disable_notrace();
>> raw_cpu_read(xx);
>> if (unlikely(__preempt_count_dec_and_test()))
>> 	__preempt_schedule_notrace();
> 
> Ok, but in mainline we have commit 8168f098867f; so we cannot reach here
> without having fiddled with the preempt count.
> 
> Are you saying that some stable kernel is broken because it lacks commit
> 8168f098867f? If so, I think the right fix is to backport commit 8168f098867f,
> and that case is then irrelevant to this change.

Yes, after backporting commit 8168f098867f, the risk is gone.

> 
>> Therefore, use raw_cpu_* instead of this_cpu_* to eliminate the
>> potential hazard. At the very least, it saves a few lines of assembly
>> code.
> 
> I'm happy to use raw_cpu_*() here, to minimize the work we have to do, and any
> risks with e.g. instrumentation, but as above I don't think the case mentioned
> in the commit message is relevant.

OK, I will delete the description of the risk.

> 
> Thanks,
> Mark.
> 
>>
>> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
>> ---
>>  arch/arm64/kernel/traps.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
>> index b7fed33981f7b76..e6b6f4650e3d895 100644
>> --- a/arch/arm64/kernel/traps.c
>> +++ b/arch/arm64/kernel/traps.c
>> @@ -871,8 +871,8 @@ DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
>>  void panic_bad_stack(struct pt_regs *regs, unsigned long esr, unsigned long far)
>>  {
>>  	unsigned long tsk_stk = (unsigned long)current->stack;
>> -	unsigned long irq_stk = (unsigned long)this_cpu_read(irq_stack_ptr);
>> -	unsigned long ovf_stk = (unsigned long)this_cpu_ptr(overflow_stack);
>> +	unsigned long irq_stk = (unsigned long)raw_cpu_read(irq_stack_ptr);
>> +	unsigned long ovf_stk = (unsigned long)raw_cpu_ptr(overflow_stack);
>>  
>>  	console_verbose();
>>  	pr_emerg("Insufficient stack space to handle exception!");
>> -- 
>> 2.25.1
>>
> .
>

Patch

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index b7fed33981f7b76..e6b6f4650e3d895 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -871,8 +871,8 @@ DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack)
 void panic_bad_stack(struct pt_regs *regs, unsigned long esr, unsigned long far)
 {
 	unsigned long tsk_stk = (unsigned long)current->stack;
-	unsigned long irq_stk = (unsigned long)this_cpu_read(irq_stack_ptr);
-	unsigned long ovf_stk = (unsigned long)this_cpu_ptr(overflow_stack);
+	unsigned long irq_stk = (unsigned long)raw_cpu_read(irq_stack_ptr);
+	unsigned long ovf_stk = (unsigned long)raw_cpu_ptr(overflow_stack);
 
 	console_verbose();
 	pr_emerg("Insufficient stack space to handle exception!");