ARM: spectre-v2: fix smp_processor_id() warning

Message ID 795c9463-452e-bf64-1cc0-c318ccecb1da@I-love.SAKURA.ne.jp (mailing list archive)
State New, archived
Series ARM: spectre-v2: fix smp_processor_id() warning

Commit Message

Tetsuo Handa June 22, 2022, 6:49 a.m. UTC
syzbot complains about smp_processor_id() being called from
harden_branch_predictor() on the page fault path [1]. Explicitly disable
preemption and use raw_smp_processor_id().

Link: https://syzkaller.appspot.com/bug?extid=a7ee43e564223f195c84 [1]
Reported-by: syzbot <syzbot+a7ee43e564223f195c84@syzkaller.appspotmail.com>
Fixes: f5fe12b1eaee220c ("ARM: spectre-v2: harden user aborts in kernel space")
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
This patch is completely untested.

 arch/arm/include/asm/system_misc.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

Comments

Russell King (Oracle) June 22, 2022, 11:21 a.m. UTC | #1
On Wed, Jun 22, 2022 at 03:49:21PM +0900, Tetsuo Handa wrote:
> syzbot complains about smp_processor_id() being called from
> harden_branch_predictor() on the page fault path [1]. Explicitly disable
> preemption and use raw_smp_processor_id().
> 
> Link: https://syzkaller.appspot.com/bug?extid=a7ee43e564223f195c84 [1]
> Reported-by: syzbot <syzbot+a7ee43e564223f195c84@syzkaller.appspotmail.com>
> Fixes: f5fe12b1eaee220c ("ARM: spectre-v2: harden user aborts in kernel space")
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>

This may "fix" the warning, but...

> ---
> This patch is completely untested.
> 
>  arch/arm/include/asm/system_misc.h | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
> index 98b37340376b..a92446769acd 100644
> --- a/arch/arm/include/asm/system_misc.h
> +++ b/arch/arm/include/asm/system_misc.h
> @@ -20,8 +20,11 @@ typedef void (*harden_branch_predictor_fn_t)(void);
>  DECLARE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
>  static inline void harden_branch_predictor(void)
>  {
> -	harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn,
> -						  smp_processor_id());
> +	harden_branch_predictor_fn_t fn;
> +
> +	preempt_disable_notrace();
> +	fn = per_cpu(harden_branch_predictor_fn, raw_smp_processor_id());
> +	preempt_enable_no_resched_notrace();
>  	if (fn)
>  		fn();

The idea is to get the function for the specific CPU, and then to run it
_on_ that CPU, and in theory the CPU that took the fault. However, I
seem to remember there are issues trying to achieve that, and I don't
have a solution for it.
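
For context, the reason the callback is per-CPU in the first place is that the
mitigation is selected per core at boot, so asymmetric (e.g. big.LITTLE)
systems can end up with a different hardening function on each core. A minimal
sketch of that selection, loosely modelled on cpu_v7_spectre_init() in
arch/arm/mm/proc-v7-bugs.c (function name, case labels and callback names here
are approximate, not a verbatim copy of the kernel source):

/* Sketch only: how each CPU gets its own hardening callback at bring-up. */
static void cpu_v7_spectre_init_sketch(void)
{
	unsigned int cpu = smp_processor_id();	/* runs on each CPU at boot */

	switch (read_cpuid_part()) {
	case ARM_CPU_PART_CORTEX_A8:
	case ARM_CPU_PART_CORTEX_A9:
	case ARM_CPU_PART_CORTEX_A12:
	case ARM_CPU_PART_CORTEX_A17:
		/* Cores that can invalidate the predictor with BPIALL. */
		per_cpu(harden_branch_predictor_fn, cpu) =
			harden_branch_predictor_bpiall;
		break;
	case ARM_CPU_PART_CORTEX_A15:
	case ARM_CPU_PART_BRAHMA_B15:
		/* Cores where ICIALLU also invalidates the predictor. */
		per_cpu(harden_branch_predictor_fn, cpu) =
			harden_branch_predictor_iciallu;
		break;
	default:
		/* Cores that need the SMCCC ARCH_WORKAROUND_1 firmware call. */
		per_cpu(harden_branch_predictor_fn, cpu) =
			call_hvc_arch_workaround_1;
		break;
	}
}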
Tetsuo Handa June 22, 2022, 1:31 p.m. UTC | #2
On 2022/06/22 20:21, Russell King (Oracle) wrote:
>>  static inline void harden_branch_predictor(void)
>>  {
>> -	harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn,
>> -						  smp_processor_id());
>> +	harden_branch_predictor_fn_t fn;
>> +
>> +	preempt_disable_notrace();
>> +	fn = per_cpu(harden_branch_predictor_fn, raw_smp_processor_id());
>> +	preempt_enable_no_resched_notrace();
>>  	if (fn)
>>  		fn();
> 
> The idea is to get the function for the specific CPU, and then to run it
> _on_ that CPU, and in theory the CPU that took the fault. However, I
> seem to remember there are issues trying to achieve that, and I don't
> have a solution for it.

Hmm, since some ARM processors support Asymmetric Multiprocessing, whether
invalidation is needed depends on which processor the task was assigned to,
and that is the reason smp_processor_id() is used?

I'm not familiar with the hardware, but if the CPU assigned to the current
thread changes, wouldn't sufficient invalidation take place anyway? In other
words, do we need to worry about the fn() call if the CPU that took the fault
differs from the CPU that runs the code after harden_branch_predictor()
returns?

If some instructions used for CPU-A are not supported by CPU-B, I guess that
reading CPU-A's entry from per_cpu(harden_branch_predictor_fn) and then
executing fn() on CPU-B would cause problems. Is it guaranteed that all
instructions used by fn() are supported by all processors? If it is not
guaranteed, we would need to make sure that fn() runs on the CPU for which
the per_cpu(harden_branch_predictor_fn) lookup was done?
Marc Zyngier June 22, 2022, 1:50 p.m. UTC | #3
On 2022-06-22 07:49, Tetsuo Handa wrote:
> syzbot complains about smp_processor_id() being called from
> harden_branch_predictor() on the page fault path [1]. Explicitly disable
> preemption and use raw_smp_processor_id().
> 
> Link: https://syzkaller.appspot.com/bug?extid=a7ee43e564223f195c84 [1]
> Reported-by: syzbot 
> <syzbot+a7ee43e564223f195c84@syzkaller.appspotmail.com>
> Fixes: f5fe12b1eaee220c ("ARM: spectre-v2: harden user aborts in kernel 
> space")
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> ---
> This patch is completely untested.
> 
>  arch/arm/include/asm/system_misc.h | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/include/asm/system_misc.h
> b/arch/arm/include/asm/system_misc.h
> index 98b37340376b..a92446769acd 100644
> --- a/arch/arm/include/asm/system_misc.h
> +++ b/arch/arm/include/asm/system_misc.h
> @@ -20,8 +20,11 @@ typedef void (*harden_branch_predictor_fn_t)(void);
>  DECLARE_PER_CPU(harden_branch_predictor_fn_t, 
> harden_branch_predictor_fn);
>  static inline void harden_branch_predictor(void)
>  {
> -	harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn,
> -						  smp_processor_id());
> +	harden_branch_predictor_fn_t fn;
> +
> +	preempt_disable_notrace();
> +	fn = per_cpu(harden_branch_predictor_fn, raw_smp_processor_id());
> +	preempt_enable_no_resched_notrace();
>  	if (fn)
>  		fn();
>  }

I don't think that's required.

harden_branch_predictor() is always called on the fault path,
from __do_user_fault(), and that's always non-preemptible.
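
For reference, a rough sketch of what that call site looks like, paraphrased
from arch/arm/mm/fault.c as changed by f5fe12b1eaee220c (the signature and the
surrounding details are approximate, so treat this as an illustration rather
than the exact upstream code):

/* Approximate sketch of the caller in arch/arm/mm/fault.c. */
static void
__do_user_fault(unsigned long addr, unsigned int fsr, unsigned int sig,
		int code, struct pt_regs *regs)
{
	/*
	 * Userspace faulted on a kernel address: run the Spectre-v2
	 * hardening before delivering the signal. Most abort paths
	 * reach this point with IRQs still disabled, in which case
	 * preemptible() is false and the smp_processor_id() debug
	 * check stays quiet.
	 */
	if (addr > TASK_SIZE)
		harden_branch_predictor();

	/* ... set up and deliver the signal to current ... */
}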

My hunch is that we're missing some tracking that indicates
to the kernel that we're already non-preemptible by virtue
of being in an exception handler.

Russell, what do you think?

         M.
Russell King (Oracle) June 22, 2022, 2:04 p.m. UTC | #4
On Wed, Jun 22, 2022 at 02:50:15PM +0100, Marc Zyngier wrote:
> On 2022-06-22 07:49, Tetsuo Handa wrote:
> > syzbot complains about smp_processor_id() being called from
> > harden_branch_predictor() on the page fault path [1]. Explicitly disable
> > preemption and use raw_smp_processor_id().
> > 
> > Link: https://syzkaller.appspot.com/bug?extid=a7ee43e564223f195c84 [1]
> > Reported-by: syzbot
> > <syzbot+a7ee43e564223f195c84@syzkaller.appspotmail.com>
> > Fixes: f5fe12b1eaee220c ("ARM: spectre-v2: harden user aborts in kernel
> > space")
> > Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> > ---
> > This patch is completely untested.
> > 
> >  arch/arm/include/asm/system_misc.h | 7 +++++--
> >  1 file changed, 5 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/arm/include/asm/system_misc.h
> > b/arch/arm/include/asm/system_misc.h
> > index 98b37340376b..a92446769acd 100644
> > --- a/arch/arm/include/asm/system_misc.h
> > +++ b/arch/arm/include/asm/system_misc.h
> > @@ -20,8 +20,11 @@ typedef void (*harden_branch_predictor_fn_t)(void);
> >  DECLARE_PER_CPU(harden_branch_predictor_fn_t,
> > harden_branch_predictor_fn);
> >  static inline void harden_branch_predictor(void)
> >  {
> > -	harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn,
> > -						  smp_processor_id());
> > +	harden_branch_predictor_fn_t fn;
> > +
> > +	preempt_disable_notrace();
> > +	fn = per_cpu(harden_branch_predictor_fn, raw_smp_processor_id());
> > +	preempt_enable_no_resched_notrace();
> >  	if (fn)
> >  		fn();
> >  }
> 
> I don't think that's required.
> 
> harden_branch_predictor() is always called on the fault path,
> from __do_user_fault(), and that's always non-preemptible.
> 
> My hunch is that we're missing some tracking that indicates
> to the kernel that we're already non-preemptible by virtue
> of being in an exception handler.
> 
> Russell, what do you think?

There is one path where we can hit this while we're preemptible - a
page fault (handled via do_page_fault) at an address in kernel space
triggered by user space (e.g. userspace trying to access vmalloc or
module space). 

I suppose, since we know that's never going to be fixed up, we could
detect that the address >= TASK_SIZE and we're entering from userspace
before enabling interrupts, and do the hardening early in that path.
We'd need to move the other cases where we call the hardening as well.
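
To make that concrete, a purely hypothetical sketch of such an early test,
written here as C in do_page_fault() for readability (the real change would
have to happen before the abort path re-enables interrupts, and none of this
is actual kernel code):

/* Hypothetical sketch only: harden before interrupts are re-enabled,
 * so the per-CPU lookup always runs non-preemptibly. */
static int
do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
	/*
	 * The extra fast-path test: a user-mode fault on a kernel
	 * address is never going to be fixed up, so harden right away.
	 */
	if (addr >= TASK_SIZE && user_mode(regs))
		harden_branch_predictor();

	/* Only now re-enable interrupts and take the normal slow path. */
	if (interrupts_enabled(regs))
		local_irq_enable();

	/* ... normal fault handling ... */
	return 0;
}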

The down side is more tests in the page fault fast-path... so there
will be a performance penalty to be paid just to shut up this warning -
a warning that effectively says "userspace is trying to do real bad stuff",
which basically means userspace is trying to run an exploit...

Which makes me think... having the loud complaint from the kernel there
is actually a good thing, it makes people sit up and notice that
something is wrong.
Tetsuo Handa June 22, 2022, 3:02 p.m. UTC | #5
On 2022/06/22 23:04, Russell King (Oracle) wrote:
> Which makes me think... having the loud complaint from the kernel there
> is actually a good thing, it makes people sit up and notice that
> something is wrong.

OK. Then, would you change the code so that it does not emit a "BUG:" message,
since syzkaller stops testing upon encountering the "BUG:" string?
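
For background, the "BUG:" prefix comes from check_preemption_disabled() in
lib/smp_processor_id.c; roughly paraphrased (checks are simplified and not
verbatim):

/* Rough paraphrase of lib/smp_processor_id.c. */
notrace static unsigned int
check_preemption_disabled(const char *what1, const char *what2)
{
	int this_cpu = raw_smp_processor_id();

	/* No warning if preemption or IRQs are disabled ... */
	if (likely(preempt_count()))
		goto out;
	if (irqs_disabled())
		goto out;
	/* ... (CPU-affinity and early-boot checks omitted) ... */

	/* ... otherwise the report starts with the "BUG:" string that
	 * makes syzkaller stop testing. */
	printk(KERN_ERR "BUG: using %s%s() in preemptible [%08x] code: %s/%d\n",
	       what1, what2, preempt_count() - 1, current->comm, current->pid);
	dump_stack();
out:
	return this_cpu;
}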
Tetsuo Handa June 24, 2022, 5:39 a.m. UTC | #6
On 2022/06/23 0:02, Tetsuo Handa wrote:
> On 2022/06/22 23:04, Russell King (Oracle) wrote:
>> Which makes me think... having the loud complaint from the kernel there
>> is actually a good thing, it makes people sit up and notice that
>> something is wrong.
> 
> OK. Then, would you change the code so that it does not emit a "BUG:" message,
> since syzkaller stops testing upon encountering the "BUG:" string?
> 

Something like this?

 arch/arm/include/asm/system_misc.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
index 98b37340376b..09d5c2262165 100644
--- a/arch/arm/include/asm/system_misc.h
+++ b/arch/arm/include/asm/system_misc.h
@@ -20,10 +20,22 @@ typedef void (*harden_branch_predictor_fn_t)(void);
 DECLARE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
 static inline void harden_branch_predictor(void)
 {
-	harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn,
-						  smp_processor_id());
+	static DEFINE_RATELIMIT_STATE(predictor_rs, 10 * HZ, 1);
+	const bool is_preemptible = preemptible();
+	harden_branch_predictor_fn_t fn;
+
+	if (unlikely(is_preemptible)) {
+		ratelimit_set_flags(&predictor_rs, RATELIMIT_MSG_ON_RELEASE);
+		if (__ratelimit(&predictor_rs))
+			pr_err("%s[%d] page fault with preemption enabled (exploit attempt?)\n",
+			       current->comm, task_pid_nr(current));
+		preempt_disable_notrace();
+	}
+	fn = per_cpu(harden_branch_predictor_fn, raw_smp_processor_id());
 	if (fn)
 		fn();
+	if (unlikely(is_preemptible))
+		preempt_enable_no_resched_notrace();
 }
 #else
 #define harden_branch_predictor() do { } while (0)

Patch

diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
index 98b37340376b..a92446769acd 100644
--- a/arch/arm/include/asm/system_misc.h
+++ b/arch/arm/include/asm/system_misc.h
@@ -20,8 +20,11 @@  typedef void (*harden_branch_predictor_fn_t)(void);
 DECLARE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
 static inline void harden_branch_predictor(void)
 {
-	harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn,
-						  smp_processor_id());
+	harden_branch_predictor_fn_t fn;
+
+	preempt_disable_notrace();
+	fn = per_cpu(harden_branch_predictor_fn, raw_smp_processor_id());
+	preempt_enable_no_resched_notrace();
 	if (fn)
 		fn();
 }