
arm64: ssbs: Fix context-switch when SSBS instructions are present

Message ID	20200206113410.18301-1-will@kernel.org
State		New, archived
Series		arm64: ssbs: Fix context-switch when SSBS instructions are present

Commit Message

Will Deacon Feb. 6, 2020, 11:34 a.m. UTC
When all CPUs in the system implement the SSBS instructions, we
advertise this via an HWCAP and allow EL0 to toggle the SSBS field
in PSTATE directly. Consequently, the state of the mitigation is not
accurately tracked by the TIF_SSBD thread flag and the PSTATE value
is authoritative.

Avoid forcing the SSBS field in context-switch on such a system, and
simply rely on the PSTATE register instead.

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Srinivas Ramana <sramana@codeaurora.org>
Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/process.c | 7 +++++++
 1 file changed, 7 insertions(+)
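
For context, a minimal user-space sketch of the behaviour described above:
once HWCAP_SSBS is advertised, EL0 can flip PSTATE.SSBS itself with the
MSR (immediate) form, which is why TIF_SSBD stops being authoritative.
This is illustrative only and not part of the patch; the HWCAP_SSBS
fallback define and the raw .inst encodings are spelled out as assumptions
for toolchains built without SSBS support.

/* Illustrative sketch -- not part of this patch. */
#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_SSBS
#define HWCAP_SSBS	(1UL << 28)	/* arch/arm64 uapi asm/hwcap.h */
#endif

int main(void)
{
	if (!(getauxval(AT_HWCAP) & HWCAP_SSBS)) {
		puts("SSBS instructions not advertised via HWCAP");
		return 1;
	}

	/*
	 * MSR SSBS, #1: set PSTATE.SSBS, i.e. turn the mitigation off.
	 * Raw encoding so this assembles without the +ssbs extension.
	 */
	asm volatile(".inst 0xd503413f" ::: "memory");

	/* MSR SSBS, #0: clear PSTATE.SSBS, i.e. turn the mitigation on. */
	asm volatile(".inst 0xd503403f" ::: "memory");

	return 0;
}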

Comments

Marc Zyngier Feb. 6, 2020, 11:49 a.m. UTC | #1
On 2020-02-06 11:34, Will Deacon wrote:
> When all CPUs in the system implement the SSBS instructions, we
> advertise this via an HWCAP and allow EL0 to toggle the SSBS field
> in PSTATE directly. Consequently, the state of the mitigation is not
> accurately tracked by the TIF_SSBD thread flag and the PSTATE value
> is authoritative.
> 
> Avoid forcing the SSBS field in context-switch on such a system, and
> simply rely on the PSTATE register instead.
> 
> Cc: <stable@vger.kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Srinivas Ramana <sramana@codeaurora.org>
> Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/kernel/process.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index d54586d5b031..45e867f40a7a 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct task_struct *next)
>  	if (unlikely(next->flags & PF_KTHREAD))
>  		return;
> 
> +	/*
> +	 * If all CPUs implement the SSBS instructions, then we just
> +	 * need to context-switch the PSTATE field.
> +	 */
> +	if (cpu_have_feature(cpu_feature(SSBS)))
> +		return;
> +
>  	/* If the mitigation is enabled, then we leave SSBS clear. */
>  	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
>  	    test_tsk_thread_flag(next, TIF_SSBD))

Looks goot to me.

Reviewed-by: Marc Zyngier <maz@kernel.org>

         M.
Will Deacon Feb. 6, 2020, 12:20 p.m. UTC | #2
On Thu, Feb 06, 2020 at 11:49:31AM +0000, Marc Zyngier wrote:
> On 2020-02-06 11:34, Will Deacon wrote:
> > When all CPUs in the system implement the SSBS instructions, we
> > advertise this via an HWCAP and allow EL0 to toggle the SSBS field
> > in PSTATE directly. Consequently, the state of the mitigation is not
> > accurately tracked by the TIF_SSBD thread flag and the PSTATE value
> > is authoritative.
> > 
> > Avoid forcing the SSBS field in context-switch on such a system, and
> > simply rely on the PSTATE register instead.
> > 
> > Cc: <stable@vger.kernel.org>
> > Cc: Marc Zyngier <maz@kernel.org>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Srinivas Ramana <sramana@codeaurora.org>
> > Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/kernel/process.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> > 
> > diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> > index d54586d5b031..45e867f40a7a 100644
> > --- a/arch/arm64/kernel/process.c
> > +++ b/arch/arm64/kernel/process.c
> > @@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct task_struct *next)
> >  	if (unlikely(next->flags & PF_KTHREAD))
> >  		return;
> > 
> > +	/*
> > +	 * If all CPUs implement the SSBS instructions, then we just
> > +	 * need to context-switch the PSTATE field.
> > +	 */
> > +	if (cpu_have_feature(cpu_feature(SSBS)))
> > +		return;
> > +
> >  	/* If the mitigation is enabled, then we leave SSBS clear. */
> >  	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
> >  	    test_tsk_thread_flag(next, TIF_SSBD))
> 
> Looks goot to me.

Ja!

> Reviewed-by: Marc Zyngier <maz@kernel.org>

Cheers. It occurs to me that, although the patch is correct, the comment and
the commit message need tweaking because we're actually predicating this on
the presence of SSBS in any form, so the instructions may not be
implemented. That's fine because the prctl() updates pstate, so it remains
authoritative and can't be lost by one of the CPUs treating it as RAZ/WI.

I'll spin a v2 later on.

Will
Marc Zyngier Feb. 6, 2020, 12:41 p.m. UTC | #3
On 2020-02-06 12:20, Will Deacon wrote:
> On Thu, Feb 06, 2020 at 11:49:31AM +0000, Marc Zyngier wrote:
>> On 2020-02-06 11:34, Will Deacon wrote:
>> > When all CPUs in the system implement the SSBS instructions, we
>> > advertise this via an HWCAP and allow EL0 to toggle the SSBS field
>> > in PSTATE directly. Consequently, the state of the mitigation is not
>> > accurately tracked by the TIF_SSBD thread flag and the PSTATE value
>> > is authoritative.
>> >
>> > Avoid forcing the SSBS field in context-switch on such a system, and
>> > simply rely on the PSTATE register instead.
>> >
>> > Cc: <stable@vger.kernel.org>
>> > Cc: Marc Zyngier <maz@kernel.org>
>> > Cc: Catalin Marinas <catalin.marinas@arm.com>
>> > Cc: Srinivas Ramana <sramana@codeaurora.org>
>> > Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
>> > Signed-off-by: Will Deacon <will@kernel.org>
>> > ---
>> >  arch/arm64/kernel/process.c | 7 +++++++
>> >  1 file changed, 7 insertions(+)
>> >
>> > diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
>> > index d54586d5b031..45e867f40a7a 100644
>> > --- a/arch/arm64/kernel/process.c
>> > +++ b/arch/arm64/kernel/process.c
>> > @@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct task_struct *next)
>> >  	if (unlikely(next->flags & PF_KTHREAD))
>> >  		return;
>> >
>> > +	/*
>> > +	 * If all CPUs implement the SSBS instructions, then we just
>> > +	 * need to context-switch the PSTATE field.
>> > +	 */
>> > +	if (cpu_have_feature(cpu_feature(SSBS)))
>> > +		return;
>> > +
>> >  	/* If the mitigation is enabled, then we leave SSBS clear. */
>> >  	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
>> >  	    test_tsk_thread_flag(next, TIF_SSBD))
>> 
>> Looks goot to me.
> 
> Ja!

Ach...

> 
>> Reviewed-by: Marc Zyngier <maz@kernel.org>
> 
> Cheers. It occurs to me that, although the patch is correct, the comment
> and the commit message need tweaking because we're actually predicating
> this on the presence of SSBS in any form, so the instructions may not be
> implemented. That's fine because the prctl() updates pstate, so it
> remains authoritative and can't be lost by one of the CPUs treating it
> as RAZ/WI.

True. It is the PSTATE bit that actually matters, not the presence of
the control instruction.

> I'll spin a v2 later on.

Thanks,

         M.
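
A footnote on the prctl() path mentioned in the exchange above: it is the
generic speculation-control interface from <linux/prctl.h>, which on arm64
updates the target task's saved PSTATE.SSBS, so the setting survives even
on CPUs that treat the MSR SSBS form as RAZ/WI. A hedged sketch of the
user-space side, using only standard uapi constants, with error handling
trimmed:

/* Sketch of the speculation-control prctl() referred to above. */
#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
	/*
	 * PR_SPEC_DISABLE disables speculative store bypass for this
	 * task, i.e. requests the mitigation; SSBS will be left clear.
	 */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
		  PR_SPEC_DISABLE, 0, 0))
		perror("PR_SET_SPECULATION_CTRL");

	/* Read the state back; the result is a PR_SPEC_* bitmask. */
	printf("spec ctrl: %d\n",
	       (int)prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
			  0, 0, 0));
	return 0;
}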

Patch

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index d54586d5b031..45e867f40a7a 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct task_struct *next)
 	if (unlikely(next->flags & PF_KTHREAD))
 		return;
 
+	/*
+	 * If all CPUs implement the SSBS instructions, then we just
+	 * need to context-switch the PSTATE field.
+	 */
+	if (cpu_have_feature(cpu_feature(SSBS)))
+		return;
+
 	/* If the mitigation is enabled, then we leave SSBS clear. */
 	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
 	    test_tsk_thread_flag(next, TIF_SSBD))