[RFC,2/6] x86/power: Inline write_cr[04]()

Message ID 20230112143825.644480983@infradead.org (mailing list archive)
State: New, archived
Series: x86: Fix suspend vs retbleed=stuff

Commit Message

Peter Zijlstra Jan. 12, 2023, 2:31 p.m. UTC
Since we can't do CALL/RET until GS is restored and CR[04] pinning is
of dubious value in this code path, simply write the stored values.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/power/cpu.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Kees Cook Jan. 12, 2023, 9:51 p.m. UTC | #1
On Thu, Jan 12, 2023 at 03:31:43PM +0100, Peter Zijlstra wrote:
> Since we can't do CALL/RET until GS is restored and CR[04] pinning is
> of dubious value in this code path, simply write the stored values.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Reviewed-by: Kees Cook <keescook@chromium.org>
Ingo Molnar Jan. 13, 2023, 1:16 p.m. UTC | #2
* Peter Zijlstra <peterz@infradead.org> wrote:

> Since we can't do CALL/RET until GS is restored and CR[04] pinning is
> of dubious value in this code path, simply write the stored values.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  arch/x86/power/cpu.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> --- a/arch/x86/power/cpu.c
> +++ b/arch/x86/power/cpu.c
> @@ -208,11 +208,11 @@ static void notrace __restore_processor_
>  #else
>  /* CONFIG X86_64 */
>  	native_wrmsrl(MSR_EFER, ctxt->efer);
> -	native_write_cr4(ctxt->cr4);
> +	asm volatile("mov %0,%%cr4": "+r" (ctxt->cr4) : : "memory");

>  #endif
>  	native_write_cr3(ctxt->cr3);
>  	native_write_cr2(ctxt->cr2);
> -	native_write_cr0(ctxt->cr0);
> +	asm volatile("mov %0,%%cr0": "+r" (ctxt->cr0) : : "memory");

Yeah, so CR pinning protects against easily accessible 'gadget' 
functions that exploits can call to disable HW protection features in the 
CR register.

__restore_processor_state() might be such a gadget if an exploit can pass 
in a well-prepared 'struct saved_context' on the stack.

Can we set up cr0/cr4 after we have a proper GS, or is that a 
chicken-and-egg scenario?

Thanks,

	Ingo
Peter Zijlstra Jan. 13, 2023, 5:17 p.m. UTC | #3
On Fri, Jan 13, 2023 at 02:16:44PM +0100, Ingo Molnar wrote:
> 
> * Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > Since we can't do CALL/RET until GS is restored and CR[04] pinning is
> > of dubious value in this code path, simply write the stored values.
> > 
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> >  arch/x86/power/cpu.c |    4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > --- a/arch/x86/power/cpu.c
> > +++ b/arch/x86/power/cpu.c
> > @@ -208,11 +208,11 @@ static void notrace __restore_processor_
> >  #else
> >  /* CONFIG X86_64 */
> >  	native_wrmsrl(MSR_EFER, ctxt->efer);
> > -	native_write_cr4(ctxt->cr4);
> > +	asm volatile("mov %0,%%cr4": "+r" (ctxt->cr4) : : "memory");
> 
> >  #endif
> >  	native_write_cr3(ctxt->cr3);
> >  	native_write_cr2(ctxt->cr2);
> > -	native_write_cr0(ctxt->cr0);
> > +	asm volatile("mov %0,%%cr0": "+r" (ctxt->cr0) : : "memory");
> 
> Yeah, so CR pinning protects against easily accessible 'gadget' 
> functions that exploits can call to disable HW protection features in the 
> CR register.
> 
> __restore_processor_state() might be such a gadget if an exploit can pass 
> in a well-prepared 'struct saved_context' on the stack.

Given the extent of saved_context, I think it's a total loss. Best we
can do is something like the last patch here that disallows indirect
calls of this function entirely (on appropriate builds/hardware).

> Can we set up cr0/cr4 after we have a proper GS, or is that a 
> chicken-and-egg scenario?

Can be done, but given the state we're in, I'd rather have the simplest
possible rules; calling out to functions with dodgy CR[04] is
'suboptimal' as well.

If people really worry about this I suppose we can call the full
native_write_cr4() later to double check the value in the context or
something.
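
A minimal sketch of that fallback (hypothetical; not part of this series),
placed later in __restore_processor_state() once GS is back and CALL/RET are
safe again. Re-checking CR0 alongside CR4 is an extension of the suggestion,
since native_write_cr0() pins CR0.WP the same way:

	/*
	 * Hypothetical re-validation: by this point GS has been
	 * restored, so calling the pinning-aware helpers is safe.
	 * They re-assert the pinned bits and WARN if the early raw
	 * writes let a mismatch through.
	 */
	native_write_cr0(ctxt->cr0);
	native_write_cr4(ctxt->cr4);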

Patch

--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -208,11 +208,11 @@  static void notrace __restore_processor_
 #else
 /* CONFIG X86_64 */
 	native_wrmsrl(MSR_EFER, ctxt->efer);
-	native_write_cr4(ctxt->cr4);
+	asm volatile("mov %0,%%cr4": "+r" (ctxt->cr4) : : "memory");
 #endif
 	native_write_cr3(ctxt->cr3);
 	native_write_cr2(ctxt->cr2);
-	native_write_cr0(ctxt->cr0);
+	asm volatile("mov %0,%%cr0": "+r" (ctxt->cr0) : : "memory");
 
 	/* Restore the IDT. */
 	native_load_idt(&ctxt->idt);