diff mbox series

x86/svm: Separate STI and VMRUN instructions in svm_asm_do_resume()

Message ID 20250217161241.537168-1-andrew.cooper3@citrix.com (mailing list archive)
State Superseded
Headers show
Series x86/svm: Separate STI and VMRUN instructions in svm_asm_do_resume()

Commit Message

Andrew Cooper Feb. 17, 2025, 4:12 p.m. UTC
There is a corner case in the VMRUN instruction where its INTR_SHADOW state
leaks into guest state if a VMExit occurs before the VMRUN is complete.  An
example of this could be taking #NPF due to event injection.

Xen can safely execute STI anywhere between CLGI and VMRUN, as CLGI blocks
external interrupts too.  Move the STI to the other end of the block, which
moves the VMRUN instruction outside of STI's shadow.
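To make the leak mechanism concrete, here is a toy model of the one-instruction STI shadow (illustration only: the class and "instruction" names are invented for this sketch, not Xen or AMD code). STI sets EFLAGS.IF but blocks interrupt delivery for one further instruction; if VMRUN begins inside that shadow and a VMExit interrupts the VMEntry, the host's shadow is recorded as guest INTR_SHADOW state:

```python
# Toy model of the STI interrupt shadow.  All names here are invented
# for illustration; this is not Xen or AMD microarchitecture code.

class ToyCpu:
    def __init__(self):
        self.shadow = False            # currently executing in an STI shadow?
        self.guest_intr_shadow = False # leaked into guest state?

    def step(self, insn, vmexit_during_vmrun=False):
        # The shadow covers exactly one instruction, then expires.
        in_shadow, self.shadow = self.shadow, False
        if insn == "sti":
            self.shadow = True         # the *next* instruction is shadowed
        elif insn == "vmrun" and vmexit_during_vmrun and in_shadow:
            self.guest_intr_shadow = True  # shadow leaks into guest state

# Old ordering: STI immediately before VMRUN, so VMRUN runs in the shadow.
old = ToyCpu()
for insn in ("clgi", "sti", "vmrun"):
    old.step(insn, vmexit_during_vmrun=True)

# New ordering: STI just after CLGI; the work in between (register pops,
# speculation machinery) expires the shadow before VMRUN starts.
new = ToyCpu()
for insn in ("clgi", "sti", "pop", "vmrun"):
    new.step(insn, vmexit_during_vmrun=True)

assert old.guest_intr_shadow and not new.guest_intr_shadow
```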

Link: https://lore.kernel.org/all/CADH9ctBs1YPmE4aCfGPNBwA10cA8RuAk2gO7542DjMZgs4uzJQ@mail.gmail.com/
Fixes: 66b245d9eaeb ("SVM: limit GIF=0 region")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

I'm reasonably sure this will trigger reliably during LogDirty because of how
we do misconfig propagation.

It's also mostly benign; from the guest's point of view, a pending interrupt
will be delayed by one instruction.  Hence, not tagged for 4.20 at this
juncture.
---
 xen/arch/x86/hvm/svm/entry.S | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)


base-commit: 414dde38b0cf8a38230c8c3f9e8564da9762e743

Comments

Jan Beulich Feb. 17, 2025, 4:51 p.m. UTC | #1
On 17.02.2025 17:12, Andrew Cooper wrote:
> There is a corner case in the VMRUN instruction where its INTR_SHADOW state
> leaks into guest state if a VMExit occurs before the VMRUN is complete.  An
> example of this could be taking #NPF due to event injection.

Ouch.

> --- a/xen/arch/x86/hvm/svm/entry.S
> +++ b/xen/arch/x86/hvm/svm/entry.S
> @@ -57,6 +57,14 @@ __UNLIKELY_END(nsvm_hap)
>  
>          clgi
>  
> +        /*
> +         * Set EFLAGS.IF, after CLGI covers us from real interrupts, but not
> +         * immediately prior to VMRUN.  AMD CPUs leak Xen's INTR_SHADOW from
> +         * the STI into guest state if a VMExit occurs during VMEntry
> +         * (e.g. taking #NPF during event injection).
> +         */
> +        sti
> +
>          /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
>          /* SPEC_CTRL_EXIT_TO_SVM       Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
>          .macro svm_vmentry_spec_ctrl

I'm mildly worried to see it moved this high up. Any exception taken in
this exit code would consider the system to have interrupts enabled, when
we have more restrictive handling for the IF=0 case. Could we meet in the
middle and have STI before we start popping registers off the stack, but
after all the speculation machinery?

Jan
Andrew Cooper Feb. 17, 2025, 5:40 p.m. UTC | #2
On 17/02/2025 4:51 pm, Jan Beulich wrote:
> On 17.02.2025 17:12, Andrew Cooper wrote:
>> There is a corner case in the VMRUN instruction where its INTR_SHADOW state
>> leaks into guest state if a VMExit occurs before the VMRUN is complete.  An
>> example of this could be taking #NPF due to event injection.
> Ouch.

Yeah.  Intel go out of their way to make VM{LAUNCH,RESUME} fail if
they're executed in a shadow.

>
>> --- a/xen/arch/x86/hvm/svm/entry.S
>> +++ b/xen/arch/x86/hvm/svm/entry.S
>> @@ -57,6 +57,14 @@ __UNLIKELY_END(nsvm_hap)
>>  
>>          clgi
>>  
>> +        /*
>> +         * Set EFLAGS.IF, after CLGI covers us from real interrupts, but not
>> +         * immediately prior to VMRUN.  AMD CPUs leak Xen's INTR_SHADOW from
>> +         * the STI into guest state if a VMExit occurs during VMEntry
>> +         * (e.g. taking #NPF during event injection).
>> +         */
>> +        sti
>> +
>>          /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
>>          /* SPEC_CTRL_EXIT_TO_SVM       Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
>>          .macro svm_vmentry_spec_ctrl
> I'm mildly worried to see it moved this high up. Any exception taken in
> this exit code would consider the system to have interrupts enabled, when
> we have more restrictive handling for the IF=0 case. Could we meet in the
> middle and have STI before we start popping registers off the stack, but
> after all the speculation machinery?

Any exception taken here is fatal, and is going to fail in weird ways;
e.g. we don't clean up GIF before entering the crash kernel.

But yes, we probably should take steps to prevent the interrupted context
from looking even more weird than usual.

I'll put it above the line of pops.  They're going to turn into a single
macro when I can dust off that series.

~Andrew
Jan Beulich Feb. 18, 2025, 11:25 a.m. UTC | #3
On 17.02.2025 18:40, Andrew Cooper wrote:
> On 17/02/2025 4:51 pm, Jan Beulich wrote:
>> On 17.02.2025 17:12, Andrew Cooper wrote:
>>> There is a corner case in the VMRUN instruction where its INTR_SHADOW state
>>> leaks into guest state if a VMExit occurs before the VMRUN is complete.  An
>>> example of this could be taking #NPF due to event injection.
>> Ouch.
> 
> Yeah.  Intel go out of their way to make VM{LAUNCH,RESUME} fail if
> they're executed in a shadow.
> 
>>
>>> --- a/xen/arch/x86/hvm/svm/entry.S
>>> +++ b/xen/arch/x86/hvm/svm/entry.S
>>> @@ -57,6 +57,14 @@ __UNLIKELY_END(nsvm_hap)
>>>  
>>>          clgi
>>>  
>>> +        /*
>>> +         * Set EFLAGS.IF, after CLGI covers us from real interrupts, but not
>>> +         * immediately prior to VMRUN.  AMD CPUs leak Xen's INTR_SHADOW from
>>> +         * the STI into guest state if a VMExit occurs during VMEntry
>>> +         * (e.g. taking #NPF during event injection).
>>> +         */
>>> +        sti
>>> +
>>>          /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
>>>          /* SPEC_CTRL_EXIT_TO_SVM       Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
>>>          .macro svm_vmentry_spec_ctrl
>> I'm mildly worried to see it moved this high up. Any exception taken in
>> this exit code would consider the system to have interrupts enabled, when
>> we have more restrictive handling for the IF=0 case. Could we meet in the
>> middle and have STI before we start popping registers off the stack, but
>> after all the speculation machinery?
> 
> Any exception taken here is fatal, and is going to fail in weird ways;
> e.g. we don't clean up GIF before entering the crash kernel.
> 
> But yes, we probably should take steps to prevent the interrupted context
> from looking even more weird than usual.
> 
> I'll put it above the line of pops.  They're going to turn into a single
> macro when I can dust off that series.

Then:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan

Patch

diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index 6fd9652c04a1..c710464673f0 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -57,6 +57,14 @@  __UNLIKELY_END(nsvm_hap)
 
         clgi
 
+        /*
+         * Set EFLAGS.IF, after CLGI covers us from real interrupts, but not
+         * immediately prior to VMRUN.  AMD CPUs leak Xen's INTR_SHADOW from
+         * the STI into guest state if a VMExit occurs during VMEntry
+         * (e.g. taking #NPF during event injection).
+         */
+        sti
+
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
         /* SPEC_CTRL_EXIT_TO_SVM       Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         .macro svm_vmentry_spec_ctrl
@@ -91,7 +99,6 @@  __UNLIKELY_END(nsvm_hap)
         pop  %rsi
         pop  %rdi
 
-        sti
         vmrun
 
         SAVE_ALL