Message ID | 20220315141807.22770-3-roger.pau@citrix.com (mailing list archive)
---|---
State | Superseded
Series | amd/msr: implement MSR_VIRT_SPEC_CTRL for HVM guests
On 15.03.2022 15:18, Roger Pau Monne wrote:
> Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
> hardware has support for it. This requires adding logic in the
> vm{entry,exit} paths for SVM in order to context switch between the
> hypervisor value and the guest one. The added handlers for context
> switch will also be used for the legacy SSBD support.
>
> Introduce a new synthetic feature leaf (X86_FEATURE_VIRT_SC_MSR_HVM)
> to signal whether VIRT_SPEC_CTRL needs to be handled on guest
> vm{entry,exit}.
>
> Note the change in the handling of VIRT_SSBD in the featureset
> description. The change from 's' to 'S' is due to the fact that now if
> VIRT_SSBD is exposed by the hardware it can be passed through to HVM
> guests.

But lower vs upper case mean "(do not) expose by default", not whether
underlying hardware exposes the feature. In patch 1 you actually used
absence in underlying hardware to justify !, not s.

> @@ -610,6 +611,14 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
>      svm_intercept_msr(v, MSR_SPEC_CTRL,
>                        cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
>
> +    /*
> +     * Give access to MSR_VIRT_SPEC_CTRL if the guest has been told about it
> +     * and the hardware implements it.
> +     */
> +    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
> +                      cp->extd.virt_ssbd && cpu_has_virt_ssbd ?

Despite giving the guest direct access guest_{rd,wr}msr() can be hit
for such guests. Don't you need to update what patch 1 added there?

Also, is there a reason the qualifier here is not in sync with ...

> @@ -3105,6 +3114,36 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>          vmcb_set_vintr(vmcb, intr);
>  }
>
> +/* Called with GIF=0. */
> +void vmexit_virt_spec_ctrl(void)
> +{
> +    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
> +
> +    if ( cpu_has_virt_ssbd )

... this one? Since the patching is keyed to VIRT_SC_MSR_HVM, which in
turn is enabled only when cpu_has_virt_ssbd, it would seem to me that
if any asymmetry was okay here, then using cp->extd.virt_ssbd without
cpu_has_virt_ssbd would be the more natural choice.

> @@ -1069,6 +1072,10 @@ void __init init_speculation_mitigations(void)
>          setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
>      }
>
> +    /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
> +    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd && cpu_has_virt_ssbd )
> +        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);

In cpuid.c the comment (matching the code there) talks about exposing
by default. I can't bring this in line with the use of !cpu_has_amd_ssbd
here.

Jan
On Mon, Mar 28, 2022 at 04:02:40PM +0200, Jan Beulich wrote:
> On 15.03.2022 15:18, Roger Pau Monne wrote:
> > Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
> > hardware has support for it. This requires adding logic in the
> > vm{entry,exit} paths for SVM in order to context switch between the
> > hypervisor value and the guest one. The added handlers for context
> > switch will also be used for the legacy SSBD support.
> >
> > Introduce a new synthetic feature leaf (X86_FEATURE_VIRT_SC_MSR_HVM)
> > to signal whether VIRT_SPEC_CTRL needs to be handled on guest
> > vm{entry,exit}.
> >
> > Note the change in the handling of VIRT_SSBD in the featureset
> > description. The change from 's' to 'S' is due to the fact that now if
> > VIRT_SSBD is exposed by the hardware it can be passed through to HVM
> > guests.
>
> But lower vs upper case mean "(do not) expose by default", not whether
> underlying hardware exposes the feature. In patch 1 you actually used
> absence in underlying hardware to justify !, not s.

Maybe I'm getting lost with all this !, lower case and upper case
stuff.

Patch 1 uses '!s' to account for:
 * '!': the feature might be exposed to guests even when not present
   on the host hardware.
 * 's': the feature won't be exposed by default.

Which I think matches what is implemented in patch 1, where VIRT_SSBD
is possibly exposed to guests when running on hardware that doesn't
necessarily have VIRT_SSBD (ie: because we use AMD_SSBD in order to
implement VIRT_SSBD).

Patch 2 changes the 's' to 'S' because this patch introduces support
to expose VIRT_SSBD to guests by default when the host (virtual)
hardware also supports it.

Maybe my understanding of the annotations is incorrect.

> > @@ -610,6 +611,14 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
> >      svm_intercept_msr(v, MSR_SPEC_CTRL,
> >                        cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
> >
> > +    /*
> > +     * Give access to MSR_VIRT_SPEC_CTRL if the guest has been told about it
> > +     * and the hardware implements it.
> > +     */
> > +    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
> > +                      cp->extd.virt_ssbd && cpu_has_virt_ssbd ?
>
> Despite giving the guest direct access guest_{rd,wr}msr() can be hit
> for such guests. Don't you need to update what patch 1 added there?

Indeed, I should add the chunk that's added in the next patch.

> Also, is there a reason the qualifier here is not in sync with ...
>
> > @@ -3105,6 +3114,36 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
> >          vmcb_set_vintr(vmcb, intr);
> >  }
> >
> > +/* Called with GIF=0. */
> > +void vmexit_virt_spec_ctrl(void)
> > +{
> > +    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
> > +
> > +    if ( cpu_has_virt_ssbd )
>
> ... this one? Since the patching is keyed to VIRT_SC_MSR_HVM, which in
> turn is enabled only when cpu_has_virt_ssbd, it would seem to me that
> if any asymmetry was okay here, then using cp->extd.virt_ssbd without
> cpu_has_virt_ssbd.

Using just cp->extd.virt_ssbd will be wrong when the next patch also
introduces support for exposing VIRT_SSBD by setting SSBD using the
non-architectural method.

We need to context switch based just on cpu_has_virt_ssbd because the
running guest might not get VIRT_SSBD offered (cp->extd.virt_ssbd ==
false) but Xen might be using SSBD itself, so it needs to context
switch in order to activate it. Ie: if !cp->extd.virt_ssbd then the
guest will always run with SSBD disabled, but Xen might not.

> > @@ -1069,6 +1072,10 @@ void __init init_speculation_mitigations(void)
> >          setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
> >      }
> >
> > +    /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
> > +    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd && cpu_has_virt_ssbd )
> > +        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
>
> In cpuid.c the comment (matching the code there) talks about exposing
> by default. I can't bring this in line with the use of !cpu_has_amd_ssbd
> here.

Exposing by default if !AMD_SSBD. Otherwise VIRT_SSBD is only in the
max policy, and the default policy will instead contain AMD_SSBD.

If AMD_SSBD is available it implies that X86_FEATURE_SC_MSR_HVM is
already set (or otherwise opt_msr_sc_hvm is disabled), and hence the
way to implement VIRT_SSBD is by using SPEC_CTRL.

I think I need to fix the intercept in that case, so it's:

svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
                  cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
                  !cpu_has_amd_ssbd ?
                  MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);

Because if AMD_SSBD is available VIRT_SSBD will be implemented using
SPEC_CTRL, regardless of whether VIRT_SSBD is also available natively.

Hope all this makes sense, I find it quite complex due to all the
interactions.

Thanks, Roger.
On 28.03.2022 17:19, Roger Pau Monné wrote:
> On Mon, Mar 28, 2022 at 04:02:40PM +0200, Jan Beulich wrote:
>> On 15.03.2022 15:18, Roger Pau Monne wrote:
>>> Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
>>> hardware has support for it. This requires adding logic in the
>>> vm{entry,exit} paths for SVM in order to context switch between the
>>> hypervisor value and the guest one. The added handlers for context
>>> switch will also be used for the legacy SSBD support.
>>>
>>> Introduce a new synthetic feature leaf (X86_FEATURE_VIRT_SC_MSR_HVM)
>>> to signal whether VIRT_SPEC_CTRL needs to be handled on guest
>>> vm{entry,exit}.
>>>
>>> Note the change in the handling of VIRT_SSBD in the featureset
>>> description. The change from 's' to 'S' is due to the fact that now if
>>> VIRT_SSBD is exposed by the hardware it can be passed through to HVM
>>> guests.
>>
>> But lower vs upper case mean "(do not) expose by default", not whether
>> underlying hardware exposes the feature. In patch 1 you actually used
>> absence in underlying hardware to justify !, not s.
>
> Maybe I'm getting lost with all this !, lower case and upper case
> stuff.
>
> Patch 1 uses '!s' to account for:
> * '!': the feature might be exposed to guests even when not present
>   on the host hardware.
> * 's': the feature won't be exposed by default.
>
> Which I think matches what is implemented in patch 1 where VIRT_SSBD
> is possibly exposed to guest when running on hardware that don't
> necessarily have VIRT_SSBD (ie: because we use AMD_SSBD in order to
> implement VIRT_SSBD).
>
> Patch 2 changes the 's' to 'S' because this patch introduces support
> to expose VIRT_SSBD to guests by default when the host (virtual)
> hardware also supports it.

Hmm, so maybe the wording in the description is merely a little
unfortunate.

>>> @@ -610,6 +611,14 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
>>>      svm_intercept_msr(v, MSR_SPEC_CTRL,
>>>                        cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
>>>
>>> +    /*
>>> +     * Give access to MSR_VIRT_SPEC_CTRL if the guest has been told about it
>>> +     * and the hardware implements it.
>>> +     */
>>> +    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
>>> +                      cp->extd.virt_ssbd && cpu_has_virt_ssbd ?
>>
>> Despite giving the guest direct access guest_{rd,wr}msr() can be hit
>> for such guests. Don't you need to update what patch 1 added there?
>
> Indeed, I should add the chunk that's added in the next patch.
>
>> Also, is there a reason the qualifier here is not in sync with ...
>>
>>> @@ -3105,6 +3114,36 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>>          vmcb_set_vintr(vmcb, intr);
>>>  }
>>>
>>> +/* Called with GIF=0. */
>>> +void vmexit_virt_spec_ctrl(void)
>>> +{
>>> +    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
>>> +
>>> +    if ( cpu_has_virt_ssbd )
>>
>> ... this one? Since the patching is keyed to VIRT_SC_MSR_HVM, which in
>> turn is enabled only when cpu_has_virt_ssbd, it would seem to me that
>> if any asymmetry was okay here, then using cp->extd.virt_ssbd without
>> cpu_has_virt_ssbd.
>
> Using just cp->extd.virt_ssbd will be wrong when next patch also
> introduces support for exposing VIRT_SSBD by setting SSBD using the
> non-architectural method.

Well, if the next patch needs to make adjustments here, then that's
fine but different from what's needed at this point. However, ...

> We need to context switch just based on cpu_has_virt_ssbd because the
> running guest might not get VIRT_SSBD offered (cp->extd.virt_ssbd ==
> false) but Xen might be using SSBD itself so it needs to context
> switch in order to activate it. Ie: if !cp->extd.virt_ssbd then the
> guest will always run with SSBD disabled, but Xen might not.

... yes, I see.

> Hope all this makes sense,

It does, and ...

> I find it quite complex due to all the interactions.

... yes, I definitely agree.

Jan
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 4ca77ea870..91e53284fc 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -534,6 +534,10 @@ static void __init calculate_hvm_max_policy(void)
          raw_cpuid_policy.basic.sep )
         __set_bit(X86_FEATURE_SEP, hvm_featureset);
 
+    if ( !boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) )
+        /* Clear VIRT_SSBD if VIRT_SPEC_CTRL is not exposed to guests. */
+        __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+
     /*
      * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
      * availability, or admin choice), hide the feature.
@@ -590,6 +594,13 @@ static void __init calculate_hvm_def_policy(void)
     guest_common_feature_adjustments(hvm_featureset);
     guest_common_default_feature_adjustments(hvm_featureset);
 
+    /*
+     * AMD_SSBD is preferred over VIRT_SSBD, so don't expose the latter by
+     * default if the former is available.
+     */
+    if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
+        __clear_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
+
     sanitise_featureset(hvm_featureset);
     cpuid_featureset_to_policy(hvm_featureset, p);
     recalculate_xstate(p);
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index 4ae55a2ef6..e2c104bb1e 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -57,6 +57,9 @@ __UNLIKELY_END(nsvm_hap)
 
         clgi
 
+        ALTERNATIVE "", STR(call vmentry_virt_spec_ctrl), \
+            X86_FEATURE_VIRT_SC_MSR_HVM
+
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
         /* SPEC_CTRL_EXIT_TO_SVM   Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         .macro svm_vmentry_spec_ctrl
@@ -114,6 +117,9 @@ __UNLIKELY_END(nsvm_hap)
         ALTERNATIVE "", svm_vmexit_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
+        ALTERNATIVE "", STR(call vmexit_virt_spec_ctrl), \
+            X86_FEATURE_VIRT_SC_MSR_HVM
+
         stgi
 GLOBAL(svm_stgi_label)
         mov  %rsp,%rdi
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 64a45045da..73a0af599b 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -52,6 +52,7 @@
 #include <asm/hvm/svm/svmdebug.h>
 #include <asm/hvm/svm/nestedsvm.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/spec_ctrl.h>
 #include <asm/x86_emulate.h>
 #include <public/sched.h>
 #include <asm/hvm/vpt.h>
@@ -610,6 +611,14 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     svm_intercept_msr(v, MSR_SPEC_CTRL,
                       cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
 
+    /*
+     * Give access to MSR_VIRT_SPEC_CTRL if the guest has been told about it
+     * and the hardware implements it.
+     */
+    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
+                      cp->extd.virt_ssbd && cpu_has_virt_ssbd ?
+                      MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
     svm_intercept_msr(v, MSR_PRED_CMD,
                       cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
@@ -3105,6 +3114,36 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
         vmcb_set_vintr(vmcb, intr);
 }
 
+/* Called with GIF=0. */
+void vmexit_virt_spec_ctrl(void)
+{
+    unsigned int val = opt_ssbd ? SPEC_CTRL_SSBD : 0;
+
+    if ( cpu_has_virt_ssbd )
+    {
+        unsigned int lo, hi;
+
+        /*
+         * Need to read from the hardware because VIRT_SPEC_CTRL is not
+         * context switched by the hardware, and we allow the guest untrapped
+         * access to the register.
+         */
+        rdmsr(MSR_VIRT_SPEC_CTRL, lo, hi);
+        if ( val != lo )
+            wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+        current->arch.msrs->virt_spec_ctrl.raw = lo;
+    }
+}
+
+/* Called with GIF=0. */
+void vmentry_virt_spec_ctrl(void)
+{
+    unsigned int val = current->arch.msrs->virt_spec_ctrl.raw;
+
+    if ( val != (opt_ssbd ? SPEC_CTRL_SSBD : 0) )
+        wrmsr(MSR_VIRT_SPEC_CTRL, val, 0);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index 7413febd7a..2240547b64 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -40,6 +40,7 @@
 XEN_CPUFEATURE(SC_VERW_HVM,       X86_SYNTH(24)) /* VERW used by Xen for HVM */
 XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
 XEN_CPUFEATURE(XEN_SHSTK,         X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
 XEN_CPUFEATURE(XEN_IBT,           X86_SYNTH(27)) /* Xen uses CET Indirect Branch Tracking */
+XEN_CPUFEATURE(VIRT_SC_MSR_HVM,   X86_SYNTH(28)) /* MSR_VIRT_SPEC_CTRL exposed to HVM */
 
 /* Bug words follow the synthetic words. */
 #define X86_NR_BUG 1
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index ab6fbb5051..460aabe84f 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -375,6 +375,16 @@ struct vcpu_msrs
      */
     uint32_t tsc_aux;
 
+    /*
+     * 0xc001011f - MSR_VIRT_SPEC_CTRL (if !X86_FEATURE_AMD_SSBD)
+     *
+     * AMD only. Guest selected value, saved and restored on guest VM
+     * entry/exit.
+     */
+    struct {
+        uint32_t raw;
+    } virt_spec_ctrl;
+
     /*
      * 0xc00110{27,19-1b} MSR_AMD64_DR{0-3}_ADDRESS_MASK
      *
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index f338bfe292..0d5ec877d1 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -406,9 +406,12 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
             boot_cpu_has(X86_FEATURE_SC_RSB_HVM) ||
             boot_cpu_has(X86_FEATURE_MD_CLEAR)   ||
+            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM) ||
             opt_eager_fpu)                           ? ""               : " None",
            boot_cpu_has(X86_FEATURE_SC_MSR_HVM)  ? " MSR_SPEC_CTRL" : "",
-           boot_cpu_has(X86_FEATURE_SC_MSR_HVM)  ? " MSR_VIRT_SPEC_CTRL" : "",
+           (boot_cpu_has(X86_FEATURE_SC_MSR_HVM) ||
+            boot_cpu_has(X86_FEATURE_VIRT_SC_MSR_HVM)) ? " MSR_VIRT_SPEC_CTRL"
+                                                       : "",
            boot_cpu_has(X86_FEATURE_SC_RSB_HVM)  ? " RSB"           : "",
            opt_eager_fpu                         ? " EAGER_FPU"     : "",
            boot_cpu_has(X86_FEATURE_MD_CLEAR)    ? " MD_CLEAR"      : "");
@@ -1069,6 +1072,10 @@ void __init init_speculation_mitigations(void)
             setup_force_cpu_cap(X86_FEATURE_SC_MSR_HVM);
     }
 
+    /* Support VIRT_SPEC_CTRL.SSBD if AMD_SSBD is not available. */
+    if ( opt_msr_sc_hvm && !cpu_has_amd_ssbd && cpu_has_virt_ssbd )
+        setup_force_cpu_cap(X86_FEATURE_VIRT_SC_MSR_HVM);
+
     /* If we have IBRS available, see whether we should use it. */
     if ( has_spec_ctrl && ibrs )
         default_xen_spec_ctrl |= SPEC_CTRL_IBRS;
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index b797c6bea1..0639b9faf2 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -265,7 +265,7 @@
 XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S  IBRS provides same-mode protection */
 XEN_CPUFEATURE(NO_LMSL,       8*32+20) /*S  EFER.LMSLE no longer supported. */
 XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
 XEN_CPUFEATURE(AMD_SSBD,      8*32+24) /*S  MSR_SPEC_CTRL.SSBD available */
-XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!s MSR_VIRT_SPEC_CTRL.SSBD */
+XEN_CPUFEATURE(VIRT_SSBD,     8*32+25) /*!S MSR_VIRT_SPEC_CTRL.SSBD */
 XEN_CPUFEATURE(SSB_NO,        8*32+26) /*A  Hardware not vulnerable to SSB */
 XEN_CPUFEATURE(PSFD,          8*32+28) /*S  MSR_SPEC_CTRL.PSFD */
Allow HVM guests untrapped access to MSR_VIRT_SPEC_CTRL if the
hardware has support for it. This requires adding logic in the
vm{entry,exit} paths for SVM in order to context switch between the
hypervisor value and the guest one. The added handlers for context
switch will also be used for the legacy SSBD support.

Introduce a new synthetic feature leaf (X86_FEATURE_VIRT_SC_MSR_HVM)
to signal whether VIRT_SPEC_CTRL needs to be handled on guest
vm{entry,exit}.

Note the change in the handling of VIRT_SSBD in the featureset
description. The change from 's' to 'S' is due to the fact that now if
VIRT_SSBD is exposed by the hardware it can be passed through to HVM
guests.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Introduce virt_spec_ctrl vCPU field.
 - Context switch VIRT_SPEC_CTRL on vmentry/vmexit separately from
   SPEC_CTRL.
---
 xen/arch/x86/cpuid.c                        | 11 ++++++
 xen/arch/x86/hvm/svm/entry.S                |  6 ++++
 xen/arch/x86/hvm/svm/svm.c                  | 39 +++++++++++++++++++++
 xen/arch/x86/include/asm/cpufeatures.h      |  1 +
 xen/arch/x86/include/asm/msr.h              | 10 ++++++
 xen/arch/x86/spec_ctrl.c                    |  9 ++++-
 xen/include/public/arch-x86/cpufeatureset.h |  2 +-
 7 files changed, 76 insertions(+), 2 deletions(-)