Message ID | 20220126084452.28975-9-andrew.cooper3@citrix.com (mailing list archive) |
---|---|
State | Superseded |
Series | x86: MSR_SPEC_CTRL support for SVM guests |
On 26/01/2022 08:44, Andrew Cooper wrote:
> With all other pieces in place, MSR_SPEC_CTRL is fully working for HVM guests.
>
> Update the CPUID derivation logic (both PV and HVM to avoid losing subtle
> changes), and explicitly enable the CPUID bits for HVM guests.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

SVM guests get rather more speedy with this hunk, which I missed:

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index bc834556c5f7..f11622ed4ff8 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -606,6 +606,10 @@ static void svm_cpuid_policy_changed(struct vcpu *v)
 
     vmcb_set_exception_intercepts(vmcb, bitmap);
 
+    /* Give access to MSR_SPEC_CTRL if the guest has been told about it. */
+    svm_intercept_msr(v, MSR_SPEC_CTRL,
+                      cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
     svm_intercept_msr(v, MSR_PRED_CMD,
                       cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);

I've folded it into v2, but won't repost for just this.

~Andrew
On 26.01.2022 09:44, Andrew Cooper wrote:
> With all other pieces in place, MSR_SPEC_CTRL is fully working for HVM guests.
>
> Update the CPUID derivation logic (both PV and HVM to avoid losing subtle
> changes), and explicitly enable the CPUID bits for HVM guests.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
>
> Given the adjustment to calculate_pv_max_policy(), we could use 'A' rather
> than 'S' which would avoid a second same-sized diff to cpufeatureset.h, but
> it's also a bit misleading to say 'A' when the PV side won't engage at all
> yet.

I agree with using 'S' at this point for most of them. I'm unsure for
SSB_NO, though - there 'A' would seem more appropriate, and afaict it
would then also be visible to PV guests (since you validly don't make
it dependent upon IBRS or anything else). Aiui this could have been
done even ahead of this work of yours.

> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -433,6 +433,8 @@ static void __init guest_common_feature_adjustments(uint32_t *fs)
>       */
>      if ( test_bit(X86_FEATURE_IBRSB, fs) )
>          __set_bit(X86_FEATURE_STIBP, fs);
> +    if ( test_bit(X86_FEATURE_IBRS, fs) )
> +        __set_bit(X86_FEATURE_AMD_STIBP, fs);
>
>      /*
>       * On hardware which supports IBRS/IBPB, we can offer IBPB independently

Following this comment is a cross-vendor adjustment. Shouldn't there
be more of these now? Or has this been a one-off for some reason?

> @@ -456,11 +458,14 @@ static void __init calculate_pv_max_policy(void)
>          pv_featureset[i] &= pv_max_featuremask[i];
>
>      /*
> -     * If Xen isn't virtualising MSR_SPEC_CTRL for PV guests because of
> -     * administrator choice, hide the feature.
> +     * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
> +     * availability, or admin choice), hide the feature.
> +     */

Unintended replacement of "PV" by "HVM"?

>      if ( !boot_cpu_has(X86_FEATURE_SC_MSR_PV) )
> +    {
>          __clear_bit(X86_FEATURE_IBRSB, pv_featureset);
> +        __clear_bit(X86_FEATURE_IBRS, pv_featureset);
> +    }
>
>      guest_common_feature_adjustments(pv_featureset);

What I had done in a local (and incomplete) patch is pass
X86_FEATURE_SC_MSR_{PV,HVM} into guest_common_feature_adjustments() and
do what you extend above (and then again for HVM) centrally there. (My
gen-cpuid.py adjustment was smaller, so there were even more bits to
clear, and hence it became yet more relevant to avoid doing this in two
places.)

> --- a/xen/tools/gen-cpuid.py
> +++ b/xen/tools/gen-cpuid.py
> @@ -290,6 +290,11 @@ def crunch_numbers(state):
>
>          # In principle the TSXLDTRK insns could also be considered independent.
>          RTM: [TSXLDTRK],
> +
> +        # AMD speculative controls
> +        IBRS: [AMD_STIBP, AMD_SSBD, PSFD,
> +               IBRS_ALWAYS, IBRS_FAST, IBRS_SAME_MODE],
> +        AMD_STIBP: [STIBP_ALWAYS],
>      }

Could I talk you into moving this ahead of RTM, such that it sits next
to the related Intel stuff?

Jan
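For illustration, the centralisation Jan describes could look roughly like the
sketch below. This is a sketch only, not Jan's actual (incomplete) local patch;
the extra parameter and its name are hypothetical:

/*
 * Sketch: pass the relevant synthetic feature (X86_FEATURE_SC_MSR_PV or
 * X86_FEATURE_SC_MSR_HVM) into guest_common_feature_adjustments(), so the
 * MSR_SPEC_CTRL-related clearing happens in one place rather than being
 * duplicated in calculate_pv_max_policy() and calculate_hvm_max_policy().
 */
static void __init guest_common_feature_adjustments(uint32_t *fs,
                                                    unsigned int sc_msr_feat)
{
    /* Hide the MSR_SPEC_CTRL interface if Xen isn't virtualising it. */
    if ( !boot_cpu_has(sc_msr_feat) )
    {
        __clear_bit(X86_FEATURE_IBRSB, fs);
        __clear_bit(X86_FEATURE_IBRS, fs);
    }

    /* ... existing adjustments unchanged ... */
}

/* Callers would then pass the matching synthetic feature, e.g.: */
/*   guest_common_feature_adjustments(pv_featureset, X86_FEATURE_SC_MSR_PV);   */
/*   guest_common_feature_adjustments(hvm_featureset, X86_FEATURE_SC_MSR_HVM); */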
On 27/01/2022 13:47, Jan Beulich wrote:
> On 26.01.2022 09:44, Andrew Cooper wrote:
>> With all other pieces in place, MSR_SPEC_CTRL is fully working for HVM guests.
>>
>> Update the CPUID derivation logic (both PV and HVM to avoid losing subtle
>> changes), and explicitly enable the CPUID bits for HVM guests.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> Given the adjustment to calculate_pv_max_policy(), we could use 'A' rather
>> than 'S' which would avoid a second same-sized diff to cpufeatureset.h, but
>> it's also a bit misleading to say 'A' when the PV side won't engage at all
>> yet.
> I agree with using 'S' at this point for most of them. I'm unsure for
> SSB_NO, though - there 'A' would seem more appropriate, and afaict it
> would then also be visible to PV guests (since you validly don't make
> it dependent upon IBRS or anything else). Aiui this could have been
> done even ahead of this work of yours.

Hmm. There aren't any AMD CPUs I'm aware of which enumerate SSB_NO, but
it probably ought to be 'A'. I'll pull this out into a separate change.

>> --- a/xen/arch/x86/cpuid.c
>> +++ b/xen/arch/x86/cpuid.c
>> @@ -433,6 +433,8 @@ static void __init guest_common_feature_adjustments(uint32_t *fs)
>>       */
>>      if ( test_bit(X86_FEATURE_IBRSB, fs) )
>>          __set_bit(X86_FEATURE_STIBP, fs);
>> +    if ( test_bit(X86_FEATURE_IBRS, fs) )
>> +        __set_bit(X86_FEATURE_AMD_STIBP, fs);
>>
>>      /*
>>       * On hardware which supports IBRS/IBPB, we can offer IBPB independently
> Following this comment is a cross-vendor adjustment. Shouldn't there
> be more of these now? Or has this been a one-off for some reason?

MSR_PRED_CMD is easy to cross-vendor (just expose the CPUID bit and
MSR), but IIRC the intention here was to be able to configure an Intel
VM with IBPB only and no IBRS. This was at the same time as the buggy
MSR_SPEC_CTRL microcode was out in the wild and a `1: wrmsr; jmp 1b`
loop would reliably take the system down.

For MSR_SPEC_CTRL, there are three substantially different behaviours.
This is part of why this series has taken so long to get organised, and
why there's no PV guest support yet.

Attempting to cross-vendor anything related to MSR_SPEC_CTRL will be a
disaster. Even if you can make the VM not crash (ought to be doable),
it's going to have a very broken idea of its Spectre-v2 safety.

>> @@ -456,11 +458,14 @@ static void __init calculate_pv_max_policy(void)
>>          pv_featureset[i] &= pv_max_featuremask[i];
>>
>>      /*
>> -     * If Xen isn't virtualising MSR_SPEC_CTRL for PV guests because of
>> -     * administrator choice, hide the feature.
>> +     * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
>> +     * availability, or admin choice), hide the feature.
>> +     */
> Unintended replacement of "PV" by "HVM"?

Yes. Too much copy&paste.

~Andrew
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index b5af48324aef..64570148c165 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -433,6 +433,8 @@ static void __init guest_common_feature_adjustments(uint32_t *fs)
      */
     if ( test_bit(X86_FEATURE_IBRSB, fs) )
         __set_bit(X86_FEATURE_STIBP, fs);
+    if ( test_bit(X86_FEATURE_IBRS, fs) )
+        __set_bit(X86_FEATURE_AMD_STIBP, fs);
 
     /*
      * On hardware which supports IBRS/IBPB, we can offer IBPB independently
@@ -456,11 +458,14 @@ static void __init calculate_pv_max_policy(void)
         pv_featureset[i] &= pv_max_featuremask[i];
 
     /*
-     * If Xen isn't virtualising MSR_SPEC_CTRL for PV guests because of
-     * administrator choice, hide the feature.
+     * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
+     * availability, or admin choice), hide the feature.
      */
     if ( !boot_cpu_has(X86_FEATURE_SC_MSR_PV) )
+    {
         __clear_bit(X86_FEATURE_IBRSB, pv_featureset);
+        __clear_bit(X86_FEATURE_IBRS, pv_featureset);
+    }
 
     guest_common_feature_adjustments(pv_featureset);
 
@@ -530,11 +535,14 @@ static void __init calculate_hvm_max_policy(void)
         __set_bit(X86_FEATURE_SEP, hvm_featureset);
 
     /*
-     * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests because of
-     * administrator choice, hide the feature.
+     * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
+     * availability, or admin choice), hide the feature.
      */
     if ( !boot_cpu_has(X86_FEATURE_SC_MSR_HVM) )
+    {
         __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
+        __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
+    }
 
     /*
      * With VT-x, some features are only supported by Xen if dedicated
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 0b399375566f..dfbf25b9acb3 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -256,18 +256,18 @@ XEN_CPUFEATURE(CLZERO, 8*32+ 0) /*A CLZERO instruction */
 XEN_CPUFEATURE(RSTR_FP_ERR_PTRS, 8*32+ 2) /*A (F)X{SAVE,RSTOR} always saves/restores FPU Error pointers */
 XEN_CPUFEATURE(WBNOINVD, 8*32+ 9) /* WBNOINVD instruction */
 XEN_CPUFEATURE(IBPB, 8*32+12) /*A IBPB support only (no IBRS, used by AMD) */
-XEN_CPUFEATURE(IBRS, 8*32+14) /* MSR_SPEC_CTRL.IBRS */
-XEN_CPUFEATURE(AMD_STIBP, 8*32+15) /* MSR_SPEC_CTRL.STIBP */
-XEN_CPUFEATURE(IBRS_ALWAYS, 8*32+16) /* IBRS preferred always on */
-XEN_CPUFEATURE(STIBP_ALWAYS, 8*32+17) /* STIBP preferred always on */
-XEN_CPUFEATURE(IBRS_FAST, 8*32+18) /* IBRS preferred over software options */
-XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /* IBRS provides same-mode protection */
+XEN_CPUFEATURE(IBRS, 8*32+14) /*S MSR_SPEC_CTRL.IBRS */
+XEN_CPUFEATURE(AMD_STIBP, 8*32+15) /*S MSR_SPEC_CTRL.STIBP */
+XEN_CPUFEATURE(IBRS_ALWAYS, 8*32+16) /*S IBRS preferred always on */
+XEN_CPUFEATURE(STIBP_ALWAYS, 8*32+17) /*S STIBP preferred always on */
+XEN_CPUFEATURE(IBRS_FAST, 8*32+18) /*S IBRS preferred over software options */
+XEN_CPUFEATURE(IBRS_SAME_MODE, 8*32+19) /*S IBRS provides same-mode protection */
 XEN_CPUFEATURE(NO_LMSL, 8*32+20) /*S EFER.LMSLE no longer supported. */
 XEN_CPUFEATURE(AMD_PPIN, 8*32+23) /* Protected Processor Inventory Number */
-XEN_CPUFEATURE(AMD_SSBD, 8*32+24) /* MSR_SPEC_CTRL.SSBD available */
+XEN_CPUFEATURE(AMD_SSBD, 8*32+24) /*S MSR_SPEC_CTRL.SSBD available */
 XEN_CPUFEATURE(VIRT_SSBD, 8*32+25) /* MSR_VIRT_SPEC_CTRL.SSBD */
-XEN_CPUFEATURE(SSB_NO, 8*32+26) /* Hardware not vulnerable to SSB */
-XEN_CPUFEATURE(PSFD, 8*32+28) /* MSR_SPEC_CTRL.PSFD */
+XEN_CPUFEATURE(SSB_NO, 8*32+26) /*S Hardware not vulnerable to SSB */
+XEN_CPUFEATURE(PSFD, 8*32+28) /*S MSR_SPEC_CTRL.PSFD */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0.edx, word 9 */
 XEN_CPUFEATURE(AVX512_4VNNIW, 9*32+ 2) /*A AVX512 Neural Network Instructions */
diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index b953648b6572..e4915b5961aa 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -290,6 +290,11 @@ def crunch_numbers(state):
 
         # In principle the TSXLDTRK insns could also be considered independent.
         RTM: [TSXLDTRK],
+
+        # AMD speculative controls
+        IBRS: [AMD_STIBP, AMD_SSBD, PSFD,
+               IBRS_ALWAYS, IBRS_FAST, IBRS_SAME_MODE],
+        AMD_STIBP: [STIBP_ALWAYS],
     }
 
     deep_features = tuple(sorted(deps.keys()))
With all other pieces in place, MSR_SPEC_CTRL is fully working for HVM guests.

Update the CPUID derivation logic (both PV and HVM to avoid losing subtle
changes), and explicitly enable the CPUID bits for HVM guests.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

Given the adjustment to calculate_pv_max_policy(), we could use 'A' rather
than 'S' which would avoid a second same-sized diff to cpufeatureset.h, but
it's also a bit misleading to say 'A' when the PV side won't engage at all
yet.
---
 xen/arch/x86/cpuid.c                        | 16 ++++++++++++----
 xen/include/public/arch-x86/cpufeatureset.h | 18 +++++++++---------
 xen/tools/gen-cpuid.py                      |  5 +++++
 3 files changed, 26 insertions(+), 13 deletions(-)