Message ID: 20230509164336.12523-4-alejandro.vallejo@cloud.com (mailing list archive)
State:      Superseded
Series:     Add CpuidUserDis support
On 09.05.2023 18:43, Alejandro Vallejo wrote:
> Because CpuIdUserDis is reported in CPUID itself, the extended leaf
> containing that bit must be retrieved before calling c_early_init()
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>

Looks largely okay when taken together with patch 2, but ...

> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -279,8 +279,12 @@ static void __init noinline amd_init_levelling(void)
>  	 * that can only be present when Xen is itself virtualized (because
>  	 * it can be emulated)
>  	 */
> -	if (cpu_has_hypervisor && probe_cpuid_faulting())
> +	if ((cpu_has_hypervisor && probe_cpuid_faulting()) ||
> +	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {

... imo the probe_cpuid_faulting() call would better be avoided when
the CPUID bit is set.

> +		expected_levelling_cap |= LCAP_faulting;
> +		levelling_caps |= LCAP_faulting;

Further the movement of these two lines from ...

> @@ -144,8 +145,6 @@ bool __init probe_cpuid_faulting(void)
>  		return false;
>  	}
>
> -	expected_levelling_cap |= LCAP_faulting;
> -	levelling_caps |= LCAP_faulting;
>  	setup_force_cpu_cap(X86_FEATURE_CPUID_FAULTING);

... here (and also to intel.c) should imo be part of patch 2. While
moving them, I think you also want to deal with the stray double
blank.

Jan
On Thu, May 11, 2023 at 01:05:42PM +0200, Jan Beulich wrote:
> > --- a/xen/arch/x86/cpu/amd.c
> > +++ b/xen/arch/x86/cpu/amd.c
> > @@ -279,8 +279,12 @@ static void __init noinline amd_init_levelling(void)
> >  	 * that can only be present when Xen is itself virtualized (because
> >  	 * it can be emulated)
> >  	 */
> > -	if (cpu_has_hypervisor && probe_cpuid_faulting())
> > +	if ((cpu_has_hypervisor && probe_cpuid_faulting()) ||
> > +	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
>
> ... imo the probe_cpuid_faulting() call would better be avoided when
> the CPUID bit is set.

I wrote it like that originally. However, it felt wrong to leave
raw_policy.platform_info unset, as it's set inside probe_cpuid_faulting().
While it's highly unlikely a real AMD machine will have CPUID faulting
support, Xen might see both if it's itself virtualized under Xen.

The crux of the matter here is whether we want the raw policy to be an
accurate representation of _all_ the features of the machine (real or
virtual) or we're ok with it not having features we don't intend to use in
practice. It certainly can be argued either way. CpuidUserDis naturally
gets to the policy through CPUID leaf enumeration, so that's done
regardless.

My .02 is that raw means uncooked and as such should have the actual
physical features reported by the machine, but I could be persuaded either
way.

> > +		expected_levelling_cap |= LCAP_faulting;
> > +		levelling_caps |= LCAP_faulting;
>
> Further the movement of these two lines from ...
>
> > @@ -144,8 +145,6 @@ bool __init probe_cpuid_faulting(void)
> >  		return false;
> >  	}
> >
> > -	expected_levelling_cap |= LCAP_faulting;
> > -	levelling_caps |= LCAP_faulting;
> >  	setup_force_cpu_cap(X86_FEATURE_CPUID_FAULTING);
>
> ... here (and also to intel.c) should imo be part of patch 2. While
> moving them, I think you also want to deal with the stray double
> blank.
>
> Jan

Sure.

Alejandro
On 11.05.2023 14:12, Alejandro Vallejo wrote:
> On Thu, May 11, 2023 at 01:05:42PM +0200, Jan Beulich wrote:
>>> --- a/xen/arch/x86/cpu/amd.c
>>> +++ b/xen/arch/x86/cpu/amd.c
>>> @@ -279,8 +279,12 @@ static void __init noinline amd_init_levelling(void)
>>>  	 * that can only be present when Xen is itself virtualized (because
>>>  	 * it can be emulated)
>>>  	 */
>>> -	if (cpu_has_hypervisor && probe_cpuid_faulting())
>>> +	if ((cpu_has_hypervisor && probe_cpuid_faulting()) ||
>>> +	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
>>
>> ... imo the probe_cpuid_faulting() call would better be avoided when
>> the CPUID bit is set.
>
> I wrote it like that originally. However, it felt wrong to leave
> raw_policy.platform_info unset, as it's set inside probe_cpuid_faulting().
> While it's highly unlikely a real AMD machine will have CPUID faulting
> support, Xen might see both if it's itself virtualized under Xen.
>
> The crux of the matter here is whether we want the raw policy to be an
> accurate representation of _all_ the features of the machine (real or
> virtual) or we're ok with it not having features we don't intend to use in
> practice. It certainly can be argued either way. CpuidUserDis naturally
> gets to the policy through CPUID leaf enumeration, so that's done
> regardless.
>
> My .02 is that raw means uncooked and as such should have the actual
> physical features reported by the machine, but I could be persuaded either
> way.

I think I would be okay if that was (in perhaps slightly abridged form)
made part of the description (or if the code comment there said so, then
also preventing someone [like me] coming and re-ordering the conditional).

Nevertheless having raw_policy populated like this seems a little fragile
in the first place. Andrew - any particular thoughts from you in this
regard?

Jan
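The ordering question debated above can be modeled in a small standalone C
sketch (this is not Xen code; probe() is an illustrative stand-in for
probe_cpuid_faulting(), whose side effect of filling raw_policy.platform_info
is represented by a counter). It shows why the two spellings of the
conditional differ: with short-circuit evaluation, putting the CpuidUserDis
check first skips the probe, and therefore skips its side effect.

```c
#include <assert.h>
#include <stdbool.h>

/* Side-effect counter standing in for raw_policy.platform_info being filled. */
static int probe_calls;

static bool probe(bool faulting_present)
{
    probe_calls++;                 /* side effect always happens when called */
    return faulting_present;
}

/* Patch as posted: probe first, so its side effect always runs. */
static bool levelling_as_posted(bool hv, bool faulting, bool user_dis)
{
    return (hv && probe(faulting)) || user_dis;
}

/* Jan's suggested ordering: skip the probe when CpuidUserDis is present. */
static bool levelling_reordered(bool hv, bool faulting, bool user_dis)
{
    return user_dis || (hv && probe(faulting));
}
```

Both variants decide to use levelling in the same cases; they differ only in
whether the probe's side effect occurs when CpuidUserDis alone would suffice.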
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 899bae7a10..cc9c89fd19 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -279,8 +279,12 @@ static void __init noinline amd_init_levelling(void)
 	 * that can only be present when Xen is itself virtualized (because
 	 * it can be emulated)
 	 */
-	if (cpu_has_hypervisor && probe_cpuid_faulting())
+	if ((cpu_has_hypervisor && probe_cpuid_faulting()) ||
+	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
+		expected_levelling_cap |= LCAP_faulting;
+		levelling_caps |= LCAP_faulting;
 		return;
+	}
 
 	probe_masking_msrs();
 
@@ -371,6 +375,20 @@ static void __init noinline amd_init_levelling(void)
 		ctxt_switch_masking = amd_ctxt_switch_masking;
 }
 
+void amd_set_cpuid_user_dis(bool enable)
+{
+	const uint64_t bit = K8_HWCR_CPUID_USER_DIS;
+	uint64_t val;
+
+	rdmsrl(MSR_K8_HWCR, val);
+
+	if (!!(val & bit) == enable)
+		return;
+
+	val ^= bit;
+	wrmsrl(MSR_K8_HWCR, val);
+}
+
 /*
  * Check for the presence of an AMD erratum. Arguments are defined in amd.h
  * for each known erratum. Return 1 if erratum is found.
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 4bfaac4590..9bbb385db4 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -4,6 +4,7 @@
 #include <xen/param.h>
 #include <xen/smp.h>
 
+#include <asm/amd.h>
 #include <asm/cpu-policy.h>
 #include <asm/current.h>
 #include <asm/debugreg.h>
@@ -144,8 +145,6 @@ bool __init probe_cpuid_faulting(void)
 		return false;
 	}
 
-	expected_levelling_cap |= LCAP_faulting;
-	levelling_caps |= LCAP_faulting;
 	setup_force_cpu_cap(X86_FEATURE_CPUID_FAULTING);
 
 	return true;
@@ -168,8 +167,10 @@ static void set_cpuid_faulting(bool enable)
 
 void ctxt_switch_levelling(const struct vcpu *next)
 {
 	const struct domain *nextd = next ? next->domain : NULL;
+	bool enable_cpuid_faulting;
 
-	if (cpu_has_cpuid_faulting) {
+	if (cpu_has_cpuid_faulting ||
+	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
 		/*
 		 * No need to alter the faulting setting if we are switching
 		 * to idle; it won't affect any code running in idle context.
@@ -190,12 +191,18 @@ void ctxt_switch_levelling(const struct vcpu *next)
 		 * an interim escape hatch in the form of
 		 * `dom0=no-cpuid-faulting` to restore the older behaviour.
 		 */
-		set_cpuid_faulting(nextd && (opt_dom0_cpuid_faulting ||
-					     !is_control_domain(nextd) ||
-					     !is_pv_domain(nextd)) &&
-				   (is_pv_domain(nextd) ||
-				    next->arch.msrs->
-				    misc_features_enables.cpuid_faulting));
+		enable_cpuid_faulting = nextd && (opt_dom0_cpuid_faulting ||
+						  !is_control_domain(nextd) ||
+						  !is_pv_domain(nextd)) &&
+					(is_pv_domain(nextd) ||
+					 next->arch.msrs->
+					 misc_features_enables.cpuid_faulting);
+
+		if (cpu_has_cpuid_faulting)
+			set_cpuid_faulting(enable_cpuid_faulting);
+		else
+			amd_set_cpuid_user_dis(enable_cpuid_faulting);
+
 		return;
 	}
 
@@ -404,6 +411,17 @@ static void generic_identify(struct cpuinfo_x86 *c)
 		c->apicid = phys_pkg_id((ebx >> 24) & 0xFF, 0);
 	c->phys_proc_id = c->apicid;
 
+	eax = cpuid_eax(0x80000000);
+	if ((eax >> 16) == 0x8000)
+		c->extended_cpuid_level = eax;
+
+	/*
+	 * These AMD-defined flags are out of place, but we need
+	 * them early for the CPUID faulting probe code
+	 */
+	if (c->extended_cpuid_level >= 0x80000021)
+		c->x86_capability[FEATURESET_e21a] = cpuid_eax(0x80000021);
+
 	if (this_cpu->c_early_init)
 		this_cpu->c_early_init(c);
 
@@ -420,10 +438,6 @@ static void generic_identify(struct cpuinfo_x86 *c)
 	     (cpuid_ecx(CPUID_PM_LEAF) & CPUID6_ECX_APERFMPERF_CAPABILITY) )
 		__set_bit(X86_FEATURE_APERFMPERF, c->x86_capability);
 
-	eax = cpuid_eax(0x80000000);
-	if ((eax >> 16) == 0x8000)
-		c->extended_cpuid_level = eax;
-
 	/* AMD-defined flags: level 0x80000001 */
 	if (c->extended_cpuid_level >= 0x80000001)
 		cpuid(0x80000001, &tmp, &tmp,
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index a04414ba1d..bbe7b7d1ce 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -233,8 +233,11 @@ static void __init noinline intel_init_levelling(void)
 	 * MSRs may be emulated though, so we allow it in that case.
 	 */
 	if ((boot_cpu_data.x86 != 0xf || cpu_has_hypervisor) &&
-	    probe_cpuid_faulting())
+	    probe_cpuid_faulting()) {
+		expected_levelling_cap |= LCAP_faulting;
+		levelling_caps |= LCAP_faulting;
 		return;
+	}
 
 	probe_masking_msrs();
 
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index a975d3de26..09ee52dc1c 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -155,5 +155,6 @@ extern bool amd_legacy_ssbd;
 extern bool amd_virt_spec_ctrl;
 bool amd_setup_legacy_ssbd(void);
 void amd_set_legacy_ssbd(bool enable);
+void amd_set_cpuid_user_dis(bool enable);
 
 #endif /* __AMD_H__ */
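The read-compare-XOR pattern used by amd_set_cpuid_user_dis() in the diff
above can be exercised in a standalone sketch. The fake MSR variable and
bit position below are illustrative stand-ins (the real code operates on
MSR_K8_HWCR and K8_HWCR_CPUID_USER_DIS via rdmsrl/wrmsrl); the point is
the idiom: skip the expensive MSR write when the bit already has the
desired value, otherwise flip exactly that one bit.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define FAKE_CPUID_USER_DIS (1ULL << 35)   /* illustrative bit position */

static uint64_t fake_hwcr;                 /* stands in for the HWCR MSR */
static unsigned int wrmsr_count;           /* counts (costly) MSR writes */

static void set_cpuid_user_dis(bool enable)
{
    const uint64_t bit = FAKE_CPUID_USER_DIS;
    uint64_t val = fake_hwcr;              /* rdmsrl() equivalent */

    if (!!(val & bit) == enable)           /* already in the desired state? */
        return;                            /* then avoid the write entirely */

    val ^= bit;                            /* flip only the one bit */
    fake_hwcr = val;                       /* wrmsrl() equivalent */
    wrmsr_count++;
}
```

The `!!(val & bit) == enable` comparison normalizes the masked value to 0
or 1 so it can be compared against the bool directly; the XOR then toggles
the bit without disturbing any other HWCR contents.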
Because CpuIdUserDis is reported in CPUID itself, the extended leaf
containing that bit must be retrieved before calling c_early_init().

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v2:
  * Style fixes
  * MSR index inlined in rdmsr/wrmsr
  * Swapped Intel's conditional guard so typically true condition goes first
  * Factored the vendor-specific non-functional changes into another patch
---
 xen/arch/x86/cpu/amd.c         | 20 ++++++++++++++++-
 xen/arch/x86/cpu/common.c      | 40 +++++++++++++++++++++++-----------
 xen/arch/x86/cpu/intel.c       |  5 ++++-
 xen/arch/x86/include/asm/amd.h |  1 +
 4 files changed, 51 insertions(+), 15 deletions(-)
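The enable/disable predicate that the patch hoists into the
enable_cpuid_faulting variable in ctxt_switch_levelling() can be sketched
standalone. The struct and field names below are simplified stand-ins for
the real domain/vcpu state (is_control_domain(), is_pv_domain(), and the
guest's MISC_FEATURES_ENABLES.cpuid_faulting MSR bit), not Xen's actual
types.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct dom {
    bool is_pv;
    bool is_control;           /* dom0 */
    bool msr_cpuid_faulting;   /* guest-requested faulting (HVM path) */
};

/* Command-line escape hatch, `dom0=no-cpuid-faulting` in the real code. */
static bool opt_dom0_cpuid_faulting = true;

static bool want_faulting(const struct dom *d)
{
    /* NULL models switching to idle: never enable there. */
    return d &&
           (opt_dom0_cpuid_faulting || !d->is_control || !d->is_pv) &&
           (d->is_pv || d->msr_cpuid_faulting);
}
```

With the patch applied, this single computed value feeds either
set_cpuid_faulting() (Intel-style faulting) or amd_set_cpuid_user_dis()
(AMD CpuidUserDis), which is exactly why the patch factors it out of the
call expression.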