Message ID: 1454952370-31432-1-git-send-email-andrew.cooper3@citrix.com (mailing list archive)
State: New, archived
>>> On 08.02.16 at 18:26, <andrew.cooper3@citrix.com> wrote:
This fiddles with behavior on AMD only, yet it's not obvious why this
couldn't be done in vendor independent code (it should, afaict, be
benign for Intel).
Jan
On 17/02/16 09:02, Jan Beulich wrote:
>>>> On 08.02.16 at 18:26, <andrew.cooper3@citrix.com> wrote:
> This fiddles with behavior on AMD only, yet it's not obvious why this
> couldn't be done in vendor independent code (it should, afaict, be
> benign for Intel).

AMD and Intel levelling are fundamentally different.

The former are override MSRs with some quirks when it comes to the magic
bits, while the latter are strict masks which take effect before the
magic bits are folded in.

~Andrew
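
To make the distinction concrete, the semantics described above can be
modelled with a small standalone C sketch. This is only an illustration of
the thread's description, not of documented hardware behaviour; the FEAT_*
constants and the helper functions are invented for the example:

#include <stdint.h>
#include <stdio.h>

#define FEAT_XSAVE    (1u << 26)
#define FEAT_OSXSAVE  (1u << 27)  /* "magic" bit, fast-forwarded from CR4.OSXSAVE */

/*
 * Intel-style levelling: the mask is a strict AND, applied before the
 * magic bits are folded in from runtime state, so the mask itself never
 * needs to carry the magic bits.
 */
static uint32_t intel_visible(uint32_t hw, uint32_t mask, uint32_t runtime_magic)
{
    return (hw & mask) | runtime_magic;
}

/*
 * AMD-style levelling: the MSR is an override - what is written is what
 * the guest sees, except that the magic bits act as enables for a
 * fast-forward from runtime state.  If they are clear in the override,
 * the guest can never see them, which is why the patch ORs them in.
 */
static uint32_t amd_visible(uint32_t override, uint32_t magic, uint32_t runtime_magic)
{
    return (override & ~magic) | (override & magic & runtime_magic);
}

int main(void)
{
    uint32_t hw = FEAT_XSAVE, mask = FEAT_XSAVE;  /* level to XSAVE only   */
    uint32_t cr4_osxsave = FEAT_OSXSAVE;          /* guest set CR4.OSXSAVE */

    /* Intel: OSXSAVE appears even though the mask doesn't mention it. */
    printf("intel: %#x\n", intel_visible(hw, mask, cr4_osxsave));

    /* AMD: a pure-AND override loses OSXSAVE; ORing it back restores it. */
    printf("amd:   %#x\n", amd_visible(mask, FEAT_OSXSAVE, cr4_osxsave));
    printf("amd':  %#x\n", amd_visible(mask | FEAT_OSXSAVE, FEAT_OSXSAVE, cr4_osxsave));
    return 0;
}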
>>> On 17.02.16 at 14:06, <andrew.cooper3@citrix.com> wrote:
> On 17/02/16 09:02, Jan Beulich wrote:
>>>>> On 08.02.16 at 18:26, <andrew.cooper3@citrix.com> wrote:
>> This fiddles with behavior on AMD only, yet it's not obvious why this
>> couldn't be done in vendor independent code (it should, afaict, be
>> benign for Intel).
>
> AMD and Intel levelling are fundamentally different.
>
> The former are override MSRs with some quirks when it comes to the magic
> bits, while the latter are strict masks which take effect before the
> magic bits are folded in.

That's what you've derived from observations aiui, not something
written down somewhere. As long as the final effect is the same, I
still think doing such adjustments in vendor independent code would
be better. But I won't insist.

Jan
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index deb98ea..3e345fe 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -265,7 +265,15 @@ static void __init noinline amd_init_levelling(void)
 			edx &= m->edx;
 		}
 
-		cpuidmask_defaults._1cd &= ((uint64_t)ecx << 32) | edx;
+		/* Fast-forward bits - Must be set. */
+		if (ecx & cpufeat_mask(X86_FEATURE_XSAVE))
+			ecx |= cpufeat_mask(X86_FEATURE_OSXSAVE);
+		edx |= cpufeat_mask(X86_FEATURE_APIC);
+
+		/* Allow the HYPERVISOR bit to be set via guest policy. */
+		ecx |= cpufeat_mask(X86_FEATURE_HYPERVISOR);
+
+		cpuidmask_defaults._1cd = ((uint64_t)ecx << 32) | edx;
 	}
 
 	if ((levelling_caps & LCAP_e1cd) == LCAP_e1cd) {
@@ -281,7 +289,10 @@ static void __init noinline amd_init_levelling(void)
 			edx &= m->ext_edx;
 		}
 
-		cpuidmask_defaults.e1cd &= ((uint64_t)ecx << 32) | edx;
+		/* Fast-forward bits - Must be set. */
+		edx |= cpufeat_mask(X86_FEATURE_APIC);
+
+		cpuidmask_defaults.e1cd = ((uint64_t)ecx << 32) | edx;
 	}
 
 	if ((levelling_caps & LCAP_7ab0) == LCAP_7ab0) {
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index f06bc02..613bb5c 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -103,6 +103,15 @@ static void update_domain_cpuid_info(struct domain *d,
 
         case X86_VENDOR_AMD:
             mask &= ((uint64_t)ecx << 32) | edx;
+
+            /* Fast-forward bits - Must be set. */
+            if (ecx & cpufeat_mask(X86_FEATURE_XSAVE))
+                ecx = cpufeat_mask(X86_FEATURE_OSXSAVE);
+            else
+                ecx = 0;
+            edx = cpufeat_mask(X86_FEATURE_APIC);
+
+            mask |= ((uint64_t)ecx << 32) | edx;
             break;
         }
 
@@ -170,6 +179,12 @@ static void update_domain_cpuid_info(struct domain *d,
 
         case X86_VENDOR_AMD:
             mask &= ((uint64_t)ecx << 32) | edx;
+
+            /* Fast-forward bits - Must be set. */
+            ecx = 0;
+            edx = cpufeat_mask(X86_FEATURE_APIC);
+
+            mask |= ((uint64_t)ecx << 32) | edx;
             break;
         }
 
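
For reference, the leaf-1 logic in the domctl.c hunk above condenses into
the following standalone helper. This is a sketch only: the MASK_*
constants and amd_leaf1_mask() are stand-ins invented for illustration (the
real code uses Xen's cpufeat_mask() values); the ECX-in-high-32-bits,
EDX-in-low-32-bits packing follows the patch:

#include <stdint.h>
#include <stdio.h>

#define MASK_APIC     (1u << 9)    /* CPUID.1.EDX[9]  */
#define MASK_XSAVE    (1u << 26)   /* CPUID.1.ECX[26] */
#define MASK_OSXSAVE  (1u << 27)   /* CPUID.1.ECX[27] */

/*
 * AND the guest policy into the levelling mask, then force the
 * fast-forward bits back on: OSXSAVE only if the policy offers XSAVE,
 * APIC unconditionally.  ECX lives in the high 32 bits, EDX in the low.
 */
static uint64_t amd_leaf1_mask(uint64_t mask, uint32_t ecx, uint32_t edx)
{
    mask &= ((uint64_t)ecx << 32) | edx;

    ecx = (ecx & MASK_XSAVE) ? MASK_OSXSAVE : 0;
    edx = MASK_APIC;

    return mask | (((uint64_t)ecx << 32) | edx);
}

int main(void)
{
    /* A policy offering XSAVE: OSXSAVE and APIC get forced into the mask. */
    printf("%#llx\n", (unsigned long long)amd_leaf1_mask(~0ull, MASK_XSAVE, 0));
    return 0;
}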