| Message ID | 20221125040604.5051-12-weijiang.yang@intel.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Introduce Architectural LBR for vPMU |
On Thu, Nov 24, 2022, Yang Weijiang wrote:
> From: Like Xu <like.xu@linux.intel.com>
>
> On processors supporting XSAVES and XRSTORS, Architectural LBR XSAVE
> support is enumerated from CPUID.(EAX=0DH, ECX=1):ECX[bit 15].
> The detailed sub-leaf for Arch LBR is enumerated in CPUID.(0DH, 0FH).
>
> XSAVES provides a faster means than RDMSR for guest to read all LBRs.
> When guest IA32_XSS[bit 15] is set, the Arch LBR state can be saved using
> XSAVES and restored by XRSTORS with the appropriate RFBM.
>
> Signed-off-by: Like Xu <like.xu@linux.intel.com>
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 4 ++++
>  arch/x86/kvm/x86.c     | 2 +-
>  2 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 359da38a19a1..3bc892e8cf7a 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7733,6 +7733,10 @@ static __init void vmx_set_cpu_caps(void)
>  		kvm_cpu_cap_check_and_set(X86_FEATURE_DS);
>  		kvm_cpu_cap_check_and_set(X86_FEATURE_DTES64);
>  	}
> +	if (!cpu_has_vmx_arch_lbr()) {
> +		kvm_cpu_cap_clear(X86_FEATURE_ARCH_LBR);

No, this needs to be opt-in, not opt-out. I.e. omit the flag from common CPUID
code and set it if and only if it's fully supported. It's not out of the realm
of possibilities that AMD might want to support arch LBRs, at which point those
CPUs would explode.

> +		kvm_caps.supported_xss &= ~XFEATURE_MASK_LBR;
> +	}
>
>  	if (!enable_pmu)
>  		kvm_cpu_cap_clear(X86_FEATURE_PDCM);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 889be0c9176d..38df08d9d0cb 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -217,7 +217,7 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
>  				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
>  				| XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)
>
> -#define KVM_SUPPORTED_XSS	0
> +#define KVM_SUPPORTED_XSS	XFEATURE_MASK_LBR
>
>  u64 __read_mostly host_efer;
>  EXPORT_SYMBOL_GPL(host_efer);
> --
> 2.27.0
On 1/28/2023 6:07 AM, Sean Christopherson wrote:
> On Thu, Nov 24, 2022, Yang Weijiang wrote:
>> From: Like Xu <like.xu@linux.intel.com>
>>
>> On processors supporting XSAVES and XRSTORS, Architectural LBR XSAVE
>> support is enumerated from CPUID.(EAX=0DH, ECX=1):ECX[bit 15].
>> The detailed sub-leaf for Arch LBR is enumerated in CPUID.(0DH, 0FH).
>>
>> XSAVES provides a faster means than RDMSR for guest to read all LBRs.
>> When guest IA32_XSS[bit 15] is set, the Arch LBR state can be saved using
>> XSAVES and restored by XRSTORS with the appropriate RFBM.
>>
>> Signed-off-by: Like Xu <like.xu@linux.intel.com>
>> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
>> ---
>>  arch/x86/kvm/vmx/vmx.c | 4 ++++
>>  arch/x86/kvm/x86.c     | 2 +-
>>  2 files changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index 359da38a19a1..3bc892e8cf7a 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -7733,6 +7733,10 @@ static __init void vmx_set_cpu_caps(void)
>>  		kvm_cpu_cap_check_and_set(X86_FEATURE_DS);
>>  		kvm_cpu_cap_check_and_set(X86_FEATURE_DTES64);
>>  	}
>> +	if (!cpu_has_vmx_arch_lbr()) {
>> +		kvm_cpu_cap_clear(X86_FEATURE_ARCH_LBR);
> No, this needs to be opt-in, not opt-out. I.e. omit the flag from common CPUID
> code and set it if and only if it's fully supported. It's not out of the realm
> of possibilities that AMD might want to support arch LBRs, at which point those
> CPUs would explode.

Will modify this patch.

>
>> +		kvm_caps.supported_xss &= ~XFEATURE_MASK_LBR;
>> +	}
>>
>>  	if (!enable_pmu)
>>  		kvm_cpu_cap_clear(X86_FEATURE_PDCM);
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 889be0c9176d..38df08d9d0cb 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -217,7 +217,7 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
>>  				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
>>  				| XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)
>>
>> -#define KVM_SUPPORTED_XSS	0
>> +#define KVM_SUPPORTED_XSS	XFEATURE_MASK_LBR
>>
>>  u64 __read_mostly host_efer;
>>  EXPORT_SYMBOL_GPL(host_efer);
>> --
>> 2.27.0
>>
```diff
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 359da38a19a1..3bc892e8cf7a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7733,6 +7733,10 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_check_and_set(X86_FEATURE_DS);
 		kvm_cpu_cap_check_and_set(X86_FEATURE_DTES64);
 	}
+	if (!cpu_has_vmx_arch_lbr()) {
+		kvm_cpu_cap_clear(X86_FEATURE_ARCH_LBR);
+		kvm_caps.supported_xss &= ~XFEATURE_MASK_LBR;
+	}
 
 	if (!enable_pmu)
 		kvm_cpu_cap_clear(X86_FEATURE_PDCM);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 889be0c9176d..38df08d9d0cb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -217,7 +217,7 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
 				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
 				| XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)
 
-#define KVM_SUPPORTED_XSS	0
+#define KVM_SUPPORTED_XSS	XFEATURE_MASK_LBR
 
 u64 __read_mostly host_efer;
 EXPORT_SYMBOL_GPL(host_efer);
```