Message ID | 20230421134615.62539-10-weijiang.yang@intel.com
---|---
State | New, archived
Series | Enable CET Virtualization
On 4/21/2023 9:46 PM, Yang Weijiang wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
>
> Load the guest's FPU state if userspace is accessing MSRs whose values are
> managed by XSAVES so that the MSR helpers, e.g. kvm_{get,set}_xsave_msr(),

So far, kvm_{get,set}_xsave_msr() is not introduced yet.
IMO, it is a bit confusing to understand the whole picture without the
following patches.
Maybe add some description or adjust the order?

> can simply do {RD,WR}MSR to access the guest's value.
>
> If new feature MSRs supported in XSS are passed through to the guest they
> are saved and restored by XSAVES/XRSTORS, i.e. in the guest's FPU state.
>
> Because __msr_io() is also used for the KVM_GET_MSRS device ioctl(),
> explicitly check @vcpu is non-null before attempting to load guest state.
> The XSS supporting MSRs cannot be retrieved via the device ioctl() without
> loading guest FPU state (which doesn't exist).
>
> Note that guest_cpuid_has() is not queried as host userspace is allowed
> to access MSRs that have not been exposed to the guest, e.g. it might do
> KVM_SET_MSRS prior to KVM_SET_CPUID2.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Co-developed-by: Yang Weijiang <weijiang.yang@intel.com>
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/kvm/x86.c | 29 ++++++++++++++++++++++++++++-
>  1 file changed, 28 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d2975ca96ac5..7788646bbf1f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -130,6 +130,9 @@ static int __set_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
>  static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
>  
>  static DEFINE_MUTEX(vendor_module_lock);
> +static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
> +static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
> +
>  struct kvm_x86_ops kvm_x86_ops __read_mostly;
>  
>  #define KVM_X86_OP(func) \
> @@ -4336,6 +4339,21 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  }
>  EXPORT_SYMBOL_GPL(kvm_get_msr_common);
>  
> +static const u32 xsave_msrs[] = {
> +	MSR_IA32_U_CET, MSR_IA32_PL3_SSP,
> +};
> +
> +static bool is_xsaves_msr(u32 index)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(xsave_msrs); i++) {
> +		if (index == xsave_msrs[i])
> +			return true;
> +	}
> +	return false;
> +}
> +
>  /*
>   * Read or write a bunch of msrs. All parameters are kernel addresses.
>   *
> @@ -4346,11 +4364,20 @@ static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
>  		    int (*do_msr)(struct kvm_vcpu *vcpu,
>  				  unsigned index, u64 *data))
>  {
> +	bool fpu_loaded = false;
>  	int i;
>  
> -	for (i = 0; i < msrs->nmsrs; ++i)
> +	for (i = 0; i < msrs->nmsrs; ++i) {
> +		if (vcpu && !fpu_loaded && kvm_caps.supported_xss &&
> +		    is_xsaves_msr(entries[i].index)) {
> +			kvm_load_guest_fpu(vcpu);
> +			fpu_loaded = true;
> +		}
>  		if (do_msr(vcpu, entries[i].index, &entries[i].data))
>  			break;
> +	}
> +	if (fpu_loaded)
> +		kvm_put_guest_fpu(vcpu);
>  
>  	return i;
>  }
On 4/27/2023 11:46 AM, Binbin Wu wrote:
> On 4/21/2023 9:46 PM, Yang Weijiang wrote:
>> From: Sean Christopherson <sean.j.christopherson@intel.com>
>>
>> Load the guest's FPU state if userspace is accessing MSRs whose
>> values are
>> managed by XSAVES so that the MSR helpers, e.g.
>> kvm_{get,set}_xsave_msr(),
> So far, kvm_{get,set}_xsave_msr() is not introduced yet.
> IMO, it is a bit confusing to understand the whole picture without the
> following patches.
> Maybe add some description or adjust the order?

Sure, will add blurbs for the two helpers, thanks!

> [...]
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d2975ca96ac5..7788646bbf1f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -130,6 +130,9 @@ static int __set_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 
 static DEFINE_MUTEX(vendor_module_lock);
+static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
+static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
+
 struct kvm_x86_ops kvm_x86_ops __read_mostly;
 
 #define KVM_X86_OP(func) \
@@ -4336,6 +4339,21 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_get_msr_common);
 
+static const u32 xsave_msrs[] = {
+	MSR_IA32_U_CET, MSR_IA32_PL3_SSP,
+};
+
+static bool is_xsaves_msr(u32 index)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(xsave_msrs); i++) {
+		if (index == xsave_msrs[i])
+			return true;
+	}
+	return false;
+}
+
 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
  *
@@ -4346,11 +4364,20 @@ static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
 		    int (*do_msr)(struct kvm_vcpu *vcpu,
 				  unsigned index, u64 *data))
 {
+	bool fpu_loaded = false;
 	int i;
 
-	for (i = 0; i < msrs->nmsrs; ++i)
+	for (i = 0; i < msrs->nmsrs; ++i) {
+		if (vcpu && !fpu_loaded && kvm_caps.supported_xss &&
+		    is_xsaves_msr(entries[i].index)) {
+			kvm_load_guest_fpu(vcpu);
+			fpu_loaded = true;
+		}
 		if (do_msr(vcpu, entries[i].index, &entries[i].data))
 			break;
+	}
+	if (fpu_loaded)
+		kvm_put_guest_fpu(vcpu);
 
 	return i;
 }