
KVM: SVM: Fix svm_xsaves_supported

Message ID 20190904001422.11809-1-aaronlewis@google.com (mailing list archive)
State New, archived
Series: KVM: SVM: Fix svm_xsaves_supported

Commit Message

Aaron Lewis Sept. 4, 2019, 12:14 a.m. UTC
AMD allows guests to execute XSAVES/XRSTORS if supported by the host.
This differs from Intel, which has an additional control bit that
determines whether the guest may use XSAVES/XRSTORS, as well as
intercept bits that let the host trap the instructions. AMD has none
of that, not even an intercept mechanism; it simply allows the guest
to execute XSAVES/XRSTORS whenever the host supports them.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 arch/x86/kvm/svm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Vitaly Kuznetsov Sept. 4, 2019, 4:51 p.m. UTC | #1
Aaron Lewis <aaronlewis@google.com> writes:

> AMD allows guests to execute XSAVES/XRSTORS if supported by the host.  This is different than Intel as they have an additional control bit that determines if XSAVES/XRSTORS can be used by the guest. Intel also has intercept bits that might prevent the guest from intercepting the instruction as well. AMD has none of that, not even an Intercept mechanism.  AMD simply allows XSAVES/XRSTORS to be executed by the guest if also supported by the host.
>

WARNING: Possible unwrapped commit description (prefer a maximum 75 chars per line)

> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> ---
>  arch/x86/kvm/svm.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 1f220a85514f..b681a89f4f7e 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -5985,7 +5985,7 @@ static bool svm_mpx_supported(void)
>  
>  static bool svm_xsaves_supported(void)
>  {
> -	return false;
> +	return boot_cpu_has(X86_FEATURE_XSAVES);
>  }
>  
>  static bool svm_umip_emulated(void)

I had a similar patch in my stash when I tried to debug Hyper-V 2016
not being able to boot on KVM. I may have forgotten some important
details, but if I'm not mistaken XSAVES comes paired with MSR_IA32_XSS
and some OSes may actually try to write there, in particular I've
observed Hyper-V 2016 trying to write '0'. Currently, we only support
MSR_IA32_XSS in VMX code; this will need to be extended to SVM.

Currently, VMX code only supports writing '0' to MSR_IA32_XSS:

	case MSR_IA32_XSS:
		if (!vmx_xsaves_supported() ||
		    (!msr_info->host_initiated &&
		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
			return 1;
		/*
		 * The only supported bit as of Skylake is bit 8, but
		 * it is not supported on KVM.
		 */
		if (data != 0)
			return 1;


we will probably need the same limitation for SVM; however, I'd vote for
creating separate kvm_x86_ops->set_xss() implementations.
Aaron Lewis Sept. 4, 2019, 7:03 p.m. UTC | #2
On Wed, Sep 4, 2019 at 9:51 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>
> Aaron Lewis <aaronlewis@google.com> writes:
>
> > AMD allows guests to execute XSAVES/XRSTORS if supported by the host.  This is different than Intel as they have an additional control bit that determines if XSAVES/XRSTORS can be used by the guest. Intel also has intercept bits that might prevent the guest from intercepting the instruction as well. AMD has none of that, not even an Intercept mechanism.  AMD simply allows XSAVES/XRSTORS to be executed by the guest if also supported by the host.
> >
>
> WARNING: Possible unwrapped commit description (prefer a maximum 75 chars per line)
>
> > Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> > ---
> >  arch/x86/kvm/svm.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> > index 1f220a85514f..b681a89f4f7e 100644
> > --- a/arch/x86/kvm/svm.c
> > +++ b/arch/x86/kvm/svm.c
> > @@ -5985,7 +5985,7 @@ static bool svm_mpx_supported(void)
> >
> >  static bool svm_xsaves_supported(void)
> >  {
> > -     return false;
> > +     return boot_cpu_has(X86_FEATURE_XSAVES);
> >  }
> >
> >  static bool svm_umip_emulated(void)
>
> I had a similar patch in my stash when I tried to debug Hyper-V 2016
> not being able to boot on KVM. I may have forgotten some important
> details, but if I'm not mistaken XSAVES comes paired with MSR_IA32_XSS
> and some OSes may actually try to write there, in particular I've
> observed Hyper-V 2016 trying to write '0'. Currently, we only support
> MSR_IA32_XSS in vmx code, this will need to be extended to SVM.
>
> Currently, VMX code only supports writing '0' to MSR_IA32_XSS:
>
>         case MSR_IA32_XSS:
>                 if (!vmx_xsaves_supported() ||
>                     (!msr_info->host_initiated &&
>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
>                         return 1;
>                 /*
>                  * The only supported bit as of Skylake is bit 8, but
>                  * it is not supported on KVM.
>                  */
>                 if (data != 0)
>                         return 1;
>
>
> we will probably need the same limitation for SVM, however, I'd vote for
> creating separate kvm_x86_ops->set_xss() implementations.
>
> --
> Vitaly

Fixed the unwrapped description in v2.

As for extending VMX behavior to SVM for MSR_IA32_XSS: I will do this
in a follow-up patch.  Thanks for calling this out.
Vitaly Kuznetsov Sept. 5, 2019, 12:46 p.m. UTC | #3
Aaron Lewis <aaronlewis@google.com> writes:

> On Wed, Sep 4, 2019 at 9:51 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>>
>> Aaron Lewis <aaronlewis@google.com> writes:
>>
>> > AMD allows guests to execute XSAVES/XRSTORS if supported by the host.  This is different than Intel as they have an additional control bit that determines if XSAVES/XRSTORS can be used by the guest. Intel also has intercept bits that might prevent the guest from intercepting the instruction as well. AMD has none of that, not even an Intercept mechanism.  AMD simply allows XSAVES/XRSTORS to be executed by the guest if also supported by the host.
>> >
>>
>> WARNING: Possible unwrapped commit description (prefer a maximum 75 chars per line)
>>
>> > Signed-off-by: Aaron Lewis <aaronlewis@google.com>
>> > ---
>> >  arch/x86/kvm/svm.c | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>> > index 1f220a85514f..b681a89f4f7e 100644
>> > --- a/arch/x86/kvm/svm.c
>> > +++ b/arch/x86/kvm/svm.c
>> > @@ -5985,7 +5985,7 @@ static bool svm_mpx_supported(void)
>> >
>> >  static bool svm_xsaves_supported(void)
>> >  {
>> > -     return false;
>> > +     return boot_cpu_has(X86_FEATURE_XSAVES);
>> >  }
>> >
>> >  static bool svm_umip_emulated(void)
>>
>> I had a similar patch in my stash when I tried to debug Hyper-V 2016
>> not being able to boot on KVM. I may have forgotten some important
>> details, but if I'm not mistaken XSAVES comes paired with MSR_IA32_XSS
>> and some OSes may actually try to write there, in particular I've
>> observed Hyper-V 2016 trying to write '0'. Currently, we only support
>> MSR_IA32_XSS in vmx code, this will need to be extended to SVM.
>>
>> Currently, VMX code only supports writing '0' to MSR_IA32_XSS:
>>
>>         case MSR_IA32_XSS:
>>                 if (!vmx_xsaves_supported() ||
>>                     (!msr_info->host_initiated &&
>>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
>>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
>>                         return 1;
>>                 /*
>>                  * The only supported bit as of Skylake is bit 8, but
>>                  * it is not supported on KVM.
>>                  */
>>                 if (data != 0)
>>                         return 1;
>>
>>
>> we will probably need the same limitation for SVM, however, I'd vote for
>> creating separate kvm_x86_ops->set_xss() implementations.
>>
>> --
>> Vitaly
>
> Fixed the unwrapped description in v2.
>
> As for extending VMX behavior to SVM for MSR_IA32_XSS: I will do this
> in a follow-up patch.  Thanks for calling this out.

Doing this in a separate patch is fine; however, I think that patch
should come before we start announcing XSAVES support on AMD: both
MSR_IA32_XSS and the XSAVES/XRSTORS instructions are enumerated by
CPUID.(EAX=0DH, ECX=1).EAX[bit 3], so an unprepared guest may try
accessing MSR_IA32_XSS and get a #GP.
Jim Mattson Sept. 6, 2019, 5:55 p.m. UTC | #4
On Wed, Sep 4, 2019 at 9:51 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:

> Currently, VMX code only supports writing '0' to MSR_IA32_XSS:
>
>         case MSR_IA32_XSS:
>                 if (!vmx_xsaves_supported() ||
>                     (!msr_info->host_initiated &&
>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
>                         return 1;
>                 /*
>                  * The only supported bit as of Skylake is bit 8, but
>                  * it is not supported on KVM.
>                  */
>                 if (data != 0)
>                         return 1;
>
>
> we will probably need the same limitation for SVM, however, I'd vote for
> creating separate kvm_x86_ops->set_xss() implementations.

I hope separate implementations are unnecessary. The allowed IA32_XSS
bits should be derivable from guest_cpuid_has() in a
vendor-independent way. Otherwise, the CPU vendors have messed up. :-)

At present, we use the MSR-load area to swap guest/host values of
IA32_XSS on Intel (when the host and guest values differ), but it
seems to me that IA32_XSS and %xcr0 should be swapped at the same
time, in kvm_load_guest_xcr0/kvm_put_guest_xcr0. This potentially adds
an additional L1 WRMSR VM-exit to every emulated VM-entry or VM-exit
for nVMX, but since the host currently sets IA32_XSS to 0 and we only
allow the guest to set IA32_XSS to 0, we can probably worry about this
later.

I have to say, this is an awful lot of effort for an MSR that's never used!
Vitaly Kuznetsov Sept. 9, 2019, 8:54 a.m. UTC | #5
Jim Mattson <jmattson@google.com> writes:

> On Wed, Sep 4, 2019 at 9:51 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>
>> Currently, VMX code only supports writing '0' to MSR_IA32_XSS:
>>
>>         case MSR_IA32_XSS:
>>                 if (!vmx_xsaves_supported() ||
>>                     (!msr_info->host_initiated &&
>>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
>>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
>>                         return 1;
>>                 /*
>>                  * The only supported bit as of Skylake is bit 8, but
>>                  * it is not supported on KVM.
>>                  */
>>                 if (data != 0)
>>                         return 1;
>>
>>
>> we will probably need the same limitation for SVM, however, I'd vote for
>> creating separate kvm_x86_ops->set_xss() implementations.
>
> I hope separate implementations are unnecessary. The allowed IA32_XSS
> bits should be derivable from guest_cpuid_has() in a
> vendor-independent way. Otherwise, the CPU vendors have messed up. :-)
>
> At present, we use the MSR-load area to swap guest/host values of
> IA32_XSS on Intel (when the host and guest values differ), but it
> seems to me that IA32_XSS and %xcr0 should be swapped at the same
> time, in kvm_load_guest_xcr0/kvm_put_guest_xcr0. This potentially adds
> an additional L1 WRMSR VM-exit to every emulated VM-entry or VM-exit
> for nVMX, but since the host currently sets IA32_XSS to 0 and we only
> allow the guest to set IA32_XSS to 0, we can probably worry about this
> later.

Yeah, I was suggesting splitting the implementations as future-proofing,
but a comment like "This ought to be 0 for now" would also do :-)

>
> I have to say, this is an awful lot of effort for an MSR that's never used!

Agreed :-)
Moger, Babu Oct. 3, 2019, 4:02 p.m. UTC | #6
On 9/9/19 3:54 AM, Vitaly Kuznetsov wrote:
> Jim Mattson <jmattson@google.com> writes:
> 
>> On Wed, Sep 4, 2019 at 9:51 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>>
>>> Currently, VMX code only supports writing '0' to MSR_IA32_XSS:
>>>
>>>         case MSR_IA32_XSS:
>>>                 if (!vmx_xsaves_supported() ||
>>>                     (!msr_info->host_initiated &&
>>>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
>>>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
>>>                         return 1;
>>>                 /*
>>>                  * The only supported bit as of Skylake is bit 8, but
>>>                  * it is not supported on KVM.
>>>                  */
>>>                 if (data != 0)
>>>                         return 1;
>>>
>>>
>>> we will probably need the same limitation for SVM, however, I'd vote for
>>> creating separate kvm_x86_ops->set_xss() implementations.
>>
>> I hope separate implementations are unnecessary. The allowed IA32_XSS
>> bits should be derivable from guest_cpuid_has() in a
>> vendor-independent way. Otherwise, the CPU vendors have messed up. :-)
>>
>> At present, we use the MSR-load area to swap guest/host values of
>> IA32_XSS on Intel (when the host and guest values differ), but it
>> seems to me that IA32_XSS and %xcr0 should be swapped at the same
>> time, in kvm_load_guest_xcr0/kvm_put_guest_xcr0. This potentially adds
>> an additional L1 WRMSR VM-exit to every emulated VM-entry or VM-exit
>> for nVMX, but since the host currently sets IA32_XSS to 0 and we only
>> allow the guest to set IA32_XSS to 0, we can probably worry about this
>> later.
> 
> Yea, I was suggesting to split implementations as a future proof but a
> comment like "This ought to be 0 for now" would also do)

Hi, is anyone actively working on this?

I was trying to expose XSAVES to an AMD QEMU guest and found that we
need to get the above code working first. I can re-post these patches
if that is ok.

Sorry, I don't have the complete background. From what I understand, we
need to add the same check as Intel has for MSR_IA32_XSS in get_msr and
set_msr.

static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
{
..

 case MSR_IA32_XSS:
                if (!vmx_xsaves_supported() ||
                    (!msr_info->host_initiated &&
                     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
                       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
                        return 1;
                msr_info->data = vcpu->arch.ia32_xss;
                break;
..
..
}

static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
{
..
..
  case MSR_IA32_XSS:
                if (!vmx_xsaves_supported() ||
                    (!msr_info->host_initiated &&
                     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
                       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
                        return 1;
                /*
                 * The only supported bit as of Skylake is bit 8, but
                 * it is not supported on KVM.
                 */
                if (data != 0)
                        return 1;
                vcpu->arch.ia32_xss = data;
                if (vcpu->arch.ia32_xss != host_xss)
                        add_atomic_switch_msr(vmx, MSR_IA32_XSS,
                                vcpu->arch.ia32_xss, host_xss, false);
                else
                        clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
                break;
...
}

We probably don't need the last "if & else" check since we only allow
writing 0 for now. Does this look accurate?
Jim Mattson Oct. 3, 2019, 4:15 p.m. UTC | #7
On Thu, Oct 3, 2019 at 9:02 AM Moger, Babu <Babu.Moger@amd.com> wrote:
>
>
>
> On 9/9/19 3:54 AM, Vitaly Kuznetsov wrote:
> > Jim Mattson <jmattson@google.com> writes:
> >
> >> On Wed, Sep 4, 2019 at 9:51 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
> >>
> >>> Currently, VMX code only supports writing '0' to MSR_IA32_XSS:
> >>>
> >>>         case MSR_IA32_XSS:
> >>>                 if (!vmx_xsaves_supported() ||
> >>>                     (!msr_info->host_initiated &&
> >>>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> >>>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
> >>>                         return 1;
> >>>                 /*
> >>>                  * The only supported bit as of Skylake is bit 8, but
> >>>                  * it is not supported on KVM.
> >>>                  */
> >>>                 if (data != 0)
> >>>                         return 1;
> >>>
> >>>
> >>> we will probably need the same limitation for SVM, however, I'd vote for
> >>> creating separate kvm_x86_ops->set_xss() implementations.
> >>
> >> I hope separate implementations are unnecessary. The allowed IA32_XSS
> >> bits should be derivable from guest_cpuid_has() in a
> >> vendor-independent way. Otherwise, the CPU vendors have messed up. :-)
> >>
> >> At present, we use the MSR-load area to swap guest/host values of
> >> IA32_XSS on Intel (when the host and guest values differ), but it
> >> seems to me that IA32_XSS and %xcr0 should be swapped at the same
> >> time, in kvm_load_guest_xcr0/kvm_put_guest_xcr0. This potentially adds
> >> an additional L1 WRMSR VM-exit to every emulated VM-entry or VM-exit
> >> for nVMX, but since the host currently sets IA32_XSS to 0 and we only
> >> allow the guest to set IA32_XSS to 0, we can probably worry about this
> >> later.
> >
> > Yea, I was suggesting to split implementations as a future proof but a
> > comment like "This ought to be 0 for now" would also do)
>
> Hi, Anyone actively working on this?
>
> I was trying to expose xsaves on AMD qemu guest. Found out that we need to
> get this above code working before I can expose xsaves on guest. I can
> re-post these patches if it is ok.
>
> Sorry, I dont have the complete background. From what I understood, we
> need to add the same check as Intel for MSR_IA32_XSS in get_msr and set_msr.
>
> static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> {
> ..
>
>  case MSR_IA32_XSS:
>                 if (!vmx_xsaves_supported() ||
>                     (!msr_info->host_initiated &&
>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
>                         return 1;
>                 msr_info->data = vcpu->arch.ia32_xss;
>                 break;
> ..
> ..
> }
>
> static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> {
> ..
> ..
>   case MSR_IA32_XSS:
>                 if (!vmx_xsaves_supported() ||
>                     (!msr_info->host_initiated &&
>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
>                         return 1;
>                 /*
>                  * The only supported bit as of Skylake is bit 8, but
>                  * it is not supported on KVM.
>                  */
>                 if (data != 0)
>                         return 1;
>                 vcpu->arch.ia32_xss = data;
>                 if (vcpu->arch.ia32_xss != host_xss)
>                         add_atomic_switch_msr(vmx, MSR_IA32_XSS,
>                                 vcpu->arch.ia32_xss, host_xss, false);
>                 else
>                         clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
>                 break;
> ...
> }
>
> We probably don't need last "if & else" check as we are setting it 0 now.
> Does this look accurate?

Aaron is working on it, and I think he's close to being ready to send
out the next revision.
Moger, Babu Oct. 3, 2019, 4:20 p.m. UTC | #8
> -----Original Message-----
> From: Jim Mattson <jmattson@google.com>
> Sent: Thursday, October 3, 2019 11:15 AM
> To: Moger, Babu <Babu.Moger@amd.com>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>; Aaron Lewis
> <aaronlewis@google.com>; kvm list <kvm@vger.kernel.org>; Natarajan,
> Janakarajan <Janakarajan.Natarajan@amd.com>
> Subject: Re: [Patch] KVM: SVM: Fix svm_xsaves_supported
> 
> On Thu, Oct 3, 2019 at 9:02 AM Moger, Babu <Babu.Moger@amd.com> wrote:
> >
> >
> >
> > On 9/9/19 3:54 AM, Vitaly Kuznetsov wrote:
> > > Jim Mattson <jmattson@google.com> writes:
> > >
> > >> On Wed, Sep 4, 2019 at 9:51 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
> > >>
> > >>> Currently, VMX code only supports writing '0' to MSR_IA32_XSS:
> > >>>
> > >>>         case MSR_IA32_XSS:
> > >>>                 if (!vmx_xsaves_supported() ||
> > >>>                     (!msr_info->host_initiated &&
> > >>>                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> > >>>                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
> > >>>                         return 1;
> > >>>                 /*
> > >>>                  * The only supported bit as of Skylake is bit 8, but
> > >>>                  * it is not supported on KVM.
> > >>>                  */
> > >>>                 if (data != 0)
> > >>>                         return 1;
> > >>>
> > >>>
> > >>> we will probably need the same limitation for SVM, however, I'd vote for
> > >>> creating separate kvm_x86_ops->set_xss() implementations.
> > >>
> > >> I hope separate implementations are unnecessary. The allowed IA32_XSS
> > >> bits should be derivable from guest_cpuid_has() in a
> > >> vendor-independent way. Otherwise, the CPU vendors have messed up. :-)
> > >>
> > >> At present, we use the MSR-load area to swap guest/host values of
> > >> IA32_XSS on Intel (when the host and guest values differ), but it
> > >> seems to me that IA32_XSS and %xcr0 should be swapped at the same
> > >> time, in kvm_load_guest_xcr0/kvm_put_guest_xcr0. This potentially adds
> > >> an additional L1 WRMSR VM-exit to every emulated VM-entry or VM-exit
> > >> for nVMX, but since the host currently sets IA32_XSS to 0 and we only
> > >> allow the guest to set IA32_XSS to 0, we can probably worry about this
> > >> later.
> > >
> > > Yea, I was suggesting to split implementations as a future proof but a
> > > comment like "This ought to be 0 for now" would also do)
> >
> > Hi, Anyone actively working on this?
> >
> > I was trying to expose xsaves on AMD qemu guest. Found out that we need to
> > get this above code working before I can expose xsaves on guest. I can
> > re-post these patches if it is ok.
> >
> > Sorry, I dont have the complete background. From what I understood, we
> > need to add the same check as Intel for MSR_IA32_XSS in get_msr and set_msr.
> >
> > static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > {
> > ..
> >
> >  case MSR_IA32_XSS:
> >                 if (!vmx_xsaves_supported() ||
> >                     (!msr_info->host_initiated &&
> >                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> >                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
> >                         return 1;
> >                 msr_info->data = vcpu->arch.ia32_xss;
> >                 break;
> > ..
> > ..
> > }
> >
> > static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > {
> > ..
> > ..
> >   case MSR_IA32_XSS:
> >                 if (!vmx_xsaves_supported() ||
> >                     (!msr_info->host_initiated &&
> >                      !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> >                        guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
> >                         return 1;
> >                 /*
> >                  * The only supported bit as of Skylake is bit 8, but
> >                  * it is not supported on KVM.
> >                  */
> >                 if (data != 0)
> >                         return 1;
> >                 vcpu->arch.ia32_xss = data;
> >                 if (vcpu->arch.ia32_xss != host_xss)
> >                         add_atomic_switch_msr(vmx, MSR_IA32_XSS,
> >                                 vcpu->arch.ia32_xss, host_xss, false);
> >                 else
> >                         clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
> >                 break;
> > ...
> > }
> >
> > We probably don't need last "if & else" check as we are setting it 0 now.
> > Does this look accurate?
> 
> Aaron is working on it, and I think he's close to being ready to send
> out the next revision.

Ok. Thanks.

Patch

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1f220a85514f..b681a89f4f7e 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5985,7 +5985,7 @@  static bool svm_mpx_supported(void)
 
 static bool svm_xsaves_supported(void)
 {
-	return false;
+	return boot_cpu_has(X86_FEATURE_XSAVES);
 }
 
 static bool svm_umip_emulated(void)