Message ID | 20240219074733.122080-18-weijiang.yang@intel.com (mailing list archive)
---|---
State | New, archived
Series | Enable CET Virtualization
On Sun, Feb 18, 2024, Yang Weijiang wrote:
> Add CET MSRs to the list of MSRs reported to userspace if the feature,
> i.e. IBT or SHSTK, associated with the MSRs is supported by KVM.
>
> SSP can only be read via RDSSP. Writing even requires destructive and
> potentially faulting operations such as SAVEPREVSSP/RSTORSSP or
> SETSSBSY/CLRSSBSY. Let the host use a pseudo-MSR that is just a wrapper
> for the GUEST_SSP field of the VMCS.
>
> Suggested-by: Chao Gao <chao.gao@intel.com>
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/include/uapi/asm/kvm_para.h |  1 +
>  arch/x86/kvm/vmx/vmx.c               |  2 ++
>  arch/x86/kvm/x86.c                   | 18 ++++++++++++++++++
>  3 files changed, 21 insertions(+)
>
> diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
> index 605899594ebb..9d08c0bec477 100644
> --- a/arch/x86/include/uapi/asm/kvm_para.h
> +++ b/arch/x86/include/uapi/asm/kvm_para.h
> @@ -58,6 +58,7 @@
>  #define MSR_KVM_ASYNC_PF_INT  0x4b564d06
>  #define MSR_KVM_ASYNC_PF_ACK  0x4b564d07
>  #define MSR_KVM_MIGRATION_CONTROL  0x4b564d08
> +#define MSR_KVM_SSP  0x4b564d09

We never resolved the conversation from v6[*], but I still agree with Maxim's
view that defining a synthetic MSR, which "steals" an MSR from KVM's MSR address
space, is a bad idea.

And I still also think that KVM_SET_ONE_REG is the best way forward.  Completely
untested, but I think this is all that is needed to wire up KVM_{G,S}ET_ONE_REG
to support MSRs, and carve out room for 250+ other register types, plus room for
more future stuff as needed.

We'll still need a KVM-defined MSR for SSP, but it can be KVM internal, not uAPI,
e.g. the "index" exposed to userspace can simply be '0' for a register type of
KVM_X86_REG_SYNTHETIC_MSR, and then the translated internal index can be any
value that doesn't conflict.
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index ef11aa4cab42..ca2a47a85fa1 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -410,6 +410,16 @@ struct kvm_xcrs {
 	__u64 padding[16];
 };
 
+#define KVM_X86_REG_MSR			(1 << 2)
+#define KVM_X86_REG_SYNTHETIC_MSR	(1 << 3)
+
+struct kvm_x86_reg_id {
+	__u32 index;
+	__u8  type;
+	__u8  rsvd;
+	__u16 rsvd16;
+};
+
 #define KVM_SYNC_X86_REGS      (1UL << 0)
 #define KVM_SYNC_X86_SREGS     (1UL << 1)
 #define KVM_SYNC_X86_EVENTS    (1UL << 2)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 47d9f03b7778..53f2b43b4651 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2244,6 +2244,30 @@ static int do_set_msr(struct kvm_vcpu *vcpu, unsigned index, u64 *data)
 	return kvm_set_msr_ignored_check(vcpu, index, *data, true);
 }
 
+static int kvm_get_one_msr(struct kvm_vcpu *vcpu, u32 msr, u64 __user *value)
+{
+	u64 val;
+	int r;
+
+	r = do_get_msr(vcpu, msr, &val);
+	if (r)
+		return r;
+
+	if (put_user(val, value))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int kvm_set_one_msr(struct kvm_vcpu *vcpu, u32 msr, u64 __user *value)
+{
+	u64 val;
+
+	if (get_user(val, value))
+		return -EFAULT;
+
+	return do_set_msr(vcpu, msr, &val);
+}
+
 #ifdef CONFIG_X86_64
 struct pvclock_clock {
 	int vclock_mode;
@@ -5976,6 +6000,39 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		srcu_read_unlock(&vcpu->kvm->srcu, idx);
 		break;
 	}
+	case KVM_GET_ONE_REG:
+	case KVM_SET_ONE_REG: {
+		struct kvm_x86_reg_id id;
+		struct kvm_one_reg reg;
+		u64 __user *value;
+
+		r = -EFAULT;
+		if (copy_from_user(&reg, argp, sizeof(reg)))
+			break;
+
+		r = -EINVAL;
+		memcpy(&id, &reg.id, sizeof(id));
+		if (id.rsvd || id.rsvd16)
+			break;
+
+		if (id.type != KVM_X86_REG_MSR &&
+		    id.type != KVM_X86_REG_SYNTHETIC_MSR)
+			break;
+
+		if (id.type == KVM_X86_REG_SYNTHETIC_MSR) {
+			id.type = KVM_X86_REG_MSR;
+			r = kvm_translate_synthetic_msr(&id.index);
+			if (r)
+				break;
+		}
+
+		value = u64_to_user_ptr(reg.addr);
+		if (ioctl == KVM_GET_ONE_REG)
+			r = kvm_get_one_msr(vcpu, id.index, value);
+		else
+			r = kvm_set_one_msr(vcpu, id.index, value);
+		break;
+	}
 	case KVM_TPR_ACCESS_REPORTING: {
 		struct kvm_tpr_access_ctl tac;
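A minimal userspace-side sketch of how a VMM might drive the proposed
KVM_{G,S}ET_ONE_REG wiring. The KVM_X86_REG_* types, the MSR_KVM_GUEST_SSP
index, and the helper names below mirror the suggestions in this thread and are
not part of the current uAPI headers, so they are defined locally here as
assumptions, not as an actual interface.

/*
 * Sketch only: drive the proposed x86 KVM_{G,S}ET_ONE_REG wiring from a VMM.
 * The register-id encoding mirrors struct kvm_x86_reg_id from the diff above:
 * index in bits 31:0, type in bits 39:32.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define KVM_X86_REG_MSR			(1 << 2)	/* from the sketch above */
#define KVM_X86_REG_SYNTHETIC_MSR	(1 << 3)	/* from the sketch above */
#define MSR_KVM_GUEST_SSP		0		/* assumed synthetic index */

/* Pack index (bits 31:0) and type (bits 39:32) as in struct kvm_x86_reg_id. */
static uint64_t x86_reg_id(uint32_t index, uint8_t type)
{
	return (uint64_t)index | ((uint64_t)type << 32);
}

static int vcpu_get_reg(int vcpu_fd, uint64_t id, uint64_t *val)
{
	struct kvm_one_reg reg = { .id = id, .addr = (uintptr_t)val };

	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}

static int vcpu_set_reg(int vcpu_fd, uint64_t id, uint64_t val)
{
	struct kvm_one_reg reg = { .id = id, .addr = (uintptr_t)&val };

	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}

/* Example: read the guest shadow stack pointer via the synthetic MSR type. */
static int read_guest_ssp(int vcpu_fd, uint64_t *ssp)
{
	return vcpu_get_reg(vcpu_fd,
			    x86_reg_id(MSR_KVM_GUEST_SSP, KVM_X86_REG_SYNTHETIC_MSR),
			    ssp);
}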
On 5/2/2024 6:40 AM, Sean Christopherson wrote:
> On Sun, Feb 18, 2024, Yang Weijiang wrote:
>> Add CET MSRs to the list of MSRs reported to userspace if the feature,
>> i.e. IBT or SHSTK, associated with the MSRs is supported by KVM.
[...]
> We never resolved the conversation from v6[*], but I still agree with Maxim's
> view that defining a synthetic MSR, which "steals" an MSR from KVM's MSR address
> space, is a bad idea.
>
> And I still also think that KVM_SET_ONE_REG is the best way forward.  Completely
> untested, but I think this is all that is needed to wire up KVM_{G,S}ET_ONE_REG
> to support MSRs, and carve out room for 250+ other register types, plus room for
> more future stuff as needed.

Got your point now.

> We'll still need a KVM-defined MSR for SSP, but it can be KVM internal, not uAPI,
> e.g. the "index" exposed to userspace can simply be '0' for a register type of
> KVM_X86_REG_SYNTHETIC_MSR, and then the translated internal index can be any
> value that doesn't conflict.

Let me try to understand it: for your reference code below, id.type is used to
separate the normal (HW-defined) MSR namespace from the synthetic MSR namespace,
right?

For the latter, IIUC KVM still needs to expose the index within the synthetic
namespace so that userspace can read/write the intended MSRs (of course not via
the existing MSR uAPI). But you said the "index" exposed to userspace can simply
be '0' in this case, so how are the synthetic MSRs distinguished between
userspace and KVM? And how can userspace be aware of the synthetic MSR index
allocation in KVM?

Per your comments in [*], if we can use bits 39:32 to identify MSR classes/types,
then under each class/type (namespace) we still need to define the relevant index
for each synthetic MSR.
[*]: https://lore.kernel.org/all/ZUQ3tcuAxYQ5bWwC@google.com/

[...]
On Mon, May 06, 2024, Weijiang Yang wrote:
> On 5/2/2024 6:40 AM, Sean Christopherson wrote:
> > On Sun, Feb 18, 2024, Yang Weijiang wrote:
[...]
> Let me try to understand it: for your reference code below, id.type is used to
> separate the normal (HW-defined) MSR namespace from the synthetic MSR namespace,
> right?

Yep.

> For the latter, IIUC KVM still needs to expose the index within the synthetic
> namespace so that userspace can read/write the intended MSRs (of course not via
> the existing MSR uAPI). But you said the "index" exposed to userspace can simply
> be '0' in this case, so how are the synthetic MSRs distinguished between
> userspace and KVM? And how can userspace be aware of the synthetic MSR index
> allocation in KVM?

The idea is to have a synthetic index that is exposed to userspace, and a separate
KVM-internal index for emulating accesses.  The value that is exposed to userspace
can start at 0 and be a simple incrementing value as we add synthetic MSRs, as the
.type == SYNTHETIC makes it impossible for the value to collide with a "real" MSR.

Translating to a KVM-internal index is a hack to avoid having to plumb a 64-bit
index into all the MSR code.  We could do that, i.e. pass the full kvm_x86_reg_id
into the MSR helpers, but I'm not convinced it'd be worth the churn.  That said,
I'm not opposed to the idea either, if others prefer that approach.

E.g.
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 738c449e4f9e..21152796238a 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -420,6 +420,8 @@ struct kvm_x86_reg_id {
 	__u16 rsvd16;
 };
 
+#define MSR_KVM_GUEST_SSP	0
+
 #define KVM_SYNC_X86_REGS      (1UL << 0)
 #define KVM_SYNC_X86_SREGS     (1UL << 1)
 #define KVM_SYNC_X86_EVENTS    (1UL << 2)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f45cdd9d8c1f..1a9e1e0c9f49 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5990,6 +5990,19 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 	}
 }
 
+static int kvm_translate_synthetic_msr(u32 *index)
+{
+	switch (*index) {
+	case MSR_KVM_GUEST_SSP:
+		*index = MSR_KVM_INTERNAL_GUEST_SSP;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index cc585051d24b..3b5a038f5260 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -49,6 +49,15 @@ void kvm_spurious_fault(void);
 #define KVM_FIRST_EMULATED_VMX_MSR	MSR_IA32_VMX_BASIC
 #define KVM_LAST_EMULATED_VMX_MSR	MSR_IA32_VMX_VMFUNC
 
+/*
+ * KVM's internal, non-ABI indices for synthetic MSRs.  The values themselves
+ * are arbitrary and have no meaning, the only requirement is that they don't
+ * conflict with "real" MSRs that KVM supports.  Use values at the upper end
+ * of KVM's reserved paravirtual MSR range to minimize churn, i.e. these values
+ * will be usable until KVM exhausts its supply of paravirtual MSR indices.
+ */
+#define MSR_KVM_INTERNAL_GUEST_SSP	0x4b564dff
+
 #define KVM_DEFAULT_PLE_GAP		128
 #define KVM_VMX_DEFAULT_PLE_WINDOW	4096
 #define KVM_DEFAULT_PLE_WINDOW_GROW	2
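For illustration only: the internal index would ultimately have to be terminated
in vendor code as a thin wrapper around the GUEST_SSP VMCS field, which is the
"pseudo-MSR" idea from the patch's changelog. The helper names below are
hypothetical (they are not the series' actual implementation), the GUEST_SSP
field encoding is assumed to be defined elsewhere in the CET series, and the
validation shown is a placeholder; they could be called from the existing
vmx_get_msr()/vmx_set_msr() switch statements.

/*
 * Sketch, assuming a GUEST_SSP VMCS field encoding exists: emulate the
 * KVM-internal SSP "MSR" by mirroring the VMCS field.
 */
static int vmx_get_internal_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
{
	switch (msr_info->index) {
	case MSR_KVM_INTERNAL_GUEST_SSP:
		/* Reads simply mirror the current value of the VMCS field. */
		msr_info->data = vmcs_readl(GUEST_SSP);
		return 0;
	default:
		return 1;	/* not one of KVM's internal MSRs */
	}
}

static int vmx_set_internal_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
{
	switch (msr_info->index) {
	case MSR_KVM_INTERNAL_GUEST_SSP:
		/*
		 * Illustrative validation only; a real implementation would
		 * also enforce canonicality and the architectural alignment
		 * rules for shadow stack pointers.
		 */
		if (!IS_ALIGNED(msr_info->data, 4))
			return 1;
		vmcs_writel(GUEST_SSP, msr_info->data);
		return 0;
	default:
		return 1;	/* not one of KVM's internal MSRs */
	}
}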
On 5/8/2024 1:27 AM, Sean Christopherson wrote:
> On Mon, May 06, 2024, Weijiang Yang wrote:
>> On 5/2/2024 6:40 AM, Sean Christopherson wrote:
>>> On Sun, Feb 18, 2024, Yang Weijiang wrote:
[...]
> The idea is to have a synthetic index that is exposed to userspace, and a separate
> KVM-internal index for emulating accesses.  The value that is exposed to userspace
> can start at 0 and be a simple incrementing value as we add synthetic MSRs, as the
> .type == SYNTHETIC makes it impossible for the value to collide with a "real" MSR.
[...]

OK, I'll post an RFC patch for this change, thanks a lot!
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 605899594ebb..9d08c0bec477 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -58,6 +58,7 @@
 #define MSR_KVM_ASYNC_PF_INT  0x4b564d06
 #define MSR_KVM_ASYNC_PF_ACK  0x4b564d07
 #define MSR_KVM_MIGRATION_CONTROL  0x4b564d08
+#define MSR_KVM_SSP  0x4b564d09
 
 struct kvm_steal_time {
 	__u64 steal;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9239a89dea22..46042bc6e2fa 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7007,6 +7007,8 @@ static bool vmx_has_emulated_msr(struct kvm *kvm, u32 index)
 	case MSR_AMD64_TSC_RATIO:
 		/* This is AMD only.  */
 		return false;
+	case MSR_KVM_SSP:
+		return kvm_cpu_cap_has(X86_FEATURE_SHSTK);
 	default:
 		return true;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5f5df7e38d3d..c0ed69353674 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1476,6 +1476,9 @@ static const u32 msrs_to_save_base[] = {
 	MSR_IA32_XFD, MSR_IA32_XFD_ERR,
 	MSR_IA32_XSS,
+	MSR_IA32_U_CET, MSR_IA32_S_CET,
+	MSR_IA32_PL0_SSP, MSR_IA32_PL1_SSP, MSR_IA32_PL2_SSP,
+	MSR_IA32_PL3_SSP, MSR_IA32_INT_SSP_TAB,
 };
 
 static const u32 msrs_to_save_pmu[] = {
@@ -1579,6 +1582,7 @@ static const u32 emulated_msrs_all[] = {
 	MSR_K7_HWCR,
 	MSR_KVM_POLL_CONTROL,
+	MSR_KVM_SSP,
 };
 
 static u32 emulated_msrs[ARRAY_SIZE(emulated_msrs_all)];
@@ -7441,6 +7445,20 @@ static void kvm_probe_msr_to_save(u32 msr_index)
 		if (!kvm_caps.supported_xss)
 			return;
 		break;
+	case MSR_IA32_U_CET:
+	case MSR_IA32_S_CET:
+		if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
+		    !kvm_cpu_cap_has(X86_FEATURE_IBT))
+			return;
+		break;
+	case MSR_IA32_INT_SSP_TAB:
+		if (!kvm_cpu_cap_has(X86_FEATURE_LM))
+			return;
+		fallthrough;
+	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
+		if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK))
+			return;
+		break;
 	default:
 		break;
 	}
Add CET MSRs to the list of MSRs reported to userspace if the feature,
i.e. IBT or SHSTK, associated with the MSRs is supported by KVM.

SSP can only be read via RDSSP. Writing even requires destructive and
potentially faulting operations such as SAVEPREVSSP/RSTORSSP or
SETSSBSY/CLRSSBSY. Let the host use a pseudo-MSR that is just a wrapper
for the GUEST_SSP field of the VMCS.

Suggested-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/uapi/asm/kvm_para.h |  1 +
 arch/x86/kvm/vmx/vmx.c               |  2 ++
 arch/x86/kvm/x86.c                   | 18 ++++++++++++++++++
 3 files changed, 21 insertions(+)
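The MSR lists touched by the patch are what userspace observes through
KVM_GET_MSR_INDEX_LIST on the /dev/kvm fd, i.e. that is how the CET MSRs (and
MSR_KVM_SSP in emulated_msrs) become "reported to userspace". A minimal sketch
of that query follows; it is not part of the patch, and the helper name is
illustrative. The two-call pattern matches the ioctl's documented behavior of
returning E2BIG with the required count when the buffer is too small.

/* Sketch: enumerate the MSRs KVM reports for save/restore. */
#include <stdlib.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static struct kvm_msr_list *get_msr_index_list(int kvm_fd)
{
	struct kvm_msr_list probe = { .nmsrs = 0 };
	struct kvm_msr_list *list;

	/* First call fails with E2BIG but fills in the number of entries. */
	if (ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, &probe) && errno != E2BIG)
		return NULL;

	list = malloc(sizeof(*list) + probe.nmsrs * sizeof(list->indices[0]));
	if (!list)
		return NULL;

	list->nmsrs = probe.nmsrs;
	if (ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, list)) {
		free(list);
		return NULL;
	}

	/*
	 * A VMM could now scan list->indices[] for the CET MSRs, e.g.
	 * MSR_IA32_U_CET (0x6a0), to decide whether to migrate CET state.
	 */
	return list;
}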