diff mbox

[v1,4/6] vvmx: add hvm_max_vmx_msr_policy

Message ID 20170626104435.25508-5-sergey.dyasli@citrix.com (mailing list archive)
State New, archived

Commit Message

Sergey Dyasli June 26, 2017, 10:44 a.m. UTC
Currently, when nested virt is enabled, the set of L1 VMX features
is fixed and calculated by nvmx_msr_read_intercept() as an intersection
between the full set of Xen's supported L1 VMX features, the set of
actual H/W features and, for MSR_IA32_VMX_EPT_VPID_CAP, the set of
features that Xen uses.

Add hvm_max_vmx_msr_policy object which represents the end result of
nvmx_msr_read_intercept() on current H/W.  Most of the code is moved
from nvmx_msr_read_intercept() to calculate_hvm_max_policy() which is
called only once during startup.

There is no functional change to what L1 sees in VMX MSRs.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c |   3 +
 xen/arch/x86/hvm/vmx/vvmx.c | 297 +++++++++++++++++++++-----------------------
 2 files changed, 147 insertions(+), 153 deletions(-)

Comments

Jan Beulich July 4, 2017, 3:04 p.m. UTC | #1
>>> On 26.06.17 at 12:44, <sergey.dyasli@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -244,6 +244,8 @@ static u32 adjust_vmx_controls(
>      return ctl;
>  }
>  
> +void calculate_hvm_max_policy(void);

As said for a prior patch, this once again needs to move to a header
which is also being included by the producer.

> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1941,6 +1941,8 @@ int nvmx_handle_invvpid(struct cpu_user_regs *regs)
>      return X86EMUL_OKAY;
>  }
>  
> +struct vmx_msr_policy __read_mostly hvm_max_vmx_msr_policy;

Wouldn't vvmx_max_msr_policy be unambiguous enough a name,
but shorter to type?

> @@ -1948,6 +1950,134 @@ int nvmx_handle_invvpid(struct cpu_user_regs *regs)
>      (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
>      ((uint32_t)(__emul_value(enable1, default1) | host_value)))
>  
> +void __init calculate_hvm_max_policy(void)

This is not a suitable name for a VMX specific function. I should
have noticed and said this in patch 2 already, so please consider
it applicable there too.

> +{
> +    struct vmx_msr_policy *p = &hvm_max_vmx_msr_policy;
> +    uint64_t data, *msr;
> +    u32 default1_bits;
> +
> +    *p = raw_vmx_msr_policy;
> +
> +    /* XXX: vmcs_revision_id for nested virt */

There was no such comment (presumably indicating something that
yet needs doing) in the old code - what's this about? Can't this be
implemented instead of such a comment be added?

> +    /* MSR_IA32_VMX_VMCS_ENUM */
> +    /* The max index of VVMCS encoding is 0x1f. */
> +    data = 0x1f << 1;
> +    msr = &p->msr[MSR_IA32_VMX_VMCS_ENUM - MSR_IA32_VMX_BASIC];
> +    *msr = data;
> +
> +    /* MSR_IA32_VMX_CR0_FIXED0 */
> +    /* PG, PE bits must be 1 in VMX operation */
> +    data = X86_CR0_PE | X86_CR0_PG;
> +    msr = &p->msr[MSR_IA32_VMX_CR0_FIXED0 - MSR_IA32_VMX_BASIC];
> +    *msr = data;
> +
> +    /* MSR_IA32_VMX_CR0_FIXED1 */
> +    /* allow 0-settings for all bits */
> +    data = 0xffffffff;
> +    msr = &p->msr[MSR_IA32_VMX_CR0_FIXED1 - MSR_IA32_VMX_BASIC];
> +    *msr = data;
> +
> +    /* MSR_IA32_VMX_CR4_FIXED0 */
> +    /* VMXE bit must be 1 in VMX operation */
> +    data = X86_CR4_VMXE;
> +    msr = &p->msr[MSR_IA32_VMX_CR4_FIXED0 - MSR_IA32_VMX_BASIC];
> +    *msr = data;

I don't see a need for using "data" as an intermediate variable in any
of the three cases above.

> +    /* MSR_IA32_VMX_CR4_FIXED1 */
> +    /* Treated dynamically */
> +
> +    /* MSR_IA32_VMX_MISC */
> +    /* Do not support CR3-target feature now */
> +    msr = &p->msr[MSR_IA32_VMX_MISC - MSR_IA32_VMX_BASIC];
> +    *msr = *msr & ~VMX_MISC_CR3_TARGET;

&=

> +    /* MSR_IA32_VMX_EPT_VPID_CAP */
> +    data = nept_get_ept_vpid_cap();
> +    msr = &p->msr[MSR_IA32_VMX_EPT_VPID_CAP - MSR_IA32_VMX_BASIC];
> +    *msr = data;

No need to use "data" again.

> +    /* MSR_IA32_VMX_VMFUNC is N/A */
> +    p->available &= ~0x20000;

Please use an expression again here.

Jan
Sergey Dyasli July 6, 2017, 10:23 a.m. UTC | #2
On Tue, 2017-07-04 at 09:04 -0600, Jan Beulich wrote:
> > > > On 26.06.17 at 12:44, <sergey.dyasli@citrix.com> wrote:
> > +{
> > +    struct vmx_msr_policy *p = &hvm_max_vmx_msr_policy;
> > +    uint64_t data, *msr;
> > +    u32 default1_bits;
> > +
> > +    *p = raw_vmx_msr_policy;
> > +
> > +    /* XXX: vmcs_revision_id for nested virt */
> 
> There was no such comment (presumably indicating something that
> yet needs doing) in the old code - what's this about? Can't this be
> implemented instead of such a comment be added?

Currently L1 sees the vmcs_revision_id value from the H/W MSR, which is
fine until live migration becomes a concern. The question is: what should
happen if L1 is migrated to some other H/W with a different vmcs id?
One possible solution is to use a "virtual vmcs id" in the policy object.

-- 
Thanks,
Sergey
Jan Beulich July 6, 2017, 12:28 p.m. UTC | #3
>>> On 06.07.17 at 12:23, <sergey.dyasli@citrix.com> wrote:
> On Tue, 2017-07-04 at 09:04 -0600, Jan Beulich wrote:
>> > > > On 26.06.17 at 12:44, <sergey.dyasli@citrix.com> wrote:
>> > 
>> > +{
>> > +    struct vmx_msr_policy *p = &hvm_max_vmx_msr_policy;
>> > +    uint64_t data, *msr;
>> > +    u32 default1_bits;
>> > +
>> > +    *p = raw_vmx_msr_policy;
>> > +
>> > +    /* XXX: vmcs_revision_id for nested virt */
>> 
>> There was no such comment (presumably indicating something that
>> yet needs doing) in the old code - what's this about? Can't this be
>> implemented instead of such a comment be added?
> 
> Currently L1 sees vmcs_revision_id value from the H/W MSR. Which is
> fine until live migration is concerned. The question is: what should
> happen if L1 is migrated to some other H/W with different vmcs id?
> One possible solution is to use "virtual vmcs id" in the policy object.

Are there any other (reasonable) ones, besides forbidding
migration (live or not)? Otoh, if migration between hosts with
different IDs is allowed, won't we risk the page layout (which
is intentionally unknown to us) changing as well? Or in order
to be migrateable, such guests would have to be forced to
not use shadow VMCS, and we'd have to pin down (as part of
the guest ABI) the software layout we use.

Jan
Sergey Dyasli July 7, 2017, 1:01 p.m. UTC | #4
On Thu, 2017-07-06 at 06:28 -0600, Jan Beulich wrote:
> > > > On 06.07.17 at 12:23, <sergey.dyasli@citrix.com> wrote:
> > On Tue, 2017-07-04 at 09:04 -0600, Jan Beulich wrote:
> > > > > > On 26.06.17 at 12:44, <sergey.dyasli@citrix.com> wrote:
> > > > +{
> > > > +    struct vmx_msr_policy *p = &hvm_max_vmx_msr_policy;
> > > > +    uint64_t data, *msr;
> > > > +    u32 default1_bits;
> > > > +
> > > > +    *p = raw_vmx_msr_policy;
> > > > +
> > > > +    /* XXX: vmcs_revision_id for nested virt */
> > > 
> > > There was no such comment (presumably indicating something that
> > > yet needs doing) in the old code - what's this about? Can't this be
> > > implemented instead of such a comment be added?
> > 
> > Currently L1 sees vmcs_revision_id value from the H/W MSR. Which is
> > fine until live migration is concerned. The question is: what should
> > happen if L1 is migrated to some other H/W with different vmcs id?
> > One possible solution is to use "virtual vmcs id" in the policy object.
> 
> Are there any other (reasonable) ones, besides forbidding
> migration (live or not). Otoh, if migration between hosts with
> different IDs is allowed, won't we risk the page layout (which
> is intentionally unknown to us) changing as well? Or in order
> to be migrateable, such guests would have to be forced to
> not use shadow VMCS, and we'd have to pin down (as part of
> the guest ABI) the software layout we use.

During a discussion with Andrew, we identified difficulties in migrating
an L1 hypervisor to H/W with a different vmcs revision id when
VMCS shadowing is used.

It seems reasonable to require H/W with the same vmcs revision id for
migration. It is therefore fine to provide L1 with the real H/W id,
and I will remove that comment in v2.

-- 
Thanks,
Sergey
Andrew Cooper July 7, 2017, 2:01 p.m. UTC | #5
On 07/07/17 14:01, Sergey Dyasli wrote:
> On Thu, 2017-07-06 at 06:28 -0600, Jan Beulich wrote:
>>>>> On 06.07.17 at 12:23, <sergey.dyasli@citrix.com> wrote:
>>> On Tue, 2017-07-04 at 09:04 -0600, Jan Beulich wrote:
>>>>>>> On 26.06.17 at 12:44, <sergey.dyasli@citrix.com> wrote:
>>>>> +{
>>>>> +    struct vmx_msr_policy *p = &hvm_max_vmx_msr_policy;
>>>>> +    uint64_t data, *msr;
>>>>> +    u32 default1_bits;
>>>>> +
>>>>> +    *p = raw_vmx_msr_policy;
>>>>> +
>>>>> +    /* XXX: vmcs_revision_id for nested virt */
>>>> There was no such comment (presumably indicating something that
>>>> yet needs doing) in the old code - what's this about? Can't this be
>>>> implemented instead of such a comment be added?
>>> Currently L1 sees vmcs_revision_id value from the H/W MSR. Which is
>>> fine until live migration is concerned. The question is: what should
>>> happen if L1 is migrated to some other H/W with different vmcs id?
>>> One possible solution is to use "virtual vmcs id" in the policy object.
>> Are there any other (reasonable) ones, besides forbidding
>> migration (live or not). Otoh, if migration between hosts with
>> different IDs is allowed, won't we risk the page layout (which
>> is intentionally unknown to us) changing as well? Or in order
>> to be migrateable, such guests would have to be forced to
>> not use shadow VMCS, and we'd have to pin down (as part of
>> the guest ABI) the software layout we use.
> During a discussion with Andrew, we identified difficulties in migration
> of an L1 hypervisor to a H/W with the different vmcs revision id when
> VMCS shadowing is used.
>
> It seems to be a reasonable requirement for migration to have H/W with
> the same vmcs revision id. Therefore it is fine to provide L1 with
> the real H/W id and I will remove that comment in v2.

From the point of view of an L1 guest using nested virt, it reads
VMX_BASIC at the start of day to get the revision id.  This is just
like reading the CPUID information at the start of day and expecting
it to remain constant until the next reboot.

If Xen is fully shadowing the VMCS, and Xen's advertised revision ID
hasn't changed, then migration between different hardware is possible
(providing that the guest was first booted with all other VMX features
suitably levelled), as Xen has to shadow everything the guest vmwrote
anyway.

If Xen uses VMCS shadowing to speed up the L1 guest's vmreads/vmwrites,
then migration must be locked to hardware with the same revision id.  If
we don't fulfil this requirement, Xen wouldn't be able to (validly)
object to the L1 guest doing a vmptrld with the old revision ID, and
wouldn't be in a position (at all, let alone validly) to re-serialise
the VMCS in place to be suitable for the current hardware.

~Andrew

Patch

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index dbf6eb7433..da6ddf52f1 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -244,6 +244,8 @@  static u32 adjust_vmx_controls(
     return ctl;
 }
 
+void calculate_hvm_max_policy(void);
+
 static int vmx_init_vmcs_config(void)
 {
     u32 min, opt;
@@ -463,6 +465,7 @@  static int vmx_init_vmcs_config(void)
         vmx_virt_exception         = !!(_vmx_secondary_exec_control &
                                        SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS);
         vmx_display_features();
+        calculate_hvm_max_policy();
 
         /* IA-32 SDM Vol 3B: VMCS size is never greater than 4kB. */
         if ( raw_vmx_msr_policy.basic.vmcs_region_size > PAGE_SIZE )
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 3560faec6d..657371ec69 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1941,6 +1941,8 @@  int nvmx_handle_invvpid(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+struct vmx_msr_policy __read_mostly hvm_max_vmx_msr_policy;
+
 #define __emul_value(enable1, default1) \
     ((enable1 | default1) << 32 | (default1))
 
@@ -1948,6 +1950,134 @@  int nvmx_handle_invvpid(struct cpu_user_regs *regs)
     (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
     ((uint32_t)(__emul_value(enable1, default1) | host_value)))
 
+void __init calculate_hvm_max_policy(void)
+{
+    struct vmx_msr_policy *p = &hvm_max_vmx_msr_policy;
+    uint64_t data, *msr;
+    u32 default1_bits;
+
+    *p = raw_vmx_msr_policy;
+
+    /* XXX: vmcs_revision_id for nested virt */
+
+    /* Pinbased controls 1-settings */
+    data = PIN_BASED_EXT_INTR_MASK |
+           PIN_BASED_NMI_EXITING |
+           PIN_BASED_PREEMPT_TIMER;
+
+    msr = &p->msr[MSR_IA32_VMX_PINBASED_CTLS - MSR_IA32_VMX_BASIC];
+    *msr = gen_vmx_msr(data, VMX_PINBASED_CTLS_DEFAULT1, *msr);
+    msr = &p->msr[MSR_IA32_VMX_TRUE_PINBASED_CTLS - MSR_IA32_VMX_BASIC];
+    *msr = gen_vmx_msr(data, VMX_PINBASED_CTLS_DEFAULT1, *msr);
+
+    /* Procbased controls 1-settings */
+    default1_bits = VMX_PROCBASED_CTLS_DEFAULT1;
+    data = CPU_BASED_HLT_EXITING |
+           CPU_BASED_VIRTUAL_INTR_PENDING |
+           CPU_BASED_CR8_LOAD_EXITING |
+           CPU_BASED_CR8_STORE_EXITING |
+           CPU_BASED_INVLPG_EXITING |
+           CPU_BASED_CR3_LOAD_EXITING |
+           CPU_BASED_CR3_STORE_EXITING |
+           CPU_BASED_MONITOR_EXITING |
+           CPU_BASED_MWAIT_EXITING |
+           CPU_BASED_MOV_DR_EXITING |
+           CPU_BASED_ACTIVATE_IO_BITMAP |
+           CPU_BASED_USE_TSC_OFFSETING |
+           CPU_BASED_UNCOND_IO_EXITING |
+           CPU_BASED_RDTSC_EXITING |
+           CPU_BASED_MONITOR_TRAP_FLAG |
+           CPU_BASED_VIRTUAL_NMI_PENDING |
+           CPU_BASED_ACTIVATE_MSR_BITMAP |
+           CPU_BASED_PAUSE_EXITING |
+           CPU_BASED_RDPMC_EXITING |
+           CPU_BASED_TPR_SHADOW |
+           CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
+
+    msr = &p->msr[MSR_IA32_VMX_PROCBASED_CTLS - MSR_IA32_VMX_BASIC];
+    *msr = gen_vmx_msr(data, default1_bits, *msr);
+
+    default1_bits &= ~(CPU_BASED_CR3_LOAD_EXITING |
+                       CPU_BASED_CR3_STORE_EXITING |
+                       CPU_BASED_INVLPG_EXITING);
+
+    msr = &p->msr[MSR_IA32_VMX_TRUE_PROCBASED_CTLS - MSR_IA32_VMX_BASIC];
+    *msr = gen_vmx_msr(data, default1_bits, *msr);
+
+    /* Procbased-2 controls 1-settings */
+    data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
+           SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+           SECONDARY_EXEC_ENABLE_VPID |
+           SECONDARY_EXEC_UNRESTRICTED_GUEST |
+           SECONDARY_EXEC_ENABLE_EPT;
+    msr = &p->msr[MSR_IA32_VMX_PROCBASED_CTLS2 - MSR_IA32_VMX_BASIC];
+    *msr = gen_vmx_msr(data, 0, *msr);
+
+    /* Vmexit controls 1-settings */
+    data = VM_EXIT_ACK_INTR_ON_EXIT |
+           VM_EXIT_IA32E_MODE |
+           VM_EXIT_SAVE_PREEMPT_TIMER |
+           VM_EXIT_SAVE_GUEST_PAT |
+           VM_EXIT_LOAD_HOST_PAT |
+           VM_EXIT_SAVE_GUEST_EFER |
+           VM_EXIT_LOAD_HOST_EFER |
+           VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
+    msr = &p->msr[MSR_IA32_VMX_EXIT_CTLS - MSR_IA32_VMX_BASIC];
+    *msr = gen_vmx_msr(data, VMX_EXIT_CTLS_DEFAULT1, *msr);
+    msr = &p->msr[MSR_IA32_VMX_TRUE_EXIT_CTLS - MSR_IA32_VMX_BASIC];
+    *msr = gen_vmx_msr(data, VMX_EXIT_CTLS_DEFAULT1, *msr);
+
+    /* Vmentry controls 1-settings */
+    data = VM_ENTRY_LOAD_GUEST_PAT |
+           VM_ENTRY_LOAD_GUEST_EFER |
+           VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
+           VM_ENTRY_IA32E_MODE;
+    msr = &p->msr[MSR_IA32_VMX_ENTRY_CTLS - MSR_IA32_VMX_BASIC];
+    *msr = gen_vmx_msr(data, VMX_ENTRY_CTLS_DEFAULT1, *msr);
+    msr = &p->msr[MSR_IA32_VMX_TRUE_ENTRY_CTLS - MSR_IA32_VMX_BASIC];
+    *msr = gen_vmx_msr(data, VMX_ENTRY_CTLS_DEFAULT1, *msr);
+
+    /* MSR_IA32_VMX_VMCS_ENUM */
+    /* The max index of VVMCS encoding is 0x1f. */
+    data = 0x1f << 1;
+    msr = &p->msr[MSR_IA32_VMX_VMCS_ENUM - MSR_IA32_VMX_BASIC];
+    *msr = data;
+
+    /* MSR_IA32_VMX_CR0_FIXED0 */
+    /* PG, PE bits must be 1 in VMX operation */
+    data = X86_CR0_PE | X86_CR0_PG;
+    msr = &p->msr[MSR_IA32_VMX_CR0_FIXED0 - MSR_IA32_VMX_BASIC];
+    *msr = data;
+
+    /* MSR_IA32_VMX_CR0_FIXED1 */
+    /* allow 0-settings for all bits */
+    data = 0xffffffff;
+    msr = &p->msr[MSR_IA32_VMX_CR0_FIXED1 - MSR_IA32_VMX_BASIC];
+    *msr = data;
+
+    /* MSR_IA32_VMX_CR4_FIXED0 */
+    /* VMXE bit must be 1 in VMX operation */
+    data = X86_CR4_VMXE;
+    msr = &p->msr[MSR_IA32_VMX_CR4_FIXED0 - MSR_IA32_VMX_BASIC];
+    *msr = data;
+
+    /* MSR_IA32_VMX_CR4_FIXED1 */
+    /* Treated dynamically */
+
+    /* MSR_IA32_VMX_MISC */
+    /* Do not support CR3-target feature now */
+    msr = &p->msr[MSR_IA32_VMX_MISC - MSR_IA32_VMX_BASIC];
+    *msr = *msr & ~VMX_MISC_CR3_TARGET;
+
+    /* MSR_IA32_VMX_EPT_VPID_CAP */
+    data = nept_get_ept_vpid_cap();
+    msr = &p->msr[MSR_IA32_VMX_EPT_VPID_CAP - MSR_IA32_VMX_BASIC];
+    *msr = data;
+
+    /* MSR_IA32_VMX_VMFUNC is N/A */
+    p->available &= ~0x20000;
+}
+
 /*
  * Capability reporting
  */
@@ -1955,171 +2085,32 @@  int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
     struct vcpu *v = current;
     struct domain *d = v->domain;
-    u64 data = 0, host_data = 0;
+    struct vmx_msr_policy *p = &hvm_max_vmx_msr_policy;
+    u64 data;
     int r = 1;
 
     /* VMX capablity MSRs are available only when guest supports VMX. */
     if ( !nestedhvm_enabled(d) || !d->arch.cpuid->basic.vmx )
         return 0;
 
-    /*
-     * These MSRs are only available when flags in other MSRs are set.
-     * These prerequisites are listed in the Intel 64 and IA-32
-     * Architectures Software Developer’s Manual, Vol 3, Appendix A.
-     */
-    switch ( msr )
+    /* TODO: disentangle feature control from nested virt */
+    if ( msr == MSR_IA32_FEATURE_CONTROL )
     {
-    case MSR_IA32_VMX_PROCBASED_CTLS2:
-        if ( !cpu_has_vmx_secondary_exec_control )
-            return 0;
-        break;
-
-    case MSR_IA32_VMX_EPT_VPID_CAP:
-        if ( !(cpu_has_vmx_ept || cpu_has_vmx_vpid) )
-            return 0;
-        break;
-
-    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
-    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
-    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
-    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
-        if ( !(vmx_basic_msr & VMX_BASIC_DEFAULT1_ZERO) )
-            return 0;
-        break;
+        data = IA32_FEATURE_CONTROL_LOCK |
+               IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
+        *msr_content = data;
 
-    case MSR_IA32_VMX_VMFUNC:
-        if ( !cpu_has_vmx_vmfunc )
-            return 0;
-        break;
+        return r;
     }
 
-    rdmsrl(msr, host_data);
-
-    /*
-     * Remove unsupport features from n1 guest capability MSR
-     */
-    switch (msr) {
-    case MSR_IA32_VMX_BASIC:
-    {
-        const struct vmcs_struct *vmcs =
-            map_domain_page(_mfn(PFN_DOWN(v->arch.hvm_vmx.vmcs_pa)));
-
-        data = (host_data & (~0ul << 32)) |
-               (vmcs->vmcs_revision_id & 0x7fffffff);
-        unmap_domain_page(vmcs);
-        break;
-    }
-    case MSR_IA32_VMX_PINBASED_CTLS:
-    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
-        /* 1-settings */
-        data = PIN_BASED_EXT_INTR_MASK |
-               PIN_BASED_NMI_EXITING |
-               PIN_BASED_PREEMPT_TIMER;
-        data = gen_vmx_msr(data, VMX_PINBASED_CTLS_DEFAULT1, host_data);
-        break;
-    case MSR_IA32_VMX_PROCBASED_CTLS:
-    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
-    {
-        u32 default1_bits = VMX_PROCBASED_CTLS_DEFAULT1;
-        /* 1-settings */
-        data = CPU_BASED_HLT_EXITING |
-               CPU_BASED_VIRTUAL_INTR_PENDING |
-               CPU_BASED_CR8_LOAD_EXITING |
-               CPU_BASED_CR8_STORE_EXITING |
-               CPU_BASED_INVLPG_EXITING |
-               CPU_BASED_CR3_LOAD_EXITING |
-               CPU_BASED_CR3_STORE_EXITING |
-               CPU_BASED_MONITOR_EXITING |
-               CPU_BASED_MWAIT_EXITING |
-               CPU_BASED_MOV_DR_EXITING |
-               CPU_BASED_ACTIVATE_IO_BITMAP |
-               CPU_BASED_USE_TSC_OFFSETING |
-               CPU_BASED_UNCOND_IO_EXITING |
-               CPU_BASED_RDTSC_EXITING |
-               CPU_BASED_MONITOR_TRAP_FLAG |
-               CPU_BASED_VIRTUAL_NMI_PENDING |
-               CPU_BASED_ACTIVATE_MSR_BITMAP |
-               CPU_BASED_PAUSE_EXITING |
-               CPU_BASED_RDPMC_EXITING |
-               CPU_BASED_TPR_SHADOW |
-               CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-
-        if ( msr == MSR_IA32_VMX_TRUE_PROCBASED_CTLS )
-            default1_bits &= ~(CPU_BASED_CR3_LOAD_EXITING |
-                               CPU_BASED_CR3_STORE_EXITING |
-                               CPU_BASED_INVLPG_EXITING);
-
-        data = gen_vmx_msr(data, default1_bits, host_data);
-        break;
-    }
-    case MSR_IA32_VMX_PROCBASED_CTLS2:
-        /* 1-settings */
-        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
-               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
-               SECONDARY_EXEC_ENABLE_VPID |
-               SECONDARY_EXEC_UNRESTRICTED_GUEST |
-               SECONDARY_EXEC_ENABLE_EPT;
-        data = gen_vmx_msr(data, 0, host_data);
-        break;
-    case MSR_IA32_VMX_EXIT_CTLS:
-    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
-        /* 1-settings */
-        data = VM_EXIT_ACK_INTR_ON_EXIT |
-               VM_EXIT_IA32E_MODE |
-               VM_EXIT_SAVE_PREEMPT_TIMER |
-               VM_EXIT_SAVE_GUEST_PAT |
-               VM_EXIT_LOAD_HOST_PAT |
-               VM_EXIT_SAVE_GUEST_EFER |
-               VM_EXIT_LOAD_HOST_EFER |
-               VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
-        data = gen_vmx_msr(data, VMX_EXIT_CTLS_DEFAULT1, host_data);
-        break;
-    case MSR_IA32_VMX_ENTRY_CTLS:
-    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
-        /* 1-settings */
-        data = VM_ENTRY_LOAD_GUEST_PAT |
-               VM_ENTRY_LOAD_GUEST_EFER |
-               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
-               VM_ENTRY_IA32E_MODE;
-        data = gen_vmx_msr(data, VMX_ENTRY_CTLS_DEFAULT1, host_data);
-        break;
+    if ( !vmx_msr_available(p, msr) )
+        return 0;
 
-    case MSR_IA32_FEATURE_CONTROL:
-        data = IA32_FEATURE_CONTROL_LOCK |
-               IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
-        break;
-    case MSR_IA32_VMX_VMCS_ENUM:
-        /* The max index of VVMCS encoding is 0x1f. */
-        data = 0x1f << 1;
-        break;
-    case MSR_IA32_VMX_CR0_FIXED0:
-        /* PG, PE bits must be 1 in VMX operation */
-        data = X86_CR0_PE | X86_CR0_PG;
-        break;
-    case MSR_IA32_VMX_CR0_FIXED1:
-        /* allow 0-settings for all bits */
-        data = 0xffffffff;
-        break;
-    case MSR_IA32_VMX_CR4_FIXED0:
-        /* VMXE bit must be 1 in VMX operation */
-        data = X86_CR4_VMXE;
-        break;
-    case MSR_IA32_VMX_CR4_FIXED1:
-        data = hvm_cr4_guest_valid_bits(v, 0);
-        break;
-    case MSR_IA32_VMX_MISC:
-        /* Do not support CR3-target feature now */
-        data = host_data & ~VMX_MISC_CR3_TARGET;
-        break;
-    case MSR_IA32_VMX_EPT_VPID_CAP:
-        data = nept_get_ept_vpid_cap();
-        break;
-    default:
-        r = 0;
-        break;
-    }
+    if ( msr == MSR_IA32_VMX_CR4_FIXED1 )
+        *msr_content = hvm_cr4_guest_valid_bits(v, 0);
+    else
+        *msr_content = p->msr[msr - MSR_IA32_VMX_BASIC];
 
-    *msr_content = data;
     return r;
 }