| Message ID | 20191029210555.138393-2-aaronlewis@google.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Add support for capturing the highest observable L2 TSC |
On Tue, Oct 29, 2019 at 2:06 PM Aaron Lewis <aaronlewis@google.com> wrote:
>
> Add the function read_and_check_msr_entry(), which just pulls some code
> out of nested_vmx_store_msr() for now; this is in preparation for a
> change later in this series where we reuse the code in
> read_and_check_msr_entry().
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> Change-Id: Iaf8787198c06674e8b0555982a962f5bd288e43f

Drop the Change-Id.

Reviewed-by: Jim Mattson <jmattson@google.com>
On Mon, Nov 04, 2019 at 02:41:10PM -0800, Jim Mattson wrote:
> On Tue, Oct 29, 2019 at 2:06 PM Aaron Lewis <aaronlewis@google.com> wrote:
> >
> > Add the function read_and_check_msr_entry(), which just pulls some code
> > out of nested_vmx_store_msr() for now; this is in preparation for a
> > change later in this series where we reuse the code in
> > read_and_check_msr_entry().
> >
> > Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> > Change-Id: Iaf8787198c06674e8b0555982a962f5bd288e43f
>
> Drop the Change-Id.

Even better, incorporate checkpatch into your flow; this is one of the few things that is unequivocally considered an error.
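[For reference: checkpatch ships with the kernel tree and flags Gerrit Change-Id lines as an error. A typical invocation before posting looks like the following; the outgoing/*.patch path is only illustrative, not from this thread:

    $ ./scripts/checkpatch.pl -g HEAD           # check the commit at HEAD
    $ ./scripts/checkpatch.pl outgoing/*.patch  # or check generated patch files

Running it over the series before git send-email would have caught the Change-Id trailer automatically.]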
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e76eb4f07f6c..7b058d7b9fcc 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -929,6 +929,26 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 	return i + 1;
 }
 
+static bool read_and_check_msr_entry(struct kvm_vcpu *vcpu, u64 gpa, int i,
+				     struct vmx_msr_entry *e)
+{
+	if (kvm_vcpu_read_guest(vcpu,
+				gpa + i * sizeof(*e),
+				e, 2 * sizeof(u32))) {
+		pr_debug_ratelimited(
+			"%s cannot read MSR entry (%u, 0x%08llx)\n",
+			__func__, i, gpa + i * sizeof(*e));
+		return false;
+	}
+	if (nested_vmx_store_msr_check(vcpu, e)) {
+		pr_debug_ratelimited(
+			"%s check failed (%u, 0x%x, 0x%x)\n",
+			__func__, i, e->index, e->reserved);
+		return false;
+	}
+	return true;
+}
+
 static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 {
 	u64 data;
@@ -940,20 +960,9 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 		if (unlikely(i >= max_msr_list_size))
 			return -EINVAL;
 
-		if (kvm_vcpu_read_guest(vcpu,
-					gpa + i * sizeof(e),
-					&e, 2 * sizeof(u32))) {
-			pr_debug_ratelimited(
-				"%s cannot read MSR entry (%u, 0x%08llx)\n",
-				__func__, i, gpa + i * sizeof(e));
+		if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
 			return -EINVAL;
-		}
-		if (nested_vmx_store_msr_check(vcpu, &e)) {
-			pr_debug_ratelimited(
-				"%s check failed (%u, 0x%x, 0x%x)\n",
-				__func__, i, e.index, e.reserved);
-			return -EINVAL;
-		}
+
 		if (kvm_get_msr(vcpu, e.index, &data)) {
 			pr_debug_ratelimited(
 				"%s cannot read MSR (%u, 0x%x)\n",
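[A note on the 2 * sizeof(u32) read size in the helper: entries in the VMX MSR-load/store lists use the layout below (as defined in arch/x86/kvm/vmx/vmx.h), and the store path only needs to fetch index and reserved from guest memory, since KVM computes the value itself rather than reading it from the guest:

struct vmx_msr_entry {
	u32 index;	/* MSR number */
	u32 reserved;	/* must be zero, checked by nested_vmx_store_msr_check() */
	u64 value;	/* written back by KVM on the store path */
} __aligned(16);
]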
Add the function read_and_check_msr_entry(), which just pulls some code
out of nested_vmx_store_msr() for now; this is in preparation for a
change later in this series where we reuse the code in
read_and_check_msr_entry().

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: Iaf8787198c06674e8b0555982a962f5bd288e43f
---
 arch/x86/kvm/vmx/nested.c | 35 ++++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 13 deletions(-)
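[The reuse the commit message alludes to is not part of this patch. As a sketch only (function and field names taken from how the series eventually landed upstream in nested.c; details may differ from the version posted here), the helper lets a later patch scan the vmcs12 VM-exit MSR store list:

static bool nested_msr_store_list_has_msr(struct kvm_vcpu *vcpu, u32 msr_index)
{
	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
	u32 count = vmcs12->vm_exit_msr_store_count;
	u64 gpa = vmcs12->vm_exit_msr_store_addr;
	struct vmx_msr_entry e;
	u32 i;

	/* Walk the list; bail out if any entry cannot be read or fails checks. */
	for (i = 0; i < count; i++) {
		if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
			return false;
		if (e.index == msr_index)
			return true;
	}
	return false;
}
]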