Message ID: 1646641011-55068-1-git-send-email-lirongqing@baidu.com (mailing list archive)
State:      New, archived
Series:     [resend] KVM: x86: check steal time address when enable steal time
Li RongQing <lirongqing@baidu.com> writes:

> Check the steal time address when enabling steal time: do not update
> arch.st.msr_val if the address is invalid, and return a #GP instead.
>
> This avoids unnecessary writes/reads of invalid memory while the
> guest is running.
>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
>  arch/x86/kvm/x86.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index eb402966..3ed0949 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3616,6 +3616,9 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		if (data & KVM_STEAL_RESERVED_MASK)
>  			return 1;
>
> +		if (!kvm_vcpu_gfn_to_memslot(vcpu, data >> PAGE_SHIFT))
> +			return 1;
> +

What about using the stronger kvm_is_visible_gfn() instead? I haven't
put much thought into what happens if we put e.g. the APIC access page
address into the MSR; let's just cut off any such possibility.

>  		vcpu->arch.st.msr_val = data;
>
>  		if (!(data & KVM_MSR_ENABLED))
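For illustration, a minimal sketch of what Vitaly's suggestion could
look like; this is a hypothetical rework, not a posted patch.
kvm_is_visible_gfn() is the existing helper in virt/kvm/kvm_main.c
that, unlike a bare memslot lookup, also rejects KVM-internal memslots
such as the APIC access page:

	case MSR_KVM_STEAL_TIME:
		if (data & KVM_STEAL_RESERVED_MASK)
			return 1;

		/*
		 * Reject gfns without a userspace-visible memslot; this
		 * also filters out internal slots (e.g. the APIC access
		 * page) rather than merely checking for any memslot.
		 */
		if (!kvm_is_visible_gfn(vcpu->kvm, data >> PAGE_SHIFT))
			return 1;

		vcpu->arch.st.msr_val = data;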
On Mon, Mar 07, 2022, Vitaly Kuznetsov wrote:
> Li RongQing <lirongqing@baidu.com> writes:
>
> > Check the steal time address when enabling steal time: do not update
> > arch.st.msr_val if the address is invalid, and return a #GP instead.
> >
> > This avoids unnecessary writes/reads of invalid memory while the
> > guest is running.

Are you concerned about the host cycles, or about the guest triggering
emulated MMIO?

> > Signed-off-by: Li RongQing <lirongqing@baidu.com>
> > ---
> >  arch/x86/kvm/x86.c | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index eb402966..3ed0949 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -3616,6 +3616,9 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  		if (data & KVM_STEAL_RESERVED_MASK)
> >  			return 1;
> >
> > +		if (!kvm_vcpu_gfn_to_memslot(vcpu, data >> PAGE_SHIFT))
> > +			return 1;
> > +
>
> What about using the stronger kvm_is_visible_gfn() instead? I haven't
> put much thought into what happens if we put e.g. the APIC access page
> address into the MSR; let's just cut off any such possibility.

Hmm, I don't love handling this at WRMSR time, e.g. the memslot might
later be moved or deleted, and an invalid address isn't necessarily a
guest problem; userspace could be at fault. The other issue is that
there's no guarantee the guest will actually handle the #GP correctly,
e.g. Linux guests will simply continue on (with a WARN).

That said, I can't think of a better idea.

Documentation/virt/kvm/msr.rst does say:

	64-byte alignment physical address of a memory area which must be
	in guest RAM

but doesn't enforce that :-/  So it's at least reasonable behavior.
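For context on the guest behavior Sean describes, Linux registers its
steal time area roughly as follows (a simplified sketch modeled on
arch/x86/kernel/kvm.c; details vary across kernel versions):

	static DEFINE_PER_CPU_DECRYPTED(struct kvm_steal_time, steal_time)
		__aligned(64);

	static void kvm_register_steal_time(void)
	{
		int cpu = smp_processor_id();
		struct kvm_steal_time *st = &per_cpu(steal_time, cpu);

		if (!has_steal_clock)
			return;

		/*
		 * wrmsrl() is not the *_safe() variant, so a #GP injected
		 * by the hypervisor only produces a WARN via the unchecked
		 * MSR path; the guest keeps running, just without steal
		 * time accounting.
		 */
		wrmsrl(MSR_KVM_STEAL_TIME,
		       (slow_virt_to_phys(st) | KVM_MSR_ENABLED));
	}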
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index eb402966..3ed0949 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3616,6 +3616,9 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (data & KVM_STEAL_RESERVED_MASK)
 			return 1;
 
+		if (!kvm_vcpu_gfn_to_memslot(vcpu, data >> PAGE_SHIFT))
+			return 1;
+
 		vcpu->arch.st.msr_val = data;
 
 		if (!(data & KVM_MSR_ENABLED))
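The MSR value being checked packs the address and control bits
together. Per arch/x86/include/uapi/asm/kvm_para.h, bit 0 is the
enable flag and bits 1-5 are reserved, which is what forces the
documented 64-byte alignment; the existing KVM_STEAL_RESERVED_MASK
check therefore already enforces alignment, and the "must be in guest
RAM" requirement is what the new memslot check adds:

	#define KVM_MSR_ENABLED			1
	#define KVM_STEAL_ALIGNMENT_BITS	5
	#define KVM_STEAL_VALID_BITS		((-1ULL << (KVM_STEAL_ALIGNMENT_BITS + 1)))
	#define KVM_STEAL_RESERVED_MASK		(((1 << KVM_STEAL_ALIGNMENT_BITS) - 1) << 1)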
Check the steal time address when enabling steal time: do not update
arch.st.msr_val if the address is invalid, and return a #GP instead.

This avoids unnecessary writes/reads of invalid memory while the guest
is running.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
 arch/x86/kvm/x86.c | 3 +++
 1 file changed, 3 insertions(+)