| Message ID | 20240507155817.3951344-18-pbonzini@redhat.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | KVM: x86/mmu: Page fault and MMIO cleanups |
On 5/7/2024 11:58 PM, Paolo Bonzini wrote:
> From: Sean Christopherson <seanjc@google.com>
>
> WARN if __kvm_faultin_pfn() generates a "no slot" pfn, and gracefully
> handle the unexpected behavior instead of continuing on with dangerous
> state, e.g. tdp_mmu_map_handle_target_level() _only_ checks fault->slot,
> and so could install a bogus PFN into the guest.
>
> The existing code is functionally ok, because kvm_faultin_pfn() pre-checks
> all of the cases that result in KVM_PFN_NOSLOT, but it is unnecessarily
> unsafe as it relies on __gfn_to_pfn_memslot() getting the _exact_ same
> memslot, i.e. not a re-retrieved pointer with KVM_MEMSLOT_INVALID set.
>
> And checking only fault->slot would fall apart if KVM ever added a flag or
> condition that forced emulation, similar to how KVM handles writes to
> read-only memslots.
>
> Cc: David Matlack <dmatlack@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Reviewed-by: Kai Huang <kai.huang@intel.com>
> Message-ID: <20240228024147.41573-17-seanjc@google.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>

> ---
>  arch/x86/kvm/mmu/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index d717d60c6f19..510eb1117012 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4425,7 +4425,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
>  	if (unlikely(is_error_pfn(fault->pfn)))
>  		return kvm_handle_error_pfn(vcpu, fault);
>
> -	if (WARN_ON_ONCE(!fault->slot))
> +	if (WARN_ON_ONCE(!fault->slot || is_noslot_pfn(fault->pfn)))
>  		return kvm_handle_noslot_fault(vcpu, fault, access);
>
>  	/*
```diff
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d717d60c6f19..510eb1117012 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4425,7 +4425,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 	if (unlikely(is_error_pfn(fault->pfn)))
 		return kvm_handle_error_pfn(vcpu, fault);
 
-	if (WARN_ON_ONCE(!fault->slot))
+	if (WARN_ON_ONCE(!fault->slot || is_noslot_pfn(fault->pfn)))
 		return kvm_handle_noslot_fault(vcpu, fault, access);
 
 	/*
```
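For context, below is a minimal, self-contained sketch of the check order the patch establishes in kvm_faultin_pfn(). The types, constants, and struct layout are simplified stand-ins for illustration, not KVM's actual definitions; only the condition `!fault->slot || is_noslot_pfn(fault->pfn)` mirrors the patched code. It shows how a "no slot" pfn is now routed to the noslot path even when a stale, non-NULL fault->slot would have slipped past the old `!fault->slot` check alone.

```c
/*
 * Simplified model of the post-patch check order in kvm_faultin_pfn().
 * The constants and the fault layout here are illustrative stand-ins,
 * not the kernel's definitions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t kvm_pfn_t;

/* Illustrative sentinels: "error" bits are distinct from the noslot bit. */
#define PFN_ERR_MASK	(0x7ffULL << 52)
#define PFN_NOSLOT	(0x1ULL << 63)

struct page_fault {
	void      *slot;	/* memslot pointer, NULL if no backing slot */
	kvm_pfn_t  pfn;		/* result of the pfn lookup */
};

static bool is_error_pfn(kvm_pfn_t pfn)  { return pfn & PFN_ERR_MASK; }
static bool is_noslot_pfn(kvm_pfn_t pfn) { return pfn == PFN_NOSLOT; }

/*
 * Mirrors the patched condition: a "no slot" pfn is treated as a noslot
 * fault even if fault->slot is (stale but) non-NULL.
 */
static const char *classify(const struct page_fault *fault)
{
	if (is_error_pfn(fault->pfn))
		return "error pfn -> handle error path";

	if (!fault->slot || is_noslot_pfn(fault->pfn))
		return "noslot -> WARN once, handle noslot fault";

	return "valid pfn -> safe to map into the guest";
}

int main(void)
{
	int dummy_slot;
	struct page_fault stale = { .slot = &dummy_slot, .pfn = PFN_NOSLOT };
	struct page_fault ok    = { .slot = &dummy_slot, .pfn = 0x1234 };

	/* Pre-patch, 'stale' would have passed the !fault->slot check alone. */
	printf("stale: %s\n", classify(&stale));
	printf("ok:    %s\n", classify(&ok));
	return 0;
}
```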