Message ID | 20240507155817.3951344-10-pbonzini@redhat.com
---|---
State | New, archived
Series | KVM: x86/mmu: Page fault and MMIO cleanups
On 5/7/2024 11:58 PM, Paolo Bonzini wrote:
> From: Sean Christopherson <seanjc@google.com>
>
> WARN and skip the emulated MMIO fastpath if a private, reserved page fault
> is encountered, as private+reserved should be an impossible combination
> (KVM should never create an MMIO SPTE for a private access).
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Message-ID: <20240228024147.41573-9-seanjc@google.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>

> ---
>  arch/x86/kvm/mmu/mmu.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index d52794663290..0d884d0b0f35 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5819,6 +5819,9 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
>
>  	r = RET_PF_INVALID;
>  	if (unlikely(error_code & PFERR_RSVD_MASK)) {
> +		if (WARN_ON_ONCE(error_code & PFERR_PRIVATE_ACCESS))
> +			return -EFAULT;
> +
>  		r = handle_mmio_page_fault(vcpu, cr2_or_gpa, direct);
>  		if (r == RET_PF_EMULATE)
>  			goto emulate;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d52794663290..0d884d0b0f35 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5819,6 +5819,9 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 
 	r = RET_PF_INVALID;
 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
+		if (WARN_ON_ONCE(error_code & PFERR_PRIVATE_ACCESS))
+			return -EFAULT;
+
 		r = handle_mmio_page_fault(vcpu, cr2_or_gpa, direct);
 		if (r == RET_PF_EMULATE)
 			goto emulate;
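For readers following along outside the kernel tree, below is a minimal, self-contained sketch of the guard's logic: a reserved-bit fault (PFERR_RSVD_MASK) means KVM installed an MMIO SPTE, which must never coincide with a private access (PFERR_PRIVATE_ACCESS), so the impossible combination warns once and fails instead of reaching the MMIO fastpath. The check_mmio_fastpath() wrapper, the WARN_ON_ONCE() stub, and the hardcoded -14 (EFAULT's value) are illustrative stand-ins, not kernel code; the bit positions follow KVM's definitions but are reproduced here only for illustration.

```c
/*
 * Standalone sketch of the guard added by this patch (not kernel code).
 * Bit positions are copied from KVM's x86 definitions for illustration;
 * WARN_ON_ONCE() is stubbed so the snippet builds in userspace with gcc.
 */
#include <stdint.h>
#include <stdio.h>

#define PFERR_RSVD_MASK       (1ULL << 3)   /* reserved bit set => MMIO SPTE fault */
#define PFERR_PRIVATE_ACCESS  (1ULL << 49)  /* KVM-synthetic: fault on private memory */

/* Stub standing in for the kernel's WARN_ON_ONCE(); warns on every hit here. */
#define WARN_ON_ONCE(cond) \
	({ int __c = !!(cond); if (__c) fprintf(stderr, "WARN: %s\n", #cond); __c; })

/* Hypothetical wrapper mirroring the hunk in kvm_mmu_page_fault(). */
static int check_mmio_fastpath(uint64_t error_code)
{
	if (error_code & PFERR_RSVD_MASK) {
		/*
		 * KVM never creates an MMIO SPTE for a private access, so a
		 * private+reserved fault is impossible: warn and bail rather
		 * than emulating MMIO for private memory.
		 */
		if (WARN_ON_ONCE(error_code & PFERR_PRIVATE_ACCESS))
			return -14; /* -EFAULT */
		/* ...the real code proceeds to handle_mmio_page_fault()... */
	}
	return 0;
}

int main(void)
{
	/* Normal MMIO-style fault: falls through to the (elided) fastpath. */
	printf("%d\n", check_mmio_fastpath(PFERR_RSVD_MASK));
	/* Impossible combination: warns and returns the -EFAULT stand-in. */
	printf("%d\n", check_mmio_fastpath(PFERR_RSVD_MASK | PFERR_PRIVATE_ACCESS));
	return 0;
}
```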