Message ID | 20240507155817.3951344-13-pbonzini@redhat.com (mailing list archive)
---|---
State | New, archived
Series | KVM: x86/mmu: Page fault and MMIO cleanups
On 5/7/2024 11:58 PM, Paolo Bonzini wrote:
> From: Sean Christopherson <seanjc@google.com>
>
> Explicitly detect and disallow private accesses to emulated MMIO in
> kvm_handle_noslot_fault() instead of relying on kvm_faultin_pfn_private()
> to perform the check. This will allow the page fault path to go straight
> to kvm_handle_noslot_fault() without bouncing through __kvm_faultin_pfn().
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Message-ID: <20240228024147.41573-12-seanjc@google.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>

> ---
>  arch/x86/kvm/mmu/mmu.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index a8e14c2b68a7..fdae6d19e72b 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3262,6 +3262,11 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
>  {
>  	gva_t gva = fault->is_tdp ? 0 : fault->addr;
>
> +	if (fault->is_private) {
> +		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> +		return -EFAULT;
> +	}
> +
>  	vcpu_cache_mmio_info(vcpu, gva, fault->gfn,
>  			     access & shadow_mmio_access_mask);
>
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a8e14c2b68a7..fdae6d19e72b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3262,6 +3262,11 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
 {
 	gva_t gva = fault->is_tdp ? 0 : fault->addr;
 
+	if (fault->is_private) {
+		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
+		return -EFAULT;
+	}
+
 	vcpu_cache_mmio_info(vcpu, gva, fault->gfn,
 			     access & shadow_mmio_access_mask);
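
For readers outside the kernel tree: the -EFAULT return here pairs with a prepared KVM_EXIT_MEMORY_FAULT exit, so a private access that lands on emulated MMIO (a no-slot region, which KVM cannot emulate for a guest whose memory contents are inaccessible) is punted to userspace instead of being handled in the kernel. Below is a minimal sketch of what the exit-preparation helpers boil down to, simplified from the mmu_internal.h and kvm_host.h helpers in kernels of this era; it is illustrative, not a verbatim copy of the upstream code:

```c
/*
 * Simplified sketch of the KVM_EXIT_MEMORY_FAULT preparation path
 * (based on include/linux/kvm_host.h and arch/x86/kvm/mmu/mmu_internal.h;
 * surrounding struct definitions elided).
 */
static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
						 gpa_t gpa, gpa_t size,
						 bool is_write, bool is_exec,
						 bool is_private)
{
	/* Fill the run struct so userspace sees a memory-fault exit. */
	vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
	vcpu->run->memory_fault.gpa = gpa;
	vcpu->run->memory_fault.size = size;
	vcpu->run->memory_fault.flags = 0;
	if (is_private)
		vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
}

static inline void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
						     struct kvm_page_fault *fault)
{
	/*
	 * Report the faulting GFN as a gpa/size pair.  The caller then
	 * returns -EFAULT, and the VMM observes KVM_EXIT_MEMORY_FAULT
	 * with the PRIVATE flag set for the access rejected above.
	 */
	kvm_prepare_memory_fault_exit(vcpu, fault->gfn << PAGE_SHIFT,
				      PAGE_SIZE, fault->write, fault->exec,
				      fault->is_private);
}
```

Doing this check directly in kvm_handle_noslot_fault() keeps the rejection at the point where "no memslot" is decided, which is what lets the later cleanup route no-slot faults straight there without first detouring through __kvm_faultin_pfn().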