| Message ID | 20200605213853.14959-12-sean.j.christopherson@intel.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | KVM: Cleanup and unify kvm_mmu_memory_cache usage |
On Fri, Jun 5, 2020 at 2:39 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Set __GFP_ZERO for the shadow page memory cache and drop the explicit
> clear_page() from kvm_mmu_get_page(). This moves the cost of zeroing a
> page to the allocation time of the physical page, i.e. when topping up
> the memory caches, and thus avoids having to zero out an entire page
> while holding mmu_lock.
>
> Cc: Peter Feiner <pfeiner@google.com>
> Cc: Peter Shier <pshier@google.com>
> Cc: Junaid Shahid <junaids@google.com>
> Cc: Jim Mattson <jmattson@google.com>
> Suggested-by: Ben Gardon <bgardon@google.com>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  arch/x86/kvm/mmu/mmu.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6b0ec9060786..a8f8eebf67df 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2545,7 +2545,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>  		if (level > PG_LEVEL_4K && need_sync)
>  			flush |= kvm_sync_pages(vcpu, gfn, &invalid_list);
>  	}
> -	clear_page(sp->spt);
>  	trace_kvm_mmu_get_page(sp, true);
>
>  	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
> @@ -5687,6 +5686,8 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>  	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
>  	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
>
> +	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +
>  	vcpu->arch.mmu = &vcpu->arch.root_mmu;
>  	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
>
> --
> 2.26.0
>
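For readers unfamiliar with the topup/alloc pattern the patch relies on, below is a minimal userspace sketch in the spirit of kvm_mmu_memory_cache. It is illustrative only: calloc() stands in for alloc_page(GFP_KERNEL | __GFP_ZERO), and the struct layout, capacity, and function names are simplified assumptions, not the kernel's actual definitions.

```c
/*
 * Userspace sketch of the kvm_mmu_memory_cache pattern: objects are
 * allocated (and, if gfp_zero is set, zeroed) while topping up outside
 * the lock, then handed out in O(1) under the lock. calloc() models
 * alloc_page(GFP_KERNEL | __GFP_ZERO); names and sizes are illustrative.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

#define CACHE_CAPACITY	64	/* hypothetical; not the kernel's value */
#define PAGE_SIZE_BYTES	4096

struct memory_cache {
	bool gfp_zero;		/* if set, objects are pre-zeroed at topup */
	int nobjs;
	void *objects[CACHE_CAPACITY];
};

/* Runs before the lock is taken: the cost of zeroing is paid here. */
static int cache_topup(struct memory_cache *mc, int min)
{
	while (mc->nobjs < min && mc->nobjs < CACHE_CAPACITY) {
		void *obj = mc->gfp_zero ? calloc(1, PAGE_SIZE_BYTES)
					 : malloc(PAGE_SIZE_BYTES);
		if (!obj)
			return -1;
		mc->objects[mc->nobjs++] = obj;
	}
	return 0;
}

/* Runs under the lock: O(1) pop, no allocation and no clear_page(). */
static void *cache_alloc(struct memory_cache *mc)
{
	assert(mc->nobjs > 0);
	return mc->objects[--mc->nobjs];
}

int main(void)
{
	struct memory_cache mc = { .gfp_zero = true };
	unsigned char *page;

	/* Top up outside the lock; pages come back already zeroed... */
	if (cache_topup(&mc, 4))
		return 1;

	/* ...so the consumer can skip the explicit clear_page(). */
	page = cache_alloc(&mc);
	assert(page[0] == 0 && page[PAGE_SIZE_BYTES - 1] == 0);

	free(page);
	while (mc.nobjs)
		free(mc.objects[--mc.nobjs]);
	return 0;
}
```

With gfp_zero set once at vCPU creation, as the patch does for mmu_shadow_page_cache, every page popped under mmu_lock is already clear, which is why the clear_page() call in kvm_mmu_get_page() can be dropped without changing behavior.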