Message ID: 20200605213853.14959-11-sean.j.christopherson@intel.com (mailing list archive)
State: New, archived
Series: KVM: Cleanup and unify kvm_mmu_memory_cache usage
On Fri, Jun 5, 2020 at 2:39 PM Sean Christopherson <sean.j.christopherson@intel.com> wrote:
>
> Add a gfp_zero flag to 'struct kvm_mmu_memory_cache' and use it to
> control __GFP_ZERO instead of hardcoding a call to kmem_cache_zalloc().
> A future patch needs such a flag for the __get_free_page() path, as
> gfn arrays do not need/want the allocator to zero the memory.  Convert
> the kmem_cache paths to __GFP_ZERO now so as to avoid a weird and
> inconsistent API in the future.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/mmu/mmu.c          | 7 ++++++-
>  2 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index e7a427547557..fb99e6776e27 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -251,6 +251,7 @@ struct kvm_kernel_irq_routing_entry;
>   */
>  struct kvm_mmu_memory_cache {
>         int nobjs;
> +       gfp_t gfp_zero;

This would make more sense to me if it could be used for general extra
gfp flags and was called gfp_flags or something, or it was a boolean
that was later translated into the flag being set. Storing the
gfp_zero flag here is a little counter-intuitive. Probably not worth
changing unless you're sending out a v2 for some other reason.

>         struct kmem_cache *kmem_cache;
>         void *objects[KVM_NR_MEM_OBJS];
>  };
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index d245acece3cd..6b0ec9060786 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1063,8 +1063,10 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
>  static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
>                                                gfp_t gfp_flags)
>  {
> +       gfp_flags |= mc->gfp_zero;
> +
>         if (mc->kmem_cache)
> -               return kmem_cache_zalloc(mc->kmem_cache, gfp_flags);
> +               return kmem_cache_alloc(mc->kmem_cache, gfp_flags);
>         else
>                 return (void *)__get_free_page(gfp_flags);
>  }
> @@ -5680,7 +5682,10 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>         int ret;
>
>         vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
> +       vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
> +
>         vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
> +       vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
>
>         vcpu->arch.mmu = &vcpu->arch.root_mmu;
>         vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> --
> 2.26.0
>
On Wed, Jun 10, 2020 at 11:57:32AM -0700, Ben Gardon wrote:
> > ---
> >  arch/x86/include/asm/kvm_host.h | 1 +
> >  arch/x86/kvm/mmu/mmu.c          | 7 ++++++-
> >  2 files changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index e7a427547557..fb99e6776e27 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -251,6 +251,7 @@ struct kvm_kernel_irq_routing_entry;
> >   */
> >  struct kvm_mmu_memory_cache {
> >         int nobjs;
> > +       gfp_t gfp_zero;
> This would make more sense to me if it could be used for general extra
> gfp flags and was called gfp_flags or something, or it was a boolean
> that was later translated into the flag being set. Storing the
> gfp_zero flag here is a little counter-intuitive. Probably not worth
> changing unless you're sending out a v2 for some other reason.

Ideally, this would be a generic gfp_flags field, but that's basically a
non-starter for patch 5, which uses GFP_ATOMIC for the "oh crap the cache
is empty" error handling.  Allowing arbitrary flags would be a mess.

I went with storing a full gfp_t because that produces more optimal code.
This isn't a super critical path and it's only a few cycles, but it seems
worthwhile given the frequency with which this code will be called, and
since this happens under mmu_lock.

  348                 gfp_flags |= mc->gfp_zero;
     0x00000000000058ab <+59>:    mov    0x4(%rbx),%eax
     0x00000000000058ae <+62>:    or     $0x400cc0,%eax

versus

  349                 gfp_flags |= __GFP_ZERO;
     0x00000000000058a7 <+55>:    cmpb   $0x1,0x4(%rbx)
     0x00000000000058ab <+59>:    mov    0x8(%rbx),%rdi   <-- unrelated interleaved code
     0x00000000000058af <+63>:    sbb    %eax,%eax
     0x00000000000058b1 <+65>:    xor    %al,%al
     0x00000000000058b3 <+67>:    add    $0x400dc0,%eax
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e7a427547557..fb99e6776e27 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -251,6 +251,7 @@ struct kvm_kernel_irq_routing_entry;
  */
 struct kvm_mmu_memory_cache {
        int nobjs;
+       gfp_t gfp_zero;
        struct kmem_cache *kmem_cache;
        void *objects[KVM_NR_MEM_OBJS];
 };
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d245acece3cd..6b0ec9060786 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1063,8 +1063,10 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
 static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
                                                gfp_t gfp_flags)
 {
+       gfp_flags |= mc->gfp_zero;
+
        if (mc->kmem_cache)
-               return kmem_cache_zalloc(mc->kmem_cache, gfp_flags);
+               return kmem_cache_alloc(mc->kmem_cache, gfp_flags);
        else
                return (void *)__get_free_page(gfp_flags);
 }
@@ -5680,7 +5682,10 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
        int ret;

        vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
+       vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;
+
        vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
+       vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;

        vcpu->arch.mmu = &vcpu->arch.root_mmu;
        vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
Add a gfp_zero flag to 'struct kvm_mmu_memory_cache' and use it to
control __GFP_ZERO instead of hardcoding a call to kmem_cache_zalloc().
A future patch needs such a flag for the __get_free_page() path, as
gfn arrays do not need/want the allocator to zero the memory.  Convert
the kmem_cache paths to __GFP_ZERO now so as to avoid a weird and
inconsistent API in the future.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/mmu/mmu.c          | 7 ++++++-
 2 files changed, 7 insertions(+), 1 deletion(-)