From patchwork Mon Jan 28 14:48:41 2019
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10783847
From: Christoffer Dall
To: kvm@vger.kernel.org
Cc: kvmarm@lists.cs.columbia.edu, Christoffer Dall, James Hogan,
    Paolo Bonzini, Radim Krčmář, Joerg Roedel, Marc Zyngier,
    Paul Mackerras, Christian Borntraeger, Anshuman Khandual,
    Suzuki K Poulose
Subject: [PATCH 1/3] KVM: x86: Move mmu_memory_cache functions to common code
Date: Mon, 28 Jan 2019 15:48:41 +0100
Message-Id: <20190128144843.11635-2-christoffer.dall@arm.com>
In-Reply-To: <20190128144843.11635-1-christoffer.dall@arm.com>
References: <20190128144843.11635-1-christoffer.dall@arm.com>

We are currently duplicating the mmu memory cache functionality quite
heavily between the architectures that support KVM. As a first step,
move the x86 implementation (which seems to have the most recently
maintained version of the mmu memory cache) to common code.

We rename the functions and data types to have a kvm_ prefix for
anything exported as a symbol to the rest of the kernel, and take the
chance to rename memory_cache to memcache to avoid overly long lines.
This is a bit tedious on the call sites but ends up looking more
palatable.

We also introduce an arch-specific kvm_types.h which can be used to
define the architecture-specific GFP flags for allocating memory to
the memory cache, and to specify how many objects are required in the
memory cache. These are the two points where the current
implementations diverge across architectures.
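To make that per-architecture contract concrete before diving into the
diff, here is a rough sketch (illustrative, not code quoted from the
patch) of the two pieces an architecture provides and how the common
cache is then used. The macro and function names match the ones
introduced in this series, and the minimum of 4 mirrors the value used
for the x86 page-header cache below; note that topup only fails when it
cannot reach the requested minimum, even if it cannot fill the cache
completely:

	/* asm/kvm_types.h: the two points where architectures diverge. */
	#define KVM_NR_MEM_OBJS		40			/* cache capacity */
	#define KVM_MMU_CACHE_GFP	GFP_KERNEL		/* kmem_cache-backed objects */
	#define KVM_MMU_CACHE_PAGE_GFP	GFP_KERNEL_ACCOUNT	/* whole-page objects */

	/* Fault path: top up the cache while sleeping is still allowed... */
	r = kvm_mmu_topup_memcache(&vcpu->arch.mmu_page_header_cache,
				   mmu_page_header_cache, 4);
	if (r)
		return r;

	spin_lock(&vcpu->kvm->mmu_lock);
	/* ...so that allocations under the spinlock cannot fail. */
	sp = kvm_mmu_memcache_alloc(&vcpu->arch.mmu_page_header_cache);
	spin_unlock(&vcpu->kvm->mmu_lock);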
Since kvm_host.h defines structures with fields of the memcache object,
we define the memcache structure in kvm_types.h, and we include the
architecture-specific kvm_types.h so that kvm_host.h knows the size of
the memcache object. For now, this patch only defines the structure,
and only when the architecture-specific kvm_types.h defines
KVM_NR_MEM_OBJS. As we move each architecture to the common
implementation, this condition will eventually go away.

Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_types.h     |  5 ++
 arch/arm64/include/asm/kvm_types.h   |  6 ++
 arch/mips/include/asm/kvm_types.h    |  5 ++
 arch/powerpc/include/asm/kvm_types.h |  5 ++
 arch/s390/include/asm/kvm_types.h    |  5 ++
 arch/x86/include/asm/kvm_host.h      | 17 +----
 arch/x86/include/asm/kvm_types.h     | 10 +++
 arch/x86/kvm/mmu.c                   | 97 ++++++----------------------
 arch/x86/kvm/paging_tmpl.h           |  4 +-
 include/linux/kvm_host.h             |  9 +++
 include/linux/kvm_types.h            | 12 ++++
 virt/kvm/kvm_main.c                  | 58 +++++++++++++++++
 12 files changed, 139 insertions(+), 94 deletions(-)
 create mode 100644 arch/arm/include/asm/kvm_types.h
 create mode 100644 arch/arm64/include/asm/kvm_types.h
 create mode 100644 arch/mips/include/asm/kvm_types.h
 create mode 100644 arch/powerpc/include/asm/kvm_types.h
 create mode 100644 arch/s390/include/asm/kvm_types.h
 create mode 100644 arch/x86/include/asm/kvm_types.h

diff --git a/arch/arm/include/asm/kvm_types.h b/arch/arm/include/asm/kvm_types.h
new file mode 100644
index 000000000000..bc389f82e88d
--- /dev/null
+++ b/arch/arm/include/asm/kvm_types.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM_KVM_TYPES_H
+#define _ASM_ARM_KVM_TYPES_H
+
+#endif /* _ASM_ARM_KVM_TYPES_H */
diff --git a/arch/arm64/include/asm/kvm_types.h b/arch/arm64/include/asm/kvm_types.h
new file mode 100644
index 000000000000..d0987007d581
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_types.h
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM64_KVM_TYPES_H
+#define _ASM_ARM64_KVM_TYPES_H
+
+#endif /* _ASM_ARM64_KVM_TYPES_H */
+
diff --git a/arch/mips/include/asm/kvm_types.h b/arch/mips/include/asm/kvm_types.h
new file mode 100644
index 000000000000..5efeb32a5926
--- /dev/null
+++ b/arch/mips/include/asm/kvm_types.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_MIPS_KVM_TYPES_H
+#define _ASM_MIPS_KVM_TYPES_H
+
+#endif /* _ASM_MIPS_KVM_TYPES_H */
diff --git a/arch/powerpc/include/asm/kvm_types.h b/arch/powerpc/include/asm/kvm_types.h
new file mode 100644
index 000000000000..f627eceaa314
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_types.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_KVM_TYPES_H
+#define _ASM_POWERPC_KVM_TYPES_H
+
+#endif /* _ASM_POWERPC_KVM_TYPES_H */
diff --git a/arch/s390/include/asm/kvm_types.h b/arch/s390/include/asm/kvm_types.h
new file mode 100644
index 000000000000..b66a81f8a354
--- /dev/null
+++ b/arch/s390/include/asm/kvm_types.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_S390_KVM_TYPES_H
+#define _ASM_S390_KVM_TYPES_H
+
+#endif /* _ASM_S390_KVM_TYPES_H */
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4660ce90de7f..5c12cba8c2b1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -179,8 +179,6 @@ enum {

 #include

-#define KVM_NR_MEM_OBJS 40
-
 #define KVM_NR_DB_REGS	4

 #define DR6_BD		(1 << 13)
@@ -238,15 +236,6 @@ enum {

 struct kvm_kernel_irq_routing_entry;

-/*
- * We don't want allocation failures within the mmu code, so we preallocate
- * enough memory for a single page fault in a cache.
- */
-struct kvm_mmu_memory_cache {
-	int nobjs;
-	void *objects[KVM_NR_MEM_OBJS];
-};
-
 /*
  * the pages used as guest page table on soft mmu are tracked by
  * kvm_memory_slot.arch.gfn_track which is 16 bits, so the role bits used
@@ -600,9 +589,9 @@ struct kvm_vcpu_arch {
 	 */
 	struct kvm_mmu *walk_mmu;

-	struct kvm_mmu_memory_cache mmu_pte_list_desc_cache;
-	struct kvm_mmu_memory_cache mmu_page_cache;
-	struct kvm_mmu_memory_cache mmu_page_header_cache;
+	struct kvm_mmu_memcache mmu_pte_list_desc_cache;
+	struct kvm_mmu_memcache mmu_page_cache;
+	struct kvm_mmu_memcache mmu_page_header_cache;

 	/*
 	 * QEMU userspace and the guest each have their own FPU state.
diff --git a/arch/x86/include/asm/kvm_types.h b/arch/x86/include/asm/kvm_types.h
new file mode 100644
index 000000000000..f71fe5556b6f
--- /dev/null
+++ b/arch/x86/include/asm/kvm_types.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_KVM_TYPES_H
+#define _ASM_X86_KVM_TYPES_H
+
+#define KVM_NR_MEM_OBJS 40
+
+#define KVM_MMU_CACHE_GFP GFP_KERNEL
+#define KVM_MMU_CACHE_PAGE_GFP GFP_KERNEL_ACCOUNT
+
+#endif /* _ASM_X86_KVM_TYPES_H */
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ce770b446238..cf0024849404 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -951,94 +951,35 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
 	local_irq_enable();
 }

-static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
-				  struct kmem_cache *base_cache, int min)
-{
-	void *obj;
-
-	if (cache->nobjs >= min)
-		return 0;
-	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
-		obj = kmem_cache_zalloc(base_cache, GFP_KERNEL);
-		if (!obj)
-			return cache->nobjs >= min ? 0 : -ENOMEM;
-		cache->objects[cache->nobjs++] = obj;
-	}
-	return 0;
-}
-
-static int mmu_memory_cache_free_objects(struct kvm_mmu_memory_cache *cache)
-{
-	return cache->nobjs;
-}
-
-static void mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc,
-				  struct kmem_cache *cache)
-{
-	while (mc->nobjs)
-		kmem_cache_free(cache, mc->objects[--mc->nobjs]);
-}
-
-static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
-				       int min)
-{
-	void *page;
-
-	if (cache->nobjs >= min)
-		return 0;
-	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
-		page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT);
-		if (!page)
-			return cache->nobjs >= min ? 0 : -ENOMEM;
-		cache->objects[cache->nobjs++] = page;
-	}
-	return 0;
-}
-
-static void mmu_free_memory_cache_page(struct kvm_mmu_memory_cache *mc)
-{
-	while (mc->nobjs)
-		free_page((unsigned long)mc->objects[--mc->nobjs]);
-}
-
-static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
+static int mmu_topup_memcaches(struct kvm_vcpu *vcpu)
 {
 	int r;

-	r = mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
+	r = kvm_mmu_topup_memcache(&vcpu->arch.mmu_pte_list_desc_cache,
				   pte_list_desc_cache, 8 + PTE_PREFETCH_NUM);
 	if (r)
 		goto out;
-	r = mmu_topup_memory_cache_page(&vcpu->arch.mmu_page_cache, 8);
+	r = kvm_mmu_topup_memcache_page(&vcpu->arch.mmu_page_cache, 8);
 	if (r)
 		goto out;
-	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
+	r = kvm_mmu_topup_memcache(&vcpu->arch.mmu_page_header_cache,
				   mmu_page_header_cache, 4);
 out:
 	return r;
 }

-static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
+static void mmu_free_memcaches(struct kvm_vcpu *vcpu)
 {
-	mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
+	kvm_mmu_free_memcache(&vcpu->arch.mmu_pte_list_desc_cache,
				pte_list_desc_cache);
-	mmu_free_memory_cache_page(&vcpu->arch.mmu_page_cache);
-	mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache,
+	kvm_mmu_free_memcache_page(&vcpu->arch.mmu_page_cache);
+	kvm_mmu_free_memcache(&vcpu->arch.mmu_page_header_cache,
				mmu_page_header_cache);
 }

-static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
-{
-	void *p;
-
-	BUG_ON(!mc->nobjs);
-	p = mc->objects[--mc->nobjs];
-	return p;
-}
-
 static struct pte_list_desc *mmu_alloc_pte_list_desc(struct kvm_vcpu *vcpu)
 {
-	return mmu_memory_cache_alloc(&vcpu->arch.mmu_pte_list_desc_cache);
+	return kvm_mmu_memcache_alloc(&vcpu->arch.mmu_pte_list_desc_cache);
 }

 static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc)
@@ -1358,10 +1299,10 @@ static struct kvm_rmap_head *gfn_to_rmap(struct kvm *kvm, gfn_t gfn,

 static bool rmap_can_add(struct kvm_vcpu *vcpu)
 {
-	struct kvm_mmu_memory_cache *cache;
+	struct kvm_mmu_memcache *cache;

 	cache = &vcpu->arch.mmu_pte_list_desc_cache;
-	return mmu_memory_cache_free_objects(cache);
+	return kvm_mmu_memcache_free_objects(cache);
 }

 static int rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
@@ -2044,10 +1985,10 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
 {
 	struct kvm_mmu_page *sp;

-	sp = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
-	sp->spt = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_cache);
+	sp = kvm_mmu_memcache_alloc(&vcpu->arch.mmu_page_header_cache);
+	sp->spt = kvm_mmu_memcache_alloc(&vcpu->arch.mmu_page_cache);
 	if (!direct)
-		sp->gfns = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_cache);
+		sp->gfns = kvm_mmu_memcache_alloc(&vcpu->arch.mmu_page_cache);
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);

 	/*
@@ -3961,7 +3902,7 @@ static int nonpaging_page_fault(struct kvm_vcpu *vcpu, gva_t gva,
 	if (page_fault_handle_page_track(vcpu, error_code, gfn))
 		return RET_PF_EMULATE;

-	r = mmu_topup_memory_caches(vcpu);
+	r = mmu_topup_memcaches(vcpu);
 	if (r)
 		return r;

@@ -4090,7 +4031,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 	if (page_fault_handle_page_track(vcpu, error_code, gfn))
 		return RET_PF_EMULATE;

-	r = mmu_topup_memory_caches(vcpu);
+	r = mmu_topup_memcaches(vcpu);
 	if (r)
 		return r;

@@ -5062,7 +5003,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 {
 	int r;

-	r = mmu_topup_memory_caches(vcpu);
+	r = mmu_topup_memcaches(vcpu);
 	if (r)
 		goto out;
 	r = mmu_alloc_roots(vcpu);
@@ -5240,7 +5181,7 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	 * or not since pte prefetch is skiped if it does not have
 	 * enough objects in the cache.
 	 */
-	mmu_topup_memory_caches(vcpu);
+	mmu_topup_memcaches(vcpu);

 	spin_lock(&vcpu->kvm->mmu_lock);

@@ -6052,7 +5993,7 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_unload(vcpu);
 	free_mmu_pages(vcpu);
-	mmu_free_memory_caches(vcpu);
+	mmu_free_memcaches(vcpu);
 }

 void kvm_mmu_module_exit(void)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6bdca39829bc..c32b603639a9 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -747,7 +747,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,

 	pgprintk("%s: addr %lx err %x\n", __func__, addr, error_code);

-	r = mmu_topup_memory_caches(vcpu);
+	r = mmu_topup_memcaches(vcpu);
 	if (r)
 		return r;

@@ -870,7 +870,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 	 * No need to check return value here, rmap_can_add() can
 	 * help us to skip pte prefetch later.
 	 */
-	mmu_topup_memory_caches(vcpu);
+	mmu_topup_memcaches(vcpu);

 	if (!VALID_PAGE(root_hpa)) {
 		WARN_ON(1);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c38cc5eb7e73..f5768e5d33f7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -739,6 +739,15 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
 void kvm_flush_remote_tlbs(struct kvm *kvm);
 void kvm_reload_remote_mmus(struct kvm *kvm);

+int kvm_mmu_topup_memcache(struct kvm_mmu_memcache *cache,
+			   struct kmem_cache *base_cache, int min);
+int kvm_mmu_memcache_free_objects(struct kvm_mmu_memcache *cache);
+void kvm_mmu_free_memcache(struct kvm_mmu_memcache *mc,
+			   struct kmem_cache *cache);
+int kvm_mmu_topup_memcache_page(struct kvm_mmu_memcache *cache, int min);
+void kvm_mmu_free_memcache_page(struct kvm_mmu_memcache *mc);
+void *kvm_mmu_memcache_alloc(struct kvm_mmu_memcache *mc);
+
 bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
				 unsigned long *vcpu_bitmap, cpumask_var_t tmp);
 bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req);
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 8bf259dae9f6..a314f0f08e22 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -32,6 +32,7 @@ struct kvm_memslots;

 enum kvm_mr_change;

+#include <asm/kvm_types.h>
 #include <linux/types.h>

 /*
@@ -63,4 +64,15 @@ struct gfn_to_hva_cache {
 	struct kvm_memory_slot *memslot;
 };

+#ifdef KVM_NR_MEM_OBJS
+/*
+ * We don't want allocation failures within the mmu code, so we preallocate
+ * enough memory for a single page fault in a cache.
+ */
+struct kvm_mmu_memcache {
+	int nobjs;
+	void *objects[KVM_NR_MEM_OBJS];
+};
+#endif
+
 #endif /* __KVM_TYPES_H__ */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5ecea812cb6a..ae6454a77aa8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -285,6 +285,64 @@ void kvm_reload_remote_mmus(struct kvm *kvm)
 	kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
 }

+int kvm_mmu_topup_memcache(struct kvm_mmu_memcache *cache,
+			   struct kmem_cache *base_cache, int min)
+{
+	void *obj;
+
+	if (cache->nobjs >= min)
+		return 0;
+	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
+		obj = kmem_cache_zalloc(base_cache, KVM_MMU_CACHE_GFP);
+		if (!obj)
+			return cache->nobjs >= min ? 0 : -ENOMEM;
+		cache->objects[cache->nobjs++] = obj;
+	}
+	return 0;
+}
+
+int kvm_mmu_memcache_free_objects(struct kvm_mmu_memcache *cache)
+{
+	return cache->nobjs;
+}
+
+void kvm_mmu_free_memcache(struct kvm_mmu_memcache *mc,
+			   struct kmem_cache *cache)
+{
+	while (mc->nobjs)
+		kmem_cache_free(cache, mc->objects[--mc->nobjs]);
+}
+
+int kvm_mmu_topup_memcache_page(struct kvm_mmu_memcache *cache, int min)
+{
+	void *page;
+
+	if (cache->nobjs >= min)
+		return 0;
+	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
+		page = (void *)__get_free_page(KVM_MMU_CACHE_PAGE_GFP);
+		if (!page)
+			return cache->nobjs >= min ? 0 : -ENOMEM;
+		cache->objects[cache->nobjs++] = page;
+	}
+	return 0;
+}
+
+void kvm_mmu_free_memcache_page(struct kvm_mmu_memcache *mc)
+{
+	while (mc->nobjs)
+		free_page((unsigned long)mc->objects[--mc->nobjs]);
+}
+
+void *kvm_mmu_memcache_alloc(struct kvm_mmu_memcache *mc)
+{
+	void *p;
+
+	BUG_ON(!mc->nobjs);
+	p = mc->objects[--mc->nobjs];
+	return p;
+}
+
 int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
 {
 	struct page *page;

From patchwork Mon Jan 28 14:48:42 2019
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10783849
From: Christoffer Dall
To: kvm@vger.kernel.org
Cc: kvmarm@lists.cs.columbia.edu, Christoffer Dall, James Hogan,
    Paolo Bonzini, Radim Krčmář, Joerg Roedel, Marc Zyngier,
    Paul Mackerras, Christian Borntraeger, Anshuman Khandual,
    Suzuki K Poulose
Subject: [PATCH 2/3] KVM: arm/arm64: Move to common kvm_mmu_memcache infrastructure
Date: Mon, 28 Jan 2019 15:48:42 +0100
Message-Id: <20190128144843.11635-3-christoffer.dall@arm.com>
In-Reply-To: <20190128144843.11635-1-christoffer.dall@arm.com>
References: <20190128144843.11635-1-christoffer.dall@arm.com>
Now that we have a common mmu memcache implementation, we can reuse it
for arm and arm64.

The common implementation has a slightly different behavior when
allocating objects under high memory pressure; whereas the current
arm/arm64 implementation will give up and return -ENOMEM if the full
size of the cache cannot be allocated during topup, the common
implementation is happy with any allocation between min and max. There
should be no architecture-specific requirement for doing it one way or
the other, and it's in fact better to enforce a cross-architecture KVM
policy on this behavior.

Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_host.h    | 13 +-----
 arch/arm/include/asm/kvm_mmu.h     |  2 +-
 arch/arm/include/asm/kvm_types.h   |  5 +++
 arch/arm64/include/asm/kvm_host.h  | 13 +-----
 arch/arm64/include/asm/kvm_mmu.h   |  2 +-
 arch/arm64/include/asm/kvm_types.h |  5 +++
 virt/kvm/arm/arm.c                 |  2 +-
 virt/kvm/arm/mmu.c                 | 68 ++++++++----------------------
 8 files changed, 32 insertions(+), 78 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ca56537b61bc..bf6b6d027ff0 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -83,17 +83,6 @@ struct kvm_arch {
 	u32 psci_version;
 };

-#define KVM_NR_MEM_OBJS 40
-
-/*
- * We don't want allocation failures within the mmu code, so we preallocate
- * enough memory for a single page fault in a cache.
- */
-struct kvm_mmu_memory_cache {
-	int nobjs;
-	void *objects[KVM_NR_MEM_OBJS];
-};
-
 struct kvm_vcpu_fault_info {
 	u32 hsr;		/* Hyp Syndrome Register */
 	u32 hxfar;		/* Hyp Data/Inst. Fault Address Register */
@@ -184,7 +173,7 @@ struct kvm_vcpu_arch {
 	struct kvm_decode mmio_decode;

 	/* Cache some mmu pages needed inside spinlock regions */
-	struct kvm_mmu_memory_cache mmu_page_cache;
+	struct kvm_mmu_memcache mmu_page_cache;

 	/* Detect first run of a vcpu */
 	bool has_run_once;
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 3a875fc1b63c..8877f53997c8 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -71,7 +71,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,

 int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);

-void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
+void kvm_mmu_free_memcaches(struct kvm_vcpu *vcpu);

 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
diff --git a/arch/arm/include/asm/kvm_types.h b/arch/arm/include/asm/kvm_types.h
index bc389f82e88d..44d53373fc84 100644
--- a/arch/arm/include/asm/kvm_types.h
+++ b/arch/arm/include/asm/kvm_types.h
@@ -2,4 +2,9 @@
 #ifndef _ASM_ARM_KVM_TYPES_H
 #define _ASM_ARM_KVM_TYPES_H

+#define KVM_NR_MEM_OBJS 40
+
+#define KVM_MMU_CACHE_GFP GFP_KERNEL
+#define KVM_MMU_CACHE_PAGE_GFP (GFP_KERNEL | __GFP_ZERO)
+
 #endif /* _ASM_ARM_KVM_TYPES_H */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7732d0ba4e60..1aa951de8338 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -82,17 +82,6 @@ struct kvm_arch {
 	u32 psci_version;
 };

-#define KVM_NR_MEM_OBJS 40
-
-/*
- * We don't want allocation failures within the mmu code, so we preallocate
- * enough memory for a single page fault in a cache.
- */
-struct kvm_mmu_memory_cache {
-	int nobjs;
-	void *objects[KVM_NR_MEM_OBJS];
-};
-
 struct kvm_vcpu_fault_info {
 	u32 esr_el2;		/* Hyp Syndrom Register */
 	u64 far_el2;		/* Hyp Fault Address Register */
@@ -285,7 +274,7 @@ struct kvm_vcpu_arch {
 	struct kvm_decode mmio_decode;

 	/* Cache some mmu pages needed inside spinlock regions */
-	struct kvm_mmu_memory_cache mmu_page_cache;
+	struct kvm_mmu_memcache mmu_page_cache;

 	/* Target CPU and feature flags */
 	int target;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 8af4b1befa42..dec55fa00e56 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -170,7 +170,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,

 int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);

-void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
+void kvm_mmu_free_memcaches(struct kvm_vcpu *vcpu);

 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
diff --git a/arch/arm64/include/asm/kvm_types.h b/arch/arm64/include/asm/kvm_types.h
index d0987007d581..2918b4693998 100644
--- a/arch/arm64/include/asm/kvm_types.h
+++ b/arch/arm64/include/asm/kvm_types.h
@@ -2,5 +2,10 @@
 #ifndef _ASM_ARM64_KVM_TYPES_H
 #define _ASM_ARM64_KVM_TYPES_H

+#define KVM_NR_MEM_OBJS 40
+
+#define KVM_MMU_CACHE_GFP GFP_KERNEL
+#define KVM_MMU_CACHE_PAGE_GFP (GFP_KERNEL | __GFP_ZERO)
+
 #endif /* _ASM_ARM64_KVM_TYPES_H */

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 9e350fd34504..89c89a151373 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -317,7 +317,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.has_run_once && unlikely(!irqchip_in_kernel(vcpu->kvm)))
 		static_branch_dec(&userspace_irqchip_in_use);

-	kvm_mmu_free_memory_caches(vcpu);
+	kvm_mmu_free_memcaches(vcpu);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
 	kvm_vcpu_uninit(vcpu);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index fbdf3ac2f001..da193b446261 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -134,38 +134,6 @@ static void stage2_dissolve_pud(struct kvm *kvm, phys_addr_t addr, pud_t *pudp)
 	put_page(virt_to_page(pudp));
 }

-static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
-				  int min, int max)
-{
-	void *page;
-
-	BUG_ON(max > KVM_NR_MEM_OBJS);
-	if (cache->nobjs >= min)
-		return 0;
-	while (cache->nobjs < max) {
-		page = (void *)__get_free_page(PGALLOC_GFP);
-		if (!page)
-			return -ENOMEM;
-		cache->objects[cache->nobjs++] = page;
-	}
-	return 0;
-}
-
-static void mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
-{
-	while (mc->nobjs)
-		free_page((unsigned long)mc->objects[--mc->nobjs]);
-}
-
-static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
-{
-	void *p;
-
-	BUG_ON(!mc || !mc->nobjs);
-	p = mc->objects[--mc->nobjs];
-	return p;
-}
-
 static void clear_stage2_pgd_entry(struct kvm *kvm, pgd_t *pgd, phys_addr_t addr)
 {
 	pud_t *pud_table __maybe_unused = stage2_pud_offset(kvm, pgd, 0UL);
@@ -1016,7 +984,7 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
 		free_pages_exact(pgd, stage2_pgd_size(kvm));
 }

-static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memcache *cache,
			     phys_addr_t addr)
 {
 	pgd_t *pgd;
@@ -1026,7 +994,7 @@ static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
 	if (stage2_pgd_none(kvm, *pgd)) {
 		if (!cache)
 			return NULL;
-		pud = mmu_memory_cache_alloc(cache);
+		pud = kvm_mmu_memcache_alloc(cache);
 		stage2_pgd_populate(kvm, pgd, pud);
 		get_page(virt_to_page(pgd));
 	}
@@ -1034,7 +1002,7 @@ static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
 	return stage2_pud_offset(kvm, pgd, addr);
 }

-static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memcache *cache,
			     phys_addr_t addr)
 {
 	pud_t *pud;
@@ -1047,7 +1015,7 @@ static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
 	if (stage2_pud_none(kvm, *pud)) {
 		if (!cache)
 			return NULL;
-		pmd = mmu_memory_cache_alloc(cache);
+		pmd = kvm_mmu_memcache_alloc(cache);
 		stage2_pud_populate(kvm, pud, pmd);
 		get_page(virt_to_page(pud));
 	}
@@ -1055,7 +1023,7 @@ static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
 	return stage2_pmd_offset(kvm, pud, addr);
 }

-static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
+static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memcache
			       *cache, phys_addr_t addr, const pmd_t *new_pmd)
 {
 	pmd_t *pmd, old_pmd;
@@ -1102,7 +1070,7 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	return 0;
 }

-static int stage2_set_pud_huge(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+static int stage2_set_pud_huge(struct kvm *kvm, struct kvm_mmu_memcache *cache,
			       phys_addr_t addr, const pud_t *new_pudp)
 {
 	pud_t *pudp, old_pud;
@@ -1194,7 +1162,7 @@ static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
 	return kvm_s2pte_exec(ptep);
 }

-static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memcache *cache,
			  phys_addr_t addr, const pte_t *new_pte,
			  unsigned long flags)
 {
@@ -1226,7 +1194,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 	if (stage2_pud_none(kvm, *pud)) {
 		if (!cache)
 			return 0; /* ignore calls from kvm_set_spte_hva */
-		pmd = mmu_memory_cache_alloc(cache);
+		pmd = kvm_mmu_memcache_alloc(cache);
 		stage2_pud_populate(kvm, pud, pmd);
 		get_page(virt_to_page(pud));
 	}
@@ -1251,7 +1219,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 	if (pmd_none(*pmd)) {
 		if (!cache)
 			return 0; /* ignore calls from kvm_set_spte_hva */
-		pte = mmu_memory_cache_alloc(cache);
+		pte = kvm_mmu_memcache_alloc(cache);
 		kvm_pmd_populate(pmd, pte);
 		get_page(virt_to_page(pmd));
 	}
@@ -1318,7 +1286,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 	phys_addr_t addr, end;
 	int ret = 0;
 	unsigned long pfn;
-	struct kvm_mmu_memory_cache cache = { 0, };
+	struct kvm_mmu_memcache cache = { 0, };

 	end = (guest_ipa + size + PAGE_SIZE - 1) & PAGE_MASK;
 	pfn = __phys_to_pfn(pa);
@@ -1329,9 +1297,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 		if (writable)
 			pte = kvm_s2pte_mkwrite(pte);

-		ret = mmu_topup_memory_cache(&cache,
-					     kvm_mmu_cache_min_pages(kvm),
-					     KVM_NR_MEM_OBJS);
+		ret = kvm_mmu_topup_memcache_page(&cache,
+						  kvm_mmu_cache_min_pages(kvm));
 		if (ret)
 			goto out;
 		spin_lock(&kvm->mmu_lock);
@@ -1345,7 +1312,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 	}

 out:
-	mmu_free_memory_cache(&cache);
+	kvm_mmu_free_memcache_page(&cache);
 	return ret;
 }

@@ -1662,7 +1629,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	unsigned long mmu_seq;
 	gfn_t gfn = fault_ipa >> PAGE_SHIFT;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+	struct kvm_mmu_memcache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
@@ -1706,8 +1673,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	up_read(&current->mm->mmap_sem);

 	/* We need minimum second+third level pages */
-	ret = mmu_topup_memory_cache(memcache, kvm_mmu_cache_min_pages(kvm),
-				     KVM_NR_MEM_OBJS);
+	ret = kvm_mmu_topup_memcache_page(memcache, kvm_mmu_cache_min_pages(kvm));
 	if (ret)
 		return ret;

@@ -2123,9 +2089,9 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
 	return handle_hva_to_gpa(kvm, hva, hva, kvm_test_age_hva_handler, NULL);
 }

-void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu)
+void kvm_mmu_free_memcaches(struct kvm_vcpu *vcpu)
 {
-	mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	kvm_mmu_free_memcache_page(&vcpu->arch.mmu_page_cache);
 }

 phys_addr_t kvm_mmu_get_httbr(void)

From patchwork Mon Jan 28 14:48:43 2019
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10783851
From: Christoffer Dall
To: kvm@vger.kernel.org
Cc: kvmarm@lists.cs.columbia.edu, Christoffer Dall, James Hogan,
    Paolo Bonzini, Radim Krčmář, Joerg Roedel, Marc Zyngier,
    Paul Mackerras, Christian Borntraeger, Anshuman Khandual,
    Suzuki K Poulose
Subject: [PATCH 3/3] KVM: mips: Move to common kvm_mmu_memcache infrastructure
Date: Mon, 28 Jan 2019 15:48:43 +0100
Message-Id: <20190128144843.11635-4-christoffer.dall@arm.com>
In-Reply-To: <20190128144843.11635-1-christoffer.dall@arm.com>
References: <20190128144843.11635-1-christoffer.dall@arm.com>

Now that we have a common infrastructure for doing MMU cache
allocations, use this for mips as well.
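The conversion itself is mechanical; the one call-signature change
worth spelling out (a summary sketch, not code quoted from the diff
below) is that the explicit max argument disappears, because the common
helper always tries to fill the cache up to KVM_NR_MEM_OBJS and only
fails when fewer than min objects could be allocated:

	/* Before: mips-local helper with an explicit max argument. */
	err = mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES,
				     KVM_NR_MEM_OBJS);

	/* After: common helper; the max is implicitly the cache capacity. */
	err = kvm_mmu_topup_memcache_page(memcache, KVM_MMU_CACHE_MIN_PAGES);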
Signed-off-by: Christoffer Dall
---
 arch/mips/include/asm/kvm_host.h  | 15 ++-------
 arch/mips/include/asm/kvm_types.h |  5 +++
 arch/mips/kvm/mips.c              |  2 +-
 arch/mips/kvm/mmu.c               | 54 ++++++-------------------------
 4 files changed, 18 insertions(+), 58 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index d2abd98471e8..e05cabd53a9e 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -293,17 +293,6 @@ struct kvm_mips_tlb {
 	long tlb_lo[2];
 };

-#define KVM_NR_MEM_OBJS 4
-
-/*
- * We don't want allocation failures within the mmu code, so we preallocate
- * enough memory for a single page fault in a cache.
- */
-struct kvm_mmu_memory_cache {
-	int nobjs;
-	void *objects[KVM_NR_MEM_OBJS];
-};
-
 #define KVM_MIPS_AUX_FPU	0x1
 #define KVM_MIPS_AUX_MSA	0x2

@@ -378,7 +367,7 @@ struct kvm_vcpu_arch {
 	unsigned int last_user_gasid;

 	/* Cache some mmu pages needed inside spinlock regions */
-	struct kvm_mmu_memory_cache mmu_page_cache;
+	struct kvm_mmu_memcache mmu_page_cache;

 #ifdef CONFIG_KVM_MIPS_VZ
 	/* vcpu's vzguestid is different on each host cpu in an smp system */
@@ -915,7 +904,7 @@ void kvm_mips_flush_gva_pt(pgd_t *pgd, enum kvm_mips_flush flags);
 bool kvm_mips_flush_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn);
 int kvm_mips_mkclean_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn);
 pgd_t *kvm_pgd_alloc(void);
-void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
+void kvm_mmu_free_memcaches(struct kvm_vcpu *vcpu);
 void kvm_trap_emul_invalidate_gva(struct kvm_vcpu *vcpu, unsigned long addr,
				  bool user);
 void kvm_trap_emul_gva_lockless_begin(struct kvm_vcpu *vcpu);
diff --git a/arch/mips/include/asm/kvm_types.h b/arch/mips/include/asm/kvm_types.h
index 5efeb32a5926..6318e8d91f90 100644
--- a/arch/mips/include/asm/kvm_types.h
+++ b/arch/mips/include/asm/kvm_types.h
@@ -2,4 +2,9 @@
 #ifndef _ASM_MIPS_KVM_TYPES_H
 #define _ASM_MIPS_KVM_TYPES_H

+#define KVM_NR_MEM_OBJS 4
+
+#define KVM_MMU_CACHE_GFP GFP_KERNEL
+#define KVM_MMU_CACHE_PAGE_GFP GFP_KERNEL
+
 #endif /* _ASM_MIPS_KVM_TYPES_H */
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 3734cd58895e..5ba6905247d3 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -425,7 +425,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)

 	kvm_mips_dump_stats(vcpu);

-	kvm_mmu_free_memory_caches(vcpu);
+	kvm_mmu_free_memcaches(vcpu);
 	kfree(vcpu->arch.guest_ebase);
 	kfree(vcpu->arch.kseg0_commpage);
 	kfree(vcpu);
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 97e538a8c1be..aed5284d642e 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -25,41 +25,9 @@
 #define KVM_MMU_CACHE_MIN_PAGES 2
 #endif

-static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
-				  int min, int max)
+void kvm_mmu_free_memcaches(struct kvm_vcpu *vcpu)
 {
-	void *page;
-
-	BUG_ON(max > KVM_NR_MEM_OBJS);
-	if (cache->nobjs >= min)
-		return 0;
-	while (cache->nobjs < max) {
-		page = (void *)__get_free_page(GFP_KERNEL);
-		if (!page)
-			return -ENOMEM;
-		cache->objects[cache->nobjs++] = page;
-	}
-	return 0;
-}
-
-static void mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
-{
-	while (mc->nobjs)
-		free_page((unsigned long)mc->objects[--mc->nobjs]);
-}
-
-static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
-{
-	void *p;
-
-	BUG_ON(!mc || !mc->nobjs);
-	p = mc->objects[--mc->nobjs];
-	return p;
-}
-
-void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu)
-{
-	mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	kvm_mmu_free_memcache_page(&vcpu->arch.mmu_page_cache);
 }

 /**
@@ -133,7 +101,7 @@ pgd_t *kvm_pgd_alloc(void)
  *		NULL if a page table doesn't exist for @addr and !@cache.
  *		NULL if a page table allocation failed.
  */
-static pte_t *kvm_mips_walk_pgd(pgd_t *pgd, struct kvm_mmu_memory_cache *cache,
+static pte_t *kvm_mips_walk_pgd(pgd_t *pgd, struct kvm_mmu_memcache *cache,
				unsigned long addr)
 {
 	pud_t *pud;
@@ -151,7 +119,7 @@ static pte_t *kvm_mips_walk_pgd(pgd_t *pgd, struct kvm_mmu_memory_cache *cache,
 		if (!cache)
 			return NULL;
-		new_pmd = mmu_memory_cache_alloc(cache);
+		new_pmd = kvm_mmu_memcache_alloc(cache);
 		pmd_init((unsigned long)new_pmd,
			 (unsigned long)invalid_pte_table);
 		pud_populate(NULL, pud, new_pmd);
@@ -162,7 +130,7 @@ static pte_t *kvm_mips_walk_pgd(pgd_t *pgd, struct kvm_mmu_memory_cache *cache,
 		if (!cache)
 			return NULL;
-		new_pte = mmu_memory_cache_alloc(cache);
+		new_pte = kvm_mmu_memcache_alloc(cache);
 		clear_page(new_pte);
 		pmd_populate_kernel(NULL, pmd, new_pte);
 	}
@@ -171,7 +139,7 @@

 /* Caller must hold kvm->mm_lock */
 static pte_t *kvm_mips_pte_for_gpa(struct kvm *kvm,
-				   struct kvm_mmu_memory_cache *cache,
+				   struct kvm_mmu_memcache *cache,
				   unsigned long addr)
 {
 	return kvm_mips_walk_pgd(kvm->arch.gpa_mm.pgd, cache, addr);
@@ -688,7 +656,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
			     pte_t *out_entry, pte_t *out_buddy)
 {
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+	struct kvm_mmu_memcache *memcache = &vcpu->arch.mmu_page_cache;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	int srcu_idx, err;
 	kvm_pfn_t pfn;
@@ -705,8 +673,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 		goto out;

 	/* We need a minimum of cached pages ready for page table creation */
-	err = mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES,
-				     KVM_NR_MEM_OBJS);
+	err = kvm_mmu_topup_memcache_page(memcache, KVM_MMU_CACHE_MIN_PAGES);
 	if (err)
 		goto out;

@@ -785,13 +752,12 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 static pte_t *kvm_trap_emul_pte_for_gva(struct kvm_vcpu *vcpu,
					unsigned long addr)
 {
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+	struct kvm_mmu_memcache *memcache = &vcpu->arch.mmu_page_cache;
 	pgd_t *pgdp;
 	int ret;

 	/* We need a minimum of cached pages ready for page table creation */
-	ret = mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES,
-				     KVM_NR_MEM_OBJS);
+	ret = kvm_mmu_topup_memcache_page(memcache, KVM_MMU_CACHE_MIN_PAGES);
 	if (ret)
 		return NULL;