From patchwork Tue Jun 15 13:55:19 2010
Subject: [RFC][PATCH 1/9] abstract kvm x86 mmu->n_free_mmu_pages
From: Dave Hansen
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Dave Hansen
Date: Tue, 15 Jun 2010 06:55:19 -0700
Message-Id: <20100615135519.00781795@kernel.beaverton.ibm.com>
In-Reply-To: <20100615135518.BC244431@kernel.beaverton.ibm.com>
References: <20100615135518.BC244431@kernel.beaverton.ibm.com>
X-Patchwork-Id: 106206

different way.
Signed-off-by: Dave Hansen
---

 linux-2.6.git-dave/arch/x86/kvm/mmu.c |    6 +++---
 linux-2.6.git-dave/arch/x86/kvm/mmu.h |    7 ++++++-
 2 files changed, 9 insertions(+), 4 deletions(-)

diff -puN arch/x86/kvm/mmu.c~abstract_kvm_free_mmu_pages arch/x86/kvm/mmu.c
--- linux-2.6.git/arch/x86/kvm/mmu.c~abstract_kvm_free_mmu_pages	2010-06-09 15:14:28.000000000 -0700
+++ linux-2.6.git-dave/arch/x86/kvm/mmu.c	2010-06-09 15:14:28.000000000 -0700
@@ -1522,7 +1522,7 @@ void kvm_mmu_change_mmu_pages(struct kvm
 {
 	int used_pages;
 
-	used_pages = kvm->arch.n_alloc_mmu_pages - kvm->arch.n_free_mmu_pages;
+	used_pages = kvm->arch.n_alloc_mmu_pages - kvm_mmu_available_pages(kvm);
 	used_pages = max(0, used_pages);
 
 	/*
@@ -2752,7 +2752,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_unprotect_page
 
 void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
 {
-	while (vcpu->kvm->arch.n_free_mmu_pages < KVM_REFILL_PAGES &&
+	while (kvm_mmu_available_pages(vcpu->kvm) < KVM_REFILL_PAGES &&
 	       !list_empty(&vcpu->kvm->arch.active_mmu_pages)) {
 		struct kvm_mmu_page *sp;
 
@@ -2933,7 +2933,7 @@ static int mmu_shrink(int nr_to_scan, gf
 		idx = srcu_read_lock(&kvm->srcu);
 		spin_lock(&kvm->mmu_lock);
 		npages = kvm->arch.n_alloc_mmu_pages -
-			 kvm->arch.n_free_mmu_pages;
+			 kvm_mmu_available_pages(kvm);
 		cache_count += npages;
 		if (!kvm_freed && nr_to_scan > 0 && npages > 0) {
 			freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm);
diff -puN arch/x86/kvm/mmu.h~abstract_kvm_free_mmu_pages arch/x86/kvm/mmu.h
--- linux-2.6.git/arch/x86/kvm/mmu.h~abstract_kvm_free_mmu_pages	2010-06-09 15:14:28.000000000 -0700
+++ linux-2.6.git-dave/arch/x86/kvm/mmu.h	2010-06-09 15:14:28.000000000 -0700
@@ -50,9 +50,14 @@
 int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4]);
 
+static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
+{
+	return kvm->arch.n_free_mmu_pages;
+}
+
 static inline void kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
 {
-	if (unlikely(vcpu->kvm->arch.n_free_mmu_pages < KVM_MIN_FREE_MMU_PAGES))
+	if (unlikely(kvm_mmu_available_pages(vcpu->kvm) < KVM_MIN_FREE_MMU_PAGES))
 		__kvm_mmu_free_some_pages(vcpu);
 }
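
[Editor's illustration, not part of the patch.] The change above is purely mechanical: every reader of kvm->arch.n_free_mmu_pages now goes through the kvm_mmu_available_pages() helper, so a later patch can compute "available" differently without touching the call sites. The following is a minimal, self-contained userspace C sketch of that accessor pattern; the struct, field names, and threshold are illustrative stand-ins, not the real kvm definitions.

/* sketch.c: wrap a raw counter behind an accessor so its definition can change */
#include <stdio.h>

struct mmu_accounting {
	unsigned int n_alloc_mmu_pages;	/* pages the host is willing to hand out */
	unsigned int n_used_mmu_pages;	/* pages currently in use */
};

/*
 * Single point of truth for "how many pages are still available".
 * Here it is derived from alloc/used; a stored free counter would
 * work just as well, and callers would not notice the difference.
 */
static inline unsigned int mmu_available_pages(struct mmu_accounting *m)
{
	return m->n_alloc_mmu_pages - m->n_used_mmu_pages;
}

int main(void)
{
	struct mmu_accounting m = {
		.n_alloc_mmu_pages = 64,
		.n_used_mmu_pages  = 10,
	};

	/* Call sites compare against thresholds via the helper only. */
	if (mmu_available_pages(&m) < 5)
		printf("would reclaim shadow pages\n");
	else
		printf("%u pages still available\n", mmu_available_pages(&m));

	return 0;
}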