From patchwork Wed Aug 4 07:13:23 2010
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 116962
Message-ID: <4C591313.50402@cn.fujitsu.com>
Date: Wed, 04 Aug 2010 15:13:23 +0800
From: Lai Jiangshan
To: Avi Kivity, Marcelo Tosatti, LKML, kvm@vger.kernel.org
Subject: [PATCH] kvm: make mmu_shrink() fit shrinker's requirement

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9c69725..1034373 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3138,37 +3138,51 @@ static int mmu_shrink(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask)
 {
 	struct kvm *kvm;
 	struct kvm *kvm_freed = NULL;
+	struct kvm *kvm_last;
 	int cache_count = 0;
 
 	spin_lock(&kvm_lock);
 
-	list_for_each_entry(kvm, &vm_list, vm_list) {
+	if (list_empty(&vm_list))
+		goto out;
+
+	kvm_last = list_entry(vm_list.prev, struct kvm, vm_list);
+
+	for (;;) {
 		int npages, idx, freed_pages;
 		LIST_HEAD(invalid_list);
 
+		kvm = list_first_entry(&vm_list, struct kvm, vm_list);
 		idx = srcu_read_lock(&kvm->srcu);
 		spin_lock(&kvm->mmu_lock);
 		npages = kvm->arch.n_alloc_mmu_pages -
			 kvm->arch.n_free_mmu_pages;
-		cache_count += npages;
-		if (!kvm_freed && nr_to_scan > 0 && npages > 0) {
+		if (kvm_last)
+			cache_count += npages;
+		if (nr_to_scan > 0 && npages > 0) {
 			freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm,
							  &invalid_list);
+			kvm_mmu_commit_zap_page(kvm, &invalid_list);
 			cache_count -= freed_pages;
 			kvm_freed = kvm;
-		}
-		nr_to_scan--;
+			nr_to_scan -= freed_pages;
+		} else if (kvm == kvm_freed)
+			nr_to_scan = 0; /* no more page to be freed, break */
 
-		kvm_mmu_commit_zap_page(kvm, &invalid_list);
 		spin_unlock(&kvm->mmu_lock);
 		srcu_read_unlock(&kvm->srcu, idx);
-	}
-	if (kvm_freed)
 		list_move_tail(&kvm_freed->vm_list, &vm_list);
+		if (kvm == kvm_last) /* just scaned all vms */
+			kvm_last = NULL;
+		if (!kvm_last && (nr_to_scan <= 0 || !kvm_freed))
+			break;
+	}
+
+out:
 	spin_unlock(&kvm_lock);
 
-	return cache_count;
+	return cache_count < 0 ? 0 : cache_count;
 }
 
 static struct shrinker mmu_shrinker = {
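
For readers unfamiliar with the interface the subject line refers to: in this kernel era a shrinker callback receives nr_to_scan, is expected to free up to that many objects, and should report how many reclaimable objects remain, never a negative number. Below is a minimal user-space sketch of that contract, assuming that is the behaviour the patch is aiming for; it collapses the patch's scan-from-head/rotate-to-tail walk into a single pass over an array for brevity, and toy_vm, free_some_pages() and toy_shrink() are hypothetical stand-ins, not kernel APIs.

/*
 * Minimal user-space model of the shrinker contract assumed above:
 * free up to nr_to_scan pages, return the pages still cached (clamped
 * to zero, as the patch does with cache_count).  All names here are
 * hypothetical stand-ins, not the kernel's APIs.
 */
#include <stdio.h>

#define NVMS 3

struct toy_vm {
	int used_pages;		/* models n_alloc_mmu_pages - n_free_mmu_pages */
};

static struct toy_vm vm_list[NVMS] = { { 5 }, { 0 }, { 8 } };

/* Free at most 'want' pages from one VM; returns pages actually freed. */
static int free_some_pages(struct toy_vm *vm, int want)
{
	int freed = vm->used_pages < want ? vm->used_pages : want;

	vm->used_pages -= freed;
	return freed;
}

/* Walk every VM once, freeing until nr_to_scan is consumed. */
static int toy_shrink(int nr_to_scan)
{
	int cache_count = 0;
	int i;

	for (i = 0; i < NVMS; i++) {
		struct toy_vm *vm = &vm_list[i];

		cache_count += vm->used_pages;
		if (nr_to_scan > 0 && vm->used_pages > 0) {
			int freed = free_some_pages(vm, nr_to_scan);

			cache_count -= freed;
			nr_to_scan -= freed;
		}
	}
	return cache_count < 0 ? 0 : cache_count;	/* clamp like the patch */
}

int main(void)
{
	/* Ask for 6 pages; 5 + 0 + 8 = 13 are cached, so 7 should remain. */
	printf("remaining after scan: %d\n", toy_shrink(6));
	return 0;
}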