From patchwork Sun Apr 25 07:01:51 2010
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 94890
Message-ID: <4BD3E8DF.10600@cn.fujitsu.com>
Date: Sun, 25 Apr 2010 15:01:51 +0800
From: Xiao Guangrong
To: Avi Kivity
Cc: Marcelo Tosatti, KVM list, LKML
Subject: [PATCH v2 8/10] KVM MMU: allow more pages to become unsync at getting sp time
References: <4BD3E306.4020202@cn.fujitsu.com>
In-Reply-To: <4BD3E306.4020202@cn.fujitsu.com>
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5198fc9..81a1945 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1212,6 +1212,23 @@ static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 	return 0;
 }
 
+static void kvm_sync_pages(struct kvm_vcpu *vcpu, gfn_t gfn)
+{
+	struct hlist_head *bucket;
+	struct kvm_mmu_page *s;
+	struct hlist_node *node, *n;
+	unsigned index;
+
+	index = kvm_page_table_hashfn(gfn);
+	bucket = &vcpu->kvm->arch.mmu_page_hash[index];
+	hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
+		if (s->gfn != gfn || !s->unsync)
+			continue;
+		WARN_ON(s->role.level != PT_PAGE_TABLE_LEVEL);
+		kvm_sync_page(vcpu, s);
+	}
+}
+
 struct mmu_page_path {
 	struct kvm_mmu_page *parent[PT64_ROOT_LEVEL-1];
 	unsigned int idx[PT64_ROOT_LEVEL-1];
@@ -1348,8 +1365,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			trace_kvm_mmu_get_page(sp, false);
 			return sp;
 		}
-		if (!direct && unsync_sp)
-			kvm_sync_page(vcpu, unsync_sp);
+		if (!direct && level > PT_PAGE_TABLE_LEVEL && unsync_sp)
+			kvm_sync_pages(vcpu, gfn);
 
 		++vcpu->kvm->stat.mmu_cache_miss;
 		sp = kvm_mmu_alloc_page(vcpu, parent_pte);
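
A note on what the second hunk changes: kvm_mmu_get_page() used to call
kvm_sync_page() on the single unsync page it had found, whereas
kvm_sync_pages() rehashes the gfn, walks the whole mmu_page_hash bucket,
and syncs every unsync last-level shadow page mapping that gfn. The
standalone sketch below only illustrates that bucket-walk pattern; the
types and helpers in it (struct page_node, sync_one(), HASH_BUCKETS) are
simplified stand-ins for illustration, not the real KVM structures.

	/*
	 * Minimal userspace sketch of the bucket-walk pattern used by
	 * kvm_sync_pages() above. Hypothetical simplified types; the
	 * point is "hash the gfn, walk the whole bucket, sync every
	 * unsync page with a matching gfn" rather than syncing one page.
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define HASH_BUCKETS 16

	typedef uint64_t gfn_t;

	struct page_node {
		gfn_t gfn;
		int unsync;             /* out of sync with the guest? */
		struct page_node *next; /* singly linked bucket chain */
	};

	static struct page_node *hash[HASH_BUCKETS];

	static unsigned hashfn(gfn_t gfn)
	{
		return (unsigned)(gfn % HASH_BUCKETS);
	}

	/* stand-in for kvm_sync_page(): re-validate one shadow page */
	static void sync_one(struct page_node *p)
	{
		printf("syncing shadow page for gfn %llu\n",
		       (unsigned long long)p->gfn);
		p->unsync = 0;
	}

	/*
	 * Analogue of kvm_sync_pages(): visit every entry in the
	 * bucket and sync each unsync page that maps this gfn,
	 * instead of stopping at the first match.
	 */
	static void sync_pages(gfn_t gfn)
	{
		struct page_node *p;

		for (p = hash[hashfn(gfn)]; p; p = p->next) {
			if (p->gfn != gfn || !p->unsync)
				continue;
			sync_one(p);
		}
	}

	int main(void)
	{
		/* two unsync pages for gfn 5, one clean page for gfn 21 */
		struct page_node a = { 5, 1, NULL };
		struct page_node b = { 21, 0, NULL };
		struct page_node c = { 5, 1, NULL };

		/* gfn 5 and gfn 21 collide in bucket 5 (mod 16) */
		hash[5] = &a;
		a.next = &b;
		b.next = &c;

		sync_pages(5);  /* syncs both a and c, skips b */
		return 0;
	}

One difference worth noting: the kernel code iterates with
hlist_for_each_entry_safe() because kvm_sync_page() may zap and unlink
the entry mid-walk; the sketch's plain loop never removes nodes, so
unsafe iteration suffices there.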