From patchwork Sun Apr 25 07:00:59 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 94885
Message-ID: <4BD3E8AB.4020704@cn.fujitsu.com>
Date: Sun, 25 Apr 2010 15:00:59 +0800
From: Xiao Guangrong
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
To: Avi Kivity
CC: Marcelo Tosatti, KVM list, LKML
Subject: [PATCH v2 6/10] KVM MMU: don't write-protect if have new mapping to unsync page
References: <4BD3E306.4020202@cn.fujitsu.com>
In-Reply-To: <4BD3E306.4020202@cn.fujitsu.com>

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 60a91c6..b946a5f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1330,7 +1330,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	unsigned index;
 	unsigned quadrant;
 	struct hlist_head *bucket;
-	struct kvm_mmu_page *sp;
+	struct kvm_mmu_page *sp, *unsync_sp = NULL;
 	struct hlist_node *node, *tmp;
 
 	role = vcpu->arch.mmu.base_role;
@@ -1349,12 +1349,17 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	hlist_for_each_entry_safe(sp, node, tmp, bucket, hash_link)
 		if (sp->gfn == gfn) {
 			if (sp->unsync)
-				if (kvm_sync_page(vcpu, sp))
-					continue;
+				unsync_sp = sp;
 
 			if (sp->role.word != role.word)
 				continue;
 
+			if (unsync_sp &&
+			      kvm_sync_page_transient(vcpu, unsync_sp)) {
+				unsync_sp = NULL;
+				break;
+			}
+
 			mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 			if (sp->unsync_children) {
 				set_bit(KVM_REQ_MMU_SYNC, &vcpu->requests);
@@ -1363,6 +1368,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		trace_kvm_mmu_get_page(sp, false);
 		return sp;
 	}
+	if (!direct && unsync_sp)
+		kvm_sync_page(vcpu, unsync_sp);
+
 	++vcpu->kvm->stat.mmu_cache_miss;
 	sp = kvm_mmu_alloc_page(vcpu, parent_pte);
 	if (!sp)
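
For readers following the control flow: the patch stops syncing (and thus
write-protecting) an unsync page the moment it is found in the hash bucket.
Instead it remembers the page, syncs it "transiently" only when an existing
role-matching page is about to be reused, and falls back to a full sync just
before a new shadow page is allocated. Below is a minimal user-space sketch
of that logic, not the kernel code: sp_t, get_page(), sync_page(),
sync_page_transient() and alloc_page_stub() are hypothetical simplified
stand-ins for struct kvm_mmu_page, kvm_mmu_get_page(), kvm_sync_page(),
kvm_sync_page_transient() and kvm_mmu_alloc_page(); the nonzero-on-zap
return convention and the assumption that the transient variant skips
write protection are taken from how this series reads, not shown in this
hunk.

/*
 * Minimal sketch of the reworked lookup.  All types and helpers here
 * are simplified stand-ins; see the lead-in above for what they model.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct sp {
	unsigned long gfn;	/* guest frame number this page shadows */
	unsigned role_word;	/* packed role bits (level, quadrant, ...) */
	bool unsync;		/* guest PTEs may be newer than the sptes */
	struct sp *next;	/* hash-bucket chain */
} sp_t;

/* Full sync: write-protects the gfn and clears the unsync state.
 * Returns nonzero if the page had to be zapped instead (assumed
 * convention, matching kvm_sync_page() in this kernel version). */
static int sync_page(sp_t *sp)
{
	sp->unsync = false;
	return 0;
}

/* Transient sync: bring the sptes up to date but leave the page
 * unsync, i.e. do NOT write-protect the gfn; same return convention. */
static int sync_page_transient(sp_t *sp)
{
	(void)sp;	/* spte update elided in this sketch */
	return 0;
}

static sp_t *alloc_page_stub(void)
{
	return NULL;	/* allocation elided */
}

static sp_t *get_page(sp_t *bucket, unsigned long gfn,
		      unsigned role_word, bool direct)
{
	sp_t *sp, *unsync_sp = NULL;

	for (sp = bucket; sp; sp = sp->next) {
		if (sp->gfn != gfn)
			continue;

		/* The old code did a full sync here at once; now the
		 * unsync page is only remembered. */
		if (sp->unsync)
			unsync_sp = sp;

		if (sp->role_word != role_word)
			continue;

		/* A reusable page was found, so a transient sync that
		 * skips write protection suffices.  Nonzero means the
		 * page was zapped; fall through and allocate. */
		if (unsync_sp && sync_page_transient(unsync_sp)) {
			unsync_sp = NULL;
			break;
		}

		return sp;	/* cache hit: reuse the existing page */
	}

	/* Cache miss: for shadow (non-direct) pages the stale mapping
	 * still needs the full, write-protecting sync before a new
	 * shadow page is created. */
	if (!direct && unsync_sp)
		sync_page(unsync_sp);

	return alloc_page_stub();
}

int main(void)
{
	sp_t match  = { .gfn = 0x100, .role_word = 2, .unsync = false };
	sp_t unsync = { .gfn = 0x100, .role_word = 1, .unsync = true,
			.next = &match };

	/* The unsync page is seen first, remembered, and synced only
	 * transiently once the role-matching page is found. */
	printf("reused existing page: %s\n",
	       get_page(&unsync, 0x100, 2, false) == &match ? "yes" : "no");
	return 0;
}

The effect the subject line describes is visible in the two sync paths:
reusing an existing page takes the transient path that leaves the gfn
writable, and the write-protecting full sync now happens only when a new
shadow page is actually about to be created for a non-direct mapping.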