From patchwork Tue Jul 13 09:45:27 2010
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 111746
Message-ID: <4C3C35B7.50101@cn.fujitsu.com>
Date: Tue, 13 Jul 2010 17:45:27 +0800
From: Xiao Guangrong
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
To: Avi Kivity
CC: LKML, KVM list, Marcelo Tosatti
Subject: [PATCH 3/4] KVM: MMU: track dirty page in speculative path properly
References: <4C3C3518.7080505@cn.fujitsu.com>
In-Reply-To: <4C3C3518.7080505@cn.fujitsu.com>
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 67dbafa..5e9d4a0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -315,21 +315,19 @@ static void set_spte_atomic(u64 *sptep, u64 new_spte)
 	pfn = spte_to_pfn(old_spte);
 	if (old_spte & shadow_accessed_mask)
 		kvm_set_pfn_accessed(pfn);
-	if (is_writable_pte(old_spte))
+
+	if ((shadow_dirty_mask && (old_spte & shadow_dirty_mask)) ||
+	    (!shadow_dirty_mask && is_writable_pte(old_spte)))
 		kvm_set_pfn_dirty(pfn);
 }
 
 static void update_spte(u64 *sptep, u64 new_spte)
 {
-	u64 old_spte;
-
-	if (!shadow_accessed_mask || (new_spte & shadow_accessed_mask)) {
+	if ((!shadow_accessed_mask || (new_spte & shadow_accessed_mask)) &&
+	    (!shadow_dirty_mask || (new_spte & shadow_dirty_mask)))
 		__set_spte(sptep, new_spte);
-	} else {
-		old_spte = __xchg_spte(sptep, new_spte);
-		if (old_spte & shadow_accessed_mask)
-			mark_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));
-	}
+	else
+		set_spte_atomic(sptep, new_spte);
 }
 
 static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
@@ -745,7 +743,7 @@ static int rmap_write_protect(struct kvm *kvm, u64 gfn)
 		}
 		spte = rmap_next(kvm, rmapp, spte);
 	}
-	if (write_protected) {
+	if (!shadow_dirty_mask && write_protected) {
 		pfn_t pfn;
 
 		spte = rmap_next(kvm, rmapp, NULL);
@@ -1879,9 +1877,9 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	 * whether the guest actually used the pte (in order to detect
 	 * demand paging).
 	 */
-	spte = shadow_base_present_pte | shadow_dirty_mask;
+	spte = shadow_base_present_pte;
 	if (!speculative)
-		spte |= shadow_accessed_mask;
+		spte |= shadow_accessed_mask | shadow_dirty_mask;
 	if (!dirty)
 		pte_access &= ~ACC_WRITE_MASK;
 	if (pte_access & ACC_EXEC_MASK)
@@ -2007,7 +2005,7 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		if (rmap_count > RMAP_RECYCLE_THRESHOLD)
 			rmap_recycle(vcpu, sptep, gfn);
 	} else {
-		if (was_writable)
+		if (!shadow_dirty_mask && was_writable)
 			kvm_release_pfn_dirty(pfn);
 		else
 			kvm_release_pfn_clean(pfn);
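
Every hunk above hinges on the same test: when the CPU provides a hardware dirty bit for the shadow/EPT entries (shadow_dirty_mask != 0), the dirty state is read back from the old SPTE instead of assuming that any writable SPTE was written. Below is a minimal stand-alone sketch of that decision; the bit layout (SPTE_WRITABLE, SPTE_DIRTY) and the helper spte_was_dirty() are invented for illustration and are not the patch's code.

/*
 * Stand-alone illustration, not kernel code: bit names, bit positions and
 * the helper spte_was_dirty() are made up for this sketch.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_WRITABLE (1ULL << 1)	/* writable bit of the example PTE format */
#define SPTE_DIRTY    (1ULL << 6)	/* hardware dirty bit, when the CPU has one */

/* Set to 0 to model a CPU that provides no dirty bit in its shadow/EPT entries. */
static uint64_t shadow_dirty_mask = SPTE_DIRTY;

/*
 * With a hardware dirty bit, trust it: only an SPTE whose dirty bit is set
 * was actually written through.  Without one, stay conservative and treat
 * any writable SPTE as dirty, as the pre-patch code always did.
 */
static bool spte_was_dirty(uint64_t old_spte)
{
	if (shadow_dirty_mask)
		return old_spte & shadow_dirty_mask;
	return old_spte & SPTE_WRITABLE;
}

int main(void)
{
	uint64_t speculative = SPTE_WRITABLE;			/* prefetched, never written */
	uint64_t written     = SPTE_WRITABLE | SPTE_DIRTY;	/* guest really wrote it */

	printf("speculative spte dirty? %d\n", spte_was_dirty(speculative));	/* prints 0 */
	printf("written spte dirty?     %d\n", spte_was_dirty(written));	/* prints 1 */
	return 0;
}

With shadow_dirty_mask == 0 the sketch, like the patch, falls back to the old writable-implies-dirty behaviour, so hosts without a hardware dirty bit are unaffected. Where the bit exists, the set_spte() hunk also stops pre-setting it for speculative mappings, so a prefetched page is only reported dirty once the guest actually writes to it.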