From patchwork Thu Apr 1 08:52:50 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 90096
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by demeter.kernel.org (8.14.3/8.14.3) with ESMTP id o318tM39008101 for ; Thu, 1 Apr 2010 08:55:23 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1754547Ab0DAIzU (ORCPT ); Thu, 1 Apr 2010 04:55:20 -0400
Received: from cn.fujitsu.com ([222.73.24.84]:56214 "EHLO song.cn.fujitsu.com" rhost-flags-OK-FAIL-OK-OK) by vger.kernel.org with ESMTP id S1754484Ab0DAIzS (ORCPT ); Thu, 1 Apr 2010 04:55:18 -0400
Received: from tang.cn.fujitsu.com (tang.cn.fujitsu.com [10.167.250.3]) by song.cn.fujitsu.com (Postfix) with ESMTP id 01300170128; Thu, 1 Apr 2010 16:55:17 +0800 (CST)
Received: from fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1]) by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id o318rr9C028048; Thu, 1 Apr 2010 16:53:53 +0800
Received: from [10.167.141.99] (unknown [10.167.141.99]) by fnst.cn.fujitsu.com (Postfix) with ESMTPA id BA9F6D4977; Thu, 1 Apr 2010 16:57:52 +0800 (CST)
Message-ID: <4BB45EE2.3010000@cn.fujitsu.com>
Date: Thu, 01 Apr 2010 16:52:50 +0800
From: Xiao Guangrong
User-Agent: Thunderbird 2.0.0.6 (Windows/20070728)
MIME-Version: 1.0
To: Avi Kivity
CC: Marcelo Tosatti, KVM list, LKML
Subject: [PATCH 2/2] KVM MMU: record reverse mapping for spte only if it's writable
References: <4BB45E65.2040006@cn.fujitsu.com>
In-Reply-To: <4BB45E65.2040006@cn.fujitsu.com>
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org
X-Greylist: IP, sender and recipient auto-whitelisted, not delayed by milter-greylist-4.2.3 (demeter.kernel.org [140.211.167.41]); Thu, 01 Apr 2010 08:55:28 +0000 (UTC)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5de92ae..999f572 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -259,7 +259,17 @@ static int is_dirty_gpte(unsigned long pte)
 
 static int is_rmap_spte(u64 pte)
 {
-	return is_shadow_present_pte(pte);
+	return pte & PT_RMAP_MASK;
+}
+
+static void spte_set_rmap(u64 *spte)
+{
+	*spte |= PT_RMAP_MASK;
+}
+
+static void spte_clear_rmap(u64 *spte)
+{
+	*spte &= ~PT_RMAP_MASK;
 }
 
 static int is_last_spte(u64 pte, int level)
@@ -543,7 +553,7 @@ static int rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 	unsigned long *rmapp;
 	int i, count = 0;
 
-	if (!is_rmap_spte(*spte))
+	if (!is_shadow_present_pte(*spte) || !is_writable_pte(*spte))
 		return count;
 	gfn = unalias_gfn(vcpu->kvm, gfn);
 	sp = page_header(__pa(spte));
@@ -573,6 +583,7 @@ static int rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 			;
 		desc->sptes[i] = spte;
 	}
+	spte_set_rmap(spte);
 	return count;
 }
 
@@ -610,6 +621,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 
 	if (!is_rmap_spte(*spte))
 		return;
+	spte_clear_rmap(spte);
 	sp = page_header(__pa(spte));
 	pfn = spte_to_pfn(*spte);
 	if (*spte & shadow_accessed_mask)
@@ -646,6 +658,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 		pr_err("rmap_remove: %p %llx many->many\n", spte, *spte);
 		BUG();
 	}
+
 }
 
 static u64 *rmap_next(struct kvm *kvm, unsigned long *rmapp, u64 *spte)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index be66759..166b9b5 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -22,6 +22,7 @@
 #define PT_PAGE_SIZE_MASK (1ULL << 7)
 #define PT_PAT_MASK (1ULL << 7)
 #define PT_GLOBAL_MASK (1ULL << 8)
+#define PT_RMAP_MASK (1ULL << 9)
 #define PT64_NX_SHIFT 63
 #define PT64_NX_MASK (1ULL << PT64_NX_SHIFT)
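
[Editor's note] Below is a minimal, standalone sketch (not KVM code) of the bookkeeping pattern this patch introduces: rmap_add() tags a writable spte with a software bit when it enters the reverse map, and rmap_remove() tests that bit instead of the present bit and clears it on the way out. PT_RMAP_MASK and the helper names mirror the patch; the main() driver and the example spte value are hypothetical, added only to show the bit manipulation in isolation.

/*
 * Standalone illustration of the rmap-tracking flag used by the patch.
 * Build with: cc -o rmap_demo rmap_demo.c
 */
#include <stdint.h>
#include <stdio.h>

/* Software-available PTE bit, same value the patch adds to mmu.h. */
#define PT_RMAP_MASK (1ULL << 9)

/* Mirrors is_rmap_spte() after the patch: "in the rmap" is now an
 * explicit flag rather than "shadow-present". */
static int is_rmap_spte(uint64_t pte)
{
	return (pte & PT_RMAP_MASK) != 0;
}

/* Set by rmap_add() once the spte has been linked into the rmap. */
static void spte_set_rmap(uint64_t *spte)
{
	*spte |= PT_RMAP_MASK;
}

/* Cleared by rmap_remove() before the spte is unlinked. */
static void spte_clear_rmap(uint64_t *spte)
{
	*spte &= ~PT_RMAP_MASK;
}

int main(void)
{
	uint64_t spte = 0x1000ULL | 0x3ULL;	/* hypothetical present+writable spte */

	spte_set_rmap(&spte);			/* as rmap_add() would for a writable spte */
	printf("tracked in rmap: %d\n", is_rmap_spte(spte));	/* prints 1 */

	spte_clear_rmap(&spte);			/* as rmap_remove() does before unlinking */
	printf("tracked in rmap: %d\n", is_rmap_spte(spte));	/* prints 0 */
	return 0;
}

Bit 9 is one of the software-available bits (9-11) in x86 page-table entries, which is presumably why the patch reserves it for this flag rather than overloading an architectural bit.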