From patchwork Fri Jul 16 03:23:04 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 112359
Message-ID: <4C3FD098.4090701@cn.fujitsu.com>
Date: Fri, 16 Jul 2010 11:23:04 +0800
From: Xiao Guangrong
To: Avi Kivity
CC: Marcelo Tosatti, LKML, KVM list
Subject: [PATCH v2 2/6] KVM: MMU: fix page accessed tracking lost if ept is
 enabled
References: <4C3FCFD7.5070005@cn.fujitsu.com>
In-Reply-To: <4C3FCFD7.5070005@cn.fujitsu.com>
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 1a4b42e..5937054 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -687,7 +687,7 @@ static void drop_spte(struct kvm *kvm, u64 *sptep, u64 new_spte)
 	if (!is_rmap_spte(old_spte))
 		return;
 	pfn = spte_to_pfn(old_spte);
-	if (old_spte & shadow_accessed_mask)
+	if (!shadow_accessed_mask || old_spte & shadow_accessed_mask)
 		kvm_set_pfn_accessed(pfn);
 	if (is_writable_pte(old_spte))
 		kvm_set_pfn_dirty(pfn);
@@ -815,7 +815,8 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, unsigned long *rmapp,
 			kvm_set_pfn_dirty(spte_to_pfn(*spte));
 		old_spte = __xchg_spte(spte, new_spte);
 		if (is_shadow_present_pte(old_spte)
-		      && (old_spte & shadow_accessed_mask))
+		      && (!shadow_accessed_mask ||
+			  old_spte & shadow_accessed_mask))
 			mark_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));
 		spte = rmap_next(kvm, rmapp, spte);
 	}
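
For context on both hunks: when EPT is enabled on hardware without
accessed/dirty bit support, shadow_accessed_mask is zero, so the old
test (old_spte & shadow_accessed_mask) can never be true and the
backing page is never reported as accessed when its spte is dropped or
exchanged. The fix makes a zero mask conservatively count as
"accessed". What follows is a minimal, self-contained C sketch of that
predicate only, not kernel code; spte_was_accessed is a hypothetical
helper name and the global here stands in for the mask KVM computes at
init time.

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;

/* Stand-in for the mask KVM sets up at init; 0 under EPT here. */
static u64 shadow_accessed_mask;

static bool spte_was_accessed(u64 old_spte)
{
	/*
	 * With no hardware accessed bit to consult (mask == 0),
	 * conservatively report the page as accessed; otherwise
	 * test the bit as before.
	 */
	return !shadow_accessed_mask ||
	       (old_spte & shadow_accessed_mask);
}

int main(void)
{
	shadow_accessed_mask = 0;	/* EPT: no accessed bit */
	/* Every dropped spte now counts as accessed. */
	return spte_was_accessed(0) ? 0 : 1;
}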