From patchwork Tue Jul 13 09:43:58 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 111742
Message-ID: <4C3C355E.1010202@cn.fujitsu.com>
Date: Tue, 13 Jul 2010 17:43:58 +0800
From: Xiao Guangrong
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
To: Avi Kivity
CC: LKML, KVM list, Marcelo Tosatti
Subject: [PATCH 2/4] KVM: MMU: cleanup spte update path
References: <4C3C3518.7080505@cn.fujitsu.com>
In-Reply-To: <4C3C3518.7080505@cn.fujitsu.com>
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9fc1524..67dbafa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -303,6 +303,22 @@ static u64 __xchg_spte(u64 *sptep, u64 new_spte)
 #endif
 }
 
+static void set_spte_atomic(u64 *sptep, u64 new_spte)
+{
+	pfn_t pfn;
+	u64 old_spte;
+
+	old_spte = __xchg_spte(sptep, new_spte);
+	if (!is_rmap_spte(old_spte))
+		return;
+
+	pfn = spte_to_pfn(old_spte);
+	if (old_spte & shadow_accessed_mask)
+		kvm_set_pfn_accessed(pfn);
+	if (is_writable_pte(old_spte))
+		kvm_set_pfn_dirty(pfn);
+}
+
 static void update_spte(u64 *sptep, u64 new_spte)
 {
 	u64 old_spte;
@@ -680,17 +696,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 
 static void drop_spte(struct kvm *kvm, u64 *sptep, u64 new_spte)
 {
-	pfn_t pfn;
-	u64 old_spte;
-
-	old_spte = __xchg_spte(sptep, new_spte);
-	if (!is_rmap_spte(old_spte))
-		return;
-	pfn = spte_to_pfn(old_spte);
-	if (old_spte & shadow_accessed_mask)
-		kvm_set_pfn_accessed(pfn);
-	if (is_writable_pte(old_spte))
-		kvm_set_pfn_dirty(pfn);
+	set_spte_atomic(sptep, new_spte);
 	rmap_remove(kvm, sptep);
 }
 
@@ -790,7 +796,7 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, unsigned long *rmapp,
 			     unsigned long data)
 {
 	int need_flush = 0;
-	u64 *spte, new_spte, old_spte;
+	u64 *spte, new_spte;
 	pte_t *ptep = (pte_t *)data;
 	pfn_t new_pfn;
 
@@ -811,12 +817,7 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, unsigned long *rmapp,
 			new_spte &= ~PT_WRITABLE_MASK;
 			new_spte &= ~SPTE_HOST_WRITEABLE;
 			new_spte &= ~shadow_accessed_mask;
-			if (is_writable_pte(*spte))
-				kvm_set_pfn_dirty(spte_to_pfn(*spte));
-			old_spte = __xchg_spte(spte, new_spte);
-			if (is_shadow_present_pte(old_spte)
-			      && (old_spte & shadow_accessed_mask))
-				mark_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));
+			set_spte_atomic(spte, new_spte);
 			spte = rmap_next(kvm, rmapp, spte);
 		}
 	}
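
The cleanup above folds two near-identical "exchange the spte, then propagate the
old entry's accessed/dirty bits to the backing page" sequences into the single
set_spte_atomic() helper. The standalone sketch below models that pattern outside
the kernel tree; the bit layout, the printf stand-ins for kvm_set_pfn_accessed()/
kvm_set_pfn_dirty(), and the GCC __atomic_exchange_n() wrapper are assumptions of
this sketch, not the kernel's definitions.

/*
 * Minimal sketch of the exchange-then-propagate pattern: atomically
 * replace a shadow PTE, then report the old entry's accessed/dirty
 * state against its page frame. Compiles standalone with GCC/Clang.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;
typedef unsigned long pfn_t;

/* Illustrative bit layout; the real masks live in arch/x86/kvm/mmu.c. */
#define SPTE_PRESENT   (1ULL << 0)
#define SPTE_WRITABLE  (1ULL << 1)
#define SPTE_ACCESSED  (1ULL << 5)
#define SPTE_PFN_SHIFT 12

static pfn_t spte_to_pfn(u64 spte)   { return spte >> SPTE_PFN_SHIFT; }
static int is_rmap_spte(u64 spte)    { return (spte & SPTE_PRESENT) != 0; }
static int is_writable_pte(u64 spte) { return (spte & SPTE_WRITABLE) != 0; }

/* Stand-ins for kvm_set_pfn_accessed()/kvm_set_pfn_dirty(). */
static void set_pfn_accessed(pfn_t pfn) { printf("pfn %lu accessed\n", pfn); }
static void set_pfn_dirty(pfn_t pfn)    { printf("pfn %lu dirty\n", pfn); }

/* Counterpart of __xchg_spte(): a single atomic read-and-replace. */
static u64 xchg_spte(u64 *sptep, u64 new_spte)
{
	return __atomic_exchange_n(sptep, new_spte, __ATOMIC_SEQ_CST);
}

/*
 * The consolidated helper: both call sites in the patch reduce to this
 * exchange-then-propagate sequence.
 */
static void set_spte_atomic(u64 *sptep, u64 new_spte)
{
	u64 old_spte = xchg_spte(sptep, new_spte);

	if (!is_rmap_spte(old_spte))
		return;
	if (old_spte & SPTE_ACCESSED)
		set_pfn_accessed(spte_to_pfn(old_spte));
	if (is_writable_pte(old_spte))
		set_pfn_dirty(spte_to_pfn(old_spte));
}

int main(void)
{
	u64 spte = (42ULL << SPTE_PFN_SHIFT) | SPTE_PRESENT |
		   SPTE_WRITABLE | SPTE_ACCESSED;

	set_spte_atomic(&spte, 0);	/* prints accessed + dirty for pfn 42 */
	return 0;
}

Using one atomic exchange, rather than a separate read and write of the spte,
means accessed/dirty bits set concurrently by hardware cannot be read and then
silently overwritten: whatever was in the old entry at the moment of replacement
is exactly what gets propagated.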