From patchwork Wed Jun 30 08:05:00 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 108757
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by demeter.kernel.org (8.14.4/8.14.3) with ESMTP id o5U89BAg025776
	for ; Wed, 30 Jun 2010 08:09:11 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753150Ab0F3IIs (ORCPT );
	Wed, 30 Jun 2010 04:08:48 -0400
Received: from cn.fujitsu.com ([222.73.24.84]:52025 "EHLO song.cn.fujitsu.com"
	rhost-flags-OK-FAIL-OK-OK) by vger.kernel.org with ESMTP
	id S1751736Ab0F3IIr (ORCPT );
	Wed, 30 Jun 2010 04:08:47 -0400
Received: from tang.cn.fujitsu.com (tang.cn.fujitsu.com [10.167.250.3])
	by song.cn.fujitsu.com (Postfix) with ESMTP id 8AF5E170127;
	Wed, 30 Jun 2010 16:08:46 +0800 (CST)
Received: from fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id o5U86BTk011895;
	Wed, 30 Jun 2010 16:06:12 +0800
Received: from [10.167.141.99] (unknown [10.167.141.99])
	by fnst.cn.fujitsu.com (Postfix) with ESMTPA id 375CD10C050;
	Wed, 30 Jun 2010 16:08:51 +0800 (CST)
Message-ID: <4C2AFAAC.8040308@cn.fujitsu.com>
Date: Wed, 30 Jun 2010 16:05:00 +0800
From: Xiao Guangrong
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
To: Avi Kivity
CC: Marcelo Tosatti , LKML , KVM list
Subject: [PATCH v3 5/11] KVM: MMU: cleanup FNAME(fetch)() functions
References: <4C2AF9FA.9020601@cn.fujitsu.com>
In-Reply-To: <4C2AF9FA.9020601@cn.fujitsu.com>
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org
X-Greylist: IP, sender and recipient auto-whitelisted, not delayed by
	milter-greylist-4.2.3 (demeter.kernel.org [140.211.167.41]);
	Wed, 30 Jun 2010 08:09:11 +0000 (UTC)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index f28f09d..3350c02 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -306,12 +306,18 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 	gfn_t table_gfn;
 	int r;
 	int level;
+	bool dirty = is_dirty_gpte(gw->ptes[gw->level - 1]);
+	unsigned direct_access;
 	pt_element_t curr_pte;
 	struct kvm_shadow_walk_iterator iterator;
 
 	if (!is_present_gpte(gw->ptes[gw->level - 1]))
 		return NULL;
 
+	direct_access = gw->pt_access & gw->pte_access;
+	if (!dirty)
+		direct_access &= ~ACC_WRITE_MASK;
+
 	for_each_shadow_entry(vcpu, addr, iterator) {
 		level = iterator.level;
 		sptep = iterator.sptep;
@@ -319,15 +325,13 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			mmu_set_spte(vcpu, sptep, access,
 				     gw->pte_access & access,
 				     user_fault, write_fault,
-				     is_dirty_gpte(gw->ptes[gw->level-1]),
-				     ptwrite, level,
+				     dirty, ptwrite, level,
 				     gw->gfn, pfn, false, true);
 			break;
 		}
 
 		if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)) {
 			struct kvm_mmu_page *child;
-			unsigned direct_access;
 
 			if (level != gw->level)
 				continue;
@@ -339,10 +343,6 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			 * so we should update the spte at this point to get
 			 * a new sp with the correct access.
			 */
-			direct_access = gw->pt_access & gw->pte_access;
-			if (!is_dirty_gpte(gw->ptes[gw->level - 1]))
-				direct_access &= ~ACC_WRITE_MASK;
-
 			child = page_header(*sptep & PT64_BASE_ADDR_MASK);
 			if (child->role.access == direct_access)
 				continue;
@@ -359,11 +359,8 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 		}
 
 		if (level <= gw->level) {
-			int delta = level - gw->level + 1;
 			direct = 1;
-			if (!is_dirty_gpte(gw->ptes[level - delta]))
-				access &= ~ACC_WRITE_MASK;
-			access &= gw->pte_access;
+			access = direct_access;
 
 			/*
 			 * It is a large guest pages backed by small host pages,
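
For readers skimming the diff: the cleanup hoists a computation that was previously repeated inside the shadow-walk loop (once in the child-sp access check and once in the level <= gw->level branch) into a single direct_access value computed at function entry. The snippet below is a minimal, self-contained sketch of that idea only; the guest_walker struct, field names and ACC_* values here are simplified stand-ins for the real KVM definitions, not the kernel code itself.

/*
 * Sketch (userspace C, not kernel code): compute the "direct access"
 * permission mask once, up front, instead of recomputing it at every
 * use site, mirroring the refactoring in this patch.
 */
#include <stdbool.h>
#include <stdio.h>

/* simplified stand-ins for KVM's access mask definitions */
#define ACC_EXEC_MASK  1u
#define ACC_WRITE_MASK 2u
#define ACC_USER_MASK  4u

/* simplified stand-in for the relevant guest_walker fields */
struct guest_walker {
	unsigned pt_access;   /* access accumulated over the guest page-table walk */
	unsigned pte_access;  /* access bits of the final guest pte */
	bool     dirty;       /* is_dirty_gpte(gw->ptes[gw->level - 1]) in the real code */
};

/*
 * Writable only if both the walk and the pte allow it and the guest pte
 * is already dirty; otherwise the write bit is withheld so the dirty bit
 * can be tracked via a later write fault.
 */
static unsigned direct_access(const struct guest_walker *gw)
{
	unsigned access = gw->pt_access & gw->pte_access;

	if (!gw->dirty)
		access &= ~ACC_WRITE_MASK;

	return access;
}

int main(void)
{
	struct guest_walker clean = {
		.pt_access  = ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK,
		.pte_access = ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK,
		.dirty      = false,
	};
	struct guest_walker dirty = clean;

	dirty.dirty = true;

	/* the not-yet-dirty mapping loses ACC_WRITE_MASK, the dirty one keeps it */
	printf("clean pte: access = %#x\n", direct_access(&clean));
	printf("dirty pte: access = %#x\n", direct_access(&dirty));
	return 0;
}

Computing the mask once keeps the two consumers (the child-sp re-sync check and the direct-map branch) guaranteed to agree, which is the point of the cleanup.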