From patchwork Sat Jul 3 10:31:24 2010
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 110008
Message-ID: <4C2F117C.2000006@cn.fujitsu.com>
Date: Sat, 03 Jul 2010 18:31:24 +0800
From: Xiao Guangrong
To: Marcelo Tosatti
CC: Avi Kivity, LKML, KVM list
Subject: Re: [PATCH v4 5/6] KVM: MMU: combine guest pte read between walk and pte prefetch
References: <4C2C9DC0.8050607@cn.fujitsu.com> <4C2C9E6C.2040803@cn.fujitsu.com> <20100702170303.GC25969@amt.cnet>
In-Reply-To: <20100702170303.GC25969@amt.cnet>
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 3350c02..e617e93 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -291,6 +291,20 @@ static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		     gpte_to_gfn(gpte), pfn, true, true);
 }
 
+static bool FNAME(check_level_mapping)(struct kvm_vcpu *vcpu,
+				struct guest_walker *gw, int level)
+{
+	pt_element_t curr_pte;
+	int r;
+
+	r = kvm_read_guest_atomic(vcpu->kvm, gw->pte_gpa[level - 1],
+				  &curr_pte, sizeof(curr_pte));
+	if (r || curr_pte != gw->ptes[level - 1])
+		return false;
+
+	return true;
+}
+
 /*
  * Fetch a shadow pte for a specific level in the paging hierarchy.
  */
@@ -304,11 +318,9 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 	u64 spte, *sptep = NULL;
 	int direct;
 	gfn_t table_gfn;
-	int r;
 	int level;
-	bool dirty = is_dirty_gpte(gw->ptes[gw->level - 1]);
+	bool dirty = is_dirty_gpte(gw->ptes[gw->level - 1]), check = true;
 	unsigned direct_access;
-	pt_element_t curr_pte;
 	struct kvm_shadow_walk_iterator iterator;
 
 	if (!is_present_gpte(gw->ptes[gw->level - 1]))
@@ -322,6 +334,12 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 		level = iterator.level;
 		sptep = iterator.sptep;
 		if (iterator.level == hlevel) {
+			if (check && level == gw->level &&
+			    !FNAME(check_level_mapping)(vcpu, gw, hlevel)) {
+				kvm_release_pfn_clean(pfn);
+				break;
+			}
+
 			mmu_set_spte(vcpu, sptep, access,
 				     gw->pte_access & access,
 				     user_fault, write_fault,
@@ -376,10 +394,10 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 		sp = kvm_mmu_get_page(vcpu, table_gfn, addr, level-1,
 				      direct, access, sptep);
 		if (!direct) {
-			r = kvm_read_guest_atomic(vcpu->kvm,
-						  gw->pte_gpa[level - 2],
-						  &curr_pte, sizeof(curr_pte));
-			if (r || curr_pte != gw->ptes[level - 2]) {
+			if (hlevel == level - 1)
+				check = false;
+
+			if (!FNAME(check_level_mapping)(vcpu, gw, level - 1)) {
 				kvm_mmu_put_page(sp, sptep);
 				kvm_release_pfn_clean(pfn);
 				sptep = NULL;