From patchwork Tue Jul 26 11:31:23 2011
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 1008152
Message-ID: <4E2EA58B.1030503@cn.fujitsu.com>
Date: Tue, 26 Jul 2011 19:31:23 +0800
From: Xiao Guangrong
To: Avi Kivity
CC: Marcelo Tosatti, LKML, KVM
Subject: [PATCH 09/11] KVM: MMU: remove the mismatch shadow page
References: <4E2EA3DB.7040403@cn.fujitsu.com>
In-Reply-To: <4E2EA3DB.7040403@cn.fujitsu.com>
X-Mailing-List: kvm@vger.kernel.org

If a shadow page's cpu mode does not match the current vcpu's, we had
better zap it, since the OS rarely changes cpu mode after boot.

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/mmu.c | 26 +++++++++++++++++++-------
 1 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 931c23a..2328ee6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3603,6 +3603,18 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
 	return misaligned;
 }
 
+/*
+ * The OS hardly ever changes cpu mode after boot, so we can zap the shadow
+ * page if its mode is mismatched with the current vcpu's.
+ */
+static bool detect_mismatch_sp(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+{
+	union kvm_mmu_page_role mask = { .word = 0 };
+
+	mask.cr0_wp = mask.cr4_pae = mask.nxe = 1;
+	return (sp->role.word ^ vcpu->arch.mmu.base_role.word) & mask.word;
+}
+
 static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte)
 {
 	unsigned page_offset, quadrant;
@@ -3638,13 +3650,12 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 			const u8 *new, int bytes, bool repeat_write)
 {
 	gfn_t gfn = gpa >> PAGE_SHIFT;
-	union kvm_mmu_page_role mask = { .word = 0 };
 	struct kvm_mmu_page *sp;
 	struct hlist_node *node;
 	LIST_HEAD(invalid_list);
 	u64 entry, gentry, *spte;
 	int npte;
-	bool remote_flush, local_flush, zap_page, flooded, misaligned;
+	bool remote_flush, local_flush, zap_page, flooded;
 
 	/*
 	 * If we don't have indirect shadow pages, it means no page is
@@ -3664,10 +3675,13 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	trace_kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);
 	flooded = detect_write_flooding(vcpu, gfn);
 
-	mask.cr0_wp = mask.cr4_pae = mask.nxe = 1;
 	for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn, node) {
+		bool mismatch, misaligned;
+
 		misaligned = detect_write_misaligned(sp, gpa, bytes);
-		if (misaligned || flooded || repeat_write) {
+		mismatch = detect_mismatch_sp(vcpu, sp);
+
+		if (misaligned || mismatch || flooded || repeat_write) {
 			zap_page |= !!kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
 						     &invalid_list);
 			++vcpu->kvm->stat.mmu_flooded;
@@ -3682,9 +3696,7 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	while (npte--) {
 		entry = *spte;
 		mmu_page_zap_pte(vcpu->kvm, sp, spte);
-		if (gentry &&
-		      !((sp->role.word ^ vcpu->arch.mmu.base_role.word)
-		      & mask.word) && get_free_pte_list_desc_nr(vcpu))
+		if (gentry && get_free_pte_list_desc_nr(vcpu))
 			mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
 		if (!remote_flush && need_remote_flush(entry, *spte))
 			remote_flush = true;
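
For illustration, below is a stand-alone user-space sketch of the XOR-and-mask
comparison that detect_mismatch_sp() performs.  The union here is a simplified
stand-in for kvm_mmu_page_role; the field names and widths are assumed for the
example only and are not the kernel's real layout.

#include <stdbool.h>
#include <stdio.h>

/*
 * Simplified stand-in for kvm_mmu_page_role: only a few bits, chosen for
 * the example.  Reading .word after setting the bitfields mirrors how the
 * kernel treats the whole role as a single word (gcc/clang behaviour).
 */
union page_role {
	unsigned int word;
	struct {			/* C11 anonymous struct */
		unsigned int level:4;
		unsigned int cr4_pae:1;
		unsigned int cr0_wp:1;
		unsigned int nxe:1;
	};
};

/* Same idea as detect_mismatch_sp(): mask off everything except the
 * paging-mode bits, then report whether any of them differ. */
static bool roles_mismatch(union page_role sp_role, union page_role vcpu_role)
{
	union page_role mask = { .word = 0 };

	mask.cr0_wp = mask.cr4_pae = mask.nxe = 1;
	return (sp_role.word ^ vcpu_role.word) & mask.word;
}

int main(void)
{
	union page_role sp = { .word = 0 }, vcpu = { .word = 0 };

	sp.level = 4;
	vcpu.level = 2;		/* a different level alone is not a mismatch */
	printf("level differs: %d\n", roles_mismatch(sp, vcpu));	/* 0 */

	vcpu.nxe = 1;		/* a paging-mode bit differs -> mismatch */
	printf("nxe differs:   %d\n", roles_mismatch(sp, vcpu));	/* 1 */
	return 0;
}

The point of the mask is that only the paging-mode bits (cr0_wp, cr4_pae, nxe)
decide whether a shadow page is zapped when the guest writes to the page table
it shadows; differences in other role bits, such as the level, are ignored.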