From patchwork Mon Jul  9 17:05:42 2012
From: Avi Kivity
To: Marcelo Tosatti
Cc: kvm@vger.kernel.org
Subject: [PATCH v3 3/6] KVM: Move mmu reload out of line
Date: Mon, 9 Jul 2012 20:05:42 +0300
Message-Id: <1341853545-3023-4-git-send-email-avi@redhat.com>
In-Reply-To: <1341853545-3023-1-git-send-email-avi@redhat.com>
References: <1341853545-3023-1-git-send-email-avi@redhat.com>
List-ID: <kvm.vger.kernel.org>

Currently we check that the mmu root exists before every entry.  Use the
existing KVM_REQ_MMU_RELOAD mechanism instead, by making it really reload
the mmu, and by adding the request to mmu initialization code.
Signed-off-by: Avi Kivity
---
 arch/x86/kvm/mmu.c |  4 +++-
 arch/x86/kvm/svm.c |  1 +
 arch/x86/kvm/x86.c | 13 +++++++------
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 569cd66..136d757 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3180,7 +3180,8 @@ void kvm_mmu_flush_tlb(struct kvm_vcpu *vcpu)
 static void paging_new_cr3(struct kvm_vcpu *vcpu)
 {
 	pgprintk("%s: cr3 %lx\n", __func__, kvm_read_cr3(vcpu));
-	mmu_free_roots(vcpu);
+	kvm_mmu_unload(vcpu);
+	kvm_mmu_load(vcpu);
 }
 
 static unsigned long get_cr3(struct kvm_vcpu *vcpu)
@@ -3469,6 +3470,7 @@ static int init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 
 static int init_kvm_mmu(struct kvm_vcpu *vcpu)
 {
+	kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
 	if (mmu_is_nested(vcpu))
 		return init_kvm_nested_mmu(vcpu);
 	else if (tdp_enabled)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7a41878..d77ad8c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2523,6 +2523,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
 
 	if (nested_vmcb->control.nested_ctl) {
 		kvm_mmu_unload(&svm->vcpu);
+		kvm_make_request(KVM_REQ_MMU_RELOAD, &svm->vcpu);
 		svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
 		nested_svm_init_mmu_context(&svm->vcpu);
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 959e5a9..162231f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5226,8 +5226,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
 
 	if (vcpu->requests) {
-		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
+		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu)) {
 			kvm_mmu_unload(vcpu);
+			r = kvm_mmu_reload(vcpu);
+			if (unlikely(r)) {
+				kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
+				goto out;
+			}
+		}
 		if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
 			__kvm_migrate_timers(vcpu);
 		if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
@@ -5285,11 +5291,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		}
 	}
 
-	r = kvm_mmu_reload(vcpu);
-	if (unlikely(r)) {
-		goto cancel_injection;
-	}
-
 	preempt_disable();
 
 	kvm_x86_ops->prepare_guest_switch(vcpu);