From patchwork Wed Sep 24 07:57:56 2014
X-Patchwork-Submitter: tangchen
X-Patchwork-Id: 4962161
From: Tang Chen
Subject: [PATCH v8 6/8] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running.
Date: Wed, 24 Sep 2014 15:57:56 +0800
Message-ID: <1411545478-9848-7-git-send-email-tangchen@cn.fujitsu.com>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1411545478-9848-1-git-send-email-tangchen@cn.fujitsu.com>
References: <1411545478-9848-1-git-send-email-tangchen@cn.fujitsu.com>
X-Mailing-List: kvm@vger.kernel.org

We are handling the case where L1 and L2 share one apic access page while that page is being migrated. Migration has to be handled in the following situations:

1) When L0 is running: update L1's vmcs in the next L0->L1 entry and L2's vmcs in the next L1->L2 entry.

2) When L1 is running: force an L1->L0 exit, then update L1's vmcs in the next L0->L1 entry and L2's vmcs in the next L1->L2 entry.

3) When L2 is running: force an L2->L0 exit, then update L2's vmcs in the next L0->L2 entry and L1's vmcs in the next L2->L1 exit.

This patch handles 3). On the L0->L2 entry, L2's vmcs is updated by prepare_vmcs02(), called from nested_vmx_run(), so nothing needs to be done there. On the L2->L1 exit, this patch reloads L1's apic access page in the L2->L1 vmexit path.
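For reference, this is roughly what "reloading the apic access page" means. A minimal sketch of the helper this patch exports, filling in what the x86.c hunk context below does not show; the gfn_to_page()-based re-pinning and the NULL-callback check are assumptions drawn from the rest of the series, not definitive code from this patch:

void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
{
	/*
	 * Sketch: if the platform does not use "virtualize APIC accesses",
	 * there is no VMCS field to refresh (assumed NULL-callback check).
	 */
	if (!kvm_x86_ops->set_apic_access_page_addr)
		return;

	/*
	 * Re-resolve the page backing the fixed APIC access gfn; after a
	 * migration this pins and returns the new page (assumption).
	 */
	vcpu->kvm->arch.apic_access_page = gfn_to_page(vcpu->kvm,
			APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);

	/* Program the new hpa into the currently loaded VMCS. */
	kvm_x86_ops->set_apic_access_page_addr(vcpu,
			page_to_phys(vcpu->kvm->arch.apic_access_page));
}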
Reviewed-by: Paolo Bonzini
Signed-off-by: Tang Chen
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/vmx.c              | 6 ++++++
 arch/x86/kvm/x86.c              | 3 ++-
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 582cd0f..66480fd 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1046,6 +1046,7 @@ int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
 int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
 int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
 void kvm_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu);
 
 void kvm_define_shared_msr(unsigned index, u32 msr);
 void kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 1411bab..40bb9fc 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -8826,6 +8826,12 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 	}
 
 	/*
+	 * We are now running in L2, mmu_notifier will force to reload the
+	 * page's hpa for L2 vmcs. Need to reload it for L1 before entering L1.
+	 */
+	kvm_vcpu_reload_apic_access_page(vcpu);
+
+	/*
 	 * Exiting from L2 to L1, we're now back to L1 which thinks it just
 	 * finished a VMLAUNCH or VMRESUME instruction, so we need to set the
 	 * success or failure flag accordingly.
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1f0c99a..c064ca6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5989,7 +5989,7 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 	kvm_apic_update_tmr(vcpu, tmr);
 }
 
-static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 {
 	/*
 	 * If platform doesn't have 2nd exec virtualize apic access affinity,
@@ -6009,6 +6009,7 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 	kvm_x86_ops->set_apic_access_page_addr(vcpu,
 			page_to_phys(vcpu->kvm->arch.apic_access_page));
 }
+EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page);
 
 /*
  * Returns 1 to let __vcpu_run() continue the guest execution loop without
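A note on the design, with a sketch of the non-nested path for contrast. This is an assumption based on the rest of the series; the KVM_REQ_APIC_PAGE_RELOAD request and its handling are not part of this patch. When the page is migrated while L2 is running, the mmu notifier path only refreshes the currently loaded (L2) vmcs, which is why nested_vmx_vmexit() above calls the helper directly to fix up L1's vmcs before re-entering L1.

/* mmu notifier side (elsewhere in the series, assumed): ask every vcpu to reload. */
kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);

/* vcpu_enter_guest() side (elsewhere in the series, assumed): service the request. */
if (kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
	kvm_vcpu_reload_apic_access_page(vcpu);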