From patchwork Wed Jan 16 09:00:48 2013
From: Yang Zhang <yang.z.zhang@intel.com>
To: kvm@vger.kernel.org
Cc: gleb@redhat.com, haitao.shan@intel.com, mtosatti@redhat.com,
    xiantao.zhang@intel.com, Yang Zhang <yang.z.zhang@intel.com>
Subject: [PATCH] KVM: VMX: enable acknowledge interrupt on vmexit
Date: Wed, 16 Jan 2013 17:00:48 +0800
Message-Id: <1358326848-32155-1-git-send-email-yang.z.zhang@intel.com>

From: Yang Zhang <yang.z.zhang@intel.com>

The "acknowledge interrupt on exit" feature controls processor behavior
for external-interrupt acknowledgement: when this control is set, the
processor acknowledges the interrupt controller and acquires the
interrupt vector on VM exit. This feature is required by posted
interrupts and will be turned on only when posted interrupts are
enabled. Refer to Intel SDM Volume 3, Chapter 33.2.
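
[Editorial note: for context, a minimal sketch -- illustrative only, not
part of this patch -- of how the acknowledged vector is recovered from
the VM-exit interruption-information field once this control is set.
The mask names are those already defined in arch/x86/kvm/vmx.c; the
field layout (bits 7:0 vector, bits 10:8 type, bit 31 valid) follows
SDM Vol. 3:]

	/*
	 * Illustrative decode of VM_EXIT_INTR_INFO after a VM exit
	 * taken with "ack interrupt on exit" set. Returns the vector,
	 * or -1 if no external interrupt was recorded.
	 */
	static int exit_intr_vector(u32 exit_intr_info)
	{
		if (!(exit_intr_info & INTR_INFO_VALID_MASK))
			return -1;	/* bit 31 clear: nothing recorded */
		if ((exit_intr_info & INTR_INFO_INTR_TYPE_MASK) !=
		    INTR_TYPE_EXT_INTR)
			return -1;	/* not an external interrupt */
		return exit_intr_info & INTR_INFO_VECTOR_MASK; /* bits 7:0 */
	}
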
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
---
 arch/x86/kvm/vmx.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++--
 1 files changed, 49 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index dd2a85c..d1ed9ae 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2565,7 +2565,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
 #ifdef CONFIG_X86_64
 	min |= VM_EXIT_HOST_ADDR_SPACE_SIZE;
 #endif
-	opt = VM_EXIT_SAVE_IA32_PAT | VM_EXIT_LOAD_IA32_PAT;
+	opt = VM_EXIT_SAVE_IA32_PAT | VM_EXIT_LOAD_IA32_PAT | VM_EXIT_ACK_INTR_ON_EXIT;
 	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_EXIT_CTLS,
 				&_vmexit_control) < 0)
 		return -EIO;
@@ -3926,7 +3926,7 @@ static int vmx_vcpu_setup(struct vcpu_vmx *vmx)
 		++vmx->nmsrs;
 	}
 
-	vmcs_write32(VM_EXIT_CONTROLS, vmcs_config.vmexit_ctrl);
+	vmcs_write32(VM_EXIT_CONTROLS, vmcs_config.vmexit_ctrl & ~VM_EXIT_ACK_INTR_ON_EXIT);
 
 	/* 22.2.1, 20.8.1 */
 	vmcs_write32(VM_ENTRY_CONTROLS, vmcs_config.vmentry_ctrl);
@@ -6096,6 +6096,52 @@ static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
 	}
 }
 
+static noinline void vmx_handle_external_intr(struct kvm_vcpu *vcpu)
+{
+	u32 exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
+
+	if ((exit_intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_EXT_INTR &&
+	    (exit_intr_info & INTR_INFO_VALID_MASK)) {
+		unsigned int vector;
+		unsigned long entry;
+		struct desc_ptr dt;
+		gate_desc *desc;
+
+		native_store_idt(&dt);
+
+		vector = exit_intr_info & INTR_INFO_VECTOR_MASK;
+		/* gate_desc is 16 bytes on 64-bit, 8 bytes on 32-bit */
+		desc = (gate_desc *)dt.address + vector;
+
+		entry = gate_offset(*desc);
+		/*
+		 * Build the stack frame the interrupt handler's iret
+		 * expects, then jump to the IDT entry; the handler
+		 * returns to label 1 via the address stored at
+		 * intr_return.
+		 */
+		asm(
+			"mov %0, %%" _ASM_DX "\n\t"
+#ifdef CONFIG_X86_64
+			"mov %%" _ASM_SP ", %%" _ASM_BX "\n\t"
+			"and $0xfffffffffffffff0, %%" _ASM_SP "\n\t"
+			"mov %%ss, %%" _ASM_AX "\n\t"
+			"push %%" _ASM_AX "\n\t"
+			"push %%" _ASM_BX "\n\t"
+#endif
+			"pushf\n\t"
+			"mov %%cs, %%" _ASM_AX "\n\t"
+			"push %%" _ASM_AX "\n\t"
+			"push intr_return\n\t"
+			"jmp *%%" _ASM_DX "\n\t"
+			"1:\n\t"
+			".pushsection .rodata\n\t"
+			".global intr_return\n\t"
+			"intr_return: " _ASM_PTR " 1b\n\t"
+			".popsection\n\t"
+			: : "m"(entry) :
+#ifdef CONFIG_X86_64
+			"rax", "rbx", "rdx"
+#else
+			"eax", "ebx", "edx"
+#endif
+			);
+	}
+}
+
 static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
 {
 	u32 exit_intr_info;
@@ -6431,6 +6477,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	vmx_complete_atomic_exit(vmx);
 	vmx_recover_nmi_blocking(vmx);
 	vmx_complete_interrupts(vmx);
+	vmx_handle_external_intr(vcpu);
 }
 
 static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
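
[Editorial note: for reference, the frame the inline asm above
hand-builds before jumping through the IDT, listed top-of-stack first.
This is only an illustration of the dispatch trick, based on the
x86_64 iretq frame layout, not additional patch content:]

	/*
	 * Stack as seen by the handler's iretq on x86_64:
	 *
	 *	RIP	<- address of label 1 (from intr_return), popped first
	 *	CS	<- saved %cs
	 *	RFLAGS	<- pushf
	 *	RSP	<- original stack pointer (before 16-byte align)
	 *	SS	<- saved %ss
	 *
	 * A same-privilege iret on 32-bit pops only EIP, CS and EFLAGS,
	 * which is why the SS/ESP pushes sit under CONFIG_X86_64.
	 */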