From patchwork Sun May 8 08:27:32 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nadav Har'El
X-Patchwork-Id: 765302
Date: Sun, 8 May 2011 11:27:32 +0300
Message-Id: <201105080827.p488RW9O018328@rice.haifa.ibm.com>
From: "Nadav Har'El"
To: kvm@vger.kernel.org
Cc: gleb@redhat.com, avi@redhat.com
References: <1304842511-nyh@il.ibm.com>
Subject: [PATCH 24/30] nVMX: Correct handling of idt vectoring info

This patch adds correct handling of IDT_VECTORING_INFO_FIELD for the nested
case.

When a guest exits while handling an interrupt or exception, we get this
information in IDT_VECTORING_INFO_FIELD in the VMCS. When L2 exits to L1,
there's nothing we need to do, because L1 will see this field in vmcs12 and
handle it itself.

However, when L2 exits and L0 handles the exit itself and plans to return to
L2, L0 must inject this event into L2.

In the normal non-nested case, idt_vectoring_info is read after the exit,
and the decision whether to inject it (though not the injection itself) is
made at that point. However, in the nested case the decision of whether to
return to L2 or L1 also happens during the injection phase (see the previous
patches), so in the nested case we can only decide what to do about the
idt_vectoring_info right after the injection, i.e., at the beginning of
vmx_vcpu_run, which is the first time we know for sure whether we're staying
in L2 (i.e., is_guest_mode(vcpu) is true).
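For reference, IDT_VECTORING_INFO_FIELD packs the vector, the event type, and
an error-code-valid flag into one 32-bit word, using the same layout as the
VM-entry interruption-information field that the patch below rewrites. The
following is a minimal user-space sketch of that decoding, not kernel code;
the mask values mirror the VECTORING_INFO_* definitions used in the diff, and
the sample value is hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Layout of IDT_VECTORING_INFO_FIELD / VM_ENTRY_INTR_INFO_FIELD
 * (mirrors the VECTORING_INFO_* masks used in arch/x86/kvm/vmx.c). */
#define VECTORING_INFO_VECTOR_MASK        0x000000ffu  /* bits 7:0  - vector          */
#define VECTORING_INFO_TYPE_MASK          0x00000700u  /* bits 10:8 - event type      */
#define VECTORING_INFO_DELIVER_CODE_MASK  0x00000800u  /* bit 11 - error code valid   */
#define VECTORING_INFO_VALID_MASK         0x80000000u  /* bit 31 - field is valid     */

int main(void)
{
	/* Hypothetical example: vector 0x0e (#PF), type 3 (hardware
	 * exception), error code valid, field valid. */
	uint32_t info = 0x80000b0e;

	if (!(info & VECTORING_INFO_VALID_MASK)) {
		puts("no event was being delivered at exit time");
		return 0;
	}
	printf("vector          : %u\n", info & VECTORING_INFO_VECTOR_MASK);
	printf("type            : %u\n", (info & VECTORING_INFO_TYPE_MASK) >> 8);
	printf("error code valid: %s\n",
	       (info & VECTORING_INFO_DELIVER_CODE_MASK) ? "yes" : "no");
	return 0;
}
```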
Signed-off-by: Nadav Har'El
---
 arch/x86/kvm/vmx.c |   32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

--- .before/arch/x86/kvm/vmx.c	2011-05-08 10:43:21.000000000 +0300
+++ .after/arch/x86/kvm/vmx.c	2011-05-08 10:43:21.000000000 +0300
@@ -352,6 +352,10 @@ struct nested_vmx {
 	u64 vmcs01_tsc_offset;
 	/* L2 must run next, and mustn't decide to exit to L1. */
 	bool nested_run_pending;
+	/* true if last exit was of L2, and had a valid idt_vectoring_info */
+	bool valid_idt_vectoring_info;
+	/* These are saved if valid_idt_vectoring_info */
+	u32 vm_exit_instruction_len, idt_vectoring_error_code;
 	/*
 	 * Guest pages referred to in vmcs02 with host-physical pointers, so
 	 * we must keep them pinned while L2 runs.
@@ -5736,6 +5740,22 @@ static void vmx_cancel_injection(struct
 	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, 0);
 }
 
+static void nested_handle_valid_idt_vectoring_info(struct vcpu_vmx *vmx)
+{
+	int irq = vmx->idt_vectoring_info & VECTORING_INFO_VECTOR_MASK;
+	int type = vmx->idt_vectoring_info & VECTORING_INFO_TYPE_MASK;
+	int errCodeValid = vmx->idt_vectoring_info &
+		VECTORING_INFO_DELIVER_CODE_MASK;
+	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD,
+		irq | type | INTR_INFO_VALID_MASK | errCodeValid);
+
+	vmcs_write32(VM_ENTRY_INSTRUCTION_LEN,
+		vmx->nested.vm_exit_instruction_len);
+	if (errCodeValid)
+		vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE,
+			vmx->nested.idt_vectoring_error_code);
+}
+
 #ifdef CONFIG_X86_64
 #define R "r"
 #define Q "q"
@@ -5748,6 +5768,9 @@ static void __noclone vmx_vcpu_run(struc
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
+	if (is_guest_mode(vcpu) && vmx->nested.valid_idt_vectoring_info)
+		nested_handle_valid_idt_vectoring_info(vmx);
+
 	/* Record the guest's net vcpu time for enforced NMI injections. */
 	if (unlikely(!cpu_has_virtual_nmis() && vmx->soft_vnmi_blocked))
 		vmx->entry_time = ktime_get();
@@ -5879,6 +5902,15 @@ static void __noclone vmx_vcpu_run(struc
 
 	vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
 
+	vmx->nested.valid_idt_vectoring_info = is_guest_mode(vcpu) &&
+		(vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK);
+	if (vmx->nested.valid_idt_vectoring_info) {
+		vmx->nested.vm_exit_instruction_len =
+			vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+		vmx->nested.idt_vectoring_error_code =
+			vmcs_read32(IDT_VECTORING_ERROR_CODE);
+	}
+
 	asm("mov %0, %%ds; mov %0, %%es" : : "r"(__USER_DS));
 	vmx->launched = 1;
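To make the round trip in the diff above concrete: after a vmexit the
vectoring info is latched if it is valid, and on the next entry to L2 it is
replayed into the VM-entry fields. The sketch below is user-space C that only
mirrors that structure; the struct, function names, and printf stand-ins for
vmcs_write32() are hypothetical and are not the real VMCS interface.

```c
#include <stdint.h>
#include <stdio.h>

#define VECTORING_INFO_VECTOR_MASK        0x000000ffu
#define VECTORING_INFO_TYPE_MASK          0x00000700u
#define VECTORING_INFO_DELIVER_CODE_MASK  0x00000800u
#define VECTORING_INFO_VALID_MASK         0x80000000u
#define INTR_INFO_VALID_MASK              0x80000000u

/* Hypothetical stand-in for the per-vcpu nested state added by the patch. */
struct nested_state {
	int valid_idt_vectoring_info;
	uint32_t idt_vectoring_info;
	uint32_t vm_exit_instruction_len;
	uint32_t idt_vectoring_error_code;
};

/* Step 1: right after a vmexit, latch the vectoring info if it is valid
 * (mirrors the code added at the end of vmx_vcpu_run()). */
static void latch_after_exit(struct nested_state *n, uint32_t idt_info,
			     uint32_t insn_len, uint32_t err_code)
{
	n->idt_vectoring_info = idt_info;
	n->valid_idt_vectoring_info = !!(idt_info & VECTORING_INFO_VALID_MASK);
	if (n->valid_idt_vectoring_info) {
		n->vm_exit_instruction_len = insn_len;
		n->idt_vectoring_error_code = err_code;
	}
}

/* Step 2: at the start of the next entry, if we are staying in L2, replay
 * the event (mirrors nested_handle_valid_idt_vectoring_info()). */
static void replay_before_entry(const struct nested_state *n)
{
	uint32_t irq = n->idt_vectoring_info & VECTORING_INFO_VECTOR_MASK;
	uint32_t type = n->idt_vectoring_info & VECTORING_INFO_TYPE_MASK;
	uint32_t errcode_valid = n->idt_vectoring_info &
				 VECTORING_INFO_DELIVER_CODE_MASK;

	if (!n->valid_idt_vectoring_info)
		return;
	printf("VM_ENTRY_INTR_INFO_FIELD      <- 0x%08x\n",
	       irq | type | INTR_INFO_VALID_MASK | errcode_valid);
	printf("VM_ENTRY_INSTRUCTION_LEN      <- %u\n",
	       n->vm_exit_instruction_len);
	if (errcode_valid)
		printf("VM_ENTRY_EXCEPTION_ERROR_CODE <- %u\n",
		       n->idt_vectoring_error_code);
}

int main(void)
{
	struct nested_state n = { 0 };

	/* Hypothetical exit while L2 was delivering #PF (vector 14) with an
	 * error code: latch it, then replay it on the way back into L2. */
	latch_after_exit(&n, 0x80000b0e, 0, 2);
	replay_before_entry(&n);
	return 0;
}
```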