From patchwork Thu Jan 27 08:35:29 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nadav Har'El
X-Patchwork-Id: 510561
Date: Thu, 27 Jan 2011 10:35:29 +0200
Message-Id: <201101270835.p0R8ZTuw002557@rice.haifa.ibm.com>
From: "Nadav Har'El"
To: kvm@vger.kernel.org
Cc: gleb@redhat.com, avi@redhat.com
References: <1296116987-nyh@il.ibm.com>
Subject: [PATCH 11/29] nVMX: Implement VMCLEAR
X-Mailing-List: kvm@vger.kernel.org

--- .before/arch/x86/kvm/vmx.c	2011-01-26 18:06:04.000000000 +0200
+++ .after/arch/x86/kvm/vmx.c	2011-01-26 18:06:04.000000000 +0200
@@ -283,6 +283,8 @@ struct __packed vmcs12 {
 	u32 abort;

 	struct vmcs_fields fields;
+
+	bool launch_state; /* set to 0 by VMCLEAR, to 1 by VMLAUNCH */
 };

 /*
@@ -4582,6 +4584,65 @@ static void nested_vmx_failValid(struct
 	get_vmcs12_fields(vcpu)->vm_instruction_error = vm_instruction_error;
 }

+/* Emulate the VMCLEAR instruction */
+static int handle_vmclear(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	gva_t gva;
+	gpa_t vmcs12_addr;
+	struct vmcs12 *vmcs12;
+	struct page *page;
+
+	if (!nested_vmx_check_permission(vcpu))
+		return 1;
+
+	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
+			vmcs_read32(VMX_INSTRUCTION_INFO), &gva))
+		return 1;
+
+	if (kvm_read_guest_virt(gva, &vmcs12_addr, sizeof(vmcs12_addr),
+				vcpu, NULL)) {
+		kvm_queue_exception(vcpu, PF_VECTOR);
+		return 1;
+	}
+
+	if (!IS_ALIGNED(vmcs12_addr, PAGE_SIZE)) {
+		nested_vmx_failValid(vcpu,
+				     VMXERR_VMCLEAR_INVALID_ADDRESS);
+		skip_emulated_instruction(vcpu);
+		return 1;
+	}
+
+	if (vmcs12_addr == vmx->nested.current_vmptr) {
+		kunmap(vmx->nested.current_vmcs12_page);
+		nested_release_page(vmx->nested.current_vmcs12_page);
+		vmx->nested.current_vmptr = -1ull;
+	}
+
+	page = nested_get_page(vcpu, vmcs12_addr);
+	if (page == NULL) {
+		/*
+		 * For accurate processor emulation, VMCLEAR beyond available
+		 * physical memory should do nothing at all. However, it is
+		 * possible that a nested vmx bug, not a guest hypervisor bug,
+		 * resulted in this case, so let's shut down before doing any
+		 * more damage:
+		 */
+		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
+		return 1;
+	}
+	vmcs12 = kmap(page);
+	vmcs12->launch_state = 0;
+	kunmap(page);
+	nested_release_page(page);
+
+	nested_free_vmcs(vmx, vmcs12_addr);
+
+	skip_emulated_instruction(vcpu);
+	nested_vmx_succeed(vcpu);
+	return 1;
+}
+
 /*
  * The exit handlers return 1 if the exit was handled fully and guest execution
  * may resume. Otherwise they set the kvm_run parameter to indicate what needs
@@ -4603,7 +4664,7 @@ static int (*kvm_vmx_exit_handlers[])(st
 	[EXIT_REASON_INVD]		      = handle_invd,
 	[EXIT_REASON_INVLPG]		      = handle_invlpg,
 	[EXIT_REASON_VMCALL]		      = handle_vmcall,
-	[EXIT_REASON_VMCLEAR]		      = handle_vmx_insn,
+	[EXIT_REASON_VMCLEAR]		      = handle_vmclear,
 	[EXIT_REASON_VMLAUNCH]		      = handle_vmx_insn,
 	[EXIT_REASON_VMPTRLD]		      = handle_vmx_insn,
 	[EXIT_REASON_VMPTRST]		      = handle_vmx_insn,
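
For reference, the handler above emulates what an L1 hypervisor executes
natively. Below is a minimal guest-side sketch of issuing the instruction;
it is illustrative only and not part of this patch: the vmclear() wrapper
name is hypothetical, and it assumes VMXON has already succeeded and the
VMCS region's revision id has been initialized.

	#include <stdint.h>

	/*
	 * Issue VMCLEAR on a 4KB-aligned VMCS region. The instruction takes
	 * the region's 64-bit *physical* address as a memory operand; CF or
	 * ZF is set on failure, which setna captures into a byte below.
	 */
	static inline int vmclear(uint64_t vmcs_pa)
	{
		uint8_t error;

		asm volatile("vmclear %1; setna %0"
			     : "=qm"(error)
			     : "m"(vmcs_pa)
			     : "cc", "memory");
		return error;	/* 0 on success */
	}

A misaligned vmcs_pa takes the nested_vmx_failValid(vcpu,
VMXERR_VMCLEAR_INVALID_ADDRESS) path in handle_vmclear() above; an aligned
one clears the new launch_state flag, so the guest hypervisor must next use
VMLAUNCH rather than VMRESUME on that VMCS.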