From patchwork Wed May 25 20:06:59 2011
X-Patchwork-Submitter: Nadav Har'El
X-Patchwork-Id: 817242
Date: Wed, 25 May 2011 23:06:59 +0300
Message-Id: <201105252006.p4PK6xW4011193@rice.haifa.ibm.com>
From: "Nadav Har'El"
To: kvm@vger.kernel.org
Cc: avi@redhat.com
References: <1306353651-nyh@il.ibm.com>
Subject: [PATCH 11/31] nVMX: Implement VMCLEAR

This patch implements the VMCLEAR instruction.
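
For context, a guest hypervisor is expected to VMCLEAR a VMCS region before
loading it with VMPTRLD and launching it; VMCLEAR takes the 4KB-aligned
physical address of the region as a 64-bit memory operand. The following is
a minimal, hypothetical guest-side (L1) sketch of the instruction this patch
emulates -- illustration only, not part of the patch; the helper name and
error convention are invented:

	/*
	 * Hypothetical L1-side helper, for illustration only: issue VMCLEAR
	 * on a page-aligned VMCS region whose physical address is vmcs_pa.
	 * CF or ZF set after the instruction ("not above") indicates VMfail.
	 */
	static inline int l1_vmclear(unsigned long long vmcs_pa)
	{
		unsigned char error;

		asm volatile("vmclear %1; setna %0"
			     : "=qm"(error)
			     : "m"(vmcs_pa)
			     : "cc", "memory");
		return error ? -1 : 0;
	}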

Signed-off-by: Nadav Har'El
---
 arch/x86/kvm/vmx.c |   65 ++++++++++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/x86.c |    1 
 2 files changed, 65 insertions(+), 1 deletion(-)

--- .before/arch/x86/kvm/x86.c	2011-05-25 22:40:55.000000000 +0300
+++ .after/arch/x86/kvm/x86.c	2011-05-25 22:40:55.000000000 +0300
@@ -347,6 +347,7 @@ void kvm_inject_page_fault(struct kvm_vc
 	vcpu->arch.cr2 = fault->address;
 	kvm_queue_exception_e(vcpu, PF_VECTOR, fault->error_code);
 }
+EXPORT_SYMBOL_GPL(kvm_inject_page_fault);
 
 void kvm_propagate_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault)
 {
--- .before/arch/x86/kvm/vmx.c	2011-05-25 22:40:55.000000000 +0300
+++ .after/arch/x86/kvm/vmx.c	2011-05-25 22:40:55.000000000 +0300
@@ -166,6 +166,9 @@ struct __packed vmcs12 {
 	u32 revision_id;
 	u32 abort;
 
+	u32 launch_state; /* set to 0 by VMCLEAR, to 1 by VMLAUNCH */
+	u32 padding[7]; /* room for future expansion */
+
 	u64 io_bitmap_a;
 	u64 io_bitmap_b;
 	u64 msr_bitmap;
@@ -4813,6 +4816,66 @@ static void nested_vmx_failValid(struct
 	get_vmcs12(vcpu)->vm_instruction_error = vm_instruction_error;
 }
 
+/* Emulate the VMCLEAR instruction */
+static int handle_vmclear(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	gva_t gva;
+	gpa_t vmptr;
+	struct vmcs12 *vmcs12;
+	struct page *page;
+	struct x86_exception e;
+
+	if (!nested_vmx_check_permission(vcpu))
+		return 1;
+
+	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
+			vmcs_read32(VMX_INSTRUCTION_INFO), &gva))
+		return 1;
+
+	if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &vmptr,
+				sizeof(vmptr), &e)) {
+		kvm_inject_page_fault(vcpu, &e);
+		return 1;
+	}
+
+	if (!IS_ALIGNED(vmptr, PAGE_SIZE)) {
+		nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_INVALID_ADDRESS);
+		skip_emulated_instruction(vcpu);
+		return 1;
+	}
+
+	if (vmptr == vmx->nested.current_vmptr) {
+		kunmap(vmx->nested.current_vmcs12_page);
+		nested_release_page(vmx->nested.current_vmcs12_page);
+		vmx->nested.current_vmptr = -1ull;
+		vmx->nested.current_vmcs12 = NULL;
+	}
+
+	page = nested_get_page(vcpu, vmptr);
+	if (page == NULL) {
+		/*
+		 * For accurate processor emulation, VMCLEAR beyond available
+		 * physical memory should do nothing at all. However, it is
+		 * possible that a nested vmx bug, not a guest hypervisor bug,
+		 * resulted in this case, so let's shut down before doing any
+		 * more damage:
+		 */
+		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
+		return 1;
+	}
+	vmcs12 = kmap(page);
+	vmcs12->launch_state = 0;
+	kunmap(page);
+	nested_release_page(page);
+
+	nested_free_vmcs02(vmx, vmptr);
+
+	skip_emulated_instruction(vcpu);
+	nested_vmx_succeed(vcpu);
+	return 1;
+}
+
 /*
  * The exit handlers return 1 if the exit was handled fully and guest execution
  * may resume. Otherwise they set the kvm_run parameter to indicate what needs
@@ -4834,7 +4897,7 @@ static int (*kvm_vmx_exit_handlers[])(st
 	[EXIT_REASON_INVD]		      = handle_invd,
 	[EXIT_REASON_INVLPG]		      = handle_invlpg,
 	[EXIT_REASON_VMCALL]		      = handle_vmcall,
-	[EXIT_REASON_VMCLEAR]		      = handle_vmx_insn,
+	[EXIT_REASON_VMCLEAR]		      = handle_vmclear,
 	[EXIT_REASON_VMLAUNCH]		      = handle_vmx_insn,
 	[EXIT_REASON_VMPTRLD]		      = handle_vmx_insn,
 	[EXIT_REASON_VMPTRST]		      = handle_vmx_insn,
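
A layout note on the vmcs12 change above: launch_state plus padding[7] add
8 * sizeof(u32) = 32 bytes after revision_id and abort, so io_bitmap_a in the
packed structure now sits at byte offset 40 rather than 8. A hypothetical
compile-time assertion (not part of the patch) that would pin this down:

	/*
	 * Hypothetical layout check: revision_id (4) + abort (4) +
	 * launch_state (4) + padding[7] (28) == 40 bytes before
	 * io_bitmap_a in the packed vmcs12 structure.
	 */
	BUILD_BUG_ON(offsetof(struct vmcs12, io_bitmap_a) != 40);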