From patchwork Thu Aug 5 12:29:27 2010
X-Patchwork-Submitter: Nadav Har'El
X-Patchwork-Id: 117284
Date: Thu, 5 Aug 2010 15:29:27 +0300
From: "Nadav Har'El" <nyh@math.technion.ac.il>
To: Avi Kivity
Cc: Gleb Natapov, kvm@vger.kernel.org
Subject: Re: [PATCH 9/24] Implement VMCLEAR
Message-ID: <20100805122927.GA24590@fermat.math.technion.ac.il>
In-Reply-To: <4C5AAACC.7040400@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

--- .before/arch/x86/kvm/vmx.c	2010-08-05 15:22:27.000000000 +0300
+++ .after/arch/x86/kvm/vmx.c	2010-08-05 15:22:27.000000000 +0300
@@ -144,6 +144,8 @@ struct __packed vmcs12 {
 	 */
 	u32 revision_id;
 	u32 abort;
+
+	bool launch_state; /* set to 0 by VMCLEAR, to 1 by VMLAUNCH */
 };
 
 /*
@@ -3828,6 +3830,64 @@ static void nested_vmx_failValid(struct
 	get_vmcs12_fields(vcpu)->vm_instruction_error = vm_instruction_error;
 }
 
+/* Emulate the VMCLEAR instruction */
+static int handle_vmclear(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	gva_t gva;
+	gpa_t vmcs12_addr;
+	struct vmcs12 *vmcs12;
+	struct page *page;
+
+	if (!nested_vmx_check_permission(vcpu))
+		return 1;
+
+	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
+			vmcs_read32(VMX_INSTRUCTION_INFO), &gva))
+		return 1;
+
+	if (kvm_read_guest_virt(gva, &vmcs12_addr, sizeof(vmcs12_addr),
+				vcpu, NULL)) {
+		kvm_queue_exception(vcpu, PF_VECTOR);
+		return 1;
+	}
+
+	if (!IS_ALIGNED(vmcs12_addr, PAGE_SIZE)) {
+		nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_INVALID_ADDRESS);
+		skip_emulated_instruction(vcpu);
+		return 1;
+	}
+
+	if (vmcs12_addr == vmx->nested.current_vmptr) {
+		kunmap(vmx->nested.current_vmcs12_page);
+		nested_release_page(vmx->nested.current_vmcs12_page);
+		vmx->nested.current_vmptr = -1ull;
+	}
+
+	page = nested_get_page(vcpu, vmcs12_addr);
+	if (page == NULL) {
+		/*
+		 * For accurate processor emulation, VMCLEAR beyond available
+		 * physical memory should do nothing at all. However, it is
+		 * possible that a nested vmx bug, not a guest hypervisor bug,
+		 * resulted in this case, so let's shut down before doing any
+		 * more damage:
+		 */
+		set_bit(KVM_REQ_TRIPLE_FAULT, &vcpu->requests);
+		return 1;
+	}
+	vmcs12 = kmap(page);
+	vmcs12->launch_state = 0;
+	kunmap(page);
+	nested_release_page(page);
+
+	nested_free_vmcs(vcpu, vmcs12_addr);
+
+	skip_emulated_instruction(vcpu);
+	nested_vmx_succeed(vcpu);
+	return 1;
+}
+
 static int handle_invlpg(struct kvm_vcpu *vcpu)
 {
 	unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
@@ -4110,7 +4170,7 @@ static int (*kvm_vmx_exit_handlers[])(st
 	[EXIT_REASON_HLT]		      = handle_halt,
 	[EXIT_REASON_INVLPG]		      = handle_invlpg,
 	[EXIT_REASON_VMCALL]		      = handle_vmcall,
-	[EXIT_REASON_VMCLEAR]		      = handle_vmx_insn,
+	[EXIT_REASON_VMCLEAR]		      = handle_vmclear,
 	[EXIT_REASON_VMLAUNCH]		      = handle_vmx_insn,
 	[EXIT_REASON_VMPTRLD]		      = handle_vmx_insn,
 	[EXIT_REASON_VMPTRST]		      = handle_vmx_insn,