| Message ID | 201105080821.p488LM55018138@rice.haifa.ibm.com (mailing list archive) |
|---|---|
| State | New, archived |
On Sun, May 08, 2011 at 11:21:22AM +0300, Nadav Har'El wrote:
> This patch implements the VMPTRLD instruction.
>
> Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
> ---
>  arch/x86/kvm/vmx.c |   62 ++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 61 insertions(+), 1 deletion(-)
>
> --- .before/arch/x86/kvm/vmx.c	2011-05-08 10:43:19.000000000 +0300
> +++ .after/arch/x86/kvm/vmx.c	2011-05-08 10:43:19.000000000 +0300
> @@ -4814,6 +4814,66 @@ static int handle_vmclear(struct kvm_vcp
>  	return 1;
>  }
>
> +/* Emulate the VMPTRLD instruction */
> +static int handle_vmptrld(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	gva_t gva;
> +	gpa_t vmcs12_addr;
> +	struct x86_exception e;
> +
> +	if (!nested_vmx_check_permission(vcpu))
> +		return 1;
> +
> +	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
> +			vmcs_read32(VMX_INSTRUCTION_INFO), &gva))
> +		return 1;
> +
> +	if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &vmcs12_addr,
> +				sizeof(vmcs12_addr), &e)) {
> +		kvm_inject_page_fault(vcpu, &e);
> +		return 1;
> +	}
> +
> +	if (!IS_ALIGNED(vmcs12_addr, PAGE_SIZE)) {
> +		nested_vmx_failValid(vcpu, VMXERR_VMPTRLD_INVALID_ADDRESS);
> +		skip_emulated_instruction(vcpu);
> +		return 1;
> +	}
> +
> +	if (vmx->nested.current_vmptr != vmcs12_addr) {
> +		struct vmcs12 *new_vmcs12;
> +		struct page *page;
> +		page = nested_get_page(vcpu, vmcs12_addr);
> +		if (page == NULL) {
> +			nested_vmx_failInvalid(vcpu);

This can access a NULL current_vmcs12 pointer, no? Apparently other code
paths are vulnerable to the same issue (as in, allowed to execute before
VMPTRLD maps the guest VMCS). Perhaps a BUG_ON on get_vmcs12 could be
helpful.
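[Editor's note: a minimal sketch of the assertion suggested here, assuming get_vmcs12() is the trivial accessor used elsewhere in this series; the helper name and the nested.current_vmcs12 field come from the surrounding patches, and the exact form below is an assumption rather than code from the posted patch.]

```c
/*
 * Hypothetical sketch only: get_vmcs12() as a trivial accessor plus the
 * BUG_ON proposed in the review.  Field and helper names are assumed from
 * the rest of the nVMX series, not from this patch.
 */
static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
{
	/* Catch callers that run before VMPTRLD has mapped a guest VMCS12. */
	BUG_ON(to_vmx(vcpu)->nested.current_vmcs12 == NULL);
	return to_vmx(vcpu)->nested.current_vmcs12;
}
```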
Hi,

On Mon, May 16, 2011, Marcelo Tosatti wrote about "Re: [PATCH 12/30] nVMX: Implement VMPTRLD":
> > +	if (vmx->nested.current_vmptr != vmcs12_addr) {
> > +		struct vmcs12 *new_vmcs12;
> > +		struct page *page;
> > +		page = nested_get_page(vcpu, vmcs12_addr);
> > +		if (page == NULL) {
> > +			nested_vmx_failInvalid(vcpu);
>
> This can access a NULL current_vmcs12 pointer, no?

I'm afraid I don't see where in this specific code you can access a NULL
current_vmcs12 pointer... Looking at the rest of this function, the code
that frees the previous current_vmcs12_page, for example, first makes sure
that vmx->nested.current_vmptr != -1ull. current_vmptr, current_vmcs12 and
current_vmcs12_page are all set together (only when everything was
successful), and we never release the old page before we have tested the
new one, so we can be sure that whenever current_vmptr != -1ull, we also
have a valid current_vmcs12. But maybe I'm missing something?

> Apparently other code paths are vulnerable to the same issue (as in,
> allowed to execute before VMPTRLD maps the guest VMCS). Perhaps a BUG_ON
> on get_vmcs12 could be helpful.

The call to get_vmcs12() should typically sit inside an
if (is_guest_mode(vcpu)) (or in places where we know this to be true), and
to enter L2 we should already have verified that we have a working vmcs12.
This is why I thought it unnecessary to add any assertions to the trivial
inline function get_vmcs12().

But now that I think about it, there does appear to be a problem in
nested_vmx_run(): this is where we should have verified that there is a
current VMCS - i.e., that VMPTRLD was previously used! And it seems I
forgot to test this... :( I'll need to add such a test - not as a BUG_ON,
but as a real test that causes the VMLAUNCH instruction to fail (I have to
look at the spec to see exactly how) if VMPTRLD hadn't previously been
done.
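[Editor's note: a sketch of what such a test could look like, reusing the -1ull sentinel and the fail/skip helpers already visible in handle_vmptrld(); the SDM specifies that VMLAUNCH/VMRESUME fail with VMfailInvalid when there is no current VMCS. The helper name and its placement in nested_vmx_run() are this sketch's assumptions, not the fix the author eventually posted.]

```c
/*
 * Hypothetical helper: callers such as nested_vmx_run() would bail out
 * early when no VMPTRLD has established a current VMCS12, emulating a
 * VMfailInvalid outcome instead of dereferencing a NULL current_vmcs12.
 */
static int nested_vmx_check_vmcs12(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	if (vmx->nested.current_vmptr == -1ull) {
		nested_vmx_failInvalid(vcpu);
		skip_emulated_instruction(vcpu);
		return 0;	/* caller returns 1 to KVM and does not enter L2 */
	}
	return 1;
}
```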
On Mon, May 16, 2011, Nadav Har'El wrote about "Re: [PATCH 12/30] nVMX: Implement VMPTRLD":
> But now that I think about it, there does appear to be a problem in
> nested_vmx_run(): this is where we should have verified that there is a
> current VMCS - i.e., that VMPTRLD was previously used! And it seems I
> forgot to test this... :( I'll need to add such a test - not as a BUG_ON,
> but as a real test that causes the VMLAUNCH instruction to fail (I have to
> look at the spec to see exactly how) if VMPTRLD hadn't previously been
> done.

Oh, and there appears to be a similar problem with VMWRITE/VMREAD - they
can also be called before VMPTRLD was ever called, and cause us to
dereference bogus pointers.

Thanks for spotting this.

Nadav.
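[Editor's note: the same guard would cover the VMREAD/VMWRITE paths. Below is a skeleton under the assumption that those handlers (handle_vmread()/handle_vmwrite(), added later in the series) start out like the other nested handlers; it reuses the hypothetical nested_vmx_check_vmcs12() helper sketched above.]

```c
/*
 * Hypothetical skeleton of a VMREAD handler showing where the guard goes;
 * the real handler appears later in the series, so the actual emulation is
 * elided here.
 */
static int handle_vmread(struct kvm_vcpu *vcpu)
{
	if (!nested_vmx_check_permission(vcpu))
		return 1;

	/* No current VMCS12: fail the instruction rather than crash L0. */
	if (!nested_vmx_check_vmcs12(vcpu))
		return 1;

	/* ... emulate VMREAD against get_vmcs12(vcpu), which is now valid ... */
	return 1;
}
```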
--- .before/arch/x86/kvm/vmx.c	2011-05-08 10:43:19.000000000 +0300
+++ .after/arch/x86/kvm/vmx.c	2011-05-08 10:43:19.000000000 +0300
@@ -4814,6 +4814,66 @@ static int handle_vmclear(struct kvm_vcp
 	return 1;
 }
 
+/* Emulate the VMPTRLD instruction */
+static int handle_vmptrld(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	gva_t gva;
+	gpa_t vmcs12_addr;
+	struct x86_exception e;
+
+	if (!nested_vmx_check_permission(vcpu))
+		return 1;
+
+	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
+			vmcs_read32(VMX_INSTRUCTION_INFO), &gva))
+		return 1;
+
+	if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &vmcs12_addr,
+				sizeof(vmcs12_addr), &e)) {
+		kvm_inject_page_fault(vcpu, &e);
+		return 1;
+	}
+
+	if (!IS_ALIGNED(vmcs12_addr, PAGE_SIZE)) {
+		nested_vmx_failValid(vcpu, VMXERR_VMPTRLD_INVALID_ADDRESS);
+		skip_emulated_instruction(vcpu);
+		return 1;
+	}
+
+	if (vmx->nested.current_vmptr != vmcs12_addr) {
+		struct vmcs12 *new_vmcs12;
+		struct page *page;
+		page = nested_get_page(vcpu, vmcs12_addr);
+		if (page == NULL) {
+			nested_vmx_failInvalid(vcpu);
+			skip_emulated_instruction(vcpu);
+			return 1;
+		}
+		new_vmcs12 = kmap(page);
+		if (new_vmcs12->revision_id != VMCS12_REVISION) {
+			kunmap(page);
+			nested_release_page_clean(page);
+			nested_vmx_failValid(vcpu,
+				VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID);
+			skip_emulated_instruction(vcpu);
+			return 1;
+		}
+		if (vmx->nested.current_vmptr != -1ull) {
+			kunmap(vmx->nested.current_vmcs12_page);
+			nested_release_page(vmx->nested.current_vmcs12_page);
+		}
+
+		vmx->nested.current_vmptr = vmcs12_addr;
+		vmx->nested.current_vmcs12 = new_vmcs12;
+		vmx->nested.current_vmcs12_page = page;
+	}
+
+	nested_vmx_succeed(vcpu);
+	skip_emulated_instruction(vcpu);
+	return 1;
+}
+
 /*
  * The exit handlers return 1 if the exit was handled fully and guest execution
  * may resume. Otherwise they set the kvm_run parameter to indicate what needs
@@ -4837,7 +4897,7 @@ static int (*kvm_vmx_exit_handlers[])(st
 	[EXIT_REASON_VMCALL]                  = handle_vmcall,
 	[EXIT_REASON_VMCLEAR]                 = handle_vmclear,
 	[EXIT_REASON_VMLAUNCH]                = handle_vmx_insn,
-	[EXIT_REASON_VMPTRLD]                 = handle_vmx_insn,
+	[EXIT_REASON_VMPTRLD]                 = handle_vmptrld,
 	[EXIT_REASON_VMPTRST]                 = handle_vmx_insn,
 	[EXIT_REASON_VMREAD]                  = handle_vmx_insn,
 	[EXIT_REASON_VMRESUME]                = handle_vmx_insn,
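[Editor's note: for context, a sketch of the per-vCPU nested state the new handler manipulates. The field names come from the diff above, but the struct itself is defined in an earlier patch of the series, so the layout shown here is an illustrative assumption.]

```c
/*
 * Illustrative only: the pieces of vmx->nested that handle_vmptrld() reads
 * and writes.  current_vmptr is expected to start out as -1ull, which is
 * why the "release the previous page" branch above is safe.
 */
struct nested_vmx {
	/* Guest-physical address of the current VMCS12, or -1ull if none. */
	gpa_t current_vmptr;
	/* kmap()ed view of current_vmcs12_page, valid while current_vmptr != -1ull. */
	struct vmcs12 *current_vmcs12;
	struct page *current_vmcs12_page;
	/* ... other nested-VMX state (VMXON status, etc.) elided ... */
};
```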
This patch implements the VMPTRLD instruction.

Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
---
 arch/x86/kvm/vmx.c |   62 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 61 insertions(+), 1 deletion(-)