From patchwork Sun Oct 17 10:07:09 2010
X-Patchwork-Submitter: Nadav Har'El
X-Patchwork-Id: 259781
Date: Sun, 17 Oct 2010 12:07:09 +0200
Message-Id: <201010171007.o9HA79UZ029366@rice.haifa.ibm.com>
From: "Nadav Har'El"
To: kvm@vger.kernel.org
Cc: gleb@redhat.com, avi@redhat.com
References: <1287309814-nyh@il.ibm.com>
Subject: [PATCH 07/27] nVMX: Decoding memory operands of VMX instructions

--- .before/arch/x86/kvm/x86.c 2010-10-17 11:52:00.000000000 +0200
+++ .after/arch/x86/kvm/x86.c 2010-10-17 11:52:00.000000000 +0200
@@ -3636,13 +3636,14 @@ static int kvm_fetch_guest_virt(gva_t ad
 					  access | PFERR_FETCH_MASK, error);
 }
 
-static int kvm_read_guest_virt(gva_t addr, void *val, unsigned int bytes,
+int kvm_read_guest_virt(gva_t addr, void *val, unsigned int bytes,
 			       struct kvm_vcpu *vcpu, u32 *error)
 {
 	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access,
 					  error);
 }
+EXPORT_SYMBOL_GPL(kvm_read_guest_virt);
 
 static int kvm_read_guest_virt_system(gva_t addr, void *val, unsigned int bytes,
 			struct kvm_vcpu *vcpu, u32 *error)

--- .before/arch/x86/kvm/x86.h 2010-10-17 11:52:00.000000000 +0200
+++ .after/arch/x86/kvm/x86.h 2010-10-17 11:52:00.000000000 +0200
@@ -74,6 +74,9 @@ void kvm_before_handle_nmi(struct kvm_vc
 void kvm_after_handle_nmi(struct kvm_vcpu *vcpu);
 int kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq);
 
+int kvm_read_guest_virt(gva_t addr, void *val, unsigned int bytes,
+			struct kvm_vcpu *vcpu, u32 *error);
+
 void kvm_write_tsc(struct kvm_vcpu *vcpu, u64 data);
 
 #endif

--- .before/arch/x86/kvm/vmx.c 2010-10-17 11:52:00.000000000 +0200
+++ .after/arch/x86/kvm/vmx.c 2010-10-17 11:52:00.000000000 +0200
@@ -3647,6 +3647,65 @@ static int handle_vmoff(struct kvm_vcpu
 	return 1;
 }
 
+/*
+ * Decode the memory-address operand of a vmx instruction, as recorded on an
+ * exit caused by such an instruction (run by a guest hypervisor).
+ * On success, returns 0. When the operand is invalid, returns 1 and throws
+ * #UD or #GP.
+ */
+static int get_vmx_mem_address(struct kvm_vcpu *vcpu,
+			       unsigned long exit_qualification,
+			       u32 vmx_instruction_info, gva_t *ret)
+{
+	/*
+	 * According to Vol. 3B, "Information for VM Exits Due to Instruction
+	 * Execution", on an exit, vmx_instruction_info holds most of the
+	 * addressing components of the operand. Only the displacement part
+	 * is put in exit_qualification (see 3B, "Basic VM-Exit Information").
+	 * For how an actual address is calculated from all these components,
+	 * refer to Vol. 1, "Operand Addressing".
+	 */
+	int scaling = vmx_instruction_info & 3;
+	int addr_size = (vmx_instruction_info >> 7) & 7;
+	bool is_reg = vmx_instruction_info & (1u << 10);
+	int seg_reg = (vmx_instruction_info >> 15) & 7;
+	int index_reg = (vmx_instruction_info >> 18) & 0xf;
+	bool index_is_valid = !(vmx_instruction_info & (1u << 22));
+	int base_reg = (vmx_instruction_info >> 23) & 0xf;
+	bool base_is_valid = !(vmx_instruction_info & (1u << 27));
+
+	if (is_reg) {
+		kvm_queue_exception(vcpu, UD_VECTOR);
+		return 1;
+	}
+
+	switch (addr_size) {
+	case 1: /* 32 bit. high bits are undefined according to the spec: */
+		exit_qualification &= 0xffffffff;
+		break;
+	case 2: /* 64 bit */
+		break;
+	default: /* 16 bit */
+		return 1;
+	}
+
+	/* Addr = segment_base + offset */
+	/* offset = base + [index * scale] + displacement */
+	*ret = vmx_get_segment_base(vcpu, seg_reg);
+	if (base_is_valid)
+		*ret += kvm_register_read(vcpu, base_reg);
+	if (index_is_valid)
+		*ret += kvm_register_read(vcpu, index_reg) << scaling;
+	*ret += exit_qualification; /* holds the displacement */
+
+	return 0;
+}
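
[Editor's note] The two pieces above are designed to be used together by the
VMX-instruction exit handlers added later in this series: a handler first
decodes the operand's guest-virtual address with get_vmx_mem_address(), then
fetches the operand itself through the newly exported kvm_read_guest_virt().
Below is a minimal sketch of such a caller; the handler name
handle_vmptrld_sketch is hypothetical, and queueing #PF on a failed read is
illustrative rather than taken from this patch:

/*
 * Minimal sketch (not part of this patch): a VMX-instruction exit handler
 * that decodes its memory operand and reads it from L1 guest memory.
 * The handler name and the #PF error handling are assumptions.
 */
static int handle_vmptrld_sketch(struct kvm_vcpu *vcpu)
{
	gva_t gva;
	gpa_t vmptr;

	/* Decode the operand's guest-virtual address; on failure,
	 * get_vmx_mem_address() has already queued an exception. */
	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
			vmcs_read32(VMX_INSTRUCTION_INFO), &gva))
		return 1;

	/* Fetch the 64-bit VMCS pointer the guest passed in memory. */
	if (kvm_read_guest_virt(gva, &vmptr, sizeof(vmptr), vcpu, NULL)) {
		kvm_queue_exception(vcpu, PF_VECTOR);
		return 1;
	}

	/* ... validate and act on vmptr ... */
	skip_emulated_instruction(vcpu);
	return 1;
}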