From patchwork Sun Jun 13 12:26:09 2010
X-Patchwork-Submitter: Nadav Har'El
X-Patchwork-Id: 105786
From: "Nadav Har'El"
To: avi@redhat.com
Cc: kvm@vger.kernel.org
Date: Sun, 13 Jun 2010 15:26:09 +0300
Message-Id: <201006131226.o5DCQ95O012945@rice.haifa.ibm.com>
References: <1276431753-nyh@il.ibm.com>
Subject: [PATCH 7/24] Understanding guest pointers to vmcs12 structures

--- .before/arch/x86/kvm/x86.c	2010-06-13 15:01:29.000000000 +0300
+++ .after/arch/x86/kvm/x86.c	2010-06-13 15:01:29.000000000 +0300
@@ -3286,13 +3286,14 @@ static int kvm_fetch_guest_virt(gva_t ad
 					  access | PFERR_FETCH_MASK, error);
 }
 
-static int kvm_read_guest_virt(gva_t addr, void *val, unsigned int bytes,
+int kvm_read_guest_virt(gva_t addr, void *val, unsigned int bytes,
 			       struct kvm_vcpu *vcpu, u32 *error)
 {
 	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
 
 	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access,
 					  error);
 }
+EXPORT_SYMBOL_GPL(kvm_read_guest_virt);
 
 static int kvm_read_guest_virt_system(gva_t addr, void *val, unsigned int bytes,
 			       struct kvm_vcpu *vcpu, u32 *error)

--- .before/arch/x86/kvm/x86.h	2010-06-13 15:01:29.000000000 +0300
+++ .after/arch/x86/kvm/x86.h	2010-06-13 15:01:29.000000000 +0300
@@ -75,6 +75,9 @@ static inline struct kvm_mem_aliases *kv
 void kvm_before_handle_nmi(struct kvm_vcpu *vcpu);
 void kvm_after_handle_nmi(struct kvm_vcpu *vcpu);
 
+int kvm_read_guest_virt(gva_t addr, void *val, unsigned int bytes,
+			struct kvm_vcpu *vcpu, u32 *error);
+
 extern int nested;
 
 #endif

--- .before/arch/x86/kvm/vmx.c	2010-06-13 15:01:29.000000000 +0300
+++ .after/arch/x86/kvm/vmx.c	2010-06-13 15:01:29.000000000 +0300
@@ -3654,6 +3654,86 @@ static int handle_vmoff(struct kvm_vcpu
 	return 1;
 }
 
+/*
+ * Decode the memory-address operand of a vmx instruction, according to the
+ * Intel spec.
+ */
+#define VMX_OPERAND_SCALING(vii)	((vii) & 3)
+#define VMX_OPERAND_ADDR_SIZE(vii)	(((vii) >> 7) & 7)
+#define VMX_OPERAND_IS_REG(vii)		((vii) & (1u << 10))
+#define VMX_OPERAND_SEG_REG(vii)	(((vii) >> 15) & 7)
+#define VMX_OPERAND_INDEX_REG(vii)	(((vii) >> 18) & 0xf)
+#define VMX_OPERAND_INDEX_INVALID(vii)	((vii) & (1u << 22))
+#define VMX_OPERAND_BASE_REG(vii)	(((vii) >> 23) & 0xf)
+#define VMX_OPERAND_BASE_INVALID(vii)	((vii) & (1u << 27))
+#define VMX_OPERAND_REG(vii)		(((vii) >> 3) & 0xf)
+#define VMX_OPERAND_REG2(vii)		(((vii) >> 28) & 0xf)
+static gva_t get_vmx_mem_address(struct kvm_vcpu *vcpu,
+				 unsigned long exit_qualification,
+				 u32 vmx_instruction_info)
+{
+	int scaling = VMX_OPERAND_SCALING(vmx_instruction_info);
+	int addr_size = VMX_OPERAND_ADDR_SIZE(vmx_instruction_info);
+	bool is_reg = VMX_OPERAND_IS_REG(vmx_instruction_info);
+	int seg_reg = VMX_OPERAND_SEG_REG(vmx_instruction_info);
+	int index_reg = VMX_OPERAND_INDEX_REG(vmx_instruction_info);
+	bool index_is_valid = !VMX_OPERAND_INDEX_INVALID(vmx_instruction_info);
+	int base_reg = VMX_OPERAND_BASE_REG(vmx_instruction_info);
+	bool base_is_valid = !VMX_OPERAND_BASE_INVALID(vmx_instruction_info);
+	gva_t addr;
+
+	if (is_reg) {
+		kvm_queue_exception(vcpu, UD_VECTOR);
+		return 0;
+	}
+
+	switch (addr_size) {
+	case 1: /* 32 bit. high bits are undefined according to the spec: */
+		exit_qualification &= 0xffffffff;
+		break;
+	case 2: /* 64 bit */
+		break;
+	default: /* addr_size=0 means 16 bit */
+		return 0;
+	}
+
+	/* Addr = segment_base + offset */
+	/* offset = Base + [Index * Scale] + Displacement */
+	addr = vmx_get_segment_base(vcpu, seg_reg);
+	if (base_is_valid)
+		addr += kvm_register_read(vcpu, base_reg);
+	if (index_is_valid)
+		addr += kvm_register_read(vcpu, index_reg) << scaling;
+	addr += exit_qualification; /* holds the displacement */
+
+	return addr;
+}
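
The two helpers added above are meant to be used together by the VMX
instruction handlers introduced later in this series: get_vmx_mem_address()
decodes where in guest-virtual memory an instruction operand lives, and the
newly-exported kvm_read_guest_virt() copies it out while honoring the guest's
CPL. As a rough sketch of the intended usage (the function name and the
fault-injection policy are illustrative assumptions, not code from this
series), a VMPTRLD-style exit handler in vmx.c could fetch its 64-bit
guest pointer operand like this:

/*
 * Illustrative sketch, not part of this patch: fetch the 64-bit guest
 * pointer operand of a VMX instruction. Assumed to live in vmx.c, where
 * vmcs_readl()/vmcs_read32() are in scope.
 */
static int example_read_guest_vmptr(struct kvm_vcpu *vcpu, gpa_t *vmptr)
{
	u32 error;
	gva_t gva = get_vmx_mem_address(vcpu,
					vmcs_readl(EXIT_QUALIFICATION),
					vmcs_read32(VMX_INSTRUCTION_INFO));

	/* Copy the operand from guest-virtual memory, honoring the CPL. */
	if (kvm_read_guest_virt(gva, vmptr, sizeof(*vmptr), vcpu, &error)) {
		kvm_inject_page_fault(vcpu, gva, error);
		return 1;	/* fault injected; resume the guest */
	}
	return 0;
}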