From patchwork Mon May 6 07:04:29 2013
X-Patchwork-Submitter: "Nakajima, Jun"
X-Patchwork-Id: 2522921
From: Jun Nakajima
To: kvm@vger.kernel.org
Subject: [PATCH v2 10/13] nEPT: Nested INVEPT
Date: Mon, 6 May 2013 00:04:29 -0700
Message-Id: <1367823872-25895-10-git-send-email-jun.nakajima@intel.com>
X-Mailer: git-send-email 1.8.2.1.610.g562af5b
In-Reply-To: <1367823872-25895-9-git-send-email-jun.nakajima@intel.com>
References: <1367823872-25895-1-git-send-email-jun.nakajima@intel.com>
 <1367823872-25895-2-git-send-email-jun.nakajima@intel.com>
 <1367823872-25895-3-git-send-email-jun.nakajima@intel.com>
 <1367823872-25895-4-git-send-email-jun.nakajima@intel.com>
 <1367823872-25895-5-git-send-email-jun.nakajima@intel.com>
 <1367823872-25895-6-git-send-email-jun.nakajima@intel.com>
 <1367823872-25895-7-git-send-email-jun.nakajima@intel.com>
 <1367823872-25895-8-git-send-email-jun.nakajima@intel.com>
 <1367823872-25895-9-git-send-email-jun.nakajima@intel.com>
X-Mailing-List: kvm@vger.kernel.org

If we let L1 use EPT, we should probably also support the INVEPT
instruction.

In our current nested EPT implementation, when L1 changes its EPT table
for L2 (i.e., EPT12), L0 modifies the shadow EPT table (EPT02), and in
the course of this modification already calls INVEPT. Therefore, when L1
calls INVEPT, we don't really need to do anything. In particular we
*don't* need to call the real INVEPT again. All we do in our INVEPT is
verify the validity of the call and its parameters, and then do nothing.
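For reference, the single-level invalidation that L0 already issues when
it updates EPT02 looks roughly like the sketch below. It is modeled on
the existing __invept()/ept_sync_context() helpers in vmx.c, with the
error paths omitted, and is not part of this patch:

	/*
	 * Simplified sketch of the real INVEPT that L0 executes: the
	 * extent type goes in a GP register and a 128-bit descriptor
	 * { EPTP, reserved } is read from memory.
	 */
	static inline void invept_sketch(int ext, u64 eptp)
	{
		struct {
			u64 eptp, reserved;
		} operand = { eptp, 0 };

		/* invept (%rax), %rcx */
		asm volatile (".byte 0x66, 0x0f, 0x38, 0x80, 0x08"
			      : : "a" (&operand), "c" (ext) : "cc", "memory");
	}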
At KVM Forum 2010, Dong et al. presented "Nested Virtualization Friendly
KVM" and classified our current nested EPT implementation as
"shadow-like virtual EPT". They recommended instead a different
approach, which they called "VTLB-like virtual EPT". If we had taken
that alternative approach, INVEPT would have had a bigger role: L0 would
only rebuild the shadow EPT table when L1 calls INVEPT.

Signed-off-by: Nadav Har'El
Signed-off-by: Jun Nakajima
Signed-off-by: Xinhao Xu
---
 arch/x86/include/uapi/asm/vmx.h |  1 +
 arch/x86/kvm/vmx.c              | 83 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)
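One subtle point in handle_invept() below: bits 31:28 of the VMX
instruction-information field are not the extent type themselves; they
select the general-purpose register that holds the type. The inline
expression in the handler is equivalent to this hypothetical helper
(shown only for illustration, not added by the patch):

	static unsigned long invept_extent_type(struct kvm_vcpu *vcpu,
						u32 vmx_instruction_info)
	{
		/* bits 31:28 name a GP register ... */
		int reg = (vmx_instruction_info >> 28) & 0xf;

		/* ... and that register's value is the extent type */
		return kvm_register_read(vcpu, reg);
	}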
diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
index 2871fcc..ec51012 100644
--- a/arch/x86/include/uapi/asm/vmx.h
+++ b/arch/x86/include/uapi/asm/vmx.h
@@ -65,6 +65,7 @@
 #define EXIT_REASON_EOI_INDUCED         45
 #define EXIT_REASON_EPT_VIOLATION       48
 #define EXIT_REASON_EPT_MISCONFIG       49
+#define EXIT_REASON_INVEPT              50
 #define EXIT_REASON_WBINVD              54
 #define EXIT_REASON_XSETBV              55
 #define EXIT_REASON_APIC_WRITE          56
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index de6cfb4..86e4022 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5879,6 +5879,87 @@ static int handle_vmptrst(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+/* Emulate the INVEPT instruction */
+static int handle_invept(struct kvm_vcpu *vcpu)
+{
+	u32 vmx_instruction_info;
+	unsigned long type;
+	gva_t gva;
+	struct x86_exception e;
+	struct {
+		u64 eptp, gpa;
+	} operand;
+
+	if (!(nested_vmx_secondary_ctls_high & SECONDARY_EXEC_ENABLE_EPT) ||
+	    !(nested_vmx_ept_caps & VMX_EPT_INVEPT_BIT)) {
+		kvm_queue_exception(vcpu, UD_VECTOR);
+		return 1;
+	}
+
+	if (!nested_vmx_check_permission(vcpu))
+		return 1;
+
+	if (!kvm_read_cr0_bits(vcpu, X86_CR0_PE)) {
+		kvm_queue_exception(vcpu, UD_VECTOR);
+		return 1;
+	}
+
+	/* According to the Intel VMX instruction reference, the memory
+	 * operand is read even if it isn't needed (e.g., for type==global)
+	 */
+	vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
+			vmx_instruction_info, &gva))
+		return 1;
+	if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &operand,
+				sizeof(operand), &e)) {
+		kvm_inject_page_fault(vcpu, &e);
+		return 1;
+	}
+
+	type = kvm_register_read(vcpu, (vmx_instruction_info >> 28) & 0xf);
+
+	switch (type) {
+	case VMX_EPT_EXTENT_GLOBAL:
+		if (!(nested_vmx_ept_caps & VMX_EPT_EXTENT_GLOBAL_BIT))
+			nested_vmx_failValid(vcpu,
+				VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
+		else {
+			/*
+			 * Do nothing: when L1 changes EPT12, we already
+			 * update EPT02 (the shadow EPT table) and call INVEPT.
+			 * So when L1 calls INVEPT, there's nothing left to do.
+			 */
+			nested_vmx_succeed(vcpu);
+		}
+		break;
+	case VMX_EPT_EXTENT_CONTEXT:
+		if (!(nested_vmx_ept_caps & VMX_EPT_EXTENT_CONTEXT_BIT))
+			nested_vmx_failValid(vcpu,
+				VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
+		else {
+			/* Do nothing */
+			nested_vmx_succeed(vcpu);
+		}
+		break;
+	case VMX_EPT_EXTENT_INDIVIDUAL_ADDR:
+		if (!(nested_vmx_ept_caps & VMX_EPT_EXTENT_INDIVIDUAL_BIT))
+			nested_vmx_failValid(vcpu,
+				VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
+		else {
+			/* Do nothing */
+			nested_vmx_succeed(vcpu);
+		}
+		break;
+	default:
+		nested_vmx_failValid(vcpu,
+			VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
+	}
+
+	skip_emulated_instruction(vcpu);
+	return 1;
+}
+
 /*
  * The exit handlers return 1 if the exit was handled fully and guest execution
  * may resume.  Otherwise they set the kvm_run parameter to indicate what needs
@@ -5923,6 +6004,7 @@ static int (*const kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 	[EXIT_REASON_PAUSE_INSTRUCTION]       = handle_pause,
 	[EXIT_REASON_MWAIT_INSTRUCTION]       = handle_invalid_op,
 	[EXIT_REASON_MONITOR_INSTRUCTION]     = handle_invalid_op,
+	[EXIT_REASON_INVEPT]                  = handle_invept,
 };
 
 static const int kvm_vmx_max_exit_handlers =
@@ -6107,6 +6189,7 @@ static bool nested_vmx_exit_handled(struct kvm_vcpu *vcpu)
 	case EXIT_REASON_VMPTRST: case EXIT_REASON_VMREAD:
 	case EXIT_REASON_VMRESUME: case EXIT_REASON_VMWRITE:
 	case EXIT_REASON_VMOFF: case EXIT_REASON_VMON:
+	case EXIT_REASON_INVEPT:
 		/*
 		 * VMX instructions trap unconditionally. This allows L1 to
 		 * emulate them for its L2 guest, i.e., allows 3-level nesting!
 		 */
 		return 1;
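For context (existing code, not part of this diff): once the table entry
above is in place, the dispatch in vmx_handle_exit(), shown here
slightly simplified, is what routes the new exit reason to the handler:

	/* index the exit-handler table by the hardware exit reason, so
	 * EXIT_REASON_INVEPT (50) now reaches handle_invept()
	 */
	if (exit_reason < kvm_vmx_max_exit_handlers
	    && kvm_vmx_exit_handlers[exit_reason])
		return kvm_vmx_exit_handlers[exit_reason](vcpu);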