From patchwork Thu Apr 1 14:38:14 2021 X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 12178593 From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Thomas Gleixner , Wanpeng Li , Borislav Petkov , Paolo Bonzini , Jim Mattson , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Vitaly Kuznetsov , "H.
Peter Anvin" , Joerg Roedel , Ingo Molnar , x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Sean Christopherson , Maxim Levitsky Subject: [PATCH 1/4] KVM: x86: pending exceptions must not be blocked by an injected event Date: Thu, 1 Apr 2021 17:38:14 +0300 Message-Id: <20210401143817.1030695-2-mlevitsk@redhat.com> In-Reply-To: <20210401143817.1030695-1-mlevitsk@redhat.com> References: <20210401143817.1030695-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Injected interrupts/nmi should not block a pending exception, but rather be either lost if nested hypervisor doesn't intercept the pending exception (as in stock x86), or be delivered in exitintinfo/IDT_VECTORING_INFO field, as a part of a VMexit that corresponds to the pending exception. The only reason for an exception to be blocked is when nested run is pending (and that can't really happen currently but still worth checking for). Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 8 +++++++- arch/x86/kvm/vmx/nested.c | 10 ++++++++-- 2 files changed, 15 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 8523f60adb92..34a37b2bd486 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -1062,7 +1062,13 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu) } if (vcpu->arch.exception.pending) { - if (block_nested_events) + /* + * Only a pending nested run can block a pending exception. + * Otherwise an injected NMI/interrupt should either be + * lost or delivered to the nested hypervisor in the EXITINTINFO + * vmcb field, while delivering the pending exception. + */ + if (svm->nested.nested_run_pending) return -EBUSY; if (!nested_exit_on_exception(svm)) return 0; diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index fd334e4aa6db..c3ba842fc07f 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -3806,9 +3806,15 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu) /* * Process any exceptions that are not debug traps before MTF. + * + * Note that only a pending nested run can block a pending exception. + * Otherwise an injected NMI/interrupt should either be + * lost or delivered to the nested hypervisor in the IDT_VECTORING_INFO, + * while delivering the pending exception. 
*/ + if (vcpu->arch.exception.pending && !vmx_pending_dbg_trap(vcpu)) { - if (block_nested_events) + if (vmx->nested.nested_run_pending) return -EBUSY; if (!nested_vmx_check_exception(vcpu, &exit_qual)) goto no_vmexit; @@ -3825,7 +3831,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu) } if (vcpu->arch.exception.pending) { - if (block_nested_events) + if (vmx->nested.nested_run_pending) return -EBUSY; if (!nested_vmx_check_exception(vcpu, &exit_qual)) goto no_vmexit; From patchwork Thu Apr 1 14:38:15 2021 X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 12178483 From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Thomas Gleixner , Wanpeng Li , Borislav Petkov , Paolo Bonzini , Jim Mattson , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Vitaly Kuznetsov , "H.
Peter Anvin" , Joerg Roedel , Ingo Molnar , x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Sean Christopherson , Maxim Levitsky Subject: [PATCH 2/4] KVM: x86: separate pending and injected exception Date: Thu, 1 Apr 2021 17:38:15 +0300 Message-Id: <20210401143817.1030695-3-mlevitsk@redhat.com> In-Reply-To: <20210401143817.1030695-1-mlevitsk@redhat.com> References: <20210401143817.1030695-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Use 'pending_exception' and 'injected_exception' fields to store the pending and the injected exceptions. After this patch still only one is active, but in the next patch both could co-exist in some cases. Signed-off-by: Maxim Levitsky --- arch/x86/include/asm/kvm_host.h | 25 ++++-- arch/x86/kvm/svm/nested.c | 26 +++--- arch/x86/kvm/svm/svm.c | 6 +- arch/x86/kvm/vmx/nested.c | 36 ++++---- arch/x86/kvm/vmx/vmx.c | 12 +-- arch/x86/kvm/x86.c | 145 ++++++++++++++++++-------------- arch/x86/kvm/x86.h | 6 +- 7 files changed, 143 insertions(+), 113 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index a52f973bdff6..3b2fd276e8d5 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -547,6 +547,14 @@ struct kvm_vcpu_xen { u64 runstate_times[4]; }; +struct kvm_queued_exception { + bool valid; + u8 nr; + bool has_error_code; + u32 error_code; +}; + + struct kvm_vcpu_arch { /* * rip and regs accesses must go through @@ -645,16 +653,15 @@ struct kvm_vcpu_arch { u8 event_exit_inst_len; - struct kvm_queued_exception { - bool pending; - bool injected; - bool has_error_code; - u8 nr; - u32 error_code; - unsigned long payload; - bool has_payload; + struct kvm_queued_exception pending_exception; + + struct kvm_exception_payload { + bool valid; + unsigned long value; u8 nested_apf; - } exception; + } exception_payload; + + struct kvm_queued_exception injected_exception; struct kvm_queued_interrupt { bool injected; diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 34a37b2bd486..7adad9b6dcad 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -349,14 +349,14 @@ static void nested_save_pending_event_to_vmcb12(struct vcpu_svm *svm, u32 exit_int_info = 0; unsigned int nr; - if (vcpu->arch.exception.injected) { - nr = vcpu->arch.exception.nr; + if (vcpu->arch.injected_exception.valid) { + nr = vcpu->arch.injected_exception.nr; exit_int_info = nr | SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_EXEPT; - if (vcpu->arch.exception.has_error_code) { + if (vcpu->arch.injected_exception.has_error_code) { exit_int_info |= SVM_EVTINJ_VALID_ERR; vmcb12->control.exit_int_info_err = - vcpu->arch.exception.error_code; + vcpu->arch.injected_exception.error_code; } } else if (vcpu->arch.nmi_injected) { @@ -1000,30 +1000,30 @@ int nested_svm_check_permissions(struct kvm_vcpu *vcpu) static bool nested_exit_on_exception(struct vcpu_svm *svm) { - unsigned int nr = svm->vcpu.arch.exception.nr; + unsigned int nr = svm->vcpu.arch.pending_exception.nr; return (svm->nested.ctl.intercepts[INTERCEPT_EXCEPTION] & BIT(nr)); } static void nested_svm_inject_exception_vmexit(struct vcpu_svm *svm) { - unsigned int nr = svm->vcpu.arch.exception.nr; + unsigned int nr = svm->vcpu.arch.pending_exception.nr; svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + nr; svm->vmcb->control.exit_code_hi = 0; - if (svm->vcpu.arch.exception.has_error_code) - svm->vmcb->control.exit_info_1 = 
svm->vcpu.arch.exception.error_code; + if (svm->vcpu.arch.pending_exception.has_error_code) + svm->vmcb->control.exit_info_1 = svm->vcpu.arch.pending_exception.error_code; /* * EXITINFO2 is undefined for all exception intercepts other * than #PF. */ if (nr == PF_VECTOR) { - if (svm->vcpu.arch.exception.nested_apf) + if (svm->vcpu.arch.exception_payload.nested_apf) svm->vmcb->control.exit_info_2 = svm->vcpu.arch.apf.nested_apf_token; - else if (svm->vcpu.arch.exception.has_payload) - svm->vmcb->control.exit_info_2 = svm->vcpu.arch.exception.payload; + else if (svm->vcpu.arch.exception_payload.valid) + svm->vmcb->control.exit_info_2 = svm->vcpu.arch.exception_payload.value; else svm->vmcb->control.exit_info_2 = svm->vcpu.arch.cr2; } else if (nr == DB_VECTOR) { @@ -1034,7 +1034,7 @@ static void nested_svm_inject_exception_vmexit(struct vcpu_svm *svm) kvm_update_dr7(&svm->vcpu); } } else - WARN_ON(svm->vcpu.arch.exception.has_payload); + WARN_ON(svm->vcpu.arch.exception_payload.valid); nested_svm_vmexit(svm); } @@ -1061,7 +1061,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu) return 0; } - if (vcpu->arch.exception.pending) { + if (vcpu->arch.pending_exception.valid) { /* * Only a pending nested run can block a pending exception. * Otherwise an injected NMI/interrupt should either be diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 271196400495..90b541138c5a 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -359,9 +359,9 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu) static void svm_queue_exception(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); - unsigned nr = vcpu->arch.exception.nr; - bool has_error_code = vcpu->arch.exception.has_error_code; - u32 error_code = vcpu->arch.exception.error_code; + unsigned int nr = vcpu->arch.injected_exception.nr; + bool has_error_code = vcpu->arch.injected_exception.has_error_code; + u32 error_code = vcpu->arch.injected_exception.error_code; kvm_deliver_exception_payload(vcpu); diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index c3ba842fc07f..5d54fecff9a7 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -388,17 +388,17 @@ static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12, static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit_qual) { struct vmcs12 *vmcs12 = get_vmcs12(vcpu); - unsigned int nr = vcpu->arch.exception.nr; - bool has_payload = vcpu->arch.exception.has_payload; - unsigned long payload = vcpu->arch.exception.payload; + unsigned int nr = vcpu->arch.pending_exception.nr; + bool has_payload = vcpu->arch.exception_payload.valid; + unsigned long payload = vcpu->arch.exception_payload.value; if (nr == PF_VECTOR) { - if (vcpu->arch.exception.nested_apf) { + if (vcpu->arch.exception_payload.nested_apf) { *exit_qual = vcpu->arch.apf.nested_apf_token; return 1; } if (nested_vmx_is_page_fault_vmexit(vmcs12, - vcpu->arch.exception.error_code)) { + vcpu->arch.pending_exception.error_code)) { *exit_qual = has_payload ? 
payload : vcpu->arch.cr2; return 1; } @@ -3617,8 +3617,8 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu, u32 idt_vectoring; unsigned int nr; - if (vcpu->arch.exception.injected) { - nr = vcpu->arch.exception.nr; + if (vcpu->arch.injected_exception.valid) { + nr = vcpu->arch.injected_exception.nr; idt_vectoring = nr | VECTORING_INFO_VALID_MASK; if (kvm_exception_is_soft(nr)) { @@ -3628,10 +3628,10 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu, } else idt_vectoring |= INTR_TYPE_HARD_EXCEPTION; - if (vcpu->arch.exception.has_error_code) { + if (vcpu->arch.injected_exception.has_error_code) { idt_vectoring |= VECTORING_INFO_DELIVER_CODE_MASK; vmcs12->idt_vectoring_error_code = - vcpu->arch.exception.error_code; + vcpu->arch.injected_exception.error_code; } vmcs12->idt_vectoring_info_field = idt_vectoring; @@ -3712,11 +3712,11 @@ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu, unsigned long exit_qual) { struct vmcs12 *vmcs12 = get_vmcs12(vcpu); - unsigned int nr = vcpu->arch.exception.nr; + unsigned int nr = vcpu->arch.pending_exception.nr; u32 intr_info = nr | INTR_INFO_VALID_MASK; - if (vcpu->arch.exception.has_error_code) { - vmcs12->vm_exit_intr_error_code = vcpu->arch.exception.error_code; + if (vcpu->arch.pending_exception.has_error_code) { + vmcs12->vm_exit_intr_error_code = vcpu->arch.pending_exception.error_code; intr_info |= INTR_INFO_DELIVER_CODE_MASK; } @@ -3740,9 +3740,9 @@ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu, */ static inline bool vmx_pending_dbg_trap(struct kvm_vcpu *vcpu) { - return vcpu->arch.exception.pending && - vcpu->arch.exception.nr == DB_VECTOR && - vcpu->arch.exception.payload; + return vcpu->arch.pending_exception.valid && + vcpu->arch.pending_exception.nr == DB_VECTOR && + vcpu->arch.exception_payload.value; } /* @@ -3756,7 +3756,7 @@ static void nested_vmx_update_pending_dbg(struct kvm_vcpu *vcpu) { if (vmx_pending_dbg_trap(vcpu)) vmcs_writel(GUEST_PENDING_DBG_EXCEPTIONS, - vcpu->arch.exception.payload); + vcpu->arch.exception_payload.value); } static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu) @@ -3813,7 +3813,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu) * while delivering the pending exception. */ - if (vcpu->arch.exception.pending && !vmx_pending_dbg_trap(vcpu)) { + if (vcpu->arch.pending_exception.valid && !vmx_pending_dbg_trap(vcpu)) { if (vmx->nested.nested_run_pending) return -EBUSY; if (!nested_vmx_check_exception(vcpu, &exit_qual)) @@ -3830,7 +3830,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu) return 0; } - if (vcpu->arch.exception.pending) { + if (vcpu->arch.pending_exception.valid) { if (vmx->nested.nested_run_pending) return -EBUSY; if (!nested_vmx_check_exception(vcpu, &exit_qual)) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index c8a4a548e96b..a9b241d2b271 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1648,8 +1648,8 @@ static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu) * vmx_check_nested_events(). 
*/ if (nested_cpu_has_mtf(vmcs12) && - (!vcpu->arch.exception.pending || - vcpu->arch.exception.nr == DB_VECTOR)) + (!vcpu->arch.pending_exception.valid || + vcpu->arch.pending_exception.nr == DB_VECTOR)) vmx->nested.mtf_pending = true; else vmx->nested.mtf_pending = false; @@ -1677,9 +1677,9 @@ static void vmx_clear_hlt(struct kvm_vcpu *vcpu) static void vmx_queue_exception(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); - unsigned nr = vcpu->arch.exception.nr; - bool has_error_code = vcpu->arch.exception.has_error_code; - u32 error_code = vcpu->arch.exception.error_code; + unsigned int nr = vcpu->arch.injected_exception.nr; + bool has_error_code = vcpu->arch.injected_exception.has_error_code; + u32 error_code = vcpu->arch.injected_exception.error_code; u32 intr_info = nr | INTR_INFO_VALID_MASK; kvm_deliver_exception_payload(vcpu); @@ -5397,7 +5397,7 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu) return 0; if (vmx->emulation_required && !vmx->rmode.vm86_active && - vcpu->arch.exception.pending) { + vcpu->arch.pending_exception.valid) { vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index a9d95f90a048..493d87b0c2d5 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -481,9 +481,9 @@ static int exception_type(int vector) void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu) { - unsigned nr = vcpu->arch.exception.nr; - bool has_payload = vcpu->arch.exception.has_payload; - unsigned long payload = vcpu->arch.exception.payload; + unsigned int nr = vcpu->arch.pending_exception.nr; + bool has_payload = vcpu->arch.exception_payload.valid; + unsigned long payload = vcpu->arch.exception_payload.value; if (!has_payload) return; @@ -529,8 +529,8 @@ void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu) break; } - vcpu->arch.exception.has_payload = false; - vcpu->arch.exception.payload = 0; + vcpu->arch.exception_payload.valid = false; + vcpu->arch.exception_payload.value = 0; } EXPORT_SYMBOL_GPL(kvm_deliver_exception_payload); @@ -543,7 +543,7 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu, kvm_make_request(KVM_REQ_EVENT, vcpu); - if (!vcpu->arch.exception.pending && !vcpu->arch.exception.injected) { + if (!vcpu->arch.pending_exception.valid && !vcpu->arch.injected_exception.valid) { queue: if (reinject) { /* @@ -554,8 +554,7 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu, * and the guest shouldn't proceed far enough to * need reinjection. 
*/ - WARN_ON_ONCE(vcpu->arch.exception.pending); - vcpu->arch.exception.injected = true; + WARN_ON_ONCE(vcpu->arch.pending_exception.valid); if (WARN_ON_ONCE(has_payload)) { /* * A reinjected event has already @@ -564,22 +563,29 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu, has_payload = false; payload = 0; } + + vcpu->arch.injected_exception.valid = true; + vcpu->arch.injected_exception.has_error_code = has_error; + vcpu->arch.injected_exception.nr = nr; + vcpu->arch.injected_exception.error_code = error_code; + } else { - vcpu->arch.exception.pending = true; - vcpu->arch.exception.injected = false; + vcpu->arch.pending_exception.valid = true; + vcpu->arch.injected_exception.valid = false; + vcpu->arch.pending_exception.has_error_code = has_error; + vcpu->arch.pending_exception.nr = nr; + vcpu->arch.pending_exception.error_code = error_code; } - vcpu->arch.exception.has_error_code = has_error; - vcpu->arch.exception.nr = nr; - vcpu->arch.exception.error_code = error_code; - vcpu->arch.exception.has_payload = has_payload; - vcpu->arch.exception.payload = payload; + + vcpu->arch.exception_payload.valid = has_payload; + vcpu->arch.exception_payload.value = payload; if (!is_guest_mode(vcpu)) kvm_deliver_exception_payload(vcpu); return; } /* to check exception */ - prev_nr = vcpu->arch.exception.nr; + prev_nr = vcpu->arch.injected_exception.nr; if (prev_nr == DF_VECTOR) { /* triple fault -> shutdown */ kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu); @@ -594,13 +600,14 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu, * exception.pending = true so that the double fault * can trigger a nested vmexit. */ - vcpu->arch.exception.pending = true; - vcpu->arch.exception.injected = false; - vcpu->arch.exception.has_error_code = true; - vcpu->arch.exception.nr = DF_VECTOR; - vcpu->arch.exception.error_code = 0; - vcpu->arch.exception.has_payload = false; - vcpu->arch.exception.payload = 0; + vcpu->arch.pending_exception.valid = true; + vcpu->arch.injected_exception.valid = false; + vcpu->arch.pending_exception.has_error_code = true; + vcpu->arch.pending_exception.nr = DF_VECTOR; + vcpu->arch.pending_exception.error_code = 0; + + vcpu->arch.exception_payload.valid = false; + vcpu->arch.exception_payload.value = 0; } else /* replace previous exception with a new one in a hope that instruction re-execution will regenerate lost @@ -648,9 +655,9 @@ EXPORT_SYMBOL_GPL(kvm_complete_insn_gp); void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault) { ++vcpu->stat.pf_guest; - vcpu->arch.exception.nested_apf = + vcpu->arch.exception_payload.nested_apf = is_guest_mode(vcpu) && fault->async_page_fault; - if (vcpu->arch.exception.nested_apf) { + if (vcpu->arch.exception_payload.nested_apf) { vcpu->arch.apf.nested_apf_token = fault->address; kvm_queue_exception_e(vcpu, PF_VECTOR, fault->error_code); } else { @@ -4269,6 +4276,7 @@ static int kvm_vcpu_ioctl_x86_set_mce(struct kvm_vcpu *vcpu, static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu, struct kvm_vcpu_events *events) { + struct kvm_queued_exception *exc; process_nmi(vcpu); if (kvm_check_request(KVM_REQ_SMI, vcpu)) @@ -4286,21 +4294,27 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu, * KVM_GET_VCPU_EVENTS. 
*/ if (!vcpu->kvm->arch.exception_payload_enabled && - vcpu->arch.exception.pending && vcpu->arch.exception.has_payload) + vcpu->arch.pending_exception.valid && vcpu->arch.exception_payload.valid) kvm_deliver_exception_payload(vcpu); + WARN_ON_ONCE(vcpu->arch.pending_exception.valid && + vcpu->arch.injected_exception.valid); + + exc = vcpu->arch.pending_exception.valid ? &vcpu->arch.pending_exception : + &vcpu->arch.injected_exception; + /* * The API doesn't provide the instruction length for software * exceptions, so don't report them. As long as the guest RIP * isn't advanced, we should expect to encounter the exception * again. */ - if (kvm_exception_is_soft(vcpu->arch.exception.nr)) { + if (kvm_exception_is_soft(exc->nr)) { events->exception.injected = 0; events->exception.pending = 0; } else { - events->exception.injected = vcpu->arch.exception.injected; - events->exception.pending = vcpu->arch.exception.pending; + events->exception.injected = vcpu->arch.injected_exception.valid; + events->exception.pending = vcpu->arch.pending_exception.valid; /* * For ABI compatibility, deliberately conflate * pending and injected exceptions when @@ -4308,13 +4322,14 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu, */ if (!vcpu->kvm->arch.exception_payload_enabled) events->exception.injected |= - vcpu->arch.exception.pending; + vcpu->arch.pending_exception.valid; } - events->exception.nr = vcpu->arch.exception.nr; - events->exception.has_error_code = vcpu->arch.exception.has_error_code; - events->exception.error_code = vcpu->arch.exception.error_code; - events->exception_has_payload = vcpu->arch.exception.has_payload; - events->exception_payload = vcpu->arch.exception.payload; + events->exception.nr = exc->nr; + events->exception.has_error_code = exc->has_error_code; + events->exception.error_code = exc->error_code; + + events->exception_has_payload = vcpu->arch.exception_payload.valid; + events->exception_payload = vcpu->arch.exception_payload.value; events->interrupt.injected = vcpu->arch.interrupt.injected && !vcpu->arch.interrupt.soft; @@ -4349,6 +4364,8 @@ static void kvm_smm_changed(struct kvm_vcpu *vcpu); static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu, struct kvm_vcpu_events *events) { + struct kvm_queued_exception *exc; + if (events->flags & ~(KVM_VCPUEVENT_VALID_NMI_PENDING | KVM_VCPUEVENT_VALID_SIPI_VECTOR | KVM_VCPUEVENT_VALID_SHADOW @@ -4368,6 +4385,12 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu, events->exception_has_payload = 0; } + exc = events->exception.pending ? 
&vcpu->arch.pending_exception : + &vcpu->arch.injected_exception; + + vcpu->arch.pending_exception.valid = false; + vcpu->arch.injected_exception.valid = false; + if ((events->exception.injected || events->exception.pending) && (events->exception.nr > 31 || events->exception.nr == NMI_VECTOR)) return -EINVAL; @@ -4379,13 +4402,13 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu, return -EINVAL; process_nmi(vcpu); - vcpu->arch.exception.injected = events->exception.injected; - vcpu->arch.exception.pending = events->exception.pending; - vcpu->arch.exception.nr = events->exception.nr; - vcpu->arch.exception.has_error_code = events->exception.has_error_code; - vcpu->arch.exception.error_code = events->exception.error_code; - vcpu->arch.exception.has_payload = events->exception_has_payload; - vcpu->arch.exception.payload = events->exception_payload; + + exc->nr = events->exception.nr; + exc->has_error_code = events->exception.has_error_code; + exc->error_code = events->exception.error_code; + + vcpu->arch.exception_payload.valid = events->exception_has_payload; + vcpu->arch.exception_payload.value = events->exception_payload; vcpu->arch.interrupt.injected = events->interrupt.injected; vcpu->arch.interrupt.nr = events->interrupt.nr; @@ -8378,8 +8401,8 @@ int kvm_check_nested_events(struct kvm_vcpu *vcpu) static void kvm_inject_exception(struct kvm_vcpu *vcpu) { - if (vcpu->arch.exception.error_code && !is_protmode(vcpu)) - vcpu->arch.exception.error_code = false; + if (vcpu->arch.injected_exception.error_code && !is_protmode(vcpu)) + vcpu->arch.injected_exception.error_code = false; static_call(kvm_x86_queue_exception)(vcpu); } @@ -8390,7 +8413,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit /* try to reinject previous events if any */ - if (vcpu->arch.exception.injected) { + if (vcpu->arch.injected_exception.valid) { kvm_inject_exception(vcpu); can_inject = false; } @@ -8408,7 +8431,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit * serviced prior to recognizing any new events in order to * fully complete the previous instruction. 
*/ - else if (!vcpu->arch.exception.pending) { + else if (!vcpu->arch.pending_exception.valid) { if (vcpu->arch.nmi_injected) { static_call(kvm_x86_set_nmi)(vcpu); can_inject = false; @@ -8418,8 +8441,8 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit } } - WARN_ON_ONCE(vcpu->arch.exception.injected && - vcpu->arch.exception.pending); + WARN_ON_ONCE(vcpu->arch.pending_exception.valid && + vcpu->arch.injected_exception.valid); /* * Call check_nested_events() even if we reinjected a previous event @@ -8434,19 +8457,19 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit } /* try to inject new event if pending */ - if (vcpu->arch.exception.pending) { - trace_kvm_inj_exception(vcpu->arch.exception.nr, - vcpu->arch.exception.has_error_code, - vcpu->arch.exception.error_code); + if (vcpu->arch.pending_exception.valid) { + trace_kvm_inj_exception(vcpu->arch.pending_exception.nr, + vcpu->arch.pending_exception.has_error_code, + vcpu->arch.pending_exception.error_code); - vcpu->arch.exception.pending = false; - vcpu->arch.exception.injected = true; + vcpu->arch.injected_exception = vcpu->arch.pending_exception; + vcpu->arch.pending_exception.valid = false; - if (exception_type(vcpu->arch.exception.nr) == EXCPT_FAULT) + if (exception_type(vcpu->arch.injected_exception.nr) == EXCPT_FAULT) __kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) | X86_EFLAGS_RF); - if (vcpu->arch.exception.nr == DB_VECTOR) { + if (vcpu->arch.injected_exception.nr == DB_VECTOR) { kvm_deliver_exception_payload(vcpu); if (vcpu->arch.dr7 & DR7_GD) { vcpu->arch.dr7 &= ~DR7_GD; @@ -8515,7 +8538,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit kvm_x86_ops.nested_ops->hv_timer_pending(vcpu)) *req_immediate_exit = true; - WARN_ON(vcpu->arch.exception.pending); + WARN_ON(vcpu->arch.pending_exception.valid); return; busy: @@ -9624,7 +9647,7 @@ static void __set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) kvm_rip_write(vcpu, regs->rip); kvm_set_rflags(vcpu, regs->rflags | X86_EFLAGS_FIXED); - vcpu->arch.exception.pending = false; + vcpu->arch.pending_exception.valid = false; kvm_make_request(KVM_REQ_EVENT, vcpu); } @@ -9910,7 +9933,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, if (dbg->control & (KVM_GUESTDBG_INJECT_DB | KVM_GUESTDBG_INJECT_BP)) { r = -EBUSY; - if (vcpu->arch.exception.pending) + if (vcpu->arch.pending_exception.valid) goto out; if (dbg->control & KVM_GUESTDBG_INJECT_DB) kvm_queue_exception(vcpu, DB_VECTOR); @@ -10991,7 +11014,7 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu) if (vcpu->arch.pv.pv_unhalted) return true; - if (vcpu->arch.exception.pending) + if (vcpu->arch.pending_exception.valid) return true; if (kvm_test_request(KVM_REQ_NMI, vcpu) || @@ -11231,7 +11254,7 @@ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu) { if (unlikely(!lapic_in_kernel(vcpu) || kvm_event_needs_reinjection(vcpu) || - vcpu->arch.exception.pending)) + vcpu->arch.pending_exception.valid)) return false; if (kvm_hlt_in_guest(vcpu->kvm) && !kvm_can_deliver_async_pf(vcpu)) diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index daccf20fbcd5..21d62387f5e6 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -60,8 +60,8 @@ int kvm_check_nested_events(struct kvm_vcpu *vcpu); static inline void kvm_clear_exception_queue(struct kvm_vcpu *vcpu) { - vcpu->arch.exception.pending = false; - vcpu->arch.exception.injected = false; + vcpu->arch.pending_exception.valid = false; + 
vcpu->arch.injected_exception.valid = false; } static inline void kvm_queue_interrupt(struct kvm_vcpu *vcpu, u8 vector, @@ -79,7 +79,7 @@ static inline void kvm_clear_interrupt_queue(struct kvm_vcpu *vcpu) static inline bool kvm_event_needs_reinjection(struct kvm_vcpu *vcpu) { - return vcpu->arch.exception.injected || vcpu->arch.interrupt.injected || + return vcpu->arch.injected_exception.valid || vcpu->arch.interrupt.injected || vcpu->arch.nmi_injected; } From patchwork Thu Apr 1 14:38:16 2021 X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 12178597 From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Thomas Gleixner , Wanpeng Li , Borislav Petkov , Paolo Bonzini , Jim Mattson , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Vitaly Kuznetsov , "H.
Peter Anvin" , Joerg Roedel , Ingo Molnar , x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Sean Christopherson , Maxim Levitsky Subject: [PATCH 3/4] KVM: x86: correctly merge pending and injected exception Date: Thu, 1 Apr 2021 17:38:16 +0300 Message-Id: <20210401143817.1030695-4-mlevitsk@redhat.com> In-Reply-To: <20210401143817.1030695-1-mlevitsk@redhat.com> References: <20210401143817.1030695-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Allow the pending and the injected exceptions to co-exist when they are raised. Add 'kvm_deliver_pending_exception' function which 'merges' the pending and injected exception or delivers a VM exit with both for a case when the L1 intercepts the pending exception. The later is done by vendor code using new nested callback 'deliver_exception_as_vmexit' The kvm_deliver_pending_exception is called after each VM exit, and prior to VM entry which ensures that during userspace VM exits, only injected exception can be in a raised state. Signed-off-by: Maxim Levitsky --- arch/x86/include/asm/kvm_host.h | 9 ++ arch/x86/kvm/svm/nested.c | 27 ++-- arch/x86/kvm/svm/svm.c | 2 +- arch/x86/kvm/vmx/nested.c | 58 ++++---- arch/x86/kvm/vmx/vmx.c | 2 +- arch/x86/kvm/x86.c | 233 ++++++++++++++++++-------------- 6 files changed, 181 insertions(+), 150 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 3b2fd276e8d5..a9b9cd030d9a 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1346,6 +1346,15 @@ struct kvm_x86_ops { struct kvm_x86_nested_ops { int (*check_events)(struct kvm_vcpu *vcpu); + + /* + * Deliver a pending exception as a VM exit if the L1 intercepts it. + * Returns -EBUSY if L1 does intercept the exception but, + * it is not possible to deliver it right now. + * (for example when nested run is pending) + */ + int (*deliver_exception_as_vmexit)(struct kvm_vcpu *vcpu); + bool (*hv_timer_pending)(struct kvm_vcpu *vcpu); void (*triple_fault)(struct kvm_vcpu *vcpu); int (*get_state)(struct kvm_vcpu *vcpu, diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 7adad9b6dcad..ff745d59ffcf 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -1061,21 +1061,6 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu) return 0; } - if (vcpu->arch.pending_exception.valid) { - /* - * Only a pending nested run can block a pending exception. - * Otherwise an injected NMI/interrupt should either be - * lost or delivered to the nested hypervisor in the EXITINTINFO - * vmcb field, while delivering the pending exception. 
- */ - if (svm->nested.nested_run_pending) - return -EBUSY; - if (!nested_exit_on_exception(svm)) - return 0; - nested_svm_inject_exception_vmexit(svm); - return 0; - } - if (vcpu->arch.smi_pending && !svm_smi_blocked(vcpu)) { if (block_nested_events) return -EBUSY; @@ -1107,6 +1092,17 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu) return 0; } +int svm_deliver_nested_exception_as_vmexit(struct kvm_vcpu *vcpu) +{ + struct vcpu_svm *svm = to_svm(vcpu); + + if (svm->nested.nested_run_pending) + return -EBUSY; + if (nested_exit_on_exception(svm)) + nested_svm_inject_exception_vmexit(svm); + return 0; +} + int nested_svm_exit_special(struct vcpu_svm *svm) { u32 exit_code = svm->vmcb->control.exit_code; @@ -1321,6 +1317,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu, struct kvm_x86_nested_ops svm_nested_ops = { .check_events = svm_check_nested_events, .triple_fault = nested_svm_triple_fault, + .deliver_exception_as_vmexit = svm_deliver_nested_exception_as_vmexit, .get_nested_state_pages = svm_get_nested_state_pages, .get_state = svm_get_nested_state, .set_state = svm_set_nested_state, diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 90b541138c5a..b89e48574c39 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -363,7 +363,7 @@ static void svm_queue_exception(struct kvm_vcpu *vcpu) bool has_error_code = vcpu->arch.injected_exception.has_error_code; u32 error_code = vcpu->arch.injected_exception.error_code; - kvm_deliver_exception_payload(vcpu); + WARN_ON_ONCE(vcpu->arch.pending_exception.valid); if (nr == BP_VECTOR && !nrips) { unsigned long rip, old_rip = kvm_rip_read(vcpu); diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 5d54fecff9a7..1c09b132c55c 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -3768,7 +3768,6 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu) static int vmx_check_nested_events(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); - unsigned long exit_qual; bool block_nested_events = vmx->nested.nested_run_pending || kvm_event_needs_reinjection(vcpu); bool mtf_pending = vmx->nested.mtf_pending; @@ -3804,41 +3803,15 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu) return 0; } - /* - * Process any exceptions that are not debug traps before MTF. - * - * Note that only a pending nested run can block a pending exception. - * Otherwise an injected NMI/interrupt should either be - * lost or delivered to the nested hypervisor in the IDT_VECTORING_INFO, - * while delivering the pending exception. 
- */ - - if (vcpu->arch.pending_exception.valid && !vmx_pending_dbg_trap(vcpu)) { - if (vmx->nested.nested_run_pending) - return -EBUSY; - if (!nested_vmx_check_exception(vcpu, &exit_qual)) - goto no_vmexit; - nested_vmx_inject_exception_vmexit(vcpu, exit_qual); - return 0; - } - if (mtf_pending) { if (block_nested_events) return -EBUSY; + nested_vmx_update_pending_dbg(vcpu); nested_vmx_vmexit(vcpu, EXIT_REASON_MONITOR_TRAP_FLAG, 0, 0); return 0; } - if (vcpu->arch.pending_exception.valid) { - if (vmx->nested.nested_run_pending) - return -EBUSY; - if (!nested_vmx_check_exception(vcpu, &exit_qual)) - goto no_vmexit; - nested_vmx_inject_exception_vmexit(vcpu, exit_qual); - return 0; - } - if (nested_vmx_preemption_timer_pending(vcpu)) { if (block_nested_events) return -EBUSY; @@ -3884,6 +3857,34 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu) return 0; } +static int nested_vmx_deliver_exception_as_vmexit(struct kvm_vcpu *vcpu) +{ + struct vcpu_vmx *vmx = to_vmx(vcpu); + unsigned long exit_qual; + + if (vmx->nested.nested_run_pending) + return -EBUSY; + + if (vmx->nested.mtf_pending && vmx_pending_dbg_trap(vcpu)) { + /* + * A pending monitor trap takes precedence over pending + * debug exception which is 'stashed' into + * 'GUEST_PENDING_DBG_EXCEPTIONS' + */ + + nested_vmx_update_pending_dbg(vcpu); + vmx->nested.mtf_pending = false; + nested_vmx_vmexit(vcpu, EXIT_REASON_MONITOR_TRAP_FLAG, 0, 0); + return 0; + } + if (vcpu->arch.pending_exception.valid) { + if (nested_vmx_check_exception(vcpu, &exit_qual)) + nested_vmx_inject_exception_vmexit(vcpu, exit_qual); + return 0; + } + return 0; +} + static u32 vmx_get_preemption_timer_value(struct kvm_vcpu *vcpu) { ktime_t remaining = @@ -6603,6 +6604,7 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *)) struct kvm_x86_nested_ops vmx_nested_ops = { .check_events = vmx_check_nested_events, + .deliver_exception_as_vmexit = nested_vmx_deliver_exception_as_vmexit, .hv_timer_pending = nested_vmx_preemption_timer_pending, .triple_fault = nested_vmx_triple_fault, .get_state = vmx_get_nested_state, diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index a9b241d2b271..fc6bc40d47b0 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1682,7 +1682,7 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu) u32 error_code = vcpu->arch.injected_exception.error_code; u32 intr_info = nr | INTR_INFO_VALID_MASK; - kvm_deliver_exception_payload(vcpu); + WARN_ON_ONCE(vcpu->arch.pending_exception.valid); if (has_error_code) { vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE, error_code); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 493d87b0c2d5..a363204f37be 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -535,86 +535,30 @@ void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu) EXPORT_SYMBOL_GPL(kvm_deliver_exception_payload); static void kvm_multiple_exception(struct kvm_vcpu *vcpu, - unsigned nr, bool has_error, u32 error_code, - bool has_payload, unsigned long payload, bool reinject) + unsigned int nr, bool has_error, u32 error_code, + bool has_payload, unsigned long payload, + bool reinject) { - u32 prev_nr; - int class1, class2; - + struct kvm_queued_exception *exc; kvm_make_request(KVM_REQ_EVENT, vcpu); - if (!vcpu->arch.pending_exception.valid && !vcpu->arch.injected_exception.valid) { - queue: - if (reinject) { - /* - * On vmentry, vcpu->arch.exception.pending is only - * true if an event injection was blocked by - * nested_run_pending. 
In that case, however, - * vcpu_enter_guest requests an immediate exit, - * and the guest shouldn't proceed far enough to - * need reinjection. - */ - WARN_ON_ONCE(vcpu->arch.pending_exception.valid); - if (WARN_ON_ONCE(has_payload)) { - /* - * A reinjected event has already - * delivered its payload. - */ - has_payload = false; - payload = 0; - } - - vcpu->arch.injected_exception.valid = true; - vcpu->arch.injected_exception.has_error_code = has_error; - vcpu->arch.injected_exception.nr = nr; - vcpu->arch.injected_exception.error_code = error_code; + WARN_ON_ONCE(vcpu->arch.pending_exception.valid); + WARN_ON_ONCE(reinject && vcpu->arch.injected_exception.valid); - } else { - vcpu->arch.pending_exception.valid = true; - vcpu->arch.injected_exception.valid = false; - vcpu->arch.pending_exception.has_error_code = has_error; - vcpu->arch.pending_exception.nr = nr; - vcpu->arch.pending_exception.error_code = error_code; - } - - vcpu->arch.exception_payload.valid = has_payload; - vcpu->arch.exception_payload.value = payload; - if (!is_guest_mode(vcpu)) - kvm_deliver_exception_payload(vcpu); - return; - } - - /* to check exception */ - prev_nr = vcpu->arch.injected_exception.nr; - if (prev_nr == DF_VECTOR) { - /* triple fault -> shutdown */ - kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu); - return; - } - class1 = exception_class(prev_nr); - class2 = exception_class(nr); - if ((class1 == EXCPT_CONTRIBUTORY && class2 == EXCPT_CONTRIBUTORY) - || (class1 == EXCPT_PF && class2 != EXCPT_BENIGN)) { - /* - * Generate double fault per SDM Table 5-5. Set - * exception.pending = true so that the double fault - * can trigger a nested vmexit. - */ - vcpu->arch.pending_exception.valid = true; - vcpu->arch.injected_exception.valid = false; - vcpu->arch.pending_exception.has_error_code = true; - vcpu->arch.pending_exception.nr = DF_VECTOR; - vcpu->arch.pending_exception.error_code = 0; + exc = reinject ? 
&vcpu->arch.injected_exception : + &vcpu->arch.pending_exception; + exc->valid = true; + exc->nr = nr; + exc->has_error_code = has_error; + exc->error_code = error_code; - vcpu->arch.exception_payload.valid = false; - vcpu->arch.exception_payload.value = 0; - } else - /* replace previous exception with a new one in a hope - that instruction re-execution will regenerate lost - exception */ - goto queue; + // re-injected exception has its payload already delivered + WARN_ON_ONCE(reinject && has_payload); + vcpu->arch.exception_payload.valid = has_payload; + vcpu->arch.exception_payload.value = payload; } + void kvm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr) { kvm_multiple_exception(vcpu, nr, false, 0, false, 0, false); @@ -641,6 +585,95 @@ static void kvm_queue_exception_e_p(struct kvm_vcpu *vcpu, unsigned nr, true, payload, false); } +static int kvm_do_deliver_pending_exception(struct kvm_vcpu *vcpu) +{ + int class1, class2, ret; + + /* try to deliver current pending exception as VM exit */ + if (is_guest_mode(vcpu)) { + ret = kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu); + if (ret || !vcpu->arch.pending_exception.valid) + return ret; + } + + /* No injected exception, so just deliver the payload and inject it */ + if (!vcpu->arch.injected_exception.valid) { + trace_kvm_inj_exception(vcpu->arch.pending_exception.nr, + vcpu->arch.pending_exception.has_error_code, + vcpu->arch.pending_exception.error_code); +queue: + /* Intel SDM 17.3.1.1 */ + if (exception_type(vcpu->arch.pending_exception.nr) == EXCPT_FAULT) + __kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) | + X86_EFLAGS_RF); + + kvm_deliver_exception_payload(vcpu); + + /* Intel SDM 17.2.4 + * The processor clears the GD flag upon entering to the + * debug exception handler, to allow the handler access + * to the debug registers. + */ + if (vcpu->arch.pending_exception.nr == DB_VECTOR) { + if (vcpu->arch.dr7 & DR7_GD) { + vcpu->arch.dr7 &= ~DR7_GD; + kvm_update_dr7(vcpu); + } + } + + if (vcpu->arch.pending_exception.error_code && !is_protmode(vcpu)) + vcpu->arch.pending_exception.error_code = false; + + vcpu->arch.injected_exception = vcpu->arch.pending_exception; + vcpu->arch.pending_exception.valid = false; + return 0; + } + + /* Convert a pending exception and an injected #DF to a triple fault*/ + if (vcpu->arch.injected_exception.nr == DF_VECTOR) { + /* triple fault -> shutdown */ + vcpu->arch.injected_exception.valid = false; + vcpu->arch.pending_exception.valid = false; + kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu); + return 0; + } + + class1 = exception_class(vcpu->arch.injected_exception.nr); + class2 = exception_class(vcpu->arch.pending_exception.nr); + + if ((class1 == EXCPT_CONTRIBUTORY && class2 == EXCPT_CONTRIBUTORY) + || (class1 == EXCPT_PF && class2 != EXCPT_BENIGN)) { + + /* Generate double fault per SDM Table 5-5. 
*/ + vcpu->arch.injected_exception.valid = false; + vcpu->arch.pending_exception.valid = true; + vcpu->arch.pending_exception.has_error_code = true; + vcpu->arch.pending_exception.nr = DF_VECTOR; + vcpu->arch.pending_exception.error_code = 0; + vcpu->arch.exception_payload.valid = false; + } else + /* Drop the injected exception and replace it with the pending one*/ + goto queue; + + return 0; +} + +static int kvm_deliver_pending_exception(struct kvm_vcpu *vcpu) +{ + int ret = 0; + + if (!vcpu->arch.pending_exception.valid) + return ret; + + ret = kvm_do_deliver_pending_exception(vcpu); + + if (ret || !vcpu->arch.pending_exception.valid) + return ret; + + WARN_ON_ONCE(vcpu->arch.pending_exception.nr != DF_VECTOR); + return kvm_do_deliver_pending_exception(vcpu); +} + int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err) { if (err) @@ -4297,6 +4330,12 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu, vcpu->arch.pending_exception.valid && vcpu->arch.exception_payload.valid) kvm_deliver_exception_payload(vcpu); + /* + * Currently we merge the pending and the injected exceptions + * after each VM exit, which can fail only when nested run is pending, + * in which case only injected (from us or L1) exception is possible. + */ + WARN_ON_ONCE(vcpu->arch.pending_exception.valid && vcpu->arch.injected_exception.valid); @@ -8401,8 +8440,6 @@ int kvm_check_nested_events(struct kvm_vcpu *vcpu) static void kvm_inject_exception(struct kvm_vcpu *vcpu) { - if (vcpu->arch.injected_exception.error_code && !is_protmode(vcpu)) - vcpu->arch.injected_exception.error_code = false; static_call(kvm_x86_queue_exception)(vcpu); } @@ -8411,8 +8448,13 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit int r; bool can_inject = true; - /* try to reinject previous events if any */ + r = kvm_deliver_pending_exception(vcpu); + if (r < 0) + goto busy; + + WARN_ON_ONCE(vcpu->arch.pending_exception.valid); + /* try to reinject previous events if any */ if (vcpu->arch.injected_exception.valid) { kvm_inject_exception(vcpu); can_inject = false; @@ -8431,7 +8473,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit * serviced prior to recognizing any new events in order to * fully complete the previous instruction. 
*/ - else if (!vcpu->arch.pending_exception.valid) { + else { if (vcpu->arch.nmi_injected) { static_call(kvm_x86_set_nmi)(vcpu); can_inject = false; @@ -8441,9 +8483,6 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit } } - WARN_ON_ONCE(vcpu->arch.pending_exception.valid && - vcpu->arch.injected_exception.valid); - /* * Call check_nested_events() even if we reinjected a previous event * in order for caller to determine if it should require immediate-exit @@ -8456,30 +8495,6 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit goto busy; } - /* try to inject new event if pending */ - if (vcpu->arch.pending_exception.valid) { - trace_kvm_inj_exception(vcpu->arch.pending_exception.nr, - vcpu->arch.pending_exception.has_error_code, - vcpu->arch.pending_exception.error_code); - - vcpu->arch.injected_exception = vcpu->arch.pending_exception; - vcpu->arch.pending_exception.valid = false; - - if (exception_type(vcpu->arch.injected_exception.nr) == EXCPT_FAULT) - __kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) | - X86_EFLAGS_RF); - - if (vcpu->arch.injected_exception.nr == DB_VECTOR) { - kvm_deliver_exception_payload(vcpu); - if (vcpu->arch.dr7 & DR7_GD) { - vcpu->arch.dr7 &= ~DR7_GD; - kvm_update_dr7(vcpu); - } - } - - kvm_inject_exception(vcpu); - can_inject = false; - } /* * Finally, inject interrupt events. If an event cannot be injected @@ -9270,6 +9285,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) kvm_lapic_sync_from_vapic(vcpu); r = static_call(kvm_x86_handle_exit)(vcpu, exit_fastpath); + + /* + * Deliver the pending exception so that the state of having a pending + * and an injected exception is not visible to the userspace. + */ + + kvm_deliver_pending_exception(vcpu); + return r; cancel_injection: @@ -11014,7 +11037,7 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu) if (vcpu->arch.pv.pv_unhalted) return true; - if (vcpu->arch.pending_exception.valid) + if (vcpu->arch.pending_exception.valid || vcpu->arch.injected_exception.valid) return true; if (kvm_test_request(KVM_REQ_NMI, vcpu) || From patchwork Thu Apr 1 14:38:17 2021 X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 12178595
From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Thomas Gleixner , Wanpeng Li , Borislav Petkov , Paolo Bonzini , Jim Mattson , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Vitaly Kuznetsov , "H. Peter Anvin" , Joerg Roedel , Ingo Molnar , x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Sean Christopherson , Maxim Levitsky Subject: [PATCH 4/4] KVM: x86: remove tweaking of inject_page_fault Date: Thu, 1 Apr 2021 17:38:17 +0300 Message-Id: <20210401143817.1030695-5-mlevitsk@redhat.com> In-Reply-To: <20210401143817.1030695-1-mlevitsk@redhat.com> References: <20210401143817.1030695-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This is no longer needed since page faults can now be injected as regular exceptions in all the cases.
Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 20 -------------------- arch/x86/kvm/vmx/nested.c | 23 ----------------------- 2 files changed, 43 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index ff745d59ffcf..25840399841e 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -53,23 +53,6 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu, nested_svm_vmexit(svm); } -static void svm_inject_page_fault_nested(struct kvm_vcpu *vcpu, struct x86_exception *fault) -{ - struct vcpu_svm *svm = to_svm(vcpu); - WARN_ON(!is_guest_mode(vcpu)); - - if (vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_EXCEPTION_OFFSET + PF_VECTOR) && - !svm->nested.nested_run_pending) { - svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + PF_VECTOR; - svm->vmcb->control.exit_code_hi = 0; - svm->vmcb->control.exit_info_1 = fault->error_code; - svm->vmcb->control.exit_info_2 = fault->address; - nested_svm_vmexit(svm); - } else { - kvm_inject_page_fault(vcpu, fault); - } -} - static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index) { struct vcpu_svm *svm = to_svm(vcpu); @@ -575,9 +558,6 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa, if (ret) return ret; - if (!npt_enabled) - vcpu->arch.mmu->inject_page_fault = svm_inject_page_fault_nested; - svm_set_gif(svm, true); return 0; diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 1c09b132c55c..8add4c27e718 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -418,26 +418,6 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit return 0; } - -static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu, - struct x86_exception *fault) -{ - struct vmcs12 *vmcs12 = get_vmcs12(vcpu); - - WARN_ON(!is_guest_mode(vcpu)); - - if (nested_vmx_is_page_fault_vmexit(vmcs12, fault->error_code) && - !to_vmx(vcpu)->nested.nested_run_pending) { - vmcs12->vm_exit_intr_error_code = fault->error_code; - nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI, - PF_VECTOR | INTR_TYPE_HARD_EXCEPTION | - INTR_INFO_DELIVER_CODE_MASK | INTR_INFO_VALID_MASK, - fault->address); - } else { - kvm_inject_page_fault(vcpu, fault); - } -} - static int nested_vmx_check_io_bitmap_controls(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { @@ -2588,9 +2568,6 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12, vmcs_write64(GUEST_PDPTR3, vmcs12->guest_pdptr3); } - if (!enable_ept) - vcpu->arch.walk_mmu->inject_page_fault = vmx_inject_page_fault_nested; - if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) && WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL, vmcs12->guest_ia32_perf_global_ctrl)))
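
For readers following patch 3, the decision made by kvm_do_deliver_pending_exception() when a pending exception is raised on top of an already-injected one follows the SDM Table 5-5 classes referenced in the diffs above. Below is a minimal standalone C sketch of that rule; it is illustrative only and not part of the series, and the helper names (classify, merge) and the demo in main() are assumptions made for the example, not KVM identifiers.

/*
 * Illustration of the SDM Table 5-5 merge rule applied by patch 3:
 * what happens when a new (pending) exception is raised while another
 * exception is already injected.  Names here are illustrative only.
 */
#include <stdio.h>

enum exc_class { BENIGN, CONTRIBUTORY, PAGE_FAULT };

static enum exc_class classify(int vector)
{
        switch (vector) {
        case 0:  /* #DE */
        case 10: /* #TS */
        case 11: /* #NP */
        case 12: /* #SS */
        case 13: /* #GP */
                return CONTRIBUTORY;
        case 14: /* #PF */
                return PAGE_FAULT;
        default:
                return BENIGN;
        }
}

/*
 * Returns the vector that ends up pending, or -1 for a triple fault.
 * 'first' is the already-injected exception, 'second' the new one.
 */
static int merge(int first, int second)
{
        enum exc_class c1 = classify(first), c2 = classify(second);

        if (first == 8) /* anything on top of an injected #DF */
                return -1;
        if ((c1 == CONTRIBUTORY && c2 == CONTRIBUTORY) ||
            (c1 == PAGE_FAULT && c2 != BENIGN))
                return 8; /* #DF */
        /* benign combination: the new exception simply replaces the old one */
        return second;
}

int main(void)
{
        printf("#GP while delivering #PF -> %d (#DF)\n", merge(14, 13));
        printf("#PF while delivering #DB -> %d (#PF)\n", merge(1, 14));
        printf("#GP while delivering #DF -> %d (triple fault)\n", merge(8, 13));
        return 0;
}

In the series itself this merge runs after every VM exit and before VM entry, so userspace only ever observes at most one exception (the injected one), as noted in the patch 3 commit message.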