From patchwork Thu Nov 17 14:32:40 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 13046948
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Ingo Molnar, "H. Peter Anvin", Dave Hansen,
 linux-kernel@vger.kernel.org, Peter Zijlstra, Thomas Gleixner,
 Sandipan Das, Daniel Sneddon, Jing Liu, Josh Poimboeuf, Wyes Karny,
 Borislav Petkov, Babu Moger, Pawan Gupta, Sean Christopherson,
 Jim Mattson, x86@kernel.org, Maxim Levitsky, Santosh Shukla
Subject: [PATCH 11/13] KVM: nSVM: implement nested VNMI
Date: Thu, 17 Nov 2022 16:32:40 +0200
Message-Id: <20221117143242.102721-12-mlevitsk@redhat.com>
In-Reply-To: <20221117143242.102721-1-mlevitsk@redhat.com>
References: <20221117143242.102721-1-mlevitsk@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

From: Santosh Shukla

Supporting nested VNMI requires saving and restoring the VNMI bits
during nested entry and exit.

When both L1 and L2 use VNMI, copy the VNMI bits from vmcb12 to vmcb02
on nested entry and back on nested exit. When L1 uses VNMI but L2 does
not, copy the VNMI bits from vmcb01 to vmcb02 on nested entry and back
on nested exit.

Tested with KVM-unit-tests and a nested guest scenario.
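For illustration only, not part of the patch: a minimal user-space sketch
of the bit selection described above. The V_NMI_* bit positions and the
build_vmcb02_int_ctl() helper below are placeholders made up for this
example; the real definitions and logic live in the kernel headers and in
nested_vmcb02_prepare_control().

/* sketch.c - standalone illustration of the vNMI int_ctl bit selection */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Placeholder bit positions, NOT the real asm/svm.h values. */
#define V_NMI_PENDING	(1u << 11)
#define V_NMI_ENABLE	(1u << 26)
#define V_NMI_MASK	(1u << 12)

/*
 * On nested entry, vmcb02 takes its vNMI bits from vmcb12 when L1 exposes
 * vNMI to L2, otherwise from vmcb01 (i.e. L1's own vNMI state keeps
 * running while L2 executes).
 */
static uint32_t build_vmcb02_int_ctl(uint32_t vmcb01_int_ctl,
				     uint32_t vmcb12_int_ctl,
				     bool nested_vnmi_enabled)
{
	const uint32_t vnmi_bits = V_NMI_PENDING | V_NMI_ENABLE | V_NMI_MASK;
	uint32_t src = nested_vnmi_enabled ? vmcb12_int_ctl : vmcb01_int_ctl;

	/* Other int_ctl bits are omitted for brevity. */
	return src & vnmi_bits;
}

int main(void)
{
	uint32_t vmcb12 = V_NMI_ENABLE | V_NMI_PENDING;	/* L2 has a pending vNMI */
	uint32_t vmcb01 = V_NMI_ENABLE;			/* L1 uses vNMI, nothing pending */

	printf("vmcb02 int_ctl when L2 uses vNMI:    %#x\n",
	       build_vmcb02_int_ctl(vmcb01, vmcb12, true));
	printf("vmcb02 int_ctl when L2 does not:     %#x\n",
	       build_vmcb02_int_ctl(vmcb01, vmcb12, false));
	return 0;
}

On nested exit the same bits flow in the opposite direction: they are
copied back from vmcb02 into vmcb12 (or vmcb01) depending on the same
nested_vnmi_enabled() check, which is what the nested.c hunk below does.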
Maxim:
  - moved the vNMI bits copying to nested_sync_int_ctl_from_vmcb02

Signed-off-by: Santosh Shukla
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/nested.c | 13 +++++++++++++
 arch/x86/kvm/svm/svm.c    |  5 +++++
 arch/x86/kvm/svm/svm.h    |  6 ++++++
 3 files changed, 24 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 1f2b8492c8782f..c9fcdd691bb5a1 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -442,6 +442,14 @@ static void nested_sync_int_ctl_from_vmcb02(struct vcpu_svm *svm,
 	 */
 	;
 
+	if (vnmi) {
+		/* copy back the vNMI fields which can be modified by the CPU */
+		if (nested_vnmi_enabled(svm))
+			l2_to_l1_mask |= V_NMI_MASK | V_NMI_PENDING;
+		else
+			l2_to_l0_mask |= V_NMI_MASK | V_NMI_PENDING;
+	}
+
 	vmcb12->control.int_ctl =
 		(svm->nested.ctl.int_ctl & ~l2_to_l1_mask) |
 		(vmcb02->control.int_ctl & l2_to_l1_mask);
@@ -657,6 +665,11 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
 	else
 		int_ctl_vmcb01_bits |= (V_GIF_MASK | V_GIF_ENABLE_MASK);
 
+	if (nested_vnmi_enabled(svm))
+		int_ctl_vmcb12_bits |= (V_NMI_PENDING | V_NMI_ENABLE | V_NMI_MASK);
+	else
+		int_ctl_vmcb01_bits |= (V_NMI_PENDING | V_NMI_ENABLE | V_NMI_MASK);
+
 	/* Copied from vmcb01. msrpm_base can be overwritten later. */
 	vmcb02->control.nested_ctl = vmcb01->control.nested_ctl;
 	vmcb02->control.iopm_base_pa = vmcb01->control.iopm_base_pa;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9ebfbd0d4b467e..c9190a8ee03273 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4188,6 +4188,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 	svm->vgif_enabled = vgif && guest_cpuid_has(vcpu, X86_FEATURE_VGIF);
 
+	svm->vnmi_enabled = vnmi && guest_cpuid_has(vcpu, X86_FEATURE_AMD_VNMI);
+
 	svm_recalc_instruction_intercepts(vcpu, svm);
 
 	/* For sev guests, the memory encryption bit is not reserved in CR3. */
@@ -4939,6 +4941,9 @@ static __init void svm_set_cpu_caps(void)
 	if (vgif)
 		kvm_cpu_cap_set(X86_FEATURE_VGIF);
 
+	if (vnmi)
+		kvm_cpu_cap_set(X86_FEATURE_AMD_VNMI);
+
 	/* Nested VM can receive #VMEXIT instead of triggering #GP */
 	kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK);
 }
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5f2ee72c6e3125..d39e937a2c8391 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -252,6 +252,7 @@ struct vcpu_svm {
 	bool pause_filter_enabled : 1;
 	bool pause_threshold_enabled : 1;
 	bool vgif_enabled : 1;
+	bool vnmi_enabled : 1;
 
 	u32 ldr_reg;
 	u32 dfr_reg;
@@ -532,6 +533,11 @@ static inline bool is_x2apic_msrpm_offset(u32 offset)
 	       (msr < (APIC_BASE_MSR + 0x100));
 }
 
+static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
+{
+	return svm->vnmi_enabled && (svm->nested.ctl.int_ctl & V_NMI_ENABLE);
+}
+
 static inline struct vmcb *get_vnmi_vmcb(struct vcpu_svm *svm)
 {
 	if (!vnmi)