From patchwork Tue Nov 29 19:37:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13059097 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 92CF7C4332F for ; Tue, 29 Nov 2022 19:40:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237092AbiK2Tkn (ORCPT ); Tue, 29 Nov 2022 14:40:43 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50544 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237089AbiK2TkR (ORCPT ); Tue, 29 Nov 2022 14:40:17 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3EF77C7A for ; Tue, 29 Nov 2022 11:37:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669750652; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=nYEa1XgfnHAIZ7cB2VYzoZKokkGbb+MsQs52pSYGTFU=; b=PjDs9tuIIooBVme+NCcmMDNY8yP8XZSKLCjaMzfXU5mWWbEhjDXLdzwx5AdUekP/KCnSFj u75H5YKJR/C8oWaAJ41KhAv7H82cqQHRM7HA2MPqVe7dgb1YvKTBlumYDRta7vsXwglYw2 vvJPrm/7I8MwdWDC3gsvLLDSW4OJlaI= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-249-i3rAk3gQPo6OLUDKYDUTwg-1; Tue, 29 Nov 2022 14:37:28 -0500 X-MC-Unique: i3rAk3gQPo6OLUDKYDUTwg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E71D8857F90; Tue, 29 Nov 2022 19:37:26 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id CAC9C2027064; Tue, 29 Nov 2022 19:37:22 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sandipan Das , Paolo Bonzini , Jim Mattson , Peter Zijlstra , Dave Hansen , Borislav Petkov , Pawan Gupta , Thomas Gleixner , Ingo Molnar , Josh Poimboeuf , Daniel Sneddon , Jiaxi Chen , Babu Moger , linux-kernel@vger.kernel.org, Jing Liu , Wyes Karny , x86@kernel.org, "H. Peter Anvin" , Sean Christopherson , Maxim Levitsky Subject: [PATCH v2 01/11] KVM: nSVM: don't sync back tlb_ctl on nested VM exit Date: Tue, 29 Nov 2022 21:37:07 +0200 Message-Id: <20221129193717.513824-2-mlevitsk@redhat.com> In-Reply-To: <20221129193717.513824-1-mlevitsk@redhat.com> References: <20221129193717.513824-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The CPU doesn't change TLB_CTL value as stated in the PRM (15.16.2): "The VMRUN instruction reads, but does not change, the value of the TLB_CONTROL field" Therefore the KVM shouldn't do that either. 
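For illustration, a minimal standalone C sketch of the sync rule this patch establishes: on a nested #VMEXIT, control fields the CPU may have modified are propagated back into vmcb12, while TLB_CONTROL, which VMRUN only reads, is left exactly as L1 programmed it. The struct and function names below are simplified stand-ins that only mirror the shape of the KVM structures; this is not the actual kernel code.

#include <stdint.h>

struct vmcb_ctl {
	uint32_t tlb_ctl;        /* TLB_CONTROL: consumed by VMRUN, never written back */
	uint32_t int_ctl;
	uint32_t event_inj;
	uint32_t event_inj_err;
	uint64_t next_rip;
};

/* cached_l1_ctl stands in for the cached controls L1 programmed (svm->nested.ctl) */
static void sync_back_on_vmexit(struct vmcb_ctl *vmcb12,
				const struct vmcb_ctl *cached_l1_ctl,
				const struct vmcb_ctl *vmcb02)
{
	vmcb12->next_rip      = vmcb02->next_rip;
	vmcb12->int_ctl       = cached_l1_ctl->int_ctl;
	vmcb12->event_inj     = cached_l1_ctl->event_inj;
	vmcb12->event_inj_err = cached_l1_ctl->event_inj_err;
	/* tlb_ctl deliberately untouched: L1's value stays as L1 wrote it */
}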
Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index bc9cd7086fa972..37af0338da7c32 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -1010,7 +1010,6 @@ int nested_svm_vmexit(struct vcpu_svm *svm) vmcb12->control.next_rip = vmcb02->control.next_rip; vmcb12->control.int_ctl = svm->nested.ctl.int_ctl; - vmcb12->control.tlb_ctl = svm->nested.ctl.tlb_ctl; vmcb12->control.event_inj = svm->nested.ctl.event_inj; vmcb12->control.event_inj_err = svm->nested.ctl.event_inj_err; From patchwork Tue Nov 29 19:37:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13059096 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 01769C433FE for ; Tue, 29 Nov 2022 19:40:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237106AbiK2Tkl (ORCPT ); Tue, 29 Nov 2022 14:40:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49420 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237083AbiK2TkQ (ORCPT ); Tue, 29 Nov 2022 14:40:16 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E793710E9 for ; Tue, 29 Nov 2022 11:37:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669750657; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=IsK6SeDRcKB0ngx9XBGmKXR4JOxNBat2MjmDlSq9EM0=; b=gMxKjVEFh0IXe4i7DkFGVdOK0+hgof2fMK3QRG9bpA0/hKaHFZ0eXrK3INwB6KQlrIdXta 7eFYR6yrTh1ulGrM/85SVCuY9vaFFNIP4w18s+kb4gQhJlXGsmB5Primhlxizz9xDkA8ed 20wiwp8JxpCTldiMaBLWfR51zzFaLIs= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-57-m87UuQubOz2LL_yd13l6nA-1; Tue, 29 Nov 2022 14:37:32 -0500 X-MC-Unique: m87UuQubOz2LL_yd13l6nA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 03F3D86EB22; Tue, 29 Nov 2022 19:37:31 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 43C6A2027061; Tue, 29 Nov 2022 19:37:27 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sandipan Das , Paolo Bonzini , Jim Mattson , Peter Zijlstra , Dave Hansen , Borislav Petkov , Pawan Gupta , Thomas Gleixner , Ingo Molnar , Josh Poimboeuf , Daniel Sneddon , Jiaxi Chen , Babu Moger , linux-kernel@vger.kernel.org, Jing Liu , Wyes Karny , x86@kernel.org, "H. 
Peter Anvin" , Sean Christopherson , Maxim Levitsky Subject: [PATCH v2 02/11] KVM: nSVM: clean up the copying of V_INTR bits from vmcb02 to vmcb12 Date: Tue, 29 Nov 2022 21:37:08 +0200 Message-Id: <20221129193717.513824-3-mlevitsk@redhat.com> In-Reply-To: <20221129193717.513824-1-mlevitsk@redhat.com> References: <20221129193717.513824-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org the V_IRQ and v_TPR bits don't exist when virtual interrupt masking is not enabled, therefore the KVM should not copy these bits regardless of V_IRQ intercept. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 23 ++++++++--------------- 1 file changed, 8 insertions(+), 15 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 37af0338da7c32..aad3145b2f62fe 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -412,24 +412,17 @@ void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm, */ void nested_sync_control_from_vmcb02(struct vcpu_svm *svm) { - u32 mask; + u32 mask = 0; svm->nested.ctl.event_inj = svm->vmcb->control.event_inj; svm->nested.ctl.event_inj_err = svm->vmcb->control.event_inj_err; - /* Only a few fields of int_ctl are written by the processor. */ - mask = V_IRQ_MASK | V_TPR_MASK; - if (!(svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) && - svm_is_intercept(svm, INTERCEPT_VINTR)) { - /* - * In order to request an interrupt window, L0 is usurping - * svm->vmcb->control.int_ctl and possibly setting V_IRQ - * even if it was clear in L1's VMCB. Restoring it would be - * wrong. However, in this case V_IRQ will remain true until - * interrupt_window_interception calls svm_clear_vintr and - * restores int_ctl. We can just leave it aside. - */ - mask &= ~V_IRQ_MASK; - } + /* + * Only a few fields of int_ctl are written by the processor. + * Copy back only the bits that are passed through to the L2. 
+ */ + + if (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) + mask = V_IRQ_MASK | V_TPR_MASK; if (nested_vgif_enabled(svm)) mask |= V_GIF_MASK; From patchwork Tue Nov 29 19:37:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13059098 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3CBD8C4332F for ; Tue, 29 Nov 2022 19:40:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236138AbiK2Tkr (ORCPT ); Tue, 29 Nov 2022 14:40:47 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49444 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235900AbiK2TkX (ORCPT ); Tue, 29 Nov 2022 14:40:23 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ACAD12626 for ; Tue, 29 Nov 2022 11:37:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669750659; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Ax33lO2rYC4qdqXK9hN6118xGd7ZlQSFY6CuNF4UBj0=; b=D/XCJxQkCiCal9eFqeLX2zrMXejXyn1/QuVuU4Kw5waYNDYiZ+3lacAD4szuNGEgMnXuKN 3yhkbkLdJPqK3TT+yOXGoPo4sDryssu42As4WK9RHsFRvNO724eYQds/S6tayqMzUMT6ro 7rCbsiEO7aRKbvd5zO6qHMZxtlqXsuA= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-153-KxT3T5GlMzGGlJO3dCad6A-1; Tue, 29 Nov 2022 14:37:36 -0500 X-MC-Unique: KxT3T5GlMzGGlJO3dCad6A-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 00FB81C06EDF; Tue, 29 Nov 2022 19:37:35 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 4D0B82024CB7; Tue, 29 Nov 2022 19:37:31 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sandipan Das , Paolo Bonzini , Jim Mattson , Peter Zijlstra , Dave Hansen , Borislav Petkov , Pawan Gupta , Thomas Gleixner , Ingo Molnar , Josh Poimboeuf , Daniel Sneddon , Jiaxi Chen , Babu Moger , linux-kernel@vger.kernel.org, Jing Liu , Wyes Karny , x86@kernel.org, "H. 
Peter Anvin" , Sean Christopherson , Maxim Levitsky Subject: [PATCH v2 03/11] KVM: nSVM: explicitly raise KVM_REQ_EVENT on nested VM exit if L1 doesn't intercept interrupts Date: Tue, 29 Nov 2022 21:37:09 +0200 Message-Id: <20221129193717.513824-4-mlevitsk@redhat.com> In-Reply-To: <20221129193717.513824-1-mlevitsk@redhat.com> References: <20221129193717.513824-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org If the L2 doesn't intercept interrupts, then the KVM will use vmcb02's V_IRQ for L1 (to detect the interrupt window) In this case on the nested VM exit KVM might need to copy the V_IRQ bit from the vmcb02 to the vmcb01, to continue waiting for the interrupt window. To make it simple, just raise the KVM_REQ_EVENT request, which execution will lead to the reenabling of the interrupt window if needed. Note that this is a theoretical bug because the KVM already does raise the KVM_REQ_EVENT request one each nested VM exit because the nested VM exit resets RFLAGS and the kvm_set_rflags() raises the KVM_REQ_EVENT request in the response. However raising this request explicitly, together with documenting why this is needed, is still preferred. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index aad3145b2f62fe..e891318595113e 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -1016,6 +1016,31 @@ int nested_svm_vmexit(struct vcpu_svm *svm) svm_switch_vmcb(svm, &svm->vmcb01); + /* Note about synchronizing some of int_ctl bits from vmcb02 to vmcb01: + * + * - V_IRQ, V_IRQ_VECTOR, V_INTR_PRIO_MASK, V_IGN_TPR: + * If the L2 doesn't intercept interrupts, then + * (even if the L2 does use virtual interrupt masking), + * KVM will use the vmcb02's V_INTR to detect interrupt window. + * + * In this case, the KVM raises the KVM_REQ_EVENT to ensure that interrupt window + * is not lost and this implicitly copies these bits from vmcb02 to vmcb01 + * + * V_TPR: + * If the L2 doesn't use virtual interrupt masking, then the L1's vTPR + * is stored in the vmcb02 but its value doesn't need to be copied from/to + * vmcb01 because it is copied from/to the TPR APIC's register on + * each VM entry/exit. + * + * V_GIF: + * - If the nested vGIF is not used, KVM uses vmcb02's V_GIF for L1's V_GIF, + * however, the L1 vGIF is reset to false on each VM exit, thus + * there is no need to copy it from vmcb02 to vmcb01. 
+ */ + + if (!nested_exit_on_intr(svm)) + kvm_make_request(KVM_REQ_EVENT, &svm->vcpu); + if (unlikely(svm->lbrv_enabled && (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) { svm_copy_lbrs(vmcb12, vmcb02); svm_update_lbrv(vcpu); From patchwork Tue Nov 29 19:37:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13059101 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42B38C433FE for ; Tue, 29 Nov 2022 19:41:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232988AbiK2Tld (ORCPT ); Tue, 29 Nov 2022 14:41:33 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48696 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237086AbiK2Tkk (ORCPT ); Tue, 29 Nov 2022 14:40:40 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 12C2B6320 for ; Tue, 29 Nov 2022 11:37:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669750665; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=9dwVem96795eE3qcgmMV9GIrD+Dv4vNEuuuNJ0HxdFU=; b=Z6SIkez/PvzD+N4sm4Biv07E2R0+5KG7WHXKh42O735qupOQZ0joSwBwGzDmRCVwnoGCJ4 JOoBenOXD50BUBCJLC0kWxiMUjnlO7T/8ZtaA03iN5Dqr0C9eX/cWj5yLANhLzLfyZ07pY 8JTIStDc7pxoqLrNti2iPpOi+rnjfS8= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-202-7GtzMlR6PnaHeX50cy6XPw-1; Tue, 29 Nov 2022 14:37:42 -0500 X-MC-Unique: 7GtzMlR6PnaHeX50cy6XPw-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 418321C06EDC; Tue, 29 Nov 2022 19:37:39 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 500D32024CB7; Tue, 29 Nov 2022 19:37:35 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sandipan Das , Paolo Bonzini , Jim Mattson , Peter Zijlstra , Dave Hansen , Borislav Petkov , Pawan Gupta , Thomas Gleixner , Ingo Molnar , Josh Poimboeuf , Daniel Sneddon , Jiaxi Chen , Babu Moger , linux-kernel@vger.kernel.org, Jing Liu , Wyes Karny , x86@kernel.org, "H. Peter Anvin" , Sean Christopherson , Maxim Levitsky Subject: [PATCH v2 04/11] KVM: SVM: drop the SVM specific H_FLAGS Date: Tue, 29 Nov 2022 21:37:10 +0200 Message-Id: <20221129193717.513824-5-mlevitsk@redhat.com> In-Reply-To: <20221129193717.513824-1-mlevitsk@redhat.com> References: <20221129193717.513824-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org GIF and 'waiting for IRET' are used only for the SVM and thus should not be in H_FLAGS. NMI mask is not x86 specific but it is only used for SVM without vNMI. 
The VMX have similar concept of NMI mask (soft_vnmi_blocked), and it is used when its 'vNMI' feature is not enabled, but because the VMX can't intercept IRET, it is more of a hack, and thus should not use common host flags either. No functional change is intended. Suggested-by: Sean Christopherson Signed-off-by: Maxim Levitsky --- arch/x86/include/asm/kvm_host.h | 3 --- arch/x86/kvm/svm/svm.c | 22 +++++++++++++--------- arch/x86/kvm/svm/svm.h | 25 ++++++++++++++++++++++--- 3 files changed, 35 insertions(+), 15 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 70af7240a1d5af..9208ad7a6bd004 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2052,9 +2052,6 @@ enum { TASK_SWITCH_GATE = 3, }; -#define HF_GIF_MASK (1 << 0) -#define HF_NMI_MASK (1 << 3) -#define HF_IRET_MASK (1 << 4) #define HF_GUEST_MASK (1 << 5) /* VCPU is in guest-mode */ #ifdef CONFIG_KVM_SMM diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 91352d69284524..512b2aa21137e2 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1326,6 +1326,9 @@ static void __svm_vcpu_reset(struct kvm_vcpu *vcpu) vcpu->arch.microcode_version = 0x01000065; svm->tsc_ratio_msr = kvm_caps.default_tsc_scaling_ratio; + svm->nmi_masked = false; + svm->awaiting_iret_completion = false; + if (sev_es_guest(vcpu->kvm)) sev_es_vcpu_reset(svm); } @@ -2470,7 +2473,7 @@ static int iret_interception(struct kvm_vcpu *vcpu) struct vcpu_svm *svm = to_svm(vcpu); ++vcpu->stat.nmi_window_exits; - vcpu->arch.hflags |= HF_IRET_MASK; + svm->awaiting_iret_completion = true; if (!sev_es_guest(vcpu->kvm)) { svm_clr_intercept(svm, INTERCEPT_IRET); svm->nmi_iret_rip = kvm_rip_read(vcpu); @@ -3466,7 +3469,7 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu) if (svm->nmi_l1_to_l2) return; - vcpu->arch.hflags |= HF_NMI_MASK; + svm->nmi_masked = true; if (!sev_es_guest(vcpu->kvm)) svm_set_intercept(svm, INTERCEPT_IRET); ++vcpu->stat.nmi_injections; @@ -3580,7 +3583,7 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu) return false; ret = (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) || - (vcpu->arch.hflags & HF_NMI_MASK); + (svm->nmi_masked); return ret; } @@ -3602,7 +3605,7 @@ static int svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection) static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu) { - return !!(vcpu->arch.hflags & HF_NMI_MASK); + return to_svm(vcpu)->nmi_masked; } static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked) @@ -3610,11 +3613,11 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked) struct vcpu_svm *svm = to_svm(vcpu); if (masked) { - vcpu->arch.hflags |= HF_NMI_MASK; + svm->nmi_masked = true; if (!sev_es_guest(vcpu->kvm)) svm_set_intercept(svm, INTERCEPT_IRET); } else { - vcpu->arch.hflags &= ~HF_NMI_MASK; + svm->nmi_masked = false; if (!sev_es_guest(vcpu->kvm)) svm_clr_intercept(svm, INTERCEPT_IRET); } @@ -3700,7 +3703,7 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); - if ((vcpu->arch.hflags & (HF_NMI_MASK | HF_IRET_MASK)) == HF_NMI_MASK) + if (svm->nmi_masked && !svm->awaiting_iret_completion) return; /* IRET will cause a vm exit */ if (!gif_set(svm)) { @@ -3824,10 +3827,11 @@ static void svm_complete_interrupts(struct kvm_vcpu *vcpu) * If we've made progress since setting HF_IRET_MASK, we've * executed an IRET and can allow NMI injection. 
*/ - if ((vcpu->arch.hflags & HF_IRET_MASK) && + if (svm->awaiting_iret_completion && (sev_es_guest(vcpu->kvm) || kvm_rip_read(vcpu) != svm->nmi_iret_rip)) { - vcpu->arch.hflags &= ~(HF_NMI_MASK | HF_IRET_MASK); + svm->awaiting_iret_completion = false; + svm->nmi_masked = false; kvm_make_request(KVM_REQ_EVENT, vcpu); } diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 4826e6cc611bf1..587ddc150f9f34 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -237,8 +237,24 @@ struct vcpu_svm { struct svm_nested_state nested; + /* NMI mask value, used when vNMI is not enabled */ + bool nmi_masked; + + /* + * True when the NMI still masked but guest IRET was just intercepted + * and KVM is waiting for RIP change which will signal that this IRET was + * retired and thus NMI can be unmasked. + */ + bool awaiting_iret_completion; + + /* + * Set when KVM waits for IRET completion and needs to + * inject NMIs as soon as it completes (e.g NMI is pending injection). + * The KVM takes over EFLAGS.TF for this. + */ bool nmi_singlestep; u64 nmi_singlestep_guest_rflags; + bool nmi_l1_to_l2; unsigned long soft_int_csbase; @@ -280,6 +296,9 @@ struct vcpu_svm { bool guest_state_loaded; bool x2avic_msrs_intercepted; + + /* Guest GIF value which is used when vGIF is not enabled */ + bool gif_value; }; struct svm_cpu_data { @@ -497,7 +516,7 @@ static inline void enable_gif(struct vcpu_svm *svm) if (vmcb) vmcb->control.int_ctl |= V_GIF_MASK; else - svm->vcpu.arch.hflags |= HF_GIF_MASK; + svm->gif_value = true; } static inline void disable_gif(struct vcpu_svm *svm) @@ -507,7 +526,7 @@ static inline void disable_gif(struct vcpu_svm *svm) if (vmcb) vmcb->control.int_ctl &= ~V_GIF_MASK; else - svm->vcpu.arch.hflags &= ~HF_GIF_MASK; + svm->gif_value = false; } static inline bool gif_set(struct vcpu_svm *svm) @@ -517,7 +536,7 @@ static inline bool gif_set(struct vcpu_svm *svm) if (vmcb) return !!(vmcb->control.int_ctl & V_GIF_MASK); else - return !!(svm->vcpu.arch.hflags & HF_GIF_MASK); + return svm->gif_value; } static inline bool nested_npt_enabled(struct vcpu_svm *svm) From patchwork Tue Nov 29 19:37:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13059099 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 73057C4332F for ; Tue, 29 Nov 2022 19:41:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237144AbiK2TlW (ORCPT ); Tue, 29 Nov 2022 14:41:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49284 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237069AbiK2Tkg (ORCPT ); Tue, 29 Nov 2022 14:40:36 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0B98062E4 for ; Tue, 29 Nov 2022 11:37:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669750667; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ztV2NkJ9fzkBSStJHJoXJSpsO1jLyNUEDLKWsHdAan8=; 
b=O/HGwd491S+ho5ZM81/P+Fp3Azfpn6KaNaxAgSpvY4VN4tUCvvVyHNonQccabU149sEFZD uPArLXdLwN1tyorkXb+2VBNJrHj+ZEtdllzTp2mIi7SUrmpWYJ8zDvd+74r23Mbv/KeC80 pmvv1677We1HOUZS6F/r9YX3h8wH0Pk= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-608-b3cAepYqMuyaQIwk2L2AsQ-1; Tue, 29 Nov 2022 14:37:44 -0500 X-MC-Unique: b3cAepYqMuyaQIwk2L2AsQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 6A7511C06ED8; Tue, 29 Nov 2022 19:37:43 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8F6A82028DC1; Tue, 29 Nov 2022 19:37:39 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sandipan Das , Paolo Bonzini , Jim Mattson , Peter Zijlstra , Dave Hansen , Borislav Petkov , Pawan Gupta , Thomas Gleixner , Ingo Molnar , Josh Poimboeuf , Daniel Sneddon , Jiaxi Chen , Babu Moger , linux-kernel@vger.kernel.org, Jing Liu , Wyes Karny , x86@kernel.org, "H. Peter Anvin" , Sean Christopherson , Maxim Levitsky Subject: [PATCH v2 05/11] KVM: x86: emulator: stop using raw host flags Date: Tue, 29 Nov 2022 21:37:11 +0200 Message-Id: <20221129193717.513824-6-mlevitsk@redhat.com> In-Reply-To: <20221129193717.513824-1-mlevitsk@redhat.com> References: <20221129193717.513824-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Instead of re-defining the H_FLAGS bits, just expose the 'in_smm' and the 'in_guest_mode' host flags using emulator callbacks. Also while at it, garbage collect the recently removed host flags. No functional change is intended. 
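To illustrate the interface change, a small self-contained C sketch follows. The types are simplified stand-ins that only mirror the shape of the real emulator ops table; they are not the kernel definitions. The point is that the emulator no longer needs to know the hflags bit layout at all; it asks two narrow yes/no questions through callbacks.

#include <stdbool.h>

struct x86_emulate_ctxt;

struct x86_emulate_ops_sketch {
	bool (*in_smm)(struct x86_emulate_ctxt *ctxt);
	bool (*in_guest_mode)(struct x86_emulate_ctxt *ctxt);
};

struct x86_emulate_ctxt {
	const struct x86_emulate_ops_sketch *ops;
	int intercept;
};

static int emulate_rsm_sketch(struct x86_emulate_ctxt *ctxt)
{
	/* RSM outside of SMM is #UD; no hflag bit positions are needed here */
	if (!ctxt->ops->in_smm(ctxt))
		return -1;	/* emulate_ud() in the real code */
	return 0;
}

static bool need_intercept_checks(struct x86_emulate_ctxt *ctxt)
{
	/* nested intercept checks only matter while a guest (L2) is running */
	return ctxt->ops->in_guest_mode(ctxt) && ctxt->intercept;
}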
Signed-off-by: Maxim Levitsky --- arch/x86/include/asm/kvm_host.h | 6 +++--- arch/x86/kvm/emulate.c | 11 +++++------ arch/x86/kvm/kvm_emulate.h | 7 ++----- arch/x86/kvm/smm.c | 2 -- arch/x86/kvm/x86.c | 14 +++++++++----- 5 files changed, 19 insertions(+), 21 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 9208ad7a6bd004..684a5519812fb2 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2052,11 +2052,11 @@ enum { TASK_SWITCH_GATE = 3, }; -#define HF_GUEST_MASK (1 << 5) /* VCPU is in guest-mode */ +#define HF_GUEST_MASK (1 << 0) /* VCPU is in guest-mode */ #ifdef CONFIG_KVM_SMM -#define HF_SMM_MASK (1 << 6) -#define HF_SMM_INSIDE_NMI_MASK (1 << 7) +#define HF_SMM_MASK (1 << 1) +#define HF_SMM_INSIDE_NMI_MASK (1 << 2) # define __KVM_VCPU_MULTIPLE_ADDRESS_SPACE # define KVM_ADDRESS_SPACE_NUM 2 diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 5cc3efa0e21c17..d869131f84ffb3 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -2309,7 +2309,7 @@ static int em_lseg(struct x86_emulate_ctxt *ctxt) static int em_rsm(struct x86_emulate_ctxt *ctxt) { - if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_MASK) == 0) + if (!ctxt->ops->in_smm(ctxt)) return emulate_ud(ctxt); if (ctxt->ops->leave_smm(ctxt)) @@ -5132,7 +5132,7 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt) const struct x86_emulate_ops *ops = ctxt->ops; int rc = X86EMUL_CONTINUE; int saved_dst_type = ctxt->dst.type; - unsigned emul_flags; + bool in_guest_mode = ctxt->ops->in_guest_mode(ctxt); ctxt->mem_read.pos = 0; @@ -5147,7 +5147,6 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt) goto done; } - emul_flags = ctxt->ops->get_hflags(ctxt); if (unlikely(ctxt->d & (No64|Undefined|Sse|Mmx|Intercept|CheckPerm|Priv|Prot|String))) { if ((ctxt->mode == X86EMUL_MODE_PROT64 && (ctxt->d & No64)) || @@ -5181,7 +5180,7 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt) fetch_possible_mmx_operand(&ctxt->dst); } - if (unlikely(emul_flags & X86EMUL_GUEST_MASK) && ctxt->intercept) { + if (unlikely(in_guest_mode) && ctxt->intercept) { rc = emulator_check_intercept(ctxt, ctxt->intercept, X86_ICPT_PRE_EXCEPT); if (rc != X86EMUL_CONTINUE) @@ -5210,7 +5209,7 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt) goto done; } - if (unlikely(emul_flags & X86EMUL_GUEST_MASK) && (ctxt->d & Intercept)) { + if (unlikely(in_guest_mode) && (ctxt->d & Intercept)) { rc = emulator_check_intercept(ctxt, ctxt->intercept, X86_ICPT_POST_EXCEPT); if (rc != X86EMUL_CONTINUE) @@ -5264,7 +5263,7 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt) special_insn: - if (unlikely(emul_flags & X86EMUL_GUEST_MASK) && (ctxt->d & Intercept)) { + if (unlikely(in_guest_mode) && (ctxt->d & Intercept)) { rc = emulator_check_intercept(ctxt, ctxt->intercept, X86_ICPT_POST_MEMACCESS); if (rc != X86EMUL_CONTINUE) diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h index 2d9662be833378..dd0203fbb27543 100644 --- a/arch/x86/kvm/kvm_emulate.h +++ b/arch/x86/kvm/kvm_emulate.h @@ -220,7 +220,8 @@ struct x86_emulate_ops { void (*set_nmi_mask)(struct x86_emulate_ctxt *ctxt, bool masked); - unsigned (*get_hflags)(struct x86_emulate_ctxt *ctxt); + bool (*in_smm)(struct x86_emulate_ctxt *ctxt); + bool (*in_guest_mode)(struct x86_emulate_ctxt *ctxt); int (*leave_smm)(struct x86_emulate_ctxt *ctxt); void (*triple_fault)(struct x86_emulate_ctxt *ctxt); int (*set_xcr)(struct x86_emulate_ctxt *ctxt, u32 index, u64 xcr); @@ -275,10 +276,6 @@ enum x86emul_mode { 
X86EMUL_MODE_PROT64, /* 64-bit (long) mode. */ }; -/* These match some of the HF_* flags defined in kvm_host.h */ -#define X86EMUL_GUEST_MASK (1 << 5) /* VCPU is in guest-mode */ -#define X86EMUL_SMM_MASK (1 << 6) - /* * fastop functions are declared as taking a never-defined fastop parameter, * so they can't be called from C directly. diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c index a9c1c2af8d94c2..a3a94edd2f0bc9 100644 --- a/arch/x86/kvm/smm.c +++ b/arch/x86/kvm/smm.c @@ -110,8 +110,6 @@ static void check_smram_offsets(void) void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm) { - BUILD_BUG_ON(HF_SMM_MASK != X86EMUL_SMM_MASK); - trace_kvm_smm_transition(vcpu->vcpu_id, vcpu->arch.smbase, entering_smm); if (entering_smm) { diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index f18f579ebde81c..85d2a12c214dda 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8138,9 +8138,14 @@ static void emulator_set_nmi_mask(struct x86_emulate_ctxt *ctxt, bool masked) static_call(kvm_x86_set_nmi_mask)(emul_to_vcpu(ctxt), masked); } -static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt) +static bool emulator_in_smm(struct x86_emulate_ctxt *ctxt) { - return emul_to_vcpu(ctxt)->arch.hflags; + return emul_to_vcpu(ctxt)->arch.hflags & HF_SMM_MASK; +} + +static bool emulator_in_guest_mode(struct x86_emulate_ctxt *ctxt) +{ + return emul_to_vcpu(ctxt)->arch.hflags & HF_GUEST_MASK; } #ifndef CONFIG_KVM_SMM @@ -8209,7 +8214,8 @@ static const struct x86_emulate_ops emulate_ops = { .guest_has_fxsr = emulator_guest_has_fxsr, .guest_has_rdpid = emulator_guest_has_rdpid, .set_nmi_mask = emulator_set_nmi_mask, - .get_hflags = emulator_get_hflags, + .in_smm = emulator_in_smm, + .in_guest_mode = emulator_in_guest_mode, .leave_smm = emulator_leave_smm, .triple_fault = emulator_triple_fault, .set_xcr = emulator_set_xcr, @@ -8281,8 +8287,6 @@ static void init_emulate_ctxt(struct kvm_vcpu *vcpu) (cs_l && is_long_mode(vcpu)) ? X86EMUL_MODE_PROT64 : cs_db ? 
X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16; - BUILD_BUG_ON(HF_GUEST_MASK != X86EMUL_GUEST_MASK); - ctxt->interruptibility = 0; ctxt->have_exception = false; ctxt->exception.vector = -1; From patchwork Tue Nov 29 19:37:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13059100 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5470C433FE for ; Tue, 29 Nov 2022 19:41:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236240AbiK2Tla (ORCPT ); Tue, 29 Nov 2022 14:41:30 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48714 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237071AbiK2Tki (ORCPT ); Tue, 29 Nov 2022 14:40:38 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E63B658D for ; Tue, 29 Nov 2022 11:37:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669750671; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=twqdjhN58Mc4yzZpBl1NNFSCL0kf8bUxY7h7dFC7RKY=; b=NNzss3C7MdXcTNfTJ0NOW/WUKjratNeQVAkrhvMJTMGdPifte1kf8kkwiTwY7eJPpE2CtE 88/I9d3sZhhOGZJI8sZi17aJYD1AmS6bVmVfq9DYrhr2rklvvrcuoyLUVK4pn/SIZHIu89 3B1CgFpzYZNkIoJK2jxBMerKQas7CXg= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-8-e2k6pOH2OU-WXS70I9SEuw-1; Tue, 29 Nov 2022 14:37:48 -0500 X-MC-Unique: e2k6pOH2OU-WXS70I9SEuw-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 7E41F3817A6D; Tue, 29 Nov 2022 19:37:47 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id BBA682028DC1; Tue, 29 Nov 2022 19:37:43 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sandipan Das , Paolo Bonzini , Jim Mattson , Peter Zijlstra , Dave Hansen , Borislav Petkov , Pawan Gupta , Thomas Gleixner , Ingo Molnar , Josh Poimboeuf , Daniel Sneddon , Jiaxi Chen , Babu Moger , linux-kernel@vger.kernel.org, Jing Liu , Wyes Karny , x86@kernel.org, "H. Peter Anvin" , Sean Christopherson , Maxim Levitsky Subject: [PATCH v2 06/11] KVM: SVM: add wrappers to enable/disable IRET interception Date: Tue, 29 Nov 2022 21:37:12 +0200 Message-Id: <20221129193717.513824-7-mlevitsk@redhat.com> In-Reply-To: <20221129193717.513824-1-mlevitsk@redhat.com> References: <20221129193717.513824-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org SEV-ES guests don't use IRET interception for the detection of an end of a NMI. Therefore it makes sense to create a wrapper to avoid repeating the check for the SEV-ES. No functional change is intended. 
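A simplified, self-contained sketch of the wrapper pattern (hypothetical stand-in types, not the kernel code): the SEV-ES check lives in exactly one place, and every caller simply asks for IRET interception to be turned on or off.

#include <stdbool.h>

struct vcpu_sketch {
	bool sev_es_guest;	/* SEV-ES guests don't use IRET interception for NMI tracking */
	bool iret_intercepted;	/* stands in for the INTERCEPT_IRET bit */
	bool nmi_masked;
};

static void enable_iret_interception(struct vcpu_sketch *v)
{
	if (!v->sev_es_guest)
		v->iret_intercepted = true;
}

static void disable_iret_interception(struct vcpu_sketch *v)
{
	if (!v->sev_es_guest)
		v->iret_intercepted = false;
}

static void set_nmi_mask_sketch(struct vcpu_sketch *v, bool masked)
{
	/* call sites no longer repeat the SEV-ES check */
	v->nmi_masked = masked;
	if (masked)
		enable_iret_interception(v);
	else
		disable_iret_interception(v);
}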
Suggested-by: Sean Christopherson Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/svm.c | 28 +++++++++++++++++++--------- 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 512b2aa21137e2..cfed6ab29c839a 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -2468,16 +2468,29 @@ static int task_switch_interception(struct kvm_vcpu *vcpu) has_error_code, error_code); } +static void svm_disable_iret_interception(struct vcpu_svm *svm) +{ + if (!sev_es_guest(svm->vcpu.kvm)) + svm_clr_intercept(svm, INTERCEPT_IRET); +} + +static void svm_enable_iret_interception(struct vcpu_svm *svm) +{ + if (!sev_es_guest(svm->vcpu.kvm)) + svm_set_intercept(svm, INTERCEPT_IRET); +} + static int iret_interception(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); ++vcpu->stat.nmi_window_exits; svm->awaiting_iret_completion = true; - if (!sev_es_guest(vcpu->kvm)) { - svm_clr_intercept(svm, INTERCEPT_IRET); + + svm_disable_iret_interception(svm); + if (!sev_es_guest(vcpu->kvm)) svm->nmi_iret_rip = kvm_rip_read(vcpu); - } + kvm_make_request(KVM_REQ_EVENT, vcpu); return 1; } @@ -3470,8 +3483,7 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu) return; svm->nmi_masked = true; - if (!sev_es_guest(vcpu->kvm)) - svm_set_intercept(svm, INTERCEPT_IRET); + svm_enable_iret_interception(svm); ++vcpu->stat.nmi_injections; } @@ -3614,12 +3626,10 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked) if (masked) { svm->nmi_masked = true; - if (!sev_es_guest(vcpu->kvm)) - svm_set_intercept(svm, INTERCEPT_IRET); + svm_enable_iret_interception(svm); } else { svm->nmi_masked = false; - if (!sev_es_guest(vcpu->kvm)) - svm_clr_intercept(svm, INTERCEPT_IRET); + svm_disable_iret_interception(svm); } } From patchwork Tue Nov 29 19:37:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13059102 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D0711C433FE for ; Tue, 29 Nov 2022 19:41:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237129AbiK2Tls (ORCPT ); Tue, 29 Nov 2022 14:41:48 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50546 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235272AbiK2Tkx (ORCPT ); Tue, 29 Nov 2022 14:40:53 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E5A202702 for ; Tue, 29 Nov 2022 11:38:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669750680; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=OPg8L3jKUTetjU9lk920ZBLtdtJuDxmt2DbZj0atMtY=; b=NdDxUY1O8jh7Jw8D6ez+LALlr7+FQdXaClo7l8kVnDQe552941IwjcwoCSrQ0MaMpJwP29 ryCeSRr8I4kNrgqZuHxzacUk2kRwu8nWt7BnrgBGXCWGDFrFW8J5bbTja22twHFGhoksTk NBXIF0eMAPAEmJ9jaw3cbsFXNbtUqaE= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, 
cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-637-qVt9wRf9OFS2X7LYDsPYVA-1; Tue, 29 Nov 2022 14:37:52 -0500 X-MC-Unique: qVt9wRf9OFS2X7LYDsPYVA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A34A2800B23; Tue, 29 Nov 2022 19:37:51 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id CC7942027061; Tue, 29 Nov 2022 19:37:47 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sandipan Das , Paolo Bonzini , Jim Mattson , Peter Zijlstra , Dave Hansen , Borislav Petkov , Pawan Gupta , Thomas Gleixner , Ingo Molnar , Josh Poimboeuf , Daniel Sneddon , Jiaxi Chen , Babu Moger , linux-kernel@vger.kernel.org, Jing Liu , Wyes Karny , x86@kernel.org, "H. Peter Anvin" , Sean Christopherson , Maxim Levitsky Subject: [PATCH v2 07/11] KVM: x86: add a delayed hardware NMI injection interface Date: Tue, 29 Nov 2022 21:37:13 +0200 Message-Id: <20221129193717.513824-8-mlevitsk@redhat.com> In-Reply-To: <20221129193717.513824-1-mlevitsk@redhat.com> References: <20221129193717.513824-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This patch adds two new vendor callbacks: - kvm_x86_get_hw_nmi_pending() - kvm_x86_set_hw_nmi_pending() Using those callbacks the KVM can take advantage of the hardware's accelerated delayed NMI delivery (currently vNMI on SVM). Once NMI is set to pending via this interface, it is assumed that the hardware will deliver the NMI on its own to the guest once all the x86 conditions for the NMI delivery are met. Note that the 'kvm_x86_set_hw_nmi_pending()' callback is allowed to fail, in which case a normal NMI injection will be attempted when NMI can be delivered (possibly by using a NMI window). With vNMI that can happen either if vNMI is already pending or if a nested guest is running. When the vNMI injection fails due to the 'vNMI is already pending' condition, the new NMI will be dropped unless the new NMI can be injected immediately, so no NMI window will be requested. 
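The intended interplay between the software nmi_pending counter and the new hardware queueing hooks can be shown with a standalone C sketch; the structure below is a simplified stand-in for the vCPU state and vendor callbacks, not the kernel code. One pending NMI is handed to the hardware if it accepts it; whatever cannot be handed over stays in the software counter and is injected the classic way.

#include <stdbool.h>

struct vcpu_sketch {
	int  nmi_queued;	/* NMIs raised but not yet processed */
	int  nmi_pending;	/* software-tracked pending NMIs */
	bool nmi_injected;
	bool nmi_masked;
	/* optional vendor hooks; treated as "always false" when absent */
	bool (*get_hw_nmi_pending)(struct vcpu_sketch *v);
	bool (*set_hw_nmi_pending)(struct vcpu_sketch *v);
};

static void process_nmi_sketch(struct vcpu_sketch *v)
{
	int limit = 2, to_queue = v->nmi_queued;

	v->nmi_queued = 0;
	if (!to_queue)
		return;

	/* x86 allows one NMI in service plus one pending behind it */
	if (v->nmi_masked || v->nmi_injected)
		limit--;
	/* an NMI already queued in hardware also counts against the limit */
	if (v->get_hw_nmi_pending && v->get_hw_nmi_pending(v))
		limit--;
	if (limit <= 0)
		return;

	/* prefer the hardware queue; it may refuse (e.g. a nested guest is running) */
	if (v->set_hw_nmi_pending && v->set_hw_nmi_pending(v)) {
		limit--;
		to_queue--;
	}

	v->nmi_pending += to_queue;
	if (v->nmi_pending > limit)
		v->nmi_pending = limit;
}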
Signed-off-by: Maxim Levitsky Signed-off-by: Sean Christopherson Signed-off-by: Sean Christopherson Signed-off-by: Sean Christopherson --- arch/x86/include/asm/kvm-x86-ops.h | 2 ++ arch/x86/include/asm/kvm_host.h | 15 ++++++++++++- arch/x86/kvm/x86.c | 36 ++++++++++++++++++++++++++---- 3 files changed, 48 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h index abccd51dcfca1b..9e2db6cf7cc041 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -67,6 +67,8 @@ KVM_X86_OP(get_interrupt_shadow) KVM_X86_OP(patch_hypercall) KVM_X86_OP(inject_irq) KVM_X86_OP(inject_nmi) +KVM_X86_OP_OPTIONAL_RET0(get_hw_nmi_pending) +KVM_X86_OP_OPTIONAL_RET0(set_hw_nmi_pending) KVM_X86_OP(inject_exception) KVM_X86_OP(cancel_injection) KVM_X86_OP(interrupt_allowed) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 684a5519812fb2..46993ce61c92db 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -871,8 +871,13 @@ struct kvm_vcpu_arch { u64 tsc_scaling_ratio; /* current scaling ratio */ atomic_t nmi_queued; /* unprocessed asynchronous NMIs */ - unsigned nmi_pending; /* NMI queued after currently running handler */ + + unsigned int nmi_pending; /* + * NMI queued after currently running handler + * (not including a hardware pending NMI (e.g vNMI)) + */ bool nmi_injected; /* Trying to inject an NMI this entry */ + bool smi_pending; /* SMI queued after currently running handler */ u8 handling_intr_from_guest; @@ -1602,6 +1607,13 @@ struct kvm_x86_ops { int (*nmi_allowed)(struct kvm_vcpu *vcpu, bool for_injection); bool (*get_nmi_mask)(struct kvm_vcpu *vcpu); void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked); + + /* returns true, if a NMI is pending injection on hardware level (e.g vNMI) */ + bool (*get_hw_nmi_pending)(struct kvm_vcpu *vcpu); + + /* attempts make a NMI pending via hardware interface (e.g vNMI) */ + bool (*set_hw_nmi_pending)(struct kvm_vcpu *vcpu); + void (*enable_nmi_window)(struct kvm_vcpu *vcpu); void (*enable_irq_window)(struct kvm_vcpu *vcpu); void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr); @@ -1964,6 +1976,7 @@ int kvm_pic_set_irq(struct kvm_pic *pic, int irq, int irq_source_id, int level); void kvm_pic_clear_all(struct kvm_pic *pic, int irq_source_id); void kvm_inject_nmi(struct kvm_vcpu *vcpu); +int kvm_get_total_nmi_pending(struct kvm_vcpu *vcpu); void kvm_update_dr7(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 85d2a12c214dda..3c30e3f1106f79 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -5103,7 +5103,7 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu, events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu); events->nmi.injected = vcpu->arch.nmi_injected; - events->nmi.pending = vcpu->arch.nmi_pending != 0; + events->nmi.pending = kvm_get_total_nmi_pending(vcpu) != 0; events->nmi.masked = static_call(kvm_x86_get_nmi_mask)(vcpu); /* events->sipi_vector is never valid when reporting to user space */ @@ -5191,9 +5191,12 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu, vcpu->arch.nmi_injected = events->nmi.injected; if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) - vcpu->arch.nmi_pending = events->nmi.pending; + atomic_add(events->nmi.pending, &vcpu->arch.nmi_queued); + static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked); + process_nmi(vcpu); + if (events->flags & 
KVM_VCPUEVENT_VALID_SIPI_VECTOR && lapic_in_kernel(vcpu)) vcpu->arch.apic->sipi_vector = events->sipi_vector; @@ -10008,6 +10011,10 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu, static void process_nmi(struct kvm_vcpu *vcpu) { unsigned limit = 2; + int nmi_to_queue = atomic_xchg(&vcpu->arch.nmi_queued, 0); + + if (!nmi_to_queue) + return; /* * x86 is limited to one NMI running, and one NMI pending after it. @@ -10015,13 +10022,34 @@ static void process_nmi(struct kvm_vcpu *vcpu) * Otherwise, allow two (and we'll inject the first one immediately). */ if (static_call(kvm_x86_get_nmi_mask)(vcpu) || vcpu->arch.nmi_injected) - limit = 1; + limit--; + + /* Also if there is already a NMI hardware queued to be injected, + * decrease the limit again + */ + if (static_call(kvm_x86_get_hw_nmi_pending)(vcpu)) + limit--; - vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0); + if (limit <= 0) + return; + + /* Attempt to use hardware NMI queueing */ + if (static_call(kvm_x86_set_hw_nmi_pending)(vcpu)) { + limit--; + nmi_to_queue--; + } + + vcpu->arch.nmi_pending += nmi_to_queue; vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit); kvm_make_request(KVM_REQ_EVENT, vcpu); } +/* Return total number of NMIs pending injection to the VM */ +int kvm_get_total_nmi_pending(struct kvm_vcpu *vcpu) +{ + return vcpu->arch.nmi_pending + static_call(kvm_x86_get_hw_nmi_pending)(vcpu); +} + void kvm_make_scan_ioapic_request_mask(struct kvm *kvm, unsigned long *vcpu_bitmap) { From patchwork Tue Nov 29 19:37:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13059103 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E5D96C433FE for ; Tue, 29 Nov 2022 19:41:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237162AbiK2Tlv (ORCPT ); Tue, 29 Nov 2022 14:41:51 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48678 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236888AbiK2Tk4 (ORCPT ); Tue, 29 Nov 2022 14:40:56 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4502E1ADB0 for ; Tue, 29 Nov 2022 11:38:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669750682; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=W3C0J4b/Iq2TqSFgmT2zDR52z47EVntTEYZ6+rgtU/8=; b=aG2l1cO/IgswxACI3gVfOgBUtg7an+dgwgHn7/XK4Il7kaHAWUCgBdAWfQBs5VGOk7IE2P vU/aZ9dhR2aS0cH8u/dQl8aeQD6izDmG3AbFEQlyCoXywrJpgUcDgnog8+uJyzqSC69032 ls0k0gfJRPoP1yi8lge6U4KPRGffkBU= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-247-s4zjxzx_Nja3jk2eWklLvQ-1; Tue, 29 Nov 2022 14:37:57 -0500 X-MC-Unique: s4zjxzx_Nja3jk2eWklLvQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client 
certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id F25E73817A62; Tue, 29 Nov 2022 19:37:55 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 00B312028CE4; Tue, 29 Nov 2022 19:37:51 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sandipan Das , Paolo Bonzini , Jim Mattson , Peter Zijlstra , Dave Hansen , Borislav Petkov , Pawan Gupta , Thomas Gleixner , Ingo Molnar , Josh Poimboeuf , Daniel Sneddon , Jiaxi Chen , Babu Moger , linux-kernel@vger.kernel.org, Jing Liu , Wyes Karny , x86@kernel.org, "H. Peter Anvin" , Sean Christopherson , Maxim Levitsky , Santosh Shukla Subject: [PATCH v2 08/11] x86/cpu: Add CPUID feature bit for VNMI Date: Tue, 29 Nov 2022 21:37:14 +0200 Message-Id: <20221129193717.513824-9-mlevitsk@redhat.com> In-Reply-To: <20221129193717.513824-1-mlevitsk@redhat.com> References: <20221129193717.513824-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Santosh Shukla VNMI feature allows the hypervisor to inject NMI into the guest w/o using Event injection mechanism, The benefit of using VNMI over the event Injection that does not require tracking the Guest's NMI state and intercepting the IRET for the NMI completion. VNMI achieves that by exposing 3 capability bits in VMCB intr_cntrl which helps with virtualizing NMI injection and NMI_Masking. The presence of this feature is indicated via the CPUID function 0x8000000A_EDX[25]. Reviewed-by: Maxim Levitsky Signed-off-by: Santosh Shukla --- arch/x86/include/asm/cpufeatures.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 1419c4e04d45f3..ed50f28bdf235b 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -359,6 +359,7 @@ #define X86_FEATURE_VGIF (15*32+16) /* Virtual GIF */ #define X86_FEATURE_X2AVIC (15*32+18) /* Virtual x2apic */ #define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* Virtual SPEC_CTRL */ +#define X86_FEATURE_AMD_VNMI (15*32+25) /* Virtual NMI */ #define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* "" SVME addr check */ /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */ From patchwork Tue Nov 29 19:37:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13059104 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EF9C9C4332F for ; Tue, 29 Nov 2022 19:41:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237174AbiK2Tlx (ORCPT ); Tue, 29 Nov 2022 14:41:53 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49160 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237065AbiK2TlH (ORCPT ); Tue, 29 Nov 2022 14:41:07 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3BFBB442EE for ; Tue, 29 Nov 2022 11:38:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669750686; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=8DWsatb5cdHreCXzHDkIF/f3CHfD6y+ENtUisNkNx+I=; b=QY6Jdkj0xFm0XbxytdEWw28FkwtPfcLdWL2GmQRR5uzsvFWKI6DpxHVnc5OG1ibGDyf+jK 0YH7eDFvePrzB/iT599kuS1cPqQm1J2wEWhuRijO+NhBPmbLnumb6igGo3X4cKqzoDW92C O4SwZzzzkpq7tUGhcv1V2j2hO6jOzsQ= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-76-r4lRUZj7OwCnv_6ASj17GA-1; Tue, 29 Nov 2022 14:38:01 -0500 X-MC-Unique: r4lRUZj7OwCnv_6ASj17GA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 9F84C1C06ED8; Tue, 29 Nov 2022 19:38:00 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 4D7AE2028CE4; Tue, 29 Nov 2022 19:37:56 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sandipan Das , Paolo Bonzini , Jim Mattson , Peter Zijlstra , Dave Hansen , Borislav Petkov , Pawan Gupta , Thomas Gleixner , Ingo Molnar , Josh Poimboeuf , Daniel Sneddon , Jiaxi Chen , Babu Moger , linux-kernel@vger.kernel.org, Jing Liu , Wyes Karny , x86@kernel.org, "H. Peter Anvin" , Sean Christopherson , Maxim Levitsky , Santosh Shukla Subject: [PATCH v2 09/11] KVM: SVM: Add VNMI bit definition Date: Tue, 29 Nov 2022 21:37:15 +0200 Message-Id: <20221129193717.513824-10-mlevitsk@redhat.com> In-Reply-To: <20221129193717.513824-1-mlevitsk@redhat.com> References: <20221129193717.513824-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Santosh Shukla VNMI exposes 3 capability bits (V_NMI, V_NMI_MASK, and V_NMI_ENABLE) to virtualize NMI and NMI_MASK, Those capability bits are part of VMCB::intr_ctrl - V_NMI(11) - Indicates whether a virtual NMI is pending in the guest. V_NMI_MASK(12) - Indicates whether virtual NMI is masked in the guest. V_NMI_ENABLE(26) - Enables the NMI virtualization feature for the guest. When Hypervisor wants to inject NMI, it will set V_NMI bit, Processor will clear the V_NMI bit and Set the V_NMI_MASK which means the Guest is handling NMI, After the guest handled the NMI, The processor will clear the V_NMI_MASK on the successful completion of IRET instruction Or if VMEXIT occurs while delivering the virtual NMI. To enable the VNMI capability, Hypervisor need to program V_NMI_ENABLE bit 1. 
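A small illustrative C model of the three int_ctl bits and the delivery sequence described above. This is a software sketch of the documented hardware behaviour, not how KVM programs the real VMCB; the helper names are hypothetical.

#include <stdint.h>
#include <stdbool.h>

#define V_NMI_PENDING_BIT 11	/* a virtual NMI is pending for the guest */
#define V_NMI_MASK_BIT    12	/* the guest is currently handling a virtual NMI */
#define V_NMI_ENABLE_BIT  26	/* NMI virtualization enabled for this guest */

#define V_NMI_PENDING (1u << V_NMI_PENDING_BIT)
#define V_NMI_MASK    (1u << V_NMI_MASK_BIT)
#define V_NMI_ENABLE  (1u << V_NMI_ENABLE_BIT)

/* Hypervisor side: request an NMI by setting the pending bit. */
static bool queue_vnmi(uint32_t *int_ctl)
{
	if (!(*int_ctl & V_NMI_ENABLE) || (*int_ctl & V_NMI_PENDING))
		return false;		/* feature off, or one NMI already queued */
	*int_ctl |= V_NMI_PENDING;
	return true;
}

/* CPU side: on delivering the virtual NMI to the guest. */
static void deliver_vnmi(uint32_t *int_ctl)
{
	*int_ctl &= ~V_NMI_PENDING;
	*int_ctl |= V_NMI_MASK;		/* guest is now inside its NMI handler */
}

/* CPU side: on a successful guest IRET, or a VMEXIT during delivery. */
static void complete_vnmi(uint32_t *int_ctl)
{
	*int_ctl &= ~V_NMI_MASK;
}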
Reviewed-by: Maxim Levitsky Signed-off-by: Santosh Shukla --- arch/x86/include/asm/svm.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h index cb1ee53ad3b189..26d6f549ce2b46 100644 --- a/arch/x86/include/asm/svm.h +++ b/arch/x86/include/asm/svm.h @@ -203,6 +203,13 @@ struct __attribute__ ((__packed__)) vmcb_control_area { #define X2APIC_MODE_SHIFT 30 #define X2APIC_MODE_MASK (1 << X2APIC_MODE_SHIFT) +#define V_NMI_PENDING_SHIFT 11 +#define V_NMI_PENDING (1 << V_NMI_PENDING_SHIFT) +#define V_NMI_MASK_SHIFT 12 +#define V_NMI_MASK (1 << V_NMI_MASK_SHIFT) +#define V_NMI_ENABLE_SHIFT 26 +#define V_NMI_ENABLE (1 << V_NMI_ENABLE_SHIFT) + #define LBR_CTL_ENABLE_MASK BIT_ULL(0) #define VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK BIT_ULL(1) From patchwork Tue Nov 29 19:37:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13059105 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6F45EC433FE for ; Tue, 29 Nov 2022 19:41:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236369AbiK2Tly (ORCPT ); Tue, 29 Nov 2022 14:41:54 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48616 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237095AbiK2TlH (ORCPT ); Tue, 29 Nov 2022 14:41:07 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7C0B95915A for ; Tue, 29 Nov 2022 11:38:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669750691; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rGwAkLQUJjI6/SajL0tjhWVIBhdO0EXs4usgx0YdcWo=; b=VbJB2gNEzUyOq/yBWONBPAHRKyuOarIuwlZ5Y6tadDkCL+SS386qaur8fG26Z6uRNBX4tc jK812x/LFPhXbT2tAwUWSDOiOifyKqkw9miLqi4HgI7l7WQvvJkPrh71K3Ap1ug9kC3hO2 SUn1giqIiNA6Uh7GB2OcY0XeCslGoPQ= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-385-I7THsf4vNf6hAU3FMB9_Jg-1; Tue, 29 Nov 2022 14:38:06 -0500 X-MC-Unique: I7THsf4vNf6hAU3FMB9_Jg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2FC5086C141; Tue, 29 Nov 2022 19:38:05 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id EDDF42028DC1; Tue, 29 Nov 2022 19:38:00 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sandipan Das , Paolo Bonzini , Jim Mattson , Peter Zijlstra , Dave Hansen , Borislav Petkov , Pawan Gupta , Thomas Gleixner , Ingo Molnar , Josh Poimboeuf , Daniel Sneddon , Jiaxi Chen , Babu Moger , linux-kernel@vger.kernel.org, Jing Liu , Wyes Karny , x86@kernel.org, "H. 
Peter Anvin" , Sean Christopherson , Maxim Levitsky , Santosh Shukla Subject: [PATCH v2 10/11] KVM: SVM: implement support for vNMI Date: Tue, 29 Nov 2022 21:37:16 +0200 Message-Id: <20221129193717.513824-11-mlevitsk@redhat.com> In-Reply-To: <20221129193717.513824-1-mlevitsk@redhat.com> References: <20221129193717.513824-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This patch implements support for injecting pending NMIs via the new .set_hw_nmi_pending kvm_x86_ops callback, using AMD's new vNMI feature. Note that vNMI delivery does not cause a VM exit, but a VM exit is exactly what is needed when a nested guest intercepts NMIs. Therefore, to avoid breaking nesting, vNMI is inhibited while a nested guest is running, and the legacy NMI window detection and delivery method is used instead. While it would be possible to pass vNMI through when a nested guest doesn't intercept NMIs, such usage is very uncommon and not worth optimizing for. Signed-off-by: Santosh Shukla Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 42 +++++++++++++++ arch/x86/kvm/svm/svm.c | 111 ++++++++++++++++++++++++++++++-------- arch/x86/kvm/svm/svm.h | 10 ++++ 3 files changed, 140 insertions(+), 23 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index e891318595113e..5bea672bf8b12d 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -623,6 +623,42 @@ static bool is_evtinj_nmi(u32 evtinj) return type == SVM_EVTINJ_TYPE_NMI; } +static void nested_svm_save_vnmi(struct vcpu_svm *svm) +{ + struct vmcb *vmcb01 = svm->vmcb01.ptr; + + /* + * Copy the vNMI state back to software NMI tracking state + * for the duration of the nested run + */ + + svm->nmi_masked = vmcb01->control.int_ctl & V_NMI_MASK; + svm->vcpu.arch.nmi_pending += !!(vmcb01->control.int_ctl & V_NMI_PENDING); +} + +static void nested_svm_restore_vnmi(struct vcpu_svm *svm) +{ + struct kvm_vcpu *vcpu = &svm->vcpu; + struct vmcb *vmcb01 = svm->vmcb01.ptr; + + /* + * Restore the vNMI state from the software NMI tracking state + * after a nested run + */ + + if (svm->nmi_masked) + vmcb01->control.int_ctl |= V_NMI_MASK; + else + vmcb01->control.int_ctl &= ~V_NMI_MASK; + + if (vcpu->arch.nmi_pending) { + vcpu->arch.nmi_pending--; + vmcb01->control.int_ctl |= V_NMI_PENDING; + } else + vmcb01->control.int_ctl &= ~V_NMI_PENDING; +} + + static void nested_vmcb02_prepare_control(struct vcpu_svm *svm, unsigned long vmcb12_rip, unsigned long vmcb12_csbase) @@ -646,6 +682,9 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm, else int_ctl_vmcb01_bits |= (V_GIF_MASK | V_GIF_ENABLE_MASK); + if (vnmi) + nested_svm_save_vnmi(svm); + /* Copied from vmcb01. msrpm_base can be overwritten later. */ vmcb02->control.nested_ctl = vmcb01->control.nested_ctl; vmcb02->control.iopm_base_pa = vmcb01->control.iopm_base_pa; @@ -1049,6 +1088,9 @@ int nested_svm_vmexit(struct vcpu_svm *svm) svm_update_lbrv(vcpu); } + if (vnmi) + nested_svm_restore_vnmi(svm); + /* * On vmexit the GIF is set to false and * no event can be injected in L1.
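The save/restore pair in the hunk above folds L1's hardware vNMI state into KVM's software NMI tracking for the duration of the nested run and re-arms it on exit. For illustration, a minimal standalone model of that round trip follows; the types are simplified stand-ins rather than the real KVM structures, and only the bit logic mirrors the hunk above.

/*
 * Illustration only: a standalone model of nested_svm_save_vnmi() and
 * nested_svm_restore_vnmi() from the hunk above.  struct toy_vcpu is a
 * simplified stand-in for the real KVM structures.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define V_NMI_PENDING   (1u << 11)
#define V_NMI_MASK      (1u << 12)

struct toy_vcpu {
        uint32_t int_ctl;               /* stands in for vmcb01 int_ctl */
        bool nmi_masked;                /* software NMI-masked flag */
        unsigned int nmi_pending;       /* software pending-NMI counter */
};

/* Nested entry: fold L1's hardware vNMI state into the software trackers. */
static void save_vnmi(struct toy_vcpu *v)
{
        v->nmi_masked = v->int_ctl & V_NMI_MASK;
        v->nmi_pending += !!(v->int_ctl & V_NMI_PENDING);
}

/* Nested exit: move the software state back into the hardware bits. */
static void restore_vnmi(struct toy_vcpu *v)
{
        if (v->nmi_masked)
                v->int_ctl |= V_NMI_MASK;
        else
                v->int_ctl &= ~V_NMI_MASK;

        if (v->nmi_pending) {
                v->nmi_pending--;
                v->int_ctl |= V_NMI_PENDING;
        } else {
                v->int_ctl &= ~V_NMI_PENDING;
        }
}

int main(void)
{
        struct toy_vcpu v = { .int_ctl = V_NMI_PENDING };

        save_vnmi(&v);                  /* pending vNMI becomes a counter */
        assert(v.nmi_pending == 1);
        restore_vnmi(&v);               /* and is re-armed in hardware */
        assert(v.int_ctl & V_NMI_PENDING);
        return 0;
}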
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index cfed6ab29c839a..bf10adcf3170a8 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -230,6 +230,8 @@ module_param(dump_invalid_vmcb, bool, 0644); bool intercept_smi = true; module_param(intercept_smi, bool, 0444); +bool vnmi = true; +module_param(vnmi, bool, 0444); static bool svm_gp_erratum_intercept = true; @@ -1299,6 +1301,9 @@ static void init_vmcb(struct kvm_vcpu *vcpu) if (kvm_vcpu_apicv_active(vcpu)) avic_init_vmcb(svm, vmcb); + if (vnmi) + svm->vmcb->control.int_ctl |= V_NMI_ENABLE; + if (vgif) { svm_clr_intercept(svm, INTERCEPT_STGI); svm_clr_intercept(svm, INTERCEPT_CLGI); @@ -3487,6 +3492,39 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu) ++vcpu->stat.nmi_injections; } + +static bool svm_get_hw_nmi_pending(struct kvm_vcpu *vcpu) +{ + struct vcpu_svm *svm = to_svm(vcpu); + + if (!is_vnmi_enabled(svm)) + return false; + + return !!(svm->vmcb->control.int_ctl & V_NMI_MASK); +} + +static bool svm_set_hw_nmi_pending(struct kvm_vcpu *vcpu) +{ + struct vcpu_svm *svm = to_svm(vcpu); + + if (!is_vnmi_enabled(svm)) + return false; + + if (svm->vmcb->control.int_ctl & V_NMI_PENDING) + return false; + + svm->vmcb->control.int_ctl |= V_NMI_PENDING; + vmcb_mark_dirty(svm->vmcb, VMCB_INTR); + + /* + * NMI isn't yet technically injected but + * this rough estimation should be good enough + */ + ++vcpu->stat.nmi_injections; + + return true; +} + static void svm_inject_irq(struct kvm_vcpu *vcpu, bool reinjected) { struct vcpu_svm *svm = to_svm(vcpu); @@ -3582,11 +3620,38 @@ static void svm_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr) svm_set_intercept(svm, INTERCEPT_CR8_WRITE); } +static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu) +{ + struct vcpu_svm *svm = to_svm(vcpu); + + if (is_vnmi_enabled(svm)) + return svm->vmcb->control.int_ctl & V_NMI_MASK; + else + return svm->nmi_masked; +} + +static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked) +{ + struct vcpu_svm *svm = to_svm(vcpu); + + if (is_vnmi_enabled(svm)) { + if (masked) + svm->vmcb->control.int_ctl |= V_NMI_MASK; + else + svm->vmcb->control.int_ctl &= ~V_NMI_MASK; + } else { + svm->nmi_masked = masked; + if (masked) + svm_enable_iret_interception(svm); + else + svm_disable_iret_interception(svm); + } +} + bool svm_nmi_blocked(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); struct vmcb *vmcb = svm->vmcb; - bool ret; if (!gif_set(svm)) return true; @@ -3594,10 +3659,10 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu) if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm)) return false; - ret = (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) || - (svm->nmi_masked); + if (svm_get_nmi_mask(vcpu)) + return true; - return ret; + return vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK; } static int svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection) @@ -3615,24 +3680,6 @@ static int svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection) return 1; } -static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu) -{ - return to_svm(vcpu)->nmi_masked; -} - -static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked) -{ - struct vcpu_svm *svm = to_svm(vcpu); - - if (masked) { - svm->nmi_masked = true; - svm_enable_iret_interception(svm); - } else { - svm->nmi_masked = false; - svm_disable_iret_interception(svm); - } -} - bool svm_interrupt_blocked(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); @@ -3725,10 +3772,16 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu) /* * Something prevents NMI 
from been injected. Single step over possible * problem (IRET or exception injection or interrupt shadow) + * + * With vNMI we should never need an NMI window + * (we can always inject vNMI either by setting VNMI_PENDING or by EVENTINJ) */ + if (WARN_ON_ONCE(is_vnmi_enabled(svm))) + return; + svm->nmi_singlestep_guest_rflags = svm_get_rflags(vcpu); - svm->nmi_singlestep = true; svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF); + svm->nmi_singlestep = true; } static void svm_flush_tlb_current(struct kvm_vcpu *vcpu) @@ -4770,6 +4823,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = { .patch_hypercall = svm_patch_hypercall, .inject_irq = svm_inject_irq, .inject_nmi = svm_inject_nmi, + .get_hw_nmi_pending = svm_get_hw_nmi_pending, + .set_hw_nmi_pending = svm_set_hw_nmi_pending, .inject_exception = svm_inject_exception, .cancel_injection = svm_cancel_injection, .interrupt_allowed = svm_interrupt_allowed, @@ -5058,6 +5113,16 @@ static __init int svm_hardware_setup(void) pr_info("Virtual GIF supported\n"); } + + vnmi = vgif && vnmi && boot_cpu_has(X86_FEATURE_AMD_VNMI); + if (vnmi) + pr_info("Virtual NMI enabled\n"); + + if (!vnmi) { + svm_x86_ops.get_hw_nmi_pending = NULL; + svm_x86_ops.set_hw_nmi_pending = NULL; + } + if (lbrv) { if (!boot_cpu_has(X86_FEATURE_LBRV)) lbrv = false; diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 587ddc150f9f34..0b7e1790fadde1 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -35,6 +35,7 @@ extern u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly; extern bool npt_enabled; extern int vgif; extern bool intercept_smi; +extern bool vnmi; enum avic_modes { AVIC_MODE_NONE = 0, @@ -553,6 +554,15 @@ static inline bool is_x2apic_msrpm_offset(u32 offset) (msr < (APIC_BASE_MSR + 0x100)); } +static inline bool is_vnmi_enabled(struct vcpu_svm *svm) +{ + /* L1's vNMI is inhibited while nested guest is running */ + if (is_guest_mode(&svm->vcpu)) + return false; + + return !!(svm->vmcb01.ptr->control.int_ctl & V_NMI_ENABLE); +} + /* svm.c */ #define MSR_INVALID 0xffffffffU From patchwork Tue Nov 29 19:37:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 13059106 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8EA5C4332F for ; Tue, 29 Nov 2022 19:42:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236278AbiK2TmV (ORCPT ); Tue, 29 Nov 2022 14:42:21 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48694 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236955AbiK2TlS (ORCPT ); Tue, 29 Nov 2022 14:41:18 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 99F955EFAC for ; Tue, 29 Nov 2022 11:38:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1669750699; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=zbNm6u7VwMLnnR5d0F1qjMdQ/5J2+Ysx2Y9gYe/zZwQ=; b=X0JnGBW7aYkVjciKNKX1OlDqKuO+9sn3mygQypANNhSkdvb/nwZs3XC6dH4qLxEk00U3oa 
v67MQnbvBnXis/gy9xtMFBdR4JIBgjD7TcT6EempLooYMDKvmtg9H/wit7IfOjYfJf6b7h 9xs+jJXwSFnIuAgpp5SxgC0rbTKiQrY= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-275-UxmNabJBMqKYQKTIqYI29A-1; Tue, 29 Nov 2022 14:38:10 -0500 X-MC-Unique: UxmNabJBMqKYQKTIqYI29A-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CF1CB3C0F22F; Tue, 29 Nov 2022 19:38:09 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 80E782027061; Tue, 29 Nov 2022 19:38:05 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Sandipan Das , Paolo Bonzini , Jim Mattson , Peter Zijlstra , Dave Hansen , Borislav Petkov , Pawan Gupta , Thomas Gleixner , Ingo Molnar , Josh Poimboeuf , Daniel Sneddon , Jiaxi Chen , Babu Moger , linux-kernel@vger.kernel.org, Jing Liu , Wyes Karny , x86@kernel.org, "H. Peter Anvin" , Sean Christopherson , Maxim Levitsky , Santosh Shukla Subject: [PATCH v2 11/11] KVM: nSVM: implement support for nested VNMI Date: Tue, 29 Nov 2022 21:37:17 +0200 Message-Id: <20221129193717.513824-12-mlevitsk@redhat.com> In-Reply-To: <20221129193717.513824-1-mlevitsk@redhat.com> References: <20221129193717.513824-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This patch allows L1 to use vNMI to accelerate its injection of NMIs into L2, by passing the vNMI int_ctl bits through between vmcb12 and vmcb02. While L2 runs, L1's own vNMI is inhibited and L1's NMIs are injected normally. Supporting nested vNMI requires saving and restoring the vNMI bits on nested entry and exit: if both L1 and L2 use vNMI, the bits are copied from vmcb12 to vmcb02 on entry and back on exit; if L1 uses vNMI but L2 does not, the bits are copied from vmcb01 to vmcb02 on entry and back on exit. Tested with KVM-unit-tests and in a nested guest scenario.
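For illustration, here is a small standalone model of the two rules this patch adds: the consistency check that rejects V_NMI_ENABLE in vmcb12 when L1 does not intercept NMIs, and the sync of the V_NMI pending/mask bits from vmcb02 back into the cached vmcb12 controls on nested VM exit. The helpers and types are simplified stand-ins, not the KVM functions in the diff below.

/*
 * Illustration only: a standalone model of the nested-vNMI rules added by
 * this patch.  vnmi_ctl_valid() mirrors the new __nested_vmcb_check_controls
 * check; sync_int_ctl() mirrors the vNMI part of
 * nested_sync_control_from_vmcb02().
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define V_NMI_PENDING   (1u << 11)
#define V_NMI_MASK      (1u << 12)
#define V_NMI_ENABLE    (1u << 26)

/* L1 may enable vNMI for L2 only if it also intercepts NMIs. */
static bool vnmi_ctl_valid(uint32_t vmcb12_int_ctl, bool intercepts_nmi)
{
        if ((vmcb12_int_ctl & V_NMI_ENABLE) && !intercepts_nmi)
                return false;
        return true;
}

/*
 * On nested VM exit, if L2 uses vNMI, copy the pending/mask bits the CPU
 * updated in vmcb02 back into the cached vmcb12 controls.
 */
static uint32_t sync_int_ctl(uint32_t cached_int_ctl, uint32_t vmcb02_int_ctl,
                             bool l2_uses_vnmi)
{
        uint32_t mask = 0;

        if (l2_uses_vnmi)
                mask |= V_NMI_MASK | V_NMI_PENDING;

        return (cached_int_ctl & ~mask) | (vmcb02_int_ctl & mask);
}

int main(void)
{
        /* Enabling vNMI without intercepting NMIs is an invalid control. */
        assert(!vnmi_ctl_valid(V_NMI_ENABLE, false));
        assert(vnmi_ctl_valid(V_NMI_ENABLE, true));

        /* A pending vNMI recorded by the CPU flows back into vmcb12. */
        assert(sync_int_ctl(V_NMI_ENABLE, V_NMI_PENDING, true) ==
               (V_NMI_ENABLE | V_NMI_PENDING));
        return 0;
}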
Signed-off-by: Santosh Shukla Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 13 ++++++++++++- arch/x86/kvm/svm/svm.c | 5 +++++ arch/x86/kvm/svm/svm.h | 6 ++++++ 3 files changed, 23 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 5bea672bf8b12d..81346665058e26 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -278,6 +278,11 @@ static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu, if (CC(!nested_svm_check_tlb_ctl(vcpu, control->tlb_ctl))) return false; + if (CC((control->int_ctl & V_NMI_ENABLE) && + !vmcb12_is_intercept(control, INTERCEPT_NMI))) { + return false; + } + return true; } @@ -427,6 +432,9 @@ void nested_sync_control_from_vmcb02(struct vcpu_svm *svm) if (nested_vgif_enabled(svm)) mask |= V_GIF_MASK; + if (nested_vnmi_enabled(svm)) + mask |= V_NMI_MASK | V_NMI_PENDING; + svm->nested.ctl.int_ctl &= ~mask; svm->nested.ctl.int_ctl |= svm->vmcb->control.int_ctl & mask; } @@ -682,8 +690,11 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm, else int_ctl_vmcb01_bits |= (V_GIF_MASK | V_GIF_ENABLE_MASK); - if (vnmi) + if (vnmi) { nested_svm_save_vnmi(svm); + if (nested_vnmi_enabled(svm)) + int_ctl_vmcb12_bits |= (V_NMI_PENDING | V_NMI_ENABLE | V_NMI_MASK); + } /* Copied from vmcb01. msrpm_base can be overwritten later. */ vmcb02->control.nested_ctl = vmcb01->control.nested_ctl; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index bf10adcf3170a8..fb203f536d2f9b 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4214,6 +4214,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu) svm->vgif_enabled = vgif && guest_cpuid_has(vcpu, X86_FEATURE_VGIF); + svm->vnmi_enabled = vnmi && guest_cpuid_has(vcpu, X86_FEATURE_AMD_VNMI); + svm_recalc_instruction_intercepts(vcpu, svm); /* For sev guests, the memory encryption bit is not reserved in CR3. */ @@ -4967,6 +4969,9 @@ static __init void svm_set_cpu_caps(void) if (vgif) kvm_cpu_cap_set(X86_FEATURE_VGIF); + if (vnmi) + kvm_cpu_cap_set(X86_FEATURE_AMD_VNMI); + /* Nested VM can receive #VMEXIT instead of triggering #GP */ kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK); } diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 0b7e1790fadde1..8fb2085188c5ac 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -271,6 +271,7 @@ struct vcpu_svm { bool pause_filter_enabled : 1; bool pause_threshold_enabled : 1; bool vgif_enabled : 1; + bool vnmi_enabled : 1; u32 ldr_reg; u32 dfr_reg; @@ -545,6 +546,11 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm) return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE; } +static inline bool nested_vnmi_enabled(struct vcpu_svm *svm) +{ + return svm->vnmi_enabled && (svm->nested.ctl.int_ctl & V_NMI_ENABLE); +} + static inline bool is_x2apic_msrpm_offset(u32 offset) { /* 4 msrs per u8, and 4 u8 in u32 */