From patchwork Thu Sep 15 09:28:38 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chenyi Qiang
X-Patchwork-Id: 12977107
From: Chenyi Qiang
To: Paolo Bonzini, Marcelo Tosatti, Richard Henderson, Eduardo Habkost,
    Peter Xu, Xiaoyao Li
Cc: Chenyi Qiang, qemu-devel@nongnu.org, kvm@vger.kernel.org
Subject: [PATCH v6 1/2] i386: kvm: extend kvm_{get, put}_vcpu_events to
 support pending triple fault
Date: Thu, 15 Sep 2022 17:28:38 +0800
Message-Id: <20220915092839.5518-2-chenyi.qiang@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220915092839.5518-1-chenyi.qiang@intel.com>
References: <20220915092839.5518-1-chenyi.qiang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

KVM never loses direct triple faults, i.e. those detected in hardware and
morphed into a VM-Exit by KVM. But for triple faults synthesized by KVM,
e.g. on the RSM path, if KVM exits to userspace before the request is
serviced, userspace could migrate the VM and lose the triple fault.

A new flag KVM_VCPUEVENT_VALID_TRIPLE_FAULT is defined to signal that
the events.triple_fault.pending field contains a valid state if the
KVM_CAP_X86_TRIPLE_FAULT_EVENT capability is enabled.
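For reference, the raw KVM UAPI sequence that the QEMU wiring below relies
on is sketched here. This is an illustration only, not part of the patch:
it assumes Linux UAPI headers that already carry the triple-fault event
bits (v5.19 or newer) and already-opened kvm_fd/vm_fd/vcpu_fd handles
supplied by the caller.

/*
 * Illustrative userspace-side sketch of the KVM_CAP_X86_TRIPLE_FAULT_EVENT
 * ABI (not part of this patch). kvm_fd, vm_fd and vcpu_fd are assumed to
 * be already-opened /dev/kvm, VM and vCPU file descriptors.
 */
#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int enable_triple_fault_event(int kvm_fd, int vm_fd)
{
    struct kvm_enable_cap cap;

    /* Probe the capability before trying to enable it. */
    if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_X86_TRIPLE_FAULT_EVENT) <= 0) {
        return -1;                 /* kernel does not offer the capability */
    }

    memset(&cap, 0, sizeof(cap));
    cap.cap = KVM_CAP_X86_TRIPLE_FAULT_EVENT;
    cap.args[0] = 1;               /* turn triple-fault event reporting on */
    return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

/* Save path: returns 1 if a synthesized triple fault is pending. */
static int triple_fault_pending(int vcpu_fd)
{
    struct kvm_vcpu_events events;

    if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0) {
        return -1;
    }
    if (!(events.flags & KVM_VCPUEVENT_VALID_TRIPLE_FAULT)) {
        return 0;                  /* field not reported by this kernel */
    }
    return events.triple_fault.pending;
}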
Acked-by: Peter Xu
Signed-off-by: Chenyi Qiang
---
 target/i386/cpu.c     |  1 +
 target/i386/cpu.h     |  1 +
 target/i386/kvm/kvm.c | 20 ++++++++++++++++++++
 3 files changed, 22 insertions(+)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 1db1278a59..6e107466b3 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -6017,6 +6017,7 @@ static void x86_cpu_reset(DeviceState *dev)
     env->exception_has_payload = false;
     env->exception_payload = 0;
     env->nmi_injected = false;
+    env->triple_fault_pending = false;
 #if !defined(CONFIG_USER_ONLY)
     /* We hard-wire the BSP to the first CPU. */
     apic_designate_bsp(cpu->apic_state, s->cpu_index == 0);
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 82004b65b9..b97d182e28 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1739,6 +1739,7 @@ typedef struct CPUArchState {
     uint8_t has_error_code;
     uint8_t exception_has_payload;
     uint64_t exception_payload;
+    bool triple_fault_pending;
     uint32_t ins_len;
     uint32_t sipi_vector;
     bool tsc_valid;
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index a1fd1f5379..3838827134 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -132,6 +132,7 @@ static int has_xcrs;
 static int has_pit_state2;
 static int has_sregs2;
 static int has_exception_payload;
+static int has_triple_fault_event;
 
 static bool has_msr_mcg_ext_ctl;
 
@@ -2483,6 +2484,16 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
         }
     }
 
+    has_triple_fault_event = kvm_check_extension(s, KVM_CAP_X86_TRIPLE_FAULT_EVENT);
+    if (has_triple_fault_event) {
+        ret = kvm_vm_enable_cap(s, KVM_CAP_X86_TRIPLE_FAULT_EVENT, 0, true);
+        if (ret < 0) {
+            error_report("kvm: Failed to enable triple fault event cap: %s",
+                         strerror(-ret));
+            return ret;
+        }
+    }
+
     ret = kvm_get_supported_msrs(s);
     if (ret < 0) {
         return ret;
@@ -4299,6 +4310,11 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
         }
     }
 
+    if (has_triple_fault_event) {
+        events.flags |= KVM_VCPUEVENT_VALID_TRIPLE_FAULT;
+        events.triple_fault.pending = env->triple_fault_pending;
+    }
+
     return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
 }
 
@@ -4368,6 +4384,10 @@ static int kvm_get_vcpu_events(X86CPU *cpu)
         }
     }
 
+    if (events.flags & KVM_VCPUEVENT_VALID_TRIPLE_FAULT) {
+        env->triple_fault_pending = events.triple_fault.pending;
+    }
+
     env->sipi_vector = events.sipi_vector;
 
     return 0;
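As a companion to the save-side sketch above, the restore side mirrors
kvm_put_vcpu_events(): mark KVM_VCPUEVENT_VALID_TRIPLE_FAULT in
events.flags and hand the saved bit back through KVM_SET_VCPU_EVENTS.
The snippet below is a raw-UAPI sketch with a hypothetical vcpu_fd, not
how QEMU structures it (kvm_put_vcpu_events() rebuilds the whole events
struct rather than doing a get-modify-set round trip).

/* Restore-side sketch (not part of the patch): re-arm a saved triple fault. */
static int restore_triple_fault(int vcpu_fd, __u8 pending)
{
    struct kvm_vcpu_events events;

    /* Start from the vCPU's current event state so nothing else is clobbered. */
    if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0) {
        return -1;
    }
    events.flags |= KVM_VCPUEVENT_VALID_TRIPLE_FAULT;
    events.triple_fault.pending = pending;
    return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
}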