From patchwork Thu Nov 21 20:14:42 2024
X-Patchwork-Submitter: Adrian Hunter <adrian.hunter@intel.com>
X-Patchwork-Id: 13882331
From: Adrian Hunter <adrian.hunter@intel.com>
To: pbonzini@redhat.com, seanjc@google.com, kvm@vger.kernel.org,
 dave.hansen@linux.intel.com
Cc: rick.p.edgecombe@intel.com, kai.huang@intel.com, adrian.hunter@intel.com,
 reinette.chatre@intel.com, xiaoyao.li@intel.com, tony.lindgren@linux.intel.com,
 binbin.wu@linux.intel.com, dmatlack@google.com,
 isaku.yamahata@intel.com, nik.borisov@suse.com, linux-kernel@vger.kernel.org,
 x86@kernel.org, yan.y.zhao@intel.com, chao.gao@intel.com, weijiang.yang@intel.com
Subject: [PATCH 3/7] KVM: TDX: vcpu_run: save/restore host state(host kernel gs)
Date: Thu, 21 Nov 2024 22:14:42 +0200
Message-ID: <20241121201448.36170-4-adrian.hunter@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241121201448.36170-1-adrian.hunter@intel.com>
References: <20241121201448.36170-1-adrian.hunter@intel.com>
Organization: Intel Finland Oy, Registered Address: PL 281, 00181 Helsinki,
 Business Identity Code: 0357606 - 4, Domiciled in Helsinki

From: Isaku Yamahata <isaku.yamahata@intel.com>

On entering/exiting a TDX vcpu, the CPU state that is preserved or clobbered
differs from the VMX case. Add TDX hooks to save/restore host/guest CPU state.
Save/restore the kernel GS base MSR.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
TD vcpu enter/exit v1:
 - Clarify comment (Binbin)
 - Use lower case "preserved" and add "the" for VMX in the log (Tony)
 - Fix bisectability issue with includes (Kai)
---
 arch/x86/kvm/vmx/main.c    | 24 ++++++++++++++++++--
 arch/x86/kvm/vmx/tdx.c     | 46 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h     |  4 ++++
 arch/x86/kvm/vmx/x86_ops.h |  4 ++++
 4 files changed, 76 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 44ec6005a448..3a8ffc199be2 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -129,6 +129,26 @@ static void vt_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	vmx_vcpu_load(vcpu, cpu);
 }
 
+static void vt_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu)) {
+		tdx_prepare_switch_to_guest(vcpu);
+		return;
+	}
+
+	vmx_prepare_switch_to_guest(vcpu);
+}
+
+static void vt_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu)) {
+		tdx_vcpu_put(vcpu);
+		return;
+	}
+
+	vmx_vcpu_put(vcpu);
+}
+
 static int vt_vcpu_pre_run(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -250,9 +270,9 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.vcpu_free = vt_vcpu_free,
 	.vcpu_reset = vt_vcpu_reset,
 
-	.prepare_switch_to_guest = vmx_prepare_switch_to_guest,
+	.prepare_switch_to_guest = vt_prepare_switch_to_guest,
 	.vcpu_load = vt_vcpu_load,
-	.vcpu_put = vmx_vcpu_put,
+	.vcpu_put = vt_vcpu_put,
 
 	.update_exception_bitmap = vmx_update_exception_bitmap,
 	.get_feature_msr = vmx_get_feature_msr,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 5fa5b65b9588..6e4ea2d420bc 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 #include
 #include
+#include
 #include
 #include "capabilities.h"
 #include "mmu.h"
@@ -9,6 +10,7 @@
 #include "vmx.h"
 #include "mmu/spte.h"
 #include "common.h"
+#include "posted_intr.h"
 #include
 #include "trace.h"
@@ -605,6 +607,9 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 	if ((kvm_tdx->xfam & XFEATURE_MASK_XTILE) == XFEATURE_MASK_XTILE)
 		vcpu->arch.xfd_no_write_intercept = true;
 
+	tdx->host_state_need_save = true;
+	tdx->host_state_need_restore = false;
+
 	tdx->state = VCPU_TD_STATE_UNINITIALIZED;
 
 	return 0;
@@ -631,6 +636,45 @@ void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	local_irq_enable();
 }
 
+/*
+ * Compared to vmx_prepare_switch_to_guest(), there is not much to do
+ * as SEAMCALL/SEAMRET calls take care of most of save and restore.
+ */
+void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	if (!tdx->host_state_need_save)
+		return;
+
+	if (likely(is_64bit_mm(current->mm)))
+		tdx->msr_host_kernel_gs_base = current->thread.gsbase;
+	else
+		tdx->msr_host_kernel_gs_base = read_msr(MSR_KERNEL_GS_BASE);
+
+	tdx->host_state_need_save = false;
+}
+
+static void tdx_prepare_switch_to_host(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	tdx->host_state_need_save = true;
+	if (!tdx->host_state_need_restore)
+		return;
+
+	++vcpu->stat.host_state_reload;
+
+	wrmsrl(MSR_KERNEL_GS_BASE, tdx->msr_host_kernel_gs_base);
+	tdx->host_state_need_restore = false;
+}
+
+void tdx_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	vmx_vcpu_pi_put(vcpu);
+	tdx_prepare_switch_to_host(vcpu);
+}
+
 void tdx_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
@@ -732,6 +776,8 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 	tdx_vcpu_enter_exit(vcpu);
 
+	tdx->host_state_need_restore = true;
+
 	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
 
 	trace_kvm_exit(vcpu, KVM_ISA_VMX);
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index ebee1049b08b..48cf0a1abfcc 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -54,6 +54,10 @@ struct vcpu_tdx {
 	u64 vp_enter_ret;
 
 	enum vcpu_tdx_state state;
+
+	bool host_state_need_save;
+	bool host_state_need_restore;
+	u64 msr_host_kernel_gs_base;
 };
 
 void tdh_vp_rd_failed(struct vcpu_tdx *tdx, char *uclass, u32 field, u64 err);
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 3d292a677b92..5bd45a720007 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -130,6 +130,8 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu);
 void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit);
+void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
+void tdx_vcpu_put(struct kvm_vcpu *vcpu);
 
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -161,6 +163,8 @@ static inline fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediat
 {
 	return EXIT_FASTPATH_NONE;
 }
+static inline void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) {}
+static inline void tdx_vcpu_put(struct kvm_vcpu *vcpu) {}
 
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }