From patchwork Tue Apr 23 22:15:18 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13640755
Reply-To: Sean Christopherson
Date: Tue, 23 Apr 2024 15:15:18 -0700
In-Reply-To: <20240423221521.2923759-1-seanjc@google.com>
References: <20240423221521.2923759-1-seanjc@google.com>
Message-ID: <20240423221521.2923759-2-seanjc@google.com>
X-Mailing-List: kvm@vger.kernel.org
X-Mailer: git-send-email 2.44.0.769.g3c40516874-goog
Subject: [PATCH 1/4] KVM: x86: Add a struct to consolidate host values, e.g. EFER, XCR0, etc...
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org

Add "struct kvm_host_values kvm_host" to hold the various host values
that KVM snapshots during initialization.  Bundling the host values into
a single struct simplifies adding new MSRs and other features with host
state/values that KVM cares about, and provides a one-stop shop.  E.g.
adding a new value requires one line, whereas tracking each value
individually often requires three: declaration, definition, and export.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/svm/sev.c          |  2 +-
 arch/x86/kvm/vmx/nested.c       |  8 +++----
 arch/x86/kvm/vmx/vmx.c          | 14 ++++++------
 arch/x86/kvm/x86.c              | 38 +++++++++++++--------------------
 arch/x86/kvm/x86.h              | 12 +++++++----
 6 files changed, 35 insertions(+), 40 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1d13e3cd1dc5..8d3940a59894 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1846,7 +1846,6 @@ struct kvm_arch_async_pf {
 };
 
 extern u32 __read_mostly kvm_nr_uret_msrs;
-extern u64 __read_mostly host_efer;
 extern bool __read_mostly allow_smaller_maxphyaddr;
 extern bool __read_mostly enable_apicv;
 extern struct kvm_x86_ops kvm_x86_ops;
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 598d78b4107f..71f1518f0ca1 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3251,7 +3251,7 @@ void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_are
 	 */
 	hostsa->xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
 	hostsa->pkru = read_pkru();
-	hostsa->xss = host_xss;
+	hostsa->xss = kvm_host.xss;
 
 	/*
 	 * If DebugSwap is enabled, debug registers are loaded but NOT saved by
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index d5b832126e34..a896df59eaad 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2422,7 +2422,7 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 	if (cpu_has_load_ia32_efer()) {
 		if (guest_efer & EFER_LMA)
 			exec_control |= VM_ENTRY_IA32E_MODE;
-		if (guest_efer != host_efer)
+		if (guest_efer != kvm_host.efer)
 			exec_control |= VM_ENTRY_LOAD_IA32_EFER;
 	}
 	vm_entry_controls_set(vmx, exec_control);
@@ -2435,7 +2435,7 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 	 * bits may be modified by vmx_set_efer() in prepare_vmcs02().
 	 */
 	exec_control = __vm_exit_controls_get(vmcs01);
-	if (cpu_has_load_ia32_efer() && guest_efer != host_efer)
+	if (cpu_has_load_ia32_efer() && guest_efer != kvm_host.efer)
 		exec_control |= VM_EXIT_LOAD_IA32_EFER;
 	else
 		exec_control &= ~VM_EXIT_LOAD_IA32_EFER;
@@ -4662,7 +4662,7 @@ static inline u64 nested_vmx_get_vmcs01_guest_efer(struct vcpu_vmx *vmx)
 		return vmcs_read64(GUEST_IA32_EFER);
 
 	if (cpu_has_load_ia32_efer())
-		return host_efer;
+		return kvm_host.efer;
 
 	for (i = 0; i < vmx->msr_autoload.guest.nr; ++i) {
 		if (vmx->msr_autoload.guest.val[i].index == MSR_EFER)
@@ -4673,7 +4673,7 @@ static inline u64 nested_vmx_get_vmcs01_guest_efer(struct vcpu_vmx *vmx)
 	if (efer_msr)
 		return efer_msr->data;
 
-	return host_efer;
+	return kvm_host.efer;
 }
 
 static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f10b5f8f364b..cb1bd9aebac4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -258,7 +258,7 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
 		return 0;
 	}
 
-	if (host_arch_capabilities & ARCH_CAP_SKIP_VMENTRY_L1DFLUSH) {
+	if (kvm_host.arch_capabilities & ARCH_CAP_SKIP_VMENTRY_L1DFLUSH) {
 		l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NOT_REQUIRED;
 		return 0;
 	}
@@ -403,7 +403,7 @@ static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 	 * and VM-Exit.
 	 */
 	vmx->disable_fb_clear = !cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) &&
-				(host_arch_capabilities & ARCH_CAP_FB_CLEAR_CTRL) &&
+				(kvm_host.arch_capabilities & ARCH_CAP_FB_CLEAR_CTRL) &&
 				!boot_cpu_has_bug(X86_BUG_MDS) &&
 				!boot_cpu_has_bug(X86_BUG_TAA);
 
@@ -1116,12 +1116,12 @@ static bool update_transition_efer(struct vcpu_vmx *vmx)
 	 * atomically, since it's faster than switching it manually.
 	 */
 	if (cpu_has_load_ia32_efer() ||
-	    (enable_ept && ((vmx->vcpu.arch.efer ^ host_efer) & EFER_NX))) {
+	    (enable_ept && ((vmx->vcpu.arch.efer ^ kvm_host.efer) & EFER_NX))) {
 		if (!(guest_efer & EFER_LMA))
 			guest_efer &= ~EFER_LME;
-		if (guest_efer != host_efer)
+		if (guest_efer != kvm_host.efer)
 			add_atomic_switch_msr(vmx, MSR_EFER,
-					      guest_efer, host_efer, false);
+					      guest_efer, kvm_host.efer, false);
 		else
 			clear_atomic_switch_msr(vmx, MSR_EFER);
 		return false;
@@ -1134,7 +1134,7 @@ static bool update_transition_efer(struct vcpu_vmx *vmx)
 		clear_atomic_switch_msr(vmx, MSR_EFER);
 
 		guest_efer &= ~ignore_bits;
-		guest_efer |= host_efer & ignore_bits;
+		guest_efer |= kvm_host.efer & ignore_bits;
 
 		vmx->guest_uret_msrs[i].data = guest_efer;
 		vmx->guest_uret_msrs[i].mask = ~ignore_bits;
@@ -4346,7 +4346,7 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
 	}
 
 	if (cpu_has_load_ia32_efer())
-		vmcs_write64(HOST_IA32_EFER, host_efer);
+		vmcs_write64(HOST_IA32_EFER, kvm_host.efer);
 }
 
 void set_cr4_guest_host_mask(struct vcpu_vmx *vmx)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e9ef1fa4b90b..1b664385461d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -98,6 +98,9 @@ struct kvm_caps kvm_caps __read_mostly = {
 };
 EXPORT_SYMBOL_GPL(kvm_caps);
 
+struct kvm_host_values kvm_host __read_mostly;
+EXPORT_SYMBOL_GPL(kvm_host);
+
 #define ERR_PTR_USR(e) ((void __user *)ERR_PTR(e))
 
 #define emul_to_vcpu(ctxt) \
@@ -227,21 +230,12 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
 				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
 				| XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)
 
-u64 __read_mostly host_efer;
-EXPORT_SYMBOL_GPL(host_efer);
-
 bool __read_mostly allow_smaller_maxphyaddr = 0;
 EXPORT_SYMBOL_GPL(allow_smaller_maxphyaddr);
 
 bool __read_mostly enable_apicv = true;
 EXPORT_SYMBOL_GPL(enable_apicv);
 
-u64 __read_mostly host_xss;
-EXPORT_SYMBOL_GPL(host_xss);
-
-u64 __read_mostly host_arch_capabilities;
-EXPORT_SYMBOL_GPL(host_arch_capabilities);
-
 const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
 	KVM_GENERIC_VM_STATS(),
 	STATS_DESC_COUNTER(VM, mmu_shadow_zapped),
@@ -315,8 +309,6 @@ const struct kvm_stats_header kvm_vcpu_stats_header = {
 		       sizeof(kvm_vcpu_stats_desc),
 };
 
-u64 __read_mostly host_xcr0;
-
 static struct kmem_cache *x86_emulator_cache;
 
 /*
@@ -1023,11 +1015,11 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
 
 	if (kvm_is_cr4_bit_set(vcpu, X86_CR4_OSXSAVE)) {
 
-		if (vcpu->arch.xcr0 != host_xcr0)
+		if (vcpu->arch.xcr0 != kvm_host.xcr0)
 			xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
 
 		if (guest_can_use(vcpu, X86_FEATURE_XSAVES) &&
-		    vcpu->arch.ia32_xss != host_xss)
+		    vcpu->arch.ia32_xss != kvm_host.xss)
 			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
 	}
 
@@ -1054,12 +1046,12 @@ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
 
 	if (kvm_is_cr4_bit_set(vcpu, X86_CR4_OSXSAVE)) {
 
-		if (vcpu->arch.xcr0 != host_xcr0)
-			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
+		if (vcpu->arch.xcr0 != kvm_host.xcr0)
+			xsetbv(XCR_XFEATURE_ENABLED_MASK, kvm_host.xcr0);
 
 		if (guest_can_use(vcpu, X86_FEATURE_XSAVES) &&
-		    vcpu->arch.ia32_xss != host_xss)
-			wrmsrl(MSR_IA32_XSS, host_xss);
+		    vcpu->arch.ia32_xss != kvm_host.xss)
+			wrmsrl(MSR_IA32_XSS, kvm_host.xss);
 	}
 }
 
@@ -1626,7 +1618,7 @@ static bool kvm_is_immutable_feature_msr(u32 msr)
 
 static u64 kvm_get_arch_capabilities(void)
 {
-	u64 data = host_arch_capabilities & KVM_SUPPORTED_ARCH_CAP;
+	u64 data = kvm_host.arch_capabilities & KVM_SUPPORTED_ARCH_CAP;
 
 	/*
 	 * If nx_huge_pages is enabled, KVM's shadow paging will ensure that
@@ -9777,19 +9769,19 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 		goto out_free_percpu;
 
 	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
-		host_xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
-		kvm_caps.supported_xcr0 = host_xcr0 & KVM_SUPPORTED_XCR0;
+		kvm_host.xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
+		kvm_caps.supported_xcr0 = kvm_host.xcr0 & KVM_SUPPORTED_XCR0;
 	}
 
-	rdmsrl_safe(MSR_EFER, &host_efer);
+	rdmsrl_safe(MSR_EFER, &kvm_host.efer);
 
 	if (boot_cpu_has(X86_FEATURE_XSAVES))
-		rdmsrl(MSR_IA32_XSS, host_xss);
+		rdmsrl(MSR_IA32_XSS, kvm_host.xss);
 
 	kvm_init_pmu_capability(ops->pmu_ops);
 
 	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, host_arch_capabilities);
+		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, kvm_host.arch_capabilities);
 
 	r = ops->hardware_setup();
 	if (r != 0)
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index d80a4c6b5a38..e69fff7d1f21 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -33,6 +33,13 @@ struct kvm_caps {
 	u64 supported_perf_cap;
 };
 
+struct kvm_host_values {
+	u64 efer;
+	u64 xcr0;
+	u64 xss;
+	u64 arch_capabilities;
+};
+
 void kvm_spurious_fault(void);
 
 #define KVM_NESTED_VMENTER_CONSISTENCY_CHECK(consistency_check)		\
@@ -325,11 +332,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 			    int emulation_type, void *insn, int insn_len);
 fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
 
-extern u64 host_xcr0;
-extern u64 host_xss;
-extern u64 host_arch_capabilities;
-
 extern struct kvm_caps kvm_caps;
+extern struct kvm_host_values kvm_host;
 
 extern bool enable_pmu;

From patchwork Tue Apr 23 22:15:19 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13640756
Reply-To: Sean Christopherson
Date: Tue, 23 Apr 2024 15:15:19 -0700
In-Reply-To: <20240423221521.2923759-1-seanjc@google.com>
References: <20240423221521.2923759-1-seanjc@google.com>
Message-ID: <20240423221521.2923759-3-seanjc@google.com>
Subject: [PATCH 2/4] KVM: SVM: Use KVM's snapshot of the host's XCR0 for SEV-ES host state
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org

Use KVM's snapshot of the host's XCR0 when stuffing SEV-ES host state
instead of reading XCR0 from hardware.  XCR0 is only written during
boot, i.e. won't change while KVM is running (and KVM at large is hosed
if that doesn't hold true).
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/sev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 71f1518f0ca1..c56070991a58 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3249,7 +3249,7 @@ void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_are
 	 * isn't saved by VMRUN, that isn't already saved by VMSAVE (performed
 	 * by common SVM code).
 	 */
-	hostsa->xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
+	hostsa->xcr0 = kvm_host.xcr0;
 	hostsa->pkru = read_pkru();
 	hostsa->xss = kvm_host.xss;

From patchwork Tue Apr 23 22:15:20 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13640757
Reply-To: Sean Christopherson
Date: Tue, 23 Apr 2024 15:15:20 -0700
In-Reply-To: <20240423221521.2923759-1-seanjc@google.com>
References: <20240423221521.2923759-1-seanjc@google.com>
Message-ID: <20240423221521.2923759-4-seanjc@google.com>
Subject: [PATCH 3/4] KVM: x86/mmu: Snapshot shadow_phys_bits when kvm.ko is loaded
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org

Snapshot shadow_phys_bits when kvm.ko is loaded, not when a vendor
module is loaded, to guard against usage of shadow_phys_bits before it
is initialized.  The computation isn't vendor specific in any way, i.e.
there is no reason to wait to snapshot the value until a vendor module
is loaded, nor is there any reason to recompute the value every time a
vendor module is loaded.

Opportunistically convert it from "read mostly" to "read-only after init".
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h      | 2 +-
 arch/x86/kvm/mmu/spte.c | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index b410a227c601..ef970aea26e7 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -61,7 +61,7 @@ static __always_inline u64 rsvd_bits(int s, int e)
  * The number of non-reserved physical address bits irrespective of features
  * that repurpose legal bits, e.g. MKTME.
  */
-extern u8 __read_mostly shadow_phys_bits;
+extern u8 __ro_after_init shadow_phys_bits;
 
 static inline gfn_t kvm_mmu_max_gfn(void)
 {
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 6c7ab3aa6aa7..927f4abbe973 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -43,7 +43,7 @@ u64 __read_mostly shadow_acc_track_mask;
 u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
 u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
 
-u8 __read_mostly shadow_phys_bits;
+u8 __ro_after_init shadow_phys_bits;
 
 void __init kvm_mmu_spte_module_init(void)
 {
@@ -55,6 +55,8 @@ void __init kvm_mmu_spte_module_init(void)
 	 * will change when the vendor module is (re)loaded.
 	 */
 	allow_mmio_caching = enable_mmio_caching;
+
+	shadow_phys_bits = kvm_get_shadow_phys_bits();
 }
 
 static u64 generation_mmio_spte_mask(u64 gen)
@@ -439,8 +441,6 @@ void kvm_mmu_reset_all_pte_masks(void)
 	u8 low_phys_bits;
 	u64 mask;
 
-	shadow_phys_bits = kvm_get_shadow_phys_bits();
-
 	/*
 	 * If the CPU has 46 or less physical address bits, then set an
 	 * appropriate mask to guard against L1TF attacks.  Otherwise, it is

From patchwork Tue Apr 23 22:15:21 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13640758
Reply-To: Sean Christopherson
Date: Tue, 23 Apr 2024 15:15:21 -0700
In-Reply-To: <20240423221521.2923759-1-seanjc@google.com>
References: <20240423221521.2923759-1-seanjc@google.com>
Message-ID: <20240423221521.2923759-5-seanjc@google.com>
Subject: [PATCH 4/4] KVM: x86: Move shadow_phys_bits into "kvm_host", as "maxphyaddr"
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org

Move shadow_phys_bits into "struct kvm_host_values", i.e. into KVM's
global "kvm_host" variable, so that it is automatically exported for use
in vendor modules.  Rename the variable/field to maxphyaddr to more
clearly capture what value it holds, now that it's used outside of the
MMU (and because the "shadow" part is more than a bit misleading as the
variable is not at all unique to shadow paging).

Recomputing the raw/true host.MAXPHYADDR on every use can be subtly
expensive, e.g. it will incur a VM-Exit on the CPUID if KVM is running
as a nested hypervisor.  Vendor code already has access to the
information, e.g. by directly doing CPUID or by invoking
kvm_get_shadow_phys_bits(), so there's no tangible benefit to making it
MMU-only.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h      | 27 +--------------------------
 arch/x86/kvm/mmu/mmu.c  |  2 +-
 arch/x86/kvm/mmu/spte.c | 24 +++++++++++++++++++++---
 arch/x86/kvm/vmx/vmx.c  | 14 ++++++--------
 arch/x86/kvm/vmx/vmx.h  |  2 +-
 arch/x86/kvm/x86.h      |  7 +++++++
 6 files changed, 37 insertions(+), 39 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index ef970aea26e7..0d63637f46d7 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -57,12 +57,6 @@ static __always_inline u64 rsvd_bits(int s, int e)
 	return ((2ULL << (e - s)) - 1) << s;
 }
 
-/*
- * The number of non-reserved physical address bits irrespective of features
- * that repurpose legal bits, e.g. MKTME.
- */
-extern u8 __ro_after_init shadow_phys_bits;
-
 static inline gfn_t kvm_mmu_max_gfn(void)
 {
 	/*
@@ -76,30 +70,11 @@ static inline gfn_t kvm_mmu_max_gfn(void)
 	 * than hardware's real MAXPHYADDR.  Using the host MAXPHYADDR
 	 * disallows such SPTEs entirely and simplifies the TDP MMU.
 	 */
-	int max_gpa_bits = likely(tdp_enabled) ? shadow_phys_bits : 52;
+	int max_gpa_bits = likely(tdp_enabled) ? kvm_host.maxphyaddr : 52;
 
 	return (1ULL << (max_gpa_bits - PAGE_SHIFT)) - 1;
 }
 
-static inline u8 kvm_get_shadow_phys_bits(void)
-{
-	/*
-	 * boot_cpu_data.x86_phys_bits is reduced when MKTME or SME are detected
-	 * in CPU detection code, but the processor treats those reduced bits as
-	 * 'keyID' thus they are not reserved bits.  Therefore KVM needs to look at
-	 * the physical address bits reported by CPUID.
-	 */
-	if (likely(boot_cpu_data.extended_cpuid_level >= 0x80000008))
-		return cpuid_eax(0x80000008) & 0xff;
-
-	/*
-	 * Quite weird to have VMX or SVM but not MAXPHYADDR; probably a VM with
-	 * custom CPUID.  Proceed with whatever the kernel found since these features
-	 * aren't virtualizable (SME/SEV also require CPUIDs higher than 0x80000008).
-	 */
-	return boot_cpu_data.x86_phys_bits;
-}
-
 u8 kvm_mmu_get_max_tdp_level(void);
 
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 12ad01929dce..c30bffa441cf 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4933,7 +4933,7 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
 
 static inline u64 reserved_hpa_bits(void)
 {
-	return rsvd_bits(shadow_phys_bits, 63);
+	return rsvd_bits(kvm_host.maxphyaddr, 63);
 }
 
 /*
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 927f4abbe973..d49a3f928b0b 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -43,7 +43,25 @@ u64 __read_mostly shadow_acc_track_mask;
 u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
 u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
 
-u8 __ro_after_init shadow_phys_bits;
+static u8 __init kvm_get_host_maxphyaddr(void)
+{
+	/*
+	 * boot_cpu_data.x86_phys_bits is reduced when MKTME or SME are detected
+	 * in CPU detection code, but the processor treats those reduced bits as
+	 * 'keyID' thus they are not reserved bits.  Therefore KVM needs to look at
+	 * the physical address bits reported by CPUID, i.e. the raw MAXPHYADDR,
+	 * when reasoning about CPU behavior with respect to MAXPHYADDR.
+	 */
+	if (likely(boot_cpu_data.extended_cpuid_level >= 0x80000008))
+		return cpuid_eax(0x80000008) & 0xff;
+
+	/*
+	 * Quite weird to have VMX or SVM but not MAXPHYADDR; probably a VM with
+	 * custom CPUID.  Proceed with whatever the kernel found since these features
+	 * aren't virtualizable (SME/SEV also require CPUIDs higher than 0x80000008).
+	 */
+	return boot_cpu_data.x86_phys_bits;
+}
 
 void __init kvm_mmu_spte_module_init(void)
 {
@@ -56,7 +74,7 @@ void __init kvm_mmu_spte_module_init(void)
 	 */
 	allow_mmio_caching = enable_mmio_caching;
 
-	shadow_phys_bits = kvm_get_shadow_phys_bits();
+	kvm_host.maxphyaddr = kvm_get_host_maxphyaddr();
 }
 
 static u64 generation_mmio_spte_mask(u64 gen)
@@ -492,7 +510,7 @@ void kvm_mmu_reset_all_pte_masks(void)
 	 * 52-bit physical addresses then there are no reserved PA bits in the
 	 * PTEs and so the reserved PA approach must be disabled.
 	 */
-	if (shadow_phys_bits < 52)
+	if (kvm_host.maxphyaddr < 52)
 		mask = BIT_ULL(51) | PT_PRESENT_MASK;
 	else
 		mask = 0;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cb1bd9aebac4..185b07bbbc16 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8337,18 +8337,16 @@ static void __init vmx_setup_me_spte_mask(void)
 	u64 me_mask = 0;
 
 	/*
-	 * kvm_get_shadow_phys_bits() returns shadow_phys_bits.  Use
-	 * the former to avoid exposing shadow_phys_bits.
-	 *
 	 * On pre-MKTME system, boot_cpu_data.x86_phys_bits equals to
-	 * shadow_phys_bits.  On MKTME and/or TDX capable systems,
+	 * kvm_host.maxphyaddr.  On MKTME and/or TDX capable systems,
 	 * boot_cpu_data.x86_phys_bits holds the actual physical address
-	 * w/o the KeyID bits, and shadow_phys_bits equals to MAXPHYADDR
-	 * reported by CPUID.  Those bits between are KeyID bits.
+	 * w/o the KeyID bits, and kvm_host.maxphyaddr equals to
+	 * MAXPHYADDR reported by CPUID.  Those bits between are KeyID bits.
 	 */
-	if (boot_cpu_data.x86_phys_bits != kvm_get_shadow_phys_bits())
+	if (boot_cpu_data.x86_phys_bits != kvm_host.maxphyaddr)
 		me_mask = rsvd_bits(boot_cpu_data.x86_phys_bits,
-				    kvm_get_shadow_phys_bits() - 1);
+				    kvm_host.maxphyaddr - 1);
+
 	/*
 	 * Unlike SME, host kernel doesn't support setting up any
 	 * MKTME KeyID on Intel platforms.
No memory encryption diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 90f9e4434646..e7343023fbce 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -723,7 +723,7 @@ static inline bool vmx_need_pf_intercept(struct kvm_vcpu *vcpu) return true; return allow_smaller_maxphyaddr && - cpuid_maxphyaddr(vcpu) < kvm_get_shadow_phys_bits(); + cpuid_maxphyaddr(vcpu) < kvm_host.maxphyaddr; } static inline bool is_unrestricted_guest(struct kvm_vcpu *vcpu) diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index e69fff7d1f21..a88c65d3ea26 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -34,6 +34,13 @@ struct kvm_caps { }; struct kvm_host_values { + /* + * The host's raw MAXPHYADDR, i.e. the number of non-reserved physical + * address bits irrespective of features that repurpose legal bits, + * e.g. MKTME. + */ + u8 maxphyaddr; + u64 efer; u64 xcr0; u64 xss;