
[RFC,36/41] KVM: x86/pmu: Intercept FIXED_CTR_CTRL MSR

Message ID 20240126085444.324918-37-xiong.y.zhang@linux.intel.com (mailing list archive)
State New, archived
Series KVM: x86/pmu: Introduce passthrough vPMU

Commit Message

Xiong Zhang Jan. 26, 2024, 8:54 a.m. UTC
From: Xiong Zhang <xiong.y.zhang@intel.com>

The fixed counter control MSR is still intercepted for security
purposes, i.e., to prevent the guest from using a disallowed fixed
counter to steal information or take advantage of any CPU errata.
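Since IA32_FIXED_CTR_CTRL remains intercepted, guest writes keep going
through KVM's emulated WRMSR path, where the value can be validated
against the set of fixed counters actually exposed to the guest. The
fragment below is only an illustrative sketch of that kind of check and
is not code from this series (which merely keeps the existing intercept
in place); the helper name and signature are hypothetical.

	/*
	 * Illustrative sketch only: reject a guest write to
	 * IA32_FIXED_CTR_CTRL that enables a fixed counter beyond what
	 * the guest owns. Helper name and signature are hypothetical.
	 */
	static bool fixed_ctr_ctrl_is_valid(u64 data, int nr_guest_fixed_counters)
	{
		/*
		 * Each fixed counter owns a 4-bit control field in
		 * IA32_FIXED_CTR_CTRL, starting at bit 0 for fixed counter 0.
		 */
		u64 allowed = (1ULL << (4 * nr_guest_fixed_counters)) - 1;

		/* Any remaining set bit targets a counter the guest must not touch. */
		return !(data & ~allowed);
	}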

Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
---
 arch/x86/kvm/vmx/pmu_intel.c | 1 -
 arch/x86/kvm/vmx/vmx.c       | 1 -
 2 files changed, 2 deletions(-)

Comments

Sean Christopherson April 11, 2024, 9:56 p.m. UTC | #1
On Fri, Jan 26, 2024, Xiong Zhang wrote:
> From: Xiong Zhang <xiong.y.zhang@intel.com>
> 
> The fixed counter control MSR is still intercepted for security
> purposes, i.e., to prevent the guest from using a disallowed fixed
> counter to steal information or take advantage of any CPU errata.

Same comments as earlier patches.  Don't introduce bugs and then immediately fix
said bugs.

Patch

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 92c5baed8d36..713c2a7c7f07 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -825,7 +825,6 @@ void intel_passthrough_pmu_msrs(struct kvm_vcpu *vcpu)
 			vmx_set_intercept_for_msr(vcpu, MSR_IA32_PMC0 + i, MSR_TYPE_RW, false);
 	}
 
-	vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_TYPE_RW, false);
 	for (i = 0; i < vcpu_to_pmu(vcpu)->nr_arch_fixed_counters; i++)
 		vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_TYPE_RW, false);
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1a518800d154..7c4e1feb589b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -700,7 +700,6 @@ static bool is_valid_passthrough_msr(u32 msr)
 		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
 	case MSR_IA32_PMC0 ... MSR_IA32_PMC0 + 7:
 	case MSR_IA32_PERFCTR0 ... MSR_IA32_PERFCTR0 + 7:
-	case MSR_CORE_PERF_FIXED_CTR_CTRL:
 	case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR0 + 2:
 	case MSR_CORE_PERF_GLOBAL_STATUS:
 	case MSR_CORE_PERF_GLOBAL_CTRL: