
[v2,13/54] perf: core/x86: Forbid PMI handler when guest owns PMU

Message ID 20240506053020.3911940-14-mizhang@google.com (mailing list archive)
State New, archived
Series Mediated Passthrough vPMU 2.0 for x86

Commit Message

Mingwei Zhang May 6, 2024, 5:29 a.m. UTC
If a guest PMI is delivered after VM-exit, the KVM maskable interrupt will
be held pending until EFLAGS.IF is set. In the meantime, if the logical
processor receives an NMI for any reason at all, perf_event_nmi_handler()
will be invoked. If there is any active perf event anywhere on the system,
x86_pmu_handle_irq() will be invoked, and it will clear
IA32_PERF_GLOBAL_STATUS. By the time KVM's PMI handler is invoked, it will
be a mystery which counter(s) overflowed.

When the LVTPC is programmed with the KVM PMI vector, the PMU is owned by
the guest. A host NMI then causes x86_pmu_handle_irq() to run, which
restores the PMU vector to NMI and clears IA32_PERF_GLOBAL_STATUS,
breaking the guest vPMU passthrough environment.

So modify perf_event_nmi_handler() to check perf_is_guest_context_loaded(),
and if it returns true, simply return without calling x86_pmu_handle_irq().
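
For context, a minimal sketch of the ordering the above relies on;
enter_guest() is a placeholder, and the exact call sites are an assumption
based on the functions named in this series:

    /* Hypothetical KVM-side ordering; illustrative only. */
    x86_perf_guest_enter();     /* perf_in_guest = true; host events stopped */
    /* window 1: an NMI here must not reach x86_pmu_handle_irq() */
    enter_guest();              /* placeholder for VM entry; guest owns the PMU */
    /* a non-PMI host NMI while the guest runs forces a VM exit (window 2) */
    /* window 3: an NMI after VM exit must not clear IA32_PERF_GLOBAL_STATUS */
    kvm_pmu_save_pmu_context(); /* guest counters and GLOBAL_STATUS saved */
    x86_perf_guest_exit();      /* perf_in_guest = false; host PMU restored */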

Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
 arch/x86/events/core.c     | 19 ++++++++++++++++++-
 include/linux/perf_event.h |  5 +++++
 kernel/events/core.c       |  5 +++++
 3 files changed, 28 insertions(+), 1 deletion(-)

Comments

Peter Zijlstra May 7, 2024, 9:33 a.m. UTC | #1
On Mon, May 06, 2024 at 05:29:38AM +0000, Mingwei Zhang wrote:

> @@ -1749,6 +1749,23 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
>  	u64 finish_clock;
>  	int ret;
>  
> +	/*
> +	 * When the guest PMU context is loaded, this handler must not run:
> +	 * 1. After x86_perf_guest_enter() is called, and before the CPU
> +	 *    enters non-root mode, an NMI could arrive; x86_pmu_handle_irq()
> +	 *    would restore the PMU to the NMI vector, destroying the KVM
> +	 *    PMI vector setting.
> +	 * 2. While the VM is running, a host NMI other than a PMI causes a
> +	 *    VM exit; KVM calls the host NMI handler (vmx_vcpu_enter_exit())
> +	 *    before saving the guest PMU context (kvm_pmu_save_pmu_context()),
> +	 *    so x86_pmu_handle_irq() would clear the global_status MSR,
> +	 *    which holds guest state at that point.
> +	 * 3. After VM exit, but before KVM saves the guest PMU context, a host
> +	 *    NMI other than a PMI could arrive; again, x86_pmu_handle_irq()
> +	 *    would clear the global_status MSR and destroy the guest PMU state.
> +	 */
> +	if (perf_is_guest_context_loaded())
> +		return 0;

A function call makes sense because? Also, isn't this naming at least a
very little misleading? Specifically this is about passthrough, not
guest context per se.

>  	/*
>  	 * All PMUs/events that share this PMI handler should make sure to
>  	 * increment active_events for their events.
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index acf16676401a..5da7de42954e 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -1736,6 +1736,7 @@ extern int perf_get_mediated_pmu(void);
>  extern void perf_put_mediated_pmu(void);
>  void perf_guest_enter(void);
>  void perf_guest_exit(void);
> +bool perf_is_guest_context_loaded(void);
>  #else /* !CONFIG_PERF_EVENTS: */
>  static inline void *
>  perf_aux_output_begin(struct perf_output_handle *handle,
> @@ -1830,6 +1831,10 @@ static inline int perf_get_mediated_pmu(void)
>  static inline void perf_put_mediated_pmu(void)			{ }
>  static inline void perf_guest_enter(void)			{ }
>  static inline void perf_guest_exit(void)			{ }
> +static inline bool perf_is_guest_context_loaded(void)
> +{
> +	return false;
> +}
>  #endif
>  
>  #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 4c6daf5cc923..184d06c23391 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5895,6 +5895,11 @@ void perf_guest_exit(void)
>  	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
>  }
>  
> +bool perf_is_guest_context_loaded(void)
> +{
> +	return __this_cpu_read(perf_in_guest);
> +}
> +
>  /*
>   * Holding the top-level event's child_mutex means that any
>   * descendant process that has inherited this event will block
> -- 
> 2.45.0.rc1.225.g2a3ae87e7f-goog
>
Xiong Zhang May 9, 2024, 7:39 a.m. UTC | #2
On 5/7/2024 5:33 PM, Peter Zijlstra wrote:
> On Mon, May 06, 2024 at 05:29:38AM +0000, Mingwei Zhang wrote:
> 
>> @@ -1749,6 +1749,23 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
>>  	u64 finish_clock;
>>  	int ret;
>>  
>> +	/*
>> +	 * When the guest PMU context is loaded, this handler must not run:
>> +	 * 1. After x86_perf_guest_enter() is called, and before the CPU
>> +	 *    enters non-root mode, an NMI could arrive; x86_pmu_handle_irq()
>> +	 *    would restore the PMU to the NMI vector, destroying the KVM
>> +	 *    PMI vector setting.
>> +	 * 2. While the VM is running, a host NMI other than a PMI causes a
>> +	 *    VM exit; KVM calls the host NMI handler (vmx_vcpu_enter_exit())
>> +	 *    before saving the guest PMU context (kvm_pmu_save_pmu_context()),
>> +	 *    so x86_pmu_handle_irq() would clear the global_status MSR,
>> +	 *    which holds guest state at that point.
>> +	 * 3. After VM exit, but before KVM saves the guest PMU context, a host
>> +	 *    NMI other than a PMI could arrive; again, x86_pmu_handle_irq()
>> +	 *    would clear the global_status MSR and destroy the guest PMU state.
>> +	 */
>> +	if (perf_is_guest_context_loaded())
>> +		return 0;
> 
> A function call makes sense because? Also, isn't this naming at least a
> very little misleading? Specifically this is about passthrough, not
> guest context per se.

The purpose of the function call is to re-use the per-cpu variable defined
in perf core; otherwise another per-cpu variable would have to be defined
in arch/x86/events/core.c. Whether a function call or a per-cpu variable
is used depends on the interface between perf and KVM.
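
A minimal sketch of that alternative, assuming perf_in_guest were exported
from perf core directly (the export and the direct read below are
illustrative, not part of this series):

    /* include/linux/perf_event.h -- hypothetical declaration */
    DECLARE_PER_CPU(bool, perf_in_guest);

    /* kernel/events/core.c -- hypothetical export of the existing flag */
    EXPORT_PER_CPU_SYMBOL_GPL(perf_in_guest);

    /* arch/x86/events/core.c, perf_event_nmi_handler() */
    if (__this_cpu_read(perf_in_guest))	/* direct read, no function call */
        return 0;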
> 
>>  	/*
>>  	 * All PMUs/events that share this PMI handler should make sure to
>>  	 * increment active_events for their events.
>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>> index acf16676401a..5da7de42954e 100644
>> --- a/include/linux/perf_event.h
>> +++ b/include/linux/perf_event.h
>> @@ -1736,6 +1736,7 @@ extern int perf_get_mediated_pmu(void);
>>  extern void perf_put_mediated_pmu(void);
>>  void perf_guest_enter(void);
>>  void perf_guest_exit(void);
>> +bool perf_is_guest_context_loaded(void);
>>  #else /* !CONFIG_PERF_EVENTS: */
>>  static inline void *
>>  perf_aux_output_begin(struct perf_output_handle *handle,
>> @@ -1830,6 +1831,10 @@ static inline int perf_get_mediated_pmu(void)
>>  static inline void perf_put_mediated_pmu(void)			{ }
>>  static inline void perf_guest_enter(void)			{ }
>>  static inline void perf_guest_exit(void)			{ }
>> +static inline bool perf_is_guest_context_loaded(void)
>> +{
>> +	return false;
>> +}
>>  #endif
>>  
>>  #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 4c6daf5cc923..184d06c23391 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -5895,6 +5895,11 @@ void perf_guest_exit(void)
>>  	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
>>  }
>>  
>> +bool perf_is_guest_context_loaded(void)
>> +{
>> +	return __this_cpu_read(perf_in_guest);
>> +}
>> +
>>  /*
>>   * Holding the top-level event's child_mutex means that any
>>   * descendant process that has inherited this event will block
>> -- 
>> 2.45.0.rc1.225.g2a3ae87e7f-goog
>>
>

Patch

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 8167f2230d3a..c0f6e294fcad 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -726,7 +726,7 @@  EXPORT_SYMBOL_GPL(x86_perf_guest_exit);
  * It will not be re-enabled in the NMI handler again, because enabled=0. After
  * handling the NMI, disable_all will be called, which will not change the
  * state either. If PMI hits after disable_all, the PMU is already disabled
- * before entering NMI handler. The NMI handler will no	change the state
+ * before entering NMI handler. The NMI handler will not change the state
  * either.
  *
  * So either situation is harmless.
@@ -1749,6 +1749,23 @@  perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
 	u64 finish_clock;
 	int ret;
 
+	/*
+	 * When the guest PMU context is loaded, this handler must not run:
+	 * 1. After x86_perf_guest_enter() is called, and before the CPU
+	 *    enters non-root mode, an NMI could arrive; x86_pmu_handle_irq()
+	 *    would restore the PMU to the NMI vector, destroying the KVM
+	 *    PMI vector setting.
+	 * 2. While the VM is running, a host NMI other than a PMI causes a
+	 *    VM exit; KVM calls the host NMI handler (vmx_vcpu_enter_exit())
+	 *    before saving the guest PMU context (kvm_pmu_save_pmu_context()),
+	 *    so x86_pmu_handle_irq() would clear the global_status MSR,
+	 *    which holds guest state at that point.
+	 * 3. After VM exit, but before KVM saves the guest PMU context, a host
+	 *    NMI other than a PMI could arrive; again, x86_pmu_handle_irq()
+	 *    would clear the global_status MSR and destroy the guest PMU state.
+	 */
+	if (perf_is_guest_context_loaded())
+		return 0;
 	/*
 	 * All PMUs/events that share this PMI handler should make sure to
 	 * increment active_events for their events.
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index acf16676401a..5da7de42954e 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1736,6 +1736,7 @@  extern int perf_get_mediated_pmu(void);
 extern void perf_put_mediated_pmu(void);
 void perf_guest_enter(void);
 void perf_guest_exit(void);
+bool perf_is_guest_context_loaded(void);
 #else /* !CONFIG_PERF_EVENTS: */
 static inline void *
 perf_aux_output_begin(struct perf_output_handle *handle,
@@ -1830,6 +1831,10 @@  static inline int perf_get_mediated_pmu(void)
 static inline void perf_put_mediated_pmu(void)			{ }
 static inline void perf_guest_enter(void)			{ }
 static inline void perf_guest_exit(void)			{ }
+static inline bool perf_is_guest_context_loaded(void)
+{
+	return false;
+}
 #endif
 
 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4c6daf5cc923..184d06c23391 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5895,6 +5895,11 @@  void perf_guest_exit(void)
 	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
 }
 
+bool perf_is_guest_context_loaded(void)
+{
+	return __this_cpu_read(perf_in_guest);
+}
+
 /*
  * Holding the top-level event's child_mutex means that any
  * descendant process that has inherited this event will block