Message ID | 20240126085444.324918-8-xiong.y.zhang@linux.intel.com (mailing list archive)
---|---
State | New, archived
Series | KVM: x86/pmu: Introduce passthrough vPMU
On Fri, Jan 26, 2024, Xiong Zhang wrote:
> From: Xiong Zhang <xiong.y.zhang@intel.com>
>
> When guest clear LVTPC_MASK bit in guest PMI handler at PMU passthrough
> mode, this bit should be reflected onto HW, otherwise HW couldn't generate
> PMI again during VM running until it is cleared.

This fixes a bug in the previous patch, i.e. this should not be a standalone
patch.

>
> This commit set HW LVTPC_MASK bit at PMU vecctor switching to KVM PMI
> vector.
>
> Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
> Signed-off-by: Mingwei Zhang <mizhang@google.com>
> ---
>  arch/x86/events/core.c            | 9 +++++++--
>  arch/x86/include/asm/perf_event.h | 2 +-
>  arch/x86/kvm/lapic.h              | 1 -
>  3 files changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 3f87894d8c8e..ece042cfb470 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -709,13 +709,18 @@ void perf_guest_switch_to_host_pmi_vector(void)
>  }
>  EXPORT_SYMBOL_GPL(perf_guest_switch_to_host_pmi_vector);
>
> -void perf_guest_switch_to_kvm_pmi_vector(void)
> +void perf_guest_switch_to_kvm_pmi_vector(bool mask)
>  {
>  	lockdep_assert_irqs_disabled();
>
> -	apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR);
> +	if (mask)
> +		apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR |
> +			   APIC_LVT_MASKED);
> +	else
> +		apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR);
>  }

Or more simply:

	void perf_guest_enter(u32 guest_lvtpc)
	{
		...

		apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR |
			   (guest_lvtpc & APIC_LVT_MASKED));
	}

and then on the KVM side:

	perf_guest_enter(kvm_lapic_get_reg(vcpu->arch.apic, APIC_LVTPC));

because an in-kernel APIC should be a hard requirement for the mediated PMU.
On 4/12/2024 3:21 AM, Sean Christopherson wrote:
> On Fri, Jan 26, 2024, Xiong Zhang wrote:
>> From: Xiong Zhang <xiong.y.zhang@intel.com>
>>
>> When guest clear LVTPC_MASK bit in guest PMI handler at PMU passthrough
>> mode, this bit should be reflected onto HW, otherwise HW couldn't generate
>> PMI again during VM running until it is cleared.
>
> This fixes a bug in the previous patch, i.e. this should not be a standalone
> patch.
>
>>
>> This commit set HW LVTPC_MASK bit at PMU vecctor switching to KVM PMI
>> vector.
>>
>> Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
>> Signed-off-by: Mingwei Zhang <mizhang@google.com>
>> ---
>>  arch/x86/events/core.c            | 9 +++++++--
>>  arch/x86/include/asm/perf_event.h | 2 +-
>>  arch/x86/kvm/lapic.h              | 1 -
>>  3 files changed, 8 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
>> index 3f87894d8c8e..ece042cfb470 100644
>> --- a/arch/x86/events/core.c
>> +++ b/arch/x86/events/core.c
>> @@ -709,13 +709,18 @@ void perf_guest_switch_to_host_pmi_vector(void)
>>  }
>>  EXPORT_SYMBOL_GPL(perf_guest_switch_to_host_pmi_vector);
>>
>> -void perf_guest_switch_to_kvm_pmi_vector(void)
>> +void perf_guest_switch_to_kvm_pmi_vector(bool mask)
>>  {
>>  	lockdep_assert_irqs_disabled();
>>
>> -	apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR);
>> +	if (mask)
>> +		apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR |
>> +			   APIC_LVT_MASKED);
>> +	else
>> +		apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR);
>>  }
>
> Or more simply:
>
> 	void perf_guest_enter(u32 guest_lvtpc)
> 	{
> 		...
>
> 		apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR |
> 			   (guest_lvtpc & APIC_LVT_MASKED));
> 	}
>
> and then on the KVM side:
>
> 	perf_guest_enter(kvm_lapic_get_reg(vcpu->arch.apic, APIC_LVTPC));
>
> because an in-kernel APIC should be a hard requirement for the mediated PMU.
>
this is simpler and we will follow this. thanks
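[Editorial note: a minimal sketch of the direction agreed above, assuming perf_guest_enter() becomes the perf-side entry point as in Sean's example; the function name and the elided host-PMU save/disable step are assumptions, not the final patch.]

	/* perf side, e.g. arch/x86/events/core.c -- illustrative only */
	void perf_guest_enter(u32 guest_lvtpc)
	{
		lockdep_assert_irqs_disabled();

		/* ... save and disable host PMU state here ... */

		/*
		 * Switch LVTPC to the KVM PMI vector and carry over only the
		 * guest's mask bit, so that a PMI unmasked by the guest's
		 * PMI handler is also unmasked in hardware.
		 */
		apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR |
			   (guest_lvtpc & APIC_LVT_MASKED));
	}

	/* KVM side call site, relying on the in-kernel local APIC */
	perf_guest_enter(kvm_lapic_get_reg(vcpu->arch.apic, APIC_LVTPC));

Passing the guest's LVTPC value directly avoids the bool parameter and keeps the mask-bit propagation in one place.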
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 3f87894d8c8e..ece042cfb470 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -709,13 +709,18 @@ void perf_guest_switch_to_host_pmi_vector(void)
 }
 EXPORT_SYMBOL_GPL(perf_guest_switch_to_host_pmi_vector);
 
-void perf_guest_switch_to_kvm_pmi_vector(void)
+void perf_guest_switch_to_kvm_pmi_vector(bool mask)
 {
 	lockdep_assert_irqs_disabled();
 
-	apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR);
+	if (mask)
+		apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR |
+			   APIC_LVT_MASKED);
+	else
+		apic_write(APIC_LVTPC, APIC_DM_FIXED | KVM_VPMU_VECTOR);
 }
 EXPORT_SYMBOL_GPL(perf_guest_switch_to_kvm_pmi_vector);
+
 /*
  * There may be PMI landing after enabled=0. The PMI hitting could be before or
  * after disable_all.
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 021ab362a061..180d63ba2f46 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -574,7 +574,7 @@ static inline void perf_check_microcode(void) { }
 #endif
 
 extern void perf_guest_switch_to_host_pmi_vector(void);
-extern void perf_guest_switch_to_kvm_pmi_vector(void);
+extern void perf_guest_switch_to_kvm_pmi_vector(bool mask);
 
 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
 extern struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr, void *data);
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 0a0ea4b5dd8c..e30641d5ac90 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -277,5 +277,4 @@ static inline u8 kvm_xapic_id(struct kvm_lapic *apic)
 {
 	return kvm_lapic_get_reg(apic, APIC_ID) >> 24;
 }
-
 #endif