Message ID | 20190409192217.23459-8-andrew.murray@arm.com (mailing list archive)
---|---
State | New, archived |
Series | arm64: Support perf event modifiers :G and :H
On 09/04/2019 20:22, Andrew Murray wrote:
> Upon entering or exiting a guest we may modify multiple PMU counters to
> enable or disable EL0 filtering. We presently do this via the indirect
> PMXEVTYPER_EL0 system register (where the counter we modify is selected
> by PMSELR). With this approach it is necessary to order the writes via
> isb instructions such that we select the correct counter before modifying
> it.
>
> Let's avoid potentially expensive instruction barriers by using the
> direct PMEVTYPER<n>_EL0 registers instead.
>
> As the change to counter type relates only to EL0 filtering we can rely
> on the implicit instruction barrier which occurs when we transition from
> EL2 to EL1 on entering the guest. On returning to userspace we can, at the
> latest, rely on the implicit barrier between EL2 and EL0. We can also
> depend on the explicit isb in armv8pmu_select_counter to order our write
> against any other kernel changes by the PMU driver to the type register as
> a result of preemption.
>
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>  arch/arm64/kvm/pmu.c | 84 ++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 74 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
> index 3407a529e116..4d55193ccc71 100644
> --- a/arch/arm64/kvm/pmu.c
> +++ b/arch/arm64/kvm/pmu.c
> @@ -91,6 +91,74 @@ void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
>  	write_sysreg(pmu->events_host, pmcntenset_el0);
>  }
>
> +#define PMEVTYPER_READ_CASE(idx)			\
> +	case idx:					\
> +		return read_sysreg(pmevtyper##idx##_el0)
> +
> +#define PMEVTYPER_WRITE_CASE(idx)			\
> +	case idx:					\
> +		write_sysreg(val, pmevtyper##idx##_el0);	\
> +		break
> +
> +#define PMEVTYPER_CASES(readwrite)			\
> +	PMEVTYPER_##readwrite##_CASE(0);		\
> +	PMEVTYPER_##readwrite##_CASE(1);		\
> +	PMEVTYPER_##readwrite##_CASE(2);		\
> +	PMEVTYPER_##readwrite##_CASE(3);		\
> +	PMEVTYPER_##readwrite##_CASE(4);		\
> +	PMEVTYPER_##readwrite##_CASE(5);		\
> +	PMEVTYPER_##readwrite##_CASE(6);		\
> +	PMEVTYPER_##readwrite##_CASE(7);		\
> +	PMEVTYPER_##readwrite##_CASE(8);		\
> +	PMEVTYPER_##readwrite##_CASE(9);		\
> +	PMEVTYPER_##readwrite##_CASE(10);		\
> +	PMEVTYPER_##readwrite##_CASE(11);		\
> +	PMEVTYPER_##readwrite##_CASE(12);		\
> +	PMEVTYPER_##readwrite##_CASE(13);		\
> +	PMEVTYPER_##readwrite##_CASE(14);		\
> +	PMEVTYPER_##readwrite##_CASE(15);		\
> +	PMEVTYPER_##readwrite##_CASE(16);		\
> +	PMEVTYPER_##readwrite##_CASE(17);		\
> +	PMEVTYPER_##readwrite##_CASE(18);		\
> +	PMEVTYPER_##readwrite##_CASE(19);		\
> +	PMEVTYPER_##readwrite##_CASE(20);		\
> +	PMEVTYPER_##readwrite##_CASE(21);		\
> +	PMEVTYPER_##readwrite##_CASE(22);		\
> +	PMEVTYPER_##readwrite##_CASE(23);		\
> +	PMEVTYPER_##readwrite##_CASE(24);		\
> +	PMEVTYPER_##readwrite##_CASE(25);		\
> +	PMEVTYPER_##readwrite##_CASE(26);		\
> +	PMEVTYPER_##readwrite##_CASE(27);		\
> +	PMEVTYPER_##readwrite##_CASE(28);		\
> +	PMEVTYPER_##readwrite##_CASE(29);		\
> +	PMEVTYPER_##readwrite##_CASE(30)
> +

Don't we need case 31 and deal with the PMCCFILTR, instead of WARN_ON(1)
below ?

Otherwise,

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
On Mon, Apr 15, 2019 at 02:49:41PM +0100, Suzuki K Poulose wrote:
> On 09/04/2019 20:22, Andrew Murray wrote:
> > Upon entering or exiting a guest we may modify multiple PMU counters to
> > enable or disable EL0 filtering. We presently do this via the indirect
> > PMXEVTYPER_EL0 system register (where the counter we modify is selected
> > by PMSELR). With this approach it is necessary to order the writes via
> > isb instructions such that we select the correct counter before modifying
> > it.
>
> [...]
>
> > +	PMEVTYPER_##readwrite##_CASE(28);		\
> > +	PMEVTYPER_##readwrite##_CASE(29);		\
> > +	PMEVTYPER_##readwrite##_CASE(30)
> > +
>
> Don't we need case 31 and deal with the PMCCFILTR, instead of WARN_ON(1)
> below ?
>
> Otherwise,
>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

Yes we do - I'll add this, thanks for spotting.

Thanks,

Andrew Murray
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 3407a529e116..4d55193ccc71 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -91,6 +91,74 @@ void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 	write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
+#define PMEVTYPER_READ_CASE(idx)			\
+	case idx:					\
+		return read_sysreg(pmevtyper##idx##_el0)
+
+#define PMEVTYPER_WRITE_CASE(idx)			\
+	case idx:					\
+		write_sysreg(val, pmevtyper##idx##_el0);	\
+		break
+
+#define PMEVTYPER_CASES(readwrite)			\
+	PMEVTYPER_##readwrite##_CASE(0);		\
+	PMEVTYPER_##readwrite##_CASE(1);		\
+	PMEVTYPER_##readwrite##_CASE(2);		\
+	PMEVTYPER_##readwrite##_CASE(3);		\
+	PMEVTYPER_##readwrite##_CASE(4);		\
+	PMEVTYPER_##readwrite##_CASE(5);		\
+	PMEVTYPER_##readwrite##_CASE(6);		\
+	PMEVTYPER_##readwrite##_CASE(7);		\
+	PMEVTYPER_##readwrite##_CASE(8);		\
+	PMEVTYPER_##readwrite##_CASE(9);		\
+	PMEVTYPER_##readwrite##_CASE(10);		\
+	PMEVTYPER_##readwrite##_CASE(11);		\
+	PMEVTYPER_##readwrite##_CASE(12);		\
+	PMEVTYPER_##readwrite##_CASE(13);		\
+	PMEVTYPER_##readwrite##_CASE(14);		\
+	PMEVTYPER_##readwrite##_CASE(15);		\
+	PMEVTYPER_##readwrite##_CASE(16);		\
+	PMEVTYPER_##readwrite##_CASE(17);		\
+	PMEVTYPER_##readwrite##_CASE(18);		\
+	PMEVTYPER_##readwrite##_CASE(19);		\
+	PMEVTYPER_##readwrite##_CASE(20);		\
+	PMEVTYPER_##readwrite##_CASE(21);		\
+	PMEVTYPER_##readwrite##_CASE(22);		\
+	PMEVTYPER_##readwrite##_CASE(23);		\
+	PMEVTYPER_##readwrite##_CASE(24);		\
+	PMEVTYPER_##readwrite##_CASE(25);		\
+	PMEVTYPER_##readwrite##_CASE(26);		\
+	PMEVTYPER_##readwrite##_CASE(27);		\
+	PMEVTYPER_##readwrite##_CASE(28);		\
+	PMEVTYPER_##readwrite##_CASE(29);		\
+	PMEVTYPER_##readwrite##_CASE(30)
+
+/*
+ * Read a value direct from PMEVTYPER<idx>
+ */
+static u64 kvm_vcpu_pmu_read_evtype_direct(int idx)
+{
+	switch (idx) {
+	PMEVTYPER_CASES(READ);
+	default:
+		WARN_ON(1);
+	}
+
+	return 0;
+}
+
+/*
+ * Write a value direct to PMEVTYPER<idx>
+ */
+static void kvm_vcpu_pmu_write_evtype_direct(int idx, u32 val)
+{
+	switch (idx) {
+	PMEVTYPER_CASES(WRITE);
+	default:
+		WARN_ON(1);
+	}
+}
+
 /*
  * Modify ARMv8 PMU events to include EL0 counting
  */
@@ -100,11 +168,9 @@ static void kvm_vcpu_pmu_enable_el0(unsigned long events)
 	u32 counter;
 
 	for_each_set_bit(counter, &events, 32) {
-		write_sysreg(counter, pmselr_el0);
-		isb();
-		typer = read_sysreg(pmxevtyper_el0) & ~ARMV8_PMU_EXCLUDE_EL0;
-		write_sysreg(typer, pmxevtyper_el0);
-		isb();
+		typer = kvm_vcpu_pmu_read_evtype_direct(counter);
+		typer &= ~ARMV8_PMU_EXCLUDE_EL0;
+		kvm_vcpu_pmu_write_evtype_direct(counter, typer);
 	}
 }
 
@@ -117,11 +183,9 @@ static void kvm_vcpu_pmu_disable_el0(unsigned long events)
 	u32 counter;
 
 	for_each_set_bit(counter, &events, 32) {
-		write_sysreg(counter, pmselr_el0);
-		isb();
-		typer = read_sysreg(pmxevtyper_el0) | ARMV8_PMU_EXCLUDE_EL0;
-		write_sysreg(typer, pmxevtyper_el0);
-		isb();
+		typer = kvm_vcpu_pmu_read_evtype_direct(counter);
+		typer |= ARMV8_PMU_EXCLUDE_EL0;
+		kvm_vcpu_pmu_write_evtype_direct(counter, typer);
 	}
 }
Upon entering or exiting a guest we may modify multiple PMU counters to
enable or disable EL0 filtering. We presently do this via the indirect
PMXEVTYPER_EL0 system register (where the counter we modify is selected
by PMSELR). With this approach it is necessary to order the writes via
isb instructions such that we select the correct counter before modifying
it.

Let's avoid potentially expensive instruction barriers by using the
direct PMEVTYPER<n>_EL0 registers instead.

As the change to counter type relates only to EL0 filtering we can rely
on the implicit instruction barrier which occurs when we transition from
EL2 to EL1 on entering the guest. On returning to userspace we can, at the
latest, rely on the implicit barrier between EL2 and EL0. We can also
depend on the explicit isb in armv8pmu_select_counter to order our write
against any other kernel changes by the PMU driver to the type register as
a result of preemption.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 arch/arm64/kvm/pmu.c | 84 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 74 insertions(+), 10 deletions(-)