| Message ID | 20230215010717.3612794-15-rananta@google.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Add support for vPMU selftests |
Hi Raghu,

On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> Extend the vCPU migration test to occupy all the vPMU counters,
> by configuring chained events on alternate counter-ids, chaining
> them with their corresponding predecessor counters, and verifying
> against the extended behavior.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../testing/selftests/kvm/aarch64/vpmu_test.c | 60 +++++++++++++++++++
>  1 file changed, 60 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> index de725f4339ad5..fd00acb9391c8 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> @@ -710,6 +710,63 @@ static void test_chained_count(int pmc_idx)
>  	pmu_irq_exit(chained_pmc_idx);
>  }
>
> +static void test_chain_all_counters(void)
> +{
> +	int i;
> +	uint64_t cnt, pmcr_n = get_pmcr_n();
> +	struct pmc_accessor *acc = &pmc_accessors[0];

How do you decide whether to test with all accessors?
Perhaps it might be simpler and more consistent if we implement
each test case with one specified accessor as an argument, and
run those tests with each accessor.

> +
> +	/*
> +	 * Test the occupancy of all the event counters, by chaining the
> +	 * alternate counters. The test assumes that the host hasn't
> +	 * occupied any counters. Hence, if the test fails, it could be
> +	 * because all the counters weren't available to the guest or
> +	 * there's actually a bug in KVM.
> +	 */
> +
> +	/*
> +	 * Configure even numbered counters to count cpu-cycles, and chain
> +	 * each of them with its odd numbered counter.
> +	 */

You might want to use the cycle counter as well?

> +	for (i = 0; i < pmcr_n; i++) {
> +		if (i % 2) {
> +			acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CHAIN);
> +			acc->write_cntr(i, 1);
> +		} else {
> +			pmu_irq_init(i);
> +			acc->write_cntr(i, PRE_OVERFLOW_32);
> +			acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
> +		}
> +		enable_counter(i);
> +	}
> +
> +	/* Introduce some cycles */
> +	execute_precise_instrs(500, ARMV8_PMU_PMCR_E);
> +
> +	/*
> +	 * An overflow interrupt should've arrived for all the even numbered
> +	 * counters but none for the odd numbered ones. The odd numbered ones
> +	 * should've incremented exactly by 1.
> +	 */
> +	for (i = 0; i < pmcr_n; i++) {
> +		if (i % 2) {
> +			GUEST_ASSERT_1(!pmu_irq_received(i), i);
> +
> +			cnt = acc->read_cntr(i);
> +			GUEST_ASSERT_2(cnt == 2, i, cnt);
> +		} else {
> +			GUEST_ASSERT_1(pmu_irq_received(i), i);
> +		}
> +	}
> +
> +	/* Cleanup the states */
> +	for (i = 0; i < pmcr_n; i++) {
> +		if (i % 2 == 0)
> +			pmu_irq_exit(i);
> +		disable_counter(i);
> +	}
> +}
> +
>  static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
>  {
>  	switch (event) {
> @@ -739,6 +796,9 @@ static void test_basic_pmu_functionality(void)
>
>  	/* Test chained events */
>  	test_chained_count(0);
> +
> +	/* Test running chained events on all the implemented counters */
> +	test_chain_all_counters();
>  }
>
>  /*
> --
> 2.39.1.581.gbfd45094c4-goog
>

Thank you,
Reiji