Message ID | 20240823191354.4141950-2-seanjc@google.com (mailing list archive)
---|---
State | New, archived
Series | KVM: Coalesced IO cleanup and test
On Fri, 2024-08-23 at 12:13 -0700, Sean Christopherson wrote:
> Add a test to verify that KVM correctly exits (or not) when a vCPU's
> coalesced I/O ring is full (or isn't). Iterate over all legal starting
> points in the ring (with an empty ring), and verify that KVM doesn't exit
> until the ring is full.
>
> Opportunistically verify that KVM exits immediately on non-coalesced I/O,
> either because the MMIO/PIO region was never registered, or because a
> previous region was unregistered.
>
> This is a regression test for a KVM bug where KVM would prematurely exit
> due to bad math resulting in a false positive if the first entry in the
> ring was before the halfway mark. See commit 92f6d4130497 ("KVM: Fix
> coalesced_mmio_has_room() to avoid premature userspace exit").
>
> Enable the test for x86, arm64, and risc-v, i.e. all architectures except
> s390, which doesn't have MMIO.
>
> On x86, which has both MMIO and PIO, interleave MMIO and PIO into the same
> ring, as KVM shouldn't exit until a non-coalesced I/O is encountered,
> regardless of whether the ring is filled with MMIO, PIO, or both.

I guess there is some overlap between this patch and the one I proposed
here:
https://lore.kernel.org/kvm/20240820133333.1724191-7-ilstam@amazon.com/

Even though my test depends on the new ioctls that the series introduces.
However, both the existing v1 API and the new proposed v2 API take the same
code path internally when trying to push to the ring buffer.
> Cc: Ilias Stamatis <ilstam@amazon.com>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Oliver Upton <oliver.upton@linux.dev>
> Cc: Anup Patel <anup@brainfault.org>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  tools/testing/selftests/kvm/Makefile          |   3 +
>  .../testing/selftests/kvm/coalesced_io_test.c | 202 ++++++++++++++++++
>  .../testing/selftests/kvm/include/kvm_util.h  |  26 +++
>  3 files changed, 231 insertions(+)
>  create mode 100644 tools/testing/selftests/kvm/coalesced_io_test.c
>
> [...]
>
> +		    "For start = %u, expected exit on %u-byte %s write 0x%llx = %u, go exit_reason = %u (%s)\n "

s/go/got

> [...]

Reviewed-by: Ilias Stamatis <ilstam@amazon.com>
On Tue, Aug 27, 2024, Ilias Stamatis wrote:
> I guess there is some overlap between this patch and the one I proposed
> here:
> https://lore.kernel.org/kvm/20240820133333.1724191-7-ilstam@amazon.com/

/facepalm

> Even though my test depends on the new ioctls that the series
> introduces. However, both the existing v1 API and the new proposed v2
> API take the same code path internally when trying to push to the ring
> buffer.

Sorry, I completely spaced on the fact that you had already written a
selftest. I'll take a closer look at your selftest and see how best to
smush the two together.
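For readers following the discussion of the ring-push path: the wraparound index math that commit 92f6d4130497 fixed, and that this test exercises by iterating over every legal starting value of `first`, can be sketched standalone. This is an illustration of the modular arithmetic only, not the kernel's actual code; `RING_MAX` is a stand-in for the page-size-derived maximum.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative ring-index math (NOT the kernel's implementation).
 * The ring holds at most RING_MAX - 1 entries: it is "full" when
 * advancing 'last' by one would collide with 'first'.
 */
#define RING_MAX 8u

static int ring_is_full(uint32_t first, uint32_t last)
{
	return (last + 1) % RING_MAX == first;
}

static uint32_t ring_free_entries(uint32_t first, uint32_t last)
{
	/* Unsigned modular arithmetic wraps correctly for any legal 'first'. */
	return (first + RING_MAX - last - 1) % RING_MAX;
}
```

A check that is only correct for some starting offsets of `first` is exactly the class of bug the test hunts for, which is why it restarts the ring at every index rather than only at zero.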
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 48d32c5aa3eb..45cb70c048bb 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -130,6 +130,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
 TEST_GEN_PROGS_x86_64 += x86_64/triple_fault_event_test
 TEST_GEN_PROGS_x86_64 += x86_64/recalc_apic_map_test
 TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
+TEST_GEN_PROGS_x86_64 += coalesced_io_test
 TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += dirty_log_perf_test
@@ -165,6 +166,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/vgic_lpi_stress
 TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += arch_timer
+TEST_GEN_PROGS_aarch64 += coalesced_io_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += dirty_log_perf_test
@@ -198,6 +200,7 @@ TEST_GEN_PROGS_s390x += kvm_binary_stats_test
 TEST_GEN_PROGS_riscv += riscv/sbi_pmu_test
 TEST_GEN_PROGS_riscv += riscv/ebreak_test
 TEST_GEN_PROGS_riscv += arch_timer
+TEST_GEN_PROGS_riscv += coalesced_io_test
 TEST_GEN_PROGS_riscv += demand_paging_test
 TEST_GEN_PROGS_riscv += dirty_log_test
 TEST_GEN_PROGS_riscv += get-reg-list
diff --git a/tools/testing/selftests/kvm/coalesced_io_test.c b/tools/testing/selftests/kvm/coalesced_io_test.c
new file mode 100644
index 000000000000..3d591af63ef0
--- /dev/null
+++ b/tools/testing/selftests/kvm/coalesced_io_test.c
@@ -0,0 +1,202 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include <linux/sizes.h>
+
+#include <kvm_util.h>
+#include <processor.h>
+
+#include "ucall_common.h"
+
+#define MMIO_GPA	(4ull * SZ_1G)
+#define PIO_PORT	0x80
+
+/*
+ * KVM's ABI uses the kernel's PAGE_SIZE, thus userspace must query the host's
+ * page size at runtime to compute the real max.
+ */
+static int REAL_KVM_COALESCED_MMIO_MAX;
+static int coalesced_mmio_ring_offset;
+
+#ifdef __x86_64__
+static const int has_pio = 1;
+#else
+static const int has_pio = 0;
+#endif
+
+static void guest_code(void)
+{
+	int i, j;
+
+	for (;;) {
+		for (j = 0; j < 1 + has_pio; j++) {
+			/*
+			 * KVM always leaves one free entry, i.e. exits to
+			 * userspace before the last entry is filled.
+			 */
+			for (i = 0; i < REAL_KVM_COALESCED_MMIO_MAX - 1; i++) {
+#ifdef __x86_64__
+				if (i & 1)
+					outl(PIO_PORT, i);
+				else
+#endif
+					WRITE_ONCE(*((uint64_t *)MMIO_GPA), i);
+			}
+#ifdef __x86_64__
+			if (j & 1)
+				outl(PIO_PORT, i);
+			else
+#endif
+				WRITE_ONCE(*((uint64_t *)MMIO_GPA), i);
+		}
+		GUEST_SYNC(0);
+
+		WRITE_ONCE(*((uint64_t *)MMIO_GPA), i);
+#ifdef __x86_64__
+		outl(PIO_PORT, i);
+#endif
+	}
+}
+
+static void vcpu_run_and_verify_io_exit(struct kvm_vcpu *vcpu,
+					uint32_t ring_start,
+					uint32_t expected_exit)
+{
+	const bool want_pio = expected_exit == KVM_EXIT_IO;
+	struct kvm_coalesced_mmio_ring *ring;
+	struct kvm_run *run = vcpu->run;
+
+	ring = (void *)run + coalesced_mmio_ring_offset;
+
+	WRITE_ONCE(ring->first, ring_start);
+	WRITE_ONCE(ring->last, ring_start);
+	vcpu_run(vcpu);
+
+	TEST_ASSERT((!want_pio && (run->exit_reason == KVM_EXIT_MMIO && run->mmio.is_write &&
+				   run->mmio.phys_addr == MMIO_GPA && run->mmio.len == 8 &&
+				   *(uint64_t *)run->mmio.data == REAL_KVM_COALESCED_MMIO_MAX - 1)) ||
+		    (want_pio && (run->exit_reason == KVM_EXIT_IO && run->io.port == PIO_PORT &&
+				  run->io.direction == KVM_EXIT_IO_OUT && run->io.count == 1 &&
+				  *(uint32_t *)((void *)run + run->io.data_offset) == REAL_KVM_COALESCED_MMIO_MAX - 1)),
+		    "For start = %u, expected exit on %u-byte %s write 0x%llx = %u, go exit_reason = %u (%s)\n "
+		    "(MMIO addr = 0x%llx, write = %u, len = %u, data = %lu)\n "
+		    "(PIO port = 0x%x, write = %u, len = %u, count = %u, data = %u",
+		    ring_start, want_pio ? 4 : 8, want_pio ? "PIO" : "MMIO",
+		    want_pio ? (unsigned long long)PIO_PORT : MMIO_GPA,
+		    REAL_KVM_COALESCED_MMIO_MAX - 1, run->exit_reason,
+		    run->exit_reason == KVM_EXIT_MMIO ? "MMIO" :
+		    run->exit_reason == KVM_EXIT_IO ? "PIO" : "other",
+		    run->mmio.phys_addr, run->mmio.is_write, run->mmio.len, *(uint64_t *)run->mmio.data,
+		    run->io.port, run->io.direction, run->io.size, run->io.count,
+		    *(uint32_t *)((void *)run + run->io.data_offset));
+}
+
+static void vcpu_run_and_verify_coalesced_io(struct kvm_vcpu *vcpu,
+					     uint32_t ring_start,
+					     uint32_t expected_exit)
+{
+	struct kvm_coalesced_mmio_ring *ring;
+	struct kvm_run *run = vcpu->run;
+	int i;
+
+	vcpu_run_and_verify_io_exit(vcpu, ring_start, expected_exit);
+
+	ring = (void *)run + coalesced_mmio_ring_offset;
+	TEST_ASSERT((ring->last + 1) % REAL_KVM_COALESCED_MMIO_MAX == ring->first,
+		    "Expected ring to be full (minus 1), first = %u, last = %u, max = %u, start = %u",
+		    ring->first, ring->last, REAL_KVM_COALESCED_MMIO_MAX, ring_start);
+
+	for (i = 0; i < REAL_KVM_COALESCED_MMIO_MAX - 1; i++) {
+		uint32_t idx = (ring->first + i) % REAL_KVM_COALESCED_MMIO_MAX;
+		struct kvm_coalesced_mmio *io = &ring->coalesced_mmio[idx];
+
+#ifdef __x86_64__
+		if (i & 1)
+			TEST_ASSERT(io->phys_addr == PIO_PORT &&
+				    io->len == 4 && io->pio,
+				    "Wanted 4-byte port I/O to 0x%x in entry %u, got %u-byte %s to 0x%llx",
+				    PIO_PORT, i, io->len, io->pio ? "PIO" : "MMIO", io->phys_addr);
+		else
+#endif
+			TEST_ASSERT(io->phys_addr == MMIO_GPA &&
+				    io->len == 8 && !io->pio,
+				    "Wanted 8-byte MMIO to 0x%llx in entry %u, got %u-byte %s to 0x%llx",
+				    MMIO_GPA, i, io->len, io->pio ? "PIO" : "MMIO", io->phys_addr);
+
+		TEST_ASSERT_EQ(*(uint64_t *)io->data, i);
+	}
+}
+
+static void test_coalesced_io(struct kvm_vcpu *vcpu, uint32_t ring_start)
+{
+	struct kvm_run *run = vcpu->run;
+	struct kvm_coalesced_mmio_ring *ring;
+
+	kvm_vm_register_coalesced_io(vcpu->vm, MMIO_GPA, 8, false /* pio */);
+#ifdef __x86_64__
+	kvm_vm_register_coalesced_io(vcpu->vm, PIO_PORT, 8, true /* pio */);
+#endif
+
+	vcpu_run_and_verify_coalesced_io(vcpu, ring_start, KVM_EXIT_MMIO);
+#ifdef __x86_64__
+	vcpu_run_and_verify_coalesced_io(vcpu, ring_start, KVM_EXIT_IO);
+#endif
+
+	/*
+	 * Verify ucall, which may use non-coalesced MMIO or PIO, generates an
+	 * immediate exit.
+	 */
+	ring = (void *)run + coalesced_mmio_ring_offset;
+	WRITE_ONCE(ring->first, ring_start);
+	WRITE_ONCE(ring->last, ring_start);
+	vcpu_run(vcpu);
+	TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_SYNC);
+	TEST_ASSERT_EQ(ring->first, ring_start);
+	TEST_ASSERT_EQ(ring->last, ring_start);
+
+	/* Verify that non-coalesced MMIO/PIO generates an exit to userspace. */
+	kvm_vm_unregister_coalesced_io(vcpu->vm, MMIO_GPA, 8, false /* pio */);
+	vcpu_run_and_verify_io_exit(vcpu, ring_start, KVM_EXIT_MMIO);
+
+#ifdef __x86_64__
+	kvm_vm_unregister_coalesced_io(vcpu->vm, PIO_PORT, 8, true /* pio */);
+	vcpu_run_and_verify_io_exit(vcpu, ring_start, KVM_EXIT_IO);
+#endif
+}
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	int i;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_COALESCED_MMIO));
+
+#ifdef __x86_64__
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_COALESCED_PIO));
+#endif
+
+	/*
+	 * The I/O ring is a kernel-allocated page whose address is relative
+	 * to each vCPU's run page, with the page offset provided by KVM in
+	 * the return of KVM_CAP_COALESCED_MMIO.
+	 */
+	coalesced_mmio_ring_offset = (kvm_check_cap(KVM_CAP_COALESCED_MMIO) * getpagesize());
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+	virt_map(vm, MMIO_GPA, MMIO_GPA, 1);
+
+	REAL_KVM_COALESCED_MMIO_MAX = (getpagesize() - sizeof(struct kvm_coalesced_mmio_ring)) /
+				      sizeof(struct kvm_coalesced_mmio);
+	sync_global_to_guest(vm, REAL_KVM_COALESCED_MMIO_MAX);
+
+	for (i = 0; i < REAL_KVM_COALESCED_MMIO_MAX; i++)
+		test_coalesced_io(vcpu, i);
+
+	kvm_vm_free(vm);
+	return 0;
+}
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 63c2aaae51f3..b297a84f7be5 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -460,6 +460,32 @@ static inline uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm)
 	return __vm_ioctl(vm, KVM_RESET_DIRTY_RINGS, NULL);
 }
 
+static inline void kvm_vm_register_coalesced_io(struct kvm_vm *vm,
+						uint64_t address,
+						uint64_t size, bool pio)
+{
+	struct kvm_coalesced_mmio_zone zone = {
+		.addr = address,
+		.size = size,
+		.pio = pio,
+	};
+
+	vm_ioctl(vm, KVM_REGISTER_COALESCED_MMIO, &zone);
+}
+
+static inline void kvm_vm_unregister_coalesced_io(struct kvm_vm *vm,
+						  uint64_t address,
+						  uint64_t size, bool pio)
+{
+	struct kvm_coalesced_mmio_zone zone = {
+		.addr = address,
+		.size = size,
+		.pio = pio,
+	};
+
+	vm_ioctl(vm, KVM_UNREGISTER_COALESCED_MMIO, &zone);
+}
+
 static inline int vm_get_stats_fd(struct kvm_vm *vm)
 {
 	int fd = __vm_ioctl(vm, KVM_GET_STATS_FD, NULL);
Add a test to verify that KVM correctly exits (or not) when a vCPU's
coalesced I/O ring is full (or isn't). Iterate over all legal starting
points in the ring (with an empty ring), and verify that KVM doesn't exit
until the ring is full.

Opportunistically verify that KVM exits immediately on non-coalesced I/O,
either because the MMIO/PIO region was never registered, or because a
previous region was unregistered.

This is a regression test for a KVM bug where KVM would prematurely exit
due to bad math resulting in a false positive if the first entry in the
ring was before the halfway mark. See commit 92f6d4130497 ("KVM: Fix
coalesced_mmio_has_room() to avoid premature userspace exit").

Enable the test for x86, arm64, and risc-v, i.e. all architectures except
s390, which doesn't have MMIO.

On x86, which has both MMIO and PIO, interleave MMIO and PIO into the same
ring, as KVM shouldn't exit until a non-coalesced I/O is encountered,
regardless of whether the ring is filled with MMIO, PIO, or both.

Cc: Ilias Stamatis <ilstam@amazon.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: Anup Patel <anup@brainfault.org>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   3 +
 .../testing/selftests/kvm/coalesced_io_test.c | 202 ++++++++++++++++++
 .../testing/selftests/kvm/include/kvm_util.h  |  26 +++
 3 files changed, 231 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/coalesced_io_test.c
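For context on the ring the test manipulates: after KVM_RUN returns, userspace is responsible for consuming coalesced entries and advancing `first`, while KVM advances `last`. A minimal drain loop might look like the sketch below; the struct layouts mirror the <linux/kvm.h> UAPI, but drain_coalesced_ring() and its `handle` callback are hypothetical helpers for illustration only.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Layouts mirroring the <linux/kvm.h> UAPI, reproduced for a
 * self-contained sketch. */
struct kvm_coalesced_mmio {
	uint64_t phys_addr;
	uint32_t len;
	uint32_t pio;	/* a union of 'pad' and 'pio' in the real header */
	uint8_t  data[8];
};

struct kvm_coalesced_mmio_ring {
	uint32_t first;
	uint32_t last;
	struct kvm_coalesced_mmio coalesced_mmio[];
};

/* Hypothetical helper: process all pending entries and return how many
 * were consumed.  'max' is the runtime-computed ring capacity. */
static int drain_coalesced_ring(struct kvm_coalesced_mmio_ring *ring, int max,
				void (*handle)(struct kvm_coalesced_mmio *io))
{
	int n = 0;

	while (ring->first != ring->last) {
		if (handle)
			handle(&ring->coalesced_mmio[ring->first]);
		/* Only userspace advances 'first'; KVM advances 'last'. */
		ring->first = (ring->first + 1) % max;
		n++;
	}
	return n;
}
```

This split of ownership is why the test can safely reset both `first` and `last` from userspace between runs: the vCPU is not executing at that point, so nobody else is touching the ring.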