From patchwork Mon Feb 13 18:02:22 2023
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Date: Mon, 13 Feb 2023 18:02:22 +0000
Message-ID: <20230213180234.2885032-2-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 01/13] selftests: KVM: aarch64: Rename vpmu_counter_access.c to vpmu_test.c

The upcoming patches will add more vPMU-related tests to the file.
Hence, rename it to be more generic.

Signed-off-by: Raghavendra Rao Ananta
---
 tools/testing/selftests/kvm/Makefile                                   | 2 +-
 .../selftests/kvm/aarch64/{vpmu_counter_access.c => vpmu_test.c}       | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
 rename tools/testing/selftests/kvm/aarch64/{vpmu_counter_access.c => vpmu_test.c} (99%)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index b27fea0ce5918..a4d262e139b18 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -143,7 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
-TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_test
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
similarity index 99%
rename from tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
rename to tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 453f0dd240f44..581be0c463ad1 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * vpmu_counter_access - Test vPMU event counter access
+ * vpmu_test - Test the vPMU
  *
  * Copyright (c) 2022 Google LLC.
  *
From patchwork Mon Feb 13 18:02:23 2023
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Date: Mon, 13 Feb 2023 18:02:23 +0000
Message-ID: <20230213180234.2885032-3-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 02/13] selftests: KVM: aarch64: Refactor the vPMU counter access tests

Refactor the existing counter access tests into their own independent
functions, and make the test-running machinery generic, to make way
for the upcoming tests.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 140 ++++++++++++------
 1 file changed, 98 insertions(+), 42 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 581be0c463ad1..d72c3c9b9c39f 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -147,6 +147,11 @@ static inline void disable_counter(int idx)
     isb();
 }
 
+static inline uint64_t get_pmcr_n(void)
+{
+    return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+}
+
 /*
  * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}_EL0
  * accessors that test cases will use. Each of the accessors will
@@ -183,6 +188,23 @@ struct pmc_accessor pmc_accessors[] = {
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
 
+struct vpmu_vm {
+    struct kvm_vm *vm;
+    struct kvm_vcpu *vcpu;
+    int gic_fd;
+};
+
+enum test_stage {
+    TEST_STAGE_COUNTER_ACCESS = 1,
+};
+
+struct guest_data {
+    enum test_stage test_stage;
+    uint64_t expected_pmcr_n;
+};
+
+static struct guest_data guest_data;
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
     uint64_t esr, ec;
@@ -295,7 +317,7 @@ static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
         write_sysreg(test_bit, pmovsset_el0);
 
         /* The bit will be set only if the counter is implemented */
-        pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+        pmcr_n = get_pmcr_n();
         set_expected = (pmc_idx < pmcr_n) ? true : false;
     } else {
         write_sysreg(test_bit, pmcntenclr_el0);
@@ -424,15 +446,14 @@ static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
  * if reading/writing PMU registers for implemented or unimplemented
  * counters can work as expected.
  */
-static void guest_code(uint64_t expected_pmcr_n)
+static void guest_counter_access_test(uint64_t expected_pmcr_n)
 {
-    uint64_t pmcr, pmcr_n, unimp_mask;
+    uint64_t pmcr_n, unimp_mask;
     int i, pmc;
 
     GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
 
-    pmcr = read_sysreg(pmcr_el0);
-    pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
+    pmcr_n = get_pmcr_n();
 
     /* Make sure that PMCR_EL0.N indicates the value userspace set */
     GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
@@ -462,6 +483,18 @@ static void guest_code(uint64_t expected_pmcr_n)
         for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
             test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
     }
+}
+
+static void guest_code(void)
+{
+    switch (guest_data.test_stage) {
+    case TEST_STAGE_COUNTER_ACCESS:
+        guest_counter_access_test(guest_data.expected_pmcr_n);
+        break;
+    default:
+        GUEST_ASSERT_1(0, guest_data.test_stage);
+    }
+
+    GUEST_DONE();
 }
 
@@ -469,14 +502,14 @@ static void guest_code(uint64_t expected_pmcr_n)
 #define GICD_BASE_GPA 0x8000000ULL
 #define GICR_BASE_GPA 0x80A0000ULL
 
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
-                                     int *gic_fd)
+static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 {
     struct kvm_vm *vm;
     struct kvm_vcpu *vcpu;
     struct kvm_vcpu_init init;
     uint8_t pmuver, ec;
     uint64_t dfr0, irq = 23;
+    struct vpmu_vm *vpmu_vm;
     struct kvm_device_attr irq_attr = {
         .group = KVM_ARM_VCPU_PMU_V3_CTRL,
         .attr = KVM_ARM_VCPU_PMU_V3_IRQ,
     };
     struct kvm_device_attr init_attr = {
         .group = KVM_ARM_VCPU_PMU_V3_CTRL,
         .attr = KVM_ARM_VCPU_PMU_V3_INIT,
     };
 
@@ -487,7 +520,10 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
-    vm = vm_create(1);
+    vpmu_vm = calloc(1, sizeof(*vpmu_vm));
+    TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm");
+
+    vpmu_vm->vm = vm = vm_create(1);
     vm_init_descriptor_tables(vm);
     /* Catch exceptions for easier debugging */
     for (ec = 0; ec < ESR_EC_NUM; ec++) {
@@ -498,9 +534,9 @@
     /* Create vCPU with PMUv3 */
     vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
     init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-    vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+    vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
     vcpu_init_descriptor_tables(vcpu);
-    *gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+    vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
     /* Make sure that PMUv3 support is indicated in the ID register */
     vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
@@ -513,15 +549,21 @@
     vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
     vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
-    *vcpup = vcpu;
-    return vm;
+    return vpmu_vm;
+}
+
+static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
+{
+    close(vpmu_vm->gic_fd);
+    kvm_vm_free(vpmu_vm->vm);
+    free(vpmu_vm);
 }
 
-static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+static void run_vcpu(struct kvm_vcpu *vcpu)
 {
     struct ucall uc;
 
-    vcpu_args_set(vcpu, 1, pmcr_n);
+    sync_global_to_guest(vcpu->vm, guest_data);
     vcpu_run(vcpu);
     switch (get_ucall(vcpu, &uc)) {
     case UCALL_ABORT:
@@ -539,16 +581,18 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
  * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
  * and run the test.
  */
-static void run_test(uint64_t pmcr_n)
+static void run_counter_access_test(uint64_t pmcr_n)
 {
-    struct kvm_vm *vm;
+    struct vpmu_vm *vpmu_vm;
     struct kvm_vcpu *vcpu;
-    int gic_fd;
     uint64_t sp, pmcr, pmcr_orig;
     struct kvm_vcpu_init init;
 
+    guest_data.expected_pmcr_n = pmcr_n;
+
     pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-    vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+    vpmu_vm = create_vpmu_vm(guest_code);
+    vcpu = vpmu_vm->vcpu;
 
     /* Save the initial sp to restore them later to run the guest again */
     vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
@@ -559,23 +603,22 @@ static void run_test(uint64_t pmcr_n)
     pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
     vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
 
-    run_vcpu(vcpu, pmcr_n);
+    run_vcpu(vcpu);
 
     /*
      * Reset and re-initialize the vCPU, and run the guest code again to
      * check if PMCR_EL0.N is preserved.
      */
-    vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+    vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init);
     init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
     aarch64_vcpu_setup(vcpu, &init);
     vcpu_init_descriptor_tables(vcpu);
     vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
     vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
 
-    run_vcpu(vcpu, pmcr_n);
+    run_vcpu(vcpu);
 
-    close(gic_fd);
-    kvm_vm_free(vm);
+    destroy_vpmu_vm(vpmu_vm);
 }
 
 /*
@@ -583,15 +626,18 @@
  * the vCPU to @pmcr_n, which is larger than the host value.
  * The attempt should fail as @pmcr_n is too big to set for the vCPU.
  */
-static void run_error_test(uint64_t pmcr_n)
+static void run_counter_access_error_test(uint64_t pmcr_n)
 {
-    struct kvm_vm *vm;
+    struct vpmu_vm *vpmu_vm;
     struct kvm_vcpu *vcpu;
-    int gic_fd, ret;
+    int ret;
     uint64_t pmcr, pmcr_orig;
 
+    guest_data.expected_pmcr_n = pmcr_n;
+
     pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-    vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+    vpmu_vm = create_vpmu_vm(guest_code);
+    vcpu = vpmu_vm->vcpu;
 
     /* Update the PMCR_EL0.N with @pmcr_n */
     vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
@@ -603,8 +649,25 @@
     TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
                 pmcr, pmcr_orig);
 
-    close(gic_fd);
-    kvm_vm_free(vm);
+    destroy_vpmu_vm(vpmu_vm);
+}
+
+static void run_counter_access_tests(uint64_t pmcr_n)
+{
+    uint64_t i;
+
+    guest_data.test_stage = TEST_STAGE_COUNTER_ACCESS;
+
+    for (i = 0; i <= pmcr_n; i++)
+        run_counter_access_test(i);
+
+    for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
+        run_counter_access_error_test(i);
+}
+
+static void run_tests(uint64_t pmcr_n)
+{
+    run_counter_access_tests(pmcr_n);
 }
 
 /*
@@ -613,30 +676,23 @@
  */
 static uint64_t get_pmcr_n_limit(void)
 {
-    struct kvm_vm *vm;
-    struct kvm_vcpu *vcpu;
-    int gic_fd;
+    struct vpmu_vm *vpmu_vm;
     uint64_t pmcr;
 
-    vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
-    vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
-    close(gic_fd);
-    kvm_vm_free(vm);
+    vpmu_vm = create_vpmu_vm(guest_code);
+    vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+    destroy_vpmu_vm(vpmu_vm);
 
     return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
 
 int main(void)
 {
-    uint64_t i, pmcr_n;
+    uint64_t pmcr_n;
 
     TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
 
     pmcr_n = get_pmcr_n_limit();
-    for (i = 0; i <= pmcr_n; i++)
-        run_test(i);
-
-    for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
-        run_error_test(i);
+    run_tests(pmcr_n);
 
     return 0;
 }
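The heart of this refactor is replacing vcpu_args_set() with a global that is synced into guest memory before each run. For reference, a minimal sketch of that pattern, assuming the standard KVM selftests helpers from kvm_util.h (sync_global_to_guest(), GUEST_ASSERT(), GUEST_DONE()); the struct and function names below are hypothetical, not from the patch:

#include <kvm_util.h>

struct guest_data_sketch {
    int test_stage;                 /* which guest-side test to run */
    uint64_t expected_pmcr_n;
};

/* Linked into both the host binary and the guest image */
static struct guest_data_sketch guest_data_sketch;

static void guest_code_sketch(void)
{
    /* The guest reads the values the host synced below */
    GUEST_ASSERT(guest_data_sketch.expected_pmcr_n <= 31);
    GUEST_DONE();
}

static void host_run_one_stage(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
{
    guest_data_sketch.test_stage = 1;       /* example stage */
    guest_data_sketch.expected_pmcr_n = 6;  /* example value */

    /* Copy the host's copy of the global into guest memory */
    sync_global_to_guest(vm, guest_data_sketch);
    vcpu_run(vcpu);
}

Compared to vcpu_args_set(), this lets later patches pass arbitrarily structured parameters (filter bitmaps, iteration counts) without changing the guest entry point's signature.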
From patchwork Mon Feb 13 18:02:24 2023
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Date: Mon, 13 Feb 2023 18:02:24 +0000
Message-ID: <20230213180234.2885032-4-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 03/13] tools: arm64: perf_event: Define Cycle counter enable/overflow bits

Add the definitions of ARMV8_PMU_CNTOVS_C (cycle counter overflow bit)
for the overflow status registers and ARMV8_PMU_CNTENSET_C (cycle
counter enable bit) for the PMCNTENSET_EL0 register.

Signed-off-by: Raghavendra Rao Ananta
---
 tools/arch/arm64/include/asm/perf_event.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
index 97e49a4d4969f..8ce23aabf6fe6 100644
--- a/tools/arch/arm64/include/asm/perf_event.h
+++ b/tools/arch/arm64/include/asm/perf_event.h
@@ -222,9 +222,11 @@
 /*
  * PMOVSR: counters overflow flag status reg
  */
+#define ARMV8_PMU_CNTOVS_C      (1 << 31) /* Cycle counter overflow bit */
 #define ARMV8_PMU_OVSR_MASK         0xffffffff  /* Mask for writable bits */
 #define ARMV8_PMU_OVERFLOWED_MASK   ARMV8_PMU_OVSR_MASK
 
+
 /*
  * PMXEVTYPER: Event selection reg
  */
@@ -247,6 +249,11 @@
 #define ARMV8_PMU_USERENR_CR    (1 << 2) /* Cycle counter can be read at EL0 */
 #define ARMV8_PMU_USERENR_ER    (1 << 3) /* Event counter can be read at EL0 */
 
+/*
+ * PMCNTENSET: Count Enable set reg
+ */
+#define ARMV8_PMU_CNTENSET_C    (1 << 31) /* Cycle counter enable bit */
+
 /* PMMIR_EL1.SLOTS mask */
 #define ARMV8_PMU_SLOTS_MASK    0xff
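For context, a hedged sketch of how guest code would consume the two new bits: both the enable-set and overflow registers dedicate bit 31 to the cycle counter, next to bits 0..N-1 for the generic event counters. It assumes the read_sysreg()/write_sysreg()/isb() helpers the selftests already provide; the function names are illustrative only:

static inline void sketch_enable_cycle_counter(void)
{
    /* Set only the cycle counter bit; other counters are untouched */
    write_sysreg(read_sysreg(pmcntenset_el0) | ARMV8_PMU_CNTENSET_C,
                 pmcntenset_el0);
    isb();
}

static inline bool sketch_cycle_counter_overflowed(void)
{
    /* PMOVSSET_EL0 reads back the sticky overflow flags */
    return read_sysreg(pmovsset_el0) & ARMV8_PMU_CNTOVS_C;
}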
From patchwork Mon Feb 13 18:02:25 2023
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Date: Mon, 13 Feb 2023 18:02:25 +0000
Message-ID: <20230213180234.2885032-5-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 04/13] selftests: KVM: aarch64: Add PMU cycle counter helpers

Add basic helpers for the test to access the cycle counter registers.
The helpers will be used in the upcoming patches to run the tests
related to the cycle counter.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 40 +++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index d72c3c9b9c39f..15aebc7d7dc94 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -147,6 +147,46 @@ static inline void disable_counter(int idx)
     isb();
 }
 
+static inline uint64_t read_cycle_counter(void)
+{
+    return read_sysreg(pmccntr_el0);
+}
+
+static inline void reset_cycle_counter(void)
+{
+    uint64_t v = read_sysreg(pmcr_el0);
+
+    write_sysreg(ARMV8_PMU_PMCR_C | v, pmcr_el0);
+    isb();
+}
+
+static inline void enable_cycle_counter(void)
+{
+    uint64_t v = read_sysreg(pmcntenset_el0);
+
+    write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenset_el0);
+    isb();
+}
+
+static inline void disable_cycle_counter(void)
+{
+    uint64_t v = read_sysreg(pmcntenset_el0);
+
+    write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenclr_el0);
+    isb();
+}
+
+static inline void write_pmccfiltr(unsigned long val)
+{
+    write_sysreg(val, pmccfiltr_el0);
+    isb();
+}
+
+static inline uint64_t read_pmccfiltr(void)
+{
+    return read_sysreg(pmccfiltr_el0);
+}
+
 static inline uint64_t get_pmcr_n(void)
 {
     return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
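A possible guest-side measurement routine built from the new helpers (illustrative, not part of the patch; it assumes the PMU has already been enabled via PMCR_EL0.E, for instance by the test's existing pmu_enable() helper):

static uint64_t sketch_count_cycles(void (*workload)(void))
{
    uint64_t cycles;

    write_pmccfiltr(0);        /* count cycles in EL1 and EL0 */
    reset_cycle_counter();     /* PMCR_EL0.C zeroes PMCCNTR_EL0 */
    enable_cycle_counter();    /* PMCNTENSET_EL0 bit 31 */

    workload();

    cycles = read_cycle_counter();  /* PMCCNTR_EL0 */
    disable_cycle_counter();

    return cycles;
}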
From patchwork Mon Feb 13 18:02:26 2023
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Date: Mon, 13 Feb 2023 18:02:26 +0000
Message-ID: <20230213180234.2885032-6-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 05/13] selftests: KVM: aarch64: Consider PMU event filters for VM creation

Accept a list of KVM PMU event filters as an argument while creating
a VM via create_vpmu_vm(). Upcoming patches will leverage this to
test the event filters' functionality.

No functional change intended.
Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 64 +++++++++++++++++--
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 15aebc7d7dc94..2b3a4fa3afa9c 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,10 +15,14 @@
 #include
 #include
 #include
+#include <linux/bitmap.h>
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
 
+/* The max number of event numbers that's supported */
+#define ARMV8_PMU_MAX_EVENTS 64
+
 /*
  * The macros and functions below for reading/writing PMEV{CNTR,TYPER}_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
@@ -224,6 +228,8 @@ struct pmc_accessor pmc_accessors[] = {
     { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };
 
+#define MAX_EVENT_FILTERS_PER_VM 10
+
 #define INVALID_EC  (-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -232,6 +238,7 @@ struct vpmu_vm {
     struct kvm_vm *vm;
     struct kvm_vcpu *vcpu;
     int gic_fd;
+    unsigned long *pmu_filter;
 };
 
 enum test_stage {
     TEST_STAGE_COUNTER_ACCESS = 1,
@@ -541,8 +548,51 @@ static void guest_code(void)
 #define GICD_BASE_GPA 0x8000000ULL
 #define GICR_BASE_GPA 0x80A0000ULL
 
+static unsigned long *
+set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
+{
+    int j;
+    unsigned long *pmu_filter;
+    struct kvm_device_attr filter_attr = {
+        .group = KVM_ARM_VCPU_PMU_V3_CTRL,
+        .attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+    };
+
+    /*
+     * Setting up of the bitmap is similar to what KVM does.
+     * If the first filter denies an event, default all the others to allow, and vice-versa.
+     */
+    pmu_filter = bitmap_zalloc(ARMV8_PMU_MAX_EVENTS);
+    TEST_ASSERT(pmu_filter, "Failed to allocate the pmu_filter");
+
+    if (pmu_event_filters[0].action == KVM_PMU_EVENT_DENY)
+        bitmap_fill(pmu_filter, ARMV8_PMU_MAX_EVENTS);
+
+    for (j = 0; j < MAX_EVENT_FILTERS_PER_VM; j++) {
+        struct kvm_pmu_event_filter *pmu_event_filter = &pmu_event_filters[j];
+
+        if (!pmu_event_filter->nevents)
+            break;
+
+        pr_debug("Applying event filter:: event: 0x%x; action: %s\n",
+                 pmu_event_filter->base_event,
+                 pmu_event_filter->action == KVM_PMU_EVENT_ALLOW ? "ALLOW" : "DENY");
+
+        filter_attr.addr = (uint64_t) pmu_event_filter;
+        vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+
+        if (pmu_event_filter->action == KVM_PMU_EVENT_ALLOW)
+            __set_bit(pmu_event_filter->base_event, pmu_filter);
+        else
+            __clear_bit(pmu_event_filter->base_event, pmu_filter);
+    }
+
+    return pmu_filter;
+}
+
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static struct vpmu_vm *create_vpmu_vm(void *guest_code)
+static struct vpmu_vm *
+create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
     struct kvm_vm *vm;
     struct kvm_vcpu *vcpu;
@@ -586,6 +636,9 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
                 "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
 
     /* Initialize vPMU */
+    if (pmu_event_filters)
+        vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+
     vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
     vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
@@ -594,6 +647,8 @@
 static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 {
+    if (vpmu_vm->pmu_filter)
+        bitmap_free(vpmu_vm->pmu_filter);
     close(vpmu_vm->gic_fd);
     kvm_vm_free(vpmu_vm->vm);
     free(vpmu_vm);
@@ -631,7 +686,7 @@
     guest_data.expected_pmcr_n = pmcr_n;
 
     pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-    vpmu_vm = create_vpmu_vm(guest_code);
+    vpmu_vm = create_vpmu_vm(guest_code, NULL);
     vcpu = vpmu_vm->vcpu;
 
     /* Save the initial sp to restore them later to run the guest again */
@@ -676,7 +731,7 @@
     guest_data.expected_pmcr_n = pmcr_n;
 
     pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-    vpmu_vm = create_vpmu_vm(guest_code);
+    vpmu_vm = create_vpmu_vm(guest_code, NULL);
     vcpu = vpmu_vm->vcpu;
 
     /* Update the PMCR_EL0.N with @pmcr_n */
@@ -719,9 +774,10 @@
     struct vpmu_vm *vpmu_vm;
     uint64_t pmcr;
 
-    vpmu_vm = create_vpmu_vm(guest_code);
+    vpmu_vm = create_vpmu_vm(guest_code, NULL);
     vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
     destroy_vpmu_vm(vpmu_vm);
+
     return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
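To illustrate the new signature, a hypothetical caller might look like the sketch below. The designated initializer mirrors what the EVENT_ALLOW()/EVENT_DENY() macros introduced by the next patch expand to; zero-filled tail entries (.nevents == 0) terminate the list:

static void sketch_filtered_vm(void)
{
    struct kvm_pmu_event_filter filters[MAX_EVENT_FILTERS_PER_VM] = {
        {
            .base_event = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
            .nevents = 1,
            .action = KVM_PMU_EVENT_ALLOW,
        },
        /* remaining zero-initialized entries end the list */
    };
    struct vpmu_vm *vpmu_vm = create_vpmu_vm(guest_code, filters);

    /* vpmu_vm->pmu_filter now mirrors the allow/deny state KVM should enforce */
    run_vcpu(vpmu_vm->vcpu);
    destroy_vpmu_vm(vpmu_vm);
}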
From patchwork Mon Feb 13 18:02:27 2023
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Date: Mon, 13 Feb 2023 18:02:27 +0000
Message-ID: <20230213180234.2885032-7-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 06/13] selftests: KVM: aarch64: Add KVM PMU event filter test

Add tests to validate KVM's KVM_ARM_VCPU_PMU_V3_FILTER attribute by
applying a series of filters to allow or deny events from userspace.
The guest validates the configuration by checking that it can count
only the events that are allowed.

The workload that executes a precise number of instructions
(execute_precise_instrs() and precise_instrs_loop()) is taken from
kvm-unit-tests' arm/pmu.c.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 261 +++++++++++++++++-
 1 file changed, 258 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 2b3a4fa3afa9c..3dfb770b538e9 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -2,12 +2,21 @@
 /*
  * vpmu_test - Test the vPMU
  *
- * Copyright (c) 2022 Google LLC.
+ * The test suite contains a series of checks to validate the vPMU
+ * functionality. This test runs only when KVM_CAP_ARM_PMU_V3 is
+ * supported on the host. The tests include:
  *
- * This test checks if the guest can see the same number of the PMU event
+ * 1. Check if the guest can see the same number of the PMU event
  *    counters (PMCR_EL0.N) that userspace sets, if the guest can access
  *    those counters, and if the guest cannot access any other counters.
- * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ *
+ * 2. Test the functionality of KVM's KVM_ARM_VCPU_PMU_V3_FILTER
+ *    attribute by applying a series of filters in various combinations
+ *    of allowing or denying the events. The guest validates it by
+ *    checking if it's able to count only the events that are allowed.
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
  */
 #include
 #include
@@ -230,6 +239,12 @@ struct pmc_accessor pmc_accessors[] = {
 
 #define MAX_EVENT_FILTERS_PER_VM 10
 
+#define EVENT_ALLOW(ev) \
+    {.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_ALLOW}
+
+#define EVENT_DENY(ev) \
+    {.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_DENY}
+
 #define INVALID_EC  (-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -243,11 +258,13 @@ struct vpmu_vm {
 
 enum test_stage {
     TEST_STAGE_COUNTER_ACCESS = 1,
+    TEST_STAGE_KVM_EVENT_FILTER,
 };
 
 struct guest_data {
     enum test_stage test_stage;
     uint64_t expected_pmcr_n;
+    unsigned long *pmu_filter;
 };
 
 static struct guest_data guest_data;
@@ -329,6 +346,113 @@ static bool pmu_event_is_supported(uint64_t event)
     GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
 }
 
+
+/*
+ * Extra instructions inserted by the compiler would be difficult to compensate
+ * for, so hand assemble everything between, and including, the PMCR accesses
+ * to start and stop counting. isb instructions are inserted to make sure
+ * pmccntr read after this function returns the exact instructions executed
+ * in the controlled block. Total instrs = isb + nop + 2*loop = 2 + 2*loop.
+ */
+static inline void precise_instrs_loop(int loop, uint32_t pmcr)
+{
+    uint64_t pmcr64 = pmcr;
+
+    asm volatile(
+    "   msr pmcr_el0, %[pmcr]\n"
+    "   isb\n"
+    "1: subs %w[loop], %w[loop], #1\n"
+    "   b.gt 1b\n"
+    "   nop\n"
+    "   msr pmcr_el0, xzr\n"
+    "   isb\n"
+    : [loop] "+r" (loop)
+    : [pmcr] "r" (pmcr64)
+    : "cc");
+}
+
+/*
+ * Execute a known number of guest instructions. Only even instruction counts
+ * greater than or equal to 4 are supported by the in-line assembly code. The
+ * control register (PMCR_EL0) is initialized with the provided value (allowing
+ * for example for the cycle counter or event counters to be reset). At the end
+ * of the exact instruction loop, zero is written to PMCR_EL0 to disable
+ * counting, allowing the cycle counter or event counters to be read at the
+ * leisure of the calling code.
+ */
+static void execute_precise_instrs(int num, uint32_t pmcr)
+{
+    int loop = (num - 2) / 2;
+
+    GUEST_ASSERT_2(num >= 4 && ((num - 2) % 2 == 0), num, loop);
+    precise_instrs_loop(loop, pmcr);
+}
+
+static void test_instructions_count(int pmc_idx, bool expect_count)
+{
+    int i;
+    struct pmc_accessor *acc;
+    uint64_t cnt;
+    int instrs_count = 100;
+
+    enable_counter(pmc_idx);
+
+    /* Test the event using all the possible ways to configure the event */
+    for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+        acc = &pmc_accessors[i];
+
+        pmu_disable_reset();
+
+        acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+
+        /* Enable the PMU and execute a precise number of instructions as a workload */
+        execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
+
+        /* If a count is expected, the counter should be increased by 'instrs_count' */
+        cnt = acc->read_cntr(pmc_idx);
+        GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
+                       i, expect_count, cnt, instrs_count);
+    }
+
+    disable_counter(pmc_idx);
+}
+
+static void test_cycles_count(bool expect_count)
+{
+    uint64_t cnt;
+
+    pmu_enable();
+    reset_cycle_counter();
+
+    /* Count cycles in EL0 and EL1 */
+    write_pmccfiltr(0);
+    enable_cycle_counter();
+
+    cnt = read_cycle_counter();
+
+    /*
+     * If a count is expected by the test, the cycle counter should be increased by
+     * at least 1, as there is at least one instruction between enabling the
+     * counter and reading the counter.
+     */
+    GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
+
+    disable_cycle_counter();
+    pmu_disable_reset();
+}
+
+static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
+{
+    switch (event) {
+    case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
+        test_instructions_count(pmc_idx, expect_count);
+        break;
+    case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
+        test_cycles_count(expect_count);
+        break;
+    }
+}
+
 /*
  * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
  * are set or cleared as specified in @set_expected.
@@ -532,12 +656,37 @@ static void guest_counter_access_test(uint64_t expected_pmcr_n)
     }
 }
 
+static void guest_event_filter_test(unsigned long *pmu_filter)
+{
+    uint64_t event;
+
+    /*
+     * Check if PMCEIDx_EL0 is advertised as configured by the userspace.
+     * It's possible that even though the userspace allowed it, it may not be supported
+     * by the hardware and could be advertised as 'disabled'. Hence, only validate against
+     * the events that are advertised.
+     *
+     * Furthermore, check if the event is in fact counting if enabled, or vice-versa.
+     */
+    for (event = 0; event < ARMV8_PMU_MAX_EVENTS - 1; event++) {
+        if (pmu_event_is_supported(event)) {
+            GUEST_ASSERT_1(test_bit(event, pmu_filter), event);
+            test_event_count(event, 0, true);
+        } else {
+            test_event_count(event, 0, false);
+        }
+    }
+}
+
 static void guest_code(void)
 {
     switch (guest_data.test_stage) {
     case TEST_STAGE_COUNTER_ACCESS:
         guest_counter_access_test(guest_data.expected_pmcr_n);
         break;
+    case TEST_STAGE_KVM_EVENT_FILTER:
+        guest_event_filter_test(guest_data.pmu_filter);
+        break;
     default:
         GUEST_ASSERT_1(0, guest_data.test_stage);
     }
@@ -760,9 +909,115 @@ static void run_counter_access_tests(uint64_t pmcr_n)
         run_counter_access_error_test(i);
 }
 
+static struct kvm_pmu_event_filter pmu_event_filters[][MAX_EVENT_FILTERS_PER_VM] = {
+    /*
+     * Each set of events denotes a filter configuration for that VM.
+     * During VM creation, the filters will be applied in the sequence mentioned here.
+     */
+    {
+        EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+    },
+    {
+        EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+        EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+    },
+    {
+        EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+        EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+        EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+    },
+    {
+        EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+        EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+        EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+        EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+    },
+    {
+        EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+        EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+        EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+        EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+    },
+    {
+        EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+        EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+        EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+    },
+    {
+        EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+        EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+    },
+    {
+        EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+    },
+};
+
+static void run_kvm_event_filter_error_tests(void)
+{
+    int ret;
+    struct kvm_vm *vm;
+    struct kvm_vcpu *vcpu;
+    struct vpmu_vm *vpmu_vm;
+    struct kvm_vcpu_init init;
+    struct kvm_pmu_event_filter pmu_event_filter = EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
+    struct kvm_device_attr filter_attr = {
+        .group = KVM_ARM_VCPU_PMU_V3_CTRL,
+        .attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+        .addr = (uint64_t) &pmu_event_filter,
+    };
+
+    /* KVM should not allow configuring filters after the PMU is initialized */
+    vpmu_vm = create_vpmu_vm(guest_code, NULL);
+    ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+    TEST_ASSERT(ret == -1 && errno == EBUSY,
+                "Failed to disallow setting an event filter after PMU init");
+    destroy_vpmu_vm(vpmu_vm);
+
+    /* Check for invalid event filter setting */
+    vm = vm_create(1);
+    vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+    init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+    vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+
+    pmu_event_filter.base_event = UINT16_MAX;
+    pmu_event_filter.nevents = 5;
+    ret = __vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+    TEST_ASSERT(ret == -1 && errno == EINVAL, "Failed check for invalid filter configuration");
+    kvm_vm_free(vm);
+}
+
+static void run_kvm_event_filter_test(void)
+{
+    int i;
+    struct vpmu_vm *vpmu_vm;
+    struct kvm_vm *vm;
+    vm_vaddr_t pmu_filter_gva;
+    size_t pmu_filter_bmap_sz = BITS_TO_LONGS(ARMV8_PMU_MAX_EVENTS) * sizeof(unsigned long);
+
+    guest_data.test_stage = TEST_STAGE_KVM_EVENT_FILTER;
+
+    /* Test for valid filter configurations */
+    for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
+        vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
+        vm = vpmu_vm->vm;
+
+        pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
+        memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
+        guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;
+
+        run_vcpu(vpmu_vm->vcpu);
+
+        destroy_vpmu_vm(vpmu_vm);
+    }
+
+    /* Check if KVM is handling the errors correctly */
+    run_kvm_event_filter_error_tests();
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
     run_counter_access_tests(pmcr_n);
+    run_kvm_event_filter_test();
 }
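A worked example of the instruction accounting the guest relies on above (host-independent arithmetic; sketch_instr_budget() is a hypothetical name, not part of the patch):

#include <assert.h>

static void sketch_instr_budget(void)
{
    int num = 100;              /* instrs_count used by test_instructions_count() */
    int loop = (num - 2) / 2;   /* 49 loop iterations */

    /*
     * Each iteration retires subs + b.gt (2 instructions), so the loop
     * contributes 2 * 49 = 98; the isb + nop pair inside the counted
     * window adds 2 more. The guest therefore expects PMEVCNTRn to read
     * exactly 100 when the event is allowed, and 0 when it is denied.
     */
    assert(2 + 2 * loop == num);
}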
From patchwork Mon Feb 13 18:02:28 2023
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Date: Mon, 13 Feb 2023 18:02:28 +0000
Message-ID: <20230213180234.2885032-8-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 07/13] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test

KVM doesn't allow the guests to modify the filter types, such as
counting events in Non-secure/Secure-EL2, EL3, and so on.
Validate this by force-configuring the bits in the PMXEVTYPER_EL0,
PMEVTYPERn_EL0, and PMCCFILTR_EL0 registers.

The test goes further by creating an event to count only in EL2 and
validating that the counter does not advance.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 85 +++++++++++++++++++
 1 file changed, 85 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 3dfb770b538e9..5c166df245589 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,6 +15,10 @@
  * of allowing or denying the events. The guest validates it by
  * checking if it's able to count only the events that are allowed.
  *
+ * 3. KVM doesn't allow the guest to count the events attributed with
+ *    higher exception levels (EL2, EL3). Verify this functionality by
+ *    configuring and trying to count the events for EL2 in the guest.
+ *
  * Copyright (c) 2022 Google LLC.
  *
  */
@@ -23,6 +27,7 @@
 #include
 #include
 #include
+#include <linux/arm-smccc.h>
 #include
 #include
@@ -259,6 +264,7 @@ struct vpmu_vm {
 enum test_stage {
     TEST_STAGE_COUNTER_ACCESS = 1,
     TEST_STAGE_KVM_EVENT_FILTER,
+    TEST_STAGE_KVM_EVTYPE_FILTER,
 };
 
 struct guest_data {
@@ -678,6 +684,70 @@
     }
 }
 
+static void guest_evtype_filter_test(void)
+{
+    int i;
+    struct pmc_accessor *acc;
+    uint64_t typer, cnt;
+    struct arm_smccc_res res;
+
+    pmu_enable();
+
+    /*
+     * KVM blocks the guests from creating events for counting in Secure/Non-Secure Hyp (EL2),
+     * Monitor (EL3), and Multithreading configuration. It applies the mask
+     * ARMV8_PMU_EVTYPE_MASK against guest accesses to PMXEVTYPER_EL0, PMEVTYPERn_EL0,
+     * and PMCCFILTR_EL0 registers to prevent this. Check if KVM honors this using all possible
+     * ways to configure the EVTYPER.
+     */
+    for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+        acc = &pmc_accessors[i];
+
+        /* Set all filter bits (31-24), readback, and check against the mask */
+        acc->write_typer(0, 0xff000000);
+        typer = acc->read_typer(0);
+
+        GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
+                       typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
+
+        /*
+         * Regardless of ARMV8_PMU_EVTYPE_MASK, KVM sets perf attr.exclude_hv
+         * to not count NS-EL2 events. Verify this functionality by configuring
+         * a NS-EL2 event, for which the count shouldn't increment.
+         */
+        typer = ARMV8_PMUV3_PERFCTR_INST_RETIRED;
+        typer |= ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
+        acc->write_typer(0, typer);
+        acc->write_cntr(0, 0);
+        enable_counter(0);
+
+        /* Issue a hypercall to enter EL2 and return */
+        memset(&res, 0, sizeof(res));
+        smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
+
+        cnt = acc->read_cntr(0);
+        GUEST_ASSERT_3(cnt == 0, cnt, typer, i);
+    }
+
+    /* Check the same sequence for the Cycle counter */
+    write_pmccfiltr(0xff000000);
+    typer = read_pmccfiltr();
+    GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
+                   typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
+
+    typer = ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
+    write_pmccfiltr(typer);
+    reset_cycle_counter();
+    enable_cycle_counter();
+
+    /* Issue a hypercall to enter EL2 and return */
+    memset(&res, 0, sizeof(res));
+    smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
+
+    cnt = read_cycle_counter();
+    GUEST_ASSERT_2(cnt == 0, cnt, typer);
+}
+
 static void guest_code(void)
 {
     switch (guest_data.test_stage) {
@@ -687,6 +757,9 @@ static void guest_code(void)
     case TEST_STAGE_KVM_EVENT_FILTER:
         guest_event_filter_test(guest_data.pmu_filter);
         break;
+    case TEST_STAGE_KVM_EVTYPE_FILTER:
+        guest_evtype_filter_test();
+        break;
     default:
         GUEST_ASSERT_1(0, guest_data.test_stage);
     }
@@ -1014,10 +1087,22 @@ static void run_kvm_event_filter_test(void)
     run_kvm_event_filter_error_tests();
 }
 
+static void run_kvm_evtype_filter_test(void)
+{
+    struct vpmu_vm *vpmu_vm;
+
+    guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;
+
+    vpmu_vm = create_vpmu_vm(guest_code, NULL);
+    run_vcpu(vpmu_vm->vcpu);
+    destroy_vpmu_vm(vpmu_vm);
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
     run_counter_access_tests(pmcr_n);
     run_kvm_event_filter_test();
+    run_kvm_evtype_filter_test();
 }
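A numeric sketch of the EVTYPER mask check above. The SKETCH_* constants assume the ARMV8_PMU_EVTYPE_MASK/ARMV8_PMU_EVTYPE_EVENT values in tools/arch/arm64/include/asm/perf_event.h at the time of this series (0xc800ffff and 0xffff); verify them against your tree:

#include <assert.h>
#include <stdint.h>

#define SKETCH_EVTYPE_MASK  0xc800ffff  /* assumed: writable P, U, NSH + event bits */
#define SKETCH_EVTYPE_EVENT 0x0000ffff  /* assumed: event number field */

static void sketch_evtype_mask_check(void)
{
    uint64_t written = 0xff000000;                   /* all filter bits 31..24 set */
    uint64_t readback = written & SKETCH_EVTYPE_MASK; /* 0xc8000000 */

    /*
     * Only P (bit 31, exclude EL1), U (bit 30, exclude EL0) and NSH
     * (bit 27, include EL2) survive the mask. OR-ing the event field back
     * in reproduces the full writable mask, which is exactly the equality
     * the guest asserts.
     */
    assert((readback | SKETCH_EVTYPE_EVENT) == SKETCH_EVTYPE_MASK);
}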
From patchwork Mon Feb 13 18:02:29 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13138766
Date: Mon, 13 Feb 2023 18:02:29 +0000
Message-ID: <20230213180234.2885032-9-rananta@google.com>
Subject: [PATCH 08/13] selftests: KVM: aarch64: Add vCPU migration test for PMU
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Implement a stress test for KVM by frequently force-migrating the vCPU to random pCPUs in the system. This validates KVM's save/restore of the vPMU state, and the starting/stopping of PMU counters, as the vCPU migrates. Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 195 +++++++++++++++++- 1 file changed, 193 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c index 5c166df245589..0c9d801f4e602 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -19,9 +19,15 @@ * higher exception levels (EL2, EL3). Verify this functionality by * configuring and trying to count the events for EL2 in the guest. * + * 4. Since the PMU registers are per-cpu, stress KVM by frequently + * migrating the guest vCPU to random pCPUs in the system, and check + * if the vPMU is still behaving as expected. + * * Copyright (c) 2022 Google LLC.
* */ +#define _GNU_SOURCE + #include #include #include @@ -30,6 +36,11 @@ #include #include #include +#include +#include +#include + +#include "delay.h" /* The max number of the PMU event counters (excluding the cycle counter) */ #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) @@ -37,6 +48,8 @@ /* The max number of event numbers that's supported */ #define ARMV8_PMU_MAX_EVENTS 64 +#define msecs_to_usecs(msec) ((msec) * 1000LL) + /* * The macros and functions below for reading/writing PMEV{CNTR,TYPER}_EL0 * were basically copied from arch/arm64/kernel/perf_event.c. @@ -265,6 +278,7 @@ enum test_stage { TEST_STAGE_COUNTER_ACCESS = 1, TEST_STAGE_KVM_EVENT_FILTER, TEST_STAGE_KVM_EVTYPE_FILTER, + TEST_STAGE_VCPU_MIGRATION, }; struct guest_data { @@ -275,6 +289,19 @@ struct guest_data { static struct guest_data guest_data; +#define VCPU_MIGRATIONS_TEST_ITERS_DEF 1000 +#define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2 + +struct test_args { + int vcpu_migration_test_iter; + int vcpu_migration_test_migrate_freq_ms; +}; + +static struct test_args test_args = { + .vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF, + .vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS, +}; + static void guest_sync_handler(struct ex_regs *regs) { uint64_t esr, ec; @@ -352,7 +379,6 @@ static bool pmu_event_is_supported(uint64_t event) GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\ } - /* * Extra instructions inserted by the compiler would be difficult to compensate * for, so hand assemble everything between, and including, the PMCR accesses @@ -459,6 +485,13 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count) } } +static void test_basic_pmu_functionality(void) +{ + /* Test events on generic and cycle counters */ + test_instructions_count(0, true); + test_cycles_count(true); +} + /* * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers * are set or cleared as specified in @set_expected. @@ -748,6 +781,16 @@ static void guest_evtype_filter_test(void) GUEST_ASSERT_2(cnt == 0, cnt, typer); } +static void guest_vcpu_migration_test(void) +{ + /* + * While the userspace continuously migrates this vCPU to random pCPUs, + * run basic PMU functionalities and verify the results. 
+ */ + while (test_args.vcpu_migration_test_iter--) + test_basic_pmu_functionality(); +} + static void guest_code(void) { switch (guest_data.test_stage) { @@ -760,6 +803,9 @@ static void guest_code(void) case TEST_STAGE_KVM_EVTYPE_FILTER: guest_evtype_filter_test(); break; + case TEST_STAGE_VCPU_MIGRATION: + guest_vcpu_migration_test(); + break; default: GUEST_ASSERT_1(0, guest_data.test_stage); } @@ -837,6 +883,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters) vpmu_vm->vm = vm = vm_create(1); vm_init_descriptor_tables(vm); + /* Catch exceptions for easier debugging */ for (ec = 0; ec < ESR_EC_NUM; ec++) { vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec, @@ -881,6 +928,8 @@ static void run_vcpu(struct kvm_vcpu *vcpu) struct ucall uc; sync_global_to_guest(vcpu->vm, guest_data); + sync_global_to_guest(vcpu->vm, test_args); + vcpu_run(vcpu); switch (get_ucall(vcpu, &uc)) { case UCALL_ABORT: @@ -1098,11 +1147,112 @@ static void run_kvm_evtype_filter_test(void) destroy_vpmu_vm(vpmu_vm); } +struct vcpu_migrate_data { + struct vpmu_vm *vpmu_vm; + pthread_t *pt_vcpu; + bool vcpu_done; +}; + +static void *run_vcpus_migrate_test_func(void *arg) +{ + struct vcpu_migrate_data *migrate_data = arg; + struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm; + + run_vcpu(vpmu_vm->vcpu); + migrate_data->vcpu_done = true; + + return NULL; +} + +static uint32_t get_pcpu(void) +{ + uint32_t pcpu; + unsigned int nproc_conf; + cpu_set_t online_cpuset; + + nproc_conf = get_nprocs_conf(); + sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset); + + /* Randomly find an available pCPU to place the vCPU on */ + do { + pcpu = rand() % nproc_conf; + } while (!CPU_ISSET(pcpu, &online_cpuset)); + + return pcpu; +} + +static int migrate_vcpu(struct vcpu_migrate_data *migrate_data) +{ + int ret; + cpu_set_t cpuset; + uint32_t new_pcpu = get_pcpu(); + + CPU_ZERO(&cpuset); + CPU_SET(new_pcpu, &cpuset); + + pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu); + + ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset); + + /* Allow the error where the vCPU thread is already finished */ + TEST_ASSERT(ret == 0 || ret == ESRCH, + "Failed to migrate the vCPU to pCPU: %u; ret: %d\n", new_pcpu, ret); + + return ret; +} + +static void *vcpus_migrate_func(void *arg) +{ + struct vcpu_migrate_data *migrate_data = arg; + + while (!migrate_data->vcpu_done) { + usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms)); + migrate_vcpu(migrate_data); + } + + return NULL; +} + +static void run_vcpu_migration_test(uint64_t pmcr_n) +{ + int ret; + struct vpmu_vm *vpmu_vm; + pthread_t pt_vcpu, pt_sched; + struct vcpu_migrate_data migrate_data = { + .pt_vcpu = &pt_vcpu, + .vcpu_done = false, + }; + + __TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test"); + + guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION; + guest_data.expected_pmcr_n = pmcr_n; + + migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL); + + /* Initialize random number generation for migrating vCPUs to random pCPUs */ + srand(time(NULL)); + + /* Spawn a vCPU thread */ + ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data); + TEST_ASSERT(!ret, "Failed to create the vCPU thread"); + + /* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */ + ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data); + TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs"); + + pthread_join(pt_sched, 
NULL); + pthread_join(pt_vcpu, NULL); + + destroy_vpmu_vm(vpmu_vm); +} + static void run_tests(uint64_t pmcr_n) { run_counter_access_tests(pmcr_n); run_kvm_event_filter_test(); run_kvm_evtype_filter_test(); + run_vcpu_migration_test(pmcr_n); } /* @@ -1121,12 +1271,53 @@ static uint64_t get_pmcr_n_limit(void) return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr); } -int main(void) +static void print_help(char *name) +{ + pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n", + name); + pr_info("\t-i: Number of iterations of the vCPU migration test (default: %u)\n", + VCPU_MIGRATIONS_TEST_ITERS_DEF); + pr_info("\t-m: Interval (in ms) at which vCPUs are migrated to a different pCPU (default: %u)\n", + VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS); + pr_info("\t-h: print this help screen\n"); +} + +static bool parse_args(int argc, char *argv[]) +{ + int opt; + + while ((opt = getopt(argc, argv, "hi:m:")) != -1) { + switch (opt) { + case 'i': + test_args.vcpu_migration_test_iter = + atoi_positive("Nr vCPU migration iterations", optarg); + break; + case 'm': + test_args.vcpu_migration_test_migrate_freq_ms = + atoi_positive("vCPU migration frequency", optarg); + break; + case 'h': + default: + goto err; + } + } + + return true; + +err: + print_help(argv[0]); + return false; +} + +int main(int argc, char *argv[]) { uint64_t pmcr_n; TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3)); + if (!parse_args(argc, argv)) + exit(KSFT_SKIP); + pmcr_n = get_pmcr_n_limit(); run_tests(pmcr_n);
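As a usage illustration (hypothetical invocation; the binary name follows the Makefile target renamed at the start of the series), "./vpmu_test -i 2000 -m 5" would run 2000 iterations of the migration test with a 5 ms pause between forced migrations, while a plain "./vpmu_test" keeps the defaults of 1000 iterations and 2 ms.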
From patchwork Mon Feb 13 18:02:30 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13138768
Date: Mon, 13 Feb 2023 18:02:30 +0000
Message-ID: <20230213180234.2885032-10-rananta@google.com>
Subject: [PATCH 09/13] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Extend the vCPU migration test to also validate the vPMU's functionality when set up for overflow conditions. Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 223 ++++++++++++++++-- 1 file changed, 198 insertions(+), 25 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c index 0c9d801f4e602..066dc17fa3906 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -21,7 +21,9 @@ * * 4. Since the PMU registers are per-cpu, stress KVM by frequently * migrating the guest vCPU to random pCPUs in the system, and check - * if the vPMU is still behaving as expected. + * if the vPMU is still behaving as expected. The sub-tests include + * testing basic functionalities such as basic counters behavior, + * overflow, and overflow interrupts. * * Copyright (c) 2022 Google LLC.
* @@ -41,13 +43,27 @@ #include #include #include "delay.h" +#include "gic.h" +#include "spinlock.h" /* The max number of the PMU event counters (excluding the cycle counter) */ #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) +/* The cycle counter bit position that's common among the PMU registers */ +#define ARMV8_PMU_CYCLE_COUNTER_IDX 31 + /* The max number of event numbers that's supported */ #define ARMV8_PMU_MAX_EVENTS 64 +#define PMU_IRQ 23 + +#define COUNT_TO_OVERFLOW 0xFULL +#define PRE_OVERFLOW_32 (GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1) +#define PRE_OVERFLOW_64 (GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1) + +#define GICD_BASE_GPA 0x8000000ULL +#define GICR_BASE_GPA 0x80A0000ULL + #define msecs_to_usecs(msec) ((msec) * 1000LL) /* @@ -162,6 +178,17 @@ static inline void write_sel_evtyper(int sel, unsigned long val) isb(); } +static inline void write_pmovsclr(unsigned long val) +{ + write_sysreg(val, pmovsclr_el0); + isb(); +} + +static unsigned long read_pmovsclr(void) +{ + return read_sysreg(pmovsclr_el0); +} + static inline void enable_counter(int idx) { uint64_t v = read_sysreg(pmcntenset_el0); @@ -178,11 +205,33 @@ static inline void disable_counter(int idx) isb(); } +/* PMINTENSET/CLR are write-1-to-set/clear, so no read-modify-write is needed */ +static inline void enable_irq(int idx) +{ + write_sysreg(BIT(idx), pmintenset_el1); + isb(); +} + +static inline void disable_irq(int idx) +{ + write_sysreg(BIT(idx), pmintenclr_el1); + isb(); +} + static inline uint64_t read_cycle_counter(void) { return read_sysreg(pmccntr_el0); } +static inline void write_cycle_counter(uint64_t v) +{ + write_sysreg(v, pmccntr_el0); + isb(); +} + static inline void reset_cycle_counter(void) { uint64_t v = read_sysreg(pmcr_el0); @@ -289,6 +338,15 @@ struct guest_data { static struct guest_data guest_data; +/* Data to communicate among guest threads */ +struct guest_irq_data { + uint32_t pmc_idx_bmap; + uint32_t irq_received_bmap; + struct spinlock lock; +}; + +static struct guest_irq_data guest_irq_data; + #define VCPU_MIGRATIONS_TEST_ITERS_DEF 1000 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2 @@ -322,6 +380,79 @@ static void guest_sync_handler(struct ex_regs *regs) expected_ec = INVALID_EC; } +static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_bmap) +{ + /* + * Fail if there's an interrupt from unexpected PMCs. + * All the expected events' IRQs may not arrive at the same time. + * Hence, validate the interrupt only for the PMCs where one is expected.
+ */ + if (pmovsclr & BIT(pmc_idx)) { + GUEST_ASSERT_3(pmc_idx_bmap & BIT(pmc_idx), pmc_idx, pmovsclr, pmc_idx_bmap); + write_pmovsclr(BIT(pmc_idx)); + } +} + +static void guest_irq_handler(struct ex_regs *regs) +{ + uint32_t pmc_idx_bmap; + uint64_t i, pmcr_n = get_pmcr_n(); + uint32_t pmovsclr = read_pmovsclr(); + unsigned int intid = gic_get_and_ack_irq(); + + /* No other IRQ apart from the PMU IRQ is expected */ + GUEST_ASSERT_1(intid == PMU_IRQ, intid); + + spin_lock(&guest_irq_data.lock); + pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap); + + for (i = 0; i < pmcr_n; i++) + guest_validate_irq(i, pmovsclr, pmc_idx_bmap); + guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap); + + /* Mark the IRQ as received for the corresponding PMCs */ + WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr); + spin_unlock(&guest_irq_data.lock); + + gic_set_eoi(intid); +} + +static int pmu_irq_received(int pmc_idx) +{ + bool irq_received; + + spin_lock(&guest_irq_data.lock); + irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx); + WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx)); + spin_unlock(&guest_irq_data.lock); + + return irq_received; +} + +static void pmu_irq_init(int pmc_idx) +{ + write_pmovsclr(BIT(pmc_idx)); + + spin_lock(&guest_irq_data.lock); + WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx)); + WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx)); + spin_unlock(&guest_irq_data.lock); + + enable_irq(pmc_idx); +} + +static void pmu_irq_exit(int pmc_idx) +{ + write_pmovsclr(BIT(pmc_idx)); + + spin_lock(&guest_irq_data.lock); + WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx)); + WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx)); + spin_unlock(&guest_irq_data.lock); + + disable_irq(pmc_idx); +} + /* * Run the given operation that should trigger an exception with the * given exception class. The exception handler (guest_sync_handler) @@ -420,12 +551,20 @@ static void execute_precise_instrs(int num, uint32_t pmcr) precise_instrs_loop(loop, pmcr); } -static void test_instructions_count(int pmc_idx, bool expect_count) +static void test_instructions_count(int pmc_idx, bool expect_count, bool test_overflow) { int i; struct pmc_accessor *acc; - uint64_t cnt; - int instrs_count = 100; + uint64_t cntr_val = 0; + int instrs_count = 500; + + if (test_overflow) { + /* Overflow scenarios can only be tested when a count is expected */ + GUEST_ASSERT_1(expect_count, pmc_idx); + + cntr_val = PRE_OVERFLOW_32; + pmu_irq_init(pmc_idx); + } enable_counter(pmc_idx); @@ -433,41 +572,68 @@ static void test_instructions_count(int pmc_idx, bool expect_count) for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) { acc = &pmc_accessors[i]; - pmu_disable_reset(); - + acc->write_cntr(pmc_idx, cntr_val); acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED); - /* Enable the PMU and execute precisely number of instructions as a workload */ - execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E); + /* + * Enable the PMU and execute a precise number of instructions as a workload. + * Since execute_precise_instrs() disables the PMU at the end, 'instrs_count' + * should have enough instructions to raise an IRQ.
+ */ + execute_precise_instrs(instrs_count, ARMV8_PMU_PMCR_E); - /* If a count is expected, the counter should be increased by 'instrs_count' */ - cnt = acc->read_cntr(pmc_idx); - GUEST_ASSERT_4(expect_count == (cnt == instrs_count), - i, expect_count, cnt, instrs_count); + /* + * If an overflow is expected, only check for the overflow flag. + * As the overflow interrupt is enabled, the interrupt would add additional + * instructions and mess up the precise instruction count. Hence, measure + * the instruction count only when the test is not set up for an overflow. + */ + if (test_overflow) { + GUEST_ASSERT_2(pmu_irq_received(pmc_idx), pmc_idx, i); + } else { + uint64_t cnt = acc->read_cntr(pmc_idx); + + GUEST_ASSERT_4(expect_count == (cnt == instrs_count), + pmc_idx, i, cnt, expect_count); + } } - disable_counter(pmc_idx); + if (test_overflow) + pmu_irq_exit(pmc_idx); } -static void test_cycles_count(bool expect_count) +static void test_cycles_count(bool expect_count, bool test_overflow) { uint64_t cnt; - pmu_enable(); - reset_cycle_counter(); + if (test_overflow) { + /* Overflow scenarios can only be tested when a count is expected */ + GUEST_ASSERT(expect_count); + + write_cycle_counter(PRE_OVERFLOW_64); + pmu_irq_init(ARMV8_PMU_CYCLE_COUNTER_IDX); + } else { + reset_cycle_counter(); + } /* Count cycles in EL0 and EL1 */ write_pmccfiltr(0); enable_cycle_counter(); + /* Enable the PMU and execute a precise number of instructions as a workload */ + execute_precise_instrs(500, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E); cnt = read_cycle_counter(); /* * If a count is expected by the test, the cycle counter should be increased by - * at least 1, as there is at least one instruction between enabling the + * at least 1, as there are a number of instructions between enabling the * counter and reading the counter.
*/ GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count); + if (test_overflow) { + GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expect_count); + pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX); + } disable_cycle_counter(); pmu_disable_reset(); @@ -477,19 +643,28 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count) { switch (event) { case ARMV8_PMUV3_PERFCTR_INST_RETIRED: - test_instructions_count(pmc_idx, expect_count); + test_instructions_count(pmc_idx, expect_count, false); break; case ARMV8_PMUV3_PERFCTR_CPU_CYCLES: - test_cycles_count(expect_count); + test_cycles_count(expect_count, false); break; } } static void test_basic_pmu_functionality(void) { + local_irq_disable(); + gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA); + gic_irq_enable(PMU_IRQ); + local_irq_enable(); + /* Test events on generic and cycle counters */ - test_instructions_count(0, true); - test_cycles_count(true); + test_instructions_count(0, true, false); + test_cycles_count(true, false); + + /* Test overflow with interrupts on generic and cycle counters */ + test_instructions_count(0, true, true); + test_cycles_count(true, true); } /* @@ -813,9 +988,6 @@ static void guest_code(void) GUEST_DONE(); } -#define GICD_BASE_GPA 0x8000000ULL -#define GICR_BASE_GPA 0x80A0000ULL - static unsigned long * set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters) { @@ -866,7 +1038,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters) struct kvm_vcpu *vcpu; struct kvm_vcpu_init init; uint8_t pmuver, ec; - uint64_t dfr0, irq = 23; + uint64_t dfr0, irq = PMU_IRQ; struct vpmu_vm *vpmu_vm; struct kvm_device_attr irq_attr = { .group = KVM_ARM_VCPU_PMU_V3_CTRL, @@ -883,6 +1055,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters) vpmu_vm->vm = vm = vm_create(1); vm_init_descriptor_tables(vm); + vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler); /* Catch exceptions for easier debugging */ for (ec = 0; ec < ESR_EC_NUM; ec++) {
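A quick standalone check of the pre-overflow arithmetic used above (a minimal sketch, with GENMASK expanded by hand rather than pulled from the kernel headers):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* COUNT_TO_OVERFLOW (0xF) events should be enough to wrap a 32-bit counter */
	uint64_t count_to_overflow = 0xf;
	uint32_t pre_overflow_32 = (uint32_t)(0xffffffffULL - count_to_overflow + 1);

	printf("PRE_OVERFLOW_32   = 0x%x\n", pre_overflow_32);			/* 0xfffffff1 */
	printf("events until wrap = %u\n", (uint32_t)(0u - pre_overflow_32));	/* 15 */
	return 0;
}

With instrs_count raised to 500, the counter is guaranteed to pass the wrap point and latch the overflow bit well before the workload ends.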
From patchwork Mon Feb 13 18:02:31 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13138767
Date: Mon, 13 Feb 2023 18:02:31 +0000
Message-ID: <20230213180234.2885032-11-rananta@google.com>
Subject: [PATCH 10/13] selftests: KVM: aarch64: Test chained events for PMU
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Extend the vPMU's vCPU migration test to validate chained events and their overflow conditions. Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 76 ++++++++++++++++++- 1 file changed, 75 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c index 066dc17fa3906..de725f4339ad5 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -23,7 +23,7 @@ * migrating the guest vCPU to random pCPUs in the system, and check * if the vPMU is still behaving as expected. The sub-tests include * testing basic functionalities such as basic counters behavior, - * overflow, and overflow interrupts. + * overflow, overflow interrupts, and chained events. * * Copyright (c) 2022 Google LLC.
* @@ -61,6 +61,8 @@ #define PRE_OVERFLOW_32 (GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1) #define PRE_OVERFLOW_64 (GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1) +#define ALL_SET_64 GENMASK(63, 0) + #define GICD_BASE_GPA 0x8000000ULL #define GICR_BASE_GPA 0x80A0000ULL @@ -639,6 +641,75 @@ static void test_cycles_count(bool expect_count, bool test_overflow) pmu_disable_reset(); } +static void test_chained_count(int pmc_idx) +{ + int i, chained_pmc_idx; + struct pmc_accessor *acc; + uint64_t pmcr_n, cnt, cntr_val; + + /* The test needs at least two PMCs */ + pmcr_n = get_pmcr_n(); + GUEST_ASSERT_1(pmcr_n >= 2, pmcr_n); + + /* + * The chained counter is always (pmc_idx + 1). pmc_idx should be + * even, as the chained event doesn't count on odd numbered counters. + */ + GUEST_ASSERT_1(pmc_idx % 2 == 0, pmc_idx); + + /* + * The max counter idx that the chained counter can occupy is + * (pmcr_n - 1), while the actual event sits on (pmcr_n - 2). + */ + chained_pmc_idx = pmc_idx + 1; + GUEST_ASSERT(chained_pmc_idx < pmcr_n); + + enable_counter(chained_pmc_idx); + pmu_irq_init(chained_pmc_idx); + + /* Configure the chained event using all the possible ways */ + for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) { + acc = &pmc_accessors[i]; + + /* Test if the chained counter increments when the base event overflows */ + + cntr_val = 1; + acc->write_cntr(chained_pmc_idx, cntr_val); + acc->write_typer(chained_pmc_idx, ARMV8_PMUV3_PERFCTR_CHAIN); + + /* Chain the counter with pmc_idx that's configured for an overflow */ + test_instructions_count(pmc_idx, true, true); + + /* + * pmc_idx is also configured to run for all the ARRAY_SIZE(pmc_accessors) + * combinations. Hence, the chained counter (chained_pmc_idx) is expected to + * read cntr_val + ARRAY_SIZE(pmc_accessors). + */ + cnt = acc->read_cntr(chained_pmc_idx); + GUEST_ASSERT_4(cnt == cntr_val + ARRAY_SIZE(pmc_accessors), + pmc_idx, i, cnt, cntr_val + ARRAY_SIZE(pmc_accessors)); + + /* Test for the overflow of the chained counter itself */ + + cntr_val = ALL_SET_64; + acc->write_cntr(chained_pmc_idx, cntr_val); + + test_instructions_count(pmc_idx, true, true); + + /* + * At this point, an interrupt should've been fired for the chained + * counter (which validates the overflow bit), and the counter should've + * wrapped around to ARRAY_SIZE(pmc_accessors) - 1.
+ */ + cnt = acc->read_cntr(chained_pmc_idx); + GUEST_ASSERT_4(cnt == ARRAY_SIZE(pmc_accessors) - 1, + pmc_idx, i, cnt, ARRAY_SIZE(pmc_accessors)); + } + + pmu_irq_exit(chained_pmc_idx); +} + static void test_event_count(uint64_t event, int pmc_idx, bool expect_count) { switch (event) { @@ -665,6 +736,9 @@ static void test_basic_pmu_functionality(void) /* Test overflow with interrupts on generic and cycle counters */ test_instructions_count(0, true, true); test_cycles_count(true, true); + + /* Test chained events */ + test_chained_count(0); } /*
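To make the chaining arithmetic concrete, here is a minimal host-side model of a CHAIN pair (illustrative only, not the selftest's code): the low 32-bit counter is seeded just short of wrapping, and its odd companion increments once per wrap, which is why the test expects the companion to read its seed value plus one per overflow.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t low = 0xffffffffu - 14;	/* PRE_OVERFLOW_32: 15 events away from wrapping */
	uint64_t chain = 1;			/* odd companion counter, seeded to 1 */

	for (int ev = 0; ev < 500; ev++) {	/* exactly one wrap occurs in 500 events */
		uint32_t prev = low++;
		if (low < prev)			/* 32-bit wrap-around detected */
			chain++;
	}
	printf("chained counter = %llu\n", (unsigned long long)chain);	/* prints 2 */
	return 0;
}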
From patchwork Mon Feb 13 18:02:32 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13138769
Date: Mon, 13 Feb 2023 18:02:32 +0000
Message-ID: <20230213180234.2885032-12-rananta@google.com>
Subject: [PATCH 11/13] selftests: KVM: aarch64: Add PMU test to chain all the counters
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Extend the vCPU migration test to occupy all the vPMU counters by configuring chained events on alternate counter-ids, chaining each with its predecessor counter, and verifying against the extended behavior. Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 60 +++++++++++++++++++ 1 file changed, 60 insertions(+) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c index de725f4339ad5..fd00acb9391c8 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -710,6 +710,63 @@ static void test_chained_count(int pmc_idx) pmu_irq_exit(chained_pmc_idx); } +static void test_chain_all_counters(void) +{ + int i; + uint64_t cnt, pmcr_n = get_pmcr_n(); + struct pmc_accessor *acc = &pmc_accessors[0]; + + /* + * Test the occupancy of all the event counters by chaining the + * alternate counters. The test assumes that the host hasn't + * occupied any counters. Hence, if the test fails, it could be + * because all the counters weren't available to the guest or + * there's actually a bug in KVM. + */ + + /* + * Configure even numbered counters to count cpu-cycles, and chain + * each of them with its odd numbered counter. + */ + for (i = 0; i < pmcr_n; i++) { + if (i % 2) { + acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CHAIN); + acc->write_cntr(i, 1); + } else { + pmu_irq_init(i); + acc->write_cntr(i, PRE_OVERFLOW_32); + acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CPU_CYCLES); + } + enable_counter(i); + } + + /* Introduce some cycles */ + execute_precise_instrs(500, ARMV8_PMU_PMCR_E); + + /* + * An overflow interrupt should've arrived for all the even numbered + * counters but none for the odd numbered ones. The odd numbered ones + * should've incremented exactly by 1.
+ */ + for (i = 0; i < pmcr_n; i++) { + if (i % 2) { + GUEST_ASSERT_1(!pmu_irq_received(i), i); + + cnt = acc->read_cntr(i); + GUEST_ASSERT_2(cnt == 2, i, cnt); + } else { + GUEST_ASSERT_1(pmu_irq_received(i), i); + } + } + + /* Cleanup the states */ + for (i = 0; i < pmcr_n; i++) { + if (i % 2 == 0) + pmu_irq_exit(i); + disable_counter(i); + } +} + static void test_event_count(uint64_t event, int pmc_idx, bool expect_count) { switch (event) { @@ -739,6 +796,9 @@ static void test_basic_pmu_functionality(void) /* Test chained events */ test_chained_count(0); + + /* Test running chained events on all the implemented counters */ + test_chain_all_counters(); } /*
From patchwork Mon Feb 13 18:02:33 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13138770
Date: Mon, 13 Feb 2023 18:02:33 +0000
Message-ID: <20230213180234.2885032-13-rananta@google.com>
Subject: [PATCH 12/13] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The PMU test's create_vpmu_vm() currently creates a VM with only one vCPU. Extend it to accept the number of vCPUs as an argument and create a multi-vCPU VM. This helps the upcoming patches test the vPMU context across multiple vCPUs. No functional change intended. Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 82 +++++++++++-------- 1 file changed, 49 insertions(+), 33 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c index fd00acb9391c8..239fc7e06b3b9 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -320,7 +320,8 @@ uint64_t op_end_addr; struct vpmu_vm { struct kvm_vm *vm; - struct kvm_vcpu *vcpu; + int nr_vcpus; + struct kvm_vcpu **vcpus; int gic_fd; unsigned long *pmu_filter; }; @@ -1164,10 +1165,11 @@ set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_ return pmu_filter; } -/* Create a VM that has one vCPU with PMUv3 configured. */ +/* Create a VM with PMUv3 configured.
*/ static struct vpmu_vm * -create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters) +create_vpmu_vm(int nr_vcpus, void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters) { + int i; struct kvm_vm *vm; struct kvm_vcpu *vcpu; struct kvm_vcpu_init init; @@ -1187,7 +1189,11 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters) vpmu_vm = calloc(1, sizeof(*vpmu_vm)); TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm"); - vpmu_vm->vm = vm = vm_create(1); + vpmu_vm->vcpus = calloc(nr_vcpus, sizeof(struct kvm_vcpu *)); + TEST_ASSERT(vpmu_vm->vcpus, "Failed to allocate kvm_vcpus"); + vpmu_vm->nr_vcpus = nr_vcpus; + + vpmu_vm->vm = vm = vm_create(nr_vcpus); vm_init_descriptor_tables(vm); vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler); @@ -1197,26 +1203,35 @@ guest_sync_handler); } - /* Create vCPU with PMUv3 */ + /* Create vCPUs with PMUv3 */ vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init); init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3); - vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code); - vcpu_init_descriptor_tables(vcpu); - vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA); - /* Make sure that PMUv3 support is indicated in the ID register */ - vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0); - pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0); - TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF && - pmuver >= ID_AA64DFR0_PMUVER_8_0, - "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver); + for (i = 0; i < nr_vcpus; i++) { + vpmu_vm->vcpus[i] = vcpu = aarch64_vcpu_add(vm, i, &init, guest_code); + vcpu_init_descriptor_tables(vcpu); + } - /* Initialize vPMU */ - if (pmu_event_filters) - vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters); + /* vGIC setup is expected after the vCPUs are created but before the vPMU is initialized */ + vpmu_vm->gic_fd = vgic_v3_setup(vm, nr_vcpus, 64, GICD_BASE_GPA, GICR_BASE_GPA); - vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr); - vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr); + for (i = 0; i < nr_vcpus; i++) { + vcpu = vpmu_vm->vcpus[i]; + + /* Make sure that PMUv3 support is indicated in the ID register */ + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0); + pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0); + TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF && + pmuver >= ID_AA64DFR0_PMUVER_8_0, + "Unexpected PMUVER (0x%x) on vCPU %d with PMUv3", pmuver, i); + + /* Initialize vPMU */ + if (pmu_event_filters) + vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters); + + vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr); + vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr); + } return vpmu_vm; } @@ -1227,6 +1242,7 @@ static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm) bitmap_free(vpmu_vm->pmu_filter); close(vpmu_vm->gic_fd); kvm_vm_free(vpmu_vm->vm); + free(vpmu_vm->vcpus); free(vpmu_vm); } @@ -1264,8 +1280,8 @@ static void run_counter_access_test(uint64_t pmcr_n) guest_data.expected_pmcr_n = pmcr_n; pr_debug("Test with pmcr_n %lu\n", pmcr_n); - vpmu_vm = create_vpmu_vm(guest_code, NULL); - vcpu = vpmu_vm->vcpu; + vpmu_vm = create_vpmu_vm(1, guest_code, NULL); + vcpu = vpmu_vm->vcpus[0]; /* Save the initial sp to restore them later to run the guest again */ vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
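Under the new signature, a caller would look like the following sketch (hypothetical vCPU count, reusing the selftest's own create_vpmu_vm()/run_vcpu()/destroy_vpmu_vm() helpers rather than standalone code):

	/* Create a 4-vCPU VM with no event filters and run every vCPU once */
	struct vpmu_vm *vpmu_vm = create_vpmu_vm(4, guest_code, NULL);

	for (int i = 0; i < vpmu_vm->nr_vcpus; i++)
		run_vcpu(vpmu_vm->vcpus[i]);

	destroy_vpmu_vm(vpmu_vm);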
@@ -1309,8 +1325,8 @@ static void run_counter_access_error_test(uint64_t pmcr_n) guest_data.expected_pmcr_n = pmcr_n; pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n); - vpmu_vm = create_vpmu_vm(guest_code, NULL); - vcpu = vpmu_vm->vcpu; + vpmu_vm = create_vpmu_vm(1, guest_code, NULL); + vcpu = vpmu_vm->vcpus[0]; /* Update the PMCR_EL0.N with @pmcr_n */ vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig); @@ -1396,8 +1412,8 @@ static void run_kvm_event_filter_error_tests(void) }; /* KVM should not allow configuring filters after the PMU is initialized */ - vpmu_vm = create_vpmu_vm(guest_code, NULL); - ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr); + vpmu_vm = create_vpmu_vm(1, guest_code, NULL); + ret = __vcpu_ioctl(vpmu_vm->vcpus[0], KVM_SET_DEVICE_ATTR, &filter_attr); TEST_ASSERT(ret == -1 && errno == EBUSY, "Failed to disallow setting an event filter after PMU init"); destroy_vpmu_vm(vpmu_vm); @@ -1427,14 +1443,14 @@ static void run_kvm_event_filter_test(void) /* Test for valid filter configurations */ for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) { - vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]); + vpmu_vm = create_vpmu_vm(1, guest_code, pmu_event_filters[i]); vm = vpmu_vm->vm; pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR); memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz); guest_data.pmu_filter = (unsigned long *) pmu_filter_gva; - run_vcpu(vpmu_vm->vcpu); + run_vcpu(vpmu_vm->vcpus[0]); destroy_vpmu_vm(vpmu_vm); } @@ -1449,8 +1465,8 @@ static void run_kvm_evtype_filter_test(void) guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER; - vpmu_vm = create_vpmu_vm(guest_code, NULL); - run_vcpu(vpmu_vm->vcpu); + vpmu_vm = create_vpmu_vm(1, guest_code, NULL); + run_vcpu(vpmu_vm->vcpus[0]); destroy_vpmu_vm(vpmu_vm); } @@ -1465,7 +1481,7 @@ static void *run_vcpus_migrate_test_func(void *arg) struct vcpu_migrate_data *migrate_data = arg; struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm; - run_vcpu(vpmu_vm->vcpu); + run_vcpu(vpmu_vm->vcpus[0]); migrate_data->vcpu_done = true; return NULL; @@ -1535,7 +1551,7 @@ static void run_vcpu_migration_test(uint64_t pmcr_n) guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION; guest_data.expected_pmcr_n = pmcr_n; - migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL); + migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL); /* Initialize random number generation for migrating vCPUs to random pCPUs */ srand(time(NULL)); @@ -1571,8 +1587,8 @@ static uint64_t get_pmcr_n_limit(void) struct vpmu_vm *vpmu_vm; uint64_t pmcr; - vpmu_vm = create_vpmu_vm(guest_code, NULL); - vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr); + vpmu_vm = create_vpmu_vm(1, guest_code, NULL); + vcpu_get_reg(vpmu_vm->vcpus[0], KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr); destroy_vpmu_vm(vpmu_vm); return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
From patchwork Mon Feb 13 18:02:34 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13138771
Date: Mon, 13 Feb 2023 18:02:34 +0000
Message-ID: <20230213180234.2885032-14-rananta@google.com>
Subject: [PATCH 13/13] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

To test KVM's handling of multiple vCPU contexts that are frequently migrated across random pCPUs in the system, extend the test to create a VM with multiple vCPUs and validate the behavior.
Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 166 ++++++++++++------ 1 file changed, 114 insertions(+), 52 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c index 239fc7e06b3b9..c9d8e5f9a22ab 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -19,11 +19,12 @@ * higher exception levels (EL2, EL3). Verify this functionality by * configuring and trying to count the events for EL2 in the guest. * - * 4. Since the PMU registers are per-cpu, stress KVM by frequently - * migrating the guest vCPU to random pCPUs in the system, and check - * if the vPMU is still behaving as expected. The sub-tests include - * testing basic functionalities such as basic counters behavior, - * overflow, overflow interrupts, and chained events. + * 4. Since the PMU registers are per-cpu, stress KVM by creating a + * multi-vCPU VM, then frequently migrate the guest vCPUs to random + * pCPUs in the system, and check if the vPMU is still behaving as + * expected. The sub-tests include testing basic functionalities such + * as basic counters behavior, overflow, overflow interrupts, and + * chained events. * * Copyright (c) 2022 Google LLC. * @@ -348,19 +349,22 @@ struct guest_irq_data { struct spinlock lock; }; -static struct guest_irq_data guest_irq_data; +static struct guest_irq_data guest_irq_data[KVM_MAX_VCPUS]; #define VCPU_MIGRATIONS_TEST_ITERS_DEF 1000 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2 +#define VCPU_MIGRATIONS_TEST_NR_VCPUS_DEF 2 struct test_args { int vcpu_migration_test_iter; int vcpu_migration_test_migrate_freq_ms; + int vcpu_migration_test_nr_vcpus; }; static struct test_args test_args = { .vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF, .vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS, + .vcpu_migration_test_nr_vcpus = VCPU_MIGRATIONS_TEST_NR_VCPUS_DEF, }; static void guest_sync_handler(struct ex_regs *regs) @@ -396,26 +400,34 @@ static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_ } } +static struct guest_irq_data *get_irq_data(void) +{ + uint32_t cpu = guest_get_vcpuid(); + + return &guest_irq_data[cpu]; +} + static void guest_irq_handler(struct ex_regs *regs) { uint32_t pmc_idx_bmap; uint64_t i, pmcr_n = get_pmcr_n(); uint32_t pmovsclr = read_pmovsclr(); unsigned int intid = gic_get_and_ack_irq(); + struct guest_irq_data *irq_data = get_irq_data(); /* No other IRQ apart from the PMU IRQ is expected */ GUEST_ASSERT_1(intid == PMU_IRQ, intid); - spin_lock(&guest_irq_data.lock); - pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap); + spin_lock(&irq_data->lock); + pmc_idx_bmap = READ_ONCE(irq_data->pmc_idx_bmap); for (i = 0; i < pmcr_n; i++) guest_validate_irq(i, pmovsclr, pmc_idx_bmap); guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap); /* Mark the IRQ as received for the corresponding PMCs */ - WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr); - spin_unlock(&guest_irq_data.lock); + WRITE_ONCE(irq_data->irq_received_bmap, pmovsclr); + spin_unlock(&irq_data->lock); gic_set_eoi(intid); } @@ -423,35 +435,40 @@ static void guest_irq_handler(struct ex_regs *regs) static int pmu_irq_received(int pmc_idx) { bool irq_received; + struct guest_irq_data *irq_data = get_irq_data(); - spin_lock(&guest_irq_data.lock); - irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx); -
WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx)); - spin_unlock(&guest_irq_data.lock); + spin_lock(&irq_data->lock); + irq_received = READ_ONCE(irq_data->irq_received_bmap) & BIT(pmc_idx); + WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx)); + spin_unlock(&irq_data->lock); return irq_received; } static void pmu_irq_init(int pmc_idx) { + struct guest_irq_data *irq_data = get_irq_data(); + write_pmovsclr(BIT(pmc_idx)); - spin_lock(&guest_irq_data.lock); - WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx)); - WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx)); - spin_unlock(&guest_irq_data.lock); + spin_lock(&irq_data->lock); + WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx)); + WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap | BIT(pmc_idx)); + spin_unlock(&irq_data->lock); enable_irq(pmc_idx); } static void pmu_irq_exit(int pmc_idx) { + struct guest_irq_data *irq_data = get_irq_data(); + write_pmovsclr(BIT(pmc_idx)); - spin_lock(&guest_irq_data.lock); - WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx)); - WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx)); - spin_unlock(&guest_irq_data.lock); + spin_lock(&irq_data->lock); + WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx)); + WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx)); + spin_unlock(&irq_data->lock); disable_irq(pmc_idx); } @@ -783,7 +800,8 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count) static void test_basic_pmu_functionality(void) { local_irq_disable(); - gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA); + gic_init(GIC_V3, test_args.vcpu_migration_test_nr_vcpus, + (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA); gic_irq_enable(PMU_IRQ); local_irq_enable(); @@ -1093,11 +1111,13 @@ static void guest_evtype_filter_test(void) static void guest_vcpu_migration_test(void) { + int iter = test_args.vcpu_migration_test_iter; + /* * While the userspace continuously migrates this vCPU to random pCPUs, * run basic PMU functionalities and verify the results. 
*/ - while (test_args.vcpu_migration_test_iter--) + while (iter--) test_basic_pmu_functionality(); } @@ -1472,17 +1492,23 @@ static void run_kvm_evtype_filter_test(void) struct vcpu_migrate_data { struct vpmu_vm *vpmu_vm; - pthread_t *pt_vcpu; - bool vcpu_done; + pthread_t *pt_vcpus; + unsigned long *vcpu_done_map; + pthread_mutex_t vcpu_done_map_lock; }; +struct vcpu_migrate_data migrate_data; + static void *run_vcpus_migrate_test_func(void *arg) { - struct vcpu_migrate_data *migrate_data = arg; - struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm; + struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm; + unsigned int vcpu_idx = (unsigned long)arg; - run_vcpu(vpmu_vm->vcpus[0]); - migrate_data->vcpu_done = true; + run_vcpu(vpmu_vm->vcpus[vcpu_idx]); + + pthread_mutex_lock(&migrate_data.vcpu_done_map_lock); + __set_bit(vcpu_idx, migrate_data.vcpu_done_map); + pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock); return NULL; } @@ -1504,7 +1530,7 @@ static uint32_t get_pcpu(void) return pcpu; } -static int migrate_vcpu(struct vcpu_migrate_data *migrate_data) +static int migrate_vcpu(int vcpu_idx) { int ret; cpu_set_t cpuset; @@ -1513,9 +1539,9 @@ static int migrate_vcpu(struct vcpu_migrate_data *migrate_data) CPU_ZERO(&cpuset); CPU_SET(new_pcpu, &cpuset); - pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu); + pr_debug("Migrating vCPU %d to pCPU: %u\n", vcpu_idx, new_pcpu); - ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset); + ret = pthread_setaffinity_np(migrate_data.pt_vcpus[vcpu_idx], sizeof(cpuset), &cpuset); /* Allow the error where the vCPU thread is already finished */ TEST_ASSERT(ret == 0 || ret == ESRCH, @@ -1526,48 +1552,74 @@ static int migrate_vcpu(struct vcpu_migrate_data *migrate_data) static void *vcpus_migrate_func(void *arg) { - struct vcpu_migrate_data *migrate_data = arg; + struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm; + int i, n_done, nr_vcpus = vpmu_vm->nr_vcpus; + bool vcpu_done; - while (!migrate_data->vcpu_done) { + do { usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms)); - migrate_vcpu(migrate_data); - } + for (n_done = 0, i = 0; i < nr_vcpus; i++) { + pthread_mutex_lock(&migrate_data.vcpu_done_map_lock); + vcpu_done = test_bit(i, migrate_data.vcpu_done_map); + pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock); + + if (vcpu_done) { + n_done++; + continue; + } + + migrate_vcpu(i); + } + + } while (nr_vcpus != n_done); return NULL; } static void run_vcpu_migration_test(uint64_t pmcr_n) { - int ret; + int i, nr_vcpus, ret; struct vpmu_vm *vpmu_vm; - pthread_t pt_vcpu, pt_sched; - struct vcpu_migrate_data migrate_data = { - .pt_vcpu = &pt_vcpu, - .vcpu_done = false, - }; + pthread_t pt_sched, *pt_vcpus; __TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test"); guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION; guest_data.expected_pmcr_n = pmcr_n; - migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL); + nr_vcpus = test_args.vcpu_migration_test_nr_vcpus; + + migrate_data.vcpu_done_map = bitmap_zalloc(nr_vcpus); + TEST_ASSERT(migrate_data.vcpu_done_map, "Failed to create vCPU done bitmap"); + pthread_mutex_init(&migrate_data.vcpu_done_map_lock, NULL); + + migrate_data.pt_vcpus = pt_vcpus = calloc(nr_vcpus, sizeof(*pt_vcpus)); + TEST_ASSERT(pt_vcpus, "Failed to create vCPU thread pointers"); + + migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(nr_vcpus, guest_code, NULL); /* Initialize random number generation for migrating vCPUs to random pCPUs */ srand(time(NULL)); - /* Spawn a 
vCPU thread */ - ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data); - TEST_ASSERT(!ret, "Failed to create the vCPU thread"); + /* Spawn vCPU threads */ + for (i = 0; i < nr_vcpus; i++) { + ret = pthread_create(&pt_vcpus[i], NULL, + run_vcpus_migrate_test_func, (void *)(unsigned long)i); + TEST_ASSERT(!ret, "Failed to create the vCPU thread: %d", i); + } /* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */ - ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data); + ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, NULL); TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs"); pthread_join(pt_sched, NULL); - pthread_join(pt_vcpu, NULL); + + for (i = 0; i < nr_vcpus; i++) + pthread_join(pt_vcpus[i], NULL); destroy_vpmu_vm(vpmu_vm); + free(pt_vcpus); + bitmap_free(migrate_data.vcpu_done_map); } static void run_tests(uint64_t pmcr_n) @@ -1596,12 +1648,14 @@ static uint64_t get_pmcr_n_limit(void) static void print_help(char *name) { - pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n", - name); + pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]" + "[-n vcpu_migration_nr_vcpus]\n", name); pr_info("\t-i: Number of iterations of vCPU migrations test (default: %u)\n", VCPU_MIGRATIONS_TEST_ITERS_DEF); pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. (default: %u)\n", VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS); + pr_info("\t-n: Number of vCPUs for vCPU migrations test. (default: %u)\n", + VCPU_MIGRATIONS_TEST_NR_VPUS_DEF); pr_info("\t-h: print this help screen\n"); } @@ -1609,7 +1663,7 @@ static bool parse_args(int argc, char *argv[]) { int opt; - while ((opt = getopt(argc, argv, "hi:m:")) != -1) { + while ((opt = getopt(argc, argv, "hi:m:n:")) != -1) { switch (opt) { case 'i': test_args.vcpu_migration_test_iter = @@ -1619,6 +1673,14 @@ static bool parse_args(int argc, char *argv[]) test_args.vcpu_migration_test_migrate_freq_ms = atoi_positive("vCPU migration frequency", optarg); break; + case 'n': + test_args.vcpu_migration_test_nr_vcpus = + atoi_positive("Nr vCPUs for vCPU migrations", optarg); + if (test_args.vcpu_migration_test_nr_vcpus > KVM_MAX_VCPUS) { + pr_info("Max allowed vCPUs: %u\n", KVM_MAX_VCPUS); + goto err; + } + break; case 'h': default: goto err;