From patchwork Fri Dec 30 03:59:26 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13084045
Date: Thu, 29 Dec 2022 19:59:26 -0800
In-Reply-To: <20221230035928.3423990-1-reijiw@google.com>
Message-ID: <20221230035928.3423990-6-reijiw@google.com>
Subject: [PATCH 5/7] KVM: selftests: aarch64: Introduce vpmu_counter_access test
From: Reiji Watanabe <reijiw@google.com>
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini,
 Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata,
 Reiji Watanabe

Introduce vpmu_counter_access test for arm64 platforms.
The test configures PMUv3 for a vCPU, sets PMCR_EL1.N for that
vCPU, and checks that the guest consistently sees the same number
of PMU event counters (PMCR_EL1.N) that userspace set. The test
is run for each PMCR_EL1.N value that is less than or equal to
PMCR_EL1.N on the host.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 tools/testing/selftests/kvm/.gitignore       |   1 +
 tools/testing/selftests/kvm/Makefile         |   1 +
 .../kvm/aarch64/vpmu_counter_access.c        | 181 ++++++++++++++++++
 3 files changed, 183 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 4a30d684e208..98816f714051 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -9,6 +9,7 @@
 /aarch64/vcpu_width_config
 /aarch64/vgic_init
 /aarch64/vgic_irq
+/aarch64/vpmu_counter_access
 /s390x/memop
 /s390x/resets
 /s390x/sync_regs_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 1d85b8e218a0..4dc5166116f1 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -158,6 +158,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
new file mode 100644
index 000000000000..4760030eee28
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -0,0 +1,181 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vpmu_counter_access - Test vPMU event counter access
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This test checks if the guest can see the same number of PMU event
+ * counters (PMCR_EL1.N) that userspace sets.
+ * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ */
+#include <kvm_util.h>
+#include <processor.h>
+#include <test_util.h>
+#include <vgic.h>
+#include <perf/arm_pmuv3.h>
+#include <linux/bitfield.h>
+
+/* The max number of the PMU event counters (excluding the cycle counter) */
+#define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+static uint64_t pmcr_extract_n(uint64_t pmcr_val)
+{
+	return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+}
+
+/*
+ * The guest is configured with PMUv3 with @expected_pmcr_n number of
+ * event counters.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ */
+static void guest_code(uint64_t expected_pmcr_n)
+{
+	uint64_t pmcr, pmcr_n;
+
+	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+	pmcr = read_sysreg(pmcr_el0);
+	pmcr_n = pmcr_extract_n(pmcr);
+
+	/* Make sure that PMCR_EL0.N indicates the value userspace set */
+	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
+
+	GUEST_DONE();
+}
+
+#define GICD_BASE_GPA	0x8000000ULL
+#define GICR_BASE_GPA	0x80A0000ULL
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
+				     int *gic_fd)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct kvm_vcpu_init init;
+	uint8_t pmuver;
+	uint64_t dfr0, irq = 23;
+	struct kvm_device_attr irq_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
+		.addr = (uint64_t)&irq,
+	};
+	struct kvm_device_attr init_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+
+	vm = vm_create(1);
+	ucall_init(vm, NULL);
+
+	/* Create vCPU with PMUv3 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+
+	/* Make sure that PMUv3 support is indicated in the ID register */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
+		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+
+	/* Initialize vPMU */
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+	*vcpup = vcpu;
+	return vm;
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+{
+	struct ucall uc;
+
+	vcpu_args_set(vcpu, 1, pmcr_n);
+	vcpu_run(vcpu);
+	switch (get_ucall(vcpu, &uc)) {
+	case UCALL_ABORT:
+		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		break;
+	case UCALL_DONE:
+		break;
+	default:
+		TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		break;
+	}
+}
+
+/*
+ * Create a guest with one vCPU, set the PMCR_EL1.N for the vCPU to @pmcr_n,
+ * and run the test.
+ */
+static void run_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t sp, pmcr, pmcr_orig;
+	struct kvm_vcpu_init init;
+
+	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Save the initial sp to restore it later to run the guest again */
+	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
+
+	/* Update the PMCR_EL1.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr |= (pmcr_n & ARMV8_PMU_PMCR_N_MASK) << ARMV8_PMU_PMCR_N_SHIFT;
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	/*
+	 * Reset and re-initialize the vCPU, and run the guest code again to
+	 * check if PMCR_EL1.N is preserved.
+	 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Return the default number of implemented PMU event counters excluding
+ * the cycle counter (i.e. PMCR_EL1.N value) for the guest.
+ */
+static uint64_t get_pmcr_n_limit(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t pmcr;
+
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	close(gic_fd);
+	kvm_vm_free(vm);
+	return pmcr_extract_n(pmcr);
+}
+
+int main(void)
+{
+	uint64_t i, pmcr_n;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	pmcr_n = get_pmcr_n_limit();
+	for (i = 0; i <= pmcr_n; i++)
+		run_test(i);
+
+	return 0;
+}
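
For reviewers who want to see what the register accesses in this test
correspond to at the KVM API level: the selftest's vcpu_get_reg() and
vcpu_set_reg() helpers on KVM_ARM64_SYS_REG(SYS_PMCR_EL0) wrap the
KVM_GET_ONE_REG/KVM_SET_ONE_REG vCPU ioctls. The sketch below is not
part of the patch; it shows that raw path on an arm64 host. The vcpu_fd
variable and the pmcr_read()/pmcr_write() helper names are illustrative
assumptions, and error handling is omitted.

/*
 * Illustrative sketch only: read/write the vCPU's PMCR_EL0 with the raw
 * KVM one-reg ioctls, which is what the selftest helpers boil down to.
 * vcpu_fd is assumed to be a vCPU file descriptor from KVM_CREATE_VCPU.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* PMCR_EL0 is encoded as op0=3, op1=3, CRn=9, CRm=12, op2=0 */
#define PMCR_EL0_ID	ARM64_SYS_REG(3, 3, 9, 12, 0)

static uint64_t pmcr_read(int vcpu_fd)
{
	uint64_t val = 0;
	struct kvm_one_reg reg = {
		.id   = PMCR_EL0_ID,
		.addr = (uint64_t)&val,
	};

	ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);	/* error handling omitted */
	return val;
}

static void pmcr_write(int vcpu_fd, uint64_t val)
{
	struct kvm_one_reg reg = {
		.id   = PMCR_EL0_ID,
		.addr = (uint64_t)&val,
	};

	ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);	/* error handling omitted */
}

A caller would combine the two: clear the PMCR_EL0.N field of the value
returned by pmcr_read(), insert the desired N, and write the result back
with pmcr_write(), which is the same get/modify/set sequence run_test()
performs with the selftest helpers.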