From patchwork Wed Jun 19 08:31:55 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Shaoqin Huang
X-Patchwork-Id: 13703521
From: Shaoqin Huang
To: Oliver Upton, Marc Zyngier, kvmarm@lists.linux.dev
Cc: Shaoqin Huang, Paolo
 Bonzini, Shuah Khan, James Morse, Suzuki K Poulose, Zenghui Yu,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v10 2/3] KVM: selftests: aarch64: Introduce pmu_event_filter_test
Date: Wed, 19 Jun 2024 04:31:55 -0400
Message-Id: <20240619083200.1047073-3-shahuang@redhat.com>
In-Reply-To: <20240619083200.1047073-1-shahuang@redhat.com>
References: <20240619083200.1047073-1-shahuang@redhat.com>

Introduce pmu_event_filter_test for arm64 platforms. The test configures
PMUv3 for a vCPU, sets different PMU event filters for the vCPU, and
checks that the guest can see and use the events userspace allows but
not the events userspace denies.

The test refactors create_vpmu_vm() into a wrapper around
create_vpmu_vm_with_filter(), which allows extra init code to run before
KVM_ARM_VCPU_PMU_V3_INIT. It uses the KVM_ARM_VCPU_PMU_V3_FILTER
attribute to set the PMU event filter in KVM, filters the two common
events INST_RETIRED and BR_RETIRED (among others), and has the guest
check that it sees the expected PMCEID register values.

Signed-off-by: Shaoqin Huang
Reviewed-by: Raghavendra Rao Ananta
---
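
For anyone curious about the userspace side of the attribute, here is a
minimal sketch of how a VMM could program the same filter outside of the
selftest framework. The helper name set_pmu_filter() and the vcpu_fd
parameter are illustrative only; everything else is the regular arm64 KVM
UAPI from <linux/kvm.h>:

	#include <linux/kvm.h>
	#include <stdint.h>
	#include <sys/ioctl.h>

	/*
	 * Program one ALLOW/DENY event range on a vCPU. Filters must be set
	 * before KVM_ARM_VCPU_PMU_V3_INIT, and the action of the first range
	 * installed decides the default policy for events no range mentions.
	 */
	static int set_pmu_filter(int vcpu_fd, uint16_t base_event,
				  uint16_t nevents, uint8_t action)
	{
		struct kvm_pmu_event_filter filter = {
			.base_event	= base_event,
			.nevents	= nevents,
			.action		= action,	/* KVM_PMU_EVENT_{ALLOW,DENY} */
		};
		struct kvm_device_attr attr = {
			.group	= KVM_ARM_VCPU_PMU_V3_CTRL,
			.attr	= KVM_ARM_VCPU_PMU_V3_FILTER,
			.addr	= (uint64_t)&filter,
		};

		return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
	}

Calling this once per event, e.g. with (0x0008, 1, KVM_PMU_EVENT_ALLOW) for
INST_RETIRED and (0x0021, 1, KVM_PMU_EVENT_ALLOW) for BR_RETIRED, hides every
other common event from the guest's PMCEID0_EL0/PMCEID1_EL0, which is the
behaviour guest_code() asserts below. The test itself does the same thing
through the kvm_device_attr_set() selftest helper.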

 tools/testing/selftests/kvm/Makefile            |   1 +
 .../kvm/aarch64/pmu_event_filter_test.c         | 314 ++++++++++++++++++
 2 files changed, 315 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index ac280dcba996..2110b49e7a84 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -153,6 +153,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/aarch32_id_regs
 TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
 TEST_GEN_PROGS_aarch64 += aarch64/hypercalls
 TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test
+TEST_GEN_PROGS_aarch64 += aarch64/pmu_event_filter_test
 TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/set_id_regs
 TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter
diff --git a/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
new file mode 100644
index 000000000000..308b8677e08e
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/pmu_event_filter_test.c
@@ -0,0 +1,314 @@
+
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * pmu_event_filter_test - Test the userspace PMU event filter for a guest.
+ *
+ * Copyright (c) 2023 Red Hat, Inc.
+ *
+ * This test checks that the guest sees only the limited set of PMU events
+ * that userspace sets: the guest can use the events userspace allows and
+ * can't use the events userspace denies.
+ * This test runs only when KVM_CAP_ARM_PMU_V3 and KVM_ARM_VCPU_PMU_V3_FILTER
+ * are supported on the host.
+ */
+#include <kvm_util.h>
+#include <processor.h>
+#include <test_util.h>
+#include <vgic.h>
+#include <vpmu.h>
+#include <perf/arm_pmuv3.h>
+
+struct pmu_common_event_ids {
+	uint64_t pmceid0;
+	uint64_t pmceid1;
+} max_pmce, expected_pmce;
+
+struct vpmu_vm {
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+};
+
+static struct vpmu_vm vpmu_vm;
+
+#define FILTER_NR 10
+
+struct test_desc {
+	const char *name;
+	struct kvm_pmu_event_filter filter[FILTER_NR];
+};
+
+#define __DEFINE_FILTER(base, num, act)		\
+	((struct kvm_pmu_event_filter) {	\
+		.base_event	= base,		\
+		.nevents	= num,		\
+		.action		= act,		\
+	})
+
+#define DEFINE_FILTER(base, act) __DEFINE_FILTER(base, 1, act)
+
+#define EVENT_ALLOW(event) DEFINE_FILTER(event, KVM_PMU_EVENT_ALLOW)
+#define EVENT_DENY(event) DEFINE_FILTER(event, KVM_PMU_EVENT_DENY)
+
+static void guest_code(void)
+{
+	uint64_t pmceid0 = read_sysreg(pmceid0_el0);
+	uint64_t pmceid1 = read_sysreg(pmceid1_el0);
+
+	GUEST_ASSERT_EQ(expected_pmce.pmceid0, pmceid0);
+	GUEST_ASSERT_EQ(expected_pmce.pmceid1, pmceid1);
+
+	GUEST_DONE();
+}
+
+static void guest_get_pmceid(void)
+{
+	max_pmce.pmceid0 = read_sysreg(pmceid0_el0);
+	max_pmce.pmceid1 = read_sysreg(pmceid1_el0);
+
+	GUEST_DONE();
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu)
+{
+	struct ucall uc;
+
+	while (1) {
+		vcpu_run(vcpu);
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_DONE:
+			return;
+		case UCALL_ABORT:
+			REPORT_GUEST_ASSERT(uc);
+			break;
+		default:
+			TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		}
+	}
+}
+
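+/*
+ * Translate a common event number into its PMCEID{0,1}_EL0 bit and apply
+ * the filter action to the expected bitmap: events 0x0000-0x001F map to
+ * PMCEID0_EL0[31:0], events 0x0020-0x003F map to PMCEID1_EL0[31:0], and
+ * the extended range 0x4000-0x403F maps to bits [63:32] of the same two
+ * registers. ALLOW sets the bit, DENY clears it.
+ */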
+static void set_pmce(struct pmu_common_event_ids *pmce, int action, int event)
+{
+	int base = 0;
+	uint64_t *pmceid = NULL;
+
+	if (event >= 0x4000) {
+		event -= 0x4000;
+		base = 32;
+	}
+
+	if (event >= 0 && event <= 0x1F) {
+		pmceid = &pmce->pmceid0;
+	} else if (event >= 0x20 && event <= 0x3F) {
+		event -= 0x20;
+		pmceid = &pmce->pmceid1;
+	} else {
+		return;
+	}
+
+	event += base;
+	if (action == KVM_PMU_EVENT_ALLOW)
+		*pmceid |= BIT(event);
+	else
+		*pmceid &= ~BIT(event);
+}
+
+static inline bool is_valid_filter(struct kvm_pmu_event_filter *filter)
+{
+	return filter && filter->nevents != 0;
+}
+
+static void prepare_expected_pmce(struct kvm_pmu_event_filter *filter)
+{
+	struct pmu_common_event_ids pmce_mask = { ~0, ~0 };
+	int i;
+
+	if (is_valid_filter(filter) && filter->action == KVM_PMU_EVENT_ALLOW)
+		memset(&pmce_mask, 0, sizeof(pmce_mask));
+
+	while (is_valid_filter(filter)) {
+		for (i = 0; i < filter->nevents; i++)
+			set_pmce(&pmce_mask, filter->action,
+				 filter->base_event + i);
+		filter++;
+	}
+
+	expected_pmce.pmceid0 = max_pmce.pmceid0 & pmce_mask.pmceid0;
+	expected_pmce.pmceid1 = max_pmce.pmceid1 & pmce_mask.pmceid1;
+}
+
+static void pmu_event_filter_init(struct kvm_pmu_event_filter *filter)
+{
+	while (is_valid_filter(filter)) {
+		kvm_device_attr_set(vpmu_vm.vcpu->fd,
+				    KVM_ARM_VCPU_PMU_V3_CTRL,
+				    KVM_ARM_VCPU_PMU_V3_FILTER,
+				    filter);
+		filter++;
+	}
+}
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+static void create_vpmu_vm_with_filter(void *guest_code,
+				       struct kvm_pmu_event_filter *filter)
+{
+	uint64_t irq = 23;
+
+	/* The test creates the vpmu_vm multiple times. Ensure a clean state. */
+	memset(&vpmu_vm, 0, sizeof(vpmu_vm));
+
+	vpmu_vm.vm = vm_create(1);
+	vpmu_vm.vcpu = vm_vcpu_add_with_vpmu(vpmu_vm.vm, 0, guest_code);
+	vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64);
+	__TEST_REQUIRE(vpmu_vm.gic_fd >= 0,
+		       "Failed to create vgic-v3, skipping");
+
+	pmu_event_filter_init(filter);
+
+	/* Initialize vPMU */
+	vpmu_set_irq(vpmu_vm.vcpu, irq);
+	vpmu_init(vpmu_vm.vcpu);
+}
+
+static void create_vpmu_vm(void *guest_code)
+{
+	create_vpmu_vm_with_filter(guest_code, NULL);
+}
+
+static void destroy_vpmu_vm(void)
+{
+	close(vpmu_vm.gic_fd);
+	kvm_vm_free(vpmu_vm.vm);
+}
+
+static void run_test(struct test_desc *t)
+{
+	pr_info("Test: %s\n", t->name);
+
+	create_vpmu_vm_with_filter(guest_code, t->filter);
+	prepare_expected_pmce(t->filter);
+	sync_global_to_guest(vpmu_vm.vm, expected_pmce);
+
+	run_vcpu(vpmu_vm.vcpu);
+
+	destroy_vpmu_vm();
+}
+
+static struct test_desc tests[] = {
+	{
+		.name = "without_filter",
+		.filter = {
+			{ 0 }
+		},
+	},
+	{
+		.name = "member_allow_filter",
+		.filter = {
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_SW_INCR),
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_BR_RETIRED),
+			{ 0 },
+		},
+	},
+	{
+		.name = "member_deny_filter",
+		.filter = {
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_SW_INCR),
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_BR_RETIRED),
+			{ 0 },
+		},
+	},
+	{
+		.name = "not_member_deny_filter",
+		.filter = {
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_SW_INCR),
+			{ 0 },
+		},
+	},
+	{
+		.name = "not_member_allow_filter",
+		.filter = {
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_SW_INCR),
+			{ 0 },
+		},
+	},
+	{
+		.name = "deny_chain_filter",
+		.filter = {
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_CHAIN),
+			{ 0 },
+		},
+	},
+	{
+		.name = "deny_cpu_cycles_filter",
+		.filter = {
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+			{ 0 },
+		},
+	},
+	{
+		.name = "cancel_allow_filter",
+		.filter = {
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		},
+	},
+	{
+		.name = "cancel_deny_filter",
+		.filter = {
+			EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+			EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		},
+	},
+	{
+		.name = "multiple_filter",
+		.filter = {
+			__DEFINE_FILTER(0x0, 0x10, KVM_PMU_EVENT_ALLOW),
+			__DEFINE_FILTER(0x6, 0x3, KVM_PMU_EVENT_DENY),
+		},
+	},
+	{ 0 }
+};
+
+static void run_tests(void)
+{
+	struct test_desc *t;
+
+	for (t = &tests[0]; t->name; t++)
+		run_test(t);
+}
+
+static int used_pmu_events[] = {
+	ARMV8_PMUV3_PERFCTR_BR_RETIRED,
+	ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+	ARMV8_PMUV3_PERFCTR_CHAIN,
+	ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+};
+
+static bool kvm_pmu_support_events(void)
+{
+	struct pmu_common_event_ids used_pmce = { 0, 0 };
+
+	create_vpmu_vm(guest_get_pmceid);
+
+	memset(&max_pmce, 0, sizeof(max_pmce));
+	sync_global_to_guest(vpmu_vm.vm, max_pmce);
+	run_vcpu(vpmu_vm.vcpu);
+	sync_global_from_guest(vpmu_vm.vm, max_pmce);
+	destroy_vpmu_vm();
+
+	for (int i = 0; i < ARRAY_SIZE(used_pmu_events); i++)
+		set_pmce(&used_pmce, KVM_PMU_EVENT_ALLOW, used_pmu_events[i]);
+
+	return ((max_pmce.pmceid0 & used_pmce.pmceid0) == used_pmce.pmceid0) &&
+	       ((max_pmce.pmceid1 & used_pmce.pmceid1) == used_pmce.pmceid1);
+}
+
+int main(void)
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+	TEST_REQUIRE(kvm_pmu_support_events());
+
+	run_tests();
+}
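
To try this on an arm64 host with a PMUv3-capable CPU, the test should build
along with the rest of the KVM selftests (make -C tools/testing/selftests/kvm
from a kernel tree) and can then be run directly as
./aarch64/pmu_event_filter_test; it skips itself via TEST_REQUIRE() when
KVM_CAP_ARM_PMU_V3 or any of the events it filters are unavailable.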