From patchwork Fri Dec 30 03:59:22 2022
Subject: [PATCH 1/7] KVM: arm64: PMU: Have reset_pmu_reg() to clear a register
From: Reiji Watanabe <reijiw@google.com>
Date: Thu, 29 Dec 2022 19:59:22 -0800
Message-ID: <20221230035928.3423990-2-reijiw@google.com>
On vCPU reset, PMCNTEN{SET,CLR}_EL1 and PMOVS{SET,CLR}_EL1 for a vCPU
are reset by reset_pmu_reg(). This function clears the RAZ bits of
those registers that correspond to unimplemented event counters on the
vCPU, and sets the bits that correspond to implemented event counters
to a predefined pseudo-UNKNOWN value (some bits are set to 1).

The function identifies (un)implemented event counters on the vCPU
based on the PMCR_EL1.N value on the host. Using the host value for
this becomes problematic once KVM supports letting userspace set
PMCR_EL1.N to a value different from the host value, as some of the
RAZ bits of those registers could then end up being set to 1.

Fix reset_pmu_reg() to simply clear the registers, which ensures that
all the RAZ bits are cleared even when the PMCR_EL1.N value for the
vCPU differs from the host value.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c6cbfe6b854b..ec4bdaf71a15 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -604,19 +604,11 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
 
 static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
-	u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
-
 	/* No PMU available, any PMU reg may UNDEF... */
 	if (!kvm_arm_support_pmu_v3())
 		return;
 
-	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
-	n &= ARMV8_PMU_PMCR_N_MASK;
-	if (n)
-		mask |= GENMASK(n - 1, 0);
-
-	reset_unknown(vcpu, r);
-	__vcpu_sys_reg(vcpu, r->reg) &= mask;
+	__vcpu_sys_reg(vcpu, r->reg) = 0;
 }
 
 static void reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
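To make the problem concrete, here is a standalone sketch (not from the
patch) of the mask the old code derived from the host's PMCR_EL0.N.
With a host N of 6 but a vCPU N of 2, bits 2-5 are RAZ for the guest
yet remain inside the mask, so reset_unknown() could leave them set;
clearing the whole register avoids that by construction.

/*
 * Illustration only: the RAZ mask the old reset_pmu_reg() computed
 * from the *host* PMCR_EL0.N (BIT()/GENMASK() simplified from the
 * kernel's definitions).
 */
#include <stdint.h>
#include <stdio.h>

#define BIT(n)		(1ULL << (n))
#define GENMASK(h, l)	(((~0ULL) >> (63 - (h))) & ~(BIT(l) - 1))
#define PMCR_N(pmcr)	(((pmcr) >> 11) & 0x1f)	/* PMCR_EL0.N field */

static uint64_t old_reset_mask(uint64_t host_pmcr)
{
	uint64_t n = PMCR_N(host_pmcr);
	uint64_t mask = BIT(31);		/* the cycle counter bit */

	if (n)
		mask |= GENMASK(n - 1, 0);	/* host's counters, not the vCPU's */
	return mask;
}

int main(void)
{
	/* Host N = 6, vCPU N = 2: bits 2..5 are guest-RAZ yet stay in the mask. */
	printf("mask = %#llx\n", (unsigned long long)old_reset_mask(6ULL << 11));
	return 0;
}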
From patchwork Fri Dec 30 03:59:23 2022
Subject: [PATCH 2/7] KVM: arm64: PMU: Use reset_pmu_reg() for PMUSERENR_EL0 and PMCCFILTR_EL0
From: Reiji Watanabe <reijiw@google.com>
Date: Thu, 29 Dec 2022 19:59:23 -0800
Message-ID: <20221230035928.3423990-3-reijiw@google.com>

The default reset function for PMU registers (reset_pmu_reg()) now
simply clears a specified register. Use that function for
PMUSERENR_EL0 and PMCCFILTR_EL0, since those registers should simply
be cleared on vCPU reset.

No functional change intended.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ec4bdaf71a15..4959658b502c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1747,7 +1747,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
	 * in 32bit mode. Here we choose to reset it as zero for consistency.
	 */
	{ PMU_SYS_REG(SYS_PMUSERENR_EL0), .access = access_pmuserenr,
-	  .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 },
+	  .reg = PMUSERENR_EL0 },
	{ PMU_SYS_REG(SYS_PMOVSSET_EL0), .access = access_pmovs,
	  .reg = PMOVSSET_EL0 },
@@ -1903,7 +1903,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
	 * in 32bit mode. Here we choose to reset it as zero for consistency.
	 */
	{ PMU_SYS_REG(SYS_PMCCFILTR_EL0), .access = access_pmu_evtyper,
-	  .reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
+	  .reg = PMCCFILTR_EL0 },

	{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
	{ SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
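Dropping .reset/.val works because PMU_SYS_REG() already supplies
reset_pmu_reg() as the default reset hook for PMU registers; roughly
(a sketch, the exact field layout may differ from the tree):

/* Sketch of the default that PMU_SYS_REG() installs in sys_regs.c. */
#define PMU_SYS_REG(r)						\
	SYS_DESC(r), .reset = reset_pmu_reg,			\
	.visibility = pmu_visibility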
From patchwork Fri Dec 30 03:59:24 2022
Subject: [PATCH 3/7] KVM: arm64: PMU: Preserve vCPU's PMCR_EL0.N value on vCPU reset
From: Reiji Watanabe <reijiw@google.com>
Date: Thu, 29 Dec 2022 19:59:24 -0800
Message-ID: <20221230035928.3423990-4-reijiw@google.com>

The number of PMU event counters is indicated in PMCR_EL0.N. For a
vCPU with PMUv3 configured, its value will be the same as the host
value by default. Userspace can set PMCR_EL0.N for the vCPU to a lower
value than the host value using KVM_SET_ONE_REG. However, that is
effectively unsupported today, as reset_pmcr() resets PMCR_EL0.N back
to the host value on every vCPU reset.

Change reset_pmcr() to preserve the vCPU's PMCR_EL0.N value across
vCPU reset so that userspace can limit the number of PMU event
counters on the vCPU.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/pmu-emul.c | 6 ++++++
 arch/arm64/kvm/sys_regs.c | 4 +++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 24908400e190..937a272b00a5 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -213,6 +213,12 @@ void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu)
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
 		pmu->pmc[i].idx = i;
+
+	/*
+	 * Initialize PMCR_EL0 for the vCPU with the host value so that
+	 * the value is available at the very first vCPU reset.
+	 */
+	__vcpu_sys_reg(vcpu, PMCR_EL0) = read_sysreg(pmcr_el0);
 }
 
 /**
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4959658b502c..67c1bd39b478 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -637,8 +637,10 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	if (!kvm_arm_support_pmu_v3())
 		return;
 
+	/* PMCR_EL0 for the vCPU is set to the host value at vCPU creation. */
+
 	/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
-	pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr = __vcpu_sys_reg(vcpu, r->reg) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
 	if (!kvm_supports_32bit_el0())
 		pmcr |= ARMV8_PMU_PMCR_LC;
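The userspace flow this enables, sketched below. Illustrative only:
vcpu_fd is an assumed KVM vCPU fd with PMUv3 already configured, and
reg_id would be the KVM_ARM64_SYS_REG() encoding of SYS_PMCR_EL0 that
the selftest later in this series uses.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Read PMCR_EL0, rewrite the N field, and write it back. */
static int set_vcpu_pmcr_n(int vcpu_fd, uint64_t reg_id, uint64_t n)
{
	struct kvm_one_reg reg;
	uint64_t pmcr;

	reg.id = reg_id;
	reg.addr = (uint64_t)&pmcr;
	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
		return -1;

	pmcr &= ~(0x1fULL << 11);	/* clear PMCR_EL0.N */
	pmcr |= (n & 0x1f) << 11;	/* set the requested counter count */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}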
From patchwork Fri Dec 30 03:59:25 2022
Subject: [PATCH 4/7] tools: arm64: Import perf_event.h
From: Reiji Watanabe <reijiw@google.com>
Date: Thu, 29 Dec 2022 19:59:25 -0800
Message-ID: <20221230035928.3423990-5-reijiw@google.com>

Copy perf_event.h from the kernel's arch/arm64/include/asm/perf_event.h.
The following patches will use macros defined in this header.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 tools/arch/arm64/include/asm/perf_event.h | 258 ++++++++++++++++++++++
 1 file changed, 258 insertions(+)
 create mode 100644 tools/arch/arm64/include/asm/perf_event.h

diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
new file mode 100644
index 000000000000..b2ae51f5f93d
--- /dev/null
+++ b/tools/arch/arm64/include/asm/perf_event.h
@@ -0,0 +1,258 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ASM_PERF_EVENT_H
+#define __ASM_PERF_EVENT_H
+
+#define ARMV8_PMU_MAX_COUNTERS	32
+#define ARMV8_PMU_COUNTER_MASK	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * Common architectural and microarchitectural event numbers.
+ */
+#define ARMV8_PMUV3_PERFCTR_SW_INCR	0x0000
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL	0x0001
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL	0x0002
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL	0x0003
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE	0x0004
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL	0x0005
+#define ARMV8_PMUV3_PERFCTR_LD_RETIRED	0x0006
+#define ARMV8_PMUV3_PERFCTR_ST_RETIRED	0x0007
+#define ARMV8_PMUV3_PERFCTR_INST_RETIRED	0x0008
+#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN	0x0009
+#define ARMV8_PMUV3_PERFCTR_EXC_RETURN	0x000A
+#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED	0x000B
+#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED	0x000C
+#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED	0x000D
+#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED	0x000E
+#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED	0x000F
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED	0x0010
+#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES	0x0011
+#define ARMV8_PMUV3_PERFCTR_BR_PRED	0x0012
+#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS	0x0013
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE	0x0014
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB	0x0015
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE	0x0016
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL	0x0017
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB	0x0018
+#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS	0x0019
+#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR	0x001A
+#define ARMV8_PMUV3_PERFCTR_INST_SPEC	0x001B
+#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED	0x001C
+#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES	0x001D
+#define ARMV8_PMUV3_PERFCTR_CHAIN	0x001E
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE	0x001F
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE	0x0020
+#define ARMV8_PMUV3_PERFCTR_BR_RETIRED	0x0021
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED	0x0022
+#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND	0x0023
+#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND	0x0024
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB	0x0025
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB	0x0026
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE	0x0027
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL	0x0028
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE	0x0029
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL	0x002A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE	0x002B
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB	0x002C
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL	0x002D
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL	0x002E
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB	0x002F
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB	0x0030
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS	0x0031
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE	0x0032
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS	0x0033
+#define ARMV8_PMUV3_PERFCTR_DTLB_WALK	0x0034
+#define ARMV8_PMUV3_PERFCTR_ITLB_WALK	0x0035
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD	0x0036
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD	0x0037
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD	0x0038
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD	0x0039
+#define ARMV8_PMUV3_PERFCTR_OP_RETIRED	0x003A
+#define ARMV8_PMUV3_PERFCTR_OP_SPEC	0x003B
+#define ARMV8_PMUV3_PERFCTR_STALL	0x003C
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND	0x003D
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND	0x003E
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT	0x003F
+
+/* Statistical profiling extension microarchitectural events */
+#define ARMV8_SPE_PERFCTR_SAMPLE_POP	0x4000
+#define ARMV8_SPE_PERFCTR_SAMPLE_FEED	0x4001
+#define ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE	0x4002
+#define ARMV8_SPE_PERFCTR_SAMPLE_COLLISION	0x4003
+
+/* AMUv1 architecture events */
+#define ARMV8_AMU_PERFCTR_CNT_CYCLES	0x4004
+#define ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM	0x4005
+
+/* long-latency read miss events */
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS	0x4006
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD	0x4009
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS	0x400A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD	0x400B
+
+/* Trace buffer events */
+#define ARMV8_PMUV3_PERFCTR_TRB_WRAP	0x400C
+#define ARMV8_PMUV3_PERFCTR_TRB_TRIG	0x400E
+
+/* Trace unit events */
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0	0x4010
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1	0x4011
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2	0x4012
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3	0x4013
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4	0x4018
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5	0x4019
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6	0x401A
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7	0x401B
+
+/* additional latency from alignment events */
+#define ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT	0x4020
+#define ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT	0x4021
+#define ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT	0x4022
+
+/* Armv8.5 Memory Tagging Extension events */
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED	0x4024
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD	0x4025
+#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR	0x4026
+
+/* ARMv8 recommended implementation defined event types */
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD	0x0040
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR	0x0041
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD	0x0042
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR	0x0043
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER	0x0044
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER	0x0045
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM	0x0046
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN	0x0047
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL	0x0048
+
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD	0x004C
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR	0x004D
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD	0x004E
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR	0x004F
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD	0x0050
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR	0x0051
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD	0x0052
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR	0x0053
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM	0x0056
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN	0x0057
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL	0x0058
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD	0x005C
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR	0x005D
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD	0x005E
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR	0x005F
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD	0x0060
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR	0x0061
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED	0x0062
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED	0x0063
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL	0x0064
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH	0x0065
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD	0x0066
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR	0x0067
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC	0x0068
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC	0x0069
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC	0x006A
+
+#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC	0x006C
+#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC	0x006D
+#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC	0x006E
+#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC	0x006F
+#define ARMV8_IMPDEF_PERFCTR_LD_SPEC	0x0070
+#define ARMV8_IMPDEF_PERFCTR_ST_SPEC	0x0071
+#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC	0x0072
+#define ARMV8_IMPDEF_PERFCTR_DP_SPEC	0x0073
+#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC	0x0074
+#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC	0x0075
+#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC	0x0076
+#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC	0x0077
+#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC	0x0078
+#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC	0x0079
+#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC	0x007A
+
+#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC	0x007C
+#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC	0x007D
+#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC	0x007E
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF	0x0081
+#define ARMV8_IMPDEF_PERFCTR_EXC_SVC	0x0082
+#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT	0x0083
+#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT	0x0084
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ	0x0086
+#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ	0x0087
+#define ARMV8_IMPDEF_PERFCTR_EXC_SMC	0x0088
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_HVC	0x008A
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT	0x008B
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT	0x008C
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER	0x008D
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ	0x008E
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ	0x008F
+#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC	0x0090
+#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC	0x0091
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD	0x00A0
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR	0x00A1
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD	0x00A2
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR	0x00A3
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM	0x00A6
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN	0x00A7
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL	0x00A8
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMU_PMCR_E	(1 << 0) /* Enable all counters */
+#define ARMV8_PMU_PMCR_P	(1 << 1) /* Reset all counters */
+#define ARMV8_PMU_PMCR_C	(1 << 2) /* Cycle counter reset */
+#define ARMV8_PMU_PMCR_D	(1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMU_PMCR_X	(1 << 4) /* Export to ETM */
+#define ARMV8_PMU_PMCR_DP	(1 << 5) /* Disable CCNT if non-invasive debug*/
+#define ARMV8_PMU_PMCR_LC	(1 << 6) /* Overflow on 64 bit cycle counter */
+#define ARMV8_PMU_PMCR_LP	(1 << 7) /* Long event counter enable */
+#define ARMV8_PMU_PMCR_N_SHIFT	11	 /* Number of counters supported */
+#define ARMV8_PMU_PMCR_N_MASK	0x1f
+#define ARMV8_PMU_PMCR_MASK	0xff	 /* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define ARMV8_PMU_OVSR_MASK	0xffffffff /* Mask for writable bits */
+#define ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define ARMV8_PMU_EVTYPE_MASK	0xc800ffff /* Mask for writable bits */
+#define ARMV8_PMU_EVTYPE_EVENT	0xffff	   /* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define ARMV8_PMU_EXCLUDE_EL1	(1U << 31)
+#define ARMV8_PMU_EXCLUDE_EL0	(1U << 30)
+#define ARMV8_PMU_INCLUDE_EL2	(1U << 27)
+
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_PMU_USERENR_MASK	0xf	 /* Mask for writable bits */
+#define ARMV8_PMU_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_PMU_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
+
+/* PMMIR_EL1.SLOTS mask */
+#define ARMV8_PMU_SLOTS_MASK	0xff
+
+#define ARMV8_PMU_BUS_SLOTS_SHIFT	8
+#define ARMV8_PMU_BUS_SLOTS_MASK	0xff
+#define ARMV8_PMU_BUS_WIDTH_SHIFT	16
+#define ARMV8_PMU_BUS_WIDTH_MASK	0xf
+
+#endif /* __ASM_PERF_EVENT_H */
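A quick usage sketch of the imported header (illustrative, not part of
the series; assumes the tools include path exposes it as
<asm/perf_event.h>, as the file placement above suggests):

#include <stdint.h>
#include <stdio.h>
#include <asm/perf_event.h>

/* Decode the fields of a raw PMCR_EL0 value with the imported macros. */
static void print_pmcr(uint64_t pmcr)
{
	unsigned int n = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;

	printf("N=%u E=%d LC=%d LP=%d\n", n,
	       !!(pmcr & ARMV8_PMU_PMCR_E),
	       !!(pmcr & ARMV8_PMU_PMCR_LC),
	       !!(pmcr & ARMV8_PMU_PMCR_LP));
}

int main(void)
{
	/* Hypothetical value: 6 counters, PMU enabled. */
	print_pmcr((6ULL << ARMV8_PMU_PMCR_N_SHIFT) | ARMV8_PMU_PMCR_E);
	return 0;
}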
From patchwork Fri Dec 30 03:59:26 2022
Subject: [PATCH 5/7] KVM: selftests: aarch64: Introduce vpmu_counter_access test
From: Reiji Watanabe <reijiw@google.com>
Date: Thu, 29 Dec 2022 19:59:26 -0800
Message-ID: <20221230035928.3423990-6-reijiw@google.com>

Introduce the vpmu_counter_access test for arm64 platforms. The test
configures PMUv3 for a vCPU, sets PMCR_EL1.N for the vCPU, and checks
that the guest consistently sees the same number of PMU event counters
(PMCR_EL1.N) that userspace set. The test is run with each PMCR_EL1.N
value that is equal to or less than PMCR_EL1.N on the host.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/vpmu_counter_access.c         | 181 ++++++++++++++++++
 3 files changed, 183 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 4a30d684e208..98816f714051 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -9,6 +9,7 @@
 /aarch64/vcpu_width_config
 /aarch64/vgic_init
 /aarch64/vgic_irq
+/aarch64/vpmu_counter_access
 /s390x/memop
 /s390x/resets
 /s390x/sync_regs_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 1d85b8e218a0..4dc5166116f1 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -158,6 +158,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
new file mode 100644
index 000000000000..4760030eee28
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -0,0 +1,181 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vpmu_counter_access - Test vPMU event counter access
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This test checks if the guest can see the same number of the PMU event
+ * counters (PMCR_EL1.N) that userspace sets.
+ * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ */
+#include <kvm_util.h>
+#include <processor.h>
+#include <test_util.h>
+#include <vgic.h>
+#include <asm/perf_event.h>
+#include <linux/bitfield.h>
+
+/* The max number of the PMU event counters (excluding the cycle counter) */
+#define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+static uint64_t pmcr_extract_n(uint64_t pmcr_val)
+{
+	return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+}
+
+/*
+ * The guest is configured with PMUv3 with @expected_pmcr_n number of
+ * event counters.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ */
+static void guest_code(uint64_t expected_pmcr_n)
+{
+	uint64_t pmcr, pmcr_n;
+
+	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+	pmcr = read_sysreg(pmcr_el0);
+	pmcr_n = pmcr_extract_n(pmcr);
+
+	/* Make sure that PMCR_EL0.N indicates the value userspace set */
+	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
+
+	GUEST_DONE();
+}
+
+#define GICD_BASE_GPA	0x8000000ULL
+#define GICR_BASE_GPA	0x80A0000ULL
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
+				     int *gic_fd)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct kvm_vcpu_init init;
+	uint8_t pmuver;
+	uint64_t dfr0, irq = 23;
+	struct kvm_device_attr irq_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
+		.addr = (uint64_t)&irq,
+	};
+	struct kvm_device_attr init_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+
+	vm = vm_create(1);
+	ucall_init(vm, NULL);
+
+	/* Create vCPU with PMUv3 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+
+	/* Make sure that PMUv3 support is indicated in the ID register */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
+		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+
+	/* Initialize vPMU */
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+	*vcpup = vcpu;
+	return vm;
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+{
+	struct ucall uc;
+
+	vcpu_args_set(vcpu, 1, pmcr_n);
+	vcpu_run(vcpu);
+	switch (get_ucall(vcpu, &uc)) {
+	case UCALL_ABORT:
+		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		break;
+	case UCALL_DONE:
+		break;
+	default:
+		TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		break;
+	}
+}
+
+/*
+ * Create a guest with one vCPU, set the PMCR_EL1.N for the vCPU to @pmcr_n,
+ * and run the test.
+ */
+static void run_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t sp, pmcr, pmcr_orig;
+	struct kvm_vcpu_init init;
+
+	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Save the initial sp to restore it later to run the guest again */
+	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
+
+	/* Update the PMCR_EL1.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+	pmcr |= (pmcr_n & ARMV8_PMU_PMCR_N_MASK) << ARMV8_PMU_PMCR_N_SHIFT;
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	/*
+	 * Reset and re-initialize the vCPU, and run the guest code again to
+	 * check if PMCR_EL1.N is preserved.
+	 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Return the default number of implemented PMU event counters excluding
+ * the cycle counter (i.e. PMCR_EL1.N value) for the guest.
+ */
+static uint64_t get_pmcr_n_limit(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t pmcr;
+
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	close(gic_fd);
+	kvm_vm_free(vm);
+	return pmcr_extract_n(pmcr);
+}
+
+int main(void)
+{
+	uint64_t i, pmcr_n;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	pmcr_n = get_pmcr_n_limit();
+	for (i = 0; i <= pmcr_n; i++)
+		run_test(i);
+
+	return 0;
+}
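For reference, the PMUVer sanity check create_vpmu_vm() performs,
restated as a standalone sketch. Field position and values are from
the Arm ARM (ID_AA64DFR0_EL1.PMUVer lives in bits [11:8]); the macro
names here are local stand-ins for the selftest's ID_AA64DFR0_*
definitions.

#include <stdint.h>

#define DFR0_PMUVER_SHIFT	8
#define DFR0_PMUVER_MASK	0xfULL
#define PMUVER_IMP_DEF		0xf	/* IMPLEMENTATION DEFINED PMU */
#define PMUVER_8_0		0x1	/* PMUv3 for Armv8.0 */

/*
 * An IMP_DEF PMU (0xf) must be rejected even though it is numerically
 * the largest encoding, hence the two-sided check.
 */
static int dfr0_has_pmuv3(uint64_t dfr0)
{
	uint8_t pmuver = (dfr0 >> DFR0_PMUVER_SHIFT) & DFR0_PMUVER_MASK;

	return pmuver != PMUVER_IMP_DEF && pmuver >= PMUVER_8_0;
}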
From patchwork Fri Dec 30 03:59:27 2022
Subject: [PATCH 6/7] KVM: selftests: aarch64: vPMU register test for implemented counters
From: Reiji Watanabe <reijiw@google.com>
Date: Thu, 29 Dec 2022 19:59:27 -0800
Message-ID: <20221230035928.3423990-7-reijiw@google.com>

Add a new test case to the vpmu_counter_access test to check if PMU
registers or their bits for implemented counters on the vCPU are
readable/writable as expected, and can be programmed to count events.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 .../kvm/aarch64/vpmu_counter_access.c         | 347 +++++++++++++++++-
 1 file changed, 344 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 4760030eee28..5b9d837f903a 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,7 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL1.N) that userspace sets.
+ * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
+ * those counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include <kvm_util.h>
@@ -18,19 +19,350 @@
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/*
+ * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}_EL0
+ * were basically copied from arch/arm64/kernel/perf_event.c.
+ */
+#define PMEVN_CASE(n, case_macro) \
+	case n: case_macro(n); break
+
+#define PMEVN_SWITCH(x, case_macro)				\
+	do {							\
+		switch (x) {					\
+		PMEVN_CASE(0, case_macro);			\
+		PMEVN_CASE(1, case_macro);			\
+		PMEVN_CASE(2, case_macro);			\
+		PMEVN_CASE(3, case_macro);			\
+		PMEVN_CASE(4, case_macro);			\
+		PMEVN_CASE(5, case_macro);			\
+		PMEVN_CASE(6, case_macro);			\
+		PMEVN_CASE(7, case_macro);			\
+		PMEVN_CASE(8, case_macro);			\
+		PMEVN_CASE(9, case_macro);			\
+		PMEVN_CASE(10, case_macro);			\
+		PMEVN_CASE(11, case_macro);			\
+		PMEVN_CASE(12, case_macro);			\
+		PMEVN_CASE(13, case_macro);			\
+		PMEVN_CASE(14, case_macro);			\
+		PMEVN_CASE(15, case_macro);			\
+		PMEVN_CASE(16, case_macro);			\
+		PMEVN_CASE(17, case_macro);			\
+		PMEVN_CASE(18, case_macro);			\
+		PMEVN_CASE(19, case_macro);			\
+		PMEVN_CASE(20, case_macro);			\
+		PMEVN_CASE(21, case_macro);			\
+		PMEVN_CASE(22, case_macro);			\
+		PMEVN_CASE(23, case_macro);			\
+		PMEVN_CASE(24, case_macro);			\
+		PMEVN_CASE(25, case_macro);			\
+		PMEVN_CASE(26, case_macro);			\
+		PMEVN_CASE(27, case_macro);			\
+		PMEVN_CASE(28, case_macro);			\
+		PMEVN_CASE(29, case_macro);			\
+		PMEVN_CASE(30, case_macro);			\
+		default:					\
+			GUEST_ASSERT_1(0, x);			\
+		}						\
+	} while (0)
+
+#define RETURN_READ_PMEVCNTRN(n) \
+	return read_sysreg(pmevcntr##n##_el0)
+static unsigned long read_pmevcntrn(int n)
+{
+	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+	return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+	write_sysreg(val, pmevcntr##n##_el0)
+static void write_pmevcntrn(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+	isb();
+}
+
+#define READ_PMEVTYPERN(n) \
+	return read_sysreg(pmevtyper##n##_el0)
+static unsigned long read_pmevtypern(int n)
+{
+	PMEVN_SWITCH(n, READ_PMEVTYPERN);
+	return 0;
+}
+
+#define WRITE_PMEVTYPERN(n) \
+	write_sysreg(val, pmevtyper##n##_el0)
+static void write_pmevtypern(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+	isb();
+}
+
+/* Read PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
+static inline unsigned long read_sel_evcntr(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevcntr_el0);
+}
+
+/* Write PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
+static inline void write_sel_evcntr(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevcntr_el0);
+	isb();
+}
+
+/* Read PMEVTTYPER_EL0 through PMXEVTYPER_EL0 */
+static inline unsigned long read_sel_evtyper(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevtyper_el0);
+}
+
+/* Write PMEVTTYPER_EL0 through PMXEVTYPER_EL0 */
+static inline void write_sel_evtyper(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevtyper_el0);
+	isb();
+}
+
+static inline void enable_counter(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_counter(int idx)
+{
+	/*
+	 * Writing only the target bit to the CLR register; OR-ing in the
+	 * currently-set bits would disable every enabled counter.
+	 */
+	write_sysreg(BIT(idx), pmcntenclr_el0);
+	isb();
+}
+
+/*
+ * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}_EL0
+ * accessors that test cases will use. Each of the accessors will
+ * either directly read/write PMEVT{CNTR,TYPER}_EL0
+ * (i.e. {read,write}_pmev{cnt,type}rn()), or read/write them through
+ * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
+ *
+ * This is used to test that combinations of those accessors provide
+ * consistent behavior.
+ */
+struct pmc_accessor {
+	/* A function to be used to read PMEVTCNTR_EL0 */
+	unsigned long	(*read_cntr)(int idx);
+	/* A function to be used to write PMEVTCNTR_EL0 */
+	void		(*write_cntr)(int idx, unsigned long val);
+	/* A function to be used to read PMEVTTYPER_EL0 */
+	unsigned long	(*read_typer)(int idx);
+	/* A function to be used to write PMEVTTYPER_EL0 */
+	void		(*write_typer)(int idx, unsigned long val);
+};
+
+struct pmc_accessor pmc_accessors[] = {
+	/* test with all direct accesses */
+	{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
+	/* test with all indirect accesses */
+	{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
+	/* read with direct accesses, and write with indirect accesses */
+	{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
+	/* read with indirect accesses, and write with direct accesses */
+	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
+};
+
+static void pmu_disable_reset(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters, disabling them */
+	pmcr &= ~ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static void pmu_enable(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters and enable the PMU */
+	pmcr |= ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static bool pmu_event_is_supported(uint64_t event)
+{
+	GUEST_ASSERT_1(event < 64, event);
+	return (read_sysreg(pmceid0_el0) & BIT(event));
+}
+
 static uint64_t pmcr_extract_n(uint64_t pmcr_val)
 {
 	return (pmcr_val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
 }
 
+#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)		   \
+{									   \
+	uint64_t _tval = read_sysreg(regname);				   \
+									   \
+	if (set_expected)						   \
+		GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \
+	else								   \
+		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
+}
+
+/*
+ * Check if @mask bits in {PMCNTEN,PMOVS}{SET,CLR} registers
+ * are set or cleared as specified in @set_expected.
+ */
+static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
+{
+	GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
+}
+
+/*
+ * Check if the bit in {PMCNTEN,PMOVS}{SET,CLR} registers corresponding
+ * to the specified counter (@pmc_idx) can be read/written as expected.
+ * When @set_op is true, it tries to set the bit for the counter in
+ * those registers by writing the SET registers (the bit won't be set
+ * if the counter is not implemented though).
+ * Otherwise, it tries to clear the bits in the registers by writing
+ * the CLR registers.
+ * Then, it checks if the values indicated in the registers are as expected.
+ */
+static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
+{
+	uint64_t pmcr_n, test_bit = BIT(pmc_idx);
+	bool set_expected = false;
+
+	if (set_op) {
+		write_sysreg(test_bit, pmcntenset_el0);
+		write_sysreg(test_bit, pmovsset_el0);
+
+		/* The bit will be set only if the counter is implemented */
+		pmcr_n = pmcr_extract_n(read_sysreg(pmcr_el0));
+		set_expected = (pmc_idx < pmcr_n) ? true : false;
+	} else {
+		write_sysreg(test_bit, pmcntenclr_el0);
+		write_sysreg(test_bit, pmovsclr_el0);
+	}
+	check_bitmap_pmu_regs(test_bit, set_expected);
+}
+
+/*
+ * Tests for reading/writing registers for the (implemented) event counter
+ * specified by @pmc_idx.
+ */
+static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	uint64_t write_data, read_data, read_data_prev, test_bit;
+
+	/* Disable all PMCs and reset all PMCs to zero. */
+	pmu_disable_reset();
+
+	/*
+	 * Tests for reading/writing {PMCNTEN,PMOVS}{SET,CLR}_EL1.
+	 */
+
+	test_bit = 1ul << pmc_idx;
+	/* Make sure that the bit in those registers is set to 0 */
+	test_bitmap_pmu_regs(test_bit, false);
+	/* Test if setting the bit in those registers works */
+	test_bitmap_pmu_regs(test_bit, true);
+	/* Test if clearing the bit in those registers works */
+	test_bitmap_pmu_regs(test_bit, false);
+
+	/*
+	 * Tests for reading/writing the event type register.
+	 */
+
+	read_data = acc->read_typer(pmc_idx);
+	/*
+	 * Set the event type register to an arbitrary value just for testing
+	 * of reading/writing the register.
+	 * The Arm ARM says that for events 0x0000 through 0x003F,
+	 * the value indicated in the PMEVTYPER_EL0.evtCount field is
+	 * the value written to the field even when the specified event
+	 * is not supported.
+	 */
+	write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+	acc->write_typer(pmc_idx, write_data);
+	read_data = acc->read_typer(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+	/*
+	 * Tests for reading/writing the event count register.
+	 */
+
+	read_data = acc->read_cntr(pmc_idx);
+
+	/* The count value must be 0, as it is not used after the reset */
+	GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data);
+
+	write_data = read_data + pmc_idx + 0x12345;
+	acc->write_cntr(pmc_idx, write_data);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+	/* The following test requires the INST_RETIRED event support. */
+	if (!pmu_event_is_supported(ARMV8_PMUV3_PERFCTR_INST_RETIRED))
+		return;
+
+	pmu_enable();
+	acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+
+	/*
+	 * Make sure that the counter doesn't count the INST_RETIRED
+	 * event when disabled, and that it does count the event when enabled.
+	 */
+	disable_counter(pmc_idx);
+	read_data_prev = acc->read_cntr(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+
+	enable_counter(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+
+	/*
+	 * The counter should have increased by at least 1, as there is at
+	 * least one instruction between enabling the counter and reading
+	 * it (the test assumes the event counters are not being used by
+	 * the host's higher-priority events).
+	 */
+	GUEST_ASSERT_4(read_data > read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+}
+
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
- * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
+ * if reading/writing PMU registers for implemented counters can work
+ * as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
 	uint64_t pmcr, pmcr_n;
+	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
@@ -40,6 +372,15 @@ static void guest_code(uint64_t expected_pmcr_n)
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
 
+	/*
+	 * Tests for reading/writing PMU registers for implemented counters.
+	 * Use each combination of PMEVT{CNTR,TYPER}_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = 0; pmc < pmcr_n; pmc++)
+			test_access_pmc_regs(&pmc_accessors[i], pmc);
+	}
+
 	GUEST_DONE();
 }
 
@@ -97,7 +438,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
-		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx");
 		break;
 	case UCALL_DONE:
 		break;
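A note on the PMEVN_CASE()/PMEVN_SWITCH() boilerplate this patch
copies: mrs/msr encode the system-register name in the instruction
itself, so read_sysreg()/write_sysreg() need a compile-time name, and a
runtime counter index has to be lowered to one case per counter
register. A minimal sketch of that dispatch pattern (guest-side,
assuming the selftest's read_sysreg()):

/*
 * A runtime index cannot be spliced into a register name, so each
 * counter gets its own case. The full macro in the patch covers
 * counters 0..30.
 */
#define READ_PMEVCNTR_CASE(n)	case n: return read_sysreg(pmevcntr##n##_el0)

static unsigned long read_pmevcntr_sketch(int n)
{
	switch (n) {
	READ_PMEVCNTR_CASE(0);
	READ_PMEVCNTR_CASE(1);
	READ_PMEVCNTR_CASE(2);
	/* ... one case per event counter, up to 30 ... */
	default:
		return 0;
	}
}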
From patchwork Fri Dec 30 03:59:28 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13084049
Date: Thu, 29 Dec 2022 19:59:28 -0800
In-Reply-To: <20221230035928.3423990-1-reijiw@google.com>
References: <20221230035928.3423990-1-reijiw@google.com>
Message-ID: <20221230035928.3423990-8-reijiw@google.com>
Subject: [PATCH 7/7] KVM: selftests: aarch64: vPMU register test for unimplemented counters
From: Reiji Watanabe
To: Marc Zyngier , kvmarm@lists.cs.columbia.edu, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini ,
 Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata ,
 Reiji Watanabe
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Add a new test case to the vpmu_counter_access test to check that PMU
registers, and their bits, for unimplemented counters are either not
accessible or are RAZ, as expected.

Signed-off-by: Reiji Watanabe
---
 .../kvm/aarch64/vpmu_counter_access.c         | 103 +++++++++++++++++-
 .../selftests/kvm/include/aarch64/processor.h |   1 +
 2 files changed, 98 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 5b9d837f903a..af1abdfa2be2 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,8 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL1.N) that userspace sets, and if the guest can access
- * those counters.
+ * counters (PMCR_EL1.N) that userspace sets, if the guest can access
+ * those counters, and if the guest cannot access any other counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include
@@ -179,6 +179,51 @@ struct pmc_accessor pmc_accessors[] = {
 	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };
 
+#define INVALID_EC	(-1ul)
+uint64_t expected_ec = INVALID_EC;
+uint64_t op_end_addr;
+
+static void guest_sync_handler(struct ex_regs *regs)
+{
+	uint64_t esr, ec;
+
+	esr = read_sysreg(esr_el1);
+	ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
+	GUEST_ASSERT_4(op_end_addr && (expected_ec == ec),
+		       regs->pc, esr, ec, expected_ec);
+
+	/* Will return to op_end_addr after the handler exits */
+	regs->pc = op_end_addr;
+
+	/*
+	 * Clear op_end_addr, and set expected_ec to INVALID_EC,
+	 * as a sign that an exception has occurred.
+	 */
+	op_end_addr = 0;
+	expected_ec = INVALID_EC;
+}
+
+/*
+ * Run the given operation that should trigger an exception with the
+ * given exception class. The exception handler (guest_sync_handler)
+ * will reset op_end_addr to 0 and expected_ec to INVALID_EC, and
+ * will return to the instruction at @done_label.
+ * The @done_label must be a unique label in this test program.
+ */
+#define TEST_EXCEPTION(ec, ops, done_label)		\
+{							\
+	extern int done_label;				\
+							\
+	WRITE_ONCE(op_end_addr, (uint64_t)&done_label);	\
+	GUEST_ASSERT(ec != INVALID_EC);			\
+	WRITE_ONCE(expected_ec, ec);			\
+	dsb(ish);					\
+	ops;						\
+	asm volatile(#done_label":");			\
+	GUEST_ASSERT(!op_end_addr);			\
+	GUEST_ASSERT(expected_ec == INVALID_EC);	\
+}
+
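/*
 * Illustration (not part of the patch): to make the macro's control flow
 * concrete, a call such as
 *
 *	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
 *
 * expands to roughly the following, with the label name substituted by
 * the preprocessor (acc and pmc_idx come from the enclosing test):
 */
{
	extern int inv_rd_cntr;

	/* Tell the handler where to resume, and which EC to expect. */
	WRITE_ONCE(op_end_addr, (uint64_t)&inv_rd_cntr);
	GUEST_ASSERT(ESR_EC_UNKNOWN != INVALID_EC);
	WRITE_ONCE(expected_ec, ESR_EC_UNKNOWN);
	dsb(ish);
	acc->read_cntr(pmc_idx);		/* should UNDEF here ... */
	asm volatile("inv_rd_cntr:");		/* ... and resume here */
	/* Both variables are cleared by the handler, proving it ran. */
	GUEST_ASSERT(!op_end_addr);
	GUEST_ASSERT(expected_ec == INVALID_EC);
}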
 static void pmu_disable_reset(void)
 {
 	uint64_t pmcr = read_sysreg(pmcr_el0);
@@ -352,16 +397,38 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
 		       pmc_idx, acc, read_data, read_data_prev);
 }
 
+/*
+ * Tests for reading/writing registers for the unimplemented event counter
+ * specified by @pmc_idx (>= PMCR_EL1.N).
+ */
+static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	/*
+	 * Reading/writing the event count/type registers should cause
+	 * an UNDEFINED exception.
+	 */
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
+	/*
+	 * The bit corresponding to the (unimplemented) counter in
+	 * {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers should be RAZ.
+	 */
+	test_bitmap_pmu_regs(1ul << pmc_idx, true);
+	test_bitmap_pmu_regs(1ul << pmc_idx, false);
+}
+
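/*
 * Side note (sketch, not part of the patch): stripped of the shared
 * plumbing, the RAZ check that test_bitmap_pmu_regs() performs for an
 * unimplemented counter amounts to verifying that even an explicit write
 * of the counter's bit through a SET register does not stick. A
 * hypothetical standalone helper, assuming the same sysreg accessors:
 */
static void sketch_check_raz_bit(int pmc_idx)
{
	uint64_t bit = 1ul << pmc_idx;

	/* Try to set the bit; for a RAZ bit the write must not stick. */
	write_sysreg(bit, pmcntenset_el0);
	GUEST_ASSERT(!(read_sysreg(pmcntenset_el0) & bit));
	write_sysreg(bit, pmovsset_el0);
	GUEST_ASSERT(!(read_sysreg(pmovsset_el0) & bit));
}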
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
  * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
- * if reading/writing PMU registers for implemented counters can work
- * as expected.
+ * if reading/writing PMU registers for implemented or unimplemented
+ * counters can work as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
-	uint64_t pmcr, pmcr_n;
+	uint64_t pmcr, pmcr_n, unimp_mask;
 	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
@@ -372,6 +439,14 @@ static void guest_code(uint64_t expected_pmcr_n)
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
 
+	/*
+	 * Make sure that (RAZ) bits corresponding to unimplemented event
+	 * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers are reset to zero.
+	 * (NOTE: bits for implemented event counters are reset to UNKNOWN)
+	 */
+	unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
+	check_bitmap_pmu_regs(unimp_mask, false);
+
 	/*
 	 * Tests for reading/writing PMU registers for implemented counters.
 	 * Use each combination of PMEVT{CNTR,TYPER}_EL0 accessor functions.
@@ -381,6 +456,14 @@ static void guest_code(uint64_t expected_pmcr_n)
 			test_access_pmc_regs(&pmc_accessors[i], pmc);
 	}
 
+	/*
+	 * Tests for reading/writing PMU registers for unimplemented counters.
+	 * Use each combination of PMEVT{CNTR,TYPER}_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
+			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
+	}
 	GUEST_DONE();
 }
 
@@ -394,7 +477,7 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
-	uint8_t pmuver;
+	uint8_t pmuver, ec;
 	uint64_t dfr0, irq = 23;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
@@ -408,11 +491,18 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	vm = vm_create(1);
 	ucall_init(vm, NULL);
+	vm_init_descriptor_tables(vm);
+	/* Catch exceptions for easier debugging */
+	for (ec = 0; ec < ESR_EC_NUM; ec++) {
+		vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec,
+					guest_sync_handler);
+	}
 
 	/* Create vCPU with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	vcpu_init_descriptor_tables(vcpu);
 	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
 	/* Make sure that PMUv3 support is indicated in the ID register */
@@ -481,6 +571,7 @@ static void run_test(uint64_t pmcr_n)
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_init_descriptor_tables(vcpu);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
 
diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index 5f977528e09c..52d87809356c 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -104,6 +104,7 @@ enum {
 #define ESR_EC_SHIFT		26
 #define ESR_EC_MASK		(ESR_EC_NUM - 1)
 
+#define ESR_EC_UNKNOWN		0x0
 #define ESR_EC_SVC64		0x15
 #define ESR_EC_IABT		0x21
 #define ESR_EC_DABT		0x25
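As a worked example of the unimp_mask computation in guest_code() above
(values assumed purely for illustration): if ARMV8_PMU_MAX_GENERAL_COUNTERS
is 31, with counter 31 being the cycle counter, and userspace sets
PMCR_EL0.N to 6, then:

	unimp_mask = GENMASK_ULL(30, 6);	/* == 0x7fffffc0 */
	check_bitmap_pmu_regs(unimp_mask, false);

That is, the bits for general-purpose counters 6 through 30 in the
{PMCNTEN,PMOVS}{SET,CLR}_EL1 registers must all read as zero after vCPU
reset.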