From patchwork Thu Aug 17 00:30:18 2023
Subject: [PATCH v5 01/12] KVM: arm64: PMU: Introduce a helper to set the guest's PMU
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
 Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe, Colton Lewis,
 Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Date: Thu, 17 Aug 2023 00:30:18 +0000
Message-ID: <20230817003029.3073210-2-rananta@google.com>
In-Reply-To: <20230817003029.3073210-1-rananta@google.com>

From: Reiji Watanabe

Introduce a new helper function to set the guest's PMU
(kvm->arch.arm_pmu), and use it when the guest's PMU needs to be set.
This helper will make it easier for the following patches to modify
the relevant code.

No functional change intended.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/pmu-emul.c | 52 +++++++++++++++++++++++++++------------
 1 file changed, 36 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 5606509724787..0ffd1efa90c07 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -865,6 +865,32 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
     return true;
 }
 
+static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+    lockdep_assert_held(&kvm->arch.config_lock);
+
+    if (!arm_pmu) {
+        /*
+         * No PMU set, get the default one.
+         *
+         * The observant among you will notice that the supported_cpus
+         * mask does not get updated for the default PMU even though it
+         * is quite possible the selected instance supports only a
+         * subset of cores in the system. This is intentional, and
+         * upholds the preexisting behavior on heterogeneous systems
+         * where vCPUs can be scheduled on any core but the guest
+         * counters could stop working.
+         */
+        arm_pmu = kvm_pmu_probe_armpmu();
+        if (!arm_pmu)
+            return -ENODEV;
+    }
+
+    kvm->arch.arm_pmu = arm_pmu;
+
+    return 0;
+}
+
 static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 {
     struct kvm *kvm = vcpu->kvm;
@@ -884,9 +910,13 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
             break;
         }
 
-        kvm->arch.arm_pmu = arm_pmu;
+        ret = kvm_arm_set_vm_pmu(kvm, arm_pmu);
+        if (ret) {
+            WARN_ON(ret);
+            break;
+        }
+
         cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
-        ret = 0;
         break;
     }
 }
@@ -908,20 +938,10 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
         return -EBUSY;
 
     if (!kvm->arch.arm_pmu) {
-        /*
-         * No PMU set, get the default one.
-         *
-         * The observant among you will notice that the supported_cpus
-         * mask does not get updated for the default PMU even though it
-         * is quite possible the selected instance supports only a
-         * subset of cores in the system. This is intentional, and
-         * upholds the preexisting behavior on heterogeneous systems
-         * where vCPUs can be scheduled on any core but the guest
-         * counters could stop working.
-         */
-        kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
-        if (!kvm->arch.arm_pmu)
-            return -ENODEV;
+        int ret = kvm_arm_set_vm_pmu(kvm, NULL);
+
+        if (ret)
+            return ret;
     }
 
     switch (attr->attr) {
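A minimal sketch of how a caller is expected to use the new helper, assuming
only what the hunk above shows (the lockdep assertion and the NULL
convention); the function name and locking wrapper below are illustrative,
not part of the patch:

/*
 * Illustrative caller only: kvm_arm_set_vm_pmu() asserts that
 * kvm->arch.config_lock is held, and a NULL arm_pmu asks for the default.
 */
static int example_use_default_pmu(struct kvm *kvm)
{
    int ret;

    mutex_lock(&kvm->arch.config_lock);
    ret = kvm_arm_set_vm_pmu(kvm, NULL);    /* probe and install the default arm_pmu */
    mutex_unlock(&kvm->arch.config_lock);

    return ret;                             /* -ENODEV if no PMUv3 instance was found */
}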
From patchwork Thu Aug 17 00:30:19 2023
Subject: [PATCH v5 02/12] KVM: arm64: PMU: Set the default PMU for the guest on vCPU reset
From: Raghavendra Rao Ananta
Date: Thu, 17 Aug 2023 00:30:19 +0000
Message-ID: <20230817003029.3073210-3-rananta@google.com>

From: Reiji Watanabe

The following patches will use the number of counters information from
the arm_pmu and use this to set PMCR_EL0.N for the guest during vCPU
reset. However, the guest is not associated with any arm_pmu until
userspace configures the vPMU device attributes, and a reset can happen
before that. So, set the default PMU for the guest (guarded by a
kvm_arm_support_pmu_v3() check) just before doing the reset.

No functional change intended.
Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/pmu-emul.c |  9 +--------
 arch/arm64/kvm/reset.c    | 18 +++++++++++++-----
 include/kvm/arm_pmu.h     |  6 ++++++
 3 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 0ffd1efa90c07..b87822024828a 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -865,7 +865,7 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
     return true;
 }
 
-static int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 {
     lockdep_assert_held(&kvm->arch.config_lock);
 
@@ -937,13 +937,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
     if (vcpu->arch.pmu.created)
         return -EBUSY;
 
-    if (!kvm->arch.arm_pmu) {
-        int ret = kvm_arm_set_vm_pmu(kvm, NULL);
-
-        if (ret)
-            return ret;
-    }
-
     switch (attr->attr) {
     case KVM_ARM_VCPU_PMU_V3_IRQ: {
         int __user *uaddr = (int __user *)(long)attr->addr;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index bc8556b6f4590..4c20f1ccd0789 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -206,6 +206,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
  */
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 {
+    struct kvm *kvm = vcpu->kvm;
     struct vcpu_reset_state reset_state;
     int ret;
     bool loaded;
@@ -216,6 +217,18 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
     vcpu->arch.reset_state.reset = false;
     spin_unlock(&vcpu->arch.mp_state_lock);
 
+    /*
+     * When the vCPU has a PMU, but no PMU is set for the guest
+     * yet, set the default one.
+     */
+    if (kvm_vcpu_has_pmu(vcpu) && unlikely(!kvm->arch.arm_pmu)) {
+        ret = -EINVAL;
+        if (kvm_arm_support_pmu_v3())
+            ret = kvm_arm_set_vm_pmu(kvm, NULL);
+        if (ret)
+            return ret;
+    }
+
     /* Reset PMU outside of the non-preemptible section */
     kvm_pmu_vcpu_reset(vcpu);
 
@@ -257,11 +270,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
         } else {
             pstate = VCPU_RESET_PSTATE_EL1;
         }
-
-        if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) {
-            ret = -EINVAL;
-            goto out;
-        }
         break;
     }
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 847da6fc27139..66a2f8477641e 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -100,6 +100,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 })
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);
+int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
 
 #else
 struct kvm_pmu {
@@ -172,6 +173,11 @@ static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
     return 0;
 }
 
+static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
+{
+    return -ENODEV;
+}
+
 #endif
 #endif
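For context, a hypothetical VMM-side sequence that exercises the new reset
path: the vCPU is initialized with the PMUv3 feature (which triggers
kvm_reset_vcpu()) before any PMU device attribute has been set, so the
default arm_pmu is now chosen at reset time. The target selection and error
handling below are illustrative only:

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int init_vcpu_with_pmu(int vcpu_fd)
{
    struct kvm_vcpu_init init;

    memset(&init, 0, sizeof(init));
    init.target = KVM_ARM_TARGET_GENERIC_V8;      /* a real VMM would query KVM_ARM_PREFERRED_TARGET */
    init.features[0] = 1U << KVM_ARM_VCPU_PMU_V3; /* vCPU has a PMU, but no arm_pmu is set yet */

    /* kvm_reset_vcpu() runs here and now associates the default arm_pmu. */
    return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}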
From patchwork Thu Aug 17 00:30:20 2023
Subject: [PATCH v5 03/12] KVM: arm64: PMU: Clear PM{C,I}NTEN{SET,CLR} and PMOVS{SET,CLR} on vCPU reset
From: Raghavendra Rao Ananta
Date: Thu, 17 Aug 2023 00:30:20 +0000
Message-ID: <20230817003029.3073210-4-rananta@google.com>
From: Reiji Watanabe

On vCPU reset, PMCNTEN{SET,CLR}_EL0, PMINTEN{SET,CLR}_EL1, and
PMOVS{SET,CLR}_EL1 for a vCPU are reset by reset_pmu_reg(). This
function clears RAZ bits of those registers corresponding to
unimplemented event counters on the vCPU, and sets bits corresponding
to implemented event counters to a predefined pseudo UNKNOWN value
(some bits are set to 1). The function identifies (un)implemented
event counters on the vCPU based on the PMCR_EL0.N value on the host.
Using the host value for this would be problematic when KVM supports
letting userspace set PMCR_EL0.N to a value different from the host
value (some of the RAZ bits of those registers could end up being set
to 1).

Fix this by simply clearing the registers, which ensures that all the
RAZ bits are cleared even when the PMCR_EL0.N value for the vCPU
differs from the host value. Use reset_val() to do this instead of
fixing reset_pmu_reg(), and remove reset_pmu_reg(), as it is no longer
used.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/sys_regs.c | 21 +--------------------
 1 file changed, 1 insertion(+), 20 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2ca2973abe66f..9d3d44d165eed 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -717,25 +717,6 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
     return REG_HIDDEN;
 }
 
-static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
-{
-    u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
-
-    /* No PMU available, any PMU reg may UNDEF... */
-    if (!kvm_arm_support_pmu_v3())
-        return 0;
-
-    n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
-    n &= ARMV8_PMU_PMCR_N_MASK;
-    if (n)
-        mask |= GENMASK(n - 1, 0);
-
-    reset_unknown(vcpu, r);
-    __vcpu_sys_reg(vcpu, r->reg) &= mask;
-
-    return __vcpu_sys_reg(vcpu, r->reg);
-}
-
 static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
     reset_unknown(vcpu, r);
@@ -1115,7 +1096,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
       trap_wcr, reset_wcr, 0, 0, get_wcr, set_wcr }
 
 #define PMU_SYS_REG(name)                        \
-    SYS_DESC(SYS_##name), .reset = reset_pmu_reg,        \
+    SYS_DESC(SYS_##name), .reset = reset_val,            \
     .visibility = pmu_visibility
 
 /* Macro to expand the PMEVCNTRn_EL0 register */
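For reference, this is the logic being deleted above (reconstructed from the
removed lines, comments added); it sized the mask from the host's
PMCR_EL0.N, which is exactly what becomes unsafe once the guest's
PMCR_EL0.N can be smaller than the host's:

/* Reconstruction of the removed reset_pmu_reg(), for illustration only. */
static u64 removed_reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
    u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);

    n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;    /* host counter count */
    n &= ARMV8_PMU_PMCR_N_MASK;
    if (n)
        mask |= GENMASK(n - 1, 0);          /* bits for host-implemented counters */

    reset_unknown(vcpu, r);                 /* pseudo UNKNOWN value (some bits set) */
    __vcpu_sys_reg(vcpu, r->reg) &= mask;   /* may leave bits set above the guest's N */

    return __vcpu_sys_reg(vcpu, r->reg);
}

Clearing the registers outright (reset_val with a zero .val) keeps every bit
that should be RAZ at zero regardless of the guest's N.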
From patchwork Thu Aug 17 00:30:21 2023
Subject: [PATCH v5 04/12] KVM: arm64: PMU: Don't define the sysreg reset() for PM{USERENR,CCFILTR}_EL0
From: Raghavendra Rao Ananta
Date: Thu, 17 Aug 2023 00:30:21 +0000
Message-ID: <20230817003029.3073210-5-rananta@google.com>

From: Reiji Watanabe

The default reset function for PMU registers (defined by PMU_SYS_REG)
now simply clears a specified register. Use the default one for
PMUSERENR_EL0 and PMCCFILTR_EL0, as KVM currently clears those
registers on vCPU reset (NOTE: All non-RES0 fields of those registers
have UNKNOWN reset values, and the same fields of their AArch32
registers have 0 reset values).

No functional change intended.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/sys_regs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9d3d44d165eed..39e9248c935e7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2178,7 +2178,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
      * in 32bit mode. Here we choose to reset it as zero for consistency.
      */
     { PMU_SYS_REG(PMUSERENR_EL0), .access = access_pmuserenr,
-      .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 },
+      .reg = PMUSERENR_EL0, },
     { PMU_SYS_REG(PMOVSSET_EL0), .access = access_pmovs,
       .reg = PMOVSSET_EL0 },
 
@@ -2336,7 +2336,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
      * in 32bit mode. Here we choose to reset it as zero for consistency.
      */
     { PMU_SYS_REG(PMCCFILTR_EL0), .access = access_pmu_evtyper,
-      .reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 },
+      .reg = PMCCFILTR_EL0, },
 
     EL2_REG(VPIDR_EL2, access_rw, reset_unknown, 0),
     EL2_REG(VMPIDR_EL2, access_rw, reset_unknown, 0),
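For readers following the sys_reg_desc tables, a sketch of what the
PMUSERENR_EL0 entry effectively expands to after this change; the standalone
initializer below is illustrative (PMU_SYS_REG is the macro updated in the
previous patch, and .val defaults to 0):

/* Illustrative expansion only, equivalent to the table entry above: */
static const struct sys_reg_desc example_pmuserenr_desc = {
    SYS_DESC(SYS_PMUSERENR_EL0),
    .reset = reset_val,             /* supplied by PMU_SYS_REG() */
    .visibility = pmu_visibility,   /* supplied by PMU_SYS_REG() */
    .access = access_pmuserenr,
    .reg = PMUSERENR_EL0,           /* .val stays 0, so reset_val() still clears the register */
};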
From patchwork Thu Aug 17 00:30:22 2023
Subject: [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N
From: Raghavendra Rao Ananta
Date: Thu, 17 Aug 2023 00:30:22 +0000
Message-ID: <20230817003029.3073210-6-rananta@google.com>

From: Reiji Watanabe

Some code extracts PMCR_EL0.N using ARMV8_PMU_PMCR_N_SHIFT and
ARMV8_PMU_PMCR_N_MASK. Define ARMV8_PMU_PMCR_N (0x1f << 11), and
simplify that code using FIELD_GET() and/or ARMV8_PMU_PMCR_N. The
following patches will also use these macros to extract PMCR_EL0.N.

No functional change intended.
Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/pmu-emul.c      | 3 +--
 arch/arm64/kvm/sys_regs.c      | 7 +++----
 drivers/perf/arm_pmuv3.c       | 3 +--
 include/linux/perf/arm_pmuv3.h | 2 +-
 4 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index b87822024828a..f7b5fa16341ad 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -245,9 +245,8 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
-    u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
+    u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
 
-    val &= ARMV8_PMU_PMCR_N_MASK;
     if (val == 0)
         return BIT(ARMV8_PMU_CYCLE_IDX);
     else
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 39e9248c935e7..30108f09e088b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -750,7 +750,7 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
         return 0;
 
     /* Only preserve PMCR_EL0.N, and reset the rest to 0 */
-    pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
+    pmcr = read_sysreg(pmcr_el0) & ARMV8_PMU_PMCR_N;
     if (!kvm_supports_32bit_el0())
         pmcr |= ARMV8_PMU_PMCR_LC;
 
@@ -858,10 +858,9 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
 {
-    u64 pmcr, val;
+    u64 val;
 
-    pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
-    val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+    val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
     if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
         kvm_inject_undefined(vcpu);
         return false;
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 08b3a1bf0ef62..7618b0adc0b8c 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -1128,8 +1128,7 @@ static void __armv8pmu_probe_pmu(void *info)
     probe->present = true;
 
     /* Read the nb of CNTx counters supported from PMNC */
-    cpu_pmu->num_events = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT)
-        & ARMV8_PMU_PMCR_N_MASK;
+    cpu_pmu->num_events = FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
 
     /* Add the CPU cycles counter */
     cpu_pmu->num_events += 1;
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index e3899bd77f5cc..ecbcf3f93560c 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -216,7 +216,7 @@
 #define ARMV8_PMU_PMCR_LC   (1 << 6) /* Overflow on 64 bit cycle counter */
 #define ARMV8_PMU_PMCR_LP   (1 << 7) /* Long event counter enable */
 #define ARMV8_PMU_PMCR_N_SHIFT  11   /* Number of counters supported */
-#define ARMV8_PMU_PMCR_N_MASK   0x1f
+#define ARMV8_PMU_PMCR_N    (0x1f << ARMV8_PMU_PMCR_N_SHIFT)
 #define ARMV8_PMU_PMCR_MASK 0xff     /* Mask for writable bits */
 
 /*
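A self-contained illustration of the transformation above; the mask value and
shift are taken from the hunk, while the file, helper names, and EX_ macros
below are made up for the example:

#include <linux/bitfield.h>
#include <linux/types.h>

#define EX_PMCR_N_SHIFT 11
#define EX_PMCR_N_MASK  0x1f
#define EX_PMCR_N       (0x1f << EX_PMCR_N_SHIFT)

static inline u64 pmcr_n_old(u64 pmcr)
{
    return (pmcr >> EX_PMCR_N_SHIFT) & EX_PMCR_N_MASK;  /* shift-then-mask */
}

static inline u64 pmcr_n_new(u64 pmcr)
{
    return FIELD_GET(EX_PMCR_N, pmcr);  /* the mask encodes both position and width */
}

Both helpers return the same value; FIELD_GET() just removes the need to keep
a separate shift and mask in sync.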
From patchwork Thu Aug 17 00:30:23 2023
Subject: [PATCH v5 06/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0
From: Raghavendra Rao Ananta
Date: Thu, 17 Aug 2023 00:30:23 +0000
Message-ID: <20230817003029.3073210-7-rananta@google.com>
From: Reiji Watanabe

Add a helper to read a vCPU's PMCR_EL0, and use it when KVM reads a
vCPU's PMCR_EL0. The PMCR_EL0 value is tracked in the sysreg file of
each vCPU. The following patches will make (only) PMCR_EL0.N tracked
per guest. Having the new helper will be useful to combine the
PMCR_EL0.N field (tracked per guest) and the other fields (tracked per
vCPU) to provide the value of PMCR_EL0.

No functional change intended.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/arm.c      |  3 +--
 arch/arm64/kvm/pmu-emul.c | 17 +++++++++++------
 arch/arm64/kvm/sys_regs.c |  6 +++---
 include/kvm/arm_pmu.h     |  6 ++++++
 4 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d1cb298a58a08..7bd438c181f76 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -800,8 +800,7 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
         }
 
         if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
-            kvm_pmu_handle_pmcr(vcpu,
-                                __vcpu_sys_reg(vcpu, PMCR_EL0));
+            kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu));
 
         if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
             return kvm_vcpu_suspend(vcpu);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f7b5fa16341ad..42b88b1a901f9 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -67,7 +67,7 @@ static bool kvm_pmc_is_64bit(struct kvm_pmc *pmc)
 
 static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc)
 {
-    u64 val = __vcpu_sys_reg(kvm_pmc_to_vcpu(pmc), PMCR_EL0);
+    u64 val = kvm_vcpu_read_pmcr(kvm_pmc_to_vcpu(pmc));
 
     return (pmc->idx < ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LP)) ||
            (pmc->idx == ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LC));
@@ -245,7 +245,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
-    u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
+    u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
 
     if (val == 0)
         return BIT(ARMV8_PMU_CYCLE_IDX);
@@ -266,7 +266,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
     if (!kvm_vcpu_has_pmu(vcpu))
         return;
 
-    if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
+    if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val)
         return;
 
     for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
@@ -318,7 +318,7 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
     u64 reg = 0;
 
-    if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
+    if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) {
         reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
         reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
         reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
@@ -420,7 +420,7 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
 {
     int i;
 
-    if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+    if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E))
         return;
 
     /* Weed out disabled counters */
@@ -563,7 +563,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
 {
     struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
-    return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+    return (kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) &&
            (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(pmc->idx));
 }
 
@@ -1069,3 +1069,8 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
             ID_AA64DFR0_EL1_PMUVer_V3P5);
     return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
 }
+
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+    return __vcpu_sys_reg(vcpu, PMCR_EL0);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 30108f09e088b..cf4981e2c153b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -803,7 +803,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
          * Only update writeable bits of PMCR (continuing into
          * kvm_pmu_handle_pmcr() as well)
          */
-        val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+        val = kvm_vcpu_read_pmcr(vcpu);
         val &= ~ARMV8_PMU_PMCR_MASK;
         val |= p->regval & ARMV8_PMU_PMCR_MASK;
         if (!kvm_supports_32bit_el0())
@@ -811,7 +811,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
         kvm_pmu_handle_pmcr(vcpu, val);
     } else {
         /* PMCR.P & PMCR.C are RAZ */
-        val = __vcpu_sys_reg(vcpu, PMCR_EL0)
+        val = kvm_vcpu_read_pmcr(vcpu)
               & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
         p->regval = val;
     }
@@ -860,7 +860,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
 {
     u64 val;
 
-    val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
+    val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
     if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
         kvm_inject_undefined(vcpu);
         return false;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 66a2f8477641e..99fe64c81ca8b 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -102,6 +102,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);
 int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
 
 #else
 struct kvm_pmu {
 };
@@ -178,6 +179,11 @@ static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
     return -ENODEV;
 }
 
+static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+    return 0;
+}
+
 #endif
 #endif
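Every reader of the vCPU's PMCR_EL0 now goes through one accessor, so the
next patch can change how the value is composed in a single place. The
recurring idiom from the hunks above, shown in isolation (the wrapper
function below is illustrative, not added by the patch):

static inline bool example_pmu_globally_enabled(struct kvm_vcpu *vcpu)
{
    /* Global enable bit, read through the new helper. */
    return kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E;
}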
From patchwork Thu Aug 17 00:30:24 2023
Subject: [PATCH v5 07/12] KVM: arm64: PMU: Set PMCR_EL0.N for vCPU based on the associated PMU
From: Raghavendra Rao Ananta
Date: Thu, 17 Aug 2023 00:30:24 +0000
Message-ID: <20230817003029.3073210-8-rananta@google.com>
From: Reiji Watanabe

The number of PMU event counters is indicated in PMCR_EL0.N. For a
vCPU with PMUv3 configured, the value is set to the same value as the
current PE on every vCPU reset. Unless the vCPU is pinned to PEs that
have the PMU associated with the guest from the initial vCPU reset,
the value might differ from the PMU's PMCR_EL0.N on heterogeneous PMU
systems.

Fix this by setting the vCPU's PMCR_EL0.N to the PMU's PMCR_EL0.N
value. Track the PMCR_EL0.N per guest, as only one PMU can be set for
the guest (PMCR_EL0.N must be the same for all vCPUs of the guest),
and it is convenient for updating the value.

KVM does not yet support userspace modifying PMCR_EL0.N. The following
patch will add support for that.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/pmu-emul.c         | 14 +++++++++++++-
 arch/arm64/kvm/sys_regs.c         | 15 +++++++++------
 3 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d3dd05bbfe23f..0f2dbbe8f6a7e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -256,6 +256,9 @@ struct kvm_arch {
 
     cpumask_var_t supported_cpus;
 
+    /* PMCR_EL0.N value for the guest */
+    u8 pmcr_n;
+
     /* Hypercall features firmware registers' descriptor */
     struct kvm_smccc_features smccc_feat;
     struct maple_tree smccc_filter;
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 42b88b1a901f9..ce7de6bbdc967 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -681,6 +681,9 @@ void kvm_host_pmu_init(struct arm_pmu *pmu)
     if (!entry)
         goto out_unlock;
 
+    WARN_ON((pmu->num_events <= 0) ||
+            (pmu->num_events > ARMV8_PMU_MAX_COUNTERS));
+
     entry->arm_pmu = pmu;
     list_add_tail(&entry->entry, &arm_pmus);
 
@@ -887,6 +890,13 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 
     kvm->arch.arm_pmu = arm_pmu;
 
+    /*
+     * Both the num_events and PMCR_EL0.N indicates the number of
+     * PMU event counters, but the former includes the cycle counter
+     * while the latter does not.
+     */
+    kvm->arch.pmcr_n = arm_pmu->num_events - 1;
+
     return 0;
 }
 
@@ -1072,5 +1082,7 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
 
 u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
 {
-    return __vcpu_sys_reg(vcpu, PMCR_EL0);
+    u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) & ~ARMV8_PMU_PMCR_N;
+
+    return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
 }
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index cf4981e2c153b..2075901356c5b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -745,12 +745,8 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
     u64 pmcr;
 
-    /* No PMU available, PMCR_EL0 may UNDEF... */
-    if (!kvm_arm_support_pmu_v3())
-        return 0;
-
     /* Only preserve PMCR_EL0.N, and reset the rest to 0 */
-    pmcr = read_sysreg(pmcr_el0) & ARMV8_PMU_PMCR_N;
+    pmcr = kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_N;
     if (!kvm_supports_32bit_el0())
         pmcr |= ARMV8_PMU_PMCR_LC;
 
@@ -1083,6 +1079,13 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
     return true;
 }
 
+static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
+                    u64 *val)
+{
+    *val = kvm_vcpu_read_pmcr(vcpu);
+    return 0;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)                    \
     { SYS_DESC(SYS_DBGBVRn_EL1(n)),                   \
@@ -2145,7 +2148,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
     { SYS_DESC(SYS_SVCR), undef_access },
 
     { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
-      .reset = reset_pmcr, .reg = PMCR_EL0 },
+      .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr },
     { PMU_SYS_REG(PMCNTENSET_EL0),
       .access = access_pmcnten, .reg = PMCNTENSET_EL0 },
     { PMU_SYS_REG(PMCNTENCLR_EL0),
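A worked example of the new composition, mirroring the kvm_vcpu_read_pmcr()
hunk above: on a PMU whose arm_pmu reports num_events = 7 (six event counters
plus the cycle counter), every vCPU of the guest now reads PMCR_EL0.N = 6,
independent of which PE it happens to reset on. The standalone wrapper below
is illustrative only:

static u64 example_compose_pmcr(struct kvm_vcpu *vcpu)
{
    /* Per-vCPU bits (E, LC, ...) still come from the sysreg file... */
    u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) & ~ARMV8_PMU_PMCR_N;

    /* ...while N is the per-guest value derived from the arm_pmu. */
    return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
}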
Date: Thu, 17 Aug 2023 00:30:25 +0000
In-Reply-To: <20230817003029.3073210-1-rananta@google.com>
References: <20230817003029.3073210-1-rananta@google.com>
Message-ID: <20230817003029.3073210-9-rananta@google.com>
Subject: [PATCH v5 08/12] KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
From: Raghavendra Rao Ananta
To: Oliver Upton , Marc Zyngier
Cc: Alexandru Elisei , James Morse , Suzuki K Poulose , Paolo Bonzini , Zenghui Yu , Shaoqin Huang , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

From: Reiji Watanabe

KVM does not yet support userspace modifying PMCR_EL0.N (with the previous patch, KVM ignores what is written by userspace). Add support for userspace to limit PMCR_EL0.N.

Disallow userspace from setting PMCR_EL0.N to a value greater than the host value (KVM_SET_ONE_REG will fail), as KVM doesn't support more event counters than the host hardware implements.

Although this is an ABI change, it only affects userspace setting PMCR_EL0.N to a larger value than the host's. As accesses to unadvertised event counter indices are CONSTRAINED UNPREDICTABLE behavior, and PMCR_EL0.N was reset to the host value on every vCPU reset before this series, I can't think of any use case where userspace would do that.

Also, ignore writes to read-only bits that are cleared on vCPU reset, and to RES{0,1} bits (including writable bits that KVM doesn't support yet), as those bits shouldn't be modified (at least with the current KVM).
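For context, here is a minimal userspace sketch of how a VMM could use this new ABI to advertise fewer event counters to the guest. It is illustrative only and not part of the patch; the helper name and local constants are mine, and it assumes the ARM64_SYS_REG() encoding macro from the arm64 KVM UAPI headers (PMCR_EL0 is op0=3, op1=3, CRn=9, CRm=12, op2=0):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>		/* struct kvm_one_reg, KVM_{GET,SET}_ONE_REG */
#include <asm/kvm.h>		/* ARM64_SYS_REG() on arm64 */

#define PMCR_EL0_REG_ID		ARM64_SYS_REG(3, 3, 9, 12, 0)
#define PMCR_N_SHIFT		11
#define PMCR_N_MASK		(0x1fULL << PMCR_N_SHIFT)

/* Shrink the guest's PMCR_EL0.N to @n; must be done before the first KVM_RUN. */
static int limit_guest_pmcr_n(int vcpu_fd, uint64_t n)
{
	uint64_t pmcr;
	struct kvm_one_reg reg = {
		.id	= PMCR_EL0_REG_ID,
		.addr	= (uint64_t)&pmcr,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
		return -1;

	pmcr = (pmcr & ~PMCR_N_MASK) | (n << PMCR_N_SHIFT);

	/* With this patch, the write fails if @n exceeds the host PMU's N. */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}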
Signed-off-by: Reiji Watanabe Signed-off-by: Raghavendra Rao Ananta --- arch/arm64/include/asm/kvm_host.h | 3 ++ arch/arm64/kvm/pmu-emul.c | 1 + arch/arm64/kvm/sys_regs.c | 49 +++++++++++++++++++++++++++++-- 3 files changed, 51 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 0f2dbbe8f6a7e..c15ec365283d1 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -259,6 +259,9 @@ struct kvm_arch { /* PMCR_EL0.N value for the guest */ u8 pmcr_n; + /* Limit value of PMCR_EL0.N for the guest */ + u8 pmcr_n_limit; + /* Hypercall features firmware registers' descriptor */ struct kvm_smccc_features smccc_feat; struct maple_tree smccc_filter; diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c index ce7de6bbdc967..39ad56a71ad20 100644 --- a/arch/arm64/kvm/pmu-emul.c +++ b/arch/arm64/kvm/pmu-emul.c @@ -896,6 +896,7 @@ int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) * while the latter does not. */ kvm->arch.pmcr_n = arm_pmu->num_events - 1; + kvm->arch.pmcr_n_limit = arm_pmu->num_events - 1; return 0; } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 2075901356c5b..c01d62afa7db4 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1086,6 +1086,51 @@ static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, return 0; } +static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, + u64 val) +{ + struct kvm *kvm = vcpu->kvm; + u64 new_n, mutable_mask; + int ret = 0; + + new_n = FIELD_GET(ARMV8_PMU_PMCR_N, val); + + mutex_lock(&kvm->arch.config_lock); + if (unlikely(new_n != kvm->arch.pmcr_n)) { + /* + * The vCPU can't have more counters than the PMU + * hardware implements. + */ + if (new_n <= kvm->arch.pmcr_n_limit) + kvm->arch.pmcr_n = new_n; + else + ret = -EINVAL; + } + mutex_unlock(&kvm->arch.config_lock); + if (ret) + return ret; + + /* + * Ignore writes to RES0 bits, read only bits that are cleared on + * vCPU reset, and writable bits that KVM doesn't support yet. + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace) + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU. + * But, we leave the bit as it is here, as the vCPU's PMUver might + * be changed later (NOTE: the bit will be cleared on first vCPU run + * if necessary). 
+ */ + mutable_mask = (ARMV8_PMU_PMCR_MASK | ARMV8_PMU_PMCR_N); + val &= mutable_mask; + val |= (__vcpu_sys_reg(vcpu, r->reg) & ~mutable_mask); + + /* The LC bit is RES1 when AArch32 is not supported */ + if (!kvm_supports_32bit_el0()) + val |= ARMV8_PMU_PMCR_LC; + + __vcpu_sys_reg(vcpu, r->reg) = val; + return 0; +} + /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ #define DBG_BCR_BVR_WCR_WVR_EL1(n) \ { SYS_DESC(SYS_DBGBVRn_EL1(n)), \ @@ -2147,8 +2192,8 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_CTR_EL0), access_ctr }, { SYS_DESC(SYS_SVCR), undef_access }, - { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, - .reset = reset_pmcr, .reg = PMCR_EL0, .get_user = get_pmcr }, + { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr, + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr }, { PMU_SYS_REG(PMCNTENSET_EL0), .access = access_pmcnten, .reg = PMCNTENSET_EL0 }, { PMU_SYS_REG(PMCNTENCLR_EL0), From patchwork Thu Aug 17 00:30:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13355829 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 32CF7C2FC14 for ; Thu, 17 Aug 2023 00:31:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=kBwwJsVZA8tNlFtAvAecYXJCfNagDSU+q8e0OZbYl3o=; b=l3sTZoDsiV/c7ygLkj0/c3/e+k pLXN/GBZTVrQzJHShNaSXhI8a+1ocaVKjf5wkoURQyKTD06hyy6Z4GmRalw0ft7ZEZ9PEICwTUFhb QDKLjLXVRB0zz+xQNvtTkWM1PB8QMplmWMCoMqq9AQKloGpkF20jLfqo+unohXAFeiuILES9K5JTV v8U7I2n7aTgpiZApWgv7bQclFPwb+Z0H1Kl0kR9YvjNAMxvEPCjoBbWO/iDmLrv4b21no3dJ6KfzK 63xrQ9wxhAAalvB5xDnpRa4iWmbnUv9ltk8SAFGydZwDuQnKRnHCnMRsAtqa2+klaKLhhIK69b7kQ D5p7QZ9Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qWQuY-005EaN-1g; Thu, 17 Aug 2023 00:30:50 +0000 Received: from mail-yb1-xb49.google.com ([2607:f8b0:4864:20::b49]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qWQuR-005EVb-19 for linux-arm-kernel@lists.infradead.org; Thu, 17 Aug 2023 00:30:45 +0000 Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-cf4cb742715so7600667276.2 for ; Wed, 16 Aug 2023 17:30:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1692232242; x=1692837042; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=7vUBDRszcAqcGEwDGJH0ZWAn0wcg5wC1ncmHeNzqmQw=; b=VRpg/ES73/w1GBHGJPy+gEmBLm8ItklvJxqFGECzb7NaW9l562OWsNHA3fn5CdlK+0 2LhqM5r9J3ToowD0DrQuNTTyDHu7VP+2PzJQrBjTOLqaclRAp02SBqXhgBYaWmnM+cDt 06R9EHiEKroYoCCLhvqEZKntHM6MhYEzEi1zfgcPGVj4c0ptlGy4rl/K7/dU+ufLp5H3 oSN6C52/U5ShrfYC0fGdsAM1k5IAwLtIidTp2VBeEQX+LeUPZKZNc0GX0agCcZAxid7I 
dX/4HLV5SeHv2P1yilSJRPwXboCaYGLc73GYJn+gL8QvRfFvVY7w0xX5pL8fwnD+9MoW mwkw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692232242; x=1692837042; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=7vUBDRszcAqcGEwDGJH0ZWAn0wcg5wC1ncmHeNzqmQw=; b=B67AE9Dvk/YFE2U+ZNYDTwwmztYbdK+E43LzwgLCkISYx47jK4bElXw23EHC2xH5Xh 4vWTlKURA1WqfcSetflgX0Xg3JThYAOZhhgS+tW7ZjQhAEVprxnlR/AdpXxpAKGFL3VD CbTACkFHGFUff3jcOwaJeTHzekqaurwkLrGqAURqJRgPypVbq2hC5DSiyyZ/zxs4/aMo bCrjheogMBdKVH3CRA7zfg4bbgZql307VRYAS26SwMJeVAiho7HJoHHFF0zYW0SQflVw WSgKgrUXu7qjthNFW6auGgSQ0vX12M0KVWxW8dQMuW1TkTt9i5DHFJyO7W63/E+0L4Ue BYXw== X-Gm-Message-State: AOJu0YzZ2r/HhYTgmmShWgDAjFuhYenR7h7BuJ68rBMV5Kyt87A1NvXt xDAia/JwKUaBcCBDeLJNAuT+dBFZnKDO X-Google-Smtp-Source: AGHT+IHkRXaW9ibr8bqDMRY9bvYVmJ1dR2nXw+n26cc50HhZ4iQnFMulbmh4FGvHRP0yMkz3mpdCcU1LLvrD X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a05:6902:1828:b0:d47:4b58:a19e with SMTP id cf40-20020a056902182800b00d474b58a19emr48186ybb.11.1692232241970; Wed, 16 Aug 2023 17:30:41 -0700 (PDT) Date: Thu, 17 Aug 2023 00:30:26 +0000 In-Reply-To: <20230817003029.3073210-1-rananta@google.com> Mime-Version: 1.0 References: <20230817003029.3073210-1-rananta@google.com> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog Message-ID: <20230817003029.3073210-10-rananta@google.com> Subject: [PATCH v5 09/12] tools: Import arm_pmuv3.h From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier Cc: Alexandru Elisei , James Morse , Suzuki K Poulose , Paolo Bonzini , Zenghui Yu , Shaoqin Huang , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230816_173043_408717_6490468B X-CRM114-Status: GOOD ( 13.80 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Import kernel's include/linux/perf/arm_pmuv3.h, with the definition of PMEVN_SWITCH() additionally including an assert() for the 'default' case. The following patches will use macros defined in this header. Signed-off-by: Raghavendra Rao Ananta --- tools/include/perf/arm_pmuv3.h | 308 +++++++++++++++++++++++++++++++++ 1 file changed, 308 insertions(+) create mode 100644 tools/include/perf/arm_pmuv3.h diff --git a/tools/include/perf/arm_pmuv3.h b/tools/include/perf/arm_pmuv3.h new file mode 100644 index 0000000000000..1567a772abe27 --- /dev/null +++ b/tools/include/perf/arm_pmuv3.h @@ -0,0 +1,308 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2012 ARM Ltd. + */ + +#ifndef __PERF_ARM_PMUV3_H +#define __PERF_ARM_PMUV3_H + +#include +#include + +#define ARMV8_PMU_MAX_COUNTERS 32 +#define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1) + +/* + * Common architectural and microarchitectural event numbers. 
+ */ +#define ARMV8_PMUV3_PERFCTR_SW_INCR 0x0000 +#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL 0x0001 +#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL 0x0002 +#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL 0x0003 +#define ARMV8_PMUV3_PERFCTR_L1D_CACHE 0x0004 +#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL 0x0005 +#define ARMV8_PMUV3_PERFCTR_LD_RETIRED 0x0006 +#define ARMV8_PMUV3_PERFCTR_ST_RETIRED 0x0007 +#define ARMV8_PMUV3_PERFCTR_INST_RETIRED 0x0008 +#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN 0x0009 +#define ARMV8_PMUV3_PERFCTR_EXC_RETURN 0x000A +#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED 0x000B +#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED 0x000C +#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED 0x000D +#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED 0x000E +#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED 0x000F +#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED 0x0010 +#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES 0x0011 +#define ARMV8_PMUV3_PERFCTR_BR_PRED 0x0012 +#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS 0x0013 +#define ARMV8_PMUV3_PERFCTR_L1I_CACHE 0x0014 +#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB 0x0015 +#define ARMV8_PMUV3_PERFCTR_L2D_CACHE 0x0016 +#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL 0x0017 +#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB 0x0018 +#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS 0x0019 +#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR 0x001A +#define ARMV8_PMUV3_PERFCTR_INST_SPEC 0x001B +#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED 0x001C +#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES 0x001D +#define ARMV8_PMUV3_PERFCTR_CHAIN 0x001E +#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE 0x001F +#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE 0x0020 +#define ARMV8_PMUV3_PERFCTR_BR_RETIRED 0x0021 +#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED 0x0022 +#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND 0x0023 +#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND 0x0024 +#define ARMV8_PMUV3_PERFCTR_L1D_TLB 0x0025 +#define ARMV8_PMUV3_PERFCTR_L1I_TLB 0x0026 +#define ARMV8_PMUV3_PERFCTR_L2I_CACHE 0x0027 +#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL 0x0028 +#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE 0x0029 +#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL 0x002A +#define ARMV8_PMUV3_PERFCTR_L3D_CACHE 0x002B +#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB 0x002C +#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL 0x002D +#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL 0x002E +#define ARMV8_PMUV3_PERFCTR_L2D_TLB 0x002F +#define ARMV8_PMUV3_PERFCTR_L2I_TLB 0x0030 +#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS 0x0031 +#define ARMV8_PMUV3_PERFCTR_LL_CACHE 0x0032 +#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS 0x0033 +#define ARMV8_PMUV3_PERFCTR_DTLB_WALK 0x0034 +#define ARMV8_PMUV3_PERFCTR_ITLB_WALK 0x0035 +#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD 0x0036 +#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD 0x0037 +#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD 0x0038 +#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD 0x0039 +#define ARMV8_PMUV3_PERFCTR_OP_RETIRED 0x003A +#define ARMV8_PMUV3_PERFCTR_OP_SPEC 0x003B +#define ARMV8_PMUV3_PERFCTR_STALL 0x003C +#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND 0x003D +#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND 0x003E +#define ARMV8_PMUV3_PERFCTR_STALL_SLOT 0x003F + +/* Statistical profiling extension microarchitectural events */ +#define ARMV8_SPE_PERFCTR_SAMPLE_POP 0x4000 +#define ARMV8_SPE_PERFCTR_SAMPLE_FEED 0x4001 +#define ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE 0x4002 +#define ARMV8_SPE_PERFCTR_SAMPLE_COLLISION 0x4003 + +/* AMUv1 architecture events */ +#define ARMV8_AMU_PERFCTR_CNT_CYCLES 0x4004 +#define ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM 0x4005 + +/* 
long-latency read miss events */ +#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS 0x4006 +#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD 0x4009 +#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS 0x400A +#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD 0x400B + +/* Trace buffer events */ +#define ARMV8_PMUV3_PERFCTR_TRB_WRAP 0x400C +#define ARMV8_PMUV3_PERFCTR_TRB_TRIG 0x400E + +/* Trace unit events */ +#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0 0x4010 +#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1 0x4011 +#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2 0x4012 +#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3 0x4013 +#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4 0x4018 +#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5 0x4019 +#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6 0x401A +#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7 0x401B + +/* additional latency from alignment events */ +#define ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT 0x4020 +#define ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT 0x4021 +#define ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT 0x4022 + +/* Armv8.5 Memory Tagging Extension events */ +#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED 0x4024 +#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD 0x4025 +#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR 0x4026 + +/* ARMv8 recommended implementation defined event types */ +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD 0x0040 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x0041 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x0042 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x0043 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER 0x0044 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER 0x0045 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM 0x0046 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN 0x0047 +#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL 0x0048 + +#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C +#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D +#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E +#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD 0x0050 +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR 0x0051 +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD 0x0052 +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR 0x0053 + +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM 0x0056 +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN 0x0057 +#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL 0x0058 + +#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD 0x005C +#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR 0x005D +#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD 0x005E +#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR 0x005F +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060 +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061 +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED 0x0062 +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED 0x0063 +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL 0x0064 +#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH 0x0065 +#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD 0x0066 +#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR 0x0067 +#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC 0x0068 +#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC 0x0069 +#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC 0x006A + +#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC 0x006C +#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC 0x006D +#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC 0x006E +#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC 0x006F +#define ARMV8_IMPDEF_PERFCTR_LD_SPEC 0x0070 +#define ARMV8_IMPDEF_PERFCTR_ST_SPEC 0x0071 +#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC 0x0072 +#define ARMV8_IMPDEF_PERFCTR_DP_SPEC 0x0073 +#define 
ARMV8_IMPDEF_PERFCTR_ASE_SPEC 0x0074 +#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC 0x0075 +#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC 0x0076 +#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC 0x0077 +#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC 0x0078 +#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC 0x0079 +#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC 0x007A + +#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC 0x007C +#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC 0x007D +#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC 0x007E + +#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF 0x0081 +#define ARMV8_IMPDEF_PERFCTR_EXC_SVC 0x0082 +#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT 0x0083 +#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT 0x0084 + +#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ 0x0086 +#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ 0x0087 +#define ARMV8_IMPDEF_PERFCTR_EXC_SMC 0x0088 + +#define ARMV8_IMPDEF_PERFCTR_EXC_HVC 0x008A +#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT 0x008B +#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT 0x008C +#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER 0x008D +#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ 0x008E +#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ 0x008F +#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC 0x0090 +#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC 0x0091 + +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD 0x00A0 +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR 0x00A1 +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD 0x00A2 +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR 0x00A3 + +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM 0x00A6 +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN 0x00A7 +#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL 0x00A8 + +/* + * Per-CPU PMCR: config reg + */ +#define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */ +#define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */ +#define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */ +#define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */ +#define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */ +#define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/ +#define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */ +#define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */ +#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */ +#define ARMV8_PMU_PMCR_N (0x1f << ARMV8_PMU_PMCR_N_SHIFT) +#define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */ + +/* + * PMOVSR: counters overflow flag status reg + */ +#define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */ +#define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK + +/* + * PMXEVTYPER: Event selection reg + */ +#define ARMV8_PMU_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */ +#define ARMV8_PMU_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */ + +/* + * Event filters for PMUv3 + */ +#define ARMV8_PMU_EXCLUDE_EL1 (1U << 31) +#define ARMV8_PMU_EXCLUDE_EL0 (1U << 30) +#define ARMV8_PMU_INCLUDE_EL2 (1U << 27) + +/* + * PMUSERENR: user enable reg + */ +#define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */ +#define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */ +#define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */ +#define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */ +#define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */ + +/* PMMIR_EL1.SLOTS mask */ +#define ARMV8_PMU_SLOTS_MASK 0xff + +#define ARMV8_PMU_BUS_SLOTS_SHIFT 8 +#define ARMV8_PMU_BUS_SLOTS_MASK 0xff +#define ARMV8_PMU_BUS_WIDTH_SHIFT 16 +#define ARMV8_PMU_BUS_WIDTH_MASK 0xf + +/* + * This code is really good + 
*/ + +#define PMEVN_CASE(n, case_macro) \ + case n: case_macro(n); break + +#define PMEVN_SWITCH(x, case_macro) \ + do { \ + switch (x) { \ + PMEVN_CASE(0, case_macro); \ + PMEVN_CASE(1, case_macro); \ + PMEVN_CASE(2, case_macro); \ + PMEVN_CASE(3, case_macro); \ + PMEVN_CASE(4, case_macro); \ + PMEVN_CASE(5, case_macro); \ + PMEVN_CASE(6, case_macro); \ + PMEVN_CASE(7, case_macro); \ + PMEVN_CASE(8, case_macro); \ + PMEVN_CASE(9, case_macro); \ + PMEVN_CASE(10, case_macro); \ + PMEVN_CASE(11, case_macro); \ + PMEVN_CASE(12, case_macro); \ + PMEVN_CASE(13, case_macro); \ + PMEVN_CASE(14, case_macro); \ + PMEVN_CASE(15, case_macro); \ + PMEVN_CASE(16, case_macro); \ + PMEVN_CASE(17, case_macro); \ + PMEVN_CASE(18, case_macro); \ + PMEVN_CASE(19, case_macro); \ + PMEVN_CASE(20, case_macro); \ + PMEVN_CASE(21, case_macro); \ + PMEVN_CASE(22, case_macro); \ + PMEVN_CASE(23, case_macro); \ + PMEVN_CASE(24, case_macro); \ + PMEVN_CASE(25, case_macro); \ + PMEVN_CASE(26, case_macro); \ + PMEVN_CASE(27, case_macro); \ + PMEVN_CASE(28, case_macro); \ + PMEVN_CASE(29, case_macro); \ + PMEVN_CASE(30, case_macro); \ + default: \ + WARN(1, "Invalid PMEV* index\n"); \ + assert(0); \ + } \ + } while (0) + +#endif From patchwork Thu Aug 17 00:30:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13355832 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5ECB1C2FC06 for ; Thu, 17 Aug 2023 00:32:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=NC2nzUf9aGnRdmxgkQ7gcze+9Mu+hXiA2ShTb6OqEiY=; b=zL1yAcrV+hznG1niESfuSSXVoF YsdcpfAt0QxYZZQbCJW0hHxdTwvIcyC2PNvie3VI23ro/Z5uZOd28SXOSG47aj6nbYt6rzfByBrRs O8KVXb1So4ebqNpfMsLI/AJC1e0VF+8bYI/ZdceYXwwoa1h7AsMeJbGoE7O8G3pLuCln6lPJw6rt7 sXQSzWLBfgwIFu4S+6ShUa9/VG0ujvKC5EDgr0jqjYealkbSci6K1zglGdN1wdsKHlI2VfC5I+aGt aNQpsG0drjHRC9gkacFGI7Ti64DoOsb2d8LJuiQu8nPR3QuR3xQuBNYhFGZt2DDvnphURnYN+Zee5 6oTBn7wg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qWQvK-005EzS-0G; Thu, 17 Aug 2023 00:31:38 +0000 Received: from mail-ot1-x349.google.com ([2607:f8b0:4864:20::349]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qWQuT-005EWC-1T for linux-arm-kernel@lists.infradead.org; Thu, 17 Aug 2023 00:30:47 +0000 Received: by mail-ot1-x349.google.com with SMTP id 46e09a7af769-6b9c82fe107so7308349a34.2 for ; Wed, 16 Aug 2023 17:30:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1692232243; x=1692837043; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=GZ0JdVkhKqWypE65t5cwu0hfmEnTX5zk7I+0wyF8fjg=; 
Date: Thu, 17 Aug 2023 00:30:27 +0000
In-Reply-To: <20230817003029.3073210-1-rananta@google.com>
References: <20230817003029.3073210-1-rananta@google.com>
Message-ID: <20230817003029.3073210-11-rananta@google.com>
Subject: [PATCH v5 10/12] KVM: selftests: aarch64: Introduce vpmu_counter_access test
From: Raghavendra Rao Ananta
To: Oliver Upton , Marc Zyngier
Cc: Alexandru Elisei , James Morse , Suzuki K Poulose , Paolo Bonzini , Zenghui Yu , Shaoqin Huang , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

From: Reiji Watanabe

Introduce the vpmu_counter_access test for arm64 platforms. The test configures PMUv3 for a vCPU, sets PMCR_EL0.N for the vCPU, and checks whether the guest can consistently see the same number of PMU event counters (PMCR_EL0.N) that userspace sets.

The test is run with each PMCR_EL0.N value from 0 to 31 (for values greater than the host value, the test expects KVM_SET_ONE_REG on PMCR_EL0 to fail).
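One small aid for reading the test below: the FIELD_GET(ARMV8_PMU_PMCR_N, pmcr) extraction it relies on is just a shift-and-mask over PMCR_EL0 bits [15:11]. A rough standalone equivalent (the helper name is mine; the constants follow the arm_pmuv3.h import from the previous patch):

#include <stdint.h>

/* PMCR_EL0.N is bits [15:11]; see ARMV8_PMU_PMCR_N and ARMV8_PMU_PMCR_N_SHIFT. */
static inline unsigned int pmcr_el0_n(uint64_t pmcr)
{
	return (pmcr >> 11) & 0x1f;
}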
Signed-off-by: Reiji Watanabe Signed-off-by: Raghavendra Rao Ananta --- tools/testing/selftests/kvm/Makefile | 1 + .../kvm/aarch64/vpmu_counter_access.c | 235 ++++++++++++++++++ 2 files changed, 236 insertions(+) create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index c692cc86e7da8..a1599e2b82e38 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -148,6 +148,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config TEST_GEN_PROGS_aarch64 += aarch64/vgic_init TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq +TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access TEST_GEN_PROGS_aarch64 += access_tracking_perf_test TEST_GEN_PROGS_aarch64 += demand_paging_test TEST_GEN_PROGS_aarch64 += dirty_log_test diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c new file mode 100644 index 0000000000000..d0afec07948ef --- /dev/null +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c @@ -0,0 +1,235 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * vpmu_counter_access - Test vPMU event counter access + * + * Copyright (c) 2022 Google LLC. + * + * This test checks if the guest can see the same number of the PMU event + * counters (PMCR_EL0.N) that userspace sets. + * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host. + */ +#include +#include +#include +#include +#include +#include + +/* The max number of the PMU event counters (excluding the cycle counter) */ +#define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) + +struct vpmu_vm { + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + int gic_fd; +}; + +static void guest_sync_handler(struct ex_regs *regs) +{ + uint64_t esr, ec; + + esr = read_sysreg(esr_el1); + ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK; + GUEST_ASSERT_3(0, regs->pc, esr, ec); +} + +/* + * The guest is configured with PMUv3 with @expected_pmcr_n number of + * event counters. + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N. + */ +static void guest_code(uint64_t expected_pmcr_n) +{ + uint64_t pmcr, pmcr_n; + + GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS); + + pmcr = read_sysreg(pmcr_el0); + pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr); + + /* Make sure that PMCR_EL0.N indicates the value userspace set */ + GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n); + + GUEST_DONE(); +} + +#define GICD_BASE_GPA 0x8000000ULL +#define GICR_BASE_GPA 0x80A0000ULL + +/* Create a VM that has one vCPU with PMUv3 configured. 
*/ +static struct vpmu_vm *create_vpmu_vm(void *guest_code) +{ + struct kvm_vcpu_init init; + uint8_t pmuver, ec; + uint64_t dfr0, irq = 23; + struct vpmu_vm *vpmu_vm; + struct kvm_device_attr irq_attr = { + .group = KVM_ARM_VCPU_PMU_V3_CTRL, + .attr = KVM_ARM_VCPU_PMU_V3_IRQ, + .addr = (uint64_t)&irq, + }; + struct kvm_device_attr init_attr = { + .group = KVM_ARM_VCPU_PMU_V3_CTRL, + .attr = KVM_ARM_VCPU_PMU_V3_INIT, + }; + + vpmu_vm = calloc(1, sizeof(*vpmu_vm)); + TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm"); + + vpmu_vm->vm = vm_create(1); + vm_init_descriptor_tables(vpmu_vm->vm); + for (ec = 0; ec < ESR_EC_NUM; ec++) { + vm_install_sync_handler(vpmu_vm->vm, VECTOR_SYNC_CURRENT, ec, + guest_sync_handler); + } + + /* Create vCPU with PMUv3 */ + vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init); + init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3); + vpmu_vm->vcpu = aarch64_vcpu_add(vpmu_vm->vm, 0, &init, guest_code); + vcpu_init_descriptor_tables(vpmu_vm->vcpu); + vpmu_vm->gic_fd = vgic_v3_setup(vpmu_vm->vm, 1, 64, + GICD_BASE_GPA, GICR_BASE_GPA); + + /* Make sure that PMUv3 support is indicated in the ID register */ + vcpu_get_reg(vpmu_vm->vcpu, + KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0); + pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0); + TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF && + pmuver >= ID_AA64DFR0_PMUVER_8_0, + "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver); + + /* Initialize vPMU */ + vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &irq_attr); + vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &init_attr); + + return vpmu_vm; +} + +static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm) +{ + close(vpmu_vm->gic_fd); + kvm_vm_free(vpmu_vm->vm); + free(vpmu_vm); +} + +static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n) +{ + struct ucall uc; + + vcpu_args_set(vcpu, 1, pmcr_n); + vcpu_run(vcpu); + switch (get_ucall(vcpu, &uc)) { + case UCALL_ABORT: + REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx"); + break; + case UCALL_DONE: + break; + default: + TEST_FAIL("Unknown ucall %lu", uc.cmd); + break; + } +} + +/* + * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n, + * and run the test. + */ +static void run_test(uint64_t pmcr_n) +{ + struct vpmu_vm *vpmu_vm; + struct kvm_vcpu *vcpu; + uint64_t sp, pmcr, pmcr_orig; + struct kvm_vcpu_init init; + + pr_debug("Test with pmcr_n %lu\n", pmcr_n); + vpmu_vm = create_vpmu_vm(guest_code); + + vcpu = vpmu_vm->vcpu; + + /* Save the initial sp to restore them later to run the guest again */ + vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp); + + /* Update the PMCR_EL0.N with @pmcr_n */ + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig); + pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N; + pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT); + vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr); + + run_vcpu(vcpu, pmcr_n); + + /* + * Reset and re-initialize the vCPU, and run the guest code again to + * check if PMCR_EL0.N is preserved. + */ + vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init); + init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3); + aarch64_vcpu_setup(vcpu, &init); + vcpu_init_descriptor_tables(vcpu); + vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp); + vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code); + + run_vcpu(vcpu, pmcr_n); + + destroy_vpmu_vm(vpmu_vm); +} + +/* + * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for + * the vCPU to @pmcr_n, which is larger than the host value. 
+ * The attempt should fail as @pmcr_n is too big to set for the vCPU. + */ +static void run_error_test(uint64_t pmcr_n) +{ + struct vpmu_vm *vpmu_vm; + struct kvm_vcpu *vcpu; + int ret; + uint64_t pmcr, pmcr_orig; + + pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n); + vpmu_vm = create_vpmu_vm(guest_code); + vcpu = vpmu_vm->vcpu; + + /* Update the PMCR_EL0.N with @pmcr_n */ + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig); + pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N; + pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT); + + /* This should fail as @pmcr_n is too big to set for the vCPU */ + ret = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr); + TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail", + pmcr, pmcr_orig); + + destroy_vpmu_vm(vpmu_vm); +} + +/* + * Return the default number of implemented PMU event counters excluding + * the cycle counter (i.e. PMCR_EL0.N value) for the guest. + */ +static uint64_t get_pmcr_n_limit(void) +{ + struct vpmu_vm *vpmu_vm; + uint64_t pmcr; + + vpmu_vm = create_vpmu_vm(guest_code); + vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr); + destroy_vpmu_vm(vpmu_vm); + return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr); +} + +int main(void) +{ + uint64_t i, pmcr_n; + + TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3)); + + pmcr_n = get_pmcr_n_limit(); + for (i = 0; i <= pmcr_n; i++) + run_test(i); + + for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++) + run_error_test(i); + + return 0; +} From patchwork Thu Aug 17 00:30:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13355833 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 33760C2FC04 for ; Thu, 17 Aug 2023 00:32:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=UD4rSualAwju9LoHaP7VYaBYQq8S4+TOGYjTc7vKzeM=; b=wn4swEeVhmpSjr3hLdb3P+5aO0 JSR97WlI1hWMHkHlF3NdZc1z3YaFyqFbqA+fRQQ38wKtF8ncsXTFT/4Ga6vZfH43x7yRKUpT8M9TL pGSEuImkf7RENT48jFnCZHSHSFMyB19mTCjW9VHH1WhbRgNLSw3cEfRpJzXTfGxznDz3PEa+mylqY 22u2wq8H5t5wsnyCbOHrz1ntVtKEjXJEv8dZsL14/Q/zqADpzd5oUGniPEY2oJ2G1rukBbQg2bWyC OYcYpWZOncla6kpIdOCh+azkQH859/6Qncc9VAAGcyU40Ojz3n2kgLvpE2iDN9jlv+0s5kEvdzAzh GD3IlJeA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qWQvL-005F0B-01; Thu, 17 Aug 2023 00:31:39 +0000 Received: from mail-yw1-x1149.google.com ([2607:f8b0:4864:20::1149]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qWQuT-005EWL-1u for linux-arm-kernel@lists.infradead.org; Thu, 17 Aug 2023 00:30:47 +0000 Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-589a45c1b0fso93464737b3.1 for ; Wed, 16 Aug 2023 17:30:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=google.com; s=20221208; t=1692232244; x=1692837044; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=QY9Bi196f8rMXJoUWr+B9aBhr8w9IYaI6gEf0CO+e/A=; b=IQgDRbEzWHIPChvCM0AwZ4SGraCZw/DbzeOQ7qllSuV/cQHre1m8cduXNC4m9mqGSF tR7dxOHf7S+wQCiAMBMmEfru4WSt94hgZK+AcSBrnA4f4sRg0MJd8O24JJTlMZPs2XhG a3PdgGm4r/fTRkP42UUzBNP2HCrZKjkaTsG5Ww6IvqRM+g4KJPbZqrhTUcLWgzv9P9pa wRU7arp0g8jc0+5Q8jeLePoHEbwwzdVxsFpZhPzL6Fiq6o6eFK7WByFdJcG9z+o6xLNW B0oommZ211QyDjmiFAn64hm1Cezz2xpD7plRgprOOXBaulC7qz8jVu/m30bkPlq+ccbA YQbQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692232244; x=1692837044; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=QY9Bi196f8rMXJoUWr+B9aBhr8w9IYaI6gEf0CO+e/A=; b=PfupCV7PaiutxNayZuetQORfHelCJq8nknkhAR6lk3cYrOMXFZHodlulR5w7d1S/vF wikblswS+ZfPbQOc8nwixZE/zMwDLZecr68BwdQbj1hgOMzPA8Kv7Xqbt30LME7dPpxu nDHTEWc74TxUfF93kJXQ2HsB1L4t0qyB4GUY83P6JBjlcYm1cbrKJOF04wRBdczpijx8 PvRc+5RzG9PM+hygEk+Q3hzqGoKTpPZdVhHyQWBJI+219FlUSzK1PoVTxXqVPb3CX/Ls b8pehJtE4+gqciHMK3vcZ/bz41vyZDpfDTadUKfmGznzvgZ/IjlyDTVhK9fbrLHufLLT mAew== X-Gm-Message-State: AOJu0YxdMPakTKGZqnplP6xiQXMCZEbLFy/Fsl+sCepLaKFH6sueEUOe t43ngeht36cLOXF3CxlTD6g/G4urrNja X-Google-Smtp-Source: AGHT+IF6V4OIDSZvsYaTAcm3yv36Sv9/5uoBOEeQ1huPqUtwwmALoCxFYIisr2fI6KFHOzLEnHE8U83a/Hgu X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a05:690c:731:b0:586:4eae:b943 with SMTP id bt17-20020a05690c073100b005864eaeb943mr47700ywb.8.1692232244398; Wed, 16 Aug 2023 17:30:44 -0700 (PDT) Date: Thu, 17 Aug 2023 00:30:28 +0000 In-Reply-To: <20230817003029.3073210-1-rananta@google.com> Mime-Version: 1.0 References: <20230817003029.3073210-1-rananta@google.com> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog Message-ID: <20230817003029.3073210-12-rananta@google.com> Subject: [PATCH v5 11/12] KVM: selftests: aarch64: vPMU register test for implemented counters From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier Cc: Alexandru Elisei , James Morse , Suzuki K Poulose , Paolo Bonzini , Zenghui Yu , Shaoqin Huang , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230816_173045_636621_09C3E4A3 X-CRM114-Status: GOOD ( 30.04 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Reiji Watanabe Add a new test case to the vpmu_counter_access test to check if PMU registers or their bits for implemented counters on the vCPU are readable/writable as expected, and can be programmed to count events. 
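To picture the "programmed to count events" part, here is a short guest-side sequence built from the helpers this patch adds. It is a sketch only, not code from the patch; it assumes the selftest's read_sysreg()/write_sysreg()/isb() wrappers and the constants from the imported arm_pmuv3.h:

/* Program event counter @idx to count retired instructions and start it. */
static void start_inst_retired_counter(int idx)
{
	write_pmevtypern(idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
	write_pmevcntrn(idx, 0);	/* count from zero */
	enable_counter(idx);		/* sets the counter's bit in PMCNTENSET_EL0 */
	write_sysreg(read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E, pmcr_el0);
	isb();
}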
Signed-off-by: Reiji Watanabe Signed-off-by: Raghavendra Rao Ananta --- .../kvm/aarch64/vpmu_counter_access.c | 264 +++++++++++++++++- 1 file changed, 261 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c index d0afec07948ef..3a2cf38bb415d 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c @@ -5,7 +5,8 @@ * Copyright (c) 2022 Google LLC. * * This test checks if the guest can see the same number of the PMU event - * counters (PMCR_EL0.N) that userspace sets. + * counters (PMCR_EL0.N) that userspace sets, and if the guest can access + * those counters. * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host. */ #include @@ -24,6 +25,251 @@ struct vpmu_vm { int gic_fd; }; +/* Read PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */ +static inline unsigned long read_sel_evcntr(int sel) +{ + write_sysreg(sel, pmselr_el0); + isb(); + return read_sysreg(pmxevcntr_el0); +} + +/* Write PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */ +static inline void write_sel_evcntr(int sel, unsigned long val) +{ + write_sysreg(sel, pmselr_el0); + isb(); + write_sysreg(val, pmxevcntr_el0); + isb(); +} + +/* Read PMEVTYPER_EL0 through PMXEVTYPER_EL0 */ +static inline unsigned long read_sel_evtyper(int sel) +{ + write_sysreg(sel, pmselr_el0); + isb(); + return read_sysreg(pmxevtyper_el0); +} + +/* Write PMEVTYPER_EL0 through PMXEVTYPER_EL0 */ +static inline void write_sel_evtyper(int sel, unsigned long val) +{ + write_sysreg(sel, pmselr_el0); + isb(); + write_sysreg(val, pmxevtyper_el0); + isb(); +} + +static inline void enable_counter(int idx) +{ + uint64_t v = read_sysreg(pmcntenset_el0); + + write_sysreg(BIT(idx) | v, pmcntenset_el0); + isb(); +} + +static inline void disable_counter(int idx) +{ + uint64_t v = read_sysreg(pmcntenset_el0); + + write_sysreg(BIT(idx) | v, pmcntenclr_el0); + isb(); +} + +static void pmu_disable_reset(void) +{ + uint64_t pmcr = read_sysreg(pmcr_el0); + + /* Reset all counters, disabling them */ + pmcr &= ~ARMV8_PMU_PMCR_E; + write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0); + isb(); +} + +#define RETURN_READ_PMEVCNTRN(n) \ + return read_sysreg(pmevcntr##n##_el0) +static unsigned long read_pmevcntrn(int n) +{ + PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN); + return 0; +} + +#define WRITE_PMEVCNTRN(n) \ + write_sysreg(val, pmevcntr##n##_el0) +static void write_pmevcntrn(int n, unsigned long val) +{ + PMEVN_SWITCH(n, WRITE_PMEVCNTRN); + isb(); +} + +#define READ_PMEVTYPERN(n) \ + return read_sysreg(pmevtyper##n##_el0) +static unsigned long read_pmevtypern(int n) +{ + PMEVN_SWITCH(n, READ_PMEVTYPERN); + return 0; +} + +#define WRITE_PMEVTYPERN(n) \ + write_sysreg(val, pmevtyper##n##_el0) +static void write_pmevtypern(int n, unsigned long val) +{ + PMEVN_SWITCH(n, WRITE_PMEVTYPERN); + isb(); +} + +/* + * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}_EL0 + * accessors that test cases will use. Each of the accessors will + * either directly reads/writes PMEVT{CNTR,TYPER}_EL0 + * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through + * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()). + * + * This is used to test that combinations of those accessors provide + * the consistent behavior. 
+ */ +struct pmc_accessor { + /* A function to be used to read PMEVTCNTR_EL0 */ + unsigned long (*read_cntr)(int idx); + /* A function to be used to write PMEVTCNTR_EL0 */ + void (*write_cntr)(int idx, unsigned long val); + /* A function to be used to read PMEVTYPER_EL0 */ + unsigned long (*read_typer)(int idx); + /* A function to be used to write PMEVTYPER_EL0 */ + void (*write_typer)(int idx, unsigned long val); +}; + +struct pmc_accessor pmc_accessors[] = { + /* test with all direct accesses */ + { read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern }, + /* test with all indirect accesses */ + { read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper }, + /* read with direct accesses, and write with indirect accesses */ + { read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper }, + /* read with indirect accesses, and write with direct accesses */ + { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern }, +}; + +/* + * Convert a pointer of pmc_accessor to an index in pmc_accessors[], + * assuming that the pointer is one of the entries in pmc_accessors[]. + */ +#define PMC_ACC_TO_IDX(acc) (acc - &pmc_accessors[0]) + +#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected) \ +{ \ + uint64_t _tval = read_sysreg(regname); \ + \ + if (set_expected) \ + GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \ + else \ + GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\ +} + +/* + * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers + * are set or cleared as specified in @set_expected. + */ +static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected) +{ + GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected); + GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected); + GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected); + GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected); + GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected); + GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected); +} + +/* + * Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding + * to the specified counter (@pmc_idx) can be read/written as expected. + * When @set_op is true, it tries to set the bit for the counter in + * those registers by writing the SET registers (the bit won't be set + * if the counter is not implemented though). + * Otherwise, it tries to clear the bits in the registers by writing + * the CLR registers. + * Then, it checks if the values indicated in the registers are as expected. + */ +static void test_bitmap_pmu_regs(int pmc_idx, bool set_op) +{ + uint64_t pmcr_n, test_bit = BIT(pmc_idx); + bool set_expected = false; + + if (set_op) { + write_sysreg(test_bit, pmcntenset_el0); + write_sysreg(test_bit, pmintenset_el1); + write_sysreg(test_bit, pmovsset_el0); + + /* The bit will be set only if the counter is implemented */ + pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0)); + set_expected = (pmc_idx < pmcr_n) ? true : false; + } else { + write_sysreg(test_bit, pmcntenclr_el0); + write_sysreg(test_bit, pmintenclr_el1); + write_sysreg(test_bit, pmovsclr_el0); + } + check_bitmap_pmu_regs(test_bit, set_expected); +} + +/* + * Tests for reading/writing registers for the (implemented) event counter + * specified by @pmc_idx. + */ +static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx) +{ + uint64_t write_data, read_data; + + /* Disable all PMCs and reset all PMCs to zero. 
*/ + pmu_disable_reset(); + + + /* + * Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}_EL1. + */ + + /* Make sure that the bit in those registers are set to 0 */ + test_bitmap_pmu_regs(pmc_idx, false); + /* Test if setting the bit in those registers works */ + test_bitmap_pmu_regs(pmc_idx, true); + /* Test if clearing the bit in those registers works */ + test_bitmap_pmu_regs(pmc_idx, false); + + + /* + * Tests for reading/writing the event type register. + */ + + read_data = acc->read_typer(pmc_idx); + /* + * Set the event type register to an arbitrary value just for testing + * of reading/writing the register. + * ArmARM says that for the event from 0x0000 to 0x003F, + * the value indicated in the PMEVTYPER_EL0.evtCount field is + * the value written to the field even when the specified event + * is not supported. + */ + write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED); + acc->write_typer(pmc_idx, write_data); + read_data = acc->read_typer(pmc_idx); + GUEST_ASSERT_4(read_data == write_data, + pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data); + + + /* + * Tests for reading/writing the event count register. + */ + + read_data = acc->read_cntr(pmc_idx); + + /* The count value must be 0, as it is not used after the reset */ + GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data); + + write_data = read_data + pmc_idx + 0x12345; + acc->write_cntr(pmc_idx, write_data); + read_data = acc->read_cntr(pmc_idx); + GUEST_ASSERT_4(read_data == write_data, + pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data); +} + static void guest_sync_handler(struct ex_regs *regs) { uint64_t esr, ec; @@ -36,11 +282,14 @@ static void guest_sync_handler(struct ex_regs *regs) /* * The guest is configured with PMUv3 with @expected_pmcr_n number of * event counters. - * Check if @expected_pmcr_n is consistent with PMCR_EL0.N. + * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and + * if reading/writing PMU registers for implemented counters can work + * as expected. */ static void guest_code(uint64_t expected_pmcr_n) { uint64_t pmcr, pmcr_n; + int i, pmc; GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS); @@ -50,6 +299,15 @@ static void guest_code(uint64_t expected_pmcr_n) /* Make sure that PMCR_EL0.N indicates the value userspace set */ GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n); + /* + * Tests for reading/writing PMU registers for implemented counters. + * Use each combination of PMEVT{CNTR,TYPER}_EL0 accessor functions. 
+ */ + for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) { + for (pmc = 0; pmc < pmcr_n; pmc++) + test_access_pmc_regs(&pmc_accessors[i], pmc); + } + GUEST_DONE(); } @@ -121,7 +379,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n) vcpu_run(vcpu); switch (get_ucall(vcpu, &uc)) { case UCALL_ABORT: - REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx"); + REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx"); break; case UCALL_DONE: break; From patchwork Thu Aug 17 00:30:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13355834 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 454F2C2FC0F for ; Thu, 17 Aug 2023 00:32:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=eOduMsXH2F5dKDMXFsjvWazXMzuIo4dwHYQx8EihZm8=; b=QuQLPbro0BdkAFEYhOHVygMuPz dVBzIm/aWziBfb17W/qh0Zp4oH6IK9qaajoYRTCS8fmX4VY856/tdxMZ5zNPl9yuep7U3DArh5gBe lSJmNsGKVeWLDSYi6D3kwM/8sMwNnqH54W8eJirDmyR3XGLtl6voHnUQIo2FxFLYhLbbbZWNVtPWG nX9CnmHN5jmgkF3ouzAcr0lnC8P29YkPQvc10awYimAmF3k9Svety5W4KahHnNnM1TVJq7NgGYMXU 3ReatYSD2HhEOJizFD0xP2KwbzI6Iu53sw7XokbCjzbUjMYk/MO12WDBwzzrHBDo0zfnxHqb38B4g WGoIhy6A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1qWQvM-005F0p-05; Thu, 17 Aug 2023 00:31:40 +0000 Received: from mail-yb1-xb49.google.com ([2607:f8b0:4864:20::b49]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1qWQuU-005EWt-2v for linux-arm-kernel@lists.infradead.org; Thu, 17 Aug 2023 00:30:49 +0000 Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-c8f360a07a2so5871206276.2 for ; Wed, 16 Aug 2023 17:30:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1692232245; x=1692837045; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=C9WZgaBtQ/foNKH5xVWMO5gHGsxN79LunTWXKeZ2DPU=; b=Ns63vI3jmDX2cO6nGcSU3zrRpWpLVveUg3QO78ZAntMNeCQriSab0fLrxsLmt+6/i8 kBjh/eQKH6g7lasL3YQy76v9RHbhelzBmWBX//p5uqog3snD2D1ENd5uRy+GvicYVj+h sdcCH0kF2YN3F8hUUAUKD1FOC8kVwNdgvEzmMFejjvwxL71j3JK6PqUiMMyccqPCy71Z NQ3l7aw1OTe7D0ShXvQdMOEzQe9d2psPTf+DRssdfkofpQSlL/fYI2Bn0S0sW4CV4cnH laoYFI4rT5JyX0P5MK5rh6c3KAN94k7wCuWiNC6lYve9n/EetINSdwGPrBLPqkPjZsJy wTeA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1692232245; x=1692837045; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=C9WZgaBtQ/foNKH5xVWMO5gHGsxN79LunTWXKeZ2DPU=; b=MZG/9RoigvLSscGquszHtIuaIWrrNmTyUoeZdHX9wbvBc0E7vA6g+NhgoXRINyizLO q2Om8FAXcwBw3rVqpRQhdAKO1g83mIQmjsqXcteLiDSy7QTnpE9YS1tSI6jinpwZ1kPn 
Ir9Ga3LfuOCc4e4zA2s24sU9mp23wU6aDWd2HThPmlFXd885IqGfRR70ZjFPS/wB5p+R +YP6/HoHFqpLf2GHbabsZbfJBqmGomI3DY8c4aZNXvnGgtGfAlMhPPiUn9nFVa/FMGsV 2A3FoviG8YVrXfbDO+vOxMT12gex8Qml0PxxGm77gzR1Ce+DHpJuDXxvSXJ3TTY1AVYN OLlg== X-Gm-Message-State: AOJu0YznBz08514Ef4KJruQgTB1NBRm81kApoPe6Kl9YPWWgYShz3GJ+ Qr+LIEbv/dR1LmqxBXNo2S2dihXilQ9i X-Google-Smtp-Source: AGHT+IHidO8b1WPACAAuhaH6eabN3OpHJBUW05/NByAUJ8rYQHP7tgahj+xnWnsPvYRSV521Sdm+0Ium6itN X-Received: from rananta-linux.c.googlers.com ([fda3:e722:ac3:cc00:2b:ff92:c0a8:22b5]) (user=rananta job=sendgmr) by 2002:a25:cf88:0:b0:d0e:42d7:8bf1 with SMTP id f130-20020a25cf88000000b00d0e42d78bf1mr48377ybg.6.1692232245518; Wed, 16 Aug 2023 17:30:45 -0700 (PDT) Date: Thu, 17 Aug 2023 00:30:29 +0000 In-Reply-To: <20230817003029.3073210-1-rananta@google.com> Mime-Version: 1.0 References: <20230817003029.3073210-1-rananta@google.com> X-Mailer: git-send-email 2.41.0.694.ge786442a9b-goog Message-ID: <20230817003029.3073210-13-rananta@google.com> Subject: [PATCH v5 12/12] KVM: selftests: aarch64: vPMU register test for unimplemented counters From: Raghavendra Rao Ananta To: Oliver Upton , Marc Zyngier Cc: Alexandru Elisei , James Morse , Suzuki K Poulose , Paolo Bonzini , Zenghui Yu , Shaoqin Huang , Jing Zhang , Reiji Watanabe , Colton Lewis , Raghavendra Rao Anata , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20230816_173046_950293_A4B5A5B3 X-CRM114-Status: GOOD ( 21.16 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Reiji Watanabe Add a new test case to the vpmu_counter_access test to check if PMU registers or their bits for unimplemented counters are not accessible or are RAZ, as expected. Signed-off-by: Reiji Watanabe Signed-off-by: Raghavendra Rao Ananta --- .../kvm/aarch64/vpmu_counter_access.c | 93 +++++++++++++++++-- .../selftests/kvm/include/aarch64/processor.h | 1 + 2 files changed, 85 insertions(+), 9 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c index 3a2cf38bb415d..61fd1420e3cc1 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c @@ -5,8 +5,8 @@ * Copyright (c) 2022 Google LLC. * * This test checks if the guest can see the same number of the PMU event - * counters (PMCR_EL0.N) that userspace sets, and if the guest can access - * those counters. + * counters (PMCR_EL0.N) that userspace sets, if the guest can access + * those counters, and if the guest cannot access any other counters. * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host. */ #include @@ -118,9 +118,9 @@ static void write_pmevtypern(int n, unsigned long val) } /* - * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}_EL0 + * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}_EL0 * accessors that test cases will use. Each of the accessors will - * either directly reads/writes PMEVT{CNTR,TYPER}_EL0 + * either directly reads/writes PMEV{CNTR,TYPER}_EL0 * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through * PMXEV{CNTR,TYPER}_EL0 (i.e. 
@@ -270,25 +270,83 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
 		    pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
 }
 
+#define INVALID_EC	(-1ul)
+uint64_t expected_ec = INVALID_EC;
+uint64_t op_end_addr;
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
 	uint64_t esr, ec;
 
 	esr = read_sysreg(esr_el1);
 	ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
-	GUEST_ASSERT_3(0, regs->pc, esr, ec);
+	GUEST_ASSERT_4(op_end_addr && (expected_ec == ec),
+		       regs->pc, esr, ec, expected_ec);
+
+	/* Will go back to op_end_addr after the handler exits */
+	regs->pc = op_end_addr;
+
+	/*
+	 * Clear op_end_addr, and setting expected_ec to INVALID_EC
+	 * as a sign that an exception has occurred.
+	 */
+	op_end_addr = 0;
+	expected_ec = INVALID_EC;
+}
+
+/*
+ * Run the given operation that should trigger an exception with the
+ * given exception class. The exception handler (guest_sync_handler)
+ * will reset op_end_addr to 0, and expected_ec to INVALID_EC, and
+ * will come back to the instruction at the @done_label.
+ * The @done_label must be a unique label in this test program.
+ */
+#define TEST_EXCEPTION(ec, ops, done_label)			\
+{								\
+	extern int done_label;					\
+								\
+	WRITE_ONCE(op_end_addr, (uint64_t)&done_label);		\
+	GUEST_ASSERT(ec != INVALID_EC);				\
+	WRITE_ONCE(expected_ec, ec);				\
+	dsb(ish);						\
+	ops;							\
+	asm volatile(#done_label":");				\
+	GUEST_ASSERT(!op_end_addr);				\
+	GUEST_ASSERT(expected_ec == INVALID_EC);		\
+}
+
+/*
+ * Tests for reading/writing registers for the unimplemented event counter
+ * specified by @pmc_idx (>= PMCR_EL0.N).
+ */
+static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	/*
+	 * Reading/writing the event count/type registers should cause
+	 * an UNDEFINED exception.
+	 */
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
+	/*
+	 * The bit corresponding to the (unimplemented) counter in
+	 * {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers should be RAZ.
+	 */
+	test_bitmap_pmu_regs(pmc_idx, 1);
+	test_bitmap_pmu_regs(pmc_idx, 0);
 }
 
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
  * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
- * if reading/writing PMU registers for implemented counters can work
- * as expected.
+ * if reading/writing PMU registers for implemented or unimplemented
+ * counters can work as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
-	uint64_t pmcr, pmcr_n;
+	uint64_t pmcr, pmcr_n, unimp_mask;
 	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
@@ -299,15 +357,32 @@ static void guest_code(uint64_t expected_pmcr_n)
 
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
 
+	/*
+	 * Make sure that (RAZ) bits corresponding to unimplemented event
+	 * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers are reset to zero.
+	 * (NOTE: bits for implemented event counters are reset to UNKNOWN)
+	 */
+	unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
+	check_bitmap_pmu_regs(unimp_mask, false);
+
 	/*
 	 * Tests for reading/writing PMU registers for implemented counters.
-	 * Use each combination of PMEVT{CNTR,TYPER}_EL0 accessor functions.
+	 * Use each combination of PMEV{CNTR,TYPER}_EL0 accessor functions.
 	 */
 	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
 		for (pmc = 0; pmc < pmcr_n; pmc++)
 			test_access_pmc_regs(&pmc_accessors[i], pmc);
 	}
 
+	/*
+	 * Tests for reading/writing PMU registers for unimplemented counters.
+	 * Use each combination of PMEV{CNTR,TYPER}_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
+			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
+	}
+
 	GUEST_DONE();
 }
 
diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index cb537253a6b9c..c42d683102c7a 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -104,6 +104,7 @@ enum {
 #define ESR_EC_SHIFT		26
 #define ESR_EC_MASK		(ESR_EC_NUM - 1)
 
+#define ESR_EC_UNKNOWN		0x0
 #define ESR_EC_SVC64		0x15
 #define ESR_EC_IABT		0x21
 #define ESR_EC_DABT		0x25
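For reference, the unimp_mask computed in guest_code() above selects one bit per event counter index from PMCR_EL0.N up to ARMV8_PMU_MAX_GENERAL_COUNTERS - 1; those are the counters the guest must treat as unimplemented, and their {PMCNTEN,PMOVS}{SET,CLR} bits are expected to read as zero. The small user-space sketch below (example values only; GENMASK_ULL is re-defined locally in an equivalent form rather than taken from kernel headers, and bits_are_raz() is a made-up stand-in for the test's check_bitmap_pmu_regs()) shows just the mask arithmetic:

/* Illustrative sketch of the unimp_mask check, with assumed values. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Same shape as the kernel's GENMASK_ULL(h, l): bits h..l set, inclusive. */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

#define ARMV8_PMU_MAX_GENERAL_COUNTERS	31

/* The real test reads PMCNTEN/PMOVS set/clear registers; here reg_val is given. */
static bool bits_are_raz(uint64_t reg_val, uint64_t mask)
{
	return (reg_val & mask) == 0;
}

int main(void)
{
	uint64_t pmcr_n = 6;	/* example: userspace configured 6 counters */
	uint64_t unimp_mask;

	unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
	printf("unimp_mask for PMCR_EL0.N=%llu: %#llx\n",
	       (unsigned long long)pmcr_n, (unsigned long long)unimp_mask);

	/* Bits 0..5 belong to implemented counters, so this value passes. */
	printf("0x3f is RAZ for unimplemented counters: %d\n",
	       bits_are_raz(0x3f, unimp_mask));
	/* Bit 10 belongs to an unimplemented counter, so this value fails. */
	printf("0x400 is RAZ for unimplemented counters: %d\n",
	       bits_are_raz(0x400, unimp_mask));

	return 0;
}

Built with a plain cc invocation, this prints a mask of 0x7fffffc0 for PMCR_EL0.N = 6, i.e. bits 6 through 30 set, which is the set of bits the new test case expects to be RAZ.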