From patchwork Fri Oct 20 21:40:43 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13431220
Date: Fri, 20 Oct 2023 21:40:43 +0000
In-Reply-To: <20231020214053.2144305-1-rananta@google.com>
References: <20231020214053.2144305-1-rananta@google.com>
Message-ID: <20231020214053.2144305-4-rananta@google.com>
Subject: [PATCH v8 03/13] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
 Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe, Colton Lewis,
 Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 Eric Auger

From: Reiji Watanabe

Add a helper to read a vCPU's PMCR_EL0, and use it whenever KVM reads
a vCPU's PMCR_EL0.

Currently, the PMCR_EL0 value is tracked per vCPU. The following
patches will make (only) PMCR_EL0.N tracked per guest. The new helper
will make it easy to combine the PMCR_EL0.N field (tracked per guest)
with the other fields (tracked per vCPU) to provide the guest's view
of PMCR_EL0.

No functional change intended.
Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Eric Auger
Reviewed-by: Sebastian Ott
---
 arch/arm64/kvm/arm.c      |  3 +--
 arch/arm64/kvm/pmu-emul.c | 21 +++++++++++++++------
 arch/arm64/kvm/sys_regs.c |  6 +++---
 include/kvm/arm_pmu.h     |  6 ++++++
 4 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 08c2f76983b9d..e3074a9e23a8b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -857,8 +857,7 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
 	}
 
 	if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
-		kvm_pmu_handle_pmcr(vcpu,
-				    __vcpu_sys_reg(vcpu, PMCR_EL0));
+		kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu));
 
 	if (kvm_check_request(KVM_REQ_RESYNC_PMU_EL0, vcpu))
 		kvm_vcpu_pmu_restore_guest(vcpu);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 66c244021ff08..097bf7122130d 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -72,7 +72,7 @@ static bool kvm_pmc_is_64bit(struct kvm_pmc *pmc)
 
 static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc)
 {
-	u64 val = __vcpu_sys_reg(kvm_pmc_to_vcpu(pmc), PMCR_EL0);
+	u64 val = kvm_vcpu_read_pmcr(kvm_pmc_to_vcpu(pmc));
 
 	return (pmc->idx < ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LP)) ||
 	       (pmc->idx == ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LC));
@@ -250,7 +250,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
-	u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
+	u64 val = kvm_vcpu_read_pmcr(vcpu) >> ARMV8_PMU_PMCR_N_SHIFT;
 
 	val &= ARMV8_PMU_PMCR_N_MASK;
 	if (val == 0)
@@ -272,7 +272,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 	if (!kvm_vcpu_has_pmu(vcpu))
 		return;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
+	if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val)
 		return;
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
@@ -324,7 +324,7 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = 0;
 
-	if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
+	if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) {
 		reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 		reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
 		reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
@@ -426,7 +426,7 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
 {
 	int i;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+	if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E))
 		return;
 
 	/* Weed out disabled counters */
@@ -569,7 +569,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
 {
 	struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
-	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+	return (kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) &&
 	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(pmc->idx));
 }
 
@@ -1084,3 +1084,12 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
 			ID_AA64DFR0_EL1_PMUVer_V3P5);
 	return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
 }
+
+/**
+ * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
+ * @vcpu: The vcpu pointer
+ */
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	return __vcpu_sys_reg(vcpu, PMCR_EL0);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ce1bb97d35176..a31cecb3d29fb 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -822,7 +822,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		 * Only update writeable bits of PMCR (continuing into
 		 * kvm_pmu_handle_pmcr() as well)
 		 */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+		val = kvm_vcpu_read_pmcr(vcpu);
 		val &= ~ARMV8_PMU_PMCR_MASK;
 		val |= p->regval & ARMV8_PMU_PMCR_MASK;
 		if (!kvm_supports_32bit_el0())
@@ -830,7 +830,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		kvm_pmu_handle_pmcr(vcpu, val);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0)
+		val = kvm_vcpu_read_pmcr(vcpu)
 		      & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
 		p->regval = val;
 	}
@@ -879,7 +879,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
 {
 	u64 pmcr, val;
 
-	pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	pmcr = kvm_vcpu_read_pmcr(vcpu);
 	val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
 	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
 		kvm_inject_undefined(vcpu);
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 858ed9ce828a6..cd980d78b86b5 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -103,6 +103,7 @@ void kvm_vcpu_pmu_resync_el0(void);
 u8 kvm_arm_pmu_get_pmuver_limit(void);
 int kvm_arm_set_default_pmu(struct kvm *kvm);
 
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
 #else
 struct kvm_pmu {
 };
@@ -180,6 +181,11 @@ static inline int kvm_arm_set_default_pmu(struct kvm *kvm)
 	return -ENODEV;
 }
 
+static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
 #endif
 
 #endif
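
A note on where this is headed: once PMCR_EL0.N becomes per guest, as
the commit message describes, kvm_vcpu_read_pmcr() is the single place
where KVM can merge the per-guest N field with the per-vCPU bits. A
minimal sketch of what such a combined read could look like, assuming a
hypothetical per-guest field kvm->arch.pmcr_n (the actual field name and
plumbing are introduced by later patches in this series):

	u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
	{
		/* Per-vCPU bits (E, LC, LP, ...), with PMCR_EL0.N cleared */
		u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) &
			   ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);

		/* Fold in the per-guest number of counters (PMCR_EL0.N) */
		return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
	}

With the helper in place, that change would stay local to one function
rather than touching every PMCR_EL0 read converted by this patch.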