From patchwork Thu Aug 17 00:30:23 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13355817
Date: Thu, 17 Aug 2023 00:30:23 +0000
In-Reply-To: <20230817003029.3073210-1-rananta@google.com>
References: <20230817003029.3073210-1-rananta@google.com>
Message-ID: <20230817003029.3073210-7-rananta@google.com>
Subject: [PATCH v5 06/12] KVM: arm64: PMU: Add a helper to read a vCPU's PMCR_EL0
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier
Cc: Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini,
    Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe, Colton Lewis,
    Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

From: Reiji Watanabe

Add a helper to read a vCPU's PMCR_EL0, and use it wherever KVM reads a
vCPU's PMCR_EL0.

Currently, the PMCR_EL0 value is tracked in the sysreg file on a
per-vCPU basis. The following patches will make (only) PMCR_EL0.N
tracked per guest. The new helper will make it easy to combine the
PMCR_EL0.N field (tracked per guest) with the other fields (tracked per
vCPU) to produce the effective PMCR_EL0 value.

No functional change intended.

Signed-off-by: Reiji Watanabe
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/arm.c      |  3 +--
 arch/arm64/kvm/pmu-emul.c | 17 +++++++++++------
 arch/arm64/kvm/sys_regs.c |  6 +++---
 include/kvm/arm_pmu.h     |  6 ++++++
 4 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d1cb298a58a08..7bd438c181f76 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -800,8 +800,7 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
 		}
 
 		if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
-			kvm_pmu_handle_pmcr(vcpu,
-					    __vcpu_sys_reg(vcpu, PMCR_EL0));
+			kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu));
 
 		if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
 			return kvm_vcpu_suspend(vcpu);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f7b5fa16341ad..42b88b1a901f9 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -67,7 +67,7 @@ static bool kvm_pmc_is_64bit(struct kvm_pmc *pmc)
 
 static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc)
 {
-	u64 val = __vcpu_sys_reg(kvm_pmc_to_vcpu(pmc), PMCR_EL0);
+	u64 val = kvm_vcpu_read_pmcr(kvm_pmc_to_vcpu(pmc));
 
 	return (pmc->idx < ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LP)) ||
 	       (pmc->idx == ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LC));
@@ -245,7 +245,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
-	u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
+	u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
 
 	if (val == 0)
 		return BIT(ARMV8_PMU_CYCLE_IDX);
@@ -266,7 +266,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 	if (!kvm_vcpu_has_pmu(vcpu))
 		return;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
+	if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val)
 		return;
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
@@ -318,7 +318,7 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = 0;
 
-	if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
+	if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) {
 		reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
 		reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
 		reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
@@ -420,7 +420,7 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
 {
 	int i;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+	if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E))
 		return;
 
 	/* Weed out disabled counters */
@@ -563,7 +563,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
 {
 	struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
-	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+	return (kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) &&
 	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(pmc->idx));
 }
 
@@ -1069,3 +1069,8 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
 					      ID_AA64DFR0_EL1_PMUVer_V3P5);
 	return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
 }
+
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	return __vcpu_sys_reg(vcpu, PMCR_EL0);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 30108f09e088b..cf4981e2c153b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -803,7 +803,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		 * Only update writeable bits of PMCR (continuing into
 		 * kvm_pmu_handle_pmcr() as well)
 		 */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+		val = kvm_vcpu_read_pmcr(vcpu);
 		val &= ~ARMV8_PMU_PMCR_MASK;
 		val |= p->regval & ARMV8_PMU_PMCR_MASK;
 		if (!kvm_supports_32bit_el0())
@@ -811,7 +811,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		kvm_pmu_handle_pmcr(vcpu, val);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0)
+		val = kvm_vcpu_read_pmcr(vcpu)
 		      & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
 		p->regval = val;
 	}
@@ -860,7 +860,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
 {
 	u64 val;
 
-	val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
+	val = FIELD_GET(ARMV8_PMU_PMCR_N, kvm_vcpu_read_pmcr(vcpu));
 	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
 		kvm_inject_undefined(vcpu);
 		return false;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 66a2f8477641e..99fe64c81ca8b 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -102,6 +102,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 u8 kvm_arm_pmu_get_pmuver_limit(void);
 int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu);
+u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
 
 #else
 struct kvm_pmu {
 };
@@ -178,6 +179,11 @@ static inline int kvm_arm_set_vm_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
 	return -ENODEV;
 }
 
+static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
 #endif
 
 #endif
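
As the commit message notes, kvm_vcpu_read_pmcr() is deliberately a
trivial wrapper at this point; everything it returns is still per-vCPU
state. To illustrate where the series is headed, below is a hedged
sketch of how the helper might later compose the per-guest PMCR_EL0.N
field with the remaining per-vCPU bits. The per-guest pmcr_n field in
struct kvm_arch is an assumption here (it would be introduced by a
later patch in this series, not by this one), and the exact composition
may differ from what eventually lands:

u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
{
	/* Per-vCPU bits (E, LC, LP, ...), with the N field masked out */
	u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) & ~ARMV8_PMU_PMCR_N;

	/*
	 * Fold in the per-guest counter count. vcpu->kvm->arch.pmcr_n
	 * is hypothetical here; this patch does not add it.
	 */
	return pmcr | FIELD_PREP(ARMV8_PMU_PMCR_N, vcpu->kvm->arch.pmcr_n);
}

With a helper shaped like this, existing callers such as
kvm_pmu_valid_counter_mask() and pmu_counter_idx_valid() would pick up
the per-guest N transparently, which is why this patch funnels all the
PMCR_EL0 reads through the helper first.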