From patchwork Mon Jan 27 22:20:27 2025
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 13951794
Message-ID: <20250127222031.3078945-2-coltonlewis@google.com>
In-Reply-To: <20250127222031.3078945-1-coltonlewis@google.com>
References: <20250127222031.3078945-1-coltonlewis@google.com>
Date: Mon, 27 Jan 2025 22:20:27 +0000
Subject: [RFC PATCH 1/4] perf: arm_pmuv3: Introduce module param to partition the PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
 Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Paolo Bonzini,
 Shuah Khan, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.linux.dev, linux-kselftest@vger.kernel.org, Colton Lewis

For PMUv3, the MDCR_EL2.HPMN register field partitions the PMU counters
into two ranges: counters 0..HPMN-1 are accessible by EL1 and, if
allowed, EL0, while counters HPMN..N are only accessible by EL2.

Introduce a module parameter in the PMUv3 driver to set this register.
The name reserved_guest_counters reflects the intent to reserve some
counters for the guest so it may eventually be allowed direct access to
a subset of PMU functionality for increased performance.

Track HPMN and whether the PMU is partitioned in struct arm_pmu.

While FEAT_HPMN0 does allow HPMN to be set to 0, this patch specifically
disallows that case because it is not useful given the intention to let
guests access their own counters.
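For illustration, the check the probe path applies to this parameter can be
exercised in a stand-alone C sketch; the helper name and the example values
for PMCR.N and the parameter are the editor's, only the validation rule
itself comes from the patch:

#include <stdio.h>
#include <stdint.h>

/*
 * Mirrors the probe-time check: the request is only honoured when
 * 0 < reserved_guest_counters < PMCR.N; otherwise the PMU stays
 * unpartitioned and HPMN covers all general-purpose counters.
 */
static uint8_t pick_hpmn(uint8_t reserved, uint8_t pmcr_n, int *partitioned)
{
	*partitioned = (reserved > 0 && reserved < pmcr_n);
	return *partitioned ? reserved : pmcr_n;
}

int main(void)
{
	int partitioned;
	uint8_t hpmn = pick_hpmn(4, 10, &partitioned);

	/* prints: partitioned=1 guest=[0..3] host=[4..9] */
	printf("partitioned=%d guest=[0..%d] host=[%d..9]\n",
	       partitioned, hpmn - 1, hpmn);
	return 0;
}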
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h   | 10 +++++++
 arch/arm64/include/asm/arm_pmuv3.h | 10 +++++++
 drivers/perf/arm_pmuv3.c           | 43 ++++++++++++++++++++++++++++--
 include/linux/perf/arm_pmu.h       |  2 ++
 include/linux/perf/arm_pmuv3.h     |  7 +++++
 5 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 2ec0e5e83fc9..49ad90486aa5 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -277,4 +277,14 @@ static inline u64 read_pmceid1(void)
 	return val;
 }
 
+static inline u32 read_mdcr(void)
+{
+	return read_sysreg(mdcr_el2);
+}
+
+static inline void write_mdcr(u32 val)
+{
+	write_sysreg(val, mdcr_el2);
+}
+
 #endif
diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 8a777dec8d88..fc37e7e81e07 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -188,4 +188,14 @@ static inline bool is_pmuv3p9(int pmuver)
 	return pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P9;
 }
 
+static inline u64 read_mdcr(void)
+{
+	return read_sysreg(mdcr_el2);
+}
+
+static inline void write_mdcr(u64 val)
+{
+	write_sysreg(val, mdcr_el2);
+}
+
 #endif
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index b5cc11abc962..55f9ae560715 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -325,6 +325,7 @@ GEN_PMU_FORMAT_ATTR(threshold_compare);
 GEN_PMU_FORMAT_ATTR(threshold);
 
 static int sysctl_perf_user_access __read_mostly;
+static u8 reserved_guest_counters __read_mostly;
 
 static bool armv8pmu_event_is_64bit(struct perf_event *event)
 {
@@ -500,6 +501,29 @@ static void armv8pmu_pmcr_write(u64 val)
 	write_pmcr(val);
 }
 
+static u64 armv8pmu_mdcr_read(void)
+{
+	return read_mdcr();
+}
+
+static void armv8pmu_mdcr_write(u64 val)
+{
+	write_mdcr(val);
+	isb();
+}
+
+static void armv8pmu_partition(u8 hpmn)
+{
+	u64 mdcr = armv8pmu_mdcr_read();
+
+	mdcr &= ~MDCR_EL2_HPMN_MASK;
+	mdcr |= FIELD_PREP(ARMV8_PMU_MDCR_HPMN, hpmn);
+	/* Prevent guest counters counting at EL2 */
+	mdcr |= ARMV8_PMU_MDCR_HPMD;
+
+	armv8pmu_mdcr_write(mdcr);
+}
+
 static int armv8pmu_has_overflowed(u64 pmovsr)
 {
 	return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK);
@@ -1069,6 +1093,9 @@ static void armv8pmu_reset(void *info)
 
 	bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS);
 
+	if (cpu_pmu->partitioned)
+		armv8pmu_partition(cpu_pmu->hpmn);
+
 	/* The counter and interrupt enable registers are unknown at reset. */
 	armv8pmu_disable_counter(mask);
 	armv8pmu_disable_intens(mask);
@@ -1205,6 +1232,7 @@ static void __armv8pmu_probe_pmu(void *info)
 {
 	struct armv8pmu_probe_info *probe = info;
 	struct arm_pmu *cpu_pmu = probe->pmu;
+	u8 pmcr_n;
 	u64 pmceid_raw[2];
 	u32 pmceid[2];
 	int pmuver;
@@ -1215,10 +1243,19 @@ static void __armv8pmu_probe_pmu(void *info)
 	cpu_pmu->pmuver = pmuver;
 	probe->present = true;
 
+	pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
 	/* Read the nb of CNTx counters supported from PMNC */
-	bitmap_set(cpu_pmu->cntr_mask,
-		   0, FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read()));
+	bitmap_set(cpu_pmu->cntr_mask, 0, pmcr_n);
+
+	if (reserved_guest_counters > 0 && reserved_guest_counters < pmcr_n) {
+		cpu_pmu->hpmn = reserved_guest_counters;
+		cpu_pmu->partitioned = true;
+	} else {
+		reserved_guest_counters = 0;
+		cpu_pmu->hpmn = pmcr_n;
+		cpu_pmu->partitioned = false;
+	}
 
 	/* Add the CPU cycles counter */
 	set_bit(ARMV8_PMU_CYCLE_IDX, cpu_pmu->cntr_mask);
 
@@ -1516,3 +1553,5 @@ void arch_perf_update_userpage(struct perf_event *event,
 	userpg->cap_user_time_zero = 1;
 	userpg->cap_user_time_short = 1;
 }
+
+module_param(reserved_guest_counters, byte, 0);
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 4b5b83677e3f..ad97aabed25a 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -101,6 +101,8 @@ struct arm_pmu {
 	void		(*reset)(void *);
 	int		(*map_event)(struct perf_event *event);
 	DECLARE_BITMAP(cntr_mask, ARMPMU_MAX_HWEVENTS);
+	u8		hpmn; /* MDCR_EL2.HPMN: counter partition pivot */
+	bool		partitioned;
 	bool		secure_access; /* 32-bit ARM only */
 #define ARMV8_PMUV3_MAX_COMMON_EVENTS		0x40
 	DECLARE_BITMAP(pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index d698efba28a2..d399e8c6f98e 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -223,6 +223,13 @@
 	 ARMV8_PMU_PMCR_X | ARMV8_PMU_PMCR_DP | \
 	 ARMV8_PMU_PMCR_LC | ARMV8_PMU_PMCR_LP)
 
+/*
+ * Per-CPU MDCR: config reg
+ */
+#define ARMV8_PMU_MDCR_HPMN	GENMASK(4, 0)
+#define ARMV8_PMU_MDCR_HPME	BIT(7)
+#define ARMV8_PMU_MDCR_HPMD	BIT(17)
+
 /*
  * PMOVSR: counters overflow flag status reg
  */
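As a sanity check on the field definitions above, the MDCR_EL2 value that
armv8pmu_partition() composes for a given HPMN can be reproduced in plain C;
GENMASK/BIT/FIELD_PREP are expanded by hand here, and the sketch assumes the
rest of MDCR_EL2 is zero, which is not the case on real hardware:

#include <stdio.h>
#include <stdint.h>

#define MDCR_HPMN_MASK	0x1fULL		/* GENMASK(4, 0) */
#define MDCR_HPMD	(1ULL << 17)	/* BIT(17) */

int main(void)
{
	uint64_t mdcr = 0;	/* pretend the other MDCR_EL2 bits are clear */
	uint8_t hpmn = 4;

	mdcr &= ~MDCR_HPMN_MASK;	/* clear the old HPMN field */
	mdcr |= hpmn & MDCR_HPMN_MASK;	/* FIELD_PREP(HPMN, hpmn), shift is 0 */
	mdcr |= MDCR_HPMD;		/* keep guest counters from counting at EL2 */

	printf("MDCR_EL2 = 0x%llx\n", (unsigned long long)mdcr);	/* 0x20004 */
	return 0;
}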
From patchwork Mon Jan 27 22:20:28 2025
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 13951795
Message-ID: <20250127222031.3078945-3-coltonlewis@google.com>
In-Reply-To: <20250127222031.3078945-1-coltonlewis@google.com>
References: <20250127222031.3078945-1-coltonlewis@google.com>
Date: Mon, 27 Jan 2025 22:20:28 +0000
Subject: [RFC PATCH 2/4] KVM: arm64: Make guests see only counters they can access
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
 Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Paolo Bonzini,
 Shuah Khan, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.linux.dev, linux-kselftest@vger.kernel.org, Colton Lewis
The ARM architecture specifies that when MDCR_EL2.HPMN is set, EL1 and
EL0, which includes KVM guests, should read that value for PMCR.N.

Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/pmu-emul.c                                 | 8 +++++++-
 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c | 2 +-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 6c5950b9ceac..052ce8c721fe 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -993,12 +993,18 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
 {
 	struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
+	u8 limit;
+
+	if (arm_pmu->partitioned)
+		limit = arm_pmu->hpmn - 1;
+	else
+		limit = ARMV8_PMU_MAX_GENERAL_COUNTERS;
 
 	/*
 	 * The arm_pmu->cntr_mask considers the fixed counter(s) as well.
 	 * Ignore those and return only the general-purpose counters.
 	 */
-	return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
+	return bitmap_weight(arm_pmu->cntr_mask, limit);
 }
 
 static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index f9c0c86d7e85..4d5acdb66bc2 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -610,7 +610,7 @@ static void run_pmregs_validity_test(uint64_t pmcr_n)
  */
 static void run_error_test(uint64_t pmcr_n)
 {
-	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	pr_debug("Error test with pmcr_n %lu (larger than the host allows)\n", pmcr_n);
 
 	test_create_vpmu_vm_with_pmcr_n(pmcr_n, true);
 	destroy_vpmu_vm();
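A stand-alone sketch of the limit calculation above; the counter mask and
HPMN values are invented for the example, and only the hpmn - 1 versus
ARMV8_PMU_MAX_GENERAL_COUNTERS choice and the bitmap_weight() idea come
from the patch:

#include <stdio.h>
#include <stdint.h>

#define MAX_GENERAL_COUNTERS	31	/* ARMV8_PMU_MAX_GENERAL_COUNTERS */

/* popcount of the low 'limit' bits, i.e. bitmap_weight(mask, limit) */
static int weight_below(uint64_t mask, unsigned int limit)
{
	int n = 0;

	for (unsigned int i = 0; i < limit; i++)
		n += (mask >> i) & 1;
	return n;
}

int main(void)
{
	uint64_t cntr_mask = 0x3ff;	/* example: 10 general counters present */
	int partitioned = 1;
	uint8_t hpmn = 4;
	unsigned int limit = partitioned ? hpmn - 1 : MAX_GENERAL_COUNTERS;

	/* prints: guest sees 3 general counters */
	printf("guest sees %d general counters\n", weight_below(cntr_mask, limit));
	return 0;
}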
From patchwork Mon Jan 27 22:20:29 2025
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 13951796
Message-ID: <20250127222031.3078945-4-coltonlewis@google.com>
In-Reply-To: <20250127222031.3078945-1-coltonlewis@google.com>
References: <20250127222031.3078945-1-coltonlewis@google.com>
Date: Mon, 27 Jan 2025 22:20:29 +0000
Subject: [RFC PATCH 3/4] perf: arm_pmuv3: Generalize counter bitmasks
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
 Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Paolo Bonzini,
 Shuah Khan, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.linux.dev, linux-kselftest@vger.kernel.org, Colton Lewis
These bitmasks are valid for enable and interrupt registers as well as
overflow registers. Generalize the names.

Signed-off-by: Colton Lewis
---
 include/linux/perf/arm_pmuv3.h | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index d399e8c6f98e..115ee39f693a 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -230,16 +230,23 @@
 #define ARMV8_PMU_MDCR_HPME	BIT(7)
 #define ARMV8_PMU_MDCR_HPMD	BIT(17)
 
+/*
+ * Counter bitmask layouts for overflow, enable, and interrupts
+ */
+#define ARMV8_PMU_CNT_MASK_P	GENMASK(30, 0)
+#define ARMV8_PMU_CNT_MASK_C	BIT(31)
+#define ARMV8_PMU_CNT_MASK_F	BIT_ULL(32) /* arm64 only */
+#define ARMV8_PMU_CNT_MASK_ALL	(ARMV8_PMU_CNT_MASK_P | \
+				 ARMV8_PMU_CNT_MASK_C | \
+				 ARMV8_PMU_CNT_MASK_F)
 /*
  * PMOVSR: counters overflow flag status reg
  */
-#define ARMV8_PMU_OVSR_P	GENMASK(30, 0)
-#define ARMV8_PMU_OVSR_C	BIT(31)
-#define ARMV8_PMU_OVSR_F	BIT_ULL(32) /* arm64 only */
+#define ARMV8_PMU_OVSR_P	ARMV8_PMU_CNT_MASK_P
+#define ARMV8_PMU_OVSR_C	ARMV8_PMU_CNT_MASK_C
+#define ARMV8_PMU_OVSR_F	ARMV8_PMU_CNT_MASK_F
 /* Mask for writable bits is both P and C fields */
-#define ARMV8_PMU_OVERFLOWED_MASK (ARMV8_PMU_OVSR_P | ARMV8_PMU_OVSR_C | \
-				   ARMV8_PMU_OVSR_F)
-
+#define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_CNT_MASK_ALL
 /*
  * PMXEVTYPER: Event selection reg
  */
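Written out numerically, the renamed masks cover the same bits as before; the
values below follow directly from GENMASK(30, 0), BIT(31) and BIT_ULL(32) and
are only shown here as a quick cross-check:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t p = 0x7fffffffULL;	/* ARMV8_PMU_CNT_MASK_P: general counters 0..30 */
	uint64_t c = 1ULL << 31;	/* ARMV8_PMU_CNT_MASK_C: cycle counter */
	uint64_t f = 1ULL << 32;	/* ARMV8_PMU_CNT_MASK_F: instruction counter (arm64 only) */

	/* prints: P=0x7fffffff C=0x80000000 F=0x100000000 ALL=0x1ffffffff */
	printf("P=0x%llx C=0x%llx F=0x%llx ALL=0x%llx\n",
	       (unsigned long long)p, (unsigned long long)c,
	       (unsigned long long)f, (unsigned long long)(p | c | f));
	return 0;
}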
From patchwork Mon Jan 27 22:20:30 2025
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 13951797
Message-ID: <20250127222031.3078945-5-coltonlewis@google.com>
In-Reply-To: <20250127222031.3078945-1-coltonlewis@google.com>
References: <20250127222031.3078945-1-coltonlewis@google.com>
Date: Mon, 27 Jan 2025 22:20:30 +0000
Subject: [RFC PATCH 4/4] perf: arm_pmuv3: Keep out of guest counter partition
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Russell King, Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
 Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland, Paolo Bonzini,
 Shuah Khan, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.linux.dev, linux-kselftest@vger.kernel.org, Colton Lewis

If the PMU is partitioned, keep the driver out of the guest counter
partition and only use the host counter partition.
Partitioning is defined by the MDCR_EL2.HPMN register field and saved
in cpu_pmu->hpmn. The range 0..HPMN-1 is accessible by EL1 and EL0
while HPMN..PMCR.N is reserved for EL2.

Define some macros that take HPMN as an argument and construct
mutually exclusive bitmaps for testing which partition a particular
counter is in. Note that despite their different position in the
bitmap, the cycle and instruction counters are always in the guest
partition.

Signed-off-by: Colton Lewis
---
 drivers/perf/arm_pmuv3.c       | 72 +++++++++++++++++++++++++++++-----
 include/linux/perf/arm_pmuv3.h |  8 ++++
 2 files changed, 70 insertions(+), 10 deletions(-)

diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 55f9ae560715..c61845fad9d9 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -754,15 +754,19 @@ static void armv8pmu_disable_event_irq(struct perf_event *event)
 	armv8pmu_disable_intens(BIT(event->hw.idx));
 }
 
-static u64 armv8pmu_getreset_flags(void)
+static u64 armv8pmu_getreset_flags(struct arm_pmu *cpu_pmu)
 {
 	u64 value;
 
 	/* Read */
 	value = read_pmovsclr();
 
+	if (cpu_pmu->partitioned)
+		value &= ARMV8_PMU_HOST_CNT_PART(cpu_pmu->hpmn);
+	else
+		value &= ARMV8_PMU_OVERFLOWED_MASK;
+
 	/* Write to clear flags */
-	value &= ARMV8_PMU_OVERFLOWED_MASK;
 	write_pmovsclr(value);
 
 	return value;
@@ -789,6 +793,18 @@ static void armv8pmu_disable_user_access(void)
 	update_pmuserenr(0);
 }
 
+static bool armv8pmu_is_guest_part(struct arm_pmu *cpu_pmu, u8 idx)
+{
+	return cpu_pmu->partitioned &&
+		(BIT(idx) & ARMV8_PMU_GUEST_CNT_PART(cpu_pmu->hpmn));
+}
+
+static bool armv8pmu_is_host_part(struct arm_pmu *cpu_pmu, u8 idx)
+{
+	return !cpu_pmu->partitioned ||
+		(BIT(idx) & ARMV8_PMU_HOST_CNT_PART(cpu_pmu->hpmn));
+}
+
 static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
 {
 	int i;
@@ -797,6 +813,8 @@ static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
 	if (is_pmuv3p9(cpu_pmu->pmuver)) {
 		u64 mask = 0;
 		for_each_set_bit(i, cpuc->used_mask, ARMPMU_MAX_HWEVENTS) {
+			if (armv8pmu_is_guest_part(cpu_pmu, i))
+				continue;
 			if (armv8pmu_event_has_user_read(cpuc->events[i]))
 				mask |= BIT(i);
 		}
@@ -805,6 +823,8 @@ static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
 		/* Clear any unused counters to avoid leaking their contents */
 		for_each_andnot_bit(i, cpu_pmu->cntr_mask, cpuc->used_mask,
 				    ARMPMU_MAX_HWEVENTS) {
+			if (armv8pmu_is_guest_part(cpu_pmu, i))
+				continue;
 			if (i == ARMV8_PMU_CYCLE_IDX)
 				write_pmccntr(0);
 			else if (i == ARMV8_PMU_INSTR_IDX)
@@ -850,7 +870,10 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 		armv8pmu_disable_user_access();
 
 	/* Enable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
+	if (cpu_pmu->partitioned)
+		armv8pmu_mdcr_write(armv8pmu_mdcr_read() | ARMV8_PMU_MDCR_HPME);
+	else
+		armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 
 	kvm_vcpu_pmu_resync_el0();
 }
@@ -858,7 +881,10 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
 	/* Disable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
+	if (cpu_pmu->partitioned)
+		armv8pmu_mdcr_write(armv8pmu_mdcr_read() & ~ARMV8_PMU_MDCR_HPME);
+	else
+		armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
 }
 
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
@@ -872,7 +898,7 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 	/*
 	 * Get and reset the IRQ flags
 	 */
-	pmovsr = armv8pmu_getreset_flags();
+	pmovsr = armv8pmu_getreset_flags(cpu_pmu);
 
 	/*
 	 * Did an overflow occur?
@@ -930,6 +956,8 @@ static int armv8pmu_get_single_idx(struct pmu_hw_events *cpuc,
 	int idx;
 
 	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS) {
+		if (armv8pmu_is_guest_part(cpu_pmu, idx))
+			continue;
 		if (!test_and_set_bit(idx, cpuc->used_mask))
 			return idx;
 	}
@@ -946,6 +974,8 @@ static int armv8pmu_get_chain_idx(struct pmu_hw_events *cpuc,
 	 * the lower idx must be even.
 	 */
 	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS) {
+		if (armv8pmu_is_guest_part(cpu_pmu, idx))
+			continue;
 		if (!(idx & 0x1))
 			continue;
 		if (!test_and_set_bit(idx, cpuc->used_mask)) {
@@ -968,6 +998,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 
 	/* Always prefer to place a cycle counter into the cycle counter. */
 	if ((evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
+	    !cpu_pmu->partitioned &&
 	    !armv8pmu_event_get_threshold(&event->attr)) {
 		if (!test_and_set_bit(ARMV8_PMU_CYCLE_IDX, cpuc->used_mask))
 			return ARMV8_PMU_CYCLE_IDX;
@@ -983,6 +1014,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	 * may not know how to handle it.
 	 */
 	if ((evtype == ARMV8_PMUV3_PERFCTR_INST_RETIRED) &&
+	    !cpu_pmu->partitioned &&
 	    !armv8pmu_event_get_threshold(&event->attr) &&
 	    test_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask) &&
 	    !armv8pmu_event_want_user_access(event)) {
@@ -994,7 +1026,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	 * Otherwise use events counters
 	 */
 	if (armv8pmu_event_is_chained(event))
-		return	armv8pmu_get_chain_idx(cpuc, cpu_pmu);
+		return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
 	else
 		return armv8pmu_get_single_idx(cpuc, cpu_pmu);
 }
@@ -1086,6 +1118,15 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 	return 0;
 }
 
+static void armv8pmu_reset_host_counters(struct arm_pmu *cpu_pmu)
+{
+	int idx;
+
+	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS)
+		if (armv8pmu_is_host_part(cpu_pmu, idx))
+			armv8pmu_write_evcntr(idx, 0);
+}
+
 static void armv8pmu_reset(void *info)
 {
 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@@ -1093,8 +1134,10 @@ static void armv8pmu_reset(void *info)
 
 	bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS);
 
-	if (cpu_pmu->partitioned)
+	if (cpu_pmu->partitioned) {
 		armv8pmu_partition(cpu_pmu->hpmn);
+		mask &= ARMV8_PMU_HOST_CNT_PART(cpu_pmu->hpmn);
+	}
 
 	/* The counter and interrupt enable registers are unknown at reset. */
 	armv8pmu_disable_counter(mask);
@@ -1103,11 +1146,20 @@ static void armv8pmu_reset(void *info)
 
 	/* Clear the counters we flip at guest entry/exit */
 	kvm_clr_pmu_events(mask);
+
+	pmcr = ARMV8_PMU_PMCR_LC;
+
 	/*
-	 * Initialize & Reset PMNC. Request overflow interrupt for
-	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
+	 * Initialize & Reset PMNC. Request overflow interrupt for 64
+	 * bit cycle counter but cheat in armv8pmu_write_counter().
+	 *
+	 * When partitioned, there is no single bit to reset only the
+	 * host counters, so reset them individually.
 	 */
-	pmcr = ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC;
+	if (cpu_pmu->partitioned)
+		armv8pmu_reset_host_counters(cpu_pmu);
+	else
+		pmcr |= ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C;
 
 	/* Enable long event counter support where available */
 	if (armv8pmu_has_long_event(cpu_pmu))
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index 115ee39f693a..5f8b143794ce 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -247,6 +247,14 @@
 #define ARMV8_PMU_OVSR_F	ARMV8_PMU_CNT_MASK_F
 /* Mask for writable bits is both P and C fields */
 #define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_CNT_MASK_ALL
+
+/* Masks for guest and host counter partitions */
+#define ARMV8_PMU_HPMN_CNT_MASK(N)	GENMASK((N) - 1, 0)
+#define ARMV8_PMU_GUEST_CNT_PART(N)	(ARMV8_PMU_HPMN_CNT_MASK(N) | \
+					 ARMV8_PMU_CNT_MASK_C | \
+					 ARMV8_PMU_CNT_MASK_F)
+#define ARMV8_PMU_HOST_CNT_PART(N)	(ARMV8_PMU_CNT_MASK_ALL & \
+					 ~ARMV8_PMU_GUEST_CNT_PART(N))
 /*
  * PMXEVTYPER: Event selection reg
  */
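To see how the two partition masks split the counter bitmap, the macros can
be evaluated by hand; the sketch below uses plain C stand-ins for GENMASK/BIT
and an example HPMN of 4, neither of which is part of the patch:

#include <stdio.h>
#include <stdint.h>

#define CNT_MASK_P	0x7fffffffULL		/* general counters 0..30 */
#define CNT_MASK_C	(1ULL << 31)		/* cycle counter */
#define CNT_MASK_F	(1ULL << 32)		/* instruction counter */
#define CNT_MASK_ALL	(CNT_MASK_P | CNT_MASK_C | CNT_MASK_F)

#define HPMN_CNT_MASK(n)	((1ULL << (n)) - 1)	/* GENMASK((n) - 1, 0) */
#define GUEST_CNT_PART(n)	(HPMN_CNT_MASK(n) | CNT_MASK_C | CNT_MASK_F)
#define HOST_CNT_PART(n)	(CNT_MASK_ALL & ~GUEST_CNT_PART(n))

int main(void)
{
	unsigned int hpmn = 4;

	/* guest: counters 0..3 plus the cycle and instruction counters: 0x18000000f */
	printf("guest = 0x%llx\n", (unsigned long long)GUEST_CNT_PART(hpmn));
	/* host: the remaining general counters 4..30: 0x7ffffff0 */
	printf("host  = 0x%llx\n", (unsigned long long)HOST_CNT_PART(hpmn));
	return 0;
}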