From patchwork Sat Feb 8 02:01:08 2025
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 13966229
Date: Sat, 8 Feb 2025 02:01:08 +0000
Message-ID: <20250208020111.2068239-2-coltonlewis@google.com>
In-Reply-To: <20250208020111.2068239-1-coltonlewis@google.com>
Subject: [RFC PATCH v2 1/4] perf: arm_pmuv3: Generalize counter bitmasks
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Russell King, Catalin Marinas, Will Deacon, Marc Zyngier,
 Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland,
 Paolo Bonzini, Shuah Khan, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org

These bitmasks are valid for enable and interrupt registers as well as
overflow registers. Generalize the names.
Signed-off-by: Colton Lewis
---
 include/linux/perf/arm_pmuv3.h | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index d698efba28a2..c2448477c37f 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -223,16 +223,23 @@
 	 ARMV8_PMU_PMCR_X | ARMV8_PMU_PMCR_DP | \
 	 ARMV8_PMU_PMCR_LC | ARMV8_PMU_PMCR_LP)
 
+/*
+ * Counter bitmask layouts for overflow, enable, and interrupts
+ */
+#define ARMV8_PMU_CNT_MASK_P	GENMASK(30, 0)
+#define ARMV8_PMU_CNT_MASK_C	BIT(31)
+#define ARMV8_PMU_CNT_MASK_F	BIT_ULL(32) /* arm64 only */
+#define ARMV8_PMU_CNT_MASK_ALL	(ARMV8_PMU_CNT_MASK_P | \
+				 ARMV8_PMU_CNT_MASK_C | \
+				 ARMV8_PMU_CNT_MASK_F)
 /*
  * PMOVSR: counters overflow flag status reg
  */
-#define ARMV8_PMU_OVSR_P	GENMASK(30, 0)
-#define ARMV8_PMU_OVSR_C	BIT(31)
-#define ARMV8_PMU_OVSR_F	BIT_ULL(32) /* arm64 only */
+#define ARMV8_PMU_OVSR_P	ARMV8_PMU_CNT_MASK_P
+#define ARMV8_PMU_OVSR_C	ARMV8_PMU_CNT_MASK_C
+#define ARMV8_PMU_OVSR_F	ARMV8_PMU_CNT_MASK_F
 /* Mask for writable bits is both P and C fields */
-#define ARMV8_PMU_OVERFLOWED_MASK	(ARMV8_PMU_OVSR_P | ARMV8_PMU_OVSR_C | \
-					 ARMV8_PMU_OVSR_F)
-
+#define ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_CNT_MASK_ALL
 /*
  * PMXEVTYPER: Event selection reg
  */

From patchwork Sat Feb 8 02:01:09 2025
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 13966230
Date: Sat, 8 Feb 2025 02:01:09 +0000
Message-ID: <20250208020111.2068239-3-coltonlewis@google.com>
In-Reply-To: <20250208020111.2068239-1-coltonlewis@google.com>
Subject: [RFC PATCH v2 2/4] perf: arm_pmuv3: Introduce module param to
 partition the PMU
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Russell King, Catalin Marinas, Will Deacon, Marc Zyngier,
 Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland,
 Paolo Bonzini, Shuah Khan,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org,
 linux-kselftest@vger.kernel.org

For PMUv3, the register field MDCR_EL2.HPMN partitions the PMU counters
into two ranges: counters 0..HPMN-1 are accessible by EL1 and, if
allowed, EL0, while counters HPMN..N are only accessible by EL2.

Introduce a module parameter in the PMUv3 driver to set this register.
The name reserved_host_counters reflects the intent to reserve some
counters for the host so the guest may eventually be allowed direct
access to a subset of PMU functionality for increased performance.

Track HPMN and whether the PMU is partitioned in struct arm_pmu because
KVM will need to know that to handle guests correctly.

While FEAT_HPMN0 does allow HPMN to be set to 0, this patch
specifically disallows that case because it's not useful given the
intention to allow guests access to their own counters.

Due to the difficulty this feature would create for the driver running
at EL1 on the host, partitioning is only allowed in VHE mode. Working
in nVHE mode would require a hypercall for every register access
because the counters reserved for the host by HPMN are now only
accessible to EL2.

The parameter is only configurable at boot time. Making the parameter
configurable on a running system is dangerous due to the difficulty of
knowing for sure that no counters are in use anywhere, which would be
required before it is safe to reprogram HPMN.
Signed-off-by: Colton Lewis
---
 arch/arm/include/asm/arm_pmuv3.h   | 13 +++++++++
 arch/arm64/include/asm/arm_pmuv3.h | 10 +++++++
 drivers/perf/arm_pmuv3.c           | 46 ++++++++++++++++++++++++++++--
 include/linux/perf/arm_pmu.h       |  2 ++
 include/linux/perf/arm_pmuv3.h     |  7 +++++
 5 files changed, 76 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index 2ec0e5e83fc9..c5f496450e16 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -11,6 +11,8 @@
 
 #define PMCCNTR			__ACCESS_CP15_64(0, c9)
 
+#define HDCR			__ACCESS_CP15(c1, 4, c1, 1)
+
 #define PMCR			__ACCESS_CP15(c9, 0, c12, 0)
 #define PMCNTENSET		__ACCESS_CP15(c9, 0, c12, 1)
 #define PMCNTENCLR		__ACCESS_CP15(c9, 0, c12, 2)
@@ -214,6 +216,7 @@ static inline void write_pmuserenr(u32 val)
 static inline void write_pmuacr(u64 val) {}
 
+static inline bool has_vhe(void) { return false; }
 static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {}
 static inline void kvm_clr_pmu_events(u32 clr) {}
 static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
@@ -277,4 +280,14 @@ static inline u64 read_pmceid1(void)
 	return val;
 }
 
+static inline u32 read_mdcr(void)
+{
+	return read_sysreg(HDCR);
+}
+
+static inline void write_mdcr(u32 val)
+{
+	write_sysreg(val, HDCR);
+}
+
 #endif
diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
index 8a777dec8d88..fc37e7e81e07 100644
--- a/arch/arm64/include/asm/arm_pmuv3.h
+++ b/arch/arm64/include/asm/arm_pmuv3.h
@@ -188,4 +188,14 @@ static inline bool is_pmuv3p9(int pmuver)
 	return pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P9;
 }
 
+static inline u64 read_mdcr(void)
+{
+	return read_sysreg(mdcr_el2);
+}
+
+static inline void write_mdcr(u64 val)
+{
+	write_sysreg(val, mdcr_el2);
+}
+
 #endif
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 0e360feb3432..39109260b161 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -325,6 +325,7 @@ GEN_PMU_FORMAT_ATTR(threshold_compare);
 GEN_PMU_FORMAT_ATTR(threshold);
 
 static int sysctl_perf_user_access __read_mostly;
+static u8 reserved_host_counters __read_mostly;
 
 static bool armv8pmu_event_is_64bit(struct perf_event *event)
 {
@@ -500,6 +501,29 @@ static void armv8pmu_pmcr_write(u64 val)
 	write_pmcr(val);
 }
 
+static u64 armv8pmu_mdcr_read(void)
+{
+	return read_mdcr();
+}
+
+static void armv8pmu_mdcr_write(u64 val)
+{
+	write_mdcr(val);
+	isb();
+}
+
+static void armv8pmu_partition(u8 hpmn)
+{
+	u64 mdcr = armv8pmu_mdcr_read();
+
+	mdcr &= ~ARMV8_PMU_MDCR_HPMN;
+	mdcr |= FIELD_PREP(ARMV8_PMU_MDCR_HPMN, hpmn);
+	/* Prevent guest counters counting at EL2 */
+	mdcr |= ARMV8_PMU_MDCR_HPMD;
+
+	armv8pmu_mdcr_write(mdcr);
+}
+
 static int armv8pmu_has_overflowed(u64 pmovsr)
 {
 	return !!(pmovsr & ARMV8_PMU_OVERFLOWED_MASK);
@@ -1069,6 +1093,9 @@ static void armv8pmu_reset(void *info)
 
 	bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS);
 
+	if (cpu_pmu->partitioned)
+		armv8pmu_partition(cpu_pmu->hpmn);
+
 	/* The counter and interrupt enable registers are unknown at reset. */
 	armv8pmu_disable_counter(mask);
 	armv8pmu_disable_intens(mask);
@@ -1205,6 +1232,7 @@ static void __armv8pmu_probe_pmu(void *info)
 {
 	struct armv8pmu_probe_info *probe = info;
 	struct arm_pmu *cpu_pmu = probe->pmu;
+	u8 pmcr_n;
 	u64 pmceid_raw[2];
 	u32 pmceid[2];
 	int pmuver;
@@ -1215,10 +1243,20 @@ static void __armv8pmu_probe_pmu(void *info)
 	cpu_pmu->pmuver = pmuver;
 	probe->present = true;
 
+	pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
 	/* Read the nb of CNTx counters supported from PMNC */
-	bitmap_set(cpu_pmu->cntr_mask,
-		   0, FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read()));
+	bitmap_set(cpu_pmu->cntr_mask, 0, pmcr_n);
+
+	if (has_vhe() &&
+	    reserved_host_counters > 0 &&
+	    reserved_host_counters < pmcr_n) {
+		cpu_pmu->hpmn = pmcr_n - reserved_host_counters;
+		cpu_pmu->partitioned = true;
+	} else {
+		cpu_pmu->hpmn = pmcr_n;
+		cpu_pmu->partitioned = false;
+	}
 
 	/* Add the CPU cycles counter */
 	set_bit(ARMV8_PMU_CYCLE_IDX, cpu_pmu->cntr_mask);
@@ -1516,3 +1554,7 @@ void arch_perf_update_userpage(struct perf_event *event,
 	userpg->cap_user_time_zero = 1;
 	userpg->cap_user_time_short = 1;
 }
+
+module_param(reserved_host_counters, byte, 0);
+MODULE_PARM_DESC(reserved_host_counters,
+		 "Partition the PMU into host and guest counters");
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 4b5b83677e3f..ad97aabed25a 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -101,6 +101,8 @@ struct arm_pmu {
 	void		(*reset)(void *);
 	int		(*map_event)(struct perf_event *event);
 	DECLARE_BITMAP(cntr_mask, ARMPMU_MAX_HWEVENTS);
+	u8		hpmn; /* MDCR_EL2.HPMN: counter partition pivot */
+	bool		partitioned;
 	bool		secure_access; /* 32-bit ARM only */
 #define ARMV8_PMUV3_MAX_COMMON_EVENTS		0x40
 	DECLARE_BITMAP(pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index c2448477c37f..115ee39f693a 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -223,6 +223,13 @@
 	 ARMV8_PMU_PMCR_X | ARMV8_PMU_PMCR_DP | \
 	 ARMV8_PMU_PMCR_LC | ARMV8_PMU_PMCR_LP)
 
+/*
+ * Per-CPU MDCR: config reg
+ */
+#define ARMV8_PMU_MDCR_HPMN		GENMASK(4, 0)
+#define ARMV8_PMU_MDCR_HPME		BIT(7)
+#define ARMV8_PMU_MDCR_HPMD		BIT(17)
+
 /*
  * Counter bitmask layouts for overflow, enable, and interrupts
  */

From patchwork Sat Feb 8 02:01:10 2025
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 13966231
Date: Sat, 8 Feb 2025 02:01:10 +0000
Message-ID: <20250208020111.2068239-4-coltonlewis@google.com>
In-Reply-To: <20250208020111.2068239-1-coltonlewis@google.com>
Subject: [RFC PATCH v2 3/4] perf: arm_pmuv3: Keep out of guest counter
 partition
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Russell King, Catalin Marinas, Will Deacon, Marc Zyngier,
 Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Mark Rutland,
 Paolo Bonzini, Shuah Khan, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org

If the PMU is partitioned, keep the driver out of the guest counter
partition and only use the host counter partition. Partitioning is
defined by the MDCR_EL2.HPMN register field and saved in cpu_pmu->hpmn.
The range 0..HPMN-1 is accessible by EL1 and EL0 while HPMN..PMCR.N is
reserved for EL2.
Define some macros that take HPMN as an argument and construct mutually
exclusive bitmaps for testing which partition a particular counter is
in. Note that despite their different position in the bitmap, the cycle
and instruction counters are always in the guest partition.

Signed-off-by: Colton Lewis
---
 drivers/perf/arm_pmuv3.c       | 73 +++++++++++++++++++++++++++++-----
 include/linux/perf/arm_pmuv3.h |  8 ++++
 2 files changed, 71 insertions(+), 10 deletions(-)

diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 39109260b161..01468ec9fead 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -754,15 +754,19 @@ static void armv8pmu_disable_event_irq(struct perf_event *event)
 	armv8pmu_disable_intens(BIT(event->hw.idx));
 }
 
-static u64 armv8pmu_getreset_flags(void)
+static u64 armv8pmu_getreset_flags(struct arm_pmu *cpu_pmu)
 {
 	u64 value;
 
 	/* Read */
 	value = read_pmovsclr();
 
+	if (cpu_pmu->partitioned)
+		value &= ARMV8_PMU_HOST_CNT_PART(cpu_pmu->hpmn);
+	else
+		value &= ARMV8_PMU_OVERFLOWED_MASK;
+
 	/* Write to clear flags */
-	value &= ARMV8_PMU_OVERFLOWED_MASK;
 	write_pmovsclr(value);
 
 	return value;
@@ -789,6 +793,18 @@ static void armv8pmu_disable_user_access(void)
 	update_pmuserenr(0);
 }
 
+static bool armv8pmu_is_guest_part(struct arm_pmu *cpu_pmu, u8 idx)
+{
+	return cpu_pmu->partitioned &&
+		(BIT(idx) & ARMV8_PMU_GUEST_CNT_PART(cpu_pmu->hpmn));
+}
+
+static bool armv8pmu_is_host_part(struct arm_pmu *cpu_pmu, u8 idx)
+{
+	return !cpu_pmu->partitioned ||
+		(BIT(idx) & ARMV8_PMU_HOST_CNT_PART(cpu_pmu->hpmn));
+}
+
 static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
 {
 	int i;
@@ -797,6 +813,8 @@ static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
 	if (is_pmuv3p9(cpu_pmu->pmuver)) {
 		u64 mask = 0;
 		for_each_set_bit(i, cpuc->used_mask, ARMPMU_MAX_HWEVENTS) {
+			if (armv8pmu_is_guest_part(cpu_pmu, i))
+				continue;
 			if (armv8pmu_event_has_user_read(cpuc->events[i]))
 				mask |= BIT(i);
 		}
@@ -805,6 +823,8 @@ static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
 		/* Clear any unused counters to avoid leaking their contents */
 		for_each_andnot_bit(i, cpu_pmu->cntr_mask, cpuc->used_mask,
 				    ARMPMU_MAX_HWEVENTS) {
+			if (armv8pmu_is_guest_part(cpu_pmu, i))
+				continue;
 			if (i == ARMV8_PMU_CYCLE_IDX)
 				write_pmccntr(0);
 			else if (i == ARMV8_PMU_INSTR_IDX)
@@ -850,7 +870,10 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 		armv8pmu_disable_user_access();
 
 	/* Enable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
+	if (cpu_pmu->partitioned)
+		armv8pmu_mdcr_write(armv8pmu_mdcr_read() | ARMV8_PMU_MDCR_HPME);
+	else
+		armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 
 	kvm_vcpu_pmu_resync_el0();
 }
@@ -858,7 +881,10 @@ static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
 	/* Disable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
+	if (cpu_pmu->partitioned)
+		armv8pmu_mdcr_write(armv8pmu_mdcr_read() & ~ARMV8_PMU_MDCR_HPME);
+	else
+		armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
 }
 
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
@@ -872,7 +898,7 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 	/*
 	 * Get and reset the IRQ flags
 	 */
-	pmovsr = armv8pmu_getreset_flags();
+	pmovsr = armv8pmu_getreset_flags(cpu_pmu);
 
 	/*
 	 * Did an overflow occur?
@@ -930,6 +956,8 @@ static int armv8pmu_get_single_idx(struct pmu_hw_events *cpuc,
 	int idx;
 
 	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS) {
+		if (armv8pmu_is_guest_part(cpu_pmu, idx))
+			continue;
 		if (!test_and_set_bit(idx, cpuc->used_mask))
 			return idx;
 	}
@@ -946,6 +974,8 @@ static int armv8pmu_get_chain_idx(struct pmu_hw_events *cpuc,
 	 * the lower idx must be even.
 	 */
 	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS) {
+		if (armv8pmu_is_guest_part(cpu_pmu, idx))
+			continue;
 		if (!(idx & 0x1))
 			continue;
 		if (!test_and_set_bit(idx, cpuc->used_mask)) {
@@ -968,6 +998,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	/* Always prefer to place a cycle counter into the cycle counter. */
 	if ((evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
+	    !cpu_pmu->partitioned &&
 	    !armv8pmu_event_get_threshold(&event->attr)) {
 		if (!test_and_set_bit(ARMV8_PMU_CYCLE_IDX, cpuc->used_mask))
 			return ARMV8_PMU_CYCLE_IDX;
@@ -983,6 +1014,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	 * may not know how to handle it.
 	 */
 	if ((evtype == ARMV8_PMUV3_PERFCTR_INST_RETIRED) &&
+	    !cpu_pmu->partitioned &&
 	    !armv8pmu_event_get_threshold(&event->attr) &&
 	    test_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask) &&
 	    !armv8pmu_event_want_user_access(event)) {
@@ -994,7 +1026,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 	 * Otherwise use events counters
 	 */
 	if (armv8pmu_event_is_chained(event))
-		return  armv8pmu_get_chain_idx(cpuc, cpu_pmu);
+		return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
 	else
 		return armv8pmu_get_single_idx(cpuc, cpu_pmu);
 }
@@ -1086,6 +1118,16 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 	return 0;
 }
 
+static void armv8pmu_reset_host_counters(struct arm_pmu *cpu_pmu)
+{
+	int idx;
+
+	for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS) {
+		if (armv8pmu_is_host_part(cpu_pmu, idx))
+			armv8pmu_write_evcntr(idx, 0);
+	}
+}
+
 static void armv8pmu_reset(void *info)
 {
 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@@ -1093,8 +1135,10 @@ static void armv8pmu_reset(void *info)
 
 	bitmap_to_arr64(&mask, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS);
 
-	if (cpu_pmu->partitioned)
+	if (cpu_pmu->partitioned) {
 		armv8pmu_partition(cpu_pmu->hpmn);
+		mask &= ARMV8_PMU_HOST_CNT_PART(cpu_pmu->hpmn);
+	}
 
 	/* The counter and interrupt enable registers are unknown at reset. */
 	armv8pmu_disable_counter(mask);
@@ -1103,11 +1147,20 @@ static void armv8pmu_reset(void *info)
 
 	/* Clear the counters we flip at guest entry/exit */
 	kvm_clr_pmu_events(mask);
 
+	pmcr = ARMV8_PMU_PMCR_LC;
+
 	/*
-	 * Initialize & Reset PMNC. Request overflow interrupt for
-	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
+	 * Initialize & Reset PMNC. Request overflow interrupt for 64
+	 * bit cycle counter but cheat in armv8pmu_write_counter().
+	 *
+	 * When partitioned, there is no single bit to reset only the
+	 * host counters, so reset them individually.
	 */
-	pmcr = ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC;
+	if (cpu_pmu->partitioned)
+		armv8pmu_reset_host_counters(cpu_pmu);
+	else
+		pmcr |= ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C;
 
 	/* Enable long event counter support where available */
 	if (armv8pmu_has_long_event(cpu_pmu))
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index 115ee39f693a..5f8b143794ce 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -247,6 +247,14 @@
 #define ARMV8_PMU_OVSR_F		ARMV8_PMU_CNT_MASK_F
 /* Mask for writable bits is both P and C fields */
 #define ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_CNT_MASK_ALL
+
+/* Masks for guest and host counter partitions */
+#define ARMV8_PMU_HPMN_CNT_MASK(N)	GENMASK((N) - 1, 0)
+#define ARMV8_PMU_GUEST_CNT_PART(N)	(ARMV8_PMU_HPMN_CNT_MASK(N) | \
+					 ARMV8_PMU_CNT_MASK_C | \
+					 ARMV8_PMU_CNT_MASK_F)
+#define ARMV8_PMU_HOST_CNT_PART(N)	(ARMV8_PMU_CNT_MASK_ALL & \
+					 ~ARMV8_PMU_GUEST_CNT_PART(N))
 /*
  * PMXEVTYPER: Event selection reg
  */

From patchwork Sat Feb 8 02:01:11 2025
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 13966232
DeBLr3SpsyOQA5PkVjZzw23iZLSufoVheaeub3PiBiDlK66bcNKJAtbSXkMinWa+9SAi PCG/jZbkLDUyRMdqCdBXmuVwEl8jO8Vxd1LKsABF8fdAbnaLT3KFG7ZW55cIqHFnq2Fr 0e4MYMug/QpvuH07hzVGsxz5b6U4GpNLEMLg2iNkFMyLxWI57aUrTR9SfE1KnUPZKdYX JRpdr0VEQoYCR0d+CF+LUJpWpDvaAenmkf0pKjH2HkYx9c1XXkuNUNMCX9ZOV+KOJyqY 2NJQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1738980087; x=1739584887; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=TJ0ITpSot12bWRIgUxfFL1nJMNuEe3fyU4TCb+rHvco=; b=O/Nnm68UaZx1ryTuqxonj8rS/fdXAjBRf0XcRuFZdAAFiJMJMJE5hIdw06/VWtmR/S Whce9QDG8JzuA7XWDZjlaZIs36r+HqVhErn66xq1VKFrQaeaFhEUJ4OEt52i92o62ZHd sk4I8p/x7RgeEgwGW1zzve9NKkaq8jViDKUsc/KAvOqnwlEsntRPYwCLIoO6jaXd6bvn uWmmwmkJGL9RodhaqKw7c4h/YieNvgr4fN/XyQskdXcRDzSirpoXHI0tlOpOiPZze2tk 1wIT0t+2wqYeR6AFhLHFsSBfwlWmUzLQKTudhrYZ8fzG++IRR8RcEYJSQlQzRBp+nMi7 8j0w== X-Forwarded-Encrypted: i=1; AJvYcCU4G8fuRSrdyknhCK5WaEanO1qKmBFUsYi9HaGmz3JKHfgl5yzP8lE/1abajNDDuTEVN/X5He6NheIb4wGbIy5R@lists.infradead.org X-Gm-Message-State: AOJu0YyAaamoKH+PNQ0fS1tx3nVw/QHN93TPfZonIo68hyfZ/0dxNwTy EwLeLT5C9s6+NFB+7wUT1xKK+Y0oamPy2PwAJzicg2G6LsBUGzi2BfS8TBgPVgN2ubVagTm5plo fRm0kEEXvZw2G1PgnJOqjAQ== X-Google-Smtp-Source: AGHT+IHGzAbpz5Gck/2Rf7WgkWKxzSWHxb28Hylkb71d1eO6uNKmPJkkI+3L+zJwZvOSJQqcK2g9jZxrFbjtaCAz6g== X-Received: from ilbeh24.prod.google.com ([2002:a05:6e02:4c18:b0:3d0:499b:8e3d]) (user=coltonlewis job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6e02:20c9:b0:3cf:cdb8:78fb with SMTP id e9e14a558f8ab-3d13df0fb5fmr49647775ab.16.1738980086868; Fri, 07 Feb 2025 18:01:26 -0800 (PST) Date: Sat, 8 Feb 2025 02:01:11 +0000 In-Reply-To: <20250208020111.2068239-1-coltonlewis@google.com> Mime-Version: 1.0 References: <20250208020111.2068239-1-coltonlewis@google.com> X-Mailer: git-send-email 2.48.1.502.g6dc24dfdaf-goog Message-ID: <20250208020111.2068239-5-coltonlewis@google.com> Subject: [RFC PATCH v2 4/4] KVM: arm64: Make 
guests see only counters they can access From: Colton Lewis To: kvm@vger.kernel.org Cc: Russell King , Catalin Marinas , Will Deacon , Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Mark Rutland , Paolo Bonzini , Shuah Khan , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org, Colton Lewis X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250207_180127_393372_5E2A3AED X-CRM114-Status: GOOD ( 14.11 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The ARM architecture specifies that when MDCR_EL2.HPMN is set, EL1 and EL0, which includes KVM guests, should read that value for PMCR.N. 
Signed-off-by: Colton Lewis
---
 arch/arm64/kvm/debug.c                                  | 3 +--
 arch/arm64/kvm/pmu-emul.c                               | 8 +++++++-
 tools/testing/selftests/kvm/arm64/vpmu_counter_access.c | 2 +-
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 0e4c805e7e89..7c04db00bf6c 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -36,8 +36,7 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	 * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK
 	 * to disable guest access to the profiling and trace buffers
 	 */
-	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN,
-					 *host_data_ptr(nr_event_counters));
+	vcpu->arch.mdcr_el2 = FIELD_PREP(MDCR_EL2_HPMN, read_mdcr());
 	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
 				MDCR_EL2_TPMS |
 				MDCR_EL2_TTRF |
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 6c5950b9ceac..052ce8c721fe 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -993,12 +993,18 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
 {
 	struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
+	u8 limit;
+
+	if (arm_pmu->partitioned)
+		limit = arm_pmu->hpmn - 1;
+	else
+		limit = ARMV8_PMU_MAX_GENERAL_COUNTERS;
 
 	/*
 	 * The arm_pmu->cntr_mask considers the fixed counter(s) as well.
 	 * Ignore those and return only the general-purpose counters.
	 */
-	return bitmap_weight(arm_pmu->cntr_mask, ARMV8_PMU_MAX_GENERAL_COUNTERS);
+	return bitmap_weight(arm_pmu->cntr_mask, limit);
 }
 
 static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
index f16b3b27e32e..b5bc18b7528d 100644
--- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
@@ -609,7 +609,7 @@ static void run_pmregs_validity_test(uint64_t pmcr_n)
  */
 static void run_error_test(uint64_t pmcr_n)
 {
-	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	pr_debug("Error test with pmcr_n %lu (larger than the host allows)\n", pmcr_n);
 
 	test_create_vpmu_vm_with_pmcr_n(pmcr_n, true);
 	destroy_vpmu_vm();