From patchwork Mon Feb 3 00:42:56 2025
X-Patchwork-Submitter: Rob Herring
X-Patchwork-Id: 13956711
From: "Rob Herring (Arm)"
Date: Sun, 02 Feb 2025 18:42:56 -0600
Subject: [PATCH v19 02/11] perf: arm_pmu: Don't disable counter in armpmu_add()
Message-Id: <20250202-arm-brbe-v19-v19-2-1c1300802385@kernel.org>
References: <20250202-arm-brbe-v19-v19-0-1c1300802385@kernel.org>
In-Reply-To: <20250202-arm-brbe-v19-v19-0-1c1300802385@kernel.org>
To: Will Deacon, Mark Rutland, Catalin Marinas, Jonathan Corbet,
 Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, James Clark, Anshuman Khandual
Cc: linux-arm-kernel@lists.infradead.org, linux-perf-users@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvmarm@lists.linux.dev
From: Mark Rutland

Currently armpmu_add() tries to handle a newly-allocated counter having
a stale associated event, but this should not be possible, and if this
were to happen the current mitigation is insufficient and potentially
expensive. It would be better to warn if we encounter the impossible
case.

Calls to pmu::add() and pmu::del() are serialized by the core perf
code, and armpmu_del() clears the relevant slot in
pmu_hw_events::events[] before clearing the bit in
pmu_hw_events::used_mask such that the counter can be reallocated.
Thus when armpmu_add() allocates a counter index from
pmu_hw_events::used_mask, it should not be possible to observe a stale
event in pmu_hw_events::events[] unless either pmu_hw_events::used_mask
or pmu_hw_events::events[] has been corrupted.

If this were to happen, we'd end up with two events with the same
event->hw.idx, which would clash with each other during reprogramming,
deletion, etc., and produce bogus results.

Add a WARN_ON_ONCE() for this case so that we can detect if this ever
occurs in practice.

That possibility aside, there's no need to call arm_pmu::disable(event)
for the new event. The PMU reset code initialises the counter in a
disabled state, and armpmu_del() will disable the counter before it can
be reused. Remove the redundant disable.

Signed-off-by: Mark Rutland
Signed-off-by: Rob Herring (Arm)
Reviewed-by: Anshuman Khandual
---
 drivers/perf/arm_pmu.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 398cce3d76fc..2f33e69a8caf 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -342,12 +342,10 @@ armpmu_add(struct perf_event *event, int flags)
 	if (idx < 0)
 		return idx;
 
-	/*
-	 * If there is an event in the counter we are going to use then make
-	 * sure it is disabled.
-	 */
+	/* The newly-allocated counter should be empty */
+	WARN_ON_ONCE(hw_events->events[idx]);
+
 	event->hw.idx = idx;
-	armpmu->disable(event);
 	hw_events->events[idx] = event;
 	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
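
As an aside for readers unfamiliar with the protocol the commit message
relies on, here is a minimal user-space C sketch of the
allocate-from-bitmap scheme. This is not kernel code: struct names,
model_add()/model_del() and the WARN_ON_ONCE() stand-in are all
illustrative. It shows why clearing events[] before clearing the
used_mask bit makes a stale event unobservable under serialized add/del,
and the final calls simulate the corruption the new check would catch.

	/*
	 * Minimal model of the used_mask/events[] protocol.
	 * All names are hypothetical; this is not kernel code.
	 */
	#include <stdio.h>

	#define NUM_COUNTERS 8

	struct event { int id; };		/* stand-in for perf_event */

	struct hw_events {			/* stand-in for pmu_hw_events */
		unsigned long used_mask;	/* one bit per counter */
		struct event *events[NUM_COUNTERS];
	};

	/* One-shot warning, loosely modelled on the kernel macro. */
	#define WARN_ON_ONCE(cond) do {					\
		static int warned;					\
		if ((cond) && !warned) {				\
			warned = 1;					\
			fprintf(stderr, "WARN_ON_ONCE: %s\n", #cond);	\
		}							\
	} while (0)

	/* add(): claim the lowest free index, then install the event.
	 * A freshly allocated slot must be empty. */
	static int model_add(struct hw_events *hw, struct event *ev)
	{
		for (int idx = 0; idx < NUM_COUNTERS; idx++) {
			if (hw->used_mask & (1UL << idx))
				continue;
			hw->used_mask |= 1UL << idx;
			WARN_ON_ONCE(hw->events[idx]);	/* the patch's new check */
			hw->events[idx] = ev;
			return idx;
		}
		return -1;
	}

	/* del(): clear the slot *before* freeing the bit. Since add() and
	 * del() are serialized, add() can then never observe a stale
	 * pointer through a just-allocated index. */
	static void model_del(struct hw_events *hw, int idx)
	{
		hw->events[idx] = NULL;			/* 1: drop stale pointer */
		hw->used_mask &= ~(1UL << idx);		/* 2: make idx allocatable */
	}

	int main(void)
	{
		struct hw_events hw = { 0 };
		struct event a = { 1 }, b = { 2 };

		int idx = model_add(&hw, &a);
		model_del(&hw, idx);
		model_add(&hw, &b);		/* reuses idx; no warning */

		/* Simulate the "impossible" corruption: a stale pointer
		 * left behind with its used_mask bit clear. */
		model_del(&hw, 0);
		hw.events[0] = &a;
		model_add(&hw, &b);		/* warning fires once */
		return 0;
	}

Built with e.g. `cc -std=c99 model.c`, the program warns only for the
deliberately corrupted final allocation, mirroring how the WARN_ON_ONCE()
added by this patch should stay silent in normal operation.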