From patchwork Thu Mar 27 19:35:54 2025
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 14031405
From: Atish Patra
Date: Thu, 27 Mar 2025 12:35:54 -0700
Subject: [PATCH v5 13/21] RISC-V: perf: Add a mechanism to define legacy event encoding
Message-Id: <20250327-counter_delegation-v5-13-1ee538468d1b@rivosinc.com>
References: <20250327-counter_delegation-v5-0-1ee538468d1b@rivosinc.com>
In-Reply-To: <20250327-counter_delegation-v5-0-1ee538468d1b@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Anup Patel, Atish Patra, Will Deacon, Mark Rutland, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, weilin.wang@intel.com
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Conor Dooley, devicetree@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-perf-users@vger.kernel.org, Atish Patra

The RISC-V ISA
doesn't define any standard event encodings or specify any event-to-counter
mapping. Thus, event encoding information and the corresponding counter
mapping for those events need to be provided in the driver for each vendor.
Add a framework to support that. The individual platform events will be
added later.

Signed-off-by: Atish Patra
---
 drivers/perf/riscv_pmu_dev.c   | 54 +++++++++++++++++++++++++++++++++++++++++-
 include/linux/perf/riscv_pmu.h | 13 ++++++++++
 2 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/drivers/perf/riscv_pmu_dev.c b/drivers/perf/riscv_pmu_dev.c
index c0397bd68b91..6f64404a6e3d 100644
--- a/drivers/perf/riscv_pmu_dev.c
+++ b/drivers/perf/riscv_pmu_dev.c
@@ -317,6 +317,56 @@ static struct sbi_pmu_event_data pmu_cache_event_sbi_map[PERF_COUNT_HW_CACHE_MAX
 	},
 };
 
+/*
+ * Vendor specific PMU events.
+ */
+struct riscv_pmu_event {
+	u64 event_id;
+	u32 counter_mask;
+};
+
+struct riscv_vendor_pmu_events {
+	unsigned long vendorid;
+	unsigned long archid;
+	unsigned long implid;
+	const struct riscv_pmu_event *hw_event_map;
+	const struct riscv_pmu_event (*cache_event_map)[PERF_COUNT_HW_CACHE_OP_MAX]
+						       [PERF_COUNT_HW_CACHE_RESULT_MAX];
+};
+
+#define RISCV_VENDOR_PMU_EVENTS(_vendorid, _archid, _implid, _hw_event_map, _cache_event_map) \
+	{ .vendorid = _vendorid, .archid = _archid, .implid = _implid, \
+	  .hw_event_map = _hw_event_map, .cache_event_map = _cache_event_map },
+
+static struct riscv_vendor_pmu_events pmu_vendor_events_table[] = {
+};
+
+const struct riscv_pmu_event *current_pmu_hw_event_map;
+const struct riscv_pmu_event (*current_pmu_cache_event_map)[PERF_COUNT_HW_CACHE_OP_MAX]
+							   [PERF_COUNT_HW_CACHE_RESULT_MAX];
+
+static void rvpmu_vendor_register_events(void)
+{
+	int cpu = raw_smp_processor_id();
+	unsigned long vendor_id = riscv_cached_mvendorid(cpu);
+	unsigned long impl_id = riscv_cached_mimpid(cpu);
+	unsigned long arch_id = riscv_cached_marchid(cpu);
+
+	for (int i = 0; i < ARRAY_SIZE(pmu_vendor_events_table); i++) {
+		if (pmu_vendor_events_table[i].vendorid == vendor_id &&
+		    pmu_vendor_events_table[i].implid == impl_id &&
+		    pmu_vendor_events_table[i].archid == arch_id) {
+			current_pmu_hw_event_map = pmu_vendor_events_table[i].hw_event_map;
+			current_pmu_cache_event_map = pmu_vendor_events_table[i].cache_event_map;
+			break;
+		}
+	}
+
+	if (!current_pmu_hw_event_map || !current_pmu_cache_event_map) {
+		pr_info("No default PMU events found\n");
+	}
+}
+
 static void rvpmu_sbi_check_event(struct sbi_pmu_event_data *edata)
 {
 	struct sbiret ret;
@@ -1552,8 +1602,10 @@ static int __init rvpmu_devinit(void)
 	 */
 	if (riscv_isa_extension_available(NULL, SSCCFG) &&
 	    riscv_isa_extension_available(NULL, SMCDELEG) &&
-	    riscv_isa_extension_available(NULL, SSCSRIND))
+	    riscv_isa_extension_available(NULL, SSCSRIND)) {
 		static_branch_enable(&riscv_pmu_cdeleg_available);
+		rvpmu_vendor_register_events();
+	}
 
 	if (!(riscv_pmu_sbi_available_boot() || riscv_pmu_cdeleg_available_boot()))
 		return 0;

diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index 525acd6d96d0..a3e1fdd5084a 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -28,6 +28,19 @@
 
 #define RISCV_PMU_CONFIG1_GUEST_EVENTS 0x1
 
+#define HW_OP_UNSUPPORTED 0xFFFF
+#define CACHE_OP_UNSUPPORTED 0xFFFF
+
+#define PERF_MAP_ALL_UNSUPPORTED \
+	[0 ... PERF_COUNT_HW_MAX - 1] = {HW_OP_UNSUPPORTED, 0x0}
+
+#define PERF_CACHE_MAP_ALL_UNSUPPORTED \
+[0 ... C(MAX) - 1] = { \
+	[0 ... C(OP_MAX) - 1] = { \
+		[0 ... C(RESULT_MAX) - 1] = {CACHE_OP_UNSUPPORTED, 0x0} \
+	}, \
+}
+
 struct cpu_hw_events {
 	/* currently enabled events */
 	int n_events;