From patchwork Thu Jan 16 23:09:49 2025
From: Rajnesh Kanwal <rkanwal@rivosinc.com>
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Subject: [PATCH v2 1/7] perf: Increase the maximum number of samples to 256.
Date: Thu, 16 Jan 2025 23:09:49 +0000
Message-Id: <20250116230955.867152-2-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>
References: <20250116230955.867152-1-rkanwal@rivosinc.com>

The RISC-V CTR extension supports a maximum depth of 256 last-branch
records. The current 127-entry limit corrupts CTR entries on RISC-V
when the hardware is configured for 256 entries. Raising the limit does
not impact other architectures, as it only increases the maximum number
of possible entries.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
---
 tools/perf/util/machine.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 27d5345d2b30..f2eb3c20274e 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -2174,25 +2174,32 @@ static void save_iterations(struct iterations *iter,
 		iter->cycles += be[i].flags.cycles;
 }
 
-#define CHASHSZ 127
-#define CHASHBITS 7
-#define NO_ENTRY 0xff
+#define CHASHBITS 8
+#define NO_ENTRY 0xffU
 
-#define PERF_MAX_BRANCH_DEPTH 127
+#define PERF_MAX_BRANCH_DEPTH 256
 
 /* Remove loops. */
+/* Note: the last entry (i == 0xff) is never compared against NO_ENTRY,
+ * so an unsigned char array is safe for processing 256 entries without
+ * a clash between the last entry and the NO_ENTRY value.
+ */
 static int remove_loops(struct branch_entry *l, int nr,
 			struct iterations *iter)
 {
 	int i, j, off;
-	unsigned char chash[CHASHSZ];
+	unsigned char chash[PERF_MAX_BRANCH_DEPTH];
 
 	memset(chash, NO_ENTRY, sizeof(chash));
 
-	BUG_ON(PERF_MAX_BRANCH_DEPTH > 255);
+	BUG_ON(PERF_MAX_BRANCH_DEPTH > 256);
 
 	for (i = 0; i < nr; i++) {
-		int h = hash_64(l[i].from, CHASHBITS) % CHASHSZ;
+		/* Taking the remainder modulo PERF_MAX_BRANCH_DEPTH is
+		 * not needed: hash_64() already limits the hash to
+		 * CHASHBITS bits.
+		 */
+		int h = hash_64(l[i].from, CHASHBITS);
 
 		/* no collision handling for now */
 		if (chash[h] == NO_ENTRY) {
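
[Editorial illustration, not part of the patch: the reason the "% CHASHSZ"
drops out is that hash_64(v, bits) already returns a value below 1 << bits.
A minimal user-space sketch of that property, assuming the kernel's
GOLDEN_RATIO_64 multiplier; the helper name is hypothetical.]

#include <stdint.h>
#include <assert.h>

/* Sketch of hash_64() behaviour: the result is taken from the top
 * `bits` bits of the product, so it is always < (1 << bits).
 */
static uint32_t hash_64_sketch(uint64_t val, unsigned int bits)
{
	return (uint32_t)((val * 0x61C8864680B583EBULL) >> (64 - bits));
}

int main(void)
{
	unsigned char chash[256] = { 0 };	/* one slot per 8-bit hash */

	for (uint64_t v = 0; v < 1000000; v++) {
		uint32_t h = hash_64_sketch(v, 8);

		assert(h < sizeof(chash));	/* no "% 256" required */
		chash[h]++;
	}
	return 0;
}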
From patchwork Thu Jan 16 23:09:50 2025
From: Rajnesh Kanwal <rkanwal@rivosinc.com>
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Subject: [PATCH v2 2/7] riscv: pmu: Add Control Transfer Records CSR definitions.
Date: Thu, 16 Jan 2025 23:09:50 +0000
Message-Id: <20250116230955.867152-3-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>
References: <20250116230955.867152-1-rkanwal@rivosinc.com>

Add CSR defines for the RISC-V Control Transfer Records extension [0],
along with bit-field macros for each CSR.

[0]: https://github.com/riscv/riscv-control-transfer-records

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
---
 arch/riscv/include/asm/csr.h | 83 ++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index a06d5fec6e6d..465a5e338ccb 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -325,6 +325,85 @@
 
 #define CSR_SCOUNTOVF		0xda0
 
+/* M-mode Control Transfer Records CSRs */
+#define CSR_MCTRCTL		0x34e
+
+/* S-mode Control Transfer Records CSRs */
+#define CSR_SCTRCTL		0x14e
+#define CSR_SCTRSTATUS		0x14f
+#define CSR_SCTRDEPTH		0x15f
+
+/* VS-mode Control Transfer Records CSRs */
+#define CSR_VSCTRCTL		0x24e
+
+/* xctrctl CSR bits. */
+#define CTRCTL_U_ENABLE		_AC(0x1, UL)
+#define CTRCTL_S_ENABLE		_AC(0x2, UL)
+#define CTRCTL_M_ENABLE		_AC(0x4, UL)
+#define CTRCTL_RASEMU		_AC(0x80, UL)
+#define CTRCTL_STE		_AC(0x100, UL)
+#define CTRCTL_MTE		_AC(0x200, UL)
+#define CTRCTL_BPFRZ		_AC(0x800, UL)
+#define CTRCTL_LCOFIFRZ		_AC(0x1000, UL)
+#define CTRCTL_EXCINH		_AC(0x200000000, UL)
+#define CTRCTL_INTRINH		_AC(0x400000000, UL)
+#define CTRCTL_TRETINH		_AC(0x800000000, UL)
+#define CTRCTL_NTBREN		_AC(0x1000000000, UL)
+#define CTRCTL_TKBRINH		_AC(0x2000000000, UL)
+#define CTRCTL_INDCALL_INH	_AC(0x10000000000, UL)
+#define CTRCTL_DIRCALL_INH	_AC(0x20000000000, UL)
+#define CTRCTL_INDJUMP_INH	_AC(0x40000000000, UL)
+#define CTRCTL_DIRJUMP_INH	_AC(0x80000000000, UL)
+#define CTRCTL_CORSWAP_INH	_AC(0x100000000000, UL)
+#define CTRCTL_RET_INH		_AC(0x200000000000, UL)
+#define CTRCTL_INDOJUMP_INH	_AC(0x400000000000, UL)
+#define CTRCTL_DIROJUMP_INH	_AC(0x800000000000, UL)
+
+/* sctrstatus CSR bits. */
+#define SCTRSTATUS_WRPTR_MASK	0xFF
+#define SCTRSTATUS_FROZEN	_AC(0x80000000, UL)
+
+#ifdef CONFIG_RISCV_M_MODE
+#define CTRCTL_KERNEL_ENABLE	CTRCTL_M_ENABLE
+#else
+#define CTRCTL_KERNEL_ENABLE	CTRCTL_S_ENABLE
+#endif
+
+/* sctrdepth CSR bits. */
+#define SCTRDEPTH_MASK		0x7
+
+#define SCTRDEPTH_MIN		0x0	/* 16 Entries. */
+#define SCTRDEPTH_MAX		0x4	/* 256 Entries. */
+
+/* ctrsource, ctrtarget and ctrdata CSR bits. */
+#define CTRSOURCE_VALID		0x1ULL
+#define CTRTARGET_MISP		0x1ULL
+
+#define CTRDATA_TYPE_MASK	0xF
+#define CTRDATA_CCV		0x8000
+#define CTRDATA_CCM_MASK	0xFFF0000
+#define CTRDATA_CCE_MASK	0xF0000000
+
+#define CTRDATA_TYPE_NONE		0
+#define CTRDATA_TYPE_EXCEPTION		1
+#define CTRDATA_TYPE_INTERRUPT		2
+#define CTRDATA_TYPE_TRAP_RET		3
+#define CTRDATA_TYPE_NONTAKEN_BRANCH	4
+#define CTRDATA_TYPE_TAKEN_BRANCH	5
+#define CTRDATA_TYPE_RESERVED_6		6
+#define CTRDATA_TYPE_RESERVED_7		7
+#define CTRDATA_TYPE_INDIRECT_CALL	8
+#define CTRDATA_TYPE_DIRECT_CALL	9
+#define CTRDATA_TYPE_INDIRECT_JUMP	10
+#define CTRDATA_TYPE_DIRECT_JUMP	11
+#define CTRDATA_TYPE_CO_ROUTINE_SWAP	12
+#define CTRDATA_TYPE_RETURN		13
+#define CTRDATA_TYPE_OTHER_INDIRECT_JUMP 14
+#define CTRDATA_TYPE_OTHER_DIRECT_JUMP	15
+
+#define CTR_ENTRIES_FIRST	0x200
+#define CTR_ENTRIES_LAST	0x2ff
+
 #define CSR_SSTATUS		0x100
 #define CSR_SIE			0x104
 #define CSR_STVEC		0x105
@@ -508,6 +587,8 @@
 # define CSR_TOPEI	CSR_MTOPEI
 # define CSR_TOPI	CSR_MTOPI
 
+# define CSR_CTRCTL	CSR_MCTRCTL
+
 # define SR_IE		SR_MIE
 # define SR_PIE		SR_MPIE
 # define SR_PP		SR_MPP
@@ -538,6 +619,8 @@
 # define CSR_TOPEI	CSR_STOPEI
 # define CSR_TOPI	CSR_STOPI
 
+# define CSR_CTRCTL	CSR_SCTRCTL
+
 # define SR_IE		SR_SIE
 # define SR_PIE		SR_SPIE
 # define SR_PP		SR_SPP
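
[Editorial sketch, not part of the patch: how the new fields compose.
The depth encoding (16 << sctrdepth.depth) is stated in the CTR spec and
in patch 6 below; the helper names here are hypothetical, and the code
assumes the asm/csr.h definitions added above.]

/* Architected depth encoding: 16 << sctrdepth.depth (0 -> 16 ... 4 -> 256). */
static inline unsigned int sctrdepth_to_entries(unsigned long depth_csr)
{
	return 16 << (depth_csr & SCTRDEPTH_MASK);
}

/* Record kernel- and user-mode transfers, and freeze the buffer when a
 * local counter-overflow interrupt fires, so the overflow handler sees
 * a stable snapshot.
 */
static inline unsigned long ctrctl_default_config(void)
{
	return CTRCTL_KERNEL_ENABLE | CTRCTL_U_ENABLE | CTRCTL_LCOFIFRZ;
}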
From patchwork Thu Jan 16 23:09:51 2025
From: Rajnesh Kanwal <rkanwal@rivosinc.com>
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Subject: [PATCH v2 3/7] riscv: Add Control Transfer Records extension parsing
Date: Thu, 16 Jan 2025 23:09:51 +0000
Message-Id: <20250116230955.867152-4-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>
References: <20250116230955.867152-1-rkanwal@rivosinc.com>

Add the CTR extensions to the ISA extension map so that their
availability can be looked up.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
---
 arch/riscv/include/asm/hwcap.h | 4 ++++
 arch/riscv/kernel/cpufeature.c | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index 42b34e2f80e8..552c7ebae7be 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -105,6 +105,8 @@
 #define RISCV_ISA_EXT_SSCCFG		96
 #define RISCV_ISA_EXT_SMCDELEG		97
 #define RISCV_ISA_EXT_SMCNTRPMF	98
+#define RISCV_ISA_EXT_SMCTR		99
+#define RISCV_ISA_EXT_SSCTR		100
 
 #define RISCV_ISA_EXT_XLINUXENVCFG	127
 
@@ -115,11 +117,13 @@
 #define RISCV_ISA_EXT_SxAIA		RISCV_ISA_EXT_SMAIA
 #define RISCV_ISA_EXT_SUPM		RISCV_ISA_EXT_SMNPM
 #define RISCV_ISA_EXT_SxCSRIND		RISCV_ISA_EXT_SMCSRIND
+#define RISCV_ISA_EXT_SxCTR		RISCV_ISA_EXT_SMCTR
 #else
 #define RISCV_ISA_EXT_SxAIA		RISCV_ISA_EXT_SSAIA
 #define RISCV_ISA_EXT_SUPM		RISCV_ISA_EXT_SSNPM
 #define RISCV_ISA_EXT_SxAIA		RISCV_ISA_EXT_SSAIA
 #define RISCV_ISA_EXT_SxCSRIND		RISCV_ISA_EXT_SSCSRIND
+#define RISCV_ISA_EXT_SxCTR		RISCV_ISA_EXT_SSCTR
 #endif
 
 #endif /* _ASM_RISCV_HWCAP_H */
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index ec068c9130e5..ef3b70f7d5d2 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -391,6 +391,7 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
 	__RISCV_ISA_EXT_DATA(zvkt, RISCV_ISA_EXT_ZVKT),
 	__RISCV_ISA_EXT_DATA(smaia, RISCV_ISA_EXT_SMAIA),
 	__RISCV_ISA_EXT_DATA(smcdeleg, RISCV_ISA_EXT_SMCDELEG),
+	__RISCV_ISA_EXT_DATA(smctr, RISCV_ISA_EXT_SMCTR),
 	__RISCV_ISA_EXT_DATA(smmpm, RISCV_ISA_EXT_SMMPM),
 	__RISCV_ISA_EXT_SUPERSET(smnpm, RISCV_ISA_EXT_SMNPM, riscv_xlinuxenvcfg_exts),
 	__RISCV_ISA_EXT_DATA(smstateen, RISCV_ISA_EXT_SMSTATEEN),
@@ -400,6 +401,7 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
 	__RISCV_ISA_EXT_DATA(sscsrind, RISCV_ISA_EXT_SSCSRIND),
 	__RISCV_ISA_EXT_DATA(ssccfg, RISCV_ISA_EXT_SSCCFG),
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
+	__RISCV_ISA_EXT_DATA(ssctr, RISCV_ISA_EXT_SSCTR),
 	__RISCV_ISA_EXT_SUPERSET(ssnpm, RISCV_ISA_EXT_SSNPM, riscv_xlinuxenvcfg_exts),
 	__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
 	__RISCV_ISA_EXT_DATA(svade, RISCV_ISA_EXT_SVADE),
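
[Editorial sketch, not part of the patch: with the map entries in place,
a consumer can gate on the mode-appropriate alias. This assumes the
usual riscv_isa_extension_available() helper, which this series itself
uses in patch 6; the wrapper name is hypothetical.]

#include <asm/hwcap.h>

/* SxCTR resolves to Smctr on M-mode kernels and Ssctr otherwise,
 * per the hwcap.h aliases added above.
 */
static bool ctr_ext_available(void)
{
	return riscv_isa_extension_available(NULL, SxCTR);
}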
From patchwork Thu Jan 16 23:09:52 2025
From: Rajnesh Kanwal <rkanwal@rivosinc.com>
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Subject: [PATCH v2 4/7] dt-bindings: riscv: add Sxctr ISA extension description
Date: Thu, 16 Jan 2025 23:09:52 +0000
Message-Id: <20250116230955.867152-5-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>
References: <20250116230955.867152-1-rkanwal@rivosinc.com>

Add the S[m|s]ctr ISA extension descriptions.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
---
 .../devicetree/bindings/riscv/extensions.yaml | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/Documentation/devicetree/bindings/riscv/extensions.yaml b/Documentation/devicetree/bindings/riscv/extensions.yaml
index 848354e3048f..8322503f0773 100644
--- a/Documentation/devicetree/bindings/riscv/extensions.yaml
+++ b/Documentation/devicetree/bindings/riscv/extensions.yaml
@@ -167,6 +167,13 @@ properties:
             extension allows other ISA extension to use indirect CSR access
             mechanism in M-mode.
 
+        - const: smctr
+          description: |
+            The standard Smctr machine-level extension enables recording of
+            limited branch history in register-accessible internal core
+            storage. Smctr depends on both the implementation of S-mode and
+            the Sscsrind extension.
+
         - const: sscsrind
           description: |
             The standard Sscsrind supervisor-level extension extends the
@@ -193,6 +200,13 @@ properties:
             and mode-based filtering as ratified at commit 01d1df0 ("Add ability
             to manually trigger workflow. (#2)") of riscv-count-overflow.
 
+        - const: ssctr
+          description: |
+            The standard Ssctr supervisor-level extension enables recording of
+            limited branch history in register-accessible internal core
+            storage. Ssctr depends on both the implementation of S-mode and
+            the Sscsrind extension.
+
         - const: ssnpm
           description: |
             The standard Ssnpm extension for next-mode pointer masking as
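
[Editorial sketch, not part of the patch: a hypothetical cpu node
advertising the new strings; the surrounding properties are illustrative
only.]

cpu@0 {
	device_type = "cpu";
	compatible = "riscv";
	riscv,isa-base = "rv64i";
	riscv,isa-extensions = "i", "m", "a", "sscsrind", "sscofpmf",
			       "ssctr";
};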
From patchwork Thu Jan 16 23:09:53 2025
From: Rajnesh Kanwal <rkanwal@rivosinc.com>
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Subject: [PATCH v2 5/7] riscv: pmu: Add infrastructure for Control Transfer Record
Date: Thu, 16 Jan 2025 23:09:53 +0000
Message-Id: <20250116230955.867152-6-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>
References: <20250116230955.867152-1-rkanwal@rivosinc.com>

To support the Control Transfer Records (CTR) extension, we need to
extend the riscv_pmu framework with some basic infrastructure for
branch stack sampling. Subsequent patches will use this to add support
for CTR in the riscv_pmu_dev driver.

With CTR, branches are stored in a hardware FIFO that software samples
when perf events overflow. A task may be context-switched between
overflows; to avoid leaking samples, we clear the previous task's
records when a new task is switched in. To do this we use the
pmu::sched_task() callback added in this patch.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
---
 drivers/perf/riscv_pmu_common.c | 20 ++++++++++++++++++++
 drivers/perf/riscv_pmu_dev.c    | 17 +++++++++++++++++
 drivers/perf/riscv_pmu_legacy.c |  2 ++
 include/linux/perf/riscv_pmu.h  | 18 ++++++++++++++++++
 4 files changed, 57 insertions(+)

diff --git a/drivers/perf/riscv_pmu_common.c b/drivers/perf/riscv_pmu_common.c
index 7644147d50b4..c4c4b5d6bed0 100644
--- a/drivers/perf/riscv_pmu_common.c
+++ b/drivers/perf/riscv_pmu_common.c
@@ -157,6 +157,19 @@ u64 riscv_pmu_ctr_get_width_mask(struct perf_event *event)
 	return GENMASK_ULL(cwidth, 0);
 }
 
+static void riscv_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
+				 bool sched_in)
+{
+	struct riscv_pmu *pmu;
+
+	if (!pmu_ctx)
+		return;
+
+	pmu = to_riscv_pmu(pmu_ctx->pmu);
+	if (pmu->sched_task)
+		pmu->sched_task(pmu_ctx, sched_in);
+}
+
 u64 riscv_pmu_event_update(struct perf_event *event)
 {
 	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
@@ -269,6 +282,8 @@ static int riscv_pmu_add(struct perf_event *event, int flags)
 	cpuc->events[idx] = event;
 	cpuc->n_events++;
 	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+	if (rvpmu->ctr_add)
+		rvpmu->ctr_add(event, flags);
 
 	if (flags & PERF_EF_START)
 		riscv_pmu_start(event, PERF_EF_RELOAD);
@@ -286,6 +301,9 @@ static void riscv_pmu_del(struct perf_event *event, int flags)
 	riscv_pmu_stop(event, PERF_EF_UPDATE);
 	cpuc->events[hwc->idx] = NULL;
 
+	if (rvpmu->ctr_del)
+		rvpmu->ctr_del(event, flags);
+
 	/* The firmware need to reset the counter mapping */
 	if (rvpmu->ctr_stop)
 		rvpmu->ctr_stop(event, RISCV_PMU_STOP_FLAG_RESET);
@@ -402,6 +420,7 @@ struct riscv_pmu *riscv_pmu_alloc(void)
 	for_each_possible_cpu(cpuid) {
 		cpuc = per_cpu_ptr(pmu->hw_events, cpuid);
 		cpuc->n_events = 0;
+		cpuc->ctr_users = 0;
 		for (i = 0; i < RISCV_MAX_COUNTERS; i++)
 			cpuc->events[i] = NULL;
 		cpuc->snapshot_addr = NULL;
@@ -416,6 +435,7 @@ struct riscv_pmu *riscv_pmu_alloc(void)
 		.start = riscv_pmu_start,
 		.stop = riscv_pmu_stop,
 		.read = riscv_pmu_read,
+		.sched_task = riscv_pmu_sched_task,
 	};
 
 	return pmu;
diff --git a/drivers/perf/riscv_pmu_dev.c b/drivers/perf/riscv_pmu_dev.c
index d28d60abaaf2..b9b257607b76 100644
--- a/drivers/perf/riscv_pmu_dev.c
+++ b/drivers/perf/riscv_pmu_dev.c
@@ -1027,6 +1027,12 @@ static void rvpmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag)
 	}
 }
 
+static void pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
+			   bool sched_in)
+{
+	/* Call the CTR-specific sched hook. */
+}
+
 static int rvpmu_sbi_find_num_ctrs(void)
 {
 	struct sbiret ret;
@@ -1569,6 +1575,14 @@ static int rvpmu_deleg_ctr_get_idx(struct perf_event *event)
 	return -ENOENT;
 }
 
+static void rvpmu_ctr_add(struct perf_event *event, int flags)
+{
+}
+
+static void rvpmu_ctr_del(struct perf_event *event, int flags)
+{
+}
+
 static void rvpmu_ctr_start(struct perf_event *event, u64 ival)
 {
 	struct hw_perf_event *hwc = &event->hw;
@@ -1984,6 +1998,8 @@ static int rvpmu_device_probe(struct platform_device *pdev)
 	else
 		pmu->pmu.attr_groups = riscv_sbi_pmu_attr_groups;
 	pmu->cmask = cmask;
+	pmu->ctr_add = rvpmu_ctr_add;
+	pmu->ctr_del = rvpmu_ctr_del;
 	pmu->ctr_start = rvpmu_ctr_start;
 	pmu->ctr_stop = rvpmu_ctr_stop;
 	pmu->event_map = rvpmu_event_map;
@@ -1995,6 +2011,7 @@ static int rvpmu_device_probe(struct platform_device *pdev)
 	pmu->event_mapped = rvpmu_event_mapped;
 	pmu->event_unmapped = rvpmu_event_unmapped;
 	pmu->csr_index = rvpmu_csr_index;
+	pmu->sched_task = pmu_sched_task;
 
 	ret = riscv_pm_pmu_register(pmu);
 	if (ret)
diff --git a/drivers/perf/riscv_pmu_legacy.c b/drivers/perf/riscv_pmu_legacy.c
index 93c8e0fdb589..bee6742d35fa 100644
--- a/drivers/perf/riscv_pmu_legacy.c
+++ b/drivers/perf/riscv_pmu_legacy.c
@@ -115,6 +115,8 @@ static void pmu_legacy_init(struct riscv_pmu *pmu)
 		      BIT(RISCV_PMU_LEGACY_INSTRET);
 	pmu->ctr_start = pmu_legacy_ctr_start;
 	pmu->ctr_stop = NULL;
+	pmu->ctr_add = NULL;
+	pmu->ctr_del = NULL;
 	pmu->event_map = pmu_legacy_event_map;
 	pmu->ctr_get_idx = pmu_legacy_ctr_get_idx;
 	pmu->ctr_get_width = pmu_legacy_ctr_get_width;
diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index e58f83811988..883781f12ae0 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -46,6 +46,13 @@
 	},						\
 }
 
+#define MAX_BRANCH_RECORDS 256
+
+struct branch_records {
+	struct perf_branch_stack branch_stack;
+	struct perf_branch_entry branch_entries[MAX_BRANCH_RECORDS];
+};
+
 struct cpu_hw_events {
 	/* currently enabled events */
 	int n_events;
@@ -65,6 +72,12 @@ struct cpu_hw_events {
 	bool snapshot_set_done;
 	/* A shadow copy of the counter values to avoid clobbering during multiple SBI calls */
 	u64 snapshot_cval_shcopy[RISCV_MAX_COUNTERS];
+
+	/* Saved branch records. */
+	struct branch_records *branches;
+
+	/* Active events requesting branch records */
+	int ctr_users;
 };
 
 struct riscv_pmu {
@@ -78,6 +91,8 @@ struct riscv_pmu {
 	int		(*ctr_get_idx)(struct perf_event *event);
 	int		(*ctr_get_width)(int idx);
 	void		(*ctr_clear_idx)(struct perf_event *event);
+	void		(*ctr_add)(struct perf_event *event, int flags);
+	void		(*ctr_del)(struct perf_event *event, int flags);
 	void		(*ctr_start)(struct perf_event *event, u64 init_val);
 	void		(*ctr_stop)(struct perf_event *event, unsigned long flag);
 	int		(*event_map)(struct perf_event *event, u64 *config);
@@ -85,10 +100,13 @@ struct riscv_pmu {
 	void		(*event_mapped)(struct perf_event *event, struct mm_struct *mm);
 	void		(*event_unmapped)(struct perf_event *event, struct mm_struct *mm);
 	uint8_t		(*csr_index)(struct perf_event *event);
+	void		(*sched_task)(struct perf_event_pmu_context *ctx, bool sched_in);
 
 	struct cpu_hw_events	__percpu *hw_events;
 	struct hlist_node	node;
 	struct notifier_block	riscv_pm_nb;
+
+	unsigned int	ctr_depth;
 };
 
 #define to_riscv_pmu(p) (container_of(p, struct riscv_pmu, pmu))
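
[Editorial sketch, not part of the patch: where this infrastructure is
exercised from. Branch stack sampling is requested through the plain
perf UAPI; nothing below is RISC-V specific, and the period value is an
arbitrary assumption.]

#include <linux/perf_event.h>
#include <string.h>

/* An attr asking for last-branch records alongside a cycles event; the
 * kernel drains the hardware FIFO on overflow, and the sched_task()
 * hook keeps per-task records from leaking across context switches.
 */
static void init_branch_attr(struct perf_event_attr *attr)
{
	memset(attr, 0, sizeof(*attr));
	attr->type = PERF_TYPE_HARDWARE;
	attr->config = PERF_COUNT_HW_CPU_CYCLES;
	attr->size = sizeof(*attr);
	attr->sample_period = 100000;
	attr->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
	attr->branch_sample_type = PERF_SAMPLE_BRANCH_ANY |
				   PERF_SAMPLE_BRANCH_USER;
}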
RNISVZlPJyzGHv3tXMRoSuEioMgtyPH3zzMl+eRDDwG4MV0Fqk9BHbvTbPjHF0npzFDU +/KrGrHcUE10AY0qRuhjuOzUy4EmvsOG11A6RzRmqbZy3lqjQwttL8cmWP8AGEBalUI0 brS5thajuZV1H6ee7Ft6vxQhpSwp6cGPbAdPXdx+PDg72lnZLu8LAgZok3UkmN7XVPiX NmQami9mwTff4jn7hh+eRqAei2HOVDq+wk7fUP5cxDF/Ep1ZNkLpwBEddudevbF36zZM I1Rw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1737069034; x=1737673834; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mZCF2qhQTSScochH887+09o1kiVfF8lDkMyjFmVHhR0=; b=fDT+9Ds1FDxrIYhzRaBkybQxNfgbbNj/cTN8Lv6XYLaOh5MZ1eRvU45zmGlsCj+sZl vf1QUbINvFzSbtOx6i46Afn57gY2jy4MdkoeIzMfBimA/eQrjJL32yqLkQXgxAoMs5nC qhMZLKTSrHcjLywvvYcP8ywGCxpR+P8l2kCWui0hf11cY32EiuHzxJ50XcECBuANH3ef JA5H7W64NSZNRfAK0BNHjhpvyHPmiDMYzV9HUf6H4SZ+QsFSn1L8RP74S6pIGtpdfDDk LsJQ4uMgzVV7Jfvu8LC8wkCUQ/kktf8HiUk8OoiIrRo4EdgWwawixOt9jeAO5+Dk4kkV /6eA== X-Forwarded-Encrypted: i=1; AJvYcCWOxs5KMnuQYbENsDhSBBo1yfNYYtZ7yfYWWii/ArjfkxalfhJa386DM+DhTRTXl41RZ0Fnyf+VRNAAqw==@lists.infradead.org X-Gm-Message-State: AOJu0Yw5Q8pfXdz66jneXnaiN4JlkhLTtIQdoHEGt1dPYPz14UgIYRg6 macECsPmhA0xwc6i48G/Mt5Rq0uQ8kiSusMSw/Btmi6d2Nm+g6tf11NWvo077pI= X-Gm-Gg: ASbGncvd9jUs4pOH0hF5CItnNscYD6n/3jahKCH6z/0RzsKd++OEwo9lBVDA5CSy7rC bWVrEtZYg7zfMTYXjn618IAo9IQbUONFibSDpfAyBbXJTsBCMs9ay62haqZGOeUtBzR5afJ2Y37 Lvs/0YMZT2+99ugARnwpTQrqQm9I2GYgijKttRmpIW9i1M39nQn5GMMBcQzORLTvVfCLqhKVW3y VQxDBKPjwkyaeIDdWQu/bOFu0e81gL4qJHINoajQRkVyQkCVaTXPyHs4tnNOeM8Nax0BjFJBPfY A63lMVjLcafc/uzk X-Google-Smtp-Source: AGHT+IENNVt/z6/PQNOhZEjHjFyExbEVG67N2G2WBWJGwM1eJgy85tbXpBlyUecckzcRYTVjoisjXQ== X-Received: by 2002:a05:600c:4e06:b0:434:e8cf:6390 with SMTP id 5b1f17b1804b1-438913c6856mr3992575e9.6.1737069033888; Thu, 16 Jan 2025 15:10:33 -0800 (PST) Received: from rkanwal-XPS-15-9520.uk.rivosinc.com ([2a02:c7c:75ac:6300:b3f2:3a24:1767:7db0]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-38bf322b337sm974991f8f.59.2025.01.16.15.10.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 16 Jan 2025 15:10:33 -0800 (PST) From: Rajnesh Kanwal To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org Cc: linux-perf-users@vger.kernel.org, adrian.hunter@intel.com, alexander.shishkin@linux.intel.com, ajones@ventanamicro.com, anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com, beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org, heiko@sntech.de, irogers@google.com, mingo@redhat.com, james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org, jisheng.teoh@starfivetech.com, palmer@dabbelt.com, will@kernel.org, kaiwenxue1@gmail.com, vincent.chen@sifive.com, Rajnesh Kanwal Subject: [PATCH v2 6/7] riscv: pmu: Add driver for Control Transfer Records Ext. Date: Thu, 16 Jan 2025 23:09:54 +0000 Message-Id: <20250116230955.867152-7-rkanwal@rivosinc.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com> References: <20250116230955.867152-1-rkanwal@rivosinc.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250116_151035_822948_839BE775 X-CRM114-Status: GOOD ( 22.95 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org This adds support for CTR Ext defined in [0]. 
The extension allows to records a maximum for 256 last branch records. CTR extension depends on s[m|s]csrind and Sscofpmf extensions. Signed-off-by: Rajnesh Kanwal --- MAINTAINERS | 1 + drivers/perf/Kconfig | 11 + drivers/perf/Makefile | 1 + drivers/perf/riscv_ctr.c | 608 +++++++++++++++++++++++++++++++++ include/linux/perf/riscv_pmu.h | 37 ++ 5 files changed, 658 insertions(+) create mode 100644 drivers/perf/riscv_ctr.c diff --git a/MAINTAINERS b/MAINTAINERS index 2ef7ff933266..7bcd79f33811 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -20177,6 +20177,7 @@ M: Atish Patra R: Anup Patel L: linux-riscv@lists.infradead.org S: Supported +F: drivers/perf/riscv_ctr.c F: drivers/perf/riscv_pmu_common.c F: drivers/perf/riscv_pmu_dev.c F: drivers/perf/riscv_pmu_legacy.c diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig index b3bdff2a99a4..9107c5208bf5 100644 --- a/drivers/perf/Kconfig +++ b/drivers/perf/Kconfig @@ -129,6 +129,17 @@ config ANDES_CUSTOM_PMU If you don't know what to do here, say "Y". +config RISCV_CTR + bool "Enable support for Control Transfer Records (CTR)" + depends on PERF_EVENTS && RISCV_PMU + default y + help + Enable support for Control Transfer Records (CTR) which + allows recording branches, Jumps, Calls, returns etc taken in an + execution path. This also supports privilege based filtering. It + captures additional relevant information such as cycle count, + branch misprediction etc. + config ARM_PMU_ACPI depends on ARM_PMU && ACPI def_bool y diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile index 0805d740c773..755609f184fe 100644 --- a/drivers/perf/Makefile +++ b/drivers/perf/Makefile @@ -20,6 +20,7 @@ obj-$(CONFIG_RISCV_PMU_COMMON) += riscv_pmu_common.o obj-$(CONFIG_RISCV_PMU_LEGACY) += riscv_pmu_legacy.o obj-$(CONFIG_RISCV_PMU) += riscv_pmu_dev.o obj-$(CONFIG_STARFIVE_STARLINK_PMU) += starfive_starlink_pmu.o +obj-$(CONFIG_RISCV_CTR) += riscv_ctr.o obj-$(CONFIG_THUNDERX2_PMU) += thunderx2_pmu.o obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o obj-$(CONFIG_ARM_SPE_PMU) += arm_spe_pmu.o diff --git a/drivers/perf/riscv_ctr.c b/drivers/perf/riscv_ctr.c new file mode 100644 index 000000000000..53419a656043 --- /dev/null +++ b/drivers/perf/riscv_ctr.c @@ -0,0 +1,608 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Control transfer records extension Helpers. + * + * Copyright (C) 2024 Rivos Inc. + * + * Author: Rajnesh Kanwal + */ + +#define pr_fmt(fmt) "CTR: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define CTR_BRANCH_FILTERS_INH (CTRCTL_EXCINH | \ + CTRCTL_INTRINH | \ + CTRCTL_TRETINH | \ + CTRCTL_TKBRINH | \ + CTRCTL_INDCALL_INH | \ + CTRCTL_DIRCALL_INH | \ + CTRCTL_INDJUMP_INH | \ + CTRCTL_DIRJUMP_INH | \ + CTRCTL_CORSWAP_INH | \ + CTRCTL_RET_INH | \ + CTRCTL_INDOJUMP_INH | \ + CTRCTL_DIROJUMP_INH) + +#define CTR_BRANCH_ENABLE_BITS (CTRCTL_KERNEL_ENABLE | CTRCTL_U_ENABLE) + +/* Branch filters not-supported by CTR extension. */ +#define CTR_EXCLUDE_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_ABORT_TX | \ + PERF_SAMPLE_BRANCH_IN_TX | \ + PERF_SAMPLE_BRANCH_PRIV_SAVE | \ + PERF_SAMPLE_BRANCH_NO_TX | \ + PERF_SAMPLE_BRANCH_COUNTERS) + +/* Branch filters supported by CTR extension. 
*/ +#define CTR_ALLOWED_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_USER | \ + PERF_SAMPLE_BRANCH_KERNEL | \ + PERF_SAMPLE_BRANCH_HV | \ + PERF_SAMPLE_BRANCH_ANY | \ + PERF_SAMPLE_BRANCH_ANY_CALL | \ + PERF_SAMPLE_BRANCH_ANY_RETURN | \ + PERF_SAMPLE_BRANCH_IND_CALL | \ + PERF_SAMPLE_BRANCH_COND | \ + PERF_SAMPLE_BRANCH_IND_JUMP | \ + PERF_SAMPLE_BRANCH_HW_INDEX | \ + PERF_SAMPLE_BRANCH_NO_FLAGS | \ + PERF_SAMPLE_BRANCH_NO_CYCLES | \ + PERF_SAMPLE_BRANCH_CALL_STACK | \ + PERF_SAMPLE_BRANCH_CALL | \ + PERF_SAMPLE_BRANCH_TYPE_SAVE) + +#define CTR_PERF_BRANCH_FILTERS (CTR_ALLOWED_BRANCH_FILTERS | \ + CTR_EXCLUDE_BRANCH_FILTERS) + +static u64 allowed_filters __read_mostly; + +struct ctr_regset { + unsigned long src; + unsigned long target; + unsigned long ctr_data; +}; + +enum { + CTR_STATE_NONE, + CTR_STATE_VALID, +}; + +/* Head is the idx of the next available slot. The slot may be already populated + * by an old entry which will be lost on new writes. + */ +struct riscv_perf_task_context { + int callstack_users; + int stack_state; + unsigned int num_entries; + uint32_t ctr_status; + uint64_t ctr_control; + struct ctr_regset store[MAX_BRANCH_RECORDS]; +}; + +static inline u64 get_ctr_src_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_SIREG, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline void set_ctr_src_reg(unsigned int ctr_idx, u64 value) +{ + return csr_ind_write(CSR_SIREG, CTR_ENTRIES_FIRST, ctr_idx, value); +} + +static inline u64 get_ctr_tgt_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_SIREG2, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline void set_ctr_tgt_reg(unsigned int ctr_idx, u64 value) +{ + return csr_ind_write(CSR_SIREG2, CTR_ENTRIES_FIRST, ctr_idx, value); +} + +static inline u64 get_ctr_data_reg(unsigned int ctr_idx) +{ + return csr_ind_read(CSR_SIREG3, CTR_ENTRIES_FIRST, ctr_idx); +} + +static inline void set_ctr_data_reg(unsigned int ctr_idx, u64 value) +{ + return csr_ind_write(CSR_SIREG3, CTR_ENTRIES_FIRST, ctr_idx, value); +} + +static inline bool ctr_record_valid(u64 ctr_src) +{ + return !!FIELD_GET(CTRSOURCE_VALID, ctr_src); +} + +static inline int ctr_get_mispredict(u64 ctr_target) +{ + return FIELD_GET(CTRTARGET_MISP, ctr_target); +} + +static inline unsigned int ctr_get_cycles(u64 ctr_data) +{ + const unsigned int cce = FIELD_GET(CTRDATA_CCE_MASK, ctr_data); + const unsigned int ccm = FIELD_GET(CTRDATA_CCM_MASK, ctr_data); + + if (ctr_data & CTRDATA_CCV) + return 0; + + /* Formula to calculate cycles from spec: (2^12 + CCM) << CCE-1 */ + if (cce > 0) + return (4096 + ccm) << (cce - 1); + + return FIELD_GET(CTRDATA_CCM_MASK, ctr_data); +} + +static inline unsigned int ctr_get_type(u64 ctr_data) +{ + return FIELD_GET(CTRDATA_TYPE_MASK, ctr_data); +} + +static inline unsigned int ctr_get_depth(u64 ctr_depth) +{ + /* Depth table from CTR Spec: 2.4 sctrdepth. + * + * sctrdepth.depth Depth + * 000 - 16 + * 001 - 32 + * 010 - 64 + * 011 - 128 + * 100 - 256 + * + * Depth = 16 * 2 ^ (ctrdepth.depth) + * or + * Depth = 16 << ctrdepth.depth. + */ + return 16 << FIELD_GET(SCTRDEPTH_MASK, ctr_depth); +} + +static inline struct riscv_perf_task_context *task_context(void *ctx) +{ + return (struct riscv_perf_task_context *)ctx; +} + +/* Reads CTR entry at idx and stores it in entry struct. 
+/* Read the CTR entry at idx and store it in the entry struct. */
+static bool get_ctr_regset(struct ctr_regset *entry, unsigned int idx)
+{
+	entry->src = get_ctr_src_reg(idx);
+
+	if (!ctr_record_valid(entry->src))
+		return false;
+
+	entry->target = get_ctr_tgt_reg(idx);
+	entry->ctr_data = get_ctr_data_reg(idx);
+
+	return true;
+}
+
+static void set_ctr_regset(struct ctr_regset *entry, unsigned int idx)
+{
+	set_ctr_src_reg(idx, entry->src);
+	set_ctr_tgt_reg(idx, entry->target);
+	set_ctr_data_reg(idx, entry->ctr_data);
+}
+
+static u64 branch_type_to_ctr(int branch_type)
+{
+	u64 config = CTR_BRANCH_FILTERS_INH | CTRCTL_LCOFIFRZ;
+
+	if (branch_type & PERF_SAMPLE_BRANCH_USER)
+		config |= CTRCTL_U_ENABLE;
+
+	if (branch_type & PERF_SAMPLE_BRANCH_KERNEL)
+		config |= CTRCTL_KERNEL_ENABLE;
+
+	if (branch_type & PERF_SAMPLE_BRANCH_HV) {
+		if (riscv_isa_extension_available(NULL, h))
+			config |= CTRCTL_KERNEL_ENABLE;
+	}
+
+	if (branch_type & PERF_SAMPLE_BRANCH_ANY) {
+		config &= ~CTR_BRANCH_FILTERS_INH;
+		return config;
+	}
+
+	if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) {
+		config &= ~CTRCTL_INDCALL_INH;
+		config &= ~CTRCTL_DIRCALL_INH;
+		config &= ~CTRCTL_EXCINH;
+		config &= ~CTRCTL_INTRINH;
+	}
+
+	if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN)
+		config &= ~(CTRCTL_RET_INH | CTRCTL_TRETINH);
+
+	if (branch_type & PERF_SAMPLE_BRANCH_IND_CALL)
+		config &= ~CTRCTL_INDCALL_INH;
+
+	if (branch_type & PERF_SAMPLE_BRANCH_COND)
+		config &= ~CTRCTL_TKBRINH;
+
+	if (branch_type & PERF_SAMPLE_BRANCH_CALL_STACK)
+		config |= CTRCTL_RASEMU;
+
+	if (branch_type & PERF_SAMPLE_BRANCH_IND_JUMP) {
+		config &= ~CTRCTL_INDJUMP_INH;
+		config &= ~CTRCTL_INDOJUMP_INH;
+	}
+
+	if (branch_type & PERF_SAMPLE_BRANCH_CALL)
+		config &= ~CTRCTL_DIRCALL_INH;
+
+	return config;
+}
+
+static const int ctr_perf_map[] = {
+	[CTRDATA_TYPE_NONE]			= PERF_BR_UNKNOWN,
+	[CTRDATA_TYPE_EXCEPTION]		= PERF_BR_SYSCALL,
+	[CTRDATA_TYPE_INTERRUPT]		= PERF_BR_IRQ,
+	[CTRDATA_TYPE_TRAP_RET]			= PERF_BR_ERET,
+	[CTRDATA_TYPE_NONTAKEN_BRANCH]		= PERF_BR_COND,
+	[CTRDATA_TYPE_TAKEN_BRANCH]		= PERF_BR_COND,
+	[CTRDATA_TYPE_RESERVED_6]		= PERF_BR_UNKNOWN,
+	[CTRDATA_TYPE_RESERVED_7]		= PERF_BR_UNKNOWN,
+	[CTRDATA_TYPE_INDIRECT_CALL]		= PERF_BR_IND_CALL,
+	[CTRDATA_TYPE_DIRECT_CALL]		= PERF_BR_CALL,
+	[CTRDATA_TYPE_INDIRECT_JUMP]		= PERF_BR_IND,
+	[CTRDATA_TYPE_DIRECT_JUMP]		= PERF_BR_UNCOND,
+	[CTRDATA_TYPE_CO_ROUTINE_SWAP]		= PERF_BR_UNKNOWN,
+	[CTRDATA_TYPE_RETURN]			= PERF_BR_RET,
+	[CTRDATA_TYPE_OTHER_INDIRECT_JUMP]	= PERF_BR_IND,
+	[CTRDATA_TYPE_OTHER_DIRECT_JUMP]	= PERF_BR_UNCOND,
+};
+
+static void ctr_set_perf_entry_type(struct perf_branch_entry *entry,
+				    u64 ctr_data)
+{
+	int ctr_type = ctr_get_type(ctr_data);
+
+	entry->type = ctr_perf_map[ctr_type];
+	if (entry->type == PERF_BR_UNKNOWN)
+		pr_warn("%d - unknown branch type captured\n", ctr_type);
+}
+
+static void capture_ctr_flags(struct perf_branch_entry *entry,
+			      struct perf_event *event, u64 ctr_data,
+			      u64 ctr_target)
+{
+	if (branch_sample_type(event))
+		ctr_set_perf_entry_type(entry, ctr_data);
+
+	if (!branch_sample_no_cycles(event))
+		entry->cycles = ctr_get_cycles(ctr_data);
+
+	if (!branch_sample_no_flags(event)) {
+		entry->abort = 0;
+		entry->mispred = ctr_get_mispredict(ctr_target);
+		entry->predicted = !entry->mispred;
+	}
+
+	if (branch_sample_priv(event))
+		entry->priv = PERF_BR_PRIV_UNKNOWN;
+}
+
+static void ctr_regset_to_branch_entry(struct cpu_hw_events *cpuc,
+				       struct perf_event *event,
+				       struct ctr_regset *regset,
+				       unsigned int idx)
+{
+	struct perf_branch_entry *entry =
&cpuc->branches->branch_entries[idx]; + + perf_clear_branch_entry_bitfields(entry); + entry->from = regset->src & (~CTRSOURCE_VALID); + entry->to = regset->target & (~CTRTARGET_MISP); + capture_ctr_flags(entry, event, regset->ctr_data, regset->target); +} + +static void ctr_read_entries(struct cpu_hw_events *cpuc, + struct perf_event *event, + unsigned int depth) +{ + struct ctr_regset entry = {}; + u64 ctr_ctl; + int i; + + ctr_ctl = csr_read_clear(CSR_CTRCTL, CTR_BRANCH_ENABLE_BITS); + + for (i = 0; i < depth; i++) { + if (!get_ctr_regset(&entry, i)) + break; + + ctr_regset_to_branch_entry(cpuc, event, &entry, i); + } + + csr_set(CSR_CTRCTL, ctr_ctl & CTR_BRANCH_ENABLE_BITS); + + cpuc->branches->branch_stack.nr = i; + cpuc->branches->branch_stack.hw_idx = 0; +} + +bool riscv_pmu_ctr_valid(struct perf_event *event) +{ + u64 branch_type = event->attr.branch_sample_type; + + if (branch_type & ~allowed_filters) { + pr_debug_once("Requested branch filters not supported 0x%llx\n", + branch_type & ~allowed_filters); + return false; + } + + return true; +} + +void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc, struct perf_event *event) +{ + unsigned int depth = to_riscv_pmu(event->pmu)->ctr_depth; + + ctr_read_entries(cpuc, event, depth); + + /* Clear frozen bit. */ + csr_clear(CSR_SCTRSTATUS, SCTRSTATUS_FROZEN); +} + +static void riscv_pmu_ctr_reset(void) +{ + /* FIXME: Replace with sctrclr instruction once support is merged + * into toolchain. + */ + asm volatile(".4byte 0x10400073\n" ::: "memory"); + csr_write(CSR_SCTRSTATUS, 0); + csr_write(CSR_CTRCTL, 0); +} + +static void __riscv_pmu_ctr_restore(void *ctx) +{ + struct riscv_perf_task_context *task_ctx = ctx; + unsigned int i; + + csr_write(CSR_SCTRSTATUS, task_ctx->ctr_status); + + for (i = 0; i < task_ctx->num_entries; i++) + set_ctr_regset(&task_ctx->store[i], i); +} + +static void riscv_pmu_ctr_restore(void *ctx) +{ + if (task_context(ctx)->callstack_users == 0 || + task_context(ctx)->stack_state == CTR_STATE_NONE) { + riscv_pmu_ctr_reset(); + return; + } + + __riscv_pmu_ctr_restore(ctx); + + task_context(ctx)->stack_state = CTR_STATE_NONE; +} + +static void __riscv_pmu_ctr_save(void *ctx, unsigned int depth) +{ + struct riscv_perf_task_context *task_ctx = ctx; + struct ctr_regset *dst; + unsigned int i; + + for (i = 0; i < depth; i++) { + dst = &task_ctx->store[i]; + if (!get_ctr_regset(dst, i)) + break; + } + + task_ctx->num_entries = i; + + task_ctx->ctr_status = csr_read(CSR_SCTRSTATUS); +} + +static void riscv_pmu_ctr_save(void *ctx, unsigned int depth) +{ + if (task_context(ctx)->callstack_users == 0) { + task_context(ctx)->stack_state = CTR_STATE_NONE; + return; + } + + __riscv_pmu_ctr_save(ctx, depth); + + task_context(ctx)->stack_state = CTR_STATE_VALID; +} + +/* + * On context switch in, we need to make sure no samples from previous tasks + * are left in the CTR. + * + * On ctxswin, sched_in = true, called after the PMU has started + * On ctxswout, sched_in = false, called before the PMU is stopped + */ +void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context *pmu_ctx, + bool sched_in) +{ + struct riscv_pmu *rvpmu = to_riscv_pmu(pmu_ctx->pmu); + struct cpu_hw_events *cpuc = this_cpu_ptr(rvpmu->hw_events); + void *task_ctx; + + if (!cpuc->ctr_users) + return; + + /* Save branch records in task_ctx on sched out */ + task_ctx = pmu_ctx ? 
pmu_ctx->task_ctx_data : NULL;
+	if (task_ctx) {
+		if (sched_in)
+			riscv_pmu_ctr_restore(task_ctx);
+		else
+			riscv_pmu_ctr_save(task_ctx, rvpmu->ctr_depth);
+		return;
+	}
+
+	/* Reset branch records on sched in */
+	if (sched_in)
+		riscv_pmu_ctr_reset();
+}
+
+static inline bool branch_user_callstack(unsigned int br_type)
+{
+	return (br_type & PERF_SAMPLE_BRANCH_USER) &&
+	       (br_type & PERF_SAMPLE_BRANCH_CALL_STACK);
+}
+
+void riscv_pmu_ctr_add(struct perf_event *event)
+{
+	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
+	struct cpu_hw_events *cpuc = this_cpu_ptr(rvpmu->hw_events);
+
+	if (branch_user_callstack(event->attr.branch_sample_type) &&
+	    event->pmu_ctx->task_ctx_data)
+		task_context(event->pmu_ctx->task_ctx_data)->callstack_users++;
+
+	perf_sched_cb_inc(event->pmu);
+
+	if (!cpuc->ctr_users++)
+		riscv_pmu_ctr_reset();
+}
+
+void riscv_pmu_ctr_del(struct perf_event *event)
+{
+	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
+	struct cpu_hw_events *cpuc = this_cpu_ptr(rvpmu->hw_events);
+
+	if (branch_user_callstack(event->attr.branch_sample_type) &&
+	    event->pmu_ctx->task_ctx_data)
+		task_context(event->pmu_ctx->task_ctx_data)->callstack_users--;
+
+	cpuc->ctr_users--;
+	WARN_ON_ONCE(cpuc->ctr_users < 0);
+
+	perf_sched_cb_dec(event->pmu);
+}
+
+void riscv_pmu_ctr_enable(struct perf_event *event)
+{
+	u64 branch_type = event->attr.branch_sample_type;
+	u64 ctr;
+
+	ctr = branch_type_to_ctr(branch_type);
+	csr_write(CSR_CTRCTL, ctr);
+}
+
+void riscv_pmu_ctr_disable(struct perf_event *event)
+{
+	/* Clear CTRCTL to disable the recording. */
+	csr_write(CSR_CTRCTL, 0);
+}
+
+/*
+ * Check for the perf branch filters supported by the hardware here. To
+ * avoid missing any newly added perf filter, we do a BUILD_BUG_ON()
+ * check, so make sure to update the CTR_ALLOWED_BRANCH_FILTERS or
+ * CTR_EXCLUDE_BRANCH_FILTERS defines when adding support for a new
+ * filter in the function below.
+ */
+static void __init check_available_filters(void)
+{
+	u64 ctr_ctl;
+
+	/*
+	 * Ensure both perf branch filter allowed and exclude
+	 * masks are always in sync with the generic perf ABI.
+	 */
+	BUILD_BUG_ON(CTR_PERF_BRANCH_FILTERS != (PERF_SAMPLE_BRANCH_MAX - 1));
+
+	allowed_filters = PERF_SAMPLE_BRANCH_USER |
+			  PERF_SAMPLE_BRANCH_KERNEL |
+			  PERF_SAMPLE_BRANCH_ANY |
+			  PERF_SAMPLE_BRANCH_HW_INDEX |
+			  PERF_SAMPLE_BRANCH_NO_FLAGS |
+			  PERF_SAMPLE_BRANCH_NO_CYCLES |
+			  PERF_SAMPLE_BRANCH_TYPE_SAVE;
+
+	/* Probe the implemented CTRCTL bits by writing all-ones and reading back. */
+	csr_write(CSR_CTRCTL, ~0);
+	ctr_ctl = csr_read(CSR_CTRCTL);
+
+	if (riscv_isa_extension_available(NULL, h))
+		allowed_filters |= PERF_SAMPLE_BRANCH_HV;
+
+	if (ctr_ctl & (CTRCTL_INDCALL_INH | CTRCTL_DIRCALL_INH))
+		allowed_filters |= PERF_SAMPLE_BRANCH_ANY_CALL;
+
+	if (ctr_ctl & (CTRCTL_RET_INH | CTRCTL_TRETINH))
+		allowed_filters |= PERF_SAMPLE_BRANCH_ANY_RETURN;
+
+	if (ctr_ctl & CTRCTL_INDCALL_INH)
+		allowed_filters |= PERF_SAMPLE_BRANCH_IND_CALL;
+
+	if (ctr_ctl & CTRCTL_TKBRINH)
+		allowed_filters |= PERF_SAMPLE_BRANCH_COND;
+
+	if (ctr_ctl & CTRCTL_RASEMU)
+		allowed_filters |= PERF_SAMPLE_BRANCH_CALL_STACK;
+
+	if (ctr_ctl & (CTRCTL_INDOJUMP_INH | CTRCTL_INDJUMP_INH))
+		allowed_filters |= PERF_SAMPLE_BRANCH_IND_JUMP;
+
+	if (ctr_ctl & CTRCTL_DIRCALL_INH)
+		allowed_filters |= PERF_SAMPLE_BRANCH_CALL;
+}
+
+void riscv_pmu_ctr_starting_cpu(void)
+{
+	if (!riscv_isa_extension_available(NULL, SxCTR) ||
+	    !riscv_isa_extension_available(NULL, SSCOFPMF) ||
+	    !riscv_isa_extension_available(NULL, SxCSRIND))
+		return;
+
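+	/*
+	 * SCTRDEPTH.depth is presumably WARL (per the usual RISC-V CSR
+	 * conventions), so writing the full mask below requests the
+	 * deepest configuration and the hardware clamps it to its
+	 * implemented maximum; riscv_pmu_ctr_init() reads the CSR back
+	 * to learn the depth actually granted.
+	 */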
+	/* Set depth to maximum. */
+	csr_write(CSR_SCTRDEPTH, SCTRDEPTH_MASK);
+}
+
+void riscv_pmu_ctr_dying_cpu(void)
+{
+	if (!riscv_isa_extension_available(NULL, SxCTR) ||
+	    !riscv_isa_extension_available(NULL, SSCOFPMF) ||
+	    !riscv_isa_extension_available(NULL, SxCSRIND))
+		return;
+
+	/* Clear and reset CTR CSRs. */
+	csr_write(CSR_SCTRDEPTH, 0);
+	riscv_pmu_ctr_reset();
+}
+
+int riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu)
+{
+	size_t size = sizeof(struct riscv_perf_task_context);
+
+	if (!riscv_isa_extension_available(NULL, SxCTR) ||
+	    !riscv_isa_extension_available(NULL, SSCOFPMF) ||
+	    !riscv_isa_extension_available(NULL, SxCSRIND))
+		return 0;
+
+	riscv_pmu->pmu.task_ctx_cache =
+		kmem_cache_create("ctr_task_ctx", size, sizeof(u64), 0, NULL);
+	if (!riscv_pmu->pmu.task_ctx_cache)
+		return -ENOMEM;
+
+	check_available_filters();
+
+	/* Set depth to maximum. */
+	csr_write(CSR_SCTRDEPTH, SCTRDEPTH_MASK);
+	riscv_pmu->ctr_depth = ctr_get_depth(csr_read(CSR_SCTRDEPTH));
+
+	pr_info("Perf CTR available, with depth %d\n", riscv_pmu->ctr_depth);
+
+	return 0;
+}
+
+void riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu)
+{
+	if (!riscv_pmu_ctr_supported(riscv_pmu))
+		return;
+
+	csr_write(CSR_SCTRDEPTH, 0);
+	riscv_pmu->ctr_depth = 0;
+	riscv_pmu_ctr_reset();
+
+	kmem_cache_destroy(riscv_pmu->pmu.task_ctx_cache);
+}
diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index 883781f12ae0..f32b6dcc3491 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -127,6 +127,43 @@ struct riscv_pmu *riscv_pmu_alloc(void);
 int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr);
 #endif
 
+static inline bool riscv_pmu_ctr_supported(struct riscv_pmu *pmu)
+{
+	return !!pmu->ctr_depth;
+}
+
 #endif /* CONFIG_RISCV_PMU_COMMON */
 
+#ifdef CONFIG_RISCV_CTR
+
+bool riscv_pmu_ctr_valid(struct perf_event *event);
+void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc, struct perf_event *event);
+void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
+void riscv_pmu_ctr_add(struct perf_event *event);
+void riscv_pmu_ctr_del(struct perf_event *event);
+void riscv_pmu_ctr_enable(struct perf_event *event);
+void riscv_pmu_ctr_disable(struct perf_event *event);
+void riscv_pmu_ctr_dying_cpu(void);
+void riscv_pmu_ctr_starting_cpu(void);
+int riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu);
+void riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu);
+
+#else
+
+static inline bool riscv_pmu_ctr_valid(struct perf_event *event) { return false; }
+static inline void riscv_pmu_ctr_consume(struct cpu_hw_events *cpuc,
+					 struct perf_event *event) { }
+static inline void riscv_pmu_ctr_sched_task(struct perf_event_pmu_context *pmu_ctx,
+					    bool sched_in) { }
+static inline void riscv_pmu_ctr_add(struct perf_event *event) { }
+static inline void riscv_pmu_ctr_del(struct perf_event *event) { }
+static inline void riscv_pmu_ctr_enable(struct perf_event *event) { }
+static inline void riscv_pmu_ctr_disable(struct perf_event *event) { }
+static inline void riscv_pmu_ctr_dying_cpu(void) { }
+static inline void riscv_pmu_ctr_starting_cpu(void) { }
+static inline int riscv_pmu_ctr_init(struct riscv_pmu *riscv_pmu) { return 0; }
+static inline void riscv_pmu_ctr_finish(struct riscv_pmu *riscv_pmu) { }
+
+#endif /* CONFIG_RISCV_CTR */
+
 #endif /* _RISCV_PMU_H */

From patchwork Thu Jan 16 23:09:55 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rajnesh Kanwal
X-Patchwork-Id: 13942421
From: Rajnesh Kanwal
To: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org, adrian.hunter@intel.com,
 alexander.shishkin@linux.intel.com, ajones@ventanamicro.com,
 anup@brainfault.org, acme@kernel.org, atishp@rivosinc.com,
 beeman@rivosinc.com, brauner@kernel.org, conor@kernel.org,
 heiko@sntech.de, irogers@google.com, mingo@redhat.com,
 james.clark@arm.com, renyu.zj@linux.alibaba.com, jolsa@kernel.org,
 jisheng.teoh@starfivetech.com, palmer@dabbelt.com, will@kernel.org,
 kaiwenxue1@gmail.com, vincent.chen@sifive.com, Rajnesh Kanwal
Subject: [PATCH v2 7/7] riscv: pmu: Integrate CTR Ext support in riscv_pmu_dev driver
Date: Thu, 16 Jan 2025 23:09:55 +0000
Message-Id: <20250116230955.867152-8-rkanwal@rivosinc.com>
In-Reply-To: <20250116230955.867152-1-rkanwal@rivosinc.com>
References: <20250116230955.867152-1-rkanwal@rivosinc.com>
MIME-Version: 1.0

This integrates the recently added CTR extension support into the
riscv_pmu_dev driver to enable branch stack sampling using PMU events.

It mainly adds CTR enable/disable callbacks in the rvpmu_ctr_stop()
and rvpmu_ctr_start() functions to start/stop branch recording along
with the event.

The PMU overflow handler rvpmu_ovf_handler() is also updated to sample
CTR entries when an overflow occurs for an event programmed to record
branches. The recorded entries are fed to core perf for further
processing.
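As an illustration, once the series is applied the records can be
exercised through the generic perf branch-sampling interface (the
commands below are standard perf usage, not new ABI added here):

  $ perf record -j any,u -e instructions -- ./workload
  $ perf report

The -j/--branch-filter masks translate into the PERF_SAMPLE_BRANCH_*
flags that riscv_pmu_ctr_valid() checks against the filters advertised
by the hardware.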
Signed-off-by: Rajnesh Kanwal --- drivers/perf/riscv_pmu_common.c | 3 +- drivers/perf/riscv_pmu_dev.c | 67 ++++++++++++++++++++++++++++++++- 2 files changed, 67 insertions(+), 3 deletions(-) diff --git a/drivers/perf/riscv_pmu_common.c b/drivers/perf/riscv_pmu_common.c index c4c4b5d6bed0..23077a6c4931 100644 --- a/drivers/perf/riscv_pmu_common.c +++ b/drivers/perf/riscv_pmu_common.c @@ -327,8 +327,7 @@ static int riscv_pmu_event_init(struct perf_event *event) u64 event_config = 0; uint64_t cmask; - /* driver does not support branch stack sampling */ - if (has_branch_stack(event)) + if (needs_branch_stack(event) && !riscv_pmu_ctr_supported(rvpmu)) return -EOPNOTSUPP; hwc->flags = 0; diff --git a/drivers/perf/riscv_pmu_dev.c b/drivers/perf/riscv_pmu_dev.c index b9b257607b76..10697deb1d26 100644 --- a/drivers/perf/riscv_pmu_dev.c +++ b/drivers/perf/riscv_pmu_dev.c @@ -1030,7 +1030,7 @@ static void rvpmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag) static void pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in) { - /* Call CTR specific Sched hook. */ + riscv_pmu_ctr_sched_task(pmu_ctx, sched_in); } static int rvpmu_sbi_find_num_ctrs(void) @@ -1379,6 +1379,13 @@ static irqreturn_t rvpmu_ovf_handler(int irq, void *dev) hw_evt->state |= PERF_HES_UPTODATE; perf_sample_data_init(&data, 0, hw_evt->last_period); if (riscv_pmu_event_set_period(event)) { + if (needs_branch_stack(event)) { + riscv_pmu_ctr_consume(cpu_hw_evt, event); + perf_sample_save_brstack( + &data, event, + &cpu_hw_evt->branches->branch_stack, NULL); + } + /* * Unlike other ISAs, RISC-V don't have to disable interrupts * to avoid throttling here. As per the specification, the @@ -1577,10 +1584,14 @@ static int rvpmu_deleg_ctr_get_idx(struct perf_event *event) static void rvpmu_ctr_add(struct perf_event *event, int flags) { + if (needs_branch_stack(event)) + riscv_pmu_ctr_add(event); } static void rvpmu_ctr_del(struct perf_event *event, int flags) { + if (needs_branch_stack(event)) + riscv_pmu_ctr_del(event); } static void rvpmu_ctr_start(struct perf_event *event, u64 ival) @@ -1595,6 +1606,9 @@ static void rvpmu_ctr_start(struct perf_event *event, u64 ival) if ((hwc->flags & PERF_EVENT_FLAG_USER_ACCESS) && (hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT)) rvpmu_set_scounteren((void *)event); + + if (needs_branch_stack(event)) + riscv_pmu_ctr_enable(event); } static void rvpmu_ctr_stop(struct perf_event *event, unsigned long flag) @@ -1617,6 +1631,9 @@ static void rvpmu_ctr_stop(struct perf_event *event, unsigned long flag) } else { rvpmu_sbi_ctr_stop(event, flag); } + + if (needs_branch_stack(event) && flag != RISCV_PMU_STOP_FLAG_RESET) + riscv_pmu_ctr_disable(event); } static int rvpmu_find_ctrs(void) @@ -1652,6 +1669,9 @@ static int rvpmu_event_map(struct perf_event *event, u64 *econfig) { u64 config1; + if (needs_branch_stack(event) && !riscv_pmu_ctr_valid(event)) + return -EOPNOTSUPP; + config1 = event->attr.config1; if (riscv_pmu_cdeleg_available() && !pmu_sbi_is_fw_event(event) && !(config1 & RISCV_PMU_CONFIG1_GUEST_EVENTS)) { /* GUEST events rely on SBI encoding */ @@ -1701,6 +1721,8 @@ static int rvpmu_starting_cpu(unsigned int cpu, struct hlist_node *node) enable_percpu_irq(riscv_pmu_irq, IRQ_TYPE_NONE); } + riscv_pmu_ctr_starting_cpu(); + if (sbi_pmu_snapshot_available()) return pmu_sbi_snapshot_setup(pmu, cpu); @@ -1715,6 +1737,7 @@ static int rvpmu_dying_cpu(unsigned int cpu, struct hlist_node *node) /* Disable all counters access for user mode now */ csr_write(CSR_SCOUNTEREN, 0x0); + 
riscv_pmu_ctr_dying_cpu(); if (sbi_pmu_snapshot_available()) return pmu_sbi_snapshot_disable(); @@ -1838,6 +1861,29 @@ static void riscv_pmu_destroy(struct riscv_pmu *pmu) cpuhp_state_remove_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node); } +static int branch_records_alloc(struct riscv_pmu *pmu) +{ + struct branch_records __percpu *tmp_alloc_ptr; + struct branch_records *records; + struct cpu_hw_events *events; + int cpu; + + if (!riscv_pmu_ctr_supported(pmu)) + return 0; + + tmp_alloc_ptr = alloc_percpu_gfp(struct branch_records, GFP_KERNEL); + if (!tmp_alloc_ptr) + return -ENOMEM; + + for_each_possible_cpu(cpu) { + events = per_cpu_ptr(pmu->hw_events, cpu); + records = per_cpu_ptr(tmp_alloc_ptr, cpu); + events->branches = records; + } + + return 0; +} + static void rvpmu_event_init(struct perf_event *event) { /* @@ -1850,6 +1896,9 @@ static void rvpmu_event_init(struct perf_event *event) event->hw.flags |= PERF_EVENT_FLAG_USER_ACCESS; else event->hw.flags |= PERF_EVENT_FLAG_LEGACY; + + if (branch_sample_call_stack(event)) + event->attach_state |= PERF_ATTACH_TASK_DATA; } static void rvpmu_event_mapped(struct perf_event *event, struct mm_struct *mm) @@ -1997,6 +2046,15 @@ static int rvpmu_device_probe(struct platform_device *pdev) pmu->pmu.attr_groups = riscv_cdeleg_pmu_attr_groups; else pmu->pmu.attr_groups = riscv_sbi_pmu_attr_groups; + + ret = riscv_pmu_ctr_init(pmu); + if (ret) + goto out_free; + + ret = branch_records_alloc(pmu); + if (ret) + goto out_ctr_finish; + pmu->cmask = cmask; pmu->ctr_add = rvpmu_ctr_add; pmu->ctr_del = rvpmu_ctr_del; @@ -2013,6 +2071,10 @@ static int rvpmu_device_probe(struct platform_device *pdev) pmu->csr_index = rvpmu_csr_index; pmu->sched_task = pmu_sched_task; + ret = cpuhp_state_add_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node); + if (ret) + goto out_ctr_finish; + ret = riscv_pm_pmu_register(pmu); if (ret) goto out_unregister; @@ -2062,6 +2124,9 @@ static int rvpmu_device_probe(struct platform_device *pdev) out_unregister: riscv_pmu_destroy(pmu); +out_ctr_finish: + riscv_pmu_ctr_finish(pmu); + out_free: kfree(pmu); return ret;