From patchwork Wed Jan 10 23:13:50 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13516656
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [v3 01/10] RISC-V: Fix the typo in Scountovf CSR name
Date: Wed, 10 Jan 2024 15:13:50 -0800
Message-Id: <20240110231359.1239367-2-atishp@rivosinc.com>
In-Reply-To: <20240110231359.1239367-1-atishp@rivosinc.com>
References: <20240110231359.1239367-1-atishp@rivosinc.com>

The counter overflow CSR name is "scountovf", not "sscountovf". Fix the
CSR name.

Fixes: 4905ec2fb7e6 ("RISC-V: Add sscofpmf extension support")
Reviewed-by: Conor Dooley
Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/csr.h         | 2 +-
 arch/riscv/include/asm/errata_list.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 306a19a5509c..88cdc8a3e654 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -281,7 +281,7 @@
 #define CSR_HPMCOUNTER30H	0xc9e
 #define CSR_HPMCOUNTER31H	0xc9f

-#define CSR_SSCOUNTOVF		0xda0
+#define CSR_SCOUNTOVF		0xda0

 #define CSR_SSTATUS		0x100
 #define CSR_SIE			0x104

diff --git a/arch/riscv/include/asm/errata_list.h b/arch/riscv/include/asm/errata_list.h
index 83ed25e43553..7026fba12eeb 100644
--- a/arch/riscv/include/asm/errata_list.h
+++ b/arch/riscv/include/asm/errata_list.h
@@ -152,7 +152,7 @@ asm volatile(ALTERNATIVE_2(					\

 #define ALT_SBI_PMU_OVERFLOW(__ovl)					\
 asm volatile(ALTERNATIVE(						\
-	"csrr %0, " __stringify(CSR_SSCOUNTOVF),			\
+	"csrr %0, " __stringify(CSR_SCOUNTOVF),				\
	"csrr %0, " __stringify(THEAD_C9XX_CSR_SCOUNTEROF),		\
	THEAD_VENDOR_ID, ERRATA_THEAD_PMU,				\
	CONFIG_ERRATA_THEAD_PMU)					\
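A minimal sketch (not part of this series) of how the renamed CSR is
consumed: each set bit in scountovf marks a hardware counter with a
pending overflow, and the ALT_SBI_PMU_OVERFLOW() macro patched above
hides the T-Head errata CSR behind an alternative:

	unsigned long overflow;
	int idx;

	/* csrr from scountovf, or the vendor errata CSR */
	ALT_SBI_PMU_OVERFLOW(overflow);
	for_each_set_bit(idx, &overflow, BITS_PER_LONG) {
		/* counter 'idx' overflowed; re-arm it and record a sample */
	}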
From patchwork Wed Jan 10 23:13:51 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13516657
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [v3 02/10] RISC-V: Add FIRMWARE_READ_HI definition
Date: Wed, 10 Jan 2024 15:13:51 -0800
Message-Id: <20240110231359.1239367-3-atishp@rivosinc.com>
In-Reply-To: <20240110231359.1239367-1-atishp@rivosinc.com>
References: <20240110231359.1239367-1-atishp@rivosinc.com>

SBI v2.0 added another function to the SBI PMU extension to read the
upper bits of a counter whose width is larger than XLEN. Add the
definition for that function.
Acked-by: Conor Dooley
Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/sbi.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index b6f898c56940..914eacc6ba2e 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -122,6 +122,7 @@ enum sbi_ext_pmu_fid {
 	SBI_EXT_PMU_COUNTER_START,
 	SBI_EXT_PMU_COUNTER_STOP,
 	SBI_EXT_PMU_COUNTER_FW_READ,
+	SBI_EXT_PMU_COUNTER_FW_READ_HI,
 };

 union sbi_pmu_ctr_info {
From patchwork Wed Jan 10 23:13:52 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13516658
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [v3 03/10] drivers/perf: riscv: Read upper bits of a firmware counter
Date: Wed, 10 Jan 2024 15:13:52 -0800
Message-Id: <20240110231359.1239367-4-atishp@rivosinc.com>
In-Reply-To: <20240110231359.1239367-1-atishp@rivosinc.com>
References: <20240110231359.1239367-1-atishp@rivosinc.com>

SBI v2.0 introduced an explicit function to read the upper 32 bits of
any firmware counter wider than 32 bits. This is only applicable to
RV32, where a firmware counter can be 64 bits wide.

Acked-by: Palmer Dabbelt
Reviewed-by: Conor Dooley
Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 drivers/perf/riscv_pmu_sbi.c | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
index 16acd4dcdb96..ea0fdb589f0d 100644
--- a/drivers/perf/riscv_pmu_sbi.c
+++ b/drivers/perf/riscv_pmu_sbi.c
@@ -35,6 +35,8 @@
 PMU_FORMAT_ATTR(event, "config:0-47");
 PMU_FORMAT_ATTR(firmware, "config:63");

+static bool sbi_v2_available;
+
 static struct attribute *riscv_arch_formats_attr[] = {
 	&format_attr_event.attr,
 	&format_attr_firmware.attr,
@@ -488,16 +490,23 @@ static u64 pmu_sbi_ctr_read(struct perf_event *event)
 	struct hw_perf_event *hwc = &event->hw;
 	int idx = hwc->idx;
 	struct sbiret ret;
-	union sbi_pmu_ctr_info info;
 	u64 val = 0;
+	union sbi_pmu_ctr_info info = pmu_ctr_list[idx];

 	if (pmu_sbi_is_fw_event(event)) {
 		ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_FW_READ,
 				hwc->idx, 0, 0, 0, 0, 0);
-		if (!ret.error)
-			val = ret.value;
+		if (ret.error)
+			return 0;
+
+		val = ret.value;
+		if (IS_ENABLED(CONFIG_32BIT) && sbi_v2_available && info.width >= 32) {
+			ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_FW_READ_HI,
+					hwc->idx, 0, 0, 0, 0, 0);
+			if (!ret.error)
+				val |= ((u64)ret.value << 32);
+		}
 	} else {
-		info = pmu_ctr_list[idx];
 		val = riscv_pmu_ctr_read_csr(info.csr);
 		if (IS_ENABLED(CONFIG_32BIT))
 			val = ((u64)riscv_pmu_ctr_read_csr(info.csr + 0x80)) << 31 | val;
@@ -1108,6 +1117,9 @@ static int __init pmu_sbi_devinit(void)
 		return 0;
 	}

+	if (sbi_spec_version >= sbi_mk_version(2, 0))
+		sbi_v2_available = true;
+
 	ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_RISCV_STARTING,
 				      "perf/riscv/pmu:starting",
 				      pmu_sbi_starting_cpu, pmu_sbi_dying_cpu);
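The read sequence the patch implements boils down to the following
sketch (assuming RV32, a v2.0 SBI implementation, and a firmware counter
wider than 32 bits); the high word is only defined by the FW_READ_HI
function added in the previous patch:

	struct sbiret lo, hi;
	u64 val;

	lo = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_FW_READ,
		       idx, 0, 0, 0, 0, 0);
	hi = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_FW_READ_HI,
		       idx, 0, 0, 0, 0, 0);
	/* combine the two XLEN-sized halves into the 64-bit counter value */
	val = ((u64)hi.value << 32) | lo.value;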
From patchwork Wed Jan 10 23:13:53 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13516661
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [v3 04/10] RISC-V: Add SBI PMU snapshot definitions
Date: Wed, 10 Jan 2024 15:13:53 -0800
Message-Id: <20240110231359.1239367-5-atishp@rivosinc.com>
In-Reply-To: <20240110231359.1239367-1-atishp@rivosinc.com>
References: <20240110231359.1239367-1-atishp@rivosinc.com>

The SBI PMU snapshot function optimizes the number of traps to the
higher privilege mode by leveraging shared memory between S/VS-mode and
M/HS-mode. Add the definitions for that extension and the new error
codes.
Reviewed-by: Anup Patel
Acked-by: Palmer Dabbelt
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/sbi.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index 914eacc6ba2e..75e95a7d9aa3 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -123,6 +123,7 @@ enum sbi_ext_pmu_fid {
 	SBI_EXT_PMU_COUNTER_STOP,
 	SBI_EXT_PMU_COUNTER_FW_READ,
 	SBI_EXT_PMU_COUNTER_FW_READ_HI,
+	SBI_EXT_PMU_SNAPSHOT_SET_SHMEM,
 };

 union sbi_pmu_ctr_info {
@@ -139,6 +140,13 @@ union sbi_pmu_ctr_info {
 	};
 };

+/* Data structure to contain the pmu snapshot data */
+struct riscv_pmu_snapshot_data {
+	u64 ctr_overflow_mask;
+	u64 ctr_values[64];
+	u64 reserved[447];
+};
+
 #define RISCV_PMU_RAW_EVENT_MASK GENMASK_ULL(47, 0)
 #define RISCV_PMU_RAW_EVENT_IDX 0x20000

@@ -235,9 +243,11 @@ enum sbi_pmu_ctr_type {

 /* Flags defined for counter start function */
 #define SBI_PMU_START_FLAG_SET_INIT_VALUE (1 << 0)
+#define SBI_PMU_START_FLAG_INIT_FROM_SNAPSHOT BIT(1)

 /* Flags defined for counter stop function */
 #define SBI_PMU_STOP_FLAG_RESET (1 << 0)
+#define SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT BIT(1)

 enum sbi_ext_dbcn_fid {
 	SBI_EXT_DBCN_CONSOLE_WRITE = 0,
@@ -276,6 +286,7 @@ struct sbi_sta_struct {
 #define SBI_ERR_ALREADY_AVAILABLE	-6
 #define SBI_ERR_ALREADY_STARTED		-7
 #define SBI_ERR_ALREADY_STOPPED		-8
+#define SBI_ERR_NO_SHMEM		-9

 extern unsigned long sbi_spec_version;
 struct sbiret {
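A quick sanity check (not in the patch): the layout above is sized so
the snapshot area fills exactly one 4 KiB page, as the SBI v2.0
specification requires; 8 + 64 * 8 + 447 * 8 == 4096 bytes:

	static_assert(sizeof(struct riscv_pmu_snapshot_data) == 4096);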
From patchwork Wed Jan 10 23:13:54 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13516659
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [v3 05/10] drivers/perf: riscv: Implement SBI PMU snapshot function
Date: Wed, 10 Jan 2024 15:13:54 -0800
Message-Id: <20240110231359.1239367-6-atishp@rivosinc.com>
In-Reply-To: <20240110231359.1239367-1-atishp@rivosinc.com>
References: <20240110231359.1239367-1-atishp@rivosinc.com>

SBI v2.0 introduced the PMU snapshot feature, which adds the following
capabilities:

1. Read counter values directly from the shared memory instead of a CSR
   read.
2. Start multiple counters with initial values via a single SBI call.

These functionalities optimize the number of traps to the higher
privilege mode. If the kernel is in VS mode while the hypervisor deploys
a trap & emulate method, this minimizes all the hpmcounter CSR read
traps. If the kernel is running in S-mode, the benefit is reduced to CSR
latency vs DRAM/cache latency, as there is no trap involved while
accessing the hpmcounter CSRs. In both modes, it does save the number of
ecalls when starting multiple counters together with initial values.
This is a likely scenario when multiple counters overflow at the same
time.
Acked-by: Palmer Dabbelt
Reviewed-by: Anup Patel
Reviewed-by: Conor Dooley
Signed-off-by: Atish Patra
---
 drivers/perf/riscv_pmu.c       |   1 +
 drivers/perf/riscv_pmu_sbi.c   | 209 +++++++++++++++++++++++++++++++--
 include/linux/perf/riscv_pmu.h |   6 +
 3 files changed, 204 insertions(+), 12 deletions(-)

diff --git a/drivers/perf/riscv_pmu.c b/drivers/perf/riscv_pmu.c
index 0dda70e1ef90..5b57acb770d3 100644
--- a/drivers/perf/riscv_pmu.c
+++ b/drivers/perf/riscv_pmu.c
@@ -412,6 +412,7 @@ struct riscv_pmu *riscv_pmu_alloc(void)
 		cpuc->n_events = 0;
 		for (i = 0; i < RISCV_MAX_COUNTERS; i++)
 			cpuc->events[i] = NULL;
+		cpuc->snapshot_addr = NULL;
 	}
 	pmu->pmu = (struct pmu) {
 		.event_init	= riscv_pmu_event_init,

diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
index ea0fdb589f0d..8de5721e8019 100644
--- a/drivers/perf/riscv_pmu_sbi.c
+++ b/drivers/perf/riscv_pmu_sbi.c
@@ -36,6 +36,9 @@ PMU_FORMAT_ATTR(event, "config:0-47");
 PMU_FORMAT_ATTR(firmware, "config:63");

 static bool sbi_v2_available;
+static DEFINE_STATIC_KEY_FALSE(sbi_pmu_snapshot_available);
+#define sbi_pmu_snapshot_available() \
+	static_branch_unlikely(&sbi_pmu_snapshot_available)

 static struct attribute *riscv_arch_formats_attr[] = {
 	&format_attr_event.attr,
@@ -485,14 +488,100 @@ static int pmu_sbi_event_map(struct perf_event *event, u64 *econfig)
 	return ret;
 }

+static void pmu_sbi_snapshot_free(struct riscv_pmu *pmu)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		struct cpu_hw_events *cpu_hw_evt = per_cpu_ptr(pmu->hw_events, cpu);
+
+		if (!cpu_hw_evt->snapshot_addr)
+			continue;
+
+		free_page((unsigned long)cpu_hw_evt->snapshot_addr);
+		cpu_hw_evt->snapshot_addr = NULL;
+		cpu_hw_evt->snapshot_addr_phys = 0;
+	}
+}
+
+static int pmu_sbi_snapshot_alloc(struct riscv_pmu *pmu)
+{
+	int cpu;
+	struct page *snapshot_page;
+
+	for_each_possible_cpu(cpu) {
+		struct cpu_hw_events *cpu_hw_evt = per_cpu_ptr(pmu->hw_events, cpu);
+
+		if (cpu_hw_evt->snapshot_addr)
+			continue;
+
+		snapshot_page = alloc_page(GFP_ATOMIC | __GFP_ZERO);
+		if (!snapshot_page) {
+			pmu_sbi_snapshot_free(pmu);
+			return -ENOMEM;
+		}
+		cpu_hw_evt->snapshot_addr = page_to_virt(snapshot_page);
+		cpu_hw_evt->snapshot_addr_phys = page_to_phys(snapshot_page);
+	}
+
+	return 0;
+}
+
+static void pmu_sbi_snapshot_disable(void)
+{
+	sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_SNAPSHOT_SET_SHMEM, -1,
+		  -1, 0, 0, 0, 0);
+}
+
+static int pmu_sbi_snapshot_setup(struct riscv_pmu *pmu, int cpu)
+{
+	struct cpu_hw_events *cpu_hw_evt;
+	struct sbiret ret = {0};
+
+	cpu_hw_evt = per_cpu_ptr(pmu->hw_events, cpu);
+	if (!cpu_hw_evt->snapshot_addr_phys)
+		return -EINVAL;
+
+	if (cpu_hw_evt->snapshot_set_done)
+		return 0;
+
+	if (IS_ENABLED(CONFIG_32BIT))
+		ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_SNAPSHOT_SET_SHMEM,
+				cpu_hw_evt->snapshot_addr_phys,
+				(u64)(cpu_hw_evt->snapshot_addr_phys) >> 32, 0, 0, 0, 0);
+	else
+		ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_SNAPSHOT_SET_SHMEM,
+				cpu_hw_evt->snapshot_addr_phys, 0, 0, 0, 0, 0);
+
+	/* Free up the snapshot area memory and fall back to SBI PMU calls without snapshot */
+	if (ret.error) {
+		if (ret.error != SBI_ERR_NOT_SUPPORTED)
+			pr_warn("pmu snapshot setup failed with error %ld\n", ret.error);
+		return sbi_err_map_linux_errno(ret.error);
+	}
+
+	cpu_hw_evt->snapshot_set_done = true;
+
+	return 0;
+}
+
 static u64 pmu_sbi_ctr_read(struct perf_event *event)
 {
 	struct hw_perf_event *hwc = &event->hw;
 	int idx = hwc->idx;
 	struct sbiret ret;
 	u64 val = 0;
+	struct riscv_pmu *pmu = to_riscv_pmu(event->pmu);
+	struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);
+	struct riscv_pmu_snapshot_data *sdata = cpu_hw_evt->snapshot_addr;
 	union sbi_pmu_ctr_info info = pmu_ctr_list[idx];

+	/* Read the value from the shared memory directly */
+	if (sbi_pmu_snapshot_available()) {
+		val = sdata->ctr_values[idx];
+		return val;
+	}
+
 	if (pmu_sbi_is_fw_event(event)) {
 		ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_FW_READ,
 				hwc->idx, 0, 0, 0, 0, 0);
@@ -539,6 +628,7 @@ static void pmu_sbi_ctr_start(struct perf_event *event, u64 ival)
 	struct hw_perf_event *hwc = &event->hw;
 	unsigned long flag = SBI_PMU_START_FLAG_SET_INIT_VALUE;

+	/* There is no benefit setting SNAPSHOT FLAG for a single counter */
 #if defined(CONFIG_32BIT)
 	ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_START, hwc->idx,
 			1, flag, ival, ival >> 32, 0);
@@ -559,16 +649,36 @@ static void pmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag)
 {
 	struct sbiret ret;
 	struct hw_perf_event *hwc = &event->hw;
+	struct riscv_pmu *pmu = to_riscv_pmu(event->pmu);
+	struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);
+	struct riscv_pmu_snapshot_data *sdata = cpu_hw_evt->snapshot_addr;

 	if ((hwc->flags & PERF_EVENT_FLAG_USER_ACCESS) &&
	    (hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT))
 		pmu_sbi_reset_scounteren((void *)event);

+	if (sbi_pmu_snapshot_available())
+		flag |= SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT;
+
 	ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_STOP, hwc->idx, 1, flag, 0, 0, 0);
-	if (ret.error && (ret.error != SBI_ERR_ALREADY_STOPPED) &&
-	    flag != SBI_PMU_STOP_FLAG_RESET)
+	if (!ret.error && sbi_pmu_snapshot_available()) {
+		/*
+		 * The counter snapshot is based on the index base specified by hwc->idx.
+		 * The actual counter value is updated in shared memory at index 0 when counter
+		 * mask is 0x01. To ensure accurate counter values, it's necessary to transfer
+		 * the counter value to shared memory. However, if hwc->idx is zero, the counter
+		 * value is already correctly updated in shared memory, requiring no further
+		 * adjustment.
+		 */
+		if (hwc->idx > 0) {
+			sdata->ctr_values[hwc->idx] = sdata->ctr_values[0];
+			sdata->ctr_values[0] = 0;
+		}
+	} else if (ret.error && (ret.error != SBI_ERR_ALREADY_STOPPED) &&
+		   flag != SBI_PMU_STOP_FLAG_RESET) {
 		pr_err("Stopping counter idx %d failed with error %d\n",
 		       hwc->idx, sbi_err_map_linux_errno(ret.error));
+	}
 }

 static int pmu_sbi_find_num_ctrs(void)
@@ -626,10 +736,14 @@ static inline void pmu_sbi_stop_all(struct riscv_pmu *pmu)
 static inline void pmu_sbi_stop_hw_ctrs(struct riscv_pmu *pmu)
 {
 	struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);
+	unsigned long flag = 0;
+
+	if (sbi_pmu_snapshot_available())
+		flag = SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT;

 	/* No need to check the error here as we can't do anything about the error */
 	sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_STOP, 0,
-		  cpu_hw_evt->used_hw_ctrs[0], 0, 0, 0, 0);
+		  cpu_hw_evt->used_hw_ctrs[0], flag, 0, 0, 0);
 }

 /*
@@ -638,11 +752,10 @@ static inline void pmu_sbi_stop_hw_ctrs(struct riscv_pmu *pmu)
  * while the overflowed counters need to be started with updated initialization
  * value.
  */
-static inline void pmu_sbi_start_overflow_mask(struct riscv_pmu *pmu,
-					       unsigned long ctr_ovf_mask)
+static noinline void pmu_sbi_start_ovf_ctrs_sbi(struct cpu_hw_events *cpu_hw_evt,
+						unsigned long ctr_ovf_mask)
 {
 	int idx = 0;
-	struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);
 	struct perf_event *event;
 	unsigned long flag = SBI_PMU_START_FLAG_SET_INIT_VALUE;
 	unsigned long ctr_start_mask = 0;
@@ -677,6 +790,49 @@ static inline void pmu_sbi_start_overflow_mask(struct riscv_pmu *pmu,
 	}
 }

+static noinline void pmu_sbi_start_ovf_ctrs_snapshot(struct cpu_hw_events *cpu_hw_evt,
+						     unsigned long ctr_ovf_mask)
+{
+	int idx = 0;
+	struct perf_event *event;
+	unsigned long flag = SBI_PMU_START_FLAG_INIT_FROM_SNAPSHOT;
+	u64 max_period, init_val = 0;
+	struct hw_perf_event *hwc;
+	unsigned long ctr_start_mask = 0;
+	struct riscv_pmu_snapshot_data *sdata = cpu_hw_evt->snapshot_addr;
+
+	for_each_set_bit(idx, cpu_hw_evt->used_hw_ctrs, RISCV_MAX_COUNTERS) {
+		if (ctr_ovf_mask & (1 << idx)) {
+			event = cpu_hw_evt->events[idx];
+			hwc = &event->hw;
+			max_period = riscv_pmu_ctr_get_width_mask(event);
+			init_val = local64_read(&hwc->prev_count) & max_period;
+			sdata->ctr_values[idx] = init_val;
+		}
+		/*
+		 * We do not need to update the non-overflow counters; the
+		 * previous value should already be there.
+		 */
+	}
+
+	ctr_start_mask = cpu_hw_evt->used_hw_ctrs[0];
+
+	/* Start all the counters in a single shot */
+	sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_START, 0, ctr_start_mask,
+		  flag, 0, 0, 0);
+}
+
+static void pmu_sbi_start_overflow_mask(struct riscv_pmu *pmu,
+					unsigned long ctr_ovf_mask)
+{
+	struct cpu_hw_events *cpu_hw_evt = this_cpu_ptr(pmu->hw_events);
+
+	if (sbi_pmu_snapshot_available())
+		pmu_sbi_start_ovf_ctrs_snapshot(cpu_hw_evt, ctr_ovf_mask);
+	else
+		pmu_sbi_start_ovf_ctrs_sbi(cpu_hw_evt, ctr_ovf_mask);
+}
+
 static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
 {
 	struct perf_sample_data data;
@@ -690,6 +846,7 @@ static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
 	unsigned long overflowed_ctrs = 0;
 	struct cpu_hw_events *cpu_hw_evt = dev;
 	u64 start_clock = sched_clock();
+	struct riscv_pmu_snapshot_data *sdata = cpu_hw_evt->snapshot_addr;

 	if (WARN_ON_ONCE(!cpu_hw_evt))
 		return IRQ_NONE;
@@ -711,8 +868,10 @@ static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
 	pmu_sbi_stop_hw_ctrs(pmu);

 	/* Overflow status register should only be read after counter are stopped */
-	ALT_SBI_PMU_OVERFLOW(overflow);
-
+	if (sbi_pmu_snapshot_available())
+		overflow = sdata->ctr_overflow_mask;
+	else
+		ALT_SBI_PMU_OVERFLOW(overflow);
 	/*
 	 * Overflow interrupt pending bit should only be cleared after stopping
 	 * all the counters to avoid any race condition.
@@ -794,6 +953,9 @@ static int pmu_sbi_starting_cpu(unsigned int cpu, struct hlist_node *node)
 		enable_percpu_irq(riscv_pmu_irq, IRQ_TYPE_NONE);
 	}

+	if (sbi_pmu_snapshot_available())
+		return pmu_sbi_snapshot_setup(pmu, cpu);
+
 	return 0;
 }

@@ -807,6 +969,9 @@ static int pmu_sbi_dying_cpu(unsigned int cpu, struct hlist_node *node)
 	/* Disable all counters access for user mode now */
 	csr_write(CSR_SCOUNTEREN, 0x0);

+	if (sbi_pmu_snapshot_available())
+		pmu_sbi_snapshot_disable();
+
 	return 0;
 }

@@ -1076,10 +1241,6 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
 	pmu->event_unmapped = pmu_sbi_event_unmapped;
 	pmu->csr_index = pmu_sbi_csr_index;

-	ret = cpuhp_state_add_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node);
-	if (ret)
-		return ret;
-
 	ret = riscv_pm_pmu_register(pmu);
 	if (ret)
 		goto out_unregister;
@@ -1088,8 +1249,32 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
 	if (ret)
 		goto out_unregister;

+	/* SBI PMU snapshot is only available in SBI v2.0 */
+	if (sbi_v2_available) {
+		ret = pmu_sbi_snapshot_alloc(pmu);
+		if (ret)
+			goto out_unregister;
+
+		ret = pmu_sbi_snapshot_setup(pmu, smp_processor_id());
+		if (!ret) {
+			pr_info("SBI PMU snapshot detected\n");
+			/*
+			 * We enable it once here for the boot cpu. If snapshot shmem setup
+			 * fails during cpu hotplug process, it will fail to start the cpu
+			 * as we cannot handle heterogeneous PMUs with different snapshot
+			 * capability.
+			 */
+			static_branch_enable(&sbi_pmu_snapshot_available);
+		}
+		/* Snapshot is an optional feature. Continue if not available */
+	}
+
 	register_sysctl("kernel", sbi_pmu_sysctl_table);

+	ret = cpuhp_state_add_instance(CPUHP_AP_PERF_RISCV_STARTING, &pmu->node);
+	if (ret)
+		return ret;
+
 	return 0;

 out_unregister:

diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index 43282e22ebe1..c3fa90970042 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -39,6 +39,12 @@ struct cpu_hw_events {
 	DECLARE_BITMAP(used_hw_ctrs, RISCV_MAX_COUNTERS);
 	/* currently enabled firmware counters */
 	DECLARE_BITMAP(used_fw_ctrs, RISCV_MAX_COUNTERS);
+	/* The virtual address of the shared memory where counter snapshot will be taken */
+	void *snapshot_addr;
+	/* The physical address of the shared memory where counter snapshot will be taken */
+	phys_addr_t snapshot_addr_phys;
+	/* Boolean flag to indicate setup is already done */
+	bool snapshot_set_done;
 };

 struct riscv_pmu {
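Once the snapshot is active, the fast path taken by pmu_sbi_ctr_read()
above reduces to a plain load from the per-cpu shared page, with the
M/HS-mode implementation filling the page at counter-stop time. A sketch
of just that idea, equivalent to the snapshot branch in the patch:

	/* sketch only: read a counter without any ecall or CSR trap */
	static u64 snapshot_ctr_read(struct cpu_hw_events *cpu_hw_evt, int idx)
	{
		struct riscv_pmu_snapshot_data *sdata = cpu_hw_evt->snapshot_addr;

		return sdata->ctr_values[idx];
	}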
From patchwork Wed Jan 10 23:13:55 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13516660
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [v3 06/10] RISC-V: KVM: No need to update the counter value during reset
Date: Wed, 10 Jan 2024 15:13:55 -0800
Message-Id: <20240110231359.1239367-7-atishp@rivosinc.com>
In-Reply-To: <20240110231359.1239367-1-atishp@rivosinc.com>
References: <20240110231359.1239367-1-atishp@rivosinc.com>

The virtual counter value is updated during pmu_ctr_read. There is no
need to update it in the reset case. Otherwise, it will be counted
twice, which is incorrect.
Fixes: 0cb74b65d2e5 ("RISC-V: KVM: Implement perf support without sampling")
Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 arch/riscv/kvm/vcpu_pmu.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
index 86391a5061dd..b1574c043f77 100644
--- a/arch/riscv/kvm/vcpu_pmu.c
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -397,7 +397,6 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
 {
 	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
 	int i, pmc_index, sbiret = 0;
-	u64 enabled, running;
 	struct kvm_pmc *pmc;
 	int fevent_code;

@@ -432,12 +431,9 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
 				sbiret = SBI_ERR_ALREADY_STOPPED;
 		}

-		if (flags & SBI_PMU_STOP_FLAG_RESET) {
-			/* Relase the counter if this is a reset request */
-			pmc->counter_val += perf_event_read_value(pmc->perf_event,
-								  &enabled, &running);
+		if (flags & SBI_PMU_STOP_FLAG_RESET)
+			/* Release the counter if this is a reset request */
 			kvm_pmu_release_perf_event(pmc);
-		}
 	} else {
 		sbiret = SBI_ERR_INVALID_PARAM;
 	}
From patchwork Wed Jan 10 23:13:56 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13516662
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [v3 07/10] RISC-V: KVM: No need to exit to the user space if perf event failed
Date: Wed, 10 Jan 2024 15:13:56 -0800
Message-Id: <20240110231359.1239367-8-atishp@rivosinc.com>
In-Reply-To: <20240110231359.1239367-1-atishp@rivosinc.com>
References: <20240110231359.1239367-1-atishp@rivosinc.com>

Currently, we return a Linux error code if creating a perf event failed
in KVM. That shouldn't be necessary, as the guest can continue to
operate without perf profiling, or profile with firmware counters.

Return an appropriate SBI error code to indicate that the PMU
configuration failed. An error message in KVM already describes the
reason for the failure.

Fixes: 0cb74b65d2e5 ("RISC-V: KVM: Implement perf support without sampling")
Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 arch/riscv/kvm/vcpu_pmu.c     | 14 +++++++++-----
 arch/riscv/kvm/vcpu_sbi_pmu.c |  6 +++---
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
index b1574c043f77..29bf4ca798cb 100644
--- a/arch/riscv/kvm/vcpu_pmu.c
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -229,8 +229,9 @@ static int kvm_pmu_validate_counter_mask(struct kvm_pmu *kvpmu, unsigned long ct
 	return 0;
 }

-static int kvm_pmu_create_perf_event(struct kvm_pmc *pmc, struct perf_event_attr *attr,
-				     unsigned long flags, unsigned long eidx, unsigned long evtdata)
+static long kvm_pmu_create_perf_event(struct kvm_pmc *pmc, struct perf_event_attr *attr,
+				      unsigned long flags, unsigned long eidx,
+				      unsigned long evtdata)
 {
 	struct perf_event *event;

@@ -454,7 +455,8 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 				     unsigned long eidx, u64 evtdata,
 				     struct kvm_vcpu_sbi_return *retdata)
 {
-	int ctr_idx, ret, sbiret = 0;
+	int ctr_idx, sbiret = 0;
+	long ret;
 	bool is_fevent;
 	unsigned long event_code;
 	u32 etype = kvm_pmu_get_perf_event_type(eidx);
@@ -513,8 +515,10 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 		kvpmu->fw_event[event_code].started = true;
 	} else {
 		ret = kvm_pmu_create_perf_event(pmc, &attr, flags, eidx, evtdata);
-		if (ret)
-			return ret;
+		if (ret) {
+			sbiret = SBI_ERR_NOT_SUPPORTED;
+			goto out;
+		}
 	}

 	set_bit(ctr_idx, kvpmu->pmc_in_use);

diff --git a/arch/riscv/kvm/vcpu_sbi_pmu.c b/arch/riscv/kvm/vcpu_sbi_pmu.c
index 7eca72df2cbd..b70179e9e875 100644
--- a/arch/riscv/kvm/vcpu_sbi_pmu.c
+++ b/arch/riscv/kvm/vcpu_sbi_pmu.c
@@ -42,9 +42,9 @@ static int kvm_sbi_ext_pmu_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 #endif
 		/*
 		 * This can fail if perf core framework fails to create an event.
-		 * Forward the error to userspace because it's an error which
-		 * happened within the host kernel. The other option would be
-		 * to convert to an SBI error and forward to the guest.
+		 * No need to forward the error to userspace and exit; the guest
+		 * operation can continue without profiling. Forward the
+		 * appropriate SBI error to the guest.
 		 */
 		ret = kvm_riscv_vcpu_pmu_ctr_cfg_match(vcpu, cp->a0, cp->a1, cp->a2,
						       cp->a3, temp, retdata);
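The guest-visible effect can be sketched as follows (hypothetical guest
code, not part of the patch): a failed perf event creation now surfaces
as an SBI error in the return value rather than terminating the VCPU run
loop, so the guest can degrade gracefully:

	struct sbiret ret;

	ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_CFG_MATCH,
			ctr_base, ctr_mask, flags, event_idx, event_data, 0);
	if (ret.error == SBI_ERR_NOT_SUPPORTED) {
		/* continue without this event, e.g. fall back to a firmware counter */
	}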
cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 10 Jan 2024 15:14:49 -0800 (PST) From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra , Anup Patel , Alexandre Ghiti , Andrew Jones , Atish Patra , Conor Dooley , Guo Ren , Icenowy Zheng , kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland , Palmer Dabbelt , Paul Walmsley , Will Deacon , Vladimir Isaev Subject: [v3 08/10] RISC-V: KVM: Implement SBI PMU Snapshot feature Date: Wed, 10 Jan 2024 15:13:57 -0800 Message-Id: <20240110231359.1239367-9-atishp@rivosinc.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240110231359.1239367-1-atishp@rivosinc.com> References: <20240110231359.1239367-1-atishp@rivosinc.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 PMU Snapshot function allows to minimize the number of traps when the guest access configures/access the hpmcounters. If the snapshot feature is enabled, the hypervisor updates the shared memory with counter data and state of overflown counters. The guest can just read the shared memory instead of trap & emulate done by the hypervisor. This patch doesn't implement the counter overflow yet. Reviewed-by: Anup Patel Signed-off-by: Atish Patra --- arch/riscv/include/asm/kvm_vcpu_pmu.h | 7 ++ arch/riscv/kvm/vcpu_pmu.c | 119 +++++++++++++++++++++++++- arch/riscv/kvm/vcpu_sbi_pmu.c | 3 + 3 files changed, 127 insertions(+), 2 deletions(-) diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h index 395518a1664e..586bab84be35 100644 --- a/arch/riscv/include/asm/kvm_vcpu_pmu.h +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h @@ -50,6 +50,10 @@ struct kvm_pmu { bool init_done; /* Bit map of all the virtual counter used */ DECLARE_BITMAP(pmc_in_use, RISCV_KVM_MAX_COUNTERS); + /* The address of the counter snapshot area (guest physical address) */ + gpa_t snapshot_addr; + /* The actual data of the snapshot */ + struct riscv_pmu_snapshot_data *sdata; }; #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu_context) @@ -85,6 +89,9 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, struct kvm_vcpu_sbi_return *retdata); void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu); +int kvm_riscv_vcpu_pmu_setup_snapshot(struct kvm_vcpu *vcpu, unsigned long saddr_low, + unsigned long saddr_high, unsigned long flags, + struct kvm_vcpu_sbi_return *retdata); void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu); void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu); diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c index 29bf4ca798cb..b3112c23ad33 100644 --- a/arch/riscv/kvm/vcpu_pmu.c +++ b/arch/riscv/kvm/vcpu_pmu.c @@ -311,6 +311,81 @@ int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num, return ret; } +static void kvm_pmu_clear_snapshot_area(struct kvm_vcpu *vcpu) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + int snapshot_area_size = sizeof(struct riscv_pmu_snapshot_data); + + if (kvpmu->sdata) { + memset(kvpmu->sdata, 0, snapshot_area_size); + if (kvpmu->snapshot_addr != INVALID_GPA) + kvm_vcpu_write_guest(vcpu, kvpmu->snapshot_addr, + kvpmu->sdata, snapshot_area_size); + kfree(kvpmu->sdata); + kvpmu->sdata = NULL; + } + kvpmu->snapshot_addr = INVALID_GPA; +} + +int kvm_riscv_vcpu_pmu_setup_snapshot(struct kvm_vcpu *vcpu, unsigned long saddr_low, + unsigned long saddr_high, unsigned long flags, + struct 
kvm_vcpu_sbi_return *retdata) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + int snapshot_area_size = sizeof(struct riscv_pmu_snapshot_data); + int sbiret = 0; + gpa_t saddr; + unsigned long hva; + bool writable; + + if (!kvpmu) { + sbiret = SBI_ERR_INVALID_PARAM; + goto out; + } + + if (saddr_low == -1 && saddr_high == -1) { + kvm_pmu_clear_snapshot_area(vcpu); + return 0; + } + + saddr = saddr_low; + + if (saddr_high != 0) { + if (IS_ENABLED(CONFIG_32BIT)) + saddr |= ((gpa_t)saddr << 32); + else + sbiret = SBI_ERR_INVALID_ADDRESS; + goto out; + } + + if (kvm_is_error_gpa(vcpu->kvm, saddr)) { + sbiret = SBI_ERR_INVALID_PARAM; + goto out; + } + + hva = kvm_vcpu_gfn_to_hva_prot(vcpu, saddr >> PAGE_SHIFT, &writable); + if (kvm_is_error_hva(hva) || !writable) { + sbiret = SBI_ERR_INVALID_ADDRESS; + goto out; + } + + kvpmu->snapshot_addr = saddr; + kvpmu->sdata = kzalloc(snapshot_area_size, GFP_ATOMIC); + if (!kvpmu->sdata) + return -ENOMEM; + + if (kvm_vcpu_write_guest(vcpu, saddr, kvpmu->sdata, snapshot_area_size)) { + kfree(kvpmu->sdata); + kvpmu->snapshot_addr = INVALID_GPA; + sbiret = SBI_ERR_FAILURE; + } + +out: + retdata->err_val = sbiret; + + return 0; +} + int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata) { @@ -344,20 +419,32 @@ int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, int i, pmc_index, sbiret = 0; struct kvm_pmc *pmc; int fevent_code; + bool snap_flag_set = flags & SBI_PMU_START_FLAG_INIT_FROM_SNAPSHOT; if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) { sbiret = SBI_ERR_INVALID_PARAM; goto out; } + if (snap_flag_set && kvpmu->snapshot_addr == INVALID_GPA) { + sbiret = SBI_ERR_NO_SHMEM; + goto out; + } + /* Start the counters that have been configured and requested by the guest */ for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) { pmc_index = i + ctr_base; if (!test_bit(pmc_index, kvpmu->pmc_in_use)) continue; pmc = &kvpmu->pmc[pmc_index]; - if (flags & SBI_PMU_START_FLAG_SET_INIT_VALUE) + if (flags & SBI_PMU_START_FLAG_SET_INIT_VALUE) { pmc->counter_val = ival; + } else if (snap_flag_set) { + kvm_vcpu_read_guest(vcpu, kvpmu->snapshot_addr, kvpmu->sdata, + sizeof(struct riscv_pmu_snapshot_data)); + pmc->counter_val = kvpmu->sdata->ctr_values[pmc_index]; + } + if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) { fevent_code = get_event_code(pmc->event_idx); if (fevent_code >= SBI_PMU_FW_MAX) { @@ -398,14 +485,21 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, { struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); int i, pmc_index, sbiret = 0; + u64 enabled, running; struct kvm_pmc *pmc; int fevent_code; + bool snap_flag_set = flags & SBI_PMU_STOP_FLAG_TAKE_SNAPSHOT; - if (kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0) { + if ((kvm_pmu_validate_counter_mask(kvpmu, ctr_base, ctr_mask) < 0)) { sbiret = SBI_ERR_INVALID_PARAM; goto out; } + if (snap_flag_set && kvpmu->snapshot_addr == INVALID_GPA) { + sbiret = SBI_ERR_NO_SHMEM; + goto out; + } + /* Stop the counters that have been configured and requested by the guest */ for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) { pmc_index = i + ctr_base; @@ -438,9 +532,28 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, } else { sbiret = SBI_ERR_INVALID_PARAM; } + + if (snap_flag_set && !sbiret) { + if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) + pmc->counter_val = kvpmu->fw_event[fevent_code].value; + else if (pmc->perf_event) + pmc->counter_val += 
+				pmc->counter_val += perf_event_read_value(pmc->perf_event,
+									  &enabled, &running);
+			/* TODO: Add counter overflow support when sscofpmf support is added */
+			kvpmu->sdata->ctr_values[i] = pmc->counter_val;
+			kvm_vcpu_write_guest(vcpu, kvpmu->snapshot_addr, kvpmu->sdata,
+					     sizeof(struct riscv_pmu_snapshot_data));
+		}
+
 		if (flags & SBI_PMU_STOP_FLAG_RESET) {
 			pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID;
 			clear_bit(pmc_index, kvpmu->pmc_in_use);
+			if (snap_flag_set) {
+				/* Clear the snapshot area for the upcoming deletion event */
+				kvpmu->sdata->ctr_values[i] = 0;
+				kvm_vcpu_write_guest(vcpu, kvpmu->snapshot_addr, kvpmu->sdata,
+						     sizeof(struct riscv_pmu_snapshot_data));
+			}
 		}
 	}
@@ -566,6 +679,7 @@ void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
 	kvpmu->num_hw_ctrs = num_hw_ctrs + 1;
 	kvpmu->num_fw_ctrs = SBI_PMU_FW_MAX;
 	memset(&kvpmu->fw_event, 0, SBI_PMU_FW_MAX * sizeof(struct kvm_fw_event));
+	kvpmu->snapshot_addr = INVALID_GPA;

 	if (kvpmu->num_hw_ctrs > RISCV_KVM_MAX_HW_CTRS) {
 		pr_warn_once("Limiting the hardware counters to 32 as specified by the ISA");
@@ -625,6 +739,7 @@ void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu)
 	}
 	bitmap_zero(kvpmu->pmc_in_use, RISCV_MAX_COUNTERS);
 	memset(&kvpmu->fw_event, 0, SBI_PMU_FW_MAX * sizeof(struct kvm_fw_event));
+	kvm_pmu_clear_snapshot_area(vcpu);
 }

 void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu)

diff --git a/arch/riscv/kvm/vcpu_sbi_pmu.c b/arch/riscv/kvm/vcpu_sbi_pmu.c
index b70179e9e875..9f61136e4bb1 100644
--- a/arch/riscv/kvm/vcpu_sbi_pmu.c
+++ b/arch/riscv/kvm/vcpu_sbi_pmu.c
@@ -64,6 +64,9 @@ static int kvm_sbi_ext_pmu_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	case SBI_EXT_PMU_COUNTER_FW_READ:
 		ret = kvm_riscv_vcpu_pmu_ctr_read(vcpu, cp->a0, retdata);
 		break;
+	case SBI_EXT_PMU_SNAPSHOT_SET_SHMEM:
+		ret = kvm_riscv_vcpu_pmu_setup_snapshot(vcpu, cp->a0, cp->a1, cp->a2, retdata);
+		break;
 	default:
 		retdata->err_val = SBI_ERR_NOT_SUPPORTED;
 	}
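On the guest side, registering the snapshot page is a single SBI call. A
hypothetical guest-side helper is sketched below; sbi_ecall() and the
SBI_EXT_PMU* IDs are real kernel symbols, but the wrapper itself is
illustrative and not part of this series:

	#include <asm/sbi.h>
	#include <linux/kernel.h>

	/* Hypothetical guest-side registration of a 4 KiB snapshot page. */
	static int pmu_snapshot_set_shmem(phys_addr_t shmem)
	{
		struct sbiret ret;

		if (IS_ENABLED(CONFIG_32BIT))
			/* RV32: low 32 bits of the address in a0, high bits in a1 */
			ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_SNAPSHOT_SET_SHMEM,
					lower_32_bits(shmem), upper_32_bits(shmem),
					0, 0, 0, 0);
		else
			/* RV64: full address in a0, a1 must be zero (see host check above) */
			ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_SNAPSHOT_SET_SHMEM,
					shmem, 0, 0, 0, 0, 0);

		return ret.error ? sbi_err_map_linux_errno(ret.error) : 0;
	}

Passing saddr_low == saddr_high == -1 disables the snapshot, which is
what the clear path in the host implementation above handles.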
From patchwork Wed Jan 10 23:13:58 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13516664
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Alexandre Ghiti, Andrew Jones, Atish Patra,
 Conor Dooley, Guo Ren, Icenowy Zheng, kvm-riscv@lists.infradead.org,
 kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland,
 Palmer Dabbelt, Paul Walmsley, Will Deacon, Vladimir Isaev
Subject: [v3 09/10] RISC-V: KVM: Add perf sampling support for guests
Date: Wed, 10 Jan 2024 15:13:58 -0800
Message-Id: <20240110231359.1239367-10-atishp@rivosinc.com>
In-Reply-To: <20240110231359.1239367-1-atishp@rivosinc.com>
References: <20240110231359.1239367-1-atishp@rivosinc.com>

KVM enables perf for a guest via counter virtualization. However, perf
sampling can not be supported yet, as the ISA provides no mechanism to
trap/emulate the scountovf CSR. Rely on the SBI PMU snapshot instead to
deliver the counter overflow data via the shared memory. For a sampling
event, the host receives the LCOFI interrupt first and injects it into
the guest via the irq filtering mechanism defined in the AIA
specification. Thus, ssaia must be enabled in the host in order to use
perf sampling in the guest. No other AIA dependency w.r.t. the kernel is
required.
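Under this scheme the guest's LCOFI handler never touches scountovf; it
consults the snapshot area instead. A rough guest-side sketch, assuming
the riscv_pmu_snapshot_data layout shown earlier; process_sample() is a
made-up hook standing in for the real perf machinery, not a kernel API:

	#include <linux/bits.h>
	#include <linux/printk.h>

	/* Made-up consumer of one overflow sample. */
	static void process_sample(int idx, u64 value)
	{
		pr_debug("counter %d overflowed, value %llu\n", idx, value);
	}

	/*
	 * Hypothetical guest LCOFI service routine: the hypervisor has
	 * already stopped the overflown counters and refreshed the snapshot
	 * page, so the guest reads plain memory -- no CSR access, no trap.
	 */
	static void pmu_ovf_handler(struct riscv_pmu_snapshot_data *snap)
	{
		int i;

		for (i = 0; i < 64; i++) {
			if (!(snap->ctr_overflow_mask & BIT_ULL(i)))
				continue;
			process_sample(i, snap->ctr_values[i]);
		}
	}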
Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/csr.h          |  3 +-
 arch/riscv/include/asm/kvm_vcpu_pmu.h |  3 ++
 arch/riscv/include/uapi/asm/kvm.h     |  1 +
 arch/riscv/kvm/aia.c                  |  5 ++
 arch/riscv/kvm/vcpu.c                 |  8 +--
 arch/riscv/kvm/vcpu_onereg.c          |  9 +++-
 arch/riscv/kvm/vcpu_pmu.c             | 70 +++++++++++++++++++++++++--
 7 files changed, 89 insertions(+), 10 deletions(-)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 88cdc8a3e654..bec09b33e2f0 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -168,7 +168,8 @@
 #define VSIP_TO_HVIP_SHIFT	(IRQ_VS_SOFT - IRQ_S_SOFT)
 #define VSIP_VALID_MASK		((_AC(1, UL) << IRQ_S_SOFT) | \
 				 (_AC(1, UL) << IRQ_S_TIMER) | \
-				 (_AC(1, UL) << IRQ_S_EXT))
+				 (_AC(1, UL) << IRQ_S_EXT) | \
+				 (_AC(1, UL) << IRQ_PMU_OVF))

 /* AIA CSR bits */
 #define TOPI_IID_SHIFT		16

diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
index 586bab84be35..8cb21a4f862c 100644
--- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
+++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
@@ -36,6 +36,7 @@ struct kvm_pmc {
 	bool started;
 	/* Monitoring event ID */
 	unsigned long event_idx;
+	struct kvm_vcpu *vcpu;
 };

 /* PMU data structure per vcpu */
@@ -50,6 +51,8 @@ struct kvm_pmu {
 	bool init_done;
 	/* Bit map of all the virtual counter used */
 	DECLARE_BITMAP(pmc_in_use, RISCV_KVM_MAX_COUNTERS);
+	/* Bit map of all the virtual counter overflown */
+	DECLARE_BITMAP(pmc_overflown, RISCV_KVM_MAX_COUNTERS);
 	/* The address of the counter snapshot area (guest physical address) */
 	gpa_t snapshot_addr;
 	/* The actual data of the snapshot */

diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index d6b7a5b95874..d5aea43bc797 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -139,6 +139,7 @@ enum KVM_RISCV_ISA_EXT_ID {
 	KVM_RISCV_ISA_EXT_ZIHPM,
 	KVM_RISCV_ISA_EXT_SMSTATEEN,
 	KVM_RISCV_ISA_EXT_ZICOND,
+	KVM_RISCV_ISA_EXT_SSCOFPMF,
 	KVM_RISCV_ISA_EXT_MAX,
 };

diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index a944294f6f23..0f0a9d11bb5f 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -545,6 +545,9 @@ void kvm_riscv_aia_enable(void)
 	enable_percpu_irq(hgei_parent_irq,
 			  irq_get_trigger_type(hgei_parent_irq));
 	csr_set(CSR_HIE, BIT(IRQ_S_GEXT));
+	/* Enable IRQ filtering for overflow interrupt only if sscofpmf is present */
+	if (__riscv_isa_extension_available(NULL, RISCV_ISA_EXT_SSCOFPMF))
+		csr_write(CSR_HVIEN, BIT(IRQ_PMU_OVF));
 }

 void kvm_riscv_aia_disable(void)
@@ -558,6 +561,8 @@ void kvm_riscv_aia_disable(void)
 		return;
 	hgctrl = get_cpu_ptr(&aia_hgei);

+	if (__riscv_isa_extension_available(NULL, RISCV_ISA_EXT_SSCOFPMF))
+		csr_clear(CSR_HVIEN, BIT(IRQ_PMU_OVF));
 	/* Disable per-CPU SGEI interrupt */
 	csr_clear(CSR_HIE, BIT(IRQ_S_GEXT));
 	disable_percpu_irq(hgei_parent_irq);

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index b5ca9f2e98ac..f83f0226439f 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -382,7 +382,8 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
 	if (irq < IRQ_LOCAL_MAX &&
 	    irq != IRQ_VS_SOFT &&
 	    irq != IRQ_VS_TIMER &&
-	    irq != IRQ_VS_EXT)
+	    irq != IRQ_VS_EXT &&
+	    irq != IRQ_PMU_OVF)
 		return -EINVAL;

 	set_bit(irq, vcpu->arch.irqs_pending);
@@ -397,14 +398,15 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
 int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
 {
 	/*
-	 * We only allow VS-mode software, timer, and external
+	 * We only allow VS-mode software, timer, counter overflow and external
 	 * interrupts when irq is one of the local interrupts
 	 * defined by RISC-V privilege specification.
 	 */
 	if (irq < IRQ_LOCAL_MAX &&
 	    irq != IRQ_VS_SOFT &&
 	    irq != IRQ_VS_TIMER &&
-	    irq != IRQ_VS_EXT)
+	    irq != IRQ_VS_EXT &&
+	    irq != IRQ_PMU_OVF)
 		return -EINVAL;

 	clear_bit(irq, vcpu->arch.irqs_pending);

diff --git a/arch/riscv/kvm/vcpu_onereg.c b/arch/riscv/kvm/vcpu_onereg.c
index fc34557f5356..1eaaa919aa61 100644
--- a/arch/riscv/kvm/vcpu_onereg.c
+++ b/arch/riscv/kvm/vcpu_onereg.c
@@ -36,6 +36,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
 	/* Multi letter extensions (alphabetically sorted) */
 	KVM_ISA_EXT_ARR(SMSTATEEN),
 	KVM_ISA_EXT_ARR(SSAIA),
+	KVM_ISA_EXT_ARR(SSCOFPMF),
 	KVM_ISA_EXT_ARR(SSTC),
 	KVM_ISA_EXT_ARR(SVINVAL),
 	KVM_ISA_EXT_ARR(SVNAPOT),
@@ -88,6 +89,7 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
 	case KVM_RISCV_ISA_EXT_I:
 	case KVM_RISCV_ISA_EXT_M:
 	case KVM_RISCV_ISA_EXT_SSTC:
+	case KVM_RISCV_ISA_EXT_SSCOFPMF:
 	case KVM_RISCV_ISA_EXT_SVINVAL:
 	case KVM_RISCV_ISA_EXT_SVNAPOT:
 	case KVM_RISCV_ISA_EXT_ZBA:
@@ -117,8 +119,13 @@ void kvm_riscv_vcpu_setup_isa(struct kvm_vcpu *vcpu)
 	for (i = 0; i < ARRAY_SIZE(kvm_isa_ext_arr); i++) {
 		host_isa = kvm_isa_ext_arr[i];
 		if (__riscv_isa_extension_available(NULL, host_isa) &&
-		    kvm_riscv_vcpu_isa_enable_allowed(i))
+		    kvm_riscv_vcpu_isa_enable_allowed(i)) {
+			/* Sscofpmf depends on interrupt filtering defined in ssaia */
+			if (host_isa == RISCV_ISA_EXT_SSCOFPMF &&
+			    !__riscv_isa_extension_available(NULL, RISCV_ISA_EXT_SSAIA))
+				continue;
 			set_bit(host_isa, vcpu->arch.isa);
+		}
 	}
 }
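From the VMM's point of view, the new extension is one more toggle in
the ONE_REG ISA-extension interface. A hypothetical userspace snippet,
assuming the KVM_REG_RISCV_ISA_SINGLE encoding from the uapi headers;
verify the macro composition against the target kernel's asm/kvm.h:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>
	#include <asm/kvm.h>

	/*
	 * Hypothetical VMM-side helper: enable Sscofpmf on one vCPU. KVM
	 * rejects the write when the host cannot provide the extension.
	 */
	static int enable_guest_sscofpmf(int vcpu_fd)
	{
		unsigned long val = 1;	/* non-zero = enable */
		struct kvm_one_reg reg = {
			.id = KVM_REG_RISCV | KVM_REG_SIZE_ULONG |
			      KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE |
			      KVM_RISCV_ISA_EXT_SSCOFPMF,
			.addr = (uint64_t)(unsigned long)&val,
		};

		return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
	}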
diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
index b3112c23ad33..063e11685340 100644
--- a/arch/riscv/kvm/vcpu_pmu.c
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -229,6 +229,47 @@ static int kvm_pmu_validate_counter_mask(struct kvm_pmu *kvpmu, unsigned long ct
 	return 0;
 }

+static void kvm_riscv_pmu_overflow(struct perf_event *perf_event,
+				   struct perf_sample_data *data,
+				   struct pt_regs *regs)
+{
+	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
+	struct kvm_vcpu *vcpu = pmc->vcpu;
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	struct riscv_pmu *rpmu = to_riscv_pmu(perf_event->pmu);
+	u64 period;
+
+	/*
+	 * Stop the event counting by directly accessing the perf_event.
+	 * Otherwise, this needs to be deferred via a workqueue.
+	 * That will introduce skew in the counter value because the actual
+	 * physical counter would start after returning from this function.
+	 * It will be stopped again once the workqueue is scheduled.
+	 */
+	rpmu->pmu.stop(perf_event, PERF_EF_UPDATE);
+
+	/*
+	 * The hw counter would start automatically when this function returns.
+	 * Thus, the host may continue to interrupt and inject it to the guest
+	 * even without the guest configuring the next event. Depending on the
+	 * hardware, the host may have some sluggishness only if privilege mode
+	 * filtering is not available. In an ideal world, where qemu is not the
+	 * only capable hardware, this can be removed.
+	 * FYI: ARM64 does it this way while x86 doesn't do anything as such.
+	 * TODO: Should we keep it for RISC-V?
+	 */
+	period = -(local64_read(&perf_event->count));
+
+	local64_set(&perf_event->hw.period_left, 0);
+	perf_event->attr.sample_period = period;
+	perf_event->hw.sample_period = period;
+
+	set_bit(pmc->idx, kvpmu->pmc_overflown);
+	kvm_riscv_vcpu_set_interrupt(vcpu, IRQ_PMU_OVF);
+
+	rpmu->pmu.start(perf_event, PERF_EF_RELOAD);
+}
+
 static long kvm_pmu_create_perf_event(struct kvm_pmc *pmc, struct perf_event_attr *attr,
 				      unsigned long flags, unsigned long eidx,
 				      unsigned long evtdata)
@@ -248,7 +289,7 @@ static long kvm_pmu_create_perf_event(struct kvm_pmc *pmc, struct perf_event_att
 	 */
 	attr->sample_period = kvm_pmu_get_sample_period(pmc);

-	event = perf_event_create_kernel_counter(attr, -1, current, NULL, pmc);
+	event = perf_event_create_kernel_counter(attr, -1, current, kvm_riscv_pmu_overflow, pmc);
 	if (IS_ERR(event)) {
 		pr_err("kvm pmu event creation failed for eidx %lx: %ld\n", eidx, PTR_ERR(event));
 		return PTR_ERR(event);
@@ -473,6 +514,12 @@ int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base,
 		}
 	}

+	/* The guest has serviced the interrupt and is starting the counter again */
+	if (test_bit(IRQ_PMU_OVF, vcpu->arch.irqs_pending)) {
+		clear_bit(pmc_index, kvpmu->pmc_overflown);
+		kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_PMU_OVF);
+	}
+
 out:
 	retdata->err_val = sbiret;

@@ -539,7 +586,13 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
 			else if (pmc->perf_event)
 				pmc->counter_val += perf_event_read_value(pmc->perf_event,
 									  &enabled, &running);
-			/* TODO: Add counter overflow support when sscofpmf support is added */
+			/*
+			 * The counter and overflow indices in the snapshot region are
+			 * w.r.t. cbase. Modify the set bit in the counter mask instead of
+			 * the pmc_index, which indicates the absolute counter index.
+			 */
+			if (test_bit(pmc_index, kvpmu->pmc_overflown))
+				kvpmu->sdata->ctr_overflow_mask |= (1UL << i);
 			kvpmu->sdata->ctr_values[i] = pmc->counter_val;
 			kvm_vcpu_write_guest(vcpu, kvpmu->snapshot_addr, kvpmu->sdata,
 					     sizeof(struct riscv_pmu_snapshot_data));
@@ -548,15 +601,20 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
 	if (flags & SBI_PMU_STOP_FLAG_RESET) {
 		pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID;
 		clear_bit(pmc_index, kvpmu->pmc_in_use);
+		clear_bit(pmc_index, kvpmu->pmc_overflown);
 		if (snap_flag_set) {
 			/* Clear the snapshot area for the upcoming deletion event */
 			kvpmu->sdata->ctr_values[i] = 0;
+			/*
+			 * Only clear the given counter, as the caller is responsible for
+			 * validating both the overflow mask and the configured counters.
+			 */
+			kvpmu->sdata->ctr_overflow_mask &= ~(1UL << i);
 			kvm_vcpu_write_guest(vcpu, kvpmu->snapshot_addr, kvpmu->sdata,
 					     sizeof(struct riscv_pmu_snapshot_data));
 		}
 	}
 }
-
 out:
 	retdata->err_val = sbiret;

@@ -699,6 +757,7 @@ void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
 		pmc = &kvpmu->pmc[i];
 		pmc->idx = i;
 		pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID;
+		pmc->vcpu = vcpu;
 		if (i < kvpmu->num_hw_ctrs) {
 			pmc->cinfo.type = SBI_PMU_CTR_TYPE_HW;
 			if (i < 3)
@@ -731,13 +790,14 @@ void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu)
 	if (!kvpmu)
 		return;

-	for_each_set_bit(i, kvpmu->pmc_in_use, RISCV_MAX_COUNTERS) {
+	for_each_set_bit(i, kvpmu->pmc_in_use, RISCV_KVM_MAX_COUNTERS) {
 		pmc = &kvpmu->pmc[i];
 		pmc->counter_val = 0;
 		kvm_pmu_release_perf_event(pmc);
 		pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID;
 	}
-	bitmap_zero(kvpmu->pmc_in_use, RISCV_MAX_COUNTERS);
+	bitmap_zero(kvpmu->pmc_in_use, RISCV_KVM_MAX_COUNTERS);
+	bitmap_zero(kvpmu->pmc_overflown, RISCV_KVM_MAX_COUNTERS);
 	memset(&kvpmu->fw_event, 0, SBI_PMU_FW_MAX * sizeof(struct kvm_fw_event));
 	kvm_pmu_clear_snapshot_area(vcpu);
 }
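The index arithmetic in the stop path above is easy to misread:
pmc_overflown is indexed by the absolute counter number (pmc_index =
i + ctr_base), while the bits written to the snapshot's
ctr_overflow_mask are relative to the cbase the guest passed. A
standalone toy program (not kernel code) illustrating the translation:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t overflown_abs = 1ULL << 5;	/* absolute counter 5 overflowed */
		uint64_t snapshot_mask = 0;
		unsigned int ctr_base = 3;		/* guest passed cbase = 3 */
		unsigned int i;

		for (i = 0; i + ctr_base < 64; i++) {
			unsigned int pmc_index = i + ctr_base;	/* absolute index */

			if (overflown_abs & (1ULL << pmc_index))
				snapshot_mask |= 1ULL << i;	/* cbase-relative bit */
		}

		/* Counter 5 lands on bit 2 (5 - 3) of the snapshot mask. */
		printf("ctr_overflow_mask = 0x%llx\n",
		       (unsigned long long)snapshot_mask);
		return 0;
	}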
From patchwork Wed Jan 10 23:13:59 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13516665
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Alexandre Ghiti, Andrew Jones, Atish Patra,
 Conor Dooley, Guo Ren, Icenowy Zheng, kvm-riscv@lists.infradead.org,
 kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland,
 Palmer Dabbelt, Paul Walmsley, Will Deacon, Vladimir Isaev
Subject: [v3 10/10] RISC-V: KVM: Support 64 bit firmware counters on RV32
Date: Wed, 10 Jan 2024 15:13:59 -0800
Message-Id: <20240110231359.1239367-11-atishp@rivosinc.com>
In-Reply-To: <20240110231359.1239367-1-atishp@rivosinc.com>
References: <20240110231359.1239367-1-atishp@rivosinc.com>

SBI v2.0 introduced a fw_read_hi function to read the upper bits of the
64-bit firmware counters on RV32-based systems. Add the infrastructure
to support that.

Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_vcpu_pmu.h |  4 ++-
 arch/riscv/kvm/vcpu_pmu.c             | 37 ++++++++++++++++++++++++++-
 arch/riscv/kvm/vcpu_sbi_pmu.c         |  6 +++++
 3 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
index 8cb21a4f862c..e0ad27dea46c 100644
--- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
+++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
@@ -20,7 +20,7 @@ static_assert(RISCV_KVM_MAX_COUNTERS <= 64);

 struct kvm_fw_event {
 	/* Current value of the event */
-	unsigned long value;
+	u64 value;

 	/* Event monitoring status */
 	bool started;
@@ -91,6 +91,8 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 				     struct kvm_vcpu_sbi_return *retdata);
 int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
 				struct kvm_vcpu_sbi_return *retdata);
+int kvm_riscv_vcpu_pmu_fw_ctr_read_hi(struct kvm_vcpu *vcpu, unsigned long cidx,
+				      struct kvm_vcpu_sbi_return *retdata);
 void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu);
 int kvm_riscv_vcpu_pmu_setup_snapshot(struct kvm_vcpu *vcpu, unsigned long saddr_low,
 				      unsigned long saddr_high, unsigned long flags,

diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
index 063e11685340..31016934cd67 100644
--- a/arch/riscv/kvm/vcpu_pmu.c
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -196,6 +196,29 @@ static int pmu_get_pmc_index(struct kvm_pmu *pmu, unsigned long eidx,
 	return kvm_pmu_get_programmable_pmc_index(pmu, eidx, cbase, cmask);
 }

+static int pmu_fw_ctr_read_hi(struct kvm_vcpu *vcpu, unsigned long cidx,
+			      unsigned long *out_val)
+{
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	int fevent_code;
+
+	if (!IS_ENABLED(CONFIG_32BIT))
+		return -EINVAL;
+
+	pmc = &kvpmu->pmc[cidx];
+
+	if (pmc->cinfo.type != SBI_PMU_CTR_TYPE_FW)
+		return -EINVAL;
+
+	fevent_code = get_event_code(pmc->event_idx);
+	pmc->counter_val = kvpmu->fw_event[fevent_code].value;
+
+	*out_val = pmc->counter_val >> 32;
+
+	return 0;
+}
+
 static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
 			unsigned long *out_val)
 {
@@ -701,6 +724,18 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 	return 0;
 }

+int kvm_riscv_vcpu_pmu_fw_ctr_read_hi(struct kvm_vcpu *vcpu, unsigned long cidx,
+				      struct kvm_vcpu_sbi_return *retdata)
+{
+	int ret;
+
+	ret = pmu_fw_ctr_read_hi(vcpu, cidx, &retdata->out_val);
+	if (ret == -EINVAL)
+		retdata->err_val = SBI_ERR_INVALID_PARAM;
+
+	return 0;
+}
+
 int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
 				struct kvm_vcpu_sbi_return *retdata)
 {
@@ -774,7 +809,7 @@ void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
 			pmc->cinfo.csr = CSR_CYCLE + i;
 		} else {
 			pmc->cinfo.type = SBI_PMU_CTR_TYPE_FW;
-			pmc->cinfo.width = BITS_PER_LONG - 1;
+			pmc->cinfo.width = 63;
 		}
 	}

diff --git a/arch/riscv/kvm/vcpu_sbi_pmu.c b/arch/riscv/kvm/vcpu_sbi_pmu.c
index 9f61136e4bb1..58a0e5587e2a 100644
--- a/arch/riscv/kvm/vcpu_sbi_pmu.c
+++ b/arch/riscv/kvm/vcpu_sbi_pmu.c
@@ -64,6 +64,12 @@ static int kvm_sbi_ext_pmu_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	case SBI_EXT_PMU_COUNTER_FW_READ:
 		ret = kvm_riscv_vcpu_pmu_ctr_read(vcpu, cp->a0, retdata);
 		break;
+	case SBI_EXT_PMU_COUNTER_FW_READ_HI:
+		if (IS_ENABLED(CONFIG_32BIT))
+			ret = kvm_riscv_vcpu_pmu_fw_ctr_read_hi(vcpu, cp->a0, retdata);
+		else
+			retdata->out_val = 0;
+		break;
 	case SBI_EXT_PMU_SNAPSHOT_SET_SHMEM:
 		ret = kvm_riscv_vcpu_pmu_setup_snapshot(vcpu, cp->a0, cp->a1, cp->a2, retdata);
 		break;
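With this in place, an RV32 guest reconstructs the full 64-bit value by
pairing the two calls. A hypothetical guest-side helper is sketched
below; the SBI_EXT_PMU_COUNTER_FW_READ/_READ_HI IDs are real, but the
helper and its simplistic error handling are not from this series, and a
production version would re-read to guard against the low half wrapping
between the two ecalls:

	#include <asm/sbi.h>
	#include <linux/types.h>

	/*
	 * Hypothetical RV32 guest helper: FW_READ returns the low 32 bits of
	 * a firmware counter, FW_READ_HI the high 32 bits.
	 */
	static u64 pmu_fw_counter_read64(unsigned long cidx)
	{
		struct sbiret lo, hi;

		lo = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_FW_READ,
			       cidx, 0, 0, 0, 0, 0);
		hi = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_FW_READ_HI,
			       cidx, 0, 0, 0, 0, 0);
		if (lo.error || hi.error)
			return 0;	/* sketch only; real code would propagate errors */

		return ((u64)hi.value << 32) | (u32)lo.value;
	}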