From patchwork Thu Dec 15 17:00:36 2022
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Andrew Jones, Guo Ren,
 kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
 linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
 Paul Walmsley, Sergey Matyukevich, Eric Lin, Will Deacon
Subject: [PATCH v2 01/11] RISC-V: Define helper functions to expose hpm
 counter width and count
Date: Thu, 15 Dec 2022 09:00:36 -0800
Message-Id: <20221215170046.2010255-2-atishp@rivosinc.com>
In-Reply-To: <20221215170046.2010255-1-atishp@rivosinc.com>
References: <20221215170046.2010255-1-atishp@rivosinc.com>

The KVM module needs to know how many hardware counters the platform
supports and their width. Otherwise, it cannot present optimal virtual
counter values to the guest. For simplicity, the virtual hardware
counters also need to have the same width as the logical hardware
counters. However, there shouldn't be any mapping between virtual
hardware counters and logical hardware counters. As we don't support
heterogeneous harts or counters with different widths as of now, the
implementation relies on the counter width of the first available
programmable counter.

Signed-off-by: Atish Patra
---
 drivers/perf/riscv_pmu_sbi.c   | 35 +++++++++++++++++++++++++++++++++-
 include/linux/perf/riscv_pmu.h |  3 +++
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
index 3852c18..65d4aa4 100644
--- a/drivers/perf/riscv_pmu_sbi.c
+++ b/drivers/perf/riscv_pmu_sbi.c
@@ -49,6 +49,9 @@ static const struct attribute_group *riscv_pmu_attr_groups[] = {
 static union sbi_pmu_ctr_info *pmu_ctr_list;
 static unsigned int riscv_pmu_irq;
 
+/* Cache the available counters in a bitmask */
+unsigned long cmask;
+
 struct sbi_pmu_event_data {
 	union {
 		union {
@@ -264,6 +267,37 @@ static bool pmu_sbi_ctr_is_fw(int cidx)
 	return (info->type == SBI_PMU_CTR_TYPE_FW) ? true : false;
 }
 
+/*
+ * Returns the counter width of a programmable counter and number of hardware
+ * counters. As we don't support heterogeneous CPUs yet, it is okay to just
+ * return the counter width of the first programmable counter.
+ */
+int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr)
+{
+	int i;
+	union sbi_pmu_ctr_info *info;
+	u32 hpm_width = 0, hpm_count = 0;
+
+	if (!cmask)
+		return -EINVAL;
+
+	for_each_set_bit(i, &cmask, RISCV_MAX_COUNTERS) {
+		info = &pmu_ctr_list[i];
+		if (!info)
+			continue;
+		if (!hpm_width && (info->csr != CSR_CYCLE) && (info->csr != CSR_INSTRET))
+			hpm_width = info->width;
+		if (info->type == SBI_PMU_CTR_TYPE_HW)
+			hpm_count++;
+	}
+
+	*hw_ctr_width = hpm_width;
+	*num_hw_ctr = hpm_count;
+
+	return 0;
+}
+EXPORT_SYMBOL(riscv_pmu_get_hpm_info);
+
 static int pmu_sbi_ctr_get_idx(struct perf_event *event)
 {
 	struct hw_perf_event *hwc = &event->hw;
@@ -798,7 +832,6 @@ static void riscv_pmu_destroy(struct riscv_pmu *pmu)
 static int pmu_sbi_device_probe(struct platform_device *pdev)
 {
 	struct riscv_pmu *pmu = NULL;
-	unsigned long cmask = 0;
 	int ret = -ENODEV;
 	int num_counters;
 
diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index e17e86a..a1c3f77 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -73,6 +73,9 @@ void riscv_pmu_legacy_skip_init(void);
 static inline void riscv_pmu_legacy_skip_init(void) {};
 #endif
 struct riscv_pmu *riscv_pmu_alloc(void);
+#ifdef CONFIG_RISCV_PMU_SBI
+int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr);
+#endif
 #endif /* CONFIG_RISCV_PMU */
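For context, a minimal sketch of how a consumer such as the KVM PMU code
might use the new helper; example_query_hpm_info() is an illustrative name,
not part of this patch:

#include <linux/perf/riscv_pmu.h>

/* Query the hpmcounter geometry exported by the SBI PMU driver. */
static int example_query_hpm_info(void)
{
	u32 hpm_width = 0, num_hw_ctrs = 0;
	int ret;

	ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs);
	if (ret < 0)
		return ret;	/* driver not probed yet: cmask is still 0 */

	pr_info("hpmcounter width: %u, hardware counters: %u\n",
		hpm_width, num_hw_ctrs);
	return 0;
}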
From patchwork Thu Dec 15 17:00:37 2022
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 02/11] RISC-V: KVM: Define a probe function for SBI
 extension data structures
Date: Thu, 15 Dec 2022 09:00:37 -0800
Message-Id: <20221215170046.2010255-3-atishp@rivosinc.com>
In-Reply-To: <20221215170046.2010255-1-atishp@rivosinc.com>
References: <20221215170046.2010255-1-atishp@rivosinc.com>

Currently, the probe function just checks if an SBI extension is
registered or not. However, the extension may not want to advertise
itself depending on some other condition. An additional extension-specific
probe function will allow extensions to decide if they want to be
advertised to the caller or not. Any extension that does not require
additional dependency checks can avoid implementing this function.

Signed-off-by: Atish Patra
Reviewed-by: Andrew Jones
---
 arch/riscv/include/asm/kvm_vcpu_sbi.h |  3 +++
 arch/riscv/kvm/vcpu_sbi_base.c        | 13 +++++++++++--
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h
index f79478a..61dac1b 100644
--- a/arch/riscv/include/asm/kvm_vcpu_sbi.h
+++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h
@@ -29,6 +29,9 @@ struct kvm_vcpu_sbi_extension {
 	int (*handler)(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		       unsigned long *out_val, struct kvm_cpu_trap *utrap,
 		       bool *exit);
+
+	/* Extension specific probe function */
+	unsigned long (*probe)(struct kvm_vcpu *vcpu, unsigned long extid);
 };
 
 void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run);
diff --git a/arch/riscv/kvm/vcpu_sbi_base.c b/arch/riscv/kvm/vcpu_sbi_base.c
index 5d65c63..89e2415 100644
--- a/arch/riscv/kvm/vcpu_sbi_base.c
+++ b/arch/riscv/kvm/vcpu_sbi_base.c
@@ -19,6 +19,7 @@ static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 {
 	int ret = 0;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	const struct kvm_vcpu_sbi_extension *sbi_ext;
 
 	switch (cp->a6) {
 	case SBI_EXT_BASE_GET_SPEC_VERSION:
@@ -43,8 +44,16 @@ static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			 */
 			kvm_riscv_vcpu_sbi_forward(vcpu, run);
 			*exit = true;
-		} else
-			*out_val = kvm_vcpu_sbi_find_ext(cp->a0) ? 1 : 0;
+		} else {
+			sbi_ext = kvm_vcpu_sbi_find_ext(cp->a0);
+			if (sbi_ext) {
+				if (sbi_ext->probe)
+					*out_val = sbi_ext->probe(vcpu, cp->a0);
+				else
+					*out_val = 1;
+			} else
+				*out_val = 0;
+		}
 		break;
 	case SBI_EXT_BASE_GET_MVENDORID:
 		*out_val = vcpu->arch.mvendorid;
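A hedged sketch of how a later extension could use the new callback; the
foo_* names and extension ID are illustrative, not symbols introduced by
this series, and the handler is assumed to be defined elsewhere:

/* Advertise the extension only when its backing state is ready. */
static unsigned long kvm_sbi_ext_foo_probe(struct kvm_vcpu *vcpu,
					   unsigned long extid)
{
	/* Illustrative condition; a real extension checks its own state */
	return foo_state_initialized(vcpu) ? 1 : 0;
}

static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_foo = {
	.extid_start = SBI_EXT_FOO,	/* hypothetical extension ID */
	.extid_end = SBI_EXT_FOO,
	.handler = kvm_sbi_ext_foo_handler,
	.probe = kvm_sbi_ext_foo_probe,
};

A probe return of 0 makes SBI_EXT_BASE_PROBE_EXT report the extension as
unavailable to the guest, even though it is registered with KVM.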
From patchwork Thu Dec 15 17:00:38 2022
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/11] RISC-V: KVM: Return correct code for hsm stop
 function
Date: Thu, 15 Dec 2022 09:00:38 -0800
Message-Id: <20221215170046.2010255-4-atishp@rivosinc.com>
In-Reply-To: <20221215170046.2010255-1-atishp@rivosinc.com>
References: <20221215170046.2010255-1-atishp@rivosinc.com>

According to the SBI specification, the stop function can only return
the error code SBI_ERR_FAILED. However, it currently returns -EINVAL,
which will be mapped to SBI_ERR_INVALID_PARAM. Return the appropriate
Linux error code.

Signed-off-by: Atish Patra
---
 arch/riscv/kvm/vcpu_sbi_hsm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c
index 2e915ca..0f8d9fe 100644
--- a/arch/riscv/kvm/vcpu_sbi_hsm.c
+++ b/arch/riscv/kvm/vcpu_sbi_hsm.c
@@ -42,7 +42,7 @@ static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu)
 static int kvm_sbi_hsm_vcpu_stop(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->arch.power_off)
-		return -EINVAL;
+		return -EPERM;
 
 	kvm_riscv_vcpu_power_off(vcpu);
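For reference, a guest-side sketch (not part of the patch) of the call
whose error code changes here, following the SBI calling convention:
extension ID in a7, function ID in a6, error code back in a0.
example_sbi_hart_stop() is an illustrative name:

#define SBI_EXT_HSM		0x48534D	/* "HSM" */
#define SBI_EXT_HSM_HART_STOP	0x1

/* Stop the calling hart; a0 carries the SBI error code on return. */
static long example_sbi_hart_stop(void)
{
	register long a0 asm("a0");
	register long a6 asm("a6") = SBI_EXT_HSM_HART_STOP;
	register long a7 asm("a7") = SBI_EXT_HSM;

	asm volatile("ecall" : "=r" (a0) : "r" (a6), "r" (a7) : "memory");
	return a0;	/* nonzero, e.g. hart already stopped, means failure */
}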
From patchwork Thu Dec 15 17:00:39 2022
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 04/11] RISC-V: KVM: Modify SBI extension handler to return
 SBI error code
Date: Thu, 15 Dec 2022 09:00:39 -0800
Message-Id: <20221215170046.2010255-5-atishp@rivosinc.com>
In-Reply-To: <20221215170046.2010255-1-atishp@rivosinc.com>
References: <20221215170046.2010255-1-atishp@rivosinc.com>

Currently, the SBI extension handler is expected to return a Linux error
code. The top SBI layer converts the Linux error code to an SBI-specific
error code that can be returned to the guest invoking the SBI calls. This
model works as long as the two sets of error codes have a 1-to-1 mapping
between them. However, that may not always be true.

This patch disassociates the two error codes by allowing the SBI extension
implementation to return SBI-specific error codes as well. The extension
will continue to return a Linux error code, which indicates any problem
*with* the extension emulation, while the SBI-specific error indicates the
problem *of* the emulation.

Suggested-by: Andrew Jones
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_vcpu_sbi.h | 10 ++++--
 arch/riscv/kvm/vcpu_sbi.c             | 45 ++++++++-------------------
 arch/riscv/kvm/vcpu_sbi_base.c        | 38 +++++++++++-----------
 arch/riscv/kvm/vcpu_sbi_hsm.c         | 22 +++++++------
 arch/riscv/kvm/vcpu_sbi_replace.c     | 44 +++++++++++++-------------
 5 files changed, 74 insertions(+), 85 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h
index 61dac1b..fee9253 100644
--- a/arch/riscv/include/asm/kvm_vcpu_sbi.h
+++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h
@@ -18,6 +18,12 @@ struct kvm_vcpu_sbi_context {
 	int return_handled;
 };
 
+struct kvm_vcpu_sbi_ext_data {
+	unsigned long out_val;
+	unsigned long err_val;
+	bool uexit;
+};
+
 struct kvm_vcpu_sbi_extension {
 	unsigned long extid_start;
 	unsigned long extid_end;
@@ -27,8 +33,8 @@ struct kvm_vcpu_sbi_extension {
 	 * specific error codes.
 	 */
 	int (*handler)(struct kvm_vcpu *vcpu, struct kvm_run *run,
-		       unsigned long *out_val, struct kvm_cpu_trap *utrap,
-		       bool *exit);
+		       struct kvm_vcpu_sbi_ext_data *edata,
+		       struct kvm_cpu_trap *utrap);
 
 	/* Extension specific probe function */
 	unsigned long (*probe)(struct kvm_vcpu *vcpu, unsigned long extid);
diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
index f96991d..50c5472 100644
--- a/arch/riscv/kvm/vcpu_sbi.c
+++ b/arch/riscv/kvm/vcpu_sbi.c
@@ -12,26 +12,6 @@
 #include
 #include
 
-static int kvm_linux_err_map_sbi(int err)
-{
-	switch (err) {
-	case 0:
-		return SBI_SUCCESS;
-	case -EPERM:
-		return SBI_ERR_DENIED;
-	case -EINVAL:
-		return SBI_ERR_INVALID_PARAM;
-	case -EFAULT:
-		return SBI_ERR_INVALID_ADDRESS;
-	case -EOPNOTSUPP:
-		return SBI_ERR_NOT_SUPPORTED;
-	case -EALREADY:
-		return SBI_ERR_ALREADY_AVAILABLE;
-	default:
-		return SBI_ERR_FAILURE;
-	};
-}
-
 #ifndef CONFIG_RISCV_SBI_V01
 static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = {
 	.extid_start = -1UL,
@@ -125,11 +105,10 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
 	int ret = 1;
 	bool next_sepc = true;
-	bool userspace_exit = false;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	const struct kvm_vcpu_sbi_extension *sbi_ext;
 	struct kvm_cpu_trap utrap = { 0 };
-	unsigned long out_val = 0;
+	struct kvm_vcpu_sbi_ext_data edata_out = { 0 };
 	bool ext_is_v01 = false;
 
 	sbi_ext = kvm_vcpu_sbi_find_ext(cp->a7);
@@ -139,7 +118,7 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		    cp->a7 <= SBI_EXT_0_1_SHUTDOWN)
 			ext_is_v01 = true;
 #endif
-		ret = sbi_ext->handler(vcpu, run, &out_val, &utrap, &userspace_exit);
+		ret = sbi_ext->handler(vcpu, run, &edata_out, &utrap);
 	} else {
 		/* Return error for unsupported SBI calls */
 		cp->a0 = SBI_ERR_NOT_SUPPORTED;
@@ -156,25 +135,27 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		goto ecall_done;
 	}
 
+	/* The SBI extension returned a Linux error code. Exit the ioctl
+	 * loop and forward the error to userspace.
+	 */
+	if (ret < 0) {
+		next_sepc = false;
+		goto ecall_done;
+	}
+
 	/* Exit ioctl loop or Propagate the error code the guest */
-	if (userspace_exit) {
+	if (edata_out.uexit) {
 		next_sepc = false;
 		ret = 0;
 	} else {
-		/**
-		 * SBI extension handler always returns an Linux error code. Convert
-		 * it to the SBI specific error code that can be propagated the SBI
-		 * caller.
-		 */
-		ret = kvm_linux_err_map_sbi(ret);
-		cp->a0 = ret;
+		cp->a0 = edata_out.err_val;
 		ret = 1;
 	}
 ecall_done:
 	if (next_sepc)
 		cp->sepc += 4;
 	if (!ext_is_v01)
-		cp->a1 = out_val;
+		cp->a1 = edata_out.out_val;
 
 	return ret;
 }
diff --git a/arch/riscv/kvm/vcpu_sbi_base.c b/arch/riscv/kvm/vcpu_sbi_base.c
index 89e2415..487828d 100644
--- a/arch/riscv/kvm/vcpu_sbi_base.c
+++ b/arch/riscv/kvm/vcpu_sbi_base.c
@@ -14,24 +14,23 @@
 #include
 
 static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
-				    unsigned long *out_val,
-				    struct kvm_cpu_trap *trap, bool *exit)
+				    struct kvm_vcpu_sbi_ext_data *edata,
+				    struct kvm_cpu_trap *trap)
 {
-	int ret = 0;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	const struct kvm_vcpu_sbi_extension *sbi_ext;
 
 	switch (cp->a6) {
 	case SBI_EXT_BASE_GET_SPEC_VERSION:
-		*out_val = (KVM_SBI_VERSION_MAJOR <<
+		edata->out_val = (KVM_SBI_VERSION_MAJOR <<
 			    SBI_SPEC_VERSION_MAJOR_SHIFT) |
 			    KVM_SBI_VERSION_MINOR;
 		break;
 	case SBI_EXT_BASE_GET_IMP_ID:
-		*out_val = KVM_SBI_IMPID;
+		edata->out_val = KVM_SBI_IMPID;
 		break;
 	case SBI_EXT_BASE_GET_IMP_VERSION:
-		*out_val = LINUX_VERSION_CODE;
+		edata->out_val = LINUX_VERSION_CODE;
 		break;
 	case SBI_EXT_BASE_PROBE_EXT:
 		if ((cp->a0 >= SBI_EXT_EXPERIMENTAL_START &&
@@ -43,33 +42,33 @@ static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			 */
 			kvm_riscv_vcpu_sbi_forward(vcpu, run);
-			*exit = true;
+			edata->uexit = true;
 		} else {
 			sbi_ext = kvm_vcpu_sbi_find_ext(cp->a0);
 			if (sbi_ext) {
 				if (sbi_ext->probe)
-					*out_val = sbi_ext->probe(vcpu, cp->a0);
+					edata->out_val = sbi_ext->probe(vcpu, cp->a0);
 				else
-					*out_val = 1;
+					edata->out_val = 1;
 			} else
-				*out_val = 0;
+				edata->out_val = 0;
 		}
 		break;
 	case SBI_EXT_BASE_GET_MVENDORID:
-		*out_val = vcpu->arch.mvendorid;
+		edata->out_val = vcpu->arch.mvendorid;
 		break;
 	case SBI_EXT_BASE_GET_MARCHID:
-		*out_val = vcpu->arch.marchid;
+		edata->out_val = vcpu->arch.marchid;
 		break;
 	case SBI_EXT_BASE_GET_MIMPID:
-		*out_val = vcpu->arch.mimpid;
+		edata->out_val = vcpu->arch.mimpid;
 		break;
 	default:
-		ret = -EOPNOTSUPP;
+		edata->err_val = SBI_ERR_NOT_SUPPORTED;
 		break;
 	}
 
-	return ret;
+	return 0;
 }
 
 const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base = {
@@ -79,17 +78,16 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base = {
 };
 
 static int kvm_sbi_ext_forward_handler(struct kvm_vcpu *vcpu,
 				       struct kvm_run *run,
-				       unsigned long *out_val,
-				       struct kvm_cpu_trap *utrap,
-				       bool *exit)
+				       struct kvm_vcpu_sbi_ext_data *edata,
+				       struct kvm_cpu_trap *utrap)
 {
 	/*
 	 * Both SBI experimental and vendor extensions are
 	 * unconditionally forwarded to userspace.
 	 */
 	kvm_riscv_vcpu_sbi_forward(vcpu, run);
-	*exit = true;
+	edata->uexit = true;
 	return 0;
 }
diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c
index 0f8d9fe..4188f21 100644
--- a/arch/riscv/kvm/vcpu_sbi_hsm.c
+++ b/arch/riscv/kvm/vcpu_sbi_hsm.c
@@ -21,9 +21,9 @@ static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu)
 
 	target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
 	if (!target_vcpu)
-		return -EINVAL;
+		return SBI_ERR_INVALID_PARAM;
 	if (!target_vcpu->arch.power_off)
-		return -EALREADY;
+		return SBI_ERR_ALREADY_AVAILABLE;
 
 	reset_cntx = &target_vcpu->arch.guest_reset_context;
 	/* start address */
@@ -42,7 +42,7 @@ static int kvm_sbi_hsm_vcpu_stop(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->arch.power_off)
-		return -EPERM;
+		return SBI_ERR_FAILURE;
 
 	kvm_riscv_vcpu_power_off(vcpu);
 
@@ -57,7 +57,7 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
 
 	target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
 	if (!target_vcpu)
-		return -EINVAL;
+		return SBI_ERR_INVALID_PARAM;
 	if (!target_vcpu->arch.power_off)
 		return SBI_HSM_STATE_STARTED;
 	else if (vcpu->stat.generic.blocking)
@@ -66,10 +66,10 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
 	return SBI_HSM_STATE_STOPPED;
 }
 
+
 static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
-				   unsigned long *out_val,
-				   struct kvm_cpu_trap *utrap,
-				   bool *exit)
+				   struct kvm_vcpu_sbi_ext_data *edata,
+				   struct kvm_cpu_trap *utrap)
 {
 	int ret = 0;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
@@ -88,8 +88,8 @@ static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	case SBI_EXT_HSM_HART_STATUS:
 		ret = kvm_sbi_hsm_vcpu_get_status(vcpu);
 		if (ret >= 0) {
-			*out_val = ret;
-			ret = 0;
+			edata->out_val = ret;
+			edata->err_val = 0;
 		}
 		break;
 	case SBI_EXT_HSM_HART_SUSPEND:
@@ -108,7 +108,9 @@ static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		ret = -EOPNOTSUPP;
 	}
 
-	return ret;
+	edata->err_val = ret;
+
+	return 0;
 }
 
 const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm = {
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index 03a0198..d029136 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -14,15 +14,17 @@
 #include
 
 static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
-				    unsigned long *out_val,
-				    struct kvm_cpu_trap *utrap, bool *exit)
+				    struct kvm_vcpu_sbi_ext_data *edata,
+				    struct kvm_cpu_trap *utrap)
 {
 	int ret = 0;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	u64 next_cycle;
 
-	if (cp->a6 != SBI_EXT_TIME_SET_TIMER)
-		return -EINVAL;
+	if (cp->a6 != SBI_EXT_TIME_SET_TIMER) {
+		edata->err_val = SBI_ERR_INVALID_PARAM;
+		return 0;
+	}
 
 #if __riscv_xlen == 32
 	next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0;
@@ -41,8 +43,8 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time = {
 };
 
 static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
-				   unsigned long *out_val,
-				   struct kvm_cpu_trap *utrap, bool *exit)
+				   struct kvm_vcpu_sbi_ext_data *edata,
+				   struct kvm_cpu_trap *utrap)
 {
 	int ret = 0;
 	unsigned long i;
@@ -51,8 +53,10 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	unsigned long hmask = cp->a0;
 	unsigned long hbase = cp->a1;
 
-	if (cp->a6 != SBI_EXT_IPI_SEND_IPI)
-		return -EINVAL;
+	if (cp->a6 != SBI_EXT_IPI_SEND_IPI) {
+		edata->err_val = SBI_ERR_INVALID_PARAM;
+		return 0;
+	}
 
 	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
 		if (hbase != -1UL) {
@@ -76,10 +80,9 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_ipi = {
 };
 
 static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
-				      unsigned long *out_val,
-				      struct kvm_cpu_trap *utrap, bool *exit)
+				      struct kvm_vcpu_sbi_ext_data *edata,
+				      struct kvm_cpu_trap *utrap)
 {
-	int ret = 0;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	unsigned long hmask = cp->a0;
 	unsigned long hbase = cp->a1;
@@ -116,10 +119,10 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 		 */
 		break;
 	default:
-		ret = -EOPNOTSUPP;
+		edata->err_val = SBI_ERR_NOT_SUPPORTED;
 	}
 
-	return ret;
+	return 0;
 }
 
 const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence = {
@@ -130,14 +133,13 @@ const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence = {
 
 static int kvm_sbi_ext_srst_handler(struct kvm_vcpu *vcpu,
 				    struct kvm_run *run,
-				    unsigned long *out_val,
-				    struct kvm_cpu_trap *utrap, bool *exit)
+				    struct kvm_vcpu_sbi_ext_data *edata,
+				    struct kvm_cpu_trap *utrap)
 {
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	unsigned long funcid = cp->a6;
 	u32 reason = cp->a1;
 	u32 type = cp->a0;
-	int ret = 0;
 
 	switch (funcid) {
 	case SBI_EXT_SRST_RESET:
@@ -146,24 +148,24 @@ static int kvm_sbi_ext_srst_handler(struct kvm_vcpu *vcpu,
 			kvm_riscv_vcpu_sbi_system_reset(vcpu, run,
							KVM_SYSTEM_EVENT_SHUTDOWN,
							reason);
-			*exit = true;
+			edata->uexit = true;
 			break;
 		case SBI_SRST_RESET_TYPE_COLD_REBOOT:
 		case SBI_SRST_RESET_TYPE_WARM_REBOOT:
 			kvm_riscv_vcpu_sbi_system_reset(vcpu, run,
							KVM_SYSTEM_EVENT_RESET,
							reason);
-			*exit = true;
+			edata->uexit = true;
 			break;
 		default:
-			ret = -EOPNOTSUPP;
+			edata->err_val = SBI_ERR_NOT_SUPPORTED;
 		}
 		break;
 	default:
-		ret = -EOPNOTSUPP;
+		edata->err_val = SBI_ERR_NOT_SUPPORTED;
 	}
 
-	return ret;
+	return 0;
 }
 
 const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_srst = {
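To summarize the new convention, a hedged sketch of a handler written
against it (the foo names are illustrative): a negative Linux return
reports a host-side emulation problem to userspace, while edata->err_val
carries the SBI status the guest sees in a0:

static int kvm_sbi_ext_foo_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
				   struct kvm_vcpu_sbi_ext_data *edata,
				   struct kvm_cpu_trap *utrap)
{
	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;

	if (cp->a6 != 0) {
		/* Guest-visible failure; the emulation itself succeeded */
		edata->err_val = SBI_ERR_NOT_SUPPORTED;
		return 0;
	}

	edata->out_val = 0;	/* returned to the guest in a1 */
	return 0;		/* negative only for host-side problems */
}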
From patchwork Thu Dec 15 17:00:40 2022
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 05/11] RISC-V: KVM: Improve privilege mode filtering for
 perf
Date: Thu, 15 Dec 2022 09:00:40 -0800
Message-Id: <20221215170046.2010255-6-atishp@rivosinc.com>
In-Reply-To: <20221215170046.2010255-1-atishp@rivosinc.com>
References: <20221215170046.2010255-1-atishp@rivosinc.com>

Currently, the host driver doesn't have any method to identify if the
requested perf event is from KVM or bare metal. As KVM runs in HS mode,
there is no separate hypervisor privilege mode to distinguish between
the attributes for guest/host.

Improve the privilege mode filtering by using the event-specific config1
field.

Reviewed-by: Andrew Jones
Signed-off-by: Atish Patra
---
 drivers/perf/riscv_pmu_sbi.c   | 27 ++++++++++++++++++++++-----
 include/linux/perf/riscv_pmu.h |  2 ++
 2 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
index 65d4aa4..df795b7 100644
--- a/drivers/perf/riscv_pmu_sbi.c
+++ b/drivers/perf/riscv_pmu_sbi.c
@@ -298,6 +298,27 @@ int riscv_pmu_get_hpm_info(u32 *hw_ctr_width, u32 *num_hw_ctr)
 }
 EXPORT_SYMBOL(riscv_pmu_get_hpm_info);
 
+static unsigned long pmu_sbi_get_filter_flags(struct perf_event *event)
+{
+	unsigned long cflags = 0;
+	bool guest_events = false;
+
+	if (event->attr.config1 & RISCV_KVM_PMU_CONFIG1_GUEST_EVENTS)
+		guest_events = true;
+	if (event->attr.exclude_kernel)
+		cflags |= guest_events ? SBI_PMU_CFG_FLAG_SET_VSINH : SBI_PMU_CFG_FLAG_SET_SINH;
+	if (event->attr.exclude_user)
+		cflags |= guest_events ? SBI_PMU_CFG_FLAG_SET_VUINH : SBI_PMU_CFG_FLAG_SET_UINH;
+	if (guest_events && event->attr.exclude_hv)
+		cflags |= SBI_PMU_CFG_FLAG_SET_SINH;
+	if (event->attr.exclude_host)
+		cflags |= SBI_PMU_CFG_FLAG_SET_UINH | SBI_PMU_CFG_FLAG_SET_SINH;
+	if (event->attr.exclude_guest)
+		cflags |= SBI_PMU_CFG_FLAG_SET_VSINH | SBI_PMU_CFG_FLAG_SET_VUINH;
+
+	return cflags;
+}
+
 static int pmu_sbi_ctr_get_idx(struct perf_event *event)
 {
 	struct hw_perf_event *hwc = &event->hw;
@@ -308,11 +329,7 @@ static int pmu_sbi_ctr_get_idx(struct perf_event *event)
 	uint64_t cbase = 0;
 	unsigned long cflags = 0;
 
-	if (event->attr.exclude_kernel)
-		cflags |= SBI_PMU_CFG_FLAG_SET_SINH;
-	if (event->attr.exclude_user)
-		cflags |= SBI_PMU_CFG_FLAG_SET_UINH;
-
+	cflags = pmu_sbi_get_filter_flags(event);
 	/* retrieve the available counter index */
 #if defined(CONFIG_32BIT)
 	ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_CFG_MATCH, cbase,
diff --git a/include/linux/perf/riscv_pmu.h b/include/linux/perf/riscv_pmu.h
index a1c3f77..1c42146 100644
--- a/include/linux/perf/riscv_pmu.h
+++ b/include/linux/perf/riscv_pmu.h
@@ -26,6 +26,8 @@
 
 #define RISCV_PMU_STOP_FLAG_RESET 1
 
+#define RISCV_KVM_PMU_CONFIG1_GUEST_EVENTS 0x1
+
 struct cpu_hw_events {
 	/* currently enabled events */
 	int n_events;
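A sketch, under the assumption that the KVM side creates its host perf
events with perf_event_create_kernel_counter(), of how the new flag would
be used; example_guest_cycle_event() is an illustrative name:

static struct perf_event *example_guest_cycle_event(void)
{
	struct perf_event_attr attr = {
		.type = PERF_TYPE_HARDWARE,
		.size = sizeof(attr),
		.config = PERF_COUNT_HW_CPU_CYCLES,
		/* Tell the SBI PMU driver this event counts guest context */
		.config1 = RISCV_KVM_PMU_CONFIG1_GUEST_EVENTS,
		.exclude_host = 1,
	};

	return perf_event_create_kernel_counter(&attr, -1, current, NULL, NULL);
}

With this attribute, pmu_sbi_get_filter_flags() above turns .exclude_host
into SBI_PMU_CFG_FLAG_SET_UINH | SBI_PMU_CFG_FLAG_SET_SINH, so the counter
increments only while the hart runs in guest (VS/VU) context.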
em/jq6l2iuNU2/7xKqsZpyFRH+TgjRT42USYf7GtFikbcAqNYiqeq8UVUd5tT+ATGTBkbAVJF0oT7 SNj70BYkdpojx6Y3yG5xnYl8ATsP6m2hrFK1clKjAuj+ciLrPJO6jtW614dsnht6lVsMgh0Q/T0ZO QLM2qewVSNEk9POPT1tg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p5rc0-00ARf7-NF; Thu, 15 Dec 2022 17:01:36 +0000 Received: from mail-pj1-x1035.google.com ([2607:f8b0:4864:20::1035]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p5rbj-00ARU5-7p for linux-riscv@lists.infradead.org; Thu, 15 Dec 2022 17:01:23 +0000 Received: by mail-pj1-x1035.google.com with SMTP id t11-20020a17090a024b00b0021932afece4so3327477pje.5 for ; Thu, 15 Dec 2022 09:01:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=dhs+JrFmy876qsXRL5CBd5BerbPW3etQeVs8SpsZcQw=; b=U6oNR8T4Jsx3E3L822kTsxHd5TEGEsxBn3CCUI2SeekPDUkSTGJ3fOb7abTtiH2h4F LFLmL/RlSDx5Gy7NL57xvVYxQFVz3At/fTJ5lhTAnbcZtYlPv3YqbA8/oF0LW4z1rrLd 2NzR+yITnwiEjMORqQ9wRHog+vOR75SHj03EJOOROKOWaE3zKorgx1eElPSSsd7BZpML t3+2flYfKJ2PRenAgW4s3fOk2QnyIDjhXZ8B7xxbGaydBqH/Z9PJlSz91Pj30V5kediS avY61zS5vHm0YooUmCNwe++UdZaFGHKwFMLtTH+sp3nepkSe07opJjxQVAJJko1VQZXp Vjig== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=dhs+JrFmy876qsXRL5CBd5BerbPW3etQeVs8SpsZcQw=; b=WaW2WFYMeyD97xQrQPODSfuBQdsnQKQKkC9d+WUj/0WS5dNROhfM8fHyY3Vm96/2cI 8xA+O+h1hODHLsespAZ2kRQaaN6KBRmXGw+0U6YwJqv0Zy6zp0hPZxnyrDXiHII5Hhq8 Kr2t6T+QIee/b/EbL9db5RxBZE3fP0Nybbig7MyqQCZGCiQNcWn5EfUklc1SQ8xFyawB bKsbSWVtgs/AWv5/17UUoJyXcwFOFRrXUYav9t056Hkm8Hrhh8uaQO+FyN6FlNzVqzAV nV9ilPiZXBSPKPDwgGCnNZezZ8GFdUzeQRFZuWyPeFn0uKvz4l9/laesWD5sRpIGVjfJ tpWw== X-Gm-Message-State: ANoB5pkj+dH/XjWauktTuLpOoiqebuRchIZyE2MQpbDC5AfihyRyxTOi +pLuX4vQT1YkfocCcG2pG9RpHw== X-Google-Smtp-Source: AA0mqf5DHC3LSXbUYgba/exs/9GOB/5X0REqYDp696TBPWA4irASk74J17kx3gV/3kuJxQkkUN+9Gw== X-Received: by 2002:a17:902:8b81:b0:185:441e:2d78 with SMTP id ay1-20020a1709028b8100b00185441e2d78mr27030692plb.15.1671123678696; Thu, 15 Dec 2022 09:01:18 -0800 (PST) Received: from atishp.ba.rivosinc.com ([66.220.2.162]) by smtp.gmail.com with ESMTPSA id p10-20020a170902780a00b001897bfc9800sm4067449pll.53.2022.12.15.09.01.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 15 Dec 2022 09:01:18 -0800 (PST) From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra , Anup Patel , Andrew Jones , Atish Patra , Guo Ren , kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland , Palmer Dabbelt , Paul Walmsley , Sergey Matyukevich , Eric Lin , Will Deacon Subject: [PATCH v2 06/11] RISC-V: KVM: Add skeleton support for perf Date: Thu, 15 Dec 2022 09:00:41 -0800 Message-Id: <20221215170046.2010255-7-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221215170046.2010255-1-atishp@rivosinc.com> References: <20221215170046.2010255-1-atishp@rivosinc.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221215_090119_357578_8DA9E5E4 X-CRM114-Status: GOOD ( 28.29 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 
Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org This patch only adds barebore structure of perf implementation. Most of the function returns zero at this point and will be implemented fully in the future. Signed-off-by: Atish Patra --- arch/riscv/include/asm/kvm_host.h | 3 + arch/riscv/include/asm/kvm_vcpu_pmu.h | 76 ++++++++++++++ arch/riscv/kvm/Makefile | 1 + arch/riscv/kvm/vcpu.c | 5 + arch/riscv/kvm/vcpu_insn.c | 2 +- arch/riscv/kvm/vcpu_pmu.c | 142 ++++++++++++++++++++++++++ 6 files changed, 228 insertions(+), 1 deletion(-) create mode 100644 arch/riscv/include/asm/kvm_vcpu_pmu.h create mode 100644 arch/riscv/kvm/vcpu_pmu.c diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 93f43a3..f9874b4 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -18,6 +18,7 @@ #include #include #include +#include #define KVM_MAX_VCPUS 1024 @@ -228,6 +229,8 @@ struct kvm_vcpu_arch { /* Don't run the VCPU (blocked) */ bool pause; + + struct kvm_pmu pmu; }; static inline void kvm_arch_hardware_unsetup(void) {} diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h new file mode 100644 index 0000000..6a8c0f7 --- /dev/null +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022 Rivos Inc + * + * Authors: + * Atish Patra + */ + +#ifndef __KVM_VCPU_RISCV_PMU_H +#define __KVM_VCPU_RISCV_PMU_H + +#include +#include +#include + +#ifdef CONFIG_RISCV_PMU_SBI +#define RISCV_KVM_MAX_FW_CTRS 32 +#define RISCV_MAX_COUNTERS 64 + +/* Per virtual pmu counter data */ +struct kvm_pmc { + u8 idx; + struct perf_event *perf_event; + uint64_t counter_val; + union sbi_pmu_ctr_info cinfo; + /* Event monitoring status */ + bool started; +}; + +/* PMU data structure per vcpu */ +struct kvm_pmu { + struct kvm_pmc pmc[RISCV_MAX_COUNTERS]; + /* Number of the virtual firmware counters available */ + int num_fw_ctrs; + /* Number of the virtual hardware counters available */ + int num_hw_ctrs; + /* A flag to indicate that pmu initialization is done */ + bool init_done; + /* Bit map of all the virtual counter used */ + DECLARE_BITMAP(pmc_in_use, RISCV_MAX_COUNTERS); +}; + +#define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu) +#define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu)) + +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata); +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, + struct kvm_vcpu_sbi_ext_data *edata); +int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, + unsigned long ctr_mask, unsigned long flag, uint64_t ival, + struct kvm_vcpu_sbi_ext_data *edata); +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, + unsigned long ctr_mask, unsigned long flag, + struct kvm_vcpu_sbi_ext_data *edata); +int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base, + unsigned long ctr_mask, unsigned long flag, + unsigned long eidx, uint64_t edata, + struct kvm_vcpu_sbi_ext_data *extdata); +int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, + struct kvm_vcpu_sbi_ext_data *edata); +int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu); +void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu); +void 
kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu); + +#else +struct kvm_pmu { +}; + +static inline int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) +{ + return 0; +} +static inline void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) {} +static inline void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) {} +#endif +#endif diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 019df920..5de1053 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -25,3 +25,4 @@ kvm-y += vcpu_sbi_base.o kvm-y += vcpu_sbi_replace.o kvm-y += vcpu_sbi_hsm.o kvm-y += vcpu_timer.o +kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 7c08567..b746f21 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -137,6 +137,7 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu) WRITE_ONCE(vcpu->arch.irqs_pending, 0); WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0); + kvm_riscv_vcpu_pmu_reset(vcpu); vcpu->arch.hfence_head = 0; vcpu->arch.hfence_tail = 0; @@ -194,6 +195,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) /* Setup VCPU timer */ kvm_riscv_vcpu_timer_init(vcpu); + /* setup performance monitoring */ + kvm_riscv_vcpu_pmu_init(vcpu); + /* Reset VCPU */ kvm_riscv_reset_vcpu(vcpu); @@ -216,6 +220,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) /* Cleanup VCPU timer */ kvm_riscv_vcpu_timer_deinit(vcpu); + kvm_riscv_vcpu_pmu_deinit(vcpu); /* Free unused pages pre-allocated for G-stage page table mappings */ kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); } diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c index 0bb5276..1ff2649 100644 --- a/arch/riscv/kvm/vcpu_insn.c +++ b/arch/riscv/kvm/vcpu_insn.c @@ -213,7 +213,7 @@ struct csr_func { unsigned long wr_mask); }; -static const struct csr_func csr_funcs[] = { }; +static const struct csr_func csr_funcs[] = {}; /** * kvm_riscv_vcpu_csr_return -- Handle CSR read/write after user space diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c new file mode 100644 index 0000000..0f0748f1 --- /dev/null +++ b/arch/riscv/kvm/vcpu_pmu.c @@ -0,0 +1,142 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2022 Rivos Inc + * + * Authors: + * Atish Patra + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs) + +int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + + edata->out_val = kvpmu->num_fw_ctrs + kvpmu->num_hw_ctrs; + + return 0; +} + +int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, + struct kvm_vcpu_sbi_ext_data *edata) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + + if ((cidx > RISCV_MAX_COUNTERS) || (cidx == 1)) { + edata->err_val = SBI_ERR_INVALID_PARAM; + return 0; + } + + edata->out_val = kvpmu->pmc[cidx].cinfo.value; + + return 0; +} + +int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, + unsigned long ctr_mask, unsigned long flag, uint64_t ival, + struct kvm_vcpu_sbi_ext_data *edata) +{ + /* TODO */ + return 0; +} + +int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, + unsigned long ctr_mask, unsigned long flag, + struct kvm_vcpu_sbi_ext_data *edata) +{ + /* TODO */ + return 0; +} + +int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base, + unsigned long ctr_mask, unsigned long flag, + unsigned long eidx, 
uint64_t edata,
+				     struct kvm_vcpu_sbi_ext_data *extdata)
+{
+	/* TODO */
+	return 0;
+}
+
+int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
+				struct kvm_vcpu_sbi_ext_data *edata)
+{
+	/* TODO */
+	return 0;
+}
+
+int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
+{
+	int i = 0, num_fw_ctrs, ret, num_hw_ctrs = 0, hpm_width = 0;
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+
+	ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs);
+	if (ret < 0)
+		return ret;
+
+	if (!hpm_width || !num_hw_ctrs) {
+		pr_err("Cannot initialize PMU for vcpu with NULL hpmcounter width/count\n");
+		return -EINVAL;
+	}
+
+	if ((num_hw_ctrs + RISCV_KVM_MAX_FW_CTRS) > RISCV_MAX_COUNTERS)
+		num_fw_ctrs = RISCV_MAX_COUNTERS - num_hw_ctrs;
+	else
+		num_fw_ctrs = RISCV_KVM_MAX_FW_CTRS;
+
+	kvpmu->num_hw_ctrs = num_hw_ctrs;
+	kvpmu->num_fw_ctrs = num_fw_ctrs;
+	/*
+	 * There is no correlation between the logical hardware counters and virtual counters.
+	 * However, we need to encode a hpmcounter CSR in the counter info field so that
+	 * KVM can trap and emulate the read. This works well in the migration use case as
+	 * KVM doesn't care if the actual hpmcounter is available in the hardware or not.
+	 */
+	for (i = 0; i < kvm_pmu_num_counters(kvpmu); i++) {
+		/* TIME CSR shouldn't be read from perf interface */
+		if (i == 1)
+			continue;
+		kvpmu->pmc[i].idx = i;
+		if (i < kvpmu->num_hw_ctrs) {
+			kvpmu->pmc[i].cinfo.type = SBI_PMU_CTR_TYPE_HW;
+			if (i < 3)
+				/* CY, IR counters */
+				kvpmu->pmc[i].cinfo.width = 63;
+			else
+				kvpmu->pmc[i].cinfo.width = hpm_width;
+			/*
+			 * The CSR number doesn't have any relation with the logical
+			 * hardware counters. The CSR numbers are encoded sequentially
+			 * to avoid maintaining a map between the virtual counter
+			 * and CSR number.
+			 */
+			kvpmu->pmc[i].cinfo.csr = CSR_CYCLE + i;
+		} else {
+			kvpmu->pmc[i].cinfo.type = SBI_PMU_CTR_TYPE_FW;
+			kvpmu->pmc[i].cinfo.width = BITS_PER_LONG - 1;
+		}
+	}
+
+	kvpmu->init_done = true;
+
+	return 0;
+}
+
+void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu)
+{
+	/* TODO */
+}
+
+void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu)
+{
+	/* TODO */
+}

From patchwork Thu Dec 15 17:00:42 2022
From: Atish Patra
Subject: [PATCH v2 07/11] RISC-V: KVM: Add SBI PMU extension support
Date: Thu, 15 Dec 2022 09:00:42 -0800
Message-Id: <20221215170046.2010255-8-atishp@rivosinc.com>

The SBI PMU extension allows KVM guests to configure/start/stop/query the PMU counters in a virtualized environment as well. In order to allow that, KVM implements the entire SBI PMU extension.
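For orientation, a guest reaches the kvm_sbi_ext_pmu_handler() added below by executing an ecall with the extension ID in a7 and the function ID in a6, per the SBI calling convention. A minimal guest-side sketch, not part of the patch (the 0x504D55 extension ID is the SBI spec's "PMU" encoding, and function ID 0 for SBI_EXT_PMU_NUM_COUNTERS is assumed from the spec):

	struct sbiret {
		long error;
		long value;
	};

	static struct sbiret sbi_pmu_num_counters(void)
	{
		register unsigned long a0 asm("a0");
		register unsigned long a1 asm("a1");
		register unsigned long a6 asm("a6") = 0;        /* SBI_EXT_PMU_NUM_COUNTERS */
		register unsigned long a7 asm("a7") = 0x504D55; /* SBI_EXT_PMU ("PMU") */

		/* Traps to HS-mode; KVM dispatches it to kvm_sbi_ext_pmu_handler() */
		asm volatile("ecall"
			     : "=r" (a0), "=r" (a1)
			     : "r" (a6), "r" (a7)
			     : "memory");

		/* Error code comes back in a0, the counter count in a1 */
		return (struct sbiret){ .error = a0, .value = a1 };
	}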
Signed-off-by: Atish Patra --- arch/riscv/kvm/Makefile | 2 +- arch/riscv/kvm/vcpu_sbi.c | 11 +++++ arch/riscv/kvm/vcpu_sbi_pmu.c | 86 +++++++++++++++++++++++++++++++++++ 3 files changed, 98 insertions(+), 1 deletion(-) create mode 100644 arch/riscv/kvm/vcpu_sbi_pmu.c diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 5de1053..278e97c 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -25,4 +25,4 @@ kvm-y += vcpu_sbi_base.o kvm-y += vcpu_sbi_replace.o kvm-y += vcpu_sbi_hsm.o kvm-y += vcpu_timer.o -kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o +kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o vcpu_sbi_pmu.o diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index 50c5472..3b8b84e8 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -20,6 +20,16 @@ static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = { }; #endif +#ifdef CONFIG_RISCV_PMU_SBI +extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu; +#else +static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu = { + .extid_start = -1UL, + .extid_end = -1UL, + .handler = NULL, +}; +#endif + static const struct kvm_vcpu_sbi_extension *sbi_ext[] = { &vcpu_sbi_ext_v01, &vcpu_sbi_ext_base, @@ -28,6 +38,7 @@ static const struct kvm_vcpu_sbi_extension *sbi_ext[] = { &vcpu_sbi_ext_rfence, &vcpu_sbi_ext_srst, &vcpu_sbi_ext_hsm, + &vcpu_sbi_ext_pmu, &vcpu_sbi_ext_experimental, &vcpu_sbi_ext_vendor, }; diff --git a/arch/riscv/kvm/vcpu_sbi_pmu.c b/arch/riscv/kvm/vcpu_sbi_pmu.c new file mode 100644 index 0000000..223752f --- /dev/null +++ b/arch/riscv/kvm/vcpu_sbi_pmu.c @@ -0,0 +1,86 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2022 Rivos Inc + * + * Authors: + * Atish Patra + */ + +#include +#include +#include +#include +#include +#include + +static int kvm_sbi_ext_pmu_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, + struct kvm_vcpu_sbi_ext_data *edata, + struct kvm_cpu_trap *utrap) +{ + int ret = 0; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + unsigned long funcid = cp->a6; + uint64_t temp; + + /* Return not supported if PMU is not initialized */ + if (!kvpmu->init_done) + return -EINVAL; + + switch (funcid) { + case SBI_EXT_PMU_NUM_COUNTERS: + ret = kvm_riscv_vcpu_pmu_num_ctrs(vcpu, edata); + break; + case SBI_EXT_PMU_COUNTER_GET_INFO: + ret = kvm_riscv_vcpu_pmu_ctr_info(vcpu, cp->a0, edata); + break; + case SBI_EXT_PMU_COUNTER_CFG_MATCH: +#if defined(CONFIG_32BIT) + temp = ((uint64_t)cp->a5 << 32) | cp->a4; +#else + temp = cp->a4; +#endif + ret = kvm_riscv_vcpu_pmu_ctr_cfg_match(vcpu, cp->a0, cp->a1, + cp->a2, cp->a3, temp, edata); + break; + case SBI_EXT_PMU_COUNTER_START: +#if defined(CONFIG_32BIT) + temp = ((uint64_t)cp->a4 << 32) | cp->a3; +#else + temp = cp->a3; +#endif + ret = kvm_riscv_vcpu_pmu_ctr_start(vcpu, cp->a0, cp->a1, cp->a2, + temp, edata); + break; + case SBI_EXT_PMU_COUNTER_STOP: + ret = kvm_riscv_vcpu_pmu_ctr_stop(vcpu, cp->a0, cp->a1, cp->a2, edata); + break; + case SBI_EXT_PMU_COUNTER_FW_READ: + ret = kvm_riscv_vcpu_pmu_ctr_read(vcpu, cp->a0, edata); + break; + default: + edata->err_val = SBI_ERR_NOT_SUPPORTED; + } + + + return ret; +} + +unsigned long kvm_sbi_ext_pmu_probe(struct kvm_vcpu *vcpu, unsigned long extid) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + + /* + * PMU Extension is only available to guests if privilege mode filtering + * is available. Otherwise, guest will always count events while the + * execution is in hypervisor mode. 
+	 */
+	return kvpmu->init_done && riscv_isa_extension_available(NULL, SSCOFPMF);
+}
+
+const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu = {
+	.extid_start = SBI_EXT_PMU,
+	.extid_end = SBI_EXT_PMU,
+	.handler = kvm_sbi_ext_pmu_handler,
+	.probe = kvm_sbi_ext_pmu_probe,
+};

From patchwork Thu Dec 15 17:00:43 2022
From: Atish Patra
Subject: [PATCH v2 08/11] RISC-V: KVM: Disable all hpmcounter access for VS/VU mode
Date: Thu, 15 Dec 2022 09:00:43 -0800
Message-Id: <20221215170046.2010255-9-atishp@rivosinc.com>

Any guest must not get access to any hpmcounter, including cycle/instret, without checks. We achieve that by disabling all the bits except the TM bit in hcounteren. However, instret and cycle access for guest userspace can be enabled upon explicit request (via ONE REG) or on the first trap from VU mode, to maintain the ABI requirement in the future. This patch doesn't support that, as the ONE REG interface is not settled yet.
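As a reminder of why 0x02 is the right mask here: per the RISC-V privileged specification, the counteren CSRs dedicate bit 0 to CY (cycle), bit 1 to TM (time), bit 2 to IR (instret), and bits 3-31 to hpmcounter3-31. The named constants below are illustrative, not from the patch:

	/* hcounteren bit layout per the RISC-V privileged spec */
	#define HCOUNTEREN_CY	(1UL << 0)	/* cycle */
	#define HCOUNTEREN_TM	(1UL << 1)	/* time */
	#define HCOUNTEREN_IR	(1UL << 2)	/* instret */
	/* bits 3..31 gate hpmcounter3..hpmcounter31 */

	/* Equivalent to csr_write(CSR_HCOUNTEREN, 0x02): only time reads
	 * from VS/VU mode stay fast; every other counter access traps to
	 * HS mode, where KVM can emulate it. */
	csr_write(CSR_HCOUNTEREN, HCOUNTEREN_TM);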
Signed-off-by: Atish Patra
Reviewed-by: Andrew Jones
---
 arch/riscv/kvm/main.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 58c5489..9c2efd3 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -49,7 +49,8 @@ int kvm_arch_hardware_enable(void)
 	hideleg |= (1UL << IRQ_VS_EXT);
 	csr_write(CSR_HIDELEG, hideleg);

-	csr_write(CSR_HCOUNTEREN, -1UL);
+	/* VS should access only the TM bit. Everything else should trap */
+	csr_write(CSR_HCOUNTEREN, 0x02);

 	csr_write(CSR_HVIP, 0);

From patchwork Thu Dec 15 17:00:44 2022
From: Atish Patra
Subject: [PATCH v2 09/11] RISC-V: KVM: Implement trap & emulate for hpmcounters
Date: Thu, 15 Dec 2022 09:00:44 -0800
Message-Id: <20221215170046.2010255-10-atishp@rivosinc.com>

As the KVM guests only see the virtual PMU counters, all hpmcounter access should trap, and KVM emulates the read access on behalf of guests.
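Concretely, with hcounteren.CY now clear, even an innocuous cycle read in the guest becomes a trap that lands in the csr_funcs table extended below. A hypothetical guest-side read, for illustration only:

	/* This read of the cycle CSR (0xC00) is no longer satisfied by
	 * hardware: it raises a virtual instruction exception in VS mode,
	 * KVM decodes the CSR number, kvm_riscv_vcpu_pmu_read_hpm() maps
	 * it to a virtual counter via cidx = csr_num - CSR_CYCLE, and the
	 * emulated value lands in the destination register before the
	 * guest resumes. */
	static inline unsigned long guest_read_cycle(void)
	{
		unsigned long cycles;

		asm volatile("rdcycle %0" : "=r" (cycles));
		return cycles;
	}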
Signed-off-by: Atish Patra Reviewed-by: Andrew Jones --- arch/riscv/include/asm/kvm_vcpu_pmu.h | 16 ++++++++++ arch/riscv/kvm/vcpu_insn.c | 4 ++- arch/riscv/kvm/vcpu_pmu.c | 44 ++++++++++++++++++++++++++- 3 files changed, 62 insertions(+), 2 deletions(-) diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h index 6a8c0f7..7a9a8e6 100644 --- a/arch/riscv/include/asm/kvm_vcpu_pmu.h +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h @@ -43,6 +43,19 @@ struct kvm_pmu { #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu) #define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu)) +#if defined(CONFIG_32BIT) +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \ +{ .base = CSR_CYCLEH, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, \ +{ .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, +#else +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \ +{ .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, +#endif + +int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num, + unsigned long *val, unsigned long new_val, + unsigned long wr_mask); + int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata); int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx, struct kvm_vcpu_sbi_ext_data *edata); @@ -65,6 +78,9 @@ void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu); #else struct kvm_pmu { }; +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \ +{ .base = 0, .count = 0, .func = NULL }, + static inline int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) { diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c index 1ff2649..f689337 100644 --- a/arch/riscv/kvm/vcpu_insn.c +++ b/arch/riscv/kvm/vcpu_insn.c @@ -213,7 +213,9 @@ struct csr_func { unsigned long wr_mask); }; -static const struct csr_func csr_funcs[] = {}; +static const struct csr_func csr_funcs[] = { + KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS +}; /** * kvm_riscv_vcpu_csr_return -- Handle CSR read/write after user space diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c index 0f0748f1..53c4163 100644 --- a/arch/riscv/kvm/vcpu_pmu.c +++ b/arch/riscv/kvm/vcpu_pmu.c @@ -17,6 +17,43 @@ #define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs) +static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, + unsigned long *out_val) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + struct kvm_pmc *pmc; + u64 enabled, running; + + pmc = &kvpmu->pmc[cidx]; + if (!pmc->perf_event) + return -EINVAL; + + pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running); + *out_val = pmc->counter_val; + + return 0; +} + +int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num, + unsigned long *val, unsigned long new_val, + unsigned long wr_mask) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + int cidx, ret = KVM_INSN_CONTINUE_NEXT_SEPC; + + if (!kvpmu || !kvpmu->init_done) + return KVM_INSN_EXIT_TO_USER_SPACE; + + if (wr_mask) + return KVM_INSN_ILLEGAL_TRAP; + cidx = csr_num - CSR_CYCLE; + + if (pmu_ctr_read(vcpu, cidx, val) < 0) + return KVM_INSN_EXIT_TO_USER_SPACE; + + return ret; +} + int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata) { struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); @@ -69,7 +106,12 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, struct kvm_vcpu_sbi_ext_data *edata) { - /* TODO */ + 
int ret;
+
+	ret = pmu_ctr_read(vcpu, cidx, &edata->out_val);
+	if (ret == -EINVAL)
+		edata->err_val = SBI_ERR_INVALID_PARAM;
+
 	return 0;
 }

From patchwork Thu Dec 15 17:00:45 2022
From: Atish Patra
Subject: [PATCH v2 10/11] RISC-V: KVM: Implement perf support without sampling
Date: Thu, 15 Dec 2022 09:00:45 -0800
Message-Id: <20221215170046.2010255-11-atishp@rivosinc.com>

The RISC-V SBI PMU and Sscofpmf ISA extensions allow supporting perf in the virtualization environment as well. The KVM implementation relies on the SBI PMU extension for the most part, while trapping and emulating the CSR reads for counter access. This patch doesn't have the event sampling support yet.
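One detail worth pre-reading in the diff below is pmu_get_sample_period(): because the guest can program an arbitrary starting value, the host perf event must be set to fire after exactly the number of increments that would wrap the virtual counter, which two's-complement arithmetic gives directly. A worked example, assuming a 64-bit counter (cinfo.width == 63):

	u64 counter_val_mask = GENMASK(63, 0);		/* cinfo.width == 63 */
	u64 counter_val = 0xFFFFFFFFFFFFFF00ULL;	/* guest start value */
	u64 sample_period = (-counter_val) & counter_val_mask;

	/* sample_period == 0x100: the perf event fires after the 256
	 * increments that would wrap the virtual counter. A zero start
	 * value degenerates to the full mask rather than a zero period,
	 * matching the !pmc->counter_val special case in the code. */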
Signed-off-by: Atish Patra --- arch/riscv/kvm/vcpu_pmu.c | 358 ++++++++++++++++++++++++++++++++++++-- 1 file changed, 342 insertions(+), 16 deletions(-) diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c index 53c4163..21c1f0f 100644 --- a/arch/riscv/kvm/vcpu_pmu.c +++ b/arch/riscv/kvm/vcpu_pmu.c @@ -12,10 +12,163 @@ #include #include #include +#include #include #include #define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs) +#define get_event_type(x) ((x & SBI_PMU_EVENT_IDX_TYPE_MASK) >> 16) +#define get_event_code(x) (x & SBI_PMU_EVENT_IDX_CODE_MASK) + +static inline u64 pmu_get_sample_period(struct kvm_pmc *pmc) +{ + u64 counter_val_mask = GENMASK(pmc->cinfo.width, 0); + u64 sample_period; + + if (!pmc->counter_val) + sample_period = counter_val_mask; + else + sample_period = (-pmc->counter_val) & counter_val_mask; + + return sample_period; +} + +static u32 pmu_get_perf_event_type(unsigned long eidx) +{ + enum sbi_pmu_event_type etype = get_event_type(eidx); + u32 type; + + if (etype == SBI_PMU_EVENT_TYPE_HW) + type = PERF_TYPE_HARDWARE; + else if (etype == SBI_PMU_EVENT_TYPE_CACHE) + type = PERF_TYPE_HW_CACHE; + else if (etype == SBI_PMU_EVENT_TYPE_RAW || etype == SBI_PMU_EVENT_TYPE_FW) + type = PERF_TYPE_RAW; + else + type = PERF_TYPE_MAX; + + return type; +} + +static inline bool pmu_is_fw_event(unsigned long eidx) +{ + + return get_event_type(eidx) == SBI_PMU_EVENT_TYPE_FW; +} + +static void pmu_release_perf_event(struct kvm_pmc *pmc) +{ + if (pmc->perf_event) { + perf_event_disable(pmc->perf_event); + perf_event_release_kernel(pmc->perf_event); + pmc->perf_event = NULL; + } +} + +static u64 pmu_get_perf_event_hw_config(u32 sbi_event_code) +{ + /* SBI PMU HW event code is offset by 1 from perf hw event codes */ + return (u64)sbi_event_code - 1; +} + +static u64 pmu_get_perf_event_cache_config(u32 sbi_event_code) +{ + u64 config = U64_MAX; + unsigned int cache_type, cache_op, cache_result; + + /* All the cache event masks lie within 0xFF. 
No separate masking is necessary */
+	cache_type = (sbi_event_code & SBI_PMU_EVENT_CACHE_ID_CODE_MASK) >> 3;
+	cache_op = (sbi_event_code & SBI_PMU_EVENT_CACHE_OP_ID_CODE_MASK) >> 1;
+	cache_result = sbi_event_code & SBI_PMU_EVENT_CACHE_RESULT_ID_CODE_MASK;
+
+	if (cache_type >= PERF_COUNT_HW_CACHE_MAX ||
+	    cache_op >= PERF_COUNT_HW_CACHE_OP_MAX ||
+	    cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
+		return config;
+
+	config = cache_type | (cache_op << 8) | (cache_result << 16);
+
+	return config;
+}
+
+static u64 pmu_get_perf_event_config(unsigned long eidx, uint64_t evt_data)
+{
+	enum sbi_pmu_event_type etype = get_event_type(eidx);
+	u32 ecode = get_event_code(eidx);
+	u64 config = U64_MAX;
+
+	if (etype == SBI_PMU_EVENT_TYPE_HW)
+		config = pmu_get_perf_event_hw_config(ecode);
+	else if (etype == SBI_PMU_EVENT_TYPE_CACHE)
+		config = pmu_get_perf_event_cache_config(ecode);
+	else if (etype == SBI_PMU_EVENT_TYPE_RAW)
+		config = evt_data & RISCV_PMU_RAW_EVENT_MASK;
+	else if ((etype == SBI_PMU_EVENT_TYPE_FW) && (ecode < SBI_PMU_FW_MAX))
+		config = (1ULL << 63) | ecode;
+
+	return config;
+}
+
+static int pmu_get_fixed_pmc_index(unsigned long eidx)
+{
+	u32 etype = pmu_get_perf_event_type(eidx);
+	u32 ecode = get_event_code(eidx);
+	int ctr_idx;
+
+	if (etype != SBI_PMU_EVENT_TYPE_HW)
+		return -EINVAL;
+
+	if (ecode == SBI_PMU_HW_CPU_CYCLES)
+		ctr_idx = 0;
+	else if (ecode == SBI_PMU_HW_INSTRUCTIONS)
+		ctr_idx = 2;
+	else
+		return -EINVAL;
+
+	return ctr_idx;
+}
+
+static int pmu_get_programmable_pmc_index(struct kvm_pmu *kvpmu, unsigned long eidx,
+					  unsigned long cbase, unsigned long cmask)
+{
+	int ctr_idx = -1;
+	int i, pmc_idx;
+	int min, max;
+
+	if (pmu_is_fw_event(eidx)) {
+		/* Firmware counters are mapped 1:1 starting from num_hw_ctrs for simplicity */
+		min = kvpmu->num_hw_ctrs;
+		max = min + kvpmu->num_fw_ctrs;
+	} else {
+		/* First 3 counters are reserved for fixed counters */
+		min = 3;
+		max = kvpmu->num_hw_ctrs;
+	}
+
+	for_each_set_bit(i, &cmask, BITS_PER_LONG) {
+		pmc_idx = i + cbase;
+		if ((pmc_idx >= min && pmc_idx < max) &&
+		    !test_bit(pmc_idx, kvpmu->pmc_in_use)) {
+			ctr_idx = pmc_idx;
+			break;
+		}
+	}
+
+	return ctr_idx;
+}
+
+static int pmu_get_pmc_index(struct kvm_pmu *pmu, unsigned long eidx,
+			     unsigned long cbase, unsigned long cmask)
+{
+	int ret;
+
+	/* Fixed counters need to have a fixed mapping as they have different widths */
+	ret = pmu_get_fixed_pmc_index(eidx);
+	if (ret >= 0)
+		return ret;
+
+	return pmu_get_programmable_pmc_index(pmu, eidx, cbase, cmask);
+}

 static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
 			unsigned long *out_val)
@@ -82,7 +235,41 @@ int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base,
 				 unsigned long ctr_mask, unsigned long flag, uint64_t ival,
 				 struct kvm_vcpu_sbi_ext_data *edata)
 {
-	/* TODO */
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	int i, num_ctrs, pmc_index, sbiret = 0;
+	struct kvm_pmc *pmc;
+
+	num_ctrs = kvpmu->num_fw_ctrs + kvpmu->num_hw_ctrs;
+	if (ctr_base + __fls(ctr_mask) >= num_ctrs) {
+		sbiret = SBI_ERR_INVALID_PARAM;
+		goto out;
+	}
+
+	/* Start the counters that have been configured and requested by the guest */
+	for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) {
+		pmc_index = i + ctr_base;
+		if (!test_bit(pmc_index, kvpmu->pmc_in_use))
+			continue;
+		pmc = &kvpmu->pmc[pmc_index];
+		if (flag & SBI_PMU_START_FLAG_SET_INIT_VALUE)
+			pmc->counter_val = ival;
+		if (pmc->perf_event) {
+			if (unlikely(pmc->started)) {
+				sbiret = SBI_ERR_ALREADY_STARTED;
+				continue;
+			}
+
perf_event_period(pmc->perf_event, pmu_get_sample_period(pmc));
+			perf_event_enable(pmc->perf_event);
+			pmc->started = true;
+		} else {
+			kvm_debug("Cannot start counter due to invalid configuration\n");
+			sbiret = SBI_ERR_INVALID_PARAM;
+		}
+	}
+
+out:
+	edata->err_val = sbiret;
+
 	return 0;
 }

@@ -90,16 +277,142 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
 				unsigned long ctr_mask, unsigned long flag,
 				struct kvm_vcpu_sbi_ext_data *edata)
 {
-	/* TODO */
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	int i, num_ctrs, pmc_index, sbiret = 0;
+	u64 enabled, running;
+	struct kvm_pmc *pmc;
+
+	num_ctrs = kvpmu->num_fw_ctrs + kvpmu->num_hw_ctrs;
+	if ((ctr_base + __fls(ctr_mask)) >= num_ctrs) {
+		sbiret = SBI_ERR_INVALID_PARAM;
+		goto out;
+	}
+
+	/* Stop the counters that have been configured and requested by the guest */
+	for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) {
+		pmc_index = i + ctr_base;
+		if (!test_bit(pmc_index, kvpmu->pmc_in_use))
+			continue;
+		pmc = &kvpmu->pmc[pmc_index];
+		if (pmc->perf_event) {
+			if (pmc->started) {
+				/* Stop counting the counter */
+				perf_event_disable(pmc->perf_event);
+				pmc->started = false;
+			} else
+				sbiret = SBI_ERR_ALREADY_STOPPED;
+
+			if (flag & SBI_PMU_STOP_FLAG_RESET) {
+				/* Release the counter if this is a reset request */
+				pmc->counter_val += perf_event_read_value(pmc->perf_event,
+									  &enabled, &running);
+				pmu_release_perf_event(pmc);
+				clear_bit(pmc_index, kvpmu->pmc_in_use);
+			}
+		} else {
+			kvm_debug("Cannot stop counter due to invalid configuration\n");
+			sbiret = SBI_ERR_INVALID_PARAM;
+		}
+	}
+
+out:
+	edata->err_val = sbiret;
+
 	return 0;
 }

 int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_base,
 				     unsigned long ctr_mask, unsigned long flag,
-				     unsigned long eidx, uint64_t edata,
-				     struct kvm_vcpu_sbi_ext_data *extdata)
+				     unsigned long eidx, uint64_t evt_data,
+				     struct kvm_vcpu_sbi_ext_data *ext_data)
 {
-	/* TODO */
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	struct perf_event *event;
+	struct perf_event_attr attr;
+	int num_ctrs, ctr_idx;
+	u32 etype = pmu_get_perf_event_type(eidx);
+	u64 config;
+	struct kvm_pmc *pmc;
+	int sbiret = 0;
+
+	num_ctrs = kvpmu->num_fw_ctrs + kvpmu->num_hw_ctrs;
+	if (etype == PERF_TYPE_MAX || (ctr_base + __fls(ctr_mask) >= num_ctrs)) {
+		sbiret = SBI_ERR_INVALID_PARAM;
+		goto out;
+	}
+
+	if (pmu_is_fw_event(eidx)) {
+		sbiret = SBI_ERR_NOT_SUPPORTED;
+		goto out;
+	}
+
+	/*
+	 * The SKIP_MATCH flag indicates the caller is aware of the assigned counter
+	 * for this event. Just do a sanity check that it is already marked used.
+	 */
+	if (flag & SBI_PMU_CFG_FLAG_SKIP_MATCH) {
+		if (!test_bit(ctr_base, kvpmu->pmc_in_use)) {
+			sbiret = SBI_ERR_FAILURE;
+			goto out;
+		}
+		ctr_idx = ctr_base;
+		goto match_done;
+	}
+
+	ctr_idx = pmu_get_pmc_index(kvpmu, eidx, ctr_base, ctr_mask);
+	if (ctr_idx < 0) {
+		sbiret = SBI_ERR_NOT_SUPPORTED;
+		goto out;
+	}
+
+match_done:
+	pmc = &kvpmu->pmc[ctr_idx];
+	pmu_release_perf_event(pmc);
+	pmc->idx = ctr_idx;
+
+	config = pmu_get_perf_event_config(eidx, evt_data);
+	memset(&attr, 0, sizeof(struct perf_event_attr));
+	attr.type = etype;
+	attr.size = sizeof(attr);
+	attr.pinned = true;
+
+	/*
+	 * It should never reach here if the platform doesn't support the sscofpmf
+	 * extension, as mode filtering won't work without it.
+ */ + attr.exclude_host = true; + attr.exclude_hv = true; + attr.exclude_user = !!(flag & SBI_PMU_CFG_FLAG_SET_UINH); + attr.exclude_kernel = !!(flag & SBI_PMU_CFG_FLAG_SET_SINH); + attr.config = config; + attr.config1 = RISCV_KVM_PMU_CONFIG1_GUEST_EVENTS; + if (flag & SBI_PMU_CFG_FLAG_CLEAR_VALUE) { + //TODO: Do we really want to clear the value in hardware counter + pmc->counter_val = 0; + } + + /* + * Set the default sample_period for now. The guest specified value + * will be updated in the start call. + */ + attr.sample_period = pmu_get_sample_period(pmc); + + event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc); + if (IS_ERR(event)) { + pr_err("kvm pmu event creation failed event %pe for eidx %lx\n", event, eidx); + return -EOPNOTSUPP; + } + + set_bit(ctr_idx, kvpmu->pmc_in_use); + pmc->perf_event = event; + if (flag & SBI_PMU_CFG_FLAG_AUTO_START) + perf_event_enable(pmc->perf_event); + + ext_data->out_val = ctr_idx; +out: + ext_data->err_val = sbiret; + return 0; } @@ -119,6 +432,7 @@ int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) { int i = 0, num_fw_ctrs, ret, num_hw_ctrs = 0, hpm_width = 0; struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + struct kvm_pmc *pmc; ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs); if (ret < 0) @@ -134,6 +448,7 @@ int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) else num_fw_ctrs = RISCV_KVM_MAX_FW_CTRS; + bitmap_zero(kvpmu->pmc_in_use, RISCV_MAX_COUNTERS); kvpmu->num_hw_ctrs = num_hw_ctrs; kvpmu->num_fw_ctrs = num_fw_ctrs; /* @@ -146,24 +461,26 @@ int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) /* TIME CSR shouldn't be read from perf interface */ if (i == 1) continue; - kvpmu->pmc[i].idx = i; + pmc = &kvpmu->pmc[i]; + pmc->idx = i; + pmc->counter_val = 0; if (i < kvpmu->num_hw_ctrs) { kvpmu->pmc[i].cinfo.type = SBI_PMU_CTR_TYPE_HW; if (i < 3) /* CY, IR counters */ - kvpmu->pmc[i].cinfo.width = 63; + pmc->cinfo.width = 63; else - kvpmu->pmc[i].cinfo.width = hpm_width; + pmc->cinfo.width = hpm_width; /* * The CSR number doesn't have any relation with the logical * hardware counters. The CSR numbers are encoded sequentially * to avoid maintaining a map between the virtual counter * and CSR number. 
+	 */
-			kvpmu->pmc[i].cinfo.csr = CSR_CYCLE + i;
+			pmc->cinfo.csr = CSR_CYCLE + i;
 		} else {
-			kvpmu->pmc[i].cinfo.type = SBI_PMU_CTR_TYPE_FW;
-			kvpmu->pmc[i].cinfo.width = BITS_PER_LONG - 1;
+			pmc->cinfo.type = SBI_PMU_CTR_TYPE_FW;
+			pmc->cinfo.width = BITS_PER_LONG - 1;
 		}
 	}

@@ -172,13 +489,22 @@ int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
 	return 0;
 }

-void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu)
+void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu)
 {
-	/* TODO */
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	int i;
+
+	if (!kvpmu)
+		return;
+
+	for_each_set_bit(i, kvpmu->pmc_in_use, RISCV_MAX_COUNTERS) {
+		pmc = &kvpmu->pmc[i];
+		pmu_release_perf_event(pmc);
+	}
 }

-void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu)
+void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu)
 {
-	/* TODO */
+	kvm_riscv_vcpu_pmu_deinit(vcpu);
 }
-

From patchwork Thu Dec 15 17:00:46 2022
From: Atish Patra
Subject: [PATCH v2 11/11] RISC-V: KVM: Implement firmware events
Date: Thu, 15 Dec 2022 09:00:46 -0800
Message-Id: <20221215170046.2010255-12-atishp@rivosinc.com>

The SBI PMU extension defines a set of firmware events which can provide useful information to guests about the number of SBI calls. As the hypervisor implements the SBI PMU extension, these firmware events correspond to ecall invocations between VS->HS mode. All other firmware events will always report zero if monitored, as KVM doesn't implement them.
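For orientation, the event index (eidx) the guest passes to SBI_EXT_PMU_COUNTER_CFG_MATCH encodes the event type in bits above 16 and the event code in the lower bits, as the get_event_type()/get_event_code() helpers in the diff assume. A sketch of selecting the "IPIs sent" firmware event (the shift constant name is illustrative; SBI_PMU_EVENT_TYPE_FW is 0xf per the SBI specification):

	#define EVENT_IDX_TYPE_SHIFT	16	/* illustrative name, not from the patch */

	unsigned long eidx = (SBI_PMU_EVENT_TYPE_FW << EVENT_IDX_TYPE_SHIFT) |
			     SBI_PMU_FW_IPI_SENT;

	/* get_event_type(eidx) == SBI_PMU_EVENT_TYPE_FW selects a firmware
	 * counter, and get_event_code(eidx) == SBI_PMU_FW_IPI_SENT (6) picks
	 * the kvm_fw_event slot that kvm_sbi_ext_ipi_handler() increments
	 * through kvm_riscv_vcpu_pmu_incr_fw(). */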
Signed-off-by: Atish Patra --- arch/riscv/include/asm/kvm_vcpu_pmu.h | 16 ++++ arch/riscv/include/asm/sbi.h | 2 +- arch/riscv/kvm/tlb.c | 6 +- arch/riscv/kvm/vcpu_pmu.c | 105 ++++++++++++++++++++++---- arch/riscv/kvm/vcpu_sbi_replace.c | 7 ++ 5 files changed, 119 insertions(+), 17 deletions(-) diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h index 7a9a8e6..cccc6182 100644 --- a/arch/riscv/include/asm/kvm_vcpu_pmu.h +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h @@ -17,6 +17,14 @@ #define RISCV_KVM_MAX_FW_CTRS 32 #define RISCV_MAX_COUNTERS 64 +struct kvm_fw_event { + /* Current value of the event */ + unsigned long value; + + /* Event monitoring status */ + bool started; +}; + /* Per virtual pmu counter data */ struct kvm_pmc { u8 idx; @@ -25,11 +33,14 @@ struct kvm_pmc { union sbi_pmu_ctr_info cinfo; /* Event monitoring status */ bool started; + /* Monitoring event ID */ + unsigned long event_idx; }; /* PMU data structure per vcpu */ struct kvm_pmu { struct kvm_pmc pmc[RISCV_MAX_COUNTERS]; + struct kvm_fw_event fw_event[RISCV_KVM_MAX_FW_CTRS]; /* Number of the virtual firmware counters available */ int num_fw_ctrs; /* Number of the virtual hardware counters available */ @@ -52,6 +63,7 @@ struct kvm_pmu { { .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, #endif +int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid); int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num, unsigned long *val, unsigned long new_val, unsigned long wr_mask); @@ -81,6 +93,10 @@ struct kvm_pmu { #define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \ { .base = 0, .count = 0, .func = NULL }, +static inline int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid) +{ + return 0; +} static inline int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) { diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h index 2a0ef738..a192a95a 100644 --- a/arch/riscv/include/asm/sbi.h +++ b/arch/riscv/include/asm/sbi.h @@ -171,7 +171,7 @@ enum sbi_pmu_fw_generic_events_t { SBI_PMU_FW_IPI_SENT = 6, SBI_PMU_FW_IPI_RECVD = 7, SBI_PMU_FW_FENCE_I_SENT = 8, - SBI_PMU_FW_FENCE_I_RECVD = 9, + SBI_PMU_FW_FENCE_I_RCVD = 9, SBI_PMU_FW_SFENCE_VMA_SENT = 10, SBI_PMU_FW_SFENCE_VMA_RCVD = 11, SBI_PMU_FW_SFENCE_VMA_ASID_SENT = 12, diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c index 309d79b..de81920 100644 --- a/arch/riscv/kvm/tlb.c +++ b/arch/riscv/kvm/tlb.c @@ -181,6 +181,7 @@ void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu) void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu) { + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_RCVD); local_flush_icache_all(); } @@ -264,15 +265,18 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu) d.addr, d.size, d.order); break; case KVM_RISCV_HFENCE_VVMA_ASID_GVA: + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD); kvm_riscv_local_hfence_vvma_asid_gva( READ_ONCE(v->vmid), d.asid, d.addr, d.size, d.order); break; case KVM_RISCV_HFENCE_VVMA_ASID_ALL: + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD); kvm_riscv_local_hfence_vvma_asid_all( READ_ONCE(v->vmid), d.asid); break; case KVM_RISCV_HFENCE_VVMA_GVA: + kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD); kvm_riscv_local_hfence_vvma_gva( READ_ONCE(v->vmid), d.addr, d.size, d.order); @@ -323,7 +327,7 @@ void kvm_riscv_fence_i(struct kvm *kvm, unsigned long hbase, unsigned long hmask) { make_xfence_request(kvm, hbase, hmask, KVM_REQ_FENCE_I, - KVM_REQ_FENCE_I, NULL); 
+ KVM_REQ_FENCE_I, NULL); } void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm, diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c index 21c1f0f..a64a7ae 100644 --- a/arch/riscv/kvm/vcpu_pmu.c +++ b/arch/riscv/kvm/vcpu_pmu.c @@ -170,18 +170,36 @@ static int pmu_get_pmc_index(struct kvm_pmu *pmu, unsigned long eidx, return pmu_get_programmable_pmc_index(pmu, eidx, cbase, cmask); } +int kvm_riscv_vcpu_pmu_incr_fw(struct kvm_vcpu *vcpu, unsigned long fid) +{ + struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); + struct kvm_fw_event *fevent; + + if (!kvpmu || fid >= SBI_PMU_FW_MAX) + return -EINVAL; + + fevent = &kvpmu->fw_event[fid]; + if (fevent->started) + fevent->value++; + + return 0; +} + static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx, unsigned long *out_val) { struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); struct kvm_pmc *pmc; u64 enabled, running; + int fevent_code; pmc = &kvpmu->pmc[cidx]; - if (!pmc->perf_event) - return -EINVAL; - pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running); + if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) { + fevent_code = get_event_code(pmc->event_idx); + pmc->counter_val = kvpmu->fw_event[fevent_code].value; + } else if (pmc->perf_event) + pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running); *out_val = pmc->counter_val; return 0; @@ -238,6 +256,7 @@ int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu); int i, num_ctrs, pmc_index, sbiret = 0; struct kvm_pmc *pmc; + int fevent_code; num_ctrs = kvpmu->num_fw_ctrs + kvpmu->num_hw_ctrs; if (ctr_base + __fls(ctr_mask) >= num_ctrs) { @@ -253,7 +272,22 @@ int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base, pmc = &kvpmu->pmc[pmc_index]; if (flag & SBI_PMU_START_FLAG_SET_INIT_VALUE) pmc->counter_val = ival; - if (pmc->perf_event) { + if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) { + fevent_code = get_event_code(pmc->event_idx); + if (fevent_code >= SBI_PMU_FW_MAX) { + sbiret = SBI_ERR_INVALID_PARAM; + goto out; + } + + /* Check if the counter was already started for some reason */ + if (kvpmu->fw_event[fevent_code].started) { + sbiret = SBI_ERR_ALREADY_STARTED; + continue; + } + + kvpmu->fw_event[fevent_code].started = true; + kvpmu->fw_event[fevent_code].value = pmc->counter_val; + } else if (pmc->perf_event) { if (unlikely(pmc->started)) { sbiret = SBI_ERR_ALREADY_STARTED; continue; @@ -281,6 +315,7 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, int i, num_ctrs, pmc_index, sbiret = 0; u64 enabled, running; struct kvm_pmc *pmc; + int fevent_code; num_ctrs = kvpmu->num_fw_ctrs + kvpmu->num_hw_ctrs; if ((ctr_base + __fls(ctr_mask)) >= num_ctrs) { @@ -294,7 +329,18 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, if (!test_bit(pmc_index, kvpmu->pmc_in_use)) continue; pmc = &kvpmu->pmc[pmc_index]; - if (pmc->perf_event) { + if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) { + fevent_code = get_event_code(pmc->event_idx); + if (fevent_code >= SBI_PMU_FW_MAX) { + sbiret = SBI_ERR_INVALID_PARAM; + goto out; + } + + if (!kvpmu->fw_event[fevent_code].started) + sbiret = SBI_ERR_ALREADY_STOPPED; + + kvpmu->fw_event[fevent_code].started = false; + } else if (pmc->perf_event) { if (pmc->started) { /* Stop counting the counter */ perf_event_disable(pmc->perf_event); @@ -307,12 +353,15 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base, pmc->counter_val += 
					perf_event_read_value(pmc->perf_event, &enabled, &running);
 				pmu_release_perf_event(pmc);
-				clear_bit(pmc_index, kvpmu->pmc_in_use);
 			}
 		} else {
 			kvm_debug("Cannot stop counter due to invalid configuration\n");
 			sbiret = SBI_ERR_INVALID_PARAM;
 		}
+		if (flag & SBI_PMU_STOP_FLAG_RESET) {
+			pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID;
+			clear_bit(pmc_index, kvpmu->pmc_in_use);
+		}
 	}
 
 out:
@@ -329,12 +378,12 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
 	struct perf_event *event;
 	struct perf_event_attr attr;
-	int num_ctrs, ctr_idx;
+	int num_ctrs, ctr_idx, sbiret = 0;
 	u32 etype = pmu_get_perf_event_type(eidx);
 	u64 config;
-	struct kvm_pmc *pmc;
-	int sbiret = 0;
-
+	struct kvm_pmc *pmc = NULL;
+	bool is_fevent;
+	unsigned long event_code;
 
 	num_ctrs = kvpmu->num_fw_ctrs + kvpmu->num_hw_ctrs;
 	if (etype == PERF_TYPE_MAX || (ctr_base + __fls(ctr_mask) >= num_ctrs)) {
@@ -342,7 +391,9 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 		goto out;
 	}
 
-	if (pmu_is_fw_event(eidx)) {
+	event_code = get_event_code(eidx);
+	is_fevent = pmu_is_fw_event(eidx);
+	if (is_fevent && event_code >= SBI_PMU_FW_MAX) {
 		sbiret = SBI_ERR_NOT_SUPPORTED;
 		goto out;
 	}
@@ -357,7 +408,10 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 			goto out;
 		}
 		ctr_idx = ctr_base;
-		goto match_done;
+		if (is_fevent)
+			goto perf_event_done;
+		else
+			goto match_done;
 	}
 
 	ctr_idx = pmu_get_pmc_index(kvpmu, eidx, ctr_base, ctr_mask);
@@ -366,6 +420,13 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 		goto out;
 	}
 
+	/*
+	 * No need to create perf events for firmware events as the firmware counter
+	 * is supposed to return the measurement of VS->HS mode invocations.
+	 */
+	if (is_fevent)
+		goto perf_event_done;
+
 match_done:
 	pmc = &kvpmu->pmc[ctr_idx];
 	pmu_release_perf_event(pmc);
@@ -404,10 +465,19 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 		return -EOPNOTSUPP;
 	}
 
-	set_bit(ctr_idx, kvpmu->pmc_in_use);
 	pmc->perf_event = event;
-	if (flag & SBI_PMU_CFG_FLAG_AUTO_START)
-		perf_event_enable(pmc->perf_event);
+perf_event_done:
+	if (flag & SBI_PMU_CFG_FLAG_AUTO_START) {
+		if (is_fevent)
+			kvpmu->fw_event[event_code].started = true;
+		else
+			perf_event_enable(pmc->perf_event);
+	}
+	/* This should be only true for firmware events */
+	if (!pmc)
+		pmc = &kvpmu->pmc[ctr_idx];
+	pmc->event_idx = eidx;
+	set_bit(ctr_idx, kvpmu->pmc_in_use);
 	ext_data->out_val = ctr_idx;
 
 out:
@@ -451,6 +521,7 @@ int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
 	bitmap_zero(kvpmu->pmc_in_use, RISCV_MAX_COUNTERS);
 	kvpmu->num_hw_ctrs = num_hw_ctrs;
 	kvpmu->num_fw_ctrs = num_fw_ctrs;
+	memset(&kvpmu->fw_event, 0, SBI_PMU_FW_MAX * sizeof(struct kvm_fw_event));
 
 	/*
 	 * There is no correlation between the logical hardware counter and virtual counters.
	 * However, we need to encode a hpmcounter CSR in the counter info field so that
@@ -464,6 +535,7 @@ int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
 		pmc = &kvpmu->pmc[i];
 		pmc->idx = i;
 		pmc->counter_val = 0;
+		pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID;
 		if (i < kvpmu->num_hw_ctrs) {
 			kvpmu->pmc[i].cinfo.type = SBI_PMU_CTR_TYPE_HW;
 			if (i < 3)
@@ -501,7 +573,10 @@ void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu)
 	for_each_set_bit(i, kvpmu->pmc_in_use, RISCV_MAX_COUNTERS) {
 		pmc = &kvpmu->pmc[i];
 		pmu_release_perf_event(pmc);
+		pmc->counter_val = 0;
+		pmc->event_idx = SBI_PMU_EVENT_IDX_INVALID;
 	}
+	memset(&kvpmu->fw_event, 0, SBI_PMU_FW_MAX * sizeof(struct kvm_fw_event));
 }
 
 void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index d029136..3f39711 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -11,6 +11,7 @@
 #include <linux/kvm_host.h>
 #include <asm/sbi.h>
 #include <asm/kvm_vcpu_timer.h>
+#include <asm/kvm_vcpu_pmu.h>
 #include <asm/kvm_vcpu_sbi.h>
 
 static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
@@ -26,6 +27,7 @@ static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		return 0;
 	}
 
+	kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_SET_TIMER);
 #if __riscv_xlen == 32
 	next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0;
 #else
@@ -58,6 +60,7 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		return 0;
 	}
 
+	kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_IPI_SENT);
 	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
 		if (hbase != -1UL) {
 			if (tmp->vcpu_id < hbase)
@@ -68,6 +71,7 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		ret = kvm_riscv_vcpu_set_interrupt(tmp, IRQ_VS_SOFT);
 		if (ret < 0)
 			break;
+		kvm_riscv_vcpu_pmu_incr_fw(tmp, SBI_PMU_FW_IPI_RECVD);
 	}
 
 	return ret;
@@ -91,6 +95,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 	switch (funcid) {
 	case SBI_EXT_RFENCE_REMOTE_FENCE_I:
 		kvm_riscv_fence_i(vcpu->kvm, hbase, hmask);
+		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
 		if (cp->a2 == 0 && cp->a3 == 0)
@@ -98,6 +103,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 		else
 			kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
 						  cp->a2, cp->a3, PAGE_SHIFT);
+		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
 		if (cp->a2 == 0 && cp->a3 == 0)
@@ -108,6 +114,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 						       hbase, hmask, cp->a2,
 						       cp->a3, PAGE_SHIFT,
 						       cp->a4);
+		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
 	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID:
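
To make the end-to-end flow concrete, here is a minimal guest-side sketch
(illustrative only, not part of this patch) of how a guest could exercise
one of these virtual firmware counters directly via SBI. It assumes the
sbi_ecall() wrapper and SBI PMU definitions from the guest's
arch/riscv/include/asm/sbi.h, an arbitrarily chosen counter mask of 0x3f,
and the SBI PMU event_idx encoding in which firmware events use event
type 0xf in bits [19:16]; a real guest would normally go through its
riscv_pmu_sbi perf driver instead of raw ecalls:

	/* Count remote fence.i requests received by this guest's harts */
	#include <asm/sbi.h>

	static unsigned long count_fence_i_rcvd(void)
	{
		/* event_idx = event type (0xf for FW events) << 16 | event code */
		unsigned long eidx = (0xfUL << 16) | SBI_PMU_FW_FENCE_I_RCVD;
		struct sbiret ret;
		unsigned long cidx;

		/* Match a counter and auto-start it; KVM sets fw_event[].started */
		ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_CFG_MATCH,
				0, 0x3f, SBI_PMU_CFG_FLAG_AUTO_START, eidx, 0, 0);
		if (ret.error)
			return 0;
		cidx = ret.value;

		/* ... run a workload that triggers remote fence.i SBI calls ... */

		/* FW_READ traps to KVM, which returns fw_event[].value */
		ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_FW_READ,
				cidx, 0, 0, 0, 0, 0);
		return ret.error ? 0 : (unsigned long)ret.value;
	}

Since kvm_riscv_vcpu_pmu_incr_fw() only increments fw_event[].value while
started is set, events that fire before the COUNTER_CFG_MATCH call with
AUTO_START (or after a COUNTER_STOP) are intentionally not accounted.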