From patchwork Sun Feb 5 01:15:12 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13128940
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Andrew Jones, Albert Ou, Anup Patel, Atish Patra,
    Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org,
    kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland,
    Palmer Dabbelt, Paul Walmsley, Will Deacon
Subject: [PATCH v5 11/14] RISC-V: KVM: Implement trap & emulate for hpmcounters
Date: Sat, 4 Feb 2023 17:15:12 -0800
Message-Id: <20230205011515.1284674-12-atishp@rivosinc.com>
In-Reply-To: <20230205011515.1284674-1-atishp@rivosinc.com>
References: <20230205011515.1284674-1-atishp@rivosinc.com>

As KVM guests only see the virtual PMU counters, all hpmcounter accesses
should trap, and KVM emulates the read access on behalf of the guest.

Reviewed-by: Andrew Jones
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_vcpu_pmu.h | 16 ++++++++
 arch/riscv/kvm/vcpu_insn.c            |  4 +-
 arch/riscv/kvm/vcpu_pmu.c             | 59 ++++++++++++++++++++++++++-
 3 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
index 40905db..344a3ad 100644
--- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
+++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
@@ -48,6 +48,19 @@ struct kvm_pmu {
 #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu_context)
 #define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu_context))
 
+#if defined(CONFIG_32BIT)
+#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
+{.base = CSR_CYCLEH, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, \
+{.base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
+#else
+#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
+{.base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
+#endif
+
+int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
+				unsigned long *val, unsigned long new_val,
+				unsigned long wr_mask);
+
 int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata);
 int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
 				struct kvm_vcpu_sbi_return *retdata);
@@ -71,6 +84,9 @@ void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu);
 struct kvm_pmu {
 };
 
+#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
+{ .base = 0, .count = 0, .func = NULL },
+
 static inline void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu) {}
 static inline void kvm_riscv_vcpu_pmu_deinit(struct kvm_vcpu *vcpu) {}
 static inline void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu) {}
diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index 0bb5276..f689337 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -213,7 +213,9 @@ struct csr_func {
 		    unsigned long wr_mask);
 };
 
-static const struct csr_func csr_funcs[] = { };
+static const struct csr_func csr_funcs[] = {
+	KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS
+};
 
 /**
  * kvm_riscv_vcpu_csr_return -- Handle CSR read/write after user space
diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
index 6d09a6f..fe9db221 100644
--- a/arch/riscv/kvm/vcpu_pmu.c
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -17,6 +17,58 @@
 
 #define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs)
 
+static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
+			unsigned long *out_val)
+{
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	u64 enabled, running;
+
+	pmc = &kvpmu->pmc[cidx];
+	if (!pmc->perf_event)
+		return -EINVAL;
+
+	pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running);
+	*out_val = pmc->counter_val;
+
+	return 0;
+}
+
+int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
+				unsigned long *val, unsigned long new_val,
+				unsigned long wr_mask)
+{
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	int cidx, ret = KVM_INSN_CONTINUE_NEXT_SEPC;
+
+	if (!kvpmu || !kvpmu->init_done) {
+		/*
+		 * In the absence of sscofpmf on the platform, the guest OS may
+		 * use the legacy PMU driver to read cycle/instret. In that
+		 * case, just return 0 to avoid any illegal trap. However, any
+		 * other hpmcounter access should result in an illegal trap,
+		 * as those counters must be accessed through the SBI PMU only.
+		 */
+		if (csr_num == CSR_CYCLE || csr_num == CSR_INSTRET) {
+			*val = 0;
+			return ret;
+		} else {
+			return KVM_INSN_ILLEGAL_TRAP;
+		}
+	}
+
+	/* The counter CSRs are read-only. Thus, any write should result in an illegal trap. */
+	if (wr_mask)
+		return KVM_INSN_ILLEGAL_TRAP;
+
+	cidx = csr_num - CSR_CYCLE;
+
+	if (pmu_ctr_read(vcpu, cidx, val) < 0)
+		return KVM_INSN_ILLEGAL_TRAP;
+
+	return ret;
+}
+
 int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_return *retdata)
 {
 	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
@@ -69,7 +121,12 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
 				struct kvm_vcpu_sbi_return *retdata)
 {
-	/* TODO */
+	int ret;
+
+	ret = pmu_ctr_read(vcpu, cidx, &retdata->out_val);
+	if (ret == -EINVAL)
+		retdata->err_val = SBI_ERR_INVALID_PARAM;
+
 	return 0;
 }
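
Editor's note: the KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS entries added above
are consumed by the CSR dispatch in vcpu_insn.c, which is not shown in this
diff; it range-matches a trapped CSR number against each entry's base/count
pair. Below is a minimal standalone sketch of that matching idea, not the
kernel's actual code: emulate_hpm_read() and dispatch_csr() are hypothetical
stand-ins for kvm_riscv_vcpu_pmu_read_hpm() and the in-kernel dispatch loop.

#include <stddef.h>
#include <stdio.h>

/* Unprivileged cycle counter CSR number, per the RISC-V privileged spec. */
#define CSR_CYCLE 0xc00

struct csr_func {
	unsigned int base;  /* first CSR number the handler covers */
	unsigned int count; /* number of consecutive CSRs covered  */
	/* Simplified handler: returns 0 on success, -1 for an illegal trap. */
	int (*func)(unsigned int csr_num, unsigned long *val);
};

/* Hypothetical stand-in for kvm_riscv_vcpu_pmu_read_hpm(). */
static int emulate_hpm_read(unsigned int csr_num, unsigned long *val)
{
	/* A real handler would read the virtual counter for this CSR. */
	*val = 0;
	return 0;
}

static const struct csr_func csr_funcs[] = {
	{ .base = CSR_CYCLE, .count = 31, .func = emulate_hpm_read },
};

/* Range-match a trapped CSR number against the table, as the kernel's
 * dispatch does, and invoke the matching emulation handler. */
static int dispatch_csr(unsigned int csr_num, unsigned long *val)
{
	size_t i;

	for (i = 0; i < sizeof(csr_funcs) / sizeof(csr_funcs[0]); i++) {
		const struct csr_func *f = &csr_funcs[i];

		if (f->func && csr_num >= f->base &&
		    csr_num < f->base + f->count)
			return f->func(csr_num, val);
	}
	return -1; /* no handler: forward as an illegal-instruction trap */
}

int main(void)
{
	unsigned long val;

	if (dispatch_csr(CSR_CYCLE, &val) == 0)
		printf("emulated cycle read: %lu\n", val);
	return 0;
}

The range match is why the macro only needs a base and a count: the counter
CSRs from CSR_CYCLE upward occupy contiguous CSR numbers, so a single table
entry covers cycle, time, instret, and the hpmcounters behind it.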