From patchwork Wed Apr 3 08:04:43 2024
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13615257
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Ajay Kaher, Alexandre Ghiti, Alexey Makhalov,
 Andrew Jones, Conor Dooley, Juergen Gross, kvm-riscv@lists.infradead.org,
 kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
 Paolo Bonzini, Paul Walmsley, Shuah Khan, virtualization@lists.linux.dev,
 VMware PV-Drivers Reviewers, Will Deacon, x86@kernel.org
Subject: [PATCH v5 14/22] RISC-V: KVM: Support 64 bit firmware counters on RV32
Date: Wed, 3 Apr 2024 01:04:43 -0700
Message-Id: <20240403080452.1007601-15-atishp@rivosinc.com>
In-Reply-To: <20240403080452.1007601-1-atishp@rivosinc.com>
References: <20240403080452.1007601-1-atishp@rivosinc.com>

SBI v2.0 introduced the fw_read_hi function so that RV32-based systems can
read the upper 32 bits of 64-bit firmware counters. Add the infrastructure
to support it.
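As background, an RV32 guest pairs SBI_EXT_PMU_COUNTER_FW_READ (low 32 bits)
with SBI_EXT_PMU_COUNTER_FW_READ_HI (high 32 bits) to assemble the full 64-bit
counter value. Below is a minimal guest-side sketch, assuming the kernel's
sbi_ecall() helper and the SBI_EXT_PMU_* definitions from asm/sbi.h; the
helper name read_fw_counter64() is illustrative only and not part of this
series:

    #include <asm/sbi.h>
    #include <linux/types.h>

    /*
     * Illustrative sketch, not part of this patch: read a 64-bit firmware
     * counter from an RV32 guest by combining the low and high halves.
     */
    static int read_fw_counter64(unsigned long cidx, u64 *val)
    {
            struct sbiret lo, hi;

            /* Low 32 bits of the firmware counter */
            lo = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_FW_READ,
                           cidx, 0, 0, 0, 0, 0);
            if (lo.error)
                    return sbi_err_map_linux_errno(lo.error);

            /* High 32 bits, provided by the fw_read_hi call */
            hi = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_FW_READ_HI,
                           cidx, 0, 0, 0, 0, 0);
            if (hi.error)
                    return sbi_err_map_linux_errno(hi.error);

            *val = ((u64)hi.value << 32) | lo.value;
            return 0;
    }

Since the two reads are not atomic, a careful caller could re-read the high
word and retry if it changed between the calls; that detail is omitted here.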
Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
Reviewed-by: Andrew Jones
---
 arch/riscv/include/asm/kvm_vcpu_pmu.h |  4 ++-
 arch/riscv/kvm/vcpu_pmu.c             | 44 ++++++++++++++++++++++++++-
 arch/riscv/kvm/vcpu_sbi_pmu.c         |  6 ++++
 3 files changed, 52 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
index 257f17641e00..55861b5d3382 100644
--- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
+++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
@@ -20,7 +20,7 @@ static_assert(RISCV_KVM_MAX_COUNTERS <= 64);
 
 struct kvm_fw_event {
 	/* Current value of the event */
-	unsigned long value;
+	u64 value;
 
 	/* Event monitoring status */
 	bool started;
@@ -91,6 +91,8 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 				     struct kvm_vcpu_sbi_return *retdata);
 int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
 				struct kvm_vcpu_sbi_return *retdata);
+int kvm_riscv_vcpu_pmu_fw_ctr_read_hi(struct kvm_vcpu *vcpu, unsigned long cidx,
+				      struct kvm_vcpu_sbi_return *retdata);
 void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu);
 int kvm_riscv_vcpu_pmu_snapshot_set_shmem(struct kvm_vcpu *vcpu, unsigned long saddr_low,
 					  unsigned long saddr_high, unsigned long flags,
diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
index 9fedf9dc498b..ff326152eeff 100644
--- a/arch/riscv/kvm/vcpu_pmu.c
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -197,6 +197,36 @@ static int pmu_get_pmc_index(struct kvm_pmu *pmu, unsigned long eidx,
 	return kvm_pmu_get_programmable_pmc_index(pmu, eidx, cbase, cmask);
 }
 
+static int pmu_fw_ctr_read_hi(struct kvm_vcpu *vcpu, unsigned long cidx,
+			      unsigned long *out_val)
+{
+	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	int fevent_code;
+
+	if (!IS_ENABLED(CONFIG_32BIT)) {
+		pr_warn("%s: should be invoked for only RV32\n", __func__);
+		return -EINVAL;
+	}
+
+	if (cidx >= kvm_pmu_num_counters(kvpmu) || cidx == 1) {
+		pr_warn("Invalid counter id [%ld]during read\n", cidx);
+		return -EINVAL;
+	}
+
+	pmc = &kvpmu->pmc[cidx];
+
+	if (pmc->cinfo.type != SBI_PMU_CTR_TYPE_FW)
+		return -EINVAL;
+
+	fevent_code = get_event_code(pmc->event_idx);
+	pmc->counter_val = kvpmu->fw_event[fevent_code].value;
+
+	*out_val = pmc->counter_val >> 32;
+
+	return 0;
+}
+
 static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
 			unsigned long *out_val)
 {
@@ -705,6 +735,18 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
 	return 0;
 }
 
+int kvm_riscv_vcpu_pmu_fw_ctr_read_hi(struct kvm_vcpu *vcpu, unsigned long cidx,
+				      struct kvm_vcpu_sbi_return *retdata)
+{
+	int ret;
+
+	ret = pmu_fw_ctr_read_hi(vcpu, cidx, &retdata->out_val);
+	if (ret == -EINVAL)
+		retdata->err_val = SBI_ERR_INVALID_PARAM;
+
+	return 0;
+}
+
 int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
 				struct kvm_vcpu_sbi_return *retdata)
 {
@@ -778,7 +820,7 @@ void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
 			pmc->cinfo.csr = CSR_CYCLE + i;
 		} else {
 			pmc->cinfo.type = SBI_PMU_CTR_TYPE_FW;
-			pmc->cinfo.width = BITS_PER_LONG - 1;
+			pmc->cinfo.width = 63;
 		}
 	}
 
diff --git a/arch/riscv/kvm/vcpu_sbi_pmu.c b/arch/riscv/kvm/vcpu_sbi_pmu.c
index d3e7625fb2d2..cf111de51bdb 100644
--- a/arch/riscv/kvm/vcpu_sbi_pmu.c
+++ b/arch/riscv/kvm/vcpu_sbi_pmu.c
@@ -64,6 +64,12 @@ static int kvm_sbi_ext_pmu_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	case SBI_EXT_PMU_COUNTER_FW_READ:
 		ret = kvm_riscv_vcpu_pmu_ctr_read(vcpu, cp->a0, retdata);
 		break;
+	case SBI_EXT_PMU_COUNTER_FW_READ_HI:
+		if (IS_ENABLED(CONFIG_32BIT))
+			ret = kvm_riscv_vcpu_pmu_fw_ctr_read_hi(vcpu, cp->a0, retdata);
+		else
+			retdata->out_val = 0;
+		break;
 	case SBI_EXT_PMU_SNAPSHOT_SET_SHMEM:
 		ret = kvm_riscv_vcpu_pmu_snapshot_set_shmem(vcpu, cp->a0, cp->a1, cp->a2, retdata);
 		break;
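For reference, the handler added above reduces to splitting one u64
kvm_fw_event value into two 32-bit halves: on RV32 the existing FW_READ path
returns the low word through the 32-bit out_val, while pmu_fw_ctr_read_hi()
returns counter_val >> 32. A standalone illustrative snippet (the helper
split_fw_counter() is hypothetical, not part of the patch):

    #include <linux/types.h>

    /*
     * Illustrative only: how one u64 firmware counter value maps onto the
     * two SBI return values seen by an RV32 guest.
     */
    static void split_fw_counter(u64 value, u32 *lo, u32 *hi)
    {
            *lo = (u32)value;         /* SBI_EXT_PMU_COUNTER_FW_READ */
            *hi = (u32)(value >> 32); /* SBI_EXT_PMU_COUNTER_FW_READ_HI */
    }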