From patchwork Wed Jul 20 19:23:39 2022
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 12924417
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Albert Ou, Daniel Lezcano, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Liu Shaohua, Niklas Cassel, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley, Philipp Tomsich, Thomas Gleixner, Tsukasa OI, Wei Fu
Subject: [PATCH v5 1/4] RISC-V: Add SSTC extension CSR details
Date: Wed, 20 Jul 2022 12:23:39 -0700
Message-Id: <20220720192342.3428144-2-atishp@rivosinc.com>
In-Reply-To: <20220720192342.3428144-1-atishp@rivosinc.com>
References: <20220720192342.3428144-1-atishp@rivosinc.com>

This patch just introduces the required CSR fields related to the SSTC
extension.

Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/csr.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 17516afc389a..0e571f6483d9 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -247,6 +247,9 @@
 #define CSR_SIP			0x144
 #define CSR_SATP		0x180

+#define CSR_STIMECMP		0x14D
+#define CSR_STIMECMPH		0x15D
+
 #define CSR_VSSTATUS		0x200
 #define CSR_VSIE		0x204
 #define CSR_VSTVEC		0x205
@@ -256,6 +259,8 @@
 #define CSR_VSTVAL		0x243
 #define CSR_VSIP		0x244
 #define CSR_VSATP		0x280
+#define CSR_VSTIMECMP		0x24D
+#define CSR_VSTIMECMPH		0x25D

 #define CSR_HSTATUS		0x600
 #define CSR_HEDELEG		0x602
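
A note on how these definitions get used (an illustrative sketch only, not
the kernel's exact csr.h macro): the csr_read()/csr_write() helpers
stringify the numeric CSR address straight into the csrr/csrw instruction,
so a new CSR only ever needs a #define like the ones above.

#define __SKETCH_STR(x)	#x
#define SKETCH_STR(x)	__SKETCH_STR(x)

/*
 * Simplified csr_write()-style helper: the numeric encoding from a
 * define such as CSR_STIMECMP (0x14D) ends up inside the instruction.
 */
#define sketch_csr_write(csr, val)					\
({									\
	unsigned long __v = (unsigned long)(val);			\
	__asm__ __volatile__ ("csrw " SKETCH_STR(csr) ", %0"		\
			      : : "rK" (__v) : "memory");		\
})

/* e.g. sketch_csr_write(CSR_STIMECMP, next_event_cycles);  (hypothetical use) */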
From patchwork Wed Jul 20 19:23:40 2022
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 12924418
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Albert Ou, Daniel Lezcano, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Liu Shaohua, Niklas Cassel, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley, Philipp Tomsich, Thomas Gleixner, Tsukasa OI, Wei Fu
Subject: [PATCH v5 2/4] RISC-V: Enable sstc extension parsing from DT
Date: Wed, 20 Jul 2022 12:23:40 -0700
Message-Id: <20220720192342.3428144-3-atishp@rivosinc.com>
In-Reply-To: <20220720192342.3428144-1-atishp@rivosinc.com>
References: <20220720192342.3428144-1-atishp@rivosinc.com>

The ISA extension framework now allows parsing any multi-letter ISA
extension. Enable that for the sstc extension.

Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/hwcap.h | 1 +
 arch/riscv/kernel/cpu.c        | 1 +
 arch/riscv/kernel/cpufeature.c | 1 +
 3 files changed, 3 insertions(+)

diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index 4e2486881840..b186fff75198 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -53,6 +53,7 @@ extern unsigned long elf_hwcap;
 enum riscv_isa_ext_id {
 	RISCV_ISA_EXT_SSCOFPMF = RISCV_ISA_EXT_BASE,
 	RISCV_ISA_EXT_SVPBMT,
+	RISCV_ISA_EXT_SSTC,
 	RISCV_ISA_EXT_ID_MAX = RISCV_ISA_EXT_MAX,
 };

diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
index fba9e9f46a8c..0016d9337fe0 100644
--- a/arch/riscv/kernel/cpu.c
+++ b/arch/riscv/kernel/cpu.c
@@ -89,6 +89,7 @@ int riscv_of_parent_hartid(struct device_node *node)
 static struct riscv_isa_ext_data isa_ext_arr[] = {
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
 	__RISCV_ISA_EXT_DATA(svpbmt, RISCV_ISA_EXT_SVPBMT),
+	__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
 	__RISCV_ISA_EXT_DATA("", RISCV_ISA_EXT_MAX),
 };

diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 12b05ce164bb..034bdbd189d0 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -199,6 +199,7 @@ void __init riscv_fill_hwcap(void)
 		} else {
 			SET_ISA_EXT_MAP("sscofpmf", RISCV_ISA_EXT_SSCOFPMF);
 			SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
+			SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);
 		}
 #undef SET_ISA_EXT_MAP
 }
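
For reference, once a platform advertises the extension in its ISA string,
the rest of the series only needs a boolean query. Sketch below; the
"rv64imafdc_sstc" string is a made-up example:

/*
 * Illustration: a hart whose device tree carries, for example,
 *     riscv,isa = "rv64imafdc_sstc";
 * gets RISCV_ISA_EXT_SSTC recorded by riscv_fill_hwcap(), after which
 * the timer driver and KVM can gate their fast paths on this helper.
 */
static bool have_sstc(void)
{
	/* NULL bitmap means "capabilities common to all harts" */
	return riscv_isa_extension_available(NULL, SSTC);
}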
From patchwork Wed Jul 20 19:23:41 2022
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 12924419
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Albert Ou, Daniel Lezcano, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Liu Shaohua, Niklas Cassel, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley, Philipp Tomsich, Thomas Gleixner, Tsukasa OI, Wei Fu
Subject: [PATCH v5 3/4] RISC-V: Prefer sstc extension if available
Date: Wed, 20 Jul 2022 12:23:41 -0700
Message-Id: <20220720192342.3428144-4-atishp@rivosinc.com>
In-Reply-To: <20220720192342.3428144-1-atishp@rivosinc.com>
References: <20220720192342.3428144-1-atishp@rivosinc.com>

The RISC-V sstc extension allows the next clock event to be programmed
via a CSR (stimecmp) instead of an SBI call. Use the CSR dynamically when
the extension is available; otherwise, fall back to the SBI call to
maintain backward compatibility.
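
For illustration, the resulting programming model boils down to the sketch
below (simplified, RV64 assumed; the actual change also handles the 32-bit
high/low split and the static-key setup). With sstc, the S-mode timer
interrupt becomes pending once the time CSR reaches stimecmp, so arming the
next event is a single CSR write and the SBI call remains only as the
fallback:

static DEFINE_STATIC_KEY_FALSE(sketch_sstc_key);	/* flipped once at boot */

static void sketch_set_next_event(u64 next_tval)
{
	if (static_branch_likely(&sketch_sstc_key))
		/* hardware raises the timer interrupt once time >= stimecmp */
		csr_write(CSR_STIMECMP, next_tval);
	else
		/* legacy path: ask the SBI firmware to program the event */
		sbi_set_timer(next_tval);
}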
Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 drivers/clocksource/timer-riscv.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
index 593d5a957b69..3f100fb53d82 100644
--- a/drivers/clocksource/timer-riscv.c
+++ b/drivers/clocksource/timer-riscv.c
@@ -7,6 +7,9 @@
  * either be read from the "time" and "timeh" CSRs, and can use the SBI to
  * setup events, or directly accessed using MMIO registers.
  */
+
+#define pr_fmt(fmt) "riscv-timer: " fmt
+
 #include
 #include
 #include
@@ -23,11 +26,24 @@
 #include
 #include

+static DEFINE_STATIC_KEY_FALSE(riscv_sstc_available);
+
 static int riscv_clock_next_event(unsigned long delta,
 				  struct clock_event_device *ce)
 {
+	u64 next_tval = get_cycles64() + delta;
+
 	csr_set(CSR_IE, IE_TIE);
-	sbi_set_timer(get_cycles64() + delta);
+	if (static_branch_likely(&riscv_sstc_available)) {
+#if defined(CONFIG_32BIT)
+		csr_write(CSR_STIMECMP, next_tval & 0xFFFFFFFF);
+		csr_write(CSR_STIMECMPH, next_tval >> 32);
+#else
+		csr_write(CSR_STIMECMP, next_tval);
+#endif
+	} else
+		sbi_set_timer(next_tval);
+
 	return 0;
 }

@@ -165,6 +181,12 @@ static int __init riscv_timer_init_dt(struct device_node *n)
 	if (error)
 		pr_err("cpu hp setup state failed for RISCV timer [%d]\n", error);
+
+	if (riscv_isa_extension_available(NULL, SSTC)) {
+		pr_info("Timer interrupt in S-mode is available via sstc extension\n");
+		static_branch_enable(&riscv_sstc_available);
+	}
+
 	return error;
 }
From patchwork Wed Jul 20 19:23:42 2022
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 12924420
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Anup Patel, Albert Ou, Daniel Lezcano, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Liu Shaohua, Niklas Cassel, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley, Philipp Tomsich, Thomas Gleixner, Tsukasa OI, Wei Fu
Subject: [PATCH v5 4/4] RISC-V: KVM: Support sstc extension
Date: Wed, 20 Jul 2022 12:23:42 -0700
Message-Id: <20220720192342.3428144-5-atishp@rivosinc.com>
In-Reply-To: <20220720192342.3428144-1-atishp@rivosinc.com>
References: <20220720192342.3428144-1-atishp@rivosinc.com>

The sstc extension allows the guest to program the vstimecmp CSR directly
instead of making an SBI call to the hypervisor to program the next event.
The timer interrupt is also injected into the guest directly by the
hardware in this case. To maintain backward compatibility, the hypervisor
still updates vstimecmp from an SBI set_time call when the hardware
supports it, so older guest kernels also benefit from the sstc extension.
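
In short, the hypervisor-side plumbing added below funnels both paths
through one per-vCPU hook chosen at timer init (hedged sketch, RV64 host
assumed; names follow the patch):

/*
 * Sstc case: write the guest's compare value; the hardware injects the
 * VS-level timer interrupt, so no hrtimer round trip is needed.
 */
static int sketch_update_vstimecmp(struct kvm_vcpu *vcpu, u64 ncycles)
{
	csr_write(CSR_VSTIMECMP, ncycles);
	return 0;
}

/*
 * Common entry point reached from the SBI set_time handler;
 * timer_next_event points at the vstimecmp variant when sstc exists,
 * otherwise at the hrtimer-based variant.
 */
int kvm_riscv_vcpu_timer_next_event(struct kvm_vcpu *vcpu, u64 ncycles)
{
	return vcpu->arch.timer.timer_next_event(vcpu, ncycles);
}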
Reviewed-by: Anup Patel
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_vcpu_timer.h |   7 ++
 arch/riscv/include/uapi/asm/kvm.h       |   1 +
 arch/riscv/kvm/vcpu.c                   |   7 +-
 arch/riscv/kvm/vcpu_timer.c             | 144 +++++++++++++++++++++++-
 4 files changed, 152 insertions(+), 7 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_vcpu_timer.h b/arch/riscv/include/asm/kvm_vcpu_timer.h
index 50138e2eb91b..0d8fdb8ec63a 100644
--- a/arch/riscv/include/asm/kvm_vcpu_timer.h
+++ b/arch/riscv/include/asm/kvm_vcpu_timer.h
@@ -28,6 +28,11 @@ struct kvm_vcpu_timer {
 	u64 next_cycles;
 	/* Underlying hrtimer instance */
 	struct hrtimer hrt;
+
+	/* Flag to check if sstc is enabled or not */
+	bool sstc_enabled;
+	/* A function pointer to switch between stimecmp or hrtimer at runtime */
+	int (*timer_next_event)(struct kvm_vcpu *vcpu, u64 ncycles);
 };

 int kvm_riscv_vcpu_timer_next_event(struct kvm_vcpu *vcpu, u64 ncycles);
@@ -40,5 +45,7 @@ int kvm_riscv_vcpu_timer_deinit(struct kvm_vcpu *vcpu);
 int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu);
 void kvm_riscv_guest_timer_init(struct kvm *kvm);
+void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu);
+bool kvm_riscv_vcpu_timer_pending(struct kvm_vcpu *vcpu);

 #endif

diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index 24b2a6e27698..9ac3dbaf0b0f 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -96,6 +96,7 @@ enum KVM_RISCV_ISA_EXT_ID {
 	KVM_RISCV_ISA_EXT_H,
 	KVM_RISCV_ISA_EXT_I,
 	KVM_RISCV_ISA_EXT_M,
+	KVM_RISCV_ISA_EXT_SSTC,
 	KVM_RISCV_ISA_EXT_SVPBMT,
 	KVM_RISCV_ISA_EXT_MAX,
 };

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 5d271b597613..9ee6ad376eb2 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -51,6 +51,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
 	RISCV_ISA_EXT_h,
 	RISCV_ISA_EXT_i,
 	RISCV_ISA_EXT_m,
+	RISCV_ISA_EXT_SSTC,
 	RISCV_ISA_EXT_SVPBMT,
 };

@@ -203,7 +204,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 {
-	return kvm_riscv_vcpu_has_interrupts(vcpu, 1UL << IRQ_VS_TIMER);
+	return kvm_riscv_vcpu_timer_pending(vcpu);
 }

 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
@@ -785,6 +786,8 @@ static void kvm_riscv_vcpu_update_config(const unsigned long *isa)
 	if (__riscv_isa_extension_available(isa, RISCV_ISA_EXT_SVPBMT))
 		henvcfg |= ENVCFG_PBMTE;

+	if (__riscv_isa_extension_available(isa, RISCV_ISA_EXT_SSTC))
+		henvcfg |= ENVCFG_STCE;
+
 	csr_write(CSR_HENVCFG, henvcfg);
 #ifdef CONFIG_32BIT
 	csr_write(CSR_HENVCFGH, henvcfg >> 32);
@@ -828,6 +831,8 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 					     vcpu->arch.isa);
 	kvm_riscv_vcpu_host_fp_restore(&vcpu->arch.host_context);

+	kvm_riscv_vcpu_timer_save(vcpu);
+
 	csr->vsstatus = csr_read(CSR_VSSTATUS);
 	csr->vsie = csr_read(CSR_VSIE);
 	csr->vstvec = csr_read(CSR_VSTVEC);

diff --git a/arch/riscv/kvm/vcpu_timer.c b/arch/riscv/kvm/vcpu_timer.c
index 595043857049..16f50c46ba39 100644
--- a/arch/riscv/kvm/vcpu_timer.c
+++ b/arch/riscv/kvm/vcpu_timer.c
@@ -69,7 +69,18 @@ static int kvm_riscv_vcpu_timer_cancel(struct kvm_vcpu_timer *t)
 	return 0;
 }

-int kvm_riscv_vcpu_timer_next_event(struct kvm_vcpu *vcpu, u64 ncycles)
+static int kvm_riscv_vcpu_update_vstimecmp(struct kvm_vcpu *vcpu, u64 ncycles)
+{
+#if defined(CONFIG_32BIT)
+	csr_write(CSR_VSTIMECMP, ncycles & 0xFFFFFFFF);
+	csr_write(CSR_VSTIMECMPH, ncycles >> 32);
+#else
+	csr_write(CSR_VSTIMECMP, ncycles);
+#endif
+	return 0;
+}
+
+static int kvm_riscv_vcpu_update_hrtimer(struct kvm_vcpu *vcpu, u64 ncycles)
 {
 	struct kvm_vcpu_timer *t = &vcpu->arch.timer;
 	struct kvm_guest_timer *gt = &vcpu->kvm->arch.timer;
@@ -88,6 +99,65 @@ int kvm_riscv_vcpu_timer_next_event(struct kvm_vcpu *vcpu, u64 ncycles)
 	return 0;
 }

+int kvm_riscv_vcpu_timer_next_event(struct kvm_vcpu *vcpu, u64 ncycles)
+{
+	struct kvm_vcpu_timer *t = &vcpu->arch.timer;
+
+	return t->timer_next_event(vcpu, ncycles);
+}
+
+static enum hrtimer_restart kvm_riscv_vcpu_vstimer_expired(struct hrtimer *h)
+{
+	u64 delta_ns;
+	struct kvm_vcpu_timer *t = container_of(h, struct kvm_vcpu_timer, hrt);
+	struct kvm_vcpu *vcpu = container_of(t, struct kvm_vcpu, arch.timer);
+	struct kvm_guest_timer *gt = &vcpu->kvm->arch.timer;
+
+	if (kvm_riscv_current_cycles(gt) < t->next_cycles) {
+		delta_ns = kvm_riscv_delta_cycles2ns(t->next_cycles, gt, t);
+		hrtimer_forward_now(&t->hrt, ktime_set(0, delta_ns));
+		return HRTIMER_RESTART;
+	}
+
+	t->next_set = false;
+	kvm_vcpu_kick(vcpu);
+
+	return HRTIMER_NORESTART;
+}
+
+bool kvm_riscv_vcpu_timer_pending(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_timer *t = &vcpu->arch.timer;
+	struct kvm_guest_timer *gt = &vcpu->kvm->arch.timer;
+
+	if (!kvm_riscv_delta_cycles2ns(t->next_cycles, gt, t) ||
+	    kvm_riscv_vcpu_has_interrupts(vcpu, 1UL << IRQ_VS_TIMER))
+		return true;
+	else
+		return false;
+}
+
+static void kvm_riscv_vcpu_timer_blocking(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_timer *t = &vcpu->arch.timer;
+	struct kvm_guest_timer *gt = &vcpu->kvm->arch.timer;
+	u64 delta_ns;
+
+	if (!t->init_done)
+		return;
+
+	delta_ns = kvm_riscv_delta_cycles2ns(t->next_cycles, gt, t);
+	if (delta_ns) {
+		hrtimer_start(&t->hrt, ktime_set(0, delta_ns), HRTIMER_MODE_REL);
+		t->next_set = true;
+	}
+}
+
+static void kvm_riscv_vcpu_timer_unblocking(struct kvm_vcpu *vcpu)
+{
+	kvm_riscv_vcpu_timer_cancel(&vcpu->arch.timer);
+}
+
 int kvm_riscv_vcpu_get_reg_timer(struct kvm_vcpu *vcpu,
 				 const struct kvm_one_reg *reg)
 {
@@ -180,10 +250,20 @@ int kvm_riscv_vcpu_timer_init(struct kvm_vcpu *vcpu)
 		return -EINVAL;

 	hrtimer_init(&t->hrt, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-	t->hrt.function = kvm_riscv_vcpu_hrtimer_expired;
 	t->init_done = true;
 	t->next_set = false;

+	/* Enable sstc for every vcpu if available in hardware */
+	if (riscv_isa_extension_available(NULL, SSTC)) {
+		t->sstc_enabled = true;
+		t->hrt.function = kvm_riscv_vcpu_vstimer_expired;
+		t->timer_next_event = kvm_riscv_vcpu_update_vstimecmp;
+	} else {
+		t->sstc_enabled = false;
+		t->hrt.function = kvm_riscv_vcpu_hrtimer_expired;
+		t->timer_next_event = kvm_riscv_vcpu_update_hrtimer;
+	}
+
 	return 0;
 }

@@ -199,21 +279,73 @@ int kvm_riscv_vcpu_timer_deinit(struct kvm_vcpu *vcpu)
 int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu)
 {
+	struct kvm_vcpu_timer *t = &vcpu->arch.timer;
+
+	t->next_cycles = -1ULL;
 	return kvm_riscv_vcpu_timer_cancel(&vcpu->arch.timer);
 }

-void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu)
+static void kvm_riscv_vcpu_update_timedelta(struct kvm_vcpu *vcpu)
 {
 	struct kvm_guest_timer *gt = &vcpu->kvm->arch.timer;

-#ifdef CONFIG_64BIT
-	csr_write(CSR_HTIMEDELTA, gt->time_delta);
-#else
+#if defined(CONFIG_32BIT)
 	csr_write(CSR_HTIMEDELTA, (u32)(gt->time_delta));
 	csr_write(CSR_HTIMEDELTAH, (u32)(gt->time_delta >> 32));
+#else
+	csr_write(CSR_HTIMEDELTA, gt->time_delta);
 #endif
 }

+void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_csr *csr;
+	struct kvm_vcpu_timer *t = &vcpu->arch.timer;
+
+	kvm_riscv_vcpu_update_timedelta(vcpu);
+
+	if (!t->sstc_enabled)
+		return;
+
+	csr = &vcpu->arch.guest_csr;
+#if defined(CONFIG_32BIT)
+	csr_write(CSR_VSTIMECMP, (u32)t->next_cycles);
+	csr_write(CSR_VSTIMECMPH, (u32)(t->next_cycles >> 32));
+#else
+	csr_write(CSR_VSTIMECMP, t->next_cycles);
+#endif
+
+	/* timer should be enabled for the remaining operations */
+	if (unlikely(!t->init_done))
+		return;
+
+	kvm_riscv_vcpu_timer_unblocking(vcpu);
+}
+
+void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_csr *csr;
+	struct kvm_vcpu_timer *t = &vcpu->arch.timer;
+
+	if (!t->sstc_enabled)
+		return;
+
+	csr = &vcpu->arch.guest_csr;
+	t = &vcpu->arch.timer;
+#if defined(CONFIG_32BIT)
+	t->next_cycles = csr_read(CSR_VSTIMECMP);
+	t->next_cycles |= (u64)csr_read(CSR_VSTIMECMPH) << 32;
+#else
+	t->next_cycles = csr_read(CSR_VSTIMECMP);
+#endif
+	/* timer should be enabled for the remaining operations */
+	if (unlikely(!t->init_done))
+		return;
+
+	if (kvm_vcpu_is_blocking(vcpu))
+		kvm_riscv_vcpu_timer_blocking(vcpu);
+}
+
 void kvm_riscv_guest_timer_init(struct kvm *kvm)
 {
 	struct kvm_guest_timer *gt = &kvm->arch.timer;