From patchwork Wed Apr 19 22:16:29 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217486
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Rajnesh Kanwal, Atish Patra, Alexandre Ghiti, Andrew Jones, Andrew Morton,
 Anup Patel, Björn Töpel, Suzuki K Poulose, Will Deacon, Marc Zyngier,
 Sean Christopherson, linux-coco@lists.linux.dev, Dylan Reid,
 abrestic@rivosinc.com, Samuel Ortiz, Christoph Hellwig, Conor Dooley,
 Greg Kroah-Hartman, Guo Ren, Heiko Stuebner, Jiri Slaby,
 kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org,
 linux-riscv@lists.infradead.org, Mayuresh Chitale, Palmer Dabbelt,
 Paolo Bonzini, Paul Walmsley, Uladzislau Rezki
Subject: [RFC 01/48] mm/vmalloc: Introduce arch hooks to notify ioremap/unmap changes
Date: Wed, 19 Apr 2023 15:16:29 -0700
Message-Id:
<20230419221716.3603068-2-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

From: Rajnesh Kanwal

In virtualization, the guest may need to notify the host about its ioremap regions. This is a common use case in confidential computing, where the host provides MMIO emulation only for the regions specified by the guest.

Add a pair of arch-specific callbacks to track the ioremapped regions. This patch is based on the pkvm patches. A generic arch config similar to pkvm's can be added if this turns out to be the final solution. The device authorization/filtering approach is very different from this one, and we may prefer it instead, as it provides more flexibility in terms of which devices are allowed for confidential guests.

Signed-off-by: Rajnesh Kanwal
Signed-off-by: Atish Patra
---
 mm/vmalloc.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index bef6cf2..023630e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -304,6 +304,14 @@ static int vmap_range_noflush(unsigned long addr, unsigned long end,
 	return err;
 }
 
+__weak void ioremap_phys_range_hook(phys_addr_t phys_addr, size_t size, pgprot_t prot)
+{
+}
+
+__weak void iounmap_phys_range_hook(phys_addr_t phys_addr, size_t size)
+{
+}
+
 int ioremap_page_range(unsigned long addr, unsigned long end,
 		       phys_addr_t phys_addr, pgprot_t prot)
 {
@@ -315,6 +323,10 @@ int ioremap_page_range(unsigned long addr, unsigned long end,
 	if (!err)
 		kmsan_ioremap_page_range(addr, end, phys_addr, prot,
 					 ioremap_max_page_shift);
+
+	if (!err)
+		ioremap_phys_range_hook(phys_addr, end - addr, prot);
+
 	return err;
 }
 
@@ -2772,6 +2784,10 @@ void vunmap(const void *addr)
 			addr);
 		return;
 	}
+
+	if (vm->flags & VM_IOREMAP)
+		iounmap_phys_range_hook(vm->phys_addr, get_vm_area_size(vm));
+
 	kfree(vm);
 }
 EXPORT_SYMBOL(vunmap);

From patchwork Wed Apr 19 22:16:30 2023
X-Patchwork-Id: 13217485
From: Atish Patra
Subject: [RFC 02/48] RISC-V: KVM: Improve KVM error reporting to the user space
Date: Wed, 19 Apr 2023 15:16:30 -0700
Message-Id:
<20230419221716.3603068-3-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

This patch adds a RISC-V specific cause for KVM run ioctl failures. For now, it will be used for the following two cases:

1. Insufficient IMSIC files if the VM is configured to run in HWACCEL mode.
2. The TSM is unable to execute the run_vcpu SBI call for TVMs.

KVM also uses a custom scause bit (48) to distinguish this case from regular vcpu exit causes.

Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/csr.h      | 2 ++
 arch/riscv/include/uapi/asm/kvm.h | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 3176355..e78503a 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -96,6 +96,8 @@
 #define EXC_VIRTUAL_INST_FAULT		22
 #define EXC_STORE_GUEST_PAGE_FAULT	23
 
+#define EXC_CUSTOM_KVM_COVE_RUN_FAIL	48
+
 /* PMP configuration */
 #define PMP_R			0x01
 #define PMP_W			0x02
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index b41d0e7..11440df 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -245,6 +245,10 @@ enum KVM_RISCV_SBI_EXT_ID {
 /* One single KVM irqchip, ie. the AIA */
 #define KVM_NR_IRQCHIPS 1
 
+/* run->fail_entry.hardware_entry_failure_reason codes. */
+#define KVM_EXIT_FAIL_ENTRY_IMSIC_FILE_UNAVAILABLE	(1ULL << 0)
+#define KVM_EXIT_FAIL_ENTRY_COVE_RUN_VCPU		(1ULL << 1)
+
 #endif
 
 #endif /* __LINUX_KVM_RISCV_H */

From patchwork Wed Apr 19 22:16:31 2023
X-Patchwork-Id: 13217487
From: Atish Patra
Subject: [RFC 03/48] RISC-V: KVM: Invoke aia_update with preempt disabled/irq enabled
Date: Wed, 19 Apr 2023 15:16:31 -0700
Message-Id: <20230419221716.3603068-4-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

Some of the aia_update operations need to invoke IPIs, which requires interrupts to be enabled. Currently, the entire aia_update is called from an irqs-disabled context, while only disabling preemption is necessary.

Signed-off-by: Atish Patra
---
 arch/riscv/kvm/vcpu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index e65852d..c53bf98 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -1247,15 +1247,16 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 		kvm_riscv_check_vcpu_requests(vcpu);
 
-		local_irq_disable();
-
 		/* Update AIA HW state before entering guest */
+		preempt_disable();
 		ret = kvm_riscv_vcpu_aia_update(vcpu);
 		if (ret <= 0) {
-			local_irq_enable();
+			preempt_enable();
 			continue;
 		}
+		preempt_enable();
 
+		local_irq_disable();
 		/*
 		 * Ensure we set mode to IN_GUEST_MODE after we disable
 		 * interrupts and before the final VCPU requests check.

From patchwork Wed Apr 19 22:16:32 2023
X-Patchwork-Id: 13217488
From: Atish Patra
Subject: [RFC 04/48] RISC-V: KVM: Add a helper function to get pgd size
Date: Wed, 19 Apr 2023 15:16:32 -0700
Message-Id:
<20230419221716.3603068-5-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

The CoVE support will need to know the pgd size to be used for the gstage page table directory. Export the value via an additional helper.

Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h | 1 +
 arch/riscv/kvm/mmu.c              | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 8714325..63c46af 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -313,6 +313,7 @@ void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
 void __init kvm_riscv_gstage_mode_detect(void);
 unsigned long __init kvm_riscv_gstage_mode(void);
 int kvm_riscv_gstage_gpa_bits(void);
+unsigned long kvm_riscv_gstage_pgd_size(void);
 
 void __init kvm_riscv_gstage_vmid_detect(void);
 unsigned long kvm_riscv_gstage_vmid_bits(void);
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index f0fff56..6b037f7 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -797,3 +797,8 @@ int kvm_riscv_gstage_gpa_bits(void)
 {
 	return gstage_gpa_bits;
 }
+
+unsigned long kvm_riscv_gstage_pgd_size(void)
+{
+	return gstage_pgd_size;
+}

From patchwork Wed Apr 19 22:16:33 2023
X-Patchwork-Id: 13217489
From: Atish Patra
Subject: [RFC 05/48] RISC-V: Add COVH SBI extensions definitions
Date: Wed, 19 Apr 2023 15:16:33 -0700
Message-Id: <20230419221716.3603068-6-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

The RISC-V Confidential Virtualization Extension (CoVE) specification defines the following three SBI extensions:

  COVH (Host side interface)
  COVG (Guest side interface)
  COVI (Interrupt management interface)

A few acronyms introduced in this patch:

  TSM - TEE Security Manager
  TVM - TEE VM

This patch adds the definitions for the COVH extension only.
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/sbi.h | 61 ++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index 62d00c7..c5a5526 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -32,6 +32,7 @@ enum sbi_ext_id {
 	SBI_EXT_PMU = 0x504D55,
 	SBI_EXT_DBCN = 0x4442434E,
 	SBI_EXT_NACL = 0x4E41434C,
+	SBI_EXT_COVH = 0x434F5648,
 
 	/* Experimentals extensions must lie within this range */
 	SBI_EXT_EXPERIMENTAL_START = 0x08000000,
@@ -348,6 +349,66 @@ enum sbi_ext_nacl_feature {
 #define SBI_NACL_SHMEM_SRET_X(__i)	((__riscv_xlen / 8) * (__i))
 #define SBI_NACL_SHMEM_SRET_X_LAST	31
 
+/* SBI COVH extension data structures */
+enum sbi_ext_covh_fid {
+	SBI_EXT_COVH_TSM_GET_INFO = 0,
+	SBI_EXT_COVH_TSM_CONVERT_PAGES,
+	SBI_EXT_COVH_TSM_RECLAIM_PAGES,
+	SBI_EXT_COVH_TSM_INITIATE_FENCE,
+	SBI_EXT_COVH_TSM_LOCAL_FENCE,
+	SBI_EXT_COVH_CREATE_TVM,
+	SBI_EXT_COVH_FINALIZE_TVM,
+	SBI_EXT_COVH_DESTROY_TVM,
+	SBI_EXT_COVH_TVM_ADD_MEMORY_REGION,
+	SBI_EXT_COVH_TVM_ADD_PGT_PAGES,
+	SBI_EXT_COVH_TVM_ADD_MEASURED_PAGES,
+	SBI_EXT_COVH_TVM_ADD_ZERO_PAGES,
+	SBI_EXT_COVH_TVM_ADD_SHARED_PAGES,
+	SBI_EXT_COVH_TVM_CREATE_VCPU,
+	SBI_EXT_COVH_TVM_VCPU_RUN,
+	SBI_EXT_COVH_TVM_INITIATE_FENCE,
+};
+
+enum sbi_cove_page_type {
+	SBI_COVE_PAGE_4K,
+	SBI_COVE_PAGE_2MB,
+	SBI_COVE_PAGE_1GB,
+	SBI_COVE_PAGE_512GB,
+};
+
+enum sbi_cove_tsm_state {
+	/* TSM has not been loaded yet */
+	TSM_NOT_LOADED,
+	/* TSM has been loaded but not initialized yet */
+	TSM_LOADED,
+	/* TSM has been initialized and ready to run */
+	TSM_READY,
+};
+
+struct sbi_cove_tsm_info {
+	/* Current state of the TSM */
+	enum sbi_cove_tsm_state tstate;
+
+	/* Version of the loaded TSM */
+	uint32_t version;
+
+	/* Number of 4K pages required per TVM */
+	unsigned long tvm_pages_needed;
+
+	/* Maximum VCPUs supported per TVM */
+	unsigned long tvm_max_vcpus;
+
+	/* Number of 4K pages each vcpu per TVM */
+	unsigned long tvcpu_pages_needed;
+};
+
+struct sbi_cove_tvm_create_params {
+	/* Root page directory for TVM's page table management */
+	unsigned long tvm_page_directory_addr;
+	/* Confidential memory address used to store TVM state information. Must be page aligned */
+	unsigned long tvm_state_addr;
+};
+
 #define SBI_SPEC_VERSION_DEFAULT	0x1
 #define SBI_SPEC_VERSION_MAJOR_SHIFT	24
 #define SBI_SPEC_VERSION_MAJOR_MASK	0x7f

From patchwork Wed Apr 19 22:16:34 2023
X-Patchwork-Id: 13217490
From: Atish Patra
Heiko Stuebner, Jiri Slaby, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley, Rajnesh Kanwal, Uladzislau Rezki
Subject: [RFC 06/48] RISC-V: KVM: Implement COVH SBI extension
Date: Wed, 19 Apr 2023 15:16:34 -0700
Message-Id: <20230419221716.3603068-7-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>
List-ID: X-Mailing-List: kvm@vger.kernel.org

The COVH SBI extension defines the SBI functions that the host invokes to configure, create, and destroy a TEE VM (TVM). Implement all of the COVH SBI extension functions.

Signed-off-by: Atish Patra
---
 arch/riscv/Kconfig                    |  13 ++
 arch/riscv/include/asm/kvm_cove_sbi.h |  46 +++++
 arch/riscv/kvm/Makefile               |   1 +
 arch/riscv/kvm/cove_sbi.c             | 245 ++++++++++++++++++++++++++
 4 files changed, 305 insertions(+)
 create mode 100644 arch/riscv/include/asm/kvm_cove_sbi.h
 create mode 100644 arch/riscv/kvm/cove_sbi.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 4044080..8462941 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -501,6 +501,19 @@ config FPU

 	  If you don't know what to do here, say Y.

+menu "Confidential VM Extension (CoVE) Support"
+
+config RISCV_COVE_HOST
+	bool "Host (KVM) support for Confidential VM Extension (CoVE)"
+	depends on KVM
+	default n
+	help
+	  Enable this if the platform supports the Confidential VM Extension
+	  (CoVE), i.e. the platform is capable of running TEE VMs (TVMs)
+	  using KVM and the TEE Security Manager (TSM).
+
+endmenu # "Confidential VM Extension (CoVE) Support"
+
 endmenu # "Platform type"

 menu "Kernel features"

diff --git a/arch/riscv/include/asm/kvm_cove_sbi.h b/arch/riscv/include/asm/kvm_cove_sbi.h
new file mode 100644
index 0000000..24562df
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_cove_sbi.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * COVE SBI extension related header file.
+ *
+ * Copyright (c) 2023 RivosInc
+ *
+ * Authors:
+ *	Atish Patra
+ */
+
+#ifndef __KVM_COVE_SBI_H
+#define __KVM_COVE_SBI_H
+
+#include
+#include
+#include
+#include
+#include
+
+int sbi_covh_tsm_get_info(struct sbi_cove_tsm_info *tinfo_addr);
+int sbi_covh_tvm_initiate_fence(unsigned long tvmid);
+int sbi_covh_tsm_initiate_fence(void);
+int sbi_covh_tsm_local_fence(void);
+int sbi_covh_tsm_create_tvm(struct sbi_cove_tvm_create_params *tparam, unsigned long *tvmid);
+int sbi_covh_tsm_finalize_tvm(unsigned long tvmid, unsigned long sepc, unsigned long entry_arg);
+int sbi_covh_tsm_destroy_tvm(unsigned long tvmid);
+int sbi_covh_add_memory_region(unsigned long tvmid, unsigned long tgpaddr, unsigned long rlen);
+
+int sbi_covh_tsm_reclaim_pages(unsigned long phys_addr, unsigned long npages);
+int sbi_covh_tsm_convert_pages(unsigned long phys_addr, unsigned long npages);
+int sbi_covh_tsm_reclaim_page(unsigned long page_addr_phys);
+int sbi_covh_add_pgt_pages(unsigned long tvmid, unsigned long page_addr_phys, unsigned long npages);
+
+int sbi_covh_add_measured_pages(unsigned long tvmid, unsigned long src_addr,
+				unsigned long dest_addr, enum sbi_cove_page_type ptype,
+				unsigned long npages, unsigned long tgpa);
+int sbi_covh_add_zero_pages(unsigned long tvmid, unsigned long page_addr_phys,
+			    enum sbi_cove_page_type ptype, unsigned long npages,
+			    unsigned long tvm_base_page_addr);
+
+int sbi_covh_create_tvm_vcpu(unsigned long tvmid, unsigned long tvm_vcpuid,
+			     unsigned long vcpu_state_paddr);
+
+int sbi_covh_run_tvm_vcpu(unsigned long tvmid, unsigned long tvm_vcpuid);
+
+#endif

diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 6986d3c..40dee04 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -31,3 +31,4 @@ kvm-y += aia.o
 kvm-y += aia_device.o
 kvm-y += aia_aplic.o
 kvm-y += aia_imsic.o
+kvm-$(CONFIG_RISCV_COVE_HOST) += cove_sbi.o

diff --git a/arch/riscv/kvm/cove_sbi.c b/arch/riscv/kvm/cove_sbi.c
new file mode 100644
index 0000000..c8c63fe
--- /dev/null
+++ b/arch/riscv/kvm/cove_sbi.c
@@ -0,0 +1,245 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * COVE SBI extensions related helper functions.
+ *
+ * Copyright (c) 2023 RivosInc
+ *
+ * Authors:
+ *	Atish Patra
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define RISCV_COVE_ALIGN_4KB (1UL << 12)
+
+int sbi_covh_tsm_get_info(struct sbi_cove_tsm_info *tinfo_addr)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_GET_INFO, __pa(tinfo_addr),
+			sizeof(*tinfo_addr), 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tvm_initiate_fence(unsigned long tvmid)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_INITIATE_FENCE, tvmid, 0, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tsm_initiate_fence(void)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_INITIATE_FENCE, 0, 0, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tsm_local_fence(void)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_LOCAL_FENCE, 0, 0, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tsm_create_tvm(struct sbi_cove_tvm_create_params *tparam, unsigned long *tvmid)
+{
+	struct sbiret ret;
+	int rc = 0;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_CREATE_TVM, __pa(tparam),
+			sizeof(*tparam), 0, 0, 0, 0);
+	if (ret.error) {
+		rc = sbi_err_map_linux_errno(ret.error);
+		if (rc == -EFAULT)
+			kvm_err("Invalid physical address for tvm params structure\n");
+		goto done;
+	}
+
+	kvm_info("%s: create_tvm tvmid %lx\n", __func__, ret.value);
+	*tvmid = ret.value;
+
+done:
+	return rc;
+}
+
+int sbi_covh_tsm_finalize_tvm(unsigned long tvmid, unsigned long sepc, unsigned long entry_arg)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_FINALIZE_TVM, tvmid,
+			sepc, entry_arg, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tsm_destroy_tvm(unsigned long tvmid)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_DESTROY_TVM, tvmid,
+			0, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_add_memory_region(unsigned long tvmid, unsigned long tgpaddr, unsigned long rlen)
+{
+	struct sbiret ret;
+
+	if (!IS_ALIGNED(tgpaddr, RISCV_COVE_ALIGN_4KB) || !IS_ALIGNED(rlen, RISCV_COVE_ALIGN_4KB))
+		return -EINVAL;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_ADD_MEMORY_REGION, tvmid,
+			tgpaddr, rlen, 0, 0, 0);
+	if (ret.error) {
+		kvm_err("Add memory region failed with sbi error code %ld\n", ret.error);
+		return sbi_err_map_linux_errno(ret.error);
+	}
+
+	return 0;
+}
+
+int sbi_covh_tsm_convert_pages(unsigned long phys_addr, unsigned long npages)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_CONVERT_PAGES, phys_addr,
+			npages, 0, 0, 0, 0);
+	if (ret.error) {
+		kvm_err("Convert pages failed ret %ld\n", ret.error);
+		return sbi_err_map_linux_errno(ret.error);
+	}
+	return 0;
+}
+
+int sbi_covh_tsm_reclaim_page(unsigned long page_addr_phys)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_RECLAIM_PAGES, page_addr_phys,
+			1, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tsm_reclaim_pages(unsigned long phys_addr, unsigned long npages)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_RECLAIM_PAGES, phys_addr,
+			npages, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_add_pgt_pages(unsigned long tvmid, unsigned long page_addr_phys, unsigned long npages)
+{
+	struct sbiret ret;
+
+	if (!PAGE_ALIGNED(page_addr_phys))
+		return -EINVAL;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_ADD_PGT_PAGES, tvmid, page_addr_phys,
+			npages, 0, 0, 0);
+	if (ret.error) {
+		kvm_err("Adding page table pages at %lx failed %ld\n", page_addr_phys, ret.error);
+		return sbi_err_map_linux_errno(ret.error);
+	}
+
+	return 0;
+}
+
+int sbi_covh_add_measured_pages(unsigned long tvmid, unsigned long src_addr,
+				unsigned long dest_addr, enum sbi_cove_page_type ptype,
+				unsigned long npages, unsigned long tgpa)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_ADD_MEASURED_PAGES, tvmid, src_addr,
+			dest_addr, ptype, npages, tgpa);
+	if (ret.error) {
+		kvm_err("Adding measured pages failed ret %ld\n", ret.error);
+		return sbi_err_map_linux_errno(ret.error);
+	}
+
+	return 0;
+}
+
+int sbi_covh_add_zero_pages(unsigned long tvmid, unsigned long page_addr_phys,
+			    enum sbi_cove_page_type ptype, unsigned long npages,
+			    unsigned long tvm_base_page_addr)
+{
+	struct sbiret ret;
+
+	if (!PAGE_ALIGNED(page_addr_phys))
+		return -EINVAL;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_ADD_ZERO_PAGES, tvmid, page_addr_phys,
+			ptype, npages, tvm_base_page_addr, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_create_tvm_vcpu(unsigned long tvmid, unsigned long vcpuid,
+			     unsigned long vcpu_state_paddr)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_CREATE_VCPU, tvmid, vcpuid,
+			vcpu_state_paddr, 0, 0, 0);
+	if (ret.error) {
+		kvm_err("create vcpu failed ret %ld\n", ret.error);
+		return sbi_err_map_linux_errno(ret.error);
+	}
+	return 0;
+}
+
+int sbi_covh_run_tvm_vcpu(unsigned long tvmid, unsigned long vcpuid)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_VCPU_RUN, tvmid, vcpuid, 0, 0, 0, 0);
+	/* A non-zero return value indicates that the vcpu is already terminated */
+	if (ret.error || !ret.value)
+		return ret.error ? sbi_err_map_linux_errno(ret.error) : ret.value;
+
+	return 0;
+}

From patchwork Wed Apr 19 22:16:35 2023
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Alexandre Ghiti, Andrew Jones, Andrew Morton, Anup Patel, Atish Patra, Björn Töpel, Suzuki K Poulose, Will Deacon, Marc Zyngier, Sean Christopherson, linux-coco@lists.linux.dev, Dylan Reid, abrestic@rivosinc.com, Samuel Ortiz, Christoph Hellwig, Conor Dooley, Greg Kroah-Hartman, Guo Ren,
Heiko Stuebner, Jiri Slaby, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley, Rajnesh Kanwal, Uladzislau Rezki
Subject: [RFC 07/48] RISC-V: KVM: Add a barebone CoVE implementation
Date: Wed, 19 Apr 2023 15:16:35 -0700
Message-Id: <20230419221716.3603068-8-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>
List-ID: X-Mailing-List: kvm@vger.kernel.org

Add a barebone implementation of CoVE functionality that exercises the COVH functions to create/manage pages for various boot-time operations, such as page directory setup, page table management, and vcpu/vm state management.

Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_cove.h | 154 ++++++++++++
 arch/riscv/include/asm/kvm_host.h |   7 +
 arch/riscv/kvm/Makefile           |   2 +-
 arch/riscv/kvm/cove.c             | 401 ++++++++++++++++++++++++++++++
 arch/riscv/kvm/cove_sbi.c         |   2 -
 include/uapi/linux/kvm.h          |   6 +
 6 files changed, 569 insertions(+), 3 deletions(-)
 create mode 100644 arch/riscv/include/asm/kvm_cove.h
 create mode 100644 arch/riscv/kvm/cove.c

diff --git a/arch/riscv/include/asm/kvm_cove.h b/arch/riscv/include/asm/kvm_cove.h
new file mode 100644
index 0000000..3bf1bcd
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_cove.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * COVE related header file.
+ *
+ * Copyright (c) 2023 RivosInc
+ *
+ * Authors:
+ *	Atish Patra
+ */
+
+#ifndef __KVM_RISCV_COVE_H
+#define __KVM_RISCV_COVE_H
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define KVM_COVE_PAGE_SIZE_4K		(1UL << 12)
+#define KVM_COVE_PAGE_SIZE_2MB		(1UL << 21)
+#define KVM_COVE_PAGE_SIZE_1GB		(1UL << 30)
+#define KVM_COVE_PAGE_SIZE_512GB	(1UL << 39)
+
+#define bytes_to_pages(n) ((n + PAGE_SIZE - 1) >> PAGE_SHIFT)
+
+/* Allocate 4MB (i.e. 1024 4K pages) for the page table pool */
+#define KVM_COVE_PGTABLE_SIZE_MAX ((1UL << 10) * PAGE_SIZE)
+
+#define get_order_num_pages(n) (get_order(n << PAGE_SHIFT))
+
+/* Describes a confidential or shared memory region */
+struct kvm_riscv_cove_mem_region {
+	unsigned long hva;
+	unsigned long gpa;
+	unsigned long npages;
+};
+
+/* Page management structure for the host */
+struct kvm_riscv_cove_page {
+	struct list_head link;
+
+	/* Pointer to the allocated page */
+	struct page *page;
+
+	/* Number of pages allocated */
+	unsigned long npages;
+
+	/* Describes the page type */
+	unsigned long ptype;
+
+	/* Set if the page is mapped at a guest physical address */
+	bool is_mapped;
+
+	/* The below two fields are only valid if is_mapped is true */
+	/* host virtual address for the mapping */
+	unsigned long hva;
+	/* guest physical address for the mapping */
+	unsigned long gpa;
+};
+
+struct kvm_cove_tvm_vcpu_context {
+	struct kvm_vcpu *vcpu;
+	/* Pages storing each vcpu state of the TVM in the TSM */
+	struct kvm_riscv_cove_page vcpu_state;
+};
+
+struct kvm_cove_tvm_context {
+	struct kvm *kvm;
+
+	/* TODO: This is not really a VMID as the TSM returns the page owner ID instead of a VMID */
+	unsigned long tvm_guest_id;
+
+	/* Pages where the TVM page table is stored */
+	struct kvm_riscv_cove_page pgtable;
+
+	/* Pages storing the TVM state in the TSM */
+	struct kvm_riscv_cove_page tvm_state;
+
+	/* Keep track of zero pages */
+	struct list_head zero_pages;
+
+	/* Pages where the TVM image is measured & loaded */
+	struct list_head measured_pages;
+
+	/* Keep track of shared pages */
+	struct list_head shared_pages;
+
+	/* Keep track of confidential pages with a pending reclaim */
+	struct list_head reclaim_pending_pages;
+
+	struct kvm_riscv_cove_mem_region shared_region;
+	struct kvm_riscv_cove_mem_region confidential_region;
+
+	/* Spinlock to protect the TVM fence sequence */
+	spinlock_t tvm_fence_lock;
+
+	/* Track TVM state */
+	bool finalized_done;
+};
+
+static inline bool is_cove_vm(struct kvm *kvm)
+{
+	return kvm->arch.vm_type == KVM_VM_TYPE_RISCV_COVE;
+}
+
+static inline bool is_cove_vcpu(struct kvm_vcpu *vcpu)
+{
+	return is_cove_vm(vcpu->kvm);
+}
+
+#ifdef CONFIG_RISCV_COVE_HOST
+
+bool kvm_riscv_cove_enabled(void);
+int kvm_riscv_cove_init(void);
+
+/* TVM related functions */
+void kvm_riscv_cove_vm_destroy(struct kvm *kvm);
+int kvm_riscv_cove_vm_init(struct kvm *kvm);
+
+/* TVM VCPU related functions */
+void kvm_riscv_cove_vcpu_destroy(struct kvm_vcpu *vcpu);
+int kvm_riscv_cove_vcpu_init(struct kvm_vcpu *vcpu);
+void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu);
+void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu);
+void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap);
+
+int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned long size);
+int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva);
+#else
+static inline bool kvm_riscv_cove_enabled(void) { return false; }
+static inline int kvm_riscv_cove_init(void) { return -1; }
+static inline void kvm_riscv_cove_hardware_disable(void) {}
+static inline int kvm_riscv_cove_hardware_enable(void) { return 0; }
+
+/* TVM related functions */
+static inline void kvm_riscv_cove_vm_destroy(struct kvm *kvm) {}
+static inline int kvm_riscv_cove_vm_init(struct kvm *kvm) { return -1; }
+
+/* TVM VCPU related functions */
+static inline void kvm_riscv_cove_vcpu_destroy(struct kvm_vcpu *vcpu) {}
+static inline int kvm_riscv_cove_vcpu_init(struct kvm_vcpu *vcpu) { return -1; }
+static inline void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu) {}
+static inline void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu) {}
+static inline void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap) {}
+static inline int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa,
+					       unsigned long size) { return -1; }
+static inline int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu,
+					    gpa_t gpa, unsigned long hva) { return -1; }
+#endif /* CONFIG_RISCV_COVE_HOST */
+
+#endif /* __KVM_RISCV_COVE_H */

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 63c46af..ca2ebe3 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -88,6 +88,8 @@ struct kvm_vmid {
 };

 struct kvm_arch {
+	unsigned long vm_type;
+
 	/* G-stage vmid */
 	struct kvm_vmid vmid;

@@ -100,6 +102,9 @@ struct kvm_arch {

 	/* AIA Guest/VM context */
 	struct kvm_aia aia;
+
+	/* COVE guest/VM context */
+	struct kvm_cove_tvm_context *tvmc;
 };

 struct kvm_cpu_trap {
@@ -242,6 +247,8 @@ struct kvm_vcpu_arch {

 	/* Performance monitoring context */
 	struct kvm_pmu pmu_context;
+
+	struct kvm_cove_tvm_vcpu_context *tc;
 };

 static inline void kvm_arch_sync_events(struct kvm *kvm) {}

diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 40dee04..8c91551 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -31,4 +31,4 @@ kvm-y += aia.o
 kvm-y += aia_device.o
 kvm-y += aia_aplic.o
 kvm-y += aia_imsic.o
-kvm-$(CONFIG_RISCV_COVE_HOST) += cove_sbi.o
+kvm-$(CONFIG_RISCV_COVE_HOST) += cove_sbi.o cove.o

diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c
new file mode 100644
index 0000000..d001e36
--- /dev/null
+++ b/arch/riscv/kvm/cove.c
@@ -0,0 +1,401 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * COVE related helper functions.
+ *
+ * Copyright (c) 2023 RivosInc
+ *
+ * Authors:
+ *	Atish Patra
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static struct sbi_cove_tsm_info tinfo;
+struct sbi_cove_tvm_create_params params;
+
+/* We need a global lock as the initiate fence can be invoked once per host */
+static DEFINE_SPINLOCK(cove_fence_lock);
+
+static bool riscv_cove_enabled;
+
+static void kvm_cove_local_fence(void *info)
+{
+	int rc;
+
+	rc = sbi_covh_tsm_local_fence();
+	if (rc)
+		kvm_err("local fence for TSM failed %d on cpu %d\n", rc, smp_processor_id());
+}
+
+static void cove_delete_page_list(struct kvm *kvm, struct list_head *tpages, bool unpin)
+{
+	struct kvm_riscv_cove_page *tpage, *temp;
+	int rc;
+
+	list_for_each_entry_safe(tpage, temp, tpages, link) {
+		rc = sbi_covh_tsm_reclaim_pages(page_to_phys(tpage->page), tpage->npages);
+		if (rc)
+			kvm_err("Reclaiming page %llx failed\n", page_to_phys(tpage->page));
+		if (unpin)
+			unpin_user_pages_dirty_lock(&tpage->page, 1, true);
+		list_del(&tpage->link);
+		kfree(tpage);
+	}
+}
+
+static int kvm_riscv_cove_fence(void)
+{
+	int rc;
+
+	spin_lock(&cove_fence_lock);
+
+	rc = sbi_covh_tsm_initiate_fence();
+	if (rc) {
+		kvm_err("initiate fence for tsm failed %d\n", rc);
+		goto done;
+	}
+
+	/* initiate a local fence on each online hart */
+	on_each_cpu(kvm_cove_local_fence, NULL, 1);
+done:
+	spin_unlock(&cove_fence_lock);
+	return rc;
+}
+
+static int cove_convert_pages(unsigned long phys_addr, unsigned long npages, bool fence)
+{
+	int rc;
+
+	if (!IS_ALIGNED(phys_addr, PAGE_SIZE))
+		return -EINVAL;
+
+	rc = sbi_covh_tsm_convert_pages(phys_addr, npages);
+	if (rc)
+		return rc;
+
+	/* Conversion was successful. Flush the TLB if the caller requested it */
+	if (fence)
+		rc = kvm_riscv_cove_fence();
+
+	return rc;
+}
+
+__always_inline bool kvm_riscv_cove_enabled(void)
+{
+	return riscv_cove_enabled;
+}
+
+void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu)
+{
+	/* TODO */
+}
+
+void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	/* TODO */
+}
+
+int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva)
+{
+	/* TODO */
+	return 0;
+}
+
+void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap)
+{
+	/* TODO */
+}
+
+void kvm_riscv_cove_vcpu_destroy(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cove_tvm_vcpu_context *tvcpuc = vcpu->arch.tc;
+	struct kvm *kvm = vcpu->kvm;
+
+	/*
+	 * Just add the vcpu state pages to a list at this point as these can not
+	 * be claimed until the tvm is destroyed.
+	 */
+	list_add(&tvcpuc->vcpu_state.link, &kvm->arch.tvmc->reclaim_pending_pages);
+}
+
+int kvm_riscv_cove_vcpu_init(struct kvm_vcpu *vcpu)
+{
+	int rc;
+	struct kvm *kvm;
+	struct kvm_cove_tvm_vcpu_context *tvcpuc;
+	struct kvm_cove_tvm_context *tvmc;
+	struct page *vcpus_page;
+	unsigned long vcpus_phys_addr;
+
+	if (!vcpu)
+		return -EINVAL;
+
+	kvm = vcpu->kvm;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	if (tvmc->finalized_done) {
+		kvm_err("vcpu init must not happen after finalize\n");
+		return -EINVAL;
+	}
+
+	tvcpuc = kzalloc(sizeof(*tvcpuc), GFP_KERNEL);
+	if (!tvcpuc)
+		return -ENOMEM;
+
+	vcpus_page = alloc_pages(GFP_KERNEL | __GFP_ZERO,
+				 get_order_num_pages(tinfo.tvcpu_pages_needed));
+	if (!vcpus_page) {
+		rc = -ENOMEM;
+		goto alloc_page_failed;
+	}
+
+	tvcpuc->vcpu = vcpu;
+	tvcpuc->vcpu_state.npages = tinfo.tvcpu_pages_needed;
+	tvcpuc->vcpu_state.page = vcpus_page;
+	vcpus_phys_addr = page_to_phys(vcpus_page);
+
+	rc = cove_convert_pages(vcpus_phys_addr, tvcpuc->vcpu_state.npages, true);
+	if (rc)
+		goto convert_failed;
+
+	rc = sbi_covh_create_tvm_vcpu(tvmc->tvm_guest_id, vcpu->vcpu_idx,
+				      vcpus_phys_addr);
+	if (rc)
+		goto vcpu_create_failed;
+
+	vcpu->arch.tc = tvcpuc;
+
+	return 0;
+
+vcpu_create_failed:
+	/* Reclaim all the pages, i.e. return them to the confidential page pool */
+	sbi_covh_tsm_reclaim_pages(vcpus_phys_addr, tvcpuc->vcpu_state.npages);
+
+convert_failed:
+	__free_pages(vcpus_page, get_order_num_pages(tinfo.tvcpu_pages_needed));

+alloc_page_failed:
+	kfree(tvcpuc);
+	return rc;
+}
+
+int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned long size)
+{
+	int rc;
+	struct kvm_cove_tvm_context *tvmc = kvm->arch.tvmc;
+
+	if (!tvmc)
+		return -EFAULT;
+
+	if (tvmc->finalized_done) {
+		kvm_err("Memory region can not be added after finalize\n");
+		return -EINVAL;
+	}
+
+	tvmc->confidential_region.gpa = gpa;
+	tvmc->confidential_region.npages = bytes_to_pages(size);
+
+	rc = sbi_covh_add_memory_region(tvmc->tvm_guest_id, gpa, size);
+	if (rc) {
+		kvm_err("Registering confidential memory region failed with rc %d\n", rc);
+		return rc;
+	}
+
+	kvm_info("%s: Success with gpa %lx size %lx\n", __func__, gpa, size);
+
+	return 0;
+}
+
+/*
+ * Destroying a TVM is expensive because we need to reclaim all of its pages by
+ * iterating over them. A few ideas to improve this:
+ * 1. At least do the reclaim part in a background worker thread.
+ * 2. Define a page pool that contains pre-allocated/converted pages; on destruction
+ *    we would just return pages to the confidential page pool so that some other
+ *    TVM can use them.
+ */
+void kvm_riscv_cove_vm_destroy(struct kvm *kvm)
+{
+	int rc;
+	struct kvm_cove_tvm_context *tvmc = kvm->arch.tvmc;
+	unsigned long pgd_npages;
+
+	if (!tvmc)
+		return;
+
+	/* Release all the confidential pages using COVH SBI calls */
+	rc = sbi_covh_tsm_destroy_tvm(tvmc->tvm_guest_id);
+	if (rc) {
+		kvm_err("TVM %ld destruction failed with rc = %d\n", tvmc->tvm_guest_id, rc);
+		return;
+	}
+
+	cove_delete_page_list(kvm, &tvmc->reclaim_pending_pages, false);
+
+	/* Reclaim and free the pages used for tvm state management */
+	rc = sbi_covh_tsm_reclaim_pages(page_to_phys(tvmc->tvm_state.page), tvmc->tvm_state.npages);
+	if (rc)
+		goto reclaim_failed;
+
+	__free_pages(tvmc->tvm_state.page, get_order_num_pages(tvmc->tvm_state.npages));
+
+	/* Reclaim and free the pages used for gstage page table management */
+	rc = sbi_covh_tsm_reclaim_pages(page_to_phys(tvmc->pgtable.page), tvmc->pgtable.npages);
+	if (rc)
+		goto reclaim_failed;
+
+	__free_pages(tvmc->pgtable.page, get_order_num_pages(tvmc->pgtable.npages));
+
+	/* Reclaim the confidential pages used for the pgd */
+	pgd_npages = kvm_riscv_gstage_pgd_size() >> PAGE_SHIFT;
+	rc = sbi_covh_tsm_reclaim_pages(kvm->arch.pgd_phys, pgd_npages);
+	if (rc)
+		goto reclaim_failed;
+
+	kfree(tvmc);
+
+	return;
+
+reclaim_failed:
+	kvm_err("Memory reclaim failed with rc %d\n", rc);
+}
+
+int kvm_riscv_cove_vm_init(struct kvm *kvm)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct page *tvms_page, *pgt_page;
+	unsigned long tvm_gid, pgt_phys_addr, tvms_phys_addr;
+	unsigned long gstage_pgd_size = kvm_riscv_gstage_pgd_size();
+	int rc = 0;
+
+	tvmc = kzalloc(sizeof(*tvmc), GFP_KERNEL);
+	if (!tvmc)
+		return -ENOMEM;
+
+	/* Allocate the pages required for gstage page table management */
+	/* TODO: Just give enough pages for the page table pool for now */
+	pgt_page = alloc_pages(GFP_KERNEL | __GFP_ZERO, get_order(KVM_COVE_PGTABLE_SIZE_MAX));
+	if (!pgt_page)
+		return -ENOMEM;
+
+	/* pgd is always 16KB aligned */
+	rc = 
cove_convert_pages(kvm->arch.pgd_phys, gstage_pgd_size >> PAGE_SHIFT, false);
+	if (rc)
+		goto done;
+
+	/* Convert the gstage page table pages */
+	tvmc->pgtable.page = pgt_page;
+	tvmc->pgtable.npages = KVM_COVE_PGTABLE_SIZE_MAX >> PAGE_SHIFT;
+	pgt_phys_addr = page_to_phys(pgt_page);
+
+	rc = cove_convert_pages(pgt_phys_addr, tvmc->pgtable.npages, false);
+	if (rc) {
+		kvm_err("%s: page table pool conversion failed rc %d\n", __func__, rc);
+		goto pgt_convert_failed;
+	}
+
+	/* Allocate and convert the pages required for TVM state management */
+	tvms_page = alloc_pages(GFP_KERNEL | __GFP_ZERO,
+				get_order_num_pages(tinfo.tvm_pages_needed));
+	if (!tvms_page) {
+		rc = -ENOMEM;
+		goto tvms_alloc_failed;
+	}
+
+	tvmc->tvm_state.page = tvms_page;
+	tvmc->tvm_state.npages = tinfo.tvm_pages_needed;
+	tvms_phys_addr = page_to_phys(tvms_page);
+
+	rc = cove_convert_pages(tvms_phys_addr, tinfo.tvm_pages_needed, false);
+	if (rc) {
+		kvm_err("%s: tvm state page conversion failed rc %d\n", __func__, rc);
+		goto tvms_convert_failed;
+	}
+
+	rc = kvm_riscv_cove_fence();
+	if (rc)
+		goto tvm_init_failed;
+
+	INIT_LIST_HEAD(&tvmc->measured_pages);
+	INIT_LIST_HEAD(&tvmc->zero_pages);
+	INIT_LIST_HEAD(&tvmc->shared_pages);
+	INIT_LIST_HEAD(&tvmc->reclaim_pending_pages);
+
+	/* The required pages have been converted to confidential memory. Create the TVM now */
+	params.tvm_page_directory_addr = kvm->arch.pgd_phys;
+	params.tvm_state_addr = tvms_phys_addr;
+
+	rc = sbi_covh_tsm_create_tvm(&params, &tvm_gid);
+	if (rc)
+		goto tvm_init_failed;
+
+	tvmc->tvm_guest_id = tvm_gid;
+	spin_lock_init(&tvmc->tvm_fence_lock);
+	kvm->arch.tvmc = tvmc;
+
+	rc = sbi_covh_add_pgt_pages(tvm_gid, pgt_phys_addr, tvmc->pgtable.npages);
+	if (rc)
+		goto tvm_init_failed;
+
+	tvmc->kvm = kvm;
+	kvm_info("Guest VM creation successful with guest id %lx\n", tvm_gid);
+
+	return 0;
+
+tvm_init_failed:
+	/* Reclaim tvm state pages */
+	sbi_covh_tsm_reclaim_pages(tvms_phys_addr, tvmc->tvm_state.npages);
+
+tvms_convert_failed:
+	__free_pages(tvms_page, get_order_num_pages(tinfo.tvm_pages_needed));
+
+tvms_alloc_failed:
+	/* Reclaim pgtable pages */
+	sbi_covh_tsm_reclaim_pages(pgt_phys_addr, tvmc->pgtable.npages);
+
+pgt_convert_failed:
+	__free_pages(pgt_page, get_order(KVM_COVE_PGTABLE_SIZE_MAX));
+	/* Reclaim pgd pages */
+	sbi_covh_tsm_reclaim_pages(kvm->arch.pgd_phys, gstage_pgd_size >> PAGE_SHIFT);
+
+done:
+	kfree(tvmc);
+	return rc;
+}
+
+int kvm_riscv_cove_init(void)
+{
+	int rc;
+
+	/* We currently support the host only in VS mode. Thus, NACL is mandatory. */
+	if (sbi_probe_extension(SBI_EXT_COVH) <= 0 || !kvm_riscv_nacl_available())
+		return -EOPNOTSUPP;
+
+	rc = sbi_covh_tsm_get_info(&tinfo);
+	if (rc < 0)
+		return -EINVAL;
+
+	if (tinfo.tstate != TSM_READY) {
+		kvm_err("TSM is not ready yet. Can't run TVMs\n");
+		return -EAGAIN;
+	}
+
+	riscv_cove_enabled = true;
+	kvm_info("The platform has the confidential computing feature enabled\n");
+	kvm_info("TSM version %d is loaded and ready to run\n", tinfo.version);
+
+	return 0;
+}

diff --git a/arch/riscv/kvm/cove_sbi.c b/arch/riscv/kvm/cove_sbi.c
index c8c63fe..bf037f6 100644
--- a/arch/riscv/kvm/cove_sbi.c
+++ b/arch/riscv/kvm/cove_sbi.c
@@ -82,9 +82,7 @@ int sbi_covh_tsm_create_tvm(struct sbi_cove_tvm_create_params *tparam, unsigned
 		goto done;
 	}

-	kvm_info("%s: create_tvm tvmid %lx\n", __func__, ret.value);
 	*tvmid = ret.value;
-
 done:
 	return rc;
 }

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 8923319..a55a6a5 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -914,6 +914,12 @@ struct kvm_ppc_resize_hpt {
 #define KVM_VM_TYPE_ARM_IPA_SIZE_MASK	0xffULL
 #define KVM_VM_TYPE_ARM_IPA_SIZE(x)	\
 	((x) & KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
+
+/*
+ * RISC-V Confidential VM type. The large bit shift is chosen on purpose
+ * to allow other architectures to have their specific VM types if required.
+ */ +#define KVM_VM_TYPE_RISCV_COVE (1UL << 9) /* * ioctls for /dev/kvm fds: */ From patchwork Wed Apr 19 22:16:36 2023 X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217492 From: Atish Patra To: linux-kernel@vger.kernel.org Subject: [RFC 08/48] RISC-V: KVM: Add UABI to support static memory region attestation Date: Wed, 19 Apr 2023 15:16:36 -0700 Message-Id: <20230419221716.3603068-9-atishp@rivosinc.com> In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> To initialize a TVM, the TSM must ensure that all the static memory regions that contain the device tree, the kernel image, or the initrd for the TVM are attested. Some of this information is usually not available to the host; only the VMM is aware of it. Introduce a new ioctl as part of the uABI to support this. Signed-off-by: Atish Patra --- arch/riscv/include/uapi/asm/kvm.h | 12 ++++++++++++ include/uapi/linux/kvm.h | 2 ++ 2 files changed, 14 insertions(+) diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h index 11440df..ac3def0 100644 --- a/arch/riscv/include/uapi/asm/kvm.h +++ b/arch/riscv/include/uapi/asm/kvm.h @@ -98,6 +98,18 @@ struct kvm_riscv_timer { __u64 state; }; +/* Memory region details of a CoVE guest that is measured at boot time */ +struct kvm_riscv_cove_measure_region { + /* Address of the user space where the VM code/data resides */ + unsigned long userspace_addr; + + /* The guest physical address where VM code/data should be mapped */ + unsigned long gpa; + + /* Size of the region */ + unsigned long size; +}; + /* * ISA extension IDs specific to KVM. 
This is not the same as the host ISA * extension IDs as that is internal to the host and should not be exposed diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index a55a6a5..84a73b5 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1552,6 +1552,8 @@ struct kvm_s390_ucas_mapping { #define KVM_PPC_SVM_OFF _IO(KVMIO, 0xb3) #define KVM_ARM_MTE_COPY_TAGS _IOR(KVMIO, 0xb4, struct kvm_arm_copy_mte_tags) +#define KVM_RISCV_COVE_MEASURE_REGION _IOR(KVMIO, 0xb5, struct kvm_riscv_cove_measure_region) + /* ioctl for vm fd */ #define KVM_CREATE_DEVICE _IOWR(KVMIO, 0xe0, struct kvm_create_device) From patchwork Wed Apr 19 22:16:37 2023 X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217493 From: Atish Patra To: linux-kernel@vger.kernel.org Subject: [RFC 09/48] RISC-V: KVM: Add CoVE related nacl helpers Date: Wed, 19 Apr 2023 15:16:37 -0700 Message-Id: <20230419221716.3603068-10-atishp@rivosinc.com> In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> The NACL SBI extension allows the scratch area to be customized per SBI extension. As per the COVH SBI extension, the scratch area stores the guest GPR state. Add some helpers to read/write GPRs easily. Signed-off-by: Atish Patra --- arch/riscv/include/asm/kvm_cove_sbi.h | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/arch/riscv/include/asm/kvm_cove_sbi.h b/arch/riscv/include/asm/kvm_cove_sbi.h index 24562df..df7d88c 100644 --- a/arch/riscv/include/asm/kvm_cove_sbi.h +++ b/arch/riscv/include/asm/kvm_cove_sbi.h @@ -17,6 +17,21 @@ #include #include +#include + +/** + * The CoVE SBI extension defines the NACL scratch memory layout: 
+ * uint64_t gprs[32] + * uint64_t reserved[224] + */ +#define get_scratch_gpr_offset(goffset) (goffset - KVM_ARCH_GUEST_ZERO) + +#define nacl_shmem_gpr_write_cove(__s, __g, __o) \ + nacl_shmem_scratch_write_long(__s, get_scratch_gpr_offset(__g), __o) + +#define nacl_shmem_gpr_read_cove(__s, __g) \ + nacl_shmem_scratch_read_long(__s, get_scratch_gpr_offset(__g)) + int sbi_covh_tsm_get_info(struct sbi_cove_tsm_info *tinfo_addr); int sbi_covh_tvm_initiate_fence(unsigned long tvmid); int sbi_covh_tsm_initiate_fence(void); From patchwork Wed Apr 19 22:16:38 2023 X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217495 From: Atish Patra To: linux-kernel@vger.kernel.org Subject: [RFC 10/48] RISC-V: KVM: Implement static memory region measurement Date: Wed, 19 Apr 2023 15:16:38 -0700 Message-Id: <20230419221716.3603068-11-atishp@rivosinc.com> In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> To support attestation of any images loaded by the VMM, the COVH extension allows measuring these memory regions. Currently, this is used for the kernel image, device tree, and initrd images. Signed-off-by: Atish Patra --- arch/riscv/include/asm/kvm_cove.h | 6 ++ arch/riscv/kvm/cove.c | 110 ++++++++++++++++++++++++++++++ 2 files changed, 116 insertions(+) diff --git a/arch/riscv/include/asm/kvm_cove.h b/arch/riscv/include/asm/kvm_cove.h index 3bf1bcd..4ea1df1 100644 --- a/arch/riscv/include/asm/kvm_cove.h +++ b/arch/riscv/include/asm/kvm_cove.h @@ -127,6 +127,7 @@ void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu); void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu); void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap); +int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm, struct kvm_riscv_cove_measure_region *mr); int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned long size); int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva); #else @@ -147,6 +148,11 @@ static inline void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu) {} static inline void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap) {} static inline 
int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned long size) {return -1; } +static inline int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm, + struct kvm_riscv_cove_measure_region *mr) +{ + return -1; +} static inline int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva) {return -1; } #endif /* CONFIG_RISCV_COVE_HOST */ diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c index d001e36..5b4d9ba 100644 --- a/arch/riscv/kvm/cove.c +++ b/arch/riscv/kvm/cove.c @@ -27,6 +27,12 @@ static DEFINE_SPINLOCK(cove_fence_lock); static bool riscv_cove_enabled; +static inline bool cove_is_within_region(unsigned long addr1, unsigned long size1, + unsigned long addr2, unsigned long size2) +{ + return ((addr1 <= addr2) && ((addr1 + size1) >= (addr2 + size2))); +} + static void kvm_cove_local_fence(void *info) { int rc; @@ -192,6 +198,109 @@ int kvm_riscv_cove_vcpu_init(struct kvm_vcpu *vcpu) return rc; } +int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm, struct kvm_riscv_cove_measure_region *mr) +{ + struct kvm_cove_tvm_context *tvmc = kvm->arch.tvmc; + int rc = 0, idx, num_pages; + struct kvm_riscv_cove_mem_region *conf; + struct page *pinned_page, *conf_page; + struct kvm_riscv_cove_page *cpage; + + if (!tvmc) + return -EFAULT; + + if (tvmc->finalized_done) { + kvm_err("measured_mr pages can not be added after finalize\n"); + return -EINVAL; + } + + num_pages = bytes_to_pages(mr->size); + conf = &tvmc->confidential_region; + + if (!IS_ALIGNED(mr->userspace_addr, PAGE_SIZE) || + !IS_ALIGNED(mr->gpa, PAGE_SIZE) || !mr->size || + !cove_is_within_region(conf->gpa, conf->npages << PAGE_SHIFT, mr->gpa, mr->size)) + return -EINVAL; + + idx = srcu_read_lock(&kvm->srcu); + + /*TODO: Iterate one page at a time as pinning multiple pages fail with unmapped panic + * with a virtual address range belonging to vmalloc region for some reason. 
+ */ + while (num_pages) { + if (signal_pending(current)) { + rc = -ERESTARTSYS; + break; + } + + if (need_resched()) + cond_resched(); + + rc = get_user_pages_fast(mr->userspace_addr, 1, 0, &pinned_page); + if (rc < 0) { + kvm_err("Pinning the userspace addr %lx failed\n", mr->userspace_addr); + break; + } + + /* Not enough pages are available to be pinned */ + if (rc != 1) { + rc = -ENOMEM; + break; + } + conf_page = alloc_page(GFP_KERNEL | __GFP_ZERO); + if (!conf_page) { + rc = -ENOMEM; + break; + } + + rc = cove_convert_pages(page_to_phys(conf_page), 1, true); + if (rc) + break; + + /* TODO: Support other page sizes */ + rc = sbi_covh_add_measured_pages(tvmc->tvm_guest_id, page_to_phys(pinned_page), + page_to_phys(conf_page), SBI_COVE_PAGE_4K, + 1, mr->gpa); + if (rc) + break; + + /* Unpin the page now */ + put_page(pinned_page); + + cpage = kmalloc(sizeof(*cpage), GFP_KERNEL_ACCOUNT); + if (!cpage) { + rc = -ENOMEM; + break; + } + + cpage->page = conf_page; + cpage->npages = 1; + cpage->gpa = mr->gpa; + cpage->hva = mr->userspace_addr; + cpage->is_mapped = true; + INIT_LIST_HEAD(&cpage->link); + list_add(&cpage->link, &tvmc->measured_pages); + + mr->userspace_addr += PAGE_SIZE; + mr->gpa += PAGE_SIZE; + num_pages--; + conf_page = NULL; + + continue; + } + srcu_read_unlock(&kvm->srcu, idx); + + if (rc < 0) { + /* We don't need to unpin pages as they are allocated by the hypervisor itself */ + cove_delete_page_list(kvm, &tvmc->measured_pages, false); + /* Free the last allocated page for which conversion/measurement failed */ + kfree(conf_page); + kvm_err("Adding/Converting measured pages failed %d\n", num_pages); + } + + return rc; +} + int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned long size) { int rc; @@ -244,6 +353,7 @@ void kvm_riscv_cove_vm_destroy(struct kvm *kvm) } cove_delete_page_list(kvm, &tvmc->reclaim_pending_pages, false); + cove_delete_page_list(kvm, &tvmc->measured_pages, false); /* Reclaim and Free the pages for tvm 
state management */ rc = sbi_covh_tsm_reclaim_pages(page_to_phys(tvmc->tvm_state.page), tvmc->tvm_state.npages); From patchwork Wed Apr 19 22:16:39 2023 X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217496 From: Atish Patra To: linux-kernel@vger.kernel.org Subject: [RFC 11/48] RISC-V: KVM: Use the new VM IOCTL for measuring pages Date: Wed, 19 Apr 2023 15:16:39 -0700 Message-Id: <20230419221716.3603068-12-atishp@rivosinc.com> In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> The newly introduced VM IOCTL allows the VMM to measure the pages used to load the blobs. Hook up the VM ioctl. Signed-off-by: Atish Patra --- arch/riscv/kvm/vm.c | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-) diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c index 29d3221..1b59a8f 100644 --- a/arch/riscv/kvm/vm.c +++ b/arch/riscv/kvm/vm.c @@ -11,6 +11,7 @@ #include #include #include +#include const struct _kvm_stats_desc kvm_vm_stats_desc[] = { KVM_GENERIC_VM_STATS() @@ -209,5 +210,20 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { - return -EINVAL; + struct kvm *kvm = filp->private_data; + void __user *argp = (void __user *)arg; + struct kvm_riscv_cove_measure_region mr; + + switch (ioctl) { + case KVM_RISCV_COVE_MEASURE_REGION: + if (!is_cove_vm(kvm)) + return -EINVAL; + if (copy_from_user(&mr, argp, sizeof(mr))) + return -EFAULT; + + return kvm_riscv_cove_vm_measure_pages(kvm, &mr); + default: + return -EINVAL; + } + } From patchwork Wed Apr 19 22:16:40 2023 X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217498 From: Atish Patra To: linux-kernel@vger.kernel.org Subject: [RFC 12/48] RISC-V: KVM: Exit to the user space for trap redirection Date: Wed, 19 Apr 2023 15:16:40 -0700 Message-Id: <20230419221716.3603068-13-atishp@rivosinc.com> In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> Currently, the trap redirection to the guest happens in the following cases: 1. Illegal instruction trap 2. Virtual instruction trap 3. Unsuccessful unpriv read Allowing the host to cause traps in the TVM directly is problematic. The TSM doesn't support trap redirection yet. 
Ideally, the host should not end up in one of these situations where it has to redirect the trap. If it happens, exit to the userspace with an error as it can't forward the trap to the TVM. If any use case arises in the future, it has to be coordinated through the TSM. Signed-off-by: Atish Patra --- arch/riscv/kvm/vcpu_exit.c | 9 ++++++++- arch/riscv/kvm/vcpu_insn.c | 17 +++++++++++++++++ 2 files changed, 25 insertions(+), 1 deletion(-) diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index 4ea101a..0d0c895 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -9,6 +9,7 @@ #include #include #include +#include static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_cpu_trap *trap) @@ -135,8 +136,14 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu, void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap) { - unsigned long vsstatus = csr_read(CSR_VSSTATUS); + unsigned long vsstatus; + if (is_cove_vcpu(vcpu)) { + kvm_err("RISC-V KVM does not support redirecting traps to a CoVE guest yet\n"); + return; + } + + vsstatus = csr_read(CSR_VSSTATUS); /* Change Guest SSTATUS.SPP bit */ vsstatus &= ~SR_SPP; if (vcpu->arch.guest_context.sstatus & SR_SPP) diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c index 7a6abed..331489f 100644 --- a/arch/riscv/kvm/vcpu_insn.c +++ b/arch/riscv/kvm/vcpu_insn.c @@ -6,6 +6,7 @@ #include #include +#include #define INSN_OPCODE_MASK 0x007c #define INSN_OPCODE_SHIFT 2 @@ -153,6 +154,10 @@ static int truly_illegal_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, { struct kvm_cpu_trap utrap = { 0 }; + /* The host can not redirect any illegal instruction trap to TVM */ + if (unlikely(is_cove_vcpu(vcpu))) + return -EPERM; + /* Redirect trap to Guest VCPU */ utrap.sepc = vcpu->arch.guest_context.sepc; utrap.scause = EXC_INST_ILLEGAL; @@ -169,6 +174,10 @@ static int truly_virtual_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, { 
	struct kvm_cpu_trap utrap = { 0 };

+	/* The host cannot redirect any virtual instruction trap to the TVM */
+	if (unlikely(is_cove_vcpu(vcpu)))
+		return -EPERM;
+
	/* Redirect trap to Guest VCPU */
	utrap.sepc = vcpu->arch.guest_context.sepc;
	utrap.scause = EXC_VIRTUAL_INST_FAULT;
@@ -417,6 +426,10 @@ int kvm_riscv_vcpu_virtual_insn(struct kvm_vcpu *vcpu, struct kvm_run *run,
	if (unlikely(INSN_IS_16BIT(insn))) {
		if (insn == 0) {
			ct = &vcpu->arch.guest_context;
+
+			if (unlikely(is_cove_vcpu(vcpu)))
+				return -EPERM;
+
			insn = kvm_riscv_vcpu_unpriv_read(vcpu, true,
							  ct->sepc,
							  &utrap);
@@ -469,6 +482,8 @@ int kvm_riscv_vcpu_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run,
		insn = htinst | INSN_16BIT_MASK;
		insn_len = (htinst & BIT(1)) ? INSN_LEN(insn) : 2;
	} else {
+		if (unlikely(is_cove_vcpu(vcpu)))
+			return -EFAULT;
		/*
		 * Bit[0] == 0 implies trapped instruction value is
		 * zero or special value.
@@ -595,6 +610,8 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run,
		insn = htinst | INSN_16BIT_MASK;
		insn_len = (htinst & BIT(1)) ? INSN_LEN(insn) : 2;
	} else {
+		if (unlikely(is_cove_vcpu(vcpu)))
+			return -EFAULT;
		/*
		 * Bit[0] == 0 implies trapped instruction value is
		 * zero or special value.
From patchwork Wed Apr 19 22:16:41 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217496
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 13/48] RISC-V: KVM: Return early for gstage modifications
Date: Wed, 19 Apr 2023 15:16:41 -0700
Message-Id:
<20230419221716.3603068-14-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

The gstage entries for a CoVE VM are managed by the TSM. Return early
from any gstage PTE modification operation.

Signed-off-by: Atish Patra
---
 arch/riscv/kvm/mmu.c | 28 ++++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 6b037f7..9693897 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -16,6 +16,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 #include
 #include
@@ -356,6 +359,11 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
		.gfp_zero = __GFP_ZERO,
	};

+	if (is_cove_vm(kvm)) {
+		kvm_debug("%s: KVM doesn't support ioremap for TVM io regions\n", __func__);
+		return -EPERM;
+	}
+
	end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
	pfn = __phys_to_pfn(hpa);
@@ -385,6 +393,10 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,

 void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size)
 {
+	/* KVM doesn't map any IO region in gstage for a TVM */
+	if (is_cove_vm(kvm))
+		return;
+
	spin_lock(&kvm->mmu_lock);
	gstage_unmap_range(kvm, gpa, size, false);
	spin_unlock(&kvm->mmu_lock);
@@ -431,6 +443,10 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
	gpa_t gpa = slot->base_gfn << PAGE_SHIFT;
	phys_addr_t size = slot->npages << PAGE_SHIFT;

+	/* No need to unmap gstage as it is managed by the TSM */
+	if (is_cove_vm(kvm))
+		return;
+
	spin_lock(&kvm->mmu_lock);
	gstage_unmap_range(kvm, gpa, size, false);
	spin_unlock(&kvm->mmu_lock);
@@ -547,7 +563,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,

 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	if (!kvm->arch.pgd)
+	if (!kvm->arch.pgd || is_cove_vm(kvm))
		return false;
	gstage_unmap_range(kvm, range->start << PAGE_SHIFT,
@@ -561,7 +577,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
	int ret;
	kvm_pfn_t pfn = pte_pfn(range->pte);

-	if (!kvm->arch.pgd)
+	if (!kvm->arch.pgd || is_cove_vm(kvm))
		return false;

	WARN_ON(range->end - range->start != 1);
@@ -582,7 +598,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
	u32 ptep_level = 0;
	u64 size = (range->end - range->start) << PAGE_SHIFT;

-	if (!kvm->arch.pgd)
+	if (!kvm->arch.pgd || is_cove_vm(kvm))
		return false;

	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
@@ -600,7 +616,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
	u32 ptep_level = 0;
	u64 size = (range->end - range->start) << PAGE_SHIFT;

-	if (!kvm->arch.pgd)
+	if (!kvm->arch.pgd || is_cove_vm(kvm))
		return false;

	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
@@ -737,6 +753,10 @@ void kvm_riscv_gstage_free_pgd(struct kvm *kvm)
 {
	void *pgd = NULL;

+	/* The PGD is mapped in the TSM */
+	if (is_cove_vm(kvm))
+		return;
+
	spin_lock(&kvm->mmu_lock);
	if (kvm->arch.pgd) {
		gstage_unmap_range(kvm, 0UL, gstage_gpa_size, false);

From patchwork Wed Apr 19 22:16:42 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217499
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 14/48] RISC-V: KVM: Skip dirty logging updates for TVM
Date: Wed, 19 Apr 2023 15:16:42 -0700
Message-Id: <20230419221716.3603068-15-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

CoVE doesn't support dirty logging for TVMs yet, so skip it for now.
Signed-off-by: Atish Patra
---
 arch/riscv/kvm/mmu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 9693897..1d5e4ed 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -457,6 +457,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
				   const struct kvm_memory_slot *new,
				   enum kvm_mr_change change)
 {
+	/* We don't support dirty logging for CoVE guests yet */
+	if (is_cove_vm(kvm))
+		return;
	/*
	 * At this point memslot has been committed and there is an
	 * allocated dirty_bitmap[], dirty pages will be tracked while

From patchwork Wed Apr 19 22:16:43 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217497
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 15/48] RISC-V: KVM: Add a helper function to trigger fence ops
Date: Wed, 19 Apr 2023 15:16:43 -0700
Message-Id: <20230419221716.3603068-16-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

When CoVE is enabled in RISC-V, TLB shootdown happens in coordination
with the TSM. The host must not issue hfence directly; it relies on the
TSM to do that instead. The host only needs to initiate the process and
make sure that all running vCPUs exit guest mode. As a result, each vCPU
traps to the TSM, and the TSM issues the hfence on behalf of the host.
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_cove.h |  2 ++
 arch/riscv/kvm/cove.c             | 36 +++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/arch/riscv/include/asm/kvm_cove.h b/arch/riscv/include/asm/kvm_cove.h
index 4ea1df1..fc8633d 100644
--- a/arch/riscv/include/asm/kvm_cove.h
+++ b/arch/riscv/include/asm/kvm_cove.h
@@ -130,6 +130,8 @@ void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *tr
 int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm, struct kvm_riscv_cove_measure_region *mr);
 int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned long size);
 int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva);
+/* Fence related function */
+int kvm_riscv_cove_tvm_fence(struct kvm_vcpu *vcpu);
 #else
 static inline bool kvm_riscv_cove_enabled(void) {return false; };
 static inline int kvm_riscv_cove_init(void) { return -1; }
diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c
index 5b4d9ba..4efcae3 100644
--- a/arch/riscv/kvm/cove.c
+++ b/arch/riscv/kvm/cove.c
@@ -78,6 +78,42 @@ static int kvm_riscv_cove_fence(void)
	return rc;
 }

+int kvm_riscv_cove_tvm_fence(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cove_tvm_context *tvmc = kvm->arch.tvmc;
+	DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
+	unsigned long i;
+	struct kvm_vcpu *temp_vcpu;
+	int ret;
+
+	if (!tvmc)
+		return -EINVAL;
+
+	spin_lock(&tvmc->tvm_fence_lock);
+	ret = sbi_covh_tvm_initiate_fence(tvmc->tvm_guest_id);
+	if (ret) {
+		spin_unlock(&tvmc->tvm_fence_lock);
+		return ret;
+	}
+
+	bitmap_clear(vcpu_mask, 0, KVM_MAX_VCPUS);
+	kvm_for_each_vcpu(i, temp_vcpu, kvm) {
+		if (temp_vcpu != vcpu)
+			bitmap_set(vcpu_mask, i, 1);
+	}
+
+	/*
+	 * The host just needs to make sure that the running vcpus exit the
+	 * guest mode and traps into TSM so that it can issue hfence.
+	 */
+	kvm_make_vcpus_request_mask(kvm, KVM_REQ_OUTSIDE_GUEST_MODE, vcpu_mask);
+	spin_unlock(&tvmc->tvm_fence_lock);
+
+	return 0;
+}
+
 static int cove_convert_pages(unsigned long phys_addr, unsigned long npages, bool fence)
 {
	int rc;

From patchwork Wed Apr 19 22:16:44 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217500
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 16/48] RISC-V: KVM: Skip most VCPU requests for TVMs
Date: Wed, 19 Apr 2023 15:16:44 -0700
Message-Id: <20230419221716.3603068-17-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

Currently, KVM manages TLB shootdown, hgatp updates, and fence.i through
vCPU requests. TLB shootdown for TVMs happens in coordination with the
TSM, and the fence.i and hgatp updates are managed directly by the TSM.
There is no need to issue these requests directly for TVMs.

Signed-off-by: Atish Patra
---
 arch/riscv/kvm/vcpu.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index c53bf98..3b600c6 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include

 const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
	KVM_GENERIC_VCPU_STATS(),
@@ -1078,6 +1079,15 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
	if (kvm_check_request(KVM_REQ_VCPU_RESET, vcpu))
		kvm_riscv_reset_vcpu(vcpu);

+	if (is_cove_vcpu(vcpu)) {
+		/*
+		 * KVM doesn't need to do anything special here,
+		 * as the TSM is expected to track the TLB version and issue
+		 * an hfence when the vcpu is scheduled again.
+		 */
+		return;
+	}
+
	if (kvm_check_request(KVM_REQ_UPDATE_HGATP, vcpu))
		kvm_riscv_gstage_update_hgatp(vcpu);

From patchwork Wed Apr 19 22:16:45 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217501
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 17/48] RISC-V: KVM: Skip vmid/hgatp management for TVMs
Date: Wed, 19 Apr
2023 15:16:45 -0700
Message-Id: <20230419221716.3603068-18-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

The TSM manages the VMIDs for guests running in CoVE, so the host
doesn't need to update the VMID at all. As a result, the host doesn't
need to update hgatp either. Return early from the vmid/hgatp management
functions for confidential guests.

Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h |  2 +-
 arch/riscv/kvm/mmu.c              |  4 ++++
 arch/riscv/kvm/vcpu.c             |  2 +-
 arch/riscv/kvm/vmid.c             | 17 ++++++++++++-----
 4 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index ca2ebe3..047e046 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -325,7 +325,7 @@ unsigned long kvm_riscv_gstage_pgd_size(void);
 void __init kvm_riscv_gstage_vmid_detect(void);
 unsigned long kvm_riscv_gstage_vmid_bits(void);
 int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
-bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
+bool kvm_riscv_gstage_vmid_ver_changed(struct kvm *kvm);
 void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);

 int kvm_riscv_setup_default_irq_routing(struct kvm *kvm, u32 lines);
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 1d5e4ed..4b0f09e 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -778,6 +778,10 @@ void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu)
	unsigned long hgatp = gstage_mode;
	struct kvm_arch *k = &vcpu->kvm->arch;

+	/* COVE VCPU hgatp is managed by TSM.
+	 */
+	if (is_cove_vcpu(vcpu))
+		return;
+
	hgatp |= (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) & HGATP_VMID;
	hgatp |= (k->pgd_phys >> PAGE_SHIFT) & HGATP_PPN;
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 3b600c6..8cf462c 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -1288,7 +1288,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
	kvm_riscv_update_hvip(vcpu);

	if (ret <= 0 ||
-	    kvm_riscv_gstage_vmid_ver_changed(&vcpu->kvm->arch.vmid) ||
+	    kvm_riscv_gstage_vmid_ver_changed(vcpu->kvm) ||
	    kvm_request_pending(vcpu) ||
	    xfer_to_guest_mode_work_pending()) {
		vcpu->mode = OUTSIDE_GUEST_MODE;
diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c
index ddc9871..dc03601 100644
--- a/arch/riscv/kvm/vmid.c
+++ b/arch/riscv/kvm/vmid.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include

 static unsigned long vmid_version = 1;
 static unsigned long vmid_next;
@@ -54,12 +55,13 @@ int kvm_riscv_gstage_vmid_init(struct kvm *kvm)
	return 0;
 }

-bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid)
+bool kvm_riscv_gstage_vmid_ver_changed(struct kvm *kvm)
 {
-	if (!vmid_bits)
+	/* VMID version can't be changed by the host for TVMs */
+	if (!vmid_bits || is_cove_vm(kvm))
		return false;

-	return unlikely(READ_ONCE(vmid->vmid_version) !=
+	return unlikely(READ_ONCE(kvm->arch.vmid.vmid_version) !=
			READ_ONCE(vmid_version));
 }

@@ -72,9 +74,14 @@ void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu)
 {
	unsigned long i;
	struct kvm_vcpu *v;
+	struct kvm *kvm = vcpu->kvm;
	struct kvm_vmid *vmid = &vcpu->kvm->arch.vmid;

-	if (!kvm_riscv_gstage_vmid_ver_changed(vmid))
+	/* No VMID management for TVMs by the host */
+	if (is_cove_vcpu(vcpu))
+		return;
+
+	if (!kvm_riscv_gstage_vmid_ver_changed(kvm))
		return;

	spin_lock(&vmid_lock);
@@ -83,7 +90,7 @@ void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu)
	 * We need to re-check the vmid_version here to ensure that if
	 * another vcpu already allocated a valid vmid for this vm.
*/ - if (!kvm_riscv_gstage_vmid_ver_changed(vmid)) { + if (!kvm_riscv_gstage_vmid_ver_changed(kvm)) { spin_unlock(&vmid_lock); return; } From patchwork Wed Apr 19 22:16:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217502
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra , Alexandre Ghiti , Andrew Jones , Andrew Morton , Anup Patel , Atish Patra , Björn Töpel , Suzuki K Poulose , Will Deacon , Marc Zyngier , Sean Christopherson , linux-coco@lists.linux.dev, Dylan Reid , abrestic@rivosinc.com, Samuel Ortiz , Christoph Hellwig , Conor Dooley , Greg Kroah-Hartman , Guo Ren , Heiko Stuebner , Jiri Slaby , kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale , Palmer Dabbelt , Paolo Bonzini , Paul Walmsley , Rajnesh Kanwal , Uladzislau Rezki Subject: [RFC 18/48] RISC-V: KVM: Skip TLB
management for TVMs Date: Wed, 19 Apr 2023 15:16:46 -0700 Message-Id: <20230419221716.3603068-19-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The TSM manages the TLB entries for TVMs, so the host can ignore all hfence requests and TLB updates for confidential guests. Most hfence requests happen through vcpu requests, which are already skipped for TVMs, so we only need to handle the invocation from the local TLB management code here. Signed-off-by: Atish Patra --- arch/riscv/kvm/tlb.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c index dff37b57..b007c02 100644 --- a/arch/riscv/kvm/tlb.c +++ b/arch/riscv/kvm/tlb.c @@ -15,6 +15,7 @@ #include #include #include +#include #define has_svinval() riscv_has_extension_unlikely(RISCV_ISA_EXT_SVINVAL) @@ -72,6 +73,14 @@ void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz, void kvm_riscv_local_hfence_gvma_all(void) { + /* For TVMs, TSM will take care of hfence. + * TODO: We can't skip unconditionally if cove is enabled + * as the host may be running in HS-mode and need to issue hfence + * for legacy VMs.
+ */ + if (kvm_riscv_cove_enabled()) + return; + asm volatile(HFENCE_GVMA(zero, zero) : : : "memory"); } @@ -160,7 +169,7 @@ void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu) { unsigned long vmid; - if (!kvm_riscv_gstage_vmid_bits() || + if (is_cove_vcpu(vcpu) || !kvm_riscv_gstage_vmid_bits() || vcpu->arch.last_exit_cpu == vcpu->cpu) return; From patchwork Wed Apr 19 22:16:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217503
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra , Alexandre Ghiti , Andrew Jones , Andrew Morton , Anup Patel , Atish Patra , Björn Töpel , Suzuki K Poulose , Will Deacon , Marc Zyngier , Sean Christopherson , linux-coco@lists.linux.dev, Dylan Reid , abrestic@rivosinc.com, Samuel Ortiz , Christoph Hellwig , Conor Dooley , Greg Kroah-Hartman , Guo Ren , Heiko Stuebner , Jiri Slaby , kvm-riscv@lists.infradead.org,
kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale , Palmer Dabbelt , Paolo Bonzini , Paul Walmsley , Rajnesh Kanwal , Uladzislau Rezki Subject: [RFC 19/48] RISC-V: KVM: Register memory regions as confidential for TVMs Date: Wed, 19 Apr 2023 15:16:47 -0700 Message-Id: <20230419221716.3603068-20-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The entire DRAM region of a TVM running in CoVE must be confidential by default. If a TVM wishes to share any sub-region, it has to request that explicitly via the memory-share APIs. Mark the memory region as confidential during VM creation itself. Signed-off-by: Atish Patra --- arch/riscv/kvm/mmu.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index 4b0f09e..63889d9 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -499,6 +499,11 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, mmap_read_lock(current->mm); + if (is_cove_vm(kvm)) { + ret = kvm_riscv_cove_vm_add_memreg(kvm, base_gpa, size); + if (ret) + return ret; + } /* * A memory region could potentially cover multiple VMAs, and * any holes between them, so iterate over all of them to find From patchwork Wed Apr 19 22:16:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217504
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra , Alexandre Ghiti , Andrew Jones , Andrew Morton , Anup Patel , Atish Patra , Björn Töpel , Suzuki K Poulose , Will Deacon , Marc Zyngier , Sean Christopherson , linux-coco@lists.linux.dev, Dylan Reid , abrestic@rivosinc.com, Samuel Ortiz , Christoph Hellwig , Conor Dooley , Greg Kroah-Hartman , Guo Ren , Heiko Stuebner , Jiri Slaby , kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale , Palmer Dabbelt , Paolo Bonzini , Paul Walmsley , Rajnesh Kanwal , Uladzislau Rezki Subject: [RFC 20/48] RISC-V: KVM: Add gstage mapping for TVMs Date: Wed, 19 Apr 2023 15:16:48 -0700 Message-Id: <20230419221716.3603068-21-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org For a TVM, the gstage mapping is managed by the TSM via COVH SBI calls. The host is responsible for allocating the page, which must be pinned to avoid swapping. The page is converted to confidential before being handed over to the TSM for gstage mapping.
Signed-off-by: Atish Patra --- arch/riscv/kvm/cove.c | 63 +++++++++++++++++++++++++++++++++++++- arch/riscv/kvm/vcpu_exit.c | 9 ++++-- 2 files changed, 69 insertions(+), 3 deletions(-) diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c index 4efcae3..44095f6 100644 --- a/arch/riscv/kvm/cove.c +++ b/arch/riscv/kvm/cove.c @@ -149,8 +149,68 @@ void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu) int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva) { - /* TODO */ + struct kvm_riscv_cove_page *tpage; + struct mm_struct *mm = current->mm; + struct kvm *kvm = vcpu->kvm; + unsigned int flags = FOLL_LONGTERM | FOLL_WRITE | FOLL_HWPOISON; + struct page *page; + int rc; + struct kvm_cove_tvm_context *tvmc = kvm->arch.tvmc; + + tpage = kmalloc(sizeof(*tpage), GFP_KERNEL_ACCOUNT); + if (!tpage) + return -ENOMEM; + + mmap_read_lock(mm); + rc = pin_user_pages(hva, 1, flags, &page, NULL); + mmap_read_unlock(mm); + + if (rc == -EHWPOISON) { + send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva, + PAGE_SHIFT, current); + rc = 0; + goto free_tpage; + } else if (rc != 1) { + rc = -EFAULT; + goto free_tpage; + } else if (!PageSwapBacked(page)) { + rc = -EIO; + goto free_tpage; + } + + rc = cove_convert_pages(page_to_phys(page), 1, true); + if (rc) + goto unpin_page; + + rc = sbi_covh_add_zero_pages(tvmc->tvm_guest_id, page_to_phys(page), + SBI_COVE_PAGE_4K, 1, gpa); + if (rc) { + pr_err("%s: Adding zero pages failed %d\n", __func__, rc); + goto zero_page_failed; + } + tpage->page = page; + tpage->npages = 1; + tpage->is_mapped = true; + tpage->gpa = gpa; + tpage->hva = hva; + INIT_LIST_HEAD(&tpage->link); + + spin_lock(&kvm->mmu_lock); + list_add(&tpage->link, &kvm->arch.tvmc->zero_pages); + spin_unlock(&kvm->mmu_lock); + return 0; + +zero_page_failed: + //TODO: Do we need to reclaim the page now or VM gets destroyed ? 
+ +unpin_page: + unpin_user_pages(&page, 1); + +free_tpage: + kfree(tpage); + + return rc; } void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap) @@ -390,6 +450,7 @@ void kvm_riscv_cove_vm_destroy(struct kvm *kvm) cove_delete_page_list(kvm, &tvmc->reclaim_pending_pages, false); cove_delete_page_list(kvm, &tvmc->measured_pages, false); + cove_delete_page_list(kvm, &tvmc->zero_pages, true); /* Reclaim and Free the pages for tvm state management */ rc = sbi_covh_tsm_reclaim_pages(page_to_phys(tvmc->tvm_state.page), tvmc->tvm_state.npages); diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index 0d0c895..d00b9ee5 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -41,8 +41,13 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run, }; } - ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva, - (trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false); + if (is_cove_vcpu(vcpu)) { + /* CoVE doesn't care about PTE prots now. No need to compute the prots */ + ret = kvm_riscv_cove_gstage_map(vcpu, fault_addr, hva); + } else { + ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva, + (trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? 
true : false); + } if (ret < 0) return ret; From patchwork Wed Apr 19 22:16:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217532
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra , Alexandre Ghiti , Andrew Jones , Andrew Morton , Anup Patel , Atish Patra , Björn Töpel , Suzuki K Poulose , Will Deacon , Marc Zyngier , Sean Christopherson , linux-coco@lists.linux.dev, Dylan Reid , abrestic@rivosinc.com, Samuel Ortiz , Christoph Hellwig , Conor Dooley , Greg Kroah-Hartman , Guo Ren , Heiko Stuebner , Jiri Slaby , kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale , Palmer Dabbelt , Paolo Bonzini , Paul Walmsley , Rajnesh Kanwal , Uladzislau Rezki Subject: [RFC 21/48] RISC-V: KVM: Handle SBI call forward from the TSM Date: Wed, 19 Apr 2023 15:16:49 -0700 Message-Id:
<20230419221716.3603068-22-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The TSM may forward some SBI calls to the host, as the host is the best place to handle them. Any call related to hart state management, the console, or the guest-side interface (COVG) falls under this category. Add a CoVE-specific ecall handler to take the appropriate action upon receiving these SBI calls. Signed-off-by: Atish Patra --- arch/riscv/include/asm/kvm_cove.h | 5 +++ arch/riscv/kvm/cove.c | 54 +++++++++++++++++++++++++++++++ arch/riscv/kvm/vcpu_exit.c | 6 +++- arch/riscv/kvm/vcpu_sbi.c | 2 ++ 4 files changed, 66 insertions(+), 1 deletion(-) diff --git a/arch/riscv/include/asm/kvm_cove.h b/arch/riscv/include/asm/kvm_cove.h index fc8633d..b63682f 100644 --- a/arch/riscv/include/asm/kvm_cove.h +++ b/arch/riscv/include/asm/kvm_cove.h @@ -126,6 +126,7 @@ int kvm_riscv_cove_vcpu_init(struct kvm_vcpu *vcpu); void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu); void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu); void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap); +int kvm_riscv_cove_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run); int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm, struct kvm_riscv_cove_measure_region *mr); int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned long size); @@ -148,6 +149,10 @@ static inline int kvm_riscv_cove_vcpu_init(struct kvm_vcpu *vcpu) {return -1; } static inline void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu) {} static inline void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu) {} static inline void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap) {} +static inline int kvm_riscv_cove_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) +{ +
return -1; +} static inline int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned long size) {return -1; } static inline int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm, diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c index 44095f6..87fa04b 100644 --- a/arch/riscv/kvm/cove.c +++ b/arch/riscv/kvm/cove.c @@ -147,6 +147,60 @@ void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu) /* TODO */ } +int kvm_riscv_cove_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) +{ + void *nshmem; + const struct kvm_vcpu_sbi_extension *sbi_ext; + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + struct kvm_cpu_trap utrap = { 0 }; + struct kvm_vcpu_sbi_return sbi_ret = { + .out_val = 0, + .err_val = 0, + .utrap = &utrap, + }; + bool ext_is_01 = false; + int ret = 1; + + nshmem = nacl_shmem(); + cp->a0 = nacl_shmem_gpr_read_cove(nshmem, KVM_ARCH_GUEST_A0); + cp->a1 = nacl_shmem_gpr_read_cove(nshmem, KVM_ARCH_GUEST_A1); + cp->a6 = nacl_shmem_gpr_read_cove(nshmem, KVM_ARCH_GUEST_A6); + cp->a7 = nacl_shmem_gpr_read_cove(nshmem, KVM_ARCH_GUEST_A7); + + /* TSM will only forward legacy console to the host */ +#ifdef CONFIG_RISCV_SBI_V01 + if (cp->a7 == SBI_EXT_0_1_CONSOLE_PUTCHAR) + ext_is_01 = true; +#endif + + sbi_ext = kvm_vcpu_sbi_find_ext(vcpu, cp->a7); + if ((sbi_ext && sbi_ext->handler) && ((cp->a7 == SBI_EXT_DBCN) || + (cp->a7 == SBI_EXT_HSM) || (cp->a7 == SBI_EXT_SRST) || ext_is_01)) { + ret = sbi_ext->handler(vcpu, run, &sbi_ret); + } else { + kvm_err("%s: SBI EXT %lx not supported for TVM\n", __func__, cp->a7); + /* Return error for unsupported SBI calls */ + sbi_ret.err_val = SBI_ERR_NOT_SUPPORTED; + goto ecall_done; + } + + if (ret < 0) + goto ecall_done; + + ret = (sbi_ret.uexit) ? 0 : 1; + +ecall_done: + /* + * No need to update the sepc as TSM will take care of SEPC increment + * for ECALLS that won't be forwarded to the user space (e.g. 
console) */ + nacl_shmem_gpr_write_cove(nshmem, KVM_ARCH_GUEST_A0, sbi_ret.err_val); + if (!ext_is_01) + nacl_shmem_gpr_write_cove(nshmem, KVM_ARCH_GUEST_A1, sbi_ret.out_val); + + return ret; +} + int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva) { struct kvm_riscv_cove_page *tpage; diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index d00b9ee5..8944e29 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -207,11 +207,15 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, case EXC_INST_GUEST_PAGE_FAULT: case EXC_LOAD_GUEST_PAGE_FAULT: case EXC_STORE_GUEST_PAGE_FAULT: + //TODO: If the host runs in HS mode, this won't work as we don't + //read hstatus from the shared memory yet if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) ret = gstage_page_fault(vcpu, run, trap); break; case EXC_SUPERVISOR_SYSCALL: - if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) + if (is_cove_vcpu(vcpu)) + ret = kvm_riscv_cove_vcpu_sbi_ecall(vcpu, run); + else if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) ret = kvm_riscv_vcpu_sbi_ecall(vcpu, run); break; default: diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index 047ba10..d2f43bc 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -10,6 +10,8 @@ #include #include #include +#include +#include #include #ifndef CONFIG_RISCV_SBI_V01 From patchwork Wed Apr 19 22:16:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217534
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra , Rajnesh Kanwal , Alexandre Ghiti , Andrew Jones , Andrew Morton , Anup Patel , Atish Patra , Björn Töpel , Suzuki K Poulose , Will Deacon , Marc Zyngier , Sean Christopherson , linux-coco@lists.linux.dev, Dylan Reid , abrestic@rivosinc.com, Samuel Ortiz , Christoph Hellwig , Conor Dooley , Greg Kroah-Hartman , Guo Ren , Heiko Stuebner , Jiri Slaby , kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale , Palmer Dabbelt , Paolo Bonzini , Paul Walmsley , Uladzislau Rezki Subject: [RFC 22/48] RISC-V: KVM: Implement vcpu load/put functions for CoVE guests Date: Wed, 19 Apr 2023 15:16:50 -0700 Message-Id: <20230419221716.3603068-23-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The TSM takes care of most of the H extension CSR/fp save/restore for any guest running in CoVE. It may choose to do the fp save/restore lazily as well. The host only has to do minimal operations, such as timer save/restore and interrupt state restore, during vcpu load/put.
Signed-off-by: Rajnesh Kanwal
Signed-off-by: Atish Patra
---
 arch/riscv/kvm/cove.c | 12 ++++++++++--
 arch/riscv/kvm/vcpu.c | 12 +++++++++++-
 2 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c
index 87fa04b..c93de9b 100644
--- a/arch/riscv/kvm/cove.c
+++ b/arch/riscv/kvm/cove.c
@@ -139,12 +139,20 @@ __always_inline bool kvm_riscv_cove_enabled(void)
 void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu)
 {
-	/* TODO */
+	kvm_riscv_vcpu_timer_restore(vcpu);
 }
 
 void kvm_riscv_cove_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	/* TODO */
+	void *nshmem;
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+
+	kvm_riscv_vcpu_timer_save(vcpu);
+	/* NACL is mandatory for CoVE */
+	nshmem = nacl_shmem();
+
+	/* Only VSIE needs to be read to manage the interrupt stuff */
+	csr->vsie = nacl_shmem_csr_read(nshmem, CSR_VSIE);
 }
 
 int kvm_riscv_cove_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 8cf462c..3e04b78 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -972,6 +972,11 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	u64 henvcfg = kvm_riscv_vcpu_get_henvcfg(vcpu->arch.isa);
 	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
 
+	if (is_cove_vcpu(vcpu)) {
+		kvm_riscv_cove_vcpu_load(vcpu);
+		goto skip_load;
+	}
+
 	if (kvm_riscv_nacl_sync_csr_available()) {
 		nshmem = nacl_shmem();
 		nacl_shmem_csr_write(nshmem, CSR_VSSTATUS, csr->vsstatus);
@@ -1010,9 +1015,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		kvm_riscv_vcpu_host_fp_save(&vcpu->arch.host_context);
 	kvm_riscv_vcpu_guest_fp_restore(&vcpu->arch.guest_context,
 					vcpu->arch.isa);
-
 	kvm_riscv_vcpu_aia_load(vcpu, cpu);
 
+skip_load:
 	vcpu->cpu = cpu;
 }
 
@@ -1023,6 +1028,11 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 
 	vcpu->cpu = -1;
 
+	if (is_cove_vcpu(vcpu)) {
+		kvm_riscv_cove_vcpu_put(vcpu);
+		return;
+	}
+
 	kvm_riscv_vcpu_aia_put(vcpu);
 	kvm_riscv_vcpu_guest_fp_save(&vcpu->arch.guest_context,

From patchwork Wed Apr 19 22:16:51 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217535
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 23/48] RISC-V: KVM: Wireup TVM world switch
Date: Wed, 19 Apr 2023 15:16:51 -0700
Message-Id:
<20230419221716.3603068-24-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

TVM world switch takes a different path from the regular VM world switch, as the host needs to make an ecall to the TSM, and the TSM actually does the world switch. The host doesn't need to save/restore any context, as the TSM is expected to do that on behalf of the host. The TSM updates the trap information in the shared memory, which the host uses to figure out the cause of the guest exit.

Signed-off-by: Atish Patra
---
 arch/riscv/kvm/cove.c      | 31 +++++++++++++++++++++++++++++--
 arch/riscv/kvm/vcpu.c      | 11 +++++++++++
 arch/riscv/kvm/vcpu_exit.c | 10 ++++++++++
 3 files changed, 50 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c
index c93de9b..c11db7a 100644
--- a/arch/riscv/kvm/cove.c
+++ b/arch/riscv/kvm/cove.c
@@ -275,9 +275,36 @@ int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hv
 	return rc;
 }
 
-void kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap)
+void noinstr kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap)
 {
-	/* TODO */
+	int rc;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
+	void *nshmem;
+
+	if (!kvm->arch.tvmc)
+		return;
+
+	tvmc = kvm->arch.tvmc;
+
+	nshmem = nacl_shmem();
+	/* Invoke finalize to mark the TVM as ready to run for the first time */
+	if (unlikely(!tvmc->finalized_done)) {
+		rc = sbi_covh_tsm_finalize_tvm(tvmc->tvm_guest_id, cntx->sepc, cntx->a1);
+		if (rc) {
+			kvm_err("TVM finalize failed with %d\n", rc);
+			return;
+		}
+		tvmc->finalized_done = true;
+	}
+
+	rc = sbi_covh_run_tvm_vcpu(tvmc->tvm_guest_id, vcpu->vcpu_idx);
+	if (rc) {
+		trap->scause = EXC_CUSTOM_KVM_COVE_RUN_FAIL;
+		return;
 	}
 }
 
 void kvm_riscv_cove_vcpu_destroy(struct kvm_vcpu *vcpu)

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 3e04b78..43a0b8c 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -1042,6 +1042,11 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_riscv_vcpu_timer_save(vcpu);
 
 	if (kvm_riscv_nacl_available()) {
+		/*
+		 * For TVMs, we don't need a separate case, as the TSM only
+		 * updates the required CSRs during the world switch. All
+		 * other CSR values should be zeroed out by the TSM anyway.
+		 */
 		nshmem = nacl_shmem();
 		csr->vsstatus = nacl_shmem_csr_read(nshmem, CSR_VSSTATUS);
 		csr->vsie = nacl_shmem_csr_read(nshmem, CSR_VSIE);
@@ -1191,6 +1196,12 @@ static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 			gcntx->hstatus = csr_swap(CSR_HSTATUS, hcntx->hstatus);
 		}
 
+		trap->htval = nacl_shmem_csr_read(nshmem, CSR_HTVAL);
+		trap->htinst = nacl_shmem_csr_read(nshmem, CSR_HTINST);
+	} else if (is_cove_vcpu(vcpu)) {
+		nshmem = nacl_shmem();
+		kvm_riscv_cove_vcpu_switchto(vcpu, trap);
+
 		trap->htval = nacl_shmem_csr_read(nshmem, CSR_HTVAL);
 		trap->htinst = nacl_shmem_csr_read(nshmem, CSR_HTINST);
 	} else {

diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 8944e29..c46e7f2 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -218,6 +218,15 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		else if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
 			ret = kvm_riscv_vcpu_sbi_ecall(vcpu, run);
 		break;
+	case EXC_CUSTOM_KVM_COVE_RUN_FAIL:
+		if (likely(is_cove_vcpu(vcpu))) {
+			ret = -EACCES;
+			run->fail_entry.hardware_entry_failure_reason =
+						KVM_EXIT_FAIL_ENTRY_COVE_RUN_VCPU;
+			run->fail_entry.cpu = vcpu->cpu;
+			run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+		}
+		break;
 	default:
 		break;
 	}
@@ -225,6 +234,7 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	/* Print details in case of error */
 	if (ret < 0) {
 		kvm_err("VCPU exit error %d\n", ret);
+		//TODO: These values are
bogus/stale for a TVM. Improve it
 		kvm_err("SEPC=0x%lx SSTATUS=0x%lx HSTATUS=0x%lx\n",
 			vcpu->arch.guest_context.sepc,
 			vcpu->arch.guest_context.sstatus,

From patchwork Wed Apr 19 22:16:52 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217536
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 24/48] RISC-V: KVM: Update timer
functionality for TVMs.
Date: Wed, 19 Apr 2023 15:16:52 -0700
Message-Id: <20230419221716.3603068-25-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

From: Rajnesh Kanwal

The TSM manages htimedelta/vstimecmp for the TVM and shares them with the host so the host can schedule the hrtimer properly and keep the timer interrupt ticking. The TSM only sets htimedelta when the first VCPU is run, to make sure the host is not able to control the start time of the VM. The TSM updates vstimecmp at every VM exit and ignores any write to vstimecmp from the host.

Signed-off-by: Rajnesh Kanwal
Signed-off-by: Atish Patra
---
 arch/riscv/kvm/cove.c       |  8 ++++++++
 arch/riscv/kvm/vcpu_timer.c | 26 +++++++++++++++++++++++++-
 2 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c
index c11db7a..4a8a8db 100644
--- a/arch/riscv/kvm/cove.c
+++ b/arch/riscv/kvm/cove.c
@@ -282,6 +282,7 @@ void noinstr kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_
 	struct kvm_cove_tvm_context *tvmc;
 	struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
 	void *nshmem;
+	struct kvm_guest_timer *gt = &kvm->arch.timer;
 
 	if (!kvm->arch.tvmc)
 		return;
@@ -305,6 +306,13 @@ void noinstr kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_
 		trap->scause = EXC_CUSTOM_KVM_COVE_RUN_FAIL;
 		return;
 	}
+
+	/* Read htimedelta from shmem. Given it's written by the TSM only when we
+	 * run the first VCPU, we need to update this here rather than in timer
+	 * init.
+	 */
+	if (unlikely(!gt->time_delta))
+		gt->time_delta = nacl_shmem_csr_read(nshmem, CSR_HTIMEDELTA);
 }
 
 void kvm_riscv_cove_vcpu_destroy(struct kvm_vcpu *vcpu)

diff --git a/arch/riscv/kvm/vcpu_timer.c b/arch/riscv/kvm/vcpu_timer.c
index 71a4560..f059e14 100644
--- a/arch/riscv/kvm/vcpu_timer.c
+++ b/arch/riscv/kvm/vcpu_timer.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 
 static u64 kvm_riscv_current_cycles(struct kvm_guest_timer *gt)
 {
@@ -71,6 +72,10 @@ static int kvm_riscv_vcpu_timer_cancel(struct kvm_vcpu_timer *t)
 
 static int kvm_riscv_vcpu_update_vstimecmp(struct kvm_vcpu *vcpu, u64 ncycles)
 {
+	/* The host is not allowed to update vstimecmp for the TVM */
+	if (is_cove_vcpu(vcpu))
+		return 0;
+
 #if defined(CONFIG_32BIT)
 	nacl_csr_write(CSR_VSTIMECMP, ncycles & 0xFFFFFFFF);
 	nacl_csr_write(CSR_VSTIMECMPH, ncycles >> 32);
@@ -221,6 +226,11 @@ int kvm_riscv_vcpu_set_reg_timer(struct kvm_vcpu *vcpu,
 		ret = -EOPNOTSUPP;
 		break;
 	case KVM_REG_RISCV_TIMER_REG(time):
+		/* For trusted VMs we cannot update htimedelta. We can just
+		 * read it from shared memory.
+		 */
+		if (is_cove_vcpu(vcpu))
+			return -EOPNOTSUPP;
 		gt->time_delta = reg_val - get_cycles64();
 		break;
 	case KVM_REG_RISCV_TIMER_REG(compare):
@@ -287,6 +297,7 @@ static void kvm_riscv_vcpu_update_timedelta(struct kvm_vcpu *vcpu)
 {
 	struct kvm_guest_timer *gt = &vcpu->kvm->arch.timer;
+
 #if defined(CONFIG_32BIT)
 	nacl_csr_write(CSR_HTIMEDELTA, (u32)(gt->time_delta));
 	nacl_csr_write(CSR_HTIMEDELTAH, (u32)(gt->time_delta >> 32));
@@ -299,6 +310,10 @@ void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vcpu_timer *t = &vcpu->arch.timer;
 
+	/* While in CoVE, the host must not manage HTIMEDELTA or VSTIMECMP for the TVM */
+	if (is_cove_vcpu(vcpu))
+		goto skip_hcsr_update;
+
 	kvm_riscv_vcpu_update_timedelta(vcpu);
 
 	if (!t->sstc_enabled)
@@ -311,6 +326,7 @@ void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu)
 	nacl_csr_write(CSR_VSTIMECMP, t->next_cycles);
 #endif
 
+skip_hcsr_update:
 	/* timer should be enabled for the remaining operations */
 	if (unlikely(!t->init_done))
 		return;
@@ -358,5 +374,13 @@ void kvm_riscv_guest_timer_init(struct kvm *kvm)
 	struct kvm_guest_timer *gt = &kvm->arch.timer;
 
 	riscv_cs_get_mult_shift(&gt->nsec_mult, &gt->nsec_shift);
-	gt->time_delta = -get_cycles64();
+	if (is_cove_vm(kvm)) {
+		/* For TVMs, htimedelta is managed by the TSM and is communicated over the
+		 * NACL shmem interface when the first VCPU is run, so we read it in
+		 * kvm_riscv_cove_vcpu_switchto() where we enter VCPUs.
+		 */
+		gt->time_delta = 0;
+	} else {
+		gt->time_delta = -get_cycles64();
+	}
 }

From patchwork Wed Apr 19 22:16:53 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217537
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 25/48] RISC-V: KVM: Skip HVIP update for TVMs
Date: Wed, 19 Apr 2023 15:16:53 -0700
Message-Id: <20230419221716.3603068-26-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

Skip the HVIP update, as the host shouldn't be able to inject interrupts directly into a TVM.

Signed-off-by: Atish Patra
---
 arch/riscv/kvm/vcpu.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 43a0b8c..20d4800 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -822,7 +822,10 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
 	/* Read current HVIP and VSIE CSRs */
 	csr->vsie = nacl_csr_read(CSR_VSIE);
 
-	/* Sync-up HVIP.VSSIP bit changes does by Guest */
+	/*
+	 * Sync up HVIP.VSSIP bit changes done by the Guest. For TVMs,
+	 * HVIP is not updated by the TSM. Expect it to be zero.
+	 */
 	hvip = nacl_csr_read(CSR_HVIP);
 	if ((csr->hvip ^ hvip) & (1UL << IRQ_VS_SOFT)) {
 		if (hvip & (1UL << IRQ_VS_SOFT)) {
@@ -1305,8 +1308,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	 */
 	kvm_riscv_vcpu_flush_interrupts(vcpu);
 
-	/* Update HVIP CSR for current CPU */
-	kvm_riscv_update_hvip(vcpu);
+	/* Update the HVIP CSR for the current CPU only for non-TVMs */
+	if (!is_cove_vcpu(vcpu))
+		kvm_riscv_update_hvip(vcpu);
 
 	if (ret <= 0 ||
 	    kvm_riscv_gstage_vmid_ver_changed(vcpu->kvm) ||

From patchwork Wed Apr 19 22:16:54 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217538
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 26/48] RISC-V: Add COVI extension definitions
Date: Wed, 19 Apr 2023 15:16:54 -0700
Message-Id: <20230419221716.3603068-27-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>

From: Rajnesh Kanwal

This patch adds the CoVE interrupt management extension (COVI) definitions to the SBI header file.
Signed-off-by: Atish Patra
Signed-off-by: Rajnesh Kanwal
---
 arch/riscv/include/asm/sbi.h | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index c5a5526..bbea922 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -33,6 +33,7 @@ enum sbi_ext_id {
 	SBI_EXT_DBCN = 0x4442434E,
 	SBI_EXT_NACL = 0x4E41434C,
 	SBI_EXT_COVH = 0x434F5648,
+	SBI_EXT_COVI = 0x434F5649,
 
 	/* Experimentals extensions must lie within this range */
 	SBI_EXT_EXPERIMENTAL_START = 0x08000000,
@@ -369,6 +370,20 @@ enum sbi_ext_covh_fid {
 	SBI_EXT_COVH_TVM_INITIATE_FENCE,
 };
 
+enum sbi_ext_covi_fid {
+	SBI_EXT_COVI_TVM_AIA_INIT,
+	SBI_EXT_COVI_TVM_CPU_SET_IMSIC_ADDR,
+	SBI_EXT_COVI_TVM_CONVERT_IMSIC,
+	SBI_EXT_COVI_TVM_RECLAIM_IMSIC,
+	SBI_EXT_COVI_TVM_CPU_BIND_IMSIC,
+	SBI_EXT_COVI_TVM_CPU_UNBIND_IMSIC_BEGIN,
+	SBI_EXT_COVI_TVM_CPU_UNBIND_IMSIC_END,
+	SBI_EXT_COVI_TVM_CPU_INJECT_EXT_INTERRUPT,
+	SBI_EXT_COVI_TVM_REBIND_IMSIC_BEGIN,
+	SBI_EXT_COVI_TVM_REBIND_IMSIC_CLONE,
+	SBI_EXT_COVI_TVM_REBIND_IMSIC_END,
+};
+
 enum sbi_cove_page_type {
 	SBI_COVE_PAGE_4K,
 	SBI_COVE_PAGE_2MB,
@@ -409,6 +424,21 @@ struct sbi_cove_tvm_create_params {
 	unsigned long tvm_state_addr;
 };
 
+struct sbi_cove_tvm_aia_params {
+	/* The base address is the address of the IMSIC with group ID, hart ID, and guest ID of 0 */
+	uint64_t imsic_base_addr;
+	/* The number of group index bits in an IMSIC address */
+	uint32_t group_index_bits;
+	/* The location of the group index in an IMSIC address. Must be >= 24. */
+	uint32_t group_index_shift;
+	/* The number of hart index bits in an IMSIC address */
+	uint32_t hart_index_bits;
+	/* The number of guest index bits in an IMSIC address.
Must be >= log2(guests/hart + 1) */
+	uint32_t guest_index_bits;
+	/* The number of guest interrupt files to be implemented per vCPU */
+	uint32_t guests_per_hart;
+};
+
 #define SBI_SPEC_VERSION_DEFAULT	0x1
 #define SBI_SPEC_VERSION_MAJOR_SHIFT	24
 #define SBI_SPEC_VERSION_MAJOR_MASK	0x7f

From patchwork Wed Apr 19 22:16:55 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217533
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org,
linux-riscv@lists.infradead.org, Mayuresh Chitale, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley, Uladzislau Rezki
Subject: [RFC 27/48] RISC-V: KVM: Implement COVI SBI extension
Date: Wed, 19 Apr 2023 15:16:55 -0700
Message-Id: <20230419221716.3603068-28-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

The CoVE specification defines a separate SBI extension to manage interrupts in a TVM. This extension is known as COVI, as both the host and the guest interface access these functions. This patch implements the functions defined by COVI.

Co-developed-by: Rajnesh Kanwal
Signed-off-by: Rajnesh Kanwal
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_cove_sbi.h |  20 ++++
 arch/riscv/kvm/cove_sbi.c             | 164 ++++++++++++++++++++++++++
 2 files changed, 184 insertions(+)

diff --git a/arch/riscv/include/asm/kvm_cove_sbi.h b/arch/riscv/include/asm/kvm_cove_sbi.h
index df7d88c..0759f70 100644
--- a/arch/riscv/include/asm/kvm_cove_sbi.h
+++ b/arch/riscv/include/asm/kvm_cove_sbi.h
@@ -32,6 +32,7 @@
 #define nacl_shmem_gpr_read_cove(__s, __g) \
 	nacl_shmem_scratch_read_long(__s, get_scratch_gpr_offset(__g))
 
+/* Functions related to CoVE Host Interface (COVH) Extension */
 int sbi_covh_tsm_get_info(struct sbi_cove_tsm_info *tinfo_addr);
 int sbi_covh_tvm_initiate_fence(unsigned long tvmid);
 int sbi_covh_tsm_initiate_fence(void);
@@ -58,4 +59,23 @@ int sbi_covh_create_tvm_vcpu(unsigned long tvmid, unsigned long tvm_vcpuid,
 int sbi_covh_run_tvm_vcpu(unsigned long tvmid, unsigned long tvm_vcpuid);
 
+/* Functions related to CoVE Interrupt Management (COVI) Extension */
+int sbi_covi_tvm_aia_init(unsigned long tvm_gid, struct sbi_cove_tvm_aia_params *tvm_aia_params);
+int sbi_covi_set_vcpu_imsic_addr(unsigned long tvm_gid, unsigned long vcpu_id,
+				 unsigned long imsic_addr);
+int 
sbi_covi_convert_imsic(unsigned long imsic_addr);
+int sbi_covi_reclaim_imsic(unsigned long imsic_addr);
+int sbi_covi_bind_vcpu_imsic(unsigned long tvm_gid, unsigned long vcpu_id,
+			     unsigned long imsic_mask);
+int sbi_covi_unbind_vcpu_imsic_begin(unsigned long tvm_gid, unsigned long vcpu_id);
+int sbi_covi_unbind_vcpu_imsic_end(unsigned long tvm_gid, unsigned long vcpu_id);
+int sbi_covi_inject_external_interrupt(unsigned long tvm_gid, unsigned long vcpu_id,
+				       unsigned long interrupt_id);
+int sbi_covi_rebind_vcpu_imsic_begin(unsigned long tvm_gid, unsigned long vcpu_id,
+				     unsigned long imsic_mask);
+int sbi_covi_rebind_vcpu_imsic_clone(unsigned long tvm_gid, unsigned long vcpu_id);
+int sbi_covi_rebind_vcpu_imsic_end(unsigned long tvm_gid, unsigned long vcpu_id);
+
 #endif

diff --git a/arch/riscv/kvm/cove_sbi.c b/arch/riscv/kvm/cove_sbi.c
index bf037f6..a8901ac 100644
--- a/arch/riscv/kvm/cove_sbi.c
+++ b/arch/riscv/kvm/cove_sbi.c
@@ -18,6 +18,170 @@
 #define RISCV_COVE_ALIGN_4KB (1UL << 12)
 
+int sbi_covi_tvm_aia_init(unsigned long tvm_gid,
+			  struct sbi_cove_tvm_aia_params *tvm_aia_params)
+{
+	struct sbiret ret;
+	unsigned long pa = __pa(tvm_aia_params);
+
+	ret = sbi_ecall(SBI_EXT_COVI, SBI_EXT_COVI_TVM_AIA_INIT, tvm_gid, pa,
+			sizeof(*tvm_aia_params), 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covi_set_vcpu_imsic_addr(unsigned long tvm_gid, unsigned long vcpu_id,
+				 unsigned long imsic_addr)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVI, SBI_EXT_COVI_TVM_CPU_SET_IMSIC_ADDR,
+			tvm_gid, vcpu_id, imsic_addr, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+/*
+ * Converts the guest interrupt file at `imsic_addr` for use with a TVM.
+ * The guest interrupt file must not be used by the caller until reclaim.
+ */
+int sbi_covi_convert_imsic(unsigned long imsic_addr)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVI, SBI_EXT_COVI_TVM_CONVERT_IMSIC,
+			imsic_addr, 0, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covi_reclaim_imsic(unsigned long imsic_addr)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVI, SBI_EXT_COVI_TVM_RECLAIM_IMSIC,
+			imsic_addr, 0, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+/*
+ * Binds a vCPU to this physical CPU and the specified set of confidential guest
+ * interrupt files.
+ */
+int sbi_covi_bind_vcpu_imsic(unsigned long tvm_gid, unsigned long vcpu_id,
+			     unsigned long imsic_mask)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVI, SBI_EXT_COVI_TVM_CPU_BIND_IMSIC, tvm_gid,
+			vcpu_id, imsic_mask, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+/*
+ * Begins the unbind process for the specified vCPU from this physical CPU and its guest
+ * interrupt files. The host must complete a TLB invalidation sequence for the TVM before
+ * completing the unbind with `unbind_vcpu_imsic_end()`.
+ */
+int sbi_covi_unbind_vcpu_imsic_begin(unsigned long tvm_gid,
+				     unsigned long vcpu_id)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVI, SBI_EXT_COVI_TVM_CPU_UNBIND_IMSIC_BEGIN,
+			tvm_gid, vcpu_id, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+/*
+ * Completes the unbind process for the specified vCPU from this physical CPU and its guest
+ * interrupt files.
+ */
+int sbi_covi_unbind_vcpu_imsic_end(unsigned long tvm_gid, unsigned long vcpu_id)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVI, SBI_EXT_COVI_TVM_CPU_UNBIND_IMSIC_END,
+			tvm_gid, vcpu_id, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+/*
+ * Injects an external interrupt into the specified vCPU.
+ * The interrupt ID must have been allowed with `allow_external_interrupt()`
+ * by the guest.
+ */
+int sbi_covi_inject_external_interrupt(unsigned long tvm_gid,
+				       unsigned long vcpu_id,
+				       unsigned long interrupt_id)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVI, SBI_EXT_COVI_TVM_CPU_INJECT_EXT_INTERRUPT,
+			tvm_gid, vcpu_id, interrupt_id, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covi_rebind_vcpu_imsic_begin(unsigned long tvm_gid,
+				     unsigned long vcpu_id,
+				     unsigned long imsic_mask)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVI, SBI_EXT_COVI_TVM_REBIND_IMSIC_BEGIN,
+			tvm_gid, vcpu_id, imsic_mask, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covi_rebind_vcpu_imsic_clone(unsigned long tvm_gid,
+				     unsigned long vcpu_id)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVI, SBI_EXT_COVI_TVM_REBIND_IMSIC_CLONE,
+			tvm_gid, vcpu_id, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covi_rebind_vcpu_imsic_end(unsigned long tvm_gid, unsigned long vcpu_id)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVI, SBI_EXT_COVI_TVM_REBIND_IMSIC_END,
+			tvm_gid, vcpu_id, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
 int sbi_covh_tsm_get_info(struct sbi_cove_tsm_info *tinfo_addr)
 {
 	struct sbiret ret;

From patchwork Wed Apr 19 22:16:56 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217539
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 28/48] RISC-V: KVM: Add interrupt management functions for TVM
Date: Wed, 19 Apr 2023 15:16:56 -0700
Message-Id: <20230419221716.3603068-29-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

The COVI SBI extension defines the functions related to interrupt management for TVMs. These functions are the glue logic between the AIA code and the actual CoVE Interrupt SBI extension (COVI).
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/kvm_cove.h |  34 ++++
 arch/riscv/kvm/cove.c             | 256 ++++++++++++++++++++++++++
 2 files changed, 290 insertions(+)

diff --git a/arch/riscv/include/asm/kvm_cove.h b/arch/riscv/include/asm/kvm_cove.h
index b63682f..74bad2f 100644
--- a/arch/riscv/include/asm/kvm_cove.h
+++ b/arch/riscv/include/asm/kvm_cove.h
@@ -61,10 +61,19 @@ struct kvm_riscv_cove_page {
 	unsigned long gpa;
 };
 
+struct imsic_tee_state {
+	bool bind_required;
+	bool bound;
+	int vsfile_hgei;
+};
+
 struct kvm_cove_tvm_vcpu_context {
 	struct kvm_vcpu *vcpu;
 	/* Pages storing each vcpu state of the TVM in TSM */
 	struct kvm_riscv_cove_page vcpu_state;
+
+	/* Per VCPU imsic state */
+	struct imsic_tee_state imsic;
 };
 
 struct kvm_cove_tvm_context {
@@ -133,6 +142,16 @@ int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned lo
 int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva);
 /* Fence related function */
 int kvm_riscv_cove_tvm_fence(struct kvm_vcpu *vcpu);
+
+/* AIA related CoVE functions */
+int kvm_riscv_cove_aia_init(struct kvm *kvm);
+int kvm_riscv_cove_vcpu_inject_interrupt(struct kvm_vcpu *vcpu, unsigned long iid);
+int kvm_riscv_cove_vcpu_imsic_unbind(struct kvm_vcpu *vcpu, int old_cpu);
+int kvm_riscv_cove_vcpu_imsic_bind(struct kvm_vcpu *vcpu, unsigned long imsic_mask);
+int kvm_riscv_cove_vcpu_imsic_rebind(struct kvm_vcpu *vcpu, int old_pcpu);
+int kvm_riscv_cove_aia_claim_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa);
+int kvm_riscv_cove_aia_convert_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa);
+int kvm_riscv_cove_vcpu_imsic_addr(struct kvm_vcpu *vcpu);
 #else
 static inline bool kvm_riscv_cove_enabled(void) { return false; };
 static inline int kvm_riscv_cove_init(void) { return -1; }
@@ -162,6 +181,21 @@ static inline int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm,
 }
 static inline int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa,
 					    unsigned long hva) { return -1; }
+/* AIA related TEE functions */
+static inline int kvm_riscv_cove_aia_init(struct kvm *kvm) { return -1; }
+static inline int kvm_riscv_cove_vcpu_inject_interrupt(struct kvm_vcpu *vcpu,
+						       unsigned long iid) { return -1; }
+static inline int kvm_riscv_cove_vcpu_imsic_unbind(struct kvm_vcpu *vcpu,
+						   int old_cpu) { return -1; }
+static inline int kvm_riscv_cove_vcpu_imsic_bind(struct kvm_vcpu *vcpu,
+						 unsigned long imsic_mask) { return -1; }
+static inline int kvm_riscv_cove_aia_claim_imsic(struct kvm_vcpu *vcpu,
+						 phys_addr_t imsic_pa) { return -1; }
+static inline int kvm_riscv_cove_aia_convert_imsic(struct kvm_vcpu *vcpu,
+						   phys_addr_t imsic_pa) { return -1; }
+static inline int kvm_riscv_cove_vcpu_imsic_addr(struct kvm_vcpu *vcpu) { return -1; }
+static inline int kvm_riscv_cove_vcpu_imsic_rebind(struct kvm_vcpu *vcpu,
+						   int old_pcpu) { return -1; }
 #endif /* CONFIG_RISCV_COVE_HOST */
 
 #endif /* __KVM_RISCV_COVE_H */

diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c
index 4a8a8db..154b01a 100644
--- a/arch/riscv/kvm/cove.c
+++ b/arch/riscv/kvm/cove.c
@@ -8,6 +8,7 @@
  * Atish Patra
  */
 
+#include
 #include
 #include
 #include
@@ -137,6 +138,247 @@ __always_inline bool kvm_riscv_cove_enabled(void)
 	return riscv_cove_enabled;
 }
 
+static void kvm_cove_imsic_clone(void *info)
+{
+	int rc;
+	struct kvm_vcpu *vcpu = info;
+	struct kvm *kvm = vcpu->kvm;
+
+	rc = sbi_covi_rebind_vcpu_imsic_clone(kvm->arch.tvmc->tvm_guest_id, vcpu->vcpu_idx);
+	if (rc)
+		kvm_err("Imsic clone failed guest %ld vcpu %d pcpu %d\n",
+			kvm->arch.tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+}
+
+static void kvm_cove_imsic_unbind(void *info)
+{
+	struct kvm_vcpu *vcpu = info;
+	struct kvm_cove_tvm_context *tvmc = vcpu->kvm->arch.tvmc;
+
+	/* TODO: We probably want to return an error, but a remote function call can't return a value */
+	if (sbi_covi_unbind_vcpu_imsic_begin(tvmc->tvm_guest_id, vcpu->vcpu_idx))
+		return;
+
+	/* This may issue IPIs to running vcpus.
+	 */
+	if (kvm_riscv_cove_tvm_fence(vcpu))
+		return;
+
+	if (sbi_covi_unbind_vcpu_imsic_end(tvmc->tvm_guest_id, vcpu->vcpu_idx))
+		return;
+
+	kvm_info("Unbind success for guest %ld vcpu %d pcpu %d\n",
+		 tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+}
+
+int kvm_riscv_cove_vcpu_imsic_addr(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_vcpu_aia *vaia = &vcpu->arch.aia_context;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	ret = sbi_covi_set_vcpu_imsic_addr(tvmc->tvm_guest_id, vcpu->vcpu_idx, vaia->imsic_addr);
+	if (ret)
+		return -EPERM;
+
+	return 0;
+}
+
+int kvm_riscv_cove_aia_convert_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa)
+{
+	struct kvm *kvm = vcpu->kvm;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	ret = sbi_covi_convert_imsic(imsic_pa);
+	if (ret)
+		return -EPERM;
+
+	ret = kvm_riscv_cove_fence();
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+int kvm_riscv_cove_aia_claim_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa)
+{
+	int ret;
+	struct kvm *kvm = vcpu->kvm;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	ret = sbi_covi_reclaim_imsic(imsic_pa);
+	if (ret)
+		return -EPERM;
+
+	return 0;
+}
+
+int kvm_riscv_cove_vcpu_imsic_rebind(struct kvm_vcpu *vcpu, int old_pcpu)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cove_tvm_vcpu_context *tvcpu = vcpu->arch.tc;
+	int ret;
+	cpumask_t tmpmask;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	ret = sbi_covi_rebind_vcpu_imsic_begin(tvmc->tvm_guest_id, vcpu->vcpu_idx,
+					       BIT(tvcpu->imsic.vsfile_hgei));
+	if (ret) {
+		kvm_err("Imsic rebind begin failed guest %ld vcpu %d pcpu %d\n",
+			tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+		return ret;
+	}
+
+	ret = kvm_riscv_cove_tvm_fence(vcpu);
+	if (ret)
+		return ret;
+
+	cpumask_clear(&tmpmask);
+	cpumask_set_cpu(old_pcpu, &tmpmask);
+	on_each_cpu_mask(&tmpmask,
			 kvm_cove_imsic_clone, vcpu, 1);
+
+	ret = sbi_covi_rebind_vcpu_imsic_end(tvmc->tvm_guest_id, vcpu->vcpu_idx);
+	if (ret) {
+		kvm_err("Imsic rebind end failed guest %ld vcpu %d pcpu %d\n",
+			tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+		return ret;
+	}
+
+	tvcpu->imsic.bound = true;
+
+	return 0;
+}
+
+int kvm_riscv_cove_vcpu_imsic_bind(struct kvm_vcpu *vcpu, unsigned long imsic_mask)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cove_tvm_vcpu_context *tvcpu = vcpu->arch.tc;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	ret = sbi_covi_bind_vcpu_imsic(tvmc->tvm_guest_id, vcpu->vcpu_idx, imsic_mask);
+	if (ret) {
+		kvm_err("Imsic bind failed for imsic %lx guest %ld vcpu %d pcpu %d\n",
+			imsic_mask, tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+		return ret;
+	}
+	tvcpu->imsic.bound = true;
+	pr_debug("%s: bind success vcpu %d hgei %d pcpu %d\n", __func__,
+		 vcpu->vcpu_idx, tvcpu->imsic.vsfile_hgei, smp_processor_id());
+
+	return 0;
+}
+
+int kvm_riscv_cove_vcpu_imsic_unbind(struct kvm_vcpu *vcpu, int old_pcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cove_tvm_vcpu_context *tvcpu = vcpu->arch.tc;
+	cpumask_t tmpmask;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	/* No need to unbind if it is not bound already */
+	if (!tvcpu->imsic.bound)
+		return 0;
+
+	/* Clear the flag first, even on failure, to prevent repeated attempts */
+	tvcpu->imsic.bound = false;
+
+	if (smp_processor_id() == old_pcpu) {
+		kvm_cove_imsic_unbind(vcpu);
+	} else {
+		/* Unbind can be invoked from a different physical cpu */
+		cpumask_clear(&tmpmask);
+		cpumask_set_cpu(old_pcpu, &tmpmask);
+		on_each_cpu_mask(&tmpmask, kvm_cove_imsic_unbind, vcpu, 1);
+	}
+
+	return 0;
+}
+
+int kvm_riscv_cove_vcpu_inject_interrupt(struct kvm_vcpu *vcpu, unsigned long iid)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc =
 kvm->arch.tvmc;
+
+	ret = sbi_covi_inject_external_interrupt(tvmc->tvm_guest_id, vcpu->vcpu_idx, iid);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+int kvm_riscv_cove_aia_init(struct kvm *kvm)
+{
+	struct kvm_aia *aia = &kvm->arch.aia;
+	struct sbi_cove_tvm_aia_params *tvm_aia;
+	struct kvm_vcpu *vcpu;
+	struct kvm_cove_tvm_context *tvmc;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	/* Sanity Check */
+	if (aia->aplic_addr != KVM_RISCV_AIA_UNDEF_ADDR)
+		return -EINVAL;
+
+	/* TVMs must have a physical guest interrupt file */
+	if (aia->mode != KVM_DEV_RISCV_AIA_MODE_HWACCEL)
+		return -ENODEV;
+
+	tvm_aia = kzalloc(sizeof(*tvm_aia), GFP_KERNEL);
+	if (!tvm_aia)
+		return -ENOMEM;
+
+	/* Use the address of the IMSIC with group ID, hart ID and guest ID of 0 */
+	vcpu = kvm_get_vcpu_by_id(kvm, 0);
+	tvm_aia->imsic_base_addr = vcpu->arch.aia_context.imsic_addr;
+
+	tvm_aia->group_index_bits = aia->nr_group_bits;
+	tvm_aia->group_index_shift = aia->nr_group_shift;
+	tvm_aia->hart_index_bits = aia->nr_hart_bits;
+	tvm_aia->guest_index_bits = aia->nr_guest_bits;
+	/* Nested TVMs are not supported yet */
+	tvm_aia->guests_per_hart = 0;
+
+	ret = sbi_covi_tvm_aia_init(tvmc->tvm_guest_id, tvm_aia);
+	if (ret)
+		kvm_err("TVM AIA init failed with rc %d\n", ret);
+
+	return ret;
+}
+
 void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu)
 {
 	kvm_riscv_vcpu_timer_restore(vcpu);
@@ -283,6 +525,7 @@ void noinstr kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_
 	struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
 	void *nshmem;
 	struct kvm_guest_timer *gt = &kvm->arch.timer;
+	struct kvm_cove_tvm_vcpu_context *tvcpuc = vcpu->arch.tc;
 
 	if (!kvm->arch.tvmc)
 		return;
@@ -301,6 +544,19 @@ void noinstr kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_
 		tvmc->finalized_done = true;
 	}
 
+	/*
+	 * Bind the vsfile here instead of during the new vsfile allocation because
+	 * the COVH bind call requires the TVM to be in the finalized state.
+	 */
+	if (tvcpuc->imsic.bind_required) {
+		tvcpuc->imsic.bind_required = false;
+		rc = kvm_riscv_cove_vcpu_imsic_bind(vcpu, BIT(tvcpuc->imsic.vsfile_hgei));
+		if (rc) {
+			kvm_err("bind failed with rc %d\n", rc);
+			return;
+		}
+	}
+
 	rc = sbi_covh_run_tvm_vcpu(tvmc->tvm_guest_id, vcpu->vcpu_idx);
 	if (rc) {
 		trap->scause = EXC_CUSTOM_KVM_COVE_RUN_FAIL;

From patchwork Wed Apr 19 22:16:57 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217540
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 29/48] RISC-V: KVM: Skip AIA CSR updates for TVMs
Date: Wed, 19 Apr 2023 15:16:57 -0700
Message-Id: <20230419221716.3603068-30-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

For TVMs, the host must not support AIA CSR emulation. In addition, the CSR updates during vcpu load/put are unnecessary, as the CSR state must not be visible to the host.

Signed-off-by: Atish Patra
---
 arch/riscv/kvm/aia.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index 71216e1..e3da661 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 
 struct aia_hgei_control {
 	raw_spinlock_t lock;
@@ -134,7 +135,7 @@ void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
 
-	if (!kvm_riscv_aia_available())
+	if (!kvm_riscv_aia_available() || is_cove_vcpu(vcpu))
 		return;
 
 	csr_write(CSR_VSISELECT, csr->vsiselect);
@@ -152,7 +153,7 @@ void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
 
-	if (!kvm_riscv_aia_available())
+	if (!kvm_riscv_aia_available() || is_cove_vcpu(vcpu))
 		return;
 
 	csr->vsiselect = csr_read(CSR_VSISELECT);
@@ -370,6 +371,10 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
 	if (!kvm_riscv_aia_available())
 		return KVM_INSN_ILLEGAL_TRAP;
 
+	/* TVMs do not support AIA emulation */
+	if (is_cove_vcpu(vcpu))
+		return KVM_INSN_EXIT_TO_USER_SPACE;
+
 	/* First try to emulate in kernel space */
 	isel = csr_read(CSR_VSISELECT) & ISELECT_MASK;
 	if (isel >= ISELECT_IPRIO0 && isel <= ISELECT_IPRIO15)
@@ -529,6 +534,9 @@ void kvm_riscv_aia_enable(void)
 	if (!kvm_riscv_aia_available())
 		return;
 
+	if (unlikely(kvm_riscv_cove_enabled()))
+		goto enable_gext;
+
 	aia_set_hvictl(false);
 	csr_write(CSR_HVIPRIO1, 0x0);
 	csr_write(CSR_HVIPRIO2, 0x0);
@@ -539,6 +547,7 @@
 	csr_write(CSR_HVIPRIO2H, 0x0);
 #endif
 
+enable_gext:
 	/* Enable per-CPU SGEI interrupt */
 	enable_percpu_irq(hgei_parent_irq,
 			  irq_get_trigger_type(hgei_parent_irq));
@@ -559,7 +568,9 @@ void kvm_riscv_aia_disable(void)
 	csr_clear(CSR_HIE, BIT(IRQ_S_GEXT));
 	disable_percpu_irq(hgei_parent_irq);
 
-	aia_set_hvictl(false);
+	/* The host is not allowed to modify hvictl for TVMs */
+	if (!unlikely(kvm_riscv_cove_enabled()))
+		aia_set_hvictl(false);
 
 	raw_spin_lock_irqsave(&hgctrl->lock, flags);

From patchwork Wed Apr 19 22:16:58 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217541
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 30/48] RISC-V: KVM: Perform limited operations in hardware enable/disable
Date: Wed, 19 Apr 2023 15:16:58 -0700
Message-Id: <20230419221716.3603068-31-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

The hardware enable/disable path only needs to perform AIA/NACL enable/disable for TVMs. All other operations, i.e. interrupt/exception delegation and counter access, must be provided by the TSM, as the host doesn't have control over these operations for a TVM.

Signed-off-by: Atish Patra
---
 arch/riscv/kvm/main.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 45ee62d..842b78d 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg)
@@ -29,6 +30,15 @@ int kvm_arch_hardware_enable(void)
 	if (rc)
 		return rc;
 
+	/*
+	 * We just need to invoke aia enable for CoVE if the host is in VS mode.
+	 * However, if the host is running in HS mode, we need to initialize
+	 * other CSRs as well for legacy VMs.
+ * TODO: Handle host in HS mode use case. + */ + if (unlikely(kvm_riscv_cove_enabled())) + goto enable_aia; + hedeleg = 0; hedeleg |= (1UL << EXC_INST_MISALIGNED); hedeleg |= (1UL << EXC_BREAKPOINT); @@ -49,6 +59,7 @@ int kvm_arch_hardware_enable(void) csr_write(CSR_HVIP, 0); +enable_aia: kvm_riscv_aia_enable(); return 0; @@ -58,6 +69,8 @@ void kvm_arch_hardware_disable(void) { kvm_riscv_aia_disable(); + if (unlikely(kvm_riscv_cove_enabled())) + goto disable_nacl; /* * After clearing the hideleg CSR, the host kernel will receive * spurious interrupts if hvip CSR has pending interrupts and the @@ -69,6 +82,7 @@ void kvm_arch_hardware_disable(void) csr_write(CSR_HEDELEG, 0); csr_write(CSR_HIDELEG, 0); +disable_nacl: kvm_riscv_nacl_disable(); } From patchwork Wed Apr 19 22:16:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217542
From: Atish Patra To: linux-kernel@vger.kernel.org
Cc: Atish Patra , Alexandre Ghiti , Andrew Jones , Andrew Morton , Anup Patel , Atish Patra , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Suzuki K Poulose , Will Deacon , Marc Zyngier , Sean Christopherson , linux-coco@lists.linux.dev, Dylan Reid , abrestic@rivosinc.com, Samuel Ortiz , Christoph Hellwig , Conor Dooley , Greg Kroah-Hartman , Guo Ren , Heiko Stuebner , Jiri Slaby , kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale , Palmer Dabbelt , Paolo Bonzini , Paul Walmsley , Rajnesh Kanwal , Uladzislau Rezki Subject: [RFC 31/48] RISC-V: KVM: Indicate no support user space emulated IRQCHIP Date: Wed, 19 Apr 2023 15:16:59 -0700 Message-Id: <20230419221716.3603068-32-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The KVM_INTERRUPT IOCTL is used for the userspace-emulated IRQCHIP. The TEE use case does not support that yet. Return an appropriate error in case any VMM tries to invoke that operation.
Signed-off-by: Atish Patra --- arch/riscv/kvm/vcpu.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 20d4800..65f87e1 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -716,6 +716,9 @@ long kvm_arch_vcpu_async_ioctl(struct file *filp, if (ioctl == KVM_INTERRUPT) { struct kvm_interrupt irq; + /* We do not support user space emulated IRQCHIP for TVMs yet */ + if (is_cove_vcpu(vcpu)) + return -ENXIO; if (copy_from_user(&irq, argp, sizeof(irq))) return -EFAULT; From patchwork Wed Apr 19 22:17:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217543
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra , Rajnesh Kanwal , Alexandre Ghiti , Andrew Jones , Andrew Morton , Anup Patel , Atish Patra , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Suzuki K Poulose , Will Deacon , Marc Zyngier , Sean Christopherson ,
linux-coco@lists.linux.dev, Dylan Reid , abrestic@rivosinc.com, Samuel Ortiz , Christoph Hellwig , Conor Dooley , Greg Kroah-Hartman , Guo Ren , Heiko Stuebner , Jiri Slaby , kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale , Palmer Dabbelt , Paolo Bonzini , Paul Walmsley , Uladzislau Rezki Subject: [RFC 32/48] RISC-V: KVM: Add AIA support for TVMs Date: Wed, 19 Apr 2023 15:17:00 -0700 Message-Id: <20230419221716.3603068-33-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The AIA support for TVMs is split between the host and the TSM. While the host allocates the vsfile, the TSM controls the gstage mapping and any updates to it. The host must not be able to inject interrupts into a TVM. Thus, interrupt injection has to happen via the TSM, and only for the interrupts allowed by the guest. The swfile maintained by the host is not useful for TVMs either, as TVMs only work in HW_ACCEL mode. The TSM does maintain a swfile for the vcpu internally. The swfile allocation in the host is kept as is to avoid further bifurcation of the code.
Co-developed-by: Rajnesh Kanwal Signed-off-by: Rajnesh Kanwal Signed-off-by: Atish Patra --- arch/riscv/include/asm/kvm_cove.h | 6 +- arch/riscv/kvm/aia.c | 84 +++++++++++++++++--- arch/riscv/kvm/aia_device.c | 41 +++++++--- arch/riscv/kvm/aia_imsic.c | 127 +++++++++++++++++++++--------- 4 files changed, 195 insertions(+), 63 deletions(-) diff --git a/arch/riscv/include/asm/kvm_cove.h b/arch/riscv/include/asm/kvm_cove.h index 74bad2f..4367281 100644 --- a/arch/riscv/include/asm/kvm_cove.h +++ b/arch/riscv/include/asm/kvm_cove.h @@ -61,7 +61,7 @@ struct kvm_riscv_cove_page { unsigned long gpa; }; -struct imsic_tee_state { +struct imsic_cove_state { bool bind_required; bool bound; int vsfile_hgei; @@ -73,7 +73,7 @@ struct kvm_cove_tvm_vcpu_context { struct kvm_riscv_cove_page vcpu_state; /* Per VCPU imsic state */ - struct imsic_tee_state imsic; + struct imsic_cove_state imsic; }; struct kvm_cove_tvm_context { @@ -181,7 +181,7 @@ static inline int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm, } static inline int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva) {return -1; } -/* AIA related TEE functions */ +/* TVM interrupt management via AIA functions */ static inline int kvm_riscv_cove_aia_init(struct kvm *kvm) { return -1; } static inline int kvm_riscv_cove_vcpu_inject_interrupt(struct kvm_vcpu *vcpu, unsigned long iid) { return -1; } diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c index e3da661..88b91b5 100644 --- a/arch/riscv/kvm/aia.c +++ b/arch/riscv/kvm/aia.c @@ -20,6 +20,8 @@ struct aia_hgei_control { raw_spinlock_t lock; unsigned long free_bitmap; + /* Tracks if a hgei is converted to confidential mode */ + unsigned long nconf_bitmap; struct kvm_vcpu *owners[BITS_PER_LONG]; }; static DEFINE_PER_CPU(struct aia_hgei_control, aia_hgei); @@ -391,34 +393,96 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num, int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner, void __iomem **hgei_va,
phys_addr_t *hgei_pa) { - int ret = -ENOENT; - unsigned long flags; + int ret = -ENOENT, rc; + bool reclaim_needed = false; + unsigned long flags, tmp_bitmap; const struct imsic_local_config *lc; struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu); + phys_addr_t imsic_hgei_pa; if (!kvm_riscv_aia_available()) return -ENODEV; if (!hgctrl) return -ENODEV; + lc = imsic_get_local_config(cpu); raw_spin_lock_irqsave(&hgctrl->lock, flags); - if (hgctrl->free_bitmap) { - ret = __ffs(hgctrl->free_bitmap); - hgctrl->free_bitmap &= ~BIT(ret); - hgctrl->owners[ret] = owner; + if (!hgctrl->free_bitmap) { + raw_spin_unlock_irqrestore(&hgctrl->lock, flags); + goto done; + } + + if (!is_cove_vcpu(owner)) { + /* Find a free one that is not converted */ + tmp_bitmap = hgctrl->free_bitmap & hgctrl->nconf_bitmap; + if (tmp_bitmap > 0) + ret = __ffs(tmp_bitmap); + else { + /* All free ones have been converted in the past. Reclaim one now */ + ret = __ffs(hgctrl->free_bitmap); + reclaim_needed = true; + } + } else { + /* First try to find a free one that is already converted */ + tmp_bitmap = hgctrl->free_bitmap & ~hgctrl->nconf_bitmap; + if (tmp_bitmap > 0) + ret = __ffs(tmp_bitmap); + else + ret = __ffs(hgctrl->free_bitmap); } + hgctrl->free_bitmap &= ~BIT(ret); + hgctrl->owners[ret] = owner; raw_spin_unlock_irqrestore(&hgctrl->lock, flags); - lc = imsic_get_local_config(cpu); if (lc && ret > 0) { if (hgei_va) *hgei_va = lc->msi_va + (ret * IMSIC_MMIO_PAGE_SZ); - if (hgei_pa) - *hgei_pa = lc->msi_pa + (ret * IMSIC_MMIO_PAGE_SZ); + imsic_hgei_pa = lc->msi_pa + (ret * IMSIC_MMIO_PAGE_SZ); + + if (reclaim_needed) { + rc = kvm_riscv_cove_aia_claim_imsic(owner, imsic_hgei_pa); + if (rc) { + kvm_err("Reclaim of imsic pa %pa failed for vcpu %d pcpu %d ret %d\n", + &imsic_hgei_pa, owner->vcpu_idx, smp_processor_id(), ret); + kvm_riscv_aia_free_hgei(cpu, ret); + return rc; + } + } + + /* + * Update the nconf_bitmap here instead, in case reclaim was necessary.
+ * Do it here instead of above because we should only set the nconf + * bitmap after the claim is successful. + */ + raw_spin_lock_irqsave(&hgctrl->lock, flags); + if (reclaim_needed) + set_bit(ret, &hgctrl->nconf_bitmap); + raw_spin_unlock_irqrestore(&hgctrl->lock, flags); + + if (is_cove_vcpu(owner) && test_bit(ret, &hgctrl->nconf_bitmap)) { + /* + * Convert the address to confidential mode. + * This may need to send IPIs to issue global fence. Hence, + * enable interrupts temporarily for irq processing + */ + rc = kvm_riscv_cove_aia_convert_imsic(owner, imsic_hgei_pa); + + if (rc) { + kvm_riscv_aia_free_hgei(cpu, ret); + ret = rc; + } else { + raw_spin_lock_irqsave(&hgctrl->lock, flags); + clear_bit(ret, &hgctrl->nconf_bitmap); + raw_spin_unlock_irqrestore(&hgctrl->lock, flags); + } + } } + if (hgei_pa) + *hgei_pa = imsic_hgei_pa; +done: return ret; } @@ -495,6 +559,8 @@ static int aia_hgei_init(void) hgctrl->free_bitmap &= ~BIT(0); } else hgctrl->free_bitmap = 0; + /* By default all vsfiles are to be used for non-confidential mode */ + hgctrl->nconf_bitmap = hgctrl->free_bitmap; } /* Find INTC irq domain */ diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c index 3556e82..ecf6734 100644 --- a/arch/riscv/kvm/aia_device.c +++ b/arch/riscv/kvm/aia_device.c @@ -11,6 +11,7 @@ #include #include #include +#include static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx) { @@ -103,6 +104,10 @@ static int aia_config(struct kvm *kvm, unsigned long type, default: return -EINVAL; }; + /* TVM must have a physical vs file */ + if (is_cove_vm(kvm) && *nr != KVM_DEV_RISCV_AIA_MODE_HWACCEL) + return -EINVAL; + aia->mode = *nr; } else *nr = aia->mode; @@ -264,18 +269,24 @@ static int aia_init(struct kvm *kvm) if (kvm->created_vcpus != atomic_read(&kvm->online_vcpus)) return -EBUSY; - /* Number of sources should be less than or equals number of IDs */ - if (aia->nr_ids < aia->nr_sources) - return -EINVAL; + if (!is_cove_vm(kvm)) { + /* Number of
sources should be less than or equal to number of IDs */ + if (aia->nr_ids < aia->nr_sources) + return -EINVAL; + /* APLIC base is required for non-zero number of sources only for non-TVMs */ + if (aia->nr_sources && aia->aplic_addr == KVM_RISCV_AIA_UNDEF_ADDR) + return -EINVAL; - /* APLIC base is required for non-zero number of sources */ - if (aia->nr_sources && aia->aplic_addr == KVM_RISCV_AIA_UNDEF_ADDR) - return -EINVAL; + /* Initialize APLIC */ + ret = kvm_riscv_aia_aplic_init(kvm); + if (ret) + return ret; - /* Initialze APLIC */ - ret = kvm_riscv_aia_aplic_init(kvm); - if (ret) - return ret; + } else { + ret = kvm_riscv_cove_aia_init(kvm); + if (ret) + return ret; + } /* Iterate over each VCPU */ kvm_for_each_vcpu(idx, vcpu, kvm) { @@ -650,8 +661,14 @@ void kvm_riscv_aia_init_vm(struct kvm *kvm) */ /* Initialize default values in AIA global context */ - aia->mode = (kvm_riscv_aia_nr_hgei) ? - KVM_DEV_RISCV_AIA_MODE_AUTO : KVM_DEV_RISCV_AIA_MODE_EMUL; + if (is_cove_vm(kvm)) { + if (!kvm_riscv_aia_nr_hgei) + return; + aia->mode = KVM_DEV_RISCV_AIA_MODE_HWACCEL; + } else { + aia->mode = (kvm_riscv_aia_nr_hgei) ?
+ KVM_DEV_RISCV_AIA_MODE_AUTO : KVM_DEV_RISCV_AIA_MODE_EMUL; + } aia->nr_ids = kvm_riscv_aia_max_ids - 1; aia->nr_sources = 0; aia->nr_group_bits = 0; diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c index 419c98d..8db1e29 100644 --- a/arch/riscv/kvm/aia_imsic.c +++ b/arch/riscv/kvm/aia_imsic.c @@ -15,6 +15,7 @@ #include #include #include +#include #define IMSIC_MAX_EIX (IMSIC_MAX_ID / BITS_PER_TYPE(u64)) @@ -583,7 +584,7 @@ static void imsic_vsfile_local_update(int vsfile_hgei, u32 nr_eix, csr_write(CSR_VSISELECT, old_vsiselect); } -static void imsic_vsfile_cleanup(struct imsic *imsic) +static void imsic_vsfile_cleanup(struct kvm_vcpu *vcpu, struct imsic *imsic) { int old_vsfile_hgei, old_vsfile_cpu; unsigned long flags; @@ -604,8 +605,12 @@ static void imsic_vsfile_cleanup(struct imsic *imsic) memset(imsic->swfile, 0, sizeof(*imsic->swfile)); - if (old_vsfile_cpu >= 0) + if (old_vsfile_cpu >= 0) { + if (is_cove_vcpu(vcpu)) + kvm_riscv_cove_vcpu_imsic_unbind(vcpu, old_vsfile_cpu); + kvm_riscv_aia_free_hgei(old_vsfile_cpu, old_vsfile_hgei); + } } static void imsic_swfile_extirq_update(struct kvm_vcpu *vcpu) @@ -688,27 +693,30 @@ void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu) * the old IMSIC VS-file so we first re-direct all interrupt * producers. */ + if (!is_cove_vcpu(vcpu)) { + /* Purge the G-stage mapping */ + kvm_riscv_gstage_iounmap(vcpu->kvm, + vcpu->arch.aia_context.imsic_addr, + IMSIC_MMIO_PAGE_SZ); - /* Purge the G-stage mapping */ - kvm_riscv_gstage_iounmap(vcpu->kvm, - vcpu->arch.aia_context.imsic_addr, - IMSIC_MMIO_PAGE_SZ); - - /* TODO: Purge the IOMMU mapping ??? */ + /* TODO: Purge the IOMMU mapping ??? */ - /* - * At this point, all interrupt producers have been re-directed - * to somewhere else so we move register state from the old IMSIC - * VS-file to the IMSIC SW-file. 
- */ + /* + * At this point, all interrupt producers have been re-directed + * to somewhere else so we move register state from the old IMSIC + * VS-file to the IMSIC SW-file. + */ - /* Read and clear register state from old IMSIC VS-file */ - memset(&tmrif, 0, sizeof(tmrif)); - imsic_vsfile_read(old_vsfile_hgei, old_vsfile_cpu, imsic->nr_hw_eix, - true, &tmrif); + /* Read and clear register state from old IMSIC VS-file */ + memset(&tmrif, 0, sizeof(tmrif)); + imsic_vsfile_read(old_vsfile_hgei, old_vsfile_cpu, imsic->nr_hw_eix, + true, &tmrif); - /* Update register state in IMSIC SW-file */ - imsic_swfile_update(vcpu, &tmrif); + /* Update register state in IMSIC SW-file */ + imsic_swfile_update(vcpu, &tmrif); + } else { + kvm_riscv_cove_vcpu_imsic_unbind(vcpu, old_vsfile_cpu); + } /* Free-up old IMSIC VS-file */ kvm_riscv_aia_free_hgei(old_vsfile_cpu, old_vsfile_hgei); @@ -747,7 +755,7 @@ int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu) /* For HW acceleration mode, we can't continue */ if (kvm->arch.aia.mode == KVM_DEV_RISCV_AIA_MODE_HWACCEL) { run->fail_entry.hardware_entry_failure_reason = - CSR_HSTATUS; + KVM_EXIT_FAIL_ENTRY_IMSIC_FILE_UNAVAILABLE; run->fail_entry.cpu = vcpu->cpu; run->exit_reason = KVM_EXIT_FAIL_ENTRY; return 0; @@ -762,22 +770,24 @@ int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu) } new_vsfile_hgei = ret; - /* - * At this point, all interrupt producers are still using - * to the old IMSIC VS-file so we first move all interrupt - * producers to the new IMSIC VS-file. - */ - - /* Zero-out new IMSIC VS-file */ - imsic_vsfile_local_clear(new_vsfile_hgei, imsic->nr_hw_eix); - - /* Update G-stage mapping for the new IMSIC VS-file */ - ret = kvm_riscv_gstage_ioremap(kvm, vcpu->arch.aia_context.imsic_addr, - new_vsfile_pa, IMSIC_MMIO_PAGE_SZ, - true, true); - if (ret) - goto fail_free_vsfile_hgei; - + /* TSM only maintains the gstage mapping. 
Skip vsfile updates & ioremap */ + if (!is_cove_vcpu(vcpu)) { + /* + * At this point, all interrupt producers are still using + * the old IMSIC VS-file so we first move all interrupt + * producers to the new IMSIC VS-file. + */ + + /* Zero-out new IMSIC VS-file */ + imsic_vsfile_local_clear(new_vsfile_hgei, imsic->nr_hw_eix); + + /* Update G-stage mapping for the new IMSIC VS-file */ + ret = kvm_riscv_gstage_ioremap(kvm, vcpu->arch.aia_context.imsic_addr, + new_vsfile_pa, IMSIC_MMIO_PAGE_SZ, + true, true); + if (ret) + goto fail_free_vsfile_hgei; + } /* TODO: Update the IOMMU mapping ??? */ /* Update new IMSIC VS-file details in IMSIC context */ @@ -788,12 +798,32 @@ int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu) imsic->vsfile_pa = new_vsfile_pa; write_unlock_irqrestore(&imsic->vsfile_lock, flags); + /* Now bind the new vsfile for the TVMs */ + if (is_cove_vcpu(vcpu) && vcpu->arch.tc) { + vcpu->arch.tc->imsic.vsfile_hgei = new_vsfile_hgei; + if (old_vsfile_cpu >= 0) { + if (vcpu->arch.tc->imsic.bound) { + ret = kvm_riscv_cove_vcpu_imsic_rebind(vcpu, old_vsfile_cpu); + if (ret) { + kvm_err("imsic rebind failed for vcpu %d ret %d\n", + vcpu->vcpu_idx, ret); + goto fail_free_vsfile_hgei; + } + } + kvm_riscv_aia_free_hgei(old_vsfile_cpu, old_vsfile_hgei); + } else { + /* Bind if it is not a migration case */ + vcpu->arch.tc->imsic.bind_required = true; + } + /* Skip the old vsfile and swfile update process as it is managed by TSM */ + goto done; + } + /* * At this point, all interrupt producers have been moved * to the new IMSIC VS-file so we move register state from * the old IMSIC VS/SW-file to the new IMSIC VS-file.
*/ - memset(&tmrif, 0, sizeof(tmrif)); if (old_vsfile_cpu >= 0) { /* Read and clear register state from old IMSIC VS-file */ @@ -946,6 +976,7 @@ int kvm_riscv_vcpu_aia_imsic_inject(struct kvm_vcpu *vcpu, unsigned long flags; struct imsic_mrif_eix *eix; struct imsic *imsic = vcpu->arch.aia_context.imsic_state; + int ret; /* We only emulate one IMSIC MMIO page for each Guest VCPU */ if (!imsic || !iid || guest_index || @@ -960,7 +991,14 @@ int kvm_riscv_vcpu_aia_imsic_inject(struct kvm_vcpu *vcpu, read_lock_irqsave(&imsic->vsfile_lock, flags); if (imsic->vsfile_cpu >= 0) { - writel(iid, imsic->vsfile_va + IMSIC_MMIO_SETIPNUM_LE); + /* TSM can only inject the external interrupt if it is allowed by the guest */ + if (is_cove_vcpu(vcpu)) { + ret = kvm_riscv_cove_vcpu_inject_interrupt(vcpu, iid); + if (ret) + kvm_err("External interrupt %d injection failed\n", iid); + } else { + writel(iid, imsic->vsfile_va + IMSIC_MMIO_SETIPNUM_LE); + } kvm_vcpu_kick(vcpu); } else { eix = &imsic->swfile->eix[iid / BITS_PER_TYPE(u64)]; @@ -1039,6 +1077,17 @@ int kvm_riscv_vcpu_aia_imsic_init(struct kvm_vcpu *vcpu) imsic->swfile = page_to_virt(swfile_page); imsic->swfile_pa = page_to_phys(swfile_page); + /* No need to setup iodev ops for TVMs. Swfile will also not be used for + * TVMs. However, allocate it for now to avoid a different path during free.
+ */ + if (is_cove_vcpu(vcpu)) { + ret = kvm_riscv_cove_vcpu_imsic_addr(vcpu); + if (ret) + goto fail_free_swfile; + return 0; + } + /* Setup IO device */ kvm_iodevice_init(&imsic->iodev, &imsic_iodoev_ops); mutex_lock(&kvm->slots_lock); @@ -1069,7 +1118,7 @@ void kvm_riscv_vcpu_aia_imsic_cleanup(struct kvm_vcpu *vcpu) if (!imsic) return; - imsic_vsfile_cleanup(imsic); + imsic_vsfile_cleanup(vcpu, imsic); mutex_lock(&kvm->slots_lock); kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, &imsic->iodev); From patchwork Wed Apr 19 22:17:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217544
From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra , Alexandre Ghiti , Andrew Jones , Andrew Morton , Anup Patel , Atish Patra , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Suzuki K Poulose , Will Deacon , Marc Zyngier , Sean Christopherson , linux-coco@lists.linux.dev, Dylan Reid , abrestic@rivosinc.com, Samuel Ortiz
, Christoph Hellwig , Conor Dooley , Greg Kroah-Hartman , Guo Ren , Heiko Stuebner , Jiri Slaby , kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale , Palmer Dabbelt , Paolo Bonzini , Paul Walmsley , Rajnesh Kanwal , Uladzislau Rezki Subject: [RFC 33/48] RISC-V: KVM: Hookup TVM VCPU init/destroy Date: Wed, 19 Apr 2023 15:17:01 -0700 Message-Id: <20230419221716.3603068-34-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The TVM VCPU create function requires the vcpu id, which is generated after arch_create_vcpu returns. Thus, TVM vcpu init cannot be invoked from arch_create_vcpu. Invoke it in postcreate for now. However, postcreate doesn't return any error, which is problematic as vcpu creation can fail on the TSM side. Signed-off-by: Atish Patra --- arch/riscv/kvm/vcpu.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 65f87e1..005c7c9 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -218,6 +218,17 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) { + int rc; + /* + * TODO: Ideally it should be invoked in vcpu_create, but vcpu_idx + * is allocated after create_vcpu returns. Find a better place to do it. + */ + if (unlikely(is_cove_vcpu(vcpu))) { + rc = kvm_riscv_cove_vcpu_init(vcpu); + if (rc) + pr_err("%s: cove vcpu init failed %d\n", __func__, rc); + } + /** * vcpu with id 0 is the designated boot cpu.
* Keep all vcpus with non-zero id in power-off state so that @@ -237,6 +248,9 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) kvm_riscv_vcpu_pmu_deinit(vcpu); + if (unlikely(is_cove_vcpu(vcpu))) + kvm_riscv_cove_vcpu_destroy(vcpu); + /* Free unused pages pre-allocated for G-stage page table mappings */ kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); } From patchwork Wed Apr 19 22:17:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217545
0f6Tq006zXgK2g4OuuillctnBCZLY1xVWuRZMqYQdfd3ybKvutxZLqu19QqqHmXC8ODS KoYa4jRQnyk9O92MghnQdSKIAU/AWL4fTpgXKATBvPLDFP59BwJ0oeGTqTbRTShwnDj2 1+Ojob0S1CfD+/Tznk4BIbXVZIp9qmP+FcBhkm6vxxe63VSq5U1n2NJ31boDb5/Auf9y qI8Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681942725; x=1684534725; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=V944izAfHfvPXt2SodYy8axfAv3FUhh/YjA49qCII3Y=; b=Q8CzpHJECYXQEi/whlA8XX27BLP8eSMHTffMnqw5T1G86LgEKviHwWjTzCb1Kxucxh B9AIQ/bHPZjD5zo/A0pFRYptWCwM9TqdEMgUsOuIwwDkoWDwfve8pyeKymzjiZdLn/kO S0fUCOmW0nZ277jPO1CcPjFlQlYiVszGL+GwPk9WkL7uBEBYGza6/gVKxmccWkbOURWK p1e0RqbxkfGFLmJNH4rXmLU2YQvdi949u/H9yguEGWpscbSt2nongVjIynt+5IWVCgx9 cyEJTM2VA6+knrxKTDiNvWCVNH1wcqhgtKajAkZOUVVTVyh6kCEZIwH8VBoGfebjiOc0 va9A== X-Gm-Message-State: AAQBX9cujvPl9JpajHX7/lwOQeu/4eIHwL+s66WycyI6oaLKrq1UcV8N 2qfpxsDozHkEqQTx346DrUQvQQ== X-Google-Smtp-Source: AKy350Z1qByMTGJhiqV3zdWqTg/zhHnUn1VfTimSG0DF0as79Rscca2Gz4ZKZ2NL/Pi2zbK61ibEFw== X-Received: by 2002:a17:903:2409:b0:19e:839e:49d8 with SMTP id e9-20020a170903240900b0019e839e49d8mr6224177plo.59.1681942725072; Wed, 19 Apr 2023 15:18:45 -0700 (PDT) Received: from atishp.ba.rivosinc.com ([66.220.2.162]) by smtp.gmail.com with ESMTPSA id jn11-20020a170903050b00b00196807b5189sm11619190plb.292.2023.04.19.15.18.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 19 Apr 2023 15:18:44 -0700 (PDT) From: Atish Patra To: linux-kernel@vger.kernel.org Cc: Atish Patra , Alexandre Ghiti , Andrew Jones , Andrew Morton , Anup Patel , Atish Patra , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= , Suzuki K Poulose , Will Deacon , Marc Zyngier , Sean Christopherson , linux-coco@lists.linux.dev, Dylan Reid , abrestic@rivosinc.com, Samuel Ortiz , Christoph Hellwig , Conor Dooley , Greg Kroah-Hartman , Guo Ren , Heiko Stuebner , Jiri Slaby , kvm-riscv@lists.infradead.org, 
kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale , Palmer Dabbelt , Paolo Bonzini , Paul Walmsley , Rajnesh Kanwal , Uladzislau Rezki Subject: [RFC 34/48] RISC-V: KVM: Initialize CoVE Date: Wed, 19 Apr 2023 15:17:02 -0700 Message-Id: <20230419221716.3603068-35-atishp@rivosinc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org CoVE initialization depends on few underlying conditions that differs from normal VMs. 1. RFENCE extension is no longer mandatory as TEEH APIs has its own set of fence APIs. 2. SBI NACL is mandatory for TEE VMs to share memory between the host and the TSM. Signed-off-by: Atish Patra --- arch/riscv/kvm/main.c | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c index 842b78d..a059414 100644 --- a/arch/riscv/kvm/main.c +++ b/arch/riscv/kvm/main.c @@ -102,15 +102,12 @@ static int __init riscv_kvm_init(void) return -ENODEV; } - if (sbi_probe_extension(SBI_EXT_RFENCE) <= 0) { - kvm_info("require SBI RFENCE extension\n"); - return -ENODEV; - } - rc = kvm_riscv_nacl_init(); if (rc && rc != -ENODEV) return rc; + kvm_riscv_cove_init(); + kvm_riscv_gstage_mode_detect(); kvm_riscv_gstage_vmid_detect(); @@ -121,6 +118,15 @@ static int __init riscv_kvm_init(void) return rc; } + /* TVM don't need RFENCE extension as hardware imsic support is mandatory for TVMs + * TODO: This check should happen later if HW_ACCEL mode is not set as RFENCE + * should only be mandatory in that case. 
+	 */
+	if (!kvm_riscv_cove_enabled() && sbi_probe_extension(SBI_EXT_RFENCE) <= 0) {
+		kvm_info("require SBI RFENCE extension\n");
+		return -ENODEV;
+	}
+
 	kvm_info("hypervisor extension available\n");
 
 	if (kvm_riscv_nacl_available()) {

From patchwork Wed Apr 19 22:17:03 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217546
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 35/48] RISC-V: KVM: Add TVM init/destroy calls
Date: Wed, 19 Apr 2023 15:17:03 -0700
Message-Id: <20230419221716.3603068-36-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

A TVM can only be created upon an explicit request from the VMM via the
VM type, and only if the CoVE SBI extensions are supported by the TSM.

Signed-off-by: Atish Patra
---
 arch/riscv/kvm/vm.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c
index 1b59a8f..8a1460d 100644
--- a/arch/riscv/kvm/vm.c
+++ b/arch/riscv/kvm/vm.c
@@ -42,6 +42,19 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 			return r;
 	}
 
+	if (unlikely(type == KVM_VM_TYPE_RISCV_COVE)) {
+		if (!kvm_riscv_cove_enabled()) {
+			kvm_err("Unable to init CoVE VM because cove is not enabled\n");
+			return -EPERM;
+		}
+
+		r = kvm_riscv_cove_vm_init(kvm);
+		if (r)
+			return r;
+		kvm->arch.vm_type = type;
+		kvm_info("Trusted VM instance init successful\n");
+	}
+
 	kvm_riscv_aia_init_vm(kvm);
 
 	kvm_riscv_guest_timer_init(kvm);
@@ -54,6 +67,9 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_destroy_vcpus(kvm);
 
 	kvm_riscv_aia_destroy_vm(kvm);
+
+	if (unlikely(is_cove_vm(kvm)))
+		kvm_riscv_cove_vm_destroy(kvm);
 }
 
 int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irql,

From patchwork Wed Apr 19 22:17:04 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217555
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 36/48] RISC-V: KVM: Read/write gprs from/to shmem in case of TVM VCPU.
Date: Wed, 19 Apr 2023 15:17:04 -0700
Message-Id: <20230419221716.3603068-37-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

From: Rajnesh Kanwal

For TVM vcpus, the TSM uses shared memory to expose the GPRs of the
trusted VCPU.
This change makes sure we use shmem when doing MMIO emulation for
trusted VMs.

Signed-off-by: Rajnesh Kanwal
Signed-off-by: Atish Patra
---
 arch/riscv/kvm/vcpu_insn.c | 98 +++++++++++++++++++++++++++++++++-----
 1 file changed, 85 insertions(+), 13 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index 331489f..56eeb86 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -7,6 +7,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 
 #define INSN_OPCODE_MASK	0x007c
 #define INSN_OPCODE_SHIFT	2
@@ -116,6 +119,10 @@
 #define REG_OFFSET(insn, pos)		\
 	(SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK)
 
+#define REG_INDEX(insn, pos)		\
+	((SHIFT_RIGHT((insn), (pos)-LOG_REGBYTES) & REG_MASK) / \
+	 (__riscv_xlen / 8))
+
 #define REG_PTR(insn, pos, regs)	\
 	((ulong *)((ulong)(regs) + REG_OFFSET(insn, pos)))
 
@@ -600,6 +607,7 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	int len = 0, insn_len = 0;
 	struct kvm_cpu_trap utrap = { 0 };
 	struct kvm_cpu_context *ct = &vcpu->arch.guest_context;
+	void *nshmem;
 
 	/* Determine trapped instruction */
 	if (htinst & 0x1) {
@@ -627,7 +635,15 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		insn_len = INSN_LEN(insn);
 	}
 
-	data = GET_RS2(insn, &vcpu->arch.guest_context);
+	if (is_cove_vcpu(vcpu)) {
+		nshmem = nacl_shmem();
+		data = nacl_shmem_gpr_read_cove(nshmem,
+						REG_INDEX(insn, SH_RS2) * 8 +
+						KVM_ARCH_GUEST_ZERO);
+	} else {
+		data = GET_RS2(insn, &vcpu->arch.guest_context);
+	}
+
 	data8 = data16 = data32 = data64 = data;
 
 	if ((insn & INSN_MASK_SW) == INSN_MATCH_SW) {
@@ -643,19 +659,43 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run,
 #ifdef CONFIG_64BIT
 	} else if ((insn & INSN_MASK_C_SD) == INSN_MATCH_C_SD) {
 		len = 8;
-		data64 = GET_RS2S(insn, &vcpu->arch.guest_context);
+		if (is_cove_vcpu(vcpu)) {
+			data64 = nacl_shmem_gpr_read_cove(
+				nshmem,
+				RVC_RS2S(insn) * 8 + KVM_ARCH_GUEST_ZERO);
+		} else {
+			data64 = GET_RS2S(insn, &vcpu->arch.guest_context);
+		}
 	} else if ((insn & INSN_MASK_C_SDSP) == INSN_MATCH_C_SDSP &&
 		   ((insn >> SH_RD) & 0x1f)) {
 		len = 8;
-		data64 = GET_RS2C(insn, &vcpu->arch.guest_context);
+		if (is_cove_vcpu(vcpu)) {
+			data64 = nacl_shmem_gpr_read_cove(
+				nshmem, REG_INDEX(insn, SH_RS2C) * 8 +
+					KVM_ARCH_GUEST_ZERO);
+		} else {
+			data64 = GET_RS2C(insn, &vcpu->arch.guest_context);
+		}
 #endif
 	} else if ((insn & INSN_MASK_C_SW) == INSN_MATCH_C_SW) {
 		len = 4;
-		data32 = GET_RS2S(insn, &vcpu->arch.guest_context);
+		if (is_cove_vcpu(vcpu)) {
+			data32 = nacl_shmem_gpr_read_cove(
+				nshmem,
+				RVC_RS2S(insn) * 8 + KVM_ARCH_GUEST_ZERO);
+		} else {
+			data32 = GET_RS2S(insn, &vcpu->arch.guest_context);
+		}
 	} else if ((insn & INSN_MASK_C_SWSP) == INSN_MATCH_C_SWSP &&
 		   ((insn >> SH_RD) & 0x1f)) {
 		len = 4;
-		data32 = GET_RS2C(insn, &vcpu->arch.guest_context);
+		if (is_cove_vcpu(vcpu)) {
+			data32 = nacl_shmem_gpr_read_cove(
+				nshmem, REG_INDEX(insn, SH_RS2C) * 8 +
+					KVM_ARCH_GUEST_ZERO);
+		} else {
+			data32 = GET_RS2C(insn, &vcpu->arch.guest_context);
+		}
 	} else {
 		return -EOPNOTSUPP;
 	}
@@ -725,6 +765,7 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	u64 data64;
 	ulong insn;
 	int len, shift;
+	void *nshmem;
 
 	if (vcpu->arch.mmio_decode.return_handled)
 		return 0;
@@ -738,26 +779,57 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	len = vcpu->arch.mmio_decode.len;
 	shift = vcpu->arch.mmio_decode.shift;
 
+	if (is_cove_vcpu(vcpu))
+		nshmem = nacl_shmem();
+
 	switch (len) {
 	case 1:
 		data8 = *((u8 *)run->mmio.data);
-		SET_RD(insn, &vcpu->arch.guest_context,
-		       (ulong)data8 << shift >> shift);
+		if (is_cove_vcpu(vcpu)) {
+			nacl_shmem_gpr_write_cove(nshmem,
+						  REG_INDEX(insn, SH_RD) * 8 +
+						  KVM_ARCH_GUEST_ZERO,
+						  (unsigned long)data8);
+		} else {
+			SET_RD(insn, &vcpu->arch.guest_context,
+			       (ulong)data8 << shift >> shift);
+		}
 		break;
 	case 2:
 		data16 = *((u16 *)run->mmio.data);
-		SET_RD(insn, &vcpu->arch.guest_context,
-		       (ulong)data16 << shift >> shift);
+		if (is_cove_vcpu(vcpu)) {
+			nacl_shmem_gpr_write_cove(nshmem,
+						  REG_INDEX(insn, SH_RD) * 8 +
+						  KVM_ARCH_GUEST_ZERO,
+						  (unsigned long)data16);
+		} else {
+			SET_RD(insn, &vcpu->arch.guest_context,
+			       (ulong)data16 << shift >> shift);
+		}
 		break;
 	case 4:
 		data32 = *((u32 *)run->mmio.data);
-		SET_RD(insn, &vcpu->arch.guest_context,
-		       (ulong)data32 << shift >> shift);
+		if (is_cove_vcpu(vcpu)) {
+			nacl_shmem_gpr_write_cove(nshmem,
+						  REG_INDEX(insn, SH_RD) * 8 +
+						  KVM_ARCH_GUEST_ZERO,
+						  (unsigned long)data32);
+		} else {
+			SET_RD(insn, &vcpu->arch.guest_context,
+			       (ulong)data32 << shift >> shift);
+		}
 		break;
 	case 8:
 		data64 = *((u64 *)run->mmio.data);
-		SET_RD(insn, &vcpu->arch.guest_context,
-		       (ulong)data64 << shift >> shift);
+		if (is_cove_vcpu(vcpu)) {
+			nacl_shmem_gpr_write_cove(nshmem,
						  REG_INDEX(insn, SH_RD) * 8 +
+						  KVM_ARCH_GUEST_ZERO,
+						  (unsigned long)data64);
+		} else {
+			SET_RD(insn, &vcpu->arch.guest_context,
+			       (ulong)data64 << shift >> shift);
+		}
 		break;
 	default:
 		return -EOPNOTSUPP;

From patchwork Wed Apr 19 22:17:05 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217547
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 37/48] RISC-V: Add COVG SBI extension definitions
Date: Wed, 19 Apr 2023 15:17:05 -0700
Message-Id: <20230419221716.3603068-38-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

From: Rajnesh Kanwal

The CoVE specification defines a separate SBI extension, known as CoVG,
for the guest-side interface. Add the definitions for that extension.
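[Editor's note] The SBI extension IDs used by this series pack the extension's ASCII name into the ID value, so SBI_EXT_COVG comes out as 0x434F5647 ("COVG"). A minimal userspace sketch of that encoding; the helper name `sbi_ext_id` is ours for illustration, not part of the patch:

```c
#include <stdint.h>

/* Hypothetical helper: pack a 4-character extension name big-endian,
 * reproducing the SBI_EXT_COVG/COVH/COVI values this series defines. */
static uint32_t sbi_ext_id(const char name[4])
{
	return ((uint32_t)name[0] << 24) | ((uint32_t)name[1] << 16) |
	       ((uint32_t)name[2] << 8) | (uint32_t)name[3];
}
```

For example, sbi_ext_id("COVG") yields 0x434F5647, matching the enum value added below in this patch.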
Signed-off-by: Rajnesh Kanwal
Signed-off-by: Atish Patra
---
 arch/riscv/include/asm/sbi.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index bbea922..e02ee75 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -34,6 +34,7 @@ enum sbi_ext_id {
 	SBI_EXT_NACL = 0x4E41434C,
 	SBI_EXT_COVH = 0x434F5648,
 	SBI_EXT_COVI = 0x434F5649,
+	SBI_EXT_COVG = 0x434F5647,
 
 	/* Experimentals extensions must lie within this range */
 	SBI_EXT_EXPERIMENTAL_START = 0x08000000,
@@ -439,6 +440,16 @@ struct sbi_cove_tvm_aia_params {
 	uint32_t guests_per_hart;
 };
 
+/* SBI COVG extension data structures */
+enum sbi_ext_covg_fid {
+	SBI_EXT_COVG_ADD_MMIO_REGION,
+	SBI_EXT_COVG_REMOVE_MMIO_REGION,
+	SBI_EXT_COVG_SHARE_MEMORY,
+	SBI_EXT_COVG_UNSHARE_MEMORY,
+	SBI_EXT_COVG_ALLOW_EXT_INTERRUPT,
+	SBI_EXT_COVG_DENY_EXT_INTERRUPT,
+};
+
 #define SBI_SPEC_VERSION_DEFAULT	0x1
 #define SBI_SPEC_VERSION_MAJOR_SHIFT	24
 #define SBI_SPEC_VERSION_MAJOR_MASK	0x7f

From patchwork Wed Apr 19 22:17:06 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217626
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 38/48] RISC-V: Add CoVE guest config and helper functions
Date: Wed, 19 Apr 2023 15:17:06 -0700
Message-Id: <20230419221716.3603068-39-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

From: Rajnesh Kanwal

Introduce a separate config for the guest running in CoVE so that it
can be enabled separately if required. However, the default config will
enable both the CoVE host & guest configs in order to make a single
image work as both host & guest.

Introduce a helper function to detect at run time whether a guest is a
TVM. The TSM only enables the CoVE guest SBI extension for TVMs.
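[Editor's note] The run-time detection described above reduces to probing once at boot and caching the answer. A small userspace mock of that pattern; the stub probe function is ours (the real kernel would call sbi_probe_extension()):

```c
#include <stdbool.h>

#define SBI_EXT_COVG 0x434F5647

/* Stub standing in for sbi_probe_extension(): pretend the TSM has
 * enabled the COVG extension, i.e. we are running as a TVM. */
static long sbi_probe_extension_stub(long ext)
{
	return ext == SBI_EXT_COVG ? 1 : 0;
}

static bool is_tvm;

/* Mirrors riscv_cove_sbi_init(): probe once, cache the result. */
static void cove_sbi_init(void)
{
	if (sbi_probe_extension_stub(SBI_EXT_COVG) > 0)
		is_tvm = true;
}

/* Mirrors is_cove_guest(): cheap check usable from any later code path. */
static bool is_cove_guest(void)
{
	return is_tvm;
}
```

Probing once at early boot (right after sbi_init()) means every subsequent is_cove_guest() call is a plain flag read rather than an SBI ecall.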
Signed-off-by: Rajnesh Kanwal Co-developed-by: Atish Patra Signed-off-by: Atish Patra --- arch/riscv/Kbuild | 2 ++ arch/riscv/Kconfig | 6 ++++++ arch/riscv/cove/Makefile | 2 ++ arch/riscv/cove/core.c | 28 ++++++++++++++++++++++++++++ arch/riscv/include/asm/cove.h | 27 +++++++++++++++++++++++++++ arch/riscv/kernel/setup.c | 2 ++ 6 files changed, 67 insertions(+) create mode 100644 arch/riscv/cove/Makefile create mode 100644 arch/riscv/cove/core.c create mode 100644 arch/riscv/include/asm/cove.h diff --git a/arch/riscv/Kbuild b/arch/riscv/Kbuild index afa83e3..ecd661e 100644 --- a/arch/riscv/Kbuild +++ b/arch/riscv/Kbuild @@ -1,5 +1,7 @@ # SPDX-License-Identifier: GPL-2.0-only +obj-$(CONFIG_RISCV_COVE_GUEST) += cove/ + obj-y += kernel/ mm/ net/ obj-$(CONFIG_BUILTIN_DTB) += boot/dts/ obj-y += errata/ diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 8462941..49c3006 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -512,6 +512,12 @@ config RISCV_COVE_HOST That means the platform should be capable of running TEE VM (TVM) using KVM and TEE Security Manager (TSM). +config RISCV_COVE_GUEST + bool "Guest Support for Confidential VM Extension(CoVE)" + default n + help + Enables support for running TVMs on platforms supporting CoVE. + endmenu # "Confidential VM Extension(CoVE) Support" endmenu # "Platform type" diff --git a/arch/riscv/cove/Makefile b/arch/riscv/cove/Makefile new file mode 100644 index 0000000..03a0cac --- /dev/null +++ b/arch/riscv/cove/Makefile @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_RISCV_COVE_GUEST) += core.o diff --git a/arch/riscv/cove/core.c b/arch/riscv/cove/core.c new file mode 100644 index 0000000..7218fe7 --- /dev/null +++ b/arch/riscv/cove/core.c @@ -0,0 +1,28 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Confidential Computing Platform Capability checks + * + * Copyright (c) 2023 Rivos Inc. 
+ * + * Authors: + * Rajnesh Kanwal + */ + +#include +#include +#include +#include + +static bool is_tvm; + +bool is_cove_guest(void) +{ + return is_tvm; +} +EXPORT_SYMBOL_GPL(is_cove_guest); + +void riscv_cove_sbi_init(void) +{ + if (sbi_probe_extension(SBI_EXT_COVG) > 0) + is_tvm = true; +} diff --git a/arch/riscv/include/asm/cove.h b/arch/riscv/include/asm/cove.h new file mode 100644 index 0000000..c4d609d --- /dev/null +++ b/arch/riscv/include/asm/cove.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * TVM helper functions + * + * Copyright (c) 2023 Rivos Inc. + * + * Authors: + * Rajnesh Kanwal + */ + +#ifndef __RISCV_COVE_H__ +#define __RISCV_COVE_H__ + +#ifdef CONFIG_RISCV_COVE_GUEST +void riscv_cove_sbi_init(void); +bool is_cove_guest(void); +#else /* CONFIG_RISCV_COVE_GUEST */ +static inline bool is_cove_guest(void) +{ + return false; +} +static inline void riscv_cove_sbi_init(void) +{ +} +#endif /* CONFIG_RISCV_COVE_GUEST */ + +#endif /* __RISCV_COVE_H__ */ diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c index 7b2b065..20b0280 100644 --- a/arch/riscv/kernel/setup.c +++ b/arch/riscv/kernel/setup.c @@ -35,6 +35,7 @@ #include #include #include +#include #include "head.h" @@ -272,6 +273,7 @@ void __init setup_arch(char **cmdline_p) early_ioremap_setup(); sbi_init(); + riscv_cove_sbi_init(); jump_label_init(); parse_early_param(); From patchwork Wed Apr 19 22:17:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217549 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A0ED2C77B75 for ; Wed, 19 Apr 2023 22:22:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232836AbjDSWWU (ORCPT ); Wed, 19 Apr 2023 
From: Atish Patra To: linux-kernel@vger.kernel.org Subject: [RFC 39/48] RISC-V: Implement COVG SBI extension Date: Wed, 19 Apr 2023 15:17:07 -0700 Message-Id: <20230419221716.3603068-40-atishp@rivosinc.com> In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> From: Rajnesh Kanwal The COVG extension defines the guest-side interface for running a guest in CoVE. These functions allow a CoVE guest to share/unshare memory, ask the host to trap and emulate MMIO regions, and allow/deny injection of interrupts from the host.
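Each of these guest-side calls follows the same shape: forward the arguments to the TSM via `sbi_ecall()` and translate any non-zero SBI error code into a Linux errno with `sbi_err_map_linux_errno()`. The sketch below is a minimal user-space model of that wrapper pattern, not the kernel code itself: `fake_sbi_ecall()` and its rejection rules are hypothetical stand-ins for the real firmware call, and only two error codes are modeled (the real constants live in `arch/riscv/include/asm/sbi.h`).

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical SBI error codes mirroring the SBI spec's shape. */
#define SBI_SUCCESS		 0
#define SBI_ERR_NOT_SUPPORTED	-2
#define SBI_ERR_INVALID_PARAM	-3

/* Same two-register return convention as the kernel's struct sbiret. */
struct sbiret {
	long error;
	long value;
};

/* Stand-in for sbi_ecall(): pretend the TSM rejects a NULL base address
 * or a zero-length region and accepts everything else. */
static struct sbiret fake_sbi_ecall(unsigned long addr, unsigned long len)
{
	struct sbiret ret = { .error = SBI_SUCCESS, .value = 0 };

	if (addr == 0 || len == 0)
		ret.error = SBI_ERR_INVALID_PARAM;
	return ret;
}

/* Mirror of sbi_err_map_linux_errno() for the two codes modeled above. */
static int map_errno(long sbi_err)
{
	switch (sbi_err) {
	case SBI_ERR_INVALID_PARAM:
		return -EINVAL;
	case SBI_ERR_NOT_SUPPORTED:
		return -EOPNOTSUPP;
	default:
		return 0;
	}
}

/* Same control flow as sbi_covg_share_memory(): forward the request,
 * map a failure to an errno, and return 0 on success. */
int covg_share_memory(unsigned long addr, unsigned long len)
{
	struct sbiret ret = fake_sbi_ecall(addr, len);

	if (ret.error)
		return map_errno(ret.error);
	return 0;
}
```

Because every wrapper in the patch repeats this exact forward-and-map sequence, only the extension/function IDs and argument counts differ between `sbi_covg_share_memory()`, `sbi_covg_add_mmio_region()`, and the interrupt allow/deny calls.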
Signed-off-by: Rajnesh Kanwal Signed-off-by: Atish Patra --- arch/riscv/cove/Makefile | 2 +- arch/riscv/cove/cove_guest_sbi.c | 109 ++++++++++++++++++++++++++++++ arch/riscv/include/asm/covg_sbi.h | 38 +++++++++++ 3 files changed, 148 insertions(+), 1 deletion(-) create mode 100644 arch/riscv/cove/cove_guest_sbi.c create mode 100644 arch/riscv/include/asm/covg_sbi.h diff --git a/arch/riscv/cove/Makefile b/arch/riscv/cove/Makefile index 03a0cac..a95043b 100644 --- a/arch/riscv/cove/Makefile +++ b/arch/riscv/cove/Makefile @@ -1,2 +1,2 @@ # SPDX-License-Identifier: GPL-2.0 -obj-$(CONFIG_RISCV_COVE_GUEST) += core.o +obj-$(CONFIG_RISCV_COVE_GUEST) += core.o cove_guest_sbi.o diff --git a/arch/riscv/cove/cove_guest_sbi.c b/arch/riscv/cove/cove_guest_sbi.c new file mode 100644 index 0000000..af22d5e --- /dev/null +++ b/arch/riscv/cove/cove_guest_sbi.c @@ -0,0 +1,109 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * COVG SBI extensions related helper functions. + * + * Copyright (c) 2023 Rivos Inc. 
+ * + * Authors: + * Rajnesh Kanwal + */ + +#include +#include +#include + +int sbi_covg_add_mmio_region(unsigned long addr, unsigned long len) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_COVG, SBI_EXT_COVG_ADD_MMIO_REGION, addr, len, + 0, 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + +int sbi_covg_remove_mmio_region(unsigned long addr, unsigned long len) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_COVG, SBI_EXT_COVG_REMOVE_MMIO_REGION, addr, + len, 0, 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + +int sbi_covg_share_memory(unsigned long addr, unsigned long len) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_COVG, SBI_EXT_COVG_SHARE_MEMORY, addr, len, 0, + 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + +int sbi_covg_unshare_memory(unsigned long addr, unsigned long len) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_COVG, SBI_EXT_COVG_UNSHARE_MEMORY, addr, len, 0, + 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + +int sbi_covg_allow_external_interrupt(unsigned long id) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_COVG, SBI_EXT_COVG_ALLOW_EXT_INTERRUPT, id, 0, + 0, 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + +int sbi_covg_allow_all_external_interrupt(void) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_COVG, SBI_EXT_COVG_ALLOW_EXT_INTERRUPT, -1, 0, + 0, 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + +int sbi_covg_deny_external_interrupt(unsigned long id) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_COVG, SBI_EXT_COVG_DENY_EXT_INTERRUPT, id, 0, 0, + 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + +int sbi_covg_deny_all_external_interrupt(void) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_COVG, 
SBI_EXT_COVG_DENY_EXT_INTERRUPT, -1, 0, 0, + 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} diff --git a/arch/riscv/include/asm/covg_sbi.h b/arch/riscv/include/asm/covg_sbi.h new file mode 100644 index 0000000..31283de --- /dev/null +++ b/arch/riscv/include/asm/covg_sbi.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * COVG SBI extension related header file. + * + * Copyright (c) 2023 Rivos Inc. + * + * Authors: + * Rajnesh Kanwal + */ + +#ifndef __RISCV_COVG_SBI_H__ +#define __RISCV_COVG_SBI_H__ + +#ifdef CONFIG_RISCV_COVE_GUEST + +int sbi_covg_add_mmio_region(unsigned long addr, unsigned long len); +int sbi_covg_remove_mmio_region(unsigned long addr, unsigned long len); +int sbi_covg_share_memory(unsigned long addr, unsigned long len); +int sbi_covg_unshare_memory(unsigned long addr, unsigned long len); +int sbi_covg_allow_external_interrupt(unsigned long id); +int sbi_covg_allow_all_external_interrupt(void); +int sbi_covg_deny_external_interrupt(unsigned long id); +int sbi_covg_deny_all_external_interrupt(void); + +#else + +static inline int sbi_covg_add_mmio_region(unsigned long addr, unsigned long len) { return 0; } +static inline int sbi_covg_remove_mmio_region(unsigned long addr, unsigned long len) { return 0; } +static inline int sbi_covg_share_memory(unsigned long addr, unsigned long len) { return 0; } +static inline int sbi_covg_unshare_memory(unsigned long addr, unsigned long len) { return 0; } +static inline int sbi_covg_allow_external_interrupt(unsigned long id) { return 0; } +static inline int sbi_covg_allow_all_external_interrupt(void) { return 0; } +static inline int sbi_covg_deny_external_interrupt(unsigned long id) { return 0; } +static inline int sbi_covg_deny_all_external_interrupt(void) { return 0; } + +#endif + +#endif /* __RISCV_COVG_SBI_H__ */ From patchwork Wed Apr 19 22:17:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217548
From: Atish Patra To: linux-kernel@vger.kernel.org Subject: [RFC 40/48] RISC-V: COVE: Add COVH invalidate, validate, promote, demote and remove APIs.
Date: Wed, 19 Apr 2023 15:17:08 -0700 Message-Id: <20230419221716.3603068-41-atishp@rivosinc.com> In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> From: Rajnesh Kanwal

SBI_EXT_COVH_TVM_INVALIDATE_PAGES: Invalidates the pages in the specified range of guest physical address space. The host can then operate on the range without the TVM's involvement. Any access from the TVM to the range results in a page fault, which is reported to the host.

SBI_EXT_COVH_TVM_VALIDATE_PAGES: Marks invalidated pages in the specified range of guest physical address space as present again, so the TVM can access them.

SBI_EXT_COVH_TVM_PROMOTE_PAGES: Promotes a set of contiguous mappings to the requested page size, mainly to support huge pages.

SBI_EXT_COVH_TVM_DEMOTE_PAGES: Demotes a huge-page mapping to a set of contiguous mappings at the target size.

SBI_EXT_COVH_TVM_REMOVE_PAGES: Removes mappings from a TVM. The range to be unmapped must already have been invalidated and fenced, and must lie within a removable region of guest physical address space.
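The REMOVE call's precondition (the range must be invalidated and fenced first, with VALIDATE as the rollback path) is the ordering that later patches rely on when converting a confidential page to shared. Below is a toy state machine that models only those ordering rules; the enum, struct, and function names are hypothetical and the real bookkeeping is done by the TSM, not the kernel.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical per-page state tracking the commit message's rules. */
enum tvm_page_state {
	TVM_PAGE_PRESENT,	/* mapped and accessible to the TVM */
	TVM_PAGE_INVALIDATED,	/* TVM access now faults to the host */
	TVM_PAGE_REMOVED,	/* unmapped from the TVM */
};

struct tvm_page {
	enum tvm_page_state state;
	int fenced;		/* fence issued since the last invalidate */
};

/* INVALIDATE: only a present page can be invalidated. */
static int tvm_invalidate_pages(struct tvm_page *p)
{
	if (p->state != TVM_PAGE_PRESENT)
		return -EINVAL;
	p->state = TVM_PAGE_INVALIDATED;
	p->fenced = 0;
	return 0;
}

/* Stand-in for SBI_EXT_COVH_TVM_INITIATE_FENCE. */
static int tvm_fence(struct tvm_page *p)
{
	p->fenced = 1;
	return 0;
}

/* VALIDATE is the rollback: an invalidated page becomes present again. */
static int tvm_validate_pages(struct tvm_page *p)
{
	if (p->state != TVM_PAGE_INVALIDATED)
		return -EINVAL;
	p->state = TVM_PAGE_PRESENT;
	return 0;
}

/* REMOVE requires the range to be invalidated *and* fenced first. */
static int tvm_remove_pages(struct tvm_page *p)
{
	if (p->state != TVM_PAGE_INVALIDATED || !p->fenced)
		return -EINVAL;
	p->state = TVM_PAGE_REMOVED;
	return 0;
}

/* Drive the legal sequence end to end; returns 0 if every rule holds. */
int demo_convert_sequence(void)
{
	struct tvm_page p = { TVM_PAGE_PRESENT, 0 };

	if (tvm_remove_pages(&p) != -EINVAL)	/* too early: not invalidated */
		return -1;
	if (tvm_invalidate_pages(&p))
		return -1;
	if (tvm_validate_pages(&p))		/* rollback works... */
		return -1;
	if (tvm_invalidate_pages(&p))		/* ...and can be redone */
		return -1;
	if (tvm_remove_pages(&p) != -EINVAL)	/* too early: not fenced */
		return -1;
	if (tvm_fence(&p) || tvm_remove_pages(&p))
		return -1;
	return p.state == TVM_PAGE_REMOVED ? 0 : -1;
}
```

This mirrors the error path in the later `cove_share_converted_page()` helper, which calls VALIDATE to restore the page whenever the invalidate/fence/remove sequence fails partway through.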
Signed-off-by: Atish Patra Signed-off-by: Rajnesh Kanwal --- arch/riscv/include/asm/kvm_cove_sbi.h | 16 +++++++ arch/riscv/include/asm/sbi.h | 5 +++ arch/riscv/kvm/cove_sbi.c | 65 +++++++++++++++++++++++++++ 3 files changed, 86 insertions(+) diff --git a/arch/riscv/include/asm/kvm_cove_sbi.h b/arch/riscv/include/asm/kvm_cove_sbi.h index 0759f70..b554a8d 100644 --- a/arch/riscv/include/asm/kvm_cove_sbi.h +++ b/arch/riscv/include/asm/kvm_cove_sbi.h @@ -59,6 +59,22 @@ int sbi_covh_create_tvm_vcpu(unsigned long tvmid, unsigned long tvm_vcpuid, int sbi_covh_run_tvm_vcpu(unsigned long tvmid, unsigned long tvm_vcpuid); +int sbi_covh_tvm_invalidate_pages(unsigned long tvmid, + unsigned long tvm_base_page_addr, + unsigned long len); +int sbi_covh_tvm_validate_pages(unsigned long tvmid, + unsigned long tvm_base_page_addr, + unsigned long len); +int sbi_covh_tvm_promote_page(unsigned long tvmid, + unsigned long tvm_base_page_addr, + enum sbi_cove_page_type ptype); +int sbi_covh_tvm_demote_page(unsigned long tvmid, + unsigned long tvm_base_page_addr, + enum sbi_cove_page_type ptype); +int sbi_covh_tvm_remove_pages(unsigned long tvmid, + unsigned long tvm_base_page_addr, + unsigned long len); + /* Functions related to CoVE Interrupt Management(COVI) Extension */ int sbi_covi_tvm_aia_init(unsigned long tvm_gid, struct sbi_cove_tvm_aia_params *tvm_aia_params); int sbi_covi_set_vcpu_imsic_addr(unsigned long tvm_gid, unsigned long vcpu_id, diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h index e02ee75..03b0cc8 100644 --- a/arch/riscv/include/asm/sbi.h +++ b/arch/riscv/include/asm/sbi.h @@ -369,6 +369,11 @@ enum sbi_ext_covh_fid { SBI_EXT_COVH_TVM_CREATE_VCPU, SBI_EXT_COVH_TVM_VCPU_RUN, SBI_EXT_COVH_TVM_INITIATE_FENCE, + SBI_EXT_COVH_TVM_INVALIDATE_PAGES, + SBI_EXT_COVH_TVM_VALIDATE_PAGES, + SBI_EXT_COVH_TVM_PROMOTE_PAGE, + SBI_EXT_COVH_TVM_DEMOTE_PAGE, + SBI_EXT_COVH_TVM_REMOVE_PAGES, }; enum sbi_ext_covi_fid { diff --git a/arch/riscv/kvm/cove_sbi.c 
b/arch/riscv/kvm/cove_sbi.c index a8901ac..01dc260 100644 --- a/arch/riscv/kvm/cove_sbi.c +++ b/arch/riscv/kvm/cove_sbi.c @@ -405,3 +405,68 @@ int sbi_covh_run_tvm_vcpu(unsigned long tvmid, unsigned long vcpuid) return 0; } + +int sbi_covh_tvm_invalidate_pages(unsigned long tvmid, + unsigned long tvm_base_page_addr, + unsigned long len) +{ + struct sbiret ret = sbi_ecall(SBI_EXT_COVH, + SBI_EXT_COVH_TVM_INVALIDATE_PAGES, tvmid, + tvm_base_page_addr, len, 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + +int sbi_covh_tvm_validate_pages(unsigned long tvmid, + unsigned long tvm_base_page_addr, + unsigned long len) +{ + struct sbiret ret = sbi_ecall(SBI_EXT_COVH, + SBI_EXT_COVH_TVM_VALIDATE_PAGES, tvmid, + tvm_base_page_addr, len, 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + +int sbi_covh_tvm_promote_page(unsigned long tvmid, + unsigned long tvm_base_page_addr, + enum sbi_cove_page_type ptype) +{ + struct sbiret ret = sbi_ecall(SBI_EXT_COVH, + SBI_EXT_COVH_TVM_PROMOTE_PAGE, tvmid, + tvm_base_page_addr, ptype, 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + +int sbi_covh_tvm_demote_page(unsigned long tvmid, + unsigned long tvm_base_page_addr, + enum sbi_cove_page_type ptype) +{ + struct sbiret ret = sbi_ecall(SBI_EXT_COVH, + SBI_EXT_COVH_TVM_DEMOTE_PAGE, tvmid, + tvm_base_page_addr, ptype, 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + +int sbi_covh_tvm_remove_pages(unsigned long tvmid, + unsigned long tvm_base_page_addr, + unsigned long len) +{ + struct sbiret ret = sbi_ecall(SBI_EXT_COVH, + SBI_EXT_COVH_TVM_REMOVE_PAGES, tvmid, + tvm_base_page_addr, len, 0, 0, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} From patchwork Wed Apr 19 22:17:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
Atish Kumar Patra X-Patchwork-Id: 13217550
From: Atish Patra To: linux-kernel@vger.kernel.org Subject: [RFC 41/48] RISC-V: KVM: Add host side support to handle COVG SBI calls.
Date: Wed, 19 Apr 2023 15:17:09 -0700 Message-Id: <20230419221716.3603068-42-atishp@rivosinc.com> In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com> References: <20230419221716.3603068-1-atishp@rivosinc.com> From: Rajnesh Kanwal Add host-side support to allow memory sharing/unsharing. The host needs to check whether the page has already been assigned (converted) to a TVM. If so, that page must be reclaimed before it can be shared. For the remaining ECALLs the host doesn't need to do anything and simply returns. Signed-off-by: Rajnesh Kanwal Signed-off-by: Atish Patra --- arch/riscv/include/asm/kvm_cove.h | 11 +- arch/riscv/include/asm/kvm_cove_sbi.h | 4 + arch/riscv/include/asm/kvm_vcpu_sbi.h | 3 + arch/riscv/include/uapi/asm/kvm.h | 1 + arch/riscv/kvm/Makefile | 2 +- arch/riscv/kvm/cove.c | 48 +++++- arch/riscv/kvm/cove_sbi.c | 18 ++ arch/riscv/kvm/vcpu_exit.c | 2 +- arch/riscv/kvm/vcpu_sbi.c | 14 ++ arch/riscv/kvm/vcpu_sbi_covg.c | 232 ++++++++++++++++++++++++++ 10 files changed, 328 insertions(+), 7 deletions(-) create mode 100644 arch/riscv/kvm/vcpu_sbi_covg.c diff --git a/arch/riscv/include/asm/kvm_cove.h b/arch/riscv/include/asm/kvm_cove.h index 4367281..afaea7c 100644 --- a/arch/riscv/include/asm/kvm_cove.h +++ b/arch/riscv/include/asm/kvm_cove.h @@ -31,6 +31,9 @@ #define get_order_num_pages(n) (get_order(n << PAGE_SHIFT)) +#define get_gpr_index(goffset) \ + ((goffset - KVM_ARCH_GUEST_ZERO) / (__riscv_xlen / 8)) + /* Describe a confidential or shared memory region */ struct kvm_riscv_cove_mem_region { unsigned long hva; @@ -139,7 +142,8 @@ int kvm_riscv_cove_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run); int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm, struct kvm_riscv_cove_measure_region *mr); int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned long size); -int
kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva); +int kvm_riscv_cove_handle_pagefault(struct kvm_vcpu *vcpu, gpa_t gpa, + unsigned long hva); /* Fence related function */ int kvm_riscv_cove_tvm_fence(struct kvm_vcpu *vcpu); @@ -179,8 +183,9 @@ static inline int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm, { return -1; } -static inline int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, - gpa_t gpa, unsigned long hva) {return -1; } +static inline int kvm_riscv_cove_handle_pagefault(struct kvm_vcpu *vcpu, + gpa_t gpa, unsigned long hva) { return -1; } + /* TVM interrupt managenet via AIA functions */ static inline int kvm_riscv_cove_aia_init(struct kvm *kvm) { return -1; } static inline int kvm_riscv_cove_vcpu_inject_interrupt(struct kvm_vcpu *vcpu, diff --git a/arch/riscv/include/asm/kvm_cove_sbi.h b/arch/riscv/include/asm/kvm_cove_sbi.h index b554a8d..c930265 100644 --- a/arch/riscv/include/asm/kvm_cove_sbi.h +++ b/arch/riscv/include/asm/kvm_cove_sbi.h @@ -59,6 +59,10 @@ int sbi_covh_create_tvm_vcpu(unsigned long tvmid, unsigned long tvm_vcpuid, int sbi_covh_run_tvm_vcpu(unsigned long tvmid, unsigned long tvm_vcpuid); +int sbi_covh_add_shared_pages(unsigned long tvmid, unsigned long page_addr_phys, + enum sbi_cove_page_type ptype, + unsigned long npages, + unsigned long tvm_base_page_addr); int sbi_covh_tvm_invalidate_pages(unsigned long tvmid, unsigned long tvm_base_page_addr, unsigned long len); diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h index b10c896..5b37a12 100644 --- a/arch/riscv/include/asm/kvm_vcpu_sbi.h +++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h @@ -66,5 +66,8 @@ extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm; extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_dbcn; extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_experimental; extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_vendor; +#ifdef CONFIG_RISCV_COVE_HOST +extern const struct 
kvm_vcpu_sbi_extension vcpu_sbi_ext_covg; +#endif #endif /* __RISCV_KVM_VCPU_SBI_H__ */ diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h index ac3def0..2a24341 100644 --- a/arch/riscv/include/uapi/asm/kvm.h +++ b/arch/riscv/include/uapi/asm/kvm.h @@ -148,6 +148,7 @@ enum KVM_RISCV_SBI_EXT_ID { KVM_RISCV_SBI_EXT_EXPERIMENTAL, KVM_RISCV_SBI_EXT_VENDOR, KVM_RISCV_SBI_EXT_DBCN, + KVM_RISCV_SBI_EXT_COVG, KVM_RISCV_SBI_EXT_MAX, }; diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 8c91551..31f4dbd 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -31,4 +31,4 @@ kvm-y += aia.o kvm-y += aia_device.o kvm-y += aia_aplic.o kvm-y += aia_imsic.o -kvm-$(CONFIG_RISCV_COVE_HOST) += cove_sbi.o cove.o +kvm-$(CONFIG_RISCV_COVE_HOST) += cove_sbi.o cove.o vcpu_sbi_covg.o diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c index 154b01a..ba596b7 100644 --- a/arch/riscv/kvm/cove.c +++ b/arch/riscv/kvm/cove.c @@ -44,6 +44,18 @@ static void kvm_cove_local_fence(void *info) kvm_err("local fence for TSM failed %d on cpu %d\n", rc, smp_processor_id()); } +static void cove_delete_shared_pinned_page_list(struct kvm *kvm, + struct list_head *tpages) +{ + struct kvm_riscv_cove_page *tpage, *temp; + + list_for_each_entry_safe(tpage, temp, tpages, link) { + unpin_user_pages_dirty_lock(&tpage->page, 1, true); + list_del(&tpage->link); + kfree(tpage); + } +} + static void cove_delete_page_list(struct kvm *kvm, struct list_head *tpages, bool unpin) { struct kvm_riscv_cove_page *tpage, *temp; @@ -425,7 +437,8 @@ int kvm_riscv_cove_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) sbi_ext = kvm_vcpu_sbi_find_ext(vcpu, cp->a7); if ((sbi_ext && sbi_ext->handler) && ((cp->a7 == SBI_EXT_DBCN) || - (cp->a7 == SBI_EXT_HSM) || (cp->a7 == SBI_EXT_SRST) || ext_is_01)) { + (cp->a7 == SBI_EXT_HSM) || (cp->a7 == SBI_EXT_SRST) || + (cp->a7 == SBI_EXT_COVG) || ext_is_01)) { ret = sbi_ext->handler(vcpu, run, &sbi_ret); } else { 
kvm_err("%s: SBI EXT %lx not supported for TVM\n", __func__, cp->a7); @@ -451,7 +464,8 @@ int kvm_riscv_cove_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run) return ret; } -int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva) +static int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, + unsigned long hva) { struct kvm_riscv_cove_page *tpage; struct mm_struct *mm = current->mm; @@ -517,6 +531,35 @@ int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hv return rc; } +int kvm_riscv_cove_handle_pagefault(struct kvm_vcpu *vcpu, gpa_t gpa, + unsigned long hva) +{ + struct kvm_cove_tvm_context *tvmc = vcpu->kvm->arch.tvmc; + struct kvm_riscv_cove_page *tpage, *next; + bool shared = false; + + /* TODO: Implement a better approach to track regions to avoid + * traversing the whole list on each fault. + */ + spin_lock(&vcpu->kvm->mmu_lock); + list_for_each_entry_safe(tpage, next, &tvmc->shared_pages, link) { + if (tpage->gpa == (gpa & PAGE_MASK)) { + shared = true; + break; + } + } + spin_unlock(&vcpu->kvm->mmu_lock); + + if (shared) { + return sbi_covh_add_shared_pages(tvmc->tvm_guest_id, + page_to_phys(tpage->page), + SBI_COVE_PAGE_4K, 1, + gpa & PAGE_MASK); + } + + return kvm_riscv_cove_gstage_map(vcpu, gpa, hva); +} + void noinstr kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap) { int rc; @@ -804,6 +847,7 @@ void kvm_riscv_cove_vm_destroy(struct kvm *kvm) cove_delete_page_list(kvm, &tvmc->reclaim_pending_pages, false); cove_delete_page_list(kvm, &tvmc->measured_pages, false); cove_delete_page_list(kvm, &tvmc->zero_pages, true); + cove_delete_shared_pinned_page_list(kvm, &tvmc->shared_pages); /* Reclaim and Free the pages for tvm state management */ rc = sbi_covh_tsm_reclaim_pages(page_to_phys(tvmc->tvm_state.page), tvmc->tvm_state.npages); diff --git a/arch/riscv/kvm/cove_sbi.c b/arch/riscv/kvm/cove_sbi.c index 01dc260..4759b49 100644 --- 
a/arch/riscv/kvm/cove_sbi.c +++ b/arch/riscv/kvm/cove_sbi.c @@ -380,6 +380,24 @@ int sbi_covh_add_zero_pages(unsigned long tvmid, unsigned long page_addr_phys, return 0; } +int sbi_covh_add_shared_pages(unsigned long tvmid, unsigned long page_addr_phys, + enum sbi_cove_page_type ptype, + unsigned long npages, + unsigned long tvm_base_page_addr) +{ + struct sbiret ret; + + if (!PAGE_ALIGNED(page_addr_phys)) + return -EINVAL; + + ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_ADD_SHARED_PAGES, tvmid, + page_addr_phys, ptype, npages, tvm_base_page_addr, 0); + if (ret.error) + return sbi_err_map_linux_errno(ret.error); + + return 0; +} + int sbi_covh_create_tvm_vcpu(unsigned long tvmid, unsigned long vcpuid, unsigned long vcpu_state_paddr) { diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index c46e7f2..51eb434 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -43,7 +43,7 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run, if (is_cove_vcpu(vcpu)) { /* CoVE doesn't care about PTE prots now. No need to compute the prots */ - ret = kvm_riscv_cove_gstage_map(vcpu, fault_addr, hva); + ret = kvm_riscv_cove_handle_pagefault(vcpu, fault_addr, hva); } else { ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva, (trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? 
true : false); diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c index d2f43bc..8bc7d73 100644 --- a/arch/riscv/kvm/vcpu_sbi.c +++ b/arch/riscv/kvm/vcpu_sbi.c @@ -13,6 +13,8 @@ #include #include #include +#include +#include #ifndef CONFIG_RISCV_SBI_V01 static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01 = { @@ -32,6 +34,14 @@ static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_pmu = { }; #endif +#ifndef CONFIG_RISCV_COVE_HOST +static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_covg = { + .extid_start = -1UL, + .extid_end = -1UL, + .handler = NULL, +}; +#endif + struct kvm_riscv_sbi_extension_entry { enum KVM_RISCV_SBI_EXT_ID dis_idx; const struct kvm_vcpu_sbi_extension *ext_ptr; @@ -82,6 +92,10 @@ static const struct kvm_riscv_sbi_extension_entry sbi_ext[] = { .dis_idx = KVM_RISCV_SBI_EXT_VENDOR, .ext_ptr = &vcpu_sbi_ext_vendor, }, + { + .dis_idx = KVM_RISCV_SBI_EXT_COVG, + .ext_ptr = &vcpu_sbi_ext_covg, + }, }; void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run) diff --git a/arch/riscv/kvm/vcpu_sbi_covg.c b/arch/riscv/kvm/vcpu_sbi_covg.c new file mode 100644 index 0000000..44a3b06 --- /dev/null +++ b/arch/riscv/kvm/vcpu_sbi_covg.c @@ -0,0 +1,232 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 Rivos Inc. 
+ * + * Authors: + * Rajnesh Kanwal + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static int cove_share_converted_page(struct kvm_vcpu *vcpu, gpa_t gpa, + struct kvm_riscv_cove_page *tpage) +{ + struct kvm *kvm = vcpu->kvm; + struct kvm_cove_tvm_context *tvmc = kvm->arch.tvmc; + int rc; + + rc = sbi_covh_tvm_invalidate_pages(tvmc->tvm_guest_id, gpa, PAGE_SIZE); + if (rc) + return rc; + + rc = kvm_riscv_cove_tvm_fence(vcpu); + if (rc) + goto err; + + rc = sbi_covh_tvm_remove_pages(tvmc->tvm_guest_id, gpa, PAGE_SIZE); + if (rc) + goto err; + + rc = sbi_covh_tsm_reclaim_page(page_to_phys(tpage->page)); + if (rc) + return rc; + + spin_lock(&kvm->mmu_lock); + list_del(&tpage->link); + list_add(&tpage->link, &tvmc->shared_pages); + spin_unlock(&kvm->mmu_lock); + + return 0; + +err: + sbi_covh_tvm_validate_pages(tvmc->tvm_guest_id, gpa, PAGE_SIZE); + + return rc; +} + +static int cove_share_page(struct kvm_vcpu *vcpu, gpa_t gpa, + unsigned long *sbi_err) +{ + unsigned long hva = gfn_to_hva(vcpu->kvm, gpa >> PAGE_SHIFT); + struct kvm_cove_tvm_context *tvmc = vcpu->kvm->arch.tvmc; + struct mm_struct *mm = current->mm; + struct kvm_riscv_cove_page *tpage; + struct page *page; + int rc; + + if (kvm_is_error_hva(hva)) { + /* Address is out of the guest ram memory region. 
*/ + *sbi_err = SBI_ERR_INVALID_PARAM; + return 0; + } + + tpage = kmalloc(sizeof(*tpage), GFP_KERNEL_ACCOUNT); + if (!tpage) + return -ENOMEM; + + mmap_read_lock(mm); + rc = pin_user_pages(hva, 1, FOLL_LONGTERM | FOLL_WRITE, &page, NULL); + mmap_read_unlock(mm); + + if (rc != 1) { + rc = -EINVAL; + goto free_tpage; + } else if (!PageSwapBacked(page)) { + rc = -EIO; + goto free_tpage; + } + + tpage->page = page; + tpage->gpa = gpa; + tpage->hva = hva; + INIT_LIST_HEAD(&tpage->link); + + spin_lock(&vcpu->kvm->mmu_lock); + list_add(&tpage->link, &tvmc->shared_pages); + spin_unlock(&vcpu->kvm->mmu_lock); + + return 0; + +free_tpage: + kfree(tpage); + + return rc; +} + +static int kvm_riscv_cove_share_page(struct kvm_vcpu *vcpu, gpa_t gpa, + unsigned long *sbi_err) +{ + struct kvm_cove_tvm_context *tvmc = vcpu->kvm->arch.tvmc; + struct kvm_riscv_cove_page *tpage, *next; + bool converted = false; + + /* + * Check if the shared memory is part of the pages already assigned + * to the TVM. + * + * TODO: Implement a better approach to track regions to avoid + * traversing the whole list. 
+ */ + spin_lock(&vcpu->kvm->mmu_lock); + list_for_each_entry_safe(tpage, next, &tvmc->zero_pages, link) { + if (tpage->gpa == gpa) { + converted = true; + break; + } + } + spin_unlock(&vcpu->kvm->mmu_lock); + + if (converted) + return cove_share_converted_page(vcpu, gpa, tpage); + + return cove_share_page(vcpu, gpa, sbi_err); +} + +static int kvm_riscv_cove_unshare_page(struct kvm_vcpu *vcpu, gpa_t gpa) +{ + struct kvm_riscv_cove_page *tpage, *next; + struct kvm *kvm = vcpu->kvm; + struct kvm_cove_tvm_context *tvmc = kvm->arch.tvmc; + struct page *page = NULL; + int rc; + + spin_lock(&kvm->mmu_lock); + list_for_each_entry_safe(tpage, next, &tvmc->shared_pages, link) { + if (tpage->gpa == gpa) { + page = tpage->page; + break; + } + } + spin_unlock(&kvm->mmu_lock); + + if (unlikely(!page)) + return -EINVAL; + + rc = sbi_covh_tvm_invalidate_pages(tvmc->tvm_guest_id, gpa, PAGE_SIZE); + if (rc) + return rc; + + rc = kvm_riscv_cove_tvm_fence(vcpu); + if (rc) + return rc; + + rc = sbi_covh_tvm_remove_pages(tvmc->tvm_guest_id, gpa, PAGE_SIZE); + if (rc) + return rc; + + unpin_user_pages_dirty_lock(&page, 1, true); + + spin_lock(&kvm->mmu_lock); + list_del(&tpage->link); + spin_unlock(&kvm->mmu_lock); + + kfree(tpage); + + return 0; +} + +static int kvm_sbi_ext_covg_handler(struct kvm_vcpu *vcpu, struct kvm_run *run, + struct kvm_vcpu_sbi_return *retdata) +{ + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; + uint32_t num_pages = cp->a1 / PAGE_SIZE; + unsigned long funcid = cp->a6; + unsigned long *err_val = &retdata->err_val; + uint32_t i; + int ret; + + switch (funcid) { + case SBI_EXT_COVG_SHARE_MEMORY: + for (i = 0; i < num_pages; i++) { + ret = kvm_riscv_cove_share_page( + vcpu, cp->a0 + i * PAGE_SIZE, err_val); + if (ret || *err_val != SBI_SUCCESS) + return ret; + } + return 0; + + case SBI_EXT_COVG_UNSHARE_MEMORY: + for (i = 0; i < num_pages; i++) { + ret = kvm_riscv_cove_unshare_page( + vcpu, cp->a0 + i * PAGE_SIZE); + if (ret) + return ret; + } + return 0; 
+ + case SBI_EXT_COVG_ADD_MMIO_REGION: + case SBI_EXT_COVG_REMOVE_MMIO_REGION: + case SBI_EXT_COVG_ALLOW_EXT_INTERRUPT: + case SBI_EXT_COVG_DENY_EXT_INTERRUPT: + /* We don't really need to do anything here for now. */ + return 0; + + default: + kvm_err("%s: Unsupported guest SBI %ld.\n", __func__, funcid); + retdata->err_val = SBI_ERR_NOT_SUPPORTED; + return -EOPNOTSUPP; + } +} + +unsigned long kvm_sbi_ext_covg_probe(struct kvm_vcpu *vcpu) +{ + /* KVM COVG SBI handler is only meant for handling calls from TSM */ + return 0; +} + +const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_covg = { + .extid_start = SBI_EXT_COVG, + .extid_end = SBI_EXT_COVG, + .handler = kvm_sbi_ext_covg_handler, + .probe = kvm_sbi_ext_covg_probe, +};

From patchwork Wed Apr 19 22:17:10 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217585
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 42/48] RISC-V: Allow host to inject any ext interrupt id to a CoVE guest.
Date: Wed, 19 Apr 2023 15:17:10 -0700
Message-Id: <20230419221716.3603068-43-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>
X-Mailing-List: kvm@vger.kernel.org

From: Rajnesh Kanwal

Ideally, the host must not inject any external interrupt until the guest has explicitly allowed it. This should be done per interrupt ID, but for now an allow-all call is added in init_IRQ(). In the future this will be changed to allow only specific interrupts.

Signed-off-by: Rajnesh Kanwal
Signed-off-by: Atish Patra
---
arch/riscv/kernel/irq.c | 12 ++++++++++++ 1 file changed, 12 insertions(+)

diff --git a/arch/riscv/kernel/irq.c b/arch/riscv/kernel/irq.c index eb9a68a..b5e0fd8 100644 --- a/arch/riscv/kernel/irq.c +++ b/arch/riscv/kernel/irq.c @@ -11,6 +11,8 @@ #include #include #include +#include +#include static struct fwnode_handle *(*__get_intc_node)(void); @@ -36,8 +38,18 @@ int arch_show_interrupts(struct seq_file *p, int prec) void __init init_IRQ(void) { + int ret; + irqchip_init(); if (!handle_arch_irq) panic("No interrupt controller found."); sbi_ipi_init(); + + if (is_cove_guest()) { + /* FIXME: For now just allow all interrupts.
*/ + ret = sbi_covg_allow_all_external_interrupt(); + + if (ret) + pr_err("Failed to allow external interrupts.\n"); + } }

From patchwork Wed Apr 19 22:17:11 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217553
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 43/48] RISC-V: Add base memory encryption functions.
Date: Wed, 19 Apr 2023 15:17:11 -0700
Message-Id: <20230419221716.3603068-44-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>
X-Mailing-List: kvm@vger.kernel.org

From: Rajnesh Kanwal

Devices like virtio use shared memory buffers to transfer data. These buffers are part of the guest memory region. For a CoVE guest this is not possible, because the host cannot access guest memory. This is solved by the VIRTIO_F_ACCESS_PLATFORM feature and SWIOTLB bounce buffers. The guest only allows devices with the VIRTIO_F_ACCESS_PLATFORM feature, which makes the guest use the DMA API and, from there, the SWIOTLB bounce buffer, since the SWIOTLB_FORCE flag is set for a TEE VM. set_memory_encrypted() and set_memory_decrypted() sit in this allocation path: when a buffer is being decrypted we mark it shared, and when it is being encrypted we mark it unshared, using hypercalls.
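The encrypt/decrypt hooks described above reduce to: validate page alignment, then issue a share or unshare hypercall covering numpages * PAGE_SIZE bytes. A minimal userspace sketch of that flow (the stub and all names here are hypothetical stand-ins for the SBI COVG calls, not the kernel's API):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL
#define PAGE_ALIGNED(a) (((a) & (PAGE_SIZE - 1)) == 0)

/* Hypothetical stub standing in for the sbi_covg_share_memory() hypercall;
 * it just records the last request so the flow can be inspected. */
uint64_t last_pa, last_len;

int sbi_covg_share_memory_stub(uint64_t pa, uint64_t len)
{
	last_pa = pa;
	last_len = len;
	return 0;
}

/* Mirrors the shape of set_memory_decrypted() in this patch: reject
 * unaligned addresses, then share numpages * PAGE_SIZE bytes at addr. */
int set_memory_decrypted_model(uint64_t addr, int numpages)
{
	if (!PAGE_ALIGNED(addr))
		return -22; /* -EINVAL */

	return sbi_covg_share_memory_stub(addr, (uint64_t)numpages * PAGE_SIZE);
}
```

Note that a failure here surfaces to the DMA allocation path, so callers see the allocation fail rather than silently sharing a misaligned region.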
Signed-off-by: Rajnesh Kanwal Signed-off-by: Atish Patra --- arch/riscv/Kconfig | 7 ++++ arch/riscv/include/asm/mem_encrypt.h | 26 +++++++++++++ arch/riscv/mm/Makefile | 2 + arch/riscv/mm/init.c | 17 ++++++++- arch/riscv/mm/mem_encrypt.c | 57 ++++++++++++++++++++++++++++ 5 files changed, 108 insertions(+), 1 deletion(-) create mode 100644 arch/riscv/include/asm/mem_encrypt.h create mode 100644 arch/riscv/mm/mem_encrypt.c diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 49c3006..414cee1 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -163,6 +163,11 @@ config ARCH_MMAP_RND_BITS_MAX config ARCH_MMAP_RND_COMPAT_BITS_MAX default 17 +config RISCV_MEM_ENCRYPT + select ARCH_HAS_MEM_ENCRYPT + select ARCH_HAS_FORCE_DMA_UNENCRYPTED + def_bool n + # set if we run in machine mode, cleared if we run in supervisor mode config RISCV_M_MODE bool @@ -515,6 +520,8 @@ config RISCV_COVE_HOST config RISCV_COVE_GUEST bool "Guest Support for Confidential VM Extension(CoVE)" default n + select SWIOTLB + select RISCV_MEM_ENCRYPT help Enables support for running TVMs on platforms supporting CoVE. diff --git a/arch/riscv/include/asm/mem_encrypt.h b/arch/riscv/include/asm/mem_encrypt.h new file mode 100644 index 0000000..0dc3fe8 --- /dev/null +++ b/arch/riscv/include/asm/mem_encrypt.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * RISCV Memory Encryption Support. + * + * Copyright (c) 2023 Rivos Inc. 
+ * + * Authors: + * Rajnesh Kanwal + */ + +#ifndef __RISCV_MEM_ENCRYPT_H__ +#define __RISCV_MEM_ENCRYPT_H__ + +#include + +struct device; + +bool force_dma_unencrypted(struct device *dev); + +/* Architecture __weak replacement functions */ +void __init mem_encrypt_init(void); + +int set_memory_encrypted(unsigned long addr, int numpages); +int set_memory_decrypted(unsigned long addr, int numpages); + +#endif /* __RISCV_MEM_ENCRYPT_H__ */ diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile index 2ac177c..1fd9b60 100644 --- a/arch/riscv/mm/Makefile +++ b/arch/riscv/mm/Makefile @@ -33,3 +33,5 @@ endif obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o obj-$(CONFIG_RISCV_DMA_NONCOHERENT) += dma-noncoherent.o + +obj-$(CONFIG_RISCV_MEM_ENCRYPT) += mem_encrypt.o diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index 478d676..b5edd8e 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -21,6 +21,7 @@ #include #include +#include #include #include #include @@ -156,11 +157,25 @@ static void print_vm_layout(void) { } void __init mem_init(void) { + unsigned int flags = SWIOTLB_VERBOSE; + bool swiotlb_en; + + if (is_cove_guest()) { + /* Since the guest memory is inaccessible to the host, devices + * always need to use the SWIOTLB buffer for DMA even if + * dma_capable() says otherwise. + */ + flags |= SWIOTLB_FORCE; + swiotlb_en = true; + } else { + swiotlb_en = !!(max_pfn > PFN_DOWN(dma32_phys_limit)); + } + #ifdef CONFIG_FLATMEM BUG_ON(!mem_map); #endif /* CONFIG_FLATMEM */ - swiotlb_init(max_pfn > PFN_DOWN(dma32_phys_limit), SWIOTLB_VERBOSE); + swiotlb_init(swiotlb_en, flags); memblock_free_all(); print_vm_layout(); diff --git a/arch/riscv/mm/mem_encrypt.c b/arch/riscv/mm/mem_encrypt.c new file mode 100644 index 0000000..8207a5c --- /dev/null +++ b/arch/riscv/mm/mem_encrypt.c @@ -0,0 +1,57 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 Rivos Inc. 
+ * + * Authors: + * Rajnesh Kanwal + */ + +#include +#include +#include +#include +#include + +/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */ +bool force_dma_unencrypted(struct device *dev) +{ + /* + * For authorized devices in trusted guest, all DMA must be to/from + * unencrypted addresses. + */ + return cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT); +} + +int set_memory_encrypted(unsigned long addr, int numpages) +{ + if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT)) + return 0; + + if (!PAGE_ALIGNED(addr)) + return -EINVAL; + + return sbi_covg_unshare_memory(__pa(addr), numpages * PAGE_SIZE); +} +EXPORT_SYMBOL_GPL(set_memory_encrypted); + +int set_memory_decrypted(unsigned long addr, int numpages) +{ + if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT)) + return 0; + + if (!PAGE_ALIGNED(addr)) + return -EINVAL; + + return sbi_covg_share_memory(__pa(addr), numpages * PAGE_SIZE); +} +EXPORT_SYMBOL_GPL(set_memory_decrypted); + +/* Architecture __weak replacement functions */ +void __init mem_encrypt_init(void) +{ + if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT)) + return; + + /* Call into SWIOTLB to update the SWIOTLB DMA buffers */ + swiotlb_update_mem_attributes(); +}

From patchwork Wed Apr 19 22:17:12 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217554
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 44/48] RISC-V: Add cc_platform_has() for RISC-V for CoVE
Date: Wed, 19 Apr 2023 15:17:12 -0700
Message-Id: <20230419221716.3603068-45-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>
X-Mailing-List: kvm@vger.kernel.org

From: Rajnesh Kanwal

All confidential computing solutions use the arch-specific cc_platform_has() function to enable memory encryption/decryption. Implement the same for RISC-V to support that as well.
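The dispatch this patch adds can be modeled in isolation: the memory-encryption attributes are reported as present only when running as a CoVE guest, and every other attribute is false. A hedged userspace sketch (the enum values and names below are stand-ins, not the kernel's `enum cc_attr`):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the kernel's enum cc_attr values (names assumed). */
enum cc_attr_model {
	CC_ATTR_MEM_ENCRYPT,
	CC_ATTR_GUEST_MEM_ENCRYPT,
	CC_ATTR_HOST_MEM_ENCRYPT,
};

bool cove_guest = true;	/* stands in for is_cove_guest() */

/* Mirrors the patch's switch: memory-encryption attributes are true only
 * inside a CoVE guest; every other attribute is reported as absent. */
bool cc_platform_has_model(enum cc_attr_model attr)
{
	switch (attr) {
	case CC_ATTR_GUEST_MEM_ENCRYPT:
	case CC_ATTR_MEM_ENCRYPT:
		return cove_guest;
	default:
		return false;
	}
}
```

This is why set_memory_encrypted()/set_memory_decrypted() become no-ops on a non-CoVE kernel: the same binary can run in either environment and the attribute check gates the hypercalls.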
Signed-off-by: Rajnesh Kanwal
Signed-off-by: Atish Patra
---
arch/riscv/Kconfig | 1 + arch/riscv/cove/core.c | 12 ++++++++++++ 2 files changed, 13 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 414cee1..2ca9e01 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -522,6 +522,7 @@ config RISCV_COVE_GUEST default n select SWIOTLB select RISCV_MEM_ENCRYPT + select ARCH_HAS_CC_PLATFORM help Enables support for running TVMs on platforms supporting CoVE. diff --git a/arch/riscv/cove/core.c b/arch/riscv/cove/core.c index 7218fe7..582feb1c 100644 --- a/arch/riscv/cove/core.c +++ b/arch/riscv/cove/core.c @@ -21,6 +21,18 @@ bool is_cove_guest(void) } EXPORT_SYMBOL_GPL(is_cove_guest); +bool cc_platform_has(enum cc_attr attr) +{ + switch (attr) { + case CC_ATTR_GUEST_MEM_ENCRYPT: + case CC_ATTR_MEM_ENCRYPT: + return is_cove_guest(); + default: + return false; + } +} +EXPORT_SYMBOL_GPL(cc_platform_has); + void riscv_cove_sbi_init(void) { if (sbi_probe_extension(SBI_EXT_COVG) > 0)

From patchwork Wed Apr 19 22:17:13 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217551
From: Atish Patra
To: linux-kernel@vger.kernel.org
Subject: [RFC 45/48] RISC-V: ioremap: Implement for arch specific ioremap hooks
Date: Wed, 19 Apr 2023 15:17:13 -0700
Message-Id: <20230419221716.3603068-46-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>
X-Mailing-List: kvm@vger.kernel.org

From: Rajnesh Kanwal

Guests running in CoVE must notify the host about their MMIO regions so that the host can enable MMIO emulation.
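Both hooks in this patch page-align the reported region the same way: strip the sub-page offset from the address, then round the size up to a whole number of pages. A small self-contained sketch of that arithmetic (assuming 4 KiB pages; the helper name is ours, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)

/* Expands [addr, addr + size) to whole pages, the same computation the
 * ioremap/iounmap hooks perform before reporting an MMIO region. */
void page_align_region(uint64_t *addr, uint64_t *size)
{
	uint64_t offset = *addr & ~PAGE_MASK;	/* bytes into the first page */

	*addr -= offset;
	*size = PAGE_ALIGN(*size + offset);
}
```

A region that straddles a page boundary after the offset is folded in grows to cover both pages, which is why the size is aligned only after adding the offset back.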
Signed-off-by: Rajnesh Kanwal Signed-off-by: Atish Patra --- arch/riscv/mm/Makefile | 1 + arch/riscv/mm/ioremap.c | 45 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 46 insertions(+) create mode 100644 arch/riscv/mm/ioremap.c diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile index 1fd9b60..721b557 100644 --- a/arch/riscv/mm/Makefile +++ b/arch/riscv/mm/Makefile @@ -15,6 +15,7 @@ obj-y += cacheflush.o obj-y += context.o obj-y += pgtable.o obj-y += pmem.o +obj-y += ioremap.o ifeq ($(CONFIG_MMU),y) obj-$(CONFIG_SMP) += tlbflush.o diff --git a/arch/riscv/mm/ioremap.c b/arch/riscv/mm/ioremap.c new file mode 100644 index 0000000..0d4e026 --- /dev/null +++ b/arch/riscv/mm/ioremap.c @@ -0,0 +1,45 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2023 Rivos Inc. + * + * Authors: + * Rajnesh Kanwal + */ + +#include +#include +#include +#include +#include +#include +#include + +void ioremap_phys_range_hook(phys_addr_t addr, size_t size, pgprot_t prot) +{ + unsigned long offset; + + if (!is_cove_guest()) + return; + + /* Page-align address and size. */ + offset = addr & (~PAGE_MASK); + addr -= offset; + size = PAGE_ALIGN(size + offset); + + sbi_covg_add_mmio_region(addr, size); +} + +void iounmap_phys_range_hook(phys_addr_t addr, size_t size) +{ + unsigned long offset; + + if (!is_cove_guest()) + return; + + /* Page-align address and size. 
*/ + offset = addr & (~PAGE_MASK); + addr -= offset; + size = PAGE_ALIGN(size + offset); + + sbi_covg_remove_mmio_region(addr, size); +} From patchwork Wed Apr 19 22:17:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Atish Kumar Patra X-Patchwork-Id: 13217584 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C2F17C6FD18 for ; Wed, 19 Apr 2023 22:30:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231845AbjDSWaS (ORCPT ); Wed, 19 Apr 2023 18:30:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56382 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233630AbjDSW3a (ORCPT ); Wed, 19 Apr 2023 18:29:30 -0400 Received: from mail-ua1-x936.google.com (mail-ua1-x936.google.com [IPv6:2607:f8b0:4864:20::936]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 135A455A5 for ; Wed, 19 Apr 2023 15:28:15 -0700 (PDT) Received: by mail-ua1-x936.google.com with SMTP id a1e0cc1a2514c-771d9ec5aa5so159292241.0 for ; Wed, 19 Apr 2023 15:28:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20221208.gappssmtp.com; s=20221208; t=1681943247; x=1684535247; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ecSGlantlLzvA0G9VPQY9TXgG8EZEn7xksLEzRIVmIY=; b=JivewOmrlAr1ZbnurDAaNTRphsPccGwoF6k/PNgJ7KGJwRx0uCfoWYGFR/bEfixo0e r5w+kgN9iHlKiZYstptLVFIoVAAQVnMcChho91eAwSnCVF0OnxMe4as8+J6SQVPzvkii 7GnTcY5w+wG9L8xFddRsQYGt/lZH9/7Hm8ZgBfAVuKvKY70Ax/Nz3ySftgNaMpABgvCF eqb9Ca+MkmM5QX1vMUNqRohYkTP5kIVrd0HLdlx66cTP6tY0LPFGZUI+CKHvudqcJRuQ hHUPmD5b/KDOW7sIfM4jqLnBhE8T3Cr38+IOroeT8X8Orv/Rf60MeRqEvU6HoedzvDUl OPww== 
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Rajnesh Kanwal, Alexandre Ghiti, Andrew Jones, Andrew Morton, Anup Patel, Atish Patra, Björn Töpel, Suzuki K Poulose, Will Deacon, Marc Zyngier, Sean Christopherson, linux-coco@lists.linux.dev, Dylan Reid, abrestic@rivosinc.com, Samuel Ortiz, Christoph Hellwig, Conor Dooley, Greg Kroah-Hartman, Guo Ren, Heiko Stuebner, Jiri Slaby, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley, Uladzislau Rezki
Subject: [RFC 46/48] riscv/virtio: Have CoVE guests enforce restricted virtio memory access
Date: Wed, 19 Apr 2023 15:17:14 -0700
Message-Id: <20230419221716.3603068-47-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

From: Rajnesh Kanwal

A CoVE guest requires that virtio devices use the DMA API so that the
hypervisor can access guest memory as needed. The VIRTIO_F_VERSION_1 and
VIRTIO_F_ACCESS_PLATFORM features tell virtio to use the DMA API. Check
for these features and fail the device probe if they have not been set
when running as a TEE guest.

Signed-off-by: Rajnesh Kanwal
---
 arch/riscv/mm/mem_encrypt.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/riscv/mm/mem_encrypt.c b/arch/riscv/mm/mem_encrypt.c
index 8207a5c..8523c50 100644
--- a/arch/riscv/mm/mem_encrypt.c
+++ b/arch/riscv/mm/mem_encrypt.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
@@ -54,4 +55,7 @@ void __init mem_encrypt_init(void)
 
 	/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
 	swiotlb_update_mem_attributes();
+
+	/* Set restricted memory access for virtio. */
+	virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
 }

From patchwork Wed Apr 19 22:17:15 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217586
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Rajnesh Kanwal, Atish Patra, Alexandre Ghiti, Andrew Jones, Andrew Morton, Anup Patel, Atish Patra, Björn Töpel, Suzuki K Poulose, Will Deacon, Marc Zyngier, Sean Christopherson, linux-coco@lists.linux.dev, Dylan Reid, abrestic@rivosinc.com, Samuel Ortiz, Christoph Hellwig, Conor Dooley, Greg Kroah-Hartman, Guo Ren, Heiko Stuebner, Jiri Slaby, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley, Uladzislau Rezki
Subject: [RFC 47/48] RISC-V: Add shared bounce buffer to support DBCN for CoVE Guest
Date: Wed, 19 Apr 2023 15:17:15 -0700
Message-Id: <20230419221716.3603068-48-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

From: Rajnesh Kanwal

The early console buffer needs to be shared with the host for a CoVE guest.

Signed-off-by: Rajnesh Kanwal
Signed-off-by: Atish Patra
---
 drivers/tty/serial/earlycon-riscv-sbi.c | 51 ++++++++++++++++++++++++-
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/drivers/tty/serial/earlycon-riscv-sbi.c b/drivers/tty/serial/earlycon-riscv-sbi.c
index 311a4f8..9033cca 100644
--- a/drivers/tty/serial/earlycon-riscv-sbi.c
+++ b/drivers/tty/serial/earlycon-riscv-sbi.c
@@ -9,6 +9,14 @@
 #include
 #include
 #include
+#include
+#include
+#include
+
+#ifdef CONFIG_RISCV_COVE_GUEST
+#define DBCN_BOUNCE_BUF_SIZE	(PAGE_SIZE)
+static char dbcn_buf[DBCN_BOUNCE_BUF_SIZE] __aligned(PAGE_SIZE);
+#endif
 
 #ifdef CONFIG_RISCV_SBI_V01
 static void sbi_putc(struct uart_port *port, unsigned char c)
@@ -24,6 +32,33 @@ static void sbi_0_1_console_write(struct console *con,
 }
 #endif
 
+#ifdef CONFIG_RISCV_COVE_GUEST
+static void sbi_dbcn_console_write_cove(struct console *con, const char *s,
+					unsigned int n)
+{
+	phys_addr_t pa = __pa(dbcn_buf);
+	unsigned int off = 0;
+
+	while (off < n) {
+		const unsigned int rem = n - off;
+		const unsigned int size =
+			rem > DBCN_BOUNCE_BUF_SIZE ? DBCN_BOUNCE_BUF_SIZE : rem;
+
+		memcpy(dbcn_buf, &s[off], size);
+
+		sbi_ecall(SBI_EXT_DBCN, SBI_EXT_DBCN_CONSOLE_WRITE,
+#ifdef CONFIG_32BIT
+			  size, pa, (u64)pa >> 32,
+#else
+			  size, pa, 0,
+#endif
+			  0, 0, 0);
+
+		off += size;
+	}
+}
+#endif
+
 static void sbi_dbcn_console_write(struct console *con,
 				   const char *s, unsigned n)
 {
@@ -45,14 +80,26 @@ static int __init early_sbi_setup(struct earlycon_device *device,
 
 	/* TODO: Check for SBI debug console (DBCN) extension */
 	if ((sbi_spec_version >= sbi_mk_version(1, 0)) &&
-	    (sbi_probe_extension(SBI_EXT_DBCN) > 0))
+	    (sbi_probe_extension(SBI_EXT_DBCN) > 0)) {
+#ifdef CONFIG_RISCV_COVE_GUEST
+		if (is_cove_guest()) {
+			ret = sbi_covg_share_memory(__pa(dbcn_buf),
+						    DBCN_BOUNCE_BUF_SIZE);
+			if (ret)
+				return ret;
+
+			device->con->write = sbi_dbcn_console_write_cove;
+			return 0;
+		}
+#endif
 		device->con->write = sbi_dbcn_console_write;
-	else
+	} else {
 #ifdef CONFIG_RISCV_SBI_V01
 		device->con->write = sbi_0_1_console_write;
 #else
 		ret = -ENODEV;
 #endif
+	}
 
 	return ret;
 }

From patchwork Wed Apr 19 22:17:16 2023
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217552
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Alexandre Ghiti, Andrew Jones, Andrew Morton, Anup Patel, Atish Patra, Björn Töpel, Suzuki K Poulose, Will Deacon, Marc Zyngier, Sean Christopherson, linux-coco@lists.linux.dev, Dylan Reid, abrestic@rivosinc.com, Samuel Ortiz, Christoph Hellwig, Conor Dooley, Greg Kroah-Hartman, Guo Ren, Heiko Stuebner, Jiri Slaby, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, Mayuresh Chitale, Palmer Dabbelt, Paolo Bonzini, Paul Walmsley, Rajnesh Kanwal, Uladzislau Rezki
Subject: [RFC 48/48] drivers/hvc: sbi: Disable HVC console for TVMs
Date: Wed, 19 Apr 2023 15:17:16 -0700
Message-Id: <20230419221716.3603068-49-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

If two consoles of the same type are specified on the command line, the
kernel picks the first registered one rather than the preferred one. A fix
was proposed and NACK'ed because of a possible regression for other users:
https://lore.kernel.org/all/Y+tziG0Uo5ey+Ocy@alley/

The HVC SBI console makes it impossible to use the virtio console, which is
preferred anyway. We could have disabled the HVC console for TVMs at build
time, but the same kernel image must work on both the host and the guest,
and there are genuine reasons to require the HVC SBI console on the host.
Instead, do not initialize the HVC console for TVMs so that the virtio
console can be used.
Signed-off-by: Atish Patra
---
 drivers/tty/hvc/hvc_riscv_sbi.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/tty/hvc/hvc_riscv_sbi.c b/drivers/tty/hvc/hvc_riscv_sbi.c
index 83cfe00..dee96c5 100644
--- a/drivers/tty/hvc/hvc_riscv_sbi.c
+++ b/drivers/tty/hvc/hvc_riscv_sbi.c
@@ -11,6 +11,7 @@
 #include
 #include
+#include
 #include
 #include
 
 #include "hvc_console.h"
@@ -103,6 +104,10 @@ static int __init hvc_sbi_init(void)
 {
 	int err;
 
+	/* Prefer virtio console as hvc console for guests */
+	if (is_cove_guest())
+		return 0;
+
 	if ((sbi_spec_version >= sbi_mk_version(1, 0)) &&
 	    (sbi_probe_extension(SBI_EXT_DBCN) > 0)) {
 		err = PTR_ERR_OR_ZERO(hvc_alloc(0, 0, &hvc_sbi_dbcn_ops, 16));