From patchwork Wed Apr 19 22:16:34 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Atish Kumar Patra
X-Patchwork-Id: 13217679
From: Atish Patra
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Alexandre Ghiti, Andrew Jones, Andrew Morton, Anup Patel,
    Atish Patra, Björn Töpel, Suzuki K Poulose, Will Deacon, Marc Zyngier,
    Sean Christopherson, linux-coco@lists.linux.dev, Dylan Reid,
    abrestic@rivosinc.com, Samuel Ortiz, Christoph Hellwig, Conor Dooley,
    Greg Kroah-Hartman, Guo Ren, Heiko Stuebner, Jiri Slaby,
    kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org,
    linux-riscv@lists.infradead.org, Mayuresh Chitale, Palmer Dabbelt,
    Paolo Bonzini, Paul Walmsley, Rajnesh Kanwal, Uladzislau Rezki
Subject: [RFC 06/48] RISC-V: KVM: Implement COVH SBI extension
Date: Wed, 19 Apr 2023 15:16:34 -0700
Message-Id: <20230419221716.3603068-7-atishp@rivosinc.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

The COVH SBI extension defines the SBI functions that the host invokes to
configure, create, and destroy a TEE VM (TVM). Implement all of the COVH
SBI extension functions.

Signed-off-by: Atish Patra
---
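[Note for reviewers, not part of the commit: below is a rough sketch of how a
host might sequence these helpers across a TVM's life cycle. Only the
sbi_covh_*() calls and the sbi_cove_* structure names come from this series;
the guest-physical addresses, page counts, donated-page physical addresses and
the error unwinding are placeholders, and cove_demo_tvm_lifecycle() is a
hypothetical function, not code proposed for merging.]

#include <asm/kvm_cove_sbi.h>

/* Illustrative only; every address, count and ordering detail below is a placeholder. */
static int cove_demo_tvm_lifecycle(void)
{
	/* Static so __pa() in the wrappers sees linear-mapped memory. */
	static struct sbi_cove_tsm_info tinfo;
	static struct sbi_cove_tvm_create_params params;	/* filled in by real code */
	unsigned long state_phys = 0;	/* page-aligned pages donated for TVM state */
	unsigned long pgt_phys = 0;	/* page-aligned pages donated for guest page tables */
	unsigned long tvmid;
	int rc;

	/* Query TSM state/capabilities before creating any TVM. */
	rc = sbi_covh_tsm_get_info(&tinfo);
	if (rc)
		return rc;

	/* Donate host pages to the TSM, then create the TVM context. */
	rc = sbi_covh_tsm_convert_pages(state_phys, 4);
	if (!rc)
		rc = sbi_covh_tsm_create_tvm(&params, &tvmid);
	if (rc)
		return rc;

	/* Declare a confidential region and back it with donated page-table pages. */
	rc = sbi_covh_add_memory_region(tvmid, 0x80000000UL, 0x200000UL);
	if (!rc)
		rc = sbi_covh_tsm_convert_pages(pgt_phys, 4);
	if (!rc)
		rc = sbi_covh_add_pgt_pages(tvmid, pgt_phys, 4);
	if (rc)
		goto destroy;

	/*
	 * The initial guest payload would be populated here with
	 * sbi_covh_add_measured_pages() and sbi_covh_add_zero_pages().
	 */

	/* Finalize the measurement, then create and run the boot vCPU. */
	rc = sbi_covh_tsm_finalize_tvm(tvmid, 0x80000000UL, 0);
	if (!rc)
		rc = sbi_covh_create_tvm_vcpu(tvmid, 0, /* vCPU state page PA */ 0);
	if (!rc)
		rc = sbi_covh_run_tvm_vcpu(tvmid, 0);

destroy:
	sbi_covh_tsm_destroy_tvm(tvmid);
	/* Converted pages go back to the host once the TSM is done with them. */
	sbi_covh_tsm_reclaim_pages(state_phys, 4);
	sbi_covh_tsm_reclaim_pages(pgt_phys, 4);
	return rc;
}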
 arch/riscv/Kconfig                    |  13 ++
 arch/riscv/include/asm/kvm_cove_sbi.h |  46 +++++
 arch/riscv/kvm/Makefile               |   1 +
 arch/riscv/kvm/cove_sbi.c             | 245 ++++++++++++++++++++++++++
 4 files changed, 305 insertions(+)
 create mode 100644 arch/riscv/include/asm/kvm_cove_sbi.h
 create mode 100644 arch/riscv/kvm/cove_sbi.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 4044080..8462941 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -501,6 +501,19 @@ config FPU
 
 	  If you don't know what to do here, say Y.
 
+menu "Confidential VM Extension (CoVE) Support"
+
+config RISCV_COVE_HOST
+	bool "Host (KVM) support for Confidential VM Extension (CoVE)"
+	depends on KVM
+	default n
+	help
+	  Enable this if the platform supports the confidential VM extension.
+	  That means the platform should be capable of running TEE VMs (TVMs)
+	  using KVM and a TEE Security Manager (TSM).
+
+endmenu # "Confidential VM Extension (CoVE) Support"
+
 endmenu # "Platform type"
 
 menu "Kernel features"

diff --git a/arch/riscv/include/asm/kvm_cove_sbi.h b/arch/riscv/include/asm/kvm_cove_sbi.h
new file mode 100644
index 0000000..24562df
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_cove_sbi.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * COVE SBI extension related header file.
+ *
+ * Copyright (c) 2023 RivosInc
+ *
+ * Authors:
+ *	Atish Patra
+ */
+
+#ifndef __KVM_COVE_SBI_H
+#define __KVM_COVE_SBI_H
+
+#include
+#include
+#include
+#include
+#include
+
+int sbi_covh_tsm_get_info(struct sbi_cove_tsm_info *tinfo_addr);
+int sbi_covh_tvm_initiate_fence(unsigned long tvmid);
+int sbi_covh_tsm_initiate_fence(void);
+int sbi_covh_tsm_local_fence(void);
+int sbi_covh_tsm_create_tvm(struct sbi_cove_tvm_create_params *tparam, unsigned long *tvmid);
+int sbi_covh_tsm_finalize_tvm(unsigned long tvmid, unsigned long sepc, unsigned long entry_arg);
+int sbi_covh_tsm_destroy_tvm(unsigned long tvmid);
+int sbi_covh_add_memory_region(unsigned long tvmid, unsigned long tgpaddr, unsigned long rlen);
+
+int sbi_covh_tsm_reclaim_pages(unsigned long phys_addr, unsigned long npages);
+int sbi_covh_tsm_convert_pages(unsigned long phys_addr, unsigned long npages);
+int sbi_covh_tsm_reclaim_page(unsigned long page_addr_phys);
+int sbi_covh_add_pgt_pages(unsigned long tvmid, unsigned long page_addr_phys, unsigned long npages);
+
+int sbi_covh_add_measured_pages(unsigned long tvmid, unsigned long src_addr,
+				unsigned long dest_addr, enum sbi_cove_page_type ptype,
+				unsigned long npages, unsigned long tgpa);
+int sbi_covh_add_zero_pages(unsigned long tvmid, unsigned long page_addr_phys,
+			    enum sbi_cove_page_type ptype, unsigned long npages,
+			    unsigned long tvm_base_page_addr);
+
+int sbi_covh_create_tvm_vcpu(unsigned long tvmid, unsigned long tvm_vcpuid,
+			     unsigned long vcpu_state_paddr);
+
+int sbi_covh_run_tvm_vcpu(unsigned long tvmid, unsigned long tvm_vcpuid);
+
+#endif

diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 6986d3c..40dee04 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -31,3 +31,4 @@ kvm-y += aia.o
 kvm-y += aia_device.o
 kvm-y += aia_aplic.o
 kvm-y += aia_imsic.o
+kvm-$(CONFIG_RISCV_COVE_HOST) += cove_sbi.o

diff --git a/arch/riscv/kvm/cove_sbi.c b/arch/riscv/kvm/cove_sbi.c
new file mode 100644
index 0000000..c8c63fe
--- /dev/null
+++ b/arch/riscv/kvm/cove_sbi.c
@@ -0,0 +1,245 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * COVE SBI extensions related helper functions.
+ *
+ * Copyright (c) 2023 RivosInc
+ *
+ * Authors:
+ *	Atish Patra
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define RISCV_COVE_ALIGN_4KB (1UL << 12)
+
+int sbi_covh_tsm_get_info(struct sbi_cove_tsm_info *tinfo_addr)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_GET_INFO, __pa(tinfo_addr),
+			sizeof(*tinfo_addr), 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tvm_initiate_fence(unsigned long tvmid)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_INITIATE_FENCE, tvmid, 0, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tsm_initiate_fence(void)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_INITIATE_FENCE, 0, 0, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tsm_local_fence(void)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_LOCAL_FENCE, 0, 0, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tsm_create_tvm(struct sbi_cove_tvm_create_params *tparam, unsigned long *tvmid)
+{
+	struct sbiret ret;
+	int rc = 0;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_CREATE_TVM, __pa(tparam),
+			sizeof(*tparam), 0, 0, 0, 0);
+	if (ret.error) {
+		rc = sbi_err_map_linux_errno(ret.error);
+		if (rc == -EFAULT)
+			kvm_err("Invalid physical address for tvm params structure\n");
+		goto done;
+	}
+
+	kvm_info("%s: create_tvm tvmid %lx\n", __func__, ret.value);
+	*tvmid = ret.value;
+
+done:
+	return rc;
+}
+
+int sbi_covh_tsm_finalize_tvm(unsigned long tvmid, unsigned long sepc, unsigned long entry_arg)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_FINALIZE_TVM, tvmid,
+			sepc, entry_arg, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tsm_destroy_tvm(unsigned long tvmid)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_DESTROY_TVM, tvmid,
+			0, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_add_memory_region(unsigned long tvmid, unsigned long tgpaddr, unsigned long rlen)
+{
+	struct sbiret ret;
+
+	if (!IS_ALIGNED(tgpaddr, RISCV_COVE_ALIGN_4KB) || !IS_ALIGNED(rlen, RISCV_COVE_ALIGN_4KB))
+		return -EINVAL;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_ADD_MEMORY_REGION, tvmid,
+			tgpaddr, rlen, 0, 0, 0);
+	if (ret.error) {
+		kvm_err("Add memory region failed with sbi error code %ld\n", ret.error);
+		return sbi_err_map_linux_errno(ret.error);
+	}
+
+	return 0;
+}
+
+int sbi_covh_tsm_convert_pages(unsigned long phys_addr, unsigned long npages)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_CONVERT_PAGES, phys_addr,
+			npages, 0, 0, 0, 0);
+	if (ret.error) {
+		kvm_err("Convert pages failed ret %ld\n", ret.error);
+		return sbi_err_map_linux_errno(ret.error);
+	}
+
+	return 0;
+}
+
+int sbi_covh_tsm_reclaim_page(unsigned long page_addr_phys)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_RECLAIM_PAGES, page_addr_phys,
+			1, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_tsm_reclaim_pages(unsigned long phys_addr, unsigned long npages)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TSM_RECLAIM_PAGES, phys_addr,
+			npages, 0, 0, 0, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_add_pgt_pages(unsigned long tvmid, unsigned long page_addr_phys, unsigned long npages)
+{
+	struct sbiret ret;
+
+	if (!PAGE_ALIGNED(page_addr_phys))
+		return -EINVAL;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_ADD_PGT_PAGES, tvmid, page_addr_phys,
+			npages, 0, 0, 0);
+	if (ret.error) {
+		kvm_err("Adding page table pages at %lx failed %ld\n", page_addr_phys, ret.error);
+		return sbi_err_map_linux_errno(ret.error);
+	}
+
+	return 0;
+}
+
+int sbi_covh_add_measured_pages(unsigned long tvmid, unsigned long src_addr,
+				unsigned long dest_addr, enum sbi_cove_page_type ptype,
+				unsigned long npages, unsigned long tgpa)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_ADD_MEASURED_PAGES, tvmid, src_addr,
+			dest_addr, ptype, npages, tgpa);
+	if (ret.error) {
+		kvm_err("Adding measured pages failed ret %ld\n", ret.error);
+		return sbi_err_map_linux_errno(ret.error);
+	}
+
+	return 0;
+}
+
+int sbi_covh_add_zero_pages(unsigned long tvmid, unsigned long page_addr_phys,
+			    enum sbi_cove_page_type ptype, unsigned long npages,
+			    unsigned long tvm_base_page_addr)
+{
+	struct sbiret ret;
+
+	if (!PAGE_ALIGNED(page_addr_phys))
+		return -EINVAL;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_ADD_ZERO_PAGES, tvmid, page_addr_phys,
+			ptype, npages, tvm_base_page_addr, 0);
+	if (ret.error)
+		return sbi_err_map_linux_errno(ret.error);
+
+	return 0;
+}
+
+int sbi_covh_create_tvm_vcpu(unsigned long tvmid, unsigned long vcpuid,
+			     unsigned long vcpu_state_paddr)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_CREATE_VCPU, tvmid, vcpuid,
+			vcpu_state_paddr, 0, 0, 0);
+	if (ret.error) {
+		kvm_err("create vcpu failed ret %ld\n", ret.error);
+		return sbi_err_map_linux_errno(ret.error);
+	}
+
+	return 0;
+}
+
+int sbi_covh_run_tvm_vcpu(unsigned long tvmid, unsigned long vcpuid)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_COVH, SBI_EXT_COVH_TVM_VCPU_RUN, tvmid, vcpuid, 0, 0, 0, 0);
+	/* A non-zero return value indicates that the vcpu is already terminated */
+	if (ret.error || !ret.value)
+		return ret.error ? sbi_err_map_linux_errno(ret.error) : ret.value;
+
+	return 0;
+}
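[Usage note, not part of the patch: every wrapper above follows the same
shape, i.e. issue the COVH ecall, map a non-zero sbiret.error to a Linux errno
with sbi_err_map_linux_errno(), and return 0 otherwise. Callers are therefore
expected to pair sbi_covh_tsm_convert_pages() with
sbi_covh_tsm_reclaim_pages()/sbi_covh_tsm_reclaim_page() once the TSM no
longer owns the pages. A minimal, hypothetical sketch of that pairing follows;
cove_demo_with_confidential_pages() and the donate callback are illustrative
names, not part of this series.]

#include <asm/kvm_cove_sbi.h>

/*
 * Hypothetical helper, not part of this patch: convert a page-aligned,
 * physically contiguous range to confidential, run the caller-supplied
 * step that hands it to the TSM, and reclaim it again if that step fails.
 */
static int cove_demo_with_confidential_pages(unsigned long phys, unsigned long npages,
					     int (*donate)(unsigned long phys,
							   unsigned long npages))
{
	int rc;

	rc = sbi_covh_tsm_convert_pages(phys, npages);	/* non-confidential -> confidential */
	if (rc)
		return rc;

	rc = donate(phys, npages);
	if (rc)
		sbi_covh_tsm_reclaim_pages(phys, npages);	/* give the pages back to the host */

	return rc;
}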