From patchwork Mon Feb 28 02:12:49 2022
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 12762288
From: Kai Huang
To: x86@kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@intel.com,
    luto@kernel.org, kvm@vger.kernel.org, pbonzini@redhat.com, seanjc@google.com,
    hpa@zytor.com, peterz@infradead.org, kirill.shutemov@linux.intel.com,
    sathyanarayanan.kuppuswamy@linux.intel.com, tony.luck@intel.com,
    ak@linux.intel.com, dan.j.williams@intel.com, chang.seok.bae@intel.com,
    keescook@chromium.org, hengqi.arch@bytedance.com, laijs@linux.alibaba.com,
    metze@samba.org, linux-kernel@vger.kernel.org, kai.huang@intel.com
Subject: [RFC PATCH 01/21] x86/virt/tdx: Detect SEAM
Date: Mon, 28 Feb 2022 15:12:49 +1300
Message-Id: <232d16023c9c017e3d242cc3a118267aec203d6f.1646007267.git.kai.huang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Intel Trusted Domain Extensions (TDX) protects guest VMs from a malicious
host and certain physical attacks. To support TDX, a new CPU mode called
Secure Arbitration Mode (SEAM) is added to Intel processors. SEAM is an
extension to the VMX architecture that defines a new VMX root operation
(SEAM VMX root) and a new VMX non-root operation (SEAM VMX non-root).

SEAM VMX root operation is designed to host a CPU-attested software module
called the 'TDX module', which implements functions to manage
crypto-protected VMs called Trust Domains (TDs). It additionally hosts a
CPU-attested software module called the 'Intel Persistent SEAMLDR (Intel
P-SEAMLDR)' to load and update the TDX module.

Software modules in SEAM VMX root run in a memory region defined by the
SEAM range register (SEAMRR). So the first step in detecting Intel TDX is
to check the validity of SEAMRR.
The presence of SEAMRR is reported via a new SEAMRR bit (15) of the
IA32_MTRRCAP MSR. SEAMRR itself consists of a pair of MSRs:
IA32_SEAMRR_PHYS_BASE and IA32_SEAMRR_PHYS_MASK. BIOS is expected to
configure SEAMRR with the same value across all cores. To catch any BIOS
misconfiguration, detect and compare SEAMRR on all cpus.

To start to support TDX, create a new arch/x86/virt/vmx/ directory for
non-KVM host kernel virtualization support on Intel platforms, and create
a new tdx.c under it for TDX host kernel support.

TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
crypto-protect TD guests. Part of the MKTME KeyID space is reserved as
"TDX private KeyIDs", or "TDX KeyIDs" for short. Similar to detecting
SEAMRR, detecting TDX private KeyIDs also needs to be done on all cpus to
catch any BIOS misconfiguration.

Add a function to detect all TDX preliminaries (SEAMRR, TDX private
KeyIDs) for a given cpu when it is brought up. As the first step, detect
the validity of SEAMRR.

Also add a new Kconfig option CONFIG_INTEL_TDX_HOST to opt in to TDX host
kernel support (as distinct from TDX guest kernel support).

Signed-off-by: Kai Huang
---
 arch/x86/Kconfig            |  12 +++++
 arch/x86/Makefile           |   2 +
 arch/x86/include/asm/tdx.h  |   9 ++++
 arch/x86/kernel/cpu/intel.c |   3 ++
 arch/x86/virt/Makefile      |   2 +
 arch/x86/virt/vmx/Makefile  |   2 +
 arch/x86/virt/vmx/tdx.c     | 102 ++++++++++++++++++++++++++++++++++++
 7 files changed, 132 insertions(+)
 create mode 100644 arch/x86/virt/Makefile
 create mode 100644 arch/x86/virt/vmx/Makefile
 create mode 100644 arch/x86/virt/vmx/tdx.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index fb2706f7f04a..f4c5481cca46 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1956,6 +1956,18 @@ config X86_SGX

	  If unsure, say N.

+config INTEL_TDX_HOST
+	bool "Intel Trust Domain Extensions (TDX) host support"
+	default n
+	depends on CPU_SUP_INTEL
+	depends on X86_64
+	help
+	  Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
+	  host and certain physical attacks.  This option enables necessary TDX
+	  support in host kernel to run protected VMs.
+
+	  If unsure, say N.
+
 config EFI
	bool "EFI runtime service support"
	depends on ACPI
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index e84cdd409b64..83a6a5a2e244 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -238,6 +238,8 @@ head-y += arch/x86/kernel/platform-quirks.o

 libs-y += arch/x86/lib/

+core-y += arch/x86/virt/
+
 # drivers-y are linked after core-y
 drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/
 drivers-$(CONFIG_PCI) += arch/x86/pci/
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 6a97d42b0de9..605d87ab580e 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -11,6 +11,8 @@

 #ifndef __ASSEMBLY__

+#include
+
 /*
  * Used to gather the output registers values of the TDCALL and SEAMCALL
  * instructions when requesting services from the TDX module.
@@ -78,5 +80,12 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
	return -ENODEV;
 }
 #endif /* CONFIG_INTEL_TDX_GUEST && CONFIG_KVM_GUEST */
+
+#ifdef CONFIG_INTEL_TDX_HOST
+void tdx_detect_cpu(struct cpuinfo_x86 *c);
+#else
+static inline void tdx_detect_cpu(struct cpuinfo_x86 *c) { }
+#endif /* CONFIG_INTEL_TDX_HOST */
+
 #endif /* !__ASSEMBLY__ */
 #endif /* _ASM_X86_TDX_H */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 8321c43554a1..b142a640fb8e 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include

 #ifdef CONFIG_X86_64
 #include
@@ -715,6 +716,8 @@ static void init_intel(struct cpuinfo_x86 *c)
	if (cpu_has(c, X86_FEATURE_TME))
		detect_tme(c);

+	tdx_detect_cpu(c);
+
	init_intel_misc_features(c);

	if (tsx_ctrl_state == TSX_CTRL_ENABLE)
diff --git a/arch/x86/virt/Makefile b/arch/x86/virt/Makefile
new file mode 100644
index 000000000000..1e36502cd738
--- /dev/null
+++ b/arch/x86/virt/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-y += vmx/
diff --git a/arch/x86/virt/vmx/Makefile b/arch/x86/virt/vmx/Makefile
new file mode 100644
index 000000000000..1bd688684716
--- /dev/null
+++ b/arch/x86/virt/vmx/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_INTEL_TDX_HOST) += tdx.o
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
new file mode 100644
index 000000000000..03f35c75f439
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx.c
@@ -0,0 +1,102 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright(c) 2022 Intel Corporation.
+ *
+ * Intel Trusted Domain Extensions (TDX) support
+ */
+
+#define pr_fmt(fmt)	"tdx: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/* Support Intel Secure Arbitration Mode Range Registers (SEAMRR) */
+#define MTRR_CAP_SEAMRR		BIT(15)
+
+/* Core-scope Intel SEAMRR base and mask registers.
+ */
+#define MSR_IA32_SEAMRR_PHYS_BASE	0x00001400
+#define MSR_IA32_SEAMRR_PHYS_MASK	0x00001401
+
+#define SEAMRR_PHYS_BASE_CONFIGURED	BIT_ULL(3)
+#define SEAMRR_PHYS_MASK_ENABLED	BIT_ULL(11)
+#define SEAMRR_PHYS_MASK_LOCKED		BIT_ULL(10)
+
+#define SEAMRR_ENABLED_BITS \
+	(SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED)
+
+/* BIOS must configure SEAMRR registers for all cores consistently */
+static u64 seamrr_base, seamrr_mask;
+
+static bool __seamrr_enabled(void)
+{
+	return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
+}
+
+static void detect_seam_bsp(struct cpuinfo_x86 *c)
+{
+	u64 mtrrcap, base, mask;
+
+	/* SEAMRR is reported via MTRRcap */
+	if (!boot_cpu_has(X86_FEATURE_MTRR))
+		return;
+
+	rdmsrl(MSR_MTRRcap, mtrrcap);
+	if (!(mtrrcap & MTRR_CAP_SEAMRR))
+		return;
+
+	rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base);
+	if (!(base & SEAMRR_PHYS_BASE_CONFIGURED)) {
+		pr_info("SEAMRR base is not configured by BIOS\n");
+		return;
+	}
+
+	rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask);
+	if ((mask & SEAMRR_ENABLED_BITS) != SEAMRR_ENABLED_BITS) {
+		pr_info("SEAMRR is not enabled by BIOS\n");
+		return;
+	}
+
+	seamrr_base = base;
+	seamrr_mask = mask;
+}
+
+static void detect_seam_ap(struct cpuinfo_x86 *c)
+{
+	u64 base, mask;
+
+	/*
+	 * Don't bother to detect this AP if SEAMRR is not
+	 * enabled after earlier detections.
+	 */
+	if (!__seamrr_enabled())
+		return;
+
+	rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base);
+	rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask);
+
+	if (base == seamrr_base && mask == seamrr_mask)
+		return;
+
+	pr_err("Inconsistent SEAMRR configuration by BIOS\n");
+	/* Mark SEAMRR as disabled.
+	 */
+	seamrr_base = 0;
+	seamrr_mask = 0;
+}
+
+static void detect_seam(struct cpuinfo_x86 *c)
+{
+	if (c == &boot_cpu_data)
+		detect_seam_bsp(c);
+	else
+		detect_seam_ap(c);
+}
+
+void tdx_detect_cpu(struct cpuinfo_x86 *c)
+{
+	detect_seam(c);
+}

From patchwork Mon Feb 28 02:12:50 2022
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 12762289
From: Kai Huang
To: x86@kernel.org
Subject: [RFC PATCH 02/21] x86/virt/tdx: Detect TDX private KeyIDs
Date: Mon, 28 Feb 2022 15:12:50 +1300
Message-Id: <5e8daef8d5f061ce939d3a5581acba156138f2ee.1646007267.git.kai.huang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Pre-TDX Intel hardware has support for a memory encryption architecture
called MKTME. The memory encryption hardware underpinning MKTME is also
used for Intel TDX. TDX ends up "stealing" some of the physical address
space from the MKTME architecture for crypto-protection of VMs.

A new MSR (MSR_IA32_MKTME_KEYID_PART) helps to enumerate how the
MKTME-enumerated "KeyID" space is distributed between TDX and legacy
MKTME. KeyIDs reserved for TDX are called 'TDX private KeyIDs', or 'TDX
KeyIDs' for short.

The new MSR is per-package, and BIOS is responsible for partitioning
MKTME KeyIDs and TDX KeyIDs consistently among all packages.
Detect TDX private KeyIDs in preparation for initializing TDX. Similar to
detecting SEAMRR, detect on all cpus to catch any potential BIOS
misconfiguration among packages.

Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/tdx.c | 72 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 03f35c75f439..ba2210001ea8 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -29,9 +29,28 @@
 #define SEAMRR_ENABLED_BITS \
	(SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED)

+/*
+ * Intel Trusted Domain CPU Architecture Extension spec:
+ *
+ * IA32_MKTME_KEYID_PARTITIONING:
+ *
+ *   Bit [31:0]:	number of MKTME KeyIDs.
+ *   Bit [63:32]:	number of TDX private KeyIDs.
+ *
+ * TDX private KeyIDs start after the last MKTME KeyID.
+ */
+#define MSR_IA32_MKTME_KEYID_PARTITIONING	0x00000087
+
+#define TDX_KEYID_START(_keyid_part)	\
+		((u32)(((_keyid_part) & 0xffffffffull) + 1))
+#define TDX_KEYID_NUM(_keyid_part)	((u32)((_keyid_part) >> 32))
+
 /* BIOS must configure SEAMRR registers for all cores consistently */
 static u64 seamrr_base, seamrr_mask;

+static u32 tdx_keyid_start;
+static u32 tdx_keyid_num;
+
 static bool __seamrr_enabled(void)
 {
	return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
@@ -96,7 +115,60 @@ static void detect_seam(struct cpuinfo_x86 *c)
		detect_seam_ap(c);
 }

+static void detect_tdx_keyids_bsp(struct cpuinfo_x86 *c)
+{
+	u64 keyid_part;
+
+	/* TDX is built on MKTME, which is based on TME */
+	if (!boot_cpu_has(X86_FEATURE_TME))
+		return;
+
+	if (rdmsrl_safe(MSR_IA32_MKTME_KEYID_PARTITIONING, &keyid_part))
+		return;
+
+	/* If MSR value is 0, TDX is not enabled by BIOS.
+	 */
+	if (!keyid_part)
+		return;
+
+	tdx_keyid_num = TDX_KEYID_NUM(keyid_part);
+	if (!tdx_keyid_num)
+		return;
+
+	tdx_keyid_start = TDX_KEYID_START(keyid_part);
+}
+
+static void detect_tdx_keyids_ap(struct cpuinfo_x86 *c)
+{
+	u64 keyid_part;
+
+	/*
+	 * Don't bother to detect this AP if TDX KeyIDs are
+	 * not detected or cleared after earlier detections.
+	 */
+	if (!tdx_keyid_num)
+		return;
+
+	rdmsrl(MSR_IA32_MKTME_KEYID_PARTITIONING, keyid_part);
+
+	if ((tdx_keyid_start == TDX_KEYID_START(keyid_part)) &&
+			(tdx_keyid_num == TDX_KEYID_NUM(keyid_part)))
+		return;
+
+	pr_err("Inconsistent TDX KeyID configuration among packages by BIOS\n");
+	tdx_keyid_start = 0;
+	tdx_keyid_num = 0;
+}
+
+static void detect_tdx_keyids(struct cpuinfo_x86 *c)
+{
+	if (c == &boot_cpu_data)
+		detect_tdx_keyids_bsp(c);
+	else
+		detect_tdx_keyids_ap(c);
+}
+
 void tdx_detect_cpu(struct cpuinfo_x86 *c)
 {
	detect_seam(c);
+	detect_tdx_keyids(c);
 }

From patchwork Mon Feb 28 02:12:51 2022
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 12762290
From: Kai Huang
To: x86@kernel.org
Subject: [RFC PATCH 03/21] x86/virt/tdx: Implement the SEAMCALL base function
Date: Mon, 28 Feb 2022 15:12:51 +1300
Message-Id: <67e0161abb0d0363b810d8539ac8aba139ca7403.1646007267.git.kai.huang@intel.com>
X-Mailing-List:
kvm@vger.kernel.org

Secure Arbitration Mode (SEAM) is an extension of the VMX architecture.
It defines a new VMX root operation (SEAM VMX root) and a new VMX
non-root operation (SEAM VMX non-root), which are isolated from legacy
VMX root and VMX non-root modes.

A CPU-attested software module (called the 'TDX module') runs in SEAM VMX
root to manage the protected VMs running in SEAM VMX non-root. SEAM VMX
root is also used to host another CPU-attested software module (called
the 'P-SEAMLDR') to load and update the TDX module.

The host kernel transitions to either the P-SEAMLDR or the TDX module via
the new SEAMCALL instruction. SEAMCALLs are host-side interface functions
defined by the P-SEAMLDR and the TDX module around the new SEAMCALL
instruction. They are similar to hypercalls, except that they are made by
the host kernel to the SEAM software.

SEAMCALL uses an ABI different from the x86-64 System V ABI; instead, it
shares the same ABI as TDCALL. %rax is used to carry both the SEAMCALL
leaf function number (input) and the completion status code (output).
Additional GPRs (%rcx, %rdx, %r8-%r11) may be further used as both input
and output operands in individual leaf SEAMCALLs.

Implement a C function __seamcall() to do SEAMCALL using the assembly
macro used by __tdx_module_call() (the implementation of TDCALL). The
only exception not covered here is the TDENTER leaf function, which takes
all GPRs and XMM0-XMM15 as both input and output. Callers of TDENTER
should implement their own logic to call TDENTER directly instead of
using this function.

The SEAMCALL instruction is essentially a VMExit from VMX root to SEAM
VMX root. It fails with VMfailInvalid when the SEAM software is not
loaded. In that case the C function __seamcall() returns
TDX_SEAMCALL_VMFAILINVALID, which doesn't conflict with any actual
SEAMCALL error code, to uniquely represent it.
Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/Makefile   |  2 +-
 arch/x86/virt/vmx/seamcall.S | 53 ++++++++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx.h      | 11 ++++++++
 3 files changed, 65 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/virt/vmx/seamcall.S
 create mode 100644 arch/x86/virt/vmx/tdx.h

diff --git a/arch/x86/virt/vmx/Makefile b/arch/x86/virt/vmx/Makefile
index 1bd688684716..fd577619620e 100644
--- a/arch/x86/virt/vmx/Makefile
+++ b/arch/x86/virt/vmx/Makefile
@@ -1,2 +1,2 @@
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_INTEL_TDX_HOST) += tdx.o
+obj-$(CONFIG_INTEL_TDX_HOST) += tdx.o seamcall.o
diff --git a/arch/x86/virt/vmx/seamcall.S b/arch/x86/virt/vmx/seamcall.S
new file mode 100644
index 000000000000..65edec23b5f4
--- /dev/null
+++ b/arch/x86/virt/vmx/seamcall.S
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include
+#include
+
+#include "../tdxcall.S"
+
+/*
+ * __seamcall() - Host-side interface functions to SEAM software
+ *		  (P-SEAMLDR or TDX module)
+ *
+ * Transform function call register arguments into the SEAMCALL register
+ * ABI.  Return TDX_SEAMCALL_VMFAILINVALID (when SEAM software is not
+ * loaded or SEAMCALLs are made into P-SEAMLDR concurrently), or the
+ * completion status of the SEAMCALL.  Additional output operands are
+ * saved in @out (if it is provided by the user).
+ *
+ *-------------------------------------------------------------------------
+ * SEAMCALL ABI:
+ *-------------------------------------------------------------------------
+ * Input Registers:
+ *
+ * RAX			- SEAMCALL Leaf number.
+ * RCX,RDX,R8-R9	- SEAMCALL Leaf specific input registers.
+ *
+ * Output Registers:
+ *
+ * RAX			- SEAMCALL completion status code.
+ * RCX,RDX,R8-R11	- SEAMCALL Leaf specific output registers.
+ *
+ *-------------------------------------------------------------------------
+ *
+ * __seamcall() function ABI:
+ *
+ * @fn  (RDI)	- SEAMCALL Leaf number, moved to RAX
+ * @rcx (RSI)	- Input parameter 1, moved to RCX
+ * @rdx (RDX)	- Input parameter 2, moved to RDX
+ * @r8  (RCX)	- Input parameter 3, moved to R8
+ * @r9  (R8)	- Input parameter 4, moved to R9
+ *
+ * @out (R9)	- struct tdx_module_output pointer
+ *		  stored temporarily in R12 (not
+ *		  shared with the TDX module). It
+ *		  can be NULL.
+ *
+ * Return (via RAX) the completion status of the SEAMCALL, or
+ * TDX_SEAMCALL_VMFAILINVALID.
+ */
+SYM_FUNC_START(__seamcall)
+	FRAME_BEGIN
+	TDX_MODULE_CALL host=1
+	FRAME_END
+	ret
+SYM_FUNC_END(__seamcall)
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
new file mode 100644
index 000000000000..9d5b6f554c20
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _X86_VIRT_TDX_H
+#define _X86_VIRT_TDX_H
+
+#include
+
+struct tdx_module_output;
+u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
+	       struct tdx_module_output *out);
+
+#endif

From patchwork Mon Feb 28 02:12:52 2022
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 12762291
From: Kai Huang
To: x86@kernel.org
Subject: [RFC PATCH 04/21] x86/virt/tdx: Add skeleton for detecting and initializing TDX on demand
Date: Mon, 28 Feb 2022 15:12:52
 +1300
Message-Id:
X-Mailing-List: kvm@vger.kernel.org

The TDX module is essentially a CPU-attested software module running in
the new SEAM mode to protect VMs from a malicious host and certain
physical attacks. The TDX module implements the functions to build, tear
down and start execution of protected VMs called Trust Domains (TDs).

Before the TDX module can be used to create and run TD guests, it must be
loaded into the SEAMRR and properly initialized. The TDX module is
expected to be loaded by BIOS before booting to the kernel, and the
kernel is expected to detect and initialize it, using the SEAMCALLs
defined by the TDX architecture.

The TDX module can be initialized only once in its lifetime. Instead of
always initializing it at boot time, this implementation chooses an
on-demand approach: TDX is not initialized until there is a real need
(e.g. when requested by KVM). This avoids consuming the memory that the
kernel must allocate and give to the TDX module as metadata (~1/256th of
the TDX-usable memory), and also saves the time of initializing the TDX
module (and the metadata) when TDX is not used at all.

Introduce two placeholders, tdx_detect() and tdx_init(), to detect and
initialize the TDX module on demand, with a state machine introduced to
orchestrate the entire process (in case of multiple callers).

To start with, tdx_detect() checks SEAMRR and TDX private KeyIDs. The TDX
module is reported as not loaded if either SEAMRR is not enabled, or
there are not enough TDX private KeyIDs to create any TD guest. The TDX
module itself requires one global TDX private KeyID to crypto-protect its
metadata. And tdx_init() is currently empty.
The TDX module will be initialized in multiple steps, defined by the TDX
architecture:

  1) Global initialization;
  2) Logical-CPU scope initialization;
  3) Enumerate the TDX module capabilities and platform configuration;
  4) Configure the TDX module about usable memory ranges and global
     KeyID information;
  5) Package-scope configuration for the global KeyID;
  6) Initialize usable memory ranges based on 4).

The TDX module can also be shut down at any time during its lifetime. In
case of any error during the initialization process, shut down the
module. It's pointless to leave the module in some intermediate state of
the initialization.

SEAMCALLs used in the above steps (including shutting down the TDX
module) require SEAMRR to be enabled and the CPU to already be in VMX
operation (VMXON has been done). So far KVM is the only user of TDX, and
KVM already puts all online cpus into VMX operation. Handling VMXON isn't
trivial, so this implementation doesn't handle VMXON in tdx_detect() or
tdx_init(), but instead requires the caller to put all cpus into VMX
operation before calling them.
Signed-off-by: Kai Huang
---
 arch/x86/include/asm/tdx.h |   4 +
 arch/x86/virt/vmx/tdx.c    | 220 +++++++++++++++++++++++++++++++++++++
 2 files changed, 224 insertions(+)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 605d87ab580e..b526d41c4bbf 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -83,8 +83,12 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,

 #ifdef CONFIG_INTEL_TDX_HOST
 void tdx_detect_cpu(struct cpuinfo_x86 *c);
+int tdx_detect(void);
+int tdx_init(void);
 #else
 static inline void tdx_detect_cpu(struct cpuinfo_x86 *c) { }
+static inline int tdx_detect(void) { return -ENODEV; }
+static inline int tdx_init(void) { return -ENODEV; }
 #endif /* CONFIG_INTEL_TDX_HOST */

 #endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index ba2210001ea8..a85bc52c4690 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -9,6 +9,8 @@

 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -45,12 +47,33 @@
		((u32)(((_keyid_part) & 0xffffffffull) + 1))
 #define TDX_KEYID_NUM(_keyid_part)	((u32)((_keyid_part) >> 32))

+/*
+ * TDX module status during initialization
+ */
+enum tdx_module_status_t {
+	/* TDX module status is unknown */
+	TDX_MODULE_UNKNOWN,
+	/* TDX module is not loaded */
+	TDX_MODULE_NONE,
+	/* TDX module is loaded, but not initialized */
+	TDX_MODULE_LOADED,
+	/* TDX module is fully initialized */
+	TDX_MODULE_INITIALIZED,
+	/* TDX module is shutdown due to error during initialization */
+	TDX_MODULE_SHUTDOWN,
+};
+
 /* BIOS must configure SEAMRR registers for all cores consistently */
 static u64 seamrr_base, seamrr_mask;

 static u32 tdx_keyid_start;
 static u32 tdx_keyid_num;

+static enum tdx_module_status_t tdx_module_status;
+
+/* Prevent concurrent attempts on TDX detection and initialization */
+static DEFINE_MUTEX(tdx_module_lock);
+
 static bool __seamrr_enabled(void)
 {
	return (seamrr_mask & SEAMRR_ENABLED_BITS) ==
SEAMRR_ENABLED_BITS;
@@ -172,3 +195,200 @@ void tdx_detect_cpu(struct cpuinfo_x86 *c)
 	detect_seam(c);
 	detect_tdx_keyids(c);
 }
+
+static bool seamrr_enabled(void)
+{
+	/*
+	 * To detect any BIOS misconfiguration among cores, all logical
+	 * cpus must have been brought up at least once.  This is true
+	 * unless the 'maxcpus' kernel command line is used to limit the
+	 * number of cpus to be brought up during boot time.  However
+	 * 'maxcpus' is basically an invalid operation mode due to the
+	 * MCE broadcast problem, and it should not be used on a TDX
+	 * capable machine.  Just do a paranoid check here and WARN()
+	 * if that is not the case.
+	 */
+	if (WARN_ON_ONCE(!cpumask_equal(&cpus_booted_once_mask,
+					cpu_present_mask)))
+		return false;
+
+	return __seamrr_enabled();
+}
+
+static bool tdx_keyid_sufficient(void)
+{
+	if (WARN_ON_ONCE(!cpumask_equal(&cpus_booted_once_mask,
+					cpu_present_mask)))
+		return false;
+
+	/*
+	 * TDX requires at least two KeyIDs: one global KeyID to
+	 * protect the metadata of the TDX module and one or more
+	 * KeyIDs to run TD guests.
+	 */
+	return tdx_keyid_num >= 2;
+}
+
+static int __tdx_detect(void)
+{
+	/*
+	 * The TDX module cannot possibly be loaded if SEAMRR is
+	 * disabled.  Also do not report the TDX module as loaded if
+	 * there are not enough TDX private KeyIDs to run any TD guests.
+	 */
+	if (!seamrr_enabled()) {
+		pr_info("SEAMRR not enabled.\n");
+		goto no_tdx_module;
+	}
+
+	if (!tdx_keyid_sufficient()) {
+		pr_info("Number of TDX private KeyIDs too small: %u.\n",
+			tdx_keyid_num);
+		goto no_tdx_module;
+	}
+
+	/* Return -ENODEV until the TDX module is detected */
+no_tdx_module:
+	tdx_module_status = TDX_MODULE_NONE;
+	return -ENODEV;
+}
+
+static int init_tdx_module(void)
+{
+	/*
+	 * Return -EFAULT until all steps of TDX module
+	 * initialization are done.
+	 */
+	return -EFAULT;
+}
+
+static void shutdown_tdx_module(void)
+{
+	/* TODO: Shut down the TDX module */
+	tdx_module_status = TDX_MODULE_SHUTDOWN;
+}
+
+static int __tdx_init(void)
+{
+	int ret;
+
+	/*
+	 * Logical-cpu scope initialization requires calling one SEAMCALL
+	 * on all logical cpus enabled by BIOS.  Shutting down the TDX
+	 * module also has this requirement.  Furthermore, configuring
+	 * the key of the global KeyID requires calling one SEAMCALL for
+	 * each package.  For simplicity, disable CPU hotplug in the
+	 * whole initialization process.
+	 *
+	 * It's perhaps better to check whether all BIOS-enabled cpus are
+	 * online before starting initialization, and return early if not.
+	 * But none of the 'possible', 'present' and 'online' CPU masks
+	 * represents the BIOS-enabled cpus.  For example, the 'possible'
+	 * mask is impacted by the 'nr_cpus' or 'possible_cpus' kernel
+	 * command line.  Just let the SEAMCALL fail if not all
+	 * BIOS-enabled cpus are online.
+	 */
+	cpus_read_lock();
+
+	ret = init_tdx_module();
+	/*
+	 * Put the TDX module into shutdown mode in case of any error
+	 * during the initialization process.  It's meaningless to leave
+	 * the TDX module in an intermediate state of initialization.
+	 */
+	if (ret)
+		shutdown_tdx_module();
+
+	cpus_read_unlock();
+
+	return ret;
+}
+
+/**
+ * tdx_detect - Detect whether the TDX module has been loaded
+ *
+ * Detect whether the TDX module has been loaded and ready for
+ * initialization.  Only call this function when CPU is already
+ * in VMX operation.
+ *
+ * This function can be called in parallel by multiple callers.
+ *
+ * Return:
+ *
+ * * 0:		TDX module has been loaded and ready for initialization.
+ * * -ENODEV:	TDX module is not loaded.
+ * * -EPERM:	CPU is not in VMX operation.
+ * * -EFAULT:	Other internal fatal errors.
+ */
+int tdx_detect(void)
+{
+	int ret;
+
+	mutex_lock(&tdx_module_lock);
+
+	switch (tdx_module_status) {
+	case TDX_MODULE_UNKNOWN:
+		ret = __tdx_detect();
+		break;
+	case TDX_MODULE_NONE:
+		ret = -ENODEV;
+		break;
+	case TDX_MODULE_LOADED:
+	case TDX_MODULE_INITIALIZED:
+		ret = 0;
+		break;
+	case TDX_MODULE_SHUTDOWN:
+		ret = -EFAULT;
+		break;
+	default:
+		WARN_ON(1);
+		ret = -EFAULT;
+	}
+
+	mutex_unlock(&tdx_module_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(tdx_detect);
+
+/**
+ * tdx_init - Initialize the TDX module
+ *
+ * Initialize the TDX module to make it ready to run TD guests.  This
+ * function should be called after tdx_detect() returns successfully.
+ * Only call this function when all cpus are online and are in VMX
+ * operation.  CPU hotplug is temporarily disabled internally.
+ *
+ * This function can be called in parallel by multiple callers.
+ *
+ * Return:
+ *
+ * * 0:		The TDX module has been successfully initialized.
+ * * -ENODEV:	The TDX module is not loaded.
+ * * -EPERM:	The CPU which does SEAMCALL is not in VMX operation.
+ * * -EFAULT:	Other internal fatal errors.
+ */
+int tdx_init(void)
+{
+	int ret;
+
+	mutex_lock(&tdx_module_lock);
+
+	switch (tdx_module_status) {
+	case TDX_MODULE_NONE:
+		ret = -ENODEV;
+		break;
+	case TDX_MODULE_LOADED:
+		ret = __tdx_init();
+		break;
+	case TDX_MODULE_INITIALIZED:
+		ret = 0;
+		break;
+	default:
+		ret = -EFAULT;
+		break;
+	}
+	mutex_unlock(&tdx_module_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(tdx_init);

From patchwork Mon Feb 28 02:12:53 2022
From: Kai Huang
Subject: [RFC PATCH 05/21] x86/virt/tdx: Detect P-SEAMLDR and TDX module
Date: Mon, 28 Feb 2022 15:12:53 +1300
Message-Id: <21867aa05eb7e270f4cdcc1407951b8a9201f7e6.1646007267.git.kai.huang@intel.com>

P-SEAMLDR (persistent SEAM loader) is the first software module that runs in SEAM VMX root, responsible for loading and updating the TDX module. Both the P-SEAMLDR and the TDX module are expected to be loaded before the host kernel boots.

Detect the P-SEAMLDR and the TDX module by calling the SEAMLDR.INFO SEAMCALL to get the P-SEAMLDR information. If the SEAMCALL fails with VMfailInvalid, neither of them is loaded.
Otherwise, if the SEAMCALL succeeds, the P-SEAMLDR information further tells whether the TDX module is loaded. Also implement a wrapper of __seamcall() to make SEAMCALL to P-SEAMLDR and TDX module with additional defensive check on SEAMRR and CR4.VMXE, since both detecting and initializing TDX module require the caller of TDX to handle VMXON. Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 180 +++++++++++++++++++++++++++++++++++++++- arch/x86/virt/vmx/tdx.h | 31 +++++++ 2 files changed, 208 insertions(+), 3 deletions(-) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index a85bc52c4690..35116eaa0c1a 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -15,7 +15,9 @@ #include #include #include +#include #include +#include "tdx.h" /* Support Intel Secure Arbitration Mode Range Registers (SEAMRR) */ #define MTRR_CAP_SEAMRR BIT(15) @@ -74,6 +76,8 @@ static enum tdx_module_status_t tdx_module_status; /* Prevent concurrent attempts on TDX detection and initialization */ static DEFINE_MUTEX(tdx_module_lock); +static struct p_seamldr_info p_seamldr_info; + static bool __seamrr_enabled(void) { return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; @@ -229,6 +233,161 @@ static bool tdx_keyid_sufficient(void) return tdx_keyid_num >= 2; } +/* + * All error codes of both P-SEAMLDR and TDX module SEAMCALLs + * have bit 63 set if SEAMCALL fails. + */ +#define SEAMCALL_LEAF_ERROR(_ret) ((_ret) & BIT_ULL(63)) + +/** + * seamcall - make SEAMCALL to P-SEAMLDR or TDX module with additional + * check on SEAMRR and CR4.VMXE + * + * @fn: SEAMCALL leaf number. + * @rcx: Input operand RCX. + * @rdx: Input operand RDX. + * @r8: Input operand R8. + * @r9: Input operand R9. + * @seamcall_ret: SEAMCALL completion status (can be NULL). + * @out: Additional output operands (can be NULL). + * + * Wrapper of __seamcall() to make SEAMCALL to TDX module or P-SEAMLDR + * with additional defensive check on SEAMRR and CR4.VMXE. 
Caller to + * make sure SEAMRR is enabled and CPU is already in VMX operation before + * calling this function. + * + * Unlike __seamcall(), it returns kernel error code instead of SEAMCALL + * completion status, which is returned via @seamcall_ret if desired. + * + * Return: + * + * * -ENODEV: SEAMCALL failed with VMfailInvalid, or SEAMRR is not enabled. + * * -EPERM: CR4.VMXE is not enabled + * * -EFAULT: SEAMCALL failed + * * -0: SEAMCALL succeeded + */ +static int seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, + u64 *seamcall_ret, struct tdx_module_output *out) +{ + u64 ret; + + if (WARN_ON_ONCE(!seamrr_enabled())) + return -ENODEV; + + /* + * SEAMCALL instruction requires CPU being already in VMX + * operation (VMXON has been done), otherwise it causes #UD. + * Sanity check whether CR4.VMXE has been enabled. + * + * Note VMX being enabled in CR4 doesn't mean CPU is already + * in VMX operation, but unfortunately there's no way to do + * such check. However in practice enabling CR4.VMXE and + * doing VMXON are done together (for now) so in practice it + * checks whether VMXON has been done. + * + * Preemption is disabled during the CR4.VMXE check and the + * actual SEAMCALL so VMX doesn't get disabled by other threads + * due to scheduling. + */ + preempt_disable(); + if (WARN_ON_ONCE(!cpu_vmx_enabled())) { + preempt_enable_no_resched(); + return -EPERM; + } + + ret = __seamcall(fn, rcx, rdx, r8, r9, out); + + preempt_enable_no_resched(); + + /* + * Convert SEAMCALL error code to kernel error code: + * - -ENODEV: VMfailInvalid + * - -EFAULT: SEAMCALL failed + * - 0: SEAMCALL was successful + */ + if (ret == TDX_SEAMCALL_VMFAILINVALID) + return -ENODEV; + + /* Save the completion status if caller wants to use it */ + if (seamcall_ret) + *seamcall_ret = ret; + + /* + * TDX module SEAMCALLs may also return non-zero completion + * status codes but w/o bit 63 set. 
Those codes are treated
+	 * as additional information/warnings while the SEAMCALL is
+	 * treated as completed successfully.  Return 0 in this case.
+	 * The caller can use @seamcall_ret to get the additional code
+	 * when desired.
+	 */
+	if (SEAMCALL_LEAF_ERROR(ret)) {
+		pr_err("SEAMCALL leaf %llu failed: 0x%llx\n", fn, ret);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static inline bool p_seamldr_ready(void)
+{
+	return !!p_seamldr_info.p_seamldr_ready;
+}
+
+static inline bool tdx_module_ready(void)
+{
+	/*
+	 * SEAMLDR_INFO.SEAM_READY indicates whether the TDX module
+	 * is (loaded and) ready for SEAMCALL.
+	 */
+	return p_seamldr_ready() && !!p_seamldr_info.seam_ready;
+}
+
+/*
+ * Detect whether the P-SEAMLDR has been loaded by calling the
+ * SEAMLDR.INFO SEAMCALL to get the P-SEAMLDR information, which also
+ * tells whether the TDX module has been loaded and is ready for
+ * SEAMCALL.  The caller must make sure this function is only called
+ * when the CPU is already in VMX operation.
+ */
+static int detect_p_seamldr(void)
+{
+	int ret;
+
+	/*
+	 * The SEAMCALL fails with VMfailInvalid when SEAM software is
+	 * not loaded, in which case seamcall() returns -ENODEV.  Use
+	 * this to detect the P-SEAMLDR.
+	 *
+	 * Note a P-SEAMLDR SEAMCALL also fails with VMfailInvalid when
+	 * the P-SEAMLDR is already busy with another SEAMCALL.  But this
+	 * won't happen here as this function is only called once.
+	 */
+	ret = seamcall(P_SEAMCALL_SEAMLDR_INFO, __pa(&p_seamldr_info),
+		       0, 0, 0, NULL, NULL);
+	if (ret) {
+		if (ret == -ENODEV)
+			pr_info("P-SEAMLDR is not loaded.\n");
+		else
+			pr_info("Failed to detect P-SEAMLDR.\n");
+
+		return ret;
+	}
+
+	/*
+	 * If SEAMLDR.INFO was successful, the P-SEAMLDR must be ready
+	 * for SEAMCALL.  Otherwise it's either a kernel or a firmware bug.
+ */ + if (WARN_ON_ONCE(!p_seamldr_ready())) + return -ENODEV; + + pr_info("P-SEAMLDR: version 0x%x, vendor_id: 0x%x, build_date: %u, build_num %u, major %u, minor %u\n", + p_seamldr_info.version, p_seamldr_info.vendor_id, + p_seamldr_info.build_date, p_seamldr_info.build_num, + p_seamldr_info.major, p_seamldr_info.minor); + + return 0; +} + static int __tdx_detect(void) { /* @@ -247,7 +406,22 @@ static int __tdx_detect(void) goto no_tdx_module; } - /* Return -ENODEV until TDX module is detected */ + /* + * For simplicity any error during detect_p_seamldr() marks + * TDX module as not loaded. + */ + if (detect_p_seamldr()) + goto no_tdx_module; + + if (!tdx_module_ready()) { + pr_info("TDX module is not loaded.\n"); + goto no_tdx_module; + } + + pr_info("TDX module detected.\n"); + tdx_module_status = TDX_MODULE_LOADED; + return 0; + no_tdx_module: tdx_module_status = TDX_MODULE_NONE; return -ENODEV; @@ -308,8 +482,8 @@ static int __tdx_init(void) * tdx_detect - Detect whether the TDX module has been loaded * * Detect whether the TDX module has been loaded and ready for - * initialization. Only call this function when CPU is already - * in VMX operation. + * initialization. Only call this function when all cpus are + * already in VMX operation. * * This function can be called in parallel by multiple callers. 
 *

diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index 9d5b6f554c20..6990c93198b3 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -3,6 +3,37 @@
 #define _X86_VIRT_TDX_H
 
 #include
+#include
+
+/*
+ * TDX architectural data structures
+ */
+
+#define P_SEAMLDR_INFO_ALIGNMENT	256
+
+struct p_seamldr_info {
+	u32	version;
+	u32	attributes;
+	u32	vendor_id;
+	u32	build_date;
+	u16	build_num;
+	u16	minor;
+	u16	major;
+	u8	reserved0[2];
+	u32	acm_x2apicid;
+	u8	reserved1[4];
+	u8	seaminfo[128];
+	u8	seam_ready;
+	u8	seam_debug;
+	u8	p_seamldr_ready;
+	u8	reserved2[88];
+} __packed __aligned(P_SEAMLDR_INFO_ALIGNMENT);
+
+/*
+ * P-SEAMLDR SEAMCALL leaf function
+ */
+#define P_SEAMLDR_SEAMCALL_BASE		BIT_ULL(63)
+#define P_SEAMCALL_SEAMLDR_INFO		(P_SEAMLDR_SEAMCALL_BASE | 0x0)
 
 struct tdx_module_output;
 u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,

From patchwork Mon Feb 28 02:12:54 2022
From: Kai Huang
Subject: [RFC PATCH 06/21] x86/virt/tdx: Shut down TDX module in case of error
Date: Mon, 28 Feb 2022 15:12:54 +1300

TDX supports shutting down the TDX module at any time during its lifetime.
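The shutdown flow in this patch issues one SEAMCALL per CPU and records any failure in a shared atomic. A rough user-space analogue is sketched below, with pthreads standing in for on_each_cpu() IPIs and a hypothetical fake_seamcall() stub in place of the real SEAMCALL; none of these names are from the patch itself.

```c
/*
 * User-space model of "run one SEAMCALL on every CPU and keep an
 * error if any call failed".  pthreads stand in for IPIs;
 * fake_seamcall() is a hypothetical stub.
 */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

#define NR_FAKE_CPUS 4

struct seamcall_ctx {
	atomic_int err;		/* any non-zero result wins (order unspecified) */
};

static int fake_seamcall(int cpu)
{
	/* pretend the SEAMCALL fails on CPU 2 only */
	return cpu == 2 ? -5 : 0;
}

struct cpu_arg {
	struct seamcall_ctx *sc;
	int cpu;
};

static void *seamcall_fn(void *data)
{
	struct cpu_arg *a = data;
	int ret = fake_seamcall(a->cpu);

	/* like atomic_set() in the patch: record the failure, keep going */
	if (ret)
		atomic_store(&a->sc->err, ret);
	return NULL;
}

static int seamcall_on_each_cpu(struct seamcall_ctx *sc)
{
	pthread_t tid[NR_FAKE_CPUS];
	struct cpu_arg arg[NR_FAKE_CPUS];
	int i;

	for (i = 0; i < NR_FAKE_CPUS; i++) {
		arg[i].sc = sc;
		arg[i].cpu = i;
		pthread_create(&tid[i], NULL, seamcall_fn, &arg[i]);
	}
	for (i = 0; i < NR_FAKE_CPUS; i++)
		pthread_join(tid[i], NULL);
	/* the "wait" semantics of on_each_cpu(..., true) */
	return atomic_load(&sc->err);
}
```

As in the patch, an error on any one CPU is not fatal to the others: every CPU still runs its call, and the aggregated error is only inspected after all of them have finished.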
After the TDX module is shut down, no further SEAMCALLs can be made on any logical cpu.

Shut down the TDX module in case any error happens during the initialization process. It's pointless to leave the TDX module in an intermediate state.

Shutting down the TDX module requires calling TDH.SYS.LP.SHUTDOWN on all BIOS-enabled cpus, and the SEAMCALL can run concurrently on different cpus. Implement a mechanism to run the SEAMCALL concurrently on all online cpus. Logical-cpu scope initialization will use it too.

Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/tdx.c | 40 +++++++++++++++++++++++++++++++++++++++-
 arch/x86/virt/vmx/tdx.h |  5 +++++
 2 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 35116eaa0c1a..17f16ec6cb28 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -11,6 +11,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -328,6 +330,39 @@ static int seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
 	return 0;
 }
 
+/* Data structure to make SEAMCALL on multiple CPUs concurrently */
+struct seamcall_ctx {
+	u64	fn;
+	u64	rcx;
+	u64	rdx;
+	u64	r8;
+	u64	r9;
+	atomic_t err;
+	u64	seamcall_ret;
+	struct tdx_module_output out;
+};
+
+static void seamcall_smp_call_function(void *data)
+{
+	struct seamcall_ctx *sc = data;
+	int ret;
+
+	ret = seamcall(sc->fn, sc->rcx, sc->rdx, sc->r8, sc->r9,
+		       &sc->seamcall_ret, &sc->out);
+	if (ret)
+		atomic_set(&sc->err, ret);
+}
+
+/*
+ * Call the SEAMCALL on all online cpus concurrently.
+ * Return an error if the SEAMCALL fails on any cpu.
+ */
+static int seamcall_on_each_cpu(struct seamcall_ctx *sc)
+{
+	on_each_cpu(seamcall_smp_call_function, sc, true);
+	return atomic_read(&sc->err);
+}
+
 static inline bool p_seamldr_ready(void)
 {
 	return !!p_seamldr_info.p_seamldr_ready;
@@ -438,7 +473,10 @@ static int init_tdx_module(void)
 
 static void shutdown_tdx_module(void)
 {
-	/* TODO: Shut down the TDX module */
+	struct seamcall_ctx sc = { .fn = TDH_SYS_LP_SHUTDOWN };
+
+	seamcall_on_each_cpu(&sc);
+
 	tdx_module_status = TDX_MODULE_SHUTDOWN;
 }
 
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index 6990c93198b3..dcc1f6dfe378 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -35,6 +35,11 @@ struct p_seamldr_info {
 #define P_SEAMLDR_SEAMCALL_BASE		BIT_ULL(63)
 #define P_SEAMCALL_SEAMLDR_INFO		(P_SEAMLDR_SEAMCALL_BASE | 0x0)
 
+/*
+ * TDX module SEAMCALL leaf functions
+ */
+#define TDH_SYS_LP_SHUTDOWN	44
+
 struct tdx_module_output;
 u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
 	       struct tdx_module_output *out);

From patchwork Mon Feb 28 02:12:55 2022
From: Kai Huang
Subject: [RFC PATCH 07/21] x86/virt/tdx: Do TDX module global initialization
Date: Mon, 28 Feb 2022 15:12:55 +1300
Message-Id: <2fd6826f9df6793f030d949af8a71dc77f946817.1646007267.git.kai.huang@intel.com>

Do the TDX module global initialization which requires calling TDH.SYS.INIT once on any logical cpu.

Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/tdx.c | 11 ++++++++++-
 arch/x86/virt/vmx/tdx.h |  1 +
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 17f16ec6cb28..197c721d5388 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -464,11 +464,20 @@ static int __tdx_detect(void)
 
 static int init_tdx_module(void)
 {
+	int ret;
+
+	/* TDX module global initialization */
+	ret = seamcall(TDH_SYS_INIT, 0, 0, 0, 0, NULL, NULL);
+	if (ret)
+		goto out;
+
 	/*
 	 * Return -EFAULT until all steps of TDX module
 	 * initialization are done.
 	 */
-	return -EFAULT;
+	ret = -EFAULT;
+out:
+	return ret;
 }
 
 static void shutdown_tdx_module(void)
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index dcc1f6dfe378..f0983b1936d8 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -38,6 +38,7 @@ struct p_seamldr_info {
 /*
  * TDX module SEAMCALL leaf functions
  */
+#define TDH_SYS_INIT		33
 #define TDH_SYS_LP_SHUTDOWN	44
 
 struct tdx_module_output;

From patchwork Mon Feb 28 02:12:56 2022
From: Kai Huang
Subject: [RFC PATCH 08/21] x86/virt/tdx: Do logical-cpu scope TDX module initialization
Date: Mon, 28 Feb 2022 15:12:56 +1300
Message-Id: <3e36f9d3a3b98d6273c296aa17cd0105a27d44ab.1646007267.git.kai.huang@intel.com>

Logical-cpu scope initialization requires calling TDH.SYS.LP.INIT on all BIOS-enabled cpus, otherwise the TDH.SYS.CONFIG SEAMCALL will fail. TDH.SYS.LP.INIT can be called concurrently on all cpus.

Following global initialization, do the logical-cpu scope initialization by calling TDH.SYS.LP.INIT on all online cpus. Whether all BIOS-enabled cpus are online is not checked here for simplicity. The user of TDX should guarantee all BIOS-enabled cpus are online.

Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/tdx.c | 12 ++++++++++++
 arch/x86/virt/vmx/tdx.h |  1 +
 2 files changed, 13 insertions(+)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 197c721d5388..c9de3d6f903d 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -462,6 +462,13 @@ static int __tdx_detect(void)
 	return -ENODEV;
 }
 
+static int tdx_module_init_cpus(void)
+{
+	struct seamcall_ctx sc = { .fn = TDH_SYS_LP_INIT };
+
+	return seamcall_on_each_cpu(&sc);
+}
+
 static int init_tdx_module(void)
 {
 	int ret;
@@ -471,6 +478,11 @@ static int init_tdx_module(void)
 	if (ret)
 		goto out;
 
+	/* Logical-cpu scope initialization */
+	ret = tdx_module_init_cpus();
+	if (ret)
+		goto out;
+
 	/*
 	 * Return -EFAULT until all steps of TDX module
 	 * initialization are done.
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h index f0983b1936d8..b8cfdd6e12f3 100644 --- a/arch/x86/virt/vmx/tdx.h +++ b/arch/x86/virt/vmx/tdx.h @@ -39,6 +39,7 @@ struct p_seamldr_info { * TDX module SEAMCALL leaf functions */ #define TDH_SYS_INIT 33 +#define TDH_SYS_LP_INIT 35 #define TDH_SYS_LP_SHUTDOWN 44 struct tdx_module_output; From patchwork Mon Feb 28 02:12:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12762296 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0E580C433EF for ; Mon, 28 Feb 2022 02:15:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232493AbiB1CPg (ORCPT ); Sun, 27 Feb 2022 21:15:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60666 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232425AbiB1CPd (ORCPT ); Sun, 27 Feb 2022 21:15:33 -0500 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 36BB352B0B; Sun, 27 Feb 2022 18:14:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1646014495; x=1677550495; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=K+hmaAhVde68n+v/yNogL5WRYDEnLwUV1cfqG1jNRN8=; b=j9P3+5YNOrD20GCakc/n9oRAuxWNOPoEU1R1tWqXYKjAQ4XJ3nvSVK3h HqHLcs4nVhaU4Rhas+9Un/aQNnBbE93cW1blFD0rbPudLAbOzz3TmrCRw szVoyPQUDVlITZeQZkxU97Iu0lnBqAa0rX8ApDjex5tqebV7tfXGFM8WO UaWFAsmEUV5gRUTFOBTvmq8TyZIoWbvdwe0ACFyoWDK+/hrbX77lbY9TR ht73o4Kqfea3wFM2cMWzW90rsNbFcnhYktxxPYSamFHqrrrJxc5d0pucZ hXviBVC9rVW0AybVw0h3/NV7lhkmIFyOCXmGKATBPfDSRWNXDad8DiWLF A==; X-IronPort-AV: 
From: Kai Huang
Subject: [RFC PATCH 09/21] x86/virt/tdx: Get information about TDX module and convertible memory
Date: Mon, 28 Feb 2022 15:12:57 +1300

TDX provides increased levels of memory confidentiality and integrity. This requires special hardware support for features like memory encryption and storage of memory integrity checksums. Not all memory satisfies these requirements. As a result, TDX introduced the concept of a "Convertible Memory Region" (CMR). During boot, the firmware builds a list of all of the memory ranges which can provide the TDX security guarantees. The list of these ranges, along with TDX module information, is available to the kernel by querying the TDX module via the TDH.SYS.INFO SEAMCALL.
The host kernel can choose whether or not to use all convertible memory regions as TDX memory. Before the TDX module is ready to create any TD guests, all TDX memory regions that the host kernel intends to use must be configured to the TDX module, using specific data structures defined by the TDX architecture. Constructing those structures requires information about both the TDX module and the Convertible Memory Regions. Call TDH.SYS.INFO to get this information as preparation for constructing those structures.

Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/tdx.c | 127 ++++++++++++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx.h |  61 +++++++++++++++++++
 2 files changed, 188 insertions(+)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index c9de3d6f903d..ca873b4373fd 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -80,6 +80,11 @@ static DEFINE_MUTEX(tdx_module_lock);
 
 static struct p_seamldr_info p_seamldr_info;
 
+/* Base address of CMR array needs to be 512 bytes aligned. */
+static struct cmr_info tdx_cmr_array[MAX_CMRS] __aligned(CMR_INFO_ARRAY_ALIGNMENT);
+static int tdx_cmr_num;
+static struct tdsysinfo_struct tdx_sysinfo;
+
 static bool __seamrr_enabled(void)
 {
 	return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
@@ -469,6 +474,123 @@ static int tdx_module_init_cpus(void)
 	return seamcall_on_each_cpu(&sc);
 }
 
+static inline bool cmr_valid(struct cmr_info *cmr)
+{
+	return !!cmr->size;
+}
+
+static void print_cmrs(struct cmr_info *cmr_array, int cmr_num,
+		const char *name)
+{
+	int i;
+
+	for (i = 0; i < cmr_num; i++) {
+		struct cmr_info *cmr = &cmr_array[i];
+
+		pr_info("%s : [0x%llx, 0x%llx)\n", name,
+			cmr->base, cmr->base + cmr->size);
+	}
+}
+
+static int sanitize_cmrs(struct cmr_info *cmr_array, int cmr_num)
+{
+	int i, j;
+
+	/*
+	 * Intel TDX module spec, 20.7.3 CMR_INFO:
+	 *
+	 * TDH.SYS.INFO leaf function returns a MAX_CMRS (32) entry
+	 * array of CMR_INFO entries. The CMRs are sorted from the
+	 * lowest base address to the highest base address, and they
+	 * are non-overlapping.
+	 *
+	 * This implies that BIOS may generate invalid empty entries
+	 * if total CMRs are less than 32. Skip them manually.
+	 */
+	for (i = 0; i < cmr_num; i++) {
+		struct cmr_info *cmr = &cmr_array[i];
+		struct cmr_info *prev_cmr = NULL;
+
+		/* Skip further invalid CMRs */
+		if (!cmr_valid(cmr))
+			break;
+
+		if (i > 0)
+			prev_cmr = &cmr_array[i - 1];
+
+		/*
+		 * It is a TDX firmware bug if CMRs are not
+		 * in address ascending order.
+		 */
+		if (prev_cmr && ((prev_cmr->base + prev_cmr->size) >
+					cmr->base)) {
+			pr_err("Firmware bug: CMRs not in address ascending order.\n");
+			return -EFAULT;
+		}
+	}
+
+	/*
+	 * Also a sane BIOS should never generate invalid CMR(s) between
+	 * two valid CMRs. Sanity check this and simply return error in
+	 * this case.
+	 */
+	for (j = i; j < cmr_num; j++)
+		if (cmr_valid(&cmr_array[j])) {
+			pr_err("Firmware bug: invalid CMR(s) among valid CMRs.\n");
+			return -EFAULT;
+		}
+
+	/*
+	 * Trim all tail invalid empty CMRs. BIOS should generate at
+	 * least one valid CMR, otherwise it's a TDX firmware bug.
+	 */
+	tdx_cmr_num = i;
+	if (!tdx_cmr_num) {
+		pr_err("Firmware bug: No valid CMR.\n");
+		return -EFAULT;
+	}
+
+	/* Print kernel sanitized CMRs */
+	print_cmrs(tdx_cmr_array, tdx_cmr_num, "Kernel-sanitized-CMR");
+
+	return 0;
+}
+
+static int tdx_get_sysinfo(void)
+{
+	struct tdx_module_output out;
+	u64 tdsysinfo_sz, cmr_num;
+	int ret;
+
+	BUILD_BUG_ON(sizeof(struct tdsysinfo_struct) != TDSYSINFO_STRUCT_SIZE);
+
+	ret = seamcall(TDH_SYS_INFO, __pa(&tdx_sysinfo), TDSYSINFO_STRUCT_SIZE,
+			__pa(tdx_cmr_array), MAX_CMRS, NULL, &out);
+	if (ret)
+		return ret;
+
+	/*
+	 * If TDH.SYS.INFO succeeds, RDX contains the actual bytes
+	 * written to @tdx_sysinfo and R9 contains the actual entries
+	 * written to @tdx_cmr_array. Sanity check them.
+	 */
+	tdsysinfo_sz = out.rdx;
+	cmr_num = out.r9;
+	if (WARN_ON_ONCE((tdsysinfo_sz > sizeof(tdx_sysinfo)) || !tdsysinfo_sz ||
+			(cmr_num > MAX_CMRS) || !cmr_num))
+		return -EFAULT;
+
+	pr_info("TDX module: vendor_id 0x%x, major_version %u, minor_version %u, build_date %u, build_num %u",
+		tdx_sysinfo.vendor_id, tdx_sysinfo.major_version,
+		tdx_sysinfo.minor_version, tdx_sysinfo.build_date,
+		tdx_sysinfo.build_num);
+
+	/* Print BIOS provided CMRs */
+	print_cmrs(tdx_cmr_array, cmr_num, "BIOS-CMR");
+
+	return sanitize_cmrs(tdx_cmr_array, cmr_num);
+}
+
 static int init_tdx_module(void)
 {
 	int ret;
@@ -483,6 +605,11 @@ static int init_tdx_module(void)
 	if (ret)
 		goto out;
 
+	/* Get TDX module information and CMRs */
+	ret = tdx_get_sysinfo();
+	if (ret)
+		goto out;
+
 	/*
 	 * Return -EFAULT until all steps of TDX module
 	 * initialization are done.
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index b8cfdd6e12f3..2f21c45df6ac 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -29,6 +29,66 @@ struct p_seamldr_info {
 	u8 reserved2[88];
 } __packed __aligned(P_SEAMLDR_INFO_ALIGNMENT);
 
+struct cmr_info {
+	u64 base;
+	u64 size;
+} __packed;
+
+#define MAX_CMRS			32
+#define CMR_INFO_ARRAY_ALIGNMENT	512
+
+struct cpuid_config {
+	u32 leaf;
+	u32 sub_leaf;
+	u32 eax;
+	u32 ebx;
+	u32 ecx;
+	u32 edx;
+} __packed;
+
+#define TDSYSINFO_STRUCT_SIZE		1024
+#define TDSYSINFO_STRUCT_ALIGNMENT	1024
+
+struct tdsysinfo_struct {
+	/* TDX-SEAM Module Info */
+	u32 attributes;
+	u32 vendor_id;
+	u32 build_date;
+	u16 build_num;
+	u16 minor_version;
+	u16 major_version;
+	u8 reserved0[14];
+	/* Memory Info */
+	u16 max_tdmrs;
+	u16 max_reserved_per_tdmr;
+	u16 pamt_entry_size;
+	u8 reserved1[10];
+	/* Control Struct Info */
+	u16 tdcs_base_size;
+	u8 reserved2[2];
+	u16 tdvps_base_size;
+	u8 tdvps_xfam_dependent_size;
+	u8 reserved3[9];
+	/* TD Capabilities */
+	u64 attributes_fixed0;
+	u64 attributes_fixed1;
+	u64 xfam_fixed0;
+	u64 xfam_fixed1;
+	u8 reserved4[32];
+	u32 num_cpuid_config;
+	/*
+	 * The actual number of CPUID_CONFIG depends on above
+	 * 'num_cpuid_config'. The size of 'struct tdsysinfo_struct'
+	 * is 1024B defined by TDX architecture. Use a union with
+	 * specific padding to make 'sizeof(struct tdsysinfo_struct)'
+	 * equal to 1024.
+	 */
+	union {
+		struct cpuid_config cpuid_configs[0];
+		u8 reserved5[892];
+	};
+} __packed __aligned(TDSYSINFO_STRUCT_ALIGNMENT);
+
 /*
  * P-SEAMLDR SEAMCALL leaf function
  */
@@ -38,6 +98,7 @@ struct p_seamldr_info {
 /*
  * TDX module SEAMCALL leaf functions
  */
+#define TDH_SYS_INFO		32
 #define TDH_SYS_INIT		33
 #define TDH_SYS_LP_INIT		35
 #define TDH_SYS_LP_SHUTDOWN	44

From patchwork Mon Feb 28 02:12:58 2022
From: Kai Huang
Subject: [RFC PATCH 10/21] x86/virt/tdx: Add placeholder to cover all system RAM as TDX memory
Date: Mon, 28 Feb 2022 15:12:58 +1300
Message-Id: <55bdfd91c81fe702b55d74ea3ade8334bf148732.1646007267.git.kai.huang@intel.com>

TDX provides increased levels of memory confidentiality and integrity. This requires special hardware support for features like memory encryption and storage of memory integrity checksums.
Not all memory satisfies these requirements. As a result, TDX introduced the concept of a "Convertible Memory Region" (CMR). During boot, the firmware builds a list of all of the memory ranges which can provide the TDX security guarantees. The list of these ranges, along with TDX module information, is available to the kernel by querying the TDX module.

In order to provide crypto protection to TD guests, the TDX architecture also needs additional metadata to record things like which TD guest "owns" a given page of memory. This metadata essentially serves as the 'struct page' for the TDX module. The space for this metadata is not reserved by the hardware upfront and must be allocated by the kernel and given to the TDX module.

Since this metadata consumes space, the VMM can choose whether or not to allocate it for a given area of convertible memory. If it chooses not to, the memory cannot receive TDX protections and cannot be used by TDX guests as private memory.

For every memory region that the VMM wants to use as TDX memory, it sets up a "TD Memory Region" (TDMR). Each TDMR represents a physically contiguous convertible range and must also have its own physically contiguous metadata table, referred to as a Physical Address Metadata Table (PAMT), to track status for each page in the TDMR range.

Unlike a CMR, each TDMR requires 1G granularity and alignment. To support physical RAM areas that don't meet those strict requirements, each TDMR permits a number of internal "reserved areas" which can be placed over memory holes. If PAMT metadata is placed within a TDMR it must be covered by one of these reserved areas.

Let's summarize the concepts:

 CMR  - Firmware-enumerated physical ranges that support TDX. CMRs are
        4K aligned.
 TDMR - Physical address range which is chosen by the kernel to support
        TDX. 1G granularity and alignment required. Each TDMR has
        reserved areas where TDX memory holes and overlapping PAMTs can
        be put into.
 PAMT - Physically contiguous TDX metadata. One table for each page size
        per TDMR. Roughly 1/256th of TDMR in size. 256G TDMR = ~1G PAMT.

As one step of initializing the TDX module, the memory regions that the TDX module can use must be configured to the TDX module via an array of TDMRs. Constructing TDMRs to build the TDX memory consists of the steps below:

1) Create TDMRs to cover all memory regions that the TDX module can use;
2) Allocate and set up a PAMT for each TDMR;
3) Set up reserved areas for each TDMR.

Add a placeholder, right after getting the TDX module and CMR information, to construct TDMRs via the above steps, as the preparation for configuring the TDX module. Always free TDMRs at the end of initialization (whether it succeeded or not), as TDMRs are only used during initialization.

Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/tdx.c | 47 +++++++++++++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx.h | 23 ++++++++++++++++++++
 2 files changed, 70 insertions(+)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index ca873b4373fd..cd7c09a57235 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -591,8 +592,29 @@ static int tdx_get_sysinfo(void)
 	return sanitize_cmrs(tdx_cmr_array, cmr_num);
 }
 
+static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
+{
+	int i;
+
+	for (i = 0; i < tdmr_num; i++) {
+		struct tdmr_info *tdmr = tdmr_array[i];
+
+		/* kfree() works with NULL */
+		kfree(tdmr);
+		tdmr_array[i] = NULL;
+	}
+}
+
+static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
+{
+	/* Return -EFAULT until constructing TDMRs is done */
+	return -EFAULT;
+}
+
 static int init_tdx_module(void)
 {
+	struct tdmr_info **tdmr_array;
+	int tdmr_num;
 	int ret;
 
 	/* TDX module global initialization */
@@ -610,11 +632,36 @@ static int init_tdx_module(void)
 	if (ret)
 		goto out;
 
+	/*
+	 * Prepare enough space to hold pointers of TDMRs (TDMR_INFO).
+	 * TDX requires TDMR_INFO being 512 aligned. Each TDMR is
+	 * allocated individually within construct_tdmrs() to meet
+	 * this requirement.
+	 */
+	tdmr_array = kcalloc(tdx_sysinfo.max_tdmrs, sizeof(struct tdmr_info *),
+			GFP_KERNEL);
+	if (!tdmr_array) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	/* Construct TDMRs to build TDX memory */
+	ret = construct_tdmrs(tdmr_array, &tdmr_num);
+	if (ret)
+		goto out_free_tdmrs;
+
 	/*
 	 * Return -EFAULT until all steps of TDX module
 	 * initialization are done.
 	 */
 	ret = -EFAULT;
+out_free_tdmrs:
+	/*
+	 * TDMRs are only used during initializing TDX module. Always
+	 * free them no matter the initialization was successful or not.
+	 */
+	free_tdmrs(tdmr_array, tdmr_num);
+	kfree(tdmr_array);
 out:
 	return ret;
 }
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index 2f21c45df6ac..05bf9fe6bd00 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -89,6 +89,29 @@ struct tdsysinfo_struct {
 	};
 } __packed __aligned(TDSYSINFO_STRUCT_ALIGNMENT);
 
+struct tdmr_reserved_area {
+	u64 offset;
+	u64 size;
+} __packed;
+
+#define TDMR_INFO_ALIGNMENT	512
+
+struct tdmr_info {
+	u64 base;
+	u64 size;
+	u64 pamt_1g_base;
+	u64 pamt_1g_size;
+	u64 pamt_2m_base;
+	u64 pamt_2m_size;
+	u64 pamt_4k_base;
+	u64 pamt_4k_size;
+	/*
+	 * Actual number of reserved areas depends on
+	 * 'struct tdsysinfo_struct'::max_reserved_per_tdmr.
+	 */
+	struct tdmr_reserved_area reserved_areas[0];
+} __packed __aligned(TDMR_INFO_ALIGNMENT);
+
 /*
  * P-SEAMLDR SEAMCALL leaf function
  */

From patchwork Mon Feb 28 02:12:59 2022
From: Kai Huang
Subject: [RFC PATCH 11/21] x86/virt/tdx: Choose to use all system RAM as TDX memory
Date: Mon, 28 Feb 2022 15:12:59 +1300

As one step of initializing the TDX module, the memory regions that the TDX module can use must be configured to the TDX module via an array of TDMRs. The kernel is responsible for choosing which memory regions to use as TDX memory.

The first generation of TDX-capable platforms basically guarantees that all system RAM regions are Convertible Memory Regions (excluding the memory below 1MB). The memory pages allocated to TD guests can be any pages managed by the page allocator. To avoid modifying the page allocator to distinguish between TDX and non-TDX memory allocations, adopt a simple policy of using all system RAM regions as TDX memory. The low 1MB pages are excluded from TDX memory since they are not in CMRs. But this is OK since they are reserved at boot time and won't be managed by the page allocator anyway.
This policy could be revised later if future TDX generations break the guarantee or when the size of the metadata (~1/256th of the size of the TDX-usable memory) becomes a concern. At that time a CMR-aware page allocator may be necessary.

To begin with, sanity check that all e820 RAM entries (excluding the low 1MB) are fully covered by a CMR and can be used as TDX memory. Use e820_table, rather than e820_table_firmware or e820_table_kexec, to honor the 'mem' and 'memmap' kernel command lines. X86 legacy PMEMs (PRAM) are also treated as RAM since underneath they are RAM, and they may be used by TD guests.

Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/tdx.c | 150 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 149 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index cd7c09a57235..0780ec71651b 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "tdx.h"
@@ -592,6 +593,145 @@ static int tdx_get_sysinfo(void)
 	return sanitize_cmrs(tdx_cmr_array, cmr_num);
 }
 
+/*
+ * Only E820_TYPE_RAM and E820_TYPE_PRAM are considered as candidate for
+ * TDX usable memory. The latter is treated as RAM because it is created
+ * on top of real RAM via kernel command line and may be allocated for TD
+ * guests.
+ */
+static bool e820_entry_is_ram(struct e820_entry *entry)
+{
+	return (entry->type == E820_TYPE_RAM) ||
+		(entry->type == E820_TYPE_PRAM);
+}
+
+/*
+ * The low memory below 1MB is not covered by CMRs on some TDX platforms.
+ * In practice, this range cannot be used for guest memory because it is
+ * not managed by the page allocator due to boot-time reservation. Just
+ * skip the low 1MB so this range won't be treated as TDX memory.
+ *
+ * Return true if the e820 entry is completely skipped, in which case
+ * caller should ignore this entry. Otherwise the actual memory range
+ * after skipping the low 1MB is returned via @start and @end.
+ */
+static bool e820_entry_skip_lowmem(struct e820_entry *entry, u64 *start,
+		u64 *end)
+{
+	u64 _start = entry->addr;
+	u64 _end = entry->addr + entry->size;
+
+	if (_start < SZ_1M)
+		_start = SZ_1M;
+
+	*start = _start;
+	*end = _end;
+
+	return _start >= _end;
+}
+
+/*
+ * Trim away non-page-aligned memory at the beginning and the end for a
+ * given region. Return true when there are still pages remaining after
+ * trimming, and the trimmed region is returned via @start and @end.
+ */
+static bool e820_entry_trim(u64 *start, u64 *end)
+{
+	u64 s, e;
+
+	s = round_up(*start, PAGE_SIZE);
+	e = round_down(*end, PAGE_SIZE);
+
+	if (s >= e)
+		return false;
+
+	*start = s;
+	*end = e;
+
+	return true;
+}
+
+/* Find the next RAM entry (excluding low 1MB) in e820 */
+static void e820_next_mem(struct e820_table *table, int *idx, u64 *start,
+		u64 *end)
+{
+	int i;
+
+	for (i = *idx; i < table->nr_entries; i++) {
+		struct e820_entry *entry = &table->entries[i];
+		u64 s, e;
+
+		if (!e820_entry_is_ram(entry))
+			continue;
+
+		if (e820_entry_skip_lowmem(entry, &s, &e))
+			continue;
+
+		if (!e820_entry_trim(&s, &e))
+			continue;
+
+		*idx = i;
+		*start = s;
+		*end = e;
+
+		return;
+	}
+
+	*idx = table->nr_entries;
+}
+
+/* Helper to loop all e820 RAM entries with low 1MB excluded */
+#define e820_for_each_mem(_table, _i, _start, _end)			\
+	for ((_i) = 0, e820_next_mem((_table), &(_i), &(_start), &(_end)); \
+		(_i) < (_table)->nr_entries;				\
+		(_i)++, e820_next_mem((_table), &(_i), &(_start), &(_end)))
+
+/* Check whether first range is the subrange of the second */
+static bool is_subrange(u64 r1_start, u64 r1_end, u64 r2_start, u64 r2_end)
+{
+	return (r1_start >= r2_start && r1_end <= r2_end) ? true : false;
+}
+
+/* Check whether address range is covered by any CMR or not. */
+static bool range_covered_by_cmr(struct cmr_info *cmr_array, int cmr_num,
+		u64 start, u64 end)
+{
+	int i;
+
+	for (i = 0; i < cmr_num; i++) {
+		struct cmr_info *cmr = &cmr_array[i];
+
+		if (is_subrange(start, end, cmr->base, cmr->base + cmr->size))
+			return true;
+	}
+
+	return false;
+}
+
+/* Sanity check whether all e820 RAM entries are fully covered by CMRs. */
+static int e820_check_against_cmrs(void)
+{
+	u64 start, end;
+	int i;
+
+	/*
+	 * Loop over e820_table to find all RAM entries and check
+	 * whether they are all fully covered by any CMR. Use e820_table
+	 * instead of e820_table_firmware or e820_table_kexec to honor
+	 * possible 'mem' and 'memmap' kernel command lines.
+	 */
+	e820_for_each_mem(e820_table, i, start, end) {
+		if (!range_covered_by_cmr(tdx_cmr_array, tdx_cmr_num,
+					start, end)) {
+			pr_err("[0x%llx, 0x%llx) is not fully convertible memory\n",
+				start, end);
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
 static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
 {
 	int i;
@@ -607,8 +747,16 @@ static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
 
 static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
 {
+	int ret;
+
+	ret = e820_check_against_cmrs();
+	if (ret)
+		goto err;
+
 	/* Return -EFAULT until constructing TDMRs is done */
-	return -EFAULT;
+	ret = -EFAULT;
+err:
+	return ret;
 }
 
 static int init_tdx_module(void)

From patchwork Mon Feb 28 02:13:00 2022
From: Kai Huang
Subject: [RFC PATCH 12/21] x86/virt/tdx: Create TDMRs to cover all system RAM
Date: Mon, 28 Feb 2022 15:13:00 +1300
Message-Id: <2570f75f10ea67b849a47159e4bcde1227e1c8be.1646007267.git.kai.huang@intel.com>

The kernel configures TDX-usable memory regions to the TDX module via an array of "TD Memory Regions" (TDMRs). Each TDMR entry (TDMR_INFO) contains the information of the base/size of a memory region, the base/size of the associated Physical Address Metadata Table (PAMT), and a list of reserved areas in the region.

Create a number of TDMRs according to the verified e820 RAM entries. As the first step, only set up the base/size information for each TDMR. A TDMR must be 1G aligned and its size must be in 1G granularity. This implies that one TDMR could cover multiple e820 RAM entries. If a RAM entry spans the 1GB boundary and the former part is already covered by the previous TDMR, just create a new TDMR for the latter part.

TDX only supports a limited number of TDMRs (currently 64). Abort the TDMR construction process when the number of TDMRs exceeds this limit.
Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/tdx.c | 138 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 138 insertions(+)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 0780ec71651b..fe83cf9ac2f9 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -53,6 +53,18 @@
 	((u32)(((_keyid_part) & 0xffffffffull) + 1))
 #define TDX_KEYID_NUM(_keyid_part)	((u32)((_keyid_part) >> 32))
 
+/* TDMR must be 1GB aligned */
+#define TDMR_ALIGNMENT		BIT_ULL(30)
+#define TDMR_PFN_ALIGNMENT	(TDMR_ALIGNMENT >> PAGE_SHIFT)
+
+/* Align up and down the address to TDMR boundary */
+#define TDMR_ALIGN_DOWN(_addr)	ALIGN_DOWN((_addr), TDMR_ALIGNMENT)
+#define TDMR_ALIGN_UP(_addr)	ALIGN((_addr), TDMR_ALIGNMENT)
+
+/* TDMR's start and end address */
+#define TDMR_START(_tdmr)	((_tdmr)->base)
+#define TDMR_END(_tdmr)		((_tdmr)->base + (_tdmr)->size)
+
 /*
  * TDX module status during initialization
  */
@@ -732,6 +744,44 @@ static int e820_check_against_cmrs(void)
 	return 0;
 }
 
+/* The starting offset of reserved areas within TDMR_INFO */
+#define TDMR_RSVD_START		64
+
+static struct tdmr_info *__alloc_tdmr(void)
+{
+	int tdmr_sz;
+
+	/*
+	 * TDMR_INFO's actual size depends on maximum number of reserved
+	 * areas that one TDMR supports.
+	 */
+	tdmr_sz = TDMR_RSVD_START + tdx_sysinfo.max_reserved_per_tdmr *
+		sizeof(struct tdmr_reserved_area);
+
+	/*
+	 * TDX requires TDMR_INFO to be 512 aligned. Always align up
+	 * TDMR_INFO size to 512 so the memory allocated via kzalloc()
+	 * can meet the alignment requirement.
+	 */
+	tdmr_sz = ALIGN(tdmr_sz, TDMR_INFO_ALIGNMENT);
+
+	return kzalloc(tdmr_sz, GFP_KERNEL);
+}
+
+/* Create a new TDMR at given index in the TDMR array */
+static struct tdmr_info *alloc_tdmr(struct tdmr_info **tdmr_array, int idx)
+{
+	struct tdmr_info *tdmr;
+
+	if (WARN_ON_ONCE(tdmr_array[idx]))
+		return NULL;
+
+	tdmr = __alloc_tdmr();
+	tdmr_array[idx] = tdmr;
+
+	return tdmr;
+}
+
 static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
 {
 	int i;
@@ -745,6 +795,89 @@ static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
 	}
 }
 
+/*
+ * Create TDMRs to cover all RAM entries in e820_table. The created
+ * TDMRs are saved to @tdmr_array and @tdmr_num is set to the actual
+ * number of TDMRs. All entries in @tdmr_array must be initially NULL.
+ */
+static int create_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
+{
+	struct tdmr_info *tdmr;
+	u64 start, end;
+	int i, tdmr_idx;
+	int ret = 0;
+
+	tdmr_idx = 0;
+	tdmr = alloc_tdmr(tdmr_array, 0);
+	if (!tdmr)
+		return -ENOMEM;
+	/*
+	 * Loop over all RAM entries in e820 and create TDMRs to cover
+	 * them. To keep it simple, always try to use one TDMR to cover
+	 * one RAM entry.
+	 */
+	e820_for_each_mem(e820_table, i, start, end) {
+		start = TDMR_ALIGN_DOWN(start);
+		end = TDMR_ALIGN_UP(end);
+
+		/*
+		 * If the current TDMR's size hasn't been initialized, it
+		 * is a new allocated TDMR to cover the new RAM entry.
+		 * Otherwise the current TDMR already covers the previous
+		 * RAM entry. In the latter case, check whether the
+		 * current RAM entry has been fully or partially covered
+		 * by the current TDMR, since TDMR is 1G aligned.
+		 */
+		if (tdmr->size) {
+			/*
+			 * Loop to next RAM entry if the current entry
+			 * is already fully covered by the current TDMR.
+			 */
+			if (end <= TDMR_END(tdmr))
+				continue;
+
+			/*
+			 * If part of current RAM entry has already been
+			 * covered by current TDMR, skip the already
+			 * covered part.
+			 */
+			if (start < TDMR_END(tdmr))
+				start = TDMR_END(tdmr);
+
+			/*
+			 * Create a new TDMR to cover the current RAM
+			 * entry, or the remaining part of it.
+			 */
+			tdmr_idx++;
+			if (tdmr_idx >= tdx_sysinfo.max_tdmrs) {
+				ret = -E2BIG;
+				goto err;
+			}
+			tdmr = alloc_tdmr(tdmr_array, tdmr_idx);
+			if (!tdmr) {
+				ret = -ENOMEM;
+				goto err;
+			}
+		}
+
+		tdmr->base = start;
+		tdmr->size = end - start;
+	}
+
+	/* @tdmr_idx is always the index of last valid TDMR. */
+	*tdmr_num = tdmr_idx + 1;
+
+	return 0;
+err:
+	/*
+	 * Clean up already allocated TDMRs in case of error. @tdmr_idx
+	 * indicates the last TDMR that wasn't created successfully,
+	 * therefore only needs to free @tdmr_idx TDMRs.
+	 */
+	free_tdmrs(tdmr_array, tdmr_idx);
+	return ret;
+}
+
 static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
 {
 	int ret;
@@ -753,8 +886,13 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
 	if (ret)
 		goto err;
 
+	ret = create_tdmrs(tdmr_array, tdmr_num);
+	if (ret)
+		goto err;
+
 	/* Return -EFAULT until constructing TDMRs is done */
 	ret = -EFAULT;
+	free_tdmrs(tdmr_array, *tdmr_num);
 err:
 	return ret;
 }

From patchwork Mon Feb 28 02:13:01 2022
From: Kai Huang
To: x86@kernel.org
Subject: [RFC PATCH 13/21] x86/virt/tdx: Allocate and set up PAMTs for TDMRs
Date: Mon, 28 Feb 2022 15:13:01 +1300
Message-Id: <79b431a26404951d34c9324c4f2d0e8023fcd259.1646007267.git.kai.huang@intel.com>

In order to provide crypto protection to guests, the TDX module uses additional metadata to record things like which guest "owns" a given page of memory. This metadata, referred to as the Physical Address Metadata Table (PAMT), essentially serves as the 'struct page' for the TDX module. PAMTs are not reserved by hardware upfront. They must be allocated by the kernel and then given to the TDX module.

TDX supports 3 page sizes: 4K, 2M, and 1G. Each "TD Memory Region" (TDMR) has 3 PAMTs to track the 3 supported page sizes respectively. Each PAMT must be a physically contiguous area from the Convertible Memory Regions (CMR). However, the PAMTs which track pages in one TDMR do not need to reside within that TDMR, but can be anywhere in CMRs. If one PAMT overlaps with any TDMR, the overlapping part must be reported as a reserved area in that particular TDMR.

Use alloc_contig_pages() since a PAMT must be a physically contiguous area and may be potentially large (~1/256th of the size of the given TDMR).

The current version of TDX supports at most 16 reserved areas per TDMR to cover both PAMTs and potential memory holes within the TDMR. If many PAMTs are allocated within a single TDMR, 16 reserved areas may not be sufficient to cover all of them. Adopt the following policies when allocating PAMTs for a given TDMR:

- Allocate the three PAMTs of the TDMR in one contiguous chunk to minimize the total number of reserved areas consumed for PAMTs.
- Try to first allocate the PAMT from the local node of the TDMR for better NUMA locality.
Signed-off-by: Kai Huang --- arch/x86/Kconfig | 2 + arch/x86/virt/vmx/tdx.c | 165 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 167 insertions(+) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index f4c5481cca46..700a9008dbbe 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1961,6 +1961,8 @@ config INTEL_TDX_HOST default n depends on CPU_SUP_INTEL depends on X86_64 + depends on CONTIG_ALLOC + select NUMA_KEEP_MEMINFO if NUMA help Intel Trust Domain Extensions (TDX) protects guest VMs from malicious host and certain physical attacks. This option enables necessary TDX diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index fe83cf9ac2f9..d29e7943f890 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include "tdx.h" @@ -65,6 +66,16 @@ #define TDMR_START(_tdmr) ((_tdmr)->base) #define TDMR_END(_tdmr) ((_tdmr)->base + (_tdmr)->size) +/* Page sizes supported by TDX */ +enum tdx_page_sz { + TDX_PG_4K = 0, + TDX_PG_2M, + TDX_PG_1G, + TDX_PG_MAX, +}; + +#define TDX_HPAGE_SHIFT 9 + /* * TDX module status during initialization */ @@ -878,6 +889,148 @@ static int create_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) return ret; } +/* Calculate PAMT size given a TDMR and a page size */ +static unsigned long __tdmr_get_pamt_sz(struct tdmr_info *tdmr, + enum tdx_page_sz pgsz) +{ + unsigned long pamt_sz; + + pamt_sz = (tdmr->size >> ((TDX_HPAGE_SHIFT * pgsz) + PAGE_SHIFT)) * + tdx_sysinfo.pamt_entry_size; + /* PAMT size must be 4K aligned */ + pamt_sz = ALIGN(pamt_sz, PAGE_SIZE); + + return pamt_sz; +} + +/* Calculate the size of all PAMTs for a TDMR */ +static unsigned long tdmr_get_pamt_sz(struct tdmr_info *tdmr) +{ + enum tdx_page_sz pgsz; + unsigned long pamt_sz; + + pamt_sz = 0; + for (pgsz = TDX_PG_4K; pgsz < TDX_PG_MAX; pgsz++) + pamt_sz += __tdmr_get_pamt_sz(tdmr, pgsz); + + return pamt_sz; +} + +/* + * Locate the NUMA node containing the start of the 
given TDMR's first + * RAM entry. The given TDMR may also cover memory in other NUMA nodes. + */ +static int tdmr_get_nid(struct tdmr_info *tdmr) +{ + u64 start, end; + int i; + + /* Find the first RAM entry covered by the TDMR */ + e820_for_each_mem(e820_table, i, start, end) + if (end > TDMR_START(tdmr)) + break; + + /* + * One TDMR must cover at least one (or partial) RAM entry, + * otherwise it is a kernel bug. WARN_ON() in this case. + */ + if (WARN_ON(i == e820_table->nr_entries || start >= TDMR_END(tdmr))) + return 0; + + /* + * The first RAM entry may be partially covered by the previous + * TDMR. In this case, use the TDMR's start to find the NUMA node. + */ + if (start < TDMR_START(tdmr)) + start = TDMR_START(tdmr); + + return phys_to_target_node(start); +} + +static int tdmr_setup_pamt(struct tdmr_info *tdmr) +{ + unsigned long tdmr_pamt_base, pamt_base[TDX_PG_MAX]; + unsigned long pamt_sz[TDX_PG_MAX]; + unsigned long pamt_npages; + struct page *pamt; + enum tdx_page_sz pgsz; + int nid; + + /* + * Allocate one chunk of physically contiguous memory for all + * PAMTs. This helps minimize the PAMT's use of reserved areas + * in overlapping TDMRs. + */ + nid = tdmr_get_nid(tdmr); + pamt_npages = tdmr_get_pamt_sz(tdmr) >> PAGE_SHIFT; + pamt = alloc_contig_pages(pamt_npages, GFP_KERNEL, nid, + &node_online_map); + if (!pamt) + return -ENOMEM; + + /* Calculate PAMT base and size for all supported page sizes.
*/ + tdmr_pamt_base = page_to_pfn(pamt) << PAGE_SHIFT; + for (pgsz = TDX_PG_4K; pgsz < TDX_PG_MAX; pgsz++) { + unsigned long sz = __tdmr_get_pamt_sz(tdmr, pgsz); + + pamt_base[pgsz] = tdmr_pamt_base; + pamt_sz[pgsz] = sz; + + tdmr_pamt_base += sz; + } + + tdmr->pamt_4k_base = pamt_base[TDX_PG_4K]; + tdmr->pamt_4k_size = pamt_sz[TDX_PG_4K]; + tdmr->pamt_2m_base = pamt_base[TDX_PG_2M]; + tdmr->pamt_2m_size = pamt_sz[TDX_PG_2M]; + tdmr->pamt_1g_base = pamt_base[TDX_PG_1G]; + tdmr->pamt_1g_size = pamt_sz[TDX_PG_1G]; + + return 0; +} + +static void tdmr_free_pamt(struct tdmr_info *tdmr) +{ + unsigned long pamt_pfn, pamt_sz; + + pamt_pfn = tdmr->pamt_4k_base >> PAGE_SHIFT; + pamt_sz = tdmr->pamt_4k_size + tdmr->pamt_2m_size + tdmr->pamt_1g_size; + + /* Do nothing if PAMT hasn't been allocated for this TDMR */ + if (!pamt_sz) + return; + + if (WARN_ON(!pamt_pfn)) + return; + + free_contig_range(pamt_pfn, pamt_sz >> PAGE_SHIFT); +} + +static void tdmrs_free_pamt_all(struct tdmr_info **tdmr_array, int tdmr_num) +{ + int i; + + for (i = 0; i < tdmr_num; i++) + tdmr_free_pamt(tdmr_array[i]); +} + +/* Allocate and set up PAMTs for all TDMRs */ +static int tdmrs_setup_pamt_all(struct tdmr_info **tdmr_array, int tdmr_num) +{ + int i, ret; + + for (i = 0; i < tdmr_num; i++) { + ret = tdmr_setup_pamt(tdmr_array[i]); + if (ret) + goto err; + } + + return 0; +err: + tdmrs_free_pamt_all(tdmr_array, tdmr_num); + return -ENOMEM; +} + static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) { int ret; @@ -890,8 +1043,14 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) if (ret) goto err; + ret = tdmrs_setup_pamt_all(tdmr_array, *tdmr_num); + if (ret) + goto err_free_tdmrs; + /* Return -EFAULT until constructing TDMRs is done */ ret = -EFAULT; + tdmrs_free_pamt_all(tdmr_array, *tdmr_num); +err_free_tdmrs: free_tdmrs(tdmr_array, *tdmr_num); err: return ret; @@ -941,6 +1100,12 @@ static int init_tdx_module(void) * initialization are done. 
*/ ret = -EFAULT; + /* + * Free PAMTs allocated in construct_tdmrs() when TDX module + * initialization fails. + */ + if (ret) + tdmrs_free_pamt_all(tdmr_array, tdmr_num); out_free_tdmrs: /* * TDMRs are only used during initializing TDX module. Always

From patchwork Mon Feb 28 02:13:02 2022
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 12762300
From: Kai Huang
To: x86@kernel.org
Subject: [RFC PATCH 14/21] x86/virt/tdx: Set up reserved areas for all TDMRs
Date: Mon, 28 Feb 2022 15:13:02 +1300

As the last step of constructing TDMRs, create reserved area information for the memory region holes in each TDMR. If any PAMT (or part of it) resides within a particular TDMR, also mark it as reserved. All reserved areas in each TDMR must be in ascending address order, as required by the TDX architecture.
Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 148 +++++++++++++++++++++++++++++++++++++++- 1 file changed, 146 insertions(+), 2 deletions(-) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index d29e7943f890..8dac98b91c77 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -1031,6 +1032,145 @@ static int tdmrs_setup_pamt_all(struct tdmr_info **tdmr_array, int tdmr_num) return -ENOMEM; } +static int tdmr_add_rsvd_area(struct tdmr_info *tdmr, int *p_idx, + u64 addr, u64 size) +{ + struct tdmr_reserved_area *rsvd_areas = tdmr->reserved_areas; + int idx = *p_idx; + + /* Reserved area must be 4K aligned in offset and size */ + if (WARN_ON(addr & ~PAGE_MASK || size & ~PAGE_MASK)) + return -EINVAL; + + /* Cannot exceed maximum reserved areas supported by TDX */ + if (idx >= tdx_sysinfo.max_reserved_per_tdmr) + return -E2BIG; + + rsvd_areas[idx].offset = addr - tdmr->base; + rsvd_areas[idx].size = size; + + *p_idx = idx + 1; + + return 0; +} + +/* Compare function called by sort() for TDMR reserved areas */ +static int rsvd_area_cmp_func(const void *a, const void *b) +{ + struct tdmr_reserved_area *r1 = (struct tdmr_reserved_area *)a; + struct tdmr_reserved_area *r2 = (struct tdmr_reserved_area *)b; + + if (r1->offset + r1->size <= r2->offset) + return -1; + if (r1->offset >= r2->offset + r2->size) + return 1; + + /* Reserved areas cannot overlap. Caller should guarantee. 
*/ + WARN_ON(1); + return -1; +} + +/* Set up reserved areas for a TDMR, including memory holes and PAMTs */ +static int tdmr_setup_rsvd_areas(struct tdmr_info *tdmr, + struct tdmr_info **tdmr_array, + int tdmr_num) +{ + u64 start, end, prev_end; + int rsvd_idx, i, ret = 0; + + /* Mark holes between e820 RAM entries as reserved */ + rsvd_idx = 0; + prev_end = TDMR_START(tdmr); + e820_for_each_mem(e820_table, i, start, end) { + /* Break if this entry is after the TDMR */ + if (start >= TDMR_END(tdmr)) + break; + + /* Exclude entries before this TDMR */ + if (end < TDMR_START(tdmr)) + continue; + + /* + * Skip if no hole exists before this entry. "<=" is + * used because one e820 entry might span two TDMRs. + * In that case the start address of this entry is + * smaller than the start address of the second TDMR. + */ + if (start <= prev_end) { + prev_end = end; + continue; + } + + /* Add the hole before this e820 entry */ + ret = tdmr_add_rsvd_area(tdmr, &rsvd_idx, prev_end, + start - prev_end); + if (ret) + return ret; + + prev_end = end; + } + + /* Add the hole after the last RAM entry if it exists. */ + if (prev_end < TDMR_END(tdmr)) { + ret = tdmr_add_rsvd_area(tdmr, &rsvd_idx, prev_end, + TDMR_END(tdmr) - prev_end); + if (ret) + return ret; + } + + /* + * Walk over all TDMRs to find out whether any PAMT falls into + * the given TDMR. If yes, mark it as reserved too.
+ */ + for (i = 0; i < tdmr_num; i++) { + struct tdmr_info *tmp = tdmr_array[i]; + u64 pamt_start, pamt_end; + + pamt_start = tmp->pamt_4k_base; + pamt_end = pamt_start + tmp->pamt_4k_size + + tmp->pamt_2m_size + tmp->pamt_1g_size; + + /* Skip PAMTs outside of the given TDMR */ + if ((pamt_end <= TDMR_START(tdmr)) || + (pamt_start >= TDMR_END(tdmr))) + continue; + + /* Only mark the part within the TDMR as reserved */ + if (pamt_start < TDMR_START(tdmr)) + pamt_start = TDMR_START(tdmr); + if (pamt_end > TDMR_END(tdmr)) + pamt_end = TDMR_END(tdmr); + + ret = tdmr_add_rsvd_area(tdmr, &rsvd_idx, pamt_start, + pamt_end - pamt_start); + if (ret) + return ret; + } + + /* TDX requires reserved areas listed in address ascending order */ + sort(tdmr->reserved_areas, rsvd_idx, sizeof(struct tdmr_reserved_area), + rsvd_area_cmp_func, NULL); + + return 0; +} + +static int tdmrs_setup_rsvd_areas_all(struct tdmr_info **tdmr_array, + int tdmr_num) +{ + int i; + + for (i = 0; i < tdmr_num; i++) { + int ret; + + ret = tdmr_setup_rsvd_areas(tdmr_array[i], tdmr_array, + tdmr_num); + if (ret) + return ret; + } + + return 0; +} + static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) { int ret; @@ -1047,8 +1187,12 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) if (ret) goto err_free_tdmrs; - /* Return -EFAULT until constructing TDMRs is done */ - ret = -EFAULT; + ret = tdmrs_setup_rsvd_areas_all(tdmr_array, *tdmr_num); + if (ret) + goto err_free_pamts; + + return 0; +err_free_pamts: tdmrs_free_pamt_all(tdmr_array, *tdmr_num); err_free_tdmrs: free_tdmrs(tdmr_array, *tdmr_num); From patchwork Mon Feb 28 02:13:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12762303 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org 
From: Kai Huang
To: x86@kernel.org
Subject: [RFC PATCH 15/21] x86/virt/tdx: Reserve TDX module global KeyID
Date: Mon, 28 Feb 2022 15:13:03 +1300
Message-Id: <977a5a4356e47111c05f5d5e5766c743c8db6215.1646007267.git.kai.huang@intel.com>

TDX module initialization requires using one TDX private KeyID as the global KeyID to crypto-protect TDX metadata. The global KeyID is configured to the TDX module along with the TDMRs. Just reserve the first TDX private KeyID as the global KeyID.

Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index 8dac98b91c77..e6c54b2a1f6e 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -111,6 +111,9 @@ static struct cmr_info tdx_cmr_array[MAX_CMRS] __aligned(CMR_INFO_ARRAY_ALIGNMEN static int tdx_cmr_num; static struct tdsysinfo_struct tdx_sysinfo; +/* TDX global KeyID to protect TDX metadata */ +static u32 tdx_global_keyid; + static bool __seamrr_enabled(void) { return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; @@ -1239,6 +1242,12 @@ static int init_tdx_module(void) if (ret) goto out_free_tdmrs; + /* + * Reserve the first TDX KeyID as the global KeyID to protect + * TDX module metadata. + */ + tdx_global_keyid = tdx_keyid_start; + /* * Return -EFAULT until all steps of TDX module * initialization are done.
From patchwork Mon Feb 28 02:13:04 2022
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 12762302
From: Kai Huang
To: x86@kernel.org
Subject: [RFC PATCH 16/21] x86/virt/tdx: Configure TDX module with TDMRs and global KeyID
Date: Mon, 28 Feb 2022 15:13:04 +1300
Message-Id: <50bee01627c6cfe0a1f53058c41fa775762be035.1646007267.git.kai.huang@intel.com>

After the TDX-usable memory regions are constructed in an array of TDMRs and the global KeyID is reserved, configure them to the TDX module. The configuration is done via TDH.SYS.CONFIG, which is a single call and can be done on any logical cpu.
Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 42 +++++++++++++++++++++++++++++++++++++++++ arch/x86/virt/vmx/tdx.h | 2 ++ 2 files changed, 44 insertions(+) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index e6c54b2a1f6e..008628674a2f 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -1203,6 +1203,42 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) return ret; } +static int config_tdx_module(struct tdmr_info **tdmr_array, int tdmr_num, + u64 global_keyid) +{ + u64 *tdmr_pa_array; + int i, array_sz; + int ret; + + /* + * TDMR_INFO entries are configured to the TDX module via an + * array of the physical address of each TDMR_INFO. TDX requires + * the array itself must be 512 aligned. Round up the array size + * to 512 aligned so the buffer allocated by kzalloc() meets the + * alignment requirement. + */ + array_sz = ALIGN(tdmr_num * sizeof(u64), TDMR_INFO_PA_ARRAY_ALIGNMENT); + tdmr_pa_array = kzalloc(array_sz, GFP_KERNEL); + if (!tdmr_pa_array) + return -ENOMEM; + + for (i = 0; i < tdmr_num; i++) + tdmr_pa_array[i] = __pa(tdmr_array[i]); + + /* + * TDH.SYS.CONFIG fails when TDH.SYS.LP.INIT is not done on all + * BIOS-enabled cpus. tdx_init() only disables CPU hotplug but + * doesn't do early check whether all BIOS-enabled cpus are + * online, so TDH.SYS.CONFIG can fail here. + */ + ret = seamcall(TDH_SYS_CONFIG, __pa(tdmr_pa_array), tdmr_num, + global_keyid, 0, NULL, NULL); + /* Free the array as it is not required any more. */ + kfree(tdmr_pa_array); + + return ret; +} + static int init_tdx_module(void) { struct tdmr_info **tdmr_array; @@ -1248,11 +1284,17 @@ static int init_tdx_module(void) */ tdx_global_keyid = tdx_keyid_start; + /* Config the TDX module with TDMRs and global KeyID */ + ret = config_tdx_module(tdmr_array, tdmr_num, tdx_global_keyid); + if (ret) + goto out_free_pamts; + /* * Return -EFAULT until all steps of TDX module * initialization are done. 
*/ ret = -EFAULT; +out_free_pamts: /* * Free PAMTs allocated in construct_tdmrs() when TDX module * initialization fails. diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h index 05bf9fe6bd00..d8e2800397af 100644 --- a/arch/x86/virt/vmx/tdx.h +++ b/arch/x86/virt/vmx/tdx.h @@ -95,6 +95,7 @@ struct tdmr_reserved_area { } __packed; #define TDMR_INFO_ALIGNMENT 512 +#define TDMR_INFO_PA_ARRAY_ALIGNMENT 512 struct tdmr_info { u64 base; @@ -125,6 +126,7 @@ struct tdmr_info { #define TDH_SYS_INIT 33 #define TDH_SYS_LP_INIT 35 #define TDH_SYS_LP_SHUTDOWN 44 +#define TDH_SYS_CONFIG 45 struct tdx_module_output; u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, From patchwork Mon Feb 28 02:13:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12762304 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BA40FC433EF for ; Mon, 28 Feb 2022 02:15:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229519AbiB1CPx (ORCPT ); Sun, 27 Feb 2022 21:15:53 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32782 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232483AbiB1CPg (ORCPT ); Sun, 27 Feb 2022 21:15:36 -0500 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CB96255236; Sun, 27 Feb 2022 18:14:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1646014497; x=1677550497; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=mr0RpXWMofozcpe6De6x3hAXcUkujKnR9X9B8qknSc4=; 
From: Kai Huang
To: x86@kernel.org
Subject: [RFC PATCH 17/21] x86/virt/tdx: Configure global KeyID on all packages
Date: Mon, 28 Feb 2022 15:13:05 +1300
Message-Id: <21b8abec9044667a8137a2b1569c547dbf40641c.1646007267.git.kai.huang@intel.com>

Before the TDX module can use the global KeyID to access TDX metadata, the key of the global KeyID must be configured on all physical packages via TDH.SYS.KEY.CONFIG.
This SEAMCALL cannot run concurrently on different cpus since it exclusively acquires the TDX module. Implement a helper to run the SEAMCALL on one (any) cpu for all packages in a serialized way, and run TDH.SYS.KEY.CONFIG on all packages using the helper. The TDX module uses the global KeyID to initialize its metadata (PAMTs). Before the TDX module can do that, all cachelines of the PAMTs must be flushed. Otherwise, they may silently corrupt the PAMTs later initialized by the TDX module. Use wbinvd to flush the cache as the PAMTs can be potentially large (~1/256th of system RAM). Flush the cache before configuring the global KeyID on all packages, as suggested by the TDX specification. In practice, the current generation of TDX doesn't use the global KeyID in TDH.SYS.KEY.CONFIG, so in practice the cache could be flushed after the global KeyID has been configured on all packages. But a future generation of TDX may change this behaviour, so just follow the TDX specification's suggestion and flush the cache before configuring the global KeyID on all packages. Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 94 ++++++++++++++++++++++++++++++++++++++++- arch/x86/virt/vmx/tdx.h | 1 + 2 files changed, 94 insertions(+), 1 deletion(-) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index 008628674a2f..22cbc43873c9 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include "tdx.h" @@ -397,6 +398,47 @@ static int seamcall_on_each_cpu(struct seamcall_ctx *sc) return atomic_read(&sc->err); } +/* + * Call the SEAMCALL on one (any) cpu for each physical package in + * a serialized way. Note for serialized calls 'seamcall_ctx::err' + * doesn't have to be atomic, but for simplicity just reuse it + * instead of adding a new one. + * + * Return -ENXIO if the IPI SEAMCALL wasn't run on any cpu, or -EFAULT + * when the SEAMCALL fails, or -EPERM when the cpu the SEAMCALL runs + * on is not in VMX operation.
In case of -EFAULT, the error code + * of the SEAMCALL is in 'struct seamcall_ctx::seamcall_ret'. + */ +static int seamcall_on_each_package_serialized(struct seamcall_ctx *sc) +{ + cpumask_var_t packages; + int cpu, ret = 0; + + if (!zalloc_cpumask_var(&packages, GFP_KERNEL)) + return -ENOMEM; + + for_each_online_cpu(cpu) { + if (cpumask_test_and_set_cpu(topology_physical_package_id(cpu), + packages)) + continue; + + ret = smp_call_function_single(cpu, seamcall_smp_call_function, + sc, true); + if (ret) + break; + + /* + * Doesn't have to use atomic_read(), but it doesn't + * hurt either. + */ + ret = atomic_read(&sc->err); + if (ret) + break; + } + + /* Don't leak the cpumask on either the success or the error path. */ + free_cpumask_var(packages); + return ret; +} + static inline bool p_seamldr_ready(void) { return !!p_seamldr_info.p_seamldr_ready; @@ -1239,6 +1281,18 @@ static int config_tdx_module(struct tdmr_info **tdmr_array, int tdmr_num, return ret; } +static int config_global_keyid(u64 global_keyid) +{ + struct seamcall_ctx sc = { .fn = TDH_SYS_KEY_CONFIG }; + + /* + * TDH.SYS.KEY.CONFIG may fail with an entropy error (which is + * a recoverable error). Assume this is exceedingly rare and + * just return the error if encountered instead of retrying. + */ + return seamcall_on_each_package_serialized(&sc); +} + static int init_tdx_module(void) { struct tdmr_info **tdmr_array; @@ -1289,6 +1343,37 @@ static int init_tdx_module(void) if (ret) goto out_free_pamts; + /* + * The same physical address associated with different KeyIDs + * has separate cachelines. Before using the new KeyID to access + * some memory, the cachelines associated with the old KeyID must + * be flushed, otherwise they may later silently corrupt the data + * written with the new KeyID. After cachelines associated with + * the old KeyID are flushed, CPU speculative fetch using the old + * KeyID is OK since the prefetched cachelines won't be consumed + * by the CPU core. + * + * The TDX module initializes PAMTs using the global KeyID to crypto + * protect them from the malicious host.
Before that, the PAMTs are + * used by the kernel (with KeyID 0) and the cachelines associated + * with the PAMTs must be flushed. Given PAMTs are potentially + * large (~1/256th of system RAM), just use WBINVD on all cpus to + * flush the cache. + * + * In practice, the current generation of TDX doesn't use the + * global KeyID in TDH.SYS.KEY.CONFIG. Therefore in practice, + * the cachelines can be flushed after configuring the global + * KeyID on all packages is done. But a future generation of TDX + * may change this, so just follow the suggestion of the TDX spec to + * flush the cache before TDH.SYS.KEY.CONFIG. + */ + wbinvd_on_all_cpus(); + + /* Configure the key of the global KeyID on all packages */ + ret = config_global_keyid(tdx_global_keyid); + if (ret) + goto out_free_pamts; + /* * Return -EFAULT until all steps of TDX module * initialization are done. @@ -1299,8 +1384,15 @@ * Free PAMTs allocated in construct_tdmrs() when TDX module * initialization fails. */ - if (ret) + if (ret) { + /* + * Part of the PAMTs may already have been initialized by + * the TDX module. Flush the cache before returning them back + * to the kernel. + */ + wbinvd_on_all_cpus(); tdmrs_free_pamt_all(tdmr_array, tdmr_num); + } out_free_tdmrs: /* * TDMRs are only used during initializing TDX module.
Always diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h index d8e2800397af..bba8cabea4bb 100644 --- a/arch/x86/virt/vmx/tdx.h +++ b/arch/x86/virt/vmx/tdx.h @@ -122,6 +122,7 @@ struct tdmr_info { /* * TDX module SEAMCALL leaf functions */ +#define TDH_SYS_KEY_CONFIG 31 #define TDH_SYS_INFO 32 #define TDH_SYS_INIT 33 #define TDH_SYS_LP_INIT 35 From patchwork Mon Feb 28 02:13:06 2022
From: Kai Huang Subject: [RFC PATCH 18/21] x86/virt/tdx: Initialize all TDMRs Date: Mon, 28 Feb 2022 15:13:06 +1300 Initialize TDMRs via TDH.SYS.TDMR.INIT as the last step to complete the TDX initialization. All TDMRs need to be initialized using the TDH.SYS.TDMR.INIT SEAMCALL before the TDX memory can be used to run any TD guest. The SEAMCALL internally uses the global KeyID to initialize PAMTs in order to crypto protect them from the malicious host kernel. TDH.SYS.TDMR.INIT can be done on any cpu. The time of initializing a TDMR is proportional to the size of the TDMR. To avoid long latency in one SEAMCALL, TDH.SYS.TDMR.INIT only initializes an (implementation-specific) subset of PAMT entries of one TDMR in one invocation.
The caller is responsible for calling TDH.SYS.TDMR.INIT iteratively until all PAMT entries of the requested TDMR are initialized. The current implementation initializes TDMRs one by one. It takes ~100ms on a 2-socket machine with 2.2GHz CPUs and 64GB memory when the system is idle. Each TDH.SYS.TDMR.INIT takes ~7us on average. TDX does allow different TDMRs to be initialized concurrently on multiple CPUs. This parallel scheme could be introduced later when the total initialization time becomes a real concern, e.g. on a platform with a much bigger memory size. Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 75 ++++++++++++++++++++++++++++++++++++++--- arch/x86/virt/vmx/tdx.h | 1 + 2 files changed, 71 insertions(+), 5 deletions(-) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index 22cbc43873c9..2760c10a430a 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -1293,6 +1293,65 @@ static int config_global_keyid(u64 global_keyid) return seamcall_on_each_package_serialized(&sc); } +/* Initialize one TDMR */ +static int init_tdmr(struct tdmr_info *tdmr) +{ + u64 next; + + /* + * Initializing PAMT entries might be time-consuming (in + * proportion to the size of the requested TDMR). To avoid long + * latency in one SEAMCALL, TDH.SYS.TDMR.INIT only initializes + * an (implementation-defined) subset of PAMT entries in one + * invocation. + * + * Call TDH.SYS.TDMR.INIT iteratively until all PAMT entries + * of the requested TDMR are initialized (if next-to-initialize + * address matches the end address of the TDMR). + */ + do { + struct tdx_module_output out; + int ret; + + ret = seamcall(TDH_SYS_TDMR_INIT, tdmr->base, 0, 0, 0, + NULL, &out); + if (ret) + return ret; + /* + * RDX contains the 'next-to-initialize' address if + * TDH.SYS.TDMR.INIT succeeded.
+ */ + next = out.rdx; + if (need_resched()) + cond_resched(); + } while (next < tdmr->base + tdmr->size); + + return 0; +} + +/* Initialize all TDMRs */ +static int init_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num) +{ + int i; + + /* + * Initialize TDMRs one-by-one for simplicity, though the TDX + * architecture does allow different TDMRs to be initialized in + * parallel on multiple CPUs. Parallel initialization could + * be added later when the time spent in the serialized scheme + * becomes a real concern. + */ + for (i = 0; i < tdmr_num; i++) { + int ret; + + ret = init_tdmr(tdmr_array[i]); + if (ret) + return ret; + } + + return 0; +} + static int init_tdx_module(void) { struct tdmr_info **tdmr_array; @@ -1374,11 +1433,12 @@ static int init_tdx_module(void) if (ret) goto out_free_pamts; - /* - * Return -EFAULT until all steps of TDX module - * initialization are done. - */ - ret = -EFAULT; + /* Initialize TDMRs to complete the TDX module initialization */ + ret = init_tdmrs(tdmr_array, tdmr_num); + if (ret) + goto out_free_pamts; + + tdx_module_status = TDX_MODULE_INITIALIZED; out_free_pamts: /* * Free PAMTs allocated in construct_tdmrs() when TDX module @@ -1401,6 +1461,11 @@ static int init_tdx_module(void) free_tdmrs(tdmr_array, tdmr_num); kfree(tdmr_array); out: + if (ret) + pr_info("Failed to initialize TDX module.\n"); + else + pr_info("TDX module initialized.\n"); + return ret; } diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h index bba8cabea4bb..212f83374c0a 100644 --- a/arch/x86/virt/vmx/tdx.h +++ b/arch/x86/virt/vmx/tdx.h @@ -126,6 +126,7 @@ struct tdmr_info { #define TDH_SYS_INFO 32 #define TDH_SYS_INIT 33 #define TDH_SYS_LP_INIT 35 +#define TDH_SYS_TDMR_INIT 36 #define TDH_SYS_LP_SHUTDOWN 44 #define TDH_SYS_CONFIG 45 From patchwork Mon Feb 28 02:13:07 2022
From: Kai Huang Subject: [RFC PATCH 19/21] x86: Flush cache of TDX private memory during kexec() Date: Mon, 28 Feb 2022 15:13:07 +1300 Message-Id: <64bb89cf1108e85057f4b426406fbb5ec5172273.1646007267.git.kai.huang@intel.com> If TDX is ever enabled and/or used to run any TD guests, the cachelines of TDX private memory, including PAMTs, used by the TDX module need to be flushed before transitioning to the new kernel, otherwise they may silently corrupt the new kernel. The TDX module can only be initialized once during its lifetime. TDX does not have an interface to reset the TDX module to an uninitialized state so it could be initialized again. If the old kernel has enabled TDX, the new kernel won't be able to use TDX again. Therefore, ideally the old kernel should shut down the TDX module if it is ever initialized so that no SEAMCALLs can be made to it again. However, SEAMCALL requires the cpu to be in VMX operation (VMXON has been done). Currently, only KVM handles VMXON and when KVM is unloaded, all cpus leave VMX operation. Theoretically, during kexec() there's no guarantee all cpus are in VMX operation. Adding VMXON handling to the core kernel isn't trivial so this implementation depends on the caller of TDX to guarantee that. This means it's not easy to shut down the TDX module during kexec().
Therefore, this implementation doesn't shut down the TDX module, but only flushes the cache and leaves the TDX module open. And it's fine to leave the module open. If the new kernel wants to use TDX, it needs to go through the initialization process, which will fail at the first SEAMCALL because the TDX module is no longer in an uninitialized state. If the new kernel doesn't want to use TDX, then the TDX module won't run at all. Following the implementation of SME support, use wbinvd() to flush the cache in stop_this_cpu(). Introduce a new function platform_has_tdx() to check only whether the platform is TDX-capable, and do wbinvd() when it is true. platform_has_tdx() returns true when SEAMRR is enabled and there are enough TDX private KeyIDs to run at least one TD guest (both of which are detected at boot time). TDX is enabled on demand at runtime, and enabling TDX has a state machine with a mutex to protect against multiple callers initializing TDX in parallel. Getting the TDX module state needs to hold the mutex, but stop_this_cpu() runs in interrupt context, so just check whether the platform supports TDX and flush the cache.
Signed-off-by: Kai Huang --- arch/x86/include/asm/tdx.h | 2 ++ arch/x86/kernel/process.c | 26 +++++++++++++++++++++++++- arch/x86/virt/vmx/tdx.c | 14 ++++++++++++++ 3 files changed, 41 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h index b526d41c4bbf..24f2b7e8b280 100644 --- a/arch/x86/include/asm/tdx.h +++ b/arch/x86/include/asm/tdx.h @@ -85,10 +85,12 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1, void tdx_detect_cpu(struct cpuinfo_x86 *c); int tdx_detect(void); int tdx_init(void); +bool platform_has_tdx(void); #else static inline void tdx_detect_cpu(struct cpuinfo_x86 *c) { } static inline int tdx_detect(void) { return -ENODEV; } static inline int tdx_init(void) { return -ENODEV; } +static inline bool platform_has_tdx(void) { return false; } #endif /* CONFIG_INTEL_TDX_HOST */ #endif /* !__ASSEMBLY__ */ diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 71aa12082370..70eea43d1f32 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -766,8 +766,32 @@ void stop_this_cpu(void *dummy) * without the encryption bit, they don't race each other when flushed * and potentially end up with the wrong entry being committed to * memory. + * + * In case of kexec, similar to SME, if TDX is ever enabled, the + * cachelines of TDX private memory (including PAMTs) used by the TDX + * module need to be flushed before transitioning to the new kernel, + * otherwise they may silently corrupt the new kernel. + * + * Note TDX is enabled on demand at runtime, and enabling TDX has a + * state machine protected with a mutex to prevent concurrent calls + * from multiple callers. Holding the mutex is required to get the + * TDX enabling status, but this function runs in interrupt context. + * So to keep it simple, always flush the cache when the platform + * supports TDX (detected at boot time), regardless of whether TDX is + * truly enabled by the kernel.
+ * + * The TDX module can only be initialized once during its lifetime. So + * if TDX is enabled in the old kernel, the new kernel won't be able to + * use TDX again, because when the new kernel goes through the TDX module + * initialization process, it will fail immediately at the first + * SEAMCALL. Ideally, it's better to shut down the TDX module, but this + * requires SEAMCALL, which requires the CPU to already be in VMX + * operation. It's not trivial to do VMXON here so to keep it simple + * just leave the module open. And leaving the TDX module open is OK. + * The new kernel cannot use TDX anyway. The TDX module won't run + * at all in the new kernel. */ - if (boot_cpu_has(X86_FEATURE_SME)) + if (boot_cpu_has(X86_FEATURE_SME) || platform_has_tdx()) native_wbinvd(); for (;;) { /* diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index 2760c10a430a..f704fddc9dfc 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -1602,3 +1602,17 @@ int tdx_init(void) return ret; } EXPORT_SYMBOL_GPL(tdx_init); + +/** + * platform_has_tdx - Whether the platform supports TDX + * + * Check whether the platform supports TDX (i.e. TDX is enabled in BIOS), + * regardless of whether TDX is truly enabled by the kernel. + * + * Return true if SEAMRR is enabled, and there are sufficient TDX private + * KeyIDs to run TD guests.
+ */ +bool platform_has_tdx(void) +{ + return seamrr_enabled() && tdx_keyid_sufficient(); +} From patchwork Mon Feb 28 02:13:08 2022
From: Kai Huang Subject: [RFC PATCH 20/21] x86/virt/tdx: Add kernel command line to opt-in TDX host support Date: Mon, 28 Feb 2022 15:13:08 +1300 Message-Id: <25473dbb7c2f70bdef8a7361f5131b5266e4be95.1646007267.git.kai.huang@intel.com> Enabling TDX consumes additional memory (used by TDX as metadata) and additional initialization time. Introduce a kernel command line option to allow the admin to opt in to TDX host kernel support when TDX is truly wanted. Signed-off-by: Kai Huang --- Documentation/admin-guide/kernel-parameters.txt | 6 ++++++ arch/x86/virt/vmx/tdx.c | 14 ++++++++++++++ 2 files changed, 20 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index f5a27f067db9..9f85cafd0c2d 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -5707,6 +5707,12 @@ tdfx= [HW,DRM] + tdx_host= [X86-64, TDX] + Format: {on|off} + on: Enable TDX host kernel support + off: Disable TDX host kernel support + Default is off.
+ test_suspend= [SUSPEND][,N] Specify "mem" (for Suspend-to-RAM) or "standby" (for standby suspend) or "freeze" (for suspend type freeze) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index f704fddc9dfc..60d58b2daabd 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -115,6 +115,16 @@ static struct tdsysinfo_struct tdx_sysinfo; /* TDX global KeyID to protect TDX metadata */ static u32 tdx_global_keyid; +static bool enable_tdx_host; + +static int __init tdx_host_setup(char *s) +{ + if (!strcmp(s, "on")) + enable_tdx_host = true; + return 0; +} +__setup("tdx_host=", tdx_host_setup); + static bool __seamrr_enabled(void) { return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; @@ -501,6 +511,10 @@ static int detect_p_seamldr(void) static int __tdx_detect(void) { + /* Disabled by kernel command line */ + if (!enable_tdx_host) + goto no_tdx_module; + /* * TDX module cannot be possibly loaded if SEAMRR is disabled. * Also do not report TDX module as loaded if there's no enough From patchwork Mon Feb 28 02:13:09 2022
From: Kai Huang Subject: [RFC PATCH 21/21] Documentation/x86: Add documentation for TDX host support Date: Mon, 28 Feb 2022 15:13:09 +1300
Add documentation for TDX host support. Signed-off-by: Kai Huang --- Documentation/x86/index.rst | 1 + Documentation/x86/tdx_host.rst | 300 +++++++++++++++++++++++++++++++++ 2 files changed, 301 insertions(+) create mode 100644 Documentation/x86/tdx_host.rst diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst index 382e53ca850a..145fc251fbfc 100644 --- a/Documentation/x86/index.rst +++ b/Documentation/x86/index.rst @@ -25,6 +25,7 @@ x86-specific Documentation intel_txt amd-memory-encryption tdx + tdx_host pti mds microcode diff --git a/Documentation/x86/tdx_host.rst b/Documentation/x86/tdx_host.rst new file mode 100644 index 000000000000..a843ede9d45c --- /dev/null +++ b/Documentation/x86/tdx_host.rst @@ -0,0 +1,300 @@ +.. SPDX-License-Identifier: GPL-2.0 + +========================================================= +Intel Trusted Domain Extensions (TDX) host kernel support +========================================================= + +Intel Trusted Domain Extensions (TDX) protects guest VMs from malicious +host and certain physical attacks. To support TDX, a new CPU mode called +Secure Arbitration Mode (SEAM) is added to Intel processors. + +SEAM is an extension to the VMX architecture to define a new VMX root +operation called 'SEAM VMX root' and a new VMX non-root operation called +'SEAM VMX non-root'. Collectively, the SEAM VMX root and SEAM VMX non-root +execution modes are called operation in SEAM. + +SEAM VMX root operation is designed to host a CPU-attested software +module called the 'Intel TDX module' to manage virtual machine (VM) guests +called Trust Domains (TD). The TDX module implements the functions to +build, tear down, and start execution of TD VMs. SEAM VMX root is also +designed to additionally host a CPU-attested software module called the +'Intel Persistent SEAMLDR (Intel P-SEAMLDR)' module to load and update +the Intel TDX module.
+ +The software in SEAM VMX root runs in the memory region defined by the +SEAM range register (SEAMRR). Access to this range is restricted to SEAM +VMX root operation. Code fetches outside of SEAMRR when in SEAM VMX root +operation are meant to be disallowed and lead to an unbreakable shutdown. + +TDX leverages Intel Multi-Key Total Memory Encryption (MKTME) to crypto +protect TD guests. TDX reserves part of MKTME KeyID space as TDX private +KeyIDs, which can only be used by software running in SEAM. The physical +address bits reserved for encoding TDX private KeyID are treated as +reserved bits when not in SEAM operation. The partitioning of MKTME +KeyIDs and TDX private KeyIDs is configured by BIOS. + +The host kernel transitions to either the P-SEAMLDR or the TDX module via +the new SEAMCALL instruction. SEAMCALLs are host-side interface functions +defined by the P-SEAMLDR and the TDX module around the new SEAMCALL instruction. +They are similar to a hypercall, except they are made by the host kernel to +the SEAM software modules. + +Before being able to manage TD guests, the TDX module must be loaded +into SEAMRR and properly initialized using SEAMCALLs defined by the TDX +architecture. The current implementation assumes both the P-SEAMLDR and +the TDX module are loaded by BIOS before the kernel boots. + +Detection and Initialization +---------------------------- + +The presence of SEAMRR is reported via a new SEAMRR bit (15) of the +IA32_MTRRCAP MSR. The SEAMRR range registers consist of a pair of MSRs: +IA32_SEAMRR_PHYS_BASE (0x1400) and IA32_SEAMRR_PHYS_MASK (0x1401). +SEAMRR is enabled when bit 3 of IA32_SEAMRR_PHYS_BASE is set and +bits 10/11 of IA32_SEAMRR_PHYS_MASK are set. + +However, there is no CPUID or MSR for querying the presence of the TDX +module or the P-SEAMLDR. SEAMCALL fails with VMfailInvalid when SEAM +software is not loaded, so SEAMCALL can be used to detect the P-SEAMLDR and +the TDX module. The SEAMLDR.INFO SEAMCALL is used to detect both the +P-SEAMLDR and the TDX module.
Success of the SEAMCALL means the P-SEAMLDR is loaded, and the +P-SEAMLDR information returned by the SEAMCALL further tells whether the +TDX module is loaded or not. + +The user can check whether the TDX module is initialized via dmesg: + + [..] tdx: P-SEAMLDR: version 0x0, vendor_id: 0x8086, build_date: 20211209, build_num 160, major 1, minor 0 + [..] tdx: TDX module detected. + [..] tdx: TDX module: vendor_id 0x8086, major_version 1, minor_version 0, build_date 20211209, build_num 160 + [..] tdx: TDX module initialized. + +Initializing TDX takes time (in seconds) and additional memory space (for +metadata). Both are affected by the size of the total usable memory which +the TDX module is configured with. In particular, the TDX metadata +consumes ~1/256 of TDX usable memory. This leads to a non-negligible +burden as the current implementation simply treats all E820 RAM ranges as +TDX usable memory (all system RAM meets the security requirements on the +first generation of TDX-capable platforms). + +Therefore, the kernel uses lazy TDX initialization to avoid such a burden +for all users on a TDX-capable platform. The software component (e.g. +KVM) which wants to use TDX is expected to call the two helpers below to +detect and initialize the TDX module when TDX is truly needed: + + if (tdx_detect()) + goto no_tdx; + if (tdx_init()) + goto no_tdx; + +TDX detection and initialization are done via SEAMCALLs, which require +the CPU to be in VMX operation. The caller of the above two helpers +should ensure that condition. + +Currently, KVM is the only user of TDX, and KVM already handles +entering/leaving VMX operation. Letting KVM initialize TDX on demand +avoids handling entering/leaving VMX operation, which isn't trivial, in +the core kernel. + +In addition, a new kernel parameter 'tdx_host={on/off}' can be used by +the admin to force-disable the TDX capability.
+ +TDX Memory Management +--------------------- + +The TDX architecture manages TDX memory via the data structures below: + +1) Convertible Memory Regions (CMRs) + +TDX provides increased levels of memory confidentiality and integrity. +This requires special hardware support for features like memory +encryption and storage of memory integrity checksums. A CMR represents a +memory range that meets those requirements and can be used as TDX memory. +The list of CMRs can be queried from the TDX module. + +2) TD Memory Regions (TDMRs) + +The TDX module manages TDX usable memory via TD Memory Regions (TDMRs). +Each TDMR records its base and size, the base and size of its metadata +(PAMT), and an array of reserved areas to hold the memory region address +holes and PAMTs. A TDMR must be 1G-aligned and its size must be in 1G +granularity. + +The host kernel is responsible for choosing which convertible memory +regions (residing in CMRs) to use as TDX memory, constructing a list of +TDMRs to cover all those memory regions, and configuring the TDMRs to +the TDX module. + +3) Physical Address Metadata Tables (PAMTs) + +This metadata essentially serves as the 'struct page' for the TDX module, +recording things like which TD guest 'owns' a given page of memory. Each +TDMR has a dedicated PAMT. + +A PAMT is not reserved by the hardware upfront and must be allocated by +the kernel and given to the TDX module. The PAMT for a given TDMR +doesn't have to be within that TDMR, but a PAMT must be within one CMR. +Additionally, if a PAMT overlaps with a TDMR, the overlapping part must +be marked as reserved in that particular TDMR. + +Kernel Policy of TDX Memory +--------------------------- + +The first generation of TDX essentially guarantees that all system RAM +memory regions (excluding the memory below 1MB) are covered by CMRs. +Currently, to avoid having to modify the page allocator to support both +TDX and non-TDX allocation, the kernel chooses to use all system RAM as +TDX memory.
A list of TDMRs is constructed based on all RAM entries in the +e820 table and configured to the TDX module. + +Limitations +----------- + +1. Constructing TDMRs + +Currently, the kernel tries to create one TDMR for each RAM entry in +e820. 'e820_table' is used to find all RAM entries to honor the 'mem' +and 'memmap' kernel command line parameters. However, the 'memmap' +parameter may also result in many discrete RAM entries. TDX +architecturally only supports a limited number of TDMRs (currently 64). +In this case, constructing TDMRs may fail due to exceeding the maximum +number of TDMRs. The user is responsible for not creating too many +discrete RAM entries; otherwise TDX may not be available. This can be +further enhanced by supporting merging of adjacent TDMRs. + +2. PAMT allocation + +Currently, the kernel allocates a PAMT for each TDMR separately using +alloc_contig_pages(). alloc_contig_pages() only guarantees the PAMT is +allocated from a given NUMA node, but doesn't have control over +allocating the PAMT from a given TDMR range. This may result in all +PAMTs on one NUMA node being within one single TDMR. PAMTs overlapping +with a given TDMR must be put into that TDMR's reserved areas too. +However, TDX only supports a limited number of reserved areas per TDMR +(currently 16), thus too many PAMTs in one NUMA node may result in TDMR +construction failure due to exceeding the TDMR's maximum number of +reserved areas. + +The user is responsible for not creating too many discrete RAM entries +on one NUMA node, which may result in having too many TDMRs on one node, +which eventually results in TDMR construction failure due to exceeding +the maximum number of reserved areas. This can be further enhanced by +supporting per-NUMA-node PAMT allocation, which could reduce the number +of PAMTs to one per node. + +3. TDMR initialization + +Currently, the kernel initializes TDMRs one by one. This may take a +couple of seconds to finish on large memory systems (TBs of RAM).
This can be further +enhanced by initializing different TDMRs in parallel on multiple CPUs. + +4. CPU hotplug + +The first generation of TDX architecturally doesn't support ACPI CPU +hotplug. All logical CPUs are enabled by the BIOS in the MADT table. +Also, the first generation of TDX-capable platforms don't support ACPI +CPU hotplug either. Since this physically cannot happen, the kernel +currently doesn't have any check in the ACPI CPU hotplug code path to +disable it. + +Also, only TDX module initialization requires that all BIOS-enabled CPUs +be online. After the initialization, any logical CPU can be brought down +and brought back online again later. Therefore this series doesn't +change logical CPU hotplug either. + +This can be enhanced when any future generation of TDX starts to support +ACPI CPU hotplug. + +5. Memory hotplug + +The first generation of TDX architecturally doesn't support memory +hotplug. The CMRs are generated by the BIOS during boot and are fixed +during the machine's runtime. + +Also, the first generation of TDX-capable platforms don't support ACPI +memory hotplug. Since this physically cannot happen, the kernel +currently doesn't have any check in the ACPI memory hotplug code path to +disable it. + +A special case of memory hotplug is adding NVDIMM as system RAM using +the kmem driver. However, the first generation of TDX-capable platforms +cannot turn on TDX and NVDIMM simultaneously, so in practice this cannot +happen either. + +Another case is that the admin can use the 'memmap' kernel command line +to create legacy PMEMs and use them as TD guest memory, or, +theoretically, use the kmem driver to add them as system RAM. The +current implementation always includes legacy PMEMs when constructing +TDMRs, so they are also TDX memory. So legacy PMEMs can either be used +as TD guest memory directly or be converted to system RAM via the kmem +driver.
+ +This can be enhanced when a future generation of TDX starts to support +ACPI memory hotplug, or when NVDIMM and TDX can be enabled +simultaneously on the same platform. + +6. Online CPUs + +TDX initialization includes a step where a certain SEAMCALL must be +called on every BIOS-enabled CPU (i.e. one with an ACPI MADT entry +marked as enabled). Otherwise, the initialization process aborts at a +later step. + +The user should avoid using boot parameters (such as maxcpus, nr_cpus, +possible_cpus) or offlining CPUs before initializing TDX. Doing so will +lead to a mismatch between online CPUs and BIOS-enabled CPUs, resulting +in TDX module initialization failure. + +It is OK to offline CPUs after TDX initialization is completed. + +7. Kexec + +The TDX module can be initialized only once during its lifetime. The +first generation of TDX doesn't have an interface to reset the TDX +module to an uninitialized state so that it can be initialized again. + +This implies: + + - If the old kernel fails to initialize TDX, the new kernel cannot + use TDX either, unless the new kernel fixes the bug which led to the + initialization failure in the old kernel and can resume from where + the old kernel stopped. This requires certain coordination between + the two kernels. + + - If the old kernel has initialized TDX successfully, the new kernel + may be able to use TDX if the two kernels have exactly the same + configuration of the TDX module. It further requires the new kernel + to reserve the TDX metadata pages (allocated by the old kernel) in + its page allocator. It also requires coordination between the two + kernels. Furthermore, if kexec() is done when there are active TD + guests running, the new kernel cannot use TDX because it's extremely + hard for the old kernel to pass all TDX private pages to the new + kernel. + +Given that, the current implementation doesn't support TDX after kexec() +(unless the old kernel hasn't initialized TDX at all).
+ +The current implementation doesn't shut down the TDX module but leaves +it open across kexec(). This is because shutting down the TDX module +requires the CPU to be in VMX operation, but there's no guarantee of +this during kexec(). Leaving the TDX module open is not the best case, +but it is OK since the new kernel won't be able to use TDX anyway +(therefore the TDX module won't run at all). + +This can be further enhanced when the core kernel (non-KVM) can handle +VMXON. + +If TDX is ever enabled and/or used to run any TD guests, the cachelines +of TDX private memory, including PAMTs, used by the TDX module need to +be flushed before transiting to the new kernel; otherwise they may +silently corrupt the new kernel. Similar to SME, the current +implementation flushes the cache in stop_this_cpu(). + +8. Initialization error + +Currently, any error that happens during TDX initialization moves the +TDX module to the SHUTDOWN state. No SEAMCALL is allowed in this state, +and the TDX module cannot be re-initialized without a hard reset. + +This can be further enhanced to treat some errors as recoverable errors +and let the caller retry later. A more detailed state machine can be +added to record the internal state of the TDX module, and the +initialization can resume from that state in the next try. + +Specifically, there are three cases that can be treated as recoverable +errors: 1) -ENOMEM (i.e. due to PAMT allocation failure); 2) +TDH.SYS.CONFIG error due to TDH.SYS.LP.INIT not having been called on +all CPUs (i.e. due to offline CPUs); 3) -EPERM when the caller doesn't +guarantee all CPUs are in VMX operation.