[v6,00/21] TDX host kernel support

Message ID: cover.1666824663.git.kai.huang@intel.com

Huang, Kai Oct. 26, 2022, 11:15 p.m. UTC
Intel Trust Domain Extensions (TDX) protects guest VMs from a malicious
host and certain physical attacks.  The TDX specs are available at [1].

This series provides the initial support to enable TDX in the host kernel
with minimal code.  More patch series will follow up as next steps.
Specifically, below is our plan for the TDX host kernel support:

 1) This initial version to enable TDX with minimal code, allowing KVM
    to use TDX to create and run TDX guests.  It doesn't support all
    functionalities (e.g. exposing the TDX module via /sysfs), and doesn't
    aim to resolve everything perfectly (e.g. some optimizations are
    not done).  In particular, memory hotplug is not handled (please see
    the "Design Considerations" section below).
 2) Additional patch series to handle memory hotplug or a per-node TDX
    capability flag.
 3) More patch series to add additional functionality (/sysfs, etc) and
    optimizations (e.g. initializing the TDMRs).

(For memory hotplug, sorry for the wide broadcast, but I cc'ed
linux-mm@kvack.org following Kirill's suggestion so MM experts can also
help to provide comments.)

KVM support for TDX is being developed separately[2].  A new "userspace
inaccessible memfd" approach to support TDX private memory is also being
developed[3].  KVM will only support the new "userspace inaccessible
memfd" as the TDX guest memory backend.

Any help reviewing this series would be highly appreciated.

Hi Dave, Dan (and Intel reviewers),
   
Please kindly help to review, and I would appreciate Reviewed-by or
Acked-by tags if the patches look good to you.

This series has been reviewed by Isaku, who is developing the KVM TDX
patches.  The first 4 patches have been reviewed by Kirill as well.

----- Changelog history: ------

- v5 -> v6:

  - Removed ACPI CPU/memory hotplug patches. (Intel internal discussion)
  - Removed patch to disable driver-managed memory hotplug (Intel
    internal discussion).
  - Added one patch to introduce enum type for TDX supported page size
    level to replace the hard-coded values in TDX guest code (Dave).
  - Added one patch to make TDX depend on X2APIC being enabled (Dave).
  - Added one patch to build all boot-time present memory regions as TDX
    memory during kernel boot.
  - Added Reviewed-by from others to some patches.
  - For all others please see individual patch changelog history.

- v4 -> v5:

  This is essentially a resend of v4.  Sorry I forgot to consult
  get_maintainer.pl when sending out v4, so I missed adding the linux-acpi
  and linux-mm mailing lists and the relevant people for 4 new patches.

  There are also very minor code and commit message updates from v4:

  - Rebased to latest tip/x86/tdx.
  - Fixed a checkpatch issue that I missed in v4.
  - Removed an obsoleted comment that I missed in patch 6.
  - Very minor update to the commit message of patch 12.

  For other changes to individual patches since v3, please refer to the
  changelog history of individual patches (I just used v3 -> v5 since
  there's basically no code change in v4).

- v3 -> v4 (addressed Dave's comments, and other comments from others):

 - Simplified SEAMRR and TDX keyID detection.
 - Added patches to handle ACPI CPU hotplug.
 - Added patches to handle ACPI memory hotplug and driver managed memory
   hotplug.
 - Removed tdx_detect() and only use a single tdx_init().
 - Removed detecting TDX module via P-SEAMLDR.
 - Changed from using e820 to using memblock to convert system RAM to TDX
   memory.
 - Excluded legacy PMEM from TDX memory.
 - Removed the patch that added a boot-time command line to disable TDX.
 - Addressed comments for other individual patches (please see individual
   patches).
 - Improved the documentation patch based on the new implementation.

- V2 -> v3:

 - Addressed comments from Isaku.
  - Fixed a memory leak and an unnecessary function argument in the patch
    to configure the key for the global KeyID (patch 17).
  - Slightly enhanced the patch to get TDX module and CMR information
    (patch 09).
  - Fixed an unintended change in the patch to allocate PAMT (patch 13).
 - Addressed comments from Kevin:
  - Slight improvement to the commit message of patch 03.
 - Removed WARN_ON_ONCE() in the check of cpus_booted_once_mask in
   seamrr_enabled() (patch 04).
 - Changed documentation patch to add TDX host kernel support materials
   to Documentation/x86/tdx.rst together with the TDX guest stuff, instead
   of a standalone file (patch 21).
 - Very minor improvement in commit messages.

- RFC (v1) -> v2:
  - Rebased to Kirill's latest TDX guest code.
  - Fixed two issues that are related to finding all RAM memory regions
    based on e820.
  - Minor improvement on comments and commit messages.

v5:
https://lore.kernel.org/lkml/cover.1655894131.git.kai.huang@intel.com/T/

v3:
https://lore.kernel.org/lkml/68484e168226037c3a25b6fb983b052b26ab3ec1.camel@intel.com/T/

v2:
https://lore.kernel.org/lkml/cover.1647167475.git.kai.huang@intel.com/T/

RFC (v1):
https://lore.kernel.org/all/e0ff030a49b252d91c789a89c303bb4206f85e3d.1646007267.git.kai.huang@intel.com/T/

== Background ==

TDX introduces a new CPU mode called Secure Arbitration Mode (SEAM)
and a new isolated range pointed to by the SEAM Range Register (SEAMRR).
A CPU-attested software module called 'the TDX module' runs in the new
isolated range as a trusted hypervisor to create/run protected VMs.

TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
provide crypto-protection to the VMs.  TDX reserves part of the MKTME
KeyIDs as TDX private KeyIDs, which are only accessible within SEAM mode.
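
As an illustration, detection might look like the minimal sketch below.
It assumes the IA32_MKTME_KEYID_PARTITIONING MSR layout (number of MKTME
KeyIDs in bits 31:0, number of TDX private KeyIDs in bits 63:32); treat
the exact field split and the helper as assumptions, not the final
implementation:

/*
 * Hedged sketch: detect TDX private KeyIDs at boot.  The MSR field
 * layout assumed here is an illustration only.
 */
#define MSR_IA32_MKTME_KEYID_PARTITIONING       0x00000087

static u32 tdx_keyid_start;
static u32 nr_tdx_keyids;

static int __init detect_tdx_keyids(void)
{
        u64 keyid_part;

        if (rdmsrl_safe(MSR_IA32_MKTME_KEYID_PARTITIONING, &keyid_part))
                return -ENODEV;

        nr_tdx_keyids = keyid_part >> 32;
        if (!nr_tdx_keyids)
                return -ENODEV;

        /* TDX private KeyIDs start after the last MKTME KeyID. */
        tdx_keyid_start = (u32)keyid_part + 1;
        return 0;
}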

TDX is different from AMD SEV/SEV-ES/SEV-SNP, which uses a dedicated
secure processor to provide crypto-protection.  The firmware running on
that secure processor plays a role similar to the TDX module's.

The host kernel communicates with SEAM software via a new SEAMCALL
instruction.  This is conceptually similar to a guest->host hypercall,
except it is made from the host to SEAM software instead.
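
For illustration, the low-level interface might look like the sketch
below.  The wrapper is implemented in assembly in this series
(seamcall.S); the exact names and output layout here should be treated
as illustrative:

/*
 * Hedged sketch of a SEAMCALL wrapper.  A SEAMCALL passes a leaf
 * function number plus arguments in registers and gets back a
 * completion status plus output registers.
 */
struct tdx_module_output {
        u64 rcx;
        u64 rdx;
        u64 r8;
        u64 r9;
        u64 r10;
        u64 r11;
};

/* Implemented in assembly (seamcall.S in this series). */
u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
               struct tdx_module_output *out);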

Before being able to manage TD guests, the TDX module must be loaded
and properly initialized.  This series assumes the TDX module is loaded
by BIOS before the kernel boots.

How to initialize the TDX module is described in the TDX module 1.0
specification, chapter 13, "Intel TDX Module Lifecycle: Enumeration,
Initialization and Shutdown".
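
At a high level, the initialization flow this series implements can be
outlined as below (SEAMCALL leaf names follow the TDX module 1.0 spec;
this is an outline of the steps, not the actual code):

/*
 * Hedged outline of TDX module initialization, mirroring the
 * patches in this series.
 */
static int init_tdx_module(void)
{
        /* 1) TDH.SYS.INIT: one-time global initialization */
        /* 2) TDH.SYS.LP.INIT: per logical-cpu initialization */
        /* 3) TDH.SYS.INFO: get module info and convertible memory (CMRs) */
        /* 4) Construct TDMRs to cover all TDX-usable memory regions */
        /* 5) TDH.SYS.CONFIG: configure the TDMRs and the global KeyID */
        /* 6) TDH.SYS.KEY.CONFIG: configure the global KeyID on each package */
        /* 7) TDH.SYS.TDMR.INIT: initialize all TDMRs */
        return 0;
}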

== Design Considerations ==

1. Initialize the TDX module at runtime

There are basically two ways the TDX module could be initialized: either
in early boot, or at runtime before the first TDX guest is run.  This
series implements the runtime initialization.

This series adds a function tdx_enable() to allow the caller to initialize
TDX at runtime:

        if (tdx_enable())
                goto no_tdx;
        /* TDX is ready to create TD guests. */

This approach has the following pros:

1) Initializing the TDX module requires reserving ~1/256th of system RAM
as metadata (e.g. ~4GB on a host with 1TB RAM).  Enabling TDX on demand
means this memory is only consumed when TDX is truly needed (i.e. when
KVM wants to create TD guests).

2) SEAMCALL requires the CPU to already be in VMX operation (VMXON has
been done).  So far, KVM is the only user of TDX, and it already handles
VMXON.  Letting KVM initialize TDX avoids handling VMXON in the core
kernel (see the sketch after this list).

3) It is more flexible for supporting "TDX module runtime update" (not in
this series).  After updating to a new module at runtime, the kernel
needs to go through the initialization process again.
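
A hedged sketch of the expected caller-side usage is below.  The
hardware_enable_all()/hardware_disable_all() helpers are stand-ins for
KVM's existing VMXON handling, and kvm_init_tdx() is hypothetical:

/*
 * Hedged sketch: KVM initializes TDX on demand, after putting all
 * online CPUs into VMX operation.
 */
static int kvm_init_tdx(void)
{
        int r;

        /* Stand-in for KVM's existing VMXON handling. */
        r = hardware_enable_all();
        if (r)
                return r;

        /* Initialize the TDX module on demand. */
        r = tdx_enable();
        if (r)
                hardware_disable_all();

        return r;
}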

2. CPU hotplug

TDX doesn't support physical (ACPI) CPU hotplug.  A non-buggy BIOS should
never support hotpluggable CPU devices and/or deliver ACPI CPU hotplug
events to the kernel.  This series doesn't handle physical (ACPI) CPU
hotplug at all but depends on the BIOS to behave correctly.

Note TDX works with CPU logical online/offline, thus this series still
allows logical CPU online/offline.

3. Kernel policy on TDX memory

The TDX architecture allows the VMM to designate specific memory as
usable for TDX private memory.  This series chooses to designate _all_
boot-time system RAM as TDX memory to avoid having to modify the page
allocator to distinguish TDX-capable and non-TDX-capable memory.
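
A minimal sketch of recording boot-time RAM as TDX memory is below;
tdx_add_memblock() is a hypothetical helper standing in for whatever
bookkeeping the real patch does over memblock:

/*
 * Hedged sketch: register all boot-time present RAM as TDX memory
 * by walking memblock during kernel boot.
 */
static int __init build_tdx_memory(void)
{
        unsigned long start_pfn, end_pfn;
        int i, nid, ret;

        for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
                /* tdx_add_memblock() is hypothetical bookkeeping. */
                ret = tdx_add_memblock(start_pfn, end_pfn, nid);
                if (ret)
                        return ret;
        }
        return 0;
}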

4. Memory Hotplug

The TDX module reports a list of "Convertible Memory Regions" (CMRs) to
indicate which memory regions are TDX-capable.  TDX convertible memory
must be physically present during machine boot.  TDX also assumes
convertible memory won't be hot-removed.  A non-buggy BIOS should never
support physical hot-removal of any TDX convertible memory.  This series
doesn't handle physical hot-removal of convertible memory but depends on
the BIOS to behave correctly.

A machine can have both TDX and non-TDX memory.  Specifically, runtime
hot-added physical memory is not TDX convertible memory.  Also, for now
NVDIMM and CXL memory are not TDX convertible memory, no matter whether
they are physically present during boot or not.

Plugging non-TDX memory into the page allocator could result in failing
to create a TDX guest, or in killing a running TDX guest.

To keep things simple, this series doesn't handle memory hotplug at all,
but depends on the machine owner to not do any memory hotplug operations.
For example, the machine owner should not plug any NVDIMM or CXL memory
into the machine, or use the kmem driver to plug NVDIMM or CXL memory
into the core-mm.

This will be enhanced in the future after the first submission.  We are
also looking into options for how to handle it:

- One option is to have the kernel always guarantee that all pages in
the page allocator are TDX memory (i.e. by rejecting non-TDX memory in
memory hotplug).
- Another option is to manage TDX and non-TDX memory in different NUMA
nodes, and use a per-node TDX memory capability flag to show which nodes
are TDX-capable.  Userspace needs to explicitly bind TDX guests to those
TDX-capable NUMA nodes.

The second option is similar to the per-node memory encryption flag
support in the series below:

https://lore.kernel.org/linux-mm/20221007155323.ue4cdthkilfy4lbd@box.shutemov.name/t/
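
For the first option, one possible shape is a memory hotplug notifier
that rejects any memory not covered by the CMRs -- a hedged sketch, with
tdx_cmr_covers() as a hypothetical helper:

#include <linux/memory.h>
#include <linux/notifier.h>

/*
 * Hedged sketch: reject onlining of memory that is not TDX
 * convertible memory.  tdx_cmr_covers() is hypothetical.
 */
static int tdx_memory_notifier(struct notifier_block *nb,
                               unsigned long action, void *v)
{
        struct memory_notify *mn = v;

        if (action != MEM_GOING_ONLINE)
                return NOTIFY_OK;

        /* NOTIFY_BAD aborts the online operation. */
        return tdx_cmr_covers(mn->start_pfn, mn->start_pfn + mn->nr_pages) ?
                        NOTIFY_OK : NOTIFY_BAD;
}

static struct notifier_block tdx_memory_nb = {
        .notifier_call = tdx_memory_notifier,
};

/* Registered during TDX init: register_memory_notifier(&tdx_memory_nb); */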

5. Kexec()

Just like SME, TDX hosts require special cache flushing before kexec().
Similar to the SME handling, the kernel uses wbinvd() to flush the cache
in stop_this_cpu() when TDX is enabled.
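
Conceptually, the check in stop_this_cpu() would look like the sketch
below (platform_tdx_enabled() is assumed to report whether TDX has been
enabled by the BIOS; treat the exact helper name as an assumption):

        /* Hedged sketch: flush caches before kexec(), mirroring SME. */
        if (cpu_feature_enabled(X86_FEATURE_SME) || platform_tdx_enabled())
                native_wbinvd();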

===== Reference ======
[1]: TDX specs:
https://software.intel.com/content/www/us/en/develop/articles/intel-trust-domain-extensions.html

[2]: KVM TDX basic feature support
https://lore.kernel.org/lkml/CAAhR5DFrwP+5K8MOxz5YK7jYShhaK4A+2h1Pi31U_9+Z+cz-0A@mail.gmail.com/T/

[3]: KVM: mm: fd-based approach for supporting KVM
https://lore.kernel.org/lkml/20220915142913.2213336-1-chao.p.peng@linux.intel.com/T/


Kai Huang (21):
  x86/tdx: Use enum to define page level of TDX supported page sizes
  x86/virt/tdx: Detect TDX during kernel boot
  x86/virt/tdx: Disable TDX if X2APIC is not enabled
  x86/virt/tdx: Use all boot-time system memory as TDX memory
  x86/virt/tdx: Add skeleton to initialize TDX on demand
  x86/virt/tdx: Implement functions to make SEAMCALL
  x86/virt/tdx: Shut down TDX module in case of error
  x86/virt/tdx: Do TDX module global initialization
  x86/virt/tdx: Do logical-cpu scope TDX module initialization
  x86/virt/tdx: Get information about TDX module and TDX-capable memory
  x86/virt/tdx: Sanity check all TDX memory ranges are convertible
    memory
  x86/virt/tdx: Add placeholder to construct TDMRs to cover all TDX
    memory regions
  x86/virt/tdx: Create TDMRs to cover all TDX memory regions
  x86/virt/tdx: Allocate and set up PAMTs for TDMRs
  x86/virt/tdx: Set up reserved areas for all TDMRs
  x86/virt/tdx: Reserve TDX module global KeyID
  x86/virt/tdx: Configure TDX module with TDMRs and global KeyID
  x86/virt/tdx: Configure global KeyID on all packages
  x86/virt/tdx: Initialize all TDMRs
  x86/virt/tdx: Flush cache in kexec() when TDX is enabled
  Documentation/x86: Add documentation for TDX host support

 Documentation/x86/tdx.rst        |  209 ++++-
 arch/x86/Kconfig                 |   14 +
 arch/x86/Makefile                |    2 +
 arch/x86/coco/tdx/tdx.c          |   20 +-
 arch/x86/include/asm/tdx.h       |   51 ++
 arch/x86/kernel/process.c        |    9 +-
 arch/x86/virt/Makefile           |    2 +
 arch/x86/virt/vmx/Makefile       |    2 +
 arch/x86/virt/vmx/tdx/Makefile   |    2 +
 arch/x86/virt/vmx/tdx/seamcall.S |   52 ++
 arch/x86/virt/vmx/tdx/tdx.c      | 1441 ++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx/tdx.h      |  118 +++
 arch/x86/virt/vmx/tdx/tdxcall.S  |   19 +-
 13 files changed, 1911 insertions(+), 30 deletions(-)
 create mode 100644 arch/x86/virt/Makefile
 create mode 100644 arch/x86/virt/vmx/Makefile
 create mode 100644 arch/x86/virt/vmx/tdx/Makefile
 create mode 100644 arch/x86/virt/vmx/tdx/seamcall.S
 create mode 100644 arch/x86/virt/vmx/tdx/tdx.c
 create mode 100644 arch/x86/virt/vmx/tdx/tdx.h


base-commit: 5eb443db589a4526b2bef750a998ce7f0dc9c87b

Comments

Dave Hansen Oct. 26, 2022, 11:26 p.m. UTC | #1
On 10/26/22 16:15, Kai Huang wrote:
> To keep things simple, this series doesn't handle memory hotplug at all,
> but depends on the machine owner to not do any memory hotplug operations.
> For example, the machine owner should not plug any NVDIMM or CXL memory
> into the machine, or use the kmem driver to plug NVDIMM or CXL memory
> into the core-mm.
> 
> This will be enhanced in the future after the first submission.  We are
> also looking into options for how to handle it:

This is also known as the "hopes and prayers" approach to software
enabling.  "Let's just hope and pray that nobody does these things which
we know are broken."

In the spirit of moving this submission forward, I'm willing to continue
to _review_ this series.  But, I don't think it can go upstream until it
contains at least _some_ way to handle memory hotplug.
Huang, Kai Oct. 26, 2022, 11:51 p.m. UTC | #2
On Wed, 2022-10-26 at 16:26 -0700, Dave Hansen wrote:
> On 10/26/22 16:15, Kai Huang wrote:
> > To keep things simple, this series doesn't handle memory hotplug at all,
> > but depends on the machine owner to not do any memory hotplug operations.
> > For example, the machine owner should not plug any NVDIMM or CXL memory
> > into the machine, or use the kmem driver to plug NVDIMM or CXL memory
> > into the core-mm.
> > 
> > This will be enhanced in the future after the first submission.  We are
> > also looking into options for how to handle it:
> 
> This is also known as the "hopes and prayers" approach to software
> enabling.  "Let's just hope and pray that nobody does these things which
> we know are broken."
> 
> In the spirit of moving this submission forward, I'm willing to continue
> to _review_ this series.  
> 

Thank you Dave!

> But, I don't think it can go upstream until it
> contains at least _some_ way to handle memory hotplug.
> 
> 

Yes I agree.

One intention of sending out this series is actually to get feedback on
how to handle this.  As mentioned in the cover letter, AFAICT we have two
options:

1) have the kernel always guarantee that all pages in the page allocator
are TDX memory (i.e. by rejecting non-TDX memory in memory hotplug).
Non-TDX memory can be used via devdax.
2) manage TDX and non-TDX memory in different NUMA nodes, and use a
per-node TDX memory capability flag to show which nodes are TDX-capable.
Userspace needs to explicitly bind TDX guests to those TDX-capable NUMA
nodes.

I think the important thing is we need to get consensus on which direction
to go, as this is kinda related to the userspace ABI AFAICT.

Kirill has some thoughts on the second option, such as that we may need
some additional work to split NUMA nodes which contain both TDX and
non-TDX memory.

I am not entirely clear how hard this work will be, but my thinking is
that the above two are not necessarily conflicting.  For example, from the
userspace ABI's perspective we can go with option 2, but at the same time,
we can still reject hotplug of non-TDX memory.  This effectively amounts
to reporting all nodes as TDX-capable.

Splitting NUMA nodes which contain both TDX and non-TDX memory can be
enhanced in the future as it doesn't break the userspace ABI -- userspace
needs to explicitly bind TDX guests to TDX-capable nodes anyway.