Message ID | cover.1685887183.git.kai.huang@intel.com
---|---
Series | TDX host kernel support
On Mon, Jun 05, 2023 at 02:27:13AM +1200, Kai Huang <kai.huang@intel.com> wrote:
> Intel Trusted Domain Extensions (TDX) protects guest VMs from malicious
> host and certain physical attacks. TDX specs are available in [1].
>
> This series is the initial support to enable TDX with minimal code to
> allow KVM to create and run TDX guests. KVM support for TDX is being
> developed separately[2]. A new "userspace inaccessible memfd" approach
> to support TDX private memory is also being developed[3]. The KVM will
> only support the new "userspace inaccessible memfd" as TDX guest memory.
>
> This series doesn't aim to support all functionalities, and doesn't aim
> to resolve all things perfectly. All other optimizations will be posted
> as follow-up once this initial TDX support is upstreamed.
>
> Also, the patch to add the new kernel comline tdx="force" isn't included
> in this initial version, as Dave suggested it isn't mandatory. But I
> will add one once this initial version gets merged.
>
> (For memory hotplug, sorry for broadcasting widely but I cc'ed the
> linux-mm@kvack.org following Kirill's suggestion so MM experts can also
> help to provide comments.)
>
> Hi Dave, Kirill, Tony, Peter, Thomas, Dan (and Intel reviewers),
>
> The new relaxed TDX per-cpu initialization flow has been verified. The
> TDX module can be initialized when there are offline cpus, and the
> TDH.SYS.LP.INIT SEAMCALL can be made successfully later after module
> initialization when the offline cpu is up.
>
> This series mainly added code to handle the new TDX "partial write
> machine check" erratum (SPR113) in [4].
>
> And I would appreciate reviewed-by or acked-by tags if the patches look
> good to you. Thanks in advance!

I've rebased the TDX KVM patch series v14 [1] with this patch series and
uploaded it at [2]. As the rebased TDX KVM patches don't have any changes
beyond trivial rebase fixes, I'm not posting a separate v14.1.

[1] https://lore.kernel.org/lkml/cover.1685333727.git.isaku.yamahata@intel.com/
[2] https://github.com/intel/tdx/tree/kvm-upstream-workaround
Kai Huang wrote:
> Intel Trusted Domain Extensions (TDX) protects guest VMs from malicious
> host and certain physical attacks. TDX specs are available in [1].
>
> This series is the initial support to enable TDX with minimal code to
> allow KVM to create and run TDX guests. KVM support for TDX is being
> developed separately[2]. A new "userspace inaccessible memfd" approach
> to support TDX private memory is also being developed[3]. The KVM will
> only support the new "userspace inaccessible memfd" as TDX guest memory.

This memfd approach is incompatible with one of the primary ways that
new memory topologies like high-bandwidth-memory and CXL are accessed,
via a device-special-file mapping. There is already precedent for mmap()
to only be used for communicating an address value and not CPU-accessible
memory. See "Userspace P2PDMA with O_DIRECT NVMe devices" [1].

So before this memfd requirement becomes too baked into the design I
want to understand if "userspace inaccessible" is the only requirement
so I can look to add that to the device-special-file interface for
"device" / "Soft Reserved" memory like HBM and CXL.

[1]: https://lore.kernel.org/all/20221021174116.7200-1-logang@deltatee.com/
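For readers unfamiliar with the precedent Dan points to, the following is a minimal userspace sketch of the idea behind "Userspace P2PDMA with O_DIRECT NVMe devices": mmap() of a device special file only communicates an address range to the kernel, and the CPU never loads or stores through the mapping. The device paths (`/dev/example_p2pmem`, `/dev/nvme0n1`) and the existence of such an mmap()-able device-memory allocator are assumptions for illustration only, not the interface proposed in [1].

```c
/*
 * Hedged sketch: mmap() of a device special file yields an address that
 * is handed to the kernel as a DMA target (via an O_DIRECT read), but
 * the program never dereferences the mapping itself.  Device paths are
 * hypothetical.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;

	/* Hypothetical device special file exporting device memory. */
	int mem_fd = open("/dev/example_p2pmem", O_RDWR);
	if (mem_fd < 0) { perror("open device memory"); return 1; }

	/* NVMe block device opened with O_DIRECT so reads DMA directly. */
	int blk_fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
	if (blk_fd < 0) { perror("open nvme"); return 1; }

	/*
	 * The mapping only communicates an address range to the kernel;
	 * the CPU never touches 'buf'.
	 */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			 mem_fd, 0);
	if (buf == MAP_FAILED) { perror("mmap"); return 1; }

	/* DMA from the NVMe device straight into the device memory. */
	ssize_t n = pread(blk_fd, buf, len, 0);
	if (n < 0)
		perror("pread");
	else
		printf("read %zd bytes device-to-device\n", n);

	munmap(buf, len);
	close(blk_fd);
	close(mem_fd);
	return 0;
}
```

The point of the pattern is that the address returned by mmap() acts as a handle for device-to-device transfers, which is why Dan asks whether "userspace inaccessible" is the only property the TDX memfd design actually requires.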
On Thu, 2023-06-08 at 14:03 -0700, Dan Williams wrote:
> Kai Huang wrote:
> > Intel Trusted Domain Extensions (TDX) protects guest VMs from malicious
> > host and certain physical attacks. TDX specs are available in [1].
> >
> > This series is the initial support to enable TDX with minimal code to
> > allow KVM to create and run TDX guests. KVM support for TDX is being
> > developed separately[2]. A new "userspace inaccessible memfd" approach
> > to support TDX private memory is also being developed[3]. The KVM will
> > only support the new "userspace inaccessible memfd" as TDX guest memory.
>
> This memfd approach is incompatible with one of the primary ways that
> new memory topologies like high-bandwidth-memory and CXL are accessed,
> via a device-special-file mapping. There is already precedent for mmap()
> to only be used for communicating an address value and not CPU-accessible
> memory. See "Userspace P2PDMA with O_DIRECT NVMe devices" [1].
>
> So before this memfd requirement becomes too baked into the design I
> want to understand if "userspace inaccessible" is the only requirement
> so I can look to add that to the device-special-file interface for
> "device" / "Soft Reserved" memory like HBM and CXL.
>
> [1]: https://lore.kernel.org/all/20221021174116.7200-1-logang@deltatee.com/

+ Chao Peng, who is working on this with Sean.

There are some recent developments around the design of the "userspace
inaccessible memfd", e.g., IIUC Sean is proposing to replace the new
syscall with a new KVM ioctl().

Hi Sean, Chao,

Could you comment on Dan's concern?
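For context on the direction Kai mentions, here is a hedged sketch of what replacing the standalone syscall with a VM-scoped KVM ioctl() could look like. The ioctl name `EXAMPLE_KVM_CREATE_GUEST_MEMFD`, its request number, and the argument struct layout are illustrative assumptions, not the interface that was under discussion or any merged ABI.

```c
/*
 * Hedged sketch only: instead of a standalone syscall, userspace asks
 * KVM for a VM-scoped fd that backs guest-private memory and is never
 * mmap()ed by userspace.  The ioctl below is hypothetical.
 */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Hypothetical argument struct and request number -- illustrative only. */
struct example_create_guest_memfd {
	__u64 size;	/* size of the guest-private memory region */
	__u64 flags;
	__u64 reserved[6];
};
#define EXAMPLE_KVM_CREATE_GUEST_MEMFD \
	_IOWR(KVMIO, 0xff, struct example_create_guest_memfd)

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	if (kvm < 0) { perror("open /dev/kvm"); return 1; }

	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	if (vm < 0) { perror("KVM_CREATE_VM"); return 1; }

	/*
	 * Ask KVM for a VM-scoped fd backing guest-private memory.  The
	 * returned fd would only be handed back to KVM when configuring
	 * guest memory regions; userspace never maps it.
	 */
	struct example_create_guest_memfd args = { .size = 2UL << 20 };
	int memfd = ioctl(vm, EXAMPLE_KVM_CREATE_GUEST_MEMFD, &args);
	if (memfd < 0)
		perror("create guest memfd (expected: ioctl is hypothetical)");

	close(vm);
	close(kvm);
	return 0;
}
```

Whether the fd comes from a syscall or a KVM ioctl(), the property under discussion in this thread is the same: the memory behind it is inaccessible to userspace, which is exactly the requirement Dan asks about for device-special-file-backed memory.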