
[v6,12/21] x86/virt/tdx: Add placeholder to construct TDMRs to cover all TDX memory regions

Message ID fe0e1a1133166ca4008840cd1a5959fa70632f07.1666824663.git.kai.huang@intel.com (mailing list archive)
State New
Headers show
Series TDX host kernel support

Commit Message

Huang, Kai Oct. 26, 2022, 11:16 p.m. UTC
TDX provides increased levels of memory confidentiality and integrity.
This requires special hardware support for features like memory
encryption and storage of memory integrity checksums.  Not all memory
satisfies these requirements.

As a result, TDX introduced the concept of a "Convertible Memory
Region" (CMR).  During boot, the firmware builds a list of all of the
memory ranges which can provide the TDX security guarantees.  The list
of these ranges is available to the kernel by querying the TDX module.

The TDX architecture needs additional metadata to record things like
which TD guest "owns" a given page of memory.  This metadata essentially
serves as the 'struct page' for the TDX module.  The space for this
metadata is not reserved by the hardware up front and must be allocated
by the kernel and given to the TDX module.

Since this metadata consumes space, the VMM can choose whether or not to
allocate it for a given area of convertible memory.  If it chooses not
to, the memory cannot receive TDX protections and cannot be used by TDX
guests as private memory.

For every memory region that the VMM wants to use as TDX memory, it sets
up a "TD Memory Region" (TDMR).  Each TDMR represents a physically
contiguous convertible range and must also have its own physically
contiguous metadata table, referred to as a Physical Address Metadata
Table (PAMT), to track status for each page in the TDMR range.

Unlike a CMR, each TDMR requires 1G granularity and alignment.  To
support physical RAM areas that don't meet those strict requirements,
each TDMR permits a number of internal "reserved areas" which can be
placed over memory holes.  If PAMT metadata is placed within a TDMR it
must be covered by one of these reserved areas.

Let's summarize the concepts (a rough illustration in code follows the
summary):

 CMR - Firmware-enumerated physical ranges that support TDX.  CMRs are
       4K aligned.
TDMR - Physical address range which is chosen by the kernel to support
       TDX.  1G granularity and alignment required.  Each TDMR has
       reserved areas into which TDX memory holes and overlapping PAMTs
       can be placed.
PAMT - Physically contiguous TDX metadata.  One table for each page size
       per TDMR.  Roughly 1/256th of TDMR in size.  256G TDMR = ~1G
       PAMT.
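
As a rough illustration of the alignment and sizing rules above (the
helpers and the 1/256 estimate below are illustrative only, not the TDX
module's exact formulas):

#include <linux/align.h>
#include <linux/sizes.h>
#include <linux/types.h>

#define TDMR_ALIGNMENT	SZ_1G	/* 1G TDMR granularity and alignment */

/* Expand a convertible RAM range outwards to 1G TDMR boundaries. */
static inline u64 tdmr_range_start(u64 ram_start)
{
	return ALIGN_DOWN(ram_start, TDMR_ALIGNMENT);
}

static inline u64 tdmr_range_end(u64 ram_end)
{
	return ALIGN(ram_end, TDMR_ALIGNMENT);
}

/* Rough PAMT estimate: ~1/256th of the TDMR, e.g. 256G TDMR -> ~1G PAMT. */
static inline u64 pamt_size_estimate(u64 tdmr_size)
{
	return tdmr_size / 256;
}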

As one step of initializing the TDX module, the kernel configures
TDX-usable memory regions by passing an array of TDMRs to the TDX module.

Constructing the array of TDMRs consists of the following steps (see the
sketch after the list):

1) Create TDMRs to cover all memory regions that the TDX module can use;
2) Allocate and set up PAMT for each TDMR;
3) Set up reserved areas for each TDMR.
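
A minimal sketch of how construct_tdmrs() could carry out those three
steps once later patches fill it in (the three helper names below are
hypothetical placeholders, not the kernel's actual functions):

/* Hypothetical sketch only; the three helpers are placeholders. */
static int construct_tdmrs(struct tdmr_info *tdmr_array, int *tdmr_num)
{
	int ret;

	/* 1) Lay TDMRs over all TDX-usable memory regions */
	ret = create_tdmrs(tdmr_array, tdmr_num);
	if (ret)
		return ret;

	/* 2) Allocate and set up a PAMT for each TDMR */
	ret = set_up_pamts(tdmr_array, *tdmr_num);
	if (ret)
		return ret;

	/* 3) Mark memory holes and overlapping PAMTs as reserved areas */
	return set_up_reserved_areas(tdmr_array, *tdmr_num);
}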

Add a placeholder to construct TDMRs, which carries out the above steps,
after all TDX memory regions have been verified to be truly convertible.
Always free the TDMRs at the end of initialization (whether it succeeds
or not), as they are only used during initialization.

Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---

v5 -> v6:
 - construct_tdmrs_memblock() -> construct_tdmrs() as 'tdx_memblock' is
   used instead of memblock.
 - Added Isaku's Reviewed-by.

v3 -> v5 (no feedback on v4):
 - Moved calculating TDMR size to this patch.
 - Changed to use alloc_pages_exact() to allocate buffer for all TDMRs
   once, instead of allocating each TDMR individually.
 - Removed "crypto protection" in the changelog.
 - -EFAULT -> -EINVAL in a couple of places.

---
 arch/x86/virt/vmx/tdx/tdx.c | 72 +++++++++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx/tdx.h | 23 ++++++++++++
 2 files changed, 95 insertions(+)

Comments

Andi Kleen Oct. 27, 2022, 3:31 p.m. UTC | #1
> +/* Calculate the actual TDMR_INFO size */
> +static inline int cal_tdmr_size(void)
> +{
> +	int tdmr_sz;
> +
> +	/*
> +	 * The actual size of TDMR_INFO depends on the maximum number
> +	 * of reserved areas.
> +	 */
> +	tdmr_sz = sizeof(struct tdmr_info);
> +	tdmr_sz += sizeof(struct tdmr_reserved_area) *
> +		   tdx_sysinfo.max_reserved_per_tdmr;


would seem safer to have an overflow check here.
Huang, Kai Oct. 28, 2022, 2:21 a.m. UTC | #2
On Thu, 2022-10-27 at 08:31 -0700, Andi Kleen wrote:
> > +/* Calculate the actual TDMR_INFO size */
> > +static inline int cal_tdmr_size(void)
> > +{
> > +	int tdmr_sz;
> > +
> > +	/*
> > +	 * The actual size of TDMR_INFO depends on the maximum number
> > +	 * of reserved areas.
> > +	 */
> > +	tdmr_sz = sizeof(struct tdmr_info);
> > +	tdmr_sz += sizeof(struct tdmr_reserved_area) *
> > +		   tdx_sysinfo.max_reserved_per_tdmr;
> 
> 
> would seem safer to have an overflow check here.
> 
> 

How about below?

--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -614,6 +614,14 @@ static inline int cal_tdmr_size(void)
        tdmr_sz += sizeof(struct tdmr_reserved_area) *
                   tdx_sysinfo.max_reserved_per_tdmr;
 
+       /*
+        * Do a simple check against overflow, and return 0 (an invalid
+        * TDMR_INFO size) if it happens.  Also WARN() as it should
+        * never happen in reality.
+        */
+       if (WARN_ON_ONCE(tdmr_sz < 0))
+               return 0;
+
        /*
         * TDX requires each TDMR_INFO to be 512-byte aligned.  Always
         * round up TDMR_INFO size to the 512-byte boundary.
@@ -623,19 +631,27 @@ static inline int cal_tdmr_size(void)
 
 static struct tdmr_info *alloc_tdmr_array(int *array_sz)
 {
+       int sz;
+
        /*
         * TDX requires each TDMR_INFO to be 512-byte aligned.
         * Use alloc_pages_exact() to allocate all TDMRs at once.
         * Each TDMR_INFO will still be 512-byte aligned since
         * cal_tdmr_size() always returns a 512-byte aligned size.
         */
-       *array_sz = cal_tdmr_size() * tdx_sysinfo.max_tdmrs;
+       sz = cal_tdmr_size() * tdx_sysinfo.max_tdmrs;
+
+       /* Overflow */
+       if (!sz || WARN_ON_ONCE(sz < 0))
+               return NULL;
+
+       *array_sz = sz;
 
        /*
         * Zero the buffer so 'struct tdmr_info::size' can be
         * used to determine whether a TDMR is valid.
         */
-       return alloc_pages_exact(*array_sz, GFP_KERNEL | __GFP_ZERO);
+       return alloc_pages_exact(sz, GFP_KERNEL | __GFP_ZERO);
 }


Btw, should I use alloc_contig_pages() instead of alloc_pages_exact() as IIUC
the latter should fail if the size is larger than 4MB?  In reality, the entire
array only takes dozens of KBs, though.
Huang, Kai Nov. 3, 2022, 8:55 a.m. UTC | #3
On Fri, 2022-10-28 at 02:21 +0000, Huang, Kai wrote:
> On Thu, 2022-10-27 at 08:31 -0700, Andi Kleen wrote:
> > > +/* Calculate the actual TDMR_INFO size */
> > > +static inline int cal_tdmr_size(void)
> > > +{
> > > +	int tdmr_sz;
> > > +
> > > +	/*
> > > +	 * The actual size of TDMR_INFO depends on the maximum number
> > > +	 * of reserved areas.
> > > +	 */
> > > +	tdmr_sz = sizeof(struct tdmr_info);
> > > +	tdmr_sz += sizeof(struct tdmr_reserved_area) *
> > > +		   tdx_sysinfo.max_reserved_per_tdmr;
> > 
> > 
> > would seem safer to have an overflow check here.
> > 
> > 
> 
> How about below?
> 
> --- a/arch/x86/virt/vmx/tdx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx/tdx.c
> @@ -614,6 +614,14 @@ static inline int cal_tdmr_size(void)
>         tdmr_sz += sizeof(struct tdmr_reserved_area) *
>                    tdx_sysinfo.max_reserved_per_tdmr;
>  
> +       /*
> +        * Do a simple check against overflow, and return 0 (an invalid
> +        * TDMR_INFO size) if it happens.  Also WARN() as it should
> +        * never happen in reality.
> +        */
> +       if (WARN_ON_ONCE(tdmr_sz < 0))
> +               return 0;
> +
>         /*
>          * TDX requires each TDMR_INFO to be 512-byte aligned.  Always
>          * round up TDMR_INFO size to the 512-byte boundary.
> @@ -623,19 +631,27 @@ static inline int cal_tdmr_size(void)
>  
>  static struct tdmr_info *alloc_tdmr_array(int *array_sz)
>  {
> +       int sz;
> +
>         /*
>          * TDX requires each TDMR_INFO to be 512-byte aligned.
>          * Use alloc_pages_exact() to allocate all TDMRs at once.
>          * Each TDMR_INFO will still be 512-byte aligned since
>          * cal_tdmr_size() always returns a 512-byte aligned size.
>          */
> -       *array_sz = cal_tdmr_size() * tdx_sysinfo.max_tdmrs;
> +       sz = cal_tdmr_size() * tdx_sysinfo.max_tdmrs;
> +
> +       /* Overflow */
> +       if (!sz || WARN_ON_ONCE(sz < 0))
> +               return NULL;
> +
> +       *array_sz = sz;
>  
>         /*
>          * Zero the buffer so 'struct tdmr_info::size' can be
>          * used to determine whether a TDMR is valid.
>          */
> -       return alloc_pages_exact(*array_sz, GFP_KERNEL | __GFP_ZERO);
> +       return alloc_pages_exact(sz, GFP_KERNEL | __GFP_ZERO);
>  }
> 
> 
> Btw, should I use alloc_contig_pages() instead of alloc_pages_exact() as IIUC
> the latter should fail if the size is larger than 4MB?  In reality, the entire
> array only takes dozens of KBs, though.

Hi Andi,

Could you take a look whether this is OK?

Also could you take a look my replies to your other comments?

Thanks!
Dave Hansen Nov. 3, 2022, 3:05 p.m. UTC | #4
On 10/27/22 08:31, Andi Kleen wrote:
> 
>> +/* Calculate the actual TDMR_INFO size */
>> +static inline int cal_tdmr_size(void)
>> +{
>> +    int tdmr_sz;
>> +
>> +    /*
>> +     * The actual size of TDMR_INFO depends on the maximum number
>> +     * of reserved areas.
>> +     */
>> +    tdmr_sz = sizeof(struct tdmr_info);
>> +    tdmr_sz += sizeof(struct tdmr_reserved_area) *
>> +           tdx_sysinfo.max_reserved_per_tdmr;
> 
> would seem safer to have an overflow check here.

tdmr_reserved_area is 16 bytes.  To overflow a signed int, tdmr_sz would
need to be for an allocation >2GB.  alloc_pages_exact() tops out at
supplying 4MB allocations.

So, sure, this breaks at max_reserved_per_tdmr>2^27, but it actually
breaks *EARLIER* at max_reserved_per_tdmr>2^18 because the page
allocator is borked.

Plus, max_reserved_per_tdmr is barely in double digits today.  It's a
*LOOOOOOOOONG* way from either of those limits.  If you want to add a
warning here, then go for it and enforce a sane value on
max_reserved_per_tdmr.

But, the overflow is *LITERALLY* an order of magnitude more obscure than
overwhelming the page allocator.  Let's not clutter up the code with
silly checks like that.
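
For reference, the arithmetic behind that comparison, expressed as
constants for illustration only (it assumes a 16-byte struct
tdmr_reserved_area and a 32-bit signed int, as described above):

/* Reserved areas needed before a signed int tdmr_sz overflows (>2GB). */
#define RSVD_AREAS_FOR_INT_OVERFLOW	((1ULL << 31) / 16)	/* 2^27 */
/* Reserved areas needed before alloc_pages_exact()'s ~4MB cap is hit. */
#define RSVD_AREAS_FOR_ALLOC_FAILURE	((1ULL << 22) / 16)	/* 2^18 */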
Huang, Kai Nov. 3, 2022, 10:07 p.m. UTC | #5
On Thu, 2022-11-03 at 08:05 -0700, Hansen, Dave wrote:
> Plus, max_reserved_per_tdmr is barely in double digits today.  It's a
> *LOOOOOOOOONG* way from either of those limits.  If you want to add a
> warning here, then go for it and enforce a sane value on
> max_reserved_per_tdmr.

Hi Dave,

Thanks.  By "enforce a sane value on max_reserved_per_tdmr" could you be more
specific?  Did you mean that if we find its value is insanely big, we can
change it to a smaller, reasonable value?

But I don't think we can, as the TDMR_INFO is used by the TDX module, so
reducing max_reserved_per_tdmr in the kernel doesn't actually work?

Perhaps for now we can make the kernel assume TDMR_INFO won't exceed a
reasonable value (i.e. 4K/8K/16K?) and that max_tdmrs (which is 64 currently)
won't exceed a reasonable value either (i.e. 1K/512/256?), so that we can just
use alloc_pages_exact() to allocate the entire TDMR array.  If the kernel
finds either is too big, it could just fail to initialize the TDX module.

Patch

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index ff3ef7ed4509..ba577d357aef 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -16,6 +16,8 @@ 
 #include <linux/cpu.h>
 #include <linux/cpumask.h>
 #include <linux/smp.h>
+#include <linux/gfp.h>
+#include <linux/align.h>
 #include <linux/atomic.h>
 #include <asm/msr-index.h>
 #include <asm/msr.h>
@@ -536,6 +538,53 @@  static int sanity_check_tdx_memory(void)
 	return 0;
 }
 
+/* Calculate the actual TDMR_INFO size */
+static inline int cal_tdmr_size(void)
+{
+	int tdmr_sz;
+
+	/*
+	 * The actual size of TDMR_INFO depends on the maximum number
+	 * of reserved areas.
+	 */
+	tdmr_sz = sizeof(struct tdmr_info);
+	tdmr_sz += sizeof(struct tdmr_reserved_area) *
+		   tdx_sysinfo.max_reserved_per_tdmr;
+
+	/*
+	 * TDX requires each TDMR_INFO to be 512-byte aligned.  Always
+	 * round up TDMR_INFO size to the 512-byte boundary.
+	 */
+	return ALIGN(tdmr_sz, TDMR_INFO_ALIGNMENT);
+}
+
+static struct tdmr_info *alloc_tdmr_array(int *array_sz)
+{
+	/*
+	 * TDX requires each TDMR_INFO to be 512-byte aligned.
+	 * Use alloc_pages_exact() to allocate all TDMRs at once.
+	 * Each TDMR_INFO will still be 512-byte aligned since
+	 * cal_tdmr_size() always returns a 512-byte aligned size.
+	 */
+	*array_sz = cal_tdmr_size() * tdx_sysinfo.max_tdmrs;
+
+	/*
+	 * Zero the buffer so 'struct tdmr_info::size' can be
+	 * used to determine whether a TDMR is valid.
+	 */
+	return alloc_pages_exact(*array_sz, GFP_KERNEL | __GFP_ZERO);
+}
+
+/*
+ * Construct an array of TDMRs to cover all TDX memory ranges.
+ * The actual number of TDMRs is kept in @tdmr_num.
+ */
+static int construct_tdmrs(struct tdmr_info *tdmr_array, int *tdmr_num)
+{
+	/* Return -EINVAL until constructing TDMRs is done */
+	return -EINVAL;
+}
+
 /*
  * Detect and initialize the TDX module.
  *
@@ -545,6 +594,9 @@  static int sanity_check_tdx_memory(void)
  */
 static int init_tdx_module(void)
 {
+	struct tdmr_info *tdmr_array;
+	int tdmr_array_sz;
+	int tdmr_num;
 	int ret;
 
 	/*
@@ -572,11 +624,31 @@  static int init_tdx_module(void)
 	ret = sanity_check_tdx_memory();
 	if (ret)
 		goto out;
+
+	/* Prepare enough space to construct TDMRs */
+	tdmr_array = alloc_tdmr_array(&tdmr_array_sz);
+	if (!tdmr_array) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	/* Construct TDMRs to cover all TDX memory ranges */
+	ret = construct_tdmrs(tdmr_array, &tdmr_num);
+	if (ret)
+		goto out_free_tdmrs;
+
 	/*
 	 * Return -EINVAL until all steps of TDX module initialization
 	 * process are done.
 	 */
 	ret = -EINVAL;
+out_free_tdmrs:
+	/*
+	 * The array of TDMRs is freed whether the initialization was
+	 * successful or not.  It is not needed anymore after the
+	 * module initialization.
+	 */
+	free_pages_exact(tdmr_array, tdmr_array_sz);
 out:
 	return ret;
 }
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 8e273756098c..a737f2b51474 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -80,6 +80,29 @@  struct tdsysinfo_struct {
 	};
 } __packed __aligned(TDSYSINFO_STRUCT_ALIGNMENT);
 
+struct tdmr_reserved_area {
+	u64 offset;
+	u64 size;
+} __packed;
+
+#define TDMR_INFO_ALIGNMENT	512
+
+struct tdmr_info {
+	u64 base;
+	u64 size;
+	u64 pamt_1g_base;
+	u64 pamt_1g_size;
+	u64 pamt_2m_base;
+	u64 pamt_2m_size;
+	u64 pamt_4k_base;
+	u64 pamt_4k_size;
+	/*
+	 * Actual number of reserved areas depends on
+	 * 'struct tdsysinfo_struct'::max_reserved_per_tdmr.
+	 */
+	struct tdmr_reserved_area reserved_areas[0];
+} __packed __aligned(TDMR_INFO_ALIGNMENT);
+
 /*
  * Do not put any hardware-defined TDX structure representations below
  * this comment!