
[v19,057/130] KVM: x86/mmu: Add a new is_private member for union kvm_mmu_page_role

Message ID 3692118f525c21cbe3670e1c20b1bbbf3be2b4ba.1708933498.git.isaku.yamahata@intel.com (mailing list archive)
State New, archived
Series [v19,001/130] x86/virt/tdx: Rename _offset to _member for TD_SYSINFO_MAP() macro

Commit Message

Isaku Yamahata Feb. 26, 2024, 8:25 a.m. UTC
From: Isaku Yamahata <isaku.yamahata@intel.com>

Because TDX support introduces private mappings, add a new member to
union kvm_mmu_page_role, along with accessor functions to check it.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
v19:
- Fix is_private_sptep() for the NULL case.
- Drop CONFIG_KVM_MMU_PRIVATE.
---
 arch/x86/include/asm/kvm_host.h | 13 ++++++++++++-
 arch/x86/kvm/mmu/mmu_internal.h |  5 +++++
 arch/x86/kvm/mmu/spte.h         |  7 +++++++
 3 files changed, 24 insertions(+), 1 deletion(-)

Comments

Edgecombe, Rick P March 21, 2024, 12:18 a.m. UTC | #1
On Mon, 2024-02-26 at 00:25 -0800, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
> 
> Because TDX support introduces private mappings, add a new member to
> union kvm_mmu_page_role, along with accessor functions to check it.

I guess we should have a role bit for private like in this patch, but
just barely. AFAICT we have a gfn and struct kvm in every place where
it is checked (assuming my proposal in patch 56 holds water), so we
could have:

bool is_private = !(gfn & kvm_gfn_shared_mask(kvm));

But there are extra bits available in the role, so we can skip the
extra step. Can you think of any more reasons? I want to try to write a
log for this one. It's very short.
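
For reference, a minimal sketch of the two options weighed above,
assuming kvm_gfn_shared_mask() returns the shared-GPA bit(s) as a gfn
mask as this series proposes (the helper names here are hypothetical):

/* Option 1: recompute from the gfn and struct kvm at each call site. */
static bool gfn_is_private(struct kvm *kvm, gfn_t gfn)
{
	/* Private GPAs are the ones without the shared bit set. */
	return !(gfn & kvm_gfn_shared_mask(kvm));
}

/* Option 2: consult the role bit this patch adds. */
static bool sp_is_private(const struct kvm_mmu_page *sp)
{
	/* No gfn or struct kvm needed once the role carries the bit. */
	return sp->role.is_private;
}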

> 
> +static inline bool is_private_sptep(u64 *sptep)
> +{
> +       if (WARN_ON_ONCE(!sptep))
> +               return false;

This is not supposed to be NULL, judging from the existence of the
warning. It looks like some previous comments asked not to let the NULL
pointer dereference happen and to bail if it's NULL. I think maybe we
should just drop the check and warning completely. The NULL pointer
dereference will be plenty loud if it happens.

> +       return is_private_sp(sptep_to_sp(sptep));
> +}
> +
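
If the check were dropped as suggested, the helper would reduce to a
one-liner (a sketch of the suggested simplification, not the posted
patch):

static inline bool is_private_sptep(u64 *sptep)
{
	/* A NULL sptep would oops loudly right here, which is the point. */
	return is_private_sp(sptep_to_sp(sptep));
}
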
Isaku Yamahata March 21, 2024, 9:59 p.m. UTC | #2
On Thu, Mar 21, 2024 at 12:18:47AM +0000,
"Edgecombe, Rick P" <rick.p.edgecombe@intel.com> wrote:

> On Mon, 2024-02-26 at 00:25 -0800, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> > 
> > Because TDX support introduces private mappings, add a new member to
> > union kvm_mmu_page_role, along with accessor functions to check it.
> 
> I guess we should have a role bit for private like in this patch, but
> just barely. AFAICT we have a gfn and struct kvm in every place where
> it is checked (assuming my proposal in patch 56 holds water). So we
> could have
> bool is_private = !(gfn & kvm_gfn_shared_mask(kvm));

Yes, we can use such a combination, or !!sp->private_spt or something.
Originally we didn't use role.is_private and instead passed around a
private parameter.


> But there are extra bits available in the role, so we can skip the
> extra step. Can you think of any more reasons? I want to try to write a
> log for this one. It's very short.

There are several places that compare the role for shared vs. private,
for example kvm_tdp_mmu_alloc_root().  role.is_private simplifies such
comparisons.
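
To illustrate the kind of comparison meant here (a sketch only; the
actual kvm_tdp_mmu_alloc_root() in this series may differ), the role
bit lets root lookup compare a single word instead of recomputing
privateness from the gfn:

	union kvm_mmu_page_role role = vcpu->arch.mmu->root_role;
	struct kvm_mmu_page *root;

	if (private)
		kvm_mmu_page_role_set_private(&role);

	/* One word-sized compare now covers shared vs. private as well. */
	for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
		if (root->role.word == role.word)
			return root;
	}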

Patch

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6c10d8d1017f..dcc6f7c38a83 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -349,7 +349,8 @@ union kvm_mmu_page_role {
 		unsigned ad_disabled:1;
 		unsigned guest_mode:1;
 		unsigned passthrough:1;
-		unsigned :5;
+		unsigned is_private:1;
+		unsigned :4;
 
 		/*
 		 * This is left at the top of the word so that
@@ -361,6 +362,16 @@ union kvm_mmu_page_role {
 	};
 };
 
+static inline bool kvm_mmu_page_role_is_private(union kvm_mmu_page_role role)
+{
+	return !!role.is_private;
+}
+
+static inline void kvm_mmu_page_role_set_private(union kvm_mmu_page_role *role)
+{
+	role->is_private = 1;
+}
+
 /*
  * kvm_mmu_extended_role complements kvm_mmu_page_role, tracking properties
  * relevant to the current MMU configuration.   When loading CR0, CR4, or EFER,
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 0443bfcf5d9c..e3f54701f98d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -145,6 +145,11 @@ static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
 	return kvm_mmu_role_as_id(sp->role);
 }
 
+static inline bool is_private_sp(const struct kvm_mmu_page *sp)
+{
+	return kvm_mmu_page_role_is_private(sp->role);
+}
+
 static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
 {
 	/*
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 1a163aee9ec6..3ef8ea18321b 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -264,6 +264,13 @@ static inline struct kvm_mmu_page *root_to_sp(hpa_t root)
 	return spte_to_child_sp(root);
 }
 
+static inline bool is_private_sptep(u64 *sptep)
+{
+	if (WARN_ON_ONCE(!sptep))
+		return false;
+	return is_private_sp(sptep_to_sp(sptep));
+}
+
 static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
 {
 	return (spte & shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&