[v2,1/2] KVM: x86/tdp_mmu: Add WARN_ON_ONCE() in tdp_mmu_map_handle_target_level()

Message ID: c20d70acf5f5b0db5f52d6e163bb49f1235182f4.1728718232.git.isaku.yamahata@intel.com

Commit Message

Isaku Yamahata Oct. 12, 2024, 7:39 a.m. UTC
Add a WARN_ON_ONCE() to tdp_mmu_map_handle_target_level() to check the
validity of the SPTE produced by make_spte(), as suggested at [1].

The possible in-place changes to a shadow-present SPTE are (a minimal
standalone sketch of the bit-diff check follows the list):
- Non-leaf => leaf (large page).
- Read fault (SPTE without write permission) => write fault.
  Restricting permissions, e.g. write protection, goes through zapping
  instead.
- Access-tracking updates when A/D bits aren't supported.
- A/D bit changes.  This case should be removed once make_spte() is
  fixed [1].
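
As a standalone illustration of that check, here is a minimal sketch
(the SKETCH_* masks are placeholder bit positions, not KVM's real
PT_WRITABLE_MASK or shadow_*_mask values):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical stand-ins for the bits an in-place change may toggle. */
  #define SKETCH_WRITABLE_MASK	(1ULL << 1)
  #define SKETCH_DIRTY_MASK	(1ULL << 9)
  #define SKETCH_IGNORE_MASK	(SKETCH_WRITABLE_MASK | SKETCH_DIRTY_MASK)

  /* True if the SPTEs differ in any bit outside the ignore mask. */
  static bool spte_change_is_unexpected(uint64_t old_spte, uint64_t new_spte)
  {
  	return (old_spte ^ new_spte) & ~SKETCH_IGNORE_MASK;
  }

  int main(void)
  {
  	uint64_t present = 0x3ULL;

  	/* Dirty bit cleared: an expected change, prints 0. */
  	printf("%d\n", spte_change_is_unexpected(present | SKETCH_DIRTY_MASK,
  						 present));
  	/* A PFN bit changed: unexpected, prints 1 (would WARN). */
  	printf("%d\n", spte_change_is_unexpected(present | (1ULL << 12),
  						 present | (1ULL << 13)));
  	return 0;
  }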

[1] https://lore.kernel.org/kvm/ZuOCXarfAwPjYj19@google.com/
  One idea would be to WARN and skip setting the SPTE in
  tdp_mmu_map_handle_target_level().  I.e. WARN and ignore 1=>0 transitions
  for Writable and Dirty bits, and then drop the TLB flush (assuming the
  above patch lands).
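
For illustration, the 1=>0 detection that idea needs could look like the
hypothetical helper below (clears_writable_or_dirty() and the SKETCH_*
bits are made up for the sketch; the patch here only WARNs and still
installs new_spte):

  #include <stdbool.h>
  #include <stdint.h>

  #define SKETCH_WRITABLE_MASK	(1ULL << 1)
  #define SKETCH_DIRTY_MASK	(1ULL << 9)

  /* True if any Writable/Dirty bit transitions 1=>0 between the SPTEs. */
  static bool clears_writable_or_dirty(uint64_t old_spte, uint64_t new_spte)
  {
  	return old_spte & ~new_spte &
  	       (SKETCH_WRITABLE_MASK | SKETCH_DIRTY_MASK);
  }

In tdp_mmu_map_handle_target_level() that would roughly mean warning and
returning early instead of calling tdp_mmu_set_spte_atomic().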

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
v2:
- Split out into an independent patch so that the check is common TDP
  MMU logic. (Sean)
---
 arch/x86/kvm/mmu/spte.h    | 11 +++++++++++
 arch/x86/kvm/mmu/tdp_mmu.c | 21 +++++++++++++++++++--
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index a72f0e3bde17..85fd99c92960 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -214,6 +214,17 @@  extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
  */
 #define FROZEN_SPTE	(SHADOW_NONPRESENT_VALUE | 0x5a0ULL)
 
+#define AD_SPTE_IGNORE_CHANGE_MASK				\
+	(PT_WRITABLE_MASK |                                     \
+	 shadow_host_writable_mask | shadow_mmu_writable_mask | \
+	 shadow_dirty_mask | shadow_accessed_mask)
+
+#define ACCESS_TRACK_SPTE_IGNORE_CHANGE_MASK	\
+	(AD_SPTE_IGNORE_CHANGE_MASK |		\
+	 shadow_acc_track_mask |		\
+	 (SHADOW_ACC_TRACK_SAVED_BITS_MASK <<	\
+	  SHADOW_ACC_TRACK_SAVED_BITS_SHIFT))
+
 /* Removed SPTEs must not be misconstrued as shadow present PTEs. */
 static_assert(!(FROZEN_SPTE & SPTE_MMU_PRESENT_MASK));
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 37b3769a5d32..1da3df517522 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1186,11 +1186,28 @@  static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 
-	if (unlikely(!fault->slot))
+	if (unlikely(!fault->slot)) {
 		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
-	else
+	} else {
 		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
 					 fault->pfn, iter->old_spte, fault->prefetch, true,
 					 fault->map_writable, &new_spte);
 
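+		/*
+		 * A present leaf SPTE whose level doesn't change should only
+		 * differ in bits covered by the relevant IGNORE_CHANGE_MASK;
+		 * WARN once on any other in-place change.
+		 */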
+		WARN_ON_ONCE(kvm_ad_enabled() &&
+			     is_shadow_present_pte(iter->old_spte) &&
+			     is_large_pte(iter->old_spte) == is_large_pte(new_spte) &&
+			     ~AD_SPTE_IGNORE_CHANGE_MASK &
+			     (iter->old_spte ^ new_spte));
+		WARN_ON_ONCE(!kvm_ad_enabled() &&
+			     is_shadow_present_pte(iter->old_spte) &&
+			     is_large_pte(iter->old_spte) == is_large_pte(new_spte) &&
+			     ~ACCESS_TRACK_SPTE_IGNORE_CHANGE_MASK &
+			     (iter->old_spte ^ new_spte));
+	}
+
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
 	else if (tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte))