[1/2] KVM: x86/mmu: Check yielded_gfn for forward progress iff resched is needed

Message ID 20241031170633.1502783-2-seanjc@google.com
State New
Series KVM: x86/mmu: Micro-optimize TDP MMU cond_resched()

Commit Message

Sean Christopherson Oct. 31, 2024, 5:06 p.m. UTC
Swap the order of the checks in tdp_mmu_iter_cond_resched() so that KVM
checks to see if a resched is needed _before_ checking to see if yielding
must be disallowed to guarantee forward progress.  Iterating over TDP MMU
SPTEs is a hot path, e.g. tearing down a root can touch millions of SPTEs,
and not needing to reschedule is by far the common case.  On the other
hand, disallowing yielding because forward progress has not been made is a
very rare case.

Returning early for the common case (no resched) effectively reduces the
number of checks from 2 to 1 in that case, and should make the code
slightly more predictable for the CPU.
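
To illustrate the reordering in condensed form (this just mirrors the
hunk in the patch below):

	/* Before: forward-progress check first, resched check second. */
	if (iter->next_last_level_gfn == iter->yielded_gfn)
		return false;

	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
		...
	}

	/* After: the common "no resched needed" case bails out immediately. */
	if (!need_resched() && !rwlock_needbreak(&kvm->mmu_lock))
		return false;

	if (iter->next_last_level_gfn == iter->yielded_gfn)
		return false;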

To resolve a weird conundrum where the forward progress check currently
returns false, but the need-resched check subtly returns iter->yielded,
which _should_ be false (enforced by a WARN), return false unconditionally
(which might also help make the sequence more predictable).  If KVM has a
bug where iter->yielded is left dangling, continuing to yield is neither
right nor wrong; it was simply an artifact of how the original code was
written.
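
In before/after terms, the tail of the function goes from

		iter->yielded = true;
	}

	return iter->yielded;

to

	iter->yielded = true;
	return true;

(shown without the diff markers; see the last hunk of the patch below).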

Unconditionally returning false when yielding is unnecessary or unwanted
will also allow extracting the "should resched" logic to a separate helper
in a future patch.
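
One possible shape for such a helper (purely an illustrative sketch; the
name tdp_mmu_iter_need_resched() and its exact form are assumptions, not
taken from this patch):

	static inline bool tdp_mmu_iter_need_resched(struct kvm *kvm,
						     struct tdp_iter *iter)
	{
		/* Common case: nothing is asking for the CPU or the lock. */
		if (!need_resched() && !rwlock_needbreak(&kvm->mmu_lock))
			return false;

		/* Only yield if forward progress has been made since the last yield. */
		return iter->next_last_level_gfn != iter->yielded_gfn;
	}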

Cc: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

Comments

James Houghton Oct. 31, 2024, 6:48 p.m. UTC | #1
On Thu, Oct 31, 2024 at 10:07 AM Sean Christopherson <seanjc@google.com> wrote:
>
> Swap the order of the checks in tdp_mmu_iter_cond_resched() so that KVM
> checks to see if a resched is needed _before_ checking to see if yielding
> must be disallowed to guarantee forward progress.  Iterating over TDP MMU
> SPTEs is a hot path, e.g. tearing down a root can touch millions of SPTEs,
> and not needing to reschedule is by far the common case.  On the other
> hand, disallowing yielding because forward progress has not been made is a
> very rare case.
>
> Returning early for the common case (no resched) effectively reduces the
> number of checks from 2 to 1 in that case, and should make the code
> slightly more predictable for the CPU.
>
> To resolve a weird conundrum where the forward progress check currently
> returns false, but the need-resched check subtly returns iter->yielded,
> which _should_ be false (enforced by a WARN), return false unconditionally
> (which might also help make the sequence more predictable).  If KVM has a
> bug where iter->yielded is left dangling, continuing to yield is neither
> right nor wrong; it was simply an artifact of how the original code was
> written.
>
> Unconditionally returning false when yielding is unnecessary or unwanted
> will also allow extracting the "should resched" logic to a separate helper
> in a future patch.
>
> Cc: David Matlack <dmatlack@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Feel free to add:

Reviewed-by: James Houghton <jthoughton@google.com>

Patch

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 91caa73a905b..a06f3d5cb651 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -700,29 +700,29 @@  static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
 {
 	WARN_ON_ONCE(iter->yielded);
 
+	if (!need_resched() && !rwlock_needbreak(&kvm->mmu_lock))
+		return false;
+
 	/* Ensure forward progress has been made before yielding. */
 	if (iter->next_last_level_gfn == iter->yielded_gfn)
 		return false;
 
-	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
-		if (flush)
-			kvm_flush_remote_tlbs(kvm);
+	if (flush)
+		kvm_flush_remote_tlbs(kvm);
 
-		rcu_read_unlock();
+	rcu_read_unlock();
 
-		if (shared)
-			cond_resched_rwlock_read(&kvm->mmu_lock);
-		else
-			cond_resched_rwlock_write(&kvm->mmu_lock);
+	if (shared)
+		cond_resched_rwlock_read(&kvm->mmu_lock);
+	else
+		cond_resched_rwlock_write(&kvm->mmu_lock);
 
-		rcu_read_lock();
+	rcu_read_lock();
 
-		WARN_ON_ONCE(iter->gfn > iter->next_last_level_gfn);
+	WARN_ON_ONCE(iter->gfn > iter->next_last_level_gfn);
 
-		iter->yielded = true;
-	}
-
-	return iter->yielded;
+	iter->yielded = true;
+	return true;
 }
 
 static inline gfn_t tdp_mmu_max_gfn_exclusive(void)