
[2/3] KVM: x86/mmu: Add lockdep assert to enforce safe usage of kvm_unmap_gfn_range()

Message ID 20241009192345.1148353-3-seanjc@google.com (mailing list archive)
State New, archived
Series KVM: x86/mmu: Don't zap "direct" non-leaf SPTEs on memslot removal

Commit Message

Sean Christopherson Oct. 9, 2024, 7:23 p.m. UTC
Add a lockdep assertion in kvm_unmap_gfn_range() to ensure that either
mmu_invalidate_in_progress is elevated, or that the range is being zapped
due to memslot removal (loosely detected by slots_lock being held).
Zapping SPTEs without mmu_invalidate_{in_progress,seq} protection is unsafe
as KVM's page fault path snapshots state before acquiring mmu_lock, and
thus can create SPTEs with stale information if vCPUs aren't forced to
retry faults (due to seeing an in-progress or past MMU invalidation).

Memslot removal is a special case, as the memslot is retrieved outside of
mmu_invalidate_seq, i.e. doesn't use the "standard" protections, and
instead relies on SRCU synchronization to ensure any in-flight page faults
are fully resolved before zapping SPTEs.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
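
A minimal sketch (not KVM code) of the snapshot-then-recheck pattern the commit
message relies on: the fault handler snapshots mmu_invalidate_seq before taking
mmu_lock and, once under mmu_lock, retries if an invalidation has started or
completed in the meantime.  Only the mmu_invalidate_{seq,in_progress} field
names mirror KVM's; the struct, helper names, and the omission of the gfn-range
check are simplifications.

#include <errno.h>
#include <stdbool.h>

struct vm {
	unsigned long mmu_invalidate_seq;	/* bumped when an invalidation completes */
	long mmu_invalidate_in_progress;	/* non-zero while an invalidation is running */
	/* mmu_lock, page tables, memslots, etc. omitted */
};

static bool fault_is_stale(struct vm *vm, unsigned long mmu_seq)
{
	/*
	 * Retry if an invalidation is in flight or has completed since the
	 * snapshot was taken: the pfn/memslot data gathered outside mmu_lock
	 * may describe a mapping that has since been zapped.
	 */
	return vm->mmu_invalidate_in_progress || vm->mmu_invalidate_seq != mmu_seq;
}

static int handle_fault(struct vm *vm)
{
	unsigned long mmu_seq = vm->mmu_invalidate_seq;	/* snapshot, taken before mmu_lock */

	/* ... resolve the pfn and memslot without holding mmu_lock ... */

	/* with mmu_lock held: */
	if (fault_is_stale(vm, mmu_seq))
		return -EAGAIN;		/* force the vCPU to retry the fault */

	/* ... safe to install the SPTE ... */
	return 0;
}

The assertion added by this patch enforces the writer side of that handshake:
anyone zapping SPTEs must either have elevated mmu_invalidate_in_progress or be
on the SRCU-protected memslot-removal path.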

Comments

Yan Zhao Oct. 10, 2024, 7:01 a.m. UTC | #1
On Wed, Oct 09, 2024 at 12:23:44PM -0700, Sean Christopherson wrote:
> Add a lockdep assertion in kvm_unmap_gfn_range() to ensure that either
> mmu_invalidate_in_progress is elevated, or that the range is being zapped
> due to memslot removal (loosely detected by slots_lock being held).
> Zapping SPTEs without mmu_invalidate_{in_progress,seq} protection is unsafe
> as KVM's page fault path snapshots state before acquiring mmu_lock, and
> thus can create SPTEs with stale information if vCPUs aren't forced to
> retry faults (due to seeing an in-progress or past MMU invalidation).
> 
> Memslot removal is a special case, as the memslot is retrieved outside of
> mmu_invalidate_seq, i.e. doesn't use the "standard" protections, and
> instead relies on SRCU synchronization to ensure any in-flight page faults
> are fully resolved before zapping SPTEs.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 09494d01c38e..c6716fd3666f 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1556,6 +1556,16 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
>  	bool flush = false;
>  
> +	/*
> +	 * To prevent races with vCPUs faulting in a gfn using stale data,
> +	 * zapping a gfn range must be protected by mmu_invalidate_in_progress
> +	 * (and mmu_invalidate_seq).  The only exception is memslot deletion,
> +	 * in which case SRCU synchronization ensures SPTEs are zapped after all
> +	 * vCPUs have unlocked SRCU and are guaranteed to see the invalid slot.
> +	 */
> +	lockdep_assert_once(kvm->mmu_invalidate_in_progress ||
> +			    lockdep_is_held(&kvm->slots_lock));
> +
Is the detection of slots_lock too loose?
If a caller just holds slots_lock without calling
"synchronize_srcu_expedited(&kvm->srcu)" as that in kvm_swap_active_memslots()
to ensure the old slot is retired, stale data may still be encountered. 

>  	if (kvm_memslots_have_rmaps(kvm))
>  		flush = __kvm_rmap_zap_gfn_range(kvm, range->slot,
>  						 range->start, range->end,
> -- 
> 2.47.0.rc1.288.g06298d1525-goog
>
Sean Christopherson Oct. 10, 2024, 4:14 p.m. UTC | #2
On Thu, Oct 10, 2024, Yan Zhao wrote:
> On Wed, Oct 09, 2024 at 12:23:44PM -0700, Sean Christopherson wrote:
> > Add a lockdep assertion in kvm_unmap_gfn_range() to ensure that either
> > mmu_invalidate_in_progress is elevated, or that the range is being zapped
> > due to memslot removal (loosely detected by slots_lock being held).
> > Zapping SPTEs without mmu_invalidate_{in_progress,seq} protection is unsafe
> > as KVM's page fault path snapshots state before acquiring mmu_lock, and
> > thus can create SPTEs with stale information if vCPUs aren't forced to
> > retry faults (due to seeing an in-progress or past MMU invalidation).
> > 
> > Memslot removal is a special case, as the memslot is retrieved outside of
> > mmu_invalidate_seq, i.e. doesn't use the "standard" protections, and
> > instead relies on SRCU synchronization to ensure any in-flight page faults
> > are fully resolved before zapping SPTEs.
> > 
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> > 
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 09494d01c38e..c6716fd3666f 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -1556,6 +1556,16 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
> >  {
> >  	bool flush = false;
> >  
> > +	/*
> > +	 * To prevent races with vCPUs faulting in a gfn using stale data,
> > +	 * zapping a gfn range must be protected by mmu_invalidate_in_progress
> > +	 * (and mmu_invalidate_seq).  The only exception is memslot deletion,
> > +	 * in which case SRCU synchronization ensures SPTEs are zapped after all
> > +	 * vCPUs have unlocked SRCU and are guaranteed to see the invalid slot.
> > +	 */
> > +	lockdep_assert_once(kvm->mmu_invalidate_in_progress ||
> > +			    lockdep_is_held(&kvm->slots_lock));
> > +
> Is the detection of slots_lock too loose?

Yes, but I can't think of an easy way to tighten it.  My original thought was to
require range->slot to be invalid, but KVM (correctly) passes in the old, valid
memslot to kvm_arch_flush_shadow_memslot().

The goal with the assert is to detect as many bugs as possible, without adding
too much complexity, and also to document the rules for using kvm_unmap_gfn_range().

Actually, we can tighten the check, by verifying that the slot being unmapped is
valid, but that the slot that KVM sees is invalid.  I'm not sure I love it though,
as it's absurdly specific.

(untested)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c6716fd3666f..12b87b746b59 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1552,6 +1552,17 @@ static bool __kvm_rmap_zap_gfn_range(struct kvm *kvm,
                                 start, end - 1, can_yield, true, flush);
 }
 
+static bool kvm_memslot_is_being_invalidated(struct kvm *kvm, const struct kvm_memory_slot *old)
+{
+       const struct kvm_memory_slot *new;
+
+       if (old->flags & KVM_MEMSLOT_INVALID)
+               return false;
+
+       new = id_to_memslot(__kvm_memslots(kvm, old->as_id), old->id);
+       return new && new->flags & KVM_MEMSLOT_INVALID;
+}
+
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
        bool flush = false;
@@ -1564,7 +1575,8 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
         * vCPUs have unlocked SRCU and are guaranteed to see the invalid slot.
         */
        lockdep_assert_once(kvm->mmu_invalidate_in_progress ||
-                           lockdep_is_held(&kvm->slots_lock));
+                           (lockdep_is_held(&kvm->slots_lock) &&
+                            kvm_memslot_is_being_invalidated(kvm, range->slot)));
 
        if (kvm_memslots_have_rmaps(kvm))
                flush = __kvm_rmap_zap_gfn_range(kvm, range->slot,


> If a caller just holds slots_lock without calling
> "synchronize_srcu_expedited(&kvm->srcu)" as that in kvm_swap_active_memslots()
> to ensure the old slot is retired, stale data may still be encountered. 
> 
> >  	if (kvm_memslots_have_rmaps(kvm))
> >  		flush = __kvm_rmap_zap_gfn_range(kvm, range->slot,
> >  						 range->start, range->end,
> > -- 
> > 2.47.0.rc1.288.g06298d1525-goog
> >
Yan Zhao Oct. 11, 2024, 5:37 a.m. UTC | #3
On Thu, Oct 10, 2024 at 09:14:41AM -0700, Sean Christopherson wrote:
> On Thu, Oct 10, 2024, Yan Zhao wrote:
> > On Wed, Oct 09, 2024 at 12:23:44PM -0700, Sean Christopherson wrote:
> > > Add a lockdep assertion in kvm_unmap_gfn_range() to ensure that either
> > > mmu_invalidate_in_progress is elevated, or that the range is being zapped
> > > due to memslot removal (loosely detected by slots_lock being held).
> > > Zapping SPTEs without mmu_invalidate_{in_progress,seq} protection is unsafe
> > > as KVM's page fault path snapshots state before acquiring mmu_lock, and
> > > thus can create SPTEs with stale information if vCPUs aren't forced to
> > > retry faults (due to seeing an in-progress or past MMU invalidation).
> > > 
> > > Memslot removal is a special case, as the memslot is retrieved outside of
> > > mmu_invalidate_seq, i.e. doesn't use the "standard" protections, and
> > > instead relies on SRCU synchronization to ensure any in-flight page faults
> > > are fully resolved before zapping SPTEs.
> > > 
> > > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > > ---
> > >  arch/x86/kvm/mmu/mmu.c | 10 ++++++++++
> > >  1 file changed, 10 insertions(+)
> > > 
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index 09494d01c38e..c6716fd3666f 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -1556,6 +1556,16 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
> > >  {
> > >  	bool flush = false;
> > >  
> > > +	/*
> > > +	 * To prevent races with vCPUs faulting in a gfn using stale data,
> > > +	 * zapping a gfn range must be protected by mmu_invalidate_in_progress
> > > +	 * (and mmu_invalidate_seq).  The only exception is memslot deletion,
> > > +	 * in which case SRCU synchronization ensures SPTEs a zapped after all
> > > +	 * in which case SRCU synchronization ensures SPTEs are zapped after all
> > > +	 */
> > > +	lockdep_assert_once(kvm->mmu_invalidate_in_progress ||
> > > +			    lockdep_is_held(&kvm->slots_lock));
> > > +
> > Is the detection of slots_lock too loose?
> 
> Yes, but I can't think of an easy way to tighten it.  My original thought was to
> require range->slot to be invalid, but KVM (correctly) passes in the old, valid
> memslot to kvm_arch_flush_shadow_memslot().
> 
> The goal with the assert is to detect as many bugs as possible, without adding
> too much complexity, and also to document the rules for using kvm_unmap_gfn_range().
> 
> Actually, we can tighten the check, by verifying that the slot being unmapped is
> valid, but that the slot that KVM sees is invalid.  I'm not sure I love it though,
> as it's absurdly specific.
Right. It doesn't reflect the wait in kvm_swap_active_memslots() for the old
slot.

  CPU 0                  CPU 1
1. fault on old begins
                       2. swap to new
                       3. zap old
4. fault on old ends

Without CPU 1 waiting, between 2 and 3, for 1 and 4 to complete, stale data is
still possible.

So, with the current code, the check in kvm_memslot_is_being_invalidated() only
indicates that the caller is kvm_arch_flush_shadow_memslot().

Given that, how do you feel about passing in a "bool is_flush_slot" to indicate
the caller and asserting?
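
For reference, a minimal sketch of the publish-then-synchronize ordering in
kvm_invalidate_memslot()/kvm_swap_active_memslots() that provides the wait in
the race diagram above.  The wrapper function and its parameters are invented
for illustration; rcu_assign_pointer(), synchronize_srcu_expedited() on
kvm->srcu, and kvm_arch_flush_shadow_memslot() are the actual primitives.

#include <linux/kvm_host.h>

static void invalidate_and_zap_sketch(struct kvm *kvm, int as_id,
				      struct kvm_memslots *slots_with_invalid_old,
				      struct kvm_memory_slot *old)
{
	/* 2. publish the memslots in which "old" is marked KVM_MEMSLOT_INVALID */
	rcu_assign_pointer(kvm->memslots[as_id], slots_with_invalid_old);

	/*
	 * Wait for all SRCU readers to drain, i.e. for every vCPU fault path
	 * that looked up "old" under srcu_read_lock(&kvm->srcu) to finish.
	 * This is what orders steps 1/4 before step 3 in the diagram above.
	 */
	synchronize_srcu_expedited(&kvm->srcu);

	/* 3. no in-flight fault can install a SPTE for "old" anymore */
	kvm_arch_flush_shadow_memslot(kvm, old);
}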

> (untested)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index c6716fd3666f..12b87b746b59 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1552,6 +1552,17 @@ static bool __kvm_rmap_zap_gfn_range(struct kvm *kvm,
>                                  start, end - 1, can_yield, true, flush);
>  }
>  
> +static bool kvm_memslot_is_being_invalidated(struct kvm *kvm, const struct kvm_memory_slot *old)
> +{
> +       const struct kvm_memory_slot *new;
> +
> +       if (old->flags & KVM_MEMSLOT_INVALID)
> +               return false;
> +
> +       new = id_to_memslot(__kvm_memslots(kvm, old->as_id), old->id);
> +       return new && new->flags & KVM_MEMSLOT_INVALID;
> +}
> +
>  bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
>         bool flush = false;
> @@ -1564,7 +1575,8 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>          * vCPUs have unlocked SRCU and are guaranteed to see the invalid slot.
>          */
>         lockdep_assert_once(kvm->mmu_invalidate_in_progress ||
> -                           lockdep_is_held(&kvm->slots_lock));
> +                           (lockdep_is_held(&kvm->slots_lock) &&
> +                            kvm_memslot_is_being_invalidated(kvm, range->slot)));
>  
>         if (kvm_memslots_have_rmaps(kvm))
>                 flush = __kvm_rmap_zap_gfn_range(kvm, range->slot,
> 
> 
> > If a caller just holds slots_lock without calling
> > "synchronize_srcu_expedited(&kvm->srcu)" as that in kvm_swap_active_memslots()
> > to ensure the old slot is retired, stale data may still be encountered. 
> > 
> > >  	if (kvm_memslots_have_rmaps(kvm))
> > >  		flush = __kvm_rmap_zap_gfn_range(kvm, range->slot,
> > >  						 range->start, range->end,
> > > -- 
> > > 2.47.0.rc1.288.g06298d1525-goog
> > >
Sean Christopherson Oct. 11, 2024, 9:22 p.m. UTC | #4
On Fri, Oct 11, 2024, Yan Zhao wrote:
> On Thu, Oct 10, 2024 at 09:14:41AM -0700, Sean Christopherson wrote:
> > On Thu, Oct 10, 2024, Yan Zhao wrote:
> > > On Wed, Oct 09, 2024 at 12:23:44PM -0700, Sean Christopherson wrote:
> > > > Add a lockdep assertion in kvm_unmap_gfn_range() to ensure that either
> > > > mmu_invalidate_in_progress is elevated, or that the range is being zapped
> > > > due to memslot removal (loosely detected by slots_lock being held).
> > > > Zapping SPTEs without mmu_invalidate_{in_progress,seq} protection is unsafe
> > > > as KVM's page fault path snapshots state before acquiring mmu_lock, and
> > > > thus can create SPTEs with stale information if vCPUs aren't forced to
> > > > retry faults (due to seeing an in-progress or past MMU invalidation).
> > > > 
> > > > Memslot removal is a special case, as the memslot is retrieved outside of
> > > > mmu_invalidate_seq, i.e. doesn't use the "standard" protections, and
> > > > instead relies on SRCU synchronization to ensure any in-flight page faults
> > > > are fully resolved before zapping SPTEs.
> > > > 
> > > > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > > > ---
> > > >  arch/x86/kvm/mmu/mmu.c | 10 ++++++++++
> > > >  1 file changed, 10 insertions(+)
> > > > 
> > > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > > index 09494d01c38e..c6716fd3666f 100644
> > > > --- a/arch/x86/kvm/mmu/mmu.c
> > > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > > @@ -1556,6 +1556,16 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
> > > >  {
> > > >  	bool flush = false;
> > > >  
> > > > +	/*
> > > > +	 * To prevent races with vCPUs faulting in a gfn using stale data,
> > > > +	 * zapping a gfn range must be protected by mmu_invalidate_in_progress
> > > > +	 * (and mmu_invalidate_seq).  The only exception is memslot deletion,
> > > > +	 * in which case SRCU synchronization ensures SPTEs are zapped after all
> > > > +	 * vCPUs have unlocked SRCU and are guaranteed to see the invalid slot.
> > > > +	 */
> > > > +	lockdep_assert_once(kvm->mmu_invalidate_in_progress ||
> > > > +			    lockdep_is_held(&kvm->slots_lock));
> > > > +
> > > Is the detection of slots_lock too loose?
> > 
> > Yes, but I can't think of an easy way to tighten it.  My original thought was to
> > require range->slot to be invalid, but KVM (correctly) passes in the old, valid
> > memslot to kvm_arch_flush_shadow_memslot().
> > 
> > The goal with the assert is to detect as many bugs as possible, without adding
> > too much complexity, and also to document the rules for using kvm_unmap_gfn_range().
> > 
> > Actually, we can tighten the check, by verifying that the slot being unmapped is
> > valid, but that the slot that KVM sees is invalid.  I'm not sure I love it though,
> > as it's absurdly specific.
> Right. It doesn't reflect the wait in kvm_swap_active_memslots() for the old
> slot.
> 
>   CPU 0                  CPU 1
> 1. fault on old begins
>                        2. swap to new
> 		       3. zap old
> 4. fault on old ends
> 
> Without CPU 1 waiting for 1&4 complete between 2&3, stale data is still
> possible.
> 
> So, the detection in kvm_memslot_is_being_invalidated() only indicates the
> caller is from kvm_arch_flush_shadow_memslot() with current code.

Yep, which is why I don't love it.

> Given that, how do you feel about passing in a "bool is_flush_slot" to indicate
> the caller and asserting?

I like it even less than the ugliness I proposed :-)  It'd basically be an "I pinky
swear I know what I'm doing" flag, and I think the downsides of having true/false
literals in the code would outweigh the upside of the precise assertion.

Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 09494d01c38e..c6716fd3666f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1556,6 +1556,16 @@  bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool flush = false;
 
+	/*
+	 * To prevent races with vCPUs faulting in a gfn using stale data,
+	 * zapping a gfn range must be protected by mmu_invalidate_in_progress
+	 * (and mmu_invalidate_seq).  The only exception is memslot deletion,
+	 * in which case SRCU synchronization ensures SPTEs are zapped after all
+	 * vCPUs have unlocked SRCU and are guaranteed to see the invalid slot.
+	 */
+	lockdep_assert_once(kvm->mmu_invalidate_in_progress ||
+			    lockdep_is_held(&kvm->slots_lock));
+
 	if (kvm_memslots_have_rmaps(kvm))
 		flush = __kvm_rmap_zap_gfn_range(kvm, range->slot,
 						 range->start, range->end,