Message ID | 20220728030452.484261-1-kai.huang@intel.com (mailing list archive) |
---|---|
State | New, archived |
Series | [v3] KVM: x86/mmu: Fix the comment around kvm_tdp_mmu_zap_leafs() |
On 7/28/22 05:04, Kai Huang wrote:
> Now kvm_tdp_mmu_zap_leafs() only zaps leaf SPTEs but not any non-root
> pages within that GFN range anymore, so the comment around it isn't
> right.
>
> Fix it by shifting the comment from tdp_mmu_zap_leafs() instead of
> duplicating it, as tdp_mmu_zap_leafs() is static and is only called by
> kvm_tdp_mmu_zap_leafs().
>
> Opportunistically tweak the blurb about SPTEs being cleared to (a) say
> "zapped" instead of "cleared" because "cleared" will be wrong if/when
> KVM allows a non-zero value for non-present SPTE (i.e. for Intel TDX),
> and (b) to clarify that a flush is needed if and only if a SPTE has been
> zapped since MMU lock was last acquired.
>
> Fixes: f47e5bbbc92f ("KVM: x86/mmu: Zap only TDP MMU leafs in zap range and mmu_notifier unmap")
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Reviewed-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> ---
> v2->v3:
>
>  - s/leafs/leaf
>  - Added Sean's Reviewed-by.
>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 10 +++-------
>  1 file changed, 3 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 40ccb5fba870..bf2ccf9debca 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -924,9 +924,6 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>  }
>
>  /*
> - * Zap leafs SPTEs for the range of gfns, [start, end). Returns true if SPTEs
> - * have been cleared and a TLB flush is needed before releasing the MMU lock.
> - *
>   * If can_yield is true, will release the MMU lock and reschedule if the
>   * scheduler needs the CPU or there is contention on the MMU lock. If this
>   * function cannot yield, it will not release the MMU lock or reschedule and
> @@ -969,10 +966,9 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>  }
>
>  /*
> - * Tears down the mappings for the range of gfns, [start, end), and frees the
> - * non-root pages mapping GFNs strictly within that range. Returns true if
> - * SPTEs have been cleared and a TLB flush is needed before releasing the
> - * MMU lock.
> + * Zap leaf SPTEs for the range of gfns, [start, end), for all roots. Returns
> + * true if a TLB flush is needed before releasing the MMU lock, i.e. if one or
> + * more SPTEs were zapped since the MMU lock was last acquired.
>   */
>  bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
>  			   bool can_yield, bool flush)

Queued, thanks.

Paolo
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 40ccb5fba870..bf2ccf9debca 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -924,9 +924,6 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 }

 /*
- * Zap leafs SPTEs for the range of gfns, [start, end). Returns true if SPTEs
- * have been cleared and a TLB flush is needed before releasing the MMU lock.
- *
  * If can_yield is true, will release the MMU lock and reschedule if the
  * scheduler needs the CPU or there is contention on the MMU lock. If this
  * function cannot yield, it will not release the MMU lock or reschedule and
@@ -969,10 +966,9 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 }

 /*
- * Tears down the mappings for the range of gfns, [start, end), and frees the
- * non-root pages mapping GFNs strictly within that range. Returns true if
- * SPTEs have been cleared and a TLB flush is needed before releasing the
- * MMU lock.
+ * Zap leaf SPTEs for the range of gfns, [start, end), for all roots. Returns
+ * true if a TLB flush is needed before releasing the MMU lock, i.e. if one or
+ * more SPTEs were zapped since the MMU lock was last acquired.
  */
 bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
 			   bool can_yield, bool flush)
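For context, the hunks above only change comments; the body of kvm_tdp_mmu_zap_leafs() is not part of the diff. Below is a minimal sketch of what the exported wrapper plausibly looks like at this point in the tree; the iterator macro and the exact body are assumptions, not taken from the patch. It illustrates why the new comment can promise that the return value is true if and only if at least one leaf SPTE was zapped: the wrapper simply walks every TDP MMU root for the address space and accumulates the flush result from the static helper.

/*
 * Sketch only: this wrapper body is assumed, not shown in the patch above.
 * It delegates to the static tdp_mmu_zap_leafs() for each root in the given
 * address space and carries the "flush needed" flag across roots, so callers
 * flush remote TLBs only when at least one leaf SPTE was actually zapped
 * while the MMU lock was held.
 */
bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
			   bool can_yield, bool flush)
{
	struct kvm_mmu_page *root;

	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush);

	return flush;
}

A caller such as the mmu_notifier unmap path would then issue a remote TLB flush only when the returned flush value is true.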