Message ID | 20190820081902.24815-2-daniel.vetter@ffwll.ch (mailing list archive)
---|---
State | New, archived
Series | mmu notifier debug annotations/checks
On Tue, Aug 20, 2019 at 10:18:59AM +0200, Daniel Vetter wrote:
> This is a similar idea to the fs_reclaim fake lockdep lock. It's
> fairly easy to provoke a specific notifier to be run on a specific
> range: just prep it, and then munmap() it.
>
> A bit harder, but still doable, is to provoke the mmu notifiers for
> all the various callchains that might lead to them. But both at the
> same time is really hard to hit reliably, especially when you want
> to exercise paths like direct reclaim or compaction, where it's not
> easy to control what exactly will be unmapped.
>
> By introducing a lockdep map to tie them all together we allow
> lockdep to see a lot more dependencies, without having to actually
> hit them in a single callchain while testing.
>
> On Jason's suggestion this is rolled out for both
> invalidate_range_start and invalidate_range_end. They both have the
> same calling context, hence we can share the same lockdep map. Note
> that the annotation for invalidate_range_start is outside of the
> mm_has_notifiers() check, to make sure lockdep is informed about all
> paths leading to this context irrespective of whether mmu notifiers
> are present for a given context. We don't do that on the
> invalidate_range_end side to avoid paying the overhead twice; there
> the lockdep annotation is pushed down behind the mm_has_notifiers()
> check.
>
> v2: Use lock_map_acquire/release() like fs_reclaim, to avoid
> confusion with this being a real mutex (Chris Wilson).
>
> v3: Rebase on top of Glisse's arg rework.
>
> v4: Also annotate invalidate_range_end (Jason Gunthorpe)
> Also annotate invalidate_range_start_nonblock, I somehow missed that
> one in the first version.
>
> Cc: Jason Gunthorpe <jgg@ziepe.ca>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: "Jérôme Glisse" <jglisse@redhat.com>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
> Cc: linux-mm@kvack.org
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> ---
>  include/linux/mmu_notifier.h | 8 ++++++++
>  mm/mmu_notifier.c            | 9 +++++++++
>  2 files changed, 17 insertions(+)

Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>

Jason
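The fs_reclaim comparison above is the core of the patch: lockdep tracks dependencies on a lockdep_map, not on a real lock, so acquiring and releasing a map that serializes nothing still records every lock held around that point and every lock taken inside it. A minimal sketch of the pattern, assuming CONFIG_LOCKDEP and using hypothetical names (my_ctx_map, my_ctx_enter, my_ctx_exit are illustrative, not part of the patch):

#include <linux/lockdep.h>

#ifdef CONFIG_LOCKDEP
/*
 * A "fake" lock: it protects nothing.  It exists only so that lockdep
 * records (a) every lock held while this context runs and (b) every
 * lock taken from inside it, tying all callchains that can reach this
 * context into one dependency graph.
 */
static struct lockdep_map my_ctx_map = {
	.name = "my_ctx",
};
#endif

static void my_ctx_enter(void)
{
	lock_map_acquire(&my_ctx_map);
	/* ... run the annotated context, e.g. call the notifiers ... */
}

static void my_ctx_exit(void)
{
	lock_map_release(&my_ctx_map);
}

Because every entry point shares the one map, each callchain only has to be hit once, in isolation, for lockdep to combine them into cycles; that is exactly what the patch does for invalidate_range_start/end below.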
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index b6c004bd9f6a..39a86b77a939 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -42,6 +42,10 @@ enum mmu_notifier_event {
 
 #ifdef CONFIG_MMU_NOTIFIER
 
+#ifdef CONFIG_LOCKDEP
+extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
+#endif
+
 /*
  * The mmu notifier_mm structure is allocated and installed in
  * mm->mmu_notifier_mm inside the mm_take_all_locks() protected
@@ -310,19 +314,23 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
 static inline void
 mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
 {
+	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
 	if (mm_has_notifiers(range->mm)) {
 		range->flags |= MMU_NOTIFIER_RANGE_BLOCKABLE;
 		__mmu_notifier_invalidate_range_start(range);
 	}
+	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 }
 
 static inline int
 mmu_notifier_invalidate_range_start_nonblock(struct mmu_notifier_range *range)
 {
+	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
 	if (mm_has_notifiers(range->mm)) {
 		range->flags &= ~MMU_NOTIFIER_RANGE_BLOCKABLE;
 		return __mmu_notifier_invalidate_range_start(range);
 	}
+	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 	return 0;
 }
 
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 16f1cbc775d0..d12e3079e7a4 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -21,6 +21,13 @@
 /* global SRCU for all MMs */
 DEFINE_STATIC_SRCU(srcu);
 
+#ifdef CONFIG_LOCKDEP
+struct lockdep_map __mmu_notifier_invalidate_range_start_map = {
+	.name = "mmu_notifier_invalidate_range_start"
+};
+EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start_map);
+#endif
+
 /*
  * This function allows mmu_notifier::release callback to delay a call to
  * a function that will free appropriate resources. The function must be
@@ -197,6 +204,7 @@ void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
 	struct mmu_notifier *mn;
 	int id;
 
+	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
 	id = srcu_read_lock(&srcu);
 	hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) {
 		/*
@@ -220,6 +228,7 @@ void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
 			mn->ops->invalidate_range_end(mn, range);
 	}
 	srcu_read_unlock(&srcu, id);
+	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 }
 EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_end);
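To make concrete what the shared map buys, consider a hypothetical driver fragment (struct my_dev, my_invalidate_range_start and my_dev_alloc_under_lock are invented for illustration and registration via mmu_notifier_register() is omitted; only the invalidate_range_start callback signature is taken from this kernel):

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct my_dev {
	struct mmu_notifier mn;
	struct mutex lock;
	void *scratch;
};

/*
 * Notifier callback that takes dev->lock: lockdep records
 * __mmu_notifier_invalidate_range_start_map -> dev->lock.
 */
static int my_invalidate_range_start(struct mmu_notifier *mn,
				     const struct mmu_notifier_range *range)
{
	struct my_dev *dev = container_of(mn, struct my_dev, mn);

	mutex_lock(&dev->lock);
	/* ... drop device references to range->start..range->end ... */
	mutex_unlock(&dev->lock);
	return 0;
}

static void my_dev_alloc_under_lock(struct my_dev *dev)
{
	mutex_lock(&dev->lock);
	/*
	 * GFP_KERNEL allocations take the fs_reclaim map, and direct
	 * reclaim can call invalidate_range_start.  Lockdep can now
	 * connect dev->lock -> fs_reclaim ->
	 * __mmu_notifier_invalidate_range_start_map -> dev->lock and
	 * flag the potential deadlock without it ever happening here.
	 */
	dev->scratch = kmalloc(4096, GFP_KERNEL);
	mutex_unlock(&dev->lock);
}

Neither path has to actually recurse from reclaim into the notifier during testing: acquiring the fake map on every invalidate_range_start, plus the existing fs_reclaim annotation on the allocation side, is enough for lockdep to close the cycle.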