| Message ID | 20190520213945.17046-3-daniel.vetter@ffwll.ch (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [1/4] mm: Check if mmu notifier callbacks are allowed to fail |
On Mon, May 20, 2019 at 11:39:44PM +0200, Daniel Vetter wrote:
> We need to make sure implementations don't cheat and don't have a
> possible schedule/blocking point deeply buried where review can't
> catch it.
>
> I'm not sure whether this is the best way to make sure all the
> might_sleep() callsites trigger, and it's a bit ugly in the code flow.
> But it gets the job done.
>
> Inspired by an i915 patch series which did exactly that, because the
> rules haven't been entirely clear to us.
>
> v2: Use the shiny new non_block_start/end annotations instead of
> abusing preempt_disable/enable.
>
> v3: Rebase on top of Glisse's arg rework.
>
> v4: Rebase on top of more Glisse rework.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: "Jérôme Glisse" <jglisse@redhat.com>
> Cc: linux-mm@kvack.org
> Reviewed-by: Christian König <christian.koenig@amd.com>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> ---
>  mm/mmu_notifier.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index c05e406a7cd7..a09e737711d5 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -176,7 +176,13 @@ int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
>  	id = srcu_read_lock(&srcu);
>  	hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) {
>  		if (mn->ops->invalidate_range_start) {
> -			int _ret = mn->ops->invalidate_range_start(mn, range);
> +			int _ret;
> +
> +			if (!mmu_notifier_range_blockable(range))
> +				non_block_start();
> +			_ret = mn->ops->invalidate_range_start(mn, range);
> +			if (!mmu_notifier_range_blockable(range))
> +				non_block_end();

This is a taste thing, so feel free to ignore it, as maybe others will
dislike more what I prefer:

+			if (!mmu_notifier_range_blockable(range)) {
+				non_block_start();
+				_ret = mn->ops->invalidate_range_start(mn, range);
+				non_block_end();
+			} else
+				_ret = mn->ops->invalidate_range_start(mn, range);

If only we had predicates on the CPU like on the GPU :)

In any case:

Reviewed-by: Jérôme Glisse <jglisse@redhat.com>

>  			if (_ret) {
>  				pr_info("%pS callback failed with %d in %sblockable context.\n",
>  					mn->ops->invalidate_range_start, _ret,
> --
> 2.20.1
>
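For context, the non_block_start()/non_block_end() annotations used in the
hunk above are introduced in patch 1/4 of this series. As a minimal sketch of
how such annotations can be wired up (the per-task field name
current->non_block_count is an assumption here, not a quote of that patch),
the idea is a nesting counter that the might_sleep() machinery checks:

```c
/*
 * Sketch only; see patch 1/4 of the series for the real implementation.
 * With a per-task nesting counter, any blocking primitive that runs
 * might_sleep() between the two annotations can produce a splat.
 */
#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
# define non_block_start()	(current->non_block_count++)
# define non_block_end()	WARN_ON(current->non_block_count-- == 0)
#else
# define non_block_start()	do { } while (0)
# define non_block_end()	do { } while (0)
#endif
```

Unlike the preempt_disable()/preempt_enable() pair that v1 abused, this does
not change scheduling behaviour at all; it only arms the debug check.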
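To make the contract being enforced concrete, here is a hedged sketch of a
driver-side invalidate_range_start() callback that honors
mmu_notifier_range_blockable(): in non-blockable context (e.g. the OOM
reaper) it must not sleep, so it trylocks and bails out with -EAGAIN instead.
struct my_dev, my_dev_from_mn() and my_dev_zap_mapping() are invented for
illustration:

```c
#include <linux/kernel.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

struct my_dev {
	struct mmu_notifier mn;
	struct mutex invalidate_lock;
	/* ... device mappings, page tables, etc. ... */
};

/* Hypothetical helpers, named for this sketch only. */
#define my_dev_from_mn(notifier) container_of(notifier, struct my_dev, mn)
static void my_dev_zap_mapping(struct my_dev *dev,
			       unsigned long start, unsigned long end);

static int my_invalidate_range_start(struct mmu_notifier *mn,
				     const struct mmu_notifier_range *range)
{
	struct my_dev *dev = my_dev_from_mn(mn);

	if (!mmu_notifier_range_blockable(range)) {
		/* Non-blockable context: never sleep, trylock or give up. */
		if (!mutex_trylock(&dev->invalidate_lock))
			return -EAGAIN;
	} else {
		mutex_lock(&dev->invalidate_lock);
	}

	my_dev_zap_mapping(dev, range->start, range->end);
	mutex_unlock(&dev->invalidate_lock);
	return 0;
}
```

With the patch above in place, a callback that sleeps anyway in the
non-blockable case trips the non_block annotations instead of slipping past
review.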