Message ID | CAG48ez3SG6ngZLtasxJ6LABpOnqCz5-QHqb0B4k44TQ8F9n6+w@mail.gmail.com (mailing list archive)
---|---
State | New, archived
Series | [1/4] mm/gup_benchmark: Take the mmap lock around GUP
On Wed, Sep 30, 2020 at 3:19 AM Jann Horn <jannh@google.com> wrote:
> To be safe against concurrent changes to the VMA tree, we must take the
> mmap lock around GUP operations (excluding the GUP-fast family of
> operations, which will take the mmap lock by themselves if necessary).

(Sorry, my mail setup messed up the In-Reply-To headers, so you may not
see the patches threaded properly... let me know if you want me to
resend.)
On Tue, Sep 29, 2020 at 06:19:59PM -0700, Jann Horn wrote:
> To be safe against concurrent changes to the VMA tree, we must take the
> mmap lock around GUP operations (excluding the GUP-fast family of
> operations, which will take the mmap lock by themselves if necessary).
>
> This code is only for testing, and it's only reachable by root through
> debugfs, so this doesn't really have any impact; however, if we want to add
> lockdep asserts into the GUP path, we need to have clean locking here.
>
> Signed-off-by: Jann Horn <jannh@google.com>
> ---

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason
On 9/29/20 6:19 PM, Jann Horn wrote:
> To be safe against concurrent changes to the VMA tree, we must take the
> mmap lock around GUP operations (excluding the GUP-fast family of
> operations, which will take the mmap lock by themselves if necessary).
>
> This code is only for testing, and it's only reachable by root through
> debugfs, so this doesn't really have any impact; however, if we want to add
> lockdep asserts into the GUP path, we need to have clean locking here.
>
> Signed-off-by: Jann Horn <jannh@google.com>
> ---
> This series should go on top of the coredump locking series (in
> particular "mm/gup: Take mmap_lock in get_dump_page()"), which is
> already in the mm tree.

Looks good. This also merges properly and still has the right semantics,
even after my gup_benchmark overhaul [1]. (Just have to change
mm/gup_benchmark.c to mm/gup_test.c.) The new needs_mmap_lock rule here
still does the right thing, in other words, after adding the new
DUMP_USER_PAGES_TEST (a non-fast variant).

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

[1] https://lore.kernel.org/r/20200929212747.251804-1-jhubbard@nvidia.com

thanks,
On Tue, Sep 29, 2020 at 6:20 PM Jann Horn <jannh@google.com> wrote:
> To be safe against concurrent changes to the VMA tree, we must take the
> mmap lock around GUP operations (excluding the GUP-fast family of
> operations, which will take the mmap lock by themselves if necessary).
>
> This code is only for testing, and it's only reachable by root through
> debugfs, so this doesn't really have any impact; however, if we want to add
> lockdep asserts into the GUP path, we need to have clean locking here.
>
> Signed-off-by: Jann Horn <jannh@google.com>

Acked-by: Michel Lespinasse <walken@google.com>

Thanks for these cleanups :)
diff --git a/mm/gup_benchmark.c b/mm/gup_benchmark.c
index be690fa66a46..558595610650 100644
--- a/mm/gup_benchmark.c
+++ b/mm/gup_benchmark.c
@@ -71,6 +71,8 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
 	int nr;
 	struct page **pages;
 	int ret = 0;
+	bool needs_mmap_lock =
+		cmd != GUP_FAST_BENCHMARK && cmd != PIN_FAST_BENCHMARK;

 	if (gup->size > ULONG_MAX)
 		return -EINVAL;
@@ -80,6 +82,11 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
 	if (!pages)
 		return -ENOMEM;

+	if (needs_mmap_lock && mmap_read_lock_killable(current->mm)) {
+		ret = -EINTR;
+		goto free_pages;
+	}
+
 	i = 0;
 	nr = gup->nr_pages_per_call;
 	start_time = ktime_get();
@@ -119,9 +126,8 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
 					    NULL);
 		break;
 	default:
-		kvfree(pages);
 		ret = -EINVAL;
-		goto out;
+		goto unlock;
 	}

 	if (nr <= 0)
@@ -149,8 +155,11 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
 	end_time = ktime_get();
 	gup->put_delta_usec = ktime_us_delta(end_time, start_time);

+unlock:
+	if (needs_mmap_lock)
+		mmap_read_unlock(current->mm);
+free_pages:
 	kvfree(pages);
-out:
 	return ret;
 }
To be safe against concurrent changes to the VMA tree, we must take the
mmap lock around GUP operations (excluding the GUP-fast family of
operations, which will take the mmap lock by themselves if necessary).

This code is only for testing, and it's only reachable by root through
debugfs, so this doesn't really have any impact; however, if we want to add
lockdep asserts into the GUP path, we need to have clean locking here.

Signed-off-by: Jann Horn <jannh@google.com>
---
This series should go on top of the coredump locking series (in
particular "mm/gup: Take mmap_lock in get_dump_page()"), which is
already in the mm tree.

 mm/gup_benchmark.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

base-commit: fb0155a09b0224a7147cb07a4ce6034c8d29667f
prerequisite-patch-id: 08f97130a51898a5f6efddeeb5b42638577398c7
prerequisite-patch-id: 577664d761cd23fe9031ffdb1d3c9ac313572c67
prerequisite-patch-id: dc29a39716aa8689f80ba2767803d9df3709beaa
prerequisite-patch-id: 42b1b546d33391ead2753621f541bcc408af1769
prerequisite-patch-id: 2cbb839f57006f32e21f4229e099ae1bd782be24
prerequisite-patch-id: 1b4daf01cf61654a5ec54b5c3f7c7508be7244ee
prerequisite-patch-id: f46cc8c99f1909fe2a65fbc3cf1f6bc57489a086
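For context, the rule the commit message states is the standard calling convention for slow-path GUP: callers bracket the call with the mmap lock's read side themselves, while the `*_fast` variants take it internally only when they need to fall back. A kernel-style fragment for illustration (not a buildable module; `addr`, `nr_pages`, `gup_flags`, and `pinned` are made-up local names):

```
/* Slow-path GUP must run under the mmap lock; the fast variants
 * (e.g. get_user_pages_fast()) handle locking internally. */
mmap_read_lock(current->mm);
pinned = get_user_pages(addr, nr_pages, gup_flags, pages, NULL);
mmap_read_unlock(current->mm);
```

This is exactly the bracket the patch adds around the non-fast benchmark commands, which is what makes it safe to later add lockdep assertions inside the GUP path itself.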