Message ID | b5a5626-2684-899d-874b-801e7974b17@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm: lock newly mapped VMA with corrected ordering |
On Sat, Jul 8, 2023 at 4:04 PM Hugh Dickins <hughd@google.com> wrote:
>
> Lockdep is certainly right to complain about
> (&vma->vm_lock->lock){++++}-{3:3}, at: vma_start_write+0x2d/0x3f
> but task is already holding lock:
> (&mapping->i_mmap_rwsem){+.+.}-{3:3}, at: mmap_region+0x4dc/0x6db
> Invert those to the usual ordering.

Doh! Thanks Hugh!
I totally forgot to run this with lockdep enabled :(

>
> Fixes: 33313a747e81 ("mm: lock newly mapped VMA which can be modified after it becomes visible")
> Cc: stable@vger.kernel.org
> Signed-off-by: Hugh Dickins <hughd@google.com>
> ---
>  mm/mmap.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 84c71431a527..3eda23c9ebe7 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2809,11 +2809,11 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>  	if (vma_iter_prealloc(&vmi))
>  		goto close_and_free_vma;
>
> +	/* Lock the VMA since it is modified after insertion into VMA tree */
> +	vma_start_write(vma);
>  	if (vma->vm_file)
>  		i_mmap_lock_write(vma->vm_file->f_mapping);
>
> -	/* Lock the VMA since it is modified after insertion into VMA tree */
> -	vma_start_write(vma);
>  	vma_iter_store(&vmi, vma);
>  	mm->map_count++;
>  	if (vma->vm_file) {
> --
> 2.35.3
On Sat, Jul 8, 2023 at 4:10 PM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Sat, Jul 8, 2023 at 4:04 PM Hugh Dickins <hughd@google.com> wrote:
> >
> > Lockdep is certainly right to complain about
> > (&vma->vm_lock->lock){++++}-{3:3}, at: vma_start_write+0x2d/0x3f
> > but task is already holding lock:
> > (&mapping->i_mmap_rwsem){+.+.}-{3:3}, at: mmap_region+0x4dc/0x6db
> > Invert those to the usual ordering.
>
> Doh! Thanks Hugh!
> I totally forgot to run this with lockdep enabled :(

I verified both the lockdep warning and the fix. Thanks again, Hugh!

Tested-by: Suren Baghdasaryan <surenb@google.com>

> >
> > Fixes: 33313a747e81 ("mm: lock newly mapped VMA which can be modified after it becomes visible")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Hugh Dickins <hughd@google.com>
> > ---
> >  mm/mmap.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 84c71431a527..3eda23c9ebe7 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -2809,11 +2809,11 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
> >  	if (vma_iter_prealloc(&vmi))
> >  		goto close_and_free_vma;
> >
> > +	/* Lock the VMA since it is modified after insertion into VMA tree */
> > +	vma_start_write(vma);
> >  	if (vma->vm_file)
> >  		i_mmap_lock_write(vma->vm_file->f_mapping);
> >
> > -	/* Lock the VMA since it is modified after insertion into VMA tree */
> > -	vma_start_write(vma);
> >  	vma_iter_store(&vmi, vma);
> >  	mm->map_count++;
> >  	if (vma->vm_file) {
> > --
> > 2.35.3
diff --git a/mm/mmap.c b/mm/mmap.c
index 84c71431a527..3eda23c9ebe7 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2809,11 +2809,11 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	if (vma_iter_prealloc(&vmi))
 		goto close_and_free_vma;

+	/* Lock the VMA since it is modified after insertion into VMA tree */
+	vma_start_write(vma);
 	if (vma->vm_file)
 		i_mmap_lock_write(vma->vm_file->f_mapping);

-	/* Lock the VMA since it is modified after insertion into VMA tree */
-	vma_start_write(vma);
 	vma_iter_store(&vmi, vma);
 	mm->map_count++;
 	if (vma->vm_file) {
Lockdep is certainly right to complain about

  (&vma->vm_lock->lock){++++}-{3:3}, at: vma_start_write+0x2d/0x3f

but task is already holding lock:

  (&mapping->i_mmap_rwsem){+.+.}-{3:3}, at: mmap_region+0x4dc/0x6db

Invert those to the usual ordering.

Fixes: 33313a747e81 ("mm: lock newly mapped VMA which can be modified after it becomes visible")
Cc: stable@vger.kernel.org
Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/mmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)