Message ID: 20220901173516.702122-16-surenb@google.com
State: New
Series: per-VMA locks proposal
On 01/09/2022 at 19:35, Suren Baghdasaryan wrote:
> While unmapping VMAs, adjacent VMAs might be able to grow into the area
> being unmapped. In such cases mark adjacent VMAs as locked to prevent
> this growth.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  mm/mmap.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index b0d78bdc0de0..b31cc97c2803 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2680,10 +2680,14 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
>  	 * VM_GROWSUP VMA. Such VMAs can change their size under
>  	 * down_read(mmap_lock) and collide with the VMA we are about to unmap.
>  	 */
> -	if (vma && (vma->vm_flags & VM_GROWSDOWN))
> +	if (vma && (vma->vm_flags & VM_GROWSDOWN)) {
> +		vma_mark_locked(vma);
>  		return false;
> -	if (prev && (prev->vm_flags & VM_GROWSUP))
> +	}
> +	if (prev && (prev->vm_flags & VM_GROWSUP)) {
> +		vma_mark_locked(prev);
>  		return false;
> +	}
>  	return true;
>  }

That looks right to me.

But, in addition to that, like the previous patch, all the VMAs to be
detached from the tree in the loop above should be marked locked just
before calling vma_rb_erase().
On Fri, Sep 9, 2022 at 6:43 AM Laurent Dufour <ldufour@linux.ibm.com> wrote:
>
> [...]
>
> That looks right to me.
>
> But, in addition to that, like the previous patch, all the VMAs to be
> detached from the tree in the loop above should be marked locked just
> before calling vma_rb_erase().

The following call chain already locks the VMA being isolated:

vma_rb_erase -> vma_rb_erase_ignore -> __vma_rb_erase -> vma_mark_locked
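For reference, a simplified sketch of that call chain; the bodies below
approximate mm/mmap.c as modified by the per-VMA locks series and are not
verbatim kernel source:

/*
 * Sketch only: approximates the rbtree-era mm/mmap.c helpers as
 * modified by the per-VMA locks series.
 */
static void __vma_rb_erase(struct vm_area_struct *vma, struct rb_root *root)
{
	/* Every VMA erased from the rbtree is marked locked here,
	 * which is why the detach loop needs no extra locking. */
	vma_mark_locked(vma);
	rb_erase_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
}

static void vma_rb_erase_ignore(struct vm_area_struct *vma,
				struct rb_root *root,
				struct vm_area_struct *ignore)
{
	validate_mm_rb(root, ignore);
	__vma_rb_erase(vma, root);
}

static void vma_rb_erase(struct vm_area_struct *vma, struct rb_root *root)
{
	vma_rb_erase_ignore(vma, root, vma);
}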
diff --git a/mm/mmap.c b/mm/mmap.c
index b0d78bdc0de0..b31cc97c2803 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2680,10 +2680,14 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * VM_GROWSUP VMA. Such VMAs can change their size under
 	 * down_read(mmap_lock) and collide with the VMA we are about to unmap.
 	 */
-	if (vma && (vma->vm_flags & VM_GROWSDOWN))
+	if (vma && (vma->vm_flags & VM_GROWSDOWN)) {
+		vma_mark_locked(vma);
 		return false;
-	if (prev && (prev->vm_flags & VM_GROWSUP))
+	}
+	if (prev && (prev->vm_flags & VM_GROWSUP)) {
+		vma_mark_locked(prev);
 		return false;
+	}
 	return true;
 }
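For context, the return value feeds the caller's decision to downgrade
mmap_lock from write to read mode. A sketch along the lines of
__do_munmap() in this era of mm/mmap.c (simplified, details assumed):

	/* Simplified from __do_munmap(); not verbatim kernel source.
	 * If an adjacent VMA could still grow into the hole, keep
	 * mmap_lock held for write instead of downgrading it. */
	if (!detach_vmas_to_be_unmapped(mm, vma, prev, end))
		downgrade = false;

	if (downgrade)
		mmap_write_downgrade(mm);

	unmap_region(mm, vma, prev, start, end);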
While unmapping VMAs, adjacent VMAs might be able to grow into the area
being unmapped. In such cases mark adjacent VMAs as locked to prevent
this growth.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/mmap.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
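The race this closes, as a hypothetical interleaving; expand_downwards()
is the real stack-growth helper, but the timeline itself is illustrative:

/*
 * Hypothetical interleaving (illustrative, not from the source):
 *
 *   CPU 0: munmap()                     CPU 1: page fault
 *   ---------------                     -----------------
 *   mmap_write_lock(mm)
 *   detach_vmas_to_be_unmapped()
 *     next VMA has VM_GROWSDOWN         fault just below that
 *     -> vma_mark_locked(vma)           VM_GROWSDOWN stack VMA
 *     -> return false (no downgrade)    per-VMA-lock fault path sees
 *                                       the VMA marked locked, falls
 *                                       back to mmap_lock and blocks,
 *                                       so expand_downwards() cannot
 *                                       grow the VMA into the range
 *                                       being unmapped
 */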