
userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE

Message ID 20240404171726.2302435-1-lokeshgidra@google.com (mailing list archive)
State New
Series userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE

Commit Message

Lokesh Gidra April 4, 2024, 5:17 p.m. UTC
Commit d7a08838ab74 ("mm: userfaultfd: fix unexpected change to src_folio
when UFFDIO_MOVE fails") moved the update of src_folio->{mapping, index}
to after clearing the page-table entry and ensuring that the folio is not
pinned. This avoids failure of swapout/migration and possible memory
corruption.

However, that commit missed applying the same fix to the huge-page case
(move_pages_huge_pmd()).

Fixes: adef440691ba ("userfaultfd: UFFDIO_MOVE uABI")
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
---
 mm/huge_memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
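
For reference, a minimal sketch of the corrected ordering inside
move_pages_huge_pmd() after this change (simplified from the hunk at the
bottom of the page; locking and the pin-failure details are elided):

	src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
	/* Folio got pinned from under us. Put it back and fail the move. */
	if (folio_maybe_dma_pinned(src_folio)) {
		/* restore the cleared PMD and fail the move (elided) */
		goto unlock_ptls;
	}

	/*
	 * Retarget the folio to the destination VMA only once the move can
	 * no longer fail; doing it earlier left src_folio modified on the
	 * failure path.
	 */
	folio_move_anon_rmap(src_folio, dst_vma);
	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));

	_dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
	/* Follow mremap() behavior and treat the entry dirty after the move */
	_dst_pmd = pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);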

Comments

Matthew Wilcox April 4, 2024, 5:21 p.m. UTC | #1
On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> -		folio_move_anon_rmap(src_folio, dst_vma);
> -		WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> -
>  		src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
>  		/* Folio got pinned from under us. Put it back and fail the move. */
>  		if (folio_maybe_dma_pinned(src_folio)) {
> @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
>  			goto unlock_ptls;
>  		}
>  
> +		folio_move_anon_rmap(src_folio, dst_vma);
> +		WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> +

This use of WRITE_ONCE scares me.  We hold the folio locked.  Why do
we need to use WRITE_ONCE?  Who's looking at folio->index without
holding the folio lock?
Suren Baghdasaryan April 4, 2024, 8:07 p.m. UTC | #2
On Thu, Apr 4, 2024 at 10:21 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> > -             folio_move_anon_rmap(src_folio, dst_vma);
> > -             WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > -
> >               src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> >               /* Folio got pinned from under us. Put it back and fail the move. */
> >               if (folio_maybe_dma_pinned(src_folio)) {
> > @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> >                       goto unlock_ptls;
> >               }
> >
> > +             folio_move_anon_rmap(src_folio, dst_vma);
> > +             WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > +
>
> This use of WRITE_ONCE scares me.  We hold the folio locked.  Why do
> we need to use WRITE_ONCE?  Who's looking at folio->index without
> holding the folio lock?

Indeed that seems to be unnecessary here. Both here and in
move_present_pte() we are holding the folio lock while moving the page. I
must have just blindly copied that from Andrea's original patch [1].

[1] https://gitlab.com/aarcange/aa/-/commit/2aec7aea56b10438a3881a20a411aa4b1fc19e92
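
A rough sketch of what removing it could look like, assuming the folio
lock is indeed the only synchronization ->index needs here (hypothetical,
not part of this patch):

	/* folio is locked, so a plain store to ->index should suffice */
	folio_move_anon_rmap(src_folio, dst_vma);
	src_folio->index = linear_page_index(dst_vma, dst_addr);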
Suren Baghdasaryan April 4, 2024, 8:23 p.m. UTC | #3
On Thu, Apr 4, 2024 at 1:16 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 04.04.24 22:07, Suren Baghdasaryan wrote:
> > On Thu, Apr 4, 2024 at 10:21 AM Matthew Wilcox <willy@infradead.org> wrote:
> >>
> >> On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> >>> -             folio_move_anon_rmap(src_folio, dst_vma);
> >>> -             WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> >>> -
> >>>                src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> >>>                /* Folio got pinned from under us. Put it back and fail the move. */
> >>>                if (folio_maybe_dma_pinned(src_folio)) {
> >>> @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> >>>                        goto unlock_ptls;
> >>>                }
> >>>
> >>> +             folio_move_anon_rmap(src_folio, dst_vma);
> >>> +             WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> >>> +
> >>
> >> This use of WRITE_ONCE scares me.  We hold the folio locked.  Why do
> >> we need to use WRITE_ONCE?  Who's looking at folio->index without
> >> holding the folio lock?
> >
> > Indeed that seems to be unnecessary here. Both here and in
> > move_present_pte() we are holding folio lock while moving the page. I
> > must have just blindly copied that from Andrea's original patch [1].
>
> Agreed, I don't think it is required for ->index. (I also don't spot any
> corresponding READ_ONCE)

Since this patch just got Ack'ed, I'll wait for Andrew to take it into
mm-unstable and then send a fix removing those WRITE_ONCE(). That way we
won't have merge conflicts.

>
> --
> Cheers,
>
> David / dhildenb
>
Peter Xu April 4, 2024, 8:32 p.m. UTC | #4
On Thu, Apr 04, 2024 at 06:21:50PM +0100, Matthew Wilcox wrote:
> On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> > -		folio_move_anon_rmap(src_folio, dst_vma);
> > -		WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > -
> >  		src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> >  		/* Folio got pinned from under us. Put it back and fail the move. */
> >  		if (folio_maybe_dma_pinned(src_folio)) {
> > @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> >  			goto unlock_ptls;
> >  		}
> >  
> > +		folio_move_anon_rmap(src_folio, dst_vma);
> > +		WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > +
> 
> This use of WRITE_ONCE scares me.  We hold the folio locked.  Why do
> we need to use WRITE_ONCE?  Who's looking at folio->index without
> holding the folio lock?

Seems true, but maybe that's suitable for a separate cleanup patch even so?
The other pte level has the same WRITE_ONCE(), so if we want to drop it we
may want to drop both.

I just started reading some of the new move code (Lokesh, apologies for
not being able to provide feedback earlier..), and I found one thing
unclear: the special handling of private file mappings only in the
userfault context, and I didn't understand why:

lock_vma():
        if (vma) {
                /*
                 * lock_vma_under_rcu() only checks anon_vma for private
                 * anonymous mappings. But we need to ensure it is assigned in
                 * private file-backed vmas as well.
                 */
                if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
                        vma_end_read(vma);
                else
                        return vma;
        }

AFAIU even for generic users of lock_vma_under_rcu(), anon_vma must be
stable to be used.  It seems weird to me that this becomes a
userfault-specific operation here.

I was surprised that it worked for private file maps on faults; then I
checked and it seems we postpone that check until vmf_anon_prepare(),
which is already the CoW path, so we do end up doing what I expected, but
it seems unnecessary to defer it to that point?

Would something like the below make it much cleaner for us?  I just don't
yet see why userfault is special here.

Thanks,

===8<===
diff --git a/mm/memory.c b/mm/memory.c
index 984b138f85b4..d5cf1d31c671 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3213,10 +3213,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
 
 	if (likely(vma->anon_vma))
 		return 0;
-	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-		vma_end_read(vma);
-		return VM_FAULT_RETRY;
-	}
+	/* We shouldn't try a per-vma fault at all if anon_vma isn't solid */
+	WARN_ON_ONCE(vmf->flags & FAULT_FLAG_VMA_LOCK);
 	if (__anon_vma_prepare(vma))
 		return VM_FAULT_OOM;
 	return 0;
@@ -5817,9 +5815,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	 * find_mergeable_anon_vma uses adjacent vmas which are not locked.
 	 * This check must happen after vma_start_read(); otherwise, a
 	 * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
-	 * from its anon_vma.
+	 * from its anon_vma.  This applies to both anon or private file maps.
 	 */
-	if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
+	if (unlikely(!(vma->vm_flags & VM_SHARED) && !vma->anon_vma))
 		goto inval_end_read;
 
 	/* Check since vm_start/vm_end might change before we lock the VMA */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index f6267afe65d1..61f21da77dcd 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -72,17 +72,8 @@ static struct vm_area_struct *lock_vma(struct mm_struct *mm,
 	struct vm_area_struct *vma;
 
 	vma = lock_vma_under_rcu(mm, address);
-	if (vma) {
-		/*
-		 * lock_vma_under_rcu() only checks anon_vma for private
-		 * anonymous mappings. But we need to ensure it is assigned in
-		 * private file-backed vmas as well.
-		 */
-		if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
-			vma_end_read(vma);
-		else
-			return vma;
-	}
+	if (vma)
+		return vma;
 
 	mmap_read_lock(mm);
 	vma = find_vma_and_prepare_anon(mm, address);
Andrew Morton April 4, 2024, 8:37 p.m. UTC | #5
On Thu, 4 Apr 2024 13:23:08 -0700 Suren Baghdasaryan <surenb@google.com> wrote:

> > Agreed, I don't think it is required for ->index. (I also don't spot any
> > corresponding READ_ONCE)
> 
> Since this patch just got Ack'ed, I'll wait for Andrew to take it into
> mm-unstable and then will send a fix removing those WRITE_ONCE(). That
> way we won't have merge conflicts,

Yes please, it's an unrelated thing.
Suren Baghdasaryan April 4, 2024, 8:55 p.m. UTC | #6
On Thu, Apr 4, 2024 at 1:32 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Thu, Apr 04, 2024 at 06:21:50PM +0100, Matthew Wilcox wrote:
> > On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> > > -           folio_move_anon_rmap(src_folio, dst_vma);
> > > -           WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > -
> > >             src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > >             /* Folio got pinned from under us. Put it back and fail the move. */
> > >             if (folio_maybe_dma_pinned(src_folio)) {
> > > @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> > >                     goto unlock_ptls;
> > >             }
> > >
> > > +           folio_move_anon_rmap(src_folio, dst_vma);
> > > +           WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > +
> >
> > This use of WRITE_ONCE scares me.  We hold the folio locked.  Why do
> > we need to use WRITE_ONCE?  Who's looking at folio->index without
> > holding the folio lock?
>
> Seems true, but maybe suitable for a separate patch to clean it even so?
> We also have the other pte level which has the same WRITE_ONCE(), so if we
> want to drop we may want to drop both.

Yes, I'll do that separately and will remove WRITE_ONCE() in both places.

>
> I just got to start reading some the new move codes (Lokesh, apologies on
> not be able to provide feedbacks previously..), but then I found one thing
> unclear, on special handling of private file mappings only in userfault
> context, and I didn't know why:
>
> lock_vma():
>         if (vma) {
>                 /*
>                  * lock_vma_under_rcu() only checks anon_vma for private
>                  * anonymous mappings. But we need to ensure it is assigned in
>                  * private file-backed vmas as well.
>                  */
>                 if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
>                         vma_end_read(vma);
>                 else
>                         return vma;
>         }
>
> AFAIU even for generic users of lock_vma_under_rcu(), anon_vma must be
> stable to be used.  Here it's weird to become an userfault specific
> operation to me.
>
> I was surprised how it worked for private file maps on faults, then I had a
> check and it seems we postponed such check until vmf_anon_prepare(), which
> is the CoW path already, so we do as I expected, but seems unnecessary to
> that point?
>
> Would something like below make it much cleaner for us?  As I just don't
> yet see why userfault is special here.
>
> Thanks,
>
> ===8<===
> diff --git a/mm/memory.c b/mm/memory.c
> index 984b138f85b4..d5cf1d31c671 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3213,10 +3213,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
>
>         if (likely(vma->anon_vma))
>                 return 0;
> -       if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> -               vma_end_read(vma);
> -               return VM_FAULT_RETRY;
> -       }
> +       /* We shouldn't try a per-vma fault at all if anon_vma isn't solid */
> +       WARN_ON_ONCE(vmf->flags & FAULT_FLAG_VMA_LOCK);
>         if (__anon_vma_prepare(vma))
>                 return VM_FAULT_OOM;
>         return 0;
> @@ -5817,9 +5815,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
>          * find_mergeable_anon_vma uses adjacent vmas which are not locked.
>          * This check must happen after vma_start_read(); otherwise, a
>          * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
> -        * from its anon_vma.
> +        * from its anon_vma.  This applies to both anon or private file maps.
>          */
> -       if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> +       if (unlikely(!(vma->vm_flags & VM_SHARED) && !vma->anon_vma))
>                 goto inval_end_read;
>
>         /* Check since vm_start/vm_end might change before we lock the VMA */
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index f6267afe65d1..61f21da77dcd 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -72,17 +72,8 @@ static struct vm_area_struct *lock_vma(struct mm_struct *mm,
>         struct vm_area_struct *vma;
>
>         vma = lock_vma_under_rcu(mm, address);
> -       if (vma) {
> -               /*
> -                * lock_vma_under_rcu() only checks anon_vma for private
> -                * anonymous mappings. But we need to ensure it is assigned in
> -                * private file-backed vmas as well.
> -                */
> -               if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> -                       vma_end_read(vma);
> -               else
> -                       return vma;
> -       }
> +       if (vma)
> +               return vma;
>
>         mmap_read_lock(mm);
>         vma = find_vma_and_prepare_anon(mm, address);
> --
> 2.44.0
>
>
> --
> Peter Xu
>
Peter Xu April 4, 2024, 9:04 p.m. UTC | #7
On Thu, Apr 04, 2024 at 01:55:07PM -0700, Suren Baghdasaryan wrote:
> On Thu, Apr 4, 2024 at 1:32 PM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Thu, Apr 04, 2024 at 06:21:50PM +0100, Matthew Wilcox wrote:
> > > On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> > > > -           folio_move_anon_rmap(src_folio, dst_vma);
> > > > -           WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > -
> > > >             src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > > >             /* Folio got pinned from under us. Put it back and fail the move. */
> > > >             if (folio_maybe_dma_pinned(src_folio)) {
> > > > @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> > > >                     goto unlock_ptls;
> > > >             }
> > > >
> > > > +           folio_move_anon_rmap(src_folio, dst_vma);
> > > > +           WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > +
> > >
> > > This use of WRITE_ONCE scares me.  We hold the folio locked.  Why do
> > > we need to use WRITE_ONCE?  Who's looking at folio->index without
> > > holding the folio lock?
> >
> > Seems true, but maybe suitable for a separate patch to clean it even so?
> > We also have the other pte level which has the same WRITE_ONCE(), so if we
> > want to drop we may want to drop both.
> 
> Yes, I'll do that separately and will remove WRITE_ONCE() in both places.

Thanks, Suren.  Besides that, any comments on the below?

It's definitely a generic per-vma question too (besides my wish to remove
that userfault-specific code..), so comments are welcome.

> 
> >
> > I just got to start reading some the new move codes (Lokesh, apologies on
> > not be able to provide feedbacks previously..), but then I found one thing
> > unclear, on special handling of private file mappings only in userfault
> > context, and I didn't know why:
> >
> > lock_vma():
> >         if (vma) {
> >                 /*
> >                  * lock_vma_under_rcu() only checks anon_vma for private
> >                  * anonymous mappings. But we need to ensure it is assigned in
> >                  * private file-backed vmas as well.
> >                  */
> >                 if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> >                         vma_end_read(vma);
> >                 else
> >                         return vma;
> >         }
> >
> > AFAIU even for generic users of lock_vma_under_rcu(), anon_vma must be
> > stable to be used.  Here it's weird to become an userfault specific
> > operation to me.
> >
> > I was surprised how it worked for private file maps on faults, then I had a
> > check and it seems we postponed such check until vmf_anon_prepare(), which
> > is the CoW path already, so we do as I expected, but seems unnecessary to
> > that point?
> >
> > Would something like below make it much cleaner for us?  As I just don't
> > yet see why userfault is special here.
> >
> > Thanks,
> >
> > ===8<===
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 984b138f85b4..d5cf1d31c671 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3213,10 +3213,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
> >
> >         if (likely(vma->anon_vma))
> >                 return 0;
> > -       if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> > -               vma_end_read(vma);
> > -               return VM_FAULT_RETRY;
> > -       }
> > +       /* We shouldn't try a per-vma fault at all if anon_vma isn't solid */
> > +       WARN_ON_ONCE(vmf->flags & FAULT_FLAG_VMA_LOCK);
> >         if (__anon_vma_prepare(vma))
> >                 return VM_FAULT_OOM;
> >         return 0;
> > @@ -5817,9 +5815,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> >          * find_mergeable_anon_vma uses adjacent vmas which are not locked.
> >          * This check must happen after vma_start_read(); otherwise, a
> >          * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
> > -        * from its anon_vma.
> > +        * from its anon_vma.  This applies to both anon or private file maps.
> >          */
> > -       if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> > +       if (unlikely(!(vma->vm_flags & VM_SHARED) && !vma->anon_vma))
> >                 goto inval_end_read;
> >
> >         /* Check since vm_start/vm_end might change before we lock the VMA */
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index f6267afe65d1..61f21da77dcd 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -72,17 +72,8 @@ static struct vm_area_struct *lock_vma(struct mm_struct *mm,
> >         struct vm_area_struct *vma;
> >
> >         vma = lock_vma_under_rcu(mm, address);
> > -       if (vma) {
> > -               /*
> > -                * lock_vma_under_rcu() only checks anon_vma for private
> > -                * anonymous mappings. But we need to ensure it is assigned in
> > -                * private file-backed vmas as well.
> > -                */
> > -               if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> > -                       vma_end_read(vma);
> > -               else
> > -                       return vma;
> > -       }
> > +       if (vma)
> > +               return vma;
> >
> >         mmap_read_lock(mm);
> >         vma = find_vma_and_prepare_anon(mm, address);
> > --
> > 2.44.0
> >
> >
> > --
> > Peter Xu
> >
>
Suren Baghdasaryan April 4, 2024, 9:07 p.m. UTC | #8
On Thu, Apr 4, 2024 at 2:04 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Thu, Apr 04, 2024 at 01:55:07PM -0700, Suren Baghdasaryan wrote:
> > On Thu, Apr 4, 2024 at 1:32 PM Peter Xu <peterx@redhat.com> wrote:
> > >
> > > On Thu, Apr 04, 2024 at 06:21:50PM +0100, Matthew Wilcox wrote:
> > > > On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> > > > > -           folio_move_anon_rmap(src_folio, dst_vma);
> > > > > -           WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > > -
> > > > >             src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > > > >             /* Folio got pinned from under us. Put it back and fail the move. */
> > > > >             if (folio_maybe_dma_pinned(src_folio)) {
> > > > > @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> > > > >                     goto unlock_ptls;
> > > > >             }
> > > > >
> > > > > +           folio_move_anon_rmap(src_folio, dst_vma);
> > > > > +           WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > > +
> > > >
> > > > This use of WRITE_ONCE scares me.  We hold the folio locked.  Why do
> > > > we need to use WRITE_ONCE?  Who's looking at folio->index without
> > > > holding the folio lock?
> > >
> > > Seems true, but maybe suitable for a separate patch to clean it even so?
> > > We also have the other pte level which has the same WRITE_ONCE(), so if we
> > > want to drop we may want to drop both.
> >
> > Yes, I'll do that separately and will remove WRITE_ONCE() in both places.
>
> Thanks, Suren.  Besides, any comment on below?
>
> It's definely a generic per-vma question too (besides my willingness to
> remove that userfault specific code..), so comments welcomed.

Yes, I was typing my reply :)
This might have happened simply because lock_vma_under_rcu() was
originally developed to handle only anonymous page faults and then got
expanded to cover file-backed cases as well. Your suggestion seems fine
to me, but I would feel much more comfortable once Matthew (who added
file-backed support) has reviewed it.

>
> >
> > >
> > > I just got to start reading some the new move codes (Lokesh, apologies on
> > > not be able to provide feedbacks previously..), but then I found one thing
> > > unclear, on special handling of private file mappings only in userfault
> > > context, and I didn't know why:
> > >
> > > lock_vma():
> > >         if (vma) {
> > >                 /*
> > >                  * lock_vma_under_rcu() only checks anon_vma for private
> > >                  * anonymous mappings. But we need to ensure it is assigned in
> > >                  * private file-backed vmas as well.
> > >                  */
> > >                 if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> > >                         vma_end_read(vma);
> > >                 else
> > >                         return vma;
> > >         }
> > >
> > > AFAIU even for generic users of lock_vma_under_rcu(), anon_vma must be
> > > stable to be used.  Here it's weird to become an userfault specific
> > > operation to me.
> > >
> > > I was surprised how it worked for private file maps on faults, then I had a
> > > check and it seems we postponed such check until vmf_anon_prepare(), which
> > > is the CoW path already, so we do as I expected, but seems unnecessary to
> > > that point?
> > >
> > > Would something like below make it much cleaner for us?  As I just don't
> > > yet see why userfault is special here.
> > >
> > > Thanks,
> > >
> > > ===8<===
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 984b138f85b4..d5cf1d31c671 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -3213,10 +3213,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
> > >
> > >         if (likely(vma->anon_vma))
> > >                 return 0;
> > > -       if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> > > -               vma_end_read(vma);
> > > -               return VM_FAULT_RETRY;
> > > -       }
> > > +       /* We shouldn't try a per-vma fault at all if anon_vma isn't solid */
> > > +       WARN_ON_ONCE(vmf->flags & FAULT_FLAG_VMA_LOCK);
> > >         if (__anon_vma_prepare(vma))
> > >                 return VM_FAULT_OOM;
> > >         return 0;
> > > @@ -5817,9 +5815,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> > >          * find_mergeable_anon_vma uses adjacent vmas which are not locked.
> > >          * This check must happen after vma_start_read(); otherwise, a
> > >          * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
> > > -        * from its anon_vma.
> > > +        * from its anon_vma.  This applies to both anon or private file maps.
> > >          */
> > > -       if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> > > +       if (unlikely(!(vma->vm_flags & VM_SHARED) && !vma->anon_vma))
> > >                 goto inval_end_read;
> > >
> > >         /* Check since vm_start/vm_end might change before we lock the VMA */
> > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > > index f6267afe65d1..61f21da77dcd 100644
> > > --- a/mm/userfaultfd.c
> > > +++ b/mm/userfaultfd.c
> > > @@ -72,17 +72,8 @@ static struct vm_area_struct *lock_vma(struct mm_struct *mm,
> > >         struct vm_area_struct *vma;
> > >
> > >         vma = lock_vma_under_rcu(mm, address);
> > > -       if (vma) {
> > > -               /*
> > > -                * lock_vma_under_rcu() only checks anon_vma for private
> > > -                * anonymous mappings. But we need to ensure it is assigned in
> > > -                * private file-backed vmas as well.
> > > -                */
> > > -               if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> > > -                       vma_end_read(vma);
> > > -               else
> > > -                       return vma;
> > > -       }
> > > +       if (vma)
> > > +               return vma;
> > >
> > >         mmap_read_lock(mm);
> > >         vma = find_vma_and_prepare_anon(mm, address);
> > > --
> > > 2.44.0
> > >
> > >
> > > --
> > > Peter Xu
> > >
> >
>
> --
> Peter Xu
>
Peter Xu April 10, 2024, 5:09 p.m. UTC | #9
On Thu, Apr 04, 2024 at 02:07:45PM -0700, Suren Baghdasaryan wrote:
> Yes, I was typing my reply :)
> This might have happened simply because lock_vma_under_rcu() was
> originally developed to handle only anonymous page faults and then got
> expanded to cover file-backed cases as well. Your suggestion seems
> fine to me but I would feel much more comfortable after Matthew (who
> added file-backed support) reviewed it.

Thanks.

Just in case this falls through the cracks (I still think we should do
it..), I sent a formal patch just now with some more information in the
commit log.  Any further review comments are welcome there.

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9859aa4f7553..89f58c7603b2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2259,9 +2259,6 @@  int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
 			goto unlock_ptls;
 		}
 
-		folio_move_anon_rmap(src_folio, dst_vma);
-		WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
-
 		src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
 		/* Folio got pinned from under us. Put it back and fail the move. */
 		if (folio_maybe_dma_pinned(src_folio)) {
@@ -2270,6 +2267,9 @@  int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
 			goto unlock_ptls;
 		}
 
+		folio_move_anon_rmap(src_folio, dst_vma);
+		WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
+
 		_dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
 		/* Follow mremap() behavior and treat the entry dirty after the move */
 		_dst_pmd = pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);