diff mbox series

[v3] mm/gup: stop leaking pinned pages in low memory conditions

Message ID 20241018223411.310331-1-jhubbard@nvidia.com (mailing list archive)
State New
Series [v3] mm/gup: stop leaking pinned pages in low memory conditions

Commit Message

John Hubbard Oct. 18, 2024, 10:34 p.m. UTC
If a driver tries to call any of the pin_user_pages*(FOLL_LONGTERM)
family of functions, and requests "too many" pages, then the call will
erroneously leave pages pinned. This is visible in user space as an
actual memory leak.

Repro is trivial: just make enough pin_user_pages(FOLL_LONGTERM) calls
to exhaust memory.

The root cause of the problem is this sequence, within
__gup_longterm_locked():

    __get_user_pages_locked()
    rc = check_and_migrate_movable_pages()

...which gets retried in a loop. The loop's error handling is
incomplete, likely because of the somewhat unusual, complicated
tri-state error API. Regardless: if -ENOMEM, or in fact any unexpected
error, is returned from check_and_migrate_movable_pages(), then
__gup_longterm_locked() happily returns the error, while leaving the
pages pinned.

In the failing case, in which an app requests (via a device driver)
that 30720000000 bytes be pinned and then exits, I see this:

    $ grep foll /proc/vmstat
        nr_foll_pin_acquired 7502048
        nr_foll_pin_released 2048

And after applying this patch, it returns to balanced pins:

    $ grep foll /proc/vmstat
        nr_foll_pin_acquired 7502048
        nr_foll_pin_released 7502048

Note that the child routine, check_and_migrate_movable_folios(), avoids
this problem by unpinning any folios in the **folios argument before
returning an error.

Fix this by making check_and_migrate_movable_pages() behave in exactly
the same way as check_and_migrate_movable_folios(): unpin all pages in
**pages before returning an error.

Also, documentation was an aggravating factor, so:

1) Consolidate the documentation for these two routines, now that they
have identical external behavior.

2) Rewrite the consolidated documentation:

    a) Clearly list the three return code cases, and what happens in
    each case.

    b) Mention that one of the cases unpins the pages or folios, before
    returning an error code.

Fixes: 24a95998e9ba ("mm/gup.c: simplify and fix check_and_migrate_movable_pages() return codes")
Suggested-by: David Hildenbrand <david@redhat.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Shigeru Yoshida <syoshida@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---

Changes since v2:

1) Applied the fix to check_and_migrate_movable_pages(), instead of its
caller, as per David Hildenbrand's suggestion (thanks!).

2) Dropped Alistair's reviewed-by tag, because the fix has changed.

3) Reworked the relevant documentation, and commit description, as
described above.

thanks,
John Hubbard


 mm/gup.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)


base-commit: b04ae0f45168973edb658ac2385045ac13c5aca7

Comments

Alistair Popple Oct. 20, 2024, 11:26 p.m. UTC | #1
John Hubbard <jhubbard@nvidia.com> writes:

[...]

> diff --git a/mm/gup.c b/mm/gup.c
> index a82890b46a36..4637dab7b54f 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2394,20 +2394,25 @@ static int migrate_longterm_unpinnable_folios(
>  }
>  
>  /*
> - * Check whether all folios are *allowed* to be pinned indefinitely (longterm).
> + * Check whether all folios are *allowed* to be pinned indefinitely (long term).
>   * Rather confusingly, all folios in the range are required to be pinned via
>   * FOLL_PIN, before calling this routine.
>   *
> - * If any folios in the range are not allowed to be pinned, then this routine
> - * will migrate those folios away, unpin all the folios in the range and return
> - * -EAGAIN. The caller should re-pin the entire range with FOLL_PIN and then
> - * call this routine again.
> + * Return values:
>   *
> - * If an error other than -EAGAIN occurs, this indicates a migration failure.
> - * The caller should give up, and propagate the error back up the call stack.
> - *
> - * If everything is OK and all folios in the range are allowed to be pinned,
> + * 0: if everything is OK and all folios in the range are allowed to be pinned,
>   * then this routine leaves all folios pinned and returns zero for success.
> + *
> + * -EAGAIN: if any folios in the range are not allowed to be pinned, then this
> + * routine will migrate those folios away and unpin all the folios in the range. If
> + * migration of the entire set of folios succeeds, then -EAGAIN is returned. The
> + * caller should re-pin the entire range with FOLL_PIN and then call this
> + * routine again.
> + *
> + * -ENOMEM, or any other -errno: if an error *other* than -EAGAIN occurs, this
> + * indicates a migration failure. The caller should give up, and propagate the
> + * error back up the call stack. The caller does not need to unpin any folios in
> + * that case, because this routine will do the unpinning.

Where does the unpinning happen in this case though? I can see it
happens for the specific case of failing allocation of the folio array
in check_and_migrate_movable_pages(), but what about for the other error
conditions?

>   */
>  static long check_and_migrate_movable_folios(unsigned long nr_folios,
>  					     struct folio **folios)
> @@ -2425,10 +2430,8 @@ static long check_and_migrate_movable_folios(unsigned long nr_folios,
>  }
>  
>  /*
> - * This routine just converts all the pages in the @pages array to folios and
> - * calls check_and_migrate_movable_folios() to do the heavy lifting.
> - *
> - * Please see the check_and_migrate_movable_folios() documentation for details.
> + * Return values and behavior are the same as those for
> + * check_and_migrate_movable_folios().
>   */
>  static long check_and_migrate_movable_pages(unsigned long nr_pages,
>  					    struct page **pages)
> @@ -2437,8 +2440,10 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
>  	long i, ret;
>  
>  	folios = kmalloc_array(nr_pages, sizeof(*folios), GFP_KERNEL);
> -	if (!folios)
> +	if (!folios) {
> +		unpin_user_pages(pages, nr_pages);

ie. Doesn't this unpinning need to happen in
check_and_migrate_movable_folios()?

>  		return -ENOMEM;
> +	}
>  
>  	for (i = 0; i < nr_pages; i++)
>  		folios[i] = page_folio(pages[i]);
>
> base-commit: b04ae0f45168973edb658ac2385045ac13c5aca7
John Hubbard Oct. 21, 2024, 3:26 a.m. UTC | #2
On 10/20/24 4:26 PM, Alistair Popple wrote:
> John Hubbard <jhubbard@nvidia.com> writes:
> [...]
>> @@ -2437,8 +2440,10 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
>>   	long i, ret;
>>   
>>   	folios = kmalloc_array(nr_pages, sizeof(*folios), GFP_KERNEL);
>> -	if (!folios)
>> +	if (!folios) {
>> +		unpin_user_pages(pages, nr_pages);
> 
> ie. Doesn't this unpinning need to happen in
> check_and_migrate_movable_folios()?

It already does.

check_and_migrate_movable_folios() calls
migrate_longterm_unpinnable_folios(), which unpins if errors occur.


thanks,
Alistair Popple Oct. 21, 2024, 5:39 a.m. UTC | #3
John Hubbard <jhubbard@nvidia.com> writes:

> On 10/20/24 4:26 PM, Alistair Popple wrote:
>> John Hubbard <jhubbard@nvidia.com> writes:
>> [...]
>>> @@ -2437,8 +2440,10 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
>>>   	long i, ret;
>>>   	long i, ret;
>>>
>>>   	folios = kmalloc_array(nr_pages, sizeof(*folios), GFP_KERNEL);
>>> -	if (!folios)
>>> +	if (!folios) {
>>> +		unpin_user_pages(pages, nr_pages);
>> ie. Doesn't this unpinning need to happen in
>> check_and_migrate_movable_folios()?
>
> It already does.
>
> check_and_migrate_movable_folios() calls
> migrate_longterm_unpinnable_folios(), which unpins if errors occur.

Right you are.

Reviewed-by: Alistair Popple <apopple@nvidia.com>

As an aside for future clean-ups, we could probably get something nicer
if we reversed the process of pin/migrate to migrate/pin. In other
words, if FOLL_LONGTERM, try to migrate the entire range out of
ZONE_MOVABLE first. Migration involves walking page tables and getting a
reference on the pages anyway, so if it turns out there is nothing to
migrate you haven't lost anything performance-wise.

> thanks,
John Hubbard Oct. 21, 2024, 6:38 a.m. UTC | #4
On 10/20/24 10:39 PM, Alistair Popple wrote:
> 
> John Hubbard <jhubbard@nvidia.com> writes:
> 
>> On 10/20/24 4:26 PM, Alistair Popple wrote:
>>> John Hubbard <jhubbard@nvidia.com> writes:
>>> [...]
>>>> @@ -2437,8 +2440,10 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
>>>>    	long i, ret;
>>>>      	folios = kmalloc_array(nr_pages, sizeof(*folios), GFP_KERNEL);
>>>> -	if (!folios)
>>>> +	if (!folios) {
>>>> +		unpin_user_pages(pages, nr_pages);
>>> ie. Doesn't this unpinning need to happen in
>>> check_and_migrate_movable_folios()?
>>
>> It already does.
>>
>> check_and_migrate_movable_folios() calls
>> migrate_longterm_unpinnable_folios(), which unpins if errors occur.
> 
> Right you are.
> 
> Reviewed-by: Alistair Popple <apopple@nvidia.com>

Thanks for the review!

> 
> As an aside for future clean-ups, we could probably get something nicer
> if we reversed the process of pin/migrate to migrate/pin. In other
> words, if FOLL_LONGTERM, try to migrate the entire range out of
> ZONE_MOVABLE first. Migration involves walking page tables and getting a
> reference on the pages anyway, so if it turns out there is nothing to
> migrate you haven't lost anything performance-wise.
> 

Yes. In fact, I see our emails crossed, and I just suggested the same thing
in reply to your other comment (in the v2 review thread) about short vs.
long term pinning. Great! :)


thanks,

Patch

diff --git a/mm/gup.c b/mm/gup.c
index a82890b46a36..4637dab7b54f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2394,20 +2394,25 @@  static int migrate_longterm_unpinnable_folios(
 }
 
 /*
- * Check whether all folios are *allowed* to be pinned indefinitely (longterm).
+ * Check whether all folios are *allowed* to be pinned indefinitely (long term).
  * Rather confusingly, all folios in the range are required to be pinned via
  * FOLL_PIN, before calling this routine.
  *
- * If any folios in the range are not allowed to be pinned, then this routine
- * will migrate those folios away, unpin all the folios in the range and return
- * -EAGAIN. The caller should re-pin the entire range with FOLL_PIN and then
- * call this routine again.
+ * Return values:
  *
- * If an error other than -EAGAIN occurs, this indicates a migration failure.
- * The caller should give up, and propagate the error back up the call stack.
- *
- * If everything is OK and all folios in the range are allowed to be pinned,
+ * 0: if everything is OK and all folios in the range are allowed to be pinned,
  * then this routine leaves all folios pinned and returns zero for success.
+ *
+ * -EAGAIN: if any folios in the range are not allowed to be pinned, then this
+ * routine will migrate those folios away and unpin all the folios in the range. If
+ * migration of the entire set of folios succeeds, then -EAGAIN is returned. The
+ * caller should re-pin the entire range with FOLL_PIN and then call this
+ * routine again.
+ *
+ * -ENOMEM, or any other -errno: if an error *other* than -EAGAIN occurs, this
+ * indicates a migration failure. The caller should give up, and propagate the
+ * error back up the call stack. The caller does not need to unpin any folios in
+ * that case, because this routine will do the unpinning.
  */
 static long check_and_migrate_movable_folios(unsigned long nr_folios,
 					     struct folio **folios)
@@ -2425,10 +2430,8 @@  static long check_and_migrate_movable_folios(unsigned long nr_folios,
 }
 
 /*
- * This routine just converts all the pages in the @pages array to folios and
- * calls check_and_migrate_movable_folios() to do the heavy lifting.
- *
- * Please see the check_and_migrate_movable_folios() documentation for details.
+ * Return values and behavior are the same as those for
+ * check_and_migrate_movable_folios().
  */
 static long check_and_migrate_movable_pages(unsigned long nr_pages,
 					    struct page **pages)
@@ -2437,8 +2440,10 @@  static long check_and_migrate_movable_pages(unsigned long nr_pages,
 	long i, ret;
 
 	folios = kmalloc_array(nr_pages, sizeof(*folios), GFP_KERNEL);
-	if (!folios)
+	if (!folios) {
+		unpin_user_pages(pages, nr_pages);
 		return -ENOMEM;
+	}
 
 	for (i = 0; i < nr_pages; i++)
 		folios[i] = page_folio(pages[i]);