
[v2,3/6] mm/hugetlb: fix missing call to restore_reserve_on_error()

Message ID 20220823030209.57434-4-linmiaohe@huawei.com (mailing list archive)
State New
Series A few fixup patches for hugetlb

Commit Message

Miaohe Lin Aug. 23, 2022, 3:02 a.m. UTC
When huge_add_to_page_cache() fails, the page is freed directly without
calling restore_reserve_on_error() to restore the reservation for a newly
allocated page that is not in the page cache. Fix this by calling
restore_reserve_on_error() when huge_add_to_page_cache() fails.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/hugetlb.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)
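
For illustration, the bookkeeping the patch restores can be modelled in a few
lines of plain C. The sketch below is a minimal userspace stand-in, not kernel
code: the *_model() helpers and the resv_pages counter are invented for this
example and only mirror the idea that allocating a huge page consumes a
reservation which must be given back if the page never makes it into the page
cache.

/*
 * Minimal userspace model of the fixed error path. Everything here
 * (the types, the *_model() helpers, the resv_pages counter) is a
 * simplified stand-in for illustration only, not the kernel code.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct hstate { long resv_pages; };	/* models the huge page reserve count */
struct page   { bool in_cache; };

/* Allocation consumes one reservation (stand-in for alloc_huge_page()). */
static struct page *alloc_huge_page_model(struct hstate *h)
{
	static struct page p;

	h->resv_pages--;
	p.in_cache = false;
	return &p;
}

/* Stand-in for huge_add_to_page_cache() failing, e.g. with -ENOMEM. */
static int huge_add_to_page_cache_model(struct page *page)
{
	(void)page;
	return -ENOMEM;
}

/* Stand-in for restore_reserve_on_error(): give the reservation back. */
static void restore_reserve_on_error_model(struct hstate *h, struct page *page)
{
	if (!page->in_cache)
		h->resv_pages++;
}

int main(void)
{
	struct hstate h = { .resv_pages = 1 };
	struct page *page = alloc_huge_page_model(&h);

	if (huge_add_to_page_cache_model(page)) {
		/* Without this call, the reservation would be leaked. */
		restore_reserve_on_error_model(&h, page);
		/* In the kernel, put_page(page) would free the page here. */
	}
	printf("reservations after error path: %ld\n", h.resv_pages);
	return 0;
}

Compiled and run, this prints a reservation count of 1, i.e. nothing is leaked
once the restore step runs on the failure path before the page is dropped,
which is what the hunks below add around put_page().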

Comments

Mike Kravetz Aug. 24, 2022, 6:21 p.m. UTC | #1
On 08/23/22 11:02, Miaohe Lin wrote:
> When huge_add_to_page_cache() fails, the page is freed directly without
> calling restore_reserve_on_error() to restore the reservation for a newly
> allocated page that is not in the page cache. Fix this by calling
> restore_reserve_on_error() when huge_add_to_page_cache() fails.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/hugetlb.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index d46dfe5ba62c..8e62da153c64 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5576,7 +5576,6 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  	if (idx >= size)
>  		goto out;
>  
> -retry:
>  	new_page = false;
>  	page = find_lock_page(mapping, idx);
>  	if (!page) {
> @@ -5616,9 +5615,15 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  		if (vma->vm_flags & VM_MAYSHARE) {
>  			int err = huge_add_to_page_cache(page, mapping, idx);
>  			if (err) {
> +				/*
> +				 * err can't be -EEXIST, which would imply
> +				 * someone else consumed the reservation, since
> +				 * the hugetlb fault mutex is held when adding
> +				 * a hugetlb page to the page cache. So it's
> +				 * safe to call restore_reserve_on_error() here.
> +				 */
> +				restore_reserve_on_error(h, vma, haddr, page);
>  				put_page(page);
> -				if (err == -EEXIST)
> -					goto retry;

Thanks for removing this check and adding the comment.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d46dfe5ba62c..8e62da153c64 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5576,7 +5576,6 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	if (idx >= size)
 		goto out;
 
-retry:
 	new_page = false;
 	page = find_lock_page(mapping, idx);
 	if (!page) {
@@ -5616,9 +5615,15 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 		if (vma->vm_flags & VM_MAYSHARE) {
 			int err = huge_add_to_page_cache(page, mapping, idx);
 			if (err) {
+				/*
+				 * err can't be -EEXIST, which would imply
+				 * someone else consumed the reservation, since
+				 * the hugetlb fault mutex is held when adding
+				 * a hugetlb page to the page cache. So it's
+				 * safe to call restore_reserve_on_error() here.
+				 */
+				restore_reserve_on_error(h, vma, haddr, page);
 				put_page(page);
-				if (err == -EEXIST)
-					goto retry;
 				goto out;
 			}
 			new_pagecache_page = true;