
[v3] mm: hugetlb: improve the handling of hugetlb allocation failure for freed or in-use hugetlb

Message ID 62890fd60b1ecd5bf1cdc476c973f60fe37aa0cb.1707181934.git.baolin.wang@linux.alibaba.com (mailing list archive)
State New
Headers show
Series [v3] mm: hugetlb: improve the handling of hugetlb allocation failure for freed or in-use hugetlb

Commit Message

Baolin Wang Feb. 6, 2024, 3:08 a.m. UTC
alloc_and_dissolve_hugetlb_folio() preallocates a new hugetlb page before
it takes hugetlb_lock. In 3 out of 4 cases the page is not really used and
therefore the newly allocated page is just freed right away. This is
wasteful and it might cause premature failures in those cases.

Address that by moving the allocation down to the only case (hugetlb
page is really in the free pages pool). We need to drop hugetlb_lock
to do so and therefore need to recheck the page state after regaining
it.

The patch is more of a cleanup than an actual fix to an existing
problem. There are no known reports about premature failures.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
Changes from v2:
 - Update the commit message suggested by Michal.
 - Remove unnecessary comments.
Changes from v1:
 - Update the subject line per Muchun.
 - Move the allocation into the free hugetlb handling branch per Michal.
---
 mm/hugetlb.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)
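
[Editor's note] The pattern introduced by the patch — only allocate the
replacement once the old page is known to be in the free pool, drop
hugetlb_lock for the (possibly sleeping) allocation, then retake the lock
and recheck the state — can be illustrated with the following stand-alone
user-space sketch. It is not kernel code; the names replace_free_page and
page_is_free are invented for the example, and a pthread mutex stands in
for hugetlb_lock.

/*
 * Illustrative user-space sketch (not the kernel function itself) of the
 * "drop lock, allocate, retake lock, recheck" retry flow.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static bool page_is_free = true;	/* stand-in for the "freed hugetlb" state */

static int replace_free_page(void)
{
	void *new_page = NULL;
	int ret = 0;

retry:
	pthread_mutex_lock(&pool_lock);
	if (!page_is_free) {
		/* State changed while the lock was dropped: nothing to replace. */
		goto unlock;
	}
	if (!new_page) {
		/* The allocation must not happen under the lock; retry to recheck. */
		pthread_mutex_unlock(&pool_lock);
		new_page = malloc(4096);
		if (!new_page)
			return -1;	/* -ENOMEM in the kernel version */
		goto retry;
	}
	/* Old page is still free and a prepared replacement exists: swap it in. */
	page_is_free = false;
	printf("replaced free page with %p\n", new_page);
	new_page = NULL;		/* now owned by the "pool" */
unlock:
	pthread_mutex_unlock(&pool_lock);
	free(new_page);			/* free(NULL) is a no-op if unused */
	return ret;
}

int main(void)
{
	return replace_free_page() ? EXIT_FAILURE : EXIT_SUCCESS;
}

As in the patch, the allocation is paid for only on the path that really
needs a replacement page, at the cost of one extra lock round trip.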

Comments

Michal Hocko Feb. 6, 2024, 10:09 a.m. UTC | #1
On Tue 06-02-24 11:08:11, Baolin Wang wrote:
> alloc_and_dissolve_hugetlb_folio() preallocates a new hugetlb page before
> it takes hugetlb_lock. In 3 out of 4 cases the page is not really used and
> therefore the newly allocated page is just freed right away. This is
> wasteful and it might cause premature failures in those cases.
> 
> Address that by moving the allocation down to the only case (hugetlb
> page is really in the free pages pool). We need to drop hugetlb_lock
> to do so and therefore need to recheck the page state after regaining
> it.
> 
> The patch is more of a cleanup than an actual fix to an existing
> problem. There are no known reports about premature failures.
> 
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>

Acked-by: Michal Hocko <mhocko@suse.com>
Thanks!

> ---
> Changes from v2:
>  - Update the commit message suggested by Michal.
>  - Remove unnecessary comments.
> Changes from v1:
>  - Update the subject line per Muchun.
>  - Move the allocation into the free hugetlb handling branch per Michal.
> ---
>  mm/hugetlb.c | 32 ++++++++++++++++----------------
>  1 file changed, 16 insertions(+), 16 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 9d996fe4ecd9..a05507a2143f 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3031,21 +3031,9 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
>  {
>  	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
>  	int nid = folio_nid(old_folio);
> -	struct folio *new_folio;
> +	struct folio *new_folio = NULL;
>  	int ret = 0;
>  
> -	/*
> -	 * Before dissolving the folio, we need to allocate a new one for the
> -	 * pool to remain stable.  Here, we allocate the folio and 'prep' it
> -	 * by doing everything but actually updating counters and adding to
> -	 * the pool.  This simplifies and let us do most of the processing
> -	 * under the lock.
> -	 */
> -	new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL);
> -	if (!new_folio)
> -		return -ENOMEM;
> -	__prep_new_hugetlb_folio(h, new_folio);
> -
>  retry:
>  	spin_lock_irq(&hugetlb_lock);
>  	if (!folio_test_hugetlb(old_folio)) {
> @@ -3075,6 +3063,16 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
>  		cond_resched();
>  		goto retry;
>  	} else {
> +		if (!new_folio) {
> +			spin_unlock_irq(&hugetlb_lock);
> +			new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid,
> +							      NULL, NULL);
> +			if (!new_folio)
> +				return -ENOMEM;
> +			__prep_new_hugetlb_folio(h, new_folio);
> +			goto retry;
> +		}
> +
>  		/*
>  		 * Ok, old_folio is still a genuine free hugepage. Remove it from
>  		 * the freelist and decrease the counters. These will be
> @@ -3102,9 +3100,11 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
>  
>  free_new:
>  	spin_unlock_irq(&hugetlb_lock);
> -	/* Folio has a zero ref count, but needs a ref to be freed */
> -	folio_ref_unfreeze(new_folio, 1);
> -	update_and_free_hugetlb_folio(h, new_folio, false);
> +	if (new_folio) {
> +		/* Folio has a zero ref count, but needs a ref to be freed */
> +		folio_ref_unfreeze(new_folio, 1);
> +		update_and_free_hugetlb_folio(h, new_folio, false);
> +	}
>  
>  	return ret;
>  }
> -- 
> 2.39.3
>
Muchun Song Feb. 7, 2024, 2:24 a.m. UTC | #2
> On Feb 6, 2024, at 11:08, Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
> 
> alloc_and_dissolve_hugetlb_folio() preallocates a new hugetlb page before
> it takes hugetlb_lock. In 3 out of 4 cases the page is not really used and
> therefore the newly allocated page is just freed right away. This is
> wasteful and it might cause premature failures in those cases.
> 
> Address that by moving the allocation down to the only case (hugetlb
> page is really in the free pages pool). We need to drop hugetlb_lock
> to do so and therefore need to recheck the page state after regaining
> it.
> 
> The patch is more of a cleanup than an actual fix to an existing
> problem. There are no known reports about premature failures.
> 
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>

Reviewed-by: Muchun Song <muchun.song@linux.dev>

Thanks

> ---
> Changes from v2:
>  - Update the commit message suggested by Michal.
>  - Remove unnecessary comments.
> Changes from v1:
>  - Update the subject line per Muchun.
>  - Move the allocation into the free hugetlb handling branch per Michal.
> ---
>  mm/hugetlb.c | 32 ++++++++++++++++----------------
>  1 file changed, 16 insertions(+), 16 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 9d996fe4ecd9..a05507a2143f 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3031,21 +3031,9 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
>  {
>  	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
>  	int nid = folio_nid(old_folio);
> -	struct folio *new_folio;
> +	struct folio *new_folio = NULL;
>  	int ret = 0;
>  
> -	/*
> -	 * Before dissolving the folio, we need to allocate a new one for the
> -	 * pool to remain stable.  Here, we allocate the folio and 'prep' it
> -	 * by doing everything but actually updating counters and adding to
> -	 * the pool.  This simplifies and let us do most of the processing
> -	 * under the lock.
> -	 */
> -	new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL);
> -	if (!new_folio)
> -		return -ENOMEM;
> -	__prep_new_hugetlb_folio(h, new_folio);
> -
>  retry:
>  	spin_lock_irq(&hugetlb_lock);
>  	if (!folio_test_hugetlb(old_folio)) {
> @@ -3075,6 +3063,16 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
>  		cond_resched();
>  		goto retry;
>  	} else {
> +		if (!new_folio) {
> +			spin_unlock_irq(&hugetlb_lock);
> +			new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid,
> +							      NULL, NULL);
> +			if (!new_folio)
> +				return -ENOMEM;
> +			__prep_new_hugetlb_folio(h, new_folio);
> +			goto retry;
> +		}
> +
>  		/*
>  		 * Ok, old_folio is still a genuine free hugepage. Remove it from
>  		 * the freelist and decrease the counters. These will be
> @@ -3102,9 +3100,11 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
>  
>  free_new:
>  	spin_unlock_irq(&hugetlb_lock);
> -	/* Folio has a zero ref count, but needs a ref to be freed */
> -	folio_ref_unfreeze(new_folio, 1);
> -	update_and_free_hugetlb_folio(h, new_folio, false);
> +	if (new_folio) {
> +		/* Folio has a zero ref count, but needs a ref to be freed */
> +		folio_ref_unfreeze(new_folio, 1);
> +		update_and_free_hugetlb_folio(h, new_folio, false);
> +	}
>  
>  	return ret;
>  }
> -- 
> 2.39.3
>

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9d996fe4ecd9..a05507a2143f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3031,21 +3031,9 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
 {
 	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
 	int nid = folio_nid(old_folio);
-	struct folio *new_folio;
+	struct folio *new_folio = NULL;
 	int ret = 0;
 
-	/*
-	 * Before dissolving the folio, we need to allocate a new one for the
-	 * pool to remain stable.  Here, we allocate the folio and 'prep' it
-	 * by doing everything but actually updating counters and adding to
-	 * the pool.  This simplifies and let us do most of the processing
-	 * under the lock.
-	 */
-	new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL);
-	if (!new_folio)
-		return -ENOMEM;
-	__prep_new_hugetlb_folio(h, new_folio);
-
 retry:
 	spin_lock_irq(&hugetlb_lock);
 	if (!folio_test_hugetlb(old_folio)) {
@@ -3075,6 +3063,16 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
 		cond_resched();
 		goto retry;
 	} else {
+		if (!new_folio) {
+			spin_unlock_irq(&hugetlb_lock);
+			new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid,
+							      NULL, NULL);
+			if (!new_folio)
+				return -ENOMEM;
+			__prep_new_hugetlb_folio(h, new_folio);
+			goto retry;
+		}
+
 		/*
 		 * Ok, old_folio is still a genuine free hugepage. Remove it from
 		 * the freelist and decrease the counters. These will be
@@ -3102,9 +3100,11 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
 
 free_new:
 	spin_unlock_irq(&hugetlb_lock);
-	/* Folio has a zero ref count, but needs a ref to be freed */
-	folio_ref_unfreeze(new_folio, 1);
-	update_and_free_hugetlb_folio(h, new_folio, false);
+	if (new_folio) {
+		/* Folio has a zero ref count, but needs a ref to be freed */
+		folio_ref_unfreeze(new_folio, 1);
+		update_and_free_hugetlb_folio(h, new_folio, false);
+	}
 
 	return ret;
 }