
[02/21] mm/hugetlb: correct max_huge_pages accounting on demote

Message ID 20220913195508.3511038-3-opendmb@gmail.com (mailing list archive)
State New
Series mm: introduce Designated Movable Blocks

Commit Message

Doug Berger Sept. 13, 2022, 7:54 p.m. UTC
When demoting a hugepage to a smaller order, the number of pages
added to the target hstate will be the size of the large page
divided by the size of the smaller page.

Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
Signed-off-by: Doug Berger <opendmb@gmail.com>
---
 mm/hugetlb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
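
As a rough sketch of the arithmetic (assuming x86-64 sizes of 4 KiB base
pages with 2 MiB and 1 GiB hugepages, which the patch itself does not spell
out), the old and new increments to the target pool differ as follows:

#include <stdio.h>

int main(void)
{
	unsigned long per_1g = (1UL << 30) >> 12;	/* 262144 base pages per 1 GiB hugepage */
	unsigned long per_2m = (1UL << 21) >> 12;	/* 512 base pages per 2 MiB hugepage */

	/* old accounting: the 2 MiB pool's target grew by 262144 per demote */
	printf("old increment: %lu\n", per_1g);

	/* fixed accounting: one 1 GiB page demotes into 512 2 MiB pages */
	printf("new increment: %lu\n", per_1g / per_2m);

	return 0;
}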

Comments

Mike Kravetz Sept. 14, 2022, 5:23 p.m. UTC | #1
On 09/13/22 12:54, Doug Berger wrote:
> When demoting a hugepage to a smaller order, the number of pages
> added to the target hstate will be the size of the large page
> divided by the size of the smaller page.
> 
> Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
> Signed-off-by: Doug Berger <opendmb@gmail.com>
> ---
>  mm/hugetlb.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index e070b8593b37..79949893ac12 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3472,7 +3472,8 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
>  	 * based on pool changes for the demoted page.
>  	 */
>  	h->max_huge_pages--;
> -	target_hstate->max_huge_pages += pages_per_huge_page(h);
> +	target_hstate->max_huge_pages += pages_per_huge_page(h) /
> +					 pages_per_huge_page(target_hstate);
>  
>  	return rc;
>  }

This has already been fixed here,

https://lore.kernel.org/linux-mm/20220823030209.57434-2-linmiaohe@huawei.com/
Florian Fainelli Sept. 14, 2022, 5:26 p.m. UTC | #2
On 9/14/22 10:23, Mike Kravetz wrote:
> On 09/13/22 12:54, Doug Berger wrote:
>> When demoting a hugepage to a smaller order, the number of pages
>> added to the target hstate will be the size of the large page
>> divided by the size of the smaller page.
>>
>> Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
>> Signed-off-by: Doug Berger <opendmb@gmail.com>
>> ---
>>   mm/hugetlb.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index e070b8593b37..79949893ac12 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -3472,7 +3472,8 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
>>   	 * based on pool changes for the demoted page.
>>   	 */
>>   	h->max_huge_pages--;
>> -	target_hstate->max_huge_pages += pages_per_huge_page(h);
>> +	target_hstate->max_huge_pages += pages_per_huge_page(h) /
>> +					 pages_per_huge_page(target_hstate);
>>   
>>   	return rc;
>>   }
> 
> This has already been fixed here,
> 
> https://lore.kernel.org/linux-mm/20220823030209.57434-2-linmiaohe@huawei.com/
> 

Could we slap the Fixes tag on when Miaohe's patch series gets 
accepted, since the offending commit is in v5.16 and beyond? Thanks!
Doug Berger Sept. 14, 2022, 5:30 p.m. UTC | #3
On 9/14/2022 10:23 AM, Mike Kravetz wrote:
> On 09/13/22 12:54, Doug Berger wrote:
>> When demoting a hugepage to a smaller order, the number of pages
>> added to the target hstate will be the size of the large page
>> divided by the size of the smaller page.
>>
>> Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
>> Signed-off-by: Doug Berger <opendmb@gmail.com>
>> ---
>>   mm/hugetlb.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index e070b8593b37..79949893ac12 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -3472,7 +3472,8 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
>>   	 * based on pool changes for the demoted page.
>>   	 */
>>   	h->max_huge_pages--;
>> -	target_hstate->max_huge_pages += pages_per_huge_page(h);
>> +	target_hstate->max_huge_pages += pages_per_huge_page(h) /
>> +					 pages_per_huge_page(target_hstate);
>>   
>>   	return rc;
>>   }
> 
> This has already been fixed here,
> 
> https://lore.kernel.org/linux-mm/20220823030209.57434-2-linmiaohe@huawei.com/
> 
Excellent! Thanks for the pointer and sorry for the noise.
-Doug
Mike Kravetz Sept. 14, 2022, 6:43 p.m. UTC | #4
On 09/14/22 10:26, Florian Fainelli wrote:
> On 9/14/22 10:23, Mike Kravetz wrote:
> > On 09/13/22 12:54, Doug Berger wrote:
> > > When demoting a hugepage to a smaller order, the number of pages
> > > added to the target hstate will be the size of the large page
> > > divided by the size of the smaller page.
> > > 
> > > Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
> > > Signed-off-by: Doug Berger <opendmb@gmail.com>
> > > ---
> > >   mm/hugetlb.c | 3 ++-
> > >   1 file changed, 2 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > > index e070b8593b37..79949893ac12 100644
> > > --- a/mm/hugetlb.c
> > > +++ b/mm/hugetlb.c
> > > @@ -3472,7 +3472,8 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
> > >   	 * based on pool changes for the demoted page.
> > >   	 */
> > >   	h->max_huge_pages--;
> > > -	target_hstate->max_huge_pages += pages_per_huge_page(h);
> > > +	target_hstate->max_huge_pages += pages_per_huge_page(h) /
> > > +					 pages_per_huge_page(target_hstate);
> > >   	return rc;
> > >   }
> > 
> > This has already been fixed here,
> > 
> > https://lore.kernel.org/linux-mm/20220823030209.57434-2-linmiaohe@huawei.com/
> > 
> 
> Could we slap the Fixes tag on when Miaohe's patch series gets accepted,
> since the offending commit is in v5.16 and beyond? Thanks!

We could.  However, this fix/change does not really have any impact on
the way the code functions (unless I am mistaken).  See my analysis at:

https://lore.kernel.org/linux-mm/YvwfvxXewnZpHQcz@monkey/
Andrew Morton Sept. 14, 2022, 8:58 p.m. UTC | #5
On Wed, 14 Sep 2022 10:23:05 -0700 Mike Kravetz <mike.kravetz@oracle.com> wrote:

> On 09/13/22 12:54, Doug Berger wrote:
> > When demoting a hugepage to a smaller order, the number of pages
> > added to the target hstate will be the size of the large page
> > divided by the size of the smaller page.
> > 
> > Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
> > Signed-off-by: Doug Berger <opendmb@gmail.com>
> > ---
> >  mm/hugetlb.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index e070b8593b37..79949893ac12 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -3472,7 +3472,8 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
> >  	 * based on pool changes for the demoted page.
> >  	 */
> >  	h->max_huge_pages--;
> > -	target_hstate->max_huge_pages += pages_per_huge_page(h);
> > +	target_hstate->max_huge_pages += pages_per_huge_page(h) /
> > +					 pages_per_huge_page(target_hstate);
> >  
> >  	return rc;
> >  }
> 
> This has already been fixed here,
> 
> https://lore.kernel.org/linux-mm/20220823030209.57434-2-linmiaohe@huawei.com/

Neither version tells us the user-visible runtime effects of the change :(
Mike Kravetz Sept. 14, 2022, 9:11 p.m. UTC | #6
On 09/14/22 13:58, Andrew Morton wrote:
> On Wed, 14 Sep 2022 10:23:05 -0700 Mike Kravetz <mike.kravetz@oracle.com> wrote:
> 
> > On 09/13/22 12:54, Doug Berger wrote:
> > > When demoting a hugepage to a smaller order, the number of pages
> > > added to the target hstate will be the size of the large page
> > > divided by the size of the smaller page.
> > > 
> > > Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
> > > Signed-off-by: Doug Berger <opendmb@gmail.com>
> > > ---
> > >  mm/hugetlb.c | 3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > > index e070b8593b37..79949893ac12 100644
> > > --- a/mm/hugetlb.c
> > > +++ b/mm/hugetlb.c
> > > @@ -3472,7 +3472,8 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
> > >  	 * based on pool changes for the demoted page.
> > >  	 */
> > >  	h->max_huge_pages--;
> > > -	target_hstate->max_huge_pages += pages_per_huge_page(h);
> > > +	target_hstate->max_huge_pages += pages_per_huge_page(h) /
> > > +					 pages_per_huge_page(target_hstate);
> > >  
> > >  	return rc;
> > >  }
> > 
> > This has already been fixed here,
> > 
> > https://lore.kernel.org/linux-mm/20220823030209.57434-2-linmiaohe@huawei.com/
> 
> Neither version tells us the user-visible runtime effects of the change :(

Sorry, I should have pushed harder on this with Miaohe's patch.

There are no user-visible runtime effects.  In fact, this change really causes
no functional change (unless I am mistaken and Miaohe did not correct me).
max_huge_pages is not used again until it is reset.  See my explanation at:
https://lore.kernel.org/linux-mm/YvwfvxXewnZpHQcz@monkey/
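
A simplified model (not the kernel code; the struct and helpers below are
made up purely for illustration) of why the stale value is harmless under
that explanation: the next pool resize overwrites the target wholesale, so
an incorrect intermediate increment never influences behaviour.

#include <stdio.h>

struct pool {
	unsigned long max_huge_pages;	/* pool target, bookkeeping only */
	unsigned long nr_huge_pages;	/* pages actually in the pool */
};

/* demotion bumps the target by some increment, correct or not */
static void demote_into(struct pool *target, unsigned long increment)
{
	target->max_huge_pages += increment;
}

/* a later write to nr_hugepages recomputes the target from scratch */
static void set_pool_target(struct pool *p, unsigned long count)
{
	p->max_huge_pages = count;	/* any stale increment is discarded here */
}

int main(void)
{
	struct pool target_2m = { 0, 0 };

	demote_into(&target_2m, 262144);	/* over-large increment (pre-fix) */
	set_pool_target(&target_2m, 512);	/* userspace resizes the pool */

	printf("target after resize: %lu\n", target_2m.max_huge_pages);
	return 0;
}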

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e070b8593b37..79949893ac12 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3472,7 +3472,8 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 	 * based on pool changes for the demoted page.
 	 */
 	h->max_huge_pages--;
-	target_hstate->max_huge_pages += pages_per_huge_page(h);
+	target_hstate->max_huge_pages += pages_per_huge_page(h) /
+					 pages_per_huge_page(target_hstate);
 
 	return rc;
 }