[v2,1/8] mm: migrate: remove PageTransHuge check in numamigrate_isolate_page()

Message ID 20230821115624.158759-2-wangkefeng.wang@huawei.com (mailing list archive)
State New
Series mm: migrate: more folio conversion and unify

Commit Message

Kefeng Wang Aug. 21, 2023, 11:56 a.m. UTC
Since we are beginning to convert the numa migration code to use
folios, which will let us handle folios of arbitrary sizes, drop the
assertion that we only support PageTransHuge pages (PMD size) when
order > 0.

Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 2 --
 1 file changed, 2 deletions(-)

Comments

Matthew Wilcox Aug. 21, 2023, 12:38 p.m. UTC | #1
On Mon, Aug 21, 2023 at 07:56:17PM +0800, Kefeng Wang wrote:
> Since we are beginning to convert the numa migration code to use
> folios, which will let us handle folios of arbitrary sizes, drop the
> assertion that we only support PageTransHuge pages (PMD size) when
> order > 0.

Have you looked at the implementation of PageTransHuge()?  Your
description doesn't match what the code does.

> Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/migrate.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index b7fa020003f3..646d8ee7f102 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2483,8 +2483,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  	int nr_pages = thp_nr_pages(page);
>  	int order = compound_order(page);
>  
> -	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
> -
>  	/* Do not migrate THP mapped by multiple processes */
>  	if (PageTransHuge(page) && total_mapcount(page) > 1)
>  		return 0;
> -- 
> 2.41.0
> 
>
Kefeng Wang Aug. 21, 2023, 12:52 p.m. UTC | #2
On 2023/8/21 20:38, Matthew Wilcox wrote:
> On Mon, Aug 21, 2023 at 07:56:17PM +0800, Kefeng Wang wrote:
>> Since we are beginning to convert the numa migration code to use
>> folios, which will let us handle folios of arbitrary sizes, drop the
>> assertion that we only support PageTransHuge pages (PMD size) when
>> order > 0.
> 
> Have you looked at the implementation of PageTransHuge()?  Your
> description doesn't match what the code does.

oops, not only PMD size...

> 
>> Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>   mm/migrate.c | 2 --
>>   1 file changed, 2 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index b7fa020003f3..646d8ee7f102 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -2483,8 +2483,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>   	int nr_pages = thp_nr_pages(page);
>>   	int order = compound_order(page);
>>   
>> -	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>> -
>>   	/* Do not migrate THP mapped by multiple processes */
>>   	if (PageTransHuge(page) && total_mapcount(page) > 1)
>>   		return 0;
>> -- 
>> 2.41.0
>>
>>
>
Kefeng Wang Aug. 21, 2023, 2:41 p.m. UTC | #3
On 2023/8/21 20:52, Kefeng Wang wrote:
> 
> 
> On 2023/8/21 20:38, Matthew Wilcox wrote:
>> On Mon, Aug 21, 2023 at 07:56:17PM +0800, Kefeng Wang wrote:
>>> Since we are beginning to convert the numa migration code to use
>>> folios, which will let us handle folios of arbitrary sizes, drop the
>>> assertion that we only support PageTransHuge pages (PMD size) when
>>> order > 0.
>>
>> Have you looked at the implementation of PageTransHuge()?  Your
>> description doesn't match what the code does.
> 
> oops, not only PMD size...

Please ignore the above reply, sorry, I misread. PageTransHuge()
returns true for a head page and hits VM_BUG_ON for a tail page, while
compound_order(page) returns 0 for a tail page, so once we start
converting the page to a folio we can drop this line, since the check
is not useful for folios.
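To spell that out, here is a rough sketch of how the two helpers
behave; this is paraphrased and simplified, not the verbatim kernel
definitions, and the exact field layout differs across trees:

/* Simplified sketch, not the verbatim kernel code. */

/* True only for the head page of a compound (THP) page; calling it
 * on a tail page trips the VM_BUG_ON_PAGE() below. */
static inline int PageTransHuge(struct page *page)
{
	VM_BUG_ON_PAGE(PageTail(page), page);
	return PageHead(page);
}

/* Returns 0 for both base pages and tail pages; a non-zero order is
 * only reported when called on a head page. */
static inline unsigned int compound_order(struct page *page)
{
	if (!PageHead(page))
		return 0;
	return page[1].compound_order;	/* order kept in the first tail page */
}

With these definitions, order > 0 already implies a head page, so the
assertion adds little on top of what the helpers themselves enforce.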

> 
>>
>>> Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>> ---
>>>   mm/migrate.c | 2 --
>>>   1 file changed, 2 deletions(-)
>>>
>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>> index b7fa020003f3..646d8ee7f102 100644
>>> --- a/mm/migrate.c
>>> +++ b/mm/migrate.c
>>> @@ -2483,8 +2483,6 @@ static int numamigrate_isolate_page(pg_data_t 
>>> *pgdat, struct page *page)
>>>       int nr_pages = thp_nr_pages(page);
>>>       int order = compound_order(page);
>>> -    VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>>> -
>>>       /* Do not migrate THP mapped by multiple processes */
>>>       if (PageTransHuge(page) && total_mapcount(page) > 1)
>>>           return 0;
>>> -- 
>>> 2.41.0
>>>
>>>
>>
Kefeng Wang Aug. 25, 2023, 3:51 a.m. UTC | #4
On 2023/8/21 20:38, Matthew Wilcox wrote:
> On Mon, Aug 21, 2023 at 07:56:17PM +0800, Kefeng Wang wrote:
>> Since we are beginning to convert the numa migration code to use
>> folios, which will let us handle folios of arbitrary sizes, drop the
>> assertion that we only support PageTransHuge pages (PMD size) when
>> order > 0.
> 
> Have you looked at the implementation of PageTransHuge()?  Your
> description doesn't match what the code does.

How about changing it to the following description,

The assertion VM_BUG_ON_PAGE(order && !PageTransHuge(page), page) is
not very useful,

    1) for a tail or base page, order = 0; for a head page, order > 0
       and PageTransHuge() is true
    2) do_numa_page() has a PageCompound() check and only handles base
       pages, while do_huge_pmd_numa_page() only handles PMD-mapped
       THPs
    3) even if the page is a tail page, isolate_lru_page() will emit a
       warning and fail to isolate the page
    4) and if folio migration is supported in the future, we would
       likely migrate the entire folio when a NUMA fault occurs on a
       tail page

so just remove the check.
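For reference, a rough sketch of the call sites behind points 2) and
3); this is paraphrased and abbreviated, not the exact code in any
particular tree:

/* 2) do_numa_page() skips compound pages before calling
 *    migrate_misplaced_page(), so only base pages reach
 *    numamigrate_isolate_page() from the PTE path; PMD-mapped THPs
 *    go through do_huge_pmd_numa_page() instead. */
	/* TODO: handle PTE-mapped THP */
	if (PageCompound(page)) {
		put_page(page);
		goto out_map;
	}

/* 3) Even if a tail page did slip through, isolation already warns
 *    and fails on its own, so the VM_BUG_ON adds nothing. */
bool isolate_lru_page(struct page *page)
{
	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
		return false;
	return folio_isolate_lru(page_folio(page));
}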

Thanks
Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index b7fa020003f3..646d8ee7f102 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2483,8 +2483,6 @@  static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 	int nr_pages = thp_nr_pages(page);
 	int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
-
 	/* Do not migrate THP mapped by multiple processes */
 	if (PageTransHuge(page) && total_mapcount(page) > 1)
 		return 0;