[RFC,v3,2/8] mm: compaction: handle non-lru compound pages properly in isolate_migratepages_block().

Message ID 20220105214756.91065-3-zi.yan@sent.com (mailing list archive)
State New
Series Use pageblock_order for cma and alloc_contig_range alignment.

Commit Message

Zi Yan Jan. 5, 2022, 9:47 p.m. UTC
From: Zi Yan <ziy@nvidia.com>

In isolate_migratepages_block(), a !PageLRU tail page can be encountered
when the page is larger than a pageblock. Use compound head page for the
checks inside and skip the entire compound page when isolation succeeds.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/compaction.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)
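
For background on the head-page lookup this patch relies on:
compound_head() maps any tail page of a compound page back to its
head page. A simplified sketch of the idea, not the kernel's exact
definition (field layout abbreviated):

/* Tail pages store a pointer to their head page in
 * page->compound_head, with bit 0 set as a tail marker. */
struct page {
	unsigned long compound_head;	/* bit 0 set => tail page */
	/* ... many other fields elided ... */
};

static struct page *sketch_compound_head(struct page *page)
{
	unsigned long head = page->compound_head;

	if (head & 1)			/* tail page? */
		return (struct page *)(head - 1);
	return page;			/* already a head (or order-0) page */
}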

Comments

David Hildenbrand Jan. 12, 2022, 11:01 a.m. UTC | #1
On 05.01.22 22:47, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
> 
> In isolate_migratepages_block(), a !PageLRU tail page can be encountered
> when the page is larger than a pageblock. Use compound head page for the
> checks inside and skip the entire compound page when isolation succeeds.
> 

This will currently never happen, due to the way we always isolate
MAX_ORDER - 1 ranges, correct?

Better to note that in the patch description, because the current
wording ("can be encountered") reads like it's an actual fix.

> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>  mm/compaction.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index b4e94cda3019..ad9053fbbe06 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -979,19 +979,23 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  		 * Skip any other type of page
>  		 */
>  		if (!PageLRU(page)) {
> +			struct page *head = compound_head(page);
>  			/*
>  			 * __PageMovable can return false positive so we need
>  			 * to verify it under page_lock.
>  			 */
> -			if (unlikely(__PageMovable(page)) &&
> -					!PageIsolated(page)) {
> +			if (unlikely(__PageMovable(head)) &&
> +					!PageIsolated(head)) {
>  				if (locked) {
>  					unlock_page_lruvec_irqrestore(locked, flags);
>  					locked = NULL;
>  				}
>  
> -				if (!isolate_movable_page(page, isolate_mode))
> +				if (!isolate_movable_page(head, isolate_mode)) {
> +					low_pfn += (1 << compound_order(head)) - 1 - (page - head);
> +					page = head;
>  					goto isolate_success;
> +				}
>  			}
>  
>  			goto isolate_fail;
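
For context on the "false positive" remark in the quoted comment:
__PageMovable() is a lockless hint that inspects the low bits of
page->mapping, so it can race with the page being freed or retyped;
the result therefore has to be re-verified under the page lock
before migration. A rough sketch of the idea, not the exact kernel
code:

#define PAGE_MAPPING_MOVABLE	0x2UL
#define PAGE_MAPPING_FLAGS	0x3UL

struct page {
	void *mapping;		/* low bits double as type tags */
	/* ... other fields elided ... */
};

/* True if the mapping's tag bits say "non-LRU movable page". */
static int sketch_page_movable(const struct page *page)
{
	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
						PAGE_MAPPING_MOVABLE;
}
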
Zi Yan Jan. 13, 2022, 2:57 p.m. UTC | #2
On 12 Jan 2022, at 6:01, David Hildenbrand wrote:

> On 05.01.22 22:47, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> In isolate_migratepages_block(), a !PageLRU tail page can be encountered
>> when the page is larger than a pageblock. Use compound head page for the
>> checks inside and skip the entire compound page when isolation succeeds.
>>
>
> This will currently never happen, due to the way we always isolate
> MAX_ORDER - 1 ranges, correct?

You are right.

>
> Better to note that in the patch description, because the current
> wording ("can be encountered") reads like it's an actual fix.
>

Will do. This is a preparation patch for the upcoming commits.


--
Best Regards,
Yan, Zi
Zi Yan Jan. 13, 2022, 4:23 p.m. UTC | #3
On 13 Jan 2022, at 9:57, Zi Yan wrote:

> On 12 Jan 2022, at 6:01, David Hildenbrand wrote:
>
>> On 05.01.22 22:47, Zi Yan wrote:
>>> From: Zi Yan <ziy@nvidia.com>
>>>
>>> In isolate_migratepages_block(), a !PageLRU tail page can be encountered
>>> when the page is larger than a pageblock. Use compound head page for the
>>> checks inside and skip the entire compound page when isolation succeeds.
>>>
>>
>> This will currently never happen, due to the way we always isolate
>> MAX_ORDER - 1 ranges, correct?
>
> You are right.
>
>>
>> Better to note that in the patch description, because the current
>> wording ("can be encountered") reads like it's an actual fix.
>>
>
> Will do. This is a preparation patch for the upcoming commits.

I will drop this one too. As you mentioned in [1], there are no
non-lru migratable compound pages; this path is only exercised by my
local test code.

[1] https://lore.kernel.org/linux-mm/970ca2a4-416d-7e8f-37c7-510c5b050f4b@redhat.com/


--
Best Regards,
Yan, Zi

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index b4e94cda3019..ad9053fbbe06 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -979,19 +979,23 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 * Skip any other type of page
 		 */
 		if (!PageLRU(page)) {
+			struct page *head = compound_head(page);
 			/*
 			 * __PageMovable can return false positive so we need
 			 * to verify it under page_lock.
 			 */
-			if (unlikely(__PageMovable(page)) &&
-					!PageIsolated(page)) {
+			if (unlikely(__PageMovable(head)) &&
+					!PageIsolated(head)) {
 				if (locked) {
 					unlock_page_lruvec_irqrestore(locked, flags);
 					locked = NULL;
 				}
 
-				if (!isolate_movable_page(page, isolate_mode))
+				if (!isolate_movable_page(head, isolate_mode)) {
+					low_pfn += (1 << compound_order(head)) - 1 - (page - head);
+					page = head;
 					goto isolate_success;
+				}
 			}
 
 			goto isolate_fail;
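
As a sanity check on the skip arithmetic in the hunk above: combined
with the scan loop's per-iteration low_pfn++, the adjustment resumes
the scan immediately after the compound page, no matter which tail
page the scanner landed on. A minimal standalone demo (all values
hypothetical):

#include <stdio.h>

int main(void)
{
	/* Hypothetical order-10 compound page (1024 base pages) with
	 * its head at pfn 0x1000; the scan landed 512 pages in. */
	unsigned long head_pfn = 0x1000;
	unsigned int order = 10;
	unsigned long low_pfn = head_pfn + 512;
	unsigned long offset = low_pfn - head_pfn;	/* page - head */

	/* The hunk's adjustment: jump to the last pfn of the
	 * compound page; the loop's own increment then steps past it. */
	low_pfn += (1UL << order) - 1 - offset;

	printf("pfn after loop increment: 0x%lx\n", low_pfn + 1);
	/* prints 0x1400 == head_pfn + (1 << order), just past the page */
	return 0;
}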