
[v3,1/3] mm/compaction: enable compacting >0 order folios.

Message ID 20240202161554.565023-2-zi.yan@sent.com (mailing list archive)
State New
Series Enable >0 order folio memory compaction

Commit Message

Zi Yan Feb. 2, 2024, 4:15 p.m. UTC
From: Zi Yan <ziy@nvidia.com>

migrate_pages() supports >0 order folio migration, and during compaction,
even if compaction_alloc() cannot provide >0 order free pages,
migrate_pages() can split the source page and try to migrate the base pages
from the split. This can serve as a baseline and starting point for adding
support for compacting >0 order folios.

Suggested-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/compaction.c | 43 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 35 insertions(+), 8 deletions(-)
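
The fallback the message relies on, where migrate_pages() splits the source
folio when no >0 order destination is available, can be pictured roughly like
this. This is an illustrative sketch only, not the actual migrate_pages()
internals; split_and_requeue() and move_to() are hypothetical placeholders,
not real kernel functions:

static int migrate_one_folio_sketch(struct folio *src,
                                    struct compact_control *cc,
                                    struct list_head *retry_list)
{
        /* With this patch, compaction_alloc() returns NULL for large folios */
        struct folio *dst = compaction_alloc(src, (unsigned long)cc);

        if (!dst) {
                /*
                 * No suitable free page: split the source folio and queue
                 * the resulting base pages so they can be retried one by one.
                 */
                if (folio_test_large(src))
                        return split_and_requeue(src, retry_list);
                return -ENOMEM;
        }

        /* Normal migration path into the allocated destination */
        return move_to(src, dst);
}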

Comments

Vlastimil Babka Feb. 9, 2024, 2:32 p.m. UTC | #1
On 2/2/24 17:15, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
> 
> migrate_pages() supports >0 order folio migration and during compaction,
> even if compaction_alloc() cannot provide >0 order free pages,
> migrate_pages() can split the source page and try to migrate the base pages
> from the split. It can be a baseline and start point for adding support for
> compacting >0 order folios.
> 
> Suggested-by: Huang Ying <ying.huang@intel.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>  mm/compaction.c | 43 +++++++++++++++++++++++++++++++++++--------
>  1 file changed, 35 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 4add68d40e8d..e43e898d2c77 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc)
>  	return too_many;
>  }
>  
> +/*
> + * 1. if the page order is larger than or equal to target_order (i.e.,
> + * cc->order and when it is not -1 for global compaction), skip it since
> + * target_order already indicates no free page with larger than target_order
> + * exists and later migrating it will most likely fail;
> + *
> + * 2. compacting > pageblock_order pages does not improve memory fragmentation,
> + * skip them;
> + */
> +static bool skip_isolation_on_order(int order, int target_order)
> +{
> +	return (target_order != -1 && order >= target_order) ||
> +		order >= pageblock_order;
> +}
> +
>  /**
>   * isolate_migratepages_block() - isolate all migrate-able pages within
>   *				  a single pageblock
> @@ -1010,7 +1025,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  		/*
>  		 * Regardless of being on LRU, compound pages such as THP and
>  		 * hugetlbfs are not to be compacted unless we are attempting
> -		 * an allocation much larger than the huge page size (eg CMA).
> +		 * an allocation larger than the compound page size.
>  		 * We can potentially save a lot of iterations if we skip them
>  		 * at once. The check is racy, but we can consider only valid
>  		 * values and the only danger is skipping too much.
> @@ -1018,11 +1033,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  		if (PageCompound(page) && !cc->alloc_contig) {
>  			const unsigned int order = compound_order(page);
>  
> -			if (likely(order <= MAX_PAGE_ORDER)) {
> -				low_pfn += (1UL << order) - 1;
> -				nr_scanned += (1UL << order) - 1;
> +			/*
> +			 * Skip based on page order and compaction target order
> +			 * and skip hugetlbfs pages.
> +			 */
> +			if (skip_isolation_on_order(order, cc->order) ||
> +			    PageHuge(page)) {

Hm I'd try to avoid a new PageHuge() test here.

Earlier we have a block that does
                if (PageHuge(page) && cc->alloc_contig) {
			...

I think I'd rather rewrite it to handle the PageHuge() case completely and
just make it skip the 1UL << order pages there for !cc->alloc_contig, even
if it means duplicating a bit of the low_pfn and nr_scanned bumping code.

Which reminds me the PageHuge() check there is probably still broken ATM:

https://lore.kernel.org/all/8fa1c95c-4749-33dd-42ba-243e492ab109@suse.cz/

Even better reason not to add another one.
If the huge page materialized since the first check, we should bail out when
testing PageLRU later anyway.

> +				if (order <= MAX_PAGE_ORDER) {
> +					low_pfn += (1UL << order) - 1;
> +					nr_scanned += (1UL << order) - 1;
> +				}
> +				goto isolate_fail;
>  			}
> -			goto isolate_fail;
>  		}
>  
>  		/*
> @@ -1165,10 +1187,11 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  			}
>  
>  			/*
> -			 * folio become large since the non-locked check,
> -			 * and it's on LRU.
> +			 * Check LRU folio order under the lock
>  			 */
> -			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
> +			if (unlikely(skip_isolation_on_order(folio_order(folio),
> +							     cc->order) &&
> +				     !cc->alloc_contig)) {
>  				low_pfn += folio_nr_pages(folio) - 1;
>  				nr_scanned += folio_nr_pages(folio) - 1;
>  				folio_set_lru(folio);
> @@ -1786,6 +1809,10 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>  	struct compact_control *cc = (struct compact_control *)data;
>  	struct folio *dst;
>  
> +	/* this makes migrate_pages() split the source page and retry */
> +	if (folio_test_large(src) > 0)
> +		return NULL;
> +
>  	if (list_empty(&cc->freepages)) {
>  		isolate_freepages(cc);
>
Zi Yan Feb. 9, 2024, 7:25 p.m. UTC | #2
On 9 Feb 2024, at 9:32, Vlastimil Babka wrote:

> On 2/2/24 17:15, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> migrate_pages() supports >0 order folio migration and during compaction,
>> even if compaction_alloc() cannot provide >0 order free pages,
>> migrate_pages() can split the source page and try to migrate the base pages
>> from the split. It can be a baseline and start point for adding support for
>> compacting >0 order folios.
>>
>> Suggested-by: Huang Ying <ying.huang@intel.com>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>>  mm/compaction.c | 43 +++++++++++++++++++++++++++++++++++--------
>>  1 file changed, 35 insertions(+), 8 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 4add68d40e8d..e43e898d2c77 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc)
>>  	return too_many;
>>  }
>>
>> +/*
>> + * 1. if the page order is larger than or equal to target_order (i.e.,
>> + * cc->order and when it is not -1 for global compaction), skip it since
>> + * target_order already indicates no free page with larger than target_order
>> + * exists and later migrating it will most likely fail;
>> + *
>> + * 2. compacting > pageblock_order pages does not improve memory fragmentation,
>> + * skip them;
>> + */
>> +static bool skip_isolation_on_order(int order, int target_order)
>> +{
>> +	return (target_order != -1 && order >= target_order) ||
>> +		order >= pageblock_order;
>> +}
>> +
>>  /**
>>   * isolate_migratepages_block() - isolate all migrate-able pages within
>>   *				  a single pageblock
>> @@ -1010,7 +1025,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>>  		/*
>>  		 * Regardless of being on LRU, compound pages such as THP and
>>  		 * hugetlbfs are not to be compacted unless we are attempting
>> -		 * an allocation much larger than the huge page size (eg CMA).
>> +		 * an allocation larger than the compound page size.
>>  		 * We can potentially save a lot of iterations if we skip them
>>  		 * at once. The check is racy, but we can consider only valid
>>  		 * values and the only danger is skipping too much.
>> @@ -1018,11 +1033,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>>  		if (PageCompound(page) && !cc->alloc_contig) {
>>  			const unsigned int order = compound_order(page);
>>
>> -			if (likely(order <= MAX_PAGE_ORDER)) {
>> -				low_pfn += (1UL << order) - 1;
>> -				nr_scanned += (1UL << order) - 1;
>> +			/*
>> +			 * Skip based on page order and compaction target order
>> +			 * and skip hugetlbfs pages.
>> +			 */
>> +			if (skip_isolation_on_order(order, cc->order) ||
>> +			    PageHuge(page)) {
>
> Hm I'd try to avoid a new PageHuge() test here.
>
> Earlier we have a block that does
>                 if (PageHuge(page) && cc->alloc_contig) {
> 			...
>
> think I'd rather rewrite it to handle the PageHuge() case completely and
> just make it skip the 1UL << order pages there for !cc->alloc_config. Even
> if it means duplicating a bit of the low_pfn and nr_scanned bumping code.
>
> Which reminds me the PageHuge() check there is probably still broken ATM:
>
> https://lore.kernel.org/all/8fa1c95c-4749-33dd-42ba-243e492ab109@suse.cz/
>
> Even better reason not to add another one.
> If the huge page materialized since the first check, we should bail out when
> testing PageLRU later anyway.


OK, so basically something like:

if (PageHuge(page)) {
    if (cc->alloc_contig) {
        // existing code for PageHuge(page) && cc->alloc_contig
    } else {
        const unsigned int order = compound_order(page);

        if (order <= MAX_PAGE_ORDER) {
            low_pfn += (1UL << order) - 1;
            nr_scanned += (1UL << order) - 1;
        }
        goto isolate_fail;
    }
}

...

if (PageCompound(page) && !cc->alloc_contig) {
        const unsigned int order = compound_order(page);

        /* Skip based on page order and compaction target order. */
        if (skip_isolation_on_order(order, cc->order)) {
                if (order <= MAX_PAGE_ORDER) {
                        low_pfn += (1UL << order) - 1;
                        nr_scanned += (1UL << order) - 1;
                }
                goto isolate_fail;
        }
}

>
>> +				if (order <= MAX_PAGE_ORDER) {
>> +					low_pfn += (1UL << order) - 1;
>> +					nr_scanned += (1UL << order) - 1;
>> +				}
>> +				goto isolate_fail;
>>  			}
>> -			goto isolate_fail;
>>  		}
>>
>>  		/*
>> @@ -1165,10 +1187,11 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>>  			}
>>
>>  			/*
>> -			 * folio become large since the non-locked check,
>> -			 * and it's on LRU.
>> +			 * Check LRU folio order under the lock
>>  			 */
>> -			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
>> +			if (unlikely(skip_isolation_on_order(folio_order(folio),
>> +							     cc->order) &&
>> +				     !cc->alloc_contig)) {
>>  				low_pfn += folio_nr_pages(folio) - 1;
>>  				nr_scanned += folio_nr_pages(folio) - 1;
>>  				folio_set_lru(folio);
>> @@ -1786,6 +1809,10 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>>  	struct compact_control *cc = (struct compact_control *)data;
>>  	struct folio *dst;
>>
>> +	/* this makes migrate_pages() split the source page and retry */
>> +	if (folio_test_large(src) > 0)
>> +		return NULL;
>> +
>>  	if (list_empty(&cc->freepages)) {
>>  		isolate_freepages(cc);
>>


--
Best Regards,
Yan, Zi
Vlastimil Babka Feb. 9, 2024, 8:43 p.m. UTC | #3
On 2/9/24 20:25, Zi Yan wrote:
> On 9 Feb 2024, at 9:32, Vlastimil Babka wrote:
> 
>> On 2/2/24 17:15, Zi Yan wrote:
>>> From: Zi Yan <ziy@nvidia.com>
>>>
>>> migrate_pages() supports >0 order folio migration and during compaction,
>>> even if compaction_alloc() cannot provide >0 order free pages,
>>> migrate_pages() can split the source page and try to migrate the base pages
>>> from the split. It can be a baseline and start point for adding support for
>>> compacting >0 order folios.
>>>
>>> Suggested-by: Huang Ying <ying.huang@intel.com>
>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>> ---
>>>  mm/compaction.c | 43 +++++++++++++++++++++++++++++++++++--------
>>>  1 file changed, 35 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>> index 4add68d40e8d..e43e898d2c77 100644
>>> --- a/mm/compaction.c
>>> +++ b/mm/compaction.c
>>> @@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc)
>>>  	return too_many;
>>>  }
>>>
>>> +/*
>>> + * 1. if the page order is larger than or equal to target_order (i.e.,
>>> + * cc->order and when it is not -1 for global compaction), skip it since
>>> + * target_order already indicates no free page with larger than target_order
>>> + * exists and later migrating it will most likely fail;
>>> + *
>>> + * 2. compacting > pageblock_order pages does not improve memory fragmentation,
>>> + * skip them;
>>> + */
>>> +static bool skip_isolation_on_order(int order, int target_order)
>>> +{
>>> +	return (target_order != -1 && order >= target_order) ||
>>> +		order >= pageblock_order;
>>> +}
>>> +
>>>  /**
>>>   * isolate_migratepages_block() - isolate all migrate-able pages within
>>>   *				  a single pageblock
>>> @@ -1010,7 +1025,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>>>  		/*
>>>  		 * Regardless of being on LRU, compound pages such as THP and
>>>  		 * hugetlbfs are not to be compacted unless we are attempting
>>> -		 * an allocation much larger than the huge page size (eg CMA).
>>> +		 * an allocation larger than the compound page size.
>>>  		 * We can potentially save a lot of iterations if we skip them
>>>  		 * at once. The check is racy, but we can consider only valid
>>>  		 * values and the only danger is skipping too much.
>>> @@ -1018,11 +1033,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>>>  		if (PageCompound(page) && !cc->alloc_contig) {
>>>  			const unsigned int order = compound_order(page);
>>>
>>> -			if (likely(order <= MAX_PAGE_ORDER)) {
>>> -				low_pfn += (1UL << order) - 1;
>>> -				nr_scanned += (1UL << order) - 1;
>>> +			/*
>>> +			 * Skip based on page order and compaction target order
>>> +			 * and skip hugetlbfs pages.
>>> +			 */
>>> +			if (skip_isolation_on_order(order, cc->order) ||
>>> +			    PageHuge(page)) {
>>
>> Hm I'd try to avoid a new PageHuge() test here.
>>
>> Earlier we have a block that does
>>                 if (PageHuge(page) && cc->alloc_contig) {
>> 			...
>>
>> think I'd rather rewrite it to handle the PageHuge() case completely and
>> just make it skip the 1UL << order pages there for !cc->alloc_config. Even
>> if it means duplicating a bit of the low_pfn and nr_scanned bumping code.
>>
>> Which reminds me the PageHuge() check there is probably still broken ATM:
>>
>> https://lore.kernel.org/all/8fa1c95c-4749-33dd-42ba-243e492ab109@suse.cz/
>>
>> Even better reason not to add another one.
>> If the huge page materialized since the first check, we should bail out when
>> testing PageLRU later anyway.
> 
> 
> OK, so basically something like:
> 
> if (PageHuge(page)) {
>     if (cc->alloc_contig) {

Yeah, but I'd handle the !cc->alloc_contig case first, as that ends with a
goto, and then the rest doesn't need to be "} else { ... }" with extra
indentation.

>         // existing code for PageHuge(page) && cc->allc_contig
>     } else {
>         const unsigned int order = compound_order(page);
> 
>         if (order <= MAX_PAGE_ORDER) {
>             low_pfn += (1UL << order) - 1;
>             nr_scanned += (1UL << order) - 1;
>         }
>         goto isolate_fail;
>     }
> }
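
Roughly like this, i.e. bail out first so the existing alloc_contig handling
keeps its current indentation (sketch only, reusing the names from your
snippet):

if (PageHuge(page)) {
        if (!cc->alloc_contig) {
                const unsigned int order = compound_order(page);

                if (order <= MAX_PAGE_ORDER) {
                        low_pfn += (1UL << order) - 1;
                        nr_scanned += (1UL << order) - 1;
                }
                goto isolate_fail;
        }

        /* existing code for PageHuge(page) && cc->alloc_contig, unchanged */
}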
Zi Yan Feb. 9, 2024, 8:44 p.m. UTC | #4
On 9 Feb 2024, at 15:43, Vlastimil Babka wrote:

> On 2/9/24 20:25, Zi Yan wrote:
>> On 9 Feb 2024, at 9:32, Vlastimil Babka wrote:
>>
>>> On 2/2/24 17:15, Zi Yan wrote:
>>>> From: Zi Yan <ziy@nvidia.com>
>>>>
>>>> migrate_pages() supports >0 order folio migration and during compaction,
>>>> even if compaction_alloc() cannot provide >0 order free pages,
>>>> migrate_pages() can split the source page and try to migrate the base pages
>>>> from the split. It can be a baseline and start point for adding support for
>>>> compacting >0 order folios.
>>>>
>>>> Suggested-by: Huang Ying <ying.huang@intel.com>
>>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>>> ---
>>>>  mm/compaction.c | 43 +++++++++++++++++++++++++++++++++++--------
>>>>  1 file changed, 35 insertions(+), 8 deletions(-)
>>>>
>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>> index 4add68d40e8d..e43e898d2c77 100644
>>>> --- a/mm/compaction.c
>>>> +++ b/mm/compaction.c
>>>> @@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc)
>>>>  	return too_many;
>>>>  }
>>>>
>>>> +/*
>>>> + * 1. if the page order is larger than or equal to target_order (i.e.,
>>>> + * cc->order and when it is not -1 for global compaction), skip it since
>>>> + * target_order already indicates no free page with larger than target_order
>>>> + * exists and later migrating it will most likely fail;
>>>> + *
>>>> + * 2. compacting > pageblock_order pages does not improve memory fragmentation,
>>>> + * skip them;
>>>> + */
>>>> +static bool skip_isolation_on_order(int order, int target_order)
>>>> +{
>>>> +	return (target_order != -1 && order >= target_order) ||
>>>> +		order >= pageblock_order;
>>>> +}
>>>> +
>>>>  /**
>>>>   * isolate_migratepages_block() - isolate all migrate-able pages within
>>>>   *				  a single pageblock
>>>> @@ -1010,7 +1025,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>>>>  		/*
>>>>  		 * Regardless of being on LRU, compound pages such as THP and
>>>>  		 * hugetlbfs are not to be compacted unless we are attempting
>>>> -		 * an allocation much larger than the huge page size (eg CMA).
>>>> +		 * an allocation larger than the compound page size.
>>>>  		 * We can potentially save a lot of iterations if we skip them
>>>>  		 * at once. The check is racy, but we can consider only valid
>>>>  		 * values and the only danger is skipping too much.
>>>> @@ -1018,11 +1033,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>>>>  		if (PageCompound(page) && !cc->alloc_contig) {
>>>>  			const unsigned int order = compound_order(page);
>>>>
>>>> -			if (likely(order <= MAX_PAGE_ORDER)) {
>>>> -				low_pfn += (1UL << order) - 1;
>>>> -				nr_scanned += (1UL << order) - 1;
>>>> +			/*
>>>> +			 * Skip based on page order and compaction target order
>>>> +			 * and skip hugetlbfs pages.
>>>> +			 */
>>>> +			if (skip_isolation_on_order(order, cc->order) ||
>>>> +			    PageHuge(page)) {
>>>
>>> Hm I'd try to avoid a new PageHuge() test here.
>>>
>>> Earlier we have a block that does
>>>                 if (PageHuge(page) && cc->alloc_contig) {
>>> 			...
>>>
>>> think I'd rather rewrite it to handle the PageHuge() case completely and
>>> just make it skip the 1UL << order pages there for !cc->alloc_config. Even
>>> if it means duplicating a bit of the low_pfn and nr_scanned bumping code.
>>>
>>> Which reminds me the PageHuge() check there is probably still broken ATM:
>>>
>>> https://lore.kernel.org/all/8fa1c95c-4749-33dd-42ba-243e492ab109@suse.cz/
>>>
>>> Even better reason not to add another one.
>>> If the huge page materialized since the first check, we should bail out when
>>> testing PageLRU later anyway.
>>
>>
>> OK, so basically something like:
>>
>> if (PageHuge(page)) {
>>     if (cc->alloc_contig) {
>
> Yeah but I'd handle the !cc->alloc_contig first as that ends with a goto,
> and then the rest doesn't need to be "} else { ... }" with extra identation

OK. No problem.


--
Best Regards,
Yan, Zi

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index 4add68d40e8d..e43e898d2c77 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -816,6 +816,21 @@  static bool too_many_isolated(struct compact_control *cc)
 	return too_many;
 }
 
+/*
+ * 1. if the page order is larger than or equal to target_order (i.e.,
+ * cc->order and when it is not -1 for global compaction), skip it since
+ * target_order already indicates no free page with larger than target_order
+ * exists and later migrating it will most likely fail;
+ *
+ * 2. compacting > pageblock_order pages does not improve memory fragmentation,
+ * skip them;
+ */
+static bool skip_isolation_on_order(int order, int target_order)
+{
+	return (target_order != -1 && order >= target_order) ||
+		order >= pageblock_order;
+}
+
 /**
  * isolate_migratepages_block() - isolate all migrate-able pages within
  *				  a single pageblock
@@ -1010,7 +1025,7 @@  isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		/*
 		 * Regardless of being on LRU, compound pages such as THP and
 		 * hugetlbfs are not to be compacted unless we are attempting
-		 * an allocation much larger than the huge page size (eg CMA).
+		 * an allocation larger than the compound page size.
 		 * We can potentially save a lot of iterations if we skip them
 		 * at once. The check is racy, but we can consider only valid
 		 * values and the only danger is skipping too much.
@@ -1018,11 +1033,18 @@  isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (PageCompound(page) && !cc->alloc_contig) {
 			const unsigned int order = compound_order(page);
 
-			if (likely(order <= MAX_PAGE_ORDER)) {
-				low_pfn += (1UL << order) - 1;
-				nr_scanned += (1UL << order) - 1;
+			/*
+			 * Skip based on page order and compaction target order
+			 * and skip hugetlbfs pages.
+			 */
+			if (skip_isolation_on_order(order, cc->order) ||
+			    PageHuge(page)) {
+				if (order <= MAX_PAGE_ORDER) {
+					low_pfn += (1UL << order) - 1;
+					nr_scanned += (1UL << order) - 1;
+				}
+				goto isolate_fail;
 			}
-			goto isolate_fail;
 		}
 
 		/*
@@ -1165,10 +1187,11 @@  isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			}
 
 			/*
-			 * folio become large since the non-locked check,
-			 * and it's on LRU.
+			 * Check LRU folio order under the lock
 			 */
-			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
+			if (unlikely(skip_isolation_on_order(folio_order(folio),
+							     cc->order) &&
+				     !cc->alloc_contig)) {
 				low_pfn += folio_nr_pages(folio) - 1;
 				nr_scanned += folio_nr_pages(folio) - 1;
 				folio_set_lru(folio);
@@ -1786,6 +1809,10 @@  static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	struct compact_control *cc = (struct compact_control *)data;
 	struct folio *dst;
 
+	/* this makes migrate_pages() split the source page and retry */
+	if (folio_test_large(src) > 0)
+		return NULL;
+
 	if (list_empty(&cc->freepages)) {
 		isolate_freepages(cc);