
[1/4] mm: migrate: move migration validation into numa_migrate_prep()

Message ID a37b13dd91bd3eadcd56a08cb3c839616f8457e7.1692440586.git.baolin.wang@linux.alibaba.com (mailing list archive)
State New
Series Extend migrate_misplaced_page() to support batch migration

Commit Message

Baolin Wang Aug. 19, 2023, 10:52 a.m. UTC
Now there are three places that validate whether a page can migrate or
not, and some of these validations are performed only after
numa_migrate_prep() has been called, which wastes the CPU cycles spent
in that call when the page turns out to be unmigratable.

Thus we can move all of the migration validation into
numa_migrate_prep(), which is more maintainable and saves some CPU
cycles. Another benefit is that it can serve as a preparation for
supporting batch migration in do_numa_page() in the future.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/memory.c  | 19 +++++++++++++++++++
 mm/migrate.c | 19 -------------------
 2 files changed, 19 insertions(+), 19 deletions(-)

Comments

Huang, Ying Aug. 21, 2023, 2:20 a.m. UTC | #1
Baolin Wang <baolin.wang@linux.alibaba.com> writes:

> Now there are three places that validate whether a page can migrate or
> not, and some of these validations are performed only after
> numa_migrate_prep() has been called, which wastes the CPU cycles spent
> in that call when the page turns out to be unmigratable.
>
> Thus we can move all of the migration validation into
> numa_migrate_prep(), which is more maintainable and saves some CPU
> cycles. Another benefit is that it can serve as a preparation for
> supporting batch migration in do_numa_page() in the future.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/memory.c  | 19 +++++++++++++++++++
>  mm/migrate.c | 19 -------------------
>  2 files changed, 19 insertions(+), 19 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index d003076b218d..bee9b1e86ef0 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4747,6 +4747,25 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
>  		*flags |= TNF_FAULT_LOCAL;
>  	}
>  
> +	/*
> +	 * Don't migrate file pages that are mapped in multiple processes
> +	 * with execute permissions as they are probably shared libraries.
> +	 */
> +	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
> +	    (vma->vm_flags & VM_EXEC))
> +		return NUMA_NO_NODE;
> +
> +	/*
> +	 * Also do not migrate dirty pages as not all filesystems can move
> +	 * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
> +	 */
> +	if (page_is_file_lru(page) && PageDirty(page))
> +		return NUMA_NO_NODE;
> +
> +	/* Do not migrate THP mapped by multiple processes */
> +	if (PageTransHuge(page) && total_mapcount(page) > 1)
> +		return NUMA_NO_NODE;
> +
>  	return mpol_misplaced(page, vma, addr);

In mpol_misplaced()->should_numa_migrate_memory(), the accessing CPU and
PID are recorded for the page.  So the code change above will introduce
some behavior change: bailing out before calling mpol_misplaced() skips
that recording.

How about moving these checks into a separate function that is called
between numa_migrate_prep() and migrate_misplaced_page(), after
unlocking the PTL?

--
Best Regards,
Huang, Ying

>  }
>  
> diff --git a/mm/migrate.c b/mm/migrate.c
> index e21d5a7e7447..9cc98fb1d6ec 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2485,10 +2485,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  
>  	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>  
> -	/* Do not migrate THP mapped by multiple processes */
> -	if (PageTransHuge(page) && total_mapcount(page) > 1)
> -		return 0;
> -
>  	/* Avoid migrating to a node that is nearly full */
>  	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
>  		int z;
> @@ -2533,21 +2529,6 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>  	LIST_HEAD(migratepages);
>  	int nr_pages = thp_nr_pages(page);
>  
> -	/*
> -	 * Don't migrate file pages that are mapped in multiple processes
> -	 * with execute permissions as they are probably shared libraries.
> -	 */
> -	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
> -	    (vma->vm_flags & VM_EXEC))
> -		goto out;
> -
> -	/*
> -	 * Also do not migrate dirty pages as not all filesystems can move
> -	 * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
> -	 */
> -	if (page_is_file_lru(page) && PageDirty(page))
> -		goto out;
> -
>  	isolated = numamigrate_isolate_page(pgdat, page);
>  	if (!isolated)
>  		goto out;
Baolin Wang Aug. 21, 2023, 7:52 a.m. UTC | #2
On 8/21/2023 10:20 AM, Huang, Ying wrote:
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
> 
>> Now there are three places that validate whether a page can migrate or
>> not, and some of these validations are performed only after
>> numa_migrate_prep() has been called, which wastes the CPU cycles spent
>> in that call when the page turns out to be unmigratable.
>>
>> Thus we can move all of the migration validation into
>> numa_migrate_prep(), which is more maintainable and saves some CPU
>> cycles. Another benefit is that it can serve as a preparation for
>> supporting batch migration in do_numa_page() in the future.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>   mm/memory.c  | 19 +++++++++++++++++++
>>   mm/migrate.c | 19 -------------------
>>   2 files changed, 19 insertions(+), 19 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index d003076b218d..bee9b1e86ef0 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -4747,6 +4747,25 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
>>   		*flags |= TNF_FAULT_LOCAL;
>>   	}
>>   
>> +	/*
>> +	 * Don't migrate file pages that are mapped in multiple processes
>> +	 * with execute permissions as they are probably shared libraries.
>> +	 */
>> +	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
>> +	    (vma->vm_flags & VM_EXEC))
>> +		return NUMA_NO_NODE;
>> +
>> +	/*
>> +	 * Also do not migrate dirty pages as not all filesystems can move
>> +	 * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
>> +	 */
>> +	if (page_is_file_lru(page) && PageDirty(page))
>> +		return NUMA_NO_NODE;
>> +
>> +	/* Do not migrate THP mapped by multiple processes */
>> +	if (PageTransHuge(page) && total_mapcount(page) > 1)
>> +		return NUMA_NO_NODE;
>> +
>>   	return mpol_misplaced(page, vma, addr);
> 
> In mpol_misplaced()->should_numa_migrate_memory(), the accessing CPU and
> PID are recorded for the page.  So the code change above will introduce
> some behavior change: bailing out before calling mpol_misplaced() skips
> that recording.

Indeed.

> 
> How about moving these checks into a separate function that is called
> between numa_migrate_prep() and migrate_misplaced_page(), after
> unlocking the PTL?

Sounds reasonable to me. Thanks for your input.

> 
> --
> Best Regards,
> Huang, Ying
> 
>>   }
>>   
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index e21d5a7e7447..9cc98fb1d6ec 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -2485,10 +2485,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>   
>>   	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>>   
>> -	/* Do not migrate THP mapped by multiple processes */
>> -	if (PageTransHuge(page) && total_mapcount(page) > 1)
>> -		return 0;
>> -
>>   	/* Avoid migrating to a node that is nearly full */
>>   	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
>>   		int z;
>> @@ -2533,21 +2529,6 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>>   	LIST_HEAD(migratepages);
>>   	int nr_pages = thp_nr_pages(page);
>>   
>> -	/*
>> -	 * Don't migrate file pages that are mapped in multiple processes
>> -	 * with execute permissions as they are probably shared libraries.
>> -	 */
>> -	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
>> -	    (vma->vm_flags & VM_EXEC))
>> -		goto out;
>> -
>> -	/*
>> -	 * Also do not migrate dirty pages as not all filesystems can move
>> -	 * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
>> -	 */
>> -	if (page_is_file_lru(page) && PageDirty(page))
>> -		goto out;
>> -
>>   	isolated = numamigrate_isolate_page(pgdat, page);
>>   	if (!isolated)
>>   		goto out;
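[Editor's note] To make the refactor suggested in the review above concrete, here is a minimal userspace sketch, not kernel code: the helper name numa_page_can_migrate() and the struct fields are illustrative assumptions, where file_lru models page_is_file_lru(), dirty models PageDirty(), trans_huge models PageTransHuge(), vma_exec models (vma->vm_flags & VM_EXEC), and a single mapcount field stands in for both page_mapcount() and total_mapcount().

```c
#include <stdbool.h>

/*
 * Userspace model of the three bail-out checks discussed in this
 * thread.  The field names are stand-ins for the kernel predicates
 * named in the lead-in above, not the real kernel API.
 */
struct page_model {
	bool file_lru;
	bool dirty;
	bool trans_huge;
	bool vma_exec;
	int mapcount;
};

/*
 * Sketch of the suggested separate helper: gather the three checks so
 * they could run between numa_migrate_prep() and
 * migrate_misplaced_page(), after the PTL has been dropped.
 * Returns true if migration may proceed.
 */
static bool numa_page_can_migrate(const struct page_model *p)
{
	/*
	 * Don't migrate file pages mapped in multiple processes with
	 * execute permissions; they are probably shared libraries.
	 */
	if (p->mapcount != 1 && p->file_lru && p->vma_exec)
		return false;

	/*
	 * Don't migrate dirty file pages: not all filesystems can move
	 * dirty pages in MIGRATE_ASYNC mode.
	 */
	if (p->file_lru && p->dirty)
		return false;

	/* Do not migrate THP mapped by multiple processes. */
	if (p->trans_huge && p->mapcount > 1)
		return false;

	return true;
}
```

Keeping the checks out of numa_migrate_prep() this way preserves the last CPU/PID recording done in mpol_misplaced()->should_numa_migrate_memory(), which is the behavior-change concern raised in the review.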

Patch

diff --git a/mm/memory.c b/mm/memory.c
index d003076b218d..bee9b1e86ef0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4747,6 +4747,25 @@  int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
 		*flags |= TNF_FAULT_LOCAL;
 	}
 
+	/*
+	 * Don't migrate file pages that are mapped in multiple processes
+	 * with execute permissions as they are probably shared libraries.
+	 */
+	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
+	    (vma->vm_flags & VM_EXEC))
+		return NUMA_NO_NODE;
+
+	/*
+	 * Also do not migrate dirty pages as not all filesystems can move
+	 * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
+	 */
+	if (page_is_file_lru(page) && PageDirty(page))
+		return NUMA_NO_NODE;
+
+	/* Do not migrate THP mapped by multiple processes */
+	if (PageTransHuge(page) && total_mapcount(page) > 1)
+		return NUMA_NO_NODE;
+
 	return mpol_misplaced(page, vma, addr);
 }
 
diff --git a/mm/migrate.c b/mm/migrate.c
index e21d5a7e7447..9cc98fb1d6ec 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2485,10 +2485,6 @@  static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 
 	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
 
-	/* Do not migrate THP mapped by multiple processes */
-	if (PageTransHuge(page) && total_mapcount(page) > 1)
-		return 0;
-
 	/* Avoid migrating to a node that is nearly full */
 	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
 		int z;
@@ -2533,21 +2529,6 @@  int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	LIST_HEAD(migratepages);
 	int nr_pages = thp_nr_pages(page);
 
-	/*
-	 * Don't migrate file pages that are mapped in multiple processes
-	 * with execute permissions as they are probably shared libraries.
-	 */
-	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
-	    (vma->vm_flags & VM_EXEC))
-		goto out;
-
-	/*
-	 * Also do not migrate dirty pages as not all filesystems can move
-	 * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
-	 */
-	if (page_is_file_lru(page) && PageDirty(page))
-		goto out;
-
 	isolated = numamigrate_isolate_page(pgdat, page);
 	if (!isolated)
 		goto out;