
[RFC,v2,16/17] mm: mmap: Align unhinted maps to highest anon folio order

Message ID 20230414130303.2345383-17-ryan.roberts@arm.com
State New, archived
Series variable-order, large folios for anonymous memory

Commit Message

Ryan Roberts April 14, 2023, 1:03 p.m. UTC
When allocating large anonymous folios, we want to maximize our chances
of being able to use the highest order we support. Since one of the
constraints is that a folio has to be mapped naturally aligned, let's
have mmap default to that alignment when user space does not provide a
hint.
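
A rough worked example (illustrative values only; ANON_FOLIO_ORDER_MAX
is defined earlier in this series and is architecture-dependent):

	PAGE_SHIFT           = 12                /* 4K pages */
	ANON_FOLIO_ORDER_MAX = 4                 /* 16-page (64K) folios */
	info.align_mask      = BIT(12 + 4) - 1   /* = 0xffff */

With info.align_offset == 0, vm_unmapped_area() then returns an address
whose low 16 bits are clear, i.e. one that is 64K-aligned: the natural
alignment of the highest-order anon folio.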

With this in place, an extra 2% of all allocated anonymous memory
belongs to a folio of the highest order, when compiling the kernel.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/mmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--
2.25.1

Comments

Yin Fengwei April 17, 2023, 8:25 a.m. UTC | #1
On 4/14/2023 9:03 PM, Ryan Roberts wrote:
> When allocating large anonymous folios, we want to maximize our chances
> of being able to use the highest order we support. Since one of the
> constraints is that a folio has to be mapped naturally aligned, let's
> have mmap default to that alignment when user space does not provide a
> hint.
> 
> With this in place, an extra 2% of all allocated anonymous memory
> belongs to a folio of the highest order, when compiling the kernel.
This change has a side effect: it reduces the chance of VMA merging.
That is also a benefit for the per-VMA lock, but finding a VMA then
needs to search through more VMAs.
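
A minimal userspace sketch of the merging effect (illustrative only;
actual placement depends on allocator state and ASLR):

	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* Two unhinted 4K anonymous mappings. Without this patch
		 * the kernel typically places them back to back and
		 * vma_merge() folds them into a single VMA. With a 64K
		 * align_mask each mapping starts on its own 64K boundary,
		 * leaving a gap that defeats the merge. */
		void *a = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		void *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		printf("a=%p b=%p\n", a, b);
		return 0;
	}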


Regards
Yin, Fengwei

> 
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  mm/mmap.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index ff68a67a2a7c..e7652001a32e 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1627,7 +1627,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
>  	info.length = len;
>  	info.low_limit = mm->mmap_base;
>  	info.high_limit = mmap_end;
> -	info.align_mask = 0;
> +	info.align_mask = BIT(PAGE_SHIFT + ANON_FOLIO_ORDER_MAX) - 1;
>  	info.align_offset = 0;
>  	return vm_unmapped_area(&info);
>  }
> @@ -1677,7 +1677,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
>  	info.length = len;
>  	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
>  	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
> -	info.align_mask = 0;
> +	info.align_mask = BIT(PAGE_SHIFT + ANON_FOLIO_ORDER_MAX) - 1;
>  	info.align_offset = 0;
>  	addr = vm_unmapped_area(&info);
> 
> --
> 2.25.1
>
Ryan Roberts April 17, 2023, 10:13 a.m. UTC | #2
On 17/04/2023 09:25, Yin, Fengwei wrote:
> 
> 
> On 4/14/2023 9:03 PM, Ryan Roberts wrote:
>> When allocating large anonymous folios, we want to maximize our chances
>> of being able to use the highest order we support. Since one of the
>> constraints is that a folio has to be mapped naturally aligned, let's
>> have mmap default to that alignment when user space does not provide a
>> hint.
>>
>> With this in place, an extra 2% of all allocated anonymous memory
>> belongs to a folio of the highest order, when compiling the kernel.
> This change has a side effect: it reduces the chance of VMA merging.
> That is also a benefit for the per-VMA lock, but finding a VMA then
> needs to search through more VMAs.

Good point. This change brings only a very marginal benefit anyway, so I think I
might just drop it from the series to avoid any unexpected issues.

> 
> 
> Regards
> Yin, Fengwei
> 
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>>  mm/mmap.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index ff68a67a2a7c..e7652001a32e 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -1627,7 +1627,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
>>  	info.length = len;
>>  	info.low_limit = mm->mmap_base;
>>  	info.high_limit = mmap_end;
>> -	info.align_mask = 0;
>> +	info.align_mask = BIT(PAGE_SHIFT + ANON_FOLIO_ORDER_MAX) - 1;
>>  	info.align_offset = 0;
>>  	return vm_unmapped_area(&info);
>>  }
>> @@ -1677,7 +1677,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
>>  	info.length = len;
>>  	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
>>  	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
>> -	info.align_mask = 0;
>> +	info.align_mask = BIT(PAGE_SHIFT + ANON_FOLIO_ORDER_MAX) - 1;
>>  	info.align_offset = 0;
>>  	addr = vm_unmapped_area(&info);
>>
>> --
>> 2.25.1
>>

Patch

diff --git a/mm/mmap.c b/mm/mmap.c
index ff68a67a2a7c..e7652001a32e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1627,7 +1627,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 	info.length = len;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = mmap_end;
-	info.align_mask = 0;
+	info.align_mask = BIT(PAGE_SHIFT + ANON_FOLIO_ORDER_MAX) - 1;
 	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
@@ -1677,7 +1677,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 	info.length = len;
 	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
 	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
-	info.align_mask = 0;
+	info.align_mask = BIT(PAGE_SHIFT + ANON_FOLIO_ORDER_MAX) - 1;
 	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
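
For reference, vm_unmapped_area() honours these fields by rounding each
candidate gap to the requested alignment; roughly, simplified from the
bottom-up search in mm/mmap.c (the top-down path rounds down instead):

	/* return an address with (addr & align_mask) == align_offset */
	addr += (info->align_offset - addr) & info->align_mask;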