
[12/16] mm/page_alloc: use helper macro SZ_1{K,M}

Message ID: 20220909092451.24883-13-linmiaohe@huawei.com (mailing list archive)
State: New
Series: A few cleanup patches for mm

Commit Message

Miaohe Lin Sept. 9, 2022, 9:24 a.m. UTC
Use the helper macros SZ_1K and SZ_1M to do the size conversions. Minor
readability improvement.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/page_alloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
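
For reference, SZ_1K and SZ_1M are the generic size constants from
include/linux/sizes.h; roughly, the relevant definitions look like this
(shown only to make the conversions explicit):

	#define SZ_1K	0x00000400
	#define SZ_1M	0x00100000

So codesize / SZ_1K is the size in KiB, and SZ_1M / PAGE_SIZE is the number
of pages that fit in one megabyte.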

Comments

David Hildenbrand Sept. 9, 2022, 11:34 a.m. UTC | #1
On 09.09.22 11:24, Miaohe Lin wrote:
> Use the helper macros SZ_1K and SZ_1M to do the size conversions. Minor
> readability improvement.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>   mm/page_alloc.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 7a8a6bb08a15..e1c7f98cff96 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7053,7 +7053,7 @@ static int zone_batchsize(struct zone *zone)
>   	 * size is striking a balance between allocation latency
>   	 * and zone lock contention.
>   	 */
> -	batch = min(zone_managed_pages(zone) >> 10, (1024 * 1024) / PAGE_SIZE);
> +	batch = min(zone_managed_pages(zone) >> 10, SZ_1M / PAGE_SIZE);
>   	batch /= 4;		/* We effectively *= 4 below */
>   	if (batch < 1)
>   		batch = 1;
> @@ -8528,8 +8528,8 @@ void __init mem_init_print_info(void)
>   #endif
>   		")\n",
>   		K(nr_free_pages()), K(physpages),
> -		codesize >> 10, datasize >> 10, rosize >> 10,
> -		(init_data_size + init_code_size) >> 10, bss_size >> 10,
> +		codesize / SZ_1K, datasize / SZ_1K, rosize / SZ_1K,
> +		(init_data_size + init_code_size) / SZ_1K, bss_size / SZ_1K,
>   		K(physpages - totalram_pages() - totalcma_pages),
>   		K(totalcma_pages)
>   #ifdef	CONFIG_HIGHMEM
> @@ -9055,7 +9055,7 @@ void *__init alloc_large_system_hash(const char *tablename,
>   
>   		/* It isn't necessary when PAGE_SIZE >= 1MB */

Huh, how could we ever have that. Smells like dead code.

>   		if (PAGE_SHIFT < 20)

What about adjusting that as well? Then it exactly matches the comment:

if (PAGE_SIZE >= SZ_1M)

> -			numentries = round_up(numentries, (1<<20)/PAGE_SIZE);
> +			numentries = round_up(numentries, SZ_1M / PAGE_SIZE);
>   
>   #if __BITS_PER_LONG > 32
>   		if (!high_limit) {
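
As a rough sketch, the adjusted hunk could read as below. Note that to keep
the current behaviour the guard is the negated form of the comment (round up
only while PAGE_SIZE is still below 1MB); this is just one possible spelling
of the suggestion, not a posted follow-up:

		/* It isn't necessary when PAGE_SIZE >= 1MB */
		if (PAGE_SIZE < SZ_1M)
			numentries = round_up(numentries, SZ_1M / PAGE_SIZE);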
Matthew Wilcox Sept. 9, 2022, 7:44 p.m. UTC | #2
On Fri, Sep 09, 2022 at 01:34:52PM +0200, David Hildenbrand wrote:
> On 09.09.22 11:24, Miaohe Lin wrote:
> > @@ -9055,7 +9055,7 @@ void *__init alloc_large_system_hash(const char *tablename,
> >   		/* It isn't necessary when PAGE_SIZE >= 1MB */
> 
> Huh, how could we ever have that. Smells like dead code.
> 
> >   		if (PAGE_SHIFT < 20)
> 
> What about adjusting that as well? Then it exactly matches the comment:
> 
> if (PAGE_SIZE >= SZ_1M)
> 
> > -			numentries = round_up(numentries, (1<<20)/PAGE_SIZE);
> > +			numentries = round_up(numentries, SZ_1M / PAGE_SIZE);

The git history provides some clues here.  See a7e833182a92.
But we do have an architecture which has ...

#ifdef CONFIG_PAGE_SIZE_1MB
#define PAGE_SHIFT 20
#define HEXAGON_L1_PTE_SIZE __HVM_PDE_S_1MB
#endif

I don't think it's an especially common config.
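
For background, PAGE_SIZE is derived directly from PAGE_SHIFT (simplified
from the kernel's asm/page.h definitions):

	#define PAGE_SIZE	(1UL << PAGE_SHIFT)

so a Hexagon config with PAGE_SHIFT == 20 gives PAGE_SIZE == SZ_1M, which is
exactly the "PAGE_SIZE >= 1MB" case the comment talks about.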
Miaohe Lin Sept. 13, 2022, 7:04 a.m. UTC | #3
On 2022/9/10 3:44, Matthew Wilcox wrote:
> On Fri, Sep 09, 2022 at 01:34:52PM +0200, David Hildenbrand wrote:
>> On 09.09.22 11:24, Miaohe Lin wrote:
>>> @@ -9055,7 +9055,7 @@ void *__init alloc_large_system_hash(const char *tablename,
>>>   		/* It isn't necessary when PAGE_SIZE >= 1MB */
>>
>> Huh, how could we ever have that. Smells like dead code.
>>
>>>   		if (PAGE_SHIFT < 20)
>>
>> What about adjusting that as well? Then it exactly matches the comment:
>>
>> if (PAGE_SIZE >= SZ_1M)

Looks good. Will do it in the next version.

>>
>>> -			numentries = round_up(numentries, (1<<20)/PAGE_SIZE);
>>> +			numentries = round_up(numentries, SZ_1M / PAGE_SIZE);
> 
> The git history provides some clues here.  See a7e833182a92.
> But we do have an architecture which has ...
> 
> #ifdef CONFIG_PAGE_SIZE_1MB
> #define PAGE_SHIFT 20
> #define HEXAGON_L1_PTE_SIZE __HVM_PDE_S_1MB
> #endif
> 
> I don't think it's an especially common config.

Maybe commit a7e833182a92 fixed a theoretical bug. But IMHO, it might be better to keep the code even
if no architecture defines PAGE_SIZE >= 1MB. This code is eliminated at compile time anyway, and if
an architecture with PAGE_SIZE >= 1MB ever shows up, it still works. Any thoughts? Thanks both.

Thanks,
Miaohe Lin
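
A standalone illustration of the compile-time elimination argument (plain C
with hypothetical values, not kernel code): because PAGE_SIZE is a
preprocessor constant, the controlling expression of the guard is known at
build time and the untaken branch generates no object code.

	#include <stdio.h>

	#define PAGE_SHIFT	20			/* hypothetical 1 MiB pages */
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)
	#define SZ_1M		0x00100000UL

	/* open-coded round_up() for a power-of-two step */
	#define round_up(x, y)	(((x) + (y) - 1) & ~((y) - 1))

	int main(void)
	{
		unsigned long numentries = 12345;

		if (PAGE_SIZE < SZ_1M)	/* constant-false here: dead code */
			numentries = round_up(numentries, SZ_1M / PAGE_SIZE);

		printf("numentries = %lu\n", numentries);
		return 0;
	}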

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7a8a6bb08a15..e1c7f98cff96 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7053,7 +7053,7 @@  static int zone_batchsize(struct zone *zone)
 	 * size is striking a balance between allocation latency
 	 * and zone lock contention.
 	 */
-	batch = min(zone_managed_pages(zone) >> 10, (1024 * 1024) / PAGE_SIZE);
+	batch = min(zone_managed_pages(zone) >> 10, SZ_1M / PAGE_SIZE);
 	batch /= 4;		/* We effectively *= 4 below */
 	if (batch < 1)
 		batch = 1;
@@ -8528,8 +8528,8 @@  void __init mem_init_print_info(void)
 #endif
 		")\n",
 		K(nr_free_pages()), K(physpages),
-		codesize >> 10, datasize >> 10, rosize >> 10,
-		(init_data_size + init_code_size) >> 10, bss_size >> 10,
+		codesize / SZ_1K, datasize / SZ_1K, rosize / SZ_1K,
+		(init_data_size + init_code_size) / SZ_1K, bss_size / SZ_1K,
 		K(physpages - totalram_pages() - totalcma_pages),
 		K(totalcma_pages)
 #ifdef	CONFIG_HIGHMEM
@@ -9055,7 +9055,7 @@  void *__init alloc_large_system_hash(const char *tablename,
 
 		/* It isn't necessary when PAGE_SIZE >= 1MB */
 		if (PAGE_SHIFT < 20)
-			numentries = round_up(numentries, (1<<20)/PAGE_SIZE);
+			numentries = round_up(numentries, SZ_1M / PAGE_SIZE);
 
 #if __BITS_PER_LONG > 32
 		if (!high_limit) {