
[2/2] mm/page_alloc: remove unnecessary parameter batch of nr_pcp_free

Message ID 20230809100754.3094517-3-shikemeng@huaweicloud.com (mailing list archive)
State New
Series [1/2] mm/page_alloc: remove track of active PCP lists range in bulk free

Commit Message

Kemeng Shi Aug. 9, 2023, 10:07 a.m. UTC
We get batch from pcp and just pass it to nr_pcp_free immediately. Get
batch from pcp inside nr_pcp_free to remove unnecessary parameter batch.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/page_alloc.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

Comments

Chris Li Aug. 15, 2023, 5:46 p.m. UTC | #1
Hi Kemeng,

Since I am discussing the other patch in this series, I might just comment on this one
as well.

On Wed, Aug 09, 2023 at 06:07:54PM +0800, Kemeng Shi wrote:
> We get batch from pcp and just pass it to nr_pcp_free immediately. Get
> batch from pcp inside nr_pcp_free to remove unnecessary parameter batch.
> 
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>  mm/page_alloc.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1ddcb2707d05..bb1d14e806ad 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2376,10 +2376,10 @@ static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
>  	return true;
>  }
>  
> -static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch,
> -		       bool free_high)
> +static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
>  {
>  	int min_nr_free, max_nr_free;
> +	int batch = READ_ONCE(pcp->batch);

Because nr_pcp_free is static and has only one caller, this function gets inlined
at the call site. I verified that in the compiled code on x86_64.

So in my opinion this change is not worthwhile. It will produce the same
machine code. One minor side effect is that it will hide the commit under it
in "git blame".

Chris
Kemeng Shi Aug. 17, 2023, 2:43 a.m. UTC | #2
on 8/16/2023 1:46 AM, Chris Li wrote:
> Hi Kemeng,
> 
> Since I am discussing the other patch in this series, I might just comment on this one
> as well.
> 
> On Wed, Aug 09, 2023 at 06:07:54PM +0800, Kemeng Shi wrote:
>> We get batch from pcp and just pass it to nr_pcp_free immediately. Get
>> batch from pcp inside nr_pcp_free to remove unnecessary parameter batch.
>>
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>> ---
>>  mm/page_alloc.c | 8 +++-----
>>  1 file changed, 3 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 1ddcb2707d05..bb1d14e806ad 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -2376,10 +2376,10 @@ static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
>>  	return true;
>>  }
>>  
>> -static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch,
>> -		       bool free_high)
>> +static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
>>  {
>>  	int min_nr_free, max_nr_free;
>> +	int batch = READ_ONCE(pcp->batch);
> 
> Because nr_pcp_free is static and has only one caller, this function gets inlined
> at the call site. I verified that in the compiled code on x86_64.
> 
> So in my opinion this change is not worthwhile. It will produce the same
> machine code. One minor side effect is that it will hide the commit under it
> in "git blame".
> 
Hi Chris, thanks for the reply. Besides reducing the number of arguments to pass,
this patch also tries to make the code look a little cleaner. I think it's always
better to reduce variable scope and keep related code close together. In this case,
a reader of nr_pcp_free alone can see that batch comes from per_cpu_pages, rather
than having to search for the caller to find that out. Also, any future callers of
nr_pcp_free are spared from passing pcp->batch, and so on. Anyway, this patch
definitely gains a little at no cost, in my opinion. :) Hope it makes sense to you.
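
As a rough illustration of the scope argument (hypothetical names, not from the
patch): deriving a value inside the only function that uses it keeps the
derivation next to its use and spares every caller from plumbing it through.

/* scope_demo.c: hypothetical sketch of reducing variable scope */
struct counter {
	int batch;
};

/* Before: every caller must know to fetch and pass ctr->batch. */
static int limit_before(struct counter *ctr, int high, int batch)
{
	return high - batch;
}

/* After: the read of ctr->batch sits next to its single use, so a
 * reader of this function alone sees where batch comes from. */
static int limit_after(struct counter *ctr, int high)
{
	int batch = ctr->batch;

	return high - batch;
}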

> Chris
>

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1ddcb2707d05..bb1d14e806ad 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2376,10 +2376,10 @@ static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
 	return true;
 }
 
-static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch,
-		       bool free_high)
+static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
 {
 	int min_nr_free, max_nr_free;
+	int batch = READ_ONCE(pcp->batch);
 
 	/* Free everything if batch freeing high-order pages. */
 	if (unlikely(free_high))
@@ -2446,9 +2446,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 
 	high = nr_pcp_high(pcp, zone, free_high);
 	if (pcp->count >= high) {
-		int batch = READ_ONCE(pcp->batch);
-
-		free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch, free_high), pcp, pindex);
+		free_pcppages_bulk(zone, nr_pcp_free(pcp, high, free_high), pcp, pindex);
 	}
 }