
[net-next,v19,08/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()

Message ID 20241001075858.48936-9-linyunsheng@huawei.com (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Headers show
Series Replace page_frag with page_frag_cache for sk_page_frag() | expand

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for net-next, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 9 this patch: 9
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 2 of 2 maintainers
netdev/build_clang success Errors and warnings before: 9 this patch: 9
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn fail Errors and warnings before: 13 this patch: 12
netdev/checkpatch warning WARNING: From:/Signed-off-by: email address mismatch: 'From: Yunsheng Lin <yunshenglin0825@gmail.com>' != 'Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>'
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Yunsheng Lin Oct. 1, 2024, 7:58 a.m. UTC
There is about a 24-byte binary size increase for
__page_frag_cache_refill() after the refactoring on an arm64 system
with 64K PAGE_SIZE. Based on gdb disassembly, using __alloc_pages()
to replace alloc_pages_node() reduces the binary size by more than
100 bytes, as alloc_pages_node() does some unnecessary checking for
nid being NUMA_NO_NODE that can be avoided now that page_frag is
part of the mm system.

CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 mm/page_frag_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
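
For context, here is a simplified sketch of what the replaced
alloc_pages_node(NUMA_NO_NODE, ...) call expands to, paraphrased from
include/linux/gfp.h (the exact helpers and debug checks vary by kernel
version, so this is not verbatim), showing the NUMA_NO_NODE handling
and nid sanity checks that the direct __alloc_pages() call with
numa_mem_id() sidesteps:

	/* Simplified paraphrase of include/linux/gfp.h; not verbatim. */
	static inline struct page *__alloc_pages_node(int nid, gfp_t gfp_mask,
						      unsigned int order)
	{
		/* nid is expected to be a valid node id at this point. */
		VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);

		return __alloc_pages(gfp_mask, order, nid, NULL);
	}

	static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
						    unsigned int order)
	{
		/* The branch the patch avoids: map NUMA_NO_NODE to the local node. */
		if (nid == NUMA_NO_NODE)
			nid = numa_mem_id();

		return __alloc_pages_node(nid, gfp_mask, order);
	}

Calling __alloc_pages(gfp_mask, order, numa_mem_id(), NULL) directly
lets the compiler drop the NUMA_NO_NODE branch (and, depending on
config, the debug checks on nid), which is where the reported binary
size saving comes from.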

Comments

Alexander Duyck Oct. 3, 2024, 11:52 p.m. UTC | #1
On Tue, Oct 1, 2024 at 12:59 AM Yunsheng Lin <yunshenglin0825@gmail.com> wrote:
>
> There is about a 24-byte binary size increase for
> __page_frag_cache_refill() after the refactoring on an arm64 system
> with 64K PAGE_SIZE. Based on gdb disassembly, using __alloc_pages()
> to replace alloc_pages_node() reduces the binary size by more than
> 100 bytes, as alloc_pages_node() does some unnecessary checking for
> nid being NUMA_NO_NODE that can be avoided now that page_frag is
> part of the mm system.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
>  mm/page_frag_cache.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> index 6f6e47bbdc8d..a5448b44068a 100644
> --- a/mm/page_frag_cache.c
> +++ b/mm/page_frag_cache.c
> @@ -61,11 +61,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
>  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>         gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) |  __GFP_COMP |
>                    __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
> -       page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
> -                               PAGE_FRAG_CACHE_MAX_ORDER);
> +       page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
> +                            numa_mem_id(), NULL);
>  #endif
>         if (unlikely(!page)) {
> -               page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
> +               page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
>                 order = 0;
>         }
>

Still not a huge fan of not fixing the bigger issue here, but I guess
there are only one or two other spots that encounter this, so I would
classify it as "mostly harmless" in terms of not fixing it.

Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>

Patch

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 6f6e47bbdc8d..a5448b44068a 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -61,11 +61,11 @@  static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) |  __GFP_COMP |
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
+	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+			     numa_mem_id(), NULL);
 #endif
 	if (unlikely(!page)) {
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
 		order = 0;
 	}