| Message ID | 20241001075858.48936-9-linyunsheng@huawei.com (mailing list archive) |
| --- | --- |
| State | Changes Requested |
| Delegated to | Netdev Maintainers |
| Series | Replace page_frag with page_frag_cache for sk_page_frag() |
On Tue, Oct 1, 2024 at 12:59 AM Yunsheng Lin <yunshenglin0825@gmail.com> wrote:
>
> There is about a 24-byte binary size increase for
> __page_frag_cache_refill() after the refactoring on an arm64
> system with 64K PAGE_SIZE. Disassembling with gdb suggests the
> binary can shrink by more than 100 bytes if __alloc_pages() is
> used to replace alloc_pages_node(), as the latter does some
> unnecessary checking for nid being NUMA_NO_NODE; calling the
> mm-internal __alloc_pages() directly is reasonable now that
> page_frag is part of the mm system.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
>  mm/page_frag_cache.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> index 6f6e47bbdc8d..a5448b44068a 100644
> --- a/mm/page_frag_cache.c
> +++ b/mm/page_frag_cache.c
> @@ -61,11 +61,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
>  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>  	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
>  		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
> -	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
> -				PAGE_FRAG_CACHE_MAX_ORDER);
> +	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
> +			     numa_mem_id(), NULL);
>  #endif
>  	if (unlikely(!page)) {
> -		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
> +		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
>  		order = 0;
>  	}

Still not a huge fan of leaving the bigger issue unfixed here, but I guess there are only one or two other spots that encounter it, so I would classify it as "mostly harmless" in terms of not fixing it.

Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 6f6e47bbdc8d..a5448b44068a 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -61,11 +61,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
+	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+			     numa_mem_id(), NULL);
 #endif
 	if (unlikely(!page)) {
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
 		order = 0;
 	}
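For readers outside mm, the checking being bypassed lives in the alloc_pages_node() wrapper. Below is a condensed sketch of that path, paraphrased from include/linux/gfp.h of roughly this kernel generation; the alloc-tagging _noprof aliases and the intermediate __alloc_pages_node() hop are folded into one function here, so treat it as illustrative rather than verbatim:

```c
/*
 * Condensed sketch of the alloc_pages_node() path, paraphrased from
 * include/linux/gfp.h; intermediate helpers are folded together, so
 * this is illustrative rather than a verbatim copy of the source.
 */
static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
					    unsigned int order)
{
	/* The fallback the patch performs by hand via numa_mem_id() */
	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();

	/* Debug sanity checks that a direct __alloc_pages() call skips */
	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
	warn_if_node_offline(nid, gfp_mask);

	return __alloc_pages(gfp_mask, order, nid, NULL);
}
```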
There is about a 24-byte binary size increase for
__page_frag_cache_refill() after the refactoring on an arm64
system with 64K PAGE_SIZE. Disassembling with gdb suggests the
binary can shrink by more than 100 bytes if __alloc_pages() is
used to replace alloc_pages_node(), as the latter does some
unnecessary checking for nid being NUMA_NO_NODE; calling the
mm-internal __alloc_pages() directly is reasonable now that
page_frag is part of the mm system.

CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 mm/page_frag_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
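To make the substitution concrete, here is a hypothetical before/after pair for a single-page allocation; the function names refill_before()/refill_after() are invented for illustration and are not part of the patch, and the comments restate the commit message's claim rather than an independent measurement:

```c
#include <linux/gfp.h>

/* Hypothetical illustration of the substitution, not patch code. */
static struct page *refill_before(gfp_t gfp)
{
	/* Wrapper resolves NUMA_NO_NODE to the local memory node */
	return alloc_pages_node(NUMA_NO_NODE, gfp, 0);
}

static struct page *refill_after(gfp_t gfp)
{
	/*
	 * Caller resolves the local memory node up front; per the
	 * commit message, bypassing the wrapper's NUMA_NO_NODE
	 * checking shrinks the generated code. On !CONFIG_NUMA
	 * builds numa_mem_id() degenerates to node 0, so behavior
	 * is unchanged either way.
	 */
	return __alloc_pages(gfp, 0, numa_mem_id(), NULL);
}
```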