Message ID | 20240719093338.55117-10-linyunsheng@huawei.com (mailing list archive)
---|---
State | Superseded
Series | Replace page_frag with page_frag_cache for sk_page_frag()
On Fri, 2024-07-19 at 17:33 +0800, Yunsheng Lin wrote:
> There are more new APIs calling __page_frag_cache_refill() in
> this patchset, which may prevent the compiler from inlining
> __page_frag_cache_refill() into __page_frag_alloc_va_align().
>
> Not being able to do the inlining seems to cause some noticeable
> performance degradation on an arm64 system with 64K PAGE_SIZE after
> adding the new APIs calling __page_frag_cache_refill().
>
> It seems there is about a 24-byte binary size increase for
> __page_frag_cache_refill() and __page_frag_cache_refill() on an
> arm64 system with 64K PAGE_SIZE. From the gdb disassembly, it
> seems we can get a more than 100-byte decrease in the binary
> size by using __alloc_pages() to replace alloc_pages_node(), as
> there is some unnecessary checking for nid being NUMA_NO_NODE,
> especially when page_frag is still part of the mm system.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
>  mm/page_frag_cache.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> index d9c9cad17af7..3f162e9d23ba 100644
> --- a/mm/page_frag_cache.c
> +++ b/mm/page_frag_cache.c
> @@ -59,11 +59,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
>  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>          gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
>                     __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
> -        page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
> -                                PAGE_FRAG_CACHE_MAX_ORDER);
> +        page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
> +                             numa_mem_id(), NULL);
>  #endif
>          if (unlikely(!page)) {
> -                page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
> +                page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
>                  if (unlikely(!page)) {
>                          memset(nc, 0, sizeof(*nc));
>                          return NULL;

So if I am understanding correctly this is basically just stripping the
checks that were being performed since they aren't really needed to
verify the output of numa_mem_id.

Rather than changing the code here, it might make more sense to update
alloc_pages_node_noprof to move the lines from
__alloc_pages_node_noprof into it. Then you could put the VM_BUG_ON and
warn_if_node_offline into an else statement which would cause them to
be automatically stripped for this and all other callers. The benefit
would likely be much more significant and may be worthy of being
accepted on its own merit without being a part of this patch set as I
would imagine it would show slight gains in terms of performance and
binary size by dropping the unnecessary instructions.
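For readers following the suggestion above: the two helpers being discussed
are the inline wrappers in include/linux/gfp.h, which at the time of this
thread look roughly like the sketch below (a paraphrase for reference, not
the exact upstream source). The point is that even a compile-time
NUMA_NO_NODE caller still funnels through the VM_BUG_ON()/warn_if_node_offline()
checks in __alloc_pages_node_noprof():

/* Paraphrased sketch of the helpers as they stand before any change. */
static inline struct page *__alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
                                                     unsigned int order)
{
        /* Checks that are redundant when nid was just taken from numa_mem_id() */
        VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
        warn_if_node_offline(nid, gfp_mask);

        return __alloc_pages_noprof(gfp_mask, order, nid, NULL);
}

static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
                                                   unsigned int order)
{
        if (nid == NUMA_NO_NODE)
                nid = numa_mem_id();

        /* Even the NUMA_NO_NODE path still goes through the checks above. */
        return __alloc_pages_node_noprof(nid, gfp_mask, order);
}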
On 2024/7/22 5:41, Alexander H Duyck wrote:

...

>>         if (unlikely(!page)) {
>> -               page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
>> +               page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
>>                 if (unlikely(!page)) {
>>                         memset(nc, 0, sizeof(*nc));
>>                         return NULL;
>
> So if I am understanding correctly this is basically just stripping the
> checks that were being performed since they aren't really needed to
> verify the output of numa_mem_id.
>
> Rather than changing the code here, it might make more sense to update
> alloc_pages_node_noprof to move the lines from
> __alloc_pages_node_noprof into it. Then you could put the VM_BUG_ON and
> warn_if_node_offline into an else statement which would cause them to
> be automatically stripped for this and all other callers. The benefit

I suppose you meant something like below:

@@ -290,10 +290,14 @@ struct folio *__folio_alloc_node_noprof(gfp_t gfp, unsigned int order, int nid)
 static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
                                                    unsigned int order)
 {
-        if (nid == NUMA_NO_NODE)
+        if (nid == NUMA_NO_NODE) {
                 nid = numa_mem_id();
+        } else {
+                VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
+                warn_if_node_offline(nid, gfp_mask);
+        }

-        return __alloc_pages_node_noprof(nid, gfp_mask, order);
+        return __alloc_pages_noprof(gfp_mask, order, nid, NULL);
 }

> would likely be much more significant and may be worthy of being
> accepted on its own merit without being a part of this patch set as I
> would imagine it would show slight gains in terms of performance and
> binary size by dropping the unnecessary instructions.

Below is the result, it does reduce the binary size for
__page_frag_alloc_align() significantly as expected, but also
increase the size for other functions, which seems to be passing
a runtime nid, so the trick above doesn't work. I am not sure if
the overall reduction is significant enough to justify the change?
It seems that depends on how many future callers are passing runtime
nid to alloc_pages_node() related APIs.

[linyunsheng@localhost net-next]$ ./scripts/bloat-o-meter vmlinux.org vmlinux
add/remove: 1/2 grow/shrink: 13/8 up/down: 160/-256 (-96)
Function                                     old     new   delta
bpf_map_alloc_pages                          708     764     +56
its_probe_one                               2836    2860     +24
iommu_dma_alloc                              984    1008     +24
__iommu_dma_alloc_noncontiguous.constprop   1180    1192     +12
e843419@0f3f_00011fb1_4348                     -       8      +8
its_vpe_irq_domain_deactivate                312     316      +4
its_vpe_irq_domain_alloc                    1492    1496      +4
its_irq_domain_free                          440     444      +4
iommu_dma_map_sg                            1328    1332      +4
dpaa_eth_probe                              5524    5528      +4
dpaa2_eth_xdp_xmit                           676     680      +4
dpaa2_eth_open                               564     568      +4
dma_direct_get_required_mask                 116     120      +4
__dma_direct_alloc_pages.constprop           656     660      +4
its_vpe_set_affinity                         928     924      -4
its_send_single_command                      340     336      -4
its_alloc_table_entry                        456     452      -4
dpaa_bp_seed                                 232     228      -4
arm_64_lpae_alloc_pgtable_s1                 680     676      -4
__arm_lpae_alloc_pages                       900     896      -4
e843419@0473_00005079_16ec                     8       -      -8
e843419@0189_00001c33_1c8                      8       -      -8
ringbuf_map_alloc                            612     600     -12
__page_frag_alloc_align                      740     536    -204
Total: Before=30306836, After=30306740, chg -0.00%
On Wed, Jul 24, 2024 at 5:55 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/7/22 5:41, Alexander H Duyck wrote:
>
> ...
>
> >>         if (unlikely(!page)) {
> >> -               page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
> >> +               page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
> >>                 if (unlikely(!page)) {
> >>                         memset(nc, 0, sizeof(*nc));
> >>                         return NULL;
> >
> > So if I am understanding correctly this is basically just stripping the
> > checks that were being performed since they aren't really needed to
> > verify the output of numa_mem_id.
> >
> > Rather than changing the code here, it might make more sense to update
> > alloc_pages_node_noprof to move the lines from
> > __alloc_pages_node_noprof into it. Then you could put the VM_BUG_ON and
> > warn_if_node_offline into an else statement which would cause them to
> > be automatically stripped for this and all other callers. The benefit
>
> I suppose you meant something like below:
>
> @@ -290,10 +290,14 @@ struct folio *__folio_alloc_node_noprof(gfp_t gfp, unsigned int order, int nid)
>  static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
>                                                     unsigned int order)
>  {
> -        if (nid == NUMA_NO_NODE)
> +        if (nid == NUMA_NO_NODE) {
>                  nid = numa_mem_id();
> +        } else {
> +                VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
> +                warn_if_node_offline(nid, gfp_mask);
> +        }
>
> -        return __alloc_pages_node_noprof(nid, gfp_mask, order);
> +        return __alloc_pages_noprof(gfp_mask, order, nid, NULL);
>  }

Yes, that is more or less what I was thinking.

> > would likely be much more significant and may be worthy of being
> > accepted on its own merit without being a part of this patch set as I
> > would imagine it would show slight gains in terms of performance and
> > binary size by dropping the unnecessary instructions.
>
> Below is the result, it does reduce the binary size for
> __page_frag_alloc_align() significantly as expected, but also
> increase the size for other functions, which seems to be passing
> a runtime nid, so the trick above doesn't work. I am not sure if
> the overall reduction is significant enough to justify the change?
> It seems that depends on how many future callers are passing runtime
> nid to alloc_pages_node() related APIs.
>
> [linyunsheng@localhost net-next]$ ./scripts/bloat-o-meter vmlinux.org vmlinux
> add/remove: 1/2 grow/shrink: 13/8 up/down: 160/-256 (-96)
> Function                                     old     new   delta
> bpf_map_alloc_pages                          708     764     +56
> its_probe_one                               2836    2860     +24
> iommu_dma_alloc                              984    1008     +24
> __iommu_dma_alloc_noncontiguous.constprop   1180    1192     +12
> e843419@0f3f_00011fb1_4348                     -       8      +8
> its_vpe_irq_domain_deactivate                312     316      +4
> its_vpe_irq_domain_alloc                    1492    1496      +4
> its_irq_domain_free                          440     444      +4
> iommu_dma_map_sg                            1328    1332      +4
> dpaa_eth_probe                              5524    5528      +4
> dpaa2_eth_xdp_xmit                           676     680      +4
> dpaa2_eth_open                               564     568      +4
> dma_direct_get_required_mask                 116     120      +4
> __dma_direct_alloc_pages.constprop           656     660      +4
> its_vpe_set_affinity                         928     924      -4
> its_send_single_command                      340     336      -4
> its_alloc_table_entry                        456     452      -4
> dpaa_bp_seed                                 232     228      -4
> arm_64_lpae_alloc_pgtable_s1                 680     676      -4
> __arm_lpae_alloc_pages                       900     896      -4
> e843419@0473_00005079_16ec                     8       -      -8
> e843419@0189_00001c33_1c8                      8       -      -8
> ringbuf_map_alloc                            612     600     -12
> __page_frag_alloc_align                      740     536    -204
> Total: Before=30306836, After=30306740, chg -0.00%

I'm assuming the compiler must have uninlined
__alloc_pages_node_noprof in the previous version of things for the
cases where it is causing an increase in the code size.

One alternative approach we could look at doing would be to just add
the following to the start of the function:
  if (__builtin_constant_p(nid) && nid == NUMA_NO_NODE)
      return __alloc_pages_noprof(gfp_mask, order, numa_mem_id(), NULL);

That should yield the best result as it essentially skips over the
problematic code at compile time for the constant case, otherwise the
code should be fully stripped so it shouldn't add any additional
overhead.
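For illustration, the suggested shortcut would sit at the very top of
alloc_pages_node_noprof(), roughly as in the sketch below. This is only an
untested sketch of the idea being floated (assuming the helper layout shown
earlier in the thread), not a submitted patch:

static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
                                                   unsigned int order)
{
        /*
         * For a compile-time NUMA_NO_NODE, go straight to the core allocator
         * with the local memory node; the nid sanity checks only make sense
         * for a caller-supplied node id, so this whole branch folds away at
         * compile time for those callers.
         */
        if (__builtin_constant_p(nid) && nid == NUMA_NO_NODE)
                return __alloc_pages_noprof(gfp_mask, order, numa_mem_id(), NULL);

        if (nid == NUMA_NO_NODE)
                nid = numa_mem_id();

        return __alloc_pages_node_noprof(nid, gfp_mask, order);
}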
On 2024/7/24 23:03, Alexander Duyck wrote:
> On Wed, Jul 24, 2024 at 5:55 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>>
>> On 2024/7/22 5:41, Alexander H Duyck wrote:
>>
>> ...
>>
>>>>         if (unlikely(!page)) {
>>>> -               page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
>>>> +               page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
>>>>                 if (unlikely(!page)) {
>>>>                         memset(nc, 0, sizeof(*nc));
>>>>                         return NULL;
>>>
>>> So if I am understanding correctly this is basically just stripping the
>>> checks that were being performed since they aren't really needed to
>>> verify the output of numa_mem_id.
>>>
>>> Rather than changing the code here, it might make more sense to update
>>> alloc_pages_node_noprof to move the lines from
>>> __alloc_pages_node_noprof into it. Then you could put the VM_BUG_ON and
>>> warn_if_node_offline into an else statement which would cause them to
>>> be automatically stripped for this and all other callers. The benefit
>>
>> I suppose you meant something like below:
>>
>> @@ -290,10 +290,14 @@ struct folio *__folio_alloc_node_noprof(gfp_t gfp, unsigned int order, int nid)
>>  static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
>>                                                     unsigned int order)
>>  {
>> -        if (nid == NUMA_NO_NODE)
>> +        if (nid == NUMA_NO_NODE) {
>>                  nid = numa_mem_id();
>> +        } else {
>> +                VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
>> +                warn_if_node_offline(nid, gfp_mask);
>> +        }
>>
>> -        return __alloc_pages_node_noprof(nid, gfp_mask, order);
>> +        return __alloc_pages_noprof(gfp_mask, order, nid, NULL);
>>  }
>
> Yes, that is more or less what I was thinking.
>
>>> would likely be much more significant and may be worthy of being
>>> accepted on its own merit without being a part of this patch set as I
>>> would imagine it would show slight gains in terms of performance and
>>> binary size by dropping the unnecessary instructions.
>>
>> Below is the result, it does reduce the binary size for
>> __page_frag_alloc_align() significantly as expected, but also
>> increase the size for other functions, which seems to be passing
>> a runtime nid, so the trick above doesn't work. I am not sure if
>> the overall reduction is significant enough to justify the change?
>> It seems that depends on how many future callers are passing runtime
>> nid to alloc_pages_node() related APIs.
>>
>> [linyunsheng@localhost net-next]$ ./scripts/bloat-o-meter vmlinux.org vmlinux
>> add/remove: 1/2 grow/shrink: 13/8 up/down: 160/-256 (-96)
>> Function                                     old     new   delta
>> bpf_map_alloc_pages                          708     764     +56
>> its_probe_one                               2836    2860     +24
>> iommu_dma_alloc                              984    1008     +24
>> __iommu_dma_alloc_noncontiguous.constprop   1180    1192     +12
>> e843419@0f3f_00011fb1_4348                     -       8      +8
>> its_vpe_irq_domain_deactivate                312     316      +4
>> its_vpe_irq_domain_alloc                    1492    1496      +4
>> its_irq_domain_free                          440     444      +4
>> iommu_dma_map_sg                            1328    1332      +4
>> dpaa_eth_probe                              5524    5528      +4
>> dpaa2_eth_xdp_xmit                           676     680      +4
>> dpaa2_eth_open                               564     568      +4
>> dma_direct_get_required_mask                 116     120      +4
>> __dma_direct_alloc_pages.constprop           656     660      +4
>> its_vpe_set_affinity                         928     924      -4
>> its_send_single_command                      340     336      -4
>> its_alloc_table_entry                        456     452      -4
>> dpaa_bp_seed                                 232     228      -4
>> arm_64_lpae_alloc_pgtable_s1                 680     676      -4
>> __arm_lpae_alloc_pages                       900     896      -4
>> e843419@0473_00005079_16ec                     8       -      -8
>> e843419@0189_00001c33_1c8                      8       -      -8
>> ringbuf_map_alloc                            612     600     -12
>> __page_frag_alloc_align                      740     536    -204
>> Total: Before=30306836, After=30306740, chg -0.00%
>
> I'm assuming the compiler must have uninlined
> __alloc_pages_node_noprof in the previous version of things for the
> cases where it is causing an increase in the code size.
>
> One alternative approach we could look at doing would be to just add
> the following to the start of the function:
>   if (__builtin_constant_p(nid) && nid == NUMA_NO_NODE)
>       return __alloc_pages_noprof(gfp_mask, order, numa_mem_id(), NULL);
>
> That should yield the best result as it essentially skips over the
> problematic code at compile time for the constant case, otherwise the
> code should be fully stripped so it shouldn't add any additional
> overhead.

Just tried it, it seems it is more complicated than expected too.
For example, the above changing seems to cause alloc_slab_page() to be
inlined to new_slab() and other inlining/uninlining that is hard to
understand.

[linyunsheng@localhost net-next]$ ./scripts/bloat-o-meter vmlinux.org vmlinux
add/remove: 1/2 grow/shrink: 16/11 up/down: 432/-536 (-104)
Function                                     old     new   delta
new_slab                                     808    1124    +316
its_probe_one                               2836    2876     +40
dpaa2_eth_set_dist_key                      1096    1112     +16
e843419@0f3f_00011fb1_4348                     -       8      +8
rx_default_dqrr                             2776    2780      +4
pcpu_unmap_pages                             356     360      +4
its_vpe_irq_domain_alloc                    1492    1496      +4
iommu_dma_init_fq                            520     524      +4
iommu_dma_alloc                              984     988      +4
hns3_nic_net_timeout                         704     708      +4
hns3_init_all_ring                          1168    1172      +4
hns3_clear_all_ring                          372     376      +4
enetc_refill_rx_ring                         448     452      +4
enetc_free_rxtx_rings                        276     280      +4
dpaa2_eth_xdp_xmit                           676     680      +4
dpaa2_eth_rx                                1716    1720      +4
___slab_alloc                               2120    2124      +4
pcpu_free_pages.constprop                    236     232      -4
its_alloc_table_entry                        456     452      -4
hns3_reset_notify_init_enet                  628     624      -4
dpaa_cleanup_tx_fd                           556     552      -4
dpaa_bp_seed                                 232     228      -4
blk_update_request                           944     940      -4
blk_execute_rq                               540     536      -4
arm_64_lpae_alloc_pgtable_s1                 680     676      -4
__kmalloc_large_node                         340     336      -4
__arm_lpae_unmap                            1588    1584      -4
e843419@0473_00005079_16ec                     8       -      -8
__page_frag_alloc_align                      740     536    -204
alloc_slab_page                              284       -    -284
Total: Before=30306836, After=30306732, chg -0.00%
On Thu, 2024-07-25 at 20:19 +0800, Yunsheng Lin wrote:
> On 2024/7/24 23:03, Alexander Duyck wrote:
> > On Wed, Jul 24, 2024 at 5:55 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
> > >
> > > On 2024/7/22 5:41, Alexander H Duyck wrote:
> > >
> > > ...
> > >
> > > > >         if (unlikely(!page)) {
> > > > > -               page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
> > > > > +               page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
> > > > >                 if (unlikely(!page)) {
> > > > >                         memset(nc, 0, sizeof(*nc));
> > > > >                         return NULL;
> > > >
> > > > So if I am understanding correctly this is basically just stripping the
> > > > checks that were being performed since they aren't really needed to
> > > > verify the output of numa_mem_id.
> > > >
> > > > Rather than changing the code here, it might make more sense to update
> > > > alloc_pages_node_noprof to move the lines from
> > > > __alloc_pages_node_noprof into it. Then you could put the VM_BUG_ON and
> > > > warn_if_node_offline into an else statement which would cause them to
> > > > be automatically stripped for this and all other callers. The benefit
> > >
> > > I suppose you meant something like below:
> > >
> > > @@ -290,10 +290,14 @@ struct folio *__folio_alloc_node_noprof(gfp_t gfp, unsigned int order, int nid)
> > >  static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
> > >                                                     unsigned int order)
> > >  {
> > > -        if (nid == NUMA_NO_NODE)
> > > +        if (nid == NUMA_NO_NODE) {
> > >                  nid = numa_mem_id();
> > > +        } else {
> > > +                VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
> > > +                warn_if_node_offline(nid, gfp_mask);
> > > +        }
> > >
> > > -        return __alloc_pages_node_noprof(nid, gfp_mask, order);
> > > +        return __alloc_pages_noprof(gfp_mask, order, nid, NULL);
> > >  }
> >
> > Yes, that is more or less what I was thinking.
> >
> > > > would likely be much more significant and may be worthy of being
> > > > accepted on its own merit without being a part of this patch set as I
> > > > would imagine it would show slight gains in terms of performance and
> > > > binary size by dropping the unnecessary instructions.
> > >
> > > Below is the result, it does reduce the binary size for
> > > __page_frag_alloc_align() significantly as expected, but also
> > > increase the size for other functions, which seems to be passing
> > > a runtime nid, so the trick above doesn't work. I am not sure if
> > > the overall reduction is significant enough to justify the change?
> > > It seems that depends on how many future callers are passing runtime
> > > nid to alloc_pages_node() related APIs.
> > >
> > > [linyunsheng@localhost net-next]$ ./scripts/bloat-o-meter vmlinux.org vmlinux
> > > add/remove: 1/2 grow/shrink: 13/8 up/down: 160/-256 (-96)
> > > Function                                     old     new   delta
> > > bpf_map_alloc_pages                          708     764     +56
> > > its_probe_one                               2836    2860     +24
> > > iommu_dma_alloc                              984    1008     +24
> > > __iommu_dma_alloc_noncontiguous.constprop   1180    1192     +12
> > > e843419@0f3f_00011fb1_4348                     -       8      +8
> > > its_vpe_irq_domain_deactivate                312     316      +4
> > > its_vpe_irq_domain_alloc                    1492    1496      +4
> > > its_irq_domain_free                          440     444      +4
> > > iommu_dma_map_sg                            1328    1332      +4
> > > dpaa_eth_probe                              5524    5528      +4
> > > dpaa2_eth_xdp_xmit                           676     680      +4
> > > dpaa2_eth_open                               564     568      +4
> > > dma_direct_get_required_mask                 116     120      +4
> > > __dma_direct_alloc_pages.constprop           656     660      +4
> > > its_vpe_set_affinity                         928     924      -4
> > > its_send_single_command                      340     336      -4
> > > its_alloc_table_entry                        456     452      -4
> > > dpaa_bp_seed                                 232     228      -4
> > > arm_64_lpae_alloc_pgtable_s1                 680     676      -4
> > > __arm_lpae_alloc_pages                       900     896      -4
> > > e843419@0473_00005079_16ec                     8       -      -8
> > > e843419@0189_00001c33_1c8                      8       -      -8
> > > ringbuf_map_alloc                            612     600     -12
> > > __page_frag_alloc_align                      740     536    -204
> > > Total: Before=30306836, After=30306740, chg -0.00%
> >
> > I'm assuming the compiler must have uninlined
> > __alloc_pages_node_noprof in the previous version of things for the
> > cases where it is causing an increase in the code size.
> >
> > One alternative approach we could look at doing would be to just add
> > the following to the start of the function:
> >   if (__builtin_constant_p(nid) && nid == NUMA_NO_NODE)
> >       return __alloc_pages_noprof(gfp_mask, order, numa_mem_id(), NULL);
> >
> > That should yield the best result as it essentially skips over the
> > problematic code at compile time for the constant case, otherwise the
> > code should be fully stripped so it shouldn't add any additional
> > overhead.
>
> Just tried it, it seems it is more complicated than expected too.
> For example, the above changing seems to cause alloc_slab_page() to be
> inlined to new_slab() and other inlining/uninlining that is hard to
> understand.
>
> [linyunsheng@localhost net-next]$ ./scripts/bloat-o-meter vmlinux.org vmlinux
> add/remove: 1/2 grow/shrink: 16/11 up/down: 432/-536 (-104)
> Function                                     old     new   delta
> new_slab                                     808    1124    +316
> its_probe_one                               2836    2876     +40
> dpaa2_eth_set_dist_key                      1096    1112     +16
> e843419@0f3f_00011fb1_4348                     -       8      +8
> rx_default_dqrr                             2776    2780      +4
> pcpu_unmap_pages                             356     360      +4
> its_vpe_irq_domain_alloc                    1492    1496      +4
> iommu_dma_init_fq                            520     524      +4
> iommu_dma_alloc                              984     988      +4
> hns3_nic_net_timeout                         704     708      +4
> hns3_init_all_ring                          1168    1172      +4
> hns3_clear_all_ring                          372     376      +4
> enetc_refill_rx_ring                         448     452      +4
> enetc_free_rxtx_rings                        276     280      +4
> dpaa2_eth_xdp_xmit                           676     680      +4
> dpaa2_eth_rx                                1716    1720      +4
> ___slab_alloc                               2120    2124      +4
> pcpu_free_pages.constprop                    236     232      -4
> its_alloc_table_entry                        456     452      -4
> hns3_reset_notify_init_enet                  628     624      -4
> dpaa_cleanup_tx_fd                           556     552      -4
> dpaa_bp_seed                                 232     228      -4
> blk_update_request                           944     940      -4
> blk_execute_rq                               540     536      -4
> arm_64_lpae_alloc_pgtable_s1                 680     676      -4
> __kmalloc_large_node                         340     336      -4
> __arm_lpae_unmap                            1588    1584      -4
> e843419@0473_00005079_16ec                     8       -      -8
> __page_frag_alloc_align                      740     536    -204
> alloc_slab_page                              284       -    -284
> Total: Before=30306836, After=30306732, chg -0.00%

One interesting similarity between the alloc_slab function and
__page_frag_alloc_align is that they both seem to be doing the higher
order followed by lower order allocation.

I wonder if we couldn't somehow consolidate the checks and make it so
that we have a function that will provide a page size within a range.
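To make that idea concrete, here is one possible shape for such a helper.
This is a purely hypothetical sketch written for this discussion: the name
alloc_pages_order_range(), its signature, and the idea of returning the
order actually obtained via *order_out are all invented here and are not an
existing kernel API; it merely consolidates the "try a higher order
opportunistically, then fall back to the minimum order" pattern that
__page_frag_cache_refill() and alloc_slab_page() both open-code today.

/*
 * Hypothetical sketch only: allocate a page of the largest order available
 * in [min_order, max_order], reporting the order actually obtained via
 * @order_out.  Higher-order attempts are opportunistic, so direct reclaim,
 * retries and warnings are disabled for them; only the final @min_order
 * attempt uses the caller's original gfp flags.
 */
static struct page *alloc_pages_order_range(gfp_t gfp, int nid,
                                            unsigned int min_order,
                                            unsigned int max_order,
                                            unsigned int *order_out)
{
        struct page *page;
        unsigned int order;

        for (order = max_order; order > min_order; order--) {
                gfp_t hi_gfp = (gfp & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
                               __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;

                page = __alloc_pages(hi_gfp, order, nid, NULL);
                if (page) {
                        *order_out = order;
                        return page;
                }
        }

        /* Last try at the minimum order with the caller's original gfp. */
        page = __alloc_pages(gfp, min_order, nid, NULL);
        if (page)
                *order_out = min_order;

        return page;
}

__page_frag_cache_refill() could then call something like
alloc_pages_order_range(gfp, numa_mem_id(), 0, PAGE_FRAG_CACHE_MAX_ORDER,
&order) and size the cache from the returned order, and the slab side could
do the same with its own order limits, so the gfp-twiddling and fallback
logic would live in one place.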
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index d9c9cad17af7..3f162e9d23ba 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -59,11 +59,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
         gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
                    __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-        page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-                                PAGE_FRAG_CACHE_MAX_ORDER);
+        page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+                             numa_mem_id(), NULL);
 #endif
         if (unlikely(!page)) {
-                page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+                page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
                 if (unlikely(!page)) {
                         memset(nc, 0, sizeof(*nc));
                         return NULL;
There are more new APIs calling __page_frag_cache_refill() in
this patchset, which may prevent the compiler from inlining
__page_frag_cache_refill() into __page_frag_alloc_va_align().

Not being able to do the inlining seems to cause some noticeable
performance degradation on an arm64 system with 64K PAGE_SIZE after
adding the new APIs calling __page_frag_cache_refill().

It seems there is about a 24-byte binary size increase for
__page_frag_cache_refill() and __page_frag_cache_refill() on an
arm64 system with 64K PAGE_SIZE. From the gdb disassembly, it
seems we can get a more than 100-byte decrease in the binary
size by using __alloc_pages() to replace alloc_pages_node(), as
there is some unnecessary checking for nid being NUMA_NO_NODE,
especially when page_frag is still part of the mm system.

CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 mm/page_frag_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)