Message ID: 20210213141021.87840-1-alobakin@pm.me (mailing list archive)
Series: skbuff: introduce skbuff_heads bulking and reusing
On Sat, Feb 13, 2021 at 6:10 AM Alexander Lobakin <alobakin@pm.me> wrote:
>
> Currently, all sorts of skb allocation always allocate
> skbuff_heads one by one via kmem_cache_alloc().
> On the other hand, we have the percpu napi_alloc_cache to store
> skbuff_heads queued up for freeing and flush them in bulk.
>
> We can use this cache not only for bulk-freeing, but also to obtain
> heads for new skbs and avoid unconditional allocations, as well as
> for bulk-allocating (like XDP's cpumap code and the veth driver
> already do).
>
> As this might affect latencies, cache pressure and lots of hardware-
> and driver-dependent stuff, this new feature is mostly optional and
> can be used via:
>  - a new napi_build_skb() function (as a replacement for build_skb());
>  - the existing {,__}napi_alloc_skb() and napi_get_frags() functions;
>  - __alloc_skb() with SKB_ALLOC_NAPI passed in flags.
>
> iperf3 showed 35-70 Mbps bumps for both TCP and UDP while performing
> VLAN NAT on a 1.2 GHz MIPS board. The boost is likely to be bigger
> on more powerful hosts and NICs with tens of Mpps.
>
> A note on skbuff_heads from remote or pfmemalloc'ed slabs:
>  - kmalloc()/kmem_cache_alloc() itself by default allows allocating
>    memory from remote nodes to defragment their slabs. This is
>    controlled by a sysctl, so by that logic an skbuff_head from a
>    remote node is an OK case;
>  - the easiest way to check whether the slab of an skbuff_head is
>    remote or pfmemalloc'ed is:
>
>    if (!dev_page_is_reusable(virt_to_head_page(skb)))
>        /* drop it */;
>
>    ...*but*, given that most slabs are built of compound pages,
>    virt_to_head_page() will hit the unlikely branch on every single
>    call. This check cost at least 20 Mbps in test scenarios, so it
>    seems better _not_ to do it.
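[Editor's note: the percpu-cache scheme quoted above can be modelled in plain userspace C. This is an illustrative sketch only, not the kernel implementation: the names `head_cache`, `cache_get_head()` and `cache_put_head()` are invented, and `malloc()`/`free()` stand in for `kmem_cache_alloc_bulk()`/`kmem_cache_free_bulk()`.]

```c
#include <stdlib.h>

/* Illustrative userspace model of the napi_alloc_cache idea: a small
 * per-"CPU" array of pointers serves both as a deferred-free queue and
 * as a source of recycled heads for new allocations. */

#define CACHE_SIZE  64
#define BULK_REFILL 16

struct head_cache {
	void *heads[CACHE_SIZE];
	unsigned int count;
};

static struct head_cache cache;   /* stand-in for the percpu cache */
static unsigned long slab_allocs; /* counts "slab allocator" calls */

static void *slab_alloc(void)     /* models one kmem_cache_alloc() */
{
	slab_allocs++;
	return malloc(128);
}

/* Get a head: reuse a cached one if available, otherwise bulk-refill
 * the cache first, so most calls avoid the allocator entirely. */
static void *cache_get_head(void)
{
	if (!cache.count) {
		unsigned int i;

		/* Grab several heads at once, like the kernel does
		 * with kmem_cache_alloc_bulk(). */
		for (i = 0; i < BULK_REFILL; i++)
			cache.heads[cache.count++] = slab_alloc();
	}

	return cache.heads[--cache.count];
}

/* Free a head: queue it into the cache instead of returning it to the
 * allocator; only when the cache overflows is half of it flushed back
 * in one go (modelling kmem_cache_free_bulk()). */
static void cache_put_head(void *head)
{
	if (cache.count == CACHE_SIZE) {
		unsigned int i;

		for (i = CACHE_SIZE / 2; i < CACHE_SIZE; i++)
			free(cache.heads[i]);
		cache.count = CACHE_SIZE / 2;
	}

	cache.heads[cache.count++] = head;
}
```

A freed head is handed straight back to the next allocation without ever touching the slab allocator, which is the effect the cover letter measures as the 35-70 Mbps gain.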
<snip>

> Alexander Lobakin (11):
>   skbuff: move __alloc_skb() next to the other skb allocation functions
>   skbuff: simplify kmalloc_reserve()
>   skbuff: make __build_skb_around() return void
>   skbuff: simplify __alloc_skb() a bit
>   skbuff: use __build_skb_around() in __alloc_skb()
>   skbuff: remove __kfree_skb_flush()
>   skbuff: move NAPI cache declarations upper in the file
>   skbuff: introduce {,__}napi_build_skb() which reuses NAPI cache heads
>   skbuff: allow to optionally use NAPI cache from __alloc_skb()
>   skbuff: allow to use NAPI cache from __napi_alloc_skb()
>   skbuff: queue NAPI_MERGED_FREE skbs into NAPI cache instead of freeing
>
>  include/linux/skbuff.h |   4 +-
>  net/core/dev.c         |  16 +-
>  net/core/skbuff.c      | 428 +++++++++++++++++++++++------------------
>  3 files changed, 242 insertions(+), 206 deletions(-)

With the last few changes, and the testing that verified the need to drop
the cache clearing, this patch set looks good to me.

Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Hello:

This series was applied to netdev/net-next.git (refs/heads/master):

On Sat, 13 Feb 2021 14:10:43 +0000 you wrote:
> Currently, all sorts of skb allocation always allocate
> skbuff_heads one by one via kmem_cache_alloc().
> On the other hand, we have the percpu napi_alloc_cache to store
> skbuff_heads queued up for freeing and flush them in bulk.
>
> We can use this cache not only for bulk-freeing, but also to obtain
> heads for new skbs and avoid unconditional allocations, as well as
> for bulk-allocating (like XDP's cpumap code and the veth driver
> already do).
>
> [...]

Here is the summary with links:
  - [v6,net-next,01/11] skbuff: move __alloc_skb() next to the other skb allocation functions
    https://git.kernel.org/netdev/net-next/c/5381b23d5bf9
  - [v6,net-next,02/11] skbuff: simplify kmalloc_reserve()
    https://git.kernel.org/netdev/net-next/c/ef28095fce66
  - [v6,net-next,03/11] skbuff: make __build_skb_around() return void
    https://git.kernel.org/netdev/net-next/c/483126b3b2c6
  - [v6,net-next,04/11] skbuff: simplify __alloc_skb() a bit
    https://git.kernel.org/netdev/net-next/c/df1ae022af2c
  - [v6,net-next,05/11] skbuff: use __build_skb_around() in __alloc_skb()
    https://git.kernel.org/netdev/net-next/c/f9d6725bf44a
  - [v6,net-next,06/11] skbuff: remove __kfree_skb_flush()
    https://git.kernel.org/netdev/net-next/c/fec6e49b6398
  - [v6,net-next,07/11] skbuff: move NAPI cache declarations upper in the file
    https://git.kernel.org/netdev/net-next/c/50fad4b543b3
  - [v6,net-next,08/11] skbuff: introduce {,__}napi_build_skb() which reuses NAPI cache heads
    https://git.kernel.org/netdev/net-next/c/f450d539c05a
  - [v6,net-next,09/11] skbuff: allow to optionally use NAPI cache from __alloc_skb()
    https://git.kernel.org/netdev/net-next/c/d13612b58e64
  - [v6,net-next,10/11] skbuff: allow to use NAPI cache from __napi_alloc_skb()
    https://git.kernel.org/netdev/net-next/c/cfb8ec659521
  - [v6,net-next,11/11] skbuff: queue NAPI_MERGED_FREE skbs into NAPI cache instead of freeing
    https://git.kernel.org/netdev/net-next/c/9243adfc311a

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html