[v3,net-next,09/10] skbuff: reuse NAPI skb cache on allocation path (__alloc_skb())

Message ID 20210209204533.327360-10-alobakin@pm.me
State Superseded
Delegated to: Netdev Maintainers
Series skbuff: introduce skbuff_heads bulking and reusing

Checks

Context Check Description
netdev/cover_letter success
netdev/fixes_present success
netdev/patch_count success
netdev/tree_selection success Clearly marked for net-next
netdev/subject_prefix success
netdev/cc_maintainers success CCed 7 of 7 maintainers
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success
netdev/module_param success Was 0 now: 0
netdev/build_32bit fail Errors and warnings before: 56 this patch: 16
netdev/kdoc success Errors and warnings before: 1 this patch: 1
netdev/verify_fixes success
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 30 lines checked
netdev/build_allmodconfig_warn fail Errors and warnings before: 22 this patch: 170
netdev/header_inline success
netdev/stable success Stable not CCed

Commit Message

Alexander Lobakin Feb. 9, 2021, 8:49 p.m. UTC
Try to use the same technique for obtaining an skbuff_head from the
NAPI cache in {,__}alloc_skb(). Two points here:
 - __alloc_skb() can be used for allocating clones or for allocating
   skbs for distant NUMA nodes. Grab a head from the cache only for
   non-clones and for local nodes;
 - __alloc_skb() can be called from any context, so pass
   napi_safe == false (a sketch of the helper follows the patch below).

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 net/core/skbuff.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

Patch

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 8747566a8136..8850086f8605 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -354,15 +354,19 @@  struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	struct sk_buff *skb;
 	u8 *data;
 	bool pfmemalloc;
+	bool clone;
 
-	cache = (flags & SKB_ALLOC_FCLONE)
-		? skbuff_fclone_cache : skbuff_head_cache;
+	clone = !!(flags & SKB_ALLOC_FCLONE);
+	cache = clone ? skbuff_fclone_cache : skbuff_head_cache;
 
 	if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
 		gfp_mask |= __GFP_MEMALLOC;
 
 	/* Get the HEAD */
-	skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
+	if (clone || unlikely(node != NUMA_NO_NODE && node != numa_mem_id()))
+		skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);
+	else
+		skb = napi_skb_cache_get(false);
 	if (unlikely(!skb))
 		return NULL;
 	prefetchw(skb);
@@ -393,7 +397,7 @@  struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	__build_skb_around(skb, data, 0);
 	skb->pfmemalloc = pfmemalloc;
 
-	if (flags & SKB_ALLOC_FCLONE) {
+	if (clone) {
 		struct sk_buff_fclones *fclones;
 
 		fclones = container_of(skb, struct sk_buff_fclones, skb1);
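
Context: the fast path added above relies on napi_skb_cache_get(),
introduced earlier in this series. Below is a rough, paraphrased
sketch of that helper, not a verbatim quote; NAPI_SKB_CACHE_BULK and
the napi_alloc_cache fields are as introduced there, and the exact
body may differ:

/* Paraphrased from an earlier patch in this series, not verbatim */
static struct sk_buff *napi_skb_cache_get(bool napi_safe)
{
	struct napi_alloc_cache *nc;
	struct sk_buff *skb;

	/* __alloc_skb() may run in any context, so it passes
	 * napi_safe == false and the per-CPU cache access gets
	 * wrapped in local_bh_disable()/local_bh_enable() unless
	 * we are already in serving-softirq context.
	 */
	if (!napi_safe && unlikely(!in_serving_softirq()))
		local_bh_disable();

	nc = this_cpu_ptr(&napi_alloc_cache);

	/* Refill the cache in bulk from the slab when it runs dry */
	if (unlikely(!nc->skb_count))
		nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
						      GFP_ATOMIC,
						      NAPI_SKB_CACHE_BULK,
						      nc->skb_cache);

	skb = nc->skb_count ? nc->skb_cache[--nc->skb_count] : NULL;

	if (!napi_safe && unlikely(!in_serving_softirq()))
		local_bh_enable();

	return skb;
}

Note the design choice visible in __alloc_skb() above: fclones come
from skbuff_fclone_cache, a different kmem cache, so only non-clone
heads destined for the local node can be served from the NAPI cache;
everything else still goes through kmem_cache_alloc_node().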