From patchwork Wed Feb 10 16:30:43 2021
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 12081173
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 10 Feb 2021 16:30:43 +0000
From: Alexander Lobakin <alobakin@pm.me>
To: "David S. Miller", Jakub Kicinski
Cc: Jonathan Lemon, Eric Dumazet, Dmitry Vyukov, Willem de Bruijn,
 Alexander Lobakin, Randy Dunlap, Kevin Hao, Pablo Neira Ayuso,
 Jakub Sitnicki, Marco Elver, Dexuan Cui, Paolo Abeni,
 Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
 Andrii Nakryiko, Taehee Yoo, Cong Wang, Björn Töpel, Miaohe Lin,
 Guillaume Nault, Yonghong Song, zhudi, Michal Kubecek,
 Marcelo Ricardo Leitner, Dmitry Safonov <0x7f454c46@gmail.com>,
 Yang Yingliang, Florian Westphal, Edward Cree,
 linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v4 net-next 09/11] skbuff: allow to optionally use NAPI
 cache from __alloc_skb()
Message-ID: <20210210162732.80467-10-alobakin@pm.me>
In-Reply-To: <20210210162732.80467-1-alobakin@pm.me>
References: <20210210162732.80467-1-alobakin@pm.me>
X-Mailing-List: netdev@vger.kernel.org

Reuse the old and forgotten SKB_ALLOC_NAPI flag to add an option to
get an skbuff_head from the NAPI cache instead of an in-place
allocation inside __alloc_skb().
This implies that the function is called from softirq or BH-off
context, not for allocating a clone and not for a distant NUMA node.
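For reference, napi_skb_cache_get() comes from patch 08/11 of this
series. The sketch below paraphrases its logic (simplified, not the
verbatim hunk): pop a cache-hot head from the per-CPU napi_alloc_cache,
bulk-refilling it from skbuff_head_cache when the cache runs empty.

	/* Paraphrased sketch of napi_skb_cache_get() (patch 08/11 of
	 * this series); simplified, not the exact upstream code.
	 */
	static struct sk_buff *napi_skb_cache_get(void)
	{
		struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);

		/* Bulk-refill the per-CPU cache when it runs empty */
		if (unlikely(!nc->skb_count))
			nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
							      GFP_ATOMIC,
							      NAPI_SKB_CACHE_BULK,
							      nc->skb_cache);
		if (unlikely(!nc->skb_count))
			return NULL;

		/* Pop the most recently stashed (cache-hot) head */
		return nc->skb_cache[--nc->skb_count];
	}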
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 net/core/skbuff.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 9e1a8ded4acc..750fa1825b28 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -397,15 +397,20 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	struct sk_buff *skb;
 	u8 *data;
 	bool pfmemalloc;
+	bool clone;
 
-	cache = (flags & SKB_ALLOC_FCLONE)
-		? skbuff_fclone_cache : skbuff_head_cache;
+	clone = !!(flags & SKB_ALLOC_FCLONE);
+	cache = clone ? skbuff_fclone_cache : skbuff_head_cache;
 
 	if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
 		gfp_mask |= __GFP_MEMALLOC;
 
 	/* Get the HEAD */
-	skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
+	if (!clone && (flags & SKB_ALLOC_NAPI) &&
+	    likely(node == NUMA_NO_NODE || node == numa_mem_id()))
+		skb = napi_skb_cache_get();
+	else
+		skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);
 	if (unlikely(!skb))
 		return NULL;
 	prefetchw(skb);
@@ -436,7 +441,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	__build_skb_around(skb, data, 0);
 	skb->pfmemalloc = pfmemalloc;
 
-	if (flags & SKB_ALLOC_FCLONE) {
+	if (clone) {
 		struct sk_buff_fclones *fclones;
 
 		fclones = container_of(skb, struct sk_buff_fclones, skb1);
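As a usage illustration (the caller below is hypothetical;
__alloc_skb(), SKB_ALLOC_RX, SKB_ALLOC_NAPI and NUMA_NO_NODE are the
real identifiers), a receive path running in softirq context could now
opt in to the cache like this:

	/* Hypothetical caller, for illustration only: in softirq
	 * (e.g. NAPI poll) context, SKB_ALLOC_NAPI makes __alloc_skb()
	 * take the head from the per-CPU NAPI cache rather than from
	 * kmem_cache_alloc_node().
	 */
	static struct sk_buff *rx_alloc_skb(unsigned int len)
	{
		return __alloc_skb(len, GFP_ATOMIC,
				   SKB_ALLOC_RX | SKB_ALLOC_NAPI,
				   NUMA_NO_NODE);
	}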