From patchwork Tue Feb 9 20:49:16 2021
From: Alexander Lobakin
Date: Tue, 09 Feb 2021 20:49:16 +0000
Subject: [v3 net-next 09/10] skbuff: reuse NAPI skb cache on allocation path (__alloc_skb())
To: "David S. Miller", Jakub Kicinski
Cc: Jonathan Lemon, Eric Dumazet, Dmitry Vyukov, Willem de Bruijn,
 Alexander Lobakin, Randy Dunlap, Kevin Hao, Pablo Neira Ayuso,
 Jakub Sitnicki, Marco Elver, Dexuan Cui, Paolo Abeni,
 Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Taehee Yoo,
 Cong Wang, Björn Töpel, Miaohe Lin, Guillaume Nault, Yonghong Song,
 zhudi, Michal Kubecek, Marcelo Ricardo Leitner,
 Dmitry Safonov <0x7f454c46@gmail.com>, Yang Yingliang,
 linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Message-ID: <20210209204533.327360-10-alobakin@pm.me>
In-Reply-To: <20210209204533.327360-1-alobakin@pm.me>
References: <20210209204533.327360-1-alobakin@pm.me>

Try to use the same technique for obtaining an skbuff_head from the
NAPI cache in {,__}alloc_skb(). Two points here:

 - __alloc_skb() can be used for allocating clones or for allocating
   skbs for distant nodes. Grab a head from the cache only for
   non-clones and for local nodes;
 - it can be called from any context, so napi_safe == false (sketched
   below).
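For context, a rough sketch of what the napi_skb_cache_get() helper
introduced earlier in this series is expected to do with the napi_safe
flag. The function name and signature follow the call site in the diff
below; the body, the refill policy and the half-cache refill size are
assumptions, not the actual implementation from the series:

/* Illustrative sketch only -- the real helper is added by an earlier
 * patch in this series.  With napi_safe == false the caller may run in
 * any context, so access to the per-CPU NAPI cache has to be protected
 * by disabling bottom halves.
 */
static struct sk_buff *napi_skb_cache_get(bool napi_safe)
{
	struct napi_alloc_cache *nc;
	struct sk_buff *skb;

	if (!napi_safe)
		local_bh_disable();

	nc = this_cpu_ptr(&napi_alloc_cache);

	/* Refill half of the cache in bulk when it runs empty (assumed
	 * policy).  kmem_cache_alloc_bulk() returns the number of objects
	 * actually allocated, possibly 0.
	 */
	if (unlikely(!nc->skb_count))
		nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
						      GFP_ATOMIC,
						      NAPI_SKB_CACHE_SIZE / 2,
						      nc->skb_cache);

	skb = nc->skb_count ? nc->skb_cache[--nc->skb_count] : NULL;

	if (!napi_safe)
		local_bh_enable();

	return skb;
}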
Signed-off-by: Alexander Lobakin
---
 net/core/skbuff.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 8747566a8136..8850086f8605 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -354,15 +354,19 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	struct sk_buff *skb;
 	u8 *data;
 	bool pfmemalloc;
+	bool clone;
 
-	cache = (flags & SKB_ALLOC_FCLONE)
-		? skbuff_fclone_cache : skbuff_head_cache;
+	clone = !!(flags & SKB_ALLOC_FCLONE);
+	cache = clone ? skbuff_fclone_cache : skbuff_head_cache;
 
 	if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
 		gfp_mask |= __GFP_MEMALLOC;
 
 	/* Get the HEAD */
-	skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
+	if (clone || unlikely(node != NUMA_NO_NODE && node != numa_mem_id()))
+		skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);
+	else
+		skb = napi_skb_cache_get(false);
 	if (unlikely(!skb))
 		return NULL;
 	prefetchw(skb);
@@ -393,7 +397,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	__build_skb_around(skb, data, 0);
 	skb->pfmemalloc = pfmemalloc;
 
-	if (flags & SKB_ALLOC_FCLONE) {
+	if (clone) {
 		struct sk_buff_fclones *fclones;
 
 		fclones = container_of(skb, struct sk_buff_fclones, skb1);
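For readability, a consolidated view of the head-allocation path of
__alloc_skb() once this patch is applied, reconstructed from the hunks
above (not part of the patch itself):

	/* Reconstructed from the diff above, for illustration only. */
	clone = !!(flags & SKB_ALLOC_FCLONE);
	cache = clone ? skbuff_fclone_cache : skbuff_head_cache;

	if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
		gfp_mask |= __GFP_MEMALLOC;

	/* Get the HEAD: fclones and remote-node requests still come from
	 * the slab caches; local non-clone heads may come from the
	 * per-CPU NAPI cache instead.
	 */
	if (clone || unlikely(node != NUMA_NO_NODE && node != numa_mem_id()))
		skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);
	else
		skb = napi_skb_cache_get(false);
	if (unlikely(!skb))
		return NULL;

In practice this means plain alloc_skb()/__alloc_skb() calls with
node == NUMA_NO_NODE get their sk_buff head from the per-CPU cache,
while alloc_skb_fclone() (SKB_ALLOC_FCLONE) and explicit remote-node
allocations keep using kmem_cache_alloc_node(), since the NAPI cache
only holds skbuff_head_cache objects for the local node.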