From patchwork Tue Feb  9 20:46:31 2021
Date: Tue, 09 Feb 2021 20:46:31 +0000
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Jonathan Lemon, Eric Dumazet, Dmitry Vyukov, Willem de Bruijn,
    Alexander Lobakin, Randy Dunlap, Kevin Hao, Pablo Neira Ayuso,
    Jakub Sitnicki, Marco Elver, Dexuan Cui, Paolo Abeni,
    Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Taehee Yoo,
    Cong Wang, Björn Töpel, Miaohe Lin, Guillaume Nault, Yonghong Song,
    zhudi, Michal Kubecek, Marcelo Ricardo Leitner,
    Dmitry Safonov <0x7f454c46@gmail.com>, Yang Yingliang,
    linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Reply-To: Alexander Lobakin
Subject: [v3 net-next 00/10] skbuff: introduce skbuff_heads bulking and reusing
Message-ID: <20210209204533.327360-1-alobakin@pm.me>
X-Mailing-List: netdev@vger.kernel.org

Currently, every skb allocation path allocates skbuff_heads one by one
via kmem_cache_alloc(). On the other hand, we have the percpu
napi_alloc_cache that stores skbuff_heads queued up for freeing and
flushes them in bulk. This cache can be used not only for bulk-wiping,
but also to obtain heads for new skbs and avoid unconditional
allocations, as well as for bulk-allocating them. As accessing
napi_alloc_cache implies NAPI softirq context, decaching is protected
with an in_serving_softirq() check, with the option to bypass the check
when the context is 100% known.

iperf3 showed 35-70 Mbps bumps for both TCP and UDP while performing
VLAN NAT on a 1.2 GHz MIPS board. The boost is likely to be way bigger
on more powerful hosts and on NICs handling tens of Mpps.
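To illustrate the decaching path, here is a minimal sketch of the idea.
It is simplified and illustrative (no KASAN handling, constants and the
exact refill size are placeholders), not the literal patch code:

	/* Take a skbuff_head from the percpu NAPI cache; refill the
	 * cache in bulk from the slab layer when it runs empty.
	 */
	static struct sk_buff *napi_skb_cache_get(void)
	{
		struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
		struct sk_buff *skb;

		if (unlikely(!nc->skb_count))
			nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
							      GFP_ATOMIC,
							      NAPI_SKB_CACHE_BULK,
							      (void **)nc->skb_cache);
		if (unlikely(!nc->skb_count))
			return NULL;

		skb = nc->skb_cache[--nc->skb_count];

		return skb;
	}

	/* Callers not guaranteed to run in NAPI context check first: */
	skb = in_serving_softirq() ? napi_skb_cache_get() :
				     kmem_cache_alloc(skbuff_head_cache, gfp_mask);

The point of the guard is that the cache is only touched from softirq
context, so no extra locking is needed on the hotpath.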
Note on skbuff_heads from remote nodes or pfmemalloc'ed slabs:

 - kmalloc()/kmem_cache_alloc() itself by default allows allocating
   memory from remote nodes to defragment their slabs. This is
   controlled by a sysctl, so by that logic a skbuff_head from a remote
   node is an OK case;

 - the easiest way to check whether the slab of a skbuff_head is remote
   or pfmemalloc'ed is:

	if (!dev_page_is_reusable(virt_to_head_page(skb)))
		/* drop it */;

   ...*but*, given that most slabs are built of compound pages,
   virt_to_head_page() hits its unlikely branch on every single call.
   This check cost at least 20 Mbps in the test scenarios, so it seems
   better to _not_ do it.

Since v2 [1]:
 - also cover the {,__}alloc_skb() and {,__}build_skb() cases (this
   became handy after the change that passes tiny skb requests to the
   kmalloc layer);
 - cover the cache with KASAN instrumentation (suggested by Eric
   Dumazet, with help from Dmitry Vyukov);
 - completely drop the redundant __kfree_skb_flush() (also Eric);
 - lots of code cleanups;
 - expand the commit message with the NUMA and pfmemalloc points (Jakub).

Since v1 [0]:
 - use one unified cache instead of two separate ones to greatly
   simplify the logic and reduce hotpath overhead (Edward Cree);
 - new: also recycle GRO_MERGED_FREE skbs instead of freeing them
   immediately;
 - correct the performance numbers after optimizations and after
   running lots of tests for different use cases.

[0] https://lore.kernel.org/netdev/20210111182655.12159-1-alobakin@pm.me
[1] https://lore.kernel.org/netdev/20210113133523.39205-1-alobakin@pm.me

Alexander Lobakin (10):
  skbuff: move __alloc_skb() next to the other skb allocation functions
  skbuff: simplify kmalloc_reserve()
  skbuff: make __build_skb_around() return void
  skbuff: simplify __alloc_skb() a bit
  skbuff: use __build_skb_around() in __alloc_skb()
  skbuff: remove __kfree_skb_flush()
  skbuff: move NAPI cache declarations upper in the file
  skbuff: reuse NAPI skb cache on allocation path (__build_skb())
  skbuff: reuse NAPI skb cache on allocation path (__alloc_skb())
  skbuff: queue NAPI_MERGED_FREE skbs into NAPI cache instead of freeing

 include/linux/skbuff.h   |   4 +-
 net/core/dev.c           |  15 +-
 net/core/skbuff.c        | 392 ++++++++++++++++++++-------------------
 net/netlink/af_netlink.c |   2 +-
 4 files changed, 202 insertions(+), 211 deletions(-)
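For completeness, the freeing/recycling side of the same cache mentioned
above works roughly as sketched below. Again simplified and illustrative
(no KASAN poisoning, constants are placeholders), not the literal patch
code: heads are queued into the percpu cache and, once it fills up, half
of it is flushed back to the slab layer in a single bulk call.

	static void napi_skb_cache_put(struct sk_buff *skb)
	{
		struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);

		nc->skb_cache[nc->skb_count++] = skb;

		if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
			/* Free the upper half in one go, keep the rest cached */
			kmem_cache_free_bulk(skbuff_head_cache,
					     NAPI_SKB_CACHE_HALF,
					     (void **)nc->skb_cache +
					     NAPI_SKB_CACHE_HALF);
			nc->skb_count = NAPI_SKB_CACHE_HALF;
		}
	}

Keeping half of the cache after a flush is what lets subsequent
allocations be served from the cache instead of going back to the slab
allocator right away.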