From patchwork Tue Mar 30 23:15:52 2021
From: Alexander Lobakin
Date: Tue, 30 Mar 2021 23:15:52 +0000
Subject: [PATCH bpf-next 1/2] xsk: speed-up generic full-copy xmit
To: Alexei Starovoitov, Daniel Borkmann
Cc: Xuan Zhuo, Björn Töpel, Magnus Karlsson, Jonathan Lemon,
    "David S. Miller", Jakub Kicinski, Jesper Dangaard Brouer,
    John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
    Yonghong Song, KP Singh, Alexander Lobakin, netdev@vger.kernel.org,
    bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Message-ID: <20210330231528.546284-2-alobakin@pm.me>
In-Reply-To: <20210330231528.546284-1-alobakin@pm.me>
References: <20210330231528.546284-1-alobakin@pm.me>
List-ID: bpf@vger.kernel.org

Two facts are known for certain at the moment of copying:

 - the allocated skb is fully linear;
 - its linear space is long enough to hold the full buffer data.

So the out-of-line skb_put(), skb_store_bits() and the return-code
check can be replaced with a plain memcpy(__skb_put()) with no loss.
Also align memcpy()'s len up to sizeof(long) to improve its
performance.
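The word-alignment trick can be sketched in plain userspace C. This is a minimal illustration, not kernel code: `ALIGN_UP` mimics the kernel's `ALIGN()` macro, and `xsk_copy_len` is a hypothetical helper name.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the kernel's ALIGN() macro: round x up to the
 * next multiple of a power-of-two boundary a. */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

/* The patch pads the memcpy() length to sizeof(long) so the copy runs
 * in whole machine words. This is safe only because the skb's linear
 * area was allocated with enough slack to absorb the padding bytes. */
static size_t xsk_copy_len(size_t len)
{
	return ALIGN_UP(len, sizeof(long));
}
```

The skb's accounted length is still the exact `len` passed to `__skb_put()`; only the copy itself is rounded up.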
Signed-off-by: Alexander Lobakin
---
 net/xdp/xsk.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)
-- 
2.31.1

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index a71ed664da0a..41f8f21b3348 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -517,14 +517,9 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
 			return ERR_PTR(err);
 
 		skb_reserve(skb, hr);
-		skb_put(skb, len);
 
 		buffer = xsk_buff_raw_get_data(xs->pool, desc->addr);
-		err = skb_store_bits(skb, 0, buffer, len);
-		if (unlikely(err)) {
-			kfree_skb(skb);
-			return ERR_PTR(err);
-		}
+		memcpy(__skb_put(skb, len), buffer, ALIGN(len, sizeof(long)));
 	}
 
 	skb->dev = dev;

From patchwork Tue Mar 30 23:15:59 2021
From: Alexander Lobakin
Date: Tue, 30 Mar 2021 23:15:59 +0000
Subject: [PATCH bpf-next 2/2] xsk: introduce generic almost-zerocopy xmit
To: Alexei Starovoitov, Daniel Borkmann
Cc: Xuan Zhuo, Björn Töpel, Magnus Karlsson, Jonathan Lemon,
    "David S. Miller", Jakub Kicinski, Jesper Dangaard Brouer,
    John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
    Yonghong Song, KP Singh, Alexander Lobakin, netdev@vger.kernel.org,
    bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Message-ID: <20210330231528.546284-3-alobakin@pm.me>
In-Reply-To: <20210330231528.546284-1-alobakin@pm.me>
References: <20210330231528.546284-1-alobakin@pm.me>
List-ID: bpf@vger.kernel.org

The reasons behind IFF_TX_SKB_NO_LINEAR are:

 - most drivers expect an skb with linear space;
 - most drivers expect the hard header in the linear space;
 - many drivers need some headroom to insert custom headers and/or pull
   headers from frags (pskb_may_pull() etc.).

With a few bits of overhead, all of this can be satisfied without
copying the full buffer data. Frames no shorter than 128 bytes (to
mitigate allocation overhead) are now also built via the zerocopy path,
provided the device and driver support S/G xmit, which is almost always
true.
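The sizing arithmetic and the path selection described above can be mirrored in a small userspace sketch. The constants match the patch; `use_zerocopy` is an illustrative stand-in for the branch in `xsk_build_skb()`, with the device flags reduced to plain booleans.

```c
#include <assert.h>

/* Constants from the patch: a 256-byte linear area, a copy threshold
 * of half that, and the 14-byte Ethernet hard header that gets pulled
 * into the linear part, leaving the rest as driver headroom. */
#define XSK_SKB_HEADLEN    256
#define XSK_COPY_THRESHOLD (XSK_SKB_HEADLEN / 2)
#define ETH_HLEN           14

/* Mirrors the dispatch in xsk_build_skb(): IFF_TX_SKB_NO_LINEAR
 * devices always take the zerocopy builder; otherwise frames at or
 * above the threshold take it when the device supports S/G xmit. */
static int use_zerocopy(unsigned int len, int no_linear, int has_sg)
{
	return no_linear || (len >= XSK_COPY_THRESHOLD && has_sg);
}
```

Short frames below the threshold still go through the full-copy path, where the single allocation plus copy is cheaper than pinning pages.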
We allocate 256* additional bytes for the skb linear space and pull the
hard header there (aligning its end by 16 bytes for platforms with
NET_IP_ALIGN). The rest of the buffer data is just pinned as frags.
Room of at least 242 bytes is left for any driver needs.

We could just pass the buffer to eth_get_headlen() to minimize
allocation overhead and be able to copy all of the headers into the
linear space, but the flow dissection procedure tends to be more
expensive than the current approach.

The IFF_TX_SKB_NO_LINEAR path remains unchanged; it is still relevant
and generally faster.

* The value of 256 bytes is somewhat "magic": it can be found in lots
of drivers and places of core code, and 256 bytes are believed to be
enough to store the headers of any frame.

Cc: Xuan Zhuo
Signed-off-by: Alexander Lobakin
---
 net/xdp/xsk.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)
-- 
2.31.1

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 41f8f21b3348..090ff9c096a3 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -445,6 +445,9 @@ static void xsk_destruct_skb(struct sk_buff *skb)
 	sock_wfree(skb);
 }
 
+#define XSK_SKB_HEADLEN		256
+#define XSK_COPY_THRESHOLD	(XSK_SKB_HEADLEN / 2)
+
 static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
 					      struct xdp_desc *desc)
 {
@@ -452,13 +455,22 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
 	u32 hr, len, ts, offset, copy, copied;
 	struct sk_buff *skb;
 	struct page *page;
+	bool need_pull;
 	void *buffer;
 	int err, i;
 	u64 addr;
 
 	hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(xs->dev->needed_headroom));
+	len = hr;
+
+	need_pull = !(xs->dev->priv_flags & IFF_TX_SKB_NO_LINEAR);
+	if (need_pull) {
+		len += XSK_SKB_HEADLEN;
+		len += NET_IP_ALIGN;
+		hr += NET_IP_ALIGN;
+	}
 
-	skb = sock_alloc_send_skb(&xs->sk, hr, 1, &err);
+	skb = sock_alloc_send_skb(&xs->sk, len, 1, &err);
 	if (unlikely(!skb))
 		return ERR_PTR(err);
 
@@ -488,6 +500,11 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
 	skb->data_len += len;
 	skb->truesize += ts;
 
+	if (need_pull && unlikely(!__pskb_pull_tail(skb, ETH_HLEN))) {
+		kfree_skb(skb);
+		return ERR_PTR(-ENOMEM);
+	}
+
 	refcount_add(ts, &xs->sk.sk_wmem_alloc);
 
 	return skb;
@@ -498,19 +515,20 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
 {
 	struct net_device *dev = xs->dev;
 	struct sk_buff *skb;
+	u32 len = desc->len;
 
-	if (dev->priv_flags & IFF_TX_SKB_NO_LINEAR) {
+	if ((dev->priv_flags & IFF_TX_SKB_NO_LINEAR) ||
+	    (len >= XSK_COPY_THRESHOLD && likely(dev->features & NETIF_F_SG))) {
 		skb = xsk_build_skb_zerocopy(xs, desc);
 		if (IS_ERR(skb))
 			return skb;
 	} else {
-		u32 hr, tr, len;
 		void *buffer;
+		u32 hr, tr;
 		int err;
 
 		hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom));
 		tr = dev->needed_tailroom;
-		len = desc->len;
 
 		skb = sock_alloc_send_skb(&xs->sk, hr + len + tr, 1, &err);
 		if (unlikely(!skb))
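The effect of the `__pskb_pull_tail(skb, ETH_HLEN)` call added above can be modeled as a toy in userspace: the first ETH_HLEN bytes of the pinned buffer are copied into the pre-allocated linear headroom so drivers that expect the hard header in the linear area keep working. All names here (`toy_skb`, `toy_pull_hard_header`) are illustrative, not kernel API, and the model ignores page boundaries and refcounting.

```c
#include <assert.h>
#include <string.h>

#define ETH_HLEN 14

/* Toy model of an skb with an empty linear head and one frag that
 * aliases the (pinned) userspace buffer. */
struct toy_skb {
	unsigned char linear[256];       /* pre-allocated linear space */
	unsigned int  linear_len;        /* bytes currently in the head */
	const unsigned char *frag;       /* pinned buffer data */
	unsigned int  frag_len;
};

/* Copy the hard header into the linear head and shrink the frag,
 * roughly what __pskb_pull_tail(skb, ETH_HLEN) accomplishes. */
static int toy_pull_hard_header(struct toy_skb *skb)
{
	if (skb->frag_len < ETH_HLEN)
		return -1;                   /* frame too short to pull */
	memcpy(skb->linear, skb->frag, ETH_HLEN);
	skb->linear_len = ETH_HLEN;
	skb->frag     += ETH_HLEN;           /* frag now starts past the header */
	skb->frag_len -= ETH_HLEN;
	return 0;
}
```

In the real patch the header bytes stay in place and the frag offset is adjusted by the skb machinery; only the header ends up duplicated in the linear area, which is why the path is "almost" zerocopy.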