From patchwork Sun Nov 8 01:10:40 2020
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 11889363
X-Patchwork-Delegate: kuba@kernel.org
Date: Sun, 08 Nov 2020 01:10:40 +0000
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Alexey Kuznetsov, Hideaki YOSHIFUJI, Paolo Abeni,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Alexander Lobakin
Subject: [PATCH net] net: udp: fix Fast/frag0 UDP GRO
X-Mailing-List: netdev@vger.kernel.org

While testing UDP GSO fraglists forwarding through a driver that uses
Fast GRO (via napi_gro_frags()), I was observing lots of out-of-order
iperf packets:

[ ID] Interval       Transfer     Bitrate        Jitter
[SUM]  0.0-40.0 sec  12106 datagrams received out-of-order

A simple switch to napi_gro_receive() or any other method without the
frag0 shortcut completely resolved them.

I've found that UDP GRO uses udp_hdr(skb) in its .gro_receive()
callback. While it's probably OK for non-frag0 paths (when all the
headers or even the entire frame are already in skb->data), this inline
points to junk when using Fast GRO (napi_gro_frags() or
napi_gro_receive() with only the Ethernet header in skb->data and all
the rest in shinfo->frags) and breaks GRO packet assembly and the
packet flow itself.

To support both modes, skb_gro_header_fast() + skb_gro_header_slow()
are typically used. UDP even has an inline helper that makes use of
them, udp_gro_udphdr(). Use it instead of the troublemaking udp_hdr()
to get rid of the out-of-order deliveries.

Present since the introduction of plain UDP GRO in 5.0-rc1.
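
For reference, udp_gro_udphdr() (include/net/udp.h) wraps the
fast/slow header-access pattern roughly as follows. This is a sketch
of the helper as it looked around the v5.x kernels, with comments
added; exact details may differ between versions:

static inline struct udphdr *udp_gro_udphdr(struct sk_buff *skb)
{
	struct udphdr *uh;
	unsigned int hlen, off;

	/* Offset of the UDP header relative to the current GRO position */
	off  = skb_gro_offset(skb);
	hlen = off + sizeof(*uh);

	/* Fast path: point into the frag0 area set up by napi_gro_frags() */
	uh = skb_gro_header_fast(skb, off);
	/* Slow path: header not fully in frag0, pull it into skb->data */
	if (skb_gro_header_hard(skb, hlen))
		uh = skb_gro_header_slow(skb, hlen, off);

	return uh;
}

Unlike udp_hdr(), which blindly dereferences skb->transport_header
inside skb->data, this helper returns a valid pointer in both the
frag0 and the fully-linearized cases.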
Fixes: e20cf8d3f1f7 ("udp: implement GRO for plain UDP sockets.")
Signed-off-by: Alexander Lobakin
---
 net/ipv4/udp_offload.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index e67a66fbf27b..13740e9fe6ec 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -366,7 +366,7 @@ static struct sk_buff *udp4_ufo_fragment(struct sk_buff *skb,
 static struct sk_buff *udp_gro_receive_segment(struct list_head *head,
 					       struct sk_buff *skb)
 {
-	struct udphdr *uh = udp_hdr(skb);
+	struct udphdr *uh = udp_gro_udphdr(skb);
 	struct sk_buff *pp = NULL;
 	struct udphdr *uh2;
 	struct sk_buff *p;