From patchwork Tue Jan 18 15:24:18 2022
From: Paolo Abeni
To: netdev@vger.kernel.org
Cc: Eric Dumazet
Subject: [RFC PATCH 1/3] net: gro: avoid re-computing truesize twice on recycle
Date: Tue, 18 Jan 2022 16:24:18 +0100

After commit 5e10da5385d2 ("skbuff: allow 'slow_gro' for skb carring sock
reference") and commit af352460b465 ("net: fix GRO skb truesize update"),
the truesize of a freed skb is properly updated by the GRO engine, so we
no longer need to reset it at recycle time.
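
For context, a minimal standalone sketch (plain userspace C, not kernel
code; toy_skb, toy_gro_merge and the numbers are invented for
illustration) of the truesize bookkeeping this change relies on: when GRO
steals data from a donor skb it also moves the corresponding truesize, so
the value left on the recycled skb is already accurate:

/*
 * Toy model of the accounting only; the real work happens in the GRO
 * merge paths of the kernel, not here.
 */
#include <assert.h>
#include <stdio.h>

struct toy_skb {
	unsigned int truesize;	/* memory charged for this buffer */
	unsigned int data_len;	/* payload carried by this buffer */
};

/* Merge "donor" into "head", moving the charged memory with the data. */
static void toy_gro_merge(struct toy_skb *head, struct toy_skb *donor,
			  unsigned int consumed_truesize)
{
	head->data_len += donor->data_len;
	head->truesize += consumed_truesize;
	donor->data_len = 0;
	donor->truesize -= consumed_truesize; /* donor keeps only its own overhead */
}

int main(void)
{
	struct toy_skb head = { .truesize = 768, .data_len = 1000 };
	struct toy_skb donor = { .truesize = 768, .data_len = 1000 };

	toy_gro_merge(&head, &donor, 512);

	/* The donor can be recycled as-is: its truesize already reflects
	 * only the buffer it still owns, no re-computation needed. */
	assert(donor.truesize == 256);
	printf("head truesize=%u donor truesize=%u\n",
	       head.truesize, donor.truesize);
	return 0;
}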
Signed-off-by: Paolo Abeni
---
 net/core/gro.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/net/core/gro.c b/net/core/gro.c
index a11b286d1495..d43d42215bdb 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -634,7 +634,6 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
 	skb->encapsulation = 0;
 	skb_shinfo(skb)->gso_type = 0;
-	skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
 	if (unlikely(skb->slow_gro)) {
 		skb_orphan(skb);
 		skb_ext_reset(skb);

From patchwork Tue Jan 18 15:24:19 2022
From: Paolo Abeni
To: netdev@vger.kernel.org
Cc: Eric Dumazet
Subject: [RFC PATCH 2/3] net: gro: minor optimization for dev_gro_receive()
Date: Tue, 18 Jan 2022 16:24:19 +0100
Message-Id: <35d722e246b7c4afb6afb03760df6f664db4ef05.1642519257.git.pabeni@redhat.com>

While inspecting some perf reports, I noticed that the compiler emits
suboptimal code for the napi CB initialization, fetching and storing the
memory for the flags bitfield multiple times. This is with gcc 10.3.1,
but I observed the same with older compiler versions.

We can help the compiler do a better job by initially setting the whole
bitfield area to 0 through a u32 alias. The generated code is a bit
smaller, with the same number of conditionals.

Before:
objdump -t net/core/gro.o | grep " F .text"
0000000000000bb0 l     F .text  0000000000000357 dev_gro_receive

After:
0000000000000bb0 l     F .text  000000000000033c dev_gro_receive
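
To make the trick easier to follow outside the kernel tree, here is a
minimal standalone sketch of the same pattern (plain userspace C; struct
toy_cb and its fields are invented and only mimic the napi_gro_cb layout
idea). A zero-length array marks the start of the region to clear, a
second one marks the end, and a single aligned store replaces a series of
individual bitfield assignments:

/* Zero-length arrays are a GNU C extension (gcc/clang); the kernel's
 * BUILD_BUG_ON is replaced by static_assert here. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct toy_cb {
	uint16_t count;
	uint16_t proto;

	uint32_t zeroed_start[0];	/* region cleared on every iteration */
	uint16_t remcsum_start;
	uint8_t  same_flow:1;
	uint8_t  encap_mark:1;
	uint8_t  csum_valid:1;
	uint8_t  csum_cnt:3;
	uint8_t  free:2;
	uint8_t  is_atomic:1;
	uint8_t  recursion:4;
	uint32_t zeroed_end[0];

	uint32_t csum;
};

#define zeroed_len (offsetof(struct toy_cb, zeroed_end) - \
		    offsetof(struct toy_cb, zeroed_start))

int main(void)
{
	struct toy_cb cb = { .same_flow = 1, .csum_cnt = 3, .remcsum_start = 14 };

	/* The region must collapse to exactly one 32-bit store. */
	static_assert(zeroed_len == sizeof(cb.zeroed_start[0]), "bad layout");
	cb.zeroed_start[0] = 0;

	printf("same_flow=%u csum_cnt=%u remcsum_start=%u\n",
	       cb.same_flow, cb.csum_cnt, cb.remcsum_start);
	return 0;
}

The static_assert also documents the constraint the patch depends on: if
a field is later added inside the region, the single-store optimization
must be revisited.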
Signed-off-by: Paolo Abeni
---
 include/net/gro.h | 13 +++++++++----
 net/core/gro.c    | 16 +++++-----------
 2 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/include/net/gro.h b/include/net/gro.h
index 8f75802d50fd..a068b27d341f 100644
--- a/include/net/gro.h
+++ b/include/net/gro.h
@@ -29,14 +29,17 @@ struct napi_gro_cb {
 	/* Number of segments aggregated. */
 	u16	count;
 
-	/* Start offset for remote checksum offload */
-	u16	gro_remcsum_start;
+	/* Used in ipv6_gro_receive() and foo-over-udp */
+	u16	proto;
 
 	/* jiffies when first packet was created/queued */
 	unsigned long age;
 
-	/* Used in ipv6_gro_receive() and foo-over-udp */
-	u16	proto;
+	/* portion of the cb set to zero at every gro iteration */
+	u32	zeroed_start[0];
+
+	/* Start offset for remote checksum offload */
+	u16	gro_remcsum_start;
 
 	/* This is non-zero if the packet may be of the same flow. */
 	u8	same_flow:1;
@@ -70,6 +73,8 @@ struct napi_gro_cb {
 	/* GRO is done by frag_list pointer chaining. */
 	u8	is_flist:1;
 
+	u32	zeroed_end[0];
+
 	/* used to support CHECKSUM_COMPLETE for tunneling protocols */
 	__wsum	csum;
 
diff --git a/net/core/gro.c b/net/core/gro.c
index d43d42215bdb..b9ebe9298731 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -435,6 +435,9 @@ static void gro_flush_oldest(struct napi_struct *napi, struct list_head *head)
 	napi_gro_complete(napi, oldest);
 }
 
+#define zeroed_len (offsetof(struct napi_gro_cb, zeroed_end) - \
+		    offsetof(struct napi_gro_cb, zeroed_start))
+
 static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
 {
 	u32 bucket = skb_get_hash_raw(skb) & (GRO_HASH_BUCKETS - 1);
@@ -459,29 +462,20 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
 		skb_set_network_header(skb, skb_gro_offset(skb));
 		skb_reset_mac_len(skb);
-		NAPI_GRO_CB(skb)->same_flow = 0;
+		BUILD_BUG_ON(zeroed_len != sizeof(NAPI_GRO_CB(skb)->zeroed_start[0]));
+		NAPI_GRO_CB(skb)->zeroed_start[0] = 0;
 		NAPI_GRO_CB(skb)->flush = skb_is_gso(skb) || skb_has_frag_list(skb);
-		NAPI_GRO_CB(skb)->free = 0;
-		NAPI_GRO_CB(skb)->encap_mark = 0;
-		NAPI_GRO_CB(skb)->recursion_counter = 0;
-		NAPI_GRO_CB(skb)->is_fou = 0;
 		NAPI_GRO_CB(skb)->is_atomic = 1;
-		NAPI_GRO_CB(skb)->gro_remcsum_start = 0;
 
 		/* Setup for GRO checksum validation */
 		switch (skb->ip_summed) {
 		case CHECKSUM_COMPLETE:
 			NAPI_GRO_CB(skb)->csum = skb->csum;
 			NAPI_GRO_CB(skb)->csum_valid = 1;
-			NAPI_GRO_CB(skb)->csum_cnt = 0;
 			break;
 		case CHECKSUM_UNNECESSARY:
 			NAPI_GRO_CB(skb)->csum_cnt = skb->csum_level + 1;
-			NAPI_GRO_CB(skb)->csum_valid = 0;
 			break;
-		default:
-			NAPI_GRO_CB(skb)->csum_cnt = 0;
-			NAPI_GRO_CB(skb)->csum_valid = 0;
 		}
 
 		pp = INDIRECT_CALL_INET(ptype->callbacks.gro_receive,
From patchwork Tue Jan 18 15:24:20 2022
From: Paolo Abeni
To: netdev@vger.kernel.org
Cc: Eric Dumazet
Subject: [RFC PATCH 3/3] net: gro: register gso and gro offload on separate lists
Date: Tue, 18 Jan 2022 16:24:20 +0100
Message-Id: <049644de738a9fb91db660af1849bc1420baf971.1642519257.git.pabeni@redhat.com>

So that we know each element in gro_list has valid GRO callbacks (and the
same for the GSO list). This allows dropping a bunch of conditionals in
the fast path.

Before:
objdump -t net/core/gro.o | grep " F .text"
0000000000000bb0 l     F .text  000000000000033c dev_gro_receive

After:
0000000000000bb0 l     F .text  0000000000000325 dev_gro_receive
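
The idea can be summarized with a small standalone sketch (plain
userspace C, not kernel code; toy_offload, list_node and the type values
are invented, and the tiny single-linked list stands in for
<linux/list.h>): one registration object carries two independent link
nodes, so it can sit on a "gro" list and a "gso" list at the same time,
and each list only ever contains entries whose callback exists:

#include <stddef.h>
#include <stdio.h>

struct list_node {
	struct list_node *next;
};

struct toy_offload {
	unsigned short type;
	void (*gro_receive)(void);	/* may be NULL */
	void (*gso_segment)(void);	/* may be NULL */
	struct list_node gro_link;	/* linkage on the GRO list */
	struct list_node gso_link;	/* linkage on the GSO list */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void list_push(struct list_node *head, struct list_node *node)
{
	node->next = head->next;
	head->next = node;
}

static struct list_node gro_head, gso_head;

/* Mirrors the shape of dev_add_offload(): link only onto the lists
 * whose callbacks are actually provided (the real code additionally
 * keeps the lists ordered by priority, omitted here). */
static void toy_add_offload(struct toy_offload *po)
{
	if (po->gro_receive)
		list_push(&gro_head, &po->gro_link);
	if (po->gso_segment)
		list_push(&gso_head, &po->gso_link);
}

static void dummy(void) { }

int main(void)
{
	static struct toy_offload ip = { .type = 0x0800,
					 .gro_receive = dummy,
					 .gso_segment = dummy };
	static struct toy_offload gso_only = { .type = 0x86dd,
					       .gso_segment = dummy };

	toy_add_offload(&ip);
	toy_add_offload(&gso_only);

	/* Walking the GRO list no longer needs a NULL-callback check. */
	for (struct list_node *n = gro_head.next; n; n = n->next) {
		struct toy_offload *po =
			container_of(n, struct toy_offload, gro_link);
		printf("gro entry type=0x%04x\n", (unsigned)po->type);
	}
	return 0;
}

Only the 0x0800 entry shows up on the GRO walk; the GSO-only entry never
enters that list, which is exactly why the per-entry callback checks can
be dropped from the fast path.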
Signed-off-by: Paolo Abeni
---
 include/linux/netdevice.h |  3 +-
 net/core/gro.c            | 90 +++++++++++++++++++++++----------------
 2 files changed, 56 insertions(+), 37 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 3213c7227b59..406cb457d788 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2564,7 +2564,8 @@ struct packet_offload {
 	__be16			 type;	/* This is really htons(ether_type). */
 	u16			 priority;
 	struct offload_callbacks callbacks;
-	struct list_head	 list;
+	struct list_head	 gro_list;
+	struct list_head	 gso_list;
 };
 
 /* often modified stats are per-CPU, other are shared (netdev->stats) */
diff --git a/net/core/gro.c b/net/core/gro.c
index b9ebe9298731..5d7bc6813a7d 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -10,10 +10,21 @@
 #define GRO_MAX_HEAD (MAX_HEADER + 128)
 
 static DEFINE_SPINLOCK(offload_lock);
-static struct list_head offload_base __read_mostly = LIST_HEAD_INIT(offload_base);
+static struct list_head gro_offload_base __read_mostly = LIST_HEAD_INIT(gro_offload_base);
+static struct list_head gso_offload_base __read_mostly = LIST_HEAD_INIT(gso_offload_base);
 /* Maximum number of GRO_NORMAL skbs to batch up for list-RX */
 int gro_normal_batch __read_mostly = 8;
 
+#define offload_list_insert(head, poff, list)				\
+({									\
+	struct packet_offload *elem;					\
+	list_for_each_entry(elem, head, list) {				\
+		if ((poff)->priority < elem->priority)			\
+			break;						\
+	}								\
+	list_add_rcu(&(poff)->list, elem->list.prev);			\
+})
+
 /**
  *	dev_add_offload - register offload handlers
  *	@po: protocol offload declaration
@@ -28,18 +39,33 @@ int gro_normal_batch __read_mostly = 8;
  */
 void dev_add_offload(struct packet_offload *po)
 {
-	struct packet_offload *elem;
-
 	spin_lock(&offload_lock);
-	list_for_each_entry(elem, &offload_base, list) {
-		if (po->priority < elem->priority)
-			break;
-	}
-	list_add_rcu(&po->list, elem->list.prev);
+	if (po->callbacks.gro_receive && po->callbacks.gro_complete)
+		offload_list_insert(&gro_offload_base, po, gro_list);
+	else if (po->callbacks.gro_complete)
+		pr_warn("missing gro_receive callback");
+	else if (po->callbacks.gro_receive)
+		pr_warn("missing gro_complete callback");
+
+	if (po->callbacks.gso_segment)
+		offload_list_insert(&gso_offload_base, po, gso_list);
 	spin_unlock(&offload_lock);
 }
 EXPORT_SYMBOL(dev_add_offload);
 
+#define offload_list_remove(type, head, poff, list)			\
+({									\
+	struct packet_offload *elem;					\
+	list_for_each_entry(elem, head, list) {				\
+		if ((poff) == elem) {					\
+			list_del_rcu(&(poff)->list);			\
+			break;						\
+		}							\
+	}								\
+	if (elem != (poff))						\
+		pr_warn("dev_remove_offload: %p not found in %s list\n", (poff), type); \
+})
+
 /**
  *	__dev_remove_offload - remove offload handler
  *	@po: packet offload declaration
@@ -55,20 +81,12 @@ EXPORT_SYMBOL(dev_add_offload);
  */
 static void __dev_remove_offload(struct packet_offload *po)
 {
-	struct list_head *head = &offload_base;
-	struct packet_offload *po1;
-
 	spin_lock(&offload_lock);
+	if (po->callbacks.gro_receive)
+		offload_list_remove("gro", &gro_offload_base, po, gro_list);
 
-	list_for_each_entry(po1, head, list) {
-		if (po == po1) {
-			list_del_rcu(&po->list);
-			goto out;
-		}
-	}
-
-	pr_warn("dev_remove_offload: %p not found\n", po);
-out:
+	if (po->callbacks.gso_segment)
+		offload_list_remove("gso", &gso_offload_base, po, gso_list);
 	spin_unlock(&offload_lock);
 }
 
@@ -111,8 +129,8 @@ struct sk_buff *skb_mac_gso_segment(struct sk_buff *skb,
 	__skb_pull(skb, vlan_depth);
 
 	rcu_read_lock();
-	list_for_each_entry_rcu(ptype, &offload_base, list) {
-		if (ptype->type == type && ptype->callbacks.gso_segment) {
+	list_for_each_entry_rcu(ptype, &gso_offload_base, gso_list) {
+		if (ptype->type == type) {
 			segs = ptype->callbacks.gso_segment(skb, features);
 			break;
 		}
@@ -250,7 +268,7 @@ static void napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb)
 {
 	struct packet_offload *ptype;
 	__be16 type = skb->protocol;
-	struct list_head *head = &offload_base;
+	struct list_head *head = &gro_offload_base;
 	int err = -ENOENT;
 
 	BUILD_BUG_ON(sizeof(struct napi_gro_cb) > sizeof(skb->cb));
@@ -261,8 +279,8 @@ static void napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb)
 	}
 
 	rcu_read_lock();
-	list_for_each_entry_rcu(ptype, head, list) {
-		if (ptype->type != type || !ptype->callbacks.gro_complete)
+	list_for_each_entry_rcu(ptype, head, gro_list) {
+		if (ptype->type != type)
 			continue;
 
 		err = INDIRECT_CALL_INET(ptype->callbacks.gro_complete,
@@ -273,7 +291,7 @@ static void napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb)
 	rcu_read_unlock();
 
 	if (err) {
-		WARN_ON(&ptype->list == head);
+		WARN_ON(&ptype->gro_list == head);
 		kfree_skb(skb);
 		return;
 	}
@@ -442,7 +460,7 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
 {
 	u32 bucket = skb_get_hash_raw(skb) & (GRO_HASH_BUCKETS - 1);
 	struct gro_list *gro_list = &napi->gro_hash[bucket];
-	struct list_head *head = &offload_base;
+	struct list_head *head = &gro_offload_base;
 	struct packet_offload *ptype;
 	__be16 type = skb->protocol;
 	struct sk_buff *pp = NULL;
@@ -456,8 +474,8 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
 	gro_list_prepare(&gro_list->list, skb);
 
 	rcu_read_lock();
-	list_for_each_entry_rcu(ptype, head, list) {
-		if (ptype->type != type || !ptype->callbacks.gro_receive)
+	list_for_each_entry_rcu(ptype, head, gro_list) {
+		if (ptype->type != type)
 			continue;
 
 		skb_set_network_header(skb, skb_gro_offset(skb));
@@ -485,7 +503,7 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
 	}
 	rcu_read_unlock();
 
-	if (&ptype->list == head)
+	if (&ptype->gro_list == head)
 		goto normal;
 
 	if (PTR_ERR(pp) == -EINPROGRESS) {
@@ -541,11 +559,11 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
 struct packet_offload *gro_find_receive_by_type(__be16 type)
 {
-	struct list_head *offload_head = &offload_base;
+	struct list_head *offload_head = &gro_offload_base;
 	struct packet_offload *ptype;
 
-	list_for_each_entry_rcu(ptype, offload_head, list) {
-		if (ptype->type != type || !ptype->callbacks.gro_receive)
+	list_for_each_entry_rcu(ptype, offload_head, gro_list) {
+		if (ptype->type != type)
 			continue;
 		return ptype;
 	}
@@ -555,11 +573,11 @@ EXPORT_SYMBOL(gro_find_receive_by_type);
 struct packet_offload *gro_find_complete_by_type(__be16 type)
 {
-	struct list_head *offload_head = &offload_base;
+	struct list_head *offload_head = &gro_offload_base;
 	struct packet_offload *ptype;
 
-	list_for_each_entry_rcu(ptype, offload_head, list) {
-		if (ptype->type != type || !ptype->callbacks.gro_complete)
+	list_for_each_entry_rcu(ptype, offload_head, gro_list) {
+		if (ptype->type != type)
 			continue;
 		return ptype;
 	}