From patchwork Thu Feb 3 15:48:21 2022
X-Patchwork-Submitter: Paolo Abeni
X-Patchwork-Id: 12734294
X-Patchwork-Delegate: kuba@kernel.org
X-Mailing-List: netdev@vger.kernel.org
From: Paolo Abeni
To: netdev@vger.kernel.org
Cc: "David S. Miller", Jakub Kicinski, Alexander Lobakin, Eric Dumazet
Subject: [PATCH net-next 1/3] net: gro: avoid re-computing truesize twice on recycle
Date: Thu, 3 Feb 2022 16:48:21 +0100

After commit 5e10da5385d2 ("skbuff: allow 'slow_gro' for skb carring sock
reference") and commit af352460b465 ("net: fix GRO skb truesize update"),
the GRO engine properly updates the truesize of freed skbs, so we no
longer need to reset it at recycle time.

Signed-off-by: Paolo Abeni
---
 net/core/gro.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/net/core/gro.c b/net/core/gro.c
index a11b286d1495..d43d42215bdb 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -634,7 +634,6 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
 
 	skb->encapsulation = 0;
 	skb_shinfo(skb)->gso_type = 0;
-	skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
 	if (unlikely(skb->slow_gro)) {
 		skb_orphan(skb);
 		skb_ext_reset(skb);

From patchwork Thu Feb 3 15:48:22 2022
X-Patchwork-Submitter: Paolo Abeni
X-Patchwork-Id: 12734295
X-Patchwork-Delegate: kuba@kernel.org
From: Paolo Abeni
To: netdev@vger.kernel.org
Cc: "David S. Miller", Jakub Kicinski, Alexander Lobakin, Eric Dumazet
Subject: [PATCH net-next 2/3] net: gro: minor optimization for dev_gro_receive()
Date: Thu, 3 Feb 2022 16:48:22 +0100
Message-Id: <2a6db6d6ca415cb35cc7b3e4d9424baf0516d782.1643902526.git.pabeni@redhat.com>

While inspecting some perf reports, I noticed that the compiler emits
suboptimal code for the napi CB initialization, fetching and storing the
memory for the flags bitfield multiple times. This is with gcc 10.3.1,
but I observed the same with older compiler versions.

We can help the compiler do a better job by clearing several fields at
once through a u32 alias. The generated code is noticeably smaller, with
the same number of conditionals.

Before:
objdump -t net/core/gro.o | grep " F .text"
0000000000000bb0 l     F .text	0000000000000357 dev_gro_receive

After:
0000000000000bb0 l     F .text	000000000000033c dev_gro_receive

RFC -> v1:
 - use __struct_group to delimit the zeroed area (Alexander)

Signed-off-by: Paolo Abeni
---
 include/net/gro.h | 52 +++++++++++++++++++++++++----------------------
 net/core/gro.c    | 18 +++++++---------
 2 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/include/net/gro.h b/include/net/gro.h
index 8f75802d50fd..fa1bb0f0ad28 100644
--- a/include/net/gro.h
+++ b/include/net/gro.h
@@ -29,46 +29,50 @@ struct napi_gro_cb {
 	/* Number of segments aggregated. */
 	u16	count;
 
-	/* Start offset for remote checksum offload */
-	u16	gro_remcsum_start;
+	/* Used in ipv6_gro_receive() and foo-over-udp */
+	u16	proto;
 
 	/* jiffies when first packet was created/queued */
 	unsigned long age;
 
-	/* Used in ipv6_gro_receive() and foo-over-udp */
-	u16	proto;
+	/* portion of the cb set to zero at every gro iteration */
+	__struct_group(/* no tag */, zeroed, /* no attrs */,
+
+		/* Start offset for remote checksum offload */
+		u16 gro_remcsum_start;
 
-	/* This is non-zero if the packet may be of the same flow. */
-	u8	same_flow:1;
+		/* This is non-zero if the packet may be of the same flow. */
+		u8 same_flow:1;
 
-	/* Used in tunnel GRO receive */
-	u8	encap_mark:1;
+		/* Used in tunnel GRO receive */
+		u8 encap_mark:1;
 
-	/* GRO checksum is valid */
-	u8	csum_valid:1;
+		/* GRO checksum is valid */
+		u8 csum_valid:1;
 
-	/* Number of checksums via CHECKSUM_UNNECESSARY */
-	u8	csum_cnt:3;
+		/* Number of checksums via CHECKSUM_UNNECESSARY */
+		u8 csum_cnt:3;
 
-	/* Free the skb? */
-	u8	free:2;
+		/* Free the skb? */
+		u8 free:2;
 #define NAPI_GRO_FREE		  1
 #define NAPI_GRO_FREE_STOLEN_HEAD 2
 
-	/* Used in foo-over-udp, set in udp[46]_gro_receive */
-	u8	is_ipv6:1;
+		/* Used in foo-over-udp, set in udp[46]_gro_receive */
+		u8 is_ipv6:1;
 
-	/* Used in GRE, set in fou/gue_gro_receive */
-	u8	is_fou:1;
+		/* Used in GRE, set in fou/gue_gro_receive */
+		u8 is_fou:1;
 
-	/* Used to determine if flush_id can be ignored */
-	u8	is_atomic:1;
+		/* Used to determine if flush_id can be ignored */
+		u8 is_atomic:1;
 
-	/* Number of gro_receive callbacks this packet already went through */
-	u8	recursion_counter:4;
+		/* Number of gro_receive callbacks this packet already went through */
+		u8 recursion_counter:4;
 
-	/* GRO is done by frag_list pointer chaining. */
-	u8	is_flist:1;
+		/* GRO is done by frag_list pointer chaining. */
+		u8 is_flist:1;
+	);
 
 	/* used to support CHECKSUM_COMPLETE for tunneling protocols */
 	__wsum	csum;
diff --git a/net/core/gro.c b/net/core/gro.c
index d43d42215bdb..fc56be9408c7 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -435,6 +435,9 @@ static void gro_flush_oldest(struct napi_struct *napi, struct list_head *head)
 	napi_gro_complete(napi, oldest);
 }
 
+#define zeroed_len \
+	sizeof_field(struct napi_gro_cb, zeroed)
+
 static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
 {
 	u32 bucket = skb_get_hash_raw(skb) & (GRO_HASH_BUCKETS - 1);
@@ -459,29 +462,22 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
 		skb_set_network_header(skb, skb_gro_offset(skb));
 		skb_reset_mac_len(skb);
-		NAPI_GRO_CB(skb)->same_flow = 0;
+		BUILD_BUG_ON(sizeof_field(struct napi_gro_cb, zeroed) != sizeof(u32));
+		BUILD_BUG_ON(!IS_ALIGNED(offsetof(struct napi_gro_cb, zeroed),
+					 sizeof(u32))); /* Avoid slow unaligned acc */
+		*(u32 *)&NAPI_GRO_CB(skb)->zeroed = 0;
 		NAPI_GRO_CB(skb)->flush = skb_is_gso(skb) || skb_has_frag_list(skb);
-		NAPI_GRO_CB(skb)->free = 0;
-		NAPI_GRO_CB(skb)->encap_mark = 0;
-		NAPI_GRO_CB(skb)->recursion_counter = 0;
-		NAPI_GRO_CB(skb)->is_fou = 0;
 		NAPI_GRO_CB(skb)->is_atomic = 1;
-		NAPI_GRO_CB(skb)->gro_remcsum_start = 0;
 
 		/* Setup for GRO checksum validation */
 		switch (skb->ip_summed) {
 		case CHECKSUM_COMPLETE:
 			NAPI_GRO_CB(skb)->csum = skb->csum;
 			NAPI_GRO_CB(skb)->csum_valid = 1;
-			NAPI_GRO_CB(skb)->csum_cnt = 0;
 			break;
 		case CHECKSUM_UNNECESSARY:
 			NAPI_GRO_CB(skb)->csum_cnt = skb->csum_level + 1;
-			NAPI_GRO_CB(skb)->csum_valid = 0;
 			break;
-		default:
-			NAPI_GRO_CB(skb)->csum_cnt = 0;
-			NAPI_GRO_CB(skb)->csum_valid = 0;
 		}
 
 		pp = INDIRECT_CALL_INET(ptype->callbacks.gro_receive,

From patchwork Thu Feb 3 15:48:23 2022
X-Patchwork-Submitter: Paolo Abeni
X-Patchwork-Id: 12734296
X-Patchwork-Delegate: kuba@kernel.org
From: Paolo Abeni
To: netdev@vger.kernel.org
Cc: "David S. Miller", Jakub Kicinski, Alexander Lobakin, Eric Dumazet
Subject: [PATCH net-next 3/3] net: gro: register gso and gro offload on separate lists
Date: Thu, 3 Feb 2022 16:48:23 +0100
Message-Id: <550566fedb425275bb9d351a565a0220f67d498b.1643902527.git.pabeni@redhat.com>

With separate lists, each element in gro_list is known to have valid gro
callbacks (and likewise for the gso list). This allows dropping a bunch
of conditionals from the fast path.

Before:
objdump -t net/core/gro.o | grep " F .text"
0000000000000bb0 l     F .text	000000000000033c dev_gro_receive

After:
0000000000000bb0 l     F .text	0000000000000325 dev_gro_receive

Signed-off-by: Paolo Abeni
---
 include/linux/netdevice.h |  3 +-
 net/core/gro.c            | 90 +++++++++++++++++++++++----------------
 2 files changed, 56 insertions(+), 37 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 3213c7227b59..406cb457d788 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2564,7 +2564,8 @@ struct packet_offload {
 	__be16			 type;	/* This is really htons(ether_type). */
 	u16			 priority;
 	struct offload_callbacks callbacks;
-	struct list_head	 list;
+	struct list_head	 gro_list;
+	struct list_head	 gso_list;
 };
 
 /* often modified stats are per-CPU, other are shared (netdev->stats) */
diff --git a/net/core/gro.c b/net/core/gro.c
index fc56be9408c7..bd619d494fdd 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -10,10 +10,21 @@
 #define GRO_MAX_HEAD (MAX_HEADER + 128)
 
 static DEFINE_SPINLOCK(offload_lock);
-static struct list_head offload_base __read_mostly = LIST_HEAD_INIT(offload_base);
+static struct list_head gro_offload_base __read_mostly = LIST_HEAD_INIT(gro_offload_base);
+static struct list_head gso_offload_base __read_mostly = LIST_HEAD_INIT(gso_offload_base);
 
 /* Maximum number of GRO_NORMAL skbs to batch up for list-RX */
 int gro_normal_batch __read_mostly = 8;
 
+#define offload_list_insert(head, poff, list)				\
+({									\
+	struct packet_offload *elem;					\
+	list_for_each_entry(elem, head, list) {				\
+		if ((poff)->priority < elem->priority)			\
+			break;						\
+	}								\
+	list_add_rcu(&(poff)->list, elem->list.prev);			\
+})
+
 /**
  *	dev_add_offload - register offload handlers
  *	@po: protocol offload declaration
@@ -28,18 +39,33 @@ int gro_normal_batch __read_mostly = 8;
  */
 void dev_add_offload(struct packet_offload *po)
 {
-	struct packet_offload *elem;
-
 	spin_lock(&offload_lock);
-	list_for_each_entry(elem, &offload_base, list) {
-		if (po->priority < elem->priority)
-			break;
-	}
-	list_add_rcu(&po->list, elem->list.prev);
+	if (po->callbacks.gro_receive && po->callbacks.gro_complete)
+		offload_list_insert(&gro_offload_base, po, gro_list);
+	else if (po->callbacks.gro_complete)
+		pr_warn("missing gro_receive callback");
+	else if (po->callbacks.gro_receive)
+		pr_warn("missing gro_complete callback");
+
+	if (po->callbacks.gso_segment)
+		offload_list_insert(&gso_offload_base, po, gso_list);
 	spin_unlock(&offload_lock);
 }
 EXPORT_SYMBOL(dev_add_offload);
 
+#define offload_list_remove(type, head, poff, list)			\
+({									\
+	struct packet_offload *elem;					\
+	list_for_each_entry(elem, head, list) {				\
+		if ((poff) == elem) {					\
+			list_del_rcu(&(poff)->list);			\
+			break;						\
+		}							\
+	}								\
+	if (elem != (poff))						\
+		pr_warn("dev_remove_offload: %p not found in %s list\n", \
+			(poff), type);					\
+})
+
 /**
  *	__dev_remove_offload	 - remove offload handler
  *	@po: packet offload declaration
@@ -55,20 +81,12 @@ EXPORT_SYMBOL(dev_add_offload);
  */
 static void __dev_remove_offload(struct packet_offload *po)
 {
-	struct list_head *head = &offload_base;
-	struct packet_offload *po1;
-
 	spin_lock(&offload_lock);
+	if (po->callbacks.gro_receive)
+		offload_list_remove("gro", &gro_offload_base, po, gro_list);
 
-	list_for_each_entry(po1, head, list) {
-		if (po == po1) {
-			list_del_rcu(&po->list);
-			goto out;
-		}
-	}
-
-	pr_warn("dev_remove_offload: %p not found\n", po);
-out:
+	if (po->callbacks.gso_segment)
+		offload_list_remove("gso", &gso_offload_base, po, gso_list);
 	spin_unlock(&offload_lock);
 }
@@ -111,8 +129,8 @@ struct sk_buff *skb_mac_gso_segment(struct sk_buff *skb,
 	__skb_pull(skb, vlan_depth);
 
 	rcu_read_lock();
-	list_for_each_entry_rcu(ptype, &offload_base, list) {
-		if (ptype->type == type && ptype->callbacks.gso_segment) {
+	list_for_each_entry_rcu(ptype, &gso_offload_base, gso_list) {
+		if (ptype->type == type) {
 			segs = ptype->callbacks.gso_segment(skb, features);
 			break;
 		}
@@ -250,7 +268,7 @@ static void napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb)
 {
 	struct packet_offload *ptype;
 	__be16 type = skb->protocol;
-	struct list_head *head = &offload_base;
+	struct list_head *head = &gro_offload_base;
 	int err = -ENOENT;
 
 	BUILD_BUG_ON(sizeof(struct napi_gro_cb) > sizeof(skb->cb));
@@ -261,8 +279,8 @@ static void napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb)
 	}
 
 	rcu_read_lock();
-	list_for_each_entry_rcu(ptype, head, list) {
-		if (ptype->type != type || !ptype->callbacks.gro_complete)
+	list_for_each_entry_rcu(ptype, head, gro_list) {
+		if (ptype->type != type)
 			continue;
 
 		err = INDIRECT_CALL_INET(ptype->callbacks.gro_complete,
@@ -273,7 +291,7 @@ static void napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb)
 	rcu_read_unlock();
 
 	if (err) {
-		WARN_ON(&ptype->list == head);
+		WARN_ON(&ptype->gro_list == head);
 		kfree_skb(skb);
 		return;
 	}
@@ -442,7 +460,7 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
 {
 	u32 bucket = skb_get_hash_raw(skb) & (GRO_HASH_BUCKETS - 1);
 	struct gro_list *gro_list = &napi->gro_hash[bucket];
-	struct list_head *head = &offload_base;
+	struct list_head *head = &gro_offload_base;
 	struct packet_offload *ptype;
 	__be16 type = skb->protocol;
 	struct sk_buff *pp = NULL;
@@ -456,8 +474,8 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
 	gro_list_prepare(&gro_list->list, skb);
 
 	rcu_read_lock();
-	list_for_each_entry_rcu(ptype, head, list) {
-		if (ptype->type != type || !ptype->callbacks.gro_receive)
+	list_for_each_entry_rcu(ptype, head, gro_list) {
+		if (ptype->type != type)
 			continue;
 
 		skb_set_network_header(skb, skb_gro_offset(skb));
@@ -487,7 +505,7 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
 	}
 	rcu_read_unlock();
 
-	if (&ptype->list == head)
+	if (&ptype->gro_list == head)
 		goto normal;
 
 	if (PTR_ERR(pp) == -EINPROGRESS) {
@@ -543,11 +561,11 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
 
 struct packet_offload *gro_find_receive_by_type(__be16 type)
 {
-	struct list_head *offload_head = &offload_base;
+	struct list_head *offload_head = &gro_offload_base;
 	struct packet_offload *ptype;
 
-	list_for_each_entry_rcu(ptype, offload_head, list) {
-		if (ptype->type != type || !ptype->callbacks.gro_receive)
+	list_for_each_entry_rcu(ptype, offload_head, gro_list) {
+		if (ptype->type != type)
 			continue;
 		return ptype;
 	}
@@ -557,11 +575,11 @@ EXPORT_SYMBOL(gro_find_receive_by_type);
 
 struct packet_offload *gro_find_complete_by_type(__be16 type)
 {
-	struct list_head *offload_head = &offload_base;
+	struct list_head *offload_head = &gro_offload_base;
 	struct packet_offload *ptype;
 
-	list_for_each_entry_rcu(ptype, offload_head, list) {
-		if (ptype->type != type || !ptype->callbacks.gro_complete)
+	list_for_each_entry_rcu(ptype, offload_head, gro_list) {
+		if (ptype->type != type)
 			continue;
 		return ptype;
 	}