From patchwork Fri Feb 28 20:15:20 2025
X-Patchwork-Submitter: Eric Woudstra
X-Patchwork-Id: 13997059
X-Patchwork-Delegate: kuba@kernel.org
From: Eric Woudstra
To: Michal Ostrowski, Andrew Lunn, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Simon Horman, Pablo Neira Ayuso,
    Jozsef Kadlecsik, Jiri Pirko, Ivan Vecera, Roopa Prabhu,
    Nikolay Aleksandrov, Matthias Brugger, AngeloGioacchino Del Regno,
    Kuniyuki Iwashima, Sebastian Andrzej Siewior, Ahmed Zaki,
    Alexander Lobakin, Vladimir Oltean, "Frank Wunderlich", Daniel Golle
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    netfilter-devel@vger.kernel.org, coreteam@netfilter.org,
    bridge@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
    linux-mediatek@lists.infradead.org, linux-hardening@vger.kernel.org,
    Kees Cook, "Gustavo A. R. Silva", Eric Woudstra
Subject: [PATCH v8 net-next 02/15] netfilter: nf_flow_table_offload: Add nf_flow_encap_push() for xmit direct
Date: Fri, 28 Feb 2025 21:15:20 +0100
Message-ID: <20250228201533.23836-3-ericwouds@gmail.com>
In-Reply-To: <20250228201533.23836-1-ericwouds@gmail.com>
References: <20250228201533.23836-1-ericwouds@gmail.com>

Loosely based on wenxu's patches: "nf_flow_table_offload: offload the
vlan/PPPoE encap in the flowtable". Fixed handling of double-VLAN and
PPPoE packets, almost entirely rewriting the patch.

After this patch it is possible to transmit packets in the fastpath with
outgoing encapsulations, without using vlan and/or pppoe devices. This
enables more kinds of network setups. One example is using bridge VLAN
tagging to egress VLAN-tagged packets via the forward fastpath. Another
is passing 802.1q-tagged packets through a bridge using the bridge
fastpath.

This also makes the software fastpath more similar to the hardware
offloaded fastpath, where encapsulations are also pushed.
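As a side note for readers unfamiliar with PPPoE session framing, below is
a standalone userspace sketch (not kernel code; the struct, session id and
packet size are made-up illustration values) of the 8-byte encapsulation
that nf_flow_ppoe_push(), added by this patch, writes in front of an IP
packet:

/*
 * Standalone userspace sketch: the 8-byte PPPoE session encapsulation
 * prepended to an IP packet.  Layout follows RFC 2516 (0x11 = version 1,
 * type 1; PPP protocol 0x0021 = IPv4, 0x0057 = IPv6).
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct pppoe_ses_hdr {
	uint8_t  vertype;  /* version (4 bits) | type (4 bits) */
	uint8_t  code;     /* 0x00 for session data */
	uint16_t sid;      /* session id, network byte order */
	uint16_t length;   /* PPP payload length, incl. 2-byte proto */
	uint16_t proto;    /* PPP protocol following the header */
} __attribute__((packed));

int main(void)
{
	uint8_t ip_packet[60] = { 0x45 };                /* placeholder IPv4 packet */
	uint8_t frame[sizeof(struct pppoe_ses_hdr) + sizeof(ip_packet)];
	struct pppoe_ses_hdr hdr = {
		.vertype = 0x11,
		.code    = 0x00,
		.sid     = htons(0x000a),                /* example session id */
		.length  = htons(sizeof(ip_packet) + 2), /* +2 for the proto field */
		.proto   = htons(0x0021),                /* PPP_IP (IPv4) */
	};

	/* Prepend the header, as nf_flow_ppoe_push() does with __skb_push()
	 * followed by direct field writes.
	 */
	memcpy(frame, &hdr, sizeof(hdr));
	memcpy(frame + sizeof(hdr), ip_packet, sizeof(ip_packet));

	printf("PPPoE session frame: %zu bytes, length field %u\n",
	       sizeof(frame), (unsigned int)ntohs(hdr.length));
	return 0;
}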
After applying this patch, info->outdev always equals info->hw_outdev, so
the netfilter code can be further cleaned up by removing:

* hw_outdev from struct nft_forward_info
* out.hw_ifindex from struct nf_flow_route
* out.hw_ifidx from struct flow_offload_tuple

Reviewed-by: Nikolay Aleksandrov
Signed-off-by: Eric Woudstra
---
 net/netfilter/nf_flow_table_ip.c | 96 +++++++++++++++++++++++++++++++-
 net/netfilter/nft_flow_offload.c |  6 +-
 2 files changed, 96 insertions(+), 6 deletions(-)

diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
index 8cd4cf7ae211..d0c3c459c4d2 100644
--- a/net/netfilter/nf_flow_table_ip.c
+++ b/net/netfilter/nf_flow_table_ip.c
@@ -306,6 +306,92 @@ static bool nf_flow_skb_encap_protocol(struct sk_buff *skb, __be16 proto,
 	return false;
 }
 
+static int nf_flow_vlan_inner_push(struct sk_buff *skb, __be16 proto, u16 id)
+{
+	struct vlan_hdr *vhdr;
+
+	if (skb_cow_head(skb, VLAN_HLEN))
+		return -1;
+
+	__skb_push(skb, VLAN_HLEN);
+	skb_reset_network_header(skb);
+
+	vhdr = (struct vlan_hdr *)(skb->data);
+	vhdr->h_vlan_TCI = htons(id);
+	vhdr->h_vlan_encapsulated_proto = skb->protocol;
+	skb->protocol = proto;
+
+	return 0;
+}
+
+static int nf_flow_ppoe_push(struct sk_buff *skb, u16 id)
+{
+	struct ppp_hdr {
+		struct pppoe_hdr hdr;
+		__be16 proto;
+	} *ph;
+	int data_len = skb->len + 2;
+	__be16 proto;
+
+	if (skb_cow_head(skb, PPPOE_SES_HLEN))
+		return -1;
+
+	if (skb->protocol == htons(ETH_P_IP))
+		proto = htons(PPP_IP);
+	else if (skb->protocol == htons(ETH_P_IPV6))
+		proto = htons(PPP_IPV6);
+	else
+		return -1;
+
+	__skb_push(skb, PPPOE_SES_HLEN);
+	skb_reset_network_header(skb);
+
+	ph = (struct ppp_hdr *)(skb->data);
+	ph->hdr.ver = 1;
+	ph->hdr.type = 1;
+	ph->hdr.code = 0;
+	ph->hdr.sid = htons(id);
+	ph->hdr.length = htons(data_len);
+	ph->proto = proto;
+	skb->protocol = htons(ETH_P_PPP_SES);
+
+	return 0;
+}
+
+static int nf_flow_encap_push(struct sk_buff *skb,
+			      struct flow_offload_tuple_rhash *tuplehash,
+			      unsigned short *type)
+{
+	int i = 0, ret = 0;
+
+	if (!tuplehash->tuple.encap_num)
+		return 0;
+
+	if (tuplehash->tuple.encap[i].proto == htons(ETH_P_8021Q) ||
+	    tuplehash->tuple.encap[i].proto == htons(ETH_P_8021AD)) {
+		__vlan_hwaccel_put_tag(skb, tuplehash->tuple.encap[i].proto,
+				       tuplehash->tuple.encap[i].id);
+		i++;
+		if (i >= tuplehash->tuple.encap_num)
+			return 0;
+	}
+
+	switch (tuplehash->tuple.encap[i].proto) {
+	case htons(ETH_P_8021Q):
+		*type = ETH_P_8021Q;
+		ret = nf_flow_vlan_inner_push(skb,
+					      tuplehash->tuple.encap[i].proto,
+					      tuplehash->tuple.encap[i].id);
+		break;
+	case htons(ETH_P_PPP_SES):
+		*type = ETH_P_PPP_SES;
+		ret = nf_flow_ppoe_push(skb,
+					tuplehash->tuple.encap[i].id);
+		break;
+	}
+	return ret;
+}
+
 static void nf_flow_encap_pop(struct sk_buff *skb,
 			      struct flow_offload_tuple_rhash *tuplehash)
 {
@@ -335,6 +421,7 @@ static void nf_flow_encap_pop(struct sk_buff *skb,
 
 static unsigned int nf_flow_queue_xmit(struct net *net, struct sk_buff *skb,
 				       const struct flow_offload_tuple_rhash *tuplehash,
+				       struct flow_offload_tuple_rhash *other_tuplehash,
 				       unsigned short type)
 {
 	struct net_device *outdev;
@@ -343,6 +430,9 @@ static unsigned int nf_flow_queue_xmit(struct net *net, struct sk_buff *skb,
 	if (!outdev)
 		return NF_DROP;
 
+	if (nf_flow_encap_push(skb, other_tuplehash, &type) < 0)
+		return NF_DROP;
+
 	skb->dev = outdev;
 	dev_hard_header(skb, skb->dev, type, tuplehash->tuple.out.h_dest,
 			tuplehash->tuple.out.h_source, skb->len);
@@ -462,7 +552,8 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 		ret = NF_STOLEN;
 		break;
 	case FLOW_OFFLOAD_XMIT_DIRECT:
-		ret = nf_flow_queue_xmit(state->net, skb, tuplehash, ETH_P_IP);
+		ret = nf_flow_queue_xmit(state->net, skb, tuplehash,
+					 &flow->tuplehash[!dir], ETH_P_IP);
 		if (ret == NF_DROP)
 			flow_offload_teardown(flow);
 		break;
@@ -757,7 +848,8 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 		ret = NF_STOLEN;
 		break;
 	case FLOW_OFFLOAD_XMIT_DIRECT:
-		ret = nf_flow_queue_xmit(state->net, skb, tuplehash, ETH_P_IPV6);
+		ret = nf_flow_queue_xmit(state->net, skb, tuplehash,
+					 &flow->tuplehash[!dir], ETH_P_IPV6);
 		if (ret == NF_DROP)
 			flow_offload_teardown(flow);
 		break;
diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
index 46a6d280b09c..b4baee519e18 100644
--- a/net/netfilter/nft_flow_offload.c
+++ b/net/netfilter/nft_flow_offload.c
@@ -124,13 +124,12 @@ static void nft_dev_path_info(const struct net_device_path_stack *stack,
 			info->indev = NULL;
 			break;
 		}
-		if (!info->outdev)
-			info->outdev = path->dev;
 		info->encap[info->num_encaps].id = path->encap.id;
 		info->encap[info->num_encaps].proto = path->encap.proto;
 		info->num_encaps++;
 		if (path->type == DEV_PATH_PPPOE)
 			memcpy(info->h_dest, path->encap.h_dest, ETH_ALEN);
+		info->xmit_type = FLOW_OFFLOAD_XMIT_DIRECT;
 		break;
 	case DEV_PATH_BRIDGE:
 		if (is_zero_ether_addr(info->h_source))
@@ -158,8 +157,7 @@ static void nft_dev_path_info(const struct net_device_path_stack *stack,
 			break;
 		}
 	}
-	if (!info->outdev)
-		info->outdev = info->indev;
+	info->outdev = info->indev;
 	info->hw_outdev = info->indev;
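
For completeness, a userspace sketch (again not kernel code; the VLAN id
and PPPoE session id are made-up example values) of the on-wire header
chain produced when nf_flow_encap_push() handles both encap slots: the
outer 802.1Q tag is handed to the driver via __vlan_hwaccel_put_tag() and
the inner PPPoE session header is pushed into the packet data before
dev_hard_header() writes the Ethernet header:

/*
 * Userspace illustration of the resulting frame layout for the combined
 * case: hw-accelerated outer VLAN tag plus inline PPPoE session header.
 */
#include <stdio.h>

struct hdr_part {
	const char *name;
	unsigned int len;	/* bytes on the wire */
};

int main(void)
{
	const struct hdr_part frame[] = {
		{ "ethernet dst + src MAC",                   12 },
		{ "802.1Q tag (TPID 0x8100 + TCI, VLAN 100)",  4 },
		{ "ethertype 0x8864 (PPPoE session)",          2 },
		{ "pppoe session header (sid 0x000a)",         6 },
		{ "ppp protocol 0x0021 (IPv4)",                2 },
		{ "ip packet",                                 0 },
	};
	unsigned int off = 0;

	for (unsigned int i = 0; i < sizeof(frame) / sizeof(frame[0]); i++) {
		printf("offset %2u: %s\n", off, frame[i].name);
		off += frame[i].len;
	}
	return 0;
}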