From patchwork Tue Dec 10 09:44:49 2024
X-Patchwork-Submitter: Eric Woudstra
X-Patchwork-Id: 13901155
X-Patchwork-Delegate: kuba@kernel.org
From: Eric Woudstra
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Simon Horman, Andrew Lunn, Pablo Neira Ayuso, Jozsef Kadlecsik,
 Jiri Pirko, Ivan Vecera, Roopa Prabhu, Nikolay Aleksandrov,
 Matthias Brugger, AngeloGioacchino Del Regno, David Ahern,
 Sebastian Andrzej Siewior, Lorenzo Bianconi, Joe Damato,
 Alexander Lobakin, Vladimir Oltean, "Frank Wunderlich", Daniel Golle
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 netfilter-devel@vger.kernel.org, coreteam@netfilter.org,
 bridge@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
 linux-mediatek@lists.infradead.org, Eric Woudstra
Subject: [PATCH RFC v3 net-next 01/13] netfilter: nf_flow_table_offload: Add nf_flow_encap_push() for xmit direct
Date: Tue, 10 Dec 2024 10:44:49 +0100
Message-ID: <20241210094501.3069-2-ericwouds@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20241210094501.3069-1-ericwouds@gmail.com>
References: <20241210094501.3069-1-ericwouds@gmail.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

Loosely based on wenxu's patches:
"nf_flow_table_offload: offload the vlan/PPPoE encap in the flowtable".

Fixed the handling of double-vlan and pppoe packets, almost entirely
rewriting the patch.

After this patch, it is possible to transmit packets in the fastpath with
outgoing encaps, without using vlan- and/or pppoe-devices.

This makes it possible to support a wider variety of network setups. For
example, bridge vlan tagging can be used to egress vlan-tagged packets via
the forward fastpath. Another example is passing 802.1q tagged packets
through a bridge using the bridge fastpath.

This also makes the software fastpath process more similar to the hardware
offloaded fastpath process, where encaps are also pushed.
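As an aside for reviewers, not part of the patch itself: the sketch below is
a minimal userspace model of what pushing the encaps amounts to for a
hypothetical PPPoE-over-VLAN uplink. It assumes the outgoing tuple carries an
outer 802.1Q entry at index 0 and a PPPoE session entry at index 1, which is
the layout the new nf_flow_encap_push() below handles; VLAN id 100 and
session id 42 are made up. The outer VLAN becomes a hw-accelerated tag, the
following entry is written into the packet headroom.

/* Illustration only -- ids and setup are hypothetical. */
#include <stdio.h>

#define ETH_P_8021Q   0x8100	/* values mirror uapi/linux/if_ether.h */
#define ETH_P_8021AD  0x88A8
#define ETH_P_PPP_SES 0x8864

struct encap_entry {
	unsigned short id;	/* VLAN id or PPPoE session id */
	unsigned short proto;	/* ETH_P_*, host byte order for printing */
};

int main(void)
{
	/* Outgoing encap list for a made-up PPPoE-over-VLAN uplink. */
	struct encap_entry encap[] = {
		{ .id = 100, .proto = ETH_P_8021Q },	/* outermost */
		{ .id = 42,  .proto = ETH_P_PPP_SES },	/* inner */
	};
	int encap_num = 2, i = 0;

	if (encap[i].proto == ETH_P_8021Q || encap[i].proto == ETH_P_8021AD) {
		printf("hw-accel VLAN tag: id %u\n", encap[i].id);
		i++;
	}
	if (i < encap_num) {
		if (encap[i].proto == ETH_P_8021Q)
			printf("inner VLAN header pushed: id %u\n", encap[i].id);
		else if (encap[i].proto == ETH_P_PPP_SES)
			printf("PPPoE session header pushed: sid %u\n", encap[i].id);
	}
	return 0;
}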
After applying this patch, info->outdev always equals info->hw_outdev, so
the netfilter code can be further cleaned up by removing:

* hw_outdev from struct nft_forward_info
* out.hw_ifindex from struct nf_flow_route
* out.hw_ifidx from struct flow_offload_tuple

Signed-off-by: Eric Woudstra
---
 net/netfilter/nf_flow_table_ip.c | 96 +++++++++++++++++++++++++++++++-
 net/netfilter/nft_flow_offload.c |  6 +-
 2 files changed, 96 insertions(+), 6 deletions(-)

diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
index 98edcaa37b38..290d8e10d85b 100644
--- a/net/netfilter/nf_flow_table_ip.c
+++ b/net/netfilter/nf_flow_table_ip.c
@@ -302,6 +302,92 @@ static bool nf_flow_skb_encap_protocol(struct sk_buff *skb, __be16 proto,
 	return false;
 }
 
+static int nf_flow_vlan_inner_push(struct sk_buff *skb, __be16 proto, u16 id)
+{
+	struct vlan_hdr *vhdr;
+
+	if (skb_cow_head(skb, VLAN_HLEN))
+		return -1;
+
+	__skb_push(skb, VLAN_HLEN);
+	skb_reset_network_header(skb);
+
+	vhdr = (struct vlan_hdr *)(skb->data);
+	vhdr->h_vlan_TCI = htons(id);
+	vhdr->h_vlan_encapsulated_proto = skb->protocol;
+	skb->protocol = proto;
+
+	return 0;
+}
+
+static int nf_flow_ppoe_push(struct sk_buff *skb, u16 id)
+{
+	struct ppp_hdr {
+		struct pppoe_hdr hdr;
+		__be16 proto;
+	} *ph;
+	int data_len = skb->len + 2;
+	__be16 proto;
+
+	if (skb_cow_head(skb, PPPOE_SES_HLEN))
+		return -1;
+
+	if (skb->protocol == htons(ETH_P_IP))
+		proto = htons(PPP_IP);
+	else if (skb->protocol == htons(ETH_P_IPV6))
+		proto = htons(PPP_IPV6);
+	else
+		return -1;
+
+	__skb_push(skb, PPPOE_SES_HLEN);
+	skb_reset_network_header(skb);
+
+	ph = (struct ppp_hdr *)(skb->data);
+	ph->hdr.ver = 1;
+	ph->hdr.type = 1;
+	ph->hdr.code = 0;
+	ph->hdr.sid = htons(id);
+	ph->hdr.length = htons(data_len);
+	ph->proto = proto;
+	skb->protocol = htons(ETH_P_PPP_SES);
+
+	return 0;
+}
+
+static int nf_flow_encap_push(struct sk_buff *skb,
+			      struct flow_offload_tuple_rhash *tuplehash,
+			      unsigned short *type)
+{
+	int i = 0, ret = 0;
+
+	if (!tuplehash->tuple.encap_num)
+		return 0;
+
+	if (tuplehash->tuple.encap[i].proto == htons(ETH_P_8021Q) ||
+	    tuplehash->tuple.encap[i].proto == htons(ETH_P_8021AD)) {
+		__vlan_hwaccel_put_tag(skb, tuplehash->tuple.encap[i].proto,
+				       tuplehash->tuple.encap[i].id);
+		i++;
+		if (i >= tuplehash->tuple.encap_num)
+			return 0;
+	}
+
+	switch (tuplehash->tuple.encap[i].proto) {
+	case htons(ETH_P_8021Q):
+		*type = ETH_P_8021Q;
+		ret = nf_flow_vlan_inner_push(skb,
+					      tuplehash->tuple.encap[i].proto,
+					      tuplehash->tuple.encap[i].id);
+		break;
+	case htons(ETH_P_PPP_SES):
+		*type = ETH_P_PPP_SES;
+		ret = nf_flow_ppoe_push(skb,
					tuplehash->tuple.encap[i].id);
+		break;
+	}
+	return ret;
+}
+
 static void nf_flow_encap_pop(struct sk_buff *skb,
 			      struct flow_offload_tuple_rhash *tuplehash)
 {
@@ -331,6 +417,7 @@ static void nf_flow_encap_pop(struct sk_buff *skb,
 
 static unsigned int nf_flow_queue_xmit(struct net *net, struct sk_buff *skb,
 				       const struct flow_offload_tuple_rhash *tuplehash,
+				       struct flow_offload_tuple_rhash *other_tuplehash,
 				       unsigned short type)
 {
 	struct net_device *outdev;
@@ -339,6 +426,9 @@ static unsigned int nf_flow_queue_xmit(struct net *net, struct sk_buff *skb,
 	if (!outdev)
 		return NF_DROP;
 
+	if (nf_flow_encap_push(skb, other_tuplehash, &type) < 0)
+		return NF_DROP;
+
 	skb->dev = outdev;
 	dev_hard_header(skb, skb->dev, type, tuplehash->tuple.out.h_dest,
 			tuplehash->tuple.out.h_source, skb->len);
@@ -458,7 +548,8 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 		ret = NF_STOLEN;
 		break;
 	case FLOW_OFFLOAD_XMIT_DIRECT:
-		ret = nf_flow_queue_xmit(state->net, skb, tuplehash, ETH_P_IP);
+		ret = nf_flow_queue_xmit(state->net, skb, tuplehash,
+					 &flow->tuplehash[!dir], ETH_P_IP);
 		if (ret == NF_DROP)
 			flow_offload_teardown(flow);
 		break;
@@ -753,7 +844,8 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 		ret = NF_STOLEN;
 		break;
 	case FLOW_OFFLOAD_XMIT_DIRECT:
-		ret = nf_flow_queue_xmit(state->net, skb, tuplehash, ETH_P_IPV6);
+		ret = nf_flow_queue_xmit(state->net, skb, tuplehash,
					 &flow->tuplehash[!dir], ETH_P_IPV6);
 		if (ret == NF_DROP)
 			flow_offload_teardown(flow);
 		break;
diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
index 7b84d8d3469c..cdf1771906b8 100644
--- a/net/netfilter/nft_flow_offload.c
+++ b/net/netfilter/nft_flow_offload.c
@@ -124,13 +124,12 @@ static void nft_dev_path_info(const struct net_device_path_stack *stack,
 				info->indev = NULL;
 				break;
 			}
-			if (!info->outdev)
-				info->outdev = path->dev;
 			info->encap[info->num_encaps].id = path->encap.id;
 			info->encap[info->num_encaps].proto = path->encap.proto;
 			info->num_encaps++;
 			if (path->type == DEV_PATH_PPPOE)
 				memcpy(info->h_dest, path->encap.h_dest, ETH_ALEN);
+			info->xmit_type = FLOW_OFFLOAD_XMIT_DIRECT;
 			break;
 		case DEV_PATH_BRIDGE:
 			if (is_zero_ether_addr(info->h_source))
@@ -158,8 +157,7 @@ static void nft_dev_path_info(const struct net_device_path_stack *stack,
 			break;
 		}
 	}
-	if (!info->outdev)
-		info->outdev = info->indev;
+	info->outdev = info->indev;
 	info->hw_outdev = info->indev;
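
A final illustration, also not part of the patch: nf_flow_ppoe_push() above
writes the 8-byte PPPoE session header (PPPOE_SES_HLEN), including the 2-byte
PPP protocol, in front of the IP payload, and sets the PPPoE length field to
the payload length plus those 2 protocol bytes. The self-contained userspace
sketch below builds the same bytes for a hypothetical IPv4 payload of 1452
bytes and session id 42, just to show the resulting on-wire layout; PPP_IP is
0x0021 and PPP_IPV6 would be 0x0057, matching the uapi definitions.

/* Illustration only -- payload length and session id are hypothetical. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>	/* htons() */

int main(void)
{
	uint16_t payload_len = 1452;			/* made-up IP payload length */
	uint16_t sid = htons(42);			/* made-up PPPoE session id */
	uint16_t length = htons(payload_len + 2);	/* payload + 2-byte PPP protocol */
	uint16_t ppp_proto = htons(0x0021);		/* PPP_IP; 0x0057 for IPv6 */
	uint8_t hdr[8];					/* PPPOE_SES_HLEN bytes */
	int i;

	hdr[0] = 0x11;			/* ver = 1, type = 1 */
	hdr[1] = 0x00;			/* code = 0: session data */
	memcpy(&hdr[2], &sid, 2);
	memcpy(&hdr[4], &length, 2);
	memcpy(&hdr[6], &ppp_proto, 2);

	for (i = 0; i < 8; i++)
		printf("%02x ", hdr[i]);
	printf("\n");			/* prints: 11 00 00 2a 05 ae 00 21 */
	return 0;
}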