From patchwork Wed Nov 25 04:01:21 2020
X-Patchwork-Submitter: wenxu <wenxu@ucloud.cn>
X-Patchwork-Id: 11930289
X-Patchwork-Delegate: kuba@kernel.org
From: wenxu@ucloud.cn
To: kuba@kernel.org, jhs@mojatatu.com
Cc: netdev@vger.kernel.org, marcelo.leitner@gmail.com, vladbu@nvidia.com, xiyou.wangcong@gmail.com
Subject: [PATCH v4 net-next 1/3] net/sched: fix missing init of the mru in qdisc_skb_cb
Date: Wed, 25 Nov 2020 12:01:21 +0800
Message-Id: <1606276883-6825-2-git-send-email-wenxu@ucloud.cn>
In-Reply-To: <1606276883-6825-1-git-send-email-wenxu@ucloud.cn>
References: <1606276883-6825-1-git-send-email-wenxu@ucloud.cn>
List-ID: <netdev.vger.kernel.org>

From: wenxu <wenxu@ucloud.cn>

The mru field in qdisc_skb_cb should be initialized to 0. Only
defragmented packets in act_ct set the value.

Fixes: 038ebb1a713d ("net/sched: act_ct: fix miss set mru for ovs after defrag in act_ct")
Signed-off-by: wenxu <wenxu@ucloud.cn>
---
v4: no change

 net/core/dev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/core/dev.c b/net/core/dev.c
index 60d325b..d0efa98 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3867,6 +3867,7 @@ int dev_loopback_xmit(struct net *net, struct sock *sk, struct sk_buff *skb)
 		return skb;
 
 	/* qdisc_skb_cb(skb)->pkt_len was already set by the caller. */
+	qdisc_skb_cb(skb)->mru = 0;
 	mini_qdisc_bstats_cpu_update(miniq, skb);
 
 	switch (tcf_classify(skb, miniq->filter_list, &cl_res, false)) {
@@ -4954,6 +4955,7 @@ static __latent_entropy void net_tx_action(struct softirq_action *h)
 	}
 
 	qdisc_skb_cb(skb)->pkt_len = skb->len;
+	qdisc_skb_cb(skb)->mru = 0;
 	skb->tc_at_ingress = 1;
 	mini_qdisc_bstats_cpu_update(miniq, skb);
 

From patchwork Wed Nov 25 04:01:22 2020
X-Patchwork-Submitter: wenxu <wenxu@ucloud.cn>
X-Patchwork-Id: 11930293
X-Patchwork-Delegate: kuba@kernel.org
From: wenxu@ucloud.cn
To: kuba@kernel.org, jhs@mojatatu.com
Cc: netdev@vger.kernel.org, marcelo.leitner@gmail.com, vladbu@nvidia.com, xiyou.wangcong@gmail.com
Subject: [PATCH v4 net-next 2/3] net/sched: act_mirred: refactor the handling of xmit
Date: Wed, 25 Nov 2020 12:01:22 +0800
Message-Id: <1606276883-6825-3-git-send-email-wenxu@ucloud.cn>
In-Reply-To: <1606276883-6825-1-git-send-email-wenxu@ucloud.cn>
References: <1606276883-6825-1-git-send-email-wenxu@ucloud.cn>
List-ID: <netdev.vger.kernel.org>

From: wenxu <wenxu@ucloud.cn>

This is a preparation patch for the next one: it factors the choice
between dev_queue_xmit() and netif_receive_skb() into a new helper,
tcf_mirred_forward(), and removes the now-unused skb_tc_reinsert().

Signed-off-by: wenxu <wenxu@ucloud.cn>
---
v4: no change

 include/net/sch_generic.h |  5 -----
 net/sched/act_mirred.c    | 21 +++++++++++++++------
 2 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index d8fd867..dd74f06 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -1281,9 +1281,4 @@ void mini_qdisc_pair_init(struct mini_Qdisc_pair *miniqp, struct Qdisc *qdisc,
 void mini_qdisc_pair_block_init(struct mini_Qdisc_pair *miniqp,
 				struct tcf_block *block);
 
-static inline int skb_tc_reinsert(struct sk_buff *skb, struct tcf_result *res)
-{
-	return res->ingress ? netif_receive_skb(skb) : dev_queue_xmit(skb);
-}
-
 #endif
diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
index e24b7e2..17d0095 100644
--- a/net/sched/act_mirred.c
+++ b/net/sched/act_mirred.c
@@ -205,6 +205,18 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
 	return err;
 }
 
+static int tcf_mirred_forward(bool want_ingress, struct sk_buff *skb)
+{
+	int err;
+
+	if (!want_ingress)
+		err = dev_queue_xmit(skb);
+	else
+		err = netif_receive_skb(skb);
+
+	return err;
+}
+
 static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
 			  struct tcf_result *res)
 {
@@ -287,18 +299,15 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
 		/* let's the caller reinsert the packet, if possible */
 		if (use_reinsert) {
 			res->ingress = want_ingress;
-			if (skb_tc_reinsert(skb, res))
+			err = tcf_mirred_forward(res->ingress, skb);
+			if (err)
 				tcf_action_inc_overlimit_qstats(&m->common);
 			__this_cpu_dec(mirred_rec_level);
 			return TC_ACT_CONSUMED;
 		}
 	}
 
-	if (!want_ingress)
-		err = dev_queue_xmit(skb2);
-	else
-		err = netif_receive_skb(skb2);
-
+	err = tcf_mirred_forward(want_ingress, skb2);
 	if (err) {
 out:
 		tcf_action_inc_overlimit_qstats(&m->common);

From patchwork Wed Nov 25 04:01:23 2020
X-Patchwork-Submitter: wenxu <wenxu@ucloud.cn>
X-Patchwork-Id: 11930295
X-Patchwork-Delegate: kuba@kernel.org
From: wenxu@ucloud.cn
To: kuba@kernel.org, jhs@mojatatu.com
Cc: netdev@vger.kernel.org, marcelo.leitner@gmail.com, vladbu@nvidia.com, xiyou.wangcong@gmail.com
Subject: [PATCH v4 net-next 3/3] net/sched: sch_frag: add generic packet fragment support
Date: Wed, 25 Nov 2020 12:01:23 +0800
Message-Id: <1606276883-6825-4-git-send-email-wenxu@ucloud.cn>
In-Reply-To: <1606276883-6825-1-git-send-email-wenxu@ucloud.cn>
References: <1606276883-6825-1-git-send-email-wenxu@ucloud.cn>
List-ID: <netdev.vger.kernel.org>

From: wenxu <wenxu@ucloud.cn>

Currently the kernel tc subsystem can do connection tracking in act_ct.
But when several fragments go through act_ct, tcf_ct_handle_fragments()
reassembles them into one big packet, and a final mirred redirect may
then send that reassembled packet to a device whose MTU it exceeds.

This patch adds support for an xmit hook in mirred that gets executed
before transmitting the packet. When act_ct gets loaded, it enables that
hook, which re-fragments packets larger than the mru recorded in
qdisc_skb_cb. The frag xmit hook may be reused by other modules.

Signed-off-by: wenxu <wenxu@ucloud.cn>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
---
v2: make act_frag built-in for the tc core, not a module
    return an error code from tcf_fragment
    depend on INET for ip_do_fragment
v3: put the whole sch_frag.c under CONFIG_INET
v4: remove the abstraction for xmit_hook

 include/net/act_api.h     |   6 ++
 include/net/sch_generic.h |   2 +
 net/sched/Makefile        |   1 +
 net/sched/act_api.c       |  16 +++++
 net/sched/act_ct.c        |   3 +
 net/sched/act_mirred.c    |   2 +-
 net/sched/sch_frag.c      | 150 ++++++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 179 insertions(+), 1 deletion(-)
 create mode 100644 net/sched/sch_frag.c

diff --git a/include/net/act_api.h b/include/net/act_api.h
index 8721492..55dab60 100644
--- a/include/net/act_api.h
+++ b/include/net/act_api.h
@@ -239,6 +239,12 @@ int tcf_action_check_ctrlact(int action, struct tcf_proto *tp,
 			     struct netlink_ext_ack *newchain);
 struct tcf_chain *tcf_action_set_ctrlact(struct tc_action *a, int action,
 					 struct tcf_chain *newchain);
+
+#ifdef CONFIG_INET
+DECLARE_STATIC_KEY_FALSE(tcf_frag_xmit_count);
+#endif
+
+int tcf_dev_queue_xmit(struct sk_buff *skb, int (*xmit)(struct sk_buff *skb));
 #endif /* CONFIG_NET_CLS_ACT */
 
 static inline void tcf_action_stats_update(struct tc_action *a, u64 bytes,
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index dd74f06..162ed62 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -1281,4 +1281,6 @@ void mini_qdisc_pair_init(struct mini_Qdisc_pair *miniqp, struct Qdisc *qdisc,
 void mini_qdisc_pair_block_init(struct mini_Qdisc_pair *miniqp,
 				struct tcf_block *block);
 
+int sch_frag_xmit_hook(struct sk_buff *skb, int (*xmit)(struct sk_buff *skb));
+
 #endif
diff --git a/net/sched/Makefile b/net/sched/Makefile
index 66bbf9a..dd14ef4 100644
--- a/net/sched/Makefile
+++ b/net/sched/Makefile
@@ -5,6 +5,7 @@
 
 obj-y	:= sch_generic.o sch_mq.o
 
+obj-$(CONFIG_INET)		+= sch_frag.o
 obj-$(CONFIG_NET_SCHED)		+= sch_api.o sch_blackhole.o
 obj-$(CONFIG_NET_CLS)		+= cls_api.o
 obj-$(CONFIG_NET_CLS_ACT)	+= act_api.o
diff --git a/net/sched/act_api.c b/net/sched/act_api.c
index 60e1572..34fe743 100644
--- a/net/sched/act_api.c
+++ b/net/sched/act_api.c
@@ -22,6 +22,22 @@
 #include
 #include
 
+#ifdef CONFIG_INET
+DEFINE_STATIC_KEY_FALSE(tcf_frag_xmit_count);
+EXPORT_SYMBOL_GPL(tcf_frag_xmit_count);
+#endif
+
+int tcf_dev_queue_xmit(struct sk_buff *skb, int (*xmit)(struct sk_buff *skb))
+{
+#ifdef CONFIG_INET
+	if (static_branch_unlikely(&tcf_frag_xmit_count))
+		return sch_frag_xmit_hook(skb, xmit);
+#endif
+
+	return xmit(skb);
+}
+EXPORT_SYMBOL_GPL(tcf_dev_queue_xmit);
+
 static void tcf_action_goto_chain_exec(const struct tc_action *a,
 				       struct tcf_result *res)
 {
diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
index aba3cd8..61092c5 100644
--- a/net/sched/act_ct.c
+++ b/net/sched/act_ct.c
@@ -1541,6 +1541,8 @@ static int __init ct_init_module(void)
 	if (err)
 		goto err_register;
 
+	static_branch_inc(&tcf_frag_xmit_count);
+
 	return 0;
 
 err_register:
@@ -1552,6 +1554,7 @@ static int __init ct_init_module(void)
 
 static void __exit ct_cleanup_module(void)
 {
+	static_branch_dec(&tcf_frag_xmit_count);
 	tcf_unregister_action(&act_ct_ops, &ct_net_ops);
 	tcf_ct_flow_tables_uninit();
 	destroy_workqueue(act_ct_wq);
diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
index 17d0095..7153c67 100644
--- a/net/sched/act_mirred.c
+++ b/net/sched/act_mirred.c
@@ -210,7 +210,7 @@ static int tcf_mirred_forward(bool want_ingress, struct sk_buff *skb)
 	int err;
 
 	if (!want_ingress)
-		err = dev_queue_xmit(skb);
+		err = tcf_dev_queue_xmit(skb, dev_queue_xmit);
 	else
 		err = netif_receive_skb(skb);
 
diff --git a/net/sched/sch_frag.c b/net/sched/sch_frag.c
new file mode 100644
index 0000000..e1e77d3
--- /dev/null
+++ b/net/sched/sch_frag.c
@@ -0,0 +1,150 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+#include
+#include
+#include
+#include
+#include
+
+struct sch_frag_data {
+	unsigned long dst;
+	struct qdisc_skb_cb cb;
+	__be16 inner_protocol;
+	u16 vlan_tci;
+	__be16 vlan_proto;
+	unsigned int l2_len;
+	u8 l2_data[VLAN_ETH_HLEN];
+	int (*xmit)(struct sk_buff *skb);
+};
+
+static DEFINE_PER_CPU(struct sch_frag_data, sch_frag_data_storage);
+
+static int sch_frag_xmit(struct net *net, struct sock *sk, struct sk_buff *skb)
+{
+	struct sch_frag_data *data = this_cpu_ptr(&sch_frag_data_storage);
+
+	if (skb_cow_head(skb, data->l2_len) < 0) {
+		kfree_skb(skb);
+		return -ENOMEM;
+	}
+
+	__skb_dst_copy(skb, data->dst);
+	*qdisc_skb_cb(skb) = data->cb;
+	skb->inner_protocol = data->inner_protocol;
+	if (data->vlan_tci & VLAN_CFI_MASK)
+		__vlan_hwaccel_put_tag(skb, data->vlan_proto,
+				       data->vlan_tci & ~VLAN_CFI_MASK);
+	else
+		__vlan_hwaccel_clear_tag(skb);
+
+	/* Reconstruct the MAC header. */
+	skb_push(skb, data->l2_len);
+	memcpy(skb->data, &data->l2_data, data->l2_len);
+	skb_postpush_rcsum(skb, skb->data, data->l2_len);
+	skb_reset_mac_header(skb);
+
+	return data->xmit(skb);
+}
+
+static void sch_frag_prepare_frag(struct sk_buff *skb,
+				  int (*xmit)(struct sk_buff *skb))
+{
+	unsigned int hlen = skb_network_offset(skb);
+	struct sch_frag_data *data;
+
+	data = this_cpu_ptr(&sch_frag_data_storage);
+	data->dst = skb->_skb_refdst;
+	data->cb = *qdisc_skb_cb(skb);
+	data->xmit = xmit;
+	data->inner_protocol = skb->inner_protocol;
+	if (skb_vlan_tag_present(skb))
+		data->vlan_tci = skb_vlan_tag_get(skb) | VLAN_CFI_MASK;
+	else
+		data->vlan_tci = 0;
+	data->vlan_proto = skb->vlan_proto;
+	data->l2_len = hlen;
+	memcpy(&data->l2_data, skb->data, hlen);
+
+	memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
+	skb_pull(skb, hlen);
+}
+
+static unsigned int
+sch_frag_dst_get_mtu(const struct dst_entry *dst)
+{
+	return dst->dev->mtu;
+}
+
+static struct dst_ops sch_frag_dst_ops = {
+	.family = AF_UNSPEC,
+	.mtu = sch_frag_dst_get_mtu,
+};
+
+static int sch_fragment(struct net *net, struct sk_buff *skb,
+			u16 mru, int (*xmit)(struct sk_buff *skb))
+{
+	int ret = -1;
+
+	if (skb_network_offset(skb) > VLAN_ETH_HLEN) {
+		net_warn_ratelimited("L2 header too long to fragment\n");
+		goto err;
+	}
+
+	if (skb_protocol(skb, true) == htons(ETH_P_IP)) {
+		struct dst_entry sch_frag_dst;
+		unsigned long orig_dst;
+
+		sch_frag_prepare_frag(skb, xmit);
+		dst_init(&sch_frag_dst, &sch_frag_dst_ops, NULL, 1,
+			 DST_OBSOLETE_NONE, DST_NOCOUNT);
+		sch_frag_dst.dev = skb->dev;
+
+		orig_dst = skb->_skb_refdst;
+		skb_dst_set_noref(skb, &sch_frag_dst);
+		IPCB(skb)->frag_max_size = mru;
+
+		ret = ip_do_fragment(net, skb->sk, skb, sch_frag_xmit);
+		refdst_drop(orig_dst);
+	} else if (skb_protocol(skb, true) == htons(ETH_P_IPV6)) {
+		unsigned long orig_dst;
+		struct rt6_info sch_frag_rt;
+
+		sch_frag_prepare_frag(skb, xmit);
+		memset(&sch_frag_rt, 0, sizeof(sch_frag_rt));
+		dst_init(&sch_frag_rt.dst, &sch_frag_dst_ops, NULL, 1,
+			 DST_OBSOLETE_NONE, DST_NOCOUNT);
+		sch_frag_rt.dst.dev = skb->dev;
+
+		orig_dst = skb->_skb_refdst;
+		skb_dst_set_noref(skb, &sch_frag_rt.dst);
+		IP6CB(skb)->frag_max_size = mru;
+
+		ret = ipv6_stub->ipv6_fragment(net, skb->sk, skb,
+					       sch_frag_xmit);
+		refdst_drop(orig_dst);
+	} else {
+		net_warn_ratelimited("Fail frag %s: eth=%x, MRU=%d, MTU=%d\n",
+				     netdev_name(skb->dev),
+				     ntohs(skb_protocol(skb, true)), mru,
+				     skb->dev->mtu);
+		goto err;
+	}
+
+	return ret;
+err:
+	kfree_skb(skb);
+	return ret;
+}
+
+int sch_frag_xmit_hook(struct sk_buff *skb, int (*xmit)(struct sk_buff *skb))
+{
+	u16 mru = qdisc_skb_cb(skb)->mru;
+	int err;
+
+	if (mru && skb->len > mru + skb->dev->hard_header_len)
+		err = sch_fragment(dev_net(skb->dev), skb, mru, xmit);
+	else
+		err = xmit(skb);
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(sch_frag_xmit_hook);
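
The commit message of patch 3/3 notes that the frag xmit hook may be reused
by other modules. Below is a minimal sketch of such a reuse, assuming only
the interfaces introduced in this series (tcf_frag_xmit_count,
tcf_dev_queue_xmit() and qdisc_skb_cb()->mru, all of which require
CONFIG_INET and CONFIG_NET_CLS_ACT); everything named act_foo_* is
hypothetical and not part of these patches.

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical act_foo module: illustration of reusing the frag xmit hook.
 * Only tcf_frag_xmit_count, tcf_dev_queue_xmit() and qdisc_skb_cb()->mru
 * come from this series; the act_foo_* names are made up.
 */
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/act_api.h>
#include <net/sch_generic.h>

/* Egress path of the hypothetical action: record the MRU seen at defrag
 * time in the qdisc cb, then let tcf_dev_queue_xmit() decide whether
 * sch_frag_xmit_hook() must re-fragment before dev_queue_xmit().
 */
static int __maybe_unused act_foo_xmit(struct sk_buff *skb, u16 defrag_mru)
{
	qdisc_skb_cb(skb)->mru = defrag_mru;
	return tcf_dev_queue_xmit(skb, dev_queue_xmit);
}

static int __init act_foo_init(void)
{
	/* Enable the static key so tcf_dev_queue_xmit() diverts into
	 * sch_frag_xmit_hook() instead of calling xmit() directly.
	 */
	static_branch_inc(&tcf_frag_xmit_count);
	return 0;
}

static void __exit act_foo_exit(void)
{
	static_branch_dec(&tcf_frag_xmit_count);
}

module_init(act_foo_init);
module_exit(act_foo_exit);
MODULE_LICENSE("GPL");

The static key keeps the common case, where no defragmenting action is
loaded, a plain direct call to xmit(); this mirrors how act_ct and
act_mirred use the hook in the patches above.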