[RFC,v2,net-next,4/5] net/sched: sch_htb: Use Qdisc backpressure infrastructure

Message ID c0554e13c7f2abb8fa38a70975ba3adbe4d9ecff.1661158173.git.peilin.ye@bytedance.com (mailing list archive)
State RFC
Delegated to: Netdev Maintainers
Headers show
Series net: Qdisc backpressure infrastructure | expand

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 86 this patch: 86
netdev/cc_maintainers success CCed 8 of 8 maintainers
netdev/build_clang success Errors and warnings before: 1 this patch: 1
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 86 this patch: 86
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 14 lines checked
netdev/kdoc success Errors and warnings before: 1 this patch: 1
netdev/source_inline success Was 0 now: 0

Commit Message

Peilin Ye Aug. 22, 2022, 9:12 a.m. UTC
From: Peilin Ye <peilin.ye@bytedance.com>

Recently we introduced a Qdisc backpressure infrastructure, which
currently supports UDP sockets.  Use it in the HTB Qdisc.

Tested with 500 Mbits/sec rate limit using 16 iperf UDP 1 Gbit/sec
clients.  Before:

[  3]  0.0-15.0 sec  54.2 MBytes  30.4 Mbits/sec   0.875 ms 1245750/1284444 (97%)
[  3]  0.0-15.0 sec  54.2 MBytes  30.3 Mbits/sec   1.288 ms 1238753/1277402 (97%)
[  3]  0.0-15.0 sec  54.8 MBytes  30.6 Mbits/sec   1.761 ms 1261762/1300817 (97%)
[  3]  0.0-15.0 sec  53.9 MBytes  30.1 Mbits/sec   1.635 ms 1241690/1280133 (97%)
<...>                                                       ^^^^^^^^^^^^^^^^^^^^^

Total throughput is 482.0 Mbits/sec and average drop rate is 97.0%.

Now enable Qdisc backpressure for UDP sockets, with
udp_backpressure_interval at its default of 100 milliseconds:

[  3]  0.0-15.0 sec  53.0 MBytes  29.6 Mbits/sec   1.621 ms 54/37856 (0.14%)
[  3]  0.0-15.0 sec  55.9 MBytes  31.3 Mbits/sec   1.368 ms  6/39895 (0.015%)
[  3]  0.0-15.0 sec  52.3 MBytes  29.2 Mbits/sec   1.560 ms 56/37340 (0.15%)
[  3]  0.0-15.0 sec  52.7 MBytes  29.5 Mbits/sec   1.495 ms 57/37677 (0.15%)
<...>                                                       ^^^^^^^^^^^^^^^^

Total throughput is 485.9 Mbits/sec (0.81% higher) and average drop rate
is 0.1% (99.9% lower).

Fairness between flows is slightly affected, with per-flow average
throughput ranging from 29.2 to 31.8 Mbits/sec (compared with 29.7 to
30.6 Mbits/sec before).
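For reference, the test above can be reproduced with a setup along
these lines.  This is only a sketch: the interface name, class IDs,
server address, and the exact sysctl path for the backpressure interval
are assumptions, since the message only specifies the 500 Mbits/sec
limit, the 16 clients, and the 100 ms default interval.

```shell
# Shape egress with HTB at 500 Mbit/s on the sender.
# eth0, the class IDs, and the server address 192.0.2.1 are hypothetical.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 500mbit

# Enable UDP socket backpressure (sysctl introduced earlier in this
# series; shown here under an assumed net.ipv4 path, 100 ms default).
sysctl -w net.ipv4.udp_backpressure_interval=100

# Launch 16 iperf UDP clients, each attempting 1 Gbit/s for 15 seconds.
for i in $(seq 16); do
    iperf -c 192.0.2.1 -u -b 1000m -t 15 &
done
wait
```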

Signed-off-by: Peilin Ye <peilin.ye@bytedance.com>
---
 net/sched/sch_htb.c | 2 ++
 1 file changed, 2 insertions(+)
Patch

diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index 23a9d6242429..e337b3d0dab3 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -623,6 +623,7 @@  static int htb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 			__qdisc_enqueue_tail(skb, &q->direct_queue);
 			q->direct_pkts++;
 		} else {
+			qdisc_backpressure(skb);
 			return qdisc_drop(skb, sch, to_free);
 		}
 #ifdef CONFIG_NET_CLS_ACT
@@ -634,6 +635,7 @@  static int htb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 #endif
 	} else if ((ret = qdisc_enqueue(skb, cl->leaf.q,
 					to_free)) != NET_XMIT_SUCCESS) {
+		qdisc_backpressure(skb);
 		if (net_xmit_drop_count(ret)) {
 			qdisc_qstats_drop(sch);
 			cl->drops++;