[RFC,v2,net-next,3/5] net/sched: sch_tbf: Use Qdisc backpressure infrastructure

Message ID: e55ba88846dee3c6cc6e1c84bcb80590cde0adc4.1661158173.git.peilin.ye@bytedance.com
State: RFC
Delegated to: Netdev Maintainers
Series: net: Qdisc backpressure infrastructure

Commit Message

Peilin Ye Aug. 22, 2022, 9:12 a.m. UTC
From: Peilin Ye <peilin.ye@bytedance.com>

Recently we introduced a Qdisc backpressure infrastructure (currently
supporting UDP sockets only).  Use it in the TBF Qdisc.
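
For context, here is a minimal sketch of the shape such a helper could
take.  This is an illustrative assumption, not the actual helper from
patch 1/5 of this series; in particular, the sk_prot->backpressure hook
shown here is hypothetical:

  /* Hypothetical sketch of qdisc_backpressure(); the real helper is
   * introduced in patch 1/5 of this series and may differ.
   */
  static inline void qdisc_backpressure(struct sk_buff *skb)
  {
          struct sock *sk = skb->sk;

          /* Only full sockets carry enough state to throttle the sender. */
          if (!sk || !sk_fullsock(sk))
                  return;

          /* Defer to the owning protocol's hook (UDP in this series). */
          if (sk->sk_prot->backpressure)
                  sk->sk_prot->backpressure(sk);
  }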

Tested with a 500 Mbits/sec rate limit and an SFQ inner Qdisc, using 16
iperf UDP 1 Gbit/sec clients.  Before:

[  3]  0.0-15.0 sec  53.6 MBytes  30.0 Mbits/sec   0.208 ms 1190234/1228450 (97%)
[  3]  0.0-15.0 sec  54.7 MBytes  30.6 Mbits/sec   0.085 ms   955591/994593 (96%)
[  3]  0.0-15.0 sec  55.4 MBytes  31.0 Mbits/sec   0.170 ms  966364/1005868 (96%)
[  3]  0.0-15.0 sec  55.0 MBytes  30.8 Mbits/sec   0.167 ms   925083/964333 (96%)
<...>                                                         ^^^^^^^^^^^^^^^^^^^

Total throughput is 480.2 Mbits/sec and average drop rate is 96.5%.
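
The setup can be reproduced along these lines (the device name, burst,
and latency values below are illustrative, not taken from the original
test):

  # 500 Mbits/sec TBF root Qdisc with an SFQ inner Qdisc
  tc qdisc add dev eth0 root handle 1: tbf rate 500mbit burst 128k latency 50ms
  tc qdisc add dev eth0 parent 1:1 handle 10: sfq

  # one of the 16 concurrent 1 Gbit/sec UDP clients
  iperf -u -c $SERVER -b 1G -t 15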

Now enable Qdisc backpressure for UDP sockets, with
udp_backpressure_interval defaulting to 100 milliseconds:

[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.097 ms 450/39246 (1.1%)
[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.331 ms 435/39232 (1.1%)
[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.040 ms 435/39212 (1.1%)
[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.031 ms 426/39208 (1.1%)
<...>                                                       ^^^^^^^^^^^^^^^^

Total throughput is 486.4 Mbits/sec (1.29% higher) and average drop rate
is 1.1% (98.86% lower).
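
Assuming the interval is exposed as a sysctl under net.ipv4 (the exact
name and path are an assumption; only the udp_backpressure_interval
knob itself is named by this series), tuning it would look like:

  # throttle a UDP sender for 100 ms after its packet is dropped
  sysctl -w net.ipv4.udp_backpressure_interval=100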

However, enabling Qdisc backpressure affects fairness between flows if
we use the TBF Qdisc with its default bfifo inner Qdisc, presumably
because a plain FIFO, unlike SFQ, does not isolate flows from one
another:

[  3]  0.0-15.0 sec  46.1 MBytes  25.8 Mbits/sec   1.102 ms 142/33048 (0.43%)
[  3]  0.0-15.0 sec  72.8 MBytes  40.7 Mbits/sec   0.476 ms 145/52081 (0.28%)
[  3]  0.0-15.0 sec  53.2 MBytes  29.7 Mbits/sec   1.047 ms 141/38086 (0.37%)
[  3]  0.0-15.0 sec  45.5 MBytes  25.4 Mbits/sec   1.600 ms 141/32573 (0.43%)
<...>                                                       ^^^^^^^^^^^^^^^^^

In the test, per-flow throughput ranged from 16.4 to 68.7 Mbits/sec.
However, total throughput was still 486.4 Mbits/sec (1.29% higher than
before), and average drop rate was 0.41% (99.58% lower than before).
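
For comparison, the bfifo case needs no explicit inner Qdisc, since TBF
defaults to a byte FIFO sized from its limit (values again
illustrative):

  # TBF alone; packets queue in its default bfifo inner Qdisc
  tc qdisc replace dev eth0 root handle 1: tbf rate 500mbit burst 128k latency 50ms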

Signed-off-by: Peilin Ye <peilin.ye@bytedance.com>
---
 net/sched/sch_tbf.c | 2 ++
 1 file changed, 2 insertions(+)

Patch

diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
index 72102277449e..cf9cc7dbf078 100644
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -222,6 +222,7 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
 		len += segs->len;
 		ret = qdisc_enqueue(segs, q->qdisc, to_free);
 		if (ret != NET_XMIT_SUCCESS) {
+			qdisc_backpressure(skb);
 			if (net_xmit_drop_count(ret))
 				qdisc_qstats_drop(sch);
 		} else {
@@ -250,6 +251,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	}
 	ret = qdisc_enqueue(skb, q->qdisc, to_free);
 	if (ret != NET_XMIT_SUCCESS) {
+		qdisc_backpressure(skb);
 		if (net_xmit_drop_count(ret))
 			qdisc_qstats_drop(sch);
 		return ret;