
[RFC,v2,net-next,5/5] net/sched: sch_cbq: Use Qdisc backpressure infrastructure

Message ID 614f8f31e3b62dfebb8cb4707c81918a6c7e381d.1661158173.git.peilin.ye@bytedance.com (mailing list archive)
State RFC
Delegated to: Netdev Maintainers
Series net: Qdisc backpressure infrastructure

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 85 this patch: 85
netdev/cc_maintainers success CCed 8 of 8 maintainers
netdev/build_clang success Errors and warnings before: 0 this patch: 0
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 85 this patch: 85
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 7 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Peilin Ye Aug. 22, 2022, 9:12 a.m. UTC
From: Peilin Ye <peilin.ye@bytedance.com>

Recently we introduced a Qdisc backpressure infrastructure, which
currently supports UDP sockets.  Use it in the CBQ Qdisc.
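
For illustration, below is a minimal sketch of the usage pattern this
series assumes (not part of this patch): a Qdisc calls
qdisc_backpressure() on its drop path so that the owning UDP socket is
asked to back off.  The example_enqueue() function and its limit check
are hypothetical; only qdisc_backpressure() comes from this series.

/* Hypothetical example (not from this patch): a Qdisc's ->enqueue()
 * using the qdisc_backpressure() helper introduced earlier in this
 * series.  The function name and limit check are illustrative only.
 */
static int example_enqueue(struct sk_buff *skb, struct Qdisc *sch,
			   struct sk_buff **to_free)
{
	if (unlikely(sch->q.qlen >= READ_ONCE(sch->limit))) {
		/* Over limit: signal the sending (UDP) socket to slow
		 * down before dropping, then account the drop.
		 */
		qdisc_backpressure(skb);
		return qdisc_drop(skb, sch, to_free);
	}

	return qdisc_enqueue_tail(skb, sch);
}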

Tested with a 500 Mbits/sec rate limit using 16 iperf UDP 1 Gbit/sec
clients.  Before:

[  3]  0.0-15.0 sec  55.8 MBytes  31.2 Mbits/sec   1.185 ms 1073326/1113110 (96%)
[  3]  0.0-15.0 sec  55.9 MBytes  31.3 Mbits/sec   1.001 ms 1080330/1120201 (96%)
[  3]  0.0-15.0 sec  55.6 MBytes  31.1 Mbits/sec   1.750 ms 1078292/1117980 (96%)
[  3]  0.0-15.0 sec  55.3 MBytes  30.9 Mbits/sec   0.895 ms 1089200/1128640 (97%)
<...>                                                       ^^^^^^^^^^^^^^^^^^^^^

Total throughput is 493.7 Mbits/sec and average drop rate is 96.13%.

Now enable Qdisc backpressure for UDP sockets, with
udp_backpressure_interval left at its default of 100 milliseconds:

[  3]  0.0-15.0 sec  54.2 MBytes  30.3 Mbits/sec   2.302 ms 54/38692 (0.14%)
[  3]  0.0-15.0 sec  54.1 MBytes  30.2 Mbits/sec   2.227 ms 54/38671 (0.14%)
[  3]  0.0-15.0 sec  53.5 MBytes  29.9 Mbits/sec   2.043 ms 57/38203 (0.15%)
[  3]  0.0-15.0 sec  58.1 MBytes  32.5 Mbits/sec   1.843 ms 1/41480 (0.0024%)
<...>                                                       ^^^^^^^^^^^^^^^^^

Total throughput is 497.1 Mbits/sec (0.69% higher), and the average
drop rate is 0.08% (99.9% lower).

Fairness between flows is slightly affected, with per-flow average
throughput now ranging from 29.9 to 32.6 Mbits/sec (compared with 30.3
to 31.3 Mbits/sec before).

Signed-off-by: Peilin Ye <peilin.ye@bytedance.com>
---
 net/sched/sch_cbq.c | 1 +
 1 file changed, 1 insertion(+)

Patch

diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
index 91a0dc463c48..42e44f570988 100644
--- a/net/sched/sch_cbq.c
+++ b/net/sched/sch_cbq.c
@@ -381,6 +381,7 @@  cbq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		return ret;
 	}
 
+	qdisc_backpressure(skb);
 	if (net_xmit_drop_count(ret)) {
 		qdisc_qstats_drop(sch);
 		cbq_mark_toplevel(q, cl);