
[RFC,2/2] udp: optimise write wakeups with SOCK_NOSPACE

Message ID fc4fb618b3c3760dd10e2cbcee0d0050be8cdac9.1707138546.git.asml.silence@gmail.com (mailing list archive)
State RFC
Delegated to: Netdev Maintainers
Series: optimise UDP skb completion wakeups

Checks

Context Check Description
netdev/series_format warning Target tree name not specified in the subject
netdev/tree_selection success Guessed tree name to be net-next, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1056 this patch: 1056
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers warning 1 maintainers not CCed: willemdebruijn.kernel@gmail.com
netdev/build_clang success Errors and warnings before: 1066 this patch: 1066
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1073 this patch: 1073
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 34 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Pavel Begunkov Feb. 7, 2024, 2:23 p.m. UTC
Often the write queue never fills up, yet every sent skb still
unnecessarily wakes the pollers via sock_wfree(). That holds even when
there are no write (EPOLLOUT) pollers at all, because the socket's
waitqueue is shared between readers and writers.

Optimise it with SOCK_NOSPACE, which avoids the wakeup unless there are
waiters that were actually starved of space. With a dummy-device
io_uring benchmark pushing as much as it can, I get +5% CPU-bound
throughput (2268 Krps -> 2380 Krps). Profiles show a ~3-4% reduction in
total CPU usage, coming from the smaller share taken by the destructors.

As noted in the previous patch, udp_wfree() is introduced rather than
built on top of sock_wfree() because SOCK_NOSPACE requires cooperation
from the poll callback, and there are quite a few custom poll
implementations in the tree.
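
To make the poller side concrete, below is a minimal userspace sketch of
the kind of write poller this path serves: a nonblocking UDP socket
registered with epoll for both EPOLLIN and EPOLLOUT. It is not the
io_uring benchmark used for the numbers above, and the loopback address
and port are arbitrary placeholders; it only illustrates that read and
write interest share one socket waitqueue, which is what SOCK_NOSPACE
gates the write-space wakeups for.

/* Illustrative sketch only, not the benchmark from the commit message.
 * Destination address/port are arbitrary placeholders.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port = htons(9999),		/* arbitrary */
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	struct epoll_event ev = {0}, out;
	char buf[1472] = {0};
	int fd, ep;

	fd = socket(AF_INET, SOCK_DGRAM | SOCK_NONBLOCK, 0);
	ep = epoll_create1(0);
	if (fd < 0 || ep < 0) {
		perror("setup");
		return 1;
	}

	/* Read and write interest end up on the same socket waitqueue. */
	ev.events = EPOLLIN | EPOLLOUT;
	ev.data.fd = fd;
	if (epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev)) {
		perror("epoll_ctl");
		return 1;
	}

	for (;;) {
		if (epoll_wait(ep, &out, 1, -1) < 0)
			break;
		if (out.events & EPOLLIN)
			recv(fd, buf, sizeof(buf), 0);	/* drain rx */
		if (out.events & EPOLLOUT) {
			/* Push datagrams until the send buffer is full. */
			while (sendto(fd, buf, sizeof(buf), 0,
				      (struct sockaddr *)&dst, sizeof(dst)) >= 0)
				;
			if (errno != EAGAIN && errno != EWOULDBLOCK)
				break;
			/* After EAGAIN, the next epoll_wait() reaches
			 * udp_poll(), which (with this patch) sets
			 * SOCK_NOSPACE, so the skb destructor only issues
			 * a wakeup once tx memory has really been freed.
			 */
		}
	}
	close(ep);
	close(fd);
	return 0;
}

A real sender would normally add EPOLLOUT interest only after hitting
EAGAIN and drop it again once drained, so that level-triggered readiness
does not spin; the sketch keeps both interests registered for brevity.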

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 net/ipv4/udp.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

Patch

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 90ff77ab78f9..cacfbee71437 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -803,9 +803,13 @@  static inline bool __udp_wfree(struct sk_buff *skb)
 	bool free;
 
 	free = refcount_sub_and_test(skb->truesize, &sk->sk_wmem_alloc);
-	/* a full barrier is required before waitqueue_active() */
+	/* a full barrier is required before waitqueue_active() and the
+	 * SOCK_NOSPACE test below.
+	 */
 	smp_mb__after_atomic();
 
+	if (sk->sk_socket && !test_bit(SOCK_NOSPACE, &sk->sk_socket->flags))
+		goto out;
 	if (!sock_writeable(sk))
 		goto out;
 
@@ -2925,8 +2929,19 @@  __poll_t udp_poll(struct file *file, struct socket *sock, poll_table *wait)
 	/* psock ingress_msg queue should not contain any bad checksum frames */
 	if (sk_is_readable(sk))
 		mask |= EPOLLIN | EPOLLRDNORM;
-	return mask;
 
+	if (!sock_writeable(sk)) {
+		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+		/* Order with the wspace read so either we observe it
+		 * writeable or udp_sock_wfree() would find SOCK_NOSPACE and
+		 * wake us up.
+		 */
+		smp_mb__after_atomic();
+
+		if (sock_writeable(sk))
+			mask |= EPOLLOUT | EPOLLWRNORM | EPOLLWRBAND;
+	}
+	return mask;
 }
 EXPORT_SYMBOL(udp_poll);
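
The subtle part of the udp_poll() hunk is the pairing with __udp_wfree():
poll publishes SOCK_NOSPACE and then re-reads the write space, while the
destructor releases the charge and then reads SOCK_NOSPACE, so at least
one side must observe the other and a sleeper cannot be stranded. Below
is a small userspace model of that handshake using C11 seq_cst atomics
in place of smp_mb__after_atomic(); the names, the SNDBUF constant and
the flag-clearing policy are my own simplifications, not taken from the
patch.

/* Userspace model of the SOCK_NOSPACE handshake; names are made up. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define SNDBUF	65536

static atomic_int  wmem_alloc;	/* bytes charged to the "socket" */
static atomic_bool nospace;	/* stand-in for SOCK_NOSPACE */

/* rough analogue of sock_writeable(): less than half the budget in flight */
static bool writeable(void)
{
	return atomic_load(&wmem_alloc) < SNDBUF / 2;
}

/* Poller side, mirroring the udp_poll() hunk above. */
static bool poll_for_space(void)
{
	if (writeable())
		return true;

	/* Publish the need for space, then re-check; the seq_cst store
	 * plays the role of set_bit() + smp_mb__after_atomic().
	 */
	atomic_store(&nospace, true);
	if (writeable())
		return true;	/* raced with a free, nothing was lost */
	return false;		/* sleep; the freeing side will wake us */
}

/* Freeing side, mirroring __udp_wfree(): returns true if the caller
 * should wake the waitqueue.
 */
static bool free_skb_charge(int truesize)
{
	/* Release the charge, then look at the flag; the seq_cst RMW
	 * orders the two just like smp_mb__after_atomic() in the patch.
	 */
	atomic_fetch_sub(&wmem_alloc, truesize);
	if (atomic_load(&nospace) && writeable()) {
		atomic_store(&nospace, false);
		return true;
	}
	return false;
}

int main(void)
{
	/* Single-threaded walk-through: fill the queue, poll, then free. */
	atomic_store(&wmem_alloc, SNDBUF);
	printf("poll sees space: %d\n", poll_for_space());	   /* 0 -> sleep */
	printf("free wakes:      %d\n", free_skb_charge(SNDBUF)); /* 1 -> wake */
	return 0;
}

With both sides sequentially consistent, the usual Dekker-style argument
applies: if the poller misses the freed space, its flag store is ordered
before the freeing side's flag load, so the wakeup fires; if the freeing
side misses the flag, the poller must already have observed the space.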