
[PATCHSET RFC 0/7] Send and receive bundles

Message ID 20240308235045.1014125-1-axboe@kernel.dk (mailing list archive)

Message

Jens Axboe March 8, 2024, 11:34 p.m. UTC
Hi,

I went back to the drawing board a bit on the send multishot, and this
is what came out.

First, support was added for provided buffers for send. This works like
provided buffers for recv/recvmsg, and the intent here is to use the
buffer ring queue as an outgoing sequence for sending.
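
For anyone not familiar with provided buffer rings, here's a minimal
userspace sketch (illustrative only, not taken from this series) of
registering a buffer ring with liburing and queueing outgoing payloads
into it. The group ID, ring size, and buffer size are arbitrary:

#include <liburing.h>

#define BGID		7	/* arbitrary buffer group ID */
#define NR_BUFS		64	/* ring entries, must be a power of 2 */
#define BUF_SIZE	4096

static struct io_uring_buf_ring *setup_send_ring(struct io_uring *ring)
{
	struct io_uring_buf_ring *br;
	int ret;

	/* allocate and register an empty buffer ring for group BGID */
	br = io_uring_setup_buf_ring(ring, NR_BUFS, BGID, 0, &ret);
	return br;
}

static void queue_payload(struct io_uring_buf_ring *br, void *buf,
			  unsigned int len, unsigned short bid)
{
	/* 'buf' already holds 'len' bytes of outgoing data */
	io_uring_buf_ring_add(br, buf, len, bid,
			      io_uring_buf_ring_mask(NR_BUFS), 0);
	/* make the new buffer visible to the kernel */
	io_uring_buf_ring_advance(br, 1);
}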

But the real meat is adding support for picking multiple buffers at a
time, what I dubbed "bundles" here. Rather than just pick a single
buffer for a send, it can pick a bunch of them and send them in one go.
The idea here is that the expensive part of a request is not the sqe
issue, it's the fact that we have to do each buffer separately. That
entails calling all the way down into the networking stack, locking the
socket, checking what needs doing afterwards (like flushing the
backlog), unlocking the socket, etc. If we have an outgoing send queue,
then we can pick whatever buffers we have (up to a certain cap) and
pass them to the networking stack in one go.
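
To make that concrete, here's a hedged sketch of what issuing a bundled
send could look like from userspace, continuing the setup sketch above.
The IORING_RECVSEND_BUNDLE flag name is an assumption on my part (it's
what the feature is called upstream; the RFC patches may spell it
differently):

static void queue_send_bundle(struct io_uring *ring, int sockfd)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	/* no explicit buffer/length: the kernel picks from group BGID */
	io_uring_prep_send(sqe, sockfd, NULL, 0, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = BGID;
	/* grab as many queued buffers as are available, send in one go */
	sqe->ioprio |= IORING_RECVSEND_BUNDLE;
}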

Bundles must be used with provided buffers, obviously. At completion
time, they pass the starting buffer ID in cqe->flags, like any other
provided buffer completion. cqe->res is the TOTAL number of bytes sent,
so it's up to the application to iterate buffers to figure out how many
completed. This part is trivial. I'll push the proxy changes out soon,
I just need to clean them up, as I did the sendmsg bundling too and
would love to compare.
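
As a rough sketch of what that iteration could look like: the starting
buffer ID comes out of cqe->flags, and the application walks
consecutive buffer IDs, subtracting the length it queued into each one
until cqe->res bytes are accounted for. queued_len[] below is a
hypothetical per-buffer bookkeeping table, indexed modulo NR_BUFS from
the sketch above:

static void handle_send_bundle_cqe(struct io_uring_cqe *cqe,
				   const unsigned int *queued_len)
{
	unsigned int bid;
	int left = cqe->res;

	if (left < 0 || !(cqe->flags & IORING_CQE_F_BUFFER))
		return;		/* error, or no buffer was selected */

	bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
	while (left > 0) {
		unsigned int this = queued_len[bid & (NR_BUFS - 1)];

		if (this > (unsigned int)left)
			this = left;	/* last buffer sent partially */
		left -= this;
		/* buffer 'bid' is done and can be recycled here */
		bid++;
	}
}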

With that in place, I added bundle support for recv as well. Exactly
the same as the send side - if we have a known amount of data pending,
pick enough buffers to satisfy the receive and post a single completion
for that round. Buffer ID is in cqe->flags, and cqe->res is the total
number of bytes received. Receive can be used with multishot as well -
fire off one multishot recv, and keep getting big completions.
Unfortunately, recvmsg multishot is just not as efficient as recv, as
it carries additional data that needs copying. recv multishot with
bundles provides a good alternative to recvmsg, if all you need is to
receive more than one range of data. I'll compare these soon as well.
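
A corresponding sketch for the receive side, again continuing the setup
above and with the same assumed flag name; here the buffer group would
be populated with empty buffers rather than outgoing payloads:

static void queue_recv_bundle_multishot(struct io_uring *ring, int sockfd)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	/* one SQE, many completions; buffers come from group BGID */
	io_uring_prep_recv_multishot(sqe, sockfd, NULL, 0, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = BGID;
	sqe->ioprio |= IORING_RECVSEND_BUNDLE;

	/*
	 * Each CQE: res is the total bytes received in this round,
	 * flags carry the starting buffer ID, and buffers are filled
	 * in order with only the last one possibly partial. If
	 * IORING_CQE_F_MORE is set, the multishot recv is still armed.
	 */
}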

This is obviously a bigger win for smaller packets than for large ones,
as the per-operation cost of entering sys_sendmsg/sys_recvmsg() matters
less for throughput as the packet size increases. At the extreme end,
using 32-byte packets, performance increases substantially. Runtime for
proxying 32-byte packets between three machines on a 10G link for the
test:

Send ring:		3462 msec		1183Mbit
Send ring + bundles	 844 msec		4853Mbit

and bundles reach 100% of bandwidth at a packet size of 80 bytes,
compared to the send ring alone needing 320 bytes to reach 95% of
bandwidth (I didn't redo that test, so I don't have the 100% number).

Patches are on top of my for-6.9/io_uring branch and can also be found
here:

https://git.kernel.dk/cgit/linux/log/?h=io_uring-recvsend-bundle

 include/linux/io_uring_types.h |   3 +
 include/uapi/linux/io_uring.h  |  10 +
 io_uring/io_uring.c            |   3 +-
 io_uring/kbuf.c                | 203 ++++++++++++-
 io_uring/kbuf.h                |  39 ++-
 io_uring/net.c                 | 528 +++++++++++++++++++++++----------
 io_uring/net.h                 |   2 +-
 io_uring/opdef.c               |   9 +-
 8 files changed, 609 insertions(+), 188 deletions(-)

Comments

Jens Axboe March 10, 2024, 6:15 p.m. UTC | #1
On 3/8/24 4:34 PM, Jens Axboe wrote:
> [...]
> This is obviously a bigger win for smaller packets than for large ones,
> as the overall cost of entering sys_sendmsg/sys_recvmsg() in terms of
> throughput decreases as the packet size increases. For the extreme end,
> using 32b packets, performance increases substantially. Runtime for
> proxying 32b packets between three machines on a 10G link for the test:
> 
> Send ring:		3462 msec		1183Mbit
> Send ring + bundles	 844 msec		4853Mbit
> 
> and bundles reach 100% bandwidth at 80b of packet size, compared to send
> ring alone needing 320b to reach 95% of bandwidth (I didn't redo that
> test so don't have the 100% number).

Re-did all the numbers, see the attached graph. tl;dr is that send
bundles or sendmsg are by far the fastest; they hit line rate very
quickly. This is expected, as both of these send methods can pack more
than a single packet into a send operation, reducing the cost of the
smaller payloads. Looking at profiles, sendmsg does use ~3.5% more CPU
for the same work, which is also expected, as it needs to do a bit more
work to accomplish the same thing.