
[RFC,v5,0/4] vsock: update tools and error handling

Message ID e04f749e-f1a7-9a1d-8213-c633ffcc0a69@sberdevices.ru (mailing list archive)


Arseniy Krasnov Dec. 20, 2022, 7:16 a.m. UTC
This patchset consists of two parts:

1) Kernel patch
A single patch from Bobby Eshleman, taken from:
https://lore.kernel.org/lkml/d81818b868216c774613dd03641fcfe63cc55a45.1660362668.git.bobby.eshleman@bytedance.com/
I kept only the af_vsock.c part, as the VMCI and Hyper-V parts were
rejected.

I picked it up because handling of big messages for SOCK_SEQPACKET was
broken: ENOMEM was returned instead of EMSGSIZE. Moreover, the current
logic, which always replaces any error code returned by the transport
with ENOMEM, looks strange to me in general (for example, in the
EMSGSIZE case the error was changed to ENOMEM).

2) Tool patches
Since there is work on several significant updates for vsock (virtio/
vsock especially) - skbuff, DGRAM, zerocopy rx/tx - I think this
patchset will be useful.

This patchset updates the vsock tests and tools a little bit. First of
all it updates the test suite: two new tests are added. The first is a
reworked message bounds test, which is now more complex: instead of
sending 1-byte messages each carrying a MSG_EOR bit, it sends messages
of random length (half of the messages are smaller than the page size,
the other half are bigger) with a random number of MSG_EOR bits set.
The receiver also does not know the total number of messages in
advance. Message bounds are verified by computing a hash over the
messages on both sides and comparing the results. The second test is
for SOCK_SEQPACKET: it tries to send a message longer than the allowed
maximum. I think both tests will also be useful for DGRAM support.

The third thing this patchset adds is a small utility to test vsock
performance for both rx and tx. I think this utility could be useful
alongside 'iperf'/'uperf', because:
1) It is small compared to 'iperf' or 'uperf', so it is very easy to
   add a new mode or feature to it (especially vsock-specific ones).
2) It allows setting the SO_RCVLOWAT and SO_VM_SOCKETS_BUFFER_SIZE
   options. Overall throughput depends on both parameters.
3) It is located in the kernel source tree, so it can be updated by the
   same patchset that changes the related kernel functionality in vsock.

I used this utility very often to check the performance of my rx
zerocopy support (the tool has rx zerocopy support too, but not in
this patchset).

Here is a comparison of the outputs of three utilities: 'iperf',
'uperf' and 'vsock_perf'. In all three cases the sender was on the
guest side. The rx and tx buffers were always 64 KB (because by
default 'uperf' uses 8 KB).

iperf:

   [ ID] Interval           Transfer     Bitrate
   [  5]   0.00-10.00  sec  12.8 GBytes  11.0 Gbits/sec sender
   [  5]   0.00-10.00  sec  12.8 GBytes  11.0 Gbits/sec receiver

uperf:

   Total     16.27GB /  11.36(s) =    12.30Gb/s       23455op/s

vsock_perf:

   tx performance: 12.301529 Gbits/s
   rx performance: 12.288011 Gbits/s

The results are almost the same in all three cases.

Patchset was rebased and tested on skbuff v8 patch from Bobby Eshleman:
https://lore.kernel.org/netdev/20221215043645.3545127-1-bobby.eshleman@bytedance.com/

Changelog:
 v4 -> v5:
 - Kernel patch: update commit message
 - vsock_perf:
   - Fix typo in commit message
   - Use "fprintf(stderr," instead of "printf(" for errors
   - More stats for tx: total bytes sent and time in tx loop
   - Print throughput in 'gigabits' instead of 'gigabytes' (as in
     'iperf' and 'uperf')
   - Output comparisons between 'iperf', 'uperf' and 'vsock_perf'
     added to CV.

 v3 -> v4:
 - Kernel patch: update commit message by adding error case description
 - Message bounds test:
   - Typo fix: s/contols/controls
   - Fix error output on 'setsockopt()'s
 - vsock_perf:
   - Add 'vsock_perf' target to 'all' in Makefile
   - Fix error output on 'setsockopt()'s
   - Swap sender/receiver roles: now the sender does 'connect()' and
     sends data, while the receiver accepts the connection.
   - Update arguments names: s/mb/bytes, s/so_rcvlowat/rcvlowat
   - Update usage output and description in README

 v2 -> v3:
 - Patches for VMCI and Hyper-V were removed from the patchset
   (following comments from Vishnu Dasa and Dexuan Cui)
 - In the message bounds test the hash is computed from a data buffer
   with random content (in v2 it was computed from the size only). This
   approach controls both data integrity and message bounds.
 - vsock_perf:
   - grammar fixes
   - only long parameters are supported (instead of only short ones)

 v1 -> v2:
 - Three new patches from Bobby Eshleman to kernel part
 - Message bounds test: some refactoring and a comment added to
   describe the purpose of the hashing
 - Big message test: check 'errno' for EMSGSIZE and move the new test
   to the end of the tests array
 - vsock_perf:
   - update README file
   - add simple usage example to commit message
   - update '-h' (help) output
   - use 'stdout' for output instead of 'stderr'
   - use 'strtol' instead of 'atoi'

Bobby Eshleman(1):
 vsock: return errors other than -ENOMEM to socket

Arseniy Krasnov(3):
 test/vsock: rework message bound test
 test/vsock: add big message test
 test/vsock: vsock_perf utility

 net/vmw_vsock/af_vsock.c         |   3 +-
 tools/testing/vsock/Makefile     |   3 +-
 tools/testing/vsock/README       |  34 +++
 tools/testing/vsock/control.c    |  28 +++
 tools/testing/vsock/control.h    |   2 +
 tools/testing/vsock/util.c       |  13 ++
 tools/testing/vsock/util.h       |   1 +
 tools/testing/vsock/vsock_perf.c | 441 +++++++++++++++++++++++++++++++++++++++
 tools/testing/vsock/vsock_test.c | 197 +++++++++++++++--
 9 files changed, 705 insertions(+), 17 deletions(-)

Comments

Stefano Garzarella Dec. 20, 2022, 10:38 a.m. UTC | #1
On Tue, Dec 20, 2022 at 07:16:38AM +0000, Arseniy Krasnov wrote:
>[...]
>
>Here is comparison of outputs from three utils: 'iperf', 'uperf' and
>'vsock_perf'. In all three cases sender was at guest side. rx and
>tx buffers were always 64Kb(because by default 'uperf' uses 8K).
>
>iperf:
>
>   [ ID] Interval           Transfer     Bitrate
>   [  5]   0.00-10.00  sec  12.8 GBytes  11.0 Gbits/sec sender
>   [  5]   0.00-10.00  sec  12.8 GBytes  11.0 Gbits/sec receiver
>
>uperf:
>
>   Total     16.27GB /  11.36(s) =    12.30Gb/s       23455op/s
>
>vsock_perf:
>
>   tx performance: 12.301529 Gbits/s
>   rx performance: 12.288011 Gbits/s
>
>Results are almost same in all three cases.

Thanks for checking this!

>
>Patchset was rebased and tested on skbuff v8 patch from Bobby Eshleman:
>https://lore.kernel.org/netdev/20221215043645.3545127-1-bobby.eshleman@bytedance.com/

I reviewed all the patches; in the last one there is just the README
left to update, so I think it is ready for net-next (when it reopens).

Thanks,
Stefano
Arseniy Krasnov Dec. 20, 2022, 12:06 p.m. UTC | #2
On 20.12.2022 13:38, Stefano Garzarella wrote:
> On Tue, Dec 20, 2022 at 07:16:38AM +0000, Arseniy Krasnov wrote:
>> [...]
> 
> Thanks for checking this!
> 
>> [...]
> 
> I reviewed all the patches, in the last one there is just to update the README, so I think it is ready for net-next (when it will re-open).
Thanks! I'll fix it (I just forgot about the README) and send v6 with
the 'net-next' tag when net-next reopens.
> 
> Thanks,
> Stefano
>