Message ID | e04f749e-f1a7-9a1d-8213-c633ffcc0a69@sberdevices.ru
---|---
Series | vsock: update tools and error handling
On Tue, Dec 20, 2022 at 07:16:38AM +0000, Arseniy Krasnov wrote:
>The patchset consists of two parts:
>
>1) Kernel patch
>A single patch from Bobby Eshleman. I took one patch from Bobby:
>https://lore.kernel.org/lkml/d81818b868216c774613dd03641fcfe63cc55a45.1660362668.git.bobby.eshleman@bytedance.com/
>and used only the af_vsock.c part, as the VMCI and Hyper-V parts were
>rejected.
>
>I used it because handling of big messages was broken for
>SOCK_SEQPACKET: ENOMEM was returned instead of EMSGSIZE. The current
>logic, which always replaces any error code returned by the transport
>with ENOMEM, also looks strange to me (for example, in the EMSGSIZE
>case it was changed to ENOMEM).
>
>2) Tool patches
>Since there is work on several significant updates for vsock
>(virtio/vsock especially): skbuff, DGRAM, zerocopy rx/tx, I think this
>patchset will be useful.
>
>This patchset updates the vsock tests and tools a little. First of
>all, it updates the test suite: two new tests are added. The first is
>a reworked message bounds test, which is now more complex. Instead of
>sending 1-byte messages with a single MSG_EOR bit, it sends messages
>of random length (half of the messages are smaller than the page size,
>the other half are bigger) with a random number of MSG_EOR bits set.
>The receiver also doesn't know the total number of messages. Message
>bounds are verified by computing a hash over the message lengths. The
>second test is for SOCK_SEQPACKET: it tries to send a message longer
>than allowed. I think both tests will also be useful for DGRAM
>support.
>
>The third thing this patchset adds is a small utility to test vsock
>performance for both rx and tx. I think this utility could be useful
>in the same way as 'iperf'/'uperf', because:
>1) It is small compared to 'iperf' or 'uperf', so it is very easy to
>   add a new mode or feature to it (especially vsock-specific ones).
>2) It allows setting the SO_RCVLOWAT and SO_VM_SOCKETS_BUFFER_SIZE
>   options. Overall throughput depends on both parameters.
>3) It is located in the kernel source tree, so it can be updated by
>   the same patchset that changes the related kernel functionality in
>   vsock.
>
>I used this utility very often to check the performance of my rx
>zerocopy support (the tool has rx zerocopy support, but not in this
>patchset).
>
>Here is a comparison of the outputs of the three tools: 'iperf',
>'uperf' and 'vsock_perf'. In all three cases the sender was on the
>guest side. The rx and tx buffers were always 64 KiB (because by
>default 'uperf' uses 8K).
>
>iperf:
>
>  [ ID] Interval          Transfer     Bitrate
>  [  5] 0.00-10.00  sec   12.8 GBytes  11.0 Gbits/sec  sender
>  [  5] 0.00-10.00  sec   12.8 GBytes  11.0 Gbits/sec  receiver
>
>uperf:
>
>  Total  16.27GB / 11.36(s) = 12.30Gb/s  23455op/s
>
>vsock_perf:
>
>  tx performance: 12.301529 Gbits/s
>  rx performance: 12.288011 Gbits/s
>
>Results are almost the same in all three cases.

Thanks for checking this!

>
>The patchset was rebased and tested on the skbuff v8 patch from Bobby
>Eshleman:
>https://lore.kernel.org/netdev/20221215043645.3545127-1-bobby.eshleman@bytedance.com/

I reviewed all the patches; only the README still needs updating in the
last one, so I think this is ready for net-next (when it reopens).

Thanks,
Stefano
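The reworked message-bounds test described in the cover letter can be pictured with a short sketch. This is not the code from the series: the helper names and the hash function are assumptions; it only shows the idea of folding each message length into a running hash on both peers so the receiver can verify boundaries without knowing the message count in advance.

```c
/*
 * Illustration only: the actual test lives in tools/testing/vsock and
 * uses its own helpers. Both peers fold the length of every SEQPACKET
 * message into a running hash; the hashes are compared at the end, so
 * message boundaries are checked without the receiver knowing the
 * number of messages in advance.
 */
#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>

static uint64_t hash_fold(uint64_t h, size_t len)
{
	/* FNV-1a style mix over the message length. */
	h ^= (uint64_t)len;
	h *= 0x100000001b3ULL;
	return h;
}

static uint64_t sender_loop(int fd, char *buf, size_t buf_size, int nmsgs)
{
	uint64_t h = 0;

	for (int i = 0; i < nmsgs; i++) {
		/* Random length; the real test splits lengths around the
		 * page size and sets MSG_EOR on a random subset.
		 */
		size_t len = 1 + (size_t)rand() % buf_size;
		int flags = (rand() & 1) ? MSG_EOR : 0;

		if (send(fd, buf, len, flags) != (ssize_t)len)
			exit(EXIT_FAILURE);
		h = hash_fold(h, len);
	}
	return h; /* compared against the receiver's hash afterwards */
}

static uint64_t receiver_loop(int fd, char *buf, size_t buf_size)
{
	uint64_t h = 0;
	ssize_t len;

	/* Each recv() returns one whole message (buf_size is assumed to
	 * be large enough to avoid truncation).
	 */
	while ((len = recv(fd, buf, buf_size, 0)) > 0)
		h = hash_fold(h, (size_t)len);
	return h;
}
```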
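The second new test, sending a SEQPACKET message longer than the transport allows and expecting EMSGSIZE instead of the old ENOMEM, could look roughly like the following. The helper name and error handling are assumptions for illustration, not the test code from the series.

```c
/* Illustration only, not the test code from the series. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

/*
 * Try to send a single SEQPACKET message bigger than the maximum the
 * transport accepts and verify that the kernel reports EMSGSIZE
 * rather than ENOMEM.
 */
static void check_oversized_send(int fd, size_t too_big)
{
	char *buf = calloc(1, too_big);

	if (!buf) {
		perror("calloc");
		exit(EXIT_FAILURE);
	}

	if (send(fd, buf, too_big, MSG_EOR) != -1 || errno != EMSGSIZE) {
		fprintf(stderr, "expected EMSGSIZE, got %s\n",
			strerror(errno));
		exit(EXIT_FAILURE);
	}

	free(buf);
}
```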
On 20.12.2022 13:38, Stefano Garzarella wrote:
> On Tue, Dec 20, 2022 at 07:16:38AM +0000, Arseniy Krasnov wrote:

[...]

>> The patchset was rebased and tested on the skbuff v8 patch from Bobby
>> Eshleman:
>> https://lore.kernel.org/netdev/20221215043645.3545127-1-bobby.eshleman@bytedance.com/
>
> I reviewed all the patches; only the README still needs updating in the
> last one, so I think this is ready for net-next (when it reopens).

Thanks! I'll fix it (I just forgot about the README) and send v6 with the
'net-next' tag when net-next reopens.

>
> Thanks,
> Stefano
>
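As a side note on point 2) of the cover letter, here is a minimal sketch of setting the two socket options mentioned there on a vsock socket. The helper name and the example values are made up; it only shows which levels the options live at, not how vsock_perf itself is implemented.

```c
/*
 * Minimal sketch (not the vsock_perf source): SO_VM_SOCKETS_BUFFER_SIZE
 * is an AF_VSOCK-level option taking a 64-bit value, while SO_RCVLOWAT
 * is the usual SOL_SOCKET option.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

static void set_vsock_perf_opts(int fd, unsigned long long buf_size,
				int rcvlowat)
{
	/* Per-socket vsock buffer size (affects credit-based flow control). */
	if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
		       &buf_size, sizeof(buf_size))) {
		perror("setsockopt(SO_VM_SOCKETS_BUFFER_SIZE)");
		exit(EXIT_FAILURE);
	}

	/* Minimum number of bytes available before the reader is woken up. */
	if (setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT,
		       &rcvlowat, sizeof(rcvlowat))) {
		perror("setsockopt(SO_RCVLOWAT)");
		exit(EXIT_FAILURE);
	}
}

int main(void)
{
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

	if (fd < 0) {
		perror("socket");
		return EXIT_FAILURE;
	}

	/* Example values: 64 KiB buffer, wake the reader at 4 KiB.
	 * Connection setup is omitted in this sketch.
	 */
	set_vsock_perf_opts(fd, 64 * 1024ULL, 4 * 1024);
	return 0;
}
```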