
[v5,00/11] remove msize limit in virtio transport

Message ID: cover.1657636554.git.linux_oss@crudebyte.com

Message

Christian Schoenebeck July 12, 2022, 2:35 p.m. UTC
This series aims to get rid of the current 500k 'msize' limitation in the
9p virtio transport, which is currently a bottleneck for the performance of
9p mounts.

To avoid confusion: this series does remove the msize limit for the virtio
transport; on the 9p client level, however, the anticipated milestone for this
series is currently a maximum 'msize' of 4 MB. See patch 7 for the reason why.

This is a follow-up of the following series and discussion:
https://lore.kernel.org/all/cover.1640870037.git.linux_oss@crudebyte.com/

Latest version of this series:
https://github.com/cschoenebeck/linux/commits/9p-virtio-drop-msize-cap


OVERVIEW OF PATCHES:

* Patches 1..6 remove the msize limitation from the 'virtio' transport
  (i.e. the 9p 'virtio' transport itself actually supports >4MB now, tested
  successfully with an experimental QEMU version and some dirty 9p Linux
  client hacks up to msize=128MB).

* Patch 7 limits msize for all transports to 4 MB for now as >4MB would need
  more work on 9p client level (see commit log of patch 7 for details).

* Patches 8..11 tremendously reduce unnecessarily large 9p message sizes and
  therefore provide a performance gain as well. So far, almost all 9p messages
  simply allocated message buffers exactly msize large, even for messages
  that actually needed just a few bytes. So these patches make sense by
  themselves, independent of this overall series; they matter even more for
  this series though, because the larger msize gets, the more this issue
  would otherwise have hurt.


PREREQUISITES:

If you are testing with QEMU then please either use QEMU 6.2 or higher, or
at least apply the following patch on the QEMU side:

  https://lore.kernel.org/qemu-devel/E1mT2Js-0000DW-OH@lizzy.crudebyte.com/

That QEMU patch is required if you are using a user space app that
automatically retrieves an optimum I/O block size by obeying stat's
st_blksize, as 'cat' does for instance, e.g.:

	time cat test_rnd.dat > /dev/null
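
A quick way to check which block size such an app would end up using is to
query st_blksize directly, e.g. with GNU coreutils' stat (just an
illustration, using the same test file as above; %o prints the "optimal I/O
transfer size hint", i.e. st_blksize):

	stat -c %o test_rnd.dat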

Otherwise, please use a user space app for performance testing that allows
you to force a large block size and thus avoid that QEMU issue, like 'dd'
for instance; in that case you don't need to patch QEMU.
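
For example, something along these lines (just a sketch; the block size is
arbitrary, pick anything large enough to sidestep st_blksize, and the test
file is the one from above):

	time dd if=test_rnd.dat of=/dev/null bs=1M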


KNOWN LIMITATION:

With this series applied I can run

  QEMU host <-> 9P virtio <-> Linux guest

with an msize of up to slightly below 4 MB [4186112 = (1024-2) * 4096]. If I
try to run it with exactly 4 MB (4194304), it currently hits a limitation on
the QEMU side:

  qemu-system-x86_64: virtio: too many write descriptors in indirect table

That's because QEMU currently has a hard-coded limit of max. 1024 virtio
descriptors per vring slot (i.e. per virtio message), see STILL TO DO (1.)
below.
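
For reference, such a setup can be mounted on the guest side roughly like
this (just a sketch; the mount tag 'hostshare' and the mount point are
placeholders for whatever mount_tag your QEMU configuration exports):

	mount -t 9p -o trans=virtio,version=9p2000.L,msize=4186112 hostshare /mnt/host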


STILL TO DO:

  1. Negotiating virtio "Queue Indirect Size" (MANDATORY):

    The QEMU issue described above must be addressed by negotiating the
    maximum length of virtio indirect descriptor tables on virtio device
    initialization. This would not only avoid the QEMU error above, but would
    also allow msize of >4MB in the future. Before that change can be done on
    the Linux and QEMU sides though, it first requires a change to the virtio
    specs. Work on the virtio specs is in progress:

    https://github.com/oasis-tcs/virtio-spec/issues/122

    This is not really an issue for testing this series. Just stick to max.
    msize=4186112 as described above and you will be fine. However for the
    final PR this should obviously be addressed in a clean way.

  2. Reduce readdir buffer sizes (optional - maybe later):

    This series already reduced the message buffers for most 9p message
    types. It does not yet include Treaddir though, which still simply uses
    msize. It would make sense to benchmark first whether this is actually an
    issue that hurts. If it does, then one might use already existing vfs
    knowledge to estimate the Treaddir size, or start with some reasonably
    small hard-coded Treaddir size first and increase it just on the 2nd
    Treaddir request if there are more directory entries to fetch.

  3. Add more buffer caches (optional - maybe later):

    p9_fcall_init() uses kmem_cache_alloc() instead of kmalloc() for very
    large buffers to reduce latency waiting for memory allocation to
    complete. Currently it does that only if the requested buffer size is
    exactly msize large. As patch 10 already divided the 9p message types
    into a few message size categories, maybe it would make sense to use e.g.
    4 separate caches for those size categories (e.g. 4k, 8k, msize/2,
    msize). Might be worth a benchmark test.

Testing and feedback appreciated!

v4 -> v5:

  * Exclude RDMA transport from buffer size reduction. [patch 11]

Christian Schoenebeck (11):
  9p/trans_virtio: separate allocation of scatter gather list
  9p/trans_virtio: turn amount of sg lists into runtime info
  9p/trans_virtio: introduce struct virtqueue_sg
  net/9p: add trans_maxsize to struct p9_client
  9p/trans_virtio: support larger msize values
  9p/trans_virtio: resize sg lists to whatever is possible
  net/9p: limit 'msize' to KMALLOC_MAX_SIZE for all transports
  net/9p: split message size argument into 't_size' and 'r_size' pair
  9p: add P9_ERRMAX for 9p2000 and 9p2000.u
  net/9p: add p9_msg_buf_size()
  net/9p: allocate appropriate reduced message buffers

 include/net/9p/9p.h     |   3 +
 include/net/9p/client.h |   2 +
 net/9p/client.c         |  68 +++++++--
 net/9p/protocol.c       | 154 ++++++++++++++++++++
 net/9p/protocol.h       |   2 +
 net/9p/trans_virtio.c   | 304 +++++++++++++++++++++++++++++++++++-----
 6 files changed, 484 insertions(+), 49 deletions(-)

Comments

Dominique Martinet July 12, 2022, 9:13 p.m. UTC | #1
Alright; anything I didn't reply to looks good to me.

Christian Schoenebeck wrote on Tue, Jul 12, 2022 at 04:35:54PM +0200:
> OVERVIEW OF PATCHES:
> 
> * Patches 1..6 remove the msize limitation from the 'virtio' transport
>   (i.e. the 9p 'virtio' transport itself actually supports >4MB now, tested
>   successfully with an experimental QEMU version and some dirty 9p Linux
>   client hacks up to msize=128MB).

I have no problem with this except for the small nitpicks I gave, but
would be tempted to delay this part for one more cycle as it's really
independent -- what do you think?


> * Patch 7 limits msize for all transports to 4 MB for now as >4MB would need
>   more work on 9p client level (see commit log of patch 7 for details).
> 
> * Patches 8..11 tremendously reduce unnecessarily large 9p message sizes and
>   therefore provide a performance gain as well. So far, almost all 9p messages
>   simply allocated message buffers exactly msize large, even for messages
>   that actually needed just a few bytes. So these patches make sense by
>   themselves, independent of this overall series; they matter even more for
>   this series though, because the larger msize gets, the more this issue
>   would otherwise have hurt.

time-wise we're getting close to the merge window already (probably in 2
weeks), how confident are you in this?
I can take patches 8..11 in -next now and probably find some time to
test over next weekend, are we good?
Christian Schoenebeck July 13, 2022, 8:54 a.m. UTC | #2
On Dienstag, 12. Juli 2022 23:13:16 CEST Dominique Martinet wrote:
> Alright; anything I didn't reply to looks good to me.
> 
> Christian Schoenebeck wrote on Tue, Jul 12, 2022 at 04:35:54PM +0200:
> > OVERVIEW OF PATCHES:
> > 
> > * Patches 1..6 remove the msize limitation from the 'virtio' transport
> >   (i.e. the 9p 'virtio' transport itself actually supports >4MB now, tested
> >   successfully with an experimental QEMU version and some dirty 9p Linux
> >   client hacks up to msize=128MB).
> 
> I have no problem with this except for the small nitpicks I gave, but
> would be tempted to delay this part for one more cycle as it's really
> independent -- what do you think?

Yes, I would also postpone the virtio patches to a subsequent release cycle.

> > * Patch 7 limits msize for all transports to 4 MB for now as >4MB would need
> >   more work on 9p client level (see commit log of patch 7 for details).
> > 
> > * Patches 8..11 tremendously reduce unnecessarily large 9p message sizes and
> >   therefore provide a performance gain as well. So far, almost all 9p messages
> >   simply allocated message buffers exactly msize large, even for messages
> >   that actually needed just a few bytes. So these patches make sense by
> >   themselves, independent of this overall series; they matter even more for
> >   this series though, because the larger msize gets, the more this issue
> >   would otherwise have hurt.
> 
> time-wise we're getting close to the merge window already (probably in 2
> weeks), how confident are you in this?
> I can take patches 8..11 in -next now and probably find some time to
> test over next weekend, are we good?

Well, I have tested them thoroughly, but nevertheless IMO someone other than
me should review patch 10 as well, and check whether the calculations for the
individual message types are correct. That's a bit of spec dictionary lookup.

Best regards,
Christian Schoenebeck