Message ID | CANeMGR6CBxC8HtqbGamgpLGM+M1Ndng_WJ-RxFXXJnc9O3cVwQ@mail.gmail.com (mailing list archive) |
---|---|
State | New |
Series | Calculate VIRTQUEUE_NUM in "net/9p/trans_virtio.c" from stack size |
Christian,

this is more up your alley, letting you comment as well as you weren't
even sent a copy in Ccs

Guan, overall, please check Documentation/process/submitting-patches.rst:
this is missing [PATCH] in the mail header, missing some recipients that
you'd have gotten from get_maintainer.pl, and the commit title is a mess.
Have a look at other recent patches on https://lore.kernel.org/v9fs/

Guan Xin wrote on Sat, Oct 26, 2024 at 12:18:42AM +0800:
> For HPC applications the hard-coded VIRTQUEUE_NUM of 128 seems to
> limit the throughput of guest systems accessing cluster filesystems
> mounted on the host.
>
> Just increase VIRTQUEUE_NUM for kernels with a larger stack.

You're replacing a hardcoded value with another; this could be made
dynamic, e.g. as a module_param, so someone could tune it based on their
actual needs (and test more easily). I'd more readily accept such a
patch.

> Author: GUAN Xin <guanx.bac@gmail.com>

An Author: tag doesn't exist and would be useless here, as it's the
mail you sent the patch from.

> Signed-off-by: GUAN Xin <guanx.bac@gmail.com>
> cc: Eric Van Hensbergen <ericvh@kernel.org>
> cc: v9fs@lists.linux.dev
> cc: netdev@vger.kernel.org
> cc: linux-fsdevel@vger.kernel.org
>
> --- net/9p/trans_virtio.c.orig	2024-10-25 10:25:09.390922517 +0800
> +++ net/9p/trans_virtio.c	2024-10-25 16:48:40.451680192 +0800
> @@ -31,11 +31,12 @@
>  #include <net/9p/transport.h>
>  #include <linux/scatterlist.h>
>  #include <linux/swap.h>
> +#include <linux/thread_info.h>
>  #include <linux/virtio.h>
>  #include <linux/virtio_9p.h>
>  #include "trans_common.h"
>
> -#define VIRTQUEUE_NUM	128
> +#define VIRTQUEUE_NUM	(1 << (THREAD_SIZE_ORDER + PAGE_SHIFT - 6))

(FWIW that turned out to be 256 on my system)

>  /* a single mutex to manage channel initialization and attachment */
>  static DEFINE_MUTEX(virtio_9p_lock);
Dominique,

On Sat, Oct 26, 2024 at 5:53 AM Dominique Martinet
<asmadeus@codewreck.org> wrote:
> Have a look at other recent patches on https://lore.kernel.org/v9fs/

Sorry, I'm new to Linux and didn't find out the exact workflow from the
numerous instruction files. Thank you for pointing me to the examples!
This is greatly helpful.

Guan

P.S. Patch attached:

> Guan Xin wrote on Sat, Oct 26, 2024 at 12:18:42AM +0800:
> > For HPC applications the hard-coded VIRTQUEUE_NUM of 128 seems to
> > limit the throughput of guest systems accessing cluster filesystems
> > mounted on the host.
> >
> > Just increase VIRTQUEUE_NUM for kernels with a larger stack.
>
> You're replacing a hardcoded value with another; this could be made
> dynamic, e.g. as a module_param, so someone could tune it based on
> their actual needs (and test more easily). I'd more readily accept
> such a patch.
>
> > Author: GUAN Xin <guanx.bac@gmail.com>
>
> An Author: tag doesn't exist and would be useless here, as it's the
> mail you sent the patch from.
>
> > Signed-off-by: GUAN Xin <guanx.bac@gmail.com>
> > cc: Eric Van Hensbergen <ericvh@kernel.org>
> > cc: v9fs@lists.linux.dev
> > cc: netdev@vger.kernel.org
> > cc: linux-fsdevel@vger.kernel.org
> >
> > --- net/9p/trans_virtio.c.orig	2024-10-25 10:25:09.390922517 +0800
> > +++ net/9p/trans_virtio.c	2024-10-25 16:48:40.451680192 +0800
> > @@ -31,11 +31,12 @@
> >  #include <net/9p/transport.h>
> >  #include <linux/scatterlist.h>
> >  #include <linux/swap.h>
> > +#include <linux/thread_info.h>
> >  #include <linux/virtio.h>
> >  #include <linux/virtio_9p.h>
> >  #include "trans_common.h"
> >
> > -#define VIRTQUEUE_NUM	128
> > +#define VIRTQUEUE_NUM	(1 << (THREAD_SIZE_ORDER + PAGE_SHIFT - 6))
>
> (FWIW that turned out to be 256 on my system)
On Friday, October 25, 2024 11:52:56 PM CEST Dominique Martinet wrote:
> Christian,
>
> this is more up your alley, letting you comment as well as you weren't
> even sent a copy in Ccs
[...]
> > Signed-off-by: GUAN Xin <guanx.bac@gmail.com>
> > cc: Eric Van Hensbergen <ericvh@kernel.org>
> > cc: v9fs@lists.linux.dev
> > cc: netdev@vger.kernel.org
> > cc: linux-fsdevel@vger.kernel.org
> >
> > --- net/9p/trans_virtio.c.orig	2024-10-25 10:25:09.390922517 +0800
> > +++ net/9p/trans_virtio.c	2024-10-25 16:48:40.451680192 +0800
> > @@ -31,11 +31,12 @@
> >  #include <net/9p/transport.h>
> >  #include <linux/scatterlist.h>
> >  #include <linux/swap.h>
> > +#include <linux/thread_info.h>
> >  #include <linux/virtio.h>
> >  #include <linux/virtio_9p.h>
> >  #include "trans_common.h"
> >
> > -#define VIRTQUEUE_NUM	128
> > +#define VIRTQUEUE_NUM	(1 << (THREAD_SIZE_ORDER + PAGE_SHIFT - 6))
>
> (FWIW that turned out to be 256 on my system)

Guan, it took me a bit to understand why you would change this constant
depending on maximum stack size, as it is not obvious. Looks like you
made this because of this comment (net/9p/trans_virtio.c):

struct virtio_chan {
	...
	/* Scatterlist: can be too big for stack. */
	struct scatterlist sg[VIRTQUEUE_NUM];
	...
};

However the stack size is not the limiting factor. It's a bit more
complicated than that:

I have also been working on increasing performance by allowing larger
9p message size and made it user-configurable at runtime. Here is the
latest version of my patch set:

https://lore.kernel.org/all/cover.1657636554.git.linux_oss@crudebyte.com/

Patches 8..11 have already been merged. Patches 1..7 are still to be
merged.

/Christian
Hi Christian,

On Sat, Oct 26, 2024 at 5:36 PM Christian Schoenebeck
<linux_oss@crudebyte.com> wrote:
>
> Guan,
>
> it took me a bit to understand why you would change this constant
> depending on maximum stack size, as it is not obvious. Looks like you
> made this because of this comment (net/9p/trans_virtio.c):
>
> struct virtio_chan {
> 	...
> 	/* Scatterlist: can be too big for stack. */
> 	struct scatterlist sg[VIRTQUEUE_NUM];
> 	...
> };

Yes, exactly.

> However the stack size is not the limiting factor. It's a bit more
> complicated than that:
>
> I have also been working on increasing performance by allowing larger
> 9p message size and made it user-configurable at runtime. Here is the
> latest version of my patch set:
>
> https://lore.kernel.org/all/cover.1657636554.git.linux_oss@crudebyte.com/
>
> Patches 8..11 have already been merged. Patches 1..7 are still to be
> merged.
>
> /Christian

That would be better! I'll take a look at your patches. Please ignore my
patch for the moment.

Guan
On Saturday, October 26, 2024 11:36:13 AM CET Christian Schoenebeck wrote:
[...]
> I have also been working on increasing performance by allowing larger
> 9p message size and made it user-configurable at runtime. Here is the
> latest version of my patch set:
>
> https://lore.kernel.org/all/cover.1657636554.git.linux_oss@crudebyte.com/
>
> Patches 8..11 have already been merged. Patches 1..7 are still to be
> merged.

Sorry, it's been a while; I linked the wrong version of this patch set
(v5). The latest version is actually v6, here:

https://lore.kernel.org/all/cover.1657920926.git.linux_oss@crudebyte.com/

Accordingly, patches 7..11 (of v6) have already been merged, whereas
patches 1..6 are not merged yet.

/Christian
--- net/9p/trans_virtio.c.orig	2024-10-25 10:25:09.390922517 +0800
+++ net/9p/trans_virtio.c	2024-10-25 16:48:40.451680192 +0800
@@ -31,11 +31,12 @@
 #include <net/9p/transport.h>
 #include <linux/scatterlist.h>
 #include <linux/swap.h>
+#include <linux/thread_info.h>
 #include <linux/virtio.h>
 #include <linux/virtio_9p.h>
 #include "trans_common.h"
 
-#define VIRTQUEUE_NUM	128
+#define VIRTQUEUE_NUM	(1 << (THREAD_SIZE_ORDER + PAGE_SHIFT - 6))
 
 /* a single mutex to manage channel initialization and attachment */
 static DEFINE_MUTEX(virtio_9p_lock);
For HPC applications the hard-coded VIRTQUEUE_NUM of 128 seems to limit
the throughput of guest systems accessing cluster filesystems mounted
on the host.

Just increase VIRTQUEUE_NUM for kernels with a larger stack.

Author: GUAN Xin <guanx.bac@gmail.com>
Signed-off-by: GUAN Xin <guanx.bac@gmail.com>
cc: Eric Van Hensbergen <ericvh@kernel.org>
cc: v9fs@lists.linux.dev
cc: netdev@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org