[RFC,2/3] veth: make queues nr configurable via kernel module params

Message ID 480e7a960c26c9ab84efe59ed706f1a1a459d38c.1625823139.git.pabeni@redhat.com
State Superseded
Delegated to: Netdev Maintainers
Series veth: more flexible channels number configuration

Checks

Context                         Check    Description
netdev/cover_letter             success
netdev/fixes_present            success
netdev/patch_count              success
netdev/tree_selection           success  Guessed tree name to be net-next
netdev/subject_prefix           warning  Target tree name not specified in the subject
netdev/cc_maintainers           success  CCed 3 of 3 maintainers
netdev/source_inline            success  Was 0 now: 0
netdev/verify_signedoff         success
netdev/module_param             fail     Was 0 now: 1
netdev/build_32bit              fail     Errors and warnings before: 3 this patch: 5
netdev/kdoc                     success  Errors and warnings before: 0 this patch: 0
netdev/verify_fixes             success
netdev/checkpatch               success  total: 0 errors, 0 warnings, 0 checks, 39 lines checked
netdev/build_allmodconfig_warn  fail     Errors and warnings before: 3 this patch: 5
netdev/header_inline            success

Commit Message

Paolo Abeni July 9, 2021, 9:39 a.m. UTC
This allows configuring the number of tx and rx queues at
module load time. A single module parameter controls
both the default number of RX and TX queues created
at device registration time.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 drivers/net/veth.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
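
As a usage sketch (hypothetical invocation; assuming veth is built as a module, with the parameter read at device creation time as in this patch):

modprobe veth queues_nr=8
# the 0644 permissions also expose the knob at runtime:
echo 8 > /sys/module/veth/parameters/queues_nr
ip link add name v0 type veth peer name pv0   # gets 8 RX and 8 TX queues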

Comments

Toke Høiland-Jørgensen July 9, 2021, 10:24 a.m. UTC | #1
Paolo Abeni <pabeni@redhat.com> writes:

> This allows configuring the number of tx and rx queues at
> module load time. A single module parameter controls
> both the default number of RX and TX queues created
> at device registration time.
>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
>  drivers/net/veth.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index 10360228a06a..787b4ad2cc87 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -27,6 +27,11 @@
>  #include <linux/bpf_trace.h>
>  #include <linux/net_tstamp.h>
>  
> +static int queues_nr	= 1;
> +
> +module_param(queues_nr, int, 0644);
> +MODULE_PARM_DESC(queues_nr, "Max number of RX and TX queues (default = 1)");

Adding new module parameters is generally discouraged. Also, it's sort
of a cumbersome API: you have to set this first, then re-create
the device, and then use channels to get the number you want.

So why not just default to allocating num_possible_cpus() number of
queues? Arguably that is the value that makes the most sense from a
scalability point of view anyway, but if we're concerned about behaviour
change (are we?), we could just default real_num_*_queues to 1, so that
the extra queues have to be explicitly enabled by ethtool?

-Toke
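
(Spelled out, the three-step flow criticized above would look roughly like this; a sketch, where the final step assumes the veth set_channels support added elsewhere in this series:)

echo 8 > /sys/module/veth/parameters/queues_nr   # 1) set the parameter first
ip link add name v1 type veth peer name pv1      # 2) then (re-)create the device
ethtool -L v1 rx 4 tx 4                          # 3) then use channels to get the count wanted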
Paolo Abeni July 9, 2021, 3:33 p.m. UTC | #2
On Fri, 2021-07-09 at 12:24 +0200, Toke Høiland-Jørgensen wrote:
> Paolo Abeni <pabeni@redhat.com> writes:
> 
> > This allows configuring the number of tx and rx queues at
> > module load time. A single module parameter controls
> > both the default number of RX and TX queues created
> > at device registration time.
> > 
> > Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> > ---
> >  drivers/net/veth.c | 21 +++++++++++++++++++++
> >  1 file changed, 21 insertions(+)
> > 
> > diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> > index 10360228a06a..787b4ad2cc87 100644
> > --- a/drivers/net/veth.c
> > +++ b/drivers/net/veth.c
> > @@ -27,6 +27,11 @@
> >  #include <linux/bpf_trace.h>
> >  #include <linux/net_tstamp.h>
> >  
> > +static int queues_nr	= 1;
> > +
> > +module_param(queues_nr, int, 0644);
> > +MODULE_PARM_DESC(queues_nr, "Max number of RX and TX queues (default = 1)");
> 
> Adding new module parameters is generally discouraged. Also, it's sort
> of a cumbersome API: you have to set this first, then re-create
> the device, and then use channels to get the number you want.
> 
> So why not just default to allocating num_possible_cpus() number of
> queues? Arguably that is the value that makes the most sense from a
> scalability point of view anyway, but if we're concerned about behaviour
> change (are we?), we could just default real_num_*_queues to 1, so that
> the extra queues have to be explicitly enabled by ethtool?

I was concerned by the amount of wasted memory (should be ~256
bytes per rx queue, ~320 per tx, plus the sysfs entries).

real_num_tx_queue > 1 will make the xmit path slower, so we likely
want to keep that at 1 by default - unless userspace explicitly sets
numtxqueues via netlink.
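
(For reference, that explicit netlink opt-in is exposed by ip(8):)

ip link add name v0 numtxqueues 8 numrxqueues 8 type veth peer name pv0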

Finally, a large default num_tx_queue slows down device creation:

cat << 'ENDL' > run.sh
#!/bin/sh
MAX=$1
for I in `seq 1 $MAX`; do
	ip link add name v$I type veth peer name pv$I
done
for I in `seq 1 $MAX`; do
	ip link del dev v$I
done
ENDL
chmod a+x run.sh

# with num_tx_queue == 1
time ./run.sh 100 
real	0m2.276s
user	0m0.107s
sys	0m0.162s

# with num_tx_queue == 128
time ./run.sh 100
real	0m4.199s
user	0m0.091s
sys	0m1.419s

# with num_tx_queue == 4096
time ./run.sh 100 
real	0m24.519s
user	0m0.089s
sys	0m21.711s

Still, if there is agreement I can switch to num_possible_cpus default,
plus some trickery to keep real_num_{r,t}x_queue unchanged.

WDYT?

Thanks!

Paolo
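
(A sketch of what that would look like from userspace; hypothetical output shape, assuming the ethtool channels support from this series:)

ip link add name v0 type veth peer name pv0
ethtool -l v0    # pre-set maximums: nproc; current counts: unchanged defaults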
Toke Høiland-Jørgensen July 9, 2021, 4:12 p.m. UTC | #3
Paolo Abeni <pabeni@redhat.com> writes:

> On Fri, 2021-07-09 at 12:24 +0200, Toke Høiland-Jørgensen wrote:
>> Paolo Abeni <pabeni@redhat.com> writes:
>> 
>> > This allows configuring the number of tx and rx queues at
>> > module load time. A single module parameter controls
>> > both the default number of RX and TX queues created
>> > at device registration time.
>> > 
>> > Signed-off-by: Paolo Abeni <pabeni@redhat.com>
>> > ---
>> >  drivers/net/veth.c | 21 +++++++++++++++++++++
>> >  1 file changed, 21 insertions(+)
>> > 
>> > diff --git a/drivers/net/veth.c b/drivers/net/veth.c
>> > index 10360228a06a..787b4ad2cc87 100644
>> > --- a/drivers/net/veth.c
>> > +++ b/drivers/net/veth.c
>> > @@ -27,6 +27,11 @@
>> >  #include <linux/bpf_trace.h>
>> >  #include <linux/net_tstamp.h>
>> >  
>> > +static int queues_nr	= 1;
>> > +
>> > +module_param(queues_nr, int, 0644);
>> > +MODULE_PARM_DESC(queues_nr, "Max number of RX and TX queues (default = 1)");
>> 
>> Adding new module parameters is generally discouraged. Also, it's sort
>> of a cumbersome API: you have to set this first, then re-create
>> the device, and then use channels to get the number you want.
>> 
>> So why not just default to allocating num_possible_cpus() number of
>> queues? Arguably that is the value that makes the most sense from a
>> scalability point of view anyway, but if we're concerned about behaviour
>> change (are we?), we could just default real_num_*_queues to 1, so that
>> the extra queues have to be explicitly enabled by ethtool?
>
> I was concerned by the amount of wasted memory (should be ~256
> bytes per rx queue, ~320 per tx, plus the sysfs entries).

I'm not too worried by that since it's per CPU; systems with a lot of
CPUs should hopefully also have plenty of memory. Or at least I think
the user friendliness outweighs the cost in memory.

> real_num_tx_queue > 1 will make the xmit path slower, so we likely
> want to keep that at 1 by default - unless userspace explicitly sets
> numtxqueues via netlink.

Right, that's fine by me :)

> Finally, a large default num_tx_queue slows down device creation:
>
> cat << 'ENDL' > run.sh
> #!/bin/sh
> MAX=$1
> for I in `seq 1 $MAX`; do
> 	ip link add name v$I type veth peer name pv$I
> done
> for I in `seq 1 $MAX`; do
> 	ip link del dev v$I
> done
> ENDL
> chmod a+x run.sh
>
> # with num_tx_queue == 1
> time ./run.sh 100 
> real	0m2.276s
> user	0m0.107s
> sys	0m0.162s
>
> # with num_tx_queue == 128
> time ./run.sh 100
> real	0m4.199s
> user	0m0.091s
> sys	0m1.419s
>
> # with num_tx_queue == 4096
> time ./run.sh 100 
> real	0m24.519s
> user	0m0.089s
> sys	0m21.711s

So ~42 ms to create a device if there are 128 CPUs? And ~245 ms when
there's 4k CPUs? Doesn't seem too onerous to me...
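
(That per-device estimate is just the wall-clock time spread over the 100 create/delete iterations:)

echo '4.199 / 100 * 1000' | bc -l    # ~42 ms per device with 128 queues
echo '24.519 / 100 * 1000' | bc -l   # ~245 ms per device with 4096 queues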

> Still, if there is agreement I can switch to num_possible_cpus default,
> plus some trickery to keep real_num_{r,t}x_queue unchanged.
>
> WDYT?

SGTM :)

-Toke
Patch

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 10360228a06a..787b4ad2cc87 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -27,6 +27,11 @@ 
 #include <linux/bpf_trace.h>
 #include <linux/net_tstamp.h>
 
+static int queues_nr	= 1;
+
+module_param(queues_nr, int, 0644);
+MODULE_PARM_DESC(queues_nr, "Max number of RX and TX queues (default = 1)");
+
 #define DRV_NAME	"veth"
 #define DRV_VERSION	"1.0"
 
@@ -1662,6 +1667,18 @@  static struct net *veth_get_link_net(const struct net_device *dev)
 	return peer ? dev_net(peer) : dev_net(dev);
 }
 
+static unsigned int veth_get_num_tx_queues(void)
+{
+	/* enforce the same queue limit as rtnl_create_link */
+	int queues = queues_nr;
+
+	if (queues < 1)
+		queues = 1;
+	if (queues > 4096)
+		queues = 4096;
+	return queues;
+}
+
 static struct rtnl_link_ops veth_link_ops = {
 	.kind		= DRV_NAME,
 	.priv_size	= sizeof(struct veth_priv),
@@ -1672,6 +1689,10 @@  static struct rtnl_link_ops veth_link_ops = {
 	.policy		= veth_policy,
 	.maxtype	= VETH_INFO_MAX,
 	.get_link_net	= veth_get_link_net,
+	.get_num_tx_queues	= veth_get_num_tx_queues,
+	.get_num_rx_queues	= veth_get_num_tx_queues, /* Use the same number
+							   * as for TX queues
+							   */
 };
 
 /*