[net] tipc: re-configure queue limit for broadcast link

Message ID 20201013061810.77866-1-hoang.h.le@dektech.com.au (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers

Commit Message

Hoang Huu Le Oct. 13, 2020, 6:18 a.m. UTC
The queue limit of the broadcast link is calculated based on the
initial MTU. However, when the MTU value changes (e.g. manually
changing the MTU on a NIC device, MTU negotiation, etc.), we do not
re-calculate the queue limit. As a result, throughput does not
reflect the change.

Fix this by calling the function that re-calculates the queue limit
of the broadcast link.

Acked-by: Jon Maloy <jmaloy@redhat.com>
Signed-off-by: Hoang Huu Le <hoang.h.le@dektech.com.au>
---
 net/tipc/bcast.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
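
For context, the backlog limits are derived from the link window and the
link MTU together, which is why a stale MTU leaves them wrong after the
bearer MTU changes. Below is a minimal standalone sketch of that
derivation; the max_bulk formula is modelled on tipc_link_set_queue_limits()
in net/tipc/link.c, and the TIPC_MAX_PUBL/ITEM_SIZE values here are
illustrative assumptions, not verbatim kernel code.

/* Minimal sketch, not kernel code: how TIPC derives backlog limits
 * from the link window and the link MTU. Constants are assumptions
 * modelled on net/tipc; only the shape of the computation matters.
 */
#include <stdio.h>

#define TIPC_MAX_PUBL	65535	/* assumed bound on name publications */
#define ITEM_SIZE	20	/* assumed size of one distributed item */

static void set_queue_limits(unsigned int mtu, unsigned int min_win)
{
	/* Fewer items fit per message at a smaller MTU, so the
	 * system-importance backlog limit grows as the MTU shrinks. */
	unsigned int max_bulk = TIPC_MAX_PUBL / (mtu / ITEM_SIZE);

	printf("mtu=%u low=%u medium=%u high=%u critical=%u system=%u\n",
	       mtu, min_win * 2, min_win * 4, min_win * 6, min_win * 8,
	       max_bulk);
}

int main(void)
{
	/* Limits derived at the initial MTU ... */
	set_queue_limits(1500, 50);
	/* ... no longer match after the bearer MTU drops, unless the
	 * limits are re-derived, which is what this patch adds. */
	set_queue_limits(1280, 50);
	return 0;
}

Compiled with gcc, this prints a system-importance limit of 873 at
MTU 1500 but 1023 at MTU 1280, a shift the pre-patch code never picks up.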

Comments

Jakub Kicinski Oct. 15, 2020, 12:47 a.m. UTC | #1
On Tue, 13 Oct 2020 13:18:10 +0700 Hoang Huu Le wrote:
> The queue limit of the broadcast link is calculated based on the
> initial MTU. However, when the MTU value changes (e.g. manually
> changing the MTU on a NIC device, MTU negotiation, etc.), we do not
> re-calculate the queue limit. As a result, throughput does not
> reflect the change.
> 
> Fix this by calling the function that re-calculates the queue limit
> of the broadcast link.
> 
> Acked-by: Jon Maloy <jmaloy@redhat.com>
> Signed-off-by: Hoang Huu Le <hoang.h.le@dektech.com.au>
> ---
>  net/tipc/bcast.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
> index 940d176e0e87..c77fd13e2777 100644
> --- a/net/tipc/bcast.c
> +++ b/net/tipc/bcast.c
> @@ -108,6 +108,7 @@ static void tipc_bcbase_select_primary(struct net *net)
>  {
>  	struct tipc_bc_base *bb = tipc_bc_base(net);
>  	int all_dests =  tipc_link_bc_peers(bb->link);
> +	int max_win = tipc_link_max_win(bb->link);
>  	int i, mtu, prim;
>  
>  	bb->primary_bearer = INVALID_BEARER_ID;
> @@ -121,8 +122,11 @@ static void tipc_bcbase_select_primary(struct net *net)
>  			continue;
>  
>  		mtu = tipc_bearer_mtu(net, i);
> -		if (mtu < tipc_link_mtu(bb->link))
> +		if (mtu < tipc_link_mtu(bb->link)) {
>  			tipc_link_set_mtu(bb->link, mtu);
> +			tipc_link_set_queue_limits(bb->link, max_win,
> +						   max_win);

Is max/max okay here? Other places seem to use BCLINK_WIN_MIN.

> +		}
>  		bb->bcast_support &= tipc_bearer_bcast_support(net, i);
>  		if (bb->dests[i] < all_dests)
>  			continue;
Hoang Huu Le Oct. 15, 2020, 2:25 a.m. UTC | #2
Thanks for your review.
Yes, in this commit we intend to fix the queue limit calculation; we
are planning to fix both window values in a separate patch. However,
the default value (i.e. BCLINK_WIN_DEFAULT) should be used, since we
keep a fixed window size for the broadcast link.

Regards,
Hoang
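
Concretely, the follow-up discussed above might keep the fixed default
broadcast window when re-deriving the limits. A sketch against the same
hunk, assuming the BCLINK_WIN_DEFAULT constant from net/tipc/bcast.h;
this is one possible shape, not the final patch:

		mtu = tipc_bearer_mtu(net, i);
		if (mtu < tipc_link_mtu(bb->link)) {
			tipc_link_set_mtu(bb->link, mtu);
			/* assumption: use the fixed default broadcast
			 * window rather than the current max_win */
			tipc_link_set_queue_limits(bb->link,
						   BCLINK_WIN_DEFAULT,
						   BCLINK_WIN_DEFAULT);
		}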
> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Thursday, October 15, 2020 7:47 AM
> To: Hoang Huu Le <hoang.h.le@dektech.com.au>
> Cc: tipc-discussion@lists.sourceforge.net; jmaloy@redhat.com; maloy@donjonn.com; ying.xue@windriver.com;
> netdev@vger.kernel.org
> Subject: Re: [net] tipc: re-configure queue limit for broadcast link
> 
> On Tue, 13 Oct 2020 13:18:10 +0700 Hoang Huu Le wrote:
> > The queue limit of the broadcast link is calculated based on the
> > initial MTU. However, when the MTU value changes (e.g. manually
> > changing the MTU on a NIC device, MTU negotiation, etc.), we do not
> > re-calculate the queue limit. As a result, throughput does not
> > reflect the change.
> >
> > Fix this by calling the function that re-calculates the queue limit
> > of the broadcast link.
> >
> > Acked-by: Jon Maloy <jmaloy@redhat.com>
> > Signed-off-by: Hoang Huu Le <hoang.h.le@dektech.com.au>
> > ---
> >  net/tipc/bcast.c | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
> > index 940d176e0e87..c77fd13e2777 100644
> > --- a/net/tipc/bcast.c
> > +++ b/net/tipc/bcast.c
> > @@ -108,6 +108,7 @@ static void tipc_bcbase_select_primary(struct net *net)
> >  {
> >  	struct tipc_bc_base *bb = tipc_bc_base(net);
> >  	int all_dests =  tipc_link_bc_peers(bb->link);
> > +	int max_win = tipc_link_max_win(bb->link);
> >  	int i, mtu, prim;
> >
> >  	bb->primary_bearer = INVALID_BEARER_ID;
> > @@ -121,8 +122,11 @@ static void tipc_bcbase_select_primary(struct net *net)
> >  			continue;
> >
> >  		mtu = tipc_bearer_mtu(net, i);
> > -		if (mtu < tipc_link_mtu(bb->link))
> > +		if (mtu < tipc_link_mtu(bb->link)) {
> >  			tipc_link_set_mtu(bb->link, mtu);
> > +			tipc_link_set_queue_limits(bb->link, max_win,
> > +						   max_win);
> 
> Is max/max okay here? Other places seem to use BCLINK_WIN_MIN.
> 
> > +		}
> >  		bb->bcast_support &= tipc_bearer_bcast_support(net, i);
> >  		if (bb->dests[i] < all_dests)
> >  			continue;
Patch

diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
index 940d176e0e87..c77fd13e2777 100644
--- a/net/tipc/bcast.c
+++ b/net/tipc/bcast.c
@@ -108,6 +108,7 @@ static void tipc_bcbase_select_primary(struct net *net)
 {
 	struct tipc_bc_base *bb = tipc_bc_base(net);
 	int all_dests =  tipc_link_bc_peers(bb->link);
+	int max_win = tipc_link_max_win(bb->link);
 	int i, mtu, prim;
 
 	bb->primary_bearer = INVALID_BEARER_ID;
@@ -121,8 +122,11 @@ static void tipc_bcbase_select_primary(struct net *net)
 			continue;
 
 		mtu = tipc_bearer_mtu(net, i);
-		if (mtu < tipc_link_mtu(bb->link))
+		if (mtu < tipc_link_mtu(bb->link)) {
 			tipc_link_set_mtu(bb->link, mtu);
+			tipc_link_set_queue_limits(bb->link, max_win,
+						   max_win);
+		}
 		bb->bcast_support &= tipc_bearer_bcast_support(net, i);
 		if (bb->dests[i] < all_dests)
 			continue;