| Message ID | b241da0e8aa31773472591e219ada3632a84dfbb.1617965243.git.pabeni@redhat.com (mailing list archive) |
|---|---|
| State | Accepted |
| Commit | 47e550e0105be9b716a3860545731735a67c6b3c |
| Delegated to | Netdev Maintainers |
| Series | veth: allow GRO even without XDP |
| Context | Check | Description |
|---|---|---|
| netdev/cover_letter | success | |
| netdev/fixes_present | success | |
| netdev/patch_count | success | |
| netdev/tree_selection | success | Clearly marked for net-next |
| netdev/subject_prefix | success | |
| netdev/cc_maintainers | success | CCed 3 of 3 maintainers |
| netdev/source_inline | success | Was 0 now: 0 |
| netdev/verify_signedoff | success | |
| netdev/module_param | success | Was 0 now: 0 |
| netdev/build_32bit | success | Errors and warnings before: 0 this patch: 0 |
| netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
| netdev/verify_fixes | success | |
| netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 36 lines checked |
| netdev/build_allmodconfig_warn | success | Errors and warnings before: 0 this patch: 0 |
| netdev/header_inline | success | |
Paolo Abeni <pabeni@redhat.com> writes:

> After the previous patch, when enabling GRO, locally generated
> TCP traffic experiences some measurable overhead, as it traverses
> the GRO engine without any chance of aggregation.
>
> This change refines the NAPI receive path admission test, to avoid
> unnecessary GRO overhead in most scenarios, when GRO is enabled
> on a veth peer.
>
> Only skbs that are eligible for aggregation enter the GRO layer;
> the others will go through the traditional receive path.
>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
>  drivers/net/veth.c | 23 ++++++++++++++++++++++-
>  1 file changed, 22 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index ca44e82d1edeb..85f90f33d437e 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -282,6 +282,25 @@ static int veth_forward_skb(struct net_device *dev, struct sk_buff *skb,
>  		netif_rx(skb);
>  }
>
> +/* return true if the specified skb has chances of GRO aggregation
> + * Don't strive for accuracy, but try to avoid GRO overhead in the most
> + * common scenarios.
> + * When XDP is enabled, all traffic is considered eligible, as the xmit
> + * device has TSO off.
> + * When TSO is enabled on the xmit device, we are likely interested only
> + * in UDP aggregation, explicitly check for that if the skb is suspected
> + * - the sock_wfree destructor is used by UDP, ICMP and XDP sockets -
> + * to belong to locally generated UDP traffic.
> + */
> +static bool veth_skb_is_eligible_for_gro(const struct net_device *dev,
> +					 const struct net_device *rcv,
> +					 const struct sk_buff *skb)
> +{
> +	return !(dev->features & NETIF_F_ALL_TSO) ||
> +		(skb->destructor == sock_wfree &&
> +		 rcv->features & (NETIF_F_GRO_FRAGLIST | NETIF_F_GRO_UDP_FWD));
> +}
> +
>  static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>  	struct veth_priv *rcv_priv, *priv = netdev_priv(dev);
> @@ -305,8 +324,10 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
>
>  	/* The napi pointer is available when an XDP program is
>  	 * attached or when GRO is enabled
> +	 * Don't bother with napi/GRO if the skb can't be aggregated
>  	 */
> -	use_napi = rcu_access_pointer(rq->napi);
> +	use_napi = rcu_access_pointer(rq->napi) &&
> +		   veth_skb_is_eligible_for_gro(dev, rcv, skb);
>  	skb_record_rx_queue(skb, rxq);
>  }

You just changed the 'xdp_rcv' check to this use_napi, and now you're
conditioning it on GRO eligibility, so doesn't this break XDP if that
was the reason NAPI was turned on in the first place?

-Toke
hello,

On Fri, 2021-04-09 at 16:57 +0200, Toke Høiland-Jørgensen wrote:
> Paolo Abeni <pabeni@redhat.com> writes:
>
> [...]
>
> > @@ -305,8 +324,10 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
> >
> >  	/* The napi pointer is available when an XDP program is
> >  	 * attached or when GRO is enabled
> > +	 * Don't bother with napi/GRO if the skb can't be aggregated
> >  	 */
> > -	use_napi = rcu_access_pointer(rq->napi);
> > +	use_napi = rcu_access_pointer(rq->napi) &&
> > +		   veth_skb_is_eligible_for_gro(dev, rcv, skb);
> >  	skb_record_rx_queue(skb, rxq);
> >  }
>
> You just changed the 'xdp_rcv' check to this use_napi, and now you're
> conditioning it on GRO eligibility, so doesn't this break XDP if that
> was the reason NAPI was turned on in the first place?

Thank you for the feedback.

If XDP is enabled, TSO is forced off on 'dev' and
veth_skb_is_eligible_for_gro() returns true, so napi/GRO is always
used - there is no functional change when XDP is enabled.

Please let me know if the above is more clear, thanks!

Paolo
Paolo Abeni <pabeni@redhat.com> writes:

> hello,
>
> On Fri, 2021-04-09 at 16:57 +0200, Toke Høiland-Jørgensen wrote:
>> Paolo Abeni <pabeni@redhat.com> writes:
>>
>> [...]
>>
>> You just changed the 'xdp_rcv' check to this use_napi, and now you're
>> conditioning it on GRO eligibility, so doesn't this break XDP if that
>> was the reason NAPI was turned on in the first place?
>
> Thank you for the feedback.
>
> If XDP is enabled, TSO is forced off on 'dev' and
> veth_skb_is_eligible_for_gro() returns true, so napi/GRO is always
> used - there is no functional change when XDP is enabled.

Ah, right, so it says right there in the comment; sorry for missing
that! :)

-Toke
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index ca44e82d1edeb..85f90f33d437e 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -282,6 +282,25 @@ static int veth_forward_skb(struct net_device *dev, struct sk_buff *skb,
 		netif_rx(skb);
 }
 
+/* return true if the specified skb has chances of GRO aggregation
+ * Don't strive for accuracy, but try to avoid GRO overhead in the most
+ * common scenarios.
+ * When XDP is enabled, all traffic is considered eligible, as the xmit
+ * device has TSO off.
+ * When TSO is enabled on the xmit device, we are likely interested only
+ * in UDP aggregation, explicitly check for that if the skb is suspected
+ * - the sock_wfree destructor is used by UDP, ICMP and XDP sockets -
+ * to belong to locally generated UDP traffic.
+ */
+static bool veth_skb_is_eligible_for_gro(const struct net_device *dev,
+					 const struct net_device *rcv,
+					 const struct sk_buff *skb)
+{
+	return !(dev->features & NETIF_F_ALL_TSO) ||
+		(skb->destructor == sock_wfree &&
+		 rcv->features & (NETIF_F_GRO_FRAGLIST | NETIF_F_GRO_UDP_FWD));
+}
+
 static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct veth_priv *rcv_priv, *priv = netdev_priv(dev);
@@ -305,8 +324,10 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	/* The napi pointer is available when an XDP program is
 	 * attached or when GRO is enabled
+	 * Don't bother with napi/GRO if the skb can't be aggregated
 	 */
-	use_napi = rcu_access_pointer(rq->napi);
+	use_napi = rcu_access_pointer(rq->napi) &&
+		   veth_skb_is_eligible_for_gro(dev, rcv, skb);
 	skb_record_rx_queue(skb, rxq);
 }
After the previous patch, when enabling GRO, locally generated
TCP traffic experiences some measurable overhead, as it traverses
the GRO engine without any chance of aggregation.

This change refines the NAPI receive path admission test to avoid
unnecessary GRO overhead in most scenarios when GRO is enabled
on a veth peer.

Only skbs that are eligible for aggregation enter the GRO layer;
the others go through the traditional receive path.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 drivers/net/veth.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)