
[net-next,v3,0/4] net: lan966x: Add xdp support

Message ID 20221109204613.3669905-1-horatiu.vultur@microchip.com (mailing list archive)

Message

Horatiu Vultur Nov. 9, 2022, 8:46 p.m. UTC
Add support for XDP in the lan966x driver. Currently only XDP_PASS and
XDP_DROP are supported.

The first 2 patches just move things around to simplify the code for
when XDP is added.
Patch 3 adds the actual XDP support. Currently the only supported
actions are XDP_PASS and XDP_DROP; in the future this will be extended
with XDP_TX and XDP_REDIRECT.
Patch 4 switches to the page_pool API, because the page handling is
similar to what the lan966x driver already does. This makes it
possible to remove some of the code.
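
For reference, a driver-side hook that runs the program usually follows
the pattern sketched below; the struct and field names (example_port,
xdp_prog, xdp_rxq, page_pool) are placeholders and the buffer release
call assumes the page_pool conversion from patch 4, so this is not the
actual code from lan966x_xdp.c:

#include <linux/filter.h>
#include <net/page_pool.h>
#include <net/xdp.h>
#include <trace/events/xdp.h>

static bool example_xdp_run(struct example_port *port, struct page *page,
			    u32 data_len)
{
	struct bpf_prog *xdp_prog = READ_ONCE(port->xdp_prog);
	struct xdp_buff xdp;
	u32 act;

	if (!xdp_prog)
		return true;	/* no program attached: pass to the stack */

	xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq);
	xdp_prepare_buff(&xdp, page_address(page), XDP_PACKET_HEADROOM,
			 data_len, false);

	act = bpf_prog_run_xdp(xdp_prog, &xdp);
	switch (act) {
	case XDP_PASS:
		return true;	/* hand the frame to the network stack */
	default:
		bpf_warn_invalid_xdp_action(port->dev, xdp_prog, act);
		fallthrough;
	case XDP_ABORTED:
		trace_xdp_exception(port->dev, xdp_prog, act);
		fallthrough;
	case XDP_DROP:
		page_pool_recycle_direct(port->page_pool, page);
		return false;	/* frame consumed (dropped) */
	}
}

XDP_TX and XDP_REDIRECT would later be handled as additional cases in
the same switch.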

All these changes give a small improvement on the RX side:
Before:
iperf3 -c 10.96.10.1 -R
[  5]   0.00-10.01  sec   514 MBytes   430 Mbits/sec    0         sender
[  5]   0.00-10.00  sec   509 MBytes   427 Mbits/sec              receiver

After:
iperf3 -c 10.96.10.1 -R
[  5]   0.00-10.02  sec   540 MBytes   452 Mbits/sec    0         sender
[  5]   0.00-10.01  sec   537 MBytes   450 Mbits/sec              receiver

---
v2->v3:
- inline lan966x_xdp_port_present
- update max_len of page_pool_params to be rx->max_mtu instead of the
  page size (sketched below)

v1->v2:
- rebase on net-next, now that the fixes for FDMA and MTU were accepted
- drop patch 2, which changed the MTU, as it is no longer needed
- allow running XDP programs on frames bigger than 4KB
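
For illustration, the page_pool setup then takes roughly this shape;
the struct name, field names and values (example_rx, ring_size,
max_mtu) are placeholders, not the exact lan966x parameters:

#include <net/page_pool.h>
#include <linux/dma-mapping.h>

static struct page_pool *example_create_page_pool(struct example_rx *rx)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.pool_size	= rx->ring_size,
		.nid		= NUMA_NO_NODE,
		.dev		= rx->dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.offset		= XDP_PACKET_HEADROOM,
		/* sync for the device only what the hardware can
		 * actually write, i.e. the max Rx frame length,
		 * instead of the whole page
		 */
		.max_len	= rx->max_mtu,
	};

	return page_pool_create(&pp_params);
}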

Horatiu Vultur (4):
  net: lan966x: Add define IFH_LEN_BYTES
  net: lan966x: Split function lan966x_fdma_rx_get_frame
  net: lan966x: Add basic XDP support
  net: lan96x: Use page_pool API

 .../net/ethernet/microchip/lan966x/Kconfig    |   1 +
 .../net/ethernet/microchip/lan966x/Makefile   |   3 +-
 .../ethernet/microchip/lan966x/lan966x_fdma.c | 181 +++++++++++-------
 .../ethernet/microchip/lan966x/lan966x_ifh.h  |   1 +
 .../ethernet/microchip/lan966x/lan966x_main.c |   7 +-
 .../ethernet/microchip/lan966x/lan966x_main.h |  33 ++++
 .../ethernet/microchip/lan966x/lan966x_xdp.c  |  76 ++++++++
 7 files changed, 236 insertions(+), 66 deletions(-)
 create mode 100644 drivers/net/ethernet/microchip/lan966x/lan966x_xdp.c

Comments

Alexander Lobakin Nov. 10, 2022, 11:17 a.m. UTC | #1
From: Horatiu Vultur <horatiu.vultur@microchip.com>
Date: Wed, 9 Nov 2022 21:46:09 +0100

> Add support for xdp in lan966x driver. Currently only XDP_PASS and
> XDP_DROP are supported.
> 
> The first 2 patches are just moving things around just to simplify
> the code for when the xdp is added.
> Patch 3 actually adds the xdp. Currently the only supported actions
> are XDP_PASS and XDP_DROP. In the future this will be extended with
> XDP_TX and XDP_REDIRECT.
> Patch 4 changes to use page pool API, because the handling of the
> pages is similar with what already lan966x driver is doing. In this
> way is possible to remove some of the code.
> 
> All these changes give a small improvement on the RX side:
> Before:
> iperf3 -c 10.96.10.1 -R
> [  5]   0.00-10.01  sec   514 MBytes   430 Mbits/sec    0         sender
> [  5]   0.00-10.00  sec   509 MBytes   427 Mbits/sec              receiver
> 
> After:
> iperf3 -c 10.96.10.1 -R
> [  5]   0.00-10.02  sec   540 MBytes   452 Mbits/sec    0         sender
> [  5]   0.00-10.01  sec   537 MBytes   450 Mbits/sec              receiver

The name 'max_mtu' is a bit confusing, since it in fact represents the
max frame len + skb overhead (4th patch), but that's probably more a
matter of personal taste.

For the series:

Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>

Nice stuff! I hear from time to time that XDP is for 10G+ NICs only,
but I'm not a fan of that view, and this series proves once again that
XDP fits any hardware ^.^

> 
> ---
> v2->v3:
> - inline lan966x_xdp_port_present
> - update max_len of page_pool_params not to be the page size anymore but
>   actually be rx->max_mtu.
> 
> v1->v2:
> - rebase on net-next, once the fixes for FDMA and MTU were accepted
> - drop patch 2, which changes the MTU as is not needed anymore
> - allow to run xdp programs on frames bigger than 4KB
> 
> Horatiu Vultur (4):
>   net: lan966x: Add define IFH_LEN_BYTES
>   net: lan966x: Split function lan966x_fdma_rx_get_frame
>   net: lan966x: Add basic XDP support
>   net: lan96x: Use page_pool API
> 
>  .../net/ethernet/microchip/lan966x/Kconfig    |   1 +
>  .../net/ethernet/microchip/lan966x/Makefile   |   3 +-
>  .../ethernet/microchip/lan966x/lan966x_fdma.c | 181 +++++++++++-------
>  .../ethernet/microchip/lan966x/lan966x_ifh.h  |   1 +
>  .../ethernet/microchip/lan966x/lan966x_main.c |   7 +-
>  .../ethernet/microchip/lan966x/lan966x_main.h |  33 ++++
>  .../ethernet/microchip/lan966x/lan966x_xdp.c  |  76 ++++++++
>  7 files changed, 236 insertions(+), 66 deletions(-)
>  create mode 100644 drivers/net/ethernet/microchip/lan966x/lan966x_xdp.c
> 
> -- 
> 2.38.0

Thanks,
Olek
Andrew Lunn Nov. 10, 2022, 1:57 p.m. UTC | #2
> Nice stuff! I hear time to time that XDP is for 10G+ NICs only, but
> I'm not a fan of such, and this series proves once again XDP fits
> any hardware ^.^

The Freescale FEC recently gained XDP support. Many variants of it are
Fast Ethernet only.

What I found most interesting about that patchset was that the use of
the page_pool API made the driver significantly faster for the general
case as well as for XDP.

     Andrew
Alexander Lobakin Nov. 10, 2022, 4:21 p.m. UTC | #3
From: Andrew Lunn <andrew@lunn.ch>
Date: Thu, 10 Nov 2022 14:57:35 +0100

> > Nice stuff! I hear time to time that XDP is for 10G+ NICs only, but
> > I'm not a fan of such, and this series proves once again XDP fits
> > any hardware ^.^
> 
> The Freescale FEC recently gained XDP support. Many variants of it are
> Fast Ethernet only.
> 
> What i found most interesting about that patchset was that the use of
> the page_ppol API made the driver significantly faster for the general
> case as well as XDP.

The driver didn't have any page recycling or page splitting logic,
while Page Pool recycles even pages from skbs if
skb_mark_for_recycle() is used, which is the case here. So it
significantly reduced the number of new page allocations for Rx, if
there are still any at all.
Plus, Page Pool allocates pages in bulks (of 16, IIRC), not one by
one, which reduces CPU overhead as well.
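
The usual shape of that recycling pattern in a page_pool-backed Rx path
is roughly the following (a sketch with placeholder names such as
example_rx and frame_len, not the lan966x code):

#include <linux/skbuff.h>
#include <net/page_pool.h>

static struct sk_buff *example_rx_build_skb(struct example_rx *rx,
					    struct page *page, u32 frame_len)
{
	struct sk_buff *skb;

	skb = build_skb(page_address(page), PAGE_SIZE);
	if (!skb) {
		page_pool_recycle_direct(rx->page_pool, page);
		return NULL;
	}

	/* kfree_skb()/consume_skb() will now return the page to the pool
	 * instead of freeing it, so Rx refill rarely hits the allocator.
	 */
	skb_mark_for_recycle(skb);
	skb_reserve(skb, XDP_PACKET_HEADROOM);
	skb_put(skb, frame_len);

	return skb;
}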

> 
>      Andrew

Thanks,
Olek
Horatiu Vultur Nov. 10, 2022, 8:21 p.m. UTC | #4
The 11/10/2022 17:21, Alexander Lobakin wrote:

Hi,

> 
> From: Andrew Lunn <andrew@lunn.ch>
> Date: Thu, 10 Nov 2022 14:57:35 +0100
> 
> > > Nice stuff! I hear time to time that XDP is for 10G+ NICs only, but
> > > I'm not a fan of such, and this series proves once again XDP fits
> > > any hardware ^.^
> >
> > The Freescale FEC recently gained XDP support. Many variants of it are
> > Fast Ethernet only.
> >
> > What i found most interesting about that patchset was that the use of
> > the page_ppol API made the driver significantly faster for the general
> > case as well as XDP.
> 
> The driver didn't have any page recycling or page splitting logics,
> while Page Pool recycles even pages from skbs if
> skb_mark_for_recycle() is used, which is the case here. So it
> significantly reduced the number of new page allocations for Rx, if
> there still are any at all.
> Plus, Page Pool allocates pages by bulks (of 16 IIRC), not one by
> one, that reduces CPU overhead as well.

Just to make sure that everything is clear: the results that I have
shown in the cover letter are without any XDP program on the
interfaces, because I thought that is the correct before/after
comparison for all these changes.

Once I add an XDP program on the interface, the performance drops.
The program looks for some ether types and always returns XDP_PASS.
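
Roughly, the program has the following shape (the ether types here are
just an illustration, not necessarily the ones used in the test):

/* Minimal XDP program: inspect the ether type, always pass the frame. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

__u64 match_count = 0;

SEC("xdp")
int xdp_ethtype_filter(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;

	if (data + sizeof(*eth) > data_end)
		return XDP_PASS;

	/* Count a couple of ether types, but always let the frame through. */
	if (eth->h_proto == bpf_htons(ETH_P_IP) ||
	    eth->h_proto == bpf_htons(ETH_P_IPV6))
		__sync_fetch_and_add(&match_count, 1);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";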

These are the results when I have such an XDP program on the interface:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.01  sec   486 MBytes   408 Mbits/sec    0 sender
[  5]   0.00-10.00  sec   483 MBytes   405 Mbits/sec      receiver

> 
> >
> >      Andrew
> 
> Thanks,
> Olek
Andrew Lunn Nov. 10, 2022, 10:51 p.m. UTC | #5
On Thu, Nov 10, 2022 at 05:21:48PM +0100, Alexander Lobakin wrote:
> From: Andrew Lunn <andrew@lunn.ch>
> Date: Thu, 10 Nov 2022 14:57:35 +0100
> 
> > > Nice stuff! I hear time to time that XDP is for 10G+ NICs only, but
> > > I'm not a fan of such, and this series proves once again XDP fits
> > > any hardware ^.^
> > 
> > The Freescale FEC recently gained XDP support. Many variants of it are
> > Fast Ethernet only.
> > 
> > What i found most interesting about that patchset was that the use of
> > the page_ppol API made the driver significantly faster for the general
> > case as well as XDP.
> 
> The driver didn't have any page recycling or page splitting logics,
> while Page Pool recycles even pages from skbs if
> skb_mark_for_recycle() is used, which is the case here. So it
> significantly reduced the number of new page allocations for Rx, if
> there still are any at all.

When reviewing new drivers we should be pushing them towards using the
page_pool API. It seems to do better than the average roll-your-own
implementation.

	Andrew
patchwork-bot+netdevbpf@kernel.org Nov. 11, 2022, 11 a.m. UTC | #6
Hello:

This series was applied to netdev/net-next.git (master)
by David S. Miller <davem@davemloft.net>:

On Wed, 9 Nov 2022 21:46:09 +0100 you wrote:
> Add support for xdp in lan966x driver. Currently only XDP_PASS and
> XDP_DROP are supported.
> 
> The first 2 patches are just moving things around just to simplify
> the code for when the xdp is added.
> Patch 3 actually adds the xdp. Currently the only supported actions
> are XDP_PASS and XDP_DROP. In the future this will be extended with
> XDP_TX and XDP_REDIRECT.
> Patch 4 changes to use page pool API, because the handling of the
> pages is similar with what already lan966x driver is doing. In this
> way is possible to remove some of the code.
> 
> [...]

Here is the summary with links:
  - [net-next,v3,1/4] net: lan966x: Add define IFH_LEN_BYTES
    https://git.kernel.org/netdev/net-next/c/e83163b66a37
  - [net-next,v3,2/4] net: lan966x: Split function lan966x_fdma_rx_get_frame
    https://git.kernel.org/netdev/net-next/c/4a00b0c712e3
  - [net-next,v3,3/4] net: lan966x: Add basic XDP support
    https://git.kernel.org/netdev/net-next/c/6a2159be7604
  - [net-next,v3,4/4] net: lan96x: Use page_pool API
    https://git.kernel.org/netdev/net-next/c/11871aba1974

You are awesome, thank you!