[net-next,v3] lan966x: Don't use xdp_frame when action is XDP_TX

Message ID 20230422142344.3630602-1-horatiu.vultur@microchip.com (mailing list archive)
State Accepted
Commit 700f11eb2cbea349bda2599b4b676b49d43b4175
Delegated to: Netdev Maintainers
Series [net-next,v3] lan966x: Don't use xdp_frame when action is XDP_TX

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 8 this patch: 8
netdev/cc_maintainers success CCed 12 of 12 maintainers
netdev/build_clang success Errors and warnings before: 8 this patch: 8
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 8 this patch: 8
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 137 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Horatiu Vultur April 22, 2023, 2:23 p.m. UTC
When the action of an XDP program was XDP_TX, lan966x was creating
an xdp_frame and using it to send the frame back. But it is also
possible to send the frame back without an xdp_frame, because the
frame can be sent using the page directly.
Then, once the frame is transmitted, page_pool_recycle_direct can be
used directly, as lan966x uses page pools.
This saves some CPU usage on this path, which results in a higher
number of transmitted frames. Below are the statistics:
Frame size:    Improvement:
64                ~8%
256              ~11%
512               ~8%
1000              ~0%
1500              ~0%
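
For reference, the core of the change, condensed from the diff below
(a sketch, not the full driver code):

	/* XDP_TX dispatch, before: the xdp_buff had to be converted
	 * to an xdp_frame first.
	 */
	case XDP_TX:
		xdpf = xdp_convert_buff_to_frame(&xdp);
		if (!xdpf)
			return FDMA_DROP;
		return lan966x_fdma_xmit_xdpf(port, xdpf, page, false) ?
		       FDMA_DROP : FDMA_TX;

	/* XDP_TX dispatch, after: the backing page is handed over
	 * as-is, no xdp_frame needed.
	 */
	case XDP_TX:
		return lan966x_fdma_xmit_xdpf(port, page,
					      data_len - IFH_LEN_BYTES) ?
		       FDMA_DROP : FDMA_TX;

	/* TX completion, after: the page goes straight back to the RX
	 * page pool instead of being returned as an xdp_frame.
	 */
	page_pool_recycle_direct(rx->page_pool, dcb_buf->data.page);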

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
---
v2->v3:
- fix a length issue when the XDP action is XDP_REDIRECT; this issue was
  introduced in v2 of this patch series.
- reduce the number of changes by moving back all the assignments to
  next_dcb_buf

v1->v2:
- reduce the number of arguments of the function lan966x_fdma_xmit_xdpf,
  as some of them are mutually exclusive and the others can be deduced
  from the remaining ones
- update the commit message and add statistics for the improvement
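
For reference, the two call sites with the reduced signature, condensed
from the diff below: a zero len tells lan966x_fdma_xmit_xdpf() that ptr
is an xdp_frame (the ndo_xdp_xmit path), while a non-zero len means ptr
is a page (the XDP_TX path).

	/* ndo_xdp_xmit: transmit an xdp_frame; len == 0 selects the
	 * xdp_frame handling (and the DMA mapping that goes with it).
	 */
	err = lan966x_fdma_xmit_xdpf(port, xdpf, 0);

	/* XDP_TX: transmit the backing page directly; a fresh IFH is
	 * generated in the TX path, hence the subtraction.
	 */
	ret = lan966x_fdma_xmit_xdpf(port, page, data_len - IFH_LEN_BYTES);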
---
 .../ethernet/microchip/lan966x/lan966x_fdma.c | 35 ++++++++++++-------
 .../ethernet/microchip/lan966x/lan966x_main.h |  6 ++--
 .../ethernet/microchip/lan966x/lan966x_xdp.c  | 10 ++----
 3 files changed, 28 insertions(+), 23 deletions(-)

Comments

Alexander Lobakin April 24, 2023, 1:36 p.m. UTC | #1
From: Horatiu Vultur <horatiu.vultur@microchip.com>
Date: Sat, 22 Apr 2023 16:23:44 +0200

> When the action of an XDP program was XDP_TX, lan966x was creating
> an xdp_frame and using it to send the frame back. But it is also
> possible to send the frame back without an xdp_frame, because the
> frame can be sent using the page directly.
> Then, once the frame is transmitted, page_pool_recycle_direct can be
> used directly, as lan966x uses page pools.
> This saves some CPU usage on this path, which results in a higher
> number of transmitted frames. Below are the statistics:
> Frame size:    Improvement:
> 64                ~8%
> 256              ~11%
> 512               ~8%
> 1000              ~0%
> 1500              ~0%
> 
> Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>

Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>

> ---
> v2->v3:
> - fix a length issue when the XDP action is XDP_REDIRECT; this issue was
>   introduced in v2 of this patch series.
> - reduce the number of changes by moving back all the assignments to
>   next_dcb_buf
> 
> v1->v2:
> - reduce the number of arguments of the function lan966x_fdma_xmit_xdpf,
>   as some of them are mutually exclusive and the others can be deduced
>   from the remaining ones
> - update the commit message and add statistics for the improvement
[...]

Thanks,
Olek
patchwork-bot+netdevbpf@kernel.org April 25, 2023, 2:10 a.m. UTC | #2
Hello:

This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Sat, 22 Apr 2023 16:23:44 +0200 you wrote:
> When the action of an XDP program was XDP_TX, lan966x was creating
> an xdp_frame and using it to send the frame back. But it is also
> possible to send the frame back without an xdp_frame, because the
> frame can be sent using the page directly.
> Then, once the frame is transmitted, page_pool_recycle_direct can be
> used directly, as lan966x uses page pools.
> This saves some CPU usage on this path, which results in a higher
> number of transmitted frames. Below are the statistics:
> Frame size:    Improvement:
> 64                ~8%
> 256              ~11%
> 512               ~8%
> 1000              ~0%
> 1500              ~0%
> 
> [...]

Here is the summary with links:
  - [net-next,v3] lan966x: Don't use xdp_frame when action is XDP_TX
    https://git.kernel.org/netdev/net-next/c/700f11eb2cbe

You are awesome, thank you!

Patch

diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
index 2ed76bb61a731..bd72fbc2220f3 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
@@ -390,6 +390,7 @@  static void lan966x_fdma_stop_netdev(struct lan966x *lan966x)
 static void lan966x_fdma_tx_clear_buf(struct lan966x *lan966x, int weight)
 {
 	struct lan966x_tx *tx = &lan966x->tx;
+	struct lan966x_rx *rx = &lan966x->rx;
 	struct lan966x_tx_dcb_buf *dcb_buf;
 	struct xdp_frame_bulk bq;
 	struct lan966x_db *db;
@@ -432,7 +433,8 @@  static void lan966x_fdma_tx_clear_buf(struct lan966x *lan966x, int weight)
 			if (dcb_buf->xdp_ndo)
 				xdp_return_frame_bulk(dcb_buf->data.xdpf, &bq);
 			else
-				xdp_return_frame_rx_napi(dcb_buf->data.xdpf);
+				page_pool_recycle_direct(rx->page_pool,
+							 dcb_buf->data.page);
 		}
 
 		clear = true;
@@ -699,15 +701,14 @@  static void lan966x_fdma_tx_start(struct lan966x_tx *tx, int next_to_use)
 	tx->last_in_use = next_to_use;
 }
 
-int lan966x_fdma_xmit_xdpf(struct lan966x_port *port,
-			   struct xdp_frame *xdpf,
-			   struct page *page,
-			   bool dma_map)
+int lan966x_fdma_xmit_xdpf(struct lan966x_port *port, void *ptr, u32 len)
 {
 	struct lan966x *lan966x = port->lan966x;
 	struct lan966x_tx_dcb_buf *next_dcb_buf;
 	struct lan966x_tx *tx = &lan966x->tx;
+	struct xdp_frame *xdpf;
 	dma_addr_t dma_addr;
+	struct page *page;
 	int next_to_use;
 	__be32 *ifh;
 	int ret = 0;
@@ -722,8 +723,13 @@  int lan966x_fdma_xmit_xdpf(struct lan966x_port *port,
 		goto out;
 	}
 
+	/* Get the next buffer */
+	next_dcb_buf = &tx->dcbs_buf[next_to_use];
+
 	/* Generate new IFH */
-	if (dma_map) {
+	if (!len) {
+		xdpf = ptr;
+
 		if (xdpf->headroom < IFH_LEN_BYTES) {
 			ret = NETDEV_TX_OK;
 			goto out;
@@ -743,11 +749,16 @@  int lan966x_fdma_xmit_xdpf(struct lan966x_port *port,
 			goto out;
 		}
 
+		next_dcb_buf->data.xdpf = xdpf;
+		next_dcb_buf->len = xdpf->len + IFH_LEN_BYTES;
+
 		/* Setup next dcb */
 		lan966x_fdma_tx_setup_dcb(tx, next_to_use,
 					  xdpf->len + IFH_LEN_BYTES,
 					  dma_addr);
 	} else {
+		page = ptr;
+
 		ifh = page_address(page) + XDP_PACKET_HEADROOM;
 		memset(ifh, 0x0, sizeof(__be32) * IFH_LEN);
 		lan966x_ifh_set_bypass(ifh, 1);
@@ -756,21 +767,21 @@  int lan966x_fdma_xmit_xdpf(struct lan966x_port *port,
 		dma_addr = page_pool_get_dma_addr(page);
 		dma_sync_single_for_device(lan966x->dev,
 					   dma_addr + XDP_PACKET_HEADROOM,
-					   xdpf->len + IFH_LEN_BYTES,
+					   len + IFH_LEN_BYTES,
 					   DMA_TO_DEVICE);
 
+		next_dcb_buf->data.page = page;
+		next_dcb_buf->len = len + IFH_LEN_BYTES;
+
 		/* Setup next dcb */
 		lan966x_fdma_tx_setup_dcb(tx, next_to_use,
-					  xdpf->len + IFH_LEN_BYTES,
+					  len + IFH_LEN_BYTES,
 					  dma_addr + XDP_PACKET_HEADROOM);
 	}
 
 	/* Fill up the buffer */
-	next_dcb_buf = &tx->dcbs_buf[next_to_use];
 	next_dcb_buf->use_skb = false;
-	next_dcb_buf->data.xdpf = xdpf;
-	next_dcb_buf->xdp_ndo = dma_map;
-	next_dcb_buf->len = xdpf->len + IFH_LEN_BYTES;
+	next_dcb_buf->xdp_ndo = !len;
 	next_dcb_buf->dma_addr = dma_addr;
 	next_dcb_buf->used = true;
 	next_dcb_buf->ptp = false;
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.h b/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
index 757378516f1fd..27f272831ea5c 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
@@ -262,6 +262,7 @@  struct lan966x_tx_dcb_buf {
 	union {
 		struct sk_buff *skb;
 		struct xdp_frame *xdpf;
+		struct page *page;
 	} data;
 	u32 len;
 	u32 used : 1;
@@ -593,10 +594,7 @@  int lan966x_ptp_setup_traps(struct lan966x_port *port, struct ifreq *ifr);
 int lan966x_ptp_del_traps(struct lan966x_port *port);
 
 int lan966x_fdma_xmit(struct sk_buff *skb, __be32 *ifh, struct net_device *dev);
-int lan966x_fdma_xmit_xdpf(struct lan966x_port *port,
-			   struct xdp_frame *frame,
-			   struct page *page,
-			   bool dma_map);
+int lan966x_fdma_xmit_xdpf(struct lan966x_port *port, void *ptr, u32 len);
 int lan966x_fdma_change_mtu(struct lan966x *lan966x);
 void lan966x_fdma_netdev_init(struct lan966x *lan966x, struct net_device *dev);
 void lan966x_fdma_netdev_deinit(struct lan966x *lan966x, struct net_device *dev);
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_xdp.c b/drivers/net/ethernet/microchip/lan966x/lan966x_xdp.c
index 2e6f486ec67d7..9ee61db8690b4 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_xdp.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_xdp.c
@@ -62,7 +62,7 @@  int lan966x_xdp_xmit(struct net_device *dev,
 		struct xdp_frame *xdpf = frames[i];
 		int err;
 
-		err = lan966x_fdma_xmit_xdpf(port, xdpf, NULL, true);
+		err = lan966x_fdma_xmit_xdpf(port, xdpf, 0);
 		if (err)
 			break;
 
@@ -76,7 +76,6 @@  int lan966x_xdp_run(struct lan966x_port *port, struct page *page, u32 data_len)
 {
 	struct bpf_prog *xdp_prog = port->xdp_prog;
 	struct lan966x *lan966x = port->lan966x;
-	struct xdp_frame *xdpf;
 	struct xdp_buff xdp;
 	u32 act;
 
@@ -90,11 +89,8 @@  int lan966x_xdp_run(struct lan966x_port *port, struct page *page, u32 data_len)
 	case XDP_PASS:
 		return FDMA_PASS;
 	case XDP_TX:
-		xdpf = xdp_convert_buff_to_frame(&xdp);
-		if (!xdpf)
-			return FDMA_DROP;
-
-		return lan966x_fdma_xmit_xdpf(port, xdpf, page, false) ?
+		return lan966x_fdma_xmit_xdpf(port, page,
+					      data_len - IFH_LEN_BYTES) ?
 		       FDMA_DROP : FDMA_TX;
 	case XDP_REDIRECT:
 		if (xdp_do_redirect(port->dev, &xdp, xdp_prog))