
[net-next,11/15] net/mlx5e: XDP, Allow non-linear single-segment frames in XDP TX MPWQE

Message ID 20230417121903.46218-12-tariqt@nvidia.com (mailing list archive)
State Accepted
Delegated to: Netdev Maintainers
Series net/mlx5e: Extend XDP multi-buffer capabilities

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 18 this patch: 18
netdev/cc_maintainers warning 7 maintainers not CCed: leon@kernel.org linux-rdma@vger.kernel.org daniel@iogearbox.net john.fastabend@gmail.com bpf@vger.kernel.org ast@kernel.org hawk@kernel.org
netdev/build_clang success Errors and warnings before: 18 this patch: 18
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 18 this patch: 18
netdev/checkpatch warning WARNING: line length of 84 exceeds 80 columns WARNING: line length of 86 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Tariq Toukan April 17, 2023, 12:18 p.m. UTC
Under a few restrictions, the TX MPWQE feature can serve multiple TX
packets in a single TX descriptor. It requires each of the packets to
have a single scatter entry / segment.

Today we allow only linear frames to use this feature, although there is
no real problem with non-linear ones where the whole packet resides in
the first fragment.

Expand XDP TX MPWQE support to include such frames. This is in
preparation for the downstream patch, in which we will generate such
non-linear frames.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 35 ++++++++++++++-----
 1 file changed, 26 insertions(+), 9 deletions(-)
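
For readers skimming the series, below is a minimal userspace sketch of the
single-segment rule the commit message describes. The struct and function
names (xmit_desc, mpwqe_single_segment) are hypothetical simplifications,
not the driver's types; only the "!!len + nr_frags" counting trick is taken
from the patch.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, simplified stand-in for the driver's xmit descriptor. */
struct xmit_desc {
	unsigned int linear_len;   /* bytes in the linear part of the frame */
	unsigned int nr_frags;     /* number of page fragments */
};

/* A frame fits into one MPWQE data segment when it maps to exactly one
 * scatter entry: either linear data only, or no linear data and a single
 * fragment. This is the patch's "!!len + nr_frags > 1" bail-out condition,
 * inverted.
 */
static bool mpwqe_single_segment(const struct xmit_desc *d)
{
	return !!d->linear_len + d->nr_frags == 1;
}

int main(void)
{
	struct xmit_desc linear_only = { .linear_len = 1500, .nr_frags = 0 };
	struct xmit_desc one_frag    = { .linear_len = 0,    .nr_frags = 1 };
	struct xmit_desc multi_buf   = { .linear_len = 256,  .nr_frags = 2 };

	/* Prints "1 1 0": the first two stay on the MPWQE path, the last
	 * one falls back to a regular WQE.
	 */
	printf("%d %d %d\n", mpwqe_single_segment(&linear_only),
	       mpwqe_single_segment(&one_frag),
	       mpwqe_single_segment(&multi_buf));
	return 0;
}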

Patch

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index d89f934570ee..f0e6095809fa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -405,18 +405,35 @@  mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptx
 {
 	struct mlx5e_tx_mpwqe *session = &sq->mpwqe;
 	struct mlx5e_xdpsq_stats *stats = sq->stats;
+	struct mlx5e_xmit_data *p = xdptxd;
+	struct mlx5e_xmit_data tmp;
 
 	if (xdptxd->has_frags) {
-		/* MPWQE is enabled, but a multi-buffer packet is queued for
-		 * transmission. MPWQE can't send fragmented packets, so close
-		 * the current session and fall back to a regular WQE.
-		 */
-		if (unlikely(sq->mpwqe.wqe))
-			mlx5e_xdp_mpwqe_complete(sq);
-		return mlx5e_xmit_xdp_frame(sq, xdptxd, 0);
+		struct mlx5e_xmit_data_frags *xdptxdf =
+			container_of(xdptxd, struct mlx5e_xmit_data_frags, xd);
+
+		if (!!xdptxd->len + xdptxdf->sinfo->nr_frags > 1) {
+			/* MPWQE is enabled, but a multi-buffer packet is queued for
+			 * transmission. MPWQE can't send fragmented packets, so close
+			 * the current session and fall back to a regular WQE.
+			 */
+			if (unlikely(sq->mpwqe.wqe))
+				mlx5e_xdp_mpwqe_complete(sq);
+			return mlx5e_xmit_xdp_frame(sq, xdptxd, 0);
+		}
+		if (!xdptxd->len) {
+			skb_frag_t *frag = &xdptxdf->sinfo->frags[0];
+
+			tmp.data = skb_frag_address(frag);
+			tmp.len = skb_frag_size(frag);
+			tmp.dma_addr = xdptxdf->dma_arr ? xdptxdf->dma_arr[0] :
+				page_pool_get_dma_addr(skb_frag_page(frag)) +
+				skb_frag_off(frag);
+			p = &tmp;
+		}
 	}
 
-	if (unlikely(xdptxd->len > sq->hw_mtu)) {
+	if (unlikely(p->len > sq->hw_mtu)) {
 		stats->err++;
 		return false;
 	}
@@ -434,7 +451,7 @@  mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptx
 		mlx5e_xdp_mpwqe_session_start(sq);
 	}
 
-	mlx5e_xdp_mpwqe_add_dseg(sq, xdptxd, stats);
+	mlx5e_xdp_mpwqe_add_dseg(sq, p, stats);
 
 	if (unlikely(mlx5e_xdp_mpwqe_is_full(session, sq->max_sq_mpw_wqebbs)))
 		mlx5e_xdp_mpwqe_complete(sq);