
[iwl-net] ice: xsk: fix Rx allocation on non-coherent systems

Message ID 20240903180511.244041-1-maciej.fijalkowski@intel.com (mailing list archive)
State Awaiting Upstream
Delegated to: Netdev Maintainers
Series [iwl-net] ice: xsk: fix Rx allocation on non-coherent systems

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 16 this patch: 16
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers fail 1 blamed authors not CCed: daniel@iogearbox.net; 9 maintainers not CCed: przemyslaw.kitszel@intel.com pabeni@redhat.com kuba@kernel.org edumazet@google.com daniel@iogearbox.net bpf@vger.kernel.org ast@kernel.org hawk@kernel.org john.fastabend@gmail.com
netdev/build_clang success Errors and warnings before: 16 this patch: 16
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 16 this patch: 16
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 37 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 18 this patch: 18
netdev/source_inline success Was 0 now: 0

Commit Message

Fijalkowski, Maciej Sept. 3, 2024, 6:05 p.m. UTC
In cases where synchronizing DMA operations is necessary,
xsk_buff_alloc_batch() returns a single buffer instead of the requested
count. Detect this situation when filling the HW Rx ring in the ZC
driver and fall back to xsk_buff_alloc() in a loop so that the ring gets
the buffers it needs.
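
For context, the single-buffer behaviour comes from the DMA-sync slow
path of the batch allocator. A simplified, paraphrased sketch of
xp_alloc_batch() (not a verbatim copy of the kernel source) looks
roughly like this:

/* Simplified sketch: when the device needs DMA syncs, the batch API
 * bails out to the single-buffer xp_alloc() slow path, so at most one
 * xdp_buff comes back regardless of the requested count.
 */
u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
{
	if (unlikely(pool->dma_need_sync)) {
		struct xdp_buff *buff;

		/* Slow path: one buffer per call, synced for the device */
		buff = xp_alloc(pool);
		if (buff)
			*xdp = buff;
		return !!buff;
	}

	/* Fast path (elided here): pull up to @max pre-mapped buffers from
	 * the free list and the fill queue.
	 */
	return xp_alloc_new_from_fq(pool, xdp, max);
}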

Reported-and-tested-by: Dries De Winter <ddewinter@synamedia.com>
Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

Comments

Fijalkowski, Maciej Sept. 5, 2024, 1:56 p.m. UTC | #1
On Tue, Sep 03, 2024 at 08:05:11PM +0200, Maciej Fijalkowski wrote:
> In cases where synchronizing DMA operations is necessary,
> xsk_buff_alloc_batch() returns a single buffer instead of the requested
> count. Detect this situation when filling the HW Rx ring in the ZC
> driver and fall back to xsk_buff_alloc() in a loop so that the ring gets
> the buffers it needs.

Instead of addressing this in every driver, let us do this in the core
by looping over xp_alloc().

Please drop this patch; I will follow up with a fix to the core instead.

Dries also found an issue where, if xp_alloc_batch() is called with
max == 0, it still returns a single buffer on dma_need_sync systems; we
will fix that as well.
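
A possible shape for that core-side change, sketched here only as an
illustration (the helper name and its exact placement are assumptions,
not the final upstream fix):

/* Hypothetical helper: loop over xp_alloc() up to @max so the batch API
 * hands back the requested number of buffers on dma_need_sync systems,
 * and correctly hands back zero when max == 0. xp_alloc_batch()'s
 * dma_need_sync branch would call this instead of a single xp_alloc().
 */
static u32 xp_alloc_slow(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
			 u32 max)
{
	u32 i;

	for (i = 0; i < max; i++) {
		struct xdp_buff *buff;

		buff = xp_alloc(pool);
		if (unlikely(!buff))
			return i;

		*xdp = buff;
		xdp++;
	}

	return max;
}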

> 
> Reported-and-tested-by: Dries De Winter <ddewinter@synamedia.com>
> Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface")
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_xsk.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> index 240a7bec242b..889d0a5070d7 100644
> --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> @@ -449,7 +449,24 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
>  	u16 buffs;
>  	int i;
>  
> +	if (unlikely(!xsk_buff_can_alloc(pool, count)))
> +		return 0;
> +
>  	buffs = xsk_buff_alloc_batch(pool, xdp, count);
> +	/* fill the remainder part that batch API did not provide for us,
> +	 * this is usually the case for non-coherent systems that require DMA
> +	 * syncs
> +	 */
> +	for (; buffs < count; buffs++) {
> +		struct xdp_buff *tmp;
> +
> +		tmp = xsk_buff_alloc(pool);
> +		if (unlikely(!tmp))
> +			goto free;
> +
> +		xdp[buffs] = tmp;
> +	}
> +
>  	for (i = 0; i < buffs; i++) {
>  		dma = xsk_buff_xdp_get_dma(*xdp);
>  		rx_desc->read.pkt_addr = cpu_to_le64(dma);
> @@ -465,6 +482,13 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
>  	}
>  
>  	return buffs;
> +
> +free:
> +	for (i = 0; i < buffs; i++) {
> +		xsk_buff_free(*xdp);
> +		xdp++;
> +	}
> +	return 0;
>  }
>  
>  /**
> -- 
> 2.34.1
> 
>

Patch

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 240a7bec242b..889d0a5070d7 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -449,7 +449,24 @@  static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
 	u16 buffs;
 	int i;
 
+	if (unlikely(!xsk_buff_can_alloc(pool, count)))
+		return 0;
+
 	buffs = xsk_buff_alloc_batch(pool, xdp, count);
+	/* fill the remainder part that batch API did not provide for us,
+	 * this is usually the case for non-coherent systems that require DMA
+	 * syncs
+	 */
+	for (; buffs < count; buffs++) {
+		struct xdp_buff *tmp;
+
+		tmp = xsk_buff_alloc(pool);
+		if (unlikely(!tmp))
+			goto free;
+
+		xdp[buffs] = tmp;
+	}
+
 	for (i = 0; i < buffs; i++) {
 		dma = xsk_buff_xdp_get_dma(*xdp);
 		rx_desc->read.pkt_addr = cpu_to_le64(dma);
@@ -465,6 +482,13 @@  static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
 	}
 
 	return buffs;
+
+free:
+	for (i = 0; i < buffs; i++) {
+		xsk_buff_free(*xdp);
+		xdp++;
+	}
+	return 0;
 }
 
 /**