From patchwork Mon Nov 20 23:44:02 2023
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13462287
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, gospo@broadcom.com, Pavan Chebbi
Subject: [PATCH net-next 10/13] bnxt_en: Modify TX ring indexing logic.
Date: Mon, 20 Nov 2023 15:44:02 -0800
Message-Id: <20231120234405.194542-11-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

Change the TX ring indexing logic so that the index increments
unbounded and is masked only when needed. Modify the existing macros
so that the index is not masked, and add a new macro, RING_TX(), that
masks the index only where needed to look up txr->tx_buf_ring[].
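A minimal standalone sketch of the scheme (the ring size, variable and
function names below are illustrative only; RING_MASK plays the role
of bp->tx_ring_mask):

	#include <assert.h>

	/* Power-of-2 ring whose producer index runs unbounded and is
	 * reduced modulo the ring size only on array access.
	 */
	#define RING_SIZE	8			/* must be a power of 2 */
	#define RING_MASK	(RING_SIZE - 1)

	static int ring[RING_SIZE];
	static unsigned int prod;			/* like txr->tx_prod */

	static void produce(int val)
	{
		ring[prod & RING_MASK] = val;		/* RING_TX()-style mask */
		prod++;					/* NEXT_TX() no longer masks */
	}

	int main(void)
	{
		for (int i = 0; i < 10; i++)
			produce(i);
		assert(prod == 10);			/* ran past RING_SIZE */
		assert(ring[prod & RING_MASK] == 2);	/* slot 10 & 7 == 2 */
		return 0;
	}

Because the mask is applied only at the point of use, the difference
between an unbounded producer index and an unbounded consumer index
directly gives the ring occupancy, even after the counters wrap.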
Reviewed-by: Pavan Chebbi
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 22 +++++++++----------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |  5 +++--
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 14 ++++++------
 3 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 48c443d52344..c83450564765 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -432,9 +432,9 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	len = skb_headlen(skb);
 	last_frag = skb_shinfo(skb)->nr_frags;
 
-	txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+	txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 
-	tx_buf = &txr->tx_buf_ring[prod];
+	tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 	tx_buf->skb = skb;
 	tx_buf->nr_frags = last_frag;
 
@@ -522,7 +522,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		txbd->tx_bd_opaque = SET_TX_OPAQUE(bp, txr, prod, 2);
 		prod = NEXT_TX(prod);
 		tx_push->tx_bd_opaque = txbd->tx_bd_opaque;
-		txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+		txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 		memcpy(txbd, tx_push1, sizeof(*txbd));
 		prod = NEXT_TX(prod);
 		tx_push->doorbell =
@@ -569,7 +569,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	prod = NEXT_TX(prod);
 	txbd1 = (struct tx_bd_ext *)
-		&txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+		&txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 
 	txbd1->tx_bd_hsize_lflags = lflags;
 	if (skb_is_gso(skb)) {
@@ -610,7 +610,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 
 		prod = NEXT_TX(prod);
-		txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+		txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 
 		len = skb_frag_size(frag);
 		mapping = skb_frag_dma_map(&pdev->dev, frag, 0, len,
@@ -619,7 +619,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		if (unlikely(dma_mapping_error(&pdev->dev, mapping)))
 			goto tx_dma_error;
 
-		tx_buf = &txr->tx_buf_ring[prod];
+		tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 		dma_unmap_addr_set(tx_buf, mapping, mapping);
 
 		txbd->tx_bd_haddr = cpu_to_le64(mapping);
@@ -668,7 +668,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	/* start back at beginning and unmap skb */
 	prod = txr->tx_prod;
-	tx_buf = &txr->tx_buf_ring[prod];
+	tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 	dma_unmap_single(&pdev->dev, dma_unmap_addr(tx_buf, mapping),
			 skb_headlen(skb), DMA_TO_DEVICE);
 	prod = NEXT_TX(prod);
@@ -676,7 +676,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* unmap remaining mapped pages */
 	for (i = 0; i < last_frag; i++) {
 		prod = NEXT_TX(prod);
-		tx_buf = &txr->tx_buf_ring[prod];
+		tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 		dma_unmap_page(&pdev->dev, dma_unmap_addr(tx_buf, mapping),
			       skb_frag_size(&skb_shinfo(skb)->frags[i]),
			       DMA_TO_DEVICE);
@@ -702,12 +702,12 @@ static void __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
 	u16 cons = txr->tx_cons;
 	int tx_pkts = 0;
 
-	while (cons != hw_cons) {
+	while (RING_TX(bp, cons) != hw_cons) {
 		struct bnxt_sw_tx_bd *tx_buf;
 		struct sk_buff *skb;
 		int j, last;
 
-		tx_buf = &txr->tx_buf_ring[cons];
+		tx_buf = &txr->tx_buf_ring[RING_TX(bp, cons)];
 		cons = NEXT_TX(cons);
 		skb = tx_buf->skb;
 		tx_buf->skb = NULL;
@@ -731,7 +731,7 @@ static void __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
 
 		for (j = 0; j < last; j++) {
 			cons = NEXT_TX(cons);
-			tx_buf = &txr->tx_buf_ring[cons];
+			tx_buf = &txr->tx_buf_ring[RING_TX(bp, cons)];
 			dma_unmap_page(
 				&pdev->dev,
 				dma_unmap_addr(tx_buf, mapping),
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 7d79a2c8d3c3..1b04510f677b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -689,7 +689,7 @@ struct nqe_cn {
 #define RX_RING(x)	(((x) & ~(RX_DESC_CNT - 1)) >> (BNXT_PAGE_SHIFT - 4))
 #define RX_IDX(x)	((x) & (RX_DESC_CNT - 1))
 
-#define TX_RING(x)	(((x) & ~(TX_DESC_CNT - 1)) >> (BNXT_PAGE_SHIFT - 4))
+#define TX_RING(bp, x)	(((x) & (bp)->tx_ring_mask) >> (BNXT_PAGE_SHIFT - 4))
 #define TX_IDX(x)	((x) & (TX_DESC_CNT - 1))
 
 #define CP_RING(x)	(((x) & ~(CP_DESC_CNT - 1)) >> (BNXT_PAGE_SHIFT - 4))
@@ -720,7 +720,8 @@ struct nqe_cn {
 
 #define NEXT_RX_AGG(idx)	(((idx) + 1) & bp->rx_agg_ring_mask)
 
-#define NEXT_TX(idx)		(((idx) + 1) & bp->tx_ring_mask)
+#define RING_TX(bp, idx)	((idx) & (bp)->tx_ring_mask)
+#define NEXT_TX(idx)		((idx) + 1)
 
 #define ADV_RAW_CMP(idx, n)	((idx) + (n))
 #define NEXT_RAW_CMP(idx)	ADV_RAW_CMP(idx, 1)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index 9d428eb3fdb9..4791f6a14e55 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -42,12 +42,12 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
 
 	/* fill up the first buffer */
 	prod = txr->tx_prod;
-	tx_buf = &txr->tx_buf_ring[prod];
+	tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 	tx_buf->nr_frags = num_frags;
 	if (xdp)
 		tx_buf->page = virt_to_head_page(xdp->data);
 
-	txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+	txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 	flags = (len << TX_BD_LEN_SHIFT) |
 		((num_frags + 1) << TX_BD_FLAGS_BD_CNT_SHIFT) |
 		bnxt_lhint_arr[len >> 9];
@@ -67,10 +67,10 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
 		WRITE_ONCE(txr->tx_prod, prod);
 
 		/* first fill up the first buffer */
-		frag_tx_buf = &txr->tx_buf_ring[prod];
+		frag_tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 		frag_tx_buf->page = skb_frag_page(frag);
 
-		txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+		txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 
 		frag_len = skb_frag_size(frag);
 		frag_mapping = skb_frag_dma_map(&pdev->dev, frag, 0,
@@ -139,8 +139,8 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
 	if (!budget)
 		return;
 
-	while (tx_cons != tx_hw_cons) {
-		tx_buf = &txr->tx_buf_ring[tx_cons];
+	while (RING_TX(bp, tx_cons) != tx_hw_cons) {
+		tx_buf = &txr->tx_buf_ring[RING_TX(bp, tx_cons)];
 
 		if (tx_buf->action == XDP_REDIRECT) {
 			struct pci_dev *pdev = bp->pdev;
@@ -160,7 +160,7 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
 			frags = tx_buf->nr_frags;
 			for (j = 0; j < frags; j++) {
 				tx_cons = NEXT_TX(tx_cons);
-				tx_buf = &txr->tx_buf_ring[tx_cons];
+				tx_buf = &txr->tx_buf_ring[RING_TX(bp, tx_cons)];
 				page_pool_recycle_direct(rxr->page_pool,
							 tx_buf->page);
 			}
 		} else {
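
For reference, a worked example of the patched macro arithmetic with
made-up sizes (a 1024-entry ring, i.e. tx_ring_mask = 1023, split into
4 pages of TX_DESC_CNT = 256 descriptors; the >> (BNXT_PAGE_SHIFT - 4)
shift corresponds to 16-byte TX BDs in a 4K page, and the bp argument
is dropped here for brevity):

	#include <assert.h>

	#define BNXT_PAGE_SHIFT	12
	#define TX_DESC_CNT	256
	#define TX_RING_MASK	1023	/* stands in for (bp)->tx_ring_mask */

	/* Simplified copies of the patched macros */
	#define RING_TX(x)	((x) & TX_RING_MASK)
	#define TX_RING(x)	(((x) & TX_RING_MASK) >> (BNXT_PAGE_SHIFT - 4))
	#define TX_IDX(x)	((x) & (TX_DESC_CNT - 1))

	int main(void)
	{
		unsigned int prod = 1030;	/* unbounded, one wrap past the ring */

		assert(RING_TX(prod) == 6);	/* slot within the ring */
		assert(TX_RING(prod) == 0);	/* which descriptor page */
		assert(TX_IDX(prod) == 6);	/* offset within that page */
		return 0;
	}

This also shows why the mask in TX_RING() moved in front of the shift:
with NEXT_TX() no longer bounding the index, the old
((x) & ~(TX_DESC_CNT - 1)) >> (BNXT_PAGE_SHIFT - 4) form could yield a
page index beyond the ring, so the index is reduced with tx_ring_mask
first.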