From patchwork Fri Jul 28 23:18:29 2023
X-Patchwork-Submitter: Michael Chan <michael.chan@broadcom.com>
X-Patchwork-Id: 13332730
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan <michael.chan@broadcom.com>
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, gospo@broadcom.com, bpf@vger.kernel.org,
    somnath.kotur@broadcom.com
Subject: [PATCH net-next 3/3] bnxt_en: Let the page pool manage the DMA mapping
Date: Fri, 28 Jul 2023 16:18:29 -0700
Message-Id: <20230728231829.235716-4-michael.chan@broadcom.com>
In-Reply-To: <20230728231829.235716-1-michael.chan@broadcom.com>
References: <20230728231829.235716-1-michael.chan@broadcom.com>
From: Somnath Kotur <somnath.kotur@broadcom.com>

Use the page pool's ability to maintain DMA mappings for us.
This avoids re-mapping of the recycled pages.

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 32 +++++++----------------
 1 file changed, 10 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index adf785b7aa42..b35bc92094ce 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -759,7 +759,6 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 					 unsigned int *offset,
 					 gfp_t gfp)
 {
-	struct device *dev = &bp->pdev->dev;
 	struct page *page;
 
 	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
@@ -772,12 +771,7 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 	if (!page)
 		return NULL;
 
-	*mapping = dma_map_page_attrs(dev, page, *offset, BNXT_RX_PAGE_SIZE,
-				      bp->rx_dir, DMA_ATTR_WEAK_ORDERING);
-	if (dma_mapping_error(dev, *mapping)) {
-		page_pool_recycle_direct(rxr->page_pool, page);
-		return NULL;
-	}
+	*mapping = page_pool_get_dma_addr(page) + *offset;
 
 	return page;
 }
@@ -996,8 +990,8 @@ static struct sk_buff *bnxt_rx_multi_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE, bp->rx_dir,
-			     DMA_ATTR_WEAK_ORDERING);
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+				bp->rx_dir);
 	skb = build_skb(data_ptr - bp->rx_offset, BNXT_RX_PAGE_SIZE);
 	if (!skb) {
 		page_pool_recycle_direct(rxr->page_pool, page);
@@ -1030,8 +1024,8 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE, bp->rx_dir,
-			     DMA_ATTR_WEAK_ORDERING);
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+				bp->rx_dir);
 
 	if (unlikely(!payload))
 		payload = eth_get_headlen(bp->dev, data_ptr, len);
@@ -1147,9 +1141,8 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
 			return 0;
 		}
 
-		dma_unmap_page_attrs(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
-				     bp->rx_dir,
-				     DMA_ATTR_WEAK_ORDERING);
+		dma_sync_single_for_cpu(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
+					bp->rx_dir);
 
 		total_frag_len += frag_len;
 		prod = NEXT_RX_AGG(prod);
@@ -2945,10 +2938,6 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
 
 		rx_buf->data = NULL;
 		if (BNXT_RX_PAGE_MODE(bp)) {
-			mapping -= bp->rx_dma_offset;
-			dma_unmap_page_attrs(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
-					     bp->rx_dir,
-					     DMA_ATTR_WEAK_ORDERING);
 			page_pool_recycle_direct(rxr->page_pool, data);
 		} else {
 			dma_unmap_single_attrs(&pdev->dev, mapping,
@@ -2969,9 +2958,6 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
 		if (!page)
 			continue;
 
-		dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping,
-				     BNXT_RX_PAGE_SIZE, bp->rx_dir,
-				     DMA_ATTR_WEAK_ORDERING);
 		rx_agg_buf->page = NULL;
 		__clear_bit(i, rxr->rx_agg_bmap);
 
@@ -3203,7 +3189,9 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	pp.nid = dev_to_node(&bp->pdev->dev);
 	pp.napi = &rxr->bnapi->napi;
 	pp.dev = &bp->pdev->dev;
-	pp.dma_dir = DMA_BIDIRECTIONAL;
+	pp.dma_dir = bp->rx_dir;
+	pp.max_len = BNXT_RX_PAGE_SIZE;
+	pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
 	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE)
 		pp.flags |= PP_FLAG_PAGE_FRAG;
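
For reviewers less familiar with the page pool DMA flags used above, the
following is a minimal, self-contained sketch of the pattern the patch
switches to: with PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV the pool maps each
page once at allocation and re-syncs it for the device when the page is
recycled, so the driver only syncs for the CPU on receive instead of
mapping/unmapping per packet. This is illustrative only, not part of the
patch; the my_*() helper names and sizes are hypothetical.

/*
 * Sketch of the PP_FLAG_DMA_MAP / PP_FLAG_DMA_SYNC_DEV usage pattern.
 * Not driver code; helper names are hypothetical.
 */
#include <linux/dma-mapping.h>
#include <net/page_pool.h>

static struct page_pool *my_rx_pool_create(struct device *dev)
{
	struct page_pool_params pp = { 0 };

	pp.pool_size = 1024;			/* ring-sized in a real driver */
	pp.nid = dev_to_node(dev);
	pp.dev = dev;
	pp.dma_dir = DMA_FROM_DEVICE;		/* RX-only traffic */
	pp.max_len = PAGE_SIZE;			/* bytes synced for the device */
	/* Pool maps pages at alloc time and syncs them for DMA on recycle. */
	pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;

	return page_pool_create(&pp);		/* ERR_PTR() on failure */
}

static void my_rx_one(struct device *dev, struct page_pool *pool)
{
	struct page *page;
	dma_addr_t dma;

	page = page_pool_alloc_pages(pool, GFP_ATOMIC);
	if (!page)
		return;

	/* The pool already holds the mapping; just look the address up. */
	dma = page_pool_get_dma_addr(page);

	/* Sync for the CPU instead of unmapping, as the patch does. */
	dma_sync_single_for_cpu(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE);

	/* ... consume packet data ... */

	/* Recycling keeps the DMA mapping alive for the next packet. */
	page_pool_recycle_direct(pool, page);
}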