From patchwork Thu Aug 17 23:19:07 2023
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13357065
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, gospo@broadcom.com, Somnath Kotur, bpf@vger.kernel.org
Subject: [PATCH net-next v2 2/6] bnxt_en: Let the page pool manage the DMA mapping
Date: Thu, 17 Aug 2023 16:19:07 -0700
Message-Id: <20230817231911.165035-3-michael.chan@broadcom.com>
In-Reply-To: <20230817231911.165035-1-michael.chan@broadcom.com>
References: <20230817231911.165035-1-michael.chan@broadcom.com>
From: Somnath Kotur

Use the page pool's ability to maintain DMA mappings for us.
This avoids re-mapping of the recycled pages.

Link: https://lore.kernel.org/netdev/20230728231829.235716-4-michael.chan@broadcom.com/
Cc: bpf@vger.kernel.org
Signed-off-by: Somnath Kotur
Signed-off-by: Michael Chan
---
v2: Use PAGE_SIZE for pp.max_len.
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 32 +++++++----------------
 1 file changed, 10 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 6b815a2288e2..73a3936ee498 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -761,7 +761,6 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 					 struct bnxt_rx_ring_info *rxr,
 					 unsigned int *offset, gfp_t gfp)
 {
-	struct device *dev = &bp->pdev->dev;
 	struct page *page;
 
 	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
@@ -774,12 +773,7 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 	if (!page)
 		return NULL;
 
-	*mapping = dma_map_page_attrs(dev, page, *offset, BNXT_RX_PAGE_SIZE,
-				      bp->rx_dir, DMA_ATTR_WEAK_ORDERING);
-	if (dma_mapping_error(dev, *mapping)) {
-		page_pool_recycle_direct(rxr->page_pool, page);
-		return NULL;
-	}
+	*mapping = page_pool_get_dma_addr(page) + *offset;
 	return page;
 }
 
@@ -998,8 +992,8 @@ static struct sk_buff *bnxt_rx_multi_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
-			     bp->rx_dir, DMA_ATTR_WEAK_ORDERING);
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+				bp->rx_dir);
 	skb = build_skb(data_ptr - bp->rx_offset, BNXT_RX_PAGE_SIZE);
 	if (!skb) {
 		page_pool_recycle_direct(rxr->page_pool, page);
@@ -1032,8 +1026,8 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
-			     bp->rx_dir, DMA_ATTR_WEAK_ORDERING);
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+				bp->rx_dir);
 
 	if (unlikely(!payload))
 		payload = eth_get_headlen(bp->dev, data_ptr, len);
@@ -1149,9 +1143,8 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
 			return 0;
 		}
 
-		dma_unmap_page_attrs(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
-				     bp->rx_dir,
-				     DMA_ATTR_WEAK_ORDERING);
+		dma_sync_single_for_cpu(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
+					bp->rx_dir);
 
 		total_frag_len += frag_len;
 		prod = NEXT_RX_AGG(prod);
@@ -2947,10 +2940,6 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
 
 		rx_buf->data = NULL;
 		if (BNXT_RX_PAGE_MODE(bp)) {
-			mapping -= bp->rx_dma_offset;
-			dma_unmap_page_attrs(&pdev->dev, mapping,
-					     BNXT_RX_PAGE_SIZE, bp->rx_dir,
-					     DMA_ATTR_WEAK_ORDERING);
 			page_pool_recycle_direct(rxr->page_pool, data);
 		} else {
 			dma_unmap_single_attrs(&pdev->dev, mapping,
@@ -2971,9 +2960,6 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
 		if (!page)
 			continue;
 
-		dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping,
-				     BNXT_RX_PAGE_SIZE, bp->rx_dir,
-				     DMA_ATTR_WEAK_ORDERING);
 		rx_agg_buf->page = NULL;
 		__clear_bit(i, rxr->rx_agg_bmap);
 
@@ -3205,7 +3191,9 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	pp.nid = dev_to_node(&bp->pdev->dev);
 	pp.napi = &rxr->bnapi->napi;
 	pp.dev = &bp->pdev->dev;
-	pp.dma_dir = DMA_BIDIRECTIONAL;
+	pp.dma_dir = bp->rx_dir;
+	pp.max_len = PAGE_SIZE;
+	pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
 	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE)
 		pp.flags |= PP_FLAG_PAGE_FRAG;
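
[Editor's note, not part of the patch] For readers less familiar with the
page pool API the patch leans on: with PP_FLAG_DMA_MAP the pool maps each
page once when it is first allocated and keeps that mapping for the page's
lifetime, and with PP_FLAG_DMA_SYNC_DEV it syncs up to pp.max_len bytes for
the device whenever a page is recycled. The driver then only needs
dma_sync_single_for_cpu() before reading received data, which is exactly
the substitution made in the hunks above. The sketch below shows that
pattern in isolation; the "my_" function names and the pool_size value are
illustrative assumptions, not bnxt code, and it assumes a kernel where
these page pool helpers live in <net/page_pool.h> (as in 6.5).

	/*
	 * Minimal sketch of the "pool owns the DMA mapping" pattern.
	 * All "my_" identifiers are hypothetical.
	 */
	#include <linux/device.h>
	#include <linux/dma-mapping.h>
	#include <net/page_pool.h>

	static struct page_pool *my_create_rx_pool(struct device *dev,
						   struct napi_struct *napi)
	{
		struct page_pool_params pp = { 0 };

		pp.pool_size = 1024;		/* illustrative ring size */
		pp.nid = dev_to_node(dev);
		pp.napi = napi;
		pp.dev = dev;
		pp.dma_dir = DMA_FROM_DEVICE;	/* rx-only direction */
		pp.max_len = PAGE_SIZE;		/* sync whole page for device */
		/* Pool maps pages once and re-syncs them on recycle. */
		pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;

		return page_pool_create(&pp);	/* ERR_PTR() on failure */
	}

	static struct page *my_alloc_rx_page(struct page_pool *pool,
					     dma_addr_t *mapping)
	{
		struct page *page = page_pool_alloc_pages(pool, GFP_ATOMIC);

		if (!page)
			return NULL;
		/* No dma_map_page_attrs()/dma_mapping_error() here:
		 * the pool already holds the mapping, just read it back.
		 */
		*mapping = page_pool_get_dma_addr(page);
		return page;
	}

	/* On rx completion, sync for the CPU instead of unmapping. */
	static void my_rx_complete(struct device *dev, dma_addr_t mapping,
				   unsigned int len)
	{
		dma_sync_single_for_cpu(dev, mapping, len, DMA_FROM_DEVICE);
		/* ... build the skb from the page contents ... */
	}

Presumably this is also why v2 sets pp.max_len to PAGE_SIZE: the
device-direction sync on recycle then covers the full page no matter
where a frag offset landed within it.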