From patchwork Tue Aug 15 04:56:48 2023
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13353522
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, gospo@broadcom.com, Somnath Kotur,
    bpf@vger.kernel.org
Subject: [PATCH net-next 02/12] bnxt_en: Let the page pool manage the DMA mapping
Date: Mon, 14 Aug 2023 21:56:48 -0700
Message-Id: <20230815045658.80494-3-michael.chan@broadcom.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20230815045658.80494-1-michael.chan@broadcom.com>
References: <20230815045658.80494-1-michael.chan@broadcom.com>
X-Mailing-List: bpf@vger.kernel.org
From: Somnath Kotur

Use the page pool's ability to maintain DMA mappings for us. This
avoids re-mapping of the recycled pages.

Link: https://lore.kernel.org/netdev/20230728231829.235716-4-michael.chan@broadcom.com/
Cc: bpf@vger.kernel.org
Signed-off-by: Somnath Kotur
Signed-off-by: Michael Chan
---
v2: Use PAGE_SIZE for pp.max_len.
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 32 +++++++----------------
 1 file changed, 10 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 6b815a2288e2..73a3936ee498 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -761,7 +761,6 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 					 struct bnxt_rx_ring_info *rxr,
 					 unsigned int *offset, gfp_t gfp)
 {
-	struct device *dev = &bp->pdev->dev;
 	struct page *page;
 
 	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
@@ -774,12 +773,7 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 	if (!page)
 		return NULL;
 
-	*mapping = dma_map_page_attrs(dev, page, *offset, BNXT_RX_PAGE_SIZE,
-				      bp->rx_dir, DMA_ATTR_WEAK_ORDERING);
-	if (dma_mapping_error(dev, *mapping)) {
-		page_pool_recycle_direct(rxr->page_pool, page);
-		return NULL;
-	}
+	*mapping = page_pool_get_dma_addr(page) + *offset;
 	return page;
 }
 
@@ -998,8 +992,8 @@ static struct sk_buff *bnxt_rx_multi_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
-			     bp->rx_dir, DMA_ATTR_WEAK_ORDERING);
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+				bp->rx_dir);
 	skb = build_skb(data_ptr - bp->rx_offset, BNXT_RX_PAGE_SIZE);
 	if (!skb) {
 		page_pool_recycle_direct(rxr->page_pool, page);
@@ -1032,8 +1026,8 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
-			     bp->rx_dir, DMA_ATTR_WEAK_ORDERING);
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+				bp->rx_dir);
 
 	if (unlikely(!payload))
 		payload = eth_get_headlen(bp->dev, data_ptr, len);
@@ -1149,9 +1143,8 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
 			return 0;
 		}
 
-		dma_unmap_page_attrs(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
-				     bp->rx_dir,
-				     DMA_ATTR_WEAK_ORDERING);
+		dma_sync_single_for_cpu(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
+					bp->rx_dir);
 
 		total_frag_len += frag_len;
 		prod = NEXT_RX_AGG(prod);
@@ -2947,10 +2940,6 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
 
 		rx_buf->data = NULL;
 		if (BNXT_RX_PAGE_MODE(bp)) {
-			mapping -= bp->rx_dma_offset;
-			dma_unmap_page_attrs(&pdev->dev, mapping,
-					     BNXT_RX_PAGE_SIZE, bp->rx_dir,
-					     DMA_ATTR_WEAK_ORDERING);
 			page_pool_recycle_direct(rxr->page_pool, data);
 		} else {
 			dma_unmap_single_attrs(&pdev->dev, mapping,
@@ -2971,9 +2960,6 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
 		if (!page)
 			continue;
 
-		dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping,
-				     BNXT_RX_PAGE_SIZE, bp->rx_dir,
-				     DMA_ATTR_WEAK_ORDERING);
 		rx_agg_buf->page = NULL;
 		__clear_bit(i, rxr->rx_agg_bmap);
 
@@ -3205,7 +3191,9 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	pp.nid = dev_to_node(&bp->pdev->dev);
 	pp.napi = &rxr->bnapi->napi;
 	pp.dev = &bp->pdev->dev;
-	pp.dma_dir = DMA_BIDIRECTIONAL;
+	pp.dma_dir = bp->rx_dir;
+	pp.max_len = PAGE_SIZE;
+	pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
 	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE)
 		pp.flags |= PP_FLAG_PAGE_FRAG;
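
For readers less familiar with the page pool API, the pattern this patch
adopts looks roughly like the sketch below. This is not bnxt code: the
my_* names and the pool size are made up for illustration, and the
include path shown is the one used by kernels of this vintage (newer
trees split it into net/page_pool/types.h and net/page_pool/helpers.h).

	/* Minimal sketch of a driver-side page pool that owns the DMA
	 * mapping, mirroring what this patch switches bnxt_en to.
	 */
	#include <linux/dma-mapping.h>
	#include <net/page_pool.h>

	static struct page_pool *my_create_pool(struct device *dev, int node)
	{
		struct page_pool_params pp = { 0 };

		/* PP_FLAG_DMA_MAP: the pool maps each page once when it
		 * enters the pool and keeps the mapping across recycles,
		 * so the driver never calls dma_map_page()/dma_unmap_page()
		 * itself.
		 * PP_FLAG_DMA_SYNC_DEV: the pool syncs recycled pages for
		 * the device; the driver only syncs for the CPU on receive.
		 */
		pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
		pp.pool_size = 1024;	/* ring-sized; arbitrary here */
		pp.nid = node;
		pp.dev = dev;
		pp.dma_dir = DMA_FROM_DEVICE;
		pp.max_len = PAGE_SIZE;	/* sync up to a full page */

		return page_pool_create(&pp);	/* ERR_PTR() on failure */
	}

	/* RX refill: the mapping already exists, just read it back. */
	static struct page *my_alloc_rx_buf(struct page_pool *pool,
					    dma_addr_t *mapping)
	{
		struct page *page = page_pool_dev_alloc_pages(pool);

		if (!page)
			return NULL;
		*mapping = page_pool_get_dma_addr(page);
		return page;
	}

	/* RX completion: sync for the CPU instead of unmapping. The page
	 * is unmapped only when it finally leaves the pool, not on every
	 * recycle via page_pool_recycle_direct().
	 */
	static void my_rx_complete(struct device *dev, dma_addr_t mapping,
				   unsigned int len)
	{
		dma_sync_single_for_cpu(dev, mapping, len, DMA_FROM_DEVICE);
		/* ... build the skb from the page contents ... */
	}

This is why the diff above can delete every dma_unmap_page_attrs() call
on the RX path and in the ring-teardown code: once PP_FLAG_DMA_MAP is
set, unmapping is the pool's responsibility, and the driver's only DMA
work per packet is the dma_sync_single_for_cpu() before it touches the
data.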