From patchwork Wed Apr  6 02:53:50 2022
X-Patchwork-Submitter: Michael Chan <michael.chan@broadcom.com>
X-Patchwork-Id: 12803433
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan <michael.chan@broadcom.com>
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, kuba@kernel.org, gospo@broadcom.com,
	bpf@vger.kernel.org, john.fastabend@gmail.com, toke@redhat.com,
	lorenzo@kernel.org, ast@kernel.org, daniel@iogearbox.net,
	echaudro@redhat.com, pabeni@redhat.com
Subject: [PATCH net-next v3 08/11] bnxt: add page_pool support for aggregation ring when using xdp
Date: Tue, 5 Apr 2022 22:53:50 -0400
Message-Id: <1649213633-7662-9-git-send-email-michael.chan@broadcom.com>
In-Reply-To: <1649213633-7662-1-git-send-email-michael.chan@broadcom.com>
References: <1649213633-7662-1-git-send-email-michael.chan@broadcom.com>
X-Mailing-List: bpf@vger.kernel.org
From: Andy Gospodarek <gospo@broadcom.com>

If we are using aggregation rings with XDP enabled, allocate page
buffers for the aggregation rings from the page_pool.

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
---
A sketch of the page_pool allocate/map/recycle pattern used here
follows the diff.

 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 77 ++++++++++++++---------
 1 file changed, 47 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 2a919905f256..f89a45042f38 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -739,7 +739,6 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 		page_pool_recycle_direct(rxr->page_pool, page);
 		return NULL;
 	}
-	*mapping += bp->rx_dma_offset;
 
 	return page;
 }
@@ -781,6 +780,7 @@ int bnxt_alloc_rx_data(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 		if (!page)
 			return -ENOMEM;
 
+		mapping += bp->rx_dma_offset;
 		rx_buf->data = page;
 		rx_buf->data_ptr = page_address(page) + bp->rx_offset;
 	} else {
@@ -841,33 +841,41 @@ static inline int bnxt_alloc_rx_page(struct bnxt *bp,
 	u16 sw_prod = rxr->rx_sw_agg_prod;
 	unsigned int offset = 0;
 
-	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
-		page = rxr->rx_page;
-		if (!page) {
+	if (BNXT_RX_PAGE_MODE(bp)) {
+		page = __bnxt_alloc_rx_page(bp, &mapping, rxr, gfp);
+
+		if (!page)
+			return -ENOMEM;
+
+	} else {
+		if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
+			page = rxr->rx_page;
+			if (!page) {
+				page = alloc_page(gfp);
+				if (!page)
+					return -ENOMEM;
+				rxr->rx_page = page;
+				rxr->rx_page_offset = 0;
+			}
+			offset = rxr->rx_page_offset;
+			rxr->rx_page_offset += BNXT_RX_PAGE_SIZE;
+			if (rxr->rx_page_offset == PAGE_SIZE)
+				rxr->rx_page = NULL;
+			else
+				get_page(page);
+		} else {
 			page = alloc_page(gfp);
 			if (!page)
 				return -ENOMEM;
-			rxr->rx_page = page;
-			rxr->rx_page_offset = 0;
 		}
-		offset = rxr->rx_page_offset;
-		rxr->rx_page_offset += BNXT_RX_PAGE_SIZE;
-		if (rxr->rx_page_offset == PAGE_SIZE)
-			rxr->rx_page = NULL;
-		else
-			get_page(page);
-	} else {
-		page = alloc_page(gfp);
-		if (!page)
-			return -ENOMEM;
-	}
 
-	mapping = dma_map_page_attrs(&pdev->dev, page, offset,
-				     BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE,
-				     DMA_ATTR_WEAK_ORDERING);
-	if (dma_mapping_error(&pdev->dev, mapping)) {
-		__free_page(page);
-		return -EIO;
+		mapping = dma_map_page_attrs(&pdev->dev, page, offset,
+					     BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE,
+					     DMA_ATTR_WEAK_ORDERING);
+		if (dma_mapping_error(&pdev->dev, mapping)) {
+			__free_page(page);
+			return -EIO;
+		}
 	}
 
 	if (unlikely(test_bit(sw_prod, rxr->rx_agg_bmap)))
@@ -1105,7 +1113,7 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
 		}
 
 		dma_unmap_page_attrs(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
-				     DMA_FROM_DEVICE,
+				     bp->rx_dir,
 				     DMA_ATTR_WEAK_ORDERING);
 
 		total_frag_len += frag_len;
@@ -2936,14 +2944,23 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
 		if (!page)
 			continue;
 
-		dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping,
-				     BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE,
-				     DMA_ATTR_WEAK_ORDERING);
+		if (BNXT_RX_PAGE_MODE(bp)) {
+			dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping,
+					     BNXT_RX_PAGE_SIZE, bp->rx_dir,
+					     DMA_ATTR_WEAK_ORDERING);
+			rx_agg_buf->page = NULL;
+			__clear_bit(i, rxr->rx_agg_bmap);
 
-		rx_agg_buf->page = NULL;
-		__clear_bit(i, rxr->rx_agg_bmap);
+			page_pool_recycle_direct(rxr->page_pool, page);
+		} else {
+			dma_unmap_page_attrs(&pdev->dev, rx_agg_buf->mapping,
+					     BNXT_RX_PAGE_SIZE, DMA_FROM_DEVICE,
+					     DMA_ATTR_WEAK_ORDERING);
+			rx_agg_buf->page = NULL;
+			__clear_bit(i, rxr->rx_agg_bmap);
 
-		__free_page(page);
+			__free_page(page);
+		}
 	}
 
 skip_rx_agg_free:
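
For readers less familiar with the page_pool API, below is a minimal
sketch of the allocate/map/recycle pattern that __bnxt_alloc_rx_page()
implements and that this patch extends to the aggregation ring. It is
an illustration, not driver code: the helper name demo_alloc_agg_page()
and its argument list are made up for this note, while
page_pool_alloc_pages(), dma_map_page_attrs(), dma_mapping_error() and
page_pool_recycle_direct() are existing kernel APIs.

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

/* Hypothetical helper: allocate one ring page from a page_pool and
 * DMA-map it.  On mapping failure the page goes back to the pool via
 * page_pool_recycle_direct() instead of __free_page(), mirroring the
 * BNXT_RX_PAGE_MODE() branches this patch adds.
 */
static struct page *demo_alloc_agg_page(struct device *dev,
					struct page_pool *pool,
					dma_addr_t *mapping, gfp_t gfp)
{
	struct page *page = page_pool_alloc_pages(pool, gfp);

	if (!page)
		return NULL;

	/* DMA_BIDIRECTIONAL matches bp->rx_dir in XDP mode, where the
	 * device writes packets and the CPU may rewrite them for XDP_TX.
	 */
	*mapping = dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
				      DMA_BIDIRECTIONAL,
				      DMA_ATTR_WEAK_ORDERING);
	if (dma_mapping_error(dev, *mapping)) {
		page_pool_recycle_direct(pool, page);
		return NULL;
	}
	return page;
}

Because pool pages are mapped with bp->rx_dir rather than a hard-coded
DMA_FROM_DEVICE, the unmap sites must use the same direction, which is
why the hunks in __bnxt_rx_agg_pages() and bnxt_free_one_rx_ring_skbs()
above switch to bp->rx_dir on the page-pool paths.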