From patchwork Tue Jul 18 21:48:01 2023
X-Patchwork-Submitter: Haiyang Zhang <haiyangz@microsoft.com>
X-Patchwork-Id: 13317807
X-Patchwork-Delegate: kuba@kernel.org
From: Haiyang Zhang <haiyangz@microsoft.com>
To: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org
Cc: Haiyang Zhang, Dexuan Cui, KY Srinivasan, Paul Rosswurm,
    olaf@aepfle.de, vkuznets@redhat.com, davem@davemloft.net,
    wei.liu@kernel.org, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, leon@kernel.org, Long Li,
    ssengar@linux.microsoft.com, linux-rdma@vger.kernel.org,
    daniel@iogearbox.net, john.fastabend@gmail.com, bpf@vger.kernel.org,
    ast@kernel.org, Ajay Sharma, hawk@kernel.org, tglx@linutronix.de,
    shradhagupta@linux.microsoft.com, linux-kernel@vger.kernel.org
Subject: [PATCH V2,net-next] net: mana: Add page pool for RX buffers
Date: Tue, 18 Jul 2023 21:48:01 +0000
Message-ID: <1689716837-22859-1-git-send-email-haiyangz@microsoft.com>

Add page pool for RX buffers, for faster buffer recycling and reduced
CPU usage. The standard page pool API is used.
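
For readers unfamiliar with that API, the sketch below shows the typical
create/alloc/recycle/destroy lifecycle. It is an editor's illustration
only, not code from this patch: struct my_rxq and the my_*() helpers are
made-up names, and the pool size is arbitrary; the driver's actual usage
is in the diff that follows.

/* Editor's illustration -- typical page pool lifecycle. */
#include <linux/dma-mapping.h>
#include <net/page_pool.h>

struct my_rxq {
	struct page_pool *pool;
};

static int my_pool_create(struct my_rxq *rxq, struct device *dev,
			  struct napi_struct *napi)
{
	struct page_pool_params pprm = {};

	pprm.pool_size = 512;	/* buffers cached by the pool (arbitrary) */
	pprm.napi = napi;	/* enables lockless direct recycling */
	pprm.dev = dev;
	pprm.dma_dir = DMA_FROM_DEVICE;

	rxq->pool = page_pool_create(&pprm);
	return IS_ERR(rxq->pool) ? PTR_ERR(rxq->pool) : 0;
}

static struct page *my_rx_alloc(struct my_rxq *rxq)
{
	/* Allocate an RX page from the pool (NAPI/softirq context). */
	return page_pool_dev_alloc_pages(rxq->pool);
}

static void my_rx_drop(struct my_rxq *rxq, struct page *page)
{
	/* Return a dropped buffer straight to the pool's cache. */
	page_pool_recycle_direct(rxq->pool, page);
}

static void my_pool_teardown(struct my_rxq *rxq)
{
	/* Destroy after all outstanding pages have been returned. */
	page_pool_destroy(rxq->pool);
}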
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
---
V2: Use the standard page pool API as suggested by Jesper Dangaard Brouer

---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 101 +++++++++++++++---
 include/net/mana/mana.h                       |   3 +
 2 files changed, 89 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index a499e460594b..0b557b70cd45 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -1414,8 +1414,8 @@ static struct sk_buff *mana_build_skb(struct mana_rxq *rxq, void *buf_va,
 	return skb;
 }
 
-static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
-			struct mana_rxq *rxq)
+static void mana_rx_skb(void *buf_va, bool from_pool,
+			struct mana_rxcomp_oob *cqe, struct mana_rxq *rxq)
 {
 	struct mana_stats_rx *rx_stats = &rxq->stats;
 	struct net_device *ndev = rxq->ndev;
@@ -1437,8 +1437,12 @@ static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
 
 	act = mana_run_xdp(ndev, rxq, &xdp, buf_va, pkt_len);
 
-	if (act == XDP_REDIRECT && !rxq->xdp_rc)
+	if (act == XDP_REDIRECT && !rxq->xdp_rc) {
+		if (from_pool)
+			page_pool_release_page(rxq->page_pool,
+					       virt_to_head_page(buf_va));
 		return;
+	}
 
 	if (act != XDP_PASS && act != XDP_TX)
 		goto drop_xdp;
@@ -1448,6 +1452,9 @@ static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
 	if (!skb)
 		goto drop;
 
+	if (from_pool)
+		skb_mark_for_recycle(skb);
+
 	skb->dev = napi->dev;
 
 	skb->protocol = eth_type_trans(skb, ndev);
@@ -1498,9 +1505,14 @@ static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
 	u64_stats_update_end(&rx_stats->syncp);
 
 drop:
-	WARN_ON_ONCE(rxq->xdp_save_va);
-	/* Save for reuse */
-	rxq->xdp_save_va = buf_va;
+	if (from_pool) {
+		page_pool_recycle_direct(rxq->page_pool,
+					 virt_to_head_page(buf_va));
+	} else {
+		WARN_ON_ONCE(rxq->xdp_save_va);
+		/* Save for reuse */
+		rxq->xdp_save_va = buf_va;
+	}
 
 	++ndev->stats.rx_dropped;
 
@@ -1508,11 +1520,13 @@ static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
 }
 
 static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
-			     dma_addr_t *da, bool is_napi)
+			     dma_addr_t *da, bool *from_pool, bool is_napi)
 {
 	struct page *page;
 	void *va;
 
+	*from_pool = false;
+
 	/* Reuse XDP dropped page if available */
 	if (rxq->xdp_save_va) {
 		va = rxq->xdp_save_va;
@@ -1533,7 +1547,13 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
 			return NULL;
 		}
 	} else {
-		page = dev_alloc_page();
+		if (is_napi) {
+			page = page_pool_dev_alloc_pages(rxq->page_pool);
+			*from_pool = true;
+		} else {
+			page = dev_alloc_page();
+		}
+
 		if (!page)
 			return NULL;
 
@@ -1543,7 +1563,11 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
 	*da = dma_map_single(dev, va + rxq->headroom, rxq->datasize,
 			     DMA_FROM_DEVICE);
 	if (dma_mapping_error(dev, *da)) {
-		put_page(virt_to_head_page(va));
+		if (*from_pool)
+			page_pool_put_full_page(rxq->page_pool, page, true);
+		else
+			put_page(virt_to_head_page(va));
+
 		return NULL;
 	}
 
@@ -1552,21 +1576,25 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
 
 /* Allocate frag for rx buffer, and save the old buf */
 static void mana_refill_rx_oob(struct device *dev, struct mana_rxq *rxq,
-			       struct mana_recv_buf_oob *rxoob, void **old_buf)
+			       struct mana_recv_buf_oob *rxoob, void **old_buf,
+			       bool *old_fp)
 {
+	bool from_pool;
 	dma_addr_t da;
 	void *va;
 
-	va = mana_get_rxfrag(rxq, dev, &da, true);
+	va = mana_get_rxfrag(rxq, dev, &da, &from_pool, true);
 	if (!va)
 		return;
 
 	dma_unmap_single(dev,
			 rxoob->sgl[0].address, rxq->datasize, DMA_FROM_DEVICE);
 	*old_buf = rxoob->buf_va;
+	*old_fp = rxoob->from_pool;
 
 	rxoob->buf_va = va;
 	rxoob->sgl[0].address = da;
+	rxoob->from_pool = from_pool;
 }
 
 static void mana_process_rx_cqe(struct mana_rxq *rxq, struct mana_cq *cq,
@@ -1580,6 +1608,7 @@ static void mana_process_rx_cqe(struct mana_rxq *rxq, struct mana_cq *cq,
 	struct device *dev = gc->dev;
 	void *old_buf = NULL;
 	u32 curr, pktlen;
+	bool old_fp;
 
 	apc = netdev_priv(ndev);
 
@@ -1622,12 +1651,12 @@ static void mana_process_rx_cqe(struct mana_rxq *rxq, struct mana_cq *cq,
 	rxbuf_oob = &rxq->rx_oobs[curr];
 	WARN_ON_ONCE(rxbuf_oob->wqe_inf.wqe_size_in_bu != 1);
 
-	mana_refill_rx_oob(dev, rxq, rxbuf_oob, &old_buf);
+	mana_refill_rx_oob(dev, rxq, rxbuf_oob, &old_buf, &old_fp);
 
 	/* Unsuccessful refill will have old_buf == NULL.
 	 * In this case, mana_rx_skb() will drop the packet.
 	 */
-	mana_rx_skb(old_buf, oob, rxq);
+	mana_rx_skb(old_buf, old_fp, oob, rxq);
 
 drop:
 	mana_move_wq_tail(rxq->gdma_rq, rxbuf_oob->wqe_inf.wqe_size_in_bu);
@@ -1659,6 +1688,8 @@ static void mana_poll_rx_cq(struct mana_cq *cq)
 
 	if (rxq->xdp_flush)
 		xdp_do_flush();
+
+	page_pool_nid_changed(rxq->page_pool, numa_mem_id());
 }
 
 static int mana_cq_handler(void *context, struct gdma_queue *gdma_queue)
@@ -1881,6 +1912,7 @@ static void mana_destroy_rxq(struct mana_port_context *apc,
 	struct mana_recv_buf_oob *rx_oob;
 	struct device *dev = gc->dev;
 	struct napi_struct *napi;
+	struct page *page;
 	int i;
 
 	if (!rxq)
@@ -1913,10 +1945,18 @@ static void mana_destroy_rxq(struct mana_port_context *apc,
 		dma_unmap_single(dev, rx_oob->sgl[0].address,
 				 rx_oob->sgl[0].size, DMA_FROM_DEVICE);
 
-		put_page(virt_to_head_page(rx_oob->buf_va));
+		page = virt_to_head_page(rx_oob->buf_va);
+
+		if (rx_oob->from_pool)
+			page_pool_put_full_page(rxq->page_pool, page, false);
+		else
+			put_page(page);
+
 		rx_oob->buf_va = NULL;
 	}
 
+	page_pool_destroy(rxq->page_pool);
+
 	if (rxq->gdma_rq)
 		mana_gd_destroy_queue(gc, rxq->gdma_rq);
 
@@ -1927,18 +1967,20 @@ static int mana_fill_rx_oob(struct mana_recv_buf_oob *rx_oob, u32 mem_key,
 			    struct mana_rxq *rxq, struct device *dev)
 {
 	struct mana_port_context *mpc = netdev_priv(rxq->ndev);
+	bool from_pool = false;
 	dma_addr_t da;
 	void *va;
 
 	if (mpc->rxbufs_pre)
 		va = mana_get_rxbuf_pre(rxq, &da);
 	else
-		va = mana_get_rxfrag(rxq, dev, &da, false);
+		va = mana_get_rxfrag(rxq, dev, &da, &from_pool, false);
 
 	if (!va)
 		return -ENOMEM;
 
 	rx_oob->buf_va = va;
+	rx_oob->from_pool = from_pool;
 
 	rx_oob->sgl[0].address = da;
 	rx_oob->sgl[0].size = rxq->datasize;
@@ -2008,6 +2050,28 @@ static int mana_push_wqe(struct mana_rxq *rxq)
 	return 0;
 }
 
+static int mana_create_page_pool(struct gdma_context *gc, struct mana_cq *cq,
+				 struct mana_rxq *rxq)
+{
+	struct page_pool_params pprm = {};
+	int ret;
+
+	pprm.pool_size = RX_BUFFERS_PER_QUEUE;
+	pprm.napi = &cq->napi;
+	pprm.dev = gc->dev;
+	pprm.dma_dir = DMA_FROM_DEVICE;
+
+	rxq->page_pool = page_pool_create(&pprm);
+
+	if (IS_ERR(rxq->page_pool)) {
+		ret = PTR_ERR(rxq->page_pool);
+		rxq->page_pool = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
 static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
 					u32 rxq_idx, struct mana_eq *eq,
 					struct net_device *ndev)
@@ -2106,6 +2170,13 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
 
 	netif_napi_add_weight(ndev, &cq->napi, mana_poll, 1);
 
+	/* Create page pool for RX queue */
+	err = mana_create_page_pool(gc, cq, rxq);
+	if (err) {
+		netdev_err(ndev, "Create page pool err:%d\n", err);
+		goto out;
+	}
+
 	WARN_ON(xdp_rxq_info_reg(&rxq->xdp_rxq,
				 ndev, rxq_idx, cq->napi.napi_id));
 	WARN_ON(xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq,
diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index 024ad8ddb27e..b12859511839 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -280,6 +280,7 @@ struct mana_recv_buf_oob {
 	struct gdma_wqe_request wqe_req;
 
 	void *buf_va;
+	bool from_pool; /* allocated from a page pool */
 
 	/* SGL of the buffer going to be sent has part of the work request. */
 	u32 num_sge;
@@ -330,6 +331,8 @@ struct mana_rxq {
 	bool xdp_flush;
 	int xdp_rc; /* XDP redirect return code */
 
+	struct page_pool *page_pool;
+
 	/* MUST BE THE LAST MEMBER:
 	 * Each receive buffer has an associated mana_recv_buf_oob.
 	 */
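
A note on the recycling path above (editor's comment, not from the
patch): skb_mark_for_recycle() sets skb->pp_recycle, so when the stack
eventually frees the skb, pages that came from a page pool are returned
to that pool instead of being freed with put_page(). A minimal sketch of
the pattern, with hypothetical my_rx_one() and struct my_rxq:

/* Editor's sketch: pool-backed RX buffer becoming a recyclable skb. */
static void my_rx_one(struct my_rxq *rxq, struct napi_struct *napi,
		      void *buf_va, u32 pkt_len)
{
	struct sk_buff *skb = build_skb(buf_va, PAGE_SIZE);

	if (!skb) {
		/* Hand the page straight back to the pool. */
		page_pool_recycle_direct(rxq->pool,
					 virt_to_head_page(buf_va));
		return;
	}

	/* kfree_skb()/napi_consume_skb() will now return pool pages
	 * to their pool instead of calling put_page() on them.
	 */
	skb_mark_for_recycle(skb);

	skb_put(skb, pkt_len);
	napi_gro_receive(napi, skb);
}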