From patchwork Thu Jan 16 21:55:19 2025
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 13942354
X-Patchwork-Delegate: kuba@kernel.org
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Gal Pressman,
 Leon Romanovsky
Subject: [net-next 01/11] net: Kconfig NET_DEVMEM selects GENERIC_ALLOCATOR
Date: Thu, 16 Jan 2025 13:55:19 -0800
Message-ID: <20250116215530.158886-2-saeed@kernel.org>
In-Reply-To: <20250116215530.158886-1-saeed@kernel.org>
References: <20250116215530.158886-1-saeed@kernel.org>

From: Saeed Mahameed

GENERIC_ALLOCATOR is a non-prompt Kconfig symbol, meaning users can't
enable it directly. All Kconfig users of GENERIC_ALLOCATOR select it,
except for NET_DEVMEM, which only depends on it, so there is no easy way
to turn GENERIC_ALLOCATOR on without enabling other, unnecessary configs
that select it. Instead of depending on it, select it when NET_DEVMEM is
enabled.
Signed-off-by: Saeed Mahameed
---
 net/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/Kconfig b/net/Kconfig
index c3fca69a7c83..4c18dd416a50 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -68,8 +68,8 @@ config SKB_EXTENSIONS

 config NET_DEVMEM
 	def_bool y
+	select GENERIC_ALLOCATOR
 	depends on DMA_SHARED_BUFFER
-	depends on GENERIC_ALLOCATOR
 	depends on PAGE_POOL

 config NET_SHAPER

From patchwork Thu Jan 16 21:55:20 2025
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 13942355
X-Patchwork-Delegate: kuba@kernel.org
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Gal Pressman,
 Leon Romanovsky, Dragos Tatulea
Subject: [net-next 02/11] net/mlx5e: SHAMPO: Reorganize mlx5_rq_shampo_alloc
Date: Thu, 16 Jan 2025 13:55:20 -0800
Message-ID: <20250116215530.158886-3-saeed@kernel.org>
In-Reply-To: <20250116215530.158886-1-saeed@kernel.org>
References: <20250116215530.158886-1-saeed@kernel.org>

From: Saeed Mahameed

Drop the redundant SHAMPO structure alloc/free functions. Gather
together the function calls pertaining to header split info, and pass
headers per WQE (hd_per_wqe) as a parameter to those functions, to avoid
future use-before-initialization mistakes. Allocate the HW GRO related
info outside of the header related info scope.
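For illustration, the refactoring pattern this patch applies, as a
minimal userspace C sketch (hypothetical names such as hdr_info_alloc;
this is not mlx5 code): the caller computes the count once and threads
it down as a parameter, so the helper can never read a field that
nobody has initialized yet.

#include <stdio.h>
#include <stdlib.h>

struct hdr_info {
	unsigned int n;		/* plays the role of hd_per_wq */
	unsigned long *bitmap;
};

/* 'n' is an explicit parameter and is stored here, at the single
 * point where the info struct is initialized. */
static int hdr_info_alloc(struct hdr_info *info, unsigned int n)
{
	info->n = n;
	info->bitmap = calloc(n / (8 * sizeof(long)) + 1, sizeof(long));
	return info->bitmap ? 0 : -1;
}

int main(void)
{
	struct hdr_info info;
	unsigned int n = 128;	/* computed once by the caller */

	if (hdr_info_alloc(&info, n))
		return 1;
	printf("allocated bitmap for %u headers\n", info.n);
	free(info.bitmap);
	return 0;
}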
Signed-off-by: Saeed Mahameed
Reviewed-by: Dragos Tatulea
Reviewed-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |   1 -
 .../net/ethernet/mellanox/mlx5/core/en_main.c | 132 +++++++++---------
 2 files changed, 63 insertions(+), 70 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 979fc56205e1..66c93816803e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -628,7 +628,6 @@ struct mlx5e_shampo_hd {
 	struct mlx5e_frag_page *pages;
 	u32 hd_per_wq;
 	u16 hd_per_wqe;
-	u16 pages_per_wq;
 	unsigned long *bitmap;
 	u16 pi;
 	u16 ci;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index bd41b75d246e..c687c926cba3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -330,47 +330,6 @@ static inline void mlx5e_build_umr_wqe(struct mlx5e_rq *rq,
 	ucseg->mkey_mask     = cpu_to_be64(MLX5_MKEY_MASK_FREE);
 }

-static int mlx5e_rq_shampo_hd_alloc(struct mlx5e_rq *rq, int node)
-{
-	rq->mpwqe.shampo = kvzalloc_node(sizeof(*rq->mpwqe.shampo),
-					 GFP_KERNEL, node);
-	if (!rq->mpwqe.shampo)
-		return -ENOMEM;
-	return 0;
-}
-
-static void mlx5e_rq_shampo_hd_free(struct mlx5e_rq *rq)
-{
-	kvfree(rq->mpwqe.shampo);
-}
-
-static int mlx5e_rq_shampo_hd_info_alloc(struct mlx5e_rq *rq, int node)
-{
-	struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
-
-	shampo->bitmap = bitmap_zalloc_node(shampo->hd_per_wq, GFP_KERNEL,
-					    node);
-	shampo->pages = kvzalloc_node(array_size(shampo->hd_per_wq,
-						 sizeof(*shampo->pages)),
-				      GFP_KERNEL, node);
-	if (!shampo->bitmap || !shampo->pages)
-		goto err_nomem;
-
-	return 0;
-
-err_nomem:
-	kvfree(shampo->bitmap);
-	kvfree(shampo->pages);
-
-	return -ENOMEM;
-}
-
-static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq)
-{
-	kvfree(rq->mpwqe.shampo->bitmap);
-	kvfree(rq->mpwqe.shampo->pages);
-}
-
 static int mlx5e_rq_alloc_mpwqe_info(struct mlx5e_rq *rq, int node)
 {
 	int wq_sz = mlx5_wq_ll_get_size(&rq->mpwqe.wq);
@@ -581,19 +540,18 @@ static int mlx5e_create_rq_umr_mkey(struct mlx5_core_dev *mdev, struct mlx5e_rq
 }

 static int mlx5e_create_rq_hd_umr_mkey(struct mlx5_core_dev *mdev,
-				       struct mlx5e_rq *rq)
+				       u16 hd_per_wq, u32 *umr_mkey)
 {
 	u32 max_ksm_size = BIT(MLX5_CAP_GEN(mdev, log_max_klm_list_size));

-	if (max_ksm_size < rq->mpwqe.shampo->hd_per_wq) {
+	if (max_ksm_size < hd_per_wq) {
 		mlx5_core_err(mdev, "max ksm list size 0x%x is smaller than shampo header buffer list size 0x%x\n",
-			      max_ksm_size, rq->mpwqe.shampo->hd_per_wq);
+			      max_ksm_size, hd_per_wq);
 		return -EINVAL;
 	}
-
-	return mlx5e_create_umr_ksm_mkey(mdev, rq->mpwqe.shampo->hd_per_wq,
+	return mlx5e_create_umr_ksm_mkey(mdev, hd_per_wq,
 					 MLX5E_SHAMPO_LOG_HEADER_ENTRY_SIZE,
-					 &rq->mpwqe.shampo->mkey);
+					 umr_mkey);
 }

 static void mlx5e_init_frags_partition(struct mlx5e_rq *rq)
@@ -755,6 +713,33 @@ static int mlx5e_init_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *param
 			    xdp_frag_size);
 }

+static int mlx5e_rq_shampo_hd_info_alloc(struct mlx5e_rq *rq, u16 hd_per_wq,
+					 int node)
+{
+	struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
+
+	shampo->hd_per_wq = hd_per_wq;
+
+	shampo->bitmap = bitmap_zalloc_node(hd_per_wq, GFP_KERNEL, node);
+	shampo->pages = kvzalloc_node(array_size(hd_per_wq, sizeof(*shampo->pages)),
+				      GFP_KERNEL, node);
+	if (!shampo->bitmap || !shampo->pages)
+		goto err_nomem;
+
+	return 0;
+
+err_nomem:
+	kvfree(shampo->pages);
+	bitmap_free(shampo->bitmap);
+
+	return -ENOMEM;
+}
+
+static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq)
+{
+	kvfree(rq->mpwqe.shampo->pages);
+	bitmap_free(rq->mpwqe.shampo->bitmap);
+}
+
 static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
 				struct mlx5e_params *params,
 				struct mlx5e_rq_param *rqp,
@@ -762,42 +747,51 @@ static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
 				u32 *pool_size,
 				int node)
 {
+	void *wqc = MLX5_ADDR_OF(rqc, rqp->rqc, wq);
+	u16 hd_per_wq;
+	int wq_size;
 	int err;

 	if (!test_bit(MLX5E_RQ_STATE_SHAMPO, &rq->state))
 		return 0;

-	err = mlx5e_rq_shampo_hd_alloc(rq, node);
-	if (err)
-		goto out;
-	rq->mpwqe.shampo->hd_per_wq =
-		mlx5e_shampo_hd_per_wq(mdev, params, rqp);
-	err = mlx5e_create_rq_hd_umr_mkey(mdev, rq);
+	rq->mpwqe.shampo = kvzalloc_node(sizeof(*rq->mpwqe.shampo),
+					 GFP_KERNEL, node);
+	if (!rq->mpwqe.shampo)
+		return -ENOMEM;
+
+	/* split headers data structures */
+	hd_per_wq = mlx5e_shampo_hd_per_wq(mdev, params, rqp);
+	err = mlx5e_rq_shampo_hd_info_alloc(rq, hd_per_wq, node);
 	if (err)
-		goto err_shampo_hd;
-	err = mlx5e_rq_shampo_hd_info_alloc(rq, node);
+		goto err_shampo_hd_info_alloc;
+
+	err = mlx5e_create_rq_hd_umr_mkey(mdev, hd_per_wq, &rq->mpwqe.shampo->mkey);
 	if (err)
-		goto err_shampo_info;
+		goto err_umr_mkey;
+
+	rq->mpwqe.shampo->key = cpu_to_be32(rq->mpwqe.shampo->mkey);
+	rq->mpwqe.shampo->hd_per_wqe =
+		mlx5e_shampo_hd_per_wqe(mdev, params, rqp);
+	wq_size = BIT(MLX5_GET(wq, wqc, log_wq_sz));
+	*pool_size += (rq->mpwqe.shampo->hd_per_wqe * wq_size) /
+		      MLX5E_SHAMPO_WQ_HEADER_PER_PAGE;
+
+	/* gro only data structures */
 	rq->hw_gro_data = kvzalloc_node(sizeof(*rq->hw_gro_data), GFP_KERNEL, node);
 	if (!rq->hw_gro_data) {
 		err = -ENOMEM;
 		goto err_hw_gro_data;
 	}
-	rq->mpwqe.shampo->key =
-		cpu_to_be32(rq->mpwqe.shampo->mkey);
-	rq->mpwqe.shampo->hd_per_wqe =
-		mlx5e_shampo_hd_per_wqe(mdev, params, rqp);
-	rq->mpwqe.shampo->pages_per_wq =
-		rq->mpwqe.shampo->hd_per_wq / MLX5E_SHAMPO_WQ_HEADER_PER_PAGE;
-	*pool_size += rq->mpwqe.shampo->pages_per_wq;
+
 	return 0;

 err_hw_gro_data:
-	mlx5e_rq_shampo_hd_info_free(rq);
-err_shampo_info:
 	mlx5_core_destroy_mkey(mdev, rq->mpwqe.shampo->mkey);
-err_shampo_hd:
-	mlx5e_rq_shampo_hd_free(rq);
-out:
+err_umr_mkey:
+	mlx5e_rq_shampo_hd_info_free(rq);
+err_shampo_hd_info_alloc:
+	kvfree(rq->mpwqe.shampo);
 	return err;
 }

@@ -809,7 +803,7 @@ static void mlx5e_rq_free_shampo(struct mlx5e_rq *rq)
 	kvfree(rq->hw_gro_data);
 	mlx5e_rq_shampo_hd_info_free(rq);
 	mlx5_core_destroy_mkey(rq->mdev, rq->mpwqe.shampo->mkey);
-	mlx5e_rq_shampo_hd_free(rq);
+	kvfree(rq->mpwqe.shampo);
 }

 static int mlx5e_alloc_rq(struct mlx5e_params *params,
From patchwork Thu Jan 16 21:55:21 2025
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 13942356
X-Patchwork-Delegate: kuba@kernel.org
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Gal Pressman,
 Leon Romanovsky, Dragos Tatulea
Subject: [net-next 03/11] net/mlx5e: SHAMPO: Remove redundant params
Date: Thu, 16 Jan 2025 13:55:21 -0800
Message-ID: <20250116215530.158886-4-saeed@kernel.org>
In-Reply-To: <20250116215530.158886-1-saeed@kernel.org>
References: <20250116215530.158886-1-saeed@kernel.org>

From: Saeed Mahameed

Two SHAMPO params are static and always the same; remove them from the
global mlx5e_params struct.
Signed-off-by: Saeed Mahameed
Reviewed-by: Dragos Tatulea
Reviewed-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h        | 4 ----
 drivers/net/ethernet/mellanox/mlx5/core/en/params.c | 4 ++--
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c   | 4 ----
 3 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 66c93816803e..18f8c00f4d7f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -274,10 +274,6 @@ enum packet_merge {
 struct mlx5e_packet_merge_param {
 	enum packet_merge type;
 	u32 timeout;
-	struct {
-		u8 match_criteria_type;
-		u8 alignment_granularity;
-	} shampo;
 };

 struct mlx5e_params {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 64b62ed17b07..377363eb1faa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -930,9 +930,9 @@ int mlx5e_build_rq_param(struct mlx5_core_dev *mdev,
 			MLX5_SET(rqc, rqc, reservation_timeout,
 				 mlx5e_choose_lro_timeout(mdev, MLX5E_DEFAULT_SHAMPO_TIMEOUT));
 			MLX5_SET(rqc, rqc, shampo_match_criteria_type,
-				 params->packet_merge.shampo.match_criteria_type);
+				 MLX5_RQC_SHAMPO_MATCH_CRITERIA_TYPE_EXTENDED);
 			MLX5_SET(rqc, rqc, shampo_no_match_alignment_granularity,
-				 params->packet_merge.shampo.alignment_granularity);
+				 MLX5_RQC_SHAMPO_NO_MATCH_ALIGNMENT_GRANULARITY_STRIDE);
 		}
 		break;
 	}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index c687c926cba3..73947df91a33 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -4047,10 +4047,6 @@ static int set_feature_hw_gro(struct net_device *netdev, bool enable)

 	if (enable) {
 		new_params.packet_merge.type = MLX5E_PACKET_MERGE_SHAMPO;
-		new_params.packet_merge.shampo.match_criteria_type =
-			MLX5_RQC_SHAMPO_MATCH_CRITERIA_TYPE_EXTENDED;
-		new_params.packet_merge.shampo.alignment_granularity =
-			MLX5_RQC_SHAMPO_NO_MATCH_ALIGNMENT_GRANULARITY_STRIDE;
 	} else if (new_params.packet_merge.type == MLX5E_PACKET_MERGE_SHAMPO) {
 		new_params.packet_merge.type = MLX5E_PACKET_MERGE_NONE;
 	} else {
From patchwork Thu Jan 16 21:55:22 2025
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 13942357
X-Patchwork-Delegate: kuba@kernel.org
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Gal Pressman,
 Leon Romanovsky, Dragos Tatulea
Subject: [net-next 04/11] net/mlx5e: SHAMPO: Improve hw gro capability checking
Date: Thu, 16 Jan 2025 13:55:22 -0800
Message-ID: <20250116215530.158886-5-saeed@kernel.org>
In-Reply-To: <20250116215530.158886-1-saeed@kernel.org>
References: <20250116215530.158886-1-saeed@kernel.org>

From: Saeed Mahameed

Add a missing HW capability check, and declare the feature in
netdev->vlan_features, similar to other features in
mlx5e_build_nic_netdev. No functional change here, as all features that
are disabled by default are explicitly disabled at the bottom of the
function.
Signed-off-by: Saeed Mahameed
Reviewed-by: Dragos Tatulea
Reviewed-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 73947df91a33..66d1b3fe3134 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -77,7 +77,8 @@
 static bool mlx5e_hw_gro_supported(struct mlx5_core_dev *mdev)
 {
-	if (!MLX5_CAP_GEN(mdev, shampo))
+	if (!MLX5_CAP_GEN(mdev, shampo) ||
+	    !MLX5_CAP_SHAMPO(mdev, shampo_header_split_data_merge))
 		return false;

 	/* Our HW-GRO implementation relies on "KSM Mkey" for
@@ -5508,17 +5509,17 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
 					   MLX5E_MPWRQ_UMR_MODE_ALIGNED))
 		netdev->vlan_features    |= NETIF_F_LRO;

+	if (mlx5e_hw_gro_supported(mdev) &&
+	    mlx5e_check_fragmented_striding_rq_cap(mdev, PAGE_SHIFT,
+						   MLX5E_MPWRQ_UMR_MODE_ALIGNED))
+		netdev->vlan_features |= NETIF_F_GRO_HW;
+
 	netdev->hw_features       = netdev->vlan_features;
 	netdev->hw_features      |= NETIF_F_HW_VLAN_CTAG_TX;
 	netdev->hw_features      |= NETIF_F_HW_VLAN_CTAG_RX;
 	netdev->hw_features      |= NETIF_F_HW_VLAN_CTAG_FILTER;
 	netdev->hw_features      |= NETIF_F_HW_VLAN_STAG_TX;

-	if (mlx5e_hw_gro_supported(mdev) &&
-	    mlx5e_check_fragmented_striding_rq_cap(mdev, PAGE_SHIFT,
-						   MLX5E_MPWRQ_UMR_MODE_ALIGNED))
-		netdev->hw_features |= NETIF_F_GRO_HW;
-
 	if (mlx5e_tunnel_any_tx_proto_supported(mdev)) {
 		netdev->hw_enc_features |= NETIF_F_HW_CSUM;
 		netdev->hw_enc_features |= NETIF_F_TSO;

From patchwork Thu Jan 16 21:55:23 2025
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 13942358
X-Patchwork-Delegate: kuba@kernel.org
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Gal Pressman,
 Leon Romanovsky, Dragos Tatulea
Subject: [net-next 05/11] net/mlx5e: SHAMPO: Separate pool for headers
Date: Thu, 16 Jan 2025 13:55:23 -0800
Message-ID: <20250116215530.158886-6-saeed@kernel.org>
In-Reply-To: <20250116215530.158886-1-saeed@kernel.org>
References: <20250116215530.158886-1-saeed@kernel.org>

From: Saeed Mahameed

Allocate a separate page pool for headers when SHAMPO is enabled. This
will be useful for adding support for zero-copy page pools, which have
to be different from the headers page pool.
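A condensed kernel-context sketch of the header pool creation done in
the hunk below; create_hd_pool is a hypothetical wrapper, all the
page_pool_params fields come from the patch itself, and the unwinding
that the patch performs on failure is trimmed here.

#include <net/page_pool/helpers.h>

static struct page_pool *create_hd_pool(struct mlx5e_rq *rq,
					u32 pool_size, int node)
{
	struct page_pool_params pp_params = {
		.order		= 0,	/* headers fit in single pages */
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.pool_size	= pool_size,
		.nid		= node,	/* NUMA node of the channel */
		.dev		= rq->pdev,
		.napi		= rq->cq.napi,
		.netdev		= rq->netdev,
		.dma_dir	= rq->buff.map_dir,
		.max_len	= PAGE_SIZE,
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}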
Signed-off-by: Saeed Mahameed
Reviewed-by: Dragos Tatulea
Reviewed-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h      |  3 ++
 .../net/ethernet/mellanox/mlx5/core/en_main.c     | 37 +++++++++++++++---
 .../net/ethernet/mellanox/mlx5/core/en_rx.c       | 35 +++++++++---------
 3 files changed, 52 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 18f8c00f4d7f..29b9bcecd125 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -706,7 +706,10 @@ struct mlx5e_rq {
 	struct bpf_prog __rcu *xdp_prog;
 	struct mlx5e_xdpsq    *xdpsq;
 	DECLARE_BITMAP(flags, 8);
+
+	/* page pools */
 	struct page_pool      *page_pool;
+	struct page_pool      *hd_page_pool;

 	/* AF_XDP zero-copy */
 	struct xsk_buff_pool  *xsk_pool;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 66d1b3fe3134..02c9737868b3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -745,12 +745,10 @@ static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
 				struct mlx5e_params *params,
 				struct mlx5e_rq_param *rqp,
 				struct mlx5e_rq *rq,
-				u32 *pool_size,
 				int node)
 {
 	void *wqc = MLX5_ADDR_OF(rqc, rqp->rqc, wq);
 	u16 hd_per_wq;
-	int wq_size;
 	int err;

 	if (!test_bit(MLX5E_RQ_STATE_SHAMPO, &rq->state))
@@ -774,9 +772,33 @@ static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
 	rq->mpwqe.shampo->key = cpu_to_be32(rq->mpwqe.shampo->mkey);
 	rq->mpwqe.shampo->hd_per_wqe =
 		mlx5e_shampo_hd_per_wqe(mdev, params, rqp);
-	wq_size = BIT(MLX5_GET(wq, wqc, log_wq_sz));
-	*pool_size += (rq->mpwqe.shampo->hd_per_wqe * wq_size) /
-		      MLX5E_SHAMPO_WQ_HEADER_PER_PAGE;
+
+	/* separate page pool for shampo headers */
+	{
+		int wq_size = BIT(MLX5_GET(wq, wqc, log_wq_sz));
+		struct page_pool_params pp_params = { };
+		u32 pool_size;
+
+		pool_size = (rq->mpwqe.shampo->hd_per_wqe * wq_size) /
+			     MLX5E_SHAMPO_WQ_HEADER_PER_PAGE;
+
+		pp_params.order = 0;
+		pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
+		pp_params.pool_size = pool_size;
+		pp_params.nid = node;
+		pp_params.dev = rq->pdev;
+		pp_params.napi = rq->cq.napi;
+		pp_params.netdev = rq->netdev;
+		pp_params.dma_dir = rq->buff.map_dir;
+		pp_params.max_len = PAGE_SIZE;
+
+		rq->hd_page_pool = page_pool_create(&pp_params);
+		if (IS_ERR(rq->hd_page_pool)) {
+			err = PTR_ERR(rq->hd_page_pool);
+			rq->hd_page_pool = NULL;
+			goto err_hds_page_pool;
+		}
+	}

 	/* gro only data structures */
 	rq->hw_gro_data = kvzalloc_node(sizeof(*rq->hw_gro_data), GFP_KERNEL, node);
@@ -788,6 +810,8 @@ static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
 	return 0;

 err_hw_gro_data:
+	page_pool_destroy(rq->hd_page_pool);
+err_hds_page_pool:
 	mlx5_core_destroy_mkey(mdev, rq->mpwqe.shampo->mkey);
 err_umr_mkey:
 	mlx5e_rq_shampo_hd_info_free(rq);
@@ -802,6 +826,7 @@ static void mlx5e_rq_free_shampo(struct mlx5e_rq *rq)
 		return;

 	kvfree(rq->hw_gro_data);
+	page_pool_destroy(rq->hd_page_pool);
 	mlx5e_rq_shampo_hd_info_free(rq);
 	mlx5_core_destroy_mkey(rq->mdev, rq->mpwqe.shampo->mkey);
 	kvfree(rq->mpwqe.shampo);
@@ -881,7 +906,7 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
 	if (err)
 		goto err_rq_mkey;

-	err = mlx5_rq_shampo_alloc(mdev, params, rqp, rq, &pool_size, node);
+	err = mlx5_rq_shampo_alloc(mdev, params, rqp, rq, node);
 	if (err)
 		goto err_free_mpwqe_info;

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 1963bc5adb18..df561251b30b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -273,12 +273,12 @@ static inline u32 mlx5e_decompress_cqes_start(struct mlx5e_rq *rq,

 #define MLX5E_PAGECNT_BIAS_MAX (PAGE_SIZE / 64)

-static int mlx5e_page_alloc_fragmented(struct mlx5e_rq *rq,
+static int mlx5e_page_alloc_fragmented(struct page_pool *pool,
 				       struct mlx5e_frag_page *frag_page)
 {
 	struct page *page;

-	page = page_pool_dev_alloc_pages(rq->page_pool);
+	page = page_pool_dev_alloc_pages(pool);
 	if (unlikely(!page))
 		return -ENOMEM;

@@ -292,14 +292,14 @@ static int mlx5e_page_alloc_fragmented(struct mlx5e_rq *rq,
 	return 0;
 }

-static void mlx5e_page_release_fragmented(struct mlx5e_rq *rq,
+static void mlx5e_page_release_fragmented(struct page_pool *pool,
 					  struct mlx5e_frag_page *frag_page)
 {
 	u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags;
 	struct page *page = frag_page->page;

 	if (page_pool_unref_page(page, drain_count) == 0)
-		page_pool_put_unrefed_page(rq->page_pool, page, -1, true);
+		page_pool_put_unrefed_page(pool, page, -1, true);
 }

 static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
@@ -313,7 +313,7 @@ static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
 		 * offset) should just use the new one without replenishing again
 		 * by themselves.
 		 */
-		err = mlx5e_page_alloc_fragmented(rq, frag->frag_page);
+		err = mlx5e_page_alloc_fragmented(rq->page_pool, frag->frag_page);

 	return err;
 }
@@ -332,7 +332,7 @@ static inline void mlx5e_put_rx_frag(struct mlx5e_rq *rq,
 				     struct mlx5e_wqe_frag_info *frag)
 {
 	if (mlx5e_frag_can_release(frag))
-		mlx5e_page_release_fragmented(rq, frag->frag_page);
+		mlx5e_page_release_fragmented(rq->page_pool, frag->frag_page);
 }

 static inline struct mlx5e_wqe_frag_info *get_frag(struct mlx5e_rq *rq, u16 ix)
@@ -584,7 +584,7 @@ mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi)
 			struct mlx5e_frag_page *frag_page;

 			frag_page = &wi->alloc_units.frag_pages[i];
-			mlx5e_page_release_fragmented(rq, frag_page);
+			mlx5e_page_release_fragmented(rq->page_pool, frag_page);
 		}
 	}
 }
@@ -679,11 +679,10 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 		struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, index);
 		u64 addr;

-		err = mlx5e_page_alloc_fragmented(rq, frag_page);
+		err = mlx5e_page_alloc_fragmented(rq->hd_page_pool, frag_page);
 		if (unlikely(err))
 			goto err_unmap;
-
 		addr = page_pool_get_dma_addr(frag_page->page);

 		for (int j = 0; j < MLX5E_SHAMPO_WQ_HEADER_PER_PAGE; j++) {
@@ -715,7 +714,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 		if (!header_offset) {
 			struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, index);

-			mlx5e_page_release_fragmented(rq, frag_page);
+			mlx5e_page_release_fragmented(rq->hd_page_pool, frag_page);
 		}
 	}

@@ -791,7 +790,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 	for (i = 0; i < rq->mpwqe.pages_per_wqe; i++, frag_page++) {
 		dma_addr_t addr;

-		err = mlx5e_page_alloc_fragmented(rq, frag_page);
+		err = mlx5e_page_alloc_fragmented(rq->page_pool, frag_page);
 		if (unlikely(err))
 			goto err_unmap;
 		addr = page_pool_get_dma_addr(frag_page->page);
@@ -836,7 +835,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 err_unmap:
 	while (--i >= 0) {
 		frag_page--;
-		mlx5e_page_release_fragmented(rq, frag_page);
+		mlx5e_page_release_fragmented(rq->page_pool, frag_page);
 	}

 	bitmap_fill(wi->skip_release_bitmap, rq->mpwqe.pages_per_wqe);
@@ -855,7 +854,7 @@ mlx5e_free_rx_shampo_hd_entry(struct mlx5e_rq *rq, u16 header_index)
 	if (((header_index + 1) & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) == 0) {
 		struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, header_index);

-		mlx5e_page_release_fragmented(rq, frag_page);
+		mlx5e_page_release_fragmented(rq->hd_page_pool, frag_page);
 	}
 	clear_bit(header_index, shampo->bitmap);
 }
@@ -1100,6 +1099,8 @@ INDIRECT_CALLABLE_SCOPE bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq)
 	if (rq->page_pool)
 		page_pool_nid_changed(rq->page_pool, numa_mem_id());
+	if (rq->hd_page_pool)
+		page_pool_nid_changed(rq->hd_page_pool, numa_mem_id());

 	head = rq->mpwqe.actual_wq_head;
 	i = missing;
@@ -2001,7 +2002,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 	if (prog) {
 		/* area for bpf_xdp_[store|load]_bytes */
 		net_prefetchw(page_address(frag_page->page) + frag_offset);
-		if (unlikely(mlx5e_page_alloc_fragmented(rq, &wi->linear_page))) {
+		if (unlikely(mlx5e_page_alloc_fragmented(rq->page_pool, &wi->linear_page))) {
 			rq->stats->buff_alloc_err++;
 			return NULL;
 		}
@@ -2063,7 +2064,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 				wi->linear_page.frags++;
 			}
-			mlx5e_page_release_fragmented(rq, &wi->linear_page);
+			mlx5e_page_release_fragmented(rq->page_pool, &wi->linear_page);
 			return NULL; /* page/packet was consumed by XDP */
 		}
@@ -2072,13 +2073,13 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 				 mxbuf.xdp.data - mxbuf.xdp.data_hard_start, 0,
 				 mxbuf.xdp.data - mxbuf.xdp.data_meta);
 		if (unlikely(!skb)) {
-			mlx5e_page_release_fragmented(rq, &wi->linear_page);
+			mlx5e_page_release_fragmented(rq->page_pool, &wi->linear_page);
 			return NULL;
 		}

 		skb_mark_for_recycle(skb);
 		wi->linear_page.frags++;
-		mlx5e_page_release_fragmented(rq, &wi->linear_page);
+		mlx5e_page_release_fragmented(rq->page_pool, &wi->linear_page);

 		if (xdp_buff_has_frags(&mxbuf.xdp)) {
 			struct mlx5e_frag_page *pagep;
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Gal Pressman , Leon Romanovsky , Dragos Tatulea Subject: [net-next 06/11] net/mlx5e: SHAMPO: Headers page pool stats Date: Thu, 16 Jan 2025 13:55:24 -0800 Message-ID: <20250116215530.158886-7-saeed@kernel.org> X-Mailer: git-send-email 2.48.0 In-Reply-To: <20250116215530.158886-1-saeed@kernel.org> References: <20250116215530.158886-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Saeed Mahameed Expose the stats of the new headers page pool. Signed-off-by: Saeed Mahameed Reviewed-by: Dragos Tatulea Reviewed-by: Tariq Toukan --- .../ethernet/mellanox/mlx5/core/en_stats.c | 53 +++++++++++++++++++ .../ethernet/mellanox/mlx5/core/en_stats.h | 24 +++++++++ 2 files changed, 77 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c index 611ec4b6f370..a34b829a810b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c @@ -208,6 +208,18 @@ static const struct counter_desc sw_stats_desc[] = { { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_ring) }, { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_ring_full) }, { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_released_ref) }, + + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_fast) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_slow) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_slow_high_order) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_empty) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_refill) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_waive) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_recycle_cached) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_recycle_cache_full) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_recycle_ring) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_recycle_ring_full) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_recycle_released_ref) }, #endif #ifdef CONFIG_MLX5_EN_TLS { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_packets) }, @@ -389,6 +401,18 @@ static void mlx5e_stats_grp_sw_update_stats_rq_stats(struct mlx5e_sw_stats *s, s->rx_pp_recycle_ring += rq_stats->pp_recycle_ring; s->rx_pp_recycle_ring_full += rq_stats->pp_recycle_ring_full; s->rx_pp_recycle_released_ref += rq_stats->pp_recycle_released_ref; + + s->rx_pp_hd_alloc_fast += rq_stats->pp_hd_alloc_fast; + s->rx_pp_hd_alloc_slow += rq_stats->pp_hd_alloc_slow; + s->rx_pp_hd_alloc_empty += rq_stats->pp_hd_alloc_empty; + s->rx_pp_hd_alloc_refill += rq_stats->pp_hd_alloc_refill; + s->rx_pp_hd_alloc_waive += rq_stats->pp_hd_alloc_waive; + s->rx_pp_hd_alloc_slow_high_order += rq_stats->pp_hd_alloc_slow_high_order; + s->rx_pp_hd_recycle_cached += rq_stats->pp_hd_recycle_cached; + s->rx_pp_hd_recycle_cache_full += rq_stats->pp_hd_recycle_cache_full; + s->rx_pp_hd_recycle_ring += rq_stats->pp_hd_recycle_ring; + s->rx_pp_hd_recycle_ring_full += rq_stats->pp_hd_recycle_ring_full; + s->rx_pp_hd_recycle_released_ref += rq_stats->pp_hd_recycle_released_ref; #endif #ifdef CONFIG_MLX5_EN_TLS s->rx_tls_decrypted_packets += rq_stats->tls_decrypted_packets; @@ -518,6 +542,23 @@ static void 
Signed-off-by: Saeed Mahameed
Reviewed-by: Dragos Tatulea
Reviewed-by: Tariq Toukan
---
 .../ethernet/mellanox/mlx5/core/en_stats.c | 53 +++++++++++++++++++
 .../ethernet/mellanox/mlx5/core/en_stats.h | 24 +++++++++
 2 files changed, 77 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
index 611ec4b6f370..a34b829a810b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
@@ -208,6 +208,18 @@ static const struct counter_desc sw_stats_desc[] = {
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_ring) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_ring_full) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_released_ref) },
+
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_fast) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_slow) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_slow_high_order) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_empty) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_refill) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_alloc_waive) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_recycle_cached) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_recycle_cache_full) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_recycle_ring) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_recycle_ring_full) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_hd_recycle_released_ref) },
 #endif
 #ifdef CONFIG_MLX5_EN_TLS
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_packets) },
@@ -389,6 +401,18 @@ static void mlx5e_stats_grp_sw_update_stats_rq_stats(struct mlx5e_sw_stats *s,
 	s->rx_pp_recycle_ring += rq_stats->pp_recycle_ring;
 	s->rx_pp_recycle_ring_full += rq_stats->pp_recycle_ring_full;
 	s->rx_pp_recycle_released_ref += rq_stats->pp_recycle_released_ref;
+
+	s->rx_pp_hd_alloc_fast += rq_stats->pp_hd_alloc_fast;
+	s->rx_pp_hd_alloc_slow += rq_stats->pp_hd_alloc_slow;
+	s->rx_pp_hd_alloc_empty += rq_stats->pp_hd_alloc_empty;
+	s->rx_pp_hd_alloc_refill += rq_stats->pp_hd_alloc_refill;
+	s->rx_pp_hd_alloc_waive += rq_stats->pp_hd_alloc_waive;
+	s->rx_pp_hd_alloc_slow_high_order += rq_stats->pp_hd_alloc_slow_high_order;
+	s->rx_pp_hd_recycle_cached += rq_stats->pp_hd_recycle_cached;
+	s->rx_pp_hd_recycle_cache_full += rq_stats->pp_hd_recycle_cache_full;
+	s->rx_pp_hd_recycle_ring += rq_stats->pp_hd_recycle_ring;
+	s->rx_pp_hd_recycle_ring_full += rq_stats->pp_hd_recycle_ring_full;
+	s->rx_pp_hd_recycle_released_ref += rq_stats->pp_hd_recycle_released_ref;
 #endif
 #ifdef CONFIG_MLX5_EN_TLS
 	s->rx_tls_decrypted_packets += rq_stats->tls_decrypted_packets;
@@ -518,6 +542,23 @@ static void mlx5e_stats_update_stats_rq_page_pool(struct mlx5e_channel *c)
 	rq_stats->pp_recycle_ring = stats.recycle_stats.ring;
 	rq_stats->pp_recycle_ring_full = stats.recycle_stats.ring_full;
 	rq_stats->pp_recycle_released_ref = stats.recycle_stats.released_refcnt;
+
+	pool = c->rq.hd_page_pool;
+	if (!pool || !page_pool_get_stats(pool, &stats))
+		return;
+
+	rq_stats->pp_hd_alloc_fast = stats.alloc_stats.fast;
+	rq_stats->pp_hd_alloc_slow = stats.alloc_stats.slow;
+	rq_stats->pp_hd_alloc_slow_high_order = stats.alloc_stats.slow_high_order;
+	rq_stats->pp_hd_alloc_empty = stats.alloc_stats.empty;
+	rq_stats->pp_hd_alloc_waive = stats.alloc_stats.waive;
+	rq_stats->pp_hd_alloc_refill = stats.alloc_stats.refill;
+
+	rq_stats->pp_hd_recycle_cached = stats.recycle_stats.cached;
+	rq_stats->pp_hd_recycle_cache_full = stats.recycle_stats.cache_full;
+	rq_stats->pp_hd_recycle_ring = stats.recycle_stats.ring;
+	rq_stats->pp_hd_recycle_ring_full = stats.recycle_stats.ring_full;
+	rq_stats->pp_hd_recycle_released_ref = stats.recycle_stats.released_refcnt;
 }
 #else
 static void mlx5e_stats_update_stats_rq_page_pool(struct mlx5e_channel *c)
@@ -2098,6 +2139,18 @@ static const struct counter_desc rq_stats_desc[] = {
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_ring) },
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_ring_full) },
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_released_ref) },
+
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_hd_alloc_fast) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_hd_alloc_slow) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_hd_alloc_slow_high_order) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_hd_alloc_empty) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_hd_alloc_refill) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_hd_alloc_waive) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_hd_recycle_cached) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_hd_recycle_cache_full) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_hd_recycle_ring) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_hd_recycle_ring_full) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_hd_recycle_released_ref) },
 #endif
 #ifdef CONFIG_MLX5_EN_TLS
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_packets) },
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
index 5961c569cfe0..d69071e20083 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
@@ -227,6 +227,18 @@ struct mlx5e_sw_stats {
 	u64 rx_pp_recycle_ring;
 	u64 rx_pp_recycle_ring_full;
 	u64 rx_pp_recycle_released_ref;
+
+	u64 rx_pp_hd_alloc_fast;
+	u64 rx_pp_hd_alloc_slow;
+	u64 rx_pp_hd_alloc_slow_high_order;
+	u64 rx_pp_hd_alloc_empty;
+	u64 rx_pp_hd_alloc_refill;
+	u64 rx_pp_hd_alloc_waive;
+	u64 rx_pp_hd_recycle_cached;
+	u64 rx_pp_hd_recycle_cache_full;
+	u64 rx_pp_hd_recycle_ring;
+	u64 rx_pp_hd_recycle_ring_full;
+	u64 rx_pp_hd_recycle_released_ref;
 #endif
 #ifdef CONFIG_MLX5_EN_TLS
 	u64 tx_tls_encrypted_packets;
@@ -393,6 +405,18 @@ struct mlx5e_rq_stats {
 	u64 pp_recycle_ring;
 	u64 pp_recycle_ring_full;
 	u64 pp_recycle_released_ref;
+
+	u64 pp_hd_alloc_fast;
+	u64 pp_hd_alloc_slow;
+	u64 pp_hd_alloc_slow_high_order;
+	u64 pp_hd_alloc_empty;
+	u64 pp_hd_alloc_refill;
+	u64 pp_hd_alloc_waive;
+	u64 pp_hd_recycle_cached;
+	u64 pp_hd_recycle_cache_full;
+	u64 pp_hd_recycle_ring;
+	u64 pp_hd_recycle_ring_full;
+	u64 pp_hd_recycle_released_ref;
 #endif
 #ifdef CONFIG_MLX5_EN_TLS
 	u64 tls_decrypted_packets;

From patchwork Thu Jan 16 21:55:25 2025
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 13942360
X-Patchwork-Delegate: kuba@kernel.org
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Gal Pressman,
 Leon Romanovsky, Dragos Tatulea
Subject: [net-next 07/11] net/mlx5e: Convert over to netmem
Date: Thu, 16 Jan 2025 13:55:25 -0800
Message-ID: <20250116215530.158886-8-saeed@kernel.org>
In-Reply-To: <20250116215530.158886-1-saeed@kernel.org>
References: <20250116215530.158886-1-saeed@kernel.org>

From: Saeed Mahameed

struct mlx5e_frag_page holds the physical page itself. To naturally
support zero-copy page pools, remove the physical page reference from
mlx5 and replace it with a netmem_ref, to avoid internal handling in
mlx5 for net_iov backed pages. No performance degradation observed.
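A condensed kernel-context sketch of the netmem pattern the conversion
below establishes (demo_alloc_touch_free and DEMO_BIAS are hypothetical;
every page pool call is taken from the patch itself): the buffer is held
as an opaque netmem_ref, the DMA address is taken through the netmem
helper, and CPU access goes through netmem_address() only after checking
the netmem is page-backed.

#include <linux/string.h>
#include <net/netmem.h>
#include <net/page_pool/helpers.h>

#define DEMO_BIAS (PAGE_SIZE / 64)	/* mirrors MLX5E_PAGECNT_BIAS_MAX */

static int demo_alloc_touch_free(struct page_pool *pp, dma_addr_t *dma)
{
	netmem_ref netmem = page_pool_alloc_netmems(pp, GFP_ATOMIC | __GFP_NOWARN);

	if (unlikely(!netmem))
		return -ENOMEM;

	page_pool_fragment_netmem(netmem, DEMO_BIAS);

	/* the DMA address is valid for both page- and iov-backed netmem */
	*dma = page_pool_get_dma_addr_netmem(netmem);

	/* CPU access is only legal for page-backed netmem */
	if (!netmem_is_net_iov(netmem))
		memset(netmem_address(netmem), 0, 64);

	if (page_pool_unref_netmem(netmem, DEMO_BIAS) == 0)
		page_pool_put_unrefed_netmem(pp, netmem, -1, true);
	return 0;
}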
Signed-off-by: Saeed Mahameed
Reviewed-by: Dragos Tatulea
Reviewed-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h |  2 +-
 .../net/ethernet/mellanox/mlx5/core/en_rx.c  | 80 ++++++++++---------
 2 files changed, 43 insertions(+), 39 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 29b9bcecd125..8f4c21f88f78 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -543,7 +543,7 @@ struct mlx5e_icosq {
 } ____cacheline_aligned_in_smp;

 struct mlx5e_frag_page {
-	struct page *page;
+	netmem_ref netmem;
 	u16 frags;
 };

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index df561251b30b..b08c2ac10b67 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -273,33 +273,32 @@ static inline u32 mlx5e_decompress_cqes_start(struct mlx5e_rq *rq,

 #define MLX5E_PAGECNT_BIAS_MAX (PAGE_SIZE / 64)

-static int mlx5e_page_alloc_fragmented(struct page_pool *pool,
+static int mlx5e_page_alloc_fragmented(struct page_pool *pp,
 				       struct mlx5e_frag_page *frag_page)
 {
-	struct page *page;
+	netmem_ref netmem = page_pool_alloc_netmems(pp, GFP_ATOMIC | __GFP_NOWARN);

-	page = page_pool_dev_alloc_pages(pool);
-	if (unlikely(!page))
+	if (unlikely(!netmem))
 		return -ENOMEM;

-	page_pool_fragment_page(page, MLX5E_PAGECNT_BIAS_MAX);
+	page_pool_fragment_netmem(netmem, MLX5E_PAGECNT_BIAS_MAX);

 	*frag_page = (struct mlx5e_frag_page) {
-		.page	= page,
+		.netmem	= netmem,
 		.frags	= 0,
 	};

 	return 0;
 }

-static void mlx5e_page_release_fragmented(struct page_pool *pool,
+static void mlx5e_page_release_fragmented(struct page_pool *pp,
 					  struct mlx5e_frag_page *frag_page)
 {
 	u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags;
-	struct page *page = frag_page->page;
+	netmem_ref netmem = frag_page->netmem;

-	if (page_pool_unref_page(page, drain_count) == 0)
-		page_pool_put_unrefed_page(pool, page, -1, true);
+	if (page_pool_unref_netmem(netmem, drain_count) == 0)
+		page_pool_put_unrefed_netmem(pp, netmem, -1, true);
 }

 static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
@@ -358,7 +357,7 @@ static int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe_cyc *wqe,
 		frag->flags &= ~BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);

 		headroom = i == 0 ? rq->buff.headroom : 0;
-		addr = page_pool_get_dma_addr(frag->frag_page->page);
+		addr = page_pool_get_dma_addr_netmem(frag->frag_page->netmem);
 		wqe->data[i].addr = cpu_to_be64(addr + frag->offset + headroom);
 	}
@@ -499,9 +498,10 @@ mlx5e_add_skb_shared_info_frag(struct mlx5e_rq *rq, struct skb_shared_info *sinf
 			       struct xdp_buff *xdp, struct mlx5e_frag_page *frag_page,
 			       u32 frag_offset, u32 len)
 {
+	netmem_ref netmem = frag_page->netmem;
 	skb_frag_t *frag;

-	dma_addr_t addr = page_pool_get_dma_addr(frag_page->page);
+	dma_addr_t addr = page_pool_get_dma_addr_netmem(netmem);

 	dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len, rq->buff.map_dir);
 	if (!xdp_buff_has_frags(xdp)) {
@@ -514,9 +514,9 @@ mlx5e_add_skb_shared_info_frag(struct mlx5e_rq *rq, struct skb_shared_info *sinf
 	}

 	frag = &sinfo->frags[sinfo->nr_frags++];
-	skb_frag_fill_page_desc(frag, frag_page->page, frag_offset, len);
+	skb_frag_fill_netmem_desc(frag, netmem, frag_offset, len);

-	if (page_is_pfmemalloc(frag_page->page))
+	if (!netmem_is_net_iov(netmem) && page_is_pfmemalloc(netmem_to_page(netmem)))
 		xdp_buff_set_frag_pfmemalloc(xdp);
 	sinfo->xdp_frags_size += len;
 }
@@ -527,27 +527,29 @@ mlx5e_add_skb_frag(struct mlx5e_rq *rq, struct sk_buff *skb,
 		   u32 frag_offset, u32 len,
 		   unsigned int truesize)
 {
-	dma_addr_t addr = page_pool_get_dma_addr(frag_page->page);
+	dma_addr_t addr = page_pool_get_dma_addr_netmem(frag_page->netmem);
+	struct page *page = netmem_to_page(frag_page->netmem);
 	u8 next_frag = skb_shinfo(skb)->nr_frags;

 	dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len,
 				rq->buff.map_dir);

-	if (skb_can_coalesce(skb, next_frag, frag_page->page, frag_offset)) {
+	if (skb_can_coalesce(skb, next_frag, page, frag_offset)) {
 		skb_coalesce_rx_frag(skb, next_frag - 1, len, truesize);
-	} else {
-		frag_page->frags++;
-		skb_add_rx_frag(skb, next_frag, frag_page->page,
-				frag_offset, len, truesize);
+		return;
 	}
+
+	frag_page->frags++;
+	skb_add_rx_frag_netmem(skb, next_frag, frag_page->netmem,
+			       frag_offset, len, truesize);
 }

 static inline void
 mlx5e_copy_skb_header(struct mlx5e_rq *rq, struct sk_buff *skb,
-		      struct page *page, dma_addr_t addr,
+		      netmem_ref netmem, dma_addr_t addr,
 		      int offset_from, int dma_offset, u32 headlen)
 {
-	const void *from = page_address(page) + offset_from;
+	const void *from = netmem_address(netmem) + offset_from;
 	/* Aligning len to sizeof(long) optimizes memcpy performance */
 	unsigned int len = ALIGN(headlen, sizeof(long));

@@ -683,7 +685,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 		if (unlikely(err))
 			goto err_unmap;

-		addr = page_pool_get_dma_addr(frag_page->page);
+		addr = page_pool_get_dma_addr_netmem(frag_page->netmem);

 		for (int j = 0; j < MLX5E_SHAMPO_WQ_HEADER_PER_PAGE; j++) {
 			header_offset = mlx5e_shampo_hd_offset(index++);
@@ -793,7 +795,8 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 		err = mlx5e_page_alloc_fragmented(rq->page_pool, frag_page);
 		if (unlikely(err))
 			goto err_unmap;
-		addr = page_pool_get_dma_addr(frag_page->page);
+
+		addr = page_pool_get_dma_addr_netmem(frag_page->netmem);
 		umr_wqe->inline_mtts[i] = (struct mlx5_mtt) {
 			.ptag = cpu_to_be64(addr | MLX5_EN_WR),
 		};
@@ -1213,7 +1216,7 @@ static void *mlx5e_shampo_get_packet_hd(struct mlx5e_rq *rq, u16 header_index)
 	struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, header_index);
 	u16 head_offset = mlx5e_shampo_hd_offset(header_index) + rq->buff.headroom;

-	return page_address(frag_page->page) + head_offset;
+	return netmem_address(frag_page->netmem) + head_offset;
 }
 static void mlx5e_shampo_update_ipv4_udp_hdr(struct mlx5e_rq *rq, struct iphdr *ipv4)
@@ -1674,11 +1677,11 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 	dma_addr_t addr;
 	u32 frag_size;

-	va = page_address(frag_page->page) + wi->offset;
+	va = netmem_address(frag_page->netmem) + wi->offset;
 	data = va + rx_headroom;
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);

-	addr = page_pool_get_dma_addr(frag_page->page);
+	addr = page_pool_get_dma_addr_netmem(frag_page->netmem);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset,
 				      frag_size, rq->buff.map_dir);
 	net_prefetch(data);
@@ -1728,10 +1731,10 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi

 	frag_page = wi->frag_page;

-	va = page_address(frag_page->page) + wi->offset;
+	va = netmem_address(frag_page->netmem) + wi->offset;
 	frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);

-	addr = page_pool_get_dma_addr(frag_page->page);
+	addr = page_pool_get_dma_addr_netmem(frag_page->netmem);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset,
 				      rq->buff.frame0_sz, rq->buff.map_dir);
 	net_prefetchw(va); /* xdp_frame data area */
@@ -2001,12 +2004,13 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w

 	if (prog) {
 		/* area for bpf_xdp_[store|load]_bytes */
-		net_prefetchw(page_address(frag_page->page) + frag_offset);
+		net_prefetchw(netmem_address(frag_page->netmem) + frag_offset);
 		if (unlikely(mlx5e_page_alloc_fragmented(rq->page_pool, &wi->linear_page))) {
 			rq->stats->buff_alloc_err++;
 			return NULL;
 		}
-		va = page_address(wi->linear_page.page);
+
+		va = netmem_address(wi->linear_page.netmem);
 		net_prefetchw(va); /* xdp_frame data area */
 		linear_hr = XDP_PACKET_HEADROOM;
 		linear_data_len = 0;
@@ -2111,8 +2115,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 			while (++pagep < frag_page);
 		}
 		/* copy header */
-		addr = page_pool_get_dma_addr(head_page->page);
-		mlx5e_copy_skb_header(rq, skb, head_page->page, addr,
+		addr = page_pool_get_dma_addr_netmem(head_page->netmem);
+		mlx5e_copy_skb_header(rq, skb, head_page->netmem, addr,
 				      head_offset, head_offset, headlen);
 		/* skb linear part was allocated with headlen and aligned to long */
 		skb->tail += headlen;
@@ -2142,11 +2146,11 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		return NULL;
 	}

-	va = page_address(frag_page->page) + head_offset;
+	va = netmem_address(frag_page->netmem) + head_offset;
 	data = va + rx_headroom;
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);

-	addr = page_pool_get_dma_addr(frag_page->page);
+	addr = page_pool_get_dma_addr_netmem(frag_page->netmem);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, head_offset,
 				      frag_size, rq->buff.map_dir);
 	net_prefetch(data);
@@ -2185,7 +2189,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 			  struct mlx5_cqe64 *cqe, u16 header_index)
 {
 	struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, header_index);
-	dma_addr_t page_dma_addr = page_pool_get_dma_addr(frag_page->page);
+	dma_addr_t page_dma_addr = page_pool_get_dma_addr_netmem(frag_page->netmem);
 	u16 head_offset = mlx5e_shampo_hd_offset(header_index);
 	dma_addr_t dma_addr = page_dma_addr + head_offset;
 	u16 head_size = cqe->shampo.header_size;
@@ -2194,7 +2198,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 	void *hdr, *data;
 	u32 frag_size;

-	hdr = page_address(frag_page->page) + head_offset;
+	hdr = netmem_address(frag_page->netmem) + head_offset;
 	data = hdr + rx_headroom;
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + head_size);
@@ -2219,7 +2223,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 	}

 	net_prefetchw(skb->data);
-	mlx5e_copy_skb_header(rq, skb, frag_page->page, dma_addr,
+	mlx5e_copy_skb_header(rq, skb, frag_page->netmem, dma_addr,
 			      head_offset + rx_headroom,
 			      rx_headroom, head_size);
 	/* skb linear part was allocated with headlen and aligned to long */
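
For readers following the conversion: a netmem_ref may wrap either a
host struct page or an iov-backed net_iov, so CPU-visible addresses
must be guarded while DMA handling stays uniform. A minimal sketch of
that split, using only helpers that appear in this series; the two
wrapper functions themselves are hypothetical:

static void *frag_page_va_or_null(struct mlx5e_frag_page *frag_page,
				  u32 offset)
{
	/* iov-backed netmem has no kernel mapping, so it must never be
	 * dereferenced by the CPU; host-memory netmem is safe to read.
	 */
	if (netmem_is_net_iov(frag_page->netmem))
		return NULL;

	return netmem_address(frag_page->netmem) + offset;
}

static dma_addr_t frag_page_dma(struct mlx5e_frag_page *frag_page)
{
	/* DMA addresses, by contrast, are valid for both kinds of netmem. */
	return page_pool_get_dma_addr_netmem(frag_page->netmem);
}

This is why the hunks above can switch page_pool_get_dma_addr() to the
_netmem variant mechanically, while netmem_address() call sites remain
valid only on buffers known to be host memory (e.g. SHAMPO headers).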
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Gal Pressman , Leon Romanovsky , Dragos Tatulea Subject: [net-next 08/11] net/mlx5e: Handle iov backed netmems Date: Thu, 16 Jan 2025 13:55:26 -0800 Message-ID: <20250116215530.158886-9-saeed@kernel.org> X-Mailer: git-send-email 2.48.0 In-Reply-To: <20250116215530.158886-1-saeed@kernel.org> References: <20250116215530.158886-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Saeed Mahameed Special page pools can allocate an iov backed netmem, such netmem pages are unreachable by driver, for such cases don't attempt to access those pages in the driver. The only affected path is mlx5e_add_skb_frag()->skb_can_coalesce(). Signed-off-by: Saeed Mahameed Reviewed-by: Dragos Tatulea Reviewed-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index b08c2ac10b67..2ac00962c7a3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -528,15 +528,18 @@ mlx5e_add_skb_frag(struct mlx5e_rq *rq, struct sk_buff *skb, unsigned int truesize) { dma_addr_t addr = page_pool_get_dma_addr_netmem(frag_page->netmem); - struct page *page = netmem_to_page(frag_page->netmem); u8 next_frag = skb_shinfo(skb)->nr_frags; dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len, rq->buff.map_dir); - if (skb_can_coalesce(skb, next_frag, page, frag_offset)) { - skb_coalesce_rx_frag(skb, next_frag - 1, len, truesize); - return; + if (!netmem_is_net_iov(frag_page->netmem)) { + struct page *page = netmem_to_page(frag_page->netmem); + + if (skb_can_coalesce(skb, next_frag, page, frag_offset)) { + skb_coalesce_rx_frag(skb, next_frag - 1, len, truesize); + return; + } } frag_page->frags++; From patchwork Thu Jan 16 21:55:27 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 13942362 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2CD7B242249 for ; Thu, 16 Jan 2025 21:55:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1737064559; cv=none; b=IgVMdaT3lko6/1id37W3TZJ/poYtU5wyJeGVLPePmLs4XjsRSdspo7SJrOvmBGA08Dmlprt6d+LuwhLjUiE2qkZPRBx9U8kstPMeVvmuAmfwoLb7V5jkUWyAV2cfFM1yu7hJYiRk9PbwzwhLEMJLgHsYB3fx60lLtOe1xp32+cE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1737064559; c=relaxed/simple; bh=U9Eg32W0x0Xs3/Ve5JUm9F3l5xsUwt8Qz2Yn2wvgE3I=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=bZNGN313R7sZbzBwqr75ltt91Rt2LhW0NuDmBK7/HTCH/cEQ/deCbWi/LTMzC6kFwqq3DaHYarPy0XbzfP/2eFN9/4rqbZ6CekmKCZBH3vCK0BFkGZnc35o8HVtHG/Gu8ehB6MixePKycPt70+C7EEIc50AeoctTkVDfS1P2v74= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=B9D29Bhh; arc=none 
From patchwork Thu Jan 16 21:55:27 2025
X-Patchwork-Id: 13942362
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: netdev@vger.kernel.org, Tariq Toukan, Gal Pressman, Leon Romanovsky, Dragos Tatulea
Subject: [net-next 09/11] net/mlx5e: Add support for UNREADABLE netmem page pools
Date: Thu, 16 Jan 2025 13:55:27 -0800
Message-ID: <20250116215530.158886-10-saeed@kernel.org>
In-Reply-To: <20250116215530.158886-1-saeed@kernel.org>

On netdev_rx_queue_restart, a special type of page pool may be
expected. Declare support for UNREADABLE netmem iov pages in the pool
params, but only when the SHAMPO (header/data split) RQ mode is
enabled, and also set the queue index in the page pool params struct.

SHAMPO is a hard requirement here: without header split, the RX path
has to peek at packet data, so UNREADABLE_NETMEM cannot be supported.
Signed-off-by: Saeed Mahameed
Reviewed-by: Dragos Tatulea
Reviewed-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 02c9737868b3..340ed7d3feac 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -946,6 +946,11 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
 		pp_params.netdev = rq->netdev;
 		pp_params.dma_dir = rq->buff.map_dir;
 		pp_params.max_len = PAGE_SIZE;
+		pp_params.queue_idx = rq->ix;
+
+		/* SHAMPO header/data split rx path allows for unreadable netmem */
+		if (test_bit(MLX5E_RQ_STATE_SHAMPO, &rq->state))
+			pp_params.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;

 		/* page_pool can be used even when there is no rq->xdp_prog,
 		 * given page_pool does not handle DMA mapping there is no
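
To illustrate why SHAMPO makes the unreadable-netmem promise safe, the
sketch below separates the two buffer roles. It is illustrative only:
the function and its simplifications (e.g. truesize handling) are
hypothetical, while the helpers are the ones used in this series.

static struct sk_buff *shampo_rx_sketch(struct mlx5e_rq *rq,
					struct mlx5e_frag_page *hdr_page,
					u32 hdr_off, u16 hdr_len,
					struct mlx5e_frag_page *payload,
					u32 pay_off, u32 pay_len)
{
	struct sk_buff *skb;

	/* Header buffers are always host memory: safe for the CPU. */
	void *hdr = netmem_address(hdr_page->netmem) + hdr_off;

	skb = napi_alloc_skb(rq->cq.napi, ALIGN(hdr_len, sizeof(long)));
	if (!skb)
		return NULL;
	skb_put_data(skb, hdr, hdr_len);

	/* Payload may be iov-backed (unreadable); the driver only ever
	 * references it, never dereferences it.
	 */
	payload->frags++;
	skb_add_rx_frag_netmem(skb, skb_shinfo(skb)->nr_frags,
			       payload->netmem, pay_off, pay_len, pay_len);
	return skb;
}

The queue_idx in the pool params ties the pool to a specific RX queue,
which is what lets a per-queue memory provider take over allocation
when that queue is restarted.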
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Gal Pressman , Leon Romanovsky , Dragos Tatulea Subject: [net-next 10/11] net/mlx5e: Implement queue mgmt ops and single channel swap Date: Thu, 16 Jan 2025 13:55:28 -0800 Message-ID: <20250116215530.158886-11-saeed@kernel.org> X-Mailer: git-send-email 2.48.0 In-Reply-To: <20250116215530.158886-1-saeed@kernel.org> References: <20250116215530.158886-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Saeed Mahameed The bulk of the work is done in mlx5e_queue_mem_alloc, where we allocate and create the new channel resources, similar to mlx5e_safe_switch_params, but here we do it for a single channel using existing params, sort of a clone channel. To swap the old channel with the new one, we deactivate and close the old channel then replace it with the new one, since the swap procedure doesn't fail in mlx5, we do it all in one place (mlx5e_queue_start). Signed-off-by: Saeed Mahameed Reviewed-by: Dragos Tatulea Reviewed-by: Tariq Toukan --- .../net/ethernet/mellanox/mlx5/core/en_main.c | 96 +++++++++++++++++++ 1 file changed, 96 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 340ed7d3feac..1e03f2afe625 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -5489,6 +5489,101 @@ static const struct netdev_stat_ops mlx5e_stat_ops = { .get_base_stats = mlx5e_get_base_stats, }; +struct mlx5_qmgmt_data { + struct mlx5e_channel *c; + struct mlx5e_channel_param cparam; +}; + +static int mlx5e_queue_mem_alloc(struct net_device *dev, void *newq, int queue_index) +{ + struct mlx5_qmgmt_data *new = (struct mlx5_qmgmt_data *)newq; + struct mlx5e_priv *priv = netdev_priv(dev); + struct mlx5e_channels *chs = &priv->channels; + struct mlx5e_params params = chs->params; + struct mlx5_core_dev *mdev; + int err; + + ASSERT_RTNL(); + mutex_lock(&priv->state_lock); + if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { + err = -ENODEV; + goto unlock; + } + + if (queue_index >= chs->num) { + err = -ERANGE; + goto unlock; + } + + if (MLX5E_GET_PFLAG(&chs->params, MLX5E_PFLAG_TX_PORT_TS) || + chs->params.ptp_rx || + chs->params.xdp_prog || + priv->htb) { + netdev_err(priv->netdev, + "Cloning channels with Port/rx PTP, XDP or HTB is not supported\n"); + err = -EOPNOTSUPP; + goto unlock; + } + + mdev = mlx5_sd_ch_ix_get_dev(priv->mdev, queue_index); + err = mlx5e_build_channel_param(mdev, ¶ms, &new->cparam); + if (err) { + return err; + goto unlock; + } + + err = mlx5e_open_channel(priv, queue_index, ¶ms, NULL, &new->c); +unlock: + mutex_unlock(&priv->state_lock); + return err; +} + +static void mlx5e_queue_mem_free(struct net_device *dev, void *mem) +{ + struct mlx5_qmgmt_data *data = (struct mlx5_qmgmt_data *)mem; + + /* not supposed to happen since mlx5e_queue_start never fails + * but this is how this should be implemented just in case + */ + if (data->c) + mlx5e_close_channel(data->c); +} + +static int mlx5e_queue_stop(struct net_device *dev, void *oldq, int queue_index) +{ + /* mlx5e_queue_start does not fail, we stop the old queue there */ + return 0; +} + +static int mlx5e_queue_start(struct net_device *dev, void *newq, int queue_index) +{ + struct mlx5_qmgmt_data *new = (struct mlx5_qmgmt_data *)newq; + struct mlx5e_priv 
+	struct mlx5e_priv *priv = netdev_priv(dev);
+	struct mlx5e_channel *old;
+
+	mutex_lock(&priv->state_lock);
+
+	/* stop and close the old */
+	old = priv->channels.c[queue_index];
+	mlx5e_deactivate_priv_channels(priv);
+	/* close old before activating new, to avoid napi conflict */
+	mlx5e_close_channel(old);
+
+	/* start the new */
+	priv->channels.c[queue_index] = new->c;
+	mlx5e_activate_priv_channels(priv);
+	mutex_unlock(&priv->state_lock);
+	return 0;
+}
+
+static const struct netdev_queue_mgmt_ops mlx5e_queue_mgmt_ops = {
+	.ndo_queue_mem_size	= sizeof(struct mlx5_qmgmt_data),
+	.ndo_queue_mem_alloc	= mlx5e_queue_mem_alloc,
+	.ndo_queue_mem_free	= mlx5e_queue_mem_free,
+	.ndo_queue_start	= mlx5e_queue_start,
+	.ndo_queue_stop		= mlx5e_queue_stop,
+};
+
 static void mlx5e_build_nic_netdev(struct net_device *netdev)
 {
 	struct mlx5e_priv *priv = netdev_priv(netdev);
@@ -5499,6 +5594,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
 	SET_NETDEV_DEV(netdev, mdev->device);

 	netdev->netdev_ops = &mlx5e_netdev_ops;
+	netdev->queue_mgmt_ops = &mlx5e_queue_mgmt_ops;
 	netdev->xdp_metadata_ops = &mlx5e_xdp_metadata_ops;
 	netdev->xsk_tx_metadata_ops = &mlx5e_xsk_tx_metadata_ops;
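
For context on how these hooks are driven: the core performs the
restart roughly in the order sketched below (modeled on
netdev_rx_queue_restart() in net/core/netdev_rx_queue.c, with
simplified error handling; the example function is hypothetical).

static int example_rx_queue_restart(struct net_device *dev, int ix)
{
	const struct netdev_queue_mgmt_ops *ops = dev->queue_mgmt_ops;
	void *new_mem, *old_mem;
	int err = -ENOMEM;

	new_mem = kzalloc(ops->ndo_queue_mem_size, GFP_KERNEL);
	old_mem = kzalloc(ops->ndo_queue_mem_size, GFP_KERNEL);
	if (!new_mem || !old_mem)
		goto out;

	/* 1. Allocate the replacement queue; for mlx5e this clones the
	 *    channel (mlx5e_queue_mem_alloc).
	 */
	err = ops->ndo_queue_mem_alloc(dev, new_mem, ix);
	if (err)
		goto out;

	/* 2. Stop the old queue. mlx5e makes this a no-op and defers the
	 *    teardown to ndo_queue_start so the swap cannot fail midway.
	 */
	err = ops->ndo_queue_stop(dev, old_mem, ix);
	if (err) {
		ops->ndo_queue_mem_free(dev, new_mem);
		goto out;
	}

	/* 3. Activate the replacement in the old queue's slot; mlx5e
	 *    closes the old channel here (mlx5e_queue_start).
	 */
	err = ops->ndo_queue_start(dev, new_mem, ix);
	if (err)
		ops->ndo_queue_mem_free(dev, new_mem);
out:
	kfree(new_mem);
	kfree(old_mem);
	return err;
}

Under this contract, making ndo_queue_stop trivial and doing the
deactivate/close/activate sequence atomically under priv->state_lock
in ndo_queue_start confines the failure surface to the allocation
step.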
Miller" , Jakub Kicinski , Paolo Abeni , Eric Dumazet Cc: Saeed Mahameed , netdev@vger.kernel.org, Tariq Toukan , Gal Pressman , Leon Romanovsky , Dragos Tatulea Subject: [net-next 11/11] net/mlx5e: Support ethtool tcp-data-split settings Date: Thu, 16 Jan 2025 13:55:29 -0800 Message-ID: <20250116215530.158886-12-saeed@kernel.org> X-Mailer: git-send-email 2.48.0 In-Reply-To: <20250116215530.158886-1-saeed@kernel.org> References: <20250116215530.158886-1-saeed@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Saeed Mahameed Try enabling HW GRO when requested. Signed-off-by: Saeed Mahameed Reviewed-by: Dragos Tatulea Reviewed-by: Tariq Toukan --- .../ethernet/mellanox/mlx5/core/en_ethtool.c | 49 +++++++++++++++++++ 1 file changed, 49 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c index cae39198b4db..ee188e033e99 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c @@ -349,6 +349,14 @@ void mlx5e_ethtool_get_ringparam(struct mlx5e_priv *priv, (priv->channels.params.packet_merge.type == MLX5E_PACKET_MERGE_SHAMPO) ? ETHTOOL_TCP_DATA_SPLIT_ENABLED : ETHTOOL_TCP_DATA_SPLIT_DISABLED; + + /* if HW GRO is not enabled due to external limitations but is wanted, + * report HDS state as unknown so it won't get truned off explicitly. + */ + if (kernel_param->tcp_data_split == ETHTOOL_TCP_DATA_SPLIT_DISABLED && + priv->netdev->wanted_features & NETIF_F_GRO_HW) + kernel_param->tcp_data_split = ETHTOOL_TCP_DATA_SPLIT_UNKNOWN; + } static void mlx5e_get_ringparam(struct net_device *dev, @@ -361,6 +369,43 @@ static void mlx5e_get_ringparam(struct net_device *dev, mlx5e_ethtool_get_ringparam(priv, param, kernel_param); } +static bool mlx5e_ethtool_set_tcp_data_split(struct mlx5e_priv *priv, + u8 tcp_data_split) +{ + bool enable = (tcp_data_split == ETHTOOL_TCP_DATA_SPLIT_ENABLED); + struct net_device *dev = priv->netdev; + + if (tcp_data_split == ETHTOOL_TCP_DATA_SPLIT_UNKNOWN) + return true; + + if (enable && !(dev->hw_features & NETIF_F_GRO_HW)) { + netdev_warn(dev, "TCP-data-split is not supported when GRO HW is not supported\n"); + return false; /* GRO HW is not supported */ + } + + if (enable && (dev->features & NETIF_F_GRO_HW)) { + /* Already enabled */ + dev->wanted_features |= NETIF_F_GRO_HW; + return true; + } + + if (!enable && !(dev->features & NETIF_F_GRO_HW)) { + /* Already disabled */ + dev->wanted_features &= ~NETIF_F_GRO_HW; + return true; + } + + /* Try enable or disable GRO HW */ + if (enable) + dev->wanted_features |= NETIF_F_GRO_HW; + else + dev->wanted_features &= ~NETIF_F_GRO_HW; + + netdev_change_features(dev); + + return enable == !!(dev->features & NETIF_F_GRO_HW); +} + int mlx5e_ethtool_set_ringparam(struct mlx5e_priv *priv, struct ethtool_ringparam *param, struct netlink_ext_ack *extack) @@ -419,6 +464,9 @@ static int mlx5e_set_ringparam(struct net_device *dev, { struct mlx5e_priv *priv = netdev_priv(dev); + if (!mlx5e_ethtool_set_tcp_data_split(priv, kernel_param->tcp_data_split)) + return -EINVAL; + return mlx5e_ethtool_set_ringparam(priv, param, extack); } @@ -2613,6 +2661,7 @@ const struct ethtool_ops mlx5e_ethtool_ops = { ETHTOOL_COALESCE_MAX_FRAMES | ETHTOOL_COALESCE_USE_ADAPTIVE | ETHTOOL_COALESCE_USE_CQE, + .supported_ring_params = ETHTOOL_RING_USE_TCP_DATA_SPLIT, .get_drvinfo = mlx5e_get_drvinfo, 
 	.get_link = ethtool_op_get_link,
 	.get_link_ext_state = mlx5e_get_link_ext_state,