From patchwork Fri Nov 20 23:03:37 2020
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 11922787
From: Saeed Mahameed
To: Saeed Mahameed, Leon Romanovsky
CC: Parav Pandit, Bodong Wang
Subject: [PATCH mlx5-next 14/16] net/mlx5: Rename peer_pf to host_pf
Date: Fri, 20 Nov 2020 15:03:37 -0800
Message-ID: <20201120230339.651609-15-saeedm@nvidia.com>
In-Reply-To: <20201120230339.651609-1-saeedm@nvidia.com>
References: <20201120230339.651609-1-saeedm@nvidia.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Parav Pandit

To match the hardware spec, rename peer_pf to host_pf.
Signed-off-by: Parav Pandit
Reviewed-by: Bodong Wang
Signed-off-by: Saeed Mahameed
---
 .../net/ethernet/mellanox/mlx5/core/ecpf.c    | 51 ++++++++++++-------
 .../ethernet/mellanox/mlx5/core/pagealloc.c   | 12 ++---
 include/linux/mlx5/driver.h                   |  2 +-
 3 files changed, 40 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
index 3dc9dd3f24dc..68ca0e2b26cd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
@@ -8,37 +8,52 @@ bool mlx5_read_embedded_cpu(struct mlx5_core_dev *dev)
 	return (ioread32be(&dev->iseg->initializing) >> MLX5_ECPU_BIT_NUM) & 1;
 }
 
-static int mlx5_peer_pf_init(struct mlx5_core_dev *dev)
+static int mlx5_cmd_host_pf_enable_hca(struct mlx5_core_dev *dev)
 {
-	u32 in[MLX5_ST_SZ_DW(enable_hca_in)] = {};
-	int err;
+	u32 out[MLX5_ST_SZ_DW(enable_hca_out)] = {};
+	u32 in[MLX5_ST_SZ_DW(enable_hca_in)] = {};
 
 	MLX5_SET(enable_hca_in, in, opcode, MLX5_CMD_OP_ENABLE_HCA);
-	err = mlx5_cmd_exec_in(dev, enable_hca, in);
+	MLX5_SET(enable_hca_in, in, function_id, 0);
+	MLX5_SET(enable_hca_in, in, embedded_cpu_function, 0);
+	return mlx5_cmd_exec(dev, &in, sizeof(in), &out, sizeof(out));
+}
+
+static int mlx5_cmd_host_pf_disable_hca(struct mlx5_core_dev *dev)
+{
+	u32 out[MLX5_ST_SZ_DW(disable_hca_out)] = {};
+	u32 in[MLX5_ST_SZ_DW(disable_hca_in)] = {};
+
+	MLX5_SET(disable_hca_in, in, opcode, MLX5_CMD_OP_DISABLE_HCA);
+	MLX5_SET(disable_hca_in, in, function_id, 0);
+	MLX5_SET(disable_hca_in, in, embedded_cpu_function, 0);
+	return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
+}
+
+static int mlx5_host_pf_init(struct mlx5_core_dev *dev)
+{
+	int err;
+
+	err = mlx5_cmd_host_pf_enable_hca(dev);
 	if (err)
-		mlx5_core_err(dev, "Failed to enable peer PF HCA err(%d)\n",
-			      err);
+		mlx5_core_err(dev, "Failed to enable external host PF HCA err(%d)\n", err);
 
 	return err;
 }
 
-static void mlx5_peer_pf_cleanup(struct mlx5_core_dev *dev)
+static void mlx5_host_pf_cleanup(struct mlx5_core_dev *dev)
 {
-	u32 in[MLX5_ST_SZ_DW(disable_hca_in)] = {};
 	int err;
 
-	MLX5_SET(disable_hca_in, in, opcode, MLX5_CMD_OP_DISABLE_HCA);
-	err = mlx5_cmd_exec_in(dev, disable_hca, in);
+	err = mlx5_cmd_host_pf_disable_hca(dev);
 	if (err) {
-		mlx5_core_err(dev, "Failed to disable peer PF HCA err(%d)\n",
-			      err);
+		mlx5_core_err(dev, "Failed to disable external host PF HCA err(%d)\n", err);
 		return;
 	}
 
-	err = mlx5_wait_for_pages(dev, &dev->priv.peer_pf_pages);
+	err = mlx5_wait_for_pages(dev, &dev->priv.host_pf_pages);
 	if (err)
-		mlx5_core_warn(dev, "Timeout reclaiming peer PF pages err(%d)\n",
-			       err);
+		mlx5_core_warn(dev, "Timeout reclaiming external host PF pages err(%d)\n", err);
 }
 
 int mlx5_ec_init(struct mlx5_core_dev *dev)
@@ -46,10 +61,10 @@ int mlx5_ec_init(struct mlx5_core_dev *dev)
 	if (!mlx5_core_is_ecpf(dev))
 		return 0;
 
-	/* ECPF shall enable HCA for peer PF in the same way a PF
+	/* ECPF shall enable HCA for host PF in the same way a PF
 	 * does this for its VFs.
 	 */
-	return mlx5_peer_pf_init(dev);
+	return mlx5_host_pf_init(dev);
 }
 
 void mlx5_ec_cleanup(struct mlx5_core_dev *dev)
@@ -57,5 +72,5 @@ void mlx5_ec_cleanup(struct mlx5_core_dev *dev)
 	if (!mlx5_core_is_ecpf(dev))
 		return;
 
-	mlx5_peer_pf_cleanup(dev);
+	mlx5_host_pf_cleanup(dev);
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
index 150638814517..539baea358bf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
@@ -374,7 +374,7 @@ static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
 	if (func_id)
 		dev->priv.vfs_pages += npages;
 	else if (mlx5_core_is_ecpf(dev) && !ec_function)
-		dev->priv.peer_pf_pages += npages;
+		dev->priv.host_pf_pages += npages;
 
 	mlx5_core_dbg(dev, "npages %d, ec_function %d, func_id 0x%x, err %d\n",
 		      npages, ec_function, func_id, err);
@@ -416,7 +416,7 @@ static void release_all_pages(struct mlx5_core_dev *dev, u32 func_id,
 	if (func_id)
 		dev->priv.vfs_pages -= npages;
 	else if (mlx5_core_is_ecpf(dev) && !ec_function)
-		dev->priv.peer_pf_pages -= npages;
+		dev->priv.host_pf_pages -= npages;
 
 	mlx5_core_dbg(dev, "npages %d, ec_function %d, func_id 0x%x\n",
 		      npages, ec_function, func_id);
@@ -506,7 +506,7 @@ static int reclaim_pages(struct mlx5_core_dev *dev, u32 func_id, int npages,
 	if (func_id)
 		dev->priv.vfs_pages -= num_claimed;
 	else if (mlx5_core_is_ecpf(dev) && !ec_function)
-		dev->priv.peer_pf_pages -= num_claimed;
+		dev->priv.host_pf_pages -= num_claimed;
 
 out_free:
 	kvfree(out);
@@ -661,9 +661,9 @@ int mlx5_reclaim_startup_pages(struct mlx5_core_dev *dev)
 	WARN(dev->priv.vfs_pages,
 	     "VFs FW pages counter is %d after reclaiming all pages\n",
 	     dev->priv.vfs_pages);
-	WARN(dev->priv.peer_pf_pages,
-	     "Peer PF FW pages counter is %d after reclaiming all pages\n",
-	     dev->priv.peer_pf_pages);
+	WARN(dev->priv.host_pf_pages,
+	     "External host PF FW pages counter is %d after reclaiming all pages\n",
+	     dev->priv.host_pf_pages);
 
 	return 0;
 }
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index d6ef3068d7d3..8e9bcb3bfd77 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -547,7 +547,7 @@ struct mlx5_priv {
 	atomic_t		reg_pages;
 	struct list_head	free_list;
 	int			vfs_pages;
-	int			peer_pf_pages;
+	int			host_pf_pages;
 
 	struct mlx5_core_health health;
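
For readers new to the ECPF flow, below is a consolidated sketch of how the renamed helpers read after this patch is applied. It is an illustrative reading of the ecpf.c hunks above, not additional patch content: the #include lines are assumptions standing in for ecpf.c's existing headers, while the functions and MLX5_SET fields appear verbatim in the diff. Setting function_id 0 together with embedded_cpu_function 0 is what directs the command at the external host PF rather than at the embedded CPU function itself.

/*
 * Illustrative sketch only (not part of the patch): the post-patch shape
 * of the host PF enable path, reconstructed from the hunks above.
 * Header choice is an assumption; ecpf.c already pulls in the mlx5 core
 * headers it needs.
 */
#include <linux/mlx5/driver.h>
#include "mlx5_core.h"

/* ENABLE_HCA issued by the ECPF on behalf of the external host PF:
 * function_id 0 with embedded_cpu_function 0 selects the host PF,
 * not the embedded CPU function issuing the command.
 */
static int mlx5_cmd_host_pf_enable_hca(struct mlx5_core_dev *dev)
{
	u32 out[MLX5_ST_SZ_DW(enable_hca_out)] = {};
	u32 in[MLX5_ST_SZ_DW(enable_hca_in)] = {};

	MLX5_SET(enable_hca_in, in, opcode, MLX5_CMD_OP_ENABLE_HCA);
	MLX5_SET(enable_hca_in, in, function_id, 0);
	MLX5_SET(enable_hca_in, in, embedded_cpu_function, 0);
	return mlx5_cmd_exec(dev, &in, sizeof(in), &out, sizeof(out));
}

static int mlx5_host_pf_init(struct mlx5_core_dev *dev)
{
	int err;

	/* Mirrors what a bare-metal PF does for its VFs, here for the host PF. */
	err = mlx5_cmd_host_pf_enable_hca(dev);
	if (err)
		mlx5_core_err(dev, "Failed to enable external host PF HCA err(%d)\n", err);

	return err;
}

int mlx5_ec_init(struct mlx5_core_dev *dev)
{
	/* Only the embedded CPU PF (ECPF) performs host PF setup. */
	if (!mlx5_core_is_ecpf(dev))
		return 0;

	return mlx5_host_pf_init(dev);
}

The disable path is symmetric: mlx5_host_pf_cleanup() issues DISABLE_HCA through mlx5_cmd_host_pf_disable_hca() and then waits for the host_pf_pages counter (renamed from peer_pf_pages) to drain, as shown in the pagealloc.c and driver.h hunks.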