From patchwork Tue Oct 9 18:45:38 2018
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 10633161
From: Kamal Heib
To: Doug Ledford, Jason Gunthorpe
Cc: linux-rdma@vger.kernel.org, kamalheib1@gmail.com
Subject: [PATCH rdma-next 09/18] RDMA/mlx5: Initialize ib_device_ops struct
Date: Tue, 9 Oct 2018 21:45:38 +0300
Message-Id: <20181009184547.5907-10-kamalheib1@gmail.com>
In-Reply-To: <20181009184547.5907-1-kamalheib1@gmail.com>
References: <20181009184547.5907-1-kamalheib1@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Initialize ib_device_ops with the supported operations.

Signed-off-by: Kamal Heib
---
 drivers/infiniband/hw/mlx5/main.c | 126 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 125 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index b3294a7e3ff9..1d2b8f4b2904 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -5760,6 +5760,92 @@ static void mlx5_ib_stage_flow_db_cleanup(struct mlx5_ib_dev *dev)
 	kfree(dev->flow_db);
 }
 
+static struct ib_device_ops mlx5_ib_dev_ops = {
+	.query_device = mlx5_ib_query_device,
+	.get_link_layer = mlx5_ib_port_link_layer,
+	.query_gid = mlx5_ib_query_gid,
+	.add_gid = mlx5_ib_add_gid,
+	.del_gid = mlx5_ib_del_gid,
+	.query_pkey = mlx5_ib_query_pkey,
+	.modify_device = mlx5_ib_modify_device,
+	.modify_port = mlx5_ib_modify_port,
+	.alloc_ucontext = mlx5_ib_alloc_ucontext,
+	.dealloc_ucontext = mlx5_ib_dealloc_ucontext,
+	.mmap = mlx5_ib_mmap,
+	.alloc_pd = mlx5_ib_alloc_pd,
+	.dealloc_pd = mlx5_ib_dealloc_pd,
+	.create_ah = mlx5_ib_create_ah,
+	.query_ah = mlx5_ib_query_ah,
+	.destroy_ah = mlx5_ib_destroy_ah,
+	.create_srq = mlx5_ib_create_srq,
+	.modify_srq = mlx5_ib_modify_srq,
+	.query_srq = mlx5_ib_query_srq,
+	.destroy_srq = mlx5_ib_destroy_srq,
+	.post_srq_recv = mlx5_ib_post_srq_recv,
+	.create_qp = mlx5_ib_create_qp,
+	.modify_qp = mlx5_ib_modify_qp,
+	.query_qp = mlx5_ib_query_qp,
+	.destroy_qp = mlx5_ib_destroy_qp,
+	.drain_sq = mlx5_ib_drain_sq,
+	.drain_rq = mlx5_ib_drain_rq,
+	.post_send = mlx5_ib_post_send,
+	.post_recv = mlx5_ib_post_recv,
+	.create_cq = mlx5_ib_create_cq,
+	.modify_cq = mlx5_ib_modify_cq,
+	.resize_cq = mlx5_ib_resize_cq,
+	.destroy_cq = mlx5_ib_destroy_cq,
+	.poll_cq = mlx5_ib_poll_cq,
+	.req_notify_cq = mlx5_ib_arm_cq,
+	.get_dma_mr = mlx5_ib_get_dma_mr,
+	.reg_user_mr = mlx5_ib_reg_user_mr,
+	.rereg_user_mr = mlx5_ib_rereg_user_mr,
+	.dereg_mr = mlx5_ib_dereg_mr,
+	.attach_mcast = mlx5_ib_mcg_attach,
+	.detach_mcast = mlx5_ib_mcg_detach,
+	.process_mad = mlx5_ib_process_mad,
+	.alloc_mr = mlx5_ib_alloc_mr,
+	.map_mr_sg = mlx5_ib_map_mr_sg,
+	.check_mr_status = mlx5_ib_check_mr_status,
+	.get_dev_fw_str = get_dev_fw_str,
+	.get_vector_affinity = mlx5_ib_get_vector_affinity,
+	.disassociate_ucontext = mlx5_ib_disassociate_ucontext,
+	.create_flow = mlx5_ib_create_flow,
+	.destroy_flow = mlx5_ib_destroy_flow,
+	.create_flow_action_esp = mlx5_ib_create_flow_action_esp,
+	.destroy_flow_action = mlx5_ib_destroy_flow_action,
+	.modify_flow_action_esp = mlx5_ib_modify_flow_action_esp,
+	.create_counters = mlx5_ib_create_counters,
+	.destroy_counters = mlx5_ib_destroy_counters,
+	.read_counters = mlx5_ib_read_counters,
+};
+
+static struct ib_device_ops mlx5_ib_dev_ipoib_enhanced_ops = {
+	.alloc_rdma_netdev = mlx5_ib_alloc_rdma_netdev,
+};
+
+static struct ib_device_ops mlx5_ib_dev_sriov_ops = {
+	.get_vf_config = mlx5_ib_get_vf_config,
+	.set_vf_link_state = mlx5_ib_set_vf_link_state,
+	.get_vf_stats = mlx5_ib_get_vf_stats,
+	.set_vf_guid = mlx5_ib_set_vf_guid,
+};
+
+static struct ib_device_ops mlx5_ib_dev_mw_ops = {
+	.alloc_mw = mlx5_ib_alloc_mw,
+	.dealloc_mw = mlx5_ib_dealloc_mw,
+};
+
+static struct ib_device_ops mlx5_ib_dev_xrc_ops = {
+	.alloc_xrcd = mlx5_ib_alloc_xrcd,
+	.dealloc_xrcd = mlx5_ib_dealloc_xrcd,
+};
+
+static struct ib_device_ops mlx5_ib_dev_dm_ops = {
+	.alloc_dm = mlx5_ib_alloc_dm,
+	.dealloc_dm = mlx5_ib_dealloc_dm,
+	.reg_dm_mr = mlx5_ib_reg_dm_mr,
+};
+
 int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
 {
 	struct mlx5_core_dev *mdev = dev->mdev;
@@ -5847,14 +5933,18 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
 	dev->ib_dev.check_mr_status = mlx5_ib_check_mr_status;
 	dev->ib_dev.get_dev_fw_str = get_dev_fw_str;
 	dev->ib_dev.get_vector_affinity = mlx5_ib_get_vector_affinity;
-	if (MLX5_CAP_GEN(mdev, ipoib_enhanced_offloads))
+	if (MLX5_CAP_GEN(mdev, ipoib_enhanced_offloads)) {
 		dev->ib_dev.alloc_rdma_netdev = mlx5_ib_alloc_rdma_netdev;
+		ib_set_device_ops(&dev->ib_dev,
+				  &mlx5_ib_dev_ipoib_enhanced_ops);
+	}
 
 	if (mlx5_core_is_pf(mdev)) {
 		dev->ib_dev.get_vf_config = mlx5_ib_get_vf_config;
 		dev->ib_dev.set_vf_link_state = mlx5_ib_set_vf_link_state;
 		dev->ib_dev.get_vf_stats = mlx5_ib_get_vf_stats;
 		dev->ib_dev.set_vf_guid = mlx5_ib_set_vf_guid;
+		ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_sriov_ops);
 	}
 
 	dev->ib_dev.disassociate_ucontext = mlx5_ib_disassociate_ucontext;
@@ -5864,6 +5954,7 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
 	if (MLX5_CAP_GEN(mdev, imaicl)) {
 		dev->ib_dev.alloc_mw = mlx5_ib_alloc_mw;
 		dev->ib_dev.dealloc_mw = mlx5_ib_dealloc_mw;
+		ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_mw_ops);
 		dev->ib_dev.uverbs_cmd_mask |=
 			(1ull << IB_USER_VERBS_CMD_ALLOC_MW) |
 			(1ull << IB_USER_VERBS_CMD_DEALLOC_MW);
@@ -5872,6 +5963,7 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
 	if (MLX5_CAP_GEN(mdev, xrc)) {
 		dev->ib_dev.alloc_xrcd = mlx5_ib_alloc_xrcd;
 		dev->ib_dev.dealloc_xrcd = mlx5_ib_dealloc_xrcd;
+		ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_xrc_ops);
 		dev->ib_dev.uverbs_cmd_mask |=
 			(1ull << IB_USER_VERBS_CMD_OPEN_XRCD) |
 			(1ull << IB_USER_VERBS_CMD_CLOSE_XRCD);
@@ -5881,6 +5973,7 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
 		dev->ib_dev.alloc_dm = mlx5_ib_alloc_dm;
 		dev->ib_dev.dealloc_dm = mlx5_ib_dealloc_dm;
 		dev->ib_dev.reg_dm_mr = mlx5_ib_reg_dm_mr;
+		ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_dm_ops);
 	}
 
 	dev->ib_dev.create_flow = mlx5_ib_create_flow;
@@ -5895,6 +5988,7 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
 	dev->ib_dev.create_counters = mlx5_ib_create_counters;
 	dev->ib_dev.destroy_counters = mlx5_ib_destroy_counters;
 	dev->ib_dev.read_counters = mlx5_ib_read_counters;
+	ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_ops);
 
 	err = init_node_data(dev);
 	if (err)
@@ -5908,22 +6002,45 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
 	return 0;
 }
 
+static struct ib_device_ops mlx5_ib_dev_port_ops = {
+	.get_port_immutable = mlx5_port_immutable,
+	.query_port = mlx5_ib_query_port,
+};
+
 static int mlx5_ib_stage_non_default_cb(struct mlx5_ib_dev *dev)
 {
 	dev->ib_dev.get_port_immutable = mlx5_port_immutable;
 	dev->ib_dev.query_port = mlx5_ib_query_port;
+	ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_port_ops);
+
 	return 0;
 }
 
+static struct ib_device_ops mlx5_ib_dev_port_rep_ops = {
+	.get_port_immutable = mlx5_port_rep_immutable,
+	.query_port = mlx5_ib_rep_query_port,
+};
+
 int mlx5_ib_stage_rep_non_default_cb(struct mlx5_ib_dev *dev)
 {
 	dev->ib_dev.get_port_immutable = mlx5_port_rep_immutable;
 	dev->ib_dev.query_port = mlx5_ib_rep_query_port;
+	ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_port_rep_ops);
+
 	return 0;
 }
 
+static struct ib_device_ops mlx5_ib_dev_common_roce_ops = {
+	.get_netdev = mlx5_ib_get_netdev,
+	.create_wq = mlx5_ib_create_wq,
+	.modify_wq = mlx5_ib_modify_wq,
+	.destroy_wq = mlx5_ib_destroy_wq,
+	.create_rwq_ind_table = mlx5_ib_create_rwq_ind_table,
+	.destroy_rwq_ind_table = mlx5_ib_destroy_rwq_ind_table,
+};
+
 static int mlx5_ib_stage_common_roce_init(struct mlx5_ib_dev *dev)
 {
 	u8 port_num;
@@ -5942,6 +6059,7 @@ static int mlx5_ib_stage_common_roce_init(struct mlx5_ib_dev *dev)
 	dev->ib_dev.create_rwq_ind_table = mlx5_ib_create_rwq_ind_table;
 	dev->ib_dev.destroy_rwq_ind_table = mlx5_ib_destroy_rwq_ind_table;
+	ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_common_roce_ops);
 
 	dev->ib_dev.uverbs_ex_cmd_mask |=
 			(1ull << IB_USER_VERBS_EX_CMD_CREATE_WQ) |
 			(1ull << IB_USER_VERBS_EX_CMD_MODIFY_WQ) |
@@ -6041,11 +6159,17 @@ static int mlx5_ib_stage_odp_init(struct mlx5_ib_dev *dev)
 	return mlx5_ib_odp_init_one(dev);
 }
 
+static struct ib_device_ops mlx5_ib_dev_hw_stats_ops = {
+	.get_hw_stats = mlx5_ib_get_hw_stats,
+	.alloc_hw_stats = mlx5_ib_alloc_hw_stats,
+};
+
 int mlx5_ib_stage_counters_init(struct mlx5_ib_dev *dev)
 {
 	if (MLX5_CAP_GEN(dev->mdev, max_qp_cnt)) {
 		dev->ib_dev.get_hw_stats = mlx5_ib_get_hw_stats;
 		dev->ib_dev.alloc_hw_stats = mlx5_ib_alloc_hw_stats;
+		ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_hw_stats_ops);
 		return mlx5_ib_alloc_counters(dev);
 	}
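
[Editor's note] For readers unfamiliar with the pattern this series introduces:
ib_set_device_ops() lets a driver register a whole group of callbacks in one
call instead of assigning each ib_device function pointer by hand, which is why
this patch keeps the old per-field assignments and the new grouped registration
side by side during the conversion. Below is a minimal, self-contained sketch
of the merge semantics under the assumption that the helper simply copies every
non-NULL pointer from the source ops table into the device's table; the names
toy_device, toy_dev_ops, toy_set_device_ops, and the two callbacks are
hypothetical stand-ins, not the kernel's ib_device_ops, which covers many more
operations in drivers/infiniband/core/device.c.

/* Hypothetical illustration of the ib_set_device_ops() pattern:
 * copy each non-NULL callback from a const ops table into the device.
 * Not the kernel implementation.
 */
#include <stdio.h>
#include <stddef.h>

struct toy_dev_ops {
	int (*query_port)(int port);
	int (*query_gid)(int port, int index);
};

struct toy_device {
	struct toy_dev_ops ops;
};

/* Copy one member only if the source table provides it, so several
 * partial tables (per-capability groups) can be merged into one device. */
#define TOY_SET_OP(dst, src, name)			\
	do {						\
		if ((src)->name)			\
			(dst)->name = (src)->name;	\
	} while (0)

static void toy_set_device_ops(struct toy_device *dev,
			       const struct toy_dev_ops *ops)
{
	TOY_SET_OP(&dev->ops, ops, query_port);
	TOY_SET_OP(&dev->ops, ops, query_gid);
}

static int my_query_port(int port)
{
	return port; /* stub callback for the demo */
}

static const struct toy_dev_ops my_port_ops = {
	.query_port = my_query_port,
	/* .query_gid left NULL: merging must not clobber it */
};

int main(void)
{
	struct toy_device dev = { { NULL, NULL } };

	toy_set_device_ops(&dev, &my_port_ops);
	printf("query_port set: %d, query_gid set: %d\n",
	       dev.ops.query_port != NULL, dev.ops.query_gid != NULL);
	return 0;
}

The NULL check is the important design choice: it lets the driver register
conditional groups (SR-IOV, XRC, memory windows, DM, hw stats) only when the
corresponding capability bit is set, exactly as the hunks above do with the
MLX5_CAP_GEN() tests, without one group overwriting another.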