From patchwork Mon Nov 16 20:23:02 2020
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 11910767
From: Jason Gunthorpe
To: , Bob Pearson
Subject: [PATCH 1/9] verbs: Simplify query_device_ex
Date: Mon, 16 Nov 2020 16:23:02 -0400
Message-ID: <1-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com>
In-Reply-To: <0-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com>
X-Mailing-List: linux-rdma@vger.kernel.org

The obtuse logic here is hard to read; simplify it with a small macro and
add offsetofend().

Signed-off-by: Jason Gunthorpe
---
 libibverbs/cmd.c | 146 ++++++++++++++++-------------------------------
 util/util.h      |   3 +
 2 files changed, 52 insertions(+), 97 deletions(-)

diff --git a/libibverbs/cmd.c b/libibverbs/cmd.c
index 25c8a971540c63..a439f8c06481dd 100644
--- a/libibverbs/cmd.c
+++ b/libibverbs/cmd.c
@@ -44,6 +44,7 @@
 #include
 #include "ibverbs.h"
 #include
+#include
 
 bool verbs_allow_disassociate_destroy;
 
@@ -144,117 +145,68 @@ int ibv_cmd_query_device_ex(struct ibv_context *context,
 	/* Report back supported comp_mask bits.
For now no comp_mask bit is * defined */ attr->comp_mask = resp->comp_mask & 0; - if (attr_size >= offsetof(struct ibv_device_attr_ex, odp_caps) + - sizeof(attr->odp_caps)) { - if (resp->response_length >= - offsetof(struct ib_uverbs_ex_query_device_resp, odp_caps) + - sizeof(resp->odp_caps)) { - attr->odp_caps.general_caps = resp->odp_caps.general_caps; - attr->odp_caps.per_transport_caps.rc_odp_caps = - resp->odp_caps.per_transport_caps.rc_odp_caps; - attr->odp_caps.per_transport_caps.uc_odp_caps = - resp->odp_caps.per_transport_caps.uc_odp_caps; - attr->odp_caps.per_transport_caps.ud_odp_caps = - resp->odp_caps.per_transport_caps.ud_odp_caps; - } - } - if (attr_size >= offsetof(struct ibv_device_attr_ex, - completion_timestamp_mask) + - sizeof(attr->completion_timestamp_mask)) { - if (resp->response_length >= - offsetof(struct ib_uverbs_ex_query_device_resp, timestamp_mask) + - sizeof(resp->timestamp_mask)) - attr->completion_timestamp_mask = resp->timestamp_mask; +#define CAN_COPY(_ibv_attr, _uverbs_attr) \ + (attr_size >= offsetofend(struct ibv_device_attr_ex, _ibv_attr) && \ + resp->response_length >= \ + offsetofend(struct ib_uverbs_ex_query_device_resp, \ + _uverbs_attr)) + + if (CAN_COPY(odp_caps, odp_caps)) { + attr->odp_caps.general_caps = resp->odp_caps.general_caps; + attr->odp_caps.per_transport_caps.rc_odp_caps = + resp->odp_caps.per_transport_caps.rc_odp_caps; + attr->odp_caps.per_transport_caps.uc_odp_caps = + resp->odp_caps.per_transport_caps.uc_odp_caps; + attr->odp_caps.per_transport_caps.ud_odp_caps = + resp->odp_caps.per_transport_caps.ud_odp_caps; } - if (attr_size >= offsetof(struct ibv_device_attr_ex, hca_core_clock) + - sizeof(attr->hca_core_clock)) { - if (resp->response_length >= - offsetof(struct ib_uverbs_ex_query_device_resp, hca_core_clock) + - sizeof(resp->hca_core_clock)) - attr->hca_core_clock = resp->hca_core_clock; - } + if (CAN_COPY(completion_timestamp_mask, timestamp_mask)) + attr->completion_timestamp_mask = resp->timestamp_mask; - if (attr_size >= offsetof(struct ibv_device_attr_ex, device_cap_flags_ex) + - sizeof(attr->device_cap_flags_ex)) { - if (resp->response_length >= - offsetof(struct ib_uverbs_ex_query_device_resp, device_cap_flags_ex) + - sizeof(resp->device_cap_flags_ex)) - attr->device_cap_flags_ex = resp->device_cap_flags_ex; - } + if (CAN_COPY(hca_core_clock, hca_core_clock)) + attr->hca_core_clock = resp->hca_core_clock; - if (attr_size >= offsetof(struct ibv_device_attr_ex, rss_caps) + - sizeof(attr->rss_caps)) { - if (resp->response_length >= - offsetof(struct ib_uverbs_ex_query_device_resp, rss_caps) + - sizeof(resp->rss_caps)) { - attr->rss_caps.supported_qpts = resp->rss_caps.supported_qpts; - attr->rss_caps.max_rwq_indirection_tables = resp->rss_caps.max_rwq_indirection_tables; - attr->rss_caps.max_rwq_indirection_table_size = resp->rss_caps.max_rwq_indirection_table_size; - } - } + if (CAN_COPY(device_cap_flags_ex, device_cap_flags_ex)) + attr->device_cap_flags_ex = resp->device_cap_flags_ex; - if (attr_size >= offsetof(struct ibv_device_attr_ex, max_wq_type_rq) + - sizeof(attr->max_wq_type_rq)) { - if (resp->response_length >= - offsetof(struct ib_uverbs_ex_query_device_resp, max_wq_type_rq) + - sizeof(resp->max_wq_type_rq)) - attr->max_wq_type_rq = resp->max_wq_type_rq; + if (CAN_COPY(rss_caps, rss_caps)) { + attr->rss_caps.supported_qpts = resp->rss_caps.supported_qpts; + attr->rss_caps.max_rwq_indirection_tables = + resp->rss_caps.max_rwq_indirection_tables; + attr->rss_caps.max_rwq_indirection_table_size = + 
resp->rss_caps.max_rwq_indirection_table_size; } - if (attr_size >= offsetof(struct ibv_device_attr_ex, raw_packet_caps) + - sizeof(attr->raw_packet_caps)) { - if (resp->response_length >= - offsetof(struct ib_uverbs_ex_query_device_resp, raw_packet_caps) + - sizeof(resp->raw_packet_caps)) - attr->raw_packet_caps = resp->raw_packet_caps; - } + if (CAN_COPY(max_wq_type_rq, max_wq_type_rq)) + attr->max_wq_type_rq = resp->max_wq_type_rq; - if (attr_size >= offsetof(struct ibv_device_attr_ex, tm_caps) + - sizeof(attr->tm_caps)) { - if (resp->response_length >= - offsetof(struct ib_uverbs_ex_query_device_resp, tm_caps) + - sizeof(resp->tm_caps)) { - attr->tm_caps.max_rndv_hdr_size = - resp->tm_caps.max_rndv_hdr_size; - attr->tm_caps.max_num_tags = - resp->tm_caps.max_num_tags; - attr->tm_caps.flags = resp->tm_caps.flags; - attr->tm_caps.max_ops = - resp->tm_caps.max_ops; - attr->tm_caps.max_sge = - resp->tm_caps.max_sge; - } - } + if (CAN_COPY(raw_packet_caps, raw_packet_caps)) + attr->raw_packet_caps = resp->raw_packet_caps; - if (attr_size >= offsetof(struct ibv_device_attr_ex, cq_mod_caps) + - sizeof(attr->cq_mod_caps)) { - if (resp->response_length >= - offsetof(struct ib_uverbs_ex_query_device_resp, cq_moderation_caps) + - sizeof(resp->cq_moderation_caps)) { - attr->cq_mod_caps.max_cq_count = resp->cq_moderation_caps.max_cq_moderation_count; - attr->cq_mod_caps.max_cq_period = resp->cq_moderation_caps.max_cq_moderation_period; - } + if (CAN_COPY(tm_caps, tm_caps)) { + attr->tm_caps.max_rndv_hdr_size = + resp->tm_caps.max_rndv_hdr_size; + attr->tm_caps.max_num_tags = resp->tm_caps.max_num_tags; + attr->tm_caps.flags = resp->tm_caps.flags; + attr->tm_caps.max_ops = resp->tm_caps.max_ops; + attr->tm_caps.max_sge = resp->tm_caps.max_sge; } - if (attr_size >= offsetof(struct ibv_device_attr_ex, max_dm_size) + - sizeof(attr->max_dm_size)) { - if (resp->response_length >= - offsetof(struct ib_uverbs_ex_query_device_resp, max_dm_size) + - sizeof(resp->max_dm_size)) { - attr->max_dm_size = resp->max_dm_size; - } + if (CAN_COPY(cq_mod_caps, cq_moderation_caps)) { + attr->cq_mod_caps.max_cq_count = + resp->cq_moderation_caps.max_cq_moderation_count; + attr->cq_mod_caps.max_cq_period = + resp->cq_moderation_caps.max_cq_moderation_period; } - if (attr_size >= offsetof(struct ibv_device_attr_ex, xrc_odp_caps) + - sizeof(attr->xrc_odp_caps)) { - if (resp->response_length >= - offsetof(struct ib_uverbs_ex_query_device_resp, xrc_odp_caps) + - sizeof(resp->xrc_odp_caps)) { - attr->xrc_odp_caps = resp->xrc_odp_caps; - } - } + if (CAN_COPY(max_dm_size, max_dm_size)) + attr->max_dm_size = resp->max_dm_size; + + if (CAN_COPY(xrc_odp_caps, xrc_odp_caps)) + attr->xrc_odp_caps = resp->xrc_odp_caps; +#undef CAN_COPY return 0; } diff --git a/util/util.h b/util/util.h index 0f2c35cd0647ce..47346ca1bf5841 100644 --- a/util/util.h +++ b/util/util.h @@ -23,6 +23,9 @@ static inline bool __good_snprintf(size_t len, int rc) ((a)->tv_nsec CMP (b)->tv_nsec) : \ ((a)->tv_sec CMP (b)->tv_sec)) +#define offsetofend(_type, _member) \ + (offsetof(_type, _member) + sizeof(((_type *)0)->_member)) + static inline unsigned long align(unsigned long val, unsigned long align) { return (val + align - 1) & ~(align - 1); From patchwork Mon Nov 16 20:23:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 11910771 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org 
From: Jason Gunthorpe
To: , Bob Pearson
Subject: [PATCH 2/9] verbs: Add ibv_cmd_query_device_any()
Date: Mon, 16 Nov 2020 16:23:03 -0400
Message-ID: <2-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com>
In-Reply-To:
<0-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com> References: X-ClientProxiedBy: MN2PR05CA0015.namprd05.prod.outlook.com (2603:10b6:208:c0::28) To DM6PR12MB3834.namprd12.prod.outlook.com (2603:10b6:5:14a::12) MIME-Version: 1.0 X-MS-Exchange-MessageSentRepresentingType: 1 Received: from mlx.ziepe.ca (156.34.48.30) by MN2PR05CA0015.namprd05.prod.outlook.com (2603:10b6:208:c0::28) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.15 via Frontend Transport; Mon, 16 Nov 2020 20:23:13 +0000 Received: from jgg by mlx with local (Exim 4.94) (envelope-from ) id 1kel1q-006l7r-OA; Mon, 16 Nov 2020 16:23:10 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1605558200; bh=8QQ1pUmlvvGcCPSTSwih+uFucHwaDEeekeWuKEelfps=; h=ARC-Seal:ARC-Message-Signature:ARC-Authentication-Results:From:To: Subject:Date:Message-ID:In-Reply-To:References: Content-Transfer-Encoding:Content-Type:X-ClientProxiedBy: MIME-Version:X-MS-Exchange-MessageSentRepresentingType; b=it6lZL0ryFDo010hV42bAJ4vx7S6b5h1scvTrbXT6qOQmgvPusMiaFT42on6Ga3wo 1LrhjA6arh4l8K99m+fLR0hvGBvgRsAV/WQwnf0qzgtNN968v/mMm5B2k3R2roOMqJ FAvlQQ73SkEhn1YCFj9aNXhpkpOBhEC6xSyZCdSCPhZ07me508B+EXdJQzUCrXaupV 2Uk6ds4q35VlNRiuhjsEHtjyI3Md+opvJ/Ll5AmVSuaAnLRpIzAYnUbsO6r7IoU91g 9DKf99Sajs2Zh6ZsiniYa0n8E2vnfUa2xHb/uW8UZN1XbAXfLafmeMkN3UPMFM7jgv h1xIXZeE7eTLg== Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org This implements all the query_device command flows under a single call. Signed-off-by: Jason Gunthorpe --- libibverbs/cmd_device.c | 155 +++++++++++++++++++++++++++++++++++ libibverbs/driver.h | 5 ++ libibverbs/libibverbs.map.in | 1 + 3 files changed, 161 insertions(+) diff --git a/libibverbs/cmd_device.c b/libibverbs/cmd_device.c index 6c8e01ec9866a9..0019784ee779c1 100644 --- a/libibverbs/cmd_device.c +++ b/libibverbs/cmd_device.c @@ -35,6 +35,7 @@ #include #include #include +#include #include @@ -516,3 +517,157 @@ ssize_t _ibv_query_gid_table(struct ibv_context *context, return num_entries; } + +int ibv_cmd_query_device_any(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size, + struct ib_uverbs_ex_query_device_resp *resp, + size_t *resp_size) +{ + struct ib_uverbs_ex_query_device_resp internal_resp; + size_t internal_resp_size; + int err; + + if (input && input->comp_mask) + return EINVAL; + if (attr_size < sizeof(attr->orig_attr)) + return EINVAL; + + if (!resp) { + resp = &internal_resp; + internal_resp_size = sizeof(internal_resp); + resp_size = &internal_resp_size; + } + memset(attr, 0, attr_size); + memset(resp, 0, *resp_size); + + if (attr_size > sizeof(attr->orig_attr)) { + struct ibv_query_device_ex cmd = {}; + + err = execute_cmd_write_ex(context, + IB_USER_VERBS_EX_CMD_QUERY_DEVICE, + &cmd, sizeof(cmd), resp, *resp_size); + if (err) { + if (err != EOPNOTSUPP) + return err; + attr_size = sizeof(attr->orig_attr); + } + } + + if (attr_size == sizeof(attr->orig_attr)) { + struct ibv_query_device cmd = {}; + + err = execute_cmd_write(context, IB_USER_VERBS_CMD_QUERY_DEVICE, + &cmd, sizeof(cmd), &resp->base, + sizeof(resp->base)); + if (err) + return err; + resp->response_length = sizeof(resp->base); + } + + *resp_size = resp->response_length; + attr->orig_attr.node_guid = resp->base.node_guid; + attr->orig_attr.sys_image_guid = resp->base.sys_image_guid; + attr->orig_attr.max_mr_size = resp->base.max_mr_size; + attr->orig_attr.page_size_cap = resp->base.page_size_cap; + 
attr->orig_attr.vendor_id = resp->base.vendor_id; + attr->orig_attr.vendor_part_id = resp->base.vendor_part_id; + attr->orig_attr.hw_ver = resp->base.hw_ver; + attr->orig_attr.max_qp = resp->base.max_qp; + attr->orig_attr.max_qp_wr = resp->base.max_qp_wr; + attr->orig_attr.device_cap_flags = resp->base.device_cap_flags; + attr->orig_attr.max_sge = resp->base.max_sge; + attr->orig_attr.max_sge_rd = resp->base.max_sge_rd; + attr->orig_attr.max_cq = resp->base.max_cq; + attr->orig_attr.max_cqe = resp->base.max_cqe; + attr->orig_attr.max_mr = resp->base.max_mr; + attr->orig_attr.max_pd = resp->base.max_pd; + attr->orig_attr.max_qp_rd_atom = resp->base.max_qp_rd_atom; + attr->orig_attr.max_ee_rd_atom = resp->base.max_ee_rd_atom; + attr->orig_attr.max_res_rd_atom = resp->base.max_res_rd_atom; + attr->orig_attr.max_qp_init_rd_atom = resp->base.max_qp_init_rd_atom; + attr->orig_attr.max_ee_init_rd_atom = resp->base.max_ee_init_rd_atom; + attr->orig_attr.atomic_cap = resp->base.atomic_cap; + attr->orig_attr.max_ee = resp->base.max_ee; + attr->orig_attr.max_rdd = resp->base.max_rdd; + attr->orig_attr.max_mw = resp->base.max_mw; + attr->orig_attr.max_raw_ipv6_qp = resp->base.max_raw_ipv6_qp; + attr->orig_attr.max_raw_ethy_qp = resp->base.max_raw_ethy_qp; + attr->orig_attr.max_mcast_grp = resp->base.max_mcast_grp; + attr->orig_attr.max_mcast_qp_attach = resp->base.max_mcast_qp_attach; + attr->orig_attr.max_total_mcast_qp_attach = + resp->base.max_total_mcast_qp_attach; + attr->orig_attr.max_ah = resp->base.max_ah; + attr->orig_attr.max_fmr = resp->base.max_fmr; + attr->orig_attr.max_map_per_fmr = resp->base.max_map_per_fmr; + attr->orig_attr.max_srq = resp->base.max_srq; + attr->orig_attr.max_srq_wr = resp->base.max_srq_wr; + attr->orig_attr.max_srq_sge = resp->base.max_srq_sge; + attr->orig_attr.max_pkeys = resp->base.max_pkeys; + attr->orig_attr.local_ca_ack_delay = resp->base.local_ca_ack_delay; + attr->orig_attr.phys_port_cnt = resp->base.phys_port_cnt; + +#define CAN_COPY(_ibv_attr, _uverbs_attr) \ + (attr_size >= offsetofend(struct ibv_device_attr_ex, _ibv_attr) && \ + resp->response_length >= \ + offsetofend(struct ib_uverbs_ex_query_device_resp, \ + _uverbs_attr)) + + if (CAN_COPY(odp_caps, odp_caps)) { + attr->odp_caps.general_caps = resp->odp_caps.general_caps; + attr->odp_caps.per_transport_caps.rc_odp_caps = + resp->odp_caps.per_transport_caps.rc_odp_caps; + attr->odp_caps.per_transport_caps.uc_odp_caps = + resp->odp_caps.per_transport_caps.uc_odp_caps; + attr->odp_caps.per_transport_caps.ud_odp_caps = + resp->odp_caps.per_transport_caps.ud_odp_caps; + } + + if (CAN_COPY(completion_timestamp_mask, timestamp_mask)) + attr->completion_timestamp_mask = resp->timestamp_mask; + + if (CAN_COPY(hca_core_clock, hca_core_clock)) + attr->hca_core_clock = resp->hca_core_clock; + + if (CAN_COPY(device_cap_flags_ex, device_cap_flags_ex)) + attr->device_cap_flags_ex = resp->device_cap_flags_ex; + + if (CAN_COPY(rss_caps, rss_caps)) { + attr->rss_caps.supported_qpts = resp->rss_caps.supported_qpts; + attr->rss_caps.max_rwq_indirection_tables = + resp->rss_caps.max_rwq_indirection_tables; + attr->rss_caps.max_rwq_indirection_table_size = + resp->rss_caps.max_rwq_indirection_table_size; + } + + if (CAN_COPY(max_wq_type_rq, max_wq_type_rq)) + attr->max_wq_type_rq = resp->max_wq_type_rq; + + if (CAN_COPY(raw_packet_caps, raw_packet_caps)) + attr->raw_packet_caps = resp->raw_packet_caps; + + if (CAN_COPY(tm_caps, tm_caps)) { + attr->tm_caps.max_rndv_hdr_size = + resp->tm_caps.max_rndv_hdr_size; + 
attr->tm_caps.max_num_tags = resp->tm_caps.max_num_tags; + attr->tm_caps.flags = resp->tm_caps.flags; + attr->tm_caps.max_ops = resp->tm_caps.max_ops; + attr->tm_caps.max_sge = resp->tm_caps.max_sge; + } + + if (CAN_COPY(cq_mod_caps, cq_moderation_caps)) { + attr->cq_mod_caps.max_cq_count = + resp->cq_moderation_caps.max_cq_moderation_count; + attr->cq_mod_caps.max_cq_period = + resp->cq_moderation_caps.max_cq_moderation_period; + } + + if (CAN_COPY(max_dm_size, max_dm_size)) + attr->max_dm_size = resp->max_dm_size; + + if (CAN_COPY(xrc_odp_caps, xrc_odp_caps)) + attr->xrc_odp_caps = resp->xrc_odp_caps; +#undef CAN_COPY + + return 0; +} diff --git a/libibverbs/driver.h b/libibverbs/driver.h index 87d1a030a39c2d..e54db0ea6413e8 100644 --- a/libibverbs/driver.h +++ b/libibverbs/driver.h @@ -460,6 +460,11 @@ int ibv_cmd_create_flow_action_esp(struct ibv_context *ctx, int ibv_cmd_modify_flow_action_esp(struct verbs_flow_action *flow_action, struct ibv_flow_action_esp_attr *attr, struct ibv_command_buffer *driver); +int ibv_cmd_query_device_any(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size, + struct ib_uverbs_ex_query_device_resp *resp, + size_t *resp_size); int ibv_cmd_query_device_ex(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr, size_t attr_size, diff --git a/libibverbs/libibverbs.map.in b/libibverbs/libibverbs.map.in index 7429016aae0f02..c1f7e09b240ab0 100644 --- a/libibverbs/libibverbs.map.in +++ b/libibverbs/libibverbs.map.in @@ -204,6 +204,7 @@ IBVERBS_PRIVATE_@IBVERBS_PABI_VERSION@ { ibv_cmd_post_srq_recv; ibv_cmd_query_context; ibv_cmd_query_device; + ibv_cmd_query_device_any; ibv_cmd_query_device_ex; ibv_cmd_query_mr; ibv_cmd_query_port; From patchwork Mon Nov 16 20:23:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 11910755 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,MSGID_FROM_MTA_HEADER,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 982B6C2D0A3 for ; Mon, 16 Nov 2020 20:23:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 46C4A20782 for ; Mon, 16 Nov 2020 20:23:23 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="aHV+NuqA" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732269AbgKPUXO (ORCPT ); Mon, 16 Nov 2020 15:23:14 -0500 Received: from hqnvemgate24.nvidia.com ([216.228.121.143]:12625 "EHLO hqnvemgate24.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727359AbgKPUXN (ORCPT ); Mon, 16 Nov 2020 15:23:13 -0500 Received: from hqmail.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate24.nvidia.com (using TLS: TLSv1.2, AES256-SHA) id ; Mon, 16 Nov 2020 12:23:23 -0800 Received: from HQMAIL101.nvidia.com (172.20.187.10) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 
From: Jason Gunthorpe
To: , Bob Pearson
Subject: [PATCH 3/9] mlx5: Move context initialization out of mlx5_query_device_ex()
Date: Mon, 16 Nov 2020 16:23:04 -0400
Message-ID: <3-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com>
In-Reply-To: <0-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com>
Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org When the user calls mlx5_query_device_ex() it should not cause the context values to be mutated, only the attribute should be returned. Move this code to a dedicated function that is only called during context setup. Signed-off-by: Jason Gunthorpe --- providers/mlx5/mlx5.c | 10 +------ providers/mlx5/mlx5.h | 1 + providers/mlx5/verbs.c | 62 ++++++++++++++++++++++++++++-------------- 3 files changed, 44 insertions(+), 29 deletions(-) diff --git a/providers/mlx5/mlx5.c b/providers/mlx5/mlx5.c index 1378acf2e2f3af..06b9a52ebb3019 100644 --- a/providers/mlx5/mlx5.c +++ b/providers/mlx5/mlx5.c @@ -1373,7 +1373,6 @@ static int mlx5_set_context(struct mlx5_context *context, { struct verbs_context *v_ctx = &context->ibv_ctx; struct ibv_port_attr port_attr = {}; - struct ibv_device_attr_ex device_attr = {}; int cmd_fd = v_ctx->context.cmd_fd; struct mlx5_device *mdev = to_mdev(v_ctx->context.device); struct ibv_device *ibdev = v_ctx->context.device; @@ -1518,14 +1517,7 @@ bf_done: goto err_free; } - if (!mlx5_query_device_ex(&v_ctx->context, NULL, &device_attr, - sizeof(struct ibv_device_attr_ex))) { - context->cached_device_cap_flags = - device_attr.orig_attr.device_cap_flags; - context->atomic_cap = device_attr.orig_attr.atomic_cap; - context->cached_tso_caps = device_attr.tso_caps; - context->max_dm_size = device_attr.max_dm_size; - } + mlx5_query_device_ctx(context); for (j = 0; j < min(MLX5_MAX_PORTS_NUM, context->num_ports); ++j) { memset(&port_attr, 0, sizeof(port_attr)); diff --git a/providers/mlx5/mlx5.h b/providers/mlx5/mlx5.h index 782d29bf757e0b..72e710b7b5e4aa 100644 --- a/providers/mlx5/mlx5.h +++ b/providers/mlx5/mlx5.h @@ -878,6 +878,7 @@ __be32 *mlx5_alloc_dbrec(struct mlx5_context *context, struct ibv_pd *pd, void mlx5_free_db(struct mlx5_context *context, __be32 *db, struct ibv_pd *pd, bool custom_alloc); +void mlx5_query_device_ctx(struct mlx5_context *mctx); int mlx5_query_device(struct ibv_context *context, struct ibv_device_attr *attr); int mlx5_query_device_ex(struct ibv_context *context, diff --git a/providers/mlx5/verbs.c b/providers/mlx5/verbs.c index 3622cae1df5017..42c984033d8eaa 100644 --- a/providers/mlx5/verbs.c +++ b/providers/mlx5/verbs.c @@ -3450,19 +3450,19 @@ static void get_pci_atomic_caps(struct ibv_context *context, } } -static void get_lag_caps(struct ibv_context *ctx) +static void get_lag_caps(struct mlx5_context *mctx) { uint16_t opmod = MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE | HCA_CAP_OPMOD_GET_CUR; uint32_t out[DEVX_ST_SZ_DW(query_hca_cap_out)] = {}; uint32_t in[DEVX_ST_SZ_DW(query_hca_cap_in)] = {}; - struct mlx5_context *mctx = to_mctx(ctx); int ret; DEVX_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); DEVX_SET(query_hca_cap_in, in, op_mod, opmod); - ret = mlx5dv_devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + ret = mlx5dv_devx_general_cmd(&mctx->ibv_ctx.context, in, sizeof(in), + out, sizeof(out)); if (ret) return; @@ -3512,6 +3512,41 @@ int mlx5_query_device_ex(struct ibv_context *context, attr->packet_pacing_caps.supported_qpts = resp.packet_pacing_caps.supported_qpts; + major = (raw_fw_ver >> 32) & 0xffff; + minor = (raw_fw_ver >> 16) & 0xffff; + sub_minor = raw_fw_ver & 0xffff; + a = &attr->orig_attr; + snprintf(a->fw_ver, sizeof(a->fw_ver), "%d.%d.%04d", + major, minor, sub_minor); + + if (attr_size >= offsetof(struct ibv_device_attr_ex, pci_atomic_caps) + + sizeof(attr->pci_atomic_caps)) + get_pci_atomic_caps(context, attr); + + return 0; +} + +void 
mlx5_query_device_ctx(struct mlx5_context *mctx) +{ + struct ibv_device_attr_ex device_attr; + struct mlx5_query_device_ex_resp resp; + size_t resp_size = sizeof(resp); + + get_lag_caps(mctx); + + if (!(mctx->cmds_supp_uhw & MLX5_USER_CMDS_SUPP_UHW_QUERY_DEVICE)) + return; + + if (ibv_cmd_query_device_any(&mctx->ibv_ctx.context, NULL, &device_attr, + sizeof(device_attr), &resp.ibv_resp, + &resp_size)) + return; + + mctx->cached_device_cap_flags = device_attr.orig_attr.device_cap_flags; + mctx->atomic_cap = device_attr.orig_attr.atomic_cap; + mctx->cached_tso_caps = device_attr.tso_caps; + mctx->max_dm_size = device_attr.max_dm_size; + if (resp.mlx5_ib_support_multi_pkt_send_wqes & MLX5_IB_ALLOW_MPW) mctx->vendor_cap_flags |= MLX5_VENDOR_CAP_FLAGS_MPW_ALLOWED; @@ -3519,7 +3554,8 @@ int mlx5_query_device_ex(struct ibv_context *context, mctx->vendor_cap_flags |= MLX5_VENDOR_CAP_FLAGS_ENHANCED_MPW; mctx->cqe_comp_caps.max_num = resp.cqe_comp_caps.max_num; - mctx->cqe_comp_caps.supported_format = resp.cqe_comp_caps.supported_format; + mctx->cqe_comp_caps.supported_format = + resp.cqe_comp_caps.supported_format; mctx->sw_parsing_caps.sw_parsing_offloads = resp.sw_parsing_caps.sw_parsing_offloads; mctx->sw_parsing_caps.supported_qpts = @@ -3544,25 +3580,11 @@ int mlx5_query_device_ex(struct ibv_context *context, mctx->vendor_cap_flags |= MLX5_VENDOR_CAP_FLAGS_CQE_128B_PAD; if (resp.flags & MLX5_IB_QUERY_DEV_RESP_PACKET_BASED_CREDIT_MODE) - mctx->vendor_cap_flags |= MLX5_VENDOR_CAP_FLAGS_PACKET_BASED_CREDIT_MODE; + mctx->vendor_cap_flags |= + MLX5_VENDOR_CAP_FLAGS_PACKET_BASED_CREDIT_MODE; if (resp.flags & MLX5_IB_QUERY_DEV_RESP_FLAGS_SCAT2CQE_DCT) mctx->vendor_cap_flags |= MLX5_VENDOR_CAP_FLAGS_SCAT2CQE_DCT; - - major = (raw_fw_ver >> 32) & 0xffff; - minor = (raw_fw_ver >> 16) & 0xffff; - sub_minor = raw_fw_ver & 0xffff; - a = &attr->orig_attr; - snprintf(a->fw_ver, sizeof(a->fw_ver), "%d.%d.%04d", - major, minor, sub_minor); - - if (attr_size >= offsetof(struct ibv_device_attr_ex, pci_atomic_caps) + - sizeof(attr->pci_atomic_caps)) - get_pci_atomic_caps(context, attr); - - get_lag_caps(context); - - return 0; } static int rwq_sig_enabled(struct ibv_context *context) From patchwork Mon Nov 16 20:23:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 11910761 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,MSGID_FROM_MTA_HEADER,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB0FCC63777 for ; Mon, 16 Nov 2020 20:23:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6475220781 for ; Mon, 16 Nov 2020 20:23:24 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="aM1md6eq" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732273AbgKPUXU (ORCPT ); Mon, 16 Nov 2020 15:23:20 -0500 Received: from nat-hk.nvidia.com ([203.18.50.4]:43069 "EHLO nat-hk.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
From: Jason Gunthorpe
To: , Bob Pearson
CC: Gal Pressman
Subject: [PATCH 4/9] efa: Move the context initialization out of efa_query_device_ex()
Date: Mon, 16 Nov 2020 16:23:05 -0400
Message-ID: <4-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com>
In-Reply-To: <0-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com>
When the user calls efa_query_device_ex() it should not cause the context
values to be mutated; only the attribute should be returned. Move this code
to a dedicated function that is only called during context setup.

Cc: Gal Pressman
Signed-off-by: Jason Gunthorpe
---
 providers/efa/efa.c   | 14 +------------
 providers/efa/verbs.c | 46 ++++++++++++++++++++++++++++++++++--------
 providers/efa/verbs.h |  1 +
 3 files changed, 40 insertions(+), 21 deletions(-)

diff --git a/providers/efa/efa.c b/providers/efa/efa.c
index 35f9b246a711ec..b24c14f7fa1fe1 100644
--- a/providers/efa/efa.c
+++ b/providers/efa/efa.c
@@ -54,10 +54,7 @@ static struct verbs_context *efa_alloc_context(struct ibv_device *vdev,
 {
 	struct efa_alloc_ucontext_resp resp = {};
 	struct efa_alloc_ucontext cmd = {};
-	struct ibv_device_attr_ex attr;
-	unsigned int qp_table_sz;
 	struct efa_context *ctx;
-	int err;
 
 	cmd.comp_mask |= EFA_ALLOC_UCONTEXT_CMD_COMP_TX_BATCH;
 	cmd.comp_mask |= EFA_ALLOC_UCONTEXT_CMD_COMP_MIN_SQ_WR;
@@ -86,17 +83,8 @@ static struct verbs_context *efa_alloc_context(struct ibv_device *vdev,
 	verbs_set_ops(&ctx->ibvctx, &efa_ctx_ops);
 
-	err = efa_query_device_ex(&ctx->ibvctx.context, NULL, &attr,
-				  sizeof(attr));
-	if (err)
+	if (efa_query_device_ctx(ctx))
 		goto err_free_spinlock;
-
-	qp_table_sz = roundup_pow_of_two(attr.orig_attr.max_qp);
-	ctx->qp_table_sz_m1 = qp_table_sz - 1;
-	ctx->qp_table = calloc(qp_table_sz, sizeof(*ctx->qp_table));
-	if (!ctx->qp_table)
-		goto err_free_spinlock;
-
 	return &ctx->ibvctx;
 
 err_free_spinlock:
diff --git a/providers/efa/verbs.c b/providers/efa/verbs.c
index 1a9633155c62f8..52d6285f1f409c 100644
--- a/providers/efa/verbs.c
+++ b/providers/efa/verbs.c
@@ -106,14 +106,6 @@ int efa_query_device_ex(struct ibv_context *context,
 	if (err)
 		return err;
 
-	ctx->device_caps = resp.device_caps;
-	ctx->max_sq_wr = resp.max_sq_wr;
-	ctx->max_rq_wr = resp.max_rq_wr;
-	ctx->max_sq_sge = resp.max_sq_sge;
-	ctx->max_rq_sge = resp.max_rq_sge;
-	ctx->max_rdma_size = resp.max_rdma_size;
-	ctx->max_wr_rdma_sge = a->max_sge_rd;
-
 	a->max_qp_wr = min_t(int, a->max_qp_wr,
 			     ctx->max_llq_size / sizeof(struct efa_io_tx_wqe));
 	snprintf(a->fw_ver, sizeof(a->fw_ver), "%u.%u.%u.%u",
@@ -122,6 +114,44 @@ int efa_query_device_ex(struct ibv_context *context,
 	return 0;
 }
 
+int efa_query_device_ctx(struct efa_context *ctx)
+{
+	struct ibv_device_attr_ex attr;
+	struct efa_query_device_ex_resp resp;
+	size_t resp_size = sizeof(resp);
+	unsigned int qp_table_sz;
+	int err;
+
+	if (ctx->cmds_supp_udata_mask & EFA_USER_CMDS_SUPP_UDATA_QUERY_DEVICE) {
+		err = ibv_cmd_query_device_any(&ctx->ibvctx.context, NULL,
+					       &attr, sizeof(attr),
+					       &resp.ibv_resp, &resp_size);
+		if (err)
+			return err;
+
+		ctx->device_caps = resp.device_caps;
+		ctx->max_sq_wr = resp.max_sq_wr;
+		ctx->max_rq_wr = resp.max_rq_wr;
+		ctx->max_sq_sge = resp.max_sq_sge;
+		ctx->max_rq_sge = resp.max_rq_sge;
+		ctx->max_rdma_size = resp.max_rdma_size;
+		ctx->max_wr_rdma_sge = attr.orig_attr.max_sge_rd;
+	} else {
+		err = ibv_cmd_query_device_any(&ctx->ibvctx.context, NULL,
+					       &attr,
sizeof(attr.orig_attr), + NULL, NULL); + if (err) + return err; + } + + qp_table_sz = roundup_pow_of_two(attr.orig_attr.max_qp); + ctx->qp_table_sz_m1 = qp_table_sz - 1; + ctx->qp_table = calloc(qp_table_sz, sizeof(*ctx->qp_table)); + if (!ctx->qp_table) + return ENOMEM; + return 0; +} + int efadv_query_device(struct ibv_context *ibvctx, struct efadv_device_attr *attr, uint32_t inlen) diff --git a/providers/efa/verbs.h b/providers/efa/verbs.h index da022e615af064..3b0e4e0d498761 100644 --- a/providers/efa/verbs.h +++ b/providers/efa/verbs.h @@ -9,6 +9,7 @@ #include #include +int efa_query_device_ctx(struct efa_context *ctx); int efa_query_device(struct ibv_context *uctx, struct ibv_device_attr *attr); int efa_query_port(struct ibv_context *uctx, uint8_t port, struct ibv_port_attr *attr); From patchwork Mon Nov 16 20:23:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 11910757 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,MSGID_FROM_MTA_HEADER,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8AEAC55ABD for ; Mon, 16 Nov 2020 20:23:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7CCAB20781 for ; Mon, 16 Nov 2020 20:23:23 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="a2pQPspJ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727359AbgKPUXO (ORCPT ); Mon, 16 Nov 2020 15:23:14 -0500 Received: from hqnvemgate26.nvidia.com ([216.228.121.65]:3319 "EHLO hqnvemgate26.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732263AbgKPUXN (ORCPT ); Mon, 16 Nov 2020 15:23:13 -0500 Received: from hqmail.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate26.nvidia.com (using TLS: TLSv1.2, AES256-SHA) id ; Mon, 16 Nov 2020 12:23:17 -0800 Received: from HQMAIL101.nvidia.com (172.20.187.10) by HQMAIL109.nvidia.com (172.20.187.15) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 16 Nov 2020 20:23:13 +0000 Received: from NAM11-CO1-obe.outbound.protection.outlook.com (104.47.56.175) by HQMAIL101.nvidia.com (172.20.187.10) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Mon, 16 Nov 2020 20:23:13 +0000 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=lHYZPW32qZBG26Wu7wUixUoNs9OsY4SGDNu0aFGd9K7O16wPo6VjPyq94tEYQPJQFkiVTDFGBKhUikkuIivjANhma7OAki9jcjFCGedVINRXLwKMAMnhqDgsPh07lI3kJU+e33Kgb7b2swJKn9JAtnsPiVv4BCLSgEhTQk4kBwHItYbq3QGn5YdvxP420oUPfUbHF2Y/C8LrdwDKDw16EFdKPhpEKhlN5+ezWocmo9sYMIT4FbzPYHyjnEfYogjc371bPfnzaLNSoEAQqrfk54d+ehQiLsgBLSbfu3fmotKN3jytZGHumld84EfafiudIkB5W4S2v2BdoVDhIGLX/g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=wOR8kxl7+Glby/qXr/Fr3WAwO0foD8dN/e8zqeHT6t4=; 
From: Jason Gunthorpe
To: , Bob Pearson
Subject: [PATCH 5/9] mlx4: Move the context initialization out of mlx4_query_device_ex()
Date: Mon, 16 Nov 2020 16:23:06 -0400
Message-ID: <5-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com>
In-Reply-To: <0-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com>

When the user calls mlx4_query_device_ex() it should not cause the context
values to be mutated; only the attribute should be returned. Move this code
to a dedicated function that is only called during context setup.
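The same split is used for mlx5, efa and mlx4 in this series: the query
entry point stays a pure read of the attributes, while a helper called
exactly once from context allocation does the caching. The stand-alone
sketch below only illustrates that shape; the names and values
(toy_context, toy_query_device_ex, 16384, 32) are made up for the example
and are not code from this patch.

/* Illustration of the "pure query vs. one-time cached setup" split. */
#include <stdio.h>
#include <string.h>

struct toy_device_attr {
	int max_qp_wr;
	int max_sge;
};

struct toy_context {
	/* cached exactly once, at context setup */
	int cached_max_qp_wr;
	int cached_max_sge;
};

/* Query path: fills *attr, never touches the context. */
static int toy_query_device_ex(struct toy_device_attr *attr)
{
	memset(attr, 0, sizeof(*attr));
	attr->max_qp_wr = 16384;	/* stand-in for the kernel's answer */
	attr->max_sge = 32;
	return 0;
}

/* Setup path: the only place that mutates the context. */
static void toy_query_device_ctx(struct toy_context *ctx)
{
	struct toy_device_attr attr;

	if (toy_query_device_ex(&attr))
		return;			/* keep the defaults on failure */
	ctx->cached_max_qp_wr = attr.max_qp_wr;
	ctx->cached_max_sge = attr.max_sge;
}

int main(void)
{
	struct toy_context ctx = {};
	struct toy_device_attr attr;

	toy_query_device_ctx(&ctx);	/* called once from context allocation */
	toy_query_device_ex(&attr);	/* later user queries leave ctx alone */
	printf("cached: max_qp_wr=%d max_sge=%d\n",
	       ctx.cached_max_qp_wr, ctx.cached_max_sge);
	return 0;
}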
Signed-off-by: Jason Gunthorpe --- providers/mlx4/mlx4.c | 33 +------------------------------- providers/mlx4/mlx4.h | 1 + providers/mlx4/verbs.c | 43 +++++++++++++++++++++++++++++++++++------- 3 files changed, 38 insertions(+), 39 deletions(-) diff --git a/providers/mlx4/mlx4.c b/providers/mlx4/mlx4.c index c4a3c557fea426..619b841d788cb2 100644 --- a/providers/mlx4/mlx4.c +++ b/providers/mlx4/mlx4.c @@ -136,28 +136,6 @@ static const struct verbs_context_ops mlx4_ctx_ops = { .free_context = mlx4_free_context, }; -static int mlx4_map_internal_clock(struct mlx4_device *mdev, - struct ibv_context *ibv_ctx) -{ - struct mlx4_context *context = to_mctx(ibv_ctx); - void *hca_clock_page; - - hca_clock_page = mmap(NULL, mdev->page_size, - PROT_READ, MAP_SHARED, ibv_ctx->cmd_fd, - mdev->page_size * 3); - - if (hca_clock_page == MAP_FAILED) { - fprintf(stderr, PFX - "Warning: Timestamp available,\n" - "but failed to mmap() hca core clock page.\n"); - return -1; - } - - context->hca_core_clock = hca_clock_page + - (context->core_clock.offset & (mdev->page_size - 1)); - return 0; -} - static struct verbs_context *mlx4_alloc_context(struct ibv_device *ibdev, int cmd_fd, void *private_data) @@ -170,7 +148,6 @@ static struct verbs_context *mlx4_alloc_context(struct ibv_device *ibdev, __u16 bf_reg_size; struct mlx4_device *dev = to_mdev(ibdev); struct verbs_context *verbs_ctx; - struct ibv_device_attr_ex dev_attrs; context = verbs_init_and_alloc_context(ibdev, cmd_fd, context, ibv_ctx, RDMA_DRIVER_MLX4); @@ -242,15 +219,7 @@ static struct verbs_context *mlx4_alloc_context(struct ibv_device *ibdev, verbs_set_ops(verbs_ctx, &mlx4_ctx_ops); - context->hca_core_clock = NULL; - memset(&dev_attrs, 0, sizeof(dev_attrs)); - if (!mlx4_query_device_ex(&verbs_ctx->context, NULL, &dev_attrs, - sizeof(struct ibv_device_attr_ex))) { - context->max_qp_wr = dev_attrs.orig_attr.max_qp_wr; - context->max_sge = dev_attrs.orig_attr.max_sge; - if (context->core_clock.offset_valid) - mlx4_map_internal_clock(dev, &verbs_ctx->context); - } + mlx4_query_device_ctx(dev, context); return verbs_ctx; diff --git a/providers/mlx4/mlx4.h b/providers/mlx4/mlx4.h index 479c39d0a69fc4..3c0787144e7e51 100644 --- a/providers/mlx4/mlx4.h +++ b/providers/mlx4/mlx4.h @@ -304,6 +304,7 @@ __be32 *mlx4_alloc_db(struct mlx4_context *context, enum mlx4_db_type type); void mlx4_free_db(struct mlx4_context *context, enum mlx4_db_type type, __be32 *db); +void mlx4_query_device_ctx(struct mlx4_device *mdev, struct mlx4_context *mctx); int mlx4_query_device(struct ibv_context *context, struct ibv_device_attr *attr); int mlx4_query_device_ex(struct ibv_context *context, diff --git a/providers/mlx4/verbs.c b/providers/mlx4/verbs.c index 512297f2eebac0..4fe5c1d87d6d91 100644 --- a/providers/mlx4/verbs.c +++ b/providers/mlx4/verbs.c @@ -38,6 +38,7 @@ #include #include #include +#include #include @@ -70,7 +71,6 @@ int mlx4_query_device_ex(struct ibv_context *context, struct ibv_device_attr_ex *attr, size_t attr_size) { - struct mlx4_context *mctx = to_mctx(context); struct mlx4_query_device_ex_resp resp = {}; struct mlx4_query_device_ex cmd = {}; uint64_t raw_fw_ver; @@ -90,12 +90,6 @@ int mlx4_query_device_ex(struct ibv_context *context, attr->tso_caps.max_tso = resp.tso_caps.max_tso; attr->tso_caps.supported_qpts = resp.tso_caps.supported_qpts; - if (resp.comp_mask & MLX4_IB_QUERY_DEV_RESP_MASK_CORE_CLOCK_OFFSET) { - mctx->core_clock.offset = resp.hca_core_clock_offset; - mctx->core_clock.offset_valid = 1; - } - mctx->max_inl_recv_sz = resp.max_inl_recv_sz; 
- major = (raw_fw_ver >> 32) & 0xffff; minor = (raw_fw_ver >> 16) & 0xffff; sub_minor = raw_fw_ver & 0xffff; @@ -106,6 +100,41 @@ int mlx4_query_device_ex(struct ibv_context *context, return 0; } +void mlx4_query_device_ctx(struct mlx4_device *mdev, struct mlx4_context *mctx) +{ + struct ibv_device_attr_ex device_attr; + struct mlx4_query_device_ex_resp resp; + size_t resp_size = sizeof(resp); + + if (ibv_cmd_query_device_any(&mctx->ibv_ctx.context, NULL, + &device_attr, sizeof(device_attr), + &resp.ibv_resp, &resp_size)) + return; + + mctx->max_qp_wr = device_attr.orig_attr.max_qp_wr; + mctx->max_sge = device_attr.orig_attr.max_sge; + mctx->max_inl_recv_sz = resp.max_inl_recv_sz; + + if (resp.comp_mask & MLX4_IB_QUERY_DEV_RESP_MASK_CORE_CLOCK_OFFSET) { + void *hca_clock_page; + + mctx->core_clock.offset = resp.hca_core_clock_offset; + mctx->core_clock.offset_valid = 1; + + hca_clock_page = + mmap(NULL, mdev->page_size, PROT_READ, MAP_SHARED, + mctx->ibv_ctx.context.cmd_fd, mdev->page_size * 3); + if (hca_clock_page != MAP_FAILED) + mctx->hca_core_clock = + hca_clock_page + (mctx->core_clock.offset & + (mdev->page_size - 1)); + else + fprintf(stderr, PFX + "Warning: Timestamp available,\n" + "but failed to mmap() hca core clock page.\n"); + } +} + static int mlx4_read_clock(struct ibv_context *context, uint64_t *cycles) { uint32_t clockhi, clocklo, clockhi1; From patchwork Mon Nov 16 20:23:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 11910759 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,MSGID_FROM_MTA_HEADER,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B1A6AC64E75 for ; Mon, 16 Nov 2020 20:23:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 63AC320781 for ; Mon, 16 Nov 2020 20:23:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="WwdEoURs" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732296AbgKPUXW (ORCPT ); Mon, 16 Nov 2020 15:23:22 -0500 Received: from nat-hk.nvidia.com ([203.18.50.4]:41714 "EHLO nat-hk.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732338AbgKPUXV (ORCPT ); Mon, 16 Nov 2020 15:23:21 -0500 Received: from HKMAIL102.nvidia.com (Not Verified[10.18.92.77]) by nat-hk.nvidia.com (using TLS: TLSv1.2, AES256-SHA) id ; Tue, 17 Nov 2020 04:23:20 +0800 Received: from HKMAIL103.nvidia.com (10.18.16.12) by HKMAIL102.nvidia.com (10.18.16.11) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 16 Nov 2020 20:23:20 +0000 Received: from NAM11-CO1-obe.outbound.protection.outlook.com (104.47.56.172) by HKMAIL103.nvidia.com (10.18.16.12) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Mon, 16 Nov 2020 20:23:20 +0000 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; 
From: Jason Gunthorpe To: , Bob Pearson Subject: [PATCH 6/9] providers: Remove normal query_device() from providers that have _ex Date: Mon, 16 Nov 2020 16:23:07 -0400 Message-ID: <6-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com> In-Reply-To: <0-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com> List-ID: X-Mailing-List: linux-rdma@vger.kernel.org The ex callback can implement both versions, no reason for duplicate code in two paths. Have the core code call the _ex version instead.
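The trick used by the dummy_ops fallback in the diff below is worth spelling out: because struct ibv_device_attr is the leading orig_attr member of struct ibv_device_attr_ex, a caller's plain attr can be handed to the _ex callback as an attr_ex whose usable size is only the legacy prefix, and the callback must not write past the size it was given. The following standalone sketch shows the idea with toy structure names (toy_attr, toy_attr_ex, toy_query_device_ex are illustrative stand-ins, not the rdma-core definitions):

#include <stddef.h>
#include <stdio.h>

/* Toy stand-ins for ibv_device_attr / ibv_device_attr_ex; NOT the real
 * rdma-core definitions.  The legacy struct must be the first member of
 * the extended one for the trick below to work. */
struct toy_attr {
	char fw_ver[16];
	int max_qp_wr;
};

struct toy_attr_ex {
	struct toy_attr orig_attr;	/* legacy block comes first */
	unsigned long extended_caps;	/* only valid for big-enough callers */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define offsetofend(type, member) \
	(offsetof(type, member) + sizeof(((type *)0)->member))

/* "_ex"-style query: fills only what fits inside attr_size. */
static int toy_query_device_ex(struct toy_attr_ex *attr, size_t attr_size)
{
	snprintf(attr->orig_attr.fw_ver, sizeof(attr->orig_attr.fw_ver),
		 "%d.%d.%03d", 2, 42, 500);
	attr->orig_attr.max_qp_wr = 16384;

	/* Extended output is guarded by the caller-provided size. */
	if (attr_size >= offsetofend(struct toy_attr_ex, extended_caps))
		attr->extended_caps = 0xf00;
	return 0;
}

/* Legacy entry point: forward to the _ex callback with a size that
 * covers only the legacy prefix, like the dummy_ops shim in the diff
 * that follows. */
static int toy_query_device(struct toy_attr *attr)
{
	return toy_query_device_ex(container_of(attr, struct toy_attr_ex,
						orig_attr),
				   sizeof(*attr));
}

int main(void)
{
	struct toy_attr attr;

	if (!toy_query_device(&attr))
		printf("fw %s, max_qp_wr %d\n", attr.fw_ver, attr.max_qp_wr);
	return 0;
}

The cast is only sound because orig_attr sits at offset zero and the callee checks the advertised size before touching any extended field; that size check is exactly what the converted providers do with offsetofend() in the later patches of this series.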
Signed-off-by: Jason Gunthorpe Reviewed-by: Gal Pressman --- libibverbs/dummy_ops.c | 9 ++++- providers/efa/efa.c | 1 - providers/efa/verbs.c | 36 +++++--------------- providers/efa/verbs.h | 1 - providers/mlx4/mlx4.c | 1 - providers/mlx4/mlx4.h | 2 -- providers/mlx4/verbs.c | 45 +++++++++---------------- providers/mlx5/mlx5.c | 1 - providers/mlx5/mlx5.h | 2 -- providers/mlx5/verbs.c | 75 +++++++++++++++++------------------------- 10 files changed, 61 insertions(+), 112 deletions(-) diff --git a/libibverbs/dummy_ops.c b/libibverbs/dummy_ops.c index e5af9e4eac8e34..711dfafb5caed5 100644 --- a/libibverbs/dummy_ops.c +++ b/libibverbs/dummy_ops.c @@ -380,7 +380,14 @@ static int post_srq_recv(struct ibv_srq *srq, struct ibv_recv_wr *recv_wr, static int query_device(struct ibv_context *context, struct ibv_device_attr *device_attr) { - return EOPNOTSUPP; + const struct verbs_context_ops *ops = get_ops(context); + + if (!ops->query_device_ex) + return EOPNOTSUPP; + return ops->query_device_ex( + context, NULL, + container_of(device_attr, struct ibv_device_attr_ex, orig_attr), + sizeof(*device_attr)); } /* Provide a generic implementation for all providers that don't implement diff --git a/providers/efa/efa.c b/providers/efa/efa.c index b24c14f7fa1fe1..f6d314dad51e58 100644 --- a/providers/efa/efa.c +++ b/providers/efa/efa.c @@ -40,7 +40,6 @@ static const struct verbs_context_ops efa_ctx_ops = { .poll_cq = efa_poll_cq, .post_recv = efa_post_recv, .post_send = efa_post_send, - .query_device = efa_query_device, .query_device_ex = efa_query_device_ex, .query_port = efa_query_port, .query_qp = efa_query_qp, diff --git a/providers/efa/verbs.c b/providers/efa/verbs.c index 52d6285f1f409c..d50206c13d4295 100644 --- a/providers/efa/verbs.c +++ b/providers/efa/verbs.c @@ -56,27 +56,6 @@ struct efa_wq_init_attr { uint16_t sub_cq_idx; }; -int efa_query_device(struct ibv_context *ibvctx, - struct ibv_device_attr *dev_attr) -{ - struct efa_context *ctx = to_efa_context(ibvctx); - struct ibv_query_device cmd; - uint8_t fw_ver[8]; - int err; - - err = ibv_cmd_query_device(ibvctx, dev_attr, (uint64_t *)&fw_ver, - &cmd, sizeof(cmd)); - if (err) - return err; - - dev_attr->max_qp_wr = min_t(int, dev_attr->max_qp_wr, - ctx->max_llq_size / sizeof(struct efa_io_tx_wqe)); - snprintf(dev_attr->fw_ver, sizeof(dev_attr->fw_ver), "%u.%u.%u.%u", - fw_ver[0], fw_ver[1], fw_ver[2], fw_ver[3]); - - return 0; -} - int efa_query_port(struct ibv_context *ibvctx, uint8_t port, struct ibv_port_attr *port_attr) { @@ -91,23 +70,24 @@ int efa_query_device_ex(struct ibv_context *context, size_t attr_size) { struct efa_context *ctx = to_efa_context(context); - int cmd_supp_uhw = ctx->cmds_supp_udata_mask & - EFA_USER_CMDS_SUPP_UDATA_QUERY_DEVICE; struct ibv_device_attr *a = &attr->orig_attr; struct efa_query_device_ex_resp resp = {}; - struct ibv_query_device_ex cmd = {}; + size_t resp_size = (ctx->cmds_supp_udata_mask & + EFA_USER_CMDS_SUPP_UDATA_QUERY_DEVICE) ? + sizeof(resp) : + sizeof(resp.ibv_resp); uint8_t fw_ver[8]; int err; - err = ibv_cmd_query_device_ex( - context, input, attr, attr_size, (uint64_t *)&fw_ver, &cmd, - sizeof(cmd), &resp.ibv_resp, - cmd_supp_uhw ? 
sizeof(resp) : sizeof(resp.ibv_resp)); + err = ibv_cmd_query_device_any(context, input, attr, attr_size, + &resp.ibv_resp, &resp_size); if (err) return err; a->max_qp_wr = min_t(int, a->max_qp_wr, ctx->max_llq_size / sizeof(struct efa_io_tx_wqe)); + memcpy(fw_ver, &resp.ibv_resp.base.fw_ver, + sizeof(resp.ibv_resp.base.fw_ver)); snprintf(a->fw_ver, sizeof(a->fw_ver), "%u.%u.%u.%u", fw_ver[0], fw_ver[1], fw_ver[2], fw_ver[3]); diff --git a/providers/efa/verbs.h b/providers/efa/verbs.h index 3b0e4e0d498761..b7ae3f0a15c00c 100644 --- a/providers/efa/verbs.h +++ b/providers/efa/verbs.h @@ -10,7 +10,6 @@ #include int efa_query_device_ctx(struct efa_context *ctx); -int efa_query_device(struct ibv_context *uctx, struct ibv_device_attr *attr); int efa_query_port(struct ibv_context *uctx, uint8_t port, struct ibv_port_attr *attr); int efa_query_device_ex(struct ibv_context *context, diff --git a/providers/mlx4/mlx4.c b/providers/mlx4/mlx4.c index 619b841d788cb2..1e71cde4a1f9dc 100644 --- a/providers/mlx4/mlx4.c +++ b/providers/mlx4/mlx4.c @@ -84,7 +84,6 @@ static const struct verbs_match_ent hca_table[] = { }; static const struct verbs_context_ops mlx4_ctx_ops = { - .query_device = mlx4_query_device, .query_port = mlx4_query_port, .alloc_pd = mlx4_alloc_pd, .dealloc_pd = mlx4_free_pd, diff --git a/providers/mlx4/mlx4.h b/providers/mlx4/mlx4.h index 3c0787144e7e51..6c6ffc77657463 100644 --- a/providers/mlx4/mlx4.h +++ b/providers/mlx4/mlx4.h @@ -305,8 +305,6 @@ void mlx4_free_db(struct mlx4_context *context, enum mlx4_db_type type, __be32 *db); void mlx4_query_device_ctx(struct mlx4_device *mdev, struct mlx4_context *mctx); -int mlx4_query_device(struct ibv_context *context, - struct ibv_device_attr *attr); int mlx4_query_device_ex(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr, diff --git a/providers/mlx4/verbs.c b/providers/mlx4/verbs.c index 4fe5c1d87d6d91..ea8e882bb363ba 100644 --- a/providers/mlx4/verbs.c +++ b/providers/mlx4/verbs.c @@ -45,51 +45,36 @@ #include "mlx4.h" #include "mlx4-abi.h" -int mlx4_query_device(struct ibv_context *context, struct ibv_device_attr *attr) -{ - struct ibv_query_device cmd; - uint64_t raw_fw_ver; - unsigned major, minor, sub_minor; - int ret; - - ret = ibv_cmd_query_device(context, attr, &raw_fw_ver, &cmd, sizeof cmd); - if (ret) - return ret; - - major = (raw_fw_ver >> 32) & 0xffff; - minor = (raw_fw_ver >> 16) & 0xffff; - sub_minor = raw_fw_ver & 0xffff; - - snprintf(attr->fw_ver, sizeof attr->fw_ver, - "%d.%d.%03d", major, minor, sub_minor); - - return 0; -} - int mlx4_query_device_ex(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr, size_t attr_size) { - struct mlx4_query_device_ex_resp resp = {}; - struct mlx4_query_device_ex cmd = {}; + struct mlx4_query_device_ex_resp resp; + size_t resp_size = sizeof(resp); uint64_t raw_fw_ver; unsigned sub_minor; unsigned major; unsigned minor; int err; - err = ibv_cmd_query_device_ex(context, input, attr, attr_size, - &raw_fw_ver, &cmd.ibv_cmd, sizeof(cmd), - &resp.ibv_resp, sizeof(resp)); + err = ibv_cmd_query_device_any(context, input, attr, attr_size, + &resp.ibv_resp, &resp_size); if (err) return err; - attr->rss_caps.rx_hash_fields_mask = resp.rss_caps.rx_hash_fields_mask; - attr->rss_caps.rx_hash_function = resp.rss_caps.rx_hash_function; - attr->tso_caps.max_tso = resp.tso_caps.max_tso; - attr->tso_caps.supported_qpts = resp.tso_caps.supported_qpts; + if (attr_size >= offsetofend(struct 
ibv_device_attr_ex, rss_caps)) { + attr->rss_caps.rx_hash_fields_mask = + resp.rss_caps.rx_hash_fields_mask; + attr->rss_caps.rx_hash_function = + resp.rss_caps.rx_hash_function; + } + if (attr_size >= offsetofend(struct ibv_device_attr_ex, tso_caps)) { + attr->tso_caps.max_tso = resp.tso_caps.max_tso; + attr->tso_caps.supported_qpts = resp.tso_caps.supported_qpts; + } + raw_fw_ver = resp.ibv_resp.base.fw_ver; major = (raw_fw_ver >> 32) & 0xffff; minor = (raw_fw_ver >> 16) & 0xffff; sub_minor = raw_fw_ver & 0xffff; diff --git a/providers/mlx5/mlx5.c b/providers/mlx5/mlx5.c index 06b9a52ebb3019..cf0a62928705bc 100644 --- a/providers/mlx5/mlx5.c +++ b/providers/mlx5/mlx5.c @@ -90,7 +90,6 @@ uint32_t mlx5_debug_mask = 0; int mlx5_freeze_on_error_cqe; static const struct verbs_context_ops mlx5_ctx_common_ops = { - .query_device = mlx5_query_device, .query_port = mlx5_query_port, .alloc_pd = mlx5_alloc_pd, .async_event = mlx5_async_event, diff --git a/providers/mlx5/mlx5.h b/providers/mlx5/mlx5.h index 72e710b7b5e4aa..8821015c6d503e 100644 --- a/providers/mlx5/mlx5.h +++ b/providers/mlx5/mlx5.h @@ -879,8 +879,6 @@ void mlx5_free_db(struct mlx5_context *context, __be32 *db, struct ibv_pd *pd, bool custom_alloc); void mlx5_query_device_ctx(struct mlx5_context *mctx); -int mlx5_query_device(struct ibv_context *context, - struct ibv_device_attr *attr); int mlx5_query_device_ex(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr, diff --git a/providers/mlx5/verbs.c b/providers/mlx5/verbs.c index 42c984033d8eaa..5882e209b06b54 100644 --- a/providers/mlx5/verbs.c +++ b/providers/mlx5/verbs.c @@ -65,27 +65,6 @@ static inline int is_xrc_tgt(int type) return type == IBV_QPT_XRC_RECV; } -int mlx5_query_device(struct ibv_context *context, struct ibv_device_attr *attr) -{ - struct ibv_query_device cmd; - uint64_t raw_fw_ver; - unsigned major, minor, sub_minor; - int ret; - - ret = ibv_cmd_query_device(context, attr, &raw_fw_ver, &cmd, sizeof cmd); - if (ret) - return ret; - - major = (raw_fw_ver >> 32) & 0xffff; - minor = (raw_fw_ver >> 16) & 0xffff; - sub_minor = raw_fw_ver & 0xffff; - - snprintf(attr->fw_ver, sizeof attr->fw_ver, - "%d.%d.%04d", major, minor, sub_minor); - - return 0; -} - static int mlx5_read_clock(struct ibv_context *context, uint64_t *cycles) { unsigned int clockhi, clocklo, clockhi1; @@ -3481,37 +3460,47 @@ int mlx5_query_device_ex(struct ibv_context *context, size_t attr_size) { struct mlx5_context *mctx = to_mctx(context); - struct mlx5_query_device_ex_resp resp; - struct mlx5_query_device_ex cmd; + struct mlx5_query_device_ex_resp resp = {}; + size_t resp_size = + (mctx->cmds_supp_uhw & MLX5_USER_CMDS_SUPP_UHW_QUERY_DEVICE) ? + sizeof(resp) : + sizeof(resp.ibv_resp); struct ibv_device_attr *a; uint64_t raw_fw_ver; unsigned sub_minor; unsigned major; unsigned minor; int err; - int cmd_supp_uhw = mctx->cmds_supp_uhw & - MLX5_USER_CMDS_SUPP_UHW_QUERY_DEVICE; - memset(&cmd, 0, sizeof(cmd)); - memset(&resp, 0, sizeof(resp)); - err = ibv_cmd_query_device_ex( - context, input, attr, attr_size, &raw_fw_ver, &cmd.ibv_cmd, - sizeof(cmd), &resp.ibv_resp, - cmd_supp_uhw ? 
sizeof(resp) : sizeof(resp.ibv_resp)); + err = ibv_cmd_query_device_any(context, input, attr, attr_size, + &resp.ibv_resp, &resp_size); if (err) return err; - attr->tso_caps.max_tso = resp.tso_caps.max_tso; - attr->tso_caps.supported_qpts = resp.tso_caps.supported_qpts; - attr->rss_caps.rx_hash_fields_mask = resp.rss_caps.rx_hash_fields_mask; - attr->rss_caps.rx_hash_function = resp.rss_caps.rx_hash_function; - attr->packet_pacing_caps.qp_rate_limit_min = - resp.packet_pacing_caps.qp_rate_limit_min; - attr->packet_pacing_caps.qp_rate_limit_max = - resp.packet_pacing_caps.qp_rate_limit_max; - attr->packet_pacing_caps.supported_qpts = - resp.packet_pacing_caps.supported_qpts; + if (attr_size >= offsetofend(struct ibv_device_attr_ex, tso_caps)) { + attr->tso_caps.max_tso = resp.tso_caps.max_tso; + attr->tso_caps.supported_qpts = resp.tso_caps.supported_qpts; + } + if (attr_size >= offsetofend(struct ibv_device_attr_ex, rss_caps)) { + attr->rss_caps.rx_hash_fields_mask = + resp.rss_caps.rx_hash_fields_mask; + attr->rss_caps.rx_hash_function = + resp.rss_caps.rx_hash_function; + } + if (attr_size >= + offsetofend(struct ibv_device_attr_ex, packet_pacing_caps)) { + attr->packet_pacing_caps.qp_rate_limit_min = + resp.packet_pacing_caps.qp_rate_limit_min; + attr->packet_pacing_caps.qp_rate_limit_max = + resp.packet_pacing_caps.qp_rate_limit_max; + attr->packet_pacing_caps.supported_qpts = + resp.packet_pacing_caps.supported_qpts; + } + + if (attr_size >= offsetofend(struct ibv_device_attr_ex, pci_atomic_caps)) + get_pci_atomic_caps(context, attr); + raw_fw_ver = resp.ibv_resp.base.fw_ver; major = (raw_fw_ver >> 32) & 0xffff; minor = (raw_fw_ver >> 16) & 0xffff; sub_minor = raw_fw_ver & 0xffff; @@ -3519,10 +3508,6 @@ int mlx5_query_device_ex(struct ibv_context *context, snprintf(a->fw_ver, sizeof(a->fw_ver), "%d.%d.%04d", major, minor, sub_minor); - if (attr_size >= offsetof(struct ibv_device_attr_ex, pci_atomic_caps) + - sizeof(attr->pci_atomic_caps)) - get_pci_atomic_caps(context, attr); - return 0; } From patchwork Mon Nov 16 20:23:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 11910769 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,MSGID_FROM_MTA_HEADER,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 085B0C6379D for ; Mon, 16 Nov 2020 20:23:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9C06720782 for ; Mon, 16 Nov 2020 20:23:24 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="ezfKSCV9" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732373AbgKPUXV (ORCPT ); Mon, 16 Nov 2020 15:23:21 -0500 Received: from nat-hk.nvidia.com ([203.18.50.4]:17355 "EHLO nat-hk.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730917AbgKPUXV (ORCPT ); Mon, 16 Nov 2020 15:23:21 -0500 Received: from HKMAIL101.nvidia.com (Not Verified[10.18.92.100]) by nat-hk.nvidia.com (using TLS: TLSv1.2, 
AES256-SHA) id ; Tue, 17 Nov 2020 04:23:18 +0800 Received: from HKMAIL103.nvidia.com (10.18.16.12) by HKMAIL101.nvidia.com (10.18.16.10) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 16 Nov 2020 20:23:18 +0000 Received: from NAM11-CO1-obe.outbound.protection.outlook.com (104.47.56.172) by HKMAIL103.nvidia.com (10.18.16.12) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Mon, 16 Nov 2020 20:23:17 +0000 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=kP349JPxuAUe5tit8+0eJsAXERv7O2lNKqMrXV6297Nm1+FFo5ll5I15HaFxJ7iCK2hDxjLdXUMXZ2VK+7cWVPYnrLAIjLPxVbcHDrRdywIPTI+C2/ne8uhRkmsOlkoWSn9PVoRQKsIv4i0lsfYMxZscOh45JM+ReqyQ3j6dynGCUP2u2wjGCwlctDCk/k/xGoHsJkFbHj6ebCL/ruO/gP0AWrI3z4DrAYObNtzr4yaLrlCHT03v69PLmNpgF0g1oor9lELeAs9RDmjM62HeJ88VPj21zSy/g/qKmf/dhEsKk4PHMvnCEn6GqqLH0NDPd5IaL772GSuyNaMtTd89Zg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=gWAJdzG41PYDl4nyqo3E2yEyZH8w348Nzc0LHxv3vZ4=; b=KsHaMLbCuuup06wQUswIewdXRo8AwnTC3ZNbcQGbpfj6vIdDyIxdLlyPi6JSSqGJw9xf15Yzj3r45Fqr1KaiIcEaqQL7oINykfH/G3MPNNixIIDqDivW0p84jjxslbBxOimepLW1h15YrMT0vm2+3aELDnSGNZb6JJNmDcCk/ihXvE9POywUZ/+S7tk5OtNK1MlKdSj/hwfpl55xV3k9OckTF8ukjU1DTG8xxgSDoHhxuyEcXBcVOvF2SBw/H77YlhstMuwPHn086uGJiIlq5ZEFgU4092biVDChMKr1CsHSK8qP0N3e67RGbbXgMXMO7WcvlKIid76p0NWk5XWpXA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com; dkim=pass header.d=nvidia.com; arc=none Received: from DM6PR12MB3834.namprd12.prod.outlook.com (2603:10b6:5:14a::12) by DM6PR12MB3305.namprd12.prod.outlook.com (2603:10b6:5:189::29) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Mon, 16 Nov 2020 20:23:14 +0000 Received: from DM6PR12MB3834.namprd12.prod.outlook.com ([fe80::e40c:730c:156c:2ef9]) by DM6PR12MB3834.namprd12.prod.outlook.com ([fe80::e40c:730c:156c:2ef9%7]) with mapi id 15.20.3564.028; Mon, 16 Nov 2020 20:23:14 +0000 From: Jason Gunthorpe To: , Bob Pearson Subject: [PATCH 7/9] providers: Convert all providers to implement query_device_ex Date: Mon, 16 Nov 2020 16:23:08 -0400 Message-ID: <7-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com> In-Reply-To: <0-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com> References: X-ClientProxiedBy: MN2PR22CA0017.namprd22.prod.outlook.com (2603:10b6:208:238::22) To DM6PR12MB3834.namprd12.prod.outlook.com (2603:10b6:5:14a::12) MIME-Version: 1.0 X-MS-Exchange-MessageSentRepresentingType: 1 Received: from mlx.ziepe.ca (156.34.48.30) by MN2PR22CA0017.namprd22.prod.outlook.com (2603:10b6:208:238::22) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25 via Frontend Transport; Mon, 16 Nov 2020 20:23:12 +0000 Received: from jgg by mlx with local (Exim 4.94) (envelope-from ) id 1kel1q-006l8B-TO; Mon, 16 Nov 2020 16:23:10 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1605558198; bh=LnSgc1sEYcE8JcKXca9nxTv3XWQwbdWNGlxEgKhdZYU=; h=ARC-Seal:ARC-Message-Signature:ARC-Authentication-Results:From:To: Subject:Date:Message-ID:In-Reply-To:References: Content-Transfer-Encoding:Content-Type:X-ClientProxiedBy: MIME-Version:X-MS-Exchange-MessageSentRepresentingType; b=ezfKSCV9SIM+jqBEjdVqTKoYomNbKmBIBGkibektfsJdwVUM8LiCGPgxPaRzuZV// zOcYmSyJESX6CmrCJfuAJ2LMWnlRYTkWfLLxdLKkVIeYH5ujgEUEZ0qUZNvH/1Nnbr 
IacHA+PhTtowPLJPQLeVFiBQBJitLB93I8KhS6zibPiZN+u9Ln9gsBkrj9iqgR6/oR 5ksgiBljIjlw3UPmYg1L9rDjL/dRFcQlCP9TED3PM7BlQaoegGw9lhQ2cqmjrCMR2v P6k2kHs7Zls6/yyjNr2Fjx0tnJaXcv8e0Q0YftO42OfrAMvxDDA/iKIj6bOQyNCUti UiJADBmGXs5Tg== Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org The kernel now supports query_device_ex for all drivers, there is no reason to have a weird split where providers do different things. All providers implement only query_device_ex and call ibv_cmd_query_device_any() to get as much of the device_attr's as the kernel can return. The user facing ibv_query_device is emulated by requesting only the first portion of the ibv_device_attr_ex structure using a shorter size. Nearly all providers have a fairly simple pattern where they just call ibv_cmd_query_device_any() and the manipulate the fw_version into a string. A few return a device udata and process that as well. Signed-off-by: Jason Gunthorpe --- providers/bnxt_re/main.c | 2 +- providers/bnxt_re/verbs.c | 30 +++++++++++++++++++----------- providers/bnxt_re/verbs.h | 5 +++-- providers/cxgb4/dev.c | 2 +- providers/cxgb4/libcxgb4.h | 3 ++- providers/cxgb4/verbs.c | 14 +++++++++----- providers/hfi1verbs/hfiverbs.c | 2 +- providers/hfi1verbs/hfiverbs.h | 5 +++-- providers/hfi1verbs/verbs.c | 13 ++++++++----- providers/hns/hns_roce_u.c | 8 ++++++-- providers/hns/hns_roce_u.h | 3 ++- providers/hns/hns_roce_u_verbs.c | 15 +++++++++------ providers/i40iw/i40iw_umain.c | 2 +- providers/i40iw/i40iw_umain.h | 4 +++- providers/i40iw/i40iw_uverbs.c | 18 +++++++++++------- providers/ipathverbs/ipathverbs.c | 2 +- providers/ipathverbs/ipathverbs.h | 5 +++-- providers/ipathverbs/verbs.c | 13 ++++++++----- providers/mthca/mthca.c | 2 +- providers/mthca/mthca.h | 3 ++- providers/mthca/verbs.c | 13 +++++++++---- providers/ocrdma/ocrdma_main.c | 2 +- providers/ocrdma/ocrdma_main.h | 4 +++- providers/ocrdma/ocrdma_verbs.c | 20 ++++++++++++-------- providers/qedr/qelr_main.c | 2 +- providers/qedr/qelr_verbs.c | 21 ++++++++++++--------- providers/qedr/qelr_verbs.h | 3 ++- providers/rxe/rxe.c | 15 +++++++++------ providers/siw/siw.c | 20 +++++++++++--------- providers/vmw_pvrdma/pvrdma.h | 3 ++- providers/vmw_pvrdma/pvrdma_main.c | 2 +- providers/vmw_pvrdma/verbs.c | 13 ++++++++----- 32 files changed, 165 insertions(+), 104 deletions(-) diff --git a/providers/bnxt_re/main.c b/providers/bnxt_re/main.c index baeee733fdfbab..a78e6b98815dee 100644 --- a/providers/bnxt_re/main.c +++ b/providers/bnxt_re/main.c @@ -91,7 +91,7 @@ static const struct verbs_match_ent cna_table[] = { }; static const struct verbs_context_ops bnxt_re_cntx_ops = { - .query_device = bnxt_re_query_device, + .query_device_ex = bnxt_re_query_device, .query_port = bnxt_re_query_port, .alloc_pd = bnxt_re_alloc_pd, .dealloc_pd = bnxt_re_free_pd, diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c index 03237e7f810321..20f4e7dd08b37c 100644 --- a/providers/bnxt_re/verbs.c +++ b/providers/bnxt_re/verbs.c @@ -53,19 +53,24 @@ #include "main.h" #include "verbs.h" -int bnxt_re_query_device(struct ibv_context *ibvctx, - struct ibv_device_attr *dev_attr) +int bnxt_re_query_device(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size) { - struct ibv_query_device cmd; + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); uint8_t fw_ver[8]; - int status; + int err; - memset(dev_attr, 0, sizeof(struct ibv_device_attr)); - status = ibv_cmd_query_device(ibvctx, 
dev_attr, (uint64_t *)&fw_ver, - &cmd, sizeof(cmd)); - snprintf(dev_attr->fw_ver, 64, "%d.%d.%d.%d", - fw_ver[0], fw_ver[1], fw_ver[2], fw_ver[3]); - return status; + err = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); + if (err) + return err; + + memcpy(fw_ver, &resp.base.fw_ver, sizeof(resp.base.fw_ver)); + snprintf(attr->orig_attr.fw_ver, 64, "%d.%d.%d.%d", fw_ver[0], + fw_ver[1], fw_ver[2], fw_ver[3]); + return 0; } int bnxt_re_query_port(struct ibv_context *ibvctx, uint8_t port, @@ -773,7 +778,10 @@ static int bnxt_re_check_qp_limits(struct bnxt_re_context *cntx, struct ibv_device_attr devattr; int ret; - ret = bnxt_re_query_device(&cntx->ibvctx.context, &devattr); + ret = bnxt_re_query_device( + &cntx->ibvctx.context, NULL, + container_of(&devattr, struct ibv_device_attr_ex, orig_attr), + sizeof(devattr)); if (ret) return ret; if (attr->cap.max_send_sge > devattr.max_sge) diff --git a/providers/bnxt_re/verbs.h b/providers/bnxt_re/verbs.h index b9fd84bdbac9a8..1566709f096093 100644 --- a/providers/bnxt_re/verbs.h +++ b/providers/bnxt_re/verbs.h @@ -54,8 +54,9 @@ #include #include -int bnxt_re_query_device(struct ibv_context *uctx, - struct ibv_device_attr *attr); +int bnxt_re_query_device(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size); int bnxt_re_query_port(struct ibv_context *uctx, uint8_t port, struct ibv_port_attr *attr); struct ibv_pd *bnxt_re_alloc_pd(struct ibv_context *uctx); diff --git a/providers/cxgb4/dev.c b/providers/cxgb4/dev.c index 06948efb3f0105..76b78d9b29a71c 100644 --- a/providers/cxgb4/dev.c +++ b/providers/cxgb4/dev.c @@ -75,7 +75,7 @@ int t5_en_wc = 1; static LIST_HEAD(devices); static const struct verbs_context_ops c4iw_ctx_common_ops = { - .query_device = c4iw_query_device, + .query_device_ex = c4iw_query_device, .query_port = c4iw_query_port, .alloc_pd = c4iw_alloc_pd, .dealloc_pd = c4iw_free_pd, diff --git a/providers/cxgb4/libcxgb4.h b/providers/cxgb4/libcxgb4.h index c5036d0b83c21e..f0658ab89f4aa5 100644 --- a/providers/cxgb4/libcxgb4.h +++ b/providers/cxgb4/libcxgb4.h @@ -191,7 +191,8 @@ static inline unsigned long_log2(unsigned long x) } int c4iw_query_device(struct ibv_context *context, - struct ibv_device_attr *attr); + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size); int c4iw_query_port(struct ibv_context *context, uint8_t port, struct ibv_port_attr *attr); diff --git a/providers/cxgb4/verbs.c b/providers/cxgb4/verbs.c index 32bae6906a1595..a28152fa32761c 100644 --- a/providers/cxgb4/verbs.c +++ b/providers/cxgb4/verbs.c @@ -47,24 +47,28 @@ bool is_64b_cqe; #define MASKED(x) (void *)((unsigned long)(x) & c4iw_page_mask) -int c4iw_query_device(struct ibv_context *context, struct ibv_device_attr *attr) +int c4iw_query_device(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size) { - struct ibv_query_device cmd; + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); uint64_t raw_fw_ver; u8 major, minor, sub_minor, build; int ret; - ret = ibv_cmd_query_device(context, attr, &raw_fw_ver, &cmd, - sizeof cmd); + ret = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); if (ret) return ret; + raw_fw_ver = resp.base.fw_ver; major = (raw_fw_ver >> 24) & 0xff; minor = (raw_fw_ver >> 16) & 0xff; sub_minor = (raw_fw_ver >> 8) & 0xff; build = raw_fw_ver & 0xff; - 
snprintf(attr->fw_ver, sizeof attr->fw_ver, + snprintf(attr->orig_attr.fw_ver, sizeof(attr->orig_attr.fw_ver), "%d.%d.%d.%d", major, minor, sub_minor, build); return 0; diff --git a/providers/hfi1verbs/hfiverbs.c b/providers/hfi1verbs/hfiverbs.c index 9bfb967c09791e..514a7e6b602cb7 100644 --- a/providers/hfi1verbs/hfiverbs.c +++ b/providers/hfi1verbs/hfiverbs.c @@ -90,7 +90,7 @@ static const struct verbs_match_ent hca_table[] = { static const struct verbs_context_ops hfi1_ctx_common_ops = { .free_context = hfi1_free_context, - .query_device = hfi1_query_device, + .query_device_ex = hfi1_query_device, .query_port = hfi1_query_port, .alloc_pd = hfi1_alloc_pd, diff --git a/providers/hfi1verbs/hfiverbs.h b/providers/hfi1verbs/hfiverbs.h index b9e91d8072acf3..34977fc0bdfca2 100644 --- a/providers/hfi1verbs/hfiverbs.h +++ b/providers/hfi1verbs/hfiverbs.h @@ -194,8 +194,9 @@ static inline struct hfi1_rwqe *get_rwqe_ptr(struct hfi1_rq *rq, rq->max_sge * sizeof(struct ibv_sge)) * n); } -extern int hfi1_query_device(struct ibv_context *context, - struct ibv_device_attr *attr); +int hfi1_query_device(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size); extern int hfi1_query_port(struct ibv_context *context, uint8_t port, struct ibv_port_attr *attr); diff --git a/providers/hfi1verbs/verbs.c b/providers/hfi1verbs/verbs.c index 275f8d511392a7..028552a23718cd 100644 --- a/providers/hfi1verbs/verbs.c +++ b/providers/hfi1verbs/verbs.c @@ -68,23 +68,26 @@ #include "hfi-abi.h" int hfi1_query_device(struct ibv_context *context, - struct ibv_device_attr *attr) + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size) { - struct ibv_query_device cmd; + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); uint64_t raw_fw_ver; unsigned major, minor, sub_minor; int ret; - ret = ibv_cmd_query_device(context, attr, &raw_fw_ver, - &cmd, sizeof cmd); + ret = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); if (ret) return ret; + raw_fw_ver = resp.base.fw_ver; major = (raw_fw_ver >> 32) & 0xffff; minor = (raw_fw_ver >> 16) & 0xffff; sub_minor = raw_fw_ver & 0xffff; - snprintf(attr->fw_ver, sizeof attr->fw_ver, + snprintf(attr->orig_attr.fw_ver, sizeof(attr->orig_attr.fw_ver), "%d.%d.%d", major, minor, sub_minor); return 0; diff --git a/providers/hns/hns_roce_u.c b/providers/hns/hns_roce_u.c index c4370411d8aa33..23b317cc9b5fb8 100644 --- a/providers/hns/hns_roce_u.c +++ b/providers/hns/hns_roce_u.c @@ -74,7 +74,7 @@ static const struct verbs_context_ops hns_common_ops = { .dereg_mr = hns_roce_u_dereg_mr, .destroy_cq = hns_roce_u_destroy_cq, .modify_cq = hns_roce_u_modify_cq, - .query_device = hns_roce_u_query_device, + .query_device_ex = hns_roce_u_query_device, .query_port = hns_roce_u_query_port, .query_qp = hns_roce_u_query_qp, .reg_mr = hns_roce_u_reg_mr, @@ -147,7 +147,11 @@ static struct verbs_context *hns_roce_alloc_context(struct ibv_device *ibdev, verbs_set_ops(&context->ibv_ctx, &hns_common_ops); verbs_set_ops(&context->ibv_ctx, &hr_dev->u_hw->hw_ops); - if (hns_roce_u_query_device(&context->ibv_ctx.context, &dev_attrs)) + if (hns_roce_u_query_device(&context->ibv_ctx.context, NULL, + container_of(&dev_attrs, + struct ibv_device_attr_ex, + orig_attr), + sizeof(dev_attrs))) goto tptr_free; context->max_qp_wr = dev_attrs.max_qp_wr; diff --git a/providers/hns/hns_roce_u.h b/providers/hns/hns_roce_u.h index 
b0308d14820b06..0c2dc1e3441550 100644 --- a/providers/hns/hns_roce_u.h +++ b/providers/hns/hns_roce_u.h @@ -316,7 +316,8 @@ static inline struct hns_roce_qp *to_hr_qp(struct ibv_qp *ibv_qp) } int hns_roce_u_query_device(struct ibv_context *context, - struct ibv_device_attr *attr); + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size); int hns_roce_u_query_port(struct ibv_context *context, uint8_t port, struct ibv_port_attr *attr); diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c index 06fdbfac5e898d..bffd8df4f3f7a4 100644 --- a/providers/hns/hns_roce_u_verbs.c +++ b/providers/hns/hns_roce_u_verbs.c @@ -54,24 +54,27 @@ void hns_roce_init_qp_indices(struct hns_roce_qp *qp) } int hns_roce_u_query_device(struct ibv_context *context, - struct ibv_device_attr *attr) + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size) { + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); int ret; - struct ibv_query_device cmd; uint64_t raw_fw_ver; unsigned int major, minor, sub_minor; - ret = ibv_cmd_query_device(context, attr, &raw_fw_ver, &cmd, - sizeof(cmd)); + ret = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); if (ret) return ret; + raw_fw_ver = resp.base.fw_ver; major = (raw_fw_ver >> 32) & 0xffff; minor = (raw_fw_ver >> 16) & 0xffff; sub_minor = raw_fw_ver & 0xffff; - snprintf(attr->fw_ver, sizeof(attr->fw_ver), "%d.%d.%03d", major, minor, - sub_minor); + snprintf(attr->orig_attr.fw_ver, sizeof(attr->orig_attr.fw_ver), + "%d.%d.%03d", major, minor, sub_minor); return 0; } diff --git a/providers/i40iw/i40iw_umain.c b/providers/i40iw/i40iw_umain.c index eef8cd50cabf43..c5ac0792ed6f83 100644 --- a/providers/i40iw/i40iw_umain.c +++ b/providers/i40iw/i40iw_umain.c @@ -95,7 +95,7 @@ static const struct verbs_match_ent hca_table[] = { }; static const struct verbs_context_ops i40iw_uctx_ops = { - .query_device = i40iw_uquery_device, + .query_device_ex = i40iw_uquery_device, .query_port = i40iw_uquery_port, .alloc_pd = i40iw_ualloc_pd, .dealloc_pd = i40iw_ufree_pd, diff --git a/providers/i40iw/i40iw_umain.h b/providers/i40iw/i40iw_umain.h index 10385dfc8304e9..fe643dd1a04e06 100644 --- a/providers/i40iw/i40iw_umain.h +++ b/providers/i40iw/i40iw_umain.h @@ -151,7 +151,9 @@ static inline struct i40iw_uqp *to_i40iw_uqp(struct ibv_qp *ibqp) } /* i40iw_uverbs.c */ -int i40iw_uquery_device(struct ibv_context *, struct ibv_device_attr *); +int i40iw_uquery_device(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size); int i40iw_uquery_port(struct ibv_context *, uint8_t, struct ibv_port_attr *); struct ibv_pd *i40iw_ualloc_pd(struct ibv_context *); int i40iw_ufree_pd(struct ibv_pd *); diff --git a/providers/i40iw/i40iw_uverbs.c b/providers/i40iw/i40iw_uverbs.c index 71b59a7a812c17..c170bb33ef8b6e 100644 --- a/providers/i40iw/i40iw_uverbs.c +++ b/providers/i40iw/i40iw_uverbs.c @@ -55,23 +55,27 @@ * @context: user context for the device * @attr: where to save all the mx resources from the driver **/ -int i40iw_uquery_device(struct ibv_context *context, struct ibv_device_attr *attr) +int i40iw_uquery_device(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size) { - struct ibv_query_device cmd; + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); uint64_t 
i40iw_fw_ver; int ret; unsigned int minor, major; - ret = ibv_cmd_query_device(context, attr, &i40iw_fw_ver, &cmd, sizeof(cmd)); - if (ret) { - fprintf(stderr, PFX "%s: query device failed and returned status code: %d\n", __func__, ret); + ret = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); + if (ret) return ret; - } + i40iw_fw_ver = resp.base.fw_ver; major = (i40iw_fw_ver >> 16) & 0xffff; minor = i40iw_fw_ver & 0xffff; - snprintf(attr->fw_ver, sizeof(attr->fw_ver), "%d.%d", major, minor); + snprintf(attr->orig_attr.fw_ver, sizeof(attr->orig_attr.fw_ver), + "%d.%d", major, minor); return 0; } diff --git a/providers/ipathverbs/ipathverbs.c b/providers/ipathverbs/ipathverbs.c index 0e1a58433d34b2..975f52d011c4c0 100644 --- a/providers/ipathverbs/ipathverbs.c +++ b/providers/ipathverbs/ipathverbs.c @@ -89,7 +89,7 @@ static const struct verbs_match_ent hca_table[] = { static const struct verbs_context_ops ipath_ctx_common_ops = { .free_context = ipath_free_context, - .query_device = ipath_query_device, + .query_device_ex = ipath_query_device, .query_port = ipath_query_port, .alloc_pd = ipath_alloc_pd, diff --git a/providers/ipathverbs/ipathverbs.h b/providers/ipathverbs/ipathverbs.h index 694f1f44a48315..c5fa761f794567 100644 --- a/providers/ipathverbs/ipathverbs.h +++ b/providers/ipathverbs/ipathverbs.h @@ -173,8 +173,9 @@ static inline struct ipath_rwqe *get_rwqe_ptr(struct ipath_rq *rq, rq->max_sge * sizeof(struct ibv_sge)) * n); } -extern int ipath_query_device(struct ibv_context *context, - struct ibv_device_attr *attr); +int ipath_query_device(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size); extern int ipath_query_port(struct ibv_context *context, uint8_t port, struct ibv_port_attr *attr); diff --git a/providers/ipathverbs/verbs.c b/providers/ipathverbs/verbs.c index 505ea584e878de..e1b098a078584a 100644 --- a/providers/ipathverbs/verbs.c +++ b/providers/ipathverbs/verbs.c @@ -48,23 +48,26 @@ #include "ipath-abi.h" int ipath_query_device(struct ibv_context *context, - struct ibv_device_attr *attr) + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size) { - struct ibv_query_device cmd; + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); uint64_t raw_fw_ver; unsigned major, minor, sub_minor; int ret; - ret = ibv_cmd_query_device(context, attr, &raw_fw_ver, - &cmd, sizeof cmd); + ret = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); if (ret) return ret; + raw_fw_ver = resp.base.fw_ver; major = (raw_fw_ver >> 32) & 0xffff; minor = (raw_fw_ver >> 16) & 0xffff; sub_minor = raw_fw_ver & 0xffff; - snprintf(attr->fw_ver, sizeof attr->fw_ver, + snprintf(attr->orig_attr.fw_ver, sizeof(attr->orig_attr.fw_ver), "%d.%d.%d", major, minor, sub_minor); return 0; diff --git a/providers/mthca/mthca.c b/providers/mthca/mthca.c index abce4866883d8b..809aae00ef26f4 100644 --- a/providers/mthca/mthca.c +++ b/providers/mthca/mthca.c @@ -92,7 +92,7 @@ static const struct verbs_match_ent hca_table[] = { }; static const struct verbs_context_ops mthca_ctx_common_ops = { - .query_device = mthca_query_device, + .query_device_ex = mthca_query_device, .query_port = mthca_query_port, .alloc_pd = mthca_alloc_pd, .dealloc_pd = mthca_free_pd, diff --git a/providers/mthca/mthca.h b/providers/mthca/mthca.h index b7df2f734686c8..43c58fa44d1a07 100644 --- a/providers/mthca/mthca.h +++ 
b/providers/mthca/mthca.h @@ -273,7 +273,8 @@ struct mthca_db_table *mthca_alloc_db_tab(int uarc_size); void mthca_free_db_tab(struct mthca_db_table *db_tab); int mthca_query_device(struct ibv_context *context, - struct ibv_device_attr *attr); + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size); int mthca_query_port(struct ibv_context *context, uint8_t port, struct ibv_port_attr *attr); diff --git a/providers/mthca/verbs.c b/providers/mthca/verbs.c index 99e5ec661265a6..7ba5a4177b485c 100644 --- a/providers/mthca/verbs.c +++ b/providers/mthca/verbs.c @@ -42,22 +42,27 @@ #include "mthca.h" #include "mthca-abi.h" -int mthca_query_device(struct ibv_context *context, struct ibv_device_attr *attr) +int mthca_query_device(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size) { - struct ibv_query_device cmd; + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); uint64_t raw_fw_ver; unsigned major, minor, sub_minor; int ret; - ret = ibv_cmd_query_device(context, attr, &raw_fw_ver, &cmd, sizeof cmd); + ret = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); if (ret) return ret; + raw_fw_ver = resp.base.fw_ver; major = (raw_fw_ver >> 32) & 0xffff; minor = (raw_fw_ver >> 16) & 0xffff; sub_minor = raw_fw_ver & 0xffff; - snprintf(attr->fw_ver, sizeof attr->fw_ver, + snprintf(attr->orig_attr.fw_ver, sizeof(attr->orig_attr.fw_ver), "%d.%d.%d", major, minor, sub_minor); return 0; diff --git a/providers/ocrdma/ocrdma_main.c b/providers/ocrdma/ocrdma_main.c index f7ed629de8b4cf..c955dd1ba6642f 100644 --- a/providers/ocrdma/ocrdma_main.c +++ b/providers/ocrdma/ocrdma_main.c @@ -68,7 +68,7 @@ static const struct verbs_match_ent ucna_table[] = { }; static const struct verbs_context_ops ocrdma_ctx_ops = { - .query_device = ocrdma_query_device, + .query_device_ex = ocrdma_query_device, .query_port = ocrdma_query_port, .alloc_pd = ocrdma_alloc_pd, .dealloc_pd = ocrdma_free_pd, diff --git a/providers/ocrdma/ocrdma_main.h b/providers/ocrdma/ocrdma_main.h index aadefd9649ac90..33ea20e0c6066b 100644 --- a/providers/ocrdma/ocrdma_main.h +++ b/providers/ocrdma/ocrdma_main.h @@ -265,7 +265,9 @@ static inline struct ocrdma_ah *get_ocrdma_ah(struct ibv_ah *ibah) } void ocrdma_init_ahid_tbl(struct ocrdma_devctx *ctx); -int ocrdma_query_device(struct ibv_context *, struct ibv_device_attr *); +int ocrdma_query_device(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size); int ocrdma_query_port(struct ibv_context *, uint8_t, struct ibv_port_attr *); struct ibv_pd *ocrdma_alloc_pd(struct ibv_context *); int ocrdma_free_pd(struct ibv_pd *); diff --git a/providers/ocrdma/ocrdma_verbs.c b/providers/ocrdma/ocrdma_verbs.c index 4ae35be9d2d9ee..688ff7d4f24043 100644 --- a/providers/ocrdma/ocrdma_verbs.c +++ b/providers/ocrdma/ocrdma_verbs.c @@ -68,17 +68,21 @@ static inline void ocrdma_swap_cpu_to_le(void *dst, uint32_t len) * ocrdma_query_device */ int ocrdma_query_device(struct ibv_context *context, - struct ibv_device_attr *attr) + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size) { - struct ibv_query_device cmd; - uint64_t fw_ver; struct ocrdma_device *dev = get_ocrdma_dev(context->device); - int status; + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); + int ret; - bzero(attr, sizeof *attr); - 
status = ibv_cmd_query_device(context, attr, &fw_ver, &cmd, sizeof cmd); - memcpy(attr->fw_ver, dev->fw_ver, sizeof(dev->fw_ver)); - return status; + ret = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); + if (ret) + return ret; + + memcpy(attr->orig_attr.fw_ver, dev->fw_ver, sizeof(dev->fw_ver)); + return 0; } /* diff --git a/providers/qedr/qelr_main.c b/providers/qedr/qelr_main.c index bdfaa930f0c601..334972ae043cc4 100644 --- a/providers/qedr/qelr_main.c +++ b/providers/qedr/qelr_main.c @@ -87,7 +87,7 @@ static const struct verbs_match_ent hca_table[] = { }; static const struct verbs_context_ops qelr_ctx_ops = { - .query_device = qelr_query_device, + .query_device_ex = qelr_query_device, .query_port = qelr_query_port, .alloc_pd = qelr_alloc_pd, .dealloc_pd = qelr_dealloc_pd, diff --git a/providers/qedr/qelr_verbs.c b/providers/qedr/qelr_verbs.c index 4e77a1976a9154..dab9cf67539704 100644 --- a/providers/qedr/qelr_verbs.c +++ b/providers/qedr/qelr_verbs.c @@ -75,26 +75,29 @@ static inline int qelr_wq_is_full(struct qelr_qp_hwq_info *info) } int qelr_query_device(struct ibv_context *context, - struct ibv_device_attr *attr) + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size) { - struct ibv_query_device cmd; + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); uint64_t fw_ver; unsigned int major, minor, revision, eng; - int status; + int ret; - bzero(attr, sizeof(*attr)); - status = ibv_cmd_query_device(context, attr, &fw_ver, &cmd, - sizeof(cmd)); + ret = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); + if (ret) + return ret; + fw_ver = resp.base.fw_ver; major = (fw_ver >> 24) & 0xff; minor = (fw_ver >> 16) & 0xff; revision = (fw_ver >> 8) & 0xff; eng = fw_ver & 0xff; - snprintf(attr->fw_ver, sizeof(attr->fw_ver), + snprintf(attr->orig_attr.fw_ver, sizeof(attr->orig_attr.fw_ver), "%d.%d.%d.%d", major, minor, revision, eng); - - return status; + return 0; } int qelr_query_port(struct ibv_context *context, uint8_t port, diff --git a/providers/qedr/qelr_verbs.h b/providers/qedr/qelr_verbs.h index bbfd4906b082e6..b5b43b19241904 100644 --- a/providers/qedr/qelr_verbs.h +++ b/providers/qedr/qelr_verbs.h @@ -41,7 +41,8 @@ #include int qelr_query_device(struct ibv_context *context, - struct ibv_device_attr *attr); + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size); int qelr_query_port(struct ibv_context *context, uint8_t port, struct ibv_port_attr *attr); diff --git a/providers/rxe/rxe.c b/providers/rxe/rxe.c index 18e8c53dcd253a..d4357bac9d4d40 100644 --- a/providers/rxe/rxe.c +++ b/providers/rxe/rxe.c @@ -65,23 +65,26 @@ static const struct verbs_match_ent hca_table[] = { }; static int rxe_query_device(struct ibv_context *context, - struct ibv_device_attr *attr) + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size) { - struct ibv_query_device cmd; + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); uint64_t raw_fw_ver; unsigned int major, minor, sub_minor; int ret; - ret = ibv_cmd_query_device(context, attr, &raw_fw_ver, - &cmd, sizeof(cmd)); + ret = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); if (ret) return ret; + raw_fw_ver = resp.base.fw_ver; major = (raw_fw_ver >> 32) & 0xffff; minor = (raw_fw_ver >> 16) & 0xffff; sub_minor = raw_fw_ver & 0xffff; - snprintf(attr->fw_ver, 
sizeof(attr->fw_ver), + snprintf(attr->orig_attr.fw_ver, sizeof(attr->orig_attr.fw_ver), "%d.%d.%d", major, minor, sub_minor); return 0; @@ -831,7 +834,7 @@ static int rxe_destroy_ah(struct ibv_ah *ibah) } static const struct verbs_context_ops rxe_ctx_ops = { - .query_device = rxe_query_device, + .query_device_ex = rxe_query_device, .query_port = rxe_query_port, .alloc_pd = rxe_alloc_pd, .dealloc_pd = rxe_dealloc_pd, diff --git a/providers/siw/siw.c b/providers/siw/siw.c index 0f94e614d16876..8f6dee4e58af56 100644 --- a/providers/siw/siw.c +++ b/providers/siw/siw.c @@ -20,26 +20,28 @@ static const int siw_debug; static void siw_free_context(struct ibv_context *ibv_ctx); -static int siw_query_device(struct ibv_context *ctx, - struct ibv_device_attr *attr) +static int siw_query_device(struct ibv_context *context, + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size) { - struct ibv_query_device cmd; + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); uint64_t raw_fw_ver; unsigned int major, minor, sub_minor; int rv; - memset(&cmd, 0, sizeof(cmd)); - - rv = ibv_cmd_query_device(ctx, attr, &raw_fw_ver, &cmd, sizeof(cmd)); + rv = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); if (rv) return rv; + raw_fw_ver = resp.base.fw_ver; major = (raw_fw_ver >> 32) & 0xffff; minor = (raw_fw_ver >> 16) & 0xffff; sub_minor = raw_fw_ver & 0xffff; - snprintf(attr->fw_ver, sizeof(attr->fw_ver), "%d.%d.%d", major, minor, - sub_minor); + snprintf(attr->orig_attr.fw_ver, sizeof(attr->orig_attr.fw_ver), + "%d.%d.%d", major, minor, sub_minor); return 0; } @@ -832,7 +834,7 @@ static const struct verbs_context_ops siw_context_ops = { .post_recv = siw_post_recv, .post_send = siw_post_send, .post_srq_recv = siw_post_srq_recv, - .query_device = siw_query_device, + .query_device_ex = siw_query_device, .query_port = siw_query_port, .query_qp = siw_query_qp, .reg_mr = siw_reg_mr, diff --git a/providers/vmw_pvrdma/pvrdma.h b/providers/vmw_pvrdma/pvrdma.h index 0db65773f5d003..bb6ba729d0e4bc 100644 --- a/providers/vmw_pvrdma/pvrdma.h +++ b/providers/vmw_pvrdma/pvrdma.h @@ -275,7 +275,8 @@ int pvrdma_alloc_buf(struct pvrdma_buf *buf, size_t size, int page_size); void pvrdma_free_buf(struct pvrdma_buf *buf); int pvrdma_query_device(struct ibv_context *context, - struct ibv_device_attr *attr); + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, size_t attr_size); int pvrdma_query_port(struct ibv_context *context, uint8_t port, struct ibv_port_attr *attr); diff --git a/providers/vmw_pvrdma/pvrdma_main.c b/providers/vmw_pvrdma/pvrdma_main.c index 14a67c1cee3150..4f93b519dfa6b3 100644 --- a/providers/vmw_pvrdma/pvrdma_main.c +++ b/providers/vmw_pvrdma/pvrdma_main.c @@ -55,7 +55,7 @@ static void pvrdma_free_context(struct ibv_context *ibctx); static const struct verbs_context_ops pvrdma_ctx_ops = { .free_context = pvrdma_free_context, - .query_device = pvrdma_query_device, + .query_device_ex = pvrdma_query_device, .query_port = pvrdma_query_port, .alloc_pd = pvrdma_alloc_pd, .dealloc_pd = pvrdma_free_pd, diff --git a/providers/vmw_pvrdma/verbs.c b/providers/vmw_pvrdma/verbs.c index e8423c01365e7b..815333691c3831 100644 --- a/providers/vmw_pvrdma/verbs.c +++ b/providers/vmw_pvrdma/verbs.c @@ -47,23 +47,26 @@ #include "pvrdma.h" int pvrdma_query_device(struct ibv_context *context, - struct ibv_device_attr *attr) + const struct ibv_query_device_ex_input *input, + struct ibv_device_attr_ex *attr, 
size_t attr_size) { - struct ibv_query_device cmd; + struct ib_uverbs_ex_query_device_resp resp; + size_t resp_size = sizeof(resp); uint64_t raw_fw_ver; unsigned major, minor, sub_minor; int ret; - ret = ibv_cmd_query_device(context, attr, &raw_fw_ver, - &cmd, sizeof(cmd)); + ret = ibv_cmd_query_device_any(context, input, attr, attr_size, &resp, + &resp_size); if (ret) return ret; + raw_fw_ver = resp.base.fw_ver; major = (raw_fw_ver >> 32) & 0xffff; minor = (raw_fw_ver >> 16) & 0xffff; sub_minor = raw_fw_ver & 0xffff; - snprintf(attr->fw_ver, sizeof(attr->fw_ver), + snprintf(attr->orig_attr.fw_ver, sizeof(attr->orig_attr.fw_ver), "%d.%d.%03d", major, minor, sub_minor); return 0; From patchwork Mon Nov 16 20:23:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 11910765 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,MSGID_FROM_MTA_HEADER,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 500D6C63697 for ; Mon, 16 Nov 2020 20:23:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E6EF820781 for ; Mon, 16 Nov 2020 20:23:23 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="ShvasIgd" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732332AbgKPUXU (ORCPT ); Mon, 16 Nov 2020 15:23:20 -0500 Received: from nat-hk.nvidia.com ([203.18.50.4]:22498 "EHLO nat-hk.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732263AbgKPUXS (ORCPT ); Mon, 16 Nov 2020 15:23:18 -0500 Received: from HKMAIL103.nvidia.com (Not Verified[10.18.92.9]) by nat-hk.nvidia.com (using TLS: TLSv1.2, AES256-SHA) id ; Tue, 17 Nov 2020 04:23:17 +0800 Received: from HKMAIL103.nvidia.com (10.18.16.12) by HKMAIL103.nvidia.com (10.18.16.12) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 16 Nov 2020 20:23:16 +0000 Received: from NAM11-CO1-obe.outbound.protection.outlook.com (104.47.56.172) by HKMAIL103.nvidia.com (10.18.16.12) with Microsoft SMTP Server (TLS) id 15.0.1473.3 via Frontend Transport; Mon, 16 Nov 2020 20:23:16 +0000 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=T3PyxubbWodDRaMqF6/YLelCJ6zmMlw4ConUV58IjT8o9UTQERQlYKGet2cY+JgnPsEXsc/knDPURmOesB1VOt4ocJKEa7SJT91hAxqse2YtVtC0VvubzG0kBzn5xT+/m7bWs+IvKf93KA7D888MwJs7Y6FLxprHLRScjauvCJDCoPQADPmqfrM+z6soyYB1d2PHbIgT4maGleJEVMIAGlUG8ySxQh7pO5X7TCBNaCwQGrKgat0ZBTXSVfxh3VO1L+dujHxWuiRIGgrGPnuyxLfGV7PEh0UUmxFacCSvkAW2WltG0DwFKvpvbPeQ/1wSfydYwMTUXLHjbc93lzfn9w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=0D4lmO6fFVwKi0hfYsbAKvh4vaN/YbN75GVEjB+LyC8=; 
From: Jason Gunthorpe To: , Bob Pearson Subject: [PATCH 8/9] verbs: Remove dead code Date: Mon, 16 Nov 2020 16:23:09 -0400 Message-ID: <8-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com> In-Reply-To: <0-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com> List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Remove the old query_device support code, it is now replaced by ibv_cmd_query_device_any() Signed-off-by: Jason Gunthorpe --- libibverbs/cmd.c | 99 ------------------------------------ libibverbs/driver.h | 8 --- libibverbs/libibverbs.map.in | 1 - 3 files changed, 108 deletions(-) diff --git a/libibverbs/cmd.c b/libibverbs/cmd.c index a439f8c06481dd..ec9750e7c04eb4 100644 --- a/libibverbs/cmd.c +++ b/libibverbs/cmd.c @@ -44,7 +44,6 @@ #include #include "ibverbs.h" #include -#include bool verbs_allow_disassociate_destroy; @@ -113,104 +112,6 @@ int ibv_cmd_query_device(struct ibv_context *context, return 0; } -int ibv_cmd_query_device_ex(struct ibv_context *context, - const struct ibv_query_device_ex_input *input, - struct ibv_device_attr_ex *attr, size_t attr_size, - uint64_t *raw_fw_ver, - struct ibv_query_device_ex *cmd, - size_t cmd_size, - struct 
ib_uverbs_ex_query_device_resp *resp, - size_t resp_size) -{ - int err; - - if (input && input->comp_mask) - return EINVAL; - - if (attr_size < offsetof(struct ibv_device_attr_ex, comp_mask) + - sizeof(attr->comp_mask)) - return EINVAL; - - cmd->comp_mask = 0; - cmd->reserved = 0; - memset(attr->orig_attr.fw_ver, 0, sizeof(attr->orig_attr.fw_ver)); - memset(&attr->comp_mask, 0, attr_size - sizeof(attr->orig_attr)); - - err = execute_cmd_write_ex(context, IB_USER_VERBS_EX_CMD_QUERY_DEVICE, - cmd, cmd_size, resp, resp_size); - if (err) - return err; - - copy_query_dev_fields(&attr->orig_attr, &resp->base, raw_fw_ver); - /* Report back supported comp_mask bits. For now no comp_mask bit is - * defined */ - attr->comp_mask = resp->comp_mask & 0; - -#define CAN_COPY(_ibv_attr, _uverbs_attr) \ - (attr_size >= offsetofend(struct ibv_device_attr_ex, _ibv_attr) && \ - resp->response_length >= \ - offsetofend(struct ib_uverbs_ex_query_device_resp, \ - _uverbs_attr)) - - if (CAN_COPY(odp_caps, odp_caps)) { - attr->odp_caps.general_caps = resp->odp_caps.general_caps; - attr->odp_caps.per_transport_caps.rc_odp_caps = - resp->odp_caps.per_transport_caps.rc_odp_caps; - attr->odp_caps.per_transport_caps.uc_odp_caps = - resp->odp_caps.per_transport_caps.uc_odp_caps; - attr->odp_caps.per_transport_caps.ud_odp_caps = - resp->odp_caps.per_transport_caps.ud_odp_caps; - } - - if (CAN_COPY(completion_timestamp_mask, timestamp_mask)) - attr->completion_timestamp_mask = resp->timestamp_mask; - - if (CAN_COPY(hca_core_clock, hca_core_clock)) - attr->hca_core_clock = resp->hca_core_clock; - - if (CAN_COPY(device_cap_flags_ex, device_cap_flags_ex)) - attr->device_cap_flags_ex = resp->device_cap_flags_ex; - - if (CAN_COPY(rss_caps, rss_caps)) { - attr->rss_caps.supported_qpts = resp->rss_caps.supported_qpts; - attr->rss_caps.max_rwq_indirection_tables = - resp->rss_caps.max_rwq_indirection_tables; - attr->rss_caps.max_rwq_indirection_table_size = - resp->rss_caps.max_rwq_indirection_table_size; - } - - if (CAN_COPY(max_wq_type_rq, max_wq_type_rq)) - attr->max_wq_type_rq = resp->max_wq_type_rq; - - if (CAN_COPY(raw_packet_caps, raw_packet_caps)) - attr->raw_packet_caps = resp->raw_packet_caps; - - if (CAN_COPY(tm_caps, tm_caps)) { - attr->tm_caps.max_rndv_hdr_size = - resp->tm_caps.max_rndv_hdr_size; - attr->tm_caps.max_num_tags = resp->tm_caps.max_num_tags; - attr->tm_caps.flags = resp->tm_caps.flags; - attr->tm_caps.max_ops = resp->tm_caps.max_ops; - attr->tm_caps.max_sge = resp->tm_caps.max_sge; - } - - if (CAN_COPY(cq_mod_caps, cq_moderation_caps)) { - attr->cq_mod_caps.max_cq_count = - resp->cq_moderation_caps.max_cq_moderation_count; - attr->cq_mod_caps.max_cq_period = - resp->cq_moderation_caps.max_cq_moderation_period; - } - - if (CAN_COPY(max_dm_size, max_dm_size)) - attr->max_dm_size = resp->max_dm_size; - - if (CAN_COPY(xrc_odp_caps, xrc_odp_caps)) - attr->xrc_odp_caps = resp->xrc_odp_caps; -#undef CAN_COPY - - return 0; -} - int ibv_cmd_alloc_pd(struct ibv_context *context, struct ibv_pd *pd, struct ibv_alloc_pd *cmd, size_t cmd_size, struct ib_uverbs_alloc_pd_resp *resp, size_t resp_size) diff --git a/libibverbs/driver.h b/libibverbs/driver.h index e54db0ea6413e8..33998e227c98ec 100644 --- a/libibverbs/driver.h +++ b/libibverbs/driver.h @@ -465,14 +465,6 @@ int ibv_cmd_query_device_any(struct ibv_context *context, struct ibv_device_attr_ex *attr, size_t attr_size, struct ib_uverbs_ex_query_device_resp *resp, size_t *resp_size); -int ibv_cmd_query_device_ex(struct ibv_context *context, - const struct 
ibv_query_device_ex_input *input, - struct ibv_device_attr_ex *attr, size_t attr_size, - uint64_t *raw_fw_ver, - struct ibv_query_device_ex *cmd, - size_t cmd_size, - struct ib_uverbs_ex_query_device_resp *resp, - size_t resp_size); int ibv_cmd_query_port(struct ibv_context *context, uint8_t port_num, struct ibv_port_attr *port_attr, struct ibv_query_port *cmd, size_t cmd_size); diff --git a/libibverbs/libibverbs.map.in b/libibverbs/libibverbs.map.in index c1f7e09b240ab0..672717a6fa551e 100644 --- a/libibverbs/libibverbs.map.in +++ b/libibverbs/libibverbs.map.in @@ -205,7 +205,6 @@ IBVERBS_PRIVATE_@IBVERBS_PABI_VERSION@ { ibv_cmd_query_context; ibv_cmd_query_device; ibv_cmd_query_device_any; - ibv_cmd_query_device_ex; ibv_cmd_query_mr; ibv_cmd_query_port; ibv_cmd_query_qp;
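The ibv_cmd_query_device_ex() helper deleted above guarded every optional capability copy with offsetofend() checks against both the caller's attr_size and the kernel's response_length (the CAN_COPY() macro in the removed code); per the commit message, that logic is now kept only in ibv_cmd_query_device_any(). Below is a minimal, self-contained sketch of the offsetofend() pattern using hypothetical struct and field names, not the real uverbs layouts:

/* Toy stand-in for the extensible-response copy pattern: an optional
 * field is copied only when BOTH the caller's buffer (attr_size) and the
 * kernel's reply (response_length) are large enough to contain it. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* offsetofend(): offset of the first byte past a member. */
#define offsetofend(type, member) \
	(offsetof(type, member) + sizeof(((type *)0)->member))

struct hyp_resp {
	uint32_t response_length;	/* bytes the kernel actually wrote */
	uint64_t base_cap;
	uint64_t odp_cap;		/* only present on newer kernels */
};

struct hyp_attr {
	uint64_t base_cap;
	uint64_t odp_cap;		/* only known to newer libraries */
};

static void copy_caps(struct hyp_attr *attr, size_t attr_size,
		      const struct hyp_resp *resp)
{
	attr->base_cap = resp->base_cap;

	if (attr_size >= offsetofend(struct hyp_attr, odp_cap) &&
	    resp->response_length >= offsetofend(struct hyp_resp, odp_cap))
		attr->odp_cap = resp->odp_cap;
}

int main(void)
{
	struct hyp_resp resp = {
		/* simulate a kernel that stopped writing after base_cap */
		.response_length = offsetofend(struct hyp_resp, base_cap),
		.base_cap = 0x1,
		.odp_cap = 0xdead,	/* must not be copied */
	};
	struct hyp_attr attr;

	memset(&attr, 0, sizeof(attr));
	copy_caps(&attr, sizeof(attr), &resp);
	printf("base_cap=0x%llx odp_cap=0x%llx\n",
	       (unsigned long long)attr.base_cap,
	       (unsigned long long)attr.odp_cap);	/* odp_cap stays 0 */
	return 0;
}

The two-sided length check is what lets an old library run against a new kernel (extra response bytes are ignored) and a new library run against an old kernel (missing fields stay zeroed).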
From patchwork Mon Nov 16 20:23:10 2020
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 11910753
From: Jason Gunthorpe
To: , Bob Pearson
Subject: [PATCH 9/9] verbs: Delete query_device() internal support
Date: Mon, 16 Nov 2020 16:23:10 -0400
Message-ID: <9-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com>
In-Reply-To: <0-v1-34e141ddf17e+89-query_device_ex_jgg@nvidia.com>

Now that all providers implement only the _ex API, have the external API call query_device_ex() directly and remove everything related to the internal query_device op.
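The compat path in the diff below relies on struct ibv_device_attr being embedded as the first member (orig_attr) of struct ibv_device_attr_ex, so the caller's pointer can be converted with container_of() while attr_size stops the callee from writing past the basic struct. A minimal sketch of that wrapping pattern, with hypothetical structs in place of the real libibverbs definitions:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* container_of(): recover the enclosing struct from a pointer to a member. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical basic/extended attribute structs; the extended struct
 * embeds the basic one as its first member. */
struct hyp_attr {
	int max_qp;
};

struct hyp_attr_ex {
	struct hyp_attr orig_attr;	/* must stay the first member */
	int extended_cap;
};

/* Extended query: writes only as much of *attr as attr_size permits. */
static int hyp_query_ex(struct hyp_attr_ex *attr, size_t attr_size)
{
	if (attr_size < sizeof(attr->orig_attr))
		return -1;
	attr->orig_attr.max_qp = 64;
	if (attr_size >= sizeof(*attr))
		attr->extended_cap = 1;
	return 0;
}

/* Basic query implemented on top of the extended one. The caller only
 * allocated a struct hyp_attr, so attr_size is capped at sizeof(*attr)
 * and hyp_query_ex() never touches the extended fields. */
static int hyp_query(struct hyp_attr *attr)
{
	return hyp_query_ex(container_of(attr, struct hyp_attr_ex, orig_attr),
			    sizeof(*attr));
}

int main(void)
{
	struct hyp_attr attr;

	memset(&attr, 0, sizeof(attr));
	if (hyp_query(&attr))
		return 1;
	printf("max_qp=%d\n", attr.max_qp);
	return 0;
}

Passing only sizeof(*attr) as attr_size is what keeps the conversion safe even though the caller never allocated the extended struct, which mirrors how ibv_query_device() forwards to query_device_ex() in the verbs.c hunk below.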
Signed-off-by: Jason Gunthorpe --- libibverbs/cmd.c | 65 ------------------------------------------ libibverbs/device.c | 1 + libibverbs/driver.h | 6 ---- libibverbs/dummy_ops.c | 28 +----------------- libibverbs/verbs.c | 5 +++- libibverbs/verbs.h | 3 +- providers/cxgb4/dev.c | 8 +++--- 7 files changed, 12 insertions(+), 104 deletions(-) diff --git a/libibverbs/cmd.c b/libibverbs/cmd.c index ec9750e7c04eb4..d7078823989bd2 100644 --- a/libibverbs/cmd.c +++ b/libibverbs/cmd.c @@ -47,71 +47,6 @@ bool verbs_allow_disassociate_destroy; -static void copy_query_dev_fields(struct ibv_device_attr *device_attr, - struct ib_uverbs_query_device_resp *resp, - uint64_t *raw_fw_ver) -{ - *raw_fw_ver = resp->fw_ver; - device_attr->node_guid = resp->node_guid; - device_attr->sys_image_guid = resp->sys_image_guid; - device_attr->max_mr_size = resp->max_mr_size; - device_attr->page_size_cap = resp->page_size_cap; - device_attr->vendor_id = resp->vendor_id; - device_attr->vendor_part_id = resp->vendor_part_id; - device_attr->hw_ver = resp->hw_ver; - device_attr->max_qp = resp->max_qp; - device_attr->max_qp_wr = resp->max_qp_wr; - device_attr->device_cap_flags = resp->device_cap_flags; - device_attr->max_sge = resp->max_sge; - device_attr->max_sge_rd = resp->max_sge_rd; - device_attr->max_cq = resp->max_cq; - device_attr->max_cqe = resp->max_cqe; - device_attr->max_mr = resp->max_mr; - device_attr->max_pd = resp->max_pd; - device_attr->max_qp_rd_atom = resp->max_qp_rd_atom; - device_attr->max_ee_rd_atom = resp->max_ee_rd_atom; - device_attr->max_res_rd_atom = resp->max_res_rd_atom; - device_attr->max_qp_init_rd_atom = resp->max_qp_init_rd_atom; - device_attr->max_ee_init_rd_atom = resp->max_ee_init_rd_atom; - device_attr->atomic_cap = resp->atomic_cap; - device_attr->max_ee = resp->max_ee; - device_attr->max_rdd = resp->max_rdd; - device_attr->max_mw = resp->max_mw; - device_attr->max_raw_ipv6_qp = resp->max_raw_ipv6_qp; - device_attr->max_raw_ethy_qp = resp->max_raw_ethy_qp; - device_attr->max_mcast_grp = resp->max_mcast_grp; - device_attr->max_mcast_qp_attach = resp->max_mcast_qp_attach; - device_attr->max_total_mcast_qp_attach = resp->max_total_mcast_qp_attach; - device_attr->max_ah = resp->max_ah; - device_attr->max_fmr = resp->max_fmr; - device_attr->max_map_per_fmr = resp->max_map_per_fmr; - device_attr->max_srq = resp->max_srq; - device_attr->max_srq_wr = resp->max_srq_wr; - device_attr->max_srq_sge = resp->max_srq_sge; - device_attr->max_pkeys = resp->max_pkeys; - device_attr->local_ca_ack_delay = resp->local_ca_ack_delay; - device_attr->phys_port_cnt = resp->phys_port_cnt; -} - -int ibv_cmd_query_device(struct ibv_context *context, - struct ibv_device_attr *device_attr, - uint64_t *raw_fw_ver, - struct ibv_query_device *cmd, size_t cmd_size) -{ - struct ib_uverbs_query_device_resp resp; - int ret; - - ret = execute_cmd_write(context, IB_USER_VERBS_CMD_QUERY_DEVICE, cmd, - cmd_size, &resp, sizeof(resp)); - if (ret) - return ret; - - memset(device_attr->fw_ver, 0, sizeof device_attr->fw_ver); - copy_query_dev_fields(device_attr, &resp, raw_fw_ver); - - return 0; -} - int ibv_cmd_alloc_pd(struct ibv_context *context, struct ibv_pd *pd, struct ibv_alloc_pd *cmd, size_t cmd_size, struct ib_uverbs_alloc_pd_resp *resp, size_t resp_size) diff --git a/libibverbs/device.c b/libibverbs/device.c index e9c429a51c7dd1..331a0871b4a77b 100644 --- a/libibverbs/device.c +++ b/libibverbs/device.c @@ -320,6 +320,7 @@ static void set_lib_ops(struct verbs_context *vctx) #undef ibv_query_port 
vctx->context.ops._compat_query_port = ibv_query_port; vctx->query_port = __lib_query_port; + vctx->context.ops._compat_query_device = ibv_query_device; /* * In order to maintain backward/forward binary compatibility diff --git a/libibverbs/driver.h b/libibverbs/driver.h index 33998e227c98ec..359b17302fab5d 100644 --- a/libibverbs/driver.h +++ b/libibverbs/driver.h @@ -354,8 +354,6 @@ struct verbs_context_ops { struct ibv_ops_wr **bad_op); int (*post_srq_recv)(struct ibv_srq *srq, struct ibv_recv_wr *recv_wr, struct ibv_recv_wr **bad_recv_wr); - int (*query_device)(struct ibv_context *context, - struct ibv_device_attr *device_attr); int (*query_device_ex)(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr, @@ -449,10 +447,6 @@ int ibv_cmd_get_context(struct verbs_context *context, struct ib_uverbs_get_context_resp *resp, size_t resp_size); int ibv_cmd_query_context(struct ibv_context *ctx, struct ibv_command_buffer *driver); -int ibv_cmd_query_device(struct ibv_context *context, - struct ibv_device_attr *device_attr, - uint64_t *raw_fw_ver, - struct ibv_query_device *cmd, size_t cmd_size); int ibv_cmd_create_flow_action_esp(struct ibv_context *ctx, struct ibv_flow_action_esp_attr *attr, struct verbs_flow_action *flow_action, diff --git a/libibverbs/dummy_ops.c b/libibverbs/dummy_ops.c index 711dfafb5caed5..b6f272dbd8f6de 100644 --- a/libibverbs/dummy_ops.c +++ b/libibverbs/dummy_ops.c @@ -377,35 +377,11 @@ static int post_srq_recv(struct ibv_srq *srq, struct ibv_recv_wr *recv_wr, return EOPNOTSUPP; } -static int query_device(struct ibv_context *context, - struct ibv_device_attr *device_attr) -{ - const struct verbs_context_ops *ops = get_ops(context); - - if (!ops->query_device_ex) - return EOPNOTSUPP; - return ops->query_device_ex( - context, NULL, - container_of(device_attr, struct ibv_device_attr_ex, orig_attr), - sizeof(*device_attr)); -} - -/* Provide a generic implementation for all providers that don't implement - * query_device_ex. 
- */ static int query_device_ex(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr, size_t attr_size) { - if (input && input->comp_mask) - return EINVAL; - - if (attr_size < sizeof(attr->orig_attr)) - return EOPNOTSUPP; - - memset(attr, 0, attr_size); - - return ibv_query_device(context, &attr->orig_attr); + return EOPNOTSUPP; } static int query_ece(struct ibv_qp *qp, struct ibv_ece *ece) @@ -558,7 +534,6 @@ const struct verbs_context_ops verbs_dummy_ops = { post_send, post_srq_ops, post_srq_recv, - query_device, query_device_ex, query_ece, query_port, @@ -680,7 +655,6 @@ void verbs_set_ops(struct verbs_context *vctx, SET_OP(ctx, post_send); SET_OP(vctx, post_srq_ops); SET_OP(ctx, post_srq_recv); - SET_PRIV_OP(ctx, query_device); SET_OP(vctx, query_device_ex); SET_PRIV_OP_IC(vctx, query_ece); SET_PRIV_OP_IC(ctx, query_port); diff --git a/libibverbs/verbs.c b/libibverbs/verbs.c index 7fc10240cf9def..18f5cba8c49525 100644 --- a/libibverbs/verbs.c +++ b/libibverbs/verbs.c @@ -156,7 +156,10 @@ LATEST_SYMVER_FUNC(ibv_query_device, 1_1, "IBVERBS_1.1", struct ibv_context *context, struct ibv_device_attr *device_attr) { - return get_ops(context)->query_device(context, device_attr); + return get_ops(context)->query_device_ex( + context, NULL, + container_of(device_attr, struct ibv_device_attr_ex, orig_attr), + sizeof(*device_attr)); } int __lib_query_port(struct ibv_context *context, uint8_t port_num, diff --git a/libibverbs/verbs.h b/libibverbs/verbs.h index ee57e0526d65b4..aafab2ab5547bd 100644 --- a/libibverbs/verbs.h +++ b/libibverbs/verbs.h @@ -1922,7 +1922,8 @@ struct ibv_device { struct _compat_ibv_port_attr; struct ibv_context_ops { - void *(*_compat_query_device)(void); + int (*_compat_query_device)(struct ibv_context *context, + struct ibv_device_attr *device_attr); int (*_compat_query_port)(struct ibv_context *context, uint8_t port_num, struct _compat_ibv_port_attr *port_attr); diff --git a/providers/cxgb4/dev.c b/providers/cxgb4/dev.c index 76b78d9b29a71c..c42c2300f1751f 100644 --- a/providers/cxgb4/dev.c +++ b/providers/cxgb4/dev.c @@ -114,8 +114,6 @@ static struct verbs_context *c4iw_alloc_context(struct ibv_device *ibdev, struct ibv_get_context cmd; struct uc4iw_alloc_ucontext_resp resp; struct c4iw_dev *rhp = to_c4iw_dev(ibdev); - struct ibv_query_device qcmd; - uint64_t raw_fw_ver; struct ibv_device_attr attr; context = verbs_init_and_alloc_context(ibdev, cmd_fd, context, ibv_ctx, @@ -143,8 +141,10 @@ static struct verbs_context *c4iw_alloc_context(struct ibv_device *ibdev, } verbs_set_ops(&context->ibv_ctx, &c4iw_ctx_common_ops); - if (ibv_cmd_query_device(&context->ibv_ctx.context, &attr, - &raw_fw_ver, &qcmd, sizeof(qcmd))) + if (c4iw_query_device(&context->ibv_ctx.context, NULL, + container_of(&attr, struct ibv_device_attr_ex, + orig_attr), + sizeof(attr))) goto err_unmap; if (!rhp->mmid2ptr) {