From patchwork Thu Nov 5 20:12:31 2020
From: Saeed Mahameed
To: Jakub Kicinski
CC: "David S. Miller", Yevgeny Kliteynik, Erez Shitrit, Alex Vesker,
	Saeed Mahameed
Subject: [net-next v2 01/12] net/mlx5: DR, Remove unused member of action struct
Date: Thu, 5 Nov 2020 12:12:31 -0800
Message-ID: <20201105201242.21716-2-saeedm@nvidia.com>
In-Reply-To: <20201105201242.21716-1-saeedm@nvidia.com>

From: Yevgeny Kliteynik

struct mlx5dr_action doesn't use its data_size member anywhere; remove it.

Signed-off-by: Erez Shitrit
Signed-off-by: Yevgeny Kliteynik
Reviewed-by: Alex Vesker
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
index f50f3b107aa3..5caf082b7000 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
@@ -731,7 +731,6 @@ struct mlx5dr_action {
 			struct mlx5dr_domain *dmn;
 			struct mlx5dr_icm_chunk *chunk;
 			u8 *data;
-			u32 data_size;
 			u16 num_of_actions;
 			u32 index;
 			u8 allow_rx:1;
From patchwork Thu Nov 5 20:12:32 2020
From: Saeed Mahameed
To: Jakub Kicinski
CC: "David S. Miller", Yevgeny Kliteynik, Alex Vesker, Saeed Mahameed
Subject: [net-next v2 02/12] net/mlx5: DR, Rename builders HW specific names
Date: Thu, 5 Nov 2020 12:12:32 -0800
Message-ID: <20201105201242.21716-3-saeedm@nvidia.com>
In-Reply-To: <20201105201242.21716-1-saeedm@nvidia.com>

From: Yevgeny Kliteynik

We will support multiple STE versions, and the existing naming is not
suitable for the newer versions. Remove the HW-specific details from
the builder names and rename them to more general names.

Signed-off-by: Alex Vesker
Signed-off-by: Yevgeny Kliteynik
Signed-off-by: Saeed Mahameed
---
 .../mellanox/mlx5/core/steering/dr_matcher.c  | 63 ++++++++++---------
 .../mellanox/mlx5/core/steering/dr_ste.c      | 42 ++++++-------
 .../mellanox/mlx5/core/steering/dr_types.h    | 42 ++++++-------
 3 files changed, 76 insertions(+), 71 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
index 7df883686d46..752afdb20e23 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
@@ -85,7 +85,7 @@ static bool dr_mask_is_ttl_set(struct mlx5dr_match_spec *spec)
 	(_misc2)._inner_outer##_first_mpls_s_bos || \
 	(_misc2)._inner_outer##_first_mpls_ttl)
 
-static bool dr_mask_is_gre_set(struct mlx5dr_match_misc *misc)
+static bool dr_mask_is_tnl_gre_set(struct mlx5dr_match_misc *misc)
 {
 	return (misc->gre_key_h || misc->gre_key_l ||
 		misc->gre_protocol || misc->gre_c_present ||
@@ -98,7 +98,7 @@ static bool dr_mask_is_gre_set(struct mlx5dr_match_misc *misc)
 	(_misc2).outer_first_mpls_over_##gre_udp##_s_bos || \
 	(_misc2).outer_first_mpls_over_##gre_udp##_ttl)
 
-#define DR_MASK_IS_FLEX_PARSER_0_SET(_misc2) ( \
+#define DR_MASK_IS_TNL_MPLS_SET(_misc2) ( \
 	DR_MASK_IS_OUTER_MPLS_OVER_GRE_UDP_SET((_misc2), gre) || \
 	DR_MASK_IS_OUTER_MPLS_OVER_GRE_UDP_SET((_misc2), udp))
 
@@ -148,12 +148,23 @@ dr_mask_is_flex_parser_tnl_geneve_set(struct mlx5dr_match_param *mask,
 	       dr_matcher_supp_flex_parser_geneve(&dmn->info.caps);
 }
 
-static bool dr_mask_is_flex_parser_icmpv6_set(struct mlx5dr_match_misc3 *misc3)
+static bool dr_mask_is_icmpv6_set(struct mlx5dr_match_misc3 *misc3)
 {
 	return (misc3->icmpv6_type || misc3->icmpv6_code ||
 		misc3->icmpv6_header_data);
 }
 
+static bool dr_mask_is_flex_parser_icmp_set(struct mlx5dr_match_param *mask,
+					    struct mlx5dr_domain *dmn)
+{
+	if (DR_MASK_IS_ICMPV4_SET(&mask->misc3))
+		return mlx5dr_matcher_supp_flex_parser_icmp_v4(&dmn->info.caps);
+	else if (dr_mask_is_icmpv6_set(&mask->misc3))
+		return mlx5dr_matcher_supp_flex_parser_icmp_v6(&dmn->info.caps);
+
+	return false;
+}
+
 static bool dr_mask_is_wqe_metadata_set(struct mlx5dr_match_misc2 *misc2)
 {
 	return misc2->metadata_reg_a;
@@ -257,7 +268,7 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
 
 	if (dr_mask_is_smac_set(&mask.outer) &&
 	    dr_mask_is_dmac_set(&mask.outer)) {
-		mlx5dr_ste_build_eth_l2_src_des(&sb[idx++], &mask,
+		mlx5dr_ste_build_eth_l2_src_dst(&sb[idx++], &mask,
 						inner, rx);
 	}
 
@@ -277,8 +288,8 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
 					  inner, rx);
 
 		if (DR_MASK_IS_ETH_L4_SET(mask.outer, mask.misc, outer))
-			mlx5dr_ste_build_ipv6_l3_l4(&sb[idx++], &mask,
-						    inner, rx);
+			mlx5dr_ste_build_eth_ipv6_l3_l4(&sb[idx++], &mask,
+							inner, rx);
 	} else {
 		if (dr_mask_is_ipv4_5_tuple_set(&mask.outer))
 			mlx5dr_ste_build_eth_l3_ipv4_5_tuple(&sb[idx++], &mask,
@@ -290,13 +301,11 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
 	}
 
 	if (dr_mask_is_flex_parser_tnl_vxlan_gpe_set(&mask, dmn))
-		mlx5dr_ste_build_flex_parser_tnl_vxlan_gpe(&sb[idx++],
-							   &mask,
-							   inner, rx);
+		mlx5dr_ste_build_tnl_vxlan_gpe(&sb[idx++], &mask,
+					       inner, rx);
 	else if (dr_mask_is_flex_parser_tnl_geneve_set(&mask, dmn))
-		mlx5dr_ste_build_flex_parser_tnl_geneve(&sb[idx++],
-							&mask,
-							inner, rx);
+		mlx5dr_ste_build_tnl_geneve(&sb[idx++], &mask,
+					    inner, rx);
 
 	if (DR_MASK_IS_ETH_L4_MISC_SET(mask.misc3, outer))
 		mlx5dr_ste_build_eth_l4_misc(&sb[idx++], &mask,
					     inner, rx);
@@ -304,22 +313,18 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
 		if (DR_MASK_IS_FIRST_MPLS_SET(mask.misc2, outer))
 			mlx5dr_ste_build_mpls(&sb[idx++], &mask, inner, rx);
 
-		if (DR_MASK_IS_FLEX_PARSER_0_SET(mask.misc2))
-			mlx5dr_ste_build_flex_parser_0(&sb[idx++], &mask,
-						       inner, rx);
+		if (DR_MASK_IS_TNL_MPLS_SET(mask.misc2))
+			mlx5dr_ste_build_tnl_mpls(&sb[idx++], &mask, inner, rx);
 
-		if ((DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(&mask.misc3) &&
-		     mlx5dr_matcher_supp_flex_parser_icmp_v4(&dmn->info.caps)) ||
-		    (dr_mask_is_flex_parser_icmpv6_set(&mask.misc3) &&
-		     mlx5dr_matcher_supp_flex_parser_icmp_v6(&dmn->info.caps))) {
-			ret = mlx5dr_ste_build_flex_parser_1(&sb[idx++],
-							     &mask, &dmn->info.caps,
-							     inner, rx);
+		if (dr_mask_is_flex_parser_icmp_set(&mask, dmn)) {
+			ret = mlx5dr_ste_build_icmp(&sb[idx++],
+						    &mask, &dmn->info.caps,
+						    inner, rx);
 			if (ret)
 				return ret;
 		}
 
-		if (dr_mask_is_gre_set(&mask.misc))
-			mlx5dr_ste_build_gre(&sb[idx++], &mask, inner, rx);
+		if (dr_mask_is_tnl_gre_set(&mask.misc))
+			mlx5dr_ste_build_tnl_gre(&sb[idx++], &mask, inner, rx);
 	}
 
 	/* Inner */
@@ -334,7 +339,7 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
 
 	if (dr_mask_is_smac_set(&mask.inner) &&
 	    dr_mask_is_dmac_set(&mask.inner)) {
-		mlx5dr_ste_build_eth_l2_src_des(&sb[idx++],
+		mlx5dr_ste_build_eth_l2_src_dst(&sb[idx++],
 						&mask, inner, rx);
 	}
 
@@ -354,8 +359,8 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
 					  inner, rx);
 
 		if (DR_MASK_IS_ETH_L4_SET(mask.inner, mask.misc, inner))
-			mlx5dr_ste_build_ipv6_l3_l4(&sb[idx++], &mask,
-						    inner, rx);
+			mlx5dr_ste_build_eth_ipv6_l3_l4(&sb[idx++], &mask,
+							inner, rx);
 	} else {
 		if (dr_mask_is_ipv4_5_tuple_set(&mask.inner))
 			mlx5dr_ste_build_eth_l3_ipv4_5_tuple(&sb[idx++], &mask,
@@ -372,8 +377,8 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
 		if (DR_MASK_IS_FIRST_MPLS_SET(mask.misc2, inner))
 			mlx5dr_ste_build_mpls(&sb[idx++], &mask, inner, rx);
 
-		if (DR_MASK_IS_FLEX_PARSER_0_SET(mask.misc2))
-			mlx5dr_ste_build_flex_parser_0(&sb[idx++], &mask, inner, rx);
+		if (DR_MASK_IS_TNL_MPLS_SET(mask.misc2))
+			mlx5dr_ste_build_tnl_mpls(&sb[idx++], &mask, inner, rx);
 	}
 	/* Empty matcher, takes all */
 	if (matcher->match_criteria == DR_MATCHER_CRITERIA_EMPTY)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
index b01aaec75622..d275823bff2f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c
@@ -1090,7 +1090,7 @@ static int dr_ste_build_eth_l2_src_des_tag(struct mlx5dr_match_param *value,
 	return 0;
 }
 
-void mlx5dr_ste_build_eth_l2_src_des(struct mlx5dr_ste_build *sb,
+void mlx5dr_ste_build_eth_l2_src_dst(struct mlx5dr_ste_build *sb,
 				     struct mlx5dr_match_param *mask,
 				     bool inner, bool rx)
 {
@@ -1594,9 +1594,9 @@ static int dr_ste_build_ipv6_l3_l4_tag(struct mlx5dr_match_param *value,
 	return 0;
 }
 
-void mlx5dr_ste_build_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
-				 struct mlx5dr_match_param *mask,
-				 bool inner, bool rx)
+void mlx5dr_ste_build_eth_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
+				     struct mlx5dr_match_param *mask,
+				     bool inner, bool rx)
 {
 	dr_ste_build_ipv6_l3_l4_bit_mask(mask, inner, sb->bit_mask);
 
@@ -1693,8 +1693,8 @@ static int dr_ste_build_gre_tag(struct mlx5dr_match_param *value,
 	return 0;
 }
 
-void mlx5dr_ste_build_gre(struct mlx5dr_ste_build *sb,
-			  struct mlx5dr_match_param *mask, bool inner, bool rx)
+void mlx5dr_ste_build_tnl_gre(struct mlx5dr_ste_build *sb,
+			      struct mlx5dr_match_param *mask, bool inner, bool rx)
 {
 	dr_ste_build_gre_bit_mask(mask, inner, sb->bit_mask);
 
@@ -1771,9 +1771,9 @@ static int dr_ste_build_flex_parser_0_tag(struct mlx5dr_match_param *value,
 	return 0;
 }
 
-void mlx5dr_ste_build_flex_parser_0(struct mlx5dr_ste_build *sb,
-				    struct mlx5dr_match_param *mask,
-				    bool inner, bool rx)
+void mlx5dr_ste_build_tnl_mpls(struct mlx5dr_ste_build *sb,
+			       struct mlx5dr_match_param *mask,
+			       bool inner, bool rx)
 {
 	dr_ste_build_flex_parser_0_bit_mask(mask, inner, sb->bit_mask);
 
@@ -1792,8 +1792,8 @@ static int dr_ste_build_flex_parser_1_bit_mask(struct mlx5dr_match_param *mask,
 					       struct mlx5dr_cmd_caps *caps,
 					       u8 *bit_mask)
 {
+	bool is_ipv4_mask = DR_MASK_IS_ICMPV4_SET(&mask->misc3);
 	struct mlx5dr_match_misc3 *misc_3_mask = &mask->misc3;
-	bool is_ipv4_mask = DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(misc_3_mask);
 	u32 icmp_header_data_mask;
 	u32 icmp_type_mask;
 	u32 icmp_code_mask;
@@ -1869,7 +1869,7 @@ static int dr_ste_build_flex_parser_1_tag(struct mlx5dr_match_param *value,
 	u32 icmp_code;
 	bool is_ipv4;
 
-	is_ipv4 = DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(misc_3);
+	is_ipv4 = DR_MASK_IS_ICMPV4_SET(misc_3);
 	if (is_ipv4) {
 		icmp_header_data = misc_3->icmpv4_header_data;
 		icmp_type = misc_3->icmpv4_type;
@@ -1928,10 +1928,10 @@ static int dr_ste_build_flex_parser_1_tag(struct mlx5dr_match_param *value,
 	return 0;
 }
 
-int mlx5dr_ste_build_flex_parser_1(struct mlx5dr_ste_build *sb,
-				   struct mlx5dr_match_param *mask,
-				   struct mlx5dr_cmd_caps *caps,
-				   bool inner, bool rx)
+int mlx5dr_ste_build_icmp(struct mlx5dr_ste_build *sb,
+			  struct mlx5dr_match_param *mask,
+			  struct mlx5dr_cmd_caps *caps,
+			  bool inner, bool rx)
 {
 	int ret;
 
@@ -2069,9 +2069,9 @@ dr_ste_build_flex_parser_tnl_vxlan_gpe_tag(struct mlx5dr_match_param *value,
 	return 0;
 }
 
-void mlx5dr_ste_build_flex_parser_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
-						struct mlx5dr_match_param *mask,
-						bool inner, bool rx)
+void mlx5dr_ste_build_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
+				    struct mlx5dr_match_param *mask,
+				    bool inner, bool rx)
 {
 	dr_ste_build_flex_parser_tnl_vxlan_gpe_bit_mask(mask, inner,
 							sb->bit_mask);
@@ -2122,9 +2122,9 @@ dr_ste_build_flex_parser_tnl_geneve_tag(struct mlx5dr_match_param *value,
 	return 0;
 }
 
-void mlx5dr_ste_build_flex_parser_tnl_geneve(struct mlx5dr_ste_build *sb,
-					     struct mlx5dr_match_param *mask,
-					     bool inner, bool rx)
+void mlx5dr_ste_build_tnl_geneve(struct mlx5dr_ste_build *sb,
+				 struct mlx5dr_match_param *mask,
+				 bool inner, bool rx)
 {
 	dr_ste_build_flex_parser_tnl_geneve_bit_mask(mask, sb->bit_mask);
 	sb->rx = rx;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
index 5caf082b7000..562c72aad733 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
@@ -288,7 +288,7 @@ int mlx5dr_ste_build_ste_arr(struct mlx5dr_matcher *matcher,
 			     struct mlx5dr_matcher_rx_tx *nic_matcher,
 			     struct mlx5dr_match_param *value,
 			     u8 *ste_arr);
-void mlx5dr_ste_build_eth_l2_src_des(struct mlx5dr_ste_build *builder,
+void mlx5dr_ste_build_eth_l2_src_dst(struct mlx5dr_ste_build *builder,
 				     struct mlx5dr_match_param *mask,
 				     bool inner, bool rx);
 void mlx5dr_ste_build_eth_l3_ipv4_5_tuple(struct mlx5dr_ste_build *sb,
@@ -312,31 +312,31 @@ void mlx5dr_ste_build_eth_l2_dst(struct mlx5dr_ste_build *sb,
 void mlx5dr_ste_build_eth_l2_tnl(struct mlx5dr_ste_build *sb,
 				 struct mlx5dr_match_param *mask,
 				 bool inner, bool rx);
-void mlx5dr_ste_build_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
-				 struct mlx5dr_match_param *mask,
-				 bool inner, bool rx);
+void mlx5dr_ste_build_eth_ipv6_l3_l4(struct mlx5dr_ste_build *sb,
+				     struct mlx5dr_match_param *mask,
+				     bool inner, bool rx);
 void mlx5dr_ste_build_eth_l4_misc(struct mlx5dr_ste_build *sb,
 				  struct mlx5dr_match_param *mask,
 				  bool inner, bool rx);
-void mlx5dr_ste_build_gre(struct mlx5dr_ste_build *sb,
-			  struct mlx5dr_match_param *mask,
-			  bool inner, bool rx);
+void mlx5dr_ste_build_tnl_gre(struct mlx5dr_ste_build *sb,
+			      struct mlx5dr_match_param *mask,
+			      bool inner, bool rx);
 void mlx5dr_ste_build_mpls(struct mlx5dr_ste_build *sb,
 			   struct mlx5dr_match_param *mask,
 			   bool inner, bool rx);
-void mlx5dr_ste_build_flex_parser_0(struct mlx5dr_ste_build *sb,
+void mlx5dr_ste_build_tnl_mpls(struct mlx5dr_ste_build *sb,
+			       struct mlx5dr_match_param *mask,
+			       bool inner, bool rx);
+int mlx5dr_ste_build_icmp(struct mlx5dr_ste_build *sb,
+			  struct mlx5dr_match_param *mask,
+			  struct mlx5dr_cmd_caps *caps,
+			  bool inner, bool rx);
+void mlx5dr_ste_build_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
 				    struct mlx5dr_match_param *mask,
 				    bool inner, bool rx);
-int mlx5dr_ste_build_flex_parser_1(struct mlx5dr_ste_build *sb,
-				   struct mlx5dr_match_param *mask,
-				   struct mlx5dr_cmd_caps *caps,
-				   bool inner, bool rx);
-void mlx5dr_ste_build_flex_parser_tnl_vxlan_gpe(struct mlx5dr_ste_build *sb,
-						struct mlx5dr_match_param *mask,
-						bool inner, bool rx);
-void mlx5dr_ste_build_flex_parser_tnl_geneve(struct mlx5dr_ste_build *sb,
-					     struct mlx5dr_match_param *mask,
-					     bool inner, bool rx);
+void mlx5dr_ste_build_tnl_geneve(struct mlx5dr_ste_build *sb,
+				 struct mlx5dr_match_param *mask,
+				 bool inner, bool rx);
 void mlx5dr_ste_build_general_purpose(struct mlx5dr_ste_build *sb,
 				      struct mlx5dr_match_param *mask,
 				      bool inner, bool rx);
@@ -588,9 +588,9 @@ struct mlx5dr_match_param {
 	struct mlx5dr_match_misc3 misc3;
 };
 
-#define DR_MASK_IS_FLEX_PARSER_ICMPV4_SET(_misc3) ((_misc3)->icmpv4_type || \
-						   (_misc3)->icmpv4_code || \
-						   (_misc3)->icmpv4_header_data)
+#define DR_MASK_IS_ICMPV4_SET(_misc3) ((_misc3)->icmpv4_type || \
+				       (_misc3)->icmpv4_code || \
+				       (_misc3)->icmpv4_header_data)
 
 struct mlx5dr_esw_caps {
 	u64 drop_icm_address_rx;
From patchwork Thu Nov 5 20:12:34 2020
From: Saeed Mahameed
To: Jakub Kicinski
CC: "David S. Miller", Yevgeny Kliteynik, Erez Shitrit, Mark Bloch,
	Saeed Mahameed
Subject: [net-next v2 04/12] net/mlx5: DR, Add buddy allocator utilities
Date: Thu, 5 Nov 2020 12:12:34 -0800
Message-ID: <20201105201242.21716-5-saeedm@nvidia.com>
In-Reply-To: <20201105201242.21716-1-saeedm@nvidia.com>

From: Yevgeny Kliteynik

Add an implementation of the SW Steering variation of the buddy allocator.

The buddy system for ICM memory uses two main data structures:
- A bitmap per order, which keeps the current state of allocated
  blocks for that order
- An indicator of the number of available blocks per order

Signed-off-by: Erez Shitrit
Signed-off-by: Yevgeny Kliteynik
Reviewed-by: Mark Bloch
Signed-off-by: Saeed Mahameed
---
 .../net/ethernet/mellanox/mlx5/core/Makefile  |   2 +-
 .../mellanox/mlx5/core/steering/dr_buddy.c    | 170 ++++++++++++++++++
 .../mellanox/mlx5/core/steering/mlx5dr.h      |  33 ++++
 3 files changed, 204 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/steering/dr_buddy.c

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index 2d477f9a8cb7..83a67ca43a41 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -81,7 +81,7 @@ mlx5_core-$(CONFIG_MLX5_EN_TLS) += en_accel/tls.o en_accel/tls_rxtx.o en_accel/t
 
 mlx5_core-$(CONFIG_MLX5_SW_STEERING) += steering/dr_domain.o steering/dr_table.o \
 					steering/dr_matcher.o steering/dr_rule.o \
-					steering/dr_icm_pool.o \
+					steering/dr_icm_pool.o steering/dr_buddy.o \
 					steering/dr_ste.o steering/dr_send.o \
 					steering/dr_cmd.o steering/dr_fw.o \
 					steering/dr_action.o steering/fs_dr.o
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_buddy.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_buddy.c
new file mode 100644
index 000000000000..7df11a019df9
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_buddy.c
@@ -0,0 +1,170 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2004 Topspin Communications. All rights reserved.
+ * Copyright (c) 2005 - 2008 Mellanox Technologies. All rights reserved.
+ * Copyright (c) 2006 - 2007 Cisco Systems, Inc. All rights reserved.
+ * Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved.
+ */
+
+#include "dr_types.h"
+
+int mlx5dr_buddy_init(struct mlx5dr_icm_buddy_mem *buddy,
+		      unsigned int max_order)
+{
+	int i;
+
+	buddy->max_order = max_order;
+
+	INIT_LIST_HEAD(&buddy->list_node);
+	INIT_LIST_HEAD(&buddy->used_list);
+	INIT_LIST_HEAD(&buddy->hot_list);
+
+	buddy->bitmap = kcalloc(buddy->max_order + 1,
+				sizeof(*buddy->bitmap),
+				GFP_KERNEL);
+	buddy->num_free = kcalloc(buddy->max_order + 1,
+				  sizeof(*buddy->num_free),
+				  GFP_KERNEL);
+
+	if (!buddy->bitmap || !buddy->num_free)
+		goto err_free_all;
+
+	/* Allocating max_order bitmaps, one for each order */
+
+	for (i = 0; i <= buddy->max_order; ++i) {
+		unsigned int size = 1 << (buddy->max_order - i);
+
+		buddy->bitmap[i] = bitmap_zalloc(size, GFP_KERNEL);
+		if (!buddy->bitmap[i])
+			goto err_out_free_each_bit_per_order;
+	}
+
+	/* In the beginning, we have only one order that is available for
+	 * use (the biggest one), so mark the first bit in both bitmaps.
+	 */
+
+	bitmap_set(buddy->bitmap[buddy->max_order], 0, 1);
+
+	buddy->num_free[buddy->max_order] = 1;
+
+	return 0;
+
+err_out_free_each_bit_per_order:
+	for (i = 0; i <= buddy->max_order; ++i)
+		bitmap_free(buddy->bitmap[i]);
+
+err_free_all:
+	kfree(buddy->num_free);
+	kfree(buddy->bitmap);
+	return -ENOMEM;
+}
+
+void mlx5dr_buddy_cleanup(struct mlx5dr_icm_buddy_mem *buddy)
+{
+	int i;
+
+	list_del(&buddy->list_node);
+
+	for (i = 0; i <= buddy->max_order; ++i)
+		bitmap_free(buddy->bitmap[i]);
+
+	kfree(buddy->num_free);
+	kfree(buddy->bitmap);
+}
+
+static int dr_buddy_find_free_seg(struct mlx5dr_icm_buddy_mem *buddy,
+				  unsigned int start_order,
+				  unsigned int *segment,
+				  unsigned int *order)
+{
+	unsigned int seg, order_iter, m;
+
+	for (order_iter = start_order;
+	     order_iter <= buddy->max_order; ++order_iter) {
+		if (!buddy->num_free[order_iter])
+			continue;
+
+		m = 1 << (buddy->max_order - order_iter);
+		seg = find_first_bit(buddy->bitmap[order_iter], m);
+
+		if (WARN(seg >= m,
+			 "ICM Buddy: failed finding free mem for order %d\n",
+			 order_iter))
+			return -ENOMEM;
+
+		break;
+	}
+
+	if (order_iter > buddy->max_order)
+		return -ENOMEM;
+
+	*segment = seg;
+	*order = order_iter;
+	return 0;
+}
+
+/**
+ * mlx5dr_buddy_alloc_mem() - Update second level bitmap.
+ * @buddy: Buddy to update.
+ * @order: Order of the buddy to update.
+ * @segment: Segment number.
+ *
+ * This function finds the first area of the ICM memory managed by this buddy.
+ * It uses the data structures of the buddy system in order to find the first
+ * area of free place, starting from the current order till the maximum order
+ * in the system.
+ *
+ * Return: 0 when segment is set, non-zero error status otherwise.
+ *
+ * The function returns the location (segment) in the whole buddy ICM memory
+ * area - the index of the memory segment that is available for use.
+ */
+int mlx5dr_buddy_alloc_mem(struct mlx5dr_icm_buddy_mem *buddy,
+			   unsigned int order,
+			   unsigned int *segment)
+{
+	unsigned int seg, order_iter;
+	int err;
+
+	err = dr_buddy_find_free_seg(buddy, order, &seg, &order_iter);
+	if (err)
+		return err;
+
+	bitmap_clear(buddy->bitmap[order_iter], seg, 1);
+	--buddy->num_free[order_iter];
+
+	/* If we found free memory in some order that is bigger than the
+	 * required order, we need to split every order between the required
+	 * order and the order that we found into two parts, and mark accordingly.
+	 */
+	while (order_iter > order) {
+		--order_iter;
+		seg <<= 1;
+		bitmap_set(buddy->bitmap[order_iter], seg ^ 1, 1);
+		++buddy->num_free[order_iter];
+	}
+
+	seg <<= order;
+	*segment = seg;
+
+	return 0;
+}
+
+void mlx5dr_buddy_free_mem(struct mlx5dr_icm_buddy_mem *buddy,
+			   unsigned int seg, unsigned int order)
+{
+	seg >>= order;
+
+	/* Whenever a segment is free,
+	 * the mem is added to the buddy that gave it.
+	 */
+	while (test_bit(seg ^ 1, buddy->bitmap[order])) {
+		bitmap_clear(buddy->bitmap[order], seg ^ 1, 1);
+		--buddy->num_free[order];
+		seg >>= 1;
+		++order;
+	}
+	bitmap_set(buddy->bitmap[order], seg, 1);
+
+	++buddy->num_free[order];
+}
+
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
index 7914fe3fc68d..25b7bda3ce4a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
@@ -127,4 +127,37 @@ mlx5dr_is_supported(struct mlx5_core_dev *dev)
 	return MLX5_CAP_ESW_FLOWTABLE_FDB(dev, sw_owner);
 }
 
+/* buddy functions & structure */
+
+struct mlx5dr_icm_mr;
+
+struct mlx5dr_icm_buddy_mem {
+	unsigned long		**bitmap;
+	unsigned int		*num_free;
+	u32			max_order;
+	struct list_head	list_node;
+	struct mlx5dr_icm_mr	*icm_mr;
+	struct mlx5dr_icm_pool	*pool;
+
+	/* This is the list of used chunks. HW may be accessing this memory */
+	struct list_head	used_list;
+
+	/* Hardware may be accessing this memory but at some future,
+	 * undetermined time, it might cease to do so.
+	 * sync_ste command sets them free.
+	 */
+	struct list_head	hot_list;
+	/* indicates the byte size of hot mem */
+	unsigned int		hot_memory_size;
+};
+
+int mlx5dr_buddy_init(struct mlx5dr_icm_buddy_mem *buddy,
+		      unsigned int max_order);
+void mlx5dr_buddy_cleanup(struct mlx5dr_icm_buddy_mem *buddy);
+int mlx5dr_buddy_alloc_mem(struct mlx5dr_icm_buddy_mem *buddy,
+			   unsigned int order,
+			   unsigned int *segment);
+void mlx5dr_buddy_free_mem(struct mlx5dr_icm_buddy_mem *buddy,
+			   unsigned int seg, unsigned int order);
+
 #endif /* _MLX5DR_H_ */
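[The order-splitting alloc and buddy-merging free in dr_buddy.c above can be exercised in isolation. Below is a minimal userspace C sketch of the same logic over a toy fixed-size pool, with plain byte arrays standing in for the kernel bitmap API; all names here are illustrative, not the driver's.]

#include <stdio.h>

#define MAX_ORDER 4                 /* toy pool: 2^4 = 16 segments */

static unsigned char bitmap[MAX_ORDER + 1][1 << MAX_ORDER]; /* 1 = free */
static unsigned int num_free[MAX_ORDER + 1];

static void buddy_init(void)
{
	bitmap[MAX_ORDER][0] = 1;    /* one free block of the biggest order */
	num_free[MAX_ORDER] = 1;
}

static int buddy_alloc(unsigned int order, unsigned int *segment)
{
	unsigned int o, seg = 0;

	for (o = order; o <= MAX_ORDER; o++)   /* first order with a free block */
		if (num_free[o])
			break;
	if (o > MAX_ORDER)
		return -1;

	while (!bitmap[o][seg])                /* find_first_bit() equivalent */
		seg++;

	bitmap[o][seg] = 0;
	num_free[o]--;

	while (o > order) {                    /* split down, freeing siblings */
		o--;
		seg <<= 1;
		bitmap[o][seg ^ 1] = 1;
		num_free[o]++;
	}
	*segment = seg << order;               /* index in smallest-unit segments */
	return 0;
}

static void buddy_free(unsigned int seg, unsigned int order)
{
	seg >>= order;
	/* re-merge with the sibling ("seg ^ 1") while it is also free */
	while (order < MAX_ORDER && bitmap[order][seg ^ 1]) {
		bitmap[order][seg ^ 1] = 0;
		num_free[order]--;
		seg >>= 1;
		order++;
	}
	bitmap[order][seg] = 1;
	num_free[order]++;
}

int main(void)
{
	unsigned int a, b;

	buddy_init();
	buddy_alloc(1, &a);                    /* 2 segments -> a = 0 */
	buddy_alloc(2, &b);                    /* 4 segments -> b = 4 */
	printf("a=%u b=%u\n", a, b);
	buddy_free(a, 1);
	buddy_free(b, 2);
	printf("free at top order: %u\n", num_free[MAX_ORDER]); /* back to 1 */
	return 0;
}

[Freeing both chunks merges everything back into the single top-order block, which is the invariant the driver relies on when deciding a buddy is unused.]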
Miller" , "Yevgeny Kliteynik" , Erez Shitrit , Alex Vesker , Mark Bloch , Saeed Mahameed Subject: [net-next v2 06/12] net/mlx5: DR, Sync chunks only during free Date: Thu, 5 Nov 2020 12:12:36 -0800 Message-ID: <20201105201242.21716-7-saeedm@nvidia.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201105201242.21716-1-saeedm@nvidia.com> References: <20201105201242.21716-1-saeedm@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.124.1.5] X-ClientProxiedBy: HQMAIL105.nvidia.com (172.20.187.12) To HQMAIL107.nvidia.com (172.20.187.13) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1604607180; bh=/CXbSfyI/MbYr2+Ar+3wtDX8QM7y3m+T9ksjUkE6JnM=; h=From:To:CC:Subject:Date:Message-ID:X-Mailer:In-Reply-To: References:MIME-Version:Content-Transfer-Encoding:Content-Type: X-Originating-IP:X-ClientProxiedBy; b=DwyvGcvtPb6V4aExE52que42rUC7J8otyMpZhqT61Ur/RGu3iRyrk9gSgqhfDjLc6 lLVbSUGosAAhsquWhE+zBPwAB7PCsR5rddFXBOyomPYrELK9WdbZg62DznaCvKHNSW u/ajcNAM7G97Kzrv1npNT+KxZiEA5pEgy+m2shhBB/a9v5aCMFosx56OSRnSgMdhjZ jnPS6GWZ/mBl1/KFu43UyEHGQrZ9y0sKt7F3qkPEIhGYDzTkKlgfCaKgmY0zdpUI/b Cs8pLbchmKmoOaxLQYAkj8Mnk97Gg/Wr0ca2Su53InnXt72I54Qoxf0tqsUjA/N4xJ hd4S4vscxeUlQ== Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Yevgeny Kliteynik When freeing chunks, we want to sync the steering so that all the "hot" memory will be written to ICM and all the chunks that are in the hot_list will be actually destroyed. When allocating from the pool, we don't have a need to sync the steering, as we're not freeing anything, and sync might just hurt the performance in terms of flow-per-second offloaded. Signed-off-by: Erez Shitrit Signed-off-by: Yevgeny Kliteynik Reviewed-by: Alex Vesker Reviewed-by: Mark Bloch Signed-off-by: Saeed Mahameed --- .../mellanox/mlx5/core/steering/dr_icm_pool.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c index 2c5886b469f7..4d8330aab169 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c @@ -332,10 +332,6 @@ static int dr_icm_handle_buddies_get_mem(struct mlx5dr_icm_pool *pool, bool new_mem = false; int err; - /* Check if we have chunks that are waiting for sync-ste */ - if (dr_icm_pool_is_sync_required(pool)) - dr_icm_pool_sync_all_buddy_pools(pool); - alloc_buddy_mem: /* find the next free place from the buddy list */ list_for_each_entry(buddy_mem_pool, &pool->buddy_mem_list, list_node) { @@ -409,12 +405,18 @@ mlx5dr_icm_alloc_chunk(struct mlx5dr_icm_pool *pool, void mlx5dr_icm_free_chunk(struct mlx5dr_icm_chunk *chunk) { struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem; + struct mlx5dr_icm_pool *pool = buddy->pool; /* move the memory to the waiting list AKA "hot" */ - mutex_lock(&buddy->pool->mutex); + mutex_lock(&pool->mutex); list_move_tail(&chunk->chunk_list, &buddy->hot_list); buddy->hot_memory_size += chunk->byte_size; - mutex_unlock(&buddy->pool->mutex); + + /* Check if we have chunks that are waiting for sync-ste */ + if (dr_icm_pool_is_sync_required(pool)) + dr_icm_pool_sync_all_buddy_pools(pool); + + mutex_unlock(&pool->mutex); } struct mlx5dr_icm_pool *mlx5dr_icm_pool_create(struct mlx5dr_domain *dmn, From patchwork Thu Nov 5 20:12:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
From patchwork Thu Nov 5 20:12:37 2020
From: Saeed Mahameed
To: Jakub Kicinski
CC: "David S. Miller", Yevgeny Kliteynik, Hamdan Igbaria, Alex Vesker,
	Mark Bloch, Saeed Mahameed
Subject: [net-next v2 07/12] net/mlx5: DR, ICM memory pools sync optimization
Date: Thu, 5 Nov 2020 12:12:37 -0800
Message-ID: <20201105201242.21716-8-saeedm@nvidia.com>
In-Reply-To: <20201105201242.21716-1-saeedm@nvidia.com>

From: Yevgeny Kliteynik

Track the pool's hot ICM memory when freeing/allocating a chunk, so that
checking whether a sync is required becomes a single comparison of the
pool's hot memory against the sync threshold, instead of scanning every
buddy in the pool.

Signed-off-by: Hamdan Igbaria
Signed-off-by: Yevgeny Kliteynik
Reviewed-by: Alex Vesker
Reviewed-by: Mark Bloch
Signed-off-by: Saeed Mahameed
---
 .../mellanox/mlx5/core/steering/dr_icm_pool.c | 22 +++++--------------
 .../mellanox/mlx5/core/steering/mlx5dr.h      |  2 --
 2 files changed, 6 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
index 4d8330aab169..c49f8e86f3bc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
@@ -4,7 +4,7 @@
 #include "dr_types.h"
 
 #define DR_ICM_MODIFY_HDR_ALIGN_BASE 64
-#define DR_ICM_SYNC_THRESHOLD (64 * 1024 * 1024)
+#define DR_ICM_SYNC_THRESHOLD_POOL (64 * 1024 * 1024)
 
 struct mlx5dr_icm_pool {
 	enum mlx5dr_icm_type icm_type;
@@ -13,6 +13,7 @@ struct mlx5dr_icm_pool {
 	/* memory management */
 	struct mutex mutex; /* protect the ICM pool and ICM buddy */
 	struct list_head buddy_mem_list;
+	u64 hot_memory_size;
 };
 
 struct mlx5dr_icm_dm {
@@ -281,19 +282,8 @@ dr_icm_chunk_create(struct mlx5dr_icm_pool *pool,
 
 static bool dr_icm_pool_is_sync_required(struct mlx5dr_icm_pool *pool)
 {
-	u64 allow_hot_size, all_hot_mem = 0;
-	struct mlx5dr_icm_buddy_mem *buddy;
-
-	list_for_each_entry(buddy, &pool->buddy_mem_list, list_node) {
-		allow_hot_size =
-			mlx5dr_icm_pool_chunk_size_to_byte((buddy->max_order - 2),
-							   pool->icm_type);
-		all_hot_mem += buddy->hot_memory_size;
-
-		if (buddy->hot_memory_size > allow_hot_size ||
-		    all_hot_mem > DR_ICM_SYNC_THRESHOLD)
-			return true;
-	}
+	if (pool->hot_memory_size > DR_ICM_SYNC_THRESHOLD_POOL)
+		return true;
 
 	return false;
 }
@@ -315,7 +305,7 @@ static int dr_icm_pool_sync_all_buddy_pools(struct mlx5dr_icm_pool *pool)
 		list_for_each_entry_safe(chunk, tmp_chunk, &buddy->hot_list, chunk_list) {
 			mlx5dr_buddy_free_mem(buddy, chunk->seg,
 					      ilog2(chunk->num_of_entries));
-			buddy->hot_memory_size -= chunk->byte_size;
+			pool->hot_memory_size -= chunk->byte_size;
 			dr_icm_chunk_destroy(chunk);
 		}
 	}
@@ -410,7 +400,7 @@ void mlx5dr_icm_free_chunk(struct mlx5dr_icm_chunk *chunk)
 	/* move the memory to the waiting list AKA "hot" */
 	mutex_lock(&pool->mutex);
 	list_move_tail(&chunk->chunk_list, &buddy->hot_list);
-	buddy->hot_memory_size += chunk->byte_size;
+	pool->hot_memory_size += chunk->byte_size;
 
 	/* Check if we have chunks that are waiting for sync-ste */
 	if (dr_icm_pool_is_sync_required(pool))
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
index 25b7bda3ce4a..a483d7de9ea6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
@@ -147,8 +147,6 @@ struct mlx5dr_icm_buddy_mem {
 	 * sync_ste command sets them free.
 	 */
 	struct list_head hot_list;
-	/* indicates the byte size of hot mem */
-	unsigned int hot_memory_size;
 };
 
 int mlx5dr_buddy_init(struct mlx5dr_icm_buddy_mem *buddy,
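[Taken together with the previous patch, the free path now works as in this compressed userspace C sketch - assumed names and types, not the driver API: freed memory goes "hot", and a cheap O(1) pool-level watermark decides when to sync, so the allocation path never pays for reclamation.]

#include <stdio.h>

#define SYNC_THRESHOLD_POOL (64ULL * 1024 * 1024)   /* same 64 MiB watermark */

struct pool {
	unsigned long long hot_memory_size;   /* bytes waiting for sync-ste */
};

static int pool_is_sync_required(const struct pool *p)
{
	/* single comparison instead of walking every buddy (patch 07) */
	return p->hot_memory_size > SYNC_THRESHOLD_POOL;
}

static void pool_sync(struct pool *p)
{
	/* stand-in for dr_icm_pool_sync_all_buddy_pools(): flush hot chunks */
	printf("sync: reclaiming %llu hot bytes\n", p->hot_memory_size);
	p->hot_memory_size = 0;
}

static void free_chunk(struct pool *p, unsigned long long byte_size)
{
	/* the chunk moves to the hot list; the sync check happens here,
	 * never on alloc (patch 06), keeping the allocation path cheap
	 */
	p->hot_memory_size += byte_size;
	if (pool_is_sync_required(p))
		pool_sync(p);
}

int main(void)
{
	struct pool p = { 0 };
	int i;

	for (i = 0; i < 70; i++)
		free_chunk(&p, 1024 * 1024);  /* 70 x 1 MiB frees -> one sync */
	return 0;
}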
From patchwork Thu Nov 5 20:12:38 2020
From: Saeed Mahameed
To: Jakub Kicinski
CC: "David S. Miller", Yevgeny Kliteynik, Alex Vesker, Mark Bloch,
	Saeed Mahameed
Subject: [net-next v2 08/12] net/mlx5: DR, Free unused buddy ICM memory
Date: Thu, 5 Nov 2020 12:12:38 -0800
Message-ID: <20201105201242.21716-9-saeedm@nvidia.com>
In-Reply-To: <20201105201242.21716-1-saeedm@nvidia.com>

From: Yevgeny Kliteynik

Track the buddy's used ICM memory, and free the buddy if all of its
memory became unused. Do this only for STEs.

MODIFY_ACTION buddies are much smaller, so when there is a large number
of modify_header actions (and therefore a large number of MODIFY_ACTION
buddies), doing this cleanup during sync would cause a performance hit
while not freeing a significant amount of memory.

Signed-off-by: Yevgeny Kliteynik
Reviewed-by: Alex Vesker
Reviewed-by: Mark Bloch
Signed-off-by: Saeed Mahameed
---
 .../mellanox/mlx5/core/steering/dr_icm_pool.c     | 14 ++++++++++----
 .../ethernet/mellanox/mlx5/core/steering/mlx5dr.h |  1 +
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
index c49f8e86f3bc..66c24767e3b0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
@@ -175,10 +175,12 @@ get_chunk_icm_type(struct mlx5dr_icm_chunk *chunk)
 	return chunk->buddy_mem->pool->icm_type;
 }
 
-static void dr_icm_chunk_destroy(struct mlx5dr_icm_chunk *chunk)
+static void dr_icm_chunk_destroy(struct mlx5dr_icm_chunk *chunk,
+				 struct mlx5dr_icm_buddy_mem *buddy)
 {
 	enum mlx5dr_icm_type icm_type = get_chunk_icm_type(chunk);
 
+	buddy->used_memory -= chunk->byte_size;
 	list_del(&chunk->chunk_list);
 
 	if (icm_type == DR_ICM_TYPE_STE)
@@ -223,10 +225,10 @@ static void dr_icm_buddy_destroy(struct mlx5dr_icm_buddy_mem *buddy)
 	struct mlx5dr_icm_chunk *chunk, *next;
 
 	list_for_each_entry_safe(chunk, next, &buddy->hot_list, chunk_list)
-		dr_icm_chunk_destroy(chunk);
+		dr_icm_chunk_destroy(chunk, buddy);
 
 	list_for_each_entry_safe(chunk, next, &buddy->used_list, chunk_list)
-		dr_icm_chunk_destroy(chunk);
+		dr_icm_chunk_destroy(chunk, buddy);
 
 	dr_icm_pool_mr_destroy(buddy->icm_mr);
 
@@ -267,6 +269,7 @@ dr_icm_chunk_create(struct mlx5dr_icm_pool *pool,
 		goto out_free_chunk;
 	}
 
+	buddy_mem_pool->used_memory += chunk->byte_size;
 	chunk->buddy_mem = buddy_mem_pool;
 	INIT_LIST_HEAD(&chunk->chunk_list);
 
@@ -306,8 +309,11 @@ static int dr_icm_pool_sync_all_buddy_pools(struct mlx5dr_icm_pool *pool)
 			mlx5dr_buddy_free_mem(buddy, chunk->seg,
 					      ilog2(chunk->num_of_entries));
 			pool->hot_memory_size -= chunk->byte_size;
-			dr_icm_chunk_destroy(chunk);
+			dr_icm_chunk_destroy(chunk, buddy);
 		}
+
+		if (!buddy->used_memory && pool->icm_type == DR_ICM_TYPE_STE)
+			dr_icm_buddy_destroy(buddy);
 	}
 
 	return 0;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
index a483d7de9ea6..4177786b8eaf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
@@ -141,6 +141,7 @@ struct mlx5dr_icm_buddy_mem {
 
 	/* This is the list of used chunks. HW may be accessing this memory */
 	struct list_head used_list;
+	u64 used_memory;
 
 	/* Hardware may be accessing this memory but at some future,
 	 * undetermined time, it might cease to do so.
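[A small sketch of the accounting rule this patch adds - illustrative names and simplified types only; the real code ties this into dr_icm_chunk_create()/dr_icm_chunk_destroy() and the sync loop shown in the diff:]

#include <stdbool.h>

enum icm_type { ICM_TYPE_STE, ICM_TYPE_MODIFY_ACTION };

struct buddy_mem {
	unsigned long long used_memory;  /* bytes of live chunks in this buddy */
};

static void chunk_account_create(struct buddy_mem *b, unsigned int byte_size)
{
	b->used_memory += byte_size;     /* mirrors dr_icm_chunk_create() */
}

static void chunk_account_destroy(struct buddy_mem *b, unsigned int byte_size)
{
	b->used_memory -= byte_size;     /* mirrors dr_icm_chunk_destroy() */
}

static bool buddy_should_be_destroyed(const struct buddy_mem *b,
				      enum icm_type type)
{
	/* MODIFY_ACTION buddies are small: destroying them during sync costs
	 * more than the memory it would return, hence the STE-only rule.
	 */
	return b->used_memory == 0 && type == ICM_TYPE_STE;
}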
From patchwork Thu Nov 5 20:12:40 2020
From: Saeed Mahameed
To: Jakub Kicinski
CC: "David S. Miller", Saeed Mahameed, Moshe Shemesh, Tariq Toukan
Subject: [net-next v2 10/12] net/mlx4: Cleanup kernel-doc warnings
Date: Thu, 5 Nov 2020 12:12:40 -0800
Message-ID: <20201105201242.21716-11-saeedm@nvidia.com>
In-Reply-To: <20201105201242.21716-1-saeedm@nvidia.com>

$ git ls-files *.[ch] | egrep drivers/net/ethernet/mellanox/ | \
	xargs scripts/kernel-doc -none
drivers/net/ethernet/mellanox/mlx4/fw_qos.h:144: warning: Function parameter or member 'in_param' not described ...
drivers/net/ethernet/mellanox/mlx4/fw_qos.h:144: warning: Excess function parameter 'out_param' description ...

Signed-off-by: Saeed Mahameed
Reported-by: Moshe Shemesh
Reviewed-by: Moshe Shemesh
Reviewed-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx4/fw_qos.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx4/fw_qos.h b/drivers/net/ethernet/mellanox/mlx4/fw_qos.h
index 582997577a04..954b86faac29 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw_qos.h
+++ b/drivers/net/ethernet/mellanox/mlx4/fw_qos.h
@@ -135,7 +135,7 @@ int mlx4_SET_VPORT_QOS_get(struct mlx4_dev *dev, u8 port, u8 vport,
 * @dev: mlx4_dev.
 * @port: Physical port number.
 * @vport: Vport id.
- * @out_param: Array of mlx4_vport_qos_param which holds the requested values.
+ * @in_param: Array of mlx4_vport_qos_param which holds the requested values.
 *
 * Returns 0 on success or a negative mlx4_core errno code.
 **/
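[As background for this and the following patch: kernel-doc requires every parameter to be documented with a matching @name line. A minimal compliant comment for a hypothetical function - not from the mlx4 code - looks like this:]

/**
 * example_set_port_rate() - Set the transmit rate of a port.
 * @dev: Device to operate on.
 * @port: Physical port number.
 * @rate_mbps: Requested rate in Mbps.
 *
 * A missing @name line triggers "Function parameter or member 'x' not
 * described", and a leftover line for a renamed parameter triggers
 * "Excess function parameter 'x' description" - exactly the two
 * warnings being fixed here.
 *
 * Return: 0 on success or a negative errno code.
 */
int example_set_port_rate(struct example_dev *dev, u8 port, u32 rate_mbps);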
Miller" , "Saeed Mahameed" , Moshe Shemesh , Tariq Toukan Subject: [net-next v2 11/12] net/mlx5: Cleanup kernel-doc warnings Date: Thu, 5 Nov 2020 12:12:41 -0800 Message-ID: <20201105201242.21716-12-saeedm@nvidia.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201105201242.21716-1-saeedm@nvidia.com> References: <20201105201242.21716-1-saeedm@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.124.1.5] X-ClientProxiedBy: HQMAIL105.nvidia.com (172.20.187.12) To HQMAIL107.nvidia.com (172.20.187.13) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1604607186; bh=rgDi8dHholG3cnQHfzhODsW0MLg0HUjZtTw4IEypu3A=; h=From:To:CC:Subject:Date:Message-ID:X-Mailer:In-Reply-To: References:MIME-Version:Content-Transfer-Encoding:Content-Type: X-Originating-IP:X-ClientProxiedBy; b=Ciw0+bnMDAqvWnAR+k8bI8p5k3Czygo5E3TeqcZpZQhZ013d5g5G3HDoazx+rfPpc laG0MTDEFNu0DM7rTMBYfhxJDgbcH6iz7iKPfwDJpAP2BsNDr3Q/3kNpu3ZU+KuW7i R4ZLMxou14AAbIR1oXQkMuLzlSGsA0eRwKvKm532Dnx1h1ujawkxweeH3hadNuSOaK RzOtdjgyZfFTQloXr2/EZZjJPdoqHE9LMo+n/k9LuWhjG5tXCm32wUFSSfX1B7NLyv 3F+HnZ0ekNF3O6PutzeKjo9Ao0GqGfo+PmwqQ03g8+wapB5witpdmjgCyJ/WRArg1q S/mnOB36N6vMA== Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org $ git ls-files *.[ch] | egrep drivers/net/ethernet/mellanox/ | \ xargs scripts/kernel-doc -none drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h:57: warning: Enum value 'MLX5_FPGA_ACCESS_TYPE_I2C' not described ... drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h:57: warning: Enum value 'MLX5_FPGA_ACCESS_TYPE_DONTCARE' not described ... drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h:118: warning: Function parameter or member 'cb_arg' not described ... drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h:160: warning: Function parameter or member 'conn' not described ... drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h:160: warning: Excess function parameter 'fdev' description ... Signed-off-by: Saeed Mahameed Reported-by: Moshe Shemesh Reviewed-by: Moshe Shemesh Reviewed-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h b/drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h index 656f96be6e20..89ef592656c8 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/sdk.h @@ -47,11 +47,12 @@ /** * enum mlx5_fpga_access_type - Enumerated the different methods possible for * accessing the device memory address space + * + * @MLX5_FPGA_ACCESS_TYPE_I2C: Use the slow CX-FPGA I2C bus + * @MLX5_FPGA_ACCESS_TYPE_DONTCARE: Use the fastest available method */ enum mlx5_fpga_access_type { - /** Use the slow CX-FPGA I2C bus */ MLX5_FPGA_ACCESS_TYPE_I2C = 0x0, - /** Use the fastest available method */ MLX5_FPGA_ACCESS_TYPE_DONTCARE = 0x0, }; @@ -113,6 +114,7 @@ struct mlx5_fpga_conn_attr { * subsequent receives. */ void (*recv_cb)(void *cb_arg, struct mlx5_fpga_dma_buf *buf); + /** @cb_arg: A context to be passed to recv_cb callback */ void *cb_arg; }; @@ -145,7 +147,7 @@ void mlx5_fpga_sbu_conn_destroy(struct mlx5_fpga_conn *conn); /** * mlx5_fpga_sbu_conn_sendmsg() - Queue the transmission of a packet - * @fdev: An FPGA SBU connection + * @conn: An FPGA SBU connection * @buf: The packet buffer * * Queues a packet for transmission over an FPGA SBU connection.