From patchwork Thu Jan 19 08:23:57 2023
X-Patchwork-Submitter: Paul Blakey
X-Patchwork-Id: 13107590
X-Patchwork-Delegate: kuba@kernel.org
From: Paul Blakey
To: Paul Blakey, Saeed Mahameed, Paolo Abeni, Jakub Kicinski, Eric Dumazet,
 Jamal Hadi Salim, Cong Wang, "David S. Miller"
CC: Oz Shlomo, Jiri Pirko, Roi Dayan, Vlad Buslov
Subject: [PATCH net-next v3 6/6] net/mlx5: TC, Set CT miss to the specific ct action instance
Date: Thu, 19 Jan 2023 10:23:57 +0200
Message-ID: <20230119082357.21744-7-paulb@nvidia.com>
X-Mailer: git-send-email 2.30.1
In-Reply-To: <20230119082357.21744-1-paulb@nvidia.com>
References: <20230119082357.21744-1-paulb@nvidia.com>
MIME-Version: 1.0
X-Mailing-List: netdev@vger.kernel.org

Currently, CT misses restore the missed chain on the tc skb extension so
tc will continue from the relevant chain. Instead, restore the CT action's
miss cookie on the extension, which will instruct tc to continue from this
specific CT action instance on the relevant filter's action list.

Map the CT action's miss_cookie to a new miss object (ACT_MISS), and use
this miss mapping instead of the current chain miss object (CHAIN_MISS)
for CT action misses.

To restore this new miss mapping value, add an RX restore rule for each
such mapping value.
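The scheme above boils down to two steps: allocate a small 32-bit mapping id
for the 64-bit per-action miss cookie so it fits in the reg_c0 metadata, and
on a miss translate that id back into the cookie and place it on the tc skb
extension so tc resumes from that exact action. The stand-alone C sketch below
only models that flow; every name in it (miss_map, demo_tc_skb_ext,
miss_mapping_get, restore_skb_meta) is a hypothetical illustration, not the
mlx5 or tc API used in the diff that follows.

/* Illustrative only: a user-space model of the cookie <-> mapping-id scheme.
 * Hypothetical names; the real driver uses the mlx5 mapping_ctx and reg_c0.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_MAPPINGS 16

/* Stand-in for the tc skb extension fields this patch starts using. */
struct demo_tc_skb_ext {
	uint64_t act_miss_cookie;
	uint32_t chain;
	unsigned int act_miss:1;
};

/* Toy mapping pool: slot index + 1 plays the role of the 32-bit id carried
 * in hardware metadata.
 */
static uint64_t miss_map[MAX_MAPPINGS];

static int miss_mapping_get(uint64_t act_miss_cookie, uint32_t *id)
{
	for (uint32_t i = 0; i < MAX_MAPPINGS; i++) {
		if (!miss_map[i]) {
			miss_map[i] = act_miss_cookie;
			*id = i + 1;	/* 0 means "no mapping" */
			return 0;
		}
	}
	return -1;
}

/* What the RX restore path conceptually does with the id recovered from
 * metadata: hand tc the action's cookie rather than a chain index.
 */
static void restore_skb_meta(uint32_t id, struct demo_tc_skb_ext *ext)
{
	memset(ext, 0, sizeof(*ext));
	if (id && id <= MAX_MAPPINGS && miss_map[id - 1]) {
		ext->act_miss_cookie = miss_map[id - 1];
		ext->act_miss = 1;	/* resume from this action, not a chain */
	}
}

int main(void)
{
	struct demo_tc_skb_ext ext;
	uint32_t id;

	if (miss_mapping_get(0xabcdef0011223344ULL, &id))
		return 1;
	restore_skb_meta(id, &ext);
	printf("id=%u cookie=0x%llx act_miss=%u\n",
	       (unsigned)id, (unsigned long long)ext.act_miss_cookie, ext.act_miss);
	return 0;
}

In the driver itself the id is allocated from the eswitch reg_c0 object pool
and each id also gets an RX restore rule, as the diff below adds.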
Signed-off-by: Paul Blakey
Reviewed-by: Roi Dayan
Reviewed-by: Oz Shlomo
---
 .../ethernet/mellanox/mlx5/core/en/tc_ct.c    | 32 +++++-----
 .../ethernet/mellanox/mlx5/core/en/tc_ct.h    |  2 +
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   | 61 ++++++++++++++++---
 .../net/ethernet/mellanox/mlx5/core/en_tc.h   |  6 ++
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  2 +
 5 files changed, 79 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
index e1a2861cc13ba..71d8a906add97 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
@@ -59,6 +59,7 @@ struct mlx5_tc_ct_debugfs {
 
 struct mlx5_tc_ct_priv {
 	struct mlx5_core_dev *dev;
+	struct mlx5e_priv *priv;
 	const struct net_device *netdev;
 	struct mod_hdr_tbl *mod_hdr_tbl;
 	struct xarray tuple_ids;
@@ -85,7 +86,6 @@ struct mlx5_ct_flow {
 	struct mlx5_flow_attr *pre_ct_attr;
 	struct mlx5_flow_handle *pre_ct_rule;
 	struct mlx5_ct_ft *ft;
-	u32 chain_mapping;
 };
 
 struct mlx5_ct_zone_rule {
@@ -1441,6 +1441,7 @@ mlx5_tc_ct_parse_action(struct mlx5_tc_ct_priv *priv,
 	attr->ct_attr.zone = act->ct.zone;
 	attr->ct_attr.ct_action = act->ct.action;
 	attr->ct_attr.nf_ft = act->ct.flow_table;
+	attr->ct_attr.act_miss_cookie = act->miss_cookie;
 
 	return 0;
 }
@@ -1778,7 +1779,7 @@ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft)
  * + ft prio (tc chain) +
  * + original match     +
  * +---------------------+
- * | set chain miss mapping
+ * | set act_miss_cookie mapping
  * | set fte_id
  * | set tunnel_id
  * | do decap
@@ -1823,7 +1824,7 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv,
 	struct mlx5_flow_attr *pre_ct_attr;
 	struct mlx5_modify_hdr *mod_hdr;
 	struct mlx5_ct_flow *ct_flow;
-	int chain_mapping = 0, err;
+	int act_miss_mapping = 0, err;
 	struct mlx5_ct_ft *ft;
 	u16 zone;
 
@@ -1858,22 +1859,18 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv,
 	pre_ct_attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST |
 			       MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
 
-	/* Write chain miss tag for miss in ct table as we
-	 * don't go though all prios of this chain as normal tc rules
-	 * miss.
-	 */
-	err = mlx5_chains_get_chain_mapping(ct_priv->chains, attr->chain,
-					    &chain_mapping);
+	err = mlx5e_tc_action_miss_mapping_get(ct_priv->priv, attr, attr->ct_attr.act_miss_cookie,
+					       &act_miss_mapping);
 	if (err) {
-		ct_dbg("Failed to get chain register mapping for chain");
-		goto err_get_chain;
+		ct_dbg("Failed to get register mapping for act miss");
+		goto err_get_act_miss;
 	}
-	ct_flow->chain_mapping = chain_mapping;
+	attr->ct_attr.act_miss_mapping = act_miss_mapping;
 
 	err = mlx5e_tc_match_to_reg_set(priv->mdev, pre_mod_acts, ct_priv->ns_type,
-					MAPPED_OBJ_TO_REG, chain_mapping);
+					MAPPED_OBJ_TO_REG, act_miss_mapping);
 	if (err) {
-		ct_dbg("Failed to set chain register mapping");
+		ct_dbg("Failed to set act miss register mapping");
 		goto err_mapping;
 	}
 
@@ -1937,8 +1934,8 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv,
 	mlx5_modify_header_dealloc(priv->mdev, pre_ct_attr->modify_hdr);
 err_mapping:
 	mlx5e_mod_hdr_dealloc(pre_mod_acts);
-	mlx5_chains_put_chain_mapping(ct_priv->chains, ct_flow->chain_mapping);
-err_get_chain:
+	mlx5e_tc_action_miss_mapping_put(ct_priv->priv, attr, act_miss_mapping);
+err_get_act_miss:
 	kfree(ct_flow->pre_ct_attr);
 err_alloc_pre:
 	mlx5_tc_ct_del_ft_cb(ct_priv, ft);
@@ -1977,7 +1974,7 @@ __mlx5_tc_ct_delete_flow(struct mlx5_tc_ct_priv *ct_priv,
 
 	mlx5_tc_rule_delete(priv, ct_flow->pre_ct_rule, pre_ct_attr);
 	mlx5_modify_header_dealloc(priv->mdev, pre_ct_attr->modify_hdr);
-	mlx5_chains_put_chain_mapping(ct_priv->chains, ct_flow->chain_mapping);
+	mlx5e_tc_action_miss_mapping_put(ct_priv->priv, attr, attr->ct_attr.act_miss_mapping);
 	mlx5_tc_ct_del_ft_cb(ct_priv, ct_flow->ft);
 
 	kfree(ct_flow->pre_ct_attr);
@@ -2157,6 +2154,7 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains,
 	}
 
 	spin_lock_init(&ct_priv->ht_lock);
+	ct_priv->priv = priv;
 	ct_priv->ns_type = ns_type;
 	ct_priv->chains = chains;
 	ct_priv->netdev = priv->netdev;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
index 5bbd6b92840fb..5c5ddaa83055d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
@@ -28,6 +28,8 @@ struct mlx5_ct_attr {
 	struct mlx5_ct_flow *ct_flow;
 	struct nf_flowtable *nf_ft;
 	u32 ct_labels_id;
+	u32 act_miss_mapping;
+	u64 act_miss_cookie;
 };
 
 #define zone_to_reg_ct {\
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index d613811785c89..c3601b1c48c92 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -3829,6 +3829,7 @@ mlx5e_clone_flow_attr_for_post_act(struct mlx5_flow_attr *attr,
 	attr2->parse_attr = parse_attr;
 	attr2->dest_chain = 0;
 	attr2->dest_ft = NULL;
+	attr2->act_id_restore_rule = NULL;
 
 	if (ns_type == MLX5_FLOW_NAMESPACE_FDB) {
 		attr2->esw_attr->out_count = 0;
@@ -5684,15 +5685,18 @@ static bool mlx5e_tc_restore_tunnel(struct mlx5e_priv *priv, struct sk_buff *skb
 	return true;
 }
 
-static bool mlx5e_tc_restore_skb_chain(struct sk_buff *skb, struct mlx5_tc_ct_priv *ct_priv,
-				       u32 chain, u32 zone_restore_id,
-				       u32 tunnel_id, struct mlx5e_tc_update_priv *tc_priv)
+static bool mlx5e_tc_restore_skb_tc_meta(struct sk_buff *skb, struct mlx5_tc_ct_priv *ct_priv,
+					 struct mlx5_mapped_obj *mapped_obj, u32 zone_restore_id,
+					 u32 tunnel_id, struct mlx5e_tc_update_priv *tc_priv)
 {
+	u32 chain = mapped_obj->type == MLX5_MAPPED_OBJ_CHAIN ? mapped_obj->chain : 0;
+	u64 act_miss_cookie = mapped_obj->type == MLX5_MAPPED_OBJ_ACT_MISS ?
+			      mapped_obj->act_miss_cookie : 0;
 	struct mlx5e_priv *priv = netdev_priv(skb->dev);
 	struct tc_skb_ext *tc_skb_ext;
 
 #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
-	if (chain) {
+	if (chain || act_miss_cookie) {
 		if (!mlx5e_tc_ct_restore_flow(ct_priv, skb, zone_restore_id))
 			return false;
 
@@ -5702,7 +5706,12 @@ static bool mlx5e_tc_restore_skb_chain(struct sk_buff *skb, struct mlx5_tc_ct_pr
 			return false;
 		}
 
-		tc_skb_ext->chain = chain;
+		if (act_miss_cookie) {
+			tc_skb_ext->act_miss_cookie = act_miss_cookie;
+			tc_skb_ext->act_miss = 1;
+		} else {
+			tc_skb_ext->chain = chain;
+		}
 	}
 #endif /* CONFIG_NET_TC_SKB_EXT */
 
@@ -5773,8 +5782,9 @@ bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb,
 
 	switch (mapped_obj.type) {
 	case MLX5_MAPPED_OBJ_CHAIN:
-		return mlx5e_tc_restore_skb_chain(skb, ct_priv, mapped_obj.chain, zone_restore_id,
-						  tunnel_id, tc_priv);
+	case MLX5_MAPPED_OBJ_ACT_MISS:
+		return mlx5e_tc_restore_skb_tc_meta(skb, ct_priv, &mapped_obj, zone_restore_id,
+						    tunnel_id, tc_priv);
 	case MLX5_MAPPED_OBJ_SAMPLE:
 		mlx5e_tc_restore_skb_sample(priv, skb, &mapped_obj, tc_priv);
 		tc_priv->skb_done = true;
@@ -5812,3 +5822,40 @@ bool mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb)
 
 	return true;
 }
+
+int mlx5e_tc_action_miss_mapping_get(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+				     u64 act_miss_cookie, u32 *act_miss_mapping)
+{
+	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+	struct mlx5_mapped_obj mapped_obj = {};
+	struct mapping_ctx *ctx;
+	int err;
+
+	ctx = esw->offloads.reg_c0_obj_pool;
+
+	mapped_obj.type = MLX5_MAPPED_OBJ_ACT_MISS;
+	mapped_obj.act_miss_cookie = act_miss_cookie;
+	err = mapping_add(ctx, &mapped_obj, act_miss_mapping);
+	if (err)
+		return err;
+
+	attr->act_id_restore_rule = esw_add_restore_rule(esw, *act_miss_mapping);
+	if (IS_ERR(attr->act_id_restore_rule))
+		goto err_rule;
+
+	return 0;
+
+err_rule:
+	mapping_remove(ctx, *act_miss_mapping);
+	return err;
+}
+
+void mlx5e_tc_action_miss_mapping_put(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+				      u32 act_miss_mapping)
+{
+	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+	struct mapping_ctx *ctx = esw->offloads.reg_c0_obj_pool;
+
+	mlx5_del_flow_rules(attr->act_id_restore_rule);
+	mapping_remove(ctx, act_miss_mapping);
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
index 306e8b20941a2..3033afa23d0ae 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
@@ -100,6 +100,7 @@ struct mlx5_flow_attr {
 	struct mlx5_flow_attr *branch_true;
 	struct mlx5_flow_attr *branch_false;
 	struct mlx5_flow_attr *jumping_attr;
+	struct mlx5_flow_handle *act_id_restore_rule;
 	/* keep this union last */
 	union {
 		DECLARE_FLEX_ARRAY(struct mlx5_esw_flow_attr, esw_attr);
@@ -400,4 +401,9 @@ mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb)
 { return true; }
 #endif
 
+int mlx5e_tc_action_miss_mapping_get(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+				     u64 act_miss_cookie, u32 *act_miss_mapping);
+void mlx5e_tc_action_miss_mapping_put(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr,
+				      u32 act_miss_mapping);
+
 #endif /* __MLX5_EN_TC_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 92644fbb50816..6aa48c003ba63 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -52,12 +52,14 @@ enum mlx5_mapped_obj_type {
 	MLX5_MAPPED_OBJ_CHAIN,
 	MLX5_MAPPED_OBJ_SAMPLE,
 	MLX5_MAPPED_OBJ_INT_PORT_METADATA,
+	MLX5_MAPPED_OBJ_ACT_MISS,
 };
 
 struct mlx5_mapped_obj {
 	enum mlx5_mapped_obj_type type;
 	union {
 		u32 chain;
+		u64 act_miss_cookie;
 		struct {
 			u32 group_id;
 			u32 rate;