From patchwork Thu Jan 12 10:59:00 2023
X-Patchwork-Submitter: Paul Blakey
X-Patchwork-Id: 13097832
From: Paul Blakey
To: Paul Blakey, Saeed Mahameed, Paolo Abeni, Jakub Kicinski, Eric Dumazet,
    Jamal Hadi Salim, Cong Wang, "David S. Miller"
CC: Oz Shlomo, Jiri Pirko, Roi Dayan, Vlad Buslov
Subject: [PATCH net-next 1/6] net/sched: cls_api: Support hardware miss to tc action
Date: Thu, 12 Jan 2023 12:59:00 +0200
Message-ID: <20230112105905.1738-2-paulb@nvidia.com>
In-Reply-To: <20230112105905.1738-1-paulb@nvidia.com>
References: <20230112105905.1738-1-paulb@nvidia.com>
For drivers to support partial offload of a filter's action list,
add support for action miss to specify an action instance to
continue from in sw.

CT action in particular can't be fully offloaded, as new connections
need to be handled in software. This imposes other limitations on the
actions that can be offloaded together with the CT action, such as
packet modifications.

Assign each action on a filter's action list a unique miss_cookie
which drivers can then use to fill action_miss part of the tc skb
extension. On getting back this miss_cookie, find the action instance
with relevant cookie and continue classifying from there.

Signed-off-by: Paul Blakey
Reviewed-by: Jiri Pirko
---
 include/linux/skbuff.h | 6 +-
 include/net/flow_offload.h | 1 +
 include/net/pkt_cls.h | 20 +---
 include/net/sch_generic.h | 2 +
 net/openvswitch/flow.c | 2 +-
 net/sched/act_api.c | 2 +-
 net/sched/cls_api.c | 208 +++++++++++++++++++++++++++++++++++--
 7 files changed, 214 insertions(+), 27 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 4c8492401a101..348673dcb6bb9 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -316,12 +316,16 @@ struct nf_bridge_info { * and read by ovs to recirc_id.
*/ struct tc_skb_ext { - __u32 chain; + union { + u64 act_miss_cookie; + __u32 chain; + }; __u16 mru; __u16 zone; u8 post_ct:1; u8 post_ct_snat:1; u8 post_ct_dnat:1; + u8 act_miss:1; /* Set if act_miss_cookie is used */ }; #endif diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h index 0400a0ac8a295..88db7346eb7a0 100644 --- a/include/net/flow_offload.h +++ b/include/net/flow_offload.h @@ -228,6 +228,7 @@ void flow_action_cookie_destroy(struct flow_action_cookie *cookie); struct flow_action_entry { enum flow_action_id id; u32 hw_index; + u64 miss_cookie; enum flow_action_hw_stats hw_stats; action_destr destructor; void *destructor_priv; diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h index 4cabb32a2ad94..a5d17b103328d 100644 --- a/include/net/pkt_cls.h +++ b/include/net/pkt_cls.h @@ -59,6 +59,8 @@ int tcf_block_get_ext(struct tcf_block **p_block, struct Qdisc *q, void tcf_block_put(struct tcf_block *block); void tcf_block_put_ext(struct tcf_block *block, struct Qdisc *q, struct tcf_block_ext_info *ei); +int tcf_exts_init_ex(struct tcf_exts *exts, struct net *net, int action, int police, + struct tcf_proto *tp, u32 handle, bool used_action_miss); static inline bool tcf_block_shared(struct tcf_block *block) { @@ -229,6 +231,7 @@ struct tcf_exts { struct tc_action **actions; struct net *net; netns_tracker ns_tracker; + struct tcf_exts_miss_cookie_node *miss_cookie_node; #endif /* Map to export classifier specific extension TLV types to the * generic extensions API. Unsupported extensions must be set to 0. @@ -240,21 +243,7 @@ struct tcf_exts { static inline int tcf_exts_init(struct tcf_exts *exts, struct net *net, int action, int police) { -#ifdef CONFIG_NET_CLS_ACT - exts->type = 0; - exts->nr_actions = 0; - /* Note: we do not own yet a reference on net. - * This reference might be taken later from tcf_exts_get_net(). - */ - exts->net = net; - exts->actions = kcalloc(TCA_ACT_MAX_PRIO, sizeof(struct tc_action *), - GFP_KERNEL); - if (!exts->actions) - return -ENOMEM; -#endif - exts->action = action; - exts->police = police; - return 0; + return tcf_exts_init_ex(exts, net, action, police, NULL, 0, false); } /* Return false if the netns is being destroyed in cleanup_net(). Callers @@ -577,6 +566,7 @@ int tc_setup_offload_action(struct flow_action *flow_action, void tc_cleanup_offload_action(struct flow_action *flow_action); int tc_setup_action(struct flow_action *flow_action, struct tc_action *actions[], + u32 miss_cookie_base, struct netlink_ext_ack *extack); int tc_setup_cb_call(struct tcf_block *block, enum tc_setup_type type, diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h index d5517719af4ef..d2b859e3c8602 100644 --- a/include/net/sch_generic.h +++ b/include/net/sch_generic.h @@ -369,6 +369,8 @@ struct tcf_proto_ops { struct nlattr **tca, struct netlink_ext_ack *extack); void (*tmplt_destroy)(void *tmplt_priv); + struct tcf_exts * (*get_exts)(const struct tcf_proto *tp, + u32 handle); /* rtnetlink specific */ int (*dump)(struct net*, struct tcf_proto*, void *, diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c index e20d1a9734175..b1a5eed8d1a9d 100644 --- a/net/openvswitch/flow.c +++ b/net/openvswitch/flow.c @@ -1038,7 +1038,7 @@ int ovs_flow_key_extract(const struct ip_tunnel_info *tun_info, #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) if (tc_skb_ext_tc_enabled()) { tc_ext = skb_ext_find(skb, TC_SKB_EXT); - key->recirc_id = tc_ext ? tc_ext->chain : 0; + key->recirc_id = tc_ext && !tc_ext->act_miss ? 
tc_ext->chain : 0; OVS_CB(skb)->mru = tc_ext ? tc_ext->mru : 0; post_ct = tc_ext ? tc_ext->post_ct : false; post_ct_snat = post_ct ? tc_ext->post_ct_snat : false; diff --git a/net/sched/act_api.c b/net/sched/act_api.c index 5b3c0ac495bee..e28148015fbb5 100644 --- a/net/sched/act_api.c +++ b/net/sched/act_api.c @@ -272,7 +272,7 @@ static int tcf_action_offload_add_ex(struct tc_action *action, if (err) goto fl_err; - err = tc_setup_action(&fl_action->action, actions, extack); + err = tc_setup_action(&fl_action->action, actions, 0, extack); if (err) { NL_SET_ERR_MSG_MOD(extack, "Failed to setup tc actions for offload"); diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c index 668130f089034..7d9fab24a8417 100644 --- a/net/sched/cls_api.c +++ b/net/sched/cls_api.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -50,6 +51,98 @@ static LIST_HEAD(tcf_proto_base); /* Protects list of registered TC modules. It is pure SMP lock. */ static DEFINE_RWLOCK(cls_mod_lock); +static struct xarray tcf_exts_miss_cookies_xa; +struct tcf_exts_miss_cookie_node { + const struct tcf_chain *chain; + const struct tcf_proto *tp; + const struct tcf_exts *exts; + u32 chain_index; + u32 tp_prio; + u32 handle; + u32 miss_cookie_base; + struct rcu_head rcu; +}; + +/* Each tc action entry cookie will be comprised of 32bit miss_cookie_base + + * action index in the exts tc actions array. + */ +union tcf_exts_miss_cookie { + struct { + u32 miss_cookie_base; + u32 act_index; + }; + u64 miss_cookie; +}; + +#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) +static int +tcf_exts_miss_cookie_base_alloc(struct tcf_exts *exts, struct tcf_proto *tp, + u32 handle) +{ + struct tcf_exts_miss_cookie_node *n; + static u32 next; + int err; + + if (WARN_ON(!handle || !tp->ops->get_exts)) + return -EINVAL; + + n = kzalloc(sizeof(*n), GFP_KERNEL); + if (!n) + return -ENOMEM; + + n->chain_index = tp->chain->index; + n->chain = tp->chain; + n->tp_prio = tp->prio; + n->tp = tp; + n->exts = exts; + n->handle = handle; + + err = xa_alloc_cyclic(&tcf_exts_miss_cookies_xa, &n->miss_cookie_base, + n, xa_limit_32b, &next, GFP_KERNEL); + if (err) + goto err_xa_alloc; + + exts->miss_cookie_node = n; + return 0; + +err_xa_alloc: + kfree(n); + return err; +} + +static void tcf_exts_miss_cookie_base_destroy(struct tcf_exts *exts) +{ + struct tcf_exts_miss_cookie_node *n; + + if (!exts->miss_cookie_node) + return; + + n = exts->miss_cookie_node; + xa_erase(&tcf_exts_miss_cookies_xa, n->miss_cookie_base); + kfree_rcu(n, rcu); +} + +static struct tcf_exts_miss_cookie_node * +tcf_exts_miss_cookie_lookup(u64 miss_cookie, int *act_index) +{ + union tcf_exts_miss_cookie mc = { .miss_cookie = miss_cookie, }; + + *act_index = mc.act_index; + return xa_load(&tcf_exts_miss_cookies_xa, mc.miss_cookie_base); +} +#endif /* IS_ENABLED(CONFIG_NET_TC_SKB_EXT) */ + +static u64 tcf_exts_miss_cookie_get(u32 miss_cookie_base, int act_index) +{ + union tcf_exts_miss_cookie mc = { .act_index = act_index, }; + + if (!miss_cookie_base) + return 0; + + mc.miss_cookie_base = miss_cookie_base; + return mc.miss_cookie; +} + #ifdef CONFIG_NET_CLS_ACT DEFINE_STATIC_KEY_FALSE(tc_skb_ext_tc); EXPORT_SYMBOL(tc_skb_ext_tc); @@ -1548,6 +1641,8 @@ static inline int __tcf_classify(struct sk_buff *skb, const struct tcf_proto *orig_tp, struct tcf_result *res, bool compat_mode, + struct tcf_exts_miss_cookie_node *n, + int act_index, u32 *last_executed_chain) { #ifdef CONFIG_NET_CLS_ACT @@ -1561,11 +1656,38 @@ static inline int __tcf_classify(struct sk_buff *skb, 
__be16 protocol = skb_protocol(skb, false); int err; - if (tp->protocol != protocol && - tp->protocol != htons(ETH_P_ALL)) - continue; + if (n) { + struct tcf_exts *exts; + + if (n->tp_prio != tp->prio) + continue; - err = tc_classify(skb, tp, res); + /* We re-lookup the tp and chain based on index instead + * of having hard refs and locks to them, so do a sanity + * check if any of tp,chain,exts was replaced by the + * time we got here with a cookie from hardware. + */ + if (unlikely(n->tp != tp || n->tp->chain != n->chain || + !tp->ops->get_exts)) + return TC_ACT_SHOT; + + exts = tp->ops->get_exts(tp, n->handle); + if (unlikely(!exts || n->exts != exts)) + return TC_ACT_SHOT; + + n = NULL; +#ifdef CONFIG_NET_CLS_ACT + err = tcf_action_exec(skb, exts->actions + act_index, + exts->nr_actions - act_index, + res); +#endif + } else { + if (tp->protocol != protocol && + tp->protocol != htons(ETH_P_ALL)) + continue; + + err = tc_classify(skb, tp, res); + } #ifdef CONFIG_NET_CLS_ACT if (unlikely(err == TC_ACT_RECLASSIFY && !compat_mode)) { first_tp = orig_tp; @@ -1581,6 +1703,9 @@ static inline int __tcf_classify(struct sk_buff *skb, return err; } + if (unlikely(n)) + return TC_ACT_SHOT; + return TC_ACT_UNSPEC; /* signal: continue lookup */ #ifdef CONFIG_NET_CLS_ACT reset: @@ -1605,21 +1730,33 @@ int tcf_classify(struct sk_buff *skb, #if !IS_ENABLED(CONFIG_NET_TC_SKB_EXT) u32 last_executed_chain = 0; - return __tcf_classify(skb, tp, tp, res, compat_mode, + return __tcf_classify(skb, tp, tp, res, compat_mode, NULL, 0, &last_executed_chain); #else u32 last_executed_chain = tp ? tp->chain->index : 0; + struct tcf_exts_miss_cookie_node *n = NULL; const struct tcf_proto *orig_tp = tp; struct tc_skb_ext *ext; + int act_index = 0; int ret; if (block) { ext = skb_ext_find(skb, TC_SKB_EXT); - if (ext && ext->chain) { + if (ext && (ext->chain || ext->act_miss)) { struct tcf_chain *fchain; + u32 chain = ext->chain; - fchain = tcf_chain_lookup_rcu(block, ext->chain); + if (ext->act_miss) { + n = tcf_exts_miss_cookie_lookup(ext->act_miss_cookie, + &act_index); + if (!n) + return TC_ACT_SHOT; + + chain = n->chain_index; + } + + fchain = tcf_chain_lookup_rcu(block, chain); if (!fchain) return TC_ACT_SHOT; @@ -1631,7 +1768,7 @@ int tcf_classify(struct sk_buff *skb, } } - ret = __tcf_classify(skb, tp, orig_tp, res, compat_mode, + ret = __tcf_classify(skb, tp, orig_tp, res, compat_mode, n, act_index, &last_executed_chain); if (tc_skb_ext_tc_enabled()) { @@ -3040,9 +3177,52 @@ static int tc_dump_chain(struct sk_buff *skb, struct netlink_callback *cb) return skb->len; } +int tcf_exts_init_ex(struct tcf_exts *exts, struct net *net, int action, + int police, struct tcf_proto *tp, u32 handle, + bool use_action_miss) +{ + int err; + +#ifdef CONFIG_NET_CLS_ACT + exts->type = 0; + exts->nr_actions = 0; + /* Note: we do not own yet a reference on net. + * This reference might be taken later from tcf_exts_get_net(). 
+ */ exts->net = net; + exts->actions = kcalloc(TCA_ACT_MAX_PRIO, sizeof(struct tc_action *), + GFP_KERNEL); + if (!exts->actions) + return -ENOMEM; +#endif + + exts->action = action; + exts->police = police; + + if (!use_action_miss) + return 0; + +#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) + err = tcf_exts_miss_cookie_base_alloc(exts, tp, handle); +#endif + if (err) + goto err_miss_alloc; + + return 0; + +err_miss_alloc: + tcf_exts_destroy(exts); + return err; +} +EXPORT_SYMBOL(tcf_exts_init_ex); + void tcf_exts_destroy(struct tcf_exts *exts) { #ifdef CONFIG_NET_CLS_ACT +#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) + tcf_exts_miss_cookie_base_destroy(exts); +#endif + if (exts->actions) { tcf_action_destroy(exts->actions, TCA_ACT_UNBIND); kfree(exts->actions); @@ -3531,6 +3711,7 @@ static int tc_setup_offload_act(struct tc_action *act, int tc_setup_action(struct flow_action *flow_action, struct tc_action *actions[], + u32 miss_cookie_base, struct netlink_ext_ack *extack) { int i, j, k, index, err = 0; @@ -3561,6 +3742,8 @@ int tc_setup_action(struct flow_action *flow_action, for (k = 0; k < index ; k++) { entry[k].hw_stats = tc_act_hw_stats(act->hw_stats); entry[k].hw_index = act->tcfa_index; + entry[k].miss_cookie = + tcf_exts_miss_cookie_get(miss_cookie_base, i); } j += index; @@ -3583,10 +3766,15 @@ int tc_setup_offload_action(struct flow_action *flow_action, struct netlink_ext_ack *extack) { #ifdef CONFIG_NET_CLS_ACT + u32 miss_cookie_base; + if (!exts) return 0; - return tc_setup_action(flow_action, exts->actions, extack); + miss_cookie_base = exts->miss_cookie_node ? + exts->miss_cookie_node->miss_cookie_base : 0; + return tc_setup_action(flow_action, exts->actions, miss_cookie_base, + extack); #else return 0; #endif @@ -3754,6 +3942,8 @@ static int __init tc_filter_init(void) if (err) goto err_register_pernet_subsys; + xa_init_flags(&tcf_exts_miss_cookies_xa, XA_FLAGS_ALLOC1); + rtnl_register(PF_UNSPEC, RTM_NEWTFILTER, tc_new_tfilter, NULL, RTNL_FLAG_DOIT_UNLOCKED); rtnl_register(PF_UNSPEC, RTM_DELTFILTER, tc_del_tfilter, NULL,
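
The sketch below is illustrative only and not part of this patch. It assumes a hypothetical driver receive handler (example_restore_act_miss() and its hw_miss_cookie argument are invented names) and shows how a driver that programmed a flow_action_entry's miss_cookie into its hardware at offload time would hand the value back to tc through the extended tc_skb_ext, so that tcf_classify() resumes from the encoded action instead of re-matching the filter.

#include <linux/skbuff.h>
#include <net/pkt_cls.h>

/* Hypothetical driver RX path, for illustration only.  hw_miss_cookie is
 * whatever per-rule metadata the device returns with a packet that missed
 * in the middle of an offloaded action list; the driver stored it from
 * flow_action_entry::miss_cookie when programming the rule.
 */
static bool example_restore_act_miss(struct sk_buff *skb, u64 hw_miss_cookie)
{
	struct tc_skb_ext *tc_skb_ext;

	if (!hw_miss_cookie)
		return true;	/* nothing to resume, classify from scratch */

	tc_skb_ext = tc_skb_ext_alloc(skb);
	if (!tc_skb_ext)
		return false;

	/* tcf_classify() looks this cookie up and continues executing the
	 * filter's remaining actions from the encoded action index.
	 */
	tc_skb_ext->act_miss_cookie = hw_miss_cookie;
	tc_skb_ext->act_miss = 1;
	return true;
}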
From patchwork Thu Jan 12 10:59:01 2023
X-Patchwork-Submitter: Paul Blakey
X-Patchwork-Id: 13097833
From: Paul Blakey
To: Paul Blakey, Saeed Mahameed, Paolo Abeni, Jakub Kicinski, Eric Dumazet,
    Jamal Hadi Salim, Cong Wang, "David S. Miller"
CC: Oz Shlomo, Jiri Pirko, Roi Dayan, Vlad Buslov
Subject: [PATCH net-next 2/6] net/sched: flower: Move filter handle initialization earlier
Date: Thu, 12 Jan 2023 12:59:01 +0200
Message-ID: <20230112105905.1738-3-paulb@nvidia.com>
In-Reply-To: <20230112105905.1738-1-paulb@nvidia.com>
References: <20230112105905.1738-1-paulb@nvidia.com>

To support miss to action during hardware offload the filter's handle
is needed when setting up the actions (tcf_exts_init()), and before
offloading.

Move filter handle initialization earlier.
Signed-off-by: Paul Blakey Reviewed-by: Jiri Pirko --- net/sched/cls_flower.c | 64 ++++++++++++++++++++++++------------------ 1 file changed, 37 insertions(+), 27 deletions(-) diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c index 0b15698b3531d..99af1819bf546 100644 --- a/net/sched/cls_flower.c +++ b/net/sched/cls_flower.c @@ -2192,10 +2192,6 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, INIT_LIST_HEAD(&fnew->hw_list); refcount_set(&fnew->refcnt, 1); - err = tcf_exts_init(&fnew->exts, net, TCA_FLOWER_ACT, 0); - if (err < 0) - goto errout; - if (tb[TCA_FLOWER_FLAGS]) { fnew->flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]); @@ -2205,15 +2201,47 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, } } + if (!fold) { + spin_lock(&tp->lock); + if (!handle) { + handle = 1; + err = idr_alloc_u32(&head->handle_idr, fnew, &handle, + INT_MAX, GFP_ATOMIC); + if (err) + goto errout; + } else { + err = idr_alloc_u32(&head->handle_idr, fnew, &handle, + handle, GFP_ATOMIC); + + /* Filter with specified handle was concurrently + * inserted after initial check in cls_api. This is not + * necessarily an error if NLM_F_EXCL is not set in + * message flags. Returning EAGAIN will cause cls_api to + * try to update concurrently inserted rule. + */ + if (err == -ENOSPC) + err = -EAGAIN; + } + spin_unlock(&tp->lock); + + if (err) + goto errout; + } + fnew->handle = handle; + + err = tcf_exts_init(&fnew->exts, net, TCA_FLOWER_ACT, 0); + if (err < 0) + goto errout_idr; + err = fl_set_parms(net, tp, fnew, mask, base, tb, tca[TCA_RATE], tp->chain->tmplt_priv, flags, fnew->flags, extack); if (err) - goto errout; + goto errout_idr; err = fl_check_assign_mask(head, fnew, fold, mask); if (err) - goto errout; + goto errout_idr; err = fl_ht_insert_unique(fnew, fold, &in_ht); if (err) @@ -2279,29 +2307,9 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, refcount_dec(&fold->refcnt); __fl_put(fold); } else { - if (handle) { - /* user specifies a handle and it doesn't exist */ - err = idr_alloc_u32(&head->handle_idr, fnew, &handle, - handle, GFP_ATOMIC); - - /* Filter with specified handle was concurrently - * inserted after initial check in cls_api. This is not - * necessarily an error if NLM_F_EXCL is not set in - * message flags. Returning EAGAIN will cause cls_api to - * try to update concurrently inserted rule. 
- */ - if (err == -ENOSPC) - err = -EAGAIN; - } else { - handle = 1; - err = idr_alloc_u32(&head->handle_idr, fnew, &handle, - INT_MAX, GFP_ATOMIC); - } - if (err) - goto errout_hw; + idr_replace(&head->handle_idr, fnew, fnew->handle); refcount_inc(&fnew->refcnt); - fnew->handle = handle; list_add_tail_rcu(&fnew->list, &fnew->mask->filters); spin_unlock(&tp->lock); } @@ -2324,6 +2332,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, fnew->mask->filter_ht_params); errout_mask: fl_mask_put(head, fnew->mask); +errout_idr: + idr_remove(&head->handle_idr, fnew->handle); errout: __fl_put(fnew); errout_tb:
From patchwork Thu Jan 12 10:59:02 2023
X-Patchwork-Submitter: Paul Blakey
X-Patchwork-Id: 13097835
From: Paul Blakey
To: Paul Blakey, Saeed Mahameed, Paolo Abeni, Jakub Kicinski, Eric Dumazet,
    Jamal Hadi Salim, Cong Wang, "David S. Miller"
CC: Oz Shlomo, Jiri Pirko, Roi Dayan, Vlad Buslov
Subject: [PATCH net-next 3/6] net/sched: flower: Support hardware miss to tc action
Date: Thu, 12 Jan 2023 12:59:02 +0200
Message-ID: <20230112105905.1738-4-paulb@nvidia.com>
In-Reply-To: <20230112105905.1738-1-paulb@nvidia.com>
References: <20230112105905.1738-1-paulb@nvidia.com>

To support hardware miss to tc action in actions on the flower
classifier, implement the required getting of filter actions, and
set up the filter exts (actions) miss by giving it the filter's
handle and actions.
Signed-off-by: Paul Blakey
Reviewed-by: Jiri Pirko
---
 net/sched/cls_flower.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c index 99af1819bf546..c264d9136ed06 100644 --- a/net/sched/cls_flower.c +++ b/net/sched/cls_flower.c @@ -534,6 +534,15 @@ static struct cls_fl_filter *__fl_get(struct cls_fl_head *head, u32 handle) return f; } +static struct tcf_exts *fl_get_exts(const struct tcf_proto *tp, u32 handle) +{ + struct cls_fl_head *head = rcu_dereference_bh(tp->root); + struct cls_fl_filter *f; + + f = idr_find(&head->handle_idr, handle); + return f ? &f->exts : NULL; +} + static int __fl_delete(struct tcf_proto *tp, struct cls_fl_filter *f, bool *last, bool rtnl_held, struct netlink_ext_ack *extack) @@ -2229,7 +2238,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, } fnew->handle = handle; - err = tcf_exts_init(&fnew->exts, net, TCA_FLOWER_ACT, 0); + err = tcf_exts_init_ex(&fnew->exts, net, TCA_FLOWER_ACT, 0, tp, handle, + !tc_skip_hw(fnew->flags)); if (err < 0) goto errout_idr; @@ -3451,6 +3461,7 @@ static struct tcf_proto_ops cls_fl_ops __read_mostly = { .tmplt_create = fl_tmplt_create, .tmplt_destroy = fl_tmplt_destroy, .tmplt_dump = fl_tmplt_dump, + .get_exts = fl_get_exts, .owner = THIS_MODULE, .flags = TCF_PROTO_OPS_DOIT_UNLOCKED, };
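
Complementing the driver-side sketch after patch 1, the fragment below is again hypothetical and not part of the series (example_note_miss_cookies() is an invented name). It only shows where a driver parsing the flow_action built by tc_setup_offload_action() finds the per-action miss cookies that flower now assigns to filters that are not skip_hw.

#include <net/flow_offload.h>

/* Illustration only: walk the offload request and note each action's miss
 * cookie.  A driver that can offload the list only up to some action would
 * program that action's cookie as the metadata reported with missed
 * packets, and later feed it back via tc_skb_ext (see patch 1).
 */
static void example_note_miss_cookies(const struct flow_action *flow_action)
{
	const struct flow_action_entry *act;
	int i;

	flow_action_for_each(i, act, flow_action) {
		if (!act->miss_cookie)
			continue;
		/* Device-specific programming of act->miss_cookie goes here. */
	}
}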
From patchwork Thu Jan 12 10:59:03 2023
X-Patchwork-Submitter: Paul Blakey
X-Patchwork-Id: 13097836
From: Paul Blakey
To: Paul Blakey, Saeed Mahameed, Paolo Abeni, Jakub Kicinski, Eric Dumazet,
    Jamal Hadi Salim, Cong Wang, "David S. Miller"
CC: Oz Shlomo, Jiri Pirko, Roi Dayan, Vlad Buslov
Subject: [PATCH net-next 4/6] net/mlx5: Refactor tc miss handling to a single function
Date: Thu, 12 Jan 2023 12:59:03 +0200
Message-ID: <20230112105905.1738-5-paulb@nvidia.com>
In-Reply-To: <20230112105905.1738-1-paulb@nvidia.com>
References: <20230112105905.1738-1-paulb@nvidia.com>

Move tc miss handling code to en_tc.c, and remove duplicate code.
Signed-off-by: Paul Blakey Reviewed-by: Roi Dayan --- .../ethernet/mellanox/mlx5/core/en/rep/tc.c | 225 ++---------------- .../net/ethernet/mellanox/mlx5/core/en_rx.c | 4 +- .../net/ethernet/mellanox/mlx5/core/en_tc.c | 221 +++++++++++++++-- .../net/ethernet/mellanox/mlx5/core/en_tc.h | 11 +- 4 files changed, 232 insertions(+), 229 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c index b08339d986d5f..69ff212eaad86 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c @@ -1,7 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB /* Copyright (c) 2020 Mellanox Technologies. */ -#include #include #include #include @@ -665,235 +664,57 @@ void mlx5e_rep_tc_netdevice_event_unregister(struct mlx5e_rep_priv *rpriv) mlx5e_rep_indr_block_unbind); } -static bool mlx5e_restore_tunnel(struct mlx5e_priv *priv, struct sk_buff *skb, - struct mlx5e_tc_update_priv *tc_priv, - u32 tunnel_id) -{ - struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; - struct tunnel_match_enc_opts enc_opts = {}; - struct mlx5_rep_uplink_priv *uplink_priv; - struct mlx5e_rep_priv *uplink_rpriv; - struct metadata_dst *tun_dst; - struct tunnel_match_key key; - u32 tun_id, enc_opts_id; - struct net_device *dev; - int err; - - enc_opts_id = tunnel_id & ENC_OPTS_BITS_MASK; - tun_id = tunnel_id >> ENC_OPTS_BITS; - - if (!tun_id) - return true; - - uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); - uplink_priv = &uplink_rpriv->uplink_priv; - - err = mapping_find(uplink_priv->tunnel_mapping, tun_id, &key); - if (err) { - netdev_dbg(priv->netdev, - "Couldn't find tunnel for tun_id: %d, err: %d\n", - tun_id, err); - return false; - } - - if (enc_opts_id) { - err = mapping_find(uplink_priv->tunnel_enc_opts_mapping, - enc_opts_id, &enc_opts); - if (err) { - netdev_dbg(priv->netdev, - "Couldn't find tunnel (opts) for tun_id: %d, err: %d\n", - enc_opts_id, err); - return false; - } - } - - if (key.enc_control.addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) { - tun_dst = __ip_tun_set_dst(key.enc_ipv4.src, key.enc_ipv4.dst, - key.enc_ip.tos, key.enc_ip.ttl, - key.enc_tp.dst, TUNNEL_KEY, - key32_to_tunnel_id(key.enc_key_id.keyid), - enc_opts.key.len); - } else if (key.enc_control.addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) { - tun_dst = __ipv6_tun_set_dst(&key.enc_ipv6.src, &key.enc_ipv6.dst, - key.enc_ip.tos, key.enc_ip.ttl, - key.enc_tp.dst, 0, TUNNEL_KEY, - key32_to_tunnel_id(key.enc_key_id.keyid), - enc_opts.key.len); - } else { - netdev_dbg(priv->netdev, - "Couldn't restore tunnel, unsupported addr_type: %d\n", - key.enc_control.addr_type); - return false; - } - - if (!tun_dst) { - netdev_dbg(priv->netdev, "Couldn't restore tunnel, no tun_dst\n"); - return false; - } - - tun_dst->u.tun_info.key.tp_src = key.enc_tp.src; - - if (enc_opts.key.len) - ip_tunnel_info_opts_set(&tun_dst->u.tun_info, - enc_opts.key.data, - enc_opts.key.len, - enc_opts.key.dst_opt_type); - - skb_dst_set(skb, (struct dst_entry *)tun_dst); - dev = dev_get_by_index(&init_net, key.filter_ifindex); - if (!dev) { - netdev_dbg(priv->netdev, - "Couldn't find tunnel device with ifindex: %d\n", - key.filter_ifindex); - return false; - } - - /* Set fwd_dev so we do dev_put() after datapath */ - tc_priv->fwd_dev = dev; - - skb->dev = dev; - - return true; -} - -static bool mlx5e_restore_skb_chain(struct sk_buff *skb, u32 chain, u32 reg_c1, - struct mlx5e_tc_update_priv *tc_priv) -{ - struct mlx5e_priv *priv = 
netdev_priv(skb->dev); - u32 tunnel_id = (reg_c1 >> ESW_TUN_OFFSET) & TUNNEL_ID_MASK; - -#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) - if (chain) { - struct mlx5_rep_uplink_priv *uplink_priv; - struct mlx5e_rep_priv *uplink_rpriv; - struct tc_skb_ext *tc_skb_ext; - struct mlx5_eswitch *esw; - u32 zone_restore_id; - - tc_skb_ext = tc_skb_ext_alloc(skb); - if (!tc_skb_ext) { - WARN_ON(1); - return false; - } - tc_skb_ext->chain = chain; - zone_restore_id = reg_c1 & ESW_ZONE_ID_MASK; - esw = priv->mdev->priv.eswitch; - uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); - uplink_priv = &uplink_rpriv->uplink_priv; - if (!mlx5e_tc_ct_restore_flow(uplink_priv->ct_priv, skb, - zone_restore_id)) - return false; - } -#endif /* CONFIG_NET_TC_SKB_EXT */ - - return mlx5e_restore_tunnel(priv, skb, tc_priv, tunnel_id); -} - -static void mlx5_rep_tc_post_napi_receive(struct mlx5e_tc_update_priv *tc_priv) -{ - if (tc_priv->fwd_dev) - dev_put(tc_priv->fwd_dev); -} - -static void mlx5e_restore_skb_sample(struct mlx5e_priv *priv, struct sk_buff *skb, - struct mlx5_mapped_obj *mapped_obj, - struct mlx5e_tc_update_priv *tc_priv) -{ - if (!mlx5e_restore_tunnel(priv, skb, tc_priv, mapped_obj->sample.tunnel_id)) { - netdev_dbg(priv->netdev, - "Failed to restore tunnel info for sampled packet\n"); - return; - } - mlx5e_tc_sample_skb(skb, mapped_obj); - mlx5_rep_tc_post_napi_receive(tc_priv); -} - -static bool mlx5e_restore_skb_int_port(struct mlx5e_priv *priv, struct sk_buff *skb, - struct mlx5_mapped_obj *mapped_obj, - struct mlx5e_tc_update_priv *tc_priv, - bool *forward_tx, - u32 reg_c1) -{ - u32 tunnel_id = (reg_c1 >> ESW_TUN_OFFSET) & TUNNEL_ID_MASK; - struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; - struct mlx5_rep_uplink_priv *uplink_priv; - struct mlx5e_rep_priv *uplink_rpriv; - - /* Tunnel restore takes precedence over int port restore */ - if (tunnel_id) - return mlx5e_restore_tunnel(priv, skb, tc_priv, tunnel_id); - - uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); - uplink_priv = &uplink_rpriv->uplink_priv; - - if (mlx5e_tc_int_port_dev_fwd(uplink_priv->int_port_priv, skb, - mapped_obj->int_port_metadata, forward_tx)) { - /* Set fwd_dev for future dev_put */ - tc_priv->fwd_dev = skb->dev; - - return true; - } - - return false; -} - void mlx5e_rep_tc_receive(struct mlx5_cqe64 *cqe, struct mlx5e_rq *rq, struct sk_buff *skb) { - u32 reg_c1 = be32_to_cpu(cqe->ft_metadata); + u32 reg_c1 = be32_to_cpu(cqe->ft_metadata), reg_c0, zone_restore_id, tunnel_id; struct mlx5e_tc_update_priv tc_priv = {}; - struct mlx5_mapped_obj mapped_obj; + struct mlx5_rep_uplink_priv *uplink_priv; + struct mlx5e_rep_priv *uplink_rpriv; + struct mlx5_tc_ct_priv *ct_priv; + struct mapping_ctx *mapping_ctx; struct mlx5_eswitch *esw; - bool forward_tx = false; struct mlx5e_priv *priv; - u32 reg_c0; - int err; reg_c0 = (be32_to_cpu(cqe->sop_drop_qpn) & MLX5E_TC_FLOW_ID_MASK); if (!reg_c0 || reg_c0 == MLX5_FS_DEFAULT_FLOW_TAG) goto forward; - /* If reg_c0 is not equal to the default flow tag then skb->mark + /* If mapped_obj_id is not equal to the default flow tag then skb->mark * is not supported and must be reset back to 0. 
*/ skb->mark = 0; priv = netdev_priv(skb->dev); esw = priv->mdev->priv.eswitch; - err = mapping_find(esw->offloads.reg_c0_obj_pool, reg_c0, &mapped_obj); - if (err) { - netdev_dbg(priv->netdev, - "Couldn't find mapped object for reg_c0: %d, err: %d\n", - reg_c0, err); - goto free_skb; - } + mapping_ctx = esw->offloads.reg_c0_obj_pool; + zone_restore_id = reg_c1 & ESW_ZONE_ID_MASK; + tunnel_id = (reg_c1 >> ESW_TUN_OFFSET) & TUNNEL_ID_MASK; - if (mapped_obj.type == MLX5_MAPPED_OBJ_CHAIN) { - if (!mlx5e_restore_skb_chain(skb, mapped_obj.chain, reg_c1, &tc_priv) && - !mlx5_ipsec_is_rx_flow(cqe)) - goto free_skb; - } else if (mapped_obj.type == MLX5_MAPPED_OBJ_SAMPLE) { - mlx5e_restore_skb_sample(priv, skb, &mapped_obj, &tc_priv); - goto free_skb; - } else if (mapped_obj.type == MLX5_MAPPED_OBJ_INT_PORT_METADATA) { - if (!mlx5e_restore_skb_int_port(priv, skb, &mapped_obj, &tc_priv, - &forward_tx, reg_c1)) - goto free_skb; - } else { - netdev_dbg(priv->netdev, "Invalid mapped object type: %d\n", mapped_obj.type); + uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); + uplink_priv = &uplink_rpriv->uplink_priv; + ct_priv = uplink_priv->ct_priv; + + if (!mlx5_ipsec_is_rx_flow(cqe) && + !mlx5e_tc_update_skb(cqe, skb, mapping_ctx, reg_c0, ct_priv, zone_restore_id, tunnel_id, + &tc_priv)) goto free_skb; - } forward: - if (forward_tx) + if (tc_priv.skb_done) + goto free_skb; + + if (tc_priv.forward_tx) dev_queue_xmit(skb); else napi_gro_receive(rq->cq.napi, skb); - mlx5_rep_tc_post_napi_receive(&tc_priv); + if (tc_priv.fwd_dev) + dev_put(tc_priv.fwd_dev); return; free_skb: + WARN_ON(tc_priv.fwd_dev); dev_kfree_skb_any(skb); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index c8820ab221694..5dd05901c60b2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -1792,7 +1792,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe) mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb); if (mlx5e_cqe_regb_chain(cqe)) - if (!mlx5e_tc_update_skb(cqe, skb)) { + if (!mlx5e_tc_update_skb_nic(cqe, skb)) { dev_kfree_skb_any(skb); goto free_wqe; } @@ -2256,7 +2256,7 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb); if (mlx5e_cqe_regb_chain(cqe)) - if (!mlx5e_tc_update_skb(cqe, skb)) { + if (!mlx5e_tc_update_skb_nic(cqe, skb)) { dev_kfree_skb_any(skb); goto mpwrq_cqe_out; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 99a7edb886610..893e3d7e4ff02 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -43,6 +43,7 @@ #include #include #include +#include #include "en.h" #include "en/tc/post_act.h" #include "en_rep.h" @@ -5591,47 +5592,221 @@ int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data, } } -bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, - struct sk_buff *skb) +static bool mlx5e_tc_restore_tunnel(struct mlx5e_priv *priv, struct sk_buff *skb, + struct mlx5e_tc_update_priv *tc_priv, + u32 tunnel_id) { -#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) - u32 chain = 0, chain_tag, reg_b, zone_restore_id; - struct mlx5e_priv *priv = netdev_priv(skb->dev); - struct mlx5_mapped_obj mapped_obj; - struct tc_skb_ext *tc_skb_ext; - struct mlx5e_tc_table *tc; + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct tunnel_match_enc_opts enc_opts = 
{}; + struct mlx5_rep_uplink_priv *uplink_priv; + struct mlx5e_rep_priv *uplink_rpriv; + struct metadata_dst *tun_dst; + struct tunnel_match_key key; + u32 tun_id, enc_opts_id; + struct net_device *dev; int err; - reg_b = be32_to_cpu(cqe->ft_metadata); - tc = mlx5e_fs_get_tc(priv->fs); - chain_tag = reg_b & MLX5E_TC_TABLE_CHAIN_TAG_MASK; + enc_opts_id = tunnel_id & ENC_OPTS_BITS_MASK; + tun_id = tunnel_id >> ENC_OPTS_BITS; - err = mapping_find(tc->mapping, chain_tag, &mapped_obj); + if (!tun_id) + return true; + + uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); + uplink_priv = &uplink_rpriv->uplink_priv; + + err = mapping_find(uplink_priv->tunnel_mapping, tun_id, &key); if (err) { netdev_dbg(priv->netdev, - "Couldn't find chain for chain tag: %d, err: %d\n", - chain_tag, err); + "Couldn't find tunnel for tun_id: %d, err: %d\n", + tun_id, err); + return false; + } + + if (enc_opts_id) { + err = mapping_find(uplink_priv->tunnel_enc_opts_mapping, + enc_opts_id, &enc_opts); + if (err) { + netdev_dbg(priv->netdev, + "Couldn't find tunnel (opts) for tun_id: %d, err: %d\n", + enc_opts_id, err); + return false; + } + } + + if (key.enc_control.addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) { + tun_dst = __ip_tun_set_dst(key.enc_ipv4.src, key.enc_ipv4.dst, + key.enc_ip.tos, key.enc_ip.ttl, + key.enc_tp.dst, TUNNEL_KEY, + key32_to_tunnel_id(key.enc_key_id.keyid), + enc_opts.key.len); + } else if (key.enc_control.addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) { + tun_dst = __ipv6_tun_set_dst(&key.enc_ipv6.src, &key.enc_ipv6.dst, + key.enc_ip.tos, key.enc_ip.ttl, + key.enc_tp.dst, 0, TUNNEL_KEY, + key32_to_tunnel_id(key.enc_key_id.keyid), + enc_opts.key.len); + } else { + netdev_dbg(priv->netdev, + "Couldn't restore tunnel, unsupported addr_type: %d\n", + key.enc_control.addr_type); return false; } - if (mapped_obj.type == MLX5_MAPPED_OBJ_CHAIN) { - chain = mapped_obj.chain; + if (!tun_dst) { + netdev_dbg(priv->netdev, "Couldn't restore tunnel, no tun_dst\n"); + return false; + } + + tun_dst->u.tun_info.key.tp_src = key.enc_tp.src; + + if (enc_opts.key.len) + ip_tunnel_info_opts_set(&tun_dst->u.tun_info, + enc_opts.key.data, + enc_opts.key.len, + enc_opts.key.dst_opt_type); + + skb_dst_set(skb, (struct dst_entry *)tun_dst); + dev = dev_get_by_index(&init_net, key.filter_ifindex); + if (!dev) { + netdev_dbg(priv->netdev, + "Couldn't find tunnel device with ifindex: %d\n", + key.filter_ifindex); + return false; + } + + /* Set fwd_dev so we do dev_put() after datapath */ + tc_priv->fwd_dev = dev; + + skb->dev = dev; + + return true; +} + +static bool mlx5e_tc_restore_skb_chain(struct sk_buff *skb, struct mlx5_tc_ct_priv *ct_priv, + u32 chain, u32 zone_restore_id, + u32 tunnel_id, struct mlx5e_tc_update_priv *tc_priv) +{ + struct mlx5e_priv *priv = netdev_priv(skb->dev); + struct tc_skb_ext *tc_skb_ext; + +#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) + if (chain) { + if (!mlx5e_tc_ct_restore_flow(ct_priv, skb, zone_restore_id)) + return false; + tc_skb_ext = tc_skb_ext_alloc(skb); - if (WARN_ON(!tc_skb_ext)) + if (!tc_skb_ext) { + WARN_ON(1); return false; + } tc_skb_ext->chain = chain; + } +#endif /* CONFIG_NET_TC_SKB_EXT */ - zone_restore_id = (reg_b >> MLX5_REG_MAPPING_MOFFSET(NIC_ZONE_RESTORE_TO_REG)) & - ESW_ZONE_ID_MASK; + if (tc_priv) + return mlx5e_tc_restore_tunnel(priv, skb, tc_priv, tunnel_id); - if (!mlx5e_tc_ct_restore_flow(tc->ct, skb, - zone_restore_id)) - return false; - } else { + return true; +} + +static void mlx5e_tc_restore_skb_sample(struct mlx5e_priv *priv, struct sk_buff *skb, + 
struct mlx5_mapped_obj *mapped_obj, + struct mlx5e_tc_update_priv *tc_priv) +{ + if (!mlx5e_tc_restore_tunnel(priv, skb, tc_priv, mapped_obj->sample.tunnel_id)) { + netdev_dbg(priv->netdev, + "Failed to restore tunnel info for sampled packet\n"); + return; + } + mlx5e_tc_sample_skb(skb, mapped_obj); +} + +static bool mlx5e_tc_restore_skb_int_port(struct mlx5e_priv *priv, struct sk_buff *skb, + struct mlx5_mapped_obj *mapped_obj, + struct mlx5e_tc_update_priv *tc_priv, + u32 tunnel_id) +{ + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct mlx5_rep_uplink_priv *uplink_priv; + struct mlx5e_rep_priv *uplink_rpriv; + bool forward_tx = false; + + /* Tunnel restore takes precedence over int port restore */ + if (tunnel_id) + return mlx5e_tc_restore_tunnel(priv, skb, tc_priv, tunnel_id); + + uplink_rpriv = mlx5_eswitch_get_uplink_priv(esw, REP_ETH); + uplink_priv = &uplink_rpriv->uplink_priv; + + if (mlx5e_tc_int_port_dev_fwd(uplink_priv->int_port_priv, skb, + mapped_obj->int_port_metadata, &forward_tx)) { + /* Set fwd_dev for future dev_put */ + tc_priv->fwd_dev = skb->dev; + tc_priv->forward_tx = forward_tx; + + return true; + } + + return false; +} + +bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb, + struct mapping_ctx *mapping_ctx, u32 mapped_obj_id, + struct mlx5_tc_ct_priv *ct_priv, + u32 zone_restore_id, u32 tunnel_id, + struct mlx5e_tc_update_priv *tc_priv) +{ + struct mlx5e_priv *priv = netdev_priv(skb->dev); + struct mlx5_mapped_obj mapped_obj; + int err; + + err = mapping_find(mapping_ctx, mapped_obj_id, &mapped_obj); + if (err) { + netdev_dbg(skb->dev, + "Couldn't find mapped object for mapped_obj_id: %d, err: %d\n", + mapped_obj_id, err); + return false; + } + + switch (mapped_obj.type) { + case MLX5_MAPPED_OBJ_CHAIN: + return mlx5e_tc_restore_skb_chain(skb, ct_priv, mapped_obj.chain, zone_restore_id, + tunnel_id, tc_priv); + case MLX5_MAPPED_OBJ_SAMPLE: + mlx5e_tc_restore_skb_sample(priv, skb, &mapped_obj, tc_priv); + tc_priv->skb_done = true; + return true; + case MLX5_MAPPED_OBJ_INT_PORT_METADATA: + return mlx5e_tc_restore_skb_int_port(priv, skb, &mapped_obj, tc_priv, tunnel_id); + default: netdev_dbg(priv->netdev, "Invalid mapped object type: %d\n", mapped_obj.type); return false; } + + return false; +} + +bool mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb) +{ +#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) + struct mlx5e_priv *priv = netdev_priv(skb->dev); + u32 mapped_obj_id, reg_b, zone_restore_id; + struct mlx5_tc_ct_priv *ct_priv; + struct mapping_ctx *mapping_ctx; + struct mlx5e_tc_table *tc; + + reg_b = be32_to_cpu(cqe->ft_metadata); + tc = mlx5e_fs_get_tc(priv->fs); + mapped_obj_id = reg_b & MLX5E_TC_TABLE_CHAIN_TAG_MASK; + zone_restore_id = (reg_b >> MLX5_REG_MAPPING_MOFFSET(NIC_ZONE_RESTORE_TO_REG)) & + ESW_ZONE_ID_MASK; + ct_priv = tc->ct; + mapping_ctx = tc->mapping; + + return mlx5e_tc_update_skb(cqe, skb, mapping_ctx, mapped_obj_id, ct_priv, zone_restore_id, + 0, NULL); #endif /* CONFIG_NET_TC_SKB_EXT */ return true; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h index 50af70ef22f3c..e574efff85eb6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h @@ -59,6 +59,8 @@ int mlx5e_tc_num_filters(struct mlx5e_priv *priv, unsigned long flags); struct mlx5e_tc_update_priv { struct net_device *fwd_dev; + bool skb_done; + bool forward_tx; }; struct mlx5_nic_flow_attr { @@ -382,14 +384,19 @@ static inline 
bool mlx5e_cqe_regb_chain(struct mlx5_cqe64 *cqe) return false; } -bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb); +bool mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb); +bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb, + struct mapping_ctx *mapping_ctx, u32 mapped_obj_id, + struct mlx5_tc_ct_priv *ct_priv, + u32 zone_restore_id, u32 tunnel_id, + struct mlx5e_tc_update_priv *tc_priv); #else /* CONFIG_MLX5_CLS_ACT */ static inline struct mlx5e_tc_table *mlx5e_tc_table_alloc(void) { return NULL; } static inline void mlx5e_tc_table_free(struct mlx5e_tc_table *tc) {} static inline bool mlx5e_cqe_regb_chain(struct mlx5_cqe64 *cqe) { return false; } static inline bool -mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb) +mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb) { return true; } #endif

From patchwork Thu Jan 12 10:59:04 2023
X-Patchwork-Submitter: Paul Blakey
X-Patchwork-Id: 13097837
X-Patchwork-Delegate: kuba@kernel.org
From: Paul Blakey
To: Paul Blakey , , Saeed Mahameed , Paolo Abeni , Jakub Kicinski , Eric Dumazet , Jamal Hadi Salim , Cong Wang , "David S. Miller"
CC: Oz Shlomo , Jiri Pirko , Roi Dayan , Vlad Buslov
Subject: [PATCH net-next 5/6] net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
Date: Thu, 12 Jan 2023 12:59:04 +0200
Message-ID: <20230112105905.1738-6-paulb@nvidia.com>
In-Reply-To: <20230112105905.1738-1-paulb@nvidia.com>
References: <20230112105905.1738-1-paulb@nvidia.com>

This reg usage is always a mapped object, not necessarily containing chain info. Rename to properly convey what it stores. This patch doesn't change any functionality.
Signed-off-by: Paul Blakey Reviewed-by: Roi Dayan --- .../net/ethernet/mellanox/mlx5/core/en/tc/sample.c | 2 +- drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c | 2 +- drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 6 +++--- drivers/net/ethernet/mellanox/mlx5/core/en_tc.h | 4 ++-- .../ethernet/mellanox/mlx5/core/lib/fs_chains.c | 14 +++++++------- 5 files changed, 14 insertions(+), 14 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c index 1cbd2eb9d04f9..d68a446153eec 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c @@ -237,7 +237,7 @@ sample_modify_hdr_get(struct mlx5_core_dev *mdev, u32 obj_id, int err; err = mlx5e_tc_match_to_reg_set(mdev, mod_acts, MLX5_FLOW_NAMESPACE_FDB, - CHAIN_TO_REG, obj_id); + MAPPED_OBJ_TO_REG, obj_id); if (err) goto err_set_regc0; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index 313df8232db70..e1a2861cc13ba 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -1871,7 +1871,7 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv, ct_flow->chain_mapping = chain_mapping; err = mlx5e_tc_match_to_reg_set(priv->mdev, pre_mod_acts, ct_priv->ns_type, - CHAIN_TO_REG, chain_mapping); + MAPPED_OBJ_TO_REG, chain_mapping); if (err) { ct_dbg("Failed to set chain register mapping"); goto err_mapping; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 893e3d7e4ff02..2390b227b5037 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -105,7 +105,7 @@ struct mlx5e_tc_table { }; struct mlx5e_tc_attr_to_reg_mapping mlx5e_tc_attr_to_reg_mappings[] = { - [CHAIN_TO_REG] = { + [MAPPED_OBJ_TO_REG] = { .mfield = MLX5_ACTION_IN_FIELD_METADATA_REG_C_0, .moffset = 0, .mlen = 16, @@ -132,7 +132,7 @@ struct mlx5e_tc_attr_to_reg_mapping mlx5e_tc_attr_to_reg_mappings[] = { * into reg_b that is passed to SW since we don't * jump between steering domains. 
*/ - [NIC_CHAIN_TO_REG] = { + [NIC_MAPPED_OBJ_TO_REG] = { .mfield = MLX5_ACTION_IN_FIELD_METADATA_REG_B, .moffset = 0, .mlen = 16, @@ -1583,7 +1583,7 @@ mlx5e_tc_offload_to_slow_path(struct mlx5_eswitch *esw, goto err_get_chain; err = mlx5e_tc_match_to_reg_set(esw->dev, &mod_acts, MLX5_FLOW_NAMESPACE_FDB, - CHAIN_TO_REG, chain_mapping); + MAPPED_OBJ_TO_REG, chain_mapping); if (err) goto err_reg_set; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h index e574efff85eb6..306e8b20941a2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h @@ -229,7 +229,7 @@ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe); void mlx5e_tc_reoffload_flows_work(struct work_struct *work); enum mlx5e_tc_attr_to_reg { - CHAIN_TO_REG, + MAPPED_OBJ_TO_REG, VPORT_TO_REG, TUNNEL_TO_REG, CTSTATE_TO_REG, @@ -238,7 +238,7 @@ enum mlx5e_tc_attr_to_reg { MARK_TO_REG, LABELS_TO_REG, FTEID_TO_REG, - NIC_CHAIN_TO_REG, + NIC_MAPPED_OBJ_TO_REG, NIC_ZONE_RESTORE_TO_REG, PACKET_COLOR_TO_REG, }; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c index df58cba37930a..81ed91fee59b9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c @@ -214,7 +214,7 @@ create_chain_restore(struct fs_chain *chain) struct mlx5_eswitch *esw = chain->chains->dev->priv.eswitch; u8 modact[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {}; struct mlx5_fs_chains *chains = chain->chains; - enum mlx5e_tc_attr_to_reg chain_to_reg; + enum mlx5e_tc_attr_to_reg mapped_obj_to_reg; struct mlx5_modify_hdr *mod_hdr; u32 index; int err; @@ -242,7 +242,7 @@ create_chain_restore(struct fs_chain *chain) chain->id = index; if (chains->ns == MLX5_FLOW_NAMESPACE_FDB) { - chain_to_reg = CHAIN_TO_REG; + mapped_obj_to_reg = MAPPED_OBJ_TO_REG; chain->restore_rule = esw_add_restore_rule(esw, chain->id); if (IS_ERR(chain->restore_rule)) { err = PTR_ERR(chain->restore_rule); @@ -253,7 +253,7 @@ create_chain_restore(struct fs_chain *chain) * since we write the metadata to reg_b * that is passed to SW directly. */ - chain_to_reg = NIC_CHAIN_TO_REG; + mapped_obj_to_reg = NIC_MAPPED_OBJ_TO_REG; } else { err = -EINVAL; goto err_rule; @@ -261,12 +261,12 @@ create_chain_restore(struct fs_chain *chain) MLX5_SET(set_action_in, modact, action_type, MLX5_ACTION_TYPE_SET); MLX5_SET(set_action_in, modact, field, - mlx5e_tc_attr_to_reg_mappings[chain_to_reg].mfield); + mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].mfield); MLX5_SET(set_action_in, modact, offset, - mlx5e_tc_attr_to_reg_mappings[chain_to_reg].moffset); + mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].moffset); MLX5_SET(set_action_in, modact, length, - mlx5e_tc_attr_to_reg_mappings[chain_to_reg].mlen == 32 ? - 0 : mlx5e_tc_attr_to_reg_mappings[chain_to_reg].mlen); + mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].mlen == 32 ? 
+ 0 : mlx5e_tc_attr_to_reg_mappings[mapped_obj_to_reg].mlen); MLX5_SET(set_action_in, modact, data, chain->id); mod_hdr = mlx5_modify_header_alloc(chains->dev, chains->ns, 1, modact);

From patchwork Thu Jan 12 10:59:05 2023
X-Patchwork-Submitter: Paul Blakey
X-Patchwork-Id: 13097838
X-Patchwork-Delegate: kuba@kernel.org
From: Paul Blakey
To: Paul Blakey , , Saeed Mahameed , Paolo Abeni , Jakub Kicinski , Eric Dumazet , Jamal Hadi Salim , Cong Wang , "David S. Miller"
CC: Oz Shlomo , Jiri Pirko , Roi Dayan , Vlad Buslov
Subject: [PATCH net-next 6/6] net/mlx5: TC, Set CT miss to the specific ct action instance
Date: Thu, 12 Jan 2023 12:59:05 +0200
Message-ID: <20230112105905.1738-7-paulb@nvidia.com>
In-Reply-To: <20230112105905.1738-1-paulb@nvidia.com>
References: <20230112105905.1738-1-paulb@nvidia.com>
Currently, CT misses restore the missed chain on the tc skb extension so tc will continue from the relevant chain. Instead, restore the CT action's miss cookie on the extension, which will instruct tc to continue from this specific CT action instance on the relevant filter's action list.

Map the CT action's miss_cookie to a new miss object (ACT_MISS), and use this miss mapping instead of the current chain miss object (CHAIN_MISS) for CT action misses.

To restore this new miss mapping value, add an RX restore rule for each such mapping value.
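As a rough illustration of that flow, here is a standalone C sketch under invented names (it echoes the mlx5e_tc_action_miss_mapping_get() helper added further down in this patch, but nothing here is the driver's or tc's actual code): the 64-bit miss cookie is registered and receives a small id that fits the metadata register, and on a miss the id is resolved back to the cookie and set on a tc_skb_ext-style structure with act_miss marked, so the software datapath resumes at that exact action.

/* Illustrative sketch only: cookie -> register-sized id at offload time,
 * id -> cookie at miss time.  All names here are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct sketch_tc_skb_ext {
	uint64_t act_miss_cookie;
	uint32_t chain;
	uint8_t act_miss;	/* 1: resume from the cookie's action */
};

static uint64_t cookie_pool[32];
static uint32_t cookie_pool_len;

/* Offload time: remember the cookie, hand back an id small enough to be
 * written to reg_c0 (mirrors what a mapping_add()-style helper would do).
 */
static uint32_t sketch_act_miss_mapping_get(uint64_t cookie)
{
	cookie_pool[cookie_pool_len] = cookie;
	return ++cookie_pool_len;	/* ids start at 1; 0 means "no miss" */
}

/* Miss time: translate the id reported by hardware back to the cookie. */
static void sketch_restore_act_miss(uint32_t id, struct sketch_tc_skb_ext *ext)
{
	memset(ext, 0, sizeof(*ext));
	if (!id || id > cookie_pool_len)
		return;
	ext->act_miss_cookie = cookie_pool[id - 1];
	ext->act_miss = 1;
}

int main(void)
{
	struct sketch_tc_skb_ext ext;
	uint32_t id = sketch_act_miss_mapping_get(0x1234abcdULL);

	sketch_restore_act_miss(id, &ext);
	printf("act_miss=%u cookie=0x%llx\n", ext.act_miss,
	       (unsigned long long)ext.act_miss_cookie);
	return 0;
}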
Signed-off-by: Paul Blakey Reviewed-by: Roi Dayan Reviewed-by: Oz Sholmo --- .../ethernet/mellanox/mlx5/core/en/tc_ct.c | 32 +++++----- .../ethernet/mellanox/mlx5/core/en/tc_ct.h | 2 + .../net/ethernet/mellanox/mlx5/core/en_tc.c | 61 ++++++++++++++++--- .../net/ethernet/mellanox/mlx5/core/en_tc.h | 6 ++ .../net/ethernet/mellanox/mlx5/core/eswitch.h | 2 + 5 files changed, 79 insertions(+), 24 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index e1a2861cc13ba..71d8a906add97 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -59,6 +59,7 @@ struct mlx5_tc_ct_debugfs { struct mlx5_tc_ct_priv { struct mlx5_core_dev *dev; + struct mlx5e_priv *priv; const struct net_device *netdev; struct mod_hdr_tbl *mod_hdr_tbl; struct xarray tuple_ids; @@ -85,7 +86,6 @@ struct mlx5_ct_flow { struct mlx5_flow_attr *pre_ct_attr; struct mlx5_flow_handle *pre_ct_rule; struct mlx5_ct_ft *ft; - u32 chain_mapping; }; struct mlx5_ct_zone_rule { @@ -1441,6 +1441,7 @@ mlx5_tc_ct_parse_action(struct mlx5_tc_ct_priv *priv, attr->ct_attr.zone = act->ct.zone; attr->ct_attr.ct_action = act->ct.action; attr->ct_attr.nf_ft = act->ct.flow_table; + attr->ct_attr.act_miss_cookie = act->miss_cookie; return 0; } @@ -1778,7 +1779,7 @@ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft) * + ft prio (tc chain) + * + original match + * +---------------------+ - * | set chain miss mapping + * | set act_miss_cookie mapping * | set fte_id * | set tunnel_id * | do decap @@ -1823,7 +1824,7 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_flow_attr *pre_ct_attr; struct mlx5_modify_hdr *mod_hdr; struct mlx5_ct_flow *ct_flow; - int chain_mapping = 0, err; + int act_miss_mapping = 0, err; struct mlx5_ct_ft *ft; u16 zone; @@ -1858,22 +1859,18 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv, pre_ct_attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; - /* Write chain miss tag for miss in ct table as we - * don't go though all prios of this chain as normal tc rules - * miss. 
- */ - err = mlx5_chains_get_chain_mapping(ct_priv->chains, attr->chain, - &chain_mapping); + err = mlx5e_tc_action_miss_mapping_get(ct_priv->priv, attr, attr->ct_attr.act_miss_cookie, + &act_miss_mapping); if (err) { - ct_dbg("Failed to get chain register mapping for chain"); - goto err_get_chain; + ct_dbg("Failed to get register mapping for act miss"); + goto err_get_act_miss; } - ct_flow->chain_mapping = chain_mapping; + attr->ct_attr.act_miss_mapping = act_miss_mapping; err = mlx5e_tc_match_to_reg_set(priv->mdev, pre_mod_acts, ct_priv->ns_type, - MAPPED_OBJ_TO_REG, chain_mapping); + MAPPED_OBJ_TO_REG, act_miss_mapping); if (err) { - ct_dbg("Failed to set chain register mapping"); + ct_dbg("Failed to set act miss register mapping"); goto err_mapping; } @@ -1937,8 +1934,8 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv, mlx5_modify_header_dealloc(priv->mdev, pre_ct_attr->modify_hdr); err_mapping: mlx5e_mod_hdr_dealloc(pre_mod_acts); - mlx5_chains_put_chain_mapping(ct_priv->chains, ct_flow->chain_mapping); -err_get_chain: + mlx5e_tc_action_miss_mapping_put(ct_priv->priv, attr, act_miss_mapping); +err_get_act_miss: kfree(ct_flow->pre_ct_attr); err_alloc_pre: mlx5_tc_ct_del_ft_cb(ct_priv, ft); @@ -1977,7 +1974,7 @@ __mlx5_tc_ct_delete_flow(struct mlx5_tc_ct_priv *ct_priv, mlx5_tc_rule_delete(priv, ct_flow->pre_ct_rule, pre_ct_attr); mlx5_modify_header_dealloc(priv->mdev, pre_ct_attr->modify_hdr); - mlx5_chains_put_chain_mapping(ct_priv->chains, ct_flow->chain_mapping); + mlx5e_tc_action_miss_mapping_put(ct_priv->priv, attr, attr->ct_attr.act_miss_mapping); mlx5_tc_ct_del_ft_cb(ct_priv, ct_flow->ft); kfree(ct_flow->pre_ct_attr); @@ -2157,6 +2154,7 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains, } spin_lock_init(&ct_priv->ht_lock); + ct_priv->priv = priv; ct_priv->ns_type = ns_type; ct_priv->chains = chains; ct_priv->netdev = priv->netdev; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h index 5bbd6b92840fb..5c5ddaa83055d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h @@ -28,6 +28,8 @@ struct mlx5_ct_attr { struct mlx5_ct_flow *ct_flow; struct nf_flowtable *nf_ft; u32 ct_labels_id; + u32 act_miss_mapping; + u64 act_miss_cookie; }; #define zone_to_reg_ct {\ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 2390b227b5037..daacac5144034 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -3828,6 +3828,7 @@ mlx5e_clone_flow_attr_for_post_act(struct mlx5_flow_attr *attr, attr2->parse_attr = parse_attr; attr2->dest_chain = 0; attr2->dest_ft = NULL; + attr2->act_id_restore_rule = NULL; if (ns_type == MLX5_FLOW_NAMESPACE_FDB) { attr2->esw_attr->out_count = 0; @@ -5683,15 +5684,18 @@ static bool mlx5e_tc_restore_tunnel(struct mlx5e_priv *priv, struct sk_buff *skb return true; } -static bool mlx5e_tc_restore_skb_chain(struct sk_buff *skb, struct mlx5_tc_ct_priv *ct_priv, - u32 chain, u32 zone_restore_id, - u32 tunnel_id, struct mlx5e_tc_update_priv *tc_priv) +static bool mlx5e_tc_restore_skb_tc_meta(struct sk_buff *skb, struct mlx5_tc_ct_priv *ct_priv, + struct mlx5_mapped_obj *mapped_obj, u32 zone_restore_id, + u32 tunnel_id, struct mlx5e_tc_update_priv *tc_priv) { + u32 chain = mapped_obj->type == MLX5_MAPPED_OBJ_CHAIN ? 
mapped_obj->chain : 0; + u64 act_miss_cookie = mapped_obj->type == MLX5_MAPPED_OBJ_ACT_MISS ? + mapped_obj->act_miss_cookie : 0; struct mlx5e_priv *priv = netdev_priv(skb->dev); struct tc_skb_ext *tc_skb_ext; #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) - if (chain) { + if (chain || act_miss_cookie) { if (!mlx5e_tc_ct_restore_flow(ct_priv, skb, zone_restore_id)) return false; @@ -5701,7 +5705,12 @@ static bool mlx5e_tc_restore_skb_chain(struct sk_buff *skb, struct mlx5_tc_ct_pr return false; } - tc_skb_ext->chain = chain; + if (act_miss_cookie) { + tc_skb_ext->act_miss_cookie = act_miss_cookie; + tc_skb_ext->act_miss = 1; + } else { + tc_skb_ext->chain = chain; + } } #endif /* CONFIG_NET_TC_SKB_EXT */ @@ -5772,8 +5781,9 @@ bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe, struct sk_buff *skb, switch (mapped_obj.type) { case MLX5_MAPPED_OBJ_CHAIN: - return mlx5e_tc_restore_skb_chain(skb, ct_priv, mapped_obj.chain, zone_restore_id, - tunnel_id, tc_priv); + case MLX5_MAPPED_OBJ_ACT_MISS: + return mlx5e_tc_restore_skb_tc_meta(skb, ct_priv, &mapped_obj, zone_restore_id, + tunnel_id, tc_priv); case MLX5_MAPPED_OBJ_SAMPLE: mlx5e_tc_restore_skb_sample(priv, skb, &mapped_obj, tc_priv); tc_priv->skb_done = true; @@ -5811,3 +5821,40 @@ bool mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb) return true; } + +int mlx5e_tc_action_miss_mapping_get(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr, + u64 act_miss_cookie, u32 *act_miss_mapping) +{ + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct mlx5_mapped_obj mapped_obj = {}; + struct mapping_ctx *ctx; + int err; + + ctx = esw->offloads.reg_c0_obj_pool; + + mapped_obj.type = MLX5_MAPPED_OBJ_ACT_MISS; + mapped_obj.act_miss_cookie = act_miss_cookie; + err = mapping_add(ctx, &mapped_obj, act_miss_mapping); + if (err) + return err; + + attr->act_id_restore_rule = esw_add_restore_rule(esw, *act_miss_mapping); + if (IS_ERR(attr->act_id_restore_rule)) + goto err_rule; + + return 0; + +err_rule: + mapping_remove(ctx, *act_miss_mapping); + return err; +} + +void mlx5e_tc_action_miss_mapping_put(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr, + u32 act_miss_mapping) +{ + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; + struct mapping_ctx *ctx = esw->offloads.reg_c0_obj_pool; + + mlx5_del_flow_rules(attr->act_id_restore_rule); + mapping_remove(ctx, act_miss_mapping); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h index 306e8b20941a2..3033afa23d0ae 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h @@ -100,6 +100,7 @@ struct mlx5_flow_attr { struct mlx5_flow_attr *branch_true; struct mlx5_flow_attr *branch_false; struct mlx5_flow_attr *jumping_attr; + struct mlx5_flow_handle *act_id_restore_rule; /* keep this union last */ union { DECLARE_FLEX_ARRAY(struct mlx5_esw_flow_attr, esw_attr); @@ -400,4 +401,9 @@ mlx5e_tc_update_skb_nic(struct mlx5_cqe64 *cqe, struct sk_buff *skb) { return true; } #endif +int mlx5e_tc_action_miss_mapping_get(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr, + u64 act_miss_cookie, u32 *act_miss_mapping); +void mlx5e_tc_action_miss_mapping_put(struct mlx5e_priv *priv, struct mlx5_flow_attr *attr, + u32 act_miss_mapping); + #endif /* __MLX5_EN_TC_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h index 92644fbb50816..6aa48c003ba63 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h +++ 
b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h @@ -52,12 +52,14 @@ enum mlx5_mapped_obj_type { MLX5_MAPPED_OBJ_CHAIN, MLX5_MAPPED_OBJ_SAMPLE, MLX5_MAPPED_OBJ_INT_PORT_METADATA, + MLX5_MAPPED_OBJ_ACT_MISS, }; struct mlx5_mapped_obj { enum mlx5_mapped_obj_type type; union { u32 chain; + u64 act_miss_cookie; struct { u32 group_id; u32 rate;