From patchwork Fri Jul 21 07:15:31 2023
X-Patchwork-Submitter: Marcin Szycik
X-Patchwork-Id: 13322071
X-Patchwork-Delegate: kuba@kernel.org
From: Marcin Szycik
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, wojciech.drewek@intel.com,
    michal.swiatkowski@linux.intel.com, aleksander.lobakin@intel.com,
    davem@davemloft.net, kuba@kernel.org, jiri@resnulli.us,
    pabeni@redhat.com, jesse.brandeburg@intel.com,
    simon.horman@corigine.com, idosch@nvidia.com, andy@kernel.org,
    Marcin Szycik
Subject: [PATCH iwl-next v3 5/6] ice: refactor ICE_TC_FLWR_FIELD_ENC_OPTS
Date: Fri, 21 Jul 2023 09:15:31 +0200
Message-ID: <20230721071532.613888-6-marcin.szycik@linux.intel.com>
In-Reply-To: <20230721071532.613888-1-marcin.szycik@linux.intel.com>
References: <20230721071532.613888-1-marcin.szycik@linux.intel.com>

FLOW_DISSECTOR_KEY_ENC_OPTS can be used for multiple headers, but ice
currently treats it as GTP-exclusive. Rename ICE_TC_FLWR_FIELD_ENC_OPTS to
ICE_TC_FLWR_FIELD_GTP_OPTS and check the tunnel type earlier.
After this refactor, it is easier to add new headers using
FLOW_DISSECTOR_KEY_ENC_OPTS: instead of checking the tunnel type in
ice_tc_count_lkups() and ice_tc_fill_tunnel_outer(), it needs to be checked
only once, in ice_parse_tunnel_attr().

Signed-off-by: Marcin Szycik
Reviewed-by: Simon Horman
---
 drivers/net/ethernet/intel/ice/ice_tc_lib.c | 10 +++++-----
 drivers/net/ethernet/intel/ice/ice_tc_lib.h |  2 +-
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
index a3f96e889454..4355dc7c122b 100644
--- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
@@ -35,7 +35,7 @@ ice_tc_count_lkups(u32 flags, struct ice_tc_flower_lyr_2_4_hdrs *headers,
 	if (flags & ICE_TC_FLWR_FIELD_ENC_DST_MAC)
 		lkups_cnt++;
 
-	if (flags & ICE_TC_FLWR_FIELD_ENC_OPTS)
+	if (flags & ICE_TC_FLWR_FIELD_GTP_OPTS)
 		lkups_cnt++;
 
 	if (flags & (ICE_TC_FLWR_FIELD_ENC_SRC_IPV4 |
@@ -219,8 +219,7 @@ ice_tc_fill_tunnel_outer(u32 flags, struct ice_tc_flower_fltr *fltr,
 		i++;
 	}
 
-	if (flags & ICE_TC_FLWR_FIELD_ENC_OPTS &&
-	    (fltr->tunnel_type == TNL_GTPU || fltr->tunnel_type == TNL_GTPC)) {
+	if (flags & ICE_TC_FLWR_FIELD_GTP_OPTS) {
 		list[i].type = ice_proto_type_from_tunnel(fltr->tunnel_type);
 
 		if (fltr->gtp_pdu_info_masks.pdu_type) {
@@ -1305,7 +1304,8 @@ ice_parse_tunnel_attr(struct net_device *dev, struct flow_rule *rule,
 		}
 	}
 
-	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_OPTS)) {
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_OPTS) &&
+	    (fltr->tunnel_type == TNL_GTPU || fltr->tunnel_type == TNL_GTPC)) {
 		struct flow_match_enc_opts match;
 
 		flow_rule_match_enc_opts(rule, &match);
@@ -1316,7 +1316,7 @@ ice_parse_tunnel_attr(struct net_device *dev, struct flow_rule *rule,
 		memcpy(&fltr->gtp_pdu_info_masks, &match.mask->data[0],
 		       sizeof(struct gtp_pdu_session_info));
 
-		fltr->flags |= ICE_TC_FLWR_FIELD_ENC_OPTS;
+		fltr->flags |= ICE_TC_FLWR_FIELD_GTP_OPTS;
 	}
 
 	return 0;
diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.h b/drivers/net/ethernet/intel/ice/ice_tc_lib.h
index 65d387163a46..5d188ad7517a 100644
--- a/drivers/net/ethernet/intel/ice/ice_tc_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.h
@@ -22,7 +22,7 @@
 #define ICE_TC_FLWR_FIELD_ENC_SRC_L4_PORT	BIT(15)
 #define ICE_TC_FLWR_FIELD_ENC_DST_MAC		BIT(16)
 #define ICE_TC_FLWR_FIELD_ETH_TYPE_ID		BIT(17)
-#define ICE_TC_FLWR_FIELD_ENC_OPTS		BIT(18)
+#define ICE_TC_FLWR_FIELD_GTP_OPTS		BIT(18)
 #define ICE_TC_FLWR_FIELD_CVLAN			BIT(19)
 #define ICE_TC_FLWR_FIELD_PPPOE_SESSID		BIT(20)
 #define ICE_TC_FLWR_FIELD_PPP_PROTO		BIT(21)
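
For illustration only, not part of this patch: a minimal sketch of how another
option-carrying tunnel could be matched in ice_parse_tunnel_attr() once the
tunnel-type check lives there. TNL_GENEVE exists in the driver, but the flag
ICE_TC_FLWR_FIELD_GENEVE_OPTS and the gnv_opts/gnv_opts_mask storage are
assumed names, not existing fields; the branch mirrors the GTP one above.

	/* Hypothetical follow-up: after the refactor, a new options-carrying
	 * tunnel only needs its own branch here; ice_tc_count_lkups() and
	 * ice_tc_fill_tunnel_outer() would then test the new flag alone.
	 * ICE_TC_FLWR_FIELD_GENEVE_OPTS, gnv_opts and gnv_opts_mask are
	 * assumed, not existing driver identifiers.
	 */
	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_OPTS) &&
	    fltr->tunnel_type == TNL_GENEVE) {
		struct flow_match_enc_opts match;

		flow_rule_match_enc_opts(rule, &match);

		/* Copy raw option TLVs; key->len bounds both key and mask. */
		memcpy(fltr->gnv_opts, match.key->data, match.key->len);
		memcpy(fltr->gnv_opts_mask, match.mask->data, match.key->len);

		fltr->flags |= ICE_TC_FLWR_FIELD_GENEVE_OPTS;
	}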