From patchwork Thu Jun 8 18:06:09 2023
X-Patchwork-Submitter: "Ertman, David M"
X-Patchwork-Id: 13272656
X-Patchwork-Delegate: kuba@kernel.org
From: Dave Ertman
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, daniel.machon@microchip.com, Jacob Keller
Subject: [PATCH iwl-next v3 01/10] ice: Correctly initialize queue context values
Date: Thu, 8 Jun 2023 11:06:09 -0700
Message-Id: <20230608180618.574171-2-david.m.ertman@intel.com>
In-Reply-To: <20230608180618.574171-1-david.m.ertman@intel.com>
References: <20230608180618.574171-1-david.m.ertman@intel.com>

From: Jacob Keller

The ice_alloc_lan_q_ctx function allocates the queue context array for a
given traffic class. This function uses devm_kcalloc, which zero-allocates
the structure. Thus, before any queue is set up by ice_ena_vsi_txq, the
q_ctx structure has a q_handle of 0 and a q_teid of 0. Both are
potentially valid values, so a zeroed entry cannot be distinguished from a
configured queue.

Modify the ice_alloc_lan_q_ctx function to initialize every member of the
q_ctx array to invalid values. Modify ice_dis_vsi_txq to assign an invalid
value to q_teid at the same point where it assigns the invalid value to
q_handle.
This will allow other code to check whether the queue context is currently
valid before operating on it.

Signed-off-by: Jacob Keller
Signed-off-by: Dave Ertman
Reviewed-by: Daniel Machon
---
 drivers/net/ethernet/intel/ice/ice_common.c |  1 +
 drivers/net/ethernet/intel/ice/ice_sched.c  | 23 ++++++++++++++++-----
 2 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 6acb40f3c202..ebdaf8dc679c 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -4700,6 +4700,7 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
 			break;
 		ice_free_sched_node(pi, node);
 		q_ctx->q_handle = ICE_INVAL_Q_HANDLE;
+		q_ctx->q_teid = ICE_INVAL_TEID;
 	}
 	mutex_unlock(&pi->sched_lock);
 	kfree(qg_list);
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index b664d60fd037..79a8972873f1 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -569,18 +569,24 @@ ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs)
 {
 	struct ice_vsi_ctx *vsi_ctx;
 	struct ice_q_ctx *q_ctx;
+	u16 idx;
 
 	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
 	if (!vsi_ctx)
 		return -EINVAL;
 	/* allocate LAN queue contexts */
 	if (!vsi_ctx->lan_q_ctx[tc]) {
-		vsi_ctx->lan_q_ctx[tc] = devm_kcalloc(ice_hw_to_dev(hw),
-						      new_numqs,
-						      sizeof(*q_ctx),
-						      GFP_KERNEL);
-		if (!vsi_ctx->lan_q_ctx[tc])
+		q_ctx = devm_kcalloc(ice_hw_to_dev(hw), new_numqs,
+				     sizeof(*q_ctx), GFP_KERNEL);
+		if (!q_ctx)
 			return -ENOMEM;
+
+		for (idx = 0; idx < new_numqs; idx++) {
+			q_ctx[idx].q_handle = ICE_INVAL_Q_HANDLE;
+			q_ctx[idx].q_teid = ICE_INVAL_TEID;
+		}
+
+		vsi_ctx->lan_q_ctx[tc] = q_ctx;
 		vsi_ctx->num_lan_q_entries[tc] = new_numqs;
 		return 0;
 	}
@@ -592,9 +598,16 @@ ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs)
 			       sizeof(*q_ctx), GFP_KERNEL);
 		if (!q_ctx)
 			return -ENOMEM;
+
 		memcpy(q_ctx, vsi_ctx->lan_q_ctx[tc],
 		       prev_num * sizeof(*q_ctx));
 		devm_kfree(ice_hw_to_dev(hw), vsi_ctx->lan_q_ctx[tc]);
+
+		for (idx = prev_num; idx < new_numqs; idx++) {
+			q_ctx[idx].q_handle = ICE_INVAL_Q_HANDLE;
+			q_ctx[idx].q_teid = ICE_INVAL_TEID;
+		}
+
 		vsi_ctx->lan_q_ctx[tc] = q_ctx;
 		vsi_ctx->num_lan_q_entries[tc] = new_numqs;
 	}
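A minimal userspace sketch of the sentinel pattern this patch adopts is
below. The DEMO_* constants, struct q_ctx_demo, and q_ctx_is_valid() are
illustrative stand-ins rather than driver symbols; only the shape of the
logic (zero-allocation, sentinel initialization, then an explicit validity
check) mirrors the change above.

/* Why zeroed queue contexts are ambiguous: 0 is a legal q_handle and a
 * legal q_teid, so "never configured" needs explicit sentinel values.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define DEMO_INVAL_Q_HANDLE 0xFFFF	/* stand-in for ICE_INVAL_Q_HANDLE */
#define DEMO_INVAL_TEID 0xFFFFFFFF	/* stand-in for ICE_INVAL_TEID */

struct q_ctx_demo {
	uint16_t q_handle;
	uint32_t q_teid;
};

static struct q_ctx_demo *alloc_q_ctx(size_t num)
{
	struct q_ctx_demo *q = calloc(num, sizeof(*q));

	if (!q)
		return NULL;
	/* calloc() left q_handle == 0 and q_teid == 0, which look like a
	 * real queue 0; overwrite with sentinels as the patch does
	 */
	for (size_t i = 0; i < num; i++) {
		q[i].q_handle = DEMO_INVAL_Q_HANDLE;
		q[i].q_teid = DEMO_INVAL_TEID;
	}
	return q;
}

static bool q_ctx_is_valid(const struct q_ctx_demo *q)
{
	return q->q_handle != DEMO_INVAL_Q_HANDLE &&
	       q->q_teid != DEMO_INVAL_TEID;
}

int main(void)
{
	struct q_ctx_demo *q = alloc_q_ctx(4);

	if (!q)
		return 1;
	q[0].q_handle = 0;	/* queue 0 configured: 0 is a valid handle */
	q[0].q_teid = 0x1234;
	printf("q[0] valid: %d, q[1] valid: %d\n",
	       q_ctx_is_valid(&q[0]), q_ctx_is_valid(&q[1]));
	free(q);
	return 0;
}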
From patchwork Thu Jun 8 18:06:10 2023
X-Patchwork-Submitter: "Ertman, David M"
X-Patchwork-Id: 13272657
X-Patchwork-Delegate: kuba@kernel.org
From: Dave Ertman
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, daniel.machon@microchip.com
Subject: [PATCH iwl-next v3 02/10] ice: Add driver support for firmware changes for LAG
Date: Thu, 8 Jun 2023 11:06:10 -0700
Message-Id: <20230608180618.574171-3-david.m.ertman@intel.com>
In-Reply-To: <20230608180618.574171-1-david.m.ertman@intel.com>
References: <20230608180618.574171-1-david.m.ertman@intel.com>

Add the defines, fields, and detection code for FW support of LAG for
SRIOV. Also expose some previously static functions so that the LAG code
can access them. Clean up code that is unused or not needed for LAG
support. Also add an ordered workqueue for processing LAG events.
Reviewed-by: Daniel Machon Signed-off-by: Dave Ertman --- drivers/net/ethernet/intel/ice/ice.h | 5 ++ .../net/ethernet/intel/ice/ice_adminq_cmd.h | 3 ++ drivers/net/ethernet/intel/ice/ice_common.c | 8 +++ drivers/net/ethernet/intel/ice/ice_lag.c | 53 ++++++++++--------- drivers/net/ethernet/intel/ice/ice_lib.c | 2 +- drivers/net/ethernet/intel/ice/ice_lib.h | 1 + drivers/net/ethernet/intel/ice/ice_main.c | 12 +++++ drivers/net/ethernet/intel/ice/ice_type.h | 2 + 8 files changed, 59 insertions(+), 27 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 9109006336f0..5ac0ad12f9f1 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -200,6 +200,8 @@ enum ice_feature { ICE_F_PTP_EXTTS, ICE_F_SMA_CTRL, ICE_F_GNSS, + ICE_F_ROCE_LAG, + ICE_F_SRIOV_LAG, ICE_F_MAX }; @@ -569,6 +571,7 @@ struct ice_pf { struct mutex sw_mutex; /* lock for protecting VSI alloc flow */ struct mutex tc_mutex; /* lock to protect TC changes */ struct mutex adev_mutex; /* lock to protect aux device access */ + struct mutex lag_mutex; /* protect ice_lag struct in PF */ u32 msg_enable; struct ice_ptp ptp; struct gnss_serial *gnss_serial; @@ -639,6 +642,8 @@ struct ice_pf { struct ice_agg_node vf_agg_node[ICE_MAX_VF_AGG_NODES]; }; +extern struct workqueue_struct *ice_lag_wq; + struct ice_netdev_priv { struct ice_vsi *vsi; struct ice_repr *repr; diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index 63d3e1dcbba5..1d4227b024d3 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -120,6 +120,9 @@ struct ice_aqc_list_caps_elem { #define ICE_AQC_CAPS_PCIE_RESET_AVOIDANCE 0x0076 #define ICE_AQC_CAPS_POST_UPDATE_RESET_RESTRICT 0x0077 #define ICE_AQC_CAPS_NVM_MGMT 0x0080 +#define ICE_AQC_CAPS_FW_LAG_SUPPORT 0x0092 +#define ICE_AQC_BIT_ROCEV2_LAG 0x01 +#define ICE_AQC_BIT_SRIOV_LAG 0x02 u8 major_ver; u8 minor_ver; diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index ebdaf8dc679c..9704e7fa2953 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -2241,6 +2241,14 @@ ice_parse_common_caps(struct ice_hw *hw, struct ice_hw_common_caps *caps, "%s: reset_restrict_support = %d\n", prefix, caps->reset_restrict_support); break; + case ICE_AQC_CAPS_FW_LAG_SUPPORT: + caps->roce_lag = !!(number & ICE_AQC_BIT_ROCEV2_LAG); + ice_debug(hw, ICE_DBG_INIT, "%s: roce_lag = %u\n", + prefix, caps->roce_lag); + caps->sriov_lag = !!(number & ICE_AQC_BIT_SRIOV_LAG); + ice_debug(hw, ICE_DBG_INIT, "%s: sriov_lag = %u\n", + prefix, caps->sriov_lag); + break; default: /* Not one of the recognized common capabilities */ found = false; diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c index 5a7753bda324..73bfc5cd8b37 100644 --- a/drivers/net/ethernet/intel/ice/ice_lag.c +++ b/drivers/net/ethernet/intel/ice/ice_lag.c @@ -4,8 +4,12 @@ /* Link Aggregation code */ #include "ice.h" +#include "ice_lib.h" #include "ice_lag.h" +#define ICE_LAG_RES_SHARED BIT(14) +#define ICE_LAG_RES_VALID BIT(15) + /** * ice_lag_set_primary - set PF LAG state as Primary * @lag: LAG info struct @@ -225,6 +229,26 @@ static void ice_lag_unregister(struct ice_lag *lag, struct net_device *netdev) lag->role = ICE_LAG_NONE; } +/** + * ice_lag_check_nvm_support - Check for NVM support for LAG + * @pf: PF struct + */ 
+static void ice_lag_check_nvm_support(struct ice_pf *pf) +{ + struct ice_hw_dev_caps *caps; + + caps = &pf->hw.dev_caps; + if (caps->common_cap.roce_lag) + ice_set_feature_support(pf, ICE_F_ROCE_LAG); + else + ice_clear_feature_support(pf, ICE_F_ROCE_LAG); + + if (caps->common_cap.sriov_lag) + ice_set_feature_support(pf, ICE_F_SRIOV_LAG); + else + ice_clear_feature_support(pf, ICE_F_SRIOV_LAG); +} + /** * ice_lag_changeupper_event - handle LAG changeupper event * @lag: LAG info struct @@ -264,26 +288,6 @@ static void ice_lag_changeupper_event(struct ice_lag *lag, void *ptr) ice_display_lag_info(lag); } -/** - * ice_lag_changelower_event - handle LAG changelower event - * @lag: LAG info struct - * @ptr: opaque data pointer - * - * ptr to be cast to netdev_notifier_changelowerstate_info - */ -static void ice_lag_changelower_event(struct ice_lag *lag, void *ptr) -{ - struct net_device *netdev = netdev_notifier_info_to_dev(ptr); - - if (netdev != lag->netdev) - return; - - netdev_dbg(netdev, "bonding info\n"); - - if (!netif_is_lag_port(netdev)) - netdev_dbg(netdev, "CHANGELOWER rcvd, but netdev not in LAG. Bail\n"); -} - /** * ice_lag_event_handler - handle LAG events from netdev * @notif_blk: notifier block registered by this netdev @@ -310,9 +314,6 @@ ice_lag_event_handler(struct notifier_block *notif_blk, unsigned long event, case NETDEV_CHANGEUPPER: ice_lag_changeupper_event(lag, ptr); break; - case NETDEV_CHANGELOWERSTATE: - ice_lag_changelower_event(lag, ptr); - break; case NETDEV_BONDING_INFO: ice_lag_info_event(lag, ptr); break; @@ -379,6 +380,8 @@ int ice_init_lag(struct ice_pf *pf) struct ice_vsi *vsi; int err; + ice_lag_check_nvm_support(pf); + pf->lag = kzalloc(sizeof(*lag), GFP_KERNEL); if (!pf->lag) return -ENOMEM; @@ -435,9 +438,7 @@ void ice_deinit_lag(struct ice_pf *pf) if (lag->pf) ice_unregister_lag_handler(lag); - dev_put(lag->upper_netdev); - - dev_put(lag->peer_netdev); + flush_workqueue(ice_lag_wq); kfree(lag); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index c3722c68af99..a3d3857ef59b 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3997,7 +3997,7 @@ bool ice_is_feature_supported(struct ice_pf *pf, enum ice_feature f) * @pf: pointer to the struct ice_pf instance * @f: feature enum to set */ -static void ice_set_feature_support(struct ice_pf *pf, enum ice_feature f) +void ice_set_feature_support(struct ice_pf *pf, enum ice_feature f) { if (f < 0 || f >= ICE_F_MAX) return; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 1628385a9672..dd53fe968ad8 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -163,6 +163,7 @@ int ice_vsi_del_vlan_zero(struct ice_vsi *vsi); bool ice_vsi_has_non_zero_vlans(struct ice_vsi *vsi); u16 ice_vsi_num_non_zero_vlans(struct ice_vsi *vsi); bool ice_is_feature_supported(struct ice_pf *pf, enum ice_feature f); +void ice_set_feature_support(struct ice_pf *pf, enum ice_feature f); void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f); void ice_init_feature_support(struct ice_pf *pf); bool ice_vsi_is_rx_queue_active(struct ice_vsi *vsi); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index f358018337af..e01834d0417e 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -64,6 +64,7 @@ struct device 
 *ice_hw_to_dev(struct ice_hw *hw)
 }
 
 static struct workqueue_struct *ice_wq;
+struct workqueue_struct *ice_lag_wq;
 static const struct net_device_ops ice_netdev_safe_mode_ops;
 static const struct net_device_ops ice_netdev_ops;
 
@@ -3795,6 +3796,7 @@ u16 ice_get_avail_rxq_count(struct ice_pf *pf)
 static void ice_deinit_pf(struct ice_pf *pf)
 {
 	ice_service_task_stop(pf);
+	mutex_destroy(&pf->lag_mutex);
 	mutex_destroy(&pf->adev_mutex);
 	mutex_destroy(&pf->sw_mutex);
 	mutex_destroy(&pf->tc_mutex);
@@ -3875,6 +3877,7 @@ static int ice_init_pf(struct ice_pf *pf)
 	mutex_init(&pf->sw_mutex);
 	mutex_init(&pf->tc_mutex);
 	mutex_init(&pf->adev_mutex);
+	mutex_init(&pf->lag_mutex);
 
 	INIT_HLIST_HEAD(&pf->aq_wait_list);
 	spin_lock_init(&pf->aq_wait_lock);
@@ -5576,10 +5579,18 @@ static int __init ice_module_init(void)
 		return -ENOMEM;
 	}
 
+	ice_lag_wq = alloc_ordered_workqueue("ice_lag_wq", 0);
+	if (!ice_lag_wq) {
+		pr_err("Failed to create LAG workqueue\n");
+		destroy_workqueue(ice_wq);
+		return -ENOMEM;
+	}
+
 	status = pci_register_driver(&ice_driver);
 	if (status) {
 		pr_err("failed to register PCI driver, err %d\n", status);
 		destroy_workqueue(ice_wq);
+		destroy_workqueue(ice_lag_wq);
 	}
 
 	return status;
@@ -5596,6 +5607,7 @@ static void __exit ice_module_exit(void)
 {
 	pci_unregister_driver(&ice_driver);
 	destroy_workqueue(ice_wq);
+	destroy_workqueue(ice_lag_wq);
 	pr_info("module unloaded\n");
 }
 module_exit(ice_module_exit);
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index df9171a1a34f..e82f38c2a940 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -277,6 +277,8 @@ struct ice_hw_common_caps {
 	u8 dcb;
 	u8 ieee_1588;
 	u8 rdma;
+	u8 roce_lag;
+	u8 sriov_lag;
 
 	bool nvm_update_pending_nvm;
 	bool nvm_update_pending_orom;
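ice_lag_check_nvm_support() above turns the two capability bits parsed by
ice_parse_common_caps() into driver feature flags. A minimal userspace
sketch of that bit decode follows; the DEMO_* names and struct caps_demo
are illustrative stand-ins for the ICE_AQC_BIT_* defines and
struct ice_hw_common_caps, not driver symbols.

#include <stdint.h>
#include <stdio.h>

#define DEMO_BIT_ROCEV2_LAG 0x01	/* mirrors ICE_AQC_BIT_ROCEV2_LAG */
#define DEMO_BIT_SRIOV_LAG 0x02		/* mirrors ICE_AQC_BIT_SRIOV_LAG */

struct caps_demo {
	uint8_t roce_lag;
	uint8_t sriov_lag;
};

static void parse_lag_caps(struct caps_demo *caps, uint32_t number)
{
	/* !! collapses the masked bit down to 0 or 1, as the driver does */
	caps->roce_lag = !!(number & DEMO_BIT_ROCEV2_LAG);
	caps->sriov_lag = !!(number & DEMO_BIT_SRIOV_LAG);
}

int main(void)
{
	struct caps_demo caps;

	parse_lag_caps(&caps, 0x02);	/* FW reports SRIOV LAG only */
	printf("roce_lag=%d sriov_lag=%d\n", caps.roce_lag, caps.sriov_lag);
	return 0;
}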
E=Sophos;i="6.00,227,1681196400"; d="scan'208";a="775187924" Received: from dmert-dev.jf.intel.com ([10.166.241.14]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 08 Jun 2023 11:04:37 -0700 From: Dave Ertman To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, daniel.machon@microchip.com Subject: [PATCH iwl-next v3 03/10] ice: changes to the interface with the HW and FW for SRIOV_VF+LAG Date: Thu, 8 Jun 2023 11:06:11 -0700 Message-Id: <20230608180618.574171-4-david.m.ertman@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230608180618.574171-1-david.m.ertman@intel.com> References: <20230608180618.574171-1-david.m.ertman@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org Add defines needed for interaction with the FW admin queue interface in relation to supporting LAG and SRIOV VFs interacting. Add code, or make non-static previously static functions, to access the new and changed admin queue calls for LAG. Signed-off-by: Dave Ertman Reviewed-by: Daniel Machon --- .../net/ethernet/intel/ice/ice_adminq_cmd.h | 50 +++++++++++- drivers/net/ethernet/intel/ice/ice_common.c | 47 +++++++++++ drivers/net/ethernet/intel/ice/ice_common.h | 4 + drivers/net/ethernet/intel/ice/ice_sched.c | 14 ++-- drivers/net/ethernet/intel/ice/ice_sched.h | 21 +++++ drivers/net/ethernet/intel/ice/ice_switch.c | 78 ++++++++++++++----- drivers/net/ethernet/intel/ice/ice_switch.h | 26 +++++++ 7 files changed, 211 insertions(+), 29 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index 1d4227b024d3..c0ad34b42531 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -235,6 +235,8 @@ struct ice_aqc_set_port_params { #define ICE_AQC_SET_P_PARAMS_DOUBLE_VLAN_ENA BIT(2) __le16 bad_frame_vsi; __le16 swid; +#define ICE_AQC_PORT_SWID_VALID BIT(15) +#define ICE_AQC_PORT_SWID_M 0xFF u8 reserved[10]; }; @@ -244,10 +246,12 @@ struct ice_aqc_set_port_params { * Allocate Resources command (indirect 0x0208) * Free Resources command (indirect 0x0209) * Get Allocated Resource Descriptors Command (indirect 0x020A) + * Share Resource command (indirect 0x020B) */ #define ICE_AQC_RES_TYPE_VSI_LIST_REP 0x03 #define ICE_AQC_RES_TYPE_VSI_LIST_PRUNE 0x04 #define ICE_AQC_RES_TYPE_RECIPE 0x05 +#define ICE_AQC_RES_TYPE_SWID 0x07 #define ICE_AQC_RES_TYPE_FDIR_COUNTER_BLOCK 0x21 #define ICE_AQC_RES_TYPE_FDIR_GUARANTEED_ENTRIES 0x22 #define ICE_AQC_RES_TYPE_FDIR_SHARED_ENTRIES 0x23 @@ -267,6 +271,7 @@ struct ice_aqc_set_port_params { /* Allocate Resources command (indirect 0x0208) * Free Resources command (indirect 0x0209) + * Share Resource command (indirect 0x020B) */ struct ice_aqc_alloc_free_res_cmd { __le16 num_entries; /* Number of Resource entries */ @@ -821,7 +826,11 @@ struct ice_aqc_txsched_move_grp_info_hdr { __le32 src_parent_teid; __le32 dest_parent_teid; __le16 num_elems; - __le16 reserved; + u8 mode; +#define ICE_AQC_MOVE_ELEM_MODE_SAME_PF 0x0 +#define ICE_AQC_MOVE_ELEM_MODE_GIVE_OWN 0x1 
+#define ICE_AQC_MOVE_ELEM_MODE_KEEP_OWN 0x2 + u8 reserved; }; struct ice_aqc_move_elem { @@ -1926,6 +1935,42 @@ struct ice_aqc_dis_txq_item { __le16 q_id[]; } __packed; +/* Move/Reconfigure Tx queue (indirect 0x0C32) */ +struct ice_aqc_cfg_txqs { + u8 cmd_type; +#define ICE_AQC_Q_CFG_MOVE_NODE 0x1 +#define ICE_AQC_Q_CFG_TC_CHNG 0x2 +#define ICE_AQC_Q_CFG_MOVE_TC_CHNG 0x3 +#define ICE_AQC_Q_CFG_SUBSEQ_CALL BIT(2) +#define ICE_AQC_Q_CFG_FLUSH BIT(3) + u8 num_qs; + u8 port_num_chng; +#define ICE_AQC_Q_CFG_SRC_PRT_M 0x7 +#define ICE_AQC_Q_CFG_DST_PRT_S 3 +#define ICE_AQC_Q_CFG_DST_PRT_M (0x7 << ICE_AQC_Q_CFG_DST_PRT_S) + u8 time_out; +#define ICE_AQC_Q_CFG_TIMEOUT_S 2 +#define ICE_AQC_Q_CFG_TIMEOUT_M (0x1F << ICE_AQC_Q_CFG_TIMEOUT_S) + __le32 blocked_cgds; + __le32 addr_high; + __le32 addr_low; +}; + +/* Per Q struct for Move/Reconfigure Tx LAN Queues (indirect 0x0C32) */ +struct ice_aqc_cfg_txq_perq { + __le16 q_handle; + u8 tc; + u8 rsvd; + __le32 q_teid; +}; + +/* The buffer for Move/Reconfigure Tx LAN Queues (indirect 0x0C32) */ +struct ice_aqc_cfg_txqs_buf { + __le32 src_parent_teid; + __le32 dst_parent_teid; + struct ice_aqc_cfg_txq_perq queue_info[]; +}; + /* Add Tx RDMA Queue Set (indirect 0x0C33) */ struct ice_aqc_add_rdma_qset { u8 num_qset_grps; @@ -2184,6 +2229,7 @@ struct ice_aq_desc { struct ice_aqc_neigh_dev_req neigh_dev; struct ice_aqc_add_txqs add_txqs; struct ice_aqc_dis_txqs dis_txqs; + struct ice_aqc_cfg_txqs cfg_txqs; struct ice_aqc_add_rdma_qset add_rdma_qset; struct ice_aqc_add_get_update_free_vsi vsi_cmd; struct ice_aqc_add_update_free_vsi_resp add_update_free_vsi_res; @@ -2266,6 +2312,7 @@ enum ice_adminq_opc { /* Alloc/Free/Get Resources */ ice_aqc_opc_alloc_res = 0x0208, ice_aqc_opc_free_res = 0x0209, + ice_aqc_opc_share_res = 0x020B, ice_aqc_opc_set_vlan_mode_parameters = 0x020C, ice_aqc_opc_get_vlan_mode_parameters = 0x020D, @@ -2359,6 +2406,7 @@ enum ice_adminq_opc { /* Tx queue handling commands/events */ ice_aqc_opc_add_txqs = 0x0C30, ice_aqc_opc_dis_txqs = 0x0C31, + ice_aqc_opc_cfg_txqs = 0x0C32, ice_aqc_opc_add_rdma_qset = 0x0C33, /* package commands */ diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 9704e7fa2953..9a3f62f897fb 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -4229,6 +4229,53 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps, return status; } +/** + * ice_aq_cfg_lan_txq + * @hw: pointer to the hardware structure + * @buf: buffer for command + * @buf_size: size of buffer in bytes + * @num_qs: number of queues being configured + * @oldport: origination lport + * @newport: destination lport + * @cd: pointer to command details structure or NULL + * + * Move/Configure LAN Tx queue (0x0C32) + * + * There is a better AQ command to use for moving nodes, so only coding + * this one for configuring the node. 
+ */ +int +ice_aq_cfg_lan_txq(struct ice_hw *hw, struct ice_aqc_cfg_txqs_buf *buf, + u16 buf_size, u16 num_qs, u8 oldport, u8 newport, + struct ice_sq_cd *cd) +{ + struct ice_aqc_cfg_txqs *cmd; + struct ice_aq_desc desc; + int status; + + cmd = &desc.params.cfg_txqs; + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_cfg_txqs); + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); + + if (!buf) + return -EINVAL; + + cmd->cmd_type = ICE_AQC_Q_CFG_TC_CHNG; + cmd->num_qs = num_qs; + cmd->port_num_chng = (oldport & ICE_AQC_Q_CFG_SRC_PRT_M); + cmd->port_num_chng |= (newport << ICE_AQC_Q_CFG_DST_PRT_S) & + ICE_AQC_Q_CFG_DST_PRT_M; + cmd->time_out = (5 << ICE_AQC_Q_CFG_TIMEOUT_S) & + ICE_AQC_Q_CFG_TIMEOUT_M; + cmd->blocked_cgds = 0; + + status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd); + if (status) + ice_debug(hw, ICE_DBG_SCHED, "Failed to reconfigure nodes %d\n", + hw->adminq.sq_last_status); + return status; +} + /** * ice_aq_add_rdma_qsets * @hw: pointer to the hardware structure diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h index 81961a7d6598..df12a9d8d28c 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.h +++ b/drivers/net/ethernet/intel/ice/ice_common.h @@ -186,6 +186,10 @@ int ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, u8 num_qgrps, struct ice_aqc_add_tx_qgrp *buf, u16 buf_size, struct ice_sq_cd *cd); +int +ice_aq_cfg_lan_txq(struct ice_hw *hw, struct ice_aqc_cfg_txqs_buf *buf, + u16 buf_size, u16 num_qs, u8 oldport, u8 newport, + struct ice_sq_cd *cd); int ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle); void ice_replay_post(struct ice_hw *hw); void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf); diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c index 79a8972873f1..f4677704b95e 100644 --- a/drivers/net/ethernet/intel/ice/ice_sched.c +++ b/drivers/net/ethernet/intel/ice/ice_sched.c @@ -447,7 +447,7 @@ ice_aq_cfg_sched_elems(struct ice_hw *hw, u16 elems_req, * * Move scheduling elements (0x0408) */ -static int +int ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req, struct ice_aqc_move_elem *buf, u16 buf_size, u16 *grps_movd, struct ice_sq_cd *cd) @@ -526,7 +526,7 @@ ice_aq_query_sched_res(struct ice_hw *hw, u16 buf_size, * * This function suspends or resumes HW nodes */ -static int +int ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids, bool suspend) { @@ -1057,7 +1057,7 @@ ice_sched_add_nodes_to_hw_layer(struct ice_port_info *pi, * * This function add nodes to a given layer. 
*/ -static int +int ice_sched_add_nodes_to_layer(struct ice_port_info *pi, struct ice_sched_node *tc_node, struct ice_sched_node *parent, u8 layer, @@ -1132,7 +1132,7 @@ static u8 ice_sched_get_qgrp_layer(struct ice_hw *hw) * * This function returns the current VSI layer number */ -static u8 ice_sched_get_vsi_layer(struct ice_hw *hw) +u8 ice_sched_get_vsi_layer(struct ice_hw *hw) { /* Num Layers VSI layer * 9 6 @@ -1155,7 +1155,7 @@ static u8 ice_sched_get_vsi_layer(struct ice_hw *hw) * * This function returns the current aggregator layer number */ -static u8 ice_sched_get_agg_layer(struct ice_hw *hw) +u8 ice_sched_get_agg_layer(struct ice_hw *hw) { /* Num Layers aggregator layer * 9 4 @@ -1590,7 +1590,7 @@ ice_sched_get_vsi_node(struct ice_port_info *pi, struct ice_sched_node *tc_node, * This function retrieves an aggregator node for a given aggregator ID from * a given TC branch */ -static struct ice_sched_node * +struct ice_sched_node * ice_sched_get_agg_node(struct ice_port_info *pi, struct ice_sched_node *tc_node, u32 agg_id) { @@ -2152,7 +2152,7 @@ ice_get_agg_info(struct ice_hw *hw, u32 agg_id) * This function walks through the aggregator subtree to find a free parent * node */ -static struct ice_sched_node * +struct ice_sched_node * ice_sched_get_free_vsi_parent(struct ice_hw *hw, struct ice_sched_node *node, u16 *num_nodes) { diff --git a/drivers/net/ethernet/intel/ice/ice_sched.h b/drivers/net/ethernet/intel/ice/ice_sched.h index 9c100747445a..8bd26353d76a 100644 --- a/drivers/net/ethernet/intel/ice/ice_sched.h +++ b/drivers/net/ethernet/intel/ice/ice_sched.h @@ -146,8 +146,29 @@ ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id, enum ice_agg_type agg_type, u8 tc, enum ice_rl_type rl_type, u32 bw); int ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes); +int +ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids, + bool suspend); +struct ice_sched_node * +ice_sched_get_agg_node(struct ice_port_info *pi, struct ice_sched_node *tc_node, + u32 agg_id); +u8 ice_sched_get_agg_layer(struct ice_hw *hw); +u8 ice_sched_get_vsi_layer(struct ice_hw *hw); +struct ice_sched_node * +ice_sched_get_free_vsi_parent(struct ice_hw *hw, struct ice_sched_node *node, + u16 *num_nodes); +int +ice_sched_add_nodes_to_layer(struct ice_port_info *pi, + struct ice_sched_node *tc_node, + struct ice_sched_node *parent, u8 layer, + u16 num_nodes, u32 *first_node_teid, + u16 *num_nodes_added); void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw); void ice_sched_replay_agg(struct ice_hw *hw); +int +ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req, + struct ice_aqc_move_elem *buf, u16 buf_size, + u16 *grps_movd, struct ice_sq_cd *cd); int ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle); int ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx); #endif /* _ICE_SCHED_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index 49be0d2532eb..e8d4f07d7914 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -20,10 +20,10 @@ * byte 0 = 0x2: to identify it as locally administered DA MAC * byte 6 = 0x2: to identify it as locally administered SA MAC * byte 12 = 0x81 & byte 13 = 0x00: - * In case of VLAN filter first two bytes defines ether type (0x8100) - * and remaining two bytes are placeholder for programming a given VLAN ID - * In case of Ether type filter it is treated as header without VLAN tag - * and byte 12 and 13 is used to 
program a given Ether type instead + * In case of VLAN filter first two bytes defines ether type (0x8100) + * and remaining two bytes are placeholder for programming a given VLAN ID + * In case of Ether type filter it is treated as header without VLAN tag + * and byte 12 and 13 is used to program a given Ether type instead */ #define DUMMY_ETH_HDR_LEN 16 static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0, @@ -1369,14 +1369,6 @@ static const struct ice_dummy_pkt_profile ice_dummy_pkt_profiles[] = { ICE_PKT_PROFILE(tcp, 0), }; -#define ICE_SW_RULE_RX_TX_HDR_SIZE(s, l) struct_size((s), hdr_data, (l)) -#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s) \ - ICE_SW_RULE_RX_TX_HDR_SIZE((s), DUMMY_ETH_HDR_LEN) -#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s) \ - ICE_SW_RULE_RX_TX_HDR_SIZE((s), 0) -#define ICE_SW_RULE_LG_ACT_SIZE(s, n) struct_size((s), act, (n)) -#define ICE_SW_RULE_VSI_LIST_SIZE(s, n) struct_size((s), vsi, (n)) - /* this is a recipe to profile association bitmap */ static DECLARE_BITMAP(recipe_to_profile[ICE_MAX_NUM_RECIPES], ICE_MAX_NUM_PROFILES); @@ -1841,8 +1833,13 @@ ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id, lkup_type == ICE_SW_LKUP_DFLT) { sw_buf->res_type = cpu_to_le16(ICE_AQC_RES_TYPE_VSI_LIST_REP); } else if (lkup_type == ICE_SW_LKUP_VLAN) { - sw_buf->res_type = - cpu_to_le16(ICE_AQC_RES_TYPE_VSI_LIST_PRUNE); + if (opc == ice_aqc_opc_alloc_res) + sw_buf->res_type = + cpu_to_le16(ICE_AQC_RES_TYPE_VSI_LIST_PRUNE | + ICE_AQC_RES_TYPE_FLAG_SHARED); + else + sw_buf->res_type = + cpu_to_le16(ICE_AQC_RES_TYPE_VSI_LIST_PRUNE); } else { status = -EINVAL; goto ice_aq_alloc_free_vsi_list_exit; @@ -1910,7 +1907,7 @@ ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz, * * Add(0x0290) */ -static int +int ice_aq_add_recipe(struct ice_hw *hw, struct ice_aqc_recipe_data_elem *s_recipe_list, u16 num_recipes, struct ice_sq_cd *cd) @@ -1947,7 +1944,7 @@ ice_aq_add_recipe(struct ice_hw *hw, * The caller must supply enough space in s_recipe_list to hold all possible * recipes and *num_recipes must equal ICE_MAX_NUM_RECIPES. */ -static int +int ice_aq_get_recipe(struct ice_hw *hw, struct ice_aqc_recipe_data_elem *s_recipe_list, u16 *num_recipes, u16 recipe_root, struct ice_sq_cd *cd) @@ -2040,7 +2037,7 @@ ice_update_recipe_lkup_idx(struct ice_hw *hw, * @cd: pointer to command details structure or NULL * Recipe to profile association (0x0291) */ -static int +int ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, struct ice_sq_cd *cd) { @@ -2066,7 +2063,7 @@ ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, * @cd: pointer to command details structure or NULL * Associate profile ID with given recipe (0x0293) */ -static int +int ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, struct ice_sq_cd *cd) { @@ -2090,7 +2087,7 @@ ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap, * @hw: pointer to the hardware structure * @rid: recipe ID returned as response to AQ call */ -static int ice_alloc_recipe(struct ice_hw *hw, u16 *rid) +int ice_alloc_recipe(struct ice_hw *hw, u16 *rid) { struct ice_aqc_alloc_free_res_elem *sw_buf; u16 buf_len; @@ -3122,7 +3119,7 @@ ice_find_rule_entry(struct ice_hw *hw, u8 recp_id, struct ice_fltr_info *f_info) * handle element. This can be extended further to search VSI list with more * than 1 vsi_count. Returns pointer to VSI list entry if found. 
*/ -static struct ice_vsi_list_map_info * +struct ice_vsi_list_map_info * ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle, u16 *vsi_list_id) { @@ -3133,7 +3130,7 @@ ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle, list_head = &sw->recp_list[recp_id].filt_rules; list_for_each_entry(list_itr, list_head, list_entry) { - if (list_itr->vsi_count == 1 && list_itr->vsi_list_info) { + if (list_itr->vsi_list_info) { map_info = list_itr->vsi_list_info; if (test_bit(vsi_handle, map_info->vsi_map)) { *vsi_list_id = map_info->vsi_list_id; @@ -4544,6 +4541,45 @@ ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items, .offs = {__VA_ARGS__}, \ } +/** + * ice_share_res - set a resource as shared or dedicated + * @hw: hw struct of original owner of resource + * @type: resource type + * @shared: is the resource being set to shared + * @res_id: resource id (descriptor) + */ +int ice_share_res(struct ice_hw *hw, u16 type, u8 shared, u16 res_id) +{ + struct ice_aqc_alloc_free_res_elem *buf; + u16 buf_len; + int status; + + buf_len = struct_size(buf, elem, 1); + buf = kzalloc(buf_len, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + buf->num_elems = cpu_to_le16(1); + if (shared) + buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) & + ICE_AQC_RES_TYPE_M) | + ICE_AQC_RES_TYPE_FLAG_SHARED); + else + buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) & + ICE_AQC_RES_TYPE_M) & + ~ICE_AQC_RES_TYPE_FLAG_SHARED); + + buf->elem[0].e.sw_resp = cpu_to_le16(res_id); + status = ice_aq_alloc_free_res(hw, 1, buf, buf_len, + ice_aqc_opc_share_res, NULL); + if (status) + ice_debug(hw, ICE_DBG_SW, "Could not set resource type %u id %u to %s\n", + type, res_id, shared ? "SHARED" : "DEDICATED"); + + kfree(buf); + return status; +} + /* This is mapping table entry that maps every word within a given protocol * structure to the real byte offset as per the specification of that * protocol header. 
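ice_share_res() above toggles a single "shared" flag inside the res_type
word of the buffer it sends with opcode 0x020B. The sketch below models
that encoding in userspace; the DEMO_* shift, mask, and flag values are
placeholders for ICE_AQC_RES_TYPE_S/_M and ICE_AQC_RES_TYPE_FLAG_SHARED,
whose real definitions live in ice_adminq_cmd.h and are not part of this
hunk.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_RES_TYPE_S 0
#define DEMO_RES_TYPE_M (0x07F << DEMO_RES_TYPE_S)
#define DEMO_RES_TYPE_FLAG_SHARED (1u << 7)

static uint16_t encode_res_type(uint16_t type, bool shared)
{
	uint16_t res_type = (type << DEMO_RES_TYPE_S) & DEMO_RES_TYPE_M;

	/* same branch structure as ice_share_res(): OR the flag in to
	 * share the resource, mask it off to make it dedicated again
	 */
	if (shared)
		res_type |= DEMO_RES_TYPE_FLAG_SHARED;
	else
		res_type &= ~DEMO_RES_TYPE_FLAG_SHARED;
	return res_type;
}

int main(void)
{
	/* 0x07 mirrors ICE_AQC_RES_TYPE_SWID from the header above */
	printf("shared SWID: 0x%04x\n", encode_res_type(0x07, true));
	printf("dedicated SWID: 0x%04x\n", encode_res_type(0x07, false));
	return 0;
}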
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h
index 838a2823b3dc..1f1888932d54 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.h
+++ b/drivers/net/ethernet/intel/ice/ice_switch.h
@@ -22,6 +22,14 @@
 #define ICE_PROFID_IPV6_GTPU_TEID		46
 #define ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER	70
 
+#define ICE_SW_RULE_VSI_LIST_SIZE(s, n) struct_size((s), vsi, (n))
+#define ICE_SW_RULE_RX_TX_HDR_SIZE(s, l) struct_size((s), hdr_data, (l))
+#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s) \
+	ICE_SW_RULE_RX_TX_HDR_SIZE((s), DUMMY_ETH_HDR_LEN)
+#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE(s) \
+	ICE_SW_RULE_RX_TX_HDR_SIZE((s), 0)
+#define ICE_SW_RULE_LG_ACT_SIZE(s, n) struct_size((s), act, (n))
+
 /* VSI context structure for add/get/update/free operations */
 struct ice_vsi_ctx {
 	u16 vsi_num;
@@ -345,6 +353,7 @@ ice_alloc_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
 int
 ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
 		  u16 counter_id);
+int ice_share_res(struct ice_hw *hw, u16 type, u8 shared, u16 res_id);
 
 /* Switch/bridge related commands */
 void ice_rule_add_tunnel_metadata(struct ice_adv_lkup_elem *lkup);
@@ -402,4 +411,21 @@ int ice_update_recipe_lkup_idx(struct ice_hw *hw,
 			       struct ice_update_recipe_lkup_idx_params *params);
 void ice_change_proto_id_to_dvm(void);
 
+struct ice_vsi_list_map_info *
+ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
+			u16 *vsi_list_id);
+int ice_alloc_recipe(struct ice_hw *hw, u16 *rid);
+int ice_aq_get_recipe(struct ice_hw *hw,
+		      struct ice_aqc_recipe_data_elem *s_recipe_list,
+		      u16 *num_recipes, u16 recipe_root, struct ice_sq_cd *cd);
+int ice_aq_add_recipe(struct ice_hw *hw,
+		      struct ice_aqc_recipe_data_elem *s_recipe_list,
+		      u16 num_recipes, struct ice_sq_cd *cd);
+int
+ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd);
+int
+ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd);
+
 #endif /* _ICE_SWITCH_H_ */
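ice_aq_cfg_lan_txq() above packs both lports into the single-byte
port_num_chng field of the 0x0C32 descriptor. A userspace model of that
packing follows, using the mask values from the hunk above; the demo
names themselves are illustrative.

#include <stdint.h>
#include <stdio.h>

#define DEMO_Q_CFG_SRC_PRT_M 0x7	/* mirrors ICE_AQC_Q_CFG_SRC_PRT_M */
#define DEMO_Q_CFG_DST_PRT_S 3		/* mirrors ICE_AQC_Q_CFG_DST_PRT_S */
#define DEMO_Q_CFG_DST_PRT_M (0x7 << DEMO_Q_CFG_DST_PRT_S)

static uint8_t pack_port_num_chng(uint8_t oldport, uint8_t newport)
{
	/* low three bits carry the source lport */
	uint8_t v = oldport & DEMO_Q_CFG_SRC_PRT_M;

	/* next three bits carry the destination lport */
	v |= (newport << DEMO_Q_CFG_DST_PRT_S) & DEMO_Q_CFG_DST_PRT_M;
	return v;
}

int main(void)
{
	/* move queues from lport 1 to lport 2: 0x11 = src 1, dst 2 */
	printf("port_num_chng = 0x%02x\n", pack_port_num_chng(1, 2));
	return 0;
}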
From patchwork Thu Jun 8 18:06:12 2023
X-Patchwork-Submitter: "Ertman, David M"
X-Patchwork-Id: 13272658
X-Patchwork-Delegate: kuba@kernel.org
From: Dave Ertman
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, daniel.machon@microchip.com
Subject: [PATCH iwl-next v3 04/10] ice: implement lag netdev event handler
Date: Thu, 8 Jun 2023 11:06:12 -0700
Message-Id: <20230608180618.574171-5-david.m.ertman@intel.com>
In-Reply-To: <20230608180618.574171-1-david.m.ertman@intel.com>
References: <20230608180618.574171-1-david.m.ertman@intel.com>

The event handler for LAG creates a work item and places it on the ordered
workqueue for processing. Add defines for training packets and for the new
recipes to be used by the switching block of the HW for LAG packet
steering. Update the ice_lag struct to reflect the new processing
methodology.

Signed-off-by: Dave Ertman
---
 drivers/net/ethernet/intel/ice/ice_lag.c      | 125 ++++++++++++++++--
 drivers/net/ethernet/intel/ice/ice_lag.h      |  31 ++++-
 drivers/net/ethernet/intel/ice/ice_virtchnl.c |   2 +
 3 files changed, 144 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
index 73bfc5cd8b37..529abfb904d0 100644
--- a/drivers/net/ethernet/intel/ice/ice_lag.c
+++ b/drivers/net/ethernet/intel/ice/ice_lag.c
@@ -56,11 +56,10 @@ static void ice_lag_set_backup(struct ice_lag *lag)
  */
 static void ice_display_lag_info(struct ice_lag *lag)
 {
-	const char *name, *peer, *upper, *role, *bonded, *primary;
+	const char *name, *upper, *role, *bonded, *primary;
 	struct device *dev = &lag->pf->pdev->dev;
 
 	name = lag->netdev ? netdev_name(lag->netdev) : "unset";
-	peer = lag->peer_netdev ? netdev_name(lag->peer_netdev) : "unset";
 	upper = lag->upper_netdev ? netdev_name(lag->upper_netdev) : "unset";
 	primary = lag->primary ? "TRUE" : "FALSE";
 	bonded = lag->bonded ?
"BONDED" : "UNBONDED"; @@ -82,8 +81,8 @@ static void ice_display_lag_info(struct ice_lag *lag) role = "ERROR"; } - dev_dbg(dev, "%s %s, peer:%s, upper:%s, role:%s, primary:%s\n", name, - bonded, peer, upper, role, primary); + dev_dbg(dev, "%s %s, upper:%s, role:%s, primary:%s\n", name, bonded, + upper, role, primary); } /** @@ -198,7 +197,6 @@ ice_lag_unlink(struct ice_lag *lag, lag->upper_netdev = NULL; } - lag->peer_netdev = NULL; ice_set_rdma_cap(pf); lag->bonded = false; lag->role = ICE_LAG_NONE; @@ -288,6 +286,60 @@ static void ice_lag_changeupper_event(struct ice_lag *lag, void *ptr) ice_display_lag_info(lag); } +/** + * ice_lag_process_event - process a task assigned to the lag_wq + * @work: pointer to work_struct + */ +static void ice_lag_process_event(struct work_struct *work) +{ + struct netdev_notifier_changeupper_info *info; + struct ice_lag_work *lag_work; + struct net_device *netdev; + struct list_head *tmp, *n; + struct ice_pf *pf; + + lag_work = container_of(work, struct ice_lag_work, lag_task); + pf = lag_work->lag->pf; + + mutex_lock(&pf->lag_mutex); + lag_work->lag->netdev_head = &lag_work->netdev_list.node; + + switch (lag_work->event) { + case NETDEV_CHANGEUPPER: + info = &lag_work->info.changeupper_info; + if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) + ice_lag_changeupper_event(lag_work->lag, info); + break; + case NETDEV_BONDING_INFO: + ice_lag_info_event(lag_work->lag, &lag_work->info.bonding_info); + break; + case NETDEV_UNREGISTER: + if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) { + netdev = lag_work->info.bonding_info.info.dev; + if ((netdev == lag_work->lag->netdev || + lag_work->lag->primary) && lag_work->lag->bonded) + ice_lag_unregister(lag_work->lag, netdev); + } + break; + default: + break; + } + + /* cleanup resources allocated for this work item */ + list_for_each_safe(tmp, n, &lag_work->netdev_list.node) { + struct ice_lag_netdev_list *entry; + + entry = list_entry(tmp, struct ice_lag_netdev_list, node); + list_del(&entry->node); + kfree(entry); + } + lag_work->lag->netdev_head = NULL; + + mutex_unlock(&pf->lag_mutex); + + kfree(work); +} + /** * ice_lag_event_handler - handle LAG events from netdev * @notif_blk: notifier block registered by this netdev @@ -299,31 +351,79 @@ ice_lag_event_handler(struct notifier_block *notif_blk, unsigned long event, void *ptr) { struct net_device *netdev = netdev_notifier_info_to_dev(ptr); + struct net_device *upper_netdev; + struct ice_lag_work *lag_work; struct ice_lag *lag; - lag = container_of(notif_blk, struct ice_lag, notif_block); + if (!netif_is_ice(netdev)) + return NOTIFY_DONE; + + if (event != NETDEV_CHANGEUPPER && event != NETDEV_BONDING_INFO && + event != NETDEV_UNREGISTER) + return NOTIFY_DONE; + if (!(netdev->priv_flags & IFF_BONDING)) + return NOTIFY_DONE; + + lag = container_of(notif_blk, struct ice_lag, notif_block); if (!lag->netdev) return NOTIFY_DONE; - /* Check that the netdev is in the working namespace */ if (!net_eq(dev_net(netdev), &init_net)) return NOTIFY_DONE; + /* This memory will be freed at the end of ice_lag_process_event */ + lag_work = kzalloc(sizeof(*lag_work), GFP_KERNEL); + if (!lag_work) + return -ENOMEM; + + lag_work->event_netdev = netdev; + lag_work->lag = lag; + lag_work->event = event; + if (event == NETDEV_CHANGEUPPER) { + struct netdev_notifier_changeupper_info *info; + + info = ptr; + upper_netdev = info->upper_dev; + } else { + upper_netdev = netdev_master_upper_dev_get(netdev); + } + + INIT_LIST_HEAD(&lag_work->netdev_list.node); + if (upper_netdev) { + struct 
ice_lag_netdev_list *nd_list;
+		struct net_device *tmp_nd;
+
+		rcu_read_lock();
+		for_each_netdev_in_bond_rcu(upper_netdev, tmp_nd) {
+			nd_list = kzalloc(sizeof(*nd_list), GFP_KERNEL);
+			if (!nd_list)
+				break;
+
+			nd_list->netdev = tmp_nd;
+			list_add(&nd_list->node, &lag_work->netdev_list.node);
+		}
+		rcu_read_unlock();
+	}
+
 	switch (event) {
 	case NETDEV_CHANGEUPPER:
-		ice_lag_changeupper_event(lag, ptr);
+		lag_work->info.changeupper_info =
+			*((struct netdev_notifier_changeupper_info *)ptr);
 		break;
 	case NETDEV_BONDING_INFO:
-		ice_lag_info_event(lag, ptr);
-		break;
-	case NETDEV_UNREGISTER:
-		ice_lag_unregister(lag, netdev);
+		lag_work->info.bonding_info =
+			*((struct netdev_notifier_bonding_info *)ptr);
 		break;
 	default:
+		lag_work->info.notifier_info =
+			*((struct netdev_notifier_info *)ptr);
 		break;
 	}
 
+	INIT_WORK(&lag_work->lag_task, ice_lag_process_event);
+	queue_work(ice_lag_wq, &lag_work->lag_task);
+
 	return NOTIFY_DONE;
 }
 
@@ -398,7 +498,6 @@ int ice_init_lag(struct ice_pf *pf)
 	lag->netdev = vsi->netdev;
 	lag->role = ICE_LAG_NONE;
 	lag->bonded = false;
-	lag->peer_netdev = NULL;
 	lag->upper_netdev = NULL;
 	lag->notif_block.notifier_call = NULL;
 
diff --git a/drivers/net/ethernet/intel/ice/ice_lag.h b/drivers/net/ethernet/intel/ice/ice_lag.h
index 2c373676c42f..df4af5184a75 100644
--- a/drivers/net/ethernet/intel/ice/ice_lag.h
+++ b/drivers/net/ethernet/intel/ice/ice_lag.h
@@ -14,20 +14,49 @@ enum ice_lag_role {
 	ICE_LAG_UNSET
 };
 
+#define ICE_LAG_INVALID_PORT 0xFF
+
 struct ice_pf;
+struct ice_vf;
+
+struct ice_lag_netdev_list {
+	struct list_head node;
+	struct net_device *netdev;
+};
 
 /* LAG info struct */
 struct ice_lag {
 	struct ice_pf *pf; /* backlink to PF struct */
 	struct net_device *netdev; /* this PF's netdev */
-	struct net_device *peer_netdev;
 	struct net_device *upper_netdev; /* upper bonding netdev */
+	struct list_head *netdev_head;
 	struct notifier_block notif_block;
+	s32 bond_mode;
+	u16 bond_swid; /* swid for primary interface */
+	u8 active_port; /* lport value for the current active port */
 	u8 bonded:1; /* currently bonded */
 	u8 primary:1; /* this is primary */
+	u16 pf_recipe;
+	u16 pf_rule_id;
+	u16 cp_rule_idx;
 	u8 role;
 };
 
+/* LAG workqueue struct */
+struct ice_lag_work {
+	struct work_struct lag_task;
+	struct ice_lag_netdev_list netdev_list;
+	struct ice_lag *lag;
+	unsigned long event;
+	struct net_device *event_netdev;
+	union {
+		struct netdev_notifier_changeupper_info changeupper_info;
+		struct netdev_notifier_bonding_info bonding_info;
+		struct netdev_notifier_info notifier_info;
+	} info;
+};
+
+void ice_lag_move_new_vf_nodes(struct ice_vf *vf);
 int ice_init_lag(struct ice_pf *pf);
 void ice_deinit_lag(struct ice_pf *pf);
 #endif /* _ICE_LAG_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
index efbc2968a7bf..625da88e7965 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
@@ -1724,6 +1724,8 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
 				vf->vf_id, i);
 	}
 
+	ice_lag_move_new_vf_nodes(vf);
+
 	/* send the response to the VF */
 	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_VSI_QUEUES,
 				     VIRTCHNL_STATUS_ERR_PARAM, NULL, 0);
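The handler above must not block in notifier context, so it snapshots the
notifier payload and the bond member list, then defers the real work to
the ordered workqueue. A compact userspace model of that defer-and-snapshot
pattern follows; the demo_* names and the hand-rolled FIFO are illustrative
only (the driver uses kzalloc(), INIT_WORK(), and queue_work() on
ice_lag_wq instead).

#include <stdio.h>
#include <stdlib.h>

struct demo_event_info {
	char dev_name[16];
};

struct demo_work {
	unsigned long event;
	struct demo_event_info info;	/* a copy, not a pointer */
	struct demo_work *next;
};

static struct demo_work *queue_head;
static struct demo_work **queue_tail = &queue_head;

/* "notifier" context: only snapshot the payload and enqueue */
static int demo_event_handler(unsigned long event,
			      const struct demo_event_info *info)
{
	struct demo_work *w = calloc(1, sizeof(*w));

	if (!w)
		return -1;
	w->event = event;
	w->info = *info;	/* payload copied: safe to use later */
	*queue_tail = w;
	queue_tail = &w->next;
	return 0;
}

/* "worker" context: items come off one at a time, in submission order,
 * and are freed after processing, as ice_lag_process_event() does
 */
static void demo_process_all(void)
{
	while (queue_head) {
		struct demo_work *w = queue_head;

		queue_head = w->next;
		printf("event %lu on %s\n", w->event, w->info.dev_name);
		free(w);
	}
	queue_tail = &queue_head;
}

int main(void)
{
	struct demo_event_info info = { .dev_name = "bond0" };

	demo_event_handler(5, &info);	/* e.g. a CHANGEUPPER-like event */
	demo_process_all();
	return 0;
}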
From patchwork Thu Jun 8 18:06:13 2023
X-Patchwork-Submitter: "Ertman, David M"
X-Patchwork-Id: 13272661
X-Patchwork-Delegate: kuba@kernel.org
From: Dave Ertman
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, daniel.machon@microchip.com
Subject: [PATCH iwl-next v3 05/10] ice: process events created by lag netdev event handler
Date: Thu, 8 Jun 2023 11:06:13 -0700
Message-Id: <20230608180618.574171-6-david.m.ertman@intel.com>
In-Reply-To: <20230608180618.574171-1-david.m.ertman@intel.com>
References: <20230608180618.574171-1-david.m.ertman@intel.com>

Add the function framework for processing LAG events, along with helper
functions that perform common tasks. Add the basis of the process of
linking a lower netdev to an upper netdev.
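One of those helpers, ice_lag_find_primary() in the diff below, walks the
saved bond member list and returns the member whose lag struct has primary
set. A userspace model of that list walk, with demo_* names as
illustrative stand-ins:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct demo_member {
	const char *name;
	bool primary;
	struct demo_member *next;
};

/* walk the snapshot of bond members, as ice_lag_find_primary() walks
 * lag->netdev_head, and return the primary member if one exists
 */
static struct demo_member *demo_find_primary(struct demo_member *head)
{
	for (struct demo_member *m = head; m; m = m->next)
		if (m->primary)
			return m;
	return NULL;
}

int main(void)
{
	struct demo_member b = { .name = "ens2f1", .primary = false };
	struct demo_member a = { .name = "ens2f0", .primary = true,
				 .next = &b };
	struct demo_member *p = demo_find_primary(&a);

	printf("primary: %s\n", p ? p->name : "none");
	return 0;
}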
Reviewed-by: Daniel Machon Signed-off-by: Dave Ertman --- drivers/net/ethernet/intel/ice/ice_lag.c | 535 +++++++++++++++++--- drivers/net/ethernet/intel/ice/ice_switch.c | 10 +- drivers/net/ethernet/intel/ice/ice_switch.h | 3 + 3 files changed, 475 insertions(+), 73 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c index 529abfb904d0..e66f1129027a 100644 --- a/drivers/net/ethernet/intel/ice/ice_lag.c +++ b/drivers/net/ethernet/intel/ice/ice_lag.c @@ -10,6 +10,13 @@ #define ICE_LAG_RES_SHARED BIT(14) #define ICE_LAG_RES_VALID BIT(15) +#define ICE_RECIPE_LEN 64 +static const u8 ice_dflt_vsi_rcp[ICE_RECIPE_LEN] = { + 0x05, 0, 0, 0, 0x20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0x85, 0, 0x01, 0, 0, 0, 0xff, 0xff, 0x08, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0x30, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; + /** * ice_lag_set_primary - set PF LAG state as Primary * @lag: LAG info struct @@ -50,6 +57,147 @@ static void ice_lag_set_backup(struct ice_lag *lag) lag->role = ICE_LAG_BACKUP; } +/** + * ice_netdev_to_lag - return pointer to associated lag struct from netdev + * @netdev: pointer to net_device struct to query + */ +static struct ice_lag *ice_netdev_to_lag(struct net_device *netdev) +{ + struct ice_netdev_priv *np; + struct ice_vsi *vsi; + + if (!netif_is_ice(netdev)) + return NULL; + + np = netdev_priv(netdev); + if (!np) + return NULL; + + vsi = np->vsi; + if (!vsi) + return NULL; + + return vsi->back->lag; +} + +/** + * ice_lag_find_primary - returns pointer to primary interfaces lag struct + * @lag: local interfaces lag struct + */ +static struct ice_lag *ice_lag_find_primary(struct ice_lag *lag) +{ + struct ice_lag *primary_lag = NULL; + struct list_head *tmp; + + list_for_each(tmp, lag->netdev_head) { + struct ice_lag_netdev_list *entry; + struct ice_lag *tmp_lag; + + entry = list_entry(tmp, struct ice_lag_netdev_list, node); + tmp_lag = ice_netdev_to_lag(entry->netdev); + if (tmp_lag && tmp_lag->primary) { + primary_lag = tmp_lag; + break; + } + } + + return primary_lag; +} + +/** + * ice_lag_cfg_dflt_fltr - Add/Remove default VSI rule for LAG + * @lag: lag struct for local interface + * @add: boolean on whether we are adding filters + */ +static int +ice_lag_cfg_dflt_fltr(struct ice_lag *lag, bool add) +{ + struct ice_sw_rule_lkup_rx_tx *s_rule; + u16 s_rule_sz, vsi_num; + struct ice_hw *hw; + u32 act, opc; + u8 *eth_hdr; + int err; + + hw = &lag->pf->hw; + vsi_num = ice_get_hw_vsi_num(hw, 0); + + s_rule_sz = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule); + s_rule = kzalloc(s_rule_sz, GFP_KERNEL); + if (!s_rule) { + dev_err(ice_pf_to_dev(lag->pf), "error allocating rule for LAG default VSI\n"); + return -ENOMEM; + } + + if (add) { + eth_hdr = s_rule->hdr_data; + ice_fill_eth_hdr(eth_hdr); + + act = (vsi_num << ICE_SINGLE_ACT_VSI_ID_S) & + ICE_SINGLE_ACT_VSI_ID_M; + act |= ICE_SINGLE_ACT_VSI_FORWARDING | + ICE_SINGLE_ACT_VALID_BIT | ICE_SINGLE_ACT_LAN_ENABLE; + + s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_RX); + s_rule->recipe_id = cpu_to_le16(lag->pf_recipe); + s_rule->src = cpu_to_le16(hw->port_info->lport); + s_rule->act = cpu_to_le32(act); + s_rule->hdr_len = cpu_to_le16(DUMMY_ETH_HDR_LEN); + opc = ice_aqc_opc_add_sw_rules; + } else { + s_rule->index = cpu_to_le16(lag->pf_rule_id); + opc = ice_aqc_opc_remove_sw_rules; + } + + err = ice_aq_sw_rules(&lag->pf->hw, s_rule, s_rule_sz, 1, opc, NULL); + if (err) + goto dflt_fltr_free; + + if (add) + lag->pf_rule_id = 
le16_to_cpu(s_rule->index); + else + lag->pf_rule_id = 0; + +dflt_fltr_free: + kfree(s_rule); + return err; +} + +/** + * ice_lag_cfg_pf_fltrs - set filters up for new active port + * @lag: local interfaces lag struct + * @ptr: opaque data containing notifier event + */ +static void +ice_lag_cfg_pf_fltrs(struct ice_lag *lag, void *ptr) +{ + struct netdev_notifier_bonding_info *info; + struct netdev_bonding_info *bonding_info; + struct net_device *event_netdev; + struct device *dev; + + event_netdev = netdev_notifier_info_to_dev(ptr); + /* not for this netdev */ + if (event_netdev != lag->netdev) + return; + + info = (struct netdev_notifier_bonding_info *)ptr; + bonding_info = &info->bonding_info; + dev = ice_pf_to_dev(lag->pf); + + /* interface not active - remove old default VSI rule */ + if (bonding_info->slave.state && lag->pf_rule_id) { + if (ice_lag_cfg_dflt_fltr(lag, false)) + dev_err(dev, "Error removing old default VSI filter\n"); + return; + } + + /* interface becoming active - add new default VSI rule */ + if (!bonding_info->slave.state && !lag->pf_rule_id) + if (ice_lag_cfg_dflt_fltr(lag, true)) + dev_err(dev, "Error adding new default VSI filter\n"); +} + /** * ice_display_lag_info - print LAG info * @lag: LAG info struct @@ -85,6 +233,83 @@ static void ice_display_lag_info(struct ice_lag *lag) upper, role, primary); } +/** + * ice_lag_move_vf_node_tc - move scheduling nodes for one VF on one TC + * @lag: lag info struct + * @oldport: lport of previous nodes location + * @newport: lport of destination nodes location + * @vsi_num: array index of VSI in PF space + * @tc: traffic class to move + */ +static void +ice_lag_move_vf_node_tc(struct ice_lag *lag, u8 oldport, u8 newport, + u16 vsi_num, u8 tc) +{ +} + +/** + * ice_lag_move_single_vf_nodes - Move Tx scheduling nodes for single VF + * @lag: primary interface LAG struct + * @oldport: lport of previous interface + * @newport: lport of destination interface + * @vsi_num: SW index of VF's VSI + */ +static void +ice_lag_move_single_vf_nodes(struct ice_lag *lag, u8 oldport, u8 newport, + u16 vsi_num) +{ + u8 tc; + + ice_for_each_traffic_class(tc) + ice_lag_move_vf_node_tc(lag, oldport, newport, vsi_num, tc); +} + +/** + * ice_lag_move_new_vf_nodes - Move Tx scheduling nodes for a VF if required + * @vf: the VF to move Tx nodes for + * + * Called just after configuring new VF queues. Check whether the VF Tx + * scheduling nodes need to be updated to fail over to the active port. If so, + * move them now. 
+ */ +void ice_lag_move_new_vf_nodes(struct ice_vf *vf) +{ +} + +/** + * ice_lag_move_vf_nodes - move Tx scheduling nodes for all VFs to new port + * @lag: lag info struct + * @oldport: lport of previous interface + * @newport: lport of destination interface + */ +static void ice_lag_move_vf_nodes(struct ice_lag *lag, u8 oldport, u8 newport) +{ + struct ice_pf *pf; + int i; + + if (!lag->primary) + return; + + pf = lag->pf; + ice_for_each_vsi(pf, i) + if (pf->vsi[i] && (pf->vsi[i]->type == ICE_VSI_VF || + pf->vsi[i]->type == ICE_VSI_SWITCHDEV_CTRL)) + ice_lag_move_single_vf_nodes(lag, oldport, newport, i); +} + +#define ICE_LAG_SRIOV_CP_RECIPE 10 +#define ICE_LAG_SRIOV_TRAIN_PKT_LEN 16 + +/** + * ice_lag_cfg_cp_fltr - configure filter for control packets + * @lag: local interface's lag struct + * @add: add or remove rule + */ +static void +ice_lag_cfg_cp_fltr(struct ice_lag *lag, bool add) +{ +} + /** * ice_lag_info_event - handle NETDEV_BONDING_INFO event * @lag: LAG info struct @@ -126,105 +351,171 @@ static void ice_lag_info_event(struct ice_lag *lag, void *ptr) ice_display_lag_info(lag); } +/** + * ice_lag_reclaim_vf_tc - move scheduling nodes back to primary interface + * @lag: primary interface lag struct + * @src_hw: HW struct current node location + * @vsi_num: VSI index in PF space + * @tc: traffic class to move + */ +static void +ice_lag_reclaim_vf_tc(struct ice_lag *lag, struct ice_hw *src_hw, u16 vsi_num, + u8 tc) +{ +} + +/** + * ice_lag_reclaim_vf_nodes - When interface leaving bond primary reclaims nodes + * @lag: primary interface lag struct + * @src_hw: HW struct for current node location + */ +static void +ice_lag_reclaim_vf_nodes(struct ice_lag *lag, struct ice_hw *src_hw) +{ + struct ice_pf *pf; + int i, tc; + + if (!lag->primary || !src_hw) + return; + + pf = lag->pf; + ice_for_each_vsi(pf, i) + if (pf->vsi[i] && (pf->vsi[i]->type == ICE_VSI_VF || + pf->vsi[i]->type == ICE_VSI_SWITCHDEV_CTRL)) + ice_for_each_traffic_class(tc) + ice_lag_reclaim_vf_tc(lag, src_hw, i, tc); +} + /** * ice_lag_link - handle LAG link event * @lag: LAG info struct - * @info: info from the netdev notifier */ -static void -ice_lag_link(struct ice_lag *lag, struct netdev_notifier_changeupper_info *info) +static void ice_lag_link(struct ice_lag *lag) { - struct net_device *netdev_tmp, *upper = info->upper_dev; struct ice_pf *pf = lag->pf; - int peers = 0; if (lag->bonded) dev_warn(ice_pf_to_dev(pf), "%s Already part of a bond\n", netdev_name(lag->netdev)); - rcu_read_lock(); - for_each_netdev_in_bond_rcu(upper, netdev_tmp) - peers++; - rcu_read_unlock(); - - if (lag->upper_netdev != upper) { - dev_hold(upper); - lag->upper_netdev = upper; - } - ice_clear_rdma_cap(pf); lag->bonded = true; lag->role = ICE_LAG_UNSET; - - /* if this is the first element in an LAG mark as primary */ - lag->primary = !!(peers == 1); } /** * ice_lag_unlink - handle unlink event * @lag: LAG info struct - * @info: info from netdev notification */ -static void -ice_lag_unlink(struct ice_lag *lag, - struct netdev_notifier_changeupper_info *info) +static void ice_lag_unlink(struct ice_lag *lag) { - struct net_device *netdev_tmp, *upper = info->upper_dev; + u8 pri_port, act_port, loc_port; struct ice_pf *pf = lag->pf; - bool found = false; if (!lag->bonded) { netdev_dbg(lag->netdev, "bonding unlink event on non-LAG netdev\n"); return; } - /* determine if we are in the new LAG config or not */ - rcu_read_lock(); - for_each_netdev_in_bond_rcu(upper, netdev_tmp) { - if (netdev_tmp == lag->netdev) { - found = true; - break; + if 
(lag->primary) { + act_port = lag->active_port; + pri_port = lag->pf->hw.port_info->lport; + if (act_port != pri_port && act_port != ICE_LAG_INVALID_PORT) + ice_lag_move_vf_nodes(lag, act_port, pri_port); + lag->primary = false; + lag->active_port = ICE_LAG_INVALID_PORT; + } else { + struct ice_lag *primary_lag; + + primary_lag = ice_lag_find_primary(lag); + if (primary_lag) { + act_port = primary_lag->active_port; + pri_port = primary_lag->pf->hw.port_info->lport; + loc_port = pf->hw.port_info->lport; + if (act_port == loc_port && + act_port != ICE_LAG_INVALID_PORT) { + ice_lag_reclaim_vf_nodes(primary_lag, + &lag->pf->hw); + primary_lag->active_port = ICE_LAG_INVALID_PORT; + } } } - rcu_read_unlock(); - - if (found) - return; - - if (lag->upper_netdev) { - dev_put(lag->upper_netdev); - lag->upper_netdev = NULL; - } ice_set_rdma_cap(pf); lag->bonded = false; lag->role = ICE_LAG_NONE; + lag->upper_netdev = NULL; } /** - * ice_lag_unregister - handle netdev unregister events - * @lag: LAG info struct - * @netdev: netdev reporting the event + * ice_lag_link_unlink - helper function to call lag_link/unlink + * @lag: lag info struct + * @ptr: opaque pointer data */ -static void ice_lag_unregister(struct ice_lag *lag, struct net_device *netdev) +static void ice_lag_link_unlink(struct ice_lag *lag, void *ptr) { - struct ice_pf *pf = lag->pf; + struct net_device *netdev = netdev_notifier_info_to_dev(ptr); + struct netdev_notifier_changeupper_info *info = ptr; - /* check to see if this event is for this netdev - * check that we are in an aggregate - */ - if (netdev != lag->netdev || !lag->bonded) + if (netdev != lag->netdev) return; - if (lag->upper_netdev) { - dev_put(lag->upper_netdev); - lag->upper_netdev = NULL; - ice_set_rdma_cap(pf); - } - /* perform some cleanup in case we come back */ - lag->bonded = false; - lag->role = ICE_LAG_NONE; + if (info->linking) + ice_lag_link(lag); + else + ice_lag_unlink(lag); +} + +/** + * ice_lag_set_swid - set the SWID on secondary interface + * @primary_swid: primary interface's SWID + * @local_lag: local interfaces LAG struct + * @link: Is this a linking activity + * + * If link is false, then primary_swid should be expected to not be valid + */ +static void +ice_lag_set_swid(u16 primary_swid, struct ice_lag *local_lag, + bool link) +{ +} + +/** + * ice_lag_primary_swid - set/clear the SHARED attrib of primary's SWID + * @lag: primary interfaces lag struct + * @link: is this a linking activity + * + * Implement setting primary SWID as shared using 0x020B + */ +static void ice_lag_primary_swid(struct ice_lag *lag, bool link) +{ + struct ice_hw *hw; + u16 swid; + + hw = &lag->pf->hw; + swid = hw->port_info->sw_id; + + if (ice_share_res(hw, ICE_AQC_RES_TYPE_SWID, link, swid)) + dev_warn(ice_pf_to_dev(lag->pf), "Failure to set primary interface shared status\n"); +} + +/** + * ice_lag_add_prune_list - Adds event_pf's VSI to primary's prune list + * @lag: lag info struct + * @event_pf: PF struct for VSI we are adding to primary's prune list + */ +static void ice_lag_add_prune_list(struct ice_lag *lag, struct ice_pf *event_pf) +{ +} + +/** + * ice_lag_del_prune_list - Remove secondary's vsi from primary's prune list + * @lag: primary interface's ice_lag struct + * @event_pf: PF struct for unlinking interface + */ +static void ice_lag_del_prune_list(struct ice_lag *lag, struct ice_pf *event_pf) +{ } /** @@ -257,6 +548,7 @@ static void ice_lag_check_nvm_support(struct ice_pf *pf) static void ice_lag_changeupper_event(struct ice_lag *lag, void *ptr) { struct 
netdev_notifier_changeupper_info *info; + struct ice_lag *primary_lag; struct net_device *netdev; info = ptr; @@ -266,24 +558,84 @@ static void ice_lag_changeupper_event(struct ice_lag *lag, void *ptr) if (netdev != lag->netdev) return; - if (!info->upper_dev) { - netdev_dbg(netdev, "changeupper rcvd, but no upper defined\n"); - return; + primary_lag = ice_lag_find_primary(lag); + if (info->linking) { + lag->upper_netdev = info->upper_dev; + /* If there is not already a primary interface in the LAG, + * then mark this one as primary. + */ + if (!primary_lag) { + lag->primary = true; + /* Configure primary's SWID to be shared */ + ice_lag_primary_swid(lag, true); + primary_lag = lag; + } else { + u16 swid; + + swid = primary_lag->pf->hw.port_info->sw_id; + ice_lag_set_swid(swid, lag, true); + ice_lag_add_prune_list(primary_lag, lag->pf); + } + /* add filter for primary control packets */ + ice_lag_cfg_cp_fltr(lag, true); + } else { + if (!primary_lag && lag->primary) + primary_lag = lag; + + if (primary_lag) { + if (!lag->primary) { + ice_lag_set_swid(0, lag, false); + } else { + ice_lag_primary_swid(lag, false); + ice_lag_del_prune_list(primary_lag, lag->pf); + } + ice_lag_cfg_cp_fltr(lag, false); + } } +} - netdev_dbg(netdev, "bonding %s\n", info->linking ? "LINK" : "UNLINK"); +/** + * ice_lag_monitor_link - monitor interfaces entering/leaving the aggregate + * @lag: lag info struct + * @ptr: opaque data containing notifier event + * + * This function only operates after a primary has been set. + */ +static void ice_lag_monitor_link(struct ice_lag *lag, void *ptr) +{ +} - if (!netif_is_lag_master(info->upper_dev)) { - netdev_dbg(netdev, "changeupper rcvd, but not primary. bail\n"); - return; - } +/** + * ice_lag_monitor_active - main PF keep track of which port is active + * @lag: lag info struct + * @ptr: opaque data containing notifier event + * + * This function is for the primary PF to monitor changes in which port is + * active and handle changes for SRIOV VF functionality + */ +static void ice_lag_monitor_active(struct ice_lag *lag, void *ptr) +{ +} - if (info->linking) - ice_lag_link(lag, info); - else - ice_lag_unlink(lag, info); +/** + * ice_lag_chk_comp - evaluate bonded interface for feature support + * @lag: lag info struct + * @ptr: opaque data for netdev event info + */ +static bool +ice_lag_chk_comp(struct ice_lag *lag, void *ptr) +{ + return true; +} - ice_display_lag_info(lag); +/** + * ice_lag_unregister - handle netdev unregister events + * @lag: LAG info struct + * @event_netdev: netdev struct for target of notifier event + */ +static void +ice_lag_unregister(struct ice_lag *lag, struct net_device *event_netdev) +{ } /** @@ -307,17 +659,30 @@ static void ice_lag_process_event(struct work_struct *work) switch (lag_work->event) { case NETDEV_CHANGEUPPER: info = &lag_work->info.changeupper_info; - if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) + if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) { + ice_lag_monitor_link(lag_work->lag, info); ice_lag_changeupper_event(lag_work->lag, info); + ice_lag_link_unlink(lag_work->lag, info); + } break; case NETDEV_BONDING_INFO: + if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) { + if (!ice_lag_chk_comp(lag_work->lag, + &lag_work->info.bonding_info)) { + goto lag_cleanup; + } + ice_lag_monitor_active(lag_work->lag, + &lag_work->info.bonding_info); + ice_lag_cfg_pf_fltrs(lag_work->lag, + &lag_work->info.bonding_info); + } ice_lag_info_event(lag_work->lag, &lag_work->info.bonding_info); break; case NETDEV_UNREGISTER: if 
(ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) { netdev = lag_work->info.bonding_info.info.dev; - if ((netdev == lag_work->lag->netdev || - lag_work->lag->primary) && lag_work->lag->bonded) + if (netdev == lag_work->lag->netdev && + lag_work->lag->bonded) ice_lag_unregister(lag_work->lag, netdev); } break; @@ -325,6 +690,7 @@ static void ice_lag_process_event(struct work_struct *work) break; } +lag_cleanup: /* cleanup resources allocated for this work item */ list_for_each_safe(tmp, n, &lag_work->netdev_list.node) { struct ice_lag_netdev_list *entry; @@ -466,6 +832,22 @@ static void ice_unregister_lag_handler(struct ice_lag *lag) } } +/** + * ice_create_lag_recipe + * @hw: pointer to HW struct + * @base_recipe: recipe to base the new recipe on + * @prio: priority for new recipe + * + * function returns 0 on error + */ +static u16 ice_create_lag_recipe(struct ice_hw *hw, const u8 *base_recipe, + u8 prio) +{ + u16 rid = 0; + + return rid; +} + /** * ice_init_lag - initialize support for LAG * @pf: PF struct @@ -507,6 +889,12 @@ int ice_init_lag(struct ice_pf *pf) goto lag_error; } + lag->pf_recipe = ice_create_lag_recipe(&pf->hw, ice_dflt_vsi_rcp, 1); + if (!lag->pf_recipe) { + err = -EINVAL; + goto lag_error; + } + ice_display_lag_info(lag); dev_dbg(dev, "INIT LAG complete\n"); @@ -539,6 +927,9 @@ void ice_deinit_lag(struct ice_pf *pf) flush_workqueue(ice_lag_wq); + ice_free_hw_res(&pf->hw, ICE_AQC_RES_TYPE_RECIPE, 1, + &pf->lag->pf_recipe); + kfree(lag); pf->lag = NULL; diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index e8d4f07d7914..04bb1e9c2026 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -25,7 +25,6 @@ * In case of Ether type filter it is treated as header without VLAN tag * and byte 12 and 13 is used to program a given Ether type instead */ -#define DUMMY_ETH_HDR_LEN 16 static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0, 0x2, 0, 0, 0, 0, 0, 0x81, 0, 0, 0}; @@ -2460,6 +2459,15 @@ static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi) } } +/** + * ice_fill_eth_hdr - helper to copy dummy_eth_hdr into supplied buffer + * @eth_hdr: pointer to buffer to populate + */ +void ice_fill_eth_hdr(u8 *eth_hdr) +{ + memcpy(eth_hdr, dummy_eth_header, DUMMY_ETH_HDR_LEN); +} + /** * ice_fill_sw_rule - Helper function to fill switch rule structure * @hw: pointer to the hardware structure diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h index 1f1888932d54..b63f642c2d2a 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.h +++ b/drivers/net/ethernet/intel/ice/ice_switch.h @@ -30,6 +30,8 @@ ICE_SW_RULE_RX_TX_HDR_SIZE((s), 0) #define ICE_SW_RULE_LG_ACT_SIZE(s, n) struct_size((s), act, (n)) +#define DUMMY_ETH_HDR_LEN 16 + /* VSI context structure for add/get/update/free operations */ struct ice_vsi_ctx { u16 vsi_num; @@ -403,6 +405,7 @@ u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle); int ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle); void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw); +void ice_fill_eth_hdr(u8 *eth_hdr); int ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz, From patchwork Thu Jun 8 18:06:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ertman, David M" X-Patchwork-Id: 13272664 X-Patchwork-Delegate: kuba@kernel.org Received: from 
From: Dave Ertman
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, daniel.machon@microchip.com
Subject: [PATCH iwl-next v3 06/10] ice: Flesh out implementation of support for SRIOV on bonded interface
Date: Thu, 8 Jun 2023 11:06:14 -0700
Message-Id: <20230608180618.574171-7-david.m.ertman@intel.com>
In-Reply-To: <20230608180618.574171-1-david.m.ertman@intel.com>
References: <20230608180618.574171-1-david.m.ertman@intel.com>

Add the functions that allow a VF created on the primary interface of a bond to fail over to another PF interface in the bond and continue to Tx and Rx. Add an ordered take-down path for the bonded interface.
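[ Editor's note: the node-move path added below follows a fixed
sequence: suspend the VF's Tx scheduling subtree, reconfigure its
queues onto the destination port, re-parent the VSI node in the
destination tree, then resume traffic. The outline below is a hedged
sketch of that flow only; struct my_lag and the my_*() helpers are
hypothetical stand-ins for the driver functions added in the diff.

#include <linux/types.h>

struct my_lag;	/* stand-in for struct ice_lag */
static void my_suspend_subtree(struct my_lag *lag, u16 vsi, u8 tc);
static int  my_reconfig_queues(struct my_lag *lag, u16 vsi, u8 tc,
			       u8 oldport, u8 newport);
static void my_move_sched_node(struct my_lag *lag, u16 vsi, u8 tc,
			       u8 newport);
static void my_resume_subtree(struct my_lag *lag, u16 vsi, u8 tc);

static void my_move_vf_tc(struct my_lag *lag, u8 oldport, u8 newport,
			  u16 vsi, u8 tc)
{
	/* 1) suspend the VF's Tx scheduling subtree for this TC */
	my_suspend_subtree(lag, vsi, tc);

	/* 2) reconfigure the TC's Tx queues onto the new port */
	if (my_reconfig_queues(lag, vsi, tc, oldport, newport))
		goto resume;

	/* 3) re-parent the VSI node in the destination port's tree */
	my_move_sched_node(lag, vsi, tc, newport);

resume:
	/* 4) always restart traffic, even if a step failed */
	my_resume_subtree(lag, vsi, tc);
}
]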
Signed-off-by: Dave Ertman Reviewed-by: Daniel Machon --- drivers/net/ethernet/intel/ice/ice_lag.c | 864 ++++++++++++++++++++++- 1 file changed, 854 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c index e66f1129027a..1ef9c849f79a 100644 --- a/drivers/net/ethernet/intel/ice/ice_lag.c +++ b/drivers/net/ethernet/intel/ice/ice_lag.c @@ -10,6 +10,11 @@ #define ICE_LAG_RES_SHARED BIT(14) #define ICE_LAG_RES_VALID BIT(15) +#define LACP_TRAIN_PKT_LEN 16 +static const u8 lacp_train_pkt[LACP_TRAIN_PKT_LEN] = { 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, + 0x88, 0x09, 0, 0 }; + #define ICE_RECIPE_LEN 64 static const u8 ice_dflt_vsi_rcp[ICE_RECIPE_LEN] = { 0x05, 0, 0, 0, 0x20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, @@ -57,6 +62,39 @@ static void ice_lag_set_backup(struct ice_lag *lag) lag->role = ICE_LAG_BACKUP; } +/** + * netif_is_same_ice - determine if netdev is on the same ice NIC as local PF + * @pf: local PF struct + * @netdev: netdev we are evaluating + */ +static bool netif_is_same_ice(struct ice_pf *pf, struct net_device *netdev) +{ + struct ice_netdev_priv *np; + struct ice_pf *test_pf; + struct ice_vsi *vsi; + + if (!netif_is_ice(netdev)) + return false; + + np = netdev_priv(netdev); + if (!np) + return false; + + vsi = np->vsi; + if (!vsi) + return false; + + test_pf = vsi->back; + if (!test_pf) + return false; + + if (pf->pdev->bus != test_pf->pdev->bus || + pf->pdev->slot != test_pf->pdev->slot) + return false; + + return true; +} + /** * ice_netdev_to_lag - return pointer to associated lag struct from netdev * @netdev: pointer to net_device struct to query @@ -80,6 +118,38 @@ static struct ice_lag *ice_netdev_to_lag(struct net_device *netdev) return vsi->back->lag; } +/** + * ice_lag_find_hw_by_lport - return an hw struct from bond members lport + * @lag: lag struct + * @lport: lport value to search for + */ +static struct ice_hw * +ice_lag_find_hw_by_lport(struct ice_lag *lag, u8 lport) +{ + struct ice_lag_netdev_list *entry; + struct net_device *tmp_netdev; + struct ice_netdev_priv *np; + struct list_head *tmp; + struct ice_hw *hw; + + list_for_each(tmp, lag->netdev_head) { + entry = list_entry(tmp, struct ice_lag_netdev_list, node); + tmp_netdev = entry->netdev; + if (!tmp_netdev || !netif_is_ice(tmp_netdev)) + continue; + + np = netdev_priv(tmp_netdev); + if (!np || !np->vsi) + continue; + + hw = &np->vsi->back->hw; + if (hw->port_info->lport == lport) + return hw; + } + + return NULL; +} + /** * ice_lag_find_primary - returns pointer to primary interfaces lag struct * @lag: local interfaces lag struct @@ -233,6 +303,44 @@ static void ice_display_lag_info(struct ice_lag *lag) upper, role, primary); } +/** + * ice_lag_qbuf_recfg - generate a buffer of queues for a reconfigure command + * @hw: HW struct that contains the queue contexts + * @qbuf: pointer to buffer to populate + * @vsi_num: index of the VSI in PF space + * @numq: number of queues to search for + * @tc: traffic class that contains the queues + * + * function returns the number of valid queues in buffer + */ +static u16 +ice_lag_qbuf_recfg(struct ice_hw *hw, struct ice_aqc_cfg_txqs_buf *qbuf, + u16 vsi_num, u16 numq, u8 tc) +{ + struct ice_q_ctx *q_ctx; + u16 qid, count = 0; + struct ice_pf *pf; + int i; + + pf = hw->back; + for (i = 0; i < numq; i++) { + q_ctx = ice_get_lan_q_ctx(hw, vsi_num, tc, i); + if (!q_ctx || q_ctx->q_handle == ICE_INVAL_Q_HANDLE) { + dev_dbg(ice_hw_to_dev(hw), "%s queue %d %s\n", __func__, + i, q_ctx ? 
"INVAL Q HANDLE" : "NO Q CONTEXT"); + continue; + } + + qid = pf->vsi[vsi_num]->txq_map[q_ctx->q_handle]; + qbuf->queue_info[count].q_handle = cpu_to_le16(qid); + qbuf->queue_info[count].tc = tc; + qbuf->queue_info[count].q_teid = cpu_to_le32(q_ctx->q_teid); + count++; + } + + return count; +} + /** * ice_lag_move_vf_node_tc - move scheduling nodes for one VF on one TC * @lag: lag info struct @@ -245,6 +353,167 @@ static void ice_lag_move_vf_node_tc(struct ice_lag *lag, u8 oldport, u8 newport, u16 vsi_num, u8 tc) { + u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 }; + struct ice_sched_node *n_prt, *tc_node, *aggnode; + u16 numq, valq, buf_size, num_moved, qbuf_size; + struct device *dev = ice_pf_to_dev(lag->pf); + struct ice_aqc_cfg_txqs_buf *qbuf; + struct ice_aqc_move_elem *buf; + struct ice_hw *new_hw = NULL; + struct ice_port_info *pi; + __le32 teid, parent_teid; + struct ice_vsi_ctx *ctx; + u8 aggl, vsil; + u32 tmp_teid; + int n; + + ctx = ice_get_vsi_ctx(&lag->pf->hw, vsi_num); + if (!ctx) { + dev_warn(dev, "Unable to locate VSI context for LAG failover\n"); + return; + } + + /* check to see if this VF is enabled on this TC */ + if (!ctx->sched.vsi_node[tc]) + return; + + /* locate HW struct for destination port */ + new_hw = ice_lag_find_hw_by_lport(lag, newport); + if (!new_hw) { + dev_warn(dev, "Unable to locate HW struct for LAG node destination\n"); + return; + } + + pi = new_hw->port_info; + + numq = ctx->num_lan_q_entries[tc]; + teid = ctx->sched.vsi_node[tc]->info.node_teid; + tmp_teid = le32_to_cpu(teid); + parent_teid = ctx->sched.vsi_node[tc]->info.parent_teid; + /* if no teid assigned or numq == 0, then this TC is not active */ + if (!tmp_teid || !numq) + return; + + /* suspend VSI subtree for Traffic Class "tc" on + * this VF's VSI + */ + if (ice_sched_suspend_resume_elems(&lag->pf->hw, 1, &tmp_teid, true)) + dev_dbg(dev, "Problem suspending traffic for LAG node move\n"); + + /* reconfigure all VF's queues on this Traffic Class + * to new port + */ + qbuf_size = struct_size(qbuf, queue_info, numq); + qbuf = kzalloc(qbuf_size, GFP_KERNEL); + if (!qbuf) { + dev_warn(dev, "Failure allocating memory for VF queue recfg buffer\n"); + goto resume_traffic; + } + + /* add the per queue info for the reconfigure command buffer */ + valq = ice_lag_qbuf_recfg(&lag->pf->hw, qbuf, vsi_num, numq, tc); + if (!valq) { + dev_dbg(dev, "No valid queues found for LAG failover\n"); + goto qbuf_none; + } + + if (ice_aq_cfg_lan_txq(&lag->pf->hw, qbuf, qbuf_size, valq, oldport, + newport, NULL)) { + dev_warn(dev, "Failure to configure queues for LAG failover\n"); + goto qbuf_err; + } + +qbuf_none: + kfree(qbuf); + + /* find new parent in destination port's tree for VF VSI node on this + * Traffic Class + */ + tc_node = ice_sched_get_tc_node(pi, tc); + if (!tc_node) { + dev_warn(dev, "Failure to find TC node in failover tree\n"); + goto resume_traffic; + } + + aggnode = ice_sched_get_agg_node(pi, tc_node, + ICE_DFLT_AGG_ID); + if (!aggnode) { + dev_warn(dev, "Failure to find aggregate node in failover tree\n"); + goto resume_traffic; + } + + aggl = ice_sched_get_agg_layer(new_hw); + vsil = ice_sched_get_vsi_layer(new_hw); + + for (n = aggl + 1; n < vsil; n++) + num_nodes[n] = 1; + + for (n = 0; n < aggnode->num_children; n++) { + n_prt = ice_sched_get_free_vsi_parent(new_hw, + aggnode->children[n], + num_nodes); + if (n_prt) + break; + } + + /* add parent if none were free */ + if (!n_prt) { + u16 num_nodes_added; + u32 first_teid; + int status; + + n_prt = aggnode; + for (n = aggl + 1; n < vsil; 
n++) { + status = ice_sched_add_nodes_to_layer(pi, tc_node, + n_prt, n, + num_nodes[n], + &first_teid, + &num_nodes_added); + if (status || num_nodes[n] != num_nodes_added) + goto resume_traffic; + + if (num_nodes_added) + n_prt = ice_sched_find_node_by_teid(tc_node, + first_teid); + else + n_prt = n_prt->children[0]; + if (!n_prt) { + dev_warn(dev, "Failure to add new parent for LAG node\n"); + goto resume_traffic; + } + } + } + + /* Move Vf's VSI node for this TC to newport's scheduler tree */ + buf_size = struct_size(buf, teid, 1); + buf = kzalloc(buf_size, GFP_KERNEL); + if (!buf) { + dev_warn(dev, "Failure to alloc memory for VF node failover\n"); + goto resume_traffic; + } + + buf->hdr.src_parent_teid = parent_teid; + buf->hdr.dest_parent_teid = n_prt->info.node_teid; + buf->hdr.num_elems = cpu_to_le16(1); + buf->hdr.mode = ICE_AQC_MOVE_ELEM_MODE_KEEP_OWN; + buf->teid[0] = teid; + + if (ice_aq_move_sched_elems(&lag->pf->hw, 1, buf, buf_size, &num_moved, + NULL)) + dev_warn(dev, "Failure to move VF nodes for failover\n"); + else + ice_sched_update_parent(n_prt, ctx->sched.vsi_node[tc]); + + kfree(buf); + goto resume_traffic; + +qbuf_err: + kfree(qbuf); + +resume_traffic: + /* restart traffic for VSI node */ + if (ice_sched_suspend_resume_elems(&lag->pf->hw, 1, &tmp_teid, false)) + dev_dbg(dev, "Problem restarting traffic for LAG node move\n"); } /** @@ -274,6 +543,66 @@ ice_lag_move_single_vf_nodes(struct ice_lag *lag, u8 oldport, u8 newport, */ void ice_lag_move_new_vf_nodes(struct ice_vf *vf) { + struct ice_lag_netdev_list ndlist; + struct list_head *tmp, *n; + u8 pri_port, act_port; + struct ice_lag *lag; + struct ice_vsi *vsi; + struct ice_pf *pf; + + vsi = ice_get_vf_vsi(vf); + + if (WARN_ON(!vsi)) + return; + + if (WARN_ON(vsi->type != ICE_VSI_VF)) + return; + + pf = vf->pf; + lag = pf->lag; + + mutex_lock(&pf->lag_mutex); + if (!lag->bonded) + goto new_vf_unlock; + + pri_port = pf->hw.port_info->lport; + act_port = lag->active_port; + + if (lag->upper_netdev) { + struct ice_lag_netdev_list *nl; + struct net_device *tmp_nd; + + INIT_LIST_HEAD(&ndlist.node); + rcu_read_lock(); + for_each_netdev_in_bond_rcu(lag->upper_netdev, tmp_nd) { + nl = kzalloc(sizeof(*nl), GFP_KERNEL); + if (!nl) + break; + + nl->netdev = tmp_nd; + list_add(&nl->node, &ndlist.node); + } + rcu_read_unlock(); + } + + lag->netdev_head = &ndlist.node; + + if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG) && + lag->bonded && lag->primary && pri_port != act_port && + !list_empty(lag->netdev_head)) + ice_lag_move_single_vf_nodes(lag, pri_port, act_port, vsi->idx); + + list_for_each_safe(tmp, n, &ndlist.node) { + struct ice_lag_netdev_list *entry; + + entry = list_entry(tmp, struct ice_lag_netdev_list, node); + list_del(&entry->node); + kfree(entry); + } + lag->netdev_head = NULL; + +new_vf_unlock: + mutex_unlock(&pf->lag_mutex); } /** @@ -308,6 +637,50 @@ static void ice_lag_move_vf_nodes(struct ice_lag *lag, u8 oldport, u8 newport) static void ice_lag_cfg_cp_fltr(struct ice_lag *lag, bool add) { + struct ice_sw_rule_lkup_rx_tx *s_rule = NULL; + struct ice_vsi *vsi; + u16 buf_len, opc; + + vsi = lag->pf->vsi[0]; + + buf_len = ICE_SW_RULE_RX_TX_HDR_SIZE(s_rule, + ICE_LAG_SRIOV_TRAIN_PKT_LEN); + s_rule = kzalloc(buf_len, GFP_KERNEL); + if (!s_rule) { + netdev_warn(lag->netdev, "-ENOMEM error configuring CP filter\n"); + return; + } + + if (add) { + s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_RX); + s_rule->recipe_id = cpu_to_le16(ICE_LAG_SRIOV_CP_RECIPE); + s_rule->src = 
cpu_to_le16(vsi->port_info->lport); + s_rule->act = cpu_to_le32(ICE_FWD_TO_VSI | + ICE_SINGLE_ACT_LAN_ENABLE | + ICE_SINGLE_ACT_VALID_BIT | + ((vsi->vsi_num << + ICE_SINGLE_ACT_VSI_ID_S) & + ICE_SINGLE_ACT_VSI_ID_M)); + s_rule->hdr_len = cpu_to_le16(ICE_LAG_SRIOV_TRAIN_PKT_LEN); + memcpy(s_rule->hdr_data, lacp_train_pkt, LACP_TRAIN_PKT_LEN); + opc = ice_aqc_opc_add_sw_rules; + } else { + opc = ice_aqc_opc_remove_sw_rules; + s_rule->index = cpu_to_le16(lag->cp_rule_idx); + } + if (ice_aq_sw_rules(&lag->pf->hw, s_rule, buf_len, 1, opc, NULL)) { + netdev_warn(lag->netdev, "Error %s CP rule for fail-over\n", + add ? "ADDING" : "REMOVING"); + goto cp_free; + } + + if (add) + lag->cp_rule_idx = le16_to_cpu(s_rule->index); + else + lag->cp_rule_idx = 0; + +cp_free: + kfree(s_rule); } /** @@ -362,6 +735,155 @@ static void ice_lag_reclaim_vf_tc(struct ice_lag *lag, struct ice_hw *src_hw, u16 vsi_num, u8 tc) { + u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 }; + struct ice_sched_node *n_prt, *tc_node, *aggnode; + u16 numq, valq, buf_size, num_moved, qbuf_size; + struct device *dev = ice_pf_to_dev(lag->pf); + struct ice_aqc_cfg_txqs_buf *qbuf; + struct ice_aqc_move_elem *buf; + struct ice_port_info *pi; + __le32 teid, parent_teid; + struct ice_vsi_ctx *ctx; + struct ice_hw *hw; + u8 aggl, vsil; + u32 tmp_teid; + int n; + + hw = &lag->pf->hw; + ctx = ice_get_vsi_ctx(hw, vsi_num); + if (!ctx) { + dev_warn(dev, "Unable to locate VSI context for LAG reclaim\n"); + return; + } + + /* check to see if this VF is enabled on this TC */ + if (!ctx->sched.vsi_node[tc]) + return; + + numq = ctx->num_lan_q_entries[tc]; + teid = ctx->sched.vsi_node[tc]->info.node_teid; + tmp_teid = le32_to_cpu(teid); + parent_teid = ctx->sched.vsi_node[tc]->info.parent_teid; + + /* if !teid or !numq, then this TC is not active */ + if (!tmp_teid || !numq) + return; + + /* suspend traffic */ + if (ice_sched_suspend_resume_elems(hw, 1, &tmp_teid, true)) + dev_dbg(dev, "Problem suspending traffic for LAG node move\n"); + + /* reconfig queues for new port */ + qbuf_size = struct_size(qbuf, queue_info, numq); + qbuf = kzalloc(qbuf_size, GFP_KERNEL); + if (!qbuf) { + dev_warn(dev, "Failure allocating memory for VF queue recfg buffer\n"); + goto resume_reclaim; + } + + /* add the per queue info for the reconfigure command buffer */ + valq = ice_lag_qbuf_recfg(hw, qbuf, vsi_num, numq, tc); + if (!valq) { + dev_dbg(dev, "No valid queues found for LAG reclaim\n"); + goto reclaim_none; + } + + if (ice_aq_cfg_lan_txq(hw, qbuf, qbuf_size, numq, + src_hw->port_info->lport, hw->port_info->lport, + NULL)) { + dev_warn(dev, "Failure to configure queues for LAG failover\n"); + goto reclaim_qerr; + } + +reclaim_none: + kfree(qbuf); + + /* find parent in primary tree */ + pi = hw->port_info; + tc_node = ice_sched_get_tc_node(pi, tc); + if (!tc_node) { + dev_warn(dev, "Failure to find TC node in failover tree\n"); + goto resume_reclaim; + } + + aggnode = ice_sched_get_agg_node(pi, tc_node, ICE_DFLT_AGG_ID); + if (!aggnode) { + dev_warn(dev, "Failure to find aggreagte node in failover tree\n"); + goto resume_reclaim; + } + + aggl = ice_sched_get_agg_layer(hw); + vsil = ice_sched_get_vsi_layer(hw); + + for (n = aggl + 1; n < vsil; n++) + num_nodes[n] = 1; + + for (n = 0; n < aggnode->num_children; n++) { + n_prt = ice_sched_get_free_vsi_parent(hw, aggnode->children[n], + num_nodes); + if (n_prt) + break; + } + + /* if no free parent found - add one */ + if (!n_prt) { + u16 num_nodes_added; + u32 first_teid; + int status; + + n_prt = aggnode; + for (n 
= aggl + 1; n < vsil; n++) { + status = ice_sched_add_nodes_to_layer(pi, tc_node, + n_prt, n, + num_nodes[n], + &first_teid, + &num_nodes_added); + if (status || num_nodes[n] != num_nodes_added) + goto resume_reclaim; + + if (num_nodes_added) + n_prt = ice_sched_find_node_by_teid(tc_node, + first_teid); + else + n_prt = n_prt->children[0]; + + if (!n_prt) { + dev_warn(dev, "Failure to add new parent for LAG node\n"); + goto resume_reclaim; + } + } + } + + /* Move node to new parent */ + buf_size = struct_size(buf, teid, 1); + buf = kzalloc(buf_size, GFP_KERNEL); + if (!buf) { + dev_warn(dev, "Failure to alloc memory for VF node failover\n"); + goto resume_reclaim; + } + + buf->hdr.src_parent_teid = parent_teid; + buf->hdr.dest_parent_teid = n_prt->info.node_teid; + buf->hdr.num_elems = cpu_to_le16(1); + buf->hdr.mode = ICE_AQC_MOVE_ELEM_MODE_KEEP_OWN; + buf->teid[0] = teid; + + if (ice_aq_move_sched_elems(&lag->pf->hw, 1, buf, buf_size, &num_moved, + NULL)) + dev_warn(dev, "Failure to move VF nodes for LAG reclaim\n"); + else + ice_sched_update_parent(n_prt, ctx->sched.vsi_node[tc]); + + kfree(buf); + goto resume_reclaim; + +reclaim_qerr: + kfree(qbuf); + +resume_reclaim: + /* restart traffic */ + if (ice_sched_suspend_resume_elems(hw, 1, &tmp_teid, false)) + dev_warn(dev, "Problem restarting traffic for LAG node reclaim\n"); } /** @@ -479,6 +1001,65 @@ static void ice_lag_set_swid(u16 primary_swid, struct ice_lag *local_lag, bool link) { + struct ice_aqc_alloc_free_res_elem *buf; + struct ice_aqc_set_port_params *cmd; + struct ice_aq_desc desc; + u16 buf_len, swid; + int status; + + buf_len = struct_size(buf, elem, 1); + buf = kzalloc(buf_len, GFP_KERNEL); + if (!buf) { + dev_err(ice_pf_to_dev(local_lag->pf), "-ENOMEM error setting SWID\n"); + return; + } + + buf->num_elems = cpu_to_le16(1); + buf->res_type = cpu_to_le16(ICE_AQC_RES_TYPE_SWID); + /* if unlinnking need to free the shared resource */ + if (!link && local_lag->bond_swid) { + buf->elem[0].e.sw_resp = cpu_to_le16(local_lag->bond_swid); + status = ice_aq_alloc_free_res(&local_lag->pf->hw, 1, buf, + buf_len, ice_aqc_opc_free_res, + NULL); + if (status) + dev_err(ice_pf_to_dev(local_lag->pf), "Error freeing SWID during LAG unlink\n"); + local_lag->bond_swid = 0; + } + + if (link) { + buf->res_type |= cpu_to_le16(ICE_LAG_RES_SHARED | + ICE_LAG_RES_VALID); + /* store the primary's SWID in case it leaves bond first */ + local_lag->bond_swid = primary_swid; + buf->elem[0].e.sw_resp = cpu_to_le16(local_lag->bond_swid); + } else { + buf->elem[0].e.sw_resp = + cpu_to_le16(local_lag->pf->hw.port_info->sw_id); + } + + status = ice_aq_alloc_free_res(&local_lag->pf->hw, 1, buf, buf_len, + ice_aqc_opc_alloc_res, NULL); + if (status) + dev_err(ice_pf_to_dev(local_lag->pf), "Error subscribing to SWID 0x%04X\n", + local_lag->bond_swid); + + kfree(buf); + + /* Configure port param SWID to correct value */ + if (link) + swid = primary_swid; + else + swid = local_lag->pf->hw.port_info->sw_id; + + cmd = &desc.params.set_port_params; + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_params); + + cmd->swid = cpu_to_le16(ICE_AQC_PORT_SWID_VALID | swid); + status = ice_aq_send_cmd(&local_lag->pf->hw, &desc, NULL, 0, NULL); + if (status) + dev_err(ice_pf_to_dev(local_lag->pf), "Error setting SWID in port params %d\n", + status); } /** @@ -507,6 +1088,38 @@ static void ice_lag_primary_swid(struct ice_lag *lag, bool link) */ static void ice_lag_add_prune_list(struct ice_lag *lag, struct ice_pf *event_pf) { + u16 num_vsi, rule_buf_sz, 
vsi_list_id, event_vsi_num, prim_vsi_idx; + struct ice_sw_rule_vsi_list *s_rule = NULL; + struct device *dev; + + num_vsi = 1; + + dev = ice_pf_to_dev(lag->pf); + event_vsi_num = event_pf->vsi[0]->vsi_num; + prim_vsi_idx = lag->pf->vsi[0]->idx; + + if (!ice_find_vsi_list_entry(&lag->pf->hw, ICE_SW_LKUP_VLAN, + prim_vsi_idx, &vsi_list_id)) { + dev_warn(dev, "Could not locate prune list when setting up SRIOV LAG\n"); + return; + } + + rule_buf_sz = (u16)ICE_SW_RULE_VSI_LIST_SIZE(s_rule, num_vsi); + s_rule = kzalloc(rule_buf_sz, GFP_KERNEL); + if (!s_rule) { + dev_warn(dev, "Error allocating space for prune list when configuring SRIOV LAG\n"); + return; + } + + s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_PRUNE_LIST_SET); + s_rule->index = cpu_to_le16(vsi_list_id); + s_rule->number_vsi = cpu_to_le16(num_vsi); + s_rule->vsi[0] = cpu_to_le16(event_vsi_num); + + if (ice_aq_sw_rules(&event_pf->hw, s_rule, rule_buf_sz, 1, + ice_aqc_opc_update_sw_rules, NULL)) + dev_warn(dev, "Error adding VSI prune list\n"); + kfree(s_rule); } /** @@ -516,6 +1129,39 @@ static void ice_lag_add_prune_list(struct ice_lag *lag, struct ice_pf *event_pf) */ static void ice_lag_del_prune_list(struct ice_lag *lag, struct ice_pf *event_pf) { + u16 num_vsi, vsi_num, vsi_idx, rule_buf_sz, vsi_list_id; + struct ice_sw_rule_vsi_list *s_rule = NULL; + struct device *dev; + + num_vsi = 1; + + dev = ice_pf_to_dev(lag->pf); + vsi_num = event_pf->vsi[0]->vsi_num; + vsi_idx = lag->pf->vsi[0]->idx; + + if (!ice_find_vsi_list_entry(&lag->pf->hw, ICE_SW_LKUP_VLAN, + vsi_idx, &vsi_list_id)) { + dev_warn(dev, "Could not locate prune list when unwinding SRIOV LAG\n"); + return; + } + + rule_buf_sz = (u16)ICE_SW_RULE_VSI_LIST_SIZE(s_rule, num_vsi); + s_rule = kzalloc(rule_buf_sz, GFP_KERNEL); + if (!s_rule) { + dev_warn(dev, "Error allocating prune list when unwinding SRIOV LAG\n"); + return; + } + + s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR); + s_rule->index = cpu_to_le16(vsi_list_id); + s_rule->number_vsi = cpu_to_le16(num_vsi); + s_rule->vsi[0] = cpu_to_le16(vsi_num); + + if (ice_aq_sw_rules(&event_pf->hw, (struct ice_aqc_sw_rules *)s_rule, + rule_buf_sz, 1, ice_aqc_opc_update_sw_rules, NULL)) + dev_warn(dev, "Error clearing VSI prune list\n"); + + kfree(s_rule); } /** @@ -542,8 +1188,6 @@ static void ice_lag_check_nvm_support(struct ice_pf *pf) * ice_lag_changeupper_event - handle LAG changeupper event * @lag: LAG info struct * @ptr: opaque pointer data - * - * ptr is to be cast into netdev_notifier_changeupper_info */ static void ice_lag_changeupper_event(struct ice_lag *lag, void *ptr) { @@ -603,6 +1247,40 @@ static void ice_lag_changeupper_event(struct ice_lag *lag, void *ptr) */ static void ice_lag_monitor_link(struct ice_lag *lag, void *ptr) { + struct netdev_notifier_changeupper_info *info; + struct ice_hw *prim_hw, *active_hw; + struct net_device *event_netdev; + struct ice_pf *pf; + u8 prim_port; + + if (!lag->primary) + return; + + event_netdev = netdev_notifier_info_to_dev(ptr); + if (!netif_is_same_ice(lag->pf, event_netdev)) + return; + + pf = lag->pf; + prim_hw = &pf->hw; + prim_port = prim_hw->port_info->lport; + + info = (struct netdev_notifier_changeupper_info *)ptr; + if (info->upper_dev != lag->upper_netdev) + return; + + if (!info->linking) { + /* Since there are only two interfaces allowed in SRIOV+LAG, if + * one port is leaving, then nodes need to be on primary + * interface. 
+ */ + if (prim_port != lag->active_port && + lag->active_port != ICE_LAG_INVALID_PORT) { + active_hw = ice_lag_find_hw_by_lport(lag, + lag->active_port); + ice_lag_reclaim_vf_nodes(lag, active_hw); + lag->active_port = ICE_LAG_INVALID_PORT; + } + } } /** @@ -615,6 +1293,67 @@ static void ice_lag_monitor_link(struct ice_lag *lag, void *ptr) */ static void ice_lag_monitor_active(struct ice_lag *lag, void *ptr) { + struct net_device *event_netdev, *event_upper; + struct netdev_notifier_bonding_info *info; + struct netdev_bonding_info *bonding_info; + struct ice_netdev_priv *event_np; + struct ice_pf *pf, *event_pf; + u8 prim_port, event_port; + + if (!lag->primary) + return; + + pf = lag->pf; + if (!pf) + return; + + event_netdev = netdev_notifier_info_to_dev(ptr); + rcu_read_lock(); + event_upper = netdev_master_upper_dev_get_rcu(event_netdev); + rcu_read_unlock(); + if (!netif_is_ice(event_netdev) || event_upper != lag->upper_netdev) + return; + + event_np = netdev_priv(event_netdev); + event_pf = event_np->vsi->back; + event_port = event_pf->hw.port_info->lport; + prim_port = pf->hw.port_info->lport; + + info = (struct netdev_notifier_bonding_info *)ptr; + bonding_info = &info->bonding_info; + + if (!bonding_info->slave.state) { + /* if no port is currently active, then nodes and filters exist + * on primary port, check if we need to move them + */ + if (lag->active_port == ICE_LAG_INVALID_PORT) { + if (event_port != prim_port) + ice_lag_move_vf_nodes(lag, prim_port, + event_port); + lag->active_port = event_port; + return; + } + + /* active port is already set and is current event port */ + if (lag->active_port == event_port) + return; + /* new active port */ + ice_lag_move_vf_nodes(lag, lag->active_port, event_port); + lag->active_port = event_port; + } else { + /* port not set as currently active (e.g. 
new active port + * has already claimed the nodes and filters + */ + if (lag->active_port != event_port) + return; + /* This is the case when neither port is active (both link down) + * Link down on the bond - set active port to invalid and move + * nodes and filters back to primary if not already there + */ + if (event_port != prim_port) + ice_lag_move_vf_nodes(lag, event_port, prim_port); + lag->active_port = ICE_LAG_INVALID_PORT; + } } /** @@ -625,6 +1364,73 @@ static void ice_lag_monitor_active(struct ice_lag *lag, void *ptr) static bool ice_lag_chk_comp(struct ice_lag *lag, void *ptr) { + struct net_device *event_netdev, *event_upper; + struct netdev_notifier_bonding_info *info; + struct netdev_bonding_info *bonding_info; + struct list_head *tmp; + int count = 0; + + if (!lag->primary) + return true; + + event_netdev = netdev_notifier_info_to_dev(ptr); + rcu_read_lock(); + event_upper = netdev_master_upper_dev_get_rcu(event_netdev); + rcu_read_unlock(); + if (event_upper != lag->upper_netdev) + return true; + + info = (struct netdev_notifier_bonding_info *)ptr; + bonding_info = &info->bonding_info; + lag->bond_mode = bonding_info->master.bond_mode; + if (lag->bond_mode != BOND_MODE_ACTIVEBACKUP) { + netdev_info(lag->netdev, "Bond Mode not ACTIVE-BACKUP\n"); + return false; + } + + list_for_each(tmp, lag->netdev_head) { +#if !defined(NO_DCB_SUPPORT) || defined(ADQ_SUPPORT) + struct ice_dcbx_cfg *dcb_cfg, *peer_dcb_cfg; +#endif /* !NO_DCB_SUPPORT || ADQ_SUPPORT */ + struct ice_lag_netdev_list *entry; + struct ice_netdev_priv *peer_np; + struct net_device *peer_netdev; + struct ice_vsi *vsi, *peer_vsi; + + entry = list_entry(tmp, struct ice_lag_netdev_list, node); + peer_netdev = entry->netdev; + if (!netif_is_ice(peer_netdev)) { + netdev_info(lag->netdev, "Found non-ice netdev in LAG\n"); + return false; + } + + count++; + if (count > 2) { + netdev_info(lag->netdev, "Found more than two netdevs in LAG\n"); + return false; + } + + peer_np = netdev_priv(peer_netdev); + vsi = ice_get_main_vsi(lag->pf); + peer_vsi = peer_np->vsi; + if (lag->pf->pdev->bus != peer_vsi->back->pdev->bus || + lag->pf->pdev->slot != peer_vsi->back->pdev->slot) { + netdev_info(lag->netdev, "Found netdev on different device in LAG\n"); + return false; + } + +#if !defined(NO_DCB_SUPPORT) || defined(ADQ_SUPPORT) + dcb_cfg = &vsi->port_info->qos_cfg.local_dcbx_cfg; + peer_dcb_cfg = &peer_vsi->port_info->qos_cfg.local_dcbx_cfg; + if (memcmp(dcb_cfg, peer_dcb_cfg, + sizeof(struct ice_dcbx_cfg))) { + netdev_info(lag->netdev, "Found netdev with different DCB config in LAG\n"); + return false; + } + +#endif /* !NO_DCB_SUPPORT || ADQ_SUPPORT */ + } + return true; } @@ -835,17 +1641,40 @@ static void ice_unregister_lag_handler(struct ice_lag *lag) /** * ice_create_lag_recipe * @hw: pointer to HW struct + * @rid: pinter to u16 to pass back recipe index * @base_recipe: recipe to base the new recipe on * @prio: priority for new recipe * * function returns 0 on error */ -static u16 ice_create_lag_recipe(struct ice_hw *hw, const u8 *base_recipe, - u8 prio) +static int ice_create_lag_recipe(struct ice_hw *hw, u16 *rid, + const u8 *base_recipe, u8 prio) { - u16 rid = 0; + struct ice_aqc_recipe_data_elem *new_rcp; + int err; - return rid; + err = ice_alloc_recipe(hw, rid); + if (err) + return err; + + new_rcp = kzalloc(ICE_RECIPE_LEN * ICE_MAX_NUM_RECIPES, GFP_KERNEL); + if (!new_rcp) + return -ENOMEM; + + memcpy(new_rcp, base_recipe, ICE_RECIPE_LEN); + new_rcp->content.act_ctrl_fwd_priority = prio; + new_rcp->content.rid = *rid | 
ICE_AQ_RECIPE_ID_IS_ROOT; + new_rcp->recipe_indx = *rid; + bitmap_zero((unsigned long *)new_rcp->recipe_bitmap, + ICE_MAX_NUM_RECIPES); + set_bit(*rid, (unsigned long *)new_rcp->recipe_bitmap); + + err = ice_aq_add_recipe(hw, new_rcp, 1, NULL); + if (err) + *rid = 0; + + kfree(new_rcp); + return err; } /** @@ -860,7 +1689,8 @@ int ice_init_lag(struct ice_pf *pf) struct device *dev = ice_pf_to_dev(pf); struct ice_lag *lag; struct ice_vsi *vsi; - int err; + u64 recipe_bits = 0; + int n, err; ice_lag_check_nvm_support(pf); @@ -879,6 +1709,7 @@ int ice_init_lag(struct ice_pf *pf) lag->pf = pf; lag->netdev = vsi->netdev; lag->role = ICE_LAG_NONE; + lag->active_port = ICE_LAG_INVALID_PORT; lag->bonded = false; lag->upper_netdev = NULL; lag->notif_block.notifier_call = NULL; @@ -889,10 +1720,23 @@ int ice_init_lag(struct ice_pf *pf) goto lag_error; } - lag->pf_recipe = ice_create_lag_recipe(&pf->hw, ice_dflt_vsi_rcp, 1); - if (!lag->pf_recipe) { - err = -EINVAL; + err = ice_create_lag_recipe(&pf->hw, &lag->pf_recipe, ice_dflt_vsi_rcp, + 1); + if (err) goto lag_error; + + /* associate recipes to profiles */ + for (n = 0; n < ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER; n++) { + err = ice_aq_get_recipe_to_profile(&pf->hw, n, + (u8 *)&recipe_bits, NULL); + if (err) + continue; + + if (recipe_bits & BIT(ICE_SW_LKUP_DFLT)) { + recipe_bits |= BIT(lag->pf_recipe); + ice_aq_map_recipe_to_profile(&pf->hw, n, + (u8 *)&recipe_bits, NULL); + } } ice_display_lag_info(lag); From patchwork Thu Jun 8 18:06:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ertman, David M" X-Patchwork-Id: 13272660 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 32AB41DCCC for ; Thu, 8 Jun 2023 18:05:09 +0000 (UTC) Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0D703198C for ; Thu, 8 Jun 2023 11:05:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686247507; x=1717783507; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=lM9SwY5KOSOermZaDJOkhj5mjxN0pcbQaCxUCTCPM+M=; b=nC4GrFJ5kv0U3PBCXFAe1SIcbW7APs5WdVLihhEl4px982mZb/vBXir7 T7hh7YqBKetd6Q/nk1+NQQQc6kRmu1iTvv6Y2D8YKeqs1x15xtRK6sSlO 4dttdBd0vPeEm1r+Td/QcdJtR90JWRKnclovWlyDK4IYG3dyhPQCDXecF OU+ecv4G2wAXeue1OnWDM8xKRYhn+S7xnT/lf8h+g1WfvxzhbmHUIOXRN LWsD2iMc2Cm6EafzSU5gtlKMTFi/jCoycACqIU3sLwQBoPjjP9CRkvTnd og37GGfH3nDVoB8WflqTBEHgAOj7/85k4koNb4/aVmDfp23JOquKkukb2 Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10735"; a="385738739" X-IronPort-AV: E=Sophos;i="6.00,227,1681196400"; d="scan'208";a="385738739" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 08 Jun 2023 11:04:38 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10735"; a="775187932" X-IronPort-AV: E=Sophos;i="6.00,227,1681196400"; d="scan'208";a="775187932" Received: from dmert-dev.jf.intel.com ([10.166.241.14]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 08 Jun 2023 11:04:38 -0700 From: Dave Ertman To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, daniel.machon@microchip.com Subject: [PATCH iwl-next v3 07/10] ice: 
support non-standard teardown of bond interface
Date: Thu, 8 Jun 2023 11:06:15 -0700
Message-Id: <20230608180618.574171-8-david.m.ertman@intel.com>
In-Reply-To: <20230608180618.574171-1-david.m.ertman@intel.com>
References: <20230608180618.574171-1-david.m.ertman@intel.com>

Add code to support PF driver removal (NETDEV_UNREGISTER) events, both when the bond has the primary interface as active and when it has failed over to the secondary interface.

Reviewed-by: Daniel Machon
Signed-off-by: Dave Ertman
---
 drivers/net/ethernet/intel/ice/ice_lag.c | 47 ++++++++++++++++++++----
 1 file changed, 40 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
index 1ef9c849f79a..bd64f22631d6 100644
--- a/drivers/net/ethernet/intel/ice/ice_lag.c
+++ b/drivers/net/ethernet/intel/ice/ice_lag.c
@@ -1226,15 +1226,16 @@ static void ice_lag_changeupper_event(struct ice_lag *lag, void *ptr)
 		if (!primary_lag && lag->primary)
 			primary_lag = lag;
 
-		if (primary_lag) {
-			if (!lag->primary) {
-				ice_lag_set_swid(0, lag, false);
-			} else {
+		if (!lag->primary) {
+			ice_lag_set_swid(0, lag, false);
+		} else {
+			if (primary_lag && lag->primary) {
 				ice_lag_primary_swid(lag, false);
 				ice_lag_del_prune_list(primary_lag, lag->pf);
 			}
-			ice_lag_cfg_cp_fltr(lag, false);
 		}
+		/* remove filter for control packets */
+		ice_lag_cfg_cp_fltr(lag, false);
 	}
 }
 
@@ -1442,6 +1443,38 @@ ice_lag_chk_comp(struct ice_lag *lag, void *ptr)
 static void
 ice_lag_unregister(struct ice_lag *lag, struct net_device *event_netdev)
 {
+	struct ice_netdev_priv *np;
+	struct ice_pf *event_pf;
+	struct ice_lag *p_lag;
+
+	p_lag = ice_lag_find_primary(lag);
+	np = netdev_priv(event_netdev);
+	event_pf = np->vsi->back;
+
+	if (p_lag) {
+		if (p_lag->active_port != p_lag->pf->hw.port_info->lport &&
+		    p_lag->active_port != ICE_LAG_INVALID_PORT) {
+			struct ice_hw *active_hw;
+
+			active_hw = ice_lag_find_hw_by_lport(lag,
+							     p_lag->active_port);
+			if (active_hw)
+				ice_lag_reclaim_vf_nodes(p_lag, active_hw);
+			lag->active_port = ICE_LAG_INVALID_PORT;
+		}
+	}
+
+	/* primary processing for primary */
+	if (lag->primary && lag->netdev == event_netdev)
+		ice_lag_primary_swid(lag, false);
+
+	/* primary processing for secondary */
+	if (lag->primary && lag->netdev != event_netdev)
+		ice_lag_del_prune_list(lag, event_pf);
+
+	/* secondary processing for secondary */
+	if (!lag->primary && lag->netdev == event_netdev)
+		ice_lag_set_swid(0, lag, false);
 }
 
 /**
@@ -1487,8 +1520,8 @@ static void ice_lag_process_event(struct work_struct *work)
 	case NETDEV_UNREGISTER:
 		if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) {
 			netdev = lag_work->info.bonding_info.info.dev;
-			if (netdev == lag_work->lag->netdev &&
-			    lag_work->lag->bonded)
+			if ((netdev == lag_work->lag->netdev ||
+			     lag_work->lag->primary) && lag_work->lag->bonded)
 				ice_lag_unregister(lag_work->lag, netdev);
 		}
 		break;

From patchwork Thu Jun 8 18:06:16 2023
From: Dave Ertman
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, daniel.machon@microchip.com
Subject: [PATCH iwl-next v3 08/10] ice: enforce interface eligibility and add messaging for SRIOV LAG
Date: Thu, 8 Jun 2023 11:06:16 -0700
Message-Id: <20230608180618.574171-9-david.m.ertman@intel.com>
In-Reply-To: <20230608180618.574171-1-david.m.ertman@intel.com>
References: <20230608180618.574171-1-david.m.ertman@intel.com>

Implement checks on which interfaces are eligible to support SRIOV VFs while a member of an aggregate interface, and implement an unwind path for interfaces that become ineligible. Checks for the SRIOV LAG feature bit wrap most of the functional code that manipulates resources for this feature; utilize this bit to track compliant aggregates. Also flag any new entries into the aggregate as not supporting SRIOV LAG for the time they are in the non-compliant aggregate. Once an aggregate has been flagged as non-compliant, only unpopulating and re-populating it will restore SRIOV LAG functionality.
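[ Editor's note: the compliance tracking described above leans on the
driver's per-PF feature bitmap (ice_is_feature_supported() and
ice_clear_feature_support() in the diff below). A minimal sketch of
that gating pattern, using hypothetical my_* names rather than the ice
definitions:

#include <linux/types.h>
#include <linux/bitops.h>

enum my_feature { MY_F_SRIOV_LAG, MY_F_MAX };

struct my_pf {
	DECLARE_BITMAP(features, MY_F_MAX);
};

static bool my_feature_supported(struct my_pf *pf, enum my_feature f)
{
	return test_bit(f, pf->features);
}

static void my_clear_feature(struct my_pf *pf, enum my_feature f)
{
	/* once cleared, the aggregate stays flagged non-compliant until
	 * it is torn down and rebuilt
	 */
	clear_bit(f, pf->features);
}
]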
Reviewed-by: Daniel Machon Signed-off-by: Dave Ertman --- drivers/net/ethernet/intel/ice/ice_lag.c | 88 ++++++++++++++++++++++-- 1 file changed, 83 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c index bd64f22631d6..6c6e5f4fe12a 100644 --- a/drivers/net/ethernet/intel/ice/ice_lag.c +++ b/drivers/net/ethernet/intel/ice/ice_lag.c @@ -924,6 +924,7 @@ static void ice_lag_link(struct ice_lag *lag) lag->bonded = true; lag->role = ICE_LAG_UNSET; + netdev_info(lag->netdev, "Shared SR-IOV resources in bond are active\n"); } /** @@ -1369,6 +1370,7 @@ ice_lag_chk_comp(struct ice_lag *lag, void *ptr) struct netdev_notifier_bonding_info *info; struct netdev_bonding_info *bonding_info; struct list_head *tmp; + struct device *dev; int count = 0; if (!lag->primary) @@ -1381,11 +1383,21 @@ ice_lag_chk_comp(struct ice_lag *lag, void *ptr) if (event_upper != lag->upper_netdev) return true; + dev = ice_pf_to_dev(lag->pf); + + /* only supporting switchdev mode for SRIOV VF LAG. + * primary interface has to be in switchdev mode + */ + if (!ice_is_switchdev_running(lag->pf)) { + dev_info(dev, "Primary interface not in switchdev mode - VF LAG disabled\n"); + return false; + } + info = (struct netdev_notifier_bonding_info *)ptr; bonding_info = &info->bonding_info; lag->bond_mode = bonding_info->master.bond_mode; if (lag->bond_mode != BOND_MODE_ACTIVEBACKUP) { - netdev_info(lag->netdev, "Bond Mode not ACTIVE-BACKUP\n"); + dev_info(dev, "Bond Mode not ACTIVE-BACKUP - VF LAG disabled\n"); return false; } @@ -1397,17 +1409,19 @@ ice_lag_chk_comp(struct ice_lag *lag, void *ptr) struct ice_netdev_priv *peer_np; struct net_device *peer_netdev; struct ice_vsi *vsi, *peer_vsi; + struct ice_pf *peer_pf; entry = list_entry(tmp, struct ice_lag_netdev_list, node); peer_netdev = entry->netdev; if (!netif_is_ice(peer_netdev)) { - netdev_info(lag->netdev, "Found non-ice netdev in LAG\n"); + dev_info(dev, "Found %s non-ice netdev in LAG - VF LAG disabled\n", + netdev_name(peer_netdev)); return false; } count++; if (count > 2) { - netdev_info(lag->netdev, "Found more than two netdevs in LAG\n"); + dev_info(dev, "Found more than two netdevs in LAG - VF LAG disabled\n"); return false; } @@ -1416,7 +1430,8 @@ ice_lag_chk_comp(struct ice_lag *lag, void *ptr) peer_vsi = peer_np->vsi; if (lag->pf->pdev->bus != peer_vsi->back->pdev->bus || lag->pf->pdev->slot != peer_vsi->back->pdev->slot) { - netdev_info(lag->netdev, "Found netdev on different device in LAG\n"); + dev_info(dev, "Found %s on different device in LAG - VF LAG disabled\n", + netdev_name(peer_netdev)); return false; } @@ -1425,11 +1440,18 @@ ice_lag_chk_comp(struct ice_lag *lag, void *ptr) peer_dcb_cfg = &peer_vsi->port_info->qos_cfg.local_dcbx_cfg; if (memcmp(dcb_cfg, peer_dcb_cfg, sizeof(struct ice_dcbx_cfg))) { - netdev_info(lag->netdev, "Found netdev with different DCB config in LAG\n"); + dev_info(dev, "Found %s with different DCB in LAG - VF LAG disabled\n", + netdev_name(peer_netdev)); return false; } #endif /* !NO_DCB_SUPPORT || ADQ_SUPPORT */ + peer_pf = peer_vsi->back; + if (test_bit(ICE_FLAG_FW_LLDP_AGENT, peer_pf->flags)) { + dev_warn(dev, "Found %s with FW LLDP agent active - VF LAG disabled\n", + netdev_name(peer_netdev)); + return false; + } } return true; @@ -1477,6 +1499,58 @@ ice_lag_unregister(struct ice_lag *lag, struct net_device *event_netdev) ice_lag_set_swid(0, lag, false); } +/** + * ice_lag_chk_disabled_bond - monitor interfaces entering/leaving disabled bond + * @lag: lag 
info struct + * @ptr: opaque data containing event + * + * as interfaces enter a bond - determine if the bond is currently + * SRIOV LAG compliant and flag if not. As interfaces leave the + * bond, reset their compliant status. + */ +static void ice_lag_chk_disabled_bond(struct ice_lag *lag, void *ptr) +{ + struct net_device *netdev = netdev_notifier_info_to_dev(ptr); + struct netdev_notifier_changeupper_info *info = ptr; + struct ice_lag *prim_lag; + + if (netdev != lag->netdev) + return; + + if (info->linking) { + prim_lag = ice_lag_find_primary(lag); + if (prim_lag && + !ice_is_feature_supported(prim_lag->pf, ICE_F_SRIOV_LAG)) { + ice_clear_feature_support(lag->pf, ICE_F_SRIOV_LAG); + netdev_info(netdev, "Interface added to non-compliant SRIOV LAG aggregate\n"); + } + } else { + ice_lag_check_nvm_support(lag->pf); + } +} + +/** + * ice_lag_disable_sriov_bond - set members of bond as not supporting SRIOV LAG + * @lag: primary interfaces lag struct + */ +static void ice_lag_disable_sriov_bond(struct ice_lag *lag) +{ + struct ice_lag_netdev_list *entry; + struct ice_netdev_priv *np; + struct net_device *netdev; + struct list_head *tmp; + struct ice_pf *pf; + + list_for_each(tmp, lag->netdev_head) { + entry = list_entry(tmp, struct ice_lag_netdev_list, node); + netdev = entry->netdev; + np = netdev_priv(netdev); + pf = np->vsi->back; + + ice_clear_feature_support(pf, ICE_F_SRIOV_LAG); + } +} + /** * ice_lag_process_event - process a task assigned to the lag_wq * @work: pointer to work_struct @@ -1498,6 +1572,7 @@ static void ice_lag_process_event(struct work_struct *work) switch (lag_work->event) { case NETDEV_CHANGEUPPER: info = &lag_work->info.changeupper_info; + ice_lag_chk_disabled_bond(lag_work->lag, info); if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) { ice_lag_monitor_link(lag_work->lag, info); ice_lag_changeupper_event(lag_work->lag, info); @@ -1508,6 +1583,9 @@ static void ice_lag_process_event(struct work_struct *work) if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) { if (!ice_lag_chk_comp(lag_work->lag, &lag_work->info.bonding_info)) { + netdev = lag_work->info.bonding_info.info.dev; + ice_lag_disable_sriov_bond(lag_work->lag); + ice_lag_unregister(lag_work->lag, netdev); goto lag_cleanup; } ice_lag_monitor_active(lag_work->lag, From patchwork Thu Jun 8 18:06:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ertman, David M" X-Patchwork-Id: 13272662 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DCDC62068C for ; Thu, 8 Jun 2023 18:05:10 +0000 (UTC) Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0C1BC19AC for ; Thu, 8 Jun 2023 11:05:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686247508; x=1717783508; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=fCymMh11BDM0sIUq78o1Kso+F0PGaNZuR5UarBdn8Jc=; b=UnK4dXA04n6Ev8P5L9kMq4AamPrSAdY0f3+KWHZd+0sS271wyVIJGVg/ TIqByqq4AzI0aDAR6Jpg1/a8kXNa1BQb8uzci+6vtuj7K1uWvyIKvV1nF vQjCh9m8Ws2pPBBS1AK1kHWQ7sWBrAT+SUIeypL+92/qKW4p7aFIrcT2z XHiQO4SvAzBPQsRBiQx7Ak8V/rBfZ5VtjYSWKAz7LfeVjadfeg9/eMQ5H 
lw96mA7oMkHd+2yfhQLCbUaegF64Zrd1D35R53DalmigfiJHfcw6uUWZb I/+uAqFLPXvVyZYNepwUP5AP0qpzcWyfzrnevqUiIY7WXPFKwVL5lUrBi g==; X-IronPort-AV: E=McAfee;i="6600,9927,10735"; a="385738759" X-IronPort-AV: E=Sophos;i="6.00,227,1681196400"; d="scan'208";a="385738759" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 08 Jun 2023 11:04:39 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10735"; a="775187938" X-IronPort-AV: E=Sophos;i="6.00,227,1681196400"; d="scan'208";a="775187938" Received: from dmert-dev.jf.intel.com ([10.166.241.14]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 08 Jun 2023 11:04:38 -0700 From: Dave Ertman To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, daniel.machon@microchip.com Subject: [PATCH iwl-next v3 09/10] ice: enforce no DCB config changing when in bond Date: Thu, 8 Jun 2023 11:06:17 -0700 Message-Id: <20230608180618.574171-10-david.m.ertman@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230608180618.574171-1-david.m.ertman@intel.com> References: <20230608180618.574171-1-david.m.ertman@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org To support SRIOV LAG, the driver cannot allow changes to an interface's DCB configuration while that interface is in a bond. Such changes would break the Tx scheduling setup that fail-over between the bonded interfaces depends on. Block kernel-generated DCB config events when in a bond.
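The block is implemented as the same early-return guard at the top of each dcbnl handler in the diff below. As an aside, the repeated check could in principle be factored into one small helper; a minimal sketch, assuming the names used in this series (ice_dcb_bond_blocked() itself is a hypothetical name, not part of the patch):

	/* Hypothetical helper (not in this patch): returns true and logs
	 * when a DCB change must be rejected because the interface is
	 * currently a member of a bond.
	 */
	static bool ice_dcb_bond_blocked(struct ice_pf *pf,
					 struct net_device *netdev)
	{
		if (pf->lag && pf->lag->bonded) {
			netdev_err(netdev, "DCB changes not allowed when in a bond\n");
			return true;
		}
		return false;
	}

Each handler's guard would then reduce to "if (ice_dcb_bond_blocked(pf, netdev)) return -EINVAL;", or the ICE_DCB_NO_HW_CHG equivalent for the u8-returning ops.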
Reviewed-by: Daniel Machon Signed-off-by: Dave Ertman --- drivers/net/ethernet/intel/ice/ice_dcb_nl.c | 50 +++++++++++++++++++++ 1 file changed, 50 insertions(+) diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c index 3eb01731e496..e1fbc6de452d 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c @@ -70,6 +70,11 @@ static int ice_dcbnl_setets(struct net_device *netdev, struct ieee_ets *ets) !(pf->dcbx_cap & DCB_CAP_DCBX_VER_IEEE)) return -EINVAL; + if (pf->lag && pf->lag->bonded) { + netdev_err(netdev, "DCB changes not allowed when in a bond\n"); + return -EINVAL; + } + new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg; mutex_lock(&pf->tc_mutex); @@ -170,6 +175,11 @@ static u8 ice_dcbnl_setdcbx(struct net_device *netdev, u8 mode) if (mode == pf->dcbx_cap) return ICE_DCB_NO_HW_CHG; + if (pf->lag && pf->lag->bonded) { + netdev_err(netdev, "DCB changes not allowed when in a bond\n"); + return ICE_DCB_NO_HW_CHG; + } + qos_cfg = &pf->hw.port_info->qos_cfg; /* DSCP configuration is not DCBx negotiated */ @@ -261,6 +271,11 @@ static int ice_dcbnl_setpfc(struct net_device *netdev, struct ieee_pfc *pfc) !(pf->dcbx_cap & DCB_CAP_DCBX_VER_IEEE)) return -EINVAL; + if (pf->lag && pf->lag->bonded) { + netdev_err(netdev, "DCB changes not allowed when in a bond\n"); + return -EINVAL; + } + mutex_lock(&pf->tc_mutex); new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg; @@ -323,6 +338,11 @@ static void ice_dcbnl_set_pfc_cfg(struct net_device *netdev, int prio, u8 set) if (prio >= ICE_MAX_USER_PRIORITY) return; + if (pf->lag && pf->lag->bonded) { + netdev_err(netdev, "DCB changes not allowed when in a bond\n"); + return; + } + new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg; new_cfg->pfc.pfccap = pf->hw.func_caps.common_cap.maxtc; @@ -379,6 +399,11 @@ static u8 ice_dcbnl_setstate(struct net_device *netdev, u8 state) !(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE)) return ICE_DCB_NO_HW_CHG; + if (pf->lag && pf->lag->bonded) { + netdev_err(netdev, "DCB changes not allowed when in a bond\n"); + return ICE_DCB_NO_HW_CHG; + } + /* Nothing to do */ if (!!state == test_bit(ICE_FLAG_DCB_ENA, pf->flags)) return ICE_DCB_NO_HW_CHG; @@ -451,6 +476,11 @@ ice_dcbnl_set_pg_tc_cfg_tx(struct net_device *netdev, int tc, if (tc >= ICE_MAX_TRAFFIC_CLASS) return; + if (pf->lag && pf->lag->bonded) { + netdev_err(netdev, "DCB changes not allowed when in a bond\n"); + return; + } + new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg; /* prio_type, bwg_id and bw_pct per UP are not supported */ @@ -505,6 +535,11 @@ ice_dcbnl_set_pg_bwg_cfg_tx(struct net_device *netdev, int pgid, u8 bw_pct) if (pgid >= ICE_MAX_TRAFFIC_CLASS) return; + if (pf->lag && pf->lag->bonded) { + netdev_err(netdev, "DCB changes not allowed when in a bond\n"); + return; + } + new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg; new_cfg->etscfg.tcbwtable[pgid] = bw_pct; @@ -725,6 +760,11 @@ static int ice_dcbnl_setapp(struct net_device *netdev, struct dcb_app *app) return -EINVAL; } + if (pf->lag && pf->lag->bonded) { + netdev_err(netdev, "DCB changes not allowed when in a bond\n"); + return -EINVAL; + } + max_tc = pf->hw.func_caps.common_cap.maxtc; if (app->priority >= max_tc) { netdev_err(netdev, "TC %d out of range, max TC %d\n", @@ -836,6 +876,11 @@ static int ice_dcbnl_delapp(struct net_device *netdev, struct dcb_app *app) return -EINVAL; } + if (pf->lag && pf->lag->bonded) { + netdev_err(netdev, "DCB changes not allowed when in a bond\n"); + 
return -EINVAL; + } + mutex_lock(&pf->tc_mutex); old_cfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg; @@ -937,6 +982,11 @@ static u8 ice_dcbnl_cee_set_all(struct net_device *netdev) !(pf->dcbx_cap & DCB_CAP_DCBX_VER_CEE)) return ICE_DCB_NO_HW_CHG; + if (pf->lag && pf->lag->bonded) { + netdev_err(netdev, "DCB changes not allowed when in a bond\n"); + return ICE_DCB_NO_HW_CHG; + } + new_cfg = &pf->hw.port_info->qos_cfg.desired_dcbx_cfg; mutex_lock(&pf->tc_mutex); From patchwork Thu Jun 8 18:06:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ertman, David M" X-Patchwork-Id: 13272665 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9FD931DCDF for ; Thu, 8 Jun 2023 18:05:13 +0000 (UTC) Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A3A21125 for ; Thu, 8 Jun 2023 11:05:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686247510; x=1717783510; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=1555XC8uFIUGuO3E3d+12FkpBSnKt43tGU7SibA4Wwg=; b=BB7+P5GAFDtvD9v7pEsPnVDWD8LEpY8zPy1lFUVOlIgisoDaUaXlqBsz saCf6Odrgo8IN89EMw82POHWEDBkCG4/KoxvWlFa/jo/9lOByKnNYNMhd GnoGYe2IHljEbTauWSTqOqHvePF4QvPaEoMTbx4UdBaCZyPhIyr4o6l+5 mazEcQdwFQfJRP2RGPqC8jVGtXpgWOaMV80Xaj4BSF8aEJ7bwec3ZA5Go xxBUCvuOGwJKU/j5LJyoFYfGhQxB8+loBCRoUWVRNTF/XeZCaGOiO6Z8q 2KnFu6AGP+lQDk9X5olZqmmuJczbN1AOpgw5yjjgukpA9ck5xDlg9sJbK w==; X-IronPort-AV: E=McAfee;i="6600,9927,10735"; a="385738768" X-IronPort-AV: E=Sophos;i="6.00,227,1681196400"; d="scan'208";a="385738768" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 08 Jun 2023 11:04:39 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10735"; a="775187942" X-IronPort-AV: E=Sophos;i="6.00,227,1681196400"; d="scan'208";a="775187942" Received: from dmert-dev.jf.intel.com ([10.166.241.14]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 08 Jun 2023 11:04:39 -0700 From: Dave Ertman To: intel-wired-lan@lists.osuosl.org Cc: netdev@vger.kernel.org, daniel.machon@microchip.com Subject: [PATCH iwl-next v3 10/10] ice: update reset path for SRIOV LAG support Date: Thu, 8 Jun 2023 11:06:18 -0700 Message-Id: <20230608180618.574171-11-david.m.ertman@intel.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230608180618.574171-1-david.m.ertman@intel.com> References: <20230608180618.574171-1-david.m.ertman@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org Add code to rebuild the LAG resources when rebuilding the state of the interface after a reset. 
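At a high level, the rebuild re-applies the bond state that normal event processing would have built up. A condensed sketch of the flow, using the names from ice_lag_rebuild() in the diff below (netdev-list setup, error paths, and the default-VSI rule trimmed for brevity):

	/* condensed from ice_lag_rebuild(); see the full version below */
	mutex_lock(&pf->lag_mutex);
	if (lag->primary) {
		/* primary re-marks its SWID as shared */
		ice_lag_primary_swid(lag, true);
	} else {
		/* secondary adopts the primary's SWID and prune list */
		ice_lag_set_swid(prim_lag->pf->hw.port_info->sw_id, lag, true);
		ice_lag_add_prune_list(prim_lag, pf);
		/* if this port is the active one, pull the VF nodes over */
		if (act_port == loc_port)
			ice_lag_move_vf_nodes_sync(prim_lag, &pf->hw);
	}
	ice_lag_cfg_cp_fltr(lag, true);
	mutex_unlock(&pf->lag_mutex);
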
Also added in a function for building per-queue information into the buffer used to configure VF queues for LAG fail-over. This improves code reuse. Due to differences in timing per interface for recovering from a reset, add in the ability to retry on non-local dependencies where needed. Signed-off-by: Dave Ertman Reviewed-by: Daniel Machon --- drivers/net/ethernet/intel/ice/ice_lag.c | 363 ++++++++++++++++++---- drivers/net/ethernet/intel/ice/ice_lag.h | 3 + drivers/net/ethernet/intel/ice/ice_main.c | 14 +- 3 files changed, 316 insertions(+), 64 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c index 6c6e5f4fe12a..40c5e9ee3849 100644 --- a/drivers/net/ethernet/intel/ice/ice_lag.c +++ b/drivers/net/ethernet/intel/ice/ice_lag.c @@ -341,6 +341,75 @@ ice_lag_qbuf_recfg(struct ice_hw *hw, struct ice_aqc_cfg_txqs_buf *qbuf, return count; } +/** + * ice_lag_get_sched_parent - locate or create a sched node parent + * @hw: HW struct for getting parent in + * @tc: traffic class on parent/node + */ +static struct ice_sched_node * +ice_lag_get_sched_parent(struct ice_hw *hw, u8 tc) +{ + struct ice_sched_node *tc_node, *aggnode, *parent = NULL; + u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 }; + struct ice_port_info *pi = hw->port_info; + struct device *dev; + u8 aggl, vsil; + int n; + + dev = ice_hw_to_dev(hw); + + tc_node = ice_sched_get_tc_node(pi, tc); + if (!tc_node) { + dev_warn(dev, "Failure to find TC node in for LAG move\n"); + return parent; + } + + aggnode = ice_sched_get_agg_node(pi, tc_node, ICE_DFLT_AGG_ID); + if (!aggnode) { + dev_warn(dev, "Failure to find aggregate node for LAG move\n"); + return parent; + } + + aggl = ice_sched_get_agg_layer(hw); + vsil = ice_sched_get_vsi_layer(hw); + + for (n = aggl + 1; n < vsil; n++) + num_nodes[n] = 1; + + for (n = 0; n < aggnode->num_children; n++) { + parent = ice_sched_get_free_vsi_parent(hw, aggnode->children[n], + num_nodes); + if (parent) + return parent; + } + + /* if free parent not found - add one */ + parent = aggnode; + for (n = aggl + 1; n < vsil; n++) { + u16 num_nodes_added; + u32 first_teid; + int err; + + err = ice_sched_add_nodes_to_layer(pi, tc_node, parent, n, + num_nodes[n], &first_teid, + &num_nodes_added); + if (err || num_nodes[n] != num_nodes_added) + return NULL; + + if (num_nodes_added) + parent = ice_sched_find_node_by_teid(tc_node, + first_teid); + else + parent = parent->children[0]; + if (!parent) { + dev_warn(dev, "Failure to add new parent for LAG move\n"); + return parent; + } + } + + return parent; +} + /** * ice_lag_move_vf_node_tc - move scheduling nodes for one VF on one TC * @lag: lag info struct @@ -353,19 +422,15 @@ static void ice_lag_move_vf_node_tc(struct ice_lag *lag, u8 oldport, u8 newport, u16 vsi_num, u8 tc) { - u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 }; - struct ice_sched_node *n_prt, *tc_node, *aggnode; u16 numq, valq, buf_size, num_moved, qbuf_size; struct device *dev = ice_pf_to_dev(lag->pf); struct ice_aqc_cfg_txqs_buf *qbuf; struct ice_aqc_move_elem *buf; + struct ice_sched_node *n_prt; struct ice_hw *new_hw = NULL; - struct ice_port_info *pi; __le32 teid, parent_teid; struct ice_vsi_ctx *ctx; - u8 aggl, vsil; u32 tmp_teid; - int n; ctx = ice_get_vsi_ctx(&lag->pf->hw, vsi_num); if (!ctx) { @@ -384,8 +449,6 @@ ice_lag_move_vf_node_tc(struct ice_lag *lag, u8 oldport, u8 newport, return; } - pi = new_hw->port_info; - numq = ctx->num_lan_q_entries[tc]; teid = ctx->sched.vsi_node[tc]->info.node_teid; tmp_teid = 
le32_to_cpu(teid); @@ -429,60 +492,9 @@ ice_lag_move_vf_node_tc(struct ice_lag *lag, u8 oldport, u8 newport, /* find new parent in destination port's tree for VF VSI node on this * Traffic Class */ - tc_node = ice_sched_get_tc_node(pi, tc); - if (!tc_node) { - dev_warn(dev, "Failure to find TC node in failover tree\n"); + n_prt = ice_lag_get_sched_parent(new_hw, tc); + if (!n_prt) goto resume_traffic; - } - - aggnode = ice_sched_get_agg_node(pi, tc_node, - ICE_DFLT_AGG_ID); - if (!aggnode) { - dev_warn(dev, "Failure to find aggregate node in failover tree\n"); - goto resume_traffic; - } - - aggl = ice_sched_get_agg_layer(new_hw); - vsil = ice_sched_get_vsi_layer(new_hw); - - for (n = aggl + 1; n < vsil; n++) - num_nodes[n] = 1; - - for (n = 0; n < aggnode->num_children; n++) { - n_prt = ice_sched_get_free_vsi_parent(new_hw, - aggnode->children[n], - num_nodes); - if (n_prt) - break; - } - - /* add parent if none were free */ - if (!n_prt) { - u16 num_nodes_added; - u32 first_teid; - int status; - - n_prt = aggnode; - for (n = aggl + 1; n < vsil; n++) { - status = ice_sched_add_nodes_to_layer(pi, tc_node, - n_prt, n, - num_nodes[n], - &first_teid, - &num_nodes_added); - if (status || num_nodes[n] != num_nodes_added) - goto resume_traffic; - - if (num_nodes_added) - n_prt = ice_sched_find_node_by_teid(tc_node, - first_teid); - else - n_prt = n_prt->children[0]; - if (!n_prt) { - dev_warn(dev, "Failure to add new parent for LAG node\n"); - goto resume_traffic; - } - } - } /* Move Vf's VSI node for this TC to newport's scheduler tree */ buf_size = struct_size(buf, teid, 1); @@ -997,6 +1009,7 @@ static void ice_lag_link_unlink(struct ice_lag *lag, void *ptr) * @link: Is this a linking activity * * If link is false, then primary_swid should be expected to not be valid + * This function should never be called in interrupt context. */ static void ice_lag_set_swid(u16 primary_swid, struct ice_lag *local_lag, @@ -1006,7 +1019,7 @@ ice_lag_set_swid(u16 primary_swid, struct ice_lag *local_lag, struct ice_aqc_set_port_params *cmd; struct ice_aq_desc desc; u16 buf_len, swid; - int status; + int status, i; buf_len = struct_size(buf, elem, 1); buf = kzalloc(buf_len, GFP_KERNEL); @@ -1057,7 +1070,20 @@ ice_lag_set_swid(u16 primary_swid, struct ice_lag *local_lag, ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_params); cmd->swid = cpu_to_le16(ICE_AQC_PORT_SWID_VALID | swid); - status = ice_aq_send_cmd(&local_lag->pf->hw, &desc, NULL, 0, NULL); + /* If this is happening in reset context, it is possible that the + * primary interface has not finished setting its SWID to SHARED + * yet. Allow retries to account for this timing issue between + * interfaces. 
+ */ + for (i = 0; i < ICE_LAG_RESET_RETRIES; i++) { + status = ice_aq_send_cmd(&local_lag->pf->hw, &desc, NULL, 0, + NULL); + if (!status) + break; + + usleep_range(1000, 2000); + } + if (status) dev_err(ice_pf_to_dev(local_lag->pf), "Error setting SWID in port params %d\n", status); @@ -1065,7 +1091,7 @@ ice_lag_set_swid(u16 primary_swid, struct ice_lag *local_lag, /** * ice_lag_primary_swid - set/clear the SHARED attrib of primary's SWID - * @lag: primary interfaces lag struct + * @lag: primary interface's lag struct * @link: is this a linking activity * * Implement setting primary SWID as shared using 0x020B @@ -1788,6 +1814,135 @@ static int ice_create_lag_recipe(struct ice_hw *hw, u16 *rid, return err; } +/** + * ice_lag_move_vf_nodes_tc_sync - move a VF's nodes for a tc during reset + * @lag: primary interfaces lag struct + * @dest_hw: HW struct for destination's interface + * @vsi_num: VSI index in PF space + * @tc: traffic class to move + */ +static void +ice_lag_move_vf_nodes_tc_sync(struct ice_lag *lag, struct ice_hw *dest_hw, + u16 vsi_num, u8 tc) +{ + u16 numq, valq, buf_size, num_moved, qbuf_size; + struct device *dev = ice_pf_to_dev(lag->pf); + struct ice_aqc_cfg_txqs_buf *qbuf; + struct ice_aqc_move_elem *buf; + struct ice_sched_node *n_prt; + __le32 teid, parent_teid; + struct ice_vsi_ctx *ctx; + struct ice_hw *hw; + u32 tmp_teid; + + hw = &lag->pf->hw; + ctx = ice_get_vsi_ctx(hw, vsi_num); + if (!ctx) { + dev_warn(dev, "LAG rebuild failed after reset due to VSI Context failure\n"); + return; + } + + if (!ctx->sched.vsi_node[tc]) + return; + + numq = ctx->num_lan_q_entries[tc]; + teid = ctx->sched.vsi_node[tc]->info.node_teid; + tmp_teid = le32_to_cpu(teid); + parent_teid = ctx->sched.vsi_node[tc]->info.parent_teid; + + if (!tmp_teid || !numq) + return; + + if (ice_sched_suspend_resume_elems(hw, 1, &tmp_teid, true)) + dev_dbg(dev, "Problem suspending traffic during reset rebuild\n"); + + /* reconfig queues for new port */ + qbuf_size = struct_size(qbuf, queue_info, numq); + qbuf = kzalloc(qbuf_size, GFP_KERNEL); + if (!qbuf) { + dev_warn(dev, "Failure allocating VF queue recfg buffer for reset rebuild\n"); + goto resume_sync; + } + + /* add the per queue info for the reconfigure command buffer */ + valq = ice_lag_qbuf_recfg(hw, qbuf, vsi_num, numq, tc); + if (!valq) { + dev_warn(dev, "Failure to reconfig queues for LAG reset rebuild\n"); + goto sync_none; + } + + if (ice_aq_cfg_lan_txq(hw, qbuf, qbuf_size, numq, hw->port_info->lport, + dest_hw->port_info->lport, NULL)) { + dev_warn(dev, "Failure to configure queues for LAG reset rebuild\n"); + goto sync_qerr; + } + +sync_none: + kfree(qbuf); + + /* find parent in destination tree */ + n_prt = ice_lag_get_sched_parent(dest_hw, tc); + if (!n_prt) + goto resume_sync; + + /* Move node to new parent */ + buf_size = struct_size(buf, teid, 1); + buf = kzalloc(buf_size, GFP_KERNEL); + if (!buf) { + dev_warn(dev, "Failure to alloc for VF node move in reset rebuild\n"); + goto resume_sync; + } + + buf->hdr.src_parent_teid = parent_teid; + buf->hdr.dest_parent_teid = n_prt->info.node_teid; + buf->hdr.num_elems = cpu_to_le16(1); + buf->hdr.mode = ICE_AQC_MOVE_ELEM_MODE_KEEP_OWN; + buf->teid[0] = teid; + + if (ice_aq_move_sched_elems(&lag->pf->hw, 1, buf, buf_size, &num_moved, + NULL)) + dev_warn(dev, "Failure to move VF nodes for LAG reset rebuild\n"); + else + ice_sched_update_parent(n_prt, ctx->sched.vsi_node[tc]); + + kfree(buf); + goto resume_sync; + +sync_qerr: + kfree(qbuf); + +resume_sync: + if 
(ice_sched_suspend_resume_elems(hw, 1, &tmp_teid, false)) + dev_warn(dev, "Problem restarting traffic for LAG node reset rebuild\n"); +} + +/** + * ice_lag_move_vf_nodes_sync - move vf nodes to active interface + * @lag: primary interfaces lag struct + * @dest_hw: lport value for currently active port + * + * This function is used in a reset context, outside of event handling, + * to move the VF nodes to the secondary interface when that interface + * is the active interface during a reset rebuild + */ +static void +ice_lag_move_vf_nodes_sync(struct ice_lag *lag, struct ice_hw *dest_hw) +{ + struct ice_pf *pf; + int i, tc; + + if (!lag->primary || !dest_hw) + return; + + pf = lag->pf; + ice_for_each_vsi(pf, i) + if (pf->vsi[i] && (pf->vsi[i]->type == ICE_VSI_VF || + pf->vsi[i]->type == ICE_VSI_SWITCHDEV_CTRL)) + ice_for_each_traffic_class(tc) + ice_lag_move_vf_nodes_tc_sync(lag, dest_hw, i, + tc); +} + /** * ice_init_lag - initialize support for LAG * @pf: PF struct @@ -1889,3 +2044,85 @@ void ice_deinit_lag(struct ice_pf *pf) pf->lag = NULL; } + +/** + * ice_lag_rebuild - rebuild lag resources after reset + * @pf: pointer to local pf struct + * + * PF resets are promoted to CORER resets when interface in an aggregate. This + * means that we need to rebuild the PF resources for the interface. Since + * this will happen outside the normal event processing, need to acquire the lag + * lock. + * + * This function will also evaluate the VF resources if this is the primary + * interface. + */ +void ice_lag_rebuild(struct ice_pf *pf) +{ + struct ice_lag_netdev_list ndlist; + struct ice_lag *lag, *prim_lag; + struct list_head *tmp, *n; + u8 act_port, loc_port; + + if (!pf->lag || !pf->lag->bonded) + return; + + mutex_lock(&pf->lag_mutex); + + lag = pf->lag; + if (lag->primary) { + prim_lag = lag; + } else { + struct ice_lag_netdev_list *nl; + struct net_device *tmp_nd; + + INIT_LIST_HEAD(&ndlist.node); + rcu_read_lock(); + for_each_netdev_in_bond_rcu(lag->upper_netdev, tmp_nd) { + nl = kzalloc(sizeof(*nl), GFP_KERNEL); + if (!nl) + break; + + nl->netdev = tmp_nd; + list_add(&nl->node, &ndlist.node); + } + rcu_read_unlock(); + lag->netdev_head = &ndlist.node; + prim_lag = ice_lag_find_primary(lag); + } + + if (!prim_lag) { + dev_dbg(ice_pf_to_dev(pf), "No primary interface in aggregate, can't rebuild\n"); + goto lag_rebuild_out; + } + + act_port = prim_lag->active_port; + loc_port = lag->pf->hw.port_info->lport; + + /* configure SWID for this port */ + if (lag->primary) { + ice_lag_primary_swid(lag, true); + } else { + ice_lag_set_swid(prim_lag->pf->hw.port_info->sw_id, lag, true); + ice_lag_add_prune_list(prim_lag, pf); + if (act_port == loc_port) + ice_lag_move_vf_nodes_sync(prim_lag, &pf->hw); + } + + ice_lag_cfg_cp_fltr(lag, true); + + if (lag->pf_rule_id) + if (ice_lag_cfg_dflt_fltr(lag, true)) + dev_err(ice_pf_to_dev(pf), "Error adding default VSI rule in rebuild\n"); + + ice_clear_rdma_cap(pf); +lag_rebuild_out: + list_for_each_safe(tmp, n, &ndlist.node) { + struct ice_lag_netdev_list *entry; + + entry = list_entry(tmp, struct ice_lag_netdev_list, node); + list_del(&entry->node); + kfree(entry); + } + mutex_unlock(&pf->lag_mutex); +} diff --git a/drivers/net/ethernet/intel/ice/ice_lag.h b/drivers/net/ethernet/intel/ice/ice_lag.h index df4af5184a75..18075b82485a 100644 --- a/drivers/net/ethernet/intel/ice/ice_lag.h +++ b/drivers/net/ethernet/intel/ice/ice_lag.h @@ -16,6 +16,8 @@ enum ice_lag_role { #define ICE_LAG_INVALID_PORT 0xFF +#define ICE_LAG_RESET_RETRIES 5 + struct ice_pf; struct 
ice_vf; @@ -59,4 +61,5 @@ struct ice_lag_work { void ice_lag_move_new_vf_nodes(struct ice_vf *vf); int ice_init_lag(struct ice_pf *pf); void ice_deinit_lag(struct ice_pf *pf); +void ice_lag_rebuild(struct ice_pf *pf); #endif /* _ICE_LAG_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index e01834d0417e..af57102538d1 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -636,6 +636,11 @@ static void ice_do_reset(struct ice_pf *pf, enum ice_reset_req reset_type) dev_dbg(dev, "reset_type 0x%x requested\n", reset_type); + if (pf->lag && pf->lag->bonded && reset_type == ICE_RESET_PFR) { + dev_dbg(dev, "PFR on a bonded interface, promoting to CORER\n"); + reset_type = ICE_RESET_CORER; + } + ice_prepare_for_reset(pf, reset_type); /* trigger the reset */ @@ -719,8 +724,13 @@ static void ice_reset_subtask(struct ice_pf *pf) } /* No pending resets to finish processing. Check for new resets */ - if (test_bit(ICE_PFR_REQ, pf->state)) + if (test_bit(ICE_PFR_REQ, pf->state)) { reset_type = ICE_RESET_PFR; + if (pf->lag && pf->lag->bonded) { + dev_dbg(ice_pf_to_dev(pf), "PFR on a bonded interface, promoting to CORER\n"); + reset_type = ICE_RESET_CORER; + } + } if (test_bit(ICE_CORER_REQ, pf->state)) reset_type = ICE_RESET_CORER; if (test_bit(ICE_GLOBR_REQ, pf->state)) @@ -7361,6 +7371,8 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type) clear_bit(ICE_RESET_FAILED, pf->state); ice_plug_aux_dev(pf); + if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) + ice_lag_rebuild(pf); return; err_vsi_rebuild:
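
For reference, the PFR-to-CORER promotion in the ice_main.c hunks above amounts to the same guard applied at both reset entry points; restated on its own, as it appears in ice_do_reset() (names exactly as in the diff):

	/* Promote a PF-only reset to a CORE reset when the interface is in
	 * a bond; ice_lag_rebuild() then restores the LAG resources during
	 * the subsequent rebuild.
	 */
	if (pf->lag && pf->lag->bonded && reset_type == ICE_RESET_PFR) {
		dev_dbg(dev, "PFR on a bonded interface, promoting to CORER\n");
		reset_type = ICE_RESET_CORER;
	}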