From patchwork Thu Jun 22 07:09:56 2023
X-Patchwork-Submitter: Wojciech Drewek
X-Patchwork-Id: 13288334
X-Patchwork-Delegate: kuba@kernel.org
From: Wojciech Drewek <wojciech.drewek@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, david.m.ertman@intel.com,
    michal.swiatkowski@linux.intel.com, marcin.szycik@linux.intel.com,
    simon.horman@corigine.com
Subject: [PATCH iwl-next] ice: Accept LAG netdevs in bridge offloads
Date: Thu, 22 Jun 2023 09:09:56 +0200
Message-Id: <20230622070956.357404-1-wojciech.drewek@intel.com>
X-Mailer: git-send-email 2.40.1

Allow LAG interfaces to be used in bridge offloads by accepting
netdevs for which netif_is_lag_master() returns true. In that case,
look up the ice netdev in the list of the LAG's lower devices.
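The lookup relies on the standard lower-device iterator from
<linux/netdevice.h>. A minimal sketch of that pattern (the function
name and the match callback below are illustrative, not part of the
patch):

#include <linux/netdevice.h>

/* Illustrative sketch only: walk a LAG (bond/team) master's lower
 * devices and return the first one accepted by the given predicate,
 * e.g. a driver-ownership check such as netif_is_ice(). RTNL must be
 * held, as netdev_for_each_lower_dev() requires it.
 */
static struct net_device *
example_lag_find_lower(struct net_device *lag_dev,
		       bool (*match)(struct net_device *dev))
{
	struct net_device *lower;
	struct list_head *iter;

	if (!netif_is_lag_master(lag_dev))
		return NULL;

	netdev_for_each_lower_dev(lag_dev, lower, iter) {
		if (match(lower))
			return lower;
	}

	return NULL;
}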
Reviewed-by: Jedrzej Jagielski
Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
---
Note for Tony: This patch needs to go with Dave's LAG patchset:
https://lore.kernel.org/netdev/20230615162932.762756-1-david.m.ertman@intel.com/
---
 .../net/ethernet/intel/ice/ice_eswitch_br.c   | 47 +++++++++++++++++--
 1 file changed, 42 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
index 1e57ce7b22d3..81b69ba9e939 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
@@ -15,8 +15,23 @@ static const struct rhashtable_params ice_fdb_ht_params = {
 
 static bool ice_eswitch_br_is_dev_valid(const struct net_device *dev)
 {
-	/* Accept only PF netdev and PRs */
-	return ice_is_port_repr_netdev(dev) || netif_is_ice(dev);
+	/* Accept only PF netdev, PRs and LAG */
+	return ice_is_port_repr_netdev(dev) || netif_is_ice(dev) ||
+	       netif_is_lag_master(dev);
+}
+
+static struct net_device *
+ice_eswitch_br_get_uplink_from_lag(struct net_device *lag_dev)
+{
+	struct net_device *lower;
+	struct list_head *iter;
+
+	netdev_for_each_lower_dev(lag_dev, lower, iter) {
+		if (netif_is_ice(lower))
+			return lower;
+	}
+
+	return NULL;
 }
 
 static struct ice_esw_br_port *
@@ -26,8 +41,19 @@ ice_eswitch_br_netdev_to_port(struct net_device *dev)
 		struct ice_repr *repr = ice_netdev_to_repr(dev);
 
 		return repr->br_port;
-	} else if (netif_is_ice(dev)) {
-		struct ice_pf *pf = ice_netdev_to_pf(dev);
+	} else if (netif_is_ice(dev) || netif_is_lag_master(dev)) {
+		struct net_device *ice_dev;
+		struct ice_pf *pf;
+
+		if (netif_is_lag_master(dev))
+			ice_dev = ice_eswitch_br_get_uplink_from_lag(dev);
+		else
+			ice_dev = dev;
+
+		if (!ice_dev)
+			return NULL;
+
+		pf = ice_netdev_to_pf(ice_dev);
 
 		return pf->br_port;
 	}
@@ -712,7 +738,18 @@ ice_eswitch_br_port_link(struct ice_esw_br_offloads *br_offloads,
 
 		err = ice_eswitch_br_vf_repr_port_init(bridge, repr);
 	} else {
-		struct ice_pf *pf = ice_netdev_to_pf(dev);
+		struct net_device *ice_dev;
+		struct ice_pf *pf;
+
+		if (netif_is_lag_master(dev))
+			ice_dev = ice_eswitch_br_get_uplink_from_lag(dev);
+		else
+			ice_dev = dev;
+
+		if (!ice_dev)
+			return 0;
+
+		pf = ice_netdev_to_pf(ice_dev);
 
 		err = ice_eswitch_br_uplink_port_init(bridge, pf);
 	}
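One observation on the shape of the change: the resolve-to-uplink
sequence is now duplicated in ice_eswitch_br_netdev_to_port() and
ice_eswitch_br_port_link(), differing only in the value returned when
the LAG has no ice lower device. If a third user appears, the common
part could be folded into one helper; a hypothetical sketch (the helper
name is made up and not part of this patch):

/* Hypothetical consolidation: map a valid bridge port netdev (ice PF
 * uplink, or a LAG master on top of one) to the underlying ice netdev.
 * Returns NULL when a LAG master has no ice lower device, leaving each
 * caller free to pick its own error value.
 */
static struct net_device *
ice_eswitch_br_resolve_ice_dev(struct net_device *dev)
{
	if (netif_is_lag_master(dev))
		return ice_eswitch_br_get_uplink_from_lag(dev);

	return dev;
}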