From patchwork Tue Jun 11 16:22:04 2024
X-Patchwork-Submitter: Geetha sowjanya
X-Patchwork-Id: 13694006
X-Patchwork-Delegate: kuba@kernel.org
From: Geetha sowjanya
Subject: [net-next PATCH v5 01/10]
octeontx2-pf: Refactoring RVU driver Date: Tue, 11 Jun 2024 21:52:04 +0530 Message-ID: <20240611162213.22213-2-gakula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240611162213.22213-1-gakula@marvell.com> References: <20240611162213.22213-1-gakula@marvell.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Proofpoint-GUID: UlRfAW7RurNzlarHgS0x-b7OsOFAokKJ X-Proofpoint-ORIG-GUID: UlRfAW7RurNzlarHgS0x-b7OsOFAokKJ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16 definitions=2024-06-11_09,2024-06-11_01,2024-05-17_01 X-Patchwork-Delegate: kuba@kernel.org Refactoring and export list of shared functions such that they can be used by both RVU NIC and representor driver. Signed-off-by: Geetha sowjanya --- .../ethernet/marvell/octeontx2/af/common.h | 2 + .../net/ethernet/marvell/octeontx2/af/mbox.h | 2 + .../net/ethernet/marvell/octeontx2/af/npc.h | 1 + .../net/ethernet/marvell/octeontx2/af/rvu.c | 9 + .../net/ethernet/marvell/octeontx2/af/rvu.h | 1 + .../marvell/octeontx2/af/rvu_debugfs.c | 27 -- .../ethernet/marvell/octeontx2/af/rvu_nix.c | 43 ++-- .../marvell/octeontx2/af/rvu_npc_fs.c | 5 + .../ethernet/marvell/octeontx2/af/rvu_reg.h | 1 + .../marvell/octeontx2/af/rvu_struct.h | 26 ++ .../marvell/octeontx2/af/rvu_switch.c | 2 +- .../marvell/octeontx2/nic/otx2_common.c | 6 +- .../marvell/octeontx2/nic/otx2_common.h | 43 ++-- .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 240 +++++++++++------- .../marvell/octeontx2/nic/otx2_txrx.c | 17 +- .../marvell/octeontx2/nic/otx2_txrx.h | 3 +- .../ethernet/marvell/octeontx2/nic/otx2_vf.c | 7 +- 17 files changed, 258 insertions(+), 177 deletions(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h index 2436c1ff9ba4..46fbc8f2dce8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/common.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h @@ -155,9 +155,11 @@ enum nix_scheduler { /* Min/Max packet sizes, excluding FCS */ #define NIC_HW_MIN_FRS 40 #define NIC_HW_MAX_FRS 9212 +#define SDP_HW_MIN_FRS 16 #define SDP_HW_MAX_FRS 65535 #define CN10K_LMAC_LINK_MAX_FRS 16380 /* 16k - FCS */ #define CN10K_LBK_LINK_MAX_FRS 65535 /* 64k */ +#define SDP_LINK_CREDIT 0x320202 /* NIX RX action operation*/ #define NIX_RX_ACTIONOP_DROP (0x0ull) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index 4a77f6fe2622..e6d7d6e862c0 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -78,6 +78,7 @@ struct otx2_mbox { struct mbox_hdr { u64 msg_size; /* Total msgs size embedded */ u16 num_msgs; /* No of msgs embedded */ + u16 sig; }; /* Header which precedes every msg and is also part of it */ @@ -1562,6 +1563,7 @@ struct flow_msg { u8 icmp_type; u8 icmp_code; __be16 tcp_flags; + u16 sq_id; }; struct npc_install_flow_req { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h index d883157393ea..a6533a9b8aa1 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h @@ -245,6 +245,7 @@ enum key_fields { NPC_VLAN_TAG2, /* inner vlan tci for double tagged frame */ NPC_VLAN_TAG3, + NPC_SQ_ID, /* other header fields programmed to extract but not of our interest */ NPC_UNKNOWN, NPC_KEY_FIELDS_MAX, diff --git 
a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index ff78251f92d4..ef6be6291057 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -2151,6 +2151,14 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type, bool poll) offset = mbox->rx_start + ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); + if (req_hdr->sig) { + msg = mdev->mbase + offset; + writeq(0x1, rvu->pfreg_base + + ((BLKADDR_NIX0 << 20) | + NIX_LF_CINTX_INT_W1S(rvu_get_pf(msg->pcifunc)))); + usleep_range(1000, 2000); + goto done; + } for (id = 0; id < mw->mbox_wrk[devid].num_msgs; id++) { msg = mdev->mbase + offset; @@ -2184,6 +2192,7 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type, bool poll) err, otx2_mbox_id2name(msg->id), msg->id, devid); } +done: mw->mbox_wrk[devid].num_msgs = 0; if (poll) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 3063a84a45ef..30efa5607c58 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -1016,6 +1016,7 @@ int rvu_ndc_fix_locked_cacheline(struct rvu *rvu, int blkaddr); void rvu_switch_enable(struct rvu *rvu); void rvu_switch_disable(struct rvu *rvu); void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc); +void rvu_switch_enable_lbk_link(struct rvu *rvu, u16 pcifunc, bool ena); int rvu_npc_set_parse_mode(struct rvu *rvu, u16 pcifunc, u64 mode, u8 dir, u64 pkind, u8 var_len_off, u8 var_len_off_mask, diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c index 4a4ef5bd9e0b..e661f83b188b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c @@ -45,33 +45,6 @@ enum { CGX_STAT18, }; -/* NIX TX stats */ -enum nix_stat_lf_tx { - TX_UCAST = 0x0, - TX_BCAST = 0x1, - TX_MCAST = 0x2, - TX_DROP = 0x3, - TX_OCTS = 0x4, - TX_STATS_ENUM_LAST, -}; - -/* NIX RX stats */ -enum nix_stat_lf_rx { - RX_OCTS = 0x0, - RX_UCAST = 0x1, - RX_BCAST = 0x2, - RX_MCAST = 0x3, - RX_DROP = 0x4, - RX_DROP_OCTS = 0x5, - RX_FCS = 0x6, - RX_ERR = 0x7, - RX_DRP_BCAST = 0x8, - RX_DRP_MCAST = 0x9, - RX_DRP_L3BCAST = 0xa, - RX_DRP_L3MCAST = 0xb, - RX_STATS_ENUM_LAST, -}; - static char *cgx_rx_stats_fields[] = { [CGX_STAT0] = "Received packets", [CGX_STAT1] = "Octets of received packets", diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index 00af8888e329..b94ff8c75ffc 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -289,6 +289,23 @@ static void nix_rx_sync(struct rvu *rvu, int blkaddr) dev_err(rvu->dev, "SYNC2: NIX RX software sync failed\n"); } +static int nix_get_tx_link(struct rvu *rvu, u16 pcifunc) +{ + struct rvu_hwinfo *hw = rvu->hw; + int pf = rvu_get_pf(pcifunc); + u8 cgx_id = 0, lmac_id = 0; + + if (is_lbk_vf(rvu, pcifunc)) {/* LBK links */ + return hw->cgx_links; + } else if (is_pf_cgxmapped(rvu, pf)) { + rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id); + return (cgx_id * hw->lmac_per_cgx) + lmac_id; + } + + /* SDP link */ + return hw->cgx_links + hw->lbk_links; +} + static bool is_valid_txschq(struct rvu *rvu, int blkaddr, int lvl, u16 pcifunc, u16 schq) { @@ -584,6 +601,9 @@ int rvu_mbox_handler_nix_bp_disable(struct rvu *rvu, if 
(!is_pf_cgxmapped(rvu, pf) && type != NIX_INTF_TYPE_LBK) return 0; + if (is_sdp_pfvf(pcifunc)) + type = NIX_INTF_TYPE_SDP; + pfvf = rvu_get_pfvf(rvu, pcifunc); err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr); if (err) @@ -1984,23 +2004,6 @@ static void nix_clear_tx_xoff(struct rvu *rvu, int blkaddr, rvu_write64(rvu, blkaddr, reg, 0x0); } -static int nix_get_tx_link(struct rvu *rvu, u16 pcifunc) -{ - struct rvu_hwinfo *hw = rvu->hw; - int pf = rvu_get_pf(pcifunc); - u8 cgx_id = 0, lmac_id = 0; - - if (is_lbk_vf(rvu, pcifunc)) {/* LBK links */ - return hw->cgx_links; - } else if (is_pf_cgxmapped(rvu, pf)) { - rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id); - return (cgx_id * hw->lmac_per_cgx) + lmac_id; - } - - /* SDP link */ - return hw->cgx_links + hw->lbk_links; -} - static void nix_get_txschq_range(struct rvu *rvu, u16 pcifunc, int link, int *start, int *end) { @@ -2949,6 +2952,7 @@ static int nix_tx_vtag_alloc(struct rvu *rvu, int blkaddr, mutex_unlock(&vlan->rsrc_lock); regval = size ? vtag : vtag << 32; + regval |= (vtag & ~GENMASK_ULL(47, 0)) << 48; rvu_write64(rvu, blkaddr, NIX_AF_TX_VTAG_DEFX_DATA(index), regval); @@ -4619,6 +4623,7 @@ static void nix_link_config(struct rvu *rvu, int blkaddr, rvu_get_lbk_link_max_frs(rvu, &lbk_max_frs); rvu_get_lmac_link_max_frs(rvu, &lmac_max_frs); + rvu_write64(rvu, blkaddr, NIX_AF_SDP_LINK_CREDIT, SDP_LINK_CREDIT); /* Set default min/max packet lengths allowed on NIX Rx links. * * With HW reset minlen value of 60byte, HW will treat ARP pkts @@ -4630,14 +4635,14 @@ static void nix_link_config(struct rvu *rvu, int blkaddr, ((u64)lmac_max_frs << 16) | NIC_HW_MIN_FRS); } - for (link = hw->cgx_links; link < hw->lbk_links; link++) { + for (link = hw->cgx_links; link < hw->cgx_links + hw->lbk_links; link++) { rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link), ((u64)lbk_max_frs << 16) | NIC_HW_MIN_FRS); } if (hw->sdp_links) { link = hw->cgx_links + hw->lbk_links; rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link), - SDP_HW_MAX_FRS << 16 | NIC_HW_MIN_FRS); + SDP_HW_MAX_FRS << 16 | SDP_HW_MIN_FRS); } /* Get MCS external bypass status for CN10K-B */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c index 150635de2bd5..9fa06ec21ad1 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c @@ -555,6 +555,7 @@ do { \ NPC_SCAN_HDR(NPC_SMAC, NPC_LID_LA, la_ltype, la_start + 6, 6); /* PF_FUNC is 2 bytes at 0th byte of NPC_LT_LA_IH_NIX_ETHER */ NPC_SCAN_HDR(NPC_PF_FUNC, NPC_LID_LA, NPC_LT_LA_IH_NIX_ETHER, 0, 2); + NPC_SCAN_HDR(NPC_SQ_ID, NPC_LID_LA, NPC_LT_LA_IH_NIX_ETHER, 2, 3); } static void npc_set_features(struct rvu *rvu, int blkaddr, u8 intf) @@ -1225,6 +1226,10 @@ static int npc_update_tx_entry(struct rvu *rvu, struct rvu_pfvf *pfvf, npc_update_entry(rvu, NPC_PF_FUNC, entry, (__force u16)htons(target), 0, mask, 0, NIX_INTF_TX); + npc_update_entry(rvu, NPC_SQ_ID, entry, + (__force u16)htons(req->packet.sq_id), + 0, req->mask.sq_id, 0, NIX_INTF_TX); + *(u64 *)&action = 0x00; action.op = req->op; action.index = req->index; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h index 5ec92654e7ad..95c332a1fc70 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h @@ -431,6 +431,7 @@ #define NIX_AF_MDQX_IN_MD_COUNT(a) (0x14e0 | (a) << 16) #define 
NIX_AF_SMQX_STATUS(a) (0x730 | (a) << 16) #define NIX_AF_MDQX_OUT_MD_COUNT(a) (0xdb0 | (a) << 16) +#define NIX_LF_CINTX_INT_W1S(a) (0xd30 | (a) << 12) #define NIX_PRIV_AF_INT_CFG (0x8000000) #define NIX_PRIV_LFX_CFG (0x8000010) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h index 5ef406c7e8a4..ee54f1694ea6 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h @@ -825,4 +825,30 @@ enum nix_tx_vtag_op { #define VTAG_STRIP BIT_ULL(4) #define VTAG_CAPTURE BIT_ULL(5) +/* NIX TX stats */ +enum nix_stat_lf_tx { + TX_UCAST = 0x0, + TX_BCAST = 0x1, + TX_MCAST = 0x2, + TX_DROP = 0x3, + TX_OCTS = 0x4, + TX_STATS_ENUM_LAST, +}; + +/* NIX RX stats */ +enum nix_stat_lf_rx { + RX_OCTS = 0x0, + RX_UCAST = 0x1, + RX_BCAST = 0x2, + RX_MCAST = 0x3, + RX_DROP = 0x4, + RX_DROP_OCTS = 0x5, + RX_FCS = 0x6, + RX_ERR = 0x7, + RX_DRP_BCAST = 0x8, + RX_DRP_MCAST = 0x9, + RX_DRP_L3BCAST = 0xa, + RX_DRP_L3MCAST = 0xb, + RX_STATS_ENUM_LAST, +}; #endif /* RVU_STRUCT_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c index 854045ed3b06..ceb81eebf65e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c @@ -8,7 +8,7 @@ #include #include "rvu.h" -static void rvu_switch_enable_lbk_link(struct rvu *rvu, u16 pcifunc, bool enable) +void rvu_switch_enable_lbk_link(struct rvu *rvu, u16 pcifunc, bool enable) { struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); struct nix_hw *nix_hw; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index a85ac039d779..4b53af087cb4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -227,7 +227,7 @@ int otx2_hw_set_mtu(struct otx2_nic *pfvf, int mtu) u16 maxlen; int err; - maxlen = otx2_get_max_mtu(pfvf) + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN; + maxlen = pfvf->hw.max_mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN; mutex_lock(&pfvf->mbox.lock); req = otx2_mbox_alloc_msg_nix_set_hw_frs(&pfvf->mbox); @@ -236,7 +236,7 @@ int otx2_hw_set_mtu(struct otx2_nic *pfvf, int mtu) return -ENOMEM; } - req->maxlen = pfvf->netdev->mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN; + req->maxlen = mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN; /* Use max receive length supported by hardware for loopback devices */ if (is_otx2_lbkvf(pfvf->pdev)) @@ -246,6 +246,7 @@ int otx2_hw_set_mtu(struct otx2_nic *pfvf, int mtu) mutex_unlock(&pfvf->mbox.lock); return err; } +EXPORT_SYMBOL(otx2_hw_set_mtu); int otx2_config_pause_frm(struct otx2_nic *pfvf) { @@ -1782,6 +1783,7 @@ void otx2_free_cints(struct otx2_nic *pfvf, int n) free_irq(vector, &qset->napi[qidx]); } } +EXPORT_SYMBOL(otx2_free_cints); void otx2_set_cints_affinity(struct otx2_nic *pfvf) { diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index 24fbbef265a6..3072648f478c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -120,33 +120,6 @@ enum otx2_errcodes_re { ERRCODE_IL4_CSUM = 0x22, }; -/* NIX TX stats */ -enum nix_stat_lf_tx { - TX_UCAST = 0x0, - TX_BCAST = 0x1, - TX_MCAST = 0x2, - TX_DROP = 0x3, - TX_OCTS = 0x4, - TX_STATS_ENUM_LAST, -}; - -/* 
NIX RX stats */ -enum nix_stat_lf_rx { - RX_OCTS = 0x0, - RX_UCAST = 0x1, - RX_BCAST = 0x2, - RX_MCAST = 0x3, - RX_DROP = 0x4, - RX_DROP_OCTS = 0x5, - RX_FCS = 0x6, - RX_ERR = 0x7, - RX_DRP_BCAST = 0x8, - RX_DRP_MCAST = 0x9, - RX_DRP_L3BCAST = 0xa, - RX_DRP_L3MCAST = 0xb, - RX_STATS_ENUM_LAST, -}; - struct otx2_dev_stats { u64 rx_bytes; u64 rx_frames; @@ -228,6 +201,7 @@ struct otx2_hw { u16 txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; u16 matchall_ipolicer; u32 dwrr_mtu; + u32 max_mtu; u8 smq_link_type; /* HW settings, coalescing etc */ @@ -999,6 +973,21 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id, int otx2_aura_init(struct otx2_nic *pfvf, int aura_id, int pool_id, int numptrs); +int otx2_init_hw_resources(struct otx2_nic *pfvf); +void otx2_free_hw_resources(struct otx2_nic *pf); +int otx2_wq_init(struct otx2_nic *pf); +int otx2_check_pf_usable(struct otx2_nic *pf); +int otx2_pfaf_mbox_init(struct otx2_nic *pf); +int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af); +int otx2_realloc_msix_vectors(struct otx2_nic *pf); +void otx2_pfaf_mbox_destroy(struct otx2_nic *pf); +void otx2_disable_mbox_intr(struct otx2_nic *pf); +void otx2_free_queue_mem(struct otx2_qset *qset); +int otx2_alloc_queue_mem(struct otx2_nic *pf); +void otx2_disable_napi(struct otx2_nic *pf); +irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq); +int otx2_init_rsrc(struct pci_dev *pdev, struct otx2_nic *pf); + /* RSS configuration APIs*/ int otx2_rss_init(struct otx2_nic *pfvf); int otx2_set_flowkey_cfg(struct otx2_nic *pfvf); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index f5bce3e326cc..9d16019cfaab 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -1008,7 +1008,7 @@ static irqreturn_t otx2_pfaf_mbox_intr_handler(int irq, void *pf_irq) return IRQ_HANDLED; } -static void otx2_disable_mbox_intr(struct otx2_nic *pf) +void otx2_disable_mbox_intr(struct otx2_nic *pf) { int vector = pci_irq_vector(pf->pdev, RVU_PF_INT_VEC_AFPF_MBOX); @@ -1016,8 +1016,9 @@ static void otx2_disable_mbox_intr(struct otx2_nic *pf) otx2_write64(pf, RVU_PF_INT_ENA_W1C, BIT_ULL(0)); free_irq(vector, pf); } +EXPORT_SYMBOL(otx2_disable_mbox_intr); -static int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af) +int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af) { struct otx2_hw *hw = &pf->hw; struct msg_req *req; @@ -1060,8 +1061,9 @@ static int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af) return 0; } +EXPORT_SYMBOL(otx2_register_mbox_intr); -static void otx2_pfaf_mbox_destroy(struct otx2_nic *pf) +void otx2_pfaf_mbox_destroy(struct otx2_nic *pf) { struct mbox *mbox = &pf->mbox; @@ -1076,8 +1078,9 @@ static void otx2_pfaf_mbox_destroy(struct otx2_nic *pf) otx2_mbox_destroy(&mbox->mbox); otx2_mbox_destroy(&mbox->mbox_up); } +EXPORT_SYMBOL(otx2_pfaf_mbox_destroy); -static int otx2_pfaf_mbox_init(struct otx2_nic *pf) +int otx2_pfaf_mbox_init(struct otx2_nic *pf) { struct mbox *mbox = &pf->mbox; void __iomem *hwbase; @@ -1124,6 +1127,7 @@ static int otx2_pfaf_mbox_init(struct otx2_nic *pf) otx2_pfaf_mbox_destroy(pf); return err; } +EXPORT_SYMBOL(otx2_pfaf_mbox_init); static int otx2_cgx_config_linkevents(struct otx2_nic *pf, bool enable) { @@ -1379,7 +1383,7 @@ static irqreturn_t otx2_q_intr_handler(int irq, void *data) return IRQ_HANDLED; } -static irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq) +irqreturn_t 
otx2_cq_intr_handler(int irq, void *cq_irq) { struct otx2_cq_poll *cq_poll = (struct otx2_cq_poll *)cq_irq; struct otx2_nic *pf = (struct otx2_nic *)cq_poll->dev; @@ -1398,20 +1402,25 @@ static irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq) return IRQ_HANDLED; } +EXPORT_SYMBOL(otx2_cq_intr_handler); -static void otx2_disable_napi(struct otx2_nic *pf) +void otx2_disable_napi(struct otx2_nic *pf) { struct otx2_qset *qset = &pf->qset; struct otx2_cq_poll *cq_poll; + struct work_struct *work; int qidx; for (qidx = 0; qidx < pf->hw.cint_cnt; qidx++) { cq_poll = &qset->napi[qidx]; - cancel_work_sync(&cq_poll->dim.work); + work = &cq_poll->dim.work; + if (work->func) + cancel_work_sync(&cq_poll->dim.work); napi_disable(&cq_poll->napi); netif_napi_del(&cq_poll->napi); } } +EXPORT_SYMBOL(otx2_disable_napi); static void otx2_free_cq_res(struct otx2_nic *pf) { @@ -1477,7 +1486,7 @@ static int otx2_get_rbuf_size(struct otx2_nic *pf, int mtu) return ALIGN(rbuf_size, 2048); } -static int otx2_init_hw_resources(struct otx2_nic *pf) +int otx2_init_hw_resources(struct otx2_nic *pf) { struct nix_lf_free_req *free_req; struct mbox *mbox = &pf->mbox; @@ -1601,8 +1610,9 @@ static int otx2_init_hw_resources(struct otx2_nic *pf) mutex_unlock(&mbox->lock); return err; } +EXPORT_SYMBOL(otx2_init_hw_resources); -static void otx2_free_hw_resources(struct otx2_nic *pf) +void otx2_free_hw_resources(struct otx2_nic *pf) { struct otx2_qset *qset = &pf->qset; struct nix_lf_free_req *free_req; @@ -1688,6 +1698,7 @@ static void otx2_free_hw_resources(struct otx2_nic *pf) } mutex_unlock(&mbox->lock); } +EXPORT_SYMBOL(otx2_free_hw_resources); static bool otx2_promisc_use_mce_list(struct otx2_nic *pfvf) { @@ -1770,15 +1781,23 @@ static void otx2_dim_work(struct work_struct *w) dim->state = DIM_START_MEASURE; } -int otx2_open(struct net_device *netdev) +void otx2_free_queue_mem(struct otx2_qset *qset) +{ + kfree(qset->sq); + qset->sq = NULL; + kfree(qset->cq); + qset->cq = NULL; + kfree(qset->rq); + qset->rq = NULL; + kfree(qset->napi); +} +EXPORT_SYMBOL(otx2_free_queue_mem); +int otx2_alloc_queue_mem(struct otx2_nic *pf) { - struct otx2_nic *pf = netdev_priv(netdev); - struct otx2_cq_poll *cq_poll = NULL; struct otx2_qset *qset = &pf->qset; - int err = 0, qidx, vec; - char *irq_name; + struct otx2_cq_poll *cq_poll; + int err = -ENOMEM; - netif_carrier_off(netdev); /* RQ and SQs are mapped to different CQs, * so find out max CQ IRQs (i.e CINTs) needed. @@ -1791,14 +1810,13 @@ int otx2_open(struct net_device *netdev) qset->napi = kcalloc(pf->hw.cint_cnt, sizeof(*cq_poll), GFP_KERNEL); if (!qset->napi) - return -ENOMEM; + return err; /* CQ size of RQ */ qset->rqe_cnt = qset->rqe_cnt ? qset->rqe_cnt : Q_COUNT(Q_SIZE_256); /* CQ size of SQ */ qset->sqe_cnt = qset->sqe_cnt ? 
qset->sqe_cnt : Q_COUNT(Q_SIZE_4K); - err = -ENOMEM; qset->cq = kcalloc(pf->qset.cq_cnt, sizeof(struct otx2_cq_queue), GFP_KERNEL); if (!qset->cq) @@ -1814,6 +1832,28 @@ int otx2_open(struct net_device *netdev) if (!qset->rq) goto err_free_mem; + return 0; + +err_free_mem: + otx2_free_queue_mem(qset); + return err; +} +EXPORT_SYMBOL(otx2_alloc_queue_mem); + +int otx2_open(struct net_device *netdev) +{ + struct otx2_nic *pf = netdev_priv(netdev); + struct otx2_cq_poll *cq_poll = NULL; + struct otx2_qset *qset = &pf->qset; + int err = 0, qidx, vec; + char *irq_name; + + netif_carrier_off(netdev); + + err = otx2_alloc_queue_mem(pf); + if (err) + return err; + err = otx2_init_hw_resources(pf); if (err) goto err_free_mem; @@ -1979,10 +2019,7 @@ int otx2_open(struct net_device *netdev) otx2_disable_napi(pf); otx2_free_hw_resources(pf); err_free_mem: - kfree(qset->sq); - kfree(qset->cq); - kfree(qset->rq); - kfree(qset->napi); + otx2_free_queue_mem(qset); return err; } EXPORT_SYMBOL(otx2_open); @@ -2047,11 +2084,8 @@ int otx2_stop(struct net_device *netdev) for (qidx = 0; qidx < netdev->num_tx_queues; qidx++) netdev_tx_reset_queue(netdev_get_tx_queue(netdev, qidx)); + otx2_free_queue_mem(qset); - kfree(qset->sq); - kfree(qset->cq); - kfree(qset->rq); - kfree(qset->napi); /* Do not clear RQ/SQ ringsize settings */ memset_startat(qset, 0, sqe_cnt); return 0; @@ -2081,7 +2115,7 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev) sq = &pf->qset.sq[sq_idx]; txq = netdev_get_tx_queue(netdev, qidx); - if (!otx2_sq_append_skb(netdev, sq, skb, qidx)) { + if (!otx2_sq_append_skb(pf, txq, sq, skb, qidx)) { netif_tx_stop_queue(txq); /* Check again, incase SQBs got freed up */ @@ -2786,7 +2820,7 @@ static const struct net_device_ops otx2_netdev_ops = { .ndo_set_vf_trust = otx2_ndo_set_vf_trust, }; -static int otx2_wq_init(struct otx2_nic *pf) +int otx2_wq_init(struct otx2_nic *pf) { pf->otx2_wq = create_singlethread_workqueue("otx2_wq"); if (!pf->otx2_wq) @@ -2797,7 +2831,7 @@ static int otx2_wq_init(struct otx2_nic *pf) return 0; } -static int otx2_check_pf_usable(struct otx2_nic *nic) +int otx2_check_pf_usable(struct otx2_nic *nic) { u64 rev; @@ -2814,8 +2848,9 @@ static int otx2_check_pf_usable(struct otx2_nic *nic) } return 0; } +EXPORT_SYMBOL(otx2_check_pf_usable); -static int otx2_realloc_msix_vectors(struct otx2_nic *pf) +int otx2_realloc_msix_vectors(struct otx2_nic *pf) { struct otx2_hw *hw = &pf->hw; int num_vec, err; @@ -2837,6 +2872,7 @@ static int otx2_realloc_msix_vectors(struct otx2_nic *pf) return otx2_register_mbox_intr(pf, false); } +EXPORT_SYMBOL(otx2_realloc_msix_vectors); static int otx2_sriov_vfcfg_init(struct otx2_nic *pf) { @@ -2872,6 +2908,88 @@ static void otx2_sriov_vfcfg_cleanup(struct otx2_nic *pf) } } +int otx2_init_rsrc(struct pci_dev *pdev, struct otx2_nic *pf) +{ + struct device *dev = &pdev->dev; + struct otx2_hw *hw = &pf->hw; + int num_vec, err; + + num_vec = pci_msix_vec_count(pdev); + hw->irq_name = devm_kmalloc_array(&hw->pdev->dev, num_vec, NAME_SIZE, + GFP_KERNEL); + if (!hw->irq_name) + return -ENOMEM; + + hw->affinity_mask = devm_kcalloc(&hw->pdev->dev, num_vec, + sizeof(cpumask_var_t), GFP_KERNEL); + if (!hw->affinity_mask) + return -ENOMEM; + + /* Map CSRs */ + pf->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0); + if (!pf->reg_base) { + dev_err(dev, "Unable to map physical function CSRs, aborting\n"); + return -ENOMEM; + } + + err = otx2_check_pf_usable(pf); + if (err) + return err; + + err = pci_alloc_irq_vectors(hw->pdev, 
RVU_PF_INT_VEC_CNT, + RVU_PF_INT_VEC_CNT, PCI_IRQ_MSIX); + if (err < 0) { + dev_err(dev, "%s: Failed to alloc %d IRQ vectors\n", + __func__, num_vec); + return err; + } + + otx2_setup_dev_hw_settings(pf); + + /* Init PF <=> AF mailbox stuff */ + err = otx2_pfaf_mbox_init(pf); + if (err) + goto err_free_irq_vectors; + + /* Register mailbox interrupt */ + err = otx2_register_mbox_intr(pf, true); + if (err) + goto err_mbox_destroy; + + /* Request AF to attach NPA and NIX LFs to this PF. + * NIX and NPA LFs are needed for this PF to function as a NIC. + */ + err = otx2_attach_npa_nix(pf); + if (err) + goto err_disable_mbox_intr; + + err = otx2_realloc_msix_vectors(pf); + if (err) + goto err_detach_rsrc; + + err = cn10k_lmtst_init(pf); + if (err) + goto err_detach_rsrc; + + return 0; + +err_detach_rsrc: + if (pf->hw.lmt_info) + free_percpu(pf->hw.lmt_info); + if (test_bit(CN10K_LMTST, &pf->hw.cap_flag)) + qmem_free(pf->dev, pf->dync_lmt); + otx2_detach_resources(&pf->mbox); +err_disable_mbox_intr: + otx2_disable_mbox_intr(pf); +err_mbox_destroy: + otx2_pfaf_mbox_destroy(pf); +err_free_irq_vectors: + pci_free_irq_vectors(hw->pdev); + + return err; +} +EXPORT_SYMBOL(otx2_init_rsrc); + static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct device *dev = &pdev->dev; @@ -2879,7 +2997,6 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) struct net_device *netdev; struct otx2_nic *pf; struct otx2_hw *hw; - int num_vec; err = pcim_enable_device(pdev); if (err) { @@ -2930,71 +3047,14 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) /* Use CQE of 128 byte descriptor size by default */ hw->xqe_size = 128; - num_vec = pci_msix_vec_count(pdev); - hw->irq_name = devm_kmalloc_array(&hw->pdev->dev, num_vec, NAME_SIZE, - GFP_KERNEL); - if (!hw->irq_name) { - err = -ENOMEM; - goto err_free_netdev; - } - - hw->affinity_mask = devm_kcalloc(&hw->pdev->dev, num_vec, - sizeof(cpumask_var_t), GFP_KERNEL); - if (!hw->affinity_mask) { - err = -ENOMEM; - goto err_free_netdev; - } - - /* Map CSRs */ - pf->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0); - if (!pf->reg_base) { - dev_err(dev, "Unable to map physical function CSRs, aborting\n"); - err = -ENOMEM; - goto err_free_netdev; - } - - err = otx2_check_pf_usable(pf); + err = otx2_init_rsrc(pdev, pf); if (err) goto err_free_netdev; - err = pci_alloc_irq_vectors(hw->pdev, RVU_PF_INT_VEC_CNT, - RVU_PF_INT_VEC_CNT, PCI_IRQ_MSIX); - if (err < 0) { - dev_err(dev, "%s: Failed to alloc %d IRQ vectors\n", - __func__, num_vec); - goto err_free_netdev; - } - - otx2_setup_dev_hw_settings(pf); - - /* Init PF <=> AF mailbox stuff */ - err = otx2_pfaf_mbox_init(pf); - if (err) - goto err_free_irq_vectors; - - /* Register mailbox interrupt */ - err = otx2_register_mbox_intr(pf, true); - if (err) - goto err_mbox_destroy; - - /* Request AF to attach NPA and NIX LFs to this PF. - * NIX and NPA LFs are needed for this PF to function as a NIC. 
- */ - err = otx2_attach_npa_nix(pf); - if (err) - goto err_disable_mbox_intr; - - err = otx2_realloc_msix_vectors(pf); - if (err) - goto err_detach_rsrc; - err = otx2_set_real_num_queues(netdev, hw->tx_queues, hw->rx_queues); if (err) goto err_detach_rsrc; - err = cn10k_lmtst_init(pf); - if (err) - goto err_detach_rsrc; /* Assign default mac address */ otx2_get_mac_from_af(netdev); @@ -3058,6 +3118,7 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) netdev->min_mtu = OTX2_MIN_MTU; netdev->max_mtu = otx2_get_max_mtu(pf); + hw->max_mtu = netdev->max_mtu; /* reset CGX/RPM MAC stats */ otx2_reset_mac_stats(pf); @@ -3118,11 +3179,8 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (test_bit(CN10K_LMTST, &pf->hw.cap_flag)) qmem_free(pf->dev, pf->dync_lmt); otx2_detach_resources(&pf->mbox); -err_disable_mbox_intr: otx2_disable_mbox_intr(pf); -err_mbox_destroy: otx2_pfaf_mbox_destroy(pf); -err_free_irq_vectors: pci_free_irq_vectors(hw->pdev); err_free_netdev: pci_set_drvdata(pdev, NULL); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c index a16e9f244117..aad467aa94a3 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c @@ -131,6 +131,7 @@ static void otx2_xdp_snd_pkt_handler(struct otx2_nic *pfvf, } static void otx2_snd_pkt_handler(struct otx2_nic *pfvf, + struct net_device *ndev, struct otx2_cq_queue *cq, struct otx2_snd_queue *sq, struct nix_cqe_tx_s *cqe, @@ -145,7 +146,7 @@ static void otx2_snd_pkt_handler(struct otx2_nic *pfvf, if (unlikely(snd_comp->status) && netif_msg_tx_err(pfvf)) net_err_ratelimited("%s: TX%d: Error in send CQ status:%x\n", - pfvf->netdev->name, cq->cint_idx, + ndev->name, cq->cint_idx, snd_comp->status); sg = &sq->sg[snd_comp->sqe_id]; @@ -453,6 +454,7 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf, int tx_pkts = 0, tx_bytes = 0, qidx; struct otx2_snd_queue *sq; struct nix_cqe_tx_s *cqe; + struct net_device *ndev; int processed_cqe = 0; if (cq->pend_cqe >= budget) @@ -464,6 +466,7 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf, process_cqe: qidx = cq->cq_idx - pfvf->hw.rx_queues; sq = &pfvf->qset.sq[qidx]; + ndev = pfvf->netdev; while (likely(processed_cqe < budget) && cq->pend_cqe) { cqe = (struct nix_cqe_tx_s *)otx2_get_next_cqe(cq); @@ -478,7 +481,8 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf, if (cq->cq_type == CQ_XDP) otx2_xdp_snd_pkt_handler(pfvf, sq, cqe); else - otx2_snd_pkt_handler(pfvf, cq, &pfvf->qset.sq[qidx], + otx2_snd_pkt_handler(pfvf, ndev, cq, + &pfvf->qset.sq[qidx], cqe, budget, &tx_pkts, &tx_bytes); cqe->hdr.cqe_type = NIX_XQE_TYPE_INVALID; @@ -505,7 +509,7 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf, /* Check if queue was stopped earlier due to ring full */ smp_mb(); if (netif_tx_queue_stopped(txq) && - netif_carrier_ok(pfvf->netdev)) + netif_carrier_ok(ndev)) netif_tx_wake_queue(txq); } return 0; @@ -594,6 +598,7 @@ int otx2_napi_handler(struct napi_struct *napi, int budget) } return workdone; } +EXPORT_SYMBOL(otx2_napi_handler); void otx2_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx) @@ -1141,13 +1146,13 @@ static void otx2_set_txtstamp(struct otx2_nic *pfvf, struct sk_buff *skb, } } -bool otx2_sq_append_skb(struct net_device *netdev, struct otx2_snd_queue *sq, +bool otx2_sq_append_skb(void *dev, struct netdev_queue *txq, + struct otx2_snd_queue *sq, struct sk_buff *skb, 
u16 qidx) { - struct netdev_queue *txq = netdev_get_tx_queue(netdev, qidx); - struct otx2_nic *pfvf = netdev_priv(netdev); int offset, num_segs, free_desc; struct nix_sqe_hdr_s *sqe_hdr; + struct otx2_nic *pfvf = dev; /* Check if there is enough room between producer * and consumer index. diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h index 3f1d2655ff77..e1db5f961877 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h @@ -167,7 +167,8 @@ static inline u64 otx2_iova_to_phys(void *iommu_domain, dma_addr_t dma_addr) } int otx2_napi_handler(struct napi_struct *napi, int budget); -bool otx2_sq_append_skb(struct net_device *netdev, struct otx2_snd_queue *sq, +bool otx2_sq_append_skb(void *dev, struct netdev_queue *txq, + struct otx2_snd_queue *sq, struct sk_buff *skb, u16 qidx); void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c index 99fcc5661674..0486fca8b573 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c @@ -395,7 +395,7 @@ static netdev_tx_t otx2vf_xmit(struct sk_buff *skb, struct net_device *netdev) sq = &vf->qset.sq[qidx]; txq = netdev_get_tx_queue(netdev, qidx); - if (!otx2_sq_append_skb(netdev, sq, skb, qidx)) { + if (!otx2_sq_append_skb(vf, txq, sq, skb, qidx)) { netif_tx_stop_queue(txq); /* Check again, incase SQBs got freed up */ @@ -500,7 +500,7 @@ static const struct net_device_ops otx2vf_netdev_ops = { .ndo_setup_tc = otx2_setup_tc, }; -static int otx2_wq_init(struct otx2_nic *vf) +static int otx2_vf_wq_init(struct otx2_nic *vf) { vf->otx2_wq = create_singlethread_workqueue("otx2vf_wq"); if (!vf->otx2_wq) @@ -671,6 +671,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id) netdev->min_mtu = OTX2_MIN_MTU; netdev->max_mtu = otx2_get_max_mtu(vf); + hw->max_mtu = netdev->max_mtu; /* To distinguish, for LBK VFs set netdev name explicitly */ if (is_otx2_lbkvf(vf->pdev)) { @@ -688,7 +689,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto err_ptp_destroy; } - err = otx2_wq_init(vf); + err = otx2_vf_wq_init(vf); if (err) goto err_unreg_netdev;
From patchwork Tue Jun 11 16:22:05 2024
X-Patchwork-Submitter: Geetha sowjanya
X-Patchwork-Id: 13694008
X-Patchwork-Delegate: kuba@kernel.org
From: Geetha sowjanya
Subject: [net-next PATCH v5 02/10] octeontx2-pf: RVU representor driver
Date: Tue, 11 Jun 2024 21:52:05 +0530
Message-ID: <20240611162213.22213-3-gakula@marvell.com>
In-Reply-To: <20240611162213.22213-1-gakula@marvell.com>
References: <20240611162213.22213-1-gakula@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

This patch adds a basic driver for the RVU representors. On probe, the driver performs the PCI-specific initialization and configures the hardware resources. It introduces the RVU_ESWITCH kernel config option to enable/disable this driver. The representors share code with the RVU NIC driver, but a representor netdev supports only a subset of the NIC functionality.
Hence "otx2_rep_dev" api helps to skip the features initialization that are not supported by the representors. Signed-off-by: Geetha sowjanya --- .../net/ethernet/marvell/octeontx2/Kconfig | 8 + .../ethernet/marvell/octeontx2/af/Makefile | 3 +- .../net/ethernet/marvell/octeontx2/af/mbox.h | 8 + .../net/ethernet/marvell/octeontx2/af/rvu.h | 11 + .../ethernet/marvell/octeontx2/af/rvu_nix.c | 21 +- .../ethernet/marvell/octeontx2/af/rvu_rep.c | 47 ++++ .../ethernet/marvell/octeontx2/nic/Makefile | 2 + .../marvell/octeontx2/nic/otx2_common.h | 12 +- .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 17 +- .../marvell/octeontx2/nic/otx2_txrx.c | 21 +- .../net/ethernet/marvell/octeontx2/nic/rep.c | 221 ++++++++++++++++++ .../net/ethernet/marvell/octeontx2/nic/rep.h | 31 +++ 12 files changed, 385 insertions(+), 17 deletions(-) create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/rep.c create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/rep.h diff --git a/drivers/net/ethernet/marvell/octeontx2/Kconfig b/drivers/net/ethernet/marvell/octeontx2/Kconfig index a32d85d6f599..ff86a5f267c3 100644 --- a/drivers/net/ethernet/marvell/octeontx2/Kconfig +++ b/drivers/net/ethernet/marvell/octeontx2/Kconfig @@ -46,3 +46,11 @@ config OCTEONTX2_VF depends on OCTEONTX2_PF help This driver supports Marvell's OcteonTX2 NIC virtual function. + +config RVU_ESWITCH + tristate "Marvell RVU E-Switch support" + depends on OCTEONTX2_PF + default m + help + This driver supports Marvell's RVU E-Switch that + provides internal SRIOV packet steering and switching for the diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile index 3cf4c8285c90..ccea37847df8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile +++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile @@ -11,4 +11,5 @@ rvu_mbox-y := mbox.o rvu_trace.o rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \ rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \ rvu_cpt.o rvu_devlink.o rpm.o rvu_cn10k.o rvu_switch.o \ - rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o + rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o \ + rvu_rep.o diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index e6d7d6e862c0..befb327e8aff 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -144,6 +144,7 @@ M(LMTST_TBL_SETUP, 0x00a, lmtst_tbl_setup, lmtst_tbl_setup_req, \ msg_rsp) \ M(SET_VF_PERM, 0x00b, set_vf_perm, set_vf_perm, msg_rsp) \ M(PTP_GET_CAP, 0x00c, ptp_get_cap, msg_req, ptp_get_cap_rsp) \ +M(GET_REP_CNT, 0x00d, get_rep_cnt, msg_req, get_rep_cnt_rsp) \ /* CGX mbox IDs (range 0x200 - 0x3FF) */ \ M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \ M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \ @@ -1525,6 +1526,13 @@ struct ptp_get_cap_rsp { u64 cap; }; +struct get_rep_cnt_rsp { + struct mbox_msghdr hdr; + u16 rep_cnt; + u16 rep_pf_map[64]; + u64 rsvd; +}; + struct flow_msg { unsigned char dmac[6]; unsigned char smac[6]; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 30efa5607c58..cbdc7aeaccfc 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -594,6 +594,9 @@ struct rvu { spinlock_t cpt_intr_lock; struct mutex mbox_lock; /* 
Serialize mbox up and down msgs */ + u16 rep_pcifunc; + int rep_cnt; + u16 *rep2pfvf_map; }; static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val) @@ -822,6 +825,14 @@ bool is_sdp_pfvf(u16 pcifunc); bool is_sdp_pf(u16 pcifunc); bool is_sdp_vf(struct rvu *rvu, u16 pcifunc); +static inline bool is_rep_dev(struct rvu *rvu, u16 pcifunc) +{ + if (rvu->rep_pcifunc && rvu->rep_pcifunc == pcifunc) + return true; + + return false; +} + /* CGX APIs */ static inline bool is_pf_cgxmapped(struct rvu *rvu, u8 pf) { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index b94ff8c75ffc..44db4694a257 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -329,7 +329,9 @@ static bool is_valid_txschq(struct rvu *rvu, int blkaddr, /* TLs aggegating traffic are shared across PF and VFs */ if (lvl >= hw->cap.nix_tx_aggr_lvl) { - if (rvu_get_pf(map_func) != rvu_get_pf(pcifunc)) + if ((nix_get_tx_link(rvu, map_func) != + nix_get_tx_link(rvu, pcifunc)) && + (rvu_get_pf(map_func) != rvu_get_pf(pcifunc))) return false; else return true; @@ -1634,6 +1636,12 @@ int rvu_mbox_handler_nix_lf_alloc(struct rvu *rvu, cfg = NPC_TX_DEF_PKIND; rvu_write64(rvu, blkaddr, NIX_AF_LFX_TX_PARSE_CFG(nixlf), cfg); + if (is_rep_dev(rvu, pcifunc)) { + pfvf->tx_chan_base = RVU_SWITCH_LBK_CHAN; + pfvf->tx_chan_cnt = 1; + goto exit; + } + intf = is_lbk_vf(rvu, pcifunc) ? NIX_INTF_TYPE_LBK : NIX_INTF_TYPE_CGX; if (is_sdp_pfvf(pcifunc)) intf = NIX_INTF_TYPE_SDP; @@ -1704,6 +1712,9 @@ int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct nix_lf_free_req *req, if (nixlf < 0) return NIX_AF_ERR_AF_LF_INVALID; + if (is_rep_dev(rvu, pcifunc)) + goto free_lf; + if (req->flags & NIX_LF_DISABLE_FLOWS) rvu_npc_disable_mcam_entries(rvu, pcifunc, nixlf); else @@ -1715,6 +1726,7 @@ int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct nix_lf_free_req *req, nix_interface_deinit(rvu, pcifunc, nixlf); +free_lf: /* Reset this NIX LF */ err = rvu_lf_reset(rvu, block, nixlf); if (err) { @@ -2010,7 +2022,8 @@ static void nix_get_txschq_range(struct rvu *rvu, u16 pcifunc, struct rvu_hwinfo *hw = rvu->hw; int pf = rvu_get_pf(pcifunc); - if (is_lbk_vf(rvu, pcifunc)) { /* LBK links */ + /* LBK links */ + if (is_lbk_vf(rvu, pcifunc) || is_rep_dev(rvu, pcifunc)) { *start = hw->cap.nix_txsch_per_cgx_lmac * link; *end = *start + hw->cap.nix_txsch_per_lbk_lmac; } else if (is_pf_cgxmapped(rvu, pf)) { /* CGX links */ @@ -4522,7 +4535,7 @@ int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req, if (!nix_hw) return NIX_AF_ERR_INVALID_NIXBLK; - if (is_lbk_vf(rvu, pcifunc)) + if (is_lbk_vf(rvu, pcifunc) || is_rep_dev(rvu, pcifunc)) rvu_get_lbk_link_max_frs(rvu, &max_mtu); else rvu_get_lmac_link_max_frs(rvu, &max_mtu); @@ -4550,6 +4563,8 @@ int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req, /* For VFs of PF0 ingress is LBK port, so config LBK link */ pfvf = rvu_get_pfvf(rvu, pcifunc); link = hw->cgx_links + pfvf->lbkid; + } else if (is_rep_dev(rvu, pcifunc)) { + link = hw->cgx_links + 0; } if (link < 0) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c new file mode 100644 index 000000000000..cf13c5f0a3c5 --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c @@ -0,0 +1,47 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Marvell RVU Admin Function driver + * + * Copyright (C) 2024 
Marvell. + * + */ + +#include +#include +#include +#include + +#include "rvu.h" +#include "rvu_reg.h" + +int rvu_mbox_handler_get_rep_cnt(struct rvu *rvu, struct msg_req *req, + struct get_rep_cnt_rsp *rsp) +{ + int pf, vf, numvfs, hwvf, rep = 0; + u16 pcifunc; + + rvu->rep_pcifunc = req->hdr.pcifunc; + rsp->rep_cnt = rvu->cgx_mapped_pfs + rvu->cgx_mapped_vfs; + rvu->rep_cnt = rsp->rep_cnt; + + rvu->rep2pfvf_map = devm_kzalloc(rvu->dev, rvu->rep_cnt * + sizeof(u16), GFP_KERNEL); + if (!rvu->rep2pfvf_map) + return -ENOMEM; + + for (pf = 0; pf < rvu->hw->total_pfs; pf++) { + if (!is_pf_cgxmapped(rvu, pf)) + continue; + pcifunc = pf << RVU_PFVF_PF_SHIFT; + rvu->rep2pfvf_map[rep] = pcifunc; + rsp->rep_pf_map[rep] = pcifunc; + rep++; + rvu_get_pf_numvfs(rvu, pf, &numvfs, &hwvf); + for (vf = 0; vf < numvfs; vf++) { + rvu->rep2pfvf_map[rep] = pcifunc | + ((vf + 1) & RVU_PFVF_FUNC_MASK); + rsp->rep_pf_map[rep] = rvu->rep2pfvf_map[rep]; + rep++; + } + } + return 0; +} diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile index 5664f768cb0c..7420915f72bb 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile +++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile @@ -5,11 +5,13 @@ obj-$(CONFIG_OCTEONTX2_PF) += rvu_nicpf.o otx2_ptp.o obj-$(CONFIG_OCTEONTX2_VF) += rvu_nicvf.o otx2_ptp.o +obj-$(CONFIG_RVU_ESWITCH) += rvu_rep.o rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \ otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o \ otx2_devlink.o qos_sq.o qos.o rvu_nicvf-y := otx2_vf.o otx2_devlink.o +rvu_rep-y := rep.o otx2_devlink.o rvu_nicpf-$(CONFIG_DCB) += otx2_dcbnl.o rvu_nicvf-$(CONFIG_DCB) += otx2_dcbnl.o diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index 3072648f478c..cd4b84d89dd1 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -29,6 +29,7 @@ #include "otx2_devlink.h" #include #include "qos.h" +#include "rep.h" /* IPv4 flag more fragment bit */ #define IPV4_FLAG_MORE 0x20 @@ -441,6 +442,7 @@ struct otx2_nic { #define OTX2_FLAG_PTP_ONESTEP_SYNC BIT_ULL(15) #define OTX2_FLAG_ADPTV_INT_COAL_ENABLED BIT_ULL(16) #define OTX2_FLAG_TC_MARK_ENABLED BIT_ULL(17) +#define OTX2_FLAG_REP_MODE_ENABLED BIT_ULL(18) u64 flags; u64 *cq_op_addr; @@ -508,11 +510,19 @@ struct otx2_nic { #if IS_ENABLED(CONFIG_MACSEC) struct cn10k_mcs_cfg *macsec_cfg; #endif + +#if IS_ENABLED(CONFIG_RVU_ESWITCH) + struct rep_dev **reps; + int rep_cnt; + u16 rep_pf_map[RVU_MAX_REP]; + u16 esw_mode; +#endif }; static inline bool is_otx2_lbkvf(struct pci_dev *pdev) { - return pdev->device == PCI_DEVID_OCTEONTX2_RVU_AFVF; + return (pdev->device == PCI_DEVID_OCTEONTX2_RVU_AFVF) || + (pdev->device == PCI_DEVID_RVU_REP); } static inline bool is_96xx_A0(struct pci_dev *pdev) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 9d16019cfaab..5e43d1ffaa33 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -1502,10 +1502,11 @@ int otx2_init_hw_resources(struct otx2_nic *pf) hw->sqpool_cnt = otx2_get_total_tx_queues(pf); hw->pool_cnt = hw->rqpool_cnt + hw->sqpool_cnt; - /* Maximum hardware supported transmit length */ - pf->tx_max_pktlen = pf->netdev->max_mtu + OTX2_ETH_HLEN; - - pf->rbsize = otx2_get_rbuf_size(pf, pf->netdev->mtu); + if 
(!otx2_rep_dev(pf->pdev)) { + /* Maximum hardware supported transmit length */ + pf->tx_max_pktlen = pf->netdev->max_mtu + OTX2_ETH_HLEN; + pf->rbsize = otx2_get_rbuf_size(pf, pf->netdev->mtu); + } mutex_lock(&mbox->lock); /* NPA init */ @@ -1634,11 +1635,12 @@ void otx2_free_hw_resources(struct otx2_nic *pf) otx2_pfc_txschq_stop(pf); #endif - otx2_clean_qos_queues(pf); + if (!otx2_rep_dev(pf->pdev)) + otx2_clean_qos_queues(pf); mutex_lock(&mbox->lock); /* Disable backpressure */ - if (!(pf->pcifunc & RVU_PFVF_FUNC_MASK)) + if (!is_otx2_lbkvf(pf->pdev)) otx2_nix_config_bp(pf, false); mutex_unlock(&mbox->lock); @@ -1670,7 +1672,8 @@ void otx2_free_hw_resources(struct otx2_nic *pf) otx2_free_cq_res(pf); /* Free all ingress bandwidth profiles allocated */ - cn10k_free_all_ipolicers(pf); + if (!otx2_rep_dev(pf->pdev)) + cn10k_free_all_ipolicers(pf); mutex_lock(&mbox->lock); /* Reset NIX LF */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c index aad467aa94a3..87489904e595 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c @@ -375,11 +375,13 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf, } start += sizeof(*sg); } - otx2_set_rxhash(pfvf, cqe, skb); - skb_record_rx_queue(skb, cq->cq_idx); - if (pfvf->netdev->features & NETIF_F_RXCSUM) - skb->ip_summed = CHECKSUM_UNNECESSARY; + if (!(pfvf->flags & OTX2_FLAG_REP_MODE_ENABLED)) { + otx2_set_rxhash(pfvf, cqe, skb); + skb_record_rx_queue(skb, cq->cq_idx); + if (pfvf->netdev->features & NETIF_F_RXCSUM) + skb->ip_summed = CHECKSUM_UNNECESSARY; + } if (pfvf->flags & OTX2_FLAG_TC_MARK_ENABLED) skb->mark = parse->match_id; @@ -466,7 +468,13 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf, process_cqe: qidx = cq->cq_idx - pfvf->hw.rx_queues; sq = &pfvf->qset.sq[qidx]; - ndev = pfvf->netdev; + +#if IS_ENABLED(CONFIG_RVU_ESWITCH) + if (pfvf->flags & OTX2_FLAG_REP_MODE_ENABLED) + ndev = pfvf->reps[qidx]->netdev; + else +#endif + ndev = pfvf->netdev; while (likely(processed_cqe < budget) && cq->pend_cqe) { cqe = (struct nix_cqe_tx_s *)otx2_get_next_cqe(cq); @@ -504,6 +512,9 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf, if (qidx >= pfvf->hw.tx_queues) qidx -= pfvf->hw.xdp_queues; + + if (pfvf->flags & OTX2_FLAG_REP_MODE_ENABLED) + qidx = 0; txq = netdev_get_tx_queue(pfvf->netdev, qidx); netdev_tx_completed_queue(txq, tx_pkts, tx_bytes); /* Check if queue was stopped earlier due to ring full */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c new file mode 100644 index 000000000000..26f6935279ad --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c @@ -0,0 +1,221 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Marvell RVU representor driver + * + * Copyright (C) 2024 Marvell. 
+ * + */ + +#include +#include +#include +#include + +#include "otx2_common.h" +#include "cn10k.h" +#include "otx2_reg.h" +#include "rep.h" + +#define DRV_NAME "rvu_rep" +#define DRV_STRING "Marvell RVU Representor Driver" + +static const struct pci_device_id rvu_rep_id_table[] = { + { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_RVU_REP) }, + { } +}; + +MODULE_AUTHOR("Marvell International Ltd."); +MODULE_DESCRIPTION(DRV_STRING); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, rvu_rep_id_table); + +static int rvu_rep_rsrc_free(struct otx2_nic *priv) +{ + struct otx2_qset *qset = &priv->qset; + int wrk; + + for (wrk = 0; wrk < priv->qset.cq_cnt; wrk++) + cancel_delayed_work_sync(&priv->refill_wrk[wrk].pool_refill_work); + devm_kfree(priv->dev, priv->refill_wrk); + + otx2_free_hw_resources(priv); + otx2_free_queue_mem(qset); + return 0; +} + +static int rvu_rep_rsrc_init(struct otx2_nic *priv) +{ + struct otx2_qset *qset = &priv->qset; + int err = 0; + + err = otx2_alloc_queue_mem(priv); + if (err) + return err; + + priv->hw.max_mtu = otx2_get_max_mtu(priv); + priv->tx_max_pktlen = priv->hw.max_mtu + OTX2_ETH_HLEN; + priv->rbsize = ALIGN(priv->hw.rbuf_len, OTX2_ALIGN) + OTX2_HEAD_ROOM; + + err = otx2_init_hw_resources(priv); + if (err) + goto err_free_rsrc; + + /* Set maximum frame size allowed in HW */ + err = otx2_hw_set_mtu(priv, priv->hw.max_mtu); + if (err) { + dev_err(priv->dev, "Failed to set HW MTU\n"); + goto err_free_rsrc; + } + return 0; + +err_free_rsrc: + otx2_free_hw_resources(priv); + otx2_free_queue_mem(qset); + return err; +} + +static int rvu_get_rep_cnt(struct otx2_nic *priv) +{ + struct get_rep_cnt_rsp *rsp; + struct mbox_msghdr *msghdr; + struct msg_req *req; + int err, rep; + + mutex_lock(&priv->mbox.lock); + req = otx2_mbox_alloc_msg_get_rep_cnt(&priv->mbox); + if (!req) { + mutex_unlock(&priv->mbox.lock); + return -ENOMEM; + } + err = otx2_sync_mbox_msg(&priv->mbox); + if (err) + goto exit; + + msghdr = otx2_mbox_get_rsp(&priv->mbox.mbox, 0, &req->hdr); + if (IS_ERR(msghdr)) { + err = PTR_ERR(msghdr); + goto exit; + } + + rsp = (struct get_rep_cnt_rsp *)msghdr; + priv->hw.tx_queues = rsp->rep_cnt; + priv->hw.rx_queues = rsp->rep_cnt; + priv->rep_cnt = rsp->rep_cnt; + for (rep = 0; rep < priv->rep_cnt; rep++) + priv->rep_pf_map[rep] = rsp->rep_pf_map[rep]; + +exit: + mutex_unlock(&priv->mbox.lock); + return err; +} + +static int rvu_rep_probe(struct pci_dev *pdev, const struct pci_device_id *id) +{ + struct device *dev = &pdev->dev; + struct otx2_nic *priv; + struct otx2_hw *hw; + int err; + + err = pcim_enable_device(pdev); + if (err) { + dev_err(dev, "Failed to enable PCI device\n"); + return err; + } + + err = pci_request_regions(pdev, DRV_NAME); + if (err) { + dev_err(dev, "PCI request regions failed 0x%x\n", err); + return err; + } + + err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48)); + if (err) { + dev_err(dev, "DMA mask config failed, abort\n"); + goto err_release_regions; + } + + pci_set_master(pdev); + + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); + if (!priv) { + err = -ENOMEM; + goto err_release_regions; + } + + pci_set_drvdata(pdev, priv); + priv->pdev = pdev; + priv->dev = dev; + priv->flags |= OTX2_FLAG_INTF_DOWN; + priv->flags |= OTX2_FLAG_REP_MODE_ENABLED; + + hw = &priv->hw; + hw->pdev = pdev; + hw->max_queues = OTX2_MAX_CQ_CNT; + hw->rbuf_len = OTX2_DEFAULT_RBUF_LEN; + hw->xqe_size = 128; + + err = otx2_init_rsrc(pdev, priv); + if (err) + goto err_release_regions; + + err = rvu_get_rep_cnt(priv); + if (err) + goto err_detach_rsrc; + + 
err = rvu_rep_rsrc_init(priv); + if (err) + goto err_detach_rsrc; + + return 0; + +err_detach_rsrc: + if (priv->hw.lmt_info) + free_percpu(priv->hw.lmt_info); + if (test_bit(CN10K_LMTST, &priv->hw.cap_flag)) + qmem_free(priv->dev, priv->dync_lmt); + otx2_detach_resources(&priv->mbox); + otx2_disable_mbox_intr(priv); + otx2_pfaf_mbox_destroy(priv); + pci_free_irq_vectors(pdev); +err_release_regions: + pci_set_drvdata(pdev, NULL); + pci_release_regions(pdev); + return err; +} + +static void rvu_rep_remove(struct pci_dev *pdev) +{ + struct otx2_nic *priv = pci_get_drvdata(pdev); + + rvu_rep_rsrc_free(priv); + otx2_detach_resources(&priv->mbox); + if (priv->hw.lmt_info) + free_percpu(priv->hw.lmt_info); + if (test_bit(CN10K_LMTST, &priv->hw.cap_flag)) + qmem_free(priv->dev, priv->dync_lmt); + otx2_disable_mbox_intr(priv); + otx2_pfaf_mbox_destroy(priv); + pci_free_irq_vectors(priv->pdev); + pci_set_drvdata(pdev, NULL); + pci_release_regions(pdev); +} + +static struct pci_driver rvu_rep_driver = { + .name = DRV_NAME, + .id_table = rvu_rep_id_table, + .probe = rvu_rep_probe, + .remove = rvu_rep_remove, + .shutdown = rvu_rep_remove, +}; + +static int __init rvu_rep_init_module(void) +{ + return pci_register_driver(&rvu_rep_driver); +} + +static void __exit rvu_rep_cleanup_module(void) +{ + pci_unregister_driver(&rvu_rep_driver); +} + +module_init(rvu_rep_init_module); +module_exit(rvu_rep_cleanup_module); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h new file mode 100644 index 000000000000..565e75628df2 --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Marvell RVU REPRESENTOR driver + * + * Copyright (C) 2024 Marvell. 
+ * + */ + +#ifndef REP_H +#define REP_H + +#include + +#include "otx2_reg.h" +#include "otx2_txrx.h" +#include "otx2_common.h" + +#define PCI_DEVID_RVU_REP 0xA0E0 + +#define RVU_MAX_REP OTX2_MAX_CQ_CNT +struct rep_dev { + struct otx2_nic *mdev; + struct net_device *netdev; + u16 rep_id; + u16 pcifunc; +}; + +static inline bool otx2_rep_dev(struct pci_dev *pdev) +{ + return pdev->device == PCI_DEVID_RVU_REP; +} +#endif /* REP_H */ From patchwork Tue Jun 11 16:22:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 13694007 X-Patchwork-Delegate: kuba@kernel.org Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E4F5C3FB1B; Tue, 11 Jun 2024 16:22:40 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.156.173 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122962; cv=none; b=ShND1LUF+N4GIGdZH/nT3JUI/MIfr/Yb5c/3pyOCEF49uohDuXI7bFRVM877C5931AZbXjPDWJZdHjb+OuKsdTvmwMeYXbPJq1hGczMkwRkln8UsBH8GrBvqF/0yzuYmgkRlqxJhKXmFJFVITB4hXracA7/P3u6zelPvIUOKF24= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122962; c=relaxed/simple; bh=a+HOBLK+vo1DOM72g0Pwf3BUrc+gda1XDfnwgjwqHeY=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=p9wyoxXjj/+7NlC53zOXpBQV/olN47RF2ky6eOclQ83caYTr/ycMFhqnIt6Tqys9HAJH11ewKxPo4X0xAlIL/ln2QqG5VIk4pTL9W/JYcl60AA8mJwlyieVOGR3EYzfz/xSc+HLfpZbDK9q5sLci6lJuluC/6eOynDlK78RlOKQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b=M9L/ISbq; arc=none smtp.client-ip=67.231.156.173 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=marvell.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="M9L/ISbq" Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45BGJRrv024118; Tue, 11 Jun 2024 09:22:29 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-type:date:from:in-reply-to:message-id:mime-version :references:subject:to; s=pfpt0220; bh=neFxwUsSJc72PpaUuv4uxfi4r WNUHkcdS0oo5189W44=; b=M9L/ISbqveUfsipQsNeshKkJ7m1h1V6pn8vivn65o T3L5o+uTvTdBdUKQP8oByiACZdk4kuFN3g0H+lW18PwIe8wlbVs3c+VAXMMST/6y qDsfmpr5UK32lRV14Pyt47DxWoTlDq7p8oyoiDkNWhFgmzW59LiiuhmKiZ68JhHV k0bGC3dbvsnP2HWs8afAJj5wSaPWiw7y7DnlPgrCuXqtJef6aKHR/3wRnrF9PTmX EBbncDbOyJoF3YODTnKx14aM9nVMCFbIe93J6glJeFWfTgk+UFGJ25qfbctE+IrU WP4+jb0wyZnEBerBwg/EtxPthQpFyO+SFs1yvbyLqiohg== Received: from dc6wp-exch02.marvell.com ([4.21.29.225]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3ympthay8v-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 11 Jun 2024 09:22:29 -0700 (PDT) Received: from DC6WP-EXCH02.marvell.com (10.76.176.209) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 11 Jun 2024 09:22:28 
-0700 Received: from maili.marvell.com (10.69.176.80) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 11 Jun 2024 09:22:28 -0700 Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id 4CA613F7059; Tue, 11 Jun 2024 09:22:25 -0700 (PDT) From: Geetha sowjanya To: , CC: , , , , , , , Subject: [net-next PATCH v5 03/10] octeontx2-pf: Create representor netdev Date: Tue, 11 Jun 2024 21:52:06 +0530 Message-ID: <20240611162213.22213-4-gakula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240611162213.22213-1-gakula@marvell.com> References: <20240611162213.22213-1-gakula@marvell.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Proofpoint-GUID: NkaEfBtS1ehz1tPNQGWf4qneqJCnmHAq X-Proofpoint-ORIG-GUID: NkaEfBtS1ehz1tPNQGWf4qneqJCnmHAq X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16 definitions=2024-06-11_09,2024-06-11_01,2024-05-17_01 X-Patchwork-Delegate: kuba@kernel.org Adds initial devlink support to set/get the switchdev mode. Representor netdevs are created for each RVU device when the switch mode is set to 'switchdev'. These netdevs are used to control and configure VFs. Signed-off-by: Geetha sowjanya --- .../marvell/octeontx2/nic/otx2_devlink.c | 49 ++++++ .../net/ethernet/marvell/octeontx2/nic/rep.c | 162 ++++++++++++++++++ .../net/ethernet/marvell/octeontx2/nic/rep.h | 3 + 3 files changed, 214 insertions(+) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c index 99ddf31269d9..5d577b671686 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c @@ -77,7 +77,56 @@ static const struct devlink_param otx2_dl_params[] = { otx2_dl_mcam_count_validate), }; +#ifdef CONFIG_RVU_ESWITCH +static int otx2_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode) +{ + struct otx2_devlink *otx2_dl = devlink_priv(devlink); + struct otx2_nic *pfvf = otx2_dl->pfvf; + + if (!otx2_rep_dev(pfvf->pdev)) + return -EOPNOTSUPP; + + *mode = pfvf->esw_mode; + + return 0; +} + +static int otx2_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode, + struct netlink_ext_ack *extack) +{ + struct otx2_devlink *otx2_dl = devlink_priv(devlink); + struct otx2_nic *pfvf = otx2_dl->pfvf; + int ret = 0; + + if (!otx2_rep_dev(pfvf->pdev)) + return -EOPNOTSUPP; + + if (pfvf->esw_mode == mode) + return 0; + + switch (mode) { + case DEVLINK_ESWITCH_MODE_LEGACY: + rvu_rep_destroy(pfvf); + break; + case DEVLINK_ESWITCH_MODE_SWITCHDEV: + ret = rvu_rep_create(pfvf, extack); + break; + default: + return -EINVAL; + } + + if (!ret) + pfvf->esw_mode = mode; + + return ret; +} +#endif + static const struct devlink_ops otx2_devlink_ops = { +#ifdef CONFIG_RVU_ESWITCH + .eswitch_mode_get = otx2_devlink_eswitch_mode_get, + .eswitch_mode_set = otx2_devlink_eswitch_mode_set, +#endif }; int otx2_register_dl(struct otx2_nic *pfvf) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c index 26f6935279ad..16325ba23c74 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c @@ -28,6 +28,162 @@ MODULE_DESCRIPTION(DRV_STRING); MODULE_LICENSE("GPL"); MODULE_DEVICE_TABLE(pci, rvu_rep_id_table); +static 
int rvu_rep_napi_init(struct otx2_nic *priv, + struct netlink_ext_ack *extack) +{ + struct otx2_qset *qset = &priv->qset; + struct otx2_cq_poll *cq_poll = NULL; + struct otx2_hw *hw = &priv->hw; + int err = 0, qidx, vec; + char *irq_name; + + qset->napi = kcalloc(hw->cint_cnt, sizeof(*cq_poll), GFP_KERNEL); + if (!qset->napi) + return -ENOMEM; + + /* Register NAPI handler */ + for (qidx = 0; qidx < hw->cint_cnt; qidx++) { + cq_poll = &qset->napi[qidx]; + cq_poll->cint_idx = qidx; + cq_poll->cq_ids[CQ_RX] = + (qidx < hw->rx_queues) ? qidx : CINT_INVALID_CQ; + cq_poll->cq_ids[CQ_TX] = (qidx < hw->tx_queues) ? + qidx + hw->rx_queues : + CINT_INVALID_CQ; + cq_poll->cq_ids[CQ_XDP] = CINT_INVALID_CQ; + cq_poll->cq_ids[CQ_QOS] = CINT_INVALID_CQ; + + cq_poll->dev = (void *)priv; + netif_napi_add(priv->reps[qidx]->netdev, &cq_poll->napi, + otx2_napi_handler); + napi_enable(&cq_poll->napi); + } + /* Register CQ IRQ handlers */ + vec = hw->nix_msixoff + NIX_LF_CINT_VEC_START; + for (qidx = 0; qidx < hw->cint_cnt; qidx++) { + irq_name = &hw->irq_name[vec * NAME_SIZE]; + + snprintf(irq_name, NAME_SIZE, "rep%d-rxtx-%d", qidx, qidx); + + err = request_irq(pci_irq_vector(priv->pdev, vec), + otx2_cq_intr_handler, 0, irq_name, + &qset->napi[qidx]); + if (err) { + NL_SET_ERR_MSG_FMT_MOD(extack, + "RVU REP IRQ registration failed for CQ%d", + qidx); + goto err_free_cints; + } + vec++; + + /* Enable CQ IRQ */ + otx2_write64(priv, NIX_LF_CINTX_INT(qidx), BIT_ULL(0)); + otx2_write64(priv, NIX_LF_CINTX_ENA_W1S(qidx), BIT_ULL(0)); + } + priv->flags &= ~OTX2_FLAG_INTF_DOWN; + return 0; + +err_free_cints: + otx2_free_cints(priv, qidx); + otx2_disable_napi(priv); + return err; +} + +static void rvu_rep_free_cq_rsrc(struct otx2_nic *priv) +{ + struct otx2_qset *qset = &priv->qset; + struct otx2_cq_poll *cq_poll = NULL; + int qidx, vec; + + /* Cleanup CQ NAPI and IRQ */ + vec = priv->hw.nix_msixoff + NIX_LF_CINT_VEC_START; + for (qidx = 0; qidx < priv->hw.cint_cnt; qidx++) { + /* Disable interrupt */ + otx2_write64(priv, NIX_LF_CINTX_ENA_W1C(qidx), BIT_ULL(0)); + + synchronize_irq(pci_irq_vector(priv->pdev, vec)); + + cq_poll = &qset->napi[qidx]; + napi_synchronize(&cq_poll->napi); + vec++; + } + otx2_free_cints(priv, priv->hw.cint_cnt); + otx2_disable_napi(priv); +} + +void rvu_rep_destroy(struct otx2_nic *priv) +{ + struct rep_dev *rep; + int rep_id; + + rvu_rep_free_cq_rsrc(priv); + for (rep_id = 0; rep_id < priv->rep_cnt; rep_id++) { + rep = priv->reps[rep_id]; + unregister_netdev(rep->netdev); + free_netdev(rep->netdev); + } + kfree(priv->reps); +} + +int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack) +{ + int rep_cnt = priv->rep_cnt; + struct net_device *ndev; + struct rep_dev *rep; + int rep_id, err; + u16 pcifunc; + + priv->reps = kcalloc(rep_cnt, sizeof(struct rep_dev *), GFP_KERNEL); + if (!priv->reps) + return -ENOMEM; + + for (rep_id = 0; rep_id < rep_cnt; rep_id++) { + ndev = alloc_etherdev(sizeof(*rep)); + if (!ndev) { + NL_SET_ERR_MSG_FMT_MOD(extack, + "PFVF representor:%d creation failed", + rep_id); + err = -ENOMEM; + goto exit; + } + + rep = netdev_priv(ndev); + priv->reps[rep_id] = rep; + rep->mdev = priv; + rep->netdev = ndev; + rep->rep_id = rep_id; + + ndev->min_mtu = OTX2_MIN_MTU; + ndev->max_mtu = priv->hw.max_mtu; + pcifunc = priv->rep_pf_map[rep_id]; + rep->pcifunc = pcifunc; + + snprintf(ndev->name, sizeof(ndev->name), "r%dp%d", rep_id, + rvu_get_pf(pcifunc)); + + eth_hw_addr_random(ndev); + err = register_netdev(ndev); + if (err) { + NL_SET_ERR_MSG_MOD(extack, + "PFVF 
representor registration failed"); + goto exit; + } + } + err = rvu_rep_napi_init(priv, extack); + if (err) + goto exit; + + return 0; +exit: + while (--rep_id >= 0) { + rep = priv->reps[rep_id]; + unregister_netdev(rep->netdev); + free_netdev(rep->netdev); + } + kfree(priv->reps); + return err; +} + static int rvu_rep_rsrc_free(struct otx2_nic *priv) { struct otx2_qset *qset = &priv->qset; @@ -165,6 +321,10 @@ static int rvu_rep_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (err) goto err_detach_rsrc; + err = otx2_register_dl(priv); + if (err) + goto err_detach_rsrc; + return 0; err_detach_rsrc: @@ -186,6 +346,8 @@ static void rvu_rep_remove(struct pci_dev *pdev) { struct otx2_nic *priv = pci_get_drvdata(pdev); + otx2_unregister_dl(priv); + rvu_rep_destroy(priv); rvu_rep_rsrc_free(priv); otx2_detach_resources(&priv->mbox); if (priv->hw.lmt_info) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h index 565e75628df2..c04874c4d4c6 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h @@ -28,4 +28,7 @@ static inline bool otx2_rep_dev(struct pci_dev *pdev) { return pdev->device == PCI_DEVID_RVU_REP; } + +int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack); +void rvu_rep_destroy(struct otx2_nic *priv); #endif /* REP_H */ From patchwork Tue Jun 11 16:22:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 13694011 X-Patchwork-Delegate: kuba@kernel.org Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A471457CB5; Tue, 11 Jun 2024 16:22:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.148.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122969; cv=none; b=ZWFAiUaiZNxo/MjrHPufFu8sgQxW09FUtB2VYoiUSspqZGhG2todQ3cMHBlebSutayCToqiu9A//v19BzBEXW1mesQiv1RG2ynPDgHqtjkq7IbvZCpvCHtwvtN12sgGsovDL0bMUdod5dFXO3h4dUGABdwGP42JLuzVkRQlXhe4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122969; c=relaxed/simple; bh=NNmlYt0CMe+FoELVTXscqp/2pwGzb0HrWpAGV6ZT3O0=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=YKAhSTmJbG7R8fw6LGJBPrA4QIZxKWOO2W1ENi9U9KfMpMk6dAHfZ92Ie8OIsDxDthoQCIopguGjVR0E+BtDjg6hjoAopdGJ/iUbZ1fP2C8Vt0rliEhLJR2zDTyoZ5GT1yxqeCD7YdC+ABkfBN3TexZTMmUTKYUZuykcbHGbL5o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b=FKoZDZf9; arc=none smtp.client-ip=67.231.148.174 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=marvell.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="FKoZDZf9" Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45BGJRrJ021192; Tue, 11 Jun 2024 09:22:33 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= 
cc:content-type:date:from:in-reply-to:message-id:mime-version :references:subject:to; s=pfpt0220; bh=gw1q3fY56iqcuUucMSMbsL/AX 8UEWNVpX4PKvlB3e/E=; b=FKoZDZf9u/CAuod+6MdVupdTg03rpCxFXKA8o/QAa Bu2AV3i0m4BdBKeoXDTKRZKInxkO8MVi4tP4QFGfXPO9jOS+Gsa0abAwFddZQzm+ nsMCiuxhWqC6JnYIjwo2iCNF56ZBZdUQMq1E9kTZbU6HsnXh0HbBFEt8IGFA5x7d VOLpqxXg5rydx8nVds2CQFJz1nKr2l3fL13GT+GNgJQwImH7jhsdKbcwd0zwhNLQ 535ls7bCOG0MmAwbe6vabQ3fLDYiZlwdH1OsvCESXap359rNYJ5L7eC7UKigf1J5 S3vcQJibUB9DBtlq/fCv6nXkYu6bPn9yUMzbgR5oMz0TA== Received: from dc6wp-exch02.marvell.com ([4.21.29.225]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3ypmq0hct8-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 11 Jun 2024 09:22:33 -0700 (PDT) Received: from DC6WP-EXCH02.marvell.com (10.76.176.209) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 11 Jun 2024 09:22:31 -0700 Received: from maili.marvell.com (10.69.176.80) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 11 Jun 2024 09:22:31 -0700 Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id B908C3F7059; Tue, 11 Jun 2024 09:22:28 -0700 (PDT) From: Geetha sowjanya To: , CC: , , , , , , , Subject: [net-next PATCH v5 04/10] octeontx2-pf: Add basic net_device_ops Date: Tue, 11 Jun 2024 21:52:07 +0530 Message-ID: <20240611162213.22213-5-gakula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240611162213.22213-1-gakula@marvell.com> References: <20240611162213.22213-1-gakula@marvell.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Proofpoint-GUID: zyZzJsw--bGkzA0xCcpQxzD3sDvQwvyL X-Proofpoint-ORIG-GUID: zyZzJsw--bGkzA0xCcpQxzD3sDvQwvyL X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16 definitions=2024-06-11_09,2024-06-11_01,2024-05-17_01 X-Patchwork-Delegate: kuba@kernel.org Implements basic set of net_device_ops. 
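For context, the representor netdev has no send queues of its own: the ndo_start_xmit() added below transmits on the send queue of the parent rvu_rep PF that is indexed by the representor id. A minimal sketch of that mapping (the helper is illustrative only and not part of this patch):

static struct otx2_snd_queue *rep_txq(struct rep_dev *rep)
{
	/* rep_id doubles as the SQ index on the parent rvu_rep PF */
	return &rep->mdev->qset.sq[rep->rep_id];
}
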
Signed-off-by: Geetha sowjanya --- .../net/ethernet/marvell/octeontx2/nic/rep.c | 46 +++++++++++++++++++ 1 file changed, 46 insertions(+) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c index 16325ba23c74..3cb8dc820fdd 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c @@ -28,6 +28,51 @@ MODULE_DESCRIPTION(DRV_STRING); MODULE_LICENSE("GPL"); MODULE_DEVICE_TABLE(pci, rvu_rep_id_table); +static netdev_tx_t rvu_rep_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct rep_dev *rep = netdev_priv(dev); + struct otx2_nic *pf = rep->mdev; + struct otx2_snd_queue *sq; + struct netdev_queue *txq; + + sq = &pf->qset.sq[rep->rep_id]; + txq = netdev_get_tx_queue(dev, 0); + + if (!otx2_sq_append_skb(pf, txq, sq, skb, rep->rep_id)) { + netif_tx_stop_queue(txq); + + /* Check again, in case SQBs got freed up */ + smp_mb(); + if (((sq->num_sqbs - *sq->aura_fc_addr) * sq->sqe_per_sqb) + > sq->sqe_thresh) + netif_tx_wake_queue(txq); + + return NETDEV_TX_BUSY; + } + return NETDEV_TX_OK; +} + +static int rvu_rep_open(struct net_device *dev) +{ + netif_carrier_on(dev); + netif_tx_start_all_queues(dev); + return 0; +} + +static int rvu_rep_stop(struct net_device *dev) +{ + netif_carrier_off(dev); + netif_tx_disable(dev); + + return 0; +} + +static const struct net_device_ops rvu_rep_netdev_ops = { + .ndo_open = rvu_rep_open, + .ndo_stop = rvu_rep_stop, + .ndo_start_xmit = rvu_rep_xmit, +}; + static int rvu_rep_napi_init(struct otx2_nic *priv, struct netlink_ext_ack *extack) { @@ -155,6 +200,7 @@ int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack) ndev->min_mtu = OTX2_MIN_MTU; ndev->max_mtu = priv->hw.max_mtu; + ndev->netdev_ops = &rvu_rep_netdev_ops; pcifunc = priv->rep_pf_map[rep_id]; rep->pcifunc = pcifunc; From patchwork Tue Jun 11 16:22:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 13694009 X-Patchwork-Delegate: kuba@kernel.org Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CF76047F5D; Tue, 11 Jun 2024 16:22:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.156.173 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122965; cv=none; b=riY32Dm2wnwh8JBfGl+9Nnk8fLmc8J739ZQWbDGkm7Z0JxVGZWfNi07wFL2cHpvjuPWt3nFmLn87mGMVHpaT0/W17viPidBdYNSahWIseSwpgM0cadz5LHmuMXww48Du3JCp9Xyz7etBMd6vHInXo4hS8BfnNg3Dlubbz1CmOq0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122965; c=relaxed/simple; bh=QCSQ9Ay96PTSo+M9U5SrnN66f2yXwwoYvOidCLWZjdg=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=L9A5ORkwq68Ykdg9LbYOZ7PZgiKPVzx1Otucd0DJKdGC6JyFZGsdHL217uxC8TTPFJ/yVwnERBe4rf7MzKMim7Yq9cUkPMU4fjYOnznbZ/Hyf2eioPn+0gr3gHE1JFz233ZcNS66LCG8gE/8VmxIsquwMuYMONhl88agDyPvy9w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b=eOpvk1z4; arc=none smtp.client-ip=67.231.156.173 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) 
header.from=marvell.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=marvell.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="eOpvk1z4" Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45BGJStM024158; Tue, 11 Jun 2024 09:22:36 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-type:date:from:in-reply-to:message-id:mime-version :references:subject:to; s=pfpt0220; bh=nYhEmJxvhoAYDljd1T3KgN1Yg yCb0nwbpDWApGnZvlE=; b=eOpvk1z4ZTrGbBiZBg7UlZojXrrdfadj7Lwa8e4BH hlkSelR9SNgXoVghvthMf9aDakqn0rCXfkdDtiSTA/6aRuaUg8k7zhKCPLtKnXkb IqLwI2xAJOT6rnwjJaBEL5/hsDpGv6HEIDMkb2YHeXMzC9qfZCfCY2RIIbp3jhoF iHsfu8pK4fYEcelle8XvLTQ5sZol2tVUuIo8DaXpJt8g8rcvr+KPzu9tlmLGvQNd poYMKZ+HBGmkHE9dPjuyrlE0Uci25id1rEy4PTD4oUXHBlmiCiXw5IVY4zrZRUA9 ELVxT+RU6BGIiQ2XJXNNPNHPqiUSTAEU+6uZSzz0oxPYQ== Received: from dc6wp-exch02.marvell.com ([4.21.29.225]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3ympthay98-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 11 Jun 2024 09:22:36 -0700 (PDT) Received: from DC6WP-EXCH02.marvell.com (10.76.176.209) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 11 Jun 2024 09:22:35 -0700 Received: from maili.marvell.com (10.69.176.80) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 11 Jun 2024 09:22:35 -0700 Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id 327713F7059; Tue, 11 Jun 2024 09:22:31 -0700 (PDT) From: Geetha sowjanya To: , CC: , , , , , , , Subject: [net-next PATCH v5 05/10] octeontx2-af: Add packet path between representor and VF Date: Tue, 11 Jun 2024 21:52:08 +0530 Message-ID: <20240611162213.22213-6-gakula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240611162213.22213-1-gakula@marvell.com> References: <20240611162213.22213-1-gakula@marvell.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Proofpoint-GUID: uEzObyG18RFaEqpODya5UEO6GiafXwka X-Proofpoint-ORIG-GUID: uEzObyG18RFaEqpODya5UEO6GiafXwka X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16 definitions=2024-06-11_09,2024-06-11_01,2024-05-17_01 X-Patchwork-Delegate: kuba@kernel.org Current HW does not have a built-in switch that can forward packets between the representee and the representor. When the representor is put under a bridge and packets need to be sent to the representee, packets from the representor are sent on a HW internal loopback channel, which punts them back to the ingress packet parser. The rules this patch installs are MCAM filters/rules that match these packets and forward them to the representee. They provide the basic representor <=> representee path, similar to Tun/TAP between a VM and the host. 
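To make the rule layout easier to follow: each PF/VF gets two rule pairs keyed on a CTAG whose TCI carries the representor id, with bit 8 of the TCI marking representee-originated traffic. The TX rule inserts the tag and unicasts the packet to the LBK channel; the matching RX rule on the looped-back path delivers it to the peer. A minimal sketch of the TCI encoding only (macro and helper names are illustrative, not part of the patch; see rvu_rep_install_rx_rule()/rvu_rep_install_tx_rule() below for the authoritative match fields and actions):

#define REP_TCI_RTE	BIT(8)	/* set for traffic coming from the representee */

static u16 rep_rule_tci(u16 rep_id, bool from_representee)
{
	/* rep_id in the low byte, direction flag in bit 8 */
	return from_representee ? (rep_id | REP_TCI_RTE) : rep_id;
}
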
Signed-off-by: Geetha sowjanya --- .../net/ethernet/marvell/octeontx2/af/mbox.h | 7 + .../net/ethernet/marvell/octeontx2/af/rvu.h | 7 +- .../marvell/octeontx2/af/rvu_devlink.c | 6 + .../ethernet/marvell/octeontx2/af/rvu_nix.c | 7 +- .../ethernet/marvell/octeontx2/af/rvu_rep.c | 246 ++++++++++++++++++ .../marvell/octeontx2/af/rvu_switch.c | 18 +- .../net/ethernet/marvell/octeontx2/nic/rep.c | 23 +- 7 files changed, 304 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index befb327e8aff..a7c32f1cc924 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -145,6 +145,7 @@ M(LMTST_TBL_SETUP, 0x00a, lmtst_tbl_setup, lmtst_tbl_setup_req, \ M(SET_VF_PERM, 0x00b, set_vf_perm, set_vf_perm, msg_rsp) \ M(PTP_GET_CAP, 0x00c, ptp_get_cap, msg_req, ptp_get_cap_rsp) \ M(GET_REP_CNT, 0x00d, get_rep_cnt, msg_req, get_rep_cnt_rsp) \ +M(ESW_CFG, 0x00e, esw_cfg, esw_cfg_req, msg_rsp) \ /* CGX mbox IDs (range 0x200 - 0x3FF) */ \ M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \ M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \ @@ -1533,6 +1534,12 @@ struct get_rep_cnt_rsp { u64 rsvd; }; +struct esw_cfg_req { + struct mbox_msghdr hdr; + u8 ena; + u64 rsvd; +}; + struct flow_msg { unsigned char dmac[6]; unsigned char smac[6]; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index cbdc7aeaccfc..f7f8b96a6208 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -597,6 +597,7 @@ struct rvu { u16 rep_pcifunc; int rep_cnt; u16 *rep2pfvf_map; + u8 rep_mode; }; static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val) @@ -1026,7 +1027,7 @@ int rvu_ndc_fix_locked_cacheline(struct rvu *rvu, int blkaddr); /* RVU Switch */ void rvu_switch_enable(struct rvu *rvu); void rvu_switch_disable(struct rvu *rvu); -void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc); +void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc, bool ena); void rvu_switch_enable_lbk_link(struct rvu *rvu, u16 pcifunc, bool ena); int rvu_npc_set_parse_mode(struct rvu *rvu, u16 pcifunc, u64 mode, u8 dir, @@ -1040,4 +1041,8 @@ int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc); void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena); void rvu_mcs_exit(struct rvu *rvu); +/* Representor APIs */ +int rvu_rep_pf_init(struct rvu *rvu); +int rvu_rep_install_mcam_rules(struct rvu *rvu); +void rvu_rep_update_rules(struct rvu *rvu, u16 pcifunc, bool ena); #endif /* RVU_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c index 7498ab429963..4d29c509ef6b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c @@ -1468,6 +1468,9 @@ static int rvu_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode) struct rvu *rvu = rvu_dl->rvu; struct rvu_switch *rswitch; + if (rvu->rep_mode) + return -EOPNOTSUPP; + rswitch = &rvu->rswitch; *mode = rswitch->mode; @@ -1481,6 +1484,9 @@ static int rvu_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode, struct rvu *rvu = rvu_dl->rvu; struct rvu_switch *rswitch; + if (rvu->rep_mode) + return -EOPNOTSUPP; + rswitch = &rvu->rswitch; switch (mode) { case DEVLINK_ESWITCH_MODE_LEGACY: diff --git 
a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index 44db4694a257..9cf47cdee192 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -2743,7 +2743,7 @@ void rvu_nix_tx_tl2_cfg(struct rvu *rvu, int blkaddr, u16 pcifunc, int schq; u64 cfg; - if (!is_pf_cgxmapped(rvu, pf)) + if (!is_pf_cgxmapped(rvu, pf) && !is_rep_dev(rvu, pcifunc)) return; cfg = enable ? (BIT_ULL(12) | RVU_SWITCH_LBK_CHAN) : 0; @@ -4373,8 +4373,6 @@ int rvu_mbox_handler_nix_set_mac_addr(struct rvu *rvu, if (test_bit(PF_SET_VF_TRUSTED, &pfvf->flags) && from_vf) ether_addr_copy(pfvf->default_mac, req->mac_addr); - rvu_switch_update_rules(rvu, pcifunc); - return 0; } @@ -5165,7 +5163,7 @@ int rvu_mbox_handler_nix_lf_start_rx(struct rvu *rvu, struct msg_req *req, pfvf = rvu_get_pfvf(rvu, pcifunc); set_bit(NIXLF_INITIALIZED, &pfvf->flags); - rvu_switch_update_rules(rvu, pcifunc); + rvu_switch_update_rules(rvu, pcifunc, true); return rvu_cgx_start_stop_io(rvu, pcifunc, true); } @@ -5193,6 +5191,7 @@ int rvu_mbox_handler_nix_lf_stop_rx(struct rvu *rvu, struct msg_req *req, if (err) return err; + rvu_switch_update_rules(rvu, pcifunc, false); rvu_cgx_tx_enable(rvu, pcifunc, true); return 0; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c index cf13c5f0a3c5..e137bb9383a2 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c @@ -13,6 +13,252 @@ #include "rvu.h" #include "rvu_reg.h" +static int rvu_rep_get_vlan_id(struct rvu *rvu, u16 pcifunc) +{ + int id; + + for (id = 0; id < rvu->rep_cnt; id++) + if (rvu->rep2pfvf_map[id] == pcifunc) + return id; + return -ENODEV; +} + +static int rvu_rep_tx_vlan_cfg(struct rvu *rvu, u16 pcifunc, + u16 vlan_tci, int *vidx) +{ + struct nix_vtag_config_rsp rsp = {}; + struct nix_vtag_config req = {}; + u64 etype = ETH_P_8021Q; + int err; + + /* Insert vlan tag */ + req.hdr.pcifunc = pcifunc; + req.vtag_size = VTAGSIZE_T4; + req.cfg_type = 0; /* tx vlan cfg */ + req.tx.cfg_vtag0 = true; + req.tx.vtag0 = etype << 48 | ntohs(vlan_tci); + + err = rvu_mbox_handler_nix_vtag_cfg(rvu, &req, &rsp); + if (err) { + dev_err(rvu->dev, "Tx vlan config failed\n"); + return err; + } + *vidx = rsp.vtag0_idx; + return 0; +} + +static int rvu_rep_rx_vlan_cfg(struct rvu *rvu, u16 pcifunc) +{ + struct nix_vtag_config req = {}; + struct nix_vtag_config_rsp rsp; + + /* config strip, capture and size */ + req.hdr.pcifunc = pcifunc; + req.vtag_size = VTAGSIZE_T4; + req.cfg_type = 1; /* rx vlan cfg */ + req.rx.vtag_type = NIX_AF_LFX_RX_VTAG_TYPE0; + req.rx.strip_vtag = true; + req.rx.capture_vtag = false; + + return rvu_mbox_handler_nix_vtag_cfg(rvu, &req, &rsp); +} + +static int rvu_rep_install_rx_rule(struct rvu *rvu, u16 pcifunc, + u16 entry, bool rte) +{ + struct npc_install_flow_req req = {}; + struct npc_install_flow_rsp rsp = {}; + struct rvu_pfvf *pfvf; + u16 vlan_tci, rep_id; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + + /* To stree the traffic from Representee to Representor */ + rep_id = (u16)rvu_rep_get_vlan_id(rvu, pcifunc); + if (rte) { + vlan_tci = rep_id | 0x1ull << 8; + req.vf = rvu->rep_pcifunc; + req.op = NIX_RX_ACTIONOP_UCAST; + req.index = rep_id; + } else { + vlan_tci = rep_id; + req.vf = pcifunc; + req.op = NIX_RX_ACTION_DEFAULT; + } + + rvu_rep_rx_vlan_cfg(rvu, req.vf); + req.entry = entry; + req.hdr.pcifunc = 0; /* AF is requester */ + 
req.features = BIT_ULL(NPC_OUTER_VID) | BIT_ULL(NPC_VLAN_ETYPE_CTAG); + req.vtag0_valid = true; + req.vtag0_type = NIX_AF_LFX_RX_VTAG_TYPE0; + req.packet.vlan_etype = (__be16)ETH_P_8021Q; + req.mask.vlan_etype = (__be16)ETH_P_8021Q; + req.packet.vlan_tci = (__be16)vlan_tci; + req.mask.vlan_tci = (__be16)0xffff; + + req.channel = RVU_SWITCH_LBK_CHAN; + req.chan_mask = 0xffff; + req.intf = pfvf->nix_rx_intf; + + return rvu_mbox_handler_npc_install_flow(rvu, &req, &rsp); +} + +static int rvu_rep_install_tx_rule(struct rvu *rvu, u16 pcifunc, u16 entry, + bool rte) +{ + struct npc_install_flow_req req = {}; + struct npc_install_flow_rsp rsp = {}; + struct rvu_pfvf *pfvf; + int vidx, err; + u16 vlan_tci; + u8 lbkid; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + vlan_tci = rvu_rep_get_vlan_id(rvu, pcifunc); + if (rte) + vlan_tci |= 0x1ull << 8; + + err = rvu_rep_tx_vlan_cfg(rvu, pcifunc, vlan_tci, &vidx); + if (err) + return err; + + lbkid = pfvf->nix_blkaddr == BLKADDR_NIX0 ? 0 : 1; + req.hdr.pcifunc = 0; /* AF is requester */ + if (rte) { + req.vf = pcifunc; + } else { + req.vf = rvu->rep_pcifunc; + req.packet.sq_id = vlan_tci; + req.mask.sq_id = 0xffff; + } + + req.entry = entry; + req.intf = pfvf->nix_tx_intf; + req.op = NIX_TX_ACTIONOP_UCAST_CHAN; + req.index = (lbkid << 8) | RVU_SWITCH_LBK_CHAN; + req.set_cntr = 1; + req.vtag0_def = vidx; + req.vtag0_op = 1; + return rvu_mbox_handler_npc_install_flow(rvu, &req, &rsp); +} + +int rvu_rep_install_mcam_rules(struct rvu *rvu) +{ + struct rvu_switch *rswitch = &rvu->rswitch; + u16 start = rswitch->start_entry; + struct rvu_hwinfo *hw = rvu->hw; + u16 pcifunc, entry = 0; + int pf, vf, numvfs; + int err, nixlf, i; + u8 rep; + + for (pf = 1; pf < hw->total_pfs; pf++) { + if (!is_pf_cgxmapped(rvu, pf)) + continue; + + pcifunc = pf << RVU_PFVF_PF_SHIFT; + rvu_get_nix_blkaddr(rvu, pcifunc); + rep = true; + for (i = 0; i < 2; i++) { + err = rvu_rep_install_rx_rule(rvu, pcifunc, + start + entry, rep); + if (err) + return err; + rswitch->entry2pcifunc[entry++] = pcifunc; + + err = rvu_rep_install_tx_rule(rvu, pcifunc, + start + entry, rep); + if (err) + return err; + rswitch->entry2pcifunc[entry++] = pcifunc; + rep = false; + } + + rvu_get_pf_numvfs(rvu, pf, &numvfs, NULL); + for (vf = 0; vf < numvfs; vf++) { + pcifunc = pf << RVU_PFVF_PF_SHIFT | + ((vf + 1) & RVU_PFVF_FUNC_MASK); + rvu_get_nix_blkaddr(rvu, pcifunc); + + /* Skip installimg rules if nixlf is not attached */ + err = nix_get_nixlf(rvu, pcifunc, &nixlf, NULL); + if (err) + continue; + rep = true; + for (i = 0; i < 2; i++) { + err = rvu_rep_install_rx_rule(rvu, pcifunc, + start + entry, + rep); + if (err) + return err; + rswitch->entry2pcifunc[entry++] = pcifunc; + + err = rvu_rep_install_tx_rule(rvu, pcifunc, + start + entry, + rep); + if (err) + return err; + rswitch->entry2pcifunc[entry++] = pcifunc; + rep = false; + } + } + } + return 0; +} + +void rvu_rep_update_rules(struct rvu *rvu, u16 pcifunc, bool ena) +{ + struct rvu_switch *rswitch = &rvu->rswitch; + struct npc_mcam *mcam = &rvu->hw->mcam; + u32 max = rswitch->used_entries; + int blkaddr; + u16 entry; + + if (!rswitch->used_entries) + return; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + + if (blkaddr < 0) + return; + + rvu_switch_enable_lbk_link(rvu, pcifunc, ena); + mutex_lock(&mcam->lock); + for (entry = 0; entry < max; entry++) { + if (rswitch->entry2pcifunc[entry] == pcifunc) + npc_enable_mcam_entry(rvu, mcam, blkaddr, entry, ena); + } + mutex_unlock(&mcam->lock); +} + +int rvu_rep_pf_init(struct rvu *rvu) +{ + u16 pcifunc 
= rvu->rep_pcifunc; + struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); + + set_bit(NIXLF_INITIALIZED, &pfvf->flags); + rvu_switch_enable_lbk_link(rvu, pcifunc, true); + rvu_rep_rx_vlan_cfg(rvu, pcifunc); + return 0; +} + +int rvu_mbox_handler_esw_cfg(struct rvu *rvu, struct esw_cfg_req *req, + struct msg_rsp *rsp) +{ + if (req->hdr.pcifunc != rvu->rep_pcifunc) + return 0; + + rvu->rep_mode = req->ena; + + if (req->ena) + rvu_switch_enable(rvu); + else + rvu_switch_disable(rvu); + + return 0; +} + int rvu_mbox_handler_get_rep_cnt(struct rvu *rvu, struct msg_req *req, struct get_rep_cnt_rsp *rsp) { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c index ceb81eebf65e..268efb7c1c15 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c @@ -166,6 +166,8 @@ void rvu_switch_enable(struct rvu *rvu) alloc_req.contig = true; alloc_req.count = rvu->cgx_mapped_pfs + rvu->cgx_mapped_vfs; + if (rvu->rep_mode) + alloc_req.count = alloc_req.count * 4; ret = rvu_mbox_handler_npc_mcam_alloc_entry(rvu, &alloc_req, &alloc_rsp); if (ret) { @@ -189,7 +191,12 @@ void rvu_switch_enable(struct rvu *rvu) rswitch->used_entries = alloc_rsp.count; rswitch->start_entry = alloc_rsp.entry; - ret = rvu_switch_install_rules(rvu); + if (rvu->rep_mode) { + rvu_rep_pf_init(rvu); + ret = rvu_rep_install_mcam_rules(rvu); + } else { + ret = rvu_switch_install_rules(rvu); + } if (ret) goto uninstall_rules; @@ -222,6 +229,9 @@ void rvu_switch_disable(struct rvu *rvu) if (!rswitch->used_entries) return; + if (rvu->rep_mode) + goto free_ents; + for (pf = 1; pf < hw->total_pfs; pf++) { if (!is_pf_cgxmapped(rvu, pf)) continue; @@ -249,6 +259,7 @@ void rvu_switch_disable(struct rvu *rvu) } } +free_ents: uninstall_req.start = rswitch->start_entry; uninstall_req.end = rswitch->start_entry + rswitch->used_entries - 1; free_req.all = 1; @@ -258,12 +269,15 @@ void rvu_switch_disable(struct rvu *rvu) kfree(rswitch->entry2pcifunc); } -void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc) +void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc, bool ena) { struct rvu_switch *rswitch = &rvu->rswitch; u32 max = rswitch->used_entries; u16 entry; + if (rvu->rep_mode) + return rvu_rep_update_rules(rvu, pcifunc, ena); + if (!rswitch->used_entries) return; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c index 3cb8dc820fdd..e276a354d9e4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c @@ -28,6 +28,22 @@ MODULE_DESCRIPTION(DRV_STRING); MODULE_LICENSE("GPL"); MODULE_DEVICE_TABLE(pci, rvu_rep_id_table); +static int rvu_eswitch_config(struct otx2_nic *priv, u8 ena) +{ + struct esw_cfg_req *req; + + mutex_lock(&priv->mbox.lock); + req = otx2_mbox_alloc_msg_esw_cfg(&priv->mbox); + if (!req) { + mutex_unlock(&priv->mbox.lock); + return -ENOMEM; + } + req->ena = ena; + otx2_sync_mbox_msg(&priv->mbox); + mutex_unlock(&priv->mbox.lock); + return 0; +} + static netdev_tx_t rvu_rep_xmit(struct sk_buff *skb, struct net_device *dev) { struct rep_dev *rep = netdev_priv(dev); @@ -161,6 +177,7 @@ void rvu_rep_destroy(struct otx2_nic *priv) struct rep_dev *rep; int rep_id; + rvu_eswitch_config(priv, false); rvu_rep_free_cq_rsrc(priv); for (rep_id = 0; rep_id < priv->rep_cnt; rep_id++) { rep = priv->reps[rep_id]; @@ -219,6 +236,7 @@ int rvu_rep_create(struct otx2_nic *priv, struct 
netlink_ext_ack *extack) if (err) goto exit; + rvu_eswitch_config(priv, true); return 0; exit: while (--rep_id >= 0) { @@ -230,7 +248,7 @@ int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack) return err; } -static int rvu_rep_rsrc_free(struct otx2_nic *priv) +static void rvu_rep_rsrc_free(struct otx2_nic *priv) { struct otx2_qset *qset = &priv->qset; int wrk; @@ -241,13 +259,12 @@ static int rvu_rep_rsrc_free(struct otx2_nic *priv) otx2_free_hw_resources(priv); otx2_free_queue_mem(qset); - return 0; } static int rvu_rep_rsrc_init(struct otx2_nic *priv) { struct otx2_qset *qset = &priv->qset; - int err = 0; + int err; err = otx2_alloc_queue_mem(priv); if (err) From patchwork Tue Jun 11 16:22:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 13694010 X-Patchwork-Delegate: kuba@kernel.org Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0F1543CF63; Tue, 11 Jun 2024 16:22:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.156.173 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122968; cv=none; b=i36H0HVAVZeGODZ0KYo1q1xZov/8zj5boClmSx556JkG7EyCWrveOvcauRIyd0Z9+3O/ecJiGJSzi6Z2oXR+q9Dbl3L3uyc+fXbn3SW1oTCO0AOJIdZdcg/LBE47lb9jtsojqSj9WfOOBNGvSlgkKV9fmz+QFxsRC3LSWgOyomM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122968; c=relaxed/simple; bh=FhTzGKwmmWQzeibYZJYQCFvD15RzU/+wZFmQHn737wE=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=ZZwhbVUx6NnWWiwEKXPl4GO0KGnHhc2suAzeBzop+RFnyk7UK7ClBv0LEhqTRai6QFHFxGIGA0gffABjS44oVs0VCRo+vfVTTsM9qTgR8N32CpOyGeXsvL4jqXJG8Td8SdBNDlnjJWSRagX03dyJRBAqRKiYdMmA9HhKujZXZhc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b=B1eE2q7g; arc=none smtp.client-ip=67.231.156.173 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=marvell.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="B1eE2q7g" Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45BGJRrw024118; Tue, 11 Jun 2024 09:22:40 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-type:date:from:in-reply-to:message-id:mime-version :references:subject:to; s=pfpt0220; bh=Pz3Yw0h07biX9MJlFB8FgN5vx ZUJ5b/XRHOdwimHUmM=; b=B1eE2q7gYU4ynkfNlWUAyMk/aSEPorrduv1HD6tVz 4BVr2jgdSHqOw7i/6XcaliMFuztjgAK9dEMXVhL44i6jX2gdEF04tyjjFFPDx633 2SUarAVXT0y32trZLLbmPscuVgaxN6Nn0HeKfNNN0BwidQyKg7WQ/zGVHVoufTnu UqBcQ+0EQb0FN2kmJ0yLkBsxGB3i/SEJNYwcZKsz3r7DRRNIuymuZIJXtapEiKEy TlBx3WOiZJMrPD7dYRy5r2L3uPLcOTucWQqE60ZsGhklFCJvRoN/VQRFtx0nGovS U2ykNgdUWNDJ2gD2ty349nQY+y3Es12XJzDKl+GNZPpvw== Received: from dc5-exch05.marvell.com ([199.233.59.128]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3ympthay9e-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 11 Jun 
2024 09:22:39 -0700 (PDT) Received: from DC5-EXCH05.marvell.com (10.69.176.209) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 11 Jun 2024 09:22:38 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 11 Jun 2024 09:22:38 -0700 Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id 9F7803F7059; Tue, 11 Jun 2024 09:22:35 -0700 (PDT) From: Geetha sowjanya To: , CC: , , , , , , , Subject: [net-next PATCH v5 06/10] octeontx2-pf: Get VF stats via representor Date: Tue, 11 Jun 2024 21:52:09 +0530 Message-ID: <20240611162213.22213-7-gakula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240611162213.22213-1-gakula@marvell.com> References: <20240611162213.22213-1-gakula@marvell.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Proofpoint-GUID: RzL5rL_-5LvajIyqU5Pfr5GZx-lj6HI1 X-Proofpoint-ORIG-GUID: RzL5rL_-5LvajIyqU5Pfr5GZx-lj6HI1 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16 definitions=2024-06-11_09,2024-06-11_01,2024-05-17_01 X-Patchwork-Delegate: kuba@kernel.org This patch adds support to export VF port statistics via the representor netdev. It defines a new mbox message "NIX_LF_STATS" to fetch VF HW stats. Signed-off-by: Geetha sowjanya Reviewed-by: Simon Horman --- .../net/ethernet/marvell/octeontx2/af/mbox.h | 32 +++++++++ .../ethernet/marvell/octeontx2/af/rvu_rep.c | 43 ++++++++++++ .../net/ethernet/marvell/octeontx2/nic/rep.c | 65 +++++++++++++++++++ .../net/ethernet/marvell/octeontx2/nic/rep.h | 14 ++++ 4 files changed, 154 insertions(+) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index a7c32f1cc924..d293a3c35b6b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -321,6 +321,7 @@ M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_re M(NIX_MCAST_GRP_UPDATE, 0x802d, nix_mcast_grp_update, \ nix_mcast_grp_update_req, \ nix_mcast_grp_update_rsp) \ +M(NIX_LF_STATS, 0x802e, nix_lf_stats, nix_stats_req, nix_stats_rsp) \ /* MCS mbox IDs (range 0xA000 - 0xBFFF) */ \ M(MCS_ALLOC_RESOURCES, 0xa000, mcs_alloc_resources, mcs_alloc_rsrc_req, \ mcs_alloc_rsrc_rsp) \ @@ -1366,6 +1367,37 @@ struct nix_bandprof_get_hwinfo_rsp { u32 policer_timeunit; }; +struct nix_stats_req { + struct mbox_msghdr hdr; + u8 reset; + u16 pcifunc; + u64 rsvd; +}; + +struct nix_stats_rsp { + struct mbox_msghdr hdr; + u16 pcifunc; + struct { + u64 octs; + u64 ucast; + u64 bcast; + u64 mcast; + u64 drop; + u64 drop_octs; + u64 drop_mcast; + u64 drop_bcast; + u64 err; + u64 rsvd[5]; + } rx; + struct { + u64 ucast; + u64 bcast; + u64 mcast; + u64 drop; + u64 octs; + } tx; +}; + /* NPC mbox message structs */ #define NPC_MCAM_ENTRY_INVALID 0xFFFF diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c index e137bb9383a2..a7a15d7878e5 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c @@ -13,6 +13,49 @@ #include "rvu.h" #include "rvu_reg.h" +#define RVU_LF_RX_STATS(reg) \ + rvu_read64(rvu, blkaddr, 
NIX_AF_LFX_RX_STATX(nixlf, reg)) + +#define RVU_LF_TX_STATS(reg) \ + rvu_read64(rvu, blkaddr, NIX_AF_LFX_TX_STATX(nixlf, reg)) + +int rvu_mbox_handler_nix_lf_stats(struct rvu *rvu, + struct nix_stats_req *req, + struct nix_stats_rsp *rsp) +{ + u16 pcifunc = req->pcifunc; + int nixlf, blkaddr, err; + struct msg_req rst_req; + struct msg_rsp rst_rsp; + + err = nix_get_nixlf(rvu, pcifunc, &nixlf, &blkaddr); + if (err) + return 0; + + if (req->reset) { + rst_req.hdr.pcifunc = pcifunc; + return rvu_mbox_handler_nix_stats_rst(rvu, &rst_req, &rst_rsp); + } + rsp->rx.octs = RVU_LF_RX_STATS(RX_OCTS); + rsp->rx.ucast = RVU_LF_RX_STATS(RX_UCAST); + rsp->rx.bcast = RVU_LF_RX_STATS(RX_BCAST); + rsp->rx.mcast = RVU_LF_RX_STATS(RX_MCAST); + rsp->rx.drop = RVU_LF_RX_STATS(RX_DROP); + rsp->rx.err = RVU_LF_RX_STATS(RX_ERR); + rsp->rx.drop_octs = RVU_LF_RX_STATS(RX_DROP_OCTS); + rsp->rx.drop_mcast = RVU_LF_RX_STATS(RX_DRP_MCAST); + rsp->rx.drop_bcast = RVU_LF_RX_STATS(RX_DRP_BCAST); + + rsp->tx.octs = RVU_LF_TX_STATS(TX_OCTS); + rsp->tx.ucast = RVU_LF_TX_STATS(TX_UCAST); + rsp->tx.bcast = RVU_LF_TX_STATS(TX_BCAST); + rsp->tx.mcast = RVU_LF_TX_STATS(TX_MCAST); + rsp->tx.drop = RVU_LF_TX_STATS(TX_DROP); + + rsp->pcifunc = req->pcifunc; + return 0; +} + static int rvu_rep_get_vlan_id(struct rvu *rvu, u16 pcifunc) { int id; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c index e276a354d9e4..592b0f4406f0 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c @@ -28,6 +28,68 @@ MODULE_DESCRIPTION(DRV_STRING); MODULE_LICENSE("GPL"); MODULE_DEVICE_TABLE(pci, rvu_rep_id_table); +static void rvu_rep_get_stats(struct work_struct *work) +{ + struct delayed_work *del_work = to_delayed_work(work); + struct nix_stats_req *req; + struct nix_stats_rsp *rsp; + struct rep_stats *stats; + struct otx2_nic *priv; + struct rep_dev *rep; + int err; + + rep = container_of(del_work, struct rep_dev, stats_wrk); + priv = rep->mdev; + + mutex_lock(&priv->mbox.lock); + req = otx2_mbox_alloc_msg_nix_lf_stats(&priv->mbox); + if (!req) { + mutex_unlock(&priv->mbox.lock); + return; + } + req->pcifunc = rep->pcifunc; + err = otx2_sync_mbox_msg_busy_poll(&priv->mbox); + if (err) + goto exit; + + rsp = (struct nix_stats_rsp *) + otx2_mbox_get_rsp(&priv->mbox.mbox, 0, &req->hdr); + + if (IS_ERR(rsp)) { + err = PTR_ERR(rsp); + goto exit; + } + + stats = &rep->stats; + stats->rx_bytes = rsp->rx.octs; + stats->rx_frames = rsp->rx.ucast + rsp->rx.bcast + + rsp->rx.mcast; + stats->rx_drops = rsp->rx.drop; + stats->rx_mcast_frames = rsp->rx.mcast; + stats->tx_bytes = rsp->tx.octs; + stats->tx_frames = rsp->tx.ucast + rsp->tx.bcast + rsp->tx.mcast; + stats->tx_drops = rsp->tx.drop; +exit: + mutex_unlock(&priv->mbox.lock); +} + +static void rvu_rep_get_stats64(struct net_device *dev, + struct rtnl_link_stats64 *stats) +{ + struct rep_dev *rep = netdev_priv(dev); + + stats->rx_packets = rep->stats.rx_frames; + stats->rx_bytes = rep->stats.rx_bytes; + stats->rx_dropped = rep->stats.rx_drops; + stats->multicast = rep->stats.rx_mcast_frames; + + stats->tx_packets = rep->stats.tx_frames; + stats->tx_bytes = rep->stats.tx_bytes; + stats->tx_dropped = rep->stats.tx_drops; + + schedule_delayed_work(&rep->stats_wrk, msecs_to_jiffies(100)); +} + static int rvu_eswitch_config(struct otx2_nic *priv, u8 ena) { struct esw_cfg_req *req; @@ -87,6 +149,7 @@ static const struct net_device_ops rvu_rep_netdev_ops = { .ndo_open = rvu_rep_open, 
.ndo_stop = rvu_rep_stop, .ndo_start_xmit = rvu_rep_xmit, + .ndo_get_stats64 = rvu_rep_get_stats64, }; static int rvu_rep_napi_init(struct otx2_nic *priv, @@ -231,6 +294,8 @@ int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack) "PFVF reprentator registration failed"); goto exit; } + + INIT_DELAYED_WORK(&rep->stats_wrk, rvu_rep_get_stats); } err = rvu_rep_napi_init(priv, extack); if (err) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h index c04874c4d4c6..5d39bf636655 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h @@ -17,9 +17,23 @@ #define PCI_DEVID_RVU_REP 0xA0E0 #define RVU_MAX_REP OTX2_MAX_CQ_CNT + +struct rep_stats { + u64 rx_bytes; + u64 rx_frames; + u64 rx_drops; + u64 rx_mcast_frames; + + u64 tx_bytes; + u64 tx_frames; + u64 tx_drops; +}; + struct rep_dev { struct otx2_nic *mdev; struct net_device *netdev; + struct rep_stats stats; + struct delayed_work stats_wrk; u16 rep_id; u16 pcifunc; }; From patchwork Tue Jun 11 16:22:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 13694012 X-Patchwork-Delegate: kuba@kernel.org Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DA97C6D1B2; Tue, 11 Jun 2024 16:22:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.148.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122972; cv=none; b=Utg9UjWg1UVXPk7g0MIIo2GIa3iBgqe7gSjho4Nn6mMPf10MsJ4VBeP3iPYYBxm9IiWm5DDKGFdLEqrNhghh8hPHeikELbC3qT4CmegZu8CB27VoW2DvWfQwqeZSIlNBhueQoYrdC1OSbVhrQslZZ3USaob1XNd7E1onhOBR194= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122972; c=relaxed/simple; bh=TxS3evur+B7WqqJzm6f/aGeKsFYG/yWv5YrNNHxQpeI=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=KJ9wiOEXegacLAa7qWrAi16+6YioZOjd4fEdbiF2O4YL4MBemn7LfId8/1KyHntViKQ8VaUJw780xETIc1vpM0oscobK1DkYu5mY4VA+HsJgTVlySYVfmGpXmTRWbd0SFS3QzmHCHHGtXmlQukcR5UGZYmjY+BfTkk7k9dSsx/8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b=TiHemhjx; arc=none smtp.client-ip=67.231.148.174 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=marvell.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="TiHemhjx" Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45BGJRrL021192; Tue, 11 Jun 2024 09:22:44 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-type:date:from:in-reply-to:message-id:mime-version :references:subject:to; s=pfpt0220; bh=JWfs5zhmHAjrNif3Sa0lznO0f AswIck8t+jepmtLbxI=; b=TiHemhjxlaxmbcfklshtYUOFNPmttiRDe5Wt4Gq3S LuSSWGC82UjMXcOTFZV1ZlWtScRGwrbDqq8nYULliP1kPZe3H2BU6kH8pZqPm79m TOL1mCtbM5JvreKsuZLuNaicOC4IQoYU19hP74OCFgIW3rAWKfxT+gMQ97/OMShI 
1LVxsq4BOcDitca6Sgtk9wceyM8nRq6D7Jdk0aZJ1mqsvdPR0v9W56i8y9R7vyiJ cLUp7lhcD5zGrQf87uw131VobWU3+sHPg7GImZcaJQmuVSY5ythHeyumbjggbZYQ dcujU3M/ho+1Y3wmohoT1399dxmHM2gKv296ipIx08Rbw== Received: from dc6wp-exch02.marvell.com ([4.21.29.225]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3ypmq0hcu3-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 11 Jun 2024 09:22:43 -0700 (PDT) Received: from DC6WP-EXCH02.marvell.com (10.76.176.209) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 11 Jun 2024 09:22:42 -0700 Received: from maili.marvell.com (10.69.176.80) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 11 Jun 2024 09:22:42 -0700 Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id 1809C3F7059; Tue, 11 Jun 2024 09:22:38 -0700 (PDT) From: Geetha sowjanya To: , CC: , , , , , , , Subject: [net-next PATCH v5 07/10] octeontx2-pf: Add support to sync link state between representor and VFs Date: Tue, 11 Jun 2024 21:52:10 +0530 Message-ID: <20240611162213.22213-8-gakula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240611162213.22213-1-gakula@marvell.com> References: <20240611162213.22213-1-gakula@marvell.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Proofpoint-GUID: vALFF0-1tluqnhbO6j9OC41Nl6kE48wA X-Proofpoint-ORIG-GUID: vALFF0-1tluqnhbO6j9OC41Nl6kE48wA X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16 definitions=2024-06-11_09,2024-06-11_01,2024-05-17_01 X-Patchwork-Delegate: kuba@kernel.org This patch implements the below requirement from the representors documentation. " The representee's link state is controlled through the representor. Setting the representor administratively UP or DOWN should cause carrier ON or OFF at the representee. " This patch enables: - Reflecting the link state of the representor based on the VF state, and the link state of the VF based on the representor. - On VF interface up/down, a notification is sent via mbox to the representor to update its link state. e.g. "ip link set eth0 up/down" turns carrier on/off on the corresponding representor (r0p1) interface. - On representor interface up/down, the link state of the VF is updated. e.g. "ip link set r0p1 up/down" turns carrier on/off on the corresponding representee (eth0) interface. 
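For reference, a rough sketch of the notification flow wired up by this patch (message and function names are taken from the diff; the arrows are only illustrative and the netdev-side handler lands later in this patch):

  representee NIX LF attach/detach (nix_interface_init()/deinit())
    -> AF sends REP_EVENT_UP_NOTIFY (RVU_EVENT_PFVF_STATE) to the rvu_rep PF
    -> the matching representor's carrier is turned on or off

  representor admin up/down
    -> rvu_rep PF sends REP_EVENT_NOTIFY (RVU_EVENT_PORT_STATE) to the AF
    -> rvu_mbox_handler_rep_event_notify() queues the event and
       rvu_rep_wq_handler() forwards it up toward the representee,
       whose carrier is updated accordingly
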
Signed-off-by: Harman Kalra Signed-off-by: Geetha sowjanya Reviewed-by: Simon Horman --- .../net/ethernet/marvell/octeontx2/af/mbox.h | 25 ++++ .../net/ethernet/marvell/octeontx2/af/rvu.h | 11 ++ .../ethernet/marvell/octeontx2/af/rvu_nix.c | 6 + .../ethernet/marvell/octeontx2/af/rvu_rep.c | 127 ++++++++++++++++++ .../marvell/octeontx2/nic/otx2_common.h | 2 + .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 30 +++++ .../net/ethernet/marvell/octeontx2/nic/rep.c | 76 +++++++++++ .../net/ethernet/marvell/octeontx2/nic/rep.h | 3 + 8 files changed, 280 insertions(+) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index d293a3c35b6b..2381f84efc21 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -146,6 +146,7 @@ M(SET_VF_PERM, 0x00b, set_vf_perm, set_vf_perm, msg_rsp) \ M(PTP_GET_CAP, 0x00c, ptp_get_cap, msg_req, ptp_get_cap_rsp) \ M(GET_REP_CNT, 0x00d, get_rep_cnt, msg_req, get_rep_cnt_rsp) \ M(ESW_CFG, 0x00e, esw_cfg, esw_cfg_req, msg_rsp) \ +M(REP_EVENT_NOTIFY, 0x00f, rep_event_notify, rep_event, msg_rsp) \ /* CGX mbox IDs (range 0x200 - 0x3FF) */ \ M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \ M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \ @@ -383,12 +384,16 @@ M(CPT_INST_LMTST, 0xD00, cpt_inst_lmtst, cpt_inst_lmtst_req, msg_rsp) #define MBOX_UP_MCS_MESSAGES \ M(MCS_INTR_NOTIFY, 0xE00, mcs_intr_notify, mcs_intr_info, msg_rsp) +#define MBOX_UP_REP_MESSAGES \ +M(REP_EVENT_UP_NOTIFY, 0xEF0, rep_event_up_notify, rep_event, msg_rsp) \ + enum { #define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id, MBOX_MESSAGES MBOX_UP_CGX_MESSAGES MBOX_UP_CPT_MESSAGES MBOX_UP_MCS_MESSAGES +MBOX_UP_REP_MESSAGES #undef M }; @@ -1572,6 +1577,26 @@ struct esw_cfg_req { u64 rsvd; }; +struct rep_evt_data { + u8 port_state; + u8 vf_state; + u16 rx_mode; + u16 rx_flags; + u16 mtu; + u64 rsvd[5]; +}; + +struct rep_event { + struct mbox_msghdr hdr; + u16 pcifunc; +#define RVU_EVENT_PORT_STATE BIT_ULL(0) +#define RVU_EVENT_PFVF_STATE BIT_ULL(1) +#define RVU_EVENT_MTU_CHANGE BIT_ULL(2) +#define RVU_EVENT_RX_MODE_CHANGE BIT_ULL(3) + u16 event; + struct rep_evt_data evt_data; +}; + struct flow_msg { unsigned char dmac[6]; unsigned char smac[6]; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index f7f8b96a6208..c040b667d079 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -513,6 +513,11 @@ struct rvu_switch { u16 start_entry; }; +struct rep_evtq_ent { + struct list_head node; + struct rep_event event; +}; + struct rvu { void __iomem *afreg_base; void __iomem *pfreg_base; @@ -598,6 +603,11 @@ struct rvu { int rep_cnt; u16 *rep2pfvf_map; u8 rep_mode; + struct work_struct rep_evt_work; + struct workqueue_struct *rep_evt_wq; + struct list_head rep_evtq_head; + /* Representor event lock */ + spinlock_t rep_evtq_lock; }; static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val) @@ -1045,4 +1055,5 @@ void rvu_mcs_exit(struct rvu *rvu); int rvu_rep_pf_init(struct rvu *rvu); int rvu_rep_install_mcam_rules(struct rvu *rvu); void rvu_rep_update_rules(struct rvu *rvu, u16 pcifunc, bool ena); +int rvu_rep_notify_pfvf_state(struct rvu *rvu, u16 pcifunc, bool enable); #endif /* RVU_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index 
9cf47cdee192..dda69162badc 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -380,6 +380,9 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf, cgx_set_pkind(rvu_cgx_pdata(cgx_id, rvu), lmac_id, pkind); rvu_npc_set_pkind(rvu, pkind, pfvf); + if (rvu->rep_mode) + rvu_rep_notify_pfvf_state(rvu, pcifunc, true); + break; case NIX_INTF_TYPE_LBK: vf = (pcifunc & RVU_PFVF_FUNC_MASK) - 1; @@ -516,6 +519,9 @@ static void nix_interface_deinit(struct rvu *rvu, u16 pcifunc, u8 nixlf) /* Disable DMAC filters used */ rvu_cgx_disable_dmac_entries(rvu, pcifunc); + + if (rvu->rep_mode) + rvu_rep_notify_pfvf_state(rvu, pcifunc, false); } #define NIX_BPIDS_PER_LMAC 8 diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c index a7a15d7878e5..5d06ef3aa877 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c @@ -13,6 +13,124 @@ #include "rvu.h" #include "rvu_reg.h" +#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ +static struct _req_type __maybe_unused \ +*otx2_mbox_alloc_msg_ ## _fn_name(struct rvu *rvu, int devid) \ +{ \ + struct _req_type *req; \ + \ + req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \ + &rvu->afpf_wq_info.mbox_up, devid, sizeof(struct _req_type), \ + sizeof(struct _rsp_type)); \ + if (!req) \ + return NULL; \ + req->hdr.sig = OTX2_MBOX_REQ_SIG; \ + req->hdr.id = _id; \ + return req; \ +} + +MBOX_UP_REP_MESSAGES +#undef M + +static int rvu_rep_up_notify(struct rvu *rvu, struct rep_event *event) +{ + struct rep_event *msg; + int pf; + + pf = rvu_get_pf(event->pcifunc); + + mutex_lock(&rvu->mbox_lock); + msg = otx2_mbox_alloc_msg_rep_event_up_notify(rvu, pf); + if (!msg) { + mutex_unlock(&rvu->mbox_lock); + return -ENOMEM; + } + + msg->hdr.pcifunc = event->pcifunc; + msg->event = event->event; + + memcpy(&msg->evt_data, &event->evt_data, sizeof(struct rep_evt_data)); + + otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pf); + + otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf); + + mutex_unlock(&rvu->mbox_lock); + return 0; +} + +static void rvu_rep_wq_handler(struct work_struct *work) +{ + struct rvu *rvu = container_of(work, struct rvu, rep_evt_work); + struct rep_evtq_ent *qentry; + struct rep_event *event; + unsigned long flags; + + do { + spin_lock_irqsave(&rvu->rep_evtq_lock, flags); + qentry = list_first_entry_or_null(&rvu->rep_evtq_head, + struct rep_evtq_ent, + node); + if (qentry) + list_del(&qentry->node); + + spin_unlock_irqrestore(&rvu->rep_evtq_lock, flags); + if (!qentry) + break; /* nothing more to process */ + + event = &qentry->event; + + rvu_rep_up_notify(rvu, event); + kfree(qentry); + } while (1); +} + +int rvu_mbox_handler_rep_event_notify(struct rvu *rvu, struct rep_event *req, + struct msg_rsp *rsp) +{ + struct rep_evtq_ent *qentry; + + qentry = kmalloc(sizeof(*qentry), GFP_ATOMIC); + if (!qentry) + return -ENOMEM; + + qentry->event = *req; + spin_lock(&rvu->rep_evtq_lock); + list_add_tail(&qentry->node, &rvu->rep_evtq_head); + spin_unlock(&rvu->rep_evtq_lock); + queue_work(rvu->rep_evt_wq, &rvu->rep_evt_work); + return 0; +} + +int rvu_rep_notify_pfvf_state(struct rvu *rvu, u16 pcifunc, bool enable) +{ + struct rep_event *req; + int pf; + + if (!is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc))) + return 0; + + pf = rvu_get_pf(rvu->rep_pcifunc); + + mutex_lock(&rvu->mbox_lock); + req = 
otx2_mbox_alloc_msg_rep_event_up_notify(rvu, pf); + if (!req) { + mutex_unlock(&rvu->mbox_lock); + return -ENOMEM; + } + + req->hdr.pcifunc = rvu->rep_pcifunc; + req->event |= RVU_EVENT_PFVF_STATE; + req->pcifunc = pcifunc; + req->evt_data.vf_state = enable; + + otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pf); + otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf); + + mutex_unlock(&rvu->mbox_lock); + return 0; +} + #define RVU_LF_RX_STATS(reg) \ rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_STATX(nixlf, reg)) @@ -247,6 +365,15 @@ int rvu_rep_install_mcam_rules(struct rvu *rvu) } } } + + /* Initialize the wq for handling REP events */ + INIT_LIST_HEAD(&rvu->rep_evtq_head); + INIT_WORK(&rvu->rep_evt_work, rvu_rep_wq_handler); + rvu->rep_evt_wq = alloc_workqueue("rep_evt_wq", 0, 0); + if (!rvu->rep_evt_wq) { + dev_err(rvu->dev, "REP workqueue allocation failed\n"); + return -ENOMEM; + } return 0; } diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index cd4b84d89dd1..d3d56ca6d3a2 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -443,6 +443,7 @@ struct otx2_nic { #define OTX2_FLAG_ADPTV_INT_COAL_ENABLED BIT_ULL(16) #define OTX2_FLAG_TC_MARK_ENABLED BIT_ULL(17) #define OTX2_FLAG_REP_MODE_ENABLED BIT_ULL(18) +#define OTX2_FLAG_PORT_UP BIT_ULL(19) u64 flags; u64 *cq_op_addr; @@ -1127,4 +1128,5 @@ u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb, int otx2_get_txq_by_classid(struct otx2_nic *pfvf, u16 classid); void otx2_qos_config_txschq(struct otx2_nic *pfvf); void otx2_clean_qos_queues(struct otx2_nic *pfvf); +int rvu_event_up_notify(struct otx2_nic *pf, struct rep_event *info); #endif /* OTX2_COMMON_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 5e43d1ffaa33..0f17e8af4f76 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -519,6 +519,7 @@ static void otx2_pfvf_mbox_up_handler(struct work_struct *work) switch (msg->id) { case MBOX_MSG_CGX_LINK_EVENT: + case MBOX_MSG_REP_EVENT_UP_NOTIFY: break; default: if (msg->rc) @@ -832,6 +833,9 @@ static void otx2_handle_link_event(struct otx2_nic *pf) struct cgx_link_user_info *linfo = &pf->linfo; struct net_device *netdev = pf->netdev; + if (pf->flags & OTX2_FLAG_PORT_UP) + return; + pr_info("%s NIC Link is %s %d Mbps %s duplex\n", netdev->name, linfo->link_up ? "UP" : "DOWN", linfo->speed, linfo->full_duplex ? 
"Full" : "Half"); @@ -844,6 +848,30 @@ static void otx2_handle_link_event(struct otx2_nic *pf) } } +static int otx2_mbox_up_handler_rep_event_up_notify(struct otx2_nic *pf, + struct rep_event *info, + struct msg_rsp *rsp) +{ + struct net_device *netdev = pf->netdev; + + if (info->event == RVU_EVENT_PORT_STATE) { + if (info->evt_data.port_state) { + pf->flags |= OTX2_FLAG_PORT_UP; + netif_carrier_on(netdev); + netif_tx_start_all_queues(netdev); + } else { + pf->flags &= ~OTX2_FLAG_PORT_UP; + netif_tx_stop_all_queues(netdev); + netif_carrier_off(netdev); + } + return 0; + } +#ifdef CONFIG_RVU_ESWITCH + rvu_event_up_notify(pf, info); +#endif + return 0; +} + int otx2_mbox_up_handler_mcs_intr_notify(struct otx2_nic *pf, struct mcs_intr_info *event, struct msg_rsp *rsp) @@ -913,6 +941,7 @@ static int otx2_process_mbox_msg_up(struct otx2_nic *pf, } MBOX_UP_CGX_MESSAGES MBOX_UP_MCS_MESSAGES +MBOX_UP_REP_MESSAGES #undef M break; default: @@ -1975,6 +2004,7 @@ int otx2_open(struct net_device *netdev) } pf->flags &= ~OTX2_FLAG_INTF_DOWN; + pf->flags &= ~OTX2_FLAG_PORT_UP; /* 'intf_down' may be checked on any cpu */ smp_wmb(); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c index 592b0f4406f0..6a527b824f33 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c @@ -28,6 +28,57 @@ MODULE_DESCRIPTION(DRV_STRING); MODULE_LICENSE("GPL"); MODULE_DEVICE_TABLE(pci, rvu_rep_id_table); +static int rvu_rep_get_repid(struct otx2_nic *priv, u16 pcifunc) +{ + int rep_id; + + for (rep_id = 0; rep_id < priv->rep_cnt; rep_id++) + if (priv->rep_pf_map[rep_id] == pcifunc) + return rep_id; + return -EINVAL; +} + +static int rvu_rep_notify_pfvf(struct otx2_nic *priv, u16 event, + struct rep_event *data) +{ + struct rep_event *req; + + mutex_lock(&priv->mbox.lock); + req = otx2_mbox_alloc_msg_rep_event_notify(&priv->mbox); + if (!req) { + mutex_unlock(&priv->mbox.lock); + return -ENOMEM; + } + req->event = event; + req->pcifunc = data->pcifunc; + + memcpy(&req->evt_data, &data->evt_data, sizeof(struct rep_evt_data)); + otx2_sync_mbox_msg(&priv->mbox); + mutex_unlock(&priv->mbox.lock); + return 0; +} + +static void rvu_rep_state_evt_handler(struct otx2_nic *priv, + struct rep_event *info) +{ + struct rep_dev *rep; + int rep_id; + + rep_id = rvu_rep_get_repid(priv, info->pcifunc); + rep = priv->reps[rep_id]; + if (info->evt_data.vf_state) + rep->flags |= RVU_REP_VF_INITIALIZED; + else + rep->flags &= ~RVU_REP_VF_INITIALIZED; +} + +int rvu_event_up_notify(struct otx2_nic *pf, struct rep_event *info) +{ + if (info->event & RVU_EVENT_PFVF_STATE) + rvu_rep_state_evt_handler(pf, info); + return 0; +} + static void rvu_rep_get_stats(struct work_struct *work) { struct delayed_work *del_work = to_delayed_work(work); @@ -78,6 +129,9 @@ static void rvu_rep_get_stats64(struct net_device *dev, { struct rep_dev *rep = netdev_priv(dev); + if (!(rep->flags & RVU_REP_VF_INITIALIZED)) + return; + stats->rx_packets = rep->stats.rx_frames; stats->rx_bytes = rep->stats.rx_bytes; stats->rx_dropped = rep->stats.rx_drops; @@ -132,16 +186,38 @@ static netdev_tx_t rvu_rep_xmit(struct sk_buff *skb, struct net_device *dev) static int rvu_rep_open(struct net_device *dev) { + struct rep_dev *rep = netdev_priv(dev); + struct otx2_nic *priv = rep->mdev; + struct rep_event evt = {0}; + + if (!(rep->flags & RVU_REP_VF_INITIALIZED)) + return 0; + netif_carrier_on(dev); netif_tx_start_all_queues(dev); + + evt.event = 
RVU_EVENT_PORT_STATE; + evt.evt_data.port_state = 1; + evt.pcifunc = rep->pcifunc; + rvu_rep_notify_pfvf(priv, RVU_EVENT_PORT_STATE, &evt); return 0; } static int rvu_rep_stop(struct net_device *dev) { + struct rep_dev *rep = netdev_priv(dev); + struct otx2_nic *priv = rep->mdev; + struct rep_event evt = {0}; + + if (!(rep->flags & RVU_REP_VF_INITIALIZED)) + return 0; + netif_carrier_off(dev); netif_tx_disable(dev); + evt.event = RVU_EVENT_PORT_STATE; + evt.pcifunc = rep->pcifunc; + rvu_rep_notify_pfvf(priv, RVU_EVENT_PORT_STATE, &evt); return 0; } diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h index 5d39bf636655..0cefa482f83c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h @@ -36,6 +36,8 @@ struct rep_dev { struct delayed_work stats_wrk; u16 rep_id; u16 pcifunc; +#define RVU_REP_VF_INITIALIZED BIT_ULL(0) + u8 flags; }; static inline bool otx2_rep_dev(struct pci_dev *pdev) @@ -45,4 +47,5 @@ static inline bool otx2_rep_dev(struct pci_dev *pdev) int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack); void rvu_rep_destroy(struct otx2_nic *priv); +int rvu_event_up_notify(struct otx2_nic *pf, struct rep_event *info); #endif /* REP_H */ From patchwork Tue Jun 11 16:22:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 13694013 X-Patchwork-Delegate: kuba@kernel.org Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0967A78C88; Tue, 11 Jun 2024 16:22:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.156.173 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122977; cv=none; b=WreQ4/8AAmFxL6Gju2JI/S6NY4BRPPcPCQ66qN7plA1np686UxCgjWi4as9ihaTEnHKYyJhhS4wJGCUpgslvjv8cVx2UapgyIBdceI01LkEsHBCP6o7rdA7f4rUg+/UklojfHToBcmVlXB9iLbqCOnMycnUe6W4MhKSvQoq44rc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122977; c=relaxed/simple; bh=/vZvduUP4Jcb/xXodAzJYL1w7O4WoH/d8cEZ9PInfPU=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=cVzzFLNx395K5IP4qd4ZoBpl8vfXf7uy/rRE/XoGY2Xaag1bOu249C8JtCv1UpCJGPPsGDtJGARM0+xrMGT+3aW+0wxmG0QOM8RrWfE/chx+E37csGapdBCTgHCmLXd8bvrOf0giUz3bXoKmhpQeZxuDyA21RTaEId7VNrCBYQw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b=MMK+u8aJ; arc=none smtp.client-ip=67.231.156.173 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=marvell.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="MMK+u8aJ" Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45BGJSN5024152; Tue, 11 Jun 2024 09:22:46 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-type:date:from:in-reply-to:message-id:mime-version :references:subject:to; 
s=pfpt0220; bh=oCCqOfpNqYQ9/lY5Wf/iO4M7x nfMhpvsc28RqiMHpCE=; b=MMK+u8aJ1fU5Iy/hEfuJ4u6II0NNqu9YR8G7du8VZ S5su4OjEZ2S8qPaNnNsW4Hz0Ii+1h0cTuzenAlJlKdXVMkYneBiqkdoR9tdHrHQZ WEiXndSuNfXdPcr5dXvnJgAsKExfjU5PoRK1DqYasAG1OkQhjhrBttIdbCOxIRYs ax2Io/vY9kCycuI1k/85+tA60Xc92F7VmIDURT0ImVqepINgMBFP2uAEnmvGU60q oQ7aFxiCdG9qeUYet4gxKnrV5Q8C7NRqp1toj76RBNHI4gp6ZXoM9d4MPrYwMRdi wJl9R4yRtf1w2/8mA6xRRpaVtQcimW8NdrJM2ADLy5Pww==
Received: from dc6wp-exch02.marvell.com ([4.21.29.225]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3ympthaya6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 11 Jun 2024 09:22:46 -0700 (PDT)
Received: from DC6WP-EXCH02.marvell.com (10.76.176.209) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 11 Jun 2024 09:22:46 -0700
Received: from maili.marvell.com (10.69.176.80) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 11 Jun 2024 09:22:45 -0700
Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id D015D3F7059; Tue, 11 Jun 2024 09:22:42 -0700 (PDT)
From: Geetha sowjanya
To: ,
CC: , , , , , , ,
Subject: [net-next PATCH v4 08/10] octeontx2-pf: Configure VF mtu via representor
Date: Tue, 11 Jun 2024 21:52:11 +0530
Message-ID: <20240611162213.22213-9-gakula@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240611162213.22213-1-gakula@marvell.com>
References: <20240611162213.22213-1-gakula@marvell.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
X-Proofpoint-GUID: 3roULdbkhpDk0sStHT2LESr_19xW1dqs
X-Proofpoint-ORIG-GUID: 3roULdbkhpDk0sStHT2LESr_19xW1dqs
X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16 definitions=2024-06-11_09,2024-06-11_01,2024-05-17_01
X-Patchwork-Delegate: kuba@kernel.org

Add support to manage the MTU configuration of a VF through its representor. On a representor MTU update, an mbox notification is sent to the VF so that it updates its MTU accordingly.
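As a rough illustration of this propagation path, the following self-contained user-space sketch (plain C, not kernel code) mimics the representor-side MTU change handler forwarding the new MTU to the representee. The event name mirrors the patch; the mailbox transport is mocked with a direct call, and the device names and pcifunc value are invented.

/* Sketch: representor changes its MTU, then notifies the representee so both
 * ends stay consistent. Transport is mocked; real code goes
 * representor -> AF -> VF over the mailbox.
 */
#include <stdio.h>
#include <stdint.h>

#define RVU_EVENT_MTU_CHANGE  (1U << 2)

struct mtu_event {
    uint16_t pcifunc;   /* which representee the event targets */
    uint16_t event;
    uint16_t mtu;
};

struct fake_netdev {
    const char *name;
    int mtu;
};

/* Models the RVU_EVENT_MTU_CHANGE branch of the representee's up-message handler. */
static void representee_handle_mtu(struct fake_netdev *dev, const struct mtu_event *ev)
{
    if (ev->event & RVU_EVENT_MTU_CHANGE)
        dev->mtu = ev->mtu;
}

/* Models rvu_rep_change_mtu(): update the representor, then notify the representee. */
static void representor_change_mtu(struct fake_netdev *rep, struct fake_netdev *vf,
                                   uint16_t vf_pcifunc, int new_mtu)
{
    printf("%s: changing MTU from %d to %d\n", rep->name, rep->mtu, new_mtu);
    rep->mtu = new_mtu;

    struct mtu_event ev = {
        .pcifunc = vf_pcifunc,
        .event = RVU_EVENT_MTU_CHANGE,
        .mtu = (uint16_t)new_mtu,
    };
    representee_handle_mtu(vf, &ev);    /* real driver: mbox notification */
}

int main(void)
{
    struct fake_netdev rep = { "r0p1", 1500 };
    struct fake_netdev vf  = { "eth0", 1500 };

    representor_change_mtu(&rep, &vf, 0x400, 9000);  /* ip link set r0p1 mtu 9000 */
    printf("%s MTU is now %d\n", vf.name, vf.mtu);
    return 0;
}

In the patch, this corresponds to rvu_rep_change_mtu() on the representor and the RVU_EVENT_MTU_CHANGE branch added to otx2_mbox_up_handler_rep_event_up_notify() on the PF/VF side.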
Signed-off-by: Sai Krishna Signed-off-by: Geetha sowjanya Reviewed-by: Simon Horman --- .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 5 +++++ .../net/ethernet/marvell/octeontx2/nic/rep.c | 17 +++++++++++++++++ 2 files changed, 22 insertions(+) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 0f17e8af4f76..91a458c5028a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -854,6 +854,11 @@ static int otx2_mbox_up_handler_rep_event_up_notify(struct otx2_nic *pf, { struct net_device *netdev = pf->netdev; + if (info->event == RVU_EVENT_MTU_CHANGE) { + netdev->mtu = info->evt_data.mtu; + return 0; + } + if (info->event == RVU_EVENT_PORT_STATE) { if (info->evt_data.port_state) { pf->flags |= OTX2_FLAG_PORT_UP; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c index 6a527b824f33..d984b7d9dc64 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c @@ -79,6 +79,22 @@ int rvu_event_up_notify(struct otx2_nic *pf, struct rep_event *info) return 0; } +static int rvu_rep_change_mtu(struct net_device *dev, int new_mtu) +{ + struct rep_dev *rep = netdev_priv(dev); + struct otx2_nic *priv = rep->mdev; + struct rep_event evt = {0}; + + netdev_info(dev, "Changing MTU from %d to %d\n", + dev->mtu, new_mtu); + dev->mtu = new_mtu; + + evt.evt_data.mtu = new_mtu; + evt.pcifunc = rep->pcifunc; + rvu_rep_notify_pfvf(priv, RVU_EVENT_MTU_CHANGE, &evt); + return 0; +} + static void rvu_rep_get_stats(struct work_struct *work) { struct delayed_work *del_work = to_delayed_work(work); @@ -226,6 +242,7 @@ static const struct net_device_ops rvu_rep_netdev_ops = { .ndo_stop = rvu_rep_stop, .ndo_start_xmit = rvu_rep_xmit, .ndo_get_stats64 = rvu_rep_get_stats64, + .ndo_change_mtu = rvu_rep_change_mtu, }; static int rvu_rep_napi_init(struct otx2_nic *priv, From patchwork Tue Jun 11 16:22:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 13694014 X-Patchwork-Delegate: kuba@kernel.org Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DB4F441C6D; Tue, 11 Jun 2024 16:23:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.148.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122988; cv=none; b=nTAFxRz8S5n0nFvrlnkG/CUMreTc2c3nGyI6zhGD1thwt/OXiDGEHYd+gic9uQaiRfPgm8bpt7iOyUhZIPgWSotbxVnOfNLO0OYWDTmPmYEBD+kKR8sMs2UogUlRDCCoxsnUWZ9krAyzyDx/+Myuq1fG58ofotSdGpNV+ICNFfA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122988; c=relaxed/simple; bh=ZNGSzFZzO9wj6VEvpOiy3IY8jWHFfGCnawHOZ6eejj4=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=QxeLKZCljV11rBCW7HWf8izRAK694HujQLb4r/nub90m22gsulfKwQ/mTBdn+X6ZoocNPovOvt2BVEBVCnw4xi+noIsp4RusmAUEl+9UM7MEbn4/WVYfKatEO9+0+x7uqTzPBpFBbpEUjFGuMcmHSHs+qRTQUzPtPdujtcgoHo8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com 
header.b=jQiFIEec; arc=none smtp.client-ip=67.231.148.174
Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com
Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=marvell.com
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="jQiFIEec"
Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45BGJRit021185; Tue, 11 Jun 2024 09:22:57 -0700
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-type:date:from:in-reply-to:message-id:mime-version :references:subject:to; s=pfpt0220; bh=P0oHtFMjNmW+1mil9ma82XkEn LUbzNpEGqehz3/BCk4=; b=jQiFIEec+8tJJf72451yKOTJ9NyGllv+d1Fj9qIuM UZwc5kHyqotNQ997omDMPzJ02JU+gVCW7zjLVw+mIe7XuBS7GsqS5l71lFbmBvIl mguWtRFyYY5W6m0dlMm1VZmBhFRiqYEsVXdkVfvi3msSsrkGTFqm0LizCqBp6fXp Qd2aNfD5n5rc+QGg7HG6U74GscS3EcAewIA6NfA9/Ucx1vlCDuhwBCHHOf35DlY2 jfwE+AmpBILEcBK2X3a0X7Z3qOpsDkvP2cjdD7NyIHG5RV0KiE21uQ0PjIZy+GzV 2CdoaI9Tz3zFLw2miWGIrdyaZu8LY1URzaK5pNetNnn+g==
Received: from dc5-exch05.marvell.com ([199.233.59.128]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3ypmq0hcu4-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 11 Jun 2024 09:22:57 -0700 (PDT)
Received: from DC5-EXCH05.marvell.com (10.69.176.209) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 11 Jun 2024 09:22:49 -0700
Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 11 Jun 2024 09:22:49 -0700
Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id 487B53F7059; Tue, 11 Jun 2024 09:22:46 -0700 (PDT)
From: Geetha sowjanya
To: ,
CC: , , , , , , ,
Subject: [net-next PATCH v5 09/10] octeontx2-pf: Add representors for sdp MAC
Date: Tue, 11 Jun 2024 21:52:12 +0530
Message-ID: <20240611162213.22213-10-gakula@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240611162213.22213-1-gakula@marvell.com>
References: <20240611162213.22213-1-gakula@marvell.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
X-Proofpoint-GUID: kW2wJVBgZ91kESku5KF542bdaeKCEwzh
X-Proofpoint-ORIG-GUID: kW2wJVBgZ91kESku5KF542bdaeKCEwzh
X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16 definitions=2024-06-11_09,2024-06-11_01,2024-05-17_01
X-Patchwork-Delegate: kuba@kernel.org

Hardware supports different types of MACs, e.g. RPM, SDP and LBK, and does not have an internal switch. LBK is a loopback MAC which punts egress packets back into the ingress pipeline. RPM and SDP MACs support ingress/egress packet IO on interfaces with different sets of capabilities, with interface modes ranging from 10/100/1000BaseX to 100Gbps KR. At netdev driver registration time, the PF fetches MAC-related information from the Admin Function driver ('drivers/net/ethernet/marvell/octeontx2/af') and sets up ingress/egress queues etc. so that packet IO on the channels of these different MACs is possible.

This patch adds representors for the SDP MAC.
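To illustrate the send-queue to SDP-channel mapping this patch introduces (one SMQ/TL4 pair per channel, with SQs assigned round-robin via qidx % tx_chan_cnt), here is a small self-contained C sketch. The channel base, channel count and scheduler queue IDs below are made-up values; only the modulo mapping mirrors the driver logic.

/* Sketch of the SQ -> channel/SMQ round-robin mapping for SDP representors:
 * send queue qidx is pinned to channel (tx_chan_base + qidx % tx_chan_cnt)
 * and to the matching per-channel SMQ.
 */
#include <stdio.h>

#define NUM_SQS       8
#define TX_CHAN_BASE  0x800   /* hypothetical SDP channel base */
#define TX_CHAN_CNT   4       /* channels reported by NIX_LF_ALLOC */

int main(void)
{
    /* One SMQ (and TL4) per channel, as requested in otx2_txsch_alloc(). */
    int smq[TX_CHAN_CNT] = { 100, 101, 102, 103 };   /* hypothetical scheduler queue IDs */

    for (int qidx = 0; qidx < NUM_SQS; qidx++) {
        int chan_offset = qidx % TX_CHAN_CNT;        /* as in otx2_sq_init() */
        printf("SQ%-2d -> default_chan %#x, SMQ %d\n",
               qidx, (unsigned int)(TX_CHAN_BASE + chan_offset), smq[chan_offset]);
    }
    return 0;
}

In the patch, this corresponds to the chan_offset computed in otx2_sq_init() and passed to sq_aq_init(), and to the per-channel SMQ selection added to otx2_get_smq_idx().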
Signed-off-by: Geetha sowjanya --- .../ethernet/marvell/octeontx2/nic/cn10k.c | 4 +- .../ethernet/marvell/octeontx2/nic/cn10k.h | 2 +- .../marvell/octeontx2/nic/otx2_common.c | 50 +++++++++++++++---- .../marvell/octeontx2/nic/otx2_common.h | 27 +++++++--- .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 13 +++-- .../ethernet/marvell/octeontx2/nic/otx2_reg.h | 1 + .../ethernet/marvell/octeontx2/nic/otx2_vf.c | 12 ++++- 7 files changed, 84 insertions(+), 25 deletions(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c index c1c99d7054f8..777e123e69c7 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c @@ -72,7 +72,7 @@ int cn10k_lmtst_init(struct otx2_nic *pfvf) } EXPORT_SYMBOL(cn10k_lmtst_init); -int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura) +int cn10k_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura) { struct nix_cn10k_aq_enq_req *aq; struct otx2_nic *pfvf = dev; @@ -88,7 +88,7 @@ int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura) aq->sq.ena = 1; aq->sq.smq = otx2_get_smq_idx(pfvf, qidx); aq->sq.smq_rr_weight = mtu_to_dwrr_weight(pfvf, pfvf->tx_max_pktlen); - aq->sq.default_chan = pfvf->hw.tx_chan_base; + aq->sq.default_chan = pfvf->hw.tx_chan_base + chan_offset; aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */ aq->sq.sqb_aura = sqb_aura; aq->sq.sq_int_ena = NIX_SQINT_BITS; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h index c1861f7de254..e3f0bce9908f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h @@ -26,7 +26,7 @@ static inline int mtu_to_dwrr_weight(struct otx2_nic *pfvf, int mtu) int cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq); void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx); -int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); +int cn10k_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura); int cn10k_lmtst_init(struct otx2_nic *pfvf); int cn10k_free_all_ipolicers(struct otx2_nic *pfvf); int cn10k_alloc_matchall_ipolicer(struct otx2_nic *pfvf); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 4b53af087cb4..af37015ef8e2 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -253,7 +253,7 @@ int otx2_config_pause_frm(struct otx2_nic *pfvf) struct cgx_pause_frm_cfg *req; int err; - if (is_otx2_lbkvf(pfvf->pdev)) + if (is_otx2_lbkvf(pfvf->pdev) || is_otx2_sdp_rep(pfvf->pdev)) return 0; mutex_lock(&pfvf->mbox.lock); @@ -647,12 +647,22 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool txschq_for req->reg[2] = NIX_AF_MDQX_SCHEDULE(schq); req->regval[2] = dwrr_val; } else if (lvl == NIX_TXSCH_LVL_TL4) { + int sdp_chan = hw->tx_chan_base + prio; + + if (is_otx2_sdp_rep(pfvf->pdev)) + prio = 0; parent = schq_list[NIX_TXSCH_LVL_TL3][prio]; req->reg[0] = NIX_AF_TL4X_PARENT(schq); req->regval[0] = parent << 16; req->num_regs++; req->reg[1] = NIX_AF_TL4X_SCHEDULE(schq); req->regval[1] = dwrr_val; + if (is_otx2_sdp_rep(pfvf->pdev)) { + req->num_regs++; + req->reg[2] = NIX_AF_TL4X_SDP_LINK_CFG(schq); + req->regval[2] = BIT_ULL(12) | BIT_ULL(13) | + (sdp_chan & 0xff); + } } else if (lvl == NIX_TXSCH_LVL_TL3) { parent = schq_list[NIX_TXSCH_LVL_TL2][prio]; 
req->reg[0] = NIX_AF_TL3X_PARENT(schq); @@ -660,7 +670,8 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool txschq_for req->num_regs++; req->reg[1] = NIX_AF_TL3X_SCHEDULE(schq); req->regval[1] = dwrr_val; - if (lvl == hw->txschq_link_cfg_lvl) { + if (lvl == hw->txschq_link_cfg_lvl && + !is_otx2_sdp_rep(pfvf->pdev)) { req->num_regs++; req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link); /* Enable this queue and backpressure @@ -677,7 +688,8 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool txschq_for req->reg[1] = NIX_AF_TL2X_SCHEDULE(schq); req->regval[1] = TXSCH_TL1_DFLT_RR_PRIO << 24 | dwrr_val; - if (lvl == hw->txschq_link_cfg_lvl) { + if (lvl == hw->txschq_link_cfg_lvl && + !is_otx2_sdp_rep(pfvf->pdev)) { req->num_regs++; req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link); /* Enable this queue and backpressure @@ -736,6 +748,7 @@ EXPORT_SYMBOL(otx2_smq_flush); int otx2_txsch_alloc(struct otx2_nic *pfvf) { + int chan_cnt = pfvf->hw.tx_chan_cnt; struct nix_txsch_alloc_req *req; struct nix_txsch_alloc_rsp *rsp; int lvl, schq, rc; @@ -748,6 +761,12 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf) /* Request one schq per level */ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) req->schq[lvl] = 1; + + if (is_otx2_sdp_rep(pfvf->pdev) && chan_cnt > 1) { + req->schq[NIX_TXSCH_LVL_SMQ] = chan_cnt; + req->schq[NIX_TXSCH_LVL_TL4] = chan_cnt; + } + rc = otx2_sync_mbox_msg(&pfvf->mbox); if (rc) return rc; @@ -758,10 +777,12 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf) return PTR_ERR(rsp); /* Setup transmit scheduler list */ - for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) + for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { + pfvf->hw.txschq_cnt[lvl] = rsp->schq[lvl]; for (schq = 0; schq < rsp->schq[lvl]; schq++) pfvf->hw.txschq_list[lvl][schq] = rsp->schq_list[lvl][schq]; + } pfvf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl; pfvf->hw.txschq_aggr_lvl_rr_prio = rsp->aggr_lvl_rr_prio; @@ -799,12 +820,15 @@ EXPORT_SYMBOL(otx2_txschq_free_one); void otx2_txschq_stop(struct otx2_nic *pfvf) { - int lvl, schq; + int lvl, schq, idx; /* free non QOS TLx nodes */ - for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) - otx2_txschq_free_one(pfvf, lvl, - pfvf->hw.txschq_list[lvl][0]); + for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { + for (idx = 0; idx < pfvf->hw.txschq_cnt[lvl]; idx++) { + otx2_txschq_free_one(pfvf, lvl, + pfvf->hw.txschq_list[lvl][idx]); + } + } /* Clear the txschq list */ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { @@ -884,7 +908,7 @@ static int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura) return otx2_sync_mbox_msg(&pfvf->mbox); } -int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura) +int otx2_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura) { struct otx2_nic *pfvf = dev; struct otx2_snd_queue *sq; @@ -903,7 +927,7 @@ int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura) aq->sq.ena = 1; aq->sq.smq = otx2_get_smq_idx(pfvf, qidx); aq->sq.smq_rr_quantum = mtu_to_dwrr_weight(pfvf, pfvf->tx_max_pktlen); - aq->sq.default_chan = pfvf->hw.tx_chan_base; + aq->sq.default_chan = pfvf->hw.tx_chan_base + chan_offset; aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */ aq->sq.sqb_aura = sqb_aura; aq->sq.sq_int_ena = NIX_SQINT_BITS; @@ -926,6 +950,7 @@ int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura) struct otx2_qset *qset = &pfvf->qset; struct otx2_snd_queue *sq; struct otx2_pool *pool; + u8 chan_offset; int err; pool = &pfvf->qset.pool[sqb_aura]; @@ -972,7 +997,8 @@ int otx2_sq_init(struct otx2_nic *pfvf, u16 
qidx, u16 sqb_aura) sq->stats.bytes = 0; sq->stats.pkts = 0; - err = pfvf->hw_ops->sq_aq_init(pfvf, qidx, sqb_aura); + chan_offset = qidx % pfvf->hw.tx_chan_cnt; + err = pfvf->hw_ops->sq_aq_init(pfvf, qidx, chan_offset, sqb_aura); if (err) { kfree(sq->sg); sq->sg = NULL; @@ -1739,6 +1765,8 @@ void mbox_handler_nix_lf_alloc(struct otx2_nic *pfvf, pfvf->hw.sqb_size = rsp->sqb_size; pfvf->hw.rx_chan_base = rsp->rx_chan_base; pfvf->hw.tx_chan_base = rsp->tx_chan_base; + pfvf->hw.rx_chan_cnt = rsp->rx_chan_cnt; + pfvf->hw.tx_chan_cnt = rsp->tx_chan_cnt; pfvf->hw.lso_tsov4_idx = rsp->lso_tsov4_idx; pfvf->hw.lso_tsov6_idx = rsp->lso_tsov6_idx; pfvf->hw.cgx_links = rsp->cgx_links; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index d3d56ca6d3a2..d3c78538d36c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -42,6 +42,8 @@ #define PCI_SUBSYS_DEVID_96XX_RVU_PFVF 0xB200 #define PCI_SUBSYS_DEVID_CN10K_B_RVU_PFVF 0xBD00 +#define PCI_DEVID_OCTEONTX2_SDP_REP 0xA0F7 + /* PCI BAR nos */ #define PCI_CFG_REG_BAR_NUM 2 #define PCI_MBOX_BAR_NUM 4 @@ -198,6 +200,7 @@ struct otx2_hw { /* NIX */ u8 txschq_link_cfg_lvl; + u8 txschq_cnt[NIX_TXSCH_LVL_CNT]; u8 txschq_aggr_lvl_rr_prio; u16 txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; u16 matchall_ipolicer; @@ -208,6 +211,8 @@ struct otx2_hw { /* HW settings, coalescing etc */ u16 rx_chan_base; u16 tx_chan_base; + u8 rx_chan_cnt; + u8 tx_chan_cnt; u16 cq_qcount_wait; u16 cq_ecount_wait; u16 rq_skid; @@ -344,7 +349,8 @@ struct otx2_flow_config { }; struct dev_hw_ops { - int (*sq_aq_init)(void *dev, u16 qidx, u16 sqb_aura); + int (*sq_aq_init)(void *dev, u16 qidx, u8 chan_offset, + u16 sqb_aura); void (*sqe_flush)(void *dev, struct otx2_snd_queue *sq, int size, int qidx); int (*refill_pool_ptrs)(void *dev, struct otx2_cq_queue *cq); @@ -538,6 +544,11 @@ static inline bool is_96xx_B0(struct pci_dev *pdev) (pdev->subsystem_device == PCI_SUBSYS_DEVID_96XX_RVU_PFVF); } +static inline bool is_otx2_sdp_rep(struct pci_dev *pdev) +{ + return pdev->device == PCI_DEVID_OCTEONTX2_SDP_REP; +} + /* REVID for PCIe devices. 
* Bits 0..1: minor pass, bit 3..2: major pass * bits 7..4: midr id @@ -900,15 +911,19 @@ static inline void otx2_dma_unmap_page(struct otx2_nic *pfvf, static inline u16 otx2_get_smq_idx(struct otx2_nic *pfvf, u16 qidx) { u16 smq; + int idx; + #ifdef CONFIG_DCB if (qidx < NIX_PF_PFC_PRIO_MAX && pfvf->pfc_alloc_status[qidx]) return pfvf->pfc_schq_list[NIX_TXSCH_LVL_SMQ][qidx]; #endif /* check if qidx falls under QOS queues */ - if (qidx >= pfvf->hw.non_qos_queues) + if (qidx >= pfvf->hw.non_qos_queues) { smq = pfvf->qos.qid_to_sqmap[qidx - pfvf->hw.non_qos_queues]; - else - smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][0]; + } else { + idx = qidx % pfvf->hw.txschq_cnt[NIX_TXSCH_LVL_SMQ]; + smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][idx]; + } return smq; } @@ -975,8 +990,8 @@ int otx2_nix_config_bp(struct otx2_nic *pfvf, bool enable); void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, int qidx); void otx2_cleanup_tx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq); int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura); -int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); -int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); +int otx2_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura); +int cn10k_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura); int otx2_alloc_buffer(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, dma_addr_t *dma); int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id, diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 91a458c5028a..fcdeac36306d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -1593,10 +1593,15 @@ int otx2_init_hw_resources(struct otx2_nic *pf) } for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { - err = otx2_txschq_config(pf, lvl, 0, false); - if (err) { - mutex_unlock(&mbox->lock); - goto err_free_nix_queues; + int idx; + + for (idx = 0; idx < pf->hw.txschq_cnt[lvl]; idx++) { + err = otx2_txschq_config(pf, lvl, idx, false); + if (err) { + dev_err(pf->dev, "Failed to config TXSCH\n"); + mutex_unlock(&mbox->lock); + goto err_free_nix_queues; + } } } diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h index 45a32e4b49d1..defcb9ea21e1 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h @@ -140,6 +140,7 @@ /* NIX AF transmit scheduler registers */ #define NIX_AF_SMQX_CFG(a) (0x700 | (a) << 16) +#define NIX_AF_TL4X_SDP_LINK_CFG(a) (0xB10 | (a) << 16) #define NIX_AF_TL1X_SCHEDULE(a) (0xC00 | (a) << 16) #define NIX_AF_TL1X_CIR(a) (0xC20 | (a) << 16) #define NIX_AF_TL1X_TOPOLOGY(a) (0xC80 | (a) << 16) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c index 0486fca8b573..64c88119b79a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c @@ -21,6 +21,7 @@ static const struct pci_device_id otx2_vf_id_table[] = { { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_AFVF) }, { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF) }, + { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_SDP_REP) }, { } }; @@ -371,7 +372,7 @@ static int otx2vf_open(struct net_device *netdev) /* LBKs do not receive link events so tell everyone we are up here */ vf = 
netdev_priv(netdev); - if (is_otx2_lbkvf(vf->pdev)) { + if (is_otx2_lbkvf(vf->pdev) || is_otx2_sdp_rep(vf->pdev)) { pr_info("%s NIC Link is UP\n", netdev->name); netif_carrier_on(netdev); netif_tx_start_all_queues(netdev); @@ -683,6 +684,15 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id) snprintf(netdev->name, sizeof(netdev->name), "lbk%d", n); } + if (is_otx2_sdp_rep(vf->pdev)) { + int n; + + n = (vf->pcifunc >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK; + n -= 1; + snprintf(netdev->name, sizeof(netdev->name), "sdp%d-%d", + pdev->bus->number, n); + } + err = register_netdev(netdev); if (err) { dev_err(dev, "Failed to register netdevice\n"); From patchwork Tue Jun 11 16:22:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 13694015 X-Patchwork-Delegate: kuba@kernel.org Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 740A343AB4; Tue, 11 Jun 2024 16:23:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.148.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122991; cv=none; b=cumUbHGVvLK1xOao9A9ifQi6i09wHJTDr0uEzvWUU9Ev+ZLnaE7g1sKb1Cvd6iG4D7TmVpWm2K8UhdpyIQHLBDlh5uOBPXZMT4MjhXedHIepQQ4DwZzWAWDwUgqVdEoYNbjkCyfvkkJCDYjs/v3fINUrqgI9E/8ktaXLN3Fw7pg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1718122991; c=relaxed/simple; bh=m5UE1MiwSmATBLpmwbRdWhRqhM/mEKs4X2RVg6dEgsY=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=sUmzVByT57XQ1AuIu10waufEGkDB77by2DextCm4Si6qdhny2Yl1OWAE3JyCzbvPColYXNtz7xDevfZ0QpRStO6fiL3ICB5do2vZfTHNK4PkgAMB7Gde/6gWnuZmpNbW+hHPz6YclNisFwS5qM6QoJOWgXf3VHpEpy0Y4m+KA+I= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b=KSqLXNk9; arc=none smtp.client-ip=67.231.148.174 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=marvell.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="KSqLXNk9" Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45BGJRiu021185; Tue, 11 Jun 2024 09:22:58 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-type:date:from:in-reply-to:message-id:mime-version :references:subject:to; s=pfpt0220; bh=y0whBG5VzxLNn5/8tDUFHBocf GMBvnXU/5WiaR7V9GI=; b=KSqLXNk9hbSqap1dt2wrO0rhC4zadRkJuUx//t1pS s54laxjMxsDkHOSvF/87fFJUXjdLcBaKyJPoezNlPQBeMfKqA8qrlEmZbes6LNzQ wttpe80DYl1vAOlVQfNlwrJEQCaNHKvncAS0Wu0g+XqmWS7Uwz0LkpozemVQ+vpz QD53u/4YQeqno+uFPe2t48I0hSyn+94+MoHV4Bx0MF3TlUCTksfV8SJPt4d/tfny oYU+5gHHSQ4hZipObWfsD3wxxdQmcgMt9JJAgP8dLWq3ULvJZ2euRWz19yfMspUZ BewUKYa6xoq1IjL4Hzbx8QfSrj6/chQXX1w2N0dmiSeIw== Received: from dc5-exch05.marvell.com ([199.233.59.128]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3ypmq0hcu4-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 11 Jun 2024 
09:22:57 -0700 (PDT) Received: from DC5-EXCH05.marvell.com (10.69.176.209) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 11 Jun 2024 09:22:52 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 11 Jun 2024 09:22:52 -0700 Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id B51603F7059; Tue, 11 Jun 2024 09:22:49 -0700 (PDT) From: Geetha sowjanya To: , CC: , , , , , , , Subject: [net-next PATCH v5 10/10] octeontx2-pf: Add devlink port support Date: Tue, 11 Jun 2024 21:52:13 +0530 Message-ID: <20240611162213.22213-11-gakula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240611162213.22213-1-gakula@marvell.com> References: <20240611162213.22213-1-gakula@marvell.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Proofpoint-GUID: OifaibnbaQWKG9k9aLkCjNtOBACoXjZT X-Proofpoint-ORIG-GUID: OifaibnbaQWKG9k9aLkCjNtOBACoXjZT X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16 definitions=2024-06-11_09,2024-06-11_01,2024-05-17_01 X-Patchwork-Delegate: kuba@kernel.org Register devlink port for the rvu representors. Signed-off-by: Geetha sowjanya Reviewed-by: Simon Horman --- .../net/ethernet/marvell/octeontx2/nic/rep.c | 74 +++++++++++++++++++ .../net/ethernet/marvell/octeontx2/nic/rep.h | 2 + 2 files changed, 76 insertions(+) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c index d984b7d9dc64..2a0ec4c25c22 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c @@ -28,6 +28,73 @@ MODULE_DESCRIPTION(DRV_STRING); MODULE_LICENSE("GPL"); MODULE_DEVICE_TABLE(pci, rvu_rep_id_table); +static int rvu_rep_dl_port_fn_hw_addr_get(struct devlink_port *port, + u8 *hw_addr, int *hw_addr_len, + struct netlink_ext_ack *extack) +{ + struct otx2_devlink *otx2_dl = devlink_priv(port->devlink); + int rep_id = port->index; + struct otx2_nic *priv; + struct rep_dev *rep; + + priv = otx2_dl->pfvf; + rep = priv->reps[rep_id]; + ether_addr_copy(hw_addr, rep->mac); + *hw_addr_len = ETH_ALEN; + return 0; +} + +static int rvu_rep_dl_port_fn_hw_addr_set(struct devlink_port *port, + const u8 *hw_addr, int hw_addr_len, + struct netlink_ext_ack *extack) +{ + struct otx2_devlink *otx2_dl = devlink_priv(port->devlink); + int rep_id = port->index; + struct otx2_nic *priv; + struct rep_dev *rep; + + priv = otx2_dl->pfvf; + rep = priv->reps[rep_id]; + eth_hw_addr_set(rep->netdev, hw_addr); + ether_addr_copy(rep->mac, hw_addr); + return 0; +} + +static const struct devlink_port_ops rvu_rep_dl_port_ops = { + .port_fn_hw_addr_get = rvu_rep_dl_port_fn_hw_addr_get, + .port_fn_hw_addr_set = rvu_rep_dl_port_fn_hw_addr_set, +}; + +static void rvu_rep_devlink_port_unregister(struct rep_dev *rep) +{ + devlink_port_unregister(&rep->dl_port); +} + +static int rvu_rep_devlink_port_register(struct rep_dev *rep) +{ + struct devlink_port_attrs attrs = {}; + struct otx2_nic *priv = rep->mdev; + struct devlink *dl = priv->dl->dl; + int err; + + attrs.flavour = DEVLINK_PORT_FLAVOUR_PCI_PF; + attrs.pci_vf.pf = rvu_get_pf(rep->pcifunc); + attrs.pci_vf.vf = rep->pcifunc & RVU_PFVF_FUNC_MASK; + if (attrs.pci_vf.vf) + 
attrs.flavour = DEVLINK_PORT_FLAVOUR_PCI_VF; + + devlink_port_attrs_set(&rep->dl_port, &attrs); + + err = devl_port_register_with_ops(dl, &rep->dl_port, rep->rep_id, + &rvu_rep_dl_port_ops); + if (err) { + dev_err(rep->mdev->dev, "devlink_port_register failed: %d\n", + err); + return err; + } + return 0; +} + static int rvu_rep_get_repid(struct otx2_nic *priv, u16 pcifunc) { int rep_id; @@ -338,6 +405,7 @@ void rvu_rep_destroy(struct otx2_nic *priv) for (rep_id = 0; rep_id < priv->rep_cnt; rep_id++) { rep = priv->reps[rep_id]; unregister_netdev(rep->netdev); + rvu_rep_devlink_port_unregister(rep); free_netdev(rep->netdev); } kfree(priv->reps); @@ -380,6 +448,11 @@ int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack) snprintf(ndev->name, sizeof(ndev->name), "r%dp%d", rep_id, rvu_get_pf(pcifunc)); + err = rvu_rep_devlink_port_register(rep); + if (err) + goto exit; + + SET_NETDEV_DEVLINK_PORT(ndev, &rep->dl_port); eth_hw_addr_random(ndev); err = register_netdev(ndev); if (err) { @@ -400,6 +473,7 @@ int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack) while (--rep_id >= 0) { rep = priv->reps[rep_id]; unregister_netdev(rep->netdev); + rvu_rep_devlink_port_unregister(rep); free_netdev(rep->netdev); } kfree(priv->reps); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h index 0cefa482f83c..d81af376bf50 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h @@ -34,10 +34,12 @@ struct rep_dev { struct net_device *netdev; struct rep_stats stats; struct delayed_work stats_wrk; + struct devlink_port dl_port; u16 rep_id; u16 pcifunc; #define RVU_REP_VF_INITIALIZED BIT_ULL(0) u8 flags; + u8 mac[ETH_ALEN]; }; static inline bool otx2_rep_dev(struct pci_dev *pdev)