From patchwork Wed Dec 11 15:31:49 2024
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 13903647
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: andrew@lunn.ch, olteanv@gmail.com, davem@davemloft.net,
    edumazet@google.com, kuba@kernel.org, pabeni@redhat.com,
    horms@kernel.org, nbd@nbd.name, sean.wang@mediatek.com,
    Mark-MC.Lee@mediatek.com, lorenzo.bianconi83@gmail.com
Subject: [RFC net-next 1/5] net: airoha: Enable Tx drop capability for each Tx DMA ring
Date: Wed, 11 Dec 2024 16:31:49 +0100
Message-ID: <7447d3ae100352962c6a0c967bfeed73df9068b0.1733930558.git.lorenzo@kernel.org>

This is a preliminary patch in order to enable hw Qdisc offloading.
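The hw Qdisc offload introduced later in the series relies on a fixed mapping
between Tx DMA rings, QoS channels and QoS queues. The following standalone
sketch shows that mapping; the two constants match the driver (see patch 2),
while select_queue(), main() and the sample values are purely illustrative:

	/*
	 * Illustrative only: userspace sketch of the Tx queue selection
	 * scheme used by the series. The QoS channel comes from the DSA
	 * user port index (or the GDM port id for non-DSA devices), the
	 * QoS queue from the skb priority.
	 */
	#include <stdio.h>

	#define AIROHA_NUM_QOS_QUEUES	8
	#define AIROHA_NUM_TX_RING	32

	static unsigned int select_queue(unsigned int channel,
					 unsigned int priority,
					 unsigned int num_tx_queues)
	{
		unsigned int queue;

		queue = channel * AIROHA_NUM_QOS_QUEUES +	/* QoS channel */
			priority % AIROHA_NUM_QOS_QUEUES;	/* QoS queue */

		return queue < num_tx_queues ? queue : 0;
	}

	int main(void)
	{
		/* e.g. DSA user port 2 (lan2), skb priority 5 */
		unsigned int qid = select_queue(2, 5, AIROHA_NUM_TX_RING);

		printf("hw queue %u -> channel %u, queue %u\n", qid,
		       qid / AIROHA_NUM_QOS_QUEUES,
		       qid % AIROHA_NUM_QOS_QUEUES);
		return 0;
	}

skb priorities beyond 7 simply wrap onto the 8 per-channel queues, and an
out-of-range result falls back to queue 0.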
Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/mediatek/airoha_eth.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/mediatek/airoha_eth.c
index 6c683a12d5aa..dd8d65a7e255 100644
--- a/drivers/net/ethernet/mediatek/airoha_eth.c
+++ b/drivers/net/ethernet/mediatek/airoha_eth.c
@@ -1789,6 +1789,10 @@ static int airoha_qdma_init_tx_queue(struct airoha_queue *q,
 		WRITE_ONCE(q->desc[i].ctrl, cpu_to_le32(val));
 	}
 
+	/* xmit ring drop default setting */
+	airoha_qdma_set(qdma, REG_TX_RING_BLOCKING(qid),
+			TX_RING_IRQ_BLOCKING_TX_DROP_EN_MASK);
+
 	airoha_qdma_wr(qdma, REG_TX_RING_BASE(qid), dma_addr);
 	airoha_qdma_rmw(qdma, REG_TX_CPU_IDX(qid), TX_RING_CPU_IDX_MASK,
 			FIELD_PREP(TX_RING_CPU_IDX_MASK, q->head));

From patchwork Wed Dec 11 15:31:50 2024
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 13903648
X-Patchwork-Delegate: kuba@kernel.org
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: andrew@lunn.ch, olteanv@gmail.com, davem@davemloft.net,
    edumazet@google.com, kuba@kernel.org, pabeni@redhat.com,
    horms@kernel.org, nbd@nbd.name, sean.wang@mediatek.com,
    Mark-MC.Lee@mediatek.com, lorenzo.bianconi83@gmail.com
Subject: [RFC net-next 2/5] net: airoha: Introduce ndo_select_queue callback
Date: Wed, 11 Dec 2024 16:31:50 +0100
X-Patchwork-State: RFC

Airoha EN7581 SoC supports 32 Tx DMA rings used to feed packets to 32 QoS
channels. Each channel supports 8 QoS queues where the user can apply QoS
scheduling policies. In a similar way, the user can configure hw rate shaping
for each QoS channel.

Introduce the ndo_select_queue callback in order to select the Tx queue based
on the QoS channel and QoS queue. In particular, for DSA devices select the
QoS channel according to the DSA user port index, relying on the port id
otherwise. Select the QoS queue based on the skb priority.

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/mediatek/airoha_eth.c | 28 ++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/mediatek/airoha_eth.c
index dd8d65a7e255..8b927bf310ed 100644
--- a/drivers/net/ethernet/mediatek/airoha_eth.c
+++ b/drivers/net/ethernet/mediatek/airoha_eth.c
@@ -23,6 +23,7 @@
 #define AIROHA_MAX_NUM_XSI_RSTS		5
 #define AIROHA_MAX_MTU			2000
 #define AIROHA_MAX_PACKET_SIZE		2048
+#define AIROHA_NUM_QOS_QUEUES		8
 #define AIROHA_NUM_TX_RING		32
 #define AIROHA_NUM_RX_RING		32
 #define AIROHA_FE_MC_MAX_VLAN_TABLE	64
@@ -2417,21 +2418,43 @@ static void airoha_dev_get_stats64(struct net_device *dev,
 	} while (u64_stats_fetch_retry(&port->stats.syncp, start));
 }
 
+static u16 airoha_dev_select_queue(struct net_device *dev, struct sk_buff *skb,
+				   struct net_device *sb_dev)
+{
+	struct airoha_gdm_port *port = netdev_priv(dev);
+	u16 queue;
+
+	/* For dsa device select QoS channel according to the dsa user port
+	 * index, rely on port id otherwise. Select QoS queue based on the
+	 * skb priority.
+	 */
+	queue = netdev_uses_dsa(dev) ? skb_get_queue_mapping(skb) : port->id;
+	queue = queue * AIROHA_NUM_QOS_QUEUES + /* QoS channel */
+		skb->priority % AIROHA_NUM_QOS_QUEUES; /* QoS queue */
+
+	return queue < dev->num_tx_queues ? queue : 0;
+}
+
 static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb,
 				   struct net_device *dev)
 {
 	struct skb_shared_info *sinfo = skb_shinfo(skb);
 	struct airoha_gdm_port *port = netdev_priv(dev);
-	u32 msg0 = 0, msg1, len = skb_headlen(skb);
-	int i, qid = skb_get_queue_mapping(skb);
+	u32 msg0, msg1, len = skb_headlen(skb);
 	struct airoha_qdma *qdma = port->qdma;
 	u32 nr_frags = 1 + sinfo->nr_frags;
 	struct netdev_queue *txq;
 	struct airoha_queue *q;
 	void *data = skb->data;
+	int i, qid;
 	u16 index;
 	u8 fport;
 
+	qid = skb_get_queue_mapping(skb) % ARRAY_SIZE(qdma->q_tx);
+	msg0 = FIELD_PREP(QDMA_ETH_TXMSG_CHAN_MASK,
+			  qid / AIROHA_NUM_QOS_QUEUES) |
+	       FIELD_PREP(QDMA_ETH_TXMSG_QUEUE_MASK,
+			  qid % AIROHA_NUM_QOS_QUEUES);
 	if (skb->ip_summed == CHECKSUM_PARTIAL)
 		msg0 |= FIELD_PREP(QDMA_ETH_TXMSG_TCO_MASK, 1) |
 			FIELD_PREP(QDMA_ETH_TXMSG_UCO_MASK, 1) |
@@ -2605,6 +2628,7 @@ static const struct net_device_ops airoha_netdev_ops = {
 	.ndo_init		= airoha_dev_init,
 	.ndo_open		= airoha_dev_open,
 	.ndo_stop		= airoha_dev_stop,
+	.ndo_select_queue	= airoha_dev_select_queue,
 	.ndo_start_xmit		= airoha_dev_xmit,
 	.ndo_get_stats64	= airoha_dev_get_stats64,
 	.ndo_set_mac_address	= airoha_dev_set_macaddr,

From patchwork Wed Dec 11 15:31:51 2024
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 13903649
X-Patchwork-Delegate: kuba@kernel.org
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: andrew@lunn.ch,
olteanv@gmail.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, horms@kernel.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo.bianconi83@gmail.com Subject: [RFC net-next 3/5] net: dsa: Introduce ndo_setup_tc_conduit callback Date: Wed, 11 Dec 2024 16:31:51 +0100 Message-ID: <8e57ae3c4b064403ca843ffa45a5eb4e4198cf80.1733930558.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC Some DSA hw switches do not support Qdisc offloading or the mac chip has more fine grained QoS capabilities with respect to the hw switch (e.g. Airoha EN7581 mac chip has more hw QoS and buffering capabilities with respect to the mt7530 switch). On the other hand, configuring the switch cpu port via tc does not allow to address all possible use-cases (e.g. shape just tcp traffic with dst port 80 transmitted on lan0). Introduce ndo_setup_tc_conduit callback in order to allow tc to offload Qdisc policies for the specified user ports configuring the hw switch cpu port (mac chip). Signed-off-by: Lorenzo Bianconi --- include/linux/netdevice.h | 12 ++++++++++ net/dsa/user.c | 47 ++++++++++++++++++++++++++++++++++----- 2 files changed, 53 insertions(+), 6 deletions(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index d917949bba03..78b63dafad16 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -1169,6 +1169,14 @@ struct netdev_net_notifier { * This is always called from the stack with the rtnl lock held and netif * tx queues stopped. This allows the netdevice to perform queue * management safely. + * int (*ndo_setup_tc_conduit)(struct net_device *dev, int user_port, + * enum tc_setup_type type, void *type_data); + * Called to setup any 'tc' scheduler, classifier or action on the user + * port @user_port via the conduit port @dev. This is useful if the hw + * supports improved offloading capability through the conduit port. + * This is always called from the stack with the rtnl lock held and netif + * tx queues stopped. This allows the netdevice to perform queue + * management safely. * * Fiber Channel over Ethernet (FCoE) offload functions. 
* int (*ndo_fcoe_enable)(struct net_device *dev); @@ -1475,6 +1483,10 @@ struct net_device_ops { int (*ndo_setup_tc)(struct net_device *dev, enum tc_setup_type type, void *type_data); + int (*ndo_setup_tc_conduit)(struct net_device *dev, + int user_port, + enum tc_setup_type type, + void *type_data); #if IS_ENABLED(CONFIG_FCOE) int (*ndo_fcoe_enable)(struct net_device *dev); int (*ndo_fcoe_disable)(struct net_device *dev); diff --git a/net/dsa/user.c b/net/dsa/user.c index c736c019e2af..2d5ed32a1f7c 100644 --- a/net/dsa/user.c +++ b/net/dsa/user.c @@ -1725,6 +1725,46 @@ static int dsa_user_setup_ft_block(struct dsa_switch *ds, int port, return conduit->netdev_ops->ndo_setup_tc(conduit, TC_SETUP_FT, type_data); } +static int dsa_user_setup_qdisc(struct net_device *dev, + enum tc_setup_type type, void *type_data) +{ + struct dsa_port *dp = dsa_user_to_port(dev); + struct dsa_switch *ds = dp->ds; + struct net_device *conduit; + int ret = -EOPNOTSUPP; + + conduit = dsa_port_to_conduit(dsa_to_port(ds, dp->index)); + if (conduit->netdev_ops->ndo_setup_tc_conduit) { + ret = conduit->netdev_ops->ndo_setup_tc_conduit(conduit, + dp->index, + type, + type_data); + if (ret && ret != -EOPNOTSUPP) { + netdev_err(dev, + "qdisc offload failed on conduit %s: %d\n", + conduit->name, ret); + return ret; + } + } + + /* Try to offload the requested qdisc via user port. This is necessary + * if the traffic is forwarded by the hw dsa switch. + */ + if (ds->ops->port_setup_tc) { + int err; + + err = ds->ops->port_setup_tc(ds, dp->index, type, type_data); + if (err != -EOPNOTSUPP) { + if (err) + netdev_err(dev, "qdisc offload failed: %d\n", + err); + ret = err; + } + } + + return ret; +} + static int dsa_user_setup_tc(struct net_device *dev, enum tc_setup_type type, void *type_data) { @@ -1737,13 +1777,8 @@ static int dsa_user_setup_tc(struct net_device *dev, enum tc_setup_type type, case TC_SETUP_FT: return dsa_user_setup_ft_block(ds, dp->index, type_data); default: - break; + return dsa_user_setup_qdisc(dev, type, type_data); } - - if (!ds->ops->port_setup_tc) - return -EOPNOTSUPP; - - return ds->ops->port_setup_tc(ds, dp->index, type, type_data); } static int dsa_user_get_rxnfc(struct net_device *dev, From patchwork Wed Dec 11 15:31:52 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13903650 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7B6901BDAB5 for ; Wed, 11 Dec 2024 15:32:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733931140; cv=none; b=HflI8J/0P0xcltwQcY1qBxbxa96/tGYjcAfAzJcklChkR2FS2qhDv9vn9dIlNsBmUMSRWHk6tF7p4Hlnfn/5+gbl3UMBrFkWS1AbFXGHcJh9/NhoeIYyFgVQWq2wh/LyblvUnkZTb7/bjPRLfNlgBA0TW9B3Ts/6VD99tuzwbN0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733931140; c=relaxed/simple; bh=XoKW2zY976GO2P35664sYhJ6oHt0mDyyKYtYUAPwBOc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=VFvRi6pWjyM2vX/pN8kzakDDmJl0pYyRs75zNfRPLSAR1MxDREoiPsATLcQHIXDSoT9d7v4UrDWf9/rhOhOHmRms6sY9qlsVWb4rRwXRRl8aKUkzqpjVc6AmaH4vZwEca+p8PcJf47jBz7rKJ2tV9MVcSAwcrDEyorO4FlHroOo= 
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=mt1fQo7n; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="mt1fQo7n" Received: by smtp.kernel.org (Postfix) with ESMTPSA id B0B0FC4CED2; Wed, 11 Dec 2024 15:32:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1733931140; bh=XoKW2zY976GO2P35664sYhJ6oHt0mDyyKYtYUAPwBOc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mt1fQo7nPvInxc8xP793fDIz7XmicDt2AGUhiYuqq5YqQr1+6uDKOQHE/AJKu9ZHO VarxJw1k/mJZcBB0xAvZLtzOjBQWCUkj2lUM5Dv1T3fp8i1+7+xlZXAQCHLbCQ2cc+ vRwLj++CY3Zsj7uHhfkWfTDeF623Q2OQmQzo0ROQc5Uv65GLnZu6b199n8p3max9Y/ PqerlbE27HoY3szz7rd2BMM68V6QCXisYAltzBAgYdmL3uyn946YS82LCm2OZPDonr Nl0zMZ03FYGgMNOmWAjl/SA/afCawNx6/ShkZ0p3H3x6G1ZGpKxGRQwyiwWXKSaOPn aO5lRu2UppVtA== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: andrew@lunn.ch, olteanv@gmail.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, horms@kernel.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo.bianconi83@gmail.com Subject: [RFC net-next 4/5] net: airoha: Add sched ETS offload support Date: Wed, 11 Dec 2024 16:31:52 +0100 Message-ID: X-Mailer: git-send-email 2.47.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC Introduce support for ETS qdisc offload available in the Airoha EN7581 ethernet controller. Add the capability to configure hw ETS Qdisc for the specified DSA user port via the QDMA block available in the mac chip (QDMA block is connected to the DSA switch cpu port). Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/airoha_eth.c | 155 ++++++++++++++++++++- 1 file changed, 154 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/mediatek/airoha_eth.c index 8b927bf310ed..23aad8670a17 100644 --- a/drivers/net/ethernet/mediatek/airoha_eth.c +++ b/drivers/net/ethernet/mediatek/airoha_eth.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #define AIROHA_MAX_NUM_GDM_PORTS 1 @@ -542,9 +543,24 @@ #define INGRESS_SLOW_TICK_RATIO_MASK GENMASK(29, 16) #define INGRESS_FAST_TICK_MASK GENMASK(15, 0) +#define REG_QUEUE_CLOSE_CFG(_n) (0x00a0 + ((_n) & 0xfc)) +#define TXQ_DISABLE_CHAN_QUEUE_MASK(_n, _m) BIT((_m) + (((_n) & 0x3) << 3)) + #define REG_TXQ_DIS_CFG_BASE(_n) ((_n) ? 
0x20a0 : 0x00a0) #define REG_TXQ_DIS_CFG(_n, _m) (REG_TXQ_DIS_CFG_BASE((_n)) + (_m) << 2) +#define REG_CNTR_CFG(_n) (0x0400 + ((_n) << 3)) +#define CNTR_EN_MASK BIT(31) +#define CNTR_ALL_CHAN_EN_MASK BIT(30) +#define CNTR_ALL_QUEUE_EN_MASK BIT(29) +#define CNTR_ALL_DSCP_RING_EN_MASK BIT(28) +#define CNTR_SRC_MASK GENMASK(27, 24) +#define CNTR_DSCP_RING_MASK GENMASK(20, 16) +#define CNTR_CHAN_MASK GENMASK(7, 3) +#define CNTR_QUEUE_MASK GENMASK(2, 0) + +#define REG_CNTR_VAL(_n) (0x0404 + ((_n) << 3)) + #define REG_LMGR_INIT_CFG 0x1000 #define LMGR_INIT_START BIT(31) #define LMGR_SRAM_MODE_MASK BIT(30) @@ -570,9 +586,19 @@ #define TWRR_WEIGHT_SCALE_MASK BIT(31) #define TWRR_WEIGHT_BASE_MASK BIT(3) +#define REG_TXWRR_WEIGHT_CFG 0x1024 +#define TWRR_RW_CMD_MASK BIT(31) +#define TWRR_RW_CMD_DONE BIT(30) +#define TWRR_CHAN_IDX_MASK GENMASK(23, 19) +#define TWRR_QUEUE_IDX_MASK GENMASK(18, 16) +#define TWRR_VALUE_MASK GENMASK(15, 0) + #define REG_PSE_BUF_USAGE_CFG 0x1028 #define PSE_BUF_ESTIMATE_EN_MASK BIT(29) +#define REG_CHAN_QOS_MODE(_n) (0x1040 + ((_n) << 2)) +#define CHAN_QOS_MODE_MASK(_n) GENMASK(2 + ((_n) << 2), (_n) << 2) + #define REG_GLB_TRTCM_CFG 0x1080 #define GLB_TRTCM_EN_MASK BIT(31) #define GLB_TRTCM_MODE_MASK BIT(30) @@ -721,6 +747,17 @@ enum { FE_PSE_PORT_DROP = 0xf, }; +enum tx_sched_mode { + TC_SCH_WRR8, + TC_SCH_SP, + TC_SCH_WRR7, + TC_SCH_WRR6, + TC_SCH_WRR5, + TC_SCH_WRR4, + TC_SCH_WRR3, + TC_SCH_WRR2, +}; + struct airoha_queue_entry { union { void *buf; @@ -2624,6 +2661,119 @@ airoha_ethtool_get_rmon_stats(struct net_device *dev, } while (u64_stats_fetch_retry(&port->stats.syncp, start)); } +static int airoha_qdma_set_chan_tx_sched(struct airoha_gdm_port *port, + int channel, enum tx_sched_mode mode, + const u16 *weights, u8 n_weights) +{ + int i; + + for (i = 0; i < AIROHA_NUM_TX_RING; i++) + airoha_qdma_clear(port->qdma, REG_QUEUE_CLOSE_CFG(channel), + TXQ_DISABLE_CHAN_QUEUE_MASK(channel, i)); + + for (i = 0; i < n_weights; i++) { + u32 status; + int err; + + airoha_qdma_wr(port->qdma, REG_TXWRR_WEIGHT_CFG, + TWRR_RW_CMD_MASK | + FIELD_PREP(TWRR_CHAN_IDX_MASK, channel) | + FIELD_PREP(TWRR_QUEUE_IDX_MASK, i) | + FIELD_PREP(TWRR_VALUE_MASK, weights[i])); + err = read_poll_timeout(airoha_qdma_rr, status, + status & TWRR_RW_CMD_DONE, + USEC_PER_MSEC, 10 * USEC_PER_MSEC, + true, port->qdma, + REG_TXWRR_WEIGHT_CFG); + if (err) + return err; + } + + airoha_qdma_rmw(port->qdma, REG_CHAN_QOS_MODE(channel >> 3), + CHAN_QOS_MODE_MASK(channel), + mode << __ffs(CHAN_QOS_MODE_MASK(channel))); + + return 0; +} + +static int airoha_qdma_set_tx_prio_sched(struct airoha_gdm_port *port, + int channel) +{ + static const u16 w[AIROHA_NUM_QOS_QUEUES] = {}; + + return airoha_qdma_set_chan_tx_sched(port, channel, TC_SCH_SP, w, + ARRAY_SIZE(w)); +} + +static int airoha_qdma_set_tx_ets_sched(struct airoha_gdm_port *port, + int channel, + struct tc_ets_qopt_offload *opt) +{ + struct tc_ets_qopt_offload_replace_params *p = &opt->replace_params; + enum tx_sched_mode mode = TC_SCH_SP; + u16 w[AIROHA_NUM_QOS_QUEUES] = {}; + int i, nstrict = 0; + + if (p->bands != AIROHA_NUM_QOS_QUEUES) + return -EINVAL; + + for (i = 0; i < p->bands; i++) { + if (!p->quanta[i]) + nstrict++; + } + + /* this configuration is not supported by the hw */ + if (nstrict == AIROHA_NUM_QOS_QUEUES - 1) + return -EINVAL; + + for (i = 0; i < p->bands - nstrict; i++) + w[i] = p->weights[nstrict + i]; + + if (!nstrict) + mode = TC_SCH_WRR8; + else if (nstrict < AIROHA_NUM_QOS_QUEUES - 1) + mode = nstrict + 1; + + return 
airoha_qdma_set_chan_tx_sched(port, channel, mode, w, + ARRAY_SIZE(w)); +} + +static int airoha_tc_setup_qdisc_ets(struct airoha_gdm_port *port, int channel, + struct tc_ets_qopt_offload *opt) +{ + switch (opt->command) { + case TC_ETS_REPLACE: + return airoha_qdma_set_tx_ets_sched(port, channel, opt); + case TC_ETS_DESTROY: + /* PRIO is default qdisc scheduler */ + return airoha_qdma_set_tx_prio_sched(port, channel); + default: + return -EOPNOTSUPP; + } +} + +static int airoha_dev_tc_setup_conduit(struct net_device *dev, int channel, + enum tc_setup_type type, + void *type_data) +{ + struct airoha_gdm_port *port = netdev_priv(dev); + + switch (type) { + case TC_SETUP_QDISC_ETS: + return airoha_tc_setup_qdisc_ets(port, channel, type_data); + default: + return -EOPNOTSUPP; + } +} + +static int airoha_dev_tc_setup(struct net_device *dev, enum tc_setup_type type, + void *type_data) +{ + struct airoha_gdm_port *port = netdev_priv(dev); + + return airoha_dev_tc_setup_conduit(dev, port->id, type, type_data); +} + static const struct net_device_ops airoha_netdev_ops = { .ndo_init = airoha_dev_init, .ndo_open = airoha_dev_open, @@ -2632,6 +2782,8 @@ static const struct net_device_ops airoha_netdev_ops = { .ndo_start_xmit = airoha_dev_xmit, .ndo_get_stats64 = airoha_dev_get_stats64, .ndo_set_mac_address = airoha_dev_set_macaddr, + .ndo_setup_tc = airoha_dev_tc_setup, + .ndo_setup_tc_conduit = airoha_dev_tc_setup_conduit, }; static const struct ethtool_ops airoha_ethtool_ops = { @@ -2681,7 +2833,8 @@ static int airoha_alloc_gdm_port(struct airoha_eth *eth, struct device_node *np) dev->watchdog_timeo = 5 * HZ; dev->hw_features = NETIF_F_IP_CSUM | NETIF_F_RXCSUM | NETIF_F_TSO6 | NETIF_F_IPV6_CSUM | - NETIF_F_SG | NETIF_F_TSO; + NETIF_F_SG | NETIF_F_TSO | + NETIF_F_HW_TC; dev->features |= dev->hw_features; dev->dev.of_node = np; dev->irq = qdma->irq; From patchwork Wed Dec 11 15:31:53 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13903651 X-Patchwork-Delegate: kuba@kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9911F1A4F22 for ; Wed, 11 Dec 2024 15:32:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733931144; cv=none; b=JhczAwLxNpsTk78X+5LfhjRWj5KGt//lJZMPzWpaIJHHhupqFGrTgWmZ2nA4kKBP6bhAwofkyDF6F2MMIDayEzs0ZfnLFE/VUJ6B0r2cO5d3829dS06qt+eiircSNU10kWWBh73vE33FsSUcICXVHr/4STni8eMB2xaXPqLmWik= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733931144; c=relaxed/simple; bh=EoKLh/pM5nNqF9swvB7EQWfP7hlELeMaP30SHFDjaJQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=D+KB2N1SB/oKqbaOivi58Yi4XULQz4HWBENXP6cYWkwT+BUITvC1jYg7LXVb113NfoU0Xdou5sRxntXAKoCvsZvjZaY+3YaocrdLD5HMZrd/ypDk6Z+LUju2NBGojW1XUKkBrofXyodynL6RAev406Z2pGy1Q71+7DuySTRnTqM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=QOUgm8sC; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="QOUgm8sC" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 
D47EAC4CED2; Wed, 11 Dec 2024 15:32:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1733931144; bh=EoKLh/pM5nNqF9swvB7EQWfP7hlELeMaP30SHFDjaJQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QOUgm8sCSgGBNVz2chyf4RsBrkAUMo7ogz8opW8xmCkQnUeejNAyKx5iyb4aS8LZz 9Qozz7R9w50Y/xaZ/EhydM09eYBbVVUiLZOFr8Qzg5PNFa07IGc6VrB1APKgYCzYCD 9Si7JN34CgXtu8g5KQ30mtE+i94ulFvRakCpkyGhU4ndzYXvp20nbPzbLiAhD7f6dV plTDa7XRJbzTSmxoreT0RKviIRpYdZR2OuHR7m9uf4xo9JoYEcW+241tJvGK2wmzZQ w4IbfaPZUliKyjSLoK0YOK2tFEAPbR1MgKoF+pFPQxrXgyeZD40sBUNKMp9XXXyWbI mnOJF9j7fQcRw== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: andrew@lunn.ch, olteanv@gmail.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, horms@kernel.org, nbd@nbd.name, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, lorenzo.bianconi83@gmail.com Subject: [RFC net-next 5/5] net: airoha: Add sched TBF offload support Date: Wed, 11 Dec 2024 16:31:53 +0100 Message-ID: <454e81d5ef8f7de1749555936bf73ff7a709cc7c.1733930558.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.47.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC Introduce support for TBF qdisc offload available in the Airoha EN7581 ethernet controller. Add the capability to configure hw TBF Qdisc for the specified DSA user port via the QDMA block available in the mac chip (QDMA block is connected to the DSA switch cpu port). Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/airoha_eth.c | 185 +++++++++++++++++++++ 1 file changed, 185 insertions(+) diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/mediatek/airoha_eth.c index 23aad8670a17..a79c92a816a2 100644 --- a/drivers/net/ethernet/mediatek/airoha_eth.c +++ b/drivers/net/ethernet/mediatek/airoha_eth.c @@ -42,6 +42,9 @@ #define PSE_RSV_PAGES 128 #define PSE_QUEUE_RSV_PAGES 64 +#define QDMA_METER_IDX(_n) ((_n) & 0xff) +#define QDMA_METER_GROUP(_n) (((_n) >> 8) & 0x3) + /* FE */ #define PSE_BASE 0x0100 #define CSR_IFC_BASE 0x0200 @@ -582,6 +585,17 @@ #define EGRESS_SLOW_TICK_RATIO_MASK GENMASK(29, 16) #define EGRESS_FAST_TICK_MASK GENMASK(15, 0) +#define TRTCM_PARAM_RW_MASK BIT(31) +#define TRTCM_PARAM_RW_DONE_MASK BIT(30) +#define TRTCM_PARAM_TYPE_MASK GENMASK(29, 28) +#define TRTCM_METER_GROUP_MASK GENMASK(27, 26) +#define TRTCM_PARAM_INDEX_MASK GENMASK(23, 17) +#define TRTCM_PARAM_RATE_TYPE_MASK BIT(16) + +#define REG_TRTCM_CFG_PARAM(_n) ((_n) + 0x4) +#define REG_TRTCM_DATA_LOW(_n) ((_n) + 0x8) +#define REG_TRTCM_DATA_HIGH(_n) ((_n) + 0xc) + #define REG_TXWRR_MODE_CFG 0x1020 #define TWRR_WEIGHT_SCALE_MASK BIT(31) #define TWRR_WEIGHT_BASE_MASK BIT(3) @@ -758,6 +772,29 @@ enum tx_sched_mode { TC_SCH_WRR2, }; +enum trtcm_param_type { + TRTCM_MISC_MODE, /* meter_en, pps_mode, tick_sel */ + TRTCM_TOKEN_RATE_MODE, + TRTCM_BUCKETSIZE_SHIFT_MODE, + TRTCM_BUCKET_COUNTER_MODE, +}; + +enum trtcm_mode_type { + TRTCM_COMMIT_MODE, + TRTCM_PEAK_MODE, +}; + +enum trtcm_param { + TRTCM_TICK_SEL = BIT(0), + TRTCM_PKT_MODE = BIT(1), + TRTCM_METER_MODE = BIT(2), +}; + +#define MIN_TOKEN_SIZE 4096 +#define MAX_TOKEN_SIZE_OFFSET 17 +#define TRTCM_TOKEN_RATE_MASK GENMASK(23, 6) +#define TRTCM_TOKEN_RATE_FRACTION_MASK GENMASK(5, 0) + struct airoha_queue_entry { union { void *buf; @@ -2752,6 +2789,152 @@ static int airoha_tc_setup_qdisc_ets(struct airoha_gdm_port *port, int channel, } } 
+static int airoha_qdma_get_trtcm_param(struct airoha_qdma *qdma, int channel, + u32 addr, enum trtcm_param_type param, + enum trtcm_mode_type mode, + u32 *val_low, u32 *val_high) +{ + u32 idx = QDMA_METER_IDX(channel), group = QDMA_METER_GROUP(channel); + u32 val, config = FIELD_PREP(TRTCM_PARAM_TYPE_MASK, param) | + FIELD_PREP(TRTCM_METER_GROUP_MASK, group) | + FIELD_PREP(TRTCM_PARAM_INDEX_MASK, idx) | + FIELD_PREP(TRTCM_PARAM_RATE_TYPE_MASK, mode); + + airoha_qdma_wr(qdma, REG_TRTCM_CFG_PARAM(addr), config); + if (read_poll_timeout(airoha_qdma_rr, val, + val & TRTCM_PARAM_RW_DONE_MASK, + USEC_PER_MSEC, 10 * USEC_PER_MSEC, true, + qdma, REG_TRTCM_CFG_PARAM(addr))) + return -ETIMEDOUT; + + *val_low = airoha_qdma_rr(qdma, REG_TRTCM_DATA_LOW(addr)); + if (val_high) + *val_high = airoha_qdma_rr(qdma, REG_TRTCM_DATA_HIGH(addr)); + + return 0; +} + +static int airoha_qdma_set_trtcm_param(struct airoha_qdma *qdma, int channel, + u32 addr, enum trtcm_param_type param, + enum trtcm_mode_type mode, u32 val) +{ + u32 idx = QDMA_METER_IDX(channel), group = QDMA_METER_GROUP(channel); + u32 config = TRTCM_PARAM_RW_MASK | + FIELD_PREP(TRTCM_PARAM_TYPE_MASK, param) | + FIELD_PREP(TRTCM_METER_GROUP_MASK, group) | + FIELD_PREP(TRTCM_PARAM_INDEX_MASK, idx) | + FIELD_PREP(TRTCM_PARAM_RATE_TYPE_MASK, mode); + + airoha_qdma_wr(qdma, REG_TRTCM_DATA_LOW(addr), val); + airoha_qdma_wr(qdma, REG_TRTCM_CFG_PARAM(addr), config); + + return read_poll_timeout(airoha_qdma_rr, val, + val & TRTCM_PARAM_RW_DONE_MASK, + USEC_PER_MSEC, 10 * USEC_PER_MSEC, true, + qdma, REG_TRTCM_CFG_PARAM(addr)); +} + +static int airoha_qdma_set_trtcm_config(struct airoha_qdma *qdma, int channel, + u32 addr, enum trtcm_mode_type mode, + bool enable, u32 enable_mask) +{ + u32 val; + + if (airoha_qdma_get_trtcm_param(qdma, channel, addr, TRTCM_MISC_MODE, + mode, &val, NULL)) + return -EINVAL; + + val = enable ? val | enable_mask : val & ~enable_mask; + + return airoha_qdma_set_trtcm_param(qdma, channel, addr, TRTCM_MISC_MODE, + mode, val); +} + +static int airoha_qdma_set_trtcm_token_bucket(struct airoha_qdma *qdma, + int channel, u32 addr, + enum trtcm_mode_type mode, + u32 rate_val, u32 bucket_size) +{ + u32 val, config, tick, unit, rate, rate_frac; + int err; + + if (airoha_qdma_get_trtcm_param(qdma, channel, addr, TRTCM_MISC_MODE, + mode, &config, NULL)) + return -EINVAL; + + val = airoha_qdma_rr(qdma, addr); + tick = FIELD_GET(INGRESS_FAST_TICK_MASK, val); + if (config & TRTCM_TICK_SEL) + tick *= FIELD_GET(INGRESS_SLOW_TICK_RATIO_MASK, val); + if (!tick) + return -EINVAL; + + unit = (config & TRTCM_PKT_MODE) ? 
1000000 / tick : 8000 / tick; + if (!unit) + return -EINVAL; + + rate = rate_val / unit; + rate_frac = rate_val % unit; + rate_frac = FIELD_PREP(TRTCM_TOKEN_RATE_MASK, rate_frac) / unit; + rate = FIELD_PREP(TRTCM_TOKEN_RATE_MASK, rate) | + FIELD_PREP(TRTCM_TOKEN_RATE_FRACTION_MASK, rate_frac); + + err = airoha_qdma_set_trtcm_param(qdma, channel, addr, + TRTCM_TOKEN_RATE_MODE, mode, rate); + if (err) + return err; + + val = max_t(u32, bucket_size, MIN_TOKEN_SIZE); + val = min_t(u32, __fls(val), MAX_TOKEN_SIZE_OFFSET); + + return airoha_qdma_set_trtcm_param(qdma, channel, addr, + TRTCM_BUCKETSIZE_SHIFT_MODE, + mode, val); +} + +static int airoha_qdma_set_tx_tbf_sched(struct airoha_gdm_port *port, + int channel, u32 rate, u32 bucket_size) +{ + int i, err; + + for (i = 0; i <= TRTCM_PEAK_MODE; i++) { + err = airoha_qdma_set_trtcm_config(port->qdma, channel, + REG_EGRESS_TRTCM_CFG, i, + !!rate, TRTCM_METER_MODE); + if (err) + return err; + + err = airoha_qdma_set_trtcm_token_bucket(port->qdma, channel, + REG_EGRESS_TRTCM_CFG, + i, rate, bucket_size); + if (err) + return err; + } + + return 0; +} + +static int airoha_tc_setup_qdisc_tbf(struct airoha_gdm_port *port, int channel, + struct tc_tbf_qopt_offload *qopt) +{ + struct tc_tbf_qopt_offload_replace_params *p = &qopt->replace_params; + u32 rate = 0; + + if (qopt->parent != TC_H_ROOT) + return -EINVAL; + + switch (qopt->command) { + case TC_TBF_REPLACE: + rate = div_u64(p->rate.rate_bytes_ps, 1000) << 3; /* kbps */ + fallthrough; + case TC_TBF_DESTROY: + return airoha_qdma_set_tx_tbf_sched(port, channel, rate, + p->max_size); + default: + return -EOPNOTSUPP; + } +} + static int airoha_dev_tc_setup_conduit(struct net_device *dev, int channel, enum tc_setup_type type, void *type_data) @@ -2761,6 +2944,8 @@ static int airoha_dev_tc_setup_conduit(struct net_device *dev, int channel, switch (type) { case TC_SETUP_QDISC_ETS: return airoha_tc_setup_qdisc_ets(port, channel, type_data); + case TC_SETUP_QDISC_TBF: + return airoha_tc_setup_qdisc_tbf(port, channel, type_data); default: return -EOPNOTSUPP; }
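
For reference, the conversion from the TBF parameters to the QDMA token-bucket
fields can be summarised with the userspace sketch below. The constants and
arithmetic mirror airoha_tc_setup_qdisc_tbf() and
airoha_qdma_set_trtcm_token_bucket(); the tick value and the sample tc
parameters are placeholders only, since the real tick is read back from the
egress shaper register at runtime, and the non-packet-mode (bit rate) path is
assumed:

	/* Illustrative only: TBF rate/burst -> hw token-bucket fields. */
	#include <stdio.h>
	#include <stdint.h>

	#define MIN_TOKEN_SIZE		4096
	#define MAX_TOKEN_SIZE_OFFSET	17

	static unsigned int fls_u32(unsigned int v)
	{
		return v ? 32 - __builtin_clz(v) : 0;
	}

	int main(void)
	{
		uint64_t rate_bytes_ps = 12500000;	/* e.g. tbf rate 100mbit */
		unsigned int max_size = 64 * 1024;	/* e.g. tbf burst 64k */
		unsigned int tick = 50;			/* placeholder, hw provides it */

		/* TC_TBF_REPLACE: byte rate -> kbit/s */
		unsigned int rate_val = (rate_bytes_ps / 1000) << 3;

		/* token rate in units of (8000 / tick), plus a 6-bit fraction */
		unsigned int unit = 8000 / tick;
		unsigned int rate = rate_val / unit;
		unsigned int rate_frac = ((rate_val % unit) << 6) / unit;
		unsigned int token_rate = (rate << 6) | (rate_frac & 0x3f);

		/* bucket size is programmed as a clamped power-of-two shift */
		unsigned int bucket = max_size > MIN_TOKEN_SIZE ?
				      max_size : MIN_TOKEN_SIZE;
		unsigned int shift = fls_u32(bucket) - 1;

		if (shift > MAX_TOKEN_SIZE_OFFSET)
			shift = MAX_TOKEN_SIZE_OFFSET;

		printf("rate %u kbit/s -> token rate reg %#x, bucket shift %u\n",
		       rate_val, token_rate, shift);
		return 0;
	}

With the series applied, a configuration along the lines of
tc qdisc replace dev lan1 root tbf rate 100mbit burst 64k latency 50ms on a
DSA user port is the kind of setup expected to reach this path through
ndo_setup_tc_conduit; the interface name and values are just an example.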