From patchwork Thu Feb 13 15:34:20 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973488 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id F2978C0219D for ; Thu, 13 Feb 2025 15:37:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=MheUbBAVa9zRYV/qc//494W7jC1UkL0T5kBqgYrS/vk=; b=Dfk26anqGsvkTKqsKOg3vB48Zd XgmVn/4S9eSdlbhKt0yF2uGn2M3jywt2oEvxgQ21C9pY9nj/elQAKrw9NyTHF8Zw8y2QY7ZdjlAcm RQPclIU12etpaH1NzN67onzTnRu/0U5H0P2B0xQDwmYu/4DU8afAmvJWUFhKMQ8bqUynOZ154IiA3 Cvu0W3/DrJ0AThPmC7km8l0QxtkO2NN5I9eQ71sSfoRFc6WR1t5YTm/Zu4XfiTY8kaELd2ARzgM3F 2T/kYbByp8u8H+d9jbecU9rhl3vVxEcuaxYYHrod5cyaqGSMbQNLw5OAB9Rb6JNG/iNfyuM/AsSzi Ql1xrDvg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibHd-0000000BZHx-0YF1; Thu, 13 Feb 2025 15:37:45 +0000 Received: from nyc.source.kernel.org ([147.75.193.91]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibEo-0000000BYc8-2IAi; Thu, 13 Feb 2025 15:34:51 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 6D5AEA426E2; Thu, 13 Feb 2025 15:33:04 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E7D27C4CED1; Thu, 13 Feb 2025 15:34:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460889; bh=L5nffdWdH0LR7Q80jRhjrqzq1DZVr+htRNvDd6KyaNk=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=ksTt28lfjlRlE/DPAyXk9PTXaabBhNNayZ1h9pQTYfclDNfSzzA4yAjI55wk5rdx1 tOQkGijBR8gZUR941+qh9jlYLPNCMNfBwQWaXYjSnXbOZGjw+XMgVy5fbRveeb6/EP SVk0Y1QE1x4I1cpMwdhgS+HjY1KvTJTj63p3uB+/bNEv4oDZEYoqmS4UBmdwF9USrm SxKL8fvj1qzE2WMyidWGZJxhabWjm4OKE6nZoPe54ULXr7Epfh7dfe+tQQXa+lX/iV W+mNqOkeKWgKr/76LZ49XW8ki7muq2tF8TsqvejVqsIXjDgqNvwyoApxiJSr04b9v/ naji+f5haUutw== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:20 +0100 Subject: [PATCH net-next v4 01/16] net: airoha: Fix TSO support for header cloned skbs MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-1-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. 
Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com, Mateusz Polchlopek X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073450_720787_87DFC10D X-CRM114-Status: GOOD ( 13.25 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org For GSO packets, skb_cow_head() will reallocate the skb for TSO header cloned skbs in airoha_dev_xmit(). For this reason, sinfo pointer can be no more valid. Fix the issue relying on skb_shinfo() macro directly in airoha_dev_xmit(). This is not a user visible issue since we can't currently enable TSO for DSA user ports since we are missing to initialize net_device vlan_features field. Fixes: 23020f049327 ("net: airoha: Introduce ethernet support for EN7581 SoC") Reviewed-by: Mateusz Polchlopek Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/airoha_eth.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/mediatek/airoha_eth.c index 09f448f291240257c5748725848ede231c502fbd..aa5f220ddbcf9ca5bee1173114294cb3aec701c9 100644 --- a/drivers/net/ethernet/mediatek/airoha_eth.c +++ b/drivers/net/ethernet/mediatek/airoha_eth.c @@ -2556,11 +2556,10 @@ static u16 airoha_dev_select_queue(struct net_device *dev, struct sk_buff *skb, static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb, struct net_device *dev) { - struct skb_shared_info *sinfo = skb_shinfo(skb); struct airoha_gdm_port *port = netdev_priv(dev); + u32 nr_frags = 1 + skb_shinfo(skb)->nr_frags; u32 msg0, msg1, len = skb_headlen(skb); struct airoha_qdma *qdma = port->qdma; - u32 nr_frags = 1 + sinfo->nr_frags; struct netdev_queue *txq; struct airoha_queue *q; void *data = skb->data; @@ -2583,8 +2582,9 @@ static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb, if (skb_cow_head(skb, 0)) goto error; - if (sinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) { - __be16 csum = cpu_to_be16(sinfo->gso_size); + if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | + SKB_GSO_TCPV6)) { + __be16 csum = cpu_to_be16(skb_shinfo(skb)->gso_size); tcp_hdr(skb)->check = (__force __sum16)csum; msg0 |= FIELD_PREP(QDMA_ETH_TXMSG_TSO_MASK, 1); @@ -2613,7 +2613,7 @@ static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb, for (i = 0; i < nr_frags; i++) { struct airoha_qdma_desc *desc = &q->desc[index]; struct airoha_queue_entry *e = &q->entry[index]; - skb_frag_t *frag = &sinfo->frags[i]; + skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; dma_addr_t addr; u32 val; From patchwork Thu Feb 13 15:34:21 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973489 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 91AD4C0219D for ; Thu, 13 Feb 2025 15:39:24 +0000 (UTC) 
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=vSY+UBE9kzGHPUbh5Y1nwA/QIwhXvgGkQmB5IgW7iD8=; b=p1YPGlMx5QMRVbFNJDUFIYM5xQ CI82rA/eqaD/Ysq+2Pws2uUSMzdSrcpVUG8BOrBUX4AME7xsPKMsGWcfOXbnXw0RX1epN2KlW+63a RzdvpXIgLEATdIgMP8Ris3Qb+fRDFfGjI5gWwzdsBEubBD8yom6kWq6BfUj8p8kvqU4UOFPBxfyId 6gkN2uUbT/UFISs1cTmQd8W8vMXAetpq4FRSKWkGhxjco+4PV3C30j5mwt/RiilYj1ukAK6Kme5E7 bE2n+RVbaGcaDkp9SYPKk33sr+cLRzmReMLuU+it7OLkrGIhXl+WOGtMDxi09TW64Tmy2M9pSvHg4 IpSw43pw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibJ2-0000000BZWZ-0BuA; Thu, 13 Feb 2025 15:39:12 +0000 Received: from nyc.source.kernel.org ([147.75.193.91]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibEr-0000000BYd5-0WN1; Thu, 13 Feb 2025 15:34:54 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 1E5F8A426DA; Thu, 13 Feb 2025 15:33:07 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id BE713C4CED1; Thu, 13 Feb 2025 15:34:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460892; bh=IpkbnQD0aqV4zTrDQLNPfko65wJI87nBEI5JGdjdZk8=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=kU8SSk8ONwT6Iv1k95fOOUHb6caz2rMafNMfR6p+rIE/zAJDCSRWSNN08rMG5jOPu wDN4n49DQGsqmheu70Dx2F+HsRXou76Wj1uQ1KCPVpKzqyT2j2nY9V4N4qdMgL26jF dGUQz7bIxbQzUsbvTART5lpW0DN+mRWe7hd5zPhgiGXAK5QhdNYBphG1omcXrxC4TO XAKhWcVOct6tDZU+85a7GYz3A1aFiVEDNv/5VRnwudjX/GzWPa2kS4K8aCdlsKg/AG NB6oeihb6KyX3MenHBIB+KSUmE/ye5osDHydzmIO8q+njTmZsuoDHqQjldgeb8tIcs dezDU8LCH1AoA== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:21 +0100 Subject: [PATCH net-next v4 02/16] net: airoha: Move airoha_eth driver in a dedicated folder MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-2-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073453_305119_00E3FE54 X-CRM114-Status: GOOD ( 14.73 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The airoha_eth driver has no codebase shared with mtk_eth_soc one. Moreover, the upcoming features (flowtable hw offloading, PCS, ..) 
will not reuse any code from MediaTek driver. Move the Airoha driver in a dedicated folder. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/Kconfig | 1 + drivers/net/ethernet/Makefile | 1 + drivers/net/ethernet/airoha/Kconfig | 18 ++++++++++++++++++ drivers/net/ethernet/airoha/Makefile | 6 ++++++ drivers/net/ethernet/{mediatek => airoha}/airoha_eth.c | 0 drivers/net/ethernet/mediatek/Kconfig | 8 -------- drivers/net/ethernet/mediatek/Makefile | 1 - 7 files changed, 26 insertions(+), 9 deletions(-) diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig index 977b42bc1e8c1e8804eb7fafa9ed85252d956cad..7941983d21e9e84cbd78241d5c1d48c95e50a8e4 100644 --- a/drivers/net/ethernet/Kconfig +++ b/drivers/net/ethernet/Kconfig @@ -20,6 +20,7 @@ source "drivers/net/ethernet/actions/Kconfig" source "drivers/net/ethernet/adaptec/Kconfig" source "drivers/net/ethernet/aeroflex/Kconfig" source "drivers/net/ethernet/agere/Kconfig" +source "drivers/net/ethernet/airoha/Kconfig" source "drivers/net/ethernet/alacritech/Kconfig" source "drivers/net/ethernet/allwinner/Kconfig" source "drivers/net/ethernet/alteon/Kconfig" diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile index 99fa180dedb80555e64b0fbcd7767044262cf432..67182339469a0d8337cc4e92aa51e498c615156d 100644 --- a/drivers/net/ethernet/Makefile +++ b/drivers/net/ethernet/Makefile @@ -10,6 +10,7 @@ obj-$(CONFIG_NET_VENDOR_ADAPTEC) += adaptec/ obj-$(CONFIG_GRETH) += aeroflex/ obj-$(CONFIG_NET_VENDOR_ADI) += adi/ obj-$(CONFIG_NET_VENDOR_AGERE) += agere/ +obj-$(CONFIG_NET_VENDOR_AIROHA) += airoha/ obj-$(CONFIG_NET_VENDOR_ALACRITECH) += alacritech/ obj-$(CONFIG_NET_VENDOR_ALLWINNER) += allwinner/ obj-$(CONFIG_NET_VENDOR_ALTEON) += alteon/ diff --git a/drivers/net/ethernet/airoha/Kconfig b/drivers/net/ethernet/airoha/Kconfig new file mode 100644 index 0000000000000000000000000000000000000000..b6a131845f13b23a12464cfc281e3abe5699389f --- /dev/null +++ b/drivers/net/ethernet/airoha/Kconfig @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: GPL-2.0-only +config NET_VENDOR_AIROHA + bool "Airoha devices" + depends on ARCH_AIROHA || COMPILE_TEST + help + If you have a Airoha SoC with ethernet, say Y. + +if NET_VENDOR_AIROHA + +config NET_AIROHA + tristate "Airoha SoC Gigabit Ethernet support" + depends on NET_DSA || !NET_DSA + select PAGE_POOL + help + This driver supports the gigabit ethernet MACs in the + Airoha SoC family.
+ +endif #NET_VENDOR_AIROHA diff --git a/drivers/net/ethernet/airoha/Makefile b/drivers/net/ethernet/airoha/Makefile new file mode 100644 index 0000000000000000000000000000000000000000..73a6f3680a4c4ce92ee785d83b905d76a63421df --- /dev/null +++ b/drivers/net/ethernet/airoha/Makefile @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: GPL-2.0-only +# +# Airoha for the Mediatek SoCs built-in ethernet macs +# + +obj-$(CONFIG_NET_AIROHA) += airoha_eth.o diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c similarity index 100% rename from drivers/net/ethernet/mediatek/airoha_eth.c rename to drivers/net/ethernet/airoha/airoha_eth.c diff --git a/drivers/net/ethernet/mediatek/Kconfig b/drivers/net/ethernet/mediatek/Kconfig index 95c4405b7d7bee53b964243480a0c173b555da56..7bfd3f230ff50739b3fc6103cd5d0e57ab8f70e1 100644 --- a/drivers/net/ethernet/mediatek/Kconfig +++ b/drivers/net/ethernet/mediatek/Kconfig @@ -7,14 +7,6 @@ config NET_VENDOR_MEDIATEK if NET_VENDOR_MEDIATEK -config NET_AIROHA - tristate "Airoha SoC Gigabit Ethernet support" - depends on NET_DSA || !NET_DSA - select PAGE_POOL - help - This driver supports the gigabit ethernet MACs in the - Airoha SoC family. - config NET_MEDIATEK_SOC_WED depends on ARCH_MEDIATEK || COMPILE_TEST def_bool NET_MEDIATEK_SOC != n diff --git a/drivers/net/ethernet/mediatek/Makefile b/drivers/net/ethernet/mediatek/Makefile index ddbb7f4a516caccf5eef7140de1872e9b35e3471..03e008fbc859b35067682f8640dab05ccce6caf7 100644 --- a/drivers/net/ethernet/mediatek/Makefile +++ b/drivers/net/ethernet/mediatek/Makefile @@ -11,4 +11,3 @@ mtk_eth-$(CONFIG_NET_MEDIATEK_SOC_WED) += mtk_wed_debugfs.o endif obj-$(CONFIG_NET_MEDIATEK_SOC_WED) += mtk_wed_ops.o obj-$(CONFIG_NET_MEDIATEK_STAR_EMAC) += mtk_star_emac.o -obj-$(CONFIG_NET_AIROHA) += airoha_eth.o From patchwork Thu Feb 13 15:34:22 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973490 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8AA78C0219D for ; Thu, 13 Feb 2025 15:40:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=N2cM2gnGKBAusMI6tYCConh0KoR2L7toyD6Ubl1gDTA=; b=VAjrUOteUYuEZXRuYnPD6SJ6PS LDOvcuHHXr+cdVolcq8CRyWujQBPT8c5pY8WSZ6TU4p1NNVjeDwwesmmnNI/Fhd6+kvtxcd5gvwOG febAOzWF4D5gaJgiLp5AtE9akiOtuZCP5bodnFi9Ban5YXjclt4Z1YeWK2WnUiuD/V9zbpSmwBONM nOhbcvdFCTPOILL3ZEo56GDXEYMGJM4zSYKVXyvlrfd0+29ik2CkLIDgLcLf0+V8RYaFPOMCMxO4c AxXm1r6UuV8YYET+EFLDX8tDhMF9Ux7FqK+dTOSUh7Nby1YKkZeC+MvyLW8AWIlCvmwQ7hUMtZfRW c86gzyQA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibKR-0000000BZpc-1BU1; Thu, 13 Feb 2025 15:40:39 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with 
esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibEt-0000000BYek-3pC2; Thu, 13 Feb 2025 15:34:57 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 9F9E05C5479; Thu, 13 Feb 2025 15:34:15 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5A4F9C4CED1; Thu, 13 Feb 2025 15:34:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460894; bh=AG4jf7U7RiixvNy/Wq2aS66C8zHWh8H0synk3h9IdPk=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=emCrzIX5IDPbjtiYsR+m9XZzBDHrcw4aK6fIFd/9wgAn7mXrXenuGONoKhV7it1nW DSh5pR0KQKTNTQ6VuQI7iZZ9iYWfBERSSUkJYwZcMDrFWVxpKL9RWvekbacrL1eYhz zxAHLoLe9iNPodqLTtni7UTftBv+fZXPZnmU9hBPljpY86g7TYKkm1yQUxTSN42PAO 5acy3jmjivDVjiIUK9yKKga/GAKNNeuuy/8M+79DFLO4QKE+sRLkc5SSqwruF3r3H3 f/V6yk25ARRxF4fa/M1t3gh8gdac2DIpBjqUKQdnOD3U7gjbz0PVpxMMCy7OjI43G7 dffADwYmp9Jlw== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:22 +0100 Subject: [PATCH net-next v4 03/16] net: airoha: Move definitions in airoha_eth.h MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-3-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073456_042363_70B1400D X-CRM114-Status: GOOD ( 15.66 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Move common airoha_eth definitions in airoha_eth.h in order to reuse them for Packet Processor Engine (PPE) codebase. PPE module is used to enable support for flowtable hw offloading in airoha_eth driver. 
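To illustrate how the shared header is meant to be consumed once these definitions move out of airoha_eth.c, here is a minimal sketch assuming only the structures shown in airoha_eth.h below (struct airoha_eth, struct airoha_gdm_port, AIROHA_MAX_NUM_GDM_PORTS); the file name airoha_ppe.c and the helper are hypothetical and not part of this patch:

#include "airoha_eth.h"

/* Hypothetical PPE-side helper: map a net_device back to its GDM port
 * using only the definitions exported by airoha_eth.h.
 */
static struct airoha_gdm_port *
airoha_ppe_dev_to_port(struct airoha_eth *eth, struct net_device *dev)
{
	int i;

	for (i = 0; i < AIROHA_MAX_NUM_GDM_PORTS; i++) {
		struct airoha_gdm_port *port = eth->ports[i];

		if (port && port->dev == dev)
			return port;
	}

	return NULL;
}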
Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 240 +---------------------------- drivers/net/ethernet/airoha/airoha_eth.h | 251 +++++++++++++++++++++++++++++++ 2 files changed, 252 insertions(+), 239 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index aa5f220ddbcf9ca5bee1173114294cb3aec701c9..0048a5665576afaf532778f0bd96be8b07d29703 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -3,14 +3,9 @@ * Copyright (c) 2024 AIROHA Inc * Author: Lorenzo Bianconi */ -#include -#include -#include -#include #include #include #include -#include #include #include #include @@ -18,35 +13,7 @@ #include #include -#define AIROHA_MAX_NUM_GDM_PORTS 1 -#define AIROHA_MAX_NUM_QDMA 2 -#define AIROHA_MAX_NUM_RSTS 3 -#define AIROHA_MAX_NUM_XSI_RSTS 5 -#define AIROHA_MAX_MTU 2000 -#define AIROHA_MAX_PACKET_SIZE 2048 -#define AIROHA_NUM_QOS_CHANNELS 4 -#define AIROHA_NUM_QOS_QUEUES 8 -#define AIROHA_NUM_TX_RING 32 -#define AIROHA_NUM_RX_RING 32 -#define AIROHA_NUM_NETDEV_TX_RINGS (AIROHA_NUM_TX_RING + \ - AIROHA_NUM_QOS_CHANNELS) -#define AIROHA_FE_MC_MAX_VLAN_TABLE 64 -#define AIROHA_FE_MC_MAX_VLAN_PORT 16 -#define AIROHA_NUM_TX_IRQ 2 -#define HW_DSCP_NUM 2048 -#define IRQ_QUEUE_LEN(_n) ((_n) ? 1024 : 2048) -#define TX_DSCP_NUM 1024 -#define RX_DSCP_NUM(_n) \ - ((_n) == 2 ? 128 : \ - (_n) == 11 ? 128 : \ - (_n) == 15 ? 128 : \ - (_n) == 0 ? 1024 : 16) - -#define PSE_RSV_PAGES 128 -#define PSE_QUEUE_RSV_PAGES 64 - -#define QDMA_METER_IDX(_n) ((_n) & 0xff) -#define QDMA_METER_GROUP(_n) (((_n) >> 8) & 0x3) +#include "airoha_eth.h" /* FE */ #define PSE_BASE 0x0100 @@ -706,211 +673,6 @@ struct airoha_qdma_fwd_desc { __le32 rsv1; }; -enum { - QDMA_INT_REG_IDX0, - QDMA_INT_REG_IDX1, - QDMA_INT_REG_IDX2, - QDMA_INT_REG_IDX3, - QDMA_INT_REG_IDX4, - QDMA_INT_REG_MAX -}; - -enum { - XSI_PCIE0_PORT, - XSI_PCIE1_PORT, - XSI_USB_PORT, - XSI_AE_PORT, - XSI_ETH_PORT, -}; - -enum { - XSI_PCIE0_VIP_PORT_MASK = BIT(22), - XSI_PCIE1_VIP_PORT_MASK = BIT(23), - XSI_USB_VIP_PORT_MASK = BIT(25), - XSI_ETH_VIP_PORT_MASK = BIT(24), -}; - -enum { - DEV_STATE_INITIALIZED, -}; - -enum { - CDM_CRSN_QSEL_Q1 = 1, - CDM_CRSN_QSEL_Q5 = 5, - CDM_CRSN_QSEL_Q6 = 6, - CDM_CRSN_QSEL_Q15 = 15, -}; - -enum { - CRSN_08 = 0x8, - CRSN_21 = 0x15, /* KA */ - CRSN_22 = 0x16, /* hit bind and force route to CPU */ - CRSN_24 = 0x18, - CRSN_25 = 0x19, -}; - -enum { - FE_PSE_PORT_CDM1, - FE_PSE_PORT_GDM1, - FE_PSE_PORT_GDM2, - FE_PSE_PORT_GDM3, - FE_PSE_PORT_PPE1, - FE_PSE_PORT_CDM2, - FE_PSE_PORT_CDM3, - FE_PSE_PORT_CDM4, - FE_PSE_PORT_PPE2, - FE_PSE_PORT_GDM4, - FE_PSE_PORT_CDM5, - FE_PSE_PORT_DROP = 0xf, -}; - -enum tx_sched_mode { - TC_SCH_WRR8, - TC_SCH_SP, - TC_SCH_WRR7, - TC_SCH_WRR6, - TC_SCH_WRR5, - TC_SCH_WRR4, - TC_SCH_WRR3, - TC_SCH_WRR2, -}; - -enum trtcm_param_type { - TRTCM_MISC_MODE, /* meter_en, pps_mode, tick_sel */ - TRTCM_TOKEN_RATE_MODE, - TRTCM_BUCKETSIZE_SHIFT_MODE, - TRTCM_BUCKET_COUNTER_MODE, -}; - -enum trtcm_mode_type { - TRTCM_COMMIT_MODE, - TRTCM_PEAK_MODE, -}; - -enum trtcm_param { - TRTCM_TICK_SEL = BIT(0), - TRTCM_PKT_MODE = BIT(1), - TRTCM_METER_MODE = BIT(2), -}; - -#define MIN_TOKEN_SIZE 4096 -#define MAX_TOKEN_SIZE_OFFSET 17 -#define TRTCM_TOKEN_RATE_MASK GENMASK(23, 6) -#define TRTCM_TOKEN_RATE_FRACTION_MASK GENMASK(5, 0) - -struct airoha_queue_entry { - union { - void *buf; - struct sk_buff *skb; - }; - dma_addr_t dma_addr; - u16 dma_len; -}; - -struct airoha_queue { - struct 
airoha_qdma *qdma; - - /* protect concurrent queue accesses */ - spinlock_t lock; - struct airoha_queue_entry *entry; - struct airoha_qdma_desc *desc; - u16 head; - u16 tail; - - int queued; - int ndesc; - int free_thr; - int buf_size; - - struct napi_struct napi; - struct page_pool *page_pool; -}; - -struct airoha_tx_irq_queue { - struct airoha_qdma *qdma; - - struct napi_struct napi; - - int size; - u32 *q; -}; - -struct airoha_hw_stats { - /* protect concurrent hw_stats accesses */ - spinlock_t lock; - struct u64_stats_sync syncp; - - /* get_stats64 */ - u64 rx_ok_pkts; - u64 tx_ok_pkts; - u64 rx_ok_bytes; - u64 tx_ok_bytes; - u64 rx_multicast; - u64 rx_errors; - u64 rx_drops; - u64 tx_drops; - u64 rx_crc_error; - u64 rx_over_errors; - /* ethtool stats */ - u64 tx_broadcast; - u64 tx_multicast; - u64 tx_len[7]; - u64 rx_broadcast; - u64 rx_fragment; - u64 rx_jabber; - u64 rx_len[7]; -}; - -struct airoha_qdma { - struct airoha_eth *eth; - void __iomem *regs; - - /* protect concurrent irqmask accesses */ - spinlock_t irq_lock; - u32 irqmask[QDMA_INT_REG_MAX]; - int irq; - - struct airoha_tx_irq_queue q_tx_irq[AIROHA_NUM_TX_IRQ]; - - struct airoha_queue q_tx[AIROHA_NUM_TX_RING]; - struct airoha_queue q_rx[AIROHA_NUM_RX_RING]; - - /* descriptor and packet buffers for qdma hw forward */ - struct { - void *desc; - void *q; - } hfwd; -}; - -struct airoha_gdm_port { - struct airoha_qdma *qdma; - struct net_device *dev; - int id; - - struct airoha_hw_stats stats; - - DECLARE_BITMAP(qos_sq_bmap, AIROHA_NUM_QOS_CHANNELS); - - /* qos stats counters */ - u64 cpu_tx_packets; - u64 fwd_tx_packets; -}; - -struct airoha_eth { - struct device *dev; - - unsigned long state; - void __iomem *fe_regs; - - struct reset_control_bulk_data rsts[AIROHA_MAX_NUM_RSTS]; - struct reset_control_bulk_data xsi_rsts[AIROHA_MAX_NUM_XSI_RSTS]; - - struct net_device *napi_dev; - - struct airoha_qdma qdma[AIROHA_MAX_NUM_QDMA]; - struct airoha_gdm_port *ports[AIROHA_MAX_NUM_GDM_PORTS]; -}; - static u32 airoha_rr(void __iomem *base, u32 offset) { return readl(base + offset); diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h new file mode 100644 index 0000000000000000000000000000000000000000..3310e0cf85f1d240e95404a0c15703e5f6be26bc --- /dev/null +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -0,0 +1,251 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2024 AIROHA Inc + * Author: Lorenzo Bianconi + */ + +#ifndef AIROHA_ETH_H +#define AIROHA_ETH_H + +#include +#include +#include +#include +#include + +#define AIROHA_MAX_NUM_GDM_PORTS 1 +#define AIROHA_MAX_NUM_QDMA 2 +#define AIROHA_MAX_NUM_RSTS 3 +#define AIROHA_MAX_NUM_XSI_RSTS 5 +#define AIROHA_MAX_MTU 2000 +#define AIROHA_MAX_PACKET_SIZE 2048 +#define AIROHA_NUM_QOS_CHANNELS 4 +#define AIROHA_NUM_QOS_QUEUES 8 +#define AIROHA_NUM_TX_RING 32 +#define AIROHA_NUM_RX_RING 32 +#define AIROHA_NUM_NETDEV_TX_RINGS (AIROHA_NUM_TX_RING + \ + AIROHA_NUM_QOS_CHANNELS) +#define AIROHA_FE_MC_MAX_VLAN_TABLE 64 +#define AIROHA_FE_MC_MAX_VLAN_PORT 16 +#define AIROHA_NUM_TX_IRQ 2 +#define HW_DSCP_NUM 2048 +#define IRQ_QUEUE_LEN(_n) ((_n) ? 1024 : 2048) +#define TX_DSCP_NUM 1024 +#define RX_DSCP_NUM(_n) \ + ((_n) == 2 ? 128 : \ + (_n) == 11 ? 128 : \ + (_n) == 15 ? 128 : \ + (_n) == 0 ? 
1024 : 16) + +#define PSE_RSV_PAGES 128 +#define PSE_QUEUE_RSV_PAGES 64 + +#define QDMA_METER_IDX(_n) ((_n) & 0xff) +#define QDMA_METER_GROUP(_n) (((_n) >> 8) & 0x3) + +enum { + QDMA_INT_REG_IDX0, + QDMA_INT_REG_IDX1, + QDMA_INT_REG_IDX2, + QDMA_INT_REG_IDX3, + QDMA_INT_REG_IDX4, + QDMA_INT_REG_MAX +}; + +enum { + XSI_PCIE0_PORT, + XSI_PCIE1_PORT, + XSI_USB_PORT, + XSI_AE_PORT, + XSI_ETH_PORT, +}; + +enum { + XSI_PCIE0_VIP_PORT_MASK = BIT(22), + XSI_PCIE1_VIP_PORT_MASK = BIT(23), + XSI_USB_VIP_PORT_MASK = BIT(25), + XSI_ETH_VIP_PORT_MASK = BIT(24), +}; + +enum { + DEV_STATE_INITIALIZED, +}; + +enum { + CDM_CRSN_QSEL_Q1 = 1, + CDM_CRSN_QSEL_Q5 = 5, + CDM_CRSN_QSEL_Q6 = 6, + CDM_CRSN_QSEL_Q15 = 15, +}; + +enum { + CRSN_08 = 0x8, + CRSN_21 = 0x15, /* KA */ + CRSN_22 = 0x16, /* hit bind and force route to CPU */ + CRSN_24 = 0x18, + CRSN_25 = 0x19, +}; + +enum { + FE_PSE_PORT_CDM1, + FE_PSE_PORT_GDM1, + FE_PSE_PORT_GDM2, + FE_PSE_PORT_GDM3, + FE_PSE_PORT_PPE1, + FE_PSE_PORT_CDM2, + FE_PSE_PORT_CDM3, + FE_PSE_PORT_CDM4, + FE_PSE_PORT_PPE2, + FE_PSE_PORT_GDM4, + FE_PSE_PORT_CDM5, + FE_PSE_PORT_DROP = 0xf, +}; + +enum tx_sched_mode { + TC_SCH_WRR8, + TC_SCH_SP, + TC_SCH_WRR7, + TC_SCH_WRR6, + TC_SCH_WRR5, + TC_SCH_WRR4, + TC_SCH_WRR3, + TC_SCH_WRR2, +}; + +enum trtcm_param_type { + TRTCM_MISC_MODE, /* meter_en, pps_mode, tick_sel */ + TRTCM_TOKEN_RATE_MODE, + TRTCM_BUCKETSIZE_SHIFT_MODE, + TRTCM_BUCKET_COUNTER_MODE, +}; + +enum trtcm_mode_type { + TRTCM_COMMIT_MODE, + TRTCM_PEAK_MODE, +}; + +enum trtcm_param { + TRTCM_TICK_SEL = BIT(0), + TRTCM_PKT_MODE = BIT(1), + TRTCM_METER_MODE = BIT(2), +}; + +#define MIN_TOKEN_SIZE 4096 +#define MAX_TOKEN_SIZE_OFFSET 17 +#define TRTCM_TOKEN_RATE_MASK GENMASK(23, 6) +#define TRTCM_TOKEN_RATE_FRACTION_MASK GENMASK(5, 0) + +struct airoha_queue_entry { + union { + void *buf; + struct sk_buff *skb; + }; + dma_addr_t dma_addr; + u16 dma_len; +}; + +struct airoha_queue { + struct airoha_qdma *qdma; + + /* protect concurrent queue accesses */ + spinlock_t lock; + struct airoha_queue_entry *entry; + struct airoha_qdma_desc *desc; + u16 head; + u16 tail; + + int queued; + int ndesc; + int free_thr; + int buf_size; + + struct napi_struct napi; + struct page_pool *page_pool; +}; + +struct airoha_tx_irq_queue { + struct airoha_qdma *qdma; + + struct napi_struct napi; + + int size; + u32 *q; +}; + +struct airoha_hw_stats { + /* protect concurrent hw_stats accesses */ + spinlock_t lock; + struct u64_stats_sync syncp; + + /* get_stats64 */ + u64 rx_ok_pkts; + u64 tx_ok_pkts; + u64 rx_ok_bytes; + u64 tx_ok_bytes; + u64 rx_multicast; + u64 rx_errors; + u64 rx_drops; + u64 tx_drops; + u64 rx_crc_error; + u64 rx_over_errors; + /* ethtool stats */ + u64 tx_broadcast; + u64 tx_multicast; + u64 tx_len[7]; + u64 rx_broadcast; + u64 rx_fragment; + u64 rx_jabber; + u64 rx_len[7]; +}; + +struct airoha_qdma { + struct airoha_eth *eth; + void __iomem *regs; + + /* protect concurrent irqmask accesses */ + spinlock_t irq_lock; + u32 irqmask[QDMA_INT_REG_MAX]; + int irq; + + struct airoha_tx_irq_queue q_tx_irq[AIROHA_NUM_TX_IRQ]; + + struct airoha_queue q_tx[AIROHA_NUM_TX_RING]; + struct airoha_queue q_rx[AIROHA_NUM_RX_RING]; + + /* descriptor and packet buffers for qdma hw forward */ + struct { + void *desc; + void *q; + } hfwd; +}; + +struct airoha_gdm_port { + struct airoha_qdma *qdma; + struct net_device *dev; + int id; + + struct airoha_hw_stats stats; + + DECLARE_BITMAP(qos_sq_bmap, AIROHA_NUM_QOS_CHANNELS); + + /* qos stats counters */ + u64 cpu_tx_packets; + u64 fwd_tx_packets; +}; 
+ +struct airoha_eth { + struct device *dev; + + unsigned long state; + void __iomem *fe_regs; + + struct reset_control_bulk_data rsts[AIROHA_MAX_NUM_RSTS]; + struct reset_control_bulk_data xsi_rsts[AIROHA_MAX_NUM_XSI_RSTS]; + + struct net_device *napi_dev; + + struct airoha_qdma qdma[AIROHA_MAX_NUM_QDMA]; + struct airoha_gdm_port *ports[AIROHA_MAX_NUM_GDM_PORTS]; +}; + +#endif /* AIROHA_ETH_H */ From patchwork Thu Feb 13 15:34:23 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973495 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 78563C021A0 for ; Thu, 13 Feb 2025 15:42:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=5kJky53O1BNSrKuVwtvgegZZH840bmuuggpyY2a/1mY=; b=od/dwsp9mXdzj3Q1VkeO1OxaKO lIOV3Z4zdYE6IRWfjfxk9v3KnTm8wFq3NFiW0DW8fTJ3eZn2P2ET2aQ3wTj00aavQuEpS3lcE7KPo OSBj0oDtOFDPmPoWV0Fy3qLjr6msDZ5VKl9omGLrjZpeJGMG+Y0L7pvJAWy2MqHdkpA9XC3FEykMF dUKN5FU72MwjJ/3faQ2XT3wK4vjK7bn89bxB2CVeZMDTJtYgMrgl1mYZ7SBssjBQo5dkNshWuZMko 8Rsub+D1z7YMigOKDCq2j41/caqKbUBcpSwEnY8vP0nnmr7jnxBqoSLvVMRonSbymr06tEjSRoits nzDUQkFw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibLr-0000000BaZy-30bf; Thu, 13 Feb 2025 15:42:07 +0000 Received: from nyc.source.kernel.org ([147.75.193.91]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibEw-0000000BYgK-2ZYc; Thu, 13 Feb 2025 15:34:59 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 8D177A426DA; Thu, 13 Feb 2025 15:33:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3649BC4CED1; Thu, 13 Feb 2025 15:34:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460897; bh=LYtv9hkmflZk9C5liopfd0s1ChdnvNXvY0QApzYcDPA=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=PKOiMOFyL11xk9BD6nFG8LluJQRpmzcv4N0/H/4tGhEyGTp52CmHz+aYzZXgjns3o adan/D9veTS5Un8YEl5Emy/36DSJoYar/z3XobTWC904C9uyc3AKQznRn+EYrxXaNl VfagUM6lgvykgX1qzgDlPjtZcUUdzEq1plnhFUJQvW/LHbfGd01Tn+oh1KL/kqJlS6 ubpI7gTfgjyeYR73fTw5jknqCBJ4l1vqlBK2WSjt3NYjgs6MfNLb47Po1/R+wXGIZO iWjSA34MbEjJYWaR0GbRuP4EBZhKgyb+aE7dCBCJBTQOl0Z0UZUQP+Yd+VT/wrkr4F YdaPjK3y+wdJg== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:23 +0100 Subject: [PATCH net-next v4 04/16] net: airoha: Move reg/write utility routines in airoha_eth.h MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-4-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073458_787684_260105C5 X-CRM114-Status: GOOD ( 10.27 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This is a preliminary patch to introduce flowtable hw offloading support for airoha_eth driver. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 28 +++------------------------- drivers/net/ethernet/airoha/airoha_eth.h | 26 ++++++++++++++++++++++++++ 2 files changed, 29 insertions(+), 25 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 0048a5665576afaf532778f0bd96be8b07d29703..1c6fb7b9ccbbaec846643343e0347a1e948a575f 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -673,17 +673,17 @@ struct airoha_qdma_fwd_desc { __le32 rsv1; }; -static u32 airoha_rr(void __iomem *base, u32 offset) +u32 airoha_rr(void __iomem *base, u32 offset) { return readl(base + offset); } -static void airoha_wr(void __iomem *base, u32 offset, u32 val) +void airoha_wr(void __iomem *base, u32 offset, u32 val) { writel(val, base + offset); } -static u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val) +u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val) { val |= (airoha_rr(base, offset) & ~mask); airoha_wr(base, offset, val); @@ -691,28 +691,6 @@ static u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val) return val; } -#define airoha_fe_rr(eth, offset) \ - airoha_rr((eth)->fe_regs, (offset)) -#define airoha_fe_wr(eth, offset, val) \ - airoha_wr((eth)->fe_regs, (offset), (val)) -#define airoha_fe_rmw(eth, offset, mask, val) \ - airoha_rmw((eth)->fe_regs, (offset), (mask), (val)) -#define airoha_fe_set(eth, offset, val) \ - airoha_rmw((eth)->fe_regs, (offset), 0, (val)) -#define airoha_fe_clear(eth, offset, val) \ - airoha_rmw((eth)->fe_regs, (offset), (val), 0) - -#define airoha_qdma_rr(qdma, offset) \ - airoha_rr((qdma)->regs, (offset)) -#define airoha_qdma_wr(qdma, offset, val) \ - airoha_wr((qdma)->regs, (offset), (val)) -#define airoha_qdma_rmw(qdma, offset, mask, val) \ - airoha_rmw((qdma)->regs, (offset), (mask), (val)) -#define airoha_qdma_set(qdma, offset, val) \ - airoha_rmw((qdma)->regs, (offset), 0, (val)) -#define airoha_qdma_clear(qdma, offset, val) \ - airoha_rmw((qdma)->regs, (offset), (val), 0) - static void airoha_qdma_set_irqmask(struct airoha_qdma *qdma, int index, u32 clear, u32 set) { diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 3310e0cf85f1d240e95404a0c15703e5f6be26bc..743aaf10235fe09fb2a91b491f4b25064ed8319b 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -248,4 +248,30 @@ struct airoha_eth 
{ struct airoha_gdm_port *ports[AIROHA_MAX_NUM_GDM_PORTS]; }; +u32 airoha_rr(void __iomem *base, u32 offset); +void airoha_wr(void __iomem *base, u32 offset, u32 val); +u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val); + +#define airoha_fe_rr(eth, offset) \ + airoha_rr((eth)->fe_regs, (offset)) +#define airoha_fe_wr(eth, offset, val) \ + airoha_wr((eth)->fe_regs, (offset), (val)) +#define airoha_fe_rmw(eth, offset, mask, val) \ + airoha_rmw((eth)->fe_regs, (offset), (mask), (val)) +#define airoha_fe_set(eth, offset, val) \ + airoha_rmw((eth)->fe_regs, (offset), 0, (val)) +#define airoha_fe_clear(eth, offset, val) \ + airoha_rmw((eth)->fe_regs, (offset), (val), 0) + +#define airoha_qdma_rr(qdma, offset) \ + airoha_rr((qdma)->regs, (offset)) +#define airoha_qdma_wr(qdma, offset, val) \ + airoha_wr((qdma)->regs, (offset), (val)) +#define airoha_qdma_rmw(qdma, offset, mask, val) \ + airoha_rmw((qdma)->regs, (offset), (mask), (val)) +#define airoha_qdma_set(qdma, offset, val) \ + airoha_rmw((qdma)->regs, (offset), 0, (val)) +#define airoha_qdma_clear(qdma, offset, val) \ + airoha_rmw((qdma)->regs, (offset), (val), 0) + #endif /* AIROHA_ETH_H */ From patchwork Thu Feb 13 15:34:24 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973496 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 10C3CC0219D for ; Thu, 13 Feb 2025 15:43:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=lWiBQaNl8iHfsM2cxKX7zcmu+lchYoouNlGe8u1xbRE=; b=nqwiyJy57zhAG6WnJ2MMLCXwsy MHP96z9OTOSqee9qJA8kI77RNulpDICyEeD0fcOskAgCEGURwfjUPMkAp8GW+K4Oz7LorfiTZN3+x Q3Xs7cTwE8MJY1bqoBCJuCnmapYO8jxpESqgcVeO6t4ByaZ86tkQhqMoPo6ep+zf20F5OEdH8r/Nd YGSJp8XKoR/kh1Jg4ngRgOUt5zuXhqMtI6HJWcqLEKRyOP6Fq4tZ5TV+LAe8PixjySJiLRNCmh6Qd Iz/p79CkaVrcWuM4jp9iFl6qtxLIRJgpF1G9BDfgFMaykYePKDoEF8cI+zV+YMU6fhdORgTGnhfI9 TDIe2wNA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibNG-0000000Bb8C-46UC; Thu, 13 Feb 2025 15:43:34 +0000 Received: from nyc.source.kernel.org ([147.75.193.91]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibEz-0000000BYi1-3ARE; Thu, 13 Feb 2025 15:35:04 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 90FD5A41CDB; Thu, 13 Feb 2025 15:33:15 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 10A5CC4CED1; Thu, 13 Feb 2025 15:34:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460900; bh=lGu7kyyDxjP87LBdKrwQNQPjy/Q+PtBmP2j/tK+GVcg=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=OEX0VOMwIec6lD0hq1qCkfNnJj08V55C2cd8EWDCR63U51paWshLSW15sw9af5//N 
IY788+DZ31ljRrpQEEEJ5xVYcYFetReibh0r3f+fruWN+ADTvd1S0CtP+tocoENng4 UkQtpdlWIvJvL2FUjHKr0INEsGRflvQxb/XYO3Lvfbzb7yJxp64c2IPZCO5ABdlEQp WchaMaFmnbGQKSEk8adK2wg9T6VhCblq5JA0Hd0jgGDqR8cGus4YeAuVqbUjCYluwY g76uAxWRj08d6zF1O/4uxBXv4dys/Aj6Q7/q/fZnEGMSL+9xysgbTLsswxOOuNDxov 81kxxdXT0hHrg== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:24 +0100 Subject: [PATCH net-next v4 05/16] net: airoha: Move register definitions in airoha_regs.h MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-5-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073502_020384_EF5F3867 X-CRM114-Status: UNSURE ( 7.81 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Move common airoha_eth register definitions in airoha_regs.h in order to reuse them for Packet Processor Engine (PPE) codebase. PPE module is used to enable support for flowtable hw offloading in airoha_eth driver. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 659 +---------------------------- drivers/net/ethernet/airoha/airoha_regs.h | 670 ++++++++++++++++++++++++++++++ 2 files changed, 671 insertions(+), 658 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 1c6fb7b9ccbbaec846643343e0347a1e948a575f..b79556f1b4951c687aa89bc5839fc9405581a6c3 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -13,666 +13,9 @@ #include #include +#include "airoha_regs.h" #include "airoha_eth.h" -/* FE */ -#define PSE_BASE 0x0100 -#define CSR_IFC_BASE 0x0200 -#define CDM1_BASE 0x0400 -#define GDM1_BASE 0x0500 -#define PPE1_BASE 0x0c00 - -#define CDM2_BASE 0x1400 -#define GDM2_BASE 0x1500 - -#define GDM3_BASE 0x1100 -#define GDM4_BASE 0x2500 - -#define GDM_BASE(_n) \ - ((_n) == 4 ? GDM4_BASE : \ - (_n) == 3 ? GDM3_BASE : \ - (_n) == 2 ? 
GDM2_BASE : GDM1_BASE) - -#define REG_FE_DMA_GLO_CFG 0x0000 -#define FE_DMA_GLO_L2_SPACE_MASK GENMASK(7, 4) -#define FE_DMA_GLO_PG_SZ_MASK BIT(3) - -#define REG_FE_RST_GLO_CFG 0x0004 -#define FE_RST_GDM4_MBI_ARB_MASK BIT(3) -#define FE_RST_GDM3_MBI_ARB_MASK BIT(2) -#define FE_RST_CORE_MASK BIT(0) - -#define REG_FE_WAN_MAC_H 0x0030 -#define REG_FE_LAN_MAC_H 0x0040 - -#define REG_FE_MAC_LMIN(_n) ((_n) + 0x04) -#define REG_FE_MAC_LMAX(_n) ((_n) + 0x08) - -#define REG_FE_CDM1_OQ_MAP0 0x0050 -#define REG_FE_CDM1_OQ_MAP1 0x0054 -#define REG_FE_CDM1_OQ_MAP2 0x0058 -#define REG_FE_CDM1_OQ_MAP3 0x005c - -#define REG_FE_PCE_CFG 0x0070 -#define PCE_DPI_EN_MASK BIT(2) -#define PCE_KA_EN_MASK BIT(1) -#define PCE_MC_EN_MASK BIT(0) - -#define REG_FE_PSE_QUEUE_CFG_WR 0x0080 -#define PSE_CFG_PORT_ID_MASK GENMASK(27, 24) -#define PSE_CFG_QUEUE_ID_MASK GENMASK(20, 16) -#define PSE_CFG_WR_EN_MASK BIT(8) -#define PSE_CFG_OQRSV_SEL_MASK BIT(0) - -#define REG_FE_PSE_QUEUE_CFG_VAL 0x0084 -#define PSE_CFG_OQ_RSV_MASK GENMASK(13, 0) - -#define PSE_FQ_CFG 0x008c -#define PSE_FQ_LIMIT_MASK GENMASK(14, 0) - -#define REG_FE_PSE_BUF_SET 0x0090 -#define PSE_SHARE_USED_LTHD_MASK GENMASK(31, 16) -#define PSE_ALLRSV_MASK GENMASK(14, 0) - -#define REG_PSE_SHARE_USED_THD 0x0094 -#define PSE_SHARE_USED_MTHD_MASK GENMASK(31, 16) -#define PSE_SHARE_USED_HTHD_MASK GENMASK(15, 0) - -#define REG_GDM_MISC_CFG 0x0148 -#define GDM2_RDM_ACK_WAIT_PREF_MASK BIT(9) -#define GDM2_CHN_VLD_MODE_MASK BIT(5) - -#define REG_FE_CSR_IFC_CFG CSR_IFC_BASE -#define FE_IFC_EN_MASK BIT(0) - -#define REG_FE_VIP_PORT_EN 0x01f0 -#define REG_FE_IFC_PORT_EN 0x01f4 - -#define REG_PSE_IQ_REV1 (PSE_BASE + 0x08) -#define PSE_IQ_RES1_P2_MASK GENMASK(23, 16) - -#define REG_PSE_IQ_REV2 (PSE_BASE + 0x0c) -#define PSE_IQ_RES2_P5_MASK GENMASK(15, 8) -#define PSE_IQ_RES2_P4_MASK GENMASK(7, 0) - -#define REG_FE_VIP_EN(_n) (0x0300 + ((_n) << 3)) -#define PATN_FCPU_EN_MASK BIT(7) -#define PATN_SWP_EN_MASK BIT(6) -#define PATN_DP_EN_MASK BIT(5) -#define PATN_SP_EN_MASK BIT(4) -#define PATN_TYPE_MASK GENMASK(3, 1) -#define PATN_EN_MASK BIT(0) - -#define REG_FE_VIP_PATN(_n) (0x0304 + ((_n) << 3)) -#define PATN_DP_MASK GENMASK(31, 16) -#define PATN_SP_MASK GENMASK(15, 0) - -#define REG_CDM1_VLAN_CTRL CDM1_BASE -#define CDM1_VLAN_MASK GENMASK(31, 16) - -#define REG_CDM1_FWD_CFG (CDM1_BASE + 0x08) -#define CDM1_VIP_QSEL_MASK GENMASK(24, 20) - -#define REG_CDM1_CRSN_QSEL(_n) (CDM1_BASE + 0x10 + ((_n) << 2)) -#define CDM1_CRSN_QSEL_REASON_MASK(_n) \ - GENMASK(4 + (((_n) % 4) << 3), (((_n) % 4) << 3)) - -#define REG_CDM2_FWD_CFG (CDM2_BASE + 0x08) -#define CDM2_OAM_QSEL_MASK GENMASK(31, 27) -#define CDM2_VIP_QSEL_MASK GENMASK(24, 20) - -#define REG_CDM2_CRSN_QSEL(_n) (CDM2_BASE + 0x10 + ((_n) << 2)) -#define CDM2_CRSN_QSEL_REASON_MASK(_n) \ - GENMASK(4 + (((_n) % 4) << 3), (((_n) % 4) << 3)) - -#define REG_GDM_FWD_CFG(_n) GDM_BASE(_n) -#define GDM_DROP_CRC_ERR BIT(23) -#define GDM_IP4_CKSUM BIT(22) -#define GDM_TCP_CKSUM BIT(21) -#define GDM_UDP_CKSUM BIT(20) -#define GDM_UCFQ_MASK GENMASK(15, 12) -#define GDM_BCFQ_MASK GENMASK(11, 8) -#define GDM_MCFQ_MASK GENMASK(7, 4) -#define GDM_OCFQ_MASK GENMASK(3, 0) - -#define REG_GDM_INGRESS_CFG(_n) (GDM_BASE(_n) + 0x10) -#define GDM_INGRESS_FC_EN_MASK BIT(1) -#define GDM_STAG_EN_MASK BIT(0) - -#define REG_GDM_LEN_CFG(_n) (GDM_BASE(_n) + 0x14) -#define GDM_SHORT_LEN_MASK GENMASK(13, 0) -#define GDM_LONG_LEN_MASK GENMASK(29, 16) - -#define REG_FE_CPORT_CFG (GDM1_BASE + 0x40) -#define FE_CPORT_PAD BIT(26) -#define FE_CPORT_PORT_XFC_MASK 
BIT(25) -#define FE_CPORT_QUEUE_XFC_MASK BIT(24) - -#define REG_FE_GDM_MIB_CLEAR(_n) (GDM_BASE(_n) + 0xf0) -#define FE_GDM_MIB_RX_CLEAR_MASK BIT(1) -#define FE_GDM_MIB_TX_CLEAR_MASK BIT(0) - -#define REG_FE_GDM1_MIB_CFG (GDM1_BASE + 0xf4) -#define FE_STRICT_RFC2819_MODE_MASK BIT(31) -#define FE_GDM1_TX_MIB_SPLIT_EN_MASK BIT(17) -#define FE_GDM1_RX_MIB_SPLIT_EN_MASK BIT(16) -#define FE_TX_MIB_ID_MASK GENMASK(15, 8) -#define FE_RX_MIB_ID_MASK GENMASK(7, 0) - -#define REG_FE_GDM_TX_OK_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x104) -#define REG_FE_GDM_TX_OK_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x10c) -#define REG_FE_GDM_TX_ETH_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x110) -#define REG_FE_GDM_TX_ETH_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x114) -#define REG_FE_GDM_TX_ETH_DROP_CNT(_n) (GDM_BASE(_n) + 0x118) -#define REG_FE_GDM_TX_ETH_BC_CNT(_n) (GDM_BASE(_n) + 0x11c) -#define REG_FE_GDM_TX_ETH_MC_CNT(_n) (GDM_BASE(_n) + 0x120) -#define REG_FE_GDM_TX_ETH_RUNT_CNT(_n) (GDM_BASE(_n) + 0x124) -#define REG_FE_GDM_TX_ETH_LONG_CNT(_n) (GDM_BASE(_n) + 0x128) -#define REG_FE_GDM_TX_ETH_E64_CNT_L(_n) (GDM_BASE(_n) + 0x12c) -#define REG_FE_GDM_TX_ETH_L64_CNT_L(_n) (GDM_BASE(_n) + 0x130) -#define REG_FE_GDM_TX_ETH_L127_CNT_L(_n) (GDM_BASE(_n) + 0x134) -#define REG_FE_GDM_TX_ETH_L255_CNT_L(_n) (GDM_BASE(_n) + 0x138) -#define REG_FE_GDM_TX_ETH_L511_CNT_L(_n) (GDM_BASE(_n) + 0x13c) -#define REG_FE_GDM_TX_ETH_L1023_CNT_L(_n) (GDM_BASE(_n) + 0x140) - -#define REG_FE_GDM_RX_OK_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x148) -#define REG_FE_GDM_RX_FC_DROP_CNT(_n) (GDM_BASE(_n) + 0x14c) -#define REG_FE_GDM_RX_RC_DROP_CNT(_n) (GDM_BASE(_n) + 0x150) -#define REG_FE_GDM_RX_OVERFLOW_DROP_CNT(_n) (GDM_BASE(_n) + 0x154) -#define REG_FE_GDM_RX_ERROR_DROP_CNT(_n) (GDM_BASE(_n) + 0x158) -#define REG_FE_GDM_RX_OK_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x15c) -#define REG_FE_GDM_RX_ETH_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x160) -#define REG_FE_GDM_RX_ETH_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x164) -#define REG_FE_GDM_RX_ETH_DROP_CNT(_n) (GDM_BASE(_n) + 0x168) -#define REG_FE_GDM_RX_ETH_BC_CNT(_n) (GDM_BASE(_n) + 0x16c) -#define REG_FE_GDM_RX_ETH_MC_CNT(_n) (GDM_BASE(_n) + 0x170) -#define REG_FE_GDM_RX_ETH_CRC_ERR_CNT(_n) (GDM_BASE(_n) + 0x174) -#define REG_FE_GDM_RX_ETH_FRAG_CNT(_n) (GDM_BASE(_n) + 0x178) -#define REG_FE_GDM_RX_ETH_JABBER_CNT(_n) (GDM_BASE(_n) + 0x17c) -#define REG_FE_GDM_RX_ETH_RUNT_CNT(_n) (GDM_BASE(_n) + 0x180) -#define REG_FE_GDM_RX_ETH_LONG_CNT(_n) (GDM_BASE(_n) + 0x184) -#define REG_FE_GDM_RX_ETH_E64_CNT_L(_n) (GDM_BASE(_n) + 0x188) -#define REG_FE_GDM_RX_ETH_L64_CNT_L(_n) (GDM_BASE(_n) + 0x18c) -#define REG_FE_GDM_RX_ETH_L127_CNT_L(_n) (GDM_BASE(_n) + 0x190) -#define REG_FE_GDM_RX_ETH_L255_CNT_L(_n) (GDM_BASE(_n) + 0x194) -#define REG_FE_GDM_RX_ETH_L511_CNT_L(_n) (GDM_BASE(_n) + 0x198) -#define REG_FE_GDM_RX_ETH_L1023_CNT_L(_n) (GDM_BASE(_n) + 0x19c) - -#define REG_PPE1_TB_HASH_CFG (PPE1_BASE + 0x250) -#define PPE1_SRAM_TABLE_EN_MASK BIT(0) -#define PPE1_SRAM_HASH1_EN_MASK BIT(8) -#define PPE1_DRAM_TABLE_EN_MASK BIT(16) -#define PPE1_DRAM_HASH1_EN_MASK BIT(24) - -#define REG_FE_GDM_TX_OK_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x280) -#define REG_FE_GDM_TX_OK_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x284) -#define REG_FE_GDM_TX_ETH_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x288) -#define REG_FE_GDM_TX_ETH_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x28c) - -#define REG_FE_GDM_RX_OK_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x290) -#define REG_FE_GDM_RX_OK_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x294) -#define REG_FE_GDM_RX_ETH_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x298) -#define REG_FE_GDM_RX_ETH_BYTE_CNT_H(_n) (GDM_BASE(_n) + 
0x29c) -#define REG_FE_GDM_TX_ETH_E64_CNT_H(_n) (GDM_BASE(_n) + 0x2b8) -#define REG_FE_GDM_TX_ETH_L64_CNT_H(_n) (GDM_BASE(_n) + 0x2bc) -#define REG_FE_GDM_TX_ETH_L127_CNT_H(_n) (GDM_BASE(_n) + 0x2c0) -#define REG_FE_GDM_TX_ETH_L255_CNT_H(_n) (GDM_BASE(_n) + 0x2c4) -#define REG_FE_GDM_TX_ETH_L511_CNT_H(_n) (GDM_BASE(_n) + 0x2c8) -#define REG_FE_GDM_TX_ETH_L1023_CNT_H(_n) (GDM_BASE(_n) + 0x2cc) -#define REG_FE_GDM_RX_ETH_E64_CNT_H(_n) (GDM_BASE(_n) + 0x2e8) -#define REG_FE_GDM_RX_ETH_L64_CNT_H(_n) (GDM_BASE(_n) + 0x2ec) -#define REG_FE_GDM_RX_ETH_L127_CNT_H(_n) (GDM_BASE(_n) + 0x2f0) -#define REG_FE_GDM_RX_ETH_L255_CNT_H(_n) (GDM_BASE(_n) + 0x2f4) -#define REG_FE_GDM_RX_ETH_L511_CNT_H(_n) (GDM_BASE(_n) + 0x2f8) -#define REG_FE_GDM_RX_ETH_L1023_CNT_H(_n) (GDM_BASE(_n) + 0x2fc) - -#define REG_GDM2_CHN_RLS (GDM2_BASE + 0x20) -#define MBI_RX_AGE_SEL_MASK GENMASK(26, 25) -#define MBI_TX_AGE_SEL_MASK GENMASK(18, 17) - -#define REG_GDM3_FWD_CFG GDM3_BASE -#define GDM3_PAD_EN_MASK BIT(28) - -#define REG_GDM4_FWD_CFG GDM4_BASE -#define GDM4_PAD_EN_MASK BIT(28) -#define GDM4_SPORT_OFFSET0_MASK GENMASK(11, 8) - -#define REG_GDM4_SRC_PORT_SET (GDM4_BASE + 0x23c) -#define GDM4_SPORT_OFF2_MASK GENMASK(19, 16) -#define GDM4_SPORT_OFF1_MASK GENMASK(15, 12) -#define GDM4_SPORT_OFF0_MASK GENMASK(11, 8) - -#define REG_IP_FRAG_FP 0x2010 -#define IP_ASSEMBLE_PORT_MASK GENMASK(24, 21) -#define IP_ASSEMBLE_NBQ_MASK GENMASK(20, 16) -#define IP_FRAGMENT_PORT_MASK GENMASK(8, 5) -#define IP_FRAGMENT_NBQ_MASK GENMASK(4, 0) - -#define REG_MC_VLAN_EN 0x2100 -#define MC_VLAN_EN_MASK BIT(0) - -#define REG_MC_VLAN_CFG 0x2104 -#define MC_VLAN_CFG_CMD_DONE_MASK BIT(31) -#define MC_VLAN_CFG_TABLE_ID_MASK GENMASK(21, 16) -#define MC_VLAN_CFG_PORT_ID_MASK GENMASK(11, 8) -#define MC_VLAN_CFG_TABLE_SEL_MASK BIT(4) -#define MC_VLAN_CFG_RW_MASK BIT(0) - -#define REG_MC_VLAN_DATA 0x2108 - -#define REG_CDM5_RX_OQ1_DROP_CNT 0x29d4 - -/* QDMA */ -#define REG_QDMA_GLOBAL_CFG 0x0004 -#define GLOBAL_CFG_RX_2B_OFFSET_MASK BIT(31) -#define GLOBAL_CFG_DMA_PREFERENCE_MASK GENMASK(30, 29) -#define GLOBAL_CFG_CPU_TXR_RR_MASK BIT(28) -#define GLOBAL_CFG_DSCP_BYTE_SWAP_MASK BIT(27) -#define GLOBAL_CFG_PAYLOAD_BYTE_SWAP_MASK BIT(26) -#define GLOBAL_CFG_MULTICAST_MODIFY_FP_MASK BIT(25) -#define GLOBAL_CFG_OAM_MODIFY_MASK BIT(24) -#define GLOBAL_CFG_RESET_MASK BIT(23) -#define GLOBAL_CFG_RESET_DONE_MASK BIT(22) -#define GLOBAL_CFG_MULTICAST_EN_MASK BIT(21) -#define GLOBAL_CFG_IRQ1_EN_MASK BIT(20) -#define GLOBAL_CFG_IRQ0_EN_MASK BIT(19) -#define GLOBAL_CFG_LOOPCNT_EN_MASK BIT(18) -#define GLOBAL_CFG_RD_BYPASS_WR_MASK BIT(17) -#define GLOBAL_CFG_QDMA_LOOPBACK_MASK BIT(16) -#define GLOBAL_CFG_LPBK_RXQ_SEL_MASK GENMASK(13, 8) -#define GLOBAL_CFG_CHECK_DONE_MASK BIT(7) -#define GLOBAL_CFG_TX_WB_DONE_MASK BIT(6) -#define GLOBAL_CFG_MAX_ISSUE_NUM_MASK GENMASK(5, 4) -#define GLOBAL_CFG_RX_DMA_BUSY_MASK BIT(3) -#define GLOBAL_CFG_RX_DMA_EN_MASK BIT(2) -#define GLOBAL_CFG_TX_DMA_BUSY_MASK BIT(1) -#define GLOBAL_CFG_TX_DMA_EN_MASK BIT(0) - -#define REG_FWD_DSCP_BASE 0x0010 -#define REG_FWD_BUF_BASE 0x0014 - -#define REG_HW_FWD_DSCP_CFG 0x0018 -#define HW_FWD_DSCP_PAYLOAD_SIZE_MASK GENMASK(29, 28) -#define HW_FWD_DSCP_SCATTER_LEN_MASK GENMASK(17, 16) -#define HW_FWD_DSCP_MIN_SCATTER_LEN_MASK GENMASK(15, 0) - -#define REG_INT_STATUS(_n) \ - (((_n) == 4) ? 0x0730 : \ - ((_n) == 3) ? 0x0724 : \ - ((_n) == 2) ? 0x0720 : \ - ((_n) == 1) ? 0x0024 : 0x0020) - -#define REG_INT_ENABLE(_n) \ - (((_n) == 4) ? 0x0750 : \ - ((_n) == 3) ? 0x0744 : \ - ((_n) == 2) ? 
0x0740 : \ - ((_n) == 1) ? 0x002c : 0x0028) - -/* QDMA_CSR_INT_ENABLE1 */ -#define RX15_COHERENT_INT_MASK BIT(31) -#define RX14_COHERENT_INT_MASK BIT(30) -#define RX13_COHERENT_INT_MASK BIT(29) -#define RX12_COHERENT_INT_MASK BIT(28) -#define RX11_COHERENT_INT_MASK BIT(27) -#define RX10_COHERENT_INT_MASK BIT(26) -#define RX9_COHERENT_INT_MASK BIT(25) -#define RX8_COHERENT_INT_MASK BIT(24) -#define RX7_COHERENT_INT_MASK BIT(23) -#define RX6_COHERENT_INT_MASK BIT(22) -#define RX5_COHERENT_INT_MASK BIT(21) -#define RX4_COHERENT_INT_MASK BIT(20) -#define RX3_COHERENT_INT_MASK BIT(19) -#define RX2_COHERENT_INT_MASK BIT(18) -#define RX1_COHERENT_INT_MASK BIT(17) -#define RX0_COHERENT_INT_MASK BIT(16) -#define TX7_COHERENT_INT_MASK BIT(15) -#define TX6_COHERENT_INT_MASK BIT(14) -#define TX5_COHERENT_INT_MASK BIT(13) -#define TX4_COHERENT_INT_MASK BIT(12) -#define TX3_COHERENT_INT_MASK BIT(11) -#define TX2_COHERENT_INT_MASK BIT(10) -#define TX1_COHERENT_INT_MASK BIT(9) -#define TX0_COHERENT_INT_MASK BIT(8) -#define CNT_OVER_FLOW_INT_MASK BIT(7) -#define IRQ1_FULL_INT_MASK BIT(5) -#define IRQ1_INT_MASK BIT(4) -#define HWFWD_DSCP_LOW_INT_MASK BIT(3) -#define HWFWD_DSCP_EMPTY_INT_MASK BIT(2) -#define IRQ0_FULL_INT_MASK BIT(1) -#define IRQ0_INT_MASK BIT(0) - -#define TX_DONE_INT_MASK(_n) \ - ((_n) ? IRQ1_INT_MASK | IRQ1_FULL_INT_MASK \ - : IRQ0_INT_MASK | IRQ0_FULL_INT_MASK) - -#define INT_TX_MASK \ - (IRQ1_INT_MASK | IRQ1_FULL_INT_MASK | \ - IRQ0_INT_MASK | IRQ0_FULL_INT_MASK) - -#define INT_IDX0_MASK \ - (TX0_COHERENT_INT_MASK | TX1_COHERENT_INT_MASK | \ - TX2_COHERENT_INT_MASK | TX3_COHERENT_INT_MASK | \ - TX4_COHERENT_INT_MASK | TX5_COHERENT_INT_MASK | \ - TX6_COHERENT_INT_MASK | TX7_COHERENT_INT_MASK | \ - RX0_COHERENT_INT_MASK | RX1_COHERENT_INT_MASK | \ - RX2_COHERENT_INT_MASK | RX3_COHERENT_INT_MASK | \ - RX4_COHERENT_INT_MASK | RX7_COHERENT_INT_MASK | \ - RX8_COHERENT_INT_MASK | RX9_COHERENT_INT_MASK | \ - RX15_COHERENT_INT_MASK | INT_TX_MASK) - -/* QDMA_CSR_INT_ENABLE2 */ -#define RX15_NO_CPU_DSCP_INT_MASK BIT(31) -#define RX14_NO_CPU_DSCP_INT_MASK BIT(30) -#define RX13_NO_CPU_DSCP_INT_MASK BIT(29) -#define RX12_NO_CPU_DSCP_INT_MASK BIT(28) -#define RX11_NO_CPU_DSCP_INT_MASK BIT(27) -#define RX10_NO_CPU_DSCP_INT_MASK BIT(26) -#define RX9_NO_CPU_DSCP_INT_MASK BIT(25) -#define RX8_NO_CPU_DSCP_INT_MASK BIT(24) -#define RX7_NO_CPU_DSCP_INT_MASK BIT(23) -#define RX6_NO_CPU_DSCP_INT_MASK BIT(22) -#define RX5_NO_CPU_DSCP_INT_MASK BIT(21) -#define RX4_NO_CPU_DSCP_INT_MASK BIT(20) -#define RX3_NO_CPU_DSCP_INT_MASK BIT(19) -#define RX2_NO_CPU_DSCP_INT_MASK BIT(18) -#define RX1_NO_CPU_DSCP_INT_MASK BIT(17) -#define RX0_NO_CPU_DSCP_INT_MASK BIT(16) -#define RX15_DONE_INT_MASK BIT(15) -#define RX14_DONE_INT_MASK BIT(14) -#define RX13_DONE_INT_MASK BIT(13) -#define RX12_DONE_INT_MASK BIT(12) -#define RX11_DONE_INT_MASK BIT(11) -#define RX10_DONE_INT_MASK BIT(10) -#define RX9_DONE_INT_MASK BIT(9) -#define RX8_DONE_INT_MASK BIT(8) -#define RX7_DONE_INT_MASK BIT(7) -#define RX6_DONE_INT_MASK BIT(6) -#define RX5_DONE_INT_MASK BIT(5) -#define RX4_DONE_INT_MASK BIT(4) -#define RX3_DONE_INT_MASK BIT(3) -#define RX2_DONE_INT_MASK BIT(2) -#define RX1_DONE_INT_MASK BIT(1) -#define RX0_DONE_INT_MASK BIT(0) - -#define RX_DONE_INT_MASK \ - (RX0_DONE_INT_MASK | RX1_DONE_INT_MASK | \ - RX2_DONE_INT_MASK | RX3_DONE_INT_MASK | \ - RX4_DONE_INT_MASK | RX7_DONE_INT_MASK | \ - RX8_DONE_INT_MASK | RX9_DONE_INT_MASK | \ - RX15_DONE_INT_MASK) -#define INT_IDX1_MASK \ - (RX_DONE_INT_MASK | \ - RX0_NO_CPU_DSCP_INT_MASK | 
RX1_NO_CPU_DSCP_INT_MASK | \ - RX2_NO_CPU_DSCP_INT_MASK | RX3_NO_CPU_DSCP_INT_MASK | \ - RX4_NO_CPU_DSCP_INT_MASK | RX7_NO_CPU_DSCP_INT_MASK | \ - RX8_NO_CPU_DSCP_INT_MASK | RX9_NO_CPU_DSCP_INT_MASK | \ - RX15_NO_CPU_DSCP_INT_MASK) - -/* QDMA_CSR_INT_ENABLE5 */ -#define TX31_COHERENT_INT_MASK BIT(31) -#define TX30_COHERENT_INT_MASK BIT(30) -#define TX29_COHERENT_INT_MASK BIT(29) -#define TX28_COHERENT_INT_MASK BIT(28) -#define TX27_COHERENT_INT_MASK BIT(27) -#define TX26_COHERENT_INT_MASK BIT(26) -#define TX25_COHERENT_INT_MASK BIT(25) -#define TX24_COHERENT_INT_MASK BIT(24) -#define TX23_COHERENT_INT_MASK BIT(23) -#define TX22_COHERENT_INT_MASK BIT(22) -#define TX21_COHERENT_INT_MASK BIT(21) -#define TX20_COHERENT_INT_MASK BIT(20) -#define TX19_COHERENT_INT_MASK BIT(19) -#define TX18_COHERENT_INT_MASK BIT(18) -#define TX17_COHERENT_INT_MASK BIT(17) -#define TX16_COHERENT_INT_MASK BIT(16) -#define TX15_COHERENT_INT_MASK BIT(15) -#define TX14_COHERENT_INT_MASK BIT(14) -#define TX13_COHERENT_INT_MASK BIT(13) -#define TX12_COHERENT_INT_MASK BIT(12) -#define TX11_COHERENT_INT_MASK BIT(11) -#define TX10_COHERENT_INT_MASK BIT(10) -#define TX9_COHERENT_INT_MASK BIT(9) -#define TX8_COHERENT_INT_MASK BIT(8) - -#define INT_IDX4_MASK \ - (TX8_COHERENT_INT_MASK | TX9_COHERENT_INT_MASK | \ - TX10_COHERENT_INT_MASK | TX11_COHERENT_INT_MASK | \ - TX12_COHERENT_INT_MASK | TX13_COHERENT_INT_MASK | \ - TX14_COHERENT_INT_MASK | TX15_COHERENT_INT_MASK | \ - TX16_COHERENT_INT_MASK | TX17_COHERENT_INT_MASK | \ - TX18_COHERENT_INT_MASK | TX19_COHERENT_INT_MASK | \ - TX20_COHERENT_INT_MASK | TX21_COHERENT_INT_MASK | \ - TX22_COHERENT_INT_MASK | TX23_COHERENT_INT_MASK | \ - TX24_COHERENT_INT_MASK | TX25_COHERENT_INT_MASK | \ - TX26_COHERENT_INT_MASK | TX27_COHERENT_INT_MASK | \ - TX28_COHERENT_INT_MASK | TX29_COHERENT_INT_MASK | \ - TX30_COHERENT_INT_MASK | TX31_COHERENT_INT_MASK) - -#define REG_TX_IRQ_BASE(_n) ((_n) ? 0x0048 : 0x0050) - -#define REG_TX_IRQ_CFG(_n) ((_n) ? 0x004c : 0x0054) -#define TX_IRQ_THR_MASK GENMASK(27, 16) -#define TX_IRQ_DEPTH_MASK GENMASK(11, 0) - -#define REG_IRQ_CLEAR_LEN(_n) ((_n) ? 0x0064 : 0x0058) -#define IRQ_CLEAR_LEN_MASK GENMASK(7, 0) - -#define REG_IRQ_STATUS(_n) ((_n) ? 0x0068 : 0x005c) -#define IRQ_ENTRY_LEN_MASK GENMASK(27, 16) -#define IRQ_HEAD_IDX_MASK GENMASK(11, 0) - -#define REG_TX_RING_BASE(_n) \ - (((_n) < 8) ? 0x0100 + ((_n) << 5) : 0x0b00 + (((_n) - 8) << 5)) - -#define REG_TX_RING_BLOCKING(_n) \ - (((_n) < 8) ? 0x0104 + ((_n) << 5) : 0x0b04 + (((_n) - 8) << 5)) - -#define TX_RING_IRQ_BLOCKING_MAP_MASK BIT(6) -#define TX_RING_IRQ_BLOCKING_CFG_MASK BIT(4) -#define TX_RING_IRQ_BLOCKING_TX_DROP_EN_MASK BIT(2) -#define TX_RING_IRQ_BLOCKING_MAX_TH_TXRING_EN_MASK BIT(1) -#define TX_RING_IRQ_BLOCKING_MIN_TH_TXRING_EN_MASK BIT(0) - -#define REG_TX_CPU_IDX(_n) \ - (((_n) < 8) ? 0x0108 + ((_n) << 5) : 0x0b08 + (((_n) - 8) << 5)) - -#define TX_RING_CPU_IDX_MASK GENMASK(15, 0) - -#define REG_TX_DMA_IDX(_n) \ - (((_n) < 8) ? 0x010c + ((_n) << 5) : 0x0b0c + (((_n) - 8) << 5)) - -#define TX_RING_DMA_IDX_MASK GENMASK(15, 0) - -#define IRQ_RING_IDX_MASK GENMASK(20, 16) -#define IRQ_DESC_IDX_MASK GENMASK(15, 0) - -#define REG_RX_RING_BASE(_n) \ - (((_n) < 16) ? 0x0200 + ((_n) << 5) : 0x0e00 + (((_n) - 16) << 5)) - -#define REG_RX_RING_SIZE(_n) \ - (((_n) < 16) ? 0x0204 + ((_n) << 5) : 0x0e04 + (((_n) - 16) << 5)) - -#define RX_RING_THR_MASK GENMASK(31, 16) -#define RX_RING_SIZE_MASK GENMASK(15, 0) - -#define REG_RX_CPU_IDX(_n) \ - (((_n) < 16) ? 
0x0208 + ((_n) << 5) : 0x0e08 + (((_n) - 16) << 5)) - -#define RX_RING_CPU_IDX_MASK GENMASK(15, 0) - -#define REG_RX_DMA_IDX(_n) \ - (((_n) < 16) ? 0x020c + ((_n) << 5) : 0x0e0c + (((_n) - 16) << 5)) - -#define REG_RX_DELAY_INT_IDX(_n) \ - (((_n) < 16) ? 0x0210 + ((_n) << 5) : 0x0e10 + (((_n) - 16) << 5)) - -#define RX_DELAY_INT_MASK GENMASK(15, 0) - -#define RX_RING_DMA_IDX_MASK GENMASK(15, 0) - -#define REG_INGRESS_TRTCM_CFG 0x0070 -#define INGRESS_TRTCM_EN_MASK BIT(31) -#define INGRESS_TRTCM_MODE_MASK BIT(30) -#define INGRESS_SLOW_TICK_RATIO_MASK GENMASK(29, 16) -#define INGRESS_FAST_TICK_MASK GENMASK(15, 0) - -#define REG_QUEUE_CLOSE_CFG(_n) (0x00a0 + ((_n) & 0xfc)) -#define TXQ_DISABLE_CHAN_QUEUE_MASK(_n, _m) BIT((_m) + (((_n) & 0x3) << 3)) - -#define REG_TXQ_DIS_CFG_BASE(_n) ((_n) ? 0x20a0 : 0x00a0) -#define REG_TXQ_DIS_CFG(_n, _m) (REG_TXQ_DIS_CFG_BASE((_n)) + (_m) << 2) - -#define REG_CNTR_CFG(_n) (0x0400 + ((_n) << 3)) -#define CNTR_EN_MASK BIT(31) -#define CNTR_ALL_CHAN_EN_MASK BIT(30) -#define CNTR_ALL_QUEUE_EN_MASK BIT(29) -#define CNTR_ALL_DSCP_RING_EN_MASK BIT(28) -#define CNTR_SRC_MASK GENMASK(27, 24) -#define CNTR_DSCP_RING_MASK GENMASK(20, 16) -#define CNTR_CHAN_MASK GENMASK(7, 3) -#define CNTR_QUEUE_MASK GENMASK(2, 0) - -#define REG_CNTR_VAL(_n) (0x0404 + ((_n) << 3)) - -#define REG_LMGR_INIT_CFG 0x1000 -#define LMGR_INIT_START BIT(31) -#define LMGR_SRAM_MODE_MASK BIT(30) -#define HW_FWD_PKTSIZE_OVERHEAD_MASK GENMASK(27, 20) -#define HW_FWD_DESC_NUM_MASK GENMASK(16, 0) - -#define REG_FWD_DSCP_LOW_THR 0x1004 -#define FWD_DSCP_LOW_THR_MASK GENMASK(17, 0) - -#define REG_EGRESS_RATE_METER_CFG 0x100c -#define EGRESS_RATE_METER_EN_MASK BIT(31) -#define EGRESS_RATE_METER_EQ_RATE_EN_MASK BIT(17) -#define EGRESS_RATE_METER_WINDOW_SZ_MASK GENMASK(16, 12) -#define EGRESS_RATE_METER_TIMESLICE_MASK GENMASK(10, 0) - -#define REG_EGRESS_TRTCM_CFG 0x1010 -#define EGRESS_TRTCM_EN_MASK BIT(31) -#define EGRESS_TRTCM_MODE_MASK BIT(30) -#define EGRESS_SLOW_TICK_RATIO_MASK GENMASK(29, 16) -#define EGRESS_FAST_TICK_MASK GENMASK(15, 0) - -#define TRTCM_PARAM_RW_MASK BIT(31) -#define TRTCM_PARAM_RW_DONE_MASK BIT(30) -#define TRTCM_PARAM_TYPE_MASK GENMASK(29, 28) -#define TRTCM_METER_GROUP_MASK GENMASK(27, 26) -#define TRTCM_PARAM_INDEX_MASK GENMASK(23, 17) -#define TRTCM_PARAM_RATE_TYPE_MASK BIT(16) - -#define REG_TRTCM_CFG_PARAM(_n) ((_n) + 0x4) -#define REG_TRTCM_DATA_LOW(_n) ((_n) + 0x8) -#define REG_TRTCM_DATA_HIGH(_n) ((_n) + 0xc) - -#define REG_TXWRR_MODE_CFG 0x1020 -#define TWRR_WEIGHT_SCALE_MASK BIT(31) -#define TWRR_WEIGHT_BASE_MASK BIT(3) - -#define REG_TXWRR_WEIGHT_CFG 0x1024 -#define TWRR_RW_CMD_MASK BIT(31) -#define TWRR_RW_CMD_DONE BIT(30) -#define TWRR_CHAN_IDX_MASK GENMASK(23, 19) -#define TWRR_QUEUE_IDX_MASK GENMASK(18, 16) -#define TWRR_VALUE_MASK GENMASK(15, 0) - -#define REG_PSE_BUF_USAGE_CFG 0x1028 -#define PSE_BUF_ESTIMATE_EN_MASK BIT(29) - -#define REG_CHAN_QOS_MODE(_n) (0x1040 + ((_n) << 2)) -#define CHAN_QOS_MODE_MASK(_n) GENMASK(2 + ((_n) << 2), (_n) << 2) - -#define REG_GLB_TRTCM_CFG 0x1080 -#define GLB_TRTCM_EN_MASK BIT(31) -#define GLB_TRTCM_MODE_MASK BIT(30) -#define GLB_SLOW_TICK_RATIO_MASK GENMASK(29, 16) -#define GLB_FAST_TICK_MASK GENMASK(15, 0) - -#define REG_TXQ_CNGST_CFG 0x10a0 -#define TXQ_CNGST_DROP_EN BIT(31) -#define TXQ_CNGST_DEI_DROP_EN BIT(30) - -#define REG_SLA_TRTCM_CFG 0x1150 -#define SLA_TRTCM_EN_MASK BIT(31) -#define SLA_TRTCM_MODE_MASK BIT(30) -#define SLA_SLOW_TICK_RATIO_MASK GENMASK(29, 16) -#define SLA_FAST_TICK_MASK GENMASK(15, 0) - -/* CTRL 
*/ -#define QDMA_DESC_DONE_MASK BIT(31) -#define QDMA_DESC_DROP_MASK BIT(30) /* tx: drop - rx: overflow */ -#define QDMA_DESC_MORE_MASK BIT(29) /* more SG elements */ -#define QDMA_DESC_DEI_MASK BIT(25) -#define QDMA_DESC_NO_DROP_MASK BIT(24) -#define QDMA_DESC_LEN_MASK GENMASK(15, 0) -/* DATA */ -#define QDMA_DESC_NEXT_ID_MASK GENMASK(15, 0) -/* TX MSG0 */ -#define QDMA_ETH_TXMSG_MIC_IDX_MASK BIT(30) -#define QDMA_ETH_TXMSG_SP_TAG_MASK GENMASK(29, 14) -#define QDMA_ETH_TXMSG_ICO_MASK BIT(13) -#define QDMA_ETH_TXMSG_UCO_MASK BIT(12) -#define QDMA_ETH_TXMSG_TCO_MASK BIT(11) -#define QDMA_ETH_TXMSG_TSO_MASK BIT(10) -#define QDMA_ETH_TXMSG_FAST_MASK BIT(9) -#define QDMA_ETH_TXMSG_OAM_MASK BIT(8) -#define QDMA_ETH_TXMSG_CHAN_MASK GENMASK(7, 3) -#define QDMA_ETH_TXMSG_QUEUE_MASK GENMASK(2, 0) -/* TX MSG1 */ -#define QDMA_ETH_TXMSG_NO_DROP BIT(31) -#define QDMA_ETH_TXMSG_METER_MASK GENMASK(30, 24) /* 0x7f no meters */ -#define QDMA_ETH_TXMSG_FPORT_MASK GENMASK(23, 20) -#define QDMA_ETH_TXMSG_NBOQ_MASK GENMASK(19, 15) -#define QDMA_ETH_TXMSG_HWF_MASK BIT(14) -#define QDMA_ETH_TXMSG_HOP_MASK BIT(13) -#define QDMA_ETH_TXMSG_PTP_MASK BIT(12) -#define QDMA_ETH_TXMSG_ACNT_G1_MASK GENMASK(10, 6) /* 0x1f do not count */ -#define QDMA_ETH_TXMSG_ACNT_G0_MASK GENMASK(5, 0) /* 0x3f do not count */ - -/* RX MSG1 */ -#define QDMA_ETH_RXMSG_DEI_MASK BIT(31) -#define QDMA_ETH_RXMSG_IP6_MASK BIT(30) -#define QDMA_ETH_RXMSG_IP4_MASK BIT(29) -#define QDMA_ETH_RXMSG_IP4F_MASK BIT(28) -#define QDMA_ETH_RXMSG_L4_VALID_MASK BIT(27) -#define QDMA_ETH_RXMSG_L4F_MASK BIT(26) -#define QDMA_ETH_RXMSG_SPORT_MASK GENMASK(25, 21) -#define QDMA_ETH_RXMSG_CRSN_MASK GENMASK(20, 16) -#define QDMA_ETH_RXMSG_PPE_ENTRY_MASK GENMASK(15, 0) - -struct airoha_qdma_desc { - __le32 rsv; - __le32 ctrl; - __le32 addr; - __le32 data; - __le32 msg0; - __le32 msg1; - __le32 msg2; - __le32 msg3; -}; - -/* CTRL0 */ -#define QDMA_FWD_DESC_CTX_MASK BIT(31) -#define QDMA_FWD_DESC_RING_MASK GENMASK(30, 28) -#define QDMA_FWD_DESC_IDX_MASK GENMASK(27, 16) -#define QDMA_FWD_DESC_LEN_MASK GENMASK(15, 0) -/* CTRL1 */ -#define QDMA_FWD_DESC_FIRST_IDX_MASK GENMASK(15, 0) -/* CTRL2 */ -#define QDMA_FWD_DESC_MORE_PKT_NUM_MASK GENMASK(2, 0) - -struct airoha_qdma_fwd_desc { - __le32 addr; - __le32 ctrl0; - __le32 ctrl1; - __le32 ctrl2; - __le32 msg0; - __le32 msg1; - __le32 rsv0; - __le32 rsv1; -}; - u32 airoha_rr(void __iomem *base, u32 offset) { return readl(base + offset); diff --git a/drivers/net/ethernet/airoha/airoha_regs.h b/drivers/net/ethernet/airoha/airoha_regs.h new file mode 100644 index 0000000000000000000000000000000000000000..7c9dadb348834cb5a856760abe45e8221d6fd700 --- /dev/null +++ b/drivers/net/ethernet/airoha/airoha_regs.h @@ -0,0 +1,670 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2024 AIROHA Inc + * Author: Lorenzo Bianconi + */ + +#ifndef AIROHA_REGS_H +#define AIROHA_REGS_H + +#include + +/* FE */ +#define PSE_BASE 0x0100 +#define CSR_IFC_BASE 0x0200 +#define CDM1_BASE 0x0400 +#define GDM1_BASE 0x0500 +#define PPE1_BASE 0x0c00 + +#define CDM2_BASE 0x1400 +#define GDM2_BASE 0x1500 + +#define GDM3_BASE 0x1100 +#define GDM4_BASE 0x2500 + +#define GDM_BASE(_n) \ + ((_n) == 4 ? GDM4_BASE : \ + (_n) == 3 ? GDM3_BASE : \ + (_n) == 2 ? 
GDM2_BASE : GDM1_BASE) + +#define REG_FE_DMA_GLO_CFG 0x0000 +#define FE_DMA_GLO_L2_SPACE_MASK GENMASK(7, 4) +#define FE_DMA_GLO_PG_SZ_MASK BIT(3) + +#define REG_FE_RST_GLO_CFG 0x0004 +#define FE_RST_GDM4_MBI_ARB_MASK BIT(3) +#define FE_RST_GDM3_MBI_ARB_MASK BIT(2) +#define FE_RST_CORE_MASK BIT(0) + +#define REG_FE_WAN_MAC_H 0x0030 +#define REG_FE_LAN_MAC_H 0x0040 + +#define REG_FE_MAC_LMIN(_n) ((_n) + 0x04) +#define REG_FE_MAC_LMAX(_n) ((_n) + 0x08) + +#define REG_FE_CDM1_OQ_MAP0 0x0050 +#define REG_FE_CDM1_OQ_MAP1 0x0054 +#define REG_FE_CDM1_OQ_MAP2 0x0058 +#define REG_FE_CDM1_OQ_MAP3 0x005c + +#define REG_FE_PCE_CFG 0x0070 +#define PCE_DPI_EN_MASK BIT(2) +#define PCE_KA_EN_MASK BIT(1) +#define PCE_MC_EN_MASK BIT(0) + +#define REG_FE_PSE_QUEUE_CFG_WR 0x0080 +#define PSE_CFG_PORT_ID_MASK GENMASK(27, 24) +#define PSE_CFG_QUEUE_ID_MASK GENMASK(20, 16) +#define PSE_CFG_WR_EN_MASK BIT(8) +#define PSE_CFG_OQRSV_SEL_MASK BIT(0) + +#define REG_FE_PSE_QUEUE_CFG_VAL 0x0084 +#define PSE_CFG_OQ_RSV_MASK GENMASK(13, 0) + +#define PSE_FQ_CFG 0x008c +#define PSE_FQ_LIMIT_MASK GENMASK(14, 0) + +#define REG_FE_PSE_BUF_SET 0x0090 +#define PSE_SHARE_USED_LTHD_MASK GENMASK(31, 16) +#define PSE_ALLRSV_MASK GENMASK(14, 0) + +#define REG_PSE_SHARE_USED_THD 0x0094 +#define PSE_SHARE_USED_MTHD_MASK GENMASK(31, 16) +#define PSE_SHARE_USED_HTHD_MASK GENMASK(15, 0) + +#define REG_GDM_MISC_CFG 0x0148 +#define GDM2_RDM_ACK_WAIT_PREF_MASK BIT(9) +#define GDM2_CHN_VLD_MODE_MASK BIT(5) + +#define REG_FE_CSR_IFC_CFG CSR_IFC_BASE +#define FE_IFC_EN_MASK BIT(0) + +#define REG_FE_VIP_PORT_EN 0x01f0 +#define REG_FE_IFC_PORT_EN 0x01f4 + +#define REG_PSE_IQ_REV1 (PSE_BASE + 0x08) +#define PSE_IQ_RES1_P2_MASK GENMASK(23, 16) + +#define REG_PSE_IQ_REV2 (PSE_BASE + 0x0c) +#define PSE_IQ_RES2_P5_MASK GENMASK(15, 8) +#define PSE_IQ_RES2_P4_MASK GENMASK(7, 0) + +#define REG_FE_VIP_EN(_n) (0x0300 + ((_n) << 3)) +#define PATN_FCPU_EN_MASK BIT(7) +#define PATN_SWP_EN_MASK BIT(6) +#define PATN_DP_EN_MASK BIT(5) +#define PATN_SP_EN_MASK BIT(4) +#define PATN_TYPE_MASK GENMASK(3, 1) +#define PATN_EN_MASK BIT(0) + +#define REG_FE_VIP_PATN(_n) (0x0304 + ((_n) << 3)) +#define PATN_DP_MASK GENMASK(31, 16) +#define PATN_SP_MASK GENMASK(15, 0) + +#define REG_CDM1_VLAN_CTRL CDM1_BASE +#define CDM1_VLAN_MASK GENMASK(31, 16) + +#define REG_CDM1_FWD_CFG (CDM1_BASE + 0x08) +#define CDM1_VIP_QSEL_MASK GENMASK(24, 20) + +#define REG_CDM1_CRSN_QSEL(_n) (CDM1_BASE + 0x10 + ((_n) << 2)) +#define CDM1_CRSN_QSEL_REASON_MASK(_n) \ + GENMASK(4 + (((_n) % 4) << 3), (((_n) % 4) << 3)) + +#define REG_CDM2_FWD_CFG (CDM2_BASE + 0x08) +#define CDM2_OAM_QSEL_MASK GENMASK(31, 27) +#define CDM2_VIP_QSEL_MASK GENMASK(24, 20) + +#define REG_CDM2_CRSN_QSEL(_n) (CDM2_BASE + 0x10 + ((_n) << 2)) +#define CDM2_CRSN_QSEL_REASON_MASK(_n) \ + GENMASK(4 + (((_n) % 4) << 3), (((_n) % 4) << 3)) + +#define REG_GDM_FWD_CFG(_n) GDM_BASE(_n) +#define GDM_DROP_CRC_ERR BIT(23) +#define GDM_IP4_CKSUM BIT(22) +#define GDM_TCP_CKSUM BIT(21) +#define GDM_UDP_CKSUM BIT(20) +#define GDM_UCFQ_MASK GENMASK(15, 12) +#define GDM_BCFQ_MASK GENMASK(11, 8) +#define GDM_MCFQ_MASK GENMASK(7, 4) +#define GDM_OCFQ_MASK GENMASK(3, 0) + +#define REG_GDM_INGRESS_CFG(_n) (GDM_BASE(_n) + 0x10) +#define GDM_INGRESS_FC_EN_MASK BIT(1) +#define GDM_STAG_EN_MASK BIT(0) + +#define REG_GDM_LEN_CFG(_n) (GDM_BASE(_n) + 0x14) +#define GDM_SHORT_LEN_MASK GENMASK(13, 0) +#define GDM_LONG_LEN_MASK GENMASK(29, 16) + +#define REG_FE_CPORT_CFG (GDM1_BASE + 0x40) +#define FE_CPORT_PAD BIT(26) +#define FE_CPORT_PORT_XFC_MASK 
BIT(25) +#define FE_CPORT_QUEUE_XFC_MASK BIT(24) + +#define REG_FE_GDM_MIB_CLEAR(_n) (GDM_BASE(_n) + 0xf0) +#define FE_GDM_MIB_RX_CLEAR_MASK BIT(1) +#define FE_GDM_MIB_TX_CLEAR_MASK BIT(0) + +#define REG_FE_GDM1_MIB_CFG (GDM1_BASE + 0xf4) +#define FE_STRICT_RFC2819_MODE_MASK BIT(31) +#define FE_GDM1_TX_MIB_SPLIT_EN_MASK BIT(17) +#define FE_GDM1_RX_MIB_SPLIT_EN_MASK BIT(16) +#define FE_TX_MIB_ID_MASK GENMASK(15, 8) +#define FE_RX_MIB_ID_MASK GENMASK(7, 0) + +#define REG_FE_GDM_TX_OK_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x104) +#define REG_FE_GDM_TX_OK_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x10c) +#define REG_FE_GDM_TX_ETH_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x110) +#define REG_FE_GDM_TX_ETH_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x114) +#define REG_FE_GDM_TX_ETH_DROP_CNT(_n) (GDM_BASE(_n) + 0x118) +#define REG_FE_GDM_TX_ETH_BC_CNT(_n) (GDM_BASE(_n) + 0x11c) +#define REG_FE_GDM_TX_ETH_MC_CNT(_n) (GDM_BASE(_n) + 0x120) +#define REG_FE_GDM_TX_ETH_RUNT_CNT(_n) (GDM_BASE(_n) + 0x124) +#define REG_FE_GDM_TX_ETH_LONG_CNT(_n) (GDM_BASE(_n) + 0x128) +#define REG_FE_GDM_TX_ETH_E64_CNT_L(_n) (GDM_BASE(_n) + 0x12c) +#define REG_FE_GDM_TX_ETH_L64_CNT_L(_n) (GDM_BASE(_n) + 0x130) +#define REG_FE_GDM_TX_ETH_L127_CNT_L(_n) (GDM_BASE(_n) + 0x134) +#define REG_FE_GDM_TX_ETH_L255_CNT_L(_n) (GDM_BASE(_n) + 0x138) +#define REG_FE_GDM_TX_ETH_L511_CNT_L(_n) (GDM_BASE(_n) + 0x13c) +#define REG_FE_GDM_TX_ETH_L1023_CNT_L(_n) (GDM_BASE(_n) + 0x140) + +#define REG_FE_GDM_RX_OK_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x148) +#define REG_FE_GDM_RX_FC_DROP_CNT(_n) (GDM_BASE(_n) + 0x14c) +#define REG_FE_GDM_RX_RC_DROP_CNT(_n) (GDM_BASE(_n) + 0x150) +#define REG_FE_GDM_RX_OVERFLOW_DROP_CNT(_n) (GDM_BASE(_n) + 0x154) +#define REG_FE_GDM_RX_ERROR_DROP_CNT(_n) (GDM_BASE(_n) + 0x158) +#define REG_FE_GDM_RX_OK_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x15c) +#define REG_FE_GDM_RX_ETH_PKT_CNT_L(_n) (GDM_BASE(_n) + 0x160) +#define REG_FE_GDM_RX_ETH_BYTE_CNT_L(_n) (GDM_BASE(_n) + 0x164) +#define REG_FE_GDM_RX_ETH_DROP_CNT(_n) (GDM_BASE(_n) + 0x168) +#define REG_FE_GDM_RX_ETH_BC_CNT(_n) (GDM_BASE(_n) + 0x16c) +#define REG_FE_GDM_RX_ETH_MC_CNT(_n) (GDM_BASE(_n) + 0x170) +#define REG_FE_GDM_RX_ETH_CRC_ERR_CNT(_n) (GDM_BASE(_n) + 0x174) +#define REG_FE_GDM_RX_ETH_FRAG_CNT(_n) (GDM_BASE(_n) + 0x178) +#define REG_FE_GDM_RX_ETH_JABBER_CNT(_n) (GDM_BASE(_n) + 0x17c) +#define REG_FE_GDM_RX_ETH_RUNT_CNT(_n) (GDM_BASE(_n) + 0x180) +#define REG_FE_GDM_RX_ETH_LONG_CNT(_n) (GDM_BASE(_n) + 0x184) +#define REG_FE_GDM_RX_ETH_E64_CNT_L(_n) (GDM_BASE(_n) + 0x188) +#define REG_FE_GDM_RX_ETH_L64_CNT_L(_n) (GDM_BASE(_n) + 0x18c) +#define REG_FE_GDM_RX_ETH_L127_CNT_L(_n) (GDM_BASE(_n) + 0x190) +#define REG_FE_GDM_RX_ETH_L255_CNT_L(_n) (GDM_BASE(_n) + 0x194) +#define REG_FE_GDM_RX_ETH_L511_CNT_L(_n) (GDM_BASE(_n) + 0x198) +#define REG_FE_GDM_RX_ETH_L1023_CNT_L(_n) (GDM_BASE(_n) + 0x19c) + +#define REG_PPE1_TB_HASH_CFG (PPE1_BASE + 0x250) +#define PPE1_SRAM_TABLE_EN_MASK BIT(0) +#define PPE1_SRAM_HASH1_EN_MASK BIT(8) +#define PPE1_DRAM_TABLE_EN_MASK BIT(16) +#define PPE1_DRAM_HASH1_EN_MASK BIT(24) + +#define REG_FE_GDM_TX_OK_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x280) +#define REG_FE_GDM_TX_OK_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x284) +#define REG_FE_GDM_TX_ETH_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x288) +#define REG_FE_GDM_TX_ETH_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x28c) + +#define REG_FE_GDM_RX_OK_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x290) +#define REG_FE_GDM_RX_OK_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x294) +#define REG_FE_GDM_RX_ETH_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x298) +#define REG_FE_GDM_RX_ETH_BYTE_CNT_H(_n) (GDM_BASE(_n) + 
0x29c) +#define REG_FE_GDM_TX_ETH_E64_CNT_H(_n) (GDM_BASE(_n) + 0x2b8) +#define REG_FE_GDM_TX_ETH_L64_CNT_H(_n) (GDM_BASE(_n) + 0x2bc) +#define REG_FE_GDM_TX_ETH_L127_CNT_H(_n) (GDM_BASE(_n) + 0x2c0) +#define REG_FE_GDM_TX_ETH_L255_CNT_H(_n) (GDM_BASE(_n) + 0x2c4) +#define REG_FE_GDM_TX_ETH_L511_CNT_H(_n) (GDM_BASE(_n) + 0x2c8) +#define REG_FE_GDM_TX_ETH_L1023_CNT_H(_n) (GDM_BASE(_n) + 0x2cc) +#define REG_FE_GDM_RX_ETH_E64_CNT_H(_n) (GDM_BASE(_n) + 0x2e8) +#define REG_FE_GDM_RX_ETH_L64_CNT_H(_n) (GDM_BASE(_n) + 0x2ec) +#define REG_FE_GDM_RX_ETH_L127_CNT_H(_n) (GDM_BASE(_n) + 0x2f0) +#define REG_FE_GDM_RX_ETH_L255_CNT_H(_n) (GDM_BASE(_n) + 0x2f4) +#define REG_FE_GDM_RX_ETH_L511_CNT_H(_n) (GDM_BASE(_n) + 0x2f8) +#define REG_FE_GDM_RX_ETH_L1023_CNT_H(_n) (GDM_BASE(_n) + 0x2fc) + +#define REG_GDM2_CHN_RLS (GDM2_BASE + 0x20) +#define MBI_RX_AGE_SEL_MASK GENMASK(26, 25) +#define MBI_TX_AGE_SEL_MASK GENMASK(18, 17) + +#define REG_GDM3_FWD_CFG GDM3_BASE +#define GDM3_PAD_EN_MASK BIT(28) + +#define REG_GDM4_FWD_CFG GDM4_BASE +#define GDM4_PAD_EN_MASK BIT(28) +#define GDM4_SPORT_OFFSET0_MASK GENMASK(11, 8) + +#define REG_GDM4_SRC_PORT_SET (GDM4_BASE + 0x23c) +#define GDM4_SPORT_OFF2_MASK GENMASK(19, 16) +#define GDM4_SPORT_OFF1_MASK GENMASK(15, 12) +#define GDM4_SPORT_OFF0_MASK GENMASK(11, 8) + +#define REG_IP_FRAG_FP 0x2010 +#define IP_ASSEMBLE_PORT_MASK GENMASK(24, 21) +#define IP_ASSEMBLE_NBQ_MASK GENMASK(20, 16) +#define IP_FRAGMENT_PORT_MASK GENMASK(8, 5) +#define IP_FRAGMENT_NBQ_MASK GENMASK(4, 0) + +#define REG_MC_VLAN_EN 0x2100 +#define MC_VLAN_EN_MASK BIT(0) + +#define REG_MC_VLAN_CFG 0x2104 +#define MC_VLAN_CFG_CMD_DONE_MASK BIT(31) +#define MC_VLAN_CFG_TABLE_ID_MASK GENMASK(21, 16) +#define MC_VLAN_CFG_PORT_ID_MASK GENMASK(11, 8) +#define MC_VLAN_CFG_TABLE_SEL_MASK BIT(4) +#define MC_VLAN_CFG_RW_MASK BIT(0) + +#define REG_MC_VLAN_DATA 0x2108 + +#define REG_CDM5_RX_OQ1_DROP_CNT 0x29d4 + +/* QDMA */ +#define REG_QDMA_GLOBAL_CFG 0x0004 +#define GLOBAL_CFG_RX_2B_OFFSET_MASK BIT(31) +#define GLOBAL_CFG_DMA_PREFERENCE_MASK GENMASK(30, 29) +#define GLOBAL_CFG_CPU_TXR_RR_MASK BIT(28) +#define GLOBAL_CFG_DSCP_BYTE_SWAP_MASK BIT(27) +#define GLOBAL_CFG_PAYLOAD_BYTE_SWAP_MASK BIT(26) +#define GLOBAL_CFG_MULTICAST_MODIFY_FP_MASK BIT(25) +#define GLOBAL_CFG_OAM_MODIFY_MASK BIT(24) +#define GLOBAL_CFG_RESET_MASK BIT(23) +#define GLOBAL_CFG_RESET_DONE_MASK BIT(22) +#define GLOBAL_CFG_MULTICAST_EN_MASK BIT(21) +#define GLOBAL_CFG_IRQ1_EN_MASK BIT(20) +#define GLOBAL_CFG_IRQ0_EN_MASK BIT(19) +#define GLOBAL_CFG_LOOPCNT_EN_MASK BIT(18) +#define GLOBAL_CFG_RD_BYPASS_WR_MASK BIT(17) +#define GLOBAL_CFG_QDMA_LOOPBACK_MASK BIT(16) +#define GLOBAL_CFG_LPBK_RXQ_SEL_MASK GENMASK(13, 8) +#define GLOBAL_CFG_CHECK_DONE_MASK BIT(7) +#define GLOBAL_CFG_TX_WB_DONE_MASK BIT(6) +#define GLOBAL_CFG_MAX_ISSUE_NUM_MASK GENMASK(5, 4) +#define GLOBAL_CFG_RX_DMA_BUSY_MASK BIT(3) +#define GLOBAL_CFG_RX_DMA_EN_MASK BIT(2) +#define GLOBAL_CFG_TX_DMA_BUSY_MASK BIT(1) +#define GLOBAL_CFG_TX_DMA_EN_MASK BIT(0) + +#define REG_FWD_DSCP_BASE 0x0010 +#define REG_FWD_BUF_BASE 0x0014 + +#define REG_HW_FWD_DSCP_CFG 0x0018 +#define HW_FWD_DSCP_PAYLOAD_SIZE_MASK GENMASK(29, 28) +#define HW_FWD_DSCP_SCATTER_LEN_MASK GENMASK(17, 16) +#define HW_FWD_DSCP_MIN_SCATTER_LEN_MASK GENMASK(15, 0) + +#define REG_INT_STATUS(_n) \ + (((_n) == 4) ? 0x0730 : \ + ((_n) == 3) ? 0x0724 : \ + ((_n) == 2) ? 0x0720 : \ + ((_n) == 1) ? 0x0024 : 0x0020) + +#define REG_INT_ENABLE(_n) \ + (((_n) == 4) ? 0x0750 : \ + ((_n) == 3) ? 0x0744 : \ + ((_n) == 2) ? 
0x0740 : \ + ((_n) == 1) ? 0x002c : 0x0028) + +/* QDMA_CSR_INT_ENABLE1 */ +#define RX15_COHERENT_INT_MASK BIT(31) +#define RX14_COHERENT_INT_MASK BIT(30) +#define RX13_COHERENT_INT_MASK BIT(29) +#define RX12_COHERENT_INT_MASK BIT(28) +#define RX11_COHERENT_INT_MASK BIT(27) +#define RX10_COHERENT_INT_MASK BIT(26) +#define RX9_COHERENT_INT_MASK BIT(25) +#define RX8_COHERENT_INT_MASK BIT(24) +#define RX7_COHERENT_INT_MASK BIT(23) +#define RX6_COHERENT_INT_MASK BIT(22) +#define RX5_COHERENT_INT_MASK BIT(21) +#define RX4_COHERENT_INT_MASK BIT(20) +#define RX3_COHERENT_INT_MASK BIT(19) +#define RX2_COHERENT_INT_MASK BIT(18) +#define RX1_COHERENT_INT_MASK BIT(17) +#define RX0_COHERENT_INT_MASK BIT(16) +#define TX7_COHERENT_INT_MASK BIT(15) +#define TX6_COHERENT_INT_MASK BIT(14) +#define TX5_COHERENT_INT_MASK BIT(13) +#define TX4_COHERENT_INT_MASK BIT(12) +#define TX3_COHERENT_INT_MASK BIT(11) +#define TX2_COHERENT_INT_MASK BIT(10) +#define TX1_COHERENT_INT_MASK BIT(9) +#define TX0_COHERENT_INT_MASK BIT(8) +#define CNT_OVER_FLOW_INT_MASK BIT(7) +#define IRQ1_FULL_INT_MASK BIT(5) +#define IRQ1_INT_MASK BIT(4) +#define HWFWD_DSCP_LOW_INT_MASK BIT(3) +#define HWFWD_DSCP_EMPTY_INT_MASK BIT(2) +#define IRQ0_FULL_INT_MASK BIT(1) +#define IRQ0_INT_MASK BIT(0) + +#define TX_DONE_INT_MASK(_n) \ + ((_n) ? IRQ1_INT_MASK | IRQ1_FULL_INT_MASK \ + : IRQ0_INT_MASK | IRQ0_FULL_INT_MASK) + +#define INT_TX_MASK \ + (IRQ1_INT_MASK | IRQ1_FULL_INT_MASK | \ + IRQ0_INT_MASK | IRQ0_FULL_INT_MASK) + +#define INT_IDX0_MASK \ + (TX0_COHERENT_INT_MASK | TX1_COHERENT_INT_MASK | \ + TX2_COHERENT_INT_MASK | TX3_COHERENT_INT_MASK | \ + TX4_COHERENT_INT_MASK | TX5_COHERENT_INT_MASK | \ + TX6_COHERENT_INT_MASK | TX7_COHERENT_INT_MASK | \ + RX0_COHERENT_INT_MASK | RX1_COHERENT_INT_MASK | \ + RX2_COHERENT_INT_MASK | RX3_COHERENT_INT_MASK | \ + RX4_COHERENT_INT_MASK | RX7_COHERENT_INT_MASK | \ + RX8_COHERENT_INT_MASK | RX9_COHERENT_INT_MASK | \ + RX15_COHERENT_INT_MASK | INT_TX_MASK) + +/* QDMA_CSR_INT_ENABLE2 */ +#define RX15_NO_CPU_DSCP_INT_MASK BIT(31) +#define RX14_NO_CPU_DSCP_INT_MASK BIT(30) +#define RX13_NO_CPU_DSCP_INT_MASK BIT(29) +#define RX12_NO_CPU_DSCP_INT_MASK BIT(28) +#define RX11_NO_CPU_DSCP_INT_MASK BIT(27) +#define RX10_NO_CPU_DSCP_INT_MASK BIT(26) +#define RX9_NO_CPU_DSCP_INT_MASK BIT(25) +#define RX8_NO_CPU_DSCP_INT_MASK BIT(24) +#define RX7_NO_CPU_DSCP_INT_MASK BIT(23) +#define RX6_NO_CPU_DSCP_INT_MASK BIT(22) +#define RX5_NO_CPU_DSCP_INT_MASK BIT(21) +#define RX4_NO_CPU_DSCP_INT_MASK BIT(20) +#define RX3_NO_CPU_DSCP_INT_MASK BIT(19) +#define RX2_NO_CPU_DSCP_INT_MASK BIT(18) +#define RX1_NO_CPU_DSCP_INT_MASK BIT(17) +#define RX0_NO_CPU_DSCP_INT_MASK BIT(16) +#define RX15_DONE_INT_MASK BIT(15) +#define RX14_DONE_INT_MASK BIT(14) +#define RX13_DONE_INT_MASK BIT(13) +#define RX12_DONE_INT_MASK BIT(12) +#define RX11_DONE_INT_MASK BIT(11) +#define RX10_DONE_INT_MASK BIT(10) +#define RX9_DONE_INT_MASK BIT(9) +#define RX8_DONE_INT_MASK BIT(8) +#define RX7_DONE_INT_MASK BIT(7) +#define RX6_DONE_INT_MASK BIT(6) +#define RX5_DONE_INT_MASK BIT(5) +#define RX4_DONE_INT_MASK BIT(4) +#define RX3_DONE_INT_MASK BIT(3) +#define RX2_DONE_INT_MASK BIT(2) +#define RX1_DONE_INT_MASK BIT(1) +#define RX0_DONE_INT_MASK BIT(0) + +#define RX_DONE_INT_MASK \ + (RX0_DONE_INT_MASK | RX1_DONE_INT_MASK | \ + RX2_DONE_INT_MASK | RX3_DONE_INT_MASK | \ + RX4_DONE_INT_MASK | RX7_DONE_INT_MASK | \ + RX8_DONE_INT_MASK | RX9_DONE_INT_MASK | \ + RX15_DONE_INT_MASK) +#define INT_IDX1_MASK \ + (RX_DONE_INT_MASK | \ + RX0_NO_CPU_DSCP_INT_MASK | 
RX1_NO_CPU_DSCP_INT_MASK | \ + RX2_NO_CPU_DSCP_INT_MASK | RX3_NO_CPU_DSCP_INT_MASK | \ + RX4_NO_CPU_DSCP_INT_MASK | RX7_NO_CPU_DSCP_INT_MASK | \ + RX8_NO_CPU_DSCP_INT_MASK | RX9_NO_CPU_DSCP_INT_MASK | \ + RX15_NO_CPU_DSCP_INT_MASK) + +/* QDMA_CSR_INT_ENABLE5 */ +#define TX31_COHERENT_INT_MASK BIT(31) +#define TX30_COHERENT_INT_MASK BIT(30) +#define TX29_COHERENT_INT_MASK BIT(29) +#define TX28_COHERENT_INT_MASK BIT(28) +#define TX27_COHERENT_INT_MASK BIT(27) +#define TX26_COHERENT_INT_MASK BIT(26) +#define TX25_COHERENT_INT_MASK BIT(25) +#define TX24_COHERENT_INT_MASK BIT(24) +#define TX23_COHERENT_INT_MASK BIT(23) +#define TX22_COHERENT_INT_MASK BIT(22) +#define TX21_COHERENT_INT_MASK BIT(21) +#define TX20_COHERENT_INT_MASK BIT(20) +#define TX19_COHERENT_INT_MASK BIT(19) +#define TX18_COHERENT_INT_MASK BIT(18) +#define TX17_COHERENT_INT_MASK BIT(17) +#define TX16_COHERENT_INT_MASK BIT(16) +#define TX15_COHERENT_INT_MASK BIT(15) +#define TX14_COHERENT_INT_MASK BIT(14) +#define TX13_COHERENT_INT_MASK BIT(13) +#define TX12_COHERENT_INT_MASK BIT(12) +#define TX11_COHERENT_INT_MASK BIT(11) +#define TX10_COHERENT_INT_MASK BIT(10) +#define TX9_COHERENT_INT_MASK BIT(9) +#define TX8_COHERENT_INT_MASK BIT(8) + +#define INT_IDX4_MASK \ + (TX8_COHERENT_INT_MASK | TX9_COHERENT_INT_MASK | \ + TX10_COHERENT_INT_MASK | TX11_COHERENT_INT_MASK | \ + TX12_COHERENT_INT_MASK | TX13_COHERENT_INT_MASK | \ + TX14_COHERENT_INT_MASK | TX15_COHERENT_INT_MASK | \ + TX16_COHERENT_INT_MASK | TX17_COHERENT_INT_MASK | \ + TX18_COHERENT_INT_MASK | TX19_COHERENT_INT_MASK | \ + TX20_COHERENT_INT_MASK | TX21_COHERENT_INT_MASK | \ + TX22_COHERENT_INT_MASK | TX23_COHERENT_INT_MASK | \ + TX24_COHERENT_INT_MASK | TX25_COHERENT_INT_MASK | \ + TX26_COHERENT_INT_MASK | TX27_COHERENT_INT_MASK | \ + TX28_COHERENT_INT_MASK | TX29_COHERENT_INT_MASK | \ + TX30_COHERENT_INT_MASK | TX31_COHERENT_INT_MASK) + +#define REG_TX_IRQ_BASE(_n) ((_n) ? 0x0048 : 0x0050) + +#define REG_TX_IRQ_CFG(_n) ((_n) ? 0x004c : 0x0054) +#define TX_IRQ_THR_MASK GENMASK(27, 16) +#define TX_IRQ_DEPTH_MASK GENMASK(11, 0) + +#define REG_IRQ_CLEAR_LEN(_n) ((_n) ? 0x0064 : 0x0058) +#define IRQ_CLEAR_LEN_MASK GENMASK(7, 0) + +#define REG_IRQ_STATUS(_n) ((_n) ? 0x0068 : 0x005c) +#define IRQ_ENTRY_LEN_MASK GENMASK(27, 16) +#define IRQ_HEAD_IDX_MASK GENMASK(11, 0) + +#define REG_TX_RING_BASE(_n) \ + (((_n) < 8) ? 0x0100 + ((_n) << 5) : 0x0b00 + (((_n) - 8) << 5)) + +#define REG_TX_RING_BLOCKING(_n) \ + (((_n) < 8) ? 0x0104 + ((_n) << 5) : 0x0b04 + (((_n) - 8) << 5)) + +#define TX_RING_IRQ_BLOCKING_MAP_MASK BIT(6) +#define TX_RING_IRQ_BLOCKING_CFG_MASK BIT(4) +#define TX_RING_IRQ_BLOCKING_TX_DROP_EN_MASK BIT(2) +#define TX_RING_IRQ_BLOCKING_MAX_TH_TXRING_EN_MASK BIT(1) +#define TX_RING_IRQ_BLOCKING_MIN_TH_TXRING_EN_MASK BIT(0) + +#define REG_TX_CPU_IDX(_n) \ + (((_n) < 8) ? 0x0108 + ((_n) << 5) : 0x0b08 + (((_n) - 8) << 5)) + +#define TX_RING_CPU_IDX_MASK GENMASK(15, 0) + +#define REG_TX_DMA_IDX(_n) \ + (((_n) < 8) ? 0x010c + ((_n) << 5) : 0x0b0c + (((_n) - 8) << 5)) + +#define TX_RING_DMA_IDX_MASK GENMASK(15, 0) + +#define IRQ_RING_IDX_MASK GENMASK(20, 16) +#define IRQ_DESC_IDX_MASK GENMASK(15, 0) + +#define REG_RX_RING_BASE(_n) \ + (((_n) < 16) ? 0x0200 + ((_n) << 5) : 0x0e00 + (((_n) - 16) << 5)) + +#define REG_RX_RING_SIZE(_n) \ + (((_n) < 16) ? 0x0204 + ((_n) << 5) : 0x0e04 + (((_n) - 16) << 5)) + +#define RX_RING_THR_MASK GENMASK(31, 16) +#define RX_RING_SIZE_MASK GENMASK(15, 0) + +#define REG_RX_CPU_IDX(_n) \ + (((_n) < 16) ? 
0x0208 + ((_n) << 5) : 0x0e08 + (((_n) - 16) << 5)) + +#define RX_RING_CPU_IDX_MASK GENMASK(15, 0) + +#define REG_RX_DMA_IDX(_n) \ + (((_n) < 16) ? 0x020c + ((_n) << 5) : 0x0e0c + (((_n) - 16) << 5)) + +#define REG_RX_DELAY_INT_IDX(_n) \ + (((_n) < 16) ? 0x0210 + ((_n) << 5) : 0x0e10 + (((_n) - 16) << 5)) + +#define RX_DELAY_INT_MASK GENMASK(15, 0) + +#define RX_RING_DMA_IDX_MASK GENMASK(15, 0) + +#define REG_INGRESS_TRTCM_CFG 0x0070 +#define INGRESS_TRTCM_EN_MASK BIT(31) +#define INGRESS_TRTCM_MODE_MASK BIT(30) +#define INGRESS_SLOW_TICK_RATIO_MASK GENMASK(29, 16) +#define INGRESS_FAST_TICK_MASK GENMASK(15, 0) + +#define REG_QUEUE_CLOSE_CFG(_n) (0x00a0 + ((_n) & 0xfc)) +#define TXQ_DISABLE_CHAN_QUEUE_MASK(_n, _m) BIT((_m) + (((_n) & 0x3) << 3)) + +#define REG_TXQ_DIS_CFG_BASE(_n) ((_n) ? 0x20a0 : 0x00a0) +#define REG_TXQ_DIS_CFG(_n, _m) (REG_TXQ_DIS_CFG_BASE((_n)) + (_m) << 2) + +#define REG_CNTR_CFG(_n) (0x0400 + ((_n) << 3)) +#define CNTR_EN_MASK BIT(31) +#define CNTR_ALL_CHAN_EN_MASK BIT(30) +#define CNTR_ALL_QUEUE_EN_MASK BIT(29) +#define CNTR_ALL_DSCP_RING_EN_MASK BIT(28) +#define CNTR_SRC_MASK GENMASK(27, 24) +#define CNTR_DSCP_RING_MASK GENMASK(20, 16) +#define CNTR_CHAN_MASK GENMASK(7, 3) +#define CNTR_QUEUE_MASK GENMASK(2, 0) + +#define REG_CNTR_VAL(_n) (0x0404 + ((_n) << 3)) + +#define REG_LMGR_INIT_CFG 0x1000 +#define LMGR_INIT_START BIT(31) +#define LMGR_SRAM_MODE_MASK BIT(30) +#define HW_FWD_PKTSIZE_OVERHEAD_MASK GENMASK(27, 20) +#define HW_FWD_DESC_NUM_MASK GENMASK(16, 0) + +#define REG_FWD_DSCP_LOW_THR 0x1004 +#define FWD_DSCP_LOW_THR_MASK GENMASK(17, 0) + +#define REG_EGRESS_RATE_METER_CFG 0x100c +#define EGRESS_RATE_METER_EN_MASK BIT(31) +#define EGRESS_RATE_METER_EQ_RATE_EN_MASK BIT(17) +#define EGRESS_RATE_METER_WINDOW_SZ_MASK GENMASK(16, 12) +#define EGRESS_RATE_METER_TIMESLICE_MASK GENMASK(10, 0) + +#define REG_EGRESS_TRTCM_CFG 0x1010 +#define EGRESS_TRTCM_EN_MASK BIT(31) +#define EGRESS_TRTCM_MODE_MASK BIT(30) +#define EGRESS_SLOW_TICK_RATIO_MASK GENMASK(29, 16) +#define EGRESS_FAST_TICK_MASK GENMASK(15, 0) + +#define TRTCM_PARAM_RW_MASK BIT(31) +#define TRTCM_PARAM_RW_DONE_MASK BIT(30) +#define TRTCM_PARAM_TYPE_MASK GENMASK(29, 28) +#define TRTCM_METER_GROUP_MASK GENMASK(27, 26) +#define TRTCM_PARAM_INDEX_MASK GENMASK(23, 17) +#define TRTCM_PARAM_RATE_TYPE_MASK BIT(16) + +#define REG_TRTCM_CFG_PARAM(_n) ((_n) + 0x4) +#define REG_TRTCM_DATA_LOW(_n) ((_n) + 0x8) +#define REG_TRTCM_DATA_HIGH(_n) ((_n) + 0xc) + +#define REG_TXWRR_MODE_CFG 0x1020 +#define TWRR_WEIGHT_SCALE_MASK BIT(31) +#define TWRR_WEIGHT_BASE_MASK BIT(3) + +#define REG_TXWRR_WEIGHT_CFG 0x1024 +#define TWRR_RW_CMD_MASK BIT(31) +#define TWRR_RW_CMD_DONE BIT(30) +#define TWRR_CHAN_IDX_MASK GENMASK(23, 19) +#define TWRR_QUEUE_IDX_MASK GENMASK(18, 16) +#define TWRR_VALUE_MASK GENMASK(15, 0) + +#define REG_PSE_BUF_USAGE_CFG 0x1028 +#define PSE_BUF_ESTIMATE_EN_MASK BIT(29) + +#define REG_CHAN_QOS_MODE(_n) (0x1040 + ((_n) << 2)) +#define CHAN_QOS_MODE_MASK(_n) GENMASK(2 + ((_n) << 2), (_n) << 2) + +#define REG_GLB_TRTCM_CFG 0x1080 +#define GLB_TRTCM_EN_MASK BIT(31) +#define GLB_TRTCM_MODE_MASK BIT(30) +#define GLB_SLOW_TICK_RATIO_MASK GENMASK(29, 16) +#define GLB_FAST_TICK_MASK GENMASK(15, 0) + +#define REG_TXQ_CNGST_CFG 0x10a0 +#define TXQ_CNGST_DROP_EN BIT(31) +#define TXQ_CNGST_DEI_DROP_EN BIT(30) + +#define REG_SLA_TRTCM_CFG 0x1150 +#define SLA_TRTCM_EN_MASK BIT(31) +#define SLA_TRTCM_MODE_MASK BIT(30) +#define SLA_SLOW_TICK_RATIO_MASK GENMASK(29, 16) +#define SLA_FAST_TICK_MASK GENMASK(15, 0) + +/* CTRL 
*/ +#define QDMA_DESC_DONE_MASK BIT(31) +#define QDMA_DESC_DROP_MASK BIT(30) /* tx: drop - rx: overflow */ +#define QDMA_DESC_MORE_MASK BIT(29) /* more SG elements */ +#define QDMA_DESC_DEI_MASK BIT(25) +#define QDMA_DESC_NO_DROP_MASK BIT(24) +#define QDMA_DESC_LEN_MASK GENMASK(15, 0) +/* DATA */ +#define QDMA_DESC_NEXT_ID_MASK GENMASK(15, 0) +/* TX MSG0 */ +#define QDMA_ETH_TXMSG_MIC_IDX_MASK BIT(30) +#define QDMA_ETH_TXMSG_SP_TAG_MASK GENMASK(29, 14) +#define QDMA_ETH_TXMSG_ICO_MASK BIT(13) +#define QDMA_ETH_TXMSG_UCO_MASK BIT(12) +#define QDMA_ETH_TXMSG_TCO_MASK BIT(11) +#define QDMA_ETH_TXMSG_TSO_MASK BIT(10) +#define QDMA_ETH_TXMSG_FAST_MASK BIT(9) +#define QDMA_ETH_TXMSG_OAM_MASK BIT(8) +#define QDMA_ETH_TXMSG_CHAN_MASK GENMASK(7, 3) +#define QDMA_ETH_TXMSG_QUEUE_MASK GENMASK(2, 0) +/* TX MSG1 */ +#define QDMA_ETH_TXMSG_NO_DROP BIT(31) +#define QDMA_ETH_TXMSG_METER_MASK GENMASK(30, 24) /* 0x7f no meters */ +#define QDMA_ETH_TXMSG_FPORT_MASK GENMASK(23, 20) +#define QDMA_ETH_TXMSG_NBOQ_MASK GENMASK(19, 15) +#define QDMA_ETH_TXMSG_HWF_MASK BIT(14) +#define QDMA_ETH_TXMSG_HOP_MASK BIT(13) +#define QDMA_ETH_TXMSG_PTP_MASK BIT(12) +#define QDMA_ETH_TXMSG_ACNT_G1_MASK GENMASK(10, 6) /* 0x1f do not count */ +#define QDMA_ETH_TXMSG_ACNT_G0_MASK GENMASK(5, 0) /* 0x3f do not count */ + +/* RX MSG1 */ +#define QDMA_ETH_RXMSG_DEI_MASK BIT(31) +#define QDMA_ETH_RXMSG_IP6_MASK BIT(30) +#define QDMA_ETH_RXMSG_IP4_MASK BIT(29) +#define QDMA_ETH_RXMSG_IP4F_MASK BIT(28) +#define QDMA_ETH_RXMSG_L4_VALID_MASK BIT(27) +#define QDMA_ETH_RXMSG_L4F_MASK BIT(26) +#define QDMA_ETH_RXMSG_SPORT_MASK GENMASK(25, 21) +#define QDMA_ETH_RXMSG_CRSN_MASK GENMASK(20, 16) +#define QDMA_ETH_RXMSG_PPE_ENTRY_MASK GENMASK(15, 0) + +struct airoha_qdma_desc { + __le32 rsv; + __le32 ctrl; + __le32 addr; + __le32 data; + __le32 msg0; + __le32 msg1; + __le32 msg2; + __le32 msg3; +}; + +/* CTRL0 */ +#define QDMA_FWD_DESC_CTX_MASK BIT(31) +#define QDMA_FWD_DESC_RING_MASK GENMASK(30, 28) +#define QDMA_FWD_DESC_IDX_MASK GENMASK(27, 16) +#define QDMA_FWD_DESC_LEN_MASK GENMASK(15, 0) +/* CTRL1 */ +#define QDMA_FWD_DESC_FIRST_IDX_MASK GENMASK(15, 0) +/* CTRL2 */ +#define QDMA_FWD_DESC_MORE_PKT_NUM_MASK GENMASK(2, 0) + +struct airoha_qdma_fwd_desc { + __le32 addr; + __le32 ctrl0; + __le32 ctrl1; + __le32 ctrl2; + __le32 msg0; + __le32 msg1; + __le32 rsv0; + __le32 rsv1; +}; + +#endif /* AIROHA_REGS_H */ From patchwork Thu Feb 13 15:34:25 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973497 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4DB9DC0219D for ; Thu, 13 Feb 2025 15:45:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=KU7M6nRrVk+vhppBdjU1BKuk3f2ewIkrKIf/a+1uE48=; b=G/YHPr9E97hnW5c80qrjsXM0gL 
hF+TlqsFk9uxujREUCrHFqlKCXvAREWdAfvM0iyDip5b9db6Tjvzgu06APFgReh/Tp7Y0uq7WkYuA X5WJjU9hn06InK1XIXq6CI++r5cnbdTtzY9bLOwMEfGlW0qBrM+bmKA0k88c7ap5I3dKUWVz03nuy +UP9CC6ppyn9+s9/nPnHMMEf5wRDa05yNgo0zJxgLM7l0ILbdjxca6l3NzKRMxBFNZxp8phe7SKfS 6sQWyHw5aQroHq0hHpLl/nfx615iZsDoCp58rKrOA7YbID0pihAYmWjmMGgy/IU1cSUS+eoS++rLv RNOsV6JQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibOg-0000000BbVM-1MEI; Thu, 13 Feb 2025 15:45:02 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibF2-0000000BYil-1sVk; Thu, 13 Feb 2025 15:35:05 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 4A0205C5440; Thu, 13 Feb 2025 15:34:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 29550C4CEE4; Thu, 13 Feb 2025 15:35:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460903; bh=fPqAl5WVzSgSEPCYLYJcFo3xIskaAzrnsiLHeLFi9PU=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=bSLaNEhjAqt58Fcm2Os+WsqtKOZfvP+pvPmGhWTcEIcDGOeDVId8tug+xYIUdisd1 WhRnUGgWF8wKujf9hTHvMnmEmuLohaoo9BEVk+jpAYFkfFVZlnzriRisgAvXzXUmje Htm180EAn7Ndm0TquGj8iRw1SuG8yrvJcj/o0w2lzxy+PugO9dypUZBz4ARLRRGkz2 82nzWduz2p2w/B4rlou6oPmB3Suq9m+/iQKhogE060KjQOVu/AnjNVdtTzIX4fU9C5 DkUJ2zgFeZlNqV23sXTMIcrLTEa6VuHP4B6O2wlrVKbb7xLTnBLaTFL61BDrdwsScI FefBZD1g7Su3Q== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:25 +0100 Subject: [PATCH net-next v4 06/16] net: airoha: Move DSA tag in DMA descriptor MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-6-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073504_610723_57B9399F X-CRM114-Status: GOOD ( 23.21 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Packet Processor Engine (PPE) module reads DSA tags from the DMA descriptor and requires untagged DSA packets to properly parse them. Move DSA tag in the DMA descriptor on TX side and read DSA tag from DMA descriptor on RX side. In order to avoid skb reallocation, store tag in skb_dst on RX side. This is a preliminary patch to enable netfilter flowtable hw offloading on EN7581 SoC. 
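
As a rough sketch of how a consumer on the RX side can use this (illustrative only, the helper name is invented and not part of this patch): a DSA tag parser can recover the switch port index from the metadata dst installed by the MAC driver, without touching the packet data.

#include <linux/skbuff.h>
#include <net/dst_metadata.h>

/* Hypothetical helper: return the source switch port carried via the
 * metadata dst set with skb_dst_set_noref() in the RX path, or -1 if
 * no metadata is present and the in-band DSA tag must be parsed.
 */
static int example_sptag_rx_port(struct sk_buff *skb)
{
	struct metadata_dst *md_dst = skb_metadata_dst(skb);

	if (md_dst && md_dst->type == METADATA_HW_PORT_MUX)
		return md_dst->u.port_info.port_id;

	return -1;
}
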
Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 123 ++++++++++++++++++++++++++++-- drivers/net/ethernet/airoha/airoha_eth.h | 7 ++ drivers/net/ethernet/airoha/airoha_regs.h | 2 + 3 files changed, 126 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index b79556f1b4951c687aa89bc5839fc9405581a6c3..4f45db86d8d8d6b7a13d56a9315773f8685a09f6 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -656,6 +657,7 @@ static int airoha_qdma_rx_process(struct airoha_queue *q, int budget) struct airoha_qdma_desc *desc = &q->desc[q->tail]; dma_addr_t dma_addr = le32_to_cpu(desc->addr); u32 desc_ctrl = le32_to_cpu(desc->ctrl); + struct airoha_gdm_port *port; struct sk_buff *skb; int len, p; @@ -683,6 +685,7 @@ static int airoha_qdma_rx_process(struct airoha_queue *q, int budget) continue; } + port = eth->ports[p]; skb = napi_build_skb(e->buf, q->buf_size); if (!skb) { page_pool_put_full_page(q->page_pool, @@ -694,10 +697,26 @@ static int airoha_qdma_rx_process(struct airoha_queue *q, int budget) skb_reserve(skb, 2); __skb_put(skb, len); skb_mark_for_recycle(skb); - skb->dev = eth->ports[p]->dev; + skb->dev = port->dev; skb->protocol = eth_type_trans(skb, skb->dev); skb->ip_summed = CHECKSUM_UNNECESSARY; skb_record_rx_queue(skb, qid); + + if (netdev_uses_dsa(port->dev)) { + /* PPE module requires untagged packets to work + * properly and it provides DSA port index via the + * DMA descriptor. Report DSA tag to the DSA stack + * via skb dst info. + */ + u32 sptag = FIELD_GET(QDMA_ETH_RXMSG_SPTAG, + le32_to_cpu(desc->msg0)); + + if (sptag < ARRAY_SIZE(port->dsa_meta) && + port->dsa_meta[sptag]) + skb_dst_set_noref(skb, + &port->dsa_meta[sptag]->dst); + } + napi_gro_receive(&q->napi, skb); done++; @@ -1636,25 +1655,74 @@ static u16 airoha_dev_select_queue(struct net_device *dev, struct sk_buff *skb, return queue < dev->num_tx_queues ? queue : 0; } +static u32 airoha_get_dsa_tag(struct sk_buff *skb, struct net_device *dev) +{ +#if IS_ENABLED(CONFIG_NET_DSA) + struct ethhdr *ehdr; + struct dsa_port *dp; + u8 xmit_tpid; + u16 tag; + + if (!netdev_uses_dsa(dev)) + return 0; + + dp = dev->dsa_ptr; + if (IS_ERR(dp)) + return 0; + + if (dp->tag_ops->proto != DSA_TAG_PROTO_MTK) + return 0; + + if (skb_ensure_writable(skb, ETH_HLEN)) + return 0; + + ehdr = (struct ethhdr *)skb->data; + tag = be16_to_cpu(ehdr->h_proto); + xmit_tpid = tag >> 8; + + switch (xmit_tpid) { + case MTK_HDR_XMIT_TAGGED_TPID_8100: + ehdr->h_proto = cpu_to_be16(ETH_P_8021Q); + break; + case MTK_HDR_XMIT_TAGGED_TPID_88A8: + ehdr->h_proto = cpu_to_be16(ETH_P_8021AD); + break; + default: + /* PPE module requires untagged DSA packets to work properly, + * so move DSA tag to DMA descriptor. 
+ */ + memmove(skb->data + MTK_HDR_LEN, skb->data, 2 * ETH_ALEN); + __skb_pull(skb, MTK_HDR_LEN); + break; + } + + return tag; +#else + return 0; +#endif +} + static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb, struct net_device *dev) { struct airoha_gdm_port *port = netdev_priv(dev); - u32 nr_frags = 1 + skb_shinfo(skb)->nr_frags; - u32 msg0, msg1, len = skb_headlen(skb); struct airoha_qdma *qdma = port->qdma; + u32 nr_frags, tag, msg0, msg1, len; struct netdev_queue *txq; struct airoha_queue *q; - void *data = skb->data; + void *data; int i, qid; u16 index; u8 fport; qid = skb_get_queue_mapping(skb) % ARRAY_SIZE(qdma->q_tx); + tag = airoha_get_dsa_tag(skb, dev); + msg0 = FIELD_PREP(QDMA_ETH_TXMSG_CHAN_MASK, qid / AIROHA_NUM_QOS_QUEUES) | FIELD_PREP(QDMA_ETH_TXMSG_QUEUE_MASK, - qid % AIROHA_NUM_QOS_QUEUES); + qid % AIROHA_NUM_QOS_QUEUES) | + FIELD_PREP(QDMA_ETH_TXMSG_SP_TAG_MASK, tag); if (skb->ip_summed == CHECKSUM_PARTIAL) msg0 |= FIELD_PREP(QDMA_ETH_TXMSG_TCO_MASK, 1) | FIELD_PREP(QDMA_ETH_TXMSG_UCO_MASK, 1) | @@ -1685,6 +1753,8 @@ static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb, spin_lock_bh(&q->lock); txq = netdev_get_tx_queue(dev, qid); + nr_frags = 1 + skb_shinfo(skb)->nr_frags; + if (q->queued + nr_frags > q->ndesc) { /* not enough space in the queue */ netif_tx_stop_queue(txq); @@ -1692,7 +1762,10 @@ static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb, return NETDEV_TX_BUSY; } + len = skb_headlen(skb); + data = skb->data; index = q->head; + for (i = 0; i < nr_frags; i++) { struct airoha_qdma_desc *desc = &q->desc[index]; struct airoha_queue_entry *e = &q->entry[index]; @@ -2226,6 +2299,37 @@ static const struct ethtool_ops airoha_ethtool_ops = { .get_rmon_stats = airoha_ethtool_get_rmon_stats, }; +static int airoha_metadata_dst_alloc(struct airoha_gdm_port *port) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(port->dsa_meta); i++) { + struct metadata_dst *md_dst; + + md_dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, + GFP_KERNEL); + if (!md_dst) + return -ENOMEM; + + md_dst->u.port_info.port_id = i; + port->dsa_meta[i] = md_dst; + } + + return 0; +} + +static void airoha_metadata_dst_free(struct airoha_gdm_port *port) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(port->dsa_meta); i++) { + if (!port->dsa_meta[i]) + continue; + + metadata_dst_free(port->dsa_meta[i]); + } +} + static int airoha_alloc_gdm_port(struct airoha_eth *eth, struct device_node *np) { const __be32 *id_ptr = of_get_property(np, "reg", NULL); @@ -2298,6 +2402,10 @@ static int airoha_alloc_gdm_port(struct airoha_eth *eth, struct device_node *np) port->id = id; eth->ports[index] = port; + err = airoha_metadata_dst_alloc(port); + if (err) + return err; + return register_netdev(dev); } @@ -2390,8 +2498,10 @@ static int airoha_probe(struct platform_device *pdev) for (i = 0; i < ARRAY_SIZE(eth->ports); i++) { struct airoha_gdm_port *port = eth->ports[i]; - if (port && port->dev->reg_state == NETREG_REGISTERED) + if (port && port->dev->reg_state == NETREG_REGISTERED) { + airoha_metadata_dst_free(port); unregister_netdev(port->dev); + } } free_netdev(eth->napi_dev); platform_set_drvdata(pdev, NULL); @@ -2416,6 +2526,7 @@ static void airoha_remove(struct platform_device *pdev) continue; airoha_dev_stop(port->dev); + airoha_metadata_dst_free(port); unregister_netdev(port->dev); } free_netdev(eth->napi_dev); diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 743aaf10235fe09fb2a91b491f4b25064ed8319b..fee6c10eaedfd30207205b6557e856091fd45d7e 100644 --- 
a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -15,6 +15,7 @@ #define AIROHA_MAX_NUM_GDM_PORTS 1 #define AIROHA_MAX_NUM_QDMA 2 +#define AIROHA_MAX_DSA_PORTS 7 #define AIROHA_MAX_NUM_RSTS 3 #define AIROHA_MAX_NUM_XSI_RSTS 5 #define AIROHA_MAX_MTU 2000 @@ -43,6 +44,10 @@ #define QDMA_METER_IDX(_n) ((_n) & 0xff) #define QDMA_METER_GROUP(_n) (((_n) >> 8) & 0x3) +#define MTK_HDR_LEN 4 +#define MTK_HDR_XMIT_TAGGED_TPID_8100 1 +#define MTK_HDR_XMIT_TAGGED_TPID_88A8 2 + enum { QDMA_INT_REG_IDX0, QDMA_INT_REG_IDX1, @@ -231,6 +236,8 @@ struct airoha_gdm_port { /* qos stats counters */ u64 cpu_tx_packets; u64 fwd_tx_packets; + + struct metadata_dst *dsa_meta[AIROHA_MAX_DSA_PORTS]; }; struct airoha_eth { diff --git a/drivers/net/ethernet/airoha/airoha_regs.h b/drivers/net/ethernet/airoha/airoha_regs.h index 7c9dadb348834cb5a856760abe45e8221d6fd700..e467dd81ff44a9ad560226cab42b7431812f5fb9 100644 --- a/drivers/net/ethernet/airoha/airoha_regs.h +++ b/drivers/net/ethernet/airoha/airoha_regs.h @@ -624,6 +624,8 @@ #define QDMA_ETH_TXMSG_ACNT_G1_MASK GENMASK(10, 6) /* 0x1f do not count */ #define QDMA_ETH_TXMSG_ACNT_G0_MASK GENMASK(5, 0) /* 0x3f do not count */ +/* RX MSG0 */ +#define QDMA_ETH_RXMSG_SPTAG GENMASK(21, 14) /* RX MSG1 */ #define QDMA_ETH_RXMSG_DEI_MASK BIT(31) #define QDMA_ETH_RXMSG_IP6_MASK BIT(30) From patchwork Thu Feb 13 15:34:26 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973502 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AFC84C021A0 for ; Thu, 13 Feb 2025 15:46:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=tka6xmrD6y43k+BDIfix4JOU0iopwBuyRSqkyAdsxWA=; b=zS8J7wXkZLt4dESUuHPVl7C/Gi SmkfdNEp4gmlB1NRnSG3vjGTRIPLpJZIeZpFTKUw/tsQScYDJAUoPdJgiOHZEWyBp6pzvkOYzBw9w ZgrLKevyYEYgRIuZ0SgJX97387+JnSnpEhvcpJaDv5kfd99m0NWnNJOWeGmzunIVxcaR9QwEr8e3B UZ/U61wlhibcG3kbDIYbv/XzIGRKinqQ7FUA5gnQW0lyw80/mZknj4+a7WXBdBQejE0Up/yJ+9plF 4lRSLLSs7zc/eN+cfbl+l3BuakD9bWcmX02Licx10m6f8ctlataCcO0i6dRL0g0nzuB7xLEHqB1MU ypAplDMg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibQ5-0000000BbmM-1YVY; Thu, 13 Feb 2025 15:46:29 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibF5-0000000BYjc-0WyF; Thu, 13 Feb 2025 15:35:08 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 08F835C5038; Thu, 13 Feb 2025 15:34:27 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C4C4BC4CEE4; Thu, 13 Feb 2025 15:35:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460906; 
bh=AhjglNReMh607BlyjRNHV+zEVyRap39omDdEs9Q1guw=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=IZ+ZEX6dKY1KfwxzwCwFN8ay2FH1HY3FbHN6qg6sLZf99XrCukFbIibPTNmuLKf2K Jx0ioe43ILMPMXO+SCB2+eFKKhmYtCppHi/bMjBEAnI2SiMftjftXHzf7gcXo7uL2A FBkyu9ZzxPqvRvcc5SqW20vrpERPi3wejV+oUHn8vHbGY3ivISpuoKH42TboD2rYZO kL9Nt1hurUOjao4lZmXFl6Ljod1AxjhqtGjYda/jMZyyqYq+9x0EdLnVRFkO8PRBXE 2AaoWYz6f4iErsH/Zz3oDwaMLxKXjubRzmUW8xb1TDY/4wyB42doizAZzy4khflR/G /+1qJ8ZvrUw3w== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:26 +0100 Subject: [PATCH net-next v4 07/16] net: dsa: mt7530: Enable Rx sptag for EN7581 SoC MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-7-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073507_253610_75D937F0 X-CRM114-Status: GOOD ( 10.77 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Packet Processor Engine (PPE) module used for hw acceleration on EN7581 mac block, in order to properly parse packets, requires DSA untagged packets on TX side and read DSA tag from DMA descriptor on RX side. For this reason, enable RX Special Tag (SPTAG) for EN7581 SoC. This is a preliminary patch to enable netfilter flowtable hw offloading on EN7581 SoC. Signed-off-by: Lorenzo Bianconi --- drivers/net/dsa/mt7530.c | 5 +++++ drivers/net/dsa/mt7530.h | 4 ++++ 2 files changed, 9 insertions(+) diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index 9fd44e55d51963318ed5cfa3175af9faa77236e4..5b2fa9c0375b6fe7ced7fefc4cc16326d9deb7cf 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -2586,6 +2586,11 @@ mt7531_setup_common(struct dsa_switch *ds) /* Allow mirroring frames received on the local port (monitor port). 
*/ mt7530_set(priv, MT753X_AGC, LOCAL_EN); + /* Enable Special Tag for rx frames */ + if (priv->id == ID_EN7581) + mt7530_write(priv, MT753X_CPORT_SPTAG_CFG, + CPORT_SW2FE_STAG_EN | CPORT_FE2SW_STAG_EN); + /* Flush the FDB table */ ret = mt7530_fdb_cmd(priv, MT7530_FDB_FLUSH, NULL); if (ret < 0) diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h index 448200689f492dcb73ef056d7284090c1c662e67..349d72a35771f35d478244ab29be1801b3466a5f 100644 --- a/drivers/net/dsa/mt7530.h +++ b/drivers/net/dsa/mt7530.h @@ -627,6 +627,10 @@ enum mt7531_xtal_fsel { #define MT7531_GPIO12_RG_RXD3_MASK GENMASK(19, 16) #define MT7531_EXT_P_MDIO_12 (2 << 16) +#define MT753X_CPORT_SPTAG_CFG 0x7c10 +#define CPORT_SW2FE_STAG_EN BIT(1) +#define CPORT_FE2SW_STAG_EN BIT(0) + /* Registers for LED GPIO control (MT7530 only) * All registers follow this pattern: * [ 2: 0] port 0 From patchwork Thu Feb 13 15:34:27 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973503 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 00C3EC021A0 for ; Thu, 13 Feb 2025 15:48:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=SHgvdymIKYdf2pjmBIG1a+MPKx6FLIuz+bwHkfDHU74=; b=IVIIuvptoE55HIHQROXwO30PWR EqvTH45pr/8TF8zU0dudr40HamULrWETwSdueGPcvhfXNVQ/X0aR74UmZ9heiCjCCIpgWJCaggexC 0u1TH31tHojZptcTWzbShKC0j6PHHcXQSTUn4gbF3vR0vAUIbTNmRpgmD5Ox6+3ZZypx1ImAhA9E5 VgyEU3FK1shu6EzgM3K8JOky2tSOJAClcrHSlSJ59UGel5x5kaVxFRp2UuXk42g1Vqd452ihKYtVU b+pWhBnhIeJi96Qomb5zDSyXgS0vDS8LoeO6pi+K5MbvtGo/1BF337eb4awY9+Ta8Nw4zgZ69Hz8i y2qYikNQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibRU-0000000Bc3w-1Fev; Thu, 13 Feb 2025 15:47:56 +0000 Received: from nyc.source.kernel.org ([147.75.193.91]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibF7-0000000BYka-2obj; Thu, 13 Feb 2025 15:35:10 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id A8276A41CDB; Thu, 13 Feb 2025 15:33:23 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 52E29C4CED1; Thu, 13 Feb 2025 15:35:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460908; bh=6PWC818ZtuH6Oyc32LpcXRRNu0olUP6Lyo19SqxkH1Y=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=A0JOdHHRbUW2A0NFv29k3wK4KP/sn9D/lZzkRjVkyNEVnD+1jFk7+cJbmV9WBcews dXNu1zZSz1ynvzSK0bN1WJQ2ZuD1zJwWs9gC8WVSzAEYSxj2ColuTdUaAcSa3fYQx6 w2zvk7f0nKQsrADXYyO8dCKdzalIKP7P3JkEsU6EUTP8qsBAEgUN1x7ys8fDLNvKzp OhBT5fKPfxswuY7Tnj1DoFaix0MA06S/ikRqrrMsFPPrBkpNLgqBU7DjJl44grEbGR csKUrn3Ouuz+BlRUK/Nd4Js+9SQe5ojvey8o1BByqTLSnN7RUwFZJFIXnjLKiNketj ch2zY0a7kXHVQ== From: Lorenzo Bianconi Date: Thu, 13 
Feb 2025 16:34:27 +0100 Subject: [PATCH net-next v4 08/16] net: airoha: Enable support for multiple net_devices MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-8-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com, Christian Marangi X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073509_843766_090A8EB4 X-CRM114-Status: GOOD ( 19.01 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In the current codebase airoha_eth driver supports just a single net_device connected to the Packet Switch Engine (PSE) lan port (GDM1). As shown in commit 23020f049327 ("net: airoha: Introduce ethernet support for EN7581 SoC"), PSE can switch packets between four GDM ports. Enable the capability to create a net_device for each GDM port of the PSE module. Moreover, since the QDMA blocks can be shared between net_devices, do not stop TX/RX DMA in airoha_dev_stop() if there are active net_devices for this QDMA block. This is a preliminary patch to enable flowtable hw offloading for EN7581 SoC. 
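
The shared-QDMA rule described above boils down to a simple reference count (simplified sketch with invented names; the real hunks are in the diff below): each net_device that is brought up takes a reference on its QDMA block, and only the last one to go down actually disables the DMA engine.

#include <linux/atomic.h>

struct example_qdma {
	atomic_t users;		/* number of UP net_devices using this block */
};

static void example_qdma_open(struct example_qdma *qdma)
{
	/* enabling TX/RX DMA twice is harmless, so just take a reference */
	atomic_inc(&qdma->users);
}

static bool example_qdma_stop(struct example_qdma *qdma)
{
	/* true only for the last user, which must clear
	 * GLOBAL_CFG_TX_DMA_EN_MASK / GLOBAL_CFG_RX_DMA_EN_MASK and
	 * clean up the TX queues
	 */
	return atomic_dec_and_test(&qdma->users);
}
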
Co-developed-by: Christian Marangi Signed-off-by: Christian Marangi Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 35 +++++++++++++++++++------------- drivers/net/ethernet/airoha/airoha_eth.h | 4 +++- 2 files changed, 24 insertions(+), 15 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 4f45db86d8d8d6b7a13d56a9315773f8685a09f6..513914da8503c1162b0f1b4fcca57434385fa4d1 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -1562,6 +1562,7 @@ static int airoha_dev_open(struct net_device *dev) airoha_qdma_set(qdma, REG_QDMA_GLOBAL_CFG, GLOBAL_CFG_TX_DMA_EN_MASK | GLOBAL_CFG_RX_DMA_EN_MASK); + atomic_inc(&qdma->users); return 0; } @@ -1577,16 +1578,20 @@ static int airoha_dev_stop(struct net_device *dev) if (err) return err; - airoha_qdma_clear(qdma, REG_QDMA_GLOBAL_CFG, - GLOBAL_CFG_TX_DMA_EN_MASK | - GLOBAL_CFG_RX_DMA_EN_MASK); + for (i = 0; i < ARRAY_SIZE(qdma->q_tx); i++) + netdev_tx_reset_subqueue(dev, i); - for (i = 0; i < ARRAY_SIZE(qdma->q_tx); i++) { - if (!qdma->q_tx[i].ndesc) - continue; + if (atomic_dec_and_test(&qdma->users)) { + airoha_qdma_clear(qdma, REG_QDMA_GLOBAL_CFG, + GLOBAL_CFG_TX_DMA_EN_MASK | + GLOBAL_CFG_RX_DMA_EN_MASK); - airoha_qdma_cleanup_tx_queue(&qdma->q_tx[i]); - netdev_tx_reset_subqueue(dev, i); + for (i = 0; i < ARRAY_SIZE(qdma->q_tx); i++) { + if (!qdma->q_tx[i].ndesc) + continue; + + airoha_qdma_cleanup_tx_queue(&qdma->q_tx[i]); + } } return 0; @@ -2330,13 +2335,14 @@ static void airoha_metadata_dst_free(struct airoha_gdm_port *port) } } -static int airoha_alloc_gdm_port(struct airoha_eth *eth, struct device_node *np) +static int airoha_alloc_gdm_port(struct airoha_eth *eth, + struct device_node *np, int index) { const __be32 *id_ptr = of_get_property(np, "reg", NULL); struct airoha_gdm_port *port; struct airoha_qdma *qdma; struct net_device *dev; - int err, index; + int err, p; u32 id; if (!id_ptr) { @@ -2345,14 +2351,14 @@ static int airoha_alloc_gdm_port(struct airoha_eth *eth, struct device_node *np) } id = be32_to_cpup(id_ptr); - index = id - 1; + p = id - 1; if (!id || id > ARRAY_SIZE(eth->ports)) { dev_err(eth->dev, "invalid gdm port id: %d\n", id); return -EINVAL; } - if (eth->ports[index]) { + if (eth->ports[p]) { dev_err(eth->dev, "duplicate gdm port id: %d\n", id); return -EINVAL; } @@ -2400,7 +2406,7 @@ static int airoha_alloc_gdm_port(struct airoha_eth *eth, struct device_node *np) port->qdma = qdma; port->dev = dev; port->id = id; - eth->ports[index] = port; + eth->ports[p] = port; err = airoha_metadata_dst_alloc(port); if (err) @@ -2472,6 +2478,7 @@ static int airoha_probe(struct platform_device *pdev) for (i = 0; i < ARRAY_SIZE(eth->qdma); i++) airoha_qdma_start_napi(ð->qdma[i]); + i = 0; for_each_child_of_node(pdev->dev.of_node, np) { if (!of_device_is_compatible(np, "airoha,eth-mac")) continue; @@ -2479,7 +2486,7 @@ static int airoha_probe(struct platform_device *pdev) if (!of_device_is_available(np)) continue; - err = airoha_alloc_gdm_port(eth, np); + err = airoha_alloc_gdm_port(eth, np, i++); if (err) { of_node_put(np); goto error_napi_stop; diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index fee6c10eaedfd30207205b6557e856091fd45d7e..74bdfd9e8d2fb3706f5ec6a4e17fe07fbcb38c3d 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -13,7 +13,7 @@ #include #include -#define AIROHA_MAX_NUM_GDM_PORTS 1 
+#define AIROHA_MAX_NUM_GDM_PORTS 4 #define AIROHA_MAX_NUM_QDMA 2 #define AIROHA_MAX_DSA_PORTS 7 #define AIROHA_MAX_NUM_RSTS 3 @@ -212,6 +212,8 @@ struct airoha_qdma { u32 irqmask[QDMA_INT_REG_MAX]; int irq; + atomic_t users; + struct airoha_tx_irq_queue q_tx_irq[AIROHA_NUM_TX_IRQ]; struct airoha_queue q_tx[AIROHA_NUM_TX_RING]; From patchwork Thu Feb 13 15:34:28 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973504 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DAAA0C0219D for ; Thu, 13 Feb 2025 15:49:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=RBgn6m6XfWg9FS22zzESp4kIsqSgbH+SZXoud4mG/CY=; b=viFqgVTizoB+TAjoYJcGBUkNkP IchwVAAgGcRLZ+dQrHUYOBWFB72pVw316/A+X0iZ16a4HldQCPFJXfn5p5lWtV5ObpdleVqTSwr8A 4CBwhvES2YMXNsxabTwIdpQ2dfkUQ7yTqhLvix/PQIS3oUb0yjQO3PmkuhKUd4aVB9n+g38RyyosR e21c6A0cRDoXg/uUguBFrT6x505eBprW/iRzH5IO6oPvvO1fYBSkySM6oj7CNI3vXGKxTNnb25DJ2 PAOxaKZCJ7//BnlzWN3QPME6TplIJeQWn03/7k0hZwDTXadu2uVEnSStwxKmMzMjWJu2TQGWFt0iQ dNaZ3XzQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibSs-0000000BcPN-2hPG; Thu, 13 Feb 2025 15:49:22 +0000 Received: from nyc.source.kernel.org ([2604:1380:45d1:ec00::3]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibFA-0000000BYlp-15kM; Thu, 13 Feb 2025 15:35:13 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 41F8EA426CC; Thu, 13 Feb 2025 15:33:26 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DFE7FC4CED1; Thu, 13 Feb 2025 15:35:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460911; bh=UyYEPlddIkFUvoM3xOhih3hnmMO/4Vng4ybbFMkkQHs=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=pJ5+aHhjKxkLotKUPybqPEqSyz8EEOzjESGjPiXhta8Pqexy0No5cGbIiwTua1g8C uQ2TVeCm279dxE476aFQPrJfAwOurxV0Saldd+7r4ntq3VGEkaTQLJHvMCSD4ddVjH /NuKGT6zEC1C4edCxaM5pSk2f4pGvkrxwhVxrYNNJ/p1lqrpDkVWR9Lm3FTiVu0Cp7 hj8wP8RPUkeQ3U2hD06PvyUrtWZSyip3eE6Pctut5z19c3HB7j+oYABd2JK9Jdbntm eNB8P6DxVfbaleFP5llq/rkaOAmCpEVqU/lvr//kmP7K8JT59D6ayKLdF3maqHg+qS HsuIC50ilL3VA== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:28 +0100 Subject: [PATCH net-next v4 09/16] net: airoha: Move REG_GDM_FWD_CFG() initialization in airoha_dev_init() MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-9-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073512_430610_08CC10E7 X-CRM114-Status: GOOD ( 10.40 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Move REG_GDM_FWD_CFG() register initialization in airoha_dev_init routine. Moreover, always send traffic PPE module in order to be processed by hw accelerator. This is a preliminary patch to enable netfilter flowtable hw offloading on EN7581 SoC. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 14 ++++---------- 1 file changed, 4 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 513914da8503c1162b0f1b4fcca57434385fa4d1..6c899358c086e6eb1de3ed25f625e48db129888f 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -107,25 +107,20 @@ static void airoha_set_gdm_port_fwd_cfg(struct airoha_eth *eth, u32 addr, static int airoha_set_gdm_port(struct airoha_eth *eth, int port, bool enable) { - u32 val = enable ? 
FE_PSE_PORT_PPE1 : FE_PSE_PORT_DROP; - u32 vip_port, cfg_addr; + u32 vip_port; switch (port) { case XSI_PCIE0_PORT: vip_port = XSI_PCIE0_VIP_PORT_MASK; - cfg_addr = REG_GDM_FWD_CFG(3); break; case XSI_PCIE1_PORT: vip_port = XSI_PCIE1_VIP_PORT_MASK; - cfg_addr = REG_GDM_FWD_CFG(3); break; case XSI_USB_PORT: vip_port = XSI_USB_VIP_PORT_MASK; - cfg_addr = REG_GDM_FWD_CFG(4); break; case XSI_ETH_PORT: vip_port = XSI_ETH_VIP_PORT_MASK; - cfg_addr = REG_GDM_FWD_CFG(4); break; default: return -EINVAL; @@ -139,8 +134,6 @@ static int airoha_set_gdm_port(struct airoha_eth *eth, int port, bool enable) airoha_fe_clear(eth, REG_FE_IFC_PORT_EN, vip_port); } - airoha_set_gdm_port_fwd_cfg(eth, cfg_addr, val); - return 0; } @@ -177,8 +170,6 @@ static void airoha_fe_maccr_init(struct airoha_eth *eth) airoha_fe_set(eth, REG_GDM_FWD_CFG(p), GDM_TCP_CKSUM | GDM_UDP_CKSUM | GDM_IP4_CKSUM | GDM_DROP_CRC_ERR); - airoha_set_gdm_port_fwd_cfg(eth, REG_GDM_FWD_CFG(p), - FE_PSE_PORT_CDM1); airoha_fe_rmw(eth, REG_GDM_LEN_CFG(p), GDM_SHORT_LEN_MASK | GDM_LONG_LEN_MASK, FIELD_PREP(GDM_SHORT_LEN_MASK, 60) | @@ -1614,8 +1605,11 @@ static int airoha_dev_set_macaddr(struct net_device *dev, void *p) static int airoha_dev_init(struct net_device *dev) { struct airoha_gdm_port *port = netdev_priv(dev); + struct airoha_eth *eth = port->qdma->eth; airoha_set_macaddr(port, dev->dev_addr); + airoha_set_gdm_port_fwd_cfg(eth, REG_GDM_FWD_CFG(port->id), + FE_PSE_PORT_PPE1); return 0; } From patchwork Thu Feb 13 15:34:29 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973514 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E90FFC021A0 for ; Thu, 13 Feb 2025 15:51:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=uEOZUQBAeR0NI122BFz/mJHucRqEfhdt0VkEFkj2sW4=; b=JkThjx6x8Y8pumdO6KrqTSGL// kTpCxCtH+FvEoTT147AJh4ahW5u/Fv/oE4JtYyE4hngtFVFL4AM38sxJkVt1atFHDuLW6adRbvJAs Ab/HPtKj6ne0dT2K376ivl5nLmi7o780bVoGmJzQEKUudwbqzqIq3VjAi59cIXTeiu2TqnT5mk+Tj 5z61Ug59JX269suUh1iC0aejI0wzT8rKhlILYviTJqmrzwsai4K0EyM7lrvJ235MDuM6veoaCNGEy F4qUnfXu3ZBK7mHgfImboFMjgA+RGUlbIcwwXsrjILLU7D1fpBm/rUo0iDw3wuRIXSeobp3hNPUjp 9TpzFZVw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibUI-0000000Bcjq-2kM7; Thu, 13 Feb 2025 15:50:50 +0000 Received: from nyc.source.kernel.org ([147.75.193.91]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibFD-0000000BYmI-0PYE; Thu, 13 Feb 2025 15:35:16 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id F028CA426CC; Thu, 13 Feb 2025 15:33:28 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 79C4CC4CEE4; Thu, 13 Feb 2025 15:35:13 +0000 
(UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460914; bh=bKwaKtekWr7M03OSmAw2n+nPB23T8n8mEzDPoJBQF1I=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=drPUIfterHYFy2CUS2NwRH6wSpehP65E7TD8Y+Gcg6m6a7foSPm4E1tAY0H7x0IsW /mQ+jFEwkin3FzegoyMH/dpBN+zzr2ZYftEJ51nbshQR4CyyKuErpcP/OL5/3bXVTE UbrwrkUJ0P5D84xDvrNTp/1nmRj7kr1NyfAIFg0Ip4XJUIho0zfeUOzckwcYQHB4q8 H2GLwVJ5ErtQWsgCXwVFoJ7eqX80NKkwxOCyChEt8K0Usw1BMFA9/zIwHXrFwd0FTJ 5DquaKeC/B07naQB9QSMD+Zjkqf7oFNobvN+9TCwGtoijEkeckJQroyGeqTok6zmiT gnj8Nrm1vJt0A== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:29 +0100 Subject: [PATCH net-next v4 10/16] net: airoha: Rename airoha_set_gdm_port_fwd_cfg() in airoha_set_vip_for_gdm_port() MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-10-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073515_264605_34273F3B X-CRM114-Status: GOOD ( 13.81 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Rename airoha_set_gdm_port() in airoha_set_vip_for_gdm_port(). Get rid of airoha_set_gdm_ports routine. 
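For clarity, a condensed sketch of the resulting callers (the complete change is in the diff below); the elided sections keep the existing QDMA start/stop handling unchanged:

static int airoha_dev_open(struct net_device *dev)
{
	struct airoha_gdm_port *port = netdev_priv(dev);
	int err;

	netif_tx_start_all_queues(dev);
	/* per-port VIP configuration replaces the airoha_set_gdm_ports() loop */
	err = airoha_set_vip_for_gdm_port(port, true);
	if (err)
		return err;

	/* ... enable QDMA TX/RX as before ... */
	return 0;
}

static int airoha_dev_stop(struct net_device *dev)
{
	struct airoha_gdm_port *port = netdev_priv(dev);
	int err;

	netif_tx_disable(dev);
	err = airoha_set_vip_for_gdm_port(port, false);
	if (err)
		return err;

	/* ... disable QDMA and clean up TX queues as before ... */
	return 0;
}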
Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 49 +++++++------------------------- drivers/net/ethernet/airoha/airoha_eth.h | 8 ------ 2 files changed, 11 insertions(+), 46 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 6c899358c086e6eb1de3ed25f625e48db129888f..31e5f0368faa13a120ba01f7413cf5c23761c143 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -105,25 +105,23 @@ static void airoha_set_gdm_port_fwd_cfg(struct airoha_eth *eth, u32 addr, FIELD_PREP(GDM_UCFQ_MASK, val)); } -static int airoha_set_gdm_port(struct airoha_eth *eth, int port, bool enable) +static int airoha_set_vip_for_gdm_port(struct airoha_gdm_port *port, + bool enable) { + struct airoha_eth *eth = port->qdma->eth; u32 vip_port; - switch (port) { - case XSI_PCIE0_PORT: + switch (port->id) { + case 3: + /* FIXME: handle XSI_PCIE1_PORT */ vip_port = XSI_PCIE0_VIP_PORT_MASK; break; - case XSI_PCIE1_PORT: - vip_port = XSI_PCIE1_VIP_PORT_MASK; - break; - case XSI_USB_PORT: - vip_port = XSI_USB_VIP_PORT_MASK; - break; - case XSI_ETH_PORT: + case 4: + /* FIXME: handle XSI_USB_PORT */ vip_port = XSI_ETH_VIP_PORT_MASK; break; default: - return -EINVAL; + return 0; } if (enable) { @@ -137,31 +135,6 @@ static int airoha_set_gdm_port(struct airoha_eth *eth, int port, bool enable) return 0; } -static int airoha_set_gdm_ports(struct airoha_eth *eth, bool enable) -{ - const int port_list[] = { - XSI_PCIE0_PORT, - XSI_PCIE1_PORT, - XSI_USB_PORT, - XSI_ETH_PORT - }; - int i, err; - - for (i = 0; i < ARRAY_SIZE(port_list); i++) { - err = airoha_set_gdm_port(eth, port_list[i], enable); - if (err) - goto error; - } - - return 0; - -error: - for (i--; i >= 0; i--) - airoha_set_gdm_port(eth, port_list[i], false); - - return err; -} - static void airoha_fe_maccr_init(struct airoha_eth *eth) { int p; @@ -1539,7 +1512,7 @@ static int airoha_dev_open(struct net_device *dev) int err; netif_tx_start_all_queues(dev); - err = airoha_set_gdm_ports(qdma->eth, true); + err = airoha_set_vip_for_gdm_port(port, true); if (err) return err; @@ -1565,7 +1538,7 @@ static int airoha_dev_stop(struct net_device *dev) int i, err; netif_tx_disable(dev); - err = airoha_set_gdm_ports(qdma->eth, false); + err = airoha_set_vip_for_gdm_port(port, false); if (err) return err; diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 74bdfd9e8d2fb3706f5ec6a4e17fe07fbcb38c3d..44834227a58982d4491f3d8174b9e0bea542f785 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -57,14 +57,6 @@ enum { QDMA_INT_REG_MAX }; -enum { - XSI_PCIE0_PORT, - XSI_PCIE1_PORT, - XSI_USB_PORT, - XSI_AE_PORT, - XSI_ETH_PORT, -}; - enum { XSI_PCIE0_VIP_PORT_MASK = BIT(22), XSI_PCIE1_VIP_PORT_MASK = BIT(23), From patchwork Thu Feb 13 15:34:30 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973735 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1DAFBC021A4 for ; Thu, 13 Feb 2025 16:56:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; 
c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=QGnbw1yc2Wp4X+bHTQYoKxaJgLXfXHOChU0InoSB5tU=; b=Nc2p3iScZeT+lNj3zFgk30qdmd Wbo6Iip/w9O9d3La+he4qY7+yINAY8kv7ZLcqWrgGrY9wGHyX4bQVE0A5ePK5WzyEsfocqcCTgCbo 2/oBdSdjQSgbUTPd9VpdKHd0UvSEuPMfMNPALHLEXZQuu99JzXADaVI9YawQoJnHUVEyvORJB0OSd wZFO5iFQoRV62sK59m0POywwyYJzP7kUV2AoLbppaeHUNdOlmep/iX7uKbdRK4YBDbR9wmzSDp3hF III22AvbHZwLM4ytCwdk4ODo3jZLhsYpRi9E/c960VNpAamPXGPJtTR36gapIQsQ86fK0r4PKXtuI 1FWznzwQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1ticVd-0000000BuN3-0ySJ; Thu, 13 Feb 2025 16:56:17 +0000 Received: from nyc.source.kernel.org ([2604:1380:45d1:ec00::3]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibFF-0000000BYnb-2Yyt; Thu, 13 Feb 2025 15:35:19 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 7DE8AA41CDB; Thu, 13 Feb 2025 15:33:31 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 28696C4CEE8; Thu, 13 Feb 2025 15:35:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460916; bh=LwwtkOpdej/qZv1Ihtwk/09yNh2N86nJc5Uj66l1lg8=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=c10AqJOZKOyKErUBcYbRfhLxES2xvgnwyMuFyOqzO5Pg1GyyDm7dj1Q65BuPr0ui3 NTuYGinpvMb/rn4SK3RfFRAUfM8c6QLhfT7rtj2pimgbAr4zI/ttePwcMeJpaBrm4U Gxvu8X2CwvMk0kPZZfMItAvJxh1ts0fDhUNd/UZNGD0dVwsU8ebD7xz8ZZ9pnjtir8 p+Kmj6vNEzREu3ohEJrUgZAuL0K0o5JCzxM/P8j1A4ygeoYX6bEcakZiYK4C1i0LTq X3DCxVj3H1apk3BS8YhRV1Z9KAkxiJZN6HdaAPKvxRk23ockBPaQWgumv7joQNtQs4 JbH3lfeifopKQ== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:30 +0100 Subject: [PATCH net-next v4 11/16] dt-bindings: net: airoha: Add the NPU node for EN7581 SoC MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-11-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073517_801651_FB27F926 X-CRM114-Status: GOOD ( 11.48 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This patch adds the NPU document binding for EN7581 SoC. 
The Airoha Network Processor Unit (NPU) provides a configuration interface to implement wired and wireless hardware flow offloading programming Packet Processor Engine (PPE) flow table. Signed-off-by: Lorenzo Bianconi Reviewed-by: Krzysztof Kozlowski --- .../devicetree/bindings/net/airoha,en7581-npu.yaml | 72 ++++++++++++++++++++++ 1 file changed, 72 insertions(+) diff --git a/Documentation/devicetree/bindings/net/airoha,en7581-npu.yaml b/Documentation/devicetree/bindings/net/airoha,en7581-npu.yaml new file mode 100644 index 0000000000000000000000000000000000000000..414e5c0615b8a8db487c23978a283b0254b2b63c --- /dev/null +++ b/Documentation/devicetree/bindings/net/airoha,en7581-npu.yaml @@ -0,0 +1,72 @@ +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/net/airoha,en7581-npu.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Airoha Network Processor Unit for EN7581 SoC + +maintainers: + - Lorenzo Bianconi + +description: + The Airoha Network Processor Unit (NPU) provides a configuration interface + to implement wired and wireless hardware flow offloading programming Packet + Processor Engine (PPE) flow table. + +properties: + compatible: + enum: + - airoha,en7581-npu + + reg: + maxItems: 1 + + interrupts: + items: + - description: mbox host irq line + - description: watchdog0 irq line + - description: watchdog1 irq line + - description: watchdog2 irq line + - description: watchdog3 irq line + - description: watchdog4 irq line + - description: watchdog5 irq line + - description: watchdog6 irq line + - description: watchdog7 irq line + + memory-region: + maxItems: 1 + description: + Memory used to store NPU firmware binary. + +required: + - compatible + - reg + - interrupts + - memory-region + +additionalProperties: false + +examples: + - | + #include + #include + soc { + #address-cells = <2>; + #size-cells = <2>; + + npu@1e900000 { + compatible = "airoha,en7581-npu"; + reg = <0 0x1e900000 0 0x313000>; + interrupts = , + , + , + , + , + , + , + , + ; + memory-region = <&npu_binary>; + }; + }; From patchwork Thu Feb 13 15:34:31 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973515 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 31177C0219D for ; Thu, 13 Feb 2025 15:55:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=EotvCmIg76eD5bu40Jq0Nke6pVYu8L5NVdcYEJafa9E=; b=f34CnIKP+MbEGgPd8MdEfiNCD4 Uya8xcROzMql+QlOh4DbkpqPaTvC36bhcSe66Vz0QfyOWcton44EdJ1TCweZCgPzFywSIrmS4p6nL TLOfU82aTlAvIqztwhawr/+JwGL0OhLTksOF03K9ahsL9O3Q2BF7mydbfYAbBB1Ej6qs2AU3xKejV J6gaLHvYTcq4TuLgx+Q5lbPYz6k3Jv0Q5eCeeUdwxGA+KxXcp4BlZ3MJESlOuLEQ8mhkOo1zkF8Yq 2Gq8jO4atiLAebE9SNETheB1irCP+2Zf1HWYCxvaQ1h6eYTvaswInwIAV14yOfYkRCWUMghIgLG6y 5yyWy+Yw==; Received: 
from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibYW-0000000BddB-1RcC; Thu, 13 Feb 2025 15:55:12 +0000 Received: from nyc.source.kernel.org ([147.75.193.91]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibFI-0000000BYoS-1Etb; Thu, 13 Feb 2025 15:35:21 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 0C4D8A41CDB; Thu, 13 Feb 2025 15:33:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id AA915C4CED1; Thu, 13 Feb 2025 15:35:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460919; bh=JZDiAKKeX0eSnGFkmKTsrRKkLXrlYR1V8xaHJKtSw9k=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=qESmeCOllYKWRPA+/oqxPEs2FmUntuKCOUixsgtpvs7kBVlUR7Gi/ZdcefSC9BhM1 rSXsz3RZgah9glsDfW6fiQMXzE7nHn2GfjjKcX4lfxS8IIvd/i/ruxGhmMYq6KNdVK 9EvLVE85tgtjJJ3R9QonUe/n1KrT2znYZfG+Cn6kwuuXassuDr8bUpu8io7DPIx4uZ kONORwUEUP9eKG08ROPcdnyp3knfUgEHVfD3sJ07ClK+GKjhBgJ6TxGxpcMmWYOv6H vb6o5/lNSF0f8kAU5dEeCLlOGhRHs90t2Dsm+pbXBeAFdJGZLwKyqlJ5hTgF70d9Dr Idue/I/44Plgg== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:31 +0100 Subject: [PATCH net-next v4 12/16] dt-bindings: net: airoha: Add airoha,npu phandle property MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-12-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com, Krzysztof Kozlowski X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073520_406184_1FDBDCCB X-CRM114-Status: UNSURE ( 8.91 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Introduce the airoha,npu property for the NPU node available on EN7581 SoC. The airoha Network Processor Unit (NPU) is used to offload network traffic forwarded between Packet Switch Engine (PSE) ports. 
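For reference, a sketch of how a consumer is expected to resolve the new property; it mirrors the airoha_npu_get() helper added later in this series (the airoha_npu_lookup name is illustrative only, and module refcounting/device-link handling is trimmed):

#include <linux/err.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>

struct airoha_npu;	/* defined in airoha_npu.h later in this series */

static struct airoha_npu *airoha_npu_lookup(struct device *dev)
{
	struct platform_device *pdev;
	struct device_node *np;
	struct airoha_npu *npu;

	/* follow the airoha,npu phandle to the NPU platform device */
	np = of_parse_phandle(dev->of_node, "airoha,npu", 0);
	if (!np)
		return ERR_PTR(-ENODEV);

	pdev = of_find_device_by_node(np);
	of_node_put(np);
	if (!pdev)
		return ERR_PTR(-ENODEV);

	/* drvdata stays NULL until the NPU driver has probed */
	npu = platform_get_drvdata(pdev);
	return npu ?: ERR_PTR(-ENODEV);
}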
Reviewed-by: Krzysztof Kozlowski Signed-off-by: Lorenzo Bianconi --- Documentation/devicetree/bindings/net/airoha,en7581-eth.yaml | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/Documentation/devicetree/bindings/net/airoha,en7581-eth.yaml b/Documentation/devicetree/bindings/net/airoha,en7581-eth.yaml index c578637c5826db4bf470a4d01ac6f3133976ae1a..0fdd1126541774acacc783d98e4c089b2d2b85e2 100644 --- a/Documentation/devicetree/bindings/net/airoha,en7581-eth.yaml +++ b/Documentation/devicetree/bindings/net/airoha,en7581-eth.yaml @@ -63,6 +63,14 @@ properties: "#size-cells": const: 0 + airoha,npu: + $ref: /schemas/types.yaml#/definitions/phandle + description: + Phandle to the node used to configure the NPU module. + The Airoha Network Processor Unit (NPU) provides a configuration + interface to implement hardware flow offloading programming Packet + Processor Engine (PPE) flow table. + patternProperties: "^ethernet@[1-4]$": type: object @@ -132,6 +140,8 @@ examples: , ; + airoha,npu = <&npu>; + #address-cells = <1>; #size-cells = <0>; From patchwork Thu Feb 13 15:34:32 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973522 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7CB9EC0219D for ; Thu, 13 Feb 2025 15:56:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=YBtItRhYFXlbTVfy0ucGVafHJYYaYuOstWEbz9Fbq24=; b=HjP1/ak02ZS4xH6gfaR4hNCtzy X3iO+NBj4CORv9XnhoZTJWd6LMtKMa/YGwjZuVHrPeDhL1zD0tmAbW2I3pXocNIiX5Pq0mLk7gF9+ 8hH3jkw92/jsk4qpeDltt2GBsmz2bJqlLSA6zvcEYTBUQoOO1y4hKe91TCUOr8ESxILj9jCIS56QQ UHU0I0i4WEXlrGjaXwMOc5BpolOsCHtB0T7VR6Eass3V4IHF2AvrSjWMtnpVoFCvYnOimzKPN/cP7 h2/qGgb67985bmAhoPezQY6kumTarkjW81Lwd/tgkwsjqBpjoMzvZZs9I2W+1nfqUw+Zsiez9PIs4 hkRSrB2g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibZu-0000000Bduf-3awp; Thu, 13 Feb 2025 15:56:38 +0000 Received: from nyc.source.kernel.org ([2604:1380:45d1:ec00::3]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibFK-0000000BYp9-3gba; Thu, 13 Feb 2025 15:35:24 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id C4914A4253D; Thu, 13 Feb 2025 15:33:36 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 477CBC4CEE4; Thu, 13 Feb 2025 15:35:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460921; bh=woRN02D6/jVDuwJPZOdHCP/oKTaxdG/uNexf0reyXJ0=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=hmDDeD2GTXgX+58y4n7K3eoyLwPhIE1/Fc0MtTLBVNNA/UePkGhdueWpMqgP6hN/w lRDGYLVzcjZhAkVlI0s9q/Uo1/WJQpDbpeS5yUrTiS+r0WPHPp8hQ1YMFODU89+9Hq 
T8ozxCwmG4ViYaNFv0z/r4HzS22rvaWn/3Z2KKKU1q+1dqnM6pTWYzQYGBDxwIR2m2 do3QIFjBVwtOxgNpxtTAiw7GBGokx9xz/6S3z+lvDKKP3vJtOVYyLyaxhW1Vrtyn6Z /Umw+WdSdq2HfVeHQXs9b4MFD7UnOcxIHYTkzDP7TJjGyh4FVIxF9/1EEydqDtwHH4 TOi3WbdFSHggQ== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:32 +0100 Subject: [PATCH net-next v4 13/16] net: airoha: Introduce Airoha NPU support MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-13-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073523_067568_7EC90768 X-CRM114-Status: GOOD ( 23.00 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Packet Processor Engine (PPE) module available on EN7581 SoC populates the PPE table with 5-tuples flower rules learned from traffic forwarded between the GDM ports connected to the Packet Switch Engine (PSE) module. The airoha_eth driver can enable hw acceleration of learned 5-tuples rules if the user configure them in netfilter flowtable (netfilter flowtable support will be added with subsequent patches). airoha_eth driver configures and collects data from the PPE module via a Network Processor Unit (NPU) RISC-V module available on the EN7581 SoC. Introduce basic support for Airoha NPU moudle. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/Kconfig | 9 + drivers/net/ethernet/airoha/Makefile | 1 + drivers/net/ethernet/airoha/airoha_eth.h | 2 + drivers/net/ethernet/airoha/airoha_npu.c | 521 +++++++++++++++++++++++++++++++ drivers/net/ethernet/airoha/airoha_npu.h | 28 ++ 5 files changed, 561 insertions(+) diff --git a/drivers/net/ethernet/airoha/Kconfig b/drivers/net/ethernet/airoha/Kconfig index b6a131845f13b23a12464cfc281e3abe5699389f..1a4cf6a259f67adf9c6115e05de16b0585092573 100644 --- a/drivers/net/ethernet/airoha/Kconfig +++ b/drivers/net/ethernet/airoha/Kconfig @@ -7,9 +7,18 @@ config NET_VENDOR_AIROHA if NET_VENDOR_AIROHA +config NET_AIROHA_NPU + tristate "Airoha NPU support" + select WANT_DEV_COREDUMP + select REGMAP_MMIO + help + This driver supports Airoha Network Processor (NPU) available + on the Airoha Soc family. 
+ config NET_AIROHA tristate "Airoha SoC Gigabit Ethernet support" depends on NET_DSA || !NET_DSA + select NET_AIROHA_NPU select PAGE_POOL help This driver supports the gigabit ethernet MACs in the diff --git a/drivers/net/ethernet/airoha/Makefile b/drivers/net/ethernet/airoha/Makefile index 73a6f3680a4c4ce92ee785d83b905d76a63421df..fcd3fc7759485f772dc3247efd51054320626f32 100644 --- a/drivers/net/ethernet/airoha/Makefile +++ b/drivers/net/ethernet/airoha/Makefile @@ -4,3 +4,4 @@ # obj-$(CONFIG_NET_AIROHA) += airoha_eth.o +obj-$(CONFIG_NET_AIROHA_NPU) += airoha_npu.o diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 44834227a58982d4491f3d8174b9e0bea542f785..7be932a8e8d2e8e9221b8a5909ac5d00329d550c 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -240,6 +240,8 @@ struct airoha_eth { unsigned long state; void __iomem *fe_regs; + struct airoha_npu __rcu *npu; + struct reset_control_bulk_data rsts[AIROHA_MAX_NUM_RSTS]; struct reset_control_bulk_data xsi_rsts[AIROHA_MAX_NUM_XSI_RSTS]; diff --git a/drivers/net/ethernet/airoha/airoha_npu.c b/drivers/net/ethernet/airoha/airoha_npu.c new file mode 100644 index 0000000000000000000000000000000000000000..3620f4e771e659de4c91b40575e18d689199fb4c --- /dev/null +++ b/drivers/net/ethernet/airoha/airoha_npu.c @@ -0,0 +1,521 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025 AIROHA Inc + * Author: Lorenzo Bianconi + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "airoha_npu.h" + +#define NPU_EN7581_FIRMWARE_DATA "airoha/en7581_npu_data.bin" +#define NPU_EN7581_FIRMWARE_RV32 "airoha/en7581_npu_rv32.bin" +#define NPU_EN7581_FIRMWARE_RV32_MAX_SIZE 0x200000 +#define NPU_EN7581_FIRMWARE_DATA_MAX_SIZE 0x10000 +#define NPU_DUMP_SIZE 512 + +#define REG_NPU_LOCAL_SRAM 0x0 + +#define NPU_PC_BASE_ADDR 0x305000 +#define REG_PC_DBG(_n) (0x305000 + ((_n) * 0x100)) + +#define NPU_CLUSTER_BASE_ADDR 0x306000 + +#define REG_CR_BOOT_TRIGGER (NPU_CLUSTER_BASE_ADDR + 0x000) +#define REG_CR_BOOT_CONFIG (NPU_CLUSTER_BASE_ADDR + 0x004) +#define REG_CR_BOOT_BASE(_n) (NPU_CLUSTER_BASE_ADDR + 0x020 + ((_n) << 2)) + +#define NPU_MBOX_BASE_ADDR 0x30c000 + +#define REG_CR_MBOX_INT_STATUS (NPU_MBOX_BASE_ADDR + 0x000) +#define MBOX_INT_STATUS_MASK BIT(8) + +#define REG_CR_MBOX_INT_MASK(_n) (NPU_MBOX_BASE_ADDR + 0x004 + ((_n) << 2)) +#define REG_CR_MBQ0_CTRL(_n) (NPU_MBOX_BASE_ADDR + 0x030 + ((_n) << 2)) +#define REG_CR_MBQ8_CTRL(_n) (NPU_MBOX_BASE_ADDR + 0x0b0 + ((_n) << 2)) +#define REG_CR_NPU_MIB(_n) (NPU_MBOX_BASE_ADDR + 0x140 + ((_n) << 2)) + +#define NPU_TIMER_BASE_ADDR 0x310100 +#define REG_WDT_TIMER_CTRL(_n) (NPU_TIMER_BASE_ADDR + ((_n) * 0x100)) +#define WDT_EN_MASK BIT(25) +#define WDT_INTR_MASK BIT(21) + +enum { + NPU_OP_SET = 1, + NPU_OP_SET_NO_WAIT, + NPU_OP_GET, + NPU_OP_GET_NO_WAIT, +}; + +enum { + NPU_FUNC_WIFI, + NPU_FUNC_TUNNEL, + NPU_FUNC_NOTIFY, + NPU_FUNC_DBA, + NPU_FUNC_TR471, + NPU_FUNC_PPE, +}; + +enum { + NPU_MBOX_ERROR, + NPU_MBOX_SUCCESS, +}; + +enum { + PPE_FUNC_SET_WAIT, + PPE_FUNC_SET_WAIT_HWNAT_INIT, + PPE_FUNC_SET_WAIT_HWNAT_DEINIT, + PPE_FUNC_SET_WAIT_API, +}; + +enum { + PPE2_SRAM_SET_ENTRY, + PPE_SRAM_SET_ENTRY, + PPE_SRAM_SET_VAL, + PPE_SRAM_RESET_VAL, +}; + +enum { + QDMA_WAN_ETHER = 1, + QDMA_WAN_PON_XDSL, +}; + +#define MBOX_MSG_FUNC_ID GENMASK(14, 11) +#define MBOX_MSG_STATIC_BUF BIT(5) +#define MBOX_MSG_STATUS GENMASK(4, 2) +#define MBOX_MSG_DONE BIT(1) +#define MBOX_MSG_WAIT_RSP 
BIT(0) + +#define PPE_TYPE_L2B_IPV4 2 +#define PPE_TYPE_L2B_IPV4_IPV6 3 + +struct ppe_mbox_data { + u32 func_type; + u32 func_id; + union { + struct { + u8 cds; + u8 xpon_hal_api; + u8 wan_xsi; + u8 ct_joyme4; + int ppe_type; + int wan_mode; + int wan_sel; + } init_info; + struct { + int func_id; + u32 size; + u32 data; + } set_info; + }; +}; + +static int airoha_npu_send_msg(struct airoha_npu *npu, int func_id, + void *p, int size) +{ + u16 core = 0; /* FIXME */ + u32 val, offset = core << 4; + dma_addr_t dma_addr; + void *addr; + int ret; + + addr = kzalloc(size, GFP_ATOMIC | GFP_DMA); + if (!addr) + return -ENOMEM; + + memcpy(addr, p, size); + dma_addr = dma_map_single(npu->dev, addr, size, DMA_TO_DEVICE); + ret = dma_mapping_error(npu->dev, dma_addr); + if (ret) + goto out; + + spin_lock_bh(&npu->cores[core].lock); + + regmap_write(npu->regmap, REG_CR_MBQ0_CTRL(0) + offset, dma_addr); + regmap_write(npu->regmap, REG_CR_MBQ0_CTRL(1) + offset, size); + regmap_read(npu->regmap, REG_CR_MBQ0_CTRL(2) + offset, &val); + regmap_write(npu->regmap, REG_CR_MBQ0_CTRL(2) + offset, val + 1); + val = FIELD_PREP(MBOX_MSG_FUNC_ID, func_id) | MBOX_MSG_WAIT_RSP; + regmap_write(npu->regmap, REG_CR_MBQ0_CTRL(3) + offset, val); + + ret = regmap_read_poll_timeout_atomic(npu->regmap, + REG_CR_MBQ0_CTRL(3) + offset, + val, (val & MBOX_MSG_DONE), + 100, 100 * MSEC_PER_SEC); + if (!ret && FIELD_GET(MBOX_MSG_STATUS, val) != NPU_MBOX_SUCCESS) + ret = -EINVAL; + + spin_unlock_bh(&npu->cores[core].lock); + + dma_unmap_single(npu->dev, dma_addr, size, DMA_TO_DEVICE); +out: + kfree(addr); + + return ret; +} + +static int airoha_npu_run_firmware(struct device *dev, void __iomem *base, + struct reserved_mem *rmem) +{ + const struct firmware *fw; + void __iomem *addr; + int ret; + + ret = request_firmware(&fw, NPU_EN7581_FIRMWARE_RV32, dev); + if (ret) + return ret; + + if (fw->size > NPU_EN7581_FIRMWARE_RV32_MAX_SIZE) { + dev_err(dev, "%s: fw size too overlimit (%zu)\n", + NPU_EN7581_FIRMWARE_RV32, fw->size); + ret = -E2BIG; + goto out; + } + + addr = devm_ioremap(dev, rmem->base, rmem->size); + if (!addr) { + ret = -ENOMEM; + goto out; + } + + memcpy_toio(addr, fw->data, fw->size); + release_firmware(fw); + + ret = request_firmware(&fw, NPU_EN7581_FIRMWARE_DATA, dev); + if (ret) + return ret; + + if (fw->size > NPU_EN7581_FIRMWARE_DATA_MAX_SIZE) { + dev_err(dev, "%s: fw size too overlimit (%zu)\n", + NPU_EN7581_FIRMWARE_DATA, fw->size); + ret = -E2BIG; + goto out; + } + + memcpy_toio(base + REG_NPU_LOCAL_SRAM, fw->data, fw->size); +out: + release_firmware(fw); + + return ret; +} + +static irqreturn_t airoha_npu_mbox_handler(int irq, void *npu_instance) +{ + struct airoha_npu *npu = npu_instance; + + /* clear mbox interrupt status */ + regmap_write(npu->regmap, REG_CR_MBOX_INT_STATUS, + MBOX_INT_STATUS_MASK); + + /* acknowledge npu */ + regmap_update_bits(npu->regmap, REG_CR_MBQ8_CTRL(3), + MBOX_MSG_STATUS | MBOX_MSG_DONE, MBOX_MSG_DONE); + + return IRQ_HANDLED; +} + +static void airoha_npu_wdt_work(struct work_struct *work) +{ + struct airoha_npu_core *core; + struct airoha_npu *npu; + void *dump; + u32 val[3]; + int c; + + core = container_of(work, struct airoha_npu_core, wdt_work); + npu = core->npu; + + dump = vzalloc(NPU_DUMP_SIZE); + if (!dump) + return; + + c = core - &npu->cores[0]; + regmap_bulk_read(npu->regmap, REG_PC_DBG(c), val, ARRAY_SIZE(val)); + snprintf(dump, NPU_DUMP_SIZE, "PC: %08x SP: %08x LR: %08x\n", + val[0], val[1], val[2]); + + dev_coredumpv(npu->dev, dump, NPU_DUMP_SIZE, GFP_KERNEL); +} + 
+static irqreturn_t airoha_npu_wdt_handler(int irq, void *core_instance) +{ + struct airoha_npu_core *core = core_instance; + struct airoha_npu *npu = core->npu; + int c = core - &npu->cores[0]; + u32 val; + + regmap_set_bits(npu->regmap, REG_WDT_TIMER_CTRL(c), WDT_INTR_MASK); + if (!regmap_read(npu->regmap, REG_WDT_TIMER_CTRL(c), &val) && + FIELD_GET(WDT_EN_MASK, val)) + schedule_work(&core->wdt_work); + + return IRQ_HANDLED; +} + +static int airoha_npu_ppe_init(struct airoha_npu *npu) +{ + struct ppe_mbox_data ppe_data = { + .func_type = NPU_OP_SET, + .func_id = PPE_FUNC_SET_WAIT_HWNAT_INIT, + .init_info = { + .ppe_type = PPE_TYPE_L2B_IPV4_IPV6, + .wan_mode = QDMA_WAN_ETHER, + }, + }; + + return airoha_npu_send_msg(npu, NPU_FUNC_PPE, &ppe_data, + sizeof(struct ppe_mbox_data)); +} + +int airoha_npu_ppe_deinit(struct airoha_npu *npu) +{ + struct ppe_mbox_data ppe_data = { + .func_type = NPU_OP_SET, + .func_id = PPE_FUNC_SET_WAIT_HWNAT_DEINIT, + }; + + return airoha_npu_send_msg(npu, NPU_FUNC_PPE, &ppe_data, + sizeof(struct ppe_mbox_data)); +} +EXPORT_SYMBOL_GPL(airoha_npu_ppe_deinit); + +int airoha_npu_ppe_flush_sram_entries(struct airoha_npu *npu, + dma_addr_t foe_addr, + int sram_num_entries) +{ + struct ppe_mbox_data ppe_data = { + .func_type = NPU_OP_SET, + .func_id = PPE_FUNC_SET_WAIT_API, + .set_info = { + .func_id = PPE_SRAM_RESET_VAL, + .data = foe_addr, + .size = sram_num_entries, + }, + }; + + return airoha_npu_send_msg(npu, NPU_FUNC_PPE, &ppe_data, + sizeof(struct ppe_mbox_data)); +} +EXPORT_SYMBOL_GPL(airoha_npu_ppe_flush_sram_entries); + +int airoha_npu_foe_commit_entry(struct airoha_npu *npu, dma_addr_t foe_addr, + u32 entry_size, u32 hash, bool ppe2) +{ + struct ppe_mbox_data ppe_data = { + .func_type = NPU_OP_SET, + .func_id = PPE_FUNC_SET_WAIT_API, + .set_info = { + .data = foe_addr, + .size = entry_size, + }, + }; + int err; + + ppe_data.set_info.func_id = ppe2 ? 
PPE2_SRAM_SET_ENTRY + : PPE_SRAM_SET_ENTRY; + + err = airoha_npu_send_msg(npu, NPU_FUNC_PPE, &ppe_data, + sizeof(struct ppe_mbox_data)); + if (err) + return err; + + ppe_data.set_info.func_id = PPE_SRAM_SET_VAL; + ppe_data.set_info.data = hash; + ppe_data.set_info.size = sizeof(u32); + + return airoha_npu_send_msg(npu, NPU_FUNC_PPE, &ppe_data, + sizeof(struct ppe_mbox_data)); +} +EXPORT_SYMBOL_GPL(airoha_npu_foe_commit_entry); + +struct airoha_npu *airoha_npu_get(struct device *dev) +{ + struct platform_device *pdev; + struct device_node *np; + struct airoha_npu *npu; + + np = of_parse_phandle(dev->of_node, "airoha,npu", 0); + if (!np) + return ERR_PTR(-ENODEV); + + pdev = of_find_device_by_node(np); + of_node_put(np); + + if (!pdev) { + dev_err(dev, "cannot find device node %s\n", np->name); + return ERR_PTR(-ENODEV); + } + + if (!try_module_get(THIS_MODULE)) { + dev_err(dev, "failed to get the device driver module\n"); + npu = ERR_PTR(-ENODEV); + goto error_pdev_put; + } + + npu = platform_get_drvdata(pdev); + if (!npu) { + npu = ERR_PTR(-ENODEV); + goto error_module_put; + } + + if (!device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_SUPPLIER)) { + dev_err(&pdev->dev, + "failed to create device link to consumer %s\n", + dev_name(dev)); + npu = ERR_PTR(-EINVAL); + goto error_module_put; + } + + return npu; + +error_module_put: + module_put(THIS_MODULE); +error_pdev_put: + platform_device_put(pdev); + + return npu; +} +EXPORT_SYMBOL_GPL(airoha_npu_get); + +void airoha_npu_put(struct airoha_npu *npu) +{ + module_put(THIS_MODULE); + put_device(npu->dev); +} +EXPORT_SYMBOL_GPL(airoha_npu_put); + +static const struct of_device_id of_airoha_npu_match[] = { + { .compatible = "airoha,en7581-npu" }, + { /* sentinel */ } +}; +MODULE_DEVICE_TABLE(of, of_airoha_npu_match); + +static const struct regmap_config regmap_config = { + .name = "npu", + .reg_bits = 32, + .val_bits = 32, + .reg_stride = 4, + .disable_locking = true, +}; + +static int airoha_npu_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct reserved_mem *rmem; + struct airoha_npu *npu; + struct device_node *np; + void __iomem *base; + int i, irq, err; + + base = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(base)) + return PTR_ERR(base); + + npu = devm_kzalloc(dev, sizeof(*npu), GFP_KERNEL); + if (!npu) + return -ENOMEM; + + npu->dev = dev; + npu->regmap = devm_regmap_init_mmio(dev, base, ®map_config); + if (IS_ERR(npu->regmap)) + return PTR_ERR(npu->regmap); + + np = of_parse_phandle(dev->of_node, "memory-region", 0); + if (!np) + return -ENODEV; + + rmem = of_reserved_mem_lookup(np); + of_node_put(np); + + if (!rmem) + return -ENODEV; + + irq = platform_get_irq(pdev, 0); + if (irq < 0) + return irq; + + err = devm_request_irq(dev, irq, airoha_npu_mbox_handler, + IRQF_SHARED, "airoha-npu-mbox", npu); + if (err) + return err; + + for (i = 0; i < ARRAY_SIZE(npu->cores); i++) { + struct airoha_npu_core *core = &npu->cores[i]; + + spin_lock_init(&core->lock); + core->npu = npu; + + irq = platform_get_irq(pdev, i + 1); + if (irq < 0) + return err; + + err = devm_request_irq(dev, irq, airoha_npu_wdt_handler, + IRQF_SHARED, "airoha-npu-wdt", core); + if (err) + return err; + + INIT_WORK(&core->wdt_work, airoha_npu_wdt_work); + } + + if (dma_set_coherent_mask(dev, 0xbfffffff)) + dev_warn(dev, "failed coherent DMA configuration\n"); + + err = airoha_npu_run_firmware(dev, base, rmem); + if (err) + return err; + + regmap_write(npu->regmap, REG_CR_NPU_MIB(10), + rmem->base + 
NPU_EN7581_FIRMWARE_RV32_MAX_SIZE); + regmap_write(npu->regmap, REG_CR_NPU_MIB(11), 0x40000); /* SRAM 256K */ + regmap_write(npu->regmap, REG_CR_NPU_MIB(12), 0); + regmap_write(npu->regmap, REG_CR_NPU_MIB(21), 1); + msleep(100); + + /* setting booting address */ + for (i = 0; i < NPU_NUM_CORES; i++) + regmap_write(npu->regmap, REG_CR_BOOT_BASE(i), rmem->base); + usleep_range(1000, 2000); + + /* enable NPU cores */ + /* do not start core3 since it is used for WiFi offloading */ + regmap_write(npu->regmap, REG_CR_BOOT_CONFIG, 0xf7); + regmap_write(npu->regmap, REG_CR_BOOT_TRIGGER, 0x1); + msleep(100); + + err = airoha_npu_ppe_init(npu); + if (err) + return err; + + platform_set_drvdata(pdev, npu); + + return 0; +} + +static void airoha_npu_remove(struct platform_device *pdev) +{ + struct airoha_npu *npu = platform_get_drvdata(pdev); + int i; + + for (i = 0; i < ARRAY_SIZE(npu->cores); i++) + cancel_work_sync(&npu->cores[i].wdt_work); +} + +static struct platform_driver airoha_npu_driver = { + .probe = airoha_npu_probe, + .remove = airoha_npu_remove, + .driver = { + .name = KBUILD_MODNAME, + .of_match_table = of_airoha_npu_match, + }, +}; +module_platform_driver(airoha_npu_driver); + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Lorenzo Bianconi "); +MODULE_DESCRIPTION("Airoha Network Processor Unit driver"); diff --git a/drivers/net/ethernet/airoha/airoha_npu.h b/drivers/net/ethernet/airoha/airoha_npu.h new file mode 100644 index 0000000000000000000000000000000000000000..819cc8c1d6b61d2e8cad22ef66acb7e4d0030c1d --- /dev/null +++ b/drivers/net/ethernet/airoha/airoha_npu.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2025 AIROHA Inc + * Author: Lorenzo Bianconi + */ + +#define NPU_NUM_CORES 8 + +struct airoha_npu { + struct device *dev; + struct regmap *regmap; + + struct airoha_npu_core { + struct airoha_npu *npu; + /* protect concurrent npu memory accesses */ + spinlock_t lock; + struct work_struct wdt_work; + } cores[NPU_NUM_CORES]; +}; + +struct airoha_npu *airoha_npu_get(struct device *dev); +void airoha_npu_put(struct airoha_npu *npu); +int airoha_npu_ppe_deinit(struct airoha_npu *npu); +int airoha_npu_ppe_flush_sram_entries(struct airoha_npu *npu, + dma_addr_t foe_addr, + int sram_num_entries); +int airoha_npu_foe_commit_entry(struct airoha_npu *npu, dma_addr_t foe_addr, + u32 entry_size, u32 hash, bool ppe2); From patchwork Thu Feb 13 15:34:33 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973523 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 038CFC021A0 for ; Thu, 13 Feb 2025 15:58:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=qUdlim33+u65/J8rW+5zwX0kmk4WF69SbCida1HEt+8=; b=fFhDvvuy4VvzjCRgguxtgDMwL6 
dlu91BQCURS91x9uZhqO7VvU08emCrnBVQLsgXBqBK7nPM7J04OKCD6Ej329Yxyaj1h8M6mXP8CwZ A5VZtTcmtNaA98CX/77JQb/fbolsV9ININQzaJ/ghW7Ig45YDX4qxoFCbBqWKqCHxI93D+lJpzBAY FU2HChIby/2DwKchBf4h0SfVJuP22a5vQC8XyvpkesbkuBkCCNnrLQwNSsK8Ke/R9L6pr9mBCl3TW MknteIywtcmKZntjV6hGp2MAVODltUILjI3VY/4qE4RfWZ/hKBmompG6qVsYhqMDHRtoq0minHtJh S3BZOyJQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibbJ-0000000BeIq-3qPJ; Thu, 13 Feb 2025 15:58:05 +0000 Received: from nyc.source.kernel.org ([2604:1380:45d1:ec00::3]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibFN-0000000BYqS-3Oik; Thu, 13 Feb 2025 15:35:27 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id A6A98A41CDB; Thu, 13 Feb 2025 15:33:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 036FBC4CED1; Thu, 13 Feb 2025 15:35:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460924; bh=aehFxdrkedSnj0PqRncAWfEGnxJSMY9snUWkmiOfpaI=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=jhiNG/pBh2DjYUatInhjHlXAmO030xlC1Oc8aKxWAp442j07cXasuUVqFEngqazJd NTNRySmRRD9zfr9XmNYsIfFD5QiqleBOvyEgrtMlxDWUd4+F3joAlgBZSknxtivxJa q2XhtAePf+6y0Dww3kzeUcKqPPfV9/nUzV4oiFC3N+DIydod9JOd4P4Pl/vmXyJDZ3 tARi1axyEqnVWVMnf7jghP3lzaxkNhkmJIXphB7P9DegxXVHXNkcoRWho1xGAtTUTl EspsEWaj2fHdyBavDWyCets9NdBjup7pZ+PmVp078V3hVfDbYsuWy1Ka4CdWn5PLhc aOlAA+YLu7ZFg== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:33 +0100 Subject: [PATCH net-next v4 14/16] net: airoha: Introduce flowtable offload support MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-14-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. 
Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073526_087498_E91701B0 X-CRM114-Status: GOOD ( 23.55 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Introduce netfilter flowtable integration in order to allow airoha_eth driver to offload 5-tuple flower rules learned by the PPE module if the user accelerates them using a nft configuration similar to the one reported below: table inet filter { flowtable ft { hook ingress priority filter devices = { lan1, lan2, lan3, lan4, eth1 } flags offload; } chain forward { type filter hook forward priority filter; policy accept; meta l4proto { tcp, udp } flow add @ft } } Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/Makefile | 3 +- drivers/net/ethernet/airoha/airoha_eth.c | 60 +- drivers/net/ethernet/airoha/airoha_eth.h | 252 +++++++++ drivers/net/ethernet/airoha/airoha_ppe.c | 884 ++++++++++++++++++++++++++++++ drivers/net/ethernet/airoha/airoha_regs.h | 107 +++- 5 files changed, 1299 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/airoha/Makefile b/drivers/net/ethernet/airoha/Makefile index fcd3fc7759485f772dc3247efd51054320626f32..6deff2f16229a7638be0737caa06282ba63183a4 100644 --- a/drivers/net/ethernet/airoha/Makefile +++ b/drivers/net/ethernet/airoha/Makefile @@ -3,5 +3,6 @@ # Airoha for the Mediatek SoCs built-in ethernet macs # -obj-$(CONFIG_NET_AIROHA) += airoha_eth.o +obj-$(CONFIG_NET_AIROHA) += airoha-eth.o +airoha-eth-y := airoha_eth.o airoha_ppe.o obj-$(CONFIG_NET_AIROHA_NPU) += airoha_npu.o diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 31e5f0368faa13a120ba01f7413cf5c23761c143..3065f418c92ad51b4042433110a678b39702e9c7 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -8,7 +8,6 @@ #include #include #include -#include #include #include #include @@ -619,6 +618,7 @@ static int airoha_qdma_rx_process(struct airoha_queue *q, int budget) while (done < budget) { struct airoha_queue_entry *e = &q->entry[q->tail]; struct airoha_qdma_desc *desc = &q->desc[q->tail]; + u32 hash, reason, msg1 = le32_to_cpu(desc->msg1); dma_addr_t dma_addr = le32_to_cpu(desc->addr); u32 desc_ctrl = le32_to_cpu(desc->ctrl); struct airoha_gdm_port *port; @@ -681,6 +681,15 @@ static int airoha_qdma_rx_process(struct airoha_queue *q, int budget) &port->dsa_meta[sptag]->dst); } + hash = FIELD_GET(AIROHA_RXD4_FOE_ENTRY, msg1); + if (hash != AIROHA_RXD4_FOE_ENTRY) + skb_set_hash(skb, jhash_1word(hash, 0), + PKT_HASH_TYPE_L4); + + reason = FIELD_GET(AIROHA_RXD4_PPE_CPU_REASON, msg1); + if (reason == PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED) + airoha_ppe_check_skb(eth->ppe, hash); + napi_gro_receive(&q->napi, skb); done++; @@ -1301,6 +1310,10 @@ static int airoha_hw_init(struct platform_device *pdev, return err; } + err = airoha_ppe_init(eth); + if (err) + return err; + set_bit(DEV_STATE_INITIALIZED, ð->state); return 0; @@ -2166,6 +2179,47 @@ static int airoha_tc_htb_alloc_leaf_queue(struct 
airoha_gdm_port *port, return 0; } +static int airoha_dev_setup_tc_block(struct airoha_gdm_port *port, + struct flow_block_offload *f) +{ + flow_setup_cb_t *cb = airoha_ppe_setup_tc_block_cb; + static LIST_HEAD(block_cb_list); + struct flow_block_cb *block_cb; + + if (f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS) + return -EOPNOTSUPP; + + f->driver_block_list = &block_cb_list; + switch (f->command) { + case FLOW_BLOCK_BIND: + block_cb = flow_block_cb_lookup(f->block, cb, port->dev); + if (block_cb) { + flow_block_cb_incref(block_cb); + return 0; + } + block_cb = flow_block_cb_alloc(cb, port->dev, port->dev, NULL); + if (IS_ERR(block_cb)) + return PTR_ERR(block_cb); + + flow_block_cb_incref(block_cb); + flow_block_cb_add(block_cb, f); + list_add_tail(&block_cb->driver_list, &block_cb_list); + return 0; + case FLOW_BLOCK_UNBIND: + block_cb = flow_block_cb_lookup(f->block, cb, port->dev); + if (!block_cb) + return -ENOENT; + + if (!flow_block_cb_decref(block_cb)) { + flow_block_cb_remove(block_cb, f); + list_del(&block_cb->driver_list); + } + return 0; + default: + return -EOPNOTSUPP; + } +} + static void airoha_tc_remove_htb_queue(struct airoha_gdm_port *port, int queue) { struct net_device *dev = port->dev; @@ -2249,6 +2303,9 @@ static int airoha_dev_tc_setup(struct net_device *dev, enum tc_setup_type type, return airoha_tc_setup_qdisc_ets(port, type_data); case TC_SETUP_QDISC_HTB: return airoha_tc_setup_qdisc_htb(port, type_data); + case TC_SETUP_BLOCK: + case TC_SETUP_FT: + return airoha_dev_setup_tc_block(port, type_data); default: return -EOPNOTSUPP; } @@ -2505,6 +2562,7 @@ static void airoha_remove(struct platform_device *pdev) } free_netdev(eth->napi_dev); + airoha_ppe_deinit(eth); platform_set_drvdata(pdev, NULL); } diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 7be932a8e8d2e8e9221b8a5909ac5d00329d550c..60f5dd9908b41f95b5b627640ff6b25c8e3e1706 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -12,6 +12,7 @@ #include #include #include +#include #define AIROHA_MAX_NUM_GDM_PORTS 4 #define AIROHA_MAX_NUM_QDMA 2 @@ -44,6 +45,15 @@ #define QDMA_METER_IDX(_n) ((_n) & 0xff) #define QDMA_METER_GROUP(_n) (((_n) >> 8) & 0x3) +#define PPE_NUM 2 +#define PPE1_SRAM_NUM_ENTRIES (8 * 1024) +#define PPE_SRAM_NUM_ENTRIES (2 * PPE1_SRAM_NUM_ENTRIES) +#define PPE_DRAM_NUM_ENTRIES (16 * 1024) +#define PPE_NUM_ENTRIES (PPE_SRAM_NUM_ENTRIES + PPE_DRAM_NUM_ENTRIES) +#define PPE_HASH_MASK (PPE_NUM_ENTRIES - 1) +#define PPE_ENTRY_SIZE 80 +#define PPE_RAM_NUM_ENTRIES_SHIFT(_n) (__ffs((_n) >> 10)) + #define MTK_HDR_LEN 4 #define MTK_HDR_XMIT_TAGGED_TPID_8100 1 #define MTK_HDR_XMIT_TAGGED_TPID_88A8 2 @@ -195,6 +205,226 @@ struct airoha_hw_stats { u64 rx_len[7]; }; +enum { + PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED = 0x0f, +}; + +enum { + AIROHA_FOE_STATE_INVALID, + AIROHA_FOE_STATE_UNBIND, + AIROHA_FOE_STATE_BIND, + AIROHA_FOE_STATE_FIN +}; + +enum { + PPE_PKT_TYPE_IPV4_HNAPT = 0, + PPE_PKT_TYPE_IPV4_ROUTE = 1, + PPE_PKT_TYPE_BRIDGE = 2, + PPE_PKT_TYPE_IPV4_DSLITE = 3, + PPE_PKT_TYPE_IPV6_ROUTE_3T = 4, + PPE_PKT_TYPE_IPV6_ROUTE_5T = 5, + PPE_PKT_TYPE_IPV6_6RD = 7, +}; + +#define AIROHA_FOE_MAC_PPPOE_ID GENMASK(15, 0) +#define AIROHA_FOE_MAC_SMAC_ID GENMASK(20, 16) + +struct airoha_foe_mac_info_common { + u16 vlan1; + u16 etype; + + u32 dest_mac_hi; + + u16 vlan2; + u16 dest_mac_lo; + + u32 src_mac_hi; +}; + +struct airoha_foe_mac_info { + struct airoha_foe_mac_info_common common; + + u16 
pppoe_id; + u16 src_mac_lo; +}; + +#define AIROHA_FOE_IB1_UNBIND_TIMESTAMP GENMASK(7, 0) +#define AIROHA_FOE_IB1_UNBIND_PACKETS GENMASK(23, 8) +#define AIROHA_FOE_IB1_UNBIND_PREBIND BIT(24) + +#define AIROHA_FOE_IB1_BIND_TIMESTAMP GENMASK(14, 0) +#define AIROHA_FOE_IB1_BIND_KEEPALIVE BIT(15) +#define AIROHA_FOE_IB1_BIND_VLAN_LAYER GENMASK(18, 16) +#define AIROHA_FOE_IB1_BIND_PPPOE BIT(19) +#define AIROHA_FOE_IB1_BIND_VLAN_TAG BIT(20) +#define AIROHA_FOE_IB1_BIND_PKT_SAMPLE BIT(21) +#define AIROHA_FOE_IB1_BIND_CACHE BIT(22) +#define AIROHA_FOE_IB1_BIND_TUNNEL_DECAP BIT(23) +#define AIROHA_FOE_IB1_BIND_TTL BIT(24) +#define AIROHA_FOE_IB1_PACKET_TYPE GENMASK(27, 25) +#define AIROHA_FOE_IB1_STATE GENMASK(29, 28) +#define AIROHA_FOE_IB1_UDP BIT(30) +#define AIROHA_FOE_IB1_STATIC BIT(31) + +#define AIROHA_FOE_IB2_NBQ GENMASK(4, 0) +#define AIROHA_FOE_IB2_PSE_PORT GENMASK(8, 5) +#define AIROHA_FOE_IB2_PSE_QOS BIT(9) +#define AIROHA_FOE_IB2_FAST_PATH BIT(10) +#define AIROHA_FOE_IB2_MULTICAST BIT(11) +#define AIROHA_FOE_IB2_PCP BIT(12) +#define AIROHA_FOE_IB2_PORT_AG GENMASK(23, 13) +#define AIROHA_FOE_IB2_DSCP GENMASK(31, 24) + +#define AIROHA_FOE_TUNNEL_ID GENMASK(5, 0) +#define AIROHA_FOE_TUNNEL BIT(6) +#define AIROHA_FOE_DPI BIT(7) +#define AIROHA_FOE_QID GENMASK(10, 8) +#define AIROHA_FOE_CHANNEL GENMASK(15, 11) +#define AIROHA_FOE_SHAPER_ID GENMASK(23, 16) +#define AIROHA_FOE_ACTDP GENMASK(31, 24) + +struct airoha_foe_bridge { + u32 dest_mac_hi; + + u16 src_mac_hi; + u16 dest_mac_lo; + + u32 src_mac_lo; + + u32 ib2; + + u32 rsv[5]; + + u32 data; + + struct airoha_foe_mac_info l2; +}; + +struct airoha_foe_ipv4_tuple { + u32 src_ip; + u32 dest_ip; + union { + struct { + u16 dest_port; + u16 src_port; + }; + struct { + u8 protocol; + u8 _pad[3]; /* fill with 0xa5a5a5 */ + }; + u32 ports; + }; +}; + +struct airoha_foe_ipv4 { + struct airoha_foe_ipv4_tuple orig_tuple; + + u32 ib2; + + struct airoha_foe_ipv4_tuple new_tuple; + + u32 rsv[2]; + + u32 data; + + struct airoha_foe_mac_info l2; +}; + +struct airoha_foe_ipv4_dslite { + struct airoha_foe_ipv4_tuple ip4; + + u32 ib2; + + u8 flow_label[3]; + u8 priority; + + u32 rsv[4]; + + u32 data; + + struct airoha_foe_mac_info l2; +}; + +struct airoha_foe_ipv6 { + u32 src_ip[4]; + u32 dest_ip[4]; + + union { + struct { + u16 dest_port; + u16 src_port; + }; + struct { + u8 protocol; + u8 pad[3]; + }; + u32 ports; + }; + + u32 data; + + u32 ib2; + + struct airoha_foe_mac_info_common l2; +}; + +struct airoha_foe_entry { + union { + struct { + u32 ib1; + union { + struct airoha_foe_bridge bridge; + struct airoha_foe_ipv4 ipv4; + struct airoha_foe_ipv4_dslite dslite; + struct airoha_foe_ipv6 ipv6; + DECLARE_FLEX_ARRAY(u32, d); + }; + }; + u8 data[PPE_ENTRY_SIZE]; + }; +}; + +struct airoha_flow_data { + struct ethhdr eth; + + union { + struct { + __be32 src_addr; + __be32 dst_addr; + } v4; + + struct { + struct in6_addr src_addr; + struct in6_addr dst_addr; + } v6; + }; + + __be16 src_port; + __be16 dst_port; + + u16 vlan_in; + + struct { + u16 id; + __be16 proto; + u8 num; + } vlan; + struct { + u16 sid; + u8 num; + } pppoe; +}; + +struct airoha_flow_table_entry { + struct hlist_node list; + + struct airoha_foe_entry data; + u32 hash; + + struct rhash_head node; + unsigned long cookie; +}; + struct airoha_qdma { struct airoha_eth *eth; void __iomem *regs; @@ -234,6 +464,19 @@ struct airoha_gdm_port { struct metadata_dst *dsa_meta[AIROHA_MAX_DSA_PORTS]; }; +#define AIROHA_RXD4_FOE_ENTRY GENMASK(15, 0) +#define AIROHA_RXD4_PPE_CPU_REASON GENMASK(20, 16) + 
+struct airoha_ppe { + struct airoha_eth *eth; + + void *foe; + dma_addr_t foe_dma; + + struct hlist_head *foe_flow; + u16 foe_check_time[PPE_NUM_ENTRIES]; +}; + struct airoha_eth { struct device *dev; @@ -242,6 +485,9 @@ struct airoha_eth { struct airoha_npu __rcu *npu; + struct airoha_ppe *ppe; + struct rhashtable flow_table; + struct reset_control_bulk_data rsts[AIROHA_MAX_NUM_RSTS]; struct reset_control_bulk_data xsi_rsts[AIROHA_MAX_NUM_XSI_RSTS]; @@ -277,4 +523,10 @@ u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val); #define airoha_qdma_clear(qdma, offset, val) \ airoha_rmw((qdma)->regs, (offset), (val), 0) +void airoha_ppe_check_skb(struct airoha_ppe *ppe, u16 hash); +int airoha_ppe_setup_tc_block_cb(enum tc_setup_type type, void *type_data, + void *cb_priv); +int airoha_ppe_init(struct airoha_eth *eth); +void airoha_ppe_deinit(struct airoha_eth *eth); + #endif /* AIROHA_ETH_H */ diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c new file mode 100644 index 0000000000000000000000000000000000000000..346bbc9408628067102f5988a1e8f22237110747 --- /dev/null +++ b/drivers/net/ethernet/airoha/airoha_ppe.c @@ -0,0 +1,884 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025 AIROHA Inc + * Author: Lorenzo Bianconi + */ + +#include +#include +#include +#include +#include + +#include "airoha_npu.h" +#include "airoha_regs.h" +#include "airoha_eth.h" + +static DEFINE_MUTEX(flow_offload_mutex); +static DEFINE_SPINLOCK(ppe_lock); + +static const struct rhashtable_params airoha_flow_table_params = { + .head_offset = offsetof(struct airoha_flow_table_entry, node), + .key_offset = offsetof(struct airoha_flow_table_entry, cookie), + .key_len = sizeof(unsigned long), + .automatic_shrinking = true, +}; + +static bool airoha_ppe2_is_enabled(struct airoha_eth *eth) +{ + return airoha_fe_rr(eth, REG_PPE_GLO_CFG(1)) & PPE_GLO_CFG_EN_MASK; +} + +static u32 airoha_ppe_get_timestamp(struct airoha_ppe *ppe) +{ + u16 timestamp = airoha_fe_rr(ppe->eth, REG_FE_FOE_TS); + + return FIELD_GET(AIROHA_FOE_IB1_BIND_TIMESTAMP, timestamp); +} + +static void airoha_ppe_hw_init(struct airoha_ppe *ppe) +{ + u32 sram_tb_size, sram_num_entries, dram_num_entries; + struct airoha_eth *eth = ppe->eth; + int i; + + sram_tb_size = PPE_SRAM_NUM_ENTRIES * sizeof(struct airoha_foe_entry); + dram_num_entries = PPE_RAM_NUM_ENTRIES_SHIFT(PPE_DRAM_NUM_ENTRIES); + + for (i = 0; i < PPE_NUM; i++) { + int p; + + airoha_fe_wr(eth, REG_PPE_TB_BASE(i), + ppe->foe_dma + sram_tb_size); + + airoha_fe_rmw(eth, REG_PPE_BND_AGE0(i), + PPE_BIND_AGE0_DELTA_NON_L4 | + PPE_BIND_AGE0_DELTA_UDP, + FIELD_PREP(PPE_BIND_AGE0_DELTA_NON_L4, 1) | + FIELD_PREP(PPE_BIND_AGE0_DELTA_UDP, 12)); + airoha_fe_rmw(eth, REG_PPE_BND_AGE1(i), + PPE_BIND_AGE1_DELTA_TCP_FIN | + PPE_BIND_AGE1_DELTA_TCP, + FIELD_PREP(PPE_BIND_AGE1_DELTA_TCP_FIN, 1) | + FIELD_PREP(PPE_BIND_AGE1_DELTA_TCP, 7)); + + airoha_fe_rmw(eth, REG_PPE_TB_HASH_CFG(i), + PPE_SRAM_TABLE_EN_MASK | + PPE_SRAM_HASH1_EN_MASK | + PPE_DRAM_TABLE_EN_MASK | + PPE_SRAM_HASH0_MODE_MASK | + PPE_SRAM_HASH1_MODE_MASK | + PPE_DRAM_HASH0_MODE_MASK | + PPE_DRAM_HASH1_MODE_MASK, + FIELD_PREP(PPE_SRAM_TABLE_EN_MASK, 1) | + FIELD_PREP(PPE_SRAM_HASH1_EN_MASK, 1) | + FIELD_PREP(PPE_SRAM_HASH1_MODE_MASK, 1) | + FIELD_PREP(PPE_DRAM_HASH1_MODE_MASK, 3)); + + airoha_fe_rmw(eth, REG_PPE_TB_CFG(i), + PPE_TB_CFG_SEARCH_MISS_MASK | + PPE_TB_ENTRY_SIZE_MASK, + FIELD_PREP(PPE_TB_CFG_SEARCH_MISS_MASK, 3) | + FIELD_PREP(PPE_TB_ENTRY_SIZE_MASK, 0)); + + 
airoha_fe_wr(eth, REG_PPE_HASH_SEED(i), PPE_HASH_SEED); + + for (p = 0; p < ARRAY_SIZE(eth->ports); p++) + airoha_fe_rmw(eth, REG_PPE_MTU(i, p), + FP0_EGRESS_MTU_MASK | + FP1_EGRESS_MTU_MASK, + FIELD_PREP(FP0_EGRESS_MTU_MASK, + AIROHA_MAX_MTU) | + FIELD_PREP(FP1_EGRESS_MTU_MASK, + AIROHA_MAX_MTU)); + } + + if (airoha_ppe2_is_enabled(eth)) { + sram_num_entries = + PPE_RAM_NUM_ENTRIES_SHIFT(PPE1_SRAM_NUM_ENTRIES); + airoha_fe_rmw(eth, REG_PPE_TB_CFG(0), + PPE_SRAM_TB_NUM_ENTRY_MASK | + PPE_DRAM_TB_NUM_ENTRY_MASK, + FIELD_PREP(PPE_SRAM_TB_NUM_ENTRY_MASK, + sram_num_entries) | + FIELD_PREP(PPE_DRAM_TB_NUM_ENTRY_MASK, + dram_num_entries)); + airoha_fe_rmw(eth, REG_PPE_TB_CFG(1), + PPE_SRAM_TB_NUM_ENTRY_MASK | + PPE_DRAM_TB_NUM_ENTRY_MASK, + FIELD_PREP(PPE_SRAM_TB_NUM_ENTRY_MASK, + sram_num_entries) | + FIELD_PREP(PPE_DRAM_TB_NUM_ENTRY_MASK, + dram_num_entries)); + } else { + sram_num_entries = + PPE_RAM_NUM_ENTRIES_SHIFT(PPE_SRAM_NUM_ENTRIES); + airoha_fe_rmw(eth, REG_PPE_TB_CFG(0), + PPE_SRAM_TB_NUM_ENTRY_MASK | + PPE_DRAM_TB_NUM_ENTRY_MASK, + FIELD_PREP(PPE_SRAM_TB_NUM_ENTRY_MASK, + sram_num_entries) | + FIELD_PREP(PPE_DRAM_TB_NUM_ENTRY_MASK, + dram_num_entries)); + } +} + +static void airoha_ppe_flow_mangle_eth(const struct flow_action_entry *act, void *eth) +{ + void *dest = eth + act->mangle.offset; + const void *src = &act->mangle.val; + + if (act->mangle.offset > 8) + return; + + if (act->mangle.mask == 0xffff) { + src += 2; + dest += 2; + } + + memcpy(dest, src, act->mangle.mask ? 2 : 4); +} + +static int airoha_ppe_flow_mangle_ports(const struct flow_action_entry *act, + struct airoha_flow_data *data) +{ + u32 val = be32_to_cpu((__force __be32)act->mangle.val); + + switch (act->mangle.offset) { + case 0: + if ((__force __be32)act->mangle.mask == ~cpu_to_be32(0xffff)) + data->dst_port = cpu_to_be16(val); + else + data->src_port = cpu_to_be16(val >> 16); + break; + case 2: + data->dst_port = cpu_to_be16(val); + break; + default: + return -EINVAL; + } + + return 0; +} + +static int airoha_ppe_flow_mangle_ipv4(const struct flow_action_entry *act, + struct airoha_flow_data *data) +{ + __be32 *dest; + + switch (act->mangle.offset) { + case offsetof(struct iphdr, saddr): + dest = &data->v4.src_addr; + break; + case offsetof(struct iphdr, daddr): + dest = &data->v4.dst_addr; + break; + default: + return -EINVAL; + } + + memcpy(dest, &act->mangle.val, sizeof(u32)); + + return 0; +} + +static int airoha_get_dsa_port(struct net_device **dev) +{ +#if IS_ENABLED(CONFIG_NET_DSA) + struct dsa_port *dp = dsa_port_from_netdev(*dev); + + if (IS_ERR(dp)) + return -ENODEV; + + *dev = dsa_port_to_conduit(dp); + return dp->index; +#else + return -ENODEV; +#endif +} + +static int airoha_ppe_foe_entry_prepare(struct airoha_foe_entry *hwe, + struct net_device *dev, int type, + int l4proto, u8 *src_mac, u8 *dest_mac) +{ + int dsa_port = airoha_get_dsa_port(&dev); + struct airoha_foe_mac_info_common *l2; + u32 data, ports_pad, val; + + memset(hwe, 0, sizeof(*hwe)); + + val = FIELD_PREP(AIROHA_FOE_IB1_STATE, AIROHA_FOE_STATE_BIND) | + FIELD_PREP(AIROHA_FOE_IB1_PACKET_TYPE, type) | + FIELD_PREP(AIROHA_FOE_IB1_UDP, l4proto == IPPROTO_UDP) | + AIROHA_FOE_IB1_BIND_TTL; + hwe->ib1 = val; + + val = FIELD_PREP(AIROHA_FOE_IB2_PORT_AG, 0x1f); + if (dsa_port >= 0) + val |= FIELD_PREP(AIROHA_FOE_IB2_NBQ, dsa_port); + if (dev) { + struct airoha_gdm_port *port = netdev_priv(dev); + u8 pse_port; + + pse_port = port->id == 4 ? 
FE_PSE_PORT_GDM4 : port->id; + val |= FIELD_PREP(AIROHA_FOE_IB2_PSE_PORT, pse_port); + } + + /* FIXME: implement QoS support setting pse_port to 2 (loopback) + * for uplink and setting qos bit in ib2 + */ + + if (is_multicast_ether_addr(dest_mac)) + val |= AIROHA_FOE_IB2_MULTICAST; + + ports_pad = 0xa5a5a500 | (l4proto & 0xff); + if (type == PPE_PKT_TYPE_IPV4_ROUTE) + hwe->ipv4.orig_tuple.ports = ports_pad; + if (type == PPE_PKT_TYPE_IPV6_ROUTE_3T) + hwe->ipv6.ports = ports_pad; + + data = FIELD_PREP(AIROHA_FOE_SHAPER_ID, 0x7f); + if (type == PPE_PKT_TYPE_BRIDGE) { + hwe->bridge.dest_mac_hi = get_unaligned_be32(dest_mac); + hwe->bridge.dest_mac_lo = get_unaligned_be16(dest_mac + 4); + hwe->bridge.src_mac_hi = get_unaligned_be16(src_mac); + hwe->bridge.src_mac_lo = get_unaligned_be32(src_mac + 2); + hwe->bridge.data = data; + hwe->bridge.ib2 = val; + l2 = &hwe->bridge.l2.common; + } else if (type >= PPE_PKT_TYPE_IPV6_ROUTE_3T) { + hwe->ipv6.data = data; + hwe->ipv6.ib2 = val; + l2 = &hwe->ipv6.l2; + } else { + hwe->ipv4.data = data; + hwe->ipv4.ib2 = val; + l2 = &hwe->ipv4.l2.common; + } + + l2->dest_mac_hi = get_unaligned_be32(dest_mac); + l2->dest_mac_lo = get_unaligned_be16(dest_mac + 4); + if (type <= PPE_PKT_TYPE_IPV4_DSLITE) { + l2->src_mac_hi = get_unaligned_be32(src_mac); + hwe->ipv4.l2.src_mac_lo = get_unaligned_be16(src_mac + 4); + } else { + l2->src_mac_hi = FIELD_PREP(AIROHA_FOE_MAC_SMAC_ID, 0xf); + } + + if (dsa_port >= 0) + l2->etype = BIT(15) | BIT(dsa_port); + else if (type >= PPE_PKT_TYPE_IPV6_ROUTE_3T) + l2->etype = ETH_P_IPV6; + else + l2->etype = ETH_P_IP; + + return 0; +} + +static int airoha_ppe_foe_entry_set_ipv4_tuple(struct airoha_foe_entry *hwe, + struct airoha_flow_data *data, + bool egress) +{ + int type = FIELD_GET(AIROHA_FOE_IB1_PACKET_TYPE, hwe->ib1); + struct airoha_foe_ipv4_tuple *t; + + switch (type) { + case PPE_PKT_TYPE_IPV4_HNAPT: + if (egress) { + t = &hwe->ipv4.new_tuple; + break; + } + fallthrough; + case PPE_PKT_TYPE_IPV4_DSLITE: + case PPE_PKT_TYPE_IPV4_ROUTE: + t = &hwe->ipv4.orig_tuple; + break; + default: + WARN_ON_ONCE(1); + return -EINVAL; + } + + t->src_ip = be32_to_cpu(data->v4.src_addr); + t->dest_ip = be32_to_cpu(data->v4.dst_addr); + + if (type != PPE_PKT_TYPE_IPV4_ROUTE) { + t->src_port = be16_to_cpu(data->src_port); + t->dest_port = be16_to_cpu(data->dst_port); + } + + return 0; +} + +static int airoha_ppe_foe_entry_set_ipv6_tuple(struct airoha_foe_entry *hwe, + struct airoha_flow_data *data) + +{ + int type = FIELD_GET(AIROHA_FOE_IB1_PACKET_TYPE, hwe->ib1); + u32 *src, *dest; + + switch (type) { + case PPE_PKT_TYPE_IPV6_ROUTE_5T: + case PPE_PKT_TYPE_IPV6_6RD: + hwe->ipv6.src_port = be16_to_cpu(data->src_port); + hwe->ipv6.dest_port = be16_to_cpu(data->dst_port); + fallthrough; + case PPE_PKT_TYPE_IPV6_ROUTE_3T: + src = hwe->ipv6.src_ip; + dest = hwe->ipv6.dest_ip; + break; + default: + WARN_ON_ONCE(1); + return -EINVAL; + } + + ipv6_addr_be32_to_cpu(src, data->v6.src_addr.s6_addr32); + ipv6_addr_be32_to_cpu(dest, data->v6.dst_addr.s6_addr32); + + return 0; +} + +static u32 airoha_ppe_foe_get_entry_hash(struct airoha_foe_entry *hwe) +{ + int type = FIELD_GET(AIROHA_FOE_IB1_PACKET_TYPE, hwe->ib1); + u32 hash, hv1, hv2, hv3; + + switch (type) { + case PPE_PKT_TYPE_IPV4_ROUTE: + case PPE_PKT_TYPE_IPV4_HNAPT: + hv1 = hwe->ipv4.orig_tuple.ports; + hv2 = hwe->ipv4.orig_tuple.dest_ip; + hv3 = hwe->ipv4.orig_tuple.src_ip; + break; + case PPE_PKT_TYPE_IPV6_ROUTE_3T: + case PPE_PKT_TYPE_IPV6_ROUTE_5T: + hv1 = hwe->ipv6.src_ip[3] ^ 
hwe->ipv6.dest_ip[3]; + hv1 ^= hwe->ipv6.ports; + + hv2 = hwe->ipv6.src_ip[2] ^ hwe->ipv6.dest_ip[2]; + hv2 ^= hwe->ipv6.dest_ip[0]; + + hv3 = hwe->ipv6.src_ip[1] ^ hwe->ipv6.dest_ip[1]; + hv3 ^= hwe->ipv6.src_ip[0]; + break; + case PPE_PKT_TYPE_IPV4_DSLITE: + case PPE_PKT_TYPE_IPV6_6RD: + default: + WARN_ON_ONCE(1); + return PPE_HASH_MASK; + } + + hash = (hv1 & hv2) | ((~hv1) & hv3); + hash = (hash >> 24) | ((hash & 0xffffff) << 8); + hash ^= hv1 ^ hv2 ^ hv3; + hash ^= hash >> 16; + hash &= PPE_NUM_ENTRIES - 1; + + return hash; +} + +static struct airoha_foe_entry * +airoha_ppe_foe_get_entry(struct airoha_ppe *ppe, u32 hash) +{ + if (hash < PPE_SRAM_NUM_ENTRIES) { + u32 *hwe = ppe->foe + hash * sizeof(struct airoha_foe_entry); + struct airoha_eth *eth = ppe->eth; + bool ppe2; + u32 val; + int i; + + ppe2 = airoha_ppe2_is_enabled(ppe->eth) && + hash >= PPE1_SRAM_NUM_ENTRIES; + airoha_fe_wr(ppe->eth, REG_PPE_RAM_CTRL(ppe2), + FIELD_PREP(PPE_SRAM_CTRL_ENTRY_MASK, hash) | + PPE_SRAM_CTRL_REQ_MASK); + if (read_poll_timeout_atomic(airoha_fe_rr, val, + val & PPE_SRAM_CTRL_ACK_MASK, + 10, 100, false, eth, + REG_PPE_RAM_CTRL(ppe2))) + return NULL; + + for (i = 0; i < sizeof(struct airoha_foe_entry) / 4; i++) + hwe[i] = airoha_fe_rr(eth, + REG_PPE_RAM_ENTRY(ppe2, i)); + } + + return ppe->foe + hash * sizeof(struct airoha_foe_entry); +} + +static bool airoha_ppe_foe_compare_entry(struct airoha_flow_table_entry *e, + struct airoha_foe_entry *hwe) +{ + int type = FIELD_GET(AIROHA_FOE_IB1_PACKET_TYPE, e->data.ib1), len; + + if ((hwe->ib1 ^ e->data.ib1) & AIROHA_FOE_IB1_UDP) + return false; + + if (type > PPE_PKT_TYPE_IPV4_DSLITE) + len = offsetof(struct airoha_foe_entry, ipv6.data); + else + len = offsetof(struct airoha_foe_entry, ipv4.ib2); + + return !memcmp(&e->data.d, &hwe->d, len - sizeof(hwe->ib1)); +} + +static int airoha_ppe_foe_commit_entry(struct airoha_ppe *ppe, + struct airoha_foe_entry *e, + u32 hash) +{ + struct airoha_foe_entry *hwe = ppe->foe + hash * sizeof(*hwe); + u32 ts = airoha_ppe_get_timestamp(ppe); + struct airoha_eth *eth = ppe->eth; + + memcpy(&hwe->d, &e->d, sizeof(*hwe) - sizeof(hwe->ib1)); + wmb(); + + e->ib1 &= ~AIROHA_FOE_IB1_BIND_TIMESTAMP; + e->ib1 |= FIELD_PREP(AIROHA_FOE_IB1_BIND_TIMESTAMP, ts); + hwe->ib1 = e->ib1; + + if (hash < PPE_SRAM_NUM_ENTRIES) { + dma_addr_t addr = ppe->foe_dma + hash * sizeof(*hwe); + bool ppe2 = airoha_ppe2_is_enabled(eth) && + hash >= PPE1_SRAM_NUM_ENTRIES; + struct airoha_npu *npu; + int err = -ENODEV; + + rcu_read_lock(); + npu = rcu_dereference(eth->npu); + if (npu) + err = airoha_npu_foe_commit_entry(npu, addr, + sizeof(*hwe), hash, + ppe2); + rcu_read_unlock(); + + return err; + } + + return 0; +} + +static void airoha_ppe_foe_insert_entry(struct airoha_ppe *ppe, u32 hash) +{ + struct airoha_flow_table_entry *e; + struct airoha_foe_entry *hwe; + struct hlist_node *n; + u32 index; + + spin_lock_bh(&ppe_lock); + + hwe = airoha_ppe_foe_get_entry(ppe, hash); + if (!hwe) + goto unlock; + + if (FIELD_GET(AIROHA_FOE_IB1_STATE, hwe->ib1) == AIROHA_FOE_STATE_BIND) + goto unlock; + + index = airoha_ppe_foe_get_entry_hash(hwe); + hlist_for_each_entry_safe(e, n, &ppe->foe_flow[index], list) { + if (airoha_ppe_foe_compare_entry(e, hwe)) { + airoha_ppe_foe_commit_entry(ppe, &e->data, hash); + e->hash = hash; + break; + } + } +unlock: + spin_unlock_bh(&ppe_lock); +} + +static int airoha_ppe_foe_flow_commit_entry(struct airoha_ppe *ppe, + struct airoha_flow_table_entry *e) +{ + u32 hash = airoha_ppe_foe_get_entry_hash(&e->data); + + e->hash = 0xffff; 
+ + spin_lock_bh(&ppe_lock); + hlist_add_head(&e->list, &ppe->foe_flow[hash]); + spin_unlock_bh(&ppe_lock); + + return 0; +} + +static void airoha_ppe_foe_flow_remove_entry(struct airoha_ppe *ppe, + struct airoha_flow_table_entry *e) +{ + spin_lock_bh(&ppe_lock); + + hlist_del_init(&e->list); + if (e->hash != 0xffff) { + e->data.ib1 &= ~AIROHA_FOE_IB1_STATE; + e->data.ib1 |= FIELD_PREP(AIROHA_FOE_IB1_STATE, + AIROHA_FOE_STATE_INVALID); + airoha_ppe_foe_commit_entry(ppe, &e->data, e->hash); + e->hash = 0xffff; + } + + spin_unlock_bh(&ppe_lock); +} + +static int airoha_ppe_flow_offload_replace(struct airoha_gdm_port *port, + struct flow_cls_offload *f) +{ + struct flow_rule *rule = flow_cls_offload_flow_rule(f); + struct airoha_eth *eth = port->qdma->eth; + struct airoha_flow_table_entry *e; + struct airoha_flow_data data = {}; + struct net_device *odev = NULL; + struct flow_action_entry *act; + struct airoha_foe_entry hwe; + int err, i, offload_type; + u16 addr_type = 0; + u8 l4proto = 0; + + if (rhashtable_lookup(ð->flow_table, &f->cookie, + airoha_flow_table_params)) + return -EEXIST; + + if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_META)) + return -EOPNOTSUPP; + + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) { + struct flow_match_control match; + + flow_rule_match_control(rule, &match); + addr_type = match.key->addr_type; + if (flow_rule_has_control_flags(match.mask->flags, + f->common.extack)) + return -EOPNOTSUPP; + } else { + return -EOPNOTSUPP; + } + + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) { + struct flow_match_basic match; + + flow_rule_match_basic(rule, &match); + l4proto = match.key->ip_proto; + } else { + return -EOPNOTSUPP; + } + + switch (addr_type) { + case 0: + offload_type = PPE_PKT_TYPE_BRIDGE; + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) { + struct flow_match_eth_addrs match; + + flow_rule_match_eth_addrs(rule, &match); + memcpy(data.eth.h_dest, match.key->dst, ETH_ALEN); + memcpy(data.eth.h_source, match.key->src, ETH_ALEN); + } else { + return -EOPNOTSUPP; + } + + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) { + struct flow_match_vlan match; + + flow_rule_match_vlan(rule, &match); + if (match.key->vlan_tpid != cpu_to_be16(ETH_P_8021Q)) + return -EOPNOTSUPP; + + data.vlan_in = match.key->vlan_id; + } + break; + case FLOW_DISSECTOR_KEY_IPV4_ADDRS: + offload_type = PPE_PKT_TYPE_IPV4_HNAPT; + break; + case FLOW_DISSECTOR_KEY_IPV6_ADDRS: + offload_type = PPE_PKT_TYPE_IPV6_ROUTE_5T; + break; + default: + return -EOPNOTSUPP; + } + + flow_action_for_each(i, act, &rule->action) { + switch (act->id) { + case FLOW_ACTION_MANGLE: + if (offload_type == PPE_PKT_TYPE_BRIDGE) + return -EOPNOTSUPP; + + if (act->mangle.htype == FLOW_ACT_MANGLE_HDR_TYPE_ETH) + airoha_ppe_flow_mangle_eth(act, &data.eth); + break; + case FLOW_ACTION_REDIRECT: + odev = act->dev; + break; + case FLOW_ACTION_CSUM: + break; + case FLOW_ACTION_VLAN_PUSH: + if (data.vlan.num == 1 || + act->vlan.proto != htons(ETH_P_8021Q)) + return -EOPNOTSUPP; + + data.vlan.id = act->vlan.vid; + data.vlan.proto = act->vlan.proto; + data.vlan.num++; + break; + case FLOW_ACTION_VLAN_POP: + break; + case FLOW_ACTION_PPPOE_PUSH: + if (data.pppoe.num == 1) + return -EOPNOTSUPP; + + data.pppoe.sid = act->pppoe.sid; + data.pppoe.num++; + break; + default: + return -EOPNOTSUPP; + } + } + + if (!is_valid_ether_addr(data.eth.h_source) || + !is_valid_ether_addr(data.eth.h_dest)) + return -EINVAL; + + err = airoha_ppe_foe_entry_prepare(&hwe, odev, offload_type, l4proto, + 
data.eth.h_source, data.eth.h_dest); + if (err) + return err; + + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) { + struct flow_match_ports ports; + + if (offload_type == PPE_PKT_TYPE_BRIDGE) + return -EOPNOTSUPP; + + flow_rule_match_ports(rule, &ports); + data.src_port = ports.key->src; + data.dst_port = ports.key->dst; + } else if (offload_type != PPE_PKT_TYPE_BRIDGE) { + return -EOPNOTSUPP; + } + + if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) { + struct flow_match_ipv4_addrs addrs; + + flow_rule_match_ipv4_addrs(rule, &addrs); + data.v4.src_addr = addrs.key->src; + data.v4.dst_addr = addrs.key->dst; + airoha_ppe_foe_entry_set_ipv4_tuple(&hwe, &data, false); + } + + if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) { + struct flow_match_ipv6_addrs addrs; + + flow_rule_match_ipv6_addrs(rule, &addrs); + + data.v6.src_addr = addrs.key->src; + data.v6.dst_addr = addrs.key->dst; + airoha_ppe_foe_entry_set_ipv6_tuple(&hwe, &data); + } + + flow_action_for_each(i, act, &rule->action) { + if (act->id != FLOW_ACTION_MANGLE) + continue; + + if (offload_type == PPE_PKT_TYPE_BRIDGE) + return -EOPNOTSUPP; + + switch (act->mangle.htype) { + case FLOW_ACT_MANGLE_HDR_TYPE_TCP: + case FLOW_ACT_MANGLE_HDR_TYPE_UDP: + err = airoha_ppe_flow_mangle_ports(act, &data); + break; + case FLOW_ACT_MANGLE_HDR_TYPE_IP4: + err = airoha_ppe_flow_mangle_ipv4(act, &data); + break; + case FLOW_ACT_MANGLE_HDR_TYPE_ETH: + /* handled earlier */ + break; + default: + return -EOPNOTSUPP; + } + + if (err) + return err; + } + + if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) { + err = airoha_ppe_foe_entry_set_ipv4_tuple(&hwe, &data, true); + if (err) + return err; + } + + e = kzalloc(sizeof(*e), GFP_KERNEL); + if (!e) + return -ENOMEM; + + e->cookie = f->cookie; + memcpy(&e->data, &hwe, sizeof(e->data)); + + err = airoha_ppe_foe_flow_commit_entry(eth->ppe, e); + if (err) + goto free_entry; + + err = rhashtable_insert_fast(ð->flow_table, &e->node, + airoha_flow_table_params); + if (err < 0) + goto remove_foe_entry; + + return 0; + +remove_foe_entry: + airoha_ppe_foe_flow_remove_entry(eth->ppe, e); +free_entry: + kfree(e); + + return err; +} + +static int airoha_ppe_flow_offload_destroy(struct airoha_gdm_port *port, + struct flow_cls_offload *f) +{ + struct airoha_eth *eth = port->qdma->eth; + struct airoha_flow_table_entry *e; + + e = rhashtable_lookup(ð->flow_table, &f->cookie, + airoha_flow_table_params); + if (!e) + return -ENOENT; + + airoha_ppe_foe_flow_remove_entry(eth->ppe, e); + rhashtable_remove_fast(ð->flow_table, &e->node, + airoha_flow_table_params); + kfree(e); + + return 0; +} + +static int airoha_ppe_flow_offload_cmd(struct airoha_gdm_port *port, + struct flow_cls_offload *f) +{ + switch (f->command) { + case FLOW_CLS_REPLACE: + return airoha_ppe_flow_offload_replace(port, f); + case FLOW_CLS_DESTROY: + return airoha_ppe_flow_offload_destroy(port, f); + default: + break; + } + + return -EOPNOTSUPP; +} + +static int airoha_ppe_flush_sram_entries(struct airoha_ppe *ppe, + struct airoha_npu *npu) +{ + int i, sram_num_entries = PPE_SRAM_NUM_ENTRIES; + struct airoha_foe_entry *hwe = ppe->foe; + + if (airoha_ppe2_is_enabled(ppe->eth)) + sram_num_entries = sram_num_entries / 2; + + for (i = 0; i < sram_num_entries; i++) + memset(&hwe[i], 0, sizeof(*hwe)); + + return airoha_npu_ppe_flush_sram_entries(npu, ppe->foe_dma, + PPE_SRAM_NUM_ENTRIES); +} + +static int airoha_ppe_offload_setup(struct airoha_eth *eth) +{ + struct airoha_npu *npu = airoha_npu_get(eth->dev); + int err; + + if (IS_ERR(npu)) + return 
PTR_ERR(npu); + + airoha_ppe_hw_init(eth->ppe); + err = airoha_ppe_flush_sram_entries(eth->ppe, npu); + if (err) + goto error_npu_put; + + rcu_assign_pointer(eth->npu, npu); + synchronize_rcu(); + + return 0; + +error_npu_put: + airoha_npu_put(npu); + + return err; +} + +int airoha_ppe_setup_tc_block_cb(enum tc_setup_type type, void *type_data, + void *cb_priv) +{ + struct flow_cls_offload *cls = type_data; + struct net_device *dev = cb_priv; + struct airoha_gdm_port *port = netdev_priv(dev); + struct airoha_eth *eth = port->qdma->eth; + int err; + + if (!tc_can_offload(dev) || type != TC_SETUP_CLSFLOWER) + return -EOPNOTSUPP; + + mutex_lock(&flow_offload_mutex); + + if (!eth->npu) + err = airoha_ppe_offload_setup(eth); + if (!err) + err = airoha_ppe_flow_offload_cmd(port, cls); + + mutex_unlock(&flow_offload_mutex); + + return err; +} + +void airoha_ppe_check_skb(struct airoha_ppe *ppe, u16 hash) +{ + u16 now, diff; + + if (hash > PPE_HASH_MASK) + return; + + now = (u16)jiffies; + diff = now - ppe->foe_check_time[hash]; + if (diff < HZ / 10) + return; + + ppe->foe_check_time[hash] = now; + airoha_ppe_foe_insert_entry(ppe, hash); +} + +int airoha_ppe_init(struct airoha_eth *eth) +{ + struct airoha_ppe *ppe; + int foe_size; + + ppe = devm_kzalloc(eth->dev, sizeof(*ppe), GFP_KERNEL); + if (!ppe) + return -ENOMEM; + + foe_size = PPE_NUM_ENTRIES * sizeof(struct airoha_foe_entry); + ppe->foe = dmam_alloc_coherent(eth->dev, foe_size, &ppe->foe_dma, + GFP_KERNEL | __GFP_ZERO); + if (!ppe->foe) + return -ENOMEM; + + ppe->eth = eth; + eth->ppe = ppe; + + ppe->foe_flow = devm_kzalloc(eth->dev, + PPE_NUM_ENTRIES * sizeof(*ppe->foe_flow), + GFP_KERNEL); + if (!ppe->foe_flow) + return -ENOMEM; + + return rhashtable_init(ð->flow_table, &airoha_flow_table_params); +} + +void airoha_ppe_deinit(struct airoha_eth *eth) +{ + struct airoha_npu *npu; + + rcu_read_lock(); + npu = rcu_dereference(eth->npu); + if (npu) { + airoha_npu_ppe_deinit(npu); + airoha_npu_put(npu); + } + rcu_read_unlock(); + + rhashtable_destroy(ð->flow_table); +} diff --git a/drivers/net/ethernet/airoha/airoha_regs.h b/drivers/net/ethernet/airoha/airoha_regs.h index e467dd81ff44a9ad560226cab42b7431812f5fb9..6cc64c60953a3961b7c93dfa75a289a6f7a6599b 100644 --- a/drivers/net/ethernet/airoha/airoha_regs.h +++ b/drivers/net/ethernet/airoha/airoha_regs.h @@ -15,6 +15,7 @@ #define CDM1_BASE 0x0400 #define GDM1_BASE 0x0500 #define PPE1_BASE 0x0c00 +#define PPE2_BASE 0x1c00 #define CDM2_BASE 0x1400 #define GDM2_BASE 0x1500 @@ -36,6 +37,7 @@ #define FE_RST_GDM3_MBI_ARB_MASK BIT(2) #define FE_RST_CORE_MASK BIT(0) +#define REG_FE_FOE_TS 0x0010 #define REG_FE_WAN_MAC_H 0x0030 #define REG_FE_LAN_MAC_H 0x0040 @@ -192,11 +194,106 @@ #define REG_FE_GDM_RX_ETH_L511_CNT_L(_n) (GDM_BASE(_n) + 0x198) #define REG_FE_GDM_RX_ETH_L1023_CNT_L(_n) (GDM_BASE(_n) + 0x19c) -#define REG_PPE1_TB_HASH_CFG (PPE1_BASE + 0x250) -#define PPE1_SRAM_TABLE_EN_MASK BIT(0) -#define PPE1_SRAM_HASH1_EN_MASK BIT(8) -#define PPE1_DRAM_TABLE_EN_MASK BIT(16) -#define PPE1_DRAM_HASH1_EN_MASK BIT(24) +#define REG_PPE_GLO_CFG(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x200) +#define PPE_GLO_CFG_BUSY_MASK BIT(31) +#define PPE_GLO_CFG_FLOW_DROP_UPDATE_MASK BIT(9) +#define PPE_GLO_CFG_PSE_HASH_OFS_MASK BIT(6) +#define PPE_GLO_CFG_PPE_BSWAP_MASK BIT(5) +#define PPE_GLO_CFG_TTL_DROP_MASK BIT(4) +#define PPE_GLO_CFG_IP4_CS_DROP_MASK BIT(3) +#define PPE_GLO_CFG_IP4_L4_CS_DROP_MASK BIT(2) +#define PPE_GLO_CFG_EN_MASK BIT(0) + +#define REG_PPE_PPE_FLOW_CFG(_n) (((_n) ? 
PPE2_BASE : PPE1_BASE) + 0x204) +#define PPE_FLOW_CFG_IP6_HASH_GRE_KEY_MASK BIT(20) +#define PPE_FLOW_CFG_IP4_HASH_GRE_KEY_MASK BIT(19) +#define PPE_FLOW_CFG_IP4_HASH_FLOW_LABEL_MASK BIT(18) +#define PPE_FLOW_CFG_IP4_NAT_FRAG_MASK BIT(17) +#define PPE_FLOW_CFG_IP_PROTO_BLACKLIST_MASK BIT(16) +#define PPE_FLOW_CFG_IP4_DSLITE_MASK BIT(14) +#define PPE_FLOW_CFG_IP4_NAPT_MASK BIT(13) +#define PPE_FLOW_CFG_IP4_NAT_MASK BIT(12) +#define PPE_FLOW_CFG_IP6_6RD_MASK BIT(10) +#define PPE_FLOW_CFG_IP6_5T_ROUTE_MASK BIT(9) +#define PPE_FLOW_CFG_IP6_3T_ROUTE_MASK BIT(8) +#define PPE_FLOW_CFG_IP4_UDP_FRAG_MASK BIT(7) +#define PPE_FLOW_CFG_IP4_TCP_FRAG_MASK BIT(6) + +#define REG_PPE_IP_PROTO_CHK(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x208) +#define PPE_IP_PROTO_CHK_IPV4_MASK GENMASK(15, 0) +#define PPE_IP_PROTO_CHK_IPV6_MASK GENMASK(31, 16) + +#define REG_PPE_TB_CFG(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x21c) +#define PPE_SRAM_TB_NUM_ENTRY_MASK GENMASK(26, 24) +#define PPE_TB_CFG_KEEPALIVE_MASK GENMASK(13, 12) +#define PPE_TB_CFG_AGE_TCP_FIN_MASK BIT(11) +#define PPE_TB_CFG_AGE_UDP_MASK BIT(10) +#define PPE_TB_CFG_AGE_TCP_MASK BIT(9) +#define PPE_TB_CFG_AGE_UNBIND_MASK BIT(8) +#define PPE_TB_CFG_AGE_NON_L4_MASK BIT(7) +#define PPE_TB_CFG_AGE_PREBIND_MASK BIT(6) +#define PPE_TB_CFG_SEARCH_MISS_MASK GENMASK(5, 4) +#define PPE_TB_ENTRY_SIZE_MASK BIT(3) +#define PPE_DRAM_TB_NUM_ENTRY_MASK GENMASK(2, 0) + +#define REG_PPE_TB_BASE(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x220) + +#define REG_PPE_BIND_RATE(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x228) +#define PPE_BIND_RATE_L2B_BIND_MASK GENMASK(31, 16) +#define PPE_BIND_RATE_BIND_MASK GENMASK(15, 0) + +#define REG_PPE_BIND_LIMIT0(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x22c) +#define PPE_BIND_LIMIT0_HALF_MASK GENMASK(29, 16) +#define PPE_BIND_LIMIT0_QUARTER_MASK GENMASK(13, 0) + +#define REG_PPE_BIND_LIMIT1(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x230) +#define PPE_BIND_LIMIT1_NON_L4_MASK GENMASK(23, 16) +#define PPE_BIND_LIMIT1_FULL_MASK GENMASK(13, 0) + +#define REG_PPE_BND_AGE0(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x23c) +#define PPE_BIND_AGE0_DELTA_NON_L4 GENMASK(30, 16) +#define PPE_BIND_AGE0_DELTA_UDP GENMASK(14, 0) + +#define REG_PPE_UNBIND_AGE(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x238) +#define PPE_UNBIND_AGE_MIN_PACKETS_MASK GENMASK(31, 16) +#define PPE_UNBIND_AGE_DELTA_MASK GENMASK(7, 0) + +#define REG_PPE_BND_AGE1(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x240) +#define PPE_BIND_AGE1_DELTA_TCP_FIN GENMASK(30, 16) +#define PPE_BIND_AGE1_DELTA_TCP GENMASK(14, 0) + +#define REG_PPE_HASH_SEED(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x244) +#define PPE_HASH_SEED 0x12345678 + +#define REG_PPE_DFT_CPORT0(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x248) + +#define REG_PPE_DFT_CPORT1(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x24c) + +#define REG_PPE_TB_HASH_CFG(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x250) +#define PPE_DRAM_HASH1_MODE_MASK GENMASK(31, 28) +#define PPE_DRAM_HASH1_EN_MASK BIT(24) +#define PPE_DRAM_HASH0_MODE_MASK GENMASK(23, 20) +#define PPE_DRAM_TABLE_EN_MASK BIT(16) +#define PPE_SRAM_HASH1_MODE_MASK GENMASK(15, 12) +#define PPE_SRAM_HASH1_EN_MASK BIT(8) +#define PPE_SRAM_HASH0_MODE_MASK GENMASK(7, 4) +#define PPE_SRAM_TABLE_EN_MASK BIT(0) + +#define REG_PPE_MTU_BASE(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x304) +#define REG_PPE_MTU(_m, _n) (REG_PPE_MTU_BASE(_m) + ((_n) << 2)) +#define FP1_EGRESS_MTU_MASK GENMASK(29, 16) +#define FP0_EGRESS_MTU_MASK GENMASK(13, 0) + +#define REG_PPE_RAM_CTRL(_n) (((_n) ? 
PPE2_BASE : PPE1_BASE) + 0x31c) +#define PPE_SRAM_CTRL_ACK_MASK BIT(31) +#define PPE_SRAM_CTRL_DUAL_SUCESS_MASK BIT(30) +#define PPE_SRAM_CTRL_ENTRY_MASK GENMASK(23, 8) +#define PPE_SRAM_WR_DUAL_DIRECTION_MASK BIT(2) +#define PPE_SRAM_CTRL_WR_MASK BIT(1) +#define PPE_SRAM_CTRL_REQ_MASK BIT(0) + +#define REG_PPE_RAM_BASE(_n) (((_n) ? PPE2_BASE : PPE1_BASE) + 0x320) +#define REG_PPE_RAM_ENTRY(_m, _n) (REG_PPE_RAM_BASE(_m) + ((_n) << 2)) #define REG_FE_GDM_TX_OK_PKT_CNT_H(_n) (GDM_BASE(_n) + 0x280) #define REG_FE_GDM_TX_OK_BYTE_CNT_H(_n) (GDM_BASE(_n) + 0x284) From patchwork Thu Feb 13 15:34:34 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973524 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 01B27C021A0 for ; Thu, 13 Feb 2025 15:59:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=RgGzXEZjyclhhwq5Z/VZE+ifAoIXz/4F9pAmTqO6v7c=; b=ZRP199Gmsw1V11IcSoEoKV2Nm4 3ok8HiXPEOZAUCHLKZ637yOBPPmnK2p4f4t6nxbdxnGibXgDKMZ/niSaIrBGXTpQfrU0dNb6GyY9O nXN+SKaXHjjWFEx9ZY/FXCmz+jxZIIcNoavqcN76KSytkwUodHKs2fJKq2gDIc51R/APvxPU2Yze9 Gmw/UMtrEg28gjMP/hNYhrC+FIjNUipaR2SXPzMEfsB+IFhHTzmG11nfgDhUK8i2AbQRiHyXxFX8P 5dK1WFgN7MU0SNTy2QAihdOUQcloSfQRlD8XN/B8BQK5pE1TtgvQkEnYU4U/PqASO+lhypEgnE49a Oh15FkSw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tibcj-0000000BebB-0r5T; Thu, 13 Feb 2025 15:59:33 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tibFQ-0000000BYqz-0b5G; Thu, 13 Feb 2025 15:35:29 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id F37385C5479; Thu, 13 Feb 2025 15:34:47 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D0E55C4CED1; Thu, 13 Feb 2025 15:35:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1739460927; bh=ssU4RdqHTTIS99kGMo7vpAakHLKzzaATYvo+fq+A+cQ=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=uO7a0ndRYHxXSAXUlwo3Vox2zQ030aDqcqH4H0VYuj6IbM5nyKrpL7vZ1z8x6ifL9 j+fjTyZtTyMczktbqf9XxPaQ12XQCX/4Lo4HP8mnlz107Hp/tNdZh4n/fS67DEC00B pWj/P6aL//jKb9WfH/mH2QjlBYCXQjMRbQJzOA26ON8+MQZfdnkc0S9HMBS086PcS/ suScRpo6dJcHUwDqe2BiafRrb35M+xxWDlXyAKWB9I5DXgk22v0vJdFVBA5l++GMlW QBFL3v7iY2APqQfoXiypQFwH1eR1ssKO7/qOLUQSE1Mr7H1smAAR4MVv3w/GcYzhqk 5Iz1appFygTgg== From: Lorenzo Bianconi Date: Thu, 13 Feb 2025 16:34:34 +0100 Subject: [PATCH net-next v4 15/16] net: airoha: Add loopback support for GDM2 MIME-Version: 1.0 Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-15-b69ca16d74db@kernel.org> References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> In-Reply-To: 
<20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Felix Fietkau , Sean Wang , Matthias Brugger , AngeloGioacchino Del Regno , Philipp Zabel , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lorenzo Bianconi , "Chester A. Unal" , Daniel Golle , DENG Qingfang , Andrew Lunn , Vladimir Oltean Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com X-Mailer: b4 0.14.2 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250213_073528_277226_5871F19D X-CRM114-Status: GOOD ( 16.80 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Enable hw redirection for traffic received on GDM2 port to GDM{3,4}. This is required to apply Qdisc offloading (HTB or ETS) for traffic to and from GDM{3,4} port. Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/airoha_eth.c | 69 ++++++++++++++++++++++++++++++- drivers/net/ethernet/airoha/airoha_eth.h | 7 ++++ drivers/net/ethernet/airoha/airoha_ppe.c | 13 +++--- drivers/net/ethernet/airoha/airoha_regs.h | 29 +++++++++++++ 4 files changed, 110 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c index 3065f418c92ad51b4042433110a678b39702e9c7..d2fd4a4f6cd325e47e3d1273aad795faabe13def 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.c +++ b/drivers/net/ethernet/airoha/airoha_eth.c @@ -1588,14 +1588,79 @@ static int airoha_dev_set_macaddr(struct net_device *dev, void *p) return 0; } +static void airhoha_set_gdm2_loopback(struct airoha_gdm_port *port) +{ + u32 pse_port = port->id == 3 ? FE_PSE_PORT_GDM3 : FE_PSE_PORT_GDM4; + struct airoha_eth *eth = port->qdma->eth; + u32 chan = port->id == 3 ? 
4 : 0; + + /* Forward the traffic to the proper GDM port */ + airoha_set_gdm_port_fwd_cfg(eth, REG_GDM_FWD_CFG(2), pse_port); + airoha_fe_clear(eth, REG_GDM_FWD_CFG(2), GDM_STRIP_CRC); + + /* Enable GDM2 loopback */ + airoha_fe_wr(eth, REG_GDM_TXCHN_EN(2), 0xffffffff); + airoha_fe_wr(eth, REG_GDM_RXCHN_EN(2), 0xffff); + airoha_fe_rmw(eth, REG_GDM_LPBK_CFG(2), + LPBK_CHAN_MASK | LPBK_MODE_MASK | LPBK_EN_MASK, + FIELD_PREP(LPBK_CHAN_MASK, chan) | LPBK_EN_MASK); + airoha_fe_rmw(eth, REG_GDM_LEN_CFG(2), + GDM_SHORT_LEN_MASK | GDM_LONG_LEN_MASK, + FIELD_PREP(GDM_SHORT_LEN_MASK, 60) | + FIELD_PREP(GDM_LONG_LEN_MASK, AIROHA_MAX_MTU)); + + /* Disable VIP and IFC for GDM2 */ + airoha_fe_clear(eth, REG_FE_VIP_PORT_EN, BIT(2)); + airoha_fe_clear(eth, REG_FE_IFC_PORT_EN, BIT(2)); + + if (port->id == 3) { + /* FIXME: handle XSI_PCE1_PORT */ + airoha_fe_wr(eth, REG_PPE_DFT_CPORT0(0), 0x5500); + airoha_fe_rmw(eth, REG_FE_WAN_PORT, + WAN1_EN_MASK | WAN1_MASK | WAN0_MASK, + FIELD_PREP(WAN0_MASK, HSGMII_LAN_PCIE0_SRCPORT)); + airoha_fe_rmw(eth, + REG_SP_DFT_CPORT(HSGMII_LAN_PCIE0_SRCPORT >> 3), + SP_CPORT_PCIE0_MASK, + FIELD_PREP(SP_CPORT_PCIE0_MASK, + FE_PSE_PORT_CDM2)); + } else { + /* FIXME: handle XSI_USB_PORT */ + airoha_fe_rmw(eth, REG_SRC_PORT_FC_MAP6, + FC_ID_OF_SRC_PORT24_MASK, + FIELD_PREP(FC_ID_OF_SRC_PORT24_MASK, 2)); + airoha_fe_rmw(eth, REG_FE_WAN_PORT, + WAN1_EN_MASK | WAN1_MASK | WAN0_MASK, + FIELD_PREP(WAN0_MASK, HSGMII_LAN_ETH_SRCPORT)); + airoha_fe_rmw(eth, + REG_SP_DFT_CPORT(HSGMII_LAN_ETH_SRCPORT >> 3), + SP_CPORT_ETH_MASK, + FIELD_PREP(SP_CPORT_ETH_MASK, FE_PSE_PORT_CDM2)); + } +} + static int airoha_dev_init(struct net_device *dev) { struct airoha_gdm_port *port = netdev_priv(dev); struct airoha_eth *eth = port->qdma->eth; + u32 pse_port; airoha_set_macaddr(port, dev->dev_addr); - airoha_set_gdm_port_fwd_cfg(eth, REG_GDM_FWD_CFG(port->id), - FE_PSE_PORT_PPE1); + + switch (port->id) { + case 3: + case 4: + airhoha_set_gdm2_loopback(port); + fallthrough; + case 2: + pse_port = FE_PSE_PORT_PPE2; + break; + default: + pse_port = FE_PSE_PORT_PPE1; + break; + } + + airoha_set_gdm_port_fwd_cfg(eth, REG_GDM_FWD_CFG(port->id), pse_port); return 0; } diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 60f5dd9908b41f95b5b627640ff6b25c8e3e1706..064941348bd91f1115082bf5e7734f34007953f8 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -67,6 +67,13 @@ enum { QDMA_INT_REG_MAX }; +enum { + HSGMII_LAN_PCIE0_SRCPORT = 0x16, + HSGMII_LAN_PCIE1_SRCPORT, + HSGMII_LAN_ETH_SRCPORT, + HSGMII_LAN_USB_SRCPORT, +}; + enum { XSI_PCIE0_VIP_PORT_MASK = BIT(22), XSI_PCIE1_VIP_PORT_MASK = BIT(23), diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c index 346bbc9408628067102f5988a1e8f22237110747..71761a0a4b0e93587151e2eae55c090992ce85a2 100644 --- a/drivers/net/ethernet/airoha/airoha_ppe.c +++ b/drivers/net/ethernet/airoha/airoha_ppe.c @@ -213,21 +213,22 @@ static int airoha_ppe_foe_entry_prepare(struct airoha_foe_entry *hwe, AIROHA_FOE_IB1_BIND_TTL; hwe->ib1 = val; - val = FIELD_PREP(AIROHA_FOE_IB2_PORT_AG, 0x1f); + val = FIELD_PREP(AIROHA_FOE_IB2_PORT_AG, 0x1f) | + AIROHA_FOE_IB2_PSE_QOS; if (dsa_port >= 0) val |= FIELD_PREP(AIROHA_FOE_IB2_NBQ, dsa_port); + if (dev) { struct airoha_gdm_port *port = netdev_priv(dev); u8 pse_port; - pse_port = port->id == 4 ? FE_PSE_PORT_GDM4 : port->id; + if (dsa_port >= 0) + pse_port = port->id == 4 ? 
FE_PSE_PORT_GDM4 : port->id; + else + pse_port = 2; /* uplink relies on GDM2 loopback */ val |= FIELD_PREP(AIROHA_FOE_IB2_PSE_PORT, pse_port); } - /* FIXME: implement QoS support setting pse_port to 2 (loopback) - * for uplink and setting qos bit in ib2 - */ - if (is_multicast_ether_addr(dest_mac)) val |= AIROHA_FOE_IB2_MULTICAST; diff --git a/drivers/net/ethernet/airoha/airoha_regs.h b/drivers/net/ethernet/airoha/airoha_regs.h index 6cc64c60953a3961b7c93dfa75a289a6f7a6599b..1aa06cdffe2320375e8710d58f2bbb056a330dfd 100644 --- a/drivers/net/ethernet/airoha/airoha_regs.h +++ b/drivers/net/ethernet/airoha/airoha_regs.h @@ -38,6 +38,12 @@ #define FE_RST_CORE_MASK BIT(0) #define REG_FE_FOE_TS 0x0010 + +#define REG_FE_WAN_PORT 0x0024 +#define WAN1_EN_MASK BIT(16) +#define WAN1_MASK GENMASK(12, 8) +#define WAN0_MASK GENMASK(4, 0) + #define REG_FE_WAN_MAC_H 0x0030 #define REG_FE_LAN_MAC_H 0x0040 @@ -126,6 +132,7 @@ #define GDM_IP4_CKSUM BIT(22) #define GDM_TCP_CKSUM BIT(21) #define GDM_UDP_CKSUM BIT(20) +#define GDM_STRIP_CRC BIT(16) #define GDM_UCFQ_MASK GENMASK(15, 12) #define GDM_BCFQ_MASK GENMASK(11, 8) #define GDM_MCFQ_MASK GENMASK(7, 4) @@ -139,6 +146,16 @@ #define GDM_SHORT_LEN_MASK GENMASK(13, 0) #define GDM_LONG_LEN_MASK GENMASK(29, 16) +#define REG_GDM_LPBK_CFG(_n) (GDM_BASE(_n) + 0x1c) +#define LPBK_GAP_MASK GENMASK(31, 24) +#define LPBK_LEN_MASK GENMASK(23, 10) +#define LPBK_CHAN_MASK GENMASK(8, 4) +#define LPBK_MODE_MASK GENMASK(3, 1) +#define LPBK_EN_MASK BIT(0) + +#define REG_GDM_TXCHN_EN(_n) (GDM_BASE(_n) + 0x24) +#define REG_GDM_RXCHN_EN(_n) (GDM_BASE(_n) + 0x28) + #define REG_FE_CPORT_CFG (GDM1_BASE + 0x40) #define FE_CPORT_PAD BIT(26) #define FE_CPORT_PORT_XFC_MASK BIT(25) @@ -351,6 +368,18 @@ #define REG_MC_VLAN_DATA 0x2108 +#define REG_SP_DFT_CPORT(_n) (0x20e0 + ((_n) << 2)) +#define SP_CPORT_PCIE1_MASK GENMASK(31, 28) +#define SP_CPORT_PCIE0_MASK GENMASK(27, 24) +#define SP_CPORT_USB_MASK GENMASK(7, 4) +#define SP_CPORT_ETH_MASK GENMASK(7, 4) + +#define REG_SRC_PORT_FC_MAP6 0x2298 +#define FC_ID_OF_SRC_PORT27_MASK GENMASK(28, 24) +#define FC_ID_OF_SRC_PORT26_MASK GENMASK(20, 16) +#define FC_ID_OF_SRC_PORT25_MASK GENMASK(12, 8) +#define FC_ID_OF_SRC_PORT24_MASK GENMASK(4, 0) + #define REG_CDM5_RX_OQ1_DROP_CNT 0x29d4 /* QDMA */ From patchwork Thu Feb 13 15:34:35 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 13973537 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 53842C021A0 for ; Thu, 13 Feb 2025 16:01:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:In-Reply-To:References :Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=nOjoRlfMEQ9pxD84BY5W+gihBY0ksOiBCcF88UG5xyA=; b=KxCM8BDbBZTB7r7RcxTVEf0x9i Y5vURDrQtaOOmdDi4pqLIDilVeCc97Jn9Grl1jpIaEXe1mrjkzPvsDlGhNui+jH4FFwI7wePvREfp 4IsUHTga5ufre4nJ+51aEGSDz4EeCzTSGeHdKK61ZLnuOtWO+mE7pZVmr3aKmtymynJeOXeGudl8/ 
From: Lorenzo Bianconi
Date: Thu, 13 Feb 2025 16:34:35 +0100
Subject: [PATCH net-next v4 16/16] net: airoha: Introduce PPE debugfs support
MIME-Version: 1.0
Message-Id: <20250213-airoha-en7581-flowtable-offload-v4-16-b69ca16d74db@kernel.org>
References: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org>
In-Reply-To: <20250213-airoha-en7581-flowtable-offload-v4-0-b69ca16d74db@kernel.org>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Felix Fietkau, Sean Wang, Matthias Brugger, AngeloGioacchino Del Regno, Philipp Zabel, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Lorenzo Bianconi, "Chester A. Unal", Daniel Golle, DENG Qingfang, Andrew Lunn, Vladimir Oltean
Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, upstream@airoha.com
X-Mailer: b4 0.14.2

Similar to the PPE support for Mediatek devices, introduce PPE debugfs
support in order to dump bound and unbound flows.
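The new airoha_ppe_debugfs.c file below builds on the standard seq_file plus
debugfs pattern; as a minimal sketch of that generic pattern (the demo_* names
and the demo_priv structure are illustrative only, not part of this driver):

	#include <linux/debugfs.h>
	#include <linux/seq_file.h>
	#include <linux/types.h>

	struct demo_priv {
		struct dentry *debugfs_dir;
		u32 num_flows;
	};

	static int demo_flows_show(struct seq_file *m, void *private)
	{
		/* m->private is the "data" pointer passed to debugfs_create_file() */
		struct demo_priv *priv = m->private;

		seq_printf(m, "flows: %u\n", priv->num_flows);

		return 0;
	}
	DEFINE_SHOW_ATTRIBUTE(demo_flows);

	static void demo_debugfs_init(struct demo_priv *priv)
	{
		/* DEFINE_SHOW_ATTRIBUTE(demo_flows) generated demo_flows_fops */
		priv->debugfs_dir = debugfs_create_dir("demo", NULL);
		debugfs_create_file("flows", 0444, priv->debugfs_dir, priv,
				    &demo_flows_fops);
	}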
Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/airoha/Makefile | 1 + drivers/net/ethernet/airoha/airoha_eth.h | 14 ++ drivers/net/ethernet/airoha/airoha_ppe.c | 17 ++- drivers/net/ethernet/airoha/airoha_ppe_debugfs.c | 175 +++++++++++++++++++++++ 4 files changed, 203 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/airoha/Makefile b/drivers/net/ethernet/airoha/Makefile index 6deff2f16229a7638be0737caa06282ba63183a4..94468053e34bef8fd155760e13745a8663592f4a 100644 --- a/drivers/net/ethernet/airoha/Makefile +++ b/drivers/net/ethernet/airoha/Makefile @@ -5,4 +5,5 @@ obj-$(CONFIG_NET_AIROHA) += airoha-eth.o airoha-eth-y := airoha_eth.o airoha_ppe.o +airoha-eth-$(CONFIG_DEBUG_FS) += airoha_ppe_debugfs.o obj-$(CONFIG_NET_AIROHA_NPU) += airoha_npu.o diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h index 064941348bd91f1115082bf5e7734f34007953f8..18b5fed7ff286b813b260fae121892f44eb1cf8c 100644 --- a/drivers/net/ethernet/airoha/airoha_eth.h +++ b/drivers/net/ethernet/airoha/airoha_eth.h @@ -7,6 +7,7 @@ #ifndef AIROHA_ETH_H #define AIROHA_ETH_H +#include #include #include #include @@ -482,6 +483,8 @@ struct airoha_ppe { struct hlist_head *foe_flow; u16 foe_check_time[PPE_NUM_ENTRIES]; + + struct dentry *debugfs_dir; }; struct airoha_eth { @@ -535,5 +538,16 @@ int airoha_ppe_setup_tc_block_cb(enum tc_setup_type type, void *type_data, void *cb_priv); int airoha_ppe_init(struct airoha_eth *eth); void airoha_ppe_deinit(struct airoha_eth *eth); +struct airoha_foe_entry *airoha_ppe_foe_get_entry(struct airoha_ppe *ppe, + u32 hash); + +#if CONFIG_DEBUG_FS +int airoha_ppe_debugfs_init(struct airoha_ppe *ppe); +#else +static inline int airoha_ppe_debugfs_init(struct airoha_ppe *ppe) +{ + return 0; +} +#endif #endif /* AIROHA_ETH_H */ diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c index 71761a0a4b0e93587151e2eae55c090992ce85a2..f28ae8735639959a97b99358cfc7e44c1bcac11d 100644 --- a/drivers/net/ethernet/airoha/airoha_ppe.c +++ b/drivers/net/ethernet/airoha/airoha_ppe.c @@ -377,8 +377,8 @@ static u32 airoha_ppe_foe_get_entry_hash(struct airoha_foe_entry *hwe) return hash; } -static struct airoha_foe_entry * -airoha_ppe_foe_get_entry(struct airoha_ppe *ppe, u32 hash) +struct airoha_foe_entry *airoha_ppe_foe_get_entry(struct airoha_ppe *ppe, + u32 hash) { if (hash < PPE_SRAM_NUM_ENTRIES) { u32 *hwe = ppe->foe + hash * sizeof(struct airoha_foe_entry); @@ -845,7 +845,7 @@ void airoha_ppe_check_skb(struct airoha_ppe *ppe, u16 hash) int airoha_ppe_init(struct airoha_eth *eth) { struct airoha_ppe *ppe; - int foe_size; + int foe_size, err; ppe = devm_kzalloc(eth->dev, sizeof(*ppe), GFP_KERNEL); if (!ppe) @@ -866,7 +866,15 @@ int airoha_ppe_init(struct airoha_eth *eth) if (!ppe->foe_flow) return -ENOMEM; - return rhashtable_init(ð->flow_table, &airoha_flow_table_params); + err = rhashtable_init(ð->flow_table, &airoha_flow_table_params); + if (err) + return err; + + err = airoha_ppe_debugfs_init(ppe); + if (err) + rhashtable_destroy(ð->flow_table); + + return err; } void airoha_ppe_deinit(struct airoha_eth *eth) @@ -882,4 +890,5 @@ void airoha_ppe_deinit(struct airoha_eth *eth) rcu_read_unlock(); rhashtable_destroy(ð->flow_table); + debugfs_remove(eth->ppe->debugfs_dir); } diff --git a/drivers/net/ethernet/airoha/airoha_ppe_debugfs.c b/drivers/net/ethernet/airoha/airoha_ppe_debugfs.c new file mode 100644 index 0000000000000000000000000000000000000000..f79788458903c93ca73d774edaca97f1e08f92a7 --- 
/dev/null +++ b/drivers/net/ethernet/airoha/airoha_ppe_debugfs.c @@ -0,0 +1,175 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025 AIROHA Inc + * Author: Lorenzo Bianconi + */ + +#include "airoha_eth.h" + +static void airoha_debugfs_ppe_print_tuple(struct seq_file *m, + void *src_addr, void *dest_addr, + u16 *src_port, u16 *dest_port, + bool ipv6) +{ + __be32 n_addr[IPV6_ADDR_WORDS]; + + if (ipv6) { + ipv6_addr_cpu_to_be32(n_addr, src_addr); + seq_printf(m, "%pI6", n_addr); + } else { + seq_printf(m, "%pI4h", src_addr); + } + if (src_port) + seq_printf(m, ":%d", *src_port); + + seq_puts(m, "->"); + + if (ipv6) { + ipv6_addr_cpu_to_be32(n_addr, dest_addr); + seq_printf(m, "%pI6", n_addr); + } else { + seq_printf(m, "%pI4h", dest_addr); + } + if (dest_port) + seq_printf(m, ":%d", *dest_port); +} + +static int airoha_ppe_debugfs_foe_show(struct seq_file *m, void *private, + bool bind) +{ + static const char *const ppe_type_str[] = { + [PPE_PKT_TYPE_IPV4_HNAPT] = "IPv4 5T", + [PPE_PKT_TYPE_IPV4_ROUTE] = "IPv4 3T", + [PPE_PKT_TYPE_BRIDGE] = "L2B", + [PPE_PKT_TYPE_IPV4_DSLITE] = "DS-LITE", + [PPE_PKT_TYPE_IPV6_ROUTE_3T] = "IPv6 3T", + [PPE_PKT_TYPE_IPV6_ROUTE_5T] = "IPv6 5T", + [PPE_PKT_TYPE_IPV6_6RD] = "6RD", + }; + static const char *const ppe_state_str[] = { + [AIROHA_FOE_STATE_INVALID] = "INV", + [AIROHA_FOE_STATE_UNBIND] = "UNB", + [AIROHA_FOE_STATE_BIND] = "BND", + [AIROHA_FOE_STATE_FIN] = "FIN", + }; + struct airoha_ppe *ppe = m->private; + int i; + + for (i = 0; i < PPE_NUM_ENTRIES; i++) { + const char *state_str, *type_str = "UNKNOWN"; + u16 *src_port = NULL, *dest_port = NULL; + struct airoha_foe_mac_info_common *l2; + unsigned char h_source[ETH_ALEN] = {}; + unsigned char h_dest[ETH_ALEN]; + struct airoha_foe_entry *hwe; + u32 type, state, ib2, data; + void *src_addr, *dest_addr; + bool ipv6 = false; + + hwe = airoha_ppe_foe_get_entry(ppe, i); + if (!hwe) + continue; + + state = FIELD_GET(AIROHA_FOE_IB1_STATE, hwe->ib1); + if (!state) + continue; + + if (bind && state != AIROHA_FOE_STATE_BIND) + continue; + + state_str = ppe_state_str[state % ARRAY_SIZE(ppe_state_str)]; + type = FIELD_GET(AIROHA_FOE_IB1_PACKET_TYPE, hwe->ib1); + if (type < ARRAY_SIZE(ppe_type_str) && ppe_type_str[type]) + type_str = ppe_type_str[type]; + + seq_printf(m, "%05x %s %7s", i, state_str, type_str); + + switch (type) { + case PPE_PKT_TYPE_IPV4_HNAPT: + case PPE_PKT_TYPE_IPV4_DSLITE: + src_port = &hwe->ipv4.orig_tuple.src_port; + dest_port = &hwe->ipv4.orig_tuple.dest_port; + fallthrough; + case PPE_PKT_TYPE_IPV4_ROUTE: + src_addr = &hwe->ipv4.orig_tuple.src_ip; + dest_addr = &hwe->ipv4.orig_tuple.dest_ip; + break; + case PPE_PKT_TYPE_IPV6_ROUTE_5T: + src_port = &hwe->ipv6.src_port; + dest_port = &hwe->ipv6.dest_port; + fallthrough; + case PPE_PKT_TYPE_IPV6_ROUTE_3T: + case PPE_PKT_TYPE_IPV6_6RD: + src_addr = &hwe->ipv6.src_ip; + dest_addr = &hwe->ipv6.dest_ip; + ipv6 = true; + break; + } + + seq_puts(m, " orig="); + airoha_debugfs_ppe_print_tuple(m, src_addr, dest_addr, + src_port, dest_port, ipv6); + + switch (type) { + case PPE_PKT_TYPE_IPV4_HNAPT: + case PPE_PKT_TYPE_IPV4_DSLITE: + src_port = &hwe->ipv4.new_tuple.src_port; + dest_port = &hwe->ipv4.new_tuple.dest_port; + fallthrough; + case PPE_PKT_TYPE_IPV4_ROUTE: + src_addr = &hwe->ipv4.new_tuple.src_ip; + dest_addr = &hwe->ipv4.new_tuple.dest_ip; + seq_puts(m, " new="); + airoha_debugfs_ppe_print_tuple(m, src_addr, dest_addr, + src_port, dest_port, + ipv6); + break; + } + + if (type >= PPE_PKT_TYPE_IPV6_ROUTE_3T) { + data = 
hwe->ipv6.data; + ib2 = hwe->ipv6.ib2; + l2 = &hwe->ipv6.l2; + } else { + data = hwe->ipv4.data; + ib2 = hwe->ipv4.ib2; + l2 = &hwe->ipv4.l2.common; + *((__be16 *)&h_source[4]) = + cpu_to_be16(hwe->ipv4.l2.src_mac_lo); + } + + *((__be32 *)h_dest) = cpu_to_be32(l2->dest_mac_hi); + *((__be16 *)&h_dest[4]) = cpu_to_be16(l2->dest_mac_lo); + *((__be32 *)h_source) = cpu_to_be32(l2->src_mac_hi); + + seq_printf(m, " eth=%pM->%pM etype=%04x data=%08x" + " vlan=%d,%d ib1=%08x ib2=%08x\n", + h_source, h_dest, l2->etype, data, + l2->vlan1, l2->vlan2, hwe->ib1, ib2); + } + + return 0; +} + +static int airoha_ppe_debugfs_foe_all_show(struct seq_file *m, void *private) +{ + return airoha_ppe_debugfs_foe_show(m, private, false); +} +DEFINE_SHOW_ATTRIBUTE(airoha_ppe_debugfs_foe_all); + +static int airoha_ppe_debugfs_foe_bind_show(struct seq_file *m, void *private) +{ + return airoha_ppe_debugfs_foe_show(m, private, true); +} +DEFINE_SHOW_ATTRIBUTE(airoha_ppe_debugfs_foe_bind); + +int airoha_ppe_debugfs_init(struct airoha_ppe *ppe) +{ + ppe->debugfs_dir = debugfs_create_dir("ppe", NULL); + debugfs_create_file("entries", 0444, ppe->debugfs_dir, ppe, + &airoha_ppe_debugfs_foe_all_fops); + debugfs_create_file("bind", 0444, ppe->debugfs_dir, ppe, + &airoha_ppe_debugfs_foe_bind_fops); + + return 0; +}
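The dump code above decodes the FOE words with the <linux/bitfield.h> helpers;
a minimal sketch of how those helpers pack and unpack a field such as
AIROHA_FOE_IB1_STATE follows (demo_pack_ib1() is illustrative only, not part
of the driver):

	#include <linux/bitfield.h>
	#include <linux/bits.h>

	#include "airoha_eth.h"

	/* Illustrative only: pack a FOE ib1 word and read one field back. */
	static u32 demo_pack_ib1(void)
	{
		u32 ib1;

		/* FIELD_PREP() shifts each value into its GENMASK()-defined field */
		ib1 = FIELD_PREP(AIROHA_FOE_IB1_STATE, AIROHA_FOE_STATE_BIND) |
		      FIELD_PREP(AIROHA_FOE_IB1_PACKET_TYPE, PPE_PKT_TYPE_IPV4_HNAPT) |
		      AIROHA_FOE_IB1_BIND_TTL;

		/* FIELD_GET() extracts the state again: AIROHA_FOE_STATE_BIND */
		return FIELD_GET(AIROHA_FOE_IB1_STATE, ib1);
	}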