From patchwork Wed Nov 9 16:34:15 2022
X-Patchwork-Submitter: Felix Fietkau
X-Patchwork-Id: 13037704
From: Felix Fietkau
To: netdev@vger.kernel.org, John Crispin, Sean Wang, Mark Lee, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Matthias Brugger
Cc: Vladimir Oltean, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 01/12] net: ethernet: mtk_eth_soc: account for vlan in rx header length
Date: Wed, 9 Nov 2022 17:34:15 +0100
Message-Id: <20221109163426.76164-2-nbd@nbd.name>
In-Reply-To: <20221109163426.76164-1-nbd@nbd.name>
References: <20221109163426.76164-1-nbd@nbd.name>

The network stack assumes that devices can handle an extra VLAN tag
without increasing the MTU. Reserve room for one VLAN tag in the rx
header length, so that full-MTU frames carrying a tag still fit into
the rx buffers.

Signed-off-by: Felix Fietkau
Reviewed-by: Vladimir Oltean
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 589f27ddc401..3d7cdc1efbbc 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -29,7 +29,7 @@
 #define MTK_TX_DMA_BUF_LEN_V2	0xffff
 #define MTK_DMA_SIZE		512
 #define MTK_MAC_COUNT		2
-#define MTK_RX_ETH_HLEN		(ETH_HLEN + ETH_FCS_LEN)
+#define MTK_RX_ETH_HLEN		(VLAN_ETH_HLEN + ETH_FCS_LEN)
 #define MTK_RX_HLEN		(NET_SKB_PAD + MTK_RX_ETH_HLEN + NET_IP_ALIGN)
 #define MTK_DMA_DUMMY_DESC	0xffffffff
 #define MTK_DEFAULT_MSG_ENABLE	(NETIF_MSG_DRV | \
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Matthias Brugger Cc: Vladimir Oltean , linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 02/12] net: ethernet: mtk_eth_soc: increase tx ring side for QDMA devices Date: Wed, 9 Nov 2022 17:34:16 +0100 Message-Id: <20221109163426.76164-3-nbd@nbd.name> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221109163426.76164-1-nbd@nbd.name> References: <20221109163426.76164-1-nbd@nbd.name> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org In order to use the hardware traffic shaper feature, a larger tx ring is needed, especially for the scratch ring, which the hardware shaper uses to reorder packets. Signed-off-by: Felix Fietkau --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 38 ++++++++++++--------- drivers/net/ethernet/mediatek/mtk_eth_soc.h | 1 + 2 files changed, 23 insertions(+), 16 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 789268b15106..02a57729db28 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -938,7 +938,7 @@ static int mtk_init_fq_dma(struct mtk_eth *eth) { const struct mtk_soc_data *soc = eth->soc; dma_addr_t phy_ring_tail; - int cnt = MTK_DMA_SIZE; + int cnt = MTK_QDMA_RING_SIZE; dma_addr_t dma_addr; int i; @@ -2202,19 +2202,25 @@ static int mtk_tx_alloc(struct mtk_eth *eth) struct mtk_tx_ring *ring = ð->tx_ring; int i, sz = soc->txrx.txd_size; struct mtk_tx_dma_v2 *txd; + int ring_size; - ring->buf = kcalloc(MTK_DMA_SIZE, sizeof(*ring->buf), + if (MTK_HAS_CAPS(soc->caps, MTK_QDMA)) + ring_size = MTK_QDMA_RING_SIZE; + else + ring_size = MTK_DMA_SIZE; + + ring->buf = kcalloc(ring_size, sizeof(*ring->buf), GFP_KERNEL); if (!ring->buf) goto no_tx_mem; - ring->dma = dma_alloc_coherent(eth->dma_dev, MTK_DMA_SIZE * sz, + ring->dma = dma_alloc_coherent(eth->dma_dev, ring_size * sz, &ring->phys, GFP_KERNEL); if (!ring->dma) goto no_tx_mem; - for (i = 0; i < MTK_DMA_SIZE; i++) { - int next = (i + 1) % MTK_DMA_SIZE; + for (i = 0; i < ring_size; i++) { + int next = (i + 1) % ring_size; u32 next_ptr = ring->phys + next * sz; txd = ring->dma + i * sz; @@ -2234,22 +2240,22 @@ static int mtk_tx_alloc(struct mtk_eth *eth) * descriptors in ring->dma_pdma. 
*/ if (!MTK_HAS_CAPS(soc->caps, MTK_QDMA)) { - ring->dma_pdma = dma_alloc_coherent(eth->dma_dev, MTK_DMA_SIZE * sz, + ring->dma_pdma = dma_alloc_coherent(eth->dma_dev, ring_size * sz, &ring->phys_pdma, GFP_KERNEL); if (!ring->dma_pdma) goto no_tx_mem; - for (i = 0; i < MTK_DMA_SIZE; i++) { + for (i = 0; i < ring_size; i++) { ring->dma_pdma[i].txd2 = TX_DMA_DESP2_DEF; ring->dma_pdma[i].txd4 = 0; } } - ring->dma_size = MTK_DMA_SIZE; - atomic_set(&ring->free_count, MTK_DMA_SIZE - 2); + ring->dma_size = ring_size; + atomic_set(&ring->free_count, ring_size - 2); ring->next_free = ring->dma; ring->last_free = (void *)txd; - ring->last_free_ptr = (u32)(ring->phys + ((MTK_DMA_SIZE - 1) * sz)); + ring->last_free_ptr = (u32)(ring->phys + ((ring_size - 1) * sz)); ring->thresh = MAX_SKB_FRAGS; /* make sure that all changes to the dma ring are flushed before we @@ -2261,14 +2267,14 @@ static int mtk_tx_alloc(struct mtk_eth *eth) mtk_w32(eth, ring->phys, soc->reg_map->qdma.ctx_ptr); mtk_w32(eth, ring->phys, soc->reg_map->qdma.dtx_ptr); mtk_w32(eth, - ring->phys + ((MTK_DMA_SIZE - 1) * sz), + ring->phys + ((ring_size - 1) * sz), soc->reg_map->qdma.crx_ptr); mtk_w32(eth, ring->last_free_ptr, soc->reg_map->qdma.drx_ptr); mtk_w32(eth, (QDMA_RES_THRES << 8) | QDMA_RES_THRES, soc->reg_map->qdma.qtx_cfg); } else { mtk_w32(eth, ring->phys_pdma, MT7628_TX_BASE_PTR0); - mtk_w32(eth, MTK_DMA_SIZE, MT7628_TX_MAX_CNT0); + mtk_w32(eth, ring_size, MT7628_TX_MAX_CNT0); mtk_w32(eth, 0, MT7628_TX_CTX_IDX0); mtk_w32(eth, MT7628_PST_DTX_IDX0, soc->reg_map->pdma.rst_idx); } @@ -2286,7 +2292,7 @@ static void mtk_tx_clean(struct mtk_eth *eth) int i; if (ring->buf) { - for (i = 0; i < MTK_DMA_SIZE; i++) + for (i = 0; i < ring->dma_size; i++) mtk_tx_unmap(eth, &ring->buf[i], NULL, false); kfree(ring->buf); ring->buf = NULL; @@ -2294,14 +2300,14 @@ static void mtk_tx_clean(struct mtk_eth *eth) if (ring->dma) { dma_free_coherent(eth->dma_dev, - MTK_DMA_SIZE * soc->txrx.txd_size, + ring->dma_size * soc->txrx.txd_size, ring->dma, ring->phys); ring->dma = NULL; } if (ring->dma_pdma) { dma_free_coherent(eth->dma_dev, - MTK_DMA_SIZE * soc->txrx.txd_size, + ring->dma_size * soc->txrx.txd_size, ring->dma_pdma, ring->phys_pdma); ring->dma_pdma = NULL; } @@ -2821,7 +2827,7 @@ static void mtk_dma_free(struct mtk_eth *eth) netdev_reset_queue(eth->netdev[i]); if (eth->scratch_ring) { dma_free_coherent(eth->dma_dev, - MTK_DMA_SIZE * soc->txrx.txd_size, + MTK_QDMA_RING_SIZE * soc->txrx.txd_size, eth->scratch_ring, eth->phy_scratch_ring); eth->scratch_ring = NULL; eth->phy_scratch_ring = 0; diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index 3d7cdc1efbbc..637d3c60507e 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -27,6 +27,7 @@ #define MTK_MAX_RX_LENGTH_2K 2048 #define MTK_TX_DMA_BUF_LEN 0x3fff #define MTK_TX_DMA_BUF_LEN_V2 0xffff +#define MTK_QDMA_RING_SIZE 2048 #define MTK_DMA_SIZE 512 #define MTK_MAC_COUNT 2 #define MTK_RX_ETH_HLEN (VLAN_ETH_HLEN + ETH_FCS_LEN) From patchwork Wed Nov 9 16:34:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Felix Fietkau X-Patchwork-Id: 13037701 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
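For scale, the memory cost of the bigger ring (editorial sketch; assumes
the 16-byte v1 tx descriptor, the real size comes from soc->txrx.txd_size):

	/* old: MTK_DMA_SIZE       (512)  * 16 B =  8 KiB per tx ring
	 * new: MTK_QDMA_RING_SIZE (2048) * 16 B = 32 KiB per tx ring,
	 * allocated once with dma_alloc_coherent(). PDMA-only SoCs such
	 * as MT7628 keep the old 512-entry size; the scratch/free queue
	 * ring (mtk_init_fq_dma) always uses the larger size.
	 */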
From patchwork Wed Nov 9 16:34:17 2022
X-Patchwork-Submitter: Felix Fietkau
X-Patchwork-Id: 13037701
From: Felix Fietkau
To: netdev@vger.kernel.org, John Crispin, Sean Wang, Mark Lee, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Matthias Brugger
Cc: Vladimir Oltean, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 03/12] net: ethernet: mtk_eth_soc: avoid port_mg assignment on MT7622 and newer
Date: Wed, 9 Nov 2022 17:34:17 +0100
Message-Id: <20221109163426.76164-4-nbd@nbd.name>
In-Reply-To: <20221109163426.76164-1-nbd@nbd.name>
References: <20221109163426.76164-1-nbd@nbd.name>

On newer chips, this field is unused and contains some bits related to
queue assignment. Initialize it to 0 in those cases.
Fix offload_version on MT7621 and MT7623, which still need the previous
value.

Signed-off-by: Felix Fietkau
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 4 ++--
 drivers/net/ethernet/mediatek/mtk_ppe.c     | 4 +++-
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index 02a57729db28..9f607c80f49c 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -4243,7 +4243,7 @@ static const struct mtk_soc_data mt7621_data = {
 	.hw_features = MTK_HW_FEATURES,
 	.required_clks = MT7621_CLKS_BITMAP,
 	.required_pctl = false,
-	.offload_version = 2,
+	.offload_version = 1,
 	.hash_offset = 2,
 	.foe_entry_size = sizeof(struct mtk_foe_entry) - 16,
 	.txrx = {
@@ -4282,7 +4282,7 @@ static const struct mtk_soc_data mt7623_data = {
 	.hw_features = MTK_HW_FEATURES,
 	.required_clks = MT7623_CLKS_BITMAP,
 	.required_pctl = true,
-	.offload_version = 2,
+	.offload_version = 1,
 	.hash_offset = 2,
 	.foe_entry_size = sizeof(struct mtk_foe_entry) - 16,
 	.txrx = {
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
index 2d8ca99f2467..3ee2bf53f9e5 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
@@ -175,6 +175,8 @@ int mtk_foe_entry_prepare(struct mtk_eth *eth, struct mtk_foe_entry *entry,
 		val = FIELD_PREP(MTK_FOE_IB2_DEST_PORT_V2, pse_port) |
 		      FIELD_PREP(MTK_FOE_IB2_PORT_AG_V2, 0xf);
 	} else {
+		int port_mg = eth->soc->offload_version > 1 ? 0 : 0x3f;
+
 		val = FIELD_PREP(MTK_FOE_IB1_STATE, MTK_FOE_STATE_BIND) |
 		      FIELD_PREP(MTK_FOE_IB1_PACKET_TYPE, type) |
 		      FIELD_PREP(MTK_FOE_IB1_UDP, l4proto == IPPROTO_UDP) |
@@ -182,7 +184,7 @@ int mtk_foe_entry_prepare(struct mtk_eth *eth, struct mtk_foe_entry *entry,
 		entry->ib1 = val;
 
 		val = FIELD_PREP(MTK_FOE_IB2_DEST_PORT, pse_port) |
-		      FIELD_PREP(MTK_FOE_IB2_PORT_MG, 0x3f) |
+		      FIELD_PREP(MTK_FOE_IB2_PORT_MG, port_mg) |
 		      FIELD_PREP(MTK_FOE_IB2_PORT_AG, 0x1f);
 	}
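How the new port_mg value lands in the ib2 word (editorial sketch; the
field position below is illustrative, not copied from mtk_ppe.h):

	#include <linux/bitfield.h>

	#define EXAMPLE_PORT_MG	GENMASK(11, 6)	/* 6-bit field; 0x3f fills it */

	static u32 example_ib2_port_mg(int offload_version)
	{
		/* offload_version 1 (MT7621/MT7623): keep the old all-ones
		 * value; offload_version > 1: the bits are repurposed for
		 * queue assignment, so leave them zeroed.
		 */
		int port_mg = offload_version > 1 ? 0 : 0x3f;

		return FIELD_PREP(EXAMPLE_PORT_MG, port_mg);
	}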
From patchwork Wed Nov 9 16:34:18 2022
X-Patchwork-Submitter: Felix Fietkau
X-Patchwork-Id: 13037707
From: Felix Fietkau
To: netdev@vger.kernel.org, John Crispin, Sean Wang, Mark Lee, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Matthias Brugger, Russell King
Cc: Vladimir Oltean, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 04/12] net: ethernet: mtk_eth_soc: implement multi-queue support for per-port queues
Date: Wed, 9 Nov 2022 17:34:18 +0100
Message-Id: <20221109163426.76164-5-nbd@nbd.name>
In-Reply-To: <20221109163426.76164-1-nbd@nbd.name>
References: <20221109163426.76164-1-nbd@nbd.name>

When sending traffic to multiple ports with different link speeds,
queued packets to one port can drown out tx to other ports. In order to
better handle transmission to multiple ports, use the hardware shaper
feature to implement weighted fair queueing between ports. Weight and
maximum rate are automatically adjusted based on the link speed of the
port.
The first 3 queues are unrestricted and reserved for non-DSA direct tx
on GMAC ports. The following queues are automatically assigned by the
MTK DSA tag driver based on the target port number. The PPE offload code
configures the queues for offloaded traffic in the same way.
This feature is only supported on devices supporting QDMA. All queues
still share the same DMA ring and descriptor pool.
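To make the queue layout described above concrete (editorial sketch; the
+3 offset matches mtk_select_queue() and mtk_device_event() in the diff
below):

	/* QDMA hardware tx queue numbering used by this series:
	 *
	 *   queue 0..2    direct tx on GMAC ports (queue == mac->id),
	 *                 no rate limit
	 *   queue 3 + N   traffic for DSA switch port N, shaped to that
	 *                 port's negotiated link speed
	 */
	static unsigned int example_queue(bool dsa, int mac_id, int dsa_port)
	{
		return dsa ? 3 + dsa_port : mac_id;
	}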
Signed-off-by: Felix Fietkau
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 281 ++++++++++++++++----
 drivers/net/ethernet/mediatek/mtk_eth_soc.h |  26 +-
 2 files changed, 258 insertions(+), 49 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index 9f607c80f49c..92bdd69eed2e 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -54,6 +54,7 @@ static const struct mtk_reg_map mtk_reg_map = {
 	},
 	.qdma = {
 		.qtx_cfg	= 0x1800,
+		.qtx_sch	= 0x1804,
 		.rx_ptr		= 0x1900,
 		.rx_cnt_cfg	= 0x1904,
 		.qcrx_ptr	= 0x1908,
@@ -61,6 +62,7 @@ static const struct mtk_reg_map mtk_reg_map = {
 		.rst_idx	= 0x1a08,
 		.delay_irq	= 0x1a0c,
 		.fc_th		= 0x1a10,
+		.tx_sch_rate	= 0x1a14,
 		.int_grp	= 0x1a20,
 		.hred		= 0x1a44,
 		.ctx_ptr	= 0x1b00,
@@ -113,6 +115,7 @@ static const struct mtk_reg_map mt7986_reg_map = {
 	},
 	.qdma = {
 		.qtx_cfg	= 0x4400,
+		.qtx_sch	= 0x4404,
 		.rx_ptr		= 0x4500,
 		.rx_cnt_cfg	= 0x4504,
 		.qcrx_ptr	= 0x4508,
@@ -130,6 +133,7 @@ static const struct mtk_reg_map mt7986_reg_map = {
 		.fq_tail	= 0x4724,
 		.fq_count	= 0x4728,
 		.fq_blen	= 0x472c,
+		.tx_sch_rate	= 0x4798,
 	},
 	.gdm1_cnt		= 0x1c00,
 	.gdma_to_ppe	= 0x3333,
@@ -613,6 +617,75 @@ static void mtk_mac_link_down(struct phylink_config *config, unsigned int mode,
 	mtk_w32(mac->hw, mcr, MTK_MAC_MCR(mac->id));
 }
 
+static void mtk_set_queue_speed(struct mtk_eth *eth, unsigned int idx,
+				int speed)
+{
+	const struct mtk_soc_data *soc = eth->soc;
+	u32 ofs, val;
+
+	if (!MTK_HAS_CAPS(soc->caps, MTK_QDMA))
+		return;
+
+	val = MTK_QTX_SCH_MIN_RATE_EN |
+	      /* minimum: 10 Mbps */
+	      FIELD_PREP(MTK_QTX_SCH_MIN_RATE_MAN, 1) |
+	      FIELD_PREP(MTK_QTX_SCH_MIN_RATE_EXP, 4) |
+	      MTK_QTX_SCH_LEAKY_BUCKET_SIZE;
+	if (!MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2))
+		val |= MTK_QTX_SCH_LEAKY_BUCKET_EN;
+
+	if (IS_ENABLED(CONFIG_SOC_MT7621)) {
+		switch (speed) {
+		case SPEED_10:
+			val |= MTK_QTX_SCH_MAX_RATE_EN |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 103) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 2) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 1);
+			break;
+		case SPEED_100:
+			val |= MTK_QTX_SCH_MAX_RATE_EN |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 103) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 3) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 1);
+			break;
+		case SPEED_1000:
+			val |= MTK_QTX_SCH_MAX_RATE_EN |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 105) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 4) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 10);
+			break;
+		default:
+			break;
+		}
+	} else {
+		switch (speed) {
+		case SPEED_10:
+			val |= MTK_QTX_SCH_MAX_RATE_EN |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 1) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 4) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 1);
+			break;
+		case SPEED_100:
+			val |= MTK_QTX_SCH_MAX_RATE_EN |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 1) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 5) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 1);
+			break;
+		case SPEED_1000:
+			val |= MTK_QTX_SCH_MAX_RATE_EN |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 10) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 5) |
+			       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 10);
+			break;
+		default:
+			break;
+		}
+	}
+
+	ofs = MTK_QTX_OFFSET * idx;
+	mtk_w32(eth, val, soc->reg_map->qdma.qtx_sch + ofs);
+}
+
 static void mtk_mac_link_up(struct phylink_config *config,
 			    struct phy_device *phy,
 			    unsigned int mode, phy_interface_t interface,
@@ -638,6 +711,8 @@ static void mtk_mac_link_up(struct phylink_config *config,
 		break;
 	}
 
+	mtk_set_queue_speed(mac->hw, mac->id, speed);
+
 	/* Configure duplex */
 	if (duplex == DUPLEX_FULL)
 		mcr |= MAC_MCR_FORCE_DPX;
@@ -1099,7 +1174,8 @@ static void mtk_tx_set_dma_desc_v1(struct net_device *dev, void *txd,
 
 	WRITE_ONCE(desc->txd1, info->addr);
 
-	data = TX_DMA_SWC | TX_DMA_PLEN0(info->size);
+	data = TX_DMA_SWC | TX_DMA_PLEN0(info->size) |
+	       FIELD_PREP(TX_DMA_PQID, info->qid);
 	if (info->last)
 		data |= TX_DMA_LS0;
 	WRITE_ONCE(desc->txd3, data);
@@ -1133,9 +1209,6 @@ static void mtk_tx_set_dma_desc_v2(struct net_device *dev, void *txd,
 		data |= TX_DMA_LS0;
 	WRITE_ONCE(desc->txd3, data);
 
-	if (!info->qid && mac->id)
-		info->qid = MTK_QDMA_GMAC2_QID;
-
 	data = (mac->id + 1) << TX_DMA_FPORT_SHIFT_V2; /* forward port */
 	data |= TX_DMA_SWC_V2 | QID_BITS_V2(info->qid);
 	WRITE_ONCE(desc->txd4, data);
@@ -1179,11 +1252,12 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
 		.gso = gso,
 		.csum = skb->ip_summed == CHECKSUM_PARTIAL,
 		.vlan = skb_vlan_tag_present(skb),
-		.qid = skb->mark & MTK_QDMA_TX_MASK,
+		.qid = skb_get_queue_mapping(skb),
 		.vlan_tci = skb_vlan_tag_get(skb),
 		.first = true,
 		.last = !skb_is_nonlinear(skb),
 	};
+	struct netdev_queue *txq;
 	struct mtk_mac *mac = netdev_priv(dev);
 	struct mtk_eth *eth = mac->hw;
 	const struct mtk_soc_data *soc = eth->soc;
@@ -1191,8 +1265,10 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
 	struct mtk_tx_dma *itxd_pdma, *txd_pdma;
 	struct mtk_tx_buf *itx_buf, *tx_buf;
 	int i, n_desc = 1;
+	int queue = skb_get_queue_mapping(skb);
 	int k = 0;
 
+	txq = netdev_get_tx_queue(dev, queue);
 	itxd = ring->next_free;
 	itxd_pdma = qdma_to_pdma(ring, itxd);
 	if (itxd == ring->last_free)
@@ -1241,7 +1317,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
 			memset(&txd_info, 0, sizeof(struct mtk_tx_dma_desc_info));
 			txd_info.size = min_t(unsigned int, frag_size,
 					      soc->txrx.dma_max_len);
-			txd_info.qid = skb->mark & MTK_QDMA_TX_MASK;
+			txd_info.qid = queue;
 			txd_info.last = i == skb_shinfo(skb)->nr_frags - 1 &&
 					!(frag_size - txd_info.size);
 			txd_info.addr = skb_frag_dma_map(eth->dma_dev, frag,
@@ -1280,7 +1356,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
 			txd_pdma->txd2 |= TX_DMA_LS1;
 	}
 
-	netdev_sent_queue(dev, skb->len);
+	netdev_tx_sent_queue(txq, skb->len);
 	skb_tx_timestamp(skb);
 
 	ring->next_free = mtk_qdma_phys_to_virt(ring, txd->txd2);
@@ -1292,8 +1368,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
 	wmb();
 
 	if (MTK_HAS_CAPS(soc->caps, MTK_QDMA)) {
-		if (netif_xmit_stopped(netdev_get_tx_queue(dev, 0)) ||
-		    !netdev_xmit_more())
+		if (netif_xmit_stopped(txq) || !netdev_xmit_more())
 			mtk_w32(eth, txd->txd2, soc->reg_map->qdma.ctx_ptr);
 	} else {
 		int next_idx;
@@ -1362,7 +1437,7 @@ static void mtk_wake_queue(struct mtk_eth *eth)
 	for (i = 0; i < MTK_MAC_COUNT; i++) {
 		if (!eth->netdev[i])
 			continue;
-		netif_wake_queue(eth->netdev[i]);
+		netif_tx_wake_all_queues(eth->netdev[i]);
 	}
 }
 
@@ -1386,7 +1461,7 @@ static netdev_tx_t mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	tx_num = mtk_cal_txd_req(eth, skb);
 	if (unlikely(atomic_read(&ring->free_count) <= tx_num)) {
-		netif_stop_queue(dev);
+		netif_tx_stop_all_queues(dev);
 		netif_err(eth, tx_queued, dev,
 			  "Tx Ring full when queue awake!\n");
 		spin_unlock(&eth->page_lock);
@@ -1412,7 +1487,7 @@ static netdev_tx_t mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 
 	if (unlikely(atomic_read(&ring->free_count) <= ring->thresh))
-		netif_stop_queue(dev);
+		netif_tx_stop_all_queues(dev);
 
 	spin_unlock(&eth->page_lock);
 
@@ -1579,10 +1654,12 @@ static int mtk_xdp_submit_frame(struct mtk_eth *eth, struct xdp_frame *xdpf,
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf);
 	const struct mtk_soc_data *soc = eth->soc;
 	struct mtk_tx_ring *ring = &eth->tx_ring;
+	struct mtk_mac *mac = netdev_priv(dev);
 	struct mtk_tx_dma_desc_info txd_info = {
 		.size	= xdpf->len,
 		.first	= true,
 		.last	= !xdp_frame_has_frags(xdpf),
+		.qid	= mac->id,
 	};
 	int err, index = 0, n_desc = 1, nr_frags;
 	struct mtk_tx_buf *htx_buf, *tx_buf;
@@ -1632,6 +1709,7 @@ static int mtk_xdp_submit_frame(struct mtk_eth *eth, struct xdp_frame *xdpf,
 		memset(&txd_info, 0, sizeof(struct mtk_tx_dma_desc_info));
 		txd_info.size = skb_frag_size(&sinfo->frags[index]);
 		txd_info.last = index + 1 == nr_frags;
+		txd_info.qid = mac->id;
 		data = skb_frag_address(&sinfo->frags[index]);
 		index++;
 
@@ -1986,8 +2064,46 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
 	return done;
 }
 
+struct mtk_poll_state {
+	struct netdev_queue *txq;
+	unsigned int total;
+	unsigned int done;
+	unsigned int bytes;
+};
+
+static void
+mtk_poll_tx_done(struct mtk_eth *eth, struct mtk_poll_state *state, u8 mac,
+		 struct sk_buff *skb)
+{
+	struct netdev_queue *txq;
+	struct net_device *dev;
+	unsigned int bytes = skb->len;
+
+	state->total++;
+	eth->tx_packets++;
+	eth->tx_bytes += bytes;
+
+	dev = eth->netdev[mac];
+	if (!dev)
+		return;
+
+	txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));
+	if (state->txq == txq) {
+		state->done++;
+		state->bytes += bytes;
+		return;
+	}
+
+	if (state->txq)
+		netdev_tx_completed_queue(state->txq, state->done, state->bytes);
+
+	state->txq = txq;
+	state->done = 1;
+	state->bytes = bytes;
+}
+
 static int mtk_poll_tx_qdma(struct mtk_eth *eth, int budget,
-			    unsigned int *done, unsigned int *bytes)
+			    struct mtk_poll_state *state)
 {
 	const struct mtk_reg_map *reg_map = eth->soc->reg_map;
 	struct mtk_tx_ring *ring = &eth->tx_ring;
@@ -2019,12 +2135,9 @@ static int mtk_poll_tx_qdma(struct mtk_eth *eth, int budget,
 			break;
 
 		if (tx_buf->data != (void *)MTK_DMA_DUMMY_DESC) {
-			if (tx_buf->type == MTK_TYPE_SKB) {
-				struct sk_buff *skb = tx_buf->data;
+			if (tx_buf->type == MTK_TYPE_SKB)
+				mtk_poll_tx_done(eth, state, mac, tx_buf->data);
 
-				bytes[mac] += skb->len;
-				done[mac]++;
-			}
 			budget--;
 		}
 		mtk_tx_unmap(eth, tx_buf, &bq, true);
@@ -2043,7 +2156,7 @@ static int mtk_poll_tx_qdma(struct mtk_eth *eth, int budget,
 }
 
 static int mtk_poll_tx_pdma(struct mtk_eth *eth, int budget,
-			    unsigned int *done, unsigned int *bytes)
+			    struct mtk_poll_state *state)
 {
 	struct mtk_tx_ring *ring = &eth->tx_ring;
 	struct mtk_tx_buf *tx_buf;
@@ -2061,12 +2174,8 @@ static int mtk_poll_tx_pdma(struct mtk_eth *eth, int budget,
 			break;
 
 		if (tx_buf->data != (void *)MTK_DMA_DUMMY_DESC) {
-			if (tx_buf->type == MTK_TYPE_SKB) {
-				struct sk_buff *skb = tx_buf->data;
-
-				bytes[0] += skb->len;
-				done[0]++;
-			}
+			if (tx_buf->type == MTK_TYPE_SKB)
+				mtk_poll_tx_done(eth, state, 0, tx_buf->data);
 			budget--;
 		}
 		mtk_tx_unmap(eth, tx_buf, &bq, true);
@@ -2088,26 +2197,15 @@ static int mtk_poll_tx(struct mtk_eth *eth, int budget)
 {
 	struct mtk_tx_ring *ring = &eth->tx_ring;
 	struct dim_sample dim_sample = {};
-	unsigned int done[MTK_MAX_DEVS];
-	unsigned int bytes[MTK_MAX_DEVS];
-	int total = 0, i;
-
-	memset(done, 0, sizeof(done));
-	memset(bytes, 0, sizeof(bytes));
+	struct mtk_poll_state state = {};
 
 	if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA))
-		budget = mtk_poll_tx_qdma(eth, budget, done, bytes);
+		budget = mtk_poll_tx_qdma(eth, budget, &state);
 	else
-		budget = mtk_poll_tx_pdma(eth, budget, done, bytes);
+		budget = mtk_poll_tx_pdma(eth, budget, &state);
 
-	for (i = 0; i < MTK_MAC_COUNT; i++) {
-		if (!eth->netdev[i] || !done[i])
-			continue;
-		netdev_completed_queue(eth->netdev[i], done[i], bytes[i]);
-		total += done[i];
-		eth->tx_packets += done[i];
-		eth->tx_bytes += bytes[i];
-	}
+	if (state.txq)
+		netdev_tx_completed_queue(state.txq, state.done, state.bytes);
 
 	dim_update_sample(eth->tx_events, eth->tx_packets, eth->tx_bytes,
 			  &dim_sample);
@@ -2117,7 +2215,7 @@ static int mtk_poll_tx(struct mtk_eth *eth, int budget)
 	    (atomic_read(&ring->free_count) > ring->thresh))
 		mtk_wake_queue(eth);
 
-	return total;
+	return state.total;
 }
 
 static void mtk_handle_status_irq(struct mtk_eth *eth)
@@ -2203,6 +2301,7 @@ static int mtk_tx_alloc(struct mtk_eth *eth)
 	int i, sz = soc->txrx.txd_size;
 	struct mtk_tx_dma_v2 *txd;
 	int ring_size;
+	u32 ofs, val;
 
 	if (MTK_HAS_CAPS(soc->caps, MTK_QDMA))
 		ring_size = MTK_QDMA_RING_SIZE;
@@ -2270,8 +2369,25 @@ static int mtk_tx_alloc(struct mtk_eth *eth)
 			ring->phys + ((ring_size - 1) * sz),
 			soc->reg_map->qdma.crx_ptr);
 		mtk_w32(eth, ring->last_free_ptr, soc->reg_map->qdma.drx_ptr);
-		mtk_w32(eth, (QDMA_RES_THRES << 8) | QDMA_RES_THRES,
-			soc->reg_map->qdma.qtx_cfg);
+
+		for (i = 0, ofs = 0; i < MTK_QDMA_NUM_QUEUES; i++) {
+			val = (QDMA_RES_THRES << 8) | QDMA_RES_THRES;
+			mtk_w32(eth, val, soc->reg_map->qdma.qtx_cfg + ofs);
+
+			val = MTK_QTX_SCH_MIN_RATE_EN |
+			      /* minimum: 10 Mbps */
+			      FIELD_PREP(MTK_QTX_SCH_MIN_RATE_MAN, 1) |
+			      FIELD_PREP(MTK_QTX_SCH_MIN_RATE_EXP, 4) |
+			      MTK_QTX_SCH_LEAKY_BUCKET_SIZE;
+			if (!MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2))
+				val |= MTK_QTX_SCH_LEAKY_BUCKET_EN;
+			mtk_w32(eth, val, soc->reg_map->qdma.qtx_sch + ofs);
+			ofs += MTK_QTX_OFFSET;
+		}
+		val = MTK_QDMA_TX_SCH_MAX_WFQ | (MTK_QDMA_TX_SCH_MAX_WFQ << 16);
+		mtk_w32(eth, val, soc->reg_map->qdma.tx_sch_rate);
+		if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2))
+			mtk_w32(eth, val, soc->reg_map->qdma.tx_sch_rate + 4);
 	} else {
 		mtk_w32(eth, ring->phys_pdma, MT7628_TX_BASE_PTR0);
 		mtk_w32(eth, ring_size, MT7628_TX_MAX_CNT0);
@@ -2936,7 +3052,7 @@ static int mtk_start_dma(struct mtk_eth *eth)
 		if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2))
 			val |= MTK_MUTLI_CNT | MTK_RESV_BUF |
 			       MTK_WCOMP_EN | MTK_DMAD_WR_WDONE |
-			       MTK_CHK_DDONE_EN;
+			       MTK_CHK_DDONE_EN | MTK_LEAKY_BUCKET_EN;
 		else
 			val |= MTK_RX_BT_32DWORDS;
 		mtk_w32(eth, val, reg_map->qdma.glo_cfg);
@@ -2982,6 +3098,45 @@ static void mtk_gdm_config(struct mtk_eth *eth, u32 config)
 	mtk_w32(eth, 0, MTK_RST_GL);
 }
 
+static int mtk_device_event(struct notifier_block *n, unsigned long event, void *ptr)
+{
+	struct mtk_mac *mac = container_of(n, struct mtk_mac, device_notifier);
+	struct mtk_eth *eth = mac->hw;
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+	struct ethtool_link_ksettings s;
+	struct net_device *ldev;
+	struct list_head *iter;
+	struct dsa_port *dp;
+
+	if (event != NETDEV_CHANGE)
+		return NOTIFY_DONE;
+
+	netdev_for_each_lower_dev(dev, ldev, iter) {
+		if (netdev_priv(ldev) == mac)
+			goto found;
+	}
+
+	return NOTIFY_DONE;
+
+found:
+	if (!dsa_slave_dev_check(dev))
+		return NOTIFY_DONE;
+
+	if (__ethtool_get_link_ksettings(dev, &s))
+		return NOTIFY_DONE;
+
+	if (s.base.speed == 0 || s.base.speed == ((__u32)-1))
+		return NOTIFY_DONE;
+
+	dp = dsa_port_from_netdev(dev);
+	if (dp->index >= MTK_QDMA_NUM_QUEUES)
+		return NOTIFY_DONE;
+
+	mtk_set_queue_speed(eth, dp->index + 3, s.base.speed);
+
+	return NOTIFY_DONE;
+}
+
 static int mtk_open(struct net_device *dev)
 {
 	struct mtk_mac *mac = netdev_priv(dev);
@@ -3022,7 +3177,8 @@ static int mtk_open(struct net_device *dev)
 		refcount_inc(&eth->dma_refcnt);
 
 	phylink_start(mac->phylink);
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
+
 	return 0;
 }
 
@@ -3538,8 +3694,12 @@ static int mtk_unreg_dev(struct mtk_eth *eth)
 	int i;
 
 	for (i = 0; i < MTK_MAC_COUNT; i++) {
+		struct mtk_mac *mac;
 		if (!eth->netdev[i])
 			continue;
+		mac = netdev_priv(eth->netdev[i]);
+		if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA))
+			unregister_netdevice_notifier(&mac->device_notifier);
 		unregister_netdev(eth->netdev[i]);
 	}
 
@@ -3755,6 +3915,23 @@ static int mtk_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
 	return ret;
 }
 
+static u16 mtk_select_queue(struct net_device *dev, struct sk_buff *skb,
+			    struct net_device *sb_dev)
+{
+	struct mtk_mac *mac = netdev_priv(dev);
+	unsigned int queue = 0;
+
+	if (netdev_uses_dsa(dev))
+		queue = skb_get_queue_mapping(skb) + 3;
+	else
+		queue = mac->id;
+
+	if (queue >= dev->num_tx_queues)
+		queue = 0;
+
+	return queue;
+}
+
 static const struct ethtool_ops mtk_ethtool_ops = {
 	.get_link_ksettings	= mtk_get_link_ksettings,
 	.set_link_ksettings	= mtk_set_link_ksettings,
@@ -3790,6 +3967,7 @@ static const struct net_device_ops mtk_netdev_ops = {
 	.ndo_setup_tc		= mtk_eth_setup_tc,
 	.ndo_bpf		= mtk_xdp,
 	.ndo_xdp_xmit		= mtk_xdp_xmit,
+	.ndo_select_queue	= mtk_select_queue,
 };
 
 static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
@@ -3799,6 +3977,7 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
 	struct phylink *phylink;
 	struct mtk_mac *mac;
 	int id, err;
+	int txqs = 1;
 
 	if (!_id) {
 		dev_err(eth->dev, "missing mac id\n");
@@ -3816,7 +3995,10 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
 		return -EINVAL;
 	}
 
-	eth->netdev[id] = alloc_etherdev(sizeof(*mac));
+	if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA))
+		txqs = MTK_QDMA_NUM_QUEUES;
+
+	eth->netdev[id] = alloc_etherdev_mqs(sizeof(*mac), txqs, 1);
 	if (!eth->netdev[id]) {
 		dev_err(eth->dev, "alloc_etherdev failed\n");
 		return -ENOMEM;
@@ -3913,6 +4095,11 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
 	else
 		eth->netdev[id]->max_mtu = MTK_MAX_RX_LENGTH_2K - MTK_RX_ETH_HLEN;
 
+	if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) {
+		mac->device_notifier.notifier_call = mtk_device_event;
+		register_netdevice_notifier(&mac->device_notifier);
+	}
+
 	return 0;
 
 free_netdev:
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 637d3c60507e..c6a39a249fb5 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -22,6 +22,7 @@
 #include
 #include "mtk_ppe.h"
 
+#define MTK_QDMA_NUM_QUEUES	16
 #define MTK_QDMA_PAGE_SIZE	2048
 #define MTK_MAX_RX_LENGTH	1536
 #define MTK_MAX_RX_LENGTH_2K	2048
@@ -203,8 +204,26 @@
 #define MTK_RING_MAX_AGG_CNT_H		((MTK_HW_LRO_MAX_AGG_CNT >> 6) & 0x3)
 
 /* QDMA TX Queue Configuration Registers */
+#define MTK_QTX_OFFSET		0x10
 #define QDMA_RES_THRES		4
 
+/* QDMA Tx Queue Scheduler Configuration Registers */
+#define MTK_QTX_SCH_TX_SEL		BIT(31)
+#define MTK_QTX_SCH_TX_SEL_V2		GENMASK(31, 30)
+
+#define MTK_QTX_SCH_LEAKY_BUCKET_EN	BIT(30)
+#define MTK_QTX_SCH_LEAKY_BUCKET_SIZE	GENMASK(29, 28)
+#define MTK_QTX_SCH_MIN_RATE_EN		BIT(27)
+#define MTK_QTX_SCH_MIN_RATE_MAN	GENMASK(26, 20)
+#define MTK_QTX_SCH_MIN_RATE_EXP	GENMASK(19, 16)
+#define MTK_QTX_SCH_MAX_RATE_WEIGHT	GENMASK(15, 12)
+#define MTK_QTX_SCH_MAX_RATE_EN		BIT(11)
+#define MTK_QTX_SCH_MAX_RATE_MAN	GENMASK(10, 4)
+#define MTK_QTX_SCH_MAX_RATE_EXP	GENMASK(3, 0)
+
+/* QDMA TX Scheduler Rate Control Register */
+#define MTK_QDMA_TX_SCH_MAX_WFQ	BIT(15)
+
 /* QDMA Global Configuration Register */
 #define MTK_RX_2B_OFFSET	BIT(31)
 #define MTK_RX_BT_32DWORDS	(3 << 11)
@@ -223,6 +242,7 @@
 #define MTK_WCOMP_EN		BIT(24)
 #define MTK_RESV_BUF		(0x40 << 16)
 #define MTK_MUTLI_CNT		(0x4 << 12)
+#define MTK_LEAKY_BUCKET_EN	BIT(11)
 
 /* QDMA Flow Control Register */
 #define FC_THRES_DROP_MODE	BIT(20)
@@ -251,8 +271,6 @@
 #define MTK_STAT_OFFSET		0x40
 
 /* QDMA TX NUM */
-#define MTK_QDMA_TX_NUM		16
-#define MTK_QDMA_TX_MASK	(MTK_QDMA_TX_NUM - 1)
 #define QID_BITS_V2(x)		(((x) & 0x3f) << 16)
 #define MTK_QDMA_GMAC2_QID	8
 
@@ -282,6 +300,7 @@
 #define TX_DMA_PLEN0(x)		(((x) & eth->soc->txrx.dma_max_len) << eth->soc->txrx.dma_len_offset)
 #define TX_DMA_PLEN1(x)		((x) & eth->soc->txrx.dma_max_len)
 #define TX_DMA_SWC		BIT(14)
+#define TX_DMA_PQID		GENMASK(3, 0)
 
 /* PDMA on MT7628 */
 #define TX_DMA_DONE		BIT(31)
@@ -940,6 +959,7 @@ struct mtk_reg_map {
 	} pdma;
 	struct {
 		u32	qtx_cfg;	/* tx queue configuration */
+		u32	qtx_sch;	/* tx queue scheduler configuration */
 		u32	rx_ptr;		/* rx base pointer */
 		u32	rx_cnt_cfg;	/* rx max count configuration */
 		u32	qcrx_ptr;	/* rx cpu pointer */
@@ -957,6 +977,7 @@ struct mtk_reg_map {
 		u32	fq_tail;	/* fq tail pointer */
 		u32	fq_count;	/* fq free page count */
 		u32	fq_blen;	/* fq free page buffer length */
+		u32	tx_sch_rate;	/* tx scheduler rate control registers */
 	} qdma;
 	u32	gdm1_cnt;
 	u32	gdma_to_ppe;
@@ -1148,6 +1169,7 @@ struct mtk_mac {
 	__be32				hwlro_ip[MTK_MAX_LRO_IP_CNT];
 	int				hwlro_ip_cnt;
 	unsigned int			syscfg0;
+	struct notifier_block		device_notifier;
 };
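An editorial aside on the MIN/MAX_RATE mantissa/exponent pairs above: the
chosen constants are consistent with rate[kbit/s] = MAN * 10^EXP (the
10 Mbit/s minimum is 1 * 10^4; 1 Gbit/s is 10 * 10^5, or 105 * 10^4 on
MT7621). A hypothetical helper that produces such an encoding:

	/* Not part of the patch: factor a rate in kbit/s into the 7-bit
	 * mantissa and 4-bit exponent used by mtk_set_queue_speed() above.
	 */
	static u32 example_encode_rate(u32 kbps)
	{
		u32 man = kbps, exp = 0;

		while (man > 127 && exp < 15) {
			man /= 10;
			exp++;
		}
		return FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, man) |
		       FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, exp);
	}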
From patchwork Wed Nov 9 16:34:19 2022
X-Patchwork-Submitter: Felix Fietkau
X-Patchwork-Id: 13037703
From: Felix Fietkau
To: netdev@vger.kernel.org, Sean Wang, Landen Chao, DENG Qingfang, Andrew Lunn, Vivien Didelot, Florian Fainelli, Vladimir Oltean, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Matthias Brugger
Cc: linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 05/12] net: dsa: tag_mtk: assign per-port queues
Date: Wed, 9 Nov 2022 17:34:19 +0100
Message-Id: <20221109163426.76164-6-nbd@nbd.name>
In-Reply-To: <20221109163426.76164-1-nbd@nbd.name>
References: <20221109163426.76164-1-nbd@nbd.name>

Keeps traffic sent to the switch within link speed limits.

Signed-off-by: Felix Fietkau
---
 net/dsa/tag_mtk.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/dsa/tag_mtk.c b/net/dsa/tag_mtk.c
index 415d8ece242a..5953356b829d 100644
--- a/net/dsa/tag_mtk.c
+++ b/net/dsa/tag_mtk.c
@@ -25,6 +25,8 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
 	u8 xmit_tpid;
 	u8 *mtk_tag;
 
+	skb_set_queue_mapping(skb, dp->index);
+
 	/* Build the special tag after the MAC Source Address. If VLAN header
 	 * is present, it's required that VLAN header and special tag is
 	 * being combined. Only in this way we can allow the switch can parse
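Combined with patches 4 and 6, the tx queue of a frame bound for a given
switch port is now chosen in two steps (editorial walk-through, not code
from the series):

	/* tag_mtk:          skb_set_queue_mapping(skb, dp->index);
	 * mtk_select_queue: queue = skb_get_queue_mapping(skb) + 3;
	 *
	 * e.g. switch port 2 -> netdev queue 2 -> hardware queue 5, which
	 * mtk_device_event() keeps shaped to port 2's link speed.
	 */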
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Matthias Brugger Cc: Vladimir Oltean , linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 06/12] net: ethernet: mediatek: ppe: assign per-port queues for offloaded traffic Date: Wed, 9 Nov 2022 17:34:20 +0100 Message-Id: <20221109163426.76164-7-nbd@nbd.name> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221109163426.76164-1-nbd@nbd.name> References: <20221109163426.76164-1-nbd@nbd.name> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Keeps traffic sent to the switch within link speed limits Signed-off-by: Felix Fietkau --- drivers/net/ethernet/mediatek/mtk_ppe.c | 18 ++++++++++++++++++ drivers/net/ethernet/mediatek/mtk_ppe.h | 4 ++++ .../net/ethernet/mediatek/mtk_ppe_offload.c | 12 +++++++++--- 3 files changed, 31 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c index 3ee2bf53f9e5..96ad0a9b79b4 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe.c @@ -399,6 +399,24 @@ int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry, return 0; } +int mtk_foe_entry_set_queue(struct mtk_eth *eth, struct mtk_foe_entry *entry, + unsigned int queue) +{ + u32 *ib2 = mtk_foe_entry_ib2(eth, entry); + + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + *ib2 &= ~MTK_FOE_IB2_QID_V2; + *ib2 |= FIELD_PREP(MTK_FOE_IB2_QID_V2, queue); + *ib2 |= MTK_FOE_IB2_PSE_QOS_V2; + } else { + *ib2 &= ~MTK_FOE_IB2_QID; + *ib2 |= FIELD_PREP(MTK_FOE_IB2_QID, queue); + *ib2 |= MTK_FOE_IB2_PSE_QOS; + } + + return 0; +} + static bool mtk_flow_entry_match(struct mtk_eth *eth, struct mtk_flow_entry *entry, struct mtk_foe_entry *data) diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h index 0b7a67a958e4..be635864bb96 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe.h +++ b/drivers/net/ethernet/mediatek/mtk_ppe.h @@ -68,7 +68,9 @@ enum { #define MTK_FOE_IB2_DSCP GENMASK(31, 24) /* CONFIG_MEDIATEK_NETSYS_V2 */ +#define MTK_FOE_IB2_QID_V2 GENMASK(6, 0) #define MTK_FOE_IB2_PORT_MG_V2 BIT(7) +#define MTK_FOE_IB2_PSE_QOS_V2 BIT(8) #define MTK_FOE_IB2_DEST_PORT_V2 GENMASK(12, 9) #define MTK_FOE_IB2_MULTICAST_V2 BIT(13) #define MTK_FOE_IB2_WDMA_WINFO_V2 BIT(19) @@ -350,6 +352,8 @@ int mtk_foe_entry_set_pppoe(struct mtk_eth *eth, struct mtk_foe_entry *entry, int sid); int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry, int wdma_idx, int txq, int bss, int wcid); +int mtk_foe_entry_set_queue(struct mtk_eth *eth, struct mtk_foe_entry *entry, + unsigned int queue); int mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_flow_entry *entry); void mtk_foe_entry_clear(struct mtk_ppe *ppe, struct mtk_flow_entry *entry); int mtk_foe_entry_idle_time(struct mtk_ppe *ppe, struct mtk_flow_entry *entry); diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c index 28bbd1df3e30..81afd5ee3fbf 100644 --- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c +++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c @@ -188,7 +188,7 @@ mtk_flow_set_output_device(struct mtk_eth *eth, struct mtk_foe_entry *foe, int *wed_index) { struct mtk_wdma_info info = {}; - int pse_port, dsa_port; + int pse_port, dsa_port, queue; if (mtk_flow_get_wdma_info(dev, dest_mac, &info) == 0) { 
mtk_foe_entry_set_wdma(eth, foe, info.wdma_idx, info.queue, @@ -212,8 +212,6 @@ mtk_flow_set_output_device(struct mtk_eth *eth, struct mtk_foe_entry *foe, } dsa_port = mtk_flow_get_dsa_port(&dev); - if (dsa_port >= 0) - mtk_foe_entry_set_dsa(eth, foe, dsa_port); if (dev == eth->netdev[0]) pse_port = 1; @@ -222,6 +220,14 @@ mtk_flow_set_output_device(struct mtk_eth *eth, struct mtk_foe_entry *foe, else return -EOPNOTSUPP; + if (dsa_port >= 0) { + mtk_foe_entry_set_dsa(eth, foe, dsa_port); + queue = 3 + dsa_port; + } else { + queue = pse_port - 1; + } + mtk_foe_entry_set_queue(eth, foe, queue); + out: mtk_foe_entry_set_pse_port(eth, foe, pse_port); From patchwork Wed Nov 9 16:34:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Felix Fietkau X-Patchwork-Id: 13037705 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CFD7C4321E for ; Wed, 9 Nov 2022 16:35:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231573AbiKIQe7 (ORCPT ); Wed, 9 Nov 2022 11:34:59 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41992 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231173AbiKIQex (ORCPT ); Wed, 9 Nov 2022 11:34:53 -0500 Received: from nbd.name (nbd.name [46.4.11.11]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 80AAF1A064; Wed, 9 Nov 2022 08:34:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=nbd.name; s=20160729; h=Content-Transfer-Encoding:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe: List-Post:List-Owner:List-Archive; bh=wlWx8xsTCdfxtgLYoUzFnVCZjXRJlA1hg0PQEq10fwE=; b=OIAM7q5NYu41hcHMIPQK990l5Y vpQYZhPl7o5Ns6foy1wqgtGrtFwt9W2Iyi1WTmsGi+V/AD5VoLFVaeZHSvW0x7Cat3jNXsOkuKgR4 iE61HSTdnYDCj31f5D2+zoNBsXOZgl0ahtxiKGT9XWl4Dk7u2G9joUbEy24ox9mBWFkA=; Received: from p200300daa72ee100054f3c61b16ef6e7.dip0.t-ipconnect.de ([2003:da:a72e:e100:54f:3c61:b16e:f6e7] helo=localhost.localdomain) by ds12 with esmtpsa (TLS1.3) tls TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (Exim 4.94.2) (envelope-from ) id 1oso28-000l4N-3V; Wed, 09 Nov 2022 17:34:36 +0100 From: Felix Fietkau To: netdev@vger.kernel.org, John Crispin , Sean Wang , Mark Lee , "David S. 
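Usage sketch for the new helper (editorial; the queue numbering follows
the convention from patch 4 and the call mirrors
mtk_flow_set_output_device() above):

	/* Offloaded flow leaving through DSA switch port 1: */
	int queue = 3 + 1;			/* dsa_port 1 -> hw queue 4 */

	mtk_foe_entry_set_queue(eth, foe, queue);
	/* sets MTK_FOE_IB2_QID(_V2) to the queue number and the PSE_QOS
	 * bit, so PPE-forwarded packets take the same shaped queue as
	 * CPU-originated tx to that port.
	 */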
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Matthias Brugger Cc: Vladimir Oltean , linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 07/12] net: ethernet: mtk_eth_soc: compile out netsys v2 code on mt7621 Date: Wed, 9 Nov 2022 17:34:21 +0100 Message-Id: <20221109163426.76164-8-nbd@nbd.name> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221109163426.76164-1-nbd@nbd.name> References: <20221109163426.76164-1-nbd@nbd.name> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Avoid some branches in the hot path on low-end devices with limited CPU power, and reduce code size Signed-off-by: Felix Fietkau --- drivers/net/ethernet/mediatek/mtk_eth_soc.h | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index c6a39a249fb5..146437ca044b 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -905,7 +905,13 @@ enum mkt_eth_capabilities { #define MTK_MUX_GMAC12_TO_GEPHY_SGMII \ (MTK_ETH_MUX_GMAC12_TO_GEPHY_SGMII | MTK_MUX) -#define MTK_HAS_CAPS(caps, _x) (((caps) & (_x)) == (_x)) +#ifdef CONFIG_SOC_MT7621 +#define MTK_CAP_MASK MTK_NETSYS_V2 +#else +#define MTK_CAP_MASK 0 +#endif + +#define MTK_HAS_CAPS(caps, _x) (((caps) & (_x) & ~(MTK_CAP_MASK)) == (_x)) #define MT7621_CAPS (MTK_GMAC1_RGMII | MTK_GMAC1_TRGMII | \ MTK_GMAC2_RGMII | MTK_SHARED_INT | \ From patchwork Wed Nov 9 16:34:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Felix Fietkau X-Patchwork-Id: 13037700 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2A67C4332F for ; Wed, 9 Nov 2022 16:34:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230465AbiKIQew (ORCPT ); Wed, 9 Nov 2022 11:34:52 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41980 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230359AbiKIQeu (ORCPT ); Wed, 9 Nov 2022 11:34:50 -0500 Received: from nbd.name (nbd.name [46.4.11.11]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7C8DC1A214; Wed, 9 Nov 2022 08:34:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=nbd.name; s=20160729; h=Content-Transfer-Encoding:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe: List-Post:List-Owner:List-Archive; bh=WpWzU38sZahtcrw07Mp5eEqyTgQmSc3A1QzCHmwEN28=; b=ipGOmIs6PBntXhEE2KDZqLvtTS 0jmp3nC0C1dm/cVH7uQotqyB4IQQvFf3tjjqzbzGmQFweWQ0nj53iQSRq85pafup2hibW60Mnk3VK ujiZjtHU+BuSFNgB5LiRYv5kswNfFV8HbVERdLwBz7fu6BMfJzCw52hRg9xlCKUg+mxk=; Received: from p200300daa72ee100054f3c61b16ef6e7.dip0.t-ipconnect.de ([2003:da:a72e:e100:54f:3c61:b16e:f6e7] helo=localhost.localdomain) by ds12 with esmtpsa (TLS1.3) tls TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (Exim 4.94.2) (envelope-from ) id 1oso28-000l4N-QM; Wed, 09 Nov 2022 17:34:37 +0100 From: Felix 
From patchwork Wed Nov 9 16:34:22 2022
X-Patchwork-Submitter: Felix Fietkau
X-Patchwork-Id: 13037700
From: Felix Fietkau
To: netdev@vger.kernel.org, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andrew Lunn, Vivien Didelot, Florian Fainelli, Vladimir Oltean
Cc: linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 08/12] net: dsa: add support for DSA rx offloading via metadata dst
Date: Wed, 9 Nov 2022 17:34:22 +0100
Message-Id: <20221109163426.76164-9-nbd@nbd.name>
In-Reply-To: <20221109163426.76164-1-nbd@nbd.name>
References: <20221109163426.76164-1-nbd@nbd.name>

If a metadata dst is present with the type METADATA_HW_PORT_MUX on a dsa
cpu port netdev, assume that it carries the port number and that there
is no DSA tag present in the skb data.

Signed-off-by: Felix Fietkau
---
 net/core/flow_dissector.c |  4 +++-
 net/dsa/dsa.c             | 18 +++++++++++++++++-
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index 25cd35f5922e..1f476abc25e1 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -972,11 +972,13 @@ bool __skb_flow_dissect(const struct net *net,
 		if (unlikely(skb->dev && netdev_uses_dsa(skb->dev) &&
 			     proto == htons(ETH_P_XDSA))) {
 			const struct dsa_device_ops *ops;
+			struct metadata_dst *md_dst = skb_metadata_dst(skb);
 			int offset = 0;
 
 			ops = skb->dev->dsa_ptr->tag_ops;
 			/* Only DSA header taggers break flow dissection */
-			if (ops->needed_headroom) {
+			if (ops->needed_headroom &&
+			    (!md_dst || md_dst->type != METADATA_HW_PORT_MUX)) {
 				if (ops->flow_dissect)
 					ops->flow_dissect(skb, &proto, &offset);
 				else
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index 64b14f655b23..a20440e82dec 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <net/dst_metadata.h>
 
 #include "dsa_priv.h"
 
@@ -216,6 +217,7 @@ static bool dsa_skb_defer_rx_timestamp(struct dsa_slave_priv *p,
 static int dsa_switch_rcv(struct sk_buff *skb, struct net_device *dev,
 			  struct packet_type *pt, struct net_device *unused)
 {
+	struct metadata_dst *md_dst = skb_metadata_dst(skb);
 	struct dsa_port *cpu_dp = dev->dsa_ptr;
 	struct sk_buff *nskb = NULL;
 	struct dsa_slave_priv *p;
@@ -229,7 +231,21 @@ static int dsa_switch_rcv(struct sk_buff *skb, struct net_device *dev,
 	if (!skb)
 		return 0;
 
-	nskb = cpu_dp->rcv(skb, dev);
+	if (md_dst && md_dst->type == METADATA_HW_PORT_MUX) {
+		unsigned int port = md_dst->u.port_info.port_id;
+
+		dsa_default_offload_fwd_mark(skb);
+		skb_dst_set(skb, NULL);
+		if (!skb_has_extensions(skb))
+			skb->slow_gro = 0;
+
+		skb->dev = dsa_master_find_slave(dev, 0, port);
+		if (skb->dev)
+			nskb = skb;
+	} else {
+		nskb = cpu_dp->rcv(skb, dev);
+	}
+
 	if (!nskb) {
 		kfree_skb(skb);
 		return 0;
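Driver-side counterpart for reference (editorial sketch; it mirrors what
patch 9 of this series does in mtk_eth_soc, with error handling trimmed):

	#include <net/dst_metadata.h>

	/* once, for each switch port the hardware can demux: */
	struct metadata_dst *md_dst;

	md_dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, GFP_KERNEL);
	if (!md_dst)
		return -ENOMEM;
	md_dst->u.port_info.port_id = port;	/* port reported by the hw */

	/* per received, already-untagged skb: */
	skb_dst_set_noref(skb, &md_dst->dst);
	/* dsa_switch_rcv() then routes the skb to the right slave netdev
	 * without parsing a DSA tag in the packet data.
	 */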
From patchwork Wed Nov 9 16:34:23 2022
X-Patchwork-Id: 13037708
From: Felix Fietkau
To: netdev@vger.kernel.org, John Crispin, Sean Wang, Mark Lee,
	"David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Matthias Brugger, Russell King
Cc: Vladimir Oltean, linux-arm-kernel@lists.infradead.org,
	linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 09/12] net: ethernet: mtk_eth_soc: fix VLAN rx hardware acceleration
Date: Wed, 9 Nov 2022 17:34:23 +0100
Message-Id: <20221109163426.76164-10-nbd@nbd.name>
In-Reply-To: <20221109163426.76164-1-nbd@nbd.name>
References: <20221109163426.76164-1-nbd@nbd.name>

- enable VLAN untagging for PDMA rx
- make it possible to disable the feature via ethtool
- pass the VLAN tag to the DSA driver
- untag the special tag on PDMA only if no non-DSA devices are in use
- disable special tag untagging on MT7986 for now, since it's not
  working yet

Signed-off-by: Felix Fietkau
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 99 ++++++++++++++++-----
 drivers/net/ethernet/mediatek/mtk_eth_soc.h |  8 ++
 2 files changed, 84 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index 92bdd69eed2e..ffaa9fe32b14 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 
 #include "mtk_eth_soc.h"
 #include "mtk_wed.h"
@@ -2008,23 +2009,27 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
 		if (reason == MTK_PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED)
 			mtk_ppe_check_skb(eth->ppe[0], skb, hash);
 
-		if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) {
-			if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
-				if (trxd.rxd3 & RX_DMA_VTAG_V2)
-					__vlan_hwaccel_put_tag(skb,
-							       htons(RX_DMA_VPID(trxd.rxd4)),
-							       RX_DMA_VID(trxd.rxd4));
-			} else if (trxd.rxd2 & RX_DMA_VTAG) {
-				__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
-						       RX_DMA_VID(trxd.rxd3));
-			}
+		if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
+			if (trxd.rxd3 & RX_DMA_VTAG_V2)
+				__vlan_hwaccel_put_tag(skb,
+						       htons(RX_DMA_VPID(trxd.rxd4)),
+						       RX_DMA_VID(trxd.rxd4));
+		} else if (trxd.rxd2 & RX_DMA_VTAG) {
+			__vlan_hwaccel_put_tag(skb, htons(RX_DMA_VPID(trxd.rxd3)),
+					       RX_DMA_VID(trxd.rxd3));
+		}
+
+		/* When using VLAN untagging in combination with DSA, the
+		 * hardware treats the MTK special tag as a VLAN and untags it.
+		 */
+		if (skb_vlan_tag_present(skb) && netdev_uses_dsa(netdev)) {
+			unsigned int port = ntohs(skb->vlan_proto) & GENMASK(2, 0);
+
+			if (port < ARRAY_SIZE(eth->dsa_meta) &&
+			    eth->dsa_meta[port])
+				skb_dst_set_noref(skb, &eth->dsa_meta[port]->dst);
 
-			/* If the device is attached to a dsa switch, the special
-			 * tag inserted in VLAN field by hw switch can be offloaded
-			 * by RX HW VLAN offload. Clear vlan info.
-			 */
-			if (netdev_uses_dsa(netdev))
-				__vlan_hwaccel_clear_tag(skb);
+			__vlan_hwaccel_clear_tag(skb);
 		}
 
 		skb_record_rx_queue(skb, 0);
@@ -2847,15 +2852,19 @@ static netdev_features_t mtk_fix_features(struct net_device *dev,
 
 static int mtk_set_features(struct net_device *dev, netdev_features_t features)
 {
-	int err = 0;
-
-	if (!((dev->features ^ features) & NETIF_F_LRO))
-		return 0;
+	struct mtk_mac *mac = netdev_priv(dev);
+	struct mtk_eth *eth = mac->hw;
+	netdev_features_t diff = dev->features ^ features;
 
-	if (!(features & NETIF_F_LRO))
+	if ((diff & NETIF_F_LRO) && !(features & NETIF_F_LRO))
 		mtk_hwlro_netdev_disable(dev);
 
-	return err;
+	/* Set RX VLAN offloading */
+	if (diff & NETIF_F_HW_VLAN_CTAG_RX)
+		mtk_w32(eth, !!(features & NETIF_F_HW_VLAN_CTAG_RX),
+			MTK_CDMP_EG_CTRL);
+
+	return 0;
 }
 
 /* wait for DMA to finish whatever it is doing before we start using it again */
@@ -3137,11 +3146,45 @@ static int mtk_device_event(struct notifier_block *n, unsigned long event, void
 	return NOTIFY_DONE;
 }
 
+static bool mtk_uses_dsa(struct net_device *dev)
+{
+#if IS_ENABLED(CONFIG_NET_DSA)
+	return netdev_uses_dsa(dev) &&
+	       dev->dsa_ptr->tag_ops->proto == DSA_TAG_PROTO_MTK;
+#else
+	return false;
+#endif
+}
+
 static int mtk_open(struct net_device *dev)
 {
 	struct mtk_mac *mac = netdev_priv(dev);
 	struct mtk_eth *eth = mac->hw;
-	int err;
+	int i, err;
+
+	if (mtk_uses_dsa(dev)) {
+		for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) {
+			struct metadata_dst *md_dst = eth->dsa_meta[i];
+
+			if (md_dst)
+				continue;
+
+			md_dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX,
+						    GFP_KERNEL);
+			if (!md_dst)
+				return -ENOMEM;
+
+			md_dst->u.port_info.port_id = i;
+			eth->dsa_meta[i] = md_dst;
+		}
+	} else {
+		/* Hardware special tag parsing needs to be disabled if at least
+		 * one MAC does not use DSA.
+		 */
+		u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL);
+		val &= ~MTK_CDMP_STAG_EN;
+		mtk_w32(eth, val, MTK_CDMP_IG_CTRL);
+	}
 
 	err = phylink_of_phy_connect(mac->phylink, mac->of_node, 0);
 	if (err) {
@@ -3469,6 +3512,10 @@ static int mtk_hw_init(struct mtk_eth *eth)
 	 */
 	val = mtk_r32(eth, MTK_CDMQ_IG_CTRL);
 	mtk_w32(eth, val | MTK_CDMQ_STAG_EN, MTK_CDMQ_IG_CTRL);
+	if (!MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
+		val = mtk_r32(eth, MTK_CDMP_IG_CTRL);
+		mtk_w32(eth, val | MTK_CDMP_STAG_EN, MTK_CDMP_IG_CTRL);
+	}
 
 	/* Enable RX VLan Offloading */
 	mtk_w32(eth, 1, MTK_CDMP_EG_CTRL);
@@ -3686,6 +3733,12 @@ static int mtk_free_dev(struct mtk_eth *eth)
 		free_netdev(eth->netdev[i]);
 	}
 
+	for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) {
+		if (!eth->dsa_meta[i])
+			break;
+		metadata_dst_free(eth->dsa_meta[i]);
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 146437ca044b..1c85fbad5bc1 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -22,6 +22,9 @@
 #include
 #include "mtk_ppe.h"
 
+#define MTK_MAX_DSA_PORTS	7
+#define MTK_DSA_PORT_MASK	GENMASK(2, 0)
+
 #define MTK_QDMA_NUM_QUEUES	16
 #define MTK_QDMA_PAGE_SIZE	2048
 #define MTK_MAX_RX_LENGTH	1536
@@ -93,6 +96,9 @@
 #define MTK_CDMQ_IG_CTRL	0x1400
 #define MTK_CDMQ_STAG_EN	BIT(0)
 
+/* CDMQ Egress Control Register */
+#define MTK_CDMQ_EG_CTRL	0x1404
+
 /* CDMP Ingress Control Register */
 #define MTK_CDMP_IG_CTRL	0x400
 #define MTK_CDMP_STAG_EN	BIT(0)
@@ -1149,6 +1155,8 @@ struct mtk_eth {
 
 	int				ip_align;
 
+	struct metadata_dst		*dsa_meta[MTK_MAX_DSA_PORTS];
+
 	struct mtk_ppe			*ppe[2];
 	struct rhashtable		flow_table;
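A worked example of the port extraction above: the 4-byte MTK special tag occupies the position of an 802.1Q header, so when the hardware "untags" it, the first two tag bytes land in skb->vlan_proto in network byte order, with the source port in the low three bits, hence the ntohs() plus GENMASK(2, 0). A stand-alone sketch with an invented tag value:

#include <stdio.h>
#include <arpa/inet.h>	/* htons/ntohs */

#define MTK_DSA_PORT_MASK 0x7	/* GENMASK(2, 0) */

int main(void)
{
	/* Pretend the switch tagged a frame from port 5: the untagger
	 * stored the first 16 tag bits in vlan_proto, network order.
	 */
	unsigned short vlan_proto = htons(0x0005);
	unsigned int port = ntohs(vlan_proto) & MTK_DSA_PORT_MASK;

	printf("source port = %u\n", port);	/* prints 5 */
	return 0;
}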
From patchwork Wed Nov 9 16:34:24 2022
X-Patchwork-Id: 13037747
From: Felix Fietkau
To: netdev@vger.kernel.org, John Crispin, Sean Wang, Mark Lee,
	"David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Matthias Brugger
Cc: Vladimir Oltean, linux-arm-kernel@lists.infradead.org,
	linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 10/12] net: ethernet: mtk_eth_soc: work around issue with sending small fragments
Date: Wed, 9 Nov 2022 17:34:24 +0100
Message-Id: <20221109163426.76164-11-nbd@nbd.name>
In-Reply-To: <20221109163426.76164-1-nbd@nbd.name>
References: <20221109163426.76164-1-nbd@nbd.name>

When frames are sent with very small fragments, the DMA engine appears
to lock up and transmit attempts time out. Fix this by detecting the
presence of small fragments and using skb_gso_segment + skb_linearize
to deal with them.

Signed-off-by: Felix Fietkau
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 36 +++++++++++++++++++--
 1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index f7b1d39bf936..a9a270317ff4 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -1442,12 +1442,28 @@ static void mtk_wake_queue(struct mtk_eth *eth)
 	}
 }
 
+static bool mtk_skb_has_small_frag(struct sk_buff *skb)
+{
+	int min_size = 16;
+	int i;
+
+	if (skb_headlen(skb) < min_size)
+		return true;
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
+		if (skb_frag_size(&skb_shinfo(skb)->frags[i]) < min_size)
+			return true;
+
+	return false;
+}
+
 static netdev_tx_t mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct mtk_mac *mac = netdev_priv(dev);
 	struct mtk_eth *eth = mac->hw;
 	struct mtk_tx_ring *ring = &eth->tx_ring;
 	struct net_device_stats *stats = &dev->stats;
+	struct sk_buff *segs, *next;
 	bool gso = false;
 	int tx_num;
@@ -1469,6 +1485,17 @@ static netdev_tx_t mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		return NETDEV_TX_BUSY;
 	}
 
+	if (skb_is_gso(skb) && mtk_skb_has_small_frag(skb)) {
+		segs = skb_gso_segment(skb, dev->features & ~NETIF_F_ALL_TSO);
+		if (IS_ERR(segs))
+			goto drop;
+
+		if (segs) {
+			consume_skb(skb);
+			skb = segs;
+		}
+	}
+
 	/* TSO: fill MSS info in tcp checksum field */
 	if (skb_is_gso(skb)) {
 		if (skb_cow_head(skb, 0)) {
@@ -1484,8 +1511,13 @@ static netdev_tx_t mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		}
 	}
 
-	if (mtk_tx_map(skb, dev, tx_num, ring, gso) < 0)
-		goto drop;
+	skb_list_walk_safe(skb, skb, next) {
+		if ((mtk_skb_has_small_frag(skb) && skb_linearize(skb)) ||
+		    mtk_tx_map(skb, dev, tx_num, ring, gso) < 0) {
+			stats->tx_dropped++;
+			dev_kfree_skb_any(skb);
+		}
+	}
 
 	if (unlikely(atomic_read(&ring->free_count) <= ring->thresh))
 		netif_tx_stop_all_queues(dev);
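One subtlety in the hunk above is the three-way return of skb_gso_segment(), and that passing dev->features with all TSO bits masked out is what forces software segmentation. Restated as an annotated sketch (same names as the patch, not compilable on its own):

segs = skb_gso_segment(skb, dev->features & ~NETIF_F_ALL_TSO);
if (IS_ERR(segs))
	goto drop;		/* segmentation failed: drop the original */

if (segs) {			/* got a new list: original is now unused */
	consume_skb(skb);
	skb = segs;
}				/* NULL: nothing to do, keep the original */

/* Each resulting skb is then linearized if it still contains a
 * fragment shorter than 16 bytes, and mapped individually.
 */
skb_list_walk_safe(skb, skb, next) {
	if ((mtk_skb_has_small_frag(skb) && skb_linearize(skb)) ||
	    mtk_tx_map(skb, dev, tx_num, ring, gso) < 0) {
		stats->tx_dropped++;
		dev_kfree_skb_any(skb);
	}
}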
From patchwork Wed Nov 9 16:34:25 2022
X-Patchwork-Id: 13037782
From: Felix Fietkau
To: netdev@vger.kernel.org, John Crispin, Sean Wang, Mark Lee,
	"David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Matthias Brugger
Cc: Vladimir Oltean, linux-arm-kernel@lists.infradead.org,
	linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 11/12] net: ethernet: mtk_eth_soc: set NETIF_F_ALL_TSO
Date: Wed, 9 Nov 2022 17:34:25 +0100
Message-Id: <20221109163426.76164-12-nbd@nbd.name>
In-Reply-To: <20221109163426.76164-1-nbd@nbd.name>
References: <20221109163426.76164-1-nbd@nbd.name>

Significantly improves performance by avoiding unnecessary
segmentation.

Signed-off-by: Felix Fietkau
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 1c85fbad5bc1..60da6936559a 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -49,8 +49,7 @@
 				 NETIF_F_RXCSUM | \
 				 NETIF_F_HW_VLAN_CTAG_TX | \
 				 NETIF_F_HW_VLAN_CTAG_RX | \
-				 NETIF_F_SG | NETIF_F_TSO | \
-				 NETIF_F_TSO6 | \
+				 NETIF_F_SG | NETIF_F_ALL_TSO | \
 				 NETIF_F_IPV6_CSUM |\
 				 NETIF_F_HW_TC)
 #define MTK_HW_FEATURES_MT7628	(NETIF_F_SG | NETIF_F_RXCSUM)
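For context: NETIF_F_ALL_TSO is the umbrella mask from include/linux/netdev_features.h, so this one-liner also advertises the ECN and mangled-ID TSO variants on top of the TSO/TSO6 the driver already set. Quoted from memory, so verify against the tree you build with:

/* include/linux/netdev_features.h (contemporary kernels; from memory) */
#define NETIF_F_ALL_TSO		(NETIF_F_TSO | NETIF_F_TSO6 | \
				 NETIF_F_TSO_ECN | NETIF_F_TSO_MANGLEID)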
From patchwork Wed Nov 9 16:34:26 2022
X-Patchwork-Id: 13037746
From: Felix Fietkau
To: netdev@vger.kernel.org, John Crispin, Sean Wang, Mark Lee,
	"David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Matthias Brugger
Cc: Vladimir Oltean, linux-arm-kernel@lists.infradead.org,
	linux-mediatek@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 12/12] net: ethernet: mtk_eth_soc: drop packets to WDMA if the ring is full
Date: Wed, 9 Nov 2022 17:34:26 +0100
Message-Id: <20221109163426.76164-13-nbd@nbd.name>
In-Reply-To: <20221109163426.76164-1-nbd@nbd.name>
References: <20221109163426.76164-1-nbd@nbd.name>

Improve handling of DMA ring overflow, and clarify the related WDMA
drop comment.

Signed-off-by: Felix Fietkau
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 5 ++++-
 drivers/net/ethernet/mediatek/mtk_eth_soc.h | 1 +
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index a9a270317ff4..9aa4fa322794 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -3566,9 +3566,12 @@ static int mtk_hw_init(struct mtk_eth *eth)
 	mtk_w32(eth, 0x21021000, MTK_FE_INT_GRP);
 
 	if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
-		/* PSE should not drop port8 and port9 packets */
+		/* PSE should not drop port8 and port9 packets from WDMA Tx */
 		mtk_w32(eth, 0x00000300, PSE_DROP_CFG);
 
+		/* PSE should drop packets to port 8/9 on WDMA Rx ring full */
+		mtk_w32(eth, 0x00000300, PSE_PPE0_DROP);
+
 		/* PSE Free Queue Flow Control */
 		mtk_w32(eth, 0x01fa01f4, PSE_FQFC_CFG2);
 
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 60da6936559a..fb876d2a6ea1 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -127,6 +127,7 @@
 #define PSE_FQFC_CFG1		0x100
 #define PSE_FQFC_CFG2		0x104
 #define PSE_DROP_CFG		0x108
+#define PSE_PPE0_DROP		0x110
 
 /* PSE Input Queue Reservation Register*/
 #define PSE_IQ_REV(x)		(0x140 + (((x) - 1) << 2))
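A closing note on the magic value: 0x00000300 is BIT(8) | BIT(9), which matches the "port 8/9" wording in the comments if both registers carry one bit per PSE port. Spelled out under that assumption (PSE_PORT8/PSE_PORT9 are hypothetical names, inferred from the value rather than taken from a datasheet):

#define PSE_PORT8	BIT(8)	/* 0x100 */
#define PSE_PORT9	BIT(9)	/* 0x200 */

/* don't drop WDMA Tx traffic coming from ports 8/9 ... */
mtk_w32(eth, PSE_PORT8 | PSE_PORT9, PSE_DROP_CFG);
/* ... but do drop towards ports 8/9 when the WDMA Rx ring is full */
mtk_w32(eth, PSE_PORT8 | PSE_PORT9, PSE_PPE0_DROP);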