From patchwork Mon Aug 7 16:09:55 2023
X-Patchwork-Submitter: Furong Xu <0x1207@gmail.com>
X-Patchwork-Id: 13344489
X-Patchwork-Delegate: kuba@kernel.org

From: Furong Xu <0x1207@gmail.com>
Miller" , Alexandre Torgue , Jose Abreu , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Maxime Coquelin , Simon Horman , Joao Pinto Cc: netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, xfr@outlook.com, rock.xu@nio.com, Furong Xu <0x1207@gmail.com> Subject: [PATCH net-next] net: stmmac: xgmac: RX queue routing configuration Date: Tue, 8 Aug 2023 00:09:55 +0800 Message-Id: <20230807160955.1111104-1-0x1207@gmail.com> X-Mailer: git-send-email 2.34.1 Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-1.8 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_ENVFROM_END_DIGIT, FREEMAIL_FROM,RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org Commit abe80fdc6ee6 ("net: stmmac: RX queue routing configuration") introduced RX queue routing to DWMAC4 core. This patch extend the support to XGMAC2 core. Signed-off-by: Furong Xu <0x1207@gmail.com> --- .../net/ethernet/stmicro/stmmac/dwxgmac2.h | 14 +++++++ .../ethernet/stmicro/stmmac/dwxgmac2_core.c | 37 ++++++++++++++++++- 2 files changed, 49 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h index 1913385df685..a2498da7406b 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h +++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h @@ -74,8 +74,22 @@ #define XGMAC_RXQEN(x) GENMASK((x) * 2 + 1, (x) * 2) #define XGMAC_RXQEN_SHIFT(x) ((x) * 2) #define XGMAC_RXQ_CTRL1 0x000000a4 +#define XGMAC_AVCPQ GENMASK(31, 28) +#define XGMAC_AVCPQ_SHIFT 28 +#define XGMAC_PTPQ GENMASK(27, 24) +#define XGMAC_PTPQ_SHIFT 24 +#define XGMAC_TACPQE BIT(23) +#define XGMAC_TACPQE_SHIFT 23 +#define XGMAC_DCBCPQ GENMASK(19, 16) +#define XGMAC_DCBCPQ_SHIFT 16 +#define XGMAC_MCBCQEN BIT(15) +#define XGMAC_MCBCQEN_SHIFT 15 +#define XGMAC_MCBCQ GENMASK(11, 8) +#define XGMAC_MCBCQ_SHIFT 8 #define XGMAC_RQ GENMASK(7, 4) #define XGMAC_RQ_SHIFT 4 +#define XGMAC_UPQ GENMASK(3, 0) +#define XGMAC_UPQ_SHIFT 0 #define XGMAC_RXQ_CTRL2 0x000000a8 #define XGMAC_RXQ_CTRL3 0x000000ac #define XGMAC_PSRQ(x) GENMASK((x) * 8 + 7, (x) * 8) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c index a0c2ef8bb0ac..097b891a608d 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c @@ -127,6 +127,39 @@ static void dwxgmac2_tx_queue_prio(struct mac_device_info *hw, u32 prio, writel(value, ioaddr + reg); } +static void dwxgmac2_rx_queue_routing(struct mac_device_info *hw, + u8 packet, u32 queue) +{ + void __iomem *ioaddr = hw->pcsr; + u32 value; + + static const struct stmmac_rx_routing dwxgmac2_route_possibilities[] = { + { XGMAC_AVCPQ, XGMAC_AVCPQ_SHIFT }, + { XGMAC_PTPQ, XGMAC_PTPQ_SHIFT }, + { XGMAC_DCBCPQ, XGMAC_DCBCPQ_SHIFT }, + { XGMAC_UPQ, XGMAC_UPQ_SHIFT }, + { XGMAC_MCBCQ, XGMAC_MCBCQ_SHIFT }, + }; + + value = readl(ioaddr + XGMAC_RXQ_CTRL1); + + /* routing configuration */ + value &= ~dwxgmac2_route_possibilities[packet - 1].reg_mask; + value |= (queue << dwxgmac2_route_possibilities[packet - 1].reg_shift) & + dwxgmac2_route_possibilities[packet - 1].reg_mask; + + /* some packets require extra ops */ + if 
+	if (packet == PACKET_AVCPQ) {
+		value &= ~XGMAC_TACPQE;
+		value |= 0x1 << XGMAC_TACPQE_SHIFT;
+	} else if (packet == PACKET_MCBCQ) {
+		value &= ~XGMAC_MCBCQEN;
+		value |= 0x1 << XGMAC_MCBCQEN_SHIFT;
+	}
+
+	writel(value, ioaddr + XGMAC_RXQ_CTRL1);
+}
+
 static void dwxgmac2_prog_mtl_rx_algorithms(struct mac_device_info *hw,
 					    u32 rx_alg)
 {
@@ -1463,7 +1496,7 @@ const struct stmmac_ops dwxgmac210_ops = {
 	.rx_queue_enable = dwxgmac2_rx_queue_enable,
 	.rx_queue_prio = dwxgmac2_rx_queue_prio,
 	.tx_queue_prio = dwxgmac2_tx_queue_prio,
-	.rx_queue_routing = NULL,
+	.rx_queue_routing = dwxgmac2_rx_queue_routing,
 	.prog_mtl_rx_algorithms = dwxgmac2_prog_mtl_rx_algorithms,
 	.prog_mtl_tx_algorithms = dwxgmac2_prog_mtl_tx_algorithms,
 	.set_mtl_tx_queue_weight = dwxgmac2_set_mtl_tx_queue_weight,
@@ -1524,7 +1557,7 @@ const struct stmmac_ops dwxlgmac2_ops = {
 	.rx_queue_enable = dwxlgmac2_rx_queue_enable,
 	.rx_queue_prio = dwxgmac2_rx_queue_prio,
 	.tx_queue_prio = dwxgmac2_tx_queue_prio,
-	.rx_queue_routing = NULL,
+	.rx_queue_routing = dwxgmac2_rx_queue_routing,
 	.prog_mtl_rx_algorithms = dwxgmac2_prog_mtl_rx_algorithms,
 	.prog_mtl_tx_algorithms = dwxgmac2_prog_mtl_tx_algorithms,
 	.set_mtl_tx_queue_weight = dwxgmac2_set_mtl_tx_queue_weight,
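
For readers unfamiliar with the register layout, below is a minimal, self-contained
sketch (not part of the patch) of the read-modify-write pattern the new
dwxgmac2_rx_queue_routing() uses on XGMAC_RXQ_CTRL1: clear a packet class field
with its mask, then write the target queue number through the same mask. The
GENMASK_U32 macro, the route_field() helper, and the chosen queue numbers are
illustrative assumptions so the snippet builds in userspace; the field positions
mirror the defines added in dwxgmac2.h.

/* Illustrative userspace sketch of the XGMAC_RXQ_CTRL1 field composition. */
#include <stdint.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's GENMASK() on 32-bit values. */
#define GENMASK_U32(h, l)	(((~0u) << (l)) & (~0u >> (31 - (h))))

#define XGMAC_PTPQ		GENMASK_U32(27, 24)	/* PTP packet queue */
#define XGMAC_PTPQ_SHIFT	24
#define XGMAC_MCBCQ		GENMASK_U32(11, 8)	/* multicast/broadcast queue */
#define XGMAC_MCBCQ_SHIFT	8
#define XGMAC_MCBCQEN		(1u << 15)		/* MC/BC routing enable bit */

/* Route one packet class to an RX queue: clear the field, then set it. */
static uint32_t route_field(uint32_t ctrl1, uint32_t mask, int shift, uint32_t queue)
{
	ctrl1 &= ~mask;
	ctrl1 |= (queue << shift) & mask;
	return ctrl1;
}

int main(void)
{
	uint32_t ctrl1 = 0;

	/* PTP packets to queue 2, multicast/broadcast to queue 1 plus enable bit */
	ctrl1 = route_field(ctrl1, XGMAC_PTPQ, XGMAC_PTPQ_SHIFT, 2);
	ctrl1 = route_field(ctrl1, XGMAC_MCBCQ, XGMAC_MCBCQ_SHIFT, 1);
	ctrl1 |= XGMAC_MCBCQEN;

	printf("XGMAC_RXQ_CTRL1 = 0x%08x\n", (unsigned int)ctrl1);	/* expect 0x02008100 */
	return 0;
}

In the driver itself, the packet-class-to-queue mapping is not chosen by this op;
if I recall the stmmac code correctly, it comes from the per-queue platform data
(rx_queues_cfg[].pkt_route), which the platform glue typically fills from
devicetree properties such as snps,route-ptp or snps,route-multi-broad, and the
core then invokes the rx_queue_routing callback for each RX queue during MAC setup.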