From patchwork Sun Jan 23 15:54:58 2022
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 12720981
X-Patchwork-Delegate: kuba@kernel.org
From: Jisheng Zhang
To: Giuseppe Cavallaro, Alexandre Torgue, Jose Abreu,
	"David S. Miller", Jakub Kicinski, Maxime Coquelin
Cc: netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH] net: stmmac: reduce unnecessary wakeups from eee sw timer
Date: Sun, 23 Jan 2022 23:54:58 +0800
Message-Id: <20220123155458.2288-1-jszhang@kernel.org>
X-Mailer: git-send-email 2.34.1

Currently, on EEE-capable platforms, if the EEE SW timer is used, the SW
timer causes 1 wakeup/s even if TX has already successfully entered EEE.
Remove this unnecessary wakeup by only calling mod_timer() if we haven't
successfully entered EEE yet.

Signed-off-by: Jisheng Zhang
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 92a9b0b226b1..f1246df143da 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -402,7 +402,7 @@ static void stmmac_lpi_entry_timer_config(struct stmmac_priv *priv, bool en)
  * Description: this function is to verify and enter in LPI mode in case of
  * EEE.
  */
-static void stmmac_enable_eee_mode(struct stmmac_priv *priv)
+static int stmmac_enable_eee_mode(struct stmmac_priv *priv)
 {
 	u32 tx_cnt = priv->plat->tx_queues_to_use;
 	u32 queue;
@@ -412,13 +412,14 @@ static void stmmac_enable_eee_mode(struct stmmac_priv *priv)
 		struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
 
 		if (tx_q->dirty_tx != tx_q->cur_tx)
-			return; /* still unfinished work */
+			return -EBUSY; /* still unfinished work */
 	}
 
 	/* Check and enter in LPI mode */
 	if (!priv->tx_path_in_lpi_mode)
 		stmmac_set_eee_mode(priv, priv->hw,
 				priv->plat->en_tx_lpi_clockgating);
+	return 0;
 }
 
 /**
@@ -450,8 +451,8 @@ static void stmmac_eee_ctrl_timer(struct timer_list *t)
 {
 	struct stmmac_priv *priv = from_timer(priv, t, eee_ctrl_timer);
 
-	stmmac_enable_eee_mode(priv);
-	mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
+	if (stmmac_enable_eee_mode(priv))
+		mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
 }
 
 /**
@@ -2647,8 +2648,8 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
 
 	if (priv->eee_enabled && !priv->tx_path_in_lpi_mode &&
 	    priv->eee_sw_timer_en) {
-		stmmac_enable_eee_mode(priv);
-		mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
+		if (stmmac_enable_eee_mode(priv))
+			mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
 	}
 
 	/* We still have pending packets, let's call for a new scheduling */
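
For reference, below is a minimal standalone userspace sketch (not driver
code) of the re-arm-on-failure pattern this patch applies: the periodic
handler reschedules itself only while LPI entry keeps reporting busy, so
once EEE/LPI has been entered no further wakeups are scheduled. Names such
as struct queue_state and try_enter_lpi are illustrative only and do not
exist in stmmac.

/*
 * Sketch of the pattern: re-arm the "timer" only while entry fails.
 */
#include <stdbool.h>
#include <stdio.h>

struct queue_state {
	unsigned int dirty_tx;	/* last descriptor cleaned */
	unsigned int cur_tx;	/* last descriptor queued */
	bool in_lpi;
};

/* Return 0 once LPI is entered, nonzero (think -EBUSY) while TX work remains. */
static int try_enter_lpi(struct queue_state *q)
{
	if (q->dirty_tx != q->cur_tx)
		return -1;		/* still unfinished work, retry later */

	if (!q->in_lpi) {
		q->in_lpi = true;	/* stand-in for entering LPI in hardware */
		printf("entered LPI\n");
	}
	return 0;
}

int main(void)
{
	struct queue_state q = { .dirty_tx = 0, .cur_tx = 3, .in_lpi = false };
	int ticks = 0;

	/* Each iteration models one expiry of the EEE SW timer. */
	do {
		ticks++;
		q.dirty_tx++;		/* pretend the TX clean path made progress */
	} while (try_enter_lpi(&q));	/* re-arm only while entry failed */

	printf("timer stopped re-arming after %d expiries\n", ticks);
	return 0;
}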