From patchwork Mon Sep 19 22:18:47 2022
X-Patchwork-Submitter: Andrew Lunn <andrew@lunn.ch>
X-Patchwork-Id: 12981055
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Andrew Lunn <andrew@lunn.ch>
To: mattias.forsblad@gmail.com
Cc: netdev <netdev@vger.kernel.org>, Florian Fainelli, Vladimir Oltean,
 Christian Marangi, Andrew Lunn
Subject: [PATCH rfc v0 3/9] net: dsa: qca8K: Move queuing for request frame
 into the core
Date: Tue, 20 Sep 2022 00:18:47 +0200
Message-Id: <20220919221853.4095491-4-andrew@lunn.ch>
In-Reply-To: <20220919221853.4095491-1-andrew@lunn.ch>
References: <20220919110847.744712-3-mattias.forsblad@gmail.com>
 <20220919221853.4095491-1-andrew@lunn.ch>
X-Mailing-List: netdev@vger.kernel.org

Combine the queuing of the request and waiting for the completion into
one core helper. Add the function dsa_inband_request() to perform this.

Access to statistics is not a strict request/reply, so
dsa_inband_wait_for_completion() needs to be kept. It is also not
possible to build dsa_inband_request() on top of
dsa_inband_wait_for_completion(), since we need to avoid the race
where the request is sent and the reply received before the completion
has been reinitialised, e.g. because the scheduler decided to do other
things in between.
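To make the race concrete, the ordering that must be avoided looks
like this (an illustrative sketch, not code from this patch; it
assumes the reinit happens inside the wait helper, as described
above):

	/* Hypothetical racy composition of the two existing calls: */
	dev_queue_xmit(skb);
	/* ... scheduler runs other tasks; the reply arrives and
	 * complete()s the completion ...
	 */
	reinit_completion(&inband->completion);	/* reply is wiped out */
	wait_for_completion_timeout(&inband->completion, timeout); /* times out */

dsa_inband_request() closes this window by reinitialising the
completion before the skb is queued.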
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
---
 drivers/net/dsa/qca/qca8k-8xxx.c | 32 ++++++++++----------------------
 include/net/dsa.h                |  2 ++
 net/dsa/dsa.c                    | 16 ++++++++++++++++
 3 files changed, 28 insertions(+), 22 deletions(-)

diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index f4e92156bd32..9c44a09590a6 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -253,10 +253,8 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dev_queue_xmit(skb);
-
-	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
-					     QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
+				 QCA8K_ETHERNET_TIMEOUT);
 
 	*val = mgmt_eth_data->data[0];
 	if (len > QCA_HDR_MGMT_DATA1_LEN)
@@ -303,10 +301,8 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
 	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dev_queue_xmit(skb);
-
-	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
-					     QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
+				 QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
 
@@ -449,10 +445,8 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data,
 	qca8k_mdio_header_fill_seq_num(skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dev_queue_xmit(skb);
-
-	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
-					     QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
+				 QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
 
@@ -539,10 +533,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	qca8k_mdio_header_fill_seq_num(write_skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dev_queue_xmit(write_skb);
-
-	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
-					     QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_request(&mgmt_eth_data->inband, write_skb,
+				 QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
 
@@ -574,10 +566,8 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	qca8k_mdio_header_fill_seq_num(read_skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dev_queue_xmit(read_skb);
-
-	ret = dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
-					     QCA8K_ETHERNET_TIMEOUT);
+	ret = dsa_inband_request(&mgmt_eth_data->inband, read_skb,
+				 QCA8K_ETHERNET_TIMEOUT);
 
 	ack = mgmt_eth_data->ack;
 
@@ -601,8 +591,6 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
 	qca8k_mdio_header_fill_seq_num(clear_skb, mgmt_eth_data->seq);
 	mgmt_eth_data->ack = false;
 
-	dev_queue_xmit(clear_skb);
-
 	dsa_inband_wait_for_completion(&mgmt_eth_data->inband,
 				       QCA8K_ETHERNET_TIMEOUT);
 
diff --git a/include/net/dsa.h b/include/net/dsa.h
index ca81541703f4..50c319832939 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -1286,6 +1286,8 @@ struct dsa_inband {
 
 void dsa_inband_init(struct dsa_inband *inband);
 void dsa_inband_complete(struct dsa_inband *inband);
+int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
+		       int timeout_ms);
 int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms);
 
 /* Keep inline for faster access in hot path */
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index 382dbb9e921a..8de0c3124abf 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -540,6 +540,22 @@ int dsa_inband_wait_for_completion(struct dsa_inband *inband, int timeout_ms)
 }
 EXPORT_SYMBOL_GPL(dsa_inband_wait_for_completion);
 
+/* Cannot use dsa_inband_wait_for_completion() since the completion needs
+ * to be reinitialized before the skb is queued, to avoid races.
+ */
+int dsa_inband_request(struct dsa_inband *inband, struct sk_buff *skb,
+		       int timeout_ms)
+{
+	unsigned long timeout = msecs_to_jiffies(timeout_ms);
+
+	reinit_completion(&inband->completion);
+
+	dev_queue_xmit(skb);
+
+	return wait_for_completion_timeout(&inband->completion, timeout);
+}
+EXPORT_SYMBOL_GPL(dsa_inband_request);
+
 static int __init dsa_init_module(void)
 {
 	int rc;
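
One caller-side detail worth noting: dsa_inband_request() returns the
raw wait_for_completion_timeout() result, i.e. 0 on timeout and the
number of remaining jiffies on success, so a caller following the
qca8k conversion above would check it along these lines (hypothetical
sketch, error handling is the caller's choice):

	ret = dsa_inband_request(&mgmt_eth_data->inband, skb,
				 QCA8K_ETHERNET_TIMEOUT);
	if (!ret)
		return -ETIMEDOUT;	/* no reply within the timeout */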