From patchwork Sat Dec 17 00:02:25 2022
X-Patchwork-Submitter: Miquel Raynal
X-Patchwork-Id: 13075663
From: Miquel Raynal
To: Alexander Aring, Stefan Schmidt, linux-wpan@vger.kernel.org
Cc: David Girault, Romuald Despres, Frederic Blain, Nicolas Schodet,
    Guilhem Imberton, Thomas Petazzoni, "David S. Miller", Jakub Kicinski,
    Paolo Abeni, Eric Dumazet, netdev@vger.kernel.org, Miquel Raynal
Subject: [PATCH wpan-next v2 5/6] mac802154: Add MLME Tx locked helpers
Date: Sat, 17 Dec 2022 01:02:25 +0100
Message-Id: <20221217000226.646767-6-miquel.raynal@bootlin.com>
In-Reply-To: <20221217000226.646767-1-miquel.raynal@bootlin.com>
References: <20221217000226.646767-1-miquel.raynal@bootlin.com>

These new helpers have the exact same behavior as the existing ones,
except that they expect the rtnl to already be held (and will complain
otherwise). This allows MLME transmissions to be performed from
different contexts.
Signed-off-by: Miquel Raynal
---
 net/mac802154/ieee802154_i.h |  6 ++++++
 net/mac802154/tx.c           | 42 +++++++++++++++++++++++++-----------
 2 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/net/mac802154/ieee802154_i.h b/net/mac802154/ieee802154_i.h
index 509e0172fe82..aeadee543a9c 100644
--- a/net/mac802154/ieee802154_i.h
+++ b/net/mac802154/ieee802154_i.h
@@ -141,10 +141,16 @@ int ieee802154_mlme_op_pre(struct ieee802154_local *local);
 int ieee802154_mlme_tx(struct ieee802154_local *local,
 		       struct ieee802154_sub_if_data *sdata,
 		       struct sk_buff *skb);
+int ieee802154_mlme_tx_locked(struct ieee802154_local *local,
+			      struct ieee802154_sub_if_data *sdata,
+			      struct sk_buff *skb);
 void ieee802154_mlme_op_post(struct ieee802154_local *local);
 int ieee802154_mlme_tx_one(struct ieee802154_local *local,
 			   struct ieee802154_sub_if_data *sdata,
 			   struct sk_buff *skb);
+int ieee802154_mlme_tx_one_locked(struct ieee802154_local *local,
+				  struct ieee802154_sub_if_data *sdata,
+				  struct sk_buff *skb);
 netdev_tx_t
 ieee802154_monitor_start_xmit(struct sk_buff *skb, struct net_device *dev);
 netdev_tx_t
diff --git a/net/mac802154/tx.c b/net/mac802154/tx.c
index 9d8d43cf1e64..2a6f1ed763c9 100644
--- a/net/mac802154/tx.c
+++ b/net/mac802154/tx.c
@@ -137,34 +137,37 @@ int ieee802154_mlme_op_pre(struct ieee802154_local *local)
 	return ieee802154_sync_and_hold_queue(local);
 }
 
-int ieee802154_mlme_tx(struct ieee802154_local *local,
-		       struct ieee802154_sub_if_data *sdata,
-		       struct sk_buff *skb)
+int ieee802154_mlme_tx_locked(struct ieee802154_local *local,
+			      struct ieee802154_sub_if_data *sdata,
+			      struct sk_buff *skb)
 {
-	int ret;
-
 	/* Avoid possible calls to ->ndo_stop() when we asynchronously perform
 	 * MLME transmissions.
 	 */
-	rtnl_lock();
+	ASSERT_RTNL();
 
 	/* Ensure the device was not stopped, otherwise error out */
-	if (!local->open_count) {
-		rtnl_unlock();
+	if (!local->open_count)
 		return -ENETDOWN;
-	}
 
 	/* Warn if the ieee802154 core thinks MLME frames can be sent while the
 	 * net interface expects this cannot happen.
 	 */
-	if (WARN_ON_ONCE(!netif_running(sdata->dev))) {
-		rtnl_unlock();
+	if (WARN_ON_ONCE(!netif_running(sdata->dev)))
 		return -ENETDOWN;
-	}
 
 	ieee802154_tx(local, skb);
-	ret = ieee802154_sync_queue(local);
+	return ieee802154_sync_queue(local);
+}
 
+int ieee802154_mlme_tx(struct ieee802154_local *local,
+		       struct ieee802154_sub_if_data *sdata,
+		       struct sk_buff *skb)
+{
+	int ret;
+
+	rtnl_lock();
+	ret = ieee802154_mlme_tx_locked(local, sdata, skb);
 	rtnl_unlock();
 
 	return ret;
@@ -188,6 +191,19 @@ int ieee802154_mlme_tx_one(struct ieee802154_local *local,
 	return ret;
 }
 
+int ieee802154_mlme_tx_one_locked(struct ieee802154_local *local,
+				  struct ieee802154_sub_if_data *sdata,
+				  struct sk_buff *skb)
+{
+	int ret;
+
+	ieee802154_mlme_op_pre(local);
+	ret = ieee802154_mlme_tx_locked(local, sdata, skb);
+	ieee802154_mlme_op_post(local);
+
+	return ret;
+}
+
 static bool ieee802154_queue_is_stopped(struct ieee802154_local *local)
 {
 	return test_bit(WPAN_PHY_FLAG_STATE_QUEUE_STOPPED, &local->phy->flags);
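
For context, a minimal usage sketch, not part of this patch: a caller that
already runs with the rtnl held cannot use ieee802154_mlme_tx_one(), which
takes the lock itself, and would instead use the new locked variant. The
function name below is made up for illustration and assumes the snippet
lives inside net/mac802154/ with access to the driver-internal headers:

/* Illustrative only: a hypothetical MLME operation entered with the
 * rtnl already held (e.g. from another rtnl-protected path), so it
 * must not call ieee802154_mlme_tx_one(), which would deadlock by
 * taking the rtnl again.
 */
static int example_mlme_send_locked(struct ieee802154_local *local,
				    struct ieee802154_sub_if_data *sdata,
				    struct sk_buff *skb)
{
	/* The caller is responsible for holding the rtnl. */
	ASSERT_RTNL();

	/* Hold the queue, transmit the MLME frame synchronously and
	 * release the queue, without touching rtnl_lock()/rtnl_unlock().
	 */
	return ieee802154_mlme_tx_one_locked(local, sdata, skb);
}

The design choice here is simply to split the existing helpers into a
lock-taking wrapper (ieee802154_mlme_tx()) and a locked core
(ieee802154_mlme_tx_locked()), so that both rtnl-holding and
non-rtnl-holding contexts share the same transmit path.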