From patchwork Tue Oct 31 09:42:47 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ganapathi Bhat
X-Patchwork-Id: 10033791
X-Patchwork-Delegate: kvalo@adurom.com
From: Ganapathi Bhat
To:
CC: Brian Norris, Cathy Luo, Xinming Hu, Zhiyuan Yang, James Cao,
 Mangesh Malusare, Douglas Anderson, Karthik Ananthapadmanabha,
 Ganapathi Bhat
Subject: [PATCH 3/3] mwifiex: cleanup rx_reorder_tbl_lock usage
Date: Tue, 31 Oct 2017 15:12:47 +0530
Message-ID: <1509442967-14149-4-git-send-email-gbhat@marvell.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1509442967-14149-1-git-send-email-gbhat@marvell.com>
References: <1509442967-14149-1-git-send-email-gbhat@marvell.com>
X-Mailing-List: linux-wireless@vger.kernel.org

From: Karthik Ananthapadmanabha

Fix the incorrect usage of the rx_reorder_tbl_lock spinlock:

1. We shouldn't access fields of the elements returned by
   mwifiex_11n_get_rx_reorder_tbl() after we have released the
   spinlock. To fix this, acquire the spinlock before calling this
   function and release it only after processing of the
   corresponding element is complete.
2. Releasing the spinlock during iteration of the list and
   re-acquiring it before the next iteration is unsafe. Fix it by
   releasing the lock only after iteration of the list is complete.

Signed-off-by: Karthik Ananthapadmanabha
Signed-off-by: Ganapathi Bhat
---
A short sketch of the resulting locking pattern is appended after
the diff, for reviewers.

 .../net/wireless/marvell/mwifiex/11n_rxreorder.c | 34 +++++++++++++++++-----
 drivers/net/wireless/marvell/mwifiex/uap_txrx.c  |  3 ++
 2 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c b/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
index 4631bc2..b99ace8 100644
--- a/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
+++ b/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
@@ -234,23 +234,19 @@ static int mwifiex_11n_dispatch_pkt(struct mwifiex_private *priv, void *payload)
 
 /*
  * This function returns the pointer to an entry in Rx reordering
- * table which matches the given TA/TID pair.
+ * table which matches the given TA/TID pair. The caller must
+ * hold rx_reorder_tbl_lock, before calling this function.
  */
 struct mwifiex_rx_reorder_tbl *
 mwifiex_11n_get_rx_reorder_tbl(struct mwifiex_private *priv, int tid, u8 *ta)
 {
 	struct mwifiex_rx_reorder_tbl *tbl;
-	unsigned long flags;
 
-	spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
 	list_for_each_entry(tbl, &priv->rx_reorder_tbl_ptr, list) {
 		if (!memcmp(tbl->ta, ta, ETH_ALEN) && tbl->tid == tid) {
-			spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock,
-					       flags);
 			return tbl;
 		}
 	}
-	spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 
 	return NULL;
 }
@@ -355,11 +351,14 @@ void mwifiex_11n_del_rx_reorder_tbl_by_ta(struct mwifiex_private *priv, u8 *ta)
 	 * If we get a TID, ta pair which is already present dispatch all the
 	 * the packets and move the window size until the ssn
 	 */
+	spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
 	tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid, ta);
 	if (tbl) {
 		mwifiex_11n_dispatch_pkt_until_start_win(priv, tbl, seq_num);
+		spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 		return;
 	}
+	spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 	/* if !tbl then create one */
 	new_node = kzalloc(sizeof(struct mwifiex_rx_reorder_tbl), GFP_KERNEL);
 	if (!new_node)
@@ -571,15 +570,19 @@ int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *priv,
 	u16 pkt_index;
 	bool init_window_shift = false;
 	int ret = 0;
+	unsigned long flags;
 
+	spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
 	tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid, ta);
 	if (!tbl) {
+		spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 		if (pkt_type != PKT_TYPE_BAR)
 			mwifiex_11n_dispatch_pkt(priv, payload);
 		return ret;
 	}
 
 	if ((pkt_type == PKT_TYPE_AMSDU) && !tbl->amsdu) {
+		spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 		mwifiex_11n_dispatch_pkt(priv, payload);
 		return ret;
 	}
@@ -666,6 +669,8 @@ int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *priv,
 	if (!tbl->timer_context.timer_is_set ||
 	    prev_start_win != tbl->start_win)
 		mwifiex_11n_rxreorder_timer_restart(tbl);
+
+	spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 	return ret;
 }
 
@@ -683,6 +688,7 @@ int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *priv,
 	struct mwifiex_ra_list_tbl *ra_list;
 	u8 cleanup_rx_reorder_tbl;
 	int tid_down;
+	unsigned long flags;
 
 	if (type == TYPE_DELBA_RECEIVE)
 		cleanup_rx_reorder_tbl = (initiator) ? true : false;
@@ -693,14 +699,18 @@ int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *priv,
 		    peer_mac, tid, initiator);
 
 	if (cleanup_rx_reorder_tbl) {
+		spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
 		tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid, peer_mac);
 		if (!tbl) {
+			spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock,
+					       flags);
 			mwifiex_dbg(priv->adapter, EVENT,
 				    "event: TID, TA not found in table\n");
 			return;
 		}
 
 		mwifiex_del_rx_reorder_entry(priv, tbl);
+		spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 	} else {
 		ptx_tbl = mwifiex_get_ba_tbl(priv, tid, peer_mac);
 		if (!ptx_tbl) {
@@ -732,6 +742,7 @@ int mwifiex_ret_11n_addba_resp(struct mwifiex_private *priv,
 	int tid, win_size;
 	struct mwifiex_rx_reorder_tbl *tbl;
 	uint16_t block_ack_param_set;
+	unsigned long flags;
 
 	block_ack_param_set = le16_to_cpu(add_ba_rsp->block_ack_param_set);
 
@@ -745,17 +756,20 @@ int mwifiex_ret_11n_addba_resp(struct mwifiex_private *priv,
 		mwifiex_dbg(priv->adapter, ERROR,
 			    "ADDBA RSP: failed %pM tid=%d)\n",
 			    add_ba_rsp->peer_mac_addr, tid);
+		spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
 		tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid,
 						     add_ba_rsp->peer_mac_addr);
 		if (tbl)
 			mwifiex_del_rx_reorder_entry(priv, tbl);
+		spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 
 		return 0;
 	}
 
 	win_size = (block_ack_param_set & IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK)
 		    >> BLOCKACKPARAM_WINSIZE_POS;
 
+	spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
 	tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid,
 					     add_ba_rsp->peer_mac_addr);
 	if (tbl) {
@@ -766,6 +780,7 @@ int mwifiex_ret_11n_addba_resp(struct mwifiex_private *priv,
 		else
 			tbl->amsdu = false;
 	}
+	spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 
 	mwifiex_dbg(priv->adapter, CMD,
 		    "cmd: ADDBA RSP: %pM tid=%d ssn=%d win_size=%d\n",
@@ -806,9 +821,7 @@ void mwifiex_11n_cleanup_reorder_tbl(struct mwifiex_private *priv)
 	spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
 	list_for_each_entry_safe(del_tbl_ptr, tmp_node,
 				 &priv->rx_reorder_tbl_ptr, list) {
-		spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 		mwifiex_del_rx_reorder_entry(priv, del_tbl_ptr);
-		spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
 	}
 	INIT_LIST_HEAD(&priv->rx_reorder_tbl_ptr);
 	spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
@@ -933,6 +946,7 @@ void mwifiex_11n_rxba_sync_event(struct mwifiex_private *priv,
 	int tlv_buf_left = len;
 	int ret;
 	u8 *tmp;
+	unsigned long flags;
 
 	mwifiex_dbg_dump(priv->adapter, EVT_D, "RXBA_SYNC event:",
 			 event_buf, len);
@@ -952,14 +966,18 @@ void mwifiex_11n_rxba_sync_event(struct mwifiex_private *priv,
 			    tlv_rxba->mac, tlv_rxba->tid, tlv_seq_num,
 			    tlv_bitmap_len);
 
+		spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
 		rx_reor_tbl_ptr =
 			mwifiex_11n_get_rx_reorder_tbl(priv, tlv_rxba->tid,
 						       tlv_rxba->mac);
 		if (!rx_reor_tbl_ptr) {
+			spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock,
+					       flags);
 			mwifiex_dbg(priv->adapter, ERROR,
 				    "Can not find rx_reorder_tbl!");
 			return;
 		}
+		spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 
 		for (i = 0; i < tlv_bitmap_len; i++) {
 			for (j = 0 ; j < 8; j++) {
diff --git a/drivers/net/wireless/marvell/mwifiex/uap_txrx.c b/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
index 1e6a62c..21dc14f 100644
--- a/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
+++ b/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
@@ -421,12 +421,15 @@ int mwifiex_process_uap_rx_packet(struct mwifiex_private *priv,
 		spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
 	}
 
+	spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
 	if (!priv->ap_11n_enabled ||
 	    (!mwifiex_11n_get_rx_reorder_tbl(priv, uap_rx_pd->priority, ta) &&
 	     (le16_to_cpu(uap_rx_pd->rx_pkt_type) != PKT_TYPE_AMSDU))) {
+		spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 		ret = mwifiex_handle_uap_rx_forward(priv, skb);
 		return ret;
 	}
+	spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
 
 	/* Reorder and send to kernel */
 	pkt_type = (u8)le16_to_cpu(uap_rx_pd->rx_pkt_type);
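
For reviewers, here is a minimal sketch of the locking pattern this
patch enforces. It is not part of the patch: the two example_*
wrapper functions are hypothetical, while the lock, list, types and
helpers (rx_reorder_tbl_lock, rx_reorder_tbl_ptr,
mwifiex_11n_get_rx_reorder_tbl(), mwifiex_del_rx_reorder_entry(),
mwifiex_11n_dispatch_pkt_until_start_win()) are the driver's own,
and the usual mwifiex driver headers are assumed.

/* Rule 1: look up the entry and dereference it inside a single
 * critical section. mwifiex_11n_get_rx_reorder_tbl() no longer
 * takes the lock itself, so the caller must hold it across both
 * the lookup and every use of the returned pointer.
 */
static void example_flush_entry(struct mwifiex_private *priv,
				int tid, u8 *ta, u16 seq_num)
{
	struct mwifiex_rx_reorder_tbl *tbl;
	unsigned long flags;

	spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
	tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid, ta);
	if (tbl) {
		/* safe: tbl cannot be freed while the lock is held */
		mwifiex_11n_dispatch_pkt_until_start_win(priv, tbl, seq_num);
	}
	spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
}

/* Rule 2: walk the whole list under one lock hold. Dropping and
 * retaking the lock per element, as the old code did, lets the
 * list mutate underneath list_for_each_entry_safe().
 */
static void example_cleanup_all(struct mwifiex_private *priv)
{
	struct mwifiex_rx_reorder_tbl *tbl, *tmp;
	unsigned long flags;

	spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
	list_for_each_entry_safe(tbl, tmp, &priv->rx_reorder_tbl_ptr, list)
		mwifiex_del_rx_reorder_entry(priv, tbl);
	INIT_LIST_HEAD(&priv->rx_reorder_tbl_ptr);
	spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
}

Keeping lookup and use in one critical section is what makes the
returned pointer safe to dereference; the early-unlock-then-return
paths in the diff follow the same rule on each exit.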