From patchwork Thu Apr 7 22:54:08 2016
X-Patchwork-Submitter: John Crispin
X-Patchwork-Id: 8781731
From: John Crispin
To: "David S. Miller"
Subject: [PATCH V3 5/8] net: mediatek: fix TX locking
Date: Fri, 8 Apr 2016 00:54:08 +0200
Message-Id: <1460069651-1234-6-git-send-email-blogic@openwrt.org>
In-Reply-To: <1460069651-1234-1-git-send-email-blogic@openwrt.org>
References: <1460069651-1234-1-git-send-email-blogic@openwrt.org>
Cc: Felix Fietkau, netdev@vger.kernel.org, Sean Wang (王志亨),
    linux-kernel@vger.kernel.org, linux-mediatek@lists.infradead.org,
    Matthias Brugger, John Crispin

Inside the TX path there is a lock inside the tx_map function, but it is
taken too late. The patch moves the lock to the start of the xmit function,
right before the free count check of the DMA ring happens. Without this the
code is racy, leading to TX stalls and dropped packets, because there are 2
netdevs running on the same physical DMA ring.
Signed-off-by: John Crispin <blogic@openwrt.org>
---
Changes in V3
* fix a typo in the comment

 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index 4ebc42e..7b76075 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -536,7 +536,6 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
 	struct mtk_eth *eth = mac->hw;
 	struct mtk_tx_dma *itxd, *txd;
 	struct mtk_tx_buf *tx_buf;
-	unsigned long flags;
 	dma_addr_t mapped_addr;
 	unsigned int nr_frags;
 	int i, n_desc = 1;
@@ -568,11 +567,6 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
 	if (unlikely(dma_mapping_error(&dev->dev, mapped_addr)))
 		return -ENOMEM;
 
-	/* normally we can rely on the stack not calling this more than once,
-	 * however we have 2 queues running ont he same ring so we need to lock
-	 * the ring access
-	 */
-	spin_lock_irqsave(&eth->page_lock, flags);
 	WRITE_ONCE(itxd->txd1, mapped_addr);
 	tx_buf->flags |= MTK_TX_FLAGS_SINGLE0;
 	dma_unmap_addr_set(tx_buf, dma_addr0, mapped_addr);
@@ -632,8 +626,6 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
 	WRITE_ONCE(itxd->txd3, (TX_DMA_SWC | TX_DMA_PLEN0(skb_headlen(skb)) |
 				(!nr_frags * TX_DMA_LS0)));
 
-	spin_unlock_irqrestore(&eth->page_lock, flags);
-
 	netdev_sent_queue(dev, skb->len);
 	skb_tx_timestamp(skb);
 
@@ -661,8 +653,6 @@ err_dma:
 		itxd = mtk_qdma_phys_to_virt(ring, itxd->txd2);
 	} while (itxd != txd);
 
-	spin_unlock_irqrestore(&eth->page_lock, flags);
-
 	return -ENOMEM;
 }
 
@@ -712,14 +702,22 @@ static int mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct mtk_eth *eth = mac->hw;
 	struct mtk_tx_ring *ring = &eth->tx_ring;
 	struct net_device_stats *stats = &dev->stats;
+	unsigned long flags;
 	bool gso = false;
 	int tx_num;
 
+	/* normally we can rely on the stack not calling this more than once,
+	 * however we have 2 queues running on the same ring so we need to lock
+	 * the ring access
+	 */
+	spin_lock_irqsave(&eth->page_lock, flags);
+
 	tx_num = mtk_cal_txd_req(skb);
 	if (unlikely(atomic_read(&ring->free_count) <= tx_num)) {
 		mtk_stop_queue(eth);
 		netif_err(eth, tx_queued, dev,
 			  "Tx Ring full when queue awake!\n");
+		spin_unlock_irqrestore(&eth->page_lock, flags);
 		return NETDEV_TX_BUSY;
 	}
 
@@ -747,10 +745,12 @@ static int mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
 			     ring->thresh))
 			mtk_wake_queue(eth);
 	}
+	spin_unlock_irqrestore(&eth->page_lock, flags);
 
 	return NETDEV_TX_OK;
 
 drop:
+	spin_unlock_irqrestore(&eth->page_lock, flags);
 	stats->tx_dropped++;
 	dev_kfree_skb(skb);
 	return NETDEV_TX_OK;
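
The commit message above hinges on a check-then-act race: with two netdevs
feeding one DMA ring, the free count check in mtk_start_xmit and the
descriptor writes in mtk_tx_map must sit under the same lock. Below is a
minimal, self-contained userspace sketch of that pattern in plain C with
pthreads; it is not the driver code, and the names (struct ring,
ring_enqueue, RING_SIZE) are made up for illustration. It mirrors what the
patch does: take the lock before the capacity check, and release it on the
success, busy and drop paths alike.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 8

struct ring {
	pthread_mutex_t lock;	/* plays the role of eth->page_lock */
	int free_count;		/* plays the role of ring->free_count */
	int tail;
	int slots[RING_SIZE];
};

/* Returns false -- the analogue of NETDEV_TX_BUSY -- when the ring is full. */
static bool ring_enqueue(struct ring *r, int pkt)
{
	pthread_mutex_lock(&r->lock);		/* take the lock BEFORE the check */
	if (r->free_count < 1) {
		pthread_mutex_unlock(&r->lock);	/* unlock on the busy path too */
		return false;
	}
	r->slots[r->tail] = pkt;		/* "map" the packet into a slot */
	r->tail = (r->tail + 1) % RING_SIZE;
	r->free_count--;
	pthread_mutex_unlock(&r->lock);
	return true;
}

int main(void)
{
	struct ring r = { .free_count = RING_SIZE };
	int i;

	pthread_mutex_init(&r.lock, NULL);
	for (i = 0; i < RING_SIZE + 2; i++)
		printf("enqueue %d -> %s\n", i,
		       ring_enqueue(&r, i) ? "ok" : "busy");
	return 0;
}

If the capacity check were done outside the lock, two senders could both
observe a free slot and then both write into it, which is the overrun the
patch closes by widening the locked region.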