From patchwork Sun May 4 22:43:02 2014
X-Patchwork-Submitter: Soren Brinkmann <soren.brinkmann@xilinx.com>
X-Patchwork-Id: 4111021
From: Soren Brinkmann <soren.brinkmann@xilinx.com>
To: Michal Simek, Nicolas Ferre
Cc: linux-arm-kernel@lists.infradead.org, netdev@vger.kernel.org,
 git@xilinx.com, Sören Brinkmann <soren.brinkmann@xilinx.com>,
 linux-kernel@vger.kernel.org
Subject: [PATCH 5/5] net: macb: Fix race between HW and driver
Date: Sun, 4 May 2014 15:43:02 -0700
Message-Id: <1399243382-12528-6-git-send-email-soren.brinkmann@xilinx.com>
X-Mailer: git-send-email 1.9.2.1.g06c4abd
In-Reply-To: <1399243382-12528-1-git-send-email-soren.brinkmann@xilinx.com>
References: <1399243382-12528-1-git-send-email-soren.brinkmann@xilinx.com>

Under "heavy" RX load, the driver cannot process the descriptors fast
enough. In detail, when a descriptor is consumed, its used flag is
cleared, and once the RX budget is exhausted, all descriptors with a
cleared used flag are prepared to receive more data. Under load,
however, the HW may keep receiving data and reuse descriptors whose
used flag has been cleared before the driver has actually prepared
them for their next use. Since the head and tail pointers into the RX
ring are always valid, the clearing and checking of the used flag can
be omitted.

Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
---
 drivers/net/ethernet/cadence/macb.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c
index 3e13aa31548a..e9daa072ebb4 100644
--- a/drivers/net/ethernet/cadence/macb.c
+++ b/drivers/net/ethernet/cadence/macb.c
@@ -599,25 +599,16 @@ static void gem_rx_refill(struct macb *bp)
 {
 	unsigned int		entry;
 	struct sk_buff		*skb;
-	struct macb_dma_desc	*desc;
 	dma_addr_t		paddr;
 
 	while (CIRC_SPACE(bp->rx_prepared_head, bp->rx_tail, RX_RING_SIZE) > 0) {
-		u32 addr, ctrl;
-
 		entry = macb_rx_ring_wrap(bp->rx_prepared_head);
-		desc = &bp->rx_ring[entry];
 
 		/* Make hw descriptor updates visible to CPU */
 		rmb();
 
-		addr = desc->addr;
-		ctrl = desc->ctrl;
 		bp->rx_prepared_head++;
 
-		if ((addr & MACB_BIT(RX_USED)))
-			continue;
-
 		if (bp->rx_skbuff[entry] == NULL) {
 			/* allocate sk_buff for this free entry in ring */
 			skb = netdev_alloc_skb(bp->dev, bp->rx_buffer_size);
@@ -698,7 +689,6 @@ static int gem_rx(struct macb *bp, int budget)
 		if (!(addr & MACB_BIT(RX_USED)))
 			break;
 
-		desc->addr &= ~MACB_BIT(RX_USED);
 		bp->rx_tail++;
 		count++;
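
For illustration only, here is a minimal, self-contained sketch -- not
the driver code; it merely mirrors the names rx_prepared_head, rx_tail
and CIRC_SPACE() -- of the bookkeeping the change relies on: with
free-running head/tail counters, the index arithmetic alone tells the
refill path which entries it owns, so no per-descriptor used flag has
to be cleared and re-checked there.

/*
 * Stand-alone sketch, NOT drivers/net/ethernet/cadence/macb.c.
 * Only the counter arithmetic is modelled.
 */
#include <stdio.h>

#define RX_RING_SIZE	8	/* power of two, as in the driver */
#define RING_WRAP(i)	((i) & (RX_RING_SIZE - 1))
/* Same arithmetic as CIRC_SPACE() in <linux/circ_buf.h>. */
#define CIRC_SPACE(head, tail, size) \
	(((tail) - ((head) + 1)) & ((size) - 1))

static unsigned int rx_prepared_head;	/* next entry to hand back to HW */
static unsigned int rx_tail;		/* next entry the driver consumes */

/* Refill: prepare every entry between prepared_head and tail. */
static void rx_refill(void)
{
	while (CIRC_SPACE(rx_prepared_head, rx_tail, RX_RING_SIZE) > 0) {
		unsigned int entry = RING_WRAP(rx_prepared_head);

		rx_prepared_head++;
		printf("  prepare entry %u for HW\n", entry);
	}
}

/* Consume: pretend the HW filled 'n' entries and the driver took them. */
static void rx_consume(unsigned int n)
{
	while (n--) {
		printf("  consume entry %u\n", RING_WRAP(rx_tail));
		rx_tail++;
	}
}

int main(void)
{
	puts("initial refill:");
	rx_refill();

	puts("consume 3, refill:");
	rx_consume(3);
	rx_refill();

	puts("consume 5, refill:");
	rx_consume(5);
	rx_refill();

	return 0;
}

Running it just prints the prepare/consume sequence; the point is that
the refill loop bounds come entirely from the two counters, which is
why the RX_USED clearing and checking removed above is not needed to
keep producer and consumer apart.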