From patchwork Thu Nov 13 07:05:01 2014
X-Patchwork-Submitter: Yoshihiro Kaneko
X-Patchwork-Id: 5293911
X-Patchwork-Delegate: geert@linux-m68k.org
From: Yoshihiro Kaneko
To: netdev@vger.kernel.org
Cc: "David S. Miller", Simon Horman, Magnus Damm, linux-sh@vger.kernel.org
Subject: [PATCH 3/3] sh_eth: Fix dma mapping issue
Date: Thu, 13 Nov 2014 16:05:01 +0900
Message-Id: <1415862301-28032-4-git-send-email-ykaneko0929@gmail.com>
In-Reply-To: <1415862301-28032-1-git-send-email-ykaneko0929@gmail.com>
References: <1415862301-28032-1-git-send-email-ykaneko0929@gmail.com>
X-Mailing-List: linux-sh@vger.kernel.org

From: Mitsuhiro Kimura

When CONFIG_DMA_API_DEBUG=y, the driver triggers many DMA API debug error
messages. This patch fixes the following issues so that DMA debugging can
be used:

Issue 1: If dma_mapping_error() is not called after a DMA mapping, DMA
debug reports an error when the corresponding DMA unmap function is
called.

Issue 2: If skb_reserve() is called after DMA mapping, the recorded
mapping address no longer matches the mapping size. DMA debug then
reports errors when the DMA sync and DMA unmap functions are called.

Issue 3: If the frame data is shorter than ETH_ZLEN, its size is padded
to ETH_ZLEN after dma_map_single() has already been called. The TX skb
freeing path then unmaps with the padded size, so DMA debug reports an
unmap size mismatch.
Issue 4: In the RX function, dma_map_single() is called without a
matching DMA unmap when an RX skb is reallocated. This eventually
exhausts the DMA debug entries, at which point the DMA debug logic is
stopped.

Signed-off-by: Mitsuhiro Kimura
Signed-off-by: Yoshihiro Kaneko
---
 drivers/net/ethernet/renesas/sh_eth.c | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
index 0e4a407..23318cf 100644
--- a/drivers/net/ethernet/renesas/sh_eth.c
+++ b/drivers/net/ethernet/renesas/sh_eth.c
@@ -1136,6 +1136,11 @@ static void sh_eth_ring_format(struct net_device *ndev)
 		dma_map_single(&ndev->dev, skb->data, rxdesc->buffer_length,
 			       DMA_FROM_DEVICE);
 		rxdesc->addr = virt_to_phys(skb->data);
+		if (dma_mapping_error(&ndev->dev, rxdesc->addr)) {
+			dev_kfree_skb(mdp->rx_skbuff[i]);
+			mdp->rx_skbuff[i] = NULL;
+			break;
+		}
 		rxdesc->status = cpu_to_edmac(mdp, RD_RACT | RD_RFP);

 		/* Rx descriptor address set */
@@ -1364,7 +1369,7 @@ static int sh_eth_txfree(struct net_device *ndev)
 		if (mdp->tx_skbuff[entry]) {
 			dma_unmap_single(&ndev->dev, txdesc->addr,
 					 txdesc->buffer_length, DMA_TO_DEVICE);
-			dev_kfree_skb_irq(mdp->tx_skbuff[entry]);
+			dev_kfree_skb_any(mdp->tx_skbuff[entry]);
 			mdp->tx_skbuff[entry] = NULL;
 			free_num++;
 		}
@@ -1466,11 +1471,19 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status, int *quota)
 			if (skb == NULL)
 				break;	/* Better luck next round. */
 			sh_eth_set_receive_align(skb);
+			dma_unmap_single(&ndev->dev, rxdesc->addr,
+					 rxdesc->buffer_length,
+					 DMA_FROM_DEVICE);
 			dma_map_single(&ndev->dev, skb->data,
 				       rxdesc->buffer_length,
 				       DMA_FROM_DEVICE);
 			skb_checksum_none_assert(skb);
 			rxdesc->addr = virt_to_phys(skb->data);
+			if (dma_mapping_error(&ndev->dev, rxdesc->addr)) {
+				dev_kfree_skb_any(mdp->rx_skbuff[entry]);
+				mdp->rx_skbuff[entry] = NULL;
+				break;
+			}
 		}
 		if (entry >= mdp->num_rx_ring - 1)
 			rxdesc->status |=
@@ -2104,12 +2117,18 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	if (!mdp->cd->hw_swap)
 		sh_eth_soft_swap(phys_to_virt(ALIGN(txdesc->addr, 4)),
 				 skb->len + 2);
-	txdesc->addr = dma_map_single(&ndev->dev, skb->data, skb->len,
-				      DMA_TO_DEVICE);
 	if (skb->len < ETH_ZLEN)
 		txdesc->buffer_length = ETH_ZLEN;
 	else
 		txdesc->buffer_length = skb->len;
+	txdesc->addr = dma_map_single(&ndev->dev, skb->data,
+				      txdesc->buffer_length,
+				      DMA_TO_DEVICE);
+	if (dma_mapping_error(&ndev->dev, txdesc->addr)) {
+		dev_kfree_skb_any(mdp->tx_skbuff[entry]);
+		mdp->tx_skbuff[entry] = NULL;
+		goto out;
+	}
 	if (entry >= mdp->num_tx_ring - 1)
 		txdesc->status |= cpu_to_edmac(mdp, TD_TACT | TD_TDLE);
@@ -2121,6 +2140,7 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	if (!(sh_eth_read(ndev, EDTRR) & sh_eth_get_edtrr_trns(mdp)))
 		sh_eth_write(ndev, sh_eth_get_edtrr_trns(mdp), EDTRR);

+out:
 	return NETDEV_TX_OK;
 }