From patchwork Sat Mar 15 00:30:59 2014
From: Sergei Shtylyov
Organization: Cogent Embedded
To: netdev@vger.kernel.org
Cc: linux-sh@vger.kernel.org, joe@perches.com
Subject: [PATCH 3/3] sh_eth: fold netif_msg_*() and netdev_*() calls into netif_*() invocations
Date: Sat, 15 Mar 2014 03:30:59 +0300
Message-Id: <201403150330.59864.sergei.shtylyov@cogentembedded.com>
In-Reply-To: <201403150321.47674.sergei.shtylyov@cogentembedded.com>
References: <201403150321.47674.sergei.shtylyov@cogentembedded.com>

Now that we call netdev_*() under netif_msg_*() checks, we can fold these
into netif_*() macro invocations.
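
For reference, the netif_*() helpers from <linux/netdevice.h> already combine
the message-level check with the corresponding netdev_*() call, so the
open-coded pattern being removed is equivalent to what the macro expands to.
The outline below is only a sketch of that equivalence; the actual
definitions in netdevice.h may differ in detail.

/* Sketch only: netif_err() checks the priv's message level for the given
 * type and, if enabled, emits the message via netdev_err().
 */
#define netif_err(priv, type, dev, fmt, args...)		\
do {								\
	if (netif_msg_##type(priv))				\
		netdev_err(dev, fmt, ##args);			\
} while (0)

So, for example, the open-coded form

	if (netif_msg_tx_err(mdp))
		netdev_err(ndev, "Transmit Abort\n");

becomes the single call

	netif_err(mdp, tx_err, ndev, "Transmit Abort\n");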
Suggested-by: Joe Perches
Signed-off-by: Sergei Shtylyov
---
 drivers/net/ethernet/renesas/sh_eth.c | 33 +++++++++++----------------------
 1 file changed, 11 insertions(+), 22 deletions(-)

Index: net-next/drivers/net/ethernet/renesas/sh_eth.c
===================================================================
--- net-next.orig/drivers/net/ethernet/renesas/sh_eth.c
+++ net-next/drivers/net/ethernet/renesas/sh_eth.c
@@ -1557,8 +1557,7 @@ ignore_link:
 		/* Unused write back interrupt */
 		if (intr_status & EESR_TABT) {	/* Transmit Abort int */
 			ndev->stats.tx_aborted_errors++;
-			if (netif_msg_tx_err(mdp))
-				netdev_err(ndev, "Transmit Abort\n");
+			netif_err(mdp, tx_err, ndev, "Transmit Abort\n");
 		}
 	}
 
@@ -1567,45 +1566,38 @@ ignore_link:
 		if (intr_status & EESR_RFRMER) {
 			/* Receive Frame Overflow int */
 			ndev->stats.rx_frame_errors++;
-			if (netif_msg_rx_err(mdp))
-				netdev_err(ndev, "Receive Abort\n");
+			netif_err(mdp, rx_err, ndev, "Receive Abort\n");
 		}
 	}
 
 	if (intr_status & EESR_TDE) {
 		/* Transmit Descriptor Empty int */
 		ndev->stats.tx_fifo_errors++;
-		if (netif_msg_tx_err(mdp))
-			netdev_err(ndev, "Transmit Descriptor Empty\n");
+		netif_err(mdp, tx_err, ndev, "Transmit Descriptor Empty\n");
 	}
 
 	if (intr_status & EESR_TFE) {
 		/* FIFO under flow */
 		ndev->stats.tx_fifo_errors++;
-		if (netif_msg_tx_err(mdp))
-			netdev_err(ndev, "Transmit FIFO Under flow\n");
+		netif_err(mdp, tx_err, ndev, "Transmit FIFO Under flow\n");
 	}
 
 	if (intr_status & EESR_RDE) {
 		/* Receive Descriptor Empty int */
 		ndev->stats.rx_over_errors++;
-
-		if (netif_msg_rx_err(mdp))
-			netdev_err(ndev, "Receive Descriptor Empty\n");
+		netif_err(mdp, rx_err, ndev, "Receive Descriptor Empty\n");
 	}
 
 	if (intr_status & EESR_RFE) {
 		/* Receive FIFO Overflow int */
 		ndev->stats.rx_fifo_errors++;
-		if (netif_msg_rx_err(mdp))
-			netdev_err(ndev, "Receive FIFO Overflow\n");
+		netif_err(mdp, rx_err, ndev, "Receive FIFO Overflow\n");
 	}
 
 	if (!mdp->cd->no_ade && (intr_status & EESR_ADE)) {
 		/* Address Error */
 		ndev->stats.tx_fifo_errors++;
-		if (netif_msg_tx_err(mdp))
-			netdev_err(ndev, "Address Error\n");
+		netif_err(mdp, tx_err, ndev, "Address Error\n");
 	}
 
 	mask = EESR_TWB | EESR_TABT | EESR_ADE | EESR_TDE | EESR_TFE;
@@ -2064,11 +2056,9 @@ static void sh_eth_tx_timeout(struct net
 
 	netif_stop_queue(ndev);
 
-	if (netif_msg_timer(mdp)) {
-		netdev_err(ndev,
-			   "transmit timed out, status %8.8x, resetting...\n",
-			   (int)sh_eth_read(ndev, EESR));
-	}
+	netif_err(mdp, timer, ndev,
+		  "transmit timed out, status %8.8x, resetting...\n",
+		  (int)sh_eth_read(ndev, EESR));
 
 	/* tx_errors count up */
 	ndev->stats.tx_errors++;
@@ -2103,8 +2093,7 @@ static int sh_eth_start_xmit(struct sk_b
 	spin_lock_irqsave(&mdp->lock, flags);
 	if ((mdp->cur_tx - mdp->dirty_tx) >= (mdp->num_tx_ring - 4)) {
 		if (!sh_eth_txfree(ndev)) {
-			if (netif_msg_tx_queued(mdp))
-				netdev_warn(ndev, "TxFD exhausted.\n");
+			netif_warn(mdp, tx_queued, ndev, "TxFD exhausted.\n");
 			netif_stop_queue(ndev);
 			spin_unlock_irqrestore(&mdp->lock, flags);
 			return NETDEV_TX_BUSY;