From patchwork Mon Feb 20 12:53:43 2017
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 9582749
From: Jisheng Zhang <jszhang@marvell.com>
Subject: [PATCH net-next v3 3/4] net: mvneta: avoid reading from tx_desc as much as possible
Date: Mon, 20 Feb 2017 20:53:43 +0800
Message-ID: <20170220125344.3555-4-jszhang@marvell.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170220125344.3555-1-jszhang@marvell.com>
References: <20170220125344.3555-1-jszhang@marvell.com>
Cc: Jisheng Zhang <jszhang@marvell.com>, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

In hot code paths such as mvneta_tx() and mvneta_txq_bufs_free() we
access tx_desc several times. The descriptors are allocated by
dma_alloc_coherent(), so they are uncacheable if the device isn't
cache-coherent, and reading from uncached memory is fairly slow. So use
local variables to store what we need and avoid the extra reads from
uncached memory.

We get the following performance data on Marvell BG4CT platforms
(tested with iperf):

before the patch:
sending 1GB in mvneta_tx() (TSO disabled) costs 793553760ns

after the patch:
sending 1GB in mvneta_tx() (TSO disabled) costs 719953800ns

This saves 9.2% of the time.
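The pattern, in isolation (an illustrative fragment modelled on the
mvneta_txq_bufs_free() hunk below, not verbatim driver code; the
surrounding loop and error handling are omitted):

	/* tx_desc points into a ring allocated with dma_alloc_coherent();
	 * on a non-cache-coherent platform that memory is mapped uncached,
	 * so every dereference of tx_desc is a slow uncached read.
	 */

	/* before: buf_phys_addr is read from uncached memory twice */
	if (!IS_TSO_HEADER(txq, tx_desc->buf_phys_addr))
		dma_unmap_single(pp->dev->dev.parent,
				 tx_desc->buf_phys_addr,
				 tx_desc->data_size, DMA_TO_DEVICE);

	/* after: read it once into a local variable (register/stack,
	 * cacheable) and reuse the local for both the test and the unmap
	 */
	u32 dma_addr = tx_desc->buf_phys_addr;

	if (!IS_TSO_HEADER(txq, dma_addr))
		dma_unmap_single(pp->dev->dev.parent, dma_addr,
				 tx_desc->data_size, DMA_TO_DEVICE);

On the transmit side the same idea is applied to writes as well:
mvneta_tx() and mvneta_tso_put_data() now map the buffer into a local
dma_addr first and write tx_desc->buf_phys_addr exactly once, after the
dma_mapping_error() check, instead of writing the descriptor and then
reading it back.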
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 50 ++++++++++++++++++-----------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index a25042801eec..b6cda4131c78 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1770,6 +1770,7 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 		struct mvneta_tx_desc *tx_desc = txq->descs +
 						 txq->txq_get_index;
 		struct sk_buff *skb = txq->tx_skb[txq->txq_get_index];
+		u32 dma_addr = tx_desc->buf_phys_addr;
 
 		if (skb) {
 			bytes_compl += skb->len;
@@ -1778,9 +1779,8 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 
 		mvneta_txq_inc_get(txq);
 
-		if (!IS_TSO_HEADER(txq, tx_desc->buf_phys_addr))
-			dma_unmap_single(pp->dev->dev.parent,
-					 tx_desc->buf_phys_addr,
+		if (!IS_TSO_HEADER(txq, dma_addr))
+			dma_unmap_single(pp->dev->dev.parent, dma_addr,
 					 tx_desc->data_size, DMA_TO_DEVICE);
 		if (!skb)
 			continue;
@@ -2191,17 +2191,18 @@ mvneta_tso_put_data(struct net_device *dev, struct mvneta_tx_queue *txq,
 		    struct sk_buff *skb, char *data, int size,
 		    bool last_tcp, bool is_last)
 {
 	struct mvneta_tx_desc *tx_desc;
+	dma_addr_t dma_addr;
 
 	tx_desc = mvneta_txq_next_desc_get(txq);
 	tx_desc->data_size = size;
-	tx_desc->buf_phys_addr = dma_map_single(dev->dev.parent, data,
-						size, DMA_TO_DEVICE);
-	if (unlikely(dma_mapping_error(dev->dev.parent,
-		     tx_desc->buf_phys_addr))) {
+
+	dma_addr = dma_map_single(dev->dev.parent, data, size, DMA_TO_DEVICE);
+	if (unlikely(dma_mapping_error(dev->dev.parent, dma_addr))) {
 		mvneta_txq_desc_put(txq);
 		return -ENOMEM;
 	}
+	tx_desc->buf_phys_addr = dma_addr;
 
 	tx_desc->command = 0;
 	txq->tx_skb[txq->txq_put_index] = NULL;
@@ -2278,9 +2279,10 @@ static int mvneta_tx_tso(struct sk_buff *skb, struct net_device *dev,
 	 */
 	for (i = desc_count - 1; i >= 0; i--) {
 		struct mvneta_tx_desc *tx_desc = txq->descs + i;
-		if (!IS_TSO_HEADER(txq, tx_desc->buf_phys_addr))
+		u32 dma_addr = tx_desc->buf_phys_addr;
+		if (!IS_TSO_HEADER(txq, dma_addr))
 			dma_unmap_single(pp->dev->dev.parent,
-					 tx_desc->buf_phys_addr,
+					 dma_addr,
 					 tx_desc->data_size, DMA_TO_DEVICE);
 		mvneta_txq_desc_put(txq);
 	}
@@ -2296,21 +2298,20 @@ static int mvneta_tx_frag_process(struct mvneta_port *pp, struct sk_buff *skb,
 	int i, nr_frags = skb_shinfo(skb)->nr_frags;
 
 	for (i = 0; i < nr_frags; i++) {
+		dma_addr_t dma_addr;
 		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 		void *addr = page_address(frag->page.p) + frag->page_offset;
 
 		tx_desc = mvneta_txq_next_desc_get(txq);
 		tx_desc->data_size = frag->size;
 
-		tx_desc->buf_phys_addr =
-			dma_map_single(pp->dev->dev.parent, addr,
-				       tx_desc->data_size, DMA_TO_DEVICE);
-
-		if (dma_mapping_error(pp->dev->dev.parent,
-				      tx_desc->buf_phys_addr)) {
+		dma_addr = dma_map_single(pp->dev->dev.parent, addr,
+					  frag->size, DMA_TO_DEVICE);
+		if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
 			mvneta_txq_desc_put(txq);
 			goto error;
 		}
+		tx_desc->buf_phys_addr = dma_addr;
 
 		if (i == nr_frags - 1) {
 			/* Last descriptor */
@@ -2351,7 +2352,8 @@ static int mvneta_tx(struct sk_buff *skb, struct net_device *dev)
 	struct mvneta_tx_desc *tx_desc;
 	int len = skb->len;
 	int frags = 0;
-	u32 tx_cmd;
+	u32 tx_cmd, size;
+	dma_addr_t dma_addr;
 
 	if (!netif_running(dev))
 		goto out;
@@ -2368,17 +2370,17 @@ static int mvneta_tx(struct sk_buff *skb, struct net_device *dev)
 
 	tx_cmd = mvneta_skb_tx_csum(pp, skb);
 
-	tx_desc->data_size = skb_headlen(skb);
+	size = skb_headlen(skb);
+	tx_desc->data_size = size;
 
-	tx_desc->buf_phys_addr = dma_map_single(dev->dev.parent, skb->data,
-						tx_desc->data_size,
-						DMA_TO_DEVICE);
-	if (unlikely(dma_mapping_error(dev->dev.parent,
-				       tx_desc->buf_phys_addr))) {
+	dma_addr = dma_map_single(dev->dev.parent, skb->data,
+				  size, DMA_TO_DEVICE);
+	if (unlikely(dma_mapping_error(dev->dev.parent, dma_addr))) {
 		mvneta_txq_desc_put(txq);
 		frags = 0;
 		goto out;
 	}
+	tx_desc->buf_phys_addr = dma_addr;
 
 	if (frags == 1) {
 		/* First and Last descriptor */
@@ -2395,8 +2397,8 @@ static int mvneta_tx(struct sk_buff *skb, struct net_device *dev)
 		/* Continue with other skb fragments */
 		if (mvneta_tx_frag_process(pp, skb, txq)) {
 			dma_unmap_single(dev->dev.parent,
-					 tx_desc->buf_phys_addr,
-					 tx_desc->data_size,
+					 dma_addr,
+					 size,
 					 DMA_TO_DEVICE);
 			mvneta_txq_desc_put(txq);
 			frags = 0;