From patchwork Mon Jul 29 15:00:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Barker X-Patchwork-Id: 13745164 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id D0A3DC49EA1 for ; Mon, 29 Jul 2024 15:00:58 +0000 (UTC) Received: from relmlie6.idc.renesas.com (relmlie6.idc.renesas.com [210.160.252.172]) by mx.groups.io with SMTP id smtpd.web10.58079.1722265254667312243 for ; Mon, 29 Jul 2024 08:00:55 -0700 Authentication-Results: mx.groups.io; dkim=none (message not signed); spf=pass (domain: bp.renesas.com, ip: 210.160.252.172, mailfrom: paul.barker.ct@bp.renesas.com) X-IronPort-AV: E=Sophos;i="6.09,246,1716217200"; d="scan'208";a="217876775" Received: from unknown (HELO relmlir5.idc.renesas.com) ([10.200.68.151]) by relmlie6.idc.renesas.com with ESMTP; 30 Jul 2024 00:00:55 +0900 Received: from ree-du1sdd5.ree.adwin.renesas.com (unknown [10.226.105.7]) by relmlir5.idc.renesas.com (Postfix) with ESMTP id 40EC6401849C; Tue, 30 Jul 2024 00:00:53 +0900 (JST) From: Paul Barker To: cip-dev@lists.cip-project.org, Nobuhiro Iwamatsu , Pavel Machek Cc: Biju Das , Lad Prabhakar Subject: [PATCH 6.1.y cip 1/8] net: ravb: Simplify poll & receive functions Date: Mon, 29 Jul 2024 15:00:42 +0000 Message-Id: <20240729150049.1924352-2-paul.barker.ct@bp.renesas.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> References: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Mon, 29 Jul 2024 15:00:58 -0000 X-Groupsio-URL: https://lists.cip-project.org/g/cip-dev/message/16646 commit 118e640af30c1355cfea1eeaa2960dc9331ee3db upstream. We don't need to pass the work budget to ravb_rx() by reference, it's cleaner to pass this by value and return the amount of work done. This allows us to simplify the ravb_poll() function and use the common `work_done` variable name seen in other network drivers for consistency and ease of understanding. This is a pure refactor and should not affect behaviour. 
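As a reference for the pattern described above, here is a minimal, stand-alone C sketch of the calling convention this refactor moves to: the receive routine takes the NAPI budget by value and returns how many packets it handled, so the poll function can use the conventional work_done variable instead of a by-reference quota. The toy_rx()/toy_poll() names and the packet counts are illustrative only, not driver code.

#include <stdio.h>

/* Pretend receive routine: consume up to 'budget' ready descriptors and
 * report how many packets were actually processed.
 */
static int toy_rx(int budget)
{
        int ready = 3;          /* descriptors pretend-ready on this pass */
        int rx_packets = 0;

        while (rx_packets < budget && rx_packets < ready)
                rx_packets++;

        return rx_packets;
}

/* Poll function in the style the patch adopts: no quota pointer, just
 * return work_done; the caller re-arms interrupts only when
 * work_done < budget.
 */
static int toy_poll(int budget)
{
        int work_done = toy_rx(budget);

        return work_done;
}

int main(void)
{
        printf("work_done = %d (budget 64)\n", toy_poll(64));
        return 0;
}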
Signed-off-by: Paul Barker Reviewed-by: Sergey Shtylyov Signed-off-by: Paolo Abeni Signed-off-by: Paul Barker --- drivers/net/ethernet/renesas/ravb.h | 2 +- drivers/net/ethernet/renesas/ravb_main.c | 27 +++++++++++------------- 2 files changed, 13 insertions(+), 16 deletions(-) diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h index b48935ec7e28..71de2a7aa27c 100644 --- a/drivers/net/ethernet/renesas/ravb.h +++ b/drivers/net/ethernet/renesas/ravb.h @@ -1039,7 +1039,7 @@ struct ravb_ptp { }; struct ravb_hw_info { - bool (*receive)(struct net_device *ndev, int *quota, int q); + int (*receive)(struct net_device *ndev, int budget, int q); void (*set_rate)(struct net_device *ndev); int (*set_feature)(struct net_device *ndev, netdev_features_t features); int (*dmac_init)(struct net_device *ndev); diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index 207c2bf005c0..ac3b56629910 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -761,7 +761,7 @@ static struct sk_buff *ravb_get_skb_gbeth(struct net_device *ndev, int entry, } /* Packet receive function for Gigabit Ethernet */ -static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q) +static int ravb_rx_gbeth(struct net_device *ndev, int budget, int q) { struct ravb_private *priv = netdev_priv(ndev); const struct ravb_hw_info *info = priv->info; @@ -783,7 +783,7 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q) for (i = 0; i < limit; i++, priv->cur_rx[q]++) { entry = priv->cur_rx[q] % priv->num_rx_ring[q]; desc = &priv->rx_ring[q].desc[entry]; - if (rx_packets == *quota || desc->die_dt == DT_FEMPTY) + if (rx_packets == budget || desc->die_dt == DT_FEMPTY) break; /* Descriptor type must be checked before all other reads */ @@ -884,12 +884,11 @@ static bool ravb_rx_gbeth(struct net_device *ndev, int *quota, int q) } stats->rx_packets += rx_packets; - *quota -= rx_packets; - return *quota == 0; + return rx_packets; } /* Packet receive function for Ethernet AVB */ -static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q) +static int ravb_rx_rcar(struct net_device *ndev, int budget, int q) { struct ravb_private *priv = netdev_priv(ndev); const struct ravb_hw_info *info = priv->info; @@ -908,7 +907,7 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q) for (i = 0; i < limit; i++, priv->cur_rx[q]++) { entry = priv->cur_rx[q] % priv->num_rx_ring[q]; desc = &priv->rx_ring[q].ex_desc[entry]; - if (rx_packets == *quota || desc->die_dt == DT_FEMPTY) + if (rx_packets == budget || desc->die_dt == DT_FEMPTY) break; /* Descriptor type must be checked before all other reads */ @@ -994,17 +993,16 @@ static bool ravb_rx_rcar(struct net_device *ndev, int *quota, int q) } stats->rx_packets += rx_packets; - *quota -= rx_packets; - return *quota == 0; + return rx_packets; } /* Packet receive function for Ethernet AVB */ -static bool ravb_rx(struct net_device *ndev, int *quota, int q) +static int ravb_rx(struct net_device *ndev, int budget, int q) { struct ravb_private *priv = netdev_priv(ndev); const struct ravb_hw_info *info = priv->info; - return info->receive(ndev, quota, q); + return info->receive(ndev, budget, q); } static void ravb_rcv_snd_disable(struct net_device *ndev) @@ -1321,13 +1319,12 @@ static int ravb_poll(struct napi_struct *napi, int budget) unsigned long flags; int q = napi - priv->napi; int mask = BIT(q); - int quota = budget; - bool unmask; + int 
work_done; /* Processing RX Descriptor Ring */ /* Clear RX interrupt */ ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0); - unmask = !ravb_rx(ndev, &quota, q); + work_done = ravb_rx(ndev, budget, q); /* Processing TX Descriptor Ring */ spin_lock_irqsave(&priv->lock, flags); @@ -1346,7 +1343,7 @@ static int ravb_poll(struct napi_struct *napi, int budget) if (priv->rx_fifo_errors != ndev->stats.rx_fifo_errors) ndev->stats.rx_fifo_errors = priv->rx_fifo_errors; - if (!unmask) + if (work_done == budget) goto out; napi_complete(napi); @@ -1363,7 +1360,7 @@ static int ravb_poll(struct napi_struct *napi, int budget) spin_unlock_irqrestore(&priv->lock, flags); out: - return budget - quota; + return work_done; } static void ravb_set_duplex_gbeth(struct net_device *ndev) From patchwork Mon Jul 29 15:00:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Barker X-Patchwork-Id: 13745163 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id C55BAC3DA61 for ; Mon, 29 Jul 2024 15:00:58 +0000 (UTC) Received: from relmlie5.idc.renesas.com (relmlie5.idc.renesas.com [210.160.252.171]) by mx.groups.io with SMTP id smtpd.web10.58080.1722265258132494915 for ; Mon, 29 Jul 2024 08:00:58 -0700 Authentication-Results: mx.groups.io; dkim=none (message not signed); spf=pass (domain: bp.renesas.com, ip: 210.160.252.171, mailfrom: paul.barker.ct@bp.renesas.com) X-IronPort-AV: E=Sophos;i="6.09,246,1716217200"; d="scan'208";a="213907139" Received: from unknown (HELO relmlir5.idc.renesas.com) ([10.200.68.151]) by relmlie5.idc.renesas.com with ESMTP; 30 Jul 2024 00:00:57 +0900 Received: from ree-du1sdd5.ree.adwin.renesas.com (unknown [10.226.105.7]) by relmlir5.idc.renesas.com (Postfix) with ESMTP id 7DE0F4018253; Tue, 30 Jul 2024 00:00:55 +0900 (JST) From: Paul Barker To: cip-dev@lists.cip-project.org, Nobuhiro Iwamatsu , Pavel Machek Cc: Biju Das , Lad Prabhakar Subject: [PATCH 6.1.y cip 2/8] net: ravb: Align poll function with NAPI docs Date: Mon, 29 Jul 2024 15:00:43 +0000 Message-Id: <20240729150049.1924352-3-paul.barker.ct@bp.renesas.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> References: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Mon, 29 Jul 2024 15:00:58 -0000 X-Groupsio-URL: https://lists.cip-project.org/g/cip-dev/message/16647 commit b0e0e20dc60e9e37b7e5bc71f0c912f66ad75529 upstream. Align ravb_poll() with the documentation in `Documentation/networking/kapi.rst` and `Documentation/networking/napi.rst`. The documentation says that we should prefer napi_complete_done() over napi_complete(), and using the former allows us to properly support busy polling. We should ensure that napi_complete_done() is only called if the work budget has not been exhausted, and we should only re-arm interrupts if it returns true.
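As a rough sketch of the completion pattern the NAPI documentation recommends (and which this patch adopts), with my_rx() and my_irq_enable() standing in for driver-specific helpers rather than real ravb functions:

static int my_poll(struct napi_struct *napi, int budget)
{
        int work_done = my_rx(napi, budget);    /* driver-specific RX processing */

        /* Complete NAPI only when the budget was not exhausted, and only
         * re-enable device interrupts when napi_complete_done() returns
         * true (it returns false when, e.g., busy polling still owns the
         * instance).
         */
        if (work_done < budget && napi_complete_done(napi, work_done))
                my_irq_enable(napi);

        return work_done;
}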
Signed-off-by: Paul Barker Reviewed-by: Sergey Shtylyov Signed-off-by: Paolo Abeni Signed-off-by: Paul Barker --- drivers/net/ethernet/renesas/ravb_main.c | 26 ++++++++++-------------- 1 file changed, 11 insertions(+), 15 deletions(-) diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index ac3b56629910..3e4769af3c84 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -1343,23 +1343,19 @@ static int ravb_poll(struct napi_struct *napi, int budget) if (priv->rx_fifo_errors != ndev->stats.rx_fifo_errors) ndev->stats.rx_fifo_errors = priv->rx_fifo_errors; - if (work_done == budget) - goto out; - - napi_complete(napi); - - /* Re-enable RX/TX interrupts */ - spin_lock_irqsave(&priv->lock, flags); - if (!info->irq_en_dis) { - ravb_modify(ndev, RIC0, mask, mask); - ravb_modify(ndev, TIC, mask, mask); - } else { - ravb_write(ndev, mask, RIE0); - ravb_write(ndev, mask, TIE); + if (work_done < budget && napi_complete_done(napi, work_done)) { + /* Re-enable RX/TX interrupts */ + spin_lock_irqsave(&priv->lock, flags); + if (!info->irq_en_dis) { + ravb_modify(ndev, RIC0, mask, mask); + ravb_modify(ndev, TIC, mask, mask); + } else { + ravb_write(ndev, mask, RIE0); + ravb_write(ndev, mask, TIE); + } + spin_unlock_irqrestore(&priv->lock, flags); } - spin_unlock_irqrestore(&priv->lock, flags); -out: return work_done; } From patchwork Mon Jul 29 15:00:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Barker X-Patchwork-Id: 13745166 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id C5E80C3DA7E for ; Mon, 29 Jul 2024 15:01:08 +0000 (UTC) Received: from relmlie5.idc.renesas.com (relmlie5.idc.renesas.com [210.160.252.171]) by mx.groups.io with SMTP id smtpd.web10.58080.1722265258132494915 for ; Mon, 29 Jul 2024 08:01:00 -0700 Authentication-Results: mx.groups.io; dkim=none (message not signed); spf=pass (domain: bp.renesas.com, ip: 210.160.252.171, mailfrom: paul.barker.ct@bp.renesas.com) X-IronPort-AV: E=Sophos;i="6.09,246,1716217200"; d="scan'208";a="213907143" Received: from unknown (HELO relmlir5.idc.renesas.com) ([10.200.68.151]) by relmlie5.idc.renesas.com with ESMTP; 30 Jul 2024 00:00:59 +0900 Received: from ree-du1sdd5.ree.adwin.renesas.com (unknown [10.226.105.7]) by relmlir5.idc.renesas.com (Postfix) with ESMTP id C4F93401849C; Tue, 30 Jul 2024 00:00:57 +0900 (JST) From: Paul Barker To: cip-dev@lists.cip-project.org, Nobuhiro Iwamatsu , Pavel Machek Cc: Biju Das , Lad Prabhakar Subject: [PATCH 6.1.y cip 3/8] net: ravb: Refactor RX ring refill Date: Mon, 29 Jul 2024 15:00:44 +0000 Message-Id: <20240729150049.1924352-4-paul.barker.ct@bp.renesas.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> References: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Mon, 29 Jul 2024 15:01:08 -0000 X-Groupsio-URL: https://lists.cip-project.org/g/cip-dev/message/16648 commit 37a01c12e9e89fe2545657d7b42ca2de9c780c45 upstream. 
To reduce code duplication, we add a new RX ring refill function which can handle both the initial RX ring population (which was split between ravb_ring_init() and ravb_ring_format()) and the RX ring refill after polling (in ravb_rx()). Signed-off-by: Paul Barker Reviewed-by: Sergey Shtylyov Signed-off-by: Paolo Abeni Signed-off-by: Paul Barker --- drivers/net/ethernet/renesas/ravb_main.c | 150 +++++++++-------------- 1 file changed, 57 insertions(+), 93 deletions(-) diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index 3e4769af3c84..04df3c93bb54 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -319,35 +319,43 @@ static void ravb_ring_free(struct net_device *ndev, int q) priv->tx_skb[q] = NULL; } -static void ravb_rx_ring_format(struct net_device *ndev, int q) +static u32 +ravb_rx_ring_refill(struct net_device *ndev, int q, u32 count, gfp_t gfp_mask) { struct ravb_private *priv = netdev_priv(ndev); + const struct ravb_hw_info *info = priv->info; struct ravb_rx_desc *rx_desc; - unsigned int rx_ring_size; dma_addr_t dma_addr; - unsigned int i; + u32 i, entry; - rx_ring_size = priv->info->rx_desc_size * priv->num_rx_ring[q]; - memset(priv->rx_ring[q].raw, 0, rx_ring_size); - /* Build RX ring buffer */ - for (i = 0; i < priv->num_rx_ring[q]; i++) { - /* RX descriptor */ - rx_desc = ravb_rx_get_desc(priv, q, i); - rx_desc->ds_cc = cpu_to_le16(priv->info->rx_max_desc_use); - dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data, - priv->info->rx_max_frame_size, - DMA_FROM_DEVICE); - /* We just set the data size to 0 for a failed mapping which - * should prevent DMA from happening... - */ - if (dma_mapping_error(ndev->dev.parent, dma_addr)) - rx_desc->ds_cc = cpu_to_le16(0); - rx_desc->dptr = cpu_to_le32(dma_addr); + for (i = 0; i < count; i++) { + entry = (priv->dirty_rx[q] + i) % priv->num_rx_ring[q]; + rx_desc = ravb_rx_get_desc(priv, q, entry); + rx_desc->ds_cc = cpu_to_le16(info->rx_max_desc_use); + + if (!priv->rx_skb[q][entry]) { + priv->rx_skb[q][entry] = ravb_alloc_skb(ndev, info, + gfp_mask); + if (!priv->rx_skb[q][entry]) + break; + dma_addr = dma_map_single(ndev->dev.parent, + priv->rx_skb[q][entry]->data, + priv->info->rx_max_frame_size, + DMA_FROM_DEVICE); + skb_checksum_none_assert(priv->rx_skb[q][entry]); + /* We just set the data size to 0 for a failed mapping + * which should prevent DMA from happening... 
+ */ + if (dma_mapping_error(ndev->dev.parent, dma_addr)) + rx_desc->ds_cc = cpu_to_le16(0); + rx_desc->dptr = cpu_to_le32(dma_addr); + } + /* Descriptor type must be set after all the above writes */ + dma_wmb(); rx_desc->die_dt = DT_FEMPTY; } - rx_desc = ravb_rx_get_desc(priv, q, i); - rx_desc->dptr = cpu_to_le32((u32)priv->rx_desc_dma[q]); - rx_desc->die_dt = DT_LINKFIX; /* type */ + + return i; } /* Format skb and descriptor buffer for Ethernet AVB */ @@ -355,6 +363,7 @@ static void ravb_ring_format(struct net_device *ndev, int q) { struct ravb_private *priv = netdev_priv(ndev); unsigned int num_tx_desc = priv->num_tx_desc; + struct ravb_rx_desc *rx_desc; struct ravb_tx_desc *tx_desc; struct ravb_desc *desc; unsigned int tx_ring_size = sizeof(*tx_desc) * priv->num_tx_ring[q] * @@ -366,7 +375,13 @@ static void ravb_ring_format(struct net_device *ndev, int q) priv->dirty_rx[q] = 0; priv->dirty_tx[q] = 0; - ravb_rx_ring_format(ndev, q); + /* Regular RX descriptors have already been initialized by + * ravb_rx_ring_refill(), we just need to initialize the final link + * descriptor. + */ + rx_desc = ravb_rx_get_desc(priv, q, priv->num_rx_ring[q]); + rx_desc->dptr = cpu_to_le32((u32)priv->rx_desc_dma[q]); + rx_desc->die_dt = DT_LINKFIX; /* type */ memset(priv->tx_ring[q], 0, tx_ring_size); /* Build TX ring buffer */ @@ -410,11 +425,9 @@ static void *ravb_alloc_rx_desc(struct net_device *ndev, int q) static int ravb_ring_init(struct net_device *ndev, int q) { struct ravb_private *priv = netdev_priv(ndev); - const struct ravb_hw_info *info = priv->info; unsigned int num_tx_desc = priv->num_tx_desc; unsigned int ring_size; - struct sk_buff *skb; - unsigned int i; + u32 num_filled; /* Allocate RX and TX skb rings */ priv->rx_skb[q] = kcalloc(priv->num_rx_ring[q], @@ -424,12 +437,18 @@ static int ravb_ring_init(struct net_device *ndev, int q) if (!priv->rx_skb[q] || !priv->tx_skb[q]) goto error; - for (i = 0; i < priv->num_rx_ring[q]; i++) { - skb = ravb_alloc_skb(ndev, info, GFP_KERNEL); - if (!skb) - goto error; - priv->rx_skb[q][i] = skb; - } + /* Allocate all RX descriptors. */ + if (!ravb_alloc_rx_desc(ndev, q)) + goto error; + + /* Populate RX ring buffer. */ + priv->dirty_rx[q] = 0; + ring_size = priv->info->rx_desc_size * priv->num_rx_ring[q]; + memset(priv->rx_ring[q].raw, 0, ring_size); + num_filled = ravb_rx_ring_refill(ndev, q, priv->num_rx_ring[q], + GFP_KERNEL); + if (num_filled != priv->num_rx_ring[q]) + goto error; if (num_tx_desc > 1) { /* Allocate rings for the aligned buffers */ @@ -439,12 +458,6 @@ static int ravb_ring_init(struct net_device *ndev, int q) goto error; } - /* Allocate all RX descriptors. */ - if (!ravb_alloc_rx_desc(ndev, q)) - goto error; - - priv->dirty_rx[q] = 0; - /* Allocate all TX descriptors. */ ring_size = sizeof(struct ravb_tx_desc) * (priv->num_tx_ring[q] * num_tx_desc + 1); @@ -764,11 +777,9 @@ static struct sk_buff *ravb_get_skb_gbeth(struct net_device *ndev, int entry, static int ravb_rx_gbeth(struct net_device *ndev, int budget, int q) { struct ravb_private *priv = netdev_priv(ndev); - const struct ravb_hw_info *info = priv->info; struct net_device_stats *stats; struct ravb_rx_desc *desc; struct sk_buff *skb; - dma_addr_t dma_addr; int rx_packets = 0; u8 desc_status; u16 desc_len; @@ -856,32 +867,9 @@ static int ravb_rx_gbeth(struct net_device *ndev, int budget, int q) } /* Refill the RX ring buffers. 
*/ - for (; priv->cur_rx[q] - priv->dirty_rx[q] > 0; priv->dirty_rx[q]++) { - entry = priv->dirty_rx[q] % priv->num_rx_ring[q]; - desc = &priv->rx_ring[q].desc[entry]; - desc->ds_cc = cpu_to_le16(priv->info->rx_max_desc_use); - - if (!priv->rx_skb[q][entry]) { - skb = ravb_alloc_skb(ndev, info, GFP_ATOMIC); - if (!skb) - break; - dma_addr = dma_map_single(ndev->dev.parent, - skb->data, - priv->info->rx_max_frame_size, - DMA_FROM_DEVICE); - skb_checksum_none_assert(skb); - /* We just set the data size to 0 for a failed mapping - * which should prevent DMA from happening... - */ - if (dma_mapping_error(ndev->dev.parent, dma_addr)) - desc->ds_cc = cpu_to_le16(0); - desc->dptr = cpu_to_le32(dma_addr); - priv->rx_skb[q][entry] = skb; - } - /* Descriptor type must be set after all the above writes */ - dma_wmb(); - desc->die_dt = DT_FEMPTY; - } + priv->dirty_rx[q] += ravb_rx_ring_refill(ndev, q, + priv->cur_rx[q] - priv->dirty_rx[q], + GFP_ATOMIC); stats->rx_packets += rx_packets; return rx_packets; @@ -891,12 +879,10 @@ static int ravb_rx_gbeth(struct net_device *ndev, int budget, int q) static int ravb_rx_rcar(struct net_device *ndev, int budget, int q) { struct ravb_private *priv = netdev_priv(ndev); - const struct ravb_hw_info *info = priv->info; struct net_device_stats *stats = &priv->stats[q]; struct ravb_ex_rx_desc *desc; unsigned int limit, i; struct sk_buff *skb; - dma_addr_t dma_addr; struct timespec64 ts; int rx_packets = 0; u8 desc_status; @@ -966,31 +952,9 @@ static int ravb_rx_rcar(struct net_device *ndev, int budget, int q) } /* Refill the RX ring buffers. */ - for (; priv->cur_rx[q] - priv->dirty_rx[q] > 0; priv->dirty_rx[q]++) { - entry = priv->dirty_rx[q] % priv->num_rx_ring[q]; - desc = &priv->rx_ring[q].ex_desc[entry]; - desc->ds_cc = cpu_to_le16(priv->info->rx_max_desc_use); - - if (!priv->rx_skb[q][entry]) { - skb = ravb_alloc_skb(ndev, info, GFP_ATOMIC); - if (!skb) - break; /* Better luck next round. */ - dma_addr = dma_map_single(ndev->dev.parent, skb->data, - priv->info->rx_max_frame_size, - DMA_FROM_DEVICE); - skb_checksum_none_assert(skb); - /* We just set the data size to 0 for a failed mapping - * which should prevent DMA from happening... 
- */ - if (dma_mapping_error(ndev->dev.parent, dma_addr)) - desc->ds_cc = cpu_to_le16(0); - desc->dptr = cpu_to_le32(dma_addr); - priv->rx_skb[q][entry] = skb; - } - /* Descriptor type must be set after all the above writes */ - dma_wmb(); - desc->die_dt = DT_FEMPTY; - } + priv->dirty_rx[q] += ravb_rx_ring_refill(ndev, q, + priv->cur_rx[q] - priv->dirty_rx[q], + GFP_ATOMIC); stats->rx_packets += rx_packets; return rx_packets; From patchwork Mon Jul 29 15:00:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Barker X-Patchwork-Id: 13745167 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id E6B02C49EA1 for ; Mon, 29 Jul 2024 15:01:08 +0000 (UTC) Received: from relmlie5.idc.renesas.com (relmlie5.idc.renesas.com [210.160.252.171]) by mx.groups.io with SMTP id smtpd.web10.58080.1722265258132494915 for ; Mon, 29 Jul 2024 08:01:02 -0700 Authentication-Results: mx.groups.io; dkim=none (message not signed); spf=pass (domain: bp.renesas.com, ip: 210.160.252.171, mailfrom: paul.barker.ct@bp.renesas.com) X-IronPort-AV: E=Sophos;i="6.09,246,1716217200"; d="scan'208";a="213907147" Received: from unknown (HELO relmlir5.idc.renesas.com) ([10.200.68.151]) by relmlie5.idc.renesas.com with ESMTP; 30 Jul 2024 00:01:01 +0900 Received: from ree-du1sdd5.ree.adwin.renesas.com (unknown [10.226.105.7]) by relmlir5.idc.renesas.com (Postfix) with ESMTP id 0B655401849C; Tue, 30 Jul 2024 00:00:59 +0900 (JST) From: Paul Barker To: cip-dev@lists.cip-project.org, Nobuhiro Iwamatsu , Pavel Machek Cc: Biju Das , Lad Prabhakar Subject: [PATCH 6.1.y cip 4/8] net: ravb: Refactor GbEth RX code path Date: Mon, 29 Jul 2024 15:00:45 +0000 Message-Id: <20240729150049.1924352-5-paul.barker.ct@bp.renesas.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> References: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Mon, 29 Jul 2024 15:01:08 -0000 X-Groupsio-URL: https://lists.cip-project.org/g/cip-dev/message/16649 commit 3ee43f09cb2cd7643d5a4bfbc78a9f75f6a563af upstream. We can reduce code duplication in ravb_rx_gbeth(). 
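The reworked function in the diff that follows fetches the skb once and then handles each descriptor in two stages, so DT_FSINGLE and DT_FEND share a single completion path. A compressed sketch of that control flow, using the descriptor-type names from the driver but with all skb handling elided:

/* Simplified control flow only; in the driver die_dt comes from the RX
 * descriptor and the two stages operate on skbs.
 */
enum pkt_part { DT_FSINGLE, DT_FSTART, DT_FMID, DT_FEND };

static void handle_rx_desc(enum pkt_part die_dt)
{
        /* Stage 1: assemble the packet. */
        switch (die_dt) {
        case DT_FSINGLE:
        case DT_FSTART:
                /* Start of a packet: set the initial data length. */
                break;
        case DT_FMID:
        case DT_FEND:
                /* Continuation: merge data into the saved first skb. */
                break;
        }

        /* Stage 2: one shared path finishes a complete packet. */
        switch (die_dt) {
        case DT_FSINGLE:
        case DT_FEND:
                /* Checksum, stats, hand the packet to NAPI. */
                break;
        default:
                break;
        }
}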
Signed-off-by: Paul Barker Reviewed-by: Sergey Shtylyov Signed-off-by: Paolo Abeni Signed-off-by: Paul Barker --- drivers/net/ethernet/renesas/ravb_main.c | 59 +++++++++++++----------- 1 file changed, 32 insertions(+), 27 deletions(-) diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index 04df3c93bb54..ba3c6150f360 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -821,47 +821,52 @@ static int ravb_rx_gbeth(struct net_device *ndev, int budget, int q) stats->rx_missed_errors++; } else { die_dt = desc->die_dt & 0xF0; + skb = ravb_get_skb_gbeth(ndev, entry, desc); switch (die_dt) { case DT_FSINGLE: - skb = ravb_get_skb_gbeth(ndev, entry, desc); - skb_put(skb, desc_len); - skb->protocol = eth_type_trans(skb, ndev); - if (ndev->features & NETIF_F_RXCSUM) - ravb_rx_csum_gbeth(skb); - napi_gro_receive(&priv->napi[q], skb); - rx_packets++; - stats->rx_bytes += desc_len; - break; case DT_FSTART: - priv->rx_1st_skb = ravb_get_skb_gbeth(ndev, entry, desc); - skb_put(priv->rx_1st_skb, desc_len); + /* Start of packet: Set initial data length. */ + skb_put(skb, desc_len); + + /* Save this skb if the packet spans multiple + * descriptors. + */ + if (die_dt == DT_FSTART) + priv->rx_1st_skb = skb; break; + case DT_FMID: - skb = ravb_get_skb_gbeth(ndev, entry, desc); - skb_copy_to_linear_data_offset(priv->rx_1st_skb, - priv->rx_1st_skb->len, - skb->data, - desc_len); - skb_put(priv->rx_1st_skb, desc_len); - dev_kfree_skb(skb); - break; case DT_FEND: - skb = ravb_get_skb_gbeth(ndev, entry, desc); + /* Continuing a packet: Move data into the saved + * skb. + */ skb_copy_to_linear_data_offset(priv->rx_1st_skb, priv->rx_1st_skb->len, skb->data, desc_len); skb_put(priv->rx_1st_skb, desc_len); dev_kfree_skb(skb); - priv->rx_1st_skb->protocol = - eth_type_trans(priv->rx_1st_skb, ndev); + + /* Set skb to point at the whole packet so that + * we only need one code path for finishing a + * packet. + */ + skb = priv->rx_1st_skb; + } + + switch (die_dt) { + case DT_FSINGLE: + case DT_FEND: + /* Finishing a packet: Determine protocol & + * checksum, hand off to NAPI and update our + * stats. 
+ */ + skb->protocol = eth_type_trans(skb, ndev); if (ndev->features & NETIF_F_RXCSUM) - ravb_rx_csum_gbeth(priv->rx_1st_skb); - stats->rx_bytes += priv->rx_1st_skb->len; - napi_gro_receive(&priv->napi[q], - priv->rx_1st_skb); + ravb_rx_csum_gbeth(skb); + stats->rx_bytes += skb->len; + napi_gro_receive(&priv->napi[q], skb); rx_packets++; - break; } } } From patchwork Mon Jul 29 15:00:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Barker X-Patchwork-Id: 13745165 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id C566AC3DA4A for ; Mon, 29 Jul 2024 15:01:08 +0000 (UTC) Received: from relmlie5.idc.renesas.com (relmlie5.idc.renesas.com [210.160.252.171]) by mx.groups.io with SMTP id smtpd.web10.58080.1722265258132494915 for ; Mon, 29 Jul 2024 08:01:04 -0700 Authentication-Results: mx.groups.io; dkim=none (message not signed); spf=pass (domain: bp.renesas.com, ip: 210.160.252.171, mailfrom: paul.barker.ct@bp.renesas.com) X-IronPort-AV: E=Sophos;i="6.09,246,1716217200"; d="scan'208";a="213907156" Received: from unknown (HELO relmlir5.idc.renesas.com) ([10.200.68.151]) by relmlie5.idc.renesas.com with ESMTP; 30 Jul 2024 00:01:04 +0900 Received: from ree-du1sdd5.ree.adwin.renesas.com (unknown [10.226.105.7]) by relmlir5.idc.renesas.com (Postfix) with ESMTP id 4A0CA401849C; Tue, 30 Jul 2024 00:01:02 +0900 (JST) From: Paul Barker To: cip-dev@lists.cip-project.org, Nobuhiro Iwamatsu , Pavel Machek Cc: Biju Das , Lad Prabhakar Subject: [PATCH 6.1.y cip 5/8] net: add netdev_sw_irq_coalesce_default_on() Date: Mon, 29 Jul 2024 15:00:46 +0000 Message-Id: <20240729150049.1924352-6-paul.barker.ct@bp.renesas.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> References: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Mon, 29 Jul 2024 15:01:08 -0000 X-Groupsio-URL: https://lists.cip-project.org/g/cip-dev/message/16650 From: Heiner Kallweit commit d93607082e982223cf92750f2d9039ff365b9d24 upstream. Add a helper for drivers wanting to set SW IRQ coalescing by default. The related sysfs attributes can be used to override the default values. Follow Jakub's suggestion and put this functionality into net core so that drivers wanting to use software interrupt coalescing per default don't have to open-code it. Note that this function needs to be called before the netdevice is registered. Suggested-by: Jakub Kicinski Signed-off-by: Heiner Kallweit Signed-off-by: David S. 
Miller Signed-off-by: Paul Barker --- include/linux/netdevice.h | 1 + net/core/dev.c | 16 ++++++++++++++++ 2 files changed, 17 insertions(+) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 0373e0935990..ca90dfb71299 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -78,6 +78,7 @@ struct xdp_buff; void synchronize_net(void); void netdev_set_default_ethtool_ops(struct net_device *dev, const struct ethtool_ops *ops); +void netdev_sw_irq_coalesce_default_on(struct net_device *dev); /* Backlog congestion levels */ #define NET_RX_SUCCESS 0 /* keep 'em coming, baby */ diff --git a/net/core/dev.c b/net/core/dev.c index 20d8b9195ef6..895082bd254b 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -10570,6 +10570,22 @@ void netdev_set_default_ethtool_ops(struct net_device *dev, } EXPORT_SYMBOL_GPL(netdev_set_default_ethtool_ops); +/** + * netdev_sw_irq_coalesce_default_on() - enable SW IRQ coalescing by default + * @dev: netdev to enable the IRQ coalescing on + * + * Sets a conservative default for SW IRQ coalescing. Users can use + * sysfs attributes to override the default values. + */ +void netdev_sw_irq_coalesce_default_on(struct net_device *dev) +{ + WARN_ON(dev->reg_state == NETREG_REGISTERED); + + dev->gro_flush_timeout = 20000; + dev->napi_defer_hard_irqs = 1; +} +EXPORT_SYMBOL_GPL(netdev_sw_irq_coalesce_default_on); + void netdev_freemem(struct net_device *dev) { char *addr = (char *)dev - dev->padded; From patchwork Mon Jul 29 15:00:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Barker X-Patchwork-Id: 13745168 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id D4861C3DA61 for ; Mon, 29 Jul 2024 15:01:08 +0000 (UTC) Received: from relmlie5.idc.renesas.com (relmlie5.idc.renesas.com [210.160.252.171]) by mx.groups.io with SMTP id smtpd.web10.58080.1722265258132494915 for ; Mon, 29 Jul 2024 08:01:06 -0700 Authentication-Results: mx.groups.io; dkim=none (message not signed); spf=pass (domain: bp.renesas.com, ip: 210.160.252.171, mailfrom: paul.barker.ct@bp.renesas.com) X-IronPort-AV: E=Sophos;i="6.09,246,1716217200"; d="scan'208";a="213907162" Received: from unknown (HELO relmlir5.idc.renesas.com) ([10.200.68.151]) by relmlie5.idc.renesas.com with ESMTP; 30 Jul 2024 00:01:06 +0900 Received: from ree-du1sdd5.ree.adwin.renesas.com (unknown [10.226.105.7]) by relmlir5.idc.renesas.com (Postfix) with ESMTP id 8405D401849C; Tue, 30 Jul 2024 00:01:04 +0900 (JST) From: Paul Barker To: cip-dev@lists.cip-project.org, Nobuhiro Iwamatsu , Pavel Machek Cc: Biju Das , Lad Prabhakar Subject: [PATCH 6.1.y cip 6/8] net: ravb: Enable SW IRQ Coalescing for GbEth Date: Mon, 29 Jul 2024 15:00:47 +0000 Message-Id: <20240729150049.1924352-7-paul.barker.ct@bp.renesas.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> References: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Mon, 29 Jul 2024 15:01:08 -0000 X-Groupsio-URL: https://lists.cip-project.org/g/cip-dev/message/16651 commit 7b39c1814ce3bcdf95d026bbb27322218840a27d upstream. 
Software IRQ Coalescing is required to improve network stack performance in the RZ/G2L SoC family and the RZ/G3S SoC, i.e. the SoCs which use the GbEth IP. This patch gives the following improvements during testing with iperf3: * RZ/G2L: * TCP RX: same bandwidth with -6% CPU load (76% -> 71%) * UDP RX: same bandwidth with -10% CPU load (99% -> 89%) * RZ/G2UL: * UDP RX: +4200% bandwidth (1.23Mbps -> 53Mbps) * RZ/G3S: * UDP RX: +425% bandwidth (1.23Mbps -> 6.46Mbps) The improvement of UDP RX bandwidth for the single core SoCs (RZ/G2UL & RZ/G3S) is particularly critical. Signed-off-by: Paul Barker Reviewed-by: Sergey Shtylyov Signed-off-by: Paolo Abeni Signed-off-by: Paul Barker --- drivers/net/ethernet/renesas/ravb.h | 1 + drivers/net/ethernet/renesas/ravb_main.c | 4 ++++ 2 files changed, 5 insertions(+) diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h index 71de2a7aa27c..6a7aa7dd17e6 100644 --- a/drivers/net/ethernet/renesas/ravb.h +++ b/drivers/net/ethernet/renesas/ravb.h @@ -1054,6 +1054,7 @@ struct ravb_hw_info { u32 rx_max_desc_use; u32 rx_desc_size; unsigned aligned_tx: 1; + unsigned coalesce_irqs:1; /* Needs software IRQ coalescing */ /* hardware features */ unsigned internal_delay:1; /* AVB-DMAC has internal delays */ diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index ba3c6150f360..cdc3c3be9c4c 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -2676,6 +2676,7 @@ static const struct ravb_hw_info gbeth_hw_info = { .rx_max_desc_use = 4080, .rx_desc_size = sizeof(struct ravb_rx_desc), .aligned_tx = 1, + .coalesce_irqs = 1, .tx_counters = 1, .carrier_counters = 1, .half_duplex = 1, @@ -2952,6 +2953,9 @@ static int ravb_probe(struct platform_device *pdev) if (info->nc_queues) netif_napi_add(ndev, &priv->napi[RAVB_NC], ravb_poll); + if (info->coalesce_irqs) + netdev_sw_irq_coalesce_default_on(ndev); + /* Network device register */ error = register_netdev(ndev); if (error) From patchwork Mon Jul 29 15:00:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Barker X-Patchwork-Id: 13745169 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id E9F4CC3DA61 for ; Mon, 29 Jul 2024 15:01:18 +0000 (UTC) Received: from relmlie5.idc.renesas.com (relmlie5.idc.renesas.com [210.160.252.171]) by mx.groups.io with SMTP id smtpd.web10.58080.1722265258132494915 for ; Mon, 29 Jul 2024 08:01:09 -0700 Authentication-Results: mx.groups.io; dkim=none (message not signed); spf=pass (domain: bp.renesas.com, ip: 210.160.252.171, mailfrom: paul.barker.ct@bp.renesas.com) X-IronPort-AV: E=Sophos;i="6.09,246,1716217200"; d="scan'208";a="213907176" Received: from unknown (HELO relmlir5.idc.renesas.com) ([10.200.68.151]) by relmlie5.idc.renesas.com with ESMTP; 30 Jul 2024 00:01:08 +0900 Received: from ree-du1sdd5.ree.adwin.renesas.com (unknown [10.226.105.7]) by relmlir5.idc.renesas.com (Postfix) with ESMTP id C2B64401849C; Tue, 30 Jul 2024 00:01:06 +0900 (JST) From: Paul Barker To: cip-dev@lists.cip-project.org, Nobuhiro Iwamatsu , Pavel Machek Cc: Biju Das , Lad Prabhakar Subject: [PATCH 6.1.y cip 7/8] net: ravb: Use NAPI threaded mode on 1-core CPUs with GbEth IP Date: Mon, 29 Jul 2024 15:00:48 +0000 
Message-Id: <20240729150049.1924352-8-paul.barker.ct@bp.renesas.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> References: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Mon, 29 Jul 2024 15:01:18 -0000 X-Groupsio-URL: https://lists.cip-project.org/g/cip-dev/message/16652 commit 65c482bc226ab25a7884ee6fd1cc09bb08cbd1be upstream. NAPI Threaded mode (along with the previously enabled SW IRQ Coalescing) is required to improve network stack performance for single core SoCs using the GbEth IP (currently the RZ/G2L SoC family and the RZ/G3S SoC). This patch gives the following improvements during testing with iperf3. * RZ/G2UL: * TCP TX: +32% bandwidth (638Mbps -> 841Mbps) * TCP RX: +8.8% bandwidth (667Mbps -> 726Mbps) * UDP RX: +104% bandwidth (53Mbps -> 108Mbps) * RZ/G3S: * TCP TX: +29% bandwidth (529Mbps -> 681Mbps) * UDP RX: +1290% bandwidth (6.46Mbps -> 90Mbps) * RZ/Five: * UDP RX: Test no longer crashes (0 -> 20 Mbps) This patch gives the following reductions in performance in the same testing: * RZ/G2UL: * UDP TX: -7.5% bandwidth (594Mbps -> 549Mbps) * RZ/G3S: * UDP TX: -5% bandwidth (625Mbps -> 594Mbps) These losses are considered acceptable given the benefits shown above. If UDP TX bandwidth must be maximised for a particular use case, NAPI threaded mode can be disabled at runtime via sysfs writes. The improvement of UDP RX bandwidth for the single core SoCs (RZ/G2UL & RZ/G3S) is particularly critical. Signed-off-by: Paul Barker Reviewed-by: Sergey Shtylyov Signed-off-by: Paolo Abeni Signed-off-by: Paul Barker --- drivers/net/ethernet/renesas/ravb_main.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index cdc3c3be9c4c..fb34491d1fd0 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -2953,8 +2953,11 @@ static int ravb_probe(struct platform_device *pdev) if (info->nc_queues) netif_napi_add(ndev, &priv->napi[RAVB_NC], ravb_poll); - if (info->coalesce_irqs) + if (info->coalesce_irqs) { netdev_sw_irq_coalesce_default_on(ndev); + if (num_present_cpus() == 1) + dev_set_threaded(ndev, true); + } /* Network device register */ error = register_netdev(ndev); From patchwork Mon Jul 29 15:00:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paul Barker X-Patchwork-Id: 13745170 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id E9F8CC3DA7E for ; Mon, 29 Jul 2024 15:01:18 +0000 (UTC) Received: from relmlie6.idc.renesas.com (relmlie6.idc.renesas.com [210.160.252.172]) by mx.groups.io with SMTP id smtpd.web11.58420.1722265271674461949 for ; Mon, 29 Jul 2024 08:01:12 -0700 Authentication-Results: mx.groups.io; dkim=none (message not signed); spf=pass (domain: bp.renesas.com, ip: 210.160.252.172, mailfrom: paul.barker.ct@bp.renesas.com) X-IronPort-AV: E=Sophos;i="6.09,246,1716217200"; d="scan'208";a="217876813" Received: from unknown (HELO relmlir5.idc.renesas.com) ([10.200.68.151]) by relmlie6.idc.renesas.com with ESMTP; 30 Jul 2024 00:01:11
+0900 Received: from ree-du1sdd5.ree.adwin.renesas.com (unknown [10.226.105.7]) by relmlir5.idc.renesas.com (Postfix) with ESMTP id 0EE5A401849C; Tue, 30 Jul 2024 00:01:08 +0900 (JST) From: Paul Barker To: cip-dev@lists.cip-project.org, Nobuhiro Iwamatsu , Pavel Machek Cc: Biju Das , Lad Prabhakar Subject: [PATCH 6.1.y cip 8/8] net: ravb: Allocate RX buffers via page pool Date: Mon, 29 Jul 2024 15:00:49 +0000 Message-Id: <20240729150049.1924352-9-paul.barker.ct@bp.renesas.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> References: <20240729150049.1924352-1-paul.barker.ct@bp.renesas.com> MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Mon, 29 Jul 2024 15:01:18 -0000 X-Groupsio-URL: https://lists.cip-project.org/g/cip-dev/message/16653 commit 966726324b7b14009216fda33b47e0bc003944c6 upstream. This patch makes multiple changes that can't be separated: 1) Allocate plain RX buffers via a page pool instead of allocating SKBs, then use build_skb() when a packet is received. 2) For GbEth IP, reduce the RX buffer size to 2kB. 3) For GbEth IP, merge packets which span more than one RX descriptor as SKB fragments instead of copying data. Implementing (1) without (2) would require the use of an order-1 page pool (instead of an order-0 page pool split into page fragments) for GbEth. Implementing (2) without (3) would leave us no space to re-assemble packets which span more than one RX descriptor. Implementing (3) without (1) would not be possible as the network stack expects to use put_page() or page_pool_put_page() to free SKB fragments after an SKB is consumed. RX checksum offload support is adjusted to handle both linear and nonlinear (fragmented) packets. This patch gives the following improvements during testing with iperf3. * RZ/G2L: * TCP RX: same bandwidth at -43% CPU load (70% -> 40%) * UDP RX: same bandwidth at -17% CPU load (88% -> 74%) * RZ/G2UL: * TCP RX: +30% bandwidth (726Mbps -> 941Mbps) * UDP RX: +417% bandwidth (108Mbps -> 558Mbps) * RZ/G3S: * TCP RX: +64% bandwidth (562Mbps -> 920Mbps) * UDP RX: +420% bandwidth (90Mbps -> 468Mbps) * RZ/Five: * TCP RX: +217% bandwidth (145Mbps -> 459Mbps) * UDP RX: +470% bandwidth (20Mbps -> 114Mbps) There is no significant impact on bandwidth or CPU load in testing on RZ/G2H or R-Car M3N. Signed-off-by: Paul Barker Reviewed-by: Sergey Shtylyov Signed-off-by: Paolo Abeni [Modified includes as we have a single header in v6.1.y. Use PP_FLAG_PAGE_FRAG, page_pool_alloc_frag() and page_pool_alloc_pages() as page_pool_alloc() doesn't exist in v6.1.y.]
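Since the note above swaps the newer page_pool_alloc() for the APIs available in v6.1, here is a hedged, self-contained sketch of how those v6.1 calls fit together; the pool size and the 2 KiB fragment size are illustrative values, not the driver's actual configuration.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/sizes.h>
#include <net/page_pool.h>

/* Create an order-0 pool that hands out DMA-mapped page fragments. */
static struct page_pool *example_rx_pool_create(struct device *dev)
{
        struct page_pool_params params = {
                .order          = 0,
                .flags          = PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG,
                .pool_size      = 1024,
                .nid            = NUMA_NO_NODE,
                .dev            = dev,
                .dma_dir        = DMA_FROM_DEVICE,
        };

        return page_pool_create(&params);
}

/* Carve a 2 KiB RX buffer out of a pooled page; 'offset' receives the
 * fragment's offset within that page.
 */
static struct page *example_rx_buffer_alloc(struct page_pool *pool,
                                            unsigned int *offset)
{
        return page_pool_alloc_frag(pool, offset, SZ_2K, GFP_ATOMIC);
}

/* Return a buffer to the pool when it is not handed to the stack. */
static void example_rx_buffer_free(struct page_pool *pool, struct page *page)
{
        page_pool_put_full_page(pool, page, true);
}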
Signed-off-by: Paul Barker --- drivers/net/ethernet/renesas/ravb.h | 11 +- drivers/net/ethernet/renesas/ravb_main.c | 271 +++++++++++++++-------- 2 files changed, 186 insertions(+), 96 deletions(-) diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h index 6a7aa7dd17e6..cd2a52c365da 100644 --- a/drivers/net/ethernet/renesas/ravb.h +++ b/drivers/net/ethernet/renesas/ravb.h @@ -19,6 +19,7 @@ #include #include #include +#include #define BE_TX_RING_SIZE 64 /* TX ring size for Best Effort */ #define BE_RX_RING_SIZE 1024 /* RX ring size for Best Effort */ @@ -1051,7 +1052,7 @@ struct ravb_hw_info { int stats_len; u32 tccr_mask; u32 rx_max_frame_size; - u32 rx_max_desc_use; + u32 rx_buffer_size; u32 rx_desc_size; unsigned aligned_tx: 1; unsigned coalesce_irqs:1; /* Needs software IRQ coalescing */ @@ -1071,6 +1072,11 @@ struct ravb_hw_info { unsigned half_duplex:1; /* E-MAC supports half duplex mode */ }; +struct ravb_rx_buffer { + struct page *page; + unsigned int offset; +}; + struct ravb_private { struct net_device *ndev; struct platform_device *pdev; @@ -1094,7 +1100,8 @@ struct ravb_private { struct ravb_tx_desc *tx_ring[NUM_TX_QUEUE]; void *tx_align[NUM_TX_QUEUE]; struct sk_buff *rx_1st_skb; - struct sk_buff **rx_skb[NUM_RX_QUEUE]; + struct page_pool *rx_pool[NUM_RX_QUEUE]; + struct ravb_rx_buffer *rx_buffers[NUM_RX_QUEUE]; struct sk_buff **tx_skb[NUM_TX_QUEUE]; u32 rx_over_errors; u32 rx_fifo_errors; diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index fb34491d1fd0..2a48ffc08464 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -32,6 +32,7 @@ #include #include #include +#include #include "ravb.h" @@ -115,25 +116,6 @@ static void ravb_set_rate_rcar(struct net_device *ndev) } } -static struct sk_buff * -ravb_alloc_skb(struct net_device *ndev, const struct ravb_hw_info *info, - gfp_t gfp_mask) -{ - struct sk_buff *skb; - u32 reserve; - - skb = __netdev_alloc_skb(ndev, info->rx_max_frame_size + RAVB_ALIGN - 1, - gfp_mask); - if (!skb) - return NULL; - - reserve = (unsigned long)skb->data & (RAVB_ALIGN - 1); - if (reserve) - skb_reserve(skb, RAVB_ALIGN - reserve); - - return skb; -} - /* Get MAC address from the MAC address registers * * Ethernet AVB device doesn't have ROM for MAC address. 
@@ -259,21 +241,10 @@ static void ravb_rx_ring_free(struct net_device *ndev, int q) { struct ravb_private *priv = netdev_priv(ndev); unsigned int ring_size; - unsigned int i; if (!priv->rx_ring[q].raw) return; - for (i = 0; i < priv->num_rx_ring[q]; i++) { - struct ravb_rx_desc *desc = ravb_rx_get_desc(priv, q, i); - - if (!dma_mapping_error(ndev->dev.parent, - le32_to_cpu(desc->dptr))) - dma_unmap_single(ndev->dev.parent, - le32_to_cpu(desc->dptr), - priv->info->rx_max_frame_size, - DMA_FROM_DEVICE); - } ring_size = priv->info->rx_desc_size * (priv->num_rx_ring[q] + 1); dma_free_coherent(ndev->dev.parent, ring_size, priv->rx_ring[q].raw, priv->rx_desc_dma[q]); @@ -300,13 +271,16 @@ static void ravb_ring_free(struct net_device *ndev, int q) priv->tx_ring[q] = NULL; } - /* Free RX skb ringbuffer */ - if (priv->rx_skb[q]) { - for (i = 0; i < priv->num_rx_ring[q]; i++) - dev_kfree_skb(priv->rx_skb[q][i]); + /* Free RX buffers */ + for (i = 0; i < priv->num_rx_ring[q]; i++) { + if (priv->rx_buffers[q][i].page) + page_pool_put_page(priv->rx_pool[q], + priv->rx_buffers[q][i].page, + 0, true); } - kfree(priv->rx_skb[q]); - priv->rx_skb[q] = NULL; + kfree(priv->rx_buffers[q]); + priv->rx_buffers[q] = NULL; + page_pool_destroy(priv->rx_pool[q]); /* Free aligned TX buffers */ kfree(priv->tx_align[q]); @@ -319,36 +293,63 @@ static void ravb_ring_free(struct net_device *ndev, int q) priv->tx_skb[q] = NULL; } +static int +ravb_alloc_rx_buffer(struct net_device *ndev, int q, u32 entry, gfp_t gfp_mask, + struct ravb_rx_desc *rx_desc) +{ + struct ravb_private *priv = netdev_priv(ndev); + const struct ravb_hw_info *info = priv->info; + struct ravb_rx_buffer *rx_buff; + dma_addr_t dma_addr; + + rx_buff = &priv->rx_buffers[q][entry]; + if (info->rx_buffer_size <= PAGE_SIZE / 2) { + rx_buff->page = page_pool_alloc_frag(priv->rx_pool[q], + &rx_buff->offset, + info->rx_buffer_size, + gfp_mask); + } else { + rx_buff->page = page_pool_alloc_pages(priv->rx_pool[q], + gfp_mask); + rx_buff->offset = 0; + } + if (unlikely(!rx_buff->page)) { + /* We just set the data size to 0 for a failed mapping which + * should prevent DMA from happening... + */ + rx_desc->ds_cc = cpu_to_le16(0); + return -ENOMEM; + } + + dma_addr = page_pool_get_dma_addr(rx_buff->page) + rx_buff->offset; + dma_sync_single_for_device(ndev->dev.parent, dma_addr, + info->rx_buffer_size, DMA_FROM_DEVICE); + rx_desc->dptr = cpu_to_le32(dma_addr); + + /* The end of the RX buffer is used to store skb shared data, so we need + * to ensure that the hardware leaves enough space for this. 
+ */ + rx_desc->ds_cc = cpu_to_le16(info->rx_buffer_size - + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) - + ETH_FCS_LEN + sizeof(__sum16)); + return 0; +} + static u32 ravb_rx_ring_refill(struct net_device *ndev, int q, u32 count, gfp_t gfp_mask) { struct ravb_private *priv = netdev_priv(ndev); - const struct ravb_hw_info *info = priv->info; struct ravb_rx_desc *rx_desc; - dma_addr_t dma_addr; u32 i, entry; for (i = 0; i < count; i++) { entry = (priv->dirty_rx[q] + i) % priv->num_rx_ring[q]; rx_desc = ravb_rx_get_desc(priv, q, entry); - rx_desc->ds_cc = cpu_to_le16(info->rx_max_desc_use); - if (!priv->rx_skb[q][entry]) { - priv->rx_skb[q][entry] = ravb_alloc_skb(ndev, info, - gfp_mask); - if (!priv->rx_skb[q][entry]) + if (!priv->rx_buffers[q][entry].page) { + if (unlikely(ravb_alloc_rx_buffer(ndev, q, entry, + gfp_mask, rx_desc))) break; - dma_addr = dma_map_single(ndev->dev.parent, - priv->rx_skb[q][entry]->data, - priv->info->rx_max_frame_size, - DMA_FROM_DEVICE); - skb_checksum_none_assert(priv->rx_skb[q][entry]); - /* We just set the data size to 0 for a failed mapping - * which should prevent DMA from happening... - */ - if (dma_mapping_error(ndev->dev.parent, dma_addr)) - rx_desc->ds_cc = cpu_to_le16(0); - rx_desc->dptr = cpu_to_le32(dma_addr); } /* Descriptor type must be set after all the above writes */ dma_wmb(); @@ -426,15 +427,37 @@ static int ravb_ring_init(struct net_device *ndev, int q) { struct ravb_private *priv = netdev_priv(ndev); unsigned int num_tx_desc = priv->num_tx_desc; + struct page_pool_params params = { + .order = 0, + .flags = PP_FLAG_DMA_MAP, + .pool_size = priv->num_rx_ring[q], + .nid = NUMA_NO_NODE, + .dev = ndev->dev.parent, + .dma_dir = DMA_FROM_DEVICE, + }; unsigned int ring_size; u32 num_filled; - /* Allocate RX and TX skb rings */ - priv->rx_skb[q] = kcalloc(priv->num_rx_ring[q], - sizeof(*priv->rx_skb[q]), GFP_KERNEL); + if (priv->info->rx_buffer_size <= PAGE_SIZE / 2) { + dev_info(&ndev->dev, "Using page fragments"); + params.flags |= PP_FLAG_PAGE_FRAG; + } + + /* Allocate RX page pool and buffers */ + priv->rx_pool[q] = page_pool_create(¶ms); + if (IS_ERR(priv->rx_pool[q])) + goto error; + + /* Allocate RX buffers */ + priv->rx_buffers[q] = kcalloc(priv->num_rx_ring[q], + sizeof(*priv->rx_buffers[q]), GFP_KERNEL); + if (!priv->rx_buffers[q]) + goto error; + + /* Allocate TX skb rings */ priv->tx_skb[q] = kcalloc(priv->num_tx_ring[q], sizeof(*priv->tx_skb[q]), GFP_KERNEL); - if (!priv->rx_skb[q] || !priv->tx_skb[q]) + if (!priv->tx_skb[q]) goto error; /* Allocate all RX descriptors. 
*/ @@ -721,7 +744,9 @@ static void ravb_get_tx_tstamp(struct net_device *ndev) static void ravb_rx_csum_gbeth(struct sk_buff *skb) { + struct skb_shared_info *shinfo = skb_shinfo(skb); __wsum csum_ip_hdr, csum_proto; + skb_frag_t *last_frag; u8 *hw_csum; /* The hardware checksum status is contained in sizeof(__sum16) * 2 = 4 @@ -731,12 +756,24 @@ static void ravb_rx_csum_gbeth(struct sk_buff *skb) if (unlikely(skb->len < sizeof(__sum16) * 2)) return; - hw_csum = skb_tail_pointer(skb) - sizeof(__sum16); + if (skb_is_nonlinear(skb)) { + last_frag = &shinfo->frags[shinfo->nr_frags - 1]; + hw_csum = skb_frag_address(last_frag) + + skb_frag_size(last_frag); + } else { + hw_csum = skb_tail_pointer(skb); + } + + hw_csum -= sizeof(__sum16); csum_proto = csum_unfold((__force __sum16)get_unaligned_le16(hw_csum)); hw_csum -= sizeof(__sum16); csum_ip_hdr = csum_unfold((__force __sum16)get_unaligned_le16(hw_csum)); - skb_trim(skb, skb->len - 2 * sizeof(__sum16)); + + if (skb_is_nonlinear(skb)) + skb_frag_size_sub(last_frag, 2 * sizeof(__sum16)); + else + skb_trim(skb, skb->len - 2 * sizeof(__sum16)); /* TODO: IPV6 Rx checksum */ if (skb->protocol == htons(ETH_P_IP) && !csum_ip_hdr && !csum_proto) @@ -758,25 +795,11 @@ static void ravb_rx_csum(struct sk_buff *skb) skb_trim(skb, skb->len - sizeof(__sum16)); } -static struct sk_buff *ravb_get_skb_gbeth(struct net_device *ndev, int entry, - struct ravb_rx_desc *desc) -{ - struct ravb_private *priv = netdev_priv(ndev); - struct sk_buff *skb; - - skb = priv->rx_skb[RAVB_BE][entry]; - priv->rx_skb[RAVB_BE][entry] = NULL; - dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr), - ALIGN(priv->info->rx_max_frame_size, 16), - DMA_FROM_DEVICE); - - return skb; -} - /* Packet receive function for Gigabit Ethernet */ static int ravb_rx_gbeth(struct net_device *ndev, int budget, int q) { struct ravb_private *priv = netdev_priv(ndev); + const struct ravb_hw_info *info = priv->info; struct net_device_stats *stats; struct ravb_rx_desc *desc; struct sk_buff *skb; @@ -820,12 +843,30 @@ static int ravb_rx_gbeth(struct net_device *ndev, int budget, int q) if (desc_status & MSC_CEEF) stats->rx_missed_errors++; } else { + struct ravb_rx_buffer *rx_buff; + void *rx_addr; + + rx_buff = &priv->rx_buffers[q][entry]; + rx_addr = page_address(rx_buff->page) + rx_buff->offset; die_dt = desc->die_dt & 0xF0; - skb = ravb_get_skb_gbeth(ndev, entry, desc); + dma_sync_single_for_cpu(ndev->dev.parent, + le32_to_cpu(desc->dptr), + desc_len, DMA_FROM_DEVICE); + switch (die_dt) { case DT_FSINGLE: case DT_FSTART: /* Start of packet: Set initial data length. */ + skb = napi_build_skb(rx_addr, + info->rx_buffer_size); + if (unlikely(!skb)) { + stats->rx_errors++; + page_pool_put_page(priv->rx_pool[q], + rx_buff->page, 0, + true); + goto refill; + } + skb_mark_for_recycle(skb); skb_put(skb, desc_len); /* Save this skb if the packet spans multiple @@ -837,15 +878,30 @@ static int ravb_rx_gbeth(struct net_device *ndev, int budget, int q) case DT_FMID: case DT_FEND: - /* Continuing a packet: Move data into the saved - * skb. + /* Continuing a packet: Add this buffer as an RX + * frag. */ - skb_copy_to_linear_data_offset(priv->rx_1st_skb, - priv->rx_1st_skb->len, - skb->data, - desc_len); - skb_put(priv->rx_1st_skb, desc_len); - dev_kfree_skb(skb); + + /* rx_1st_skb will be NULL if napi_build_skb() + * failed for the first descriptor of a + * multi-descriptor packet. 
+ */ + if (unlikely(!priv->rx_1st_skb)) { + stats->rx_errors++; + page_pool_put_page(priv->rx_pool[q], + rx_buff->page, 0, + true); + + /* We may find a DT_FSINGLE or DT_FSTART + * descriptor in the queue which we can + * process, so don't give up yet. + */ + continue; + } + skb_add_rx_frag(priv->rx_1st_skb, + skb_shinfo(priv->rx_1st_skb)->nr_frags, + rx_buff->page, rx_buff->offset, + desc_len, info->rx_buffer_size); /* Set skb to point at the whole packet so that * we only need one code path for finishing a @@ -867,10 +923,19 @@ static int ravb_rx_gbeth(struct net_device *ndev, int budget, int q) stats->rx_bytes += skb->len; napi_gro_receive(&priv->napi[q], skb); rx_packets++; + + /* Clear rx_1st_skb so that it will only be + * non-NULL when valid. + */ + priv->rx_1st_skb = NULL; } + + /* Mark this RX buffer as consumed. */ + rx_buff->page = NULL; } } +refill: /* Refill the RX ring buffers. */ priv->dirty_rx[q] += ravb_rx_ring_refill(ndev, q, priv->cur_rx[q] - priv->dirty_rx[q], @@ -884,6 +949,7 @@ static int ravb_rx_gbeth(struct net_device *ndev, int budget, int q) static int ravb_rx_rcar(struct net_device *ndev, int budget, int q) { struct ravb_private *priv = netdev_priv(ndev); + const struct ravb_hw_info *info = priv->info; struct net_device_stats *stats = &priv->stats[q]; struct ravb_ex_rx_desc *desc; unsigned int limit, i; @@ -926,12 +992,23 @@ static int ravb_rx_rcar(struct net_device *ndev, int budget, int q) stats->rx_missed_errors++; } else { u32 get_ts = priv->tstamp_rx_ctrl & RAVB_RXTSTAMP_TYPE; - - skb = priv->rx_skb[q][entry]; - priv->rx_skb[q][entry] = NULL; - dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr), - priv->info->rx_max_frame_size, - DMA_FROM_DEVICE); + struct ravb_rx_buffer *rx_buff; + void *rx_addr; + + rx_buff = &priv->rx_buffers[q][entry]; + rx_addr = page_address(rx_buff->page) + rx_buff->offset; + dma_sync_single_for_cpu(ndev->dev.parent, + le32_to_cpu(desc->dptr), + pkt_len, DMA_FROM_DEVICE); + + skb = napi_build_skb(rx_addr, info->rx_buffer_size); + if (unlikely(!skb)) { + stats->rx_errors++; + page_pool_put_page(priv->rx_pool[q], + rx_buff->page, 0, true); + break; + } + skb_mark_for_recycle(skb); get_ts &= (q == RAVB_NC) ? RAVB_RXTSTAMP_TYPE_V2_L2_EVENT : ~RAVB_RXTSTAMP_TYPE_V2_L2_EVENT; @@ -953,6 +1030,9 @@ static int ravb_rx_rcar(struct net_device *ndev, int budget, int q) napi_gro_receive(&priv->napi[q], skb); rx_packets++; stats->rx_bytes += pkt_len; + + /* Mark this RX buffer as consumed. 
*/ + rx_buff->page = NULL; } } @@ -2605,7 +2685,8 @@ static const struct ravb_hw_info ravb_gen3_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats), .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, .rx_max_frame_size = SZ_2K, - .rx_max_desc_use = SZ_2K - ETH_FCS_LEN + sizeof(__sum16), + .rx_buffer_size = SZ_2K + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)), .rx_desc_size = sizeof(struct ravb_ex_rx_desc), .internal_delay = 1, .tx_counters = 1, @@ -2629,7 +2710,8 @@ static const struct ravb_hw_info ravb_gen2_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats), .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, .rx_max_frame_size = SZ_2K, - .rx_max_desc_use = SZ_2K - ETH_FCS_LEN + sizeof(__sum16), + .rx_buffer_size = SZ_2K + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)), .rx_desc_size = sizeof(struct ravb_ex_rx_desc), .aligned_tx = 1, .gptp = 1, @@ -2650,7 +2732,8 @@ static const struct ravb_hw_info ravb_rzv2m_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats), .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, .rx_max_frame_size = SZ_2K, - .rx_max_desc_use = SZ_2K - ETH_FCS_LEN + sizeof(__sum16), + .rx_buffer_size = SZ_2K + + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)), .rx_desc_size = sizeof(struct ravb_ex_rx_desc), .multi_irqs = 1, .err_mgmt_irqs = 1, @@ -2673,7 +2756,7 @@ static const struct ravb_hw_info gbeth_hw_info = { .stats_len = ARRAY_SIZE(ravb_gstrings_stats_gbeth), .tccr_mask = TCCR_TSRQ0, .rx_max_frame_size = SZ_8K, - .rx_max_desc_use = 4080, + .rx_buffer_size = SZ_2K, .rx_desc_size = sizeof(struct ravb_rx_desc), .aligned_tx = 1, .coalesce_irqs = 1,