From patchwork Fri Jul 12 08:55:32 2024
X-Patchwork-Submitter: Kurt Kanzenbach
X-Patchwork-Id: 13731431
X-Patchwork-Delegate: kuba@kernel.org
From: Kurt Kanzenbach
Date: Fri, 12 Jul 2024 10:55:32 +0200
Subject: [PATCH iwl-next v5 4/4] igb: add AF_XDP zero-copy Tx support
X-Mailing-List: bpf@vger.kernel.org
Message-Id: <20240711-b4-igb_zero_copy-v5-4-f3f455113b11@linutronix.de>
References: <20240711-b4-igb_zero_copy-v5-0-f3f455113b11@linutronix.de>
In-Reply-To: <20240711-b4-igb_zero_copy-v5-0-f3f455113b11@linutronix.de>
To: Tony Nguyen, Przemek Kitszel
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
    John Fastabend, Richard Cochran, Sriram Yagnaraman,
    Benjamin Steinke, Sebastian Andrzej Siewior,
    intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
    bpf@vger.kernel.org, Kurt Kanzenbach

From: Sriram Yagnaraman

Add support for the AF_XDP zero-copy transmit path.

A new Tx buffer type, IGB_TYPE_XSK, is introduced to indicate that the Tx
frame was allocated from the xsk buff pool, so igb_clean_tx_ring() and
igb_clean_tx_irq() can clean the buffers correctly based on type.

igb_xmit_zc() performs the actual packet transmit when AF_XDP zero-copy is
enabled. We share the Tx ring between the slow path, XDP and AF_XDP
zero-copy, so we use the netdev queue lock to ensure mutual exclusion.
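Illustrative sketch only, not part of this patch: one way the zero-copy Tx
path added here can be driven from userspace, assuming libxdp's xsk helpers
(<xdp/xsk.h>) and an AF_XDP socket already bound on an igb queue with
XDP_ZEROCOPY | XDP_USE_NEED_WAKEUP. The function name, "umem_addr" and "len"
are placeholders for a frame that already lives in the shared UMEM.

#include <sys/socket.h>
#include <linux/types.h>
#include <xdp/xsk.h>

static void xsk_tx_one(struct xsk_socket *xsk, struct xsk_ring_prod *tx,
		       __u64 umem_addr, __u32 len)
{
	struct xdp_desc *desc;
	__u32 idx;

	/* Reserve one slot in the Tx ring; a real application would reap
	 * completions and retry when the ring is full.
	 */
	if (xsk_ring_prod__reserve(tx, 1, &idx) != 1)
		return;

	/* Zero-copy: the descriptor points straight at UMEM memory that the
	 * driver DMA-maps; no copy into kernel buffers takes place.
	 */
	desc = xsk_ring_prod__tx_desc(tx, idx);
	desc->addr = umem_addr;
	desc->len = len;
	xsk_ring_prod__submit(tx, 1);

	/* With need_wakeup, kick the kernel only when the driver asked for
	 * it; the wakeup ends up in igb_xsk_wakeup() and igb_xmit_zc() is
	 * then run from NAPI context.
	 */
	if (xsk_ring_prod__needs_wakeup(tx))
		sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);
}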
Signed-off-by: Sriram Yagnaraman
[Kurt: Set olinfo_status in igb_xmit_zc() so that frames are transmitted]
Signed-off-by: Kurt Kanzenbach
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/igb/igb.h      |  2 ++
 drivers/net/ethernet/intel/igb/igb_main.c | 56 ++++++++++++++++++++++++++-----
 drivers/net/ethernet/intel/igb/igb_xsk.c  | 53 +++++++++++++++++++++++++++++
 3 files changed, 102 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
index 4983a6ec718e..9ee18ac1ba47 100644
--- a/drivers/net/ethernet/intel/igb/igb.h
+++ b/drivers/net/ethernet/intel/igb/igb.h
@@ -257,6 +257,7 @@ enum igb_tx_flags {
 enum igb_tx_buf_type {
 	IGB_TYPE_SKB = 0,
 	IGB_TYPE_XDP,
+	IGB_TYPE_XSK
 };
 
 /* wrapper around a pointer to a socket buffer,
@@ -836,6 +837,7 @@ int igb_xsk_pool_setup(struct igb_adapter *adapter,
 bool igb_alloc_rx_buffers_zc(struct igb_ring *rx_ring, u16 count);
 void igb_clean_rx_ring_zc(struct igb_ring *rx_ring);
 int igb_clean_rx_irq_zc(struct igb_q_vector *q_vector, const int budget);
+bool igb_xmit_zc(struct igb_ring *tx_ring);
 int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags);
 
 #endif /* _IGB_H_ */
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 8d65c9eca102..4069732e2f4c 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2997,6 +2997,9 @@ static int igb_xdp_xmit(struct net_device *dev, int n,
 	if (unlikely(!tx_ring))
 		return -ENXIO;
 
+	if (unlikely(test_bit(IGB_RING_FLAG_TX_DISABLED, &tx_ring->flags)))
+		return -ENXIO;
+
 	nq = txring_txq(tx_ring);
 	__netif_tx_lock(nq, cpu);
 
@@ -4918,15 +4921,20 @@ void igb_clean_tx_ring(struct igb_ring *tx_ring)
 {
 	u16 i = tx_ring->next_to_clean;
 	struct igb_tx_buffer *tx_buffer = &tx_ring->tx_buffer_info[i];
+	u32 xsk_frames = 0;
 
 	while (i != tx_ring->next_to_use) {
 		union e1000_adv_tx_desc *eop_desc, *tx_desc;
 
 		/* Free all the Tx ring sk_buffs or xdp frames */
-		if (tx_buffer->type == IGB_TYPE_SKB)
+		if (tx_buffer->type == IGB_TYPE_SKB) {
 			dev_kfree_skb_any(tx_buffer->skb);
-		else
+		} else if (tx_buffer->type == IGB_TYPE_XDP) {
 			xdp_return_frame(tx_buffer->xdpf);
+		} else if (tx_buffer->type == IGB_TYPE_XSK) {
+			xsk_frames++;
+			goto skip_for_xsk;
+		}
 
 		/* unmap skb header data */
 		dma_unmap_single(tx_ring->dev,
@@ -4957,6 +4965,7 @@ void igb_clean_tx_ring(struct igb_ring *tx_ring)
 				       DMA_TO_DEVICE);
 		}
 
+skip_for_xsk:
 		tx_buffer->next_to_watch = NULL;
 
 		/* move us one more past the eop_desc for start of next pkt */
@@ -4971,6 +4980,9 @@ void igb_clean_tx_ring(struct igb_ring *tx_ring)
 	/* reset BQL for queue */
 	netdev_tx_reset_queue(txring_txq(tx_ring));
 
+	if (tx_ring->xsk_pool && xsk_frames)
+		xsk_tx_completed(tx_ring->xsk_pool, xsk_frames);
+
 	/* reset next_to_use and next_to_clean */
 	tx_ring->next_to_use = 0;
 	tx_ring->next_to_clean = 0;
@@ -6504,6 +6516,9 @@ netdev_tx_t igb_xmit_frame_ring(struct sk_buff *skb,
 		return NETDEV_TX_BUSY;
 	}
 
+	if (unlikely(test_bit(IGB_RING_FLAG_TX_DISABLED, &tx_ring->flags)))
+		return NETDEV_TX_BUSY;
+
 	/* record the location of the first descriptor for this packet */
 	first = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
 	first->type = IGB_TYPE_SKB;
@@ -8264,13 +8279,17 @@ static int igb_poll(struct napi_struct *napi, int budget)
  **/
 static bool igb_clean_tx_irq(struct igb_q_vector *q_vector, int napi_budget)
 {
-	struct igb_adapter *adapter = q_vector->adapter;
-	struct igb_ring *tx_ring = q_vector->tx.ring;
-	struct igb_tx_buffer *tx_buffer;
-	union e1000_adv_tx_desc *tx_desc;
 	unsigned int total_bytes = 0, total_packets = 0;
+	struct igb_adapter *adapter = q_vector->adapter;
 	unsigned int budget = q_vector->tx.work_limit;
+	struct igb_ring *tx_ring = q_vector->tx.ring;
 	unsigned int i = tx_ring->next_to_clean;
+	union e1000_adv_tx_desc *tx_desc;
+	struct igb_tx_buffer *tx_buffer;
+	int cpu = smp_processor_id();
+	bool xsk_xmit_done = true;
+	struct netdev_queue *nq;
+	u32 xsk_frames = 0;
 
 	if (test_bit(__IGB_DOWN, &adapter->state))
 		return true;
@@ -8301,10 +8320,14 @@ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector, int napi_budget)
 		total_packets += tx_buffer->gso_segs;
 
 		/* free the skb */
-		if (tx_buffer->type == IGB_TYPE_SKB)
+		if (tx_buffer->type == IGB_TYPE_SKB) {
 			napi_consume_skb(tx_buffer->skb, napi_budget);
-		else
+		} else if (tx_buffer->type == IGB_TYPE_XDP) {
 			xdp_return_frame(tx_buffer->xdpf);
+		} else if (tx_buffer->type == IGB_TYPE_XSK) {
+			xsk_frames++;
+			goto skip_for_xsk;
+		}
 
 		/* unmap skb header data */
 		dma_unmap_single(tx_ring->dev,
@@ -8336,6 +8359,7 @@ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector, int napi_budget)
 			}
 		}
 
+skip_for_xsk:
 		/* move us one more past the eop_desc for start of next pkt */
 		tx_buffer++;
 		tx_desc++;
@@ -8364,6 +8388,20 @@ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector, int napi_budget)
 	q_vector->tx.total_bytes += total_bytes;
 	q_vector->tx.total_packets += total_packets;
 
+	if (tx_ring->xsk_pool) {
+		if (xsk_frames)
+			xsk_tx_completed(tx_ring->xsk_pool, xsk_frames);
+		if (xsk_uses_need_wakeup(tx_ring->xsk_pool))
+			xsk_set_tx_need_wakeup(tx_ring->xsk_pool);
+
+		nq = txring_txq(tx_ring);
+		__netif_tx_lock(nq, cpu);
+		/* Avoid transmit queue timeout since we share it with the slow path */
+		txq_trans_cond_update(nq);
+		xsk_xmit_done = igb_xmit_zc(tx_ring);
+		__netif_tx_unlock(nq);
+	}
+
 	if (test_bit(IGB_RING_FLAG_TX_DETECT_HANG, &tx_ring->flags)) {
 		struct e1000_hw *hw = &adapter->hw;
 
@@ -8426,7 +8464,7 @@ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector, int napi_budget)
 		}
 	}
 
-	return !!budget;
+	return !!budget && xsk_xmit_done;
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/igb/igb_xsk.c b/drivers/net/ethernet/intel/igb/igb_xsk.c
index 66cdc30e9b6e..4e530e1eb3c0 100644
--- a/drivers/net/ethernet/intel/igb/igb_xsk.c
+++ b/drivers/net/ethernet/intel/igb/igb_xsk.c
@@ -431,6 +431,59 @@ int igb_clean_rx_irq_zc(struct igb_q_vector *q_vector, const int budget)
 	return failure ? budget : (int)total_packets;
 }
 
+bool igb_xmit_zc(struct igb_ring *tx_ring)
+{
+	unsigned int budget = igb_desc_unused(tx_ring);
+	struct xsk_buff_pool *pool = tx_ring->xsk_pool;
+	u32 cmd_type, olinfo_status, nb_pkts, i = 0;
+	struct xdp_desc *descs = pool->tx_descs;
+	union e1000_adv_tx_desc *tx_desc = NULL;
+	struct igb_tx_buffer *tx_buffer_info;
+	unsigned int total_bytes = 0;
+	dma_addr_t dma;
+
+	nb_pkts = xsk_tx_peek_release_desc_batch(pool, budget);
+	if (!nb_pkts)
+		return true;
+
+	while (nb_pkts-- > 0) {
+		dma = xsk_buff_raw_get_dma(pool, descs[i].addr);
+		xsk_buff_raw_dma_sync_for_device(pool, dma, descs[i].len);
+
+		tx_buffer_info = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
+		tx_buffer_info->bytecount = descs[i].len;
+		tx_buffer_info->type = IGB_TYPE_XSK;
+		tx_buffer_info->xdpf = NULL;
+		tx_buffer_info->gso_segs = 1;
+		tx_buffer_info->time_stamp = jiffies;
+
+		tx_desc = IGB_TX_DESC(tx_ring, tx_ring->next_to_use);
+		tx_desc->read.buffer_addr = cpu_to_le64(dma);
+
+		/* put descriptor type bits */
+		cmd_type = E1000_ADVTXD_DTYP_DATA | E1000_ADVTXD_DCMD_DEXT |
+			   E1000_ADVTXD_DCMD_IFCS;
+		olinfo_status = descs[i].len << E1000_ADVTXD_PAYLEN_SHIFT;
+
+		cmd_type |= descs[i].len | IGB_TXD_DCMD;
+		tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
+		tx_desc->read.olinfo_status = cpu_to_le32(olinfo_status);
+
+		total_bytes += descs[i].len;
+
+		i++;
+		tx_ring->next_to_use++;
+		tx_buffer_info->next_to_watch = tx_desc;
+		if (tx_ring->next_to_use == tx_ring->count)
+			tx_ring->next_to_use = 0;
+	}
+
+	netdev_tx_sent_queue(txring_txq(tx_ring), total_bytes);
+	igb_xdp_ring_update_tail(tx_ring);
+
+	return nb_pkts < budget;
+}
+
 int igb_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
 {
 	struct igb_adapter *adapter = netdev_priv(dev);