From patchwork Mon Oct 7 12:31:23 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kurt Kanzenbach
X-Patchwork-Id: 13824568
X-Patchwork-Delegate: kuba@kernel.org
From: Kurt Kanzenbach
Date: Mon, 07 Oct 2024 14:31:23 +0200
Subject: [PATCH iwl-next v7 1/5] igb: Remove static qualifiers
Precedence: bulk
X-Mailing-List: bpf@vger.kernel.org
Message-Id: <20241007-b4-igb_zero_copy-v7-1-23556668adc6@linutronix.de>
References: <20241007-b4-igb_zero_copy-v7-0-23556668adc6@linutronix.de>
In-Reply-To: <20241007-b4-igb_zero_copy-v7-0-23556668adc6@linutronix.de>
To: Tony Nguyen, Przemek Kitszel
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
 John Fastabend, Richard Cochran, Sriram Yagnaraman, Benjamin Steinke,
 Sebastian Andrzej Siewior, Maciej Fijalkowski,
 intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
 bpf@vger.kernel.org, Sriram Yagnaraman, Kurt Kanzenbach
X-Developer-Key: i=kurt@linutronix.de; a=openpgp;
 fpr=BCB9BFB2C8C37DD3DFDB5992C193D1F2AA467382

From: Sriram Yagnaraman

Remove static qualifiers on the following functions to be able to call
them from the XSK-specific file that is added in later patches:

- igb_xdp_tx_queue_mapping()
- igb_xdp_ring_update_tail()
- igb_clean_tx_ring()
- igb_clean_rx_ring()
- igb_xdp_xmit_back()
- igb_process_skb_fields()

While at it, inline igb_xdp_tx_queue_mapping() and
igb_xdp_ring_update_tail(). These functions are small enough and are
used in XDP hot paths.
Signed-off-by: Sriram Yagnaraman
[Kurt: Split patches, inline small XDP functions]
Signed-off-by: Kurt Kanzenbach
Acked-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/igb/igb.h      | 29 ++++++++++++++++++++++++
 drivers/net/ethernet/intel/igb/igb_main.c | 37 +++++--------------------
 2 files changed, 35 insertions(+), 31 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
index 3c2dc7bdebb5..1bfe703e73d9 100644
--- a/drivers/net/ethernet/intel/igb/igb.h
+++ b/drivers/net/ethernet/intel/igb/igb.h
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include

 #include

@@ -731,12 +732,18 @@ int igb_setup_tx_resources(struct igb_ring *);
 int igb_setup_rx_resources(struct igb_ring *);
 void igb_free_tx_resources(struct igb_ring *);
 void igb_free_rx_resources(struct igb_ring *);
+void igb_clean_tx_ring(struct igb_ring *tx_ring);
+void igb_clean_rx_ring(struct igb_ring *rx_ring);
 void igb_configure_tx_ring(struct igb_adapter *, struct igb_ring *);
 void igb_configure_rx_ring(struct igb_adapter *, struct igb_ring *);
 void igb_setup_tctl(struct igb_adapter *);
 void igb_setup_rctl(struct igb_adapter *);
 void igb_setup_srrctl(struct igb_adapter *, struct igb_ring *);
 netdev_tx_t igb_xmit_frame_ring(struct sk_buff *, struct igb_ring *);
+int igb_xdp_xmit_back(struct igb_adapter *adapter, struct xdp_buff *xdp);
+void igb_process_skb_fields(struct igb_ring *rx_ring,
+			    union e1000_adv_rx_desc *rx_desc,
+			    struct sk_buff *skb);
 void igb_alloc_rx_buffers(struct igb_ring *, u16);
 void igb_update_stats(struct igb_adapter *);
 bool igb_has_link(struct igb_adapter *adapter);
@@ -797,6 +804,28 @@ static inline struct netdev_queue *txring_txq(const struct igb_ring *tx_ring)
 {
 	return netdev_get_tx_queue(tx_ring->netdev, tx_ring->queue_index);
 }

+/* This function assumes __netif_tx_lock is held by the caller. */
+static inline void igb_xdp_ring_update_tail(struct igb_ring *ring)
+{
+	lockdep_assert_held(&txring_txq(ring)->_xmit_lock);
+
+	/* Force memory writes to complete before letting h/w know there
+	 * are new descriptors to fetch.
+	 */
+	wmb();
+	writel(ring->next_to_use, ring->tail);
+}
+
+static inline struct igb_ring *igb_xdp_tx_queue_mapping(struct igb_adapter *adapter)
+{
+	unsigned int r_idx = smp_processor_id();
+
+	if (r_idx >= adapter->num_tx_queues)
+		r_idx = r_idx % adapter->num_tx_queues;
+
+	return adapter->tx_ring[r_idx];
+}
+
 int igb_add_filter(struct igb_adapter *adapter, struct igb_nfc_filter *input);
 int igb_erase_filter(struct igb_adapter *adapter,
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 1ef4cb871452..71addc0eac96 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -33,7 +33,6 @@
 #include
 #include
 #include
-#include
 #ifdef CONFIG_IGB_DCA
 #include
 #endif
@@ -116,8 +115,6 @@ static void igb_configure_tx(struct igb_adapter *);
 static void igb_configure_rx(struct igb_adapter *);
 static void igb_clean_all_tx_rings(struct igb_adapter *);
 static void igb_clean_all_rx_rings(struct igb_adapter *);
-static void igb_clean_tx_ring(struct igb_ring *);
-static void igb_clean_rx_ring(struct igb_ring *);
 static void igb_set_rx_mode(struct net_device *);
 static void igb_update_phy_info(struct timer_list *);
 static void igb_watchdog(struct timer_list *);
@@ -2915,29 +2912,7 @@ static int igb_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	}
 }

-/* This function assumes __netif_tx_lock is held by the caller.
- */
-static void igb_xdp_ring_update_tail(struct igb_ring *ring)
-{
-	lockdep_assert_held(&txring_txq(ring)->_xmit_lock);
-
-	/* Force memory writes to complete before letting h/w know there
-	 * are new descriptors to fetch.
-	 */
-	wmb();
-	writel(ring->next_to_use, ring->tail);
-}
-
-static struct igb_ring *igb_xdp_tx_queue_mapping(struct igb_adapter *adapter)
-{
-	unsigned int r_idx = smp_processor_id();
-
-	if (r_idx >= adapter->num_tx_queues)
-		r_idx = r_idx % adapter->num_tx_queues;
-
-	return adapter->tx_ring[r_idx];
-}
-
-static int igb_xdp_xmit_back(struct igb_adapter *adapter, struct xdp_buff *xdp)
+int igb_xdp_xmit_back(struct igb_adapter *adapter, struct xdp_buff *xdp)
 {
 	struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
 	int cpu = smp_processor_id();
@@ -4884,7 +4859,7 @@ static void igb_free_all_tx_resources(struct igb_adapter *adapter)
  * igb_clean_tx_ring - Free Tx Buffers
  * @tx_ring: ring to be cleaned
  **/
-static void igb_clean_tx_ring(struct igb_ring *tx_ring)
+void igb_clean_tx_ring(struct igb_ring *tx_ring)
 {
 	u16 i = tx_ring->next_to_clean;
 	struct igb_tx_buffer *tx_buffer = &tx_ring->tx_buffer_info[i];
@@ -5003,7 +4978,7 @@ static void igb_free_all_rx_resources(struct igb_adapter *adapter)
  * igb_clean_rx_ring - Free Rx Buffers per Queue
  * @rx_ring: ring to free buffers from
  **/
-static void igb_clean_rx_ring(struct igb_ring *rx_ring)
+void igb_clean_rx_ring(struct igb_ring *rx_ring)
 {
 	u16 i = rx_ring->next_to_clean;

@@ -8782,9 +8757,9 @@ static bool igb_cleanup_headers(struct igb_ring *rx_ring,
  * order to populate the hash, checksum, VLAN, timestamp, protocol, and
  * other fields within the skb.
  **/
-static void igb_process_skb_fields(struct igb_ring *rx_ring,
-				   union e1000_adv_rx_desc *rx_desc,
-				   struct sk_buff *skb)
+void igb_process_skb_fields(struct igb_ring *rx_ring,
+			    union e1000_adv_rx_desc *rx_desc,
+			    struct sk_buff *skb)
 {
 	struct net_device *dev = rx_ring->netdev;
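
For context only, not part of the patch: a minimal sketch of how the helpers
exported above might be reached from the XSK-specific file that the commit
message says arrives in later patches. The file name igb_xsk.c and the
function igb_xsk_kick_tx() are assumptions made for illustration; the real
zero-copy patches define their own entry points.

/* Hypothetical excerpt from drivers/net/ethernet/intel/igb/igb_xsk.c
 * (file name assumed). It only shows that igb_xdp_tx_queue_mapping() and
 * igb_xdp_ring_update_tail(), now inline in igb.h, are usable outside
 * igb_main.c.
 */
#include <linux/netdevice.h>

#include "igb.h"

/* Assumed helper: bump the Tx tail of the current CPU's XDP Tx ring after
 * descriptors have been posted by a (hypothetical) zero-copy send path.
 */
static void igb_xsk_kick_tx(struct igb_adapter *adapter)
{
	struct igb_ring *tx_ring = igb_xdp_tx_queue_mapping(adapter);
	struct netdev_queue *nq = txring_txq(tx_ring);

	/* igb_xdp_ring_update_tail() documents that __netif_tx_lock must be
	 * held by the caller, so take it around the tail write.
	 */
	__netif_tx_lock(nq, smp_processor_id());
	igb_xdp_ring_update_tail(tx_ring);
	__netif_tx_unlock(nq);
}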