From patchwork Thu Apr 4 15:43:54 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13618011
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Alexander Duyck, Yunsheng Lin,
    Jesper Dangaard Brouer, Ilias Apalodimas, Christoph Lameter,
    Vlastimil Babka, Andrew Morton,
    nex.sw.ncis.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,
    intel-wired-lan@lists.osuosl.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH net-next v9 1/9] net: intel: introduce {,Intel} Ethernet common library
Date: Thu, 4 Apr 2024 17:43:54 +0200
Message-ID: <20240404154402.3581254-2-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240404154402.3581254-1-aleksander.lobakin@intel.com>
References: <20240404154402.3581254-1-aleksander.lobakin@intel.com>

It's not a secret that there's a ton of code duplication between two or
more Intel Ethernet modules. Before introducing new changes, which would
need to be copied over again, start decoupling the already existing
duplicate functionality into a new module, which will be shared between
several Intel Ethernet drivers.

Add the lookup table which converts an 8/10-bit hardware packet type
into a parsed bitfield structure for easily checking packet format
parameters, such as payload level, IP version, etc. This is currently
used by i40e, ice and iavf, and it's all the same in all three drivers.
The only difference in this implementation is that instead of defining
a 256-element (or 1024-element in the case of ice) array, an unlikely()
condition limits the input to 154, the current maximum non-reserved
packet type. There's no reason to waste 600 (or even 3600) bytes only
to avoid penalizing very unlikely exception packets.

The hash computation function now takes the payload level directly as
a pkt_hash_type. There are a couple of cases when non-IP ptypes are
marked as L3 payload, and in the previous versions their hash level
would be 2, not 3. But skb_set_hash() only distinguishes between L4 and
non-L4, thus this won't change anything at all.

The module is behind a hidden Kconfig symbol, which the drivers will
select when needed. The exports are behind the 'LIBIE' namespace to
limit the scope of the functions.

Note that non-HW-specific symbols will live in yet another module,
libeth. This is done to easily distinguish pretty generic code, ready
for reuse by any other vendor and/or for moving the layer up, from the
code useful only in Intel's 1-100G drivers.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
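[ Editor's note: a minimal stand-alone model of the clamped O(1) lookup
  described above, compilable as plain userspace C. The names mirror the
  patch; the cut-down struct fields and the single stub table entry are
  illustrative assumptions, not the real LUT contents. ]

#include <stdio.h>

#define LIBIE_RX_PT_NUM	154

/* cut-down model of struct libeth_rpt: three fields suffice here */
struct libeth_rpt {
	unsigned int outer_ip : 2;	/* 0 - L2, 1 - IPv4, 2 - IPv6 */
	unsigned int inner_prot : 3;	/* 0 - none, 1 - UDP, 2 - TCP, ... */
	unsigned int payload_layer : 2;	/* doubles as the pkt_hash_type */
};

/* stub contents: only ptype 23 (non-tunneled IPv4, no L4) is filled in;
 * entry 0 stays all-zero and means "unknown"
 */
static const struct libeth_rpt libie_rx_pt_lut[LIBIE_RX_PT_NUM] = {
	[23] = { .outer_ip = 1, .payload_layer = 2 /* L3 hash level */ },
};

static struct libeth_rpt libie_rx_pt_parse(unsigned int pt)
{
	/* unlikely() in the kernel proper: folding reserved ptypes into
	 * entry 0 is what lets the table stop at 154 entries instead of
	 * 256 (i40e/iavf) or 1024 (ice)
	 */
	if (pt >= LIBIE_RX_PT_NUM)
		pt = 0;

	return libie_rx_pt_lut[pt];
}

int main(void)
{
	printf("%u\n", libie_rx_pt_parse(23).outer_ip);	 /* 1: IPv4 */
	printf("%u\n", libie_rx_pt_parse(500).outer_ip); /* 0: unknown */
	return 0;
}

[ The clamp is also why ptype 0 must remain an all-zero row: every
  reserved ptype aliases to it and decodes as "unknown". ]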
 MAINTAINERS                                   |   4 +-
 drivers/net/ethernet/intel/Kconfig            |   7 +
 drivers/net/ethernet/intel/libeth/Kconfig     |   8 +
 drivers/net/ethernet/intel/libie/Kconfig      |  10 +
 drivers/net/ethernet/intel/Makefile           |   3 +
 drivers/net/ethernet/intel/libeth/Makefile    |   6 +
 drivers/net/ethernet/intel/libie/Makefile     |   6 +
 .../net/ethernet/intel/i40e/i40e_prototype.h  |   7 -
 drivers/net/ethernet/intel/i40e/i40e_type.h   |  88 -----
 .../net/ethernet/intel/iavf/iavf_prototype.h  |   7 -
 drivers/net/ethernet/intel/iavf/iavf_type.h   |  88 -----
 .../net/ethernet/intel/ice/ice_lan_tx_rx.h    | 320 ------------------
 include/linux/net/intel/libie/rx.h            |  33 ++
 include/net/libeth/rx.h                       | 125 +++++++
 drivers/net/ethernet/intel/i40e/i40e_common.c | 253 --------------
 drivers/net/ethernet/intel/i40e/i40e_main.c   |   1 +
 drivers/net/ethernet/intel/i40e/i40e_txrx.c   |  72 +---
 drivers/net/ethernet/intel/iavf/iavf_common.c | 253 --------------
 drivers/net/ethernet/intel/iavf/iavf_main.c   |   1 +
 drivers/net/ethernet/intel/iavf/iavf_txrx.c   |  70 +---
 drivers/net/ethernet/intel/ice/ice_main.c     |   1 +
 drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 111 +-----
 drivers/net/ethernet/intel/libeth/rx.c        |  52 +++
 drivers/net/ethernet/intel/libie/rx.c         | 124 +++++++
 24 files changed, 429 insertions(+), 1221 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/libeth/Kconfig
 create mode 100644 drivers/net/ethernet/intel/libie/Kconfig
 create mode 100644 drivers/net/ethernet/intel/libeth/Makefile
 create mode 100644 drivers/net/ethernet/intel/libie/Makefile
 create mode 100644 include/linux/net/intel/libie/rx.h
 create mode 100644 include/net/libeth/rx.h
 create mode 100644 drivers/net/ethernet/intel/libeth/rx.c
 create mode 100644 drivers/net/ethernet/intel/libie/rx.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 909c2c531d8e..1b71d5c5dc86 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10874,7 +10874,9 @@ F: Documentation/networking/device_drivers/ethernet/intel/
 F: drivers/net/ethernet/intel/
 F: drivers/net/ethernet/intel/*/
 F: include/linux/avf/virtchnl.h
-F: include/linux/net/intel/iidc.h
+F: include/linux/net/intel/
+F: include/linux/net/intel/*/
+F: include/net/libeth/

 INTEL ETHERNET PROTOCOL DRIVER FOR RDMA
 M: Mustafa
Ismail diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 639fbb12bd35..f5efdd6f9d96 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -16,6 +16,9 @@ config NET_VENDOR_INTEL if NET_VENDOR_INTEL +source "drivers/net/ethernet/intel/libeth/Kconfig" +source "drivers/net/ethernet/intel/libie/Kconfig" + config E100 tristate "Intel(R) PRO/100+ support" depends on PCI @@ -225,6 +228,7 @@ config I40E depends on PTP_1588_CLOCK_OPTIONAL depends on PCI select AUXILIARY_BUS + select LIBIE select NET_DEVLINK help This driver supports Intel(R) Ethernet Controller XL710 Family of @@ -253,6 +257,8 @@ config I40E_DCB # so that CONFIG_IAVF symbol will always mirror the state of CONFIG_I40EVF config IAVF tristate + select LIBIE + config I40EVF tristate "Intel(R) Ethernet Adaptive Virtual Function support" select IAVF @@ -283,6 +289,7 @@ config ICE depends on GNSS || GNSS = n select AUXILIARY_BUS select DIMLIB + select LIBIE select NET_DEVLINK select PLDMFW select DPLL diff --git a/drivers/net/ethernet/intel/libeth/Kconfig b/drivers/net/ethernet/intel/libeth/Kconfig new file mode 100644 index 000000000000..af970a63c227 --- /dev/null +++ b/drivers/net/ethernet/intel/libeth/Kconfig @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0-only +# Copyright (C) 2024 Intel Corporation + +config LIBETH + tristate + help + libeth is a common library containing routines shared between several + drivers, but not yet promoted to the generic kernel API. diff --git a/drivers/net/ethernet/intel/libie/Kconfig b/drivers/net/ethernet/intel/libie/Kconfig new file mode 100644 index 000000000000..33aff6bc8f81 --- /dev/null +++ b/drivers/net/ethernet/intel/libie/Kconfig @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: GPL-2.0-only +# Copyright (C) 2024 Intel Corporation + +config LIBIE + tristate + select LIBETH + help + libie (Intel Ethernet library) is a common library built on top of + libeth and containing vendor-specific routines shared between several + Intel Ethernet drivers. diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile index dacb481ee5b1..04c844ef4964 100644 --- a/drivers/net/ethernet/intel/Makefile +++ b/drivers/net/ethernet/intel/Makefile @@ -3,6 +3,9 @@ # Makefile for the Intel network device drivers. 
# +obj-$(CONFIG_LIBETH) += libeth/ +obj-$(CONFIG_LIBIE) += libie/ + obj-$(CONFIG_E100) += e100.o obj-$(CONFIG_E1000) += e1000/ obj-$(CONFIG_E1000E) += e1000e/ diff --git a/drivers/net/ethernet/intel/libeth/Makefile b/drivers/net/ethernet/intel/libeth/Makefile new file mode 100644 index 000000000000..e99d51649bb6 --- /dev/null +++ b/drivers/net/ethernet/intel/libeth/Makefile @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: GPL-2.0-only +# Copyright (C) 2024 Intel Corporation + +obj-$(CONFIG_LIBETH) += libeth.o + +libeth-objs += rx.o diff --git a/drivers/net/ethernet/intel/libie/Makefile b/drivers/net/ethernet/intel/libie/Makefile new file mode 100644 index 000000000000..bf42c5aeeedd --- /dev/null +++ b/drivers/net/ethernet/intel/libie/Makefile @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: GPL-2.0-only +# Copyright (C) 2024 Intel Corporation + +obj-$(CONFIG_LIBIE) += libie.o + +libie-objs += rx.o diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h index ce1f11b8ad65..5a0699ca7ce5 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h +++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h @@ -371,13 +371,6 @@ void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status); int i40e_set_mac_type(struct i40e_hw *hw); -extern struct i40e_rx_ptype_decoded i40e_ptype_lookup[]; - -static inline struct i40e_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype) -{ - return i40e_ptype_lookup[ptype]; -} - /** * i40e_virtchnl_link_speed - Convert AdminQ link_speed to virtchnl definition * @link_speed: the speed to convert diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h index d9031499697e..28568e126850 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_type.h +++ b/drivers/net/ethernet/intel/i40e/i40e_type.h @@ -745,94 +745,6 @@ enum i40e_rx_desc_error_l3l4e_fcoe_masks { #define I40E_RXD_QW1_PTYPE_SHIFT 30 #define I40E_RXD_QW1_PTYPE_MASK (0xFFULL << I40E_RXD_QW1_PTYPE_SHIFT) -/* Packet type non-ip values */ -enum i40e_rx_l2_ptype { - I40E_RX_PTYPE_L2_RESERVED = 0, - I40E_RX_PTYPE_L2_MAC_PAY2 = 1, - I40E_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, - I40E_RX_PTYPE_L2_FIP_PAY2 = 3, - I40E_RX_PTYPE_L2_OUI_PAY2 = 4, - I40E_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, - I40E_RX_PTYPE_L2_LLDP_PAY2 = 6, - I40E_RX_PTYPE_L2_ECP_PAY2 = 7, - I40E_RX_PTYPE_L2_EVB_PAY2 = 8, - I40E_RX_PTYPE_L2_QCN_PAY2 = 9, - I40E_RX_PTYPE_L2_EAPOL_PAY2 = 10, - I40E_RX_PTYPE_L2_ARP = 11, - I40E_RX_PTYPE_L2_FCOE_PAY3 = 12, - I40E_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13, - I40E_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14, - I40E_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15, - I40E_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16, - I40E_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17, - I40E_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18, - I40E_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19, - I40E_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20, - I40E_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21, - I40E_RX_PTYPE_GRENAT4_MAC_PAY3 = 58, - I40E_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87, - I40E_RX_PTYPE_GRENAT6_MAC_PAY3 = 124, - I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153 -}; - -struct i40e_rx_ptype_decoded { - u32 known:1; - u32 outer_ip:1; - u32 outer_ip_ver:1; - u32 outer_frag:1; - u32 tunnel_type:3; - u32 tunnel_end_prot:2; - u32 tunnel_end_frag:1; - u32 inner_prot:4; - u32 payload_layer:3; -}; - -enum i40e_rx_ptype_outer_ip { - I40E_RX_PTYPE_OUTER_L2 = 0, - I40E_RX_PTYPE_OUTER_IP = 1 -}; - -enum i40e_rx_ptype_outer_ip_ver { - I40E_RX_PTYPE_OUTER_NONE = 0, - I40E_RX_PTYPE_OUTER_IPV4 = 0, - I40E_RX_PTYPE_OUTER_IPV6 = 1 -}; - -enum 
i40e_rx_ptype_outer_fragmented { - I40E_RX_PTYPE_NOT_FRAG = 0, - I40E_RX_PTYPE_FRAG = 1 -}; - -enum i40e_rx_ptype_tunnel_type { - I40E_RX_PTYPE_TUNNEL_NONE = 0, - I40E_RX_PTYPE_TUNNEL_IP_IP = 1, - I40E_RX_PTYPE_TUNNEL_IP_GRENAT = 2, - I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, - I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, -}; - -enum i40e_rx_ptype_tunnel_end_prot { - I40E_RX_PTYPE_TUNNEL_END_NONE = 0, - I40E_RX_PTYPE_TUNNEL_END_IPV4 = 1, - I40E_RX_PTYPE_TUNNEL_END_IPV6 = 2, -}; - -enum i40e_rx_ptype_inner_prot { - I40E_RX_PTYPE_INNER_PROT_NONE = 0, - I40E_RX_PTYPE_INNER_PROT_UDP = 1, - I40E_RX_PTYPE_INNER_PROT_TCP = 2, - I40E_RX_PTYPE_INNER_PROT_SCTP = 3, - I40E_RX_PTYPE_INNER_PROT_ICMP = 4, - I40E_RX_PTYPE_INNER_PROT_TIMESYNC = 5 -}; - -enum i40e_rx_ptype_payload_layer { - I40E_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, - I40E_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, - I40E_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, - I40E_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, -}; - #define I40E_RXD_QW1_LENGTH_PBUF_SHIFT 38 #define I40E_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \ I40E_RXD_QW1_LENGTH_PBUF_SHIFT) diff --git a/drivers/net/ethernet/intel/iavf/iavf_prototype.h b/drivers/net/ethernet/intel/iavf/iavf_prototype.h index 4a48e6171405..48c3901381b4 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_prototype.h +++ b/drivers/net/ethernet/intel/iavf/iavf_prototype.h @@ -45,13 +45,6 @@ enum iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 seid, enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 seid, struct iavf_aqc_get_set_rss_key_data *key); -extern struct iavf_rx_ptype_decoded iavf_ptype_lookup[]; - -static inline struct iavf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype) -{ - return iavf_ptype_lookup[ptype]; -} - void iavf_vf_parse_hw_config(struct iavf_hw *hw, struct virtchnl_vf_resource *msg); enum iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw, diff --git a/drivers/net/ethernet/intel/iavf/iavf_type.h b/drivers/net/ethernet/intel/iavf/iavf_type.h index 2b6a207fa441..23ded4fcd94f 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_type.h +++ b/drivers/net/ethernet/intel/iavf/iavf_type.h @@ -327,94 +327,6 @@ enum iavf_rx_desc_error_l3l4e_fcoe_masks { #define IAVF_RXD_QW1_PTYPE_SHIFT 30 #define IAVF_RXD_QW1_PTYPE_MASK (0xFFULL << IAVF_RXD_QW1_PTYPE_SHIFT) -/* Packet type non-ip values */ -enum iavf_rx_l2_ptype { - IAVF_RX_PTYPE_L2_RESERVED = 0, - IAVF_RX_PTYPE_L2_MAC_PAY2 = 1, - IAVF_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, - IAVF_RX_PTYPE_L2_FIP_PAY2 = 3, - IAVF_RX_PTYPE_L2_OUI_PAY2 = 4, - IAVF_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, - IAVF_RX_PTYPE_L2_LLDP_PAY2 = 6, - IAVF_RX_PTYPE_L2_ECP_PAY2 = 7, - IAVF_RX_PTYPE_L2_EVB_PAY2 = 8, - IAVF_RX_PTYPE_L2_QCN_PAY2 = 9, - IAVF_RX_PTYPE_L2_EAPOL_PAY2 = 10, - IAVF_RX_PTYPE_L2_ARP = 11, - IAVF_RX_PTYPE_L2_FCOE_PAY3 = 12, - IAVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13, - IAVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14, - IAVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15, - IAVF_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16, - IAVF_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17, - IAVF_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18, - IAVF_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19, - IAVF_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20, - IAVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21, - IAVF_RX_PTYPE_GRENAT4_MAC_PAY3 = 58, - IAVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87, - IAVF_RX_PTYPE_GRENAT6_MAC_PAY3 = 124, - IAVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153 -}; - -struct iavf_rx_ptype_decoded { - u32 known:1; - u32 outer_ip:1; - u32 outer_ip_ver:1; - u32 outer_frag:1; - u32 tunnel_type:3; - u32 tunnel_end_prot:2; - u32 tunnel_end_frag:1; - u32 inner_prot:4; - u32 payload_layer:3; -}; - 
-enum iavf_rx_ptype_outer_ip { - IAVF_RX_PTYPE_OUTER_L2 = 0, - IAVF_RX_PTYPE_OUTER_IP = 1 -}; - -enum iavf_rx_ptype_outer_ip_ver { - IAVF_RX_PTYPE_OUTER_NONE = 0, - IAVF_RX_PTYPE_OUTER_IPV4 = 0, - IAVF_RX_PTYPE_OUTER_IPV6 = 1 -}; - -enum iavf_rx_ptype_outer_fragmented { - IAVF_RX_PTYPE_NOT_FRAG = 0, - IAVF_RX_PTYPE_FRAG = 1 -}; - -enum iavf_rx_ptype_tunnel_type { - IAVF_RX_PTYPE_TUNNEL_NONE = 0, - IAVF_RX_PTYPE_TUNNEL_IP_IP = 1, - IAVF_RX_PTYPE_TUNNEL_IP_GRENAT = 2, - IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, - IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, -}; - -enum iavf_rx_ptype_tunnel_end_prot { - IAVF_RX_PTYPE_TUNNEL_END_NONE = 0, - IAVF_RX_PTYPE_TUNNEL_END_IPV4 = 1, - IAVF_RX_PTYPE_TUNNEL_END_IPV6 = 2, -}; - -enum iavf_rx_ptype_inner_prot { - IAVF_RX_PTYPE_INNER_PROT_NONE = 0, - IAVF_RX_PTYPE_INNER_PROT_UDP = 1, - IAVF_RX_PTYPE_INNER_PROT_TCP = 2, - IAVF_RX_PTYPE_INNER_PROT_SCTP = 3, - IAVF_RX_PTYPE_INNER_PROT_ICMP = 4, - IAVF_RX_PTYPE_INNER_PROT_TIMESYNC = 5 -}; - -enum iavf_rx_ptype_payload_layer { - IAVF_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, - IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, - IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, - IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, -}; - #define IAVF_RXD_QW1_LENGTH_PBUF_SHIFT 38 #define IAVF_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \ IAVF_RXD_QW1_LENGTH_PBUF_SHIFT) diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h index d384ddfcb83e..611577ebc29d 100644 --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h @@ -160,64 +160,6 @@ struct ice_fltr_desc { (0x1ULL << ICE_FXD_FLTR_WB_QW1_FAIL_PROF_S) #define ICE_FXD_FLTR_WB_QW1_FAIL_PROF_YES 0x1ULL -struct ice_rx_ptype_decoded { - u32 known:1; - u32 outer_ip:1; - u32 outer_ip_ver:2; - u32 outer_frag:1; - u32 tunnel_type:3; - u32 tunnel_end_prot:2; - u32 tunnel_end_frag:1; - u32 inner_prot:4; - u32 payload_layer:3; -}; - -enum ice_rx_ptype_outer_ip { - ICE_RX_PTYPE_OUTER_L2 = 0, - ICE_RX_PTYPE_OUTER_IP = 1, -}; - -enum ice_rx_ptype_outer_ip_ver { - ICE_RX_PTYPE_OUTER_NONE = 0, - ICE_RX_PTYPE_OUTER_IPV4 = 1, - ICE_RX_PTYPE_OUTER_IPV6 = 2, -}; - -enum ice_rx_ptype_outer_fragmented { - ICE_RX_PTYPE_NOT_FRAG = 0, - ICE_RX_PTYPE_FRAG = 1, -}; - -enum ice_rx_ptype_tunnel_type { - ICE_RX_PTYPE_TUNNEL_NONE = 0, - ICE_RX_PTYPE_TUNNEL_IP_IP = 1, - ICE_RX_PTYPE_TUNNEL_IP_GRENAT = 2, - ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, - ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, -}; - -enum ice_rx_ptype_tunnel_end_prot { - ICE_RX_PTYPE_TUNNEL_END_NONE = 0, - ICE_RX_PTYPE_TUNNEL_END_IPV4 = 1, - ICE_RX_PTYPE_TUNNEL_END_IPV6 = 2, -}; - -enum ice_rx_ptype_inner_prot { - ICE_RX_PTYPE_INNER_PROT_NONE = 0, - ICE_RX_PTYPE_INNER_PROT_UDP = 1, - ICE_RX_PTYPE_INNER_PROT_TCP = 2, - ICE_RX_PTYPE_INNER_PROT_SCTP = 3, - ICE_RX_PTYPE_INNER_PROT_ICMP = 4, - ICE_RX_PTYPE_INNER_PROT_TIMESYNC = 5, -}; - -enum ice_rx_ptype_payload_layer { - ICE_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, - ICE_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, - ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, - ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, -}; - /* Rx Flex Descriptor * This descriptor is used instead of the legacy version descriptor when * ice_rlan_ctx.adv_desc is set @@ -651,266 +593,4 @@ struct ice_tlan_ctx { u8 int_q_state; /* width not needed - internal - DO NOT WRITE!!! */ }; -/* The ice_ptype_lkup table is used to convert from the 10-bit ptype in the - * hardware to a bit-field that can be used by SW to more easily determine the - * packet type. 
- * - * Macros are used to shorten the table lines and make this table human - * readable. - * - * We store the PTYPE in the top byte of the bit field - this is just so that - * we can check that the table doesn't have a row missing, as the index into - * the table should be the PTYPE. - * - * Typical work flow: - * - * IF NOT ice_ptype_lkup[ptype].known - * THEN - * Packet is unknown - * ELSE IF ice_ptype_lkup[ptype].outer_ip == ICE_RX_PTYPE_OUTER_IP - * Use the rest of the fields to look at the tunnels, inner protocols, etc - * ELSE - * Use the enum ice_rx_l2_ptype to decode the packet type - * ENDIF - */ -#define ICE_PTYPES \ - /* L2 Packet types */ \ - ICE_PTT_UNUSED_ENTRY(0), \ - ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), \ - ICE_PTT_UNUSED_ENTRY(2), \ - ICE_PTT_UNUSED_ENTRY(3), \ - ICE_PTT_UNUSED_ENTRY(4), \ - ICE_PTT_UNUSED_ENTRY(5), \ - ICE_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), \ - ICE_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), \ - ICE_PTT_UNUSED_ENTRY(8), \ - ICE_PTT_UNUSED_ENTRY(9), \ - ICE_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), \ - ICE_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), \ - ICE_PTT_UNUSED_ENTRY(12), \ - ICE_PTT_UNUSED_ENTRY(13), \ - ICE_PTT_UNUSED_ENTRY(14), \ - ICE_PTT_UNUSED_ENTRY(15), \ - ICE_PTT_UNUSED_ENTRY(16), \ - ICE_PTT_UNUSED_ENTRY(17), \ - ICE_PTT_UNUSED_ENTRY(18), \ - ICE_PTT_UNUSED_ENTRY(19), \ - ICE_PTT_UNUSED_ENTRY(20), \ - ICE_PTT_UNUSED_ENTRY(21), \ - \ - /* Non Tunneled IPv4 */ \ - ICE_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), \ - ICE_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), \ - ICE_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(25), \ - ICE_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), \ - ICE_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), \ - ICE_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), \ - \ - /* IPv4 --> IPv4 */ \ - ICE_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3), \ - ICE_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3), \ - ICE_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(32), \ - ICE_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4), \ - ICE_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), \ - ICE_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), \ - \ - /* IPv4 --> IPv6 */ \ - ICE_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3), \ - ICE_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3), \ - ICE_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(39), \ - ICE_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4), \ - ICE_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), \ - ICE_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), \ - \ - /* IPv4 --> GRE/NAT */ \ - ICE_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), \ - \ - /* IPv4 --> GRE/NAT --> IPv4 */ \ - ICE_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), \ - ICE_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), \ - ICE_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(47), \ - ICE_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), \ - ICE_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), \ - ICE_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), \ - \ - /* IPv4 --> GRE/NAT --> IPv6 */ \ - ICE_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), \ - ICE_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), \ - ICE_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), \ - 
ICE_PTT_UNUSED_ENTRY(54), \ - ICE_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), \ - ICE_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), \ - ICE_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), \ - \ - /* IPv4 --> GRE/NAT --> MAC */ \ - ICE_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), \ - \ - /* IPv4 --> GRE/NAT --> MAC --> IPv4 */ \ - ICE_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), \ - ICE_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), \ - ICE_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(62), \ - ICE_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), \ - ICE_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), \ - ICE_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), \ - \ - /* IPv4 --> GRE/NAT -> MAC --> IPv6 */ \ - ICE_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), \ - ICE_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), \ - ICE_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(69), \ - ICE_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), \ - ICE_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), \ - ICE_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), \ - \ - /* IPv4 --> GRE/NAT --> MAC/VLAN */ \ - ICE_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), \ - \ - /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */ \ - ICE_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), \ - ICE_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), \ - ICE_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(77), \ - ICE_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), \ - ICE_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), \ - ICE_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), \ - \ - /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */ \ - ICE_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), \ - ICE_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), \ - ICE_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(84), \ - ICE_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), \ - ICE_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), \ - ICE_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), \ - \ - /* Non Tunneled IPv6 */ \ - ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3), \ - ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3), \ - ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(91), \ - ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4), \ - ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4), \ - ICE_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4), \ - \ - /* IPv6 --> IPv4 */ \ - ICE_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3), \ - ICE_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3), \ - ICE_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(98), \ - ICE_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4), \ - ICE_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), \ - ICE_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), \ - \ - /* IPv6 --> IPv6 */ \ - ICE_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3), \ - ICE_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3), \ - ICE_PTT(104, IP, IPV6, NOF, IP_IP, 
IPV6, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(105), \ - ICE_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4), \ - ICE_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), \ - ICE_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), \ - \ - /* IPv6 --> GRE/NAT */ \ - ICE_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), \ - \ - /* IPv6 --> GRE/NAT -> IPv4 */ \ - ICE_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), \ - ICE_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), \ - ICE_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(113), \ - ICE_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), \ - ICE_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), \ - ICE_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), \ - \ - /* IPv6 --> GRE/NAT -> IPv6 */ \ - ICE_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), \ - ICE_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), \ - ICE_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(120), \ - ICE_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), \ - ICE_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), \ - ICE_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), \ - \ - /* IPv6 --> GRE/NAT -> MAC */ \ - ICE_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), \ - \ - /* IPv6 --> GRE/NAT -> MAC -> IPv4 */ \ - ICE_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), \ - ICE_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), \ - ICE_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(128), \ - ICE_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), \ - ICE_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), \ - ICE_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), \ - \ - /* IPv6 --> GRE/NAT -> MAC -> IPv6 */ \ - ICE_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), \ - ICE_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), \ - ICE_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(135), \ - ICE_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), \ - ICE_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), \ - ICE_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), \ - \ - /* IPv6 --> GRE/NAT -> MAC/VLAN */ \ - ICE_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), \ - \ - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */ \ - ICE_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), \ - ICE_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), \ - ICE_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(143), \ - ICE_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), \ - ICE_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), \ - ICE_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), \ - \ - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */ \ - ICE_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), \ - ICE_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), \ - ICE_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), \ - ICE_PTT_UNUSED_ENTRY(150), \ - ICE_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), \ - ICE_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), \ - ICE_PTT(153, IP, IPV6, NOF, 
IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - -#define ICE_NUM_DEFINED_PTYPES 154 - -/* macro to make the table lines short, use explicit indexing with [PTYPE] */ -#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ - [PTYPE] = { \ - 1, \ - ICE_RX_PTYPE_OUTER_##OUTER_IP, \ - ICE_RX_PTYPE_OUTER_##OUTER_IP_VER, \ - ICE_RX_PTYPE_##OUTER_FRAG, \ - ICE_RX_PTYPE_TUNNEL_##T, \ - ICE_RX_PTYPE_TUNNEL_END_##TE, \ - ICE_RX_PTYPE_##TEF, \ - ICE_RX_PTYPE_INNER_PROT_##I, \ - ICE_RX_PTYPE_PAYLOAD_LAYER_##PL } - -#define ICE_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } - -/* shorter macros makes the table fit but are terse */ -#define ICE_RX_PTYPE_NOF ICE_RX_PTYPE_NOT_FRAG -#define ICE_RX_PTYPE_FRG ICE_RX_PTYPE_FRAG - -/* Lookup table mapping in the 10-bit HW PTYPE to the bit field for decoding */ -static const struct ice_rx_ptype_decoded ice_ptype_lkup[BIT(10)] = { - ICE_PTYPES - - /* unused entries */ - [ICE_NUM_DEFINED_PTYPES ... 1023] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } -}; - -static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype) -{ - return ice_ptype_lkup[ptype]; -} - - #endif /* _ICE_LAN_TX_RX_H_ */ diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h new file mode 100644 index 000000000000..ab9ffe1e93d8 --- /dev/null +++ b/include/linux/net/intel/libie/rx.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2024 Intel Corporation */ + +#ifndef __LIBIE_RX_H +#define __LIBIE_RX_H + +#include + +/* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed + * bitfield struct. + */ + +#define LIBIE_RX_PT_NUM 154 + +extern const struct libeth_rpt libie_rx_pt_lut[LIBIE_RX_PT_NUM]; + +/** + * libie_rx_pt_parse - convert HW packet type to software bitfield structure + * @pt: 10-bit hardware packet type value from the descriptor + * + * ```libie_rx_pt_lut``` must be accessed only using this wrapper. + * + * Return: parsed bitfield struct corresponding to the provided ptype. + */ +static inline struct libeth_rpt libie_rx_pt_parse(u32 pt) +{ + if (unlikely(pt >= LIBIE_RX_PT_NUM)) + pt = 0; + + return libie_rx_pt_lut[pt]; +} + +#endif /* __LIBIE_RX_H */ diff --git a/include/net/libeth/rx.h b/include/net/libeth/rx.h new file mode 100644 index 000000000000..aaf9c2cdf7fd --- /dev/null +++ b/include/net/libeth/rx.h @@ -0,0 +1,125 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2024 Intel Corporation */ + +#ifndef __LIBETH_RX_H +#define __LIBETH_RX_H + +#include + +/* Converting abstract packet type numbers into a software structure with + * the packet parameters to do O(1) lookup on Rx. 
+ */ + +enum { + LIBETH_RX_PT_OUTER_L2 = 0U, + LIBETH_RX_PT_OUTER_IPV4, + LIBETH_RX_PT_OUTER_IPV6, +}; + +enum { + LIBETH_RX_PT_NOT_FRAG = 0U, + LIBETH_RX_PT_FRAG, +}; + +enum { + LIBETH_RX_PT_TUNNEL_IP_NONE = 0U, + LIBETH_RX_PT_TUNNEL_IP_IP, + LIBETH_RX_PT_TUNNEL_IP_GRENAT, + LIBETH_RX_PT_TUNNEL_IP_GRENAT_MAC, + LIBETH_RX_PT_TUNNEL_IP_GRENAT_MAC_VLAN, +}; + +enum { + LIBETH_RX_PT_TUNNEL_END_NONE = 0U, + LIBETH_RX_PT_TUNNEL_END_IPV4, + LIBETH_RX_PT_TUNNEL_END_IPV6, +}; + +enum { + LIBETH_RX_PT_INNER_NONE = 0U, + LIBETH_RX_PT_INNER_UDP, + LIBETH_RX_PT_INNER_TCP, + LIBETH_RX_PT_INNER_SCTP, + LIBETH_RX_PT_INNER_ICMP, + LIBETH_RX_PT_INNER_TIMESYNC, +}; + +#define LIBETH_RX_PT_PAYLOAD_NONE PKT_HASH_TYPE_NONE +#define LIBETH_RX_PT_PAYLOAD_L2 PKT_HASH_TYPE_L2 +#define LIBETH_RX_PT_PAYLOAD_L3 PKT_HASH_TYPE_L3 +#define LIBETH_RX_PT_PAYLOAD_L4 PKT_HASH_TYPE_L4 + +struct libeth_rpt { + u32 outer_ip:2; + u32 outer_frag:1; + u32 tunnel_type:3; + u32 tunnel_end_prot:2; + u32 tunnel_end_frag:1; + u32 inner_prot:3; + enum pkt_hash_types payload_layer:2; + + u32 pad:2; + enum xdp_rss_hash_type hash_type:16; +}; + +void libeth_rx_pt_gen_hash_type(struct libeth_rpt *pt); + +/** + * libeth_rx_pt_get_ip_ver - get IP version from a packet type structure + * @pt: packet type params + * + * Wrapper to compile out the IPv6 code from the drivers when not supported + * by the kernel. + * + * Return: @pt.outer_ip or stub for IPv6 when not compiled-in. + */ +static inline u32 libeth_rx_pt_get_ip_ver(struct libeth_rpt pt) +{ +#if !IS_ENABLED(CONFIG_IPV6) + switch (pt.outer_ip) { + case LIBETH_RX_PT_OUTER_IPV4: + return LIBETH_RX_PT_OUTER_IPV4; + default: + return LIBETH_RX_PT_OUTER_L2; + } +#else + return pt.outer_ip; +#endif +} + +/* libeth_has_*() can be used to quickly check whether the HW metadata is + * available to avoid further expensive processing such as descriptor reads. + * They already check for the corresponding netdev feature to be enabled, + * thus can be used as drop-in replacements. + */ + +static inline bool libeth_rx_pt_has_checksum(const struct net_device *dev, + struct libeth_rpt pt) +{ + /* Non-zero _INNER* is only possible when _OUTER_IPV* is set, + * it is enough to check only for the L4 type. + */ + return likely(pt.inner_prot > LIBETH_RX_PT_INNER_NONE && + (dev->features & NETIF_F_RXCSUM)); +} + +static inline bool libeth_rx_pt_has_hash(const struct net_device *dev, + struct libeth_rpt pt) +{ + return likely(pt.payload_layer > LIBETH_RX_PT_PAYLOAD_NONE && + (dev->features & NETIF_F_RXHASH)); +} + +/** + * libeth_rx_pt_set_hash - fill in skb hash value basing on the PT + * @skb: skb to fill the hash in + * @hash: 32-bit hash value from the descriptor + * @pt: packet type + */ +static inline void libeth_rx_pt_set_hash(struct sk_buff *skb, u32 hash, + struct libeth_rpt pt) +{ + skb_set_hash(skb, hash, pt.payload_layer); +} + +#endif /* __LIBETH_RX_H */ diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c index de6ca6295742..e8031f1a9b4f 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_common.c +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c @@ -381,259 +381,6 @@ int i40e_aq_set_rss_key(struct i40e_hw *hw, return i40e_aq_get_set_rss_key(hw, vsi_id, key, true); } -/* The i40e_ptype_lookup table is used to convert from the 8-bit ptype in the - * hardware to a bit-field that can be used by SW to more easily determine the - * packet type. - * - * Macros are used to shorten the table lines and make this table human - * readable. 
- * - * We store the PTYPE in the top byte of the bit field - this is just so that - * we can check that the table doesn't have a row missing, as the index into - * the table should be the PTYPE. - * - * Typical work flow: - * - * IF NOT i40e_ptype_lookup[ptype].known - * THEN - * Packet is unknown - * ELSE IF i40e_ptype_lookup[ptype].outer_ip == I40E_RX_PTYPE_OUTER_IP - * Use the rest of the fields to look at the tunnels, inner protocols, etc - * ELSE - * Use the enum i40e_rx_l2_ptype to decode the packet type - * ENDIF - */ - -/* macro to make the table lines short, use explicit indexing with [PTYPE] */ -#define I40E_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ - [PTYPE] = { \ - 1, \ - I40E_RX_PTYPE_OUTER_##OUTER_IP, \ - I40E_RX_PTYPE_OUTER_##OUTER_IP_VER, \ - I40E_RX_PTYPE_##OUTER_FRAG, \ - I40E_RX_PTYPE_TUNNEL_##T, \ - I40E_RX_PTYPE_TUNNEL_END_##TE, \ - I40E_RX_PTYPE_##TEF, \ - I40E_RX_PTYPE_INNER_PROT_##I, \ - I40E_RX_PTYPE_PAYLOAD_LAYER_##PL } - -#define I40E_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } - -/* shorter macros makes the table fit but are terse */ -#define I40E_RX_PTYPE_NOF I40E_RX_PTYPE_NOT_FRAG -#define I40E_RX_PTYPE_FRG I40E_RX_PTYPE_FRAG -#define I40E_RX_PTYPE_INNER_PROT_TS I40E_RX_PTYPE_INNER_PROT_TIMESYNC - -/* Lookup table mapping in the 8-bit HW PTYPE to the bit field for decoding */ -struct i40e_rx_ptype_decoded i40e_ptype_lookup[BIT(8)] = { - /* L2 Packet types */ - I40E_PTT_UNUSED_ENTRY(0), - I40E_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2), - I40E_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT_UNUSED_ENTRY(4), - I40E_PTT_UNUSED_ENTRY(5), - I40E_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT_UNUSED_ENTRY(8), - I40E_PTT_UNUSED_ENTRY(9), - I40E_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), - I40E_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - - /* Non Tunneled IPv4 */ - I40E_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(25), - I40E_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), - I40E_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), - I40E_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv4 --> IPv4 */ - I40E_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - I40E_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - I40E_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(32), - I40E_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - I40E_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - I40E_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> IPv6 */ - I40E_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - I40E_PTT(37, IP, IPV4, 
NOF, IP_IP, IPV6, NOF, NONE, PAY3), - I40E_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(39), - I40E_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - I40E_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - I40E_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT */ - I40E_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> IPv4 */ - I40E_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - I40E_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - I40E_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(47), - I40E_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - I40E_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - I40E_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> IPv6 */ - I40E_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - I40E_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - I40E_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(54), - I40E_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - I40E_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - I40E_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC */ - I40E_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> MAC --> IPv4 */ - I40E_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - I40E_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - I40E_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(62), - I40E_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - I40E_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - I40E_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT -> MAC --> IPv6 */ - I40E_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - I40E_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - I40E_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(69), - I40E_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - I40E_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - I40E_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC/VLAN */ - I40E_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), - - /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */ - I40E_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), - I40E_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - I40E_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(77), - I40E_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - I40E_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - I40E_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */ - I40E_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - I40E_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - I40E_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(84), - I40E_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - I40E_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - I40E_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* Non Tunneled IPv6 
*/ - I40E_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(91), - I40E_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4), - I40E_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4), - I40E_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv6 --> IPv4 */ - I40E_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - I40E_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - I40E_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(98), - I40E_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - I40E_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - I40E_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> IPv6 */ - I40E_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - I40E_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3), - I40E_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(105), - I40E_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - I40E_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - I40E_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT */ - I40E_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> IPv4 */ - I40E_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - I40E_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - I40E_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(113), - I40E_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - I40E_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - I40E_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> IPv6 */ - I40E_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - I40E_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - I40E_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(120), - I40E_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - I40E_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - I40E_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC */ - I40E_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> MAC -> IPv4 */ - I40E_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - I40E_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - I40E_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(128), - I40E_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - I40E_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - I40E_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC -> IPv6 */ - I40E_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - I40E_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - I40E_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(135), - I40E_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - I40E_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - I40E_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN */ - I40E_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */ - I40E_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, 
NONE, PAY3), - I40E_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - I40E_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(143), - I40E_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - I40E_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - I40E_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */ - I40E_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - I40E_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - I40E_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(150), - I40E_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - I40E_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - I40E_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* unused entries */ - [154 ... 255] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } -}; - /** * i40e_init_shared_code - Initialize the shared code * @hw: pointer to hardware structure diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 3c7d6ea84491..8d6ca8055f7b 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -100,6 +100,7 @@ MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all), Debug mask (0x8XXXXXXX MODULE_AUTHOR("Intel Corporation, "); MODULE_DESCRIPTION("Intel(R) Ethernet Connection XL710 Network Driver"); +MODULE_IMPORT_NS(LIBIE); MODULE_LICENSE("GPL v2"); static struct workqueue_struct *i40e_wq; diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c index ac2fcc5ac595..f258947078a4 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c @@ -2,6 +2,7 @@ /* Copyright(c) 2013 - 2018 Intel Corporation. */ #include +#include #include #include #include @@ -1741,38 +1742,30 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi, struct sk_buff *skb, union i40e_rx_desc *rx_desc) { - struct i40e_rx_ptype_decoded decoded; + struct libeth_rpt decoded; u32 rx_error, rx_status; bool ipv4, ipv6; u8 ptype; u64 qword; - qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len); - ptype = FIELD_GET(I40E_RXD_QW1_PTYPE_MASK, qword); - rx_error = FIELD_GET(I40E_RXD_QW1_ERROR_MASK, qword); - rx_status = FIELD_GET(I40E_RXD_QW1_STATUS_MASK, qword); - decoded = decode_rx_desc_ptype(ptype); - skb->ip_summed = CHECKSUM_NONE; - skb_checksum_none_assert(skb); + qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len); + ptype = FIELD_GET(I40E_RXD_QW1_PTYPE_MASK, qword); - /* Rx csum enabled and ip headers found? */ - if (!(vsi->netdev->features & NETIF_F_RXCSUM)) + decoded = libie_rx_pt_parse(ptype); + if (!libeth_rx_pt_has_checksum(vsi->netdev, decoded)) return; + rx_error = FIELD_GET(I40E_RXD_QW1_ERROR_MASK, qword); + rx_status = FIELD_GET(I40E_RXD_QW1_STATUS_MASK, qword); + /* did the hardware decode the packet and checksum? 
*/ if (!(rx_status & BIT(I40E_RX_DESC_STATUS_L3L4P_SHIFT))) return; - /* both known and outer_ip must be set for the below code to work */ - if (!(decoded.known && decoded.outer_ip)) - return; - - ipv4 = (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == I40E_RX_PTYPE_OUTER_IPV4); - ipv6 = (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == I40E_RX_PTYPE_OUTER_IPV6); + ipv4 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV4; + ipv6 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV6; if (ipv4 && (rx_error & (BIT(I40E_RX_DESC_ERROR_IPE_SHIFT) | @@ -1800,49 +1793,16 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi, * we need to bump the checksum level by 1 to reflect the fact that * we are indicating we validated the inner checksum. */ - if (decoded.tunnel_type >= I40E_RX_PTYPE_TUNNEL_IP_GRENAT) + if (decoded.tunnel_type >= LIBETH_RX_PT_TUNNEL_IP_GRENAT) skb->csum_level = 1; - /* Only report checksum unnecessary for TCP, UDP, or SCTP */ - switch (decoded.inner_prot) { - case I40E_RX_PTYPE_INNER_PROT_TCP: - case I40E_RX_PTYPE_INNER_PROT_UDP: - case I40E_RX_PTYPE_INNER_PROT_SCTP: - skb->ip_summed = CHECKSUM_UNNECESSARY; - fallthrough; - default: - break; - } - + skb->ip_summed = CHECKSUM_UNNECESSARY; return; checksum_fail: vsi->back->hw_csum_rx_error++; } -/** - * i40e_ptype_to_htype - get a hash type - * @ptype: the ptype value from the descriptor - * - * Returns a hash type to be used by skb_set_hash - **/ -static inline int i40e_ptype_to_htype(u8 ptype) -{ - struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype); - - if (!decoded.known) - return PKT_HASH_TYPE_NONE; - - if (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP && - decoded.payload_layer == I40E_RX_PTYPE_PAYLOAD_LAYER_PAY4) - return PKT_HASH_TYPE_L4; - else if (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP && - decoded.payload_layer == I40E_RX_PTYPE_PAYLOAD_LAYER_PAY3) - return PKT_HASH_TYPE_L3; - else - return PKT_HASH_TYPE_L2; -} - /** * i40e_rx_hash - set the hash value in the skb * @ring: descriptor ring @@ -1855,17 +1815,19 @@ static inline void i40e_rx_hash(struct i40e_ring *ring, struct sk_buff *skb, u8 rx_ptype) { + struct libeth_rpt decoded; u32 hash; const __le64 rss_mask = cpu_to_le64((u64)I40E_RX_DESC_FLTSTAT_RSS_HASH << I40E_RX_DESC_STATUS_FLTSTAT_SHIFT); - if (!(ring->netdev->features & NETIF_F_RXHASH)) + decoded = libie_rx_pt_parse(rx_ptype); + if (!libeth_rx_pt_has_hash(ring->netdev, decoded)) return; if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) { hash = le32_to_cpu(rx_desc->wb.qword0.hi_dword.rss); - skb_set_hash(skb, hash, i40e_ptype_to_htype(rx_ptype)); + libeth_rx_pt_set_hash(skb, hash, decoded); } } diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c index 5a25233a89d5..aa751ce3425b 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_common.c +++ b/drivers/net/ethernet/intel/iavf/iavf_common.c @@ -432,259 +432,6 @@ enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 vsi_id, return iavf_aq_get_set_rss_key(hw, vsi_id, key, true); } -/* The iavf_ptype_lookup table is used to convert from the 8-bit ptype in the - * hardware to a bit-field that can be used by SW to more easily determine the - * packet type. - * - * Macros are used to shorten the table lines and make this table human - * readable. 
- * - * We store the PTYPE in the top byte of the bit field - this is just so that - * we can check that the table doesn't have a row missing, as the index into - * the table should be the PTYPE. - * - * Typical work flow: - * - * IF NOT iavf_ptype_lookup[ptype].known - * THEN - * Packet is unknown - * ELSE IF iavf_ptype_lookup[ptype].outer_ip == IAVF_RX_PTYPE_OUTER_IP - * Use the rest of the fields to look at the tunnels, inner protocols, etc - * ELSE - * Use the enum iavf_rx_l2_ptype to decode the packet type - * ENDIF - */ - -/* macro to make the table lines short, use explicit indexing with [PTYPE] */ -#define IAVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ - [PTYPE] = { \ - 1, \ - IAVF_RX_PTYPE_OUTER_##OUTER_IP, \ - IAVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \ - IAVF_RX_PTYPE_##OUTER_FRAG, \ - IAVF_RX_PTYPE_TUNNEL_##T, \ - IAVF_RX_PTYPE_TUNNEL_END_##TE, \ - IAVF_RX_PTYPE_##TEF, \ - IAVF_RX_PTYPE_INNER_PROT_##I, \ - IAVF_RX_PTYPE_PAYLOAD_LAYER_##PL } - -#define IAVF_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } - -/* shorter macros makes the table fit but are terse */ -#define IAVF_RX_PTYPE_NOF IAVF_RX_PTYPE_NOT_FRAG -#define IAVF_RX_PTYPE_FRG IAVF_RX_PTYPE_FRAG -#define IAVF_RX_PTYPE_INNER_PROT_TS IAVF_RX_PTYPE_INNER_PROT_TIMESYNC - -/* Lookup table mapping the 8-bit HW PTYPE to the bit field for decoding */ -struct iavf_rx_ptype_decoded iavf_ptype_lookup[BIT(8)] = { - /* L2 Packet types */ - IAVF_PTT_UNUSED_ENTRY(0), - IAVF_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - IAVF_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2), - IAVF_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - IAVF_PTT_UNUSED_ENTRY(4), - IAVF_PTT_UNUSED_ENTRY(5), - IAVF_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - IAVF_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - IAVF_PTT_UNUSED_ENTRY(8), - IAVF_PTT_UNUSED_ENTRY(9), - IAVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - IAVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), - IAVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - - /* Non Tunneled IPv4 */ - IAVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(25), - IAVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), - IAVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), - IAVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv4 --> IPv4 */ - IAVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - IAVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - IAVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(32), - IAVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - IAVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> IPv6 */ - IAVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - IAVF_PTT(37, IP, IPV4, NOF, 
IP_IP, IPV6, NOF, NONE, PAY3), - IAVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(39), - IAVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - IAVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT */ - IAVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> IPv4 */ - IAVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - IAVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - IAVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(47), - IAVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - IAVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> IPv6 */ - IAVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - IAVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - IAVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(54), - IAVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - IAVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC */ - IAVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> MAC --> IPv4 */ - IAVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - IAVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - IAVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(62), - IAVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - IAVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT -> MAC --> IPv6 */ - IAVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - IAVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - IAVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(69), - IAVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - IAVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC/VLAN */ - IAVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), - - /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */ - IAVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), - IAVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - IAVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(77), - IAVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - IAVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */ - IAVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - IAVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - IAVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(84), - IAVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - IAVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* Non Tunneled IPv6 */ - 
IAVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3), - IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(91), - IAVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4), - IAVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4), - IAVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv6 --> IPv4 */ - IAVF_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - IAVF_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - IAVF_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(98), - IAVF_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - IAVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> IPv6 */ - IAVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - IAVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3), - IAVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(105), - IAVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - IAVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT */ - IAVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> IPv4 */ - IAVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - IAVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - IAVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(113), - IAVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - IAVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> IPv6 */ - IAVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - IAVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - IAVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(120), - IAVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - IAVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC */ - IAVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> MAC -> IPv4 */ - IAVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - IAVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - IAVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(128), - IAVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - IAVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC -> IPv6 */ - IAVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - IAVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - IAVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(135), - IAVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - IAVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN */ - IAVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */ - IAVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, 
PAY3), - IAVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - IAVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(143), - IAVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - IAVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - IAVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */ - IAVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - IAVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - IAVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - IAVF_PTT_UNUSED_ENTRY(150), - IAVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - IAVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - IAVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* unused entries */ - [154 ... 255] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 } -}; - /** * iavf_aq_send_msg_to_pf * @hw: pointer to the hardware structure diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 13361a780ece..d6cbe5022815 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -45,6 +45,7 @@ MODULE_DEVICE_TABLE(pci, iavf_pci_tbl); MODULE_ALIAS("i40evf"); MODULE_AUTHOR("Intel Corporation, "); MODULE_DESCRIPTION("Intel(R) Ethernet Adaptive Virtual Function Network Driver"); +MODULE_IMPORT_NS(LIBIE); MODULE_LICENSE("GPL v2"); static const struct net_device_ops iavf_netdev_ops; diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index 32bb604a1382..ab8420719a9a 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -2,6 +2,7 @@ /* Copyright(c) 2013 - 2018 Intel Corporation. */ #include +#include #include #include "iavf.h" @@ -982,38 +983,30 @@ static void iavf_rx_checksum(struct iavf_vsi *vsi, struct sk_buff *skb, union iavf_rx_desc *rx_desc) { - struct iavf_rx_ptype_decoded decoded; + struct libeth_rpt decoded; u32 rx_error, rx_status; bool ipv4, ipv6; u8 ptype; u64 qword; - qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len); - ptype = FIELD_GET(IAVF_RXD_QW1_PTYPE_MASK, qword); - rx_error = FIELD_GET(IAVF_RXD_QW1_ERROR_MASK, qword); - rx_status = FIELD_GET(IAVF_RXD_QW1_STATUS_MASK, qword); - decoded = decode_rx_desc_ptype(ptype); - skb->ip_summed = CHECKSUM_NONE; - skb_checksum_none_assert(skb); + qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len); + ptype = FIELD_GET(IAVF_RXD_QW1_PTYPE_MASK, qword); - /* Rx csum enabled and ip headers found? */ - if (!(vsi->netdev->features & NETIF_F_RXCSUM)) + decoded = libie_rx_pt_parse(ptype); + if (!libeth_rx_pt_has_checksum(vsi->netdev, decoded)) return; + rx_error = FIELD_GET(IAVF_RXD_QW1_ERROR_MASK, qword); + rx_status = FIELD_GET(IAVF_RXD_QW1_STATUS_MASK, qword); + /* did the hardware decode the packet and checksum? 
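	 * Note that libeth_rx_pt_has_checksum() above already folded in
	 * the old NETIF_F_RXCSUM feature test and the decoded.known /
	 * decoded.outer_ip sanity checks, so only the per-descriptor
	 * status bit remains to be verified here.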
*/ if (!(rx_status & BIT(IAVF_RX_DESC_STATUS_L3L4P_SHIFT))) return; - /* both known and outer_ip must be set for the below code to work */ - if (!(decoded.known && decoded.outer_ip)) - return; - - ipv4 = (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == IAVF_RX_PTYPE_OUTER_IPV4); - ipv6 = (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == IAVF_RX_PTYPE_OUTER_IPV6); + ipv4 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV4; + ipv6 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV6; if (ipv4 && (rx_error & (BIT(IAVF_RX_DESC_ERROR_IPE_SHIFT) | @@ -1037,46 +1030,13 @@ static void iavf_rx_checksum(struct iavf_vsi *vsi, if (rx_error & BIT(IAVF_RX_DESC_ERROR_PPRS_SHIFT)) return; - /* Only report checksum unnecessary for TCP, UDP, or SCTP */ - switch (decoded.inner_prot) { - case IAVF_RX_PTYPE_INNER_PROT_TCP: - case IAVF_RX_PTYPE_INNER_PROT_UDP: - case IAVF_RX_PTYPE_INNER_PROT_SCTP: - skb->ip_summed = CHECKSUM_UNNECESSARY; - fallthrough; - default: - break; - } - + skb->ip_summed = CHECKSUM_UNNECESSARY; return; checksum_fail: vsi->back->hw_csum_rx_error++; } -/** - * iavf_ptype_to_htype - get a hash type - * @ptype: the ptype value from the descriptor - * - * Returns a hash type to be used by skb_set_hash - **/ -static int iavf_ptype_to_htype(u8 ptype) -{ - struct iavf_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype); - - if (!decoded.known) - return PKT_HASH_TYPE_NONE; - - if (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP && - decoded.payload_layer == IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY4) - return PKT_HASH_TYPE_L4; - else if (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP && - decoded.payload_layer == IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY3) - return PKT_HASH_TYPE_L3; - else - return PKT_HASH_TYPE_L2; -} - /** * iavf_rx_hash - set the hash value in the skb * @ring: descriptor ring @@ -1089,17 +1049,19 @@ static void iavf_rx_hash(struct iavf_ring *ring, struct sk_buff *skb, u8 rx_ptype) { + struct libeth_rpt decoded; u32 hash; const __le64 rss_mask = cpu_to_le64((u64)IAVF_RX_DESC_FLTSTAT_RSS_HASH << IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT); - if (!(ring->netdev->features & NETIF_F_RXHASH)) + decoded = libie_rx_pt_parse(rx_ptype); + if (!libeth_rx_pt_has_hash(ring->netdev, decoded)) return; if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) { hash = le32_to_cpu(rx_desc->wb.qword0.hi_dword.rss); - skb_set_hash(skb, hash, iavf_ptype_to_htype(rx_ptype)); + libeth_rx_pt_set_hash(skb, hash, decoded); } } diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 9d751954782c..4f75ea4b9ea1 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -37,6 +37,7 @@ static const char ice_copyright[] = "Copyright (c) 2018, Intel Corporation."; MODULE_AUTHOR("Intel Corporation, "); MODULE_DESCRIPTION(DRV_SUMMARY); +MODULE_IMPORT_NS(LIBIE); MODULE_LICENSE("GPL v2"); MODULE_FIRMWARE(ICE_DDP_PKG_FILE); diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c index df072ce767b1..969e1ed6df4a 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c @@ -2,6 +2,7 @@ /* Copyright (c) 2019, Intel Corporation. 
*/ #include +#include #include "ice_txrx_lib.h" #include "ice_eswitch.h" @@ -38,30 +39,6 @@ void ice_release_rx_desc(struct ice_rx_ring *rx_ring, u16 val) } } -/** - * ice_ptype_to_htype - get a hash type - * @ptype: the ptype value from the descriptor - * - * Returns appropriate hash type (such as PKT_HASH_TYPE_L2/L3/L4) to be used by - * skb_set_hash based on PTYPE as parsed by HW Rx pipeline and is part of - * Rx desc. - */ -static enum pkt_hash_types ice_ptype_to_htype(u16 ptype) -{ - struct ice_rx_ptype_decoded decoded = ice_decode_rx_desc_ptype(ptype); - - if (!decoded.known) - return PKT_HASH_TYPE_NONE; - if (decoded.payload_layer == ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4) - return PKT_HASH_TYPE_L4; - if (decoded.payload_layer == ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3) - return PKT_HASH_TYPE_L3; - if (decoded.outer_ip == ICE_RX_PTYPE_OUTER_L2) - return PKT_HASH_TYPE_L2; - - return PKT_HASH_TYPE_NONE; -} - /** * ice_get_rx_hash - get RX hash value from descriptor * @rx_desc: specific descriptor @@ -91,14 +68,16 @@ ice_rx_hash_to_skb(const struct ice_rx_ring *rx_ring, const union ice_32b_rx_flex_desc *rx_desc, struct sk_buff *skb, u16 rx_ptype) { + struct libeth_rpt decoded; u32 hash; - if (!(rx_ring->netdev->features & NETIF_F_RXHASH)) + decoded = libie_rx_pt_parse(rx_ptype); + if (!libeth_rx_pt_has_hash(rx_ring->netdev, decoded)) return; hash = ice_get_rx_hash(rx_desc); if (likely(hash)) - skb_set_hash(skb, hash, ice_ptype_to_htype(rx_ptype)); + libeth_rx_pt_set_hash(skb, hash, decoded); } /** @@ -114,34 +93,26 @@ static void ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb, union ice_32b_rx_flex_desc *rx_desc, u16 ptype) { - struct ice_rx_ptype_decoded decoded; + struct libeth_rpt decoded; u16 rx_status0, rx_status1; bool ipv4, ipv6; - rx_status0 = le16_to_cpu(rx_desc->wb.status_error0); - rx_status1 = le16_to_cpu(rx_desc->wb.status_error1); - - decoded = ice_decode_rx_desc_ptype(ptype); - /* Start with CHECKSUM_NONE and by default csum_level = 0 */ skb->ip_summed = CHECKSUM_NONE; - skb_checksum_none_assert(skb); - /* check if Rx checksum is enabled */ - if (!(ring->netdev->features & NETIF_F_RXCSUM)) + decoded = libie_rx_pt_parse(ptype); + if (!libeth_rx_pt_has_checksum(ring->netdev, decoded)) return; + rx_status0 = le16_to_cpu(rx_desc->wb.status_error0); + rx_status1 = le16_to_cpu(rx_desc->wb.status_error1); + /* check if HW has decoded the packet and checksum */ if (!(rx_status0 & BIT(ICE_RX_FLEX_DESC_STATUS0_L3L4P_S))) return; - if (!(decoded.known && decoded.outer_ip)) - return; - - ipv4 = (decoded.outer_ip == ICE_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == ICE_RX_PTYPE_OUTER_IPV4); - ipv6 = (decoded.outer_ip == ICE_RX_PTYPE_OUTER_IP) && - (decoded.outer_ip_ver == ICE_RX_PTYPE_OUTER_IPV6); + ipv4 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV4; + ipv6 = libeth_rx_pt_get_ip_ver(decoded) == LIBETH_RX_PT_OUTER_IPV6; if (ipv4 && (rx_status0 & (BIT(ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))) { ring->vsi->back->hw_rx_eipe_error++; @@ -169,19 +140,10 @@ ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb, * we need to bump the checksum level by 1 to reflect the fact that * we are indicating we validated the inner checksum. 
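	 * (CHECKSUM_UNNECESSARY together with csum_level == 1 tells the
	 * stack that two levels of checksums, outer and inner, have been
	 * verified.)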
*/ - if (decoded.tunnel_type >= ICE_RX_PTYPE_TUNNEL_IP_GRENAT) + if (decoded.tunnel_type >= LIBETH_RX_PT_TUNNEL_IP_GRENAT) skb->csum_level = 1; - /* Only report checksum unnecessary for TCP, UDP, or SCTP */ - switch (decoded.inner_prot) { - case ICE_RX_PTYPE_INNER_PROT_TCP: - case ICE_RX_PTYPE_INNER_PROT_UDP: - case ICE_RX_PTYPE_INNER_PROT_SCTP: - skb->ip_summed = CHECKSUM_UNNECESSARY; - break; - default: - break; - } + skb->ip_summed = CHECKSUM_UNNECESSARY; return; checksum_fail: @@ -536,42 +498,6 @@ static int ice_xdp_rx_hw_ts(const struct xdp_md *ctx, u64 *ts_ns) return 0; } -/* Define a ptype index -> XDP hash type lookup table. - * It uses the same ptype definitions as ice_decode_rx_desc_ptype[], - * avoiding possible copy-paste errors. - */ -#undef ICE_PTT -#undef ICE_PTT_UNUSED_ENTRY - -#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ - [PTYPE] = XDP_RSS_L3_##OUTER_IP_VER | XDP_RSS_L4_##I | XDP_RSS_TYPE_##PL - -#define ICE_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = 0 - -/* A few supplementary definitions for when XDP hash types do not coincide - * with what can be generated from ptype definitions - * by means of preprocessor concatenation. - */ -#define XDP_RSS_L3_NONE XDP_RSS_TYPE_NONE -#define XDP_RSS_L4_NONE XDP_RSS_TYPE_NONE -#define XDP_RSS_TYPE_PAY2 XDP_RSS_TYPE_L2 -#define XDP_RSS_TYPE_PAY3 XDP_RSS_TYPE_NONE -#define XDP_RSS_TYPE_PAY4 XDP_RSS_L4 - -static const enum xdp_rss_hash_type -ice_ptype_to_xdp_hash[ICE_NUM_DEFINED_PTYPES] = { - ICE_PTYPES -}; - -#undef XDP_RSS_L3_NONE -#undef XDP_RSS_L4_NONE -#undef XDP_RSS_TYPE_PAY2 -#undef XDP_RSS_TYPE_PAY3 -#undef XDP_RSS_TYPE_PAY4 - -#undef ICE_PTT -#undef ICE_PTT_UNUSED_ENTRY - /** * ice_xdp_rx_hash_type - Get XDP-specific hash type from the RX descriptor * @eop_desc: End of Packet descriptor @@ -579,12 +505,7 @@ ice_ptype_to_xdp_hash[ICE_NUM_DEFINED_PTYPES] = { static enum xdp_rss_hash_type ice_xdp_rx_hash_type(const union ice_32b_rx_flex_desc *eop_desc) { - u16 ptype = ice_get_ptype(eop_desc); - - if (unlikely(ptype >= ICE_NUM_DEFINED_PTYPES)) - return 0; - - return ice_ptype_to_xdp_hash[ptype]; + return libie_rx_pt_parse(ice_get_ptype(eop_desc)).hash_type; } /** diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/intel/libeth/rx.c new file mode 100644 index 000000000000..86f17e29b47d --- /dev/null +++ b/drivers/net/ethernet/intel/libeth/rx.c @@ -0,0 +1,52 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2024 Intel Corporation */ + +#include + +/* Converting abstract packet type numbers into a software structure with + * the packet parameters to do O(1) lookup on Rx. 
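+ * The tables below translate only the already-parsed fields into XDP
+ * RSS hash type bits; the per-driver HW ptype -> &struct libeth_rpt
+ * lookup tables themselves live in the vendor libraries (see libie/rx.c
+ * below).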
+ */
+
+static const u16 libeth_rx_pt_xdp_oip[] = {
+	[LIBETH_RX_PT_OUTER_L2]		= XDP_RSS_TYPE_NONE,
+	[LIBETH_RX_PT_OUTER_IPV4]	= XDP_RSS_L3_IPV4,
+	[LIBETH_RX_PT_OUTER_IPV6]	= XDP_RSS_L3_IPV6,
+};
+
+static const u16 libeth_rx_pt_xdp_iprot[] = {
+	[LIBETH_RX_PT_INNER_NONE]	= XDP_RSS_TYPE_NONE,
+	[LIBETH_RX_PT_INNER_UDP]	= XDP_RSS_L4_UDP,
+	[LIBETH_RX_PT_INNER_TCP]	= XDP_RSS_L4_TCP,
+	[LIBETH_RX_PT_INNER_SCTP]	= XDP_RSS_L4_SCTP,
+	[LIBETH_RX_PT_INNER_ICMP]	= XDP_RSS_L4_ICMP,
+	[LIBETH_RX_PT_INNER_TIMESYNC]	= XDP_RSS_TYPE_NONE,
+};
+
+static const u16 libeth_rx_pt_xdp_pl[] = {
+	[LIBETH_RX_PT_PAYLOAD_NONE]	= XDP_RSS_TYPE_NONE,
+	[LIBETH_RX_PT_PAYLOAD_L2]	= XDP_RSS_TYPE_NONE,
+	[LIBETH_RX_PT_PAYLOAD_L3]	= XDP_RSS_TYPE_NONE,
+	[LIBETH_RX_PT_PAYLOAD_L4]	= XDP_RSS_L4,
+};
+
+/**
+ * libeth_rx_pt_gen_hash_type - generate an XDP RSS hash type for a PT
+ * @pt: PT structure to evaluate
+ *
+ * Generates ```hash_type``` field with XDP RSS type values from the parsed
+ * packet parameters if they're obtained dynamically at runtime.
+ */
+void libeth_rx_pt_gen_hash_type(struct libeth_rpt *pt)
+{
+	pt->hash_type = 0;
+	pt->hash_type |= libeth_rx_pt_xdp_oip[pt->outer_ip];
+	pt->hash_type |= libeth_rx_pt_xdp_iprot[pt->inner_prot];
+	pt->hash_type |= libeth_rx_pt_xdp_pl[pt->payload_layer];
+}
+EXPORT_SYMBOL_NS_GPL(libeth_rx_pt_gen_hash_type, LIBETH);
+
+/* Module */
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION("Common Ethernet library");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c
new file mode 100644
index 000000000000..757d30914f8d
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/rx.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2024 Intel Corporation */
+
+#include
+
+/* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
+ * bitfield struct.
+ */
+
+/* A few supplementary definitions for when XDP hash types do not coincide
+ * with what can be generated from ptype definitions by means of preprocessor
+ * concatenation.
+ */
+#define XDP_RSS_L3_L2			XDP_RSS_TYPE_NONE
+#define XDP_RSS_L4_NONE			XDP_RSS_TYPE_NONE
+#define XDP_RSS_L4_TIMESYNC		XDP_RSS_TYPE_NONE
+#define XDP_RSS_TYPE_L3			XDP_RSS_TYPE_NONE
+#define XDP_RSS_TYPE_L4			XDP_RSS_L4
+
+#define LIBIE_RX_PT(oip, ofrag, tun, tp, tefr, iprot, pl) {		\
+	.outer_ip	 = LIBETH_RX_PT_OUTER_##oip,			\
+	.outer_frag	 = LIBETH_RX_PT_##ofrag,			\
+	.tunnel_type	 = LIBETH_RX_PT_TUNNEL_IP_##tun,		\
+	.tunnel_end_prot = LIBETH_RX_PT_TUNNEL_END_##tp,		\
+	.tunnel_end_frag = LIBETH_RX_PT_##tefr,				\
+	.inner_prot	 = LIBETH_RX_PT_INNER_##iprot,			\
+	.payload_layer	 = LIBETH_RX_PT_PAYLOAD_##pl,			\
+	.hash_type	 = XDP_RSS_L3_##oip |				\
+			   XDP_RSS_L4_##iprot |				\
+			   XDP_RSS_TYPE_##pl,				\
+}
+
+#define LIBIE_RX_PT_UNUSED	{ }
+
+#define __LIBIE_RX_PT_L2(iprot, pl)					\
+	LIBIE_RX_PT(L2, NOT_FRAG, NONE, NONE, NOT_FRAG, iprot, pl)
+#define LIBIE_RX_PT_L2		__LIBIE_RX_PT_L2(NONE, L2)
+#define LIBIE_RX_PT_TS		__LIBIE_RX_PT_L2(TIMESYNC, L2)
+#define LIBIE_RX_PT_L3		__LIBIE_RX_PT_L2(NONE, L3)
+
+#define LIBIE_RX_PT_IP_FRAG(oip)					\
+	LIBIE_RX_PT(IPV##oip, FRAG, NONE, NONE, NOT_FRAG, NONE, L3)
+#define LIBIE_RX_PT_IP_L3(oip, tun, teprot, tefr)			\
+	LIBIE_RX_PT(IPV##oip, NOT_FRAG, tun, teprot, tefr, NONE, L3)
+#define LIBIE_RX_PT_IP_L4(oip, tun, teprot, iprot)			\
+	LIBIE_RX_PT(IPV##oip, NOT_FRAG, tun, teprot, NOT_FRAG, iprot, L4)
+
+#define LIBIE_RX_PT_IP_NOF(oip, tun, ver)				\
+	LIBIE_RX_PT_IP_L3(oip, tun, ver, NOT_FRAG),			\
+	LIBIE_RX_PT_IP_L4(oip, tun, ver, UDP),				\
+	LIBIE_RX_PT_UNUSED,						\
+	LIBIE_RX_PT_IP_L4(oip, tun, ver, TCP),				\
+	LIBIE_RX_PT_IP_L4(oip, tun, ver, SCTP),				\
+	LIBIE_RX_PT_IP_L4(oip, tun, ver, ICMP)
+
+/* IPv oip --> tun --> IPv ver */
+#define LIBIE_RX_PT_IP_TUN_VER(oip, tun, ver)				\
+	LIBIE_RX_PT_IP_L3(oip, tun, ver, FRAG),				\
+	LIBIE_RX_PT_IP_NOF(oip, tun, ver)
+
+/* Non Tunneled IPv oip */
+#define LIBIE_RX_PT_IP_RAW(oip)						\
+	LIBIE_RX_PT_IP_FRAG(oip),					\
+	LIBIE_RX_PT_IP_NOF(oip, NONE, NONE)
+
+/* IPv oip --> tun --> { IPv4, IPv6 } */
+#define LIBIE_RX_PT_IP_TUN(oip, tun)					\
+	LIBIE_RX_PT_IP_TUN_VER(oip, tun, IPV4),				\
+	LIBIE_RX_PT_IP_TUN_VER(oip, tun, IPV6)
+
+/* IPv oip --> GRE/NAT tun --> { x, IPv4, IPv6 } */
+#define LIBIE_RX_PT_IP_GRE(oip, tun)					\
+	LIBIE_RX_PT_IP_L3(oip, tun, NONE, NOT_FRAG),			\
+	LIBIE_RX_PT_IP_TUN(oip, tun)
+
+/* Non Tunneled IPv oip
+ * IPv oip --> { IPv4, IPv6 }
+ * IPv oip --> GRE/NAT --> { x, IPv4, IPv6 }
+ * IPv oip --> GRE/NAT --> MAC --> { x, IPv4, IPv6 }
+ * IPv oip --> GRE/NAT --> MAC/VLAN --> { x, IPv4, IPv6 }
+ */
+#define LIBIE_RX_PT_IP(oip)						\
+	LIBIE_RX_PT_IP_RAW(oip),					\
+	LIBIE_RX_PT_IP_TUN(oip, IP),					\
+	LIBIE_RX_PT_IP_GRE(oip, GRENAT),				\
+	LIBIE_RX_PT_IP_GRE(oip, GRENAT_MAC),				\
+	LIBIE_RX_PT_IP_GRE(oip, GRENAT_MAC_VLAN)
+
+/* Lookup table mapping for O(1) parsing */
+const struct libeth_rpt libie_rx_pt_lut[LIBIE_RX_PT_NUM] = {
+	/* L2 packet types */
+	LIBIE_RX_PT_UNUSED,
+	LIBIE_RX_PT_L2,
+	LIBIE_RX_PT_TS,
+	LIBIE_RX_PT_L2,
+	LIBIE_RX_PT_UNUSED,
+	LIBIE_RX_PT_UNUSED,
+	LIBIE_RX_PT_L2,
+	LIBIE_RX_PT_L2,
+	LIBIE_RX_PT_UNUSED,
+	LIBIE_RX_PT_UNUSED,
+	LIBIE_RX_PT_L2,
+	LIBIE_RX_PT_UNUSED,
+
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+	LIBIE_RX_PT_L3,
+
+	LIBIE_RX_PT_IP(4),
+	LIBIE_RX_PT_IP(6),
+};
+EXPORT_SYMBOL_NS_GPL(libie_rx_pt_lut, LIBIE);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION("Intel(R) Ethernet common library");
+MODULE_IMPORT_NS(LIBETH);
+MODULE_LICENSE("GPL");
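For readers following along outside the tree: the whole mechanism above boils down to a constant, macro-built table indexed by the raw hardware ptype that returns a small struct by value, so one array load replaces the old decode_rx_desc_ptype() chains of conditionals. The following is a minimal, self-contained userspace sketch of that pattern, not the kernel API: struct rpt, pt_lut and parse_pt() are made-up illustrative names, and only the example ptype indices (23, 26, 92) are taken from the iavf table being deleted above.

	#include <stdint.h>
	#include <stdio.h>

	enum outer_ip   { OUTER_L2, OUTER_IPV4, OUTER_IPV6 };
	enum inner_prot { INNER_NONE, INNER_UDP, INNER_TCP };
	enum payload    { PAYLOAD_NONE, PAYLOAD_L2, PAYLOAD_L3, PAYLOAD_L4 };

	/* The parsed packet type: small enough to be returned by value */
	struct rpt {
		unsigned int outer_ip	: 2;
		unsigned int inner_prot	: 2;
		unsigned int payload	: 2;
	};

	/* Builds one table row, mirroring LIBIE_RX_PT() above */
	#define PT(oip, iprot, pl)					\
		{ .outer_ip = OUTER_##oip, .inner_prot = INNER_##iprot,	\
		  .payload = PAYLOAD_##pl }

	/* Indexed directly by the 8-bit HW ptype */
	static const struct rpt pt_lut[256] = {
		[23] = PT(IPV4, NONE, L3),	/* non-tunneled IPv4, no L4 */
		[26] = PT(IPV4, TCP,  L4),	/* non-tunneled IPv4/TCP */
		[92] = PT(IPV6, TCP,  L4),	/* non-tunneled IPv6/TCP */
		/* unlisted entries stay all-zero, like LIBIE_RX_PT_UNUSED */
	};

	static struct rpt parse_pt(uint8_t ptype)
	{
		/* one array load, no branches: the whole point of the LUT */
		return pt_lut[ptype];
	}

	int main(void)
	{
		struct rpt pt = parse_pt(26);

		printf("outer_ip=%u inner_prot=%u payload=%u\n",
		       pt.outer_ip, pt.inner_prot, pt.payload);
		return 0;
	}

The kernel version goes one step further and precomputes the XDP RSS hash type into the same struct at table-build time, as the .hash_type initializer in LIBIE_RX_PT() shows.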
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13618010 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 19B83CD1284 for ; Thu, 4 Apr 2024 15:46:01 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 9968F6B008A; Thu, 4 Apr 2024 11:46:00 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 946256B008C; Thu, 4 Apr 2024 11:46:00 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 798846B0092; Thu, 4 Apr 2024 11:46:00 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 54A226B008A for ; Thu, 4 Apr 2024 11:46:00 -0400 (EDT) Received: from smtpin17.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id DB15CA1693 for ; Thu, 4 Apr 2024 15:45:59 +0000 (UTC) X-FDA: 81972275238.17.4C802BB Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.12]) by imf24.hostedemail.com (Postfix) with ESMTP id 72C4C18001F for ; Thu, 4 Apr 2024 15:45:57 +0000 (UTC) Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=ETGh4a8g; dmarc=pass (policy=none) header.from=intel.com; spf=pass (imf24.hostedemail.com: domain of aleksander.lobakin@intel.com designates 192.198.163.12 as permitted sender) smtp.mailfrom=aleksander.lobakin@intel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1712245557; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=AcW7lHOVLw29NS4MZQOfNHWZKf5uBHTyIohKYhtSmD0=; b=hER8R7dd7ItucB96N03WB5lOHXVU8fr4wXKxBCZWRmeBEgf1go2lC+dQxlH3UDqDzZYrzq 49n62Qt907Gw+4FzhnP1z9qtNVGfYU9CI/4qxpJ4lalOq7PLVrC19zJlVHye5E7Vq+yIeh iZlto366PqOs3FYHgqE9IsLEuiVaos0= ARC-Authentication-Results: i=1; imf24.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=ETGh4a8g; dmarc=pass (policy=none) header.from=intel.com; spf=pass (imf24.hostedemail.com: domain of aleksander.lobakin@intel.com designates 192.198.163.12 as permitted sender) smtp.mailfrom=aleksander.lobakin@intel.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1712245557; a=rsa-sha256; cv=none; b=TZ/6HqN7nZd9XSVC6pg4kymj9gB84QC2YNDocEHY7sgS9VSZ1BXGfuZGkV0hgJP75bMyhd zEHUhSjzm684hWclLt7lxveAsXM4c9B3jteIBQpPUUplR11R4wKtGennZuHnUuNO7rtSlB ijzKK1211pGxuGkJ8F/toIjiM0TYiDo= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1712245558; x=1743781558; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=4jjZJe/xv1xbVE2ufNuwqFZjzSnuNayRCnOUSj3kMWw=; b=ETGh4a8ge6wtr08U1J/ehgxVotUaLVAtLpdNV8tlCm6EFsXtrrnJBS9q 0d2XirMUpTnLGCX8tEEgDDFDOXtPlJz8aCRbhxorgJdI1p0QEeGccayDD UEh8YmhF328gPKS408PU1I5blBD4inEHNsNUs24xSp4ihGJrjlxcr605c tIfZhkqYF4xEDyKkRoFAgidG/CyzBWAz8yK9HqG1X3M/VEOEbluES/SK2 7EcRo1InVwqhycmle/94Edf7xl3hwHt0fHGEl/HXchymShOjeBIwZeDjT OHPtz+LrXYrMgVEr3yr8xDE66yiizHFntUiPzS3Qg09H2AMGPy1htg2FJ A==; X-CSE-ConnectionGUID: IBypyRX8QFikPFMpbTuy3A== 
From: Alexander Lobakin
To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni
Cc: Alexander Lobakin , Alexander Duyck , Yunsheng Lin , Jesper Dangaard Brouer , Ilias Apalodimas , Christoph Lameter , Vlastimil Babka , Andrew Morton , nex.sw.ncis.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v9 2/9] iavf: kill "legacy-rx" for good
Date: Thu, 4 Apr 2024 17:43:55 +0200
Message-ID: <20240404154402.3581254-3-aleksander.lobakin@intel.com>
In-Reply-To: <20240404154402.3581254-1-aleksander.lobakin@intel.com>
References: <20240404154402.3581254-1-aleksander.lobakin@intel.com>
MIME-Version: 1.0

Ever since build_skb() became stable, the old way of allocating an skb
to store the headers separately, which are then copied in manually, has
been slower, less flexible, and thus obsolete:
* It had higher pressure on MM since it actually allocates new pages,
  which then get split and refcount-biased (NAPI page cache);
* It implies memcpy() of packet headers (40+ bytes per frame);
* The actual header length was calculated via eth_get_headlen(), which
  invokes Flow Dissector and thus wastes a bunch of CPU cycles;
* XDP makes it even weirder, since it has required headroom for a long
  time and now also tailroom (since XDP multi-buffer landed). Take a
  look at the ice driver, which is built around work-arounds to make
  XDP work with it.

Even on some quite low-end hardware (not a common case for 100G NICs)
it was performing worse. The only advantage "legacy-rx" had was that it
didn't require any reserved headroom and tailroom, but iavf didn't use
this, as it always splits pages into two halves of 2k, while that
saving would only be useful when striding. And again, XDP effectively
removes that sole pro.

There's a train of features to land in IAVF soon: Page Pool, XDP, XSk,
multi-buffer etc. Each new one would require adding more and more Danse
Macabre for absolutely no reason, besides making the hotpath less and
less effective.

Remove the "feature" with all the related code. This includes at least
one very hot branch (typically hit on each new frame), which was either
always-true or always-false at least for a complete NAPI bulk of 64
frames, the whole private flags cruft, and so on. Some stats:

Function: add/remove: 0/4 grow/shrink: 0/7 up/down: 0/-721 (-721)
RO Data: add/remove: 0/1 grow/shrink: 0/0 up/down: 0/-40 (-40)

Reviewed-by: Alexander Duyck
Signed-off-by: Alexander Lobakin
Reviewed-by: Przemek Kitszel
---
 drivers/net/ethernet/intel/iavf/iavf.h        |   2 +-
 drivers/net/ethernet/intel/iavf/iavf_txrx.h   |  27 +---
 .../net/ethernet/intel/iavf/iavf_ethtool.c    | 140 ------------------
 drivers/net/ethernet/intel/iavf/iavf_main.c   |  10 +-
 drivers/net/ethernet/intel/iavf/iavf_txrx.c   |  82 +---------
 .../net/ethernet/intel/iavf/iavf_virtchnl.c   |   3 +-
 6 files changed, 8 insertions(+), 256 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
index db8188c7ac4b..23a6557fc3db 100644
--- a/drivers/net/ethernet/intel/iavf/iavf.h
+++ b/drivers/net/ethernet/intel/iavf/iavf.h
@@ -287,7 +287,7 @@ struct iavf_adapter {
 #define IAVF_FLAG_RESET_PENDING		BIT(4)
 #define IAVF_FLAG_RESET_NEEDED		BIT(5)
 #define IAVF_FLAG_WB_ON_ITR_CAPABLE	BIT(6)
-#define IAVF_FLAG_LEGACY_RX		BIT(15)
+/* BIT(15) is free, was IAVF_FLAG_LEGACY_RX */
 #define IAVF_FLAG_REINIT_ITR_NEEDED	BIT(16)
 #define IAVF_FLAG_QUEUES_DISABLED	BIT(17)
 #define IAVF_FLAG_SETUP_NETDEV_FEATURES	BIT(18)
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index 10ba36602c0c..68543efdd29b 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -81,20 +81,11 @@ enum iavf_dyn_idx_t {
 	 BIT_ULL(IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP))
 
 /* Supported Rx Buffer Sizes (a multiple of 128) */
-#define IAVF_RXBUFFER_256	256
 #define IAVF_RXBUFFER_1536	1536  /* 128B aligned standard Ethernet frame */
 #define IAVF_RXBUFFER_2048	2048
 #define IAVF_RXBUFFER_3072	3072  /* Used for large frames w/ padding */
 #define IAVF_MAX_RXBUFFER	9728  /* largest size for single descriptor */
 
-/* NOTE: netdev_alloc_skb reserves up to 64 bytes, NET_IP_ALIGN means we
- * reserve 2 more, and skb_shared_info adds an additional 384 bytes more,
- * this adds up to 512 bytes of extra data meaning the smallest allocation
- * we could have is 1K.
- * i.e.
RXBUFFER_256 --> 960 byte skb (size-1024 slab) - * i.e. RXBUFFER_512 --> 1216 byte skb (size-2048 slab) - */ -#define IAVF_RX_HDR_SIZE IAVF_RXBUFFER_256 #define IAVF_PACKET_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2)) #define iavf_rx_desc iavf_32byte_rx_desc @@ -361,7 +352,8 @@ struct iavf_ring { u16 flags; #define IAVF_TXR_FLAGS_WB_ON_ITR BIT(0) -#define IAVF_RXR_FLAGS_BUILD_SKB_ENABLED BIT(1) +/* BIT(1) is free, was IAVF_RXR_FLAGS_BUILD_SKB_ENABLED */ +/* BIT(2) is free */ #define IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(3) #define IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(4) #define IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2 BIT(5) @@ -392,21 +384,6 @@ struct iavf_ring { */ } ____cacheline_internodealigned_in_smp; -static inline bool ring_uses_build_skb(struct iavf_ring *ring) -{ - return !!(ring->flags & IAVF_RXR_FLAGS_BUILD_SKB_ENABLED); -} - -static inline void set_ring_build_skb_enabled(struct iavf_ring *ring) -{ - ring->flags |= IAVF_RXR_FLAGS_BUILD_SKB_ENABLED; -} - -static inline void clear_ring_build_skb_enabled(struct iavf_ring *ring) -{ - ring->flags &= ~IAVF_RXR_FLAGS_BUILD_SKB_ENABLED; -} - #define IAVF_ITR_ADAPTIVE_MIN_INC 0x0002 #define IAVF_ITR_ADAPTIVE_MIN_USECS 0x0002 #define IAVF_ITR_ADAPTIVE_MAX_USECS 0x007e diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c index 378c3e9ddf9d..52273f7eab2c 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c +++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c @@ -240,29 +240,6 @@ static const struct iavf_stats iavf_gstrings_stats[] = { #define IAVF_QUEUE_STATS_LEN ARRAY_SIZE(iavf_gstrings_queue_stats) -/* For now we have one and only one private flag and it is only defined - * when we have support for the SKIP_CPU_SYNC DMA attribute. Instead - * of leaving all this code sitting around empty we will strip it unless - * our one private flag is actually available. 
- */ -struct iavf_priv_flags { - char flag_string[ETH_GSTRING_LEN]; - u32 flag; - bool read_only; -}; - -#define IAVF_PRIV_FLAG(_name, _flag, _read_only) { \ - .flag_string = _name, \ - .flag = _flag, \ - .read_only = _read_only, \ -} - -static const struct iavf_priv_flags iavf_gstrings_priv_flags[] = { - IAVF_PRIV_FLAG("legacy-rx", IAVF_FLAG_LEGACY_RX, 0), -}; - -#define IAVF_PRIV_FLAGS_STR_LEN ARRAY_SIZE(iavf_gstrings_priv_flags) - /** * iavf_get_link_ksettings - Get Link Speed and Duplex settings * @netdev: network interface device structure @@ -342,8 +319,6 @@ static int iavf_get_sset_count(struct net_device *netdev, int sset) return IAVF_STATS_LEN + (IAVF_QUEUE_STATS_LEN * 2 * netdev->real_num_tx_queues); - else if (sset == ETH_SS_PRIV_FLAGS) - return IAVF_PRIV_FLAGS_STR_LEN; else return -EINVAL; } @@ -385,21 +360,6 @@ static void iavf_get_ethtool_stats(struct net_device *netdev, rcu_read_unlock(); } -/** - * iavf_get_priv_flag_strings - Get private flag strings - * @netdev: network interface device structure - * @data: buffer for string data - * - * Builds the private flags string table - **/ -static void iavf_get_priv_flag_strings(struct net_device *netdev, u8 *data) -{ - unsigned int i; - - for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) - ethtool_puts(&data, iavf_gstrings_priv_flags[i].flag_string); -} - /** * iavf_get_stat_strings - Get stat strings * @netdev: network interface device structure @@ -438,108 +398,11 @@ static void iavf_get_strings(struct net_device *netdev, u32 sset, u8 *data) case ETH_SS_STATS: iavf_get_stat_strings(netdev, data); break; - case ETH_SS_PRIV_FLAGS: - iavf_get_priv_flag_strings(netdev, data); - break; default: break; } } -/** - * iavf_get_priv_flags - report device private flags - * @netdev: network interface device structure - * - * The get string set count and the string set should be matched for each - * flag returned. Add new strings for each flag to the iavf_gstrings_priv_flags - * array. - * - * Returns a u32 bitmap of flags. - **/ -static u32 iavf_get_priv_flags(struct net_device *netdev) -{ - struct iavf_adapter *adapter = netdev_priv(netdev); - u32 i, ret_flags = 0; - - for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) { - const struct iavf_priv_flags *priv_flags; - - priv_flags = &iavf_gstrings_priv_flags[i]; - - if (priv_flags->flag & adapter->flags) - ret_flags |= BIT(i); - } - - return ret_flags; -} - -/** - * iavf_set_priv_flags - set private flags - * @netdev: network interface device structure - * @flags: bit flags to be set - **/ -static int iavf_set_priv_flags(struct net_device *netdev, u32 flags) -{ - struct iavf_adapter *adapter = netdev_priv(netdev); - u32 orig_flags, new_flags, changed_flags; - int ret = 0; - u32 i; - - orig_flags = READ_ONCE(adapter->flags); - new_flags = orig_flags; - - for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) { - const struct iavf_priv_flags *priv_flags; - - priv_flags = &iavf_gstrings_priv_flags[i]; - - if (flags & BIT(i)) - new_flags |= priv_flags->flag; - else - new_flags &= ~(priv_flags->flag); - - if (priv_flags->read_only && - ((orig_flags ^ new_flags) & ~BIT(i))) - return -EOPNOTSUPP; - } - - /* Before we finalize any flag changes, any checks which we need to - * perform to determine if the new flags will be supported should go - * here... - */ - - /* Compare and exchange the new flags into place. If we failed, that - * is if cmpxchg returns anything but the old value, this means - * something else must have modified the flags variable since we - * copied it. 
We'll just punt with an error and log something in the - * message buffer. - */ - if (cmpxchg(&adapter->flags, orig_flags, new_flags) != orig_flags) { - dev_warn(&adapter->pdev->dev, - "Unable to update adapter->flags as it was modified by another thread...\n"); - return -EAGAIN; - } - - changed_flags = orig_flags ^ new_flags; - - /* Process any additional changes needed as a result of flag changes. - * The changed_flags value reflects the list of bits that were changed - * in the code above. - */ - - /* issue a reset to force legacy-rx change to take effect */ - if (changed_flags & IAVF_FLAG_LEGACY_RX) { - if (netif_running(netdev)) { - iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED); - ret = iavf_wait_for_reset(adapter); - if (ret) - netdev_warn(netdev, "Changing private flags timeout or interrupted waiting for reset"); - } - } - - return ret; -} - /** * iavf_get_msglevel - Get debug message level * @netdev: network interface device structure @@ -585,7 +448,6 @@ static void iavf_get_drvinfo(struct net_device *netdev, strscpy(drvinfo->driver, iavf_driver_name, 32); strscpy(drvinfo->fw_version, "N/A", 4); strscpy(drvinfo->bus_info, pci_name(adapter->pdev), 32); - drvinfo->n_priv_flags = IAVF_PRIV_FLAGS_STR_LEN; } /** @@ -1995,8 +1857,6 @@ static const struct ethtool_ops iavf_ethtool_ops = { .get_strings = iavf_get_strings, .get_ethtool_stats = iavf_get_ethtool_stats, .get_sset_count = iavf_get_sset_count, - .get_priv_flags = iavf_get_priv_flags, - .set_priv_flags = iavf_set_priv_flags, .get_msglevel = iavf_get_msglevel, .set_msglevel = iavf_set_msglevel, .get_coalesce = iavf_get_coalesce, diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index d6cbe5022815..5eb7379956e4 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -719,9 +719,7 @@ static void iavf_configure_rx(struct iavf_adapter *adapter) struct iavf_hw *hw = &adapter->hw; int i; - /* Legacy Rx will always default to a 2048 buffer size. */ -#if (PAGE_SIZE < 8192) - if (!(adapter->flags & IAVF_FLAG_LEGACY_RX)) { + if (PAGE_SIZE < 8192) { struct net_device *netdev = adapter->netdev; /* For jumbo frames on systems with 4K pages we have to use @@ -738,16 +736,10 @@ static void iavf_configure_rx(struct iavf_adapter *adapter) (netdev->mtu <= ETH_DATA_LEN)) rx_buf_len = IAVF_RXBUFFER_1536 - NET_IP_ALIGN; } -#endif for (i = 0; i < adapter->num_active_queues; i++) { adapter->rx_rings[i].tail = hw->hw_addr + IAVF_QRX_TAIL1(i); adapter->rx_rings[i].rx_buf_len = rx_buf_len; - - if (adapter->flags & IAVF_FLAG_LEGACY_RX) - clear_ring_build_skb_enabled(&adapter->rx_rings[i]); - else - set_ring_build_skb_enabled(&adapter->rx_rings[i]); } } diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index ab8420719a9a..4b61675b4548 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -824,17 +824,6 @@ static void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val) writel(val, rx_ring->tail); } -/** - * iavf_rx_offset - Return expected offset into page to access data - * @rx_ring: Ring we are requesting offset of - * - * Returns the offset value for ring into the data buffer. - */ -static unsigned int iavf_rx_offset(struct iavf_ring *rx_ring) -{ - return ring_uses_build_skb(rx_ring) ? 
IAVF_SKB_PAD : 0; -} - /** * iavf_alloc_mapped_page - recycle or make a new page * @rx_ring: ring to use @@ -879,7 +868,7 @@ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring, bi->dma = dma; bi->page = page; - bi->page_offset = iavf_rx_offset(rx_ring); + bi->page_offset = IAVF_SKB_PAD; /* initialize pagecnt_bias to 1 representing we fully own page */ bi->pagecnt_bias = 1; @@ -1218,7 +1207,7 @@ static void iavf_add_rx_frag(struct iavf_ring *rx_ring, #if (PAGE_SIZE < 8192) unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2; #else - unsigned int truesize = SKB_DATA_ALIGN(size + iavf_rx_offset(rx_ring)); + unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD); #endif if (!size) @@ -1266,69 +1255,6 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring, return rx_buffer; } -/** - * iavf_construct_skb - Allocate skb and populate it - * @rx_ring: rx descriptor ring to transact packets on - * @rx_buffer: rx buffer to pull data from - * @size: size of buffer to add to skb - * - * This function allocates an skb. It then populates it with the page - * data from the current receive descriptor, taking care to set up the - * skb correctly. - */ -static struct sk_buff *iavf_construct_skb(struct iavf_ring *rx_ring, - struct iavf_rx_buffer *rx_buffer, - unsigned int size) -{ - void *va; -#if (PAGE_SIZE < 8192) - unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2; -#else - unsigned int truesize = SKB_DATA_ALIGN(size); -#endif - unsigned int headlen; - struct sk_buff *skb; - - if (!rx_buffer) - return NULL; - /* prefetch first cache line of first page */ - va = page_address(rx_buffer->page) + rx_buffer->page_offset; - net_prefetch(va); - - /* allocate a skb to store the frags */ - skb = napi_alloc_skb(&rx_ring->q_vector->napi, IAVF_RX_HDR_SIZE); - if (unlikely(!skb)) - return NULL; - - /* Determine available headroom for copy */ - headlen = size; - if (headlen > IAVF_RX_HDR_SIZE) - headlen = eth_get_headlen(skb->dev, va, IAVF_RX_HDR_SIZE); - - /* align pull length to size of long to optimize memcpy performance */ - memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long))); - - /* update all of the pointers */ - size -= headlen; - if (size) { - skb_add_rx_frag(skb, 0, rx_buffer->page, - rx_buffer->page_offset + headlen, - size, truesize); - - /* buffer is used by skb, update page_offset */ -#if (PAGE_SIZE < 8192) - rx_buffer->page_offset ^= truesize; -#else - rx_buffer->page_offset += truesize; -#endif - } else { - /* buffer is unused, reset bias back to rx_buffer */ - rx_buffer->pagecnt_bias++; - } - - return skb; -} - /** * iavf_build_skb - Build skb around an existing buffer * @rx_ring: Rx descriptor ring to transact packets on @@ -1500,10 +1426,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) /* retrieve a buffer from the ring */ if (skb) iavf_add_rx_frag(rx_ring, rx_buffer, skb, size); - else if (ring_uses_build_skb(rx_ring)) - skb = iavf_build_skb(rx_ring, rx_buffer, size); else - skb = iavf_construct_skb(rx_ring, rx_buffer, size); + skb = iavf_build_skb(rx_ring, rx_buffer, size); /* exit if we failed to retrieve a buffer */ if (!skb) { diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index 22f2df7c460b..a31df5af0473 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -289,8 +289,7 @@ void iavf_configure_queues(struct iavf_adapter *adapter) return; /* Limit maximum frame size when jumbo frames is not 
enabled */
-	if (!(adapter->flags & IAVF_FLAG_LEGACY_RX) &&
-	    (adapter->netdev->mtu <= ETH_DATA_LEN))
+	if (adapter->netdev->mtu <= ETH_DATA_LEN)
 		max_frame = IAVF_RXBUFFER_1536 - NET_IP_ALIGN;
 
 	vqci->vsi_id = adapter->vsi_res->vsi_id;

From patchwork Thu Apr 4 15:43:56 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13618012
From: Alexander Lobakin
To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni
Cc: Alexander Lobakin , Alexander Duyck , Yunsheng Lin , Jesper Dangaard Brouer , Ilias Apalodimas , Christoph Lameter , Vlastimil Babka , Andrew Morton , nex.sw.ncis.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v9 3/9] iavf: drop page splitting and recycling
Date: Thu, 4 Apr 2024 17:43:56 +0200
Message-ID: <20240404154402.3581254-4-aleksander.lobakin@intel.com>
In-Reply-To: <20240404154402.3581254-1-aleksander.lobakin@intel.com>
References: <20240404154402.3581254-1-aleksander.lobakin@intel.com>
MIME-Version: 1.0

As an intermediate step, remove all page splitting/recycling code. Just
always allocate a new page and don't touch its refcount, so that it gets
freed by the core stack later. The same goes for the "in-place"
recycling, i.e. when an unused buffer gets assigned to the first
descriptor that needs refilling.
In some cases, that recycling was leading to moving up to 63 &iavf_rx_buf
structures around the ring on a per-field basis -- not something you want
on the hotpath. The change greatly simplifies certain parts of the code:

Function: add/remove: 0/2 grow/shrink: 0/7 up/down: 0/-744 (-744)

Although the array of &iavf_rx_buf is barely used now and could be replaced
with just a page pointer array, leave it untouched for now so as not to
complicate replacing it with the libeth Rx buffer struct later on.
Unsurprisingly, perf loses up to 30% here, but that regression will go away
once PP lands.
Note that the iavf_rx_pg_*() definitions are left in place to reduce the
diffstat; they will be removed with the conversion to Page Pool.

Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/iavf/iavf_txrx.h     |  65 --------
 drivers/net/ethernet/intel/iavf/iavf_type.h     |   2 -
 drivers/net/ethernet/intel/iavf/iavf_main.c     |  24 +--
 drivers/net/ethernet/intel/iavf/iavf_txrx.c     | 152 +-----------------
 .../net/ethernet/intel/iavf/iavf_virtchnl.c     |   8 +-
 5 files changed, 10 insertions(+), 241 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index 68543efdd29b..e01777531635 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -81,8 +81,6 @@ enum iavf_dyn_idx_t {
 	  BIT_ULL(IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP))

 /* Supported Rx Buffer Sizes (a multiple of 128) */
-#define IAVF_RXBUFFER_1536	1536  /* 128B aligned standard Ethernet frame */
-#define IAVF_RXBUFFER_2048	2048
 #define IAVF_RXBUFFER_3072	3072  /* Used for large frames w/ padding */
 #define IAVF_MAX_RXBUFFER	9728  /* largest size for single descriptor */

@@ -92,57 +90,7 @@ enum iavf_dyn_idx_t {
 #define IAVF_RX_DMA_ATTR \
 	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)

-/* Attempt to maximize the headroom available for incoming frames. We
- * use a 2K buffer for receives and need 1536/1534 to store the data for
- * the frame. This leaves us with 512 bytes of room. From that we need
- * to deduct the space needed for the shared info and the padding needed
- * to IP align the frame.
- *
- * Note: For cache line sizes 256 or larger this value is going to end
- * up negative. In these cases we should fall back to the legacy
- * receive path.
- */
-#if (PAGE_SIZE < 8192)
-#define IAVF_2K_TOO_SMALL_WITH_PADDING \
-((NET_SKB_PAD + IAVF_RXBUFFER_1536) > SKB_WITH_OVERHEAD(IAVF_RXBUFFER_2048))
-
-static inline int iavf_compute_pad(int rx_buf_len)
-{
-	int page_size, pad_size;
-
-	page_size = ALIGN(rx_buf_len, PAGE_SIZE / 2);
-	pad_size = SKB_WITH_OVERHEAD(page_size) - rx_buf_len;
-
-	return pad_size;
-}
-
-static inline int iavf_skb_pad(void)
-{
-	int rx_buf_len;
-
-	/* If a 2K buffer cannot handle a standard Ethernet frame then
-	 * optimize padding for a 3K buffer instead of a 1.5K buffer.
-	 *
-	 * For a 3K buffer we need to add enough padding to allow for
-	 * tailroom due to NET_IP_ALIGN possibly shifting us out of
-	 * cache-line alignment.
-	 */
-	if (IAVF_2K_TOO_SMALL_WITH_PADDING)
-		rx_buf_len = IAVF_RXBUFFER_3072 + SKB_DATA_ALIGN(NET_IP_ALIGN);
-	else
-		rx_buf_len = IAVF_RXBUFFER_1536;
-
-	/* if needed make room for NET_IP_ALIGN */
-	rx_buf_len -= NET_IP_ALIGN;
-
-	return iavf_compute_pad(rx_buf_len);
-}
-
-#define IAVF_SKB_PAD iavf_skb_pad()
-#else
-#define IAVF_2K_TOO_SMALL_WITH_PADDING false
 #define IAVF_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
-#endif

 /**
  * iavf_test_staterr - tests bits in Rx descriptor status and error fields
@@ -265,12 +213,7 @@ struct iavf_tx_buffer {
 struct iavf_rx_buffer {
 	dma_addr_t dma;
 	struct page *page;
-#if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536)
 	__u32 page_offset;
-#else
-	__u16 page_offset;
-#endif
-	__u16 pagecnt_bias;
 };

 struct iavf_queue_stats {
@@ -292,8 +235,6 @@ struct iavf_rx_queue_stats {
 	u64 non_eop_descs;
 	u64 alloc_page_failed;
 	u64 alloc_buff_failed;
-	u64 page_reuse_count;
-	u64 realloc_count;
 };

 enum iavf_ring_state_t {
@@ -337,7 +278,6 @@ struct iavf_ring {

 	u16 count;			/* Number of descriptors */
 	u16 reg_idx;			/* HW register index of the ring */
-	u16 rx_buf_len;

 	/* used in interrupt processing */
 	u16 next_to_use;
@@ -373,7 +313,6 @@ struct iavf_ring {
 	struct iavf_q_vector *q_vector;	/* Backreference to associated vector */

 	struct rcu_head rcu;		/* to avoid race on free */
-	u16 next_to_alloc;
 	struct sk_buff *skb;		/* When iavf_clean_rx_ring_irq() must
 					 * return before it sees the EOP for
 					 * the current packet, we save that skb
@@ -407,10 +346,6 @@ struct iavf_ring_container {

 static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring)
 {
-#if (PAGE_SIZE < 8192)
-	if (ring->rx_buf_len > (PAGE_SIZE / 2))
-		return 1;
-#endif
 	return 0;
 }

diff --git a/drivers/net/ethernet/intel/iavf/iavf_type.h b/drivers/net/ethernet/intel/iavf/iavf_type.h
index 23ded4fcd94f..f6b09e57abce 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_type.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_type.h
@@ -10,8 +10,6 @@
 #include "iavf_adminq.h"
 #include "iavf_devids.h"

-#define IAVF_RXQ_CTX_DBUFF_SHIFT 7
-
 /* IAVF_MASK is a macro used on 32 bit registers */
 #define IAVF_MASK(mask, shift) ((u32)(mask) << (shift))

diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 5eb7379956e4..ffb71a62b105 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -715,32 +715,10 @@ static void iavf_configure_tx(struct iavf_adapter *adapter)
 **/
 static void iavf_configure_rx(struct iavf_adapter *adapter)
 {
-	unsigned int rx_buf_len = IAVF_RXBUFFER_2048;
 	struct iavf_hw *hw = &adapter->hw;
-	int i;
-
-	if (PAGE_SIZE < 8192) {
-		struct net_device *netdev = adapter->netdev;
-
-		/* For jumbo frames on systems with 4K pages we have to use
-		 * an order 1 page, so we might as well increase the size
-		 * of our Rx buffer to make better use of the available space
-		 */
-		rx_buf_len = IAVF_RXBUFFER_3072;
-
-		/* We use a 1536 buffer size for configurations with
-		 * standard Ethernet mtu. On x86 this gives us enough room
-		 * for shared info and 192 bytes of padding.
-		 */
-		if (!IAVF_2K_TOO_SMALL_WITH_PADDING &&
-		    (netdev->mtu <= ETH_DATA_LEN))
-			rx_buf_len = IAVF_RXBUFFER_1536 - NET_IP_ALIGN;
-	}
-
-	for (i = 0; i < adapter->num_active_queues; i++) {
+	for (u32 i = 0; i < adapter->num_active_queues; i++)
 		adapter->rx_rings[i].tail = hw->hw_addr + IAVF_QRX_TAIL1(i);
-		adapter->rx_rings[i].rx_buf_len = rx_buf_len;
-	}
 }

 /**
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 4b61675b4548..a14f7f211150 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -715,7 +715,7 @@ static void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
 		dma_sync_single_range_for_cpu(rx_ring->dev,
 					      rx_bi->dma,
 					      rx_bi->page_offset,
-					      rx_ring->rx_buf_len,
+					      IAVF_RXBUFFER_3072,
 					      DMA_FROM_DEVICE);

 		/* free resources associated with mapping */
@@ -724,7 +724,7 @@ static void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
 				     DMA_FROM_DEVICE,
 				     IAVF_RX_DMA_ATTR);

-		__page_frag_cache_drain(rx_bi->page, rx_bi->pagecnt_bias);
+		__free_page(rx_bi->page);

 		rx_bi->page = NULL;
 		rx_bi->page_offset = 0;
@@ -736,7 +736,6 @@ static void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
 	/* Zero out the descriptor ring */
 	memset(rx_ring->desc, 0, rx_ring->size);

-	rx_ring->next_to_alloc = 0;
 	rx_ring->next_to_clean = 0;
 	rx_ring->next_to_use = 0;
 }
@@ -792,7 +791,6 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
 		goto err;
 	}

-	rx_ring->next_to_alloc = 0;
 	rx_ring->next_to_clean = 0;
 	rx_ring->next_to_use = 0;

@@ -812,9 +810,6 @@ static void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
 {
 	rx_ring->next_to_use = val;

-	/* update next to alloc since we have filled the ring */
-	rx_ring->next_to_alloc = val;
-
 	/* Force memory writes to complete before letting h/w
 	 * know there are new descriptors to fetch. (Only
 	 * applicable for weak-ordered memory model archs,
@@ -838,12 +833,6 @@ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring,
 	struct page *page = bi->page;
 	dma_addr_t dma;

-	/* since we are recycling buffers we should seldom need to alloc */
-	if (likely(page)) {
-		rx_ring->rx_stats.page_reuse_count++;
-		return true;
-	}
-
 	/* alloc new page for storage */
 	page = dev_alloc_pages(iavf_rx_pg_order(rx_ring));
 	if (unlikely(!page)) {
@@ -870,9 +859,6 @@ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring,
 	bi->page = page;
 	bi->page_offset = IAVF_SKB_PAD;

-	/* initialize pagecnt_bias to 1 representing we fully own page */
-	bi->pagecnt_bias = 1;
-
 	return true;
 }

@@ -924,7 +910,7 @@ bool iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u16 cleaned_count)
 		/* sync the buffer for use by the device */
 		dma_sync_single_range_for_device(rx_ring->dev, bi->dma,
 						 bi->page_offset,
-						 rx_ring->rx_buf_len,
+						 IAVF_RXBUFFER_3072,
 						 DMA_FROM_DEVICE);

 		/* Refresh the desc even if buffer_addrs didn't change
@@ -1102,91 +1088,6 @@ static bool iavf_cleanup_headers(struct iavf_ring *rx_ring, struct sk_buff *skb)
 	return false;
 }

-/**
- * iavf_reuse_rx_page - page flip buffer and store it back on the ring
- * @rx_ring: rx descriptor ring to store buffers on
- * @old_buff: donor buffer to have page reused
- *
- * Synchronizes page for reuse by the adapter
- **/
-static void iavf_reuse_rx_page(struct iavf_ring *rx_ring,
-			       struct iavf_rx_buffer *old_buff)
-{
-	struct iavf_rx_buffer *new_buff;
-	u16 nta = rx_ring->next_to_alloc;
-
-	new_buff = &rx_ring->rx_bi[nta];
-
-	/* update, and store next to alloc */
-	nta++;
-	rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
-
-	/* transfer page from old buffer to new buffer */
-	new_buff->dma = old_buff->dma;
-	new_buff->page = old_buff->page;
-	new_buff->page_offset = old_buff->page_offset;
-	new_buff->pagecnt_bias = old_buff->pagecnt_bias;
-}
-
-/**
- * iavf_can_reuse_rx_page - Determine if this page can be reused by
- * the adapter for another receive
- *
- * @rx_buffer: buffer containing the page
- *
- * If page is reusable, rx_buffer->page_offset is adjusted to point to
- * an unused region in the page.
- *
- * For small pages, @truesize will be a constant value, half the size
- * of the memory at page. We'll attempt to alternate between high and
- * low halves of the page, with one half ready for use by the hardware
- * and the other half being consumed by the stack. We use the page
- * ref count to determine whether the stack has finished consuming the
- * portion of this page that was passed up with a previous packet. If
- * the page ref count is >1, we'll assume the "other" half page is
- * still busy, and this page cannot be reused.
- *
- * For larger pages, @truesize will be the actual space used by the
- * received packet (adjusted upward to an even multiple of the cache
- * line size). This will advance through the page by the amount
- * actually consumed by the received packets while there is still
- * space for a buffer. Each region of larger pages will be used at
- * most once, after which the page will not be reused.
- *
- * In either case, if the page is reusable its refcount is increased.
- **/
-static bool iavf_can_reuse_rx_page(struct iavf_rx_buffer *rx_buffer)
-{
-	unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
-	struct page *page = rx_buffer->page;
-
-	/* Is any reuse possible? */
-	if (!dev_page_is_reusable(page))
-		return false;
-
-#if (PAGE_SIZE < 8192)
-	/* if we are only owner of page we can reuse it */
-	if (unlikely((page_count(page) - pagecnt_bias) > 1))
-		return false;
-#else
-#define IAVF_LAST_OFFSET \
-	(SKB_WITH_OVERHEAD(PAGE_SIZE) - IAVF_RXBUFFER_2048)
-	if (rx_buffer->page_offset > IAVF_LAST_OFFSET)
-		return false;
-#endif
-
-	/* If we have drained the page fragment pool we need to update
-	 * the pagecnt_bias and page count so that we fully restock the
-	 * number of references the driver holds.
-	 */
-	if (unlikely(!pagecnt_bias)) {
-		page_ref_add(page, USHRT_MAX);
-		rx_buffer->pagecnt_bias = USHRT_MAX;
-	}
-
-	return true;
-}
-
 /**
  * iavf_add_rx_frag - Add contents of Rx buffer to sk_buff
  * @rx_ring: rx descriptor ring to transact packets on
@@ -1204,24 +1105,13 @@ static void iavf_add_rx_frag(struct iavf_ring *rx_ring,
 			     struct sk_buff *skb,
 			     unsigned int size)
 {
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
-#else
 	unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD);
-#endif

 	if (!size)
 		return;

 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
 			rx_buffer->page_offset, size, truesize);
-
-	/* page is being used so we must update the page offset */
-#if (PAGE_SIZE < 8192)
-	rx_buffer->page_offset ^= truesize;
-#else
-	rx_buffer->page_offset += truesize;
-#endif
 }

 /**
@@ -1249,9 +1139,6 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring,
 				      size,
 				      DMA_FROM_DEVICE);

-	/* We have pulled a buffer for use, so decrement pagecnt_bias */
-	rx_buffer->pagecnt_bias--;
-
 	return rx_buffer;
 }

@@ -1269,12 +1156,8 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
 				      unsigned int size)
 {
 	void *va;
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
-#else
 	unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
 				SKB_DATA_ALIGN(IAVF_SKB_PAD + size);
-#endif
 	struct sk_buff *skb;

 	if (!rx_buffer || !size)
@@ -1292,23 +1175,15 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
 	skb_reserve(skb, IAVF_SKB_PAD);
 	__skb_put(skb, size);

-	/* buffer is used by skb, update page_offset */
-#if (PAGE_SIZE < 8192)
-	rx_buffer->page_offset ^= truesize;
-#else
-	rx_buffer->page_offset += truesize;
-#endif
-
 	return skb;
 }

 /**
- * iavf_put_rx_buffer - Clean up used buffer and either recycle or free
+ * iavf_put_rx_buffer - Unmap used buffer
  * @rx_ring: rx descriptor ring to transact packets on
  * @rx_buffer: rx buffer to pull data from
  *
- * This function will clean up the contents of the rx_buffer. It will
- * either recycle the buffer or unmap it and free the associated resources.
+ * This function will unmap the buffer after it's written by HW.
  */
 static void iavf_put_rx_buffer(struct iavf_ring *rx_ring,
			       struct iavf_rx_buffer *rx_buffer)
@@ -1316,18 +1191,9 @@ static void iavf_put_rx_buffer(struct iavf_ring *rx_ring,
 	if (!rx_buffer)
 		return;

-	if (iavf_can_reuse_rx_page(rx_buffer)) {
-		/* hand second half of page back to the ring */
-		iavf_reuse_rx_page(rx_ring, rx_buffer);
-		rx_ring->rx_stats.page_reuse_count++;
-	} else {
-		/* we are not reusing the buffer so unmap it */
-		dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,
-				     iavf_rx_pg_size(rx_ring),
-				     DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
-		__page_frag_cache_drain(rx_buffer->page,
-					rx_buffer->pagecnt_bias);
-	}
+	/* we are not reusing the buffer so unmap it */
+	dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma, PAGE_SIZE,
+			     DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);

 	/* clear contents of buffer_info */
 	rx_buffer->page = NULL;
@@ -1432,8 +1298,6 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		/* exit if we failed to retrieve a buffer */
 		if (!skb) {
 			rx_ring->rx_stats.alloc_buff_failed++;
-			if (rx_buffer && size)
-				rx_buffer->pagecnt_bias++;
 			break;
 		}

diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index a31df5af0473..f8e9f859a4f1 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -288,10 +288,6 @@ void iavf_configure_queues(struct iavf_adapter *adapter)
 	if (!vqci)
 		return;

-	/* Limit maximum frame size when jumbo frames is not enabled */
-	if (adapter->netdev->mtu <= ETH_DATA_LEN)
-		max_frame = IAVF_RXBUFFER_1536 - NET_IP_ALIGN;
-
 	vqci->vsi_id = adapter->vsi_res->vsi_id;
 	vqci->num_queue_pairs = pairs;
 	vqpi = vqci->qpair;
@@ -308,9 +304,7 @@ void iavf_configure_queues(struct iavf_adapter *adapter)
 		vqpi->rxq.ring_len = adapter->rx_rings[i].count;
 		vqpi->rxq.dma_ring_addr = adapter->rx_rings[i].dma;
 		vqpi->rxq.max_pkt_size = max_frame;
-		vqpi->rxq.databuffer_size =
-			ALIGN(adapter->rx_rings[i].rx_buf_len,
-			      BIT_ULL(IAVF_RXQ_CTX_DBUFF_SHIFT));
+		vqpi->rxq.databuffer_size = IAVF_RXBUFFER_3072;
 		if (CRC_OFFLOAD_ALLOWED(adapter))
 			vqpi->rxq.crc_disable = !!(adapter->netdev->features &
 						   NETIF_F_RXFCS);

From patchwork Thu Apr 4 15:43:57 2024
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13618013
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Alexander Duyck, Yunsheng Lin, Jesper Dangaard Brouer, Ilias Apalodimas, Christoph Lameter, Vlastimil Babka, Andrew Morton, nex.sw.ncis.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v9 4/9] slab: introduce kvmalloc_array_node() and kvcalloc_node()
Date: Thu, 4 Apr 2024 17:43:57 +0200
Message-ID: <20240404154402.3581254-5-aleksander.lobakin@intel.com>
In-Reply-To: <20240404154402.3581254-1-aleksander.lobakin@intel.com>
References: <20240404154402.3581254-1-aleksander.lobakin@intel.com>

Add NUMA-aware counterparts for kvmalloc_array() and kvcalloc(), so that
arrays can be flexibly allocated on a particular node.
Rewrite kvmalloc_array() as a kvmalloc_array_node(NUMA_NO_NODE) call.
Signed-off-by: Alexander Lobakin
Reviewed-by: Przemek Kitszel
Acked-by: Vlastimil Babka
---
 include/linux/slab.h | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index e53cbfa18325..d1d1fa5e7983 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -774,14 +774,27 @@ static inline __alloc_size(1) void *kvzalloc(size_t size, gfp_t flags)
 	return kvmalloc(size, flags | __GFP_ZERO);
 }

-static inline __alloc_size(1, 2) void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
+static inline __alloc_size(1, 2) void *
+kvmalloc_array_node(size_t n, size_t size, gfp_t flags, int node)
 {
 	size_t bytes;

 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;

-	return kvmalloc(bytes, flags);
+	return kvmalloc_node(bytes, flags, node);
+}
+
+static inline __alloc_size(1, 2) void *
+kvmalloc_array(size_t n, size_t size, gfp_t flags)
+{
+	return kvmalloc_array_node(n, size, flags, NUMA_NO_NODE);
+}
+
+static inline __alloc_size(1, 2) void *
+kvcalloc_node(size_t n, size_t size, gfp_t flags, int node)
+{
+	return kvmalloc_array_node(n, size, flags | __GFP_ZERO, node);
 }

 static inline __alloc_size(1, 2) void *kvcalloc(size_t n, size_t size, gfp_t flags)

From patchwork Thu Apr 4 15:43:58 2024
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13618014
From: Alexander Lobakin
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexander Duyck , Yunsheng Lin , Jesper Dangaard Brouer , Ilias Apalodimas , Christoph Lameter , Vlastimil Babka , Andrew Morton , nex.sw.ncis.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v9 5/9] page_pool: constify some read-only function arguments Date: Thu, 4 Apr 2024 17:43:58 +0200 Message-ID: <20240404154402.3581254-6-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240404154402.3581254-1-aleksander.lobakin@intel.com> References: <20240404154402.3581254-1-aleksander.lobakin@intel.com> MIME-Version: 1.0 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: A422B180023 X-Stat-Signature: ft4e5at14me7sa6kamiomjobz6iikekm X-Rspam-User: X-HE-Tag: 1712245567-792022 X-HE-Meta: U2FsdGVkX1+nLElJIezH0RY8Jx/k7753D+6+Q2FK2I6RtKMpSn2a8Ga0tHjt24+kxl88Blj+w7SRzV+uSgOZtYBcA8/SBCWXstkqFuM+BVpd6ChU5fVwOuSGCjBxyOIQtftkP9gpy5xI2O/BKMtidJEQUZgQqDfvxrSRqKJlhb8POxCU+938LQg4orhmJ4+ODF+bqkXvVKgIvqNlFmzvlPlzelfjxA/yWo2wvt/L7V3Wqlvy7jU/NrPg5kPc6oEY/GjQ5P88gXJ0/NcyOO3LyqMlEo6EvdNeEoXNvosuVb2BRR7kAZ8WIiDvm9LV12OgYmm9pi6FVNr3Lsqh7VMjFDhZDIFY5oq9tUzK02H/QSdfFR0zCDKSBigJ9m/vAIcOWgwcOAX1YUZA9kUNrI29LQBk7f+HNjgjl9dQ+J3GJM3GoCBloDEfe8WFQbNwRBU2Tv5i6DOLj5dCBzJgqKl85NtggUYwjTdF1Gs513pMrnG73Hd5K/P1DAFJq9PDF5UXdhlJUrmnhp4Z/6IsggL2YTtANPBRb6jFbogW7vSm5UH04kno+pWq+1jPku7xd8VU4oex2oSJRfwFcwZaALdPtnUnEkd8yK8x+P53x/vLCT8m8dyQ4Yt43PhjSnu5NF9s6E/Pn2BbV4AslPSDnpBhVqnS+B1J951dmB/GKyQFOxWn7PpLgB30ozw7vtOz0bCS/AfCrfGMAeFY1IVtDGX5vSYTih9kcDaIkNQBANi0XkFd6Eb14THvDtdcPsR5PA1bGXyl8nic3ZlkWfoegCZUFfHSCANR++o4M9kOfrbnIvYntjhE9vy823AfJPIn2xyK8KuSt7bZgPd0t5bLQU979NTcbebucoPU30s+lHXp7bPcJKigW+U+5eEdeZHo5Cfu79S9qLnDbyOavKCeopJGcT6FMKCv5QNYSDaZd1295gFwhv4dEsXvj+G0+TltCZtr5EXB96m0Viv11oYZDS8 FY0WBOMy HQG46LhjYy+n87WAPMN6+yeOvJqlK2pIKE9riRhzSWfrUu2zeb6/aZQBw11HfCjWa7tECfQdBRtYssbO1Nj+lXAzwJnxgEZR2lpDLl5Gpt8Gqyy/tR9h1acuH4knFHYcWIgBrdstWexvhEZP6KNHzm1glqpP8QjsFnSZGun/v8p5OfEqSz8hxHTTomD+BHRHExkHFrYt3JnXImZMXIYvENr1jZn0nLAipsswtWHMm4iMyOpSpoQ85tUKrQ5VFGC+f8RbMIPc4qkd7l+g= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: There are several functions taking pointers to data they don't modify. This includes statistics fetching, page and page_pool parameters, etc. Constify the pointers, so that call sites will be able to pass const pointers as well. No functional changes, no visible changes in functions sizes. 
Reviewed-by: Ilias Apalodimas
Signed-off-by: Alexander Lobakin
---
 include/net/page_pool/types.h   |  4 ++--
 include/net/page_pool/helpers.h | 10 +++++-----
 net/core/page_pool.c            | 10 +++++-----
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 5e43a08d3231..a6ebed002216 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -213,7 +213,7 @@ struct xdp_mem_info;
 #ifdef CONFIG_PAGE_POOL
 void page_pool_destroy(struct page_pool *pool);
 void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
-			   struct xdp_mem_info *mem);
+			   const struct xdp_mem_info *mem);
 void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 			     int count);
 #else
@@ -223,7 +223,7 @@ static inline void page_pool_destroy(struct page_pool *pool)

 static inline void page_pool_use_xdp_mem(struct page_pool *pool,
 					 void (*disconnect)(void *),
-					 struct xdp_mem_info *mem)
+					 const struct xdp_mem_info *mem)
 {
 }

diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 1d397c1a0043..c7bb06750e85 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -58,7 +58,7 @@
 /* Deprecated driver-facing API, use netlink instead */
 int page_pool_ethtool_stats_get_count(void);
 u8 *page_pool_ethtool_stats_get_strings(u8 *data);
-u64 *page_pool_ethtool_stats_get(u64 *data, void *stats);
+u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats);

 bool page_pool_get_stats(const struct page_pool *pool,
 			 struct page_pool_stats *stats);
@@ -73,7 +73,7 @@ static inline u8 *page_pool_ethtool_stats_get_strings(u8 *data)
 	return data;
 }

-static inline u64 *page_pool_ethtool_stats_get(u64 *data, void *stats)
+static inline u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats)
 {
 	return data;
 }
@@ -204,8 +204,8 @@ static inline void *page_pool_dev_alloc_va(struct page_pool *pool,
 * Get the stored dma direction. A driver might decide to store this locally
 * and avoid the extra cache line from page_pool to determine the direction.
 */
-static
-inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
+static inline enum dma_data_direction
+page_pool_get_dma_dir(const struct page_pool *pool)
 {
 	return pool->p.dma_dir;
 }
@@ -370,7 +370,7 @@ static inline void page_pool_free_va(struct page_pool *pool, void *va,
 * Fetch the DMA address of the page. The page pool to which the page belongs
 * must had been created with PP_FLAG_DMA_MAP.
 */
-static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+static inline dma_addr_t page_pool_get_dma_addr(const struct page *page)
 {
 	dma_addr_t ret = page->dma_addr;

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 4c175091fc0a..273c24429bce 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -123,9 +123,9 @@ int page_pool_ethtool_stats_get_count(void)
 }
 EXPORT_SYMBOL(page_pool_ethtool_stats_get_count);

-u64 *page_pool_ethtool_stats_get(u64 *data, void *stats)
+u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats)
 {
-	struct page_pool_stats *pool_stats = stats;
+	const struct page_pool_stats *pool_stats = stats;

 	*data++ = pool_stats->alloc_stats.fast;
 	*data++ = pool_stats->alloc_stats.slow;
@@ -383,8 +383,8 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
 	return page;
 }

-static void page_pool_dma_sync_for_device(struct page_pool *pool,
-					  struct page *page,
+static void page_pool_dma_sync_for_device(const struct page_pool *pool,
+					  const struct page *page,
 					  unsigned int dma_sync_size)
 {
 	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
@@ -987,7 +987,7 @@ static void page_pool_release_retry(struct work_struct *wq)
 }

 void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
-			   struct xdp_mem_info *mem)
+			   const struct xdp_mem_info *mem)
 {
 	refcount_inc(&pool->user_cnt);
 	pool->disconnect = disconnect;

From patchwork Thu Apr 4 15:43:59 2024
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13618015
From: Alexander Lobakin
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexander Duyck , Yunsheng Lin , Jesper Dangaard Brouer , Ilias Apalodimas , Christoph Lameter , Vlastimil Babka , Andrew Morton , nex.sw.ncis.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v9 6/9] page_pool: add DMA-sync-for-CPU inline helper Date: Thu, 4 Apr 2024 17:43:59 +0200 Message-ID: <20240404154402.3581254-7-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240404154402.3581254-1-aleksander.lobakin@intel.com> References: <20240404154402.3581254-1-aleksander.lobakin@intel.com> MIME-Version: 1.0 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 31B9618002C X-Stat-Signature: ffysnhpn6ha73n38k3nygsbjmu5zdo5c X-Rspam-User: X-HE-Tag: 1712245571-145275 X-HE-Meta: U2FsdGVkX19xrCU1oNIfKYPKx3H1KFpKG+ztjy+JZB3wBp7WRH4N4gjy8C53hP1py9F180H2/rrJ85fUKZrgOa3pY2WnReutHxgdQRVinJ/HGzdE+ki8ezBtWh7OVZfyUB6LqM26vUas23Uob1zhSG6IPl0MabqIVnaKEl8711z4t7TKFDoDsS9ZAf55c3Q6MqK83Hsk81jt+XROnJM/xrxJMqB+Gfvm/5dv8vaSdPhbUi5drVaznXCe0E7369l9Vm0GJ7gf09C8YbsX90ArqzoNumiZqzm2dOuHydAVRkulRSKD+7pREKMdIs4hUFHlANlHCtrm9qoqi53zvKZyMJXXdlGNvFBmRYlDnhwUlSe12Tyi8T4XFDbqRgBsG9zZlmExjGzasZWkiWJ4xehp+lGLHd3JjzMffwFf5+gASkq+P7p49O43PaZ8bXuQrKNBGxuzx10a8JMoXYGcDgI+MlikTZxTWGJoPLZDLM8N6lImZzy6o9dZv/3UoHYcPHXFdXcrgxdBSMXbAYYwRxQFd6UdR6n/aMfhn5LbZk0v6d8/nkz1u4pScffnRYoK1YgsXNUyuDfhXT5+I5ACVIaRf/eE4Bl6KEaoSYxavm/1idRLU9o3uSk+1reTU6964YXyUl2YZcAkb6w6yXIdwByNjYxBt+c1IBi47Lg8hDxmtegB/F86N9sCcrxvuPvrs3ig63CuP0pIO4Bo0feXP13giaD0zi81A74cmBCg6ND9tyjgc+p90JKYoux45OzgO+rTC6EXXpucQKyTYm1YjmI/WuPHgPjOdhy2r69o5AWYJpgVd2NjLScz/wZxO/sUcdO8jfsjnjnrMYlEC+MkC5KfEOTlRoLTXKZWmzoHX0zKtBL45igW7uIyTO5e6uOnZ/VhqqWz+RqNN6NFRz4u4A2/RBQTtbU1x6L/mjw4hbavogIzH2HtG9EvNRH+qUQMiIDE9v5+gKfuIjCFULppNqt gLOhIvTf dsJ4WeuAnPkn5L2O1VwRp7l3kWgox8MHnzm3siSBe/3BcGR4jC5dtqzgwTtO6c9gIViO1nEERTh6RXwxfH9pGTRreTrStxtDZFNXYm25CH2VPlYcwB1RHVZq8jIWWUjtgG432PW+jwJYqQ0F5MrIkixfoxmzSostZh74TAIvoXSYRLKT4vrSdcc2AtDJHoaLt1g9Hyoz/vVZovW8QTJwle51YWX2ql+JXr+1A8FsgLznNrebHMJUxXSUDQQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Each driver is responsible for syncing buffers written by HW for CPU before accessing them. Almost each PP-enabled driver uses the same pattern, which could be shorthanded into a static inline to make driver code a little bit more compact. Introduce a simple helper which performs DMA synchronization for the size passed from the driver. It can be used even when the pool doesn't manage DMA-syncs-for-device, just make sure the page has a correct DMA address set via page_pool_set_dma_addr(). 
Signed-off-by: Alexander Lobakin
---
 include/net/page_pool/helpers.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index c7bb06750e85..873631c79ab1 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -52,6 +52,8 @@
 #ifndef _NET_PAGE_POOL_HELPERS_H
 #define _NET_PAGE_POOL_HELPERS_H

+#include <linux/dma-mapping.h>
+
 #include <net/page_pool/types.h>

 #ifdef CONFIG_PAGE_POOL_STATS
@@ -395,6 +397,28 @@ static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
 	return false;
 }

+/**
+ * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
+ * @pool: &page_pool the @page belongs to
+ * @page: page to sync
+ * @offset: offset from page start to "hard" start if using PP frags
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Can be used as a shorthand to sync Rx pages before accessing them in the
+ * driver. Caller must ensure the pool was created with ``PP_FLAG_DMA_MAP``.
+ * Note that this version performs DMA sync unconditionally, even if the
+ * associated PP doesn't perform sync-for-device.
+ */
+static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
+					      const struct page *page,
+					      u32 offset, u32 dma_sync_size)
+{
+	dma_sync_single_range_for_cpu(pool->p.dev,
+				      page_pool_get_dma_addr(page),
+				      offset + pool->p.offset, dma_sync_size,
+				      page_pool_get_dma_dir(pool));
+}
+
 static inline bool page_pool_put(struct page_pool *pool)
 {
 	return refcount_dec_and_test(&pool->user_cnt);

From patchwork Thu Apr 4 15:44:00 2024
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13618016
From: Alexander Lobakin
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexander Duyck , Yunsheng Lin , Jesper Dangaard Brouer , Ilias Apalodimas , Christoph Lameter , Vlastimil Babka , Andrew Morton , nex.sw.ncis.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v9 7/9] libeth: add Rx buffer management Date: Thu, 4 Apr 2024 17:44:00 +0200 Message-ID: <20240404154402.3581254-8-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240404154402.3581254-1-aleksander.lobakin@intel.com> References: <20240404154402.3581254-1-aleksander.lobakin@intel.com> MIME-Version: 1.0 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 0797B180032 X-Stat-Signature: zgaqyno49z3cidgwgjady5nduqrsi8af X-Rspam-User: X-HE-Tag: 1712245574-133472 X-HE-Meta: U2FsdGVkX19rtxmetHiaZLXoGLgSkGjRQqZTkOSlgsKDRRneLS8y/DUuDQUZDRDQ1JQCh+wpQsCg/koRXYpmYVC1ElRIuh+8hpfwj3XKu8hvo2Wm4kK1wV8vglWwR6aHJJSUmmX3P+SBKRFXsrjIDgHyxwJ8YzYGusphjrMwX+/V6YyaCdZPGMp6Eu3uTuK+UD7fNEyQMKHdwjCK7JlSM2iDwA8klJ6uQYfd8aba8bZu+70dpz3RtBiR/aK8+Lfjd4pfTHvuF5tDnaxQfA9lhp9g/u4nnQ7D+rHoicWGSVgdpS6gOFfp3knqSZDDGpBJugHpc6vghfsbVZ8yPMaC+df4wdZOiCSztbbFbTLqJ0ALNOY4jaQwFufmGE6C1zqYeOp0T4s4L2NNwhwHAXmeqRN9n8rOWJIRQyaFjKnzFf3Djg1qFkKGi64DQhTiVwDqXmaH0KsGdIDGru6LJWykOBDwGDt8s+uaG9X/2WR90ORl6UIDWaHw6SEEiz4sJCfvKCrYOeI3fLZtWZCe4hNu/Cb+OE1uSFHXFsv1cCr8+Aa4lh/ow1kL5U0E684SYcZD/ogKlKZJdorp3Wn7wAy4xpilZGZBo2PrfL63Ioj+aPT2h/BUhctis2azMatMcZZ0hhYLH/leUaYMx2y3Jc8CGKDYSWlvxGh3FRwU4sQ9BjiiE5sBgc9e4FIvsslTfJqMg0kQDnRXm2WlVccbgQl8qF1sVzz1e6RA8G658Ispi+pVVPU26IGnPvKq4nZ3+Hv7XJgQVZ9VJE2x1clnKDswFvIBGwuCWhs4gzV/0ipdk7yWXfV3bcgEi4Oq7IMi0GVR9TETYgbbCflcaXZBYJXwldQQBf+qeZt0hge3C1nQopwo7GvLTnh5uPRNsiq/1KBTn9jXXx91GRCuV6IFBnu1gKVMeLk4ewpdrANJfM4C/X352Izet3niAUO9LjZ/Askxa/B0cI+6Q8na6T6EOJv dHRbhm/U 7gHMyWW1IeUB5dNwsxVQaLXe4InVLm+gGbZsoazIujIwYHlVdKY/bkLvmPVVxrpqoEKJ51MimqJTfG2SKH/rjwG4hkjmc+paVmQOMzLpkpMbu4rJrSty9k7H7jwvnmCuCVYG1dbU+WBbs8ETomQCL2dFaCfQZyT2T5XBp X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Add a couple intuitive helpers to hide Rx buffer implementation details in the library and not multiplicate it between drivers. The settings are sorta optimized for 100G+ NICs, but nothing really HW-specific here. Use the new page_pool_dev_alloc() to dynamically switch between split-page and full-page modes depending on MTU, page size, required headroom etc. For example, on x86_64 with the default driver settings each page is shared between 2 buffers. Turning on XDP (not in this series) -> increasing headroom requirement pushes truesize out of 2048 boundary, leading to that each buffer starts getting a full page. The "ceiling" limit is %PAGE_SIZE, as only order-0 pages are used to avoid compound overhead. For the above architecture, this means maximum linear frame size of 3712 w/o XDP. Not that &libeth_buf_queue is not a complete queue/ring structure for now, rather a shim, but eventually the libeth-enabled drivers will move to it, with iavf being the first one. 
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/libeth/Kconfig |   1 +
 include/net/libeth/rx.h                   | 117 ++++++++++++++++++++++
 drivers/net/ethernet/intel/libeth/rx.c    |  98 ++++++++++++++++++
 3 files changed, 216 insertions(+)

diff --git a/drivers/net/ethernet/intel/libeth/Kconfig b/drivers/net/ethernet/intel/libeth/Kconfig
index af970a63c227..480293b71dbc 100644
--- a/drivers/net/ethernet/intel/libeth/Kconfig
+++ b/drivers/net/ethernet/intel/libeth/Kconfig
@@ -3,6 +3,7 @@

 config LIBETH
 	tristate
+	select PAGE_POOL
 	help
 	  libeth is a common library containing routines shared between several
 	  drivers, but not yet promoted to the generic kernel API.

diff --git a/include/net/libeth/rx.h b/include/net/libeth/rx.h
index aaf9c2cdf7fd..3db2bda4eab6 100644
--- a/include/net/libeth/rx.h
+++ b/include/net/libeth/rx.h
@@ -4,8 +4,125 @@
 #ifndef __LIBETH_RX_H
 #define __LIBETH_RX_H

+#include <linux/if_vlan.h>
+
+#include <net/page_pool/helpers.h>
 #include <net/xdp.h>

+/* Rx buffer management */
+
+/* Space reserved in front of each frame */
+#define LIBETH_SKB_HEADROOM	(NET_SKB_PAD + NET_IP_ALIGN)
+/* Maximum headroom for worst-case calculations */
+#define LIBETH_MAX_HEADROOM	LIBETH_SKB_HEADROOM
+/* Link layer / L2 overhead: Ethernet, 2 VLAN tags (C + S), FCS */
+#define LIBETH_RX_LL_LEN	(ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN)
+
+/* Always use order-0 pages */
+#define LIBETH_RX_PAGE_ORDER	0
+/* Pick a sane buffer stride and align to a cacheline boundary */
+#define LIBETH_RX_BUF_STRIDE	SKB_DATA_ALIGN(128)
+/* HW-writeable space in one buffer: truesize - headroom/tailroom, aligned */
+#define LIBETH_RX_PAGE_LEN(hr)						   \
+	ALIGN_DOWN(SKB_MAX_ORDER(hr, LIBETH_RX_PAGE_ORDER),		   \
+		   LIBETH_RX_BUF_STRIDE)
+
+/**
+ * struct libeth_fqe - structure representing an Rx buffer
+ * @page: page holding the buffer
+ * @offset: offset from the page start (to the headroom)
+ * @truesize: total space occupied by the buffer (w/ headroom and tailroom)
+ *
+ * Depending on the MTU, API switches between one-page-per-frame and shared
+ * page model (to conserve memory on bigger-page platforms). In case of the
+ * former, @offset is always 0 and @truesize is always ```PAGE_SIZE```.
+ */
+struct libeth_fqe {
+	struct page		*page;
+	u32			offset;
+	u32			truesize;
+} __aligned_largest;
+
+/**
+ * struct libeth_fq - structure representing a buffer queue
+ * @fp: hotpath part of the structure
+ * @pp: &page_pool for buffer management
+ * @fqes: array of Rx buffers
+ * @truesize: size to allocate per buffer, w/overhead
+ * @count: number of descriptors/buffers the queue has
+ * @buf_len: HW-writeable length per each buffer
+ * @nid: ID of the closest NUMA node with memory
+ */
+struct libeth_fq {
+	struct_group_tagged(libeth_fq_fp, fp,
+		struct page_pool	*pp;
+		struct libeth_fqe	*fqes;
+
+		u32			truesize;
+		u32			count;
+	);
+
+	/* Cold fields */
+	u32			buf_len;
+	int			nid;
+};
+
+int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi);
+void libeth_rx_fq_destroy(struct libeth_fq *fq);
+
+/**
+ * libeth_rx_alloc - allocate a new Rx buffer
+ * @fq: buffer queue to allocate for
+ * @i: index of the buffer within the queue
+ *
+ * Return: DMA address to be passed to HW for Rx on successful allocation,
+ * ```DMA_MAPPING_ERROR``` otherwise.
+ */
+static inline dma_addr_t libeth_rx_alloc(const struct libeth_fq_fp *fq, u32 i)
+{
+	struct libeth_fqe *buf = &fq->fqes[i];
+
+	buf->truesize = fq->truesize;
+	buf->page = page_pool_dev_alloc(fq->pp, &buf->offset, &buf->truesize);
+	if (unlikely(!buf->page))
+		return DMA_MAPPING_ERROR;
+
+	return page_pool_get_dma_addr(buf->page) + buf->offset +
+	       fq->pp->p.offset;
+}
+
+void libeth_rx_recycle_slow(struct page *page);
+
+/**
+ * libeth_rx_sync_for_cpu - synchronize or recycle buffer post DMA
+ * @fqe: buffer to process
+ * @len: frame length from the descriptor
+ *
+ * Process the buffer after it's written by HW. The regular path is to
+ * synchronize DMA for CPU, but in case of no data it will be immediately
+ * recycled back to its PP.
+ *
+ * Return: true when there's data to process, false otherwise.
+ */
+static inline bool libeth_rx_sync_for_cpu(const struct libeth_fqe *fqe,
+					  u32 len)
+{
+	struct page *page = fqe->page;
+
+	/* Very rare, but possible case. The most common reason:
+	 * the last fragment contained FCS only, which was then
+	 * stripped by the HW.
+	 */
+	if (unlikely(!len)) {
+		libeth_rx_recycle_slow(page);
+		return false;
+	}
+
+	page_pool_dma_sync_for_cpu(page->pp, page, fqe->offset, len);
+
+	return true;
+}
+
 /* Converting abstract packet type numbers into a software structure with
  * the packet parameters to do O(1) lookup on Rx.
  */
diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/intel/libeth/rx.c
index 86f17e29b47d..a557a1ebcbe5 100644
--- a/drivers/net/ethernet/intel/libeth/rx.c
+++ b/drivers/net/ethernet/intel/libeth/rx.c
@@ -3,6 +3,104 @@

 #include <net/libeth/rx.h>

+/* Rx buffer management */
+
+/**
+ * libeth_rx_hw_len - get the actual buffer size to be passed to HW
+ * @pp: &page_pool_params of the netdev to calculate the size for
+ * @max_len: maximum buffer size for a single descriptor
+ *
+ * Return: HW-writeable length per one buffer to pass it to the HW accounting:
+ * MTU the @dev has, HW required alignment, minimum and maximum allowed values,
+ * and system's page size.
+ */
+static u32 libeth_rx_hw_len(const struct page_pool_params *pp, u32 max_len)
+{
+	u32 len;
+
+	len = READ_ONCE(pp->netdev->mtu) + LIBETH_RX_LL_LEN;
+	len = ALIGN(len, LIBETH_RX_BUF_STRIDE);
+	len = min3(len, ALIGN_DOWN(max_len ? : U32_MAX, LIBETH_RX_BUF_STRIDE),
+		   pp->max_len);
+
+	return len;
+}
+
+/**
+ * libeth_rx_fq_create - create a PP with the default libeth settings
+ * @fq: buffer queue struct to fill
+ * @napi: &napi_struct covering this PP (no usage outside its poll loops)
+ *
+ * Return: %0 on success, -%errno on failure.
+ */
+int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi)
+{
+	struct page_pool_params pp = {
+		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+		.order		= LIBETH_RX_PAGE_ORDER,
+		.pool_size	= fq->count,
+		.nid		= fq->nid,
+		.dev		= napi->dev->dev.parent,
+		.netdev		= napi->dev,
+		.napi		= napi,
+		.dma_dir	= DMA_FROM_DEVICE,
+		.offset		= LIBETH_SKB_HEADROOM,
+	};
+	struct libeth_fqe *fqes;
+	struct page_pool *pool;
+
+	/* HW-writeable / syncable length per one page */
+	pp.max_len = LIBETH_RX_PAGE_LEN(pp.offset);
+
+	/* HW-writeable length per buffer */
+	fq->buf_len = libeth_rx_hw_len(&pp, fq->buf_len);
+	/* Buffer size to allocate */
+	fq->truesize = roundup_pow_of_two(SKB_HEAD_ALIGN(pp.offset +
+							 fq->buf_len));
+
+	pool = page_pool_create(&pp);
+	if (IS_ERR(pool))
+		return PTR_ERR(pool);
+
+	fqes = kvcalloc_node(fq->count, sizeof(*fqes), GFP_KERNEL, fq->nid);
+	if (!fqes)
+		goto err_buf;
+
+	fq->fqes = fqes;
+	fq->pp = pool;
+
+	return 0;
+
+err_buf:
+	page_pool_destroy(pool);
+
+	return -ENOMEM;
+}
+EXPORT_SYMBOL_NS_GPL(libeth_rx_fq_create, LIBETH);
+
+/**
+ * libeth_rx_fq_destroy - destroy a &page_pool created by libeth
+ * @fq: buffer queue to process
+ */
+void libeth_rx_fq_destroy(struct libeth_fq *fq)
+{
+	kvfree(fq->fqes);
+	page_pool_destroy(fq->pp);
+}
+EXPORT_SYMBOL_NS_GPL(libeth_rx_fq_destroy, LIBETH);
+
+/**
+ * libeth_rx_recycle_slow - recycle a libeth page from the NAPI context
+ * @page: page to recycle
+ *
+ * To be used on exceptions or rare cases not requiring fast inline recycling.
+ */
+void libeth_rx_recycle_slow(struct page *page)
+{
+	page_pool_recycle_direct(page->pp, page);
+}
+EXPORT_SYMBOL_NS_GPL(libeth_rx_recycle_slow, LIBETH);
+
 /* Converting abstract packet type numbers into a software structure with
  * the packet parameters to do O(1) lookup on Rx.
*/
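Before moving on to the iavf patches, here is a minimal sketch of a hypothetical consumer of the API above: create the queue at ifup, refill hardware descriptors from it in the NAPI poll loop, tear it down at ifdown. All my_* names are illustrative; only the libeth_* symbols come from this patch. Setting ->buf_len to 0 assumes no per-descriptor HW cap, so libeth_rx_hw_len() derives the length purely from the MTU and page size via its `max_len ? : U32_MAX` fallback.

#include <net/libeth/rx.h>

struct my_rx_ring {
	struct libeth_fq fq;	/* libeth buffer queue */
	__le64 *hw_descs;	/* simplified HW descriptor ring */
};

static int my_ring_init(struct my_rx_ring *ring, struct napi_struct *napi,
			u32 count)
{
	ring->fq.count = count;
	ring->fq.buf_len = 0;	/* 0 == no HW limit, let libeth decide */
	ring->fq.nid = NUMA_NO_NODE;

	return libeth_rx_fq_create(&ring->fq, napi);
}

/* Refill descriptors in [from, to); runs inside the NAPI poll loop */
static bool my_ring_refill(struct my_rx_ring *ring, u32 from, u32 to)
{
	/* copy only the hotpath part of the queue onto the stack */
	const struct libeth_fq_fp fq = ring->fq.fp;

	for (u32 i = from; i != to; i = (i + 1 == fq.count) ? 0 : i + 1) {
		dma_addr_t dma = libeth_rx_alloc(&fq, i);

		if (dma == DMA_MAPPING_ERROR)
			return false;	/* pool exhausted, retry later */

		ring->hw_descs[i] = cpu_to_le64(dma);
	}

	return true;
}

static void my_ring_deinit(struct my_rx_ring *ring)
{
	libeth_rx_fq_destroy(&ring->fq);
}

Note how the hotpath helpers take only the &libeth_fq_fp group, so per-packet processing touches just the hot part of &libeth_fq, while the cold fields (->buf_len, ->nid) stay out of the way.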
From patchwork Thu Apr 4 15:44:01 2024 From: Alexander Lobakin To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexander Duyck , Yunsheng Lin , Jesper Dangaard Brouer , Ilias Apalodimas , Christoph Lameter , Vlastimil Babka , Andrew Morton , nex.sw.ncis.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v9 8/9] iavf: pack iavf_ring more efficiently Date: Thu, 4 Apr 2024 17:44:01 +0200 Message-ID: <20240404154402.3581254-9-aleksander.lobakin@intel.com> In-Reply-To: <20240404154402.3581254-1-aleksander.lobakin@intel.com> References: <20240404154402.3581254-1-aleksander.lobakin@intel.com> MIME-Version: 1.0 Before replacing the Rx buffer management with libie, clean up &iavf_ring a bit. There are several fields not used anywhere in the code -- simply remove them. Move ::tail up to remove a hole. Replace the ::arm_wb boolean with a 1-bit flag in ::flags to free one more byte. Finally, move ::prev_pkt_ctr out of &iavf_tx_queue_stats -- it doesn't belong there, as it is used for Tx stall detection rather than statistics. Place it next to the stats on the ring itself to fill the 4-byte slot.
The result: no holes and all the hot fields fit into the first 64-byte cacheline. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> --- drivers/net/ethernet/intel/iavf/iavf_txrx.h | 22 +++------------------ drivers/net/ethernet/intel/iavf/iavf_txrx.c | 12 +++++------ 2 files changed, 9 insertions(+), 25 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h index e01777531635..ed559fa6f214 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h @@ -227,7 +227,6 @@ struct iavf_tx_queue_stats { u64 tx_done_old; u64 tx_linearize; u64 tx_force_wb; - int prev_pkt_ctr; u64 tx_lost_interrupt; }; @@ -237,12 +236,6 @@ struct iavf_rx_queue_stats { u64 alloc_buff_failed; }; -enum iavf_ring_state_t { - __IAVF_TX_FDIR_INIT_DONE, - __IAVF_TX_XPS_INIT_DONE, - __IAVF_RING_STATE_NBITS /* must be last */ -}; - /* some useful defines for virtchannel interface, which * is the only remaining user of header split */ @@ -264,10 +257,8 @@ struct iavf_ring { struct iavf_tx_buffer *tx_bi; struct iavf_rx_buffer *rx_bi; }; - DECLARE_BITMAP(state, __IAVF_RING_STATE_NBITS); - u16 queue_index; /* Queue number of ring */ - u8 dcb_tc; /* Traffic class of ring */ u8 __iomem *tail; + u16 queue_index; /* Queue number of ring */ /* high bit set means dynamic, use accessors routines to read/write. * hardware only supports 2us resolution for the ITR registers. @@ -277,22 +268,14 @@ struct iavf_ring { u16 itr_setting; u16 count; /* Number of descriptors */ - u16 reg_idx; /* HW register index of the ring */ /* used in interrupt processing */ u16 next_to_use; u16 next_to_clean; - u8 atr_sample_rate; - u8 atr_count; - - bool ring_active; /* is ring online or not */ - bool arm_wb; /* do something to arm write back */ - u8 packet_stride; - u16 flags; #define IAVF_TXR_FLAGS_WB_ON_ITR BIT(0) -/* BIT(1) is free, was IAVF_RXR_FLAGS_BUILD_SKB_ENABLED */ +#define IAVF_TXR_FLAGS_ARM_WB BIT(1) /* BIT(2) is free */ #define IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(3) #define IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(4) @@ -306,6 +289,7 @@ struct iavf_ring { struct iavf_rx_queue_stats rx_stats; }; + int prev_pkt_ctr; /* For Tx stall detection */ unsigned int size; /* length of descriptor ring in bytes */ dma_addr_t dma; /* physical address of ring */ diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index a14f7f211150..1a27fa613f6d 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -185,7 +185,7 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi) * pending work. */ packets = tx_ring->stats.packets & INT_MAX; - if (tx_ring->tx_stats.prev_pkt_ctr == packets) { + if (tx_ring->prev_pkt_ctr == packets) { iavf_force_wb(vsi, tx_ring->q_vector); continue; } @@ -194,7 +194,7 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi) * to iavf_get_tx_pending() */ smp_rmb(); - tx_ring->tx_stats.prev_pkt_ctr = + tx_ring->prev_pkt_ctr = iavf_get_tx_pending(tx_ring, true) ?
packets : -1; } } @@ -320,7 +320,7 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi, ((j / WB_STRIDE) == 0) && (j > 0) && !test_bit(__IAVF_VSI_DOWN, vsi->state) && (IAVF_DESC_UNUSED(tx_ring) != tx_ring->count)) - tx_ring->arm_wb = true; + tx_ring->flags |= IAVF_TXR_FLAGS_ARM_WB; } /* notify netdev of completed buffers */ @@ -675,7 +675,7 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring) tx_ring->next_to_use = 0; tx_ring->next_to_clean = 0; - tx_ring->tx_stats.prev_pkt_ctr = -1; + tx_ring->prev_pkt_ctr = -1; return 0; err: @@ -1491,8 +1491,8 @@ int iavf_napi_poll(struct napi_struct *napi, int budget) clean_complete = false; continue; } - arm_wb |= ring->arm_wb; - ring->arm_wb = false; + arm_wb |= !!(ring->flags & IAVF_TXR_FLAGS_ARM_WB); + ring->flags &= ~IAVF_TXR_FLAGS_ARM_WB; } /* Handle case where we are called by netpoll with a budget of 0 */
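The ::arm_wb conversion above is the usual bool-to-flag-bit pattern. A self-contained sketch of it, reduced to its essence (struct my_ring and the MY_* names are illustrative, not from the patch):

#include <linux/bits.h>
#include <linux/types.h>

struct my_ring {
	u16 flags;			/* already present on the ring */
#define MY_RING_FLAGS_WB_ON_ITR	BIT(0)
#define MY_RING_FLAGS_ARM_WB	BIT(1)	/* replaces: bool arm_wb; */
};

/* was: ring->arm_wb = true; */
static void my_ring_arm_wb(struct my_ring *ring)
{
	ring->flags |= MY_RING_FLAGS_ARM_WB;
}

/* was: arm_wb |= ring->arm_wb; ring->arm_wb = false; */
static bool my_ring_test_and_clear_wb(struct my_ring *ring)
{
	bool arm_wb = !!(ring->flags & MY_RING_FLAGS_ARM_WB);

	ring->flags &= ~MY_RING_FLAGS_ARM_WB;

	return arm_wb;
}

A bool burns a whole byte (often more after padding), while a bit in an already existing ::flags word costs nothing; layout claims such as "no holes, hot fields in the first 64-byte cacheline" can be double-checked with pahole on the built object.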
From patchwork Thu Apr 4 15:44:02 2024 From: Alexander Lobakin To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexander Duyck , Yunsheng Lin , Jesper Dangaard Brouer , Ilias Apalodimas , Christoph Lameter , Vlastimil Babka , Andrew Morton , nex.sw.ncis.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v9 9/9] iavf: switch to Page Pool Date: Thu, 4 Apr 2024 17:44:02 +0200 Message-ID: <20240404154402.3581254-10-aleksander.lobakin@intel.com> In-Reply-To: <20240404154402.3581254-1-aleksander.lobakin@intel.com> References: <20240404154402.3581254-1-aleksander.lobakin@intel.com> MIME-Version: 1.0
Now that the IAVF driver simply uses dev_alloc_page() + free_page() with no custom recycling logic, it can easily be switched to using the Page Pool / libeth API instead. This allows removing all the dancing around headroom, HW buffer size, and page order: all DMA-for-device syncing is now done in the PP core, and DMA-for-CPU in the libeth helper. Use skb_mark_for_recycle() to bring back the recycling and restore the performance. Speaking of performance: it is on par with the baseline and faster with the PP optimization series applied. The memory usage for 1500-byte MTU, however, is now almost 2x lower (x86_64), thanks to one page now serving two descriptors instead of one. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> --- drivers/net/ethernet/intel/iavf/iavf_txrx.h | 34 +-- include/linux/net/intel/libie/rx.h | 17 ++ drivers/net/ethernet/intel/iavf/iavf_main.c | 7 +- drivers/net/ethernet/intel/iavf/iavf_txrx.c | 257 +++++------------- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 10 +- 5 files changed, 111 insertions(+), 214 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h index ed559fa6f214..d7b5587aeb8e 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h @@ -80,18 +80,8 @@ enum iavf_dyn_idx_t { BIT_ULL(IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) | \ BIT_ULL(IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP)) -/* Supported Rx Buffer Sizes (a multiple of 128) */ -#define IAVF_RXBUFFER_3072 3072 /* Used for large frames w/ padding */ -#define IAVF_MAX_RXBUFFER 9728 /* largest size for single descriptor */ - -#define IAVF_PACKET_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2)) #define iavf_rx_desc iavf_32byte_rx_desc -#define IAVF_RX_DMA_ATTR \ - (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING) - -#define IAVF_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN) - /** * iavf_test_staterr - tests bits in Rx descriptor status and error fields * @rx_desc: pointer to receive descriptor (in le64 format) @@ -210,12 +200,6 @@ struct iavf_tx_buffer { u32 tx_flags; }; -struct iavf_rx_buffer { - dma_addr_t dma; - struct page *page; - __u32 page_offset; -}; - struct iavf_queue_stats { u64 packets; u64 bytes; @@ -251,13 +235,18 @@ struct iavf_rx_queue_stats { struct iavf_ring { struct iavf_ring *next; /* pointer to next ring in q_vector */ void *desc; /* Descriptor ring memory */ - struct device *dev; /* Used for DMA mapping */ + union { + struct page_pool *pp; /* Used on Rx for buffer management */ + struct device *dev; /* Used on Tx for DMA mapping */ + }; struct net_device *netdev; /* netdev ring maps to */ union { + struct libeth_fqe *rx_fqes; struct iavf_tx_buffer *tx_bi; - struct iavf_rx_buffer *rx_bi; }; u8 __iomem *tail; + u32 truesize; + u16 queue_index; /* Queue number of ring */ /* high bit set means dynamic, use accessors routines to read/write. @@ -305,6 +294,8 @@ struct iavf_ring { * iavf_clean_rx_irq() is called * for this ring.
*/ + + u32 rx_buf_len; } ____cacheline_internodealigned_in_smp; #define IAVF_ITR_ADAPTIVE_MIN_INC 0x0002 @@ -328,13 +319,6 @@ struct iavf_ring_container { #define iavf_for_each_ring(pos, head) \ for (pos = (head).ring; pos != NULL; pos = pos->next) -static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring) -{ - return 0; -} - -#define iavf_rx_pg_size(_ring) (PAGE_SIZE << iavf_rx_pg_order(_ring)) - bool iavf_alloc_rx_buffers(struct iavf_ring *rxr, u16 cleaned_count); netdev_tx_t iavf_xmit_frame(struct sk_buff *skb, struct net_device *netdev); int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring); diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h index ab9ffe1e93d8..be6cfb59bc6c 100644 --- a/include/linux/net/intel/libie/rx.h +++ b/include/linux/net/intel/libie/rx.h @@ -6,6 +6,23 @@ #include <net/libeth/rx.h> +/* Rx buffer management */ + +/* The largest size for a single descriptor as per HW */ +#define LIBIE_MAX_RX_BUF_LEN 9728U +/* "True" HW-writeable space: minimum from SW and HW values */ +#define LIBIE_RX_BUF_LEN(hr) min_t(u32, LIBETH_RX_PAGE_LEN(hr), \ + LIBIE_MAX_RX_BUF_LEN) + +/* The maximum frame size as per HW (S/G) */ +#define __LIBIE_MAX_RX_FRM_LEN 16382U +/* At the same time, HW can chain up to 5 Rx descriptors */ +#define LIBIE_MAX_RX_FRM_LEN(hr) \ + min_t(u32, __LIBIE_MAX_RX_FRM_LEN, LIBIE_RX_BUF_LEN(hr) * 5) +/* Maximum frame size minus LL overhead */ +#define LIBIE_MAX_MTU \ + (LIBIE_MAX_RX_FRM_LEN(LIBETH_MAX_HEADROOM) - LIBETH_RX_LL_LEN) + /* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed * bitfield struct. */ diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index ffb71a62b105..7e0de0a9b883 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -1,6 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2018 Intel Corporation.
*/ +#include <linux/net/intel/libie/rx.h> + #include "iavf.h" #include "iavf_prototype.h" /* All iavf tracepoints are defined by the include below, which must @@ -45,6 +47,7 @@ MODULE_DEVICE_TABLE(pci, iavf_pci_tbl); MODULE_ALIAS("i40evf"); MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>"); MODULE_DESCRIPTION("Intel(R) Ethernet Adaptive Virtual Function Network Driver"); +MODULE_IMPORT_NS(LIBETH); MODULE_IMPORT_NS(LIBIE); MODULE_LICENSE("GPL v2"); @@ -1586,7 +1589,6 @@ static int iavf_alloc_queues(struct iavf_adapter *adapter) rx_ring = &adapter->rx_rings[i]; rx_ring->queue_index = i; rx_ring->netdev = adapter->netdev; - rx_ring->dev = &adapter->pdev->dev; rx_ring->count = adapter->rx_desc_count; rx_ring->itr_setting = IAVF_ITR_RX_DEF; } @@ -2613,9 +2615,8 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter) iavf_set_ethtool_ops(netdev); netdev->watchdog_timeo = 5 * HZ; - /* MTU range: 68 - 9710 */ netdev->min_mtu = ETH_MIN_MTU; - netdev->max_mtu = IAVF_MAX_RXBUFFER - IAVF_PACKET_HDR_PAD; + netdev->max_mtu = LIBIE_MAX_MTU; if (!is_valid_ether_addr(adapter->hw.mac.addr)) { dev_info(&pdev->dev, "Invalid MAC address %pM, using random\n", diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index 1a27fa613f6d..02cda84bcb9a 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -690,11 +690,8 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring) **/ static void iavf_clean_rx_ring(struct iavf_ring *rx_ring) { - unsigned long bi_size; - u16 i; - /* ring already cleared, nothing to do */ - if (!rx_ring->rx_bi) + if (!rx_ring->rx_fqes) return; if (rx_ring->skb) { @@ -702,40 +699,16 @@ static void iavf_clean_rx_ring(struct iavf_ring *rx_ring) rx_ring->skb = NULL; } - /* Free all the Rx ring sk_buffs */ - for (i = 0; i < rx_ring->count; i++) { - struct iavf_rx_buffer *rx_bi = &rx_ring->rx_bi[i]; + /* Free all the Rx ring buffers */ + for (u32 i = rx_ring->next_to_clean; i != rx_ring->next_to_use; ) { + const struct libeth_fqe *rx_fqes = &rx_ring->rx_fqes[i]; - if (!rx_bi->page) - continue; + page_pool_put_full_page(rx_ring->pp, rx_fqes->page, false); - /* Invalidate cache lines that may have been written to by - * device so that we avoid corrupting memory.
- */ - dma_sync_single_range_for_cpu(rx_ring->dev, - rx_bi->dma, - rx_bi->page_offset, - IAVF_RXBUFFER_3072, - DMA_FROM_DEVICE); - - /* free resources associated with mapping */ - dma_unmap_page_attrs(rx_ring->dev, rx_bi->dma, - iavf_rx_pg_size(rx_ring), - DMA_FROM_DEVICE, - IAVF_RX_DMA_ATTR); - - __free_page(rx_bi->page); - - rx_bi->page = NULL; - rx_bi->page_offset = 0; + if (unlikely(++i == rx_ring->count)) + i = 0; } - bi_size = sizeof(struct iavf_rx_buffer) * rx_ring->count; - memset(rx_ring->rx_bi, 0, bi_size); - - /* Zero out the descriptor ring */ - memset(rx_ring->desc, 0, rx_ring->size); - rx_ring->next_to_clean = 0; rx_ring->next_to_use = 0; } @@ -748,15 +721,22 @@ static void iavf_clean_rx_ring(struct iavf_ring *rx_ring) **/ void iavf_free_rx_resources(struct iavf_ring *rx_ring) { + struct libeth_fq fq = { + .fqes = rx_ring->rx_fqes, + .pp = rx_ring->pp, + }; + iavf_clean_rx_ring(rx_ring); - kfree(rx_ring->rx_bi); - rx_ring->rx_bi = NULL; if (rx_ring->desc) { - dma_free_coherent(rx_ring->dev, rx_ring->size, + dma_free_coherent(rx_ring->pp->p.dev, rx_ring->size, rx_ring->desc, rx_ring->dma); rx_ring->desc = NULL; } + + libeth_rx_fq_destroy(&fq); + rx_ring->rx_fqes = NULL; + rx_ring->pp = NULL; } /** @@ -767,26 +747,32 @@ void iavf_free_rx_resources(struct iavf_ring *rx_ring) **/ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring) { - struct device *dev = rx_ring->dev; - int bi_size; - - /* warn if we are about to overwrite the pointer */ - WARN_ON(rx_ring->rx_bi); - bi_size = sizeof(struct iavf_rx_buffer) * rx_ring->count; - rx_ring->rx_bi = kzalloc(bi_size, GFP_KERNEL); - if (!rx_ring->rx_bi) - goto err; + struct libeth_fq fq = { + .count = rx_ring->count, + .buf_len = LIBIE_MAX_RX_BUF_LEN, + .nid = NUMA_NO_NODE, + }; + int ret; + + ret = libeth_rx_fq_create(&fq, &rx_ring->q_vector->napi); + if (ret) + return ret; + + rx_ring->pp = fq.pp; + rx_ring->rx_fqes = fq.fqes; + rx_ring->truesize = fq.truesize; + rx_ring->rx_buf_len = fq.buf_len; u64_stats_init(&rx_ring->syncp); /* Round up to nearest 4K */ rx_ring->size = rx_ring->count * sizeof(union iavf_32byte_rx_desc); rx_ring->size = ALIGN(rx_ring->size, 4096); - rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size, + rx_ring->desc = dma_alloc_coherent(fq.pp->p.dev, rx_ring->size, &rx_ring->dma, GFP_KERNEL); if (!rx_ring->desc) { - dev_info(dev, "Unable to allocate memory for the Rx descriptor ring, size=%d\n", + dev_info(fq.pp->p.dev, "Unable to allocate memory for the Rx descriptor ring, size=%d\n", rx_ring->size); goto err; } @@ -795,9 +781,12 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring) rx_ring->next_to_use = 0; return 0; + err: - kfree(rx_ring->rx_bi); - rx_ring->rx_bi = NULL; + libeth_rx_fq_destroy(&fq); + rx_ring->rx_fqes = NULL; + rx_ring->pp = NULL; + return -ENOMEM; } @@ -819,49 +808,6 @@ static void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val) writel(val, rx_ring->tail); } -/** - * iavf_alloc_mapped_page - recycle or make a new page - * @rx_ring: ring to use - * @bi: rx_buffer struct to modify - * - * Returns true if the page was successfully allocated or - * reused. 
- **/ -static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring, - struct iavf_rx_buffer *bi) -{ - struct page *page = bi->page; - dma_addr_t dma; - - /* alloc new page for storage */ - page = dev_alloc_pages(iavf_rx_pg_order(rx_ring)); - if (unlikely(!page)) { - rx_ring->rx_stats.alloc_page_failed++; - return false; - } - - /* map page for use */ - dma = dma_map_page_attrs(rx_ring->dev, page, 0, - iavf_rx_pg_size(rx_ring), - DMA_FROM_DEVICE, - IAVF_RX_DMA_ATTR); - - /* if mapping failed free memory back to system since - * there isn't much point in holding memory we can't use - */ - if (dma_mapping_error(rx_ring->dev, dma)) { - __free_pages(page, iavf_rx_pg_order(rx_ring)); - rx_ring->rx_stats.alloc_page_failed++; - return false; - } - - bi->dma = dma; - bi->page = page; - bi->page_offset = IAVF_SKB_PAD; - - return true; -} - /** * iavf_receive_skb - Send a completed packet up the stack * @rx_ring: rx ring in play @@ -892,38 +838,37 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring, **/ bool iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u16 cleaned_count) { + const struct libeth_fq_fp fq = { + .pp = rx_ring->pp, + .fqes = rx_ring->rx_fqes, + .truesize = rx_ring->truesize, + .count = rx_ring->count, + }; u16 ntu = rx_ring->next_to_use; union iavf_rx_desc *rx_desc; - struct iavf_rx_buffer *bi; /* do nothing if no valid netdev defined */ if (!rx_ring->netdev || !cleaned_count) return false; rx_desc = IAVF_RX_DESC(rx_ring, ntu); - bi = &rx_ring->rx_bi[ntu]; do { - if (!iavf_alloc_mapped_page(rx_ring, bi)) - goto no_buffers; + dma_addr_t addr; - /* sync the buffer for use by the device */ - dma_sync_single_range_for_device(rx_ring->dev, bi->dma, - bi->page_offset, - IAVF_RXBUFFER_3072, - DMA_FROM_DEVICE); + addr = libeth_rx_alloc(&fq, ntu); + if (addr == DMA_MAPPING_ERROR) + goto no_buffers; /* Refresh the desc even if buffer_addrs didn't change * because each write-back erases this info. */ - rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset); + rx_desc->read.pkt_addr = cpu_to_le64(addr); rx_desc++; - bi++; ntu++; if (unlikely(ntu == rx_ring->count)) { rx_desc = IAVF_RX_DESC(rx_ring, 0); - bi = rx_ring->rx_bi; ntu = 0; } @@ -942,6 +887,8 @@ bool iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u16 cleaned_count) if (rx_ring->next_to_use != ntu) iavf_release_rx_desc(rx_ring, ntu); + rx_ring->rx_stats.alloc_page_failed++; + /* make sure to come back via polling to try again after * allocation failure */ @@ -1090,9 +1037,8 @@ static bool iavf_cleanup_headers(struct iavf_ring *rx_ring, struct sk_buff *skb) /** * iavf_add_rx_frag - Add contents of Rx buffer to sk_buff - * @rx_ring: rx descriptor ring to transact packets on - * @rx_buffer: buffer containing page to add * @skb: sk_buff to place the data into + * @rx_buffer: buffer containing page to add * @size: packet length from rx_desc * * This function will add the data contained in rx_buffer->page to the skb. @@ -1100,105 +1046,49 @@ static bool iavf_cleanup_headers(struct iavf_ring *rx_ring, struct sk_buff *skb) * * The function will then update the page offset. 
**/ -static void iavf_add_rx_frag(struct iavf_ring *rx_ring, - struct iavf_rx_buffer *rx_buffer, - struct sk_buff *skb, +static void iavf_add_rx_frag(struct sk_buff *skb, + const struct libeth_fqe *rx_buffer, unsigned int size) { - unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD); - - if (!size) - return; + u32 hr = rx_buffer->page->pp->p.offset; skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page, - rx_buffer->page_offset, size, truesize); -} - -/** - * iavf_get_rx_buffer - Fetch Rx buffer and synchronize data for use - * @rx_ring: rx descriptor ring to transact packets on - * @size: size of buffer to add to skb - * - * This function will pull an Rx buffer from the ring and synchronize it - * for use by the CPU. - */ -static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring, - const unsigned int size) -{ - struct iavf_rx_buffer *rx_buffer; - - rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean]; - prefetchw(rx_buffer->page); - if (!size) - return rx_buffer; - - /* we are reusing so sync this buffer for CPU use */ - dma_sync_single_range_for_cpu(rx_ring->dev, - rx_buffer->dma, - rx_buffer->page_offset, - size, - DMA_FROM_DEVICE); - - return rx_buffer; + rx_buffer->offset + hr, size, rx_buffer->truesize); } /** * iavf_build_skb - Build skb around an existing buffer - * @rx_ring: Rx descriptor ring to transact packets on * @rx_buffer: Rx buffer to pull data from * @size: size of buffer to add to skb * * This function builds an skb around an existing Rx buffer, taking care * to set up the skb correctly and avoid any memcpy overhead. */ -static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring, - struct iavf_rx_buffer *rx_buffer, +static struct sk_buff *iavf_build_skb(const struct libeth_fqe *rx_buffer, unsigned int size) { - void *va; - unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) + - SKB_DATA_ALIGN(IAVF_SKB_PAD + size); + u32 hr = rx_buffer->page->pp->p.offset; struct sk_buff *skb; + void *va; - if (!rx_buffer || !size) - return NULL; /* prefetch first cache line of first page */ - va = page_address(rx_buffer->page) + rx_buffer->page_offset; - net_prefetch(va); + va = page_address(rx_buffer->page) + rx_buffer->offset; + net_prefetch(va + hr); /* build an skb around the page buffer */ - skb = napi_build_skb(va - IAVF_SKB_PAD, truesize); + skb = napi_build_skb(va, rx_buffer->truesize); if (unlikely(!skb)) return NULL; + skb_mark_for_recycle(skb); + /* update pointers within the skb to store the data */ - skb_reserve(skb, IAVF_SKB_PAD); + skb_reserve(skb, hr); __skb_put(skb, size); return skb; } -/** - * iavf_put_rx_buffer - Unmap used buffer - * @rx_ring: rx descriptor ring to transact packets on - * @rx_buffer: rx buffer to pull data from - * - * This function will unmap the buffer after it's written by HW. 
- */ -static void iavf_put_rx_buffer(struct iavf_ring *rx_ring, - struct iavf_rx_buffer *rx_buffer) -{ - if (!rx_buffer) - return; - - /* we are not reusing the buffer so unmap it */ - dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma, PAGE_SIZE, - DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR); - - /* clear contents of buffer_info */ - rx_buffer->page = NULL; -} - /** * iavf_is_non_eop - process handling of non-EOP buffers * @rx_ring: Rx ring being processed @@ -1252,7 +1142,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) bool failure = false; while (likely(total_rx_packets < (unsigned int)budget)) { - struct iavf_rx_buffer *rx_buffer; + struct libeth_fqe *rx_buffer; union iavf_rx_desc *rx_desc; unsigned int size; u16 vlan_tag = 0; @@ -1287,13 +1177,16 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) size = FIELD_GET(IAVF_RXD_QW1_LENGTH_PBUF_MASK, qword); iavf_trace(clean_rx_irq, rx_ring, rx_desc, skb); + + rx_buffer = &rx_ring->rx_fqes[rx_ring->next_to_clean]; + if (!libeth_rx_sync_for_cpu(rx_buffer, size)) + goto skip_data; /* retrieve a buffer from the ring */ if (skb) - iavf_add_rx_frag(rx_ring, rx_buffer, skb, size); + iavf_add_rx_frag(skb, rx_buffer, size); else - skb = iavf_build_skb(rx_ring, rx_buffer, size); + skb = iavf_build_skb(rx_buffer, size); /* exit if we failed to retrieve a buffer */ if (!skb) { @@ -1301,10 +1194,10 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) break; } - iavf_put_rx_buffer(rx_ring, rx_buffer); +skip_data: cleaned_count++; - if (iavf_is_non_eop(rx_ring, rx_desc, skb)) + if (iavf_is_non_eop(rx_ring, rx_desc, skb) || unlikely(!skb)) continue; /* ERR_MASK will only have valid bits if EOP set, and diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index f8e9f859a4f1..1e543f6a7c30 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -1,6 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2013 - 2018 Intel Corporation. */ +#include <linux/net/intel/libie/rx.h> + #include "iavf.h" #include "iavf_prototype.h" @@ -268,13 +270,13 @@ int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter) void iavf_configure_queues(struct iavf_adapter *adapter) { struct virtchnl_vsi_queue_config_info *vqci; - int i, max_frame = adapter->vf_res->max_mtu; int pairs = adapter->num_active_queues; struct virtchnl_queue_pair_info *vqpi; + u32 i, max_frame; size_t len; - if (max_frame > IAVF_MAX_RXBUFFER || !max_frame) - max_frame = IAVF_MAX_RXBUFFER; + max_frame = LIBIE_MAX_RX_FRM_LEN(adapter->rx_rings->pp->p.offset); + max_frame = min_not_zero(adapter->vf_res->max_mtu, max_frame); if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) { /* bail because we already have a command pending */ @@ -304,7 +306,7 @@ void iavf_configure_queues(struct iavf_adapter *adapter) vqpi->rxq.ring_len = adapter->rx_rings[i].count; vqpi->rxq.dma_ring_addr = adapter->rx_rings[i].dma; vqpi->rxq.max_pkt_size = max_frame; - vqpi->rxq.databuffer_size = IAVF_RXBUFFER_3072; + vqpi->rxq.databuffer_size = adapter->rx_rings[i].rx_buf_len; if (CRC_OFFLOAD_ALLOWED(adapter)) vqpi->rxq.crc_disable = !!(adapter->netdev->features & NETIF_F_RXFCS);
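To sum the conversion up, the resulting Rx hot path can be condensed into the illustrative sketch below, stitched from the hunks above (my_process_buffer() is a hypothetical wrapper, not code from the patch; error, budget, and EOP handling trimmed). It shows the libeth touchpoints that replace the old map/sync/unmap/free dance:

static struct sk_buff *my_process_buffer(struct iavf_ring *rx_ring,
					 struct sk_buff *skb, u32 size)
{
	struct libeth_fqe *rx_buffer;

	rx_buffer = &rx_ring->rx_fqes[rx_ring->next_to_clean];

	/* DMA-sync for CPU; zero-length fragments (e.g. FCS-only ones)
	 * are recycled straight back to the Page Pool here
	 */
	if (!libeth_rx_sync_for_cpu(rx_buffer, size))
		return skb;

	if (skb)
		iavf_add_rx_frag(skb, rx_buffer, size);	/* non-first frag */
	else
		skb = iavf_build_skb(rx_buffer, size);	/* headroom == PP offset */

	/* No unmap or free: skb_mark_for_recycle() set in iavf_build_skb()
	 * returns the pages to the pool once the skb is consumed
	 */
	return skb;
}

This is also why the diffstat is so strongly net-negative: allocation, DMA mapping, syncing, and recycling now all live in the Page Pool core and the libeth helpers rather than in the driver.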