From patchwork Thu Apr 4 15:43:56 2024
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13618012
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Alexander Duyck, Yunsheng Lin,
 Jesper Dangaard Brouer, Ilias Apalodimas, Christoph Lameter,
 Vlastimil Babka, Andrew Morton, nex.sw.ncis.osdt.itp.upstreaming@intel.com,
 netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v9 3/9] iavf: drop page splitting and recycling
Date: Thu, 4 Apr 2024 17:43:56 +0200
Message-ID: <20240404154402.3581254-4-aleksander.lobakin@intel.com>
In-Reply-To: <20240404154402.3581254-1-aleksander.lobakin@intel.com>
References: <20240404154402.3581254-1-aleksander.lobakin@intel.com>

As an intermediate step, remove all page splitting/recycling code. Just
always allocate a new page and don't touch its refcount, so that it gets
freed by the core stack later.

The same goes for the "in-place" recycling, i.e. when an unused buffer
gets assigned to the first descriptor that needs refilling. In some
cases, this meant moving up to 63 &iavf_rx_buf structures around the
ring on a per-field basis -- not something you want on the hotpath.

The change allows us to greatly simplify certain parts of the code:

Function: add/remove: 0/2 grow/shrink: 0/7 up/down: 0/-744 (-744)

Although the array of &iavf_rx_buf is barely used now and could be
replaced with just a page pointer array, don't touch it for now to avoid
complicating its replacement with the libie Rx buffer struct later on.

Unsurprisingly, perf loses up to 30% here, but that regression will go
away once PP lands.

Note that the iavf_rx_pg_*() definitions are left in place to reduce the
diffstat. They will be removed with the conversion to Page Pool.

Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/iavf/iavf_txrx.h |  65 --------
 drivers/net/ethernet/intel/iavf/iavf_type.h |   2 -
 drivers/net/ethernet/intel/iavf/iavf_main.c |  24 +--
 drivers/net/ethernet/intel/iavf/iavf_txrx.c | 152 +-----------------
 .../net/ethernet/intel/iavf/iavf_virtchnl.c |   8 +-
 5 files changed, 10 insertions(+), 241 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index 68543efdd29b..e01777531635 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -81,8 +81,6 @@ enum iavf_dyn_idx_t {
	 BIT_ULL(IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP))
 /* Supported Rx Buffer Sizes (a multiple of 128) */
-#define IAVF_RXBUFFER_1536  1536  /* 128B aligned standard Ethernet frame */
-#define IAVF_RXBUFFER_2048  2048
 #define IAVF_RXBUFFER_3072  3072  /* Used for large frames w/ padding */
 #define IAVF_MAX_RXBUFFER   9728  /* largest size for single descriptor */
@@ -92,57 +90,7 @@ enum iavf_dyn_idx_t {
 #define IAVF_RX_DMA_ATTR \
	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
-/* Attempt to maximize the headroom available for incoming frames. We
- * use a 2K buffer for receives and need 1536/1534 to store the data for
- * the frame. This leaves us with 512 bytes of room. From that we need
- * to deduct the space needed for the shared info and the padding needed
- * to IP align the frame.
- *
- * Note: For cache line sizes 256 or larger this value is going to end
- *	 up negative. In these cases we should fall back to the legacy
- *	 receive path.
- */
-#if (PAGE_SIZE < 8192)
-#define IAVF_2K_TOO_SMALL_WITH_PADDING \
-((NET_SKB_PAD + IAVF_RXBUFFER_1536) > SKB_WITH_OVERHEAD(IAVF_RXBUFFER_2048))
-
-static inline int iavf_compute_pad(int rx_buf_len)
-{
-	int page_size, pad_size;
-
-	page_size = ALIGN(rx_buf_len, PAGE_SIZE / 2);
-	pad_size = SKB_WITH_OVERHEAD(page_size) - rx_buf_len;
-
-	return pad_size;
-}
-
-static inline int iavf_skb_pad(void)
-{
-	int rx_buf_len;
-
-	/* If a 2K buffer cannot handle a standard Ethernet frame then
-	 * optimize padding for a 3K buffer instead of a 1.5K buffer.
-	 *
-	 * For a 3K buffer we need to add enough padding to allow for
-	 * tailroom due to NET_IP_ALIGN possibly shifting us out of
-	 * cache-line alignment.
-	 */
-	if (IAVF_2K_TOO_SMALL_WITH_PADDING)
-		rx_buf_len = IAVF_RXBUFFER_3072 + SKB_DATA_ALIGN(NET_IP_ALIGN);
-	else
-		rx_buf_len = IAVF_RXBUFFER_1536;
-
-	/* if needed make room for NET_IP_ALIGN */
-	rx_buf_len -= NET_IP_ALIGN;
-
-	return iavf_compute_pad(rx_buf_len);
-}
-
-#define IAVF_SKB_PAD iavf_skb_pad()
-#else
-#define IAVF_2K_TOO_SMALL_WITH_PADDING false
 #define IAVF_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
-#endif
 /**
  * iavf_test_staterr - tests bits in Rx descriptor status and error fields
@@ -265,12 +213,7 @@ struct iavf_tx_buffer {
 struct iavf_rx_buffer {
	dma_addr_t dma;
	struct page *page;
-#if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536)
	__u32 page_offset;
-#else
-	__u16 page_offset;
-#endif
-	__u16 pagecnt_bias;
 };
 struct iavf_queue_stats {
@@ -292,8 +235,6 @@ struct iavf_rx_queue_stats {
	u64 non_eop_descs;
	u64 alloc_page_failed;
	u64 alloc_buff_failed;
-	u64 page_reuse_count;
-	u64 realloc_count;
 };
 enum iavf_ring_state_t {
@@ -337,7 +278,6 @@ struct iavf_ring {
	u16 count;			/* Number of descriptors */
	u16 reg_idx;			/* HW register index of the ring */
-	u16 rx_buf_len;
	/* used in interrupt processing */
	u16 next_to_use;
@@ -373,7 +313,6 @@ struct iavf_ring {
	struct iavf_q_vector *q_vector;	/* Backreference to associated vector */
	struct rcu_head rcu;		/* to avoid race on free */
-	u16 next_to_alloc;
	struct sk_buff *skb;		/* When iavf_clean_rx_ring_irq() must
					 * return before it sees the EOP for
					 * the current packet, we save that skb
@@ -407,10 +346,6 @@ struct iavf_ring_container {
 static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring)
 {
-#if (PAGE_SIZE < 8192)
-	if (ring->rx_buf_len > (PAGE_SIZE / 2))
-		return 1;
-#endif
	return 0;
 }
diff --git a/drivers/net/ethernet/intel/iavf/iavf_type.h b/drivers/net/ethernet/intel/iavf/iavf_type.h
index 23ded4fcd94f..f6b09e57abce 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_type.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_type.h
@@ -10,8 +10,6 @@
 #include "iavf_adminq.h"
 #include "iavf_devids.h"
-#define IAVF_RXQ_CTX_DBUFF_SHIFT 7
-
 /* IAVF_MASK is a macro used on 32 bit registers */
 #define IAVF_MASK(mask, shift)  ((u32)(mask) << (shift))
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 5eb7379956e4..ffb71a62b105 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -715,32 +715,10 @@ static void iavf_configure_tx(struct iavf_adapter *adapter)
 **/
 static void iavf_configure_rx(struct iavf_adapter *adapter)
 {
-	unsigned int rx_buf_len = IAVF_RXBUFFER_2048;
	struct iavf_hw *hw = &adapter->hw;
-	int i;
-
-	if (PAGE_SIZE < 8192) {
-		struct net_device *netdev = adapter->netdev;
-		/* For jumbo frames on systems with 4K pages we have to use
-		 * an order 1 page, so we might as well increase the size
-		 * of our Rx buffer to make better use of the available space
-		 */
-		rx_buf_len = IAVF_RXBUFFER_3072;
-
-		/* We use a 1536 buffer size for configurations with
-		 * standard Ethernet mtu. On x86 this gives us enough room
-		 * for shared info and 192 bytes of padding.
-		 */
-		if (!IAVF_2K_TOO_SMALL_WITH_PADDING &&
-		    (netdev->mtu <= ETH_DATA_LEN))
-			rx_buf_len = IAVF_RXBUFFER_1536 - NET_IP_ALIGN;
-	}
-
-	for (i = 0; i < adapter->num_active_queues; i++) {
+	for (u32 i = 0; i < adapter->num_active_queues; i++)
		adapter->rx_rings[i].tail = hw->hw_addr + IAVF_QRX_TAIL1(i);
-		adapter->rx_rings[i].rx_buf_len = rx_buf_len;
-	}
 }
 /**
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 4b61675b4548..a14f7f211150 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -715,7 +715,7 @@ static void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
		dma_sync_single_range_for_cpu(rx_ring->dev, rx_bi->dma,
					      rx_bi->page_offset,
-					      rx_ring->rx_buf_len,
+					      IAVF_RXBUFFER_3072,
					      DMA_FROM_DEVICE);
		/* free resources associated with mapping */
@@ -724,7 +724,7 @@ static void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
					     DMA_FROM_DEVICE,
					     IAVF_RX_DMA_ATTR);
-		__page_frag_cache_drain(rx_bi->page, rx_bi->pagecnt_bias);
+		__free_page(rx_bi->page);
		rx_bi->page = NULL;
		rx_bi->page_offset = 0;
@@ -736,7 +736,6 @@ static void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
	/* Zero out the descriptor ring */
	memset(rx_ring->desc, 0, rx_ring->size);
-	rx_ring->next_to_alloc = 0;
	rx_ring->next_to_clean = 0;
	rx_ring->next_to_use = 0;
 }
@@ -792,7 +791,6 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
		goto err;
	}
-	rx_ring->next_to_alloc = 0;
	rx_ring->next_to_clean = 0;
	rx_ring->next_to_use = 0;
@@ -812,9 +810,6 @@ static void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
 {
	rx_ring->next_to_use = val;
-	/* update next to alloc since we have filled the ring */
-	rx_ring->next_to_alloc = val;
-
	/* Force memory writes to complete before letting h/w
	 * know there are new descriptors to fetch. (Only
	 * applicable for weak-ordered memory model archs,
@@ -838,12 +833,6 @@ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring,
	struct page *page = bi->page;
	dma_addr_t dma;
-	/* since we are recycling buffers we should seldom need to alloc */
-	if (likely(page)) {
-		rx_ring->rx_stats.page_reuse_count++;
-		return true;
-	}
-
	/* alloc new page for storage */
	page = dev_alloc_pages(iavf_rx_pg_order(rx_ring));
	if (unlikely(!page)) {
@@ -870,9 +859,6 @@ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring,
	bi->page = page;
	bi->page_offset = IAVF_SKB_PAD;
-	/* initialize pagecnt_bias to 1 representing we fully own page */
-	bi->pagecnt_bias = 1;
-
	return true;
 }
@@ -924,7 +910,7 @@ bool iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u16 cleaned_count)
		/* sync the buffer for use by the device */
		dma_sync_single_range_for_device(rx_ring->dev, bi->dma,
						 bi->page_offset,
-						 rx_ring->rx_buf_len,
+						 IAVF_RXBUFFER_3072,
						 DMA_FROM_DEVICE);
		/* Refresh the desc even if buffer_addrs didn't change
@@ -1102,91 +1088,6 @@ static bool iavf_cleanup_headers(struct iavf_ring *rx_ring, struct sk_buff *skb)
	return false;
 }
-/**
- * iavf_reuse_rx_page - page flip buffer and store it back on the ring
- * @rx_ring: rx descriptor ring to store buffers on
- * @old_buff: donor buffer to have page reused
- *
- * Synchronizes page for reuse by the adapter
- **/
-static void iavf_reuse_rx_page(struct iavf_ring *rx_ring,
-			       struct iavf_rx_buffer *old_buff)
-{
-	struct iavf_rx_buffer *new_buff;
-	u16 nta = rx_ring->next_to_alloc;
-
-	new_buff = &rx_ring->rx_bi[nta];
-
-	/* update, and store next to alloc */
-	nta++;
-	rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
-
-	/* transfer page from old buffer to new buffer */
-	new_buff->dma		= old_buff->dma;
-	new_buff->page		= old_buff->page;
-	new_buff->page_offset	= old_buff->page_offset;
-	new_buff->pagecnt_bias	= old_buff->pagecnt_bias;
-}
-
-/**
- * iavf_can_reuse_rx_page - Determine if this page can be reused by
- * the adapter for another receive
- *
- * @rx_buffer: buffer containing the page
- *
- * If page is reusable, rx_buffer->page_offset is adjusted to point to
- * an unused region in the page.
- *
- * For small pages, @truesize will be a constant value, half the size
- * of the memory at page. We'll attempt to alternate between high and
- * low halves of the page, with one half ready for use by the hardware
- * and the other half being consumed by the stack. We use the page
- * ref count to determine whether the stack has finished consuming the
- * portion of this page that was passed up with a previous packet. If
- * the page ref count is >1, we'll assume the "other" half page is
- * still busy, and this page cannot be reused.
- *
- * For larger pages, @truesize will be the actual space used by the
- * received packet (adjusted upward to an even multiple of the cache
- * line size). This will advance through the page by the amount
- * actually consumed by the received packets while there is still
- * space for a buffer. Each region of larger pages will be used at
- * most once, after which the page will not be reused.
- *
- * In either case, if the page is reusable its refcount is increased.
- **/
-static bool iavf_can_reuse_rx_page(struct iavf_rx_buffer *rx_buffer)
-{
-	unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
-	struct page *page = rx_buffer->page;
-
-	/* Is any reuse possible? */
-	if (!dev_page_is_reusable(page))
-		return false;
-
-#if (PAGE_SIZE < 8192)
-	/* if we are only owner of page we can reuse it */
-	if (unlikely((page_count(page) - pagecnt_bias) > 1))
-		return false;
-#else
-#define IAVF_LAST_OFFSET \
-	(SKB_WITH_OVERHEAD(PAGE_SIZE) - IAVF_RXBUFFER_2048)
-	if (rx_buffer->page_offset > IAVF_LAST_OFFSET)
-		return false;
-#endif
-
-	/* If we have drained the page fragment pool we need to update
-	 * the pagecnt_bias and page count so that we fully restock the
-	 * number of references the driver holds.
-	 */
-	if (unlikely(!pagecnt_bias)) {
-		page_ref_add(page, USHRT_MAX);
-		rx_buffer->pagecnt_bias = USHRT_MAX;
-	}
-
-	return true;
-}
-
 /**
  * iavf_add_rx_frag - Add contents of Rx buffer to sk_buff
  * @rx_ring: rx descriptor ring to transact packets on
@@ -1204,24 +1105,13 @@ static void iavf_add_rx_frag(struct iavf_ring *rx_ring,
			     struct sk_buff *skb,
			     unsigned int size)
 {
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
-#else
	unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD);
-#endif
	if (!size)
		return;
	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
			rx_buffer->page_offset, size, truesize);
-
-	/* page is being used so we must update the page offset */
-#if (PAGE_SIZE < 8192)
-	rx_buffer->page_offset ^= truesize;
-#else
-	rx_buffer->page_offset += truesize;
-#endif
 }
 /**
@@ -1249,9 +1139,6 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring,
				      size,
				      DMA_FROM_DEVICE);
-	/* We have pulled a buffer for use, so decrement pagecnt_bias */
-	rx_buffer->pagecnt_bias--;
-
	return rx_buffer;
 }
@@ -1269,12 +1156,8 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
				      unsigned int size)
 {
	void *va;
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
-#else
	unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
				SKB_DATA_ALIGN(IAVF_SKB_PAD + size);
-#endif
	struct sk_buff *skb;
	if (!rx_buffer || !size)
@@ -1292,23 +1175,15 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
	skb_reserve(skb, IAVF_SKB_PAD);
	__skb_put(skb, size);
-	/* buffer is used by skb, update page_offset */
-#if (PAGE_SIZE < 8192)
-	rx_buffer->page_offset ^= truesize;
-#else
-	rx_buffer->page_offset += truesize;
-#endif
-
	return skb;
 }
 /**
- * iavf_put_rx_buffer - Clean up used buffer and either recycle or free
+ * iavf_put_rx_buffer - Unmap used buffer
  * @rx_ring: rx descriptor ring to transact packets on
  * @rx_buffer: rx buffer to pull data from
  *
- * This function will clean up the contents of the rx_buffer. It will
- * either recycle the buffer or unmap it and free the associated resources.
+ * This function will unmap the buffer after it's written by HW.
 */
 static void iavf_put_rx_buffer(struct iavf_ring *rx_ring,
			       struct iavf_rx_buffer *rx_buffer)
@@ -1316,18 +1191,9 @@ static void iavf_put_rx_buffer(struct iavf_ring *rx_ring,
	if (!rx_buffer)
		return;
-	if (iavf_can_reuse_rx_page(rx_buffer)) {
-		/* hand second half of page back to the ring */
-		iavf_reuse_rx_page(rx_ring, rx_buffer);
-		rx_ring->rx_stats.page_reuse_count++;
-	} else {
-		/* we are not reusing the buffer so unmap it */
-		dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,
-				     iavf_rx_pg_size(rx_ring),
-				     DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
-		__page_frag_cache_drain(rx_buffer->page,
-					rx_buffer->pagecnt_bias);
-	}
+	/* we are not reusing the buffer so unmap it */
+	dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma, PAGE_SIZE,
+			     DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
	/* clear contents of buffer_info */
	rx_buffer->page = NULL;
@@ -1432,8 +1298,6 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
		/* exit if we failed to retrieve a buffer */
		if (!skb) {
			rx_ring->rx_stats.alloc_buff_failed++;
-			if (rx_buffer && size)
-				rx_buffer->pagecnt_bias++;
			break;
		}
diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index a31df5af0473..f8e9f859a4f1 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -288,10 +288,6 @@ void iavf_configure_queues(struct iavf_adapter *adapter)
	if (!vqci)
		return;
-	/* Limit maximum frame size when jumbo frames is not enabled */
-	if (adapter->netdev->mtu <= ETH_DATA_LEN)
-		max_frame = IAVF_RXBUFFER_1536 - NET_IP_ALIGN;
-
	vqci->vsi_id = adapter->vsi_res->vsi_id;
	vqci->num_queue_pairs = pairs;
	vqpi = vqci->qpair;
@@ -308,9 +304,7 @@ void iavf_configure_queues(struct iavf_adapter *adapter)
		vqpi->rxq.ring_len = adapter->rx_rings[i].count;
		vqpi->rxq.dma_ring_addr = adapter->rx_rings[i].dma;
		vqpi->rxq.max_pkt_size = max_frame;
-		vqpi->rxq.databuffer_size =
-			ALIGN(adapter->rx_rings[i].rx_buf_len,
-			      BIT_ULL(IAVF_RXQ_CTX_DBUFF_SHIFT));
+		vqpi->rxq.databuffer_size = IAVF_RXBUFFER_3072;
		if (CRC_OFFLOAD_ALLOWED(adapter))
			vqpi->rxq.crc_disable = !!(adapter->netdev->features &
						   NETIF_F_RXFCS);
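
For readers following the refill path, below is a condensed sketch of
roughly what iavf_alloc_mapped_page() looks like after this change: one
fresh order-0 page per descriptor, mapped for DMA, with no pagecnt_bias
and no reuse check, so the stack's freeing of the skb frags is what
ultimately releases the page. The hunks above only show the deleted
lines; the allocation/mapping and error-handling details here are
reconstructed from the surrounding driver context and are illustrative,
not an excerpt of the resulting file.

/* Illustrative sketch (not part of the patch): the simplified refill
 * path. Every call allocates a fresh page; iavf_put_rx_buffer() only
 * unmaps it, and the core stack frees it once the skb is consumed.
 */
static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring,
				   struct iavf_rx_buffer *bi)
{
	struct page *page;
	dma_addr_t dma;

	/* alloc new page for storage; iavf_rx_pg_order() now always
	 * returns 0, i.e. a single 4K page on most systems
	 */
	page = dev_alloc_pages(iavf_rx_pg_order(rx_ring));
	if (unlikely(!page)) {
		rx_ring->rx_stats.alloc_page_failed++;
		return false;
	}

	/* map the whole page for use by the device */
	dma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE,
				 DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
	if (dma_mapping_error(rx_ring->dev, dma)) {
		__free_page(page);
		rx_ring->rx_stats.alloc_page_failed++;
		return false;
	}

	bi->dma = dma;
	bi->page = page;
	bi->page_offset = IAVF_SKB_PAD;

	return true;
}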