From patchwork Mon Dec 11 06:27:58 2023
X-Patchwork-Submitter: "Arinzon, David"
X-Patchwork-Id: 13486703
X-Patchwork-Delegate: kuba@kernel.org
Subject: [PATCH v2 net 1/4] net: ena: Destroy correct number of xdp queues upon failure
Date: Mon, 11 Dec 2023 06:27:58 +0000
Message-ID: <20231211062801.27891-2-darinzon@amazon.com>
In-Reply-To: <20231211062801.27891-1-darinzon@amazon.com>
References: <20231211062801.27891-1-darinzon@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

From: David Arinzon

Upon failure, ena_setup_and_create_all_xdp_queues() freed all of the
adapter's IO TX resources, even though it had set up only the
xdp_num_queues XDP queues. With this patch, only the resources that
were actually allocated before the failure occurred are freed.

Fixes: 548c4940b9f1 ("net: ena: Implement XDP_TX action")
Signed-off-by: Shahar Itzko
Signed-off-by: David Arinzon
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index b5bca48..9884ef3 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -74,6 +74,8 @@ static void ena_unmap_tx_buff(struct ena_ring *tx_ring,
 			      struct ena_tx_buffer *tx_info);
 static int ena_create_io_tx_queues_in_range(struct ena_adapter *adapter,
 					     int first_index, int count);
+static void ena_free_all_io_tx_resources_in_range(struct ena_adapter *adapter,
+						   int first_index, int count);
 
 /* Increase a stat by cnt while holding syncp seqlock on 32bit machines */
 static void ena_increase_stat(u64 *statp, u64 cnt,
@@ -457,23 +459,22 @@ static void ena_init_all_xdp_queues(struct ena_adapter *adapter)
 
 static int ena_setup_and_create_all_xdp_queues(struct ena_adapter *adapter)
 {
+	u32 xdp_first_ring = adapter->xdp_first_ring;
+	u32 xdp_num_queues = adapter->xdp_num_queues;
 	int rc = 0;
 
-	rc = ena_setup_tx_resources_in_range(adapter, adapter->xdp_first_ring,
-					     adapter->xdp_num_queues);
+	rc = ena_setup_tx_resources_in_range(adapter, xdp_first_ring, xdp_num_queues);
 	if (rc)
 		goto setup_err;
 
-	rc = ena_create_io_tx_queues_in_range(adapter,
-					      adapter->xdp_first_ring,
-					      adapter->xdp_num_queues);
+	rc = ena_create_io_tx_queues_in_range(adapter, xdp_first_ring, xdp_num_queues);
 	if (rc)
 		goto create_err;
 
 	return 0;
 
 create_err:
-	ena_free_all_io_tx_resources(adapter);
+	ena_free_all_io_tx_resources_in_range(adapter, xdp_first_ring, xdp_num_queues);
 
 setup_err:
 	return rc;
 }
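
The fix above restores the usual unwind rule for error paths: release only
what the function itself allocated before the failure. Below is a minimal,
self-contained user-space C sketch of that pattern; the ring structure and
the setup_ring()/destroy_ring() helpers are illustrative stand-ins, not the
ENA driver's functions.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-ring resource; stands in for an XDP TX queue. */
struct ring { void *buf; };

static int setup_ring(struct ring *r)    { r->buf = malloc(4096); return r->buf ? 0 : -1; }
static void destroy_ring(struct ring *r) { free(r->buf); r->buf = NULL; }

/* Set up `count` rings starting at `first`; on failure, free only the
 * rings this call created, leaving the rest of the array untouched.
 */
static int setup_rings_in_range(struct ring *rings, int first, int count)
{
	int i, rc = 0;

	for (i = first; i < first + count; i++) {
		rc = setup_ring(&rings[i]);
		if (rc)
			goto unwind;
	}
	return 0;

unwind:
	while (--i >= first)	/* unwind only indexes [first, i) */
		destroy_ring(&rings[i]);
	return rc;
}

int main(void)
{
	struct ring rings[8] = {0};

	if (setup_rings_in_range(rings, 4, 4)) {
		fprintf(stderr, "setup failed, partially created rings were freed\n");
		return 1;
	}
	for (int i = 4; i < 8; i++)	/* normal teardown on success */
		destroy_ring(&rings[i]);
	return 0;
}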
From patchwork Mon Dec 11 06:27:59 2023
X-Patchwork-Submitter: "Arinzon, David"
X-Patchwork-Id: 13486702
X-Patchwork-Delegate: kuba@kernel.org
Subject: [PATCH v2 net 2/4] net: ena: Fix xdp drops handling due to multibuf packets
Date: Mon, 11 Dec 2023 06:27:59 +0000
Message-ID: <20231211062801.27891-3-darinzon@amazon.com>
In-Reply-To: <20231211062801.27891-1-darinzon@amazon.com>
References: <20231211062801.27891-1-darinzon@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

From: David Arinzon

The current XDP code drops packets larger than ENA_XDP_MAX_MTU. This is
the wrong condition: the problem is not the size of the packet but the
number of buffers it spans.

This commit:
1. Identifies and drops XDP multi-buffer packets at the beginning of
   the function.
2. Increases the xdp drop statistic when such a drop occurs.
3. Adds a one-time print so that the user gets a better indication
   that such drops are happening.

Fixes: 838c93dc5449 ("net: ena: implement XDP drop support")
Signed-off-by: Arthur Kiyanovski
Signed-off-by: David Arinzon
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 9884ef3..c7bc572 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -1672,20 +1672,23 @@ static void ena_set_rx_hash(struct ena_ring *rx_ring,
 	}
 }
 
-static int ena_xdp_handle_buff(struct ena_ring *rx_ring, struct xdp_buff *xdp)
+static int ena_xdp_handle_buff(struct ena_ring *rx_ring, struct xdp_buff *xdp, u16 num_descs)
 {
 	struct ena_rx_buffer *rx_info;
 	int ret;
 
+	/* XDP multi-buffer packets not supported */
+	if (unlikely(num_descs > 1)) {
+		netdev_err_once(rx_ring->adapter->netdev,
+				"xdp: dropped unsupported multi-buffer packets\n");
+		ena_increase_stat(&rx_ring->rx_stats.xdp_drop, 1, &rx_ring->syncp);
+		return ENA_XDP_DROP;
+	}
+
 	rx_info = &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id];
 	xdp_prepare_buff(xdp, page_address(rx_info->page),
 			 rx_info->buf_offset,
 			 rx_ring->ena_bufs[0].len, false);
-	/* If for some reason we received a bigger packet than
-	 * we expect, then we simply drop it
-	 */
-	if (unlikely(rx_ring->ena_bufs[0].len > ENA_XDP_MAX_MTU))
-		return ENA_XDP_DROP;
 
 	ret = ena_xdp_execute(rx_ring, xdp);
 
@@ -1754,7 +1757,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 			  ena_rx_ctx.l4_proto, ena_rx_ctx.hash);
 
 		if (ena_xdp_present_ring(rx_ring))
-			xdp_verdict = ena_xdp_handle_buff(rx_ring, &xdp);
+			xdp_verdict = ena_xdp_handle_buff(rx_ring, &xdp, ena_rx_ctx.descs);
 
 		/* allocate skb and fill it */
 		if (xdp_verdict == ENA_XDP_PASS)
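
The change above follows a common early-classification idiom: reject
unsupported frames before doing any work on them, count the drop, and warn
only once. A minimal user-space C sketch of that idiom is shown below; the
frame/stats types and the verdict values are illustrative stand-ins, not the
driver's definitions.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the driver's RX descriptor info and stats. */
struct rx_frame { int num_descs; };
struct rx_stats { unsigned long xdp_drop; };

enum verdict { VERDICT_DROP, VERDICT_PROCESS };

/* Drop any frame that spans more than one descriptor, counting the drop
 * and printing a warning only the first time it happens.
 */
static enum verdict classify_xdp_frame(const struct rx_frame *frame,
				       struct rx_stats *stats)
{
	static bool warned;

	if (frame->num_descs > 1) {
		if (!warned) {
			fprintf(stderr, "xdp: dropping unsupported multi-buffer packet\n");
			warned = true;
		}
		stats->xdp_drop++;
		return VERDICT_DROP;
	}
	return VERDICT_PROCESS;
}

int main(void)
{
	struct rx_stats stats = {0};
	struct rx_frame single = { .num_descs = 1 };
	struct rx_frame multi  = { .num_descs = 3 };

	classify_xdp_frame(&single, &stats);
	classify_xdp_frame(&multi, &stats);
	printf("xdp_drop = %lu\n", stats.xdp_drop);
	return 0;
}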
From patchwork Mon Dec 11 06:28:00 2023
X-Patchwork-Submitter: "Arinzon, David"
X-Patchwork-Id: 13486704
X-Patchwork-Delegate: kuba@kernel.org
Subject: [PATCH v2 net 3/4] net: ena: Fix DMA syncing in XDP path when SWIOTLB is on
Date: Mon, 11 Dec 2023 06:28:00 +0000
Message-ID: <20231211062801.27891-4-darinzon@amazon.com>
In-Reply-To: <20231211062801.27891-1-darinzon@amazon.com>
References: <20231211062801.27891-1-darinzon@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

From: David Arinzon

This patch fixes two issues:

Issue 1
-------

Description
```````````
The current code does not call dma_sync_single_for_cpu() to sync data
from the device-side memory to the CPU-side memory before the XDP code
path uses the CPU-side data. This causes the XDP code path to read
garbage data from the CPU-side memory, resulting in incorrect handling
of the packet by XDP.

Solution
````````
1. Add a call to dma_sync_single_for_cpu() before the XDP code starts
   to use the data in the CPU-side memory.
2. The XDP verdict can be XDP_PASS, in which case there is a fallback
   to the non-XDP code, which also calls dma_sync_single_for_cpu().
   To avoid calling dma_sync_single_for_cpu() twice:
   2.1. Place the dma_sync_single_for_cpu() call where it runs before
        both the XDP and the non-XDP code.
   2.2. Remove the calls to dma_sync_single_for_cpu() in the non-XDP
        code for the first buffer only (rx_copybreak and
        non-rx_copybreak cases), since the newly added call covers
        these cases. The dma_sync_single_for_cpu() call for the second
        buffer onwards stays, because only the first buffer is handled
        by the newly added call. There is no need for special handling
        of the second buffer onwards in the XDP path, since the driver
        currently supports only single-buffer packets in XDP.

Issue 2
-------

Description
```````````
In case the XDP code forwarded the packet (ENA_XDP_FORWARDED),
ena_unmap_rx_buff_attrs() is called with attrs set to 0. This means
that before unmapping the buffer, the internal function
dma_unmap_page_attrs() will also call dma_sync_single_for_cpu() on the
whole buffer (not only on the data part of it). This sync is both
wasteful (a sync was already explicitly called before) and the cause
of a bug, which is explained using the diagram below.

The diagram shows the flow of events causing the bug, in the order
(1)-(4).

CPU side memory area

     (3)convert_to_xdp_frame() initializes the
        headroom with xdpf metadata
                  ||
                  \/
        ___________________________________
       |                                   |
   0   |                                   V                          4K
 -----------------------------------------------------------------------
 | xdpf->data | other xdpf |  < data >  | tailroom             ||...|  |
 |            | fields     |            |  GARBAGE             ||   |  |
 -----------------------------------------------------------------------
       /\                                      /\
       ||                                      ||
 (4)ena_unmap_rx_buff_attrs() calls      (2)dma_sync_single_for_cpu()
    dma_sync_single_for_cpu() on the        copies data from device
    whole buffer page, overwriting          side to CPU side memory
    the xdpf->data with GARBAGE.                   ||

   0                                                                   4K
 -----------------------------------------------------------------------
 | headroom                |  < data >  | tailroom             ||...|  |
 | GARBAGE                 |            |  GARBAGE             ||   |  |
 -----------------------------------------------------------------------

 Device side memory area
                                               /\
                                               ||
                            (1) device writes RX packet data

After the call to ena_unmap_rx_buff_attrs() in (4), xdpf->data is
corrupted, so when it is later accessed in
ena_clean_xdp_irq()->xdp_return_frame(), it causes a page fault,
crashing the kernel.

Solution
````````
Explicitly tell ena_unmap_rx_buff_attrs() not to call
dma_sync_single_for_cpu() by passing it the DMA_ATTR_SKIP_CPU_SYNC
flag.

Fixes: f7d625adeb7b ("net: ena: Add dynamic recycling mechanism for rx buffers")
Signed-off-by: Arthur Kiyanovski
Signed-off-by: David Arinzon
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 23 ++++++++------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index c7bc572..c44c44e 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -1493,11 +1493,6 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
 		if (unlikely(!skb))
 			return NULL;
 
-		/* sync this buffer for CPU use */
-		dma_sync_single_for_cpu(rx_ring->dev,
-					dma_unmap_addr(&rx_info->ena_buf, paddr) + pkt_offset,
-					len,
-					DMA_FROM_DEVICE);
 		skb_copy_to_linear_data(skb, buf_addr + buf_offset, len);
 		dma_sync_single_for_device(rx_ring->dev,
 					   dma_unmap_addr(&rx_info->ena_buf, paddr) + pkt_offset,
@@ -1516,17 +1511,10 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
 
 	buf_len = SKB_DATA_ALIGN(len + buf_offset + tailroom);
 
-	pre_reuse_paddr = dma_unmap_addr(&rx_info->ena_buf, paddr);
-
 	/* If XDP isn't loaded try to reuse part of the RX buffer */
 	reuse_rx_buf_page = !is_xdp_loaded &&
 			    ena_try_rx_buf_page_reuse(rx_info, buf_len, len, pkt_offset);
 
-	dma_sync_single_for_cpu(rx_ring->dev,
-				pre_reuse_paddr + pkt_offset,
-				len,
-				DMA_FROM_DEVICE);
-
 	if (!reuse_rx_buf_page)
 		ena_unmap_rx_buff_attrs(rx_ring, rx_info, DMA_ATTR_SKIP_CPU_SYNC);
 
@@ -1723,6 +1711,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 	int xdp_flags = 0;
 	int total_len = 0;
 	int xdp_verdict;
+	u8 pkt_offset;
 	int rc = 0;
 	int i;
 
@@ -1749,13 +1738,19 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 
 		/* First descriptor might have an offset set by the device */
 		rx_info = &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id];
-		rx_info->buf_offset += ena_rx_ctx.pkt_offset;
+		pkt_offset = ena_rx_ctx.pkt_offset;
+		rx_info->buf_offset += pkt_offset;
 
 		netif_dbg(rx_ring->adapter, rx_status, rx_ring->netdev,
 			  "rx_poll: q %d got packet from ena. descs #: %d l3 proto %d l4 proto %d hash: %x\n",
 			  rx_ring->qid, ena_rx_ctx.descs, ena_rx_ctx.l3_proto,
 			  ena_rx_ctx.l4_proto, ena_rx_ctx.hash);
 
+		dma_sync_single_for_cpu(rx_ring->dev,
+					dma_unmap_addr(&rx_info->ena_buf, paddr) + pkt_offset,
+					rx_ring->ena_bufs[0].len,
+					DMA_FROM_DEVICE);
+
 		if (ena_xdp_present_ring(rx_ring))
 			xdp_verdict = ena_xdp_handle_buff(rx_ring, &xdp, ena_rx_ctx.descs);
 
@@ -1781,7 +1776,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 		if (xdp_verdict & ENA_XDP_FORWARDED) {
 			ena_unmap_rx_buff_attrs(rx_ring,
 						&rx_ring->rx_buffer_info[req_id],
-						0);
+						DMA_ATTR_SKIP_CPU_SYNC);
 			rx_ring->rx_buffer_info[req_id].page = NULL;
 		}
 	}
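
Both issues above come down to streaming-DMA ownership rules. The following
generic (non-ENA) kernel-style sketch shows the pattern the patch converges
on: map the RX page without an implicit sync, sync only the packet bytes back
to the CPU before any code reads them, and skip the implicit sync again when
unmapping a page whose headroom the CPU has meanwhile written. The device,
page and length parameters are illustrative placeholders, not driver fields.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/mm.h>

/* Generic RX-buffer life cycle under streaming DMA. */
static int rx_buf_example(struct device *dev, struct page *page,
			  size_t pkt_offset, size_t pkt_len)
{
	dma_addr_t dma;

	/* Map the whole page for the device; DMA_ATTR_SKIP_CPU_SYNC skips
	 * the implicit sync at map time (we sync explicitly later).
	 */
	dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE,
				 DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* ... the device DMAs a packet of pkt_len bytes at pkt_offset ... */

	/* Hand ownership of just the packet bytes back to the CPU before
	 * any code (XDP or the regular stack) reads them. With SWIOTLB,
	 * this is what actually copies the bounce buffer into the page.
	 */
	dma_sync_single_for_cpu(dev, dma + pkt_offset, pkt_len,
				DMA_FROM_DEVICE);

	/* When the page is handed off (e.g. XDP_REDIRECT), unmap without
	 * another implicit sync: a full-page sync here would overwrite
	 * CPU-side writes such as xdp_frame metadata in the headroom.
	 */
	dma_unmap_page_attrs(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE,
			     DMA_ATTR_SKIP_CPU_SYNC);
	return 0;
}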
From patchwork Mon Dec 11 06:28:01 2023
X-Patchwork-Submitter: "Arinzon, David"
X-Patchwork-Id: 13486705
X-Patchwork-Delegate: kuba@kernel.org
Subject: [PATCH v2 net 4/4] net: ena: Fix XDP redirection error
Date: Mon, 11 Dec 2023 06:28:01 +0000
Message-ID: <20231211062801.27891-5-darinzon@amazon.com>
In-Reply-To: <20231211062801.27891-1-darinzon@amazon.com>
References: <20231211062801.27891-1-darinzon@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

From: David Arinzon

When sending TX packets, the meta descriptor can be all zeroes because
no meta information is required (as in XDP). This patch removes the
meta-validity check: with `disable_meta_caching` enabled, such TX
packets would otherwise be dropped.

Fixes: 0e3a3f6dacf0 ("net: ena: support new LLQ acceleration mode")
Signed-off-by: Shay Agroskin
Signed-off-by: David Arinzon
---
 drivers/net/ethernet/amazon/ena/ena_eth_com.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.c b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
index 3d6f0a4..f9f8862 100644
--- a/drivers/net/ethernet/amazon/ena/ena_eth_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
@@ -328,9 +328,6 @@ static int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq,
 	 * compare it to the stored version, just create the meta
 	 */
 	if (io_sq->disable_meta_caching) {
-		if (unlikely(!ena_tx_ctx->meta_valid))
-			return -EINVAL;
-
 		*have_meta = true;
 		return ena_com_create_meta(io_sq, ena_meta);
 	}
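
The resulting decision logic can be pictured with a small stand-alone C
sketch: with meta caching disabled the meta descriptor is always emitted,
even when it is all zeroes (as for XDP TX), while with caching enabled it is
emitted only when it differs from the cached copy. The struct fields and the
memcmp-based comparison below are hypothetical stand-ins for the driver's
internals.

#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-ins for the driver's meta descriptor and queue. */
struct tx_meta { unsigned int l3_hdr_len, l3_hdr_offset, l4_hdr_len, mss; };

struct tx_queue {
	bool disable_meta_caching;
	struct tx_meta cached_meta;
};

/* After the patch: with meta caching disabled, the meta descriptor is
 * always emitted, even if it is all zeroes. With caching enabled, it is
 * emitted only when it differs from the cached copy.
 */
static bool need_meta_descriptor(struct tx_queue *q, const struct tx_meta *meta)
{
	if (q->disable_meta_caching)
		return true;

	if (memcmp(meta, &q->cached_meta, sizeof(*meta)) != 0) {
		q->cached_meta = *meta;	/* remember the new meta */
		return true;
	}
	return false;			/* reuse the cached descriptor */
}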