From patchwork Tue May 28 13:14:19 2024
X-Patchwork-Submitter: "Fijalkowski, Maciej"
X-Patchwork-Id: 13676722
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
 magnus.karlsson@intel.com, michal.kubiak@intel.com,
 larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH iwl-net 01/11] ice: respect netif readiness in AF_XDP ZC related ndo's
Date: Tue, 28 May 2024 15:14:19 +0200
Message-Id: <20240528131429.3012910-2-maciej.fijalkowski@intel.com>
In-Reply-To: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
References: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
From: Michal Kubiak

Address a scenario in which XSK ZC Tx produces descriptors to the XDP
Tx ring when the link is either not yet fully initialized or the
process of stopping the netdev has already started. To avoid this, add
checks against carrier readiness in ice_xsk_wakeup() and in
ice_xmit_zc().

One could argue that bailing out early in ice_xsk_wakeup() would be
sufficient, but given that we produce Tx descriptors on behalf of the
NAPI that is triggered for Rx traffic, the check in ice_xmit_zc() is
needed as well. Bringing the link up is an asynchronous event executed
within ice_service_task, so even though the interface has been brought
up, there is still a time frame in which the link is not yet ok.

Without this patch, when AF_XDP ZC Tx is used simultaneously with stack
Tx, Tx timeouts occur after going through a link flap (admin brings the
interface down and then up again). HW seems unable to transmit a
descriptor to the wire after the HW tail register bump, which in turn
causes the __QUEUE_STATE_STACK_XOFF bit to be set forever, as
netdev_tx_completed_queue() sees no cleaned bytes on its input.

Fixes: 126cdfe1007a ("ice: xsk: Improve AF_XDP ZC Tx and use batching API")
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Michal Kubiak
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 2015f66b0cf9..1bd4b054dd80 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -1048,6 +1048,10 @@ bool ice_xmit_zc(struct ice_tx_ring *xdp_ring)
 
         ice_clean_xdp_irq_zc(xdp_ring);
 
+        if (!netif_carrier_ok(xdp_ring->vsi->netdev) ||
+            !netif_running(xdp_ring->vsi->netdev))
+                return true;
+
         budget = ICE_DESC_UNUSED(xdp_ring);
         budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring));
 
@@ -1091,7 +1095,7 @@ ice_xsk_wakeup(struct net_device *netdev, u32 queue_id,
         struct ice_vsi *vsi = np->vsi;
         struct ice_tx_ring *ring;
 
-        if (test_bit(ICE_VSI_DOWN, vsi->state))
+        if (test_bit(ICE_VSI_DOWN, vsi->state) || !netif_carrier_ok(netdev))
                 return -ENETDOWN;
 
         if (!ice_is_xdp_ena_vsi(vsi))
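The guard added above can be read in isolation as the sketch below. It
is a minimal, hypothetical fragment rather than driver code: the
dummy_* names are made up and the ring layout is simplified to a single
netdev back-pointer (the real driver reaches it via
xdp_ring->vsi->netdev).

#include <linux/netdevice.h>

/* Hypothetical trimmed-down ring for illustration only. */
struct dummy_tx_ring {
        struct net_device *netdev;
};

/* ZC Tx producer: bail out before producing descriptors and bumping
 * the HW tail register whenever the carrier is not up yet (link-up is
 * asynchronous) or the netdev is already being stopped.
 */
static bool dummy_xmit_zc(struct dummy_tx_ring *ring)
{
        if (!netif_carrier_ok(ring->netdev) ||
            !netif_running(ring->netdev))
                return true;    /* report "done" to the NAPI caller */

        /* ... produce Tx descriptors and bump the tail register ... */
        return true;
}

The same predicate guards the wakeup path: returning -ENETDOWN from the
ndo keeps user space from kicking a queue whose link is not usable yet.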
From patchwork Tue May 28 13:14:20 2024
X-Patchwork-Submitter: "Fijalkowski, Maciej"
X-Patchwork-Id: 13676721
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
 magnus.karlsson@intel.com, michal.kubiak@intel.com,
 larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH iwl-net 02/11] ice: don't busy wait for Rx queue disable in ice_qp_dis()
Date: Tue, 28 May 2024 15:14:20 +0200
Message-Id: <20240528131429.3012910-3-maciej.fijalkowski@intel.com>
In-Reply-To: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
References: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>

When the ice driver is spammed with multiple xdpsock instances and flow
control is enabled, there are cases when the Rx queue gets stuck and is
unable to reflect the disable state in the QRX_CTRL register. A similar
issue has previously been addressed in commit 13a6233b033f ("ice: Add
support to enable/disable all Rx queues before waiting").

To work around this, simply do not wait for the disabled state, as a
later patch will make sure that, regardless of any error encountered in
the process of disabling a queue pair, the Rx queue will be enabled
again.
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 1bd4b054dd80..4f606a1055b0 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -199,10 +199,8 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
                 if (err)
                         return err;
         }
-        err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
-        if (err)
-                return err;
 
+        ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, false);
         ice_qp_clean_rings(vsi, q_idx);
         ice_qp_reset_stats(vsi, q_idx);
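The last argument of ice_vsi_ctrl_one_rx_ring() selects whether the
caller polls HW for the requested state. A rough sketch of that
semantics is below; it is illustrative only, with a hypothetical
register layout (the request and status bits are guessed as BIT(0) and
BIT(2)) and dummy_* naming.

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/iopoll.h>

/* Flip the queue-enable request bit and, only when @wait is set, poll
 * the status bit until HW confirms the new state. The polling branch
 * is the one that can end up returning -ETIMEDOUT under heavy xdpsock
 * churn; passing wait=false simply skips it.
 */
static int dummy_ctrl_one_rx_ring(void __iomem *qrx_ctrl, bool ena, bool wait)
{
        u32 val = readl(qrx_ctrl);

        if (ena)
                val |= BIT(0);          /* queue-enable request */
        else
                val &= ~BIT(0);
        writel(val, qrx_ctrl);

        if (!wait)
                return 0;

        return read_poll_timeout(readl, val, !!(val & BIT(2)) == ena,
                                 20, 10000, false, qrx_ctrl);
}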
From patchwork Tue May 28 13:14:21 2024
X-Patchwork-Submitter: "Fijalkowski, Maciej"
X-Patchwork-Id: 13676723
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
 magnus.karlsson@intel.com, michal.kubiak@intel.com,
 larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH iwl-net 03/11] ice: replace synchronize_rcu with synchronize_net
Date: Tue, 28 May 2024 15:14:21 +0200
Message-Id: <20240528131429.3012910-4-maciej.fijalkowski@intel.com>
In-Reply-To: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
References: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>

Given that ice_qp_dis() is called under rtnl_lock, synchronize_net()
can be called instead of synchronize_rcu() so that XDP rings can finish
their job in a faster way. Also, do this earlier in the XSK queue
disable flow. Additionally, turn off the regular Tx queue before
disabling irqs and NAPI.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 4f606a1055b0..e93cb0ca4106 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -53,7 +53,6 @@ static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
 {
         ice_clean_tx_ring(vsi->tx_rings[q_idx]);
         if (ice_is_xdp_ena_vsi(vsi)) {
-                synchronize_rcu();
                 ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
         }
         ice_clean_rx_ring(vsi->rx_rings[q_idx]);
@@ -180,11 +179,12 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
                 usleep_range(1000, 2000);
         }
 
+        synchronize_net();
+        netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+
         ice_qvec_dis_irq(vsi, rx_ring, q_vector);
         ice_qvec_toggle_napi(vsi, q_vector, false);
 
-        netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
-
         ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
         err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
         if (err)
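For reference, synchronize_net() (declared in linux/netdevice.h) uses
an expedited RCU grace period when the caller holds rtnl_lock, which is
exactly the situation in ice_qp_dis(). A minimal sketch of the
resulting ordering, with a hypothetical helper name:

#include <linux/netdevice.h>
#include <linux/rtnetlink.h>

/* Quiesce one Tx queue: wait out in-flight datapath readers first
 * (cheap under rtnl thanks to the expedited grace period), then stop
 * the stack queue before IRQs and NAPI are torn down.
 */
static void dummy_quiesce_txq(struct net_device *netdev, u16 qid)
{
        ASSERT_RTNL();

        synchronize_net();
        netif_tx_stop_queue(netdev_get_tx_queue(netdev, qid));

        /* ... disable the queue vector's IRQ and NAPI here ... */
}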
From patchwork Tue May 28 13:14:22 2024
X-Patchwork-Submitter: "Fijalkowski, Maciej"
X-Patchwork-Id: 13676724
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
 magnus.karlsson@intel.com, michal.kubiak@intel.com,
 larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH iwl-net 04/11] ice: modify error handling when setting XSK pool in ndo_bpf
Date: Tue, 28 May 2024 15:14:22 +0200
Message-Id: <20240528131429.3012910-5-maciej.fijalkowski@intel.com>
In-Reply-To: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
References: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>

Don't bail out as soon as an error is spotted within ice_qp_{dis,ena}();
instead, track the first error and still go through the whole flow of
disabling and enabling the queue pair.
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 30 +++++++++++++-----------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index e93cb0ca4106..3dcab89be256 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -163,6 +163,7 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
         struct ice_tx_ring *tx_ring;
         struct ice_rx_ring *rx_ring;
         int timeout = 50;
+        int fail = 0;
         int err;
 
         if (q_idx >= vsi->num_rxq || q_idx >= vsi->num_txq)
@@ -187,8 +188,8 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 
         ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
         err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
-        if (err)
-                return err;
+        if (!fail)
+                fail = err;
 
         if (ice_is_xdp_ena_vsi(vsi)) {
                 struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
@@ -196,15 +197,15 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
                 ice_fill_txq_meta(vsi, xdp_ring, &txq_meta);
                 err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, xdp_ring,
                                            &txq_meta);
-                if (err)
-                        return err;
+                if (!fail)
+                        fail = err;
         }
 
         ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, false);
         ice_qp_clean_rings(vsi, q_idx);
         ice_qp_reset_stats(vsi, q_idx);
 
-        return 0;
+        return fail;
 }
 
 /**
@@ -217,32 +218,33 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 {
         struct ice_q_vector *q_vector;
+        int fail = 0;
         int err;
 
         err = ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx);
-        if (err)
-                return err;
+        if (!fail)
+                fail = err;
 
         if (ice_is_xdp_ena_vsi(vsi)) {
                 struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
 
                 err = ice_vsi_cfg_single_txq(vsi, vsi->xdp_rings, q_idx);
-                if (err)
-                        return err;
+                if (!fail)
+                        fail = err;
                 ice_set_ring_xdp(xdp_ring);
                 ice_tx_xsk_pool(vsi, q_idx);
         }
 
         err = ice_vsi_cfg_single_rxq(vsi, q_idx);
-        if (err)
-                return err;
+        if (!fail)
+                fail = err;
 
         q_vector = vsi->rx_rings[q_idx]->q_vector;
         ice_qvec_cfg_msix(vsi, q_vector);
 
         err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_idx, true);
-        if (err)
-                return err;
+        if (!fail)
+                fail = err;
 
         ice_qvec_toggle_napi(vsi, q_vector, true);
         ice_qvec_ena_irq(vsi, q_vector);
@@ -250,7 +252,7 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
         netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
         clear_bit(ICE_CFG_BUSY, vsi->state);
 
-        return 0;
+        return fail;
 }
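The error-tracking pattern may be easier to see outside of the diff
context. A self-contained sketch (the dummy_step_*() helpers are
hypothetical stand-ins for the real queue (de)config calls): remember
the first failure but execute every step anyway, so that a partially
failed disable cannot leave the later steps skipped.

static int dummy_step_one(void) { return 0; }
static int dummy_step_two(void) { return 0; }

static int dummy_qp_reconfig(void)
{
        int fail = 0;
        int err;

        err = dummy_step_one();
        if (!fail)
                fail = err;

        err = dummy_step_two();         /* runs even if step one failed */
        if (!fail)
                fail = err;

        return fail;                    /* first error seen, or 0 */
}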
From patchwork Tue May 28 13:14:23 2024
X-Patchwork-Submitter: "Fijalkowski, Maciej"
X-Patchwork-Id: 13676725
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
 magnus.karlsson@intel.com, michal.kubiak@intel.com,
 larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH iwl-net 05/11] ice: toggle netif_carrier when setting up XSK pool
Date: Tue, 28 May 2024 15:14:23 +0200
Message-Id: <20240528131429.3012910-6-maciej.fijalkowski@intel.com>
In-Reply-To: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
References: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>

In order to prevent Tx timeout issues, toggle the netif_carrier when
disabling and enabling the queue pair during XSK pool setup. One of the
conditions checked by dev_watchdog(), which runs periodically in the
background, is netif_carrier_ok(), so turn the carrier off when we
disable the queues that belong to a q_vector where the XSK pool is
being configured.
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 3dcab89be256..8c5006f37310 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -181,6 +181,7 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
         }
 
         synchronize_net();
+        netif_carrier_off(vsi->netdev);
         netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
 
         ice_qvec_dis_irq(vsi, rx_ring, q_vector);
@@ -250,6 +251,7 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
         ice_qvec_ena_irq(vsi, q_vector);
 
         netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+        netif_carrier_on(vsi->netdev);
         clear_bit(ICE_CFG_BUSY, vsi->state);
 
         return fail;
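Why the carrier toggle prevents the timeout: dev_watchdog() only checks
a device's Tx queues for hangs when, among other things,
netif_carrier_ok() holds. The bracketing can be sketched as below; this
is a hypothetical helper, not driver code (in ice the two halves live
in ice_qp_dis() and ice_qp_ena()).

#include <linux/netdevice.h>

static void dummy_rebuild_queue_pair(struct net_device *netdev, u16 qid)
{
        netif_carrier_off(netdev);      /* watchdog backs off from now on */
        netif_tx_stop_queue(netdev_get_tx_queue(netdev, qid));

        /* ... disable, reconfigure and re-enable the queue pair ... */

        netif_tx_start_queue(netdev_get_tx_queue(netdev, qid));
        netif_carrier_on(netdev);       /* resume Tx hang detection */
}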
From patchwork Tue May 28 13:14:24 2024
X-Patchwork-Submitter: "Fijalkowski, Maciej"
X-Patchwork-Id: 13676726
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
 magnus.karlsson@intel.com, michal.kubiak@intel.com,
 larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH iwl-net 06/11] ice: improve updating ice_{t,r}x_ring::xsk_pool
Date: Tue, 28 May 2024 15:14:24 +0200
Message-Id: <20240528131429.3012910-7-maciej.fijalkowski@intel.com>
In-Reply-To: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
References: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>

The xsk_buff_pool pointers that the ice ring structs hold are updated
via ndo_bpf, which is executed in process context, while they can be
read by a remote CPU at the same time within NAPI poll. Use
synchronize_net() after the pointer update and {READ,WRITE}_ONCE() when
working with the pointer.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice.h      |  6 +-
 drivers/net/ethernet/intel/ice/ice_base.c |  4 +-
 drivers/net/ethernet/intel/ice/ice_main.c |  2 +-
 drivers/net/ethernet/intel/ice/ice_txrx.c |  4 +-
 drivers/net/ethernet/intel/ice/ice_xsk.c  | 73 +++++++++++++----------
 drivers/net/ethernet/intel/ice/ice_xsk.h  |  4 +-
 6 files changed, 54 insertions(+), 39 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index da8c8afebc93..701a61d791dd 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -771,12 +771,12 @@ static inline struct xsk_buff_pool *ice_get_xp_from_qid(struct ice_vsi *vsi,
  * Returns a pointer to xsk_buff_pool structure if there is a buffer pool
  * present, NULL otherwise.
  */
-static inline struct xsk_buff_pool *ice_xsk_pool(struct ice_rx_ring *ring)
+static inline void ice_xsk_pool(struct ice_rx_ring *ring)
 {
         struct ice_vsi *vsi = ring->vsi;
         u16 qid = ring->q_index;
 
-        return ice_get_xp_from_qid(vsi, qid);
+        WRITE_ONCE(ring->xsk_pool, ice_get_xp_from_qid(vsi, qid));
 }
 
 /**
@@ -801,7 +801,7 @@ static inline void ice_tx_xsk_pool(struct ice_vsi *vsi, u16 qid)
         if (!ring)
                 return;
 
-        ring->xsk_pool = ice_get_xp_from_qid(vsi, qid);
+        WRITE_ONCE(ring->xsk_pool, ice_get_xp_from_qid(vsi, qid));
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 5d396c1a7731..f3dfce136106 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -536,7 +536,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
                         return err;
         }
 
-        ring->xsk_pool = ice_xsk_pool(ring);
+        ice_xsk_pool(ring);
         if (ring->xsk_pool) {
                 xdp_rxq_info_unreg(&ring->xdp_rxq);
 
@@ -597,7 +597,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
                 return 0;
         }
 
-        ok = ice_alloc_rx_bufs_zc(ring, num_bufs);
+        ok = ice_alloc_rx_bufs_zc(ring, ring->xsk_pool, num_bufs);
         if (!ok) {
                 u16 pf_q = ring->vsi->rxq_map[ring->q_index];
 
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 1b61ca3a6eb6..15a6805ac2a1 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -2946,7 +2946,7 @@ static void ice_vsi_rx_napi_schedule(struct ice_vsi *vsi)
         ice_for_each_rxq(vsi, i) {
                 struct ice_rx_ring *rx_ring = vsi->rx_rings[i];
 
-                if (rx_ring->xsk_pool)
+                if (READ_ONCE(rx_ring->xsk_pool))
                         napi_schedule(&rx_ring->q_vector->napi);
         }
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 8bb743f78fcb..f4b2b1bca234 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -1523,7 +1523,7 @@ int ice_napi_poll(struct napi_struct *napi, int budget)
         ice_for_each_tx_ring(tx_ring, q_vector->tx) {
                 bool wd;
 
-                if (tx_ring->xsk_pool)
+                if (READ_ONCE(tx_ring->xsk_pool))
                         wd = ice_xmit_zc(tx_ring);
                 else if (ice_ring_is_xdp(tx_ring))
                         wd = true;
@@ -1556,7 +1556,7 @@ int ice_napi_poll(struct napi_struct *napi, int budget)
                  * comparison in the irq context instead of many inside the
                  * ice_clean_rx_irq function and makes the codebase cleaner.
                  */
-                cleaned = rx_ring->xsk_pool ?
+                cleaned = READ_ONCE(rx_ring->xsk_pool) ?
                           ice_clean_rx_irq_zc(rx_ring, budget_per_ring) :
                           ice_clean_rx_irq(rx_ring, budget_per_ring);
                 work_done += cleaned;
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 8c5006f37310..e554cf424fb3 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -250,6 +250,8 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
         ice_qvec_toggle_napi(vsi, q_vector, true);
         ice_qvec_ena_irq(vsi, q_vector);
 
+        /* make sure NAPI sees updated ice_{t,x}_ring::xsk_pool */
+        synchronize_net();
         netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
         netif_carrier_on(vsi->netdev);
         clear_bit(ICE_CFG_BUSY, vsi->state);
@@ -469,7 +471,8 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
  *
  * Returns true if all allocations were successful, false if any fail.
  */
-static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
+                                   struct xsk_buff_pool *xsk_pool, u16 count)
 {
         u32 nb_buffs_extra = 0, nb_buffs = 0;
         union ice_32b_rx_flex_desc *rx_desc;
@@ -481,8 +484,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
         xdp = ice_xdp_buf(rx_ring, ntu);
 
         if (ntu + count >= rx_ring->count) {
-                nb_buffs_extra = ice_fill_rx_descs(rx_ring->xsk_pool, xdp,
-                                                   rx_desc,
+                nb_buffs_extra = ice_fill_rx_descs(xsk_pool, xdp, rx_desc,
                                                    rx_ring->count - ntu);
                 if (nb_buffs_extra != rx_ring->count - ntu) {
                         ntu += nb_buffs_extra;
@@ -495,7 +497,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
                 ice_release_rx_desc(rx_ring, 0);
         }
 
-        nb_buffs = ice_fill_rx_descs(rx_ring->xsk_pool, xdp, rx_desc, count);
+        nb_buffs = ice_fill_rx_descs(xsk_pool, xdp, rx_desc, count);
 
         ntu += nb_buffs;
         if (ntu == rx_ring->count)
@@ -518,7 +520,8 @@
  *
  * Returns true if all calls to internal alloc routine succeeded
  */
-bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
+                          struct xsk_buff_pool *xsk_pool, u16 count)
 {
         u16 rx_thresh = ICE_RING_QUARTER(rx_ring);
         u16 leftover, i, tail_bumps;
@@ -527,9 +530,9 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
         leftover = count - (tail_bumps * rx_thresh);
 
         for (i = 0; i < tail_bumps; i++)
-                if (!__ice_alloc_rx_bufs_zc(rx_ring, rx_thresh))
+                if (!__ice_alloc_rx_bufs_zc(rx_ring, xsk_pool, rx_thresh))
                         return false;
-        return __ice_alloc_rx_bufs_zc(rx_ring, leftover);
+        return __ice_alloc_rx_bufs_zc(rx_ring, xsk_pool, leftover);
 }
 
 /**
@@ -650,7 +653,7 @@ static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
         if (xdp_ring->next_to_clean >= cnt)
                 xdp_ring->next_to_clean -= cnt;
         if (xsk_frames)
-                xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames);
+                xsk_tx_completed(READ_ONCE(xdp_ring->xsk_pool), xsk_frames);
 
         return completed_frames;
 }
@@ -702,7 +705,8 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
                 dma_addr_t dma;
 
                 dma = xsk_buff_xdp_get_dma(xdp);
-                xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, size);
+                xsk_buff_raw_dma_sync_for_device(READ_ONCE(xdp_ring->xsk_pool),
+                                                 dma, size);
 
                 tx_buf->xdp = xdp;
                 tx_buf->type = ICE_TX_BUF_XSK_TX;
@@ -760,7 +764,8 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
                 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
                 if (!err)
                         return ICE_XDP_REDIR;
-                if (xsk_uses_need_wakeup(rx_ring->xsk_pool) && err == -ENOBUFS)
+                if (xsk_uses_need_wakeup(READ_ONCE(rx_ring->xsk_pool)) &&
+                    err == -ENOBUFS)
                         result = ICE_XDP_EXIT;
                 else
                         result = ICE_XDP_CONSUMED;
@@ -829,8 +834,8 @@ ice_add_xsk_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *first,
  */
 int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 {
+        struct xsk_buff_pool *xsk_pool = READ_ONCE(rx_ring->xsk_pool);
         unsigned int total_rx_bytes = 0, total_rx_packets = 0;
-        struct xsk_buff_pool *xsk_pool = rx_ring->xsk_pool;
         u32 ntc = rx_ring->next_to_clean;
         u32 ntu = rx_ring->next_to_use;
         struct xdp_buff *first = NULL;
@@ -942,7 +947,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
         rx_ring->next_to_clean = ntc;
         entries_to_alloc = ICE_RX_DESC_UNUSED(rx_ring);
         if (entries_to_alloc > ICE_RING_QUARTER(rx_ring))
-                failure |= !ice_alloc_rx_bufs_zc(rx_ring, entries_to_alloc);
+                failure |= !ice_alloc_rx_bufs_zc(rx_ring, xsk_pool,
+                                                 entries_to_alloc);
 
         ice_finalize_xdp_rx(xdp_ring, xdp_xmit, 0);
         ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);
@@ -968,14 +974,15 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
  * @desc: AF_XDP descriptor to pull the DMA address and length from
  * @total_bytes: bytes accumulator that will be used for stats update
  */
-static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc,
+static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring,
+                         struct xsk_buff_pool *xsk_pool, struct xdp_desc *desc,
                          unsigned int *total_bytes)
 {
         struct ice_tx_desc *tx_desc;
         dma_addr_t dma;
 
-        dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, desc->addr);
-        xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, desc->len);
+        dma = xsk_buff_raw_get_dma(xsk_pool, desc->addr);
+        xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, desc->len);
 
         tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_to_use++);
         tx_desc->buf_addr = cpu_to_le64(dma);
@@ -991,7 +998,9 @@ static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc,
  * @descs: AF_XDP descriptors to pull the DMA addresses and lengths from
  * @total_bytes: bytes accumulator that will be used for stats update
  */
-static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
+static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring,
+                               struct xsk_buff_pool *xsk_pool,
+                               struct xdp_desc *descs,
                                unsigned int *total_bytes)
 {
         u16 ntu = xdp_ring->next_to_use;
@@ -1001,8 +1010,8 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *de
         loop_unrolled_for(i = 0; i < PKTS_PER_BATCH; i++) {
                 dma_addr_t dma;
 
-                dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, descs[i].addr);
-                xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, descs[i].len);
+                dma = xsk_buff_raw_get_dma(xsk_pool, descs[i].addr);
+                xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, descs[i].len);
 
                 tx_desc = ICE_TX_DESC(xdp_ring, ntu++);
                 tx_desc->buf_addr = cpu_to_le64(dma);
@@ -1022,17 +1031,19 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *de
  * @nb_pkts: count of packets to be send
  * @total_bytes: bytes accumulator that will be used for stats update
  */
-static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
-                                u32 nb_pkts, unsigned int *total_bytes)
+static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring,
+                                struct xsk_buff_pool *xsk_pool,
+                                struct xdp_desc *descs, u32 nb_pkts,
+                                unsigned int *total_bytes)
 {
         u32 batched, leftover, i;
 
         batched = ALIGN_DOWN(nb_pkts, PKTS_PER_BATCH);
         leftover = nb_pkts & (PKTS_PER_BATCH - 1);
 
         for (i = 0; i < batched; i += PKTS_PER_BATCH)
-                ice_xmit_pkt_batch(xdp_ring, &descs[i], total_bytes);
+                ice_xmit_pkt_batch(xdp_ring, xsk_pool, &descs[i], total_bytes);
         for (; i < batched + leftover; i++)
-                ice_xmit_pkt(xdp_ring, &descs[i], total_bytes);
+                ice_xmit_pkt(xdp_ring, xsk_pool, &descs[i], total_bytes);
 }
 
 /**
@@ -1043,7 +1054,8 @@ static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *d
  */
 bool ice_xmit_zc(struct ice_tx_ring *xdp_ring)
 {
-        struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs;
+        struct xsk_buff_pool *xsk_pool = READ_ONCE(xdp_ring->xsk_pool);
+        struct xdp_desc *descs = xsk_pool->tx_descs;
         u32 nb_pkts, nb_processed = 0;
         unsigned int total_bytes = 0;
         int budget;
@@ -1057,25 +1069,26 @@ bool ice_xmit_zc(struct ice_tx_ring *xdp_ring)
         budget = ICE_DESC_UNUSED(xdp_ring);
         budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring));
 
-        nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
+        nb_pkts = xsk_tx_peek_release_desc_batch(xsk_pool, budget);
         if (!nb_pkts)
                 return true;
 
         if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
                 nb_processed = xdp_ring->count - xdp_ring->next_to_use;
-                ice_fill_tx_hw_ring(xdp_ring, descs, nb_processed, &total_bytes);
+                ice_fill_tx_hw_ring(xdp_ring, xsk_pool, descs, nb_processed,
+                                    &total_bytes);
                 xdp_ring->next_to_use = 0;
         }
 
-        ice_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed,
-                            &total_bytes);
+        ice_fill_tx_hw_ring(xdp_ring, xsk_pool, &descs[nb_processed],
+                            nb_pkts - nb_processed, &total_bytes);
 
         ice_set_rs_bit(xdp_ring);
         ice_xdp_ring_update_tail(xdp_ring);
         ice_update_tx_ring_stats(xdp_ring, nb_pkts, total_bytes);
 
-        if (xsk_uses_need_wakeup(xdp_ring->xsk_pool))
-                xsk_set_tx_need_wakeup(xdp_ring->xsk_pool);
+        if (xsk_uses_need_wakeup(xsk_pool))
+                xsk_set_tx_need_wakeup(xsk_pool);
 
         return nb_pkts < budget;
 }
@@ -1108,7 +1121,7 @@ ice_xsk_wakeup(struct net_device *netdev, u32 queue_id,
 
         ring = vsi->rx_rings[queue_id]->xdp_ring;
 
-        if (!ring->xsk_pool)
+        if (!READ_ONCE(ring->xsk_pool))
                 return -EINVAL;
 
         /* The idea here is that if NAPI is running, mark a miss, so
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.h b/drivers/net/ethernet/intel/ice/ice_xsk.h
index 6fa181f080ef..4cd2d62a0836 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.h
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.h
@@ -22,7 +22,8 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool,
                        u16 qid);
 int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget);
 int ice_xsk_wakeup(struct net_device *netdev, u32 queue_id, u32 flags);
-bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count);
+bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
+                          struct xsk_buff_pool *xsk_pool, u16 count);
 bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi);
 void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring);
 void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring);
@@ -51,6 +52,7 @@ ice_clean_rx_irq_zc(struct ice_rx_ring __always_unused *rx_ring,
 
 static inline bool
 ice_alloc_rx_bufs_zc(struct ice_rx_ring __always_unused *rx_ring,
+                     struct xsk_buff_pool __always_unused *xsk_pool,
                      u16 __always_unused count)
 {
         return false;
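The concurrency pattern this patch establishes, shown in isolation (a
sketch; dummy_ring stands in for ice_{t,r}x_ring and only the pointer
handling is kept):

#include <linux/netdevice.h>

struct xsk_buff_pool;

struct dummy_ring {
        struct xsk_buff_pool *xsk_pool;
};

/* Writer, process context (ndo_bpf under rtnl): publish the new pool
 * pointer, then wait until every in-flight NAPI poll that might have
 * read the old value has finished.
 */
static void dummy_publish_pool(struct dummy_ring *ring,
                               struct xsk_buff_pool *pool)
{
        WRITE_ONCE(ring->xsk_pool, pool);
        synchronize_net();
}

/* Reader, NAPI poll on a possibly remote CPU: fetch the pointer once
 * and use the local copy consistently for the whole poll iteration,
 * which is why the ZC helpers above grew an explicit xsk_pool argument.
 */
static bool dummy_napi_poll(struct dummy_ring *ring)
{
        struct xsk_buff_pool *pool = READ_ONCE(ring->xsk_pool);

        if (!pool)
                return false;

        /* ... run the zero-copy clean/xmit routines against pool ... */
        return true;
}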
From patchwork Tue May 28 13:14:25 2024
X-Patchwork-Submitter: "Fijalkowski, Maciej"
X-Patchwork-Id: 13676727
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
 magnus.karlsson@intel.com, michal.kubiak@intel.com,
 larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH iwl-net 07/11] ice: add missing WRITE_ONCE when clearing ice_rx_ring::xdp_prog
Date: Tue, 28 May 2024 15:14:25 +0200
Message-Id: <20240528131429.3012910-8-maciej.fijalkowski@intel.com>
In-Reply-To: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
References: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>

The pointer is read by the data path and modified from process context
on a remote CPU, so WRITE_ONCE() is needed when clearing it.
Fixes: efc2214b6047 ("ice: Add support for XDP")
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index f4b2b1bca234..4c115531beba 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -456,7 +456,7 @@ void ice_free_rx_ring(struct ice_rx_ring *rx_ring)
         if (rx_ring->vsi->type == ICE_VSI_PF)
                 if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
                         xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
-        rx_ring->xdp_prog = NULL;
+        WRITE_ONCE(rx_ring->xdp_prog, NULL);
         if (rx_ring->xsk_pool) {
                 kfree(rx_ring->xdp_buf);
                 rx_ring->xdp_buf = NULL;
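The same annotation discipline in miniature (a sketch with hypothetical
dummy_* names; struct bpf_prog appears only as an opaque pointer):
without WRITE_ONCE(), the compiler is free to tear or reorder the plain
NULL store while the data path on another CPU dereferences the program
pointer.

#include <linux/compiler.h>
#include <linux/types.h>

struct bpf_prog;

struct dummy_rx_ring {
        struct bpf_prog *xdp_prog;
};

/* Process context, ring teardown: marked store ... */
static void dummy_clear_prog(struct dummy_rx_ring *rx)
{
        WRITE_ONCE(rx->xdp_prog, NULL);
}

/* ... paired with a marked load in the data path. */
static bool dummy_has_prog(struct dummy_rx_ring *rx)
{
        return READ_ONCE(rx->xdp_prog) != NULL;
}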
From patchwork Tue May 28 13:14:26 2024
X-Patchwork-Submitter: "Fijalkowski, Maciej"
X-Patchwork-Id: 13676728
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
 magnus.karlsson@intel.com, michal.kubiak@intel.com,
 larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH iwl-net 08/11] ice: xsk: fix txq interrupt mapping
Date: Tue, 28 May 2024 15:14:26 +0200
Message-Id: <20240528131429.3012910-9-maciej.fijalkowski@intel.com>
In-Reply-To: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
References: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>

ice_cfg_txq_interrupt() internally handles the XDP Tx ring. Do not use
ice_for_each_tx_ring() in ice_qvec_cfg_msix(), as this causes us to
treat the XDP ring that belongs to the queue vector as a Tx ring and
therefore misconfigure the interrupts.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index e554cf424fb3..3135fc0aaf73 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -113,23 +113,26 @@ ice_qvec_dis_irq(struct ice_vsi *vsi, struct ice_rx_ring *rx_ring,
  * @q_vector: queue vector
  */
 static void
-ice_qvec_cfg_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
+ice_qvec_cfg_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector, u16 qid)
 {
         u16 reg_idx = q_vector->reg_idx;
         struct ice_pf *pf = vsi->back;
         struct ice_hw *hw = &pf->hw;
-        struct ice_tx_ring *tx_ring;
-        struct ice_rx_ring *rx_ring;
+        int q, _qid = qid;
 
         ice_cfg_itr(hw, q_vector);
 
-        ice_for_each_tx_ring(tx_ring, q_vector->tx)
-                ice_cfg_txq_interrupt(vsi, tx_ring->reg_idx, reg_idx,
-                                      q_vector->tx.itr_idx);
+        for (q = 0; q < q_vector->num_ring_tx; q++) {
+                ice_cfg_txq_interrupt(vsi, _qid, reg_idx, q_vector->tx.itr_idx);
+                _qid++;
+        }
 
-        ice_for_each_rx_ring(rx_ring, q_vector->rx)
-                ice_cfg_rxq_interrupt(vsi, rx_ring->reg_idx, reg_idx,
-                                      q_vector->rx.itr_idx);
+        _qid = qid;
+
+        for (q = 0; q < q_vector->num_ring_rx; q++) {
+                ice_cfg_rxq_interrupt(vsi, _qid, reg_idx, q_vector->rx.itr_idx);
+                _qid++;
+        }
 
         ice_flush(hw);
 }
@@ -241,7 +244,7 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
                 fail = err;
 
         q_vector = vsi->rx_rings[q_idx]->q_vector;
-        ice_qvec_cfg_msix(vsi, q_vector);
+        ice_qvec_cfg_msix(vsi, q_vector, q_idx);
 
         err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_idx, true);
         if (!fail)
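The shape of the fix, extracted into a sketch (dummy_* names are
hypothetical and the per-queue programming is simplified; in the driver
it is done by ice_cfg_{t,r}xq_interrupt()): interrupt causes are
programmed for consecutive absolute queue ids starting at the pair's
qid, rather than by walking the vector's ring list, which would also
pick up the XDP Tx ring.

#include <linux/types.h>

struct dummy_q_vector {
        u16 num_ring_tx;        /* stack Tx queues handled by this vector */
        u16 num_ring_rx;
};

static void dummy_cfg_txq_interrupt(u16 qid)
{
        /* program the Tx cause for absolute queue id qid */
}

static void dummy_cfg_rxq_interrupt(u16 qid)
{
        /* program the Rx cause for absolute queue id qid */
}

static void dummy_cfg_msix(struct dummy_q_vector *qv, u16 qid)
{
        u16 q;

        for (q = 0; q < qv->num_ring_tx; q++)
                dummy_cfg_txq_interrupt(qid + q);

        for (q = 0; q < qv->num_ring_rx; q++)
                dummy_cfg_rxq_interrupt(qid + q);
}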
From patchwork Tue May 28 13:14:27 2024
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com,
 michal.kubiak@intel.com, larysa.zaremba@intel.com
Subject: [PATCH iwl-net 09/11] ice: move locking outside of ice_qp_ena and ice_qp_dis
Date: Tue, 28 May 2024 15:14:27 +0200
Message-Id: <20240528131429.3012910-10-maciej.fijalkowski@intel.com>
In-Reply-To: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
References: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>

From: Larysa Zaremba

Currently, ice_qp_ena() is called even if ICE_CFG_BUSY could not be
acquired by ice_qp_dis(), in which case there is nothing to undo. Move
the locking logic out of these functions, so that:
* we return immediately if the lock could not be acquired
* ice_qp_ena() does not operate in an unsafe context
* ice_qp_ena() does not clear ICE_CFG_BUSY when it is not held

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Larysa Zaremba
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 3135fc0aaf73..fe4aa4b537dd 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -165,7 +165,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 	struct ice_q_vector *q_vector;
 	struct ice_tx_ring *tx_ring;
 	struct ice_rx_ring *rx_ring;
-	int timeout = 50;
 	int fail = 0;
 	int err;
@@ -176,13 +175,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 	rx_ring = vsi->rx_rings[q_idx];
 	q_vector = rx_ring->q_vector;
 
-	while (test_and_set_bit(ICE_CFG_BUSY, vsi->state)) {
-		timeout--;
-		if (!timeout)
-			return -EBUSY;
-		usleep_range(1000, 2000);
-	}
-
 	synchronize_net();
 	netif_carrier_off(vsi->netdev);
 	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
@@ -257,7 +249,6 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 	synchronize_net();
 	netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
 	netif_carrier_on(vsi->netdev);
-	clear_bit(ICE_CFG_BUSY, vsi->state);
 
 	return fail;
 }
@@ -390,6 +381,14 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
 
 	if (if_running) {
 		struct ice_rx_ring *rx_ring = vsi->rx_rings[qid];
+		int timeout = 50;
+
+		while (test_and_set_bit(ICE_CFG_BUSY, vsi->state)) {
+			timeout--;
+			if (!timeout)
+				return -EBUSY;
+			usleep_range(1000, 2000);
+		}
 
 		ret = ice_qp_dis(vsi, qid);
 		if (ret) {
@@ -412,6 +411,7 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
 			napi_schedule(&vsi->rx_rings[qid]->xdp_ring->q_vector->napi);
 		else if (ret)
 			netdev_err(vsi->netdev, "ice_qp_ena error = %d\n", ret);
+		clear_bit(ICE_CFG_BUSY, vsi->state);
 	}
 
 failure:
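
The shape of the change, taking the busy bit in the caller around the disable/enable pair, can be sketched in user-space C. atomic_flag stands in for test_and_set_bit()/clear_bit(), and the function names below are invented stand-ins:

#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_flag cfg_busy = ATOMIC_FLAG_INIT;

static int qp_dis(void) { puts("queue pair disabled"); return 0; }
static int qp_ena(void) { puts("queue pair enabled"); return 0; }

static int pool_setup(void)
{
	int timeout = 50;
	int ret;

	/* acquire in the caller; bail out early if configuration is busy */
	while (atomic_flag_test_and_set(&cfg_busy)) {
		if (!--timeout)
			return -1; /* -EBUSY in the kernel */
		usleep(1000);
	}

	ret = qp_dis();
	if (!ret)
		ret = qp_ena(); /* runs only under the lock we hold */

	atomic_flag_clear(&cfg_busy); /* released exactly once, by the owner */
	return ret;
}

int main(void)
{
	return pool_setup();
}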
From patchwork Tue May 28 13:14:28 2024
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com,
 michal.kubiak@intel.com, larysa.zaremba@intel.com
Subject: [PATCH iwl-net 10/11] ice: lock with PF state instead of VSI state in ice_xsk_pool_setup()
Date: Tue, 28 May 2024 15:14:28 +0200
Message-Id: <20240528131429.3012910-11-maciej.fijalkowski@intel.com>
In-Reply-To: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
References: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>

From: Larysa Zaremba

The main intent of using ICE_CFG_BUSY is to prevent a reset from
starting while other configuration is being processed. pf->state is
checked before starting a reset, but ice_xsk_pool_setup() sets the flag
in vsi->state, which is almost useless. Also, ICE_CFG_BUSY belongs to
enum ice_pf_state, not enum ice_vsi_state.

Change vsi->state to pf->state in the ice_xsk_pool_setup() locking
code, so that a reset does not interfere with AF_XDP configuration.
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Larysa Zaremba
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index fe4aa4b537dd..225d027d3d7a 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -370,6 +370,7 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
 {
 	bool if_running, pool_present = !!pool;
 	int ret = 0, pool_failure = 0;
+	struct ice_pf *pf = vsi->back;
 
 	if (qid >= vsi->num_rxq || qid >= vsi->num_txq) {
 		netdev_err(vsi->netdev, "Please use queue id in scope of combined queues count\n");
@@ -383,7 +384,7 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
 		struct ice_rx_ring *rx_ring = vsi->rx_rings[qid];
 		int timeout = 50;
 
-		while (test_and_set_bit(ICE_CFG_BUSY, vsi->state)) {
+		while (test_and_set_bit(ICE_CFG_BUSY, pf->state)) {
 			timeout--;
 			if (!timeout)
 				return -EBUSY;
@@ -411,7 +412,7 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
 			napi_schedule(&vsi->rx_rings[qid]->xdp_ring->q_vector->napi);
 		else if (ret)
 			netdev_err(vsi->netdev, "ice_qp_ena error = %d\n", ret);
-		clear_bit(ICE_CFG_BUSY, vsi->state);
+		clear_bit(ICE_CFG_BUSY, pf->state);
 	}
 
 failure:
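
A small model of why the flag must live in the state word the other side actually checks: the reset path polls the PF state, so a busy bit set in the VSI state provides no exclusion. The state words and the bit number below are invented stand-ins for the driver's enums:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define CFG_BUSY_BIT 0

static atomic_ulong pf_state;  /* what the reset path inspects */
static atomic_ulong vsi_state; /* what the old code mistakenly used */

static bool reset_may_start(void)
{
	/* the reset path only ever looks at pf_state */
	return !(atomic_load(&pf_state) & (1UL << CFG_BUSY_BIT));
}

int main(void)
{
	/* old code: the busy bit lands in the word nobody checks */
	atomic_fetch_or(&vsi_state, 1UL << CFG_BUSY_BIT);
	printf("busy in vsi_state -> reset may start: %d (race!)\n",
	       reset_may_start());
	atomic_fetch_and(&vsi_state, ~(1UL << CFG_BUSY_BIT));

	/* fixed code: the busy bit lands where the reset path looks */
	atomic_fetch_or(&pf_state, 1UL << CFG_BUSY_BIT);
	printf("busy in pf_state  -> reset may start: %d\n",
	       reset_may_start());
	return 0;
}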
From patchwork Tue May 28 13:14:29 2024
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com,
 michal.kubiak@intel.com, larysa.zaremba@intel.com
Subject: [PATCH iwl-net 11/11] ice: protect ring configuration with a mutex
Date: Tue, 28 May 2024 15:14:29 +0200
Message-Id: <20240528131429.3012910-12-maciej.fijalkowski@intel.com>
In-Reply-To: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>
References: <20240528131429.3012910-1-maciej.fijalkowski@intel.com>

From: Larysa Zaremba

Add a ring_lock mutex to protect the sections where software rings are
affected, in particular to prevent a system crash when tx_timeout and
.ndo_bpf() run at the same time.
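
The race being closed can be modeled in a few lines of user-space C; ring_lock, the ring array, and both thread bodies below are invented for this sketch, not taken from the driver:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;
static int *rings; /* stand-in for the VSI's software rings */

static void *ndo_bpf(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&ring_lock);
	free(rings);                       /* rings briefly gone ... */
	rings = calloc(4, sizeof(*rings)); /* ... then rebuilt */
	pthread_mutex_unlock(&ring_lock);
	return NULL;
}

static void *tx_timeout(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&ring_lock);
	if (rings)
		printf("ring[0] = %d\n", rings[0]); /* never sees freed memory */
	pthread_mutex_unlock(&ring_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	rings = calloc(4, sizeof(*rings));
	pthread_create(&a, NULL, ndo_bpf, NULL);
	pthread_create(&b, NULL, tx_timeout, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	free(rings);
	return 0;
}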
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Larysa Zaremba
---
 drivers/net/ethernet/intel/ice/ice.h      |  2 ++
 drivers/net/ethernet/intel/ice/ice_lib.c  | 23 ++++++++++---
 drivers/net/ethernet/intel/ice/ice_main.c | 39 ++++++++++++++++++++---
 drivers/net/ethernet/intel/ice/ice_xsk.c  | 13 ++------
 4 files changed, 57 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 701a61d791dd..7c1e24afa34b 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -307,6 +307,7 @@ enum ice_pf_state {
 	ICE_PHY_INIT_COMPLETE,
 	ICE_FD_VF_FLUSH_CTX,	/* set at FD Rx IRQ or timeout */
 	ICE_AUX_ERR_PENDING,
+	ICE_RTNL_WAITS_FOR_RESET,
 	ICE_STATE_NBITS		/* must be last */
 };
@@ -941,6 +942,7 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
 			  enum ice_xdp_cfg cfg_type);
 int ice_destroy_xdp_rings(struct ice_vsi *vsi, enum ice_xdp_cfg cfg_type);
 void ice_map_xdp_rings(struct ice_vsi *vsi);
+bool ice_rebuild_pending(struct ice_vsi *vsi);
 int
 ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 	     u32 flags);
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 7629b0190578..a5dc6fc6e63d 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2426,7 +2426,10 @@ void ice_vsi_decfg(struct ice_vsi *vsi)
 		dev_err(ice_pf_to_dev(pf), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
 			vsi->vsi_num, err);
 
-	if (ice_is_xdp_ena_vsi(vsi))
+	/* xdp_rings can be absent, if program was attached amid reset,
+	 * VSI rebuild is supposed to create them later
+	 */
+	if (ice_is_xdp_ena_vsi(vsi) && vsi->xdp_rings)
 		/* return value check can be skipped here, it always returns
 		 * 0 if reset is in progress
 		 */
@@ -2737,12 +2740,24 @@ ice_queue_set_napi(struct ice_vsi *vsi, unsigned int queue_index,
 	if (current_work() == &pf->serv_task ||
 	    test_bit(ICE_PREPARED_FOR_RESET, pf->state) ||
 	    test_bit(ICE_DOWN, pf->state) ||
-	    test_bit(ICE_SUSPENDED, pf->state))
+	    test_bit(ICE_SUSPENDED, pf->state)) {
+		bool rtnl_held_here = true;
+
+		while (!rtnl_trylock()) {
+			if (test_bit(ICE_RTNL_WAITS_FOR_RESET, pf->state)) {
+				rtnl_held_here = false;
+				break;
+			}
+			usleep_range(1000, 2000);
+		}
 		__ice_queue_set_napi(vsi->netdev, queue_index, type, napi,
-				     false);
-	else
+				     true);
+		if (rtnl_held_here)
+			rtnl_unlock();
+	} else {
 		__ice_queue_set_napi(vsi->netdev, queue_index, type, napi,
 				     true);
+	}
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 15a6805ac2a1..7724ed8fc1b1 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -2986,6 +2986,20 @@ static int ice_max_xdp_frame_size(struct ice_vsi *vsi)
 		return ICE_RXBUF_3072;
 }
 
+/**
+ * ice_rebuild_pending - ice_vsi_rebuild will be performed, when locks are released
+ * @vsi: VSI to setup XDP for
+ *
+ * ice_vsi_close() in the reset path is called under rtnl_lock(),
+ * so it happened strictly before or after .ndo_bpf().
+ * In case it has happened before, we do not have anything attached to rings
+ */
+bool ice_rebuild_pending(struct ice_vsi *vsi)
+{
+	return ice_is_reset_in_progress(vsi->back->state) &&
+	       !vsi->rx_rings[0]->desc;
+}
+
 /**
  * ice_xdp_setup_prog - Add or remove XDP eBPF program
  * @vsi: VSI to setup XDP for
@@ -3009,7 +3023,7 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
 	}
 
 	/* hot swap progs and avoid toggling link */
-	if (ice_is_xdp_ena_vsi(vsi) == !!prog) {
+	if (ice_is_xdp_ena_vsi(vsi) == !!prog || ice_rebuild_pending(vsi)) {
 		ice_vsi_assign_bpf_prog(vsi, prog);
 		return 0;
 	}
@@ -3081,21 +3095,33 @@ static int ice_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 {
 	struct ice_netdev_priv *np = netdev_priv(dev);
 	struct ice_vsi *vsi = np->vsi;
+	struct ice_pf *pf = vsi->back;
+	int ret;
 
 	if (vsi->type != ICE_VSI_PF) {
 		NL_SET_ERR_MSG_MOD(xdp->extack, "XDP can be loaded only on PF VSI");
 		return -EINVAL;
 	}
 
+	while (test_and_set_bit(ICE_CFG_BUSY, pf->state)) {
+		set_bit(ICE_RTNL_WAITS_FOR_RESET, pf->state);
+		usleep_range(1000, 2000);
+	}
+	clear_bit(ICE_RTNL_WAITS_FOR_RESET, pf->state);
+
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
-		return ice_xdp_setup_prog(vsi, xdp->prog, xdp->extack);
+		ret = ice_xdp_setup_prog(vsi, xdp->prog, xdp->extack);
+		break;
 	case XDP_SETUP_XSK_POOL:
-		return ice_xsk_pool_setup(vsi, xdp->xsk.pool,
-					  xdp->xsk.queue_id);
+		ret = ice_xsk_pool_setup(vsi, xdp->xsk.pool, xdp->xsk.queue_id);
+		break;
 	default:
-		return -EINVAL;
+		ret = -EINVAL;
 	}
+
+	clear_bit(ICE_CFG_BUSY, pf->state);
+	return ret;
 }
@@ -7672,7 +7698,10 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
 		ice_gnss_init(pf);
 
 	/* rebuild PF VSI */
+	while (test_and_set_bit(ICE_CFG_BUSY, pf->state))
+		usleep_range(1000, 2000);
 	err = ice_vsi_rebuild_by_type(pf, ICE_VSI_PF);
+	clear_bit(ICE_CFG_BUSY, pf->state);
 	if (err) {
 		dev_err(dev, "PF VSI rebuild failed: %d\n", err);
 		goto err_vsi_rebuild;
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 225d027d3d7a..962af14f9fd5 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -370,7 +370,6 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
 {
 	bool if_running, pool_present = !!pool;
 	int ret = 0, pool_failure = 0;
-	struct ice_pf *pf = vsi->back;
 
 	if (qid >= vsi->num_rxq || qid >= vsi->num_txq) {
 		netdev_err(vsi->netdev, "Please use queue id in scope of combined queues count\n");
@@ -378,18 +377,11 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
 		goto failure;
 	}
 
-	if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi);
+	if_running = !ice_rebuild_pending(vsi) &&
+		     (netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi));
 
 	if (if_running) {
 		struct ice_rx_ring *rx_ring = vsi->rx_rings[qid];
-		int timeout = 50;
-
-		while (test_and_set_bit(ICE_CFG_BUSY, pf->state)) {
-			timeout--;
-			if (!timeout)
-				return -EBUSY;
-			usleep_range(1000, 2000);
-		}
 
 		ret = ice_qp_dis(vsi, qid);
 		if (ret) {
@@ -412,7 +404,6 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
 			napi_schedule(&vsi->rx_rings[qid]->xdp_ring->q_vector->napi);
 		else if (ret)
 			netdev_err(vsi->netdev, "ice_qp_ena error = %d\n", ret);
-		clear_bit(ICE_CFG_BUSY, pf->state);
 	}
 
 failure:
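
The rtnl_trylock()/ICE_RTNL_WAITS_FOR_RESET handshake in the diff above avoids a deadlock between an rtnl-holding .ndo_bpf() that waits for ICE_CFG_BUSY and a reset path that holds ICE_CFG_BUSY while wanting rtnl. A user-space model of that handshake, with pthread and C11 atomics standing in for the kernel primitives and all names invented:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t rtnl = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool cfg_busy;             /* models ICE_CFG_BUSY */
static atomic_bool rtnl_waits_for_reset; /* models ICE_RTNL_WAITS_FOR_RESET */

static void *ndo_bpf(void *arg) /* entered with rtnl held */
{
	(void)arg;
	pthread_mutex_lock(&rtnl);
	while (atomic_exchange(&cfg_busy, true)) {
		/* advertise that the rtnl holder is waiting for the reset */
		atomic_store(&rtnl_waits_for_reset, true);
		usleep(1000);
	}
	atomic_store(&rtnl_waits_for_reset, false);
	puts("ndo_bpf: reconfiguring rings");
	atomic_store(&cfg_busy, false);
	pthread_mutex_unlock(&rtnl);
	return NULL;
}

static void *reset_worker(void *arg) /* holds the config lock, wants rtnl */
{
	bool have_rtnl = false;

	(void)arg;
	while (atomic_exchange(&cfg_busy, true))
		usleep(1000);
	for (;;) {
		if (pthread_mutex_trylock(&rtnl) == 0) {
			have_rtnl = true;
			break;
		}
		/* the rtnl holder waits for the lock we hold; do not wait
		 * for rtnl in return, or neither side would make progress
		 */
		if (atomic_load(&rtnl_waits_for_reset))
			break;
		usleep(1000);
	}
	puts("reset worker: rebuilding VSI");
	if (have_rtnl)
		pthread_mutex_unlock(&rtnl);
	atomic_store(&cfg_busy, false);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, reset_worker, NULL);
	pthread_create(&b, NULL, ndo_bpf, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}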