From patchwork Thu Jan 9 23:31:02 2025
X-Patchwork-Submitter: Ahmed Zaki
X-Patchwork-Id: 13933320
X-Patchwork-Delegate: kuba@kernel.org
From: Ahmed Zaki <ahmed.zaki@intel.com>
To: netdev@vger.kernel.org
Cc: intel-wired-lan@lists.osuosl.org, andrew+netdev@lunn.ch,
 edumazet@google.com, kuba@kernel.org, horms@kernel.org, pabeni@redhat.com,
 davem@davemloft.net, michael.chan@broadcom.com, tariqt@nvidia.com,
 anthony.l.nguyen@intel.com, przemyslaw.kitszel@intel.com,
 jdamato@fastly.com, shayd@nvidia.com, akpm@linux-foundation.org,
 shayagr@amazon.com, kalesh-anakkur.purayil@broadcom.com,
 Ahmed Zaki <ahmed.zaki@intel.com>
Subject: [PATCH net-next v4 1/6] net: move ARFS rmap management to core
Date: Thu, 9 Jan 2025 16:31:02 -0700
Message-ID: <20250109233107.17519-2-ahmed.zaki@intel.com>
In-Reply-To: <20250109233107.17519-1-ahmed.zaki@intel.com>
References:
<20250109233107.17519-1-ahmed.zaki@intel.com>
X-Mailing-List: netdev@vger.kernel.org

Add a new netdev flag "rx_cpu_rmap_auto". Drivers supporting ARFS should
set the flag via netif_enable_cpu_rmap(), and the core will allocate and
manage the ARFS rmap. The rmap is also freed by the core when the netdev
is freed.

Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com>
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 38 ++---------------
 drivers/net/ethernet/broadcom/bnxt/bnxt.c    | 27 ++----------
 drivers/net/ethernet/intel/ice/ice_arfs.c    | 17 +-------
 include/linux/netdevice.h                    | 12 ++++--
 net/core/dev.c                               | 44 ++++++++++++++++++++
 5 files changed, 60 insertions(+), 78 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index c1295dfad0d0..a3fceaa83cd5 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -5,9 +5,6 @@
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
-#ifdef CONFIG_RFS_ACCEL
-#include <linux/cpu_rmap.h>
-#endif /* CONFIG_RFS_ACCEL */
 #include
 #include
 #include
@@ -165,25 +162,10 @@ int ena_xmit_common(struct ena_adapter *adapter,
 static int ena_init_rx_cpu_rmap(struct ena_adapter *adapter)
 {
 #ifdef CONFIG_RFS_ACCEL
-    u32 i;
-    int rc;
-
-    adapter->netdev->rx_cpu_rmap = alloc_irq_cpu_rmap(adapter->num_io_queues);
-    if (!adapter->netdev->rx_cpu_rmap)
-        return -ENOMEM;
-    for (i = 0; i < adapter->num_io_queues; i++) {
-        int irq_idx = ENA_IO_IRQ_IDX(i);
-
-        rc = irq_cpu_rmap_add(adapter->netdev->rx_cpu_rmap,
-                              pci_irq_vector(adapter->pdev, irq_idx));
-        if (rc) {
-            free_irq_cpu_rmap(adapter->netdev->rx_cpu_rmap);
-            adapter->netdev->rx_cpu_rmap = NULL;
-            return rc;
-        }
-    }
-#endif /* CONFIG_RFS_ACCEL */
+    return netif_enable_cpu_rmap(adapter->netdev, adapter->num_io_queues);
+#else
     return 0;
+#endif /* CONFIG_RFS_ACCEL */
 }
 
 static void ena_init_io_rings_common(struct ena_adapter *adapter,
@@ -1742,13 +1724,6 @@ static void ena_free_io_irq(struct ena_adapter *adapter)
     struct ena_irq *irq;
     int i;
 
-#ifdef CONFIG_RFS_ACCEL
-    if (adapter->msix_vecs >= 1) {
-        free_irq_cpu_rmap(adapter->netdev->rx_cpu_rmap);
-        adapter->netdev->rx_cpu_rmap = NULL;
-    }
-#endif /* CONFIG_RFS_ACCEL */
-
     for (i = ENA_IO_IRQ_FIRST_IDX; i < ENA_MAX_MSIX_VEC(io_queue_count); i++) {
         irq = &adapter->irq_tbl[i];
         irq_set_affinity_hint(irq->vector, NULL);
@@ -4131,13 +4106,6 @@ static void __ena_shutoff(struct pci_dev *pdev, bool shutdown)
     ena_dev = adapter->ena_dev;
     netdev = adapter->netdev;
 
-#ifdef CONFIG_RFS_ACCEL
-    if ((adapter->msix_vecs >= 1) && (netdev->rx_cpu_rmap)) {
-        free_irq_cpu_rmap(netdev->rx_cpu_rmap);
-        netdev->rx_cpu_rmap = NULL;
-    }
-
-#endif /* CONFIG_RFS_ACCEL */
     /* Make sure timer and reset routine won't be called after
      * freeing device resources.
      */
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 46edea75e062..cc3ca3440b0a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -49,7 +49,6 @@
 #include
 #include
 #include
-#include <linux/cpu_rmap.h>
 #include
 #include
 #include
@@ -10833,7 +10832,7 @@ static int bnxt_set_real_num_queues(struct bnxt *bp)
 
 #ifdef CONFIG_RFS_ACCEL
     if (bp->flags & BNXT_FLAG_RFS)
-        dev->rx_cpu_rmap = alloc_irq_cpu_rmap(bp->rx_nr_rings);
+        return netif_enable_cpu_rmap(dev, bp->rx_nr_rings);
 #endif
 
     return rc;
@@ -11187,10 +11186,6 @@ static void bnxt_free_irq(struct bnxt *bp)
     struct bnxt_irq *irq;
     int i;
 
-#ifdef CONFIG_RFS_ACCEL
-    free_irq_cpu_rmap(bp->dev->rx_cpu_rmap);
-    bp->dev->rx_cpu_rmap = NULL;
-#endif
     if (!bp->irq_tbl || !bp->bnapi)
         return;
 
@@ -11213,11 +11208,8 @@ static void bnxt_free_irq(struct bnxt *bp)
 
 static int bnxt_request_irq(struct bnxt *bp)
 {
-    int i, j, rc = 0;
+    int i, rc = 0;
     unsigned long flags = 0;
-#ifdef CONFIG_RFS_ACCEL
-    struct cpu_rmap *rmap;
-#endif
 
     rc = bnxt_setup_int_mode(bp);
     if (rc) {
@@ -11225,22 +11217,11 @@ static int bnxt_request_irq(struct bnxt *bp)
                rc);
         return rc;
     }
-#ifdef CONFIG_RFS_ACCEL
-    rmap = bp->dev->rx_cpu_rmap;
-#endif
-    for (i = 0, j = 0; i < bp->cp_nr_rings; i++) {
+
+    for (i = 0; i < bp->cp_nr_rings; i++) {
         int map_idx = bnxt_cp_num_to_irq_num(bp, i);
         struct bnxt_irq *irq = &bp->irq_tbl[map_idx];
-#ifdef CONFIG_RFS_ACCEL
-        if (rmap && bp->bnapi[i]->rx_ring) {
-            rc = irq_cpu_rmap_add(rmap, irq->vector);
-            if (rc)
-                netdev_warn(bp->dev, "failed adding irq rmap for ring %d\n",
-                            j);
-            j++;
-        }
-#endif
         rc = request_irq(irq->vector, irq->handler, flags, irq->name,
                          bp->bnapi[i]);
         if (rc)
diff --git a/drivers/net/ethernet/intel/ice/ice_arfs.c b/drivers/net/ethernet/intel/ice/ice_arfs.c
index 7cee365cc7d1..3b1b892e6958 100644
--- a/drivers/net/ethernet/intel/ice/ice_arfs.c
+++ b/drivers/net/ethernet/intel/ice/ice_arfs.c
@@ -584,9 +584,6 @@ void ice_free_cpu_rx_rmap(struct ice_vsi *vsi)
     netdev = vsi->netdev;
     if (!netdev || !netdev->rx_cpu_rmap)
         return;
-
-    free_irq_cpu_rmap(netdev->rx_cpu_rmap);
-    netdev->rx_cpu_rmap = NULL;
 }
 
 /**
@@ -597,7 +594,6 @@
 int ice_set_cpu_rx_rmap(struct ice_vsi *vsi)
 {
     struct net_device *netdev;
     struct ice_pf *pf;
-    int i;
 
     if (!vsi || vsi->type != ICE_VSI_PF)
         return 0;
@@ -610,18 +606,7 @@ int ice_set_cpu_rx_rmap(struct ice_vsi *vsi)
     netdev_dbg(netdev, "Setup CPU RMAP: vsi type 0x%x, ifname %s, q_vectors %d\n",
                vsi->type, netdev->name, vsi->num_q_vectors);
 
-    netdev->rx_cpu_rmap = alloc_irq_cpu_rmap(vsi->num_q_vectors);
-    if (unlikely(!netdev->rx_cpu_rmap))
-        return -EINVAL;
-
-    ice_for_each_q_vector(vsi, i)
-        if (irq_cpu_rmap_add(netdev->rx_cpu_rmap,
-                             vsi->q_vectors[i]->irq.virq)) {
-            ice_free_cpu_rx_rmap(vsi);
-            return -EINVAL;
-        }
-
-    return 0;
+    return netif_enable_cpu_rmap(netdev, vsi->num_q_vectors);
 }
 
 /**
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 1812564b5204..acf20191e114 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2398,6 +2398,9 @@ struct net_device {
     struct lock_class_key *qdisc_tx_busylock;
     bool proto_down;
     bool threaded;
+#ifdef CONFIG_RFS_ACCEL
+    bool rx_cpu_rmap_auto;
+#endif
 
     /* priv_flags_slow, ungrouped to save space */
     unsigned long see_all_hwtstamp_requests:1;
@@ -2671,10 +2674,7 @@ void netif_queue_set_napi(struct net_device *dev, unsigned int queue_index,
                           enum netdev_queue_type type,
                           struct napi_struct *napi);
 
-static inline void netif_napi_set_irq(struct napi_struct *napi, int irq)
-{
-    napi->irq = irq;
-}
+void netif_napi_set_irq(struct napi_struct *napi, int irq);
 
 /* Default NAPI poll() weight
  * Device drivers are strongly advised to not use bigger value
@@ -2765,6 +2765,10 @@ static inline void netif_napi_del(struct napi_struct *napi)
     synchronize_net();
 }
 
+#ifdef CONFIG_RFS_ACCEL
+int netif_enable_cpu_rmap(struct net_device *dev, unsigned int num_irqs);
+
+#endif
 struct packet_type {
     __be16 type; /* This is really htons(ether_type). */
     bool ignore_outgoing;
diff --git a/net/core/dev.c b/net/core/dev.c
index 26f0c2fbb8aa..8373e4cf56d8 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6730,6 +6730,46 @@ void netif_queue_set_napi(struct net_device *dev, unsigned int queue_index,
 }
 EXPORT_SYMBOL(netif_queue_set_napi);
 
+#ifdef CONFIG_RFS_ACCEL
+static void netif_disable_cpu_rmap(struct net_device *dev)
+{
+    free_irq_cpu_rmap(dev->rx_cpu_rmap);
+    dev->rx_cpu_rmap = NULL;
+    dev->rx_cpu_rmap_auto = false;
+}
+
+int netif_enable_cpu_rmap(struct net_device *dev, unsigned int num_irqs)
+{
+    dev->rx_cpu_rmap = alloc_irq_cpu_rmap(num_irqs);
+    if (!dev->rx_cpu_rmap)
+        return -ENOMEM;
+
+    dev->rx_cpu_rmap_auto = true;
+    return 0;
+}
+EXPORT_SYMBOL(netif_enable_cpu_rmap);
+#endif
+
+void netif_napi_set_irq(struct napi_struct *napi, int irq)
+{
+#ifdef CONFIG_RFS_ACCEL
+    int rc;
+#endif
+    napi->irq = irq;
+
+#ifdef CONFIG_RFS_ACCEL
+    if (napi->dev->rx_cpu_rmap && napi->dev->rx_cpu_rmap_auto) {
+        rc = irq_cpu_rmap_add(napi->dev->rx_cpu_rmap, irq);
+        if (rc) {
+            netdev_warn(napi->dev, "Unable to update ARFS map (%d)\n",
+                        rc);
+            netif_disable_cpu_rmap(napi->dev);
+        }
+    }
+#endif
+}
+EXPORT_SYMBOL(netif_napi_set_irq);
+
 static void napi_restore_config(struct napi_struct *n)
 {
     n->defer_hard_irqs = n->config->defer_hard_irqs;
@@ -11406,6 +11446,10 @@ void free_netdev(struct net_device *dev)
     /* Flush device addresses */
     dev_addr_flush(dev);
 
+#ifdef CONFIG_RFS_ACCEL
+    if (dev->rx_cpu_rmap && dev->rx_cpu_rmap_auto)
+        netif_disable_cpu_rmap(dev);
+#endif
     list_for_each_entry_safe(p, n, &dev->napi_list, dev_list)
         netif_napi_del(p);

From patchwork Thu Jan 9 23:31:03 2025
X-Patchwork-Submitter: Ahmed Zaki
X-Patchwork-Id: 13933321
X-Patchwork-Delegate: kuba@kernel.org
From: Ahmed Zaki <ahmed.zaki@intel.com>
To: netdev@vger.kernel.org
Cc: intel-wired-lan@lists.osuosl.org, andrew+netdev@lunn.ch,
 edumazet@google.com, kuba@kernel.org, horms@kernel.org, pabeni@redhat.com,
 davem@davemloft.net, michael.chan@broadcom.com,
 tariqt@nvidia.com, anthony.l.nguyen@intel.com, przemyslaw.kitszel@intel.com,
 jdamato@fastly.com, shayd@nvidia.com, akpm@linux-foundation.org,
 shayagr@amazon.com, kalesh-anakkur.purayil@broadcom.com,
 Ahmed Zaki <ahmed.zaki@intel.com>
Subject: [PATCH net-next v4 2/6] net: napi: add internal ARFS rmap management
Date: Thu, 9 Jan 2025 16:31:03 -0700
Message-ID: <20250109233107.17519-3-ahmed.zaki@intel.com>
In-Reply-To: <20250109233107.17519-1-ahmed.zaki@intel.com>
References: <20250109233107.17519-1-ahmed.zaki@intel.com>
X-Mailing-List: netdev@vger.kernel.org

For drivers using netif_enable_cpu_rmap(), move the IRQ rmap notifier
inside the napi_struct.

Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com>
---
 include/linux/cpu_rmap.h  |  1 +
 include/linux/netdevice.h |  4 +++
 lib/cpu_rmap.c            |  2 +-
 net/core/dev.c            | 73 +++++++++++++++++++++++++++++++++++++--
 4 files changed, 77 insertions(+), 3 deletions(-)

diff --git a/include/linux/cpu_rmap.h b/include/linux/cpu_rmap.h
index 20b5729903d7..2fd7ba75362a 100644
--- a/include/linux/cpu_rmap.h
+++ b/include/linux/cpu_rmap.h
@@ -32,6 +32,7 @@ struct cpu_rmap {
 #define CPU_RMAP_DIST_INF 0xffff
 
 extern struct cpu_rmap *alloc_cpu_rmap(unsigned int size, gfp_t flags);
+extern void cpu_rmap_get(struct cpu_rmap *rmap);
 extern int cpu_rmap_put(struct cpu_rmap *rmap);
 
 extern int cpu_rmap_add(struct cpu_rmap *rmap, void *obj);
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index acf20191e114..c789218cca5d 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -392,6 +392,10 @@ struct napi_struct {
     struct list_head dev_list;
     struct hlist_node napi_hash_node;
     int irq;
+#ifdef CONFIG_RFS_ACCEL
+    struct irq_affinity_notify notify;
+    int napi_rmap_idx;
+#endif
     int index;
     struct napi_config *config;
 };
diff --git a/lib/cpu_rmap.c b/lib/cpu_rmap.c
index 4c348670da31..f03d9be3f06b 100644
--- a/lib/cpu_rmap.c
+++ b/lib/cpu_rmap.c
@@ -73,7 +73,7 @@ static void cpu_rmap_release(struct kref *ref)
  * cpu_rmap_get - internal helper to get new ref on a cpu_rmap
  * @rmap: reverse-map allocated with alloc_cpu_rmap()
  */
-static inline void cpu_rmap_get(struct cpu_rmap *rmap)
+void cpu_rmap_get(struct cpu_rmap *rmap)
 {
     kref_get(&rmap->refcount);
 }
diff --git a/net/core/dev.c b/net/core/dev.c
index 8373e4cf56d8..1d4378962857 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6733,7 +6733,20 @@ EXPORT_SYMBOL(netif_queue_set_napi);
 #ifdef CONFIG_RFS_ACCEL
 static void netif_disable_cpu_rmap(struct net_device *dev)
 {
-    free_irq_cpu_rmap(dev->rx_cpu_rmap);
+    struct cpu_rmap *rmap = dev->rx_cpu_rmap;
+    struct napi_struct *napi;
+    u16 index;
+
+    if (!rmap || !dev->rx_cpu_rmap_auto)
+        return;
+
+    for (index = 0; index < rmap->size; index++) {
+        napi = rmap->obj[index];
+        if (napi && napi->irq > 0)
+            irq_set_affinity_notifier(napi->irq, NULL);
+    }
+
+    cpu_rmap_put(rmap);
     dev->rx_cpu_rmap = NULL;
     dev->rx_cpu_rmap_auto = false;
 }
@@ -6748,6 +6761,62 @@ int netif_enable_cpu_rmap(struct net_device *dev, unsigned int num_irqs)
     return 0;
 }
 EXPORT_SYMBOL(netif_enable_cpu_rmap);
+
+static void
+netif_irq_cpu_rmap_notify(struct irq_affinity_notify *notify,
+                          const cpumask_t *mask)
+{
+    struct napi_struct *napi =
+        container_of(notify, struct napi_struct, notify);
+    struct cpu_rmap *rmap = napi->dev->rx_cpu_rmap;
+    int err;
+
+    if (rmap && napi->dev->rx_cpu_rmap_auto) {
+        err = cpu_rmap_update(rmap, napi->napi_rmap_idx, mask);
+        if (err)
+            pr_warn("%s: RMAP update failed (%d)\n",
+                    __func__, err);
+    }
+}
+
+static void
+netif_napi_affinity_release(struct kref *ref)
+{
+    struct napi_struct *napi =
+        container_of(ref, struct napi_struct, notify.kref);
+    struct cpu_rmap *rmap = napi->dev->rx_cpu_rmap;
+
+    rmap->obj[napi->napi_rmap_idx] = NULL;
+    cpu_rmap_put(rmap);
+}
+
+static int napi_irq_cpu_rmap_add(struct napi_struct *napi, int irq)
+{
+    struct cpu_rmap *rmap = napi->dev->rx_cpu_rmap;
+    int rc;
+
+    if (!napi || !rmap)
+        return -EINVAL;
+    napi->notify.notify = netif_irq_cpu_rmap_notify;
+    napi->notify.release = netif_napi_affinity_release;
+    cpu_rmap_get(rmap);
+    rc = cpu_rmap_add(rmap, napi);
+    if (rc < 0)
+        goto err_add;
+
+    napi->napi_rmap_idx = rc;
+    rc = irq_set_affinity_notifier(irq, &napi->notify);
+    if (rc)
+        goto err_set;
+
+    return 0;
+
+err_set:
+    rmap->obj[napi->napi_rmap_idx] = NULL;
+err_add:
+    cpu_rmap_put(rmap);
+    return rc;
+}
 #endif
 
 void netif_napi_set_irq(struct napi_struct *napi, int irq)
@@ -6759,7 +6828,7 @@ void netif_napi_set_irq(struct napi_struct *napi, int irq)
 
 #ifdef CONFIG_RFS_ACCEL
     if (napi->dev->rx_cpu_rmap && napi->dev->rx_cpu_rmap_auto) {
-        rc = irq_cpu_rmap_add(napi->dev->rx_cpu_rmap, irq);
+        rc = napi_irq_cpu_rmap_add(napi, irq);
         if (rc) {
             netdev_warn(napi->dev, "Unable to update ARFS map (%d)\n",
                         rc);

From patchwork Thu Jan 9 23:31:04 2025
X-Patchwork-Submitter: Ahmed Zaki
X-Patchwork-Id: 13933322
X-Patchwork-Delegate: kuba@kernel.org
From: Ahmed Zaki <ahmed.zaki@intel.com>
To: netdev@vger.kernel.org
Cc: intel-wired-lan@lists.osuosl.org, andrew+netdev@lunn.ch,
 edumazet@google.com, kuba@kernel.org, horms@kernel.org, pabeni@redhat.com,
 davem@davemloft.net, michael.chan@broadcom.com, tariqt@nvidia.com,
 anthony.l.nguyen@intel.com, przemyslaw.kitszel@intel.com,
 jdamato@fastly.com, shayd@nvidia.com, akpm@linux-foundation.org,
 shayagr@amazon.com, kalesh-anakkur.purayil@broadcom.com,
 Ahmed Zaki <ahmed.zaki@intel.com>
Subject: [PATCH net-next v4 3/6] net: napi: add CPU affinity to napi_config
Date: Thu, 9 Jan 2025 16:31:04 -0700
Message-ID: <20250109233107.17519-4-ahmed.zaki@intel.com>
In-Reply-To: <20250109233107.17519-1-ahmed.zaki@intel.com>
References: <20250109233107.17519-1-ahmed.zaki@intel.com>
X-Mailing-List: netdev@vger.kernel.org

A common task for most drivers is to remember the user-set CPU affinity
of its IRQs. On each netdev reset, the driver should re-assign the
user's settings to the IRQs.

Add a CPU affinity mask to napi_config. To delegate the CPU affinity
management to the core, drivers must:

1 - set the new netdev flag "irq_affinity_auto": netif_enable_irq_affinity(netdev)
2 - create the napi with persistent config: netif_napi_add_config()
3 - bind an IRQ to the napi instance: netif_napi_set_irq()

The core will then make sure to re-assign the affinity to the napi's
IRQ. The default IRQ mask is set to one CPU, starting from the closest
NUMA node.
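The three driver-side steps above can be sketched as a probe-time setup path. This is an illustration only: my_adapter, its q[] array, num_queues, and the my_poll callback are hypothetical placeholders, and only the netif_* calls are the API this series relies on.

```c
/* Hypothetical driver setup; everything except the netif_* calls is a
 * placeholder, and error handling is elided for brevity.
 */
static int my_driver_napi_setup(struct my_adapter *adapter)
{
	struct net_device *netdev = adapter->netdev;
	int i;

	/* 1 - let the core persist and re-apply IRQ affinity */
	netif_enable_irq_affinity(netdev);

	for (i = 0; i < adapter->num_queues; i++) {
		/* 2 - NAPI instance with persistent, index-keyed config */
		netif_napi_add_config(netdev, &adapter->q[i].napi,
				      my_poll, i);
		/* 3 - bind the queue's IRQ; the core installs the affinity
		 * notifier and restores the saved mask on each reset
		 */
		netif_napi_set_irq(&adapter->q[i].napi, adapter->q[i].irq);
	}
	return 0;
}
```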
Signed-off-by: Ahmed Zaki --- include/linux/netdevice.h | 9 +++++++- net/core/dev.c | 44 ++++++++++++++++++++++++++++++++------- 2 files changed, 45 insertions(+), 8 deletions(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index c789218cca5d..82da827b5ec6 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -351,6 +351,7 @@ struct napi_config { u64 gro_flush_timeout; u64 irq_suspend_timeout; u32 defer_hard_irqs; + cpumask_t affinity_mask; unsigned int napi_id; }; @@ -392,8 +393,8 @@ struct napi_struct { struct list_head dev_list; struct hlist_node napi_hash_node; int irq; -#ifdef CONFIG_RFS_ACCEL struct irq_affinity_notify notify; +#ifdef CONFIG_RFS_ACCEL int napi_rmap_idx; #endif int index; @@ -2402,6 +2403,7 @@ struct net_device { struct lock_class_key *qdisc_tx_busylock; bool proto_down; bool threaded; + bool irq_affinity_auto; #ifdef CONFIG_RFS_ACCEL bool rx_cpu_rmap_auto; #endif @@ -2637,6 +2639,11 @@ static inline void netdev_set_ml_priv(struct net_device *dev, dev->ml_priv_type = type; } +static inline void netif_enable_irq_affinity(struct net_device *dev) +{ + dev->irq_affinity_auto = true; +} + /* * Net namespace inlines */ diff --git a/net/core/dev.c b/net/core/dev.c index 1d4378962857..72b3caf0e79f 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -6761,22 +6761,30 @@ int netif_enable_cpu_rmap(struct net_device *dev, unsigned int num_irqs) return 0; } EXPORT_SYMBOL(netif_enable_cpu_rmap); +#endif static void -netif_irq_cpu_rmap_notify(struct irq_affinity_notify *notify, - const cpumask_t *mask) +netif_napi_irq_notify(struct irq_affinity_notify *notify, + const cpumask_t *mask) { struct napi_struct *napi = container_of(notify, struct napi_struct, notify); +#ifdef CONFIG_RFS_ACCEL struct cpu_rmap *rmap = napi->dev->rx_cpu_rmap; int err; +#endif + if (napi->config && napi->dev->irq_affinity_auto) + cpumask_copy(&napi->config->affinity_mask, mask); + +#ifdef CONFIG_RFS_ACCEL if (rmap && 
napi->dev->rx_cpu_rmap_auto) { err = cpu_rmap_update(rmap, napi->napi_rmap_idx, mask); if (err) pr_warn("%s: RMAP update failed (%d)\n", __func__, err); } +#endif } static void @@ -6790,6 +6798,7 @@ netif_napi_affinity_release(struct kref *ref) cpu_rmap_put(rmap); } +#ifdef CONFIG_RFS_ACCEL static int napi_irq_cpu_rmap_add(struct napi_struct *napi, int irq) { struct cpu_rmap *rmap = napi->dev->rx_cpu_rmap; @@ -6797,7 +6806,7 @@ static int napi_irq_cpu_rmap_add(struct napi_struct *napi, int irq) if (!napi || !rmap) return -EINVAL; - napi->notify.notify = netif_irq_cpu_rmap_notify; + napi->notify.notify = netif_napi_irq_notify; napi->notify.release = netif_napi_affinity_release; cpu_rmap_get(rmap); rc = cpu_rmap_add(rmap, napi); @@ -6821,9 +6830,8 @@ static int napi_irq_cpu_rmap_add(struct napi_struct *napi, int irq) void netif_napi_set_irq(struct napi_struct *napi, int irq) { -#ifdef CONFIG_RFS_ACCEL int rc; -#endif + napi->irq = irq; #ifdef CONFIG_RFS_ACCEL @@ -6834,8 +6842,18 @@ void netif_napi_set_irq(struct napi_struct *napi, int irq) rc); netif_disable_cpu_rmap(napi->dev); } - } + } else if (irq > 0 && napi->config && napi->dev->irq_affinity_auto) { +#else + if (irq > 0 && napi->config && napi->dev->irq_affinity_auto) { #endif + napi->notify.notify = netif_napi_irq_notify; + napi->notify.release = netif_napi_affinity_release; + + rc = irq_set_affinity_notifier(irq, &napi->notify); + if (rc) + netdev_warn(napi->dev, "Unable to set IRQ notifier (%d)\n", + rc); + } } EXPORT_SYMBOL(netif_napi_set_irq); @@ -6844,6 +6862,10 @@ static void napi_restore_config(struct napi_struct *n) n->defer_hard_irqs = n->config->defer_hard_irqs; n->gro_flush_timeout = n->config->gro_flush_timeout; n->irq_suspend_timeout = n->config->irq_suspend_timeout; + + if (n->irq > 0 && n->dev->irq_affinity_auto) + irq_set_affinity(n->irq, &n->config->affinity_mask); + /* a NAPI ID might be stored in the config, if so use it. if not, use * napi_hash_add to generate one for us. 
*/ @@ -6860,6 +6882,10 @@ static void napi_save_config(struct napi_struct *n) n->config->defer_hard_irqs = n->defer_hard_irqs; n->config->gro_flush_timeout = n->gro_flush_timeout; n->config->irq_suspend_timeout = n->irq_suspend_timeout; + + if (n->irq > 0 && n->dev->irq_affinity_auto) + irq_set_affinity_notifier(n->irq, NULL); + napi_hash_del(n); } @@ -11358,7 +11384,7 @@ struct net_device *alloc_netdev_mqs(int sizeof_priv, const char *name, { struct net_device *dev; size_t napi_config_sz; - unsigned int maxqs; + unsigned int maxqs, i, numa; BUG_ON(strlen(name) >= sizeof(dev->name)); @@ -11454,6 +11480,10 @@ struct net_device *alloc_netdev_mqs(int sizeof_priv, const char *name, dev->napi_config = kvzalloc(napi_config_sz, GFP_KERNEL_ACCOUNT); if (!dev->napi_config) goto free_all; + numa = dev_to_node(&dev->dev); + for (i = 0; i < maxqs; i++) + cpumask_set_cpu(cpumask_local_spread(i, numa), + &dev->napi_config[i].affinity_mask); strscpy(dev->name, name); dev->name_assign_type = name_assign_type; From patchwork Thu Jan 9 23:31:05 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ahmed Zaki X-Patchwork-Id: 13933323 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0849F205AB4 for ; Thu, 9 Jan 2025 23:31:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736465506; cv=none; b=tYK9LTF4NT58rwVqpWdrk44pv/eBETmR8OV5YMipJwbt5BZ52gNDl2BgHbsm+tqWESVp/QsPLlqa7zmubQ4X8QKm+sK6OjEbpVv5p+7k2bgppK8d01dI7xRgRJYPhWWoJfnrzgF4gd1xG2qdHdktAX5Z90k9RMZC5hR1emqd3Kw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1736465506; c=relaxed/simple; 
bh=vEuEUak2e4V2UO9LeYnRYWF+ndykJaoTYWBSZuFDtRg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=VkQ3cMEGRaeQkRUogT/MYjHkHdlHGD9+y7O9G8kRZBFKqG6B1KO63mmKKbf0mPJMEfVSjhm5AXOaPT7FI+29LB/vKNaHC+qFbw8oLy1ABjtE/LA7tFwNn8/TZq9bKLgPMErPthTDh5OdnPkka7BYu36AZwj9y0xXKhTo/awK3Ps= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=XIzZyk/t; arc=none smtp.client-ip=198.175.65.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="XIzZyk/t" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1736465505; x=1768001505; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=vEuEUak2e4V2UO9LeYnRYWF+ndykJaoTYWBSZuFDtRg=; b=XIzZyk/teX/fzpjxmTJTRHjOmVqvciRYhVQ/7/rb0TOUwRgfWxQGy1xS qXC6krC2HfuPXt1ixKXW2m+xZ6FdEJlJzH8IPGKEpCWjhXrAhaRK19vtT t8yPPyCdTp+JuDdPgzZ2kzHu8gVwnBls86fiHFrwpW2xpp6EyD8O/tvah qdtIs/gYWf7kmcREepR4VP6QM77+fa6JulH+wkjc0moeN6zGawnuQUydS irsIL2NY17ERdg58dDl5DirMgW8DE1VRI1QNXcDmSRDhyMdTGKdQANvKR cOXwlPEKndJFcJc7Ra+V6uUoIhXG6n7x/wW4c5bpB9/T43qA65lNqI3x7 g==; X-CSE-ConnectionGUID: OBMvZkwbTFOC7uy6CJADZQ== X-CSE-MsgGUID: 1ic59y3iQduRkN8l9+B6Tg== X-IronPort-AV: E=McAfee;i="6700,10204,11310"; a="47245170" X-IronPort-AV: E=Sophos;i="6.12,302,1728975600"; d="scan'208";a="47245170" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2025 15:31:45 -0800 X-CSE-ConnectionGUID: o7876VvNQGKCi3ms1OIo5w== X-CSE-MsgGUID: 7tuMLtvWQzuyPCG3VclbUQ== X-ExtLoop1: 1 
From: Ahmed Zaki
To: netdev@vger.kernel.org
Subject: [PATCH net-next v4 4/6] bnxt: use napi's irq affinity
Date: Thu, 9 Jan 2025 16:31:05 -0700
Message-ID: <20250109233107.17519-5-ahmed.zaki@intel.com>
In-Reply-To: <20250109233107.17519-1-ahmed.zaki@intel.com>
References: <20250109233107.17519-1-ahmed.zaki@intel.com>

Delete the driver CPU affinity info and use the core's napi config
instead.
Signed-off-by: Ahmed Zaki
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 25 +++--------------------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  2 --
 2 files changed, 3 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index cc3ca3440b0a..b11bd9d31e91 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -11193,14 +11193,8 @@ static void bnxt_free_irq(struct bnxt *bp)
 		int map_idx = bnxt_cp_num_to_irq_num(bp, i);
 
 		irq = &bp->irq_tbl[map_idx];
-		if (irq->requested) {
-			if (irq->have_cpumask) {
-				irq_update_affinity_hint(irq->vector, NULL);
-				free_cpumask_var(irq->cpu_mask);
-				irq->have_cpumask = 0;
-			}
+		if (irq->requested)
 			free_irq(irq->vector, bp->bnapi[i]);
-		}
 
 		irq->requested = 0;
 	}
@@ -11229,21 +11223,6 @@ static int bnxt_request_irq(struct bnxt *bp)
 		netif_napi_set_irq(&bp->bnapi[i]->napi, irq->vector);
 		irq->requested = 1;
-
-		if (zalloc_cpumask_var(&irq->cpu_mask, GFP_KERNEL)) {
-			int numa_node = dev_to_node(&bp->pdev->dev);
-
-			irq->have_cpumask = 1;
-			cpumask_set_cpu(cpumask_local_spread(i, numa_node),
-					irq->cpu_mask);
-			rc = irq_update_affinity_hint(irq->vector, irq->cpu_mask);
-			if (rc) {
-				netdev_warn(bp->dev,
-					    "Update affinity hint failed, IRQ = %d\n",
-					    irq->vector);
-				break;
-			}
-		}
 	}
 	return rc;
 }
@@ -16172,6 +16151,8 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
 			    NETDEV_XDP_ACT_RX_SG;
 
+	netif_enable_irq_affinity(dev);
+
 #ifdef CONFIG_BNXT_SRIOV
 	init_waitqueue_head(&bp->sriov_cfg_wait);
 #endif
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 094c9e95b463..7be2f90d0c05 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1228,9 +1228,7 @@ struct bnxt_irq {
 	irq_handler_t	handler;
 	unsigned int	vector;
 	u8		requested:1;
-	u8		have_cpumask:1;
 	char		name[IFNAMSIZ + BNXT_IRQ_NAME_EXTRA];
-	cpumask_var_t	cpu_mask;
 };
 
 #define HWRM_RING_ALLOC_TX	0x1

From patchwork Thu Jan 9 23:31:06 2025
X-Patchwork-Submitter: Ahmed Zaki
X-Patchwork-Id: 13933324
X-Patchwork-Delegate: kuba@kernel.org
From: Ahmed Zaki
To: netdev@vger.kernel.org
Subject: [PATCH net-next v4 5/6] ice: use napi's irq affinity
Date: Thu, 9 Jan 2025 16:31:06 -0700
Message-ID: <20250109233107.17519-6-ahmed.zaki@intel.com>
In-Reply-To: <20250109233107.17519-1-ahmed.zaki@intel.com>
References: <20250109233107.17519-1-ahmed.zaki@intel.com>

Delete the driver CPU affinity info and use the core's napi config
instead.

Signed-off-by: Ahmed Zaki
---
 drivers/net/ethernet/intel/ice/ice.h      |  3 --
 drivers/net/ethernet/intel/ice/ice_base.c |  7 +---
 drivers/net/ethernet/intel/ice/ice_lib.c  |  6 ---
 drivers/net/ethernet/intel/ice/ice_main.c | 47 ++---------------------
 4 files changed, 5 insertions(+), 58 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 71e05d30f0fd..a6e6c9e1edc1 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -478,9 +478,6 @@ struct ice_q_vector {
 	struct ice_ring_container rx;
 	struct ice_ring_container tx;
 
-	cpumask_t affinity_mask;
-	struct irq_affinity_notify affinity_notify;
-
 	struct ice_channel *ch;
 
 	char name[ICE_INT_NAME_STR_LEN];
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index b2af8e3586f7..86cf715de00f 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -147,10 +147,6 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
 	q_vector->reg_idx = q_vector->irq.index;
 	q_vector->vf_reg_idx = q_vector->irq.index;
 
-	/* only set affinity_mask if the CPU is online */
-	if (cpu_online(v_idx))
-		cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
-
 	/* This will not be called in the driver load path because the netdev
 	 * will not be created yet. All other cases with register the NAPI
 	 * handler here (i.e. resume, reset/rebuild, etc.)
@@ -276,7 +272,8 @@ static void ice_cfg_xps_tx_ring(struct ice_tx_ring *ring)
 	if (test_and_set_bit(ICE_TX_XPS_INIT_DONE, ring->xps_state))
 		return;
 
-	netif_set_xps_queue(ring->netdev, &ring->q_vector->affinity_mask,
+	netif_set_xps_queue(ring->netdev,
+			    &ring->q_vector->napi.config->affinity_mask,
 			    ring->q_index);
 }
 
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index a7d45a8ce7ac..b5b93a426933 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2589,12 +2589,6 @@ void ice_vsi_free_irq(struct ice_vsi *vsi)
 			    vsi->q_vectors[i]->num_ring_rx))
 			continue;
 
-		/* clear the affinity notifier in the IRQ descriptor */
-		if (!IS_ENABLED(CONFIG_RFS_ACCEL))
-			irq_set_affinity_notifier(irq_num, NULL);
-
-		/* clear the affinity_hint in the IRQ descriptor */
-		irq_update_affinity_hint(irq_num, NULL);
 		synchronize_irq(irq_num);
 		devm_free_irq(ice_pf_to_dev(pf), irq_num, vsi->q_vectors[i]);
 	}
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 1701f7143f24..8df7332fcbbb 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -2512,34 +2512,6 @@ int ice_schedule_reset(struct ice_pf *pf, enum ice_reset_req reset)
 	return 0;
 }
 
-/**
- * ice_irq_affinity_notify - Callback for affinity changes
- * @notify: context as to what irq was changed
- * @mask: the new affinity mask
- *
- * This is a callback function used by the irq_set_affinity_notifier function
- * so that we may register to receive changes to the irq affinity masks.
- */
-static void
-ice_irq_affinity_notify(struct irq_affinity_notify *notify,
-			const cpumask_t *mask)
-{
-	struct ice_q_vector *q_vector =
-		container_of(notify, struct ice_q_vector, affinity_notify);
-
-	cpumask_copy(&q_vector->affinity_mask, mask);
-}
-
-/**
- * ice_irq_affinity_release - Callback for affinity notifier release
- * @ref: internal core kernel usage
- *
- * This is a callback function used by the irq_set_affinity_notifier function
- * to inform the current notification subscriber that they will no longer
- * receive notifications.
- */
-static void ice_irq_affinity_release(struct kref __always_unused *ref) {}
-
 /**
  * ice_vsi_ena_irq - Enable IRQ for the given VSI
  * @vsi: the VSI being configured
@@ -2603,19 +2575,6 @@ static int ice_vsi_req_irq_msix(struct ice_vsi *vsi, char *basename)
 				err);
 			goto free_q_irqs;
 		}
-
-		/* register for affinity change notifications */
-		if (!IS_ENABLED(CONFIG_RFS_ACCEL)) {
-			struct irq_affinity_notify *affinity_notify;
-
-			affinity_notify = &q_vector->affinity_notify;
-			affinity_notify->notify = ice_irq_affinity_notify;
-			affinity_notify->release = ice_irq_affinity_release;
-			irq_set_affinity_notifier(irq_num, affinity_notify);
-		}
-
-		/* assign the mask for this irq */
-		irq_update_affinity_hint(irq_num, &q_vector->affinity_mask);
 	}
 
 	err = ice_set_cpu_rx_rmap(vsi);
@@ -2631,9 +2590,6 @@ static int ice_vsi_req_irq_msix(struct ice_vsi *vsi, char *basename)
 free_q_irqs:
 	while (vector--) {
 		irq_num = vsi->q_vectors[vector]->irq.virq;
-		if (!IS_ENABLED(CONFIG_RFS_ACCEL))
-			irq_set_affinity_notifier(irq_num, NULL);
-		irq_update_affinity_hint(irq_num, NULL);
 		devm_free_irq(dev, irq_num, &vsi->q_vectors[vector]);
 	}
 	return err;
@@ -3674,6 +3630,9 @@ void ice_set_netdev_features(struct net_device *netdev)
 	 */
 	netdev->hw_features |= NETIF_F_RXFCS;
 
+	/* Allow core to manage IRQs affinity */
+	netif_enable_irq_affinity(netdev);
+
 	netif_set_tso_max_size(netdev, ICE_MAX_TSO_SIZE);
 }

From patchwork Thu Jan 9 23:31:07 2025
X-Patchwork-Submitter: Ahmed Zaki
X-Patchwork-Id: 13933325
X-Patchwork-Delegate: kuba@kernel.org
From: Ahmed Zaki
To: netdev@vger.kernel.org
Subject: [PATCH net-next v4 6/6] idpf: use napi's irq affinity
Date: Thu, 9 Jan 2025 16:31:07 -0700
Message-ID: <20250109233107.17519-7-ahmed.zaki@intel.com>
In-Reply-To: <20250109233107.17519-1-ahmed.zaki@intel.com>
References: <20250109233107.17519-1-ahmed.zaki@intel.com>
Delete the driver CPU affinity info and use the core's napi config
instead.

Signed-off-by: Ahmed Zaki
---
 drivers/net/ethernet/intel/idpf/idpf_lib.c  |  1 +
 drivers/net/ethernet/intel/idpf/idpf_txrx.c | 22 +++++++--------------
 drivers/net/ethernet/intel/idpf/idpf_txrx.h |  6 ++----
 3 files changed, 10 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index b4fbb99bfad2..d54be068f53f 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -814,6 +814,7 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
 	netdev->hw_features |= dflt_features | offloads;
 	netdev->hw_enc_features |= dflt_features | offloads;
 	idpf_set_ethtool_ops(netdev);
+	netif_enable_irq_affinity(netdev);
 	SET_NETDEV_DEV(netdev, &adapter->pdev->dev);
 
 	/* carrier off on init to avoid Tx hangs */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 2fa9c36e33c9..f6b5b45a061c 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -3554,8 +3554,6 @@ void idpf_vport_intr_rel(struct idpf_vport *vport)
 		q_vector->tx = NULL;
 		kfree(q_vector->rx);
 		q_vector->rx = NULL;
-
-		free_cpumask_var(q_vector->affinity_mask);
 	}
 
 	kfree(vport->q_vectors);
@@ -3582,8 +3580,6 @@ static void idpf_vport_intr_rel_irq(struct idpf_vport *vport)
 		vidx = vport->q_vector_idxs[vector];
 		irq_num = adapter->msix_entries[vidx].vector;
 
-		/* clear the affinity_mask in the IRQ descriptor */
-		irq_set_affinity_hint(irq_num, NULL);
 		kfree(free_irq(irq_num, q_vector));
 	}
 }
@@ -3771,8 +3767,6 @@ static int idpf_vport_intr_req_irq(struct idpf_vport *vport)
 				   "Request_irq failed, error: %d\n", err);
 			goto free_q_irqs;
 		}
-		/* assign the mask for this irq */
-		irq_set_affinity_hint(irq_num, q_vector->affinity_mask);
 	}
 
 	return 0;
@@ -4184,7 +4178,8 @@ static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport)
 static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport)
 {
 	int (*napi_poll)(struct napi_struct *napi, int budget);
-	u16 v_idx;
+	u16 v_idx, qv_idx;
+	int irq_num;
 
 	if (idpf_is_queue_model_split(vport->txq_model))
 		napi_poll = idpf_vport_splitq_napi_poll;
@@ -4193,12 +4188,12 @@ static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport)
 	for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
 		struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx];
+		qv_idx = vport->q_vector_idxs[v_idx];
+		irq_num = vport->adapter->msix_entries[qv_idx].vector;
 
-		netif_napi_add(vport->netdev, &q_vector->napi, napi_poll);
-
-		/* only set affinity_mask if the CPU is online */
-		if (cpu_online(v_idx))
-			cpumask_set_cpu(v_idx, q_vector->affinity_mask);
+		netif_napi_add_config(vport->netdev, &q_vector->napi,
+				      napi_poll, v_idx);
+		netif_napi_set_irq(&q_vector->napi, irq_num);
 	}
 }
 
@@ -4242,9 +4237,6 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport)
 		q_vector->rx_intr_mode = IDPF_ITR_DYNAMIC;
 		q_vector->rx_itr_idx = VIRTCHNL2_ITR_IDX_0;
 
-		if (!zalloc_cpumask_var(&q_vector->affinity_mask, GFP_KERNEL))
-			goto error;
-
 		q_vector->tx = kcalloc(txqs_per_vector, sizeof(*q_vector->tx),
				       GFP_KERNEL);
 		if (!q_vector->tx)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 0f71a6f5557b..13251f63c7c3 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -401,7 +401,6 @@ struct idpf_intr_reg {
  * @rx_intr_mode: Dynamic ITR or not
  * @rx_itr_idx: RX ITR index
  * @v_idx: Vector index
- * @affinity_mask: CPU affinity mask
  */
 struct idpf_q_vector {
 	__cacheline_group_begin_aligned(read_mostly);
@@ -438,13 +437,12 @@ struct idpf_q_vector {
 	__cacheline_group_begin_aligned(cold);
 
 	u16 v_idx;
-	cpumask_var_t affinity_mask;
 	__cacheline_group_end_aligned(cold);
 };
 libeth_cacheline_set_assert(struct idpf_q_vector, 120,
			    24 + sizeof(struct napi_struct) +
			    2 * sizeof(struct dim),
-			    8 + sizeof(cpumask_var_t));
+			    8);
 
 struct idpf_rx_queue_stats {
 	u64_stats_t packets;
@@ -940,7 +938,7 @@ static inline int idpf_q_vector_to_mem(const struct idpf_q_vector *q_vector)
 	if (!q_vector)
 		return NUMA_NO_NODE;
 
-	cpu = cpumask_first(q_vector->affinity_mask);
+	cpu = cpumask_first(&q_vector->napi.config->affinity_mask);
 
 	return cpu < nr_cpu_ids ? cpu_to_mem(cpu) : NUMA_NO_NODE;
 }