From patchwork Thu Jun  1 17:42:25 2023
X-Patchwork-Submitter: "Nambiar, Amritha"
X-Patchwork-Id: 13264342
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
Subject: [net-next/RFC PATCH v1 1/4] net: Introduce new napi fields for
 rx/tx queues
From: Amritha Nambiar
To: netdev@vger.kernel.org, kuba@kernel.org, davem@davemloft.net
Cc: sridhar.samudrala@intel.com, amritha.nambiar@intel.com
Date: Thu, 01 Jun 2023 10:42:25 -0700
Message-ID: <168564134580.7284.16867711571036004706.stgit@anambiarhost.jf.intel.com>
In-Reply-To: <168564116688.7284.6877238631049679250.stgit@anambiarhost.jf.intel.com>
References: <168564116688.7284.6877238631049679250.stgit@anambiarhost.jf.intel.com>
User-Agent: StGit/unknown-version
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

Introduce new napi fields 'napi_rxq_list' and 'napi_txq_list' to track the
rx and tx queue set associated with the napi, and initialize them. Handle
their removal as well. This enables mapping each napi instance to the
queue/queue-set on the corresponding irq line.
Signed-off-by: Amritha Nambiar
---
 include/linux/netdevice.h |  7 +++++++
 net/core/dev.c            | 21 +++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 08fbd4622ccf..49f64401af7c 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -342,6 +342,11 @@ struct gro_list {
  */
 #define GRO_HASH_BUCKETS	8
 
+struct napi_queue {
+	struct list_head	q_list;
+	u16			queue_index;
+};
+
 /*
  * Structure for NAPI scheduling similar to tasklet but with weighting
  */
@@ -376,6 +381,8 @@ struct napi_struct {
 	/* control-path-only fields follow */
 	struct list_head	dev_list;
 	struct hlist_node	napi_hash_node;
+	struct list_head	napi_rxq_list;
+	struct list_head	napi_txq_list;
 };
 
 enum {
diff --git a/net/core/dev.c b/net/core/dev.c
index 3393c2f3dbe8..9ee8eb3ef223 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6401,6 +6401,9 @@ void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
 	 */
 	if (dev->threaded && napi_kthread_create(napi))
 		dev->threaded = 0;
+
+	INIT_LIST_HEAD(&napi->napi_rxq_list);
+	INIT_LIST_HEAD(&napi->napi_txq_list);
 }
 EXPORT_SYMBOL(netif_napi_add_weight);
 
@@ -6462,6 +6465,23 @@ static void flush_gro_hash(struct napi_struct *napi)
 	}
 }
 
+static void __napi_del_queue(struct napi_queue *napi_queue)
+{
+	list_del_rcu(&napi_queue->q_list);
+	kfree(napi_queue);
+}
+
+static void napi_del_queues(struct napi_struct *napi)
+{
+	struct napi_queue *napi_queue, *n;
+
+	list_for_each_entry_safe(napi_queue, n, &napi->napi_rxq_list, q_list)
+		__napi_del_queue(napi_queue);
+
+	list_for_each_entry_safe(napi_queue, n, &napi->napi_txq_list, q_list)
+		__napi_del_queue(napi_queue);
+}
+
 /* Must be called in process context */
 void __netif_napi_del(struct napi_struct *napi)
 {
@@ -6479,6 +6499,7 @@ void __netif_napi_del(struct napi_struct *napi)
 		kthread_stop(napi->thread);
 		napi->thread = NULL;
 	}
+	napi_del_queues(napi);
 }
 EXPORT_SYMBOL(__netif_napi_del);

From patchwork Thu Jun  1 17:42:30 2023
X-Patchwork-Submitter: "Nambiar, Amritha"
X-Patchwork-Id: 13264343
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
Subject: [net-next/RFC PATCH v1 2/4] net: Add support for associating napi
 with queue[s]
From: Amritha Nambiar
To: netdev@vger.kernel.org, kuba@kernel.org, davem@davemloft.net
Cc: sridhar.samudrala@intel.com, amritha.nambiar@intel.com
Date: Thu, 01 Jun 2023 10:42:30 -0700
Message-ID: <168564135094.7284.9691772825401908320.stgit@anambiarhost.jf.intel.com>
In-Reply-To: <168564116688.7284.6877238631049679250.stgit@anambiarhost.jf.intel.com>
References: <168564116688.7284.6877238631049679250.stgit@anambiarhost.jf.intel.com>

After the napi context is initialized, map the napi instance with the
queue/queue-set on the corresponding irq line.
Signed-off-by: Amritha Nambiar
---
 drivers/net/ethernet/intel/ice/ice_lib.c  | 57 +++++++++++++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_lib.h  |  4 ++
 drivers/net/ethernet/intel/ice/ice_main.c |  4 ++
 include/linux/netdevice.h                 | 11 ++++++
 net/core/dev.c                            | 34 +++++++++++++++++
 5 files changed, 109 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 5ddb95d1073a..58f68363119f 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2478,6 +2478,12 @@ ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vsi_cfg_params *params)
 		goto unroll_vector_base;
 
 	ice_vsi_map_rings_to_vectors(vsi);
+
+	/* Associate q_vector rings to napi */
+	ret = ice_vsi_add_napi_queues(vsi);
+	if (ret)
+		goto unroll_vector_base;
+
 	vsi->stat_offsets_loaded = false;
 
 	if (ice_is_xdp_ena_vsi(vsi)) {
@@ -2957,6 +2963,57 @@ void ice_vsi_dis_irq(struct ice_vsi *vsi)
 		synchronize_irq(vsi->q_vectors[i]->irq.virq);
 }
 
+/**
+ * ice_q_vector_add_napi_queues - Add queue[s] associated with the napi
+ * @q_vector: q_vector pointer
+ *
+ * Associate the q_vector napi with all the queue[s] on the vector.
+ * Returns 0 on success or < 0 on error
+ */
+int ice_q_vector_add_napi_queues(struct ice_q_vector *q_vector)
+{
+	struct ice_rx_ring *rx_ring;
+	struct ice_tx_ring *tx_ring;
+	int ret = 0;
+
+	ice_for_each_rx_ring(rx_ring, q_vector->rx) {
+		ret = netif_napi_add_queue(&q_vector->napi, rx_ring->q_index,
+					   NAPI_RX_CONTAINER);
+		if (ret)
+			return ret;
+	}
+	ice_for_each_tx_ring(tx_ring, q_vector->tx) {
+		ret = netif_napi_add_queue(&q_vector->napi, tx_ring->q_index,
+					   NAPI_TX_CONTAINER);
+		if (ret)
+			return ret;
+	}
+
+	return ret;
+}
+
+/**
+ * ice_vsi_add_napi_queues - Associate queue[s] with napi for all vectors
+ * @vsi: VSI pointer
+ *
+ * Returns 0 on success or < 0 on error
+ */
+int ice_vsi_add_napi_queues(struct ice_vsi *vsi)
+{
+	int i, ret = 0;
+
+	if (!vsi->netdev)
+		return ret;
+
+	ice_for_each_q_vector(vsi, i) {
+		ret = ice_q_vector_add_napi_queues(vsi->q_vectors[i]);
+		if (ret)
+			return ret;
+	}
+	return ret;
+}
+
 /**
  * ice_napi_del - Remove NAPI handler for the VSI
  * @vsi: VSI for which NAPI handler is to be removed
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index e985766e6bb5..623b5f738a5c 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -93,6 +93,10 @@ void ice_vsi_cfg_netdev_tc(struct ice_vsi *vsi, u8 ena_tc);
 struct ice_vsi *
 ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params);
 
+int ice_q_vector_add_napi_queues(struct ice_q_vector *q_vector);
+
+int ice_vsi_add_napi_queues(struct ice_vsi *vsi);
+
 void ice_napi_del(struct ice_vsi *vsi);
 
 int ice_vsi_release(struct ice_vsi *vsi);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 62e91512aeab..c66ff1473aeb 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3348,9 +3348,11 @@ static void ice_napi_add(struct ice_vsi *vsi)
 	if (!vsi->netdev)
 		return;
 
-	ice_for_each_q_vector(vsi, v_idx)
+	ice_for_each_q_vector(vsi, v_idx) {
 		netif_napi_add(vsi->netdev, &vsi->q_vectors[v_idx]->napi,
 			       ice_napi_poll);
+		ice_q_vector_add_napi_queues(vsi->q_vectors[v_idx]);
+	}
 }
 
 /**
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 49f64401af7c..a562db712c6e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -342,6 +342,14 @@ struct gro_list {
  */
 #define GRO_HASH_BUCKETS	8
 
+/*
+ * napi queue container type
+ */
+enum napi_container_type {
+	NAPI_RX_CONTAINER,
+	NAPI_TX_CONTAINER,
+};
+
 struct napi_queue {
 	struct list_head	q_list;
 	u16			queue_index;
@@ -2622,6 +2630,9 @@ static inline void *netdev_priv(const struct net_device *dev)
  */
 #define SET_NETDEV_DEVTYPE(net, devtype)	((net)->dev.type = (devtype))
 
+int netif_napi_add_queue(struct napi_struct *napi, u16 queue_index,
+			 enum napi_container_type type);
+
 /* Default NAPI poll() weight
  * Device drivers are strongly advised to not use bigger value
  */
diff --git a/net/core/dev.c b/net/core/dev.c
index 9ee8eb3ef223..ba712119ec85 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6366,6 +6366,40 @@ int dev_set_threaded(struct net_device *dev, bool threaded)
 }
 EXPORT_SYMBOL(dev_set_threaded);
 
+/**
+ * netif_napi_add_queue - Associate queue with the napi
+ * @napi: NAPI context
+ * @queue_index: Index of queue
+ * @type: queue type as RX or TX
+ *
+ * Add queue with its corresponding napi context
+ */
+int netif_napi_add_queue(struct napi_struct *napi, u16 queue_index,
+			 enum napi_container_type type)
+{
+	struct napi_queue *napi_queue;
+
+	napi_queue = kzalloc(sizeof(*napi_queue), GFP_KERNEL);
+	if (!napi_queue)
+		return -ENOMEM;
+
+	napi_queue->queue_index = queue_index;
+
+	switch (type) {
+	case NAPI_RX_CONTAINER:
+		list_add_rcu(&napi_queue->q_list, &napi->napi_rxq_list);
+		break;
+	case NAPI_TX_CONTAINER:
+		list_add_rcu(&napi_queue->q_list, &napi->napi_txq_list);
+		break;
+	default:
+		kfree(napi_queue);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(netif_napi_add_queue);
+
 void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
 			   int (*poll)(struct napi_struct *, int), int weight)
 {

From patchwork Thu Jun  1 17:42:36 2023
X-Patchwork-Submitter: "Nambiar, Amritha"
X-Patchwork-Id: 13264344
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
Subject: [net-next/RFC PATCH v1 3/4] netdev-genl: Introduce netdev dump ctx
From: Amritha Nambiar
To: netdev@vger.kernel.org, kuba@kernel.org, davem@davemloft.net
Cc: sridhar.samudrala@intel.com, amritha.nambiar@intel.com
Date: Thu, 01 Jun 2023 10:42:36 -0700
Message-ID: <168564135607.7284.13867080215910148101.stgit@anambiarhost.jf.intel.com>
In-Reply-To: <168564116688.7284.6877238631049679250.stgit@anambiarhost.jf.intel.com>
References: <168564116688.7284.6877238631049679250.stgit@anambiarhost.jf.intel.com>

A comment in the definition of struct netlink_callback states "args is
deprecated. Cast a struct over ctx instead for proper type safety."
Introduce a netdev_nl_dump_ctx structure and replace 'args' with
netdev_nl_dump_ctx fields.

Signed-off-by: Amritha Nambiar
---
 net/core/netdev-genl.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index a4270fafdf11..8d6a840821c7 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -8,6 +8,19 @@
 
 #include "netdev-genl-gen.h"
 
+struct netdev_nl_dump_ctx {
+	int	dev_entry_hash;
+	int	dev_entry_idx;
+};
+
+static inline struct netdev_nl_dump_ctx *
+netdev_dump_ctx(struct netlink_callback *cb)
+{
+	NL_ASSERT_DUMP_CTX_FITS(struct netdev_nl_dump_ctx);
+
+	return (struct netdev_nl_dump_ctx *)cb->ctx;
+}
+
 static int
 netdev_nl_dev_fill(struct net_device *netdev, struct sk_buff *rsp,
 		   u32 portid, u32 seq, int flags, u32 cmd)
@@ -91,14 +104,15 @@ int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info)
 
 int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 {
+	struct netdev_nl_dump_ctx *ctx = netdev_dump_ctx(cb);
 	struct net *net = sock_net(skb->sk);
 	struct net_device *netdev;
 	int idx = 0, s_idx;
 	int h, s_h;
 	int err;
 
-	s_h = cb->args[0];
-	s_idx = cb->args[1];
+	s_h = ctx->dev_entry_hash;
+	s_idx = ctx->dev_entry_idx;
 
 	rtnl_lock();
 
@@ -126,8 +140,8 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 	if (err != -EMSGSIZE)
 		return err;
 
-	cb->args[1] = idx;
-	cb->args[0] = h;
+	ctx->dev_entry_idx = idx;
+	ctx->dev_entry_hash = h;
 	cb->seq = net->dev_base_seq;
 
 	return skb->len;
From patchwork Thu Jun  1 17:42:41 2023
X-Patchwork-Submitter: "Nambiar, Amritha"
X-Patchwork-Id: 13264345
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
Subject: [net-next/RFC PATCH v1 4/4] netdev-genl: Add support for exposing
 napi info from netdev
From: Amritha Nambiar
To: netdev@vger.kernel.org, kuba@kernel.org, davem@davemloft.net
Cc: sridhar.samudrala@intel.com, amritha.nambiar@intel.com
Date: Thu, 01 Jun 2023 10:42:41 -0700
Message-ID: <168564136118.7284.18138054610456895287.stgit@anambiarhost.jf.intel.com>
In-Reply-To: <168564116688.7284.6877238631049679250.stgit@anambiarhost.jf.intel.com>
References: <168564116688.7284.6877238631049679250.stgit@anambiarhost.jf.intel.com>

Add support in ynl/netdev.yaml for napi related information. The netdev
structure tracks all the napi instances and napi fields; the napi
instances and associated queue[s] can be retrieved this way. Refactor
netdev-genl to support exposing the napi<->queue[s] mapping that is
retained in a netdev.

Signed-off-by: Amritha Nambiar
---
 Documentation/netlink/specs/netdev.yaml |  39 +++++
 include/uapi/linux/netdev.h             |   4 +
 net/core/netdev-genl.c                  | 239 ++++++++++++++++++++++++++-----
 tools/include/uapi/linux/netdev.h       |   4 +
 4 files changed, 247 insertions(+), 39 deletions(-)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index b99e7ffef7a1..8d0edb529563 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -62,6 +62,44 @@ attribute-sets:
         type: u64
         enum: xdp-act
         enum-as-flags: true
+      -
+        name: napi-info
+        doc: napi information such as napi-id, napi queues etc.
+        type: nest
+        multi-attr: true
+        nested-attributes: dev-napi-info
+      -
+        name: napi-id
+        doc: napi id
+        type: u32
+      -
+        name: rx-queues
+        doc: list of rx queues associated with a napi
+        type: u16
+        multi-attr: true
+      -
+        name: tx-queues
+        doc: list of tx queues associated with a napi
+        type: u16
+        multi-attr: true
+  -
+    name: dev-napi-info
+    subset-of: dev
+    attributes:
+      -
+        name: napi-id
+        doc: napi id
+        type: u32
+      -
+        name: rx-queues
+        doc: list of rx queues associated with a napi
+        type: u16
+        multi-attr: true
+      -
+        name: tx-queues
+        doc: list of tx queues associated with a napi
+        type: u16
+        multi-attr: true
 
 operations:
   list:
@@ -77,6 +115,7 @@ operations:
             attributes:
               - ifindex
               - xdp-features
+              - napi-info
       dump:
         reply: *dev-all
     -
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index 639524b59930..16538fb1406a 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -41,6 +41,10 @@ enum {
 	NETDEV_A_DEV_IFINDEX = 1,
 	NETDEV_A_DEV_PAD,
 	NETDEV_A_DEV_XDP_FEATURES,
+	NETDEV_A_DEV_NAPI_INFO,
+	NETDEV_A_DEV_NAPI_ID,
+	NETDEV_A_DEV_RX_QUEUES,
+	NETDEV_A_DEV_TX_QUEUES,
 
 	__NETDEV_A_DEV_MAX,
 	NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1)
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 8d6a840821c7..fdaa67f53b22 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -11,6 +11,7 @@
 struct netdev_nl_dump_ctx {
 	int	dev_entry_hash;
 	int	dev_entry_idx;
+	int	napi_idx;
 };
 
 static inline struct netdev_nl_dump_ctx *
@@ -21,54 +22,187 @@ netdev_dump_ctx(struct netlink_callback *cb)
 	return (struct netdev_nl_dump_ctx *)cb->ctx;
 }
 
+enum netdev_nl_type {
+	NETDEV_NL_DO,
+	NETDEV_NL_NOTIFY,
+};
+
+static int netdev_nl_send_func(struct net_device *netdev, struct sk_buff *skb,
+			       u32 portid, enum netdev_nl_type type)
+{
+	switch (type) {
+	case NETDEV_NL_DO:
+		return genlmsg_unicast(dev_net(netdev), skb, portid);
+	case NETDEV_NL_NOTIFY:
+		return genlmsg_multicast_netns(&netdev_nl_family,
+					       dev_net(netdev), skb, 0,
+					       NETDEV_NLGRP_MGMT, GFP_KERNEL);
+	default:
+		return -EINVAL;
+	}
+}
+
 static int
-netdev_nl_dev_fill(struct net_device *netdev, struct sk_buff *rsp,
-		   u32 portid, u32 seq, int flags, u32 cmd)
+netdev_nl_dev_napi_fill_one(struct sk_buff *msg, struct napi_struct *napi)
 {
-	void *hdr;
+	struct nlattr *napi_info;
+	struct napi_queue *q, *n;
 
-	hdr = genlmsg_put(rsp, portid, seq, &netdev_nl_family, flags, cmd);
-	if (!hdr)
+	napi_info = nla_nest_start(msg, NETDEV_A_DEV_NAPI_INFO);
+	if (!napi_info)
 		return -EMSGSIZE;
 
+	if (nla_put_u32(msg, NETDEV_A_DEV_NAPI_ID, napi->napi_id))
+		goto nla_put_failure;
+
+	list_for_each_entry_safe(q, n, &napi->napi_rxq_list, q_list) {
+		if (nla_put_u16(msg, NETDEV_A_DEV_RX_QUEUES, q->queue_index))
+			goto nla_put_failure;
+	}
+
+	list_for_each_entry_safe(q, n, &napi->napi_txq_list, q_list) {
+		if (nla_put_u16(msg, NETDEV_A_DEV_TX_QUEUES, q->queue_index))
+			goto nla_put_failure;
+	}
+	nla_nest_end(msg, napi_info);
+	return 0;
+
+nla_put_failure:
+	nla_nest_cancel(msg, napi_info);
+	return -EMSGSIZE;
+}
+
+static int
+netdev_nl_dev_napi_fill(struct net_device *netdev, struct sk_buff *msg,
+			int *start)
+{
+	struct napi_struct *napi, *n;
+	int i = 0;
+
+	list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list) {
+		if (i < *start) {
+			i++;
+			continue;
+		}
+		if (netdev_nl_dev_napi_fill_one(msg, napi))
+			return -EMSGSIZE;
+		*start = ++i;
+	}
+	return 0;
+}
+
+static int
+netdev_nl_dev_napi_prepare_fill(struct net_device *netdev,
+				struct sk_buff **pskb, u32 portid, u32 seq,
+				int flags, u32 cmd, enum netdev_nl_type type)
+{
+	struct sk_buff *skb = *pskb;
+	struct nlmsghdr *nlh;
+	bool last = false;
+	int index = 0;
+	void *hdr;
+	int err;
+
+	while (!last) {
+		int tmp_index = index;
+
+		skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+		if (!skb)
+			return -ENOMEM;
+
+		hdr = genlmsg_put(skb, portid, seq, &netdev_nl_family,
+				  flags | NLM_F_MULTI, cmd);
+		if (!hdr) {
+			err = -EMSGSIZE;
+			goto nla_put_failure;
+		}
+		err = netdev_nl_dev_napi_fill(netdev, skb, &index);
+		if (!err)
+			last = true;
+		else if (err != -EMSGSIZE || tmp_index == index)
+			goto nla_put_failure;
+
+		genlmsg_end(skb, hdr);
+		err = netdev_nl_send_func(netdev, skb, portid, type);
+		if (err)
+			return err;
+	}
+
+	skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!skb)
+		return -ENOMEM;
+	nlh = nlmsg_put(skb, portid, seq, NLMSG_DONE, 0, flags | NLM_F_MULTI);
+	if (!nlh) {
+		err = -EMSGSIZE;
+		goto nla_put_failure;
+	}
+
+	return netdev_nl_send_func(netdev, skb, portid, type);
+
+nla_put_failure:
+	nlmsg_free(skb);
+	return err;
+}
+
+static int
+netdev_nl_dev_info_fill(struct net_device *netdev, struct sk_buff *rsp)
+{
 	if (nla_put_u32(rsp, NETDEV_A_DEV_IFINDEX, netdev->ifindex) ||
 	    nla_put_u64_64bit(rsp, NETDEV_A_DEV_XDP_FEATURES,
-			      netdev->xdp_features, NETDEV_A_DEV_PAD)) {
-		genlmsg_cancel(rsp, hdr);
-		return -EINVAL;
+			      netdev->xdp_features, NETDEV_A_DEV_PAD))
+		return -EMSGSIZE;
+	return 0;
+}
+
+static int
+netdev_nl_dev_fill(struct net_device *netdev, u32 portid, u32 seq, int flags,
+		   u32 cmd, enum netdev_nl_type type)
+{
+	struct sk_buff *skb;
+	void *hdr;
+	int err;
+
+	skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!skb)
+		return -ENOMEM;
+
+	hdr = genlmsg_put(skb, portid, seq, &netdev_nl_family, flags, cmd);
+	if (!hdr) {
+		err = -EMSGSIZE;
+		goto err_free_msg;
+	}
+	err = netdev_nl_dev_info_fill(netdev, skb);
+	if (err) {
+		genlmsg_cancel(skb, hdr);
+		goto err_free_msg;
 	}
-	genlmsg_end(rsp, hdr);
+	genlmsg_end(skb, hdr);
 
-	return 0;
+	err = netdev_nl_send_func(netdev, skb, portid, type);
+	if (err)
+		return err;
+
+	return netdev_nl_dev_napi_prepare_fill(netdev, &skb, portid, seq,
+					       flags, cmd, type);
+
+err_free_msg:
+	nlmsg_free(skb);
+	return err;
 }
 
 static void netdev_genl_dev_notify(struct net_device *netdev, int cmd)
 {
-	struct sk_buff *ntf;
-
 	if (!genl_has_listeners(&netdev_nl_family, dev_net(netdev),
 				NETDEV_NLGRP_MGMT))
 		return;
 
-	ntf = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
-	if (!ntf)
-		return;
-
-	if (netdev_nl_dev_fill(netdev, ntf, 0, 0, 0, cmd)) {
-		nlmsg_free(ntf);
-		return;
-	}
-
-	genlmsg_multicast_netns(&netdev_nl_family, dev_net(netdev), ntf,
-				0, NETDEV_NLGRP_MGMT, GFP_KERNEL);
+	netdev_nl_dev_fill(netdev, 0, 0, 0, cmd, NETDEV_NL_NOTIFY);
 }
 
 int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info)
 {
 	struct net_device *netdev;
-	struct sk_buff *rsp;
 	u32 ifindex;
 	int err;
 
@@ -77,29 +211,53 @@ int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info)
 
 	ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]);
 
-	rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
-	if (!rsp)
-		return -ENOMEM;
-
 	rtnl_lock();
 
 	netdev = __dev_get_by_index(genl_info_net(info), ifindex);
 	if (netdev)
-		err = netdev_nl_dev_fill(netdev, rsp, info->snd_portid,
-					 info->snd_seq, 0, info->genlhdr->cmd);
+		err = netdev_nl_dev_fill(netdev, info->snd_portid,
+					 info->snd_seq, 0, info->genlhdr->cmd,
+					 NETDEV_NL_DO);
 	else
 		err = -ENODEV;
 
 	rtnl_unlock();
 
-	if (err)
-		goto err_free_msg;
-
-	return genlmsg_reply(rsp, info);
-
-err_free_msg:
-	nlmsg_free(rsp);
 	return err;
 }
 
+static int
+netdev_nl_dev_dump_entry(struct net_device *netdev, struct sk_buff *rsp,
+			 struct netlink_callback *cb, int *start)
+{
+	int index = *start;
+	int tmp_index = index;
+	void *hdr;
+	int err;
+
+	hdr = genlmsg_put(rsp, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+			  &netdev_nl_family, NLM_F_MULTI, NETDEV_CMD_DEV_GET);
+	if (!hdr)
+		return -EMSGSIZE;
+
+	if (netdev_nl_dev_info_fill(netdev, rsp))
+		goto nla_put_failure;
+
+	err = netdev_nl_dev_napi_fill(netdev, rsp, &index);
+	if (err) {
+		if (err != -EMSGSIZE || tmp_index == index)
+			goto nla_put_failure;
+	}
+	*start = index;
+	genlmsg_end(rsp, hdr);
+
+	return err;
+
+nla_put_failure:
+	genlmsg_cancel(rsp, hdr);
+	return -EINVAL;
+}
+
 int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 {
@@ -107,12 +265,13 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 	struct netdev_nl_dump_ctx *ctx = netdev_dump_ctx(cb);
 	struct net *net = sock_net(skb->sk);
 	struct net_device *netdev;
-	int idx = 0, s_idx;
+	int idx = 0, s_idx, n_idx;
 	int h, s_h;
 	int err;
 
 	s_h = ctx->dev_entry_hash;
 	s_idx = ctx->dev_entry_idx;
+	n_idx = ctx->napi_idx;
 
 	rtnl_lock();
 
@@ -124,10 +283,10 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 		hlist_for_each_entry(netdev, head, index_hlist) {
 			if (idx < s_idx)
 				goto cont;
-			err = netdev_nl_dev_fill(netdev, skb,
-						 NETLINK_CB(cb->skb).portid,
-						 cb->nlh->nlmsg_seq, 0,
-						 NETDEV_CMD_DEV_GET);
+			err = netdev_nl_dev_dump_entry(netdev, skb, cb, &n_idx);
+			if (err == -EMSGSIZE)
+				goto out;
+			n_idx = 0;
 			if (err < 0)
 				break;
 cont:
@@ -135,6 +294,7 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 		}
 	}
 
+out:
 	rtnl_unlock();
 
 	if (err != -EMSGSIZE)
@@ -142,6 +302,7 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 
 	ctx->dev_entry_idx = idx;
 	ctx->dev_entry_hash = h;
+	ctx->napi_idx = n_idx;
 	cb->seq = net->dev_base_seq;
 
 	return skb->len;
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index 639524b59930..16538fb1406a 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -41,6 +41,10 @@ enum {
 	NETDEV_A_DEV_IFINDEX = 1,
 	NETDEV_A_DEV_PAD,
 	NETDEV_A_DEV_XDP_FEATURES,
+	NETDEV_A_DEV_NAPI_INFO,
+	NETDEV_A_DEV_NAPI_ID,
+	NETDEV_A_DEV_RX_QUEUES,
+	NETDEV_A_DEV_TX_QUEUES,
 
 	__NETDEV_A_DEV_MAX,
 	NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1)