From patchwork Wed Feb 16 09:41:38 2022
X-Patchwork-Submitter: Íñigo Huguet
X-Patchwork-Id: 12748324
X-Patchwork-Delegate: kuba@kernel.org
From: Íñigo Huguet
To: ecree.xilinx@gmail.com, habetsm.xilinx@gmail.com
Cc: davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org,
 Íñigo Huguet
Subject: [PATCH net-next resend 1/2] sfc: default config to 1 channel/core in local NUMA node only
Date: Wed, 16 Feb 2022 10:41:38 +0100
Message-Id: <20220216094139.15989-2-ihuguet@redhat.com>
In-Reply-To: <20220216094139.15989-1-ihuguet@redhat.com>
References: <20220128151922.1016841-1-ihuguet@redhat.com>
 <20220216094139.15989-1-ihuguet@redhat.com>
X-Mailing-List: netdev@vger.kernel.org

Handling channels from CPUs in a different NUMA node than the NIC can
penalize performance, so configure by default only one channel per
physical core in the NIC's local NUMA node, rather than one per core in
the whole system. Fall back to all online cores if there are no online
CPUs in the local NUMA node.
Signed-off-by: Íñigo Huguet
Acked-by: Martin Habets
---
 drivers/net/ethernet/sfc/efx_channels.c | 50 ++++++++++++++++---------
 1 file changed, 33 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
index ead550ae2709..ec6c2f231e73 100644
--- a/drivers/net/ethernet/sfc/efx_channels.c
+++ b/drivers/net/ethernet/sfc/efx_channels.c
@@ -78,31 +78,48 @@ static const struct efx_channel_type efx_default_channel_type = {
  * INTERRUPTS
  *************/
 
-static unsigned int efx_wanted_parallelism(struct efx_nic *efx)
+static unsigned int count_online_cores(struct efx_nic *efx, bool local_node)
 {
-	cpumask_var_t thread_mask;
+	cpumask_var_t filter_mask;
 	unsigned int count;
 	int cpu;
+
+	if (unlikely(!zalloc_cpumask_var(&filter_mask, GFP_KERNEL))) {
+		netif_warn(efx, probe, efx->net_dev,
+			   "RSS disabled due to allocation failure\n");
+		return 1;
+	}
+
+	cpumask_copy(filter_mask, cpu_online_mask);
+	if (local_node) {
+		int numa_node = pcibus_to_node(efx->pci_dev->bus);
+
+		cpumask_and(filter_mask, filter_mask, cpumask_of_node(numa_node));
+	}
+
+	count = 0;
+	for_each_cpu(cpu, filter_mask) {
+		++count;
+		cpumask_andnot(filter_mask, filter_mask, topology_sibling_cpumask(cpu));
+	}
+
+	free_cpumask_var(filter_mask);
+
+	return count;
+}
+
+static unsigned int efx_wanted_parallelism(struct efx_nic *efx)
+{
+	unsigned int count;
 
 	if (rss_cpus) {
 		count = rss_cpus;
 	} else {
-		if (unlikely(!zalloc_cpumask_var(&thread_mask, GFP_KERNEL))) {
-			netif_warn(efx, probe, efx->net_dev,
-				   "RSS disabled due to allocation failure\n");
-			return 1;
-		}
-
-		count = 0;
-		for_each_online_cpu(cpu) {
-			if (!cpumask_test_cpu(cpu, thread_mask)) {
-				++count;
-				cpumask_or(thread_mask, thread_mask,
-					   topology_sibling_cpumask(cpu));
-			}
-		}
+		count = count_online_cores(efx, true);
 
-		free_cpumask_var(thread_mask);
+		/* If no online CPUs in local node, fallback to any online CPUs */
+		if (count == 0)
+			count = count_online_cores(efx, false);
 	}
 
 	if (count > EFX_MAX_RX_QUEUES) {
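The counting policy in the patch above is easiest to see outside the
kernel. Below is a minimal userspace C sketch of the same logic,
assuming a made-up 8-CPU topology: the cpu_node[], cpu_core[] and
cpu_online[] tables are hypothetical stand-ins for the kernel's
cpumask_of_node(), topology_sibling_cpumask() and cpu_online_mask,
not sfc code.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical 8-CPU box: two NUMA nodes, hyperthread siblings
 * sharing a physical core (CPUs 0/4, 1/5, 2/6, 3/7). */
#define NR_CPUS 8
static const int  cpu_node[NR_CPUS]   = { 0, 0, 1, 1, 0, 0, 1, 1 };
static const int  cpu_core[NR_CPUS]   = { 0, 1, 2, 3, 0, 1, 2, 3 };
static const bool cpu_online[NR_CPUS] = { true, true, true, true,
					  true, true, true, true };

/* Same policy as count_online_cores(): count one online CPU per
 * physical core, restricted to one NUMA node when node >= 0. */
static unsigned int count_cores(int node)
{
	bool seen_core[NR_CPUS] = { false };
	unsigned int count = 0;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!cpu_online[cpu])
			continue;
		if (node >= 0 && cpu_node[cpu] != node)
			continue;
		if (seen_core[cpu_core[cpu]])
			continue;	/* sibling already counted */
		seen_core[cpu_core[cpu]] = true;
		count++;
	}
	return count;
}

int main(void)
{
	unsigned int count = count_cores(0);	/* NIC local to node 0 */

	if (count == 0)		/* fallback, as in the patch */
		count = count_cores(-1);
	printf("wanted channels: %u\n", count);	/* 2 cores in node 0 */
	return 0;
}

Skipping hyperthread siblings halves the channel count on this
topology, and restricting to the NIC's node halves it again: 2
channels instead of 8 CPUs.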
From patchwork Wed Feb 16 09:41:39 2022
X-Patchwork-Submitter: Íñigo Huguet
X-Patchwork-Id: 12748325
X-Patchwork-Delegate: kuba@kernel.org
From: Íñigo Huguet
To: ecree.xilinx@gmail.com, habetsm.xilinx@gmail.com
Cc: davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org,
 Íñigo Huguet
Subject: [PATCH net-next resend 2/2] sfc: set affinity hints in local NUMA node only
Date: Wed, 16 Feb 2022 10:41:39 +0100
Message-Id: <20220216094139.15989-3-ihuguet@redhat.com>
In-Reply-To: <20220216094139.15989-1-ihuguet@redhat.com>
References: <20220128151922.1016841-1-ihuguet@redhat.com>
 <20220216094139.15989-1-ihuguet@redhat.com>
X-Mailing-List: netdev@vger.kernel.org

Affinity hints were being set to CPUs in the local NUMA node first, and
then to CPUs in other nodes. This created two unintended issues:

1. Channels that were meant to be assigned each to a different physical
   core were assigned to hyperthreading siblings, because those
   siblings belong to the same NUMA node. Since the previous patch in
   this series, this no longer happens with the default rss_cpus
   modparam, because fewer channels are created.
2. XDP channels could be assigned to CPUs in different NUMA nodes,
   degrading performance badly (to less than half in some of my tests).

This patch sets the affinity hints so that channels are spread only
across the local NUMA node's CPUs. A fallback for the case that no CPU
in the local NUMA node is online has been added too.

Example of CPUs being assigned in a non-optimal way before this and the
previous patch (note: in this system, xdp-8 to xdp-15 are created
because num_possible_cpus == 64, but num_present_cpus == 32, so they
are never used):

$ lscpu | grep -i numa
NUMA node(s):       2
NUMA node0 CPU(s):  0-7,16-23
NUMA node1 CPU(s):  8-15,24-31
$ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
/proc/irq/141/0000:07:00.0-0/../smp_affinity_list:0
/proc/irq/142/0000:07:00.0-1/../smp_affinity_list:1
/proc/irq/143/0000:07:00.0-2/../smp_affinity_list:2
/proc/irq/144/0000:07:00.0-3/../smp_affinity_list:3
/proc/irq/145/0000:07:00.0-4/../smp_affinity_list:4
/proc/irq/146/0000:07:00.0-5/../smp_affinity_list:5
/proc/irq/147/0000:07:00.0-6/../smp_affinity_list:6
/proc/irq/148/0000:07:00.0-7/../smp_affinity_list:7
/proc/irq/149/0000:07:00.0-8/../smp_affinity_list:16
/proc/irq/150/0000:07:00.0-9/../smp_affinity_list:17
/proc/irq/151/0000:07:00.0-10/../smp_affinity_list:18
/proc/irq/152/0000:07:00.0-11/../smp_affinity_list:19
/proc/irq/153/0000:07:00.0-12/../smp_affinity_list:20
/proc/irq/154/0000:07:00.0-13/../smp_affinity_list:21
/proc/irq/155/0000:07:00.0-14/../smp_affinity_list:22
/proc/irq/156/0000:07:00.0-15/../smp_affinity_list:23
/proc/irq/157/0000:07:00.0-xdp-0/../smp_affinity_list:8
/proc/irq/158/0000:07:00.0-xdp-1/../smp_affinity_list:9
/proc/irq/159/0000:07:00.0-xdp-2/../smp_affinity_list:10
/proc/irq/160/0000:07:00.0-xdp-3/../smp_affinity_list:11
/proc/irq/161/0000:07:00.0-xdp-4/../smp_affinity_list:12
/proc/irq/162/0000:07:00.0-xdp-5/../smp_affinity_list:13
/proc/irq/163/0000:07:00.0-xdp-6/../smp_affinity_list:14
/proc/irq/164/0000:07:00.0-xdp-7/../smp_affinity_list:15
/proc/irq/165/0000:07:00.0-xdp-8/../smp_affinity_list:24
/proc/irq/166/0000:07:00.0-xdp-9/../smp_affinity_list:25
/proc/irq/167/0000:07:00.0-xdp-10/../smp_affinity_list:26
/proc/irq/168/0000:07:00.0-xdp-11/../smp_affinity_list:27
/proc/irq/169/0000:07:00.0-xdp-12/../smp_affinity_list:28
/proc/irq/170/0000:07:00.0-xdp-13/../smp_affinity_list:29
/proc/irq/171/0000:07:00.0-xdp-14/../smp_affinity_list:30
/proc/irq/172/0000:07:00.0-xdp-15/../smp_affinity_list:31

CPU assignments after this and the previous patch, with normal channels
created one per physical core in the local NUMA node only, and affinity
hints set only to local NUMA node CPUs:
$ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
/proc/irq/116/0000:07:00.0-0/../smp_affinity_list:0
/proc/irq/117/0000:07:00.0-1/../smp_affinity_list:1
/proc/irq/118/0000:07:00.0-2/../smp_affinity_list:2
/proc/irq/119/0000:07:00.0-3/../smp_affinity_list:3
/proc/irq/120/0000:07:00.0-4/../smp_affinity_list:4
/proc/irq/121/0000:07:00.0-5/../smp_affinity_list:5
/proc/irq/122/0000:07:00.0-6/../smp_affinity_list:6
/proc/irq/123/0000:07:00.0-7/../smp_affinity_list:7
/proc/irq/124/0000:07:00.0-xdp-0/../smp_affinity_list:16
/proc/irq/125/0000:07:00.0-xdp-1/../smp_affinity_list:17
/proc/irq/126/0000:07:00.0-xdp-2/../smp_affinity_list:18
/proc/irq/127/0000:07:00.0-xdp-3/../smp_affinity_list:19
/proc/irq/128/0000:07:00.0-xdp-4/../smp_affinity_list:20
/proc/irq/129/0000:07:00.0-xdp-5/../smp_affinity_list:21
/proc/irq/130/0000:07:00.0-xdp-6/../smp_affinity_list:22
/proc/irq/131/0000:07:00.0-xdp-7/../smp_affinity_list:23
/proc/irq/132/0000:07:00.0-xdp-8/../smp_affinity_list:0
/proc/irq/133/0000:07:00.0-xdp-9/../smp_affinity_list:1
/proc/irq/134/0000:07:00.0-xdp-10/../smp_affinity_list:2
/proc/irq/135/0000:07:00.0-xdp-11/../smp_affinity_list:3
/proc/irq/136/0000:07:00.0-xdp-12/../smp_affinity_list:4
/proc/irq/137/0000:07:00.0-xdp-13/../smp_affinity_list:5
/proc/irq/138/0000:07:00.0-xdp-14/../smp_affinity_list:6
/proc/irq/139/0000:07:00.0-xdp-15/../smp_affinity_list:7

Signed-off-by: Íñigo Huguet
---
 drivers/net/ethernet/sfc/efx_channels.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
index ec6c2f231e73..ef3168fbb5a6 100644
--- a/drivers/net/ethernet/sfc/efx_channels.c
+++ b/drivers/net/ethernet/sfc/efx_channels.c
@@ -387,10 +387,18 @@ void efx_set_interrupt_affinity(struct efx_nic *efx)
 {
 	struct efx_channel *channel;
 	unsigned int cpu;
+	int numa_node = pcibus_to_node(efx->pci_dev->bus);
+	const struct cpumask *numa_mask = cpumask_of_node(numa_node);
 
+	/* If no online CPUs in local node, fallback to any online CPU */
+	if (cpumask_first_and(cpu_online_mask, numa_mask) >= nr_cpu_ids)
+		numa_mask = cpu_online_mask;
+
+	cpu = -1;
 	efx_for_each_channel(channel, efx) {
-		cpu = cpumask_local_spread(channel->channel,
-					   pcibus_to_node(efx->pci_dev->bus));
+		cpu = cpumask_next_and(cpu, cpu_online_mask, numa_mask);
+		if (cpu >= nr_cpu_ids)
+			cpu = cpumask_first_and(cpu_online_mask, numa_mask);
 		irq_set_affinity_hint(channel->irq, cpumask_of(cpu));
 	}
 }
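The cpumask_next_and()/cpumask_first_and() pair in the patch above is
simply a round-robin walk over the node-local online CPUs that wraps
when channels outnumber CPUs. A minimal userspace C sketch of that
assignment policy follows; the CPU list and channel count are taken
from the node0 example above but are illustrative only, not sfc code.

#include <stdio.h>

/* Node-local online CPUs, as in NUMA node0 of the example above.
 * Stand-in for (cpu_online_mask AND cpumask_of_node(numa_node)). */
static const int node_cpus[] = { 0, 1, 2, 3, 4, 5, 6, 7,
				 16, 17, 18, 19, 20, 21, 22, 23 };
#define NODE_NCPUS (int)(sizeof(node_cpus) / sizeof(node_cpus[0]))

int main(void)
{
	const int nr_channels = 24;	/* 8 normal + 16 XDP channels */
	int channel;

	/* Wrap-around walk: what cpumask_next_and() plus the
	 * cpumask_first_and() reset do in efx_set_interrupt_affinity(). */
	for (channel = 0; channel < nr_channels; channel++)
		printf("channel %2d -> cpu %d\n",
		       channel, node_cpus[channel % NODE_NCPUS]);
	return 0;
}

This reproduces the "after" listing: the eight normal channels land on
CPUs 0-7, xdp-0 to xdp-7 on CPUs 16-23, and xdp-8 to xdp-15 wrap back
to CPUs 0-7, all within node0.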