From patchwork Sun Nov 3 03:21:04 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Sridhar, Kanchana P" <kanchana.p.sridhar@intel.com>
X-Patchwork-Id: 13860292
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
	yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev,
	usamaarif642@gmail.com, ryan.roberts@arm.com, ying.huang@intel.com,
	21cnbao@gmail.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org,
	herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com,
	ardb@kernel.org, ebiggers@google.com, surenb@google.com,
	kristen.c.accardi@intel.com, zanussi@kernel.org
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
	kanchana.p.sridhar@intel.com
Subject: [PATCH v2 06/13] crypto: iaa - Change cpu-to-iaa mappings to evenly balance cores to IAAs.
Date: Sat, 2 Nov 2024 20:21:04 -0700
Message-Id: <20241103032111.333282-7-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20241103032111.333282-1-kanchana.p.sridhar@intel.com>
References: <20241103032111.333282-1-kanchana.p.sridhar@intel.com>

This change distributes the cpus more evenly among the IAAs in each socket.

Old algorithm to assign cpus to IAA:
------------------------------------
If "nr_cpus" = nr_logical_cpus (includes hyper-threading), the current
algorithm determines "nr_cpus_per_node" = nr_cpus / nr_nodes.
Hence, on a 2-socket Sapphire Rapids server where each socket has 56 cores
and 4 IAA devices, nr_cpus_per_node = 112.

Further, cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa
Hence, cpus_per_iaa = 224/8 = 28.

The iaa_crypto driver then assigns 28 "logical" node cpus per IAA device
on that node, which results in this cpu-to-iaa mapping:

lscpu|grep NUMA
NUMA node(s):          2
NUMA node0 CPU(s):     0-55,112-167
NUMA node1 CPU(s):     56-111,168-223

NUMA node 0:
cpu   0-27     28-55    112-139   140-167
iaa   iax1     iax3     iax5      iax7

NUMA node 1:
cpu   56-83    84-111   168-195   196-223
iaa   iax9     iax11    iax13     iax15

This appears non-optimal for a few reasons:

1) The 2 logical threads on a core will get assigned to different IAA
   devices. For example:

   cpu 0:   iax1
   cpu 112: iax5

2) One of the logical threads on a core is assigned to an IAA that is not
   closest to that core. For example, cpu 112.

3) If numactl is used to start processes sequentially on the logical cores,
   some of the IAA devices on the socket could be over-subscribed, while
   others could be under-utilized.

This patch introduces a scheme to more evenly balance the logical cores
to IAA devices on a socket.

New algorithm to assign cpus to IAA:
------------------------------------
We introduce a function "cpu_to_iaa()" that takes a logical cpu and
returns the IAA device closest to it.

If "nr_cpus" = nr_logical_cpus (includes hyper-threading), the new
algorithm determines "nr_cpus_per_node" = topology_num_cores_per_package().

Hence, on a 2-socket Sapphire Rapids server where each socket has 56 cores
and 4 IAA devices, nr_cpus_per_node = 56.

Further, cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa
Hence, cpus_per_iaa = 112/8 = 14.

The iaa_crypto driver then assigns 14 "logical" node cpus per IAA device
on that node, which results in this cpu-to-iaa mapping:

NUMA node 0:
cpu   0-13,112-125    14-27,126-139   28-41,140-153   42-55,154-167
iaa   iax1            iax3            iax5            iax7

NUMA node 1:
cpu   56-69,168-181   70-83,182-195   84-97,196-209   98-111,210-223
iaa   iax9            iax11           iax13           iax15

This resolves the 3 issues with non-optimal cpu-to-iaa mappings pointed
out earlier for the existing approach.

Originally-by: Tom Zanussi
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 84 ++++++++++++++--------
 1 file changed, 54 insertions(+), 30 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index c4b143dd1ddd..a12a8f9caa84 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -55,6 +55,46 @@ static struct idxd_wq *wq_table_next_wq(int cpu)
 	return entry->wqs[entry->cur_wq];
 }
 
+/*
+ * Given a cpu, find the closest IAA instance. The idea is to try to
+ * choose the most appropriate IAA instance for a caller and spread
+ * available workqueues around to clients.
+ */
+static inline int cpu_to_iaa(int cpu)
+{
+	int node, n_cpus = 0, test_cpu, iaa = 0;
+	int nr_iaa_per_node;
+	const struct cpumask *node_cpus;
+
+	if (!nr_nodes)
+		return 0;
+
+	nr_iaa_per_node = nr_iaa / nr_nodes;
+	if (!nr_iaa_per_node)
+		return 0;
+
+	for_each_online_node(node) {
+		node_cpus = cpumask_of_node(node);
+		if (!cpumask_test_cpu(cpu, node_cpus))
+			continue;
+
+		for_each_cpu(test_cpu, node_cpus) {
+			if ((n_cpus % nr_cpus_per_node) == 0)
+				iaa = node * nr_iaa_per_node;
+
+			if (test_cpu == cpu)
+				return iaa;
+
+			n_cpus++;
+
+			if ((n_cpus % cpus_per_iaa) == 0)
+				iaa++;
+		}
+	}
+
+	return -1;
+}
+
 static void wq_table_add(int cpu, struct idxd_wq *wq)
 {
 	struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
@@ -895,8 +935,7 @@ static int wq_table_add_wqs(int iaa, int cpu)
  */
 static void rebalance_wq_table(void)
 {
-	const struct cpumask *node_cpus;
-	int node, cpu, iaa = -1;
+	int cpu, iaa;
 
 	if (nr_iaa == 0)
 		return;
@@ -906,37 +945,22 @@ static void rebalance_wq_table(void)
 
 	clear_wq_table();
 
-	if (nr_iaa == 1) {
-		for (cpu = 0; cpu < nr_cpus; cpu++) {
-			if (WARN_ON(wq_table_add_wqs(0, cpu))) {
-				pr_debug("could not add any wqs for iaa 0 to cpu %d!\n", cpu);
-				return;
-			}
-		}
-
-		return;
-	}
-
-	for_each_node_with_cpus(node) {
-		node_cpus = cpumask_of_node(node);
-
-		for (cpu = 0; cpu < cpumask_weight(node_cpus); cpu++) {
-			int node_cpu = cpumask_nth(cpu, node_cpus);
-
-			if (WARN_ON(node_cpu >= nr_cpu_ids)) {
-				pr_debug("node_cpu %d doesn't exist!\n", node_cpu);
-				return;
-			}
+	for (cpu = 0; cpu < nr_cpus; cpu++) {
+		iaa = cpu_to_iaa(cpu);
+		pr_debug("rebalance: cpu=%d iaa=%d\n", cpu, iaa);
 
-			if ((cpu % cpus_per_iaa) == 0)
-				iaa++;
+		if (WARN_ON(iaa == -1)) {
+			pr_debug("rebalance (cpu_to_iaa(%d)) failed!\n", cpu);
+			return;
+		}
 
-			if (WARN_ON(wq_table_add_wqs(iaa, node_cpu))) {
-				pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu);
-				return;
-			}
+		if (WARN_ON(wq_table_add_wqs(iaa, cpu))) {
+			pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu);
+			return;
 		}
 	}
+
+	pr_debug("Finished rebalance local wqs.");
 }
 
 static inline int check_completion(struct device *dev,
@@ -2332,7 +2356,7 @@ static int __init iaa_crypto_init_module(void)
 		pr_err("IAA couldn't find any nodes with cpus\n");
 		return -ENODEV;
 	}
-	nr_cpus_per_node = nr_cpus / nr_nodes;
+	nr_cpus_per_node = topology_num_cores_per_package();
 
 	if (crypto_has_comp("deflate-generic", 0, 0))
 		deflate_generic_tfm = crypto_alloc_comp("deflate-generic", 0, 0);
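
For anyone who wants to sanity-check the new mapping outside the kernel, below
is a minimal user-space sketch (not part of the patch) that re-implements the
cpu_to_iaa() arithmetic with the example topology from the commit message
hard-coded: 2 nodes, 56 cores per socket plus hyper-threading (224 logical
cpus), and 8 IAA devices. The cpu_node() helper and the NR_* constants are
invented for this illustration only; the real driver derives the same values
from cpumask_of_node() and topology_num_cores_per_package().

/* Illustration only -- not part of the kernel patch. */
#include <stdio.h>

#define NR_CPUS           224  /* logical cpus across both sockets */
#define NR_NODES            2
#define NR_IAA              8  /* 4 IAA devices per socket */
#define NR_CPUS_PER_NODE   56  /* cores per package, as in the new algorithm */
#define CPUS_PER_IAA  ((NR_NODES * NR_CPUS_PER_NODE) / NR_IAA)  /* 14 */

/* node 0: cpus 0-55,112-167; node 1: cpus 56-111,168-223 (per lscpu above) */
static int cpu_node(int cpu)
{
	return (cpu % 112) < 56 ? 0 : 1;
}

/* user-space rendition of the patch's cpu_to_iaa() */
static int cpu_to_iaa(int cpu)
{
	int nr_iaa_per_node = NR_IAA / NR_NODES;
	int node, test_cpu, n_cpus = 0, iaa = 0;

	for (node = 0; node < NR_NODES; node++) {
		if (cpu_node(cpu) != node)
			continue;

		/* walk this node's cpus in numeric order, like for_each_cpu() */
		for (test_cpu = 0; test_cpu < NR_CPUS; test_cpu++) {
			if (cpu_node(test_cpu) != node)
				continue;

			/* wrap back to the node's first IAA when the second
			 * hyper-thread pass over the node's cores begins */
			if ((n_cpus % NR_CPUS_PER_NODE) == 0)
				iaa = node * nr_iaa_per_node;

			if (test_cpu == cpu)
				return iaa;

			n_cpus++;
			if ((n_cpus % CPUS_PER_IAA) == 0)
				iaa++;
		}
	}

	return -1;
}

int main(void)
{
	int cpu;

	/* iaa 0..3 correspond to iax1/iax3/iax5/iax7 on node 0 in the tables
	 * above, iaa 4..7 to iax9/iax11/iax13/iax15 on node 1 */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu %3d -> iaa %d\n", cpu, cpu_to_iaa(cpu));

	return 0;
}

Running the sketch shows, for instance, cpu 0 and cpu 112 both mapping to
iaa 0, matching the "cpu 0-13,112-125 -> iax1" column in the table above,
whereas the old nr_cpus / nr_nodes based scheme split the two hyper-threads
of core 0 across iax1 and iax5.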