From patchwork Tue Apr 20 00:18:41 2021
X-Patchwork-Submitter: "Song Bao Hua (Barry Song)"
X-Patchwork-Id: 12213111
From: Barry Song
Subject: [RFC PATCH v6 1/4] topology: Represent clusters of CPUs within a die
Date: Tue, 20 Apr 2021 12:18:41 +1200
Message-ID: <20210420001844.9116-2-song.bao.hua@hisilicon.com>
In-Reply-To: <20210420001844.9116-1-song.bao.hua@hisilicon.com>

From: Jonathan Cameron

Both ACPI and DT provide the ability to describe additional layers of
topology between that of individual cores and higher level constructs
such as the level at which the last level cache is shared.
In ACPI this can be represented in PPTT as a Processor Hierarchy
Node Structure [1] that is the parent of the CPU cores and in turn
has a parent Processor Hierarchy Node Structure representing
a higher level of topology.

For example Kunpeng 920 has 6 or 8 clusters in each NUMA node, and
each cluster has 4 CPUs. All clusters share L3 cache data, but each
cluster has its own local L3 tag. On the other hand, the CPUs within
one cluster share some internal system bus.
 +-----------------------------------+          +---------+
 |  +------+  +------+               |          |         |
 |  | CPU0 |  | CPU1 |  +--------+   |          |         |
 |  +------+  +------+  |   L3   +---+----------+         |
 |  +------+  +------+  |  tag   |   |          |         |
 |  | CPU2 |  | CPU3 |  +--------+   |          |   L3    |
 |  +------+  +------+    cluster    |          |  data   |
 +-----------------------------------+          |         |
 +-----------------------------------+          |         |
 |  ...  (5 or 7 more clusters, each |          |         |
 |       with its own L3 tag)    ... |          |         |
 +-----------------------------------+          +---------+

That means the cost to transfer ownership of a cacheline between CPUs
within a cluster is lower than between CPUs in different clusters on
the same die. Hence, it can make sense to tell the scheduler to use
the cache affinity of the cluster to make better decisions on thread
migration.

This patch simply exposes this information to userspace libraries
like hwloc by providing cluster_cpus and related sysfs attributes.
PoC of HWLOC support at [2].

Note this patch only handles the ACPI case.

Special consideration is needed for SMT processors, where it is
necessary to move 2 levels up the hierarchy from the leaf nodes
(thus skipping the processor core level).

Currently the ID provided is the offset of the Processor Hierarchy
Node Structure within PPTT. Whilst this is unique it is not terribly
elegant so alternative suggestions welcome.

Note that arm64 / ACPI does not provide any means of identifying a
die level in the topology, but that may be unrelated to the cluster
level.

[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
    structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster

Signed-off-by: Jonathan Cameron
Signed-off-by: Barry Song
---
-v6:
 * the topology ABI documentation requested by Greg is not complete
   yet; it will be sent as a separate patch.
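A minimal userspace sketch (not part of the patch) of how the new
attributes read back once the series is applied; the paths follow the
documented sysfs layout below, and cpu0 being online is an assumption:

/* cc -o cluster_info cluster_info.c && ./cluster_info */
#include <stdio.h>

static void show(const char *path)
{
	char buf[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		printf("%s: <absent>\n", path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", path, buf);
	fclose(f);
}

int main(void)
{
	/* cluster_id reads as -1 on platforms without cluster info */
	show("/sys/devices/system/cpu/cpu0/topology/cluster_id");
	show("/sys/devices/system/cpu/cpu0/topology/cluster_cpus");
	show("/sys/devices/system/cpu/cpu0/topology/cluster_cpus_list");
	return 0;
}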
 Documentation/admin-guide/cputopology.rst | 26 +++++++++++--
 arch/arm64/kernel/topology.c              |  2 +
 drivers/acpi/pptt.c                       | 63 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 15 ++++++++
 drivers/base/topology.c                   | 10 +++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/topology.h                  |  6 +++
 8 files changed, 128 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafc..f9d3745 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
 	identifier (rather than the kernel's).  The actual value is
 	architecture and platform dependent.
 
+cluster_id:
+
+	the Cluster ID of cpuX.  Typically it is the hardware platform's
+	identifier (rather than the kernel's).  The actual value is
+	architecture and platform dependent.
+
 book_id:
 
 	the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
 	human-readable list of CPUs sharing the same physical_package_id.
 	(deprecated name: "core_siblings_list")
 
+cluster_cpus:
+
+	internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+	human-readable list of CPUs within the same cluster.
+
 die_cpus:
 
 	internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
 
 	#define topology_physical_package_id(cpu)
 	#define topology_die_id(cpu)
+	#define topology_cluster_id(cpu)
 	#define topology_core_id(cpu)
 	#define topology_book_id(cpu)
 	#define topology_drawer_id(cpu)
 	#define topology_sibling_cpumask(cpu)
 	#define topology_core_cpumask(cpu)
+	#define topology_cluster_cpumask(cpu)
 	#define topology_die_cpumask(cpu)
 	#define topology_book_cpumask(cpu)
 	#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
 
 1) topology_physical_package_id: -1
 2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU
 
 For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
 default definitions for topology_book_id() and topology_book_cpumask().
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index e08a412..d72eb8d 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
 			cpu_topology[cpu].thread_id  = -1;
 			cpu_topology[cpu].core_id    = topology_id;
 		}
+		topology_id = find_acpi_cpu_topology_cluster(cpu);
+		cpu_topology[cpu].cluster_id = topology_id;
 		topology_id = find_acpi_cpu_topology_package(cpu);
 		cpu_topology[cpu].package_id = topology_id;
 
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..11f8b02 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,69 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
 }
 
 /**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present, is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	struct acpi_table_header *table;
+	acpi_status status;
+	struct acpi_pptt_processor *cpu_node, *cluster_node;
+	u32 acpi_cpu_id;
+	int retval;
+	int is_thread;
+
+	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+	if (ACPI_FAILURE(status)) {
+		acpi_pptt_warn_missing();
+		return -ENOENT;
+	}
+
+	acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+	if (cpu_node == NULL || !cpu_node->parent) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+
+	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+	cluster_node = fetch_pptt_node(table, cpu_node->parent);
+	if (cluster_node == NULL) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+	if (is_thread) {
+		if (!cluster_node->parent) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+		cluster_node = fetch_pptt_node(table, cluster_node->parent);
+		if (cluster_node == NULL) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+	}
+	retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+	acpi_put_table(table);
+
+	return retval;
+}
+
+/**
 * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
 * @cpu: Kernel logical CPU number
 *
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index de8587c..ca3b8c1 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return core_mask;
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].cluster_sibling;
+}
+
 void update_siblings_masks(unsigned int cpuid)
 {
 	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;
 
+		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+		}
+
 		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
 		cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
 
@@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu)
 	cpumask_clear(&cpu_topo->llc_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
 
+	cpumask_clear(&cpu_topo->cluster_sibling);
+	cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
 	cpumask_clear(&cpu_topo->core_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
 	cpumask_clear(&cpu_topo->thread_sibling);
@@ -556,6 +569,7 @@ void __init reset_cpu_topology(void)
 
 		cpu_topo->thread_id = -1;
 		cpu_topo->core_id = -1;
+		cpu_topo->cluster_id = -1;
 		cpu_topo->package_id = -1;
 		cpu_topo->llc_id = -1;
 
@@ -571,6 +585,7 @@ void remove_cpu_topology(unsigned int cpu)
 		cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
 	for_each_cpu(sibling, topology_sibling_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
 	for_each_cpu(sibling, topology_llc_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
 
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fc..7157ac0 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@
 define_id_show_func(die_id);
 static DEVICE_ATTR_RO(die_id);
 
+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
 define_id_show_func(core_id);
 static DEVICE_ATTR_RO(core_id);
 
@@ -61,6 +64,10 @@
 static DEVICE_ATTR_RO(core_siblings);
 static DEVICE_ATTR_RO(core_siblings_list);
 
+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
 define_siblings_show_func(die_cpus, die_cpumask);
 static DEVICE_ATTR_RO(die_cpus);
 static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@ static struct attribute *default_attrs[] = {
 	&dev_attr_physical_package_id.attr,
 	&dev_attr_die_id.attr,
+	&dev_attr_cluster_id.attr,
 	&dev_attr_core_id.attr,
 	&dev_attr_thread_siblings.attr,
 	&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@
 	&dev_attr_core_cpus_list.attr,
 	&dev_attr_core_siblings.attr,
 	&dev_attr_core_siblings_list.attr,
+	&dev_attr_cluster_cpus.attr,
+	&dev_attr_cluster_cpus_list.attr,
 	&dev_attr_die_cpus.attr,
 	&dev_attr_die_cpus_list.attr,
 	&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 9f43241..138b779 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1307,6 +1307,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
 #ifdef CONFIG_ACPI_PPTT
 int acpi_pptt_cpu_is_thread(unsigned int cpu);
 int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
 int find_acpi_cpu_topology_package(unsigned int cpu);
 int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
 int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1319,6 +1320,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
 {
 	return -EINVAL;
 }
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	return -EINVAL;
+}
 static inline int find_acpi_cpu_topology_package(unsigned int cpu)
 {
 	return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 0f6cd6b..987c7ea 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
 struct cpu_topology {
 	int thread_id;
 	int core_id;
+	int cluster_id;
 	int package_id;
 	int llc_id;
 	cpumask_t thread_sibling;
 	cpumask_t core_sibling;
+	cpumask_t cluster_sibling;
 	cpumask_t llc_sibling;
 };
 
@@ -60,13 +62,16 @@ struct cpu_topology {
 extern struct cpu_topology cpu_topology[NR_CPUS];
 
 #define topology_physical_package_id(cpu)	(cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu)	(cpu_topology[cpu].cluster_id)
 #define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
 #define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
 #define topology_sibling_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu)	(&cpu_topology[cpu].cluster_sibling)
#define topology_llc_cpumask(cpu)	(&cpu_topology[cpu].llc_sibling)
 
 void init_cpu_topology(void);
 void store_cpu_topology(unsigned int cpuid);
 const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
 void update_siblings_masks(unsigned int cpu);
 void remove_cpu_topology(unsigned int cpuid);
 void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 7634cd7..80d27d7 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -186,6 +186,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_die_id
 #define topology_die_id(cpu)	((void)(cpu), -1)
 #endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu)	((void)(cpu), -1)
+#endif
 #ifndef topology_core_id
 #define topology_core_id(cpu)	((void)(cpu), 0)
 #endif
@@ -195,6 +198,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_core_cpumask
 #define topology_core_cpumask(cpu)	cpumask_of(cpu)
 #endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu)	cpumask_of(cpu)
+#endif
 #ifndef topology_die_cpumask
 #define topology_die_cpumask(cpu)	cpumask_of(cpu)
 #endif
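A hedged sketch of how kernel code could consume the new mask
(hypothetical helper, not part of the series): on architectures without
cluster information, topology_cluster_cpumask() falls back to
cpumask_of(cpu), so the walk below degenerates to the CPU itself.

#include <linux/cpumask.h>
#include <linux/topology.h>

/* Hypothetical helper: count CPUs sharing a cluster with @cpu. */
static int count_cluster_siblings(int cpu)
{
	int sibling, n = 0;

	for_each_cpu(sibling, topology_cluster_cpumask(cpu))
		n++;

	return n;
}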
From patchwork Tue Apr 20 00:18:42 2021
X-Patchwork-Submitter: "Song Bao Hua (Barry Song)"
X-Patchwork-Id: 12213105
From: Barry Song
Subject: [RFC PATCH v6 2/4] scheduler: add scheduler level for clusters
Date: Tue, 20 Apr 2021 12:18:42 +1200
Message-ID: <20210420001844.9116-3-song.bao.hua@hisilicon.com>
In-Reply-To: <20210420001844.9116-1-song.bao.hua@hisilicon.com>

ARM64 chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share L3 cache data, but each cluster
has its own local L3 tag. On the other hand, the CPUs within one
cluster share some internal system bus. This means the cache coherence
overhead inside one cluster is much less than the overhead across
clusters.

This patch adds the sched_domain for clusters. On Kunpeng 920, without
this patch, domain0 of cpu0 would be MC with cpu0~cpu23; with this
patch, MC becomes domain1, and a new domain0 "CLS" includes cpu0-cpu3.
This will help spread unrelated tasks among clusters, thus decreasing
contention and improving throughput. For example, the stream benchmark
improves by 20%+ when parallelism is 6, and by around 5% when
parallelism is 12:

(1) -P 6

$ numactl -N 0 /usr/lib/lmbench/bin/stream -P 6 -M 1024M -N 5

w/o patch:
STREAM copy latency: 2.46 nanoseconds
STREAM copy bandwidth: 39096.28 MB/sec
STREAM scale latency: 2.46 nanoseconds
STREAM scale bandwidth: 38970.26 MB/sec
STREAM add latency: 4.45 nanoseconds
STREAM add bandwidth: 32332.04 MB/sec
STREAM triad latency: 4.07 nanoseconds
STREAM triad bandwidth: 35387.69 MB/sec

w/ patch:
STREAM copy latency: 2.02 nanoseconds
STREAM copy bandwidth: 47604.47 MB/sec   +21.7%
STREAM scale latency: 2.04 nanoseconds
STREAM scale bandwidth: 47066.84 MB/sec  +20.8%
STREAM add latency: 3.35 nanoseconds
STREAM add bandwidth: 42942.15 MB/sec    +32.8%
STREAM triad latency: 3.16 nanoseconds
STREAM triad bandwidth: 45619.18 MB/sec  +28.9%

On the other hand, stream results could change significantly between
runs without the patch, e.g.:

a.
STREAM copy latency: 2.16 nanoseconds
STREAM copy bandwidth: 44448.45 MB/sec
STREAM scale latency: 2.17 nanoseconds
STREAM scale bandwidth: 44320.77 MB/sec
STREAM add latency: 3.77 nanoseconds
STREAM add bandwidth: 38230.54 MB/sec
STREAM triad latency: 3.88 nanoseconds
STREAM triad bandwidth: 37072.10 MB/sec

b.
STREAM copy latency: 2.16 nanoseconds
STREAM copy bandwidth: 44403.22 MB/sec
STREAM scale latency: 2.39 nanoseconds
STREAM scale bandwidth: 40173.69 MB/sec
STREAM add latency: 3.77 nanoseconds
STREAM add bandwidth: 38232.56 MB/sec
STREAM triad latency: 3.38 nanoseconds
STREAM triad bandwidth: 42592.04 MB/sec

Obviously this is because the 6 threads are placed randomly on 6 cores:
sometimes they are packed into clusters, sometimes they are spread
widely.

(2) -P 12

$ numactl -N 0 /usr/lib/lmbench/bin/stream -P 12 -M 1024M -N 5

w/o patch:
STREAM copy latency: 3.37 nanoseconds
STREAM copy bandwidth: 57008.80 MB/sec
STREAM scale latency: 3.38 nanoseconds
STREAM scale bandwidth: 56848.47 MB/sec
STREAM add latency: 5.50 nanoseconds
STREAM add bandwidth: 52398.62 MB/sec
STREAM triad latency: 5.09 nanoseconds
STREAM triad bandwidth: 56591.60 MB/sec

w/ patch:
STREAM copy latency: 3.24 nanoseconds
STREAM copy bandwidth: 59338.60 MB/sec   +4.1%
STREAM scale latency: 3.25 nanoseconds
STREAM scale bandwidth: 58993.23 MB/sec  +3.7%
STREAM add latency: 5.19 nanoseconds
STREAM add bandwidth: 55517.45 MB/sec    +5.9%
STREAM triad latency: 4.86 nanoseconds
STREAM triad bandwidth: 59245.34 MB/sec  +4.7%

Obviously the load balance between clusters helps improve the
parallelism of unrelated tasks.

To evaluate the performance impact on related tasks talking with each
other, we run the below hackbench with a different -g parameter from 6
to 32 in a NUMA node with 24 cores; for each g, we run the command 20
times and take the average time:

$ numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g $1

As -f is set to 1, all threads are talking with each other
monogamously. hackbench reports the time needed to complete a certain
number of message transmissions between a certain number of tasks, for
example:

$ numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g 6
Running in threaded mode with 6 groups using 2 file descriptors each
(== 12 tasks)
Each sender will pass 1000000 messages of 100 bytes

The below is the result of hackbench w/ and w/o the patch:

g=    6      12     18     24     28     32
w/o: 1.2474 1.5635 1.5133 1.4796 1.6177 1.7898
w/ : 1.1458 1.3309 1.3416 1.4990 1.9212 2.3411

It seems this patch benefits hackbench when the load is relatively low,
while it hurts hackbench much when the load is relatively high (56 and
64 threads on 24 cores).

Signed-off-by: Barry Song
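A userspace sketch (mine, not from the patch) that makes the spreading
effect visible: start a few unrelated busy threads and watch where the
scheduler places them via sched_getcpu(); with the CLS domain they
should tend to land in different clusters. The thread count is an
assumption matching the -P 6 case above.

/* cc -O2 -pthread -o spread spread.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *burn(void *arg)
{
	volatile unsigned long x = 0;
	long i;

	for (;;) {
		for (i = 0; i < 100000000L; i++)
			x += i;
		/* report current placement periodically */
		printf("thread %ld on cpu %d\n", (long)arg, sched_getcpu());
	}
	return NULL;
}

int main(void)
{
	pthread_t t[6];
	long i;

	for (i = 0; i < 6; i++)
		pthread_create(&t[i], NULL, burn, (void *)i);
	pthread_join(t[0], NULL);	/* runs until interrupted */
	return 0;
}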
---
 arch/arm64/Kconfig             |  7 +++++++
 include/linux/sched/cluster.h  | 19 +++++++++++++++++++
 include/linux/sched/sd_flags.h |  9 +++++++++
 include/linux/sched/topology.h |  7 +++++++
 include/linux/topology.h       |  7 +++++++
 kernel/sched/core.c            | 20 ++++++++++++++++++++
 kernel/sched/fair.c            |  4 ++++
 kernel/sched/sched.h           |  1 +
 kernel/sched/topology.c        |  6 ++++++
 9 files changed, 80 insertions(+)
 create mode 100644 include/linux/sched/cluster.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1f212b4..9432a30 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -977,6 +977,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (sharing an
+	  internal bus or sharing LLC cache tags). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/include/linux/sched/cluster.h b/include/linux/sched/cluster.h
new file mode 100644
index 0000000..ea6c475
--- /dev/null
+++ b/include/linux/sched/cluster.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_SCHED_CLUSTER_H
+#define _LINUX_SCHED_CLUSTER_H
+
+#include
+
+#ifdef CONFIG_SCHED_CLUSTER
+extern struct static_key_false sched_cluster_present;
+
+static __always_inline bool sched_cluster_active(void)
+{
+	return static_branch_likely(&sched_cluster_present);
+}
+#else
+static inline bool sched_cluster_active(void) { return false; }
+
+#endif
+
+#endif
diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index 34b21e9..fc3c894 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -100,6 +100,15 @@ SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 
 /*
+ * Domain members share CPU cluster resources (i.e. llc cache tags)
+ *
+ * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
+ *               the cluster resources (such as llc tags and internal bus)
+ * NEEDS_GROUPS: Caches are shared between groups.
+ */
+SD_FLAG(SD_SHARE_CLS_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
+
+/*
  * Domain members share CPU package resources (i.e. caches)
  *
  * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8f0f778..846fcac 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline int cpu_cluster_flags(void)
+{
+	return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES;
+}
+#endif
+
 #ifdef CONFIG_SCHED_MC
 static inline int cpu_core_flags(void)
 {
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 80d27d7..0b3704a 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -212,6 +212,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif
 
+#if defined(CONFIG_SCHED_CLUSTER) && !defined(cpu_cluster_mask)
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 95bd6ab..30c300c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7840,6 +7840,17 @@ int sched_cpu_activate(unsigned int cpu)
 	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
 		static_branch_inc_cpuslocked(&sched_smt_present);
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	/*
+	 * When going up, increment the number of cluster cpus with
+	 * cluster present.
+	 */
+	if (cpumask_weight(cpu_cluster_mask(cpu)) > cpumask_weight(cpu_smt_mask(cpu)) &&
+	    cpumask_weight(cpu_cluster_mask(cpu)) < cpumask_weight(cpu_coregroup_mask(cpu)))
+		static_branch_inc_cpuslocked(&sched_cluster_present);
+#endif
+
 	set_cpu_active(cpu, true);
 
 	if (sched_smp_initialized) {
@@ -7916,6 +7927,15 @@ int sched_cpu_deactivate(unsigned int cpu)
 		static_branch_dec_cpuslocked(&sched_smt_present);
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+	/*
+	 * When going down, decrement the number of cpus with cluster present.
+	 */
+	if (cpumask_weight(cpu_cluster_mask(cpu)) > cpumask_weight(cpu_smt_mask(cpu)) &&
+	    cpumask_weight(cpu_cluster_mask(cpu)) < cpumask_weight(cpu_coregroup_mask(cpu)))
+		static_branch_dec_cpuslocked(&sched_cluster_present);
+#endif
+
 	if (!sched_smp_initialized)
 		return 0;
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6d73bdb..a327746 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6021,6 +6021,10 @@ static inline int __select_idle_cpu(int cpu)
 	return -1;
 }
 
+#ifdef CONFIG_SCHED_CLUSTER
+DEFINE_STATIC_KEY_FALSE(sched_cluster_present);
+#endif
+
 #ifdef CONFIG_SCHED_SMT
 DEFINE_STATIC_KEY_FALSE(sched_smt_present);
 EXPORT_SYMBOL_GPL(sched_smt_present);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index cbb0b01..4e938ba 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -6,6 +6,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index d1aec24..829ac9d 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1392,6 +1392,7 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
  */
 #define TOPOLOGY_SD_FLAGS		\
 	(SD_SHARE_CPUCAPACITY	|	\
+	 SD_SHARE_CLS_RESOURCES	|	\
 	 SD_SHARE_PKG_RESOURCES |	\
 	 SD_NUMA		|	\
 	 SD_ASYM_PACKING)
@@ -1511,6 +1512,11 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
+
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
 #endif
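To verify that the new level is actually built, the domain names can be
read back when CONFIG_SCHED_DEBUG is enabled; a sketch (not from the
patch) using the /proc path of kernels from this era (newer kernels
moved the files to /sys/kernel/debug/sched/domains/). Expectation per
the commit message: domain0 = CLS, domain1 = MC.

#include <stdio.h>

int main(void)
{
	char path[128], name[64];
	int d;

	for (d = 0; d < 8; d++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/proc/sys/kernel/sched_domain/cpu0/domain%d/name", d);
		f = fopen(path, "r");
		if (!f)
			break;	/* no more levels, or CONFIG_SCHED_DEBUG is off */
		if (fgets(name, sizeof(name), f))
			printf("domain%d: %s", d, name);
		fclose(f);
	}
	return 0;
}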
From patchwork Tue Apr 20 00:18:43 2021
X-Patchwork-Submitter: "Song Bao Hua (Barry Song)"
X-Patchwork-Id: 12213107
From: Barry Song
Subject: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
Date: Tue, 20 Apr 2021 12:18:43 +1200
Message-ID: <20210420001844.9116-4-song.bao.hua@hisilicon.com>
In-Reply-To: <20210420001844.9116-1-song.bao.hua@hisilicon.com>

On Kunpeng 920, cpus within one cluster can communicate with each other
much
faster than cpus across different clusters. A simple hackbench can
prove that: hackbench running on 4 cpus in one single cluster and on
4 cpus spread across different clusters shows a large contrast (a
standalone sketch reproducing this pinning setup follows the change
log below):

(1) within a cluster:
root@ubuntu:~# taskset -c 0,1,2,3 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 4.285

(2) across clusters:
root@ubuntu:~# taskset -c 0,4,8,12 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 5.524

This inspires us to change the wake_affine path to scan the cluster to
pack related tasks. Ideally, a two-level packing vs. spreading
heuristic could be introduced to distinguish between llc-packing and
even narrower (cluster or MC-L2) packing. But this way would be quite
trivial. So this patch begins from those tasks running in the same
LLC. This is actually quite common in real use cases when tasks are
bound within one NUMA node.

If users use "numactl -N 0" to bind tasks, this patch will scan the
cluster rather than the llc to select an idle cpu. A hackbench run
with some groups of the monogamous sender-receiver model shows a major
improvement.

To evaluate the performance impact on related tasks talking with each
other, we run the below hackbench with a different -g parameter from 6
to 32 in a NUMA node with 24 cores; for each g, we run the command 20
times and take the average time:

$ numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g $1

As -f is set to 1, all threads are talking with each other
monogamously. hackbench reports the time needed to complete a certain
number of message transmissions between a certain number of tasks, for
example:

$ numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g 6
Running in threaded mode with 6 groups using 2 file descriptors each
(== 12 tasks)
Each sender will pass 1000000 messages of 100 bytes

The below is the result of hackbench:

g=               6      12     18     24     28     32
w/o:            1.2474 1.5635 1.5133 1.4796 1.6177 1.7898
w/domain:       1.1458 1.3309 1.3416 1.4990 1.9212 2.3411
w/domain+affine: 0.9500 1.0728 1.1756 1.2201 1.4166 1.5464

w/o: without any change
w/domain: added cluster domain without changing wake_affine
w/domain+affine: added cluster domain, changed wake_affine

While g=6, if we use top -H to show the cpus which tasks are running
on, we can easily find couples running in the same CCL (cluster).

Signed-off-by: Barry Song
---
-v6:
 * emulated a two-level spreading/packing heuristic by only scanning
   the cluster in the wake_affine path for tasks running in the same
   LLC (also NUMA).

   This partially addressed Dietmar's comment in RFC v3:
   "In case we would like to further distinguish between llc-packing
   and even narrower (cluster or MC-L2)-packing, we would introduce a
   2. level packing vs. spreading heuristic further down in sis().
   IMHO, Barry's current implementation doesn't do this right now.
   Instead he's trying to pack on cluster first and if not successful
   look further among the remaining llc CPUs for an idle CPU."

 * adjusted the hackbench parameters to create relatively low and high
   load. Previous patchsets with "-f 10" ran under an extremely high
   load with hundreds of threads, which does not seem to match real
   use cases.
   This also addressed Vincent's question in RFC v4:
   "In particular, I'm still not convinced that the modification of
   the wakeup path is the root of the hackbench improvement;
   especially with g=14 where there should not be much idle CPUs with
   14*40 tasks on at most 32 CPUs."
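The taskset experiment above can also be reproduced programmatically; a
sketch (not from the patch; Kunpeng 920 CPU numbering is an assumption)
that pins itself to one cluster and then execs a workload:

/* cc -o pin pin.c ; usage: ./pin hackbench -p -T -l 20000 -g 1 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	cpu_set_t set;
	int cpu;

	CPU_ZERO(&set);
	/* cpus 0-3 = one cluster here; use 0,4,8,12 for the cross-cluster case */
	for (cpu = 0; cpu <= 3; cpu++)
		CPU_SET(cpu, &set);

	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}
	if (argc > 1)
		execvp(argv[1], argv + 1);	/* workload inherits the mask */
	return 0;
}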
 block/blk-mq.c                 |  2 +-
 include/linux/sched/topology.h |  5 +++--
 kernel/sched/core.c            |  9 +++++---
 kernel/sched/fair.c            | 47 +++++++++++++++++++++++++-----------------
 kernel/sched/sched.h           |  3 +++
 kernel/sched/topology.c        | 12 +++++++++++
 6 files changed, 53 insertions(+), 25 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index d4d7c1c..1418981 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -611,7 +611,7 @@ static inline bool blk_mq_complete_need_ipi(struct request *rq)
 	/* same CPU or cache domain?  Complete locally */
 	if (cpu == rq->mq_ctx->cpu ||
 	    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
-	     cpus_share_cache(cpu, rq->mq_ctx->cpu)))
+	     cpus_share_cache(cpu, rq->mq_ctx->cpu, 0)))
 		return false;
 
 	/* don't try to IPI to an offline CPU */
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 846fcac..d63d6b8 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -176,7 +176,8 @@ extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
 void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
 
-bool cpus_share_cache(int this_cpu, int that_cpu);
+/* return true if cpus share cluster(while cluster=1) or llc cache */
+bool cpus_share_cache(int this_cpu, int that_cpu, int cluster);
 
 typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);
 typedef int (*sched_domain_flags_f)(void);
@@ -225,7 +226,7 @@ struct sched_domain_topology_level {
 {
 }
 
-static inline bool cpus_share_cache(int this_cpu, int that_cpu)
+static inline bool cpus_share_cache(int this_cpu, int that_cpu, int cluster)
 {
 	return true;
 }
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 30c300c..c74812a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3126,9 +3126,12 @@ void wake_up_if_idle(int cpu)
 	rcu_read_unlock();
 }
 
-bool cpus_share_cache(int this_cpu, int that_cpu)
+bool cpus_share_cache(int this_cpu, int that_cpu, int cluster)
 {
-	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
+	if (cluster)
+		return per_cpu(sd_cluster_id, this_cpu) == per_cpu(sd_cluster_id, that_cpu);
+	else
+		return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
 }
 
 static inline bool ttwu_queue_cond(int cpu, int wake_flags)
@@ -3144,7 +3147,7 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
 	 * If the CPU does not share cache, then queue the task on the
 	 * remote rqs wakelist to avoid accessing remote data.
 	 */
-	if (!cpus_share_cache(smp_processor_id(), cpu))
+	if (!cpus_share_cache(smp_processor_id(), cpu, 0))
 		return true;
 
 	/*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a327746..69a1704 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -718,7 +718,7 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 #include "pelt.h"
 
 #ifdef CONFIG_SMP
-static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu);
+static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu, int cluster);
 static unsigned long task_h_load(struct task_struct *p);
 static unsigned long capacity_of(int cpu);
 
@@ -5786,11 +5786,12 @@ static void record_wakee(struct task_struct *p)
 * whatever is irrelevant, spread criteria is apparent partner count exceeds
 * socket size.
 */
-static int wake_wide(struct task_struct *p)
+static int wake_wide(struct task_struct *p, int cluster)
 {
 	unsigned int master = current->wakee_flips;
 	unsigned int slave = p->wakee_flips;
-	int factor = __this_cpu_read(sd_llc_size);
+	int factor = cluster ? __this_cpu_read(sd_cluster_size) :
+		     __this_cpu_read(sd_llc_size);
 
 	if (master < slave)
 		swap(master, slave);
@@ -5812,7 +5813,7 @@ static int wake_wide(struct task_struct *p)
 * for the overloaded case.
 */
 static int
-wake_affine_idle(int this_cpu, int prev_cpu, int sync)
+wake_affine_idle(int this_cpu, int prev_cpu, int sync, int cluster)
 {
 	/*
 	 * If this_cpu is idle, it implies the wakeup is from interrupt
@@ -5826,7 +5827,7 @@ static int wake_wide(struct task_struct *p)
 	 * a cpufreq perspective, it's better to have higher utilisation
 	 * on one CPU.
 	 */
-	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
+	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu, cluster))
 		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
 
 	if (sync && cpu_rq(this_cpu)->nr_running == 1)
@@ -5882,12 +5883,12 @@ static int wake_wide(struct task_struct *p)
 }
 
 static int wake_affine(struct sched_domain *sd, struct task_struct *p,
-		       int this_cpu, int prev_cpu, int sync)
+		       int this_cpu, int prev_cpu, int sync, int cluster)
 {
 	int target = nr_cpumask_bits;
 
 	if (sched_feat(WA_IDLE))
-		target = wake_affine_idle(this_cpu, prev_cpu, sync);
+		target = wake_affine_idle(this_cpu, prev_cpu, sync, cluster);
 
 	if (sched_feat(WA_WEIGHT) && target == nr_cpumask_bits)
 		target = wake_affine_weight(sd, p, this_cpu, prev_cpu, sync);
@@ -6139,7 +6140,8 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
 * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
 * average idle time for this rq (as found in rq->avg_idle).
 */
-static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
+static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target,
+			   int cluster)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	int i, cpu, idle_cpu = -1, nr = INT_MAX;
@@ -6154,7 +6156,8 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
 
-	if (sched_feat(SIS_PROP) && !smt) {
+	/* cluster is usually quite small like 4, no need SIS_PROP */
+	if (sched_feat(SIS_PROP) && !smt && !cluster) {
 		u64 avg_cost, avg_idle, span_avg;
 
 		/*
@@ -6191,7 +6194,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	if (smt)
 		set_idle_cores(this, false);
 
-	if (sched_feat(SIS_PROP) && !smt) {
+	if (sched_feat(SIS_PROP) && !smt && !cluster) {
 		time = cpu_clock(this) - time;
 		update_avg(&this_sd->avg_scan_cost, time);
 	}
@@ -6244,7 +6247,7 @@ static inline bool asym_fits_capacity(int task_util, int cpu)
 /*
 * Try and locate an idle core/thread in the LLC cache domain.
 */
-static int select_idle_sibling(struct task_struct *p, int prev, int target)
+static int select_idle_sibling(struct task_struct *p, int prev, int target, int cluster)
 {
 	struct sched_domain *sd;
 	unsigned long task_util;
@@ -6266,7 +6269,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	/*
 	 * If the previous CPU is cache affine and idle, don't be stupid:
 	 */
-	if (prev != target && cpus_share_cache(prev, target) &&
+	if (prev != target && cpus_share_cache(prev, target, cluster) &&
 	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
 	    asym_fits_capacity(task_util, prev))
 		return prev;
@@ -6289,7 +6292,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	recent_used_cpu = p->recent_used_cpu;
 	if (recent_used_cpu != prev &&
 	    recent_used_cpu != target &&
-	    cpus_share_cache(recent_used_cpu, target) &&
+	    cpus_share_cache(recent_used_cpu, target, cluster) &&
 	    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
 	    cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
 	    asym_fits_capacity(task_util, recent_used_cpu)) {
@@ -6321,11 +6324,11 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 		}
 	}
 
-	sd = rcu_dereference(per_cpu(sd_llc, target));
+	sd = cluster ? rcu_dereference(per_cpu(sd_cluster, target)) :
+	     rcu_dereference(per_cpu(sd_llc, target));
 	if (!sd)
 		return target;
-
-	i = select_idle_cpu(p, sd, target);
+	i = select_idle_cpu(p, sd, target, cluster);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
@@ -6745,6 +6748,12 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 	int want_affine = 0;
 	/* SD_flags and WF_flags share the first nibble */
 	int sd_flag = wake_flags & 0xF;
+	/*
+	 * if cpu and prev_cpu share LLC, consider cluster sibling rather
+	 * than llc.
+	 * this is typically true while tasks are bound within
+	 * one numa node
+	 */
+	int cluster = sched_cluster_active() && cpus_share_cache(cpu, prev_cpu, 0);
 
 	if (wake_flags & WF_TTWU) {
 		record_wakee(p);
@@ -6756,7 +6765,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 			new_cpu = prev_cpu;
 		}
 
-		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
+		want_affine = !wake_wide(p, cluster) && cpumask_test_cpu(cpu, p->cpus_ptr);
 	}
 
 	rcu_read_lock();
@@ -6768,7 +6777,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
 		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
 			if (cpu != prev_cpu)
-				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
+				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync, cluster);
 
 			sd = NULL; /* Prefer wake_affine over balance flags */
 			break;
@@ -6785,7 +6794,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
 	} else if (wake_flags & WF_TTWU) { /* XXX always ? */
 		/* Fast path */
-		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
+		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu, cluster);
 
 		if (want_affine)
 			current->recent_used_cpu = cpu;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4e938ba..b4b7d95 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1487,6 +1487,9 @@ static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DECLARE_PER_CPU(int, sd_llc_size);
 DECLARE_PER_CPU(int, sd_llc_id);
+DECLARE_PER_CPU(struct sched_domain __rcu *, sd_cluster);
+DECLARE_PER_CPU(int, sd_cluster_size);
+DECLARE_PER_CPU(int, sd_cluster_id);
 DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 829ac9d..28a2032 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -644,6 +644,9 @@ static void destroy_sched_domains(struct sched_domain *sd)
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DEFINE_PER_CPU(int, sd_llc_size);
 DEFINE_PER_CPU(int, sd_llc_id);
+DEFINE_PER_CPU(struct sched_domain __rcu *, sd_cluster);
+DEFINE_PER_CPU(int, sd_cluster_size);
+DEFINE_PER_CPU(int, sd_cluster_id);
 DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
@@ -657,6 +660,15 @@ static void update_top_cache_domain(int cpu)
 	int id = cpu;
 	int size = 1;
 
+	sd = highest_flag_domain(cpu, SD_SHARE_CLS_RESOURCES);
+	if (sd) {
+		id = cpumask_first(sched_domain_span(sd));
+		size = cpumask_weight(sched_domain_span(sd));
+	}
+	rcu_assign_pointer(per_cpu(sd_cluster, cpu), sd);
+	per_cpu(sd_cluster_size, cpu) = size;
+	per_cpu(sd_cluster_id, cpu) = id;
+
 	sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
 	if (sd) {
 		id = cpumask_first(sched_domain_span(sd));
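This whole path is gated behind the sched_cluster_present static key
that patch 2 introduced; a self-contained sketch of that pattern with
illustrative names (not the patch's code):

#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE(example_cluster_present);

/* Near-free when false: compiles down to a patched no-op branch. */
static inline bool example_cluster_active(void)
{
	return static_branch_likely(&example_cluster_present);
}

/* CPU hotplug-up path, called under cpus_read_lock(): */
static void example_cpu_up(void)
{
	static_branch_inc_cpuslocked(&example_cluster_present);
}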
From patchwork Tue Apr 20 00:18:44 2021
X-Patchwork-Submitter: "Song Bao Hua (Barry Song)"
X-Patchwork-Id: 12213109
From: Barry Song
Date: Tue, 20 Apr 2021 12:18:44 +1200
Message-ID: <20210420001844.9116-5-song.bao.hua@hisilicon.com>
In-Reply-To: <20210420001844.9116-1-song.bao.hua@hisilicon.com>
References: <20210420001844.9116-1-song.bao.hua@hisilicon.com>

From: Tim Chen

There are x86 CPU architectures (e.g. Jacobsville) where the L2 cache is
shared among a cluster of cores instead of being exclusive to one single
core. To prevent oversubscription of the L2 cache, load should be balanced
across such L2 clusters, especially for tasks with no shared data.

Also, with a cluster scheduling policy, where tasks are woken up in the
same L2 cluster, we benefit from keeping tasks that are related to each
other, and that likely share data, in the same L2 cluster.

Add CPU masks of the CPUs sharing the L2 cache so that we can build such
an L2 cluster scheduler domain.

Signed-off-by: Tim Chen
Signed-off-by: Barry Song
---
-v6:
 * added topology_cluster_cpumask() for x86, code provided by Tim.

 arch/x86/Kconfig                | 8 ++++++++
 arch/x86/include/asm/smp.h      | 7 +++++++
 arch/x86/include/asm/topology.h | 2 ++
 arch/x86/kernel/cpu/cacheinfo.c | 1 +
 arch/x86/kernel/cpu/common.c    | 3 +++
 arch/x86/kernel/smpboot.c       | 43 ++++++++++++++++++++++++++++++++++++++++-
 6 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2792879..d597de2 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1002,6 +1002,14 @@ config NR_CPUS
 	  This is purely to save memory: each supported CPU adds about 8KB
 	  to the kernel image.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	default n
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters of CPUs
+	  sharing L2 cache. If unsure say N here.
+
 config SCHED_SMT
 	def_bool y if SMP

diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index c0538f8..9cbc4ae 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -16,7 +16,9 @@ DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_die_map);
 /* cpus sharing the last level cache: */
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
+DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
 DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id);
+DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id);
 DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
 
 static inline struct cpumask *cpu_llc_shared_mask(int cpu)
@@ -24,6 +26,11 @@ static inline struct cpumask *cpu_llc_shared_mask(int cpu)
 	return per_cpu(cpu_llc_shared_map, cpu);
 }
 
+static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
+{
+	return per_cpu(cpu_l2c_shared_map, cpu);
+}
+
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u32, x86_cpu_to_acpiid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_bios_cpu_apicid);

diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 9239399..800fa48 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -103,6 +103,7 @@ static inline void setup_node_to_cpumask_map(void) { }
 #include <asm-generic/topology.h>
 
 extern const struct cpumask *cpu_coregroup_mask(int cpu);
+extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 
 #define topology_logical_package_id(cpu)	(cpu_data(cpu).logical_proc_id)
 #define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
@@ -114,6 +115,7 @@ static inline void setup_node_to_cpumask_map(void) { }
 
 #ifdef CONFIG_SMP
 #define topology_die_cpumask(cpu)		(per_cpu(cpu_die_map, cpu))
+#define topology_cluster_cpumask(cpu)		(cpu_clustergroup_mask(cpu))
 #define topology_core_cpumask(cpu)		(per_cpu(cpu_core_map, cpu))
 #define topology_sibling_cpumask(cpu)		(per_cpu(cpu_sibling_map, cpu))

diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 3ca9be4..0d03a71 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -846,6 +846,7 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
 		l2 = new_l2;
 #ifdef CONFIG_SMP
 		per_cpu(cpu_llc_id, cpu) = l2_id;
+		per_cpu(cpu_l2c_id, cpu) = l2_id;
 #endif
 	}

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index ab640ab..0ba282d 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -78,6 +78,9 @@
 /* Last level cache ID of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
 
+/* L2 cache ID of each logical CPU */
+DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id) = BAD_APICID;
+
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 02813a7..c85ffa8 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -101,6 +101,8 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
 
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
+
 /* Per CPU bogomips and other parameters */
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
@@ -501,6 +503,21 @@ static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 	return topology_sane(c, o, "llc");
 }
 
+static bool match_l2c(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+{
+	int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
+
+	/* Do not match if we do not have a valid APICID for cpu: */
+	if (per_cpu(cpu_l2c_id, cpu1) == BAD_APICID)
+		return false;
+
+	/* Do not match if L2 cache id does not match: */
+	if (per_cpu(cpu_l2c_id, cpu1) != per_cpu(cpu_l2c_id, cpu2))
+		return false;
+
+	return topology_sane(c, o, "l2c");
+}
+
 /*
  * Unlike the other levels, we do not enforce keeping a
  * multicore group inside a NUMA node.  If this happens, we will
@@ -522,7 +539,7 @@ static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 }
 
-#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC)
+#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_CLUSTER) || defined(CONFIG_SCHED_MC)
 static inline int x86_sched_itmt_flags(void)
 {
 	return sysctl_sched_itmt_enabled ? SD_ASYM_PACKING : 0;
@@ -540,12 +557,21 @@ static int x86_smt_flags(void)
 	return cpu_smt_flags() | x86_sched_itmt_flags();
 }
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+static int x86_cluster_flags(void)
+{
+	return cpu_cluster_flags() | x86_sched_itmt_flags();
+}
+#endif
 #endif
 
 static struct sched_domain_topology_level x86_numa_in_package_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -556,6 +582,9 @@ static int x86_smt_flags(void)
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -583,6 +612,7 @@ void set_cpu_sibling_map(int cpu)
 	if (!has_mp) {
 		cpumask_set_cpu(cpu, topology_sibling_cpumask(cpu));
 		cpumask_set_cpu(cpu, cpu_llc_shared_mask(cpu));
+		cpumask_set_cpu(cpu, cpu_l2c_shared_mask(cpu));
 		cpumask_set_cpu(cpu, topology_core_cpumask(cpu));
 		cpumask_set_cpu(cpu, topology_die_cpumask(cpu));
 		c->booted_cores = 1;
@@ -598,6 +628,8 @@ void set_cpu_sibling_map(int cpu)
 
 		if ((i == cpu) || (has_mp && match_llc(c, o)))
 			link_mask(cpu_llc_shared_mask, cpu, i);
+		if ((i == cpu) || (has_mp && match_l2c(c, o)))
+			link_mask(cpu_l2c_shared_mask, cpu, i);
 	}
 
 	/*
@@ -649,6 +681,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return cpu_llc_shared_mask(cpu);
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return cpu_l2c_shared_mask(cpu);
+}
+
 static void impress_friends(void)
 {
 	int cpu;
@@ -1332,6 +1369,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 		zalloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_die_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_llc_shared_map, i), GFP_KERNEL);
+		zalloc_cpumask_var(&per_cpu(cpu_l2c_shared_map, i), GFP_KERNEL);
 	}
 
 	/*
@@ -1556,7 +1594,10 @@ static void remove_siblinginfo(int cpu)
 			cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
 	for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
 		cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling));
+	for_each_cpu(sibling, cpu_l2c_shared_mask(cpu))
+		cpumask_clear_cpu(cpu, cpu_l2c_shared_mask(sibling));
 	cpumask_clear(cpu_llc_shared_mask(cpu));
+	cpumask_clear(cpu_l2c_shared_mask(cpu));
 	cpumask_clear(topology_sibling_cpumask(cpu));
 	cpumask_clear(topology_core_cpumask(cpu));
 	cpumask_clear(topology_die_cpumask(cpu));
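
As a quick illustration of the interface this patch exposes, the sketch
below is a minimal, hypothetical consumer (not part of the patch; the
module scaffolding and the print_l2c_siblings() name are made up for the
example) showing how in-kernel code could enumerate the CPUs sharing an
L2 cluster once the series is applied, via the topology_cluster_cpumask()
macro added above:

	// SPDX-License-Identifier: GPL-2.0
	/* Illustrative only: log each online CPU's L2-cluster siblings. */
	#include <linux/module.h>
	#include <linux/printk.h>
	#include <linux/cpumask.h>
	#include <linux/topology.h>

	static void print_l2c_siblings(int cpu)
	{
		int sibling;

		/* On x86 this resolves to cpu_l2c_shared_mask(cpu) */
		for_each_cpu(sibling, topology_cluster_cpumask(cpu))
			pr_info("CPU%d shares L2 with CPU%d\n", cpu, sibling);
	}

	static int __init l2c_demo_init(void)
	{
		int cpu;

		for_each_online_cpu(cpu)
			print_l2c_siblings(cpu);
		return 0;
	}

	static void __exit l2c_demo_exit(void) { }

	module_init(l2c_demo_init);
	module_exit(l2c_demo_exit);
	MODULE_LICENSE("GPL");

On a Jacobsville-like part one would expect each four-core module to be
reported as one cluster; on parts where the L2 is private to a single
core, cpu_l2c_shared_map only ever contains the CPU itself (match_l2c()
never links two CPUs with different L2 ids), so the CLS domain degenerates
and adds no extra scheduling level.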