From patchwork Tue Dec 1 02:59:44 2020
X-Patchwork-Submitter: "Song Bao Hua (Barry Song)"
X-Patchwork-Id: 11941777
From: Barry Song
Subject: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
Date: Tue, 1 Dec 2020 15:59:44 +1300
Message-ID: <20201201025944.18260-3-song.bao.hua@hisilicon.com>
X-Mailer: git-send-email 2.21.0.windows.1
In-Reply-To: <20201201025944.18260-1-song.bao.hua@hisilicon.com>
References: <20201201025944.18260-1-song.bao.hua@hisilicon.com>
Cc: Barry Song, prime.zeng@hisilicon.com, linuxarm@huawei.com,
    xuwei5@huawei.com

The ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and
each cluster has 4 CPUs. All clusters share the L3 cache data, but each
cluster has its own local L3 tag. CPUs within a cluster also share part
of the internal system bus. This means the cache-coherence overhead
inside one cluster is much lower than the overhead across clusters.

 +-----------------------------------+        +----------+
 | +------+ +------+                 |        |          |
 | | CPU0 | | CPU1 |  +-----------+  |        |          |
 | +------+ +------+  |    L3     +--+---+    |          |
 | +------+ +------+  |    tag    |  |   |    |          |
 | | CPU2 | | CPU3 |  +-----------+  |   |    |          |
 | +------+ +------+     cluster0    |   |    |    L3    |
 +-----------------------------------+   +----+   data   |
 +-----------------------------------+   |    |          |
 | +------+ +------+                 |   |    |          |
 | | CPU4 | | CPU5 |  +-----------+  |   |    |          |
 | +------+ +------+  |    L3     +--+---+    |          |
 | +------+ +------+  |    tag    |  |        |          |
 | | CPU6 | | CPU7 |  +-----------+  |        |          |
 | +------+ +------+     cluster1    |        |          |
 +-----------------------------------+        +----------+

         (clusters 2-5 of the NUMA node drawn likewise)

This patch adds a sched_domain level for clusters. On Kunpeng 920,
without this patch, domain0 of cpu0 is MC, spanning cpu0-cpu23 with
min_interval=24 and max_interval=48. With this patch, MC becomes
domain1, and a new domain0 "CL" covering cpu0-cpu3 is added with
min_interval=4 and max_interval=8.

This affects load balancing. For example, without this patch, when cpu0
becomes idle it pulls a task from any of cpu1-cpu23; with this patch,
cpu0 first tries to pull a task from cpu1-cpu3, which carries a much
lower task-migration overhead. On the other hand, during WAKE_AFFINE
this patch tries to find an idle core in the target cluster before
scanning the LLC domain, which means it proactively prefers a core with
better affinity to the target core.
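As a quick sanity check (ours, not part of the original posting): on
kernels of this era, with CONFIG_SCHED_DEBUG enabled, the resulting
hierarchy is visible through procfs, so given the intervals described
above one would expect something like:

$ cat /proc/sys/kernel/sched_domain/cpu0/domain0/name
CL
$ cat /proc/sys/kernel/sched_domain/cpu0/domain0/min_interval
4
$ cat /proc/sys/kernel/sched_domain/cpu0/domain1/name
MC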
Not much benchmarking has been done yet, but here are rough hackbench
results. We run the command below with different -g parameters to raise
the system load; for each value of g we run the benchmark ten times and
average the recorded times.

First, we run hackbench in only one NUMA node (cpu0-cpu23):
$ numactl -N 0 hackbench -p -T -l 100000 -g $1

g=1 (cpu utilization around 50% observed on each core)
Running in threaded mode with 1 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
performance improvement w/ patch: -1.01%

g=2 (cpu utilization around 70% observed on each core)
Running in threaded mode with 2 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162 9.955=10.1006
w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
performance improvement w/ patch: 4.08%

g=3 (cpu utilization around 90% observed on each core)
Running in threaded mode with 3 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674 15.721=15.7727
w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946 14.895=14.9005
performance improvement w/ patch: 5.53%

g=4
Running in threaded mode with 4 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090 21.090=20.8187
w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381 20.452=20.5647
performance improvement w/ patch: 1.22%

After that, we run the same hackbench on both NUMA nodes (cpu0-cpu47):

g=1
w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
performance improvement w/ patch: -2.2%

g=2
w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
performance improvement w/ patch: 9.3%

g=3
w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235 11.849=11.9557
w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
performance improvement w/ patch: 20.2%

g=4
w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859 18.515=17.7647
w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467 10.401=10.4927
performance improvement w/ patch: 40.9%

g=5
w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073 25.819=27.4382
w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954 13.907=13.9735
performance improvement w/ patch: 49.1%

The patch seems to bring a large hackbench improvement, especially when
hackbench runs across both NUMA nodes (cpu0-cpu47), where the gain
reaches 49.1%, compared with a peak of 5.53% when running on a single
NUMA node (cpu0-cpu23).
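For reference, a small driver script along the following lines
reproduces the ten-run averaging used above (the script is our sketch,
not from the original runs):

#!/bin/sh
# usage: ./run_hackbench.sh <groups>
# Run hackbench ten times on NUMA node 0 and print the mean time,
# mirroring the "run ten times and average" method described above.
g=$1
total=0
for i in $(seq 1 10); do
	# hackbench prints its result as a final "Time: <seconds>" line
	t=$(numactl -N 0 hackbench -p -T -l 100000 -g "$g" | \
		awk '/^Time:/ { print $2 }')
	total=$(echo "$total + $t" | bc)
done
echo "scale=4; $total / 10" | bc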
Signed-off-by: Barry Song
---
 arch/arm64/Kconfig       |  7 +++++++
 arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
 include/linux/topology.h |  7 +++++++
 kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
 4 files changed, 66 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d23283..3583c26 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -938,6 +938,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters(sharing internal
+	  bus or sharing LLC cache tag). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 355ee9e..5c8f026 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
 	}
 }
 
+static struct sched_domain_topology_level arm64_topology[] = {
+#ifdef CONFIG_SCHED_SMT
+	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
+#endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
+#endif
+#ifdef CONFIG_SCHED_MC
+	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
+#endif
+	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
+	{ NULL, },
+};
+
 void __init smp_prepare_cpus(unsigned int max_cpus)
 {
 	const struct cpu_operations *ops;
@@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 
 	init_cpu_topology();
 
+	set_sched_topology(arm64_topology);
+
 	this_cpu = smp_processor_id();
 	store_cpu_topology(this_cpu);
 	numa_store_cpu_info(this_cpu);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 5f66648..2c823c0 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a68a05..ae8ec910 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct *p, int target)
 
 #endif /* CONFIG_SCHED_SMT */
 
+#ifdef CONFIG_SCHED_CLUSTER
+/*
+ * Scan the local CLUSTER mask for idle CPUs.
+ */
+static int select_idle_cluster(struct task_struct *p, int target)
+{
+	int cpu;
+
+	/* right now, no hardware with both cluster and smt to run */
+	if (sched_smt_active())
+		return -1;
+
+	for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
+		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+			continue;
+		if (available_idle_cpu(cpu))
+			return cpu;
+	}
+
+	return -1;
+}
+
+#else /* CONFIG_SCHED_CLUSTER */
+
+static inline int select_idle_cluster(struct task_struct *p, int target)
+{
+	return -1;
+}
+
+#endif /* CONFIG_SCHED_CLUSTER */
+
 /*
  * Scan the LLC domain for idle CPUs; this is dynamically regulated by
  * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
@@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
+	i = select_idle_cluster(p, target);
+	if ((unsigned)i < nr_cpumask_bits)
+		return i;
+
 	i = select_idle_cpu(p, sd, target);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
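As a possible follow-up experiment (ours, not part of the original
posting), the intra- versus inter-cluster cost asymmetry this patch
exploits can be observed even without the patch, by pinning the same
workload inside one cluster and then across four clusters:

$ numactl -C 0-3 hackbench -p -T -l 100000 -g 1
$ numactl -C 0,4,8,12 hackbench -p -T -l 100000 -g 1

With the cpu0-cpu3 cluster layout described above, the first command
keeps all communicating threads behind one local L3 tag, while the
second forces every wakeup to cross cluster boundaries.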