
[RFC,v2,0/2] scheduler: expose the topology of clusters and add cluster scheduler

Message ID 20201201025944.18260-1-song.bao.hua@hisilicon.com (mailing list archive)

Message

Song Bao Hua (Barry Song) Dec. 1, 2020, 2:59 a.m. UTC
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share the L3 cache data, while each cluster
has its own local L3 tag. In addition, the CPUs within a cluster share some
internal system bus. This means cache access is much more affine inside one
cluster than across clusters.

+-----------------------------------+                          +---------+
|  +------+    +------+            +---------------------------+         |
|  | CPU0 |    | cpu1 |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+   cluster   |    |    tag    |         |         |
|  | CPU2 |    | CPU3 |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   |    |    L3     |         |         |
|  +------+    +------+             +----+    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |   L3    |
                                                               |   data  |
+-----------------------------------+                          |         |
|  +------+    +------+             |    +-----------+         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             +----+    L3     |         |         |
|                                   |    |    tag    |         |         |
|  +------+    +------+             |    |           |         |         |
|  |      |    |      |            ++    +-----------+         |         |
|  +------+    +------+            |---------------------------+         |
+-----------------------------------|                          |         |
+-----------------------------------|                          |         |
|  +------+    +------+            +---------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |   +-----------+          |         |
|  +------+    +------+             |   |           |          |         |


The illustration above is still a simplification of what is actually
going on, but it is a more accurate model than the one currently presented.

The following small program shows the performance impact of running its two
threads within one cluster versus across two clusters:

#include <pthread.h>

/*
 * x and y sit on the same cache line, so the two threads contend on it:
 * one thread keeps reading f.x while the other keeps writing f.y.
 */
struct foo {
        int x;
        int y;
} f;

void *thread1_fun(void *param)
{
        int s = 0;
        for (int i = 0; i < 0xfffffff; i++)
                s += f.x;
        return NULL;
}

void *thread2_fun(void *param)
{
        for (int i = 0; i < 0xfffffff; i++)
                f.y++;
        return NULL;
}

int main(int argc, char **argv)
{
        pthread_t tid1, tid2;

        pthread_create(&tid1, NULL, thread1_fun, NULL);
        pthread_create(&tid2, NULL, thread2_fun, NULL);
        pthread_join(tid1, NULL);
        pthread_join(tid2, NULL);
        return 0;
}

Running this program within one cluster (cpu0 and cpu1 sit in the same
cluster, since each cluster holds four consecutive CPUs), it takes:
$ time taskset -c 0,1 ./a.out 
real	0m0.832s
user	0m1.649s
sys	0m0.004s

In contrast, it takes much more time if we run the same program across
two clusters:
$ time taskset -c 0,4 ./a.out 
real	0m1.133s
user	0m1.960s
sys	0m0.000s

0.832/1.133 = 73%: the cross-cluster run takes about 36% longer, which is a
huge difference.
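
For quick experiments beyond what taskset covers, each thread can also pin
itself to a chosen CPU. Below is a minimal sketch of the same test using
pthread_setaffinity_np(); the reader/writer split and the CPU numbers passed
from main() are only illustrative assumptions, pick any pair of CPUs inside
or across clusters:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

struct foo {
        int x;
        int y;
} f;

/* Pin the calling thread to a single CPU. */
static void pin_to_cpu(int cpu)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *reader(void *param)
{
        int s = 0;

        pin_to_cpu((int)(long)param);
        for (int i = 0; i < 0xfffffff; i++)
                s += f.x;               /* keeps reading the contended line */
        return NULL;
}

static void *writer(void *param)
{
        pin_to_cpu((int)(long)param);
        for (int i = 0; i < 0xfffffff; i++)
                f.y++;                  /* keeps writing the same line */
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        /* cpu0 and cpu1 share a cluster; change 1 to 4 to cross clusters. */
        pthread_create(&t1, NULL, reader, (void *)0L);
        pthread_create(&t2, NULL, writer, (void *)1L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}

Build it with "gcc -pthread" and time it as above; the cross-cluster pairing
should again come out measurably slower.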

These results imply that we should let the Linux scheduler use the cluster
topology to make better load-balancing and WAKE_AFFINE decisions.
Unfortunately, the current kernel running on Kunpeng 920 treats cpu0-cpu23
all equally.

This patchset first exposes the topology, then adds a new sched_domain level
between SMT and MC. The new sched_domain influences the load-balancing and
wake_affine decisions of the scheduler.
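
For reference, here is a minimal sketch of how such a level could be slotted
into the scheduler's default topology table. The cpu_clustergroup_mask() and
cpu_cluster_flags() helpers and the CONFIG_SCHED_CLUSTER option are
assumptions based on this series' description, not a copy of the actual patch:

/*
 * Sketch only: a CLS level between SMT and MC, gated by a new
 * CONFIG_SCHED_CLUSTER option. cpu_clustergroup_mask() stands for the
 * per-arch helper returning the CPUs that share a cluster.
 */
static struct sched_domain_topology_level default_topology[] = {
#ifdef CONFIG_SCHED_SMT
        { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
#endif
#ifdef CONFIG_SCHED_CLUSTER
        { cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
#endif
#ifdef CONFIG_SCHED_MC
        { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
#endif
        { cpu_cpu_mask, SD_INIT_NAME(DIE) },
        { NULL, },
};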

The code is still pretty much a proof of concept and needs lots of
benchmarking and tuning. However, a rough hackbench test shows:

While running hackbench on one NUMA node (cpu0-cpu23), we may achieve 5%+
performance improvement with the new sched_domain.
While running hackbench on two NUMA nodes (cpu0-cpu47), we may achieve 49%+
performance improvement with the new sched_domain.

Although there is still a lot to do, sending an RFC to get feedback from
community experts should be helpful for the next step.

Barry Song (1):
  scheduler: add scheduler level for clusters

Jonathan Cameron (1):
  topology: Represent clusters of CPUs within a die.

 Documentation/admin-guide/cputopology.rst | 26 +++++++++++---
 arch/arm64/Kconfig                        |  7 ++++
 arch/arm64/kernel/smp.c                   | 17 +++++++++
 arch/arm64/kernel/topology.c              |  2 ++
 drivers/acpi/pptt.c                       | 60 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 14 ++++++++
 drivers/base/topology.c                   | 10 ++++++
 include/linux/acpi.h                      |  5 +++ 
 include/linux/arch_topology.h             |  5 +++ 
 include/linux/topology.h                  | 13 +++++++
 kernel/sched/fair.c                       | 35 ++++++++++++++++++
 11 files changed, 190 insertions(+), 4 deletions(-)
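
Once the topology patch is applied, the new cluster attributes should show up
under each CPU's topology directory in sysfs. Below is a minimal user-space
sketch for checking them; the attribute name cluster_cpus_list is an
assumption taken from the cputopology.rst change and may differ, so treat it
as illustrative:

#include <stdio.h>

int main(void)
{
        char path[128], buf[256];
        FILE *fp;

        for (int cpu = 0; cpu < 24; cpu++) {
                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu%d/topology/cluster_cpus_list",
                         cpu);
                fp = fopen(path, "r");
                if (!fp)
                        continue;       /* attribute missing: kernel lacks the patch */
                if (fgets(buf, sizeof(buf), fp))
                        printf("cpu%d cluster siblings: %s", cpu, buf);
                fclose(fp);
        }
        return 0;
}

On the Kunpeng 920 layout described above, cpu0-cpu3 would be expected to
report the same sibling list, cpu4-cpu7 the next one, and so on.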

Comments

Dietmar Eggemann Dec. 1, 2020, 10:46 a.m. UTC | #1
On 01/12/2020 03:59, Barry Song wrote:

[...]

> Although there is still a lot to do, sending an RFC to get feedback from
> community experts should be helpful for the next step.
> 
> Barry Song (1):
>   scheduler: add scheduler level for clusters
> 
> Jonathan Cameron (1):
>   topology: Represent clusters of CPUs within a die.

Just to make sure. Since this is v2, the v1 is
https://lkml.kernel.org/r//20201016152702.1513592-1-Jonathan.Cameron@huawei.com

Might not be obvious to everyone since sender and subject have changed.