Message ID: 20220518093325.2070336-2-sudeep.holla@arm.com (mailing list archive)
State: New, archived
Series: arch_topology: Updates to add socket support and fix cluster ids
On 18/05/2022 11:33, Sudeep Holla wrote:

[...]

You say `cluster identifier`, which to me refers to
`cpu_topology[cpu].cluster_id`. But I guess you meant the `cluster node`
from the cpu-map DT node?

Otherwise you say we link (first-level) cpu-map cluster nodes to
`cluster_sibling`? But then (1) we will never support nested clusters and
(2) why would we need llc support then?

[...]
On Fri, May 20, 2022 at 02:31:24PM +0200, Dietmar Eggemann wrote:
> On 18/05/2022 11:33, Sudeep Holla wrote:
>
> You say `cluster identifier`, which to me refers to
> `cpu_topology[cpu].cluster_id`. But I guess you meant the `cluster node`
> from the cpu-map DT node?
>

Correct, I am referring to the leaf cluster node identifier in terms of
the cpu-map DT node, which we now store in cpu_topology[cpu].cluster_id
as part of this series, instead of the previous
cpu_topology[cpu].package_id.

> Otherwise you say we link (first-level) cpu-map cluster nodes to
> `cluster_sibling`? But then (1) we will never support nested clusters
> and (2) why would we need llc support then?
>

(1) Do we have any platforms with nested clusters today? No phantom
clusters please, as this is info that gets to userspace and must reflect
the real hardware. If one exists, then we need to add nested cluster
support for that hardware. I am not aware of any such platform, in
particular a DT-based one.

(2) LLC was added to support chiplets. IIRC, cpu_coregroup_mask was
changed to select the smallest of the LLC siblings, socket siblings, and
NUMA node siblings to ensure that the sched domain we build for the MC
layer isn't larger than the DIE above it, or is shrunk to the socket or
NUMA node if the LLC exists across NUMA nodes/chiplets. But over time, we
have patched cpu_coregroup_mask to work around things, which I think is
now about to break
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index f73b836047cf..44f733b365cc 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -543,7 +543,6 @@ static int __init parse_cluster(struct device_node *cluster, int depth)
 	bool leaf = true;
 	bool has_cores = false;
 	struct device_node *c;
-	static int package_id __initdata;
 	int core_id = 0;
 	int i, ret;
 
@@ -582,7 +581,7 @@ static int __init parse_cluster(struct device_node *cluster, int depth)
 		}
 
 		if (leaf) {
-			ret = parse_core(c, package_id, core_id++);
+			ret = parse_core(c, 0, core_id++);
 		} else {
 			pr_err("%pOF: Non-leaf cluster with core %s\n",
 			       cluster, name);
@@ -599,9 +598,6 @@ static int __init parse_cluster(struct device_node *cluster, int depth)
 	if (leaf && !has_cores)
 		pr_warn("%pOF: empty cluster\n", cluster);
 
-	if (leaf)
-		package_id++;
-
 	return 0;
 }
Currently, as we parse the CPU topology from the /cpu-map node in the
device tree, we assign the generated cluster count as the physical
package identifier for each CPU, which is wrong.

The device tree bindings for CPU topology support socket nodes to infer
the socket or physical package identifier for a given CPU. Since this is
fairly new and not supported on most old and existing systems, we can
assume all such systems have a single socket/physical package.

Fix the physical package identifier to 0 by removing the assignment of
the cluster identifier to it.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
---
 drivers/base/arch_topology.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)
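For context, the socket binding referred to in the commit message lets a
cpu-map express the package level explicitly. The fragment below is a
made-up two-socket example (node names and cpu phandles are hypothetical,
not taken from any real platform):

```
cpus {
	cpu-map {
		socket0 {
			cluster0 {
				core0 { cpu = <&cpu0>; };
				core1 { cpu = <&cpu1>; };
			};
		};
		socket1 {
			cluster0 {
				core0 { cpu = <&cpu2>; };
				core1 { cpu = <&cpu3>; };
			};
		};
	};
};
```

On systems whose cpu-map lacks socket nodes, the patch above assumes a
single package and assigns package_id 0 to every CPU.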