Message ID: 20220215101551.23101-5-luca.fancellu@arm.com
State: Superseded
Series: Boot time cpupools
On 15.02.22 11:15, Luca Fancellu wrote:
> Introduce an architecture specific way to create different cpupools
> at boot time, this is particularly useful on ARM big.LITTLE system
> where there might be the need to have different cpupools for each type
> of core, but also systems using NUMA can have different cpu pools for
> each node.
>
> The feature on arm relies on a specification of the cpupools from the
> device tree to build pools and assign cpus to them.
>
> Documentation is created to explain the feature.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

IIRC I suggested to have the core functionality in common code in order
to allow using boot time cpupool creation e.g. via commandline for x86,
too.

Juergen
> On 15 Feb 2022, at 10:48, Juergen Gross <jgross@suse.com> wrote:
>
> On 15.02.22 11:15, Luca Fancellu wrote:
>> Introduce an architecture specific way to create different cpupools
>> at boot time, this is particularly useful on ARM big.LITTLE system
>> where there might be the need to have different cpupools for each type
>> of core, but also systems using NUMA can have different cpu pools for
>> each node.
>> The feature on arm relies on a specification of the cpupools from the
>> device tree to build pools and assign cpus to them.
>> Documentation is created to explain the feature.
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>
> IIRC I suggested to have the core functionality in common code in order
> to allow using boot time cpupool creation e.g. via commandline for x86,
> too.

Yes, however I think the parser to handle everything by command line would
be huge due to input sanitisation, and not as easy to use as the DT. However,
I see Hyperlaunch has plans to use the DT on x86, so I guess it would be ok
to make this feature common once the DT is available on x86 as well.

Cheers,
Luca

> Juergen
On Tue, 15 Feb 2022, Luca Fancellu wrote: > Introduce an architecture specific way to create different cpupools > at boot time, this is particularly useful on ARM big.LITTLE system > where there might be the need to have different cpupools for each type > of core, but also systems using NUMA can have different cpu pools for > each node. > > The feature on arm relies on a specification of the cpupools from the > device tree to build pools and assign cpus to them. > > Documentation is created to explain the feature. > > Signed-off-by: Luca Fancellu <luca.fancellu@arm.com> > --- > docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++ > xen/arch/arm/Kconfig | 9 ++ > xen/arch/arm/Makefile | 1 + > xen/arch/arm/cpupool.c | 118 +++++++++++++++++++++++++ > xen/common/sched/cpupool.c | 4 +- > xen/include/xen/sched.h | 11 +++ > 6 files changed, 260 insertions(+), 1 deletion(-) > create mode 100644 docs/misc/arm/device-tree/cpupools.txt > create mode 100644 xen/arch/arm/cpupool.c > > diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt > new file mode 100644 > index 000000000000..7298b6394332 > --- /dev/null > +++ b/docs/misc/arm/device-tree/cpupools.txt > @@ -0,0 +1,118 @@ > +Boot time cpupools > +================== > + > +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is > +possible to create cpupools during boot phase by specifying them in the device > +tree. > + > +Cpupools specification nodes shall be direct childs of /chosen node. > +Each cpupool node contains the following properties: > + > +- compatible (mandatory) > + > + Must always include the compatiblity string: "xen,cpupool". > + > +- cpupool-id (mandatory) > + > + Must be a positive integer number. Why is cpupool-id mandatory? It looks like it could be generated by Xen. Or is it actually better to have the user specify it anyway? 
> +- cpupool-cpus (mandatory) > + > + Must be a list of device tree phandle to nodes describing cpus (e.g. having > + device_type = "cpu"), it can't be empty. > + > +- cpupool-sched (optional) > + > + Must be a string having the name of a Xen scheduler, it has no effect when > + used in conjunction of a cpupool-id equal to zero, in that case the > + default Xen scheduler is selected (sched=<...> boot argument). I don't get why cpupool-id == 0 should trigger a special cpupool-sched behavior. > +Constraints > +=========== > + > +The cpupool with id zero is implicitly created even if not specified, that pool > +must have at least one cpu assigned, otherwise Xen will stop. > + > +Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's > +not assigned to any other cpupool. > + > +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will > +stop. Thank you for documenting the constraints, but why do we have them? Imagine a user specifying 3 cpu pools and imagine the cpupool-id is optional and missing. We could take care of the cpupool-id generation in Xen and we could also assign the default scheduler everywhere cpupool-sched is not specified. Maybe I am missing something? Does cpupool0 has to exist? I guess the answer could be yes, but if it is specified as id of one of the pools we are fine, otherwise it could be automatically generated by Xen. In any case, I don't think that cpupool0 has to have the default scheduler? 
My suggestion would be: - make cpupool-id optional - assign automatically cpupool-ids starting from 0 - respect cpupool-ids chosen by the user - if some CPUs are left out (not specified in any pool) add an extra cpupool - the extra cpupool doesn't have to be cpupool-id == 0, it could be cpupool-id == n - the extra cpupool uses the default scheduler If the user created cpupools in device tree covering all CPUs and also specified all cpupool-ids everywhere, and none of them are 0 (no cpupool in the system is cpupool0) then panic. (Assuming that cpupool0 is required.) > +Examples > +======== > + > +A system having two types of core, the following device tree specification will > +instruct Xen to have two cpupools: > + > +- The cpupool with id 0 will have 4 cpus assigned. > +- The cpupool with id 1 will have 2 cpus assigned. > + > +As can be seen from the example, cpupool_a has only two cpus assigned, but since > +there are two cpus unassigned, they are automatically assigned to cpupool with > +id zero. The following example can work only if hmp-unsafe=1 is passed to Xen > +boot arguments, otherwise not all cores will be brought up by Xen and the > +cpupool creation process will stop Xen. > + > + > +a72_1: cpu@0 { > + compatible = "arm,cortex-a72"; > + reg = <0x0 0x0>; > + device_type = "cpu"; > + [...] > +}; > + > +a72_2: cpu@1 { > + compatible = "arm,cortex-a72"; > + reg = <0x0 0x1>; > + device_type = "cpu"; > + [...] > +}; > + > +a53_1: cpu@100 { > + compatible = "arm,cortex-a53"; > + reg = <0x0 0x100>; > + device_type = "cpu"; > + [...] > +}; > + > +a53_2: cpu@101 { > + compatible = "arm,cortex-a53"; > + reg = <0x0 0x101>; > + device_type = "cpu"; > + [...] > +}; > + > +cpu@102 { > + compatible = "arm,cortex-a53"; > + reg = <0x0 0x102>; > + device_type = "cpu"; > + [...] > +}; > + > +cpu@103 { > + compatible = "arm,cortex-a53"; > + reg = <0x0 0x103>; > + device_type = "cpu"; > + [...] 
> +}; > + > +chosen { > + > + cpupool_a { > + compatible = "xen,cpupool"; > + cpupool-id = <0>; > + cpupool-cpus = <&a53_1 &a53_2>; > + }; > + cpupool_b { > + compatible = "xen,cpupool"; > + cpupool-id = <1>; > + cpupool-cpus = <&a72_1 &a72_2>; > + cpupool-sched = "credit2"; > + }; Question above notwithstanding, I like it! > + [...] > + > +}; > diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig > index ecfa6822e4d3..64c2879513b7 100644 > --- a/xen/arch/arm/Kconfig > +++ b/xen/arch/arm/Kconfig > @@ -33,6 +33,15 @@ config ACPI > Advanced Configuration and Power Interface (ACPI) support for Xen is > an alternative to device tree on ARM64. > > +config BOOT_TIME_CPUPOOLS > + bool "Create cpupools at boot time" > + depends on ARM > + default n > + help > + > + Creates cpupools during boot time and assigns cpus to them. Cpupools > + options can be specified in the device tree. > + > config GICV3 > bool "GICv3 driver" > depends on ARM_64 && !NEW_VGIC > diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile > index d0dee10102b6..6165da4e77b4 100644 > --- a/xen/arch/arm/Makefile > +++ b/xen/arch/arm/Makefile > @@ -13,6 +13,7 @@ obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o > obj-y += bootfdt.init.o > obj-y += cpuerrata.o > obj-y += cpufeature.o > +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += cpupool.o > obj-y += decode.o > obj-y += device.o > obj-$(CONFIG_IOREQ_SERVER) += dm.o > diff --git a/xen/arch/arm/cpupool.c b/xen/arch/arm/cpupool.c > new file mode 100644 > index 000000000000..a9d5b94635b9 > --- /dev/null > +++ b/xen/arch/arm/cpupool.c > @@ -0,0 +1,118 @@ > +/* SPDX-License-Identifier: GPL-2.0 */ > +/* > + * xen/arch/arm/cpupool.c > + * > + * Code to create cpupools at boot time for arm architecture. > + * > + * Copyright (C) 2022 Arm Ltd. 
> + */ > + > +#include <xen/sched.h> > + > +static struct cpupool *__initdata pool_cpu_map[NR_CPUS]; > + > +void __init arch_allocate_cpupools(const cpumask_t *cpu_online_map) > +{ > + const struct dt_device_node *chosen, *node; > + unsigned int cpu_num, cpupool0_cpu_count = 0; > + cpumask_t cpus_to_assign; > + > + chosen = dt_find_node_by_path("/chosen"); > + if ( !chosen ) > + return; > + > + cpumask_copy(&cpus_to_assign, cpu_online_map); > + > + dt_for_each_child_node(chosen, node) > + { > + const struct dt_device_node *cpu_node; > + unsigned int pool_id; > + int i = 0, sched_id = -1; > + const char* scheduler_name; > + struct cpupool *pool = cpupool0; > + > + if ( !dt_device_is_compatible(node, "xen,cpupool") ) > + continue; > + > + if ( !dt_property_read_u32(node, "cpupool-id", &pool_id) ) > + panic("Missing cpupool-id property!\n"); > + > + if ( !dt_property_read_string(node, "cpupool-sched", &scheduler_name) ) > + { > + sched_id = sched_get_id_by_name(scheduler_name); > + if ( sched_id < 0 ) > + panic("Scheduler %s does not exists!\n", scheduler_name); > + } > + > + if ( pool_id ) > + { > + pool = cpupool_create_pool(pool_id, sched_id); > + if ( !pool ) > + panic("Error creating pool id %u!\n", pool_id); > + } > + > + cpu_node = dt_parse_phandle(node, "cpupool-cpus", 0); > + if ( !cpu_node ) > + panic("Missing or empty cpupool-cpus property!\n"); > + > + while ( cpu_node ) > + { > + register_t cpu_reg; > + const __be32 *prop; > + > + prop = dt_get_property(cpu_node, "reg", NULL); > + if ( !prop ) > + panic("cpupool-cpus pointed node has no reg property!\n"); > + > + cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node)); > + > + /* Check if the cpu is online and in the set to be assigned */ > + for_each_cpu ( cpu_num, &cpus_to_assign ) > + if ( cpu_logical_map(cpu_num) == cpu_reg ) > + break; > + > + if ( cpu_num >= nr_cpu_ids ) > + panic("Cpu found in %s is not online or it's assigned twice!\n", > + dt_node_name(node)); > + > + pool_cpu_map[cpu_num] = 
pool; > + cpumask_clear_cpu(cpu_num, &cpus_to_assign); > + > + printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-%u.\n", > + cpu_reg, pool_id); > + > + /* Keep track of how many cpus are assigned to Pool-0 */ > + if ( !pool_id ) > + cpupool0_cpu_count++; > + > + cpu_node = dt_parse_phandle(node, "cpupool-cpus", ++i); > + } > + } > + > + /* Assign every non assigned cpu to Pool-0 */ > + for_each_cpu ( cpu_num, &cpus_to_assign ) > + { > + pool_cpu_map[cpu_num] = cpupool0; > + cpupool0_cpu_count++; > + printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-0.\n", > + cpu_logical_map(cpu_num)); > + } > + > + if ( !cpupool0_cpu_count ) > + panic("No cpu assigned to cpupool0!\n"); > +} > + > +struct cpupool *__init arch_get_cpupool(unsigned int cpu) > +{ > + return pool_cpu_map[cpu]; > +} > + > +/* > + * Local variables: > + * mode: C > + * c-file-style: "BSD" > + * c-basic-offset: 4 > + * tab-width: 4 > + * indent-tabs-mode: nil > + * End: > + */ > diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c > index 4da12528d6b9..6013d75e2edd 100644 > --- a/xen/common/sched/cpupool.c > +++ b/xen/common/sched/cpupool.c > @@ -1257,12 +1257,14 @@ static int __init cpupool_init(void) > cpupool_put(cpupool0); > register_cpu_notifier(&cpu_nfb); > > + arch_allocate_cpupools(&cpu_online_map); > + > spin_lock(&cpupool_lock); > > cpumask_copy(&cpupool_free_cpus, &cpu_online_map); > > for_each_cpu ( cpu, &cpupool_free_cpus ) > - cpupool_assign_cpu_locked(cpupool0, cpu); > + cpupool_assign_cpu_locked(arch_get_cpupool(cpu), cpu); > > spin_unlock(&cpupool_lock); > > diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h > index a67a9eb2fe9d..dda7db2ba51f 100644 > --- a/xen/include/xen/sched.h > +++ b/xen/include/xen/sched.h > @@ -1177,6 +1177,17 @@ extern void dump_runq(unsigned char key); > > void arch_do_physinfo(struct xen_sysctl_physinfo *pi); > > +#ifdef CONFIG_BOOT_TIME_CPUPOOLS > +void arch_allocate_cpupools(const cpumask_t *cpu_online_map); > 
+struct cpupool *arch_get_cpupool(unsigned int cpu); > +#else > +static inline void arch_allocate_cpupools(const cpumask_t *cpu_online_map) {} > +static inline struct cpupool *arch_get_cpupool(unsigned int cpu) > +{ > + return cpupool0; > +} > +#endif > + > #endif /* __SCHED_H__ */ > > /* > -- > 2.17.1 >
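The alternative scheme Stefano sketches in the review above (optional ids, automatic assignment starting from 0, user-chosen ids respected, an extra pool for leftover CPUs, and a failure if no pool ends up with id 0) can be illustrated in plain C. Everything here, `struct dt_pool`, `assign_pool_ids` and the fixed-size array, is a stand-in invented for illustration, not code from the series:

```c
/* Sketch (not the patch's code) of the id-assignment scheme proposed in
 * the review: pools without a user-chosen id get the lowest unused id,
 * user-chosen ids are respected, leftover CPUs go to one extra pool,
 * and the result must contain a pool with id 0. */
#include <stdbool.h>

#define MAX_POOLS 8

struct dt_pool {
    bool has_id;            /* cpupool-id property present? */
    unsigned int id;        /* valid only if has_id */
};

static bool id_taken(const struct dt_pool *pools, int n, unsigned int id)
{
    for ( int i = 0; i < n; i++ )
        if ( pools[i].has_id && pools[i].id == id )
            return true;
    return false;
}

/* Fill in missing ids; the caller must size the array for one extra
 * entry, since leftover CPUs may add a pool. Returns false if no pool
 * with id 0 would exist. */
bool assign_pool_ids(struct dt_pool *pools, int n, bool leftover_cpus)
{
    unsigned int next = 0;

    for ( int i = 0; i < n; i++ )
    {
        if ( pools[i].has_id )
            continue;
        while ( id_taken(pools, n, next) )
            next++;
        pools[i].id = next;
        pools[i].has_id = true;
    }

    /* Leftover CPUs form one extra pool with the next free id. */
    if ( leftover_cpus )
    {
        while ( id_taken(pools, n, next) )
            next++;
        pools[n].id = next;
        pools[n].has_id = true;
        n++;
    }

    return id_taken(pools, n, 0);   /* cpupool0 must exist */
}
```

Note how the extra pool for unassigned CPUs does not need to be pool 0: it simply takes the next free id, exactly as suggested.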
On 15.02.22 18:56, Luca Fancellu wrote:
>> On 15 Feb 2022, at 10:48, Juergen Gross <jgross@suse.com> wrote:
>>
>> On 15.02.22 11:15, Luca Fancellu wrote:
>>> Introduce an architecture specific way to create different cpupools
>>> at boot time, this is particularly useful on ARM big.LITTLE system
>>> where there might be the need to have different cpupools for each type
>>> of core, but also systems using NUMA can have different cpu pools for
>>> each node.
>>> The feature on arm relies on a specification of the cpupools from the
>>> device tree to build pools and assign cpus to them.
>>> Documentation is created to explain the feature.
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>
>> IIRC I suggested to have the core functionality in common code in order
>> to allow using boot time cpupool creation e.g. via commandline for x86,
>> too.
>
> Yes, however I think the parser to handle everything by command line would
> be huge due to input sanitisation, and not as easy to use as the DT. However,
> I see Hyperlaunch has plans to use the DT on x86, so I guess it would be ok
> to make this feature common once the DT is available on x86 as well.

Everything not being explicitly specific to Arm should be in common
code. Think of the work in progress for Risc-V.

Juergen
> On 16 Feb 2022, at 02:45, Stefano Stabellini <sstabellini@kernel.org> wrote: > > On Tue, 15 Feb 2022, Luca Fancellu wrote: >> Introduce an architecture specific way to create different cpupools >> at boot time, this is particularly useful on ARM big.LITTLE system >> where there might be the need to have different cpupools for each type >> of core, but also systems using NUMA can have different cpu pools for >> each node. >> >> The feature on arm relies on a specification of the cpupools from the >> device tree to build pools and assign cpus to them. >> >> Documentation is created to explain the feature. >> >> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com> >> --- >> docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++ >> xen/arch/arm/Kconfig | 9 ++ >> xen/arch/arm/Makefile | 1 + >> xen/arch/arm/cpupool.c | 118 +++++++++++++++++++++++++ >> xen/common/sched/cpupool.c | 4 +- >> xen/include/xen/sched.h | 11 +++ >> 6 files changed, 260 insertions(+), 1 deletion(-) >> create mode 100644 docs/misc/arm/device-tree/cpupools.txt >> create mode 100644 xen/arch/arm/cpupool.c >> >> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt >> new file mode 100644 >> index 000000000000..7298b6394332 >> --- /dev/null >> +++ b/docs/misc/arm/device-tree/cpupools.txt >> @@ -0,0 +1,118 @@ >> +Boot time cpupools >> +================== >> + >> +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is >> +possible to create cpupools during boot phase by specifying them in the device >> +tree. >> + >> +Cpupools specification nodes shall be direct childs of /chosen node. >> +Each cpupool node contains the following properties: >> + >> +- compatible (mandatory) >> + >> + Must always include the compatiblity string: "xen,cpupool". >> + >> +- cpupool-id (mandatory) >> + >> + Must be a positive integer number. > Hi Stefano, Thank you for your review, > Why is cpupool-id mandatory? 
It looks like it could be generated by Xen. > Or is it actually better to have the user specify it anyway? > Yes at first I thought to automatically generate that, however I needed a structure to map the id to the cpupool DT node. Here my doubt was about the size of the structure, because the user could even specify a cpupool for each cpu. I could allocate It dynamically and free it after domUs creation in setup_xen. What do you think could be the right way? Or the dom0less guest could specify the id, but I like it more when using a phandle to the Xen,cpupool node. > >> +- cpupool-cpus (mandatory) >> + >> + Must be a list of device tree phandle to nodes describing cpus (e.g. having >> + device_type = "cpu"), it can't be empty. >> + >> +- cpupool-sched (optional) >> + >> + Must be a string having the name of a Xen scheduler, it has no effect when >> + used in conjunction of a cpupool-id equal to zero, in that case the >> + default Xen scheduler is selected (sched=<...> boot argument). > > I don't get why cpupool-id == 0 should trigger a special cpupool-sched > behavior. Cpupool with id 0 is embedded in Xen, it has its own special case handling in cpupool_create that is giving it the default scheduler. I thought it was better to leave it as it was, however the cpupool0 scheduler can be modified using sched= boot args as it was before. > > >> +Constraints >> +=========== >> + >> +The cpupool with id zero is implicitly created even if not specified, that pool >> +must have at least one cpu assigned, otherwise Xen will stop. >> + >> +Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's >> +not assigned to any other cpupool. >> + >> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will >> +stop. > > Thank you for documenting the constraints, but why do we have them? > Imagine a user specifying 3 cpu pools and imagine the cpupool-id is > optional and missing. 
We could take care of the cpupool-id generation in > Xen and we could also assign the default scheduler everywhere > cpupool-sched is not specified. Maybe I am missing something? Yes we could make the cpupool-id optional, my doubts are in the fist comment above. Whenever the cpupool-sched is not specified, the current behaviour is to use the default scheduler. > > Does cpupool0 has to exist? I guess the answer could be yes, but if it > is specified as id of one of the pools we are fine, otherwise it could > be automatically generated by Xen. Yes cpupool0 needs to exists, however it is still generated by Xen regardless of the DT specifications. In fact you could not specify in the DT any xen,cpupool compatible node with the cpupool-id == 0 and Xen will generate the cpupool0 anyway (Xen internals are tied with the existence of a cpupool0). > > In any case, I don't think that cpupool0 has to have the default > scheduler? Ok I think I can create a function to assign a scheduler to the cpupool0 after its creation, I would need to test it to be sure I don’t find something strange. > > My suggestion would be: > > - make cpupool-id optional > - assign automatically cpupool-ids starting from 0 > - respect cpupool-ids chosen by the user Ok, it would start from 1 because cpupool0 always exists > - if some CPUs are left out (not specified in any pool) add an extra cpupool > - the extra cpupool doesn't have to be cpupool-id == 0, it could be > cpupool-id == n > - the extra cpupool uses the default scheduler I gave all the unassigned cpus to cpupool0 to reflect the current behaviour, so that a user that doesn’t specify any xen,cpupool node ends up in a system reflecting the current behaviour as the feature is not enabled. However I can say, if no xen,cpupool nodes are found then assign cpus to cpupool0, else assign them to a new cpupool and... 
> > If the user created cpupools in device tree covering all CPUs and also > specified all cpupool-ids everywhere, and none of them are 0 (no cpupool > in the system is cpupool0) then panic. (Assuming that cpupool0 is > required.) … panic if cpupool0 has no cpus. Cheers, Luca > > >> +Examples >> +======== >> + >> +A system having two types of core, the following device tree specification will >> +instruct Xen to have two cpupools: >> + >> +- The cpupool with id 0 will have 4 cpus assigned. >> +- The cpupool with id 1 will have 2 cpus assigned. >> + >> +As can be seen from the example, cpupool_a has only two cpus assigned, but since >> +there are two cpus unassigned, they are automatically assigned to cpupool with >> +id zero. The following example can work only if hmp-unsafe=1 is passed to Xen >> +boot arguments, otherwise not all cores will be brought up by Xen and the >> +cpupool creation process will stop Xen. >> + >> + >> +a72_1: cpu@0 { >> + compatible = "arm,cortex-a72"; >> + reg = <0x0 0x0>; >> + device_type = "cpu"; >> + [...] >> +}; >> + >> +a72_2: cpu@1 { >> + compatible = "arm,cortex-a72"; >> + reg = <0x0 0x1>; >> + device_type = "cpu"; >> + [...] >> +}; >> + >> +a53_1: cpu@100 { >> + compatible = "arm,cortex-a53"; >> + reg = <0x0 0x100>; >> + device_type = "cpu"; >> + [...] >> +}; >> + >> +a53_2: cpu@101 { >> + compatible = "arm,cortex-a53"; >> + reg = <0x0 0x101>; >> + device_type = "cpu"; >> + [...] >> +}; >> + >> +cpu@102 { >> + compatible = "arm,cortex-a53"; >> + reg = <0x0 0x102>; >> + device_type = "cpu"; >> + [...] >> +}; >> + >> +cpu@103 { >> + compatible = "arm,cortex-a53"; >> + reg = <0x0 0x103>; >> + device_type = "cpu"; >> + [...] 
>> +}; >> + >> +chosen { >> + >> + cpupool_a { >> + compatible = "xen,cpupool"; >> + cpupool-id = <0>; >> + cpupool-cpus = <&a53_1 &a53_2>; >> + }; >> + cpupool_b { >> + compatible = "xen,cpupool"; >> + cpupool-id = <1>; >> + cpupool-cpus = <&a72_1 &a72_2>; >> + cpupool-sched = "credit2"; >> + }; > > Question above notwithstanding, I like it! > > >> + [...] >> + >> +}; >> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig >> index ecfa6822e4d3..64c2879513b7 100644 >> --- a/xen/arch/arm/Kconfig >> +++ b/xen/arch/arm/Kconfig >> @@ -33,6 +33,15 @@ config ACPI >> Advanced Configuration and Power Interface (ACPI) support for Xen is >> an alternative to device tree on ARM64. >> >> +config BOOT_TIME_CPUPOOLS >> + bool "Create cpupools at boot time" >> + depends on ARM >> + default n >> + help >> + >> + Creates cpupools during boot time and assigns cpus to them. Cpupools >> + options can be specified in the device tree. >> + >> config GICV3 >> bool "GICv3 driver" >> depends on ARM_64 && !NEW_VGIC >> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile >> index d0dee10102b6..6165da4e77b4 100644 >> --- a/xen/arch/arm/Makefile >> +++ b/xen/arch/arm/Makefile >> @@ -13,6 +13,7 @@ obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o >> obj-y += bootfdt.init.o >> obj-y += cpuerrata.o >> obj-y += cpufeature.o >> +obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += cpupool.o >> obj-y += decode.o >> obj-y += device.o >> obj-$(CONFIG_IOREQ_SERVER) += dm.o >> diff --git a/xen/arch/arm/cpupool.c b/xen/arch/arm/cpupool.c >> new file mode 100644 >> index 000000000000..a9d5b94635b9 >> --- /dev/null >> +++ b/xen/arch/arm/cpupool.c >> @@ -0,0 +1,118 @@ >> +/* SPDX-License-Identifier: GPL-2.0 */ >> +/* >> + * xen/arch/arm/cpupool.c >> + * >> + * Code to create cpupools at boot time for arm architecture. >> + * >> + * Copyright (C) 2022 Arm Ltd. 
>> + */ >> + >> +#include <xen/sched.h> >> + >> +static struct cpupool *__initdata pool_cpu_map[NR_CPUS]; >> + >> +void __init arch_allocate_cpupools(const cpumask_t *cpu_online_map) >> +{ >> + const struct dt_device_node *chosen, *node; >> + unsigned int cpu_num, cpupool0_cpu_count = 0; >> + cpumask_t cpus_to_assign; >> + >> + chosen = dt_find_node_by_path("/chosen"); >> + if ( !chosen ) >> + return; >> + >> + cpumask_copy(&cpus_to_assign, cpu_online_map); >> + >> + dt_for_each_child_node(chosen, node) >> + { >> + const struct dt_device_node *cpu_node; >> + unsigned int pool_id; >> + int i = 0, sched_id = -1; >> + const char* scheduler_name; >> + struct cpupool *pool = cpupool0; >> + >> + if ( !dt_device_is_compatible(node, "xen,cpupool") ) >> + continue; >> + >> + if ( !dt_property_read_u32(node, "cpupool-id", &pool_id) ) >> + panic("Missing cpupool-id property!\n"); >> + >> + if ( !dt_property_read_string(node, "cpupool-sched", &scheduler_name) ) >> + { >> + sched_id = sched_get_id_by_name(scheduler_name); >> + if ( sched_id < 0 ) >> + panic("Scheduler %s does not exists!\n", scheduler_name); >> + } >> + >> + if ( pool_id ) >> + { >> + pool = cpupool_create_pool(pool_id, sched_id); >> + if ( !pool ) >> + panic("Error creating pool id %u!\n", pool_id); >> + } >> + >> + cpu_node = dt_parse_phandle(node, "cpupool-cpus", 0); >> + if ( !cpu_node ) >> + panic("Missing or empty cpupool-cpus property!\n"); >> + >> + while ( cpu_node ) >> + { >> + register_t cpu_reg; >> + const __be32 *prop; >> + >> + prop = dt_get_property(cpu_node, "reg", NULL); >> + if ( !prop ) >> + panic("cpupool-cpus pointed node has no reg property!\n"); >> + >> + cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node)); >> + >> + /* Check if the cpu is online and in the set to be assigned */ >> + for_each_cpu ( cpu_num, &cpus_to_assign ) >> + if ( cpu_logical_map(cpu_num) == cpu_reg ) >> + break; >> + >> + if ( cpu_num >= nr_cpu_ids ) >> + panic("Cpu found in %s is not online or it's assigned 
twice!\n", >> + dt_node_name(node)); >> + >> + pool_cpu_map[cpu_num] = pool; >> + cpumask_clear_cpu(cpu_num, &cpus_to_assign); >> + >> + printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-%u.\n", >> + cpu_reg, pool_id); >> + >> + /* Keep track of how many cpus are assigned to Pool-0 */ >> + if ( !pool_id ) >> + cpupool0_cpu_count++; >> + >> + cpu_node = dt_parse_phandle(node, "cpupool-cpus", ++i); >> + } >> + } >> + >> + /* Assign every non assigned cpu to Pool-0 */ >> + for_each_cpu ( cpu_num, &cpus_to_assign ) >> + { >> + pool_cpu_map[cpu_num] = cpupool0; >> + cpupool0_cpu_count++; >> + printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-0.\n", >> + cpu_logical_map(cpu_num)); >> + } >> + >> + if ( !cpupool0_cpu_count ) >> + panic("No cpu assigned to cpupool0!\n"); >> +} >> + >> +struct cpupool *__init arch_get_cpupool(unsigned int cpu) >> +{ >> + return pool_cpu_map[cpu]; >> +} >> + >> +/* >> + * Local variables: >> + * mode: C >> + * c-file-style: "BSD" >> + * c-basic-offset: 4 >> + * tab-width: 4 >> + * indent-tabs-mode: nil >> + * End: >> + */ >> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c >> index 4da12528d6b9..6013d75e2edd 100644 >> --- a/xen/common/sched/cpupool.c >> +++ b/xen/common/sched/cpupool.c >> @@ -1257,12 +1257,14 @@ static int __init cpupool_init(void) >> cpupool_put(cpupool0); >> register_cpu_notifier(&cpu_nfb); >> >> + arch_allocate_cpupools(&cpu_online_map); >> + >> spin_lock(&cpupool_lock); >> >> cpumask_copy(&cpupool_free_cpus, &cpu_online_map); >> >> for_each_cpu ( cpu, &cpupool_free_cpus ) >> - cpupool_assign_cpu_locked(cpupool0, cpu); >> + cpupool_assign_cpu_locked(arch_get_cpupool(cpu), cpu); >> >> spin_unlock(&cpupool_lock); >> >> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h >> index a67a9eb2fe9d..dda7db2ba51f 100644 >> --- a/xen/include/xen/sched.h >> +++ b/xen/include/xen/sched.h >> @@ -1177,6 +1177,17 @@ extern void dump_runq(unsigned char key); >> >> void arch_do_physinfo(struct 
xen_sysctl_physinfo *pi); >> >> +#ifdef CONFIG_BOOT_TIME_CPUPOOLS >> +void arch_allocate_cpupools(const cpumask_t *cpu_online_map); >> +struct cpupool *arch_get_cpupool(unsigned int cpu); >> +#else >> +static inline void arch_allocate_cpupools(const cpumask_t *cpu_online_map) {} >> +static inline struct cpupool *arch_get_cpupool(unsigned int cpu) >> +{ >> + return cpupool0; >> +} >> +#endif >> + >> #endif /* __SCHED_H__ */ >> >> /* >> -- >> 2.17.1
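Luca's reply above mentions needing a structure that maps a generated cpupool id to the device tree node it came from, allocated dynamically and freed once domU creation is done. A minimal sketch of that bookkeeping follows, with invented names and stand-in types; real code would use Xen's `struct dt_device_node` and `xmalloc`/`xfree` rather than libc:

```c
/* Sketch of boot-time bookkeeping: a table mapping a cpupool id to the
 * DT node it was created from, so dom0less domUs can later reference
 * their pool. All names here are illustrative, not from the series. */
#include <stdlib.h>

struct dt_node;                          /* stand-in for dt_device_node */

static struct dt_node **pool_node_map;   /* index = cpupool id */
static unsigned int pool_node_count;

int boot_cpupool_map_init(unsigned int nr_pools)
{
    pool_node_map = calloc(nr_pools, sizeof(*pool_node_map));
    if ( !pool_node_map )
        return -1;
    pool_node_count = nr_pools;
    return 0;
}

void boot_cpupool_map_set(unsigned int id, struct dt_node *node)
{
    if ( id < pool_node_count )
        pool_node_map[id] = node;
}

struct dt_node *boot_cpupool_map_get(unsigned int id)
{
    return id < pool_node_count ? pool_node_map[id] : NULL;
}

/* Called once domU creation is done: the mapping is no longer needed. */
void boot_cpupool_map_free(void)
{
    free(pool_node_map);
    pool_node_map = NULL;
    pool_node_count = 0;
}
```

Since the table is only consulted during boot, freeing it after setup addresses the size concern even if a user defines one pool per cpu.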
> On 16 Feb 2022, at 06:18, Juergen Gross <jgross@suse.com> wrote:
>
> On 15.02.22 18:56, Luca Fancellu wrote:
>>> On 15 Feb 2022, at 10:48, Juergen Gross <jgross@suse.com> wrote:
>>>
>>> On 15.02.22 11:15, Luca Fancellu wrote:
>>>> Introduce an architecture specific way to create different cpupools
>>>> at boot time, this is particularly useful on ARM big.LITTLE system
>>>> where there might be the need to have different cpupools for each type
>>>> of core, but also systems using NUMA can have different cpu pools for
>>>> each node.
>>>> The feature on arm relies on a specification of the cpupools from the
>>>> device tree to build pools and assign cpus to them.
>>>> Documentation is created to explain the feature.
>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>
>>> IIRC I suggested to have the core functionality in common code in order
>>> to allow using boot time cpupool creation e.g. via commandline for x86,
>>> too.
>> Yes, however I think the parser to handle everything by command line would
>> be huge due to input sanitisation, and not as easy to use as the DT. However,
>> I see Hyperlaunch has plans to use the DT on x86, so I guess it would be ok
>> to make this feature common once the DT is available on x86 as well.
>
> Everything not being explicitly specific to Arm should be in common
> code. Think of the work in progress for Risc-V.

Ok, I will put it in common code and make the feature depend on
HAS_DEVICE_TREE. Thank you.

Cheers,
Luca

> Juergen
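Moving the option to common code as agreed above might look like the following Kconfig fragment. This is a sketch only; the final symbol placement (e.g. `xen/common/Kconfig`) and help text would be up to a later revision of the series:

```kconfig
config BOOT_TIME_CPUPOOLS
	bool "Create cpupools at boot time"
	depends on HAS_DEVICE_TREE
	help
	  Creates cpupools during boot time and assigns cpus to them. Cpupool
	  options can be specified in the device tree.
```

Compared with the original Arm-only option, the `depends on ARM` line is replaced by `depends on HAS_DEVICE_TREE`, so any architecture gaining DT support picks the feature up automatically.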
On 16.02.22 13:10, Luca Fancellu wrote: > > >> On 16 Feb 2022, at 02:45, Stefano Stabellini <sstabellini@kernel.org> wrote: >> >> On Tue, 15 Feb 2022, Luca Fancellu wrote: >>> Introduce an architecture specific way to create different cpupools >>> at boot time, this is particularly useful on ARM big.LITTLE system >>> where there might be the need to have different cpupools for each type >>> of core, but also systems using NUMA can have different cpu pools for >>> each node. >>> >>> The feature on arm relies on a specification of the cpupools from the >>> device tree to build pools and assign cpus to them. >>> >>> Documentation is created to explain the feature. >>> >>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com> >>> --- >>> docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++ >>> xen/arch/arm/Kconfig | 9 ++ >>> xen/arch/arm/Makefile | 1 + >>> xen/arch/arm/cpupool.c | 118 +++++++++++++++++++++++++ >>> xen/common/sched/cpupool.c | 4 +- >>> xen/include/xen/sched.h | 11 +++ >>> 6 files changed, 260 insertions(+), 1 deletion(-) >>> create mode 100644 docs/misc/arm/device-tree/cpupools.txt >>> create mode 100644 xen/arch/arm/cpupool.c >>> >>> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt >>> new file mode 100644 >>> index 000000000000..7298b6394332 >>> --- /dev/null >>> +++ b/docs/misc/arm/device-tree/cpupools.txt >>> @@ -0,0 +1,118 @@ >>> +Boot time cpupools >>> +================== >>> + >>> +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is >>> +possible to create cpupools during boot phase by specifying them in the device >>> +tree. >>> + >>> +Cpupools specification nodes shall be direct childs of /chosen node. >>> +Each cpupool node contains the following properties: >>> + >>> +- compatible (mandatory) >>> + >>> + Must always include the compatiblity string: "xen,cpupool". >>> + >>> +- cpupool-id (mandatory) >>> + >>> + Must be a positive integer number. 
>> > > Hi Stefano, > > Thank you for your review, > >> Why is cpupool-id mandatory? It looks like it could be generated by Xen. >> Or is it actually better to have the user specify it anyway? >> > > Yes at first I thought to automatically generate that, however I needed a structure > to map the id to the cpupool DT node. Here my doubt was about the size of the > structure, because the user could even specify a cpupool for each cpu. I could allocate > It dynamically and free it after domUs creation in setup_xen. > What do you think could be the right way? > Or the dom0less guest could specify the id, but I like it more when using a phandle to the > Xen,cpupool node. > >> >>> +- cpupool-cpus (mandatory) >>> + >>> + Must be a list of device tree phandle to nodes describing cpus (e.g. having >>> + device_type = "cpu"), it can't be empty. >>> + >>> +- cpupool-sched (optional) >>> + >>> + Must be a string having the name of a Xen scheduler, it has no effect when >>> + used in conjunction of a cpupool-id equal to zero, in that case the >>> + default Xen scheduler is selected (sched=<...> boot argument). >> >> I don't get why cpupool-id == 0 should trigger a special cpupool-sched >> behavior. > > Cpupool with id 0 is embedded in Xen, it has its own special case handling in cpupool_create > that is giving it the default scheduler. I thought it was better to leave it as it was, however the > cpupool0 scheduler can be modified using sched= boot args as it was before. > >> >> >>> +Constraints >>> +=========== >>> + >>> +The cpupool with id zero is implicitly created even if not specified, that pool >>> +must have at least one cpu assigned, otherwise Xen will stop. >>> + >>> +Every cpu brought up by Xen will be assigned to the cpupool with id zero if it's >>> +not assigned to any other cpupool. >>> + >>> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen will >>> +stop. >> >> Thank you for documenting the constraints, but why do we have them? 
>> Imagine a user specifying 3 cpu pools and imagine the cpupool-id is
>> optional and missing. We could take care of the cpupool-id generation in
>> Xen and we could also assign the default scheduler everywhere
>> cpupool-sched is not specified. Maybe I am missing something?
>
> Yes, we could make the cpupool-id optional; my doubts are in the first comment above.
> Whenever cpupool-sched is not specified, the current behaviour is to use the default scheduler.
>
>>
>> Does cpupool0 have to exist? I guess the answer could be yes, but if it
>> is specified as id of one of the pools we are fine, otherwise it could
>> be automatically generated by Xen.
>
> Yes, cpupool0 needs to exist; however, it is still generated by Xen regardless of the DT
> specification. In fact you could not specify in the DT any xen,cpupool compatible node
> with cpupool-id == 0 and Xen would generate cpupool0 anyway
> (Xen internals are tied to the existence of a cpupool0).
>
>>
>> In any case, I don't think that cpupool0 has to have the default
>> scheduler?
>
> Ok, I think I can create a function to assign a scheduler to cpupool0 after its creation;
> I would need to test it to be sure I don't find something strange.
>
>>
>> My suggestion would be:
>>
>> - make cpupool-id optional
>> - assign automatically cpupool-ids starting from 0
>> - respect cpupool-ids chosen by the user
>
> Ok, it would start from 1 because cpupool0 always exists.
>
>> - if some CPUs are left out (not specified in any pool) add an extra cpupool
>> - the extra cpupool doesn't have to be cpupool-id == 0, it could be
>>   cpupool-id == n
>> - the extra cpupool uses the default scheduler
>
> I gave all the unassigned cpus to cpupool0 to reflect the current behaviour, so that
> a user that doesn't specify any xen,cpupool node ends up in a system reflecting the
> current behaviour, as if the feature were not enabled.
> However I can say: if no xen,cpupool nodes are found, then assign cpus to cpupool0,
> else assign them to a new cpupool and...
>
>>
>> If the user created cpupools in device tree covering all CPUs and also
>> specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
>> in the system is cpupool0) then panic. (Assuming that cpupool0 is
>> required.)
>
> … panic if cpupool0 has no cpus.

Today cpu 0 is always member of cpupool0, and changing that might be
hard.


Juergen
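To make the binding under discussion concrete, a /chosen fragment following the properties quoted above might look like the snippet below. This is only an illustration: the node name, the cpu node labels `&cpu_a53_0`/`&cpu_a53_1`, and the chosen scheduler are invented, not taken from the patch.

```dts
/ {
    chosen {
        cpupool_little {
            compatible = "xen,cpupool";
            cpupool-id = <1>;
            cpupool-cpus = <&cpu_a53_0 &cpu_a53_1>;
            cpupool-sched = "credit2";
        };
    };
};
```

Per the constraints quoted above, any CPU not listed in a xen,cpupool node would end up in cpupool0.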
> On 16 Feb 2022, at 12:55, Juergen Gross <jgross@suse.com> wrote:
>
> On 16.02.22 13:10, Luca Fancellu wrote:
>>>
>>> My suggestion would be:
>>>
>>> - make cpupool-id optional
>>> - assign automatically cpupool-ids starting from 0
>>> - respect cpupool-ids chosen by the user
>> Ok, it would start from 1 because cpupool0 always exists
>>> - if some CPUs are left out (not specified in any pool) add an extra cpupool
>>> - the extra cpupool doesn't have to be cpupool-id == 0, it could be
>>>   cpupool-id == n
>>> - the extra cpupool uses the default scheduler
>> I gave all the unassigned cpus to cpupool0 to reflect the current behaviour, so that
>> a user that doesn't specify any xen,cpupool node ends up in a system reflecting the
>> current behaviour as the feature is not enabled.
>> However I can say, if no xen,cpupool nodes are found then assign cpus to cpupool0,
>> else assign them to a new cpupool and...
>>>
>>> If the user created cpupools in device tree covering all CPUs and also
>>> specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
>>> in the system is cpupool0) then panic. (Assuming that cpupool0 is
>>> required.)
>> … panic if cpupool0 has no cpus.
>
> Today cpu 0 is always member of cpupool0, and changing that might be
> hard.

Oh, are you sure? I did some tests in the past for this series using a Juno board,
giving cpupool0 only the a72 cores and putting the a53 cores in another cpupool; my
Juno firmware configuration makes Xen have the boot cpu (cpu 0) be one of the a53
cores, and it was working fine. But it was a long time ago, so I would need to try
it again.

>
>
> Juergen
> <OpenPGP_0xB0DE9DD628BF132F.asc>
On 16.02.22 14:01, Luca Fancellu wrote:
>
>
>> On 16 Feb 2022, at 12:55, Juergen Gross <jgross@suse.com> wrote:
>>
>> On 16.02.22 13:10, Luca Fancellu wrote:
>>> … panic if cpupool0 has no cpus.
>>
>> Today cpu 0 is always member of cpupool0, and changing that might be
>> hard.
> Oh, are you sure? I did some tests in the past for this series using a Juno board,
> giving cpupool0 only the a72 cores and putting the a53 cores in another cpupool; my
> Juno firmware configuration makes Xen have the boot cpu (cpu 0) be one of the a53
> cores, and it was working fine. But it was a long time ago, so I would need to try
> it again.

Maybe on Arm the restrictions are less problematic, but I wouldn't bet
that all operations (like moving cpus between cpupools, cpu hotplug,
destroying cpupools, shutdown of the host, ...) are working in a sane
way.


Juergen
On Wed, 2022-02-16 at 12:37 +0000, Luca Fancellu wrote:
> > On 16 Feb 2022, at 06:18, Juergen Gross <jgross@suse.com> wrote:
> > On 15.02.22 18:56, Luca Fancellu wrote:
> > > >
> > > Yes, however I think the parser to handle everything by command
> > > line would be huge due to input sanitisation and not easy enough
> > > as the DT, however I see Hyperlaunch has plans to use DT on x86
> > > so I guess it would be ok to make this feature common once the DT
> > > is available also on x86.
> >
> > Everything not being explicitly specific to Arm should be in common
> > code. Think of the work in progress for Risc-V.
>
> Ok I will put it in common and I will make the feature depend on
> HAS_DEVICE_TREE.
>
Can't we split the DT parsing logic & code, which is fine to be either
ARM or HAS_DEVICE_TREE specific, and the actual core logic, which can
be common and not gated by any particular condition?

This way, if one wants to add boot-time cpupools to x86 via the command
line, he/she would have to implement "only" the command line parsing.

I've looked at the code, and I appreciate that it's not trivial and
that it's probably impossible to achieve 100% decoupling (at least not
without adding a lot more complexity)... But any step we can make in
that direction would be, IMO, a good investment.

Regards
> On 16 Feb 2022, at 16:32, Dario Faggioli <dfaggioli@suse.com> wrote:
>
> I've looked at the code, and I appreciate that it's not trivial and
> that it's probably impossible to achieve 100% decoupling (at least not
> without adding a lot more complexity)... But any step we can make in
> that direction would be, IMO, a good investment.
>

Hi Dario,

Sure, I will try to do my best to point in that direction.

Cheers,
Luca

> Regards
> --
> Dario Faggioli, Ph.D
> http://about.me/dario.faggioli
> Virtualization Software Engineer
> SUSE Labs, SUSE https://www.suse.com/
> -------------------------------------------------------------------
> <<This happens because _I_ choose it to happen!>> (Raistlin Majere)
On Wed, 16 Feb 2022, Luca Fancellu wrote:
> > On 16 Feb 2022, at 02:45, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >
> > On Tue, 15 Feb 2022, Luca Fancellu wrote:
> >> +
> >> +- cpupool-id (mandatory)
> >> +
> >> +    Must be a positive integer number.
> >
>
> Hi Stefano,
>
> Thank you for your review,
>
> > Why is cpupool-id mandatory? It looks like it could be generated by Xen.
> > Or is it actually better to have the user specify it anyway?
> >
>
> Yes, at first I thought to automatically generate that; however, I needed a structure
> to map the id to the cpupool DT node. Here my doubt was about the size of the
> structure, because the user could even specify a cpupool for each cpu. I could allocate
> it dynamically and free it after domU creation in setup_xen.
> What do you think could be the right way?

Maybe we can achieve the goal with the structure we already have:
pool_cpu_map. pool_cpu_map returns struct cpupool*, which has an
unsigned int cpupool_id field. As a pCPU can only be in 1 cpupool, we
could, for each dom0less domain:

- get the xen,cpupool phandle from "domain-cpupool"
- get the first CPU phandle from "cpupool-cpus" in xen,cpupool
- from the CPU node phandle get the CPU number from "reg"
- pool_cpu_map[cpu_reg]->cpupool_id is the id that we need

It should be fast as they are all direct accesses (no walking long
lists or binary trees.)

> Or the dom0less guest could specify the id, but I like it more when using a phandle to the
> xen,cpupool node.

No, I think the ID is something Xen should generate.

> >> +- cpupool-cpus (mandatory)
> >> +
> >> +    Must be a list of device tree phandle to nodes describing cpus (e.g. having
> >> +    device_type = "cpu"), it can't be empty.
> >> +
> >> +- cpupool-sched (optional)
> >> +
> >> +    Must be a string having the name of a Xen scheduler, it has no effect when
> >> +    used in conjunction of a cpupool-id equal to zero, in that case the
> >> +    default Xen scheduler is selected (sched=<...> boot argument).
> >
> > I don't get why cpupool-id == 0 should trigger a special cpupool-sched
> > behavior.
> Ok, I think I can create a function to assign a scheduler to cpupool0 after its creation;
> I would need to test it to be sure I don't find something strange.
>
> > My suggestion would be:
> >
> > - make cpupool-id optional
> > - assign automatically cpupool-ids starting from 0
> > - respect cpupool-ids chosen by the user
>
> Ok, it would start from 1 because cpupool0 always exists.
>
> > - if some CPUs are left out (not specified in any pool) add an extra cpupool
> > - the extra cpupool doesn't have to be cpupool-id == 0, it could be
> >   cpupool-id == n
> > - the extra cpupool uses the default scheduler
>
> I gave all the unassigned cpus to cpupool0 to reflect the current behaviour, so that
> a user that doesn't specify any xen,cpupool node ends up in a system reflecting the
> current behaviour, as if the feature were not enabled.
> However I can say, if no xen,cpupool nodes are found then assign cpus to cpupool0,
> else assign them to a new cpupool and...
>
> > If the user created cpupools in device tree covering all CPUs and also
> > specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
> > in the system is cpupool0) then panic. (Assuming that cpupool0 is
> > required.)
>
> … panic if cpupool0 has no cpus.

That could be a good plan. However, if cpupool0 has to have CPU0 (as
Juergen wrote) then we could automatically assign cpupool-id == 0 to
whatever xen,cpupool node has CPU0:

- if CPU0 is unassigned, cpupool0 is the one with all the unassigned CPUs
- if CPU0 is assigned to one of the xen,cpupool nodes, then that cpupool
  gets id == 0

Alternatively, we could fix the Xen limitation that cpupool0 has to have
CPU0. In any case, the good thing is that from a device tree interface
perspective it doesn't matter: the device tree description doesn't have
to change, and the user doesn't need to care how Xen comes up with the
IDs.
> On 16 Feb 2022, at 13:07, Juergen Gross <jgross@suse.com> wrote:
>
> On 16.02.22 14:01, Luca Fancellu wrote:
>>>>>
>>>>> My suggestion would be:
>>>>>
>>>>> - make cpupool-id optional
>>>>> - assign automatically cpupool-ids starting from 0
>>>>> - respect cpupool-ids chosen by the user
>>>> Ok, it would start from 1 because cpupool0 always exists
>>>>> - if some CPUs are left out (not specified in any pool) add an extra cpupool
>>>>> - the extra cpupool doesn't have to be cpupool-id == 0, it could be
>>>>>   cpupool-id == n
>>>>> - the extra cpupool uses the default scheduler
>>>> I gave all the unassigned cpus to cpupool0 to reflect the current
>>>> behaviour, so that a user who doesn't specify any xen,cpupool node ends up
>>>> with a system behaving as if the feature were not enabled.
>>>> However, I could do this instead: if no xen,cpupool nodes are found, then
>>>> assign the cpus to cpupool0, else assign them to a new cpupool and...
>>>>>
>>>>> If the user created cpupools in device tree covering all CPUs and also
>>>>> specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
>>>>> in the system is cpupool0) then panic. (Assuming that cpupool0 is
>>>>> required.)
>>>> … panic if cpupool0 has no cpus.
>>>
>>> Today cpu 0 is always member of cpupool0, and changing that might be
>>> hard.
>> Oh, are you sure? I did some tests in the past for this series using a Juno
>> board, giving cpupool0 only the a72 cores and putting the a53 cores in
>> another cpupool. My Juno firmware configuration makes the boot cpu (cpu 0)
>> one of the a53 cores, and it was working fine. But it was a long time ago,
>> so I would need to try it again.
>
> Maybe on Arm the restrictions are less problematic, but I wouldn't bet
> that all operations (like moving cpus between cpupools, cpu hotplug,
> destroying cpupools, shutdown of the host, ...) are working in a sane
> way.
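The leftover-cpu step in the suggestion above can be modelled in isolation: remove every cpu claimed by some DT pool from the online set, and whatever remains would form the extra pool running the default scheduler. A toy sketch follows, using plain bitmasks instead of Xen's cpumask_t; `leftover_cpus` and the `NR_POOLS` bound are invented for illustration only.

```c
/*
 * Toy model (plain bitmasks, not Xen's cpumask_t) of the "extra pool"
 * step suggested above: cpus claimed by DT pools are cleared from the
 * online mask; the bits that remain set are cpus no pool claimed, which
 * would go into one auto-created cpupool with the default scheduler.
 */
#define NR_POOLS 4

static unsigned int leftover_cpus(unsigned int online_mask,
                                  const unsigned int assigned[NR_POOLS])
{
    unsigned int free_mask = online_mask;
    int i;

    for ( i = 0; i < NR_POOLS; i++ )
        free_mask &= ~assigned[i];

    /* Bits still set are cpus that no DT pool claimed. */
    return free_mask;
}
```

Whether those leftover cpus land in cpupool0 (the current patch) or in a fresh pool with id n (Stefano's suggestion) is then just a policy choice on top of this mask.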
Hi Juergen, Dario,

I will try to check the cases you list (on arm, because I don't have an x86
setup). I spotted some places in the code where cpu0 can be hardcoded, but I
would need feedback from you (and Dario, I know he worked around that area
too) since the scheduler code is very complex.

While I see cpu0 is hardcoded in these places:

cpu_schedule_up
scheduler_init

I can't see the relation with cpupool0. However, here:

#ifdef CONFIG_X86
void __init sched_setup_dom0_vcpus(struct domain *d)
[…]

I see something that I'm not able to test: could you confirm whether that
code would lead to problems if cpu0 is not in cpupool0, given that Dom0 is
attached to that pool?

Thank you very much for any feedback you can provide.

Cheers,
Luca

>
>
> Juergen
diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
new file mode 100644
index 000000000000..7298b6394332
--- /dev/null
+++ b/docs/misc/arm/device-tree/cpupools.txt
@@ -0,0 +1,118 @@
+Boot time cpupools
+==================
+
+On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is
+possible to create cpupools during the boot phase by specifying them in the
+device tree.
+
+Cpupools specification nodes shall be direct children of the /chosen node.
+Each cpupool node contains the following properties:
+
+- compatible (mandatory)
+
+  Must always include the compatibility string: "xen,cpupool".
+
+- cpupool-id (mandatory)
+
+  Must be a positive integer number.
+
+- cpupool-cpus (mandatory)
+
+  Must be a list of device tree phandles to nodes describing cpus (e.g.
+  having device_type = "cpu"); it can't be empty.
+
+- cpupool-sched (optional)
+
+  Must be a string holding the name of a Xen scheduler. It has no effect when
+  used in conjunction with a cpupool-id equal to zero; in that case the
+  default Xen scheduler is selected (sched=<...> boot argument).
+
+
+Constraints
+===========
+
+The cpupool with id zero is implicitly created even if not specified; that
+pool must have at least one cpu assigned, otherwise Xen will stop.
+
+Every cpu brought up by Xen will be assigned to the cpupool with id zero if
+it's not assigned to any other cpupool.
+
+If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen
+will stop.
+
+
+Examples
+========
+
+On a system with two types of core, the following device tree specification
+will instruct Xen to create two cpupools:
+
+- The cpupool with id 0 will have 4 cpus assigned.
+- The cpupool with id 1 will have 2 cpus assigned.
+
+As can be seen from the example, cpupool_a has only two cpus assigned, but
+since there are two unassigned cpus, they are automatically assigned to the
+cpupool with id zero. The following example works only if hmp-unsafe=1 is
+passed to the Xen boot arguments; otherwise not all cores will be brought up
+by Xen and the cpupool creation process will stop Xen.
+
+
+a72_1: cpu@0 {
+    compatible = "arm,cortex-a72";
+    reg = <0x0 0x0>;
+    device_type = "cpu";
+    [...]
+};
+
+a72_2: cpu@1 {
+    compatible = "arm,cortex-a72";
+    reg = <0x0 0x1>;
+    device_type = "cpu";
+    [...]
+};
+
+a53_1: cpu@100 {
+    compatible = "arm,cortex-a53";
+    reg = <0x0 0x100>;
+    device_type = "cpu";
+    [...]
+};
+
+a53_2: cpu@101 {
+    compatible = "arm,cortex-a53";
+    reg = <0x0 0x101>;
+    device_type = "cpu";
+    [...]
+};
+
+cpu@102 {
+    compatible = "arm,cortex-a53";
+    reg = <0x0 0x102>;
+    device_type = "cpu";
+    [...]
+};
+
+cpu@103 {
+    compatible = "arm,cortex-a53";
+    reg = <0x0 0x103>;
+    device_type = "cpu";
+    [...]
+};
+
+chosen {
+
+    cpupool_a {
+        compatible = "xen,cpupool";
+        cpupool-id = <0>;
+        cpupool-cpus = <&a53_1 &a53_2>;
+    };
+    cpupool_b {
+        compatible = "xen,cpupool";
+        cpupool-id = <1>;
+        cpupool-cpus = <&a72_1 &a72_2>;
+        cpupool-sched = "credit2";
+    };
+
+    [...]
+
+};
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index ecfa6822e4d3..64c2879513b7 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -33,6 +33,15 @@ config ACPI
 	  Advanced Configuration and Power Interface (ACPI) support for Xen is
 	  an alternative to device tree on ARM64.
 
+config BOOT_TIME_CPUPOOLS
+	bool "Create cpupools at boot time"
+	depends on ARM
+	default n
+	help
+
+	  Creates cpupools at boot time and assigns cpus to them. Cpupool
+	  options can be specified in the device tree.
+
 config GICV3
 	bool "GICv3 driver"
 	depends on ARM_64 && !NEW_VGIC
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index d0dee10102b6..6165da4e77b4 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -13,6 +13,7 @@ obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
 obj-y += bootfdt.init.o
 obj-y += cpuerrata.o
 obj-y += cpufeature.o
+obj-$(CONFIG_BOOT_TIME_CPUPOOLS) += cpupool.o
 obj-y += decode.o
 obj-y += device.o
 obj-$(CONFIG_IOREQ_SERVER) += dm.o
diff --git a/xen/arch/arm/cpupool.c b/xen/arch/arm/cpupool.c
new file mode 100644
index 000000000000..a9d5b94635b9
--- /dev/null
+++ b/xen/arch/arm/cpupool.c
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/arch/arm/cpupool.c
+ *
+ * Code to create cpupools at boot time for the arm architecture.
+ *
+ * Copyright (C) 2022 Arm Ltd.
+ */
+
+#include <xen/sched.h>
+
+static struct cpupool *__initdata pool_cpu_map[NR_CPUS];
+
+void __init arch_allocate_cpupools(const cpumask_t *cpu_online_map)
+{
+    const struct dt_device_node *chosen, *node;
+    unsigned int cpu_num, cpupool0_cpu_count = 0;
+    cpumask_t cpus_to_assign;
+
+    chosen = dt_find_node_by_path("/chosen");
+    if ( !chosen )
+        return;
+
+    cpumask_copy(&cpus_to_assign, cpu_online_map);
+
+    dt_for_each_child_node(chosen, node)
+    {
+        const struct dt_device_node *cpu_node;
+        unsigned int pool_id;
+        int i = 0, sched_id = -1;
+        const char *scheduler_name;
+        struct cpupool *pool = cpupool0;
+
+        if ( !dt_device_is_compatible(node, "xen,cpupool") )
+            continue;
+
+        if ( !dt_property_read_u32(node, "cpupool-id", &pool_id) )
+            panic("Missing cpupool-id property!\n");
+
+        if ( !dt_property_read_string(node, "cpupool-sched", &scheduler_name) )
+        {
+            sched_id = sched_get_id_by_name(scheduler_name);
+            if ( sched_id < 0 )
+                panic("Scheduler %s does not exist!\n", scheduler_name);
+        }
+
+        if ( pool_id )
+        {
+            pool = cpupool_create_pool(pool_id, sched_id);
+            if ( !pool )
+                panic("Error creating pool id %u!\n", pool_id);
+        }
+
+        cpu_node = dt_parse_phandle(node, "cpupool-cpus", 0);
+        if ( !cpu_node )
+            panic("Missing or empty cpupool-cpus property!\n");
+
+        while ( cpu_node )
+        {
+            register_t cpu_reg;
+            const __be32 *prop;
+
+            prop = dt_get_property(cpu_node, "reg", NULL);
+            if ( !prop )
+                panic("cpupool-cpus pointed node has no reg property!\n");
+
+            cpu_reg = dt_read_number(prop, dt_n_addr_cells(cpu_node));
+
+            /* Check if the cpu is online and in the set to be assigned */
+            for_each_cpu ( cpu_num, &cpus_to_assign )
+                if ( cpu_logical_map(cpu_num) == cpu_reg )
+                    break;
+
+            if ( cpu_num >= nr_cpu_ids )
+                panic("Cpu found in %s is not online or is assigned twice!\n",
+                      dt_node_name(node));
+
+            pool_cpu_map[cpu_num] = pool;
+            cpumask_clear_cpu(cpu_num, &cpus_to_assign);
+
+            printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-%u.\n",
+                   cpu_reg, pool_id);
+
+            /* Keep track of how many cpus are assigned to Pool-0 */
+            if ( !pool_id )
+                cpupool0_cpu_count++;
+
+            cpu_node = dt_parse_phandle(node, "cpupool-cpus", ++i);
+        }
+    }
+
+    /* Assign every non-assigned cpu to Pool-0 */
+    for_each_cpu ( cpu_num, &cpus_to_assign )
+    {
+        pool_cpu_map[cpu_num] = cpupool0;
+        cpupool0_cpu_count++;
+        printk(XENLOG_INFO "CPU with MPIDR %"PRIregister" in Pool-0.\n",
+               cpu_logical_map(cpu_num));
+    }
+
+    if ( !cpupool0_cpu_count )
+        panic("No cpu assigned to cpupool0!\n");
+}
+
+struct cpupool *__init arch_get_cpupool(unsigned int cpu)
+{
+    return pool_cpu_map[cpu];
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 4da12528d6b9..6013d75e2edd 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1257,12 +1257,14 @@ static int __init cpupool_init(void)
     cpupool_put(cpupool0);
 
     register_cpu_notifier(&cpu_nfb);
 
+    arch_allocate_cpupools(&cpu_online_map);
+
     spin_lock(&cpupool_lock);
 
     cpumask_copy(&cpupool_free_cpus, &cpu_online_map);
 
     for_each_cpu ( cpu, &cpupool_free_cpus )
-        cpupool_assign_cpu_locked(cpupool0, cpu);
+        cpupool_assign_cpu_locked(arch_get_cpupool(cpu), cpu);
 
     spin_unlock(&cpupool_lock);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a67a9eb2fe9d..dda7db2ba51f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1177,6 +1177,17 @@ extern void dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
 
+#ifdef CONFIG_BOOT_TIME_CPUPOOLS
+void arch_allocate_cpupools(const cpumask_t *cpu_online_map);
+struct cpupool *arch_get_cpupool(unsigned int cpu);
+#else
+static inline void arch_allocate_cpupools(const cpumask_t *cpu_online_map) {}
+static inline struct cpupool *arch_get_cpupool(unsigned int cpu)
+{
+    return cpupool0;
+}
+#endif
+
 #endif /* __SCHED_H__ */
 
 /*
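The sched.h hunk uses a common config-gating pattern: real prototypes when BOOT_TIME_CPUPOOLS is enabled, inline no-op stubs otherwise, so common code can call the hooks unconditionally. A minimal standalone illustration of the same pattern (the names and the demo config symbol below are invented, not the Xen ones):

```c
/*
 * Standalone illustration (invented names) of the #ifdef-stub pattern
 * from the sched.h hunk: when the feature is compiled out, callers
 * still link against a trivial inline stub, and every cpu falls back
 * to the default pool.
 */
#define DEFAULT_POOL_ID 0

#ifdef CONFIG_BOOT_TIME_POOLS_DEMO
int arch_pool_of(unsigned int cpu);     /* real per-arch implementation */
#else
static inline int arch_pool_of(unsigned int cpu)
{
    (void)cpu;                          /* unused in the stub */
    return DEFAULT_POOL_ID;             /* everything stays in pool 0 */
}
#endif
```

The benefit, as in the patch, is that `cpupool_init` needs no #ifdef at the call site: with the feature off, `arch_get_cpupool(cpu)` simply degenerates to the old `cpupool0` behaviour.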
Introduce an architecture specific way to create different cpupools
at boot time. This is particularly useful on ARM big.LITTLE systems,
where there might be the need to have different cpupools for each type
of core; systems using NUMA can also have different cpupools for
each node.

The feature on arm relies on a specification of the cpupools from the
device tree to build pools and assign cpus to them.

Documentation is created to explain the feature.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++
 xen/arch/arm/Kconfig                   |   9 ++
 xen/arch/arm/Makefile                  |   1 +
 xen/arch/arm/cpupool.c                 | 118 +++++++++++++++++++++++++
 xen/common/sched/cpupool.c             |   4 +-
 xen/include/xen/sched.h                |  11 +++
 6 files changed, 260 insertions(+), 1 deletion(-)
 create mode 100644 docs/misc/arm/device-tree/cpupools.txt
 create mode 100644 xen/arch/arm/cpupool.c