
x86/Xen: streamline (and fix) PV CPU enumeration

Message ID: 2dbd5f0a-9859-ca2d-085e-a02f7166c610@suse.com (mailing list archive)
State: Accepted
Commit: e25a8d959992f61b64a58fc62fb7951dc6f31d1f
Series: x86/Xen: streamline (and fix) PV CPU enumeration

Commit Message

Jan Beulich Feb. 1, 2022, 10:57 a.m. UTC
This started out with me noticing that "dom0_max_vcpus=<N>" with <N>
larger than the number of physical CPUs reported through ACPI tables
would not bring up the "excess" vCPU-s. Addressing this is the primary
purpose of the change; CPU maps handling is being tidied only as far as
is necessary for the change here (with the effect of also avoiding the
setting up of too much per-CPU infrastructure, i.e. for CPUs which can
never come online).

Noticing that xen_fill_possible_map() is called way too early, whereas
xen_filter_cpu_maps() is called too late (after per-CPU areas were
already set up), and further observing that each of the functions serves
only one of Dom0 or DomU, it looked like it was better to simplify this.
Use the .get_smp_config hook instead, uniformly for Dom0 and DomU.
xen_fill_possible_map() can be dropped altogether, while
xen_filter_cpu_maps() is re-purposed but not otherwise changed.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Extend description.
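
[Editorial note, not part of the patch: .get_smp_config is the x86_init.mpparse callback which the generic x86 boot code uses in place of MP-table parsing; it can be invoked both early and late during boot, which is why the replacement handler in the patch takes an "early" argument and only does real work on the late (early == 0) pass. As of kernels in this timeframe the wrappers in arch/x86/include/asm/mpspec.h look roughly like this:]

/* arch/x86/include/asm/mpspec.h (abridged) - the two invocations of the hook */
static inline void get_smp_config(void)
{
	x86_init.mpparse.get_smp_config(0);	/* late pass */
}

static inline void early_get_smp_config(void)
{
	x86_init.mpparse.get_smp_config(1);	/* early pass */
}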

Comments

Boris Ostrovsky Feb. 2, 2022, 2:27 p.m. UTC | #1
On 2/1/22 5:57 AM, Jan Beulich wrote:
> This started out with me noticing that "dom0_max_vcpus=<N>" with <N>
> larger than the number of physical CPUs reported through ACPI tables
> would not bring up the "excess" vCPU-s. Addressing this is the primary
> purpose of the change; CPU maps handling is being tidied only as far as
> is necessary for the change here (with the effect of also avoiding the
> setting up of too much per-CPU infrastructure, i.e. for CPUs which can
> never come online).
>
> Noticing that xen_fill_possible_map() is called way too early, whereas
> xen_filter_cpu_maps() is called too late (after per-CPU areas were
> already set up), and further observing that each of the functions serves
> only one of Dom0 or DomU, it looked like it was better to simplify this.
> Use the .get_smp_config hook instead, uniformly for Dom0 and DomU.
> xen_fill_possible_map() can be dropped altogether, while
> xen_filter_cpu_maps() is re-purposed but not otherwise changed.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Extend description.


That's been a while ;-)


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Jan Beulich Feb. 2, 2022, 2:35 p.m. UTC | #2
On 02.02.2022 15:27, Boris Ostrovsky wrote:
> 
> On 2/1/22 5:57 AM, Jan Beulich wrote:
>> This started out with me noticing that "dom0_max_vcpus=<N>" with <N>
>> larger than the number of physical CPUs reported through ACPI tables
>> would not bring up the "excess" vCPU-s. Addressing this is the primary
>> purpose of the change; CPU maps handling is being tidied only as far as
>> is necessary for the change here (with the effect of also avoiding the
>> setting up of too much per-CPU infrastructure, i.e. for CPUs which can
>> never come online).
>>
>> Noticing that xen_fill_possible_map() is called way too early, whereas
>> xen_filter_cpu_maps() is called too late (after per-CPU areas were
>> already set up), and further observing that each of the functions serves
>> only one of Dom0 or DomU, it looked like it was better to simplify this.
>> Use the .get_smp_config hook instead, uniformly for Dom0 and DomU.
>> xen_fill_possible_map() can be dropped altogether, while
>> xen_filter_cpu_maps() is re-purposed but not otherwise changed.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: Extend description.
> 
> 
> That's been a while ;-)

Indeed. For some reason I had stored in the back of my memory that
you asked me for splitting the patch. That's something that would
have required at least as much time (to make sure I get it right)
as it took to put together (and test) the patch. Which was more
than I could afford in all this time. Recently I decided to check
with you whether I could talk you into withdrawing that (supposed)
request. But when going back through the old thread, I was
surprised to find that all you had asked for was extending the
description to point out that the CPU map management isn't the
primary purpose of the change.

> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Thanks.

Jan
Jürgen Groß Feb. 4, 2022, 9:24 a.m. UTC | #3
On 01.02.22 11:57, Jan Beulich wrote:
> This started out with me noticing that "dom0_max_vcpus=<N>" with <N>
> larger than the number of physical CPUs reported through ACPI tables
> would not bring up the "excess" vCPU-s. Addressing this is the primary
> purpose of the change; CPU maps handling is being tidied only as far as
> is necessary for the change here (with the effect of also avoiding the
> setting up of too much per-CPU infrastructure, i.e. for CPUs which can
> never come online).
> 
> Noticing that xen_fill_possible_map() is called way too early, whereas
> xen_filter_cpu_maps() is called too late (after per-CPU areas were
> already set up), and further observing that each of the functions serves
> only one of Dom0 or DomU, it looked like it was better to simplify this.
> Use the .get_smp_config hook instead, uniformly for Dom0 and DomU.
> xen_fill_possible_map() can be dropped altogether, while
> xen_filter_cpu_maps() is re-purposed but not otherwise changed.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Pushed to xen/tip.git for-linus-5.17a


Juergen

Patch

--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1341,10 +1341,6 @@  asmlinkage __visible void __init xen_sta
 
 		xen_acpi_sleep_register();
 
-		/* Avoid searching for BIOS MP tables */
-		x86_init.mpparse.find_smp_config = x86_init_noop;
-		x86_init.mpparse.get_smp_config = x86_init_uint_noop;
-
 		xen_boot_params_init_edd();
 
 #ifdef CONFIG_ACPI
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -148,28 +148,12 @@  int xen_smp_intr_init_pv(unsigned int cp
 	return rc;
 }
 
-static void __init xen_fill_possible_map(void)
-{
-	int i, rc;
-
-	if (xen_initial_domain())
-		return;
-
-	for (i = 0; i < nr_cpu_ids; i++) {
-		rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
-		if (rc >= 0) {
-			num_processors++;
-			set_cpu_possible(i, true);
-		}
-	}
-}
-
-static void __init xen_filter_cpu_maps(void)
+static void __init _get_smp_config(unsigned int early)
 {
 	int i, rc;
 	unsigned int subtract = 0;
 
-	if (!xen_initial_domain())
+	if (early)
 		return;
 
 	num_processors = 0;
@@ -210,7 +194,6 @@  static void __init xen_pv_smp_prepare_bo
 		 * sure the old memory can be recycled. */
 		make_lowmem_page_readwrite(xen_initial_gdt);
 
-	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
 
 	/*
@@ -476,5 +459,8 @@  static const struct smp_ops xen_smp_ops
 void __init xen_smp_init(void)
 {
 	smp_ops = xen_smp_ops;
-	xen_fill_possible_map();
+
+	/* Avoid searching for BIOS MP tables */
+	x86_init.mpparse.find_smp_config = x86_init_noop;
+	x86_init.mpparse.get_smp_config = _get_smp_config;
 }
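
[Editorial note: for readers without the surrounding source at hand, the re-purposed hook plausibly ends up looking like the sketch below. The enumeration loop is not visible in the hunk context above; it is reconstructed from the deleted xen_fill_possible_map() and the pre-patch xen_filter_cpu_maps(), so treat it as an illustration rather than the literal committed code.]

/* arch/x86/xen/smp_pv.c - sketch of the PV vCPU enumeration after the patch */
static void __init _get_smp_config(unsigned int early)
{
	int i, rc;
	unsigned int subtract = 0;

	if (early)
		return;

	num_processors = 0;
	disabled_cpus = 0;
	for (i = 0; i < nr_cpu_ids; i++) {
		/* Ask the hypervisor whether this vCPU ID exists for the domain. */
		rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
		if (rc >= 0) {
			/* vCPU exists (whether up or down): count it, mark it possible. */
			num_processors++;
			set_cpu_possible(i, true);
		} else {
			/* No such vCPU: make sure it is neither possible nor present. */
			set_cpu_possible(i, false);
			set_cpu_present(i, false);
			subtract++;
		}
	}
	/* The pre-patch code also consumed "subtract" to cap nr_cpu_ids;
	 * that tail is omitted from this sketch. */
}

[Because this now runs from the .get_smp_config hook rather than from xen_smp_init() / xen_pv_smp_prepare_boot_cpu(), the possible map is settled before per-CPU areas are set up, and vCPUs granted via "dom0_max_vcpus=" are enumerated even when the ACPI tables report fewer physical CPUs.]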