Message ID: 20230316222109.1940300-4-usama.arif@bytedance.com (mailing list archive)
State: New, archived
Series: Parallel CPU bringup for x86_64
On Thu, Mar 16 2023 at 22:21, Usama Arif wrote:
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 6b3dccb4a888..6ccc64defd47 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -1497,8 +1497,30 @@ int bringup_hibernate_cpu(unsigned int sleep_cpu)
>
>  void bringup_nonboot_cpus(unsigned int setup_max_cpus)
>  {
> +	unsigned int n = setup_max_cpus - num_online_cpus();
>  	unsigned int cpu;
>
> +	/*
> +	 * An architecture may have registered parallel pre-bringup states to
> +	 * which each CPU may be brought in parallel. For each such state,
> +	 * bring N CPUs to it in turn before the final round of bringing them
> +	 * online.
> +	 */
> +	if (n > 0) {
> +		enum cpuhp_state st = CPUHP_BP_PARALLEL_DYN;
> +
> +		while (st <= CPUHP_BP_PARALLEL_DYN_END && cpuhp_hp_states[st].name) {
> +			int i = n;
> +
> +			for_each_present_cpu(cpu) {
> +				cpu_up(cpu, st);
> +				if (!--i)
> +					break;
> +			}
> +			st++;
> +		}
> +	}
> +
>  	for_each_present_cpu(cpu) {
>  		if (num_online_cpus() >= setup_max_cpus)
>  			break;

This causes a subtle issue. The bringup loop above moves all CPUs to
cpuhp_state == CPUHP_BP_PARALLEL_DYN_END. So the serial bootup will
start from there and bring them fully up.

Now if a bringup fails, then the rollback will only go back down to
CPUHP_BP_PARALLEL_DYN_END, which means that the control CPU won't do any
cleanups below CPUHP_BP_PARALLEL_DYN_END.

That 'fail' is a common case for SMT soft disable via the 'nosmt'
command line parameter. Due to the marvelous MCE broadcast 'feature' we
need to bringup the SMT siblings at least to the CPUHP_AP_ONLINE_IDLE
state once and then roll them back.

While this is not necessarily a fatal problem, it's changing behaviour
and with quite some of the details hidden in the (then not issued)
teardown callbacks might cause some hard to decode subtle surprises.

So that second for_each_present_cpu() loop needs to check the return
value of cpu_up() and issue a full rollback to CPUHP_OFFLINE in case of
fail.

Thanks,

        tglx
On Mon, 2023-03-20 at 15:30 +0100, Thomas Gleixner wrote:
>
> This causes a subtle issue. The bringup loop above moves all CPUs to
> cpuhp_state == CPUHP_BP_PARALLEL_DYN_END. So the serial bootup will
> start from there and bring them fully up.
>
> Now if a bringup fails, then the rollback will only go back down to
> CPUHP_BP_PARALLEL_DYN_END, which means that the control CPU won't do any
> cleanups below CPUHP_BP_PARALLEL_DYN_END.
>
> That 'fail' is a common case for SMT soft disable via the 'nosmt'
> command line parameter. Due to the marvelous MCE broadcast 'feature' we
> need to bringup the SMT siblings at least to the CPUHP_AP_ONLINE_IDLE
> state once and then roll them back.
>
> While this is not necessarily a fatal problem, it's changing behaviour
> and with quite some of the details hidden in the (then not issued)
> teardown callbacks might cause some hard to decode subtle surprises.
>
> So that second for_each_present_cpu() loop needs to check the return
> value of cpu_up() and issue a full rollback to CPUHP_OFFLINE in case of
> fail.

@@ -1524,8 +1531,22 @@ void bringup_nonboot_cpus(unsigned int setup_max_cpus)
 	for_each_present_cpu(cpu) {
 		if (num_online_cpus() >= setup_max_cpus)
 			break;
-		if (!cpu_online(cpu))
-			cpu_up(cpu, CPUHP_ONLINE);
+		if (!cpu_online(cpu)) {
+			int ret = cpu_up(cpu, CPUHP_ONLINE);
+
+			/*
+			 * For the parallel bringup case, roll all the way back
+			 * to CPUHP_OFFLINE on failure; don't leave them in the
+			 * parallel stages. This happens in the nosmt case for
+			 * non-primary threads.
+			 */
+			if (ret && cpuhp_hp_states[CPUHP_BP_PARALLEL_DYN].name) {
+				struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+				if (can_rollback_cpu(st))
+					WARN_ON(cpuhp_invoke_callback_range(false, cpu, st,
+									    CPUHP_OFFLINE));
+			}
+		}
 	}
 }
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 6c6859bfc454..e5a73ae6ccc0 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -133,6 +133,8 @@ enum cpuhp_state {
 	CPUHP_MIPS_SOC_PREPARE,
 	CPUHP_BP_PREPARE_DYN,
 	CPUHP_BP_PREPARE_DYN_END	= CPUHP_BP_PREPARE_DYN + 20,
+	CPUHP_BP_PARALLEL_DYN,
+	CPUHP_BP_PARALLEL_DYN_END	= CPUHP_BP_PARALLEL_DYN + 4,
 	CPUHP_BRINGUP_CPU,
 
 	/*
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6b3dccb4a888..6ccc64defd47 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1497,8 +1497,30 @@ int bringup_hibernate_cpu(unsigned int sleep_cpu)
 
 void bringup_nonboot_cpus(unsigned int setup_max_cpus)
 {
+	unsigned int n = setup_max_cpus - num_online_cpus();
 	unsigned int cpu;
 
+	/*
+	 * An architecture may have registered parallel pre-bringup states to
+	 * which each CPU may be brought in parallel. For each such state,
+	 * bring N CPUs to it in turn before the final round of bringing them
+	 * online.
+	 */
+	if (n > 0) {
+		enum cpuhp_state st = CPUHP_BP_PARALLEL_DYN;
+
+		while (st <= CPUHP_BP_PARALLEL_DYN_END && cpuhp_hp_states[st].name) {
+			int i = n;
+
+			for_each_present_cpu(cpu) {
+				cpu_up(cpu, st);
+				if (!--i)
+					break;
+			}
+			st++;
+		}
+	}
+
 	for_each_present_cpu(cpu) {
 		if (num_online_cpus() >= setup_max_cpus)
 			break;
@@ -1875,6 +1897,10 @@ static int cpuhp_reserve_state(enum cpuhp_state state)
 		step = cpuhp_hp_states + CPUHP_BP_PREPARE_DYN;
 		end = CPUHP_BP_PREPARE_DYN_END;
 		break;
+	case CPUHP_BP_PARALLEL_DYN:
+		step = cpuhp_hp_states + CPUHP_BP_PARALLEL_DYN;
+		end = CPUHP_BP_PARALLEL_DYN_END;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -1899,14 +1925,15 @@ static int cpuhp_store_callbacks(enum cpuhp_state state, const char *name,
 	/*
 	 * If name is NULL, then the state gets removed.
 	 *
-	 * CPUHP_AP_ONLINE_DYN and CPUHP_BP_PREPARE_DYN are handed out on
+	 * CPUHP_AP_ONLINE_DYN and CPUHP_BP_P*_DYN are handed out on
	 * the first allocation from these dynamic ranges, so the removal
	 * would trigger a new allocation and clear the wrong (already
	 * empty) state, leaving the callbacks of the to be cleared state
	 * dangling, which causes wreckage on the next hotplug operation.
	 */
 	if (name && (state == CPUHP_AP_ONLINE_DYN ||
-		     state == CPUHP_BP_PREPARE_DYN)) {
+		     state == CPUHP_BP_PREPARE_DYN ||
+		     state == CPUHP_BP_PARALLEL_DYN)) {
 		ret = cpuhp_reserve_state(state);
 		if (ret < 0)
 			return ret;