| Message ID | 20211209150938.3518-9-dwmw2@infradead.org |
|---|---|
| State | New, archived |
| Series | Parallel CPU bringup for x86_64 |
On Thu, Dec 09, 2021 at 03:09:35PM +0000, David Woodhouse wrote:
> diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
> index 50a4515fe0ad..4ee247d89a49 100644
> --- a/arch/x86/kernel/tsc_sync.c
> +++ b/arch/x86/kernel/tsc_sync.c
> @@ -202,6 +202,7 @@ bool tsc_store_and_check_tsc_adjust(bool bootcpu)
>   * Entry/exit counters that make sure that both CPUs
>   * run the measurement code at once:
>   */
> +static atomic_t tsc_sync_cpu = ATOMIC_INIT(-1);
>  static atomic_t start_count;
>  static atomic_t stop_count;
>  static atomic_t skip_test;
> @@ -326,6 +327,8 @@ void check_tsc_sync_source(int cpu)
>  		atomic_set(&test_runs, 1);
>  	else
>  		atomic_set(&test_runs, 3);
> +
> +	atomic_set(&tsc_sync_cpu, cpu);
>  retry:
>  	/*
>  	 * Wait for the target to start or to skip the test:
> @@ -407,6 +410,10 @@ void check_tsc_sync_target(void)
>  	if (unsynchronized_tsc())
>  		return;
>  
> +	/* Wait for this CPU's turn */
> +	while (atomic_read(&tsc_sync_cpu) != cpu)
> +		cpu_relax();
> +
>  	/*
>  	 * Store, verify and sanitize the TSC adjust register. If
>  	 * successful skip the test.

This new atomic_t seems superfluous; there isn't any actual atomic
operation used.
On Thu, 2021-12-09 at 16:43 +0100, Peter Zijlstra wrote:
> On Thu, Dec 09, 2021 at 03:09:35PM +0000, David Woodhouse wrote:
> > diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
> > index 50a4515fe0ad..4ee247d89a49 100644
> > --- a/arch/x86/kernel/tsc_sync.c
> > +++ b/arch/x86/kernel/tsc_sync.c
> > @@ -202,6 +202,7 @@ bool tsc_store_and_check_tsc_adjust(bool bootcpu)
> >   * Entry/exit counters that make sure that both CPUs
> >   * run the measurement code at once:
> >   */
> > +static atomic_t tsc_sync_cpu = ATOMIC_INIT(-1);
> >  static atomic_t start_count;
> >  static atomic_t stop_count;
> >  static atomic_t skip_test;
> > @@ -326,6 +327,8 @@ void check_tsc_sync_source(int cpu)
> >  		atomic_set(&test_runs, 1);
> >  	else
> >  		atomic_set(&test_runs, 3);
> > +
> > +	atomic_set(&tsc_sync_cpu, cpu);
> >  retry:
> >  	/*
> >  	 * Wait for the target to start or to skip the test:
> > @@ -407,6 +410,10 @@ void check_tsc_sync_target(void)
> >  	if (unsynchronized_tsc())
> >  		return;
> >  
> > +	/* Wait for this CPU's turn */
> > +	while (atomic_read(&tsc_sync_cpu) != cpu)
> > +		cpu_relax();
> > +
> >  	/*
> >  	 * Store, verify and sanitize the TSC adjust register. If
> >  	 * successful skip the test.
> 
> This new atomic_t seems superfluous; there isn't any actual atomic
> operation used.

That's true; it could just be WRITE_ONCE/READ_ONCE. But the atomic is
effectively equivalent and does no harm. I see this one mostly as a
placeholder; I'd still prefer to have a decent 1:many TSC sync, or at
least a 1:many *check* that falls back to 1:1 mode if anything actually
needs to be adjusted. And/or just avoid the TSC sync completely when
it's not needed, as on kexec.
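For reference, the WRITE_ONCE()/READ_ONCE() variant suggested here
would look roughly as follows. This is a sketch of the alternative
being discussed, not the applied patch; it assumes only the flag
accesses change:

/*
 * Sketch: plain int instead of atomic_t. On x86 an aligned int
 * store/load is already single-copy atomic; the _ONCE() accessors
 * only stop the compiler from tearing or caching the accesses.
 */
static int tsc_sync_cpu = -1;

/* Source side, in check_tsc_sync_source(): admit one target CPU. */
WRITE_ONCE(tsc_sync_cpu, cpu);

/* Target side, in check_tsc_sync_target(): wait for this CPU's turn. */
while (READ_ONCE(tsc_sync_cpu) != cpu)
	cpu_relax();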
diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
index 50a4515fe0ad..4ee247d89a49 100644
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -202,6 +202,7 @@ bool tsc_store_and_check_tsc_adjust(bool bootcpu)
  * Entry/exit counters that make sure that both CPUs
  * run the measurement code at once:
  */
+static atomic_t tsc_sync_cpu = ATOMIC_INIT(-1);
 static atomic_t start_count;
 static atomic_t stop_count;
 static atomic_t skip_test;
@@ -326,6 +327,8 @@ void check_tsc_sync_source(int cpu)
 		atomic_set(&test_runs, 1);
 	else
 		atomic_set(&test_runs, 3);
+
+	atomic_set(&tsc_sync_cpu, cpu);
 retry:
 	/*
 	 * Wait for the target to start or to skip the test:
@@ -407,6 +410,10 @@ void check_tsc_sync_target(void)
 	if (unsynchronized_tsc())
 		return;
 
+	/* Wait for this CPU's turn */
+	while (atomic_read(&tsc_sync_cpu) != cpu)
+		cpu_relax();
+
 	/*
 	 * Store, verify and sanitize the TSC adjust register. If
 	 * successful skip the test.
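To illustrate why this serializes the otherwise-parallel bringup, here
is a minimal userspace model of the same gate. All names are
hypothetical; C11 atomics and pthreads stand in for atomic_t and the
secondary CPUs, and handing the gate back by storing -1 stands in for
the kernel's start_count/stop_count handshake:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_TARGETS	4

static atomic_int sync_cpu = -1;	/* which target may proceed; -1: none */

static void *target(void *arg)
{
	int cpu = (int)(long)arg;

	/* Wait for this CPU's turn, as in check_tsc_sync_target(). */
	while (atomic_load(&sync_cpu) != cpu)
		;	/* userspace stand-in for cpu_relax() */

	printf("cpu %d runs its sync test\n", cpu);

	atomic_store(&sync_cpu, -1);	/* hand the gate back */
	return NULL;
}

int main(void)
{
	pthread_t t[NR_TARGETS];
	int i;

	for (i = 0; i < NR_TARGETS; i++)
		pthread_create(&t[i], NULL, target, (void *)(long)i);

	/* Source side: admit one target at a time, as in check_tsc_sync_source(). */
	for (i = 0; i < NR_TARGETS; i++) {
		atomic_store(&sync_cpu, i);
		while (atomic_load(&sync_cpu) != -1)
			;	/* wait for the admitted target to finish */
	}

	for (i = 0; i < NR_TARGETS; i++)
		pthread_join(t[i], NULL);

	return 0;
}

Compiled with cc -pthread, the "CPUs" print strictly in order 0..3
even though all threads start at once, which is exactly the effect the
tsc_sync_cpu gate has on the targets during parallel bringup.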