Message ID | 20240814091005.969756-1-samuel.holland@sifive.com |
---|---|
State | New, archived |
Series | arm64: Fix KASAN random tag seed initialization |
On Wed, Aug 14, 2024 at 11:10 AM Samuel Holland <samuel.holland@sifive.com> wrote:
>
> Currently, kasan_init_sw_tags() is called before setup_per_cpu_areas(),
> so per_cpu(prng_state, cpu) accesses the same address regardless of the
> value of "cpu", and the same seed value gets copied to the percpu area
> for every CPU. Fix this by moving the call to smp_prepare_boot_cpu(),
> which is the first architecture hook after setup_per_cpu_areas().
>
> Fixes: 3c9e3aa11094 ("kasan: add tag related helper functions")
> Fixes: 3f41b6093823 ("kasan: fix random seed generation for tag-based mode")
> Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
> ---
>
>  arch/arm64/kernel/setup.c | 3 ---
>  arch/arm64/kernel/smp.c   | 2 ++
>  2 files changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index a096e2451044..b22d28ec8028 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -355,9 +355,6 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
>  	smp_init_cpus();
>  	smp_build_mpidr_hash();
>
> -	/* Init percpu seeds for random tags after cpus are set up. */
> -	kasan_init_sw_tags();
> -
>  #ifdef CONFIG_ARM64_SW_TTBR0_PAN
>  	/*
>  	 * Make sure init_thread_info.ttbr0 always generates translation
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 5e18fbcee9a2..f01f0fd7b7fe 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -467,6 +467,8 @@ void __init smp_prepare_boot_cpu(void)
>  	init_gic_priority_masking();
>
>  	kasan_init_hw_tags();
> +	/* Init percpu seeds for random tags after cpus are set up. */
> +	kasan_init_sw_tags();
>  }
>
>  /*
> --
> 2.45.1
>

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>

Thank you!
On Wed, Aug 14, 2024 at 02:09:53AM -0700, Samuel Holland wrote:
> Currently, kasan_init_sw_tags() is called before setup_per_cpu_areas(),
> so per_cpu(prng_state, cpu) accesses the same address regardless of the
> value of "cpu", and the same seed value gets copied to the percpu area
> for every CPU. Fix this by moving the call to smp_prepare_boot_cpu(),
> which is the first architecture hook after setup_per_cpu_areas().

Even with the fix, given the lower resolution of get_cycles(), there's a
good chance that we still have the same seed on all CPUs. If we want
separate seeds, a better bet would be to initialise each CPU separately
via the secondary_start_kernel() path. I'll let the KASAN people comment
on whether that's important.
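[Editorial note: for illustration, a minimal sketch of what the per-CPU seeding suggested above could look like. This is an assumption, not code from the thread: prng_state is file-static in mm/kasan/sw_tags.c, so the kasan_init_sw_tags_cpu() helper and the seed mixing below are hypothetical.]

	/*
	 * Hypothetical sketch only -- not part of the posted patch.  Assumes a
	 * new helper exported from mm/kasan/sw_tags.c, where prng_state is
	 * currently file-static.
	 */
	#include <linux/percpu.h>
	#include <linux/smp.h>
	#include <linux/timex.h>

	static DEFINE_PER_CPU(u32, prng_state);

	void kasan_init_sw_tags_cpu(void)
	{
		/*
		 * Mix the cycle counter with the CPU number so CPUs that boot
		 * within the same get_cycles() granule still end up with
		 * distinct seeds.  Would run on each CPU as it comes up, e.g.
		 * from secondary_start_kernel(), plus once for the boot CPU
		 * from smp_prepare_boot_cpu().
		 */
		this_cpu_write(prng_state, (u32)get_cycles() ^ smp_processor_id());
	}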
On Wed, Aug 14, 2024 at 6:19 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> On Wed, Aug 14, 2024 at 02:09:53AM -0700, Samuel Holland wrote:
> > Currently, kasan_init_sw_tags() is called before setup_per_cpu_areas(),
> > so per_cpu(prng_state, cpu) accesses the same address regardless of the
> > value of "cpu", and the same seed value gets copied to the percpu area
> > for every CPU. Fix this by moving the call to smp_prepare_boot_cpu(),
> > which is the first architecture hook after setup_per_cpu_areas().
>
> Even with the fix, given the lower resolution of get_cycles(), there's a
> good chance that we still have the same seed on all CPUs. If we want
> separate seeds, a better bet would be to initialise each CPU separately
> via the secondary_start_kernel() path. I'll let the KASAN people comment
> on whether that's important.

I think it's fine if we end up with the same seed: SW_TAGS KASAN is just a
debugging feature, not a mitigation. We just want some kind of randomness.

Calling kasan_init_sw_tags() after setup_per_cpu_areas() seems reasonable
though.
On Wed, 14 Aug 2024 02:09:53 -0700, Samuel Holland wrote:
> Currently, kasan_init_sw_tags() is called before setup_per_cpu_areas(),
> so per_cpu(prng_state, cpu) accesses the same address regardless of the
> value of "cpu", and the same seed value gets copied to the percpu area
> for every CPU. Fix this by moving the call to smp_prepare_boot_cpu(),
> which is the first architecture hook after setup_per_cpu_areas().
>
> [...]

Applied to arm64 (for-next/fixes), thanks!

[1/1] arm64: Fix KASAN random tag seed initialization
      https://git.kernel.org/arm64/c/f75c235565f9
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index a096e2451044..b22d28ec8028 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -355,9 +355,6 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 	smp_init_cpus();
 	smp_build_mpidr_hash();
 
-	/* Init percpu seeds for random tags after cpus are set up. */
-	kasan_init_sw_tags();
-
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	/*
 	 * Make sure init_thread_info.ttbr0 always generates translation
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 5e18fbcee9a2..f01f0fd7b7fe 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -467,6 +467,8 @@ void __init smp_prepare_boot_cpu(void)
 	init_gic_priority_masking();
 
 	kasan_init_hw_tags();
+	/* Init percpu seeds for random tags after cpus are set up. */
+	kasan_init_sw_tags();
 }
 
 /*
Currently, kasan_init_sw_tags() is called before setup_per_cpu_areas(),
so per_cpu(prng_state, cpu) accesses the same address regardless of the
value of "cpu", and the same seed value gets copied to the percpu area
for every CPU. Fix this by moving the call to smp_prepare_boot_cpu(),
which is the first architecture hook after setup_per_cpu_areas().

Fixes: 3c9e3aa11094 ("kasan: add tag related helper functions")
Fixes: 3f41b6093823 ("kasan: fix random seed generation for tag-based mode")
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
---
 arch/arm64/kernel/setup.c | 3 ---
 arch/arm64/kernel/smp.c   | 2 ++
 2 files changed, 2 insertions(+), 3 deletions(-)
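[Editorial note: to make the ordering issue concrete, below is a simplified sketch of the seeding loop, paraphrased from mm/kasan/sw_tags.c. It is an approximation, not the verbatim kernel code; the real function also enables stack trace collection and prints an init banner.]

	#include <linux/cpumask.h>
	#include <linux/init.h>
	#include <linux/percpu.h>
	#include <linux/timex.h>

	/* Simplified sketch, paraphrased from mm/kasan/sw_tags.c. */
	static DEFINE_PER_CPU(u32, prng_state);

	void __init kasan_init_sw_tags(void)
	{
		int cpu;

		/*
		 * Before setup_per_cpu_areas() runs, per_cpu(prng_state, cpu)
		 * resolves to the same boot-time copy for every "cpu", so this
		 * loop keeps overwriting one location, and that single value is
		 * later copied into every CPU's per-cpu area.  Calling this from
		 * smp_prepare_boot_cpu(), after setup_per_cpu_areas(), lets each
		 * iteration write that CPU's own slot.
		 */
		for_each_possible_cpu(cpu)
			per_cpu(prng_state, cpu) = (u32)get_cycles();
	}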