| Message ID | 20221109054056.3618089-1-guoren@kernel.org (mailing list archive) |
|---|---|
| State | Superseded |
| Delegated to | Palmer Dabbelt |
| Series | [V2] riscv: asid: Fixup stale TLB entry cause application crash |
| Context | Check | Description |
|---|---|---|
| conchuod/patch_count | success | Link |
| conchuod/cover_letter | success | Single patches do not need cover letters |
| conchuod/tree_selection | success | Guessed tree name to be fixes |
| conchuod/fixes_present | success | Fixes tag present in non-next series |
| conchuod/verify_signedoff | success | Signed-off-by tag matches author and committer |
| conchuod/kdoc | success | Errors and warnings before: 0 this patch: 0 |
| conchuod/module_param | success | Was 0 now: 0 |
| conchuod/build_rv32_defconfig | success | Build OK |
| conchuod/build_warn_rv64 | success | Errors and warnings before: 0 this patch: 0 |
| conchuod/dtb_warn_rv64 | success | Errors and warnings before: 0 this patch: 0 |
| conchuod/header_inline | success | No static functions without inline keyword in header files |
| conchuod/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 12 lines checked |
| conchuod/source_inline | success | Was 0 now: 0 |
| conchuod/build_rv64_nommu_k210_defconfig | success | Build OK |
| conchuod/verify_fixes | success | Fixes tag looks correct |
| conchuod/build_rv64_nommu_virt_defconfig | success | Build OK |
On Wed, Nov 09, 2022 at 12:40:56AM -0500, guoren@kernel.org wrote:
> From: Guo Ren <guoren@linux.alibaba.com>
>
> After use_asid_allocator is enabled, the userspace application will
> crash by stale TLB entries. Because only using cpumask_clear_cpu without
> local_flush_tlb_all couldn't guarantee CPU's TLB entries were fresh.
> Then set_mm_asid would cause the user space application to get a stale
> value by stale TLB entry, but set_mm_noasid is okay.
>
> Here is the symptom of the bug:
> unhandled signal 11 code 0x1 (coredump)
>  0x0000003fd6d22524 <+4>:   auipc  s0,0x70
>  0x0000003fd6d22528 <+8>:   ld     s0,-148(s0) # 0x3fd6d92490
> => 0x0000003fd6d2252c <+12>: ld     a5,0(s0)
> (gdb) i r s0
> s0   0x8082ed1cc3198b21   0x8082ed1cc3198b21
> (gdb) x /2x 0x3fd6d92490
> 0x3fd6d92490:   0xd80ac8a8   0x0000003f
> The core dump file shows that register s0 is wrong, but the value in
> memory is correct. Because 'ld s0, -148(s0)' used a stale mapping entry
> in TLB and got a wrong result from an incorrect physical address.
>
> When the task ran on CPU0, which loaded/speculative-loaded the value of
> address(0x3fd6d92490), then the first version of the mapping entry was
> PTWed into CPU0's TLB.
> When the task switched from CPU0 to CPU1 (No local_tlb_flush_all here by
> asid), it happened to write a value on the address (0x3fd6d92490). It
> caused do_page_fault -> wp_page_copy -> ptep_clear_flush ->
> ptep_get_and_clear & flush_tlb_page.
> The flush_tlb_page used mm_cpumask(mm) to determine which CPUs need TLB
> flush, but CPU0 had cleared the CPU0's mm_cpumask in the previous
> switch_mm. So we only flushed the CPU1 TLB and set the second version
> mapping of the PTE. When the task switched from CPU1 to CPU0 again, CPU0
> still used a stale TLB mapping entry which contained a wrong target
> physical address. It raised a bug when the task happened to read that
> value.
>
> The solution is to keep all CPUs' footmarks of cpumask(mm) in switch_mm,
> which could prevent losing pieces of stuff during TLB flush.
>
> Fixes: 65d4b9c53017 ("RISC-V: Implement ASID allocator")
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Cc: Anup Patel <apatel@ventanamicro.com>
> Cc: Palmer Dabbelt <palmer@rivosinc.com>
> ---
> Changes in v2:
>  - Fixup nommu compile problem (Thx Conor, Also Reported-by: kernel
>    test robot <lkp@intel.com>)
>  - Keep cpumask_clear_cpu for noasid
> ---
>  arch/riscv/mm/context.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
> index 7acbfbd14557..f58e4b211595 100644
> --- a/arch/riscv/mm/context.c
> +++ b/arch/riscv/mm/context.c
> @@ -317,7 +317,11 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>   	 */
>   	cpu = smp_processor_id();
>
> - 	cpumask_clear_cpu(cpu, mm_cpumask(prev));
> +#ifdef CONFIG_MMU
> + 	if (!static_branch_unlikely(&use_asid_allocator))
> +#endif

That's not very pretty. Can't we just do the following, instead?

diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 7acbfbd14557..ace419761e31 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -16,10 +16,11 @@
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>

-#ifdef CONFIG_MMU

 DEFINE_STATIC_KEY_FALSE(use_asid_allocator);

+#ifdef CONFIG_MMU
+
 static unsigned long asid_bits;
 static unsigned long num_asids;
 static unsigned long asid_mask;

Thanks,
drew

> + 		cpumask_clear_cpu(cpu, mm_cpumask(prev));
> +
>   	cpumask_set_cpu(cpu, mm_cpumask(next));
>
>   	set_mm(next, cpu);
> --
> 2.36.1
>
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv
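For reference, the arrangement drew proposes would leave the top of arch/riscv/mm/context.c looking roughly like the sketch below (reconstructed from the hunk above, not the verbatim upstream file). With the static key defined outside the #ifdef, a nommu build still gets a use_asid_allocator key that is simply never enabled, so switch_mm() could test it without the guard that prompted the comment.

```c
#include <asm/cacheflush.h>
#include <asm/mmu_context.h>

/*
 * Defined for both MMU and nommu builds: on !CONFIG_MMU the key is
 * never switched on, so static_branch_unlikely(&use_asid_allocator)
 * stays false and can be referenced unconditionally.
 */
DEFINE_STATIC_KEY_FALSE(use_asid_allocator);

#ifdef CONFIG_MMU

/* The ASID allocator state itself remains MMU-only. */
static unsigned long asid_bits;
static unsigned long num_asids;
static unsigned long asid_mask;
```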
On Wed, Nov 9, 2022 at 5:45 PM Andrew Jones <ajones@ventanamicro.com> wrote:
>
> On Wed, Nov 09, 2022 at 12:40:56AM -0500, guoren@kernel.org wrote:
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > After use_asid_allocator is enabled, the userspace application will
> > crash by stale TLB entries. Because only using cpumask_clear_cpu without
> > local_flush_tlb_all couldn't guarantee CPU's TLB entries were fresh.
> > Then set_mm_asid would cause the user space application to get a stale
> > value by stale TLB entry, but set_mm_noasid is okay.
> >
> > Here is the symptom of the bug:
> > unhandled signal 11 code 0x1 (coredump)
> >  0x0000003fd6d22524 <+4>:   auipc  s0,0x70
> >  0x0000003fd6d22528 <+8>:   ld     s0,-148(s0) # 0x3fd6d92490
> > => 0x0000003fd6d2252c <+12>: ld     a5,0(s0)
> > (gdb) i r s0
> > s0   0x8082ed1cc3198b21   0x8082ed1cc3198b21
> > (gdb) x /2x 0x3fd6d92490
> > 0x3fd6d92490:   0xd80ac8a8   0x0000003f
> > The core dump file shows that register s0 is wrong, but the value in
> > memory is correct. Because 'ld s0, -148(s0)' used a stale mapping entry
> > in TLB and got a wrong result from an incorrect physical address.
> >
> > When the task ran on CPU0, which loaded/speculative-loaded the value of
> > address(0x3fd6d92490), then the first version of the mapping entry was
> > PTWed into CPU0's TLB.
> > When the task switched from CPU0 to CPU1 (No local_tlb_flush_all here by
> > asid), it happened to write a value on the address (0x3fd6d92490). It
> > caused do_page_fault -> wp_page_copy -> ptep_clear_flush ->
> > ptep_get_and_clear & flush_tlb_page.
> > The flush_tlb_page used mm_cpumask(mm) to determine which CPUs need TLB
> > flush, but CPU0 had cleared the CPU0's mm_cpumask in the previous
> > switch_mm. So we only flushed the CPU1 TLB and set the second version
> > mapping of the PTE. When the task switched from CPU1 to CPU0 again, CPU0
> > still used a stale TLB mapping entry which contained a wrong target
> > physical address. It raised a bug when the task happened to read that
> > value.
> >
> > The solution is to keep all CPUs' footmarks of cpumask(mm) in switch_mm,
> > which could prevent losing pieces of stuff during TLB flush.
> >
> > Fixes: 65d4b9c53017 ("RISC-V: Implement ASID allocator")
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > Cc: Anup Patel <apatel@ventanamicro.com>
> > Cc: Palmer Dabbelt <palmer@rivosinc.com>
> > ---
> > Changes in v2:
> >  - Fixup nommu compile problem (Thx Conor, Also Reported-by: kernel
> >    test robot <lkp@intel.com>)
> >  - Keep cpumask_clear_cpu for noasid
> > ---
> >  arch/riscv/mm/context.c | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
> > index 7acbfbd14557..f58e4b211595 100644
> > --- a/arch/riscv/mm/context.c
> > +++ b/arch/riscv/mm/context.c
> > @@ -317,7 +317,11 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
> >   	 */
> >   	cpu = smp_processor_id();
> >
> > - 	cpumask_clear_cpu(cpu, mm_cpumask(prev));
> > +#ifdef CONFIG_MMU
> > + 	if (!static_branch_unlikely(&use_asid_allocator))
> > +#endif
>
> That's not very pretty. Can't we just do the following, instead?
>
> diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
> index 7acbfbd14557..ace419761e31 100644
> --- a/arch/riscv/mm/context.c
> +++ b/arch/riscv/mm/context.c
> @@ -16,10 +16,11 @@
>  #include <asm/cacheflush.h>
>  #include <asm/mmu_context.h>
>
> -#ifdef CONFIG_MMU
>
>  DEFINE_STATIC_KEY_FALSE(use_asid_allocator);
Define use_asid_allocator in nommu part? How about:

diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 7acbfbd14557..ed3f8de7ef97 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -205,12 +205,16 @@ static void set_mm_noasid(struct mm_struct *mm)
 	local_flush_tlb_all();
 }

-static inline void set_mm(struct mm_struct *mm, unsigned int cpu)
+static inline void set_mm(struct mm_struct *prev,
+			  struct mm_struct *next, unsigned int cpu)
 {
-	if (static_branch_unlikely(&use_asid_allocator))
-		set_mm_asid(mm, cpu);
-	else
-		set_mm_noasid(mm);
+	cpumask_set_cpu(cpu, mm_cpumask(next));
+	if (static_branch_unlikely(&use_asid_allocator)) {
+		set_mm_asid(next, cpu);
+	} else {
+		cpumask_clear_cpu(cpu, mm_cpumask(prev));
+		set_mm_noasid(next);
+	}
 }

 static int __init asids_init(void)
@@ -264,7 +268,8 @@ static int __init asids_init(void)
 }
 early_initcall(asids_init);
 #else
-static inline void set_mm(struct mm_struct *mm, unsigned int cpu)
+static inline void set_mm(struct mm_struct *prev,
+			  struct mm_struct *next, unsigned int cpu)
 {
 	/* Nothing to do here when there is no MMU */
 }
@@ -317,10 +322,7 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	 */
 	cpu = smp_processor_id();

-	cpumask_clear_cpu(cpu, mm_cpumask(prev));
-	cpumask_set_cpu(cpu, mm_cpumask(next));
-
-	set_mm(next, cpu);
+	set_mm(prev, next, cpu);

 	flush_icache_deferred(next, cpu);

>
> +#ifdef CONFIG_MMU
> +
>  static unsigned long asid_bits;
>  static unsigned long num_asids;
>  static unsigned long asid_mask;
>
>
> Thanks,
> drew
>
> > + 		cpumask_clear_cpu(cpu, mm_cpumask(prev));
> > +
> >   	cpumask_set_cpu(cpu, mm_cpumask(next));
> >
> >   	set_mm(next, cpu);
> > --
> > 2.36.1
> >
> >
> > _______________________________________________
> > linux-riscv mailing list
> > linux-riscv@lists.infradead.org
> > http://lists.infradead.org/mailman/listinfo/linux-riscv
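Folding Guo Ren's diff together, set_mm() would end up roughly as below, with the reasoning from the commit message added as comments (a sketch of the proposal as posted, not the final merged code):

```c
static inline void set_mm(struct mm_struct *prev,
			  struct mm_struct *next, unsigned int cpu)
{
	/*
	 * 'next' always gains this CPU in its mm_cpumask, so that later
	 * flush_tlb_page()/flush_tlb_range() calls know to reach it.
	 */
	cpumask_set_cpu(cpu, mm_cpumask(next));

	if (static_branch_unlikely(&use_asid_allocator)) {
		/*
		 * With ASIDs, this CPU may still hold TLB entries tagged
		 * with prev's ASID, so prev's bit is deliberately kept:
		 * clearing it is what let flush_tlb_page() skip this CPU
		 * and leave the stale mapping behind.
		 */
		set_mm_asid(next, cpu);
	} else {
		/*
		 * Without ASIDs, set_mm_noasid() does a full
		 * local_flush_tlb_all(), so dropping prev's bit is safe.
		 */
		cpumask_clear_cpu(cpu, mm_cpumask(prev));
		set_mm_noasid(next);
	}
}
```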
On Thu, Nov 10, 2022 at 09:51:03AM +0800, Guo Ren wrote:
> On Wed, Nov 9, 2022 at 5:45 PM Andrew Jones <ajones@ventanamicro.com> wrote:
> >
> > On Wed, Nov 09, 2022 at 12:40:56AM -0500, guoren@kernel.org wrote:
> > >
> > > - 	cpumask_clear_cpu(cpu, mm_cpumask(prev));
> > > +#ifdef CONFIG_MMU
> > > + 	if (!static_branch_unlikely(&use_asid_allocator))
> > > +#endif
> >
> > That's not very pretty. Can't we just do the following, instead?
> >
> > diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
> > index 7acbfbd14557..ace419761e31 100644
> > --- a/arch/riscv/mm/context.c
> > +++ b/arch/riscv/mm/context.c
> > @@ -16,10 +16,11 @@
> >  #include <asm/cacheflush.h>
> >  #include <asm/mmu_context.h>
> >
> > -#ifdef CONFIG_MMU
> >
> >  DEFINE_STATIC_KEY_FALSE(use_asid_allocator);
> Define use_asid_allocator in nommu part? How about:

Yeah, I was thinking it'll just always be a false static branch in the
nommu case, but I like your proposal below better.

Thanks,
drew

>
> diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
> index 7acbfbd14557..ed3f8de7ef97 100644
> --- a/arch/riscv/mm/context.c
> +++ b/arch/riscv/mm/context.c
> @@ -205,12 +205,16 @@ static void set_mm_noasid(struct mm_struct *mm)
>  	local_flush_tlb_all();
>  }
>
> -static inline void set_mm(struct mm_struct *mm, unsigned int cpu)
> +static inline void set_mm(struct mm_struct *prev,
> +			  struct mm_struct *next, unsigned int cpu)
>  {
> -	if (static_branch_unlikely(&use_asid_allocator))
> -		set_mm_asid(mm, cpu);
> -	else
> -		set_mm_noasid(mm);
> +	cpumask_set_cpu(cpu, mm_cpumask(next));
> +	if (static_branch_unlikely(&use_asid_allocator)) {
> +		set_mm_asid(next, cpu);
> +	} else {
> +		cpumask_clear_cpu(cpu, mm_cpumask(prev));
> +		set_mm_noasid(next);
> +	}
>  }
>
>  static int __init asids_init(void)
> @@ -264,7 +268,8 @@ static int __init asids_init(void)
>  }
>  early_initcall(asids_init);
>  #else
> -static inline void set_mm(struct mm_struct *mm, unsigned int cpu)
> +static inline void set_mm(struct mm_struct *prev,
> +			  struct mm_struct *next, unsigned int cpu)
>  {
>  	/* Nothing to do here when there is no MMU */
>  }
> @@ -317,10 +322,7 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>  	 */
>  	cpu = smp_processor_id();
>
> -	cpumask_clear_cpu(cpu, mm_cpumask(prev));
> -	cpumask_set_cpu(cpu, mm_cpumask(next));
> -
> -	set_mm(next, cpu);
> +	set_mm(prev, next, cpu);
>
>  	flush_icache_deferred(next, cpu);
>
> >
> > +#ifdef CONFIG_MMU
> > +
> >  static unsigned long asid_bits;
> >  static unsigned long num_asids;
> >  static unsigned long asid_mask;
> >
> >
> > Thanks,
> > drew
> >
> > > + 		cpumask_clear_cpu(cpu, mm_cpumask(prev));
> > > +
> > >   	cpumask_set_cpu(cpu, mm_cpumask(next));
> > >
> > >   	set_mm(next, cpu);
> > > --
> > > 2.36.1
> > >
> > >
> > > _______________________________________________
> > > linux-riscv mailing list
> > > linux-riscv@lists.infradead.org
> > > http://lists.infradead.org/mailman/listinfo/linux-riscv
>
> --
> Best Regards
> Guo Ren
diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 7acbfbd14557..f58e4b211595 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -317,7 +317,11 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
  	 */
  	cpu = smp_processor_id();

- 	cpumask_clear_cpu(cpu, mm_cpumask(prev));
+#ifdef CONFIG_MMU
+ 	if (!static_branch_unlikely(&use_asid_allocator))
+#endif
+ 		cpumask_clear_cpu(cpu, mm_cpumask(prev));
+
  	cpumask_set_cpu(cpu, mm_cpumask(next));

  	set_mm(next, cpu);
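Put together, the lines touched by this V2 patch leave switch_mm() looking roughly like the sketch below (only the modified region is shown; the rest of the function is unchanged):

```c
	cpu = smp_processor_id();

	/*
	 * use_asid_allocator is only declared under CONFIG_MMU in the
	 * unpatched file, hence the #ifdef questioned in the review
	 * above. When the ASID allocator is in use, prev's bit is kept
	 * in mm_cpumask() so that flush_tlb_page()/flush_tlb_range()
	 * still reach this CPU and no stale translation survives.
	 */
#ifdef CONFIG_MMU
	if (!static_branch_unlikely(&use_asid_allocator))
#endif
		cpumask_clear_cpu(cpu, mm_cpumask(prev));

	cpumask_set_cpu(cpu, mm_cpumask(next));

	set_mm(next, cpu);
```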