Message ID | 20221101064146.69551-1-lihuafei1@huawei.com (mailing list archive) |
---|---|
State | Superseded |
Series | ftrace: Fix use-after-free for dynamic ftrace_ops |
On Tue, 1 Nov 2022 14:41:46 +0800
Li Huafei <lihuafei1@huawei.com> wrote:

> KASAN reported a use-after-free with ftrace ops [1]. It was found from
> vmcore that perf had registered two ops with the same content
> successively, both dynamic. After unregistering the second ops, a
> use-after-free occurred.
>
> In ftrace_shutdown(), when the second ops is unregistered, the
> FTRACE_UPDATE_CALLS command is not set because there is another enabled
> ops with the same content. Also, both ops are dynamic and the ftrace
> callback function is ftrace_ops_list_func, so the
> FTRACE_UPDATE_TRACE_FUNC command will not be set. Eventually the value
> of 'command' will be 0 and ftrace_shutdown() will skip the rcu
> synchronization.
>
> However, ftrace may be activated. When the ops is released, another CPU
> may be accessing the ops. Add the missing synchronization to fix this
> problem.
>
> [1]
> BUG: KASAN: use-after-free in __ftrace_ops_list_func kernel/trace/ftrace.c:7020 [inline]
> BUG: KASAN: use-after-free in ftrace_ops_list_func+0x2b0/0x31c kernel/trace/ftrace.c:7049
> Read of size 8 at addr ffff56551965bbc8 by task syz-executor.2/14468
>
> CPU: 1 PID: 14468 Comm: syz-executor.2 Not tainted 5.10.0 #7
> Hardware name: linux,dummy-virt (DT)
> Call trace:
>  dump_backtrace+0x0/0x40c arch/arm64/kernel/stacktrace.c:132
>  show_stack+0x30/0x40 arch/arm64/kernel/stacktrace.c:196
>  __dump_stack lib/dump_stack.c:77 [inline]
>  dump_stack+0x1b4/0x248 lib/dump_stack.c:118
>  print_address_description.constprop.0+0x28/0x48c mm/kasan/report.c:387
>  __kasan_report mm/kasan/report.c:547 [inline]
>  kasan_report+0x118/0x210 mm/kasan/report.c:564
>  check_memory_region_inline mm/kasan/generic.c:187 [inline]
>  __asan_load8+0x98/0xc0 mm/kasan/generic.c:253
>  __ftrace_ops_list_func kernel/trace/ftrace.c:7020 [inline]
>  ftrace_ops_list_func+0x2b0/0x31c kernel/trace/ftrace.c:7049
>  ftrace_graph_call+0x0/0x4
>  __might_sleep+0x8/0x100 include/linux/perf_event.h:1170
>  __might_fault mm/memory.c:5183 [inline]
>  __might_fault+0x58/0x70 mm/memory.c:5171
>  do_strncpy_from_user lib/strncpy_from_user.c:41 [inline]
>  strncpy_from_user+0x1f4/0x4b0 lib/strncpy_from_user.c:139
>  getname_flags+0xb0/0x31c fs/namei.c:149
>  getname+0x2c/0x40 fs/namei.c:209
>  [...]
>
> Allocated by task 14445:
>  kasan_save_stack+0x24/0x50 mm/kasan/common.c:48
>  kasan_set_track mm/kasan/common.c:56 [inline]
>  __kasan_kmalloc mm/kasan/common.c:479 [inline]
>  __kasan_kmalloc.constprop.0+0x110/0x13c mm/kasan/common.c:449
>  kasan_kmalloc+0xc/0x14 mm/kasan/common.c:493
>  kmem_cache_alloc_trace+0x440/0x924 mm/slub.c:2950
>  kmalloc include/linux/slab.h:563 [inline]
>  kzalloc include/linux/slab.h:675 [inline]
>  perf_event_alloc.part.0+0xb4/0x1350 kernel/events/core.c:11230
>  perf_event_alloc kernel/events/core.c:11733 [inline]
>  __do_sys_perf_event_open kernel/events/core.c:11831 [inline]
>  __se_sys_perf_event_open+0x550/0x15f4 kernel/events/core.c:11723
>  __arm64_sys_perf_event_open+0x6c/0x80 kernel/events/core.c:11723
>  [...]
>
> Freed by task 14445:
>  kasan_save_stack+0x24/0x50 mm/kasan/common.c:48
>  kasan_set_track+0x24/0x34 mm/kasan/common.c:56
>  kasan_set_free_info+0x20/0x40 mm/kasan/generic.c:358
>  __kasan_slab_free.part.0+0x11c/0x1b0 mm/kasan/common.c:437
>  __kasan_slab_free mm/kasan/common.c:445 [inline]
>  kasan_slab_free+0x2c/0x40 mm/kasan/common.c:446
>  slab_free_hook mm/slub.c:1569 [inline]
>  slab_free_freelist_hook mm/slub.c:1608 [inline]
>  slab_free mm/slub.c:3179 [inline]
>  kfree+0x12c/0xc10 mm/slub.c:4176
>  perf_event_alloc.part.0+0xa0c/0x1350 kernel/events/core.c:11434
>  perf_event_alloc kernel/events/core.c:11733 [inline]
>  __do_sys_perf_event_open kernel/events/core.c:11831 [inline]
>  __se_sys_perf_event_open+0x550/0x15f4 kernel/events/core.c:11723
>  [...]
>

Good catch! This should go stable tree too.

Cc: stable@vger.kernel.org
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

But I'm not sure what commit this is fixed. Maybe commit a4c35ed24112
("ftrace: Fix synchronization location disabling and freeing ftrace_ops").
Steve, can you add Fixed: ?

Also, I found a typo below.

> Signed-off-by: Li Huafei <lihuafei1@huawei.com>
> ---
>  kernel/trace/ftrace.c | 14 +++++++++-----
>  1 file changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index fbf2543111c0..4219cc2a04a6 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -3030,13 +3030,16 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
>
>          if (!command || !ftrace_enabled) {
>                  /*
> -                 * If these are dynamic or per_cpu ops, they still
> -                 * need their data freed. Since, function tracing is
> -                 * not currently active, we can just free them
> -                 * without synchronizing all CPUs.
> +                 * If these are dynamic, they still need their data freed. If
> +                 * function tracing is currently active, we neet to synchronize
                                                               ^need
Thank you!

> +                 * all CPUs before we can release them.
>                  */
> -                if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
> +                if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
> +                        if (ftrace_enabled)
> +                                goto sync_rcu;
> +
>                          goto free_ops;
> +                }
>
>                  return 0;
>          }
> @@ -3083,6 +3086,7 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
>           * ops.
>           */
>          if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
> +        sync_rcu:
>                  /*
>                   * We need to do a hard force of sched synchronization.
>                   * This is because we use preempt_disable() to do RCU, but
> --
> 2.17.1
>
On 2022/11/1 22:07, Masami Hiramatsu (Google) wrote:
> On Tue, 1 Nov 2022 14:41:46 +0800
> Li Huafei <lihuafei1@huawei.com> wrote:
>
>> KASAN reported a use-after-free with ftrace ops [1]. It was found from
>> vmcore that perf had registered two ops with the same content
>> successively, both dynamic. After unregistering the second ops, a
>> use-after-free occurred.
>>
>> In ftrace_shutdown(), when the second ops is unregistered, the
>> FTRACE_UPDATE_CALLS command is not set because there is another enabled
>> ops with the same content. Also, both ops are dynamic and the ftrace
>> callback function is ftrace_ops_list_func, so the
>> FTRACE_UPDATE_TRACE_FUNC command will not be set. Eventually the value
>> of 'command' will be 0 and ftrace_shutdown() will skip the rcu
>> synchronization.
>>
>> However, ftrace may be activated. When the ops is released, another CPU
>> may be accessing the ops. Add the missing synchronization to fix this
>> problem.
>>
>> [1]
>> BUG: KASAN: use-after-free in __ftrace_ops_list_func kernel/trace/ftrace.c:7020 [inline]
>> BUG: KASAN: use-after-free in ftrace_ops_list_func+0x2b0/0x31c kernel/trace/ftrace.c:7049
>> Read of size 8 at addr ffff56551965bbc8 by task syz-executor.2/14468
>>
>> CPU: 1 PID: 14468 Comm: syz-executor.2 Not tainted 5.10.0 #7
>> Hardware name: linux,dummy-virt (DT)
>> Call trace:
>>  dump_backtrace+0x0/0x40c arch/arm64/kernel/stacktrace.c:132
>>  show_stack+0x30/0x40 arch/arm64/kernel/stacktrace.c:196
>>  __dump_stack lib/dump_stack.c:77 [inline]
>>  dump_stack+0x1b4/0x248 lib/dump_stack.c:118
>>  print_address_description.constprop.0+0x28/0x48c mm/kasan/report.c:387
>>  __kasan_report mm/kasan/report.c:547 [inline]
>>  kasan_report+0x118/0x210 mm/kasan/report.c:564
>>  check_memory_region_inline mm/kasan/generic.c:187 [inline]
>>  __asan_load8+0x98/0xc0 mm/kasan/generic.c:253
>>  __ftrace_ops_list_func kernel/trace/ftrace.c:7020 [inline]
>>  ftrace_ops_list_func+0x2b0/0x31c kernel/trace/ftrace.c:7049
>>  ftrace_graph_call+0x0/0x4
>>  __might_sleep+0x8/0x100 include/linux/perf_event.h:1170
>>  __might_fault mm/memory.c:5183 [inline]
>>  __might_fault+0x58/0x70 mm/memory.c:5171
>>  do_strncpy_from_user lib/strncpy_from_user.c:41 [inline]
>>  strncpy_from_user+0x1f4/0x4b0 lib/strncpy_from_user.c:139
>>  getname_flags+0xb0/0x31c fs/namei.c:149
>>  getname+0x2c/0x40 fs/namei.c:209
>>  [...]
>>
>> Allocated by task 14445:
>>  kasan_save_stack+0x24/0x50 mm/kasan/common.c:48
>>  kasan_set_track mm/kasan/common.c:56 [inline]
>>  __kasan_kmalloc mm/kasan/common.c:479 [inline]
>>  __kasan_kmalloc.constprop.0+0x110/0x13c mm/kasan/common.c:449
>>  kasan_kmalloc+0xc/0x14 mm/kasan/common.c:493
>>  kmem_cache_alloc_trace+0x440/0x924 mm/slub.c:2950
>>  kmalloc include/linux/slab.h:563 [inline]
>>  kzalloc include/linux/slab.h:675 [inline]
>>  perf_event_alloc.part.0+0xb4/0x1350 kernel/events/core.c:11230
>>  perf_event_alloc kernel/events/core.c:11733 [inline]
>>  __do_sys_perf_event_open kernel/events/core.c:11831 [inline]
>>  __se_sys_perf_event_open+0x550/0x15f4 kernel/events/core.c:11723
>>  __arm64_sys_perf_event_open+0x6c/0x80 kernel/events/core.c:11723
>>  [...]
>>
>> Freed by task 14445:
>>  kasan_save_stack+0x24/0x50 mm/kasan/common.c:48
>>  kasan_set_track+0x24/0x34 mm/kasan/common.c:56
>>  kasan_set_free_info+0x20/0x40 mm/kasan/generic.c:358
>>  __kasan_slab_free.part.0+0x11c/0x1b0 mm/kasan/common.c:437
>>  __kasan_slab_free mm/kasan/common.c:445 [inline]
>>  kasan_slab_free+0x2c/0x40 mm/kasan/common.c:446
>>  slab_free_hook mm/slub.c:1569 [inline]
>>  slab_free_freelist_hook mm/slub.c:1608 [inline]
>>  slab_free mm/slub.c:3179 [inline]
>>  kfree+0x12c/0xc10 mm/slub.c:4176
>>  perf_event_alloc.part.0+0xa0c/0x1350 kernel/events/core.c:11434
>>  perf_event_alloc kernel/events/core.c:11733 [inline]
>>  __do_sys_perf_event_open kernel/events/core.c:11831 [inline]
>>  __se_sys_perf_event_open+0x550/0x15f4 kernel/events/core.c:11723
>>  [...]
>>
>
> Good catch! This should go stable tree too.
>
> Cc: stable@vger.kernel.org
> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thanks!

>
> But I'm not sure what commit this is fixed. Maybe commit a4c35ed24112
> ("ftrace: Fix synchronization location disabling and freeing ftrace_ops").

I also can't find out the exact commit that was fixed. Hopefully Steve
can give one.

> Steve, can you add Fixed: ?
>
> Also, I found a typo below.
>
>> Signed-off-by: Li Huafei <lihuafei1@huawei.com>
>> ---
>>  kernel/trace/ftrace.c | 14 +++++++++-----
>>  1 file changed, 9 insertions(+), 5 deletions(-)
>>
>> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
>> index fbf2543111c0..4219cc2a04a6 100644
>> --- a/kernel/trace/ftrace.c
>> +++ b/kernel/trace/ftrace.c
>> @@ -3030,13 +3030,16 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
>>
>>          if (!command || !ftrace_enabled) {
>>                  /*
>> -                 * If these are dynamic or per_cpu ops, they still
>> -                 * need their data freed. Since, function tracing is
>> -                 * not currently active, we can just free them
>> -                 * without synchronizing all CPUs.
>> +                 * If these are dynamic, they still need their data freed. If
>> +                 * function tracing is currently active, we neet to synchronize
> ^need
>
Yes. Do I need to send a v2 patch?

> Thank you!
>
>> +                 * all CPUs before we can release them.
>>                  */
>> -                if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
>> +                if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
>> +                        if (ftrace_enabled)
>> +                                goto sync_rcu;
>> +
>>                          goto free_ops;
>> +                }
>>
>>                  return 0;
>>          }
>> @@ -3083,6 +3086,7 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
>>           * ops.
>>           */
>>          if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
>> +        sync_rcu:
>>                  /*
>>                   * We need to do a hard force of sched synchronization.
>>                   * This is because we use preempt_disable() to do RCU, but
>> --
>> 2.17.1
>>
>
>
On Tue, 1 Nov 2022 23:07:16 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> But I'm not sure what commit this is fixed. Maybe commit a4c35ed24112
> ("ftrace: Fix synchronization location disabling and freeing ftrace_ops").
> Steve, can you add Fixed: ?

I'll take a deeper look at this when lore and vger start talking
together and it shows up on patchwork :-/

-- Steve
On Tue, 1 Nov 2022 14:41:46 +0800
Li Huafei <lihuafei1@huawei.com> wrote:

> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index fbf2543111c0..4219cc2a04a6 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -3030,13 +3030,16 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
>
>          if (!command || !ftrace_enabled) {

ftrace_enabled is seldom not set. I don't think we even need to check
it. It's just the value of /proc/sys/kernel/ftrace_enabled, where most
people don't even know that file exists. I do want to get rid of it one
day too. So let's not optimize for it.

>                  /*
> -                 * If these are dynamic or per_cpu ops, they still
> -                 * need their data freed. Since, function tracing is
> -                 * not currently active, we can just free them
> -                 * without synchronizing all CPUs.
> +                 * If these are dynamic, they still need their data freed. If
> +                 * function tracing is currently active, we neet to synchronize
> +                 * all CPUs before we can release them.
>                  */
> -                if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
> +                if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
> +                        if (ftrace_enabled)
> +                                goto sync_rcu;
> +
>                          goto free_ops;

Change the above just to "goto out;"

> +                }
>
>                  return 0;
>          }
> @@ -3083,6 +3086,7 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
>           * ops.
>           */

Add here:

 out:

>          if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
> +        sync_rcu:
>                  /*
>                   * We need to do a hard force of sched synchronization.
>                   * This is because we use preempt_disable() to do RCU, but

And get rid of the labels in the if block.

Thanks!

-- Steve
On Tue, 1 Nov 2022 23:07:16 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> But I'm not sure what commit this is fixed. Maybe commit a4c35ed24112
> ("ftrace: Fix synchronization location disabling and freeing ftrace_ops").
> Steve, can you add Fixed: ?

I would associate it to:

Fixes: edb096e00724f ("ftrace: Fix memleak when unregistering dynamic ops when tracing disabled")

-- Steve
On 2022/11/3 6:53, Steven Rostedt wrote:
> On Tue, 1 Nov 2022 14:41:46 +0800
> Li Huafei <lihuafei1@huawei.com> wrote:
>
>
>> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
>> index fbf2543111c0..4219cc2a04a6 100644
>> --- a/kernel/trace/ftrace.c
>> +++ b/kernel/trace/ftrace.c
>> @@ -3030,13 +3030,16 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
>>
>>          if (!command || !ftrace_enabled) {
>
> ftrace_enabled is seldom not set. I don't think we even need to check
> it. It's just the value of /proc/sys/kernel/ftrace_enabled, where most
> people don't even know that file exists. I do want to get rid of it one
> day too. So let's not optimize for it.
>

Agree. The code will look much simpler this way.

>>                  /*
>> -                 * If these are dynamic or per_cpu ops, they still
>> -                 * need their data freed. Since, function tracing is
>> -                 * not currently active, we can just free them
>> -                 * without synchronizing all CPUs.
>> +                 * If these are dynamic, they still need their data freed. If
>> +                 * function tracing is currently active, we neet to synchronize
>> +                 * all CPUs before we can release them.
>>                  */
>> -                if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
>> +                if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
>> +                        if (ftrace_enabled)
>> +                                goto sync_rcu;
>> +
>>                          goto free_ops;
>
> Change the above just to "goto out;"
>
>> +                }
>>
>>                  return 0;
>>          }
>> @@ -3083,6 +3086,7 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
>>           * ops.
>>           */
>
> Add here:
>
>  out:
>
>>          if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
>> +        sync_rcu:
>>                  /*
>>                   * We need to do a hard force of sched synchronization.
>>                   * This is because we use preempt_disable() to do RCU, but
>
> And get rid of the labels in the if block.
>

Will make the above changes in v2, thanks!

> Thanks!
>
> -- Steve
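For anyone following along, here is a rough sketch of the shape ftrace_shutdown() would take with the restructuring suggested above. It is only an illustration assembled from Steve's comments in this thread, not the actual v2 patch; the elided sections and the placeholder comments inside the dynamic-ops block stand in for existing code that is left untouched:

        if (!command || !ftrace_enabled)
                goto out;

        /* ... existing code that updates the call sites ... */

 out:
        /* wait for all users of a dynamic ops before its data is freed */
        if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
                /* existing hard-force sched/RCU synchronization, unchanged */

                /* existing freeing of the ops data (the old free_ops target) */
        }

        return 0;

The net effect is that a dynamic ops always reaches the synchronization block before it is freed, even when 'command' ends up being 0, and the sync_rcu/free_ops labels from the posted patch are no longer needed.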
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index fbf2543111c0..4219cc2a04a6 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -3030,13 +3030,16 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
 
         if (!command || !ftrace_enabled) {
                 /*
-                 * If these are dynamic or per_cpu ops, they still
-                 * need their data freed. Since, function tracing is
-                 * not currently active, we can just free them
-                 * without synchronizing all CPUs.
+                 * If these are dynamic, they still need their data freed. If
+                 * function tracing is currently active, we neet to synchronize
+                 * all CPUs before we can release them.
                 */
-                if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
+                if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
+                        if (ftrace_enabled)
+                                goto sync_rcu;
+
                         goto free_ops;
+                }
 
                 return 0;
         }
@@ -3083,6 +3086,7 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
          * ops.
          */
         if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
+        sync_rcu:
                 /*
                  * We need to do a hard force of sched synchronization.
                  * This is because we use preempt_disable() to do RCU, but
KASAN reported a use-after-free with ftrace ops [1]. It was found from
vmcore that perf had registered two ops with the same content
successively, both dynamic. After unregistering the second ops, a
use-after-free occurred.

In ftrace_shutdown(), when the second ops is unregistered, the
FTRACE_UPDATE_CALLS command is not set because there is another enabled
ops with the same content. Also, both ops are dynamic and the ftrace
callback function is ftrace_ops_list_func, so the
FTRACE_UPDATE_TRACE_FUNC command will not be set. Eventually the value
of 'command' will be 0 and ftrace_shutdown() will skip the rcu
synchronization.

However, ftrace may be activated. When the ops is released, another CPU
may be accessing the ops. Add the missing synchronization to fix this
problem.

[1]
BUG: KASAN: use-after-free in __ftrace_ops_list_func kernel/trace/ftrace.c:7020 [inline]
BUG: KASAN: use-after-free in ftrace_ops_list_func+0x2b0/0x31c kernel/trace/ftrace.c:7049
Read of size 8 at addr ffff56551965bbc8 by task syz-executor.2/14468

CPU: 1 PID: 14468 Comm: syz-executor.2 Not tainted 5.10.0 #7
Hardware name: linux,dummy-virt (DT)
Call trace:
 dump_backtrace+0x0/0x40c arch/arm64/kernel/stacktrace.c:132
 show_stack+0x30/0x40 arch/arm64/kernel/stacktrace.c:196
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x1b4/0x248 lib/dump_stack.c:118
 print_address_description.constprop.0+0x28/0x48c mm/kasan/report.c:387
 __kasan_report mm/kasan/report.c:547 [inline]
 kasan_report+0x118/0x210 mm/kasan/report.c:564
 check_memory_region_inline mm/kasan/generic.c:187 [inline]
 __asan_load8+0x98/0xc0 mm/kasan/generic.c:253
 __ftrace_ops_list_func kernel/trace/ftrace.c:7020 [inline]
 ftrace_ops_list_func+0x2b0/0x31c kernel/trace/ftrace.c:7049
 ftrace_graph_call+0x0/0x4
 __might_sleep+0x8/0x100 include/linux/perf_event.h:1170
 __might_fault mm/memory.c:5183 [inline]
 __might_fault+0x58/0x70 mm/memory.c:5171
 do_strncpy_from_user lib/strncpy_from_user.c:41 [inline]
 strncpy_from_user+0x1f4/0x4b0 lib/strncpy_from_user.c:139
 getname_flags+0xb0/0x31c fs/namei.c:149
 getname+0x2c/0x40 fs/namei.c:209
 [...]

Allocated by task 14445:
 kasan_save_stack+0x24/0x50 mm/kasan/common.c:48
 kasan_set_track mm/kasan/common.c:56 [inline]
 __kasan_kmalloc mm/kasan/common.c:479 [inline]
 __kasan_kmalloc.constprop.0+0x110/0x13c mm/kasan/common.c:449
 kasan_kmalloc+0xc/0x14 mm/kasan/common.c:493
 kmem_cache_alloc_trace+0x440/0x924 mm/slub.c:2950
 kmalloc include/linux/slab.h:563 [inline]
 kzalloc include/linux/slab.h:675 [inline]
 perf_event_alloc.part.0+0xb4/0x1350 kernel/events/core.c:11230
 perf_event_alloc kernel/events/core.c:11733 [inline]
 __do_sys_perf_event_open kernel/events/core.c:11831 [inline]
 __se_sys_perf_event_open+0x550/0x15f4 kernel/events/core.c:11723
 __arm64_sys_perf_event_open+0x6c/0x80 kernel/events/core.c:11723
 [...]

Freed by task 14445:
 kasan_save_stack+0x24/0x50 mm/kasan/common.c:48
 kasan_set_track+0x24/0x34 mm/kasan/common.c:56
 kasan_set_free_info+0x20/0x40 mm/kasan/generic.c:358
 __kasan_slab_free.part.0+0x11c/0x1b0 mm/kasan/common.c:437
 __kasan_slab_free mm/kasan/common.c:445 [inline]
 kasan_slab_free+0x2c/0x40 mm/kasan/common.c:446
 slab_free_hook mm/slub.c:1569 [inline]
 slab_free_freelist_hook mm/slub.c:1608 [inline]
 slab_free mm/slub.c:3179 [inline]
 kfree+0x12c/0xc10 mm/slub.c:4176
 perf_event_alloc.part.0+0xa0c/0x1350 kernel/events/core.c:11434
 perf_event_alloc kernel/events/core.c:11733 [inline]
 __do_sys_perf_event_open kernel/events/core.c:11831 [inline]
 __se_sys_perf_event_open+0x550/0x15f4 kernel/events/core.c:11723
 [...]
Signed-off-by: Li Huafei <lihuafei1@huawei.com>
---
 kernel/trace/ftrace.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)
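The race described in the commit message can be pictured as a simplified timeline. The CPU split and the perf call sites below are illustrative assumptions inferred from the description and the KASAN allocation/free stacks, not additional details from the report:

  CPU A: perf unregisters the second ops         CPU B: traced code path
  --------------------------------------         -----------------------
  ftrace_shutdown(ops, command)
    another enabled ops has the same content
      -> FTRACE_UPDATE_CALLS not set
    callback is ftrace_ops_list_func
      -> FTRACE_UPDATE_TRACE_FUNC not set
    command == 0, ops is FTRACE_OPS_FL_DYNAMIC    ftrace_ops_list_func()
      -> RCU synchronization skipped                is still walking the ops list
      -> ops data freed without waiting             and holds a pointer to this ops
  perf frees the perf_event embedding the ops     dereferences the freed ops => UAF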