Message ID: 20241026154629.593041-3-mathieu.desnoyers@efficios.com (mailing list archive)
State: RFC
Series: [RFC,v3,1/3] tracing: Introduce tracepoint extended structure
On Sat, Oct 26, 2024 at 8:48 AM Mathieu Desnoyers
<mathieu.desnoyers@efficios.com> wrote:
>
> The grace period used internally within tracepoint.c:release_probes()
> uses call_rcu() to batch waiting for quiescence of old probe arrays,
> rather than using tracepoint_synchronize_unregister(), which blocks
> while waiting for quiescence.
>
> With the introduction of faultable syscall tracepoints, this causes
> use-after-free issues reproduced with syzkaller.
>
> Fix this by using the appropriate call_rcu() or call_rcu_tasks_trace()
> before invoking the rcu_free_old_probes callback. The flavor is chosen
> using the tracepoint_is_syscall() API.
>
> A similar issue exists in bpf's use of call_rcu(). Fixing it is left
> to a separate change.
>
> [...]
>
> -static inline void release_probes(struct tracepoint_func *old)
> +static inline void release_probes(struct tracepoint *tp, struct tracepoint_func *old)
>  {
>         if (old) {
>                 struct tp_probes *tp_probes = container_of(old,
>                                 struct tp_probes, probes[0]);
>
> -               call_rcu(&tp_probes->rcu, rcu_free_old_probes);
> +               if (tracepoint_is_syscall(tp))
> +                       call_rcu_tasks_trace(&tp_probes->rcu, rcu_free_old_probes);

Should this be a call_rcu_tasks_trace() -> call_rcu() chain instead of
just call_rcu_tasks_trace()? While call_rcu_tasks_trace() currently
implies an RCU GP (as evidenced by rcu_trace_implies_rcu_gp() being
hardcoded to return true right now), this might not always be the case
in the future, so it's best to have a guarantee that, sleepable or not,
we'll always have an RCU GP, and for sleepable tracepoints *also* an
RCU Tasks Trace GP.

> +               else
> +                       call_rcu(&tp_probes->rcu, rcu_free_old_probes);
>
> [...]
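A minimal sketch of the chained variant Andrii describes, assuming a
hypothetical rcu_tasks_trace_free_old_probes() helper (this is not the
posted patch, which uses a single grace-period flavor per tracepoint
kind):

	/* Sketch: wait for a Tasks Trace GP, then chain a classic
	 * RCU GP, and only then free the old probe array.
	 */
	static void rcu_free_old_probes(struct rcu_head *head)
	{
		kfree(container_of(head, struct tp_probes, rcu));
	}

	static void rcu_tasks_trace_free_old_probes(struct rcu_head *head)
	{
		/* Tasks Trace GP elapsed; now also wait for a classic RCU GP. */
		call_rcu(head, rcu_free_old_probes);
	}

	static inline void release_probes(struct tracepoint *tp,
					  struct tracepoint_func *old)
	{
		if (old) {
			struct tp_probes *tp_probes = container_of(old,
					struct tp_probes, probes[0]);

			if (tracepoint_is_syscall(tp))
				call_rcu_tasks_trace(&tp_probes->rcu,
						     rcu_tasks_trace_free_old_probes);
			else
				call_rcu(&tp_probes->rcu, rcu_free_old_probes);
		}
	}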
On 2024-10-27 21:22, Andrii Nakryiko wrote:
> On Sat, Oct 26, 2024 at 8:48 AM Mathieu Desnoyers
> <mathieu.desnoyers@efficios.com> wrote:
>>
>> [...]
>>
>> -               call_rcu(&tp_probes->rcu, rcu_free_old_probes);
>> +               if (tracepoint_is_syscall(tp))
>> +                       call_rcu_tasks_trace(&tp_probes->rcu, rcu_free_old_probes);
>
> Should this be a call_rcu_tasks_trace() -> call_rcu() chain instead of
> just call_rcu_tasks_trace()? While call_rcu_tasks_trace() currently
> implies an RCU GP (as evidenced by rcu_trace_implies_rcu_gp() being
> hardcoded to return true right now), this might not always be the case
> in the future, so it's best to have a guarantee that, sleepable or not,
> we'll always have an RCU GP, and for sleepable tracepoints *also* an
> RCU Tasks Trace GP.

Given that faultable tracepoints only use RCU Tasks Trace for the
read side and do not rely on preempt disable, I don't see why we would
need to chain both grace periods there?

Thanks,

Mathieu
On 2024-10-28 15:19, Mathieu Desnoyers wrote:
> On 2024-10-27 21:22, Andrii Nakryiko wrote:
>> [...]
>>
>> Should this be a call_rcu_tasks_trace() -> call_rcu() chain instead
>> of just call_rcu_tasks_trace()? [...]
>
> Given that faultable tracepoints only use RCU Tasks Trace for the
> read side and do not rely on preempt disable, I don't see why we would
> need to chain both grace periods there?

Hi Andrii,

AFAIU, your question above is rooted in the way bpf handles grace
periods for its sleepable programs (chaining an RCU Tasks Trace GP and
a classic RCU GP), e.g.:

  bpf_map_free_mult_rcu_gp
  bpf_link_defer_dealloc_mult_rcu_gp

and bpf_link_free:

  /* schedule BPF link deallocation; if underlying BPF program
   * is sleepable, we need to first wait for RCU tasks trace
   * sync, then go through "classic" RCU grace period
   */

This was introduced in commit 1a80dbcb2db ("bpf: support deferring
bpf_link dealloc to after RCU grace period"), which has a bit more
information in the commit message, but what I'm not seeing is an
explanation of *why* chaining RCU Tasks Trace and RCU grace periods is
needed for sleepable bpf programs. What am I missing?

As far as tracepoint.c release_probes() is concerned, just waiting for
an RCU Tasks Trace grace period before freeing the memory of faultable
tracepoint probe arrays is sufficient.

Thanks,

Mathieu
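For context, the BPF chaining Mathieu refers to looks roughly like the
following (paraphrased from kernel/bpf/syscall.c around the commit
cited above; bpf_map_put_sketch() is a hypothetical wrapper and details
vary by kernel version):

	static void bpf_map_free_rcu_gp(struct rcu_head *rcu)
	{
		/* actual freeing of the map happens here */
	}

	static void bpf_map_free_mult_rcu_gp(struct rcu_head *rcu)
	{
		if (rcu_trace_implies_rcu_gp())
			/* The Tasks Trace GP already implied a classic RCU GP. */
			bpf_map_free_rcu_gp(rcu);
		else
			/* Otherwise chain a classic RCU GP before freeing. */
			call_rcu(rcu, bpf_map_free_rcu_gp);
	}

	static void bpf_map_put_sketch(struct bpf_map *map)
	{
		/* Sleepable-capable maps: start with a Tasks Trace GP. */
		call_rcu_tasks_trace(&map->rcu, bpf_map_free_mult_rcu_gp);
	}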
On Thu, Oct 31, 2024 at 8:44 AM Mathieu Desnoyers
<mathieu.desnoyers@efficios.com> wrote:
>
> [...]
>
> This was introduced in commit 1a80dbcb2db ("bpf: support deferring
> bpf_link dealloc to after RCU grace period"), which has a bit more
> information in the commit message, but what I'm not seeing is an
> explanation of *why* chaining RCU Tasks Trace and RCU grace periods
> is needed for sleepable bpf programs. What am I missing?

At least one of the reasons is BPF maps that can be used from both
sleepable and non-sleepable BPF programs *at the same time*. So in BPF
everything is *at least* protected with rcu_read_lock(), and
sleepable-capable things are *additionally* protected by
rcu_read_lock_trace(). So on destruction, we chain both RCU GP kinds to
make sure that no user can still see the BPF map/prog/(and soon) links.

It might not be strictly necessary in general, but you are right that I
asked because of how we do this in BPF. Also, in practice, a Tasks
Trace RCU GP implies a classic RCU GP, so there is no overhead in how
we do this for BPF maps (and progs? didn't check).

Anyways, this might be fine as is.

> As far as tracepoint.c release_probes() is concerned, just waiting for
> an RCU Tasks Trace grace period before freeing the memory of faultable
> tracepoint probe arrays is sufficient.
>
> Thanks,
>
> Mathieu
>
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> https://www.efficios.com
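To make the read-side pairing Andrii describes concrete, a sketch of
the two critical-section flavors that can guard the same map element
(lookup details elided):

	/* Non-sleepable BPF program context: classic RCU read side. */
	rcu_read_lock();
	/* ... look up and use a map element; must not sleep ... */
	rcu_read_unlock();

	/* Sleepable BPF program context: RCU Tasks Trace read side. */
	rcu_read_lock_trace();
	/* ... look up and use a map element; may fault and sleep ... */
	rcu_read_unlock_trace();

Because the same object may be live under either kind of critical
section, freeing it must wait for both grace-period flavors (unless, as
today, the Tasks Trace GP implies a classic RCU GP).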
diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index 5658dc92f5b5..47569fb06596 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -106,13 +106,16 @@ static void rcu_free_old_probes(struct rcu_head *head)
 	kfree(container_of(head, struct tp_probes, rcu));
 }
 
-static inline void release_probes(struct tracepoint_func *old)
+static inline void release_probes(struct tracepoint *tp, struct tracepoint_func *old)
 {
 	if (old) {
 		struct tp_probes *tp_probes = container_of(old,
 				struct tp_probes, probes[0]);
 
-		call_rcu(&tp_probes->rcu, rcu_free_old_probes);
+		if (tracepoint_is_syscall(tp))
+			call_rcu_tasks_trace(&tp_probes->rcu, rcu_free_old_probes);
+		else
+			call_rcu(&tp_probes->rcu, rcu_free_old_probes);
 	}
 }
 
@@ -334,7 +337,7 @@ static int tracepoint_add_func(struct tracepoint *tp,
 		break;
 	}
 
-	release_probes(old);
+	release_probes(tp, old);
 	return 0;
 }
 
@@ -405,7 +408,7 @@ static int tracepoint_remove_func(struct tracepoint *tp,
 		WARN_ON_ONCE(1);
 		break;
 	}
-	release_probes(old);
+	release_probes(tp, old);
 	return 0;
 }
The grace period used internally within tracepoint.c:release_probes()
uses call_rcu() to batch waiting for quiescence of old probe arrays,
rather than using tracepoint_synchronize_unregister(), which blocks
while waiting for quiescence.

With the introduction of faultable syscall tracepoints, this causes
use-after-free issues reproduced with syzkaller.

Fix this by using the appropriate call_rcu() or call_rcu_tasks_trace()
before invoking the rcu_free_old_probes callback. The flavor is chosen
using the tracepoint_is_syscall() API.

A similar issue exists in bpf's use of call_rcu(). Fixing it is left to
a separate change.

Reported-by: syzbot+b390c8062d8387b6272a@syzkaller.appspotmail.com
Fixes: a363d27cdbc2 ("tracing: Allow system call tracepoints to handle page faults")
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Michael Jeanson <mjeanson@efficios.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Yonghong Song <yhs@fb.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: bpf@vger.kernel.org
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Jordan Rife <jrife@google.com>
---
Changes since v0:
- Introduce tracepoint_call_rcu(),
- Fix bpf_link_free() use of call_rcu as well.

Changes since v1:
- Use tracepoint_call_rcu() for bpf_prog_put as well.

Changes since v2:
- Do not cover bpf changes in the same commit, let bpf developers
  implement it.
---
 kernel/tracepoint.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)