
[1/8] perf/kprobe: Add support to create multiple probes

Message ID 20211124084119.260239-2-jolsa@kernel.org
State RFC
Delegated to: BPF
Series perf/bpf: Add batch support for [ku]probes attach

Checks

Context Check Description
bpf/vmtest-bpf-next fail VM_Test
bpf/vmtest-bpf-next-PR fail PR summary
bpf/vmtest-bpf fail VM_Test
bpf/vmtest-bpf-PR fail PR summary
netdev/tree_selection success Clearly marked for bpf
netdev/fixes_present fail Series targets non-next tree, but doesn't contain any Fixes tags
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 2338 this patch: 2338
netdev/cc_maintainers warning 5 maintainers not CCed: linux-perf-users@vger.kernel.org peterz@infradead.org namhyung@kernel.org mingo@redhat.com kpsingh@kernel.org
netdev/build_clang success Errors and warnings before: 390 this patch: 390
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 2365 this patch: 2365
netdev/checkpatch warning WARNING: From:/Signed-off-by: email address mismatch: 'From: Jiri Olsa <jolsa@redhat.com>' != 'Signed-off-by: Jiri Olsa <jolsa@kernel.org>'
netdev/kdoc success Errors and warnings before: 3 this patch: 3
netdev/source_inline success Was 0 now: 0

Commit Message

Jiri Olsa Nov. 24, 2021, 8:41 a.m. UTC
Adding support to create multiple probes within a single perf event.
This way we can associate a single bpf program with multiple kprobes,
because the bpf program gets associated with the perf event.

The perf_event_attr is not extended; the current fields for kprobe
attachment are used for multi attachment.

For current kprobe attachment we use either:

   kprobe_func (in config1) + probe_offset (in config2)

to define kprobe by function name with offset, or:

   kprobe_addr (in config2)

to define kprobe with direct address value.

For multi probe attach the same fields point to arrays of values
with the same semantics. Each probe is defined as the set of values
with the same array index (idx):

   kprobe_func[idx]  + probe_offset[idx]

to define kprobe by function name with offset, or:

   kprobe_addr[idx]

to define kprobe with direct address value.

The number of probes is passed in the probe_cnt value, which shares
a union with the wakeup_events/wakeup_watermark values, which are
not used for kprobes.
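
As a rough illustration (a userspace sketch only, not part of the
patch; it assumes the array semantics above, the new probe_cnt field,
and the dynamic PMU type read from
/sys/bus/event_source/devices/kprobe/type):

	#include <linux/perf_event.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <stdint.h>
	#include <string.h>

	static int open_multi_kprobe(int kprobe_pmu_type)
	{
		const char *names[2] = { "ksys_read", "ksys_write" };
		uint64_t funcs[2], offs[2] = { 0, 0 };
		struct perf_event_attr attr;

		/* each kprobe_func[] slot holds a user pointer to a name */
		funcs[0] = (uint64_t)(uintptr_t)names[0];
		funcs[1] = (uint64_t)(uintptr_t)names[1];

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = kprobe_pmu_type;
		attr.kprobe_func = (uint64_t)(uintptr_t)funcs;	/* config1 */
		attr.probe_offset = (uint64_t)(uintptr_t)offs;	/* config2 */
		attr.probe_cnt = 2;	/* number of probes (new field) */

		return syscall(SYS_perf_event_open, &attr, -1 /* pid */,
			       0 /* cpu */, -1 /* group_fd */, 0 /* flags */);
	}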

Since [1] it's possible to stack multiple probe events under
one head event. The same code is used to allow that for probes
defined under the perf kprobe interface.

[1] https://lore.kernel.org/lkml/156095682948.28024.14190188071338900568.stgit@devnote2/
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/uapi/linux/perf_event.h |   1 +
 kernel/trace/trace_event_perf.c | 106 ++++++++++++++++++++++++++++----
 kernel/trace/trace_kprobe.c     |  47 ++++++++++++--
 kernel/trace/trace_probe.c      |   2 +-
 kernel/trace/trace_probe.h      |   3 +-
 5 files changed, 138 insertions(+), 21 deletions(-)

Comments

Masami Hiramatsu (Google) Nov. 28, 2021, 1:49 p.m. UTC | #1
On Wed, 24 Nov 2021 09:41:12 +0100
Jiri Olsa <jolsa@redhat.com> wrote:

> Adding support to create multiple probes within single perf event.
> This way we can associate single bpf program with multiple kprobes,
> because bpf program gets associated with the perf event.
> 
> The perf_event_attr is not extended, current fields for kprobe
> attachment are used for multi attachment.
> 
> For current kprobe attachment we use either:
> 
>    kprobe_func (in config1) + probe_offset (in config2)
> 
> to define kprobe by function name with offset, or:
> 
>    kprobe_addr (in config2)
> 
> to define kprobe with direct address value.
> 
> For multi probe attach the same fields point to array of values
> with the same semantic. Each probe is defined as set of values
> with the same array index (idx) as:
> 
>    kprobe_func[idx]  + probe_offset[idx]
> 
> to define kprobe by function name with offset, or:
> 
>    kprobe_addr[idx]
> 
> to define kprobe with direct address value.
> 
> The number of probes is passed in probe_cnt value, which shares
> the union with wakeup_events/wakeup_watermark values which are
> not used for kprobes.
> 
> Since [1] it's possible to stack multiple probes events under
> one head event. Using the same code to allow that for probes
> defined under perf kprobe interface.

OK, so you also want to add multi-probes on a single event via a
single perf-event syscall, not defining different events.

Those are a bit different; multi-probes on a single event can
invoke a single event handler from different probe points. For
example, the same user bpf handler will be invoked from different
addresses.

> 
> [1] https://lore.kernel.org/lkml/156095682948.28024.14190188071338900568.stgit@devnote2/
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/uapi/linux/perf_event.h |   1 +
>  kernel/trace/trace_event_perf.c | 106 ++++++++++++++++++++++++++++----
>  kernel/trace/trace_kprobe.c     |  47 ++++++++++++--
>  kernel/trace/trace_probe.c      |   2 +-
>  kernel/trace/trace_probe.h      |   3 +-
>  5 files changed, 138 insertions(+), 21 deletions(-)
> 
> diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
> index bd8860eeb291..eea80709d1ed 100644
> --- a/include/uapi/linux/perf_event.h
> +++ b/include/uapi/linux/perf_event.h
> @@ -414,6 +414,7 @@ struct perf_event_attr {
>  	union {
>  		__u32		wakeup_events;	  /* wakeup every n events */
>  		__u32		wakeup_watermark; /* bytes before wakeup   */
> +		__u32		probe_cnt;	  /* number of [k,u] probes */
>  	};
>  
>  	__u32			bp_type;
> diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
> index a114549720d6..26078e40c299 100644
> --- a/kernel/trace/trace_event_perf.c
> +++ b/kernel/trace/trace_event_perf.c
> @@ -245,23 +245,27 @@ void perf_trace_destroy(struct perf_event *p_event)
>  }
>  
>  #ifdef CONFIG_KPROBE_EVENTS
> -int perf_kprobe_init(struct perf_event *p_event, bool is_retprobe)
> +static struct trace_event_call*
> +kprobe_init(bool is_retprobe, u64 kprobe_func, u64 kprobe_addr,
> +	    u64 probe_offset, struct trace_event_call *old)
>  {
>  	int ret;
>  	char *func = NULL;
>  	struct trace_event_call *tp_event;
>  
> -	if (p_event->attr.kprobe_func) {
> +	if (kprobe_func) {
>  		func = kzalloc(KSYM_NAME_LEN, GFP_KERNEL);
>  		if (!func)
> -			return -ENOMEM;
> +			return ERR_PTR(-ENOMEM);
>  		ret = strncpy_from_user(
> -			func, u64_to_user_ptr(p_event->attr.kprobe_func),
> +			func, u64_to_user_ptr(kprobe_func),
>  			KSYM_NAME_LEN);
>  		if (ret == KSYM_NAME_LEN)
>  			ret = -E2BIG;
> -		if (ret < 0)
> -			goto out;
> +		if (ret < 0) {
> +			kfree(func);
> +			return ERR_PTR(ret);
> +		}
>  
>  		if (func[0] == '\0') {
>  			kfree(func);
> @@ -270,20 +274,96 @@ int perf_kprobe_init(struct perf_event *p_event, bool is_retprobe)
>  	}
>  
>  	tp_event = create_local_trace_kprobe(
> -		func, (void *)(unsigned long)(p_event->attr.kprobe_addr),
> -		p_event->attr.probe_offset, is_retprobe);
> -	if (IS_ERR(tp_event)) {
> -		ret = PTR_ERR(tp_event);
> -		goto out;
> +		func, (void *)(unsigned long)(kprobe_addr),
> +		probe_offset, is_retprobe, old);

Hmm, here I have a concern (maybe no real issue is caused at this moment.)
Ftrace's multi-probe event has the same event/group name among the
probes' internal event-calls; however, create_local_trace_kprobe()
actually uses the "func" name for the event name.
I think you should choose a random, different "representative" event name
for the event (not the probe), and share it among the probes on the event,
if the perf event has no event name.

(I'm not sure how the event names are used from inside the BPF programs,
but just for consistency.)

> +	kfree(func);
> +	return tp_event;
> +}
> +
> +static struct trace_event_call*
> +kprobe_init_multi(struct perf_event *p_event, bool is_retprobe)
> +{
> +	void __user *kprobe_func = u64_to_user_ptr(p_event->attr.kprobe_func);
> +	void __user *kprobe_addr = u64_to_user_ptr(p_event->attr.kprobe_addr);
> +	u64 *funcs = NULL, *addrs = NULL, *offs = NULL;
> +	struct trace_event_call *tp_event, *tp_old = NULL;
> +	u32 i, cnt = p_event->attr.probe_cnt;
> +	int ret = -EINVAL;
> +	size_t size;
> +
> +	if (!cnt)
> +		return ERR_PTR(-EINVAL);
> +
> +	size = cnt * sizeof(u64);
> +	if (kprobe_func) {
> +		ret = -ENOMEM;
> +		funcs = kmalloc(size, GFP_KERNEL);
> +		if (!funcs)
> +			goto out;
> +		ret = -EFAULT;
> +		if (copy_from_user(funcs, kprobe_func, size))
> +			goto out;
> +	}
> +
> +	if (kprobe_addr) {
> +		ret = -ENOMEM;
> +		addrs = kmalloc(size, GFP_KERNEL);
> +		if (!addrs)
> +			goto out;
> +		ret = -EFAULT;
> +		if (copy_from_user(addrs, kprobe_addr, size))
> +			goto out;
> +		/* addresses and offsets share the same array */
> +		offs = addrs;
>  	}
>  
> +	for (i = 0; i < cnt; i++) {
> +		tp_event = kprobe_init(is_retprobe, funcs ? funcs[i] : 0,
> +				       addrs ? addrs[i] : 0, offs ? offs[i] : 0,
> +				       tp_old);
> +		if (IS_ERR(tp_event)) {
> +			if (tp_old)
> +				destroy_local_trace_kprobe(tp_old);
> +			ret = PTR_ERR(tp_event);
> +			goto out;
> +		}
> +		if (!tp_old)
> +			tp_old = tp_event;
> +	}
> +	ret = 0;
> +out:
> +	kfree(funcs);
> +	kfree(addrs);
> +	return ret ? ERR_PTR(ret) : tp_old;
> +}
> +
> +static struct trace_event_call*
> +kprobe_init_single(struct perf_event *p_event, bool is_retprobe)
> +{
> +	struct perf_event_attr *attr = &p_event->attr;
> +
> +	return kprobe_init(is_retprobe, attr->kprobe_func, attr->kprobe_addr,
> +			   attr->probe_offset, NULL);
> +}
> +
> +int perf_kprobe_init(struct perf_event *p_event, bool is_retprobe)
> +{
> +	struct trace_event_call *tp_event;
> +	int ret;
> +
> +	if (p_event->attr.probe_cnt)

isn't this "p_event->attr.probe_cnt > 1" ?

> +		tp_event = kprobe_init_multi(p_event, is_retprobe);
> +	else
> +		tp_event = kprobe_init_single(p_event, is_retprobe);
> +
> +	if (IS_ERR(tp_event))
> +		return PTR_ERR(tp_event);
> +
>  	mutex_lock(&event_mutex);
>  	ret = perf_trace_event_init(tp_event, p_event);
>  	if (ret)
>  		destroy_local_trace_kprobe(tp_event);
>  	mutex_unlock(&event_mutex);
> -out:
> -	kfree(func);
>  	return ret;
>  }
>  
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index 33272a7b6912..86a7aada853a 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -237,13 +237,18 @@ static int kprobe_dispatcher(struct kprobe *kp, struct pt_regs *regs);
>  static int kretprobe_dispatcher(struct kretprobe_instance *ri,
>  				struct pt_regs *regs);
>  
> +static void __free_trace_kprobe(struct trace_kprobe *tk)
> +{
> +	kfree(tk->symbol);
> +	free_percpu(tk->nhit);
> +	kfree(tk);
> +}
> +
>  static void free_trace_kprobe(struct trace_kprobe *tk)
>  {
>  	if (tk) {
>  		trace_probe_cleanup(&tk->tp);
> -		kfree(tk->symbol);
> -		free_percpu(tk->nhit);
> -		kfree(tk);
> +		__free_trace_kprobe(tk);

Why is this needed?

>  	}
>  }
>  
> @@ -1796,7 +1801,7 @@ static int unregister_kprobe_event(struct trace_kprobe *tk)
>  /* create a trace_kprobe, but don't add it to global lists */
>  struct trace_event_call *
>  create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
> -			  bool is_return)
> +			  bool is_return, struct trace_event_call *old)
>  {
>  	enum probe_print_type ptype;
>  	struct trace_kprobe *tk;
> @@ -1820,6 +1825,28 @@ create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
>  		return ERR_CAST(tk);
>  	}
>  
> +	if (old) {
> +		struct trace_kprobe *tk_old;
> +
> +		tk_old = trace_kprobe_primary_from_call(old);

So, this will choose the first (primary) probe's function name as
the representative event name. But other probes can have different
event names.

> +		if (!tk_old) {
> +			ret = -EINVAL;
> +			goto error;
> +		}
> +
> +		/* Append to existing event */
> +		ret = trace_probe_append(&tk->tp, &tk_old->tp);
> +		if (ret)
> +			goto error;
> +
> +		/* Register k*probe */
> +		ret = __register_trace_kprobe(tk);
> +		if (ret)
> +			goto error;

If "appended" probe failed to register, it must be "unlinked" from
the first one and goto error to free the trace_kprobe.

	if (ret) {
		trace_probe_unlink(&tk->tp);
		goto error;
	}

See append_trace_kprobe() for details.

> +
> +		return trace_probe_event_call(&tk->tp);
> +	}
> +
>  	init_trace_event_call(tk);
>  
>  	ptype = trace_kprobe_is_return(tk) ?
> @@ -1841,6 +1868,8 @@ create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
>  
>  void destroy_local_trace_kprobe(struct trace_event_call *event_call)
>  {
> +	struct trace_probe_event *event;
> +	struct trace_probe *pos, *tmp;
>  	struct trace_kprobe *tk;
>  
>  	tk = trace_kprobe_primary_from_call(event_call);
> @@ -1852,9 +1881,15 @@ void destroy_local_trace_kprobe(struct trace_event_call *event_call)
>  		return;
>  	}
>  
> -	__unregister_trace_kprobe(tk);
> +	event = tk->tp.event;
> +	list_for_each_entry_safe(pos, tmp, &event->probes, list) {
> +		tk = container_of(pos, struct trace_kprobe, tp);
> +		list_del_init(&pos->list);
> +		__unregister_trace_kprobe(tk);
> +		__free_trace_kprobe(tk);
> +	}
>  
> -	free_trace_kprobe(tk);
> +	trace_probe_event_free(event);

Actually, each probe has already allocated its trace_probe event (which is not
used if it is appended). Thus you have to use trace_probe_unlink(&tk->tp) in
the above loop.

	list_for_each_entry_safe(pos, tmp, &event->probes, list) {
		tk = container_of(pos, struct trace_kprobe, tp);
		__unregister_trace_kprobe(tk);
		trace_probe_unlink(&tk->tp); /* This will call trace_probe_event_free() internally */
		free_trace_kprobe(tk);
	}

Thank you,

>  }
>  #endif /* CONFIG_PERF_EVENTS */
>  
> diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
> index 3ed2a3f37297..2dff85aa21e9 100644
> --- a/kernel/trace/trace_probe.c
> +++ b/kernel/trace/trace_probe.c
> @@ -974,7 +974,7 @@ int traceprobe_define_arg_fields(struct trace_event_call *event_call,
>  	return 0;
>  }
>  
> -static void trace_probe_event_free(struct trace_probe_event *tpe)
> +void trace_probe_event_free(struct trace_probe_event *tpe)
>  {
>  	kfree(tpe->class.system);
>  	kfree(tpe->call.name);
> diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
> index 99e7a5df025e..ba8e46c7efe8 100644
> --- a/kernel/trace/trace_probe.h
> +++ b/kernel/trace/trace_probe.h
> @@ -333,6 +333,7 @@ int trace_probe_init(struct trace_probe *tp, const char *event,
>  		     const char *group, bool alloc_filter);
>  void trace_probe_cleanup(struct trace_probe *tp);
>  int trace_probe_append(struct trace_probe *tp, struct trace_probe *to);
> +void trace_probe_event_free(struct trace_probe_event *tpe);
>  void trace_probe_unlink(struct trace_probe *tp);
>  int trace_probe_register_event_call(struct trace_probe *tp);
>  int trace_probe_add_file(struct trace_probe *tp, struct trace_event_file *file);
> @@ -377,7 +378,7 @@ extern int traceprobe_set_print_fmt(struct trace_probe *tp, enum probe_print_typ
>  #ifdef CONFIG_PERF_EVENTS
>  extern struct trace_event_call *
>  create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
> -			  bool is_return);
> +			  bool is_return, struct trace_event_call *old);
>  extern void destroy_local_trace_kprobe(struct trace_event_call *event_call);
>  
>  extern struct trace_event_call *
> -- 
> 2.33.1
>
Jiri Olsa Nov. 28, 2021, 10:34 p.m. UTC | #2
On Sun, Nov 28, 2021 at 10:49:54PM +0900, Masami Hiramatsu wrote:
> On Wed, 24 Nov 2021 09:41:12 +0100
> Jiri Olsa <jolsa@redhat.com> wrote:
> 
> > Adding support to create multiple probes within single perf event.
> > This way we can associate single bpf program with multiple kprobes,
> > because bpf program gets associated with the perf event.
> > 
> > The perf_event_attr is not extended, current fields for kprobe
> > attachment are used for multi attachment.
> > 
> > For current kprobe attachment we use either:
> > 
> >    kprobe_func (in config1) + probe_offset (in config2)
> > 
> > to define kprobe by function name with offset, or:
> > 
> >    kprobe_addr (in config2)
> > 
> > to define kprobe with direct address value.
> > 
> > For multi probe attach the same fields point to array of values
> > with the same semantic. Each probe is defined as set of values
> > with the same array index (idx) as:
> > 
> >    kprobe_func[idx]  + probe_offset[idx]
> > 
> > to define kprobe by function name with offset, or:
> > 
> >    kprobe_addr[idx]
> > 
> > to define kprobe with direct address value.
> > 
> > The number of probes is passed in probe_cnt value, which shares
> > the union with wakeup_events/wakeup_watermark values which are
> > not used for kprobes.
> > 
> > Since [1] it's possible to stack multiple probes events under
> > one head event. Using the same code to allow that for probes
> > defined under perf kprobe interface.
> 
> OK, so you also want to add multi-probes on single event by
> single perf-event syscall. Not defining different events.

correct.. the bpf program is then attached to the perf event with
an ioctl call.. this way we can have multiple probes attached
to a single bpf program
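
(by the ioctl call I mean the existing perf path, roughly as in this
sketch, where prog_fd is a loaded kprobe bpf program and perf_fd is
the fd returned by perf_event_open):

	#include <sys/ioctl.h>
	#include <linux/perf_event.h>

	/* bind the bpf program to the perf event, then enable the event */
	if (ioctl(perf_fd, PERF_EVENT_IOC_SET_BPF, prog_fd))
		return -1;
	if (ioctl(perf_fd, PERF_EVENT_IOC_ENABLE, 0))
		return -1;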

> 
> Those are bit different, multi-probes on single event can
> invoke single event handler from different probe points. For
> example same user bpf handler will be invoked from different
> address.

right, that's the goal, having a single bpf program executed
from multiple probes

> 
> > 
> > [1] https://lore.kernel.org/lkml/156095682948.28024.14190188071338900568.stgit@devnote2/
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  include/uapi/linux/perf_event.h |   1 +
> >  kernel/trace/trace_event_perf.c | 106 ++++++++++++++++++++++++++++----
> >  kernel/trace/trace_kprobe.c     |  47 ++++++++++++--
> >  kernel/trace/trace_probe.c      |   2 +-
> >  kernel/trace/trace_probe.h      |   3 +-
> >  5 files changed, 138 insertions(+), 21 deletions(-)
> > 
> > diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
> > index bd8860eeb291..eea80709d1ed 100644
> > --- a/include/uapi/linux/perf_event.h
> > +++ b/include/uapi/linux/perf_event.h
> > @@ -414,6 +414,7 @@ struct perf_event_attr {
> >  	union {
> >  		__u32		wakeup_events;	  /* wakeup every n events */
> >  		__u32		wakeup_watermark; /* bytes before wakeup   */
> > +		__u32		probe_cnt;	  /* number of [k,u] probes */
> >  	};
> >  
> >  	__u32			bp_type;
> > diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
> > index a114549720d6..26078e40c299 100644
> > --- a/kernel/trace/trace_event_perf.c
> > +++ b/kernel/trace/trace_event_perf.c
> > @@ -245,23 +245,27 @@ void perf_trace_destroy(struct perf_event *p_event)
> >  }
> >  
> >  #ifdef CONFIG_KPROBE_EVENTS
> > -int perf_kprobe_init(struct perf_event *p_event, bool is_retprobe)
> > +static struct trace_event_call*
> > +kprobe_init(bool is_retprobe, u64 kprobe_func, u64 kprobe_addr,
> > +	    u64 probe_offset, struct trace_event_call *old)
> >  {
> >  	int ret;
> >  	char *func = NULL;
> >  	struct trace_event_call *tp_event;
> >  
> > -	if (p_event->attr.kprobe_func) {
> > +	if (kprobe_func) {
> >  		func = kzalloc(KSYM_NAME_LEN, GFP_KERNEL);
> >  		if (!func)
> > -			return -ENOMEM;
> > +			return ERR_PTR(-ENOMEM);
> >  		ret = strncpy_from_user(
> > -			func, u64_to_user_ptr(p_event->attr.kprobe_func),
> > +			func, u64_to_user_ptr(kprobe_func),
> >  			KSYM_NAME_LEN);
> >  		if (ret == KSYM_NAME_LEN)
> >  			ret = -E2BIG;
> > -		if (ret < 0)
> > -			goto out;
> > +		if (ret < 0) {
> > +			kfree(func);
> > +			return ERR_PTR(ret);
> > +		}
> >  
> >  		if (func[0] == '\0') {
> >  			kfree(func);
> > @@ -270,20 +274,96 @@ int perf_kprobe_init(struct perf_event *p_event, bool is_retprobe)
> >  	}
> >  
> >  	tp_event = create_local_trace_kprobe(
> > -		func, (void *)(unsigned long)(p_event->attr.kprobe_addr),
> > -		p_event->attr.probe_offset, is_retprobe);
> > -	if (IS_ERR(tp_event)) {
> > -		ret = PTR_ERR(tp_event);
> > -		goto out;
> > +		func, (void *)(unsigned long)(kprobe_addr),
> > +		probe_offset, is_retprobe, old);
> 
> Hmm, here I have a concern (maybe no real issue is caused at this moment.)
> Since ftrace's multi-probe event has same event/group name among the
> probes' internal event-calls. However, create_local_trace_kprobe()
> actually uses the "func" name for the event name.
> I think you should choose a random, different "representative" event name
> for the event (not probe), and share it among the probes on the event,
> if the perf event has no event name.
> 
> (I'm not sure how the event names are used from inside of the BPF programs,
> but just for the consistency.)

ok, I don't think event names are used, I'll check

> 
> > +	kfree(func);
> > +	return tp_event;
> > +}
> > +
> > +static struct trace_event_call*
> > +kprobe_init_multi(struct perf_event *p_event, bool is_retprobe)
> > +{
> > +	void __user *kprobe_func = u64_to_user_ptr(p_event->attr.kprobe_func);
> > +	void __user *kprobe_addr = u64_to_user_ptr(p_event->attr.kprobe_addr);
> > +	u64 *funcs = NULL, *addrs = NULL, *offs = NULL;
> > +	struct trace_event_call *tp_event, *tp_old = NULL;
> > +	u32 i, cnt = p_event->attr.probe_cnt;
> > +	int ret = -EINVAL;
> > +	size_t size;
> > +
> > +	if (!cnt)
> > +		return ERR_PTR(-EINVAL);
> > +
> > +	size = cnt * sizeof(u64);
> > +	if (kprobe_func) {
> > +		ret = -ENOMEM;
> > +		funcs = kmalloc(size, GFP_KERNEL);
> > +		if (!funcs)
> > +			goto out;
> > +		ret = -EFAULT;
> > +		if (copy_from_user(funcs, kprobe_func, size))
> > +			goto out;
> > +	}
> > +
> > +	if (kprobe_addr) {
> > +		ret = -ENOMEM;
> > +		addrs = kmalloc(size, GFP_KERNEL);
> > +		if (!addrs)
> > +			goto out;
> > +		ret = -EFAULT;
> > +		if (copy_from_user(addrs, kprobe_addr, size))
> > +			goto out;
> > +		/* addresses and offsets share the same array */
> > +		offs = addrs;
> >  	}
> >  
> > +	for (i = 0; i < cnt; i++) {
> > +		tp_event = kprobe_init(is_retprobe, funcs ? funcs[i] : 0,
> > +				       addrs ? addrs[i] : 0, offs ? offs[i] : 0,
> > +				       tp_old);
> > +		if (IS_ERR(tp_event)) {
> > +			if (tp_old)
> > +				destroy_local_trace_kprobe(tp_old);
> > +			ret = PTR_ERR(tp_event);
> > +			goto out;
> > +		}
> > +		if (!tp_old)
> > +			tp_old = tp_event;
> > +	}
> > +	ret = 0;
> > +out:
> > +	kfree(funcs);
> > +	kfree(addrs);
> > +	return ret ? ERR_PTR(ret) : tp_old;
> > +}
> > +
> > +static struct trace_event_call*
> > +kprobe_init_single(struct perf_event *p_event, bool is_retprobe)
> > +{
> > +	struct perf_event_attr *attr = &p_event->attr;
> > +
> > +	return kprobe_init(is_retprobe, attr->kprobe_func, attr->kprobe_addr,
> > +			   attr->probe_offset, NULL);
> > +}
> > +
> > +int perf_kprobe_init(struct perf_event *p_event, bool is_retprobe)
> > +{
> > +	struct trace_event_call *tp_event;
> > +	int ret;
> > +
> > +	if (p_event->attr.probe_cnt)
> 
> isn't this "p_event->attr.probe_cnt > 1" ?

right, probe_cnt is just added by this patchset and used only when
there are multiple probes being attached, so that's why it works
even with 1 at the moment

will change

> 
> > +		tp_event = kprobe_init_multi(p_event, is_retprobe);
> > +	else
> > +		tp_event = kprobe_init_single(p_event, is_retprobe);
> > +
> > +	if (IS_ERR(tp_event))
> > +		return PTR_ERR(tp_event);
> > +
> >  	mutex_lock(&event_mutex);
> >  	ret = perf_trace_event_init(tp_event, p_event);
> >  	if (ret)
> >  		destroy_local_trace_kprobe(tp_event);
> >  	mutex_unlock(&event_mutex);
> > -out:
> > -	kfree(func);
> >  	return ret;
> >  }
> >  
> > diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> > index 33272a7b6912..86a7aada853a 100644
> > --- a/kernel/trace/trace_kprobe.c
> > +++ b/kernel/trace/trace_kprobe.c
> > @@ -237,13 +237,18 @@ static int kprobe_dispatcher(struct kprobe *kp, struct pt_regs *regs);
> >  static int kretprobe_dispatcher(struct kretprobe_instance *ri,
> >  				struct pt_regs *regs);
> >  
> > +static void __free_trace_kprobe(struct trace_kprobe *tk)
> > +{
> > +	kfree(tk->symbol);
> > +	free_percpu(tk->nhit);
> > +	kfree(tk);
> > +}
> > +
> >  static void free_trace_kprobe(struct trace_kprobe *tk)
> >  {
> >  	if (tk) {
> >  		trace_probe_cleanup(&tk->tp);
> > -		kfree(tk->symbol);
> > -		free_percpu(tk->nhit);
> > -		kfree(tk);
> > +		__free_trace_kprobe(tk);
> 
> Why is this needed?

I needed some free function that does not call trace_probe_cleanup and
trace_probe_unlink.. I wrote details in the destroy_local_trace_kprobe
comment below

> 
> >  	}
> >  }
> >  
> > @@ -1796,7 +1801,7 @@ static int unregister_kprobe_event(struct trace_kprobe *tk)
> >  /* create a trace_kprobe, but don't add it to global lists */
> >  struct trace_event_call *
> >  create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
> > -			  bool is_return)
> > +			  bool is_return, struct trace_event_call *old)
> >  {
> >  	enum probe_print_type ptype;
> >  	struct trace_kprobe *tk;
> > @@ -1820,6 +1825,28 @@ create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
> >  		return ERR_CAST(tk);
> >  	}
> >  
> > +	if (old) {
> > +		struct trace_kprobe *tk_old;
> > +
> > +		tk_old = trace_kprobe_primary_from_call(old);
> 
> So, this will choose the first (primary) probe's function name as
> the representative event name. But other probes can have different
> event names.

ok

> 
> > +		if (!tk_old) {
> > +			ret = -EINVAL;
> > +			goto error;
> > +		}
> > +
> > +		/* Append to existing event */
> > +		ret = trace_probe_append(&tk->tp, &tk_old->tp);
> > +		if (ret)
> > +			goto error;
> > +
> > +		/* Register k*probe */
> > +		ret = __register_trace_kprobe(tk);
> > +		if (ret)
> > +			goto error;
> 
> If "appended" probe failed to register, it must be "unlinked" from
> the first one and goto error to free the trace_kprobe.
> 
> 	if (ret) {
> 		trace_probe_unlink(&tk->tp);
> 		goto error;
> 	}
> 
> See append_trace_kprobe() for details.

so there's goto error jumping to:

error:
	free_trace_kprobe(tk);

that calls:
	trace_probe_cleanup
	  -> trace_probe_unlink

that should do it, right?

> 
> > +
> > +		return trace_probe_event_call(&tk->tp);
> > +	}
> > +
> >  	init_trace_event_call(tk);
> >  
> >  	ptype = trace_kprobe_is_return(tk) ?
> > @@ -1841,6 +1868,8 @@ create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
> >  
> >  void destroy_local_trace_kprobe(struct trace_event_call *event_call)
> >  {
> > +	struct trace_probe_event *event;
> > +	struct trace_probe *pos, *tmp;
> >  	struct trace_kprobe *tk;
> >  
> >  	tk = trace_kprobe_primary_from_call(event_call);
> > @@ -1852,9 +1881,15 @@ void destroy_local_trace_kprobe(struct trace_event_call *event_call)
> >  		return;
> >  	}
> >  
> > -	__unregister_trace_kprobe(tk);
> > +	event = tk->tp.event;
> > +	list_for_each_entry_safe(pos, tmp, &event->probes, list) {
> > +		tk = container_of(pos, struct trace_kprobe, tp);
> > +		list_del_init(&pos->list);
> > +		__unregister_trace_kprobe(tk);
> > +		__free_trace_kprobe(tk);
> > +	}
> >  
> > -	free_trace_kprobe(tk);
> > +	trace_probe_event_free(event);
> 
> Actually, each probe already allocated the trace_probe events (which are not
> used if it is appended). Thus you have to use trace_probe_unlink(&tk->tp) in
> the above loop.
> 
> 	list_for_each_entry_safe(pos, tmp, &event->probes, list) {
> 		tk = container_of(pos, struct trace_kprobe, tp);
> 		__unregister_trace_kprobe(tk);
> 		trace_probe_unlink(&tk->tp); /* This will call trace_probe_event_free() internally */
> 		free_trace_kprobe(tk);
> 	}

so calling trace_probe_event_free inside this loop is a problem,
because the loop iterates that trace_probe_event's probes list,
and the last probe removed will trigger trace_probe_event_free, which
will free the list we iterate..  and we go down ;-)

so that's why I added the new free function '__free_trace_kprobe',
which frees everything like free_trace_kprobe but does not call
trace_probe_unlink

	event = tk->tp.event;
	list_for_each_entry_safe(pos, tmp, &event->probes, list) {
		tk = container_of(pos, struct trace_kprobe, tp);
		list_del_init(&pos->list);
		__unregister_trace_kprobe(tk);
		__free_trace_kprobe(tk);
	}

	trace_probe_event_free(event);

and there's trace_probe_event_free(event) at the end to do the final free

thanks,
jirka
Masami Hiramatsu (Google) Nov. 29, 2021, 1:43 a.m. UTC | #3
On Sun, 28 Nov 2021 23:34:13 +0100
Jiri Olsa <jolsa@redhat.com> wrote:
> 
> > > +		if (!tk_old) {
> > > +			ret = -EINVAL;
> > > +			goto error;
> > > +		}
> > > +
> > > +		/* Append to existing event */
> > > +		ret = trace_probe_append(&tk->tp, &tk_old->tp);
> > > +		if (ret)
> > > +			goto error;
> > > +
> > > +		/* Register k*probe */
> > > +		ret = __register_trace_kprobe(tk);
> > > +		if (ret)
> > > +			goto error;
> > 
> > If "appended" probe failed to register, it must be "unlinked" from
> > the first one and goto error to free the trace_kprobe.
> > 
> > 	if (ret) {
> > 		trace_probe_unlink(&tk->tp);
> > 		goto error;
> > 	}
> > 
> > See append_trace_kprobe() for details.
> 
> so there's goto error jumping to:
> 
> error:
> 	free_trace_kprobe(tk);
> 
> that calls:
> 	trace_probe_cleanup
> 	  -> trace_probe_unlink
> 
> that should do it, right?

Ah, OK. Clean up all the kprobe events in this function. Then it's good. 

> 
> > 
> > > +
> > > +		return trace_probe_event_call(&tk->tp);
> > > +	}
> > > +
> > >  	init_trace_event_call(tk);
> > >  
> > >  	ptype = trace_kprobe_is_return(tk) ?
> > > @@ -1841,6 +1868,8 @@ create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
> > >  
> > >  void destroy_local_trace_kprobe(struct trace_event_call *event_call)
> > >  {
> > > +	struct trace_probe_event *event;
> > > +	struct trace_probe *pos, *tmp;
> > >  	struct trace_kprobe *tk;
> > >  
> > >  	tk = trace_kprobe_primary_from_call(event_call);
> > > @@ -1852,9 +1881,15 @@ void destroy_local_trace_kprobe(struct trace_event_call *event_call)
> > >  		return;
> > >  	}
> > >  
> > > -	__unregister_trace_kprobe(tk);
> > > +	event = tk->tp.event;
> > > +	list_for_each_entry_safe(pos, tmp, &event->probes, list) {
> > > +		tk = container_of(pos, struct trace_kprobe, tp);
> > > +		list_del_init(&pos->list);
> > > +		__unregister_trace_kprobe(tk);
> > > +		__free_trace_kprobe(tk);
> > > +	}
> > >  
> > > -	free_trace_kprobe(tk);
> > > +	trace_probe_event_free(event);
> > 
> > Actually, each probe already allocated the trace_probe events (which are not
> > used if it is appended). Thus you have to use trace_probe_unlink(&tk->tp) in
> > the above loop.
> > 
> > 	list_for_each_entry_safe(pos, tmp, &event->probes, list) {
> > 		tk = container_of(pos, struct trace_kprobe, tp);
> > 		__unregister_trace_kprobe(tk);
> > 		trace_probe_unlink(&tk->tp); /* This will call trace_probe_event_free() internally */
> > 		free_trace_kprobe(tk);
> > 	}
> 
> so calling trace_probe_event_free inside this loop is a problem,
> because the loop iterates that trace_probe_event's probes list,
> and last probe removed will trigger trace_probe_event_free, that
> will free the list we iterate..  and we go down ;-)

Oops, right. So in this case, you are looping over all the probes
on an event, so the event is referenced outside of the loop.

OK, I got it.

In the ftrace kprobe-event, this loop cursor is handled by dynevent,
so this problem doesn't occur. But BPF is only using the
trace_event, thus this special routine is needed.

Could you add such a comment on your loop?

Thank you,

> 
> so that's why I added new free function '__free_trace_kprobe'
> that frees everything as free_trace_kprobe, but does not call
> trace_probe_unlink
> 
> 	event = tk->tp.event;
> 	list_for_each_entry_safe(pos, tmp, &event->probes, list) {
> 		tk = container_of(pos, struct trace_kprobe, tp);
> 		list_del_init(&pos->list);
> 		__unregister_trace_kprobe(tk);
> 		__free_trace_kprobe(tk);
> 	}
> 
> 	trace_probe_event_free(event);
> 
> and there's trace_probe_event_free(event) to make the final free
> 
> thanks,
> jirka
>
Andrii Nakryiko Dec. 1, 2021, 6:53 a.m. UTC | #4
On Wed, Nov 24, 2021 at 12:41 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding support to create multiple probes within single perf event.
> This way we can associate single bpf program with multiple kprobes,
> because bpf program gets associated with the perf event.
>
> The perf_event_attr is not extended, current fields for kprobe
> attachment are used for multi attachment.

I'm a bit concerned with complicating perf_event_attr further to
support this multi-attach. For BPF, at least, we now have
bpf_perf_link and the corresponding BPF_LINK_CREATE command in the bpf()
syscall, which allows a much simpler and cleaner API to do this. Libbpf
will actually pick bpf_link-based attachment if the kernel supports it. I
think we'd better do the bpf_link-based approach from the get-go.

Another thing I'd like you to keep in mind and think about is the BPF
cookie. Currently kprobe/uprobe/tracepoint allow associating an
arbitrary user-provided u64 value which will be accessible from the BPF
program with bpf_get_attach_cookie(). With multi-attach kprobes this
becomes an extremely crucial feature to support, otherwise it's both
expensive, inconvenient and complicated to be able to distinguish
between different instances of the same multi-attach kprobe
invocation. So with that, what would be the interface to specify these
BPF cookies for this multi-attach kprobe if we are going through
perf_event_attr? Probably picking yet another unused field and
union-izing it with a pointer. It will work, but makes the interface
even more overloaded, while for LINK_CREATE we can just add another
pointer to a u64[] with the same size as the number of kfunc names and
offsets.
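
For illustration, this is roughly how the cookie is consumed today on
the single-attach path (a sketch using existing helpers and libbpf
opts; the multi-attach u64[] interface above is the open question):

	/* BPF side: read the cookie that was supplied at attach time */
	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	SEC("kprobe/ksys_read")
	int BPF_KPROBE(on_read)
	{
		__u64 cookie = bpf_get_attach_cookie(ctx);

		bpf_printk("cookie %llu", cookie);
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";

	/* user space side (libbpf): supply the cookie when attaching */
	LIBBPF_OPTS(bpf_kprobe_opts, opts, .bpf_cookie = 42);
	link = bpf_program__attach_kprobe_opts(prog, "ksys_read", &opts);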

But other than that, I'm super happy that you are working on these
complicated multi-attach capabilities! It would be great to benchmark
one-by-one attachment vs multi-attach to the same set of kprobes once
you arrive at the final implementation.

>
> For current kprobe attachment we use either:
>
>    kprobe_func (in config1) + probe_offset (in config2)
>
> to define kprobe by function name with offset, or:
>
>    kprobe_addr (in config2)
>
> to define kprobe with direct address value.
>
> For multi probe attach the same fields point to array of values
> with the same semantic. Each probe is defined as set of values
> with the same array index (idx) as:
>
>    kprobe_func[idx]  + probe_offset[idx]
>
> to define kprobe by function name with offset, or:
>
>    kprobe_addr[idx]
>
> to define kprobe with direct address value.
>
> The number of probes is passed in probe_cnt value, which shares
> the union with wakeup_events/wakeup_watermark values which are
> not used for kprobes.
>
> Since [1] it's possible to stack multiple probes events under
> one head event. Using the same code to allow that for probes
> defined under perf kprobe interface.
>
> [1] https://lore.kernel.org/lkml/156095682948.28024.14190188071338900568.stgit@devnote2/
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/uapi/linux/perf_event.h |   1 +
>  kernel/trace/trace_event_perf.c | 106 ++++++++++++++++++++++++++++----
>  kernel/trace/trace_kprobe.c     |  47 ++++++++++++--
>  kernel/trace/trace_probe.c      |   2 +-
>  kernel/trace/trace_probe.h      |   3 +-
>  5 files changed, 138 insertions(+), 21 deletions(-)
>

[...]
Andrii Nakryiko Dec. 1, 2021, 6:55 a.m. UTC | #5
On Tue, Nov 30, 2021 at 10:53 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Wed, Nov 24, 2021 at 12:41 AM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding support to create multiple probes within single perf event.
> > This way we can associate single bpf program with multiple kprobes,
> > because bpf program gets associated with the perf event.
> >
> > The perf_event_attr is not extended, current fields for kprobe
> > attachment are used for multi attachment.
>
> I'm a bit concerned with complicating perf_event_attr further to
> support this multi-attach. For BPF, at least, we now have
> bpf_perf_link and corresponding BPF_LINK_CREATE command in bpf()
> syscall which allows much simpler and cleaner API to do this. Libbpf
> will actually pick bpf_link-based attachment if kernel supports it. I
> think we should better do bpf_link-based approach from the get go.
>
> Another thing I'd like you to keep in mind and think about is BPF
> cookie. Currently kprobe/uprobe/tracepoint allow to associate
> arbitrary user-provided u64 value which will be accessible from BPF
> program with bpf_get_attach_cookie(). With multi-attach kprobes this
> becomes an extremely crucial feature to support, otherwise it's both
> expensive, inconvenient and complicated to be able to distinguish
> between different instances of the same multi-attach kprobe
> invocation. So with that, what would be the interface to specify these
> BPF cookies for this multi-attach kprobe, if we are going through
> perf_event_attr. Probably picking yet another unused field and
> union-izing it with a pointer. It will work, but makes the interface
> even more overloaded. While for LINK_CREATE we can just add another
> pointer to a u64[] with the same size as number of kfunc names and
> offsets.

Oh, and to be clear, I'm not proposing to bypass the underlying perf
infra, but rather to use it directly as an internal API, not through
the perf_event_open syscall.

>
> But other than that, I'm super happy that you are working on these
> complicated multi-attach capabilities! It would be great to benchmark
> one-by-one attachment vs multi-attach to the same set of kprobes once
> you arrive at the final implementation.
>
> >
> > For current kprobe attachment we use either:
> >
> >    kprobe_func (in config1) + probe_offset (in config2)
> >
> > to define kprobe by function name with offset, or:
> >
> >    kprobe_addr (in config2)
> >
> > to define kprobe with direct address value.
> >
> > For multi probe attach the same fields point to array of values
> > with the same semantic. Each probe is defined as set of values
> > with the same array index (idx) as:
> >
> >    kprobe_func[idx]  + probe_offset[idx]
> >
> > to define kprobe by function name with offset, or:
> >
> >    kprobe_addr[idx]
> >
> > to define kprobe with direct address value.
> >
> > The number of probes is passed in probe_cnt value, which shares
> > the union with wakeup_events/wakeup_watermark values which are
> > not used for kprobes.
> >
> > Since [1] it's possible to stack multiple probes events under
> > one head event. Using the same code to allow that for probes
> > defined under perf kprobe interface.
> >
> > [1] https://lore.kernel.org/lkml/156095682948.28024.14190188071338900568.stgit@devnote2/
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  include/uapi/linux/perf_event.h |   1 +
> >  kernel/trace/trace_event_perf.c | 106 ++++++++++++++++++++++++++++----
> >  kernel/trace/trace_kprobe.c     |  47 ++++++++++++--
> >  kernel/trace/trace_probe.c      |   2 +-
> >  kernel/trace/trace_probe.h      |   3 +-
> >  5 files changed, 138 insertions(+), 21 deletions(-)
> >
>
> [...]
Jiri Olsa Dec. 1, 2021, 9:32 p.m. UTC | #6
On Tue, Nov 30, 2021 at 10:53:58PM -0800, Andrii Nakryiko wrote:
> On Wed, Nov 24, 2021 at 12:41 AM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding support to create multiple probes within single perf event.
> > This way we can associate single bpf program with multiple kprobes,
> > because bpf program gets associated with the perf event.
> >
> > The perf_event_attr is not extended, current fields for kprobe
> > attachment are used for multi attachment.
> 
> I'm a bit concerned with complicating perf_event_attr further to
> support this multi-attach. For BPF, at least, we now have
> bpf_perf_link and corresponding BPF_LINK_CREATE command in bpf()
> syscall which allows much simpler and cleaner API to do this. Libbpf
> will actually pick bpf_link-based attachment if kernel supports it. I
> think we should better do bpf_link-based approach from the get go.
> 
> Another thing I'd like you to keep in mind and think about is BPF
> cookie. Currently kprobe/uprobe/tracepoint allow to associate
> arbitrary user-provided u64 value which will be accessible from BPF
> program with bpf_get_attach_cookie(). With multi-attach kprobes this
> becomes an extremely crucial feature to support, otherwise it's both
> expensive, inconvenient and complicated to be able to distinguish
> between different instances of the same multi-attach kprobe
> invocation. So with that, what would be the interface to specify these
> BPF cookies for this multi-attach kprobe, if we are going through
> perf_event_attr. Probably picking yet another unused field and
> union-izing it with a pointer. It will work, but makes the interface
> even more overloaded. While for LINK_CREATE we can just add another
> pointer to a u64[] with the same size as number of kfunc names and
> offsets.

I'm not sure we could bypass perf event easily.. perhaps introduce
BPF_PROG_TYPE_RAW_KPROBE as we did for tracepoints, or just a new
type for multi kprobe attachment like BPF_PROG_TYPE_MULTI_KPROBE;
that way we'd have full control over the API

> 
> But other than that, I'm super happy that you are working on these
> complicated multi-attach capabilities! It would be great to benchmark
> one-by-one attachment vs multi-attach to the same set of kprobes once
> you arrive at the final implementation.

I have the change for bpftrace to use this, and even though there's
some speedup, it's not as substantial as for trampolines

looks like we 'only' got rid of the multiple perf syscall overheads,
compared to the rcu sync timeouts we eliminated for trampolines

I'll make full benchmarks once we have some final solution

jirka
Alexei Starovoitov Dec. 2, 2021, 5:10 a.m. UTC | #7
On Wed, Dec 1, 2021 at 1:32 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Tue, Nov 30, 2021 at 10:53:58PM -0800, Andrii Nakryiko wrote:
> > On Wed, Nov 24, 2021 at 12:41 AM Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > Adding support to create multiple probes within single perf event.
> > > This way we can associate single bpf program with multiple kprobes,
> > > because bpf program gets associated with the perf event.
> > >
> > > The perf_event_attr is not extended, current fields for kprobe
> > > attachment are used for multi attachment.
> >
> > I'm a bit concerned with complicating perf_event_attr further to
> > support this multi-attach. For BPF, at least, we now have
> > bpf_perf_link and corresponding BPF_LINK_CREATE command in bpf()
> > syscall which allows much simpler and cleaner API to do this. Libbpf
> > will actually pick bpf_link-based attachment if kernel supports it. I
> > think we should better do bpf_link-based approach from the get go.
> >
> > Another thing I'd like you to keep in mind and think about is BPF
> > cookie. Currently kprobe/uprobe/tracepoint allow to associate
> > arbitrary user-provided u64 value which will be accessible from BPF
> > program with bpf_get_attach_cookie(). With multi-attach kprobes this
> > becomes an extremely crucial feature to support, otherwise it's both
> > expensive, inconvenient and complicated to be able to distinguish
> > between different instances of the same multi-attach kprobe
> > invocation. So with that, what would be the interface to specify these
> > BPF cookies for this multi-attach kprobe, if we are going through
> > perf_event_attr. Probably picking yet another unused field and
> > union-izing it with a pointer. It will work, but makes the interface
> > even more overloaded. While for LINK_CREATE we can just add another
> > pointer to a u64[] with the same size as number of kfunc names and
> > offsets.
>
> I'm not sure we could bypass perf event easily.. perhaps introduce
> BPF_PROG_TYPE_RAW_KPROBE as we did for tracepoints or just new
> type for multi kprobe attachment like BPF_PROG_TYPE_MULTI_KPROBE;
> that way we'd have full control over the API

Indeed. The existing kprobe prog type has this api:
 * Return: BPF programs always return an integer which is interpreted by
 * kprobe handler as:
 * 0 - return from kprobe (event is filtered out)
 * 1 - store kprobe event into ring buffer

that part we cannot change.
No one was using that filtering feature. It often was in the way.
New MULTI_KPROBE prog type should not have it.
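
A tiny sketch of that existing convention, with a hypothetical pid
filter (bpf_get_current_pid_tgid() is an existing helper; same
vmlinux.h/bpf_helpers.h/bpf_tracing.h includes as usual):

	SEC("kprobe/ksys_write")
	int BPF_KPROBE(only_pid_1234)
	{
		/* return 0 - event filtered out, 1 - store into ring buffer */
		if ((bpf_get_current_pid_tgid() >> 32) != 1234)
			return 0;
		return 1;
	}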
Andrii Nakryiko Dec. 7, 2021, 3:15 a.m. UTC | #8
On Wed, Dec 1, 2021 at 1:32 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Tue, Nov 30, 2021 at 10:53:58PM -0800, Andrii Nakryiko wrote:
> > On Wed, Nov 24, 2021 at 12:41 AM Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > Adding support to create multiple probes within single perf event.
> > > This way we can associate single bpf program with multiple kprobes,
> > > because bpf program gets associated with the perf event.
> > >
> > > The perf_event_attr is not extended, current fields for kprobe
> > > attachment are used for multi attachment.
> >
> > I'm a bit concerned with complicating perf_event_attr further to
> > support this multi-attach. For BPF, at least, we now have
> > bpf_perf_link and corresponding BPF_LINK_CREATE command in bpf()
> > syscall which allows much simpler and cleaner API to do this. Libbpf
> > will actually pick bpf_link-based attachment if kernel supports it. I
> > think we should better do bpf_link-based approach from the get go.
> >
> > Another thing I'd like you to keep in mind and think about is BPF
> > cookie. Currently kprobe/uprobe/tracepoint allow to associate
> > arbitrary user-provided u64 value which will be accessible from BPF
> > program with bpf_get_attach_cookie(). With multi-attach kprobes this
> > becomes an extremely crucial feature to support, otherwise it's both
> > expensive, inconvenient and complicated to be able to distinguish
> > between different instances of the same multi-attach kprobe
> > invocation. So with that, what would be the interface to specify these
> > BPF cookies for this multi-attach kprobe, if we are going through
> > perf_event_attr. Probably picking yet another unused field and
> > union-izing it with a pointer. It will work, but makes the interface
> > even more overloaded. While for LINK_CREATE we can just add another
> > pointer to a u64[] with the same size as number of kfunc names and
> > offsets.
>
> I'm not sure we could bypass perf event easily.. perhaps introduce
> BPF_PROG_TYPE_RAW_KPROBE as we did for tracepoints or just new
> type for multi kprobe attachment like BPF_PROG_TYPE_MULTI_KPROBE;
> that way we'd have full control over the API

Sure, new type works.

>
> >
> > But other than that, I'm super happy that you are working on these
> > complicated multi-attach capabilities! It would be great to benchmark
> > one-by-one attachment vs multi-attach to the same set of kprobes once
> > you arrive at the final implementation.
>
> I have the change for bpftrace to use this and even though there's
> some speed up, it's not as substantial as for trampolines
>
> looks like we 'only' got rid of the multiple perf syscall overheads,
> compared to rcu syncs timeouts like we eliminated for trampolines

if it's just eliminating a pretty small overhead of multiple syscalls,
then it would be quite disappointing to add a bunch of complexity just
for that. Are there any reasons we can't use the same low-level ftrace
batch attach API to speed this up considerably? I assume it's only
possible if the kprobe is attached at the beginning of the function (not
sure how kretprobe is treated here), so we can say that this
new kprobe prog type can only be attached at the beginning of each
function and enforce that (which would probably be a totally reasonable
assumption, as that's what's happening most frequently in practice).
Worst case, it should be possible to split all requested attach targets
into two groups, one fast at function entry and all the rest.

Am I too far off on this one? There might be some more complications
that I don't see.

>
> I'll make full benchmarks once we have some final solution
>
> jirka
>
Jiri Olsa Dec. 8, 2021, 1:50 p.m. UTC | #9
On Mon, Dec 06, 2021 at 07:15:58PM -0800, Andrii Nakryiko wrote:
> On Wed, Dec 1, 2021 at 1:32 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > On Tue, Nov 30, 2021 at 10:53:58PM -0800, Andrii Nakryiko wrote:
> > > On Wed, Nov 24, 2021 at 12:41 AM Jiri Olsa <jolsa@redhat.com> wrote:
> > > >
> > > > Adding support to create multiple probes within single perf event.
> > > > This way we can associate single bpf program with multiple kprobes,
> > > > because bpf program gets associated with the perf event.
> > > >
> > > > The perf_event_attr is not extended, current fields for kprobe
> > > > attachment are used for multi attachment.
> > >
> > > I'm a bit concerned with complicating perf_event_attr further to
> > > support this multi-attach. For BPF, at least, we now have
> > > bpf_perf_link and corresponding BPF_LINK_CREATE command in bpf()
> > > syscall which allows much simpler and cleaner API to do this. Libbpf
> > > will actually pick bpf_link-based attachment if kernel supports it. I
> > > think we should better do bpf_link-based approach from the get go.
> > >
> > > Another thing I'd like you to keep in mind and think about is BPF
> > > cookie. Currently kprobe/uprobe/tracepoint allow to associate
> > > arbitrary user-provided u64 value which will be accessible from BPF
> > > program with bpf_get_attach_cookie(). With multi-attach kprobes this
> > > becomes an extremely crucial feature to support, otherwise it's both
> > > expensive, inconvenient and complicated to be able to distinguish
> > > between different instances of the same multi-attach kprobe
> > > invocation. So with that, what would be the interface to specify these
> > > BPF cookies for this multi-attach kprobe, if we are going through
> > > perf_event_attr. Probably picking yet another unused field and
> > > union-izing it with a pointer. It will work, but makes the interface
> > > even more overloaded. While for LINK_CREATE we can just add another
> > > pointer to a u64[] with the same size as number of kfunc names and
> > > offsets.
> >
> > I'm not sure we could bypass perf event easily.. perhaps introduce
> > BPF_PROG_TYPE_RAW_KPROBE as we did for tracepoints or just new
> > type for multi kprobe attachment like BPF_PROG_TYPE_MULTI_KPROBE;
> > that way we'd have full control over the API
> 
> Sure, new type works.
> 
> >
> > >
> > > But other than that, I'm super happy that you are working on these
> > > complicated multi-attach capabilities! It would be great to benchmark
> > > one-by-one attachment vs multi-attach to the same set of kprobes once
> > > you arrive at the final implementation.
> >
> > I have the change for bpftrace to use this and even though there's
> > some speed up, it's not as substantial as for trampolines
> >
> > looks like we 'only' got rid of the multiple perf syscall overheads,
> > compared to rcu syncs timeouts like we eliminated for trampolines
> 
> if it's just eliminating a pretty small overhead of multiple syscalls,
> then it would be quite disappointing to add a bunch of complexity just
> for that.

I meant it's not as huge a saving as for trampolines, but I expect some
noticeable speedup; I'll make more benchmarks with the current patchset

> Are there any reasons we can't use the same low-level ftrace
> batch attach API to speed this up considerably? I assume it's only
> possible if kprobe is attached at the beginning of the function (not
> sure how kretprobe is treated here), so we can either say that this
> new kprobe prog type can only be attached at the beginning of each
> function and enforce that (probably would be totally reasonable
> assumption as that's what's happening most frequently in practice).
> Worst case, should be possible to split all requested attach targets
> into two groups, one fast at function entry and all the rest.
> 
> Am I too far off on this one? There might be some more complications
> that I don't see.

I'd need to check more on kprobes internals, but.. ;-)

the new ftrace interface is special for 'direct' trampolines and
I think that although kprobes can use ftrace for attaching, they
use it in a different way

also this current 'multi attach' approach is on top of the current kprobe
interface; if we wanted to use the new ftrace API we'd need to add a new
kprobe interface and change the kprobe attaching to use it (for cases where
it's attached at the function entry)

jirka

> 
> >
> > I'll make full benchmarks once we have some final solution
> >
> > jirka
> >
>
Jiri Olsa Dec. 10, 2021, 12:42 p.m. UTC | #10
On Wed, Dec 08, 2021 at 02:50:09PM +0100, Jiri Olsa wrote:
> On Mon, Dec 06, 2021 at 07:15:58PM -0800, Andrii Nakryiko wrote:
> > On Wed, Dec 1, 2021 at 1:32 PM Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > On Tue, Nov 30, 2021 at 10:53:58PM -0800, Andrii Nakryiko wrote:
> > > > On Wed, Nov 24, 2021 at 12:41 AM Jiri Olsa <jolsa@redhat.com> wrote:
> > > > >
> > > > > Adding support to create multiple probes within single perf event.
> > > > > This way we can associate single bpf program with multiple kprobes,
> > > > > because bpf program gets associated with the perf event.
> > > > >
> > > > > The perf_event_attr is not extended, current fields for kprobe
> > > > > attachment are used for multi attachment.
> > > >
> > > > I'm a bit concerned with complicating perf_event_attr further to
> > > > support this multi-attach. For BPF, at least, we now have
> > > > bpf_perf_link and corresponding BPF_LINK_CREATE command in bpf()
> > > > syscall which allows much simpler and cleaner API to do this. Libbpf
> > > > will actually pick bpf_link-based attachment if kernel supports it. I
> > > > think we should better do bpf_link-based approach from the get go.
> > > >
> > > > Another thing I'd like you to keep in mind and think about is BPF
> > > > cookie. Currently kprobe/uprobe/tracepoint allow to associate
> > > > arbitrary user-provided u64 value which will be accessible from BPF
> > > > program with bpf_get_attach_cookie(). With multi-attach kprobes this
> > > > becomes an extremely crucial feature to support, otherwise it's both
> > > > expensive, inconvenient and complicated to be able to distinguish
> > > > between different instances of the same multi-attach kprobe
> > > > invocation. So with that, what would be the interface to specify these
> > > > BPF cookies for this multi-attach kprobe, if we are going through
> > > > perf_event_attr. Probably picking yet another unused field and
> > > > union-izing it with a pointer. It will work, but makes the interface
> > > > even more overloaded. While for LINK_CREATE we can just add another
> > > > pointer to a u64[] with the same size as number of kfunc names and
> > > > offsets.
> > >
> > > I'm not sure we could bypass perf event easily.. perhaps introduce
> > > BPF_PROG_TYPE_RAW_KPROBE as we did for tracepoints or just new
> > > type for multi kprobe attachment like BPF_PROG_TYPE_MULTI_KPROBE;
> > > that way we'd have full control over the API
> > 
> > Sure, new type works.
> > 
> > >
> > > >
> > > > But other than that, I'm super happy that you are working on these
> > > > complicated multi-attach capabilities! It would be great to benchmark
> > > > one-by-one attachment vs multi-attach to the same set of kprobes once
> > > > you arrive at the final implementation.
> > >
> > > I have the change for bpftrace to use this and even though there's
> > > some speed up, it's not as substantial as for trampolines
> > >
> > > looks like we 'only' got rid of the multiple perf syscall overheads,
> > > compared to rcu syncs timeouts like we eliminated for trampolines
> > 
> > if it's just eliminating a pretty small overhead of multiple syscalls,
> > then it would be quite disappointing to add a bunch of complexity just
> > for that.
> 
> I meant it's not as huge save as for trampolines, but I expect some
> noticeable speedup, I'll make more benchmarks with current patchset

so with this approach there's a noticeable speedup, but it's not the
'instant attachment speed' we get for trampolines

as a base I used bpftrace with a change that allows reusing the bpf
program for multiple kprobes

bpftrace standard attach of 672 kprobes:

  Performance counter stats for './src/bpftrace -vv -e kprobe:kvm* { @[kstack] += 1; }  i:ms:10 { printf("KRAVA\n"); exit() }':

      70.548897815 seconds time elapsed

       0.909996000 seconds user
      50.622834000 seconds sys


bpftrace using interface from this patchset attach of 673 kprobes:

  Performance counter stats for './src/bpftrace -vv -e kprobe:kvm* { @[kstack] += 1; }  i:ms:10 { printf("KRAVA\n"); exit() }':

      36.947586803 seconds time elapsed

       0.272585000 seconds user
      30.900831000 seconds sys


so it's noticeable, but I wonder if it's enough ;-)

jirka

> 
> > Are there any reasons we can't use the same low-level ftrace
> > batch attach API to speed this up considerably? I assume it's only
> > possible if the kprobe is attached at the beginning of the function (not
> > sure how kretprobe is treated here), so we can either say that this
> > new kprobe prog type can only be attached at the beginning of each
> > function and enforce that (probably would be totally reasonable
> > assumption as that's what's happening most frequently in practice).
> > Worst case, it should be possible to split all requested attach targets
> > into two groups: one fast at function entry, and all the rest.
> > 
> > Am I too far off on this one? There might be some more complications
> > that I don't see.
> 
> I'd need to check more on kprobes internals, but.. ;-)
> 
> > the new ftrace interface is specific to 'direct' trampolines and
> I think that although kprobes can use ftrace for attaching, they
> use it in a different way
> 
> also, this current 'multi attach' approach is on top of the current kprobe
> interface; if we wanted to use the new ftrace API, we'd need to add a new
> kprobe interface and change the kprobe attaching to use it (for cases
> where it's attached at the function entry)
> 
> jirka
> 
> > 
> > >
> > > I'll make full benchmarks once we have some final solution
> > >
> > > jirka
> > >
> >
Andrii Nakryiko Dec. 10, 2021, 6:28 p.m. UTC | #11
On Fri, Dec 10, 2021 at 4:42 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Wed, Dec 08, 2021 at 02:50:09PM +0100, Jiri Olsa wrote:
> > On Mon, Dec 06, 2021 at 07:15:58PM -0800, Andrii Nakryiko wrote:
> > > On Wed, Dec 1, 2021 at 1:32 PM Jiri Olsa <jolsa@redhat.com> wrote:
> > > >
> > > > On Tue, Nov 30, 2021 at 10:53:58PM -0800, Andrii Nakryiko wrote:
> > > > > On Wed, Nov 24, 2021 at 12:41 AM Jiri Olsa <jolsa@redhat.com> wrote:
> > > > > >
> > > > > > Adding support to create multiple probes within single perf event.
> > > > > > This way we can associate single bpf program with multiple kprobes,
> > > > > > because bpf program gets associated with the perf event.
> > > > > >
> > > > > > The perf_event_attr is not extended, current fields for kprobe
> > > > > > attachment are used for multi attachment.
> > > > >
> > > > > I'm a bit concerned with complicating perf_event_attr further to
> > > > > support this multi-attach. For BPF, at least, we now have
> > > > > bpf_perf_link and the corresponding BPF_LINK_CREATE command in bpf()
> > > > > syscall, which allows a much simpler and cleaner API to do this. Libbpf
> > > > > will actually pick bpf_link-based attachment if the kernel supports it. I
> > > > > think we'd better do a bpf_link-based approach from the get-go.
> > > > >
> > > > > Another thing I'd like you to keep in mind and think about is BPF
> > > > > cookie. Currently kprobe/uprobe/tracepoint allow associating an
> > > > > arbitrary user-provided u64 value which will be accessible from a BPF
> > > > > program with bpf_get_attach_cookie(). With multi-attach kprobes this
> > > > > becomes an extremely crucial feature to support; otherwise it's
> > > > > expensive, inconvenient, and complicated to be able to distinguish
> > > > > between different instances of the same multi-attach kprobe
> > > > > invocation. So with that, what would be the interface to specify these
> > > > > BPF cookies for this multi-attach kprobe, if we are going through
> > > > > perf_event_attr? Probably picking yet another unused field and
> > > > > union-izing it with a pointer. It will work, but makes the interface
> > > > > even more overloaded. While for LINK_CREATE we can just add another
> > > > > pointer to a u64[] with the same size as number of kfunc names and
> > > > > offsets.
> > > >
> > > > I'm not sure we could bypass perf event easily.. perhaps introduce
> > > > BPF_PROG_TYPE_RAW_KPROBE as we did for tracepoints, or just a new
> > > > type for multi kprobe attachment like BPF_PROG_TYPE_MULTI_KPROBE;
> > > > that way we'd have full control over the API
> > >
> > > Sure, new type works.
> > >
> > > >
> > > > >
> > > > > But other than that, I'm super happy that you are working on these
> > > > > complicated multi-attach capabilities! It would be great to benchmark
> > > > > one-by-one attachment vs multi-attach to the same set of kprobes once
> > > > > you arrive at the final implementation.
> > > >
> > > > I have the change for bpftrace to use this and even though there's
> > > > some speedup, it's not as substantial as for trampolines
> > > >
> > > > looks like we 'only' got rid of the multiple perf syscall overheads,
> > > > compared to the rcu-sync delays we eliminated for trampolines
> > >
> > > if it's just eliminating a pretty small overhead of multiple syscalls,
> > > then it would be quite disappointing to add a bunch of complexity just
> > > for that.
> >
> > I meant it's not as huge a saving as for trampolines, but I expect some
> > noticeable speedup; I'll make more benchmarks with the current patchset
>
> so with this approach there's a noticeable speedup, but it's not the
> 'instant attachment speed' as for trampolines
>
> as a base I used bpftrace with a change that allows reusing the bpf
> program for multiple kprobes
>
> bpftrace standard attach of 672 kprobes:
>
>   Performance counter stats for './src/bpftrace -vv -e kprobe:kvm* { @[kstack] += 1; }  i:ms:10 { printf("KRAVA\n"); exit() }':
>
>       70.548897815 seconds time elapsed
>
>        0.909996000 seconds user
>       50.622834000 seconds sys
>
>
> bpftrace attach of 673 kprobes using the interface from this patchset:
>
>   Performance counter stats for './src/bpftrace -vv -e kprobe:kvm* { @[kstack] += 1; }  i:ms:10 { printf("KRAVA\n"); exit() }':
>
>       36.947586803 seconds time elapsed
>
>        0.272585000 seconds user
>       30.900831000 seconds sys
>
>
> so it's noticeable, but I wonder if it's enough ;-)

A typical retsnoop run for the BPF use case attaches to ~1200 functions.
Ask yourself whether a tool that takes 36 seconds to
start up is a great user experience ;)
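
(back-of-the-envelope from the numbers above: 36.9 s / 673 probes is
roughly 55 ms per probe, so ~1200 probes would still take over a minute)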

>
> jirka
>
> >
> > > Are there any reasons we can't use the same low-level ftrace
> > > batch attach API to speed this up considerably? I assume it's only
> > > possible if the kprobe is attached at the beginning of the function (not
> > > sure how kretprobe is treated here), so we can either say that this
> > > new kprobe prog type can only be attached at the beginning of each
> > > function and enforce that (probably would be totally reasonable
> > > assumption as that's what's happening most frequently in practice).
> > > Worst case, it should be possible to split all requested attach targets
> > > into two groups: one fast at function entry, and all the rest.
> > >
> > > Am I too far off on this one? There might be some more complications
> > > that I don't see.
> >
> > I'd need to check more on kprobes internals, but.. ;-)
> >
> > > the new ftrace interface is specific to 'direct' trampolines and
> > I think that although kprobes can use ftrace for attaching, they
> > use it in a different way
> >
> > also, this current 'multi attach' approach is on top of the current kprobe
> > interface; if we wanted to use the new ftrace API, we'd need to add a new
> > kprobe interface and change the kprobe attaching to use it (for cases
> > where it's attached at the function entry)
> >
> > jirka
> >
> > >
> > > >
> > > > I'll make full benchmarks once we have some final solution
> > > >
> > > > jirka
> > > >
> > >
>
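
For reference, here is a minimal userspace sketch of the multi-attach
interface implemented by the patch below. Assumptions: probe_cnt exists
only with this patch applied; the kprobe PMU type must be read from
/sys/bus/event_source/devices/kprobe/type at runtime (passed in here);
the function names are arbitrary examples; error handling is trimmed.

#include <linux/perf_event.h>
#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Attach one perf event to two kprobes. Per the changelog, kprobe_func
 * points to a u64[probe_cnt] array of function-name pointers and
 * kprobe_addr to a u64[probe_cnt] array of offsets (addresses and
 * offsets share the same array in the patch). */
static int attach_two_kprobes(int kprobe_pmu_type)
{
	const char *names[2] = { "kvm_vcpu_ioctl", "kvm_dev_ioctl" };
	uint64_t funcs[2] = { (uintptr_t)names[0], (uintptr_t)names[1] };
	uint64_t offs[2] = { 0, 0 };
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = kprobe_pmu_type;		/* from sysfs, see above */
	attr.kprobe_func = (uintptr_t)funcs;	/* config1: array of name ptrs */
	attr.kprobe_addr = (uintptr_t)offs;	/* config2: array of offsets */
	attr.probe_cnt = 2;			/* length of both arrays */

	/* system-wide on CPU 0; one fd covers all probe_cnt probes */
	return syscall(__NR_perf_event_open, &attr, -1 /* pid */,
		       0 /* cpu */, -1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
}
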
diff mbox series

Patch

diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index bd8860eeb291..eea80709d1ed 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -414,6 +414,7 @@  struct perf_event_attr {
 	union {
 		__u32		wakeup_events;	  /* wakeup every n events */
 		__u32		wakeup_watermark; /* bytes before wakeup   */
+		__u32		probe_cnt;	  /* number of [k,u] probes */
 	};
 
 	__u32			bp_type;
diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index a114549720d6..26078e40c299 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -245,23 +245,27 @@  void perf_trace_destroy(struct perf_event *p_event)
 }
 
 #ifdef CONFIG_KPROBE_EVENTS
-int perf_kprobe_init(struct perf_event *p_event, bool is_retprobe)
+static struct trace_event_call*
+kprobe_init(bool is_retprobe, u64 kprobe_func, u64 kprobe_addr,
+	    u64 probe_offset, struct trace_event_call *old)
 {
 	int ret;
 	char *func = NULL;
 	struct trace_event_call *tp_event;
 
-	if (p_event->attr.kprobe_func) {
+	if (kprobe_func) {
 		func = kzalloc(KSYM_NAME_LEN, GFP_KERNEL);
 		if (!func)
-			return -ENOMEM;
+			return ERR_PTR(-ENOMEM);
 		ret = strncpy_from_user(
-			func, u64_to_user_ptr(p_event->attr.kprobe_func),
+			func, u64_to_user_ptr(kprobe_func),
 			KSYM_NAME_LEN);
 		if (ret == KSYM_NAME_LEN)
 			ret = -E2BIG;
-		if (ret < 0)
-			goto out;
+		if (ret < 0) {
+			kfree(func);
+			return ERR_PTR(ret);
+		}
 
 		if (func[0] == '\0') {
 			kfree(func);
@@ -270,20 +274,96 @@  int perf_kprobe_init(struct perf_event *p_event, bool is_retprobe)
 	}
 
 	tp_event = create_local_trace_kprobe(
-		func, (void *)(unsigned long)(p_event->attr.kprobe_addr),
-		p_event->attr.probe_offset, is_retprobe);
-	if (IS_ERR(tp_event)) {
-		ret = PTR_ERR(tp_event);
-		goto out;
+		func, (void *)(unsigned long)(kprobe_addr),
+		probe_offset, is_retprobe, old);
+	kfree(func);
+	return tp_event;
+}
+
+static struct trace_event_call*
+kprobe_init_multi(struct perf_event *p_event, bool is_retprobe)
+{
+	void __user *kprobe_func = u64_to_user_ptr(p_event->attr.kprobe_func);
+	void __user *kprobe_addr = u64_to_user_ptr(p_event->attr.kprobe_addr);
+	u64 *funcs = NULL, *addrs = NULL, *offs = NULL;
+	struct trace_event_call *tp_event, *tp_old = NULL;
+	u32 i, cnt = p_event->attr.probe_cnt;
+	int ret = -EINVAL;
+	size_t size;
+
+	if (!cnt)
+		return ERR_PTR(-EINVAL);
+
+	size = cnt * sizeof(u64);
+	if (kprobe_func) {
+		ret = -ENOMEM;
+		funcs = kmalloc(size, GFP_KERNEL);
+		if (!funcs)
+			goto out;
+		ret = -EFAULT;
+		if (copy_from_user(funcs, kprobe_func, size))
+			goto out;
+	}
+
+	if (kprobe_addr) {
+		ret = -ENOMEM;
+		addrs = kmalloc(size, GFP_KERNEL);
+		if (!addrs)
+			goto out;
+		ret = -EFAULT;
+		if (copy_from_user(addrs, kprobe_addr, size))
+			goto out;
+		/* addresses and offsets share the same array */
+		offs = addrs;
 	}
 
+	for (i = 0; i < cnt; i++) {
+		tp_event = kprobe_init(is_retprobe, funcs ? funcs[i] : 0,
+				       addrs ? addrs[i] : 0, offs ? offs[i] : 0,
+				       tp_old);
+		if (IS_ERR(tp_event)) {
+			if (tp_old)
+				destroy_local_trace_kprobe(tp_old);
+			ret = PTR_ERR(tp_event);
+			goto out;
+		}
+		if (!tp_old)
+			tp_old = tp_event;
+	}
+	ret = 0;
+out:
+	kfree(funcs);
+	kfree(addrs);
+	return ret ? ERR_PTR(ret) : tp_old;
+}
+
+static struct trace_event_call*
+kprobe_init_single(struct perf_event *p_event, bool is_retprobe)
+{
+	struct perf_event_attr *attr = &p_event->attr;
+
+	return kprobe_init(is_retprobe, attr->kprobe_func, attr->kprobe_addr,
+			   attr->probe_offset, NULL);
+}
+
+int perf_kprobe_init(struct perf_event *p_event, bool is_retprobe)
+{
+	struct trace_event_call *tp_event;
+	int ret;
+
+	if (p_event->attr.probe_cnt)
+		tp_event = kprobe_init_multi(p_event, is_retprobe);
+	else
+		tp_event = kprobe_init_single(p_event, is_retprobe);
+
+	if (IS_ERR(tp_event))
+		return PTR_ERR(tp_event);
+
 	mutex_lock(&event_mutex);
 	ret = perf_trace_event_init(tp_event, p_event);
 	if (ret)
 		destroy_local_trace_kprobe(tp_event);
 	mutex_unlock(&event_mutex);
-out:
-	kfree(func);
 	return ret;
 }
 
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 33272a7b6912..86a7aada853a 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -237,13 +237,18 @@  static int kprobe_dispatcher(struct kprobe *kp, struct pt_regs *regs);
 static int kretprobe_dispatcher(struct kretprobe_instance *ri,
 				struct pt_regs *regs);
 
+static void __free_trace_kprobe(struct trace_kprobe *tk)
+{
+	kfree(tk->symbol);
+	free_percpu(tk->nhit);
+	kfree(tk);
+}
+
 static void free_trace_kprobe(struct trace_kprobe *tk)
 {
 	if (tk) {
 		trace_probe_cleanup(&tk->tp);
-		kfree(tk->symbol);
-		free_percpu(tk->nhit);
-		kfree(tk);
+		__free_trace_kprobe(tk);
 	}
 }
 
@@ -1796,7 +1801,7 @@  static int unregister_kprobe_event(struct trace_kprobe *tk)
 /* create a trace_kprobe, but don't add it to global lists */
 struct trace_event_call *
 create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
-			  bool is_return)
+			  bool is_return, struct trace_event_call *old)
 {
 	enum probe_print_type ptype;
 	struct trace_kprobe *tk;
@@ -1820,6 +1825,28 @@  create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
 		return ERR_CAST(tk);
 	}
 
+	if (old) {
+		struct trace_kprobe *tk_old;
+
+		tk_old = trace_kprobe_primary_from_call(old);
+		if (!tk_old) {
+			ret = -EINVAL;
+			goto error;
+		}
+
+		/* Append to existing event */
+		ret = trace_probe_append(&tk->tp, &tk_old->tp);
+		if (ret)
+			goto error;
+
+		/* Register k*probe */
+		ret = __register_trace_kprobe(tk);
+		if (ret)
+			goto error;
+
+		return trace_probe_event_call(&tk->tp);
+	}
+
 	init_trace_event_call(tk);
 
 	ptype = trace_kprobe_is_return(tk) ?
@@ -1841,6 +1868,8 @@  create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
 
 void destroy_local_trace_kprobe(struct trace_event_call *event_call)
 {
+	struct trace_probe_event *event;
+	struct trace_probe *pos, *tmp;
 	struct trace_kprobe *tk;
 
 	tk = trace_kprobe_primary_from_call(event_call);
@@ -1852,9 +1881,15 @@  void destroy_local_trace_kprobe(struct trace_event_call *event_call)
 		return;
 	}
 
-	__unregister_trace_kprobe(tk);
+	event = tk->tp.event;
+	list_for_each_entry_safe(pos, tmp, &event->probes, list) {
+		tk = container_of(pos, struct trace_kprobe, tp);
+		list_del_init(&pos->list);
+		__unregister_trace_kprobe(tk);
+		__free_trace_kprobe(tk);
+	}
 
-	free_trace_kprobe(tk);
+	trace_probe_event_free(event);
 }
 #endif /* CONFIG_PERF_EVENTS */
 
diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
index 3ed2a3f37297..2dff85aa21e9 100644
--- a/kernel/trace/trace_probe.c
+++ b/kernel/trace/trace_probe.c
@@ -974,7 +974,7 @@  int traceprobe_define_arg_fields(struct trace_event_call *event_call,
 	return 0;
 }
 
-static void trace_probe_event_free(struct trace_probe_event *tpe)
+void trace_probe_event_free(struct trace_probe_event *tpe)
 {
 	kfree(tpe->class.system);
 	kfree(tpe->call.name);
diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
index 99e7a5df025e..ba8e46c7efe8 100644
--- a/kernel/trace/trace_probe.h
+++ b/kernel/trace/trace_probe.h
@@ -333,6 +333,7 @@  int trace_probe_init(struct trace_probe *tp, const char *event,
 		     const char *group, bool alloc_filter);
 void trace_probe_cleanup(struct trace_probe *tp);
 int trace_probe_append(struct trace_probe *tp, struct trace_probe *to);
+void trace_probe_event_free(struct trace_probe_event *tpe);
 void trace_probe_unlink(struct trace_probe *tp);
 int trace_probe_register_event_call(struct trace_probe *tp);
 int trace_probe_add_file(struct trace_probe *tp, struct trace_event_file *file);
@@ -377,7 +378,7 @@  extern int traceprobe_set_print_fmt(struct trace_probe *tp, enum probe_print_typ
 #ifdef CONFIG_PERF_EVENTS
 extern struct trace_event_call *
 create_local_trace_kprobe(char *func, void *addr, unsigned long offs,
-			  bool is_return);
+			  bool is_return, struct trace_event_call *old);
 extern void destroy_local_trace_kprobe(struct trace_event_call *event_call);
 
 extern struct trace_event_call *