
[RFC,bpf-next] bpf: Add a kfunc filter function to 'struct btf_kfunc_id_set'.

Message ID 20230404060959.2259448-1-martin.lau@linux.dev (mailing list archive)
State RFC
Delegated to: BPF
Series [RFC,bpf-next] bpf: Add a kfunc filter function to 'struct btf_kfunc_id_set'.

Checks

Context Check Description
bpf/vmtest-bpf-next-PR fail merge-conflict
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-7 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-10 pending Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-31 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-32 success Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-33 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-34 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-35 success Logs for test_verifier on x86_64 with llvm-16
netdev/tree_selection success Clearly marked for bpf-next
netdev/apply fail Patch does not apply to bpf-next

Commit Message

Martin KaFai Lau April 4, 2023, 6:09 a.m. UTC
From: Martin KaFai Lau <martin.lau@kernel.org>

This set (https://lore.kernel.org/bpf/500d452b-f9d5-d01f-d365-2949c4fd37ab@linux.dev/)
needs to limit bpf_sock_destroy kfunc to BPF_TRACE_ITER.
In the earlier reply, I thought of adding a BTF_KFUNC_HOOK_TRACING_ITER.

Instead of adding BTF_KFUNC_HOOK_TRACING_ITER, I quickly hacked something
that added a callback filter to 'struct btf_kfunc_id_set'. The filter has
access to the prog such that it can filter by other properties of a prog.
The prog->expected_attach_type is used in the tracing_iter_filter().
It has mostly been compile-tested only, so it is still very rough, but it
should be good enough to show the idea.

I would like to hear what others think. It is pretty much the only
piece left for the above-mentioned set.

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
---
 include/linux/btf.h                           | 18 +++---
 kernel/bpf/btf.c                              | 59 +++++++++++++++----
 kernel/bpf/verifier.c                         |  7 ++-
 net/core/filter.c                             |  9 +++
 .../selftests/bpf/progs/sock_destroy_prog.c   | 10 ++++
 5 files changed, 83 insertions(+), 20 deletions(-)

Comments

Aditi Ghag April 5, 2023, 3:05 p.m. UTC | #1
Looks quite promising for the sock_destroy use case, and also as a generic filtering mechanism, but I'm not aware of other use cases. I haven't had a chance to apply this patch locally, but I'm planning to do it soon. Thanks!

> On Apr 3, 2023, at 11:09 PM, Martin KaFai Lau <martin.lau@linux.dev> wrote:
> 
> From: Martin KaFai Lau <martin.lau@kernel.org>
> 
> This set (https://lore.kernel.org/bpf/500d452b-f9d5-d01f-d365-2949c4fd37ab@linux.dev/)
> needs to limit bpf_sock_destroy kfunc to BPF_TRACE_ITER.
> In the earlier reply, I thought of adding a BTF_KFUNC_HOOK_TRACING_ITER.
> 
> Instead of adding BTF_KFUNC_HOOK_TRACING_ITER, I quickly hacked something
> that added a callback filter to 'struct btf_kfunc_id_set'. The filter has
> access to the prog such that it can filter by other properties of a prog.
> The prog->expected_attached_type is used in the tracing_iter_filter().
> It is mostly compiler tested only, so it is still very rough but should
> be good enough to show the idea.
> 
> would like to hear how others think. It is pretty much the only
> piece left for the above mentioned set.
> 
> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
> ---
> include/linux/btf.h                           | 18 +++---
> kernel/bpf/btf.c                              | 59 +++++++++++++++----
> kernel/bpf/verifier.c                         |  7 ++-
> net/core/filter.c                             |  9 +++
> .../selftests/bpf/progs/sock_destroy_prog.c   | 10 ++++
> 5 files changed, 83 insertions(+), 20 deletions(-)
> 
> diff --git a/include/linux/btf.h b/include/linux/btf.h
> index d53b10cc55f2..84c31b4f5785 100644
> --- a/include/linux/btf.h
> +++ b/include/linux/btf.h
> @@ -99,10 +99,14 @@ struct btf_type;
> union bpf_attr;
> struct btf_show;
> struct btf_id_set;
> +struct bpf_prog;
> +
> +typedef int (*btf_kfunc_filter_t)(const struct bpf_prog *prog, u32 kfunc_id);
> 
> struct btf_kfunc_id_set {
> 	struct module *owner;
> 	struct btf_id_set8 *set;
> +	btf_kfunc_filter_t filter;
> };
> 
> struct btf_id_dtor_kfunc {
> @@ -482,7 +486,6 @@ static inline void *btf_id_set8_contains(const struct btf_id_set8 *set, u32 id)
> 	return bsearch(&id, set->pairs, set->cnt, sizeof(set->pairs[0]), btf_id_cmp_func);
> }
> 
> -struct bpf_prog;
> struct bpf_verifier_log;
> 
> #ifdef CONFIG_BPF_SYSCALL
> @@ -490,10 +493,10 @@ const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id);
> const char *btf_name_by_offset(const struct btf *btf, u32 offset);
> struct btf *btf_parse_vmlinux(void);
> struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog);
> -u32 *btf_kfunc_id_set_contains(const struct btf *btf,
> -			       enum bpf_prog_type prog_type,
> -			       u32 kfunc_btf_id);
> -u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id);
> +u32 *btf_kfunc_id_set_contains(const struct btf *btf, u32 kfunc_btf_id,
> +			       const struct bpf_prog *prog);
> +u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id,
> +				const struct bpf_prog *prog);
> int register_btf_kfunc_id_set(enum bpf_prog_type prog_type,
> 			      const struct btf_kfunc_id_set *s);
> int register_btf_fmodret_id_set(const struct btf_kfunc_id_set *kset);
> @@ -520,8 +523,9 @@ static inline const char *btf_name_by_offset(const struct btf *btf,
> 	return NULL;
> }
> static inline u32 *btf_kfunc_id_set_contains(const struct btf *btf,
> -					     enum bpf_prog_type prog_type,
> -					     u32 kfunc_btf_id)
> +					     u32 kfunc_btf_id,
> +					     struct bpf_prog *prog)
> +
> {
> 	return NULL;
> }
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index b7e5a5510b91..7685af3ca9c0 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -218,10 +218,17 @@ enum btf_kfunc_hook {
> enum {
> 	BTF_KFUNC_SET_MAX_CNT = 256,
> 	BTF_DTOR_KFUNC_MAX_CNT = 256,
> +	BTF_KFUNC_FILTER_MAX_CNT = 16,
> +};
> +
> +struct btf_kfunc_hook_filter {
> +	btf_kfunc_filter_t filters[BTF_KFUNC_FILTER_MAX_CNT];
> +	u32 nr_filters;
> };
> 
> struct btf_kfunc_set_tab {
> 	struct btf_id_set8 *sets[BTF_KFUNC_HOOK_MAX];
> +	struct btf_kfunc_hook_filter hook_filters[BTF_KFUNC_HOOK_MAX];
> };
> 
> struct btf_id_dtor_kfunc_tab {
> @@ -7712,9 +7719,12 @@ static int btf_check_kfunc_protos(struct btf *btf, u32 func_id, u32 func_flags)
> /* Kernel Function (kfunc) BTF ID set registration API */
> 
> static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
> -				  struct btf_id_set8 *add_set)
> +				  const struct btf_kfunc_id_set *kset)
> {
> +	struct btf_kfunc_hook_filter *hook_filter;
> +	struct btf_id_set8 *add_set = kset->set;
> 	bool vmlinux_set = !btf_is_module(btf);
> +	bool add_filter = !!kset->filter;
> 	struct btf_kfunc_set_tab *tab;
> 	struct btf_id_set8 *set;
> 	u32 set_cnt;
> @@ -7729,6 +7739,20 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
> 		return 0;
> 
> 	tab = btf->kfunc_set_tab;
> +
> +	if (tab && add_filter) {
> +		int i;
> +
> +		hook_filter = &tab->hook_filters[hook];
> +		for (i = 0; i < hook_filter->nr_filters; i++) {
> +			if (hook_filter->filters[i] == kset->filter)
> +				add_filter = false;
> +		}

Not intimately familiar with the internals of kfuncs, so mainly for my understanding, what's the use case for having multiple filters?

> +
> +		if (add_filter && hook_filter->nr_filters == BTF_KFUNC_FILTER_MAX_CNT)
> +			return -E2BIG;
> +	}
> +
> 	if (!tab) {
> 		tab = kzalloc(sizeof(*tab), GFP_KERNEL | __GFP_NOWARN);
> 		if (!tab)
> @@ -7751,7 +7775,7 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
> 	 */
> 	if (!vmlinux_set) {
> 		tab->sets[hook] = add_set;
> -		return 0;
> +		goto do_add_filter;
> 	}
> 
> 	/* In case of vmlinux sets, there may be more than one set being
> @@ -7793,6 +7817,11 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
> 
> 	sort(set->pairs, set->cnt, sizeof(set->pairs[0]), btf_id_cmp_func, NULL);
> 
> +do_add_filter:
> +	if (add_filter) {
> +		hook_filter = &tab->hook_filters[hook];
> +		hook_filter->filters[hook_filter->nr_filters++] = kset->filter;
> +	}
> 	return 0;
> end:
> 	btf_free_kfunc_set_tab(btf);
> @@ -7801,15 +7830,22 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
> 
> static u32 *__btf_kfunc_id_set_contains(const struct btf *btf,
> 					enum btf_kfunc_hook hook,
> +					const struct bpf_prog *prog,
> 					u32 kfunc_btf_id)
> {
> +	struct btf_kfunc_hook_filter *hook_filter;
> 	struct btf_id_set8 *set;
> -	u32 *id;
> +	u32 *id, i;
> 
> 	if (hook >= BTF_KFUNC_HOOK_MAX)
> 		return NULL;
> 	if (!btf->kfunc_set_tab)
> 		return NULL;
> +	hook_filter = &btf->kfunc_set_tab->hook_filters[hook];
> +	for (i = 0; i < hook_filter->nr_filters; i++) {
> +		if (hook_filter->filters[i](prog, kfunc_btf_id))
> +			return NULL;
> +	}
> 	set = btf->kfunc_set_tab->sets[hook];
> 	if (!set)
> 		return NULL;
> @@ -7862,23 +7898,25 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
>  * protection for looking up a well-formed btf->kfunc_set_tab.
>  */
> u32 *btf_kfunc_id_set_contains(const struct btf *btf,
> -			       enum bpf_prog_type prog_type,
> -			       u32 kfunc_btf_id)
> +			       u32 kfunc_btf_id,
> +			       const struct bpf_prog *prog)
> {
> +	enum bpf_prog_type prog_type = resolve_prog_type(prog);
> 	enum btf_kfunc_hook hook;
> 	u32 *kfunc_flags;
> 
> -	kfunc_flags = __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id);
> +	kfunc_flags = __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, prog, kfunc_btf_id);
> 	if (kfunc_flags)
> 		return kfunc_flags;
> 
> 	hook = bpf_prog_type_to_kfunc_hook(prog_type);
> -	return __btf_kfunc_id_set_contains(btf, hook, kfunc_btf_id);
> +	return __btf_kfunc_id_set_contains(btf, hook, prog, kfunc_btf_id);
> }
> 
> -u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id)
> +u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id,
> +				const struct bpf_prog *prog)
> {
> -	return __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_FMODRET, kfunc_btf_id);
> +	return __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_FMODRET, prog, kfunc_btf_id);
> }
> 
> static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook,
> @@ -7909,7 +7947,8 @@ static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook,
> 			goto err_out;
> 	}
> 
> -	ret = btf_populate_kfunc_set(btf, hook, kset->set);
> +	ret = btf_populate_kfunc_set(btf, hook, kset);
> +
> err_out:
> 	btf_put(btf);
> 	return ret;
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 20eb2015842f..1a854cdb2566 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -10553,7 +10553,7 @@ static int fetch_kfunc_meta(struct bpf_verifier_env *env,
> 		*kfunc_name = func_name;
> 	func_proto = btf_type_by_id(desc_btf, func->type);
> 
> -	kfunc_flags = btf_kfunc_id_set_contains(desc_btf, resolve_prog_type(env->prog), func_id);
> +	kfunc_flags = btf_kfunc_id_set_contains(desc_btf, func_id, env->prog);
> 	if (!kfunc_flags) {
> 		return -EACCES;
> 	}
> @@ -18521,7 +18521,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
> 				 * in the fmodret id set with the KF_SLEEPABLE flag.
> 				 */
> 				else {
> -					u32 *flags = btf_kfunc_is_modify_return(btf, btf_id);
> +					u32 *flags = btf_kfunc_is_modify_return(btf, btf_id,
> +										prog);
> 
> 					if (flags && (*flags & KF_SLEEPABLE))
> 						ret = 0;
> @@ -18549,7 +18550,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
> 				return -EINVAL;
> 			}
> 			ret = -EINVAL;
> -			if (btf_kfunc_is_modify_return(btf, btf_id) ||
> +			if (btf_kfunc_is_modify_return(btf, btf_id, prog) ||
> 			    !check_attach_modify_return(addr, tname))
> 				ret = 0;
> 			if (ret) {
> diff --git a/net/core/filter.c b/net/core/filter.c
> index a70c7b9876fa..5e5e6f9baccc 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -11768,9 +11768,18 @@ BTF_SET8_START(sock_destroy_kfunc_set)
> BTF_ID_FLAGS(func, bpf_sock_destroy)
> BTF_SET8_END(sock_destroy_kfunc_set)
> 
> +static int tracing_iter_filter(const struct bpf_prog *prog, u32 kfunc_id)
> +{
> +	if (btf_id_set8_contains(&sock_destroy_kfunc_set, kfunc_id) &&
> +	    prog->expected_attach_type != BPF_TRACE_ITER)
> +		return -EACCES;
> +	return 0;
> +}
> +
> static const struct btf_kfunc_id_set bpf_sock_destroy_kfunc_set = {
> 	.owner = THIS_MODULE,
> 	.set   = &sock_destroy_kfunc_set,
> +	.filter = tracing_iter_filter,
> };
> 
> static int init_subsystem(void)
> diff --git a/tools/testing/selftests/bpf/progs/sock_destroy_prog.c b/tools/testing/selftests/bpf/progs/sock_destroy_prog.c
> index 5c1e65d50598..5204e44f6ae4 100644
> --- a/tools/testing/selftests/bpf/progs/sock_destroy_prog.c
> +++ b/tools/testing/selftests/bpf/progs/sock_destroy_prog.c
> @@ -3,6 +3,7 @@
> #include "vmlinux.h"
> #include <bpf/bpf_helpers.h>
> #include <bpf/bpf_endian.h>
> +#include <bpf/bpf_tracing.h>
> 
> #include "bpf_tracing_net.h"
> 
> @@ -144,4 +145,13 @@ int iter_udp6_server(struct bpf_iter__udp *ctx)
> 	return 0;
> }
> 
> +SEC("tp_btf/tcp_destroy_sock")
> +int BPF_PROG(trace_tcp_destroy_sock, struct sock *sk)
> +{
> +	/* should not load */
> +	bpf_sock_destroy((struct sock_common *)sk);
> +
> +	return 0;
> +}
> +
> char _license[] SEC("license") = "GPL";
> -- 
> 2.34.1
>
Martin KaFai Lau April 5, 2023, 5:26 p.m. UTC | #2
On 4/5/23 8:05 AM, Aditi Ghag wrote:
> Looks quite promising for the sock_destroy use case, and also as a generic filtering mechanism, but I'm not aware of other use cases. I haven't had a chance to apply this patch locally, but I'm planning to do it soon. Thanks!

Please don't top post.

Another use case is to allow different sets of kfuncs for struct_ops programs, from
David: https://lore.kernel.org/bpf/Y9KLHZ1TNXVHdVKm@maniforge/

>> From: Martin KaFai Lau <martin.lau@kernel.org>
>>
>> This set (https://lore.kernel.org/bpf/500d452b-f9d5-d01f-d365-2949c4fd37ab@linux.dev/)
>> needs to limit bpf_sock_destroy kfunc to BPF_TRACE_ITER.
>> In the earlier reply, I thought of adding a BTF_KFUNC_HOOK_TRACING_ITER.
>>
>> Instead of adding BTF_KFUNC_HOOK_TRACING_ITER, I quickly hacked something
>> that added a callback filter to 'struct btf_kfunc_id_set'. The filter has
>> access to the prog such that it can filter by other properties of a prog.
>> The prog->expected_attached_type is used in the tracing_iter_filter().
>> It is mostly compiler tested only, so it is still very rough but should
>> be good enough to show the idea.
>>
>> would like to hear how others think. It is pretty much the only
>> piece left for the above mentioned set.
Aditi Ghag April 10, 2023, 11:05 p.m. UTC | #3
> On Apr 3, 2023, at 11:09 PM, Martin KaFai Lau <martin.lau@linux.dev> wrote:
> 
> From: Martin KaFai Lau <martin.lau@kernel.org>
> 
> This set (https://lore.kernel.org/bpf/500d452b-f9d5-d01f-d365-2949c4fd37ab@linux.dev/)
> needs to limit bpf_sock_destroy kfunc to BPF_TRACE_ITER.
> In the earlier reply, I thought of adding a BTF_KFUNC_HOOK_TRACING_ITER.
> 
> Instead of adding BTF_KFUNC_HOOK_TRACING_ITER, I quickly hacked something
> that added a callback filter to 'struct btf_kfunc_id_set'. The filter has
> access to the prog such that it can filter by other properties of a prog.
> The prog->expected_attached_type is used in the tracing_iter_filter().
> It is mostly compiler tested only, so it is still very rough but should
> be good enough to show the idea.
> 
> would like to hear how others think. It is pretty much the only
> piece left for the above mentioned set.
> 
> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
> ---
> include/linux/btf.h                           | 18 +++---
> kernel/bpf/btf.c                              | 59 +++++++++++++++----
> kernel/bpf/verifier.c                         |  7 ++-
> net/core/filter.c                             |  9 +++
> .../selftests/bpf/progs/sock_destroy_prog.c   | 10 ++++
> 5 files changed, 83 insertions(+), 20 deletions(-)
> 
> diff --git a/include/linux/btf.h b/include/linux/btf.h
> index d53b10cc55f2..84c31b4f5785 100644
> --- a/include/linux/btf.h
> +++ b/include/linux/btf.h
> @@ -99,10 +99,14 @@ struct btf_type;
> union bpf_attr;
> struct btf_show;
> struct btf_id_set;
> +struct bpf_prog;
> +
> +typedef int (*btf_kfunc_filter_t)(const struct bpf_prog *prog, u32 kfunc_id);
> 
> struct btf_kfunc_id_set {
> 	struct module *owner;
> 	struct btf_id_set8 *set;
> +	btf_kfunc_filter_t filter;
> };
> 
> struct btf_id_dtor_kfunc {
> @@ -482,7 +486,6 @@ static inline void *btf_id_set8_contains(const struct btf_id_set8 *set, u32 id)
> 	return bsearch(&id, set->pairs, set->cnt, sizeof(set->pairs[0]), btf_id_cmp_func);
> }
> 
> -struct bpf_prog;
> struct bpf_verifier_log;
> 
> #ifdef CONFIG_BPF_SYSCALL
> @@ -490,10 +493,10 @@ const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id);
> const char *btf_name_by_offset(const struct btf *btf, u32 offset);
> struct btf *btf_parse_vmlinux(void);
> struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog);
> -u32 *btf_kfunc_id_set_contains(const struct btf *btf,
> -			       enum bpf_prog_type prog_type,
> -			       u32 kfunc_btf_id);
> -u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id);
> +u32 *btf_kfunc_id_set_contains(const struct btf *btf, u32 kfunc_btf_id,
> +			       const struct bpf_prog *prog);
> +u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id,
> +				const struct bpf_prog *prog);
> int register_btf_kfunc_id_set(enum bpf_prog_type prog_type,
> 			      const struct btf_kfunc_id_set *s);
> int register_btf_fmodret_id_set(const struct btf_kfunc_id_set *kset);
> @@ -520,8 +523,9 @@ static inline const char *btf_name_by_offset(const struct btf *btf,
> 	return NULL;
> }
> static inline u32 *btf_kfunc_id_set_contains(const struct btf *btf,
> -					     enum bpf_prog_type prog_type,
> -					     u32 kfunc_btf_id)
> +					     u32 kfunc_btf_id,
> +					     struct bpf_prog *prog)
> +
> {
> 	return NULL;
> }
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index b7e5a5510b91..7685af3ca9c0 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -218,10 +218,17 @@ enum btf_kfunc_hook {
> enum {
> 	BTF_KFUNC_SET_MAX_CNT = 256,
> 	BTF_DTOR_KFUNC_MAX_CNT = 256,
> +	BTF_KFUNC_FILTER_MAX_CNT = 16,
> +};
> +
> +struct btf_kfunc_hook_filter {
> +	btf_kfunc_filter_t filters[BTF_KFUNC_FILTER_MAX_CNT];
> +	u32 nr_filters;
> };
> 
> struct btf_kfunc_set_tab {
> 	struct btf_id_set8 *sets[BTF_KFUNC_HOOK_MAX];
> +	struct btf_kfunc_hook_filter hook_filters[BTF_KFUNC_HOOK_MAX];
> };
> 
> struct btf_id_dtor_kfunc_tab {
> @@ -7712,9 +7719,12 @@ static int btf_check_kfunc_protos(struct btf *btf, u32 func_id, u32 func_flags)
> /* Kernel Function (kfunc) BTF ID set registration API */
> 
> static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
> -				  struct btf_id_set8 *add_set)
> +				  const struct btf_kfunc_id_set *kset)
> {
> +	struct btf_kfunc_hook_filter *hook_filter;
> +	struct btf_id_set8 *add_set = kset->set;
> 	bool vmlinux_set = !btf_is_module(btf);
> +	bool add_filter = !!kset->filter;
> 	struct btf_kfunc_set_tab *tab;
> 	struct btf_id_set8 *set;
> 	u32 set_cnt;
> @@ -7729,6 +7739,20 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
> 		return 0;
> 
> 	tab = btf->kfunc_set_tab;
> +
> +	if (tab && add_filter) {
> +		int i;
> +
> +		hook_filter = &tab->hook_filters[hook];
> +		for (i = 0; i < hook_filter->nr_filters; i++) {
> +			if (hook_filter->filters[i] == kset->filter)
> +				add_filter = false;
> +		}
> +
> +		if (add_filter && hook_filter->nr_filters == BTF_KFUNC_FILTER_MAX_CNT)
> +			return -E2BIG;
> +	}
> +
> 	if (!tab) {
> 		tab = kzalloc(sizeof(*tab), GFP_KERNEL | __GFP_NOWARN);
> 		if (!tab)
> @@ -7751,7 +7775,7 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
> 	 */
> 	if (!vmlinux_set) {
> 		tab->sets[hook] = add_set;
> -		return 0;
> +		goto do_add_filter;
> 	}
> 
> 	/* In case of vmlinux sets, there may be more than one set being
> @@ -7793,6 +7817,11 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
> 
> 	sort(set->pairs, set->cnt, sizeof(set->pairs[0]), btf_id_cmp_func, NULL);
> 
> +do_add_filter:
> +	if (add_filter) {
> +		hook_filter = &tab->hook_filters[hook];
> +		hook_filter->filters[hook_filter->nr_filters++] = kset->filter;
> +	}
> 	return 0;
> end:
> 	btf_free_kfunc_set_tab(btf);
> @@ -7801,15 +7830,22 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
> 
> static u32 *__btf_kfunc_id_set_contains(const struct btf *btf,
> 					enum btf_kfunc_hook hook,
> +					const struct bpf_prog *prog,
> 					u32 kfunc_btf_id)
> {
> +	struct btf_kfunc_hook_filter *hook_filter;
> 	struct btf_id_set8 *set;
> -	u32 *id;
> +	u32 *id, i;
> 
> 	if (hook >= BTF_KFUNC_HOOK_MAX)
> 		return NULL;
> 	if (!btf->kfunc_set_tab)
> 		return NULL;
> +	hook_filter = &btf->kfunc_set_tab->hook_filters[hook];
> +	for (i = 0; i < hook_filter->nr_filters; i++) {
> +		if (hook_filter->filters[i](prog, kfunc_btf_id))
> +			return NULL;
> +	}
> 	set = btf->kfunc_set_tab->sets[hook];
> 	if (!set)
> 		return NULL;
> @@ -7862,23 +7898,25 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
>  * protection for looking up a well-formed btf->kfunc_set_tab.
>  */
> u32 *btf_kfunc_id_set_contains(const struct btf *btf,
> -			       enum bpf_prog_type prog_type,
> -			       u32 kfunc_btf_id)
> +			       u32 kfunc_btf_id,
> +			       const struct bpf_prog *prog)
> {
> +	enum bpf_prog_type prog_type = resolve_prog_type(prog);
> 	enum btf_kfunc_hook hook;
> 	u32 *kfunc_flags;
> 
> -	kfunc_flags = __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id);
> +	kfunc_flags = __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, prog, kfunc_btf_id);
> 	if (kfunc_flags)
> 		return kfunc_flags;
> 
> 	hook = bpf_prog_type_to_kfunc_hook(prog_type);
> -	return __btf_kfunc_id_set_contains(btf, hook, kfunc_btf_id);
> +	return __btf_kfunc_id_set_contains(btf, hook, prog, kfunc_btf_id);
> }
> 
> -u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id)
> +u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id,
> +				const struct bpf_prog *prog)
> {
> -	return __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_FMODRET, kfunc_btf_id);
> +	return __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_FMODRET, prog, kfunc_btf_id);
> }
> 
> static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook,
> @@ -7909,7 +7947,8 @@ static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook,
> 			goto err_out;
> 	}
> 
> -	ret = btf_populate_kfunc_set(btf, hook, kset->set);
> +	ret = btf_populate_kfunc_set(btf, hook, kset);
> +
> err_out:
> 	btf_put(btf);
> 	return ret;
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 20eb2015842f..1a854cdb2566 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -10553,7 +10553,7 @@ static int fetch_kfunc_meta(struct bpf_verifier_env *env,
> 		*kfunc_name = func_name;
> 	func_proto = btf_type_by_id(desc_btf, func->type);
> 
> -	kfunc_flags = btf_kfunc_id_set_contains(desc_btf, resolve_prog_type(env->prog), func_id);
> +	kfunc_flags = btf_kfunc_id_set_contains(desc_btf, func_id, env->prog);
> 	if (!kfunc_flags) {
> 		return -EACCES;
> 	}
> @@ -18521,7 +18521,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
> 				 * in the fmodret id set with the KF_SLEEPABLE flag.
> 				 */
> 				else {
> -					u32 *flags = btf_kfunc_is_modify_return(btf, btf_id);
> +					u32 *flags = btf_kfunc_is_modify_return(btf, btf_id,
> +										prog);
> 
> 					if (flags && (*flags & KF_SLEEPABLE))
> 						ret = 0;
> @@ -18549,7 +18550,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
> 				return -EINVAL;
> 			}
> 			ret = -EINVAL;
> -			if (btf_kfunc_is_modify_return(btf, btf_id) ||
> +			if (btf_kfunc_is_modify_return(btf, btf_id, prog) ||
> 			    !check_attach_modify_return(addr, tname))
> 				ret = 0;
> 			if (ret) {
> diff --git a/net/core/filter.c b/net/core/filter.c
> index a70c7b9876fa..5e5e6f9baccc 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -11768,9 +11768,18 @@ BTF_SET8_START(sock_destroy_kfunc_set)
> BTF_ID_FLAGS(func, bpf_sock_destroy)
> BTF_SET8_END(sock_destroy_kfunc_set)
> 
> +static int tracing_iter_filter(const struct bpf_prog *prog, u32 kfunc_id)
> +{
> +	if (btf_id_set8_contains(&sock_destroy_kfunc_set, kfunc_id) &&
> +	    prog->expected_attach_type != BPF_TRACE_ITER)
> +		return -EACCES;
> +	return 0;
> +}
> +
> static const struct btf_kfunc_id_set bpf_sock_destroy_kfunc_set = {
> 	.owner = THIS_MODULE,
> 	.set   = &sock_destroy_kfunc_set,
> +	.filter = tracing_iter_filter,
> };
> 
> static int init_subsystem(void)
> diff --git a/tools/testing/selftests/bpf/progs/sock_destroy_prog.c b/tools/testing/selftests/bpf/progs/sock_destroy_prog.c
> index 5c1e65d50598..5204e44f6ae4 100644
> --- a/tools/testing/selftests/bpf/progs/sock_destroy_prog.c
> +++ b/tools/testing/selftests/bpf/progs/sock_destroy_prog.c
> @@ -3,6 +3,7 @@
> #include "vmlinux.h"
> #include <bpf/bpf_helpers.h>
> #include <bpf/bpf_endian.h>
> +#include <bpf/bpf_tracing.h>
> 
> #include "bpf_tracing_net.h"
> 
> @@ -144,4 +145,13 @@ int iter_udp6_server(struct bpf_iter__udp *ctx)
> 	return 0;
> }
> 
> +SEC("tp_btf/tcp_destroy_sock")
> +int BPF_PROG(trace_tcp_destroy_sock, struct sock *sk)
> +{
> +	/* should not load */
> +	bpf_sock_destroy((struct sock_common *)sk);
> +
> +	return 0;
> +}

Applied and tested the patch locally. It works as expected: tp_btf/tcp_destroy_sock is rejected, while the other programs in sock_destroy_prog load successfully.

1: (85) call bpf_sock_destroy#75448
calling kernel function bpf_sock_destroy is not allowed


> +
> char _license[] SEC("license") = "GPL";
> -- 
> 2.34.1
>
Aditi Ghag April 12, 2023, 3:21 p.m. UTC | #4
> On Apr 10, 2023, at 4:05 PM, Aditi Ghag <aditi.ghag@isovalent.com> wrote:
> 
> 
> 
>> On Apr 3, 2023, at 11:09 PM, Martin KaFai Lau <martin.lau@linux.dev> wrote:
>> 
>> From: Martin KaFai Lau <martin.lau@kernel.org>
>> 
>> This set (https://lore.kernel.org/bpf/500d452b-f9d5-d01f-d365-2949c4fd37ab@linux.dev/)
>> needs to limit bpf_sock_destroy kfunc to BPF_TRACE_ITER.
>> In the earlier reply, I thought of adding a BTF_KFUNC_HOOK_TRACING_ITER.
>> 
>> Instead of adding BTF_KFUNC_HOOK_TRACING_ITER, I quickly hacked something
>> that added a callback filter to 'struct btf_kfunc_id_set'. The filter has
>> access to the prog such that it can filter by other properties of a prog.
>> The prog->expected_attached_type is used in the tracing_iter_filter().
>> It is mostly compiler tested only, so it is still very rough but should
>> be good enough to show the idea.
>> 
>> would like to hear how others think. It is pretty much the only
>> piece left for the above mentioned set.
>> 
>> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
>> ---
>> include/linux/btf.h                           | 18 +++---
>> kernel/bpf/btf.c                              | 59 +++++++++++++++----
>> kernel/bpf/verifier.c                         |  7 ++-
>> net/core/filter.c                             |  9 +++
>> .../selftests/bpf/progs/sock_destroy_prog.c   | 10 ++++
>> 5 files changed, 83 insertions(+), 20 deletions(-)
>> 
>> diff --git a/include/linux/btf.h b/include/linux/btf.h
>> index d53b10cc55f2..84c31b4f5785 100644
>> --- a/include/linux/btf.h
>> +++ b/include/linux/btf.h
>> @@ -99,10 +99,14 @@ struct btf_type;
>> union bpf_attr;
>> struct btf_show;
>> struct btf_id_set;
>> +struct bpf_prog;
>> +
>> +typedef int (*btf_kfunc_filter_t)(const struct bpf_prog *prog, u32 kfunc_id);
>> 
>> struct btf_kfunc_id_set {
>> 	struct module *owner;
>> 	struct btf_id_set8 *set;
>> +	btf_kfunc_filter_t filter;
>> };
>> 
>> struct btf_id_dtor_kfunc {
>> @@ -482,7 +486,6 @@ static inline void *btf_id_set8_contains(const struct btf_id_set8 *set, u32 id)
>> 	return bsearch(&id, set->pairs, set->cnt, sizeof(set->pairs[0]), btf_id_cmp_func);
>> }
>> 
>> -struct bpf_prog;
>> struct bpf_verifier_log;
>> 
>> #ifdef CONFIG_BPF_SYSCALL
>> @@ -490,10 +493,10 @@ const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id);
>> const char *btf_name_by_offset(const struct btf *btf, u32 offset);
>> struct btf *btf_parse_vmlinux(void);
>> struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog);
>> -u32 *btf_kfunc_id_set_contains(const struct btf *btf,
>> -			       enum bpf_prog_type prog_type,
>> -			       u32 kfunc_btf_id);
>> -u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id);
>> +u32 *btf_kfunc_id_set_contains(const struct btf *btf, u32 kfunc_btf_id,
>> +			       const struct bpf_prog *prog);
>> +u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id,
>> +				const struct bpf_prog *prog);
>> int register_btf_kfunc_id_set(enum bpf_prog_type prog_type,
>> 			      const struct btf_kfunc_id_set *s);
>> int register_btf_fmodret_id_set(const struct btf_kfunc_id_set *kset);
>> @@ -520,8 +523,9 @@ static inline const char *btf_name_by_offset(const struct btf *btf,
>> 	return NULL;
>> }
>> static inline u32 *btf_kfunc_id_set_contains(const struct btf *btf,
>> -					     enum bpf_prog_type prog_type,
>> -					     u32 kfunc_btf_id)
>> +					     u32 kfunc_btf_id,
>> +					     u32 kfunc_btf_id,
>> +					     const struct bpf_prog *prog)
>> {
>> 	return NULL;
>> }
>> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
>> index b7e5a5510b91..7685af3ca9c0 100644
>> --- a/kernel/bpf/btf.c
>> +++ b/kernel/bpf/btf.c
>> @@ -218,10 +218,17 @@ enum btf_kfunc_hook {
>> enum {
>> 	BTF_KFUNC_SET_MAX_CNT = 256,
>> 	BTF_DTOR_KFUNC_MAX_CNT = 256,
>> +	BTF_KFUNC_FILTER_MAX_CNT = 16,
>> +};
>> +
>> +struct btf_kfunc_hook_filter {
>> +	btf_kfunc_filter_t filters[BTF_KFUNC_FILTER_MAX_CNT];
>> +	u32 nr_filters;
>> };
>> 
>> struct btf_kfunc_set_tab {
>> 	struct btf_id_set8 *sets[BTF_KFUNC_HOOK_MAX];
>> +	struct btf_kfunc_hook_filter hook_filters[BTF_KFUNC_HOOK_MAX];
>> };
>> 
>> struct btf_id_dtor_kfunc_tab {
>> @@ -7712,9 +7719,12 @@ static int btf_check_kfunc_protos(struct btf *btf, u32 func_id, u32 func_flags)
>> /* Kernel Function (kfunc) BTF ID set registration API */
>> 
>> static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
>> -				  struct btf_id_set8 *add_set)
>> +				  const struct btf_kfunc_id_set *kset)
>> {
>> +	struct btf_kfunc_hook_filter *hook_filter;
>> +	struct btf_id_set8 *add_set = kset->set;
>> 	bool vmlinux_set = !btf_is_module(btf);
>> +	bool add_filter = !!kset->filter;
>> 	struct btf_kfunc_set_tab *tab;
>> 	struct btf_id_set8 *set;
>> 	u32 set_cnt;
>> @@ -7729,6 +7739,20 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
>> 		return 0;
>> 
>> 	tab = btf->kfunc_set_tab;
>> +
>> +	if (tab && add_filter) {
>> +		int i;
>> +
>> +		hook_filter = &tab->hook_filters[hook];
>> +		for (i = 0; i < hook_filter->nr_filters; i++) {
>> +			if (hook_filter->filters[i] == kset->filter)
>> +				add_filter = false;
>> +		}
>> +
>> +		if (add_filter && hook_filter->nr_filters == BTF_KFUNC_FILTER_MAX_CNT)
>> +			return -E2BIG;
>> +	}
>> +
>> 	if (!tab) {
>> 		tab = kzalloc(sizeof(*tab), GFP_KERNEL | __GFP_NOWARN);
>> 		if (!tab)
>> @@ -7751,7 +7775,7 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
>> 	 */
>> 	if (!vmlinux_set) {
>> 		tab->sets[hook] = add_set;
>> -		return 0;
>> +		goto do_add_filter;
>> 	}
>> 
>> 	/* In case of vmlinux sets, there may be more than one set being
>> @@ -7793,6 +7817,11 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
>> 
>> 	sort(set->pairs, set->cnt, sizeof(set->pairs[0]), btf_id_cmp_func, NULL);
>> 
>> +do_add_filter:
>> +	if (add_filter) {
>> +		hook_filter = &tab->hook_filters[hook];
>> +		hook_filter->filters[hook_filter->nr_filters++] = kset->filter;
>> +	}
>> 	return 0;
>> end:
>> 	btf_free_kfunc_set_tab(btf);
>> @@ -7801,15 +7830,22 @@ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook,
>> 
>> static u32 *__btf_kfunc_id_set_contains(const struct btf *btf,
>> 					enum btf_kfunc_hook hook,
>> +					const struct bpf_prog *prog,
>> 					u32 kfunc_btf_id)
>> {
>> +	struct btf_kfunc_hook_filter *hook_filter;
>> 	struct btf_id_set8 *set;
>> -	u32 *id;
>> +	u32 *id, i;
>> 
>> 	if (hook >= BTF_KFUNC_HOOK_MAX)
>> 		return NULL;
>> 	if (!btf->kfunc_set_tab)
>> 		return NULL;
>> +	hook_filter = &btf->kfunc_set_tab->hook_filters[hook];
>> +	for (i = 0; i < hook_filter->nr_filters; i++) {
>> +		if (hook_filter->filters[i](prog, kfunc_btf_id))
>> +			return NULL;
>> +	}
>> 	set = btf->kfunc_set_tab->sets[hook];
>> 	if (!set)
>> 		return NULL;
>> @@ -7862,23 +7898,25 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
>> * protection for looking up a well-formed btf->kfunc_set_tab.
>> */
>> u32 *btf_kfunc_id_set_contains(const struct btf *btf,
>> -			       enum bpf_prog_type prog_type,
>> -			       u32 kfunc_btf_id)
>> +			       u32 kfunc_btf_id,
>> +			       const struct bpf_prog *prog)
>> {
>> +	enum bpf_prog_type prog_type = resolve_prog_type(prog);
>> 	enum btf_kfunc_hook hook;
>> 	u32 *kfunc_flags;
>> 
>> -	kfunc_flags = __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, kfunc_btf_id);
>> +	kfunc_flags = __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_COMMON, prog, kfunc_btf_id);
>> 	if (kfunc_flags)
>> 		return kfunc_flags;
>> 
>> 	hook = bpf_prog_type_to_kfunc_hook(prog_type);
>> -	return __btf_kfunc_id_set_contains(btf, hook, kfunc_btf_id);
>> +	return __btf_kfunc_id_set_contains(btf, hook, prog, kfunc_btf_id);
>> }
>> 
>> -u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id)
>> +u32 *btf_kfunc_is_modify_return(const struct btf *btf, u32 kfunc_btf_id,
>> +				const struct bpf_prog *prog)
>> {
>> -	return __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_FMODRET, kfunc_btf_id);
>> +	return __btf_kfunc_id_set_contains(btf, BTF_KFUNC_HOOK_FMODRET, prog, kfunc_btf_id);
>> }
>> 
>> static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook,
>> @@ -7909,7 +7947,8 @@ static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook,
>> 			goto err_out;
>> 	}
>> 
>> -	ret = btf_populate_kfunc_set(btf, hook, kset->set);
>> +	ret = btf_populate_kfunc_set(btf, hook, kset);
>> +
>> err_out:
>> 	btf_put(btf);
>> 	return ret;
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 20eb2015842f..1a854cdb2566 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -10553,7 +10553,7 @@ static int fetch_kfunc_meta(struct bpf_verifier_env *env,
>> 		*kfunc_name = func_name;
>> 	func_proto = btf_type_by_id(desc_btf, func->type);
>> 
>> -	kfunc_flags = btf_kfunc_id_set_contains(desc_btf, resolve_prog_type(env->prog), func_id);
>> +	kfunc_flags = btf_kfunc_id_set_contains(desc_btf, func_id, env->prog);
>> 	if (!kfunc_flags) {
>> 		return -EACCES;
>> 	}
>> @@ -18521,7 +18521,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
>> 				 * in the fmodret id set with the KF_SLEEPABLE flag.
>> 				 */
>> 				else {
>> -					u32 *flags = btf_kfunc_is_modify_return(btf, btf_id);
>> +					u32 *flags = btf_kfunc_is_modify_return(btf, btf_id,
>> +										prog);
>> 
>> 					if (flags && (*flags & KF_SLEEPABLE))
>> 						ret = 0;
>> @@ -18549,7 +18550,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
>> 				return -EINVAL;
>> 			}
>> 			ret = -EINVAL;
>> -			if (btf_kfunc_is_modify_return(btf, btf_id) ||
>> +			if (btf_kfunc_is_modify_return(btf, btf_id, prog) ||
>> 			    !check_attach_modify_return(addr, tname))
>> 				ret = 0;
>> 			if (ret) {
>> diff --git a/net/core/filter.c b/net/core/filter.c
>> index a70c7b9876fa..5e5e6f9baccc 100644
>> --- a/net/core/filter.c
>> +++ b/net/core/filter.c
>> @@ -11768,9 +11768,18 @@ BTF_SET8_START(sock_destroy_kfunc_set)
>> BTF_ID_FLAGS(func, bpf_sock_destroy)
>> BTF_SET8_END(sock_destroy_kfunc_set)
>> 
>> +static int tracing_iter_filter(const struct bpf_prog *prog, u32 kfunc_id)
>> +{
>> +	if (btf_id_set8_contains(&sock_destroy_kfunc_set, kfunc_id) &&
>> +	    prog->expected_attach_type != BPF_TRACE_ITER)
>> +		return -EACCES;
>> +	return 0;
>> +}
>> +
>> static const struct btf_kfunc_id_set bpf_sock_destroy_kfunc_set = {
>> 	.owner = THIS_MODULE,
>> 	.set   = &sock_destroy_kfunc_set,
>> +	.filter = tracing_iter_filter,
>> };
>> 
>> static int init_subsystem(void)
>> diff --git a/tools/testing/selftests/bpf/progs/sock_destroy_prog.c b/tools/testing/selftests/bpf/progs/sock_destroy_prog.c
>> index 5c1e65d50598..5204e44f6ae4 100644
>> --- a/tools/testing/selftests/bpf/progs/sock_destroy_prog.c
>> +++ b/tools/testing/selftests/bpf/progs/sock_destroy_prog.c
>> @@ -3,6 +3,7 @@
>> #include "vmlinux.h"
>> #include <bpf/bpf_helpers.h>
>> #include <bpf/bpf_endian.h>
>> +#include <bpf/bpf_tracing.h>
>> 
>> #include "bpf_tracing_net.h"
>> 
>> @@ -144,4 +145,13 @@ int iter_udp6_server(struct bpf_iter__udp *ctx)
>> 	return 0;
>> }
>> 
>> +SEC("tp_btf/tcp_destroy_sock")
>> +int BPF_PROG(trace_tcp_destroy_sock, struct sock *sk)
>> +{
>> +	/* should not load */
>> +	bpf_sock_destroy((struct sock_common *)sk);
>> +
>> +	return 0;
>> +}
> 
> Applied, and tested the patch locally. It works as expected: tp_btf/tcp_destroy_sock is rejected, while the other programs in sock_destroy_prog load successfully. 
> 
> 1: (85) call bpf_sock_destroy#75448
> calling kernel function bpf_sock_destroy is not allowed

Martin: I'm ready to send the next revision, which addresses all the review comments on the current patch. How do you want to coordinate it with this commit?

> 
> 
>> +
>> char _license[] SEC("license") = "GPL";
>> -- 
>> 2.34.1