Message ID: 20230405004239.1375399-10-memxor@gmail.com
State: RFC
Delegated to: BPF
Series: Exceptions - 1/2
On Wed, Apr 05, 2023 at 02:42:39AM +0200, Kumar Kartikeya Dwivedi wrote:
> +static __noinline int throwing_subprog(struct __sk_buff *ctx)
> +{
> +	if (ctx)
> +		bpf_throw();
> +	return 0;
> +}
> +
> +__noinline int global_subprog(struct __sk_buff *ctx)
> +{
> +	return subprog(ctx) + 1;
> +}
> +
> +__noinline int throwing_global_subprog(struct __sk_buff *ctx)
> +{
> +	if (ctx)
> +		bpf_throw();
> +	return 0;
> +}
> +
> +static __noinline int exception_cb(void)
> +{
> +	return 16;
> +}
> +
> +SEC("tc")
> +int exception_throw_subprog(struct __sk_buff *ctx)
> +{
> +	volatile int i;
> +
> +	exception_cb();
> +	bpf_set_exception_callback(exception_cb);
> +	i = subprog(ctx);
> +	i += global_subprog(ctx) - 1;
> +	if (!i)
> +		return throwing_global_subprog(ctx);
> +	else
> +		return throwing_subprog(ctx);
> +	bpf_throw();
> +	return 0;
> +}
> +
> +__noinline int throwing_gfunc(volatile int i)
> +{
> +	bpf_assert_eq(i, 0);
> +	return 1;
> +}
> +
> +__noinline static int throwing_func(volatile int i)
> +{
> +	bpf_assert_lt(i, 1);
> +	return 1;
> +}

exception_cb() has no way of knowing which assert statement threw the exception.
How about extending the macro:

	bpf_assert_eq(i, 0, MY_INT_ERR);

or

	bpf_assert_eq(i, 0) { bpf_throw(MY_INT_ERR); }

bpf_throw can store it in prog->aux->exception and pass the address to the cb.

Also, I think we shouldn't complicate the verifier with auto release of resources.
If the user really wants to assert while a spin_lock is held, it should be the user's
job to specify which resources should be released. Can we make it look like:

	bpf_spin_lock(&lock);
	bpf_assert_eq(i, 0) {
		bpf_spin_unlock(&lock);
		bpf_throw(MY_INT_ERR);
	}
On Thu, Apr 06, 2023 at 04:38:09AM CEST, Alexei Starovoitov wrote:
> On Wed, Apr 05, 2023 at 02:42:39AM +0200, Kumar Kartikeya Dwivedi wrote:
> > [...patch snipped...]
>
> exception_cb() has no way of knowing which assert statement threw the exception.
> How about extending the macro:
> 	bpf_assert_eq(i, 0, MY_INT_ERR);
> or
> 	bpf_assert_eq(i, 0) { bpf_throw(MY_INT_ERR); }
>
> bpf_throw can store it in prog->aux->exception and pass the address to the cb.

I agree and will add passing of a value that gets passed to the callback
(probably just set it in the exception state), but I don't think prog->aux will
work; see previous mails.

> Also, I think we shouldn't complicate the verifier with auto release of resources.
> If the user really wants to assert while a spin_lock is held, it should be the user's
> job to specify which resources should be released.
> Can we make it look like:
>
> 	bpf_spin_lock(&lock);
> 	bpf_assert_eq(i, 0) {
> 		bpf_spin_unlock(&lock);
> 		bpf_throw(MY_INT_ERR);
> 	}

Do you mean just locks, or all resources? Then it kind of undermines the point of
having something like bpf_throw, IMO. It's easy to insert code at the point of the
throw, but it's not possible to do the same in the callers (unless we add a way
to 'catch' throws), so it only works for the particular cases where the callers
don't hold references (or in the main subprog).

There are also other ways to go about this whole thing, like having the compiler
emit calls to intrinsics which the BPF runtime provides (or having the call
configurable through compiler switches). The compiler already emits the landing
pad code to release resources; we would simply receive the table of pads indexed
by each throwing instruction, perform the necessary checks to ensure everything
is actually released correctly when control flow goes through them (e.g. when
exploring multiple paths through the same instruction), and unwind frame by
frame. That reduces the burden on both the verifier and the user, but it would
then probably need to be BPF C++, or a new language extension for BPF C. E.g.
there was something about defer, panic, recover, etc. in WG14:
https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2542.pdf. Having the compiler
do it is also probably easier if we want 'catch' style handlers.
On Fri, Apr 07, 2023 at 02:42:14AM +0200, Kumar Kartikeya Dwivedi wrote:
> On Thu, Apr 06, 2023 at 04:38:09AM CEST, Alexei Starovoitov wrote:
> > [...patch and assert-macro discussion snipped...]
> >
> > Also, I think we shouldn't complicate the verifier with auto release of resources.
> > If the user really wants to assert while a spin_lock is held, it should be the user's
> > job to specify which resources should be released.
> > Can we make it look like:
> >
> > 	bpf_spin_lock(&lock);
> > 	bpf_assert_eq(i, 0) {
> > 		bpf_spin_unlock(&lock);
> > 		bpf_throw(MY_INT_ERR);
> > 	}
>
> Do you mean just locks, or all resources? Then it kind of undermines the point of
> having something like bpf_throw, IMO. [...] it only works for the particular
> cases where the callers don't hold references (or in the main subprog).

That's a good point. This approach would force callers of functions that can
throw not to hold any referenced objects (spin_locks, sk_lookup, obj_new, etc.).
That indeed might be too restrictive.
It would be great if we could come up with a way for a bpf prog to release
resources explicitly, though. I'm not a fan of magic release of spin locks.

> There are also other ways to go about this whole thing, like having the compiler
> emit calls to intrinsics which the BPF runtime provides [...] Having the compiler
> do it is also probably easier if we want 'catch' style handlers.

Interesting idea.
__attribute__((cleanup(..))) may be handy, but that's a ton of work.
I'm in a camp of developers who enforce -fno-exceptions in C++ projects.
bpf_assert is not analogous to C++ exceptions. It shouldn't be seen as a
mechanism to simplify control flow and error handling.
On Fri, Apr 07, 2023 at 04:30:07AM CEST, Alexei Starovoitov wrote:
> On Fri, Apr 07, 2023 at 02:42:14AM +0200, Kumar Kartikeya Dwivedi wrote:
> > [...snipped...]
> >
> > Do you mean just locks, or all resources? [...]
>
> That's a good point. This approach would force callers of functions that can
> throw not to hold any referenced objects (spin_locks, sk_lookup, obj_new, etc.).
> That indeed might be too restrictive.
> It would be great if we could come up with a way for a bpf prog to release
> resources explicitly, though. I'm not a fan of magic release of spin locks.

I think to realize that, we need a way to 'catch' thrown exceptions in the
caller. The user would write a block of code that handles the release of
resources in the current scope and then resumes unwinding (implicitly, by
calling into some intrinsics). When we discussed this earlier, you rightly
mentioned the problems associated with introducing control flow into the
program that the compiler does not see (unlike try/catch in something like
C++, which has clear semantics).

But to take a step back, and about what you've said below:

> bpf_assert is not analogous to C++ exceptions. It shouldn't be seen as a
> mechanism to simplify control flow and error handling.

When programs get explicit control and handle the 'exceptional' condition, it
begins to mimic more of an alternative 'slow path' of the program when it
encounters an error, and starts resembling an alternative error handling
mechanism pretty much like C++ exceptions.

I think that was not the goal here; the automatic release of resources is
simply supposed to happen to ensure that the kernel's resource safety is not
violated while a program is aborting. An assertion should never fail at
runtime 99.9999% of the time, but when it does, the BPF runtime emits code to
ensure that the aborting program does not leave the kernel in an inconsistent
state (memory leaks, held locks that other programs can never take again, etc.).

So this will involve cleaning up every resource the program had acquired when
it threw. If we cannot ensure that we can safely release a resource (e.g.
throwing in NMI context, where a kptr_xchg'd pointer cannot be freed), we will
bail during verification.

> > [...compiler-emitted landing pads, WG14 defer proposal...]
>
> Interesting idea.
> __attribute__((cleanup(..))) may be handy, but that's a ton of work.
> I'm in a camp of developers who enforce -fno-exceptions in C++ projects.
> bpf_assert is not analogous to C++ exceptions. It shouldn't be seen as a
> mechanism to simplify control flow and error handling.
On Fri, Apr 07, 2023 at 05:12:24AM +0200, Kumar Kartikeya Dwivedi wrote:
> On Fri, Apr 07, 2023 at 04:30:07AM CEST, Alexei Starovoitov wrote:
> > [...snipped...]
> >
> > It would be great if we could come up with a way for a bpf prog to release
> > resources explicitly, though. I'm not a fan of magic release of spin locks.
>
> I think to realize that, we need a way to 'catch' thrown exceptions in the
> caller. The user would write a block of code that handles the release of
> resources in the current scope and then resumes unwinding (implicitly, by
> calling into some intrinsics).

Or we simply don't allow calling subprogs that can throw while holding
resources.

> [...]
>
> So this will involve cleaning up every resource the program had acquired when
> it threw. If we cannot ensure that we can safely release a resource (e.g.
> throwing in NMI context, where a kptr_xchg'd pointer cannot be freed), we will
> bail during verification.

Exactly. +1 to all of the above.
The progs shouldn't be paying a run-time cost for this 0.0001% case.
diff --git a/tools/testing/selftests/bpf/prog_tests/exceptions.c b/tools/testing/selftests/bpf/prog_tests/exceptions.c
new file mode 100644
index 000000000000..342f44a12c65
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/exceptions.c
@@ -0,0 +1,240 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <network_helpers.h>
+
+#include "exceptions.skel.h"
+#include "exceptions_ext.skel.h"
+#include "exceptions_fail.skel.h"
+
+static char log_buf[1024 * 1024];
+
+static void test_exceptions_failure(void)
+{
+	RUN_TESTS(exceptions_fail);
+}
+
+static void test_exceptions_success(void)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, ropts,
+		    .data_in = &pkt_v4,
+		    .data_size_in = sizeof(pkt_v4),
+		    .repeat = 1,
+	);
+	struct exceptions_ext *eskel = NULL;
+	struct exceptions *skel;
+	int ret;
+
+	skel = exceptions__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "exceptions__open_and_load"))
+		return;
+
+#define RUN_SUCCESS(_prog, return_val) \
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs._prog), &ropts); \
+	ASSERT_OK(ret, #_prog " prog run ret"); \
+	ASSERT_EQ(ropts.retval, return_val, #_prog " prog run retval");
+
+	RUN_SUCCESS(exception_throw_subprog, 16);
+	RUN_SUCCESS(exception_throw, 0);
+	RUN_SUCCESS(exception_throw_gfunc1, 1);
+	RUN_SUCCESS(exception_throw_gfunc2, 0);
+	RUN_SUCCESS(exception_throw_gfunc3, 1);
+	RUN_SUCCESS(exception_throw_gfunc4, 0);
+	RUN_SUCCESS(exception_throw_gfunc5, 1);
+	RUN_SUCCESS(exception_throw_gfunc6, 16);
+	RUN_SUCCESS(exception_throw_func1, 1);
+	RUN_SUCCESS(exception_throw_func2, 0);
+	RUN_SUCCESS(exception_throw_func3, 1);
+	RUN_SUCCESS(exception_throw_func4, 0);
+	RUN_SUCCESS(exception_throw_func5, 1);
+	RUN_SUCCESS(exception_throw_func6, 16);
+	RUN_SUCCESS(exception_throw_cb1, 0);
+	RUN_SUCCESS(exception_throw_cb2, 16);
+	RUN_SUCCESS(exception_throw_cb_diff, 16);
+	RUN_SUCCESS(exception_throw_kfunc1, 0);
+	RUN_SUCCESS(exception_throw_kfunc2, 1);
+
+#define RUN_EXT(load_ret, attach_err, expr, msg) \
+	{ \
+		LIBBPF_OPTS(bpf_object_open_opts, o, .kernel_log_buf = log_buf, \
+			    .kernel_log_size = sizeof(log_buf), \
+			    .kernel_log_level = 2); \
+		exceptions_ext__destroy(eskel); \
+		eskel = exceptions_ext__open_opts(&o); \
+		struct bpf_program *prog = NULL; \
+		struct bpf_link *link = NULL; \
+		if (!ASSERT_OK_PTR(eskel, "exceptions_ext__open")) \
+			goto done; \
+		(expr); \
+		ASSERT_OK_PTR(bpf_program__name(prog), bpf_program__name(prog)); \
+		if (!ASSERT_EQ(exceptions_ext__load(eskel), load_ret, \
+			       "exceptions_ext__load")) { \
+			printf("%s\n", log_buf); \
+			goto done; \
+		} \
+		if (load_ret != 0) { \
+			printf("%s\n", log_buf); \
+			if (!ASSERT_OK_PTR(strstr(log_buf, msg), "strstr")) \
+				goto done; \
+		} \
+		if (!load_ret && attach_err) { \
+			if (!ASSERT_ERR_PTR(link = bpf_program__attach(prog), "attach err")) \
+				goto done; \
+		} else if (!load_ret) { \
+			if (!ASSERT_OK_PTR(link = bpf_program__attach(prog), "attach ok")) \
+				goto done; \
+			bpf_link__destroy(link); \
+		} \
+	}
+
+	/* non-throwing fexit -> non-throwing subprog : OK */
+	RUN_EXT(0, false, ({
+		prog = eskel->progs.pfexit;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* throwing fexit -> non-throwing subprog : BAD */
+	RUN_EXT(0, true, ({
+		prog = eskel->progs.throwing_fexit;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* non-throwing fexit -> throwing subprog : OK */
+	RUN_EXT(0, false, ({
+		prog = eskel->progs.pfexit;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "throwing_subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* throwing fexit -> throwing subprog : BAD */
+	RUN_EXT(0, true, ({
+		prog = eskel->progs.throwing_fexit;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "throwing_subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* fmod_ret not allowed for subprog - Check so we remember to handle its
+	 * throwing specification compatibility with target when supported.
+	 */
+	RUN_EXT(-EINVAL, false, ({
+		prog = eskel->progs.pfmod_ret;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "subprog"), "set_attach_target"))
+			goto done;
+	}), "can't modify return codes of BPF program");
+
+	/* fmod_ret not allowed for global subprog - Check so we remember to
+	 * handle its throwing specification compatibility with target when
+	 * supported.
+	 */
+	RUN_EXT(-EINVAL, false, ({
+		prog = eskel->progs.pfmod_ret;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "global_subprog"), "set_attach_target"))
+			goto done;
+	}), "can't modify return codes of BPF program");
+
+	/* non-throwing extension -> non-throwing subprog : BAD (!global)
+	 * We need to handle and reject it for static subprogs when supported
+	 * when extension is throwing as not all callsites are marked to handle
+	 * them.
+	 */
+	RUN_EXT(-EINVAL, true, ({
+		prog = eskel->progs.extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "subprog"), "set_attach_target"))
+			goto done;
+	}), "subprog() is not a global function");
+
+	/* non-throwing extension -> throwing subprog : BAD (!global)
+	 * We need to handle and reject it for static subprogs when supported
+	 * when extension is throwing as not all callsites are marked to handle
+	 * them.
+	 */
+	RUN_EXT(-EINVAL, true, ({
+		prog = eskel->progs.extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "throwing_subprog"), "set_attach_target"))
+			goto done;
+	}), "throwing_subprog() is not a global function");
+
+	/* non-throwing extension -> non-throwing global subprog : OK */
+	RUN_EXT(0, false, ({
+		prog = eskel->progs.extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "global_subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* non-throwing extension -> throwing global subprog : OK */
+	RUN_EXT(0, false, ({
+		prog = eskel->progs.extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "throwing_global_subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* throwing extension -> throwing global subprog : OK */
+	RUN_EXT(0, false, ({
+		prog = eskel->progs.throwing_extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "throwing_global_subprog"), "set_attach_target"))
+			goto done;
+	}), "");
+
+	/* throwing extension -> main subprog : BAD (OUTER vs INNER mismatch) */
+	RUN_EXT(-EINVAL, false, ({
+		prog = eskel->progs.throwing_extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "exception_throw_subprog"), "set_attach_target"))
+			goto done;
+	}), "Cannot attach throwing extension to main subprog");
+
+	/* throwing extension -> non-throwing global subprog : BAD */
+	RUN_EXT(-EINVAL, false, ({
+		prog = eskel->progs.throwing_extension;
+		bpf_program__set_autoload(prog, true);
+		if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+			       bpf_program__fd(skel->progs.exception_throw_subprog),
+			       "global_subprog"), "set_attach_target"))
+			goto done;
+	}), "Cannot attach throwing extension to non-throwing subprog");
+done:
+	exceptions_ext__destroy(eskel);
+	exceptions__destroy(skel);
+}
+
+void test_exceptions(void)
+{
+	test_exceptions_failure();
+	test_exceptions_success();
+}
diff --git a/tools/testing/selftests/bpf/progs/exceptions.c b/tools/testing/selftests/bpf/progs/exceptions.c
new file mode 100644
index 000000000000..9a33f88e7e2c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions.c
@@ -0,0 +1,218 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+SEC("tc")
+int exception_throw(struct __sk_buff *ctx)
+{
+	if (ctx->data)
+		bpf_throw();
+	return 1;
+}
+
+
+static __noinline int subprog(struct __sk_buff *ctx)
+{
+	return ctx->len;
+}
+
+static __noinline int throwing_subprog(struct __sk_buff *ctx)
+{
+	if (ctx)
+		bpf_throw();
+	return 0;
+}
+
+__noinline int global_subprog(struct __sk_buff *ctx)
+{
+	return subprog(ctx) + 1;
+}
+
+__noinline int throwing_global_subprog(struct __sk_buff *ctx)
+{
+	if (ctx)
+		bpf_throw();
+	return 0;
+}
+
+static __noinline int exception_cb(void)
+{
+	return 16;
+}
+
+SEC("tc")
+int exception_throw_subprog(struct __sk_buff *ctx)
+{
+	volatile int i;
+
+	exception_cb();
+	bpf_set_exception_callback(exception_cb);
+	i = subprog(ctx);
+	i += global_subprog(ctx) - 1;
+	if (!i)
+		return throwing_global_subprog(ctx);
+	else
+		return throwing_subprog(ctx);
+	bpf_throw();
+	return 0;
+}
+
+__noinline int throwing_gfunc(volatile int i)
+{
+	bpf_assert_eq(i, 0);
+	return 1;
+}
+
+__noinline static int throwing_func(volatile int i)
+{
+	bpf_assert_lt(i, 1);
+	return 1;
+}
+
+SEC("tc")
+int exception_throw_gfunc1(void *ctx)
+{
+	return throwing_gfunc(0);
+}
+
+SEC("tc")
+__noinline int exception_throw_gfunc2()
+{
+	return throwing_gfunc(1);
+}
+
+__noinline int throwing_gfunc_2(volatile int i)
+{
+	return throwing_gfunc(i);
+}
+
+SEC("tc")
+int exception_throw_gfunc3(void *ctx)
+{
+	return throwing_gfunc_2(0);
+}
+
+SEC("tc")
+int exception_throw_gfunc4(void *ctx)
+{
+	return throwing_gfunc_2(1);
+}
+
+SEC("tc")
+int exception_throw_gfunc5(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_gfunc_2(0);
+}
+
+SEC("tc")
+int exception_throw_gfunc6(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_gfunc_2(1);
+}
+
+
+SEC("tc")
+int exception_throw_func1(void *ctx)
+{
+	return throwing_func(0);
+}
+
+SEC("tc")
+int exception_throw_func2(void *ctx)
+{
+	return throwing_func(1);
+}
+
+__noinline static int throwing_func_2(volatile int i)
+{
+	return throwing_func(i);
+}
+
+SEC("tc")
+int exception_throw_func3(void *ctx)
+{
+	return throwing_func_2(0);
+}
+
+SEC("tc")
+int exception_throw_func4(void *ctx)
+{
+	return throwing_func_2(1);
+}
+
+SEC("tc")
+int exception_throw_func5(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_func_2(0);
+}
+
+SEC("tc")
+int exception_throw_func6(void *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	return throwing_func_2(1);
+}
+
+__noinline static int loop_cb1(u32 index, int *ctx)
+{
+	bpf_throw();
+	return 0;
+}
+
+__noinline static int loop_cb2(u32 index, int *ctx)
+{
+	bpf_throw();
+	return 0;
+}
+
+SEC("tc")
+int exception_throw_cb1(struct __sk_buff *ctx)
+{
+	bpf_loop(5, loop_cb1, NULL, 0);
+	return 1;
+}
+
+SEC("tc")
+int exception_throw_cb2(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	bpf_loop(5, loop_cb1, NULL, 0);
+	return 0;
+}
+
+SEC("tc")
+int exception_throw_cb_diff(struct __sk_buff *ctx)
+{
+	bpf_set_exception_callback(exception_cb);
+	if (ctx->protocol)
+		bpf_loop(5, loop_cb1, NULL, 0);
+	else
+		bpf_loop(5, loop_cb2, NULL, 0);
+	return 1;
+}
+
+extern void bpf_kfunc_call_test_always_throws(void) __ksym;
+extern void bpf_kfunc_call_test_never_throws(void) __ksym;
+
+SEC("tc")
+int exception_throw_kfunc1(struct __sk_buff *ctx)
+{
+	bpf_kfunc_call_test_always_throws();
+	return 1;
+}
+
+SEC("tc")
+int exception_throw_kfunc2(struct __sk_buff *ctx)
+{
+	bpf_kfunc_call_test_never_throws();
+	return 1;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/exceptions_ext.c b/tools/testing/selftests/bpf/progs/exceptions_ext.c
new file mode 100644
index 000000000000..d3b9e32681ec
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions_ext.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_experimental.h"
+
+SEC("?freplace")
+int extension(struct __sk_buff *ctx)
+{
+	return 0;
+}
+
+SEC("?freplace")
+int throwing_extension(struct __sk_buff *ctx)
+{
+	bpf_throw();
+}
+
+SEC("?fexit")
+int pfexit(void *ctx)
+{
+	return 0;
+}
+
+SEC("?fexit")
+int throwing_fexit(void *ctx)
+{
+	bpf_throw();
+}
+
+SEC("?fmod_ret")
+int pfmod_ret(void *ctx)
+{
+	return 1;
+}
+
+SEC("?fmod_ret")
+int throwing_fmod_ret(void *ctx)
+{
+	bpf_throw();
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/exceptions_fail.c b/tools/testing/selftests/bpf/progs/exceptions_fail.c
new file mode 100644
index 000000000000..d8459c3840e2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions_fail.c
@@ -0,0 +1,267 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+extern void bpf_rcu_read_lock(void) __ksym;
+
+#define private(name) SEC(".bss." #name) __hidden __attribute__((aligned(8)))
+
+struct foo {
+	struct bpf_rb_node node;
+};
+
+private(A) struct bpf_spin_lock lock;
+private(A) struct bpf_rb_root rbtree __contains(foo, node);
+
+__noinline static int subprog_lock(struct __sk_buff *ctx)
+{
+	bpf_spin_lock(&lock);
+	if (ctx->len)
+		bpf_throw();
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("function calls are not allowed while holding a lock")
+int reject_with_lock(void *ctx)
+{
+	bpf_spin_lock(&lock);
+	bpf_throw();
+}
+
+SEC("?tc")
+__failure __msg("function calls are not allowed while holding a lock")
+int reject_subprog_with_lock(void *ctx)
+{
+	return subprog_lock(ctx);
+}
+
+SEC("?tc")
+__failure __msg("bpf_rcu_read_unlock is missing")
+int reject_with_rcu_read_lock(void *ctx)
+{
+	bpf_rcu_read_lock();
+	bpf_throw();
+}
+
+__noinline static int throwing_subprog(struct __sk_buff *ctx)
+{
+	if (ctx->len)
+		bpf_throw();
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("bpf_rcu_read_unlock is missing")
+int reject_subprog_with_rcu_read_lock(void *ctx)
+{
+	bpf_rcu_read_lock();
+	return throwing_subprog(ctx);
+}
+
+static bool rbless(struct bpf_rb_node *n1, const struct bpf_rb_node *n2)
+{
+	bpf_throw();
+}
+
+SEC("?tc")
+__failure __msg("function calls are not allowed while holding a lock")
+int reject_with_rbtree_add_throw(void *ctx)
+{
+	struct foo *f;
+
+	f = bpf_obj_new(typeof(*f));
+	if (!f)
+		return 0;
+	bpf_spin_lock(&lock);
+	bpf_rbtree_add(&rbtree, &f->node, rbless);
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("Unreleased reference")
+int reject_with_reference(void *ctx)
+{
+	struct foo *f;
+
+	f = bpf_obj_new(typeof(*f));
+	if (!f)
+		return 0;
+	bpf_throw();
+}
+
+__noinline static int subprog_ref(struct __sk_buff *ctx)
+{
+	struct foo *f;
+
+	f = bpf_obj_new(typeof(*f));
+	if (!f)
+		return 0;
+	bpf_throw();
+}
+
+__noinline static int subprog_cb_ref(u32 i, void *ctx)
+{
+	bpf_throw();
+}
+
+SEC("?tc")
+__failure __msg("Unreleased reference")
+int reject_with_cb_reference(void *ctx)
+{
+
struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_loop(5, subprog_cb_ref, NULL, 0); + return 0; +} + +SEC("?tc") +__failure __msg("Unreleased reference") +int reject_with_subprog_reference(void *ctx) +{ + return subprog_ref(ctx) + 1; +} + +static __noinline int throwing_exception_cb(void) +{ + int i = 0; + + bpf_assert_ne(i, 0); + return i; +} + +static __noinline int exception_cb1(void) +{ + int i = 0; + + bpf_assert_eq(i, 0); + return i; +} + +static __noinline int exception_cb2(void) +{ + int i = 0; + + bpf_assert_eq(i, 0); + return i; +} + +__noinline int throwing_exception_gfunc(void) +{ + return throwing_exception_cb(); +} + +SEC("?tc") +__failure __msg("is used as exception callback, cannot throw") +int reject_throwing_exception_cb_1(struct __sk_buff *ctx) +{ + bpf_set_exception_callback(throwing_exception_cb); + return 0; +} + +SEC("?tc") +__failure __msg("exception callback can throw, which is not allowed") +int reject_throwing_exception_cb_2(struct __sk_buff *ctx) +{ + throwing_exception_gfunc(); + bpf_set_exception_callback(throwing_exception_cb); + return 0; +} + +SEC("?tc") +__failure __msg("different exception callback subprogs for same insn 7: 2 and 1") +int reject_throwing_exception_cb_3(struct __sk_buff *ctx) +{ + if (ctx->protocol) + bpf_set_exception_callback(exception_cb1); + else + bpf_set_exception_callback(exception_cb2); + bpf_throw(); +} + +__noinline int gfunc_set_exception_cb(void) +{ + bpf_set_exception_callback(exception_cb1); + return 0; +} + +SEC("?tc") +__failure __msg("exception callback cannot be set within global function or extension program") +int reject_set_exception_cb_gfunc(struct __sk_buff *ctx) +{ + gfunc_set_exception_cb(); + return 0; +} + +static __noinline int exception_cb_rec(void) +{ + bpf_set_exception_callback(exception_cb_rec); + return 0; +} + +SEC("?tc") +__failure __msg("exception callback cannot be set from within exception callback") +int reject_set_exception_cb_rec1(struct __sk_buff 
*ctx) +{ + bpf_set_exception_callback(exception_cb_rec); + return 0; +} + +static __noinline int exception_cb_rec2(void); + +static __noinline int exception_cb_rec1(void) +{ + bpf_set_exception_callback(exception_cb_rec2); + return 0; +} + +static __noinline int exception_cb_rec2(void) +{ + bpf_set_exception_callback(exception_cb_rec2); + return 0; +} + +SEC("?tc") +__failure __msg("exception callback cannot be set from within exception callback") +int reject_set_exception_cb_rec2(struct __sk_buff *ctx) +{ + bpf_set_exception_callback(exception_cb_rec1); + return 0; +} + +static __noinline int exception_cb_rec3(void) +{ + bpf_set_exception_callback(exception_cb1); + return 0; +} + +SEC("?tc") +__failure __msg("exception callback cannot be set from within exception callback") +int reject_set_exception_cb_rec3(struct __sk_buff *ctx) +{ + bpf_set_exception_callback(exception_cb_rec3); + return 0; +} + +static __noinline int exception_cb_bad_ret(void) +{ + return 4242; +} + +SEC("?fentry/bpf_check") +__failure __msg("At program exit the register R0 has value") +int reject_set_exception_cb_bad_ret(void *ctx) +{ + bpf_set_exception_callback(exception_cb_bad_ret); + return 0; +} + +char _license[] SEC("license") = "GPL";
Add selftests to cover success and failure cases of API usage, runtime
behavior and invariants that need to be maintained for implementation
correctness.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 .../selftests/bpf/prog_tests/exceptions.c      | 240 ++++++++++++++++
 .../testing/selftests/bpf/progs/exceptions.c   | 218 ++++++++++++++
 .../selftests/bpf/progs/exceptions_ext.c       |  42 +++
 .../selftests/bpf/progs/exceptions_fail.c      | 267 ++++++++++++++++++
 4 files changed, 767 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/exceptions.c
 create mode 100644 tools/testing/selftests/bpf/progs/exceptions.c
 create mode 100644 tools/testing/selftests/bpf/progs/exceptions_ext.c
 create mode 100644 tools/testing/selftests/bpf/progs/exceptions_fail.c