| Message ID | 20230713023232.1411523-5-memxor@gmail.com (mailing list archive) |
|---|---|
| State | Changes Requested |
| Delegated to: | BPF |
| Series | Exceptions - 1/2 |
On Thu, Jul 13, 2023 at 08:02:26AM +0530, Kumar Kartikeya Dwivedi wrote:
> Introduce support in the verifier for generating a subprogram and
> include it as part of a BPF program dynamically after the do_check
> phase is complete. The appropriate place of invocation would be
> do_misc_fixups.
>
> Since they are always appended to the end of the instruction sequence of
> the program, it becomes relatively inexpensive to do the related
> adjustments to the subprog_info of the program. Only the fake exit
> subprogram is shifted forward by 1, making room for our invented subprog.
>
> This is useful to insert a new subprogram and obtain its function
> pointer. The next patch will use this functionality to insert a default
> exception callback which will be invoked after unwinding the stack.
>
> Note that these invented subprograms are invisible to userspace, and
> never reported in BPF_OBJ_GET_INFO_BY_ID etc. For now, only a single
> invented program is supported, but more can be easily supported in the
> future.
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  include/linux/bpf.h          |  1 +
>  include/linux/bpf_verifier.h |  4 +++-
>  kernel/bpf/core.c            |  4 ++--
>  kernel/bpf/syscall.c         | 19 ++++++++++++++++++-
>  kernel/bpf/verifier.c        | 29 ++++++++++++++++++++++++++++-
>  5 files changed, 52 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 360433f14496..70f212dddfbf 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1385,6 +1385,7 @@ struct bpf_prog_aux {
>  	bool sleepable;
>  	bool tail_call_reachable;
>  	bool xdp_has_frags;
> +	bool invented_prog;
>  	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
>  	const struct btf_type *attach_func_proto;
>  	/* function name for valid attach_btf_id */
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index f70f9ac884d2..360aa304ec09 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -540,6 +540,7 @@ struct bpf_subprog_info {
>  	bool has_tail_call;
>  	bool tail_call_reachable;
>  	bool has_ld_abs;
> +	bool invented_prog;
>  	bool is_async_cb;
>  };
>
> @@ -594,10 +595,11 @@ struct bpf_verifier_env {
>  	bool bypass_spec_v1;
>  	bool bypass_spec_v4;
>  	bool seen_direct_write;
> +	bool invented_prog;

Instead of a flag in two places how about adding aux->func_cnt_real
and use it in JITing and free-ing while get_info*() keep using aux->func_cnt.

> +/* The function requires that first instruction in 'patch' is insnsi[prog->len - 1] */
> +static int invent_subprog(struct bpf_verifier_env *env, struct bpf_insn *patch, int len)
> +{
> +	struct bpf_subprog_info *info = env->subprog_info;
> +	int cnt = env->subprog_cnt;
> +	struct bpf_prog *prog;
> +
> +	if (env->invented_prog) {
> +		verbose(env, "verifier internal error: only one invented prog supported\n");
> +		return -EFAULT;
> +	}
> +	prog = bpf_patch_insn_data(env, env->prog->len - 1, patch, len);

The actual patching is not necessary.
bpf_prog_realloc() and memcpy would be enough, no?

> +	if (!prog)
> +		return -ENOMEM;
> +	env->prog = prog;
> +	info[cnt + 1].start = info[cnt].start;
> +	info[cnt].start = prog->len - len + 1;
> +	info[cnt].invented_prog = true;
> +	env->subprog_cnt++;
> +	env->invented_prog = true;
> +	return 0;
> +}
> +
>  /* Do various post-verification rewrites in a single program pass.
>   * These rewrites simplify JIT and interpreter implementations.
>   */
> --
> 2.40.1
>
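For illustration, a rough sketch of what the func_cnt_real suggestion could look like. The func_cnt_real field and the helper below are hypothetical, named after the review comment above rather than taken from the posted patch; they only show the intended split between the internal and the user-visible subprog counts.

struct bpf_prog_aux {
	/* ... existing fields ... */
	u32 func_cnt;		/* user-visible count, used by get_info*() */
	u32 func_cnt_real;	/* func_cnt plus invented subprogs, internal only */
	struct bpf_prog **func;
	/* ... */
};

/* Hypothetical free path: JITing and freeing would walk every subprog,
 * including invented ones.
 */
static void free_all_subprogs(struct bpf_prog_aux *aux)
{
	u32 i;

	for (i = 0; i < aux->func_cnt_real; i++)
		bpf_jit_free(aux->func[i]);
}

/* ... while BPF_OBJ_GET_INFO_BY_ID keeps iterating up to aux->func_cnt,
 * so invented subprogs stay invisible to userspace.
 */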
On Sat, 15 Jul 2023 at 03:28, Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Thu, Jul 13, 2023 at 08:02:26AM +0530, Kumar Kartikeya Dwivedi wrote:
> > Introduce support in the verifier for generating a subprogram and
> > include it as part of a BPF program dynamically after the do_check
> > phase is complete. The appropriate place of invocation would be
> > do_misc_fixups.
> >
[...]
> > @@ -594,10 +595,11 @@ struct bpf_verifier_env {
> >  	bool bypass_spec_v1;
> >  	bool bypass_spec_v4;
> >  	bool seen_direct_write;
> > +	bool invented_prog;
>
> Instead of a flag in two places how about adding aux->func_cnt_real
> and use it in JITing and free-ing while get_info*() keep using aux->func_cnt.
>

That does seem better, thanks. I'll make the change in v2.

> > +/* The function requires that first instruction in 'patch' is insnsi[prog->len - 1] */
> > +static int invent_subprog(struct bpf_verifier_env *env, struct bpf_insn *patch, int len)
> > +{
[...]
> > +	prog = bpf_patch_insn_data(env, env->prog->len - 1, patch, len);
>
> The actual patching is not necessary.
> bpf_prog_realloc() and memcpy would be enough, no?
>

Yes, it should be fine. But I didn't want to special case things here
just to make sure assumptions elsewhere don't break.
E.g. code readily assumes every insn has its own insn_aux_data which
might be broken if we don't expand it.
I think bpf_patch_insn_single is already doing a realloc (and reusing
trailing space in current allocation if available), so it didn't seem
worth it to me.

If you still feel it's better I can analyze if anything might break
and make the change.

[...]
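For context on the trade-off being discussed, here is a minimal sketch of the bpf_prog_realloc() + memcpy route, keeping the convention that patch[0] is a copy of the current last instruction. This is illustrative only and not the posted patch: the helper name is invented, error handling is simplified, and the insn_aux_data growth at the end is exactly the part that would need the care mentioned above.

static int append_invented_subprog(struct bpf_verifier_env *env,
				   struct bpf_insn *patch, int len)
{
	u32 old_len = env->prog->len, new_len = old_len + len - 1;
	struct bpf_insn_aux_data *new_aux;
	struct bpf_prog *prog;

	/* Grow the image; patch[0] overwrites the current last insn with an
	 * identical copy, patch[1..len-1] land after it.
	 */
	prog = bpf_prog_realloc(env->prog, bpf_prog_size(new_len), GFP_USER);
	if (!prog)
		return -ENOMEM;
	memcpy(prog->insnsi + old_len - 1, patch, sizeof(*patch) * len);
	prog->len = new_len;
	env->prog = prog;

	/* Every insn is still expected to have its own insn_aux_data slot,
	 * so the array has to be grown as well (simplified here).
	 */
	new_aux = vzalloc(array_size(new_len, sizeof(*new_aux)));
	if (!new_aux)
		return -ENOMEM;
	memcpy(new_aux, env->insn_aux_data, sizeof(*new_aux) * old_len);
	vfree(env->insn_aux_data);
	env->insn_aux_data = new_aux;

	/* subprog_info bookkeeping would then proceed as in invent_subprog() */
	return 0;
}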
On Mon, Jul 17, 2023 at 9:22 AM Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
>
> On Sat, 15 Jul 2023 at 03:28, Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Thu, Jul 13, 2023 at 08:02:26AM +0530, Kumar Kartikeya Dwivedi wrote:
[...]
> > > +	prog = bpf_patch_insn_data(env, env->prog->len - 1, patch, len);
> >
> > The actual patching is not necessary.
> > bpf_prog_realloc() and memcpy would be enough, no?
> >
>
> Yes, it should be fine. But I didn't want to special case things here
> just to make sure assumptions elsewhere don't break.
> E.g. code readily assumes every insn has its own insn_aux_data which
> might be broken if we don't expand it.
> I think bpf_patch_insn_single is already doing a realloc (and reusing
> trailing space in current allocation if available), so it didn't seem
> worth it to me.
>
> If you still feel it's better I can analyze if anything might break
> and make the change.

bpf_patch_insn_data() is a known performance bottleneck.
Folks have been trying to optimize it in the past.
It's certainly delicate code.
I guess since this extra subprog will only be added once
we can live with unnecessary overhead of bpf_patch_insn_data().
Just add the comment that we're not patching existing insn and
all of adjust* ops are nop.
On Mon, 17 Jul 2023 at 22:55, Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Mon, Jul 17, 2023 at 9:22 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > On Sat, 15 Jul 2023 at 03:28, Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
[...]
>
> bpf_patch_insn_data() is a known performance bottleneck.
> Folks have been trying to optimize it in the past.
> It's certainly delicate code.
> I guess since this extra subprog will only be added once
> we can live with unnecessary overhead of bpf_patch_insn_data().
> Just add the comment that we're not patching existing insn and
> all of adjust* ops are nop.

Ack.
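To make the agreed follow-up concrete, the clarifying comment on invent_subprog() in v2 might read roughly as below; the wording is only a suggestion, not text from a posted revision.

/* We only ever append instructions: patch[0] is a copy of
 * insnsi[prog->len - 1] and everything else lands after the current last
 * insn, so no existing instruction is actually rewritten and all of the
 * adjust_* fixups done by bpf_patch_insn_data() (jump offsets, subprog
 * starts, line info, insn_aux_data) are effectively no-ops for this call.
 */
static int invent_subprog(struct bpf_verifier_env *env, struct bpf_insn *patch, int len)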
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 360433f14496..70f212dddfbf 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1385,6 +1385,7 @@ struct bpf_prog_aux {
 	bool sleepable;
 	bool tail_call_reachable;
 	bool xdp_has_frags;
+	bool invented_prog;
 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
 	const struct btf_type *attach_func_proto;
 	/* function name for valid attach_btf_id */
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index f70f9ac884d2..360aa304ec09 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -540,6 +540,7 @@ struct bpf_subprog_info {
 	bool has_tail_call;
 	bool tail_call_reachable;
 	bool has_ld_abs;
+	bool invented_prog;
 	bool is_async_cb;
 };
 
@@ -594,10 +595,11 @@ struct bpf_verifier_env {
 	bool bypass_spec_v1;
 	bool bypass_spec_v4;
 	bool seen_direct_write;
+	bool invented_prog;
 	struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */
 	const struct bpf_line_info *prev_linfo;
 	struct bpf_verifier_log log;
-	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
+	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 2]; /* max + 2 for the fake and exception subprogs */
 	union {
 		struct bpf_idmap idmap_scratch;
 		struct bpf_idset idset_scratch;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index dc85240a0134..5c484b2bc3d6 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -211,7 +211,7 @@ void bpf_prog_fill_jited_linfo(struct bpf_prog *prog,
 	const struct bpf_line_info *linfo;
 	void **jited_linfo;
 
-	if (!prog->aux->jited_linfo)
+	if (!prog->aux->jited_linfo || prog->aux->invented_prog)
 		/* Userspace did not provide linfo */
 		return;
 
@@ -579,7 +579,7 @@ bpf_prog_ksym_set_name(struct bpf_prog *prog)
 	sym = bin2hex(sym, prog->tag, sizeof(prog->tag));
 
 	/* prog->aux->name will be ignored if full btf name is available */
-	if (prog->aux->func_info_cnt) {
+	if (prog->aux->func_info_cnt && !prog->aux->invented_prog) {
 		type = btf_type_by_id(prog->aux->btf,
 				      prog->aux->func_info[prog->aux->func_idx].type_id);
 		func_name = btf_name_by_offset(prog->aux->btf, type->name_off);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index ee8cb1a174aa..769550287bed 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4291,8 +4291,11 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 			u32 i;
 
 			info.jited_prog_len = 0;
-			for (i = 0; i < prog->aux->func_cnt; i++)
+			for (i = 0; i < prog->aux->func_cnt; i++) {
+				if (prog->aux->func[i]->aux->invented_prog)
+					break;
 				info.jited_prog_len += prog->aux->func[i]->jited_len;
+			}
 		} else {
 			info.jited_prog_len = prog->jited_len;
 		}
@@ -4311,6 +4314,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 				free = ulen;
 				for (i = 0; i < prog->aux->func_cnt; i++) {
+					if (prog->aux->func[i]->aux->invented_prog)
+						break;
 					len = prog->aux->func[i]->jited_len;
 					len = min_t(u32, len, free);
 					img = (u8 *) prog->aux->func[i]->bpf_func;
@@ -4332,6 +4337,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 		ulen = info.nr_jited_ksyms;
 		info.nr_jited_ksyms = prog->aux->func_cnt ? : 1;
+		if (prog->aux->func_cnt && prog->aux->func[prog->aux->func_cnt - 1]->aux->invented_prog)
+			info.nr_jited_ksyms--;
 		if (ulen) {
 			if (bpf_dump_raw_ok(file->f_cred)) {
 				unsigned long ksym_addr;
@@ -4345,6 +4352,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 				user_ksyms = u64_to_user_ptr(info.jited_ksyms);
 				if (prog->aux->func_cnt) {
 					for (i = 0; i < ulen; i++) {
+						if (prog->aux->func[i]->aux->invented_prog)
+							break;
 						ksym_addr = (unsigned long)
 							prog->aux->func[i]->bpf_func;
 						if (put_user((u64) ksym_addr,
@@ -4363,6 +4372,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 		ulen = info.nr_jited_func_lens;
 		info.nr_jited_func_lens = prog->aux->func_cnt ? : 1;
+		if (prog->aux->func_cnt && prog->aux->func[prog->aux->func_cnt - 1]->aux->invented_prog)
+			info.nr_jited_func_lens--;
 		if (ulen) {
 			if (bpf_dump_raw_ok(file->f_cred)) {
 				u32 __user *user_lens;
@@ -4373,6 +4384,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 				user_lens = u64_to_user_ptr(info.jited_func_lens);
 				if (prog->aux->func_cnt) {
 					for (i = 0; i < ulen; i++) {
+						if (prog->aux->func[i]->aux->invented_prog)
+							break;
 						func_len = prog->aux->func[i]->jited_len;
 						if (put_user(func_len,
 							     &user_lens[i]))
@@ -4443,6 +4456,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 		ulen = info.nr_prog_tags;
 		info.nr_prog_tags = prog->aux->func_cnt ? : 1;
+		if (prog->aux->func_cnt && prog->aux->func[prog->aux->func_cnt - 1]->aux->invented_prog)
+			info.nr_prog_tags--;
 		if (ulen) {
 			__u8 __user (*user_prog_tags)[BPF_TAG_SIZE];
 			u32 i;
@@ -4451,6 +4466,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
 			ulen = min_t(u32, info.nr_prog_tags, ulen);
 			if (prog->aux->func_cnt) {
 				for (i = 0; i < ulen; i++) {
+					if (prog->aux->func[i]->aux->invented_prog)
+						break;
 					if (copy_to_user(user_prog_tags[i],
 							 prog->aux->func[i]->tag,
 							 BPF_TAG_SIZE))
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5d432df5df06..8ce72a7b4758 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -14891,8 +14891,11 @@ static void adjust_btf_func(struct bpf_verifier_env *env)
 	if (!aux->func_info)
 		return;
 
-	for (i = 0; i < env->subprog_cnt; i++)
+	for (i = 0; i < env->subprog_cnt; i++) {
+		if (env->subprog_info[i].invented_prog)
+			break;
 		aux->func_info[i].insn_off = env->subprog_info[i].start;
+	}
 }
 
 #define MIN_BPF_LINEINFO_SIZE	offsetofend(struct bpf_line_info, line_col)
@@ -17778,6 +17781,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		}
 		func[i]->aux->num_exentries = num_exentries;
 		func[i]->aux->tail_call_reachable = env->subprog_info[i].tail_call_reachable;
+		func[i]->aux->invented_prog = env->subprog_info[i].invented_prog;
 		func[i] = bpf_int_jit_compile(func[i]);
 		if (!func[i]->jited) {
 			err = -ENOTSUPP;
@@ -18071,6 +18075,29 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	return 0;
 }
 
+/* The function requires that first instruction in 'patch' is insnsi[prog->len - 1] */
+static int invent_subprog(struct bpf_verifier_env *env, struct bpf_insn *patch, int len)
+{
+	struct bpf_subprog_info *info = env->subprog_info;
+	int cnt = env->subprog_cnt;
+	struct bpf_prog *prog;
+
+	if (env->invented_prog) {
+		verbose(env, "verifier internal error: only one invented prog supported\n");
+		return -EFAULT;
+	}
+	prog = bpf_patch_insn_data(env, env->prog->len - 1, patch, len);
+	if (!prog)
+		return -ENOMEM;
+	env->prog = prog;
+	info[cnt + 1].start = info[cnt].start;
+	info[cnt].start = prog->len - len + 1;
+	info[cnt].invented_prog = true;
+	env->subprog_cnt++;
+	env->invented_prog = true;
+	return 0;
+}
+
 /* Do various post-verification rewrites in a single program pass.
  * These rewrites simplify JIT and interpreter implementations.
  */
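A short walkthrough of the subprog_info bookkeeping in invent_subprog() above, with made-up numbers; it assumes bpf_patch_insn_data() adjusts the fake exit subprog's start as usual when instructions are added at the end of the image.

/* Before: prog->len = 100, subprog_cnt = 2
 *   subprog_info[0].start = 0
 *   subprog_info[1].start = 50
 *   subprog_info[2].start = 100    <- fake exit subprog
 *
 * invent_subprog(env, patch, 4), with patch[0] == insnsi[99]:
 *   bpf_patch_insn_data() grows the image to prog->len = 103 and moves the
 *   fake exit start to 103; patch[1..3] occupy insns 100..102.
 *
 * After: subprog_cnt = 3
 *   subprog_info[2].start = 100    <- invented subprog (prog->len - len + 1)
 *   subprog_info[2].invented_prog = true
 *   subprog_info[3].start = 103    <- fake exit subprog, shifted forward one slot
 */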
Introduce support in the verifier for generating a subprogram and
include it as part of a BPF program dynamically after the do_check
phase is complete. The appropriate place of invocation would be
do_misc_fixups.

Since they are always appended to the end of the instruction sequence of
the program, it becomes relatively inexpensive to do the related
adjustments to the subprog_info of the program. Only the fake exit
subprogram is shifted forward by 1, making room for our invented subprog.

This is useful to insert a new subprogram and obtain its function
pointer. The next patch will use this functionality to insert a default
exception callback which will be invoked after unwinding the stack.

Note that these invented subprograms are invisible to userspace, and
never reported in BPF_OBJ_GET_INFO_BY_ID etc. For now, only a single
invented program is supported, but more can be easily supported in the
future.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf.h          |  1 +
 include/linux/bpf_verifier.h |  4 +++-
 kernel/bpf/core.c            |  4 ++--
 kernel/bpf/syscall.c         | 19 ++++++++++++++++++-
 kernel/bpf/verifier.c        | 29 ++++++++++++++++++++++++++++-
 5 files changed, 52 insertions(+), 5 deletions(-)
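To give a sense of how the next patch is expected to use this, here is a hypothetical caller inside do_misc_fixups(). The callback body (return 0) is a placeholder invented purely for illustration; the real default exception callback is defined in the follow-up patch.

	struct bpf_insn patch[] = {
		env->prog->insnsi[env->prog->len - 1],	/* required: copy of current last insn */
		BPF_MOV64_IMM(BPF_REG_0, 0),		/* placeholder body: set R0 = 0 */
		BPF_EXIT_INSN(),
	};
	int err;

	err = invent_subprog(env, patch, ARRAY_SIZE(patch));
	if (err)
		return err;
	/* The invented subprog now starts at
	 * env->subprog_info[env->subprog_cnt - 1].start; the follow-up patch
	 * would record its JITed address as the default exception callback.
	 */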