Message ID | 20220208123348.40360-1-houtao1@huawei.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | BPF |
Series | [bpf-next,v2] bpf: reject kfunc calls that overflow insn->imm |
Context | Check | Description |
---|---|---|
netdev/tree_selection | success | Clearly marked for bpf-next |
netdev/fixes_present | success | Fixes tag not required for -next series |
netdev/subject_prefix | success | Link |
netdev/cover_letter | success | Single patches do not need cover letters |
netdev/patch_count | success | Link |
netdev/header_inline | success | No static functions without inline keyword in header files |
netdev/build_32bit | success | Errors and warnings before: 20 this patch: 20 |
netdev/cc_maintainers | warning | 1 maintainers not CCed: kpsingh@kernel.org |
netdev/build_clang | success | Errors and warnings before: 18 this patch: 18 |
netdev/module_param | success | Was 0 now: 0 |
netdev/verify_signedoff | success | Signed-off-by tag matches author and committer |
netdev/verify_fixes | success | No Fixes tag |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 25 this patch: 25 |
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 25 lines checked |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/source_inline | fail | Was 0 now: 1 |
bpf/vmtest-bpf-next | fail | VM_Test |
bpf/vmtest-bpf-next-PR | fail | PR summary |
On 2/8/22 4:33 AM, Hou Tao wrote:
> Now kfunc call uses s32 to represent the offset between the address
> of kfunc and __bpf_call_base, but it doesn't check whether or not
> s32 will be overflowed, so add an extra checking to reject these
> invalid kfunc calls.
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>
> ---
> v2:
>  * instead of checking the overflow in selftests, just reject
>    these kfunc calls directly in verifier
>
> v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
> ---
>  kernel/bpf/verifier.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index a39eedecc93a..fd836e64b701 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1832,6 +1832,13 @@ static struct btf *find_kfunc_desc_btf(struct bpf_verifier_env *env,
>          return btf_vmlinux ?: ERR_PTR(-ENOENT);
>  }
>
> +static inline bool is_kfunc_call_imm_overflowed(unsigned long addr)
> +{
> +        unsigned long offset = BPF_CALL_IMM(addr);
> +
> +        return (unsigned long)(s32)offset != offset;
> +}
> +
>  static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
>  {
>          const struct btf_type *func, *func_proto;
> @@ -1925,6 +1932,12 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
>                  return -EINVAL;
>          }
>
> +        if (is_kfunc_call_imm_overflowed(addr)) {
> +                verbose(env, "address of kernel function %s is out of range\n",
> +                        func_name);
> +                return -EINVAL;
> +        }
> +
>          desc = &tab->descs[tab->nr_descs++];
>          desc->func_id = func_id;
>          desc->imm = BPF_CALL_IMM(addr);

Thanks, I would like to call BPF_CALL_IMM only once and keep checking overflow
and setting desc->imm close to each other. How about the following
not-compile-tested code

        unsigned long call_imm;

        ...
        call_imm = BPF_CALL_IMM(addr);
        /* some comment here */
        if ((unsigned long)(s32)call_imm != call_imm) {
                verbose(env, ...);
                return -EINVAL;
        } else {
                desc->imm = call_imm;
        }
Hi,

On 2/9/2022 12:57 AM, Yonghong Song wrote:
>
>
> On 2/8/22 4:33 AM, Hou Tao wrote:
>> Now kfunc call uses s32 to represent the offset between the address
>> of kfunc and __bpf_call_base, but it doesn't check whether or not
>> s32 will be overflowed, so add an extra checking to reject these
>> invalid kfunc calls.
>>
>> Signed-off-by: Hou Tao <houtao1@huawei.com>
>> ---
>> v2:
>>  * instead of checking the overflow in selftests, just reject
>>    these kfunc calls directly in verifier
>>
>> v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
>> ---
>>  kernel/bpf/verifier.c | 13 +++++++++++++
>>  1 file changed, 13 insertions(+)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index a39eedecc93a..fd836e64b701 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -1832,6 +1832,13 @@ static struct btf *find_kfunc_desc_btf(struct
>> bpf_verifier_env *env,
>>          return btf_vmlinux ?: ERR_PTR(-ENOENT);
>>  }
>> +static inline bool is_kfunc_call_imm_overflowed(unsigned long addr)
>> +{
>> +        unsigned long offset = BPF_CALL_IMM(addr);
>> +
>> +        return (unsigned long)(s32)offset != offset;
>> +}
>> +
>>  static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16
>> offset)
>>  {
>>          const struct btf_type *func, *func_proto;
>> @@ -1925,6 +1932,12 @@ static int add_kfunc_call(struct bpf_verifier_env
>> *env, u32 func_id, s16 offset)
>>          return -EINVAL;
>>  }
>> +        if (is_kfunc_call_imm_overflowed(addr)) {
>> +                verbose(env, "address of kernel function %s is out of range\n",
>> +                        func_name);
>> +                return -EINVAL;
>> +        }
>> +
>>          desc = &tab->descs[tab->nr_descs++];
>>          desc->func_id = func_id;
>>          desc->imm = BPF_CALL_IMM(addr);
>
> Thanks, I would like to call BPF_CALL_IMM only once and keep checking overflow
> and setting desc->imm close to each other. How about the following
> not-compile-tested code
>
>         unsigned long call_imm;
>
>         ...
>         call_imm = BPF_CALL_IMM(addr);
>         /* some comment here */
>         if ((unsigned long)(s32)call_imm != call_imm) {
>                 verbose(env, ...);
>                 return -EINVAL;
>         } else {
>                 desc->imm = call_imm;
>         }

Calling BPF_CALL_IMM once is OK for me, but I think the else branch is
unnecessary and it makes the code ugly. Can we just return directly when we
find that the imm has overflowed?

        call_imm = BPF_CALL_IMM(addr);
        /* Check whether or not the relative offset overflows desc->imm */
        if ((unsigned long)(s32)call_imm != call_imm) {
                verbose(env, "address of kernel function %s is out of range\n",
                        func_name);
                return -EINVAL;
        }

        desc = &tab->descs[tab->nr_descs++];
        desc->func_id = func_id;
        desc->imm = call_imm;

> .
On 2/8/22 10:20 PM, Hou Tao wrote:
> Hi,
>
> On 2/9/2022 12:57 AM, Yonghong Song wrote:
>>
>>
>> On 2/8/22 4:33 AM, Hou Tao wrote:
>>> Now kfunc call uses s32 to represent the offset between the address
>>> of kfunc and __bpf_call_base, but it doesn't check whether or not
>>> s32 will be overflowed, so add an extra checking to reject these
>>> invalid kfunc calls.
>>>
>>> Signed-off-by: Hou Tao <houtao1@huawei.com>
>>> ---
>>> v2:
>>>  * instead of checking the overflow in selftests, just reject
>>>    these kfunc calls directly in verifier
>>>
>>> v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
>>> ---
>>>  kernel/bpf/verifier.c | 13 +++++++++++++
>>>  1 file changed, 13 insertions(+)
>>>
>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>> index a39eedecc93a..fd836e64b701 100644
>>> --- a/kernel/bpf/verifier.c
>>> +++ b/kernel/bpf/verifier.c
>>> @@ -1832,6 +1832,13 @@ static struct btf *find_kfunc_desc_btf(struct
>>> bpf_verifier_env *env,
>>>          return btf_vmlinux ?: ERR_PTR(-ENOENT);
>>>  }
>>> +static inline bool is_kfunc_call_imm_overflowed(unsigned long addr)
>>> +{
>>> +        unsigned long offset = BPF_CALL_IMM(addr);
>>> +
>>> +        return (unsigned long)(s32)offset != offset;
>>> +}
>>> +
>>>  static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16
>>> offset)
>>>  {
>>>          const struct btf_type *func, *func_proto;
>>> @@ -1925,6 +1932,12 @@ static int add_kfunc_call(struct bpf_verifier_env
>>> *env, u32 func_id, s16 offset)
>>>          return -EINVAL;
>>>  }
>>> +        if (is_kfunc_call_imm_overflowed(addr)) {
>>> +                verbose(env, "address of kernel function %s is out of range\n",
>>> +                        func_name);
>>> +                return -EINVAL;
>>> +        }
>>> +
>>>          desc = &tab->descs[tab->nr_descs++];
>>>          desc->func_id = func_id;
>>>          desc->imm = BPF_CALL_IMM(addr);
>>
>> Thanks, I would like to call BPF_CALL_IMM only once and keep checking overflow
>> and setting desc->imm close to each other. How about the following
>> not-compile-tested code
>>
>>         unsigned long call_imm;
>>
>>         ...
>>         call_imm = BPF_CALL_IMM(addr);
>>         /* some comment here */
>>         if ((unsigned long)(s32)call_imm != call_imm) {
>>                 verbose(env, ...);
>>                 return -EINVAL;
>>         } else {
>>                 desc->imm = call_imm;
>>         }
> call BPF_CALL_IMM once is OK for me. but I don't think the else branch is
> unnecessary and it make the code ugly. Can we just return directly when found
> that imm is overflowed ?
>
>         call_imm = BPF_CALL_IMM(addr);
>         /* Check whether or not the relative offset overflows desc->imm */
>         if ((unsigned long)(s32)call_imm != call_imm) {
>                 verbose(env, "address of kernel function %s is out of range\n",
>                         func_name);
>                 return -EINVAL;
>         }
>
>         desc = &tab->descs[tab->nr_descs++];
>         desc->func_id = func_id;
>         desc->imm = call_imm;

Sure. Your above change looks good. My change is just an illustration :-).

>
>> .
>
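To make the check the thread converges on concrete, here is a minimal userspace
sketch (not kernel code) of the same round-trip comparison. BPF_CALL_IMM,
__bpf_call_base, and the sample addresses are mocked purely for illustration,
and the sketch assumes an LP64 host so that unsigned long is 64 bits wide, as in
the kernel's 64-bit builds.

/* Userspace sketch of the overflow check discussed above: the relative
 * offset stored in insn->imm must survive a round trip through s32.
 * mock_bpf_call_base and both addresses below are made up.
 */
#include <stdio.h>
#include <stdint.h>

typedef int32_t s32;

/* Mock of __bpf_call_base: any fixed kernel-text-like address will do. */
static const unsigned long mock_bpf_call_base = 0xffff800000000000UL;

/* Mocked stand-in for the kernel's BPF_CALL_IMM(addr) helper. */
#define BPF_CALL_IMM(x) ((unsigned long)(x) - mock_bpf_call_base)

/* Returns 1 when the kfunc offset fits into the s32 insn->imm field. */
static int kfunc_imm_fits(unsigned long addr)
{
        unsigned long call_imm = BPF_CALL_IMM(addr);

        return (unsigned long)(s32)call_imm == call_imm;
}

int main(void)
{
        unsigned long near_kfunc = mock_bpf_call_base + 0x123456UL;    /* < 2 GiB away */
        unsigned long far_kfunc  = mock_bpf_call_base + 0x180000000UL; /* ~6 GiB away */

        printf("near kfunc fits: %d\n", kfunc_imm_fits(near_kfunc));   /* prints 1 */
        printf("far kfunc fits:  %d\n", kfunc_imm_fits(far_kfunc));    /* prints 0 */
        return 0;
}

Note that converting an out-of-range value to s32 is implementation-defined in
ISO C; gcc and clang define it as modular truncation, which is what makes the
cast-and-compare idiom work both here and in the patch.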
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a39eedecc93a..fd836e64b701 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1832,6 +1832,13 @@ static struct btf *find_kfunc_desc_btf(struct bpf_verifier_env *env,
         return btf_vmlinux ?: ERR_PTR(-ENOENT);
 }
 
+static inline bool is_kfunc_call_imm_overflowed(unsigned long addr)
+{
+        unsigned long offset = BPF_CALL_IMM(addr);
+
+        return (unsigned long)(s32)offset != offset;
+}
+
 static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 {
         const struct btf_type *func, *func_proto;
@@ -1925,6 +1932,12 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
                 return -EINVAL;
         }
 
+        if (is_kfunc_call_imm_overflowed(addr)) {
+                verbose(env, "address of kernel function %s is out of range\n",
+                        func_name);
+                return -EINVAL;
+        }
+
         desc = &tab->descs[tab->nr_descs++];
         desc->func_id = func_id;
         desc->imm = BPF_CALL_IMM(addr);
Now the kfunc call uses s32 to represent the offset between the address of
the kfunc and __bpf_call_base, but it doesn't check whether or not that
offset overflows s32, so add an extra check to reject these invalid kfunc
calls.

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
v2:
 * instead of checking the overflow in selftests, just reject
   these kfunc calls directly in the verifier

v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
---
 kernel/bpf/verifier.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)
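For context on why the commit message insists on the check, here is a similarly
hedged userspace sketch of the failure mode being rejected: when the offset
between a kfunc and __bpf_call_base does not fit in s32, storing it in the imm
silently truncates it, and recomputing the call target from the truncated imm
yields the wrong address. All addresses are made up, and the sketch again
assumes a 64-bit host.

/* Sketch of the failure mode the patch guards against: a too-large kfunc
 * offset truncated into the s32 imm no longer reconstructs the original
 * kfunc address. The base and kfunc addresses are illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

typedef int32_t s32;

int main(void)
{
        unsigned long base  = 0xffff800000000000UL;          /* mock __bpf_call_base */
        unsigned long kfunc = base + 0x180000000UL;          /* e.g. a kfunc ~6 GiB away */

        s32 imm = (s32)(kfunc - base);                       /* what the imm would hold */
        unsigned long resolved = base + (unsigned long)imm;  /* target recomputed from imm */

        printf("kfunc    = 0x%lx\n", kfunc);
        printf("resolved = 0x%lx\n", resolved);              /* differs: the imm overflowed */
        return 0;
}

This is the situation the new verifier check turns into a clean -EINVAL instead
of a bogus call target.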