Message ID: 20240108132802.6103-3-eddyz87@gmail.com (mailing list archive)
State: Changes Requested
Delegated to: BPF
Series: infer packet range for 'if pkt ==/!= pkt_end' instructions
(I've looked at all 3 patches, and they seem fine... but I don't claim
to understand the intricacies of the verifier/registers/etc well
enough to be certain)

On Mon, Jan 8, 2024 at 5:28 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> Extend try_match_pkt_pointers() to handle == and != operations.
> For instruction:
>
>    .--------------- pointer to packet with some range R
>    |     .--------- pointer to packet end
>    v     v
> if rA == rB goto ...

it's possible this would be better without the 'goto' as just
'if (rA == rB) ...'

>
> It is valid to infer that R bytes are available in packet.
> This change should allow verification of BPF generated for
> C code like below:
>
>    if (data + 42 != data_end) { ... }

this should probably be:
  if (data + 42 != data_end) return;
  ...

>
> Suggested-by: Maciej Żenczykowski <zenczykowski@gmail.com>
> Link: https://lore.kernel.org/bpf/CAHo-Oow5V2u4ZYvzuR8NmJmFDPNYp0pQDJX66rZqUjFHvhx82A@mail.gmail.com/
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  kernel/bpf/verifier.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 918e6a7912e2..b229ba0ad114 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -14677,6 +14677,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
>  				   struct bpf_verifier_state *this_branch,
>  				   struct bpf_verifier_state *other_branch)
>  {
> +	struct bpf_verifier_state *eq_branch;
>  	int opcode = BPF_OP(insn->code);
>  	int dst_regno = insn->dst_reg;
>
> @@ -14713,6 +14714,13 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
>  		find_good_pkt_pointers(other_branch, dst_reg, dst_reg->type, opcode == BPF_JLT);
>  		mark_pkt_end(this_branch, dst_regno, opcode == BPF_JLE);
>  		break;
> +	case BPF_JEQ:
> +	case BPF_JNE:
> +		/* pkt_data ==/!= pkt_end, pkt_meta ==/!= pkt_data */
> +		eq_branch = opcode == BPF_JEQ ? other_branch : this_branch;
> +		find_good_pkt_pointers(eq_branch, dst_reg, dst_reg->type, true);
> +		mark_pkt_end(eq_branch, dst_regno, false);
> +		break;
>  	default:
>  		return false;
>  	}
> --
> 2.43.0
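The C pattern the patch is meant to support can be illustrated with a small user-space sketch. This is a hypothetical stand-in, not a BPF program: in a real program `data`/`data_end` would come from the `__sk_buff` or `xdp_md` context, while here they are plain pointers so the control flow can be exercised directly.

```c
#include <stdint.h>

/* Hypothetical parser using the exact-size check under discussion.
 * The patch lets the verifier learn, on the equal path, that 42 bytes
 * of packet are readable; this sketch only mirrors the control flow. */
static int parse(const uint8_t *data, const uint8_t *data_end)
{
	if (data + 42 != data_end)
		return -1;	/* packet is not exactly 42 bytes: bail out */
	/* at this point exactly 42 bytes are available */
	return data[41];	/* read the last byte of the packet */
}
```

Usage: calling `parse(pkt, pkt + 42)` reaches the read, while `parse(pkt, pkt + 40)` takes the early return.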
On Mon, 2024-01-08 at 05:49 -0800, Maciej Żenczykowski wrote:
> (I've looked at all 3 patches, and they seem fine... but I don't claim
> to understand the intricacies of the verifier/registers/etc well
> enough to be certain)

Thank you.

> On Mon, Jan 8, 2024 at 5:28 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
> >
> > Extend try_match_pkt_pointers() to handle == and != operations.
> > For instruction:
> >
> >    .--------------- pointer to packet with some range R
> >    |     .--------- pointer to packet end
> >    v     v
> > if rA == rB goto ...
>
> it's possible this would be better without the 'goto' as just 'if (rA
> == rB) ...'

The idea was to show this as a BPF asm instruction, which has the syntax
'if rA == rB goto C'. I'll wait for more feedback and submit v2 with an
updated commit message to make it more clear.

> > It is valid to infer that R bytes are available in packet.
> > This change should allow verification of BPF generated for
> > C code like below:
> >
> >    if (data + 42 != data_end) { ... }
>
> this should probably be:
>   if (data + 42 != data_end) return;
>   ...

Makes sense, will update the commit message.
On Mon, Jan 8, 2024 at 5:28 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> Extend try_match_pkt_pointers() to handle == and != operations.
> For instruction:
>
>    .--------------- pointer to packet with some range R
>    |     .--------- pointer to packet end
>    v     v
> if rA == rB goto ...
>
> It is valid to infer that R bytes are available in packet.
> This change should allow verification of BPF generated for
> C code like below:
>
>    if (data + 42 != data_end) { ... }
>
> Suggested-by: Maciej Żenczykowski <zenczykowski@gmail.com>
> Link: https://lore.kernel.org/bpf/CAHo-Oow5V2u4ZYvzuR8NmJmFDPNYp0pQDJX66rZqUjFHvhx82A@mail.gmail.com/
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  kernel/bpf/verifier.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 918e6a7912e2..b229ba0ad114 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -14677,6 +14677,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
>  				   struct bpf_verifier_state *this_branch,
>  				   struct bpf_verifier_state *other_branch)
>  {
> +	struct bpf_verifier_state *eq_branch;
>  	int opcode = BPF_OP(insn->code);
>  	int dst_regno = insn->dst_reg;
>
> @@ -14713,6 +14714,13 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
>  		find_good_pkt_pointers(other_branch, dst_reg, dst_reg->type, opcode == BPF_JLT);
>  		mark_pkt_end(this_branch, dst_regno, opcode == BPF_JLE);
>  		break;
> +	case BPF_JEQ:
> +	case BPF_JNE:
> +		/* pkt_data ==/!= pkt_end, pkt_meta ==/!= pkt_data */
> +		eq_branch = opcode == BPF_JEQ ? other_branch : this_branch;
> +		find_good_pkt_pointers(eq_branch, dst_reg, dst_reg->type, true);
> +		mark_pkt_end(eq_branch, dst_regno, false);

hm... if pkt_data != pkt_end in this_branch, can we really infer
whether reg->range is BEYOND_PKT_END or AT_PKT_END? What if it's
IN_FRONT_OF_PKT_END?

> +		break;
>  	default:
>  		return false;
>  	}
> --
> 2.43.0
On Mon, 2024-01-08 at 16:45 -0800, Andrii Nakryiko wrote:
[...]
> > @@ -14713,6 +14714,13 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
> >  		find_good_pkt_pointers(other_branch, dst_reg, dst_reg->type, opcode == BPF_JLT);
> >  		mark_pkt_end(this_branch, dst_regno, opcode == BPF_JLE);
> >  		break;
> > +	case BPF_JEQ:
> > +	case BPF_JNE:
> > +		/* pkt_data ==/!= pkt_end, pkt_meta ==/!= pkt_data */
> > +		eq_branch = opcode == BPF_JEQ ? other_branch : this_branch;
> > +		find_good_pkt_pointers(eq_branch, dst_reg, dst_reg->type, true);
> > +		mark_pkt_end(eq_branch, dst_regno, false);
>
> hm... if pkt_data != pkt_end in this_branch, can we really infer
> whether reg->range is BEYOND_PKT_END or AT_PKT_END? What if it's
> IN_FRONT_OF_PKT_END?

pkt_data != pkt_end in this_branch means that there is an instruction:

    ...
    if pkt_data == pkt_end goto <other_branch>
    ... <this_branch> ...

the 'eq_branch' would be set to 'other_branch' and AT_PKT_END would be set
for the dst register in 'other_branch'. What's wrong with this?
Or did you mean something else?
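The branch naming that this exchange hinges on can be made concrete with a toy model (my sketch, not the verifier's actual types): for `if rA == rB goto L`, `this_branch` is the fall-through state (condition false) and `other_branch` is the jump-target state (condition true), so the "pointers are equal" facts must be routed accordingly.

```c
/* Toy model of the eq_branch selection in the patch:
 *   eq_branch = opcode == BPF_JEQ ? other_branch : this_branch;
 * For JEQ, equality holds on the jump-target (other_branch);
 * for JNE, equality holds on the fall-through (this_branch). */
enum branch_id { THIS_BRANCH, OTHER_BRANCH };

static enum branch_id pick_eq_branch(int opcode_is_jeq)
{
	return opcode_is_jeq ? OTHER_BRANCH : THIS_BRANCH;
}
```

This mirrors why `find_good_pkt_pointers()` and `mark_pkt_end()` both target the same state here, unlike the GT/GE/LT/LE cases where the two facts land on opposite branches.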
On 1/8/24 5:28 AM, Eduard Zingerman wrote:
> Extend try_match_pkt_pointers() to handle == and != operations.
> For instruction:
>
>    .--------------- pointer to packet with some range R
>    |     .--------- pointer to packet end
>    v     v
> if rA == rB goto ...
>
> It is valid to infer that R bytes are available in packet.
> This change should allow verification of BPF generated for
> C code like below:
>
>    if (data + 42 != data_end) { ... }
>
> Suggested-by: Maciej Żenczykowski <zenczykowski@gmail.com>
> Link: https://lore.kernel.org/bpf/CAHo-Oow5V2u4ZYvzuR8NmJmFDPNYp0pQDJX66rZqUjFHvhx82A@mail.gmail.com/
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  kernel/bpf/verifier.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 918e6a7912e2..b229ba0ad114 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -14677,6 +14677,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
>  				   struct bpf_verifier_state *this_branch,
>  				   struct bpf_verifier_state *other_branch)
>  {
> +	struct bpf_verifier_state *eq_branch;
>  	int opcode = BPF_OP(insn->code);
>  	int dst_regno = insn->dst_reg;
>
> @@ -14713,6 +14714,13 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
>  		find_good_pkt_pointers(other_branch, dst_reg, dst_reg->type, opcode == BPF_JLT);
>  		mark_pkt_end(this_branch, dst_regno, opcode == BPF_JLE);
>  		break;
> +	case BPF_JEQ:
> +	case BPF_JNE:
> +		/* pkt_data ==/!= pkt_end, pkt_meta ==/!= pkt_data */
> +		eq_branch = opcode == BPF_JEQ ? other_branch : this_branch;
> +		find_good_pkt_pointers(eq_branch, dst_reg, dst_reg->type, true);
> +		mark_pkt_end(eq_branch, dst_regno, false);
> +		break;

What will happen if there are multiple BPF_JEQ/BPF_JNE? I made a change
to one of the tests in patch 3:

+SEC("tc")
+__success __log_level(2)
+__msg("if r3 != r2 goto pc+3 ; R2_w=pkt_end() R3_w=pkt(off=8,r=0xffffffffffffffff)")
+__naked void data_plus_const_neq_pkt_end(void)
+{
+	asm volatile ("					\
+	r9 = r1;					\
+	r1 = *(u32*)(r9 + %[__sk_buff_data]);		\
+	r2 = *(u32*)(r9 + %[__sk_buff_data_end]);	\
+	r3 = r1;					\
+	r3 += 8;					\
+	if r3 != r2 goto 1f;				\
+	r3 += 8;					\
+	if r3 != r2 goto 1f;				\
+	r1 = *(u64 *)(r1 + 0);				\
+1:							\
+	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	: __clobber_all);
+}

The verifier output:

func#0 @0
Global function data_plus_const_neq_pkt_end() doesn't return scalar. Only those are supported.
0: R1=ctx() R10=fp0
; asm volatile ("
0: (bf) r9 = r1                       ; R1=ctx() R9_w=ctx()
1: (61) r1 = *(u32 *)(r9 +76)         ; R1_w=pkt(r=0) R9_w=ctx()
2: (61) r2 = *(u32 *)(r9 +80)         ; R2_w=pkt_end() R9_w=ctx()
3: (bf) r3 = r1                       ; R1_w=pkt(r=0) R3_w=pkt(r=0)
4: (07) r3 += 8                       ; R3_w=pkt(off=8,r=0)
5: (5d) if r3 != r2 goto pc+3         ; R2_w=pkt_end() R3_w=pkt(off=8,r=0xffffffffffffffff)
6: (07) r3 += 8                       ; R3_w=pkt(off=16,r=0xffffffffffffffff)
7: (5d) if r3 != r2 goto pc+1         ; R2_w=pkt_end() R3_w=pkt(off=16,r=0xffffffffffffffff)
8: (79) r1 = *(u64 *)(r1 +0)          ; R1=scalar()
9: (b7) r0 = 0                        ; R0_w=0
10: (95) exit
from 7 to 9: safe
from 5 to 9: safe
processed 13 insns (limit 1000000) max_states_per_insn 0 total_states 1 peak_states 1 mark_read 0

At insn 5, for this_branch (the straight one), the r3 range will be 8,
assuming pkt_end is 8. At insn 7, the r3 range becomes 16, and then we
assume pkt_end is 16.

I guess we should handle this case. For the branches at insns 5 and 7,
it cannot be that both are true.

>  	default:
>  		return false;
>  	}
On Mon, Jan 8, 2024 at 4:57 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2024-01-08 at 16:45 -0800, Andrii Nakryiko wrote:
> [...]
> > > @@ -14713,6 +14714,13 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
> > >  		find_good_pkt_pointers(other_branch, dst_reg, dst_reg->type, opcode == BPF_JLT);
> > >  		mark_pkt_end(this_branch, dst_regno, opcode == BPF_JLE);
> > >  		break;
> > > +	case BPF_JEQ:
> > > +	case BPF_JNE:
> > > +		/* pkt_data ==/!= pkt_end, pkt_meta ==/!= pkt_data */
> > > +		eq_branch = opcode == BPF_JEQ ? other_branch : this_branch;
> > > +		find_good_pkt_pointers(eq_branch, dst_reg, dst_reg->type, true);
> > > +		mark_pkt_end(eq_branch, dst_regno, false);
> >
> > hm... if pkt_data != pkt_end in this_branch, can we really infer
> > whether reg->range is BEYOND_PKT_END or AT_PKT_END? What if it's
> > IN_FRONT_OF_PKT_END?
>
> pkt_data != pkt_end in this_branch means that there is an instruction:
>
>     ...
>     if pkt_data == pkt_end goto <other_branch>
>     ... <this_branch> ...
>
> the 'eq_branch' would be set to 'other_branch' and AT_PKT_END would be set
> for the dst register in 'other_branch'. What's wrong with this?
> Or did you mean something else?

Well, first off, I'm very unfamiliar with all these pkt_xxx registers,
so excuse me for stupid questions.

I guess what got me confused was that find_good_pkt_pointers() and
mark_pkt_end() previously (for GT/GE/LT/LE) were working on opposing
branches. But here I see they work on the same "equal" branch.

But now I'm wondering what's the point of even calling
find_good_pkt_pointers()? Is there a scenario where it can derive new
information from JEQ/JNE?
On Tue, 2024-01-09 at 09:26 -0800, Yonghong Song wrote:
[...]
> What will happen if there are multiple BPF_JEQ/BPF_JNE? I made a change
> to one of the tests in patch 3:
>
> +SEC("tc")
> +__success __log_level(2)
> +__msg("if r3 != r2 goto pc+3 ; R2_w=pkt_end() R3_w=pkt(off=8,r=0xffffffffffffffff)")
> +__naked void data_plus_const_neq_pkt_end(void)
> +{
> +	asm volatile ("					\
> +	r9 = r1;					\
> +	r1 = *(u32*)(r9 + %[__sk_buff_data]);		\
> +	r2 = *(u32*)(r9 + %[__sk_buff_data_end]);	\
> +	r3 = r1;					\
> +	r3 += 8;					\
> +	if r3 != r2 goto 1f;				\
> +	r3 += 8;					\
> +	if r3 != r2 goto 1f;				\
> +	r1 = *(u64 *)(r1 + 0);				\
> +1:							\
> +	r0 = 0;						\
> +	exit;						\
> +"	:
> +	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
> +	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
> +	: __clobber_all);
> +}
>
> The verifier output:
>
> func#0 @0
> Global function data_plus_const_neq_pkt_end() doesn't return scalar. Only those are supported.
> 0: R1=ctx() R10=fp0
> ; asm volatile ("
> 0: (bf) r9 = r1                       ; R1=ctx() R9_w=ctx()
> 1: (61) r1 = *(u32 *)(r9 +76)         ; R1_w=pkt(r=0) R9_w=ctx()
> 2: (61) r2 = *(u32 *)(r9 +80)         ; R2_w=pkt_end() R9_w=ctx()
> 3: (bf) r3 = r1                       ; R1_w=pkt(r=0) R3_w=pkt(r=0)
> 4: (07) r3 += 8                       ; R3_w=pkt(off=8,r=0)
> 5: (5d) if r3 != r2 goto pc+3         ; R2_w=pkt_end() R3_w=pkt(off=8,r=0xffffffffffffffff)
> 6: (07) r3 += 8                       ; R3_w=pkt(off=16,r=0xffffffffffffffff)
> 7: (5d) if r3 != r2 goto pc+1         ; R2_w=pkt_end() R3_w=pkt(off=16,r=0xffffffffffffffff)
> 8: (79) r1 = *(u64 *)(r1 +0)          ; R1=scalar()
> 9: (b7) r0 = 0                        ; R0_w=0
> 10: (95) exit
>
> from 7 to 9: safe
> from 5 to 9: safe
> processed 13 insns (limit 1000000) max_states_per_insn 0 total_states 1 peak_states 1 mark_read 0
>
> At insn 5, for this_branch (the straight one), the r3 range will be 8,
> assuming pkt_end is 8. At insn 7, the r3 range becomes 16, and then we
> assume pkt_end is 16.
>
> I guess we should handle this case. For the branches at insns 5 and 7,
> it cannot be that both are true.

This is an interesting case.
reg->range is set to AT_PKT_END or BEYOND_PKT_END only in
try_match_pkt_pointers() (in the mark_pkt_end() call).
And this range mark appears not to be reset by a += operation
(which might add a negative number as well).
So, once r3 is marked AT_PKT_END it would remain so
even after r3 += 8, which is obviously not true.
Not sure what to do yet.
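The staleness problem described above can be sketched with a toy model (my illustration, not the verifier's real `bpf_reg_state`): a packet register carries a "range" of bytes proven readable, and the question is whether that proof survives pointer arithmetic.

```c
/* Toy model: off is the pointer's offset from packet start, range is
 * the number of bytes proven readable at the time the mark was set. */
struct toy_reg { long off; long range; };

/* Behavior described in the thread: += moves the pointer but keeps the
 * old proof, so the range claim becomes stale. */
static void toy_add_buggy(struct toy_reg *r, long k)
{
	r->off += k;
}

/* A sound alternative: any arithmetic invalidates the proven range,
 * forcing a fresh bounds check before the next packet access. */
static void toy_add_sound(struct toy_reg *r, long k)
{
	r->off += k;
	r->range = 0;
}
```

The "buggy" variant corresponds to r3 staying AT_PKT_END after `r3 += 8`; whether resetting is the right fix for the real verifier is exactly what the thread leaves open.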
On Wed, 2024-01-10 at 03:07 +0200, Eduard Zingerman wrote:
> On Tue, 2024-01-09 at 09:26 -0800, Yonghong Song wrote:
> [...]
> > What will happen if there are multiple BPF_JEQ/BPF_JNE? I made a change
> > to one of the tests in patch 3:
> [...]
> > At insn 5, for this_branch (the straight one), the r3 range will be 8,
> > assuming pkt_end is 8. At insn 7, the r3 range becomes 16, and then we
> > assume pkt_end is 16.
> >
> > I guess we should handle this case. For the branches at insns 5 and 7,
> > it cannot be that both are true.
>
> This is an interesting case.
> reg->range is set to AT_PKT_END or BEYOND_PKT_END only in
> try_match_pkt_pointers() (in the mark_pkt_end() call).
> And this range mark appears not to be reset by a += operation
> (which might add a negative number as well).
> So, once r3 is marked AT_PKT_END it would remain so
> even after r3 += 8, which is obviously not true.
> Not sure what to do yet.

Here is another example which is currently not handled correctly, even
w/o my patch:

SEC("tc")
__success
__naked void pkt_vs_pkt_end_with_bound_change(void)
{
	asm volatile ("					\
	r9 = r1;					\
	r0 = 0;						\
	r1 = *(u32*)(r9 + %[__sk_buff_data]);		\
	r2 = *(u32*)(r9 + %[__sk_buff_data_end]);	\
	r3 = r1;					\
	r3 += 8;					\
	if r3 <= r2 goto 1f;				\
	r3 -= 8;					\
	if r3 >= r2 goto 1f;				\
	r4 = *(u64 *)(r1 + 0);				\
1:	exit;						\
"	:
	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
	: __clobber_all);
}

Verifier log:

0: R1=ctx() R10=fp0
; asm volatile ("
0: (bf) r9 = r1                       ; R1=ctx() R9_w=ctx()
1: (b7) r0 = 0                        ; R0_w=0
2: (61) r1 = *(u32 *)(r9 +76)         ; R1_w=pkt(r=0) R9_w=ctx()
3: (61) r2 = *(u32 *)(r9 +80)         ; R2_w=pkt_end() R9_w=ctx()
4: (bf) r3 = r1                       ; R1_w=pkt(r=0) R3_w=pkt(r=0)
5: (07) r3 += 8                       ; R3_w=pkt(off=8,r=0)
6: (bd) if r3 <= r2 goto pc+3         ; R2_w=pkt_end() R3_w=pkt(off=8,r=0xfffffffffffffffe)
7: (17) r3 -= 8                       ; R3_w=pkt(r=0xfffffffffffffffe)
8: (3d) if r3 >= r2 goto pc+1         ; R2_w=pkt_end() R3_w=pkt(r=0xfffffffffffffffe)
10: (95) exit
from 6 to 10: safe
processed 11 insns (limit 1000000) max_states_per_insn 0 total_states 1 peak_states 1 mark_read 1

At (6), for this_branch, r3 is marked BEYOND_PKT_END and the packet range
is known to be 8; at (7) it is changed to point back to the start of the
packet; at (8) is_pkt_ptr_branch_taken() incorrectly predicts that
r3 >= r2 (r3 being the packet start, r2 the packet end).
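The misprediction in that second example can also be reduced to a toy model (my sketch; the real logic lives in is_pkt_ptr_branch_taken()): a BEYOND_PKT_END mark set at one offset keeps forcing a "branch taken" prediction even after the pointer is decremented.

```c
/* Toy model of the stale-mark prediction. A register marked
 * BEYOND_PKT_END is predicted to satisfy r >= pkt_end (1 = taken,
 * -1 = unknown, explore both branches). Illustrative types only. */
enum pkt_mark { NO_MARK, BEYOND_PKT_END };

struct marked_reg { long off; enum pkt_mark mark; };

static int predict_ge_pkt_end(const struct marked_reg *r)
{
	/* the prediction ignores r->off entirely, so it cannot notice
	 * that the pointer has since moved back before pkt_end */
	return r->mark == BEYOND_PKT_END ? 1 : -1;
}
```

Because the mark is not cleared by `r3 -= 8`, the fall-through path holding the unverified load is pruned as unreachable, which is the unsoundness the thread points out.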
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 918e6a7912e2..b229ba0ad114 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -14677,6 +14677,7 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
 				   struct bpf_verifier_state *this_branch,
 				   struct bpf_verifier_state *other_branch)
 {
+	struct bpf_verifier_state *eq_branch;
 	int opcode = BPF_OP(insn->code);
 	int dst_regno = insn->dst_reg;

@@ -14713,6 +14714,13 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
 		find_good_pkt_pointers(other_branch, dst_reg, dst_reg->type, opcode == BPF_JLT);
 		mark_pkt_end(this_branch, dst_regno, opcode == BPF_JLE);
 		break;
+	case BPF_JEQ:
+	case BPF_JNE:
+		/* pkt_data ==/!= pkt_end, pkt_meta ==/!= pkt_data */
+		eq_branch = opcode == BPF_JEQ ? other_branch : this_branch;
+		find_good_pkt_pointers(eq_branch, dst_reg, dst_reg->type, true);
+		mark_pkt_end(eq_branch, dst_regno, false);
+		break;
 	default:
 		return false;
 	}
Extend try_match_pkt_pointers() to handle == and != operations.
For instruction:

   .--------------- pointer to packet with some range R
   |     .--------- pointer to packet end
   v     v
if rA == rB goto ...

It is valid to infer that R bytes are available in packet.
This change should allow verification of BPF generated for
C code like below:

   if (data + 42 != data_end) { ... }

Suggested-by: Maciej Żenczykowski <zenczykowski@gmail.com>
Link: https://lore.kernel.org/bpf/CAHo-Oow5V2u4ZYvzuR8NmJmFDPNYp0pQDJX66rZqUjFHvhx82A@mail.gmail.com/
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 kernel/bpf/verifier.c | 8 ++++++++
 1 file changed, 8 insertions(+)