| Message ID | 20230502165754.16728-1-will@kernel.org (mailing list archive) |
|---|---|
| State | Changes Requested |
| Delegated to: | BPF |
| Series | bpf: Fix mask generation for 32-bit narrow loads of 64-bit fields |
On 5/2/23 9:57 AM, Will Deacon wrote:
> A narrow load from a 64-bit context field results in a 64-bit load
> followed potentially by a 64-bit right-shift and then a bitwise AND
> operation to extract the relevant data.
>
> In the case of a 32-bit access, an immediate mask of 0xffffffff is used
> to construct a 64-bit BPF_AND operation which then sign-extends the mask
> value and effectively acts as a glorified no-op.
>
> Fix the mask generation so that narrow loads always perform a 32-bit AND
> operation.
>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> Cc: John Fastabend <john.fastabend@gmail.com>
> Cc: Krzesimir Nowak <krzesimir@kinvolk.io>
> Cc: Yonghong Song <yhs@fb.com>
> Cc: Andrey Ignatov <rdna@fb.com>
> Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
> Signed-off-by: Will Deacon <will@kernel.org>

Thanks for the fix! You didn't miss anything. It is a bug, and we
probably did not find it because users always use 'u64 val =
ctx->u64_field' in their bpf code...

But I think the commit message can be improved. An example showing the
difference without and with this patch would explain the issue much
better.

Acked-by: Yonghong Song <yhs@fb.com>

> ---
>
> I spotted this while playing around with the JIT on arm64. I can't
> figure out why 31fd85816dbe special-cases 8-byte ctx fields in the
> first place, so I fear I may be missing something...
>
>  kernel/bpf/verifier.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index fbcf5a4e2fcd..5871aa78d01a 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -17033,7 +17033,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
>  				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
>  								insn->dst_reg,
>  								shift);
> -			insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
> +			insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
>  				(1ULL << size * 8) - 1);
>  		}
>  	}
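For what it's worth, here is the kind of before/after example Yonghong is asking for. The field value is invented and a little-endian layout is assumed; a 4-byte load of the low word of a 64-bit field has shift == 0, so the AND is the only instruction left to truncate the value:

```c
/* Illustrative only: a 64-bit ctx field holds 0x1234567887654321 and a
 * program does a 4-byte narrow load of its low word. The verifier
 * rewrites the access into roughly:
 *
 *   r0 = *(u64 *)(ctx + off)       // r0 = 0x1234567887654321
 *                                  // shift == 0, so no BPF_RSH is emitted
 *
 * Without the patch:
 *   BPF_ALU64_IMM(BPF_AND, r0, 0xffffffff)
 *       // the 32-bit immediate is sign-extended to 0xffffffffffffffff,
 *       // so r0 stays 0x1234567887654321 -- the upper half leaks through
 *
 * With the patch:
 *   BPF_ALU32_IMM(BPF_AND, r0, 0xffffffff)
 *       // 32-bit ALU ops zero-extend their result,
 *       // so r0 becomes 0x0000000087654321 as intended
 */
```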
On Thu, May 4, 2023 at 1:18 PM Yonghong Song <yhs@meta.com> wrote:
>
> On 5/2/23 9:57 AM, Will Deacon wrote:
> > A narrow load from a 64-bit context field results in a 64-bit load
> > followed potentially by a 64-bit right-shift and then a bitwise AND
> > operation to extract the relevant data.
> >
> > [...]
> >
> > Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
> > Signed-off-by: Will Deacon <will@kernel.org>
>
> Thanks for the fix! You didn't miss anything. It is a bug, and we
> probably did not find it because users always use 'u64 val =
> ctx->u64_field' in their bpf code...
>
> But I think the commit message can be improved. An example showing the
> difference without and with this patch would explain the issue much
> better.
>
> Acked-by: Yonghong Song <yhs@fb.com>

If I'm reading it correctly it's indeed a bug:
alu64(and, 0xffffFFFF) is a nop, but it should have been
alu32(and, 0xffffFFFF), which will clear the upper 32 bits, right?

Would be good to have a selftest for this.
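Alexei's reading of the two ALU flavours can be checked with plain C arithmetic. A minimal user-space sketch (my own, not kernel code) mimicking how each flavour treats the same 0xffffffff immediate:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t reg = 0x1234567887654321ULL;	/* a 64-bit ctx field value */
	int32_t imm = (int32_t)0xffffffff;	/* BPF immediates are 32-bit */

	/* BPF_ALU64_IMM(BPF_AND, ...): the immediate is sign-extended to
	 * 0xffffffffffffffff, so the AND keeps every bit -- a nop. */
	uint64_t alu64 = reg & (uint64_t)(int64_t)imm;

	/* BPF_ALU32_IMM(BPF_AND, ...): operates on the low 32 bits and
	 * zero-extends the result, clearing the upper half. */
	uint64_t alu32 = (uint32_t)reg & (uint32_t)imm;

	printf("alu64: %#018llx\n", (unsigned long long)alu64); /* 0x1234567887654321 */
	printf("alu32: %#018llx\n", (unsigned long long)alu32); /* 0x0000000087654321 */
	return 0;
}
```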
On 5/5/23 8:30 AM, Alexei Starovoitov wrote:
> On Thu, May 4, 2023 at 1:18 PM Yonghong Song <yhs@meta.com> wrote:
>>
>> On 5/2/23 9:57 AM, Will Deacon wrote:
>>> A narrow load from a 64-bit context field results in a 64-bit load
>>> followed potentially by a 64-bit right-shift and then a bitwise AND
>>> operation to extract the relevant data.
>>>
>>> [...]
>>
>> Thanks for the fix! You didn't miss anything. It is a bug, and we
>> probably did not find it because users always use 'u64 val =
>> ctx->u64_field' in their bpf code...
>>
>> [...]
>>
>> Acked-by: Yonghong Song <yhs@fb.com>
>
> If I'm reading it correctly it's indeed a bug:
> alu64(and, 0xffffFFFF) is a nop, but it should have been
> alu32(and, 0xffffFFFF), which will clear the upper 32 bits, right?

Right. This is my understanding as well.

> Would be good to have a selftest for this.
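Purely as a sketch of what such a selftest could look like (this is hypothetical, not the test that eventually lands; it also assumes a 4-byte narrow load of __sk_buff::tstamp is permitted for the chosen program type):

```c
#include <stddef.h>
#include <linux/bpf.h>
/* BPF_LDX_MEM()/BPF_ALU64_IMM()/BPF_EXIT_INSN() live in the kernel's
 * include/linux/filter.h; the BPF selftests carry a userspace copy in
 * tools/include/linux/filter.h. */
#include <linux/filter.h>

/* Hypothetical program body: a 4-byte narrow load of the low word of
 * the 64-bit tstamp field (little-endian layout assumed). Here
 * shift == 0, so only the AND runs: after the fix the upper 32 bits of
 * r0 must be zero and the program returns 0; with the bug, a nonzero
 * upper half of tstamp leaks into the return value, which a test
 * runner can detect. */
static const struct bpf_insn narrow_load_test[] = {
	/* r0 = *(u32 *)(r1 + offsetof(struct __sk_buff, tstamp)) */
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct __sk_buff, tstamp)),
	/* r0 >>= 32: isolates whatever survived above bit 31 */
	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
	BPF_EXIT_INSN(),
};
```

Hand-written instructions matter here: a C-level comparison would let the compiler insert its own 32-bit truncation and hide the bug.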
A narrow load from a 64-bit context field results in a 64-bit load
followed potentially by a 64-bit right-shift and then a bitwise AND
operation to extract the relevant data.

In the case of a 32-bit access, an immediate mask of 0xffffffff is used
to construct a 64-bit BPF_AND operation which then sign-extends the mask
value and effectively acts as a glorified no-op.

Fix the mask generation so that narrow loads always perform a 32-bit AND
operation.

Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Krzesimir Nowak <krzesimir@kinvolk.io>
Cc: Yonghong Song <yhs@fb.com>
Cc: Andrey Ignatov <rdna@fb.com>
Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
Signed-off-by: Will Deacon <will@kernel.org>
---

I spotted this while playing around with the JIT on arm64. I can't
figure out why 31fd85816dbe special-cases 8-byte ctx fields in the
first place, so I fear I may be missing something...

 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fbcf5a4e2fcd..5871aa78d01a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17033,7 +17033,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
 								insn->dst_reg,
 								shift);
-			insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
+			insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
 				(1ULL << size * 8) - 1);
 		}
 	}
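As a sanity check on the mask expression in the patched line, a small user-space sketch (mine, not part of the patch) showing that only the size == 4 mask sign-extends to all-ones, which is why 1- and 2-byte narrow loads were never affected:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Masks produced by (1ULL << size * 8) - 1 for each narrow-load
	 * size, viewed the way a BPF_ALU64 op sees its 32-bit immediate. */
	for (unsigned int size = 1; size <= 4; size <<= 1) {
		uint64_t mask = (1ULL << size * 8) - 1;
		int32_t imm = (int32_t)mask;		 /* insn->imm is 32-bit */
		uint64_t imm64 = (uint64_t)(int64_t)imm; /* sign-extended view */

		printf("size %u: mask %#010llx -> alu64 sees %#018llx\n",
		       size, (unsigned long long)mask,
		       (unsigned long long)imm64);
		/* only size == 4 degenerates into an all-ones (no-op) mask */
		assert(size == 4 ? imm64 == ~0ULL : imm64 == mask);
	}
	return 0;
}
```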