Message ID   | 20220208012539.491753-3-houtao1@huawei.com (mailing list archive)
State        | New, archived
Delegated to | BPF
Series       | bpf, arm64: fix bpf line info
On 2/8/22 2:25 AM, Hou Tao wrote:
> insn_to_jit_off passed to bpf_prog_fill_jited_linfo() is calculated
> in instruction granularity instead of byte granularity, but bpf
> line info requires byte offset, so fix it by calculating ctx->offset
> as byte-offset. bpf2a64_offset() needs to return relative instruction
> offset by using ctx->offset, so update it accordingly.
>
> Fixes: 37ab566c178d ("bpf: arm64: Enable arm64 jit to provide bpf_line_info")
> Signed-off-by: Hou Tao <houtao1@huawei.com>
> ---
>  arch/arm64/net/bpf_jit_comp.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 68b35c83e637..aed07cba78ec 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -164,9 +164,14 @@ static inline int bpf2a64_offset(int bpf_insn, int off,
>  	/*
>  	 * Whereas arm64 branch instructions encode the offset
>  	 * from the branch itself, so we must subtract 1 from the
> -	 * instruction offset.
> +	 * instruction offset. The unit of ctx->offset is byte, so
> +	 * subtract AARCH64_INSN_SIZE from it. bpf2a64_offset()
> +	 * returns instruction offset, so divide by AARCH64_INSN_SIZE
> +	 * at the end.
>  	 */
> -	return ctx->offset[bpf_insn + off] - (ctx->offset[bpf_insn] - 1);
> +	return (ctx->offset[bpf_insn + off] -
> +		(ctx->offset[bpf_insn] - AARCH64_INSN_SIZE)) /
> +		AARCH64_INSN_SIZE;
>  }
>
>  static void jit_fill_hole(void *area, unsigned int size)
> @@ -1087,13 +1092,14 @@ static int build_body(struct jit_ctx *ctx, bool extra_pass)
>  		const struct bpf_insn *insn = &prog->insnsi[i];
>  		int ret;
>
> +		/* BPF line info needs byte-offset instead of insn-offset */
>  		if (ctx->image == NULL)
> -			ctx->offset[i] = ctx->idx;
> +			ctx->offset[i] = ctx->idx * AARCH64_INSN_SIZE;
>  		ret = build_insn(insn, ctx, extra_pass);
>  		if (ret > 0) {
>  			i++;
>  			if (ctx->image == NULL)
> -				ctx->offset[i] = ctx->idx;
> +				ctx->offset[i] = ctx->idx * AARCH64_INSN_SIZE;
>  			continue;
>  		}
>  		if (ret)
> @@ -1105,7 +1111,7 @@ static int build_body(struct jit_ctx *ctx, bool extra_pass)
>  	 * instruction (end of program)
>  	 */
>  	if (ctx->image == NULL)
> -		ctx->offset[i] = ctx->idx;
> +		ctx->offset[i] = ctx->idx * AARCH64_INSN_SIZE;

Both patches look good to me. For this one specifically, given bpf2a64_offset()
needs to return relative instruction offset via ctx->offset, can't we just
simplify it like this w/o the AARCH64_INSN_SIZE back/forth dance?

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 74f9a9b6a053..72f4702a9d01 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -999,7 +999,7 @@ struct arm64_jit_data {

 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 {
-	int image_size, prog_size, extable_size;
+	int image_size, prog_size, extable_size, i;
 	struct bpf_prog *tmp, *orig_prog = prog;
 	struct bpf_binary_header *header;
 	struct arm64_jit_data *jit_data;
@@ -1130,6 +1130,9 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	prog->jited_len = prog_size;

 	if (!prog->is_func || extra_pass) {
+		/* BPF line info needs byte-offset instead of insn-offset. */
+		for (i = 0; i < prog->len + 1; i++)
+			ctx.offset[i] *= AARCH64_INSN_SIZE;
 		bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
 out_off:
 		kfree(ctx.offset);
Hi,

On 2/19/2022 7:20 AM, Daniel Borkmann wrote:
> On 2/8/22 2:25 AM, Hou Tao wrote:
>> insn_to_jit_off passed to bpf_prog_fill_jited_linfo() is calculated
>> in instruction granularity instead of byte granularity, but bpf
>> line info requires byte offset, so fix it by calculating ctx->offset
>> as byte-offset. bpf2a64_offset() needs to return relative instruction
>> offset by using ctx->offset, so update it accordingly.
>>
>> Fixes: 37ab566c178d ("bpf: arm64: Enable arm64 jit to provide bpf_line_info")
>> Signed-off-by: Hou Tao <houtao1@huawei.com>
>> ---
>>  arch/arm64/net/bpf_jit_comp.c | 16 +++++++++++-----
>>  1 file changed, 11 insertions(+), 5 deletions(-)
[snip]
>>  		if (ret)
>> @@ -1105,7 +1111,7 @@ static int build_body(struct jit_ctx *ctx, bool extra_pass)
>>  	 * instruction (end of program)
>>  	 */
>>  	if (ctx->image == NULL)
>> -		ctx->offset[i] = ctx->idx;
>> +		ctx->offset[i] = ctx->idx * AARCH64_INSN_SIZE;
>
> Both patches look good to me. For this one specifically, given bpf2a64_offset()
> needs to return relative instruction offset via ctx->offset, can't we just
> simplify it like this w/o the AARCH64_INSN_SIZE back/forth dance?
>
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 74f9a9b6a053..72f4702a9d01 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -999,7 +999,7 @@ struct arm64_jit_data {
>
>  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>  {
> -	int image_size, prog_size, extable_size;
> +	int image_size, prog_size, extable_size, i;
>  	struct bpf_prog *tmp, *orig_prog = prog;
>  	struct bpf_binary_header *header;
>  	struct arm64_jit_data *jit_data;
> @@ -1130,6 +1130,9 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>  	prog->jited_len = prog_size;
>
>  	if (!prog->is_func || extra_pass) {
> +		/* BPF line info needs byte-offset instead of insn-offset. */
> +		for (i = 0; i < prog->len + 1; i++)
> +			ctx.offset[i] *= AARCH64_INSN_SIZE;
>  		bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
>  out_off:
>  		kfree(ctx.offset);

Your fix is much simpler. I will check whether it works.

Thanks,
Tao
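A note on ordering in the variant above: branches are resolved while ctx.offset is still in instruction units, and the array is only scaled to bytes afterwards, right before bpf_prog_fill_jited_linfo(). A minimal userspace sketch (not kernel code; the offsets below are made-up values, and AARCH64_INSN_SIZE is taken to be 4, as on arm64) illustrates that sequence:

#include <assert.h>

#define AARCH64_INSN_SIZE 4

int main(void)
{
	/* Toy 3-insn program; ctx.offset has prog->len + 1 entries. */
	int offset[] = { 0, 2, 3, 6 };	/* instruction units, as built */
	int i;

	/* Branch resolution uses the original, division-free form of
	 * bpf2a64_offset() while offsets are still instruction-granular:
	 * from BPF insn 0 to insn 1, the branch spans 2 - (0 - 1) = 3
	 * arm64 instructions. */
	assert(offset[0 + 1] - (offset[0] - 1) == 3);

	/* Only afterwards is the array scaled once, as in the reply. */
	for (i = 0; i < 3 + 1; i++)
		offset[i] *= AARCH64_INSN_SIZE;

	/* bpf_prog_fill_jited_linfo() then sees byte offsets. */
	assert(offset[3] == 24);
	return 0;
}

This is why the one-shot loop can avoid the multiply-then-divide dance entirely: code generation never observes the byte-scaled values.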
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 68b35c83e637..aed07cba78ec 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -164,9 +164,14 @@ static inline int bpf2a64_offset(int bpf_insn, int off,
 	/*
 	 * Whereas arm64 branch instructions encode the offset
 	 * from the branch itself, so we must subtract 1 from the
-	 * instruction offset.
+	 * instruction offset. The unit of ctx->offset is byte, so
+	 * subtract AARCH64_INSN_SIZE from it. bpf2a64_offset()
+	 * returns instruction offset, so divide by AARCH64_INSN_SIZE
+	 * at the end.
 	 */
-	return ctx->offset[bpf_insn + off] - (ctx->offset[bpf_insn] - 1);
+	return (ctx->offset[bpf_insn + off] -
+		(ctx->offset[bpf_insn] - AARCH64_INSN_SIZE)) /
+		AARCH64_INSN_SIZE;
 }

 static void jit_fill_hole(void *area, unsigned int size)
@@ -1087,13 +1092,14 @@ static int build_body(struct jit_ctx *ctx, bool extra_pass)
 		const struct bpf_insn *insn = &prog->insnsi[i];
 		int ret;

+		/* BPF line info needs byte-offset instead of insn-offset */
 		if (ctx->image == NULL)
-			ctx->offset[i] = ctx->idx;
+			ctx->offset[i] = ctx->idx * AARCH64_INSN_SIZE;
 		ret = build_insn(insn, ctx, extra_pass);
 		if (ret > 0) {
 			i++;
 			if (ctx->image == NULL)
-				ctx->offset[i] = ctx->idx;
+				ctx->offset[i] = ctx->idx * AARCH64_INSN_SIZE;
 			continue;
 		}
 		if (ret)
@@ -1105,7 +1111,7 @@ static int build_body(struct jit_ctx *ctx, bool extra_pass)
 	 * instruction (end of program)
 	 */
 	if (ctx->image == NULL)
-		ctx->offset[i] = ctx->idx;
+		ctx->offset[i] = ctx->idx * AARCH64_INSN_SIZE;
 	return 0;
 }
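The arithmetic of the reworked bpf2a64_offset() can be sanity-checked with a small sketch (userspace, hypothetical byte offsets; AARCH64_INSN_SIZE is 4 on arm64). Subtracting AARCH64_INSN_SIZE plays the role the "- 1" played for instruction-granular offsets, and the final division converts the byte distance back into the instruction count that arm64 branch encoding needs:

#include <assert.h>

#define AARCH64_INSN_SIZE 4

/* Hypothetical post-patch ctx->offset: byte offset of the first
 * arm64 insn emitted for each BPF insn. */
static const int offset[] = { 0, 8, 12, 24 };

/* Mirrors the patched return expression. */
static int bpf2a64_offset(int bpf_insn, int off)
{
	return (offset[bpf_insn + off] -
		(offset[bpf_insn] - AARCH64_INSN_SIZE)) /
		AARCH64_INSN_SIZE;
}

int main(void)
{
	/* From index 1 with off = 2: (24 - (8 - 4)) / 4 = 5. The old
	 * insn-granular offsets {0, 2, 3, 6} give the same result:
	 * 6 - (2 - 1) = 5. */
	assert(bpf2a64_offset(1, 2) == 5);
	return 0;
}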
insn_to_jit_off passed to bpf_prog_fill_jited_linfo() is calculated
in instruction granularity instead of byte granularity, but bpf
line info requires byte offset, so fix it by calculating ctx->offset
as byte-offset. bpf2a64_offset() needs to return relative instruction
offset by using ctx->offset, so update it accordingly.

Fixes: 37ab566c178d ("bpf: arm64: Enable arm64 jit to provide bpf_line_info")
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 arch/arm64/net/bpf_jit_comp.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)
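The granularity mismatch described above can be made concrete with a toy layout (hypothetical values; AARCH64_INSN_SIZE is 4 on arm64), since a single BPF insn may expand to several arm64 instructions:

#include <assert.h>

#define AARCH64_INSN_SIZE 4

int main(void)
{
	/* Toy 3-insn program: BPF insn 1 expands to two arm64 insns. */
	const int insn_off[] = { 0, 2, 3 };	/* pre-patch ctx->offset */
	const int byte_off[] = { 0, 8, 12 };	/* what line info needs */
	int i;

	/* bpf_prog_fill_jited_linfo() expects the jitted byte at which
	 * each BPF insn starts, e.g. byte 12 for BPF insn 2, not the
	 * bare instruction index 3. */
	for (i = 0; i < 3; i++)
		assert(byte_off[i] == insn_off[i] * AARCH64_INSN_SIZE);
	return 0;
}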