Message ID | 20241018221644.3240898-1-pcc@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | bpf, arm64: Fix address emission with tag-based KASAN enabled |
On 10/19/2024 6:16 AM, Peter Collingbourne wrote:
> When BPF_TRAMP_F_CALL_ORIG is enabled, the address of a bpf_tramp_image
> struct on the stack is passed during the size calculation pass and
> an address on the heap is passed during code generation. This may
> cause a heap buffer overflow if the heap address is tagged because
> emit_a64_mov_i64() will emit longer code than it did during the size
> calculation pass. The same problem could occur without tag-based
> KASAN if one of the 16-bit words of the stack address happened to
> be all-ones during the size calculation pass. Fix the problem by
> assuming the worst case (4 instructions) when calculating the size
> of the bpf_tramp_image address emission.
>
> Fixes: 19d3c179a377 ("bpf, arm64: Fix trampoline for BPF_TRAMP_F_CALL_ORIG")
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> Link: https://linux-review.googlesource.com/id/I1496f2bc24fba7a1d492e16e2b94cf43714f2d3c
> Cc: stable@vger.kernel.org
> ---
>  arch/arm64/net/bpf_jit_comp.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 8bbd0b20136a8..5db82bfc9dc11 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -2220,7 +2220,11 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
>  		emit(A64_STR64I(A64_R(20), A64_SP, regs_off + 8), ctx);
>
>  	if (flags & BPF_TRAMP_F_CALL_ORIG) {
> -		emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
> +		/* for the first pass, assume the worst case */
> +		if (!ctx->image)
> +			ctx->idx += 4;
> +		else
> +			emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
>  		emit_call((const u64)__bpf_tramp_enter, ctx);
>  	}
>
> @@ -2264,7 +2268,11 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
>
>  	if (flags & BPF_TRAMP_F_CALL_ORIG) {
>  		im->ip_epilogue = ctx->ro_image + ctx->idx;
> -		emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
> +		/* for the first pass, assume the worst case */
> +		if (!ctx->image)
> +			ctx->idx += 4;
> +		else
> +			emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
>  		emit_call((const u64)__bpf_tramp_exit, ctx);
>  	}

Acked-by: Xu Kuohai <xukuohai@huawei.com>
Hello:

This patch was applied to bpf/bpf.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Fri, 18 Oct 2024 15:16:43 -0700 you wrote:
> When BPF_TRAMP_F_CALL_ORIG is enabled, the address of a bpf_tramp_image
> struct on the stack is passed during the size calculation pass and
> an address on the heap is passed during code generation. This may
> cause a heap buffer overflow if the heap address is tagged because
> emit_a64_mov_i64() will emit longer code than it did during the size
> calculation pass. The same problem could occur without tag-based
> KASAN if one of the 16-bit words of the stack address happened to
> be all-ones during the size calculation pass. Fix the problem by
> assuming the worst case (4 instructions) when calculating the size
> of the bpf_tramp_image address emission.
>
> [...]

Here is the summary with links:
  - bpf, arm64: Fix address emission with tag-based KASAN enabled
    https://git.kernel.org/bpf/bpf/c/a552e2ef5fd1

You are awesome, thank you!
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 8bbd0b20136a8..5db82bfc9dc11 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -2220,7 +2220,11 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
 		emit(A64_STR64I(A64_R(20), A64_SP, regs_off + 8), ctx);

 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
-		emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
+		/* for the first pass, assume the worst case */
+		if (!ctx->image)
+			ctx->idx += 4;
+		else
+			emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
 		emit_call((const u64)__bpf_tramp_enter, ctx);
 	}

@@ -2264,7 +2268,11 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,

 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
 		im->ip_epilogue = ctx->ro_image + ctx->idx;
-		emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
+		/* for the first pass, assume the worst case */
+		if (!ctx->image)
+			ctx->idx += 4;
+		else
+			emit_a64_mov_i64(A64_R(0), (const u64)im, ctx);
 		emit_call((const u64)__bpf_tramp_exit, ctx);
 	}
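[Editorial note on the `!ctx->image` test: the arm64 JIT builds a trampoline in two passes, a size-calculation pass that runs with no output buffer and a code-generation pass that writes into the buffer the first pass sized. Below is a minimal user-space model of that pattern, a sketch rather than the kernel's code: the real jit_ctx has more fields and the real emit() converts each instruction to little-endian.]

#include <stdint.h>

/* Two-pass emission model: pass 1 runs with image == NULL and only
 * counts instructions; pass 2 writes them. Any divergence in count
 * between the passes overruns the buffer sized by pass 1. */
struct jit_ctx {
	int idx;          /* instruction index, advanced in both passes */
	uint32_t *image;  /* NULL during the size-calculation pass */
};

static void emit(uint32_t insn, struct jit_ctx *ctx)
{
	if (ctx->image)                   /* pass 2 only: store the insn */
		ctx->image[ctx->idx] = insn;
	ctx->idx++;                       /* both passes: count it */
}

Under this scheme, any emission helper whose length depends on its operand's value, like emit_a64_mov_i64(), must reserve its worst case during the first pass. For a 64-bit immediate that is one MOVZ (or MOVN) plus up to three MOVKs, one per remaining 16-bit halfword, which is the four instructions the patch reserves with ctx->idx += 4.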
When BPF_TRAMP_F_CALL_ORIG is enabled, the address of a bpf_tramp_image
struct on the stack is passed during the size calculation pass and
an address on the heap is passed during code generation. This may
cause a heap buffer overflow if the heap address is tagged because
emit_a64_mov_i64() will emit longer code than it did during the size
calculation pass. The same problem could occur without tag-based
KASAN if one of the 16-bit words of the stack address happened to
be all-ones during the size calculation pass. Fix the problem by
assuming the worst case (4 instructions) when calculating the size
of the bpf_tramp_image address emission.

Fixes: 19d3c179a377 ("bpf, arm64: Fix trampoline for BPF_TRAMP_F_CALL_ORIG")
Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/I1496f2bc24fba7a1d492e16e2b94cf43714f2d3c
Cc: stable@vger.kernel.org
---
 arch/arm64/net/bpf_jit_comp.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
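[Editorial note: to make the length mismatch concrete, here is a standalone user-space sketch of the halfword-skipping strategy described above. It approximates emit_a64_mov_i64()'s counting behavior rather than copying it, and the example addresses and the 0xf2 tag byte are hypothetical.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A 64-bit immediate is built with one MOVZ (or MOVN for mostly-ones
 * values) plus one MOVK per 16-bit halfword that differs from the
 * background pattern (0x0000 or 0xffff), so the instruction count
 * depends on the value being loaded. */
static int insns_for_mov_i64(uint64_t val)
{
	/* Prefer a MOVN-based sequence when the value is mostly ones. */
	bool inverse = ~val < val;
	uint16_t skip = inverse ? 0xffff : 0x0000;
	int count = 1; /* the leading MOVZ or MOVN */
	int shift;

	for (shift = 16; shift < 64; shift += 16)
		if ((uint16_t)(val >> shift) != skip)
			count++; /* one MOVK per non-background halfword */
	return count;
}

int main(void)
{
	/* Hypothetical untagged kernel address: the top halfword is
	 * 0xffff, so the MOVN-based sequence skips it. */
	uint64_t untagged = 0xffff800012345678ULL;
	/* Same address with a hypothetical KASAN tag (0xf2) in the top
	 * byte: the top halfword no longer matches the background,
	 * costing one extra MOVK. */
	uint64_t tagged = 0xf2ff800012345678ULL;

	printf("untagged: %d insns\n", insns_for_mov_i64(untagged));
	printf("tagged:   %d insns\n", insns_for_mov_i64(tagged));
	return 0;
}

Built with a plain C compiler, this prints 3 for the untagged address and 4 for the tagged one: exactly the one-instruction overrun that the worst-case reservation of four instructions closes.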