| Message ID | 20240522054507.3941595-1-xiao.w.wang@intel.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | riscv, bpf: Use STACK_ALIGN macro for size rounding up |
On 2024/5/22 13:45, Xiao Wang wrote:
> Use the macro STACK_ALIGN that is defined in asm/processor.h for stack size
> rounding up, just like bpf_jit_comp32.c does.
>
> Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
> ---
>  arch/riscv/net/bpf_jit_comp64.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
> index 39149ad002da..bd869d41612f 100644
> --- a/arch/riscv/net/bpf_jit_comp64.c
> +++ b/arch/riscv/net/bpf_jit_comp64.c
> @@ -858,7 +858,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
>  	stack_size += 8;
>  	sreg_off = stack_size;
>
> -	stack_size = round_up(stack_size, 16);
> +	stack_size = round_up(stack_size, STACK_ALIGN);
>
>  	if (!is_struct_ops) {
>  		/* For the trampoline called from function entry,
> @@ -1723,7 +1723,7 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx, bool is_subprog)
>  {
>  	int i, stack_adjust = 0, store_offset, bpf_stack_adjust;
>
> -	bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, 16);
> +	bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, STACK_ALIGN);
>  	if (bpf_stack_adjust)
>  		mark_fp(ctx);
>
> @@ -1743,7 +1743,7 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx, bool is_subprog)
>  	if (seen_reg(RV_REG_S6, ctx))
>  		stack_adjust += 8;
>
> -	stack_adjust = round_up(stack_adjust, 16);
> +	stack_adjust = round_up(stack_adjust, STACK_ALIGN);
>  	stack_adjust += bpf_stack_adjust;
>
>  	store_offset = stack_adjust - 8;

Reviewed-by: Pu Lehui <pulehui@huawei.com>
On 2024/5/22 13:45, Xiao Wang wrote:
> Use the macro STACK_ALIGN that is defined in asm/processor.h for stack size
> rounding up, just like bpf_jit_comp32.c does.
>
> Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
> ---
>  arch/riscv/net/bpf_jit_comp64.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)

It hit a patching conflict. I think you should target the bpf-next tree.
https://github.com/kernel-patches/bpf/pull/7080
> -----Original Message-----
> From: Pu Lehui <pulehui@huawei.com>
> Sent: Thursday, May 23, 2024 9:43 AM
> To: Wang, Xiao W <xiao.w.wang@intel.com>
> Cc: paul.walmsley@sifive.com; palmer@dabbelt.com; aou@eecs.berkeley.edu;
> luke.r.nels@gmail.com; xi.wang@gmail.com; bjorn@kernel.org; ast@kernel.org;
> daniel@iogearbox.net; andrii@kernel.org; martin.lau@linux.dev;
> eddyz87@gmail.com; song@kernel.org; yonghong.song@linux.dev;
> john.fastabend@gmail.com; kpsingh@kernel.org; sdf@google.com;
> haoluo@google.com; jolsa@kernel.org; linux-riscv@lists.infradead.org;
> linux-kernel@vger.kernel.org; bpf@vger.kernel.org; Li, Haicheng
> <haicheng.li@intel.com>
> Subject: Re: [PATCH] riscv, bpf: Use STACK_ALIGN macro for size rounding up
>
> On 2024/5/22 13:45, Xiao Wang wrote:
> > Use the macro STACK_ALIGN that is defined in asm/processor.h for stack
> > size rounding up, just like bpf_jit_comp32.c does.
> >
> > Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
> > ---
> >   arch/riscv/net/bpf_jit_comp64.c | 6 +++---
> >   1 file changed, 3 insertions(+), 3 deletions(-)
>
> It hit a patching conflict. I think you should target the bpf-next tree.
> https://github.com/kernel-patches/bpf/pull/7080

OK, I will make a v2 based on bpf-next.

BRs,
Xiao
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 39149ad002da..bd869d41612f 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -858,7 +858,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
 	stack_size += 8;
 	sreg_off = stack_size;
 
-	stack_size = round_up(stack_size, 16);
+	stack_size = round_up(stack_size, STACK_ALIGN);
 
 	if (!is_struct_ops) {
 		/* For the trampoline called from function entry,
@@ -1723,7 +1723,7 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx, bool is_subprog)
 {
 	int i, stack_adjust = 0, store_offset, bpf_stack_adjust;
 
-	bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, 16);
+	bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, STACK_ALIGN);
 	if (bpf_stack_adjust)
 		mark_fp(ctx);
 
@@ -1743,7 +1743,7 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx, bool is_subprog)
 	if (seen_reg(RV_REG_S6, ctx))
 		stack_adjust += 8;
 
-	stack_adjust = round_up(stack_adjust, 16);
+	stack_adjust = round_up(stack_adjust, STACK_ALIGN);
 	stack_adjust += bpf_stack_adjust;
 
 	store_offset = stack_adjust - 8;
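The change is a readability cleanup rather than a functional one: on RV64, STACK_ALIGN evaluates to 16, so round_up(x, STACK_ALIGN) yields the same values the hard-coded literal did. Below is a minimal standalone sketch of the rounding behavior; the round_up macro here is a simplified stand-in for the kernel's helper, and the STACK_ALIGN value is an assumption stated for the sketch, not pulled from the header.

```c
#include <stdio.h>

/* Simplified stand-in for the kernel's round_up(); correct only when
 * `align` is a power of two, which holds for stack alignment.
 */
#define round_up(x, align)	((((x) - 1) | ((align) - 1)) + 1)

/* Assumption for this sketch: STACK_ALIGN is 16 on RV64, matching the
 * RISC-V psABI's 16-byte stack alignment requirement.
 */
#define STACK_ALIGN		16

int main(void)
{
	/* Hypothetical BPF stack depths, chosen for illustration only. */
	unsigned int depths[] = { 0, 1, 8, 16, 17, 24, 100 };
	unsigned int i;

	for (i = 0; i < sizeof(depths) / sizeof(depths[0]); i++)
		printf("round_up(%3u, STACK_ALIGN) = %3u\n",
		       depths[i], round_up(depths[i], STACK_ALIGN));
	return 0;
}
```

Expressing the alignment through the macro means that if the alignment requirement ever changes, or when reading the RV32 JIT (bpf_jit_comp32.c already uses the macro), the intent is stated in one place instead of as a magic number at three call sites.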
Use the macro STACK_ALIGN that is defined in asm/processor.h for stack
size rounding up, just like bpf_jit_comp32.c does.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
---
 arch/riscv/net/bpf_jit_comp64.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
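For context, the macro picked up here comes from arch/riscv/include/asm/processor.h. The sketch below shows the assumed shape of that definition, not a verbatim copy; verify against the exact tree the patch targets, since the header has changed over time.

```c
/* arch/riscv/include/asm/processor.h (assumed shape, not verbatim):
 * the RISC-V psABI requires the stack pointer to stay 16-byte aligned
 * at procedure call boundaries, and the kernel encodes that here.
 */
#define STACK_ALIGN	16
```

Because the literal 16 and STACK_ALIGN agree on RV64, the diff above is behavior-preserving; the gain is that the JIT now states why it rounds, the same way bpf_jit_comp32.c already does.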