
riscv, bpf: Use STACK_ALIGN macro for size rounding up

Message ID 20240522054507.3941595-1-xiao.w.wang@intel.com (mailing list archive)
State Superseded
Series riscv, bpf: Use STACK_ALIGN macro for size rounding up

Checks

Context Check Description
conchuod/vmtest-for-next-PR success PR summary
conchuod/patch-1-test-1 success .github/scripts/patches/tests/build_rv32_defconfig.sh
conchuod/patch-1-test-2 success .github/scripts/patches/tests/build_rv64_clang_allmodconfig.sh
conchuod/patch-1-test-3 success .github/scripts/patches/tests/build_rv64_gcc_allmodconfig.sh
conchuod/patch-1-test-4 success .github/scripts/patches/tests/build_rv64_nommu_k210_defconfig.sh
conchuod/patch-1-test-5 success .github/scripts/patches/tests/build_rv64_nommu_virt_defconfig.sh
conchuod/patch-1-test-6 success .github/scripts/patches/tests/checkpatch.sh
conchuod/patch-1-test-7 success .github/scripts/patches/tests/dtb_warn_rv64.sh
conchuod/patch-1-test-8 success .github/scripts/patches/tests/header_inline.sh
conchuod/patch-1-test-9 success .github/scripts/patches/tests/kdoc.sh
conchuod/patch-1-test-10 success .github/scripts/patches/tests/module_param.sh
conchuod/patch-1-test-11 success .github/scripts/patches/tests/verify_fixes.sh
conchuod/patch-1-test-12 success .github/scripts/patches/tests/verify_signedoff.sh

Commit Message

Wang, Xiao W May 22, 2024, 5:45 a.m. UTC
Use the macro STACK_ALIGN that is defined in asm/processor.h for stack size
rounding up, just like bpf_jit_comp32.c does.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
---
 arch/riscv/net/bpf_jit_comp64.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

Comments

Pu Lehui May 22, 2024, 6:29 a.m. UTC | #1
On 2024/5/22 13:45, Xiao Wang wrote:
> Use the macro STACK_ALIGN that is defined in asm/processor.h for stack size
> rounding up, just like bpf_jit_comp32.c does.
> 
> Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
> ---
>   arch/riscv/net/bpf_jit_comp64.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
> index 39149ad002da..bd869d41612f 100644
> --- a/arch/riscv/net/bpf_jit_comp64.c
> +++ b/arch/riscv/net/bpf_jit_comp64.c
> @@ -858,7 +858,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
>   	stack_size += 8;
>   	sreg_off = stack_size;
>   
> -	stack_size = round_up(stack_size, 16);
> +	stack_size = round_up(stack_size, STACK_ALIGN);
>   
>   	if (!is_struct_ops) {
>   		/* For the trampoline called from function entry,
> @@ -1723,7 +1723,7 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx, bool is_subprog)
>   {
>   	int i, stack_adjust = 0, store_offset, bpf_stack_adjust;
>   
> -	bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, 16);
> +	bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, STACK_ALIGN);
>   	if (bpf_stack_adjust)
>   		mark_fp(ctx);
>   
> @@ -1743,7 +1743,7 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx, bool is_subprog)
>   	if (seen_reg(RV_REG_S6, ctx))
>   		stack_adjust += 8;
>   
> -	stack_adjust = round_up(stack_adjust, 16);
> +	stack_adjust = round_up(stack_adjust, STACK_ALIGN);
>   	stack_adjust += bpf_stack_adjust;
>   
>   	store_offset = stack_adjust - 8;

Reviewed-by: Pu Lehui <pulehui@huawei.com>

Pu Lehui May 23, 2024, 1:42 a.m. UTC | #2
On 2024/5/22 13:45, Xiao Wang wrote:
> Use the macro STACK_ALIGN that is defined in asm/processor.h for stack size
> rounding up, just like bpf_jit_comp32.c does.
> 
> Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
> ---
>   arch/riscv/net/bpf_jit_comp64.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)

This hit a merge conflict when applying. I think you should target the bpf-next tree.
https://github.com/kernel-patches/bpf/pull/7080

Wang, Xiao W May 23, 2024, 3:11 a.m. UTC | #3
> -----Original Message-----
> From: Pu Lehui <pulehui@huawei.com>
> Sent: Thursday, May 23, 2024 9:43 AM
> To: Wang, Xiao W <xiao.w.wang@intel.com>
> Cc: paul.walmsley@sifive.com; palmer@dabbelt.com;
> aou@eecs.berkeley.edu; luke.r.nels@gmail.com; xi.wang@gmail.com;
> bjorn@kernel.org; ast@kernel.org; daniel@iogearbox.net; andrii@kernel.org;
> martin.lau@linux.dev; eddyz87@gmail.com; song@kernel.org;
> yonghong.song@linux.dev; john.fastabend@gmail.com; kpsingh@kernel.org;
> sdf@google.com; haoluo@google.com; jolsa@kernel.org; linux-
> riscv@lists.infradead.org; linux-kernel@vger.kernel.org; bpf@vger.kernel.org;
> Li, Haicheng <haicheng.li@intel.com>
> Subject: Re: [PATCH] riscv, bpf: Use STACK_ALIGN macro for size rounding up
> 
> 
> On 2024/5/22 13:45, Xiao Wang wrote:
> > Use the macro STACK_ALIGN that is defined in asm/processor.h for stack size
> > rounding up, just like bpf_jit_comp32.c does.
> >
> > Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
> > ---
> >   arch/riscv/net/bpf_jit_comp64.c | 6 +++---
> >   1 file changed, 3 insertions(+), 3 deletions(-)
> 
> This hit a merge conflict when applying. I think you should target the bpf-next tree.
> https://github.com/kernel-patches/bpf/pull/7080

OK, I will send a v2 based on bpf-next.

BRs,
Xiao

Patch

diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 39149ad002da..bd869d41612f 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -858,7 +858,7 @@  static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
 	stack_size += 8;
 	sreg_off = stack_size;
 
-	stack_size = round_up(stack_size, 16);
+	stack_size = round_up(stack_size, STACK_ALIGN);
 
 	if (!is_struct_ops) {
 		/* For the trampoline called from function entry,
@@ -1723,7 +1723,7 @@  void bpf_jit_build_prologue(struct rv_jit_context *ctx, bool is_subprog)
 {
 	int i, stack_adjust = 0, store_offset, bpf_stack_adjust;
 
-	bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, 16);
+	bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, STACK_ALIGN);
 	if (bpf_stack_adjust)
 		mark_fp(ctx);
 
@@ -1743,7 +1743,7 @@  void bpf_jit_build_prologue(struct rv_jit_context *ctx, bool is_subprog)
 	if (seen_reg(RV_REG_S6, ctx))
 		stack_adjust += 8;
 
-	stack_adjust = round_up(stack_adjust, 16);
+	stack_adjust = round_up(stack_adjust, STACK_ALIGN);
 	stack_adjust += bpf_stack_adjust;
 
 	store_offset = stack_adjust - 8;