Message ID | 20220512074546.231616-6-hbathini@linux.ibm.com (mailing list archive) |
---|---|
State | Not Applicable |
Delegated to: | BPF |
Series | Atomics support for eBPF on powerpc |
Context | Check | Description |
---|---|---|
bpf/vmtest-bpf-next-PR | success | PR summary |
bpf/vmtest-bpf-next-VM_Test-1 | success | Logs for Kernel LATEST on ubuntu-latest + selftests |
bpf/vmtest-bpf-next-VM_Test-2 | success | Logs for Kernel LATEST on z15 + selftests |
netdev/tree_selection | success | Not a local patch |
On 12/05/2022 at 09:45, Hari Bathini wrote:
> This adds two atomic opcodes BPF_XCHG and BPF_CMPXCHG on ppc32, both
> of which include the BPF_FETCH flag. The kernel's atomic_cmpxchg
> operation fundamentally has 3 operands, but we only have two register
> fields. Therefore the operand we compare against (the kernel's API
> calls it 'old') is hard-coded to be BPF_REG_R0. Also, kernel's
> atomic_cmpxchg returns the previous value at dst_reg + off. JIT the
> same for BPF too with return value put in BPF_REG_0.
>
>     BPF_REG_R0 = atomic_cmpxchg(dst_reg + off, BPF_REG_R0, src_reg);

Ah, now we mix the xchg's with the bitwise operations. OK, that makes
the 'goto atomic_ops' in the previous patch easier to understand.

But the result is uneasy to read and follow. I think it would be
cleaner to separate the bitwise operations from these completely, even
if that duplicates half a dozen lines.

>
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> ---
>  arch/powerpc/net/bpf_jit_comp32.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
> index 5604ae1b60ab..4690fd6e9e52 100644
> --- a/arch/powerpc/net/bpf_jit_comp32.c
> +++ b/arch/powerpc/net/bpf_jit_comp32.c
> @@ -829,6 +829,23 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>  			/* we're done if this succeeded */
>  			PPC_BCC_SHORT(COND_NE, tmp_idx);
>  			break;
> +		case BPF_CMPXCHG:
> +			/* Compare with old value in BPF_REG_0 */
> +			EMIT(PPC_RAW_CMPW(bpf_to_ppc(BPF_REG_0), _R0));
> +			/* Don't set if different from old value */
> +			PPC_BCC_SHORT(COND_NE, (ctx->idx + 3) * 4);
> +			fallthrough;
> +		case BPF_XCHG:
> +			/* store new value */
> +			EMIT(PPC_RAW_STWCX(src_reg, tmp_reg, dst_reg));
> +			PPC_BCC_SHORT(COND_NE, tmp_idx);
> +			/*
> +			 * Return old value in src_reg for BPF_XCHG &
> +			 * BPF_REG_0 for BPF_CMPXCHG.
> +			 */
> +			EMIT(PPC_RAW_MR(imm == BPF_XCHG ? src_reg : bpf_to_ppc(BPF_REG_0),
> +					_R0));

If the line spreads over two lines, the compact form is probably not
worth it. It would be more readable as:

	if (imm == BPF_XCHG)
		EMIT(PPC_RAW_MR(src_reg, _R0));
	else
		EMIT(PPC_RAW_MR(bpf_to_ppc(BPF_REG_0), _R0));

In the end, it is probably even more readable if you separate the two
cases completely:

	case BPF_CMPXCHG:
		/* Compare with old value in BPF_REG_0 */
		EMIT(PPC_RAW_CMPW(bpf_to_ppc(BPF_REG_0), _R0));
		/* Don't set if different from old value */
		PPC_BCC_SHORT(COND_NE, (ctx->idx + 3) * 4);
		/* store new value */
		EMIT(PPC_RAW_STWCX(src_reg, tmp_reg, dst_reg));
		PPC_BCC_SHORT(COND_NE, tmp_idx);
		/* Return old value in BPF_REG_0 */
		EMIT(PPC_RAW_MR(bpf_to_ppc(BPF_REG_0), _R0));
		break;
	case BPF_XCHG:
		/* store new value */
		EMIT(PPC_RAW_STWCX(src_reg, tmp_reg, dst_reg));
		PPC_BCC_SHORT(COND_NE, tmp_idx);
		/* Return old value in src_reg */
		EMIT(PPC_RAW_MR(src_reg, _R0));
		break;

> +			break;
>  		default:
>  			pr_err_ratelimited("eBPF filter atomic op code %02x (@%d) unsupported\n",
>  					   code, i);
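For readers following the offset arithmetic in the quoted hunk: PPC_BCC_SHORT() takes its destination as a byte offset into the JIT image, and ctx->idx is the slot of the instruction about to be emitted, so (ctx->idx + 3) * 4 should resolve to the final mr. A layout sketch, assuming the macro behaviour in arch/powerpc/net/bpf_jit.h (instruction slots counted from the bcc itself):

	/*
	 * Layout sketch for the BPF_CMPXCHG path; 'idx' is ctx->idx at the
	 * point PPC_BCC_SHORT(COND_NE, (ctx->idx + 3) * 4) is expanded,
	 * i.e. the slot the bcc itself occupies.
	 *
	 *   idx + 0: bne    -> idx + 3   old != BPF_REG_0: skip the store
	 *   idx + 1: stwcx. src_reg, tmp_reg, dst_reg
	 *   idx + 2: bne    -> tmp_idx   reservation lost: retry the lwarx
	 *   idx + 3: mr     BPF_REG_0, _R0   old value returned either way
	 */

On a failed compare the branch thus skips only the store-conditional and its retry branch, while the mr still runs, which is what gives BPF_CMPXCHG its fetch semantics.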
diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index 5604ae1b60ab..4690fd6e9e52 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -829,6 +829,23 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			/* we're done if this succeeded */
 			PPC_BCC_SHORT(COND_NE, tmp_idx);
 			break;
+		case BPF_CMPXCHG:
+			/* Compare with old value in BPF_REG_0 */
+			EMIT(PPC_RAW_CMPW(bpf_to_ppc(BPF_REG_0), _R0));
+			/* Don't set if different from old value */
+			PPC_BCC_SHORT(COND_NE, (ctx->idx + 3) * 4);
+			fallthrough;
+		case BPF_XCHG:
+			/* store new value */
+			EMIT(PPC_RAW_STWCX(src_reg, tmp_reg, dst_reg));
+			PPC_BCC_SHORT(COND_NE, tmp_idx);
+			/*
+			 * Return old value in src_reg for BPF_XCHG &
+			 * BPF_REG_0 for BPF_CMPXCHG.
+			 */
+			EMIT(PPC_RAW_MR(imm == BPF_XCHG ? src_reg : bpf_to_ppc(BPF_REG_0),
+					_R0));
+			break;
 		default:
 			pr_err_ratelimited("eBPF filter atomic op code %02x (@%d) unsupported\n",
 					   code, i);
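To make the fallthrough above easier to verify, here is a plain C model of the semantics the two new cases implement. This is a review aid only: bpf_cmpxchg_model() and bpf_xchg_model() are invented names, and the generated code performs each operation atomically through the lwarx/stwcx. loop set up by the common atomic-op prologue, not as separate loads and stores:

	/*
	 * C model of the semantics JITed for BPF_CMPXCHG and BPF_XCHG
	 * (BPF_W). Sketch only: the real sequence is atomic.
	 */
	static unsigned int bpf_cmpxchg_model(unsigned int *addr,
					      unsigned int bpf_r0,
					      unsigned int src)
	{
		unsigned int old = *addr;	/* lwarx  _R0, tmp_reg, dst_reg */

		if (old == bpf_r0)		/* cmpw + bne: skip on mismatch */
			*addr = src;		/* stwcx. src_reg, tmp_reg, dst_reg */
		return old;			/* mr: old value into BPF_REG_0 */
	}

	static unsigned int bpf_xchg_model(unsigned int *addr, unsigned int src)
	{
		unsigned int old = *addr;	/* lwarx */

		*addr = src;			/* stwcx. */
		return old;			/* mr: old value into src_reg */
	}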
This adds two atomic opcodes BPF_XCHG and BPF_CMPXCHG on ppc32, both
of which include the BPF_FETCH flag. The kernel's atomic_cmpxchg
operation fundamentally has 3 operands, but we only have two register
fields. Therefore the operand we compare against (the kernel's API
calls it 'old') is hard-coded to be BPF_REG_R0. Also, kernel's
atomic_cmpxchg returns the previous value at dst_reg + off. JIT the
same for BPF too with return value put in BPF_REG_0.

    BPF_REG_R0 = atomic_cmpxchg(dst_reg + off, BPF_REG_R0, src_reg);

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp32.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
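For context, these opcodes normally reach the JIT via clang's __sync builtins, which the BPF backend lowers to BPF_ATOMIC | BPF_CMPXCHG and BPF_ATOMIC | BPF_XCHG (BPF_W size for 32-bit operands) when building with -mcpu=v3. A minimal sketch of such a program; the section name and locking logic are illustrative, not taken from this series:

	// SPDX-License-Identifier: GPL-2.0
	/* Illustrative BPF program exercising the opcodes this patch JITs.
	 * Build (sketch): clang -O2 -g -target bpf -mcpu=v3 -c atomics_demo.c
	 */
	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	__u32 lock;	/* global variables end up in the .bss map */
	__u32 counter;

	SEC("tc")
	int atomics_demo(struct __sk_buff *skb)
	{
		/* BPF_ATOMIC | BPF_CMPXCHG: 'old' (0) travels in R0, the
		 * previous value of 'lock' comes back in R0. */
		__u32 old = __sync_val_compare_and_swap(&lock, 0, 1);

		if (old == 0) {
			counter++;
			/* BPF_ATOMIC | BPF_XCHG: unconditionally store 0,
			 * fetching the previous value. */
			__sync_lock_test_and_set(&lock, 0);
		}
		return 0;
	}

	char _license[] SEC("license") = "GPL";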