Message ID | 20210215171208.1181305-1-jackmanb@google.com (mailing list archive) |
---|---|
State | Changes Requested |
Delegated to: | BPF |
Series | [bpf-next] bpf: x86: Explicitly zero-extend rax after 32-bit cmpxchg |
On Mon, Feb 15, 2021 at 6:12 PM Brendan Jackman <jackmanb@google.com> wrote:
>
> As pointed out by Ilya and explained in the new comment, there's a
> discrepancy between x86 and BPF CMPXCHG semantics: BPF always loads
> the value from memory into r0, while x86 only does so when r0 and the
> value in memory are different.
>
> At first this might sound like pure semantics, but it makes a real
> difference when the comparison is 32-bit, since the load will
> zero-extend r0/rax.
>
> The fix is to explicitly zero-extend rax after doing such a CMPXCHG.
>
> Note that this doesn't generate totally optimal code: at one of
> emit_atomic's callsites (where BPF_{AND,OR,XOR} | BPF_FETCH are

I think this should be okay and was also suggested by Alexei in:

https://lore.kernel.org/bpf/CAADnVQ+gnQED7WYAw7Vmm5=omngCKYXnmgU_NqPUfESBerH8gQ@mail.gmail.com/

> implemented), the new mov is superfluous because there's already a
> mov generated afterwards that will zero-extend r0. We could avoid
> this unnecessary mov by just moving the new logic outside of
> emit_atomic. But I think it's simpler to keep emit_atomic as a unit
> of correctness (it generates the correct x86 code for a certain set
> of BPF instructions, no further knowledge is needed to use it
> correctly).
>
> Reported-by: Ilya Leoshkevich <iii@linux.ibm.com>
> Fixes: 5ffa25502b5a ("bpf: Add instructions for atomic_[cmp]xchg")
> Signed-off-by: Brendan Jackman <jackmanb@google.com>

Thanks for fixing this!

Acked-by: KP Singh <kpsingh@kernel.org>
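For context, the semantic gap under discussion can be summarized with a small
C model. This is an illustrative, non-atomic sketch only (not kernel code);
the function name and types are invented for the example:

    #include <stdint.h>

    /*
     * Illustrative model of BPF_CMPXCHG | BPF_W semantics: R0 always
     * receives the 32-bit value loaded from memory, zero-extended to
     * 64 bits, even when the comparison succeeds. x86 CMPXCHG leaves
     * rax untouched on success, hence the explicit zero-extension
     * added by the patch (a 32-bit mov of rax onto itself).
     */
    static uint64_t bpf_w_cmpxchg_model(uint32_t *addr, uint64_t r0, uint32_t src)
    {
            uint32_t old = *addr;           /* 32-bit load from memory */

            if ((uint32_t)r0 == old)        /* compare low 32 bits of R0 */
                    *addr = src;            /* store the new value on a match */

            return (uint64_t)old;           /* R0 = old value, upper 32 bits zero */
    }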
On 2/15/21 6:12 PM, Brendan Jackman wrote:
> As pointed out by Ilya and explained in the new comment, there's a
> discrepancy between x86 and BPF CMPXCHG semantics: BPF always loads
> the value from memory into r0, while x86 only does so when r0 and the
> value in memory are different.
[...]
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 79e7a0ec1da5..7919d5c54164 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -834,6 +834,16 @@ static int emit_atomic(u8 **pprog, u8 atomic_op,
>
>         emit_insn_suffix(&prog, dst_reg, src_reg, off);
>
> +       if (atomic_op == BPF_CMPXCHG && bpf_size == BPF_W) {
> +               /*
> +                * BPF_CMPXCHG unconditionally loads into R0, which means it
> +                * zero-extends 32-bit values. However x86 CMPXCHG doesn't do a
> +                * load if the comparison is successful. Therefore zero-extend
> +                * explicitly.
> +                */
> +               emit_mov_reg(&prog, false, BPF_REG_0, BPF_REG_0);

How does the situation look on other archs when they need to implement
this in future? Mainly asking whether it would be better to instead move
this logic into the verifier, so it'll be consistent across all archs.

> +       }
> +
>         *pprog = prog;
>         return 0;
>  }
[...]
On Mon, 2021-02-15 at 23:20 +0100, Daniel Borkmann wrote:
> On 2/15/21 6:12 PM, Brendan Jackman wrote:
> > As pointed out by Ilya and explained in the new comment, there's a
> > discrepancy between x86 and BPF CMPXCHG semantics: BPF always loads
> > the value from memory into r0, while x86 only does so when r0 and the
> > value in memory are different.
[...]
> > +       if (atomic_op == BPF_CMPXCHG && bpf_size == BPF_W) {
> > +               /*
> > +                * BPF_CMPXCHG unconditionally loads into R0, which means it
> > +                * zero-extends 32-bit values. However x86 CMPXCHG doesn't do a
> > +                * load if the comparison is successful. Therefore zero-extend
> > +                * explicitly.
> > +                */
> > +               emit_mov_reg(&prog, false, BPF_REG_0, BPF_REG_0);
>
> How does the situation look on other archs when they need to implement
> this in future? Mainly asking whether it would be better to instead move
> this logic into the verifier, so it'll be consistent across all archs.

I have exactly the same check in my s390 wip patch.
So having a common solution would be great.

[...]
On 2/15/21 11:24 PM, Ilya Leoshkevich wrote:
> On Mon, 2021-02-15 at 23:20 +0100, Daniel Borkmann wrote:
>> On 2/15/21 6:12 PM, Brendan Jackman wrote:
>>> As pointed out by Ilya and explained in the new comment, there's a
>>> discrepancy between x86 and BPF CMPXCHG semantics: BPF always loads
>>> the value from memory into r0, while x86 only does so when r0 and the
>>> value in memory are different.
[...]
>>> +               emit_mov_reg(&prog, false, BPF_REG_0, BPF_REG_0);
>>
>> How does the situation look on other archs when they need to implement
>> this in future? Mainly asking whether it would be better to instead move
>> this logic into the verifier, so it'll be consistent across all archs.
>
> I have exactly the same check in my s390 wip patch.
> So having a common solution would be great.

We do rewrites for various cases like div/mod handling, perhaps it would
be best to emit an explicit BPF_MOV32_REG(insn->dst_reg, insn->dst_reg)
there, see fixup_bpf_calls().
On Mon, 2021-02-15 at 23:35 +0100, Daniel Borkmann wrote:
> On 2/15/21 11:24 PM, Ilya Leoshkevich wrote:
> > On Mon, 2021-02-15 at 23:20 +0100, Daniel Borkmann wrote:
> > > On 2/15/21 6:12 PM, Brendan Jackman wrote:
[...]
> > > How does the situation look on other archs when they need to
> > > implement this in future? Mainly asking whether it would be better
> > > to instead move this logic into the verifier, so it'll be
> > > consistent across all archs.
> >
> > I have exactly the same check in my s390 wip patch.
> > So having a common solution would be great.
>
> We do rewrites for various cases like div/mod handling, perhaps it
> would be best to emit an explicit BPF_MOV32_REG(insn->dst_reg,
> insn->dst_reg) there, see fixup_bpf_calls().

How about BPF_ZEXT_REG? Then arches that don't need this (I think
aarch64's instruction always zero-extends) can detect this using
insn_is_zext() and skip such insns.
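For reference, the two helpers mentioned here already exist; the sketch below
shows approximately what they look like (see include/linux/filter.h in the
tree for the authoritative definitions). A mov32 with imm == 1 marks an
explicit zero-extension, which is what a JIT can look for and elide:

    #define BPF_ZEXT_REG(DST)                                   \
            ((struct bpf_insn) {                                \
                    .code  = BPF_ALU | BPF_MOV | BPF_X,         \
                    .dst_reg = DST,                             \
                    .src_reg = DST,                             \
                    .off   = 0,                                 \
                    .imm   = 1 })

    static inline bool insn_is_zext(const struct bpf_insn *insn)
    {
            return insn->code == (BPF_ALU | BPF_MOV | BPF_X) && insn->imm == 1;
    }

So an arch whose 32-bit atomics already zero-extend the result register could
recognize a following zext instruction with insn_is_zext() and skip it.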
On Mon, Feb 15, 2021 at 11:42 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> [...]
>
> > > > How does the situation look on other archs when they need to
> > > > implement this in future? Mainly asking whether it would be better
> > > > to instead move this logic into the verifier, so it'll be
> > > > consistent across all archs.
> > >
> > > I have exactly the same check in my s390 wip patch.
> > > So having a common solution would be great.
> >
> > We do rewrites for various cases like div/mod handling, perhaps it
> > would be best to emit an explicit BPF_MOV32_REG(insn->dst_reg,
> > insn->dst_reg) there, see fixup_bpf_calls().

Agreed, this would be better.

> How about BPF_ZEXT_REG? Then arches that don't need this (I think
> aarch64's instruction always zero-extends) can detect this using
> insn_is_zext() and skip such insns.

+1
On 2/16/21 12:30 AM, KP Singh wrote:
> On Mon, Feb 15, 2021 at 11:42 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> [...]
>
>>> We do rewrites for various cases like div/mod handling, perhaps it
>>> would be best to emit an explicit BPF_MOV32_REG(insn->dst_reg,
>>> insn->dst_reg) there, see fixup_bpf_calls().
>
> Agreed, this would be better.
>
>> How about BPF_ZEXT_REG? Then arches that don't need this (I think
>> aarch64's instruction always zero-extends) can detect this using
>> insn_is_zext() and skip such insns.
>
> +1

That would be nicer indeed.
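To make the agreed direction more concrete, here is a rough sketch of what
such a verifier-side rewrite could look like, modeled on the existing div/mod
patching in fixup_bpf_calls(). This is an illustration of the idea only, not
the actual follow-up patch; the variables (env, insn, i, delta, new_prog)
follow the conventions of that function:

    /*
     * Sketch only: insert an explicit zero-extension of R0 after a
     * 32-bit BPF_CMPXCHG once, in the verifier, so individual JITs
     * don't each have to special-case it.
     */
    if (insn->code == (BPF_STX | BPF_W | BPF_ATOMIC) &&
        insn->imm == BPF_CMPXCHG) {
            struct bpf_insn zext_patch[] = {
                    *insn,
                    BPF_ZEXT_REG(BPF_REG_0),  /* mov32 r0, r0, marked as zext */
            };

            new_prog = bpf_patch_insn_data(env, i + delta, zext_patch,
                                           ARRAY_SIZE(zext_patch));
            if (!new_prog)
                    return -ENOMEM;

            delta    += ARRAY_SIZE(zext_patch) - 1;
            env->prog = new_prog;
            insn      = new_prog->insnsi + i + delta;
            continue;
    }

An arch whose 32-bit cmpxchg already zero-extends could then skip the
inserted instruction via insn_is_zext(), as suggested above.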
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 79e7a0ec1da5..7919d5c54164 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -834,6 +834,16 @@ static int emit_atomic(u8 **pprog, u8 atomic_op,
 
        emit_insn_suffix(&prog, dst_reg, src_reg, off);
 
+       if (atomic_op == BPF_CMPXCHG && bpf_size == BPF_W) {
+               /*
+                * BPF_CMPXCHG unconditionally loads into R0, which means it
+                * zero-extends 32-bit values. However x86 CMPXCHG doesn't do a
+                * load if the comparison is successful. Therefore zero-extend
+                * explicitly.
+                */
+               emit_mov_reg(&prog, false, BPF_REG_0, BPF_REG_0);
+       }
+
        *pprog = prog;
        return 0;
 }
diff --git a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
index 2efd8bcf57a1..6e52dfc64415 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
@@ -94,3 +94,28 @@
        .result = REJECT,
        .errstr = "invalid read from stack",
 },
+{
+       "BPF_W cmpxchg should zero top 32 bits",
+       .insns = {
+               /* r0 = U64_MAX; */
+               BPF_MOV64_IMM(BPF_REG_0, 0),
+               BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+               /* u64 val = r0; */
+               BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
+               /* r0 = (u32)atomic_cmpxchg((u32 *)&val, r0, 1); */
+               BPF_MOV32_IMM(BPF_REG_1, 1),
+               BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, BPF_REG_10, BPF_REG_1, -8),
+               /* r1 = 0x00000000FFFFFFFFull; */
+               BPF_MOV64_IMM(BPF_REG_1, 1),
+               BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
+               BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
+               /* if (r0 != r1) exit(1); */
+               BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_1, 2),
+               BPF_MOV32_IMM(BPF_REG_0, 1),
+               BPF_EXIT_INSN(),
+               /* exit(0); */
+               BPF_MOV32_IMM(BPF_REG_0, 0),
+               BPF_EXIT_INSN(),
+       },
+       .result = ACCEPT,
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_or.c b/tools/testing/selftests/bpf/verifier/atomic_or.c
index 70f982e1f9f0..e0811eb11542 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_or.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_or.c
@@ -75,3 +75,29 @@
        },
        .result = ACCEPT,
 },
+{
+       "BPF_W atomic or should zero top 32 bits",
+       .insns = {
+               /* r1 = U64_MAX; */
+               BPF_MOV64_IMM(BPF_REG_1, 0),
+               BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
+               /* u64 val = r1; */
+               BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+               /* r1 = (u32)atomic_fetch_or((u32 *)&val, 2); */
+               BPF_MOV32_IMM(BPF_REG_1, 2),
+               BPF_ATOMIC_OP(BPF_W, BPF_OR | BPF_FETCH, BPF_REG_10, BPF_REG_1, -8),
+               /* r2 = 0x00000000FFFFFFFF; */
+               BPF_MOV64_IMM(BPF_REG_2, 1),
+               BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 32),
+               BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 1),
+               /* if (r2 != r1) exit(r1); */
+               BPF_JMP_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 2),
+               /* exit with the bad value in r0 to aid debugging */
+               BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+               BPF_EXIT_INSN(),
+               /* exit(0); */
+               BPF_MOV32_IMM(BPF_REG_0, 0),
+               BPF_EXIT_INSN(),
+       },
+       .result = ACCEPT,
+},
As pointed out by Ilya and explained in the new comment, there's a
discrepancy between x86 and BPF CMPXCHG semantics: BPF always loads
the value from memory into r0, while x86 only does so when r0 and the
value in memory are different.

At first this might sound like pure semantics, but it makes a real
difference when the comparison is 32-bit, since the load will
zero-extend r0/rax.

The fix is to explicitly zero-extend rax after doing such a CMPXCHG.

Note that this doesn't generate totally optimal code: at one of
emit_atomic's callsites (where BPF_{AND,OR,XOR} | BPF_FETCH are
implemented), the new mov is superfluous because there's already a
mov generated afterwards that will zero-extend r0. We could avoid
this unnecessary mov by just moving the new logic outside of
emit_atomic. But I think it's simpler to keep emit_atomic as a unit
of correctness (it generates the correct x86 code for a certain set
of BPF instructions, no further knowledge is needed to use it
correctly).

Reported-by: Ilya Leoshkevich <iii@linux.ibm.com>
Fixes: 5ffa25502b5a ("bpf: Add instructions for atomic_[cmp]xchg")
Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
 arch/x86/net/bpf_jit_comp.c                   | 10 +++++++
 .../selftests/bpf/verifier/atomic_cmpxchg.c   | 25 ++++++++++++++++++
 .../selftests/bpf/verifier/atomic_or.c        | 26 +++++++++++++++++++
 3 files changed, 61 insertions(+)

base-commit: 5e1d40b75ed85ecd76347273da17e5da195c3e96