Message ID | 20220127162127.2391947-4-james.morse@arm.com (mailing list archive) |
---|---|
State | Accepted |
Commit | a6aab018829948c1818bed656656df9ae321408b |
Series | arm64: insn: Generate 64 bit mask immediates correctly |
On Thu, 27 Jan 2022 16:21:27 +0000, James Morse <james.morse@arm.com> wrote:
>
> When the insn framework is used to encode an AND/ORR/EOR instruction,
> aarch64_encode_immediate() is used to pick the immr and imms values.
>
> If the immediate is a 64bit mask, with bit 63 set, and zeros in any
> of the upper 32 bits, the immr value is incorrectly calculated, meaning
> the wrong mask is generated.
> For example, 0x8000000000000001 should have an immr of 1, but 32 is used,
> meaning the resulting mask is 0x0000000300000000.
>
> It would appear eBPF is unable to hit these cases, as build_insn()'s
> imm value is a s32, so when used with BPF_ALU64, the sign-extended
> u64 immediate would always have all-1s or all-0s in the upper 32 bits.
>
> KVM does not generate a va_mask with any of the top bits set, as these
> VAs wouldn't be usable with TTBR0_EL2.
>
> This happens because the rotation is calculated from fls(~imm), which
> takes an unsigned int, but the immediate may be 64bit.
>
> Use fls64() so the 64bit mask doesn't get truncated to a u32.
>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
>  arch/arm64/lib/insn.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
> index 8888e407032f..90253af7e294 100644
> --- a/arch/arm64/lib/insn.c
> +++ b/arch/arm64/lib/insn.c
> @@ -1381,7 +1381,7 @@ static u32 aarch64_encode_immediate(u64 imm,
>  	 * Compute the rotation to get a continuous set of
>  	 * ones, with the first bit set at position 0
>  	 */
> -	ror = fls(~imm);
> +	ror = fls64(~imm);
>  }
>
>  /*

Oh crap, not again... :-( Clearly, my initial test harness wasn't as
good as I thought.

Out of morbid curiosity, how was this found?

Brown-paper-bag-for: Marc Zyngier <maz@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>

	M.
diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
index 8888e407032f..90253af7e294 100644
--- a/arch/arm64/lib/insn.c
+++ b/arch/arm64/lib/insn.c
@@ -1381,7 +1381,7 @@ static u32 aarch64_encode_immediate(u64 imm,
 	 * Compute the rotation to get a continuous set of
 	 * ones, with the first bit set at position 0
 	 */
-	ror = fls(~imm);
+	ror = fls64(~imm);
 }

 /*
When the insn framework is used to encode an AND/ORR/EOR instruction,
aarch64_encode_immediate() is used to pick the immr and imms values.

If the immediate is a 64bit mask, with bit 63 set, and zeros in any
of the upper 32 bits, the immr value is incorrectly calculated, meaning
the wrong mask is generated.
For example, 0x8000000000000001 should have an immr of 1, but 32 is used,
meaning the resulting mask is 0x0000000300000000.

It would appear eBPF is unable to hit these cases, as build_insn()'s
imm value is a s32, so when used with BPF_ALU64, the sign-extended
u64 immediate would always have all-1s or all-0s in the upper 32 bits.

KVM does not generate a va_mask with any of the top bits set, as these
VAs wouldn't be usable with TTBR0_EL2.

This happens because the rotation is calculated from fls(~imm), which
takes an unsigned int, but the immediate may be 64bit.

Use fls64() so the 64bit mask doesn't get truncated to a u32.

Signed-off-by: James Morse <james.morse@arm.com>
---
 arch/arm64/lib/insn.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)