arm64: io: permit offset addressing

Message ID 20240124111259.874975-1-mark.rutland@arm.com (mailing list archive)
State New, archived
Series arm64: io: permit offset addressing

Commit Message

Mark Rutland Jan. 24, 2024, 11:12 a.m. UTC
Currently our IO accessors all use register addressing without offsets,
but we could safely use offset addressing (without writeback) to
simplify and optimize the generated code.

To function correctly under a hypervisor which emulates IO accesses, we
must ensure that any faulting/trapped IO access results in an ESR_ELx
value with ESR_ELx.ISS.ISV=1 and with the transfer register described in
ESR_ELx.ISS.SRT. This means that we can only use loads/stores of a
single general purpose register (or the zero register), and must avoid
writeback addressing modes. However, we can use immediate offset
addressing modes, as these still provide ESR_ELx.ISS.ISV=1 and a valid
ESR_ELx.ISS.SRT when those accesses fault at Stage-2.
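
For illustration, here are some hypothetical examples (not from the
patch) of how the addressing mode affects the syndrome reported when a
store faults at Stage-2:

| str x1, [x0]         // single GPR, register addressing: ISV=1, SRT=x1
| str x1, [x0, #8]     // single GPR, immediate offset:    ISV=1, SRT=x1
| str x1, [x0], #8     // post-index writeback:            ISV=0
| stp x1, x2, [x0]     // two registers transferred:       ISV=0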

Currently we only use register addressing without offsets. We use the
"r" constraint to place the address into a register, and manually
generate the register addressing by surrounding the resulting register
operand with square brackets, e.g.

| static __always_inline void __raw_writeq(u64 val, volatile void __iomem *addr)
| {
|         asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
| }

Due to this, sequences of adjacent accesses need to generate addresses
using separate instructions. For example, the following code:

| void writeq_zero_8_times(void *ptr)
| {
|        writeq_relaxed(0, ptr + 8 * 0);
|        writeq_relaxed(0, ptr + 8 * 1);
|        writeq_relaxed(0, ptr + 8 * 2);
|        writeq_relaxed(0, ptr + 8 * 3);
|        writeq_relaxed(0, ptr + 8 * 4);
|        writeq_relaxed(0, ptr + 8 * 5);
|        writeq_relaxed(0, ptr + 8 * 6);
|        writeq_relaxed(0, ptr + 8 * 7);
| }

... is compiled to:

| <writeq_zero_8_times>:
|     str     xzr, [x0]
|     add     x1, x0, #0x8
|     str     xzr, [x1]
|     add     x1, x0, #0x10
|     str     xzr, [x1]
|     add     x1, x0, #0x18
|     str     xzr, [x1]
|     add     x1, x0, #0x20
|     str     xzr, [x1]
|     add     x1, x0, #0x28
|     str     xzr, [x1]
|     add     x1, x0, #0x30
|     str     xzr, [x1]
|     add     x0, x0, #0x38
|     str     xzr, [x0]
|     ret

As described above, we could safely use immediate offset addressing,
which would allow the ADDs to be folded into the address generation for
the STRs, resulting in simpler and smaller generated assembly. We can do
this by using the "o" constraint to allow the compiler to generate
offset addressing (without writeback) for a memory operand, e.g.

| static __always_inline void __raw_writeq(u64 val, volatile void __iomem *addr)
| {
|         volatile u64 __iomem *ptr = addr;
|         asm volatile("str %x0, %1" : : "rZ" (val), "o" (*ptr));
| }

... which results in the earlier code sequence being compiled to:

| <writeq_zero_8_times>:
|     str     xzr, [x0]
|     str     xzr, [x0, #8]
|     str     xzr, [x0, #16]
|     str     xzr, [x0, #24]
|     str     xzr, [x0, #32]
|     str     xzr, [x0, #40]
|     str     xzr, [x0, #48]
|     str     xzr, [x0, #56]
|     ret

As Will notes at:

  https://lore.kernel.org/linux-arm-kernel/20240117160528.GA3398@willie-the-truck/

... some compilers struggle with a plain "o" constraint, so it's
preferable to use "Qo", where the additional "Q" constraint permits
using non-offset register addressing.

This patch modifies our IO write accessors to use "Qo" constraints,
resulting in the better code generation described above. The IO read
accessors are left as-is because ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE
requires that non-offset register addressing is used, as the LDAR
instruction does not support offset addressing.
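
For reference, here is a sketch of the read accessor shape (paraphrased
from arch/arm64/include/asm/io.h; details may differ). The ALTERNATIVE
can patch the LDRB to LDARB at runtime, and LDARB only supports
register addressing, so the address must stay in a plain base register:

| static __always_inline u8 __raw_readb(const volatile void __iomem *addr)
| {
|         u8 val;
|         asm volatile(ALTERNATIVE("ldrb %w0, [%1]",
|                                  "ldarb %w0, [%1]",
|                                  ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE)
|                      : "=r" (val) : "r" (addr));
|         return val;
| }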

When compiling v6.8-rc1 defconfig with GCC 13.2.0, this saves ~4KiB of
text:

| [mark@lakrids:~/src/linux]% ls -al vmlinux-*
| -rwxr-xr-x 1 mark mark 153960576 Jan 23 12:01 vmlinux-after
| -rwxr-xr-x 1 mark mark 153862192 Jan 23 11:57 vmlinux-before
|
| [mark@lakrids:~/src/linux]% size vmlinux-before vmlinux-after
|    text    data     bss     dec     hex filename
| 26708921        16690350         622736 44022007        29fb8f7 vmlinux-before
| 26704761        16690414         622736 44017911        29fa8f7 vmlinux-after

... though due to internal alignment of sections, this has no impact on
the size of the resulting Image:

| [mark@lakrids:~/src/linux]% ls -al Image-*
| -rw-r--r-- 1 mark mark 43590144 Jan 23 12:01 Image-after
| -rw-r--r-- 1 mark mark 43590144 Jan 23 11:57 Image-before

Aside from the better code generation, there should be no functional
change as a result of this patch. I have lightly tested this patch,
including booting under KVM (where some devices such as PL011 are
emulated).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/io.h | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

Comments

Jason Gunthorpe Jan. 24, 2024, 5:24 p.m. UTC | #1
On Wed, Jan 24, 2024 at 11:12:59AM +0000, Mark Rutland wrote:
> Currently our IO accessors all use register addressing without offsets,
> but we could safely use offset addressing (without writeback) to
> simplify and optimize the generated code.

> Aside from the better code generation, there should be no functional
> change as a result of this patch. I have lightly tested this patch,
> including booting under KVM (where some devices such as PL011 are
> emulated).

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

FWIW I have had 0-day compile test a very similar patch with no
compilation problems.

I also got similar code savings results.

However, I noticed that clang 17.0.6 at least does not see any
benefit; I suppose it is a missing compiler feature.

Finally, in my experiments with the WC issue it wasn't entirely
helpful: I could not get both gcc and clang to always generate
load-free store sequences for 64 bytes even with this.

Jason
Mark Rutland Jan. 25, 2024, 11:22 a.m. UTC | #2
Hi,

[adding LLVM folk for codegen concern below]

On Wed, Jan 24, 2024 at 01:24:18PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 24, 2024 at 11:12:59AM +0000, Mark Rutland wrote:
> > Currently our IO accessors all use register addressing without offsets,
> > but we could safely use offset addressing (without writeback) to
> > simplify and optimize the generated code.
> 
> > Aside from the better code generation, there should be no functional
> > change as a result of this patch. I have lightly tested this patch,
> > including booting under KVM (where some devices such as PL011 are
> > emulated).
> 
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> 
> FWIW I have had 0-day compile test a very similar patch with no
> compilation problems.
> 
> I also got similar code savings results.

Thanks; much appreciated!

> However, I noticed that clang 17.0.6 at least does not see any
> benefit; I suppose it is a missing compiler feature.

Hmm; yes, clang doesn't seem to handle this well.

LLVM folk, any idea why clang can't make use of offset addressing for the
inline assembly in the original patch?

  https://lore.kernel.org/linux-arm-kernel/20240124111259.874975-1-mark.rutland@arm.com/

I've written a self-contained test case below to demonstrate; I can't convince
clang to generate offset addressing modes whereas gcc will, meaning gcc
generates much nicer code. I'm not sure if this is just something clang can't
currently do or if we're tickling some edge case.

I also note that clang's output doesn't directly use XZR for the value;
I'm not sure if I'm doing something wrong here.

| [mark@lakrids:~/src/linux]% cat asm-test.c
| #define __always_inline inline __attribute__((always_inline))
| #define u64 unsigned long
| 
| static __always_inline void writeq(u64 val, volatile u64 *addr)
| {
|         asm volatile(
|         "       str     %x[val], %[addr]\n"
|         :
|         : [addr] "Qo" (*addr),
|           [val] "rZ" (val)
|         );
| }
| 
| void writeq_zero_8_times(volatile void *addr)
| {
|         writeq(0, addr + 8 * 0);
|         writeq(0, addr + 8 * 1);
|         writeq(0, addr + 8 * 2);
|         writeq(0, addr + 8 * 3);
|         writeq(0, addr + 8 * 4);
|         writeq(0, addr + 8 * 5);
|         writeq(0, addr + 8 * 6);
|         writeq(0, addr + 8 * 7);
| }
| [mark@lakrids:~/src/linux]% /mnt/data/opt/toolchain/kernel-org-llvm/17.0.6/llvm-17.0.6-x86_64/bin/clang --target=aarch64-linux -c asm-test.c -O2
| [mark@lakrids:~/src/linux]% usekorg 13.2.0 aarch64-linux-objdump -d asm-test.o
| 
| asm-test.o:     file format elf64-littleaarch64
| 
| 
| Disassembly of section .text:
| 
| 0000000000000000 <writeq_zero_8_times>:
|    0:   91002009        add     x9, x0, #0x8
|    4:   aa1f03e8        mov     x8, xzr
|    8:   9100400a        add     x10, x0, #0x10
|    c:   9100600b        add     x11, x0, #0x18
|   10:   9100800c        add     x12, x0, #0x20
|   14:   9100a00d        add     x13, x0, #0x28
|   18:   f9000008        str     x8, [x0]
|   1c:   f9000128        str     x8, [x9]
|   20:   9100c009        add     x9, x0, #0x30
|   24:   9100e00e        add     x14, x0, #0x38
|   28:   f9000148        str     x8, [x10]
|   2c:   f9000168        str     x8, [x11]
|   30:   f9000188        str     x8, [x12]
|   34:   f90001a8        str     x8, [x13]
|   38:   f9000128        str     x8, [x9]
|   3c:   f90001c8        str     x8, [x14]
|   40:   d65f03c0        ret
| [mark@lakrids:~/src/linux]% usekorg 13.2.0 aarch64-linux-gcc -c asm-test.c -O2
| [mark@lakrids:~/src/linux]% usekorg 13.2.0 aarch64-linux-objdump -d asm-test.o
| 
| asm-test.o:     file format elf64-littleaarch64
| 
| 
| Disassembly of section .text:
| 
| 0000000000000000 <writeq_zero_8_times>:
|    0:   f900001f        str     xzr, [x0]
|    4:   f900041f        str     xzr, [x0, #8]
|    8:   f900081f        str     xzr, [x0, #16]
|    c:   f9000c1f        str     xzr, [x0, #24]
|   10:   f900101f        str     xzr, [x0, #32]
|   14:   f900141f        str     xzr, [x0, #40]
|   18:   f900181f        str     xzr, [x0, #48]
|   1c:   f9001c1f        str     xzr, [x0, #56]
|   20:   d65f03c0        ret

In case this was an edge case I tried a few variations, which didn't seem to
help:

* Using different constraints (I tried input constraints {"Qo", "o", "m"} and
  output constraints {"=Qo", "=o", "=m"}); the output-constraint form is
  sketched after this list

* Removing "volatile" in case that was getting in the way of some optimization

* Turning writeq() into a macro in case this was a problem with inlining.
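
For instance, the output-constraint variation looked roughly like this
(a sketch, not the exact code tested):

| static __always_inline void writeq(u64 val, volatile u64 *addr)
| {
|         asm volatile("str %x1, %0" : "=Qo" (*addr) : "rZ" (val));
| }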

> Finally, in my experiments with the WC issue it wasn't entirely
> helpful: I could not get both gcc and clang to always generate
> load-free store sequences for 64 bytes even with this.

Fair enough, thanks for having a go.

FWIW, for the WC case I'd be happy to have inline asm for 8 back-to-back STRs,
if that would help? i.e. like before but with 8 STRs instead of 4 STPs.
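
A minimal sketch of what that could look like (hypothetical, simplified
to zeroing; not code from this thread):

| /*
|  * Hypothetical: zero 64 bytes with eight back-to-back STRs, each a
|  * single-register store so any Stage-2 fault still reports ISV=1.
|  */
| static __always_inline void writeq_zero_64b(volatile void __iomem *addr)
| {
|         asm volatile(
|         "       str     xzr, [%0]\n"
|         "       str     xzr, [%0, #8]\n"
|         "       str     xzr, [%0, #16]\n"
|         "       str     xzr, [%0, #24]\n"
|         "       str     xzr, [%0, #32]\n"
|         "       str     xzr, [%0, #40]\n"
|         "       str     xzr, [%0, #48]\n"
|         "       str     xzr, [%0, #56]\n"
|         : : "r" (addr) : "memory");
| }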

Mark.
Will Deacon Jan. 30, 2024, 11:56 a.m. UTC | #3
On Wed, Jan 24, 2024 at 11:12:59AM +0000, Mark Rutland wrote:
> diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
> index 3b694511b98f8..8d825522c55c8 100644
> --- a/arch/arm64/include/asm/io.h
> +++ b/arch/arm64/include/asm/io.h
> @@ -24,25 +24,29 @@
>  #define __raw_writeb __raw_writeb
>  static __always_inline void __raw_writeb(u8 val, volatile void __iomem *addr)
>  {
> -	asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr));
> +	volatile u8 __iomem *ptr = addr;
> +	asm volatile("strb %w0, %1" : : "rZ" (val), "Qo" (*ptr));
>  }
>  
>  #define __raw_writew __raw_writew
>  static __always_inline void __raw_writew(u16 val, volatile void __iomem *addr)
>  {
> -	asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr));
> +	volatile u16 __iomem *ptr = addr;
> +	asm volatile("strh %w0, %1" : : "rZ" (val), "Qo" (*ptr));
>  }
>  
>  #define __raw_writel __raw_writel
>  static __always_inline void __raw_writel(u32 val, volatile void __iomem *addr)
>  {
> -	asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
> +	volatile u32 __iomem *ptr = addr;
> +	asm volatile("str %w0, %1" : : "rZ" (val), "Qo" (*ptr));
>  }
>  
>  #define __raw_writeq __raw_writeq
>  static __always_inline void __raw_writeq(u64 val, volatile void __iomem *addr)
>  {
> -	asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
> +	volatile u64 __iomem *ptr = addr;
> +	asm volatile("str %x0, %1" : : "rZ" (val), "Qo" (*ptr));
>  }
>  
>  #define __raw_readb __raw_readb
> -- 
> 2.30.2

Acked-by: Will Deacon <will@kernel.org>

Will
Catalin Marinas Feb. 20, 2024, 6:20 p.m. UTC | #4
On Wed, 24 Jan 2024 11:12:59 +0000, Mark Rutland wrote:
> Currently our IO accessors all use register addressing without offsets,
> but we could safely use offset addressing (without writeback) to
> simplify and optimize the generated code.
> 
> To function correctly under a hypervisor which emulates IO accesses, we
> must ensure that any faulting/trapped IO access results in an ESR_ELx
> value with ESR_ELx.ISS.ISV=1 and with the transfer register described in
> ESR_ELx.ISS.SRT. This means that we can only use loads/stores of a
> single general purpose register (or the zero register), and must avoid
> writeback addressing modes. However, we can use immediate offset
> addressing modes, as these still provide ESR_ELx.ISS.ISV=1 and a valid
> ESR_ELx.ISS.SRT when those accesses fault at Stage-2.
> 
> [...]

Applied to arm64 (for-next/misc), thanks!

[1/1] arm64: io: permit offset addressing
      https://git.kernel.org/arm64/c/d044d6ba6f02

Patch

diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 3b694511b98f8..8d825522c55c8 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -24,25 +24,29 @@ 
 #define __raw_writeb __raw_writeb
 static __always_inline void __raw_writeb(u8 val, volatile void __iomem *addr)
 {
-	asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr));
+	volatile u8 __iomem *ptr = addr;
+	asm volatile("strb %w0, %1" : : "rZ" (val), "Qo" (*ptr));
 }
 
 #define __raw_writew __raw_writew
 static __always_inline void __raw_writew(u16 val, volatile void __iomem *addr)
 {
-	asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr));
+	volatile u16 __iomem *ptr = addr;
+	asm volatile("strh %w0, %1" : : "rZ" (val), "Qo" (*ptr));
 }
 
 #define __raw_writel __raw_writel
 static __always_inline void __raw_writel(u32 val, volatile void __iomem *addr)
 {
-	asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
+	volatile u32 __iomem *ptr = addr;
+	asm volatile("str %w0, %1" : : "rZ" (val), "Qo" (*ptr));
 }
 
 #define __raw_writeq __raw_writeq
 static __always_inline void __raw_writeq(u64 val, volatile void __iomem *addr)
 {
-	asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
+	volatile u64 __iomem *ptr = addr;
+	asm volatile("str %x0, %1" : : "rZ" (val), "Qo" (*ptr));
 }
 
 #define __raw_readb __raw_readb