[v7,00/50] arm64: Add support for LPA2 and WXN at stage 1

Message ID 20240123145258.1462979-52-ardb+git@google.com (mailing list archive)

Ard Biesheuvel Jan. 23, 2024, 2:52 p.m. UTC
From: Ard Biesheuvel <ardb@kernel.org>

This v7 covers the remaining changes that implement support for LPA2 and
WXN at stage 1, now that some of the prerequisites are in place.

Ryan's KVM series for LPA2 at stage 2 has been merged in the meantime,
so the temporary changes to plug that hole have been dropped.

v4: https://lore.kernel.org/r/20230912141549.278777-63-ardb@google.com/
v5: https://lore.kernel.org/r/20231124101840.944737-41-ardb@google.com/
v6: https://lore.kernel.org/r/20231129111555.3594833-43-ardb@google.com/

-%-

Changes in v7:
- rebase onto v6.8-rc1 which includes some patches of the previous
  revision, and includes the KVM changes for LPA2

The first ~8 patches of this series rework how the kernel VA space is
organized, so that the vmemmap region does not take up more space than
necessary, and so that most of it can be reclaimed when running a build
capable of 52-bit virtual addressing on hardware that is not. This is
needed because the vmemmap region will take up a substantial part of the
upper VA region that it shares with the kernel, modules and
vmalloc/vmap mappings once we enable LPA2 with 4k pages.

The next ~30 patches rework the early init code, reimplementing most of
the page table and relocation handling in C code. There are several
reasons why this is beneficial:
- we generally prefer C code over asm for these things, and the macros
  that currently exist in head.S for creating the kernel page tables
  are a good example of why;
- we no longer need to create the kernel mapping in two passes, which
  means we can remove the logic that copies parts of the fixmap and the
  KAsan shadow from one set of page tables to the other; this is
  especially advantageous for KAsan with LPA2, which needs more
  elaborate shadow handling across multiple levels, since the KAsan
  region cannot be placed on exact pgd_t boundaries in that case;
- we can read the ID registers and parse command line overrides before
  creating the page tables, which simplifies the LPA2 case, as flicking
  the global TCR_EL1.DS bit at a later stage would require elaborately
  repainting all page table descriptors, partly with the MMU disabled;
- we can use more elaborate logic to create the mappings, which means we
  can use more precise mappings for code and data sections even when
  using 2 MiB granularity, and this is a prerequisite for running with
  WXN.

As part of the ID map changes, we decouple the ID map size from the
kernel VA size, and switch to a 48-bit VA map for all configurations.

The next ~10 patches rework the existing LVA support as a CPU feature,
which simplifies some code and gets rid of the vabits_actual variable.
Then, LPA2 support is implemented in the same vein. This requires adding
support for 5 level paging as well, given that LPA2 introduces a new
paging level '-1' when using 4k pages.

Combined with the vmemmap changes at the start of the series, the
resulting LPA2/4k pages configuration will have the exact same VA space
layout as the ordinary 4k/4 levels configuration, and so LPA2 support
can reasonably be enabled by default, as the fallback is seamless on
non-LPA2 hardware.

In the 16k/LPA2 case, the fallback also reduces the number of paging
levels, resulting in a 47-bit VA space. This is based on the assumption
that hybrid LPA2/non-LPA2 16k pages kernels in production use would
prefer not to take the performance hit of 4 level paging to gain only a
single additional bit of VA space. (Note that generic Android kernels
use only 3 levels of paging today.) Bespoke 16k configurations can still
configure 48-bit virtual addressing as before.

Finally, enable support for running with the WXN control enabled. This
was previously part of a separate series, but given that the delta is
tiny, it is included here as well.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Kees Cook <keescook@chromium.org>

Ard Biesheuvel (49):
  arm64: mm: Move PCI I/O emulation region above the vmemmap region
  arm64: mm: Move fixmap region above vmemmap region
  arm64: ptdump: Allow all region boundaries to be defined at boot time
  arm64: ptdump: Discover start of vmemmap region at runtime
  arm64: vmemmap: Avoid base2 order of struct page size to dimension
    region
  arm64: mm: Reclaim unused vmemmap region for vmalloc use
  arm64: kaslr: Adjust randomization range dynamically
  arm64: kernel: Manage absolute relocations in code built under pi/
  arm64: kernel: Don't rely on objcopy to make code under pi/ __init
  arm64: head: move relocation handling to C code
  arm64: idreg-override: Move to early mini C runtime
  arm64: kernel: Remove early fdt remap code
  arm64: head: Clear BSS and the kernel page tables in one go
  arm64: Move feature overrides into the BSS section
  arm64: head: Run feature override detection before mapping the kernel
  arm64: head: move dynamic shadow call stack patching into early C
    runtime
  arm64: cpufeature: Add helper to test for CPU feature overrides
  arm64: kaslr: Use feature override instead of parsing the cmdline
    again
  arm64: idreg-override: Create a pseudo feature for rodata=off
  arm64: Add helpers to probe local CPU for PAC and BTI support
  arm64: head: allocate more pages for the kernel mapping
  arm64: head: move memstart_offset_seed handling to C code
  arm64: mm: Make kaslr_requires_kpti() a static inline
  arm64: mmu: Make __cpu_replace_ttbr1() out of line
  arm64: head: Move early kernel mapping routines into C code
  arm64: mm: Use 48-bit virtual addressing for the permanent ID map
  arm64: pgtable: Decouple PGDIR size macros from PGD/PUD/PMD levels
  arm64: kernel: Create initial ID map from C code
  arm64: mm: avoid fixmap for early swapper_pg_dir updates
  arm64: mm: omit redundant remap of kernel image
  arm64: Revert "mm: provide idmap pointer to cpu_replace_ttbr1()"
  arm64: mm: Handle LVA support as a CPU feature
  arm64: mm: Add feature override support for LVA
  arm64: Avoid #define'ing PTE_MAYBE_NG to 0x0 for asm use
  arm64: Add ESR decoding for exceptions involving translation level -1
  arm64: mm: Wire up TCR.DS bit to PTE shareability fields
  arm64: mm: Add LPA2 support to phys<->pte conversion routines
  arm64: mm: Add definitions to support 5 levels of paging
  arm64: mm: add LPA2 and 5 level paging support to G-to-nG conversion
  arm64: Enable LPA2 at boot if supported by the system
  arm64: mm: Add 5 level paging support to fixmap and swapper handling
  arm64: kasan: Reduce minimum shadow alignment and enable 5 level
    paging
  arm64: mm: Add support for folding PUDs at runtime
  arm64: ptdump: Disregard unaddressable VA space
  arm64: ptdump: Deal with translation levels folded at runtime
  arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs
  arm64: defconfig: Enable LPA2 support
  mm: add arch hook to validate mmap() prot flags
  arm64: mm: add support for WXN memory translation attribute

Catalin Marinas (1):
  arm64: Set the default CONFIG_ARM64_VA_BITS_52 in Kconfig rather than
    defconfig

 arch/arm64/Kconfig                          |  38 +-
 arch/arm64/configs/defconfig                |   1 -
 arch/arm64/include/asm/archrandom.h         |   2 -
 arch/arm64/include/asm/assembler.h          |  55 +--
 arch/arm64/include/asm/cpufeature.h         | 116 +++++
 arch/arm64/include/asm/esr.h                |  13 +-
 arch/arm64/include/asm/fixmap.h             |   2 +-
 arch/arm64/include/asm/kasan.h              |   2 -
 arch/arm64/include/asm/kernel-pgtable.h     | 103 ++---
 arch/arm64/include/asm/kvm_emulate.h        |  10 +-
 arch/arm64/include/asm/memory.h             |  31 +-
 arch/arm64/include/asm/mman.h               |  36 ++
 arch/arm64/include/asm/mmu.h                |  40 +-
 arch/arm64/include/asm/mmu_context.h        |  83 ++--
 arch/arm64/include/asm/pgalloc.h            |  53 ++-
 arch/arm64/include/asm/pgtable-hwdef.h      |  33 +-
 arch/arm64/include/asm/pgtable-prot.h       |  20 +-
 arch/arm64/include/asm/pgtable-types.h      |   6 +
 arch/arm64/include/asm/pgtable.h            | 229 +++++++++-
 arch/arm64/include/asm/scs.h                |  36 +-
 arch/arm64/include/asm/setup.h              |   3 -
 arch/arm64/include/asm/tlb.h                |   3 +
 arch/arm64/kernel/Makefile                  |  13 +-
 arch/arm64/kernel/cpufeature.c              | 111 +++--
 arch/arm64/kernel/head.S                    | 463 ++------------------
 arch/arm64/kernel/image-vars.h              |  35 ++
 arch/arm64/kernel/kaslr.c                   |   4 +-
 arch/arm64/kernel/module.c                  |   2 +-
 arch/arm64/kernel/pi/Makefile               |  27 +-
 arch/arm64/kernel/{ => pi}/idreg-override.c |  80 ++--
 arch/arm64/kernel/pi/kaslr_early.c          |  78 +---
 arch/arm64/kernel/pi/map_kernel.c           | 276 ++++++++++++
 arch/arm64/kernel/pi/map_range.c            | 105 +++++
 arch/arm64/kernel/{ => pi}/patch-scs.c      |  36 +-
 arch/arm64/kernel/pi/pi.h                   |  36 ++
 arch/arm64/kernel/pi/relacheck.c            | 130 ++++++
 arch/arm64/kernel/pi/relocate.c             |  64 +++
 arch/arm64/kernel/setup.c                   |  22 -
 arch/arm64/kernel/sleep.S                   |   3 -
 arch/arm64/kernel/vmlinux.lds.S             |  17 +-
 arch/arm64/kvm/mmu.c                        |  15 +-
 arch/arm64/mm/fault.c                       |  30 +-
 arch/arm64/mm/fixmap.c                      |  39 +-
 arch/arm64/mm/init.c                        |   2 +-
 arch/arm64/mm/kasan_init.c                  | 159 +++++--
 arch/arm64/mm/mmap.c                        |   4 +
 arch/arm64/mm/mmu.c                         | 255 ++++++-----
 arch/arm64/mm/pgd.c                         |  17 +-
 arch/arm64/mm/proc.S                        | 122 +++++-
 arch/arm64/mm/ptdump.c                      |  77 ++--
 arch/arm64/tools/cpucaps                    |   1 +
 include/linux/mman.h                        |  15 +
 mm/mmap.c                                   |   3 +
 53 files changed, 1995 insertions(+), 1161 deletions(-)
 rename arch/arm64/kernel/{ => pi}/idreg-override.c (83%)
 create mode 100644 arch/arm64/kernel/pi/map_kernel.c
 create mode 100644 arch/arm64/kernel/pi/map_range.c
 rename arch/arm64/kernel/{ => pi}/patch-scs.c (89%)
 create mode 100644 arch/arm64/kernel/pi/pi.h
 create mode 100644 arch/arm64/kernel/pi/relacheck.c
 create mode 100644 arch/arm64/kernel/pi/relocate.c

Comments

Ard Biesheuvel Feb. 9, 2024, 1:18 p.m. UTC | #1
On Tue, 23 Jan 2024 at 14:54, Ard Biesheuvel <ardb+git@google.com> wrote:
>
> From: Ard Biesheuvel <ardb@kernel.org>
>
> This v7 covers the remaining changes that implement support for LPA2 and
> WXN at stage 1, now that some of the prerequisites are in place.
>
> Ryan's KVM series for LPA2 at stage 2 has been merged in the mean time,
> so the temporary changes to plug that hole have been dropped.
>

... and I ended up dropping a patch that is still required:

https://lore.kernel.org/all/20230912141549.278777-119-ardb@google.com/

which requires some minimal rebasing as well. I am not going to resend
the whole series for the sake of this single patch until it gains some
more review traction.