Message ID | 20220701130444.2945106-3-ardb@kernel.org (mailing list archive) |
---|---|
State | New, archived |
Series | arm64: add support for WXN |
On Fri, Jul 01, 2022 at 03:04:37PM +0200, Ard Biesheuvel wrote:
> Our virtual KASLR displacement consists of a fully randomized multiple
> of 2 MiB, combined with an offset that is equal to the physical
> placement modulo 2 MiB. This arrangement ensures that we can always use
> 2 MiB block mappings (or contiguous PTE mappings for 16k or 64k pages)
> to map the kernel.
>
> This means that a KASLR offset of less than 2 MiB is simply the product
> of this physical displacement, and no randomization has actually taken
> place. So let's avoid misreporting this case as 'KASLR enabled'.

Might be worth backporting to stable? Though the consequence is just that
we might enable KPTI when we don't *need* it, which is not the end of the
world.

Reviewed-by: Mark Brown <broonie@kernel.org>
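To make the trade-off discussed above concrete, here is a minimal userspace sketch, not kernel code: the mocked kaslr_offset() value and the hard-coded MIN_KIMG_ALIGN are assumptions chosen for illustration. It contrasts the pre-patch predicate with the new kaslr_enabled() check for an image whose nonzero displacement comes only from its physical placement.

#include <stdbool.h>
#include <stdio.h>

#define MIN_KIMG_ALIGN	0x200000UL	/* 2 MiB, mirroring arm64's <asm/boot.h> */

/* Stand-in for the kernel's kaslr_offset(): a value below 2 MiB models an
 * image displaced only by its physical placement, with no seed provided. */
static unsigned long kaslr_offset(void)
{
	return 0x1f0000UL;
}

/* Pre-patch predicate used by kaslr_requires_kpti(). */
static bool old_check(void)
{
	return kaslr_offset() > 0;
}

/* New predicate, equivalent to the kaslr_enabled() helper in the patch. */
static bool new_check(void)
{
	return kaslr_offset() >= MIN_KIMG_ALIGN;
}

int main(void)
{
	/* Old check reports true: KPTI could be requested even though no
	 * randomization actually happened; the new check reports false. */
	printf("old check: %d, new check: %d\n", old_check(), new_check());
	return 0;
}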
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index c751cd9b94f8..498af99d1adc 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -172,6 +172,7 @@
 #include <linux/compiler.h>
 #include <linux/mmdebug.h>
 #include <linux/types.h>
+#include <asm/boot.h>
 #include <asm/bug.h>
 
 #if VA_BITS > 48
@@ -195,6 +196,16 @@ static inline unsigned long kaslr_offset(void)
 	return kimage_vaddr - KIMAGE_VADDR;
 }
 
+static inline bool kaslr_enabled(void)
+{
+	/*
+	 * The KASLR offset modulo MIN_KIMG_ALIGN is taken from the physical
+	 * placement of the image rather than from the seed, so a displacement
+	 * of less than MIN_KIMG_ALIGN means that no seed was provided.
+	 */
+	return kaslr_offset() >= MIN_KIMG_ALIGN;
+}
+
 /*
  * Allow all memory at the discovery stage. We will clip it later.
  */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 98b48d9069a7..22e3604aee02 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1562,7 +1562,7 @@ bool kaslr_requires_kpti(void)
 		return false;
 	}
 
-	return kaslr_offset() > 0;
+	return kaslr_enabled();
 }
 
 static bool __meltdown_safe = true;
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index bcbcca938da8..d63322fc1d40 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -43,7 +43,7 @@ static int __init kaslr_init(void)
 		return 0;
 	}
 
-	if (!kaslr_offset()) {
+	if (!kaslr_enabled()) {
 		pr_warn("KASLR disabled due to lack of seed\n");
 		return 0;
 	}
Our virtual KASLR displacement consists of a fully randomized multiple
of 2 MiB, combined with an offset that is equal to the physical
placement modulo 2 MiB. This arrangement ensures that we can always use
2 MiB block mappings (or contiguous PTE mappings for 16k or 64k pages)
to map the kernel.

This means that a KASLR offset of less than 2 MiB is simply the product
of this physical displacement, and no randomization has actually taken
place. So let's avoid misreporting this case as 'KASLR enabled'.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/memory.h | 11 +++++++++++
 arch/arm64/kernel/cpufeature.c  |  2 +-
 arch/arm64/kernel/kaslr.c       |  2 +-
 3 files changed, 13 insertions(+), 2 deletions(-)
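As a purely illustrative sketch of the arithmetic described in the commit message: the kaslr_offset()/kaslr_enabled() functions below are simplified models rather than the kernel's, the physical load address and seed-derived block count are made up, and MIN_KIMG_ALIGN stands in for arm64's 2 MiB constant.

#include <stdbool.h>
#include <stdio.h>

#define MIN_KIMG_ALIGN	0x200000UL	/* 2 MiB */

/* The virtual displacement is a randomized multiple of 2 MiB plus the
 * physical placement modulo 2 MiB, so 2 MiB block mappings remain usable. */
static unsigned long kaslr_offset(unsigned long seed_blocks, unsigned long phys_addr)
{
	return seed_blocks * MIN_KIMG_ALIGN + (phys_addr % MIN_KIMG_ALIGN);
}

/* A displacement below 2 MiB can only come from the physical placement. */
static bool kaslr_enabled(unsigned long offset)
{
	return offset >= MIN_KIMG_ALIGN;
}

int main(void)
{
	unsigned long phys = 0x48010000UL;	/* made-up physical load address */
	unsigned long off;

	/* No seed: the offset is just the sub-2 MiB physical slack. */
	off = kaslr_offset(0, phys);
	printf("no seed:   offset=%#lx, enabled=%d\n", off, kaslr_enabled(off));

	/* Seeded: a nonzero 2 MiB multiple pushes the offset past the threshold. */
	off = kaslr_offset(37, phys);
	printf("with seed: offset=%#lx, enabled=%d\n", off, kaslr_enabled(off));
	return 0;
}

Without a seed the offset never reaches 2 MiB, which is exactly the case the patch stops reporting as 'KASLR enabled'.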