Message ID | 20210305005327.405365-1-jiancai@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | [v6] ARM: Implement SLS mitigation |
On Thu, Mar 04, 2021 at 04:53:18PM -0800, Jian Cai wrote:
> This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> -mharden-sls=all, which mitigates the straight-line speculation
> vulnerability, speculative execution of the instruction following some
> unconditional jumps. Notice -mharden-sls= has other options as below,
> and this config turns on the strongest option.
>
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.
>
> Links:
> https://reviews.llvm.org/D93221
> https://reviews.llvm.org/D81404
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
>
> Suggested-by: Manoj Gupta <manojgupta@google.com>
> Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
> Suggested-by: Nathan Chancellor <nathan@kernel.org>
> Suggested-by: David Laight <David.Laight@aculab.com>
> Suggested-by: Will Deacon <will@kernel.org>

I'm still reasonably opposed to this patch, so please don't add my
"Suggested-by" here as, if I were to suggest anything, it would be not
to apply this patch :)

I still don't see why SLS is worth a compiler mitigation which will
affect all CPUs that run the kernel binary, but Spectre-v1 is not. In
other words, the big thing missing from this is a justification as to
why SLS is a problem worth working around for general C code.

Will
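For reference, the hardening under debate is a pure compile-time
transformation. The following is a minimal sketch of its effect on an
AArch64 function epilogue, based on the Arm SLS advisory and the LLVM
reviews linked above; the function name is invented for illustration,
and on cores implementing the SB extension a single sb instruction is
emitted in place of the dsb sy; isb pair:

    // Illustrative only: the same epilogue without and with -mharden-sls.
    foo:                            // unhardened
            ldp     x29, x30, [sp], #16
            ret                     // speculation may continue past here
                                    // into whatever bytes follow

    foo_hardened:                   // with the mitigation enabled
            ldp     x29, x30, [sp], #16
            ret
            dsb     sy              // barrier sequence inserted by the
            isb                     // compiler; only ever reached
                                    // speculatively, never architecturally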
On Fri, Mar 5, 2021 at 10:53 AM Will Deacon <will@kernel.org> wrote:

> I still don't see why SLS is worth a compiler mitigation which will affect
> all CPUs that run the kernel binary, but Spectre-v1 is not. In other words,
> the big thing missing from this is a justification as to why SLS is a
> problem worth working around for general C code.

I might be at the "a little knowledge is dangerous" end of the
Dunning-Kruger scale here, but AFAICT it is because the mitigation
covers every branch that results from the compilation. I think the
people who devised this approach didn't think about the kernel problem
per se but about "any code".

They would have to go back to the compilers, have them introduce a
marker for each branch or return instead, and then emit symbols that
the kernel can run-time patch to mitigate the vulnerability. Something
like that. (I guess.) Notice that these symbols/pointers would have a
footprint impact of their own, though maybe they could be discarded if
not applicable.

The patch says:

  It inserts speculation barrier sequences (SB or DSB+ISB depending on
  the target architecture) after RET and BR, and replaces BLR with a
  BL+BR sequence.

How would you do that at runtime? If you slot in NOPs around the branch
for mitigating, there will still be an impact. If you want the code to
look the same unless it is vulnerable, you would have to patch the
branch with a branch to another place where the barriers are
executed... and that patched branch could in turn be speculated? I feel
stupid here. But I guess someone could come up with something?

So instead of a simple, straightforward solution we get a really
complicated, awkward solution that generates a few thousand more
man-hours and delays the mitigations. So I understand if the authors
want to try the simple approach first. It may however be our job to say
"no, go do the really complicated thing", and I guess that is what
you're saying. :)

Yours,
Linus Walleij
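To make the run-time patching idea above concrete, here is a rough
sketch, assuming (contrary to current compiler behaviour) that the
compiler emitted NOP padding after each RET for the kernel to patch.
alternative_if / alternative_else_nop_endif are the existing arm64
alternatives macros; ARM64_WORKAROUND_SLS is a capability bit invented
purely for this illustration, and as the thread notes, the padding
itself would still cost footprint on unaffected CPUs:

    /* Hypothetical sketch: barriers patched in only on affected cores.
     * ARM64_WORKAROUND_SLS does not exist; it stands in for a cpucap
     * that CPU-id detection would set on vulnerable parts.
     */
    foo:
            ldp     x29, x30, [sp], #16
            ret
    alternative_if ARM64_WORKAROUND_SLS
            dsb     sy                      /* used when the cap is set */
            isb
    alternative_else_nop_endif              /* plain NOPs otherwise */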
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index dad5502ecc28..54f9b5ff9682 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -48,6 +48,10 @@ CHECKFLAGS	+= -D__ARMEL__
 KBUILD_LDFLAGS	+= -EL
 endif
 
+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS	+= -mharden-sls=all
+endif
+
 #
 # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
 # later may result in code being generated that handles signed short and signed
diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
index 4a91428c324d..c7f9717511ca 100644
--- a/arch/arm/include/asm/vmlinux.lds.h
+++ b/arch/arm/include/asm/vmlinux.lds.h
@@ -145,3 +145,7 @@
 		__edtcm_data = .;				\
 	}							\
 	. = __dtcm_start + SIZEOF(.data_dtcm);
+
+#define SLS_TEXT						\
+	ALIGN_FUNCTION();					\
+	*(.text.__llvm_slsblr_thunk_*)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index f7f4620d59c3..e71f2bc97bae 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -63,6 +63,7 @@ SECTIONS
 	.text : {			/* Real text segment	*/
 		_stext = .;		/* Text and read-only data */
 		ARM_TEXT
+		SLS_TEXT
 	}
 
 #ifdef CONFIG_DEBUG_ALIGN_RODATA
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 35f43d0aa056..bdb63e7b1bec 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -837,6 +837,7 @@ config HARDEN_BRANCH_PREDICTOR
 	bool "Harden the branch predictor against aliasing attacks" if EXPERT
 	depends on CPU_SPECTRE
 	default y
+	select HARDEN_SLS_ALL
 	help
 	   Speculation attacks against some high-performance processors rely
 	   on being able to manipulate the branch predictor for a victim
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 5b84aec31ed3..e233bfbdb1c2 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
   endif
 endif
 
+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS	+= -mharden-sls=all
+endif
+
 cc_has_k_constraint := $(call try-run,echo				\
 	'int main(void) {						\
 		asm volatile("and w0, w0, %w0" :: "K" (4294967295));	\
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7eea7888bb02..d5c892605862 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -103,6 +103,10 @@ jiffies = jiffies_64;
 #define TRAMP_TEXT
 #endif
 
+#define SLS_TEXT						\
+	ALIGN_FUNCTION();					\
+	*(.text.__llvm_slsblr_thunk_*)
+
 /*
  * The size of the PE/COFF section that covers the kernel image, which
  * runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -154,6 +158,7 @@ SECTIONS
 			HIBERNATE_TEXT
 			TRAMP_TEXT
 			*(.fixup)
+			SLS_TEXT
 			*(.gnu.warning)
 		. = ALIGN(16);
 		*(.got)			/* Global offset table		*/
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 269967c4fc1b..db76ad732c14 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -121,6 +121,14 @@ choice
 
 endchoice
 
+config HARDEN_SLS_ALL
+	bool "enable SLS vulnerability hardening"
+	depends on $(cc-option,-mharden-sls=all)
+	help
+	  Enables straight-line speculation vulnerability hardening. This inserts
+	  speculation barrier instruction sequences after certain unconditional jumps
+	  to prevent speculative execution past those barriers.
+
 config GCC_PLUGIN_STRUCTLEAK_VERBOSE
 	bool "Report forcefully initialized variables"
 	depends on GCC_PLUGIN_STRUCTLEAK
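A note on the SLS_TEXT sections collected above: a barrier cannot
simply be placed after a BLR, because execution validly resumes at that
address when the callee returns. Per the linked LLVM change
(https://reviews.llvm.org/D81404), the compiler instead rewrites each
blr xN into bl __llvm_slsblr_thunk_xN and emits one thunk per register
into the .text.__llvm_slsblr_thunk_* sections that the linker-script
fragments above pull into the text segment. Roughly, as a sketch for
the x0 case:

    // Illustrative shape of a compiler-generated thunk (one per register):
    __llvm_slsblr_thunk_x0:
            br      x0              // performs the original indirect call;
                                    // the BL's return address still points
                                    // into the caller, so RET works as usual
            dsb     sy              // reached only by straight-line
            isb                     // speculation past the BR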
This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
-mharden-sls=all, which mitigates the straight-line speculation
vulnerability, speculative execution of the instruction following some
unconditional jumps. Notice -mharden-sls= has other options as below,
and this config turns on the strongest option.

all: enable all mitigations against Straight Line Speculation that are implemented.
none: disable all mitigations against Straight Line Speculation.
retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
blr: enable the mitigation against Straight Line Speculation for BLR instructions.

Links:
https://reviews.llvm.org/D93221
https://reviews.llvm.org/D81404
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2

Suggested-by: Manoj Gupta <manojgupta@google.com>
Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
Suggested-by: Nathan Chancellor <nathan@kernel.org>
Suggested-by: David Laight <David.Laight@aculab.com>
Suggested-by: Will Deacon <will@kernel.org>
Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Suggested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Jian Cai <jiancai@google.com>
---
Changes v1 -> v2:
  Update the description and patch based on Nathan and David's comments.

Changes v2 -> v3:
  Modify linker scripts as Nick suggested to address boot failure
  (verified with qemu). Add more details to the Kconfig.hardening
  description. Disable the config by default.

Changes v3 -> v4:
  Address Nathan's comment and replace def_bool with depends on in
  HARDEN_SLS_ALL.

Changes v4 -> v5:
  Remove "default n" and make the description target independent in
  Kconfig.hardening.

Changes v5 -> v6:
  Add select HARDEN_SLS_ALL under config HARDEN_BRANCH_PREDICTOR. This
  turns on HARDEN_SLS_ALL by default where applicable.

 arch/arm/Makefile                  | 4 ++++
 arch/arm/include/asm/vmlinux.lds.h | 4 ++++
 arch/arm/kernel/vmlinux.lds.S      | 1 +
 arch/arm/mm/Kconfig                | 1 +
 arch/arm64/Makefile                | 4 ++++
 arch/arm64/kernel/vmlinux.lds.S    | 5 +++++
 security/Kconfig.hardening         | 8 ++++++++
 7 files changed, 27 insertions(+)