Message ID | 20210212195255.1321544-1-jiancai@google.com (mailing list archive)
---|---
State | New, archived
Series | [v2] ARM: Implement Clang's SLS mitigation
On Fri, Feb 12, 2021 at 11:52:53AM -0800, Jian Cai wrote:
> This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> -mharden-sls=all, which mitigates the straight-line speculation
> vulnerability, speculative execution of the instruction following some
> unconditional jumps. Notice -mharden-sls= has other options as below,
> and this config turns on the strongest option.
>
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.

What exactly does this mitigation do? This should be documented somewhere,
maybe in the Kconfig text?

> Link: https://reviews.llvm.org/D93221
> Link: https://reviews.llvm.org/D81404
> Link: https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
>
> Suggested-by: Manoj Gupta <manojgupta@google.com>
> Suggested-by: Nathan Chancellor <nathan@kernel.org>
> Suggested-by: David Laight <David.Laight@aculab.com>
> Signed-off-by: Jian Cai <jiancai@google.com>
> ---
>
> Changes v1 -> v2:
> Update the description and patch based on Nathan and David's comments.
>
>  arch/arm/Makefile          | 4 ++++
>  arch/arm64/Makefile        | 4 ++++
>  security/Kconfig.hardening | 7 +++++++
>  3 files changed, 15 insertions(+)
>
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index 4aaec9599e8a..11d89ef32da9 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -48,6 +48,10 @@ CHECKFLAGS	+= -D__ARMEL__
>  KBUILD_LDFLAGS	+= -EL
>  endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS	+= -mharden-sls=all
> +endif
> +
>  #
>  # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
>  # later may result in code being generated that handles signed short and signed
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index 90309208bb28..ca7299b356a9 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
>  endif
>  endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS	+= -mharden-sls=all
> +endif

The big problem I have with this is that it's a compile-time decision.
For the other spectre crap we have a combination of the "mitigations=off"
command-line option and CPU detection to avoid the cost of the mitigation
where it is not deemed necessary.

So I think that either we enable this unconditionally, or we don't enable
it at all (and people can hack their CFLAGS themselves if they want to).
It would be helpful for one of the Arm folks to chime in, as I'm yet to
see any evidence that this is actually exploitable. Is it any worse than
Spectre-v1, where we _don't_ have a compiler mitigation?

Finally, do we have to worry about our assembly code?

Will
From: Will Deacon
> Sent: 17 February 2021 09:49
>
> On Fri, Feb 12, 2021 at 11:52:53AM -0800, Jian Cai wrote:
> > This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> > -mharden-sls=all, which mitigates the straight-line speculation
> > vulnerability, speculative execution of the instruction following some
> > unconditional jumps. Notice -mharden-sls= has other options as below,
> > and this config turns on the strongest option.
> >
> > all: enable all mitigations against Straight Line Speculation that are implemented.
> > none: disable all mitigations against Straight Line Speculation.
> > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
>
> What exactly does this mitigation do? This should be documented somewhere,
> maybe in the Kconfig text?

I looked it up: it adds some fairly heavy serialising instructions after
the unconditional jump. For BLR (call indirect) it has to use a BL (call)
to an indirect jump.

I don't know if the execution of the serialising instructions gets
aborted. If not, you could end up with unexpected delays - like those on
some x86 CPUs when they speculatively executed trig functions.

It all seems pretty broken though. I'd expect the branch prediction unit
to speculate at the jump target for 'predicted taken' conditional jumps,
so you'd really expect unconditional jumps to behave the same way. BLR
ought to be using the branch target buffer (BTB).

(It isn't actually 100% clear that some processors don't use the BTB for
non-indirect jumps though....)

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
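For readers unfamiliar with the transformation being discussed, a rough
sketch of what -mharden-sls=all emits on AArch64, pieced together from the
LLVM review and Arm advisory linked above (illustrative only; the exact
sequences depend on the compiler version and on whether the CPU implements
the SB instruction):

	// -mharden-sls=retbr: a speculation barrier follows every RET and
	// BR, so straight-line speculation past the branch stalls instead
	// of running whatever happens to come next in memory.
	ret
	dsb	sy		// never architecturally executed
	isb			// (a single SB instruction is emitted
				//  instead when the target supports it)

	// -mharden-sls=blr: "blr x0" becomes a direct BL to a per-register
	// thunk, and the barrier sits after the BR inside the thunk.
	bl	__llvm_slsblr_thunk_x0

__llvm_slsblr_thunk_x0:
	br	x0
	dsb	sy
	isb

These thunks are what produce the .text.__llvm_slsblr_thunk_* sections
mentioned later in the thread.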
On Fri, Feb 12, 2021 at 11:53 AM 'Jian Cai' via Clang Built Linux
<clang-built-linux@googlegroups.com> wrote:

The oneline of the commit is "ARM: Implement Clang's SLS mitigation", but
that's not precise. GCC implements the same flag with the same arguments.
There is nothing compiler-specific about this patch. (Though perhaps
different section names are used, see below.)

>
> This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> -mharden-sls=all, which mitigates the straight-line speculation
> vulnerability, speculative execution of the instruction following some
> unconditional jumps. Notice -mharden-sls= has other options as below,
> and this config turns on the strongest option.
>
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.
>
> Link: https://reviews.llvm.org/D93221
> Link: https://reviews.llvm.org/D81404
> Link: https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
>
> Suggested-by: Manoj Gupta <manojgupta@google.com>
> Suggested-by: Nathan Chancellor <nathan@kernel.org>
> Suggested-by: David Laight <David.Laight@aculab.com>
> Signed-off-by: Jian Cai <jiancai@google.com>

I observe lots of linker warnings with this applied on linux-next:

ld.lld: warning: init/built-in.a(main.o):(.text.__llvm_slsblr_thunk_x0) is
being placed in '.text.__llvm_slsblr_thunk_x0'

You need to modify arch/arm64/kernel/vmlinux.lds.S and
arch/arm/kernel/vmlinux.lds.S (and possibly
arch/arm/boot/compressed/vmlinux.lds.S as well) to add these sections back
into .text so that the linkers don't place these orphaned sections in wild
places. The resulting aarch64 kernel image doesn't even boot (under
emulation).

For 32b ARM:

ld.lld: warning: init/built-in.a(main.o):(.text.__llvm_slsblr_thunk_arm_r0)
is being placed in '.text.__llvm_slsblr_thunk_arm_r0'
...
ld.lld: warning: init/built-in.a(main.o):(.text.__llvm_slsblr_thunk_thumb_r0)
is being placed in '.text.__llvm_slsblr_thunk_thumb_r0'
...
<trimmed, but there's close to 60 of these>

And the image doesn't boot (under emulation).
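The kind of linker-script change being asked for here would look roughly
like the following (a sketch only, not a tested patch; the section glob is
taken from the warnings above, but its exact placement inside .text is an
assumption):

	/* sketch: arch/arm64/kernel/vmlinux.lds.S, .text output section */
	.text : {
		...
		TEXT_TEXT
		/* keep the compiler-generated SLS thunks in .text instead
		 * of letting the linker emit them as orphan sections */
		*(.text.__llvm_slsblr_thunk_*)
		...
	}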
> ---
>
> Changes v1 -> v2:
> Update the description and patch based on Nathan and David's comments.
>
>  arch/arm/Makefile          | 4 ++++
>  arch/arm64/Makefile        | 4 ++++
>  security/Kconfig.hardening | 7 +++++++
>  3 files changed, 15 insertions(+)
>
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index 4aaec9599e8a..11d89ef32da9 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -48,6 +48,10 @@ CHECKFLAGS	+= -D__ARMEL__
>  KBUILD_LDFLAGS	+= -EL
>  endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS	+= -mharden-sls=all
> +endif
> +
>  #
>  # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
>  # later may result in code being generated that handles signed short and signed
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index 90309208bb28..ca7299b356a9 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
>  endif
>  endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS	+= -mharden-sls=all
> +endif
> +
>  cc_has_k_constraint := $(call try-run,echo				\
> 	'int main(void) {						\
> 		asm volatile("and w0, w0, %w0" :: "K" (4294967295));	\
> diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
> index 269967c4fc1b..9266d8d1f78f 100644
> --- a/security/Kconfig.hardening
> +++ b/security/Kconfig.hardening
> @@ -121,6 +121,13 @@ choice
>
>  endchoice
>
> +config HARDEN_SLS_ALL
> +	bool "enable SLS vulnerability hardening"
> +	def_bool $(cc-option,-mharden-sls=all)

This fails to set CONFIG_HARDEN_SLS_ALL for me with:

$ ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- make LLVM=1 LLVM_IAS=1 -j72 defconfig
$ grep SLS_ALL .config
# CONFIG_HARDEN_SLS_ALL is not set

but it's flipped on there for arm64 defconfig:

$ ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make LLVM=1 LLVM_IAS=1 -j72 defconfig
$ grep SLS_ALL .config
CONFIG_HARDEN_SLS_ALL=y

What's going on there? Is the cc-option Kconfig macro broken for Clang
when cross compiling 32b ARM? I can still enable CONFIG_HARDEN_SLS_ALL via
menuconfig, but I wonder if the default value is funny because the
cc-option check is failing?

> +	help
> +	  Enables straight-line speculation vulnerability hardening
> +	  at highest level.
> +
>  config GCC_PLUGIN_STRUCTLEAK_VERBOSE
>  	bool "Report forcefully initialized variables"
>  	depends on GCC_PLUGIN_STRUCTLEAK
> --
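One way to sanity-check the cc-option question above by hand, outside of
Kconfig (a sketch only; the target triples are illustrative, and
scripts/Kconfig.include is where the real cc-option invocation lives):

	# Does clang accept the flag for 32-bit ARM at all?
	clang --target=arm-linux-gnueabi -Werror -mharden-sls=all \
		-S -x c /dev/null -o /dev/null && echo "arm: flag accepted"

	# Same check for arm64:
	clang --target=aarch64-linux-gnu -Werror -mharden-sls=all \
		-S -x c /dev/null -o /dev/null && echo "arm64: flag accepted"

If the first command fails while the second succeeds, the defconfig
difference would simply reflect the compiler rejecting the flag for the
32-bit target rather than a Kconfig bug.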
Hi Will,

I went back and found this feedback, which is kind of the heart of the
issues regarding SLS.

On Wed, Feb 17, 2021 at 10:51 AM Will Deacon <will@kernel.org> wrote:

> The big problem I have with this is that it's a compile-time decision.
> For the other spectre crap we have a combination of the "mitigations=off"
> command-line and CPU detection to avoid the cost of the mitigation where
> it is not deemed necessary.

For newcomers, the way this works today can be found in e.g.:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/kernel/proton-pack.c

mitigations=off turns off Spectre v2 and v4 mitigations. AFAICT this is
achieved with misc parameterization to firmware and hypervisors and no
runtime patching of any code at all? (On ARM32 it has no effect
whatsoever; we just turn on all Spectre v2 mitigations by default. No
runtime choice.)

The way I understand it is that for SLS the compiler must at least put in
some kind of placeholders, but that it *might* be possible to do runtime
mitigations on top of that. We need feedback from the compiler people as
to what is possible here.

If it is *not* possible to mitigate at run-time, then I don't know what is
the right thing to do. Certainly not to turn it on by default as is done
today?

> So I think that either we enable this unconditionally, or we don't enable it
> at all (and people can hack their CFLAGS themselves if they want to). It
> would be helpful for one of the Arm folks to chime in, as I'm yet to see any
> evidence that this is actually exploitable.
(...)
> Is it any worse than Spectre-v1,
> where we _don't_ have a compiler mitigation?

There is such a compiler mitigation for Spectre v1, under the name
"Speculative Load Hardening"; the kernel is not (yet) enabling it.
https://llvm.org/docs/SpeculativeLoadHardening.html

It comes with the intuitive command-line switch
-mspeculative-load-hardening

Certainly a separate patch can add speculative load hardening support on
top of this, or before this patch, if there is desire and/or it feels like
a more coherent approach.

As the article says, "The performance overhead of this style of
comprehensive mitigation is very high (...) most large applications seeing
a 30% overhead or less." I suppose it can be enabled while compiling the
kernel just like this patch enables -mharden-sls=all.

I don't know if your comment means that if we enable one of them we should
just as well enable both or none, as otherwise there is no real
protection, since attackers can just use the other, similar attack vector?

> Finally, do we have to worry about our assembly code?

AFAICT yes, and you seem to have hardened AArch64's ERETs, which seemed
especially vulnerable, in commit 679db70801da9fda91d26caf13bf5b5ccc74e8e8
"arm64: entry: Place an SB sequence following an ERET instruction"

Link for people without kernel source:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=679db70801da9fda91d26caf13bf5b5ccc74e8e8

So it seems the most vulnerable spot was already fixed by you, thanks!
But I bet there are some more spots.

Yours,
Linus Walleij
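For reference, the pattern that commit introduces in the arm64 entry code
looks roughly like this (a sketch from memory, not a verbatim quote of
entry.S):

	eret			// exception return
	sb			// speculation barrier macro: expands to
				// "dsb nsh; isb", patched to the single SB
				// instruction on CPUs that implement it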
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 4aaec9599e8a..11d89ef32da9 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -48,6 +48,10 @@ CHECKFLAGS	+= -D__ARMEL__
 KBUILD_LDFLAGS	+= -EL
 endif
 
+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS	+= -mharden-sls=all
+endif
+
 #
 # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
 # later may result in code being generated that handles signed short and signed
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 90309208bb28..ca7299b356a9 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
 endif
 endif
 
+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS	+= -mharden-sls=all
+endif
+
 cc_has_k_constraint := $(call try-run,echo				\
 	'int main(void) {						\
 		asm volatile("and w0, w0, %w0" :: "K" (4294967295));	\
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 269967c4fc1b..9266d8d1f78f 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -121,6 +121,13 @@ choice
 
 endchoice
 
+config HARDEN_SLS_ALL
+	bool "enable SLS vulnerability hardening"
+	def_bool $(cc-option,-mharden-sls=all)
+	help
+	  Enables straight-line speculation vulnerability hardening
+	  at highest level.
+
 config GCC_PLUGIN_STRUCTLEAK_VERBOSE
 	bool "Report forcefully initialized variables"
 	depends on GCC_PLUGIN_STRUCTLEAK
This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
-mharden-sls=all, which mitigates the straight-line speculation
vulnerability, speculative execution of the instruction following some
unconditional jumps. Notice -mharden-sls= has other options as below,
and this config turns on the strongest option.

all: enable all mitigations against Straight Line Speculation that are implemented.
none: disable all mitigations against Straight Line Speculation.
retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
blr: enable the mitigation against Straight Line Speculation for BLR instructions.

Link: https://reviews.llvm.org/D93221
Link: https://reviews.llvm.org/D81404
Link: https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2

Suggested-by: Manoj Gupta <manojgupta@google.com>
Suggested-by: Nathan Chancellor <nathan@kernel.org>
Suggested-by: David Laight <David.Laight@aculab.com>
Signed-off-by: Jian Cai <jiancai@google.com>
---

Changes v1 -> v2:
Update the description and patch based on Nathan and David's comments.

 arch/arm/Makefile          | 4 ++++
 arch/arm64/Makefile        | 4 ++++
 security/Kconfig.hardening | 7 +++++++
 3 files changed, 15 insertions(+)
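A quick way to confirm the option actually took effect in a build with
CONFIG_HARDEN_SLS_ALL=y (an illustrative check, not part of the patch; the
toolchain prefix is an assumption):

	aarch64-linux-gnu-objdump -d vmlinux | grep -w -A2 ret | head -n 20
	# with the mitigation in place, 'ret' (and 'br') should be followed
	# by a 'dsb sy; isb' pair (or 'sb') rather than the next function's
	# code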