From patchwork Wed Apr 10 13:17:24 2019
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 10893877
Date: Wed, 10 Apr 2019 15:17:24 +0200
In-Reply-To: <20190410131726.250295-1-glider@google.com>
Message-Id: <20190410131726.250295-2-glider@google.com>
References: <20190410131726.250295-1-glider@google.com>
Subject: [PATCH v4 1/3] initmem: introduce CONFIG_INIT_ALL_MEMORY and CONFIG_INIT_ALL_STACK
From: Alexander Potapenko
To: yamada.masahiro@socionext.com, jmorris@namei.org, serge@hallyn.com
Cc: linux-security-module@vger.kernel.org, linux-kbuild@vger.kernel.org,
    ndesaulniers@google.com, kcc@google.com, dvyukov@google.com,
    keescook@chromium.org, sspatil@android.com, labbott@redhat.com,
    kernel-hardening@lists.openwall.com

CONFIG_INIT_ALL_MEMORY is going to be an
umbrella config for options that force heap and stack initialization.
The rationale behind doing so is to reduce the severity of bugs caused
by using uninitialized memory.

CONFIG_INIT_ALL_STACK turns on stack initialization based on
-ftrivial-auto-var-init in Clang builds and on
-fplugin-arg-structleak_plugin-byref-all in GCC builds.

-ftrivial-auto-var-init is a Clang flag that provides trivial
initializers for uninitialized local variables, variable fields and
padding. It has three possible values:
  pattern - uninitialized locals are filled with a fixed pattern (mostly
    0xAA on 64-bit platforms, see https://reviews.llvm.org/D54604 for
    more details) likely to cause crashes when an uninitialized value is
    used;
  zero (it is still debated whether this mode will make it into an
    official Clang release) - uninitialized locals are filled with
    zeroes;
  uninitialized (default) - uninitialized locals are left intact.

The proposed config builds the kernel with
-ftrivial-auto-var-init=pattern. Developers can opt out of this feature
on a per-variable basis by using __attribute__((uninitialized)).

For GCC builds, CONFIG_INIT_ALL_STACK is simply wired up to
CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL. No opt-out is possible at the
moment.

Signed-off-by: Alexander Potapenko
Cc: Masahiro Yamada
Cc: James Morris
Cc: "Serge E. Hallyn"
Cc: Nick Desaulniers
Cc: Kostya Serebryany
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Sandeep Patil
Cc: Randy Dunlap
Cc: linux-security-module@vger.kernel.org
Cc: linux-kbuild@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
v2:
 - addressed Kees Cook's comments: added GCC support
v3: addressed Masahiro Yamada's comments:
 - dropped per-file opt-out mechanism
 - fixed GCC_PLUGINS dependencies
v4:
 - addressed Randy Dunlap's comments: remove redundant "depends on"
 - addressed Masahiro Yamada's comments: drop Makefile.initmem
---
 Makefile                 | 10 ++++++++++
 security/Kconfig         |  1 +
 security/Kconfig.initmem | 28 ++++++++++++++++++++++++++++
 3 files changed, 39 insertions(+)
 create mode 100644 security/Kconfig.initmem

diff --git a/Makefile b/Makefile
index 15c8251d4d5e..02f4b9df0102 100644
--- a/Makefile
+++ b/Makefile
@@ -727,6 +727,16 @@ KBUILD_CFLAGS += $(call cc-disable-warning, tautological-compare)
 # See modpost pattern 2
 KBUILD_CFLAGS += $(call cc-option, -mno-global-merge,)
 KBUILD_CFLAGS += $(call cc-option, -fcatch-undefined-behavior)
+
+ifdef CONFIG_INIT_ALL_STACK
+# Clang's -ftrivial-auto-var-init=pattern flag initializes the
+# uninitialized parts of local variables (including fields and padding)
+# with a fixed pattern (0xAA in most cases).
+ifdef CONFIG_CC_HAS_AUTO_VAR_INIT
+KBUILD_CFLAGS += -ftrivial-auto-var-init=pattern
+endif
+endif
+
 else
 
 # These warnings generated too much noise in a regular build.
diff --git a/security/Kconfig b/security/Kconfig
index 353cfef71d4e..4d27437c2eb8 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -229,6 +229,7 @@ config STATIC_USERMODEHELPER_PATH
	  If you wish for all usermode helper programs to be disabled,
	  specify an empty string here (i.e. "").
 
+source "security/Kconfig.initmem"
 source "security/selinux/Kconfig"
 source "security/smack/Kconfig"
 source "security/tomoyo/Kconfig"
diff --git a/security/Kconfig.initmem b/security/Kconfig.initmem
new file mode 100644
index 000000000000..cdad1e185b10
--- /dev/null
+++ b/security/Kconfig.initmem
@@ -0,0 +1,28 @@
+menu "Initialize all memory"
+
+config CC_HAS_AUTO_VAR_INIT
+	def_bool $(cc-option,-ftrivial-auto-var-init=pattern)
+
+config INIT_ALL_MEMORY
+	bool "Initialize all memory"
+	default n
+	help
+	  Enforce memory initialization to mitigate infoleaks and make
+	  the control-flow bugs depending on uninitialized values more
+	  deterministic.
+
+if INIT_ALL_MEMORY
+
+config INIT_ALL_STACK
+	bool "Initialize all stack"
+	depends on CC_HAS_AUTO_VAR_INIT || (HAVE_GCC_PLUGINS && PLUGIN_HOSTCC != "")
+	select GCC_PLUGINS if !CC_HAS_AUTO_VAR_INIT
+	select GCC_PLUGIN_STRUCTLEAK if !CC_HAS_AUTO_VAR_INIT
+	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL if !CC_HAS_AUTO_VAR_INIT
+	default y
+	help
+	  Initialize uninitialized stack data with a fixed pattern
+	  (0x00 in GCC, 0xAA in Clang).
+
+endif # INIT_ALL_MEMORY
+endmenu

From patchwork Wed Apr 10 13:17:25 2019
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 10893883
Date: Wed, 10 Apr 2019 15:17:25 +0200
In-Reply-To: <20190410131726.250295-1-glider@google.com>
Message-Id: <20190410131726.250295-3-glider@google.com>
References: <20190410131726.250295-1-glider@google.com>
Subject: [PATCH v4 2/3] initmem: introduce CONFIG_INIT_ALL_HEAP
From: Alexander Potapenko
To: yamada.masahiro@socionext.com, jmorris@namei.org, serge@hallyn.com
Cc: linux-security-module@vger.kernel.org, linux-kbuild@vger.kernel.org,
    ndesaulniers@google.com, kcc@google.com, dvyukov@google.com,
    keescook@chromium.org, sspatil@android.com, labbott@redhat.com,
    kernel-hardening@lists.openwall.com

This config
option adds the possibility to initialize newly allocated pages and heap
objects with a 0xAA pattern.

There are already a number of places where allocations are initialized
based on the presence of the __GFP_ZERO flag. We just change this code
so that under CONFIG_INIT_ALL_HEAP these allocations are always
initialized with either 0x00 or 0xAA, depending on __GFP_ZERO.

No performance optimizations are done at the moment to reduce double
initialization of memory regions.

Signed-off-by: Alexander Potapenko
Cc: Masahiro Yamada
Cc: James Morris
Cc: "Serge E. Hallyn"
Cc: Nick Desaulniers
Cc: Kostya Serebryany
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Sandeep Patil
Cc: Laura Abbott
Cc: Randy Dunlap
Cc: Jann Horn
Cc: Mark Rutland
Cc: linux-security-module@vger.kernel.org
Cc: linux-kbuild@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
v3:
 - addressed comments by Masahiro Yamada (Kconfig fixes)
v4:
 - addressed Randy Dunlap's comments: remove redundant "depends on"
 - replaced code wiring SLUB_DEBUG and page poisoning with a more
   lightweight implementation (Laura Abbott mentioned turning on
   debugging has serious performance issues)
---
 arch/arm64/Kconfig                     |  1 +
 arch/arm64/include/asm/page.h          |  1 +
 arch/x86/Kconfig                       |  1 +
 arch/x86/include/asm/page_64.h         | 10 ++++++++++
 arch/x86/lib/clear_page_64.S           | 24 ++++++++++++++++++++++++
 drivers/infiniband/core/uverbs_ioctl.c |  4 ++--
 include/linux/gfp.h                    | 10 ++++++++++
 include/linux/highmem.h                |  8 ++++++++
 kernel/kexec_core.c                    |  8 ++++++--
 mm/dmapool.c                           |  4 ++--
 mm/page_alloc.c                        |  9 +++++++--
 mm/slab.c                              | 19 +++++++++++++------
 mm/slub.c                              | 12 ++++++++----
 security/Kconfig.initmem               | 15 +++++++++++++++
 14 files changed, 108 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9eba5de..3548235752f6 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -110,6 +110,7 @@ config ARM64
	select HAVE_ARCH_AUDITSYSCALL
	select HAVE_ARCH_BITREVERSE
	select HAVE_ARCH_HUGE_VMAP
+	select HAVE_ARCH_INIT_ALL_HEAP
	select HAVE_ARCH_JUMP_LABEL
	select HAVE_ARCH_JUMP_LABEL_RELATIVE
	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index c88a3cb117a1..a7bea266c16d 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -33,6 +33,7 @@ extern void copy_page(void *to, const void *from);
 extern void clear_page(void *to);
 
 #define clear_user_page(addr,vaddr,pg)  __cpu_clear_user_page(addr, vaddr)
+#define clear_page_pattern(addr) __memset((addr), GFP_POISON_BYTE, PAGE_SIZE)
 #define copy_user_page(to,from,vaddr,pg) __cpu_copy_user_page(to, from, vaddr)
 
 typedef struct page *pgtable_t;
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5ad92419be19..2058beaf3582 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -116,6 +116,7 @@ config X86
	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
	select HAVE_ARCH_AUDITSYSCALL
	select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE
+	select HAVE_ARCH_INIT_ALL_HEAP
	select HAVE_ARCH_JUMP_LABEL
	select HAVE_ARCH_JUMP_LABEL_RELATIVE
	select HAVE_ARCH_KASAN if X86_64
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 939b1cff4a7b..8e6e0c00f8c8 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -54,6 +54,16 @@ static inline void clear_page(void *page)
			   : "cc", "memory", "rax", "rcx");
 }
 
+#ifdef CONFIG_INIT_ALL_HEAP
+
+void clear_page_pattern_orig(void *page);
+// FIXME: add rep and erms.
+static inline void clear_page_pattern(void *page)
+{
+	clear_page_pattern_orig(page);
+}
+#endif
+
 void copy_page(void *to, void *from);
 
 #endif	/* !__ASSEMBLY__ */
diff --git a/arch/x86/lib/clear_page_64.S b/arch/x86/lib/clear_page_64.S
index 88acd349911b..b64dab97d4ba 100644
--- a/arch/x86/lib/clear_page_64.S
+++ b/arch/x86/lib/clear_page_64.S
@@ -49,3 +49,27 @@ ENTRY(clear_page_erms)
	ret
 ENDPROC(clear_page_erms)
 EXPORT_SYMBOL_GPL(clear_page_erms)
+
+#ifdef CONFIG_INIT_ALL_HEAP
+ENTRY(clear_page_pattern_orig)
+	movq $0xAAAAAAAAAAAAAAAA,%rax
+	movl $4096/64,%ecx
+	.p2align 4
+.Lloop_pat:
+	decl	%ecx
+	movq %rax,(%rdi)
+	PUT(1)
+	PUT(2)
+	PUT(3)
+	PUT(4)
+	PUT(5)
+	PUT(6)
+	PUT(7)
+	leaq	64(%rdi),%rdi
+	jnz	.Lloop_pat
+	nop
+	ret
+ENDPROC(clear_page_pattern_orig)
+EXPORT_SYMBOL_GPL(clear_page_pattern_orig)
+#endif
+
diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c
index e1379949e663..1ad0eb28e651 100644
--- a/drivers/infiniband/core/uverbs_ioctl.c
+++ b/drivers/infiniband/core/uverbs_ioctl.c
@@ -127,8 +127,8 @@ __malloc void *_uverbs_alloc(struct uverbs_attr_bundle *bundle, size_t size,
	res = (void *)pbundle->internal_buffer + pbundle->internal_used;
	pbundle->internal_used =
		ALIGN(new_used, sizeof(*pbundle->internal_buffer));
-	if (flags & __GFP_ZERO)
-		memset(res, 0, size);
+	if (GFP_INIT_ALWAYS_ON || (flags & __GFP_ZERO))
+		memset(res, INITMEM_FILL_BYTE(flags), size);
	return res;
 }
 EXPORT_SYMBOL(_uverbs_alloc);
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index fdab7de7490d..a0016357a91a 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -213,6 +213,16 @@ struct vm_area_struct;
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
 
+#define GFP_POISON_BYTE (0xAA)
+#ifdef CONFIG_INIT_ALL_HEAP
+#define GFP_INIT_ALWAYS_ON (1)
+#define INITMEM_FILL_BYTE(gfp_flags) \
+	(((gfp_flags) & __GFP_ZERO) ? (0) : GFP_POISON_BYTE)
+#else
+#define GFP_INIT_ALWAYS_ON (0)
+#define INITMEM_FILL_BYTE(gfp_flags) (0)
+#endif
+
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index ea5cdbd8c2c3..b48a04c7b046 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -215,6 +215,14 @@ static inline void clear_highpage(struct page *page)
	kunmap_atomic(kaddr);
 }
 
+static inline void clear_highpage_pattern(struct page *page)
+{
+	void *kaddr = kmap_atomic(page);
+
+	clear_page_pattern(kaddr);
+	kunmap_atomic(kaddr);
+}
+
 static inline void zero_user_segments(struct page *page,
	unsigned start1, unsigned end1,
	unsigned start2, unsigned end2)
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d7140447be75..63104a71cf60 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -315,9 +315,13 @@ static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
		arch_kexec_post_alloc_pages(page_address(pages), count,
					    gfp_mask);
 
-		if (gfp_mask & __GFP_ZERO)
+		if (GFP_INIT_ALWAYS_ON || (gfp_mask & __GFP_ZERO)) {
			for (i = 0; i < count; i++)
-				clear_highpage(pages + i);
+				if (!INITMEM_FILL_BYTE(gfp_mask))
+					clear_highpage(pages + i);
+				else
+					clear_highpage_pattern(pages + i);
+		}
	}
 
	return pages;
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 76a160083506..f36dd9065f2b 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -381,8 +381,8 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 #endif
	spin_unlock_irqrestore(&pool->lock, flags);
 
-	if (mem_flags & __GFP_ZERO)
-		memset(retval, 0, pool->size);
+	if (GFP_INIT_ALWAYS_ON || (mem_flags & __GFP_ZERO))
+		memset(retval, INITMEM_FILL_BYTE(mem_flags), pool->size);
 
	return retval;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d96ca5bc555b..c8477db5ac94 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2014,9 +2014,14 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
 
	post_alloc_hook(page, order, gfp_flags);
 
-	if (!free_pages_prezeroed() && (gfp_flags & __GFP_ZERO))
+	if (!free_pages_prezeroed() && (GFP_INIT_ALWAYS_ON ||
+	    (gfp_flags & __GFP_ZERO))) {
		for (i = 0; i < (1 << order); i++)
-			clear_highpage(page + i);
+			if (!INITMEM_FILL_BYTE(gfp_flags))
+				clear_highpage(page + i);
+			else
+				clear_highpage_pattern(page + i);
+	}
 
	if (order && (gfp_flags & __GFP_COMP))
		prep_compound_page(page, order);
diff --git a/mm/slab.c b/mm/slab.c
index 47a380a486ee..6d30db845dbd 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3331,8 +3331,11 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
	local_irq_restore(save_flags);
	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
 
-	if (unlikely(flags & __GFP_ZERO) && ptr)
-		memset(ptr, 0, cachep->object_size);
+	if ((unlikely(flags & __GFP_ZERO) ||
+	     (GFP_INIT_ALWAYS_ON && !cachep->ctor &&
+	      !(cachep->flags & SLAB_TYPESAFE_BY_RCU))) &&
+	    ptr)
+		memset(ptr, INITMEM_FILL_BYTE(flags), cachep->object_size);
 
	slab_post_alloc_hook(cachep, flags, 1, &ptr);
	return ptr;
@@ -3388,8 +3391,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
	prefetchw(objp);
 
-	if (unlikely(flags & __GFP_ZERO) && objp)
-		memset(objp, 0, cachep->object_size);
+	if (((GFP_INIT_ALWAYS_ON && !cachep->ctor &&
+	      !(cachep->flags & SLAB_TYPESAFE_BY_RCU)) ||
+	     unlikely(flags & __GFP_ZERO)) && objp)
+		memset(objp, INITMEM_FILL_BYTE(flags), cachep->object_size);
 
	slab_post_alloc_hook(cachep, flags, 1, &objp);
	return objp;
@@ -3596,9 +3601,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
	cache_alloc_debugcheck_after_bulk(s, flags, size, p, _RET_IP_);
 
	/* Clear memory outside IRQ disabled section */
-	if (unlikely(flags & __GFP_ZERO))
+	if ((GFP_INIT_ALWAYS_ON && !s->ctor &&
+	     !(flags & SLAB_TYPESAFE_BY_RCU)) ||
+	    unlikely(flags & __GFP_ZERO))
		for (i = 0; i < size; i++)
-			memset(p[i], 0, s->object_size);
+			memset(p[i], INITMEM_FILL_BYTE(flags), s->object_size);
 
	slab_post_alloc_hook(s, flags, size, p);
	/* FIXME: Trace call missing. Christoph would like a bulk variant */
diff --git a/mm/slub.c b/mm/slub.c
index d30ede89f4a6..f89c39b01578 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2750,8 +2750,10 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
		stat(s, ALLOC_FASTPATH);
	}
 
-	if (unlikely(gfpflags & __GFP_ZERO) && object)
-		memset(object, 0, s->object_size);
+	if (((GFP_INIT_ALWAYS_ON && !s->ctor &&
+	      !(s->flags & SLAB_TYPESAFE_BY_RCU)) ||
+	     unlikely(gfpflags & __GFP_ZERO)) && object)
+		memset(object, INITMEM_FILL_BYTE(gfpflags), s->object_size);
 
	slab_post_alloc_hook(s, gfpflags, 1, &object);
 
@@ -3172,11 +3174,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
	local_irq_enable();
 
	/* Clear memory outside IRQ disabled fastpath loop */
-	if (unlikely(flags & __GFP_ZERO)) {
+	if ((GFP_INIT_ALWAYS_ON && !s->ctor &&
+	     !(s->flags & SLAB_TYPESAFE_BY_RCU)) ||
+	    unlikely(flags & __GFP_ZERO)) {
		int j;
 
		for (j = 0; j < i; j++)
-			memset(p[j], 0, s->object_size);
+			memset(p[j], INITMEM_FILL_BYTE(flags), s->object_size);
	}
 
	/* memcg and kmem_cache debug support */
diff --git a/security/Kconfig.initmem b/security/Kconfig.initmem
index cdad1e185b10..93bc6fe32536 100644
--- a/security/Kconfig.initmem
+++ b/security/Kconfig.initmem
@@ -1,5 +1,8 @@
 menu "Initialize all memory"
 
+config HAVE_ARCH_INIT_ALL_HEAP
+	bool
+
 config CC_HAS_AUTO_VAR_INIT
	def_bool $(cc-option,-ftrivial-auto-var-init=pattern)
 
@@ -24,5 +27,17 @@ config INIT_ALL_STACK
	  Initialize uninitialized stack data with a fixed pattern
	  (0x00 in GCC, 0xAA in Clang).
 
+if HAVE_ARCH_INIT_ALL_HEAP
+
+config INIT_ALL_HEAP
+	bool "Initialize all heap"
+	depends on SLAB || SLUB
+	default y
+	help
+	  Initialize uninitialized pages and SL[AOU]B allocations with 0xAA
+	  pattern.
+
+endif # HAVE_ARCH_INIT_ALL_HEAP
+
 endif # INIT_ALL_MEMORY
 endmenu

From patchwork Wed Apr 10 13:17:26 2019
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 10893887
Date: Wed, 10 Apr 2019 15:17:26 +0200
In-Reply-To: <20190410131726.250295-1-glider@google.com>
Message-Id: <20190410131726.250295-4-glider@google.com>
References: <20190410131726.250295-1-glider@google.com>
Subject: [PATCH 3/3] net: make sk_prot_alloc() work with CONFIG_INIT_ALL_HEAP
From: Alexander Potapenko
To: yamada.masahiro@socionext.com, jmorris@namei.org, serge@hallyn.com
Cc: linux-security-module@vger.kernel.org, linux-kbuild@vger.kernel.org,
    ndesaulniers@google.com, kcc@google.com, dvyukov@google.com,
    keescook@chromium.org, sspatil@android.com, labbott@redhat.com,
    kernel-hardening@lists.openwall.com
Rename sk_prot_clear_nulls() to sk_prot_clear() and introduce an extra
init_byte parameter to be passed to memset() when initializing struct
sock. In the case CONFIG_INIT_ALL_HEAP is on, initialize newly created
struct sock with 0xAA.

Signed-off-by: Alexander Potapenko
Cc: Eric Dumazet
Cc: David S. Miller
Cc: Masahiro Yamada
Cc: James Morris
Cc: "Serge E. Hallyn"
Cc: Nick Desaulniers
Cc: Kostya Serebryany
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Sandeep Patil
Cc: Laura Abbott
Cc: Randy Dunlap
Cc: Jann Horn
Cc: Mark Rutland
Cc: linux-security-module@vger.kernel.org
Cc: netdev@vger.kernel.org
Cc: linux-kbuild@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
 include/net/sock.h | 8 ++++----
 net/core/sock.c    | 5 +++--
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 8de5ee258b93..a49c1f1c71c1 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1044,13 +1044,13 @@ struct module;
 
 /*
  * caches using SLAB_TYPESAFE_BY_RCU should let .next pointer from nulls nodes
- * un-modified. Special care is taken when initializing object to zero.
+ * un-modified. Special care is taken when initializing object.
  */
-static inline void sk_prot_clear_nulls(struct sock *sk, int size)
+static inline void sk_prot_clear(struct sock *sk, int size, int init_byte)
 {
	if (offsetof(struct sock, sk_node.next) != 0)
-		memset(sk, 0, offsetof(struct sock, sk_node.next));
-	memset(&sk->sk_node.pprev, 0,
+		memset(sk, init_byte, offsetof(struct sock, sk_node.next));
+	memset(&sk->sk_node.pprev, init_byte,
	       size - offsetof(struct sock, sk_node.pprev));
 }
 
diff --git a/net/core/sock.c b/net/core/sock.c
index 782343bb925b..1ad855e99512 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1601,8 +1601,9 @@ static struct sock *sk_prot_alloc(struct proto *prot, gfp_t priority,
		sk = kmem_cache_alloc(slab, priority & ~__GFP_ZERO);
		if (!sk)
			return sk;
-		if (priority & __GFP_ZERO)
-			sk_prot_clear_nulls(sk, prot->obj_size);
+		if (GFP_INIT_ALWAYS_ON || (priority & __GFP_ZERO))
+			sk_prot_clear(sk, prot->obj_size,
+				      INITMEM_FILL_BYTE(priority));
	} else
		sk = kmalloc(prot->obj_size, priority);