From patchwork Wed Jun 22 00:47:03 2016
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9191605
From: Kees Cook
To: Ingo Molnar
Cc: Kees Cook, Thomas Garnier, Andy Lutomirski, x86@kernel.org,
 Borislav Petkov, Baoquan He, Yinghai Lu, Juergen Gross, Matt Fleming,
 Toshi Kani, Andrew Morton, Dan Williams, "Kirill A. Shutemov",
 Dave Hansen, Xiao Guangrong, Martin Schwidefsky, "Aneesh Kumar K.V",
 Alexander Kuleshov, Alexander Popov, Dave Young, Joerg Roedel,
 Lv Zheng, Mark Salter, Dmitry Vyukov, Stephen Smalley, Boris Ostrovsky,
 Christian Borntraeger, Jan Beulich, linux-kernel@vger.kernel.org,
 Jonathan Corbet, linux-doc@vger.kernel.org,
 kernel-hardening@lists.openwall.com
Shutemov" , Dave Hansen , Xiao Guangrong , Martin Schwidefsky , "Aneesh Kumar K.V" , Alexander Kuleshov , Alexander Popov , Dave Young , Joerg Roedel , Lv Zheng , Mark Salter , Dmitry Vyukov , Stephen Smalley , Boris Ostrovsky , Christian Borntraeger , Jan Beulich , linux-kernel@vger.kernel.org, Jonathan Corbet , linux-doc@vger.kernel.org, kernel-hardening@lists.openwall.com Date: Tue, 21 Jun 2016 17:47:03 -0700 Message-Id: <1466556426-32664-7-git-send-email-keescook@chromium.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1466556426-32664-1-git-send-email-keescook@chromium.org> References: <1466556426-32664-1-git-send-email-keescook@chromium.org> Subject: [kernel-hardening] [PATCH v7 6/9] x86/mm: Enable KASLR for physical mapping memory region (x86_64) X-Virus-Scanned: ClamAV using ClamSMTP From: Thomas Garnier Add the physical mapping in the list of randomized memory regions. The physical memory mapping holds most allocations from boot and heap allocators. Knowing the base address and physical memory size, an attacker can deduce the PDE virtual address for the vDSO memory page. This attack was demonstrated at CanSecWest 2016, in the "Getting Physical: Extreme Abuse of Intel Based Paged Systems" https://goo.gl/ANpWdV (see second part of the presentation). The exploits used against Linux worked successfully against 4.6+ but fail with KASLR memory enabled (https://goo.gl/iTtXMJ). Similar research was done at Google leading to this patch proposal. Variants exists to overwrite /proc or /sys objects ACLs leading to elevation of privileges. These variants were tested against 4.6+. The page offset used by the compressed kernel retains the static value since it is not yet randomized during this boot stage. Signed-off-by: Thomas Garnier Signed-off-by: Kees Cook --- arch/x86/boot/compressed/pagetable.c | 3 +++ arch/x86/include/asm/kaslr.h | 2 ++ arch/x86/include/asm/page_64_types.h | 11 ++++++++++- arch/x86/kernel/head_64.S | 2 +- arch/x86/mm/kaslr.c | 18 +++++++++++++++--- 5 files changed, 31 insertions(+), 5 deletions(-) diff --git a/arch/x86/boot/compressed/pagetable.c b/arch/x86/boot/compressed/pagetable.c index 6e31a6aac4d3..56589d0a804b 100644 --- a/arch/x86/boot/compressed/pagetable.c +++ b/arch/x86/boot/compressed/pagetable.c @@ -20,6 +20,9 @@ /* These actually do the work of building the kernel identity maps. */ #include #include +/* Use the static base for this part of the boot process */ +#undef __PAGE_OFFSET +#define __PAGE_OFFSET __PAGE_OFFSET_BASE #include "../../mm/ident_map.c" /* Used by pgtable.h asm code to force instruction serialization. */ diff --git a/arch/x86/include/asm/kaslr.h b/arch/x86/include/asm/kaslr.h index 683c9d736314..62b1b815a83a 100644 --- a/arch/x86/include/asm/kaslr.h +++ b/arch/x86/include/asm/kaslr.h @@ -4,6 +4,8 @@ unsigned long kaslr_get_random_long(const char *purpose); #ifdef CONFIG_RANDOMIZE_MEMORY +extern unsigned long page_offset_base; + void kernel_randomize_memory(void); #else static inline void kernel_randomize_memory(void) { } diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h index d5c2f8b40faa..9215e0527647 100644 --- a/arch/x86/include/asm/page_64_types.h +++ b/arch/x86/include/asm/page_64_types.h @@ -1,6 +1,10 @@ #ifndef _ASM_X86_PAGE_64_DEFS_H #define _ASM_X86_PAGE_64_DEFS_H +#ifndef __ASSEMBLY__ +#include +#endif + #ifdef CONFIG_KASAN #define KASAN_STACK_ORDER 1 #else @@ -32,7 +36,12 @@ * hypervisor to fit. Choosing 16 slots here is arbitrary, but it's * what Xen requires. 
diff --git a/arch/x86/boot/compressed/pagetable.c b/arch/x86/boot/compressed/pagetable.c
index 6e31a6aac4d3..56589d0a804b 100644
--- a/arch/x86/boot/compressed/pagetable.c
+++ b/arch/x86/boot/compressed/pagetable.c
@@ -20,6 +20,9 @@
 /* These actually do the work of building the kernel identity maps. */
 #include <asm/init.h>
 #include <asm/pgtable.h>
+/* Use the static base for this part of the boot process */
+#undef __PAGE_OFFSET
+#define __PAGE_OFFSET __PAGE_OFFSET_BASE
 #include "../../mm/ident_map.c"
 
 /* Used by pgtable.h asm code to force instruction serialization. */
diff --git a/arch/x86/include/asm/kaslr.h b/arch/x86/include/asm/kaslr.h
index 683c9d736314..62b1b815a83a 100644
--- a/arch/x86/include/asm/kaslr.h
+++ b/arch/x86/include/asm/kaslr.h
@@ -4,6 +4,8 @@
 unsigned long kaslr_get_random_long(const char *purpose);
 
 #ifdef CONFIG_RANDOMIZE_MEMORY
+extern unsigned long page_offset_base;
+
 void kernel_randomize_memory(void);
 #else
 static inline void kernel_randomize_memory(void) { }
diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index d5c2f8b40faa..9215e0527647 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -1,6 +1,10 @@
 #ifndef _ASM_X86_PAGE_64_DEFS_H
 #define _ASM_X86_PAGE_64_DEFS_H
 
+#ifndef __ASSEMBLY__
+#include <asm/kaslr.h>
+#endif
+
 #ifdef CONFIG_KASAN
 #define KASAN_STACK_ORDER 1
 #else
@@ -32,7 +36,12 @@
  * hypervisor to fit. Choosing 16 slots here is arbitrary, but it's
  * what Xen requires.
  */
-#define __PAGE_OFFSET           _AC(0xffff880000000000, UL)
+#define __PAGE_OFFSET_BASE      _AC(0xffff880000000000, UL)
+#ifdef CONFIG_RANDOMIZE_MEMORY
+#define __PAGE_OFFSET           page_offset_base
+#else
+#define __PAGE_OFFSET           __PAGE_OFFSET_BASE
+#endif /* CONFIG_RANDOMIZE_MEMORY */
 
 #define __START_KERNEL_map      _AC(0xffffffff80000000, UL)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 5df831ef1442..03a2aa067ff3 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -38,7 +38,7 @@
 
 #define pud_index(x) (((x) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
 
-L4_PAGE_OFFSET = pgd_index(__PAGE_OFFSET)
+L4_PAGE_OFFSET = pgd_index(__PAGE_OFFSET_BASE)
 L4_START_KERNEL = pgd_index(__START_KERNEL_map)
 L3_START_KERNEL = pud_index(__START_KERNEL_map)
diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index d5380a48e8fb..609ecf2b37ed 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -43,8 +43,12 @@
  * before. You also need to add a BUILD_BUG_ON in kernel_randomize_memory to
  * ensure that this order is correct and won't be changed.
  */
-static const unsigned long vaddr_start;
-static const unsigned long vaddr_end;
+static const unsigned long vaddr_start = __PAGE_OFFSET_BASE;
+static const unsigned long vaddr_end = VMALLOC_START;
+
+/* Default values */
+unsigned long page_offset_base = __PAGE_OFFSET_BASE;
+EXPORT_SYMBOL(page_offset_base);
 
 /*
  * Memory regions randomized by KASLR (except modules that use a separate logic
@@ -55,6 +59,7 @@ static __initdata struct kaslr_memory_region {
 	unsigned long *base;
 	unsigned long size_tb;
 } kaslr_regions[] = {
+	{ &page_offset_base, 64/* Maximum */ },
 };
 
 /* Get size in bytes used by the memory region */
@@ -77,13 +82,20 @@ void __init kernel_randomize_memory(void)
 {
 	size_t i;
 	unsigned long vaddr = vaddr_start;
-	unsigned long rand;
+	unsigned long rand, memory_tb;
 	struct rnd_state rand_state;
 	unsigned long remain_entropy;
 
 	if (!kaslr_memory_enabled())
 		return;
 
+	BUG_ON(kaslr_regions[0].base != &page_offset_base);
+	memory_tb = ((max_pfn << PAGE_SHIFT) >> TB_SHIFT);
+
+	/* Adapt physical memory region size based on available memory */
+	if (memory_tb < kaslr_regions[0].size_tb)
+		kaslr_regions[0].size_tb = memory_tb;
+
 	/* Calculate entropy available between regions */
 	remain_entropy = vaddr_end - vaddr_start;
 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
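For reference, the region sizing added to kernel_randomize_memory()
above is plain shift arithmetic: compute whole terabytes of populated
RAM and clamp the randomized region to that. A standalone C sketch with
a hypothetical max_pfn (PAGE_SHIFT, TB_SHIFT, and the 64 TB region
maximum mirror the patch; assumes 64-bit unsigned long, as on x86_64):

#include <stdio.h>

#define PAGE_SHIFT 12
#define TB_SHIFT   40

int main(void)
{
	unsigned long max_pfn = 0x30000000UL; /* hypothetical: 3 TB of RAM in 4 KB pages */
	unsigned long size_tb = 64;           /* region maximum, as in kaslr_regions[0] */
	unsigned long memory_tb;

	/* Same computation as the patch: bytes of RAM shifted down to whole
	 * terabytes. Note the right shift truncates sub-terabyte remainders. */
	memory_tb = (max_pfn << PAGE_SHIFT) >> TB_SHIFT;

	/* Clamp the randomized region to populated memory, so the unused
	 * virtual space is left over as extra randomization entropy. */
	if (memory_tb < size_tb)
		size_tb = memory_tb;

	printf("memory_tb = %lu TB, region size = %lu TB\n", memory_tb, size_tb);
	return 0;
}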