From patchwork Tue Nov 20 23:23:08 2018
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: jeyu@kernel.org, akpm@linux-foundation.org, willy@infradead.org, tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com, x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, kernel-hardening@lists.openwall.com, daniel@iogearbox.net, jannh@google.com, keescook@chromium.org
Cc: kristen@linux.intel.com, dave.hansen@intel.com, arjan@linux.intel.com, Rick Edgecombe
Subject: [PATCH v9 RESEND 0/4] KASLR feature to randomize each loadable module
Date: Tue, 20 Nov 2018 15:23:08 -0800
Message-Id: <20181120232312.30037-1-rick.p.edgecombe@intel.com>

Resending this because I missed Jessica in the "to" list. Also removing the
part of this cover letter that talked about KPTI helping with some local
kernel text de-randomizing methods, because I'm not sure I fully understand
it.

------------------------------------------------------------

This is V9 of the "KASLR feature to randomize each loadable module" patchset.
The purpose is to increase the randomization for the module space from 10 to
17 bits, and also to randomize the modules in relation to each other instead
of just the address where the allocations begin, so that if one module's
address leaks, the locations of the others can't be inferred.

Why it's useful
===============

Randomizing the location of executable code is a defense against control flow
attacks, where the kernel is tricked into jumping to, or speculatively
executing, code other than what is intended. By randomizing the location of
the code, the attacker doesn't know where to redirect the control flow.

Today the RANDOMIZE_BASE feature randomizes the base address where the module
allocations begin with 10 bits of entropy for this purpose. From there, a
highly deterministic algorithm allocates space for the modules as they are
loaded and unloaded. If an attacker can predict the order and identities of
the modules that will be loaded (either by the system, or controlled by the
user with request_module or BPF), then a single text address leak can give
the attacker the locations of the other modules. In this case the new
algorithm can take the entropy of the other modules from ~0 to 17 bits,
making it much more robust.

Another problem today is that with only 10 bits of entropy, brute force
attacks are feasible, especially in the case of speculative execution where a
wrong guess won't necessarily cause a crash. Increasing the randomization
forces such attacks to take longer, and so increases the time in which an
attacker may be detected on a system.

There are multiple efforts to apply more randomization to the core kernel
text as well, so this module space piece can be a first step to increasing
randomization for all kernel space executable code. Userspace ASLR can get 28
bits of entropy or more, so increasing this to 17 bits for now improves what
is currently a pretty low amount of randomization for the higher privileged
kernel space.

How it works
============

The algorithm is pretty simple. It breaks the module space in two: a random
area (2/3 of the module space) and a backup area (1/3 of the module space).
It first tries to allocate at up to 10000 randomly located starting pages
inside the random area. If this fails, it allocates in the backup area. The
backup area base is offset in the same way the current algorithm offsets the
module space base, which has 10 bits of entropy.

The vmalloc allocator can be used to try an allocation at a specific address,
however it is usually used to try an allocation over a large address range,
and so some behaviors which are non-issues in normal usage can be sub-optimal
when trying an allocation at 10000 small ranges. So this patchset also
includes a new vmalloc function, __vmalloc_node_try_addr, and some other
vmalloc tweaks that allow addresses to be tried more efficiently.

This algorithm targets maintaining high entropy for many thousands of module
allocations, because there are other users of the module space besides kernel
modules, like eBPF JIT, classic BPF socket filter JIT and kprobes.
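To make the placement policy above concrete, here is a rough userspace C
sketch of the loop, for illustration only. It is not code from the patchset:
try_alloc_at() and alloc_in_backup_area() are hypothetical stand-ins for
__vmalloc_node_try_addr() and the existing fallback path, and the constants
simply mirror the numbers quoted above (1 GB module space, 2/3 random area,
10000 tries).

/*
 * Illustration only: a userspace sketch of the two-area placement policy.
 * try_alloc_at() and alloc_in_backup_area() are trivial hypothetical
 * stand-ins; the real patchset uses __vmalloc_node_try_addr() and the
 * existing module_alloc() path.  Build with: gcc sketch.c -o sketch
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MODULE_SPACE_SIZE (1UL << 30)                   /* 1 GB module space */
#define RANDOM_AREA_SIZE  ((MODULE_SPACE_SIZE / 3) * 2) /* 2/3 random area   */
#define SIM_PAGE_SIZE     4096UL
#define MAX_RANDOM_TRIES  10000

/* Hypothetical stand-in: try to place 'size' bytes exactly at 'addr'. */
static void *try_alloc_at(uintptr_t addr, size_t size)
{
        (void)addr;          /* a real implementation would check the range */
        return malloc(size); /* pretend the range at 'addr' was free */
}

/* Hypothetical stand-in for the existing, mostly deterministic allocator. */
static void *alloc_in_backup_area(size_t size)
{
        return malloc(size);
}

static void *kaslr_module_alloc(size_t size)
{
        unsigned long nr_slots;
        int i;

        if (size >= RANDOM_AREA_SIZE)
                return alloc_in_backup_area(size);

        nr_slots = (RANDOM_AREA_SIZE - size) / SIM_PAGE_SIZE;

        /* Try up to 10000 randomly chosen, page-aligned starting addresses. */
        for (i = 0; i < MAX_RANDOM_TRIES; i++) {
                uintptr_t addr =
                        ((unsigned long)rand() % nr_slots) * SIM_PAGE_SIZE;
                void *p = try_alloc_at(addr, size);

                if (p)
                        return p;
        }

        /*
         * The random area was too fragmented: fall back to the backup area,
         * whose base is offset like the existing algorithm (~10 bits).
         */
        return alloc_in_backup_area(size);
}

int main(void)
{
        void *mod = kaslr_module_alloc(8 * SIM_PAGE_SIZE);

        printf("allocation %s\n", mod ? "succeeded" : "failed");
        free(mod);
        return 0;
}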
Performance
===========

Simulations were run using module sizes derived from the x86_64 modules to
measure the allocation performance at various levels of fragmentation, and
whether the backup area was used.

Capacity
--------

There is a slight reduction in the capacity of the module space, as simulated
with the x86_64 module sizes, of <1000 modules. Note this is a worst case,
since in practice module allocations in the 1000's will consist of smaller
BPF JIT allocations or kprobes, which fit better in the random area.

Allocation time
---------------

Below are three sets of measurements, in ns, of the allocation time as
measured by the included kselftests. The first two columns are the new
algorithm with and without the vmalloc optimizations for trying random
addresses quickly; they are included for consideration of whether the changes
are worth it. The last column is the performance of the original algorithm.

Modules   Vmalloc optimization   No Vmalloc Optimization   Existing Module KASLR
1000      1433                   1993                      3821
2000      2295                   3681                      7830
3000      4424                   7450                      13012
4000      7746                   13824                     18106
5000      12721                  21852                     22572
6000      19724                  33926                     26443
7000      27638                  47427                     30473
8000      37745                  64443                     34200

These allocations are not taking very long, but it may show up on systems
with very high usage of the module space (BPF JITs). If the trade-off of
touching vmalloc doesn't seem worth it to people, I can remove the
optimizations.

Randomness
----------

Unlike the existing algorithm, the amount of randomness provided depends on
the number of modules allocated and the sizes of their text sections. The
entropy provided for the Nth allocation comes from three sources of
randomness: the range of addresses in the random area, the probability that
the section will be allocated in the backup area, and randomness from the
number of modules already allocated in the backup area. For computing a lower
bound on the entropy in the following calculations, the randomness of the
modules already in the backup area, or overlapping from the random area, is
ignored since it is usually small and would only increase the entropy.

Below is an attempt to compute a worst case value for the entropy, to compare
to the existing algorithm. For probability p of the Nth allocation being in
the backup area, a lower bound entropy estimate is calculated here as:

Random Area Slots = ((2/3)*1073741824)/4096 = 174762
Entropy = -( (1-p)*log2((1-p)/174762) + p*log2(p/1024) )

For up to 8000 modules the entropy remains above 17.3.

For non-speculative control flow attacks, an attack might crash the system,
so the probability of the first guess being right can be more important than
that of the Nth guess. KASLR schemes usually have equal probability for each
possible position, but in this scheme that is not the case. So a more
conservative comparison to existing schemes is the amount of information that
would have to be guessed correctly for the position that has the highest
probability of holding the Nth module allocation (as that would be the
attacker's best guess):

Min Info = MIN(-log2(p/1024), -log2((1-p)/174762))

Allocations   Entropy
1000          17.4
2000          17.4
3000          17.4
4000          16.8
5000          15.8
6000          14.9
7000          14.8
8000          14.2

If anyone is keeping track, these numbers are different than those reported
in V2, because they are generated using the more compact allocation size
heuristic that is included in the kselftest rather than the real, much larger
dataset. The heuristic generates randomization benchmarks that are slightly
lower than the real dataset. The real dataset also isn't representative of
the case of mostly smaller BPF filters, so it represents a worst case lower
bound for entropy, and in practice 17+ bits should be maintained to a much
higher number of modules.
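The two expressions above can be evaluated directly. Below is a small,
self-contained C program that does so, included only as a convenience for
checking the arithmetic; the p values in main() are arbitrary examples of the
backup-area probability, not numbers taken from the simulations.

/*
 * Evaluate the entropy lower bound and "min info" expressions from this
 * cover letter for a given backup-area probability p.  The p values below
 * are arbitrary examples, not simulation results.
 * Build with: gcc entropy.c -lm -o entropy
 */
#include <math.h>
#include <stdio.h>

#define RANDOM_SLOTS (((2.0 / 3.0) * 1073741824.0) / 4096.0) /* ~174762 */
#define BACKUP_SLOTS 1024.0              /* 10 bits of base offset entropy */

/* Entropy = -( (1-p)*log2((1-p)/174762) + p*log2(p/1024) ) */
static double entropy_lower_bound(double p)
{
        return -((1.0 - p) * log2((1.0 - p) / RANDOM_SLOTS) +
                 p * log2(p / BACKUP_SLOTS));
}

/* Min Info = MIN(-log2(p/1024), -log2((1-p)/174762)) */
static double min_info(double p)
{
        return fmin(-log2(p / BACKUP_SLOTS), -log2((1.0 - p) / RANDOM_SLOTS));
}

int main(void)
{
        double p;

        printf("%10s %10s %10s\n", "p(backup)", "entropy", "min info");
        for (p = 0.01; p < 0.30; p += 0.05)
                printf("%10.2f %10.2f %10.2f\n",
                       p, entropy_lower_bound(p), min_info(p));
        return 0;
}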
PTE usage
---------

Since the allocations are spread out over a wider address space, there is
increased PTE usage, which should not exceed 1.3 MB more than the old
algorithm.

Changes for V9:
 - Better explanations in commit messages, instructions in kselftests
   (Andrew Morton)

Changes for V8:
 - Simplify code by removing logic for optimum handling of lazy free areas

Changes for V7:
 - More 0-day build fixes, readability improvements (Kees Cook)

Changes for V6:
 - 0-day build fixes by removing un-needed functional testing, more error
   handling

Changes for V5:
 - Add module_alloc test module

Changes for V4:
 - Fix issue caused by KASAN and kmemleak being provided different allocation
   lengths (padding).
 - Avoid kmalloc until sure it's needed in __vmalloc_node_try_addr.
 - Fixed issues reported by 0-day.

Changes for V3:
 - Code cleanup based on internal feedback. (thanks to Dave Hansen and Andriy
   Shevchenko)
 - Slight refactor of the existing algorithm to more cleanly live alongside
   the new one.
 - BPF synthetic benchmark

Changes for V2:
 - New implementation of __vmalloc_node_try_addr based on the
   __vmalloc_node_range implementation, that only flushes the TLB when
   needed.
 - Modified module loading algorithm to try to reduce the TLB flushes
   further.
 - Increase "random area" tries in order to increase the number of modules
   that can get high randomness.
 - Increase "random area" size to 2/3 of the module area in order to increase
   the number of modules that can get high randomness.
 - Fix for 0-day failures on other architectures.
 - Fix for wrong debugfs permissions. (thanks to Jann Horn)
 - Spelling fix. (thanks to Jann Horn)
 - Data on module_alloc performance and TLB flushes. (brought up by Kees Cook
   and Jann Horn)
 - Data on memory usage. (suggested by Jann)

Rick Edgecombe (4):
  vmalloc: Add __vmalloc_node_try_addr function
  x86/modules: Increase randomization for modules
  vmalloc: Add debugfs modfraginfo
  Kselftest for module text allocation benchmarking

 arch/x86/Kconfig                              |   3 +
 arch/x86/include/asm/kaslr_modules.h          |  38 ++
 arch/x86/include/asm/pgtable_64_types.h       |   7 +
 arch/x86/kernel/module.c                      | 111 ++++--
 include/linux/vmalloc.h                       |   3 +
 lib/Kconfig.debug                             |   9 +
 lib/Makefile                                  |   1 +
 lib/test_mod_alloc.c                          | 375 ++++++++++++++++++
 mm/vmalloc.c                                  | 228 +++++++++--
 tools/testing/selftests/bpf/test_mod_alloc.sh |  29 ++
 10 files changed, 743 insertions(+), 61 deletions(-)
 create mode 100644 arch/x86/include/asm/kaslr_modules.h
 create mode 100644 lib/test_mod_alloc.c
 create mode 100755 tools/testing/selftests/bpf/test_mod_alloc.sh