From patchwork Thu Oct 31 09:39:06 2019
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11220811
From: Daniel Axtens
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
    aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
    linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com,
    christophe.leroy@c-s.fr
Cc: linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com, Daniel Axtens
Subject: [PATCH v11 1/4] kasan: support backing vmalloc space with real shadow memory
Date: Thu, 31 Oct 2019 20:39:06 +1100
Message-Id: <20191031093909.9228-2-dja@axtens.net>
In-Reply-To: <20191031093909.9228-1-dja@axtens.net>
References: <20191031093909.9228-1-dja@axtens.net>

Hook into vmalloc and vmap, and dynamically allocate real shadow
memory to back the mappings.

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate a
backing page when a mapping in vmalloc space uses a particular page of
the shadow region. This page can be shared by other vmalloc mappings
later on.

We hook in to the vmap infrastructure to lazily clean up unused shadow
memory.

To avoid the difficulty of swapping mappings around, this code expects
that the part of the shadow region that covers the vmalloc space will
not be covered by the early shadow page, but will be left unmapped.
This will require changes in arch-specific code.

This allows KASAN with VMAP_STACK, and may be helpful for
architectures that do not have a separate module space (e.g.
powerpc64, which I am currently working on). It also allows relaxing
the module alignment back to PAGE_SIZE.
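
To see why one shadow page can serve many mappings: generic KASAN maps
every 8 bytes of memory to one shadow byte, so a single 4 KiB shadow
page covers KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE = 32 KiB of address
space. A rough userspace model of the arithmetic (not kernel code; the
offset constant and example addresses are purely illustrative):

    #include <stdio.h>

    #define PAGE_SIZE                4096UL
    #define KASAN_SHADOW_SCALE_SHIFT 3 /* generic KASAN: 8 bytes per shadow byte */
    #define KASAN_SHADOW_OFFSET      0xdffffc0000000000UL /* illustrative */

    static unsigned long mem_to_shadow(unsigned long addr)
    {
            return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
    }

    int main(void)
    {
            /* two small, hypothetical vmalloc mappings, 8 KiB apart */
            unsigned long a = 0xffffc90000001000UL;
            unsigned long b = 0xffffc90000003000UL;

            /* their shadow bytes land in the same shadow page */
            printf("shadow page for a: 0x%lx\n",
                   mem_to_shadow(a) & ~(PAGE_SIZE - 1));
            printf("shadow page for b: 0x%lx\n",
                   mem_to_shadow(b) & ~(PAGE_SIZE - 1));
            return 0;
    }

Both mappings resolve to the same shadow page, which is why backing
memory is allocated per shadow page rather than per mapping.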
Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:

 - Turning on KASAN, inline instrumentation, without vmalloc,
   introduces a 4.1x-4.2x slowdown in vmalloc operations.

 - Turning this on introduces the following slowdowns over KASAN:
     * ~1.76x slower single-threaded (test_vmalloc.sh performance)
     * ~2.18x slower when both cpus are performing operations
       simultaneously (test_vmalloc.sh sequential_test_order=1)

This is unfortunate but, given that this is a debug feature only, it
is not the end of the world.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Acked-by: Vasily Gorbik
Reviewed-by: Andrey Ryabinin
Co-developed-by: Mark Rutland
Signed-off-by: Mark Rutland [shadow rework]
Signed-off-by: Daniel Axtens

---

v2: let kasan_unpoison_shadow deal with ranges that do not use a
    full shadow byte.

v3: relax module alignment
    rename to kasan_populate_vmalloc which is a much better name
    deal with concurrency correctly

v4: Mark's rework
    Poison pages on vfree
    Handle allocation failures

v5: Per Christophe Leroy, split out test and dynamically free pages.

v6: Guard freeing page properly. Drop WARN_ON_ONCE(pte_none(*ptep)),
    on reflection it's unnecessary debugging cruft with too high a
    false positive rate.

v7: tlb flush, thanks Mark.
    explain more clearly how freeing works and is concurrency-safe.

v9: - Pull in Uladzislau Rezki's changes to better line up with the
      design of the new vmalloc implementation. Thanks Vlad.
    - clarify comment explaining smp_wmb() per Mark and Andrey's
      discussion
    - tighten up the allocation of backing memory so that it only
      happens for vmalloc or module space allocations. Thanks Andrey
      Ryabinin.
    - A TLB flush in the freeing path, thanks Mark Rutland.

v10: - rebase on next, pulling in Vlad's new work on splitting the
       vmalloc locks. This doesn't require changes in our behaviour
       but does require rechecking and rewording the explanation of
       why our behaviour is safe.
     - after much discussion of barriers, I now document where I think
       they are needed and why. Thanks Mark and Andrey.
     - clean up some TLB flushing. We were doing it twice - once after
       each page and once at the end of the whole process. Only do it
       at the end of the whole depopulate process.
     - checkpatch cleanups

v11: Nit from Andrey, tighten up release to vmalloc/module space,
     thanks Vlad.
     Add benchmark results.
The full benchmark results are:

Performance

                            No KASAN  KASAN original  x baseline   KASAN vmalloc  x baseline  x KASAN

fix_size_alloc_test          1697913        14229459        8.38        22981983       13.54     1.62
full_fit_alloc_test          1841601        15152633        8.23        17902922        9.72     1.18
long_busy_list_alloc_test   17874082        58856758        3.29       103925371        5.81     1.77
random_size_alloc_test       9356047        29544085        3.16        57871338        6.19     1.96
fix_align_alloc_test         3188968        19821620        6.22        37979436       11.91     1.92
random_size_align_alloc_te   3033507        17584339        5.80        32588942       10.74     1.85
align_shift_alloc_test           325            1154        3.55            7263       22.35     6.29
pcpu_alloc_test               231952          278181        1.20          318977        1.38     1.15
Total Cycles            235852824254    985040965542        4.18   1733258779416        7.35     1.76

Sequential, 2 cpus

                            No KASAN  KASAN original  x baseline   KASAN vmalloc  x baseline  x KASAN

fix_size_alloc_test          2505806        17989253        7.18        39651038       15.82     2.20
full_fit_alloc_test          3579676        18829862        5.26        21142645        5.91     1.12
long_busy_list_alloc_test   21594983        74766736        3.46       140701363        6.52     1.88
random_size_alloc_test      10884695        34282077        3.15        91945108        8.45     2.68
fix_align_alloc_test         4133226        26304745        6.36        76163270       18.43     2.90
random_size_align_alloc_te   4261175        22927883        5.38        55236058       12.96     2.41
align_shift_alloc_test           948            4827        5.09            4144        4.37     0.86
pcpu_alloc_test               371789          307654        0.83          374412        1.01     1.22
Total Cycles             99965417402    412710461642        4.13    897968646378        8.98     2.18

fix_size_alloc_test          2502718        17921542        7.16        39893515       15.94     2.23
full_fit_alloc_test          3547996        18675007        5.26        21330495        6.01     1.14
long_busy_list_alloc_test   21522579        74610739        3.47       139822907        6.50     1.87
random_size_alloc_test      10881507        34317349        3.15        91110531        8.37     2.65
fix_align_alloc_test         4119755        26180887        6.35        75818927       18.40     2.90
random_size_align_alloc_te   4297708        23058344        5.37        55969004       13.02     2.43
align_shift_alloc_test           956            5574        5.83            4591        4.80     0.82
pcpu_alloc_test               306340          347014        1.13          571289        1.86     1.65
Total Cycles             99642832084    412084074628        4.14    896497227762        9.00     2.18

---
 Documentation/dev-tools/kasan.rst |  63 ++++++++
 include/linux/kasan.h             |  31 ++++
 include/linux/moduleloader.h      |   2 +-
 include/linux/vmalloc.h           |  12 ++
 lib/Kconfig.kasan                 |  16 +++
 mm/kasan/common.c                 | 231 ++++++++++++++++++++++++++++++
 mm/kasan/generic_report.c         |   3 +
 mm/kasan/kasan.h                  |   1 +
 mm/vmalloc.c                      |  53 +++++--
 9 files changed, 403 insertions(+), 9 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index 525296121d89..e4d66e7c50de 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -218,3 +218,66 @@ brk handler is used to print bug reports.
 A potential expansion of this mode is a hardware tag-based mode, which would
 use hardware memory tagging support instead of compiler instrumentation and
 manual shadow memory manipulation.
+
+What memory accesses are sanitised by KASAN?
+--------------------------------------------
+
+The kernel maps memory in a number of different parts of the address
+space. This poses something of a problem for KASAN, which requires
+that all addresses accessed by instrumented code have a valid shadow
+region.
+
+The range of kernel virtual addresses is large: there is not enough
+real memory to support a real shadow region for every address that
+could be accessed by the kernel.
+
+By default
+~~~~~~~~~~
+
+By default, architectures only map real memory over the shadow region
+for the linear mapping (and potentially other small areas). For all
+other areas - such as vmalloc and vmemmap space - a single read-only
+page is mapped over the shadow area. This read-only shadow page
+declares all memory accesses as permitted.
+
+This presents a problem for modules: they do not live in the linear
+mapping, but in a dedicated module space. By hooking in to the module
+allocator, KASAN can temporarily map real shadow memory to cover
+them. This allows detection of invalid accesses to module globals, for
+example.
+
+This also creates an incompatibility with ``VMAP_STACK``: if the stack
+lives in vmalloc space, it will be shadowed by the read-only page, and
+the kernel will fault when trying to set up the shadow data for stack
+variables.
+
+CONFIG_KASAN_VMALLOC
+~~~~~~~~~~~~~~~~~~~~
+
+With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
+cost of greater memory usage. Currently this is only supported on x86.
+
+This works by hooking into vmalloc and vmap, and dynamically
+allocating real shadow memory to back the mappings.
+
+Most mappings in vmalloc space are small, requiring less than a full
+page of shadow space. Allocating a full shadow page per mapping would
+therefore be wasteful. Furthermore, to ensure that different mappings
+use different shadow pages, mappings would have to be aligned to
+``KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE``.
+
+Instead, we share backing space across multiple mappings. We allocate
+a backing page when a mapping in vmalloc space uses a particular page
+of the shadow region. This page can be shared by other vmalloc
+mappings later on.
+
+We hook in to the vmap infrastructure to lazily clean up unused shadow
+memory.
+
+To avoid the difficulties around swapping mappings around, we expect
+that the part of the shadow region that covers the vmalloc space will
+not be covered by the early shadow page, but will be left
+unmapped. This will require changes in arch-specific code.
+
+This allows ``VMAP_STACK`` support on x86, and can simplify support of
+architectures that do not have a fixed module region.
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index cc8a03cc9674..4f404c565db1 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -70,8 +70,18 @@ struct kasan_cache {
 	int free_meta_offset;
 };
 
+/*
+ * These functions provide a special case to support backing module
+ * allocations with real shadow memory. With KASAN vmalloc, the special
+ * case is unnecessary, as the work is handled in the generic case.
+ */
+#ifndef CONFIG_KASAN_VMALLOC
 int kasan_module_alloc(void *addr, size_t size);
 void kasan_free_shadow(const struct vm_struct *vm);
+#else
+static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
+static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+#endif
 
 int kasan_add_zero_shadow(void *start, unsigned long size);
 void kasan_remove_zero_shadow(void *start, unsigned long size);
@@ -194,4 +204,25 @@ static inline void *kasan_reset_tag(const void *addr)
 
 #endif /* CONFIG_KASAN_SW_TAGS */
 
+#ifdef CONFIG_KASAN_VMALLOC
+int kasan_populate_vmalloc(unsigned long requested_size,
+			   struct vm_struct *area);
+void kasan_poison_vmalloc(void *start, unsigned long size);
+void kasan_release_vmalloc(unsigned long start, unsigned long end,
+			   unsigned long free_region_start,
+			   unsigned long free_region_end);
+#else
+static inline int kasan_populate_vmalloc(unsigned long requested_size,
+					 struct vm_struct *area)
+{
+	return 0;
+}
+
+static inline void kasan_poison_vmalloc(void *start, unsigned long size) {}
+static inline void kasan_release_vmalloc(unsigned long start,
+					 unsigned long end,
+					 unsigned long free_region_start,
+					 unsigned long free_region_end) {}
+#endif
+
 #endif /* LINUX_KASAN_H */
diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h
index 5229c18025e9..ca92aea8a6bd 100644
--- a/include/linux/moduleloader.h
+++ b/include/linux/moduleloader.h
@@ -91,7 +91,7 @@ void module_arch_cleanup(struct module *mod);
 /* Any cleanup before freeing mod->module_init */
 void module_arch_freeing_init(struct module *mod);
 
-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC)
 #include <linux/kasan.h>
 #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
 #else
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 4e7809408073..61c43d1a29ca 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -22,6 +22,18 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040	/* don't add guard page */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
+
+/*
+ * VM_KASAN is used slightly differently depending on CONFIG_KASAN_VMALLOC.
+ *
+ * If IS_ENABLED(CONFIG_KASAN_VMALLOC), VM_KASAN is set on a vm_struct after
+ * shadow memory has been mapped. It's used to handle allocation errors so that
+ * we don't try to poison shadow on free if it was never allocated.
+ *
+ * Otherwise, VM_KASAN is set for kasan_module_alloc() allocations and used to
+ * determine which allocations need the module shadow freed.
+ */
+
 /*
  * Memory with VM_FLUSH_RESET_PERMS cannot be freed in an interrupt or with
  * vfree_atomic().
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 6c9682ce0254..81f5464ea9e1 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,9 @@ config HAVE_ARCH_KASAN
 config HAVE_ARCH_KASAN_SW_TAGS
 	bool
 
+config HAVE_ARCH_KASAN_VMALLOC
+	bool
+
 config CC_HAS_KASAN_GENERIC
 	def_bool $(cc-option, -fsanitize=kernel-address)
 
@@ -142,6 +145,19 @@ config KASAN_SW_TAGS_IDENTIFY
 	  (use-after-free or out-of-bounds) at the cost of increased
 	  memory consumption.
 
+config KASAN_VMALLOC
+	bool "Back mappings in vmalloc space with real shadow memory"
+	depends on KASAN && HAVE_ARCH_KASAN_VMALLOC
+	help
+	  By default, the shadow region for vmalloc space is the read-only
+	  zero page. This means that KASAN cannot detect errors involving
+	  vmalloc space.
+
+	  Enabling this option will hook in to vmap/vmalloc and back those
+	  mappings with real shadow memory allocated on demand. This allows
+	  for KASAN to detect more sorts of errors (and to support vmapped
+	  stacks), but at the cost of higher memory usage.
+
 config TEST_KASAN
 	tristate "Module for testing KASAN for bug detection"
 	depends on m && KASAN
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 6814d6d6a023..6e7bc5d3fa83 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -36,6 +36,8 @@
 #include
 #include
 
+#include <asm/tlbflush.h>
+
 #include "kasan.h"
 #include "../slab.h"
 
@@ -590,6 +592,7 @@ void kasan_kfree_large(void *ptr, unsigned long ip)
 	/* The object will be poisoned by page_alloc. */
 }
 
+#ifndef CONFIG_KASAN_VMALLOC
 int kasan_module_alloc(void *addr, size_t size)
 {
 	void *ret;
@@ -625,6 +628,7 @@ void kasan_free_shadow(const struct vm_struct *vm)
 	if (vm->flags & VM_KASAN)
 		vfree(kasan_mem_to_shadow(vm->addr));
 }
+#endif
 
 extern void __kasan_report(unsigned long addr, size_t size,
 			   bool is_write, unsigned long ip);
@@ -744,3 +748,230 @@ static int __init kasan_memhotplug_init(void)
 
 core_initcall(kasan_memhotplug_init);
 #endif
+
+#ifdef CONFIG_KASAN_VMALLOC
+static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
+				      void *unused)
+{
+	unsigned long page;
+	pte_t pte;
+
+	if (likely(!pte_none(*ptep)))
+		return 0;
+
+	page = __get_free_page(GFP_KERNEL);
+	if (!page)
+		return -ENOMEM;
+
+	memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
+	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
+
+	spin_lock(&init_mm.page_table_lock);
+	if (likely(pte_none(*ptep))) {
+		set_pte_at(&init_mm, addr, ptep, pte);
+		page = 0;
+	}
+	spin_unlock(&init_mm.page_table_lock);
+	if (page)
+		free_page(page);
+	return 0;
+}
+
+int kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area)
+{
+	unsigned long shadow_start, shadow_end;
+	int ret;
+
+	shadow_start = (unsigned long)kasan_mem_to_shadow(area->addr);
+	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
+	shadow_end = (unsigned long)kasan_mem_to_shadow(area->addr +
+							area->size);
+	shadow_end = ALIGN(shadow_end, PAGE_SIZE);
+
+	ret = apply_to_page_range(&init_mm, shadow_start,
+				  shadow_end - shadow_start,
+				  kasan_populate_vmalloc_pte, NULL);
+	if (ret)
+		return ret;
+
+	kasan_unpoison_shadow(area->addr, requested_size);
+
+	area->flags |= VM_KASAN;
+
+	/*
+	 * We need to be careful about inter-cpu effects here. Consider:
+	 *
+	 *   CPU#0				CPU#1
+	 * WRITE_ONCE(p, vmalloc(100));		while (x = READ_ONCE(p)) ;
+	 *					p[99] = 1;
+	 *
+	 * With compiler instrumentation, that ends up looking like this:
+	 *
+	 *   CPU#0				CPU#1
+	 * // vmalloc() allocates memory
+	 * // let a = area->addr
+	 * // we reach kasan_populate_vmalloc
+	 * // and call kasan_unpoison_shadow:
+	 * STORE shadow(a), unpoison_val
+	 * ...
+	 * STORE shadow(a+99), unpoison_val	x = LOAD p
+	 * // rest of vmalloc process		<data dependency>
+	 * STORE p, a				LOAD shadow(x+99)
+	 *
+	 * If there is no barrier between the end of unpoisoning the shadow
+	 * and the store of the result to p, the stores could be committed
+	 * in a different order by CPU#0, and CPU#1 could erroneously observe
+	 * poison in the shadow.
+	 *
+	 * We need some sort of barrier between the stores.
+	 *
+	 * In the vmalloc() case, this is provided by a smp_wmb() in
+	 * clear_vm_uninitialized_flag(). In the per-cpu allocator and in
+	 * get_vm_area() and friends, the caller gets shadow allocated but
+	 * doesn't have any pages mapped into the virtual address space that
+	 * has been reserved. Mapping those pages in will involve taking and
+	 * releasing a page-table lock, which will provide the barrier.
+	 */
+
+	return 0;
+}
+
+/*
+ * Poison the shadow for a vmalloc region. Called as part of the
+ * freeing process at the time the region is freed.
+ */
+void kasan_poison_vmalloc(void *start, unsigned long size)
+{
+	size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+	kasan_poison_shadow(start, size, KASAN_VMALLOC_INVALID);
+}
+
+static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
+					void *unused)
+{
+	unsigned long page;
+
+	page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);
+
+	spin_lock(&init_mm.page_table_lock);
+
+	if (likely(!pte_none(*ptep))) {
+		pte_clear(&init_mm, addr, ptep);
+		free_page(page);
+	}
+	spin_unlock(&init_mm.page_table_lock);
+
+	return 0;
+}
+
+/*
+ * Release the backing for the vmalloc region [start, end), which
+ * lies within the free region [free_region_start, free_region_end).
+ *
+ * This can be run lazily, long after the region was freed. It runs
+ * under vmap_area_lock, so it's not safe to interact with the vmalloc/vmap
+ * infrastructure.
+ *
+ * How does this work?
+ * -------------------
+ *
+ * We have a region that is page aligned, labelled as A.
+ * That might not map onto the shadow in a way that is page-aligned:
+ *
+ *                    start                     end
+ *                    v                         v
+ * |????????|????????|AAAAAAAA|AA....AA|AAAAAAAA|????????| < vmalloc
+ *  -------- -------- -------- -------- --------
+ *      |            |       |        |       |
+ *      |            |       |    /-------/   |
+ *      \-------\|/------/   |/---------------/
+ *              |||          ||
+ *            |??AAAAAA|AAAAAAAA|AA??????| < shadow
+ *               (1)      (2)      (3)
+ *
+ * First we align the start upwards and the end downwards, so that the
+ * shadow of the region aligns with shadow page boundaries. In the
+ * example, this gives us the shadow page (2). This is the shadow entirely
+ * covered by this allocation.
+ *
+ * Then we have the tricky bits. We want to know if we can free the
+ * partially covered shadow pages - (1) and (3) in the example. For this,
+ * we are given the start and end of the free region that contains this
+ * allocation. Extending our previous example, we could have:
+ *
+ *  free_region_start                                    free_region_end
+ *  |                 start                       end                  |
+ *  v                 v                           v                    v
+ * |FFFFFFFF|FFFFFFFF|AAAAAAAA|AA....AA|AAAAAAAA|FFFFFFFF| < vmalloc
+ *  -------- -------- -------- -------- --------
+ *      |            |       |        |       |
+ *      |            |       |    /-------/   |
+ *      \-------\|/------/   |/---------------/
+ *              |||          ||
+ *            |FFAAAAAA|AAAAAAAA|AAF?????| < shadow
+ *               (1)      (2)      (3)
+ *
+ * Once again, we align the start of the free region up, and the end of
+ * the free region down so that the shadow is page aligned. So we can free
+ * page (1) - we know no allocation currently uses anything in that page,
+ * because all of it is in the vmalloc free region. But we cannot free
+ * page (3), because we can't be sure that the rest of it is unused.
+ *
+ * We only consider pages that contain part of the original region for
+ * freeing: we don't try to free other pages from the free region or we'd
+ * end up trying to free huge chunks of virtual address space.
+ *
+ * Concurrency
+ * -----------
+ *
+ * How do we know that we're not freeing a page that is simultaneously
+ * being used for a fresh allocation in kasan_populate_vmalloc(_pte)?
+ *
+ * We _can_ have kasan_release_vmalloc and kasan_populate_vmalloc running
+ * at the same time. While we run under free_vmap_area_lock, the population
+ * code does not.
+ *
+ * free_vmap_area_lock instead operates to ensure that the larger range
+ * [free_region_start, free_region_end) is safe: because __alloc_vmap_area and
+ * the per-cpu region-finding algorithm both run under free_vmap_area_lock,
+ * no space identified as free will become used while we are running. This
+ * means that so long as we are careful with alignment and only free shadow
+ * pages entirely covered by the free region, we will not run in to any
+ * trouble - any simultaneous allocations will be for disjoint regions.
+ */
+void kasan_release_vmalloc(unsigned long start, unsigned long end,
+			   unsigned long free_region_start,
+			   unsigned long free_region_end)
+{
+	void *shadow_start, *shadow_end;
+	unsigned long region_start, region_end;
+
+	region_start = ALIGN(start, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
+	region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
+
+	free_region_start = ALIGN(free_region_start,
+				  PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
+
+	if (start != region_start &&
+	    free_region_start < region_start)
+		region_start -= PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE;
+
+	free_region_end = ALIGN_DOWN(free_region_end,
+				     PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
+
+	if (end != region_end &&
+	    free_region_end > region_end)
+		region_end += PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE;
+
+	shadow_start = kasan_mem_to_shadow((void *)region_start);
+	shadow_end = kasan_mem_to_shadow((void *)region_end);
+
+	if (shadow_end > shadow_start) {
+		apply_to_page_range(&init_mm, (unsigned long)shadow_start,
+				    (unsigned long)(shadow_end - shadow_start),
+				    kasan_depopulate_vmalloc_pte, NULL);
+		flush_tlb_kernel_range((unsigned long)shadow_start,
+				       (unsigned long)shadow_end);
+	}
+}
+#endif
diff --git a/mm/kasan/generic_report.c b/mm/kasan/generic_report.c
index 36c645939bc9..2d97efd4954f 100644
--- a/mm/kasan/generic_report.c
+++ b/mm/kasan/generic_report.c
@@ -86,6 +86,9 @@ static const char *get_shadow_bug_type(struct kasan_access_info *info)
 	case KASAN_ALLOCA_RIGHT:
 		bug_type = "alloca-out-of-bounds";
 		break;
+	case KASAN_VMALLOC_INVALID:
+		bug_type = "vmalloc-out-of-bounds";
+		break;
 	}
 
 	return bug_type;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 35cff6bbb716..3a083274628e 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -25,6 +25,7 @@
 #endif
 
 #define KASAN_GLOBAL_REDZONE    0xFA  /* redzone for global variable */
+#define KASAN_VMALLOC_INVALID   0xF9  /* unallocated space in vmapped page */
 
 /*
  * Stack redzone shadow values
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f48f64c8d200..72d0aa039e68 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -683,7 +683,7 @@ insert_vmap_area_augment(struct vmap_area *va,
  * free area is inserted. If VA has been merged, it is
  * freed.
  */
-static __always_inline void
+static __always_inline struct vmap_area *
 merge_or_add_vmap_area(struct vmap_area *va,
 		       struct rb_root *root, struct list_head *head)
 {
@@ -750,7 +750,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
 
 			/* Free vmap_area object. */
 			kmem_cache_free(vmap_area_cachep, va);
-			return;
+
+			/* Point to the new merged area. */
+			va = sibling;
+			merged = true;
 		}
 	}
 
@@ -759,6 +762,8 @@ merge_or_add_vmap_area(struct vmap_area *va,
 		link_va(va, root, parent, link, head);
 		augment_tree_propagate_from(va);
 	}
+
+	return va;
 }
 
 static __always_inline bool
@@ -1196,8 +1201,7 @@ static void free_vmap_area(struct vmap_area *va)
 	 * Insert/Merge it back to the free tree/list.
 	 */
 	spin_lock(&free_vmap_area_lock);
-	merge_or_add_vmap_area(va,
-		&free_vmap_area_root, &free_vmap_area_list);
+	merge_or_add_vmap_area(va, &free_vmap_area_root, &free_vmap_area_list);
 	spin_unlock(&free_vmap_area_lock);
 }
 
@@ -1294,14 +1298,20 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	spin_lock(&free_vmap_area_lock);
 	llist_for_each_entry_safe(va, n_va, valist, purge_list) {
 		unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
+		unsigned long orig_start = va->va_start;
+		unsigned long orig_end = va->va_end;
 
 		/*
 		 * Finally insert or merge lazily-freed area. It is
 		 * detached and there is no need to "unlink" it from
 		 * anything.
 		 */
-		merge_or_add_vmap_area(va,
-			&free_vmap_area_root, &free_vmap_area_list);
+		va = merge_or_add_vmap_area(va, &free_vmap_area_root,
+					    &free_vmap_area_list);
+
+		if (is_vmalloc_or_module_addr((void *)orig_start))
+			kasan_release_vmalloc(orig_start, orig_end,
+					      va->va_start, va->va_end);
 
 		atomic_long_sub(nr, &vmap_lazy_nr);
@@ -2090,6 +2100,22 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 
 	setup_vmalloc_vm(area, va, flags, caller);
 
+	/*
+	 * For KASAN, if we are in vmalloc space, we need to cover the shadow
+	 * area with real memory. If we come here through VM_ALLOC, this is
+	 * done by a higher level function that has access to the true size,
+	 * which might not be a full page.
+	 *
+	 * We assume module space comes via VM_ALLOC path.
+	 */
+	if (is_vmalloc_addr(area->addr) && !(area->flags & VM_ALLOC)) {
+		if (kasan_populate_vmalloc(area->size, area)) {
+			unmap_vmap_area(va);
+			kfree(area);
+			return NULL;
+		}
+	}
+
 	return area;
 }
 
@@ -2267,6 +2293,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
 	debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
 	debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
 
+	if (area->flags & VM_KASAN)
+		kasan_poison_vmalloc(area->addr, area->size);
+
 	vm_remove_mappings(area, deallocate_pages);
 
 	if (deallocate_pages) {
@@ -2519,6 +2548,11 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!addr)
 		return NULL;
 
+	if (is_vmalloc_or_module_addr(area->addr)) {
+		if (kasan_populate_vmalloc(real_size, area))
+			return NULL;
+	}
+
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
 	 * flag. It means that vm_struct is not fully initialized.
@@ -3377,6 +3411,9 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 
 		setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
 				 pcpu_get_vm_areas);
+
+		/* assume success here */
+		kasan_populate_vmalloc(sizes[area], vms[area]);
 	}
 	spin_unlock(&vmap_area_lock);
 
@@ -3391,8 +3428,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	 * and when pcpu_get_vm_areas() is success.
 	 */
 	while (area--) {
-		merge_or_add_vmap_area(vas[area],
-			&free_vmap_area_root, &free_vmap_area_list);
+		merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
+				       &free_vmap_area_list);
 		vas[area] = NULL;
 	}
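
As a concrete model of the freeing rules described in the comment in
kasan_release_vmalloc() above, the following userspace sketch mirrors
the same alignment decisions (not kernel code; the addresses are
arbitrary examples, and the granularity is the generic KASAN value of
one shadow page per 32 KiB of vmalloc space):

    #include <stdio.h>

    #define PAGE_SIZE 4096UL
    #define SCALE     8UL                 /* KASAN_SHADOW_SCALE_SIZE */
    #define GRAN      (PAGE_SIZE * SCALE) /* vmalloc bytes per shadow page */

    #define ALIGN_UP(x)   (((x) + GRAN - 1) & ~(GRAN - 1))
    #define ALIGN_DOWN(x) ((x) & ~(GRAN - 1))

    int main(void)
    {
            /* freed region [start, end) inside free region [fr_start, fr_end) */
            unsigned long start = 0x12000UL, end = 0x2c000UL;
            unsigned long fr_start = 0x0UL, fr_end = 0x30000UL;

            unsigned long region_start = ALIGN_UP(start);
            unsigned long region_end = ALIGN_DOWN(end);

            /*
             * A partially covered shadow page at either end is freeable
             * only if the whole of its coverage lies in the free region.
             */
            if (start != region_start && ALIGN_UP(fr_start) < region_start)
                    region_start -= GRAN;
            if (end != region_end && ALIGN_DOWN(fr_end) > region_end)
                    region_end += GRAN;

            printf("depopulate shadow of [%#lx, %#lx)\n",
                   region_start, region_end);
            return 0;
    }

With these inputs the partial pages at both ends are freed, because the
free region entirely covers them; shrink the free region and the
corresponding partial page is retained instead.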
From patchwork Thu Oct 31 09:39:07 2019
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11220813

From: Daniel Axtens
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
    aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
    linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com,
    christophe.leroy@c-s.fr
Cc: linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com, Daniel Axtens
Subject: [PATCH v11 2/4] kasan: add test for vmalloc
Date: Thu, 31 Oct 2019 20:39:07 +1100
Message-Id: <20191031093909.9228-3-dja@axtens.net>
In-Reply-To: <20191031093909.9228-1-dja@axtens.net>
References: <20191031093909.9228-1-dja@axtens.net>

Test kasan vmalloc support by adding a new test to the module.

Reviewed-by: Andrey Ryabinin
Signed-off-by: Daniel Axtens

---
v5: split out per Christophe Leroy
---
 lib/test_kasan.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 49cc4d570a40..328d33beae36 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include <linux/vmalloc.h>
 
 #include
 
@@ -748,6 +749,30 @@ static noinline void __init kmalloc_double_kzfree(void)
 	kzfree(ptr);
 }
 
+#ifdef CONFIG_KASAN_VMALLOC
+static noinline void __init vmalloc_oob(void)
+{
+	void *area;
+
+	pr_info("vmalloc out-of-bounds\n");
+
+	/*
+	 * We have to be careful not to hit the guard page.
+	 * The MMU will catch that and crash us.
+	 */
+	area = vmalloc(3000);
+	if (!area) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	((volatile char *)area)[3100];
+	vfree(area);
+}
+#else
+static void __init vmalloc_oob(void) {}
+#endif
+
 static int __init kmalloc_tests_init(void)
 {
 	/*
@@ -793,6 +818,7 @@ static int __init kmalloc_tests_init(void)
 	kasan_strings();
 	kasan_bitops();
 	kmalloc_double_kzfree();
+	vmalloc_oob();
 
 	kasan_restore_multi_shot(multishot);
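
Why offset 3100 specifically: the shadow for the allocation is only
unpoisoned up to the requested size, so an access past 3000 bytes that
still lands within the backing page hits vmalloc poison rather than the
unmapped guard page. A sketch of the layout as plain userspace
arithmetic (not kernel code):

    #include <stdio.h>

    int main(void)
    {
            unsigned long size = 3000, page = 4096, granule = 8;

            /* shadow is unpoisoned for the requested size, rounded up
             * to a granule; the rest of the page stays poisoned */
            unsigned long ok = (size + granule - 1) / granule * granule;

            printf("accessible:      [0, %lu)\n", ok);
            printf("poisoned shadow: [%lu, %lu)  <- 3100 lands here\n",
                   ok, page);
            printf("guard page:      [%lu, ...)  <- 3100 avoids this\n",
                   page);
            return 0;
    }

An access at or beyond 4096 would fault on the guard page and crash the
kernel instead of producing a vmalloc-out-of-bounds report.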
From patchwork Thu Oct 31 09:39:08 2019
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11220815

From: Daniel Axtens
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
    aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
    linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com,
    christophe.leroy@c-s.fr
Cc: linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com, Daniel Axtens
Subject: [PATCH v11 3/4] fork: support VMAP_STACK with KASAN_VMALLOC
Date: Thu, 31 Oct 2019 20:39:08 +1100
Message-Id: <20191031093909.9228-4-dja@axtens.net>
In-Reply-To: <20191031093909.9228-1-dja@axtens.net>
References: <20191031093909.9228-1-dja@axtens.net>

Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:

 - clear the shadow region of vmapped stacks when swapping them in
 - tweak Kconfig to allow VMAP_STACK to be turned on with KASAN

Reviewed-by: Dmitry Vyukov
Reviewed-by: Andrey Ryabinin
Signed-off-by: Daniel Axtens
---
 arch/Kconfig  | 9 +++++----
 kernel/fork.c | 4 ++++
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 5f8a5d84dbbe..2d914990402f 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -843,16 +843,17 @@ config HAVE_ARCH_VMAP_STACK
 config VMAP_STACK
 	default y
 	bool "Use a virtually-mapped stack"
-	depends on HAVE_ARCH_VMAP_STACK && !KASAN
+	depends on HAVE_ARCH_VMAP_STACK
+	depends on !KASAN || KASAN_VMALLOC
 	---help---
 	  Enable this if you want the use virtually-mapped kernel stacks
 	  with guard pages. This causes kernel stack overflows to be
 	  caught immediately rather than causing difficult-to-diagnose
 	  corruption.
 
-	  This is presently incompatible with KASAN because KASAN expects
-	  the stack to map directly to the KASAN shadow map using a formula
-	  that is incorrect if the stack is in vmalloc space.
+	  To use this with KASAN, the architecture must support backing
+	  virtual mappings with real shadow memory, and KASAN_VMALLOC must
+	  be enabled.
 
 config ARCH_OPTIONAL_KERNEL_RWX
 	def_bool n
diff --git a/kernel/fork.c b/kernel/fork.c
index 4b2a82eda8e5..0eef4243019c 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -94,6 +94,7 @@
 #include
 #include
 #include
+#include <linux/kasan.h>
 
 #include
 #include
@@ -224,6 +225,9 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
 		if (!s)
 			continue;
 
+		/* Clear the KASAN shadow of the stack. */
+		kasan_unpoison_shadow(s->addr, THREAD_SIZE);
+
 		/* Clear stale pointers from reused stack. */
 		memset(s->addr, 0, THREAD_SIZE);
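
The kasan_unpoison_shadow() call matters because a stack taken from the
cache still carries whatever shadow state the previous task left
behind, e.g. stale redzone poison from instrumented stack frames. A toy
userspace model of that lifecycle (illustrative values only, not kernel
code):

    #include <stdio.h>
    #include <string.h>

    #define THREAD_SIZE  16384UL        /* hypothetical stack size */
    #define SHADOW_BYTES (THREAD_SIZE / 8)

    static unsigned char shadow[SHADOW_BYTES];

    int main(void)
    {
            /* the previous task left redzone poison at some depth */
            memset(shadow + 100, 0xF2 /* e.g. a stack redzone marker */, 4);

            /* equivalent of kasan_unpoison_shadow(s->addr, THREAD_SIZE)
             * before handing the cached stack to a new task */
            memset(shadow, 0, sizeof(shadow));

            printf("shadow clean: %s\n", shadow[100] == 0 ? "yes" : "no");
            return 0;
    }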
From patchwork Thu Oct 31 09:39:09 2019
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11220817

From: Daniel Axtens
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
    aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
    linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com,
    christophe.leroy@c-s.fr
Cc: linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com, Daniel Axtens
Subject: [PATCH v11 4/4] x86/kasan: support KASAN_VMALLOC
Date: Thu, 31 Oct 2019 20:39:09 +1100
Message-Id: <20191031093909.9228-5-dja@axtens.net>
In-Reply-To: <20191031093909.9228-1-dja@axtens.net>
References: <20191031093909.9228-1-dja@axtens.net>

In the case where KASAN directly allocates memory to back vmalloc
space, don't map the early shadow page over it.

We prepopulate pgds/p4ds for the range that would otherwise be empty.
This is required to get it synced to hardware on boot, allowing the
lower levels of the page tables to be filled dynamically.

Acked-by: Dmitry Vyukov
Reviewed-by: Andrey Ryabinin
Signed-off-by: Daniel Axtens

---
v11: use NUMA_NO_NODE, not a completely invalid value, and don't
     populate more real p[g4]ds than necessary - thanks Andrey.

v5: fix some checkpatch CHECK warnings. There are some that remain
    around lines ending with '(': I have not changed these because it's
    consistent with the rest of the file and it's not easy to see how
    to fix it without creating an overlong line or lots of temporary
    variables.

v2: move from faulting in shadow pgds to prepopulating
---
 arch/x86/Kconfig            |  1 +
 arch/x86/mm/kasan_init_64.c | 61 +++++++++++++++++++++++++++++++++++++
 2 files changed, 62 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 45699e458057..d65b0fcc9bc0 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -135,6 +135,7 @@ config X86
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if X86_64
+	select HAVE_ARCH_KASAN_VMALLOC		if X86_64
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 296da58f3013..cf5bc37c90ac 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -245,6 +245,49 @@ static void __init kasan_map_early_shadow(pgd_t *pgd)
 	} while (pgd++, addr = next, addr != end);
 }
 
+static void __init kasan_shallow_populate_p4ds(pgd_t *pgd,
+					       unsigned long addr,
+					       unsigned long end)
+{
+	p4d_t *p4d;
+	unsigned long next;
+	void *p;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		next = p4d_addr_end(addr, end);
+
+		if (p4d_none(*p4d)) {
+			p = early_alloc(PAGE_SIZE, NUMA_NO_NODE, true);
+			p4d_populate(&init_mm, p4d, p);
+		}
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void __init kasan_shallow_populate_pgds(void *start, void *end)
+{
+	unsigned long addr, next;
+	pgd_t *pgd;
+	void *p;
+
+	addr = (unsigned long)start;
+	pgd = pgd_offset_k(addr);
+	do {
+		next = pgd_addr_end(addr, (unsigned long)end);
+
+		if (pgd_none(*pgd)) {
+			p = early_alloc(PAGE_SIZE, NUMA_NO_NODE, true);
+			pgd_populate(&init_mm, pgd, p);
+		}
+
+		/*
+		 * we need to populate p4ds to be synced when running in
+		 * four level mode - see sync_global_pgds_l4()
+		 */
+		kasan_shallow_populate_p4ds(pgd, addr, next);
+	} while (pgd++, addr = next, addr != (unsigned long)end);
+}
+
 #ifdef CONFIG_KASAN_INLINE
 static int kasan_die_handler(struct notifier_block *self,
 			     unsigned long val,
@@ -354,6 +397,24 @@ void __init kasan_init(void)
 
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
+		kasan_mem_to_shadow((void *)VMALLOC_START));
+
+	/*
+	 * If we're in full vmalloc mode, don't back vmalloc space with early
+	 * shadow pages. Instead, prepopulate pgds/p4ds so they are synced to
+	 * the global table and we can populate the lower levels on demand.
+	 */
+	if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
+		kasan_shallow_populate_pgds(
+			kasan_mem_to_shadow((void *)VMALLOC_START),
+			kasan_mem_to_shadow((void *)VMALLOC_END));
+	else
+		kasan_populate_early_shadow(
+			kasan_mem_to_shadow((void *)VMALLOC_START),
+			kasan_mem_to_shadow((void *)VMALLOC_END));
+
+	kasan_populate_early_shadow(
+		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
 		shadow_cpu_entry_begin);
 
 	kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
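
For a sense of the ranges kasan_shallow_populate_pgds() walks: with the
default 4-level x86-64 layout (VMALLOC_START 0xffffc90000000000,
VMALLOC_END 0xffffe8ffffffffff, KASAN_SHADOW_OFFSET
0xdffffc0000000000), the vmalloc shadow spans about 4 TiB, which is why
only the pgd/p4d levels are prepopulated and the lower levels are
filled on demand. A userspace sketch of the arithmetic, not kernel
code:

    #include <stdio.h>

    #define KASAN_SHADOW_OFFSET 0xdffffc0000000000UL
    #define VMALLOC_START       0xffffc90000000000UL
    #define VMALLOC_END         0xffffe8ffffffffffUL

    static unsigned long mem_to_shadow(unsigned long addr)
    {
            return (addr >> 3) + KASAN_SHADOW_OFFSET;
    }

    int main(void)
    {
            /* pte-level shadow inside this span is allocated on demand
             * by kasan_populate_vmalloc() */
            printf("vmalloc shadow: [%#lx, %#lx]\n",
                   mem_to_shadow(VMALLOC_START), mem_to_shadow(VMALLOC_END));
            return 0;
    }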