From patchwork Tue Jun 26 13:15:23 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10488927
From: Andrey Konovalov <andreyknvl@google.com>
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
 Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
 Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
 Eric W. Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
 Arnd Bergmann, Kirill A. Shutemov,
Shutemov" , Greg Kroah-Hartman , Kate Stewart , Mike Rapoport , kasan-dev@googlegroups.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org, linux-mm@kvack.org, linux-kbuild@vger.kernel.org Cc: Kostya Serebryany , Evgeniy Stepanov , Lee Smith , Ramana Radhakrishnan , Jacob Bramley , Ruben Ayrapetyan , Jann Horn , Mark Brand , Chintan Pandya , Andrey Konovalov Subject: [PATCH v4 13/17] khwasan: add hooks implementation Date: Tue, 26 Jun 2018 15:15:23 +0200 Message-Id: X-Mailer: git-send-email 2.18.0.rc2.346.g013aa6912e-goog In-Reply-To: References: X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP This commit adds KHWASAN specific hooks implementation and adjusts common KASAN and KHWASAN ones. 1. When a new slab cache is created, KHWASAN rounds up the size of the objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16). 2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory, that corresponds to this object to this tag, and embeds this tag value into the top byte of the returned pointer. 3. On each kfree KHWASAN poisons the shadow memory with a random tag to allow detection of use-after-free bugs. The rest of the logic of the hook implementation is very much similar to the one provided by KASAN. KHWASAN saves allocation and free stack metadata to the slab object the same was KASAN does this. Signed-off-by: Andrey Konovalov --- mm/kasan/common.c | 83 +++++++++++++++++++++++++++++++++++----------- mm/kasan/kasan.h | 8 +++++ mm/kasan/khwasan.c | 40 ++++++++++++++++++++++ 3 files changed, 111 insertions(+), 20 deletions(-) diff --git a/mm/kasan/common.c b/mm/kasan/common.c index 656baa8984c7..1e96ca050c75 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -140,6 +140,9 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value) { void *shadow_start, *shadow_end; + /* Perform shadow offset calculation based on untagged address */ + address = reset_tag(address); + shadow_start = kasan_mem_to_shadow(address); shadow_end = kasan_mem_to_shadow(address + size); @@ -148,11 +151,20 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value) void kasan_unpoison_shadow(const void *address, size_t size) { - kasan_poison_shadow(address, size, 0); + u8 tag = get_tag(address); + + /* Perform shadow offset calculation based on untagged address */ + address = reset_tag(address); + + kasan_poison_shadow(address, size, tag); if (size & KASAN_SHADOW_MASK) { u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); - *shadow = size & KASAN_SHADOW_MASK; + + if (IS_ENABLED(CONFIG_KASAN_HW)) + *shadow = tag; + else + *shadow = size & KASAN_SHADOW_MASK; } } @@ -200,8 +212,9 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark) void kasan_alloc_pages(struct page *page, unsigned int order) { - if (likely(!PageHighMem(page))) - kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); + if (unlikely(PageHighMem(page))) + return; + kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); } void kasan_free_pages(struct page *page, unsigned int order) @@ -235,6 +248,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, slab_flags_t *flags) { unsigned int orig_size = *size; + unsigned int redzone_size = 0; int redzone_adjust; /* Add alloc meta. 
 mm/kasan/common.c  | 83 +++++++++++++++++++++++++++++++++++-----------
 mm/kasan/kasan.h   |  8 +++++
 mm/kasan/khwasan.c | 40 ++++++++++++++++++++++
 3 files changed, 111 insertions(+), 20 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 656baa8984c7..1e96ca050c75 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -140,6 +140,9 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 {
 	void *shadow_start, *shadow_end;
 
+	/* Perform shadow offset calculation based on untagged address */
+	address = reset_tag(address);
+
 	shadow_start = kasan_mem_to_shadow(address);
 	shadow_end = kasan_mem_to_shadow(address + size);
 
@@ -148,11 +151,20 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 
 void kasan_unpoison_shadow(const void *address, size_t size)
 {
-	kasan_poison_shadow(address, size, 0);
+	u8 tag = get_tag(address);
+
+	/* Perform shadow offset calculation based on untagged address */
+	address = reset_tag(address);
+
+	kasan_poison_shadow(address, size, tag);
 
 	if (size & KASAN_SHADOW_MASK) {
 		u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
-		*shadow = size & KASAN_SHADOW_MASK;
+
+		if (IS_ENABLED(CONFIG_KASAN_HW))
+			*shadow = tag;
+		else
+			*shadow = size & KASAN_SHADOW_MASK;
 	}
 }
 
@@ -200,8 +212,9 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
 
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
-	if (likely(!PageHighMem(page)))
-		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+	if (unlikely(PageHighMem(page)))
+		return;
+	kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
 }
 
 void kasan_free_pages(struct page *page, unsigned int order)
@@ -235,6 +248,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 		slab_flags_t *flags)
 {
 	unsigned int orig_size = *size;
+	unsigned int redzone_size = 0;
 	int redzone_adjust;
 
 	/* Add alloc meta. */
@@ -242,20 +256,20 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 	*size += sizeof(struct kasan_alloc_meta);
 
 	/* Add free meta. */
-	if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
-	    cache->object_size < sizeof(struct kasan_free_meta)) {
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	    (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+	     cache->object_size < sizeof(struct kasan_free_meta))) {
 		cache->kasan_info.free_meta_offset = *size;
 		*size += sizeof(struct kasan_free_meta);
 	}
-	redzone_adjust = optimal_redzone(cache->object_size) -
-		(*size - cache->object_size);
+	redzone_size = optimal_redzone(cache->object_size);
+	redzone_adjust = redzone_size - (*size - cache->object_size);
 	if (redzone_adjust > 0)
 		*size += redzone_adjust;
 
 	*size = min_t(unsigned int, KMALLOC_MAX_SIZE,
-			max(*size, cache->object_size +
-					optimal_redzone(cache->object_size)));
+			max(*size, cache->object_size + redzone_size));
 
 	/*
 	 * If the metadata doesn't fit, don't enable KASAN at all.
@@ -268,6 +282,8 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 		return;
 	}
 
+	cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
+
 	*flags |= SLAB_KASAN;
 }
 
@@ -325,18 +341,41 @@ void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
 
 void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
-	return kasan_kmalloc(cache, object, cache->object_size, flags);
+	object = kasan_kmalloc(cache, object, cache->object_size, flags);
+	if (IS_ENABLED(CONFIG_KASAN_HW) && unlikely(cache->ctor)) {
+		/*
+		 * Cache constructor might use object's pointer value to
+		 * initialize some of its fields.
+		 */
+		cache->ctor(object);
+	}
+	return object;
+}
+
+static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
+{
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+		return shadow_byte < 0 ||
+			shadow_byte >= KASAN_SHADOW_SCALE_SIZE;
+	else
+		return tag != (u8)shadow_byte;
 }
 
 static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 			unsigned long ip, bool quarantine)
 {
 	s8 shadow_byte;
+	u8 tag;
+	void *tagged_object;
 	unsigned long rounded_up_size;
 
+	tag = get_tag(object);
+	tagged_object = object;
+	object = reset_tag(object);
+
 	if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
 	    object)) {
-		kasan_report_invalid_free(object, ip);
+		kasan_report_invalid_free(tagged_object, ip);
 		return true;
 	}
 
@@ -345,20 +384,22 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 		return false;
 
 	shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
-	if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
-		kasan_report_invalid_free(object, ip);
+	if (shadow_invalid(tag, shadow_byte)) {
+		kasan_report_invalid_free(tagged_object, ip);
 		return true;
 	}
 
 	rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE);
 	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
 
-	if (!quarantine || unlikely(!(cache->flags & SLAB_KASAN)))
+	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
+			unlikely(!(cache->flags & SLAB_KASAN)))
 		return false;
 
 	set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT);
 	quarantine_put(get_free_info(cache, object), cache);
-	return true;
+
+	return IS_ENABLED(CONFIG_KASAN_GENERIC);
 }
 
 bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
@@ -371,6 +412,7 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 {
 	unsigned long redzone_start;
 	unsigned long redzone_end;
+	u8 tag;
 
 	if (gfpflags_allow_blocking(flags))
 		quarantine_reduce();
 
@@ -383,14 +425,15 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 	redzone_end = round_up((unsigned long)object + cache->object_size,
 				KASAN_SHADOW_SCALE_SIZE);
 
-	kasan_unpoison_shadow(object, size);
+	tag = random_tag();
+	kasan_unpoison_shadow(set_tag(object, tag), size);
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_KMALLOC_REDZONE);
 
 	if (cache->flags & SLAB_KASAN)
 		set_track(&get_alloc_info(cache, object)->alloc_track, flags);
 
-	return (void *)object;
+	return set_tag(object, tag);
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
@@ -440,7 +483,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 	page = virt_to_head_page(ptr);
 
 	if (unlikely(!PageSlab(page))) {
-		if (ptr != page_address(page)) {
+		if (reset_tag(ptr) != page_address(page)) {
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
@@ -453,7 +496,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 void kasan_kfree_large(void *ptr, unsigned long ip)
 {
-	if (ptr != page_address(virt_to_head_page(ptr)))
+	if (reset_tag(ptr) != page_address(virt_to_head_page(ptr)))
 		kasan_report_invalid_free(ptr, ip);
 	/* The object will be poisoned by page_alloc. */
 }
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index d60859d26be7..6f4f2ebf5f57 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,10 +12,18 @@
 #define KHWASAN_TAG_INVALID	0xFE /* inaccessible memory tag */
 #define KHWASAN_TAG_MAX		0xFD /* maximum value for random tags */
 
+#ifdef CONFIG_KASAN_GENERIC
 #define KASAN_FREE_PAGE		0xFF /* page was freed */
 #define KASAN_PAGE_REDZONE	0xFE /* redzone for kmalloc_large allocations */
 #define KASAN_KMALLOC_REDZONE	0xFC /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE	0xFB /* object was freed (kmem_cache_free/kfree) */
+#else
+#define KASAN_FREE_PAGE		KHWASAN_TAG_INVALID
+#define KASAN_PAGE_REDZONE	KHWASAN_TAG_INVALID
+#define KASAN_KMALLOC_REDZONE	KHWASAN_TAG_INVALID
+#define KASAN_KMALLOC_FREE	KHWASAN_TAG_INVALID
+#endif
+
 #define KASAN_GLOBAL_REDZONE	0xFA /* redzone for global variable */
 
 /*
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index d34679b8f8c7..fd1725022794 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -88,15 +88,52 @@ void *khwasan_reset_tag(const void *addr)
 
 void check_memory_region(unsigned long addr, size_t size, bool write,
 				unsigned long ret_ip)
 {
+	u8 tag;
+	u8 *shadow_first, *shadow_last, *shadow;
+	void *untagged_addr;
+
+	tag = get_tag((const void *)addr);
+
+	/* Ignore accesses for pointers tagged with 0xff (native kernel
+	 * pointer tag) to suppress false positives caused by kmap.
+	 *
+	 * Some kernel code was written to account for archs that don't keep
+	 * high memory mapped all the time, but rather map and unmap particular
+	 * pages when needed. Instead of storing a pointer to the kernel memory,
+	 * this code saves the address of the page structure and offset within
+	 * that page for later use. Those pages are then mapped and unmapped
+	 * with kmap/kunmap when necessary and virt_to_page is used to get the
+	 * virtual address of the page. For arm64 (that keeps the high memory
+	 * mapped all the time), kmap is turned into a page_address call.
+	 *
+	 * The issue is that with use of the page_address + virt_to_page
+	 * sequence the top byte value of the original pointer gets lost (gets
+	 * set to KHWASAN_TAG_KERNEL (0xFF)).
+	 */
+	if (tag == KHWASAN_TAG_KERNEL)
+		return;
+
+	untagged_addr = reset_tag((const void *)addr);
+	shadow_first = kasan_mem_to_shadow(untagged_addr);
+	shadow_last = kasan_mem_to_shadow(untagged_addr + size - 1);
+
+	for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
+		if (*shadow != tag) {
+			kasan_report(addr, size, write, ret_ip);
+			return;
+		}
+	}
 }
 
 #define DEFINE_HWASAN_LOAD_STORE(size)					\
 	void __hwasan_load##size##_noabort(unsigned long addr)		\
 	{								\
+		check_memory_region(addr, size, false, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_load##size##_noabort);			\
 	void __hwasan_store##size##_noabort(unsigned long addr)		\
 	{								\
+		check_memory_region(addr, size, true, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_store##size##_noabort)
 
@@ -108,15 +145,18 @@ DEFINE_HWASAN_LOAD_STORE(16);
 
 void __hwasan_loadN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, false, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_loadN_noabort);
 
 void __hwasan_storeN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, true, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_storeN_noabort);
 
 void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size)
 {
+	kasan_poison_shadow((void *)addr, size, tag);
 }
 EXPORT_SYMBOL(__hwasan_tag_memory);
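
A closing illustration, not part of the patch: the loop in
check_memory_region() above compares one shadow byte per 16-byte granule
against the pointer's tag. The toy sketch below shows the same check over a
made-up pool/shadow pair; pool, shadow and access_is_valid() are
hypothetical names, and the kernel derives the shadow address with
kasan_mem_to_shadow() rather than this toy offset math.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define GRANULE_SHIFT	4			/* 16-byte granules */
#define POOL_SIZE	(1 << 12)		/* 4 KiB toy heap */

static uint8_t pool[POOL_SIZE];				/* the "memory" */
static uint8_t shadow[POOL_SIZE >> GRANULE_SHIFT];	/* one byte per granule */

/* Map an offset into the pool to its shadow byte. */
static inline uint8_t *mem_to_shadow(size_t offset)
{
	return &shadow[offset >> GRANULE_SHIFT];
}

/* The per-granule tag comparison performed by check_memory_region(). */
static bool access_is_valid(uint8_t tag, size_t offset, size_t size)
{
	uint8_t *first = mem_to_shadow(offset);
	uint8_t *last = mem_to_shadow(offset + size - 1);

	for (uint8_t *s = first; s <= last; s++)
		if (*s != tag)
			return false;	/* the kernel calls kasan_report() here */
	return true;
}

On a mismatching granule the kernel reports and bails out; this sketch just
returns false.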