From patchwork Fri May 25 14:40:29 2018
X-Patchwork-Submitter: Andrey Konovalov <andreyknvl@google.com>
X-Patchwork-Id: 10427679
From: Andrey Konovalov <andreyknvl@google.com>
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Jonathan Corbet,
    Catalin Marinas, Will Deacon, Christopher Li, Christoph Lameter,
    Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
    Masahiro Yamada, Michal Marek, Andrey Konovalov, Mark Rutland,
    Nick Desaulniers, Yury Norov, Marc Zyngier, Kristina Martsenko,
    Suzuki K Poulose, Punit Agrawal, Dave Martin, Ard Biesheuvel,
    James Morse, Michael Weiser, Julien Thierry, Tyler Baicar,
Biederman" , Thomas Gleixner , Ingo Molnar , Kees Cook , Sandipan Das , David Woodhouse , Paul Lawrence , Herbert Xu , Josh Poimboeuf , Geert Uytterhoeven , Tom Lendacky , Arnd Bergmann , Dan Williams , Michal Hocko , Jan Kara , Ross Zwisler , =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , Matthew Wilcox , "Kirill A . Shutemov" , Souptick Joarder , Hugh Dickins , Davidlohr Bueso , Greg Kroah-Hartman , Philippe Ombredanne , Kate Stewart , Laura Abbott , Boris Brezillon , Vlastimil Babka , Pintu Agarwal , Doug Berger , Anshuman Khandual , Mike Rapoport , Mel Gorman , Pavel Tatashin , Tetsuo Handa , kasan-dev@googlegroups.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org, linux-mm@kvack.org, linux-kbuild@vger.kernel.org Cc: Kostya Serebryany , Evgeniy Stepanov , Lee Smith , Ramana Radhakrishnan , Jacob Bramley , Ruben Ayrapetyan , Kees Cook , Jann Horn , Mark Brand , Chintan Pandya Subject: [PATCH v2 13/16] khwasan: add hooks implementation Date: Fri, 25 May 2018 16:40:29 +0200 Message-Id: <5793b6c9c8de0d24ae774ef8a18a9d304f11ca7d.1527259068.git.andreyknvl@google.com> X-Mailer: git-send-email 2.17.0.921.gf22659ad46-goog In-Reply-To: References: X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP This commit adds KHWASAN specific hooks implementation and adjusts common KASAN and KHWASAN ones. 1. When a new slab cache is created, KHWASAN rounds up the size of the objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16). 2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory, that corresponds to this object to this tag, and embeds this tag value into the top byte of the returned pointer. 3. On each kfree KHWASAN poisons the shadow memory with a random tag to allow detection of use-after-free bugs. The rest of the logic of the hook implementation is very much similar to the one provided by KASAN. KHWASAN saves allocation and free stack metadata to the slab object the same was KASAN does this. 
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c  | 83 +++++++++++++++++++++++++++++++++++-----------
 mm/kasan/khwasan.c | 40 ++++++++++++++++++++++
 2 files changed, 103 insertions(+), 20 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 3d7277cc1f5b..8123a61b7e0f 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -140,6 +140,9 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 {
 	void *shadow_start, *shadow_end;
 
+	/* Perform shadow offset calculation based on untagged address */
+	address = reset_tag(address);
+
 	shadow_start = kasan_mem_to_shadow(address);
 	shadow_end = kasan_mem_to_shadow(address + size);
 
@@ -148,11 +151,20 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 
 void kasan_unpoison_shadow(const void *address, size_t size)
 {
-	kasan_poison_shadow(address, size, 0);
+	u8 tag = get_tag(address);
+
+	/* Perform shadow offset calculation based on untagged address */
+	address = reset_tag(address);
+
+	kasan_poison_shadow(address, size, tag);
 
 	if (size & KASAN_SHADOW_MASK) {
 		u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
-		*shadow = size & KASAN_SHADOW_MASK;
+
+		if (IS_ENABLED(CONFIG_KASAN_HW))
+			*shadow = tag;
+		else
+			*shadow = size & KASAN_SHADOW_MASK;
 	}
 }
 
@@ -200,8 +212,9 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
 
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
-	if (likely(!PageHighMem(page)))
-		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+	if (unlikely(PageHighMem(page)))
+		return;
+	kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
 }
 
 void kasan_free_pages(struct page *page, unsigned int order)
@@ -235,6 +248,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 			slab_flags_t *flags)
 {
 	unsigned int orig_size = *size;
+	unsigned int redzone_size = 0;
 	int redzone_adjust;
 
 	/* Add alloc meta. */
@@ -242,20 +256,20 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 	*size += sizeof(struct kasan_alloc_meta);
 
 	/* Add free meta. */
-	if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
-	    cache->object_size < sizeof(struct kasan_free_meta)) {
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	    (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+	     cache->object_size < sizeof(struct kasan_free_meta))) {
 		cache->kasan_info.free_meta_offset = *size;
 		*size += sizeof(struct kasan_free_meta);
 	}
 
-	redzone_adjust = optimal_redzone(cache->object_size) -
-		(*size - cache->object_size);
+	redzone_size = optimal_redzone(cache->object_size);
+	redzone_adjust = redzone_size - (*size - cache->object_size);
 	if (redzone_adjust > 0)
 		*size += redzone_adjust;
 
 	*size = min_t(unsigned int, KMALLOC_MAX_SIZE,
-			max(*size, cache->object_size +
-					optimal_redzone(cache->object_size)));
+			max(*size, cache->object_size + redzone_size));
 
 	/*
 	 * If the metadata doesn't fit, don't enable KASAN at all.
@@ -268,6 +282,8 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 		return;
 	}
 
+	cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
+
 	*flags |= SLAB_KASAN;
 }
 
@@ -325,18 +341,41 @@ void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
 
 void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
-	return kasan_kmalloc(cache, object, cache->object_size, flags);
+	object = kasan_kmalloc(cache, object, cache->object_size, flags);
+	if (IS_ENABLED(CONFIG_KASAN_HW) && unlikely(cache->ctor)) {
+		/*
+		 * Cache constructor might use object's pointer value to
+		 * initialize some of its fields.
+		 */
+		cache->ctor(object);
+	}
+	return object;
+}
+
+static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
+{
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+		return shadow_byte < 0 ||
+			shadow_byte >= KASAN_SHADOW_SCALE_SIZE;
+	else
+		return tag != (u8)shadow_byte;
 }
 
 static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 			      unsigned long ip, bool quarantine)
 {
 	s8 shadow_byte;
+	u8 tag;
+	void *tagged_object;
 	unsigned long rounded_up_size;
 
+	tag = get_tag(object);
+	tagged_object = object;
+	object = reset_tag(object);
+
 	if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
 	    object)) {
-		kasan_report_invalid_free(object, ip);
+		kasan_report_invalid_free(tagged_object, ip);
 		return true;
 	}
 
@@ -345,20 +384,22 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 		return false;
 
 	shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
-	if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
-		kasan_report_invalid_free(object, ip);
+	if (shadow_invalid(tag, shadow_byte)) {
+		kasan_report_invalid_free(tagged_object, ip);
 		return true;
 	}
 
 	rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE);
 	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
 
-	if (!quarantine || unlikely(!(cache->flags & SLAB_KASAN)))
+	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
+			unlikely(!(cache->flags & SLAB_KASAN)))
 		return false;
 
 	set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT);
 	quarantine_put(get_free_info(cache, object), cache);
-	return true;
+
+	return IS_ENABLED(CONFIG_KASAN_GENERIC);
 }
 
 bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
@@ -371,6 +412,7 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 {
 	unsigned long redzone_start;
 	unsigned long redzone_end;
+	u8 tag;
 
 	if (gfpflags_allow_blocking(flags))
 		quarantine_reduce();
@@ -383,14 +425,15 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 	redzone_end = round_up((unsigned long)object + cache->object_size,
 			KASAN_SHADOW_SCALE_SIZE);
 
-	kasan_unpoison_shadow(object, size);
+	tag = random_tag();
+	kasan_unpoison_shadow(set_tag(object, tag), size);
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_KMALLOC_REDZONE);
 
 	if (cache->flags & SLAB_KASAN)
 		set_track(&get_alloc_info(cache, object)->alloc_track, flags);
 
-	return (void *)object;
+	return set_tag(object, tag);
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
@@ -440,7 +483,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 	page = virt_to_head_page(ptr);
 
 	if (unlikely(!PageSlab(page))) {
-		if (ptr != page_address(page)) {
+		if (reset_tag(ptr) != page_address(page)) {
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
@@ -453,7 +496,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 void kasan_kfree_large(void *ptr, unsigned long ip)
 {
-	if (ptr != page_address(virt_to_head_page(ptr)))
+	if (reset_tag(ptr) != page_address(virt_to_head_page(ptr)))
 		kasan_report_invalid_free(ptr, ip);
 	/* The object will be poisoned by page_alloc. */
 }
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index d34679b8f8c7..fd1725022794 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -88,15 +88,52 @@ void *khwasan_reset_tag(const void *addr)
 
 void check_memory_region(unsigned long addr, size_t size, bool write,
 				unsigned long ret_ip)
 {
+	u8 tag;
+	u8 *shadow_first, *shadow_last, *shadow;
+	void *untagged_addr;
+
+	tag = get_tag((const void *)addr);
+
+	/* Ignore accesses for pointers tagged with 0xff (native kernel
+	 * pointer tag) to suppress false positives caused by kmap.
+	 *
+	 * Some kernel code was written to account for archs that don't keep
+	 * high memory mapped all the time, but rather map and unmap particular
+	 * pages when needed. Instead of storing a pointer to the kernel memory,
+	 * this code saves the address of the page structure and offset within
+	 * that page for later use. Those pages are then mapped and unmapped
+	 * with kmap/kunmap when necessary and virt_to_page is used to get the
+	 * virtual address of the page. For arm64 (that keeps the high memory
+	 * mapped all the time), kmap is turned into a page_address call.
+	 *
+	 * The issue is that with use of the page_address + virt_to_page
+	 * sequence the top byte value of the original pointer gets lost (gets
+	 * set to KHWASAN_TAG_KERNEL (0xFF)).
+	 */
+	if (tag == KHWASAN_TAG_KERNEL)
+		return;
+
+	untagged_addr = reset_tag((const void *)addr);
+	shadow_first = kasan_mem_to_shadow(untagged_addr);
+	shadow_last = kasan_mem_to_shadow(untagged_addr + size - 1);
+
+	for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
+		if (*shadow != tag) {
+			kasan_report(addr, size, write, ret_ip);
+			return;
+		}
+	}
 }
 
 #define DEFINE_HWASAN_LOAD_STORE(size)					\
 	void __hwasan_load##size##_noabort(unsigned long addr)		\
 	{								\
+		check_memory_region(addr, size, false, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_load##size##_noabort);			\
 	void __hwasan_store##size##_noabort(unsigned long addr)	\
 	{								\
+		check_memory_region(addr, size, true, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_store##size##_noabort)
 
@@ -108,15 +145,18 @@ DEFINE_HWASAN_LOAD_STORE(16);
 
 void __hwasan_loadN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, false, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_loadN_noabort);
 
 void __hwasan_storeN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, true, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_storeN_noabort);
 
 void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size)
 {
+	kasan_poison_shadow((void *)addr, size, tag);
 }
 EXPORT_SYMBOL(__hwasan_tag_memory);
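
As a side note for reviewers, the loop in check_memory_region() above is
easy to model outside the kernel. The sketch below replaces the real
shadow mapping (kasan_mem_to_shadow()) with a flat array indexed by
16-byte granule, which is purely an illustrative assumption; only the
granule size and the compare-one-tag-per-granule logic come from the
patch:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GRANULE		16	/* KASAN_SHADOW_SCALE_SIZE */
#define POOL_GRANULES	64

static uint8_t shadow[POOL_GRANULES];	/* one tag byte per granule */

/* Return true if every granule covered by [off, off + size) carries the
 * expected tag; on a mismatch the kernel would call kasan_report(). */
static bool check_region(uint64_t off, size_t size, uint8_t tag)
{
	size_t first = off / GRANULE;
	size_t last = (off + size - 1) / GRANULE;
	size_t g;

	for (g = first; g <= last; g++)
		if (shadow[g] != tag)
			return false;
	return true;
}

int main(void)
{
	/* "Allocate" a 32-byte object: two granules tagged 0xAB. */
	shadow[0] = 0xAB;
	shadow[1] = 0xAB;

	printf("in bounds: %s\n", check_region(0, 32, 0xAB) ? "ok" : "report");
	printf("overflow:  %s\n", check_region(0, 33, 0xAB) ? "ok" : "report");
	return 0;
}

The second call reports because byte 32 falls into a granule whose shadow
still holds a different tag, which is how an access past the object's
redzone gets caught.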