From patchwork Mon Nov 2 16:04:18 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11874433
Date: Mon, 2 Nov 2020 17:04:18 +0100
Subject: [PATCH v7 38/41] kasan, mm: reset tags when accessing metadata
From: Andrey Konovalov
To: Catalin Marinas, Will Deacon
Cc: linux-arm-kernel@lists.infradead.org, Marco Elver, Elena Petrova,
 Andrey Konovalov, Kevin Brodsky, Branislav Rankov,
 kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Alexander Potapenko, Evgenii Stepanov,
 Andrey Ryabinin, Andrew Morton, Vincenzo Frascino, Dmitry Vyukov

Kernel allocator code accesses metadata for slab objects that may lie
out-of-bounds of the object itself, or may be accessed when an object is
freed. Such accesses trigger tag faults and lead to false-positive reports
with hardware tag-based KASAN.

Software KASAN modes disable instrumentation for allocator code via the
KASAN_SANITIZE Makefile macro, and rely on kasan_enable/disable_current()
annotations to ignore KASAN reports.

With hardware tag-based KASAN neither of those options is available, as it
doesn't use compiler instrumentation, no tag faults are ignored, and MTE is
disabled after the first one.

Instead, reset tags when accessing metadata (currently only for SLUB).

Signed-off-by: Andrey Konovalov
Signed-off-by: Vincenzo Frascino
Acked-by: Marco Elver
---
Change-Id: I39f3c4d4f29299d4fbbda039bedf230db1c746fb
---
 mm/page_alloc.c  |  4 +++-
 mm/page_poison.c |  2 +-
 mm/slub.c        | 29 ++++++++++++++++-------------
 3 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23f5066bd4a5..6231a6e456cf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1195,8 +1195,10 @@ static void kernel_init_free_pages(struct page *page, int numpages)
 	/* s390's use of memset() could override KASAN redzones. */
 	kasan_disable_current();
-	for (i = 0; i < numpages; i++)
+	for (i = 0; i < numpages; i++) {
+		page_kasan_tag_reset(page + i);
 		clear_highpage(page + i);
+	}
 	kasan_enable_current();
 }
diff --git a/mm/page_poison.c b/mm/page_poison.c
index ae0482cded87..e6c994af7518 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -53,7 +53,7 @@ static void poison_page(struct page *page)
 	/* KASAN still think the page is in-use, so skip it. */
 	kasan_disable_current();
-	memset(addr, PAGE_POISON, PAGE_SIZE);
+	memset(kasan_reset_tag(addr), PAGE_POISON, PAGE_SIZE);
 	kasan_enable_current();
 	kunmap_atomic(addr);
 }
diff --git a/mm/slub.c b/mm/slub.c
index b30be2385d1c..df2fd5b57df1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -249,7 +249,7 @@ static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr,
 {
 #ifdef CONFIG_SLAB_FREELIST_HARDENED
 	/*
-	 * When CONFIG_KASAN_SW_TAGS is enabled, ptr_addr might be tagged.
+	 * When CONFIG_KASAN_SW/HW_TAGS is enabled, ptr_addr might be tagged.
 	 * Normally, this doesn't cause any issues, as both set_freepointer()
 	 * and get_freepointer() are called with a pointer with the same tag.
 	 * However, there are some issues with CONFIG_SLUB_DEBUG code. For
@@ -275,6 +275,7 @@ static inline void *freelist_dereference(const struct kmem_cache *s,
 static inline void *get_freepointer(struct kmem_cache *s, void *object)
 {
+	object = kasan_reset_tag(object);
 	return freelist_dereference(s, object + s->offset);
 }
@@ -304,6 +305,7 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 	BUG_ON(object == fp); /* naive detection of double free or corruption */
 #endif
+	freeptr_addr = (unsigned long)kasan_reset_tag((void *)freeptr_addr);
 	*(void **)freeptr_addr = freelist_ptr(s, fp, freeptr_addr);
 }
@@ -538,8 +540,8 @@ static void print_section(char *level, char *text, u8 *addr,
 		unsigned int length)
 {
 	metadata_access_enable();
-	print_hex_dump(level, text, DUMP_PREFIX_ADDRESS, 16, 1, addr,
-			length, 1);
+	print_hex_dump(level, kasan_reset_tag(text), DUMP_PREFIX_ADDRESS,
+			16, 1, addr, length, 1);
 	metadata_access_disable();
 }
@@ -570,7 +572,7 @@ static struct track *get_track(struct kmem_cache *s, void *object,
 	p = object + get_info_end(s);
-	return p + alloc;
+	return kasan_reset_tag(p + alloc);
 }
 static void set_track(struct kmem_cache *s, void *object,
@@ -583,7 +585,8 @@ static void set_track(struct kmem_cache *s, void *object,
 		unsigned int nr_entries;
 		metadata_access_enable();
-		nr_entries = stack_trace_save(p->addrs, TRACK_ADDRS_COUNT, 3);
+		nr_entries = stack_trace_save(kasan_reset_tag(p->addrs),
+					      TRACK_ADDRS_COUNT, 3);
 		metadata_access_disable();
 		if (nr_entries < TRACK_ADDRS_COUNT)
@@ -747,7 +750,7 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page,
 static void init_object(struct kmem_cache *s, void *object, u8 val)
 {
-	u8 *p = object;
+	u8 *p = kasan_reset_tag(object);
 	if (s->flags & SLAB_RED_ZONE)
 		memset(p - s->red_left_pad, val, s->red_left_pad);
@@ -777,7 +780,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	u8 *addr = page_address(page);
 	metadata_access_enable();
-	fault = memchr_inv(start, value, bytes);
+	fault = memchr_inv(kasan_reset_tag(start), value, bytes);
 	metadata_access_disable();
 	if (!fault)
 		return 1;
@@ -873,7 +876,7 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 	pad = end - remainder;
 	metadata_access_enable();
-	fault = memchr_inv(pad, POISON_INUSE, remainder);
+	fault = memchr_inv(kasan_reset_tag(pad), POISON_INUSE, remainder);
 	metadata_access_disable();
 	if (!fault)
 		return 1;
@@ -1118,7 +1121,7 @@ void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr)
 		return;
 	metadata_access_enable();
-	memset(addr, POISON_INUSE, page_size(page));
+	memset(kasan_reset_tag(addr), POISON_INUSE, page_size(page));
 	metadata_access_disable();
 }
@@ -1566,10 +1569,10 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 		 * Clear the object and the metadata, but don't touch
 		 * the redzone.
 		 */
-		memset(object, 0, s->object_size);
+		memset(kasan_reset_tag(object), 0, s->object_size);
 		rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
-		memset((char *)object + s->inuse, 0,
+		memset((char *)kasan_reset_tag(object) + s->inuse, 0,
 		       s->size - s->inuse - rsize);
 	}
@@ -2883,10 +2886,10 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 		stat(s, ALLOC_FASTPATH);
 	}
-	maybe_wipe_obj_freeptr(s, object);
+	maybe_wipe_obj_freeptr(s, kasan_reset_tag(object));
 	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
-		memset(object, 0, s->object_size);
+		memset(kasan_reset_tag(object), 0, s->object_size);
 	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object);
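
As background on what resetting a tag means here: with hardware tag-based
KASAN, pointers returned by the allocator carry a tag in their top byte, and
the hardware compares that tag against the memory's allocation tag on every
load and store. kasan_reset_tag() returns the same address with the tag
replaced by the match-all value, so the metadata accesses above are no longer
tag-checked. A rough sketch of the idea, assuming arm64 MTE; the constants
and the reset_tag() helper name are illustrative, not the kernel's actual
definitions:

#include <stdint.h>

/*
 * Illustrative only: assumes arm64 MTE, where a 4-bit allocation tag lives
 * in bits 59:56 of a pointer and kernel pointers normally carry 0xFF in the
 * top byte. With the kernel's match-all configuration, a top byte of 0xFF
 * means accesses through the pointer are not tag-checked.
 */
#define TOP_BYTE_SHIFT	56
#define TOP_BYTE_MASK	(0xFFULL << TOP_BYTE_SHIFT)

static inline void *reset_tag(const void *ptr)
{
	uint64_t addr = (uint64_t)(uintptr_t)ptr;

	/* Drop the current tag and install the match-all top byte. */
	addr = (addr & ~TOP_BYTE_MASK) | (0xFFULL << TOP_BYTE_SHIFT);
	return (void *)(uintptr_t)addr;
}

Because the top byte is ignored for address translation (top-byte-ignore),
the reset pointer still refers to the same memory; only the tag check is
skipped.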