From patchwork Mon Nov 23 20:14:47 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11926697
Date: Mon, 23 Nov 2020 21:14:47 +0100
Subject: [PATCH mm v4 17/19] kasan: sanitize objects when metadata doesn't fit
From: Andrey Konovalov
To: Andrew Morton
Cc: linux-arm-kernel@lists.infradead.org, Marco Elver, Catalin Marinas,
 Kevin Brodsky, Will Deacon, Branislav Rankov, kasan-dev@googlegroups.com,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Alexander Potapenko,
 Evgenii Stepanov, Andrey Konovalov, Andrey Ryabinin, Vincenzo Frascino,
 Dmitry Vyukov

KASAN marks caches that are sanitized with the SLAB_KASAN cache flag.
Currently, if the metadata that is appended after the object (which stores,
e.g., stack trace ids) doesn't fit into KMALLOC_MAX_SIZE (this can only
happen with SLAB, see the comment in the patch), KASAN turns off
sanitization completely.

With this change, sanitization of the object data is always enabled, but
the metadata is only stored when it fits. Instead of checking for the
SLAB_KASAN flag across the code to find out whether the metadata is
present, use cache->kasan_info.alloc/free_meta_offset. As 0 can be a valid
value for free_meta_offset, introduce KASAN_NO_FREE_META as an indicator
that the free metadata is missing.

Without this change, all sanitized KASAN objects would be put into
quarantine with generic KASAN. With this change, only the objects that have
metadata (i.e. when it fits) are put into quarantine; the rest are freed
right away.

Along the way, rework __kasan_cache_create() and add clarifying comments.
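For illustration, the metadata-fit rule can be modeled in a few lines of
plain C. This is a minimal userspace sketch, not code from this patch: the
KMALLOC_MAX_SIZE value, the meta sizes, and the try_add_meta() helper are
made up for the example.

#include <limits.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel definitions; values are made up. */
#define KMALLOC_MAX_SIZE   (1u << 22)
#define KASAN_NO_FREE_META INT_MAX

struct kasan_cache_info {
	int alloc_meta_offset;	/* 0 means "no alloc meta" */
	int free_meta_offset;	/* KASAN_NO_FREE_META means "no free meta" */
};

/*
 * Models the rule from the patch: append a metadata struct to the redzone
 * only if the resulting size still fits into KMALLOC_MAX_SIZE; otherwise
 * store the "missing" marker and leave the size unchanged.
 */
static unsigned int try_add_meta(unsigned int size, unsigned int meta_size,
				 int *offset, int missing_marker)
{
	if (size + meta_size > KMALLOC_MAX_SIZE) {
		*offset = missing_marker;	/* meta doesn't fit: mark absent */
		return size;
	}
	*offset = (int)size;			/* meta fits: record its offset */
	return size + meta_size;
}

int main(void)
{
	struct kasan_cache_info info;
	/* Pick a size where a 16-byte alloc meta fits but a 32-byte free meta doesn't. */
	unsigned int size = KMALLOC_MAX_SIZE - 24;

	size = try_add_meta(size, 16, &info.alloc_meta_offset, 0);
	size = try_add_meta(size, 32, &info.free_meta_offset, KASAN_NO_FREE_META);

	printf("alloc meta: %s, free meta: %s\n",
	       info.alloc_meta_offset ? "present" : "missing",
	       info.free_meta_offset != KASAN_NO_FREE_META ? "present" : "missing");
	return 0;
}

The sketch also shows why free meta needs a dedicated sentinel: offset 0 is
a legitimate value for it (free meta stored in the object itself), while
alloc meta is always placed after the object, so 0 can double as its
"missing" marker.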
Co-developed-by: Vincenzo Frascino
Signed-off-by: Vincenzo Frascino
Signed-off-by: Andrey Konovalov
Reviewed-by: Marco Elver
Link: https://linux-review.googlesource.com/id/Icd947e2bea054cb5cfbdc6cf6652227d97032dcb
---
 mm/kasan/common.c         | 116 ++++++++++++++++++++++++--------------
 mm/kasan/generic.c        |  15 ++---
 mm/kasan/hw_tags.c        |   6 +-
 mm/kasan/kasan.h          |  17 +++++-
 mm/kasan/quarantine.c     |  16 +++++-
 mm/kasan/report.c         |  43 +++++++-------
 mm/kasan/report_sw_tags.c |   9 ++-
 mm/kasan/sw_tags.c        |   4 ++
 8 files changed, 149 insertions(+), 77 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 42ba64fce8a3..249ccba1ecf5 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -115,9 +115,6 @@ void __kasan_free_pages(struct page *page, unsigned int order)
  */
 static inline unsigned int optimal_redzone(unsigned int object_size)
 {
-	if (!IS_ENABLED(CONFIG_KASAN_GENERIC))
-		return 0;
-
 	return object_size <= 64 - 16   ? 16 :
 		object_size <= 128 - 32  ? 32 :
@@ -131,47 +128,77 @@ static inline unsigned int optimal_redzone(unsigned int object_size)
 void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 			  slab_flags_t *flags)
 {
-	unsigned int orig_size = *size;
-	unsigned int redzone_size;
-	int redzone_adjust;
+	unsigned int ok_size;
+	unsigned int optimal_size;
+
+	/*
+	 * SLAB_KASAN is used to mark caches as ones that are sanitized by
+	 * KASAN. Currently this flag is used in two places:
+	 * 1. In slab_ksize() when calculating the size of the accessible
+	 *    memory within the object.
+	 * 2. In slab_common.c to prevent merging of sanitized caches.
+	 */
+	*flags |= SLAB_KASAN;
 
-	if (!kasan_stack_collection_enabled()) {
-		*flags |= SLAB_KASAN;
+	if (!kasan_stack_collection_enabled())
 		return;
-	}
 
-	/* Add alloc meta. */
+	ok_size = *size;
+
+	/* Add alloc meta into redzone. */
 	cache->kasan_info.alloc_meta_offset = *size;
 	*size += sizeof(struct kasan_alloc_meta);
 
-	/* Add free meta. */
-	if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
-	     cache->object_size < sizeof(struct kasan_free_meta))) {
-		cache->kasan_info.free_meta_offset = *size;
-		*size += sizeof(struct kasan_free_meta);
+	/*
+	 * If alloc meta doesn't fit, don't add it.
+	 * This can only happen with SLAB, as it has KMALLOC_MAX_SIZE equal
+	 * to KMALLOC_MAX_CACHE_SIZE and doesn't fall back to page_alloc for
+	 * larger sizes.
+	 */
+	if (*size > KMALLOC_MAX_SIZE) {
+		cache->kasan_info.alloc_meta_offset = 0;
+		*size = ok_size;
+		/* Continue, since free meta might still fit. */
 	}
 
-	redzone_size = optimal_redzone(cache->object_size);
-	redzone_adjust = redzone_size - (*size - cache->object_size);
-	if (redzone_adjust > 0)
-		*size += redzone_adjust;
-
-	*size = min_t(unsigned int, KMALLOC_MAX_SIZE,
-			max(*size, cache->object_size + redzone_size));
+	/* Only the generic mode uses free meta or flexible redzones. */
+	if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+		cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
+		return;
+	}
 
 	/*
-	 * If the metadata doesn't fit, don't enable KASAN at all.
+	 * Add free meta into redzone when it's not possible to store
+	 * it in the object. This is the case when:
+	 * 1. Object is SLAB_TYPESAFE_BY_RCU, which means that it can
+	 *    be touched after it was freed, or
+	 * 2. Object has a constructor, which means it's expected to
+	 *    retain its content until the next allocation, or
+	 * 3. Object is too small.
+	 * Otherwise cache->kasan_info.free_meta_offset = 0 is implied.
 	 */
-	if (*size <= cache->kasan_info.alloc_meta_offset ||
-			*size <= cache->kasan_info.free_meta_offset) {
-		cache->kasan_info.alloc_meta_offset = 0;
-		cache->kasan_info.free_meta_offset = 0;
-		*size = orig_size;
-		return;
+	if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor ||
+	    cache->object_size < sizeof(struct kasan_free_meta)) {
+		ok_size = *size;
+
+		cache->kasan_info.free_meta_offset = *size;
+		*size += sizeof(struct kasan_free_meta);
+
+		/* If free meta doesn't fit, don't add it. */
+		if (*size > KMALLOC_MAX_SIZE) {
+			cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
+			*size = ok_size;
+		}
 	}
 
-	*flags |= SLAB_KASAN;
+	/* Calculate size with optimal redzone. */
+	optimal_size = cache->object_size + optimal_redzone(cache->object_size);
+	/* Limit it with KMALLOC_MAX_SIZE (relevant for SLAB only). */
+	if (optimal_size > KMALLOC_MAX_SIZE)
+		optimal_size = KMALLOC_MAX_SIZE;
+	/* Use optimal size if the size with added metas is not large enough. */
+	if (*size < optimal_size)
+		*size = optimal_size;
 }
 
 size_t __kasan_metadata_size(struct kmem_cache *cache)
@@ -187,15 +214,21 @@ size_t __kasan_metadata_size(struct kmem_cache *cache)
 struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
 					      const void *object)
 {
+	if (!cache->kasan_info.alloc_meta_offset)
+		return NULL;
 	return kasan_reset_tag(object) + cache->kasan_info.alloc_meta_offset;
 }
 
+#ifdef CONFIG_KASAN_GENERIC
 struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
 					    const void *object)
 {
 	BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
+	if (cache->kasan_info.free_meta_offset == KASAN_NO_FREE_META)
+		return NULL;
 	return kasan_reset_tag(object) + cache->kasan_info.free_meta_offset;
 }
+#endif
 
 void __kasan_poison_slab(struct page *page)
 {
@@ -272,11 +305,9 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
 	struct kasan_alloc_meta *alloc_meta;
 
 	if (kasan_stack_collection_enabled()) {
-		if (!(cache->flags & SLAB_KASAN))
-			return (void *)object;
-
 		alloc_meta = kasan_get_alloc_meta(cache, object);
-		__memset(alloc_meta, 0, sizeof(*alloc_meta));
+		if (alloc_meta)
+			__memset(alloc_meta, 0, sizeof(*alloc_meta));
 	}
 
 	/* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
@@ -318,15 +349,12 @@ static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
 	if (!kasan_stack_collection_enabled())
 		return false;
 
-	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
-			unlikely(!(cache->flags & SLAB_KASAN)))
+	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine))
 		return false;
 
 	kasan_set_free_info(cache, object, tag);
 
-	quarantine_put(cache, object);
-
-	return IS_ENABLED(CONFIG_KASAN_GENERIC);
+	return quarantine_put(cache, object);
 }
 
 bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
@@ -359,7 +387,11 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
 
 static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
 {
-	kasan_set_track(&kasan_get_alloc_meta(cache, object)->alloc_track, flags);
+	struct kasan_alloc_meta *alloc_meta;
+
+	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (alloc_meta)
+		kasan_set_track(&alloc_meta->alloc_track, flags);
 }
 
 static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
@@ -389,7 +421,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
 	poison_range((void *)redzone_start, redzone_end - redzone_start,
 		     KASAN_KMALLOC_REDZONE);
 
-	if (kasan_stack_collection_enabled() && (cache->flags & SLAB_KASAN))
+	if (kasan_stack_collection_enabled())
 		set_alloc_info(cache, (void *)object, flags);
 
 	return set_tag(object, tag);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 9c6b77f8c4a4..157df6c762a4 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -338,10 +338,10 @@ void kasan_record_aux_stack(void *addr)
 	cache = page->slab_cache;
 	object = nearest_obj(cache, page, addr);
 	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (!alloc_meta)
+		return;
 
-	/*
-	 * record the last two call_rcu() call stacks.
-	 */
+	/* Record the last two call_rcu() call stacks. */
 	alloc_meta->aux_stack[1] = alloc_meta->aux_stack[0];
 	alloc_meta->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
 }
@@ -352,11 +352,11 @@ void kasan_set_free_info(struct kmem_cache *cache,
 	struct kasan_free_meta *free_meta;
 
 	free_meta = kasan_get_free_meta(cache, object);
-	kasan_set_track(&free_meta->free_track, GFP_NOWAIT);
+	if (!free_meta)
+		return;
 
-	/*
-	 * the object was freed and has free track set
-	 */
+	kasan_set_track(&free_meta->free_track, GFP_NOWAIT);
+	/* The object was freed and has free track set. */
 	*(u8 *)kasan_mem_to_shadow(object) = KASAN_KMALLOC_FREETRACK;
 }
 
@@ -365,5 +365,6 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 {
 	if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_KMALLOC_FREETRACK)
 		return NULL;
+	/* Free meta must be present with KASAN_KMALLOC_FREETRACK. */
 	return &kasan_get_free_meta(cache, object)->free_track;
 }
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 30ce88935e9d..c91f2c06ecb5 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -187,7 +187,8 @@ void kasan_set_free_info(struct kmem_cache *cache,
 	struct kasan_alloc_meta *alloc_meta;
 
 	alloc_meta = kasan_get_alloc_meta(cache, object);
-	kasan_set_track(&alloc_meta->free_track[0], GFP_NOWAIT);
+	if (alloc_meta)
+		kasan_set_track(&alloc_meta->free_track[0], GFP_NOWAIT);
 }
 
 struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
@@ -196,5 +197,8 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 	struct kasan_alloc_meta *alloc_meta;
 
 	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (!alloc_meta)
+		return NULL;
+
 	return &alloc_meta->free_track[0];
 }
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index d01a5ac34f70..725a472e8ea7 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -156,20 +156,31 @@ struct kasan_alloc_meta {
 struct qlist_node {
 	struct qlist_node *next;
 };
+
+/*
+ * Generic mode either stores free meta in the object itself or in the redzone
+ * after the object. In the former case free meta offset is 0, in the latter
+ * case it has some sane value smaller than INT_MAX. Use INT_MAX as free meta
+ * offset when free meta isn't present.
+ */
+#define KASAN_NO_FREE_META INT_MAX
+
 struct kasan_free_meta {
+#ifdef CONFIG_KASAN_GENERIC
 	/* This field is used while the object is in the quarantine.
 	 * Otherwise it might be used for the allocator freelist.
 	 */
 	struct qlist_node quarantine_link;
-#ifdef CONFIG_KASAN_GENERIC
 	struct kasan_track free_track;
 #endif
 };
 
 struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
 						const void *object);
+#ifdef CONFIG_KASAN_GENERIC
 struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
 						const void *object);
+#endif
 
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
@@ -234,11 +245,11 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 
 #if defined(CONFIG_KASAN_GENERIC) && \
 	(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
-void quarantine_put(struct kmem_cache *cache, void *object);
+bool quarantine_put(struct kmem_cache *cache, void *object);
 void quarantine_reduce(void);
 void quarantine_remove_cache(struct kmem_cache *cache);
 #else
-static inline void quarantine_put(struct kmem_cache *cache, void *object) { }
+static inline bool quarantine_put(struct kmem_cache *cache, void *object) { return false; }
 static inline void quarantine_reduce(void) { }
 static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
 #endif
diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 0da3d37e1589..a598c3514e1a 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -135,7 +135,12 @@ static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
 	if (IS_ENABLED(CONFIG_SLAB))
 		local_irq_save(flags);
 
+	/*
+	 * As the object now gets freed from the quarantine, assume that its
+	 * free track is no longer valid.
+	 */
 	*(u8 *)kasan_mem_to_shadow(object) = KASAN_KMALLOC_FREE;
+
 	___cache_free(cache, object, _THIS_IP_);
 
 	if (IS_ENABLED(CONFIG_SLAB))
@@ -161,13 +166,20 @@ static void qlist_free_all(struct qlist_head *q, struct kmem_cache *cache)
 	qlist_init(q);
 }
 
-void quarantine_put(struct kmem_cache *cache, void *object)
+bool quarantine_put(struct kmem_cache *cache, void *object)
 {
 	unsigned long flags;
 	struct qlist_head *q;
 	struct qlist_head temp = QLIST_INIT;
 	struct kasan_free_meta *meta = kasan_get_free_meta(cache, object);
 
+	/*
+	 * If there's no metadata for this object, don't put it into
+	 * quarantine.
+	 */
+	if (!meta)
+		return false;
+
 	/*
 	 * Note: irq must be disabled until after we move the batch to the
 	 * global quarantine. Otherwise quarantine_remove_cache() can miss
@@ -200,6 +212,8 @@ void quarantine_put(struct kmem_cache *cache, void *object)
 	}
 
 	local_irq_restore(flags);
+
+	return true;
 }
 
 void quarantine_reduce(void)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index ffa6076b1710..8b6656d47983 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -168,32 +168,35 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
 static void describe_object_stacks(struct kmem_cache *cache, void *object,
 					const void *addr, u8 tag)
 {
-	struct kasan_alloc_meta *alloc_meta = kasan_get_alloc_meta(cache, object);
-
-	if (cache->flags & SLAB_KASAN) {
-		struct kasan_track *free_track;
+	struct kasan_alloc_meta *alloc_meta;
+	struct kasan_track *free_track;
 
+	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (alloc_meta) {
 		print_track(&alloc_meta->alloc_track, "Allocated");
 		pr_err("\n");
-		free_track = kasan_get_free_track(cache, object, tag);
-		if (free_track) {
-			print_track(free_track, "Freed");
-			pr_err("\n");
-		}
+	}
+
+	free_track = kasan_get_free_track(cache, object, tag);
+	if (free_track) {
+		print_track(free_track, "Freed");
+		pr_err("\n");
+	}
 
 #ifdef CONFIG_KASAN_GENERIC
-		if (alloc_meta->aux_stack[0]) {
-			pr_err("Last call_rcu():\n");
-			print_stack(alloc_meta->aux_stack[0]);
-			pr_err("\n");
-		}
-		if (alloc_meta->aux_stack[1]) {
-			pr_err("Second to last call_rcu():\n");
-			print_stack(alloc_meta->aux_stack[1]);
-			pr_err("\n");
-		}
-#endif
+	if (!alloc_meta)
+		return;
+	if (alloc_meta->aux_stack[0]) {
+		pr_err("Last call_rcu():\n");
+		print_stack(alloc_meta->aux_stack[0]);
+		pr_err("\n");
 	}
+	if (alloc_meta->aux_stack[1]) {
+		pr_err("Second to last call_rcu():\n");
+		print_stack(alloc_meta->aux_stack[1]);
+		pr_err("\n");
+	}
+#endif
 }
 
 static void describe_object(struct kmem_cache *cache, void *object,
diff --git a/mm/kasan/report_sw_tags.c b/mm/kasan/report_sw_tags.c
index 7604b46239d4..1b026793ad57 100644
--- a/mm/kasan/report_sw_tags.c
+++ b/mm/kasan/report_sw_tags.c
@@ -48,9 +48,12 @@ const char *get_bug_type(struct kasan_access_info *info)
 		object = nearest_obj(cache, page, (void *)addr);
 		alloc_meta = kasan_get_alloc_meta(cache, object);
 
-		for (i = 0; i < KASAN_NR_FREE_STACKS; i++)
-			if (alloc_meta->free_pointer_tag[i] == tag)
-				return "use-after-free";
+		if (alloc_meta) {
+			for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {
+				if (alloc_meta->free_pointer_tag[i] == tag)
+					return "use-after-free";
+			}
+		}
 
 		return "out-of-bounds";
 	}
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index e17de2619bbf..5dcd830805b2 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -170,6 +170,8 @@ void kasan_set_free_info(struct kmem_cache *cache,
 	u8 idx = 0;
 
 	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (!alloc_meta)
+		return;
 
 #ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
 	idx = alloc_meta->free_track_idx;
@@ -187,6 +189,8 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 	int i = 0;
 
 	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (!alloc_meta)
+		return NULL;
 
 #ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
 	for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {