From patchwork Tue Dec 14 16:20:23 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676361
Date: Tue, 14 Dec 2021 17:20:23 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-17-glider@google.com>
References: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 16/43] kmsan: mm: call KMSAN hooks from SLUB code
From: Alexander Potapenko <glider@google.com>
To: glider@google.com
Cc: Alexander Viro, Andrew Morton, Andrey Konovalov, Andy Lutomirski,
    Ard Biesheuvel, Arnd Bergmann, Borislav Petkov, Christoph Hellwig,
    Christoph Lameter, David Rientjes, Dmitry Vyukov, Eric Dumazet,
    Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich, Ingo Molnar,
    Jens Axboe, Joonsoo Kim, Kees Cook, Marco Elver, Matthew Wilcox,
    "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek,
    Steven Rostedt, Thomas Gleixner, Vasily Gorbik, Vegard Nossum,
    Vlastimil Babka, linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-kernel@vger.kernel.org

In order to report uninitialized memory coming from heap allocations,
KMSAN has to poison them unless they're created with __GFP_ZERO.

Conveniently, the KMSAN hooks are needed in the same places where
init_on_alloc/init_on_free initialization is performed.

Signed-off-by: Alexander Potapenko <glider@google.com>

---
Link: https://linux-review.googlesource.com/id/I6954b386c5c5d7f99f48bb6cbcc74b75136ce86e
---
 mm/slab.h |  1 +
 mm/slub.c | 26 +++++++++++++++++++++++---
 2 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 56ad7eea3ddfb..6175a74047b47 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -521,6 +521,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 			memset(p[i], 0, s->object_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
+		kmsan_slab_alloc(s, p[i], flags);
 	}
 
 	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
diff --git a/mm/slub.c b/mm/slub.c
index abe7db581d686..5a63486e52531 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -22,6 +22,7 @@
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
 #include <linux/kasan.h>
+#include <linux/kmsan.h>
 #include <linux/cpu.h>
 #include <linux/cpuset.h>
 #include <linux/mempolicy.h>
@@ -346,10 +347,13 @@ static inline void *freelist_dereference(const struct kmem_cache *s,
 					    (unsigned long)ptr_addr);
 }
 
+/*
+ * See the comment to get_freepointer_safe().
+ */
 static inline void *get_freepointer(struct kmem_cache *s, void *object)
 {
 	object = kasan_reset_tag(object);
-	return freelist_dereference(s, object + s->offset);
+	return kmsan_init(freelist_dereference(s, object + s->offset));
 }
 
 static void prefetch_freepointer(const struct kmem_cache *s, void *object)
@@ -357,18 +361,28 @@ static void prefetch_freepointer(const struct kmem_cache *s, void *object)
 	prefetchw(object + s->offset);
 }
 
+/*
+ * When running under KMSAN, get_freepointer_safe() may return an uninitialized
+ * pointer value in the case the current thread loses the race for the next
+ * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in
+ * slab_alloc_node() will fail, so the uninitialized value won't be used, but
+ * KMSAN will still check all arguments of cmpxchg because of imperfect
+ * handling of inline assembly.
+ * To work around this problem, use kmsan_init() to force initialize the
+ * return value of get_freepointer_safe().
+ */
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
 {
 	unsigned long freepointer_addr;
 	void *p;
 
 	if (!debug_pagealloc_enabled_static())
-		return get_freepointer(s, object);
+		return kmsan_init(get_freepointer(s, object));
 
 	object = kasan_reset_tag(object);
 	freepointer_addr = (unsigned long)object + s->offset;
 	copy_from_kernel_nofault(&p, (void **)freepointer_addr, sizeof(p));
-	return freelist_ptr(s, p, freepointer_addr);
+	return kmsan_init(freelist_ptr(s, p, freepointer_addr));
 }
 
 static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
@@ -1678,6 +1692,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
+	kmsan_kmalloc_large(ptr, size, flags);
 	return ptr;
 }
 
@@ -1685,12 +1700,14 @@ static __always_inline void kfree_hook(void *x)
 {
 	kmemleak_free(x);
 	kasan_kfree_large(x);
+	kmsan_kfree_large(x);
 }
 
 static __always_inline bool slab_free_hook(struct kmem_cache *s,
 						void *x, bool init)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kmsan_slab_free(s, x);
 
 	debug_check_no_locks_freed(x, s->object_size);
 
@@ -3729,6 +3746,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
 				slab_want_init_on_alloc(flags, s));
+
 	return i;
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
@@ -5905,6 +5923,7 @@ static char *create_unique_id(struct kmem_cache *s)
 	p += sprintf(p, "%07u", s->size);
 
 	BUG_ON(p > name + ID_STR_LENGTH - 1);
+	kmsan_unpoison_memory(name, p - name);
 	return name;
 }
 
@@ -6006,6 +6025,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
 	al->name = name;
 	al->next = alias_list;
 	alias_list = al;
+	kmsan_unpoison_memory(al, sizeof(struct saved_alias));
 	return 0;
 }
 
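
Note for reviewers (illustration only, not part of the patch): the sketch below shows
the behaviour these hooks are meant to enable, assuming a KMSAN-enabled kernel with
this series applied. It uses only the standard slab/module API; the module and
function names (kmsan_slub_demo_*) are made up for this example.

/*
 * Illustration only -- not part of this patch. With kmsan_slab_alloc()
 * called from slab_post_alloc_hook(), a plain kmalloc() object is
 * poisoned until written, while a __GFP_ZERO allocation is treated as
 * initialized; kmsan_slab_free() is expected to poison freed objects.
 */
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/slab.h>

static int __init kmsan_slub_demo_init(void)
{
	int *a = kmalloc(sizeof(*a), GFP_KERNEL); /* poisoned on allocation */
	int *b = kzalloc(sizeof(*b), GFP_KERNEL); /* __GFP_ZERO: not poisoned */

	if (!a || !b)
		goto out;

	pr_info("b = %d\n", *b); /* fine: zero-initialized */
	pr_info("a = %d\n", *a); /* KMSAN: use of uninitialized value */
out:
	kfree(a); /* freed object is poisoned again via kmsan_slab_free() */
	kfree(b);
	return 0;
}

static void __exit kmsan_slub_demo_exit(void)
{
}

module_init(kmsan_slub_demo_init);
module_exit(kmsan_slub_demo_exit);
MODULE_LICENSE("GPL");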