From patchwork Tue Aug 30 21:49:04 2022
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 12959914
Date: Tue, 30 Aug 2022 14:49:04 -0700
In-Reply-To: <20220830214919.53220-1-surenb@google.com>
References: <20220830214919.53220-1-surenb@google.com>
X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog
Message-ID: <20220830214919.53220-16-surenb@google.com>
Subject: [RFC PATCH 15/30] lib: introduce slab allocation tagging
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz,
	hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de,
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
	void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com,
	ldufour@linux.ibm.com, peterx@redhat.com, david@redhat.com,
	axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org,
	nathan@kernel.org, changbin.du@intel.com, ytcoode@gmail.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com,
	vschneid@redhat.com, cl@linux.com, penberg@kernel.org,
	iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com,
	elver@google.com, dvyukov@google.com, shakeelb@google.com,
	songmuchun@bytedance.com, arnd@arndb.de, jbaron@akamai.com,
	rientjes@google.com, minchan@google.com, kaleshsingh@google.com,
	surenb@google.com, kernel-team@android.com, linux-mm@kvack.org,
	iommu@lists.linux.dev, kasan-dev@googlegroups.com,
	io-uring@vger.kernel.org, linux-arch@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-modules@vger.kernel.org, linux-kernel@vger.kernel.org

Introduce CONFIG_SLAB_ALLOC_TAGGING, which provides helper functions to
easily instrument slab allocators, and add a codetag_ref field to
slabobj_ext to store a pointer to the allocation tag associated with
the code that allocated the slab object.
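
To illustrate the intended use of these helpers, here is a minimal
sketch of an instrumented allocation path. The *_tagged wrappers below
are hypothetical and only for exposition; the actual hook-up of kmalloc
and friends is done in a later patch of this series:

  /*
   * Hypothetical wrappers: account the object against the allocating
   * code location on allocation, drop the accounting again on free.
   */
  void *kmalloc_tagged(size_t size, gfp_t flags)
  {
  	void *p = __kmalloc(size, flags);

  	/* fresh allocation, so there is no previous object: pass NULL */
  	slab_tag_add(NULL, p);
  	return p;
  }

  void kfree_tagged(const void *p)
  {
  	slab_tag_dec(p);	/* subtract the object size from its tag */
  	kfree(p);
  }

slab_tag_add() takes an (old, new) pair so that a krealloc()-style
caller which gets the same object back does not account it twice, and
both helpers ignore NULL and ZERO_SIZE_PTR, mirroring kfree()
semantics. slab_tag_add() is presumably a macro rather than an inline
function so that alloc_tag_add() expands at the call site and the
allocation is attributed to that code location.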
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
---
 include/linux/memcontrol.h |  5 +++++
 include/linux/slab.h       | 25 +++++++++++++++++++++++++
 include/linux/slab_def.h   |  2 +-
 include/linux/slub_def.h   |  4 ++--
 lib/Kconfig.debug          | 11 +++++++++++
 mm/slab_common.c           | 33 +++++++++++++++++++++++++++++++++
 6 files changed, 77 insertions(+), 3 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 315399f77173..97c0153f0247 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -232,7 +232,12 @@ struct obj_cgroup {
  * if MEMCG_DATA_OBJEXTS is set.
  */
 struct slabobj_ext {
+#ifdef CONFIG_MEMCG_KMEM
 	struct obj_cgroup *objcg;
+#endif
+#ifdef CONFIG_SLAB_ALLOC_TAGGING
+	union codetag_ref ref;
+#endif
 } __aligned(8);
 
 /*
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 55ae3ea864a4..5a198aa02a08 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -438,6 +438,31 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 #define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */
 
+#ifdef CONFIG_SLAB_ALLOC_TAGGING
+
+#include <linux/alloc_tag.h>
+
+union codetag_ref *get_slab_tag_ref(const void *objp);
+
+#define slab_tag_add(_old, _new)					\
+do {									\
+	if (!ZERO_OR_NULL_PTR(_new) && _old != _new)			\
+		alloc_tag_add(get_slab_tag_ref(_new), __ksize(_new));	\
+} while (0)
+
+static inline void slab_tag_dec(const void *ptr)
+{
+	if (!ZERO_OR_NULL_PTR(ptr))
+		alloc_tag_sub(get_slab_tag_ref(ptr), __ksize(ptr));
+}
+
+#else
+
+#define slab_tag_add(_old, _new) do {} while (0)
+static inline void slab_tag_dec(const void *ptr) {}
+
+#endif
+
 void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
 void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc;
 void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index e24c9aff6fed..25feb5f7dc32 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -106,7 +106,7 @@ static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *sla
  * reciprocal_divide(offset, cache->reciprocal_buffer_size)
  */
 static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct slab *slab, void *obj)
+					const struct slab *slab, const void *obj)
 {
 	u32 offset = (obj - slab->s_mem);
 	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
 }
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index f9c68a9dac04..940c146768d4 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -170,14 +170,14 @@ static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *sla
 
 /* Determine object index from a given position */
 static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
-					  void *addr, void *obj)
+					  void *addr, const void *obj)
 {
 	return reciprocal_divide(kasan_reset_tag(obj) - addr,
 				 cache->reciprocal_size);
 }
 
 static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct slab *slab, void *obj)
+					const struct slab *slab, const void *obj)
 {
 	if (is_kfence_address(obj))
 		return 0;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 6686648843b3..08c97a978906 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -989,6 +989,17 @@ config PAGE_ALLOC_TAGGING
 	  initiated at that code location. The mechanism can be used to track
 	  memory leaks with a low performance impact.
 
+config SLAB_ALLOC_TAGGING
+	bool "Enable slab allocation tagging"
+	default n
+	select ALLOC_TAGGING
+	select SLAB_OBJ_EXT
+	help
+	  Instrument slab allocators to track allocation source code and
+	  collect statistics on the number of allocations and their total size
+	  initiated at that code location. The mechanism can be used to track
+	  memory leaks with a low performance impact.
+
 source "lib/Kconfig.kasan"
 source "lib/Kconfig.kfence"
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 17996649cfe3..272eda62ecaa 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -202,6 +202,39 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 	return NULL;
 }
 
+#ifdef CONFIG_SLAB_ALLOC_TAGGING
+
+union codetag_ref *get_slab_tag_ref(const void *objp)
+{
+	struct slabobj_ext *obj_exts;
+	union codetag_ref *res = NULL;
+	struct slab *slab;
+	unsigned int off;
+
+	slab = virt_to_slab(objp);
+	/*
+	 * We could be given a kmalloc_large() object, skip those. They use
+	 * alloc_pages and can be tracked by page allocation tracking.
+	 */
+	if (!slab)
+		goto out;
+
+	obj_exts = slab_obj_exts(slab);
+	if (!obj_exts)
+		goto out;
+
+	if (!slab->slab_cache)
+		goto out;
+
+	off = obj_to_index(slab->slab_cache, slab, objp);
+	res = &obj_exts[off].ref;
+out:
+	return res;
+}
+EXPORT_SYMBOL(get_slab_tag_ref);
+
+#endif /* CONFIG_SLAB_ALLOC_TAGGING */
+
 static struct kmem_cache *create_cache(const char *name,
 		unsigned int object_size, unsigned int align,
 		slab_flags_t flags, unsigned int useroffset,