From patchwork Wed Dec 9 01:12:59 2020
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 11960129
From: paulmck@kernel.org
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, mingo@kernel.org, jiangshanlai@gmail.com,
    akpm@linux-foundation.org, mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
    tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org, dhowells@redhat.com,
    edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com, joel@joelfernandes.org,
    iamjoonsoo.kim@lge.com, andrii@kernel.org, "Paul E. McKenney" <paulmck@kernel.org>,
    Christoph Lameter, Pekka Enberg, David Rientjes, linux-mm@kvack.org
Subject: [PATCH v2 sl-b 1/5] mm: Add mem_dump_obj() to print source of memory block
Date: Tue, 8 Dec 2020 17:12:59 -0800
Message-Id: <20201209011303.32737-1-paulmck@kernel.org>
In-Reply-To: <20201209011124.GA31164@paulmck-ThinkPad-P72>
References: <20201209011124.GA31164@paulmck-ThinkPad-P72>

From: "Paul E. McKenney" <paulmck@kernel.org>
Some kernel facilities, such as per-CPU reference counts, report errors from generic
handlers or callbacks, so their error messages are unenlightening. In the case of
per-CPU reference-count underflow, this is not a problem when creating a new use of
this facility, because in that case the bug is almost certainly in the code implementing
that new use. However, trouble arises when deploying across many systems, which might
exercise corner cases that were not seen during development and testing. Here, it would
be really nice to get some kind of hint as to which of several uses caused the underflow.

This commit therefore exposes a mem_dump_obj() function that takes a pointer to memory
(which must still be allocated if it has been dynamically allocated) and prints available
information on where that memory came from. This pointer can reference the middle of the
block as well as the beginning of the block, as needed by things like RCU callback
functions and timer handlers that might not know where the beginning of the memory block
is. These functions and handlers can use mem_dump_obj() to print out better hints as to
where the problem might lie.

The information printed can depend on kernel configuration. For example, the allocation
return address can be printed only for slab and slub, and even then only when the
necessary debug has been enabled. For slab, build with CONFIG_DEBUG_SLAB=y, and either
use sizes with ample space to the next power of two or pass the SLAB_STORE_USER flag
when creating the kmem_cache structure. For slub, build with CONFIG_SLUB_DEBUG=y and
boot with slub_debug=U, or pass SLAB_STORE_USER to kmem_cache_create() if more focused
use is desired. Also for slub, enable CONFIG_STACKTRACE to allow printing of the
allocation-time stack trace.

Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Reported-by: Andrii Nakryiko
[ paulmck: Convert to printing and change names per Joonsoo Kim. ]
[ paulmck: Move slab definition per Stephen Rothwell and kbuild test robot. ]
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
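As a rough usage sketch (not part of the patch itself): a callback prints its own preamble with pr_err() and then lets mem_dump_obj() finish the line via pr_cont(). The struct dummy_obj and dummy_cb() names below are invented for illustration; mem_dump_obj(), pr_err(), and container_of() are the real kernel APIs.

	/* Illustrative sketch only; dummy_obj and dummy_cb() are hypothetical. */
	#include <linux/kernel.h>
	#include <linux/mm.h>
	#include <linux/printk.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct dummy_obj {
		struct rcu_head rh;
		int state;
	};

	static void dummy_cb(struct rcu_head *rhp)
	{
		struct dummy_obj *dp = container_of(rhp, struct dummy_obj, rh);

		if (WARN_ON_ONCE(dp->state != 1)) {
			/* No trailing newline: mem_dump_obj() continues this line. */
			pr_err("dummy_cb(): unexpected state %d for", dp->state);
			mem_dump_obj(dp); /* e.g. " slab kmalloc-32 allocated at ..." when debug is enabled */
			return;
		}
		kfree(dp);
	}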
---
 include/linux/mm.h   |  2 ++
 include/linux/slab.h |  2 ++
 mm/slab.c            | 28 +++++++++++++++++++++
 mm/slab.h            | 11 +++++++++
 mm/slab_common.c     | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/slob.c            |  7 ++++++
 mm/slub.c            | 40 ++++++++++++++++++++++++++++++
 mm/util.c            | 25 +++++++++++++++++++
 8 files changed, 184 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef360fe..1eea266 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3153,5 +3153,7 @@ unsigned long wp_shared_mapping_range(struct address_space *mapping,

 extern int sysctl_nr_trim_pages;

+void mem_dump_obj(void *object);
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_MM_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index dd6897f..169b511 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -186,6 +186,8 @@ void kfree(const void *);
 void kfree_sensitive(const void *);
 size_t __ksize(const void *);
 size_t ksize(const void *);
+bool kmem_valid_obj(void *object);
+void kmem_dump_obj(void *object);

 #ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
 void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
diff --git a/mm/slab.c b/mm/slab.c
index b111356..72b6743 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3602,6 +3602,34 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif

+void kmem_provenance(struct kmem_provenance *kpp)
+{
+#ifdef DEBUG
+	struct kmem_cache *cachep;
+	void *object = kpp->kp_ptr;
+	unsigned int objnr;
+	void *objp;
+	struct page *page = kpp->kp_page;
+
+	cachep = page->slab_cache;
+	if (!(cachep->flags & SLAB_STORE_USER)) {
+		kpp->kp_ret = NULL;
+		goto nodebug;
+	}
+	objp = object - obj_offset(cachep);
+	page = virt_to_head_page(objp);
+	objnr = obj_to_index(cachep, page, objp);
+	objp = index_to_obj(cachep, page, objnr);
+	kpp->kp_objp = objp;
+	kpp->kp_ret = *dbg_userword(cachep, objp);
+nodebug:
+#else
+	kpp->kp_ret = NULL;
+#endif
+	if (kpp->kp_nstack)
+		kpp->kp_stack[0] = NULL;
+}
+
 static __always_inline void *
 __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 {
diff --git a/mm/slab.h b/mm/slab.h
index 6d7c6a5..28a41d5 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -630,4 +630,15 @@ static inline bool slab_want_init_on_free(struct kmem_cache *c)
 	return false;
 }

+#define KS_ADDRS_COUNT 16
+struct kmem_provenance {
+	void *kp_ptr;
+	struct page *kp_page;
+	void *kp_objp;
+	void *kp_ret;
+	void *kp_stack[KS_ADDRS_COUNT];
+	int kp_nstack;
+};
+void kmem_provenance(struct kmem_provenance *kpp);
+
 #endif /* MM_SLAB_H */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f9ccd5d..09f0cbc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -536,6 +536,75 @@ bool slab_is_available(void)
 	return slab_state >= UP;
 }

+/**
+ * kmem_valid_obj - does the pointer reference a valid slab object?
+ * @object: pointer to query.
+ *
+ * Return: %true if the pointer is to a not-yet-freed object from
+ * kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
+ * is to an already-freed object, and %false otherwise.
+ */
+bool kmem_valid_obj(void *object)
+{
+	struct page *page;
+
+	if (!virt_addr_valid(object))
+		return false;
+	page = virt_to_head_page(object);
+	return PageSlab(page);
+}
+EXPORT_SYMBOL_GPL(kmem_valid_obj);
+
+/**
+ * kmem_dump_obj - Print available slab provenance information
+ * @object: slab object for which to find provenance information.
+ *
+ * This function uses pr_cont(), so that the caller is expected to have
+ * printed out whatever preamble is appropriate.  The provenance information
+ * depends on the type of object and on how much debugging is enabled.
+ * For a slab-cache object, the fact that it is a slab object is printed,
+ * and, if available, the slab name, return address, and stack trace from
+ * the allocation of that object.
+ *
+ * This function will splat if passed a pointer to a non-slab object.
+ * If you are not sure what type of object you have, you should instead
+ * use mem_dump_obj().
+ */
+void kmem_dump_obj(void *object)
+{
+	int i;
+	struct page *page;
+	struct kmem_provenance kp;
+
+	if (WARN_ON_ONCE(!virt_addr_valid(object)))
+		return;
+	page = virt_to_head_page(object);
+	if (WARN_ON_ONCE(!PageSlab(page))) {
+		pr_cont(" non-slab memory.\n");
+		return;
+	}
+	kp.kp_ptr = object;
+	kp.kp_page = page;
+	kp.kp_nstack = KS_ADDRS_COUNT;
+	kmem_provenance(&kp);
+	if (page->slab_cache)
+		pr_cont(" slab %s", page->slab_cache->name);
+	else
+		pr_cont(" slab ");
+	if (kp.kp_ret)
+		pr_cont(" allocated at %pS\n", kp.kp_ret);
+	else
+		pr_cont("\n");
+	if (kp.kp_stack[0]) {
+		for (i = 0; i < ARRAY_SIZE(kp.kp_stack); i++) {
+			if (!kp.kp_stack[i])
+				break;
+			pr_info(" %pS\n", kp.kp_stack[i]);
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(kmem_dump_obj);
+
 #ifndef CONFIG_SLOB
 /* Create a cache during boot when no slab services are available yet */
 void __init create_boot_cache(struct kmem_cache *s, const char *name,
diff --git a/mm/slob.c b/mm/slob.c
index 7cc9805..fb10493 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -461,6 +461,13 @@ static void slob_free(void *block, int size)
 	spin_unlock_irqrestore(&slob_lock, flags);
 }

+void kmem_provenance(struct kmem_provenance *kpp)
+{
+	kpp->kp_ret = NULL;
+	if (kpp->kp_nstack)
+		kpp->kp_stack[0] = NULL;
+}
+
 /*
  * End of slob allocator proper. Begin kmem_cache_alloc and kmalloc frontend.
  */
diff --git a/mm/slub.c b/mm/slub.c
index b30be23..027fe0f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3918,6 +3918,46 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
 	return 0;
 }

+void kmem_provenance(struct kmem_provenance *kpp)
+{
+#ifdef CONFIG_SLUB_DEBUG
+	void *base;
+	int i;
+	void *object = kpp->kp_ptr;
+	unsigned int objnr;
+	void *objp;
+	struct page *page = kpp->kp_page;
+	struct kmem_cache *s = page->slab_cache;
+	struct track *trackp;
+
+	base = page_address(page);
+	objp = kasan_reset_tag(object);
+	objp = restore_red_left(s, objp);
+	objnr = obj_to_index(s, page, objp);
+	objp = base + s->size * objnr;
+	kpp->kp_objp = objp;
+	if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
+	    !(s->flags & SLAB_STORE_USER))
+		goto nodebug;
+	trackp = get_track(s, objp, TRACK_ALLOC);
+	kpp->kp_ret = (void *)trackp->addr;
+#ifdef CONFIG_STACKTRACE
+	for (i = 0; i < kpp->kp_nstack && i < TRACK_ADDRS_COUNT; i++) {
+		kpp->kp_stack[i] = (void *)trackp->addrs[i];
+		if (!kpp->kp_stack[i])
+			break;
+	}
+#endif
+	if (kpp->kp_stack && i < kpp->kp_nstack)
+		kpp->kp_stack[i] = NULL;
+	return;
+nodebug:
+#endif
+	kpp->kp_ret = NULL;
+	if (kpp->kp_nstack)
+		kpp->kp_stack[0] = NULL;
+}
+
 /********************************************************************
  *		Kmalloc subsystem
  *******************************************************************/
diff --git a/mm/util.c b/mm/util.c
index 4ddb6e1..d0e60d2 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -970,3 +970,28 @@ int __weak memcmp_pages(struct page *page1, struct page *page2)
 	kunmap_atomic(addr1);
 	return ret;
 }
+
+/**
+ * mem_dump_obj - Print available provenance information
+ * @object: object for which to find provenance information.
+ *
+ * This function uses pr_cont(), so that the caller is expected to have
+ * printed out whatever preamble is appropriate.  The provenance information
+ * depends on the type of object and on how much debugging is enabled.
+ * For example, for a slab-cache object, the slab name is printed, and,
+ * if available, the return address and stack trace from the allocation
+ * of that object.
+ */
+void mem_dump_obj(void *object)
+{
+	if (!virt_addr_valid(object)) {
+		pr_cont(" non-paged (local) memory.\n");
+		return;
+	}
+	if (kmem_valid_obj(object)) {
+		kmem_dump_obj(object);
+		return;
+	}
+	pr_cont(" non-slab memory.\n");
+}
+EXPORT_SYMBOL_GPL(mem_dump_obj);
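The commit log above notes that the allocation return address is reported only when allocation tracking is enabled. As a hedged sketch of one way to opt a single cache into that tracking without booting with slub_debug=U, SLAB_STORE_USER can be passed at cache-creation time; the foo structure and cache name are invented for this example.

	#include <linux/errno.h>
	#include <linux/slab.h>

	struct foo {
		int a;
		long b;
	};

	static struct kmem_cache *foo_cache;

	static int __init foo_cache_init(void)
	{
		/*
		 * SLAB_STORE_USER asks the allocator to record the allocation
		 * caller (with CONFIG_SLUB_DEBUG=y for slub, or CONFIG_DEBUG_SLAB=y
		 * for slab), which kmem_dump_obj()/mem_dump_obj() can then report.
		 */
		foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
					      0, SLAB_STORE_USER, NULL);
		return foo_cache ? 0 : -ENOMEM;
	}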
From patchwork Wed Dec 9 01:13:00 2020
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 11960131
From: paulmck@kernel.org
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, mingo@kernel.org, jiangshanlai@gmail.com,
    akpm@linux-foundation.org, mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
    tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org, dhowells@redhat.com,
    edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com, joel@joelfernandes.org,
    iamjoonsoo.kim@lge.com, andrii@kernel.org, "Paul E. McKenney" <paulmck@kernel.org>,
    Christoph Lameter, Pekka Enberg, David Rientjes, linux-mm@kvack.org
Subject: [PATCH v2 sl-b 2/5] mm: Make mem_dump_obj() handle NULL and zero-sized pointers
Date: Tue, 8 Dec 2020 17:13:00 -0800
Message-Id: <20201209011303.32737-2-paulmck@kernel.org>
In-Reply-To: <20201209011124.GA31164@paulmck-ThinkPad-P72>
References: <20201209011124.GA31164@paulmck-ThinkPad-P72>

From: "Paul E. McKenney" <paulmck@kernel.org>

This commit makes mem_dump_obj() call out NULL and zero-sized pointers specially instead
of classifying them as non-paged memory.
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Reported-by: Andrii Nakryiko
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Acked-by: Vlastimil Babka
---
 mm/util.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/util.c b/mm/util.c
index d0e60d2..8c2449f 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -985,7 +985,12 @@ int __weak memcmp_pages(struct page *page1, struct page *page2)
 void mem_dump_obj(void *object)
 {
 	if (!virt_addr_valid(object)) {
-		pr_cont(" non-paged (local) memory.\n");
+		if (object == NULL)
+			pr_cont(" NULL pointer.\n");
+		else if (object == ZERO_SIZE_PTR)
+			pr_cont(" zero-size pointer.\n");
+		else
+			pr_cont(" non-paged (local) memory.\n");
 		return;
 	}
 	if (kmem_valid_obj(object)) {
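A hedged sketch of what a caller sees for the two new special cases (the report() helper and its preamble are invented for this example; kmalloc(0, GFP_KERNEL) does return ZERO_SIZE_PTR):

	#include <linux/mm.h>
	#include <linux/printk.h>
	#include <linux/slab.h>

	static void report(void *p)
	{
		pr_err("mystery object %p:", p);  /* preamble, no newline */
		mem_dump_obj(p);                  /* completes the line */
	}

	/*
	 * report(NULL)                   -> "... NULL pointer."
	 * report(kmalloc(0, GFP_KERNEL)) -> "... zero-size pointer."
	 */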
From patchwork Wed Dec 9 01:13:01 2020
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 11960135
From: paulmck@kernel.org
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, mingo@kernel.org, jiangshanlai@gmail.com,
    akpm@linux-foundation.org, mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
    tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org, dhowells@redhat.com,
    edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com, joel@joelfernandes.org,
    iamjoonsoo.kim@lge.com, andrii@kernel.org, "Paul E. McKenney" <paulmck@kernel.org>,
    linux-mm@kvack.org
Subject: [PATCH v2 sl-b 3/5] mm: Make mem_dump_obj() handle vmalloc() memory
Date: Tue, 8 Dec 2020 17:13:01 -0800
Message-Id: <20201209011303.32737-3-paulmck@kernel.org>
In-Reply-To: <20201209011124.GA31164@paulmck-ThinkPad-P72>
References: <20201209011124.GA31164@paulmck-ThinkPad-P72>

From: "Paul E. McKenney" <paulmck@kernel.org>

This commit adds vmalloc() support to mem_dump_obj(). Note that the vmalloc_dump_obj()
function combines the checking and dumping, in contrast with the split between
kmem_valid_obj() and kmem_dump_obj(). The reason for the difference is that the checking
in the vmalloc() case involves acquiring a global lock, and redundant acquisitions of
global locks should be avoided, even on not-so-fast paths.

Note that this change causes on-stack variables to be reported as vmalloc() storage from
kernel_clone() or similar, depending on the degree of inlining that your compiler does.
This is likely more helpful than the earlier "non-paged (local) memory".

Cc: Andrew Morton
Cc: Joonsoo Kim
Reported-by: Andrii Nakryiko
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 include/linux/vmalloc.h |  6 ++++++
 mm/util.c               | 12 +++++++-----
 mm/vmalloc.c            | 12 ++++++++++++
 3 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 938eaf9..c89c2be 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -248,4 +248,10 @@ pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 int register_vmap_purge_notifier(struct notifier_block *nb);
 int unregister_vmap_purge_notifier(struct notifier_block *nb);

+#ifdef CONFIG_MMU
+bool vmalloc_dump_obj(void *object);
+#else
+static inline bool vmalloc_dump_obj(void *object) { return false; }
+#endif
+
 #endif /* _LINUX_VMALLOC_H */
diff --git a/mm/util.c b/mm/util.c
index 8c2449f..ee99a0a 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -984,6 +984,12 @@ int __weak memcmp_pages(struct page *page1, struct page *page2)
  */
 void mem_dump_obj(void *object)
 {
+	if (kmem_valid_obj(object)) {
+		kmem_dump_obj(object);
+		return;
+	}
+	if (vmalloc_dump_obj(object))
+		return;
 	if (!virt_addr_valid(object)) {
 		if (object == NULL)
 			pr_cont(" NULL pointer.\n");
@@ -993,10 +999,6 @@ void mem_dump_obj(void *object)
 			pr_cont(" non-paged (local) memory.\n");
 		return;
 	}
-	if (kmem_valid_obj(object)) {
-		kmem_dump_obj(object);
-		return;
-	}
-	pr_cont(" non-slab memory.\n");
+	pr_cont(" non-slab/vmalloc memory.\n");
 }
 EXPORT_SYMBOL_GPL(mem_dump_obj);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6ae491a..7421719 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3431,6 +3431,18 @@ void pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 }
 #endif	/* CONFIG_SMP */

+bool vmalloc_dump_obj(void *object)
+{
+	struct vm_struct *vm;
+	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
+
+	vm = find_vm_area(objp);
+	if (!vm)
+		return false;
+	pr_cont(" vmalloc allocated at %pS\n", vm->caller);
+	return true;
+}
+
 #ifdef CONFIG_PROC_FS
 static void *s_start(struct seq_file *m, loff_t *pos)
 	__acquires(&vmap_purge_lock)
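A hedged sketch of this patch's effect on a vmalloc()ed block; the demo function name is invented, and the symbol reported after "allocated at" will be whichever caller vmalloc() recorded:

	#include <linux/mm.h>
	#include <linux/printk.h>
	#include <linux/vmalloc.h>

	static void vmalloc_dump_demo(void)
	{
		void *p = vmalloc(4 * PAGE_SIZE);

		if (!p)
			return;
		pr_err("vmalloc test object %p:", p);
		/* With this patch, appends something like
		 * " vmalloc allocated at vmalloc_dump_demo+0x10/0x60". */
		mem_dump_obj(p);
		vfree(p);
	}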
From patchwork Wed Dec 9 01:13:02 2020
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 11960133
From: paulmck@kernel.org
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, mingo@kernel.org, jiangshanlai@gmail.com,
    akpm@linux-foundation.org, mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
    tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org, dhowells@redhat.com,
    edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com, joel@joelfernandes.org,
    iamjoonsoo.kim@lge.com, andrii@kernel.org, "Paul E. McKenney" <paulmck@kernel.org>,
    Christoph Lameter, Pekka Enberg, David Rientjes, linux-mm@kvack.org
Subject: [PATCH v2 sl-b 4/5] rcu: Make call_rcu() print mem_dump_obj() info for double-freed callback
Date: Tue, 8 Dec 2020 17:13:02 -0800
Message-Id: <20201209011303.32737-4-paulmck@kernel.org>
In-Reply-To: <20201209011124.GA31164@paulmck-ThinkPad-P72>
References: <20201209011124.GA31164@paulmck-ThinkPad-P72>

From: "Paul E. McKenney" <paulmck@kernel.org>

The debug-object double-free checks in __call_rcu() print out the RCU callback function,
which is usually sufficient to track down the double free.
However, all uses of things like queue_rcu_work() will have the same RCU callback function
(rcu_work_rcufn() in this case), so a diagnostic message for a double queue_rcu_work()
needs more than just the callback function. This commit therefore calls mem_dump_obj()
to dump out any additional available information on the double-freed callback.

Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Reported-by: Andrii Nakryiko
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/rcu/tree.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b6c9c49..464cf14 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2957,6 +2957,7 @@ static void check_cb_ovld(struct rcu_data *rdp)
 static void
 __call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
+	static atomic_t doublefrees;
 	unsigned long flags;
 	struct rcu_data *rdp;
 	bool was_alldone;
@@ -2970,8 +2971,10 @@ __call_rcu(struct rcu_head *head, rcu_callback_t func)
 		 * Use rcu:rcu_callback trace event to find the previous
 		 * time callback was passed to __call_rcu().
 		 */
-		WARN_ONCE(1, "__call_rcu(): Double-freed CB %p->%pS()!!!\n",
-			  head, head->func);
+		if (atomic_inc_return(&doublefrees) < 4) {
+			pr_err("%s(): Double-freed CB %p->%pS()!!! ", __func__, head, head->func);
+			mem_dump_obj(head);
+		}
 		WRITE_ONCE(head->func, rcu_leak_callback);
 		return;
 	}
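For reference, a hedged sketch of the kind of bug this patch targets: queuing the same rcu_head twice before the first grace period has completed. The struct conn and function names are invented; call_rcu(), container_of(), and kfree() are the real APIs.

	#include <linux/kernel.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct conn {
		struct rcu_head rh;
		/* ... */
	};

	static void conn_free_cb(struct rcu_head *rhp)
	{
		kfree(container_of(rhp, struct conn, rh));
	}

	static void conn_put_buggy(struct conn *c)
	{
		call_rcu(&c->rh, conn_free_cb);
		/*
		 * BUG: same rcu_head queued twice.  With CONFIG_DEBUG_OBJECTS_RCU_HEAD=y,
		 * the second call_rcu() now prints "Double-freed CB ..." followed by the
		 * mem_dump_obj() report (slab name, plus the allocation site when
		 * slub_debug=U or SLAB_STORE_USER is in effect).
		 */
		call_rcu(&c->rh, conn_free_cb);
	}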