From patchwork Wed Jan 6 01:17:45 2021
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12000697
From: paulmck@kernel.org
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, linux-mm@kvack.org
Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com,
    iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, ming.lei@redhat.com,
    axboe@kernel.dk, kernel-team@fb.com, "Paul E. McKenney"
McKenney" Subject: [PATCH mm,percpu_ref,rcu 1/6] mm: Add mem_dump_obj() to print source of memory block Date: Tue, 5 Jan 2021 17:17:45 -0800 Message-Id: <20210106011750.13709-1-paulmck@kernel.org> X-Mailer: git-send-email 2.9.5 In-Reply-To: <20210106011603.GA13180@paulmck-ThinkPad-P72> References: <20210106011603.GA13180@paulmck-ThinkPad-P72> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Paul E. McKenney" There are kernel facilities such as per-CPU reference counts that give error messages in generic handlers or callbacks, whose messages are unenlightening. In the case of per-CPU reference-count underflow, this is not a problem when creating a new use of this facility because in that case the bug is almost certainly in the code implementing that new use. However, trouble arises when deploying across many systems, which might exercise corner cases that were not seen during development and testing. Here, it would be really nice to get some kind of hint as to which of several uses the underflow was caused by. This commit therefore exposes a mem_dump_obj() function that takes a pointer to memory (which must still be allocated if it has been dynamically allocated) and prints available information on where that memory came from. This pointer can reference the middle of the block as well as the beginning of the block, as needed by things like RCU callback functions and timer handlers that might not know where the beginning of the memory block is. These functions and handlers can use mem_dump_obj() to print out better hints as to where the problem might lie. The information printed can depend on kernel configuration. For example, the allocation return address can be printed only for slab and slub, and even then only when the necessary debug has been enabled. For slab, build with CONFIG_DEBUG_SLAB=y, and either use sizes with ample space to the next power of two or use the SLAB_STORE_USER when creating the kmem_cache structure. For slub, build with CONFIG_SLUB_DEBUG=y and boot with slub_debug=U, or pass SLAB_STORE_USER to kmem_cache_create() if more focused use is desired. Also for slub, use CONFIG_STACKTRACE to enable printing of the allocation-time stack trace. Cc: Christoph Lameter Cc: Pekka Enberg Cc: David Rientjes Cc: Joonsoo Kim Cc: Andrew Morton Cc: Reported-by: Andrii Nakryiko [ paulmck: Convert to printing and change names per Joonsoo Kim. ] [ paulmck: Move slab definition per Stephen Rothwell and kbuild test robot. ] [ paulmck: Handle CONFIG_MMU=n case where vmalloc() is kmalloc(). ] [ paulmck: Apply Vlastimil Babka feedback on slab.c kmem_provenance(). ] [ paulmck: Extract more info from !SLUB_DEBUG per Joonsoo Kim. ] Acked-by: Joonsoo Kim Signed-off-by: Paul E. 

Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc:
Reported-by: Andrii Nakryiko
[ paulmck: Convert to printing and change names per Joonsoo Kim. ]
[ paulmck: Move slab definition per Stephen Rothwell and kbuild test robot. ]
[ paulmck: Handle CONFIG_MMU=n case where vmalloc() is kmalloc(). ]
[ paulmck: Apply Vlastimil Babka feedback on slab.c kmem_provenance(). ]
[ paulmck: Extract more info from !SLUB_DEBUG per Joonsoo Kim. ]
Acked-by: Joonsoo Kim
Signed-off-by: Paul E. McKenney
Acked-by: Vlastimil Babka
---
 include/linux/mm.h   |  2 ++
 include/linux/slab.h |  2 ++
 mm/slab.c            | 20 ++++++++++++++
 mm/slab.h            | 12 +++++++++
 mm/slab_common.c     | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/slob.c            |  6 +++++
 mm/slub.c            | 40 ++++++++++++++++++++++++++++
 mm/util.c            | 24 +++++++++++++++++
 8 files changed, 180 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5299b90a..af7d050 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3169,5 +3169,7 @@ unsigned long wp_shared_mapping_range(struct address_space *mapping,
 
 extern int sysctl_nr_trim_pages;
 
+void mem_dump_obj(void *object);
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_MM_H */

diff --git a/include/linux/slab.h b/include/linux/slab.h
index be4ba58..7ae6040 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -186,6 +186,8 @@ void kfree(const void *);
 void kfree_sensitive(const void *);
 size_t __ksize(const void *);
 size_t ksize(const void *);
+bool kmem_valid_obj(void *object);
+void kmem_dump_obj(void *object);
 
 #ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
 void __check_heap_object(const void *ptr, unsigned long n, struct page *page,

diff --git a/mm/slab.c b/mm/slab.c
index d7c8da9..dcc55e7 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3635,6 +3635,26 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
 #endif /* CONFIG_NUMA */
 
+void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
+{
+        struct kmem_cache *cachep;
+        unsigned int objnr;
+        void *objp;
+
+        kpp->kp_ptr = object;
+        kpp->kp_page = page;
+        cachep = page->slab_cache;
+        kpp->kp_slab_cache = cachep;
+        objp = object - obj_offset(cachep);
+        kpp->kp_data_offset = obj_offset(cachep);
+        page = virt_to_head_page(objp);
+        objnr = obj_to_index(cachep, page, objp);
+        objp = index_to_obj(cachep, page, objnr);
+        kpp->kp_objp = objp;
+        if (DEBUG && cachep->flags & SLAB_STORE_USER)
+                kpp->kp_ret = *dbg_userword(cachep, objp);
+}
+
 /**
  * __do_kmalloc - allocate memory
  * @size: how many bytes of memory are required.

diff --git a/mm/slab.h b/mm/slab.h
index 1a756a3..ecad9b5 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -615,4 +615,16 @@ static inline bool slab_want_init_on_free(struct kmem_cache *c)
         return false;
 }
 
+#define KS_ADDRS_COUNT 16
+struct kmem_obj_info {
+        void *kp_ptr;
+        struct page *kp_page;
+        void *kp_objp;
+        unsigned long kp_data_offset;
+        struct kmem_cache *kp_slab_cache;
+        void *kp_ret;
+        void *kp_stack[KS_ADDRS_COUNT];
+};
+void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page);
+
 #endif /* MM_SLAB_H */

diff --git a/mm/slab_common.c b/mm/slab_common.c
index e981c80..b594413 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -537,6 +537,80 @@ bool slab_is_available(void)
         return slab_state >= UP;
 }
 
+/**
+ * kmem_valid_obj - does the pointer reference a valid slab object?
+ * @object: pointer to query.
+ *
+ * Return: %true if the pointer is to a not-yet-freed object from
+ * kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
+ * is to an already-freed object, and %false otherwise.
+ */
+bool kmem_valid_obj(void *object)
+{
+        struct page *page;
+
+        if (!virt_addr_valid(object))
+                return false;
+        page = virt_to_head_page(object);
+        return PageSlab(page);
+}
+
+/**
+ * kmem_dump_obj - Print available slab provenance information
+ * @object: slab object for which to find provenance information.
+ *
+ * This function uses pr_cont(), so that the caller is expected to have
+ * printed out whatever preamble is appropriate.  The provenance information
+ * depends on the type of object and on how much debugging is enabled.
+ * For a slab-cache object, the fact that it is a slab object is printed,
+ * and, if available, the slab name, return address, and stack trace from
+ * the allocation of that object.
+ *
+ * This function will splat if passed a pointer to a non-slab object.
+ * If you are not sure what type of object you have, you should instead
+ * use mem_dump_obj().
+ */
+void kmem_dump_obj(void *object)
+{
+        char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
+        int i;
+        struct page *page;
+        unsigned long ptroffset;
+        struct kmem_obj_info kp = { };
+
+        if (WARN_ON_ONCE(!virt_addr_valid(object)))
+                return;
+        page = virt_to_head_page(object);
+        if (WARN_ON_ONCE(!PageSlab(page))) {
+                pr_cont(" non-slab memory.\n");
+                return;
+        }
+        kmem_obj_info(&kp, object, page);
+        if (kp.kp_slab_cache)
+                pr_cont(" slab%s %s", cp, kp.kp_slab_cache->name);
+        else
+                pr_cont(" slab%s", cp);
+        if (kp.kp_objp)
+                pr_cont(" start %px", kp.kp_objp);
+        if (kp.kp_data_offset)
+                pr_cont(" data offset %lu", kp.kp_data_offset);
+        if (kp.kp_objp) {
+                ptroffset = ((char *)object - (char *)kp.kp_objp) - kp.kp_data_offset;
+                pr_cont(" pointer offset %lu", ptroffset);
+        }
+        if (kp.kp_slab_cache && kp.kp_slab_cache->usersize)
+                pr_cont(" size %u", kp.kp_slab_cache->usersize);
+        if (kp.kp_ret)
+                pr_cont(" allocated at %pS\n", kp.kp_ret);
+        else
+                pr_cont("\n");
+        for (i = 0; i < ARRAY_SIZE(kp.kp_stack); i++) {
+                if (!kp.kp_stack[i])
+                        break;
+                pr_info(" %pS\n", kp.kp_stack[i]);
+        }
+}
+
 #ifndef CONFIG_SLOB
 /* Create a cache during boot when no slab services are available yet */
 void __init create_boot_cache(struct kmem_cache *s, const char *name,

diff --git a/mm/slob.c b/mm/slob.c
index 8d4bfa4..ef87ada 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -461,6 +461,12 @@ static void slob_free(void *block, int size)
         spin_unlock_irqrestore(&slob_lock, flags);
 }
 
+void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
+{
+        kpp->kp_ptr = object;
+        kpp->kp_page = page;
+}
+
 /*
  * End of slob allocator proper. Begin kmem_cache_alloc and kmalloc frontend.
 */

diff --git a/mm/slub.c b/mm/slub.c
index 0c8b43a..3c1a843 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3919,6 +3919,46 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
         return 0;
 }
 
+void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
+{
+        void *base;
+        int __maybe_unused i;
+        unsigned int objnr;
+        void *objp;
+        void *objp0;
+        struct kmem_cache *s = page->slab_cache;
+        struct track __maybe_unused *trackp;
+
+        kpp->kp_ptr = object;
+        kpp->kp_page = page;
+        kpp->kp_slab_cache = s;
+        base = page_address(page);
+        objp0 = kasan_reset_tag(object);
+#ifdef CONFIG_SLUB_DEBUG
+        objp = restore_red_left(s, objp0);
+#else
+        objp = objp0;
+#endif
+        objnr = obj_to_index(s, page, objp);
+        kpp->kp_data_offset = (unsigned long)((char *)objp0 - (char *)objp);
+        objp = base + s->size * objnr;
+        kpp->kp_objp = objp;
+        if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
+            !(s->flags & SLAB_STORE_USER))
+                return;
+#ifdef CONFIG_SLUB_DEBUG
+        trackp = get_track(s, objp, TRACK_ALLOC);
+        kpp->kp_ret = (void *)trackp->addr;
+#ifdef CONFIG_STACKTRACE
+        for (i = 0; i < KS_ADDRS_COUNT && i < TRACK_ADDRS_COUNT; i++) {
+                kpp->kp_stack[i] = (void *)trackp->addrs[i];
+                if (!kpp->kp_stack[i])
+                        break;
+        }
+#endif
+#endif
+}
+
 /********************************************************************
  *              Kmalloc subsystem
  *******************************************************************/

diff --git a/mm/util.c b/mm/util.c
index 8c9b7d1..da46f9d 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -982,3 +982,27 @@ int __weak memcmp_pages(struct page *page1, struct page *page2)
         kunmap_atomic(addr1);
         return ret;
 }
+
+/**
+ * mem_dump_obj - Print available provenance information
+ * @object: object for which to find provenance information.
+ *
+ * This function uses pr_cont(), so that the caller is expected to have
+ * printed out whatever preamble is appropriate.  The provenance information
+ * depends on the type of object and on how much debugging is enabled.
+ * For example, for a slab-cache object, the slab name is printed, and,
+ * if available, the return address and stack trace from the allocation
+ * of that object.
+ */
+void mem_dump_obj(void *object)
+{
+        if (!virt_addr_valid(object)) {
+                pr_cont(" non-paged (local) memory.\n");
+                return;
+        }
+        if (kmem_valid_obj(object)) {
+                kmem_dump_obj(object);
+                return;
+        }
+        pr_cont(" non-slab memory.\n");
+}

From patchwork Wed Jan 6 01:17:46 2021
X-Patchwork-Submitter: "Paul E. McKenney"
McKenney" X-Patchwork-Id: 12000695 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.3 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0168AC433DB for ; Wed, 6 Jan 2021 01:17:55 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A0A9B22DD3 for ; Wed, 6 Jan 2021 01:17:54 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A0A9B22DD3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D3DA98D00D2; Tue, 5 Jan 2021 20:17:53 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id C6B7F8D00D1; Tue, 5 Jan 2021 20:17:53 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AE4E68D00CF; Tue, 5 Jan 2021 20:17:53 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0060.hostedemail.com [216.40.44.60]) by kanga.kvack.org (Postfix) with ESMTP id 9435F8D006E for ; Tue, 5 Jan 2021 20:17:53 -0500 (EST) Received: from smtpin11.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 64D7C362C for ; Wed, 6 Jan 2021 01:17:53 +0000 (UTC) X-FDA: 77673588426.11.brain86_2d05685274dd Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin11.hostedemail.com (Postfix) with ESMTP id 44F64180F8B80 for ; Wed, 6 Jan 2021 01:17:53 +0000 (UTC) X-HE-Tag: brain86_2d05685274dd X-Filterd-Recvd-Size: 2745 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf25.hostedemail.com (Postfix) with ESMTP for ; Wed, 6 Jan 2021 01:17:52 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id D197F22D6F; Wed, 6 Jan 2021 01:17:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1609895871; bh=gLFu+KgQ4ck58W8mpBbjs74dHggSLHFYOf67lC5Cr3I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Uif8275PxW3KW/QFDS83CgQMCU8FRKEU74nYX24qobr0YhgYZ7Yat4jReuiCK9xBh i2YhNYMtxbwPSP2nctigH6YbL9EHvPP42z7n4AhQv1bDlpgkCM4iaD6Rpft217gQeM 2gvHLjq0fM20fgLr+R5SvOuMyQO/NTD8LhDDoKtxXhLS7hcAM9f8OqYKU0O0iOKm23 0cgFCUzXLXc33Bd59PYlhfBTutUNy5MGTRj+Tw3ykxsa/fSdwlhsKm0LVNmmVxH9zR zn5XseL8WDdomGn1DZ2EBqGN74Y/x2nXYZZDlu0+ykt3uYsXOk0fbCMuOMDvcJcq2A EF1wttLK4d63g== From: paulmck@kernel.org To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, linux-mm@kvack.org Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, ming.lei@redhat.com, axboe@kernel.dk, kernel-team@fb.com, "Paul E. 
McKenney" Subject: [PATCH mm,percpu_ref,rcu 2/6] mm: Make mem_dump_obj() handle NULL and zero-sized pointers Date: Tue, 5 Jan 2021 17:17:46 -0800 Message-Id: <20210106011750.13709-2-paulmck@kernel.org> X-Mailer: git-send-email 2.9.5 In-Reply-To: <20210106011603.GA13180@paulmck-ThinkPad-P72> References: <20210106011603.GA13180@paulmck-ThinkPad-P72> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Paul E. McKenney" This commit makes mem_dump_obj() call out NULL and zero-sized pointers specially instead of classifying them as non-paged memory. Cc: Christoph Lameter Cc: Pekka Enberg Cc: David Rientjes Cc: Joonsoo Kim Cc: Andrew Morton Cc: Reported-by: Andrii Nakryiko Acked-by: Vlastimil Babka Signed-off-by: Paul E. McKenney --- mm/util.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/mm/util.c b/mm/util.c index da46f9d..92f23d2 100644 --- a/mm/util.c +++ b/mm/util.c @@ -997,7 +997,12 @@ int __weak memcmp_pages(struct page *page1, struct page *page2) void mem_dump_obj(void *object) { if (!virt_addr_valid(object)) { - pr_cont(" non-paged (local) memory.\n"); + if (object == NULL) + pr_cont(" NULL pointer.\n"); + else if (object == ZERO_SIZE_PTR) + pr_cont(" zero-size pointer.\n"); + else + pr_cont(" non-paged (local) memory.\n"); return; } if (kmem_valid_obj(object)) { From patchwork Wed Jan 6 01:17:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12000699 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.3 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30CA4C433E0 for ; Wed, 6 Jan 2021 01:17:58 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C296422CF6 for ; Wed, 6 Jan 2021 01:17:57 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C296422CF6 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 50FC98D00CF; Tue, 5 Jan 2021 20:17:54 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 3DF228D006E; Tue, 5 Jan 2021 20:17:54 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E9CF78D006E; Tue, 5 Jan 2021 20:17:53 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0201.hostedemail.com [216.40.44.201]) by kanga.kvack.org (Postfix) with ESMTP id B536B8D006E for ; Tue, 5 Jan 2021 20:17:53 -0500 (EST) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 824138245571 for ; Wed, 6 Jan 2021 01:17:53 +0000 (UTC) X-FDA: 77673588426.26.judge15_3a02228274dd Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin26.hostedemail.com (Postfix) with ESMTP id 613D01804B660 

From patchwork Wed Jan 6 01:17:47 2021
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12000699
From: paulmck@kernel.org
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, linux-mm@kvack.org
Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com,
    iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, ming.lei@redhat.com,
    axboe@kernel.dk, kernel-team@fb.com, "Paul E. McKenney"
Subject: [PATCH mm,percpu_ref,rcu 3/6] mm: Make mem_dump_obj() handle vmalloc() memory
Date: Tue, 5 Jan 2021 17:17:47 -0800
Message-Id: <20210106011750.13709-3-paulmck@kernel.org>
In-Reply-To: <20210106011603.GA13180@paulmck-ThinkPad-P72>
References: <20210106011603.GA13180@paulmck-ThinkPad-P72>

From: "Paul E. McKenney"

This commit adds vmalloc() support to mem_dump_obj().  Note that the
vmalloc_dump_obj() function combines the checking and dumping, in
contrast with the split between kmem_valid_obj() and kmem_dump_obj().
The reason for the difference is that the checking in the vmalloc()
case involves acquiring a global lock, and redundant acquisitions of
global locks should be avoided, even on not-so-fast paths.

Note that this change causes on-stack variables to be reported as
vmalloc() storage from kernel_clone() or similar, depending on the
degree of inlining that your compiler does.  This is likely more
helpful than the earlier "non-paged (local) memory" message.
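
For illustration, an editorial sketch (not part of the patch): with
vmalloc() support wired in, a pointer to a vmalloc() region is reported
with the allocating caller rather than falling through to the non-slab
case.  The demo_vmalloc_provenance() function below is made up; it only
exercises mem_dump_obj() and the stock vmalloc()/vfree() API.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static void demo_vmalloc_provenance(void)
{
        void *p = vmalloc(2 * PAGE_SIZE);

        if (!p)
                return;
        pr_err("vmalloc block %p:", p);
        mem_dump_obj(p);        /* expected to continue with " vmalloc allocated at <caller>" */
        vfree(p);
}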

Cc: Andrew Morton
Cc: Joonsoo Kim
Cc:
Reported-by: Andrii Nakryiko
Signed-off-by: Paul E. McKenney
Acked-by: Vlastimil Babka
---
 include/linux/vmalloc.h |  6 ++++++
 mm/util.c               | 14 ++++++------
 mm/vmalloc.c            | 12 ++++++++++++
 3 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 80c0181..c18f475 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -246,4 +246,10 @@ pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 int register_vmap_purge_notifier(struct notifier_block *nb);
 int unregister_vmap_purge_notifier(struct notifier_block *nb);
 
+#ifdef CONFIG_MMU
+bool vmalloc_dump_obj(void *object);
+#else
+static inline bool vmalloc_dump_obj(void *object) { return false; }
+#endif
+
 #endif /* _LINUX_VMALLOC_H */

diff --git a/mm/util.c b/mm/util.c
index 92f23d2..5487022 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -996,18 +996,20 @@ int __weak memcmp_pages(struct page *page1, struct page *page2)
  */
 void mem_dump_obj(void *object)
 {
+        if (kmem_valid_obj(object)) {
+                kmem_dump_obj(object);
+                return;
+        }
+        if (vmalloc_dump_obj(object))
+                return;
         if (!virt_addr_valid(object)) {
                 if (object == NULL)
                         pr_cont(" NULL pointer.\n");
                 else if (object == ZERO_SIZE_PTR)
                         pr_cont(" zero-size pointer.\n");
                 else
-                        pr_cont(" non-paged (local) memory.\n");
-                return;
-        }
-        if (kmem_valid_obj(object)) {
-                kmem_dump_obj(object);
+                        pr_cont(" non-paged memory.\n");
                 return;
         }
-        pr_cont(" non-slab memory.\n");
+        pr_cont(" non-slab/vmalloc memory.\n");
 }

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4d88fe5..c274ea4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3448,6 +3448,18 @@ void pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 }
 #endif /* CONFIG_SMP */
 
+bool vmalloc_dump_obj(void *object)
+{
+        struct vm_struct *vm;
+        void *objp = (void *)PAGE_ALIGN((unsigned long)object);
+
+        vm = find_vm_area(objp);
+        if (!vm)
+                return false;
+        pr_cont(" vmalloc allocated at %pS\n", vm->caller);
+        return true;
+}
+
 #ifdef CONFIG_PROC_FS
 static void *s_start(struct seq_file *m, loff_t *pos)
         __acquires(&vmap_purge_lock)

From patchwork Wed Jan 6 01:17:48 2021
X-Patchwork-Submitter: "Paul E. McKenney"
McKenney" X-Patchwork-Id: 12000701 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.3 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4C918C433DB for ; Wed, 6 Jan 2021 01:18:00 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DE74322CF6 for ; Wed, 6 Jan 2021 01:17:59 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DE74322CF6 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 96E6B8D006E; Tue, 5 Jan 2021 20:17:54 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 456348D00D0; Tue, 5 Jan 2021 20:17:54 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0100E8D00D3; Tue, 5 Jan 2021 20:17:53 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0043.hostedemail.com [216.40.44.43]) by kanga.kvack.org (Postfix) with ESMTP id C0EB88D00CE for ; Tue, 5 Jan 2021 20:17:53 -0500 (EST) Received: from smtpin11.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 92CCA1F08 for ; Wed, 6 Jan 2021 01:17:53 +0000 (UTC) X-FDA: 77673588426.11.front97_550fcff274dd Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin11.hostedemail.com (Postfix) with ESMTP id 713BD180F8B80 for ; Wed, 6 Jan 2021 01:17:53 +0000 (UTC) X-HE-Tag: front97_550fcff274dd X-Filterd-Recvd-Size: 2521 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf28.hostedemail.com (Postfix) with ESMTP for ; Wed, 6 Jan 2021 01:17:53 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 2E007230F9; Wed, 6 Jan 2021 01:17:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1609895872; bh=GMkUWi1HSMx6tCW3EpjcA1+Pgb7uUPIJLE5afoazVfo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=t6NMudKKD76xBb8X/tG4J62mW7lW6/cnBXETpELbjaddU2tbIooynS/8EhUVUognJ ETTH7J1c8lmqUJqnPOiFbFzcls7G4axaDBMCjOxFNr1HX03Qn4sy+XFExMWnxCEGdp veRcTc8vDr+sXuqZxo2eU/nVABRVzMINnycM94xAEkV+hK65sf/bN730Hpt7SjH1xo uH3IIHTyawZ2W64PVsSNbTY95DqYYVIS0UmR1NZrwIoCrACxslxDK3ZUkg/qtR59fF NQ26zJHgwEiffejk48QZVC5q49fYze+Txg+OhDUersc5yPtgkcckTc+L4Lz8irZ6sQ Ch5ggT7TMu7LA== From: paulmck@kernel.org To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, linux-mm@kvack.org Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, ming.lei@redhat.com, axboe@kernel.dk, kernel-team@fb.com, "Paul E. 
McKenney" Subject: [PATCH mm,percpu_ref,rcu 4/6] mm: Make mem_obj_dump() vmalloc() dumps include start and length Date: Tue, 5 Jan 2021 17:17:48 -0800 Message-Id: <20210106011750.13709-4-paulmck@kernel.org> X-Mailer: git-send-email 2.9.5 In-Reply-To: <20210106011603.GA13180@paulmck-ThinkPad-P72> References: <20210106011603.GA13180@paulmck-ThinkPad-P72> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Paul E. McKenney" This commit adds the starting address and number of pages to the vmalloc() information dumped by way of vmalloc_dump_obj(). Cc: Andrew Morton Cc: Joonsoo Kim Cc: Reported-by: Andrii Nakryiko Suggested-by: Vlastimil Babka Signed-off-by: Paul E. McKenney Acked-by: Vlastimil Babka --- mm/vmalloc.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index c274ea4..e3229ff 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -3456,7 +3456,8 @@ bool vmalloc_dump_obj(void *object) vm = find_vm_area(objp); if (!vm) return false; - pr_cont(" vmalloc allocated at %pS\n", vm->caller); + pr_cont(" %u-page vmalloc region starting at %#lx allocated at %pS\n", + vm->nr_pages, (unsigned long)vm->addr, vm->caller); return true; } From patchwork Wed Jan 6 01:17:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12000703 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.3 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 59566C433DB for ; Wed, 6 Jan 2021 01:18:02 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EB29D22CF6 for ; Wed, 6 Jan 2021 01:18:01 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EB29D22CF6 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D34CF8D00D0; Tue, 5 Jan 2021 20:17:54 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 5DDF18D00D3; Tue, 5 Jan 2021 20:17:54 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2DDF08D00D1; Tue, 5 Jan 2021 20:17:54 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0056.hostedemail.com [216.40.44.56]) by kanga.kvack.org (Postfix) with ESMTP id E1BA18D00D0 for ; Tue, 5 Jan 2021 20:17:53 -0500 (EST) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id AE59F1F0A for ; Wed, 6 Jan 2021 01:17:53 +0000 (UTC) X-FDA: 77673588426.17.car39_4116b4b274dd Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin17.hostedemail.com (Postfix) with ESMTP id 9643D180D0185 for ; Wed, 6 Jan 2021 01:17:53 +0000 (UTC) X-HE-Tag: car39_4116b4b274dd X-Filterd-Recvd-Size: 3441 Received: from mail.kernel.org 
From: paulmck@kernel.org
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, linux-mm@kvack.org
Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com,
    iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, ming.lei@redhat.com,
    axboe@kernel.dk, kernel-team@fb.com, "Paul E. McKenney"
Subject: [PATCH mm,percpu_ref,rcu 5/6] rcu: Make call_rcu() print mem_dump_obj() info for double-freed callback
Date: Tue, 5 Jan 2021 17:17:49 -0800
Message-Id: <20210106011750.13709-5-paulmck@kernel.org>
In-Reply-To: <20210106011603.GA13180@paulmck-ThinkPad-P72>
References: <20210106011603.GA13180@paulmck-ThinkPad-P72>

From: "Paul E. McKenney"

The debug-object double-free checks in __call_rcu() print out the RCU
callback function, which is usually sufficient to track down the double
free.  However, all uses of things like queue_rcu_work() will have the
same RCU callback function (rcu_work_rcufn() in this case), so a
diagnostic message for a double queue_rcu_work() needs more than just
the callback function.

This commit therefore calls mem_dump_obj() to dump out any additional
available information on the double-freed callback.

Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc:
Reported-by: Andrii Nakryiko
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 40e5e3d..84513c5 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2941,6 +2941,7 @@ static void check_cb_ovld(struct rcu_data *rdp)
 static void
 __call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
+        static atomic_t doublefrees;
         unsigned long flags;
         struct rcu_data *rdp;
         bool was_alldone;
@@ -2954,8 +2955,10 @@ __call_rcu(struct rcu_head *head, rcu_callback_t func)
                  * Use rcu:rcu_callback trace event to find the previous
                  * time callback was passed to __call_rcu().
                  */
-                WARN_ONCE(1, "__call_rcu(): Double-freed CB %p->%pS()!!!\n",
-                          head, head->func);
+                if (atomic_inc_return(&doublefrees) < 4) {
+                        pr_err("%s(): Double-freed CB %p->%pS()!!! ", __func__, head, head->func);
+                        mem_dump_obj(head);
+                }
                 WRITE_ONCE(head->func, rcu_leak_callback);
                 return;
         }
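
For illustration, an editorial sketch (not part of the patch) of the bug
class this diagnostic targets: the same rcu_head passed to call_rcu()
(or queued via queue_rcu_work()) twice before the first grace period has
ended.  With CONFIG_DEBUG_OBJECTS_RCU_HEAD=y the second call is caught,
and with this patch the enclosing allocation is also fed to
mem_dump_obj().  The struct and function names below are invented.

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

struct foo {                            /* hypothetical user structure */
        int data;
        struct rcu_head rh;
};

static void foo_reclaim(struct rcu_head *rhp)
{
        kfree(container_of(rhp, struct foo, rh));
}

static void buggy_release(struct foo *fp)
{
        call_rcu(&fp->rh, foo_reclaim);
        call_rcu(&fp->rh, foo_reclaim); /* bug: same rcu_head queued twice */
}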
McKenney" X-Patchwork-Id: 12000705 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.3 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B8570C433E0 for ; Wed, 6 Jan 2021 01:18:04 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6BC4222CF6 for ; Wed, 6 Jan 2021 01:18:04 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6BC4222CF6 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3BDFE8D00D3; Tue, 5 Jan 2021 20:17:55 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 349638D00D1; Tue, 5 Jan 2021 20:17:55 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 03B428D00D4; Tue, 5 Jan 2021 20:17:54 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0196.hostedemail.com [216.40.44.196]) by kanga.kvack.org (Postfix) with ESMTP id CD64D8D00D1 for ; Tue, 5 Jan 2021 20:17:54 -0500 (EST) Received: from smtpin13.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 97502181AEF10 for ; Wed, 6 Jan 2021 01:17:54 +0000 (UTC) X-FDA: 77673588468.13.crow87_4f05642274dd Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin13.hostedemail.com (Postfix) with ESMTP id 7457718140B60 for ; Wed, 6 Jan 2021 01:17:54 +0000 (UTC) X-HE-Tag: crow87_4f05642274dd X-Filterd-Recvd-Size: 3982 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf50.hostedemail.com (Postfix) with ESMTP for ; Wed, 6 Jan 2021 01:17:53 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 7DED8230FD; Wed, 6 Jan 2021 01:17:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1609895872; bh=JZ/A0Ej4DAs1pgQNQqM4FCqjxD7UEtQkZzWwKn4VsH0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=hpZfLkbTX/cwRuUA535Txxwqp3jKj2UGCk47A6JOieqgFc0C03FDtut6txIvNx0Ts wHYqAN3rEEz2sdRjaz2deuyZn2og00LJ0xe8haAgs2jr04EXTRNhS1AFkTOek5PBMC U8s/EyykT+Zn5RuXGNIcppzXnqYv2l3IjXXINpM3t7zPEbXIJLzezw/vEhs4v394zG llIiCBQt63QL0nGZxPxssMw7KUebJEjOJFdqrYeAdbzF6GvfMEHnGyDkjT0GTee9Sm nVxtv0HJT8wNAjkMHq7Jipzrec/vXRSI+sJdHott0mJaA31KUcP0KGhAisNjcDLRGp M///Q36MEEj3w== From: paulmck@kernel.org To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, linux-mm@kvack.org Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, ming.lei@redhat.com, axboe@kernel.dk, kernel-team@fb.com, "Paul E. 
McKenney" Subject: [PATCH mm,percpu_ref,rcu 6/6] percpu_ref: Dump mem_dump_obj() info upon reference-count underflow Date: Tue, 5 Jan 2021 17:17:50 -0800 Message-Id: <20210106011750.13709-6-paulmck@kernel.org> X-Mailer: git-send-email 2.9.5 In-Reply-To: <20210106011603.GA13180@paulmck-ThinkPad-P72> References: <20210106011603.GA13180@paulmck-ThinkPad-P72> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Paul E. McKenney" Reference-count underflow for percpu_ref is detected in the RCU callback percpu_ref_switch_to_atomic_rcu(), and the resulting warning does not print anything allowing easy identification of which percpu_ref use case is underflowing. This is of course not normally a problem when developing a new percpu_ref use case because it is most likely that the problem resides in this new use case. However, when deploying a new kernel to a large set of servers, the underflow might well be a new corner case in any of the old percpu_ref use cases. This commit therefore calls mem_dump_obj() to dump out any additional available information on the underflowing percpu_ref instance. Cc: Ming Lei Cc: Jens Axboe Cc: Joonsoo Kim Reported-by: Andrii Nakryiko Signed-off-by: Paul E. McKenney --- lib/percpu-refcount.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c index e59eda0..a1071cd 100644 --- a/lib/percpu-refcount.c +++ b/lib/percpu-refcount.c @@ -5,6 +5,7 @@ #include #include #include +#include #include /* @@ -168,6 +169,7 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu) struct percpu_ref_data, rcu); struct percpu_ref *ref = data->ref; unsigned long __percpu *percpu_count = percpu_count_ptr(ref); + static atomic_t underflows; unsigned long count = 0; int cpu; @@ -191,9 +193,13 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu) */ atomic_long_add((long)count - PERCPU_COUNT_BIAS, &data->count); - WARN_ONCE(atomic_long_read(&data->count) <= 0, - "percpu ref (%ps) <= 0 (%ld) after switching to atomic", - data->release, atomic_long_read(&data->count)); + if (WARN_ONCE(atomic_long_read(&data->count) <= 0, + "percpu ref (%ps) <= 0 (%ld) after switching to atomic", + data->release, atomic_long_read(&data->count)) && + atomic_inc_return(&underflows) < 4) { + pr_err("%s(): percpu_ref underflow", __func__); + mem_dump_obj(data); + } /* @ref is viewed as dead on all CPUs, send out switch confirmation */ percpu_ref_call_confirm_rcu(rcu);