From patchwork Sun May 29 08:15:33 2022
X-Patchwork-Submitter: Rongwei Wang
X-Patchwork-Id: 12864138
From: Rongwei Wang <rongwei.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com, rientjes@google.com, penberg@kernel.org, cl@linux.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free
Date: Sun, 29 May 2022 16:15:33 +0800
Message-Id: <20220529081535.69275-1-rongwei.wang@linux.alibaba.com>

In workloads that allocate and free slab objects frequently, error
messages such as "Left Redzone overwritten" or "First byte 0xbb
instead of 0xcc" can be printed while validating slabs. This happens
because an object may already have been poisoned with
SLAB_RED_INACTIVE but not yet added to the slab's freelist; if
validation runs between these two states, it reports a false error.
The slab itself still works correctly, but these spurious messages
make slab debugging confusing.
Signed-off-by: Rongwei Wang
Reported-by: Rongwei Wang
Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 40 +++++++++++++++++-----------------------
 1 file changed, 17 insertions(+), 23 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index ed5c2c03a47a..310e56d99116 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1374,15 +1374,12 @@ static noinline int free_debug_processing(
 	void *head, void *tail, int bulk_cnt,
 	unsigned long addr)
 {
-	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
 	void *object = head;
 	int cnt = 0;
-	unsigned long flags, flags2;
+	unsigned long flags;
 	int ret = 0;
 
-	spin_lock_irqsave(&n->list_lock, flags);
-	slab_lock(slab, &flags2);
-
+	slab_lock(slab, &flags);
 	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
 		if (!check_slab(s, slab))
 			goto out;
@@ -1414,8 +1411,7 @@ static noinline int free_debug_processing(
 		slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
 			 bulk_cnt, cnt);
 
-	slab_unlock(slab, &flags2);
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	slab_unlock(slab, &flags);
 	if (!ret)
 		slab_fix(s, "Object at 0x%p not freed", object);
 	return ret;
@@ -3304,7 +3300,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 {
 	void *prior;
-	int was_frozen;
+	int was_frozen, to_take_off = 0;
 	struct slab new;
 	unsigned long counters;
 	struct kmem_cache_node *n = NULL;
@@ -3315,15 +3311,19 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	if (kfence_free(head))
 		return;
 
+	n = get_node(s, slab_nid(slab));
+	spin_lock_irqsave(&n->list_lock, flags);
+
 	if (kmem_cache_debug(s) &&
-	    !free_debug_processing(s, slab, head, tail, cnt, addr))
+	    !free_debug_processing(s, slab, head, tail, cnt, addr)) {
+
+		spin_unlock_irqrestore(&n->list_lock, flags);
 		return;
+	}
 
 	do {
-		if (unlikely(n)) {
-			spin_unlock_irqrestore(&n->list_lock, flags);
-			n = NULL;
-		}
+		if (unlikely(to_take_off))
+			to_take_off = 0;
 		prior = slab->freelist;
 		counters = slab->counters;
 		set_freepointer(s, tail, prior);
@@ -3343,18 +3343,11 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 				new.frozen = 1;
 
 			} else { /* Needs to be taken off a list */
-
-				n = get_node(s, slab_nid(slab));
 				/*
-				 * Speculatively acquire the list_lock.
 				 * If the cmpxchg does not succeed then we may
-				 * drop the list_lock without any processing.
-				 *
-				 * Otherwise the list_lock will synchronize with
-				 * other processors updating the list of slabs.
+				 * drop this behavior without any processing.
 				 */
-				spin_lock_irqsave(&n->list_lock, flags);
-
+				to_take_off = 1;
 			}
 		}
 
@@ -3363,8 +3356,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		head, new.counters,
 		"__slab_free"));
 
-	if (likely(!n)) {
+	if (likely(!to_take_off)) {
 
+		spin_unlock_irqrestore(&n->list_lock, flags);
 		if (likely(was_frozen)) {
 			/*
 			 * The list lock was not taken therefore no list

From patchwork Sun May 29 08:15:34 2022
X-Patchwork-Submitter: Rongwei Wang
X-Patchwork-Id: 12864139
From: Rongwei Wang <rongwei.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com, rientjes@google.com, penberg@kernel.org, cl@linux.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/3] mm/slub: improve consistency of nr_slabs count
Date: Sun, 29 May 2022 16:15:34 +0800
Message-Id: <20220529081535.69275-2-rongwei.wang@linux.alibaba.com>
In-Reply-To: <20220529081535.69275-1-rongwei.wang@linux.alibaba.com>
References: <20220529081535.69275-1-rongwei.wang@linux.alibaba.com>

Currently, discard_slab() can change the nr_slabs count without
holding the node's list_lock. This causes spurious error messages to
be printed when scanning the node's partial or full lists, e.g.
when validating all slabs. In effect, the nr_slabs count can become
transiently inconsistent. Fix this by dropping discard_slab() and
calling dec_slabs_node() before releasing the node's list_lock;
dec_slabs_node() and free_slab() are then called separately, which
keeps the nr_slabs count consistent.

Signed-off-by: Rongwei Wang
---
 mm/slub.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 310e56d99116..bffb95bbb0ee 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2039,12 +2039,6 @@ static void free_slab(struct kmem_cache *s, struct slab *slab)
 		__free_slab(s, slab);
 }
 
-static void discard_slab(struct kmem_cache *s, struct slab *slab)
-{
-	dec_slabs_node(s, slab_nid(slab), slab->objects);
-	free_slab(s, slab);
-}
-
 /*
  * Management of partially allocated slabs.
  */
@@ -2413,6 +2407,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 
 	if (!new.inuse && n->nr_partial >= s->min_partial) {
 		mode = M_FREE;
+		spin_lock_irqsave(&n->list_lock, flags);
 	} else if (new.freelist) {
 		mode = M_PARTIAL;
 		/*
@@ -2437,7 +2432,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 				old.freelist, old.counters,
 				new.freelist, new.counters,
 				"unfreezing slab")) {
-		if (mode == M_PARTIAL || mode == M_FULL)
+		if (mode != M_FULL_NOLIST)
 			spin_unlock_irqrestore(&n->list_lock, flags);
 		goto redo;
 	}
@@ -2449,7 +2444,10 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 		stat(s, tail);
 	} else if (mode == M_FREE) {
 		stat(s, DEACTIVATE_EMPTY);
-		discard_slab(s, slab);
+		dec_slabs_node(s, slab_nid(slab), slab->objects);
+		spin_unlock_irqrestore(&n->list_lock, flags);
+
+		free_slab(s, slab);
 		stat(s, FREE_SLAB);
 	} else if (mode == M_FULL) {
 		add_full(s, n, slab);
@@ -2502,6 +2500,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
 			slab->next = slab_to_discard;
 			slab_to_discard = slab;
+			dec_slabs_node(s, slab_nid(slab), slab->objects);
 		} else {
 			add_partial(n, slab, DEACTIVATE_TO_TAIL);
 			stat(s, FREE_ADD_PARTIAL);
@@ -2516,7 +2515,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 		slab_to_discard = slab_to_discard->next;
 
 		stat(s, DEACTIVATE_EMPTY);
-		discard_slab(s, slab);
+		free_slab(s, slab);
 		stat(s, FREE_SLAB);
 	}
 }
@@ -3404,9 +3403,10 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		remove_full(s, n, slab);
 	}
 
+	dec_slabs_node(s, slab_nid(slab), slab->objects);
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	stat(s, FREE_SLAB);
-	discard_slab(s, slab);
+	free_slab(s, slab);
 }
 
 /*
@@ -4265,6 +4265,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 		if (!slab->inuse) {
 			remove_partial(n, slab);
 			list_add(&slab->slab_list, &discard);
+			dec_slabs_node(s, slab_nid(slab), slab->objects);
 		} else {
 			list_slab_objects(s, slab,
 			  "Objects remaining in %s on __kmem_cache_shutdown()");
@@ -4273,7 +4274,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	spin_unlock_irq(&n->list_lock);
 
 	list_for_each_entry_safe(slab, h, &discard, slab_list)
-		discard_slab(s, slab);
+		free_slab(s, slab);
 }
 
 bool __kmem_cache_empty(struct kmem_cache *s)
@@ -4595,6 +4596,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)
 		if (free == slab->objects) {
 			list_move(&slab->slab_list, &discard);
 			n->nr_partial--;
+			dec_slabs_node(s, slab_nid(slab), slab->objects);
 		} else if (free <= SHRINK_PROMOTE_MAX)
 			list_move(&slab->slab_list, promote + free - 1);
 	}
@@ -4610,7 +4612,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)
 
 	/* Release empty slabs */
 	list_for_each_entry_safe(slab, t, &discard, slab_list)
-		discard_slab(s, slab);
+		free_slab(s, slab);
 
 	if (slabs_node(s, node))
 		ret = 1;

From patchwork Sun May 29 08:15:35 2022
X-Patchwork-Submitter: Rongwei Wang
X-Patchwork-Id: 12864140
From: Rongwei Wang <rongwei.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com, rientjes@google.com, penberg@kernel.org, cl@linux.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] mm/slub: add nr_full count for debugging slub
Date: Sun, 29 May 2022 16:15:35 +0800
Message-Id: <20220529081535.69275-3-rongwei.wang@linux.alibaba.com>
In-Reply-To: <20220529081535.69275-1-rongwei.wang@linux.alibaba.com>
References: <20220529081535.69275-1-rongwei.wang@linux.alibaba.com>

n->nr_slabs is updated when a slab is actually allocated or freed,
but that slab is not necessarily on the node's full or partial list.
As a result, the total number of slabs on a node's full and partial
lists does not necessarily equal n->nr_slabs, even after flush_all()
has been called. For example, error messages like the following are
printed when 'slabinfo -v' is executed:

SLUB: kmemleak_object 4157 slabs counted but counter=4161
SLUB: kmemleak_object 4072 slabs counted but counter=4077
SLUB: kmalloc-2k 19 slabs counted but counter=20
SLUB: kmalloc-2k 12 slabs counted but counter=13
SLUB: kmemleak_object 4205 slabs counted but counter=4209

To fix this, introduce nr_full in kmem_cache_node to replace nr_slabs
in this check and eliminate these confusing messages.
Signed-off-by: Rongwei Wang
---
 mm/slab.h |  1 +
 mm/slub.c | 33 +++++++++++++++++++++++++++++++--
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 95eb34174c1b..b1190e41a243 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -782,6 +782,7 @@ struct kmem_cache_node {
 	unsigned long nr_partial;
 	struct list_head partial;
 #ifdef CONFIG_SLUB_DEBUG
+	unsigned long nr_full;
 	atomic_long_t nr_slabs;
 	atomic_long_t total_objects;
 	struct list_head full;
diff --git a/mm/slub.c b/mm/slub.c
index bffb95bbb0ee..99e980c8295c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1220,6 +1220,9 @@ static void add_full(struct kmem_cache *s,
 	lockdep_assert_held(&n->list_lock);
 
 	list_add(&slab->slab_list, &n->full);
+#ifdef CONFIG_SLUB_DEBUG
+	n->nr_full++;
+#endif
 }
 
 static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct slab *slab)
@@ -1229,6 +1232,9 @@ static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct
 	lockdep_assert_held(&n->list_lock);
 
 	list_del(&slab->slab_list);
+#ifdef CONFIG_SLUB_DEBUG
+	n->nr_full--;
+#endif
 }
 
 /* Tracking of the number of slabs for debugging purposes */
@@ -3880,6 +3886,7 @@ init_kmem_cache_node(struct kmem_cache_node *n)
 	INIT_LIST_HEAD(&n->partial);
 #ifdef CONFIG_SLUB_DEBUG
 	atomic_long_set(&n->nr_slabs, 0);
+	n->nr_full = 0;
 	atomic_long_set(&n->total_objects, 0);
 	INIT_LIST_HEAD(&n->full);
 #endif
@@ -4994,9 +5001,30 @@ static int validate_slab_node(struct kmem_cache *s,
 	unsigned long count = 0;
 	struct slab *slab;
 	unsigned long flags;
+	unsigned long nr_cpu_slab = 0, nr_cpu_partial = 0;
+	int cpu;
 
 	spin_lock_irqsave(&n->list_lock, flags);
 
+	for_each_possible_cpu(cpu) {
+		struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
+		struct slab *slab;
+
+		slab = READ_ONCE(c->slab);
+		if (slab && n == get_node(s, slab_nid(slab)))
+			nr_cpu_slab += 1;
+#ifdef CONFIG_SLUB_CPU_PARTIAL
+		slab = slub_percpu_partial_read_once(c);
+		if (slab && n == get_node(s, slab_nid(slab)))
+			nr_cpu_partial += slab->slabs;
+#endif
+	}
+	if (nr_cpu_slab || nr_cpu_partial) {
+		pr_err("SLUB %s: %ld cpu slabs and %ld cpu partial slabs counted\n",
+			s->name, nr_cpu_slab, nr_cpu_partial);
+		slab_add_kunit_errors();
+	}
+
 	list_for_each_entry(slab, &n->partial, slab_list) {
 		validate_slab(s, slab, obj_map);
 		count++;
@@ -5010,13 +5038,14 @@ static int validate_slab_node(struct kmem_cache *s,
 	if (!(s->flags & SLAB_STORE_USER))
 		goto out;
 
+	count = 0;
 	list_for_each_entry(slab, &n->full, slab_list) {
 		validate_slab(s, slab, obj_map);
 		count++;
 	}
-	if (count != atomic_long_read(&n->nr_slabs)) {
+	if (count != n->nr_full) {
 		pr_err("SLUB: %s %ld slabs counted but counter=%ld\n",
-		       s->name, count, atomic_long_read(&n->nr_slabs));
+		       s->name, count, n->nr_full);
 		slab_add_kunit_errors();
 	}