Message ID | 20220930100730.250248-1-feng.tang@intel.com
---|---
State | New
Series | [-next] mm/slub: fix a slab missed to be freed problem
On 9/30/22 12:07, Feng Tang wrote:
> Running kasan's and kfence's in-kernel kunit tests with slub_debug on
> caught a problem (in the linux-next tree):
>
> ------------[ cut here ]------------
> kmem_cache_destroy test: Slab cache still has objects when called from test_exit+0x1a/0x30
> WARNING: CPU: 3 PID: 240 at mm/slab_common.c:492 kmem_cache_destroy+0x16c/0x170

Assuming the warning was preceded by some kunit test failures?
I don't see how leaving more empty slabs on the free list than needed
would cause this warning; the shutdown should just drop the empty slabs.

> [...]
>
> It was bisected to commit c7323a5ad078 ("mm/slub: restrict sysfs
> validation to debug caches and make it safe")
>
> The problem is inside free_debug_processing(): in one path, a slab on
> the partial list is removed but never freed when the partial list is full.
>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
>
> Hi reviewers,
>
> Sorry for the late report, but it's curious that this problem didn't
> show up in my earlier test (which caught some other problems).

I think we can reuse slab_free and don't need a new bool?

diff --git a/mm/slub.c b/mm/slub.c
index 5c3c31a154ba..a63953f649ed 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2886,22 +2886,25 @@ static noinline void free_debug_processing(
         set_freepointer(s, tail, prior);
         slab->freelist = head;
 
-        /* Do we need to remove the slab from full or partial list? */
+        /*
+         * If the slab is empty, and node's partial list is full,
+         * it should be discarded anyway no matter it's on full or
+         * partial list.
+         */
+        if (slab->inuse == 0 && n->nr_partial >= s->min_partial)
+            slab_free = slab;
+
         if (!prior) {
+            /* was on full list */
             remove_full(s, n, slab);
-        } else if (slab->inuse == 0 &&
-                   n->nr_partial >= s->min_partial) {
+            if (!slab_free) {
+                add_partial(n, slab, DEACTIVATE_TO_TAIL);
+                stat(s, FREE_ADD_PARTIAL);
+            }
+        } else if (slab_free) {
             remove_partial(n, slab);
             stat(s, FREE_REMOVE_PARTIAL);
         }
-
-        /* Do we need to discard the slab or add to partial list? */
-        if (slab->inuse == 0 && n->nr_partial >= s->min_partial) {
-            slab_free = slab;
-        } else if (!prior) {
-            add_partial(n, slab, DEACTIVATE_TO_TAIL);
-            stat(s, FREE_ADD_PARTIAL);
-        }
     }
 
     if (slab_free) {

> [...]
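As an aside for readers skimming the thread: the effect of this reordering can be seen with a small, self-contained toy model in plain C. This is not the kernel code -- the two-field node/slab structs, the fixed MIN_PARTIAL value, and the inlined stand-ins for remove_full()/add_partial()/remove_partial() are simplifications of what the diff above touches -- but it traces the suggested flow: the discard decision is made exactly once, before any list manipulation, so an empty slab that cannot stay on the partial list is always freed.

#include <stdbool.h>
#include <stdio.h>

/* Toy model of one SLUB node and one slab: only the fields needed to
 * trace the list/discard decisions.  Not the kernel's data layout. */
struct node { int nr_partial; long nr_slabs; };
struct slab { int inuse; bool on_partial; bool on_full; };

#define MIN_PARTIAL 5   /* stands in for s->min_partial */

/* Reworked ordering from the suggested diff: decide "discard?" once,
 * up front, then fix up whichever list the slab was sitting on. */
static void new_free_path(struct node *n, struct slab *slab, bool was_full)
{
    bool discard = (slab->inuse == 0 && n->nr_partial >= MIN_PARTIAL);

    if (was_full) {
        slab->on_full = false;          /* remove_full()    */
        if (!discard) {
            slab->on_partial = true;    /* add_partial()    */
            n->nr_partial++;
        }
    } else if (discard) {
        slab->on_partial = false;       /* remove_partial() */
        n->nr_partial--;
    }

    if (discard)
        n->nr_slabs--;                  /* slab gets freed  */
}

int main(void)
{
    /* Last object of a slab on the partial list is freed while the
     * partial list is already at its minimum-keep size. */
    struct node n = { .nr_partial = MIN_PARTIAL, .nr_slabs = 100 };
    struct slab s = { .inuse = 0, .on_partial = true };

    new_free_path(&n, &s, false);
    printf("on_partial=%d nr_partial=%d nr_slabs=%ld\n",
           s.on_partial, n.nr_partial, n.nr_slabs);
    /* Prints: on_partial=0 nr_partial=4 nr_slabs=99 -- the empty slab
     * leaves the partial list and is actually freed. */
    return 0;
}

Built with any plain C compiler, it shows that with a single up-front check there is no window in which the slab can end up on neither list without also being marked for freeing.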
On Fri, Sep 30, 2022 at 07:25:54PM +0800, Vlastimil Babka wrote:
>
> On 9/30/22 12:07, Feng Tang wrote:
> > Running kasan's and kfence's in-kernel kunit tests with slub_debug on
> > caught a problem (in the linux-next tree):
> >
> > ------------[ cut here ]------------
> > kmem_cache_destroy test: Slab cache still has objects when called from test_exit+0x1a/0x30
> > WARNING: CPU: 3 PID: 240 at mm/slab_common.c:492 kmem_cache_destroy+0x16c/0x170
>
> Assuming the warning was preceded by some kunit test failures?
> I don't see how leaving more empty slabs on the free list than needed
> would cause this warning; the shutdown should just drop the empty slabs.

The previous code only calls remove_partial() to dequeue the slab from
the partial list, and misses calling discard_slab() for it.

From the debug dump, n->nr_partial stays at 5, while n->nr_slabs keeps
increasing. And during shutdown, free_partial() only frees the 5 slabs
on the partial list, so n->nr_slabs still shows a large number of
orphan slabs.

> > [...]
>
> I think we can reuse slab_free and don't need a new bool?

Yes, much simpler!

Thanks,
Feng

> [...]
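To connect these numbers to the warning Vlastimil asked about, a minimal sketch may help (plain C, not kernel code; the counters, the shutdown behavior, and the figures in main() are simplified or arbitrary): shutdown only discards what sits on the partial list, so slabs that were dequeued without being freed keep the per-node slab count non-zero and kmem_cache_destroy() complains that the cache still has objects.

#include <stdio.h>

/* Toy per-node counters: nr_slabs counts every slab the node still owns,
 * nr_partial counts slabs currently on the partial list.  Simplified,
 * not the kernel's structures. */
struct node { long nr_slabs; int nr_partial; };

/* Shutdown only walks the partial list (the full list is expected to be
 * empty once every object has been freed). */
static void free_partial_sketch(struct node *n)
{
    n->nr_slabs -= n->nr_partial;
    n->nr_partial = 0;
}

int main(void)
{
    /* Arbitrary example of the state described above: the partial list
     * is capped at 5, but many empty slabs were dequeued without ever
     * being freed. */
    struct node n = { .nr_slabs = 73, .nr_partial = 5 };

    free_partial_sketch(&n);
    if (n.nr_slabs != 0)
        printf("kmem_cache_destroy: Slab cache still has objects (%ld orphan slabs)\n",
               n.nr_slabs);
    return 0;
}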
On 9/30/22 13:51, Feng Tang wrote:
> On Fri, Sep 30, 2022 at 07:25:54PM +0800, Vlastimil Babka wrote:
>>
>> On 9/30/22 12:07, Feng Tang wrote:
>> > Running kasan's and kfence's in-kernel kunit tests with slub_debug on
>> > caught a problem (in the linux-next tree):
>> >
>> > ------------[ cut here ]------------
>> > kmem_cache_destroy test: Slab cache still has objects when called from test_exit+0x1a/0x30
>> > WARNING: CPU: 3 PID: 240 at mm/slab_common.c:492 kmem_cache_destroy+0x16c/0x170
>>
>> Assuming the warning was preceded by some kunit test failures?
>> I don't see how leaving more empty slabs on the free list than needed
>> would cause this warning; the shutdown should just drop the empty slabs.
>
> The previous code only calls remove_partial() to dequeue the slab from
> the partial list, and misses calling discard_slab() for it.
>
> From the debug dump, n->nr_partial stays at 5, while n->nr_slabs keeps
> increasing. And during shutdown, free_partial() only frees the 5 slabs
> on the partial list, so n->nr_slabs still shows a large number of
> orphan slabs.

Thanks, I finally get the exact cause now. I've added a more detailed
explanation to the commit log and the result is here:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git/commit/?h=for-6.1/slub_validation_locking&id=b731e3575f7a45a46512708f9fdf953b40c46a53

[...]
On Fri, Sep 30, 2022 at 04:43:23PM +0200, Vlastimil Babka wrote:
> On 9/30/22 13:51, Feng Tang wrote:
> > On Fri, Sep 30, 2022 at 07:25:54PM +0800, Vlastimil Babka wrote:
> >>
> >> On 9/30/22 12:07, Feng Tang wrote:
> >> > Running kasan's and kfence's in-kernel kunit tests with slub_debug on
> >> > caught a problem (in the linux-next tree):
> >> >
> >> > ------------[ cut here ]------------
> >> > kmem_cache_destroy test: Slab cache still has objects when called from test_exit+0x1a/0x30
> >> > WARNING: CPU: 3 PID: 240 at mm/slab_common.c:492 kmem_cache_destroy+0x16c/0x170
> >>
> >> Assuming the warning was preceded by some kunit test failures?
> >> I don't see how leaving more empty slabs on the free list than needed
> >> would cause this warning; the shutdown should just drop the empty slabs.
> >
> > The previous code only calls remove_partial() to dequeue the slab from
> > the partial list, and misses calling discard_slab() for it.
> >
> > From the debug dump, n->nr_partial stays at 5, while n->nr_slabs keeps
> > increasing. And during shutdown, free_partial() only frees the 5 slabs
> > on the partial list, so n->nr_slabs still shows a large number of
> > orphan slabs.
>
> Thanks, I finally get the exact cause now. I've added a more detailed
> explanation to the commit log and the result is here:
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git/commit/?h=for-6.1/slub_validation_locking&id=b731e3575f7a45a46512708f9fdf953b40c46a53
>

Very nice finding, Feng, thanks!

Yeah, there are some cases where the first and second check do not
agree, leading to unfreed slabs.

The latest version looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Nits:
- discard_slab() is not what's actually called, but I get what you mean anyway...
- s/Reoganize/Reorganize/g

[...]
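The disagreement mentioned here can be pinned down with the same kind of toy model (again plain C, not kernel code; the structs, MIN_PARTIAL, and the inlined helpers are simplifications): in the pre-fix ordering, remove_partial() runs between the two identical-looking checks and lowers n->nr_partial, so when the partial list is exactly at its minimum size the first check dequeues the slab but the second check no longer passes, and the slab is never freed.

#include <stdbool.h>
#include <stdio.h>

/* Toy model of the *old* ordering in free_debug_processing():
 * list fix-up first, discard decision second. */
struct node { int nr_partial; long nr_slabs; };
struct slab { int inuse; bool on_partial; bool on_full; };

#define MIN_PARTIAL 5   /* stands in for s->min_partial */

static void old_free_path(struct node *n, struct slab *slab, bool was_full)
{
    struct slab *slab_free = NULL;

    /* First check: take the slab off whichever list it is on. */
    if (was_full) {
        slab->on_full = false;          /* remove_full()    */
    } else if (slab->inuse == 0 && n->nr_partial >= MIN_PARTIAL) {
        slab->on_partial = false;       /* remove_partial() */
        n->nr_partial--;                /* <-- state change */
    }

    /* Second check: same condition, but nr_partial may have changed. */
    if (slab->inuse == 0 && n->nr_partial >= MIN_PARTIAL) {
        slab_free = slab;
    } else if (was_full) {
        slab->on_partial = true;        /* add_partial()    */
        n->nr_partial++;
    }

    if (slab_free)
        n->nr_slabs--;                  /* slab gets freed  */
}

int main(void)
{
    /* The losing case: last object freed from a slab on the partial
     * list while nr_partial is exactly at the minimum-keep size. */
    struct node n = { .nr_partial = MIN_PARTIAL, .nr_slabs = 100 };
    struct slab s = { .inuse = 0, .on_partial = true };

    old_free_path(&n, &s, false);
    printf("on_partial=%d on_full=%d nr_partial=%d nr_slabs=%ld\n",
           s.on_partial, s.on_full, n.nr_partial, n.nr_slabs);
    /* Prints: on_partial=0 on_full=0 nr_partial=4 nr_slabs=100 -- the
     * slab sits on no list and was never freed: it leaks. */
    return 0;
}

This is the orphan-slab growth Feng describes above: the partial list stays bounded while the node's slab count keeps climbing.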
diff --git a/mm/slub.c b/mm/slub.c
index 5c3c31a154ba..4c037bd0b22b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2880,28 +2880,34 @@ static noinline void free_debug_processing(
 out:
     if (checks_ok) {
         void *prior = slab->freelist;
+        bool slab_need_discard = false;
 
         /* Perform the actual freeing while we still hold the locks */
         slab->inuse -= cnt;
         set_freepointer(s, tail, prior);
         slab->freelist = head;
 
-        /* Do we need to remove the slab from full or partial list? */
+        /*
+         * If the slab is empty, and node's partial list is full,
+         * it should be discarded anyway no matter it's on full or
+         * partial list.
+         */
+        if (slab->inuse == 0 && n->nr_partial >= s->min_partial) {
+            slab_need_discard = true;
+            slab_free = slab;
+        }
+
         if (!prior) {
+            /* was on full list */
             remove_full(s, n, slab);
-        } else if (slab->inuse == 0 &&
-                   n->nr_partial >= s->min_partial) {
+            if (!slab_need_discard) {
+                add_partial(n, slab, DEACTIVATE_TO_TAIL);
+                stat(s, FREE_ADD_PARTIAL);
+            }
+        } else if (slab_need_discard) {
             remove_partial(n, slab);
             stat(s, FREE_REMOVE_PARTIAL);
         }
-
-        /* Do we need to discard the slab or add to partial list? */
-        if (slab->inuse == 0 && n->nr_partial >= s->min_partial) {
-            slab_free = slab;
-        } else if (!prior) {
-            add_partial(n, slab, DEACTIVATE_TO_TAIL);
-            stat(s, FREE_ADD_PARTIAL);
-        }
     }
 
     if (slab_free) {
Running kasan's and kfence's in-kernel kunit tests with slub_debug on
caught a problem (in the linux-next tree):

------------[ cut here ]------------
kmem_cache_destroy test: Slab cache still has objects when called from test_exit+0x1a/0x30
WARNING: CPU: 3 PID: 240 at mm/slab_common.c:492 kmem_cache_destroy+0x16c/0x170
Modules linked in:
CPU: 3 PID: 240 Comm: kunit_try_catch Tainted: G B N 6.0.0-rc7-next-20220929 #52
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
RIP: 0010:kmem_cache_destroy+0x16c/0x170
Code: 41 5c 41 5d e9 a5 04 0b 00 c3 cc cc cc cc 48 8b 55 60 48 8b 4c 24 20 48 c7 c6 40 37 d2 82 48 c7 c7 e8 a0 33 83 e8 4e d7 14 01 <0f> 0b eb a7 41 56 41 89 d6 41 55 49 89 f5 41 54 49 89 fc 55 48 89
RSP: 0000:ffff88800775fea0 EFLAGS: 00010282
RAX: 0000000000000000 RBX: ffffffff83bdec48 RCX: 0000000000000000
RDX: 0000000000000001 RSI: 1ffff11000eebf9e RDI: ffffed1000eebfc6
RBP: ffff88804362fa00 R08: ffffffff81182e58 R09: ffff88800775fbdf
R10: ffffed1000eebf7b R11: 0000000000000001 R12: 000000008c800d00
R13: ffff888005e78040 R14: 0000000000000000 R15: ffff888005cdfad0
FS:  0000000000000000(0000) GS:ffff88807ed00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000000360e001 CR4: 0000000000370ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 test_exit+0x1a/0x30
 kunit_try_run_case+0xad/0xc0
 kunit_generic_run_threadfn_adapter+0x26/0x50
 kthread+0x17b/0x1b0

It was bisected to commit c7323a5ad078 ("mm/slub: restrict sysfs
validation to debug caches and make it safe")

The problem is inside free_debug_processing(): in one path, a slab on
the partial list is removed but never freed when the partial list is full.

Signed-off-by: Feng Tang <feng.tang@intel.com>
---

Hi reviewers,

Sorry for the late report, but it's curious that this problem didn't
show up in my earlier test (which caught some other problems).

 mm/slub.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)