Message ID: 20210916123920.48704-6-linmiaohe@huawei.com (mailing list archive)
State:      New
Series:     Fixups for slub
On 9/16/21 14:39, Miaohe Lin wrote:
> kmem_cache_free_bulk() will call memcg_slab_free_hook() for all objects
> when doing bulk free. So we shouldn't call memcg_slab_free_hook() again
> for bulk free to avoid incorrect memcg slab count.
>
> Fixes: d1b2cf6cb84a ("mm: memcg/slab: uncharge during kmem_cache_free_bulk()")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

I now noticed the series doesn't Cc: stable and it should, so I hope Andrew
can add those together with the review tags. Thanks.

> ---
>  mm/slub.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index f3df0f04a472..d8f77346376d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3420,7 +3420,9 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
>  	struct kmem_cache_cpu *c;
>  	unsigned long tid;
>
> -	memcg_slab_free_hook(s, &head, 1);
> +	/* memcg_slab_free_hook() is already called for bulk free. */
> +	if (!tail)
> +		memcg_slab_free_hook(s, &head, 1);
>  redo:
>  	/*
>  	 * Determine the currently cpus per cpu slab.
>
On Tue, 5 Oct 2021 12:50:08 +0200 Vlastimil Babka <vbabka@suse.cz> wrote:

> I now noticed the series doesn't Cc: stable and it should, so I hope Andrew
> can add those together with the review tags. Thanks.

Done, thanks.
kmem_cache_free_bulk() will call memcg_slab_free_hook() for all objects
when doing bulk free. So we shouldn't call memcg_slab_free_hook() again
for bulk free to avoid incorrect memcg slab count.

Fixes: d1b2cf6cb84a ("mm: memcg/slab: uncharge during kmem_cache_free_bulk()")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index f3df0f04a472..d8f77346376d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3420,7 +3420,9 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	struct kmem_cache_cpu *c;
 	unsigned long tid;

-	memcg_slab_free_hook(s, &head, 1);
+	/* memcg_slab_free_hook() is already called for bulk free. */
+	if (!tail)
+		memcg_slab_free_hook(s, &head, 1);
 redo:
 	/*
 	 * Determine the currently cpus per cpu slab.