Message ID | 20240117-slab-misc-v1-2-fd1c49ccbe70@bytedance.com
---|---
State | New
Series | mm/slub: some minor optimization and cleanup
On Wed, 17 Jan 2024, Chengming Zhou wrote:

> Since debug slabs are processed by free_to_partial_list(), and only
> debug slabs with the SLAB_STORE_USER flag care about the full list, we
> can remove these unrelated full list manipulations from __slab_free().

Acked-by: Christoph Lameter (Ampere) <cl@linux.com>
On 1/17/24 12:45, Chengming Zhou wrote:

> Since debug slabs are processed by free_to_partial_list(), and only
> debug slabs with the SLAB_STORE_USER flag care about the full list, we
> can remove these unrelated full list manipulations from __slab_free().

Well spotted.

> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/slub.c | 4 ----
>  1 file changed, 4 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 20c03555c97b..f0307e8b4cd2 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4187,7 +4187,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  	 * then add it.
>  	 */
>  	if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) {
> -		remove_full(s, n, slab);
>  		add_partial(n, slab, DEACTIVATE_TO_TAIL);
>  		stat(s, FREE_ADD_PARTIAL);
>  	}
> @@ -4201,9 +4200,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  		 */
>  		remove_partial(n, slab);
>  		stat(s, FREE_REMOVE_PARTIAL);
> -	} else {
> -		/* Slab must be on the full list */
> -		remove_full(s, n, slab);
>  	}
>
>  	spin_unlock_irqrestore(&n->list_lock, flags);
diff --git a/mm/slub.c b/mm/slub.c
index 20c03555c97b..f0307e8b4cd2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4187,7 +4187,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	 * then add it.
 	 */
 	if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) {
-		remove_full(s, n, slab);
 		add_partial(n, slab, DEACTIVATE_TO_TAIL);
 		stat(s, FREE_ADD_PARTIAL);
 	}
@@ -4201,9 +4200,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		 */
 		remove_partial(n, slab);
 		stat(s, FREE_REMOVE_PARTIAL);
-	} else {
-		/* Slab must be on the full list */
-		remove_full(s, n, slab);
 	}
 
 	spin_unlock_irqrestore(&n->list_lock, flags);
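For context on why these deletions are safe: __slab_free() diverts every debug cache to free_to_partial_list() at the top of the function and returns, so the code touched by this patch only ever runs for non-debug caches, which never put slabs on the full list. A simplified sketch of that entry path (paraphrased from mm/slub.c of this era, not the verbatim source):

static void __slab_free(struct kmem_cache *s, struct slab *slab,
			void *head, void *tail, int cnt,
			unsigned long addr)
{
	stat(s, FREE_SLOWPATH);

	/*
	 * Debug caches -- the only ones that may set SLAB_STORE_USER and
	 * thus maintain the node's full list -- take this early exit, so
	 * the remove_full() calls deleted above were unreachable for them.
	 */
	if (kmem_cache_debug(s)) {
		free_to_partial_list(s, slab, head, tail, cnt, addr);
		return;
	}

	/* ... lockless freelist update and partial-list handling ... */
}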
Since debug slabs are processed by free_to_partial_list(), and only
debug slabs with the SLAB_STORE_USER flag care about the full list, we
can remove these unrelated full list manipulations from __slab_free().

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 mm/slub.c | 4 ----
 1 file changed, 4 deletions(-)
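For reference, remove_full() is itself a no-op unless the cache carries SLAB_STORE_USER, which is why only that flag matters for the full list. A sketch of its shape (paraphrased from the CONFIG_SLUB_DEBUG code in mm/slub.c around this series; check the tree for the exact definition):

static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n,
			struct slab *slab)
{
	/* Only SLAB_STORE_USER caches track full slabs at all. */
	if (!(s->flags & SLAB_STORE_USER))
		return;

	lockdep_assert_held(&n->list_lock);
	list_del(&slab->slab_list);
}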