Message ID | 20240311132720.37741-1-sxwjean@me.com (mailing list archive)
---|---
State | New
Series | mm/slub: Simplify get_partial_node()
On 2024/3/11 21:27, sxwjean@me.com wrote:
> From: Xiongwei Song <xiongwei.song@windriver.com>
>
> Remove the check of !kmem_cache_has_cpu_partial() because it is always
> false, we've known this by calling kmem_cache_debug() before calling
> remove_partial(), so we can remove the check.

This is correct.

> Meanwhile, redo filling cpu partial and add comment to improve the
> readability.
>
> Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
> ---
>  mm/slub.c | 22 ++++++++++++----------
>  1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index a3ab096c38c0..62388f2a0ac7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2620,19 +2620,21 @@ static struct slab *get_partial_node(struct kmem_cache *s,
>  		if (!partial) {
>  			partial = slab;
>  			stat(s, ALLOC_FROM_PARTIAL);
> -		} else {
> +
> +			/* Fill cpu partial if needed from next iteration, or break */
> +			if (IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL))
> +				continue;
> +			else
> +				break;
> +		}
> +
> +		if (IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL)) {

But this won't work, since "s->cpu_partial_slabs" is only defined under
CONFIG_SLUB_CPU_PARTIAL; you would get a compiler error when building
without CONFIG_SLUB_CPU_PARTIAL.

Thanks.

>  			put_cpu_partial(s, slab, 0);
>  			stat(s, CPU_PARTIAL_NODE);
> -			partial_slabs++;
> -		}
> -#ifdef CONFIG_SLUB_CPU_PARTIAL
> -		if (!kmem_cache_has_cpu_partial(s)
> -			|| partial_slabs > s->cpu_partial_slabs / 2)
> -			break;
> -#else
> -		break;
> -#endif
>
> +		if (++partial_slabs > s->cpu_partial_slabs/2)
> +			break;
> +		}
>  	}
>  	spin_unlock_irqrestore(&n->list_lock, flags);
>  	return partial;
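A minimal, self-contained sketch of why this objection holds (not SLUB code: demo_cache, enough_partials() and CONFIG_DEMO_CPU_PARTIAL are made-up names, and IS_ENABLED() is a simplified stand-in for the kernel macro). The point is that if (IS_ENABLED(...)) is an ordinary C condition, so the compiler still parses and type-checks the dead branch; a struct member that only exists when the option is on therefore breaks the build when the option is off.

#include <stdio.h>

#define CONFIG_DEMO_CPU_PARTIAL 0		/* pretend the option is off */
#define IS_ENABLED(option) (option)		/* simplified stand-in */

struct demo_cache {
	const char *name;
#if IS_ENABLED(CONFIG_DEMO_CPU_PARTIAL)
	unsigned int cpu_partial_slabs;		/* only exists when the option is on */
#endif
};

static int enough_partials(struct demo_cache *s, unsigned int partial_slabs)
{
	(void)s;			/* only touched in the option-on branch */
	(void)partial_slabs;

	if (IS_ENABLED(CONFIG_DEMO_CPU_PARTIAL)) {
		/*
		 * This branch is dead when the option is off, but the compiler
		 * still type-checks it, so uncommenting the member access below
		 * is a build error in the "off" configuration:
		 *
		 *	return partial_slabs > s->cpu_partial_slabs / 2;
		 */
		return 1;
	}
	return 1;	/* no per-cpu partial list: stop after the first slab */
}

int main(void)
{
	struct demo_cache c = { .name = "demo" };

	printf("stop filling: %d\n", enough_partials(&c, 3));
	return 0;
}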
> On 2024/3/11 21:27, sxwjean@me.com wrote:
> > From: Xiongwei Song <xiongwei.song@windriver.com>
> >
> > Remove the check of !kmem_cache_has_cpu_partial() because it is always
> > false, we've known this by calling kmem_cache_debug() before calling
> > remove_partial(), so we can remove the check.
>
> This is correct.
>
> > Meanwhile, redo filling cpu partial and add comment to improve the
> > readability.
> >
> > Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
> > ---
> >  mm/slub.c | 22 ++++++++++++----------
> >  1 file changed, 12 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index a3ab096c38c0..62388f2a0ac7 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2620,19 +2620,21 @@ static struct slab *get_partial_node(struct kmem_cache *s,
> >  		if (!partial) {
> >  			partial = slab;
> >  			stat(s, ALLOC_FROM_PARTIAL);
> > -		} else {
> > +
> > +			/* Fill cpu partial if needed from next iteration, or break */
> > +			if (IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL))
> > +				continue;
> > +			else
> > +				break;
> > +		}
> > +
> > +		if (IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL)) {
>
> But this won't work, since "s->cpu_partial_slabs" is only defined under
> CONFIG_SLUB_CPU_PARTIAL; you would get a compiler error when building
> without CONFIG_SLUB_CPU_PARTIAL.

Yes, maybe we can use "#if IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL)" instead.

Regards,
Xiongwei

>
> Thanks.
>
> >  			put_cpu_partial(s, slab, 0);
> >  			stat(s, CPU_PARTIAL_NODE);
> > -			partial_slabs++;
> > -		}
> > -#ifdef CONFIG_SLUB_CPU_PARTIAL
> > -		if (!kmem_cache_has_cpu_partial(s)
> > -			|| partial_slabs > s->cpu_partial_slabs / 2)
> > -			break;
> > -#else
> > -		break;
> > -#endif
> >
> > +		if (++partial_slabs > s->cpu_partial_slabs/2)
> > +			break;
> > +		}
> >  	}
> >  	spin_unlock_irqrestore(&n->list_lock, flags);
> >  	return partial;
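The same toy example with the preprocessor-level guard suggested above: #if IS_ENABLED(...) removes the block before compilation, so the member access never reaches the compiler when the option is off. Again a sketch with made-up names; the real kernel IS_ENABLED() is more elaborate (it also handles undefined options and =m), but it is likewise usable in both #if and if ().

#include <stdio.h>

#define CONFIG_DEMO_CPU_PARTIAL 0		/* pretend the option is off */
#define IS_ENABLED(option) (option)		/* simplified stand-in */

struct demo_cache {
	const char *name;
#if IS_ENABLED(CONFIG_DEMO_CPU_PARTIAL)
	unsigned int cpu_partial_slabs;
#endif
};

static int enough_partials(const struct demo_cache *s, unsigned int partial_slabs)
{
#if IS_ENABLED(CONFIG_DEMO_CPU_PARTIAL)
	/* Only compiled when the option is on, so the member access is safe. */
	return partial_slabs > s->cpu_partial_slabs / 2;
#else
	(void)s;
	(void)partial_slabs;
	return 1;	/* no per-cpu partial list: stop after the first slab */
#endif
}

int main(void)
{
	struct demo_cache c = { .name = "demo" };

	printf("stop filling: %d\n", enough_partials(&c, 3));
	return 0;
}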
diff --git a/mm/slub.c b/mm/slub.c
index a3ab096c38c0..62388f2a0ac7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2620,19 +2620,21 @@ static struct slab *get_partial_node(struct kmem_cache *s,
 		if (!partial) {
 			partial = slab;
 			stat(s, ALLOC_FROM_PARTIAL);
-		} else {
+
+			/* Fill cpu partial if needed from next iteration, or break */
+			if (IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL))
+				continue;
+			else
+				break;
+		}
+
+		if (IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL)) {
 			put_cpu_partial(s, slab, 0);
 			stat(s, CPU_PARTIAL_NODE);
-			partial_slabs++;
-		}
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-		if (!kmem_cache_has_cpu_partial(s)
-			|| partial_slabs > s->cpu_partial_slabs / 2)
-			break;
-#else
-		break;
-#endif
 
+		if (++partial_slabs > s->cpu_partial_slabs/2)
+			break;
+		}
 	}
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return partial;