| Message ID | 20230624053120.643409-2-senozhatsky@chromium.org |
|---|---|
| State | New |
| Series | zsmalloc: small compaction improvements |
Hi,

On Sat, Jun 24, 2023 at 02:12:14PM +0900, Sergey Senozhatsky wrote:
> zspage migration can terminate as soon as it moves the last
> allocated object from the source zspage. Add a simple helper
> zspage_empty() that tests zspage ->inuse on each migration
> iteration.
>
> Suggested-by: Alexey Romanov <AVRomanov@sberdevices.ru>
> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> ---
>  mm/zsmalloc.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 3f057970504e..5d60eaedc3b7 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1147,6 +1147,11 @@ static bool zspage_full(struct size_class *class, struct zspage *zspage)
>  	return get_zspage_inuse(zspage) == class->objs_per_zspage;
>  }
>
> +static bool zspage_empty(struct zspage *zspage)
> +{
> +	return get_zspage_inuse(zspage) == 0;
> +}
> +
>  /**
>   * zs_lookup_class_index() - Returns index of the zsmalloc &size_class
>   * that hold objects of the provided size.
> @@ -1625,6 +1630,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
>  		obj_idx++;
>  		record_obj(handle, free_obj);
>  		obj_free(class->size, used_obj);
> +
> +		/* Stop if there are no more objects to migrate */
> +		if (zspage_empty(get_zspage(s_page)))
> +			break;
>  	}
>
>  	/* Remember last position in this iteration */
> --
> 2.41.0.162.gfafddb0af9-goog

Not sure if I can keep this tag, but:

Reviewed-by: Alexey Romanov <avromanov@sberdevices.ru>
On Sat, Jun 24, 2023 at 02:12:14PM +0900, Sergey Senozhatsky wrote:
> zspage migration can terminate as soon as it moves the last
> allocated object from the source zspage. Add a simple helper
> zspage_empty() that tests zspage ->inuse on each migration
> iteration.
>
> Suggested-by: Alexey Romanov <AVRomanov@sberdevices.ru>
> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>

Acked-by: Minchan Kim <minchan@kernel.org>
On (23/06/26 10:57), Alexey Romanov wrote:
> not sure if I can keep this tag but,

Sure, why not.

> Reviewed-by: Alexey Romanov <avromanov@sberdevices.ru>
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3f057970504e..5d60eaedc3b7 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1147,6 +1147,11 @@ static bool zspage_full(struct size_class *class, struct zspage *zspage)
 	return get_zspage_inuse(zspage) == class->objs_per_zspage;
 }
 
+static bool zspage_empty(struct zspage *zspage)
+{
+	return get_zspage_inuse(zspage) == 0;
+}
+
 /**
  * zs_lookup_class_index() - Returns index of the zsmalloc &size_class
  * that hold objects of the provided size.
@@ -1625,6 +1630,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		obj_idx++;
 		record_obj(handle, free_obj);
 		obj_free(class->size, used_obj);
+
+		/* Stop if there are no more objects to migrate */
+		if (zspage_empty(get_zspage(s_page)))
+			break;
 	}
 
 	/* Remember last position in this iteration */
zspage migration can terminate as soon as it moves the last
allocated object from the source zspage. Add a simple helper
zspage_empty() that tests zspage ->inuse on each migration
iteration.

Suggested-by: Alexey Romanov <AVRomanov@sberdevices.ru>
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 mm/zsmalloc.c | 9 +++++++++
 1 file changed, 9 insertions(+)
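For readers unfamiliar with zsmalloc internals, the effect of the patch can be shown with a minimal, self-contained sketch. The toy_zspage structure, its slot array, and migrate_zspage_toy() below are illustrative stand-ins, not the real mm/zsmalloc.c definitions; only the ->inuse == 0 test mirrors the actual helper.

/* Toy model of the early exit added by this patch; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define TOY_SLOTS 8

struct toy_zspage {
	bool allocated[TOY_SLOTS];	/* which object slots are live */
	int inuse;			/* count of live objects */
};

/* Mirrors the new helper: an empty zspage has no objects in use. */
static bool zspage_empty(const struct toy_zspage *zspage)
{
	return zspage->inuse == 0;
}

static void migrate_zspage_toy(struct toy_zspage *src, struct toy_zspage *dst)
{
	for (int i = 0; i < TOY_SLOTS; i++) {
		if (!src->allocated[i])
			continue;	/* skip holes, as the real scan does */

		/* "Move" one live object to the destination page. */
		src->allocated[i] = false;
		src->inuse--;
		dst->inuse++;

		/*
		 * Stop if there are no more objects to migrate; without
		 * this check the loop would keep scanning the remaining
		 * (now pointless) slots of the source page.
		 */
		if (zspage_empty(src))
			break;
	}
}

int main(void)
{
	/* Two live objects in the first two slots of an 8-slot page. */
	struct toy_zspage src = { .allocated = { true, true }, .inuse = 2 };
	struct toy_zspage dst = { 0 };

	migrate_zspage_toy(&src, &dst);
	printf("src inuse=%d dst inuse=%d\n", src.inuse, dst.inuse);
	return 0;
}

The real migrate_zspage() locates live objects by scanning the source page, but the design point is the same: ->inuse is already maintained on every allocation and free, so the per-iteration emptiness test is a cheap counter comparison, and it lets migration stop the moment the last object has been moved.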