Message ID | 20190308041426.16654-8-tobin@kernel.org (mailing list archive) |
---|---|
State | New, archived |
Series | mm: Implement Slab Movable Objects (SMO) |
On Fri, Mar 08, 2019 at 03:14:18PM +1100, Tobin C. Harding wrote:
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3642,6 +3642,7 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
>
>  	set_cpu_partial(s);
>
> +	s->defrag_used_ratio = 30;
>  #ifdef CONFIG_NUMA
>  	s->remote_node_defrag_ratio = 1000;
>  #endif
> @@ -5261,6 +5262,28 @@ static ssize_t destroy_by_rcu_show(struct kmem_cache *s, char *buf)
>  }
>  SLAB_ATTR_RO(destroy_by_rcu);
>
> +static ssize_t defrag_used_ratio_show(struct kmem_cache *s, char *buf)
> +{
> +	return sprintf(buf, "%d\n", s->defrag_used_ratio);
> +}
> +
> +static ssize_t defrag_used_ratio_store(struct kmem_cache *s,
> +				const char *buf, size_t length)
> +{
> +	unsigned long ratio;
> +	int err;
> +
> +	err = kstrtoul(buf, 10, &ratio);
> +	if (err)
> +		return err;
> +
> +	if (ratio <= 100)
> +		s->defrag_used_ratio = ratio;

else
	return -EINVAL;

maybe?

Tycho
On Fri, Mar 08, 2019 at 09:01:51AM -0700, Tycho Andersen wrote:
> On Fri, Mar 08, 2019 at 03:14:18PM +1100, Tobin C. Harding wrote:
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3642,6 +3642,7 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
> >
> >  	set_cpu_partial(s);
> >
> > +	s->defrag_used_ratio = 30;
> >  #ifdef CONFIG_NUMA
> >  	s->remote_node_defrag_ratio = 1000;
> >  #endif
> > @@ -5261,6 +5262,28 @@ static ssize_t destroy_by_rcu_show(struct kmem_cache *s, char *buf)
> >  }
> >  SLAB_ATTR_RO(destroy_by_rcu);
> >
> > +static ssize_t defrag_used_ratio_show(struct kmem_cache *s, char *buf)
> > +{
> > +	return sprintf(buf, "%d\n", s->defrag_used_ratio);
> > +}
> > +
> > +static ssize_t defrag_used_ratio_store(struct kmem_cache *s,
> > +				const char *buf, size_t length)
> > +{
> > +	unsigned long ratio;
> > +	int err;
> > +
> > +	err = kstrtoul(buf, 10, &ratio);
> > +	if (err)
> > +		return err;
> > +
> > +	if (ratio <= 100)
> > +		s->defrag_used_ratio = ratio;
>
> else
> 	return -EINVAL;

Nice, thanks.  I moulded your suggestion into

	if (ratio > 100)
		return -EINVAL;

	s->defrag_used_ratio = ratio;

	return length;

thanks,
Tobin.
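Folding that change into the hunk quoted above gives a store handler along these lines (a sketch of the result as discussed in the thread, not the final committed code):

	static ssize_t defrag_used_ratio_store(struct kmem_cache *s,
					       const char *buf, size_t length)
	{
		unsigned long ratio;
		int err;

		err = kstrtoul(buf, 10, &ratio);
		if (err)
			return err;

		/* defrag_used_ratio is a percentage; reject anything above 100. */
		if (ratio > 100)
			return -EINVAL;

		s->defrag_used_ratio = ratio;

		return length;
	}

This also fixes a quirk of the posted version, which silently accepted (and ignored) out-of-range values while still returning success.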
diff --git a/Documentation/ABI/testing/sysfs-kernel-slab b/Documentation/ABI/testing/sysfs-kernel-slab
index 29601d93a1c2..7770c03be6b4 100644
--- a/Documentation/ABI/testing/sysfs-kernel-slab
+++ b/Documentation/ABI/testing/sysfs-kernel-slab
@@ -180,6 +180,20 @@ Description:
 		list.  It can be written to clear the current count.
 		Available when CONFIG_SLUB_STATS is enabled.
 
+What:		/sys/kernel/slab/cache/defrag_used_ratio
+Date:		February 2019
+KernelVersion:	5.0
+Contact:	Christoph Lameter <cl@linux-foundation.org>
+		Pekka Enberg <penberg@cs.helsinki.fi>,
+Description:
+		The defrag_used_ratio file allows the control of how aggressive
+		slab fragmentation reduction works at reclaiming objects from
+		sparsely populated slabs. This is a percentage. If a slab has
+		less than this percentage of objects allocated then reclaim will
+		attempt to reclaim objects so that the whole slab page can be
+		freed. 0% specifies no reclaim attempt (defrag disabled), 100%
+		specifies attempt to reclaim all pages. The default is 30%.
+
 What:		/sys/kernel/slab/cache/deactivate_to_tail
 Date:		February 2008
 KernelVersion:	2.6.25
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index a7340a1ed5dc..6da6197ca973 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -107,6 +107,13 @@ struct kmem_cache {
 	unsigned int red_left_pad;	/* Left redzone padding size */
 	const char *name;	/* Name (only for display!) */
 	struct list_head list;	/* List of slab caches */
+	int defrag_used_ratio;	/*
+				 * Ratio used to check against the
+				 * percentage of objects allocated in a
+				 * slab page. If less than this ratio
+				 * is allocated then reclaim attempts
+				 * are made.
+				 */
 #ifdef CONFIG_SYSFS
 	struct kobject kobj;	/* For sysfs */
 	struct work_struct kobj_remove_work;
diff --git a/mm/slub.c b/mm/slub.c
index f37103e22d3f..515db0f36c55 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3642,6 +3642,7 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 
 	set_cpu_partial(s);
 
+	s->defrag_used_ratio = 30;
 #ifdef CONFIG_NUMA
 	s->remote_node_defrag_ratio = 1000;
 #endif
@@ -5261,6 +5262,28 @@ static ssize_t destroy_by_rcu_show(struct kmem_cache *s, char *buf)
 }
 SLAB_ATTR_RO(destroy_by_rcu);
 
+static ssize_t defrag_used_ratio_show(struct kmem_cache *s, char *buf)
+{
+	return sprintf(buf, "%d\n", s->defrag_used_ratio);
+}
+
+static ssize_t defrag_used_ratio_store(struct kmem_cache *s,
+				const char *buf, size_t length)
+{
+	unsigned long ratio;
+	int err;
+
+	err = kstrtoul(buf, 10, &ratio);
+	if (err)
+		return err;
+
+	if (ratio <= 100)
+		s->defrag_used_ratio = ratio;
+
+	return length;
+}
+SLAB_ATTR(defrag_used_ratio);
+
 #ifdef CONFIG_SLUB_DEBUG
 static ssize_t slabs_show(struct kmem_cache *s, char *buf)
 {
@@ -5585,6 +5608,7 @@ static struct attribute *slab_attrs[] = {
 	&validate_attr.attr,
 	&alloc_calls_attr.attr,
 	&free_calls_attr.attr,
+	&defrag_used_ratio_attr.attr,
 #endif
 #ifdef CONFIG_ZONE_DMA
 	&cache_dma_attr.attr,
In preparation for enabling defragmentation of slab pages, add
"defrag_used_ratio", which sets the threshold at which defragmentation
should be attempted on a slab page.

"defrag_used_ratio" is a percentage in the range of 0 - 100 (inclusive).
If less than that percentage of slots in a slab page are in use then the
slab page becomes subject to defragmentation.

Add a defrag_used_ratio field and set it to 30% by default.  A limit of
30% specifies that more than 3 out of 10 available slots for objects need
to be in use, otherwise slab defragmentation will be attempted on the
remaining objects.

Co-developed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
 Documentation/ABI/testing/sysfs-kernel-slab | 14 ++++++++++++
 include/linux/slub_def.h                    |  7 ++++++
 mm/slub.c                                   | 24 +++++++++++++++++++++
 3 files changed, 45 insertions(+)
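For illustration of the threshold semantics only: the helper below is hypothetical (the defragmentation pass that would consume defrag_used_ratio arrives in later patches of this series), and its "inuse"/"objects" parameter names are assumptions. It encodes just the percentage comparison described above.

	/*
	 * Hypothetical sketch, not part of this patch: decide whether a slab
	 * page holding "inuse" out of "objects" total slots is sparsely
	 * enough populated to be a defragmentation candidate.
	 */
	static bool slab_defrag_candidate(int inuse, int objects,
					  int defrag_used_ratio)
	{
		/* A ratio of 0 disables defragmentation attempts entirely. */
		if (!defrag_used_ratio)
			return false;

		/*
		 * With the default of 30%, a slab page with 10 slots becomes a
		 * candidate when 3 or fewer objects are in use.
		 */
		return inuse * 100 <= objects * defrag_used_ratio;
	}

Per the ABI documentation above, writing a larger value to /sys/kernel/slab/<cache>/defrag_used_ratio makes reclaim more aggressive (more slab pages qualify), while writing 0 switches the mechanism off.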