Message ID | 20240212213922.783301-21-surenb@google.com (mailing list archive)
---|---
State | New, archived
Series | Memory allocation profiling
On 2/12/24 22:39, Suren Baghdasaryan wrote:
> To store code tag for every slab object, a codetag reference is embedded
> into slabobj_ext when CONFIG_MEM_ALLOC_PROFILING=y.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
> ---
>  include/linux/memcontrol.h | 5 +++++
>  lib/Kconfig.debug          | 1 +
>  mm/slab.h                  | 4 ++++
>  3 files changed, 10 insertions(+)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index f3584e98b640..2b010316016c 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1653,7 +1653,12 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
>   * if MEMCG_DATA_OBJEXTS is set.
>   */
>  struct slabobj_ext {
> +#ifdef CONFIG_MEMCG_KMEM
>  	struct obj_cgroup *objcg;
> +#endif
> +#ifdef CONFIG_MEM_ALLOC_PROFILING
> +	union codetag_ref ref;
> +#endif
>  } __aligned(8);

So this means that compiling with CONFIG_MEM_ALLOC_PROFILING will increase
the memory overhead of arrays allocated for CONFIG_MEMCG_KMEM, even if
allocation profiling itself is not enabled at runtime? A similar concern to
the unconditional page_ext usage: this would hinder enabling it in a
general distro kernel.

The unused-field overhead would be smaller than page_ext's is currently, but
getting rid of it when alloc profiling is not enabled would be more work
than introducing an early boot param was for the page_ext case. It could,
however, be solved similarly to how page_ext is populated dynamically at
runtime. Hopefully that wouldn't add noticeable cpu overhead.
>  static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
>
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 7bbdb0ddb011..9ecfcdb54417 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -979,6 +979,7 @@ config MEM_ALLOC_PROFILING
>  	depends on !DEBUG_FORCE_WEAK_PER_CPU
>  	select CODE_TAGGING
>  	select PAGE_EXTENSION
> +	select SLAB_OBJ_EXT
>  	help
>  	  Track allocation source code and record total allocation size
>  	  initiated at that code location. The mechanism can be used to track
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 77cf7474fe46..224a4b2305fb 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -569,6 +569,10 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
>
>  static inline bool need_slab_obj_ext(void)
>  {
> +#ifdef CONFIG_MEM_ALLOC_PROFILING
> +	if (mem_alloc_profiling_enabled())
> +		return true;
> +#endif
>  	/*
>  	 * CONFIG_MEMCG_KMEM creates vector of obj_cgroup objects conditionally
>  	 * inside memcg_slab_post_alloc_hook. No other users for now.
On Fri, Feb 16, 2024 at 7:36 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 2/12/24 22:39, Suren Baghdasaryan wrote:
> > To store code tag for every slab object, a codetag reference is embedded
> > into slabobj_ext when CONFIG_MEM_ALLOC_PROFILING=y.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
> > Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
> > ---
> >  include/linux/memcontrol.h | 5 +++++
> >  lib/Kconfig.debug          | 1 +
> >  mm/slab.h                  | 4 ++++
> >  3 files changed, 10 insertions(+)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index f3584e98b640..2b010316016c 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -1653,7 +1653,12 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
> >   * if MEMCG_DATA_OBJEXTS is set.
> >   */
> >  struct slabobj_ext {
> > +#ifdef CONFIG_MEMCG_KMEM
> >  	struct obj_cgroup *objcg;
> > +#endif
> > +#ifdef CONFIG_MEM_ALLOC_PROFILING
> > +	union codetag_ref ref;
> > +#endif
> >  } __aligned(8);
>
> So this means that compiling with CONFIG_MEM_ALLOC_PROFILING will increase
> the memory overhead of arrays allocated for CONFIG_MEMCG_KMEM, even if
> allocation profiling itself is not enabled at runtime? A similar concern to
> the unconditional page_ext usage: this would hinder enabling it in a
> general distro kernel.
>
> The unused-field overhead would be smaller than page_ext's is currently, but
> getting rid of it when alloc profiling is not enabled would be more work
> than introducing an early boot param was for the page_ext case. It could,
> however, be solved similarly to how page_ext is populated dynamically at
> runtime. Hopefully that wouldn't add noticeable cpu overhead.

Yes, the slabobj_ext overhead is much smaller than the page_ext one, but it
is still considerable, and it would be harder to eliminate.
Boot-time resizing of the extension object might be doable, but that again
would be quite complex and is better done as a separate patchset. It is
lower on my TODO list than the page_ext one, since the overhead is an order
of magnitude smaller.

> > >  static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
> >
> > diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> > index 7bbdb0ddb011..9ecfcdb54417 100644
> > --- a/lib/Kconfig.debug
> > +++ b/lib/Kconfig.debug
> > @@ -979,6 +979,7 @@ config MEM_ALLOC_PROFILING
> >  	depends on !DEBUG_FORCE_WEAK_PER_CPU
> >  	select CODE_TAGGING
> >  	select PAGE_EXTENSION
> > +	select SLAB_OBJ_EXT
> >  	help
> >  	  Track allocation source code and record total allocation size
> >  	  initiated at that code location. The mechanism can be used to track
> >
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 77cf7474fe46..224a4b2305fb 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -569,6 +569,10 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> >
> >  static inline bool need_slab_obj_ext(void)
> >  {
> > +#ifdef CONFIG_MEM_ALLOC_PROFILING
> > +	if (mem_alloc_profiling_enabled())
> > +		return true;
> > +#endif
> >  	/*
> >  	 * CONFIG_MEMCG_KMEM creates vector of obj_cgroup objects conditionally
> >  	 * inside memcg_slab_post_alloc_hook. No other users for now.
>
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index f3584e98b640..2b010316016c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1653,7 +1653,12 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
  * if MEMCG_DATA_OBJEXTS is set.
  */
 struct slabobj_ext {
+#ifdef CONFIG_MEMCG_KMEM
 	struct obj_cgroup *objcg;
+#endif
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+	union codetag_ref ref;
+#endif
 } __aligned(8);
 
 static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 7bbdb0ddb011..9ecfcdb54417 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -979,6 +979,7 @@ config MEM_ALLOC_PROFILING
 	depends on !DEBUG_FORCE_WEAK_PER_CPU
 	select CODE_TAGGING
 	select PAGE_EXTENSION
+	select SLAB_OBJ_EXT
 	help
 	  Track allocation source code and record total allocation size
 	  initiated at that code location. The mechanism can be used to track
diff --git a/mm/slab.h b/mm/slab.h
index 77cf7474fe46..224a4b2305fb 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -569,6 +569,10 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
 
 static inline bool need_slab_obj_ext(void)
 {
+#ifdef CONFIG_MEM_ALLOC_PROFILING
+	if (mem_alloc_profiling_enabled())
+		return true;
+#endif
 	/*
 	 * CONFIG_MEMCG_KMEM creates vector of obj_cgroup objects conditionally
 	 * inside memcg_slab_post_alloc_hook. No other users for now.