Message ID | 20230501165450.15352-35-surenb@google.com (mailing list archive)
---|---
State | New
Series | Memory allocation profiling
On Mon 01-05-23 09:54:44, Suren Baghdasaryan wrote:
[...]
> +static inline void add_ctx(struct codetag_ctx *ctx,
> +                           struct codetag_with_ctx *ctc)
> +{
> +        kref_init(&ctx->refcount);
> +        spin_lock(&ctc->ctx_lock);
> +        ctx->flags = CTC_FLAG_CTX_PTR;
> +        ctx->ctc = ctc;
> +        list_add_tail(&ctx->node, &ctc->ctx_head);
> +        spin_unlock(&ctc->ctx_lock);

AFAIU every single tracked allocation will get its own codetag_ctx.
There is no aggregation per allocation site or anything else. This looks
like a scalability and a memory overhead red flag to me.

> +}
> +
> +static inline void rem_ctx(struct codetag_ctx *ctx,
> +                           void (*free_ctx)(struct kref *refcount))
> +{
> +        struct codetag_with_ctx *ctc = ctx->ctc;
> +
> +        spin_lock(&ctc->ctx_lock);

This could deadlock when allocator is called from the IRQ context.

> +        /* ctx might have been removed while we were using it */
> +        if (!list_empty(&ctx->node))
> +                list_del_init(&ctx->node);
> +        spin_unlock(&ctc->ctx_lock);
> +        kref_put(&ctx->refcount, free_ctx);
On Wed, May 3, 2023 at 12:36 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Mon 01-05-23 09:54:44, Suren Baghdasaryan wrote:
> [...]
> > +static inline void add_ctx(struct codetag_ctx *ctx,
> > +                           struct codetag_with_ctx *ctc)
> > +{
> > +        kref_init(&ctx->refcount);
> > +        spin_lock(&ctc->ctx_lock);
> > +        ctx->flags = CTC_FLAG_CTX_PTR;
> > +        ctx->ctc = ctc;
> > +        list_add_tail(&ctx->node, &ctc->ctx_head);
> > +        spin_unlock(&ctc->ctx_lock);
>
> AFAIU every single tracked allocation will get its own codetag_ctx.
> There is no aggregation per allocation site or anything else. This looks
> like a scalability and a memory overhead red flag to me.

True. The allocations here would not be limited. We could introduce a
global limit to the amount of memory that we can use to store contexts
and maybe reuse the oldest entry (in LRU fashion) when we hit that
limit?

> > +}
> > +
> > +static inline void rem_ctx(struct codetag_ctx *ctx,
> > +                           void (*free_ctx)(struct kref *refcount))
> > +{
> > +        struct codetag_with_ctx *ctc = ctx->ctc;
> > +
> > +        spin_lock(&ctc->ctx_lock);
>
> This could deadlock when allocator is called from the IRQ context.

I see. spin_lock_irqsave() then?
Thanks for the feedback!
Suren.

> > +        /* ctx might have been removed while we were using it */
> > +        if (!list_empty(&ctx->node))
> > +                list_del_init(&ctx->node);
> > +        spin_unlock(&ctc->ctx_lock);
> > +        kref_put(&ctx->refcount, free_ctx);
> --
> Michal Hocko
> SUSE Labs
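A minimal sketch of what the spin_lock_irqsave() conversion discussed above could look like; this is only an illustration of the suggestion, not the code as later posted. The deadlock scenario is an allocation from an interrupt handler spinning on ctx_lock while the interrupted task on the same CPU already holds it, which the irqsave variant avoids by disabling interrupts for the critical section.

/* Illustration only: IRQ-safe variants of the helpers quoted above. */
static inline void add_ctx(struct codetag_ctx *ctx,
                           struct codetag_with_ctx *ctc)
{
        unsigned long flags;

        kref_init(&ctx->refcount);
        spin_lock_irqsave(&ctc->ctx_lock, flags);
        ctx->flags = CTC_FLAG_CTX_PTR;
        ctx->ctc = ctc;
        list_add_tail(&ctx->node, &ctc->ctx_head);
        spin_unlock_irqrestore(&ctc->ctx_lock, flags);
}

static inline void rem_ctx(struct codetag_ctx *ctx,
                           void (*free_ctx)(struct kref *refcount))
{
        struct codetag_with_ctx *ctc = ctx->ctc;
        unsigned long flags;

        spin_lock_irqsave(&ctc->ctx_lock, flags);
        /* ctx might have been removed while we were using it */
        if (!list_empty(&ctx->node))
                list_del_init(&ctx->node);
        spin_unlock_irqrestore(&ctc->ctx_lock, flags);
        kref_put(&ctx->refcount, free_ctx);
}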
On 5/3/23 08:18, Suren Baghdasaryan wrote:
>>> +static inline void rem_ctx(struct codetag_ctx *ctx,
>>> +                           void (*free_ctx)(struct kref *refcount))
>>> +{
>>> +        struct codetag_with_ctx *ctc = ctx->ctc;
>>> +
>>> +        spin_lock(&ctc->ctx_lock);
>> This could deadlock when allocator is called from the IRQ context.
> I see. spin_lock_irqsave() then?

Yes.  But, even better, please turn on lockdep when you are testing.  It
will find these for you.  If you're on x86, we have a set of handy-dandy
debug options that you can add to an existing config with:

        make x86_debug.config

That said, I'm as concerned as everyone else that this is all "new" code
and doesn't lean on existing tracing or things like PAGE_OWNER enough.
On Wed, May 3, 2023 at 8:26 AM Dave Hansen <dave.hansen@intel.com> wrote:
>
> On 5/3/23 08:18, Suren Baghdasaryan wrote:
> >>> +static inline void rem_ctx(struct codetag_ctx *ctx,
> >>> +                           void (*free_ctx)(struct kref *refcount))
> >>> +{
> >>> +        struct codetag_with_ctx *ctc = ctx->ctc;
> >>> +
> >>> +        spin_lock(&ctc->ctx_lock);
> >> This could deadlock when allocator is called from the IRQ context.
> > I see. spin_lock_irqsave() then?
>
> Yes.  But, even better, please turn on lockdep when you are testing.  It
> will find these for you.  If you're on x86, we have a set of handy-dandy
> debug options that you can add to an existing config with:
>
>         make x86_debug.config

Nice! I thought I tested with lockdep enabled but I might be wrong.
The beauty of working on multiple patchsets in parallel is that I
can't remember what I did for each one :)

>
> That said, I'm as concerned as everyone else that this is all "new" code
> and doesn't lean on existing tracing or things like PAGE_OWNER enough.

Yeah, that's being actively discussed.

>
On Wed 03-05-23 08:18:39, Suren Baghdasaryan wrote:
> On Wed, May 3, 2023 at 12:36 AM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Mon 01-05-23 09:54:44, Suren Baghdasaryan wrote:
> > [...]
> > > +static inline void add_ctx(struct codetag_ctx *ctx,
> > > +                           struct codetag_with_ctx *ctc)
> > > +{
> > > +        kref_init(&ctx->refcount);
> > > +        spin_lock(&ctc->ctx_lock);
> > > +        ctx->flags = CTC_FLAG_CTX_PTR;
> > > +        ctx->ctc = ctc;
> > > +        list_add_tail(&ctx->node, &ctc->ctx_head);
> > > +        spin_unlock(&ctc->ctx_lock);
> >
> > AFAIU every single tracked allocation will get its own codetag_ctx.
> > There is no aggregation per allocation site or anything else. This looks
> > like a scalability and a memory overhead red flag to me.
>
> True. The allocations here would not be limited. We could introduce a
> global limit to the amount of memory that we can use to store contexts
> and maybe reuse the oldest entry (in LRU fashion) when we hit that
> limit?

Wouldn't it make more sense to aggregate same allocations? Sure pids
get recycled but quite honestly I am not sure that information is all
that interesting. Precisely because of the recycle and short lived
processes reasons. I think there is quite a lot to think about the
detailed context tracking.

> > > +}
> > > +
> > > +static inline void rem_ctx(struct codetag_ctx *ctx,
> > > +                           void (*free_ctx)(struct kref *refcount))
> > > +{
> > > +        struct codetag_with_ctx *ctc = ctx->ctc;
> > > +
> > > +        spin_lock(&ctc->ctx_lock);
> >
> > This could deadlock when allocator is called from the IRQ context.
>
> I see. spin_lock_irqsave() then?

yes. I have checked that the lock is not held over the all list
traversal which is good but the changelog could be more explicit about
the iterators and lock hold times implications.
On Thu, May 4, 2023 at 1:04 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 03-05-23 08:18:39, Suren Baghdasaryan wrote:
> > On Wed, May 3, 2023 at 12:36 AM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Mon 01-05-23 09:54:44, Suren Baghdasaryan wrote:
> > > [...]
> > > > +static inline void add_ctx(struct codetag_ctx *ctx,
> > > > +                           struct codetag_with_ctx *ctc)
> > > > +{
> > > > +        kref_init(&ctx->refcount);
> > > > +        spin_lock(&ctc->ctx_lock);
> > > > +        ctx->flags = CTC_FLAG_CTX_PTR;
> > > > +        ctx->ctc = ctc;
> > > > +        list_add_tail(&ctx->node, &ctc->ctx_head);
> > > > +        spin_unlock(&ctc->ctx_lock);
> > >
> > > AFAIU every single tracked allocation will get its own codetag_ctx.
> > > There is no aggregation per allocation site or anything else. This looks
> > > like a scalability and a memory overhead red flag to me.
> >
> > True. The allocations here would not be limited. We could introduce a
> > global limit to the amount of memory that we can use to store contexts
> > and maybe reuse the oldest entry (in LRU fashion) when we hit that
> > limit?
>
> Wouldn't it make more sense to aggregate same allocations? Sure pids
> get recycled but quite honestly I am not sure that information is all
> that interesting. Precisely because of the recycle and short lived
> processes reasons. I think there is quite a lot to think about the
> detailed context tracking.

That would be a nice optimization. I'll need to look into the
implementation details. Thanks for the idea.

> >
> > > > +}
> > > > +
> > > > +static inline void rem_ctx(struct codetag_ctx *ctx,
> > > > +                           void (*free_ctx)(struct kref *refcount))
> > > > +{
> > > > +        struct codetag_with_ctx *ctc = ctx->ctc;
> > > > +
> > > > +        spin_lock(&ctc->ctx_lock);
> > >
> > > This could deadlock when allocator is called from the IRQ context.
> >
> > I see. spin_lock_irqsave() then?
>
> yes. I have checked that the lock is not held over the all list
> traversal which is good but the changelog could be more explicit about
> the iterators and lock hold times implications.

Ack. Will add more information.

> --
> Michal Hocko
> SUSE Labs
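A hypothetical sketch of the aggregation Michal suggests above, to make the idea concrete: rather than queueing one codetag_ctx per allocation, each site could keep one record per distinct captured context and bump counters on repeated hits, bounding memory by the number of distinct contexts rather than the number of live allocations. The struct, its fields and the hash key below are invented for illustration and are not part of this series.

/*
 * Hypothetical aggregation record, not part of the posted patches:
 * one entry per distinct captured context at an allocation site.
 */
struct ctx_aggregate {
        struct list_head node;  /* linked off the owning tag */
        u32 ctx_hash;           /* e.g. hash of the captured stack/comm */
        u64 hits;               /* allocations that matched this context */
        u64 bytes;              /* total bytes attributed to it */
};

/* Caller holds the tag's context lock (IRQ-safe variant as above). */
static struct ctx_aggregate *ctx_aggregate_lookup(struct list_head *head,
                                                  u32 ctx_hash)
{
        struct ctx_aggregate *agg;

        list_for_each_entry(agg, head, node)
                if (agg->ctx_hash == ctx_hash)
                        return agg;
        return NULL;    /* caller allocates and links a new record */
}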
diff --git a/include/linux/codetag.h b/include/linux/codetag.h
index 87207f199ac9..9ab2f017e845 100644
--- a/include/linux/codetag.h
+++ b/include/linux/codetag.h
@@ -5,8 +5,12 @@
 #ifndef _LINUX_CODETAG_H
 #define _LINUX_CODETAG_H
 
+#include <linux/container_of.h>
+#include <linux/spinlock.h>
 #include <linux/types.h>
 
+struct kref;
+struct codetag_ctx;
 struct codetag_iterator;
 struct codetag_type;
 struct seq_buf;
@@ -18,15 +22,38 @@ struct module;
  * an array of these.
  */
 struct codetag {
-        unsigned int flags; /* used in later patches */
+        unsigned int flags; /* has to be the first member shared with codetag_ctx */
         unsigned int lineno;
         const char *modname;
         const char *function;
         const char *filename;
 } __aligned(8);
 
+/* codetag_with_ctx flags */
+#define CTC_FLAG_CTX_PTR        (1 << 0)
+#define CTC_FLAG_CTX_READY      (1 << 1)
+#define CTC_FLAG_CTX_ENABLED    (1 << 2)
+
+/*
+ * Code tag with context capture support. Contains a list to store context for
+ * each tag hit, a lock protecting the list and a flag to indicate whether
+ * context capture is enabled for the tag.
+ */
+struct codetag_with_ctx {
+        struct codetag ct;
+        struct list_head ctx_head;
+        spinlock_t ctx_lock;
+} __aligned(8);
+
+/*
+ * Tag reference can point to codetag directly or indirectly via codetag_ctx.
+ * Direct codetag pointer is used when context capture is disabled or not
+ * supported. When context capture for the tag is used, the reference points
+ * to the codetag_ctx through which the codetag can be reached.
+ */
 union codetag_ref {
         struct codetag *ct;
+        struct codetag_ctx *ctx;
 };
 
 struct codetag_range {
@@ -46,6 +73,7 @@ struct codetag_type_desc {
                               struct codetag_module *cmod);
         bool (*module_unload)(struct codetag_type *cttype,
                               struct codetag_module *cmod);
+        void (*free_ctx)(struct kref *ref);
 };
 
 struct codetag_iterator {
@@ -53,6 +81,7 @@ struct codetag_iterator {
         struct codetag_module *cmod;
         unsigned long mod_id;
         struct codetag *ct;
+        struct codetag_ctx *ctx;
 };
 
 #define CODE_TAG_INIT {                                 \
@@ -63,9 +92,28 @@ struct codetag_iterator {
         .flags          = 0,                            \
 }
 
+static inline bool is_codetag_ctx_ref(union codetag_ref *ref)
+{
+        return !!(ref->ct->flags & CTC_FLAG_CTX_PTR);
+}
+
+static inline
+struct codetag_with_ctx *ct_to_ctc(struct codetag *ct)
+{
+        return container_of(ct, struct codetag_with_ctx, ct);
+}
+
 void codetag_lock_module_list(struct codetag_type *cttype, bool lock);
 struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype);
 struct codetag *codetag_next_ct(struct codetag_iterator *iter);
+struct codetag_ctx *codetag_next_ctx(struct codetag_iterator *iter);
+
+bool codetag_enable_ctx(struct codetag_with_ctx *ctc, bool enable);
+static inline bool codetag_ctx_enabled(struct codetag_with_ctx *ctc)
+{
+        return !!(ctc->ct.flags & CTC_FLAG_CTX_ENABLED);
+}
+bool codetag_has_ctx(struct codetag_with_ctx *ctc);
 
 void codetag_to_text(struct seq_buf *out, struct codetag *ct);
diff --git a/include/linux/codetag_ctx.h b/include/linux/codetag_ctx.h
new file mode 100644
index 000000000000..e741484f0e08
--- /dev/null
+++ b/include/linux/codetag_ctx.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * code tag context
+ */
+#ifndef _LINUX_CODETAG_CTX_H
+#define _LINUX_CODETAG_CTX_H
+
+#include <linux/codetag.h>
+#include <linux/kref.h>
+
+/* Code tag hit context. */
+struct codetag_ctx {
+        unsigned int flags; /* has to be the first member shared with codetag */
+        struct codetag_with_ctx *ctc;
+        struct list_head node;
+        struct kref refcount;
+} __aligned(8);
+
+static inline struct codetag_ctx *kref_to_ctx(struct kref *refcount)
+{
+        return container_of(refcount, struct codetag_ctx, refcount);
+}
+
+static inline void add_ctx(struct codetag_ctx *ctx,
+                           struct codetag_with_ctx *ctc)
+{
+        kref_init(&ctx->refcount);
+        spin_lock(&ctc->ctx_lock);
+        ctx->flags = CTC_FLAG_CTX_PTR;
+        ctx->ctc = ctc;
+        list_add_tail(&ctx->node, &ctc->ctx_head);
+        spin_unlock(&ctc->ctx_lock);
+}
+
+static inline void rem_ctx(struct codetag_ctx *ctx,
+                           void (*free_ctx)(struct kref *refcount))
+{
+        struct codetag_with_ctx *ctc = ctx->ctc;
+
+        spin_lock(&ctc->ctx_lock);
+        /* ctx might have been removed while we were using it */
+        if (!list_empty(&ctx->node))
+                list_del_init(&ctx->node);
+        spin_unlock(&ctc->ctx_lock);
+        kref_put(&ctx->refcount, free_ctx);
+}
+
+#endif /* _LINUX_CODETAG_CTX_H */
diff --git a/lib/codetag.c b/lib/codetag.c
index 84f90f3b922c..d891bbe4481d 100644
--- a/lib/codetag.c
+++ b/lib/codetag.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 #include <linux/codetag.h>
+#include <linux/codetag_ctx.h>
 #include <linux/idr.h>
 #include <linux/kallsyms.h>
 #include <linux/module.h>
@@ -92,6 +93,139 @@ struct codetag *codetag_next_ct(struct codetag_iterator *iter)
         return ct;
 }
 
+static struct codetag_ctx *next_ctx_from_ct(struct codetag_iterator *iter)
+{
+        struct codetag_with_ctx *ctc;
+        struct codetag_ctx *ctx = NULL;
+        struct codetag *ct = iter->ct;
+
+        while (ct) {
+                if (!(ct->flags & CTC_FLAG_CTX_READY))
+                        goto next;
+
+                ctc = ct_to_ctc(ct);
+                spin_lock(&ctc->ctx_lock);
+                if (!list_empty(&ctc->ctx_head)) {
+                        ctx = list_first_entry(&ctc->ctx_head,
+                                               struct codetag_ctx, node);
+                        kref_get(&ctx->refcount);
+                }
+                spin_unlock(&ctc->ctx_lock);
+                if (ctx)
+                        break;
+next:
+                ct = codetag_next_ct(iter);
+        }
+
+        iter->ctx = ctx;
+        return ctx;
+}
+
+struct codetag_ctx *codetag_next_ctx(struct codetag_iterator *iter)
+{
+        struct codetag_ctx *ctx = iter->ctx;
+        struct codetag_ctx *found = NULL;
+
+        lockdep_assert_held(&iter->cttype->mod_lock);
+
+        if (!ctx)
+                return next_ctx_from_ct(iter);
+
+        spin_lock(&ctx->ctc->ctx_lock);
+        /*
+         * Do not advance if the object was isolated, restart at the same tag.
+         */
+        if (!list_empty(&ctx->node)) {
+                if (list_is_last(&ctx->node, &ctx->ctc->ctx_head)) {
+                        /* Finished with this tag, advance to the next */
+                        codetag_next_ct(iter);
+                } else {
+                        found = list_next_entry(ctx, node);
+                        kref_get(&found->refcount);
+                }
+        }
+        spin_unlock(&ctx->ctc->ctx_lock);
+        kref_put(&ctx->refcount, iter->cttype->desc.free_ctx);
+
+        if (!found)
+                return next_ctx_from_ct(iter);
+
+        iter->ctx = found;
+        return found;
+}
+
+static struct codetag_type *find_cttype(struct codetag *ct)
+{
+        struct codetag_module *cmod;
+        struct codetag_type *cttype;
+        unsigned long mod_id;
+        unsigned long tmp;
+
+        mutex_lock(&codetag_lock);
+        list_for_each_entry(cttype, &codetag_types, link) {
+                down_read(&cttype->mod_lock);
+                idr_for_each_entry_ul(&cttype->mod_idr, cmod, tmp, mod_id) {
+                        if (ct >= cmod->range.start && ct < cmod->range.stop) {
+                                up_read(&cttype->mod_lock);
+                                goto found;
+                        }
+                }
+                up_read(&cttype->mod_lock);
+        }
+        cttype = NULL;
+found:
+        mutex_unlock(&codetag_lock);
+
+        return cttype;
+}
+
+bool codetag_enable_ctx(struct codetag_with_ctx *ctc, bool enable)
+{
+        struct codetag_type *cttype = find_cttype(&ctc->ct);
+
+        if (!cttype || !cttype->desc.free_ctx)
+                return false;
+
+        lockdep_assert_held(&cttype->mod_lock);
+        BUG_ON(!rwsem_is_locked(&cttype->mod_lock));
+
+        if (codetag_ctx_enabled(ctc) == enable)
+                return false;
+
+        if (enable) {
+                /* Initialize context capture fields only once */
+                if (!(ctc->ct.flags & CTC_FLAG_CTX_READY)) {
+                        spin_lock_init(&ctc->ctx_lock);
+                        INIT_LIST_HEAD(&ctc->ctx_head);
+                        ctc->ct.flags |= CTC_FLAG_CTX_READY;
+                }
+                ctc->ct.flags |= CTC_FLAG_CTX_ENABLED;
+        } else {
+                /*
+                 * The list of context objects is intentionally left untouched.
+                 * It can be read back and if context capture is re-enabled it
+                 * will append new objects.
+                 */
+                ctc->ct.flags &= ~CTC_FLAG_CTX_ENABLED;
+        }
+
+        return true;
+}
+
+bool codetag_has_ctx(struct codetag_with_ctx *ctc)
+{
+        bool no_ctx;
+
+        if (!(ctc->ct.flags & CTC_FLAG_CTX_READY))
+                return false;
+
+        spin_lock(&ctc->ctx_lock);
+        no_ctx = list_empty(&ctc->ctx_head);
+        spin_unlock(&ctc->ctx_lock);
+
+        return !no_ctx;
+}
+
 void codetag_to_text(struct seq_buf *out, struct codetag *ct)
 {
         seq_buf_printf(out, "%s:%u module:%s func:%s",
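A sketch of how a codetag type that registered free_ctx might walk all captured contexts with the iterator added in lib/codetag.c above. dump_all_ctx() and my_cttype are hypothetical names used only for illustration; the codetag_* calls and locking rules come from the patch.

/* Illustrative consumer of the context iterator (not part of the patch). */
static void dump_all_ctx(struct codetag_type *my_cttype)
{
        struct codetag_iterator iter;
        struct codetag_ctx *ctx;

        /* codetag_next_ctx() asserts that cttype->mod_lock is held */
        codetag_lock_module_list(my_cttype, true);
        iter = codetag_get_ct_iter(my_cttype);
        while ((ctx = codetag_next_ctx(&iter))) {
                /*
                 * Each returned ctx carries a reference taken with
                 * kref_get(); the next codetag_next_ctx() call drops it.
                 */
                pr_info("ctx for %s:%u\n",
                        ctx->ctc->ct.filename, ctx->ctc->ct.lineno);
        }
        codetag_lock_module_list(my_cttype, false);
}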
Add support for code tag context capture when registering a new code tag
type. When context capture for a specific code tag is enabled, codetag_ref
will point to a codetag_ctx object which can be attached to an
application-specific object storing code invocation context. codetag_ctx
has a pointer to its codetag_with_ctx object, which embeds the codetag
object. All context objects of the same code tag are placed into the
codetag_with_ctx.ctx_head linked list. codetag.flags is used to indicate
when context capture for the associated code tag is initialized and
enabled.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/codetag.h     |  50 +++++++++++++-
 include/linux/codetag_ctx.h |  48 +++++++++++++
 lib/codetag.c               | 134 ++++++++++++++++++++++++++++++++++++
 3 files changed, 231 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/codetag_ctx.h
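To illustrate the reference flow the changelog describes, a hypothetical tracker that embeds codetag_ctx in its own context record; struct alloc_ctx, track_alloc() and their fields are invented for illustration, while add_ctx(), codetag_ctx_enabled() and union codetag_ref come from the patch.

/*
 * Hypothetical example of the described flow: the tracker links its
 * context record to the tag with add_ctx() and makes the codetag_ref
 * point at the embedded codetag_ctx instead of the codetag itself.
 */
struct alloc_ctx {
        struct codetag_ctx ctx;  /* allows container_of() back to this */
        pid_t pid;               /* example of captured invocation context */
        size_t size;
};

static void track_alloc(union codetag_ref *ref, struct codetag_with_ctx *ctc,
                        struct alloc_ctx *actx)
{
        if (codetag_ctx_enabled(ctc)) {
                add_ctx(&actx->ctx, ctc);  /* sets CTC_FLAG_CTX_PTR */
                ref->ctx = &actx->ctx;     /* indirect reference via ctx */
        } else {
                ref->ct = &ctc->ct;        /* direct reference to the tag */
        }
}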