Message ID | 306aeddcd3c01f432d308043c382669e5f63b395.1693328501.git.andreyknvl@google.com (mailing list archive)
---|---
State | New
Series | stackdepot: allow evicting stack traces
On Tue, Aug 29, 2023 at 07:11PM +0200, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Add a reference counter for how many times a stack record has been added
> to the stack depot.
>
> Do not yet decrement the refcount; this is implemented in one of the
> following patches.
>
> This is a preparatory patch for implementing the eviction of stack records
> from the stack depot.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  lib/stackdepot.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index 5ad454367379..a84c0debbb9e 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -22,6 +22,7 @@
>  #include <linux/mutex.h>
>  #include <linux/percpu.h>
>  #include <linux/printk.h>
> +#include <linux/refcount.h>
>  #include <linux/slab.h>
>  #include <linux/spinlock.h>
>  #include <linux/stacktrace.h>
> @@ -60,6 +61,7 @@ struct stack_record {
>  	u32 hash;		/* Hash in hash table */
>  	u32 size;		/* Number of stored frames */
>  	union handle_parts handle;
> +	refcount_t count;
>  	unsigned long entries[DEPOT_STACK_MAX_FRAMES];	/* Frames */
>  };
>
> @@ -348,6 +350,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
>  	stack->hash = hash;
>  	stack->size = size;
>  	/* stack->handle is already filled in by depot_init_pool. */
> +	refcount_set(&stack->count, 1);
>  	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
>
>  	/*
> @@ -452,6 +455,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
>  	/* Fast path: look the stack trace up without full locking. */
>  	found = find_stack(*bucket, entries, nr_entries, hash);
>  	if (found) {
> +		refcount_inc(&found->count);

If someone doesn't use stack_depot_evict(), and the refcount eventually
overflows, it'll do a WARN (per refcount_warn_saturate()).

I think the interface needs to be different:

stack_depot_get(): increments refcount (could be inline if just a
wrapper around refcount_inc())

stack_depot_put(): what stack_depot_evict() currently does

Then it's clear that if someone uses either stack_depot_get() or _put()
that these need to be balanced. Not using either will result in the old
behaviour of never evicting an entry.

>  		read_unlock_irqrestore(&pool_rwlock, flags);
>  		goto exit;
>  	}
> --
> 2.25.1
>
On Tue, 2023-08-29 at 19:11 +0200, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Add a reference counter for how many times a stack record has been
> added to the stack depot.
>
> Do not yet decrement the refcount; this is implemented in one of the
> following patches.
>
> This is a preparatory patch for implementing the eviction of stack
> records from the stack depot.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  lib/stackdepot.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index 5ad454367379..a84c0debbb9e 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -22,6 +22,7 @@
>  #include <linux/mutex.h>
>  #include <linux/percpu.h>
>  #include <linux/printk.h>
> +#include <linux/refcount.h>
>  #include <linux/slab.h>
>  #include <linux/spinlock.h>
>  #include <linux/stacktrace.h>
> @@ -60,6 +61,7 @@ struct stack_record {
>  	u32 hash;		/* Hash in hash table */
>  	u32 size;		/* Number of stored frames */
>  	union handle_parts handle;
> +	refcount_t count;
>  	unsigned long entries[DEPOT_STACK_MAX_FRAMES];	/* Frames */
>  };
>
> @@ -348,6 +350,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
>  	stack->hash = hash;
>  	stack->size = size;
>  	/* stack->handle is already filled in by depot_init_pool. */
> +	refcount_set(&stack->count, 1);
>  	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
>
>  	/*
> @@ -452,6 +455,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
>  	/* Fast path: look the stack trace up without full locking. */
>  	found = find_stack(*bucket, entries, nr_entries, hash);
>  	if (found) {
> +		refcount_inc(&found->count);
>  		read_unlock_irqrestore(&pool_rwlock, flags);
>  		goto exit;
>  	}

Hi Andrey,

There are two find_stack() function calls in __stack_depot_save().

Maybe we need to add refcount_inc() for both find_stack() calls?

Thanks,
Kuan-Ying Lee
On Wed, Aug 30, 2023 at 11:33 AM Marco Elver <elver@google.com> wrote:
>
> If someone doesn't use stack_depot_evict(), and the refcount eventually
> overflows, it'll do a WARN (per refcount_warn_saturate()).
>
> I think the interface needs to be different:
>
> stack_depot_get(): increments refcount (could be inline if just
> wrapper around refcount_inc())
>
> stack_depot_put(): what stack_depot_evict() currently does
>
> Then it's clear that if someone uses either stack_depot_get() or _put()
> that these need to be balanced. Not using either will result in the old
> behaviour of never evicting an entry.

So you mean the exported interface needs to be different? And the
users will need to call both stack_depot_save+stack_depot_get for
saving? Hm, this seems odd.

WDYT about adding a new flavor of stack_depot_save called
stack_depot_save_get that would increment the refcount? And renaming
stack_depot_evict to stack_depot_put.

I'm not sure though if the overflow is actually an issue. Hitting that
would require calling stack_depot_save INT_MAX times.
On Fri, Sep 1, 2023 at 3:06 PM 'Kuan-Ying Lee (李冠穎)' via kasan-dev
<kasan-dev@googlegroups.com> wrote:
>
> > @@ -452,6 +455,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
> >  	/* Fast path: look the stack trace up without full locking. */
> >  	found = find_stack(*bucket, entries, nr_entries, hash);
> >  	if (found) {
> > +		refcount_inc(&found->count);
> >  		read_unlock_irqrestore(&pool_rwlock, flags);
> >  		goto exit;
> >  	}
>
> Hi Andrey,
>
> There are two find_stack() function calls in __stack_depot_save().
>
> Maybe we need to add refcount_inc() for both find_stack() calls?

Indeed, good catch! Will fix in v2. Thanks!
On Mon, 4 Sept 2023 at 20:46, Andrey Konovalov <andreyknvl@gmail.com> wrote:
>
> On Wed, Aug 30, 2023 at 11:33 AM Marco Elver <elver@google.com> wrote:
> >
> > If someone doesn't use stack_depot_evict(), and the refcount eventually
> > overflows, it'll do a WARN (per refcount_warn_saturate()).
> >
> > I think the interface needs to be different:
> >
> > stack_depot_get(): increments refcount (could be inline if just
> > wrapper around refcount_inc())
> >
> > stack_depot_put(): what stack_depot_evict() currently does
> >
> > Then it's clear that if someone uses either stack_depot_get() or _put()
> > that these need to be balanced. Not using either will result in the old
> > behaviour of never evicting an entry.
>
> So you mean the exported interface needs to be different? And the
> users will need to call both stack_depot_save+stack_depot_get for
> saving? Hm, this seems odd.
>
> WDYT about adding a new flavor of stack_depot_save called
> stack_depot_save_get that would increment the refcount? And renaming
> stack_depot_evict to stack_depot_put.

If there are no other uses of stack_depot_get(), which seems likely,
just stack_depot_save_get() seems ok.

> I'm not sure though if the overflow is actually an issue. Hitting that
> would require calling stack_depot_save INT_MAX times.

With a long-running kernel it's possible.
On Mon, Sep 4, 2023 at 8:56 PM Marco Elver <elver@google.com> wrote:
>
> > WDYT about adding a new flavor of stack_depot_save called
> > stack_depot_save_get that would increment the refcount? And renaming
> > stack_depot_evict to stack_depot_put.
>
> If there are no other uses of stack_depot_get(), which seems likely,
> just stack_depot_save_get() seems ok.

Ok, I will implement a similar approach in v2: add another flag to
__stack_depot_save() to avoid multiplying API functions. Thanks!
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 5ad454367379..a84c0debbb9e 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -22,6 +22,7 @@
 #include <linux/mutex.h>
 #include <linux/percpu.h>
 #include <linux/printk.h>
+#include <linux/refcount.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/stacktrace.h>
@@ -60,6 +61,7 @@ struct stack_record {
 	u32 hash;		/* Hash in hash table */
 	u32 size;		/* Number of stored frames */
 	union handle_parts handle;
+	refcount_t count;
 	unsigned long entries[DEPOT_STACK_MAX_FRAMES];	/* Frames */
 };

@@ -348,6 +350,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 	stack->hash = hash;
 	stack->size = size;
 	/* stack->handle is already filled in by depot_init_pool. */
+	refcount_set(&stack->count, 1);
 	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));

 	/*
@@ -452,6 +455,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	/* Fast path: look the stack trace up without full locking. */
 	found = find_stack(*bucket, entries, nr_entries, hash);
 	if (found) {
+		refcount_inc(&found->count);
 		read_unlock_irqrestore(&pool_rwlock, flags);
 		goto exit;
 	}