| Message ID | 20211130095727.2378739-1-elver@google.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | lib/stackdepot: always do filter_irq_stacks() in stack_depot_save() |
On Tue, Nov 30, 2021 at 11:14 AM Marco Elver <elver@google.com> wrote:
>
> The non-interrupt portion of interrupt stack traces before interrupt
> entry is usually arbitrary. Therefore, saving stack traces of interrupts
> (that include entries before interrupt entry) to stack depot leads to
> unbounded stackdepot growth.
>
> As such, use of filter_irq_stacks() is a requirement to ensure
> stackdepot can efficiently deduplicate interrupt stacks.
>
> Looking through all current users of stack_depot_save(), none (except
> KASAN) pass the stack trace through filter_irq_stacks() before passing
> it on to stack_depot_save().
>
> Rather than adding filter_irq_stacks() to all current users of
> stack_depot_save(), it became clear that stack_depot_save() should
> simply do filter_irq_stacks().
>
> Signed-off-by: Marco Elver <elver@google.com>

Reviewed-by: Alexander Potapenko <glider@google.com>

[...]
On 11/30/21 10:57, Marco Elver wrote:
[...]
> Rather than adding filter_irq_stacks() to all current users of
> stack_depot_save(), it became clear that stack_depot_save() should
> simply do filter_irq_stacks().

Agree.

> Signed-off-by: Marco Elver <elver@google.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thanks.

[...]
On Tue, Nov 30, 2021 at 11:14 AM Marco Elver <elver@google.com> wrote:
[...]
> Signed-off-by: Marco Elver <elver@google.com>
[...]

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index b437ae79aca1..519c7898c7f2 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -305,6 +305,9 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
  * (allocates using GFP flags of @alloc_flags). If @can_alloc is %false, avoids
  * any allocations and will fail if no space is left to store the stack trace.
  *
+ * If the stack trace in @entries is from an interrupt, only the portion up to
+ * interrupt entry is saved.
+ *
  * Context: Any context, but setting @can_alloc to %false is required if
  *          alloc_pages() cannot be used from the current context. Currently
  *          this is the case from contexts where neither %GFP_ATOMIC nor
@@ -323,6 +326,16 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
         unsigned long flags;
         u32 hash;
 
+        /*
+         * If this stack trace is from an interrupt, including anything before
+         * interrupt entry usually leads to unbounded stackdepot growth.
+         *
+         * Because use of filter_irq_stacks() is a requirement to ensure
+         * stackdepot can efficiently deduplicate interrupt stacks, always
+         * filter_irq_stacks() to simplify all callers' use of stackdepot.
+         */
+        nr_entries = filter_irq_stacks(entries, nr_entries);
+
         if (unlikely(nr_entries == 0) || stack_depot_disable)
                 goto fast_exit;
 
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8428da2aaf17..efaa836e5132 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -36,7 +36,6 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
         unsigned int nr_entries;
 
         nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
-        nr_entries = filter_irq_stacks(entries, nr_entries);
         return __stack_depot_save(entries, nr_entries, flags, can_alloc);
 }
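For illustration, here is a minimal sketch of what a typical stack_depot_save()
caller looks like once this patch is applied; save_current_stack() and the
trace depth of 64 are hypothetical, not code from the tree:

#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

/*
 * Hypothetical caller, for illustration only: with this patch applied,
 * the explicit filter_irq_stacks() call that the kasan_save_stack() hunk
 * above drops is no longer needed in any caller, because
 * stack_depot_save() performs the truncation itself.
 */
static depot_stack_handle_t save_current_stack(gfp_t flags)
{
        unsigned long entries[64];
        unsigned int nr_entries;

        nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
        return stack_depot_save(entries, nr_entries, flags);
}

Since the deduplication hash is now computed over the already-filtered
entries, two interrupts that fire on top of unrelated task stacks map to the
same depot record instead of creating a new one each time.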
The non-interrupt portion of interrupt stack traces before interrupt
entry is usually arbitrary. Therefore, saving stack traces of interrupts
(that include entries before interrupt entry) to stack depot leads to
unbounded stackdepot growth.

As such, use of filter_irq_stacks() is a requirement to ensure
stackdepot can efficiently deduplicate interrupt stacks.

Looking through all current users of stack_depot_save(), none (except
KASAN) pass the stack trace through filter_irq_stacks() before passing
it on to stack_depot_save().

Rather than adding filter_irq_stacks() to all current users of
stack_depot_save(), it became clear that stack_depot_save() should
simply do filter_irq_stacks().

Signed-off-by: Marco Elver <elver@google.com>
---
 lib/stackdepot.c  | 13 +++++++++++++
 mm/kasan/common.c |  1 -
 2 files changed, 13 insertions(+), 1 deletion(-)
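To make the filtering concrete, the following is a sketch of the idea behind
filter_irq_stacks(). It assumes the linker-provided symbols
__irqentry_text_start/_end and __softirqentry_text_start/_end delimit the
interrupt entry code, as they do on architectures that provide them; treat it
as an illustration and see the kernel's filter_irq_stacks() for the
authoritative implementation:

#include <linux/stacktrace.h>
#include <asm/sections.h>

/* Return true if ptr is an address inside interrupt entry code. */
static bool in_irqentry_text(unsigned long ptr)
{
        return (ptr >= (unsigned long)&__irqentry_text_start &&
                ptr < (unsigned long)&__irqentry_text_end) ||
               (ptr >= (unsigned long)&__softirqentry_text_start &&
                ptr < (unsigned long)&__softirqentry_text_end);
}

/*
 * Truncate the trace at the first entry that lies in interrupt entry
 * code, so the arbitrary interrupted stack below it is not saved.
 */
unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries)
{
        unsigned int i;

        for (i = 0; i < nr_entries; i++) {
                if (in_irqentry_text(entries[i]))
                        return i + 1; /* keep the irqentry frame itself */
        }
        return nr_entries;
}

Keeping the irqentry frame itself (the i + 1) preserves one stable anchor
entry, so all traces for a given interrupt handler still share an identical,
bounded tail.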