Message ID | f7ab7ad4013669f25808bb0e39b3613b98189063.1693328501.git.andreyknvl@google.com
---|---
State | New
Series | stackdepot: allow evicting stack traces
On Tue, Aug 29, 2023 at 07:11PM +0200, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Evict stack traces from the stack depot for the tag-based KASAN modes
> once they are evicted from the stack ring.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  mm/kasan/tags.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index 7dcfe341d48e..fa6b0f77a7dd 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -96,7 +96,7 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
>                              gfp_t gfp_flags, bool is_free)
>  {
>          unsigned long flags;
> -        depot_stack_handle_t stack;
> +        depot_stack_handle_t stack, old_stack;
>          u64 pos;
>          struct kasan_stack_ring_entry *entry;
>          void *old_ptr;
> @@ -120,6 +120,8 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
>          if (!try_cmpxchg(&entry->ptr, &old_ptr, STACK_RING_BUSY_PTR))
>                  goto next; /* Busy slot. */
>
> +        old_stack = READ_ONCE(entry->stack);

Why READ_ONCE? Is it possible that there is a concurrent writer once the
slot has been "locked" with STACK_RING_BUSY_PTR?

If there is no concurrency, it would be clearer to leave it unmarked and
add a comment to that effect. (I also think a comment would be good to
say what the WRITE_ONCE below pair with, because at this point I've
forgotten.)

>          WRITE_ONCE(entry->size, cache->object_size);
>          WRITE_ONCE(entry->pid, current->pid);
>          WRITE_ONCE(entry->stack, stack);
> @@ -131,6 +133,9 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
>          smp_store_release(&entry->ptr, (s64)object);
>
>          read_unlock_irqrestore(&stack_ring.lock, flags);
> +
> +        if (old_stack)
> +                stack_depot_evict(old_stack);
>  }
>
>  void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> --
> 2.25.1
>
On Wed, Aug 30, 2023 at 11:38 AM Marco Elver <elver@google.com> wrote:
>
> > --- a/mm/kasan/tags.c
> > +++ b/mm/kasan/tags.c
> > @@ -96,7 +96,7 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
> >                              gfp_t gfp_flags, bool is_free)
> >  {
> >          unsigned long flags;
> > -        depot_stack_handle_t stack;
> > +        depot_stack_handle_t stack, old_stack;
> >          u64 pos;
> >          struct kasan_stack_ring_entry *entry;
> >          void *old_ptr;
> > @@ -120,6 +120,8 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
> >          if (!try_cmpxchg(&entry->ptr, &old_ptr, STACK_RING_BUSY_PTR))
> >                  goto next; /* Busy slot. */
> >
> > +        old_stack = READ_ONCE(entry->stack);
>
> Why READ_ONCE? Is it possible that there is a concurrent writer once the
> slot has been "locked" with STACK_RING_BUSY_PTR?
>
> If there is no concurrency, it would be clearer to leave it unmarked and
> add a comment to that effect. (I also think a comment would be good to
> say what the WRITE_ONCE below pair with, because at this point I've
> forgotten.)

Hm, I actually suspect we don't need these READ/WRITE_ONCE to entry
fields at all. This seems to be a leftover from the initial series
when I didn't yet have the rwlock. The rwlock prevents the entries
from being read (in kasan_complete_mode_report_info) while being
written and the try_cmpxchg prevents the same entry from being
rewritten (in the unlikely case of wrapping during writing).

Marco, do you think we can drop these READ/WRITE_ONCE?

Thanks!
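[The pairing Marco asks about is with the report side, which scans the ring while holding the exclusive side of the same rwlock. Below is a rough sketch of that scheme; the function name stack_ring_scan_for_report and the stack_ring fields are assumptions based on this thread and kasan_complete_mode_report_info, not the verbatim kernel code.]

/*
 * Simplified sketch, assuming the stack_ring layout implied by the thread.
 *
 * save_stack_info() takes stack_ring.lock as a *reader*, so ring writers
 * run concurrently with each other and use try_cmpxchg() to claim a slot.
 * The report path takes the same lock as a *writer*, which excludes all
 * ring writers, so entries cannot change while they are read here.
 */
static void stack_ring_scan_for_report(void *object)
{
        unsigned long flags;
        size_t i;

        write_lock_irqsave(&stack_ring.lock, flags);    /* exclusive */

        for (i = 0; i < stack_ring.size; i++) {
                struct kasan_stack_ring_entry *entry = &stack_ring.entries[i];

                /* Claimed by a writer but not yet published: skip it. */
                if (entry->ptr == STACK_RING_BUSY_PTR)
                        continue;

                if (entry->ptr == object) {
                        /* Plain reads are safe: no ring writer can be active
                         * here, so entry->stack, entry->pid, etc. are stable. */
                }
        }

        write_unlock_irqrestore(&stack_ring.lock, flags);
}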
On Mon, 4 Sept 2023 at 20:48, Andrey Konovalov <andreyknvl@gmail.com> wrote:
>
> On Wed, Aug 30, 2023 at 11:38 AM Marco Elver <elver@google.com> wrote:
> >
> > > --- a/mm/kasan/tags.c
> > > +++ b/mm/kasan/tags.c
> > > @@ -96,7 +96,7 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
> > >                              gfp_t gfp_flags, bool is_free)
> > >  {
> > >          unsigned long flags;
> > > -        depot_stack_handle_t stack;
> > > +        depot_stack_handle_t stack, old_stack;
> > >          u64 pos;
> > >          struct kasan_stack_ring_entry *entry;
> > >          void *old_ptr;
> > > @@ -120,6 +120,8 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
> > >          if (!try_cmpxchg(&entry->ptr, &old_ptr, STACK_RING_BUSY_PTR))
> > >                  goto next; /* Busy slot. */
> > >
> > > +        old_stack = READ_ONCE(entry->stack);
> >
> > Why READ_ONCE? Is it possible that there is a concurrent writer once the
> > slot has been "locked" with STACK_RING_BUSY_PTR?
> >
> > If there is no concurrency, it would be clearer to leave it unmarked and
> > add a comment to that effect. (I also think a comment would be good to
> > say what the WRITE_ONCE below pair with, because at this point I've
> > forgotten.)
>
> Hm, I actually suspect we don't need these READ/WRITE_ONCE to entry
> fields at all. This seems to be a leftover from the initial series
> when I didn't yet have the rwlock. The rwlock prevents the entries
> from being read (in kasan_complete_mode_report_info) while being
> written and the try_cmpxchg prevents the same entry from being
> rewritten (in the unlikely case of wrapping during writing).
>
> Marco, do you think we can drop these READ/WRITE_ONCE?

Yes, I think they can be dropped.
On Mon, Sep 4, 2023 at 8:59 PM Marco Elver <elver@google.com> wrote:
>
> >
> > Hm, I actually suspect we don't need these READ/WRITE_ONCE to entry
> > fields at all. This seems to be a leftover from the initial series
> > when I didn't yet have the rwlock. The rwlock prevents the entries
> > from being read (in kasan_complete_mode_report_info) while being
> > written and the try_cmpxchg prevents the same entry from being
> > rewritten (in the unlikely case of wrapping during writing).
> >
> > Marco, do you think we can drop these READ/WRITE_ONCE?
>
> Yes, I think they can be dropped.

Will drop in v2, thanks!
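[With the _ONCE annotations dropped as agreed above, the hunk would look roughly like the sketch below in v2. This is only a sketch of the direction discussed in this thread, not the actual v2 patch.]

        /*
         * Sketch of the v2 direction: after try_cmpxchg() has claimed the
         * slot with STACK_RING_BUSY_PTR, no other writer touches this entry,
         * and the report path only reads entries while holding
         * stack_ring.lock exclusively, so plain accesses suffice.
         */
        old_stack = entry->stack;

        entry->size = cache->object_size;
        entry->pid = current->pid;
        entry->stack = stack;

        /* Publish the slot; the report path skips STACK_RING_BUSY_PTR entries. */
        smp_store_release(&entry->ptr, (s64)object);

        read_unlock_irqrestore(&stack_ring.lock, flags);

        /* Evict the stack trace this slot previously referenced, if any. */
        if (old_stack)
                stack_depot_evict(old_stack);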
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 7dcfe341d48e..fa6b0f77a7dd 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -96,7 +96,7 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
                             gfp_t gfp_flags, bool is_free)
 {
         unsigned long flags;
-        depot_stack_handle_t stack;
+        depot_stack_handle_t stack, old_stack;
         u64 pos;
         struct kasan_stack_ring_entry *entry;
         void *old_ptr;
@@ -120,6 +120,8 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
         if (!try_cmpxchg(&entry->ptr, &old_ptr, STACK_RING_BUSY_PTR))
                 goto next; /* Busy slot. */

+        old_stack = READ_ONCE(entry->stack);
+
         WRITE_ONCE(entry->size, cache->object_size);
         WRITE_ONCE(entry->pid, current->pid);
         WRITE_ONCE(entry->stack, stack);
@@ -131,6 +133,9 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
         smp_store_release(&entry->ptr, (s64)object);

         read_unlock_irqrestore(&stack_ring.lock, flags);
+
+        if (old_stack)
+                stack_depot_evict(old_stack);
 }

 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)