Message ID | cover.1699297309.git.andreyknvl@google.com |
---|---|
Series | kasan: save mempool stack traces |
On Mon, Nov 06, 2023 at 09:10PM +0100, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@google.com>
>
> This series updates KASAN to save alloc and free stack traces for
> secondary-level allocators that cache and reuse allocations internally
> instead of giving them back to the underlying allocator (e.g. mempool).

Nice.

> As a part of this change, introduce and document a set of KASAN hooks:
>
> bool kasan_mempool_poison_pages(struct page *page, unsigned int order);
> void kasan_mempool_unpoison_pages(struct page *page, unsigned int order);
> bool kasan_mempool_poison_object(void *ptr);
> void kasan_mempool_unpoison_object(void *ptr, size_t size);
>
> and use them in the mempool code.
>
> Besides mempool, skbuff and io_uring also cache allocations and already
> use KASAN hooks to poison those. Their code is updated to use the new
> mempool hooks.
>
> The new hooks save alloc and free stack traces (for normal kmalloc and
> slab objects; stack traces for large kmalloc objects and page_alloc are
> not supported by KASAN yet), improve the readability of the users' code,
> and also allow the users to prevent double-free and invalid-free bugs;
> see the patches for the details.
>
> I'm posting this series as an RFC, as it has a few non-trivial-to-resolve
> conflicts with the stack depot eviction patches. I'll rebase the series and
> resolve the conflicts once the stack depot patches are in the mm tree.
>
> Andrey Konovalov (20):
>   kasan: rename kasan_slab_free_mempool to kasan_mempool_poison_object
>   kasan: move kasan_mempool_poison_object
>   kasan: document kasan_mempool_poison_object
>   kasan: add return value for kasan_mempool_poison_object
>   kasan: introduce kasan_mempool_unpoison_object
>   kasan: introduce kasan_mempool_poison_pages
>   kasan: introduce kasan_mempool_unpoison_pages
>   kasan: clean up __kasan_mempool_poison_object
>   kasan: save free stack traces for slab mempools
>   kasan: clean up and rename ____kasan_kmalloc
>   kasan: introduce poison_kmalloc_large_redzone
>   kasan: save alloc stack traces for mempool
>   mempool: use new mempool KASAN hooks
>   mempool: introduce mempool_use_prealloc_only
>   kasan: add mempool tests
>   kasan: rename pagealloc tests
>   kasan: reorder tests
>   kasan: rename and document kasan_(un)poison_object_data
>   skbuff: use mempool KASAN hooks
>   io_uring: use mempool KASAN hook
>
>  include/linux/kasan.h   | 161 +++++++-
>  include/linux/mempool.h |   2 +
>  io_uring/alloc_cache.h  |   5 +-
>  mm/kasan/common.c       | 221 ++++++----
>  mm/kasan/kasan_test.c   | 876 +++++++++++++++++++++++++++-------------
>  mm/mempool.c            |  49 ++-
>  mm/slab.c               |  10 +-
>  mm/slub.c               |   4 +-
>  net/core/skbuff.c       |  10 +-
>  9 files changed, 940 insertions(+), 398 deletions(-)

Overall LGTM and the majority of it is cleanups, so I think once the
stack depot patches are in the mm tree, just send v1 of this series.
On Wed, Nov 22, 2023 at 6:13 PM Marco Elver <elver@google.com> wrote:
>
> On Mon, Nov 06, 2023 at 09:10PM +0100, andrey.konovalov@linux.dev wrote:
> > From: Andrey Konovalov <andreyknvl@google.com>
> >
> > This series updates KASAN to save alloc and free stack traces for
> > secondary-level allocators that cache and reuse allocations internally
> > instead of giving them back to the underlying allocator (e.g. mempool).
>
> Nice.

Thanks! :)

> Overall LGTM and the majority of it is cleanups, so I think once the
> stack depot patches are in the mm tree, just send v1 of this series.

Will do, thank you for looking at the patches!
From: Andrey Konovalov <andreyknvl@google.com>

This series updates KASAN to save alloc and free stack traces for
secondary-level allocators that cache and reuse allocations internally
instead of giving them back to the underlying allocator (e.g. mempool).

As a part of this change, introduce and document a set of KASAN hooks:

bool kasan_mempool_poison_pages(struct page *page, unsigned int order);
void kasan_mempool_unpoison_pages(struct page *page, unsigned int order);
bool kasan_mempool_poison_object(void *ptr);
void kasan_mempool_unpoison_object(void *ptr, size_t size);

and use them in the mempool code.

Besides mempool, skbuff and io_uring also cache allocations and already
use KASAN hooks to poison those. Their code is updated to use the new
mempool hooks.

The new hooks save alloc and free stack traces (for normal kmalloc and
slab objects; stack traces for large kmalloc objects and page_alloc are
not supported by KASAN yet), improve the readability of the users' code,
and also allow the users to prevent double-free and invalid-free bugs;
see the patches for the details.

I'm posting this series as an RFC, as it has a few non-trivial-to-resolve
conflicts with the stack depot eviction patches. I'll rebase the series and
resolve the conflicts once the stack depot patches are in the mm tree.

Andrey Konovalov (20):
  kasan: rename kasan_slab_free_mempool to kasan_mempool_poison_object
  kasan: move kasan_mempool_poison_object
  kasan: document kasan_mempool_poison_object
  kasan: add return value for kasan_mempool_poison_object
  kasan: introduce kasan_mempool_unpoison_object
  kasan: introduce kasan_mempool_poison_pages
  kasan: introduce kasan_mempool_unpoison_pages
  kasan: clean up __kasan_mempool_poison_object
  kasan: save free stack traces for slab mempools
  kasan: clean up and rename ____kasan_kmalloc
  kasan: introduce poison_kmalloc_large_redzone
  kasan: save alloc stack traces for mempool
  mempool: use new mempool KASAN hooks
  mempool: introduce mempool_use_prealloc_only
  kasan: add mempool tests
  kasan: rename pagealloc tests
  kasan: reorder tests
  kasan: rename and document kasan_(un)poison_object_data
  skbuff: use mempool KASAN hooks
  io_uring: use mempool KASAN hook

 include/linux/kasan.h   | 161 +++++++-
 include/linux/mempool.h |   2 +
 io_uring/alloc_cache.h  |   5 +-
 mm/kasan/common.c       | 221 ++++++----
 mm/kasan/kasan_test.c   | 876 +++++++++++++++++++++++++++-------------
 mm/mempool.c            |  49 ++-
 mm/slab.c               |  10 +-
 mm/slub.c               |   4 +-
 net/core/skbuff.c       |  10 +-
 9 files changed, 940 insertions(+), 398 deletions(-)
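[Editor's note] To make the intended pairing of the hooks concrete, here is a rough sketch (my own illustration, not code from the series) of how a secondary-level allocator that caches slab/kmalloc-backed objects might call the object hooks. The struct object_cache, its fields, and the cache_put()/cache_get() helpers are hypothetical; only the kasan_mempool_poison_object()/kasan_mempool_unpoison_object() signatures come from the cover letter above.

```c
/*
 * Hypothetical object cache; only the KASAN hook calls reflect the
 * interface introduced by this series.
 */
#include <linux/kernel.h>
#include <linux/kasan.h>
#include <linux/slab.h>

struct object_cache {
	void *slots[16];	/* cached kmalloc'ed objects */
	unsigned int nr;	/* number of cached objects */
	size_t obj_size;	/* size originally passed to kmalloc() */
};

/* Try to stash @obj instead of freeing it; returns true if consumed. */
static bool cache_put(struct object_cache *cache, void *obj)
{
	if (cache->nr == ARRAY_SIZE(cache->slots))
		return false;	/* cache full: caller kfree()s the object */

	/*
	 * Poison the cached object and record a "free" stack trace.
	 * A false return means KASAN detected a double-free or
	 * invalid-free; treat the object as consumed so the buggy
	 * pointer is neither cached nor freed again.
	 */
	if (!kasan_mempool_poison_object(obj))
		return true;

	cache->slots[cache->nr++] = obj;
	return true;
}

/* Reuse a cached object if one is available. */
static void *cache_get(struct object_cache *cache)
{
	void *obj;

	if (!cache->nr)
		return NULL;	/* empty: caller falls back to kmalloc() */

	obj = cache->slots[--cache->nr];
	/* Unpoison the object and record an "alloc" stack trace. */
	kasan_mempool_unpoison_object(obj, cache->obj_size);
	return obj;
}
```

A pool that caches whole pages would presumably use the kasan_mempool_poison_pages()/kasan_mempool_unpoison_pages() pair in the same way, passing the page order instead of an object size.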