Message ID: 20210630134943.20781-2-yee.lee@mediatek.com
State:      New
Series:     [v3,1/1] kasan: Add memzero init for unaligned size under SLUB debug
On Wed, Jun 30, 2021 at 09:49PM +0800, yee.lee@mediatek.com wrote:
> From: Yee Lee <yee.lee@mediatek.com>
>
> Issue: when SLUB debug is on, hwtag kasan_unpoison() would overwrite
> the redzone of objects with unaligned sizes.
>
> An additional memzero_explicit() path is added, replacing init by the
> hwtag instruction for unaligned sizes in SLUB debug mode.
>
> The penalty is acceptable since this path is only taken in debug mode,
> not in production builds. A comment block is added for explanation.
>
> Signed-off-by: Yee Lee <yee.lee@mediatek.com>
> Suggested-by: Marco Elver <elver@google.com>
> Suggested-by: Andrey Konovalov <andreyknvl@gmail.com>
> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>

In future, please add the changes made in each version after an
additional '---'. Example:

---
v2:
* Use IS_ENABLED(CONFIG_SLUB_DEBUG) in if-statement.

> ---
>  mm/kasan/kasan.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 8f450bc28045..6f698f13dbe6 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -387,6 +387,16 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
>
>  	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
>  		return;
> +	/*
> +	 * Explicitly initialize the memory with the precise object size
> +	 * to avoid overwriting the SLAB redzone. This disables initialization
> +	 * in the arch code and may thus lead to performance penalty.
> +	 * The penalty is accepted since SLAB redzones aren't enabled in production builds.
> +	 */

Can we please format the comment properly:

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 6f698f13dbe6..1972ec5736cb 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -388,10 +388,10 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
 	/*
-	 * Explicitly initialize the memory with the precise object size
-	 * to avoid overwriting the SLAB redzone. This disables initialization
-	 * in the arch code and may thus lead to performance penalty.
-	 * The penalty is accepted since SLAB redzones aren't enabled in production builds.
+	 * Explicitly initialize the memory with the precise object size to
+	 * avoid overwriting the SLAB redzone. This disables initialization in
+	 * the arch code and may thus lead to performance penalty. The penalty
+	 * is accepted since SLAB redzones aren't enabled in production builds.
 	 */
 	if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
 		init = false;

> +	if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
> +		init = false;
> +		memzero_explicit((void *)addr, size);
> +	}
>  	size = round_up(size, KASAN_GRANULE_SIZE);
>
>  	hw_set_mem_tag_range((void *)addr, size, tag, init);

I think this solution might be fine for now, as I don't see an easy way
to do this without some major refactor to use kmem_cache_debug_flags().

However, I think there's an intermediate solution where we only check
the static key 'slub_debug_enabled'. Because I've checked, and various
major distros _do_ enable CONFIG_SLUB_DEBUG; the static branch just
makes sure there's no performance overhead.

Checking the static branch requires including mm/slab.h into
mm/kasan/kasan.h, which we currently don't do and perhaps wanted to
avoid. Although I don't see a reason for that, because there's no
circular dependency even if we did.

Andrey, any opinion?
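[ Aside, not part of the thread: the redzone overwrite being discussed
comes from the granule rounding in kasan_unpoison(). A minimal userspace
sketch of that arithmetic, assuming a 16-byte granule as on arm64 MTE
(the constants and helper names here are illustrative, not the kernel's
actual definitions): ]

```c
#include <assert.h>
#include <stddef.h>

/* Assumed granule size mirroring arm64 MTE; not taken from kernel headers. */
#define KASAN_GRANULE_SIZE 16
#define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1)

/* round_up() as used in kasan_unpoison(): next multiple of the granule. */
static size_t round_up_granule(size_t size)
{
	return (size + KASAN_GRANULE_MASK) & ~(size_t)KASAN_GRANULE_MASK;
}

/*
 * Bytes past the end of the object that the tag-setting path would also
 * initialize when init=true -- for an unaligned object, exactly the
 * slice that can clobber a SLUB redzone placed right after it.
 */
static size_t overspill(size_t size)
{
	return round_up_granule(size) - size;
}
```

[ E.g. a 40-byte object rounds up to 48, so the hardware init path would
write 8 bytes into the adjacent redzone; a 48-byte object spills nothing,
which is why only unaligned sizes need the explicit memzero. ]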
In case you think checking the static key is the better solution, I
think the below would work together with the prerequisite patch at the
end:

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 1972ec5736cb..9130d025612c 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,8 @@
 #include <linux/kfence.h>
 #include <linux/stackdepot.h>
 
+#include "../slab.h"
+
 #ifdef CONFIG_KASAN_HW_TAGS
 
 #include <linux/static_key.h>
@@ -393,7 +395,8 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 	 * the arch code and may thus lead to performance penalty. The penalty
 	 * is accepted since SLAB redzones aren't enabled in production builds.
 	 */
-	if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
+	if (slub_debug_enabled_unlikely() &&
+	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
 		init = false;
 		memzero_explicit((void *)addr, size);
 	}

[ Note: You can pick up the below patch by extracting it from the email
  and running 'git am -s <file>'. You could then use it as part of a
  patch series together with your original patch. ]

From: Marco Elver <elver@google.com>
Date: Wed, 30 Jun 2021 20:56:57 +0200
Subject: [PATCH] mm: introduce helper to check slub_debug_enabled

Introduce a helper to check slub_debug_enabled, so that we can confine
the use of #ifdef to the definition of the slub_debug_enabled_unlikely()
helper.
Signed-off-by: Marco Elver <elver@google.com>
---
 mm/slab.h | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 18c1927cd196..9439da434712 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -215,10 +215,18 @@ DECLARE_STATIC_KEY_TRUE(slub_debug_enabled);
 DECLARE_STATIC_KEY_FALSE(slub_debug_enabled);
 #endif
 extern void print_tracking(struct kmem_cache *s, void *object);
+static inline bool slub_debug_enabled_unlikely(void)
+{
+	return static_branch_unlikely(&slub_debug_enabled);
+}
 #else
 static inline void print_tracking(struct kmem_cache *s, void *object)
 {
 }
+static inline bool slub_debug_enabled_unlikely(void)
+{
+	return false;
+}
 #endif
 
 /*
@@ -228,11 +236,10 @@ static inline void print_tracking(struct kmem_cache *s, void *object)
  */
 static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t flags)
 {
-#ifdef CONFIG_SLUB_DEBUG
-	VM_WARN_ON_ONCE(!(flags & SLAB_DEBUG_FLAGS));
-	if (static_branch_unlikely(&slub_debug_enabled))
+	if (IS_ENABLED(CONFIG_SLUB_DEBUG))
+		VM_WARN_ON_ONCE(!(flags & SLAB_DEBUG_FLAGS));
+	if (slub_debug_enabled_unlikely())
 		return s->flags & flags;
-#endif
 	return false;
 }
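[ Aside, not part of the thread: the helper above follows the common
kernel pattern of confining an #ifdef to a single inline that compiles
to a constant when the config is off. A userspace model of that pattern
(a plain bool stands in for the kernel's runtime-patched static key, and
SLUB_DEBUG_CONFIGURED stands in for CONFIG_SLUB_DEBUG -- both are
illustrative names, not kernel symbols): ]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for CONFIG_SLUB_DEBUG; set to 0 to model a !SLUB_DEBUG build. */
#define SLUB_DEBUG_CONFIGURED 1

#if SLUB_DEBUG_CONFIGURED
/*
 * In the kernel this is a static key, a branch patched at runtime with
 * no load in the fast path; a plain bool models the same observable
 * behavior as static_branch_unlikely(&slub_debug_enabled).
 */
static bool slub_debug_key = false;

static inline bool slub_debug_enabled_unlikely(void)
{
	return slub_debug_key;
}
#else
/*
 * With the config off, the helper folds to constant false and every
 * caller's if-branch is eliminated -- the point of keeping the #ifdef
 * only here, so call sites stay #ifdef-free.
 */
static inline bool slub_debug_enabled_unlikely(void)
{
	return false;
}
#endif
```

[ Callers such as kasan_unpoison() can then test the helper directly,
paying nothing unless slub_debug was enabled on the command line. ]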
On Wed, Jun 30, 2021 at 10:13 PM Marco Elver <elver@google.com> wrote:
>
> > +	if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
> > +		init = false;
> > +		memzero_explicit((void *)addr, size);
> > +	}
> >  	size = round_up(size, KASAN_GRANULE_SIZE);
> >
> >  	hw_set_mem_tag_range((void *)addr, size, tag, init);
>
> I think this solution might be fine for now, as I don't see an easy way
> to do this without some major refactor to use kmem_cache_debug_flags().
>
> However, I think there's an intermediate solution where we only check
> the static key 'slub_debug_enabled'. Because I've checked, and various
> major distros _do_ enable CONFIG_SLUB_DEBUG; the static branch just
> makes sure there's no performance overhead.
>
> Checking the static branch requires including mm/slab.h into
> mm/kasan/kasan.h, which we currently don't do and perhaps wanted to
> avoid. Although I don't see a reason for that, because there's no
> circular dependency even if we did.

Most likely this won't be a problem. We already include ../slab.h into
many mm/kasan/*.c files.

> Andrey, any opinion?

I like this approach. It is easy to implement and is better than
checking only CONFIG_SLUB_DEBUG. Thanks, Marco!
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 8f450bc28045..6f698f13dbe6 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -387,6 +387,16 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
+	/*
+	 * Explicitly initialize the memory with the precise object size
+	 * to avoid overwriting the SLAB redzone. This disables initialization
+	 * in the arch code and may thus lead to performance penalty.
+	 * The penalty is accepted since SLAB redzones aren't enabled in production builds.
+	 */
+	if (IS_ENABLED(CONFIG_SLUB_DEBUG) && init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
+		init = false;
+		memzero_explicit((void *)addr, size);
+	}
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
 	hw_set_mem_tag_range((void *)addr, size, tag, init);
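[ Aside, not part of the thread: the condition the patch adds can be
read as a pure predicate -- fall back to a byte-precise memzero only
when debug redzones may be present, the caller asked for init, and the
size does not fill whole granules. A userspace sketch of that decision
(illustrative names, 16-byte granule assumed): ]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Assumed granule size mirroring arm64 MTE; not from kernel headers. */
#define KASAN_GRANULE_SIZE 16
#define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1)

/*
 * True when kasan_unpoison() must zero the object byte-precisely and
 * clear 'init', instead of letting the tag-setting instructions
 * initialize whole rounded-up granules (which would spill into the
 * SLUB redzone behind an unaligned object).
 */
static bool need_explicit_memzero(bool slub_debug_on, bool init, size_t size)
{
	return slub_debug_on && init && (size & KASAN_GRANULE_MASK);
}
```

[ All three conditions matter: with slub_debug off there is no redzone
to protect, with init=false nothing is written anyway, and a
granule-aligned size has no overspill to begin with. ]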