Message ID | 20221130085451.3390992-2-feng.tang@intel.com
---|---
State | New
Series | [v3,1/2] mm/slub, kunit: add SLAB_SKIP_KFENCE flag for cache creation
On Wed, Nov 30, 2022 at 04:54:51PM +0800, Feng Tang wrote:
> kmalloc redzone check for slub has been merged, and it's better to add
> a kunit case for it, which is inspired by a real-world case as described
> in commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):
>
> "
> octeon-hcd will crash the kernel when SLOB is used. This usually happens
> after the 18-byte control transfer when a device descriptor is read.
> The DMA engine is always transferring full 32-bit words and if the
> transfer is shorter, some random garbage appears after the buffer.
> The problem is not visible with SLUB since it rounds up the allocations
> to word boundary, and the extra bytes will go undetected.
> "
>
> To avoid interrupting the normal functioning of kmalloc caches, a
> kmem_cache mimicking kmalloc cache is created with similar flags, and
> kmalloc_trace() is used to really test the orig_size and redzone setup.
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
> Changelog:
>
> since v2:
>  * only add SLAB_KMALLOC to SLAB_CACHE_FLAGS and SLAB_FLAGS_PERMITTED,
>    and use new wrapper of cache creation (Vlastimil Babka)
>
> since v1:
>  * create a new cache mimicking kmalloc cache, reduce dependency
>    over global slub_debug setting (Vlastimil Babka)
>
>  lib/slub_kunit.c | 22 ++++++++++++++++++++++
>  mm/slab.h        |  4 +++-
>  2 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index 5b0c8e7eb6dc..ff24879e3afe 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -135,6 +135,27 @@ static void test_clobber_redzone_free(struct kunit *test)
>  	kmem_cache_destroy(s);
>  }
>
> +static void test_kmalloc_redzone_access(struct kunit *test)
> +{
> +	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_kmalloc", 32,
> +				SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE);
> +	u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);
> +
> +	kasan_disable_current();
> +
> +	/* Suppress the -Warray-bounds warning */
> +	OPTIMIZER_HIDE_VAR(p);
> +	p[18] = 0xab;
> +	p[19] = 0xab;
> +
> +	kmem_cache_free(s, p);
> +	validate_slab_cache(s);
> +	KUNIT_EXPECT_EQ(test, 2, slab_errors);
> +
> +	kasan_enable_current();
> +	kmem_cache_destroy(s);
> +}
> +
>  static int test_init(struct kunit *test)
>  {
>  	slab_errors = 0;
> @@ -154,6 +175,7 @@ static struct kunit_case test_cases[] = {
>  #endif
>
>  	KUNIT_CASE(test_clobber_redzone_free),
> +	KUNIT_CASE(test_kmalloc_redzone_access),
>  	{}
>  };
>
> diff --git a/mm/slab.h b/mm/slab.h
> index c71590f3a22b..7cc432969945 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -344,7 +344,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  			  SLAB_ACCOUNT)
>  #elif defined(CONFIG_SLUB)
>  #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
> -			  SLAB_TEMPORARY | SLAB_ACCOUNT | SLAB_NO_USER_FLAGS)
> +			  SLAB_TEMPORARY | SLAB_ACCOUNT | \
> +			  SLAB_NO_USER_FLAGS | SLAB_KMALLOC)
>  #else
>  #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
>  #endif
> @@ -364,6 +365,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  			      SLAB_RECLAIM_ACCOUNT | \
>  			      SLAB_TEMPORARY | \
>  			      SLAB_ACCOUNT | \
> +			      SLAB_KMALLOC | \
>  			      SLAB_NO_USER_FLAGS)
>
>  bool __kmem_cache_empty(struct kmem_cache *);
> --
> 2.34.1
>

Looks good to me.

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
On 11/30/22 09:54, Feng Tang wrote:
> kmalloc redzone check for slub has been merged, and it's better to add
> a kunit case for it, which is inspired by a real-world case as described
> in commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):
>
> "
> octeon-hcd will crash the kernel when SLOB is used. This usually happens
> after the 18-byte control transfer when a device descriptor is read.
> The DMA engine is always transferring full 32-bit words and if the
> transfer is shorter, some random garbage appears after the buffer.
> The problem is not visible with SLUB since it rounds up the allocations
> to word boundary, and the extra bytes will go undetected.
> "
>
> To avoid interrupting the normal functioning of kmalloc caches, a
> kmem_cache mimicking kmalloc cache is created with similar flags, and
> kmalloc_trace() is used to really test the orig_size and redzone setup.
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
> Changelog:
>
> since v2:
>  * only add SLAB_KMALLOC to SLAB_CACHE_FLAGS and SLAB_FLAGS_PERMITTED,
>    and use new wrapper of cache creation (Vlastimil Babka)
>
> since v1:
>  * create a new cache mimicking kmalloc cache, reduce dependency
>    over global slub_debug setting (Vlastimil Babka)
>
>  lib/slub_kunit.c | 22 ++++++++++++++++++++++
>  mm/slab.h        |  4 +++-
>  2 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index 5b0c8e7eb6dc..ff24879e3afe 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -135,6 +135,27 @@ static void test_clobber_redzone_free(struct kunit *test)
>  	kmem_cache_destroy(s);
>  }
>
> +static void test_kmalloc_redzone_access(struct kunit *test)
> +{
> +	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_kmalloc", 32,
> +				SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE);
> +	u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);
> +
> +	kasan_disable_current();
> +
> +	/* Suppress the -Warray-bounds warning */
> +	OPTIMIZER_HIDE_VAR(p);
> +	p[18] = 0xab;
> +	p[19] = 0xab;
> +
> +	kmem_cache_free(s, p);
> +	validate_slab_cache(s);
> +	KUNIT_EXPECT_EQ(test, 2, slab_errors);

With this ordering the expectation was failing, as slab_errors was 0; had to
fix it up to look more like TestSlub_RZ_alloc:

> +	kasan_enable_current();
> +	kmem_cache_destroy(s);
> +}
> +

--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -148,11 +148,11 @@ static void test_kmalloc_redzone_access(struct kunit *test)
 	p[18] = 0xab;
 	p[19] = 0xab;

-	kmem_cache_free(s, p);
 	validate_slab_cache(s);
 	KUNIT_EXPECT_EQ(test, 2, slab_errors);

 	kasan_enable_current();
+	kmem_cache_free(s, p);
 	kmem_cache_destroy(s);
 }

With that, added both to slab.git branch slab/for-6.2/kmalloc_redzone

Thanks!
On Thu, Dec 01, 2022 at 12:05:41AM +0100, Vlastimil Babka wrote:
[...]
> > diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> > index 5b0c8e7eb6dc..ff24879e3afe 100644
> > --- a/lib/slub_kunit.c
> > +++ b/lib/slub_kunit.c
> > @@ -135,6 +135,27 @@ static void test_clobber_redzone_free(struct kunit *test)
> >  	kmem_cache_destroy(s);
> >  }
> >
> > +static void test_kmalloc_redzone_access(struct kunit *test)
> > +{
> > +	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_kmalloc", 32,
> > +				SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE);
> > +	u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);
> > +
> > +	kasan_disable_current();
> > +
> > +	/* Suppress the -Warray-bounds warning */
> > +	OPTIMIZER_HIDE_VAR(p);
> > +	p[18] = 0xab;
> > +	p[19] = 0xab;
> > +
> > +	kmem_cache_free(s, p);
> > +	validate_slab_cache(s);
> > +	KUNIT_EXPECT_EQ(test, 2, slab_errors);
>
> With this ordering the expectation was failing as slab_errors was 0, had to
> fix it up to look more like TestSlub_RZ_alloc:

Thanks for the catch and fix! I checked why it worked in my test, and it is
related to kasan. My test environment has both kasan and kfence enabled, and
kasan can delay the object freeing, so with the original code, when
validate_slab_cache() is called, the object is not freed yet and still gets
redzone-checked.

> > +	kasan_enable_current();
> > +	kmem_cache_destroy(s);
> > +}
> > +
>
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -148,11 +148,11 @@ static void test_kmalloc_redzone_access(struct kunit *test)
>  	p[18] = 0xab;
>  	p[19] = 0xab;
>
> -	kmem_cache_free(s, p);
>  	validate_slab_cache(s);
>  	KUNIT_EXPECT_EQ(test, 2, slab_errors);
>
>  	kasan_enable_current();
> +	kmem_cache_free(s, p);
>  	kmem_cache_destroy(s);
>  }
>
> With that, added both to slab.git branch slab/for-6.2/kmalloc_redzone
> Thanks!

Thanks!

- Feng
diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index 5b0c8e7eb6dc..ff24879e3afe 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -135,6 +135,27 @@ static void test_clobber_redzone_free(struct kunit *test)
 	kmem_cache_destroy(s);
 }

+static void test_kmalloc_redzone_access(struct kunit *test)
+{
+	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_kmalloc", 32,
+				SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE);
+	u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);
+
+	kasan_disable_current();
+
+	/* Suppress the -Warray-bounds warning */
+	OPTIMIZER_HIDE_VAR(p);
+	p[18] = 0xab;
+	p[19] = 0xab;
+
+	kmem_cache_free(s, p);
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kasan_enable_current();
+	kmem_cache_destroy(s);
+}
+
 static int test_init(struct kunit *test)
 {
 	slab_errors = 0;
@@ -154,6 +175,7 @@ static struct kunit_case test_cases[] = {
 #endif

 	KUNIT_CASE(test_clobber_redzone_free),
+	KUNIT_CASE(test_kmalloc_redzone_access),
 	{}
 };

diff --git a/mm/slab.h b/mm/slab.h
index c71590f3a22b..7cc432969945 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -344,7 +344,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 			  SLAB_ACCOUNT)
 #elif defined(CONFIG_SLUB)
 #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
-			  SLAB_TEMPORARY | SLAB_ACCOUNT | SLAB_NO_USER_FLAGS)
+			  SLAB_TEMPORARY | SLAB_ACCOUNT | \
+			  SLAB_NO_USER_FLAGS | SLAB_KMALLOC)
 #else
 #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
 #endif
@@ -364,6 +365,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 			      SLAB_RECLAIM_ACCOUNT | \
 			      SLAB_TEMPORARY | \
 			      SLAB_ACCOUNT | \
+			      SLAB_KMALLOC | \
 			      SLAB_NO_USER_FLAGS)

 bool __kmem_cache_empty(struct kmem_cache *);
kmalloc redzone check for slub has been merged, and it's better to add
a kunit case for it, which is inspired by a real-world case as described
in commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):

"
octeon-hcd will crash the kernel when SLOB is used. This usually happens
after the 18-byte control transfer when a device descriptor is read.
The DMA engine is always transferring full 32-bit words and if the
transfer is shorter, some random garbage appears after the buffer.
The problem is not visible with SLUB since it rounds up the allocations
to word boundary, and the extra bytes will go undetected.
"

To avoid interrupting the normal functioning of kmalloc caches, a
kmem_cache mimicking kmalloc cache is created with similar flags, and
kmalloc_trace() is used to really test the orig_size and redzone setup.

Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
Changelog:

since v2:
 * only add SLAB_KMALLOC to SLAB_CACHE_FLAGS and SLAB_FLAGS_PERMITTED,
   and use new wrapper of cache creation (Vlastimil Babka)

since v1:
 * create a new cache mimicking kmalloc cache, reduce dependency
   over global slub_debug setting (Vlastimil Babka)

 lib/slub_kunit.c | 22 ++++++++++++++++++++++
 mm/slab.h        |  4 +++-
 2 files changed, 25 insertions(+), 1 deletion(-)