Message ID | 7fbac00e4d155cf529517a165a48351dcf3c3156.1610553774.git.andreyknvl@google.com (mailing list archive)
---|---
State | New, archived |
Series | kasan: fixes for 5.11-rc
On 1/13/21 5:03 PM, Andrey Konovalov wrote:
> A few places where SLUB accesses object's data or metadata were missed in
> a previous patch. This leads to false positives with hardware tag-based
> KASAN when bulk allocations are used with init_on_alloc/free.
>
> Fix the false-positives by resetting pointer tags during these accesses.
>
> Link: https://linux-review.googlesource.com/id/I50dd32838a666e173fe06c3c5c766f2c36aae901
> Fixes: aa1ef4d7b3f67 ("kasan, mm: reset tags when accessing metadata")
> Reported-by: Dmitry Vyukov <dvyukov@google.com>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/slub.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index dc5b42e700b8..75fb097d990d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2791,7 +2791,8 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
>  						   void *obj)
>  {
>  	if (unlikely(slab_want_init_on_free(s)) && obj)
> -		memset((void *)((char *)obj + s->offset), 0, sizeof(void *));
> +		memset((void *)((char *)kasan_reset_tag(obj) + s->offset),
> +			0, sizeof(void *));
>  }
>
>  /*
> @@ -2883,7 +2884,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
>  		stat(s, ALLOC_FASTPATH);
>  	}
>
> -	maybe_wipe_obj_freeptr(s, kasan_reset_tag(object));
> +	maybe_wipe_obj_freeptr(s, object);

And in that case the reset was unnecessary, right. (commit log only mentions
adding missing resets).

>  	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
>  		memset(kasan_reset_tag(object), 0, s->object_size);
> @@ -3329,7 +3330,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>  		int j;
>
>  		for (j = 0; j < i; j++)
> -			memset(p[j], 0, s->object_size);
> +			memset(kasan_reset_tag(p[j]), 0, s->object_size);
>  	}
>
>  	/* memcg and kmem_cache debug support */
On Wed, Jan 13, 2021 at 6:25 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 1/13/21 5:03 PM, Andrey Konovalov wrote:
> > A few places where SLUB accesses object's data or metadata were missed in
> > a previous patch. This leads to false positives with hardware tag-based
> > KASAN when bulk allocations are used with init_on_alloc/free.
> >
> > Fix the false-positives by resetting pointer tags during these accesses.
> >
> > Link: https://linux-review.googlesource.com/id/I50dd32838a666e173fe06c3c5c766f2c36aae901
> > Fixes: aa1ef4d7b3f67 ("kasan, mm: reset tags when accessing metadata")
> > Reported-by: Dmitry Vyukov <dvyukov@google.com>
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
>
> > ---
> >  mm/slub.c | 7 ++++---
> >  1 file changed, 4 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index dc5b42e700b8..75fb097d990d 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2791,7 +2791,8 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
> >  						   void *obj)
> >  {
> >  	if (unlikely(slab_want_init_on_free(s)) && obj)
> > -		memset((void *)((char *)obj + s->offset), 0, sizeof(void *));
> > +		memset((void *)((char *)kasan_reset_tag(obj) + s->offset),
> > +			0, sizeof(void *));
> >  }
> >
> >  /*
> > @@ -2883,7 +2884,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
> >  		stat(s, ALLOC_FASTPATH);
> >  	}
> >
> > -	maybe_wipe_obj_freeptr(s, kasan_reset_tag(object));
> > +	maybe_wipe_obj_freeptr(s, object);
>
> And in that case the reset was unnecessary, right. (commit log only mentions
> adding missing resets).

The reset has been moved into maybe_wipe_obj_freeptr(). I'll mention it in the
changelog in v2.

Thanks!
diff --git a/mm/slub.c b/mm/slub.c
index dc5b42e700b8..75fb097d990d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2791,7 +2791,8 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
 						   void *obj)
 {
 	if (unlikely(slab_want_init_on_free(s)) && obj)
-		memset((void *)((char *)obj + s->offset), 0, sizeof(void *));
+		memset((void *)((char *)kasan_reset_tag(obj) + s->offset),
+			0, sizeof(void *));
 }
 
 /*
@@ -2883,7 +2884,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 		stat(s, ALLOC_FASTPATH);
 	}
 
-	maybe_wipe_obj_freeptr(s, kasan_reset_tag(object));
+	maybe_wipe_obj_freeptr(s, object);
 
 	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
 		memset(kasan_reset_tag(object), 0, s->object_size);
@@ -3329,7 +3330,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 		int j;
 
 		for (j = 0; j < i; j++)
-			memset(p[j], 0, s->object_size);
+			memset(kasan_reset_tag(p[j]), 0, s->object_size);
 	}
 
 	/* memcg and kmem_cache debug support */
A few places where SLUB accesses object's data or metadata were missed in
a previous patch. This leads to false positives with hardware tag-based
KASAN when bulk allocations are used with init_on_alloc/free.

Fix the false-positives by resetting pointer tags during these accesses.

Link: https://linux-review.googlesource.com/id/I50dd32838a666e173fe06c3c5c766f2c36aae901
Fixes: aa1ef4d7b3f67 ("kasan, mm: reset tags when accessing metadata")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/slub.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)