Message ID | 20210916123920.48704-3-linmiaohe@huawei.com (mailing list archive) |
---|---|
State | New |
Series | Fixups for slub |
On 9/16/21 14:39, Miaohe Lin wrote:
> If an object's reuse is delayed, it will be excluded from the reconstructed
> freelist. But we forgot to adjust the cnt accordingly. So there will be
> a mismatch between the reconstructed freelist depth and cnt. This will lead
> to free_debug_processing() complaining about the freelist count or an
> incorrect slub inuse count.
>
> Fixes: c3895391df38 ("kasan, slub: fix handling of kasan_slab_free hook")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

I was worried that taking a pointer to the cnt parameter, when it's hardcoded
to 1, would destroy inlining. Luckily, it looks like it does not; the function
is just renamed:

> ./scripts/bloat-o-meter mm/slub.o slub.o.after
add/remove: 1/1 grow/shrink: 0/0 up/down: 292/-292 (0)
Function                                     old     new   delta
slab_free_freelist_hook.constprop              -     292    +292
slab_free_freelist_hook                      292       -    -292

> ---
>  mm/slub.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index ed160b6c54f8..a56a6423d4e8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1701,7 +1701,8 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
>  }
>
>  static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> -					   void **head, void **tail)
> +					   void **head, void **tail,
> +					   int *cnt)
>  {
>
>  	void *object;
> @@ -1728,6 +1729,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>  			*head = object;
>  			if (!*tail)
>  				*tail = object;
> +		} else {
> +			/*
> +			 * Adjust the reconstructed freelist depth
> +			 * accordingly if object's reuse is delayed.
> +			 */
> +			--(*cnt);
>  		}
>  	} while (object != old_tail);
>
> @@ -3480,7 +3487,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
>  	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
>  	 * to remove objects, whose reuse must be delayed.
>  	 */
> -	if (slab_free_freelist_hook(s, &head, &tail))
> +	if (slab_free_freelist_hook(s, &head, &tail, &cnt))
>  		do_slab_free(s, page, head, tail, cnt, addr);
>  }
On 2021/10/5 17:57, Vlastimil Babka wrote:
> On 9/16/21 14:39, Miaohe Lin wrote:
>> If an object's reuse is delayed, it will be excluded from the reconstructed
>> freelist. But we forgot to adjust the cnt accordingly. So there will be
>> a mismatch between the reconstructed freelist depth and cnt. This will lead
>> to free_debug_processing() complaining about the freelist count or an
>> incorrect slub inuse count.
>>
>> Fixes: c3895391df38 ("kasan, slub: fix handling of kasan_slab_free hook")
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>
> I was worried that taking a pointer to the cnt parameter, when it's hardcoded
> to 1, would destroy inlining. Luckily, it looks like it does not; the function
> is just renamed:

Many thanks for your review and thoughtful consideration! :)

>> ./scripts/bloat-o-meter mm/slub.o slub.o.after
> add/remove: 1/1 grow/shrink: 0/0 up/down: 292/-292 (0)
> Function                                     old     new   delta
> slab_free_freelist_hook.constprop              -     292    +292
> slab_free_freelist_hook                      292       -    -292
>
>> ---
>>  mm/slub.c | 11 +++++++++--
>>  1 file changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index ed160b6c54f8..a56a6423d4e8 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1701,7 +1701,8 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
>>  }
>>
>>  static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>> -					   void **head, void **tail)
>> +					   void **head, void **tail,
>> +					   int *cnt)
>>  {
>>
>>  	void *object;
>> @@ -1728,6 +1729,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>>  			*head = object;
>>  			if (!*tail)
>>  				*tail = object;
>> +		} else {
>> +			/*
>> +			 * Adjust the reconstructed freelist depth
>> +			 * accordingly if object's reuse is delayed.
>> +			 */
>> +			--(*cnt);
>>  		}
>>  	} while (object != old_tail);
>>
>> @@ -3480,7 +3487,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
>>  	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
>>  	 * to remove objects, whose reuse must be delayed.
>>  	 */
>> -	if (slab_free_freelist_hook(s, &head, &tail))
>> +	if (slab_free_freelist_hook(s, &head, &tail, &cnt))
>>  		do_slab_free(s, page, head, tail, cnt, addr);
>>  }
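The .constprop suffix in the bloat-o-meter output above comes from GCC's
interprocedural constant propagation: even when a parameter's address is
taken, the compiler can emit a specialized clone of a function for call
sites that pass a compile-time constant, rather than abandoning the
optimization. Below is a minimal userspace sketch of the same pattern
(hypothetical names, not the kernel code; whether a clone actually appears
depends on the compiler version and its heuristics):

#include <stdio.h>

/*
 * Callee takes cnt by pointer, mirroring slab_free_freelist_hook().
 * noinline stands in for a body too large to inline; GCC's IPA constant
 * propagation may still specialize it for a caller that always passes
 * the same constant, emitting a ".constprop" clone in the symbol table.
 */
static __attribute__((noinline)) int adjust_cnt(int *cnt)
{
	--(*cnt);	/* e.g. one object's reuse was delayed */
	return *cnt;
}

/* Caller hardcodes cnt to 1, mirroring the single-object free path. */
static int free_one(void)
{
	int cnt = 1;

	return adjust_cnt(&cnt);
}

int main(void)
{
	printf("%d\n", free_one());	/* prints 0 */
	return 0;
}

Compiling with gcc -O2 -c and listing symbols with nm may show an
adjust_cnt.constprop.0 clone. In the measurement above, the clone of
slab_free_freelist_hook is the same size as the original (292 bytes), so
the change is code-size neutral.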
diff --git a/mm/slub.c b/mm/slub.c
index ed160b6c54f8..a56a6423d4e8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1701,7 +1701,8 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
 }
 
 static inline bool slab_free_freelist_hook(struct kmem_cache *s,
-					   void **head, void **tail)
+					   void **head, void **tail,
+					   int *cnt)
 {
 
 	void *object;
@@ -1728,6 +1729,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 			*head = object;
 			if (!*tail)
 				*tail = object;
+		} else {
+			/*
+			 * Adjust the reconstructed freelist depth
+			 * accordingly if object's reuse is delayed.
+			 */
+			--(*cnt);
 		}
 	} while (object != old_tail);
 
@@ -3480,7 +3487,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
 	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
 	 * to remove objects, whose reuse must be delayed.
 	 */
-	if (slab_free_freelist_hook(s, &head, &tail))
+	if (slab_free_freelist_hook(s, &head, &tail, &cnt))
 		do_slab_free(s, page, head, tail, cnt, addr);
 }
If an object's reuse is delayed, it will be excluded from the reconstructed
freelist. But we forgot to adjust the cnt accordingly. So there will be
a mismatch between the reconstructed freelist depth and cnt. This will lead
to free_debug_processing() complaining about the freelist count or an
incorrect slub inuse count.

Fixes: c3895391df38 ("kasan, slub: fix handling of kasan_slab_free hook")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/slub.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)
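To make the restored invariant concrete, here is a toy userspace model,
with a hypothetical struct obj and rebuild_freelist() standing in for the
real freelist and for slab_free_freelist_hook(): an object whose reuse is
delayed (KASAN quarantine in the real code) is dropped from the rebuilt
list, so it must also be subtracted from cnt, or the count later handed to
do_slab_free() disagrees with the actual list depth.

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for a slab object; "quarantined" models KASAN
 * delaying the object's reuse. */
struct obj {
	struct obj *next;
	bool quarantined;
};

/* Rebuild the freelist, skipping quarantined objects, and keep *cnt
 * equal to the number of objects actually kept -- the invariant the
 * patch restores. */
static struct obj *rebuild_freelist(struct obj *head, int *cnt)
{
	struct obj *new_head = NULL;

	while (head) {
		struct obj *next = head->next;

		if (!head->quarantined) {
			head->next = new_head;	/* keep on the new list */
			new_head = head;
		} else {
			--(*cnt);	/* excluded: keep cnt in sync */
		}
		head = next;
	}
	return new_head;
}

int main(void)
{
	struct obj b = { NULL, true };	/* reuse delayed */
	struct obj a = { &b, false };	/* freed normally */
	int cnt = 2;			/* two objects handed in */
	int depth = 0;

	struct obj *list = rebuild_freelist(&a, &cnt);

	for (struct obj *o = list; o; o = o->next)
		depth++;
	assert(depth == cnt);	/* 1 == 1; without the fix, 1 != 2 */
	return 0;
}

Without the --(*cnt) line the assert fails: the rebuilt list holds one
object while cnt still claims two, which is exactly the mismatch that
free_debug_processing() would complain about.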