
[v2,0/5] mm/slub: fix validation races and cleanup locking

Message ID 20220823170400.26546-1-vbabka@suse.cz

Message

Vlastimil Babka Aug. 23, 2022, 5:03 p.m. UTC
This series builds on the validation races fix posted previously [1],
which became patch 2 here; its description contains all the details.

Thanks to Hyeonggon Yoo's observation, patch 3 removes more slab_lock()
usage that became unnecessary after patch 2.

This made it possible to further simplify locking code in patches 4 and
5. Since those are related to PREEMPT_RT, I'm CCing relevant people on
this series.

Changes since v1 [2]:

- add acks/reviews from Hyeonggon and David
- minor fixes to patch 2 as reported by Hyeonggon
- patch 5 reworked to rely on the preemption disabling implied by
  bit_spin_lock(), which should be sufficient without disabling
  interrupts on RT

git version:

https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slub-validate-fix-v2r2

I plan to add this series to slab.git for-next in a few days.

[1] https://lore.kernel.org/all/20220809140043.9903-1-vbabka@suse.cz/
[2] https://lore.kernel.org/all/20220812091426.18418-1-vbabka@suse.cz/

Vlastimil Babka (5):
  mm/slub: move free_debug_processing() further
  mm/slub: restrict sysfs validation to debug caches and make it safe
  mm/slub: remove slab_lock() usage for debug operations
  mm/slub: convert object_map_lock to non-raw spinlock
  mm/slub: simplify __cmpxchg_double_slab() and slab_[un]lock()

 mm/slub.c | 417 ++++++++++++++++++++++++++++++++----------------------
 1 file changed, 251 insertions(+), 166 deletions(-)

Comments

Vlastimil Babka Aug. 25, 2022, 1:16 p.m. UTC | #1
On 8/23/22 19:03, Vlastimil Babka wrote:
> This series builds on the validation races fix posted previously [1],
> which became patch 2 here; its description contains all the details.
> 
> Thanks to Hyeonggon Yoo's observation, patch 3 removes more slab_lock()
> usage that became unnecessary after patch 2.
> 
> This made it possible to further simplify locking code in patches 4 and
> 5. Since those are related to PREEMPT_RT, I'm CCing relevant people on
> this series.
> 
> Changes since v1 [2]:
> 
> - add acks/reviews from Hyeonggon and David
> - minor fixes to patch 2 as reported by Hyeonggon
> - patch 5 reworked to rely on the preemption disabling implied by
>   bit_spin_lock(), which should be sufficient without disabling
>   interrupts on RT
> 
> git version:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slub-validate-fix-v2r2
> 
> I plan to add this series to slab.git for-next in a few days.

Thanks for the reviews, fixup suggestions, and patch 6/5; all now pushed to
slab.git for-6.1/slub_validation_locking and merged to for-next.

> 
> [1] https://lore.kernel.org/all/20220809140043.9903-1-vbabka@suse.cz/
> [2] https://lore.kernel.org/all/20220812091426.18418-1-vbabka@suse.cz/
> 
> Vlastimil Babka (5):
>   mm/slub: move free_debug_processing() further
>   mm/slub: restrict sysfs validation to debug caches and make it safe
>   mm/slub: remove slab_lock() usage for debug operations
>   mm/slub: convert object_map_lock to non-raw spinlock
>   mm/slub: simplify __cmpxchg_double_slab() and slab_[un]lock()
> 
>  mm/slub.c | 417 ++++++++++++++++++++++++++++++++----------------------
>  1 file changed, 251 insertions(+), 166 deletions(-)
>