| Message ID | 20240619150814.BRAvaziM@linutronix.de (mailing list archive) |
|---|---|
| State | New, archived |
| Series | zram: Replace bit spinlocks with spinlock_t for PREEMPT_RT. |
On 6/19/24 9:08 AM, Sebastian Andrzej Siewior wrote:
> From: Mike Galbraith <umgwanakikbuti@gmail.com>
>
> The bit spinlock disables preemption. The spinlock_t lock becomes a sleeping
> lock on PREEMPT_RT and it can not be acquired in this context. In this locked
> section, zs_free() acquires a zs_pool::lock, and there is access to
> zram::wb_limit_lock.
>
> Use a spinlock_t on PREEMPT_RT for locking and set/clear the ZRAM_LOCK bit
> after the lock has been acquired/dropped.

The conditional code depending on CONFIG_PREEMPT_RT is nasty. Why not just
get rid of that and use the CONFIG_PREEMPT_RT variants for everything? They
are either good enough to work well in general, or it should be redone such
that it is.
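For readers following along, here is a minimal kernel-style sketch of the problematic pattern the commit message describes, assuming the quoted call chain (zram_free_page() ending up in zs_free()). The function name is hypothetical and the body is illustrative only; it is not code from the patch.

```c
#include <linux/bit_spinlock.h>

/* Hypothetical illustration of the PREEMPT_RT problem described above. */
static void rt_problem_sketch(struct zram *zram, u32 index)
{
	/* bit_spin_lock() disables preemption on all configurations. */
	bit_spin_lock(ZRAM_LOCK, &zram->table[index].flags);

	/*
	 * zram_free_page() ends up in zs_free(), which acquires
	 * zs_pool::lock. That is a spinlock_t, i.e. a sleeping lock on
	 * PREEMPT_RT, and a sleeping lock must not be acquired with
	 * preemption disabled.
	 */
	zram_free_page(zram, index);

	bit_spin_unlock(ZRAM_LOCK, &zram->table[index].flags);
}
```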
On 2024-06-19 11:34:23 [-0600], Jens Axboe wrote:
> On 6/19/24 9:08 AM, Sebastian Andrzej Siewior wrote:
> > From: Mike Galbraith <umgwanakikbuti@gmail.com>
> >
> > The bit spinlock disables preemption. The spinlock_t lock becomes a sleeping
> > lock on PREEMPT_RT and it can not be acquired in this context. In this locked
> > section, zs_free() acquires a zs_pool::lock, and there is access to
> > zram::wb_limit_lock.
> >
> > Use a spinlock_t on PREEMPT_RT for locking and set/clear the ZRAM_LOCK bit
> > after the lock has been acquired/dropped.
>
> The conditional code depending on CONFIG_PREEMPT_RT is nasty. Why not
> just get rid of that and use the CONFIG_PREEMPT_RT variants for
> everything? They are either good enough to work well in general, or it
> should be redone such that it is.

That would increase the struct size with lockdep for !RT. But it is
probably not a concern. Also, other bits (besides ZRAM_LOCK) can not be
added, but that wasn't needed in the last few years.

Okay, let me redo it.

Sebastian
On 6/19/24 11:52 AM, Sebastian Andrzej Siewior wrote:
> On 2024-06-19 11:34:23 [-0600], Jens Axboe wrote:
>> On 6/19/24 9:08 AM, Sebastian Andrzej Siewior wrote:
>>> From: Mike Galbraith <umgwanakikbuti@gmail.com>
>>>
>>> The bit spinlock disables preemption. The spinlock_t lock becomes a sleeping
>>> lock on PREEMPT_RT and it can not be acquired in this context. In this locked
>>> section, zs_free() acquires a zs_pool::lock, and there is access to
>>> zram::wb_limit_lock.
>>>
>>> Use a spinlock_t on PREEMPT_RT for locking and set/clear the ZRAM_LOCK bit
>>> after the lock has been acquired/dropped.
>>
>> The conditional code depending on CONFIG_PREEMPT_RT is nasty. Why not
>> just get rid of that and use the CONFIG_PREEMPT_RT variants for
>> everything? They are either good enough to work well in general, or it
>> should be redone such that it is.
>
> That would increase the struct size with lockdep for !RT. But it is
> probably not a concern. Also other bits (besides ZRAM_LOCK) can not be
> added but that wasn't needed in the last few years.

Yeah, I really don't think anyone cares about the struct size when
PROVE_LOCKING is on...
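A rough sketch of the unconditional variant Jens is asking for might look like the following, assuming the RT helpers from the patch simply become the only implementation. This is a guess at the direction, not the actual respin; the original RT-conditional patch follows below.

```c
#include <linux/spinlock.h>

/*
 * Sketch only: drop the CONFIG_PREEMPT_RT conditionals and embed the
 * spinlock_t unconditionally, keeping ZRAM_LOCK bookkeeping intact.
 */
struct zram_table_entry {
	union {
		unsigned long handle;
		unsigned long element;
	};
	unsigned long flags;
	spinlock_t lock;	/* now present on all configurations */
#ifdef CONFIG_ZRAM_TRACK_ENTRY_ACTIME
	ktime_t ac_time;
#endif
};

static int zram_slot_trylock(struct zram *zram, u32 index)
{
	int ret;

	ret = spin_trylock(&zram->table[index].lock);
	if (ret)
		__set_bit(ZRAM_LOCK, &zram->table[index].flags);
	return ret;
}

static void zram_slot_lock(struct zram *zram, u32 index)
{
	spin_lock(&zram->table[index].lock);
	__set_bit(ZRAM_LOCK, &zram->table[index].flags);
}

static void zram_slot_unlock(struct zram *zram, u32 index)
{
	__clear_bit(ZRAM_LOCK, &zram->table[index].flags);
	spin_unlock(&zram->table[index].lock);
}
```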
```diff
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -57,6 +57,41 @@ static void zram_free_page(struct zram *
 static int zram_read_page(struct zram *zram, struct page *page, u32 index,
 			  struct bio *parent);
 
+#ifdef CONFIG_PREEMPT_RT
+static void zram_meta_init_table_locks(struct zram *zram, size_t num_pages)
+{
+	size_t index;
+
+	for (index = 0; index < num_pages; index++)
+		spin_lock_init(&zram->table[index].lock);
+}
+
+static int zram_slot_trylock(struct zram *zram, u32 index)
+{
+	int ret;
+
+	ret = spin_trylock(&zram->table[index].lock);
+	if (ret)
+		__set_bit(ZRAM_LOCK, &zram->table[index].flags);
+	return ret;
+}
+
+static void zram_slot_lock(struct zram *zram, u32 index)
+{
+	spin_lock(&zram->table[index].lock);
+	__set_bit(ZRAM_LOCK, &zram->table[index].flags);
+}
+
+static void zram_slot_unlock(struct zram *zram, u32 index)
+{
+	__clear_bit(ZRAM_LOCK, &zram->table[index].flags);
+	spin_unlock(&zram->table[index].lock);
+}
+
+#else
+
+static void zram_meta_init_table_locks(struct zram *zram, size_t num_pages) { }
+
 static int zram_slot_trylock(struct zram *zram, u32 index)
 {
 	return bit_spin_trylock(ZRAM_LOCK, &zram->table[index].flags);
@@ -71,6 +106,7 @@ static void zram_slot_unlock(struct zram
 {
 	bit_spin_unlock(ZRAM_LOCK, &zram->table[index].flags);
 }
+#endif
 
 static inline bool init_done(struct zram *zram)
 {
@@ -1226,6 +1262,7 @@ static bool zram_meta_alloc(struct zram
 	if (!huge_class_size)
 		huge_class_size = zs_huge_class_size(zram->mem_pool);
 
+	zram_meta_init_table_locks(zram, num_pages);
 	return true;
 }
 
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -69,6 +69,9 @@ struct zram_table_entry {
 		unsigned long element;
 	};
 	unsigned long flags;
+#ifdef CONFIG_PREEMPT_RT
+	spinlock_t lock;
+#endif
 #ifdef CONFIG_ZRAM_TRACK_ENTRY_ACTIME
 	ktime_t ac_time;
 #endif
```
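As a usage note: slot state is only touched with the slot lock held, whichever lock flavour backs the helpers. A hedged sketch of a typical caller pattern follows; example_slot_is_huge() is a hypothetical helper written for illustration, with ZRAM_HUGE standing in for any existing slot flag.

```c
/* Hypothetical caller, mirroring the locking pattern in zram_drv.c. */
static bool example_slot_is_huge(struct zram *zram, u32 index)
{
	bool huge;

	zram_slot_lock(zram, index);
	huge = test_bit(ZRAM_HUGE, &zram->table[index].flags);
	zram_slot_unlock(zram, index);

	return huge;
}
```

Note that both variants in the patch still set and clear ZRAM_LOCK while the lock is held, so any code that inspects the flags word continues to observe a consistent locked state.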