| Message ID | 20220326134059.4082-1-hdanton@sina.com (mailing list archive) |
|---|---|
| State | New |
| Series | [RFC] locking/rwsem: dont wake up wwaiter in case of lock holder |
On 26.03.22 14:40, Hillf Danton wrote:
> In the slowpath of down for write, we bail out in case of signal received and
> try to wake up any pending waiter but it makes no sense to wake up a write
> waiter given any lock holder, either write or read.

But is handling this better really worth additional code and runtime
checks? IOW, does this happen often enough that we actually care about
optimizing this? I have no idea :)

>
> The RFC is do nothing for wwaiter if any lock holder present - they will fill
> their duty at lock release time.
>
> Only for thoughts now.
>
> Hillf
>
> --- x/kernel/locking/rwsem.c
> +++ y/kernel/locking/rwsem.c
> @@ -418,6 +418,8 @@ static void rwsem_mark_wake(struct rw_se
>  	waiter = rwsem_first_waiter(sem);
>
>  	if (waiter->type == RWSEM_WAITING_FOR_WRITE) {
> +		if (RWSEM_LOCK_MASK & atomic_long_read(&sem->count))
> +			return;
>  		if (wake_type == RWSEM_WAKE_ANY) {
>  			/*
>  			 * Mark writer at the front of the queue for wakeup.
> --
On 3/28/22 10:18, David Hildenbrand wrote:
> On 26.03.22 14:40, Hillf Danton wrote:
>> In the slowpath of down for write, we bail out in case of signal received and
>> try to wake up any pending waiter but it makes no sense to wake up a write
>> waiter given any lock holder, either write or read.
> But is handling this better really worth additional code and runtime
> checks? IOW, does this happen often enough that we actually care about
> optimizing this? I have no idea :)
>
>> The RFC is do nothing for wwaiter if any lock holder present - they will fill
>> their duty at lock release time.
>>
>> Only for thoughts now.
>>
>> Hillf
>>
>> --- x/kernel/locking/rwsem.c
>> +++ y/kernel/locking/rwsem.c
>> @@ -418,6 +418,8 @@ static void rwsem_mark_wake(struct rw_se
>>  	waiter = rwsem_first_waiter(sem);
>>
>>  	if (waiter->type == RWSEM_WAITING_FOR_WRITE) {
>> +		if (RWSEM_LOCK_MASK & atomic_long_read(&sem->count))
>> +			return;
>>  		if (wake_type == RWSEM_WAKE_ANY) {
>>  			/*
>>  			 * Mark writer at the front of the queue for wakeup.
>> --

That check isn't good enough. First of all, any reader count in
sem->count can be transient, because we do an unconditional
atomic_long_add() on down_read(); the reader may then remove its reader
count again in the slow path. This patch may cause a missed wakeup,
which is a much bigger problem than spending a bit of CPU time to check
for lock availability and sleep again.

The write-lock bit, however, is real. We do allow the first writer in
the wait queue to spin on the lock when the handoff bit is set, so
waking up a writer while the rwsem is currently write-locked can still
be useful.

BTW, I didn't see this RFC patch on LKML. Was it only posted to
linux-mm originally?

Cheers,
Longman
On Mon, 28 Mar 2022 11:11:31 -0400 Waiman Long wrote:
> On 3/28/22 10:18, David Hildenbrand wrote:
>> On 26.03.22 14:40, Hillf Danton wrote:
>>> In the slowpath of down for write, we bail out in case of signal received and
>>> try to wake up any pending waiter but it makes no sense to wake up a write
>>> waiter given any lock holder, either write or read.
>>
>> But is handling this better really worth additional code and runtime
>> checks? IOW, does this happen often enough that we actually care about
>> optimizing this? I have no idea :)

Thanks for taking a look, David.

>>
>>> The RFC is do nothing for wwaiter if any lock holder present - they will fill
>>> their duty at lock release time.
>>>
>>> Only for thoughts now.
>>>
>>> Hillf
>>>
>>> --- x/kernel/locking/rwsem.c
>>> +++ y/kernel/locking/rwsem.c
>>> @@ -418,6 +418,8 @@ static void rwsem_mark_wake(struct rw_se
>>>  	waiter = rwsem_first_waiter(sem);
>>>
>>>  	if (waiter->type == RWSEM_WAITING_FOR_WRITE) {
>>> +		if (RWSEM_LOCK_MASK & atomic_long_read(&sem->count))
>>> +			return;
>>>  		if (wake_type == RWSEM_WAKE_ANY) {
>>>  			/*
>>>  			 * Mark writer at the front of the queue for wakeup.
>>> --
>
> That check isn't good enough. First of all, any reader count in
> sem->count can be transient due to the fact that we do an unconditional
> atomic_long_add() on down_read(). The reader may then remove its reader
> count in the slow path.

Correct.

> This patch may cause missed wakeup which is a
> much bigger problem than spending a bit of cpu time to check for lock
> availability and sleep again.

In rwsem_down_read_slowpath(), the comment prior to the RWSEM_WAKE_ANY
wakeup rules out the chance of a missed wakeup, because the RFC only
affects a wwaiter, who is exclusive of any lock holder.

	/*
	 * If there are no active locks, wake the front queued process(es).
	 *
	 * If there are no writers and we are first in the queue,
	 * wake our own waiter to join the existing active readers !
	 */

The RFC goes in line with the top half.

It is not unusual for me to miss something, OTOH, particularly in cases
like this one, worth twenty minutes of scratching my scalp.

Hillf
--- x/kernel/locking/rwsem.c
+++ y/kernel/locking/rwsem.c
@@ -418,6 +418,8 @@ static void rwsem_mark_wake(struct rw_se
 	waiter = rwsem_first_waiter(sem);
 
 	if (waiter->type == RWSEM_WAITING_FOR_WRITE) {
+		if (RWSEM_LOCK_MASK & atomic_long_read(&sem->count))
+			return;
 		if (wake_type == RWSEM_WAKE_ANY) {
 			/*
 			 * Mark writer at the front of the queue for wakeup.