| Message ID | 1522947547-24081-3-git-send-email-will.deacon@arm.com (mailing list archive) |
|---|---|
| State | New, archived |
On Thu, Apr 05, 2018 at 05:58:59PM +0100, Will Deacon wrote: > The qspinlock locking slowpath utilises a "pending" bit as a simple form > of an embedded test-and-set lock that can avoid the overhead of explicit > queuing in cases where the lock is held but uncontended. This bit is > managed using a cmpxchg loop which tries to transition the uncontended > lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1). > > Unfortunately, the cmpxchg loop is unbounded and lockers can be starved > indefinitely if the lock word is seen to oscillate between unlocked > (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are > able to take the lock in the cmpxchg loop without queuing and pass it > around amongst themselves. > > This patch fixes the problem by unconditionally setting _Q_PENDING_VAL > using atomic_fetch_or, Of course, LL/SC or cmpxchg implementations of fetch_or do not in fact get anything from this ;-)
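Background for the remark above: on an architecture without a native fetch-or instruction, atomic_fetch_or() is itself built from a cmpxchg (or LL/SC) retry loop, so the unbounded retry only moves down one level. A minimal sketch of such a fallback (hypothetical function name, illustrative only, assuming <linux/atomic.h>):

/*
 * Illustrative only: roughly what a cmpxchg-based fetch_or fallback
 * looks like. The loop can still retry indefinitely if other CPUs
 * keep changing the word between the read and the cmpxchg.
 */
static inline int sketch_atomic_fetch_or(int mask, atomic_t *v)
{
        int old = atomic_read(v);

        for (;;) {
                int prev = atomic_cmpxchg(v, old, old | mask);

                if (prev == old)
                        return old;     /* we installed old | mask */
                old = prev;             /* someone else won; retry */
        }
}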
On Thu, Apr 05, 2018 at 05:58:59PM +0100, Will Deacon wrote: > @@ -306,58 +306,48 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val) > return; > > /* > + * If we observe any contention; queue. > + */ > + if (val & ~_Q_LOCKED_MASK) > + goto queue; > + > + /* > * trylock || pending > * > * 0,0,0 -> 0,0,1 ; trylock > * 0,0,1 -> 0,1,1 ; pending > */ > + val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val); > + if (!(val & ~_Q_LOCKED_MASK)) { > /* > + * we're pending, wait for the owner to go away. > + * > + * *,1,1 -> *,1,0 > + * > + * this wait loop must be a load-acquire such that we match the > + * store-release that clears the locked bit and create lock > + * sequentiality; this is because not all > + * clear_pending_set_locked() implementations imply full > + * barriers. > */ > + if (val & _Q_LOCKED_MASK) > + smp_cond_load_acquire(&lock->val.counter, > + !(VAL & _Q_LOCKED_MASK)); I much prefer { } for multi-line statements like this. > /* > + * take ownership and clear the pending bit. > + * > + * *,1,0 -> *,0,1 > */ > + clear_pending_set_locked(lock); > return; > + }
On 04/05/2018 12:58 PM, Will Deacon wrote: > The qspinlock locking slowpath utilises a "pending" bit as a simple form > of an embedded test-and-set lock that can avoid the overhead of explicit > queuing in cases where the lock is held but uncontended. This bit is > managed using a cmpxchg loop which tries to transition the uncontended > lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1). > > Unfortunately, the cmpxchg loop is unbounded and lockers can be starved > indefinitely if the lock word is seen to oscillate between unlocked > (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are > able to take the lock in the cmpxchg loop without queuing and pass it > around amongst themselves. > > This patch fixes the problem by unconditionally setting _Q_PENDING_VAL > using atomic_fetch_or, and then inspecting the old value to see whether > we need to spin on the current lock owner, or whether we now effectively > hold the lock. The tricky scenario is when concurrent lockers end up > queuing on the lock and the lock becomes available, causing us to see > a lockword of (n,0,0). With pending now set, simply queuing could lead > to deadlock as the head of the queue may not have observed the pending > flag being cleared. Conversely, if the head of the queue did observe > pending being cleared, then it could transition the lock from (n,0,0) -> > (0,0,1) meaning that any attempt to "undo" our setting of the pending > bit could race with a concurrent locker trying to set it. > > We handle this race by preserving the pending bit when taking the lock > after reaching the head of the queue and leaving the tail entry intact > if we saw pending set, because we know that the tail is going to be > updated shortly. > > Cc: Peter Zijlstra <peterz@infradead.org> > Cc: Ingo Molnar <mingo@kernel.org> > Signed-off-by: Will Deacon <will.deacon@arm.com> > --- > kernel/locking/qspinlock.c | 80 ++++++++++++++++++++-------------------------- > 1 file changed, 35 insertions(+), 45 deletions(-) > > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c > index a192af2fe378..b75361d23ea5 100644 > --- a/kernel/locking/qspinlock.c > +++ b/kernel/locking/qspinlock.c > @@ -294,7 +294,7 @@ static __always_inline u32 __pv_wait_head_or_lock(struct qspinlock *lock, > void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val) > { > struct mcs_spinlock *prev, *next, *node; > - u32 new, old, tail; > + u32 old, tail; > int idx; > > BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS)); > @@ -306,58 +306,48 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val) > return; > > /* > + * If we observe any contention; queue. > + */ > + if (val & ~_Q_LOCKED_MASK) > + goto queue; > + > + /* > * trylock || pending > * > * 0,0,0 -> 0,0,1 ; trylock > * 0,0,1 -> 0,1,1 ; pending > */ > - for (;;) { > + val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val); > + if (!(val & ~_Q_LOCKED_MASK)) { > /* > - * If we observe any contention; queue. > + * we're pending, wait for the owner to go away. > + * > + * *,1,1 -> *,1,0 > + * > + * this wait loop must be a load-acquire such that we match the > + * store-release that clears the locked bit and create lock > + * sequentiality; this is because not all > + * clear_pending_set_locked() implementations imply full > + * barriers. 
> */ > - if (val & ~_Q_LOCKED_MASK) > - goto queue; > - > - new = _Q_LOCKED_VAL; > - if (val == new) > - new |= _Q_PENDING_VAL; > - > + if (val & _Q_LOCKED_MASK) > + smp_cond_load_acquire(&lock->val.counter, > + !(VAL & _Q_LOCKED_MASK)); > /* > - * Acquire semantic is required here as the function may > - * return immediately if the lock was free. > + * take ownership and clear the pending bit. > + * > + * *,1,0 -> *,0,1 > */ > - old = atomic_cmpxchg_acquire(&lock->val, val, new); > - if (old == val) > - break; > - > - val = old; > - } > - > - /* > - * we won the trylock > - */ > - if (new == _Q_LOCKED_VAL) > + clear_pending_set_locked(lock); > return; > + } > > /* > - * we're pending, wait for the owner to go away. > - * > - * *,1,1 -> *,1,0 > - * > - * this wait loop must be a load-acquire such that we match the > - * store-release that clears the locked bit and create lock > - * sequentiality; this is because not all clear_pending_set_locked() > - * implementations imply full barriers. > - */ > - smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK)); > - > - /* > - * take ownership and clear the pending bit. > - * > - * *,1,0 -> *,0,1 > + * If pending was clear but there are waiters in the queue, then > + * we need to undo our setting of pending before we queue ourselves. > */ > - clear_pending_set_locked(lock); > - return; > + if (!(val & _Q_PENDING_MASK)) > + atomic_andnot(_Q_PENDING_VAL, &lock->val); Can we add a clear_pending() helper that will just clear the byte if _Q_PENDING_BITS == 8? That will eliminate one atomic instruction from the failure path. -Longman
On Thu, Apr 05, 2018 at 07:07:06PM +0200, Peter Zijlstra wrote: > On Thu, Apr 05, 2018 at 05:58:59PM +0100, Will Deacon wrote: > > The qspinlock locking slowpath utilises a "pending" bit as a simple form > > of an embedded test-and-set lock that can avoid the overhead of explicit > > queuing in cases where the lock is held but uncontended. This bit is > > managed using a cmpxchg loop which tries to transition the uncontended > > lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1). > > > > Unfortunately, the cmpxchg loop is unbounded and lockers can be starved > > indefinitely if the lock word is seen to oscillate between unlocked > > (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are > > able to take the lock in the cmpxchg loop without queuing and pass it > > around amongst themselves. > > > > This patch fixes the problem by unconditionally setting _Q_PENDING_VAL > > using atomic_fetch_or, > > Of course, LL/SC or cmpxchg implementations of fetch_or do not in fact > get anything from this ;-) Whilst it's true that they would still be unfair, the window is at least reduced and moves a lot more of the fairness burden onto hardware itself. ARMv8.1 has an instruction for atomic_fetch_or, so we can make good use of it here. Will
On Thu, Apr 05, 2018 at 05:16:16PM -0400, Waiman Long wrote: > On 04/05/2018 12:58 PM, Will Deacon wrote: > > /* > > - * we're pending, wait for the owner to go away. > > - * > > - * *,1,1 -> *,1,0 > > - * > > - * this wait loop must be a load-acquire such that we match the > > - * store-release that clears the locked bit and create lock > > - * sequentiality; this is because not all clear_pending_set_locked() > > - * implementations imply full barriers. > > - */ > > - smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK)); > > - > > - /* > > - * take ownership and clear the pending bit. > > - * > > - * *,1,0 -> *,0,1 > > + * If pending was clear but there are waiters in the queue, then > > + * we need to undo our setting of pending before we queue ourselves. > > */ > > - clear_pending_set_locked(lock); > > - return; > > + if (!(val & _Q_PENDING_MASK)) > > + atomic_andnot(_Q_PENDING_VAL, &lock->val); > Can we add a clear_pending() helper that will just clear the byte if > _Q_PENDING_BITS == 8? That will eliminate one atomic instruction from > the failure path. Good idea! Will
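A sketch of the helper being discussed, assuming _Q_PENDING_BITS == 8 so the pending flag occupies its own byte, and reusing the __qspinlock byte view that qspinlock.c already uses for the 8-bit pending case; the fallback keeps the atomic AND-NOT:

#if _Q_PENDING_BITS == 8
/*
 * Sketch: with an 8-bit pending field, clear it with a plain byte
 * store instead of an atomic RMW on the whole lock word.
 */
static __always_inline void clear_pending(struct qspinlock *lock)
{
        struct __qspinlock *l = (void *)lock;

        WRITE_ONCE(l->pending, 0);
}
#else
static __always_inline void clear_pending(struct qspinlock *lock)
{
        atomic_andnot(_Q_PENDING_VAL, &lock->val);
}
#endif

The undo path in the patch would then call clear_pending(lock) instead of open-coding the atomic_andnot().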
On 04/05/2018 12:58 PM, Will Deacon wrote: > The qspinlock locking slowpath utilises a "pending" bit as a simple form > of an embedded test-and-set lock that can avoid the overhead of explicit > queuing in cases where the lock is held but uncontended. This bit is > managed using a cmpxchg loop which tries to transition the uncontended > lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1). > > Unfortunately, the cmpxchg loop is unbounded and lockers can be starved > indefinitely if the lock word is seen to oscillate between unlocked > (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are > able to take the lock in the cmpxchg loop without queuing and pass it > around amongst themselves. > > This patch fixes the problem by unconditionally setting _Q_PENDING_VAL > using atomic_fetch_or, and then inspecting the old value to see whether > we need to spin on the current lock owner, or whether we now effectively > hold the lock. The tricky scenario is when concurrent lockers end up > queuing on the lock and the lock becomes available, causing us to see > a lockword of (n,0,0). With pending now set, simply queuing could lead > to deadlock as the head of the queue may not have observed the pending > flag being cleared. Conversely, if the head of the queue did observe > pending being cleared, then it could transition the lock from (n,0,0) -> > (0,0,1) meaning that any attempt to "undo" our setting of the pending > bit could race with a concurrent locker trying to set it. > > We handle this race by preserving the pending bit when taking the lock > after reaching the head of the queue and leaving the tail entry intact > if we saw pending set, because we know that the tail is going to be > updated shortly. > > Cc: Peter Zijlstra <peterz@infradead.org> > Cc: Ingo Molnar <mingo@kernel.org> > Signed-off-by: Will Deacon <will.deacon@arm.com> > --- The pending bit was added to the qspinlock design to counter performance degradation compared with ticket lock for workloads with light spinlock contention. I run my spinlock stress test on a Intel Skylake server running the vanilla 4.16 kernel vs a patched kernel with this patchset. The locking rates with different number of locking threads were as follows: # of threads 4.16 kernel patched 4.16 kernel ------------ ----------- ------------------- 1 7,417 kop/s 7,408 kop/s 2 5,755 kop/s 4,486 kop/s 3 4,214 kop/s 4,169 kop/s 4 4,396 kop/s 4,383 kop/s The 2 contending threads case is the one that exercise the pending bit code path the most. So it is obvious that this is the one that is most impacted by this patchset. The differences in the other cases are mostly noise or maybe just a little bit on the 3 contending threads case. I am not against this patch, but we certainly need to find out a way to bring the performance number up closer to what it is before applying the patch. Cheers, Longman
On Fri, Apr 06, 2018 at 04:50:19PM -0400, Waiman Long wrote: > On 04/05/2018 12:58 PM, Will Deacon wrote: > > The qspinlock locking slowpath utilises a "pending" bit as a simple form > > of an embedded test-and-set lock that can avoid the overhead of explicit > > queuing in cases where the lock is held but uncontended. This bit is > > managed using a cmpxchg loop which tries to transition the uncontended > > lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1). > > > > Unfortunately, the cmpxchg loop is unbounded and lockers can be starved > > indefinitely if the lock word is seen to oscillate between unlocked > > (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are > > able to take the lock in the cmpxchg loop without queuing and pass it > > around amongst themselves. > > > > This patch fixes the problem by unconditionally setting _Q_PENDING_VAL > > using atomic_fetch_or, and then inspecting the old value to see whether > > we need to spin on the current lock owner, or whether we now effectively > > hold the lock. The tricky scenario is when concurrent lockers end up > > queuing on the lock and the lock becomes available, causing us to see > > a lockword of (n,0,0). With pending now set, simply queuing could lead > > to deadlock as the head of the queue may not have observed the pending > > flag being cleared. Conversely, if the head of the queue did observe > > pending being cleared, then it could transition the lock from (n,0,0) -> > > (0,0,1) meaning that any attempt to "undo" our setting of the pending > > bit could race with a concurrent locker trying to set it. > > > > We handle this race by preserving the pending bit when taking the lock > > after reaching the head of the queue and leaving the tail entry intact > > if we saw pending set, because we know that the tail is going to be > > updated shortly. > > > > Cc: Peter Zijlstra <peterz@infradead.org> > > Cc: Ingo Molnar <mingo@kernel.org> > > Signed-off-by: Will Deacon <will.deacon@arm.com> > > --- > > The pending bit was added to the qspinlock design to counter performance > degradation compared with ticket lock for workloads with light > spinlock contention. I run my spinlock stress test on a Intel Skylake > server running the vanilla 4.16 kernel vs a patched kernel with this > patchset. The locking rates with different number of locking threads > were as follows: > > # of threads 4.16 kernel patched 4.16 kernel > ------------ ----------- ------------------- > 1 7,417 kop/s 7,408 kop/s > 2 5,755 kop/s 4,486 kop/s > 3 4,214 kop/s 4,169 kop/s > 4 4,396 kop/s 4,383 kop/s > > The 2 contending threads case is the one that exercise the pending bit > code path the most. So it is obvious that this is the one that is most > impacted by this patchset. The differences in the other cases are mostly > noise or maybe just a little bit on the 3 contending threads case. > > I am not against this patch, but we certainly need to find out a way to > bring the performance number up closer to what it is before applying > the patch. It would indeed be good to not be in the position of having to trade off forward-progress guarantees against performance, but that does appear to be where we are at the moment. Thanx, Paul
On Fri, Apr 06, 2018 at 02:09:53PM -0700, Paul E. McKenney wrote:
> It would indeed be good to not be in the position of having to trade off
> forward-progress guarantees against performance, but that does appear to
> be where we are at the moment.

Depends of course on how unfair cmpxchg is. On x86 we trade one cmpxchg
loop for another so the patch doesn't cure anything at all there. And
our cmpxchg has 'some' hardware fairness to it.

So while the patch is 'good' for platforms that have native fetch-or,
it doesn't help (or in our case even hurts) those that do not.
On Fri, Apr 06, 2018 at 04:50:19PM -0400, Waiman Long wrote:
> # of threads  4.16 kernel   patched 4.16 kernel
> ------------  -----------   -------------------
>      1        7,417 kop/s       7,408 kop/s
>      2        5,755 kop/s       4,486 kop/s
>      3        4,214 kop/s       4,169 kop/s
>      4        4,396 kop/s       4,383 kop/s

Interesting, I didn't see that dip in my userspace tests.. I'll have to
try again.
On Sat, Apr 07, 2018 at 10:47:32AM +0200, Peter Zijlstra wrote:
> On Fri, Apr 06, 2018 at 02:09:53PM -0700, Paul E. McKenney wrote:
> > It would indeed be good to not be in the position of having to trade off
> > forward-progress guarantees against performance, but that does appear to
> > be where we are at the moment.
>
> Depends of course on how unfair cmpxchg is. On x86 we trade one cmpxchg
> loop for another so the patch doesn't cure anything at all there. And
> our cmpxchg has 'some' hardware fairness to it.
>
> So while the patch is 'good' for platforms that have native fetch-or,
> it doesn't help (or in our case even hurts) those that do not.

Might need different implementations for different architectures, then.
Or take advantage of the fact that x86 can do a native fetch-or to the
topmost bit, if that helps.

Thanx, Paul
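Paul's "native fetch-or to the topmost bit" presumably refers to x86 being able to atomically set a single bit and return its previous value (LOCK BTS) without any cmpxchg loop. A purely illustrative sketch using test_and_set_bit(); note that it only reports the old value of that one bit, not the rest of the lock word, so it is not a drop-in replacement for the fetch-or in this patch:

/*
 * Illustration only (not proposed code), assuming <linux/bitops.h>.
 * On x86 this compiles to a single LOCK BTS, i.e. a native
 * "fetch-or of one bit"; it operates on an unsigned long, so the
 * real 32-bit qspinlock word would need more care than this takes.
 */
static inline bool sketch_try_set_pending(unsigned long *word)
{
        /* hypothetical: pending bit assumed to live at bit 8 of *word */
        return !test_and_set_bit(8, word);      /* true if we set it first */
}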
Hi Waiman, Thanks for taking this lot for a spin. Comments and questions below. On Fri, Apr 06, 2018 at 04:50:19PM -0400, Waiman Long wrote: > On 04/05/2018 12:58 PM, Will Deacon wrote: > > The qspinlock locking slowpath utilises a "pending" bit as a simple form > > of an embedded test-and-set lock that can avoid the overhead of explicit > > queuing in cases where the lock is held but uncontended. This bit is > > managed using a cmpxchg loop which tries to transition the uncontended > > lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1). > > > > Unfortunately, the cmpxchg loop is unbounded and lockers can be starved > > indefinitely if the lock word is seen to oscillate between unlocked > > (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are > > able to take the lock in the cmpxchg loop without queuing and pass it > > around amongst themselves. > > > > This patch fixes the problem by unconditionally setting _Q_PENDING_VAL > > using atomic_fetch_or, and then inspecting the old value to see whether > > we need to spin on the current lock owner, or whether we now effectively > > hold the lock. The tricky scenario is when concurrent lockers end up > > queuing on the lock and the lock becomes available, causing us to see > > a lockword of (n,0,0). With pending now set, simply queuing could lead > > to deadlock as the head of the queue may not have observed the pending > > flag being cleared. Conversely, if the head of the queue did observe > > pending being cleared, then it could transition the lock from (n,0,0) -> > > (0,0,1) meaning that any attempt to "undo" our setting of the pending > > bit could race with a concurrent locker trying to set it. > > > > We handle this race by preserving the pending bit when taking the lock > > after reaching the head of the queue and leaving the tail entry intact > > if we saw pending set, because we know that the tail is going to be > > updated shortly. > > > > Cc: Peter Zijlstra <peterz@infradead.org> > > Cc: Ingo Molnar <mingo@kernel.org> > > Signed-off-by: Will Deacon <will.deacon@arm.com> > > --- > > The pending bit was added to the qspinlock design to counter performance > degradation compared with ticket lock for workloads with light > spinlock contention. I run my spinlock stress test on a Intel Skylake > server running the vanilla 4.16 kernel vs a patched kernel with this > patchset. The locking rates with different number of locking threads > were as follows: > > # of threads 4.16 kernel patched 4.16 kernel > ------------ ----------- ------------------- > 1 7,417 kop/s 7,408 kop/s > 2 5,755 kop/s 4,486 kop/s > 3 4,214 kop/s 4,169 kop/s > 4 4,396 kop/s 4,383 kop/s > > The 2 contending threads case is the one that exercise the pending bit > code path the most. So it is obvious that this is the one that is most > impacted by this patchset. The differences in the other cases are mostly > noise or maybe just a little bit on the 3 contending threads case. That is bizarre. A few questions: 1. Is this with my patches as posted, or also with your WRITE_ONCE change? 2. Could you try to bisect my series to see which patch is responsible for this degradation, please? 3. Could you point me at your stress test, so I can try to reproduce these numbers on arm64 systems, please? > I am not against this patch, but we certainly need to find out a way to > bring the performance number up closer to what it is before applying > the patch. 
We certainly need to *understand* where the drop is coming from, because
the two-threaded case is still just a CAS on x86 with and without this
patch series. Generally, there's a throughput cost when ensuring fairness
and forward-progress; otherwise we'd all be using test-and-set.

Thanks,

Will
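For contrast with the fair, queued design, this is the kind of test-and-set lock being alluded to: simple and fast under light contention, but with no queueing and hence no fairness or forward-progress guarantee. A minimal illustrative sketch using generic kernel atomics (hypothetical type and function names):

typedef struct {
        atomic_t locked;
} tas_lock_t;

static inline void tas_lock(tas_lock_t *l)
{
        /* unbounded: a CPU can lose this race forever */
        while (atomic_xchg_acquire(&l->locked, 1)) {
                /* spin until the lock looks free, then retry the xchg */
                while (atomic_read(&l->locked))
                        cpu_relax();
        }
}

static inline void tas_unlock(tas_lock_t *l)
{
        atomic_set_release(&l->locked, 0);
}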
On Sat, Apr 07, 2018 at 10:47:32AM +0200, Peter Zijlstra wrote:
> On Fri, Apr 06, 2018 at 02:09:53PM -0700, Paul E. McKenney wrote:
> > It would indeed be good to not be in the position of having to trade off
> > forward-progress guarantees against performance, but that does appear to
> > be where we are at the moment.
>
> Depends of course on how unfair cmpxchg is. On x86 we trade one cmpxchg
> loop for another so the patch doesn't cure anything at all there. And
> our cmpxchg has 'some' hardware fairness to it.
>
> So while the patch is 'good' for platforms that have native fetch-or,
> it doesn't help (or in our case even hurts) those that do not.

We need to get to the bottom of this, otherwise we're just relying on
Waiman's testing to validate any changes to this code!

Will
On 04/09/2018 06:58 AM, Will Deacon wrote: > Hi Waiman, > > Thanks for taking this lot for a spin. Comments and questions below. > > On Fri, Apr 06, 2018 at 04:50:19PM -0400, Waiman Long wrote: >> On 04/05/2018 12:58 PM, Will Deacon wrote: >>> The qspinlock locking slowpath utilises a "pending" bit as a simple form >>> of an embedded test-and-set lock that can avoid the overhead of explicit >>> queuing in cases where the lock is held but uncontended. This bit is >>> managed using a cmpxchg loop which tries to transition the uncontended >>> lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1). >>> >>> Unfortunately, the cmpxchg loop is unbounded and lockers can be starved >>> indefinitely if the lock word is seen to oscillate between unlocked >>> (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are >>> able to take the lock in the cmpxchg loop without queuing and pass it >>> around amongst themselves. >>> >>> This patch fixes the problem by unconditionally setting _Q_PENDING_VAL >>> using atomic_fetch_or, and then inspecting the old value to see whether >>> we need to spin on the current lock owner, or whether we now effectively >>> hold the lock. The tricky scenario is when concurrent lockers end up >>> queuing on the lock and the lock becomes available, causing us to see >>> a lockword of (n,0,0). With pending now set, simply queuing could lead >>> to deadlock as the head of the queue may not have observed the pending >>> flag being cleared. Conversely, if the head of the queue did observe >>> pending being cleared, then it could transition the lock from (n,0,0) -> >>> (0,0,1) meaning that any attempt to "undo" our setting of the pending >>> bit could race with a concurrent locker trying to set it. >>> >>> We handle this race by preserving the pending bit when taking the lock >>> after reaching the head of the queue and leaving the tail entry intact >>> if we saw pending set, because we know that the tail is going to be >>> updated shortly. >>> >>> Cc: Peter Zijlstra <peterz@infradead.org> >>> Cc: Ingo Molnar <mingo@kernel.org> >>> Signed-off-by: Will Deacon <will.deacon@arm.com> >>> --- >> The pending bit was added to the qspinlock design to counter performance >> degradation compared with ticket lock for workloads with light >> spinlock contention. I run my spinlock stress test on a Intel Skylake >> server running the vanilla 4.16 kernel vs a patched kernel with this >> patchset. The locking rates with different number of locking threads >> were as follows: >> >> # of threads 4.16 kernel patched 4.16 kernel >> ------------ ----------- ------------------- >> 1 7,417 kop/s 7,408 kop/s >> 2 5,755 kop/s 4,486 kop/s >> 3 4,214 kop/s 4,169 kop/s >> 4 4,396 kop/s 4,383 kop/s >> >> The 2 contending threads case is the one that exercise the pending bit >> code path the most. So it is obvious that this is the one that is most >> impacted by this patchset. The differences in the other cases are mostly >> noise or maybe just a little bit on the 3 contending threads case. > That is bizarre. A few questions: > > 1. Is this with my patches as posted, or also with your WRITE_ONCE change? This is just the with your patches as posted. > 2. Could you try to bisect my series to see which patch is responsible > for this degradation, please? I have done further analysis with the help of CONFIG_QUEUED_LOCK_STAT with another patch to enable counting the pending and the queuing code paths. 
Running the 2-thread test with the original qspinlock code on a Haswell server, the performance data were pending count = 3,265,220 queuing count = 22 locking rate = 11,648 kop/s With your posted patches, pending count = 330 queuing count = 9,965,127 locking rate = 4,178 kop/s I believe that my test case has heavy dependency on _Q_PENDING_VAL spinning loop. When I added back the loop, the performance data became: pending count = 3,278,320 queuing count = 0 locking rate = 11,884 kop/s Instead of an infinite loop, I also tried a limited spin with loop count of 0x200 and I got similar performance data as the infinite loop case. > 3. Could you point me at your stress test, so I can try to reproduce these > numbers on arm64 systems, please? I will send you the test that I used in a separate email. >> I am not against this patch, but we certainly need to find out a way to >> bring the performance number up closer to what it is before applying >> the patch. > We certainly need to *understand* where the drop is coming from, because > the two-threaded case is still just a CAS on x86 with and without this > patch series. Generally, there's a throughput cost when ensuring fairness > and forward-progress otherwise we'd all be using test-and-set. As stated above, the drop comes mainly from skipping the _Q_PENDING_VAL spinning loop. I supposed that if we just do a limited spin, we can still ensure forward progress while preserving the performance profile of the original qspinlock code. I don't think other codes in your patches cause any performance regression as far as my testing is concerned. Cheers, Longman
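To make the "limited spin" concrete: the loop added back above is the short wait for an in-progress pending->locked hand-over, i.e. for a lock word of (0,1,0) to become (0,0,1). A sketch of a bounded version, with hypothetical helper and constant names (0x200 is simply the bound reported above):

#define _Q_PENDING_SPIN_LIMIT   0x200   /* bound used in the test above */

/*
 * Sketch: bounded wait for a 0,1,0 -> 0,0,1 hand-over so a locker
 * arriving in that window can still take the pending fast path
 * instead of queuing immediately. Returns the last value read.
 */
static __always_inline u32 spin_on_pending_handover(struct qspinlock *lock, u32 val)
{
        int cnt = _Q_PENDING_SPIN_LIMIT;

        while (val == _Q_PENDING_VAL && --cnt) {
                cpu_relax();
                val = atomic_read(&lock->val);
        }
        return val;     /* caller re-evaluates: pending path or queue */
}

The slowpath would call this on entry, before the "if we observe any contention; queue" check.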
Hi,

[This is an automated email]

This commit has been processed by the -stable helper bot and determined
to be a high probability candidate for -stable trees. (score: 32.4825)

The bot has tested the following trees: v4.16.1, v4.15.16, v4.14.33, v4.9.93, v4.4.127.

v4.16.1: Failed to apply! Possible dependencies:
    Unable to calculate

v4.15.16: Failed to apply! Possible dependencies:
    Unable to calculate

v4.14.33: Failed to apply! Possible dependencies:
    Unable to calculate

v4.9.93: Failed to apply! Possible dependencies:
    Unable to calculate

v4.4.127: Failed to apply! Possible dependencies:
    1c4941fd53af ("locking/pvqspinlock: Allow limited lock stealing")
    1f03e8d29192 ("locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()")
    64d816cba06c ("locking/qspinlock: Use _acquire/_release() versions of cmpxchg() & xchg()")

Please let us know if you'd like to have this patch included in a stable tree.

--
Thanks,
Sasha
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index a192af2fe378..b75361d23ea5 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -294,7 +294,7 @@ static __always_inline u32 __pv_wait_head_or_lock(struct qspinlock *lock,
 void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
         struct mcs_spinlock *prev, *next, *node;
-        u32 new, old, tail;
+        u32 old, tail;
         int idx;
 
         BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
@@ -306,58 +306,48 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
                 return;
 
         /*
+         * If we observe any contention; queue.
+         */
+        if (val & ~_Q_LOCKED_MASK)
+                goto queue;
+
+        /*
          * trylock || pending
          *
          * 0,0,0 -> 0,0,1 ; trylock
          * 0,0,1 -> 0,1,1 ; pending
          */
-        for (;;) {
+        val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
+        if (!(val & ~_Q_LOCKED_MASK)) {
                 /*
-                 * If we observe any contention; queue.
+                 * we're pending, wait for the owner to go away.
+                 *
+                 * *,1,1 -> *,1,0
+                 *
+                 * this wait loop must be a load-acquire such that we match the
+                 * store-release that clears the locked bit and create lock
+                 * sequentiality; this is because not all
+                 * clear_pending_set_locked() implementations imply full
+                 * barriers.
                  */
-                if (val & ~_Q_LOCKED_MASK)
-                        goto queue;
-
-                new = _Q_LOCKED_VAL;
-                if (val == new)
-                        new |= _Q_PENDING_VAL;
-
+                if (val & _Q_LOCKED_MASK)
+                        smp_cond_load_acquire(&lock->val.counter,
+                                              !(VAL & _Q_LOCKED_MASK));
                 /*
-                 * Acquire semantic is required here as the function may
-                 * return immediately if the lock was free.
+                 * take ownership and clear the pending bit.
+                 *
+                 * *,1,0 -> *,0,1
                  */
-                old = atomic_cmpxchg_acquire(&lock->val, val, new);
-                if (old == val)
-                        break;
-
-                val = old;
-        }
-
-        /*
-         * we won the trylock
-         */
-        if (new == _Q_LOCKED_VAL)
+                clear_pending_set_locked(lock);
                 return;
+        }
 
         /*
-         * we're pending, wait for the owner to go away.
-         *
-         * *,1,1 -> *,1,0
-         *
-         * this wait loop must be a load-acquire such that we match the
-         * store-release that clears the locked bit and create lock
-         * sequentiality; this is because not all clear_pending_set_locked()
-         * implementations imply full barriers.
-         */
-        smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK));
-
-        /*
-         * take ownership and clear the pending bit.
-         *
-         * *,1,0 -> *,0,1
+         * If pending was clear but there are waiters in the queue, then
+         * we need to undo our setting of pending before we queue ourselves.
          */
-        clear_pending_set_locked(lock);
-        return;
+        if (!(val & _Q_PENDING_MASK))
+                atomic_andnot(_Q_PENDING_VAL, &lock->val);
 
         /*
          * End of pending bit optimistic spinning and beginning of MCS
@@ -461,15 +451,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
          * claim the lock:
          *
          * n,0,0 -> 0,0,1 : lock, uncontended
-         * *,0,0 -> *,0,1 : lock, contended
+         * *,*,0 -> *,*,1 : lock, contended
          *
-         * If the queue head is the only one in the queue (lock value == tail),
-         * clear the tail code and grab the lock. Otherwise, we only need
-         * to grab the lock.
+         * If the queue head is the only one in the queue (lock value == tail)
+         * and nobody is pending, clear the tail code and grab the lock.
+         * Otherwise, we only need to grab the lock.
          */
         for (;;) {
                 /* In the PV case we might already have _Q_LOCKED_VAL set */
-                if ((val & _Q_TAIL_MASK) != tail) {
+                if ((val & _Q_TAIL_MASK) != tail || (val & _Q_PENDING_MASK)) {
                         set_locked(lock);
                         break;
                 }
The qspinlock locking slowpath utilises a "pending" bit as a simple form
of an embedded test-and-set lock that can avoid the overhead of explicit
queuing in cases where the lock is held but uncontended. This bit is
managed using a cmpxchg loop which tries to transition the uncontended
lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

Unfortunately, the cmpxchg loop is unbounded and lockers can be starved
indefinitely if the lock word is seen to oscillate between unlocked
(0,0,0) and locked (0,0,1). This could happen if concurrent lockers are
able to take the lock in the cmpxchg loop without queuing and pass it
around amongst themselves.

This patch fixes the problem by unconditionally setting _Q_PENDING_VAL
using atomic_fetch_or, and then inspecting the old value to see whether
we need to spin on the current lock owner, or whether we now effectively
hold the lock. The tricky scenario is when concurrent lockers end up
queuing on the lock and the lock becomes available, causing us to see
a lockword of (n,0,0). With pending now set, simply queuing could lead
to deadlock as the head of the queue may not have observed the pending
flag being cleared. Conversely, if the head of the queue did observe
pending being cleared, then it could transition the lock from (n,0,0) ->
(0,0,1) meaning that any attempt to "undo" our setting of the pending
bit could race with a concurrent locker trying to set it.

We handle this race by preserving the pending bit when taking the lock
after reaching the head of the queue and leaving the tail entry intact
if we saw pending set, because we know that the tail is going to be
updated shortly.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 kernel/locking/qspinlock.c | 80 ++++++++++++++++++++--------------------------
 1 file changed, 35 insertions(+), 45 deletions(-)
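For readers decoding the (queue tail, pending, locked) triples used throughout this thread, the lock word layout for the common NR_CPUS < 16K configuration, as defined in include/asm-generic/qspinlock_types.h, is reproduced below as a reading aid; consult the header for the authoritative definitions.

/*
 *  31                    16 15          8 7           0
 * +------------------------+-------------+-------------+
 * |    tail (idx + cpu)    |   pending   |   locked    |
 * +------------------------+-------------+-------------+
 *
 * The triples read (tail, pending, locked), e.g.:
 *   (0,0,0)  unlocked
 *   (0,0,1)  locked, uncontended
 *   (0,1,1)  locked with a waiter spinning on the pending bit
 *   (n,0,0)  unlocked, but with CPUs queued in the MCS queue
 */
#define _Q_LOCKED_OFFSET        0
#define _Q_LOCKED_BITS          8
#define _Q_PENDING_OFFSET       (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
#define _Q_PENDING_BITS         8
#define _Q_TAIL_IDX_OFFSET      (_Q_PENDING_OFFSET + _Q_PENDING_BITS)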