Message ID | 20180427101619.GB21705@arm.com (mailing list archive) |
---|---
State | New, archived |
On 04/27/2018 06:16 AM, Will Deacon wrote:
> Hi Waiman,
>
> On Thu, Apr 26, 2018 at 04:16:30PM -0400, Waiman Long wrote:
>> On 04/26/2018 06:34 AM, Will Deacon wrote:
>>> diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
>>> index 2711940429f5..2dbad2f25480 100644
>>> --- a/kernel/locking/qspinlock_paravirt.h
>>> +++ b/kernel/locking/qspinlock_paravirt.h
>>> @@ -118,11 +118,6 @@ static __always_inline void set_pending(struct qspinlock *lock)
>>>  	WRITE_ONCE(lock->pending, 1);
>>>  }
>>>  
>>> -static __always_inline void clear_pending(struct qspinlock *lock)
>>> -{
>>> -	WRITE_ONCE(lock->pending, 0);
>>> -}
>>> -
>>>  /*
>>>   * The pending bit check in pv_queued_spin_steal_lock() isn't a memory
>>>   * barrier. Therefore, an atomic cmpxchg_acquire() is used to acquire the
>> There is another clear_pending() function after the "#else /*
>> _Q_PENDING_BITS == 8 */" line that needs to be removed as well.
> Bugger, sorry I missed that one. Is the >= 16K CPUs case supported elsewhere
> in Linux? The x86 Kconfig appears to clamp NR_CPUS to 8192 iiuc.
>
> Anyway, additional patch below. Ingo -- please can you apply this on top?

I don't think we support >= 16K CPUs in any of the distros. However, this is
a limit that we will reach eventually. That is why I said we can wait.

Cheers,
Longman
diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 25730b2ac022..5a0cf5f9008c 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -130,11 +130,6 @@ static __always_inline void set_pending(struct qspinlock *lock)
 	atomic_or(_Q_PENDING_VAL, &lock->val);
 }
 
-static __always_inline void clear_pending(struct qspinlock *lock)
-{
-	atomic_andnot(_Q_PENDING_VAL, &lock->val);
-}
-
 static __always_inline int trylock_clear_pending(struct qspinlock *lock)
 {
 	int val = atomic_read(&lock->val);