diff mbox series

[1/2] xen/spinlocks: spin_trylock with interrupts off is always fine

Message ID 20201030142500.5464-2-jgross@suse.com (mailing list archive)
State Superseded
Series xen/locking: fix and enhance lock debugging

Commit Message

Jürgen Groß Oct. 30, 2020, 2:24 p.m. UTC
Even if a spinlock was taken with interrupts on before, calling
spin_trylock() with interrupts off is fine, as it can't block.

Add a bool parameter "try" to check_lock() for handling this case.

Remove the call of check_lock() from _spin_is_locked(), as it really
serves no purpose and it can even lead to false crashes, e.g. when
a lock was taken correctly with interrupts enabled and the call of
_spin_is_locked() happened with interrupts off. In case the lock is
taken with wrong interrupt flags this will be catched when taking
the lock.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/spinlock.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

Comments

Jan Beulich Oct. 30, 2020, 2:59 p.m. UTC | #1
On 30.10.2020 15:24, Juergen Gross wrote:
> Even if a spinlock was taken with interrupts on before, calling
> spin_trylock() with interrupts off is fine, as it can't block.
> 
> Add a bool parameter "try" to check_lock() for handling this case.
> 
> Remove the call of check_lock() from _spin_is_locked(), as it really
> serves no purpose and it can even lead to false crashes, e.g. when
> a lock was taken correctly with interrupts enabled and the call of
> _spin_is_locked() happened with interrupts off. If the lock is
> taken with the wrong interrupt flags, this will be caught when taking
> the lock.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit I guess ...

> @@ -42,7 +42,13 @@ static void check_lock(union lock_debug *debug)
>       * 
>       * To guard against this subtle bug we latch the IRQ safety of every
>       * spinlock in the system, on first use.
> +     *
> +     * A spin_trylock() or spin_is_locked() with interrupts off is always
> +     * fine, as those can't block and above deadlock scenario doesn't apply.
>       */
> +    if ( try && irq_safe )
> +        return;

... the reference to spin_is_locked() here wants dropping,
since ...

> @@ -220,8 +226,6 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
>  
>  int _spin_is_locked(spinlock_t *lock)
>  {
> -    check_lock(&lock->debug);

... you drop the call here?

Jan
Jürgen Groß Oct. 30, 2020, 3:01 p.m. UTC | #2
On 30.10.20 15:59, Jan Beulich wrote:
> On 30.10.2020 15:24, Juergen Gross wrote:
>> Even if a spinlock was taken with interrupts on before, calling
>> spin_trylock() with interrupts off is fine, as it can't block.
>>
>> Add a bool parameter "try" to check_lock() for handling this case.
>>
>> Remove the call of check_lock() from _spin_is_locked(), as it really
>> serves no purpose and it can even lead to false crashes, e.g. when
>> a lock was taken correctly with interrupts enabled and the call of
>> _spin_is_locked() happened with interrupts off. If the lock is
>> taken with the wrong interrupt flags, this will be caught when taking
>> the lock.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> albeit I guess ...
> 
>> @@ -42,7 +42,13 @@ static void check_lock(union lock_debug *debug)
>>        *
>>        * To guard against this subtle bug we latch the IRQ safety of every
>>        * spinlock in the system, on first use.
>> +     *
>> +     * A spin_trylock() or spin_is_locked() with interrupts off is always
>> +     * fine, as those can't block and above deadlock scenario doesn't apply.
>>        */
>> +    if ( try && irq_safe )
>> +        return;
> 
> ... the reference to spin_is_locked() here wants dropping,
> since ...
> 
>> @@ -220,8 +226,6 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
>>   
>>   int _spin_is_locked(spinlock_t *lock)
>>   {
>> -    check_lock(&lock->debug);
> 
> ... you drop the call here?

Oh yes, this was a late modification and I didn't adapt the comment
accordingly. Thanks for spotting it.


Juergen

Patch

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index ce3106e2d3..54f0c55dc2 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@ 
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-static void check_lock(union lock_debug *debug)
+static void check_lock(union lock_debug *debug, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -42,7 +42,13 @@  static void check_lock(union lock_debug *debug)
      * 
      * To guard against this subtle bug we latch the IRQ safety of every
      * spinlock in the system, on first use.
+     *
+     * A spin_trylock() or spin_is_locked() with interrupts off is always
+     * fine, as those can't block and above deadlock scenario doesn't apply.
      */
+    if ( try && irq_safe )
+        return;
+
     if ( unlikely(debug->irq_safe != irq_safe) )
     {
         union lock_debug seen, new = { 0 };
@@ -102,7 +108,7 @@  void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_lock(l) ((void)0)
+#define check_lock(l, t) ((void)0)
 #define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
@@ -159,7 +165,7 @@  void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
     spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
     LOCK_PROFILE_VAR;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, false);
     preempt_disable();
     tickets.head_tail = arch_fetch_and_add(&lock->tickets.head_tail,
                                            tickets.head_tail);
@@ -220,8 +226,6 @@  void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 
 int _spin_is_locked(spinlock_t *lock)
 {
-    check_lock(&lock->debug);
-
     /*
      * Recursive locks may be locked by another CPU, yet we return
      * "false" here, making this function suitable only for use in
@@ -236,7 +240,7 @@  int _spin_trylock(spinlock_t *lock)
 {
     spinlock_tickets_t old, new;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
     old = observe_lock(&lock->tickets);
     if ( old.head != old.tail )
         return 0;
@@ -294,7 +298,7 @@  int _spin_trylock_recursive(spinlock_t *lock)
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
 
     if ( likely(lock->recurse_cpu != cpu) )
     {