
[v3,02/12] locking/mutex: Fix a race with handoffs and interruptible waits

Message ID 1482346000-9927-3-git-send-email-nhaehnle@gmail.com
State New, archived

Commit Message

Nicolai Hähnle Dec. 21, 2016, 6:46 p.m. UTC
From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>

There's a possible race where the waiter in front of us leaves the wait list
due to a signal, and the current owner subsequently hands the lock off to us
even though we never observed ourselves at the front of the list.
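
A sketch of one way the race can play out with the pre-patch ordering
(simplified; this assumes __mutex_trylock() only picks up a handoff when
its second argument is true, which is how the hunk below passes `first`):

    us (waiter, not at the front)      waiter ahead / owner
    -----------------------------      --------------------
    schedule_preempt_disabled()
      /* wakes up, state == RUNNING */
    __mutex_waiter_is_first() == false
                                       waiter ahead leaves the list on
                                       a signal; owner hands the lock
                                       off to us and wakes us; we are
                                       already RUNNING, so the wakeup
                                       is a no-op
    set_task_state(task, state)
    __mutex_trylock(lock, false)
      /* does not take the handoff */
    schedule_preempt_disabled()
      /* next iteration: sleeps, with
         nobody left to wake us */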

Set the task state before checking our position in the list: this way, a
handoff wakeup that would otherwise be lost puts the task back into the
RUNNING state, and the race is handled by falling through the next
schedule().
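
With the new ordering, a sketch of the same interleaving (again
simplified): either the handoff wakeup lands before set_task_state(), in
which case we go on to observe ourselves at the front of the list, or it
lands after, flips us back to RUNNING, and the next schedule() falls
through:

    us (waiter, not at the front)      waiter ahead / owner
    -----------------------------      --------------------
    set_task_state(task, state)
    __mutex_waiter_is_first() == false
                                       waiter ahead leaves the list on
                                       a signal; owner hands the lock
                                       off to us and wakes us
                                       -> state back to RUNNING
    __mutex_trylock(lock, false)
      /* still no handoff pickup */
    schedule_preempt_disabled()
      /* RUNNING, falls through */
    set_task_state(task, state)
    __mutex_waiter_is_first() == true
    first = true
    __mutex_trylock(lock, true)
      /* picks up the handoff */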

Found by inspection.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
---
 kernel/locking/mutex.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

Patch

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 9b34961..c02c566 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -697,17 +697,18 @@  __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		spin_unlock_mutex(&lock->wait_lock, flags);
 		schedule_preempt_disabled();
 
-		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
-			first = true;
-			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
-		}
-
 		set_task_state(task, state);
 		/*
 		 * Here we order against unlock; we must either see it change
 		 * state back to RUNNING and fall through the next schedule(),
 		 * or we must see its unlock and acquire.
 		 */
+
+		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
+			first = true;
+			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+		}
+
 		if ((first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
 		     __mutex_trylock(lock, first))
 			break;