From patchwork Thu Dec  1 14:06:45 2016
X-Patchwork-Submitter: Nicolai Hähnle
X-Patchwork-Id: 9456215
From: Nicolai Hähnle
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 02/11] locking/ww_mutex: Re-check ww->ctx in the inner optimistic spin loop
Date: Thu, 1 Dec 2016 15:06:45 +0100
Message-Id: <1480601214-26583-3-git-send-email-nhaehnle@gmail.com>
In-Reply-To: <1480601214-26583-1-git-send-email-nhaehnle@gmail.com>
References: <1480601214-26583-1-git-send-email-nhaehnle@gmail.com>
Cc: Maarten Lankhorst,
    Nicolai Hähnle, Peter Zijlstra, dri-devel@lists.freedesktop.org,
    Ingo Molnar
List-Id: Direct Rendering Infrastructure - Development

From: Nicolai Hähnle

In the following scenario, thread #1 should back off its attempt to
lock ww1 and unlock ww2 (assuming the acquire context stamps are
ordered accordingly).

Thread #0               Thread #1
---------               ---------
                        successfully lock ww2
set ww1->base.owner
                        attempt to lock ww1
                        confirm ww1->ctx == NULL
                        enter mutex_spin_on_owner
set ww1->ctx

What was likely to happen previously is:

attempt to lock ww2
refuse to spin because
  ww2->ctx != NULL
schedule()
                        detect thread #0 is off CPU
                        stop optimistic spin
                        return -EDEADLK
                        unlock ww2
                        wakeup thread #0
lock ww2

Now, we are more likely to see:

                        detect ww1->ctx != NULL
                        stop optimistic spin
                        return -EDEADLK
                        unlock ww2
successfully lock ww2

... because thread #1 will stop its optimistic spin as soon as
possible.

The whole scenario is quite unlikely, since it requires thread #1 to
get between thread #0 setting the owner and setting the ctx. But
since we're idling here anyway, the additional check is basically
free.

Found by inspection.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Maarten Lankhorst
Cc: Daniel Vetter
Cc: Chris Wilson
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle
---
 kernel/locking/mutex.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 9b34961..0afa998 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -350,7 +350,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
  * access and not reliable.
  */
 static noinline
-bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
+bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
+			 bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx)
 {
 	bool ret = true;
 
@@ -373,6 +374,28 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 			break;
 		}
 
+		if (use_ww_ctx && ww_ctx->acquired > 0) {
+			struct ww_mutex *ww;
+
+			ww = container_of(lock, struct ww_mutex, base);
+
+			/*
+			 * If ww->ctx is set the contents are undefined, only
+			 * by acquiring wait_lock there is a guarantee that
+			 * they are not invalid when reading.
+			 *
+			 * As such, when deadlock detection needs to be
+			 * performed the optimistic spinning cannot be done.
+			 *
+			 * Check this in every inner iteration because we may
+			 * be racing against another thread's ww_mutex_lock.
+			 */
+			if (READ_ONCE(ww->ctx)) {
+				ret = false;
+				break;
+			}
+		}
+
 		cpu_relax();
 	}
 	rcu_read_unlock();
@@ -460,22 +483,6 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 	for (;;) {
 		struct task_struct *owner;
 
-		if (use_ww_ctx && ww_ctx->acquired > 0) {
-			struct ww_mutex *ww;
-
-			ww = container_of(lock, struct ww_mutex, base);
-			/*
-			 * If ww->ctx is set the contents are undefined, only
-			 * by acquiring wait_lock there is a guarantee that
-			 * they are not invalid when reading.
-			 *
-			 * As such, when deadlock detection needs to be
-			 * performed the optimistic spinning cannot be done.
-			 */
-			if (READ_ONCE(ww->ctx))
-				goto fail_unlock;
-		}
-
 		/*
 		 * If there's an owner, wait for it to either
 		 * release the lock or go to sleep.
@@ -487,7 +494,8 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 			break;
 		}
 
-		if (!mutex_spin_on_owner(lock, owner))
+		if (!mutex_spin_on_owner(lock, owner, use_ww_ctx,
+					 ww_ctx))
 			goto fail_unlock;
 	}