From patchwork Tue Dec 6 16:03:28 2016
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 9463667
Subject: Re: [PATCH v2 02/11] locking/ww_mutex: Re-check ww->ctx in the inner optimistic spin loop
From: Waiman Long
Organization: Red Hat
To: Peter Zijlstra, Nicolai Hähnle
Cc: Maarten Lankhorst, Nicolai Hähnle, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, Ingo Molnar
Date: Tue, 6 Dec 2016 11:03:28 -0500
In-Reply-To: <20161206150620.GT3045@worktop.programming.kicks-ass.net>

On 12/06/2016 10:06 AM, Peter Zijlstra wrote:
> On Thu, Dec 01, 2016 at 03:06:45PM +0100, Nicolai Hähnle wrote:
>> +++ b/kernel/locking/mutex.c
>> @@ -350,7 +350,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
>>   * access and not reliable.
>>   */
>>  static noinline
>> -bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
>> +bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
>> +			 bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx)
>>  {
>>  	bool ret = true;
>>  
>> @@ -373,6 +374,28 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
>>  			break;
>>  		}
>>  
>> +		if (use_ww_ctx && ww_ctx->acquired > 0) {
>> +			struct ww_mutex *ww;
>> +
>> +			ww = container_of(lock, struct ww_mutex, base);
>> +
>> +			/*
>> +			 * If ww->ctx is set the contents are undefined, only
>> +			 * by acquiring wait_lock there is a guarantee that
>> +			 * they are not invalid when reading.
>> +			 *
>> +			 * As such, when deadlock detection needs to be
>> +			 * performed the optimistic spinning cannot be done.
>> +			 *
>> +			 * Check this in every inner iteration because we may
>> +			 * be racing against another thread's ww_mutex_lock.
>> +			 */
>> +			if (READ_ONCE(ww->ctx)) {
>> +				ret = false;
>> +				break;
>> +			}
>> +		}
>> +
>>  		cpu_relax();
>>  	}
>>  	rcu_read_unlock();
> Aside from the valid question about mutex_can_spin_on_owner() there's
> another 'problem' here, mutex_spin_on_owner() is marked noinline, so all
> the use_ww_ctx stuff doesn't 'work' here.

The mutex_spin_on_owner() function was originally marked noinline
because it can be a major consumer of CPU cycles on a contended lock.
Having it show up as a separate entry in the perf output gives users a
better picture of where the CPU cycles are going, so I would still like
to keep it that way.

If you are concerned about the additional latency for non-ww_mutex
calls, one alternative would be:

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 0afa998..777338d 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -349,9 +349,9 @@ static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww,
  * Look out! "owner" is an entirely speculative pointer
  * access and not reliable.
  */
-static noinline
-bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
-			 bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx)
+static __always_inline
+bool __mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
+			   const bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx)
 {
 	bool ret = true;
 
@@ -403,6 +403,19 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 	return ret;
 }
 
+static noinline
+bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
+{
+	return __mutex_spin_on_owner(lock, owner, false, NULL);
+}
+
+static noinline
+bool ww_mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
+			    struct ww_acquire_ctx *ww_ctx)
+{
+	return __mutex_spin_on_owner(lock, owner, true, ww_ctx);
+}
+
 /*
  * Initial check for entering the mutex spinning loop
  */
@@ -489,13 +502,17 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 		 */
 		owner = __mutex_owner(lock);
 		if (owner) {
+			bool spin_ok;
+
 			if (waiter && owner == task) {
 				smp_mb(); /* ACQUIRE */
 				break;
 			}
 
-			if (!mutex_spin_on_owner(lock, owner, use_ww_ctx,
-						 ww_ctx))
+			spin_ok = use_ww_ctx
+				? ww_mutex_spin_on_owner(lock, owner, ww_ctx)
+				: mutex_spin_on_owner(lock, owner);
+			if (!spin_ok)
 				goto fail_unlock;
 		}
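
To make the intent of that split concrete, here is a minimal,
self-contained user-space sketch of the same pattern (hypothetical
names, GCC/Clang attributes assumed; an illustration, not the kernel
code). The __always_inline core lets the compiler constant-fold the
flag in each noinline wrapper, while the wrappers remain distinct
symbols for profiling:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define __always_inline	inline __attribute__((__always_inline__))
#define noinline	__attribute__((__noinline__))

struct ctx { int acquired; };

/*
 * The shared core: because use_ctx is a compile-time constant at every
 * call site, the compiler folds the test and drops the dead branch.
 */
static __always_inline bool spin_core(const bool use_ctx, struct ctx *c)
{
	if (use_ctx && c->acquired > 0)
		return false;	/* stop spinning, like the ww->ctx check */
	return true;		/* keep spinning */
}

/* Separate noinline wrappers: each is its own symbol in a profile. */
static noinline bool plain_spin(void)
{
	return spin_core(false, NULL);
}

static noinline bool ctx_spin(struct ctx *c)
{
	return spin_core(true, c);
}

int main(void)
{
	struct ctx c = { .acquired = 1 };

	printf("plain_spin: %d, ctx_spin: %d\n", plain_spin(), ctx_spin(&c));
	return 0;
}

Built with gcc -O2, plain_spin() should contain no trace of the ctx
test, yet plain_spin() and ctx_spin() still appear under their own
names in perf output, which is the property the diff above tries to
preserve.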