From patchwork Thu Dec 1 14:06:50 2016
X-Patchwork-Submitter: Nicolai Hähnle
X-Patchwork-Id: 9456225
From: Nicolai Hähnle
To: linux-kernel@vger.kernel.org
Subject: [PATCH v2 07/11] locking/ww_mutex: Wake at most one waiter for back off when acquiring the lock
Date: Thu, 1 Dec 2016 15:06:50 +0100
Message-Id: <1480601214-26583-8-git-send-email-nhaehnle@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1480601214-26583-1-git-send-email-nhaehnle@gmail.com>
References: <1480601214-26583-1-git-send-email-nhaehnle@gmail.com>
Cc: Maarten Lankhorst, Nicolai Hähnle, Peter Zijlstra,
 dri-devel@lists.freedesktop.org, Ingo Molnar
List-Id: Direct Rendering Infrastructure - Development

From: Nicolai Hähnle

The wait list is sorted by stamp order, and the only waiting task that may
have to back off is the first waiter with a context.

The regular slow path does not have to wake any other tasks at all, since
all other waiters that would have to back off were either woken up when
the waiter was added to the list, or detected the condition before they
added themselves.

Median timings taken of a contention-heavy GPU workload:

Without this series:
real    0m59.900s
user    0m7.516s
sys     2m16.076s

With changes up to and including this patch:
real    0m52.946s
user    0m7.272s
sys     1m55.964s

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Maarten Lankhorst
Cc: Daniel Vetter
Cc: Chris Wilson
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle
---
 kernel/locking/mutex.c | 58 +++++++++++++++++++++++++++++++++-----------------
 1 file changed, 39 insertions(+), 19 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 01e9438..d2ca447 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -285,6 +285,35 @@ __ww_mutex_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
 }
 
 /*
+ * Wake up any waiters that may have to back off when the lock is held by the
+ * given context.
+ *
+ * Due to the invariants on the wait list, this can only affect the first
+ * waiter with a context.
+ *
+ * Must be called with wait_lock held. The current task must not be on the
+ * wait list.
+ */
+static void __sched
+__ww_mutex_wakeup_for_backoff(struct mutex *lock, struct ww_acquire_ctx *ww_ctx)
+{
+	struct mutex_waiter *cur;
+
+	list_for_each_entry(cur, &lock->wait_list, list) {
+		if (!cur->ww_ctx)
+			continue;
+
+		if (cur->ww_ctx->acquired > 0 &&
+		    __ww_mutex_stamp_after(cur->ww_ctx, ww_ctx)) {
+			debug_mutex_wake_waiter(lock, cur);
+			wake_up_process(cur->task);
+		}
+
+		break;
+	}
+}
+
+/*
  * After acquiring lock with fastpath or when we lost out in contested
  * slowpath, set ctx and wake up any waiters so they can recheck.
  */
@@ -293,7 +322,6 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock,
 			       struct ww_acquire_ctx *ctx)
 {
 	unsigned long flags;
-	struct mutex_waiter *cur;
 
 	ww_mutex_lock_acquired(lock, ctx);
 
@@ -319,16 +347,15 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock,
 	 * so they can see the new lock->ctx.
 	 */
 	spin_lock_mutex(&lock->base.wait_lock, flags);
-	list_for_each_entry(cur, &lock->base.wait_list, list) {
-		debug_mutex_wake_waiter(&lock->base, cur);
-		wake_up_process(cur->task);
-	}
+	__ww_mutex_wakeup_for_backoff(&lock->base, ctx);
 	spin_unlock_mutex(&lock->base.wait_lock, flags);
 }
 
 /*
- * After acquiring lock in the slowpath set ctx and wake up any
- * waiters so they can recheck.
+ * After acquiring lock in the slowpath set ctx.
+ *
+ * Unlike for the fast path, the caller ensures that waiters are woken up where
+ * necessary.
  *
  * Callers must hold the mutex wait_lock.
  */
@@ -336,19 +363,8 @@ static __always_inline void
 ww_mutex_set_context_slowpath(struct ww_mutex *lock,
 			      struct ww_acquire_ctx *ctx)
 {
-	struct mutex_waiter *cur;
-
 	ww_mutex_lock_acquired(lock, ctx);
 	lock->ctx = ctx;
-
-	/*
-	 * Give any possible sleeping processes the chance to wake up,
-	 * so they can recheck if they have to back off.
-	 */
-	list_for_each_entry(cur, &lock->base.wait_list, list) {
-		debug_mutex_wake_waiter(&lock->base, cur);
-		wake_up_process(cur->task);
-	}
 }
 
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
@@ -737,8 +753,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	/*
 	 * After waiting to acquire the wait_lock, try again.
 	 */
-	if (__mutex_trylock(lock, false))
+	if (__mutex_trylock(lock, false)) {
+		if (use_ww_ctx && ww_ctx)
+			__ww_mutex_wakeup_for_backoff(lock, ww_ctx);
+
 		goto skip_wait;
+	}
 
 	debug_mutex_lock_common(lock, &waiter);
 	debug_mutex_add_waiter(lock, &waiter, task);
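
For readers who want to poke at the invariant outside the kernel, here is a
minimal userspace C sketch of the scan that __ww_mutex_wakeup_for_backoff()
performs. Everything in it is a stand-in rather than kernel API: struct
waiter, wake_one() and wakeup_for_backoff() are hypothetical names, a plain
array replaces the kernel's stamp-sorted wait_list, and a raw unsigned long
comparison replaces __ww_mutex_stamp_after(). Lower stamp means older, i.e.
higher priority, and younger contexts are the ones that back off.

#include <stddef.h>
#include <stdio.h>

struct waiter {
	const unsigned long *stamp;	/* NULL: waiter has no ww context */
	int acquired;			/* ww locks already held by that context */
	const char *name;
};

static void wake_one(const struct waiter *w)
{
	printf("waking %s\n", w->name);	/* stand-in for wake_up_process() */
}

static void wakeup_for_backoff(const struct waiter *list, size_t n,
			       unsigned long holder_stamp)
{
	for (size_t i = 0; i < n; i++) {
		if (!list[i].stamp)
			continue;	/* no context: never backs off */

		/*
		 * First waiter with a context. It must back off only if it
		 * is younger (higher stamp) than the new lock holder and
		 * actually holds other ww locks it could release.
		 */
		if (list[i].acquired > 0 && *list[i].stamp > holder_stamp)
			wake_one(&list[i]);

		break;			/* at most one waiter is woken */
	}
}

int main(void)
{
	unsigned long s1 = 10, s2 = 20;
	const struct waiter list[] = {
		{ NULL, 0, "plain mutex waiter" },
		{ &s1,  1, "ww waiter, stamp 10" },
		{ &s2,  1, "ww waiter, stamp 20" },	/* handled at enqueue time */
	};

	/*
	 * A context with stamp 5 just acquired the lock: only the stamp-10
	 * waiter needs to be told to back off.
	 */
	wakeup_for_backoff(list, 3, 5);
	return 0;
}

Running the sketch wakes only the stamp-10 waiter: the context-less waiter
is skipped, and the stamp-20 waiter is never even examined, mirroring the
commit message's argument that any later waiter needing to back off was
already woken when it was added to the list, or noticed the condition
before adding itself.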