From patchwork Thu Apr 26 10:34:17 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 10365467
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, boqun.feng@gmail.com, will.deacon@arm.com,
 longman@redhat.com, paulmck@linux.vnet.ibm.com, mingo@kernel.org,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 03/14] locking/qspinlock: Bound spinning on pending->locked transition in slowpath
Date: Thu, 26 Apr 2018 11:34:17 +0100
Message-Id: <1524738868-31318-4-git-send-email-will.deacon@arm.com>
In-Reply-To: <1524738868-31318-1-git-send-email-will.deacon@arm.com>
References: <1524738868-31318-1-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4

If a locker taking the qspinlock slowpath reads a lock value indicating
that only the pending bit is set, then it will spin whilst the
concurrent pending->locked transition takes effect.

Unfortunately, there is no guarantee that such a transition will ever be
observed, since concurrent lockers could continuously set pending and
hand over the lock amongst themselves, leading to starvation. Whilst
this would probably resolve in practice, it means that it is not
possible to prove liveness properties about the lock and that lock
acquisition time is unbounded.

Rather than removing the pending->locked spinning from the slowpath
altogether (which has been shown to heavily penalise a 2-threaded
locking stress test on x86), this patch replaces the explicit spinning
with a call to atomic_cond_read_relaxed and allows the architecture to
provide a bound on the number of spins. For architectures that can
respond to changes in cacheline state in their smp_cond_load
implementation, it should be sufficient to use the default bound of 1.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Suggested-by: Waiman Long <longman@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 kernel/locking/qspinlock.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)
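As a standalone illustration of the bounded-spin idiom used in the diff
below (a minimal sketch in C11 atomics; PENDING_VAL, PENDING_LOOPS and
wait_for_pending_handover() are hypothetical names with a simplified
lock encoding, not the kernel implementation):

#include <stdatomic.h>

#define PENDING_VAL	2U	/* hypothetical "pending bit only" value */
#define PENDING_LOOPS	1	/* spin budget, cf. _Q_PENDING_LOOPS */

static unsigned int wait_for_pending_handover(atomic_uint *lockword)
{
	int cnt = PENDING_LOOPS;
	unsigned int val;

	/*
	 * Re-read the lockword until the pending->locked hand-over is
	 * observed or the spin budget runs out; either way, the number
	 * of reads is bounded, so the waiter makes forward progress.
	 */
	do {
		val = atomic_load_explicit(lockword, memory_order_relaxed);
	} while (val == PENDING_VAL && cnt-- > 0);

	return val;
}

With PENDING_LOOPS == 1 this performs at most two reads, mirroring the
default _Q_PENDING_LOOPS bound established by the patch.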
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index f5b0e59f6d14..a0f7976348f8 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -77,6 +77,18 @@
 #endif
 
 /*
+ * The pending bit spinning loop count.
+ * This heuristic is used to limit the number of lockword accesses
+ * made by atomic_cond_read_relaxed when waiting for the lock to
+ * transition out of the "== _Q_PENDING_VAL" state. We don't spin
+ * indefinitely because there's no guarantee that we'll make forward
+ * progress.
+ */
+#ifndef _Q_PENDING_LOOPS
+#define _Q_PENDING_LOOPS	1
+#endif
+
+/*
  * Per-CPU queue node structures; we can never have more than 4 nested
  * contexts: task, softirq, hardirq, nmi.
  *
@@ -266,13 +278,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		return;
 
 	/*
-	 * wait for in-progress pending->locked hand-overs
+	 * Wait for in-progress pending->locked hand-overs with a bounded
+	 * number of spins so that we guarantee forward progress.
 	 *
 	 * 0,1,0 -> 0,0,1
 	 */
 	if (val == _Q_PENDING_VAL) {
-		while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL)
-			cpu_relax();
+		int cnt = _Q_PENDING_LOOPS;
+		val = atomic_cond_read_relaxed(&lock->val,
+					       (VAL != _Q_PENDING_VAL) || !cnt--);
 	}
 
 	/*
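For background, on architectures without a specialised
smp_cond_load_relaxed(), the generic fallback is roughly the polling
loop sketched below (a simplified sketch, not a verbatim copy of the
asm-generic version). Because the condition expression is re-evaluated
after every load, a side-effecting condition such as
"(VAL != _Q_PENDING_VAL) || !cnt--" caps the number of lockword reads
at _Q_PENDING_LOOPS + 1:

/*
 * Simplified sketch of a generic smp_cond_load_relaxed()-style loop:
 * VAL names the most recently loaded value and cond_expr is checked
 * after each load, so side effects in cond_expr can bound the loop.
 */
#define smp_cond_load_relaxed_sketch(ptr, cond_expr) ({	\
	typeof(ptr) __PTR = (ptr);			\
	typeof(*__PTR) VAL;				\
	for (;;) {					\
		VAL = READ_ONCE(*__PTR);		\
		if (cond_expr)				\
			break;				\
		cpu_relax();				\
	}						\
	VAL;						\
})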