From patchwork Thu Apr  5 16:58:58 2018
From: Will Deacon <will.deacon@arm.com>
X-Patchwork-Id: 10325021
To: linux-kernel@vger.kernel.org
Subject: [PATCH 01/10] locking/qspinlock: Don't spin on pending->locked transition in slowpath
Date: Thu,  5 Apr 2018 17:58:58 +0100
Message-Id: <1522947547-24081-2-git-send-email-will.deacon@arm.com>
In-Reply-To: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
References: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
Cc: peterz@infradead.org, catalin.marinas@arm.com, boqun.feng@gmail.com,
 Will Deacon <will.deacon@arm.com>, paulmck@linux.vnet.ibm.com,
 mingo@kernel.org, linux-arm-kernel@lists.infradead.org

If a locker taking the qspinlock slowpath reads a lock value indicating
that only the pending bit is set, then it will spin whilst the
concurrent pending->locked transition takes effect.

Unfortunately, there is no guarantee that such a transition will ever
be observed, since concurrent lockers could continuously set pending
and hand over the lock amongst themselves, leading to starvation.
Whilst this would probably resolve in practice, it means that it is
not possible to prove liveness properties about the lock and that lock
acquisition time is unbounded.

Remove the pending->locked spinning from the slowpath and instead
queue explicitly if pending is set.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 kernel/locking/qspinlock.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index d880296245c5..a192af2fe378 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -306,16 +306,6 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		return;
 
 	/*
-	 * wait for in-progress pending->locked hand-overs
-	 *
-	 * 0,1,0 -> 0,0,1
-	 */
-	if (val == _Q_PENDING_VAL) {
-		while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL)
-			cpu_relax();
-	}
-
-	/*
 	 * trylock || pending
 	 *
 	 * 0,0,0 -> 0,0,1 ; trylock
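
To make the (tail, pending, locked) triples in the comments above concrete,
here is a minimal userspace C sketch of the lock-word layout and of the
slowpath entry decision before and after this patch. The bit values follow
the layout in kernel/locking/qspinlock_types.h (locked byte in bits 0-7,
pending bit in bit 8, queue tail from bit 16 for NR_CPUS < 16K), but the
helper functions are purely illustrative and are not kernel APIs:

/*
 * Illustrative sketch only: slowpath_entry_old()/slowpath_entry_new()
 * are hypothetical helpers modelling the decision, not kernel code.
 */
#include <stdint.h>
#include <stdio.h>

#define _Q_LOCKED_MASK   0x000000ffU   /* (0,0,*) locked byte          */
#define _Q_LOCKED_VAL    0x00000001U   /* (0,0,1) lock held            */
#define _Q_PENDING_VAL   0x00000100U   /* (0,1,0) pending bit only     */
#define _Q_TAIL_MASK     0xffff0000U   /* (n,*,*) MCS queue tail       */

/* Pre-patch: spin while the lock word is exactly (0,1,0). */
static const char *slowpath_entry_old(uint32_t val)
{
	if (val == _Q_PENDING_VAL)
		return "spin, waiting for the 0,1,0 -> 0,0,1 hand-over";
	return "fall through to trylock/pending";
}

/* Post-patch: any contention beyond the locked byte queues at once. */
static const char *slowpath_entry_new(uint32_t val)
{
	if (val & ~_Q_LOCKED_MASK)
		return "queue explicitly in the MCS queue";
	return "fall through to trylock/pending";
}

int main(void)
{
	uint32_t val = _Q_PENDING_VAL;	/* locker observes (0,1,0) */

	printf("old: %s\n", slowpath_entry_old(val));
	printf("new: %s\n", slowpath_entry_new(val));
	return 0;
}

Queueing on any observed contention gives up the chance of catching a quick
pending->locked hand-over, but it bounds the number of steps a locker takes
before joining the queue, which is what makes the liveness properties
mentioned in the commit message provable.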