From patchwork Fri Sep 8 15:59:59 2017
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 9944611
From: Jeremy Linton <jeremy.linton@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: mark.rutland@arm.com, peterz@infradead.org, catalin.marinas@arm.com,
	will.deacon@arm.com, mark.salter@redhat.com, mingo@redhat.com,
	Robin.Murphy@arm.com
Subject: [PATCH] arm64: spinlocks: Fix write starvation with rwlock
Date: Fri, 8 Sep 2017 10:59:59 -0500
Message-Id: <20170908155959.6940-1-jeremy.linton@arm.com>

The ARM64 rwlock is unfair in that readers can perpetually block the
writer. This is most noticeable with tests like
`stress-ng --kill $SOMENUMBERLESSTHANCPUS -t $RUNTIME -v`, which can
hold the task lock with fairly small process counts (say 4 on
thunderx). In some circumstances this results in machine deadlocks as
kernel tasks get blocked and the machine gradually falls over.

This patch changes the rwlock behavior so that the writer
unconditionally flags the lock structure (provided it is not already
flagged by another writer). This blocks further readers from acquiring
the lock. Once all the existing readers have drained, the writer that
successfully flagged the lock can proceed.
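In outline, the write-lock scheme works as follows (a minimal pseudo-C
sketch using GCC __atomic builtins; write_lock_sketch and WRITER_BIT are
illustrative names only, the real implementation being the hand-coded
LL/SC and LSE assembly in the diff below):

	#define WRITER_BIT	0x80000000u

	/* Illustrative sketch, not the patched code itself. */
	static void write_lock_sketch(unsigned int *lock)
	{
		unsigned int old;

		for (;;) {
			old = __atomic_load_n(lock, __ATOMIC_RELAXED);
			if (old & WRITER_BIT)
				continue;	/* another writer already flagged it */
			/* flag the lock; new readers now back off */
			if (__atomic_compare_exchange_n(lock, &old,
							old | WRITER_BIT,
							0, __ATOMIC_ACQUIRE,
							__ATOMIC_RELAXED))
				break;
		}
		/* wait for the existing readers to drain; the real code
		 * waits with wfe rather than busy-spinning */
		while (__atomic_load_n(lock, __ATOMIC_ACQUIRE) != WRITER_BIT)
			;
	}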
With this change the lock still has a fairness issue, caused by an open
race for ownership following a write unlock. If certain cores/clusters
are favored to win these races, a small set of writers could starve
other users (including other writers). This should not be a common
problem, given that rwlock users should be read-heavy with only the
occasional writer; further, the queued rwlock should also help to
alleviate it.

Heavily tested on 1S thunderx; further testing on 2S thunderx, seattle,
and the v8.3 fast model (for LSE). With the thunderx machines, the
stress-ng process counts can now be larger than the number of cores on
the machine without causing the interactivity problems previously seen
at much lower counts.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/include/asm/spinlock.h | 46 +++++++++++++++++++++++++--------------
 1 file changed, 30 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index cae331d553f8..2ddcce65ca17 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -191,7 +191,9 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
  * Write lock implementation.
  *
  * Write locks set bit 31. Unlocking, is done by writing 0 since the lock is
- * exclusively held.
+ * exclusively held. Setting the write bit (31) is used as a flag to drain the
+ * readers. The lock is considered taken for the writer only once all the
+ * readers have exited.
  *
  * The memory barriers are implicit with the load-acquire and store-release
  * instructions.
@@ -199,29 +201,41 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 static inline void arch_write_lock(arch_rwlock_t *rw)
 {
-	unsigned int tmp;
+	unsigned int tmp, tmp2, status;
 
 	asm volatile(ARM64_LSE_ATOMIC_INSN(
 	/* LL/SC */
 	"	sevl\n"
 	"1:	wfe\n"
-	"2:	ldaxr	%w0, %1\n"
-	"	cbnz	%w0, 1b\n"
-	"	stxr	%w0, %w2, %1\n"
-	"	cbnz	%w0, 2b\n"
-	__nops(1),
+	"2:	ldaxr	%w0, %3\n"
+	"	tbnz	%w0, #31, 1b\n"		/* must be another writer */
+	"	orr	%w1, %w0, %w4\n"
+	"	stxr	%w2, %w1, %3\n"
+	"	cbnz	%w2, 2b\n"		/* failed to store, try again */
+	"	cbz	%w0, 5f\n"		/* if there aren't any readers we're done */
+	"	sevl\n"
+	"3:	wfe\n"				/* spin waiting for the readers to exit */
+	"4:	ldaxr	%w0, %3\n"
+	"	cmp	%w0, %w4\n"
+	"	b.ne	3b\n"
+	"5:",
 	/* LSE atomics */
-	"1:	mov	%w0, wzr\n"
-	"2:	casa	%w0, %w2, %1\n"
-	"	cbz	%w0, 3f\n"
-	"	ldxr	%w0, %1\n"
-	"	cbz	%w0, 2b\n"
+	"1:	ldseta	%w4, %w0, %3\n"
+	"	cbz	%w0, 5f\n"		/* lock was clear, we are done */
+	"	tbz	%w0, #31, 4f\n"		/* we own the lock, wait for readers */
+	"2:	ldxr	%w0, %3\n"		/* spin waiting for writer to exit */
+	"	tbz	%w0, #31, 1b\n"
 	"	wfe\n"
-	"	b	1b\n"
-	"3:")
-	: "=&r" (tmp), "+Q" (rw->lock)
+	"	b	2b\n"
+	__nops(2)
+	"3:	wfe\n"				/* spin waiting for the readers to exit */
+	"4:	ldaxr	%w0, %3\n"
+	"	cmp	%w0, %w4\n"
+	"	b.ne	3b\n"
+	"5:")
+	: "=&r" (tmp), "=&r" (tmp2), "=&r" (status), "+Q" (rw->lock)
 	: "r" (0x80000000)
-	: "memory");
+	: "cc", "memory");
 }
 
 static inline int arch_write_trylock(arch_rwlock_t *rw)
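For reference, the drain flag works because the existing arm64 reader
side (not changed by this patch) already backs off whenever bit 31 is
set. In the same illustrative pseudo-C style as above (read_lock_sketch
is a made-up name, not the kernel's code):

	/* Illustrative sketch of the unchanged reader-side behavior. */
	static void read_lock_sketch(unsigned int *lock)
	{
		unsigned int old;

		for (;;) {
			old = __atomic_load_n(lock, __ATOMIC_RELAXED);
			if (old & 0x80000000u)
				continue;	/* a writer holds or has flagged the lock */
			if (__atomic_compare_exchange_n(lock, &old, old + 1,
							0, __ATOMIC_ACQUIRE,
							__ATOMIC_RELAXED))
				break;		/* reader count incremented */
		}
	}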