From patchwork Fri Feb 16 10:21:00 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10224327
Date: Fri, 16 Feb 2018 11:21:00 +0100
From: Peter Zijlstra
To: Will Deacon
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	mingo@kernel.org
Subject: Re: [RFC PATCH 3/5] asm-generic/bitops/atomic.h: Rewrite using atomic_fetch_*
Message-ID: <20180216102100.GB25201@hirez.programming.kicks-ass.net>
References: <1518708575-12284-1-git-send-email-will.deacon@arm.com>
 <1518708575-12284-4-git-send-email-will.deacon@arm.com>
 <20180215170847.GD25181@hirez.programming.kicks-ass.net>
 <20180215182049.GC15274@arm.com>
In-Reply-To: <20180215182049.GC15274@arm.com>

On Thu, Feb 15, 2018 at 06:20:49PM +0000, Will Deacon wrote:
> On Thu, Feb 15, 2018 at 06:08:47PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 15, 2018 at 03:29:33PM +0000, Will Deacon wrote:
> > > +static inline void __clear_bit_unlock(unsigned int nr,
> > > +				      volatile unsigned long *p)
> > > +{
> > > +	unsigned long old;
> > > +
> > > +	p += BIT_WORD(nr);
> > > +	old = READ_ONCE(*p);
> > > +	old &= ~BIT_MASK(nr);
> > > +	smp_store_release(p, old);
> > 
> > This should be atomic_set_release() I think, for the special case where
> > atomics are implemented with spinlocks, see the 'fun' case in
> > Documentation/atomic_t.txt.
> 
> My understanding of __clear_bit_unlock is that there is guaranteed to be
> no concurrent accesses to the same word, so why would it matter whether
> locks are used to implement atomics?

commit f75d48644c56a31731d17fa693c8175328957e1d
Author: Peter Zijlstra
Date:   Wed Mar 9 12:40:54 2016 +0100

    bitops: Do not default to __clear_bit() for __clear_bit_unlock()

    __clear_bit_unlock() is a special little snowflake. While it carries the
    non-atomic '__' prefix, it is specifically documented to pair with
    test_and_set_bit() and therefore should be 'somewhat' atomic.

    Therefore the generic implementation of __clear_bit_unlock() cannot use
    the fully non-atomic __clear_bit() as a default.

    If an arch is able to do better, it must provide an implementation of
    __clear_bit_unlock() itself.

    Specifically, this came up as a result of hackbench livelocking in
    slab_lock() on ARC with SMP + SLUB + !LLSC.

    The issue was incorrect pairing of atomic ops:

      slab_lock()   -> bit_spin_lock()     -> test_and_set_bit()
      slab_unlock() -> __bit_spin_unlock() -> __clear_bit()

    The non-serializing __clear_bit() was getting "lost":

      80543b8e: ld_s  r2,[r13,0]  <--- (A) Finds PG_locked is set
      80543b90: or    r3,r2,1     <--- (B) other core unlocks right here
      80543b94: st_s  r3,[r13,0]  <--- (C) sets PG_locked (overwrites unlock)

    Fixes ARC STAR 9000817404 (and probably more).

    Reported-by: Vineet Gupta
    Tested-by: Vineet Gupta
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Helge Deller
    Cc: James E.J. Bottomley
    Cc: Joonsoo Kim
    Cc: Linus Torvalds
    Cc: Noam Camus
    Cc: Paul E. McKenney
    Cc: Pekka Enberg
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/20160309114054.GJ6356@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
index c30266e94806..8ef0ccbf8167 100644
--- a/include/asm-generic/bitops/lock.h
+++ b/include/asm-generic/bitops/lock.h
@@ -29,16 +29,16 @@ do {					\
  * @nr: the bit to set
  * @addr: the address to start counting from
  *
- * This operation is like clear_bit_unlock, however it is not atomic.
- * It does provide release barrier semantics so it can be used to unlock
- * a bit lock, however it would only be used if no other CPU can modify
- * any bits in the memory until the lock is released (a good example is
- * if the bit lock itself protects access to the other bits in the word).
+ * A weaker form of clear_bit_unlock() as used by __bit_lock_unlock(). If all
+ * the bits in the word are protected by this lock some archs can use weaker
+ * ops to safely unlock.
+ *
+ * See for example x86's implementation.
 */
#define __clear_bit_unlock(nr, addr)	\
do {					\
-	smp_mb();			\
-	__clear_bit(nr, addr);		\
+	smp_mb__before_atomic();	\
+	clear_bit(nr, addr);		\
} while (0)

#endif /* _ASM_GENERIC_BITOPS_LOCK_H_ */