From patchwork Wed Jan 27 02:04:47 2016
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 8128581
Date: Wed, 27 Jan 2016 10:04:47 +0800
From: Boqun Feng
To: "Paul E. McKenney"
Subject: Re: [v3,11/41] mips: reuse asm-generic/barrier.h
Message-ID: <20160127020447.GA1293@fixme-laptop.cn.ibm.com>
References: <20160114204827.GE3818@linux.vnet.ibm.com>
 <20160118081929.GA30420@gondor.apana.org.au>
 <20160118154629.GB3818@linux.vnet.ibm.com>
 <20160126165207.GB6029@fixme-laptop.cn.ibm.com>
 <20160126172227.GG6357@twins.programming.kicks-ass.net>
 <20160126201037.GU4503@linux.vnet.ibm.com>
 <20160126232921.GY4503@linux.vnet.ibm.com>
In-Reply-To: <20160126232921.GY4503@linux.vnet.ibm.com>

On Tue, Jan 26, 2016 at 03:29:21PM -0800, Paul E. McKenney wrote:
> On Tue, Jan 26, 2016 at 02:33:40PM -0800, Linus Torvalds wrote:
> > On Tue, Jan 26, 2016 at 2:15 PM, Linus Torvalds wrote:
> > >
> > > You might as well just write it as
> > >
> > >    struct foo x = READ_ONCE(*ptr);
> > >    x->bar = 5;
> > >
> > > because that "smp_read_barrier_depends()" does NOTHING wrt the second write.
> >
> > Just to clarify: on alpha it adds a memory barrier, but that memory
> > barrier is useless.
>
> No trailing data-dependent read, so agreed, no smp_read_barrier_depends()
> needed.  That said, I believe that we should encourage rcu_dereference*()
> or lockless_dereference() instead of READ_ONCE() for documentation
> reasons, though.
>
> > On non-alpha, it is a no-op, and obviously does nothing simply because
> > it generates no code.
> >
> > So if anybody believes that the "smp_read_barrier_depends()" does
> > something, they are *wrong*.
>
> The other problem with smp_read_barrier_depends() is that it is often
> a pain figuring out which prior load it is supposed to apply to.
> Hence my preference for rcu_dereference*() and lockless_dereference().
>

Because semantically speaking, rcu_dereference*() and lockless_dereference()
are CONSUME (i.e. data/address-dependent read->read and read->write pairs are
ordered), whereas smp_read_barrier_depends() only guarantees that read->read
pairs with a data dependency are ordered, right?
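As a concrete sketch of that ordering (the type, field, and function names
below are invented, and 2016-era kernel primitives such as smp_store_release()
and lockless_dereference() are assumed), both the dependent read of
c->threshold and the dependent write to c->hits hang off the pointer returned
by lockless_dereference(), so neither can be reordered before that load:

#include <linux/compiler.h>     /* READ_ONCE(), lockless_dereference() */
#include <asm/barrier.h>        /* smp_store_release() */

struct cfg {
        int threshold;
        unsigned long hits;
};

static struct cfg *global_cfg;

/* Publisher: initialise the object, then release-store the pointer. */
void publish_cfg(struct cfg *c)
{
        c->threshold = 16;
        c->hits = 0;
        smp_store_release(&global_cfg, c);  /* pairs with the consumer's load */
}

/* Consumer: both accesses below carry a data dependency on 'c'. */
void consume_cfg(void)
{
        struct cfg *c = lockless_dereference(global_cfg);

        if (!c)
                return;

        if (c->threshold > 0)   /* dependent read, ordered after the load */
                c->hits++;      /* dependent write, ordered after the load too */
}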
If so, maybe we need to call it out in memory-barriers.txt, for example as in
the diff at the end of this mail:

Regards,
Boqun

> > And if anybody sends out an email with that smp_read_barrier_depends()
> > in an example, they are actively just confusing other people, which is
> > even worse than just being wrong. Which is why I jumped in.
> >
> > So stop perpetuating the myth that smp_read_barrier_depends() does
> > something here. It does not. It's a bug, and it has become this "mind
> > virus" for some people that seem to believe that it does something.
>
> It looks like I should add words to memory-barriers.txt de-emphasizing
> smp_read_barrier_depends().  I will take a look at that.
>
> > I had to remove this crap once from the kernel already, see commit
> > 105ff3cbf225 ("atomic: remove all traces of READ_ONCE_CTRL() and
> > atomic*_read_ctrl()").
> >
> > I don't want to ever see that broken construct again. And I want to
> > make sure that everybody is educated about how broken it was. I'm
> > extremely unhappy that it came up again.
>
> Well, if it makes you feel better, that was control dependencies and this
> was data dependencies.  So it was not -exactly- the same.  ;-)
>
> (Sorry, couldn't resist...)
>
> > If it turns out that some architecture does actually need a barrier
> > between a read and a dependent write, then that will mean that
> >
> >  (a) we'll have to make up a _new_ barrier, because
> >      "smp_read_barrier_depends()" is not that barrier. We'll presumably
> >      then have to make that new barrier part of "rcu_dereference()" and
> >      friends.
>
> Agreed.  We can worry about whether or not we replace the current
> smp_read_barrier_depends() with that new barrier when and if such
> hardware appears.
>
> >  (b) we will have found an architecture with even worse memory
> >      ordering semantics than alpha, and we'll have to stop castigating
> >      alpha for being the worst memory ordering ever.
>
> ;-) ;-) ;-)
>
> > but I sincerely hope that we'll never find that kind of broken architecture.
>
> Apparently at least some hardware vendors are reading memory-barriers.txt,
> so perhaps the odds of that kind of breakage have reduced.
>
>                                                         Thanx, Paul
>

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 904ee42..6b262c2 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1703,8 +1703,8 @@ There are some more advanced barrier functions:
 
  (*) lockless_dereference();
 
-     This can be thought of as a pointer-fetch wrapper around the
-     smp_read_barrier_depends() data-dependency barrier.
+     This is a load, and any load or store that has a data dependency on the
+     value returned by this load won't be reordered before this load.
 
      This is also similar to rcu_dereference(), but in cases where
      object lifetime is handled by some mechanism other than RCU, for
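For contrast, a sketch of the construct the thread warns against next to the
form the updated text describes (again with invented names, assuming 2016-era
READ_ONCE(), smp_read_barrier_depends(), and lockless_dereference()):

#include <linux/compiler.h>     /* READ_ONCE(), lockless_dereference() */
#include <asm/barrier.h>        /* smp_read_barrier_depends() */

struct foo {
        int bar;
};

struct foo *gp;  /* assumed published elsewhere, e.g. via smp_store_release() */

/*
 * The construct objected to above: there is no trailing data-dependent
 * read, so the barrier orders nothing useful here, and on non-Alpha it
 * generates no code at all.
 */
void update_discouraged(void)
{
        struct foo *x = READ_ONCE(gp);

        smp_read_barrier_depends();
        x->bar = 5;
}

/*
 * The preferred spelling: the dependent store is ordered just the same,
 * and the primitive documents which load the dependency hangs off.
 */
void update_preferred(void)
{
        struct foo *x = lockless_dereference(gp);

        x->bar = 5;
}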