From patchwork Thu Apr 7 15:12:22 2016
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 8773941
From: Michal Hocko
To: LKML
Cc: Peter Zijlstra, Ingo Molnar, Thomas Gleixner, "H. Peter Anvin",
Miller" , Tony Luck , Andrew Morton , Chris Zankel , Max Filippov , x86@kernel.org, linux-alpha@vger.kernel.org, linux-ia64@vger.kernel.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org, linux-arch@vger.kernel.org, Michal Hocko Subject: [PATCH 02/11] locking, rwsem: drop explicit memory barriers Date: Thu, 7 Apr 2016 17:12:22 +0200 Message-Id: <1460041951-22347-3-git-send-email-mhocko@kernel.org> X-Mailer: git-send-email 2.8.0.rc3 In-Reply-To: <1460041951-22347-1-git-send-email-mhocko@kernel.org> References: <1460041951-22347-1-git-send-email-mhocko@kernel.org> Sender: linux-sh-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org X-Spam-Status: No, score=-7.9 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_HI, RP_MATCHES_RCVD, UNPARSEABLE_RELAY autolearn=unavailable version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mail.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Michal Hocko sh and xtensa seem to be the only architectures which use explicit memory barriers for rw_semaphore operations even though they are not really needed because there is the full memory barrier is always implied by atomic_{inc,dec,add,sub}_return resp. cmpxchg. Remove them. Signed-off-by: Michal Hocko --- arch/sh/include/asm/rwsem.h | 14 ++------------ arch/xtensa/include/asm/rwsem.h | 14 ++------------ 2 files changed, 4 insertions(+), 24 deletions(-) diff --git a/arch/sh/include/asm/rwsem.h b/arch/sh/include/asm/rwsem.h index a5104bebd1eb..f6c951c7a875 100644 --- a/arch/sh/include/asm/rwsem.h +++ b/arch/sh/include/asm/rwsem.h @@ -24,9 +24,7 @@ */ static inline void __down_read(struct rw_semaphore *sem) { - if (atomic_inc_return((atomic_t *)(&sem->count)) > 0) - smp_wmb(); - else + if (atomic_inc_return((atomic_t *)(&sem->count)) <= 0) rwsem_down_read_failed(sem); } @@ -37,7 +35,6 @@ static inline int __down_read_trylock(struct rw_semaphore *sem) while ((tmp = sem->count) >= 0) { if (tmp == cmpxchg(&sem->count, tmp, tmp + RWSEM_ACTIVE_READ_BIAS)) { - smp_wmb(); return 1; } } @@ -53,9 +50,7 @@ static inline void __down_write(struct rw_semaphore *sem) tmp = atomic_add_return(RWSEM_ACTIVE_WRITE_BIAS, (atomic_t *)(&sem->count)); - if (tmp == RWSEM_ACTIVE_WRITE_BIAS) - smp_wmb(); - else + if (tmp != RWSEM_ACTIVE_WRITE_BIAS) rwsem_down_write_failed(sem); } @@ -65,7 +60,6 @@ static inline int __down_write_trylock(struct rw_semaphore *sem) tmp = cmpxchg(&sem->count, RWSEM_UNLOCKED_VALUE, RWSEM_ACTIVE_WRITE_BIAS); - smp_wmb(); return tmp == RWSEM_UNLOCKED_VALUE; } @@ -76,7 +70,6 @@ static inline void __up_read(struct rw_semaphore *sem) { int tmp; - smp_wmb(); tmp = atomic_dec_return((atomic_t *)(&sem->count)); if (tmp < -1 && (tmp & RWSEM_ACTIVE_MASK) == 0) rwsem_wake(sem); @@ -87,7 +80,6 @@ static inline void __up_read(struct rw_semaphore *sem) */ static inline void __up_write(struct rw_semaphore *sem) { - smp_wmb(); if (atomic_sub_return(RWSEM_ACTIVE_WRITE_BIAS, (atomic_t *)(&sem->count)) < 0) rwsem_wake(sem); @@ -108,7 +100,6 @@ static inline void __downgrade_write(struct rw_semaphore *sem) { int tmp; - smp_wmb(); tmp = atomic_add_return(-RWSEM_WAITING_BIAS, (atomic_t *)(&sem->count)); if (tmp < 0) rwsem_downgrade_wake(sem); @@ -119,7 +110,6 @@ static inline void __downgrade_write(struct rw_semaphore *sem) */ static inline int rwsem_atomic_update(int delta, struct rw_semaphore *sem) { - smp_mb(); return atomic_add_return(delta, (atomic_t *)(&sem->count)); } diff --git 
diff --git a/arch/xtensa/include/asm/rwsem.h b/arch/xtensa/include/asm/rwsem.h
index 249619e7e7f2..593483f6e1ff 100644
--- a/arch/xtensa/include/asm/rwsem.h
+++ b/arch/xtensa/include/asm/rwsem.h
@@ -29,9 +29,7 @@
  */
 static inline void __down_read(struct rw_semaphore *sem)
 {
-	if (atomic_add_return(1,(atomic_t *)(&sem->count)) > 0)
-		smp_wmb();
-	else
+	if (atomic_add_return(1,(atomic_t *)(&sem->count)) <= 0)
 		rwsem_down_read_failed(sem);
 }
 
@@ -42,7 +40,6 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
 	while ((tmp = sem->count) >= 0) {
 		if (tmp == cmpxchg(&sem->count, tmp,
 				   tmp + RWSEM_ACTIVE_READ_BIAS)) {
-			smp_wmb();
 			return 1;
 		}
 	}
@@ -58,9 +55,7 @@ static inline void __down_write(struct rw_semaphore *sem)
 
 	tmp = atomic_add_return(RWSEM_ACTIVE_WRITE_BIAS,
 				(atomic_t *)(&sem->count));
-	if (tmp == RWSEM_ACTIVE_WRITE_BIAS)
-		smp_wmb();
-	else
+	if (tmp != RWSEM_ACTIVE_WRITE_BIAS)
 		rwsem_down_write_failed(sem);
 }
 
@@ -70,7 +65,6 @@ static inline int __down_write_trylock(struct rw_semaphore *sem)
 
 	tmp = cmpxchg(&sem->count, RWSEM_UNLOCKED_VALUE,
 		      RWSEM_ACTIVE_WRITE_BIAS);
-	smp_wmb();
 	return tmp == RWSEM_UNLOCKED_VALUE;
 }
 
@@ -81,7 +75,6 @@ static inline void __up_read(struct rw_semaphore *sem)
 {
 	int tmp;
 
-	smp_wmb();
 	tmp = atomic_sub_return(1,(atomic_t *)(&sem->count));
 	if (tmp < -1 && (tmp & RWSEM_ACTIVE_MASK) == 0)
 		rwsem_wake(sem);
@@ -92,7 +85,6 @@ static inline void __up_read(struct rw_semaphore *sem)
  */
 static inline void __up_write(struct rw_semaphore *sem)
 {
-	smp_wmb();
 	if (atomic_sub_return(RWSEM_ACTIVE_WRITE_BIAS,
 			      (atomic_t *)(&sem->count)) < 0)
 		rwsem_wake(sem);
@@ -113,7 +105,6 @@ static inline void __downgrade_write(struct rw_semaphore *sem)
 {
 	int tmp;
 
-	smp_wmb();
 	tmp = atomic_add_return(-RWSEM_WAITING_BIAS, (atomic_t *)(&sem->count));
 	if (tmp < 0)
 		rwsem_downgrade_wake(sem);
@@ -124,7 +115,6 @@ static inline void __downgrade_write(struct rw_semaphore *sem)
  */
 static inline int rwsem_atomic_update(int delta, struct rw_semaphore *sem)
 {
-	smp_mb();
 	return atomic_add_return(delta, (atomic_t *)(&sem->count));
 }
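
For reference, an annotated sketch of the guarantee the patch relies on.
This is illustrative only and not part of the patch (the helper name
__down_read_annotated is made up): Documentation/memory-barriers.txt
specifies that any atomic operation which modifies memory and returns a
value, such as atomic_inc_return() or a successful cmpxchg(), implies a
full smp_mb() on each side of the operation.

static inline void __down_read_annotated(struct rw_semaphore *sem)
{
	/*
	 * atomic_inc_return() behaves as if it were bracketed by full
	 * barriers:
	 *
	 *	smp_mb();
	 *	tmp = ++sem->count;	// single atomic RMW step
	 *	smp_mb();
	 *
	 * so the smp_wmb() the old fast path issued after the increment
	 * (a weaker, store-only barrier) added no ordering the atomic
	 * had not already provided. The same reasoning covers the
	 * barriers removed around atomic_{dec,add,sub}_return and the
	 * successful cmpxchg() paths above.
	 */
	if (atomic_inc_return((atomic_t *)(&sem->count)) <= 0)
		rwsem_down_read_failed(sem);
}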