From patchwork Tue Feb  2 20:19:27 2016
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 8195381
From: Michal Hocko
To: LKML
Cc: Peter Zijlstra, Ingo Molnar, Thomas Gleixner, "H. Peter Anvin",
    "David S. Miller", Tony Luck, Andrew Morton, Chris Zankel,
    Max Filippov, x86@kernel.org, linux-alpha@vger.kernel.org,
    linux-ia64@vger.kernel.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-xtensa@linux-xtensa.org, linux-arch@vger.kernel.org,
    Michal Hocko
Subject: [RFC 10/12] x86, rwsem: simplify __down_write
Date: Tue, 2 Feb 2016 21:19:27 +0100
Message-Id: <1454444369-2146-11-git-send-email-mhocko@kernel.org>
In-Reply-To: <1454444369-2146-1-git-send-email-mhocko@kernel.org>
References: <1454444369-2146-1-git-send-email-mhocko@kernel.org>

From: Michal Hocko

The x86 implementation of __down_write uses inline asm to optimize the
code flow. This, however, requires the slow path to go through an
additional hop, call_rwsem_down_write_failed, which has to
save_common_regs/restore_common_regs to preserve the calling
convention.
This, however, doesn't add much because the fast path only saves one
register push/pop (%rdx) when compared to the generic implementation:

Before:
0000000000000019 :
  19:	e8 00 00 00 00       	callq  1e
  1e:	55                   	push   %rbp
  1f:	48 ba 01 00 00 00 ff 	movabs $0xffffffff00000001,%rdx
  26:	ff ff ff
  29:	48 89 f8             	mov    %rdi,%rax
  2c:	48 89 e5             	mov    %rsp,%rbp
  2f:	f0 48 0f c1 10       	lock xadd %rdx,(%rax)
  34:	85 d2                	test   %edx,%edx
  36:	74 05                	je     3d
  38:	e8 00 00 00 00       	callq  3d
  3d:	65 48 8b 04 25 00 00 	mov    %gs:0x0,%rax
  44:	00 00
  46:	5d                   	pop    %rbp
  47:	48 89 47 38          	mov    %rax,0x38(%rdi)
  4b:	c3                   	retq

After:
0000000000000019 :
  19:	e8 00 00 00 00       	callq  1e
  1e:	55                   	push   %rbp
  1f:	48 b8 01 00 00 00 ff 	movabs $0xffffffff00000001,%rax
  26:	ff ff ff
  29:	48 89 e5             	mov    %rsp,%rbp
  2c:	53                   	push   %rbx
  2d:	48 89 fb             	mov    %rdi,%rbx
  30:	f0 48 0f c1 07       	lock xadd %rax,(%rdi)
  35:	48 85 c0             	test   %rax,%rax
  38:	74 05                	je     3f
  3a:	e8 00 00 00 00       	callq  3f
  3f:	65 48 8b 04 25 00 00 	mov    %gs:0x0,%rax
  46:	00 00
  48:	48 89 43 38          	mov    %rax,0x38(%rbx)
  4c:	5b                   	pop    %rbx
  4d:	5d                   	pop    %rbp
  4e:	c3                   	retq

This doesn't seem to justify the code obfuscation and complexity. Use
the generic implementation instead.

Signed-off-by: Michal Hocko
---
 arch/x86/include/asm/rwsem.h | 17 +++++------------
 arch/x86/lib/rwsem.S         |  9 ---------
 2 files changed, 5 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/rwsem.h b/arch/x86/include/asm/rwsem.h
index d79a218675bc..1b5e89b3643d 100644
--- a/arch/x86/include/asm/rwsem.h
+++ b/arch/x86/include/asm/rwsem.h
@@ -102,18 +102,11 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
 static inline void __down_write(struct rw_semaphore *sem)
 {
 	long tmp;
-	asm volatile("# beginning down_write\n\t"
-		     LOCK_PREFIX "  xadd      %1,(%2)\n\t"
-		     /* adds 0xffff0001, returns the old value */
-		     "  test " __ASM_SEL(%w1,%k1) "," __ASM_SEL(%w1,%k1) "\n\t"
-		     /* was the active mask 0 before? */
-		     "  jz        1f\n"
-		     "  call call_rwsem_down_write_failed\n"
-		     "1:\n"
-		     "# ending down_write"
-		     : "+m" (sem->count), "=d" (tmp)
-		     : "a" (sem), "1" (RWSEM_ACTIVE_WRITE_BIAS)
-		     : "memory", "cc");
+
+	tmp = atomic_long_add_return(RWSEM_ACTIVE_WRITE_BIAS,
+				     (atomic_long_t *)&sem->count);
+	if (unlikely(tmp != RWSEM_ACTIVE_WRITE_BIAS))
+		rwsem_down_write_failed(sem);
 }
 
 /*
diff --git a/arch/x86/lib/rwsem.S b/arch/x86/lib/rwsem.S
index 40027db99140..ea5c7c177483 100644
--- a/arch/x86/lib/rwsem.S
+++ b/arch/x86/lib/rwsem.S
@@ -57,7 +57,6 @@
  * is also the input argument to these helpers)
  *
  * The following can clobber %rdx because the asm clobbers it:
- *	call_rwsem_down_write_failed
  *	call_rwsem_wake
  * but %rdi, %rsi, %rcx, %r8-r11 always need saving.
  */
@@ -93,14 +92,6 @@ ENTRY(call_rwsem_down_read_failed)
 	ret
 ENDPROC(call_rwsem_down_read_failed)
 
-ENTRY(call_rwsem_down_write_failed)
-	save_common_regs
-	movq %rax,%rdi
-	call rwsem_down_write_failed
-	restore_common_regs
-	ret
-ENDPROC(call_rwsem_down_write_failed)
-
 ENTRY(call_rwsem_wake)
 	/* do nothing if still outstanding active readers */
 	__ASM_HALF_SIZE(dec) %__ASM_HALF_REG(dx)