From patchwork Thu Dec 20 07:39:48 2012
X-Patchwork-Submitter: Stephen Boyd
X-Patchwork-Id: 1898741
From: Stephen Boyd
To: Andrew Morton
Subject: [PATCH] lib: atomic64: Initialize locks statically to fix early users
Date: Wed, 19 Dec 2012 23:39:48 -0800
Message-Id: <1355989188-17665-1-git-send-email-sboyd@codeaurora.org>
In-Reply-To: <50D2B94D.10309@codeaurora.org>
Cc: paul@pwsan.com, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
    vaibhav.bedia@ti.com, "Eric W. Biederman", linux-omap@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org

The atomic64 library uses a handful of static spinlocks to implement atomic
64-bit operations on architectures without support for atomic 64-bit
instructions. Unfortunately, the spinlocks are initialized in a pure initcall,
and that is too late for the vfs namespace code, which wants to use atomic64
operations before the initcall is run (introduced by commit 8823c07 "vfs: Add
setns support for the mount namespace").
This leads to BUG messages such as:

 BUG: spinlock bad magic on CPU#0, swapper/0/0
  lock: atomic64_lock+0x240/0x400, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
 [] (unwind_backtrace+0x0/0xf0) from [] (do_raw_spin_lock+0x158/0x198)
 [] (do_raw_spin_lock+0x158/0x198) from [] (_raw_spin_lock_irqsave+0x4c/0x58)
 [] (_raw_spin_lock_irqsave+0x4c/0x58) from [] (atomic64_add_return+0x30/0x5c)
 [] (atomic64_add_return+0x30/0x5c) from [] (alloc_mnt_ns.clone.14+0x44/0xac)
 [] (alloc_mnt_ns.clone.14+0x44/0xac) from [] (create_mnt_ns+0xc/0x54)
 [] (create_mnt_ns+0xc/0x54) from [] (mnt_init+0x120/0x1d4)
 [] (mnt_init+0x120/0x1d4) from [] (vfs_caches_init+0xe0/0x10c)
 [] (vfs_caches_init+0xe0/0x10c) from [] (start_kernel+0x29c/0x300)
 [] (start_kernel+0x29c/0x300) from [<80008078>] (0x80008078)

coming out early on during boot when spinlock debugging is enabled. Fix this
problem by initializing the spinlocks statically at compile time.

Reported-by: Vaibhav Bedia
Tested-by: Vaibhav Bedia
Cc: Eric W. Biederman
Signed-off-by: Stephen Boyd
Tested-by: Tony Lindgren
---
Sorry Andrew, I couldn't find a maintainer of this file so I am picking on you.

 lib/atomic64.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/lib/atomic64.c b/lib/atomic64.c
index 9785378..08a4f06 100644
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -31,7 +31,11 @@
 static union {
 	raw_spinlock_t lock;
 	char pad[L1_CACHE_BYTES];
-} atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;
+} atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
+	[0 ... (NR_LOCKS - 1)] = {
+		.lock = __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
+	},
+};
 
 static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
 {
@@ -173,14 +177,3 @@ int atomic64_add_unless(atomic64_t *v, long long a, long long u)
 	return ret;
 }
 EXPORT_SYMBOL(atomic64_add_unless);
-
-static int init_atomic64_lock(void)
-{
-	int i;
-
-	for (i = 0; i < NR_LOCKS; ++i)
-		raw_spin_lock_init(&atomic64_lock[i].lock);
-	return 0;
-}
-
-pure_initcall(init_atomic64_lock);