From patchwork Thu Dec 20 07:39:48 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stephen Boyd
X-Patchwork-Id: 1898701
From: Stephen Boyd
To: Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
	paul@pwsan.com, vaibhav.bedia@ti.com, "Eric W. Biederman"
Subject: [PATCH] lib: atomic64: Initialize locks statically to fix early users
Date: Wed, 19 Dec 2012 23:39:48 -0800
Message-Id: <1355989188-17665-1-git-send-email-sboyd@codeaurora.org>
X-Mailer: git-send-email 1.8.1.rc2
In-Reply-To: <50D2B94D.10309@codeaurora.org>
X-Mailing-List: linux-omap@vger.kernel.org

The atomic64 library uses a handful of static spin locks to implement
atomic 64-bit operations on architectures without support for atomic
64-bit instructions. Unfortunately, the spinlocks are initialized in a
pure initcall and that is too late for the vfs namespace code which
wants to use atomic64 operations before the initcall is run (introduced
by 8823c07 "vfs: Add setns support for the mount namespace"). This
leads to BUG messages such as:

BUG: spinlock bad magic on CPU#0, swapper/0/0
 lock: atomic64_lock+0x240/0x400, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
[] (unwind_backtrace+0x0/0xf0) from [] (do_raw_spin_lock+0x158/0x198)
[] (do_raw_spin_lock+0x158/0x198) from [] (_raw_spin_lock_irqsave+0x4c/0x58)
[] (_raw_spin_lock_irqsave+0x4c/0x58) from [] (atomic64_add_return+0x30/0x5c)
[] (atomic64_add_return+0x30/0x5c) from [] (alloc_mnt_ns.clone.14+0x44/0xac)
[] (alloc_mnt_ns.clone.14+0x44/0xac) from [] (create_mnt_ns+0xc/0x54)
[] (create_mnt_ns+0xc/0x54) from [] (mnt_init+0x120/0x1d4)
[] (mnt_init+0x120/0x1d4) from [] (vfs_caches_init+0xe0/0x10c)
[] (vfs_caches_init+0xe0/0x10c) from [] (start_kernel+0x29c/0x300)
[] (start_kernel+0x29c/0x300) from [<80008078>] (0x80008078)

coming out early on during boot when spinlock debugging is enabled. Fix
this problem by initializing the spinlocks statically at compile time.

Reported-by: Vaibhav Bedia
Tested-by: Vaibhav Bedia
Cc: Eric W. Biederman
Signed-off-by: Stephen Boyd
Tested-by: Tony Lindgren
---
Sorry Andrew, I couldn't find a maintainer of this file so I am picking
on you.
 lib/atomic64.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/lib/atomic64.c b/lib/atomic64.c
index 9785378..08a4f06 100644
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -31,7 +31,11 @@
 static union {
 	raw_spinlock_t lock;
 	char pad[L1_CACHE_BYTES];
-} atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;
+} atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
+	[0 ... (NR_LOCKS - 1)] = {
+		.lock = __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
+	},
+};
 
 static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
 {
@@ -173,14 +177,3 @@ int atomic64_add_unless(atomic64_t *v, long long a, long long u)
 	return ret;
 }
 EXPORT_SYMBOL(atomic64_add_unless);
-
-static int init_atomic64_lock(void)
-{
-	int i;
-
-	for (i = 0; i < NR_LOCKS; ++i)
-		raw_spin_lock_init(&atomic64_lock[i].lock);
-	return 0;
-}
-
-pure_initcall(init_atomic64_lock);