From patchwork Wed Mar 23 15:37:27 2011
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 656381
Date: Wed, 23 Mar 2011 16:37:27 +0100
From: Tejun Heo
To: Peter Zijlstra, Ingo Molnar, Linus Torvalds, Andrew Morton, Chris Mason
Cc: linux-kernel@vger.kernel.org, linux-btrfs@vger.kernel.org
Subject: [RFC PATCH] mutex: Apply adaptive spinning on mutex_trylock()
Message-ID: <20110323153727.GB12003@htj.dyndns.org>

Index: work/kernel/mutex.c
===================================================================
--- work.orig/kernel/mutex.c
+++ work/kernel/mutex.c
@@ -126,39 +126,33 @@ void __sched mutex_unlock(struct mutex *
 EXPORT_SYMBOL(mutex_unlock);
 
-/*
- * Lock a mutex (possibly interruptible), slowpath:
+/**
+ * mutex_spin - optimistic spinning on mutex
+ * @lock: mutex to spin on
+ *
+ * This function implements optimistic spin for acquisition of @lock when
+ * we find that there are no pending waiters and the lock owner is
+ * currently running on a (different) CPU.
+ *
+ * The rationale is that if the lock owner is running, it is likely to
+ * release the lock soon.
+ *
+ * Since this needs the lock owner, and this mutex implementation doesn't
+ * track the owner atomically in the lock field, we need to track it
+ * non-atomically.
+ *
+ * We can't do this for DEBUG_MUTEXES because that relies on wait_lock to
+ * serialize everything.
+ *
+ * CONTEXT:
+ * Preemption disabled.
+ *
+ * RETURNS:
+ * %true if @lock is acquired, %false otherwise.
  */
-static inline int __sched
-__mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
-                    unsigned long ip)
+static inline bool mutex_spin(struct mutex *lock)
 {
-        struct task_struct *task = current;
-        struct mutex_waiter waiter;
-        unsigned long flags;
-
-        preempt_disable();
-        mutex_acquire(&lock->dep_map, subclass, 0, ip);
-
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
-        /*
-         * Optimistic spinning.
-         *
-         * We try to spin for acquisition when we find that there are no
-         * pending waiters and the lock owner is currently running on a
-         * (different) CPU.
-         *
-         * The rationale is that if the lock owner is running, it is likely to
-         * release the lock soon.
-         *
-         * Since this needs the lock owner, and this mutex implementation
-         * doesn't track the owner atomically in the lock field, we need to
-         * track it non-atomically.
-         *
-         * We can't do this for DEBUG_MUTEXES because that relies on wait_lock
-         * to serialize everything.
-         */
-
         for (;;) {
                 struct thread_info *owner;
 
@@ -177,12 +171,8 @@ __mutex_lock_common(struct mutex *lock, 
                 if (owner && !mutex_spin_on_owner(lock, owner))
                         break;
 
-                if (atomic_cmpxchg(&lock->count, 1, 0) == 1) {
-                        lock_acquired(&lock->dep_map, ip);
-                        mutex_set_owner(lock);
-                        preempt_enable();
-                        return 0;
-                }
+                if (atomic_cmpxchg(&lock->count, 1, 0) == 1)
+                        return true;
 
                 /*
                  * When there's no owner, we might have preempted between the
@@ -190,7 +180,7 @@ __mutex_lock_common(struct mutex *lock, 
                  * we're an RT task that will live-lock because we won't let
                  * the owner complete.
                  */
-                if (!owner && (need_resched() || rt_task(task)))
+                if (!owner && (need_resched() || rt_task(current)))
                         break;
 
                 /*
@@ -202,6 +192,30 @@ __mutex_lock_common(struct mutex *lock, 
                 cpu_relax();
         }
 #endif
+        return false;
+}
+
+/*
+ * Lock a mutex (possibly interruptible), slowpath:
+ */
+static inline int __sched
+__mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+                    unsigned long ip)
+{
+        struct task_struct *task = current;
+        struct mutex_waiter waiter;
+        unsigned long flags;
+
+        preempt_disable();
+        mutex_acquire(&lock->dep_map, subclass, 0, ip);
+
+        if (mutex_spin(lock)) {
+                lock_acquired(&lock->dep_map, ip);
+                mutex_set_owner(lock);
+                preempt_enable();
+                return 0;
+        }
+
         spin_lock_mutex(&lock->wait_lock, flags);
 
         debug_mutex_lock_common(lock, &waiter);
@@ -430,6 +444,15 @@ static inline int __mutex_trylock_slowpa
         unsigned long flags;
         int prev;
 
+        preempt_disable();
+
+        if (mutex_spin(lock)) {
+                mutex_set_owner(lock);
+                mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_);
+                preempt_enable();
+                return 1;
+        }
+
         spin_lock_mutex(&lock->wait_lock, flags);
 
         prev = atomic_xchg(&lock->count, -1);
@@ -443,6 +466,7 @@ static inline int __mutex_trylock_slowpa
                 atomic_set(&lock->count, 0);
 
         spin_unlock_mutex(&lock->wait_lock, flags);
+        preempt_enable();
 
         return prev == 1;
 }
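
As a side note for readers who don't live in kernel/mutex.c: the shape that
mutex_spin() gives the trylock slowpath is roughly "spin only while the
current owner is running on a CPU, otherwise fail fast". Below is a minimal
user-space sketch of that shape, assuming C11 atomics. The toy_mutex type,
the owner_is_running() helper and adaptive_trylock() are hypothetical
illustrations only; the real code additionally bails out on
need_resched()/rt_task() and is gated by CONFIG_MUTEX_SPIN_ON_OWNER.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_mutex {
        atomic_int count;               /* 1: unlocked, 0: locked, <0: waiters queued */
        _Atomic(void *) owner;          /* opaque token for the owning thread, or NULL */
};

/* Hypothetical helper: is the thread behind @owner_token currently on a CPU? */
extern bool owner_is_running(void *owner_token);

/*
 * Try to take @lock, spinning only while it looks likely to be released
 * soon (i.e. while its current owner is running); otherwise fail fast,
 * just like a plain trylock.
 */
static bool adaptive_trylock(struct toy_mutex *lock, void *self)
{
        for (;;) {
                void *owner = atomic_load(&lock->owner);

                /* Owner exists but is not running: it won't release soon. */
                if (owner && !owner_is_running(owner))
                        return false;

                /* A 1 -> 0 transition on count means we took the lock. */
                int expected = 1;
                if (atomic_compare_exchange_strong(&lock->count, &expected, 0)) {
                        /* Analogous to mutex_set_owner() in the patch. */
                        atomic_store(&lock->owner, self);
                        return true;
                }

                /* Held, but no owner visible (mid-acquire or mid-release):
                 * give up rather than risk live-locking against it. */
                if (!owner)
                        return false;

                /* A real implementation would cpu_relax()/pause here. */
        }
}

In the patch itself the bookkeeping (lock_acquired(), mutex_set_owner(),
mutex_acquire()) and the preempt_disable()/preempt_enable() pairing stay in
the callers, so mutex_spin() remains a bare "did we get it" predicate that
both __mutex_lock_common() and __mutex_trylock_slowpath() can share.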