From patchwork Thu Mar 24 09:41:19 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 658371
Date: Thu, 24 Mar 2011 10:41:19 +0100
From: Tejun Heo
To: Peter Zijlstra, Ingo Molnar,
    Linus Torvalds, Andrew Morton, Chris Mason
Cc: linux-kernel@vger.kernel.org, linux-btrfs@vger.kernel.org
Subject: [PATCH 1/2] mutex: Separate out mutex_spin()
Message-ID: <20110324094119.GD12038@htj.dyndns.org>
References: <20110323153727.GB12003@htj.dyndns.org>
In-Reply-To: <20110323153727.GB12003@htj.dyndns.org>
Content-Disposition: inline
User-Agent: Mutt/1.5.20 (2009-06-14)
X-Mailing-List: linux-btrfs@vger.kernel.org

Index: work/kernel/mutex.c
===================================================================
--- work.orig/kernel/mutex.c
+++ work/kernel/mutex.c
@@ -126,39 +126,32 @@ void __sched mutex_unlock(struct mutex *
 EXPORT_SYMBOL(mutex_unlock);
 
-/*
- * Lock a mutex (possibly interruptible), slowpath:
+/**
+ * mutex_spin - optimistic spinning on mutex
+ * @lock: mutex to spin on
+ *
+ * This function implements optimistic spin for acquisition of @lock when
+ * the lock owner is currently running on a (different) CPU.
+ *
+ * The rationale is that if the lock owner is running, it is likely to
+ * release the lock soon.
+ *
+ * Since this needs the lock owner, and this mutex implementation doesn't
+ * track the owner atomically in the lock field, we need to track it
+ * non-atomically.
+ *
+ * We can't do this for DEBUG_MUTEXES because that relies on wait_lock to
+ * serialize everything.
+ *
+ * CONTEXT:
+ * Preemption disabled.
+ *
+ * RETURNS:
+ * %true if @lock is acquired, %false otherwise.
 */
-static inline int __sched
-__mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
-                    unsigned long ip)
+static inline bool mutex_spin(struct mutex *lock)
 {
-        struct task_struct *task = current;
-        struct mutex_waiter waiter;
-        unsigned long flags;
-
-        preempt_disable();
-        mutex_acquire(&lock->dep_map, subclass, 0, ip);
-
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
-        /*
-         * Optimistic spinning.
-         *
-         * We try to spin for acquisition when we find that there are no
-         * pending waiters and the lock owner is currently running on a
-         * (different) CPU.
-         *
-         * The rationale is that if the lock owner is running, it is likely to
-         * release the lock soon.
-         *
-         * Since this needs the lock owner, and this mutex implementation
-         * doesn't track the owner atomically in the lock field, we need to
-         * track it non-atomically.
-         *
-         * We can't do this for DEBUG_MUTEXES because that relies on wait_lock
-         * to serialize everything.
-         */
-
         for (;;) {
                 struct thread_info *owner;
 
@@ -177,12 +170,8 @@ __mutex_lock_common(struct mutex *lock,
                 if (owner && !mutex_spin_on_owner(lock, owner))
                         break;
 
-                if (atomic_cmpxchg(&lock->count, 1, 0) == 1) {
-                        lock_acquired(&lock->dep_map, ip);
-                        mutex_set_owner(lock);
-                        preempt_enable();
-                        return 0;
-                }
+                if (atomic_cmpxchg(&lock->count, 1, 0) == 1)
+                        return true;
 
                 /*
                  * When there's no owner, we might have preempted between the
@@ -190,7 +179,7 @@ __mutex_lock_common(struct mutex *lock,
                  * we're an RT task that will live-lock because we won't let
                  * the owner complete.
 */
-                if (!owner && (need_resched() || rt_task(task)))
+                if (!owner && (need_resched() || rt_task(current)))
                         break;
 
                 /*
@@ -202,6 +191,30 @@ __mutex_lock_common(struct mutex *lock,
                 arch_mutex_cpu_relax();
         }
 #endif
+        return false;
+}
+
+/*
+ * Lock a mutex (possibly interruptible), slowpath:
+ */
+static inline int __sched
+__mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+                    unsigned long ip)
+{
+        struct task_struct *task = current;
+        struct mutex_waiter waiter;
+        unsigned long flags;
+
+        preempt_disable();
+        mutex_acquire(&lock->dep_map, subclass, 0, ip);
+
+        if (mutex_spin(lock)) {
+                lock_acquired(&lock->dep_map, ip);
+                mutex_set_owner(lock);
+                preempt_enable();
+                return 0;
+        }
+
         spin_lock_mutex(&lock->wait_lock, flags);
 
         debug_mutex_lock_common(lock, &waiter);