From patchwork Thu Feb 3 12:58:38 2011
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 529291
Subject: Re: [PATCH -v8a 4/7] sched: Add yield_to(task, preempt) functionality
From: Peter Zijlstra
To: Rik van Riel
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Avi Kivity,
 Srivatsa Vaddagiri, Mike Galbraith, Chris Wright, "Nakajima, Jun"
In-Reply-To: <20110201095051.4ddb7738@annuminas.surriel.com>
References: <20110201094433.72829892@annuminas.surriel.com>
 <20110201095051.4ddb7738@annuminas.surriel.com>
Date: Thu, 03 Feb 2011 13:58:38 +0100
Message-ID: <1296737918.26581.366.camel@laptop>

Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -1686,6 +1686,39 @@ static void double_rq_unlock(struct rq *
 	__release(rq2->lock);
 }
 
+#else /* CONFIG_SMP */
+
+/*
+ * double_rq_lock - safely lock two runqueues
+ *
+ * Note this does not disable interrupts like task_rq_lock,
+ * you need to do so manually before calling.
+ */
+static void double_rq_lock(struct rq *rq1, struct rq *rq2)
+	__acquires(rq1->lock)
+	__acquires(rq2->lock)
+{
+	BUG_ON(!irqs_disabled());
+	BUG_ON(rq1 != rq2);
+	raw_spin_lock(&rq1->lock);
+	__acquire(rq2->lock);	/* Fake it out ;) */
+}
+
+/*
+ * double_rq_unlock - safely unlock two runqueues
+ *
+ * Note this does not restore interrupts like task_rq_unlock,
+ * you need to do so manually after calling.
+ */
+static void double_rq_unlock(struct rq *rq1, struct rq *rq2)
+	__releases(rq1->lock)
+	__releases(rq2->lock)
+{
+	BUG_ON(rq1 != rq2);
+	raw_spin_unlock(&rq1->lock);
+	__release(rq2->lock);
+}
+
 #endif
 
 static void calc_load_account_idle(struct rq *this_rq);
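
For completeness, here is a minimal sketch of the calling convention these
helpers expect, loosely based on the yield_to() caller from patch 4/7 of this
series (the actual yield logic is elided). The point is that the caller, not
double_rq_lock()/double_rq_unlock(), handles the interrupt state, as the
comments above require:

	int __sched yield_to(struct task_struct *p, bool preempt)
	{
		struct rq *rq, *p_rq;
		unsigned long flags;

		local_irq_save(flags);		/* double_rq_lock() leaves this to us */
		rq = this_rq();
	again:
		p_rq = task_rq(p);
		double_rq_lock(rq, p_rq);	/* on UP, rq == p_rq, one lock taken */
		if (task_rq(p) != p_rq) {	/* p migrated before we got the lock */
			double_rq_unlock(rq, p_rq);
			goto again;
		}

		/* ... the actual yield/preempt logic goes here ... */

		double_rq_unlock(rq, p_rq);
		local_irq_restore(flags);	/* double_rq_unlock() leaves this to us */
		return 0;
	}

On UP there is only the one runqueue, hence the BUG_ON(rq1 != rq2) and the
sparse __acquire()/__release() calls that keep the lock annotations balanced
without taking a second lock.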
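
For contrast, the SMP-side double_rq_lock() that lives in the #ifdef
CONFIG_SMP branch above (existing code, not part of this patch, reproduced
roughly here) has to take both locks in address order to avoid ABBA deadlock
between two CPUs locking the same pair of runqueues in opposite order:

	static void double_rq_lock(struct rq *rq1, struct rq *rq2)
		__acquires(rq1->lock)
		__acquires(rq2->lock)
	{
		BUG_ON(!irqs_disabled());
		if (rq1 == rq2) {
			raw_spin_lock(&rq1->lock);
			__acquire(rq2->lock);	/* Fake it out ;) */
		} else {
			if (rq1 < rq2) {	/* lock in address order */
				raw_spin_lock(&rq1->lock);
				raw_spin_lock_nested(&rq2->lock, SINGLE_DEPTH_NESTING);
			} else {
				raw_spin_lock(&rq2->lock);
				raw_spin_lock_nested(&rq1->lock, SINGLE_DEPTH_NESTING);
			}
		}
	}

The UP variant added here is simply the degenerate case of the above where
rq1 == rq2 always holds.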