From patchwork Mon Nov 23 14:06:05 2009
X-Patchwork-Submitter: Gleb Natapov
X-Patchwork-Id: 62182
From: Gleb Natapov <gleb@redhat.com>
To: kvm@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, avi@redhat.com,
	mingo@elte.hu, a.p.zijlstra@chello.nl, tglx@linutronix.de,
	hpa@zytor.com, riel@redhat.com
Subject: [PATCH v2 10/12] Maintain preemptability count even for !CONFIG_PREEMPT kernels
Date: Mon, 23 Nov 2009 16:06:05 +0200
Message-Id: <1258985167-29178-11-git-send-email-gleb@redhat.com>
In-Reply-To: <1258985167-29178-1-git-send-email-gleb@redhat.com>
References: <1258985167-29178-1-git-send-email-gleb@redhat.com>
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index 6d527ee..484ba38 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -2,9 +2,7 @@
 #define LINUX_HARDIRQ_H
 
 #include <linux/preempt.h>
-#ifdef CONFIG_PREEMPT
 #include <linux/smp_lock.h>
-#endif
 #include <linux/lockdep.h>
 #include <linux/ftrace_irq.h>
 #include <asm/hardirq.h>
@@ -92,13 +90,8 @@
  */
 #define in_nmi()	(preempt_count() & NMI_MASK)
 
-#if defined(CONFIG_PREEMPT)
-# define PREEMPT_INATOMIC_BASE kernel_locked()
-# define PREEMPT_CHECK_OFFSET 1
-#else
-# define PREEMPT_INATOMIC_BASE 0
-# define PREEMPT_CHECK_OFFSET 0
-#endif
+#define PREEMPT_CHECK_OFFSET 1
+#define PREEMPT_INATOMIC_BASE kernel_locked()
 
 /*
  * Are we running in atomic context? WARNING: this macro cannot
@@ -116,12 +109,11 @@
 #define in_atomic_preempt_off() \
 		((preempt_count() & ~PREEMPT_ACTIVE) != PREEMPT_CHECK_OFFSET)
 
+#define IRQ_EXIT_OFFSET (HARDIRQ_OFFSET-1)
 #ifdef CONFIG_PREEMPT
 # define preemptible()	(preempt_count() == 0 && !irqs_disabled())
-# define IRQ_EXIT_OFFSET (HARDIRQ_OFFSET-1)
 #else
 # define preemptible()	0
-# define IRQ_EXIT_OFFSET HARDIRQ_OFFSET
 #endif
 
 #if defined(CONFIG_SMP) || defined(CONFIG_GENERIC_HARDIRQS)
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 72b1a10..7d039ca 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -82,14 +82,24 @@ do { \
 
 #else
 
-#define preempt_disable()		do { } while (0)
-#define preempt_enable_no_resched()	do { } while (0)
-#define preempt_enable()		do { } while (0)
+#define preempt_disable() \
+do { \
+	inc_preempt_count(); \
+	barrier(); \
+} while (0)
+
+#define preempt_enable() \
+do { \
+	barrier(); \
+	dec_preempt_count(); \
+} while (0)
+
+#define preempt_enable_no_resched()	preempt_enable()
 #define preempt_check_resched()		do { } while (0)
 
-#define preempt_disable_notrace()		do { } while (0)
-#define preempt_enable_no_resched_notrace()	do { } while (0)
-#define preempt_enable_notrace()		do { } while (0)
+#define preempt_disable_notrace()		preempt_disable()
+#define preempt_enable_no_resched_notrace()	preempt_enable()
+#define preempt_enable_notrace()		preempt_enable()
 
 #endif
 
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 75e6e60..1895486 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2379,11 +2379,7 @@ extern int _cond_resched(void);
 
 extern int __cond_resched_lock(spinlock_t *lock);
 
-#ifdef CONFIG_PREEMPT
 #define PREEMPT_LOCK_OFFSET	PREEMPT_OFFSET
-#else
-#define PREEMPT_LOCK_OFFSET	0
-#endif
 
 #define cond_resched_lock(lock) ({				\
 	__might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\
diff --git a/kernel/sched.c b/kernel/sched.c
index 3c11ae0..92ce282 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2590,10 +2590,8 @@ void sched_fork(struct task_struct *p, int clone_flags)
 #if defined(CONFIG_SMP) && defined(__ARCH_WANT_UNLOCKED_CTXSW)
 	p->oncpu = 0;
 #endif
-#ifdef CONFIG_PREEMPT
 	/* Want to start with kernel preemption disabled. */
 	task_thread_info(p)->preempt_count = 1;
-#endif
 	plist_node_init(&p->pushable_tasks, MAX_PRIO);
 
 	put_cpu();
@@ -6973,11 +6971,7 @@ void __cpuinit init_idle(struct task_struct *idle, int cpu)
 	spin_unlock_irqrestore(&rq->lock, flags);
 
 	/* Set the preempt count _outside_ the spinlocks! */
-#if defined(CONFIG_PREEMPT)
 	task_thread_info(idle)->preempt_count = (idle->lock_depth >= 0);
-#else
-	task_thread_info(idle)->preempt_count = 0;
-#endif
 	/*
 	 * The idle tasks have their own, simple scheduling class:
 	 */
diff --git a/lib/kernel_lock.c b/lib/kernel_lock.c
index 39f1029..6e2659d 100644
--- a/lib/kernel_lock.c
+++ b/lib/kernel_lock.c
@@ -93,6 +93,7 @@ static inline void __lock_kernel(void)
  */
 static inline void __lock_kernel(void)
 {
+	preempt_disable();
 	_raw_spin_lock(&kernel_flag);
 }
 #endif