From patchwork Sun Nov 1 11:56:28 2009
X-Patchwork-Submitter: Gleb Natapov
X-Patchwork-Id: 56856
From: Gleb Natapov <gleb@redhat.com>
To: kvm@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/11] Maintain preemptability count even for !CONFIG_PREEMPT kernels
Date: Sun, 1 Nov 2009 13:56:28 +0200
Message-Id: <1257076590-29559-10-git-send-email-gleb@redhat.com>
In-Reply-To: <1257076590-29559-1-git-send-email-gleb@redhat.com>
References: <1257076590-29559-1-git-send-email-gleb@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID: <kvm.vger.kernel.org>

diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index 6d527ee..a6b6040 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -92,12 +92,11 @@
  */
 #define in_nmi()	(preempt_count() & NMI_MASK)
 
+#define PREEMPT_CHECK_OFFSET 1
 #if defined(CONFIG_PREEMPT)
 # define PREEMPT_INATOMIC_BASE kernel_locked()
-# define PREEMPT_CHECK_OFFSET 1
 #else
 # define PREEMPT_INATOMIC_BASE 0
-# define PREEMPT_CHECK_OFFSET 0
 #endif
 
 /*
@@ -116,12 +115,11 @@
 #define in_atomic_preempt_off() \
 		((preempt_count() & ~PREEMPT_ACTIVE) != PREEMPT_CHECK_OFFSET)
 
+#define IRQ_EXIT_OFFSET (HARDIRQ_OFFSET-1)
 #ifdef CONFIG_PREEMPT
 # define preemptible()	(preempt_count() == 0 && !irqs_disabled())
-# define IRQ_EXIT_OFFSET (HARDIRQ_OFFSET-1)
 #else
 # define preemptible()	0
-# define IRQ_EXIT_OFFSET HARDIRQ_OFFSET
 #endif
 
 #if defined(CONFIG_SMP) || defined(CONFIG_GENERIC_HARDIRQS)
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 72b1a10..7d039ca 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -82,14 +82,24 @@ do { \
 
 #else
 
-#define preempt_disable()		do { } while (0)
-#define preempt_enable_no_resched()	do { } while (0)
-#define preempt_enable()		do { } while (0)
+#define preempt_disable() \
+do { \
+	inc_preempt_count(); \
+	barrier(); \
+} while (0)
+
+#define preempt_enable() \
+do { \
+	barrier(); \
+	dec_preempt_count(); \
+} while (0)
+
+#define preempt_enable_no_resched()	preempt_enable()
+
 #define preempt_check_resched()		do { } while (0)
 
-#define preempt_disable_notrace()		do { } while (0)
-#define preempt_enable_no_resched_notrace()	do { } while (0)
-#define preempt_enable_notrace()		do { } while (0)
+#define preempt_disable_notrace()		preempt_disable()
+#define preempt_enable_no_resched_notrace()	preempt_enable()
+#define preempt_enable_notrace()		preempt_enable()
 
 #endif
diff --git a/kernel/sched.c b/kernel/sched.c
index e886895..adb0415 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2561,10 +2561,8 @@ void sched_fork(struct task_struct *p, int clone_flags)
 #if defined(CONFIG_SMP) && defined(__ARCH_WANT_UNLOCKED_CTXSW)
 	p->oncpu = 0;
 #endif
-#ifdef CONFIG_PREEMPT
 	/* Want to start with kernel preemption disabled. */
 	task_thread_info(p)->preempt_count = 1;
-#endif
 	plist_node_init(&p->pushable_tasks, MAX_PRIO);
 
 	put_cpu();
@@ -6944,11 +6942,7 @@ void __cpuinit init_idle(struct task_struct *idle, int cpu)
 	spin_unlock_irqrestore(&rq->lock, flags);
 
 	/* Set the preempt count _outside_ the spinlocks! */
-#if defined(CONFIG_PREEMPT)
 	task_thread_info(idle)->preempt_count = (idle->lock_depth >= 0);
-#else
-	task_thread_info(idle)->preempt_count = 0;
-#endif
 	/*
 	 * The idle tasks have their own, simple scheduling class:
 	 */
diff --git a/lib/kernel_lock.c b/lib/kernel_lock.c
index 39f1029..6e2659d 100644
--- a/lib/kernel_lock.c
+++ b/lib/kernel_lock.c
@@ -93,6 +93,7 @@ static inline void __lock_kernel(void)
  */
 static inline void __lock_kernel(void)
 {
+	preempt_disable();
 	_raw_spin_lock(&kernel_flag);
 }
 #endif