Message ID | 20130218124012.26245.44243.stgit@srivatsabhat.in.ibm.com |
---|---|
State | New, archived |
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 4f02b28..e546c98 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -6,6 +6,7 @@
 #include "sched.h"
 
 #include <linux/slab.h>
+#include <linux/cpu.h>
 
 static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun);
 
@@ -26,7 +27,9 @@ static enum hrtimer_restart sched_rt_period_timer(struct hrtimer *timer)
 		if (!overrun)
 			break;
 
+		get_online_cpus_atomic();
 		idle = do_sched_rt_period_timer(rt_b, overrun);
+		put_online_cpus_atomic();
 	}
 
 	return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on preempt_disable() or local_irq_disable() to prevent CPUs
from going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/sched/rt.c |    3 +++
 1 file changed, 3 insertions(+)
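
For context, a minimal sketch (not part of this patch) of how an atomic-context caller would use the get/put_online_cpus_atomic() APIs proposed in this series to keep the online CPU set stable while walking per-CPU data. The per-CPU variable and function name (demo_counter, demo_sum_counters) are made up purely for illustration.

/*
 * Illustrative sketch: protect a walk over online CPUs from CPU
 * offline when running in atomic context, using the hotplug
 * read-side APIs proposed in this series.
 */
#include <linux/cpu.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, demo_counter);

static unsigned long demo_sum_counters(void)
{
	unsigned long sum = 0;
	int cpu;

	/*
	 * We may be called with preemption or interrupts disabled.
	 * Once stop_machine() is removed from the CPU offline path,
	 * that alone no longer prevents CPUs from going offline, so
	 * take the atomic hotplug read-side protection explicitly.
	 */
	get_online_cpus_atomic();
	for_each_online_cpu(cpu)
		sum += per_cpu(demo_counter, cpu);
	put_online_cpus_atomic();

	return sum;
}

The patch above applies the same pattern around do_sched_rt_period_timer(), which is invoked from hrtimer (atomic) context and iterates over CPUs in the root domain's span.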