Message ID | 20200320180032.523372590@linutronix.de (mailing list archive)
---|---
State | New, archived
Series | x86/entry: Consolidation part II (syscalls)
On Fri, Mar 20, 2020 at 06:59:57PM +0100, Thomas Gleixner wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
>
> The rcu_nmi_enter_common() function can be invoked both in interrupt
> and NMI handlers. If it is invoked from process context (as opposed
> to userspace or idle context) on a nohz_full CPU, it might acquire the
> CPU's leaf rcu_node structure's ->lock. Because this lock is held only
> with interrupts disabled, this is safe from an interrupt handler, but
> doing so from an NMI handler can result in self-deadlock.
>
> This commit therefore adds "irq" to the "if" condition so as to only
> acquire the ->lock from irq handlers or process context, never from
> an NMI handler.
>
> Fixes: 5b14557b073c ("rcu: Avoid tick_dep_set_cpu() misordering")
> Reported-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> Link: https://lkml.kernel.org/r/20200313024046.27622-1-paulmck@kernel.org

Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -816,7 +816,7 @@ static __always_inline void rcu_nmi_ente
 		rcu_cleanup_after_idle();
 		incby = 1;
-	} else if (tick_nohz_full_cpu(rdp->cpu) &&
+	} else if (irq && tick_nohz_full_cpu(rdp->cpu) &&
 		   rdp->dynticks_nmi_nesting == DYNTICK_IRQ_NONIDLE &&
 		   READ_ONCE(rdp->rcu_urgent_qs) && !rdp->rcu_forced_tick) {
 		raw_spin_lock_rcu_node(rdp->mynode);
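
For readers less familiar with the hazard the commit message describes, below is a minimal userspace sketch of why a lock that is only ever held with interrupts disabled is safe to take from an irq handler but not from an NMI handler. All names here (fake_node_lock, irq_path, nmi_path) are hypothetical stand-ins, not the real kernel symbols, and the NMI is modeled by a plain function call rather than a real exception.

```c
/*
 * Hypothetical model, not kernel code: shows the self-deadlock pattern
 * the patch avoids by adding the "irq &&" check before taking ->lock.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_flag fake_node_lock = ATOMIC_FLAG_INIT;

/* Stands in for raw_spin_lock_rcu_node(): spin until the lock is free. */
static void fake_lock(void)
{
	while (atomic_flag_test_and_set(&fake_node_lock))
		; /* spin */
}

static void fake_unlock(void)
{
	atomic_flag_clear(&fake_node_lock);
}

/*
 * Irq-handler path: safe.  The lock is only held with interrupts
 * disabled, so an irq handler can never run on this CPU while the lock
 * is already held here; fake_lock() cannot spin against its own CPU.
 */
static void irq_path(void)
{
	fake_lock();
	/* ... e.g. set a tick dependency ... */
	fake_unlock();
}

/*
 * NMI-handler path: unsafe.  NMIs are not masked by disabling
 * interrupts, so an NMI can arrive while this CPU already holds the
 * lock.  If the NMI handler then calls fake_lock(), it spins forever
 * waiting for a lock that only the interrupted context can release.
 */
static void nmi_path(bool lock_held_by_this_cpu)
{
	if (lock_held_by_this_cpu) {
		printf("NMI would spin here forever -> self-deadlock\n");
		return; /* a real CPU would hang instead of returning */
	}
	fake_lock();
	fake_unlock();
}

int main(void)
{
	irq_path();	/* fine: cannot race with itself */
	nmi_path(true);	/* demonstrates the hazard the patch closes */
	return 0;
}
```

With the "irq &&" condition in place, the NMI path in rcu_nmi_enter_common() simply skips the lock acquisition, which is exactly the behavior the sketch's nmi_path() would need to stay safe.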