From patchwork Sat Mar 21 11:26:03 2020
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 11450939
Message-Id: <20200321113242.643576700@linutronix.de>
Date: Sat, 21 Mar 2020 12:26:03 +0100
From: Thomas Gleixner
To: LKML
Cc: Peter Zijlstra, Ingo Molnar, Sebastian Siewior, Linus Torvalds,
    Joel Fernandes, Oleg Nesterov, Davidlohr Bueso, Logan Gunthorpe,
    Bjorn Helgaas, Kurt Schwemmer, linux-pci@vger.kernel.org,
    Greg Kroah-Hartman, Felipe Balbi, linux-usb@vger.kernel.org,
    Kalle Valo, "David S. Miller", linux-wireless@vger.kernel.org,
    netdev@vger.kernel.org, Darren Hart, Andy Shevchenko,
    platform-driver-x86@vger.kernel.org, Zhang Rui,
    "Rafael J. Wysocki", linux-pm@vger.kernel.org, Len Brown,
    linux-acpi@vger.kernel.org, kbuild test robot, Nick Hu,
    Greentime Hu, Vincent Chen, Guo Ren, linux-csky@vger.kernel.org,
    Brian Cain, linux-hexagon@vger.kernel.org, Tony Luck, Fenghua Yu,
    linux-ia64@vger.kernel.org, Michal Simek, Michael Ellerman,
    Arnd Bergmann, Geoff Levand, linuxppc-dev@lists.ozlabs.org,
    "Paul E. McKenney", Jonathan Corbet, Randy Dunlap
Subject: [patch V3 19/20] lockdep: Annotate irq_work
References: <20200321112544.878032781@linutronix.de>

From: Sebastian Andrzej Siewior

Mark irq_work items with IRQ_WORK_HARD_IRQ which should be invoked in
hardirq context, even on PREEMPT_RT. irq_work items without this flag
will be invoked in softirq context on PREEMPT_RT.

Set ->irq_config to 1 for the irq_work items which are invoked in
softirq context so lockdep knows that these can safely acquire a
spinlock_t.
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
---
 include/linux/irq_work.h |    2 ++
 include/linux/irqflags.h |   13 +++++++++++++
 kernel/irq_work.c        |    2 ++
 kernel/rcu/tree.c        |    1 +
 kernel/time/tick-sched.c |    1 +
 5 files changed, 19 insertions(+)

--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -18,6 +18,8 @@
 
 /* Doesn't want IPI, wait for tick: */
 #define IRQ_WORK_LAZY		BIT(2)
+/* Run hard IRQ context, even on RT */
+#define IRQ_WORK_HARD_IRQ	BIT(3)
 
 #define IRQ_WORK_CLAIMED	(IRQ_WORK_PENDING | IRQ_WORK_BUSY)
--- a/include/linux/irqflags.h
+++ b/include/linux/irqflags.h
@@ -69,6 +69,17 @@ do { \
 	  current->irq_config = 0;			\
 	} while (0)
 
+# define lockdep_irq_work_enter(__work)					\
+	  do {								\
+		  if (!(atomic_read(&__work->flags) & IRQ_WORK_HARD_IRQ))\
+			current->irq_config = 1;			\
+	  } while (0)
+
+# define lockdep_irq_work_exit(__work)					\
+	  do {								\
+		  if (!(atomic_read(&__work->flags) & IRQ_WORK_HARD_IRQ))\
+			current->irq_config = 0;			\
+	  } while (0)
+
 #else
 # define trace_hardirqs_on()		do { } while (0)
 # define trace_hardirqs_off()		do { } while (0)
@@ -83,6 +94,8 @@ do { \
 # define lockdep_softirq_exit()			do { } while (0)
 # define lockdep_hrtimer_enter(__hrtimer)	do { } while (0)
 # define lockdep_hrtimer_exit(__hrtimer)	do { } while (0)
+# define lockdep_irq_work_enter(__work)		do { } while (0)
+# define lockdep_irq_work_exit(__work)		do { } while (0)
 #endif
 
 #if defined(CONFIG_IRQSOFF_TRACER) || \
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -153,7 +153,9 @@ static void irq_work_run_list(struct lli
 		 */
 		flags = atomic_fetch_andnot(IRQ_WORK_PENDING, &work->flags);
 
+		lockdep_irq_work_enter(work);
 		work->func(work);
+		lockdep_irq_work_exit(work);
 		/*
 		 * Clear the BUSY bit and return to the free state if
 		 * no-one else claimed it meanwhile.
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1113,6 +1113,7 @@ static int rcu_implicit_dynticks_qs(stru
 	    !rdp->rcu_iw_pending && rdp->rcu_iw_gp_seq != rnp->gp_seq &&
 	    (rnp->ffmask & rdp->grpmask)) {
 		init_irq_work(&rdp->rcu_iw, rcu_iw_handler);
+		atomic_set(&rdp->rcu_iw.flags, IRQ_WORK_HARD_IRQ);
 		rdp->rcu_iw_pending = true;
 		rdp->rcu_iw_gp_seq = rnp->gp_seq;
 		irq_work_queue_on(&rdp->rcu_iw, rdp->cpu);
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -245,6 +245,7 @@ static void nohz_full_kick_func(struct i
 
 static DEFINE_PER_CPU(struct irq_work, nohz_full_kick_work) = {
 	.func = nohz_full_kick_func,
+	.flags = ATOMIC_INIT(IRQ_WORK_HARD_IRQ),
 };
 
 /*
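
For reference, a minimal usage sketch, not part of the patch itself: the
handler and item names below are made up, and it assumes the irq_work
layout used in this series, where ::flags is an atomic_t. It mirrors the
two initialization styles in the tick-sched.c and rcu/tree.c hunks above.

#include <linux/irq_work.h>

/*
 * Invoked in hard IRQ context even on PREEMPT_RT, so only
 * raw_spinlock_t may be acquired from here.
 */
static void my_kick_func(struct irq_work *work)
{
}

/* Static initialization, as in the tick-sched.c hunk: */
static struct irq_work my_kick_work = {
	.func	= my_kick_func,
	.flags	= ATOMIC_INIT(IRQ_WORK_HARD_IRQ),
};

/* Dynamic initialization, as in the rcu/tree.c hunk: */
static void my_kick_init(struct irq_work *work)
{
	init_irq_work(work, my_kick_func);
	atomic_set(&work->flags, IRQ_WORK_HARD_IRQ);
}

An item initialized without IRQ_WORK_HARD_IRQ keeps the default
behaviour: on PREEMPT_RT its callback runs in softirq context, and the
new lockdep_irq_work_enter()/lockdep_irq_work_exit() hooks set
->irq_config so lockdep accepts spinlock_t acquisition there.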