From patchwork Wed Aug 31 18:15:54 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12961277
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org,
    Zqiang, "Paul E. McKenney"
Subject: [PATCH rcu 1/3] rcu-tasks: Convert RCU_LOCKDEP_WARN() to WARN_ONCE()
Date: Wed, 31 Aug 2022 11:15:54 -0700
Message-Id: <20220831181556.2696404-1-paulmck@kernel.org>
X-Mailer: git-send-email 2.31.1.189.g2e36527f23
In-Reply-To: <20220831181553.GA2696186@paulmck-ThinkPad-P17-Gen-1>
References: <20220831181553.GA2696186@paulmck-ThinkPad-P17-Gen-1>
X-Mailing-List: rcu@vger.kernel.org

From: Zqiang

Kernels built with CONFIG_PROVE_RCU=y and CONFIG_DEBUG_LOCK_ALLOC=y
attempt to emit a warning when the synchronize_rcu_tasks_generic()
function is called during early boot, while the rcu_scheduler_active
variable is still RCU_SCHEDULER_INACTIVE.  However, the warning is
never actually printed because debug_lockdep_rcu_enabled() returns
false, precisely because the rcu_scheduler_active variable is still
equal to RCU_SCHEDULER_INACTIVE.

This commit therefore replaces RCU_LOCKDEP_WARN() with WARN_ONCE(),
which forces these warnings to actually be printed.

Signed-off-by: Zqiang
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tasks.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 83c7e6620d403..469bf2a3b505e 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -560,7 +560,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 static void synchronize_rcu_tasks_generic(struct rcu_tasks *rtp)
 {
 	/* Complain if the scheduler has not started.  */
-	RCU_LOCKDEP_WARN(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
+	WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
 		  "synchronize_rcu_tasks called too soon");

 	// If the grace-period kthread is running, use it.
From patchwork Wed Aug 31 18:15:55 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12961265
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org,
    "Paul E. McKenney"
Subject: [PATCH rcu 2/3] rcu-tasks: Ensure RCU Tasks Trace loops have quiescent states
Date: Wed, 31 Aug 2022 11:15:55 -0700
Message-Id: <20220831181556.2696404-2-paulmck@kernel.org>
X-Mailer: git-send-email 2.31.1.189.g2e36527f23
In-Reply-To: <20220831181553.GA2696186@paulmck-ThinkPad-P17-Gen-1>
References: <20220831181553.GA2696186@paulmck-ThinkPad-P17-Gen-1>
X-Mailing-List: rcu@vger.kernel.org

The RCU Tasks Trace grace-period kthread loops across all CPUs, and
there can be quite a few CPUs, with some commercially available systems
sporting well over a thousand of them.  Some of these loops can feature
IPIs, which can take some time.  This commit therefore places a call to
cond_resched_tasks_rcu_qs() in each such loop.

Link: https://docs.google.com/document/d/1V0YnG1HTWMt9WHJjroiJL9lf-hMrud4v8Fn3fhyY0cI/edit?usp=sharing
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tasks.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 469bf2a3b505e..f5bf6fb430dab 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -1500,6 +1500,7 @@ static void rcu_tasks_trace_pregp_step(struct list_head *hop)
 		if (rcu_tasks_trace_pertask_prep(t, true))
 			trc_add_holdout(t, hop);
 		rcu_read_unlock();
+		cond_resched_tasks_rcu_qs();
 	}

 	// Only after all running tasks have been accounted for is it
@@ -1520,6 +1521,7 @@ static void rcu_tasks_trace_pregp_step(struct list_head *hop)
 			raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
 		}
 		raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
+		cond_resched_tasks_rcu_qs();
 	}

 	// Re-enable CPU hotplug now that the holdout list is populated.
@@ -1619,6 +1621,7 @@ static void check_all_holdout_tasks_trace(struct list_head *hop,
 			trc_del_holdout(t);
 		else if (needreport)
 			show_stalled_task_trace(t, firstreport);
+		cond_resched_tasks_rcu_qs();
 	}

 	// Re-enable CPU hotplug now that the holdout list scan has completed.

From patchwork Wed Aug 31 18:15:56 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12961264
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org,
    Zqiang, "Paul E. McKenney"
Subject: [PATCH rcu 3/3] rcu-tasks: Make RCU Tasks Trace check for userspace execution
Date: Wed, 31 Aug 2022 11:15:56 -0700
Message-Id: <20220831181556.2696404-3-paulmck@kernel.org>
X-Mailer: git-send-email 2.31.1.189.g2e36527f23
In-Reply-To: <20220831181553.GA2696186@paulmck-ThinkPad-P17-Gen-1>
References: <20220831181553.GA2696186@paulmck-ThinkPad-P17-Gen-1>
X-Mailing-List: rcu@vger.kernel.org

From: Zqiang

Userspace execution is a valid quiescent state for RCU Tasks Trace,
but the scheduling-clock interrupt does not currently report such
quiescent states.

Of course, the scheduling-clock interrupt is not strictly speaking
userspace execution.  However, the only way that this code is not in
a quiescent state is if something invoked rcu_read_lock_trace(), and
that would be reflected in the ->trc_reader_nesting field in the
task_struct structure.  Furthermore, this field is checked by
rcu_tasks_trace_qs(), which is invoked by rcu_tasks_qs(), which is in
turn invoked by rcu_note_voluntary_context_switch() in kernels built
with at least one of the RCU Tasks flavors.  It is therefore safe to
invoke rcu_tasks_trace_qs() from rcu_sched_clock_irq().

But rcu_tasks_qs() also invokes rcu_tasks_classic_qs() for RCU Tasks,
which lacks the read-side markers provided by RCU Tasks Trace.
This raises the possibility that an RCU Tasks grace period could start
after the interrupt from userspace execution, but before the call to
rcu_sched_clock_irq().  However, it turns out that this is safe because
the RCU Tasks grace period waits for an RCU grace period, which will
wait for the entire scheduling-clock interrupt handler, including any
RCU Tasks read-side critical section that this handler might contain.

This commit therefore updates the rcu_sched_clock_irq() function's
check for usermode execution and its call to rcu_tasks_classic_qs()
to instead check for both usermode execution and interrupt from idle,
and to instead call rcu_note_voluntary_context_switch().  This
consolidates code and provides faster RCU Tasks Trace reporting of
quiescent states in kernels that take scheduling-clock interrupts
during userspace execution.

[ paulmck: Consolidate checks into rcu_sched_clock_irq(). ]

Signed-off-by: Zqiang
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c        | 4 ++--
 kernel/rcu/tree_plugin.h | 4 ----
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 79aea7df4345e..11d5aefd16961 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2341,8 +2341,8 @@ void rcu_sched_clock_irq(int user)
 	rcu_flavor_sched_clock_irq(user);
 	if (rcu_pending(user))
 		invoke_rcu_core();
-	if (user)
-		rcu_tasks_classic_qs(current, false);
+	if (user || rcu_is_cpu_rrupt_from_idle())
+		rcu_note_voluntary_context_switch(current);
 	lockdep_assert_irqs_disabled();

 	trace_rcu_utilization(TPS("End scheduler-tick"));
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 438ecae6bd7e7..aa64b035c24f2 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -718,9 +718,6 @@ static void rcu_flavor_sched_clock_irq(int user)
 	struct task_struct *t = current;

 	lockdep_assert_irqs_disabled();
-	if (user || rcu_is_cpu_rrupt_from_idle()) {
-		rcu_note_voluntary_context_switch(current);
-	}
 	if (rcu_preempt_depth() > 0 ||
 	    (preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK))) {
 		/* No QS, force context switch if deferred. */
@@ -972,7 +969,6 @@ static void rcu_flavor_sched_clock_irq(int user)
 	 * neither access nor modify, at least not while the
 	 * corresponding CPU is online.
 	 */
-	rcu_qs();
 }
 }