From patchwork Wed Aug 31 18:11:46 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12961235
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney"
Subject: [PATCH rcu 01/25] rcu: Add full-sized polling for get_completed*() and poll_state*()
Date: Wed, 31 Aug 2022 11:11:46 -0700
Message-Id: <20220831181210.2695080-1-paulmck@kernel.org>
In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1>
References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1>

The get_completed_synchronize_rcu() and poll_state_synchronize_rcu()
APIs compress the combined expedited and normal grace-period states into
a single unsigned long, which conserves storage, but can miss grace
periods in certain cases involving overlapping normal and expedited
grace periods.  Missing the occasional grace period is usually not a
problem, but there are use cases that care about each and every grace
period.

This commit therefore adds the first members of the full-state RCU
grace-period polling API, namely the get_completed_synchronize_rcu_full()
and poll_state_synchronize_rcu_full() functions.  These use up to three
times the storage (an rcu_gp_oldstate structure instead of an unsigned
long), but are guaranteed not to miss grace periods, at least in
situations where the single-CPU grace-period optimization does not apply.

Signed-off-by: Paul E.
McKenney --- include/linux/rcupdate.h | 3 ++ include/linux/rcutiny.h | 9 +++++ include/linux/rcutree.h | 8 +++++ kernel/rcu/rcutorture.c | 9 +++++ kernel/rcu/tiny.c | 10 ++++++ kernel/rcu/tree.c | 76 +++++++++++++++++++++++++++++++++++++--- 6 files changed, 111 insertions(+), 4 deletions(-) diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index f527f27e64387..faaa174dfb27c 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -42,7 +42,10 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func); void rcu_barrier_tasks(void); void rcu_barrier_tasks_rude(void); void synchronize_rcu(void); + +struct rcu_gp_oldstate; unsigned long get_completed_synchronize_rcu(void); +void get_completed_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); #ifdef CONFIG_PREEMPT_RCU diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h index 62815c0a2dcef..1fbff8600d92d 100644 --- a/include/linux/rcutiny.h +++ b/include/linux/rcutiny.h @@ -14,10 +14,19 @@ #include /* for HZ */ +struct rcu_gp_oldstate { + unsigned long rgos_norm; +}; + unsigned long get_state_synchronize_rcu(void); unsigned long start_poll_synchronize_rcu(void); bool poll_state_synchronize_rcu(unsigned long oldstate); +static inline bool poll_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) +{ + return poll_state_synchronize_rcu(rgosp->rgos_norm); +} + static inline void cond_synchronize_rcu(unsigned long oldstate) { might_sleep(); diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 47eaa4cb0df72..4ccbc3aa9dc20 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -40,11 +40,19 @@ bool rcu_eqs_special_set(int cpu); void rcu_momentary_dyntick_idle(void); void kfree_rcu_scheduler_running(void); bool rcu_gp_might_be_stalled(void); + +struct rcu_gp_oldstate { + unsigned long rgos_norm; + unsigned long rgos_exp; + unsigned long rgos_polled; +}; + unsigned long start_poll_synchronize_rcu_expedited(void); void cond_synchronize_rcu_expedited(unsigned long oldstate); unsigned long get_state_synchronize_rcu(void); unsigned long start_poll_synchronize_rcu(void); bool poll_state_synchronize_rcu(unsigned long oldstate); +bool poll_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); void cond_synchronize_rcu(unsigned long oldstate); bool rcu_is_idle_cpu(int cpu); diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index d8e1b270a065f..b31e6ed64d1b9 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -336,8 +336,10 @@ struct rcu_torture_ops { void (*cond_sync_exp)(unsigned long oldstate); unsigned long (*get_gp_state)(void); unsigned long (*get_gp_completed)(void); + void (*get_gp_completed_full)(struct rcu_gp_oldstate *rgosp); unsigned long (*start_gp_poll)(void); bool (*poll_gp_state)(unsigned long oldstate); + bool (*poll_gp_state_full)(struct rcu_gp_oldstate *rgosp); void (*cond_sync)(unsigned long oldstate); call_rcu_func_t call; void (*cb_barrier)(void); @@ -503,8 +505,10 @@ static struct rcu_torture_ops rcu_ops = { .exp_sync = synchronize_rcu_expedited, .get_gp_state = get_state_synchronize_rcu, .get_gp_completed = get_completed_synchronize_rcu, + .get_gp_completed_full = get_completed_synchronize_rcu_full, .start_gp_poll = start_poll_synchronize_rcu, .poll_gp_state = poll_state_synchronize_rcu, + .poll_gp_state_full = poll_state_synchronize_rcu_full, .cond_sync = cond_synchronize_rcu, .get_gp_state_exp = get_state_synchronize_rcu, .start_gp_poll_exp = start_poll_synchronize_rcu_expedited, @@ -1212,6 +1216,7 @@ 
rcu_torture_writer(void *arg) bool boot_ended; bool can_expedite = !rcu_gp_is_expedited() && !rcu_gp_is_normal(); unsigned long cookie; + struct rcu_gp_oldstate cookie_full; int expediting = 0; unsigned long gp_snap; int i; @@ -1277,6 +1282,10 @@ rcu_torture_writer(void *arg) } cur_ops->readunlock(idx); } + if (cur_ops->get_gp_completed_full && cur_ops->poll_gp_state_full) { + cur_ops->get_gp_completed_full(&cookie_full); + WARN_ON_ONCE(!cur_ops->poll_gp_state_full(&cookie_full)); + } switch (synctype[torture_random(&rand) % nsynctypes]) { case RTWS_DEF_FREE: rcu_torture_writer_state = RTWS_DEF_FREE; diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c index f0561ee16b9c2..435edc785412c 100644 --- a/kernel/rcu/tiny.c +++ b/kernel/rcu/tiny.c @@ -183,6 +183,16 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func) } EXPORT_SYMBOL_GPL(call_rcu); +/* + * Store a grace-period-counter "cookie". For more information, + * see the Tree RCU header comment. + */ +void get_completed_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) +{ + rgosp->rgos_norm = RCU_GET_STATE_COMPLETED; +} +EXPORT_SYMBOL_GPL(get_completed_synchronize_rcu_full); + /* * Return a grace-period-counter "cookie". For more information, * see the Tree RCU header comment. diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 79aea7df4345e..d47c9b6d81066 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3522,6 +3522,22 @@ void synchronize_rcu(void) } EXPORT_SYMBOL_GPL(synchronize_rcu); +/** + * get_completed_synchronize_rcu_full - Return a full pre-completed polled state cookie + * @rgosp: Place to put state cookie + * + * Stores into @rgosp a value that will always be treated by functions + * like poll_state_synchronize_rcu_full() as a cookie whose grace period + * has already completed. + */ +void get_completed_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) +{ + rgosp->rgos_norm = RCU_GET_STATE_COMPLETED; + rgosp->rgos_exp = RCU_GET_STATE_COMPLETED; + rgosp->rgos_polled = RCU_GET_STATE_COMPLETED; +} +EXPORT_SYMBOL_GPL(get_completed_synchronize_rcu_full); + /** * get_state_synchronize_rcu - Snapshot current RCU state * @@ -3580,7 +3596,7 @@ unsigned long start_poll_synchronize_rcu(void) EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu); /** - * poll_state_synchronize_rcu - Conditionally wait for an RCU grace period + * poll_state_synchronize_rcu - Has the specified RCU grace period completed? * * @oldstate: value from get_state_synchronize_rcu() or start_poll_synchronize_rcu() * @@ -3595,9 +3611,10 @@ EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu); * But counter wrap is harmless. If the counter wraps, we have waited for * more than a billion grace periods (and way more on a 64-bit system!). * Those needing to keep oldstate values for very long time periods - * (many hours even on 32-bit systems) should check them occasionally - * and either refresh them or set a flag indicating that the grace period - * has completed. + * (many hours even on 32-bit systems) should check them occasionally and + * either refresh them or set a flag indicating that the grace period has + * completed. Alternatively, they can use get_completed_synchronize_rcu() + * to get a guaranteed-completed grace-period state. 
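For context, the polled grace-period API that this patch extends is typically used along the following lines. This is an illustrative sketch rather than part of the patch; the foo structure and the foo_retire()/foo_try_reap() helpers are hypothetical names. The full-state variants added here, get_completed_synchronize_rcu_full() and poll_state_synchronize_rcu_full(), follow the same pattern with a struct rcu_gp_oldstate passed by pointer in place of the unsigned long cookie.

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		unsigned long gp_cookie;	/* from get_state_synchronize_rcu() */
		void *data;
	};

	/* Record that a grace period must elapse before fp->data may be freed. */
	static void foo_retire(struct foo *fp)
	{
		fp->gp_cookie = get_state_synchronize_rcu();
	}

	/* Free fp->data only if a full grace period has elapsed since foo_retire(). */
	static void foo_try_reap(struct foo *fp)
	{
		if (!poll_state_synchronize_rcu(fp->gp_cookie))
			return;			/* Not yet safe; poll again later. */
		kfree(fp->data);
		fp->data = NULL;
		/* Leave a cookie that always reads as already completed. */
		fp->gp_cookie = get_completed_synchronize_rcu();
	}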
* * This function provides the same memory-ordering guarantees that * would be provided by a synchronize_rcu() that was invoked at the call @@ -3615,6 +3632,57 @@ bool poll_state_synchronize_rcu(unsigned long oldstate) } EXPORT_SYMBOL_GPL(poll_state_synchronize_rcu); +/** + * poll_state_synchronize_rcu_full - Has the specified RCU grace period completed? + * @rgosp: value from get_state_synchronize_rcu_full() or start_poll_synchronize_rcu_full() + * + * If a full RCU grace period has elapsed since the earlier call from + * which *rgosp was obtained, return @true, otherwise return @false. + * If @false is returned, it is the caller's responsibility to invoke this + * function later on until it does return @true. Alternatively, the caller + * can explicitly wait for a grace period, for example, by passing @rgosp + * to cond_synchronize_rcu() or by directly invoking synchronize_rcu(). + * + * Yes, this function does not take counter wrap into account. + * But counter wrap is harmless. If the counter wraps, we have waited + * for more than a billion grace periods (and way more on a 64-bit + * system!). Those needing to keep rcu_gp_oldstate values for very + * long time periods (many hours even on 32-bit systems) should check + * them occasionally and either refresh them or set a flag indicating + * that the grace period has completed. Alternatively, they can use + * get_completed_synchronize_rcu_full() to get a guaranteed-completed + * grace-period state. + * + * This function provides the same memory-ordering guarantees that would + * be provided by a synchronize_rcu() that was invoked at the call to + * the function that provided @rgosp, and that returned at the end of this + * function. And this guarantee requires that the root rcu_node structure's + * ->gp_seq field be checked instead of that of the rcu_state structure. + * The problem is that the just-ending grace-period's callbacks can be + * invoked between the time that the root rcu_node structure's ->gp_seq + * field is updated and the time that the rcu_state structure's ->gp_seq + * field is updated. Therefore, if a single synchronize_rcu() is to + * cause a subsequent poll_state_synchronize_rcu_full() to return @true, + * then the root rcu_node structure is the one that needs to be polled. + */ +bool poll_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) +{ + struct rcu_node *rnp = rcu_get_root(); + + smp_mb(); // Order against root rcu_node structure grace-period cleanup. + if (rgosp->rgos_norm == RCU_GET_STATE_COMPLETED || + rcu_seq_done_exact(&rnp->gp_seq, rgosp->rgos_norm) || + rgosp->rgos_exp == RCU_GET_STATE_COMPLETED || + rcu_seq_done_exact(&rcu_state.expedited_sequence, rgosp->rgos_exp) || + rgosp->rgos_polled == RCU_GET_STATE_COMPLETED || + rcu_seq_done_exact(&rcu_state.gp_seq_polled, rgosp->rgos_polled)) { + smp_mb(); /* Ensure GP ends before subsequent accesses. */ + return true; + } + return false; +} +EXPORT_SYMBOL_GPL(poll_state_synchronize_rcu_full); + /** * cond_synchronize_rcu - Conditionally wait for an RCU grace period * From patchwork Wed Aug 31 18:11:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12961247 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 96EBFECAAD4 for ; Wed, 31 Aug 2022 18:15:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232437AbiHaSPF (ORCPT ); Wed, 31 Aug 2022 14:15:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57140 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232484AbiHaSN6 (ORCPT ); Wed, 31 Aug 2022 14:13:58 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E286EF9CB; Wed, 31 Aug 2022 11:12:35 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 39FB061C4B; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 96356C433D7; Wed, 31 Aug 2022 18:12:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969531; bh=7dlnoegYQ8t/B7XwDsIx7nzWC3DNxp9xf0llnojAlaM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=R1HyF3L7g4LuKSUQyq29lRkkBcytdRsfI2KI6No6dLDhS5fyeINVRfP9Q5M03b9sR FrlPgChFtmW1/AzeoZYhodTvl+E5riWeO+xqtXGm3QI/dZhOftUhM6c9Lf1PKcpyRi MRut8Cg7sDQ2RYLJyVjzWfWNHyKmid3FPwa2RASckhqL+VLbbj4G1TbaYybQIIjsdX 78+N0FwrOgzCU0O2TZBs6rJw3Wk6C7O01291ImsG+ww4M5EjbLVfz8UiZ8nvbShsQG 2NJl6jBIFwC+My3O4TbJBsoyxguYIv12l3g/B27qJyx4mtCRxGk5SyfpJhnkwAsilG yTAnCwxc0Jenw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 5ADF05C019C; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 02/25] rcu: Add full-sized polling for get_state() Date: Wed, 31 Aug 2022 11:11:47 -0700 Message-Id: <20220831181210.2695080-2-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The get_state_synchronize_rcu() API compresses the combined expedited and normal grace-period states into a single unsigned long, which conserves storage, but can miss grace periods in certain cases involving overlapping normal and expedited grace periods. Missing the occasional grace period is usually not a problem, but there are use cases that care about each and every grace period. This commit therefore adds the next member of the full-state RCU grace-period polling API, namely the get_state_synchronize_rcu_full() function. This uses up to three times the storage (rcu_gp_oldstate structure instead of unsigned long), but is guaranteed not to miss grace periods. Signed-off-by: Paul E. 
McKenney --- include/linux/rcutiny.h | 6 ++++++ include/linux/rcutree.h | 1 + kernel/rcu/rcutorture.c | 10 ++++++---- kernel/rcu/tree.c | 33 +++++++++++++++++++++++++++++++++ 4 files changed, 46 insertions(+), 4 deletions(-) diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h index 1fbff8600d92d..6e299955c4e9a 100644 --- a/include/linux/rcutiny.h +++ b/include/linux/rcutiny.h @@ -19,6 +19,12 @@ struct rcu_gp_oldstate { }; unsigned long get_state_synchronize_rcu(void); + +static inline void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) +{ + rgosp->rgos_norm = get_state_synchronize_rcu(); +} + unsigned long start_poll_synchronize_rcu(void); bool poll_state_synchronize_rcu(unsigned long oldstate); diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 4ccbc3aa9dc20..7b769f1b417aa 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -50,6 +50,7 @@ struct rcu_gp_oldstate { unsigned long start_poll_synchronize_rcu_expedited(void); void cond_synchronize_rcu_expedited(unsigned long oldstate); unsigned long get_state_synchronize_rcu(void); +void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); unsigned long start_poll_synchronize_rcu(void); bool poll_state_synchronize_rcu(unsigned long oldstate); bool poll_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index b31e6ed64d1b9..4f196ebce7f29 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -335,6 +335,7 @@ struct rcu_torture_ops { bool (*poll_gp_state_exp)(unsigned long oldstate); void (*cond_sync_exp)(unsigned long oldstate); unsigned long (*get_gp_state)(void); + void (*get_gp_state_full)(struct rcu_gp_oldstate *rgosp); unsigned long (*get_gp_completed)(void); void (*get_gp_completed_full)(struct rcu_gp_oldstate *rgosp); unsigned long (*start_gp_poll)(void); @@ -504,6 +505,7 @@ static struct rcu_torture_ops rcu_ops = { .sync = synchronize_rcu, .exp_sync = synchronize_rcu_expedited, .get_gp_state = get_state_synchronize_rcu, + .get_gp_state_full = get_state_synchronize_rcu_full, .get_gp_completed = get_completed_synchronize_rcu, .get_gp_completed_full = get_completed_synchronize_rcu_full, .start_gp_poll = start_poll_synchronize_rcu, @@ -1293,12 +1295,12 @@ rcu_torture_writer(void *arg) break; case RTWS_EXP_SYNC: rcu_torture_writer_state = RTWS_EXP_SYNC; - if (cur_ops->get_gp_state && cur_ops->poll_gp_state) - cookie = cur_ops->get_gp_state(); + if (cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full) + cur_ops->get_gp_state_full(&cookie_full); cur_ops->exp_sync(); cur_ops->exp_sync(); - if (cur_ops->get_gp_state && cur_ops->poll_gp_state) - WARN_ON_ONCE(!cur_ops->poll_gp_state(cookie)); + if (cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full) + WARN_ON_ONCE(!cur_ops->poll_gp_state_full(&cookie_full)); rcu_torture_pipe_update(old_rp); break; case RTWS_COND_GET: diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index d47c9b6d81066..3fa79ee78b5b4 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -1755,6 +1755,8 @@ static noinline void rcu_gp_cleanup(void) dump_blkd_tasks(rnp, 10); WARN_ON_ONCE(rnp->qsmask); WRITE_ONCE(rnp->gp_seq, new_gp_seq); + if (!rnp->parent) + smp_mb(); // Order against failing poll_state_synchronize_rcu_full(). 
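The smp_mb() added to rcu_gp_cleanup() above executes only for the root rcu_node structure (!rnp->parent), because poll_state_synchronize_rcu_full() from the previous patch reads the root's ->gp_seq rather than rcu_state.gp_seq. As a hedged caller-side sketch of the snapshot/poll pairing that this barrier orders against (not part of the patch; the helper name is hypothetical):

	static void wait_for_full_grace_period(void)
	{
		struct rcu_gp_oldstate rgos;

		/* Snapshot the normal, expedited, and polled sequence counters. */
		get_state_synchronize_rcu_full(&rgos);
		/* Poll until a full grace period has elapsed since the snapshot. */
		while (!poll_state_synchronize_rcu_full(&rgos))
			schedule_timeout_uninterruptible(1);
	}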
rdp = this_cpu_ptr(&rcu_data); if (rnp == rdp->mynode) needgp = __note_gp_changes(rnp, rdp) || needgp; @@ -3556,6 +3558,37 @@ unsigned long get_state_synchronize_rcu(void) } EXPORT_SYMBOL_GPL(get_state_synchronize_rcu); +/** + * get_state_synchronize_rcu_full - Snapshot RCU state, both normal and expedited + * @rgosp: location to place combined normal/expedited grace-period state + * + * Places the normal and expedited grace-period states in @rgosp. This + * state value can be passed to a later call to cond_synchronize_rcu_full() + * or poll_state_synchronize_rcu_full() to determine whether or not a + * grace period (whether normal or expedited) has elapsed in the meantime. + * The rcu_gp_oldstate structure takes up twice the memory of an unsigned + * long, but is guaranteed to see all grace periods. In contrast, the + * combined state occupies less memory, but can sometimes fail to take + * grace periods into account. + * + * This does not guarantee that the needed grace period will actually + * start. + */ +void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) +{ + struct rcu_node *rnp = rcu_get_root(); + + /* + * Any prior manipulation of RCU-protected data must happen + * before the loads from ->gp_seq and ->expedited_sequence. + */ + smp_mb(); /* ^^^ */ + rgosp->rgos_norm = rcu_seq_snap(&rnp->gp_seq); + rgosp->rgos_exp = rcu_seq_snap(&rcu_state.expedited_sequence); + rgosp->rgos_polled = rcu_seq_snap(&rcu_state.gp_seq_polled); +} +EXPORT_SYMBOL_GPL(get_state_synchronize_rcu_full); + /** * start_poll_synchronize_rcu - Snapshot and start RCU grace period * From patchwork Wed Aug 31 18:11:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961255 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5443BECAAD3 for ; Wed, 31 Aug 2022 18:16:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232618AbiHaSQe (ORCPT ); Wed, 31 Aug 2022 14:16:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56460 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232717AbiHaSPs (ORCPT ); Wed, 31 Aug 2022 14:15:48 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D7C75F2437; Wed, 31 Aug 2022 11:13:09 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id DFB69B82012; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9BE3DC43470; Wed, 31 Aug 2022 18:12:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969531; bh=r7ITDQsToWIAnjiMtcCwlUOAgpHxXqPd8jbQE2qiT+c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mPE5z1X7II7GIIqBcigRZQVmmrBlB6uNqDnEW4nKnAf4uJvCnwW6HzFatBBPP7xGQ KlOSqPx8la/E+xIAGbISGpcBOLdFqR7wfKdeovKdWa8h7llPnc/tQXH9BAJwQir0QL SQa2l6juC0sUSHEw3y9Kh/TQ0VhtRI1MmNt4PTEFN9vNoE6FKB36sOzDqXPzafxYWZ fYllOYs0oYqbZCqwMQPIVEaB+pZ66nXN5b1SypRqWHGAzat4jvDa8ftui7QNlPlRdM qXjq9h8tDwbbvlLMN8ZREzBHMUlT0+tOinUC3a3du4W1HNMJ+rZXh7+T7t6fLUwqon lZ3lHLPsxsSSQ== Received: by 
paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 5D4685C02A9; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 03/25] rcutorture: Abstract synchronous and polled API testing Date: Wed, 31 Aug 2022 11:11:48 -0700 Message-Id: <20220831181210.2695080-3-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit abstracts a do_rtws_sync() function that does synchronous grace-period testing, but also testing the polled API 25% of the time each for the normal and full-state variants of the polled API. Signed-off-by: Paul E. McKenney --- kernel/rcu/rcutorture.c | 48 ++++++++++++++++++++++++++++++----------- 1 file changed, 36 insertions(+), 12 deletions(-) diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index 4f196ebce7f29..c3c94e343eb28 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -1207,6 +1207,40 @@ static void rcu_torture_write_types(void) } } +/* + * Do the specified rcu_torture_writer() synchronous grace period, + * while also testing out the polled APIs. Note well that the single-CPU + * grace-period optimizations must be accounted for. + */ +static void do_rtws_sync(struct torture_random_state *trsp, void (*sync)(void)) +{ + unsigned long cookie; + struct rcu_gp_oldstate cookie_full; + bool dopoll; + bool dopoll_full; + unsigned long r = torture_random(trsp); + + dopoll = cur_ops->get_gp_state && cur_ops->poll_gp_state && !(r & 0x300); + dopoll_full = cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full && !(r & 0xc00); + if (dopoll || dopoll_full) + cpus_read_lock(); + if (dopoll) + cookie = cur_ops->get_gp_state(); + if (dopoll_full) + cur_ops->get_gp_state_full(&cookie_full); + if (dopoll || (!IS_ENABLED(CONFIG_TINY_RCU) && dopoll_full && num_online_cpus() <= 1)) + sync(); + sync(); + WARN_ONCE(dopoll && !cur_ops->poll_gp_state(cookie), + "%s: Cookie check 3 failed %pS() online %*pbl.", + __func__, sync, cpumask_pr_args(cpu_online_mask)); + WARN_ONCE(dopoll_full && !cur_ops->poll_gp_state_full(&cookie_full), + "%s: Cookie check 4 failed %pS() online %*pbl", + __func__, sync, cpumask_pr_args(cpu_online_mask)); + if (dopoll || dopoll_full) + cpus_read_unlock(); +} + /* * RCU torture writer kthread. 
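The "25% of the time each" in the commit log follows from the masks in do_rtws_sync() above: torture_random() returns a pseudo-random value, !(r & 0x300) holds only when bits 8 and 9 are both zero (one chance in four), and !(r & 0xc00) requires bits 10 and 11 to be zero, again one chance in four and independent of the first test. A standalone demonstration of that arithmetic, clearly not kernel code and not part of the patch:

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		unsigned long hits = 0, trials = 1000000;

		srandom(42);
		for (unsigned long i = 0; i < trials; i++)
			if (!(random() & 0x300))	/* bits 8 and 9 both clear */
				hits++;
		printf("pass rate: %.3f (expect about 0.250)\n", (double)hits / trials);
		return 0;
	}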
Repeatedly substitutes a new structure * for that pointed to by rcu_torture_current, freeing the old structure @@ -1295,12 +1329,7 @@ rcu_torture_writer(void *arg) break; case RTWS_EXP_SYNC: rcu_torture_writer_state = RTWS_EXP_SYNC; - if (cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full) - cur_ops->get_gp_state_full(&cookie_full); - cur_ops->exp_sync(); - cur_ops->exp_sync(); - if (cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full) - WARN_ON_ONCE(!cur_ops->poll_gp_state_full(&cookie_full)); + do_rtws_sync(&rand, cur_ops->exp_sync); rcu_torture_pipe_update(old_rp); break; case RTWS_COND_GET: @@ -1339,12 +1368,7 @@ rcu_torture_writer(void *arg) break; case RTWS_SYNC: rcu_torture_writer_state = RTWS_SYNC; - if (cur_ops->get_gp_state && cur_ops->poll_gp_state) - cookie = cur_ops->get_gp_state(); - cur_ops->sync(); - cur_ops->sync(); - if (cur_ops->get_gp_state && cur_ops->poll_gp_state) - WARN_ON_ONCE(!cur_ops->poll_gp_state(cookie)); + do_rtws_sync(&rand, cur_ops->sync); rcu_torture_pipe_update(old_rp); break; default: From patchwork Wed Aug 31 18:11:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961234 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4EFBAECAAD3 for ; Wed, 31 Aug 2022 18:14:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232526AbiHaSN7 (ORCPT ); Wed, 31 Aug 2022 14:13:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57182 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232614AbiHaSNn (ORCPT ); Wed, 31 Aug 2022 14:13:43 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 52110EE68C; Wed, 31 Aug 2022 11:12:35 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 4050D61C52; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9BA2AC433B5; Wed, 31 Aug 2022 18:12:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969531; bh=lGjQj8ju6x6Bd2aOFRSdBdocyUBg+wR7phTP2jd87CM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iD4K5eF/8njcGwvLirBVXLTX0qCqUHoySTfwh+m/Os4zQbvVkIKMlUDye3wHKHPdi fHt5+OR4ndXXaBeqOgPX2vA69kmmvfjSpMRdKJKUQfTBL/CLo3VLyGlUAfGjpLpBrU 3GeNJ2etMtON7Rs2vKc2vURSZvZt99KqjlpX7ExYSrrnLp8kiHvMxN50LNXBsnq63V tW5zsgKxJQ3hXzmpVyxgVlbaCpeqBB7kdvGW4rghQKptev05uzKLT4LcvtZL0zYm0D QrKjYqpOFbiHwUYJUbFbXrC5g5U/EZcl8sMyVKYEE4b/Ks7YkYcnyDq8HAMHp5jNC3 qH6gIqFHnZZ1w== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 5F2645C0513; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. 
McKenney" Subject: [PATCH rcu 04/25] rcutorture: Allow per-RCU-flavor polled double-GP check Date: Wed, 31 Aug 2022 11:11:49 -0700 Message-Id: <20220831181210.2695080-4-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Only vanilla RCU needs a double grace period for its compressed polled grace-period old-state cookie. This commit therefore adds an rcu_torture_ops per-flavor function ->poll_need_2gp to allow this check to be adapted to the RCU flavor under test. A NULL pointer for this function says that doubled grace periods are never needed. Signed-off-by: Paul E. McKenney --- kernel/rcu/rcutorture.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index c3c94e343eb28..f2564c7633a8a 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -341,6 +341,7 @@ struct rcu_torture_ops { unsigned long (*start_gp_poll)(void); bool (*poll_gp_state)(unsigned long oldstate); bool (*poll_gp_state_full)(struct rcu_gp_oldstate *rgosp); + bool (*poll_need_2gp)(bool poll, bool poll_full); void (*cond_sync)(unsigned long oldstate); call_rcu_func_t call; void (*cb_barrier)(void); @@ -492,6 +493,11 @@ static void rcu_sync_torture_init(void) INIT_LIST_HEAD(&rcu_torture_removed); } +static bool rcu_poll_need_2gp(bool poll, bool poll_full) +{ + return poll || (!IS_ENABLED(CONFIG_TINY_RCU) && poll_full && num_online_cpus() <= 1); +} + static struct rcu_torture_ops rcu_ops = { .ttype = RCU_FLAVOR, .init = rcu_sync_torture_init, @@ -511,6 +517,7 @@ static struct rcu_torture_ops rcu_ops = { .start_gp_poll = start_poll_synchronize_rcu, .poll_gp_state = poll_state_synchronize_rcu, .poll_gp_state_full = poll_state_synchronize_rcu_full, + .poll_need_2gp = rcu_poll_need_2gp, .cond_sync = cond_synchronize_rcu, .get_gp_state_exp = get_state_synchronize_rcu, .start_gp_poll_exp = start_poll_synchronize_rcu_expedited, @@ -1228,7 +1235,7 @@ static void do_rtws_sync(struct torture_random_state *trsp, void (*sync)(void)) cookie = cur_ops->get_gp_state(); if (dopoll_full) cur_ops->get_gp_state_full(&cookie_full); - if (dopoll || (!IS_ENABLED(CONFIG_TINY_RCU) && dopoll_full && num_online_cpus() <= 1)) + if (cur_ops->poll_need_2gp && cur_ops->poll_need_2gp(dopoll, dopoll_full)) sync(); sync(); WARN_ONCE(dopoll && !cur_ops->poll_gp_state(cookie), From patchwork Wed Aug 31 18:11:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12961242 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 583A9C0502A for ; Wed, 31 Aug 2022 18:14:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231755AbiHaSOs (ORCPT ); Wed, 31 Aug 2022 14:14:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44926 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232373AbiHaSNz (ORCPT ); Wed, 31 Aug 2022 14:13:55 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DEB4C9BB6F; Wed, 31 Aug 2022 11:12:39 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 03E8FB82272; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A8675C433C1; Wed, 31 Aug 2022 18:12:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969531; bh=DyK363f/EcwyWVPynsUrSkClS2ROgjWL4eqGiB2RdSk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=eo2tPBiPhNxIMOM2BJEyNr8O8h30B02X7HpkvHOi3btUul/SJ2+pkJDC2jKDKwA59 erfM2dLW6Tj0tsQK0RJISHGdGivqwyAx9XiAAtuwISBHBanbb7BNANHJlG9L2hRf3V Qsg9Pd2fEyCFjNF/eTUouJg5OnwCvtoYAtXdAGWiEfEE+fml/puCOZX/Nd+INGSqsf J4PAhgGSUrTM2mflW0sJT+XeP/bn4b1UIZZyK8EchEYC4RkXdblKDHeq2VOiLkxoaW g+g36cu8CEyQDElstwQUsMN3+QF0Ir9qvZS4FPOPQV/xSN5TviCJoEU7lz51qctxDG WBWZKZuoGmmqw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 60FAC5C06A7; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 05/25] rcutorture: Verify RCU reader prevents full polling from completing Date: Wed, 31 Aug 2022 11:11:50 -0700 Message-Id: <20220831181210.2695080-5-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit adds a test to rcu_torture_writer() that verifies that a ->get_gp_state_full() and ->poll_gp_state_full() polled grace-period sequence does not claim that a grace period elapsed within the confines of the corresponding read-side critical section. Signed-off-by: Paul E. McKenney --- kernel/rcu/rcutorture.c | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index f2564c7633a8a..050f4d0a987ff 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -1309,6 +1309,8 @@ rcu_torture_writer(void *arg) atomic_inc(&rcu_torture_wcount[i]); WRITE_ONCE(old_rp->rtort_pipe_count, old_rp->rtort_pipe_count + 1); + + // Make sure readers block polled grace periods. 
if (cur_ops->get_gp_state && cur_ops->poll_gp_state) { idx = cur_ops->readlock(); cookie = cur_ops->get_gp_state(); @@ -1325,9 +1327,20 @@ rcu_torture_writer(void *arg) } cur_ops->readunlock(idx); } - if (cur_ops->get_gp_completed_full && cur_ops->poll_gp_state_full) { - cur_ops->get_gp_completed_full(&cookie_full); - WARN_ON_ONCE(!cur_ops->poll_gp_state_full(&cookie_full)); + if (cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full) { + idx = cur_ops->readlock(); + cur_ops->get_gp_state_full(&cookie_full); + WARN_ONCE(cur_ops->poll_gp_state_full(&cookie_full), + "%s: Cookie check 5 failed %s(%d) online %*pbl\n", + __func__, + rcu_torture_writer_state_getname(), + rcu_torture_writer_state, + cpumask_pr_args(cpu_online_mask)); + if (cur_ops->get_gp_completed_full) { + cur_ops->get_gp_completed_full(&cookie_full); + WARN_ON_ONCE(!cur_ops->poll_gp_state_full(&cookie_full)); + } + cur_ops->readunlock(idx); } switch (synctype[torture_random(&rand) % nsynctypes]) { case RTWS_DEF_FREE: From patchwork Wed Aug 31 18:11:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961248 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9AD15ECAAD1 for ; Wed, 31 Aug 2022 18:15:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232446AbiHaSPO (ORCPT ); Wed, 31 Aug 2022 14:15:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57254 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232561AbiHaSOB (ORCPT ); Wed, 31 Aug 2022 14:14:01 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E48E7F14F8; Wed, 31 Aug 2022 11:12:57 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id D5D8861C12; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F00A5C43145; Wed, 31 Aug 2022 18:12:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=bzKU7PKdCGbkmiMWj3QAJG1+fLAMsGALe9aOVvfa5P0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=O2M52KrAgOKK6OZ4MGlb2A88bli74vfHA09J58qAKAisbM3AQGwog9dcNB7iduFzq ANpP/nsVzZ6tHrV4GgHw2QItlHwe4njcvqDLpiGCYFojw2ydGdOOuY7iEyRYaVy8i+ qzSW38wZfp+bCTBWrP2qA+hg5Zl05fw6BUjm+vQS6egB++gPcMU2aa6sAy3hzRm+Fx D0ZU0kQ5ihmzzpb0X7zjI4HuRpitrE3xrsdp5BXZ7LqaGJqvK5p4zcVdoszI+AlILi +6a3UqZ/H3zyuaiVKIp9WyyGcz/P6ohDr/dzC6X3mchSCiw5oM8F/uOe1dP7Q/JiaR sYnRVS5aj9Fag== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 62BE15C090A; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. 
McKenney" Subject: [PATCH rcu 06/25] rcutorture: Remove redundant RTWS_DEF_FREE check Date: Wed, 31 Aug 2022 11:11:51 -0700 Message-Id: <20220831181210.2695080-6-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This check does nothing because the state at this point in the code because the rcu_torture_writer_state value is guaranteed to instead be RTWS_REPLACE. This commit therefore removes this check. Signed-off-by: Paul E. McKenney --- kernel/rcu/rcutorture.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index 050f4d0a987ff..236bd6b57277f 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -1314,8 +1314,7 @@ rcu_torture_writer(void *arg) if (cur_ops->get_gp_state && cur_ops->poll_gp_state) { idx = cur_ops->readlock(); cookie = cur_ops->get_gp_state(); - WARN_ONCE(rcu_torture_writer_state != RTWS_DEF_FREE && - cur_ops->poll_gp_state(cookie), + WARN_ONCE(cur_ops->poll_gp_state(cookie), "%s: Cookie check 1 failed %s(%d) %lu->%lu\n", __func__, rcu_torture_writer_state_getname(), From patchwork Wed Aug 31 18:11:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961238 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 484B2ECAAD1 for ; Wed, 31 Aug 2022 18:14:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232626AbiHaSOc (ORCPT ); Wed, 31 Aug 2022 14:14:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56460 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232660AbiHaSNt (ORCPT ); Wed, 31 Aug 2022 14:13:49 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E5DC9EE688; Wed, 31 Aug 2022 11:12:40 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id A6C93B82278; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0302CC43149; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=uX+u9GKdyE2AQiItmDVWhHTOMh+xCLFHSEjTFY425bg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qL+Jgv4Y15uAqUDdI9TZgl2bKNZ1UONu0gGuQfEcvQvSGYthPgabeWJaHJXRPmNLu J9Y/Do3bFssP0BHL16SwDI4m2LyIBKSOEzi41S4PgrCMFvTvjf1yrRXWofdvGC4yvh SMGB+G+lKyEyu4ufh0Ou0WgeBAGWIFL9KQlx70t6DUFq7iYgnRUaGvw0XmnogVpubi NrvFwWeMg+KHo3hDNQJ/nGRE4gPh0GWd/T860vm04EaPRe2O4mq3ggbYVgj2tw2L68 1LHS3CrkO4Wr/3aaSK7sMiEBN8tecIPix3gv1J5DF1VeEJbXS574Xx59gXdiAxgSkZ c0Qa82ob0dJhg== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 647A55C0950; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. 
McKenney" Subject: [PATCH rcu 07/25] rcutorture: Verify long-running reader prevents full polling from completing Date: Wed, 31 Aug 2022 11:11:52 -0700 Message-Id: <20220831181210.2695080-7-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit adds full-state polling checks to accompany the old-style polling checks in the rcu_torture_one_read() function. If a polling cycle within an RCU reader completes, a WARN_ONCE() is triggered. Signed-off-by: Paul E. McKenney --- kernel/rcu/rcutorture.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index 236bd6b57277f..3d85420108477 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -1770,6 +1770,7 @@ rcutorture_loop_extend(int *readstate, struct torture_random_state *trsp, static bool rcu_torture_one_read(struct torture_random_state *trsp, long myid) { unsigned long cookie; + struct rcu_gp_oldstate cookie_full; int i; unsigned long started; unsigned long completed; @@ -1787,6 +1788,8 @@ static bool rcu_torture_one_read(struct torture_random_state *trsp, long myid) rcutorture_one_extend(&readstate, newstate, trsp, rtrsp++); if (cur_ops->get_gp_state && cur_ops->poll_gp_state) cookie = cur_ops->get_gp_state(); + if (cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full) + cur_ops->get_gp_state_full(&cookie_full); started = cur_ops->get_gp_seq(); ts = rcu_trace_clock_local(); p = rcu_dereference_check(rcu_torture_current, @@ -1827,6 +1830,13 @@ static bool rcu_torture_one_read(struct torture_random_state *trsp, long myid) rcu_torture_writer_state_getname(), rcu_torture_writer_state, cookie, cur_ops->get_gp_state()); + if (cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full) + WARN_ONCE(cur_ops->poll_gp_state_full(&cookie_full), + "%s: Cookie check 6 failed %s(%d) online %*pbl\n", + __func__, + rcu_torture_writer_state_getname(), + rcu_torture_writer_state, + cpumask_pr_args(cpu_online_mask)); rcutorture_one_extend(&readstate, 0, trsp, rtrsp); WARN_ON_ONCE(readstate); // This next splat is expected behavior if leakpointer, especially From patchwork Wed Aug 31 18:11:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12961254 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E718EECAAD3 for ; Wed, 31 Aug 2022 18:16:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232698AbiHaSQR (ORCPT ); Wed, 31 Aug 2022 14:16:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32784 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232695AbiHaSPG (ORCPT ); Wed, 31 Aug 2022 14:15:06 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AB3E0F23FD; Wed, 31 Aug 2022 11:12:56 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 847DBB82275; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F0F04C43147; Wed, 31 Aug 2022 18:12:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=TRn3bzJI59okW4J4LYZ5Q0KKoRbfj7YEJocZB+cVgck=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=MxGeMT8xlruhW0/JeJFwnsgkbaUmJCu/LS6a/ac+dTWnN2/Re4ixz0hN37i0S3uQS OmQRe0Qb5Rq9VcyBCTuKxJUFGSu9Wqwqts8iDoeEjZhJipoGktoCFI34GgnHXuvMYk pj6qZluI93xkSg+TNMoeA+HMjA0VA0H0iJ2VWQk5RpQdxS3u2QCXpqv53pILZBvbr8 eTy0aTS+Cjs63ytB6ND5CwU5OQO5FeMYhpUKP3H2nuDJYAoLjCUTmT/E4YPND5samO BFrCAYI8bDBVZ5Iy5X+RPwAj9R+zxd2MzWz49LIkqbWKDGaq2D4rJN5OJe6dKem1hj 8FbawG/3sZHew== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 666545C0981; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 08/25] rcu: Add full-sized polling for start_poll() Date: Wed, 31 Aug 2022 11:11:53 -0700 Message-Id: <20220831181210.2695080-8-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The start_poll_synchronize_rcu() API compresses the combined expedited and normal grace-period states into a single unsigned long, which conserves storage, but can miss grace periods in certain cases involving overlapping normal and expedited grace periods. Missing the occasional grace period is usually not a problem, but there are use cases that care about each and every grace period. This commit therefore adds the next member of the full-state RCU grace-period polling API, namely the start_poll_synchronize_rcu_full() function. This uses up to three times the storage (rcu_gp_oldstate structure instead of unsigned long), but is guaranteed not to miss grace periods. Signed-off-by: Paul E. 
McKenney --- include/linux/rcutiny.h | 6 +++++ include/linux/rcutree.h | 1 + kernel/rcu/rcutorture.c | 49 +++++++++++++++++++++++++++------- kernel/rcu/tree.c | 58 ++++++++++++++++++++++++++++++++--------- 4 files changed, 92 insertions(+), 22 deletions(-) diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h index 6e299955c4e9a..6bc30e46a819c 100644 --- a/include/linux/rcutiny.h +++ b/include/linux/rcutiny.h @@ -26,6 +26,12 @@ static inline void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) } unsigned long start_poll_synchronize_rcu(void); + +static inline void start_poll_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) +{ + rgosp->rgos_norm = start_poll_synchronize_rcu(); +} + bool poll_state_synchronize_rcu(unsigned long oldstate); static inline bool poll_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 7b769f1b417aa..8f2e0f0b26f63 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -52,6 +52,7 @@ void cond_synchronize_rcu_expedited(unsigned long oldstate); unsigned long get_state_synchronize_rcu(void); void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); unsigned long start_poll_synchronize_rcu(void); +void start_poll_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); bool poll_state_synchronize_rcu(unsigned long oldstate); bool poll_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); void cond_synchronize_rcu(unsigned long oldstate); diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index 3d85420108477..68387ccc7ddfc 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -88,6 +88,7 @@ torture_param(bool, gp_exp, false, "Use expedited GP wait primitives"); torture_param(bool, gp_normal, false, "Use normal (non-expedited) GP wait primitives"); torture_param(bool, gp_poll, false, "Use polling GP wait primitives"); torture_param(bool, gp_poll_exp, false, "Use polling expedited GP wait primitives"); +torture_param(bool, gp_poll_full, false, "Use polling full-state GP wait primitives"); torture_param(bool, gp_sync, false, "Use synchronous GP wait primitives"); torture_param(int, irqreader, 1, "Allow RCU readers from irq handlers"); torture_param(int, leakpointer, 0, "Leak pointer dereferences from readers"); @@ -198,12 +199,14 @@ static int rcu_torture_writer_state; #define RTWS_COND_SYNC 7 #define RTWS_COND_SYNC_EXP 8 #define RTWS_POLL_GET 9 -#define RTWS_POLL_GET_EXP 10 -#define RTWS_POLL_WAIT 11 -#define RTWS_POLL_WAIT_EXP 12 -#define RTWS_SYNC 13 -#define RTWS_STUTTER 14 -#define RTWS_STOPPING 15 +#define RTWS_POLL_GET_FULL 10 +#define RTWS_POLL_GET_EXP 11 +#define RTWS_POLL_WAIT 12 +#define RTWS_POLL_WAIT_FULL 13 +#define RTWS_POLL_WAIT_EXP 14 +#define RTWS_SYNC 15 +#define RTWS_STUTTER 16 +#define RTWS_STOPPING 17 static const char * const rcu_torture_writer_state_names[] = { "RTWS_FIXED_DELAY", "RTWS_DELAY", @@ -215,8 +218,10 @@ static const char * const rcu_torture_writer_state_names[] = { "RTWS_COND_SYNC", "RTWS_COND_SYNC_EXP", "RTWS_POLL_GET", + "RTWS_POLL_GET_FULL", "RTWS_POLL_GET_EXP", "RTWS_POLL_WAIT", + "RTWS_POLL_WAIT_FULL", "RTWS_POLL_WAIT_EXP", "RTWS_SYNC", "RTWS_STUTTER", @@ -339,6 +344,7 @@ struct rcu_torture_ops { unsigned long (*get_gp_completed)(void); void (*get_gp_completed_full)(struct rcu_gp_oldstate *rgosp); unsigned long (*start_gp_poll)(void); + void (*start_gp_poll_full)(struct rcu_gp_oldstate *rgosp); bool (*poll_gp_state)(unsigned long oldstate); bool (*poll_gp_state_full)(struct 
rcu_gp_oldstate *rgosp); bool (*poll_need_2gp)(bool poll, bool poll_full); @@ -515,6 +521,7 @@ static struct rcu_torture_ops rcu_ops = { .get_gp_completed = get_completed_synchronize_rcu, .get_gp_completed_full = get_completed_synchronize_rcu_full, .start_gp_poll = start_poll_synchronize_rcu, + .start_gp_poll_full = start_poll_synchronize_rcu_full, .poll_gp_state = poll_state_synchronize_rcu, .poll_gp_state_full = poll_state_synchronize_rcu_full, .poll_need_2gp = rcu_poll_need_2gp, @@ -1163,13 +1170,13 @@ static void rcu_torture_write_types(void) { bool gp_cond1 = gp_cond, gp_cond_exp1 = gp_cond_exp, gp_exp1 = gp_exp; bool gp_poll_exp1 = gp_poll_exp, gp_normal1 = gp_normal, gp_poll1 = gp_poll; - bool gp_sync1 = gp_sync; + bool gp_poll_full1 = gp_poll_full, gp_sync1 = gp_sync; /* Initialize synctype[] array. If none set, take default. */ if (!gp_cond1 && !gp_cond_exp1 && !gp_exp1 && !gp_poll_exp && - !gp_normal1 && !gp_poll1 && !gp_sync1) + !gp_normal1 && !gp_poll1 && !gp_poll_full1 && !gp_sync1) gp_cond1 = gp_cond_exp1 = gp_exp1 = gp_poll_exp1 = - gp_normal1 = gp_poll1 = gp_sync1 = true; + gp_normal1 = gp_poll1 = gp_poll_full1 = gp_sync1 = true; if (gp_cond1 && cur_ops->get_gp_state && cur_ops->cond_sync) { synctype[nsynctypes++] = RTWS_COND_GET; pr_info("%s: Testing conditional GPs.\n", __func__); @@ -1200,6 +1207,12 @@ static void rcu_torture_write_types(void) } else if (gp_poll && (!cur_ops->start_gp_poll || !cur_ops->poll_gp_state)) { pr_alert("%s: gp_poll without primitives.\n", __func__); } + if (gp_poll_full1 && cur_ops->start_gp_poll_full && cur_ops->poll_gp_state_full) { + synctype[nsynctypes++] = RTWS_POLL_GET_FULL; + pr_info("%s: Testing polling full-state GPs.\n", __func__); + } else if (gp_poll_full && (!cur_ops->start_gp_poll_full || !cur_ops->poll_gp_state_full)) { + pr_alert("%s: gp_poll_full without primitives.\n", __func__); + } if (gp_poll_exp1 && cur_ops->start_gp_poll_exp && cur_ops->poll_gp_state_exp) { synctype[nsynctypes++] = RTWS_POLL_GET_EXP; pr_info("%s: Testing polling expedited GPs.\n", __func__); @@ -1262,6 +1275,7 @@ rcu_torture_writer(void *arg) struct rcu_gp_oldstate cookie_full; int expediting = 0; unsigned long gp_snap; + struct rcu_gp_oldstate gp_snap_full; int i; int idx; int oldnice = task_nice(current); @@ -1376,6 +1390,15 @@ rcu_torture_writer(void *arg) &rand); rcu_torture_pipe_update(old_rp); break; + case RTWS_POLL_GET_FULL: + rcu_torture_writer_state = RTWS_POLL_GET_FULL; + cur_ops->start_gp_poll_full(&gp_snap_full); + rcu_torture_writer_state = RTWS_POLL_WAIT_FULL; + while (!cur_ops->poll_gp_state_full(&gp_snap_full)) + torture_hrtimeout_jiffies(torture_random(&rand) % 16, + &rand); + rcu_torture_pipe_update(old_rp); + break; case RTWS_POLL_GET_EXP: rcu_torture_writer_state = RTWS_POLL_GET_EXP; gp_snap = cur_ops->start_gp_poll_exp(); @@ -1454,6 +1477,7 @@ static int rcu_torture_fakewriter(void *arg) { unsigned long gp_snap; + struct rcu_gp_oldstate gp_snap_full; DEFINE_TORTURE_RANDOM(rand); VERBOSE_TOROUT_STRING("rcu_torture_fakewriter task started"); @@ -1499,6 +1523,13 @@ rcu_torture_fakewriter(void *arg) &rand); } break; + case RTWS_POLL_GET_FULL: + cur_ops->start_gp_poll_full(&gp_snap_full); + while (!cur_ops->poll_gp_state_full(&gp_snap_full)) { + torture_hrtimeout_jiffies(torture_random(&rand) % 16, + &rand); + } + break; case RTWS_POLL_GET_EXP: gp_snap = cur_ops->start_gp_poll_exp(); while (!cur_ops->poll_gp_state_exp(gp_snap)) { diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 3fa79ee78b5b4..89572385fd1aa 100644 --- 
a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3589,22 +3589,13 @@ void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) } EXPORT_SYMBOL_GPL(get_state_synchronize_rcu_full); -/** - * start_poll_synchronize_rcu - Snapshot and start RCU grace period - * - * Returns a cookie that is used by a later call to cond_synchronize_rcu() - * or poll_state_synchronize_rcu() to determine whether or not a full - * grace period has elapsed in the meantime. If the needed grace period - * is not already slated to start, notifies RCU core of the need for that - * grace period. - * - * Interrupts must be enabled for the case where it is necessary to awaken - * the grace-period kthread. +/* + * Helper function for start_poll_synchronize_rcu() and + * start_poll_synchronize_rcu_full(). */ -unsigned long start_poll_synchronize_rcu(void) +static void start_poll_synchronize_rcu_common(void) { unsigned long flags; - unsigned long gp_seq = get_state_synchronize_rcu(); bool needwake; struct rcu_data *rdp; struct rcu_node *rnp; @@ -3624,10 +3615,51 @@ unsigned long start_poll_synchronize_rcu(void) raw_spin_unlock_irqrestore_rcu_node(rnp, flags); if (needwake) rcu_gp_kthread_wake(); +} + +/** + * start_poll_synchronize_rcu - Snapshot and start RCU grace period + * + * Returns a cookie that is used by a later call to cond_synchronize_rcu() + * or poll_state_synchronize_rcu() to determine whether or not a full + * grace period has elapsed in the meantime. If the needed grace period + * is not already slated to start, notifies RCU core of the need for that + * grace period. + * + * Interrupts must be enabled for the case where it is necessary to awaken + * the grace-period kthread. + */ +unsigned long start_poll_synchronize_rcu(void) +{ + unsigned long gp_seq = get_state_synchronize_rcu(); + + start_poll_synchronize_rcu_common(); return gp_seq; } EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu); +/** + * start_poll_synchronize_rcu_full - Take a full snapshot and start RCU grace period + * @rgosp: value from get_state_synchronize_rcu_full() or start_poll_synchronize_rcu_full() + * + * Places the normal and expedited grace-period states in *@rgos. This + * state value can be passed to a later call to cond_synchronize_rcu_full() + * or poll_state_synchronize_rcu_full() to determine whether or not a + * grace period (whether normal or expedited) has elapsed in the meantime. + * If the needed grace period is not already slated to start, notifies + * RCU core of the need for that grace period. + * + * Interrupts must be enabled for the case where it is necessary to awaken + * the grace-period kthread. + */ +void start_poll_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) +{ + get_state_synchronize_rcu_full(rgosp); + + start_poll_synchronize_rcu_common(); +} +EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu_full); + /** * poll_state_synchronize_rcu - Has the specified RCU grace period completed? * From patchwork Wed Aug 31 18:11:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12961243 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7951FECAAD4 for ; Wed, 31 Aug 2022 18:14:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232008AbiHaSOt (ORCPT ); Wed, 31 Aug 2022 14:14:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56782 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232421AbiHaSN4 (ORCPT ); Wed, 31 Aug 2022 14:13:56 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 415A8F23D4; Wed, 31 Aug 2022 11:12:39 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 8CE50B82277; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F011BC43146; Wed, 31 Aug 2022 18:12:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=vF0tPndtNPFUTW8t5cP+pHfcosNYhFIVTBzI1A/Sdpo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=H9x0ESTIYbXBTubuBxBm8neoBZ2+037u6KAm6o3k0ISfKA50jlzhsFJPenytmm8iE NBb53516lu5CMxPCbY1I/Q0flSjmblZzYRZiMfeX/xLwV74CjTM0SznSZZYjSKTNmx /UuoaB2wh+CuqLRQ5a/xuxwuCs2qSlG6q2oNLC1+l9136RvMX5OGD29gJoi5Bq9OsY YW7wjKxYBrOGR4TjRDt1GkpsJQXhAZyyPZB7572PWQZWxOTawaudvXxf0KftfoG7RR rROJIh8HXrMl/PK8NMAbKFt2vI0JkYNKBlGOB2Q1Wo009pianjmYoKviweJqzDaN3O S1Vj01cYl99Yw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 684725C09C2; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 09/25] rcu: Add full-sized polling for start_poll_expedited() Date: Wed, 31 Aug 2022 11:11:54 -0700 Message-Id: <20220831181210.2695080-9-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The start_poll_synchronize_rcu_expedited() API compresses the combined expedited and normal grace-period states into a single unsigned long, which conserves storage, but can miss grace periods in certain cases involving overlapping normal and expedited grace periods. Missing the occasional grace period is usually not a problem, but there are use cases that care about each and every grace period. This commit therefore adds yet another member of the full-state RCU grace-period polling API, which is the start_poll_synchronize_rcu_expedited_full() function. This uses up to three times the storage (rcu_gp_oldstate structure instead of unsigned long), but is guaranteed not to miss grace periods. [ paulmck: Apply feedback from kernel test robot and Julia Lawall. ] Signed-off-by: Paul E. 
McKenney --- include/linux/rcutiny.h | 5 ++++ include/linux/rcutree.h | 1 + kernel/rcu/rcutorture.c | 51 +++++++++++++++++++++++++++++++++-------- kernel/rcu/tree_exp.h | 18 +++++++++++++++ 4 files changed, 65 insertions(+), 10 deletions(-) diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h index 6bc30e46a819c..653e35777a99b 100644 --- a/include/linux/rcutiny.h +++ b/include/linux/rcutiny.h @@ -49,6 +49,11 @@ static inline unsigned long start_poll_synchronize_rcu_expedited(void) return start_poll_synchronize_rcu(); } +static inline void start_poll_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp) +{ + rgosp->rgos_norm = start_poll_synchronize_rcu_expedited(); +} + static inline void cond_synchronize_rcu_expedited(unsigned long oldstate) { cond_synchronize_rcu(oldstate); diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 8f2e0f0b26f63..7151fd8617365 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -48,6 +48,7 @@ struct rcu_gp_oldstate { }; unsigned long start_poll_synchronize_rcu_expedited(void); +void start_poll_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp); void cond_synchronize_rcu_expedited(unsigned long oldstate); unsigned long get_state_synchronize_rcu(void); void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index 68387ccc7ddfc..f9ca33555debf 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -89,6 +89,7 @@ torture_param(bool, gp_normal, false, "Use normal (non-expedited) GP wait primit torture_param(bool, gp_poll, false, "Use polling GP wait primitives"); torture_param(bool, gp_poll_exp, false, "Use polling expedited GP wait primitives"); torture_param(bool, gp_poll_full, false, "Use polling full-state GP wait primitives"); +torture_param(bool, gp_poll_exp_full, false, "Use polling full-state expedited GP wait primitives"); torture_param(bool, gp_sync, false, "Use synchronous GP wait primitives"); torture_param(int, irqreader, 1, "Allow RCU readers from irq handlers"); torture_param(int, leakpointer, 0, "Leak pointer dereferences from readers"); @@ -201,12 +202,14 @@ static int rcu_torture_writer_state; #define RTWS_POLL_GET 9 #define RTWS_POLL_GET_FULL 10 #define RTWS_POLL_GET_EXP 11 -#define RTWS_POLL_WAIT 12 -#define RTWS_POLL_WAIT_FULL 13 -#define RTWS_POLL_WAIT_EXP 14 -#define RTWS_SYNC 15 -#define RTWS_STUTTER 16 -#define RTWS_STOPPING 17 +#define RTWS_POLL_GET_EXP_FULL 12 +#define RTWS_POLL_WAIT 13 +#define RTWS_POLL_WAIT_FULL 14 +#define RTWS_POLL_WAIT_EXP 15 +#define RTWS_POLL_WAIT_EXP_FULL 16 +#define RTWS_SYNC 17 +#define RTWS_STUTTER 18 +#define RTWS_STOPPING 19 static const char * const rcu_torture_writer_state_names[] = { "RTWS_FIXED_DELAY", "RTWS_DELAY", @@ -220,9 +223,11 @@ static const char * const rcu_torture_writer_state_names[] = { "RTWS_POLL_GET", "RTWS_POLL_GET_FULL", "RTWS_POLL_GET_EXP", + "RTWS_POLL_GET_EXP_FULL", "RTWS_POLL_WAIT", "RTWS_POLL_WAIT_FULL", "RTWS_POLL_WAIT_EXP", + "RTWS_POLL_WAIT_EXP_FULL", "RTWS_SYNC", "RTWS_STUTTER", "RTWS_STOPPING", @@ -337,6 +342,7 @@ struct rcu_torture_ops { void (*exp_sync)(void); unsigned long (*get_gp_state_exp)(void); unsigned long (*start_gp_poll_exp)(void); + void (*start_gp_poll_exp_full)(struct rcu_gp_oldstate *rgosp); bool (*poll_gp_state_exp)(unsigned long oldstate); void (*cond_sync_exp)(unsigned long oldstate); unsigned long (*get_gp_state)(void); @@ -528,6 +534,7 @@ static struct rcu_torture_ops rcu_ops = { .cond_sync = 
cond_synchronize_rcu, .get_gp_state_exp = get_state_synchronize_rcu, .start_gp_poll_exp = start_poll_synchronize_rcu_expedited, + .start_gp_poll_exp_full = start_poll_synchronize_rcu_expedited_full, .poll_gp_state_exp = poll_state_synchronize_rcu, .cond_sync_exp = cond_synchronize_rcu_expedited, .call = call_rcu, @@ -1169,13 +1176,14 @@ static int nsynctypes; static void rcu_torture_write_types(void) { bool gp_cond1 = gp_cond, gp_cond_exp1 = gp_cond_exp, gp_exp1 = gp_exp; - bool gp_poll_exp1 = gp_poll_exp, gp_normal1 = gp_normal, gp_poll1 = gp_poll; - bool gp_poll_full1 = gp_poll_full, gp_sync1 = gp_sync; + bool gp_poll_exp1 = gp_poll_exp, gp_poll_exp_full1 = gp_poll_exp_full; + bool gp_normal1 = gp_normal, gp_poll1 = gp_poll, gp_poll_full1 = gp_poll_full; + bool gp_sync1 = gp_sync; /* Initialize synctype[] array. If none set, take default. */ - if (!gp_cond1 && !gp_cond_exp1 && !gp_exp1 && !gp_poll_exp && + if (!gp_cond1 && !gp_cond_exp1 && !gp_exp1 && !gp_poll_exp && !gp_poll_exp_full1 && !gp_normal1 && !gp_poll1 && !gp_poll_full1 && !gp_sync1) - gp_cond1 = gp_cond_exp1 = gp_exp1 = gp_poll_exp1 = + gp_cond1 = gp_cond_exp1 = gp_exp1 = gp_poll_exp1 = gp_poll_exp_full1 = gp_normal1 = gp_poll1 = gp_poll_full1 = gp_sync1 = true; if (gp_cond1 && cur_ops->get_gp_state && cur_ops->cond_sync) { synctype[nsynctypes++] = RTWS_COND_GET; @@ -1219,6 +1227,13 @@ static void rcu_torture_write_types(void) } else if (gp_poll_exp && (!cur_ops->start_gp_poll_exp || !cur_ops->poll_gp_state_exp)) { pr_alert("%s: gp_poll_exp without primitives.\n", __func__); } + if (gp_poll_exp_full1 && cur_ops->start_gp_poll_exp_full && cur_ops->poll_gp_state_full) { + synctype[nsynctypes++] = RTWS_POLL_GET_EXP_FULL; + pr_info("%s: Testing polling full-state expedited GPs.\n", __func__); + } else if (gp_poll_exp_full && + (!cur_ops->start_gp_poll_exp_full || !cur_ops->poll_gp_state_full)) { + pr_alert("%s: gp_poll_exp_full without primitives.\n", __func__); + } if (gp_sync1 && cur_ops->sync) { synctype[nsynctypes++] = RTWS_SYNC; pr_info("%s: Testing normal GPs.\n", __func__); @@ -1408,6 +1423,15 @@ rcu_torture_writer(void *arg) &rand); rcu_torture_pipe_update(old_rp); break; + case RTWS_POLL_GET_EXP_FULL: + rcu_torture_writer_state = RTWS_POLL_GET_EXP_FULL; + cur_ops->start_gp_poll_exp_full(&gp_snap_full); + rcu_torture_writer_state = RTWS_POLL_WAIT_EXP_FULL; + while (!cur_ops->poll_gp_state_full(&gp_snap_full)) + torture_hrtimeout_jiffies(torture_random(&rand) % 16, + &rand); + rcu_torture_pipe_update(old_rp); + break; case RTWS_SYNC: rcu_torture_writer_state = RTWS_SYNC; do_rtws_sync(&rand, cur_ops->sync); @@ -1537,6 +1561,13 @@ rcu_torture_fakewriter(void *arg) &rand); } break; + case RTWS_POLL_GET_EXP_FULL: + cur_ops->start_gp_poll_exp_full(&gp_snap_full); + while (!cur_ops->poll_gp_state_full(&gp_snap_full)) { + torture_hrtimeout_jiffies(torture_random(&rand) % 16, + &rand); + } + break; case RTWS_SYNC: cur_ops->sync(); break; diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h index be667583a5547..18128ee0d36c0 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h @@ -1027,6 +1027,24 @@ unsigned long start_poll_synchronize_rcu_expedited(void) } EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu_expedited); +/** + * start_poll_synchronize_rcu_expedited_full - Take a full snapshot and start expedited grace period + * @rgosp: Place to put snapshot of grace-period state + * + * Places the normal and expedited grace-period states in rgosp. 
This + * state value can be passed to a later call to cond_synchronize_rcu_full() + * or poll_state_synchronize_rcu_full() to determine whether or not a + * grace period (whether normal or expedited) has elapsed in the meantime. + * If the needed expedited grace period is not already slated to start, + * initiates that grace period. + */ +void start_poll_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp) +{ + get_state_synchronize_rcu_full(rgosp); + (void)start_poll_synchronize_rcu_expedited(); +} +EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu_expedited_full); + /** * cond_synchronize_rcu_expedited - Conditionally wait for an expedited RCU grace period * From patchwork Wed Aug 31 18:11:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961236 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54CB6ECAAD1 for ; Wed, 31 Aug 2022 18:14:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232589AbiHaSOJ (ORCPT ); Wed, 31 Aug 2022 14:14:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232637AbiHaSNq (ORCPT ); Wed, 31 Aug 2022 14:13:46 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2855EDF663; Wed, 31 Aug 2022 11:12:37 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id D454261C58; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F339DC43148; Wed, 31 Aug 2022 18:12:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=UfgC0TnAMojW9BPApDiZOMQioOSmMjDrtGha0d9H3ys=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iNXjlPDmqlcjMfkbH4V1ViLpxZfHrJPigIg3k6fIjltq8cjc8e7llmkMXnfxBoR9O SN9KrjTBUr5eT/pddiY47XdFzSKZiYl/zYX+ufgxIAlYYiV44e1RN3MQnNRbBdrv7m ekM31Qm0OfIrtmZUdhDwo0FGp8V8V1JVQJS8jxOFscfw+cjhWvS6iVHI0S4Sce5Zt4 eaA6NSmQ9FlhX99JquvC/ra6pJWRw2PYtt2PxKYSSjDL62knKgDErtNjJoFuNvWCiO Ft5NFJzyl/z5EEQRv+6KGPGhD0YsfFiNQoDQq1oaDrjQfoubImAbr87C6FXWq6H5sP CXIvanZmQOmbQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 6A69C5C0A6B; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 10/25] rcu: Remove blank line from poll_state_synchronize_rcu() docbook header Date: Wed, 31 Aug 2022 11:11:55 -0700 Message-Id: <20220831181210.2695080-10-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit removes the blank line preceding the oldstate parameter to the docbook header for the poll_state_synchronize_rcu() function and marks uses of this parameter later in that header. 
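For illustration only (this sketch is not part of the patch, and the foo_gp_cookie name and foo_*() helpers are hypothetical), a caller of the polling API documented here typically snapshots a cookie and then polls it until a full grace period has elapsed:

#include <linux/rcupdate.h>

/* Hypothetical cookie guarding some deferred cleanup. */
static unsigned long foo_gp_cookie;

static void foo_retire_old_data(void)
{
	/* Snapshot grace-period state and make sure a grace period gets started. */
	foo_gp_cookie = start_poll_synchronize_rcu();
}

static bool foo_old_data_safe_to_free(void)
{
	/* True once a full grace period has elapsed since the snapshot. */
	return poll_state_synchronize_rcu(foo_gp_cookie);
}

The full-sized variants added by this series follow the same pattern, but pass a struct rcu_gp_oldstate by reference (for example, start_poll_synchronize_rcu_full() and poll_state_synchronize_rcu_full()), trading extra storage for the guarantee that overlapping normal and expedited grace periods are not missed.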
Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 89572385fd1aa..0a24ef4d6b823 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3662,11 +3662,10 @@ EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu_full); /** * poll_state_synchronize_rcu - Has the specified RCU grace period completed? - * * @oldstate: value from get_state_synchronize_rcu() or start_poll_synchronize_rcu() * * If a full RCU grace period has elapsed since the earlier call from - * which oldstate was obtained, return @true, otherwise return @false. + * which @oldstate was obtained, return @true, otherwise return @false. * If @false is returned, it is the caller's responsibility to invoke this * function later on until it does return @true. Alternatively, the caller * can explicitly wait for a grace period, for example, by passing @oldstate @@ -3675,7 +3674,7 @@ EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu_full); * Yes, this function does not take counter wrap into account. * But counter wrap is harmless. If the counter wraps, we have waited for * more than a billion grace periods (and way more on a 64-bit system!). - * Those needing to keep oldstate values for very long time periods + * Those needing to keep old state values for very long time periods * (many hours even on 32-bit systems) should check them occasionally and * either refresh them or set a flag indicating that the grace period has * completed. Alternatively, they can use get_completed_synchronize_rcu() From patchwork Wed Aug 31 18:11:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961241 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 51EFAECAAD4 for ; Wed, 31 Aug 2022 18:14:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229453AbiHaSOq (ORCPT ); Wed, 31 Aug 2022 14:14:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57612 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231672AbiHaSNy (ORCPT ); Wed, 31 Aug 2022 14:13:54 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F0299F23C2; Wed, 31 Aug 2022 11:12:38 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 1030A61C61; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 18877C4314C; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=LUL2A+Zv7HuiXq34oyc7N/QVEedua4nTRyKRGZjm96M=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=P0M3RvBkJ2xop124SqXh5CSYeQ0UMLip0nB+A3jC/PuoQZ3Ke7r/LkZciR/k5Zhwl OihU5JO4MnjIfWq7Tu+Mt1Bf8fsmlndfPMkTOgQUyPlw7eU/Wu7QeEb3rML4wLCy+C eTAmIu9srIaVLlNhZteWhzVZ1TzQfHSULe9NUEizod8PVtADudALvS1okbGpF5IVla WNeaQJ0o/s50EwvloyUdOcnRoSFXviGU4K3DLFhUNXVhRhd0vwSTGoQZcghPIdo4pD m/slWnpUYM9vjN2lSj1z1yMHEo6zsDchtNMJ1MGmy5SqDNx3XPOA3mPLSvR1J7oAmv n35UbSZIsyRnw== Received: by 
paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 6C3585C0AAE; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 11/25] rcu: Add full-sized polling for cond_sync_full() Date: Wed, 31 Aug 2022 11:11:56 -0700 Message-Id: <20220831181210.2695080-11-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The cond_synchronize_rcu() API compresses the combined expedited and normal grace-period states into a single unsigned long, which conserves storage, but can miss grace periods in certain cases involving overlapping normal and expedited grace periods. Missing the occasional grace period is usually not a problem, but there are use cases that care about each and every grace period. This commit therefore adds yet another member of the full-state RCU grace-period polling API, which is the cond_synchronize_rcu_full() function. This uses up to three times the storage (rcu_gp_oldstate structure instead of unsigned long), but is guaranteed not to miss grace periods. [ paulmck: Apply feedback from kernel test robot and Julia Lawall. ] Signed-off-by: Paul E. McKenney --- include/linux/rcutiny.h | 5 +++ include/linux/rcutree.h | 1 + kernel/rcu/rcutorture.c | 67 +++++++++++++++++++++++++++++------------ kernel/rcu/tree.c | 28 ++++++++++++++++- 4 files changed, 80 insertions(+), 21 deletions(-) diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h index 653e35777a99b..3bee97f76bf43 100644 --- a/include/linux/rcutiny.h +++ b/include/linux/rcutiny.h @@ -44,6 +44,11 @@ static inline void cond_synchronize_rcu(unsigned long oldstate) might_sleep(); } +static inline void cond_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) +{ + cond_synchronize_rcu(rgosp->rgos_norm); +} + static inline unsigned long start_poll_synchronize_rcu_expedited(void) { return start_poll_synchronize_rcu(); diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 7151fd8617365..1b44288c027da 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -57,6 +57,7 @@ void start_poll_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); bool poll_state_synchronize_rcu(unsigned long oldstate); bool poll_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); void cond_synchronize_rcu(unsigned long oldstate); +void cond_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); bool rcu_is_idle_cpu(int cpu); diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index f9ca33555debf..9d22161bf7700 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -84,6 +84,7 @@ torture_param(int, fwd_progress_holdoff, 60, "Time between forward-progress test torture_param(bool, fwd_progress_need_resched, 1, "Hide cond_resched() behind need_resched()"); torture_param(bool, gp_cond, false, "Use conditional/async GP wait primitives"); torture_param(bool, gp_cond_exp, false, "Use conditional/async expedited GP wait primitives"); +torture_param(bool, gp_cond_full, false, "Use conditional/async full-state GP wait primitives"); torture_param(bool, gp_exp, false, "Use expedited GP wait primitives"); torture_param(bool, gp_normal, false, "Use normal (non-expedited) GP wait primitives"); torture_param(bool, gp_poll, false, "Use 
polling GP wait primitives"); @@ -196,20 +197,22 @@ static int rcu_torture_writer_state; #define RTWS_DEF_FREE 3 #define RTWS_EXP_SYNC 4 #define RTWS_COND_GET 5 -#define RTWS_COND_GET_EXP 6 -#define RTWS_COND_SYNC 7 -#define RTWS_COND_SYNC_EXP 8 -#define RTWS_POLL_GET 9 -#define RTWS_POLL_GET_FULL 10 -#define RTWS_POLL_GET_EXP 11 -#define RTWS_POLL_GET_EXP_FULL 12 -#define RTWS_POLL_WAIT 13 -#define RTWS_POLL_WAIT_FULL 14 -#define RTWS_POLL_WAIT_EXP 15 -#define RTWS_POLL_WAIT_EXP_FULL 16 -#define RTWS_SYNC 17 -#define RTWS_STUTTER 18 -#define RTWS_STOPPING 19 +#define RTWS_COND_GET_FULL 6 +#define RTWS_COND_GET_EXP 7 +#define RTWS_COND_SYNC 8 +#define RTWS_COND_SYNC_FULL 9 +#define RTWS_COND_SYNC_EXP 10 +#define RTWS_POLL_GET 11 +#define RTWS_POLL_GET_FULL 12 +#define RTWS_POLL_GET_EXP 13 +#define RTWS_POLL_GET_EXP_FULL 14 +#define RTWS_POLL_WAIT 15 +#define RTWS_POLL_WAIT_FULL 16 +#define RTWS_POLL_WAIT_EXP 17 +#define RTWS_POLL_WAIT_EXP_FULL 18 +#define RTWS_SYNC 19 +#define RTWS_STUTTER 20 +#define RTWS_STOPPING 21 static const char * const rcu_torture_writer_state_names[] = { "RTWS_FIXED_DELAY", "RTWS_DELAY", @@ -217,8 +220,10 @@ static const char * const rcu_torture_writer_state_names[] = { "RTWS_DEF_FREE", "RTWS_EXP_SYNC", "RTWS_COND_GET", + "RTWS_COND_GET_FULL", "RTWS_COND_GET_EXP", "RTWS_COND_SYNC", + "RTWS_COND_SYNC_FULL", "RTWS_COND_SYNC_EXP", "RTWS_POLL_GET", "RTWS_POLL_GET_FULL", @@ -355,6 +360,7 @@ struct rcu_torture_ops { bool (*poll_gp_state_full)(struct rcu_gp_oldstate *rgosp); bool (*poll_need_2gp)(bool poll, bool poll_full); void (*cond_sync)(unsigned long oldstate); + void (*cond_sync_full)(struct rcu_gp_oldstate *rgosp); call_rcu_func_t call; void (*cb_barrier)(void); void (*fqs)(void); @@ -532,6 +538,7 @@ static struct rcu_torture_ops rcu_ops = { .poll_gp_state_full = poll_state_synchronize_rcu_full, .poll_need_2gp = rcu_poll_need_2gp, .cond_sync = cond_synchronize_rcu, + .cond_sync_full = cond_synchronize_rcu_full, .get_gp_state_exp = get_state_synchronize_rcu, .start_gp_poll_exp = start_poll_synchronize_rcu_expedited, .start_gp_poll_exp_full = start_poll_synchronize_rcu_expedited_full, @@ -1175,16 +1182,17 @@ static int nsynctypes; */ static void rcu_torture_write_types(void) { - bool gp_cond1 = gp_cond, gp_cond_exp1 = gp_cond_exp, gp_exp1 = gp_exp; - bool gp_poll_exp1 = gp_poll_exp, gp_poll_exp_full1 = gp_poll_exp_full; + bool gp_cond1 = gp_cond, gp_cond_exp1 = gp_cond_exp, gp_cond_full1 = gp_cond_full; + bool gp_exp1 = gp_exp, gp_poll_exp1 = gp_poll_exp, gp_poll_exp_full1 = gp_poll_exp_full; bool gp_normal1 = gp_normal, gp_poll1 = gp_poll, gp_poll_full1 = gp_poll_full; bool gp_sync1 = gp_sync; /* Initialize synctype[] array. If none set, take default. 
*/ - if (!gp_cond1 && !gp_cond_exp1 && !gp_exp1 && !gp_poll_exp && !gp_poll_exp_full1 && - !gp_normal1 && !gp_poll1 && !gp_poll_full1 && !gp_sync1) - gp_cond1 = gp_cond_exp1 = gp_exp1 = gp_poll_exp1 = gp_poll_exp_full1 = - gp_normal1 = gp_poll1 = gp_poll_full1 = gp_sync1 = true; + if (!gp_cond1 && !gp_cond_exp1 && !gp_cond_full1 && !gp_exp1 && !gp_poll_exp && + !gp_poll_exp_full1 && !gp_normal1 && !gp_poll1 && !gp_poll_full1 && !gp_sync1) + gp_cond1 = gp_cond_exp1 = gp_cond_full1 = gp_exp1 = gp_poll_exp1 = + gp_poll_exp_full1 = gp_normal1 = gp_poll1 = gp_poll_full1 = + gp_sync1 = true; if (gp_cond1 && cur_ops->get_gp_state && cur_ops->cond_sync) { synctype[nsynctypes++] = RTWS_COND_GET; pr_info("%s: Testing conditional GPs.\n", __func__); @@ -1197,6 +1205,12 @@ static void rcu_torture_write_types(void) } else if (gp_cond_exp && (!cur_ops->get_gp_state_exp || !cur_ops->cond_sync_exp)) { pr_alert("%s: gp_cond_exp without primitives.\n", __func__); } + if (gp_cond_full1 && cur_ops->get_gp_state && cur_ops->cond_sync_full) { + synctype[nsynctypes++] = RTWS_COND_GET_FULL; + pr_info("%s: Testing conditional full-state GPs.\n", __func__); + } else if (gp_cond_full && (!cur_ops->get_gp_state || !cur_ops->cond_sync_full)) { + pr_alert("%s: gp_cond_full without primitives.\n", __func__); + } if (gp_exp1 && cur_ops->exp_sync) { synctype[nsynctypes++] = RTWS_EXP_SYNC; pr_info("%s: Testing expedited GPs.\n", __func__); @@ -1396,6 +1410,14 @@ rcu_torture_writer(void *arg) cur_ops->cond_sync_exp(gp_snap); rcu_torture_pipe_update(old_rp); break; + case RTWS_COND_GET_FULL: + rcu_torture_writer_state = RTWS_COND_GET_FULL; + cur_ops->get_gp_state_full(&gp_snap_full); + torture_hrtimeout_jiffies(torture_random(&rand) % 16, &rand); + rcu_torture_writer_state = RTWS_COND_SYNC_FULL; + cur_ops->cond_sync_full(&gp_snap_full); + rcu_torture_pipe_update(old_rp); + break; case RTWS_POLL_GET: rcu_torture_writer_state = RTWS_POLL_GET; gp_snap = cur_ops->start_gp_poll(); @@ -1540,6 +1562,11 @@ rcu_torture_fakewriter(void *arg) torture_hrtimeout_jiffies(torture_random(&rand) % 16, &rand); cur_ops->cond_sync_exp(gp_snap); break; + case RTWS_COND_GET_FULL: + cur_ops->get_gp_state_full(&gp_snap_full); + torture_hrtimeout_jiffies(torture_random(&rand) % 16, &rand); + cur_ops->cond_sync_full(&gp_snap_full); + break; case RTWS_POLL_GET: gp_snap = cur_ops->start_gp_poll(); while (!cur_ops->poll_gp_state(gp_snap)) { diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 0a24ef4d6b823..5c46c0d34ef0d 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3749,7 +3749,6 @@ EXPORT_SYMBOL_GPL(poll_state_synchronize_rcu_full); /** * cond_synchronize_rcu - Conditionally wait for an RCU grace period - * * @oldstate: value from get_state_synchronize_rcu(), start_poll_synchronize_rcu(), or start_poll_synchronize_rcu_expedited() * * If a full RCU grace period has elapsed since the earlier call to @@ -3773,6 +3772,33 @@ void cond_synchronize_rcu(unsigned long oldstate) } EXPORT_SYMBOL_GPL(cond_synchronize_rcu); +/** + * cond_synchronize_rcu_full - Conditionally wait for an RCU grace period + * @rgosp: value from get_state_synchronize_rcu_full(), start_poll_synchronize_rcu_full(), or start_poll_synchronize_rcu_expedited_full() + * + * If a full RCU grace period has elapsed since the call to + * get_state_synchronize_rcu_full(), start_poll_synchronize_rcu_full(), + * or start_poll_synchronize_rcu_expedited_full() from which @rgosp was + * obtained, just return. Otherwise, invoke synchronize_rcu() to wait + * for a full grace period. 
+ * + * Yes, this function does not take counter wrap into account. + * But counter wrap is harmless. If the counter wraps, we have waited for + * more than 2 billion grace periods (and way more on a 64-bit system!), + * so waiting for a couple of additional grace periods should be just fine. + * + * This function provides the same memory-ordering guarantees that + * would be provided by a synchronize_rcu() that was invoked at the call + * to the function that provided @rgosp and that returned at the end of + * this function. + */ +void cond_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) +{ + if (!poll_state_synchronize_rcu_full(rgosp)) + synchronize_rcu(); +} +EXPORT_SYMBOL_GPL(cond_synchronize_rcu_full); + /* * Check to see if there is any immediate RCU-related work to be done by * the current CPU, returning 1 if so and zero otherwise. The checks are From patchwork Wed Aug 31 18:11:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961245 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1C32EECAAD3 for ; Wed, 31 Aug 2022 18:15:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231775AbiHaSO5 (ORCPT ); Wed, 31 Aug 2022 14:14:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56786 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232446AbiHaSN5 (ORCPT ); Wed, 31 Aug 2022 14:13:57 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 08679F23DA; Wed, 31 Aug 2022 11:12:40 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7431D61CA4; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1854EC4314B; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=MUbxUGccBwX8w6yXUcoyydCki8L8sS4EgyPHInADDxQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Ac8Ls0Uh/GqTCjkSSBmArcFsnlwrhbCu6S+RyRSI2OpQJgvBkxvbBIKJD1OX7Zmlv YQFjiCNmhboQX7J5cAiJ1CPeUvIArI6d2rarL6yQxKCIQLlFmQ82kISctZAu1Xs1YT y+qrIPjLUUWrxfbfPyhOX8iALbnYMQCFf1wKrtBJiW8Ncy8tCmOQxNUJukZSZyZ3+7 F8BA3nJv1BMpO8tkRUYdrxPVEpnymVaqLDrOr32jErVaradhZXHrFmYYOSi3fQdaFr UNN6tsRVEhG+g5PrvrqximINnPx75lr/HBlc+TbC1KJ2AumCw+p4Av8s99SAzKdBpI cBLXWbuIJuCIQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 6E3CB5C0B00; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. 
McKenney" Subject: [PATCH rcu 12/25] rcu: Add full-sized polling for cond_sync_exp_full() Date: Wed, 31 Aug 2022 11:11:57 -0700 Message-Id: <20220831181210.2695080-12-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The cond_synchronize_rcu_expedited() API compresses the combined expedited and normal grace-period states into a single unsigned long, which conserves storage, but can miss grace periods in certain cases involving overlapping normal and expedited grace periods. Missing the occasional grace period is usually not a problem, but there are use cases that care about each and every grace period. This commit therefore adds yet another member of the full-state RCU grace-period polling API, which is the cond_synchronize_rcu_exp_full() function. This uses up to three times the storage (rcu_gp_oldstate structure instead of unsigned long), but is guaranteed not to miss grace periods. Signed-off-by: Paul E. McKenney --- include/linux/rcutiny.h | 5 +++ include/linux/rcutree.h | 1 + kernel/rcu/rcutorture.c | 72 ++++++++++++++++++++++++++++------------- kernel/rcu/tree_exp.h | 27 ++++++++++++++++ 4 files changed, 83 insertions(+), 22 deletions(-) diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h index 3bee97f76bf43..4405e9112cee8 100644 --- a/include/linux/rcutiny.h +++ b/include/linux/rcutiny.h @@ -64,6 +64,11 @@ static inline void cond_synchronize_rcu_expedited(unsigned long oldstate) cond_synchronize_rcu(oldstate); } +static inline void cond_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp) +{ + cond_synchronize_rcu_expedited(rgosp->rgos_norm); +} + extern void rcu_barrier(void); static inline void synchronize_rcu_expedited(void) diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 1b44288c027da..755b082f4ec62 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -50,6 +50,7 @@ struct rcu_gp_oldstate { unsigned long start_poll_synchronize_rcu_expedited(void); void start_poll_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp); void cond_synchronize_rcu_expedited(unsigned long oldstate); +void cond_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp); unsigned long get_state_synchronize_rcu(void); void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); unsigned long start_poll_synchronize_rcu(void); diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index 9d22161bf7700..8995429c6f1c2 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -85,6 +85,8 @@ torture_param(bool, fwd_progress_need_resched, 1, "Hide cond_resched() behind ne torture_param(bool, gp_cond, false, "Use conditional/async GP wait primitives"); torture_param(bool, gp_cond_exp, false, "Use conditional/async expedited GP wait primitives"); torture_param(bool, gp_cond_full, false, "Use conditional/async full-state GP wait primitives"); +torture_param(bool, gp_cond_exp_full, false, + "Use conditional/async full-stateexpedited GP wait primitives"); torture_param(bool, gp_exp, false, "Use expedited GP wait primitives"); torture_param(bool, gp_normal, false, "Use normal (non-expedited) GP wait primitives"); torture_param(bool, gp_poll, false, "Use polling GP wait primitives"); @@ -199,20 +201,22 @@ static int rcu_torture_writer_state; #define RTWS_COND_GET 5 #define RTWS_COND_GET_FULL 6 
#define RTWS_COND_GET_EXP 7 -#define RTWS_COND_SYNC 8 -#define RTWS_COND_SYNC_FULL 9 -#define RTWS_COND_SYNC_EXP 10 -#define RTWS_POLL_GET 11 -#define RTWS_POLL_GET_FULL 12 -#define RTWS_POLL_GET_EXP 13 -#define RTWS_POLL_GET_EXP_FULL 14 -#define RTWS_POLL_WAIT 15 -#define RTWS_POLL_WAIT_FULL 16 -#define RTWS_POLL_WAIT_EXP 17 -#define RTWS_POLL_WAIT_EXP_FULL 18 -#define RTWS_SYNC 19 -#define RTWS_STUTTER 20 -#define RTWS_STOPPING 21 +#define RTWS_COND_GET_EXP_FULL 8 +#define RTWS_COND_SYNC 9 +#define RTWS_COND_SYNC_FULL 10 +#define RTWS_COND_SYNC_EXP 11 +#define RTWS_COND_SYNC_EXP_FULL 12 +#define RTWS_POLL_GET 13 +#define RTWS_POLL_GET_FULL 14 +#define RTWS_POLL_GET_EXP 15 +#define RTWS_POLL_GET_EXP_FULL 16 +#define RTWS_POLL_WAIT 17 +#define RTWS_POLL_WAIT_FULL 18 +#define RTWS_POLL_WAIT_EXP 19 +#define RTWS_POLL_WAIT_EXP_FULL 20 +#define RTWS_SYNC 21 +#define RTWS_STUTTER 22 +#define RTWS_STOPPING 23 static const char * const rcu_torture_writer_state_names[] = { "RTWS_FIXED_DELAY", "RTWS_DELAY", @@ -222,9 +226,11 @@ static const char * const rcu_torture_writer_state_names[] = { "RTWS_COND_GET", "RTWS_COND_GET_FULL", "RTWS_COND_GET_EXP", + "RTWS_COND_GET_EXP_FULL", "RTWS_COND_SYNC", "RTWS_COND_SYNC_FULL", "RTWS_COND_SYNC_EXP", + "RTWS_COND_SYNC_EXP_FULL", "RTWS_POLL_GET", "RTWS_POLL_GET_FULL", "RTWS_POLL_GET_EXP", @@ -350,6 +356,7 @@ struct rcu_torture_ops { void (*start_gp_poll_exp_full)(struct rcu_gp_oldstate *rgosp); bool (*poll_gp_state_exp)(unsigned long oldstate); void (*cond_sync_exp)(unsigned long oldstate); + void (*cond_sync_exp_full)(struct rcu_gp_oldstate *rgosp); unsigned long (*get_gp_state)(void); void (*get_gp_state_full)(struct rcu_gp_oldstate *rgosp); unsigned long (*get_gp_completed)(void); @@ -1183,16 +1190,17 @@ static int nsynctypes; static void rcu_torture_write_types(void) { bool gp_cond1 = gp_cond, gp_cond_exp1 = gp_cond_exp, gp_cond_full1 = gp_cond_full; - bool gp_exp1 = gp_exp, gp_poll_exp1 = gp_poll_exp, gp_poll_exp_full1 = gp_poll_exp_full; - bool gp_normal1 = gp_normal, gp_poll1 = gp_poll, gp_poll_full1 = gp_poll_full; - bool gp_sync1 = gp_sync; + bool gp_cond_exp_full1 = gp_cond_exp_full, gp_exp1 = gp_exp, gp_poll_exp1 = gp_poll_exp; + bool gp_poll_exp_full1 = gp_poll_exp_full, gp_normal1 = gp_normal, gp_poll1 = gp_poll; + bool gp_poll_full1 = gp_poll_full, gp_sync1 = gp_sync; /* Initialize synctype[] array. If none set, take default. 
*/ - if (!gp_cond1 && !gp_cond_exp1 && !gp_cond_full1 && !gp_exp1 && !gp_poll_exp && - !gp_poll_exp_full1 && !gp_normal1 && !gp_poll1 && !gp_poll_full1 && !gp_sync1) - gp_cond1 = gp_cond_exp1 = gp_cond_full1 = gp_exp1 = gp_poll_exp1 = - gp_poll_exp_full1 = gp_normal1 = gp_poll1 = gp_poll_full1 = - gp_sync1 = true; + if (!gp_cond1 && !gp_cond_exp1 && !gp_cond_full1 && !gp_cond_exp_full1 && !gp_exp1 && + !gp_poll_exp && !gp_poll_exp_full1 && !gp_normal1 && !gp_poll1 && !gp_poll_full1 && + !gp_sync1) + gp_cond1 = gp_cond_exp1 = gp_cond_full1 = gp_cond_exp_full1 = gp_exp1 = + gp_poll_exp1 = gp_poll_exp_full1 = gp_normal1 = gp_poll1 = + gp_poll_full1 = gp_sync1 = true; if (gp_cond1 && cur_ops->get_gp_state && cur_ops->cond_sync) { synctype[nsynctypes++] = RTWS_COND_GET; pr_info("%s: Testing conditional GPs.\n", __func__); @@ -1211,6 +1219,13 @@ static void rcu_torture_write_types(void) } else if (gp_cond_full && (!cur_ops->get_gp_state || !cur_ops->cond_sync_full)) { pr_alert("%s: gp_cond_full without primitives.\n", __func__); } + if (gp_cond_exp_full1 && cur_ops->get_gp_state_exp && cur_ops->cond_sync_exp_full) { + synctype[nsynctypes++] = RTWS_COND_GET_EXP_FULL; + pr_info("%s: Testing conditional full-state expedited GPs.\n", __func__); + } else if (gp_cond_exp_full && + (!cur_ops->get_gp_state_exp || !cur_ops->cond_sync_exp_full)) { + pr_alert("%s: gp_cond_exp_full without primitives.\n", __func__); + } if (gp_exp1 && cur_ops->exp_sync) { synctype[nsynctypes++] = RTWS_EXP_SYNC; pr_info("%s: Testing expedited GPs.\n", __func__); @@ -1418,6 +1433,14 @@ rcu_torture_writer(void *arg) cur_ops->cond_sync_full(&gp_snap_full); rcu_torture_pipe_update(old_rp); break; + case RTWS_COND_GET_EXP_FULL: + rcu_torture_writer_state = RTWS_COND_GET_EXP_FULL; + cur_ops->get_gp_state_full(&gp_snap_full); + torture_hrtimeout_jiffies(torture_random(&rand) % 16, &rand); + rcu_torture_writer_state = RTWS_COND_SYNC_EXP_FULL; + cur_ops->cond_sync_exp_full(&gp_snap_full); + rcu_torture_pipe_update(old_rp); + break; case RTWS_POLL_GET: rcu_torture_writer_state = RTWS_POLL_GET; gp_snap = cur_ops->start_gp_poll(); @@ -1567,6 +1590,11 @@ rcu_torture_fakewriter(void *arg) torture_hrtimeout_jiffies(torture_random(&rand) % 16, &rand); cur_ops->cond_sync_full(&gp_snap_full); break; + case RTWS_COND_GET_EXP_FULL: + cur_ops->get_gp_state_full(&gp_snap_full); + torture_hrtimeout_jiffies(torture_random(&rand) % 16, &rand); + cur_ops->cond_sync_exp_full(&gp_snap_full); + break; case RTWS_POLL_GET: gp_snap = cur_ops->start_gp_poll(); while (!cur_ops->poll_gp_state(gp_snap)) { diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h index 18128ee0d36c0..9c0ae834ef076 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h @@ -1071,3 +1071,30 @@ void cond_synchronize_rcu_expedited(unsigned long oldstate) synchronize_rcu_expedited(); } EXPORT_SYMBOL_GPL(cond_synchronize_rcu_expedited); + +/** + * cond_synchronize_rcu_expedited_full - Conditionally wait for an expedited RCU grace period + * @rgosp: value from get_state_synchronize_rcu_full(), start_poll_synchronize_rcu_full(), or start_poll_synchronize_rcu_expedited_full() + * + * If a full RCU grace period has elapsed since the call to + * get_state_synchronize_rcu_full(), start_poll_synchronize_rcu_full(), + * or start_poll_synchronize_rcu_expedited_full() from which @rgosp was + * obtained, just return. Otherwise, invoke synchronize_rcu_expedited() + * to wait for a full grace period. + * + * Yes, this function does not take counter wrap into account. 
+ * But counter wrap is harmless. If the counter wraps, we have waited for + * more than 2 billion grace periods (and way more on a 64-bit system!), + * so waiting for a couple of additional grace periods should be just fine. + * + * This function provides the same memory-ordering guarantees that + * would be provided by a synchronize_rcu() that was invoked at the call + * to the function that provided @rgosp and that returned at the end of + * this function. + */ +void cond_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp) +{ + if (!poll_state_synchronize_rcu_full(rgosp)) + synchronize_rcu_expedited(); +} +EXPORT_SYMBOL_GPL(cond_synchronize_rcu_expedited_full); From patchwork Wed Aug 31 18:11:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961253 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D7CA4ECAAD1 for ; Wed, 31 Aug 2022 18:16:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232680AbiHaSQJ (ORCPT ); Wed, 31 Aug 2022 14:16:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56786 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232340AbiHaSO4 (ORCPT ); Wed, 31 Aug 2022 14:14:56 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0B4B9ED02B; Wed, 31 Aug 2022 11:12:51 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 0F6E961C67; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1A510C4314A; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=wGST3YDzy3o68YtlnXnYOiYlSDAYr4NnGFg4G17K0hw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=IcCxvanwi7RJ1Tw6jEy70675ckfhTiR9WefHcwRGJs6BfwIXx6pxkvjZq1GG+OyvH 8KJ7oakE/BkND9ASAGkwA0NKdTNn3Q6hyRPBqPUtxGyWuHSQu5FaCF/eKez+RwUxOM MCe9vmu8003kBbPsjp0FSKFqzO6/eYCnPf6auXB4k+KiU7rf53e3R7a8WoE/OrSi4z Osm9QpsXS5p0RnYq/N0KhJPhYbbU4fe1POMBC9Bcp10x+vbvNIqONjGwI5j4eVC6tm w81/89H9Q31qSFxRteMuzXDsMPkri3LIdSdU2HzA/SP+hKsmaXUjURCUImempVjxi+ HJE+h8hD2k0LQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 6FF295C0B07; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 13/25] rcu: Disable run-time single-CPU grace-period optimization Date: Wed, 31 Aug 2022 11:11:58 -0700 Message-Id: <20220831181210.2695080-13-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The run-time single-CPU grace-period optimization applies only to kernels built with CONFIG_SMP=y && CONFIG_PREEMPTION=y that are running on a single-CPU system. 
But a kernel intended for a single-CPU system should instead be built with CONFIG_SMP=n, and in any case, single-CPU systems running Linux no longer appear to be the common case. Plus this optimization results in the rcu_gp_oldstate structure being half again larger than it needs to be. This commit therefore disables the run-time single-CPU grace-period optimization, so that this optimization applies only during the pre-scheduler portion of the boot sequence. Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 40 +++++++++------------------------------- 1 file changed, 9 insertions(+), 31 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 5c46c0d34ef0d..5c4ec9dd4ce70 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3423,42 +3423,20 @@ void __init kfree_rcu_scheduler_running(void) /* * During early boot, any blocking grace-period wait automatically - * implies a grace period. Later on, this is never the case for PREEMPTION. + * implies a grace period. * - * However, because a context switch is a grace period for !PREEMPTION, any - * blocking grace-period wait automatically implies a grace period if - * there is only one CPU online at any point time during execution of - * either synchronize_rcu() or synchronize_rcu_expedited(). It is OK to - * occasionally incorrectly indicate that there are multiple CPUs online - * when there was in fact only one the whole time, as this just adds some - * overhead: RCU still operates correctly. + * Later on, this could in theory be the case for kernels built with + * CONFIG_SMP=y && CONFIG_PREEMPTION=y running on a single CPU, but this + * is not a common case. Furthermore, this optimization would cause + * the rcu_gp_oldstate structure to expand by 50%, so this potential + * grace-period optimization is ignored once the scheduler is running. */ static int rcu_blocking_is_gp(void) { - int ret; - - // Invoking preempt_model_*() too early gets a splat. - if (rcu_scheduler_active == RCU_SCHEDULER_INACTIVE || - preempt_model_full() || preempt_model_rt()) - return rcu_scheduler_active == RCU_SCHEDULER_INACTIVE; + if (rcu_scheduler_active != RCU_SCHEDULER_INACTIVE) + return false; might_sleep(); /* Check for RCU read-side critical section. */ - preempt_disable(); - /* - * If the rcu_state.n_online_cpus counter is equal to one, - * there is only one CPU, and that CPU sees all prior accesses - * made by any CPU that was online at the time of its access. - * Furthermore, if this counter is equal to one, its value cannot - * change until after the preempt_enable() below. - * - * Furthermore, if rcu_state.n_online_cpus is equal to one here, - * all later CPUs (both this one and any that come online later - * on) are guaranteed to see all accesses prior to this point - * in the code, without the need for additional memory barriers. - * Those memory barriers are provided by CPU-hotplug code. - */ - ret = READ_ONCE(rcu_state.n_online_cpus) <= 1; - preempt_enable(); - return ret; + return true; } /** From patchwork Wed Aug 31 18:11:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12961240 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C0B96ECAAD1 for ; Wed, 31 Aug 2022 18:14:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232283AbiHaSOp (ORCPT ); Wed, 31 Aug 2022 14:14:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44874 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232151AbiHaSNy (ORCPT ); Wed, 31 Aug 2022 14:13:54 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D3DAFF23C1; Wed, 31 Aug 2022 11:12:38 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id EC4D961C5B; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 131E5C43143; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=ZZy+CEe3X58TPi2PAPGr5aGa6z9m03nXkRDid1QaiUI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OGXrTPXSb4TPHXnCBLSOBtXGt0Y94oSzDImoM2jqzg1pHChP8X1dYFbLmekKt0tMv 8cpWdbxKCA9njeByXWmVnYRH0NLeRKRNtAQTuvwvwmztU652njMUWSsNqYSK0f33ku 3uSeM9RqwqfbnRxS2ccONiNqjt6T86lni0KjUf4VnUWHoQ1i/O3THV6wB62wSSHP0g 4C0R1sOsfdC7Il/+S/klZi8MDS9SIg5o2HlLhEKp7moOVSRDqFkgeaqaQtKyiON5Mq Hu9UzQhFBgeJaVQzyDLkcF5OgkUM61gqHJfL5QlsAlZ83t8Xi0Zal6ZdRIdsmCw+Uc M4JlZ8Q0m9+3Q== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 71A895C0B54; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 14/25] rcu: Set rcu_data structures' initial ->gpwrap value to true Date: Wed, 31 Aug 2022 11:11:59 -0700 Message-Id: <20220831181210.2695080-14-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org It would be good do reduce the size of the rcu_gp_oldstate structure from three unsigned long instances to two, but this requires that the boot-time optimized grace periods update the various ->gp_seq fields. Updating these fields in the rcu_state structure and in all of the rcu_node structures is at least semi-reasonable, but updating them in all of the rcu_data structures is a bridge too far. This means that if there are too many early boot-time grace periods, the ->gp_seq field in the rcu_data structure cannot be trusted. This commit therefore sets each rcu_data structure's ->gpwrap field to provide the necessary impetus for a suitable level of distrust. Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 1 + 1 file changed, 1 insertion(+) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 5c4ec9dd4ce70..03b089184b37e 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -76,6 +76,7 @@ /* Data structures. 
*/ static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, rcu_data) = { + .gpwrap = true, #ifdef CONFIG_RCU_NOCB_CPU .cblist.flags = SEGCBLIST_RCU_CORE, #endif From patchwork Wed Aug 31 18:12:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961244 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7AE93ECAAD1 for ; Wed, 31 Aug 2022 18:14:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231383AbiHaSO4 (ORCPT ); Wed, 31 Aug 2022 14:14:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56802 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232439AbiHaSN5 (ORCPT ); Wed, 31 Aug 2022 14:13:57 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org [IPv6:2604:1380:40e1:4800::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E5FCDEEF1B; Wed, 31 Aug 2022 11:12:40 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id 74B7DCE2059; Wed, 31 Aug 2022 18:12:14 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 26BCCC43153; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=KuPCDZDlslpAEicML1omMI9g8VsZjZDGBIoeSeH7tzI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SBc4SVsEXrVsOj6xMNA3CYQyJFu9AR0xAAs28Yo3gMF1s3LSF0KP6yZs3MXh0OyP8 Ix+9avzZ8WbyJZdnRzKsCcP+yqwPw16zDsMThPMZDdwZkTl6deL8Ha4CjuoTJuaiFw mB0TwfbBRKVLkQdFH1FQu6YKYMOt6ZPKY5CPlmbqJqt87bMOaCIX4sK3lCuICvYzvQ EwmgKwS8g6zESqR4HeKJ5z6B9N9QSinbv/TOVNwG6pevr1eHKe+Xmj7vvO5FdfHK5b 2Cr4Btr59ccKfQ9qCr6eFl3nzfO9yBvAUXM0DD1vOz73DnMng6I+v8C/f2W8GGcA6L IE4tdmDh7FVxg== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 734A35C0DA6; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 15/25] rcu-tasks: Remove grace-period fast-path rcu-tasks helper Date: Wed, 31 Aug 2022 11:12:00 -0700 Message-Id: <20220831181210.2695080-15-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Now that the grace-period fast path can only happen during the pre-scheduler portion of early boot, this fast path can no longer block run-time RCU Tasks and RCU Tasks Trace grace periods. This commit therefore removes the conditional cond_resched_tasks_rcu_qs() invocation. Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 03b089184b37e..0ff7d5eaa3761 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3492,8 +3492,6 @@ void synchronize_rcu(void) // which allows reuse of ->gp_seq_polled_snap. 
rcu_poll_gp_seq_start_unlocked(&rcu_state.gp_seq_polled_snap); rcu_poll_gp_seq_end_unlocked(&rcu_state.gp_seq_polled_snap); - if (rcu_init_invoked()) - cond_resched_tasks_rcu_qs(); return; // Context allows vacuous grace periods. } if (rcu_gp_is_expedited()) From patchwork Wed Aug 31 18:12:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961256 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22518ECAAD1 for ; Wed, 31 Aug 2022 18:16:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232755AbiHaSQg (ORCPT ); Wed, 31 Aug 2022 14:16:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56482 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232625AbiHaSPt (ORCPT ); Wed, 31 Aug 2022 14:15:49 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D0D2BE7253; Wed, 31 Aug 2022 11:13:02 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 98E3761CAF; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 57682C43159; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=07ujm17gMsps1cdH2vluoTIiYIZi7L2itN8CAFloXZ0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=YhzskBwu8BVukyWgCBe63YyYzo06ZIImFpph2t6GpUjgxxkLGJcHuLAgvQntDM/3Z w5ERahQtM6wjpcPhLF4VDen1frxImEJMWXl80JkLQ3Mipxe9htYQPb+QsAgZRQMdES sfwdBlmw/wTDExszu7BhqrVp6XfT+C91hUWmvcOiZNrjwGbjLhLb8OTPaml8j8ncjZ KwhFO36TFo5Nw5HScoBY/jVXovQc84/zBgKT5XpzBvxpDmdDBFze9JaRx1wo5nlGbR 8dd39jzNnJbjkKfUVvvvItCSCNBE+/eleGaA0j1HlyEF5GuoJuCWW4uFa+8wv/xti/ BjVZg49QBTswQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 750175C0DE5; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 16/25] rcu: Make synchronize_rcu() fast path update ->gp_seq counters Date: Wed, 31 Aug 2022 11:12:01 -0700 Message-Id: <20220831181210.2695080-16-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit causes the early boot single-CPU synchronize_rcu() fastpath to update the rcu_state and rcu_node structures' ->gp_seq and ->gp_seq_needed counters. This will allow the full-state polled grace-period APIs to detect all normal grace periods without the need to track the special combined polling-only counter, which is a step towards removing the ->rgos_polled field from the rcu_gp_oldstate, thereby reducing its size by one third. Signed-off-by: Paul E. 
McKenney --- kernel/rcu/tree.c | 39 ++++++++++++++++++++++++++------------- 1 file changed, 26 insertions(+), 13 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 0ff7d5eaa3761..8fa5ec0f3d111 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3480,24 +3480,37 @@ static int rcu_blocking_is_gp(void) */ void synchronize_rcu(void) { + unsigned long flags; + struct rcu_node *rnp; + RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map) || lock_is_held(&rcu_lock_map) || lock_is_held(&rcu_sched_lock_map), "Illegal synchronize_rcu() in RCU read-side critical section"); - if (rcu_blocking_is_gp()) { - // Note well that this code runs with !PREEMPT && !SMP. - // In addition, all code that advances grace periods runs at - // process level. Therefore, this normal GP overlaps with - // other normal GPs only by being fully nested within them, - // which allows reuse of ->gp_seq_polled_snap. - rcu_poll_gp_seq_start_unlocked(&rcu_state.gp_seq_polled_snap); - rcu_poll_gp_seq_end_unlocked(&rcu_state.gp_seq_polled_snap); - return; // Context allows vacuous grace periods. + if (!rcu_blocking_is_gp()) { + if (rcu_gp_is_expedited()) + synchronize_rcu_expedited(); + else + wait_rcu_gp(call_rcu); + return; } - if (rcu_gp_is_expedited()) - synchronize_rcu_expedited(); - else - wait_rcu_gp(call_rcu); + + // Context allows vacuous grace periods. + // Note well that this code runs with !PREEMPT && !SMP. + // In addition, all code that advances grace periods runs at + // process level. Therefore, this normal GP overlaps with other + // normal GPs only by being fully nested within them, which allows + // reuse of ->gp_seq_polled_snap. + rcu_poll_gp_seq_start_unlocked(&rcu_state.gp_seq_polled_snap); + rcu_poll_gp_seq_end_unlocked(&rcu_state.gp_seq_polled_snap); + + // Update normal grace-period counters to record grace period. + local_irq_save(flags); + WARN_ON_ONCE(num_online_cpus() > 1); + rcu_state.gp_seq += (1 << RCU_SEQ_CTR_SHIFT); + rcu_for_each_node_breadth_first(rnp) + rnp->gp_seq_needed = rnp->gp_seq = rcu_state.gp_seq; + local_irq_restore(flags); } EXPORT_SYMBOL_GPL(synchronize_rcu); From patchwork Wed Aug 31 18:12:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12961251 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8DD80ECAAD4 for ; Wed, 31 Aug 2022 18:15:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232634AbiHaSPw (ORCPT ); Wed, 31 Aug 2022 14:15:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59004 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232635AbiHaSOe (ORCPT ); Wed, 31 Aug 2022 14:14:34 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D109CF14D0; Wed, 31 Aug 2022 11:13:02 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 472AFB8227B; Wed, 31 Aug 2022 18:12:14 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5DE8DC43158; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=eQALP+6nah6BViSpqOqdrb0bckX7LhhhIQ1zzCaGtj4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fUw0kg02gBGahufpc1gIh/nb9EO9XNC7nAWI5q7q2vVZnLEvkRjnxf767al2tzBkc q5trO0bg9kzNhL8CVsVxK3i7jAzNVfd4IA3diA8jhGRljGvcLJmcP9B3bYPyWpg55F CYw2DTZVY2ibr4CpqZebAOBEGRfZ/Gx8YK1NW8qBZ0tfOHhBk+5IDDSxfZromczsag GlgnPpZacHM0suz1jrBAD5Hm++vk+xIb2PySR6wI7qMuGe+khV+ErT+afp+DD1+HmF OblFBJu2ts9txXwN1WDKOg3GlmroqXzao84g8m7Yc7aNQkI0oFsIkZKY7r8G+pc8u9 9+JOOJJ6PHasQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 76B585C0DF4; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 17/25] rcu: Remove expedited grace-period fast-path forward-progress helper Date: Wed, 31 Aug 2022 11:12:02 -0700 Message-Id: <20220831181210.2695080-17-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Now that the expedited grace-period fast path can only happen during the pre-scheduler portion of early boot, this fast path can no longer block run-time RCU Trace grace periods. This commit therefore removes the conditional cond_resched() invocation. Signed-off-by: Paul E. McKenney --- kernel/rcu/tree_exp.h | 2 -- 1 file changed, 2 deletions(-) diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h index 9c0ae834ef076..1a51f9301ebff 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h @@ -924,8 +924,6 @@ void synchronize_rcu_expedited(void) // them, which allows reuse of ->gp_seq_polled_exp_snap. rcu_poll_gp_seq_start_unlocked(&rcu_state.gp_seq_polled_exp_snap); rcu_poll_gp_seq_end_unlocked(&rcu_state.gp_seq_polled_exp_snap); - if (rcu_init_invoked()) - cond_resched(); return; // Context allows vacuous grace periods. } From patchwork Wed Aug 31 18:12:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12961258 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0335EECAAD3 for ; Wed, 31 Aug 2022 18:17:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232417AbiHaSRE (ORCPT ); Wed, 31 Aug 2022 14:17:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232689AbiHaSQQ (ORCPT ); Wed, 31 Aug 2022 14:16:16 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 08173EF9FB; Wed, 31 Aug 2022 11:13:24 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 320ACB8227A; Wed, 31 Aug 2022 18:12:14 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 52D4EC4347C; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=TwQvXPSEvLyzs2xgFi3POyva60hZdjGop9E3Q6E66NA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tIFl2jRYM/70ivwGtBc1NUThPZtbFDPC4tP4XsBJP3whZimUxtfZqLCXaGsWZEOKZ mvboeGMrwn80J1xnwsXJB0KaqL+SQiXyDQzJpST/dRjtKXftyqXUZCTKKrJSlNcZsA RPUVfvd0YK/h3WwxsAtJzhcSkX7PVhtGx+V1f+Elgq75dPS08csNSR90IbNIAREExC 5mllniGYgQMqveB7u9EIvGzgRwcaBFy+hhmQC6sbZ3lbzmXiusGhzUOYcA/2eMq6VO dyBYvjXhm8Y33nqS+rAUBVPMjw6EveHIuyogYEIN1ZumZcZ9AYttvrMkZMFmPwbXjs 9qSaYAPblZqkQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 786FF5C0E4D; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 18/25] rcu: Make synchronize_rcu_expedited() fast path update .expedited_sequence Date: Wed, 31 Aug 2022 11:12:03 -0700 Message-Id: <20220831181210.2695080-18-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit causes the early boot single-CPU synchronize_rcu_expedited() fastpath to update the rcu_state structure's ->expedited_sequence counter. This will allow the full-state polled grace-period APIs to detect all expedited grace periods without the need to track the special combined polling-only counter, which is another step towards removing the ->rgos_polled field from the rcu_gp_oldstate, thereby reducing its size by one third. Signed-off-by: Paul E. 
McKenney --- kernel/rcu/tree_exp.h | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h index 1a51f9301ebff..54e05d13d1512 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h @@ -906,6 +906,7 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp) void synchronize_rcu_expedited(void) { bool boottime = (rcu_scheduler_active == RCU_SCHEDULER_INIT); + unsigned long flags; struct rcu_exp_work rew; struct rcu_node *rnp; unsigned long s; @@ -924,6 +925,11 @@ void synchronize_rcu_expedited(void) // them, which allows reuse of ->gp_seq_polled_exp_snap. rcu_poll_gp_seq_start_unlocked(&rcu_state.gp_seq_polled_exp_snap); rcu_poll_gp_seq_end_unlocked(&rcu_state.gp_seq_polled_exp_snap); + + local_irq_save(flags); + WARN_ON_ONCE(num_online_cpus() > 1); + rcu_state.expedited_sequence += (1 << RCU_SEQ_CTR_SHIFT); + local_irq_restore(flags); return; // Context allows vacuous grace periods. } From patchwork Wed Aug 31 18:12:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961246 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F3BB4ECAAD1 for ; Wed, 31 Aug 2022 18:15:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232448AbiHaSPB (ORCPT ); Wed, 31 Aug 2022 14:15:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44962 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232415AbiHaSN5 (ORCPT ); Wed, 31 Aug 2022 14:13:57 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0B5E6EE6AB; Wed, 31 Aug 2022 11:12:51 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2851161C77; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 52AEEC433D7; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=2IUtHz4jkuLMZGxJz7R754/GACgS+v0YbUrl2b5jzbw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=D4xO2XP1T7kYv98HPLEHDo7C3MA6RBFUE8NWVdTwBdgHnwzkbTz+LJPjyU1KIF/IL q7BDP3u+3LSxXiwpH/jc7bS2QSy9l9zDnleKDEK7TUnSAzhi1ifeaHKXb0RkURFy0V w8s2uuHIZeKOmDV/Gnd7jffBLHcTXT4LJe1B31pTJ5FmhHQluSbV4w7iQA0iWisUcV ExBjTeK4NE0fG2zDN3jZZFEeqZwyHFkiOnXFcmwgADneLFBHmqpkeiAiAaxSVFg7+x aEjRzSAXInHF2ZFGfaaz3liw1623CGErJXq1mYOEwScMWtPlp/dYtTocZyKuUYeSg8 Vll6LS5eWJHWg== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 7A1C75C0E68; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. 
McKenney" Subject: [PATCH rcu 19/25] rcu: Remove ->rgos_polled field from rcu_gp_oldstate structure Date: Wed, 31 Aug 2022 11:12:04 -0700 Message-Id: <20220831181210.2695080-19-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Because both normal and expedited grace periods increment their respective counters on their pre-scheduler early boot fastpaths, the rcu_gp_oldstate structure no longer needs its ->rgos_polled field. This commit therefore removes this field, shrinking this structure so that it is the same size as an rcu_head structure. Signed-off-by: Paul E. McKenney --- include/linux/rcutree.h | 1 - kernel/rcu/tree.c | 6 +----- 2 files changed, 1 insertion(+), 6 deletions(-) diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 755b082f4ec62..455a03bdce152 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -44,7 +44,6 @@ bool rcu_gp_might_be_stalled(void); struct rcu_gp_oldstate { unsigned long rgos_norm; unsigned long rgos_exp; - unsigned long rgos_polled; }; unsigned long start_poll_synchronize_rcu_expedited(void); diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 8fa5ec0f3d111..b9e8ed00536d4 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3526,7 +3526,6 @@ void get_completed_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) { rgosp->rgos_norm = RCU_GET_STATE_COMPLETED; rgosp->rgos_exp = RCU_GET_STATE_COMPLETED; - rgosp->rgos_polled = RCU_GET_STATE_COMPLETED; } EXPORT_SYMBOL_GPL(get_completed_synchronize_rcu_full); @@ -3575,7 +3574,6 @@ void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) smp_mb(); /* ^^^ */ rgosp->rgos_norm = rcu_seq_snap(&rnp->gp_seq); rgosp->rgos_exp = rcu_seq_snap(&rcu_state.expedited_sequence); - rgosp->rgos_polled = rcu_seq_snap(&rcu_state.gp_seq_polled); } EXPORT_SYMBOL_GPL(get_state_synchronize_rcu_full); @@ -3727,9 +3725,7 @@ bool poll_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) if (rgosp->rgos_norm == RCU_GET_STATE_COMPLETED || rcu_seq_done_exact(&rnp->gp_seq, rgosp->rgos_norm) || rgosp->rgos_exp == RCU_GET_STATE_COMPLETED || - rcu_seq_done_exact(&rcu_state.expedited_sequence, rgosp->rgos_exp) || - rgosp->rgos_polled == RCU_GET_STATE_COMPLETED || - rcu_seq_done_exact(&rcu_state.gp_seq_polled, rgosp->rgos_polled)) { + rcu_seq_done_exact(&rcu_state.expedited_sequence, rgosp->rgos_exp)) { smp_mb(); /* Ensure GP ends before subsequent accesses. */ return true; } From patchwork Wed Aug 31 18:12:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12961257 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 17874ECAAD3 for ; Wed, 31 Aug 2022 18:16:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231559AbiHaSQj (ORCPT ); Wed, 31 Aug 2022 14:16:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59350 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230456AbiHaSQF (ORCPT ); Wed, 31 Aug 2022 14:16:05 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 114833ECE8; Wed, 31 Aug 2022 11:13:16 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 3E4D461BF0; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 61809C433D6; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=nicJ2bfIfXKr9ehxtHXUaWlz+sCDzYP0wUG+MHreSZc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rM71sH2vJfF1sA33XvJKPk+V+qI+9v3hyKAmEcdC6DyhIFmFs04S0Y84AtpYnr4vn kQoMVf2hPUK1NgGecFpGOCuXOKq47a2YBx42HE+QrFPjRwMcFJX9iLsrpwtijBCebF KteMGV44w07nU3SCwjyMBVBDID/bsZsMy1h78hCFwkiPR1tLx8GYkE7xOa5n08HSzd iVfFSk/hd7jZ1t/VY1Wap9z6w4gcgfeWGhj5rXwLcoNqKTdEG9c5Q3uURSRD/JfEac 9mziIirxoXUfwMez6hze0LJowlasHucTbCrDULxGGntS+YnnbwN3jUEco8xQQLmzho ROeTnNRcyqPbg== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 7BCFB5C0EAC; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 20/25] rcutorture: Adjust rcu_poll_need_2gp() for rcu_gp_oldstate field removal Date: Wed, 31 Aug 2022 11:12:05 -0700 Message-Id: <20220831181210.2695080-20-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Now that rcu_gp_oldstate can accurately track both normal and expedited grace periods regardless of system state, rcutorture's rcu_poll_need_2gp() function need only call for a second grace period for the old single-unsigned-long grace-period polling APIs This commit therefore adjusts rcu_poll_need_2gp() accordingly. Signed-off-by: Paul E. McKenney --- kernel/rcu/rcutorture.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index 8995429c6f1c2..029de67a9da91 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -520,7 +520,7 @@ static void rcu_sync_torture_init(void) static bool rcu_poll_need_2gp(bool poll, bool poll_full) { - return poll || (!IS_ENABLED(CONFIG_TINY_RCU) && poll_full && num_online_cpus() <= 1); + return poll; } static struct rcu_torture_ops rcu_ops = { From patchwork Wed Aug 31 18:12:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12961250 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 85204ECAAD1 for ; Wed, 31 Aug 2022 18:15:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232719AbiHaSPt (ORCPT ); Wed, 31 Aug 2022 14:15:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56468 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232622AbiHaSOc (ORCPT ); Wed, 31 Aug 2022 14:14:32 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6EB65F1B6D; Wed, 31 Aug 2022 11:12:52 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 5405B61CA0; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7E26BC4315E; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=+3o3kcW+nFj8F0D42txGmbjiiaheUJvzJk/MtcXEmg0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=gDPPrXlzXdJf72qGoy/xlZDI8SRO0RGaIJmpZKHzXLt2RW5TDZv1mdhWFE0cfl6JZ qb59rPZihhDo0fBnUJ39MpdadS/1rBxmfxAZYXLZx/KnXY41J/vStfCE5lmM/36BEL Kh6m9/QSUgmRbohwIZKIGtnocsQc7PPABYUKa086LQTVlgdw3n2UiLUyOy3JkwZ+uV eIeksiQ7ugEe1lHTrG2FQYC+pxJLR9CJmbBwh5iwrZDPh6vXEEAwCo8oble0BlqQHA OHKNNlNsOIoqJcwne3JPOMioCiZG5Z8q0hBAkTUKEXrSqZTSuJwlJaRXUNfcJ7h/bc Gu6s7L9H8/9ng== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 7D7D85C0EBC; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 21/25] rcu: Make synchronize_rcu() fastpath update only boot-CPU counters Date: Wed, 31 Aug 2022 11:12:06 -0700 Message-Id: <20220831181210.2695080-21-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Large systems can have hundreds of rcu_node structures, and updating counters in each of them might slow down booting. This commit therefore updates only the counters in those rcu_node structures corresponding to the boot CPU, up to and including the root rcu_node structure. The counters for the remaining rcu_node structures are updated by the rcu_scheduler_starting() function, which executes just before the first non-boot kthread is spawned. Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 18 ++++++++++++++++-- 1 file changed, 16 insertions(+), 2 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index b9e8ed00536d4..ef15bae3c7c77 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3504,11 +3504,14 @@ void synchronize_rcu(void) rcu_poll_gp_seq_start_unlocked(&rcu_state.gp_seq_polled_snap); rcu_poll_gp_seq_end_unlocked(&rcu_state.gp_seq_polled_snap); - // Update normal grace-period counters to record grace period. 
+ // Update the normal grace-period counters to record + // this grace period, but only those used by the boot CPU. + // The rcu_scheduler_starting() will take care of the rest of + // these counters. local_irq_save(flags); WARN_ON_ONCE(num_online_cpus() > 1); rcu_state.gp_seq += (1 << RCU_SEQ_CTR_SHIFT); - rcu_for_each_node_breadth_first(rnp) + for (rnp = this_cpu_ptr(&rcu_data)->mynode; rnp; rnp = rnp->parent) rnp->gp_seq_needed = rnp->gp_seq = rcu_state.gp_seq; local_irq_restore(flags); } @@ -4456,9 +4459,20 @@ early_initcall(rcu_spawn_gp_kthread); */ void rcu_scheduler_starting(void) { + unsigned long flags; + struct rcu_node *rnp; + WARN_ON(num_online_cpus() != 1); WARN_ON(nr_context_switches() > 0); rcu_test_sync_prims(); + + // Fix up the ->gp_seq counters. + local_irq_save(flags); + rcu_for_each_node_breadth_first(rnp) + rnp->gp_seq_needed = rnp->gp_seq = rcu_state.gp_seq; + local_irq_restore(flags); + + // Switch out of early boot mode. rcu_scheduler_active = RCU_SCHEDULER_INIT; rcu_test_sync_prims(); } From patchwork Wed Aug 31 18:12:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961249 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0FD14ECAAD3 for ; Wed, 31 Aug 2022 18:15:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229510AbiHaSPp (ORCPT ); Wed, 31 Aug 2022 14:15:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232452AbiHaSOV (ORCPT ); Wed, 31 Aug 2022 14:14:21 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1F542EEC41; Wed, 31 Aug 2022 11:12:49 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id B2F2461CB7; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7B53DC4315F; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=mKMLfJJ5PQglYt2nL/q4Ztr/omrySwsHvF1ZvrLRD10=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VHv47LAX5Wr9Q7Yx+XlI3B3yGgOV/EOia7cA5nD5QwTLMDD89QvNcd+XxHPfTxCaJ 3qcI3XfNPtzey3oOEVt1o9JX6nmu7B7egu6vOXxR47xGMSYXpQPKYbmWMLT3FWCwiN +3vPBft8K219znLLUtEobhQ6sLyJB1LMrBO21qFAtT6gxUBJcPqVE9ovllxYZWneWP 5d+V3+XBbWA9NFROJQIem4GReDZB/aaa+QoVRwRytWQBs2UiHxOaPeRz/Jc3/CkdaK EnTxHKv0eHXSy69c1CcPW1XSntUfx09pE+KMRBTcCQrRP0K4C6De/C9eywLRddIqFG pFDaUqrRzxSMw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 7F27B5C0EC0; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. 
McKenney" Subject: [PATCH rcu 22/25] rcutorture: Use 1-suffixed variable in rcu_torture_write_types() check Date: Wed, 31 Aug 2022 11:12:07 -0700 Message-Id: <20220831181210.2695080-22-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit changes the use of gp_poll_exp to gp_poll_exp1 in the first check in rcu_torture_write_types(). No functional effect, but consistency is a good thing. Signed-off-by: Paul E. McKenney --- kernel/rcu/rcutorture.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index 029de67a9da91..71d1af9c060e1 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -1196,7 +1196,7 @@ static void rcu_torture_write_types(void) /* Initialize synctype[] array. If none set, take default. */ if (!gp_cond1 && !gp_cond_exp1 && !gp_cond_full1 && !gp_cond_exp_full1 && !gp_exp1 && - !gp_poll_exp && !gp_poll_exp_full1 && !gp_normal1 && !gp_poll1 && !gp_poll_full1 && + !gp_poll_exp1 && !gp_poll_exp_full1 && !gp_normal1 && !gp_poll1 && !gp_poll_full1 && !gp_sync1) gp_cond1 = gp_cond_exp1 = gp_cond_full1 = gp_cond_exp_full1 = gp_exp1 = gp_poll_exp1 = gp_poll_exp_full1 = gp_normal1 = gp_poll1 = From patchwork Wed Aug 31 18:12:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961237 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 25DBDECAAD3 for ; Wed, 31 Aug 2022 18:14:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232620AbiHaSOc (ORCPT ); Wed, 31 Aug 2022 14:14:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57410 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232658AbiHaSNt (ORCPT ); Wed, 31 Aug 2022 14:13:49 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org [IPv6:2604:1380:40e1:4800::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 085DDF14CB; Wed, 31 Aug 2022 11:12:40 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id C591BCE205F; Wed, 31 Aug 2022 18:12:14 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 76A6EC4315A; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=3hit05VM9H5Pv/TaHqZN5KHI+BLHnN72k5omzSbGN8U=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dU1hK/lZj2+BkLaTtGTPGvKQvOFvM6D7CCvjwuYo6kUuQGsu7bXEIAq2Ev4iAuXNQ g76f0dZ2iIE2dnix6KRuDJDgY3ovicziybEC8MrSfXuklwONX9+Zp3fEgMBghYmSed XfzSRzAnNQz7QUKSsuWzynH+bafZMaMRLF8nWmkb8169KxRIgvIhqsJTaaiFnFmF13 bznhd+KCPlyccNgxCWIKPo7Cow/nLuzg/24Y5DfVy9i/0x0p7X2iVjs0pmsXJJSYBq N6xPChZ9C2DZrmSOLqRZP7yrgrh5TYM78rEML4aN+aEeTSQpM826022lOM6YmzORHz 9oBZ73+2+zeCQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 80E075C0ED4; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. 
McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 23/25] rcutorture: Expand rcu_torture_write_types() first "if" statement Date: Wed, 31 Aug 2022 11:12:08 -0700 Message-Id: <20220831181210.2695080-23-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit expands the rcu_torture_write_types() function's first "if" condition and body, placing one element per line, in order to make the compiler's error messages more helpful. Signed-off-by: Paul E. McKenney --- kernel/rcu/rcutorture.c | 29 +++++++++++++++++++++++------ 1 file changed, 23 insertions(+), 6 deletions(-) diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index 71d1af9c060e1..fe1836aad6466 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -1195,12 +1195,29 @@ static void rcu_torture_write_types(void) bool gp_poll_full1 = gp_poll_full, gp_sync1 = gp_sync; /* Initialize synctype[] array. If none set, take default. */ - if (!gp_cond1 && !gp_cond_exp1 && !gp_cond_full1 && !gp_cond_exp_full1 && !gp_exp1 && - !gp_poll_exp1 && !gp_poll_exp_full1 && !gp_normal1 && !gp_poll1 && !gp_poll_full1 && - !gp_sync1) - gp_cond1 = gp_cond_exp1 = gp_cond_full1 = gp_cond_exp_full1 = gp_exp1 = - gp_poll_exp1 = gp_poll_exp_full1 = gp_normal1 = gp_poll1 = - gp_poll_full1 = gp_sync1 = true; + if (!gp_cond1 && + !gp_cond_exp1 && + !gp_cond_full1 && + !gp_cond_exp_full1 && + !gp_exp1 && + !gp_poll_exp1 && + !gp_poll_exp_full1 && + !gp_normal1 && + !gp_poll1 && + !gp_poll_full1 && + !gp_sync1) { + gp_cond1 = true; + gp_cond_exp1 = true; + gp_cond_full1 = true; + gp_cond_exp_full1 = true; + gp_exp1 = true; + gp_poll_exp1 = true; + gp_poll_exp_full1 = true; + gp_normal1 = true; + gp_poll1 = true; + gp_poll_full1 = true; + gp_sync1 = true; + } if (gp_cond1 && cur_ops->get_gp_state && cur_ops->cond_sync) { synctype[nsynctypes++] = RTWS_COND_GET; pr_info("%s: Testing conditional GPs.\n", __func__); From patchwork Wed Aug 31 18:12:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12961232 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 759C1ECAAD4 for ; Wed, 31 Aug 2022 18:13:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231809AbiHaSNy (ORCPT ); Wed, 31 Aug 2022 14:13:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43450 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232334AbiHaSN3 (ORCPT ); Wed, 31 Aug 2022 14:13:29 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 243DFDD4D4; Wed, 31 Aug 2022 11:12:18 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 7734CB82279; Wed, 31 Aug 2022 18:12:14 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7A208C4315B; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=40Nnav/fNtUmhr+7K11apGq8VNSLD9p1jp36HWLtzQQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rwdrN3SGDh6mPT7Y+RXAIAjlKNRPsu9f3eECC/Pio7JTE7uYmovPPU8VEJnn/SaME qVj08MusyyJn7qJebM4oYA78qW/1HwbG4+MrL3LosvCL4QZmKyf+JN0MQvnhdIvhrR WaUjTqbCIAApySP+arURJVxjLNFTVYhdK6hqLIC5K0H1DfnsGcRQ9hZwX9D+SIm97Q 5Rrt/Pf6ZkPzQ+BT2KCTdr+oEwI3CSrza8ORmyHA/wN39z6YwJYBA0od8hq6zx3PaM 5NyxvK65qP1VsY/l9neu41lz8KDIPnoCIecti7XUipydP46KiZ+jqg5uiKauOVFksB 1g3Xf47QMEfZQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 82A365C0ED7; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 24/25] rcu: Add functions to compare grace-period state values Date: Wed, 31 Aug 2022 11:12:09 -0700 Message-Id: <20220831181210.2695080-24-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit adds same_state_synchronize_rcu() and same_state_synchronize_rcu_full() functions to compare grace-period state values, for example, those obtained from get_state_synchronize_rcu() and get_state_synchronize_rcu_full(). These functions allow small structures to omit these state values by placing them in list headers for lists containing structures with the same token value. Presumably the per-structure list pointers are the same ones used to link the structures into whatever reader-accessible data structure was used. This commit also adds both NUM_ACTIVE_RCU_POLL_OLDSTATE and NUM_ACTIVE_RCU_POLL_FULL_OLDSTATE, which define the maximum number of distinct unsigned long values and rcu_gp_oldstate values, respectively, corresponding to not-yet-completed grace periods. These values can be used to size arrays of the list headers described above. Signed-off-by: Paul E. 
McKenney --- include/linux/rcupdate.h | 21 +++++++++++++++++++++ include/linux/rcutiny.h | 14 ++++++++++++++ include/linux/rcutree.h | 28 ++++++++++++++++++++++++++++ 3 files changed, 63 insertions(+) diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index faaa174dfb27c..9941d5c3d5e19 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -47,6 +47,27 @@ struct rcu_gp_oldstate; unsigned long get_completed_synchronize_rcu(void); void get_completed_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp); +// Maximum number of unsigned long values corresponding to +// not-yet-completed RCU grace periods. +#define NUM_ACTIVE_RCU_POLL_OLDSTATE 2 + +/** + * same_state_synchronize_rcu - Are two old-state values identical? + * @oldstate1: First old-state value. + * @oldstate2: Second old-state value. + * + * The two old-state values must have been obtained from either + * get_state_synchronize_rcu(), start_poll_synchronize_rcu(), or + * get_completed_synchronize_rcu(). Returns @true if the two values are + * identical and @false otherwise. This allows structures whose lifetimes + * are tracked by old-state values to push these values to a list header, + * allowing those structures to be slightly smaller. + */ +static inline bool same_state_synchronize_rcu(unsigned long oldstate1, unsigned long oldstate2) +{ + return oldstate1 == oldstate2; +} + #ifdef CONFIG_PREEMPT_RCU void __rcu_read_lock(void); diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h index 4405e9112cee8..768196a5f39d6 100644 --- a/include/linux/rcutiny.h +++ b/include/linux/rcutiny.h @@ -18,6 +18,20 @@ struct rcu_gp_oldstate { unsigned long rgos_norm; }; +// Maximum number of rcu_gp_oldstate values corresponding to +// not-yet-completed RCU grace periods. +#define NUM_ACTIVE_RCU_POLL_FULL_OLDSTATE 2 + +/* + * Are the two oldstate values the same? See the Tree RCU version for + * docbook header. + */ +static inline bool same_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp1, + struct rcu_gp_oldstate *rgosp2) +{ + return rgosp1->rgos_norm == rgosp2->rgos_norm; +} + unsigned long get_state_synchronize_rcu(void); static inline void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp) diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 455a03bdce152..5efb51486e8af 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -46,6 +46,34 @@ struct rcu_gp_oldstate { unsigned long rgos_exp; }; +// Maximum number of rcu_gp_oldstate values corresponding to +// not-yet-completed RCU grace periods. +#define NUM_ACTIVE_RCU_POLL_FULL_OLDSTATE 4 + +/** + * same_state_synchronize_rcu_full - Are two old-state values identical? + * @rgosp1: First old-state value. + * @rgosp2: Second old-state value. + * + * The two old-state values must have been obtained from either + * get_state_synchronize_rcu_full(), start_poll_synchronize_rcu_full(), + * or get_completed_synchronize_rcu_full(). Returns @true if the two + * values are identical and @false otherwise. This allows structures + * whose lifetimes are tracked by old-state values to push these values + * to a list header, allowing those structures to be slightly smaller. + * + * Note that equality is judged on a bitwise basis, so that an + * @rcu_gp_oldstate structure with an already-completed state in one field + * will compare not-equal to a structure with an already-completed state + * in the other field. After all, the @rcu_gp_oldstate structure is opaque + * so how did such a situation come to pass in the first place? 
+ */ +static inline bool same_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp1, + struct rcu_gp_oldstate *rgosp2) +{ + return rgosp1->rgos_norm == rgosp2->rgos_norm && rgosp1->rgos_exp == rgosp2->rgos_exp; +} + unsigned long start_poll_synchronize_rcu_expedited(void); void start_poll_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp); void cond_synchronize_rcu_expedited(unsigned long oldstate); From patchwork Wed Aug 31 18:12:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12961252 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1CA6CECAAD3 for ; Wed, 31 Aug 2022 18:16:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232673AbiHaSQI (ORCPT ); Wed, 31 Aug 2022 14:16:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57674 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232417AbiHaSOu (ORCPT ); Wed, 31 Aug 2022 14:14:50 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E74F4EA31B; Wed, 31 Aug 2022 11:12:59 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 5974861C63; Wed, 31 Aug 2022 18:12:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7F1FEC43160; Wed, 31 Aug 2022 18:12:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1661969532; bh=FckxDIfC5El2OqaWesfPy5RPaKRRdDuIXx9ES9lRUp8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=q6LUYnVSjbw66FR4A9plBc3in+qKuwp8ICUSBQ4EK4TPPxiCxyFyg3WTiH/VQ9Uk9 LyX91cRg5vYB+Z3N10yiHpIHpp5lObGg+upVLVUFOfmW9HiuFF68+3hhCb/pPThnSn eaFm7tTqqovfFE1NRHVL0f6tICSIqdvnuFUr0kVTJfOhD8SLt3G9KhEyGndm2uUozY 3UjPG6AWWLITF3qWnsPfT4Uqm158AqzdGcZTSqxpQ5uOB7ltYB/95bL1qAmGJ734lA KRn0+ZZjR40cXUnm4p/6EAyDtXb9C7hxQVtLmBh5OeuVo895mb4UPnSSo91ItLNup5 qBk21n8asI2CA== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 847EC5C0F13; Wed, 31 Aug 2022 11:12:11 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 25/25] rcutorture: Limit read-side polling-API testing Date: Wed, 31 Aug 2022 11:12:10 -0700 Message-Id: <20220831181210.2695080-25-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> References: <20220831181207.GA2694717@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org RCU's polled grace-period API is reasonably lightweight, but still contains heavyweight memory barriers. This commit therefore limits testing of this API from rcutorture's readers in order to avoid the false negatives that these heavyweight operations could provoke. Signed-off-by: Paul E. 
McKenney --- kernel/rcu/rcutorture.c | 41 +++++++++++++++++++++++------------------ 1 file changed, 23 insertions(+), 18 deletions(-) diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c index fe1836aad6466..91103279d7b4f 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c @@ -1903,6 +1903,7 @@ rcutorture_loop_extend(int *readstate, struct torture_random_state *trsp, */ static bool rcu_torture_one_read(struct torture_random_state *trsp, long myid) { + bool checkpolling = !(torture_random(trsp) & 0xfff); unsigned long cookie; struct rcu_gp_oldstate cookie_full; int i; @@ -1920,10 +1921,12 @@ static bool rcu_torture_one_read(struct torture_random_state *trsp, long myid) WARN_ON_ONCE(!rcu_is_watching()); newstate = rcutorture_extend_mask(readstate, trsp); rcutorture_one_extend(&readstate, newstate, trsp, rtrsp++); - if (cur_ops->get_gp_state && cur_ops->poll_gp_state) - cookie = cur_ops->get_gp_state(); - if (cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full) - cur_ops->get_gp_state_full(&cookie_full); + if (checkpolling) { + if (cur_ops->get_gp_state && cur_ops->poll_gp_state) + cookie = cur_ops->get_gp_state(); + if (cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full) + cur_ops->get_gp_state_full(&cookie_full); + } started = cur_ops->get_gp_seq(); ts = rcu_trace_clock_local(); p = rcu_dereference_check(rcu_torture_current, @@ -1957,20 +1960,22 @@ static bool rcu_torture_one_read(struct torture_random_state *trsp, long myid) } __this_cpu_inc(rcu_torture_batch[completed]); preempt_enable(); - if (cur_ops->get_gp_state && cur_ops->poll_gp_state) - WARN_ONCE(cur_ops->poll_gp_state(cookie), - "%s: Cookie check 2 failed %s(%d) %lu->%lu\n", - __func__, - rcu_torture_writer_state_getname(), - rcu_torture_writer_state, - cookie, cur_ops->get_gp_state()); - if (cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full) - WARN_ONCE(cur_ops->poll_gp_state_full(&cookie_full), - "%s: Cookie check 6 failed %s(%d) online %*pbl\n", - __func__, - rcu_torture_writer_state_getname(), - rcu_torture_writer_state, - cpumask_pr_args(cpu_online_mask)); + if (checkpolling) { + if (cur_ops->get_gp_state && cur_ops->poll_gp_state) + WARN_ONCE(cur_ops->poll_gp_state(cookie), + "%s: Cookie check 2 failed %s(%d) %lu->%lu\n", + __func__, + rcu_torture_writer_state_getname(), + rcu_torture_writer_state, + cookie, cur_ops->get_gp_state()); + if (cur_ops->get_gp_state_full && cur_ops->poll_gp_state_full) + WARN_ONCE(cur_ops->poll_gp_state_full(&cookie_full), + "%s: Cookie check 6 failed %s(%d) online %*pbl\n", + __func__, + rcu_torture_writer_state_getname(), + rcu_torture_writer_state, + cpumask_pr_args(cpu_online_mask)); + } rcutorture_one_extend(&readstate, 0, trsp, rtrsp); WARN_ON_ONCE(readstate); // This next splat is expected behavior if leakpointer, especially
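
The series as a whole leaves the full-state polling functions usable along the following lines. This is an illustrative sketch rather than code taken from the patches: the retired_obj structure and the retire_obj()/try_free_obj() helpers are hypothetical, and the object is assumed to have already been unlinked from every reader-visible data structure before retire_obj() is called.

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct retired_obj {
	struct rcu_gp_oldstate snap;	/* Full-state grace-period cookie. */
	/* ... payload ... */
};

/* Record the grace-period state once no new reader can find the object. */
static void retire_obj(struct retired_obj *p)
{
	start_poll_synchronize_rcu_full(&p->snap);	/* Also starts a grace period if needed. */
}

/* Called later, for example from a periodic scan of retired objects. */
static bool try_free_obj(struct retired_obj *p)
{
	if (!poll_state_synchronize_rcu_full(&p->snap))
		return false;	/* Grace period still in progress, poll again later. */
	kfree(p);
	return true;
}

A caller that does not want to start a grace period itself could instead record the cookie with get_state_synchronize_rcu_full(), one of the sources that the same_state_synchronize_rcu_full() docbook comment in patch 24 lists.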