From patchwork Tue Nov 28 08:00:27 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13470581
From: "Uladzislau Rezki (Sony)"
To: "Paul E . McKenney"
Cc: RCU, Neeraj upadhyay, Boqun Feng, Hillf Danton, Joel Fernandes, LKML, Uladzislau Rezki, Oleksiy Avramchenko, Frederic Weisbecker
Subject: [PATCH v3 1/7] rcu: Reduce synchronize_rcu() latency
Date: Tue, 28 Nov 2023 09:00:27 +0100
Message-Id: <20231128080033.288050-2-urezki@gmail.com>
In-Reply-To: <20231128080033.288050-1-urezki@gmail.com>
References: <20231128080033.288050-1-urezki@gmail.com>

A call to synchronize_rcu() can be optimized from a latency point of
view. Workloads which depend on this can benefit from it. The delay of
the wakeme_after_rcu() callback, which unblocks a waiter, depends on
several factors:

- how fast the offloading process is started.
  Combination of:
    - !CONFIG_RCU_NOCB_CPU/CONFIG_RCU_NOCB_CPU;
    - !CONFIG_RCU_LAZY/CONFIG_RCU_LAZY;
    - other.
- when started, the invoking path is interrupted due to:
    - time limit;
    - need_resched();
    - if limit is reached.
- where in a nocb list it is located;
- how fast previous callbacks completed.

Example:

1. On our embedded devices I can easily trigger the scenario when it is
the last in the list out of ~3600 callbacks:

<...>-29 [001] d..1. 21950.145313: rcu_batch_start: rcu_preempt CBs=3613 bl=28
...
<...>-29 [001] ..... 21950.152578: rcu_invoke_callback: rcu_preempt rhp=00000000b2d6dee8 func=__free_vm_area_struct.cfi_jt
<...>-29 [001] ..... 21950.152579: rcu_invoke_callback: rcu_preempt rhp=00000000a446f607 func=__free_vm_area_struct.cfi_jt
<...>-29 [001] ..... 21950.152580: rcu_invoke_callback: rcu_preempt rhp=00000000a5cab03b func=__free_vm_area_struct.cfi_jt
<...>-29 [001] ..... 21950.152581: rcu_invoke_callback: rcu_preempt rhp=0000000013b7e5ee func=__free_vm_area_struct.cfi_jt
<...>-29 [001] ..... 21950.152582: rcu_invoke_callback: rcu_preempt rhp=000000000a8ca6f9 func=__free_vm_area_struct.cfi_jt
<...>-29 [001] ..... 21950.152583: rcu_invoke_callback: rcu_preempt rhp=000000008f162ca8 func=wakeme_after_rcu.cfi_jt
<...>-29 [001] d..1. 21950.152625: rcu_batch_end: rcu_preempt CBs-invoked=3612 idle=....

2. We use cpuset/cgroup to classify tasks and assign them into different
cgroups. For example, the "background" group binds tasks only to little
CPUs, whereas the "foreground" group makes use of all CPUs. Tasks can be
migrated between groups by a request if an acceleration is needed.

See below an example of how the "surfaceflinger" task gets migrated.
Initially it is located in the "system-background" cgroup, which allows
it to run only on little cores. In order to speed it up, it can be
temporarily moved into the "foreground" cgroup, which allows it to use
big/all CPUs:

cgroup_attach_task():
  -> cgroup_migrate_execute()
  -> cpuset_can_attach()
  -> percpu_down_write()
  -> rcu_sync_enter()
  -> synchronize_rcu()
  -> now move tasks to the new cgroup.
  -> cgroup_migrate_finish()

rcuop/1-29 [000] ..... 7030.528570: rcu_invoke_callback: rcu_preempt rhp=00000000461605e0 func=wakeme_after_rcu.cfi_jt
PERFD-SERVER-1855 [000] d..1. 7030.530293: cgroup_attach_task: dst_root=3 dst_id=22 dst_level=1 dst_path=/foreground pid=1900 comm=surfaceflinger
TimerDispatch-2768 [002] d..5. 7030.537542: sched_migrate_task: comm=surfaceflinger pid=1900 prio=98 orig_cpu=0 dest_cpu=4

"Boosting a task" depends on synchronize_rcu() latency:
- the first trace shows a completion of synchronize_rcu();
- the second shows attaching a task to a new group;
- the last shows the final step, when the migration occurs.

3. To address this drawback, maintain a separate track that consists of
synchronize_rcu() callers only. After completion of a grace period, users
are handed over to a dedicated worker that processes the requests.

4. This patch reduces the latency of synchronize_rcu() by approximately
30-40% on synthetic tests. The real test case, camera launch time, shows
(time is in milliseconds):

1-run 542 vs 489 improvement 9%
2-run 540 vs 466 improvement 13%
3-run 518 vs 468 improvement 9%
4-run 531 vs 457 improvement 13%
5-run 548 vs 475 improvement 13%
6-run 509 vs 484 improvement 4%

Synthetic test (no "noise" from other callbacks):

Hardware: x86_64, 64 CPUs, 64GB of memory
Linux-6.6

- 10K tasks (simultaneous);
- each task does 1000 loops of: synchronize_rcu(); kfree(p);
  (see the sketch below).

default: CONFIG_RCU_NOCB_CPU: takes 54 seconds to complete all users;
patch:   CONFIG_RCU_NOCB_CPU: takes 35 seconds to complete all users.
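For illustration only, a minimal sketch of one such stress task (a
hypothetical reconstruction, not the actual test harness used above)
could look like this:

#include <linux/kthread.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/*
 * One synthetic-test worker: 1000 iterations of synchronize_rcu()
 * followed by kfree() of a small object. The test spawns many such
 * kthreads (10K/60K in the runs quoted above) and measures how long
 * it takes for all of them to finish.
 */
static int sr_stress_fn(void *arg)
{
	int i;

	for (i = 0; i < 1000; i++) {
		void *p = kmalloc(32, GFP_KERNEL);

		synchronize_rcu();
		kfree(p);
	}

	return 0;
}

/* Spawned N times, e.g.: kthread_run(sr_stress_fn, NULL, "sr-stress/%u", i); */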
Running 60K gives approximately same results on my setup. Please note it is without any interaction with another type of callbacks, otherwise it will impact a lot a default case. Signed-off-by: Uladzislau Rezki (Sony) --- kernel/rcu/tree.c | 135 +++++++++++++++++++++++++++++++++++++++++- kernel/rcu/tree_exp.h | 2 +- 2 files changed, 135 insertions(+), 2 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index cb1caefa8bd0..57cfa467697b 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -1384,6 +1384,105 @@ static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap) raw_spin_unlock_irqrestore_rcu_node(rnp, flags); } +/* + * There are three lists for handling synchronize_rcu() users. + * A first list corresponds to new coming users, second for users + * which wait for a grace period and third is for which a grace + * period is passed. + */ +static struct sr_normal_state { + struct llist_head srs_next; /* request a GP users. */ + struct llist_head srs_wait; /* wait for GP users. */ + struct llist_head srs_done; /* ready for GP users. */ + + /* + * In order to add a batch of nodes to already + * existing srs-done-list, a tail of srs-wait-list + * is maintained. + */ + struct llist_node *srs_wait_tail; +} sr; + +/* Disabled by default. */ +static int rcu_normal_wake_from_gp; +module_param(rcu_normal_wake_from_gp, int, 0644); + +static void rcu_sr_normal_complete(struct llist_node *node) +{ + struct rcu_synchronize *rs = container_of( + (struct rcu_head *) node, struct rcu_synchronize, head); + unsigned long oldstate = (unsigned long) rs->head.func; + + WARN_ONCE(!rcu_gp_is_expedited() && !poll_state_synchronize_rcu(oldstate), + "A full grace period is not passed yet: %lu", + rcu_seq_diff(get_state_synchronize_rcu(), oldstate)); + + /* Finally. */ + complete(&rs->completion); +} + +static void rcu_sr_normal_gp_cleanup_work(struct work_struct *work) +{ + struct llist_node *done, *rcu, *next; + + done = llist_del_all(&sr.srs_done); + if (!done) + return; + + llist_for_each_safe(rcu, next, done) + rcu_sr_normal_complete(rcu); +} +static DECLARE_WORK(sr_normal_gp_cleanup, rcu_sr_normal_gp_cleanup_work); + +/* + * Helper function for rcu_gp_cleanup(). + */ +static void rcu_sr_normal_gp_cleanup(void) +{ + struct llist_node *head, *tail; + + if (llist_empty(&sr.srs_wait)) + return; + + tail = READ_ONCE(sr.srs_wait_tail); + head = __llist_del_all(&sr.srs_wait); + + if (head) { + /* Can be not empty. */ + llist_add_batch(head, tail, &sr.srs_done); + queue_work(system_highpri_wq, &sr_normal_gp_cleanup); + } +} + +/* + * Helper function for rcu_gp_init(). + */ +static void rcu_sr_normal_gp_init(void) +{ + struct llist_node *head, *tail; + + if (llist_empty(&sr.srs_next)) + return; + + tail = llist_del_all(&sr.srs_next); + head = llist_reverse_order(tail); + + /* + * A waiting list of GP should be empty on this step, + * since a GP-kthread, rcu_gp_init() -> gp_cleanup(), + * rolls it over. If not, it is a BUG, warn a user. + */ + WARN_ON_ONCE(!llist_empty(&sr.srs_wait)); + + WRITE_ONCE(sr.srs_wait_tail, tail); + __llist_add_batch(head, tail, &sr.srs_wait); +} + +static void rcu_sr_normal_add_req(struct rcu_synchronize *rs) +{ + llist_add((struct llist_node *) &rs->head, &sr.srs_next); +} + /* * Initialize a new grace period. Return false if no grace period required. */ @@ -1418,6 +1517,7 @@ static noinline_for_stack bool rcu_gp_init(void) /* Record GP times before starting GP, hence rcu_seq_start(). 
*/ rcu_seq_start(&rcu_state.gp_seq); ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq); + rcu_sr_normal_gp_init(); trace_rcu_grace_period(rcu_state.name, rcu_state.gp_seq, TPS("start")); rcu_poll_gp_seq_start(&rcu_state.gp_seq_polled_snap); raw_spin_unlock_irq_rcu_node(rnp); @@ -1775,6 +1875,9 @@ static noinline void rcu_gp_cleanup(void) } raw_spin_unlock_irq_rcu_node(rnp); + // Make synchronize_rcu() users aware of the end of old grace period. + rcu_sr_normal_gp_cleanup(); + // If strict, make all CPUs aware of the end of the old grace period. if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD)) on_each_cpu(rcu_strict_gp_boundary, NULL, 0); @@ -3487,6 +3590,36 @@ static int rcu_blocking_is_gp(void) return true; } +/* + * Helper function for the synchronize_rcu() API. + */ +static void synchronize_rcu_normal(void) +{ + struct rcu_synchronize rs; + + if (!READ_ONCE(rcu_normal_wake_from_gp)) { + wait_rcu_gp(call_rcu_hurry); + return; + } + + init_rcu_head_on_stack(&rs.head); + init_completion(&rs.completion); + + /* + * This code might be preempted, therefore take a GP + * snapshot before adding a request. + */ + rs.head.func = (void *) get_state_synchronize_rcu(); + rcu_sr_normal_add_req(&rs); + + /* Kick a GP and start waiting. */ + (void) start_poll_synchronize_rcu(); + + /* Now we can wait. */ + wait_for_completion(&rs.completion); + destroy_rcu_head_on_stack(&rs.head); +} + /** * synchronize_rcu - wait until a grace period has elapsed. * @@ -3538,7 +3671,7 @@ void synchronize_rcu(void) if (rcu_gp_is_expedited()) synchronize_rcu_expedited(); else - wait_rcu_gp(call_rcu_hurry); + synchronize_rcu_normal(); return; } diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h index 8239b39d945b..53cc1d389f96 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h @@ -983,7 +983,7 @@ void synchronize_rcu_expedited(void) /* If expedited grace periods are prohibited, fall back to normal. 
*/ if (rcu_gp_is_normal()) { - wait_rcu_gp(call_rcu_hurry); + synchronize_rcu_normal(); return; }

From patchwork Tue Nov 28 08:00:28 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13470580
From: "Uladzislau Rezki (Sony)"
To: "Paul E . McKenney"
Cc: RCU, Neeraj upadhyay, Boqun Feng, Hillf Danton, Joel Fernandes, LKML, Uladzislau Rezki, Oleksiy Avramchenko, Frederic Weisbecker
Subject: [PATCH v3 2/7] rcu: Add a trace event for synchronize_rcu_normal()
Date: Tue, 28 Nov 2023 09:00:28 +0100
Message-Id: <20231128080033.288050-3-urezki@gmail.com>
In-Reply-To: <20231128080033.288050-1-urezki@gmail.com>
References: <20231128080033.288050-1-urezki@gmail.com>

Add an rcu_sr_normal() trace event. It takes three arguments: the first
one is the name of the RCU flavour, the second one is an id of the user
which triggers synchronize_rcu_normal(), and the last one is an event.
There are two traces in the synchronize_rcu_normal(). On entry, when a new request is registered and on exit point when request is completed. Please note, CONFIG_RCU_TRACE=y is required to activate traces. Signed-off-by: Uladzislau Rezki (Sony) --- include/trace/events/rcu.h | 27 +++++++++++++++++++++++++++ kernel/rcu/tree.c | 7 ++++++- 2 files changed, 33 insertions(+), 1 deletion(-) diff --git a/include/trace/events/rcu.h b/include/trace/events/rcu.h index 2ef9c719772a..31b3e0d3e65f 100644 --- a/include/trace/events/rcu.h +++ b/include/trace/events/rcu.h @@ -707,6 +707,33 @@ TRACE_EVENT_RCU(rcu_invoke_kfree_bulk_callback, __entry->rcuname, __entry->p, __entry->nr_records) ); +/* + * Tracepoint for a normal synchronize_rcu() states. The first argument + * is the RCU flavor, the second argument is a pointer to rcu_head the + * last one is an event. + */ +TRACE_EVENT_RCU(rcu_sr_normal, + + TP_PROTO(const char *rcuname, struct rcu_head *rhp, const char *srevent), + + TP_ARGS(rcuname, rhp, srevent), + + TP_STRUCT__entry( + __field(const char *, rcuname) + __field(void *, rhp) + __field(const char *, srevent) + ), + + TP_fast_assign( + __entry->rcuname = rcuname; + __entry->rhp = rhp; + __entry->srevent = srevent; + ), + + TP_printk("%s rhp=0x%p event=%s", + __entry->rcuname, __entry->rhp, __entry->srevent) +); + /* * Tracepoint for exiting rcu_do_batch after RCU callbacks have been * invoked. The first argument is the name of the RCU flavor, diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 57cfa467697b..975621ef40e3 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3597,9 +3597,11 @@ static void synchronize_rcu_normal(void) { struct rcu_synchronize rs; + trace_rcu_sr_normal(rcu_state.name, &rs.head, TPS("request")); + if (!READ_ONCE(rcu_normal_wake_from_gp)) { wait_rcu_gp(call_rcu_hurry); - return; + goto trace_complete_out; } init_rcu_head_on_stack(&rs.head); @@ -3618,6 +3620,9 @@ static void synchronize_rcu_normal(void) /* Now we can wait. 
*/ wait_for_completion(&rs.completion); destroy_rcu_head_on_stack(&rs.head); + +trace_complete_out: + trace_rcu_sr_normal(rcu_state.name, &rs.head, TPS("complete")); } /**

From patchwork Tue Nov 28 08:00:29 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13470582
From: "Uladzislau Rezki (Sony)"
To: "Paul E . McKenney"
Cc: RCU, Neeraj upadhyay, Boqun Feng, Hillf Danton, Joel Fernandes, LKML, Uladzislau Rezki, Oleksiy Avramchenko, Frederic Weisbecker
Subject: [PATCH v3 3/7] doc: Add rcutree.rcu_normal_wake_from_gp to kernel-parameters.txt
Date: Tue, 28 Nov 2023 09:00:29 +0100
Message-Id: <20231128080033.288050-4-urezki@gmail.com>
In-Reply-To: <20231128080033.288050-1-urezki@gmail.com>
References: <20231128080033.288050-1-urezki@gmail.com>

This commit adds rcutree.rcu_normal_wake_from_gp description to the
kernel-parameters.txt file.
Signed-off-by: Uladzislau Rezki (Sony) --- Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 0a1731a0f0ef..65bfbfb09522 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4944,6 +4944,20 @@ this kernel boot parameter, forcibly setting it to zero. + rcutree.rcu_normal_wake_from_gp= [KNL] + Reduces a latency of synchronize_rcu() call. This approach + maintains its own track of synchronize_rcu() callers, so it + does not interact with regular callbacks because it does not + use a call_rcu[_hurry]() path. Please note, this is for a + normal grace period. + + How to enable it: + + echo 1 > /sys/module/rcutree/parameters/rcu_normal_wake_from_gp + or pass a boot parameter "rcutree.rcu_normal_wake_from_gp=1" + + Default is 0. + rcuscale.gp_async= [KNL] Measure performance of asynchronous grace-period primitives such as call_rcu(). From patchwork Tue Nov 28 08:00:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Uladzislau Rezki X-Patchwork-Id: 13470583 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="AXXnuMTy" Received: from mail-lf1-x12e.google.com (mail-lf1-x12e.google.com [IPv6:2a00:1450:4864:20::12e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D799998; Tue, 28 Nov 2023 00:00:41 -0800 (PST) Received: by mail-lf1-x12e.google.com with SMTP id 2adb3069b0e04-50a6ff9881fso7791299e87.1; Tue, 28 Nov 2023 00:00:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1701158440; x=1701763240; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=yCjEWa8Q2f6t37ckyMp7lNr3JFQtn/U/ClZD7LTuOUg=; b=AXXnuMTyb0N7xumjFXww93ZjtMgbWMphmbg+ENogVVKejEszhJFXAI3FS789kpqic8 bsZHnNz7E9pg/Jk0q0qzXtrY7msfgnPGje2x0WWMimxbawmjsiyaMdPgj7jwcbazd2C/ UGMym+8lxL5Gh+ZCReU6+Qin3c1ikueQTpZFrptmYBGRm7MZlUtyO7JIhq72hoPIOQos HVp5txnWjlFqUgY5qo5gK50ioAYftAHCLU8XNyYDVs7q9BAunqp5iCjhTYjJhcWDBitV 5yHihsM4bREmcpiAO+TmnZhJF8vBAaBAn9pEyga5+EnkC/XDagAVGpM3k8TI0CIOT6YM 3akg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701158440; x=1701763240; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=yCjEWa8Q2f6t37ckyMp7lNr3JFQtn/U/ClZD7LTuOUg=; b=uqrCQ1ONq0W2lDQ5D5u1cIA8mA6szC9EHKZessk5jpQN+J8pulxwSEJD5YbpyfD9Tm izY2vrTUAduh1myrABxUUAZuzs+60Ne/JL19UsVlOq0YMPkbze3NMULPsPHqqQJTd7Xh DPB4DcAOMKGwIsUBWhd3vmPEztS8CHxQTWCvgl/+895F5S3aI8ROpIr174ZYohz/deHK 2oliRTIV6IZc8P3Ua5TnowlPsH4LKaoDogTTev4Yg1c2ReVfaMFHrs037T1kNfkF516B kmp1M0NOWwT1XvhyiCiT/prQZTnkkkzvrVcJoDQic4K87gQms+4uizTEkwfOk6c0OYSu mZ0g== X-Gm-Message-State: AOJu0YxvSRUCxialhGFYDTvvHL6FbQAdO7Qi6Lrlhf7XqVaw9OhnCYuN O0x0z8D5cAXNPU+XlqzFS4Ap+JdPM6Q= X-Google-Smtp-Source: AGHT+IEuM/s74Np5OlgdJ9Nq72IMiFoGnCWNA+XDMCq04zLOvZLRTZaS4obMnKQ5drpdoqdD41HdDQ== X-Received: by 2002:a19:910e:0:b0:507:a04c:1bcf with SMTP id t14-20020a19910e000000b00507a04c1bcfmr7806324lfd.58.1701158439522; Tue, 28 Nov 2023 00:00:39 -0800 (PST) Received: from pc638.lan ([155.137.26.201]) by smtp.gmail.com with ESMTPSA id 
o16-20020ac24bd0000000b004fe202a5c7csm1765501lfq.135.2023.11.28.00.00.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 28 Nov 2023 00:00:39 -0800 (PST) From: "Uladzislau Rezki (Sony)" To: "Paul E . McKenney" Cc: RCU , Neeraj upadhyay , Boqun Feng , Hillf Danton , Joel Fernandes , LKML , Uladzislau Rezki , Oleksiy Avramchenko , Frederic Weisbecker Subject: [PATCH v3 4/7] rcu: Improve handling of synchronize_rcu() users Date: Tue, 28 Nov 2023 09:00:30 +0100 Message-Id: <20231128080033.288050-5-urezki@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20231128080033.288050-1-urezki@gmail.com> References: <20231128080033.288050-1-urezki@gmail.com> Precedence: bulk X-Mailing-List: rcu@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Neeraj Upadhyay Currently, processing of the next batch of rcu_synchronize nodes for the new grace period, requires doing a llist reversal operation to find the tail element of the list. This can be a very costly operation (high number of cache misses) for a long list. To address this, this patch introduces a "dummy-wait-node" entity. At every grace period init, a new wait node is added to the llist. This wait node is used as wait tail for this new grace period. This allows lockless additions of new rcu_synchronize nodes in the rcu_sr_normal_add_req(), while the cleanup work executes and does the progress. The dummy nodes are removed on next round of cleanup work execution. Signed-off-by: Uladzislau Rezki (Sony) Signed-off-by: Neeraj Upadhyay --- kernel/rcu/tree.c | 270 +++++++++++++++++++++++++++++++++++++++------- 1 file changed, 233 insertions(+), 37 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 975621ef40e3..d7b48996825f 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -1384,25 +1384,173 @@ static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap) raw_spin_unlock_irqrestore_rcu_node(rnp, flags); } +#define SR_NORMAL_GP_WAIT_HEAD_MAX 5 + +struct sr_wait_node { + atomic_t inuse; + struct llist_node node; +}; + /* - * There are three lists for handling synchronize_rcu() users. - * A first list corresponds to new coming users, second for users - * which wait for a grace period and third is for which a grace - * period is passed. + * There is a single llist, which is used for handling + * synchronize_rcu() users' enqueued rcu_synchronize nodes. + * Within this llist, there are two tail pointers: + * + * wait tail: Tracks the set of nodes, which need to + * wait for the current GP to complete. + * done tail: Tracks the set of nodes, for which grace + * period has elapsed. These nodes processing + * will be done as part of the cleanup work + * execution by a kworker. + * + * At every grace period init, a new wait node is added + * to the llist. This wait node is used as wait tail + * for this new grace period. Given that there are a fixed + * number of wait nodes, if all wait nodes are in use + * (which can happen when kworker callback processing + * is delayed) and additional grace period is requested. + * This means, a system is slow in processing callbacks. + * + * TODO: If a slow processing is detected, a first node + * in the llist should be used as a wait-tail for this + * grace period, therefore users which should wait due + * to a slow process are handled by _this_ grace period + * and not next. 
+ * + * Below is an illustration of how the done and wait + * tail pointers move from one set of rcu_synchronize nodes + * to the other, as grace periods start and finish and + * nodes are processed by kworker. + * + * + * a. Initial llist callbacks list: + * + * +----------+ +--------+ +-------+ + * | | | | | | + * | head |---------> | cb2 |--------->| cb1 | + * | | | | | | + * +----------+ +--------+ +-------+ + * + * + * + * b. New GP1 Start: + * + * WAIT TAIL + * | + * | + * v + * +----------+ +--------+ +--------+ +-------+ + * | | | | | | | | + * | head ------> wait |------> cb2 |------> | cb1 | + * | | | head1 | | | | | + * +----------+ +--------+ +--------+ +-------+ + * + * + * + * c. GP completion: + * + * WAIT_TAIL == DONE_TAIL + * + * DONE TAIL + * | + * | + * v + * +----------+ +--------+ +--------+ +-------+ + * | | | | | | | | + * | head ------> wait |------> cb2 |------> | cb1 | + * | | | head1 | | | | | + * +----------+ +--------+ +--------+ +-------+ + * + * + * + * d. New callbacks and GP2 start: + * + * WAIT TAIL DONE TAIL + * | | + * | | + * v v + * +----------+ +------+ +------+ +------+ +-----+ +-----+ +-----+ + * | | | | | | | | | | | | | | + * | head ------> wait |--->| cb4 |--->| cb3 |--->|wait |--->| cb2 |--->| cb1 | + * | | | head2| | | | | |head1| | | | | + * +----------+ +------+ +------+ +------+ +-----+ +-----+ +-----+ + * + * + * + * e. GP2 completion: + * + * WAIT_TAIL == DONE_TAIL + * DONE TAIL + * | + * | + * v + * +----------+ +------+ +------+ +------+ +-----+ +-----+ +-----+ + * | | | | | | | | | | | | | | + * | head ------> wait |--->| cb4 |--->| cb3 |--->|wait |--->| cb2 |--->| cb1 | + * | | | head2| | | | | |head1| | | | | + * +----------+ +------+ +------+ +------+ +-----+ +-----+ +-----+ + * + * + * While the llist state transitions from d to e, a kworker + * can start executing rcu_sr_normal_gp_cleanup_work() and + * can observe either the old done tail (@c) or the new + * done tail (@e). So, done tail updates and reads need + * to use the rel-acq semantics. If the concurrent kworker + * observes the old done tail, the newly queued work + * execution will process the updated done tail. If the + * concurrent kworker observes the new done tail, then + * the newly queued work will skip processing the done + * tail, as workqueue semantics guarantees that the new + * work is executed only after the previous one completes. + * + * f. kworker callbacks processing complete: + * + * + * DONE TAIL + * | + * | + * v + * +----------+ +--------+ + * | | | | + * | head ------> wait | + * | | | head2 | + * +----------+ +--------+ + * */ static struct sr_normal_state { struct llist_head srs_next; /* request a GP users. */ - struct llist_head srs_wait; /* wait for GP users. */ - struct llist_head srs_done; /* ready for GP users. */ - - /* - * In order to add a batch of nodes to already - * existing srs-done-list, a tail of srs-wait-list - * is maintained. - */ - struct llist_node *srs_wait_tail; + struct llist_node *srs_wait_tail; /* wait for GP users. */ + struct llist_node *srs_done_tail; /* ready for GP users. 
*/ + struct sr_wait_node srs_wait_nodes[SR_NORMAL_GP_WAIT_HEAD_MAX]; } sr; +static bool rcu_sr_is_wait_head(struct llist_node *node) +{ + return &(sr.srs_wait_nodes)[0].node <= node && + node <= &(sr.srs_wait_nodes)[SR_NORMAL_GP_WAIT_HEAD_MAX - 1].node; +} + +static struct llist_node *rcu_sr_get_wait_head(void) +{ + struct sr_wait_node *sr_wn; + int i; + + for (i = 0; i < SR_NORMAL_GP_WAIT_HEAD_MAX; i++) { + sr_wn = &(sr.srs_wait_nodes)[i]; + + if (!atomic_cmpxchg_acquire(&sr_wn->inuse, 0, 1)) + return &sr_wn->node; + } + + return NULL; +} + +static void rcu_sr_put_wait_head(struct llist_node *node) +{ + struct sr_wait_node *sr_wn = container_of(node, struct sr_wait_node, node); + atomic_set_release(&sr_wn->inuse, 0); +} + /* Disabled by default. */ static int rcu_normal_wake_from_gp; module_param(rcu_normal_wake_from_gp, int, 0644); @@ -1423,14 +1571,44 @@ static void rcu_sr_normal_complete(struct llist_node *node) static void rcu_sr_normal_gp_cleanup_work(struct work_struct *work) { - struct llist_node *done, *rcu, *next; + struct llist_node *done, *rcu, *next, *head; - done = llist_del_all(&sr.srs_done); + /* + * This work execution can potentially execute + * while a new done tail is being updated by + * grace period kthread in rcu_sr_normal_gp_cleanup(). + * So, read and updates of done tail need to + * follow acq-rel semantics. + * + * Given that wq semantics guarantees that a single work + * cannot execute concurrently by multiple kworkers, + * the done tail list manipulations are protected here. + */ + done = smp_load_acquire(&sr.srs_done_tail); if (!done) return; - llist_for_each_safe(rcu, next, done) - rcu_sr_normal_complete(rcu); + WARN_ON_ONCE(!rcu_sr_is_wait_head(done)); + head = done->next; + done->next = NULL; + + /* + * The dummy node, which is pointed to by the + * done tail which is acq-read above is not removed + * here. This allows lockless additions of new + * rcu_synchronize nodes in rcu_sr_normal_add_req(), + * while the cleanup work executes. The dummy + * nodes is removed, in next round of cleanup + * work execution. + */ + llist_for_each_safe(rcu, next, head) { + if (!rcu_sr_is_wait_head(rcu)) { + rcu_sr_normal_complete(rcu); + continue; + } + + rcu_sr_put_wait_head(rcu); + } } static DECLARE_WORK(sr_normal_gp_cleanup, rcu_sr_normal_gp_cleanup_work); @@ -1439,43 +1617,56 @@ static DECLARE_WORK(sr_normal_gp_cleanup, rcu_sr_normal_gp_cleanup_work); */ static void rcu_sr_normal_gp_cleanup(void) { - struct llist_node *head, *tail; + struct llist_node *wait_tail; - if (llist_empty(&sr.srs_wait)) + wait_tail = sr.srs_wait_tail; + if (wait_tail == NULL) return; - tail = READ_ONCE(sr.srs_wait_tail); - head = __llist_del_all(&sr.srs_wait); + sr.srs_wait_tail = NULL; + ASSERT_EXCLUSIVE_WRITER(sr.srs_wait_tail); - if (head) { - /* Can be not empty. */ - llist_add_batch(head, tail, &sr.srs_done); + // concurrent sr_normal_gp_cleanup work might observe this update. + smp_store_release(&sr.srs_done_tail, wait_tail); + ASSERT_EXCLUSIVE_WRITER(sr.srs_done_tail); + + if (wait_tail) queue_work(system_highpri_wq, &sr_normal_gp_cleanup); - } } /* * Helper function for rcu_gp_init(). 
*/ -static void rcu_sr_normal_gp_init(void) +static bool rcu_sr_normal_gp_init(void) { - struct llist_node *head, *tail; + struct llist_node *first; + struct llist_node *wait_head; + bool start_new_poll = false; - if (llist_empty(&sr.srs_next)) - return; + first = READ_ONCE(sr.srs_next.first); + if (!first || rcu_sr_is_wait_head(first)) + return start_new_poll; + + wait_head = rcu_sr_get_wait_head(); + if (!wait_head) { + // Kick another GP to retry. + start_new_poll = true; + return start_new_poll; + } - tail = llist_del_all(&sr.srs_next); - head = llist_reverse_order(tail); + /* Inject a wait-dummy-node. */ + llist_add(wait_head, &sr.srs_next); /* - * A waiting list of GP should be empty on this step, - * since a GP-kthread, rcu_gp_init() -> gp_cleanup(), + * A waiting list of rcu_synchronize nodes should be empty on + * this step, since a GP-kthread, rcu_gp_init() -> gp_cleanup(), * rolls it over. If not, it is a BUG, warn a user. */ - WARN_ON_ONCE(!llist_empty(&sr.srs_wait)); + WARN_ON_ONCE(sr.srs_wait_tail != NULL); + sr.srs_wait_tail = wait_head; + ASSERT_EXCLUSIVE_WRITER(sr.srs_wait_tail); - WRITE_ONCE(sr.srs_wait_tail, tail); - __llist_add_batch(head, tail, &sr.srs_wait); + return start_new_poll; } static void rcu_sr_normal_add_req(struct rcu_synchronize *rs) @@ -1493,6 +1684,7 @@ static noinline_for_stack bool rcu_gp_init(void) unsigned long mask; struct rcu_data *rdp; struct rcu_node *rnp = rcu_get_root(); + bool start_new_poll; WRITE_ONCE(rcu_state.gp_activity, jiffies); raw_spin_lock_irq_rcu_node(rnp); @@ -1517,11 +1709,15 @@ static noinline_for_stack bool rcu_gp_init(void) /* Record GP times before starting GP, hence rcu_seq_start(). */ rcu_seq_start(&rcu_state.gp_seq); ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq); - rcu_sr_normal_gp_init(); + start_new_poll = rcu_sr_normal_gp_init(); trace_rcu_grace_period(rcu_state.name, rcu_state.gp_seq, TPS("start")); rcu_poll_gp_seq_start(&rcu_state.gp_seq_polled_snap); raw_spin_unlock_irq_rcu_node(rnp); + // New poll request after rnp unlock + if (start_new_poll) + (void) start_poll_synchronize_rcu(); + /* * Apply per-leaf buffered online and offline operations to * the rcu_node tree. 
Note that this new grace period need not

From patchwork Tue Nov 28 08:00:31 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13470584
From: "Uladzislau Rezki (Sony)"
To: "Paul E . McKenney"
Cc: RCU, Neeraj upadhyay, Boqun Feng, Hillf Danton, Joel Fernandes, LKML, Uladzislau Rezki, Oleksiy Avramchenko, Frederic Weisbecker
Subject: [PATCH v3 5/7] rcu: Support direct wake-up of synchronize_rcu() users
Date: Tue, 28 Nov 2023 09:00:31 +0100
Message-Id: <20231128080033.288050-6-urezki@gmail.com>
In-Reply-To: <20231128080033.288050-1-urezki@gmail.com>
References: <20231128080033.288050-1-urezki@gmail.com>

This patch introduces a small enhancement which allows a direct wake-up
of synchronize_rcu() callers. It occurs after the completion of a grace
period, thus by the gp-kthread.
Number of clients is limited by the hard-coded maximum allowed threshold. The remaining part, if still exists is deferred to a main worker. Signed-off-by: Uladzislau Rezki (Sony) --- kernel/rcu/tree.c | 39 +++++++++++++++++++++++++++++++++++++-- 1 file changed, 37 insertions(+), 2 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index d7b48996825f..69663a6d5050 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -1384,6 +1384,12 @@ static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap) raw_spin_unlock_irqrestore_rcu_node(rnp, flags); } +/* + * A max threshold for synchronize_rcu() users which are + * awaken directly by the rcu_gp_kthread(). Left part is + * deferred to the main worker. + */ +#define SR_MAX_USERS_WAKE_FROM_GP 5 #define SR_NORMAL_GP_WAIT_HEAD_MAX 5 struct sr_wait_node { @@ -1617,7 +1623,8 @@ static DECLARE_WORK(sr_normal_gp_cleanup, rcu_sr_normal_gp_cleanup_work); */ static void rcu_sr_normal_gp_cleanup(void) { - struct llist_node *wait_tail; + struct llist_node *wait_tail, *head, *rcu; + int done = 0; wait_tail = sr.srs_wait_tail; if (wait_tail == NULL) @@ -1626,11 +1633,39 @@ static void rcu_sr_normal_gp_cleanup(void) sr.srs_wait_tail = NULL; ASSERT_EXCLUSIVE_WRITER(sr.srs_wait_tail); + WARN_ON_ONCE(!rcu_sr_is_wait_head(wait_tail)); + head = wait_tail->next; + + /* + * Process (a) and (d) cases. See an illustration. Apart of + * that it handles the scenario when all clients are done, + * wait-head is released if last. The worker is not kicked. + */ + llist_for_each_safe(rcu, head, head) { + if (rcu_sr_is_wait_head(rcu)) { + if (!rcu->next) { + rcu_sr_put_wait_head(rcu); + wait_tail->next = NULL; + } else { + wait_tail->next = rcu; + } + + break; + } + + rcu_sr_normal_complete(rcu); + // It can be last, update a next on this step. + wait_tail->next = head; + + if (++done == SR_MAX_USERS_WAKE_FROM_GP) + break; + } + // concurrent sr_normal_gp_cleanup work might observe this update. 
smp_store_release(&sr.srs_done_tail, wait_tail); ASSERT_EXCLUSIVE_WRITER(sr.srs_done_tail); - if (wait_tail) + if (wait_tail->next) queue_work(system_highpri_wq, &sr_normal_gp_cleanup); }

From patchwork Tue Nov 28 08:00:32 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13470585
From: "Uladzislau Rezki (Sony)"
To: "Paul E . McKenney"
Cc: RCU, Neeraj upadhyay, Boqun Feng, Hillf Danton, Joel Fernandes, LKML, Uladzislau Rezki, Oleksiy Avramchenko, Frederic Weisbecker
Subject: [PATCH v3 6/7] rcu: Move sync related data to rcu_state structure
Date: Tue, 28 Nov 2023 09:00:32 +0100
Message-Id: <20231128080033.288050-7-urezki@gmail.com>
In-Reply-To: <20231128080033.288050-1-urezki@gmail.com>
References: <20231128080033.288050-1-urezki@gmail.com>

Move synchronize_rcu() main control data under the rcu_state structure.
An access is done via "rcu_state" global variable. Signed-off-by: Uladzislau Rezki (Sony) --- kernel/rcu/tree.c | 50 ++++++++++++++--------------------------------- kernel/rcu/tree.h | 19 ++++++++++++++++++ 2 files changed, 34 insertions(+), 35 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 69663a6d5050..c0d3e46730e8 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -1384,19 +1384,6 @@ static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap) raw_spin_unlock_irqrestore_rcu_node(rnp, flags); } -/* - * A max threshold for synchronize_rcu() users which are - * awaken directly by the rcu_gp_kthread(). Left part is - * deferred to the main worker. - */ -#define SR_MAX_USERS_WAKE_FROM_GP 5 -#define SR_NORMAL_GP_WAIT_HEAD_MAX 5 - -struct sr_wait_node { - atomic_t inuse; - struct llist_node node; -}; - /* * There is a single llist, which is used for handling * synchronize_rcu() users' enqueued rcu_synchronize nodes. @@ -1523,17 +1510,10 @@ struct sr_wait_node { * +----------+ +--------+ * */ -static struct sr_normal_state { - struct llist_head srs_next; /* request a GP users. */ - struct llist_node *srs_wait_tail; /* wait for GP users. */ - struct llist_node *srs_done_tail; /* ready for GP users. */ - struct sr_wait_node srs_wait_nodes[SR_NORMAL_GP_WAIT_HEAD_MAX]; -} sr; - static bool rcu_sr_is_wait_head(struct llist_node *node) { - return &(sr.srs_wait_nodes)[0].node <= node && - node <= &(sr.srs_wait_nodes)[SR_NORMAL_GP_WAIT_HEAD_MAX - 1].node; + return &(rcu_state.srs_wait_nodes)[0].node <= node && + node <= &(rcu_state.srs_wait_nodes)[SR_NORMAL_GP_WAIT_HEAD_MAX - 1].node; } static struct llist_node *rcu_sr_get_wait_head(void) @@ -1542,7 +1522,7 @@ static struct llist_node *rcu_sr_get_wait_head(void) int i; for (i = 0; i < SR_NORMAL_GP_WAIT_HEAD_MAX; i++) { - sr_wn = &(sr.srs_wait_nodes)[i]; + sr_wn = &(rcu_state.srs_wait_nodes)[i]; if (!atomic_cmpxchg_acquire(&sr_wn->inuse, 0, 1)) return &sr_wn->node; @@ -1590,7 +1570,7 @@ static void rcu_sr_normal_gp_cleanup_work(struct work_struct *work) * cannot execute concurrently by multiple kworkers, * the done tail list manipulations are protected here. */ - done = smp_load_acquire(&sr.srs_done_tail); + done = smp_load_acquire(&rcu_state.srs_done_tail); if (!done) return; @@ -1626,12 +1606,12 @@ static void rcu_sr_normal_gp_cleanup(void) struct llist_node *wait_tail, *head, *rcu; int done = 0; - wait_tail = sr.srs_wait_tail; + wait_tail = rcu_state.srs_wait_tail; if (wait_tail == NULL) return; - sr.srs_wait_tail = NULL; - ASSERT_EXCLUSIVE_WRITER(sr.srs_wait_tail); + rcu_state.srs_wait_tail = NULL; + ASSERT_EXCLUSIVE_WRITER(rcu_state.srs_wait_tail); WARN_ON_ONCE(!rcu_sr_is_wait_head(wait_tail)); head = wait_tail->next; @@ -1662,8 +1642,8 @@ static void rcu_sr_normal_gp_cleanup(void) } // concurrent sr_normal_gp_cleanup work might observe this update. - smp_store_release(&sr.srs_done_tail, wait_tail); - ASSERT_EXCLUSIVE_WRITER(sr.srs_done_tail); + smp_store_release(&rcu_state.srs_done_tail, wait_tail); + ASSERT_EXCLUSIVE_WRITER(rcu_state.srs_done_tail); if (wait_tail->next) queue_work(system_highpri_wq, &sr_normal_gp_cleanup); @@ -1678,7 +1658,7 @@ static bool rcu_sr_normal_gp_init(void) struct llist_node *wait_head; bool start_new_poll = false; - first = READ_ONCE(sr.srs_next.first); + first = READ_ONCE(rcu_state.srs_next.first); if (!first || rcu_sr_is_wait_head(first)) return start_new_poll; @@ -1690,23 +1670,23 @@ static bool rcu_sr_normal_gp_init(void) } /* Inject a wait-dummy-node. 
*/ - llist_add(wait_head, &sr.srs_next); + llist_add(wait_head, &rcu_state.srs_next); /* * A waiting list of rcu_synchronize nodes should be empty on * this step, since a GP-kthread, rcu_gp_init() -> gp_cleanup(), * rolls it over. If not, it is a BUG, warn a user. */ - WARN_ON_ONCE(sr.srs_wait_tail != NULL); - sr.srs_wait_tail = wait_head; - ASSERT_EXCLUSIVE_WRITER(sr.srs_wait_tail); + WARN_ON_ONCE(rcu_state.srs_wait_tail != NULL); + rcu_state.srs_wait_tail = wait_head; + ASSERT_EXCLUSIVE_WRITER(rcu_state.srs_wait_tail); return start_new_poll; } static void rcu_sr_normal_add_req(struct rcu_synchronize *rs) { - llist_add((struct llist_node *) &rs->head, &sr.srs_next); + llist_add((struct llist_node *) &rs->head, &rcu_state.srs_next); } /* diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index 192536916f9a..f72166b5067a 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -316,6 +316,19 @@ do { \ __set_current_state(TASK_RUNNING); \ } while (0) +/* + * A max threshold for synchronize_rcu() users which are + * awaken directly by the rcu_gp_kthread(). Left part is + * deferred to the main worker. + */ +#define SR_MAX_USERS_WAKE_FROM_GP 5 +#define SR_NORMAL_GP_WAIT_HEAD_MAX 5 + +struct sr_wait_node { + atomic_t inuse; + struct llist_node node; +}; + /* * RCU global state, including node hierarchy. This hierarchy is * represented in "heap" form in a dense array. The root (first level) @@ -397,6 +410,12 @@ struct rcu_state { /* Synchronize offline with */ /* GP pre-initialization. */ int nocb_is_setup; /* nocb is setup from boot */ + + /* synchronize_rcu() part. */ + struct llist_head srs_next; /* request a GP users. */ + struct llist_node *srs_wait_tail; /* wait for GP users. */ + struct llist_node *srs_done_tail; /* ready for GP users. */ + struct sr_wait_node srs_wait_nodes[SR_NORMAL_GP_WAIT_HEAD_MAX]; }; /* Values for rcu_state structure's gp_flags field. 
*/

From patchwork Tue Nov 28 08:00:33 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13470586
From: "Uladzislau Rezki (Sony)"
To: "Paul E . McKenney"
Cc: RCU, Neeraj upadhyay, Boqun Feng, Hillf Danton, Joel Fernandes, LKML, Uladzislau Rezki, Oleksiy Avramchenko, Frederic Weisbecker
Subject: [PATCH v3 7/7] rcu: Add CONFIG_RCU_SR_NORMAL_DEBUG_GP
Date: Tue, 28 Nov 2023 09:00:33 +0100
Message-Id: <20231128080033.288050-8-urezki@gmail.com>
In-Reply-To: <20231128080033.288050-1-urezki@gmail.com>
References: <20231128080033.288050-1-urezki@gmail.com>

This option enables additional debugging for detecting an incomplete
grace period for synchronize_rcu() users. If a full GP has not elapsed
for any user, a warning message is emitted.
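As a rough, hypothetical sketch (not the code added by this patch; the
function name and placement are illustrative only), the semantics the
option verifies can be pictured as: snapshot a grace-period cookie
before waiting and warn if a full grace period has not elapsed by the
time the waiter is woken up:

#include <linux/bug.h>
#include <linux/rcupdate.h>

/*
 * Illustrative only: take a GP-state snapshot, wait, then warn if a
 * full grace period has not elapsed. CONFIG_RCU_SR_NORMAL_DEBUG_GP
 * enables this kind of check for synchronize_rcu() waiters.
 */
static void sr_debug_check_sketch(void)
{
	unsigned long oldstate = get_state_synchronize_rcu();

	synchronize_rcu();

	WARN_ONCE(IS_ENABLED(CONFIG_RCU_SR_NORMAL_DEBUG_GP) &&
		  !poll_state_synchronize_rcu(oldstate),
		  "A full grace period has not elapsed for this waiter");
}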
Signed-off-by: Uladzislau Rezki (Sony) --- kernel/rcu/Kconfig.debug | 12 ++++++++++++ kernel/rcu/tree.c | 7 +++++-- 2 files changed, 17 insertions(+), 2 deletions(-) diff --git a/kernel/rcu/Kconfig.debug b/kernel/rcu/Kconfig.debug index 2984de629f74..3d44106ca1f0 100644 --- a/kernel/rcu/Kconfig.debug +++ b/kernel/rcu/Kconfig.debug @@ -143,4 +143,16 @@ config RCU_STRICT_GRACE_PERIOD when looking for certain types of RCU usage bugs, for example, too-short RCU read-side critical sections. +config RCU_SR_NORMAL_DEBUG_GP + bool "Debug synchronize_rcu() callers for a grace period completion" + depends on DEBUG_KERNEL && RCU_EXPERT + default n + help + This option enables additional debugging for detecting a grace + period incompletion for synchronize_rcu() users. If a GP is not + fully passed for any user, the warning message is emitted. + + Say Y here if you want to enable such debugging + Say N if you are unsure. + endmenu # "RCU Debugging" diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index c0d3e46730e8..421bce4b8dd7 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -1547,7 +1547,8 @@ static void rcu_sr_normal_complete(struct llist_node *node) (struct rcu_head *) node, struct rcu_synchronize, head); unsigned long oldstate = (unsigned long) rs->head.func; - WARN_ONCE(!rcu_gp_is_expedited() && !poll_state_synchronize_rcu(oldstate), + WARN_ONCE(IS_ENABLED(CONFIG_RCU_SR_NORMAL_DEBUG_GP) && + !poll_state_synchronize_rcu(oldstate), "A full grace period is not passed yet: %lu", rcu_seq_diff(get_state_synchronize_rcu(), oldstate)); @@ -3822,7 +3823,9 @@ static void synchronize_rcu_normal(void) * This code might be preempted, therefore take a GP * snapshot before adding a request. */ - rs.head.func = (void *) get_state_synchronize_rcu(); + if (IS_ENABLED(CONFIG_RCU_SR_NORMAL_DEBUG_GP)) + rs.head.func = (void *) get_state_synchronize_rcu(); + rcu_sr_normal_add_req(&rs); /* Kick a GP and start waiting. */