From patchwork Thu Jan 16 20:20:56 2025
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 13942241
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
 "Paul E. McKenney", Ankur Arora, Alexei Starovoitov, Andrii Nakryiko,
 Peter Zijlstra, Kent Overstreet, bpf@vger.kernel.org
Subject: [PATCH rcu 01/17] srcu: Make Tiny SRCU able to operate in preemptible kernels
Date: Thu, 16 Jan 2025 12:20:56 -0800
Message-Id: <20250116202112.3783327-1-paulmck@kernel.org>
In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop>
References: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop>

Given that SRCU read-side critical sections are not just preemptible,
but also permit general blocking, there is not much reason to restrict
Tiny SRCU to non-preemptible kernels.  This commit therefore removes
Tiny SRCU's dependencies on non-preemptibility, primarily surrounding
its interaction with rcutorture and early boot.

Signed-off-by: Paul E. McKenney
Cc: Ankur Arora
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
---
 kernel/rcu/rcu.h      | 9 ++++++---
 kernel/rcu/srcutiny.c | 6 ++++++
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index f87c9d6d36fcb..f6fcf87d91395 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -611,8 +611,6 @@ void srcutorture_get_gp_data(struct srcu_struct *sp, int *flags,
 static inline bool rcu_watching_zero_in_eqs(int cpu, int *vp) { return false; }
 static inline unsigned long rcu_get_gp_seq(void) { return 0; }
 static inline unsigned long rcu_exp_batches_completed(void) { return 0; }
-static inline unsigned long
-srcu_batches_completed(struct srcu_struct *sp) { return 0; }
 static inline void rcu_force_quiescent_state(void) { }
 static inline bool rcu_check_boost_fail(unsigned long gp_state, int *cpup) { return true; }
 static inline void show_rcu_gp_kthreads(void) { }
@@ -624,7 +622,6 @@ static inline void rcu_gp_slow_unregister(atomic_t *rgssp) { }
 bool rcu_watching_zero_in_eqs(int cpu, int *vp);
 unsigned long rcu_get_gp_seq(void);
 unsigned long rcu_exp_batches_completed(void);
-unsigned long srcu_batches_completed(struct srcu_struct *sp);
 bool rcu_check_boost_fail(unsigned long gp_state, int *cpup);
 void show_rcu_gp_kthreads(void);
 int rcu_get_gp_kthreads_prio(void);
@@ -636,6 +633,12 @@ void rcu_gp_slow_register(atomic_t *rgssp);
 void rcu_gp_slow_unregister(atomic_t *rgssp);
 #endif /* #else #ifdef CONFIG_TINY_RCU */

+#ifdef CONFIG_TINY_SRCU
+static inline unsigned long srcu_batches_completed(struct srcu_struct *sp) { return 0; }
+#else // #ifdef CONFIG_TINY_SRCU
+unsigned long srcu_batches_completed(struct srcu_struct *sp);
+#endif // #else // #ifdef CONFIG_TINY_SRCU
+
 #ifdef CONFIG_RCU_NOCB_CPU
 void rcu_bind_current_to_nocb(void);
 #else
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index f688bdad293ed..6e9fe2ce1075d 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -20,7 +20,11 @@
 #include "rcu_segcblist.h"
 #include "rcu.h"

+#ifndef CONFIG_TREE_RCU
 int rcu_scheduler_active __read_mostly;
+#else // #ifndef CONFIG_TREE_RCU
+extern int rcu_scheduler_active;
+#endif // #else // #ifndef CONFIG_TREE_RCU
 static LIST_HEAD(srcu_boot_list);
 static bool srcu_init_done;

@@ -282,11 +286,13 @@ bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie)
 }
 EXPORT_SYMBOL_GPL(poll_state_synchronize_srcu);

+#ifndef CONFIG_TREE_RCU
 /* Lockdep diagnostics. */
 void __init rcu_scheduler_starting(void)
 {
         rcu_scheduler_active = RCU_SCHEDULER_RUNNING;
 }
+#endif // #ifndef CONFIG_TREE_RCU

 /*
  * Queue work for srcu_struct structures with early boot callbacks.
From patchwork Thu Jan 16 20:20:57 2025
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 13942242
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
 "Paul E. McKenney", Neeraj Upadhyay, Alexei Starovoitov, Andrii Nakryiko,
 Peter Zijlstra, Kent Overstreet, bpf@vger.kernel.org
Subject: [PATCH rcu 02/17] srcu: Define SRCU_READ_FLAVOR_ALL in terms of symbols
Date: Thu, 16 Jan 2025 12:20:57 -0800
Message-Id: <20250116202112.3783327-2-paulmck@kernel.org>
In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop>
References: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop>

This commit defines SRCU_READ_FLAVOR_ALL in terms of the
SRCU_READ_FLAVOR_* definitions instead of a hexadecimal constant.

Suggested-by: Neeraj Upadhyay
Signed-off-by: Paul E. McKenney
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
---
 include/linux/srcu.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index d7ba46e74f58e..f6f779b9d9ff2 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -47,7 +47,8 @@ int init_srcu_struct(struct srcu_struct *ssp);
 #define SRCU_READ_FLAVOR_NORMAL 0x1             // srcu_read_lock().
 #define SRCU_READ_FLAVOR_NMI    0x2             // srcu_read_lock_nmisafe().
 #define SRCU_READ_FLAVOR_LITE   0x4             // srcu_read_lock_lite().
-#define SRCU_READ_FLAVOR_ALL    0x7             // All of the above.
+#define SRCU_READ_FLAVOR_ALL   (SRCU_READ_FLAVOR_NORMAL | SRCU_READ_FLAVOR_NMI | \
+                                SRCU_READ_FLAVOR_LITE)  // All of the above.

 #ifdef CONFIG_TINY_SRCU
 #include
From patchwork Thu Jan 16 20:20:58 2025
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 13942240
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
 "Paul E. McKenney", Alexei Starovoitov, Andrii Nakryiko, Peter Zijlstra,
 Kent Overstreet, bpf@vger.kernel.org
Subject: [PATCH rcu 03/17] srcu: Use ->srcu_gp_seq for rcutorture reader batch
Date: Thu, 16 Jan 2025 12:20:58 -0800
Message-Id: <20250116202112.3783327-3-paulmck@kernel.org>
In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop>
References: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop>

This commit stops using ->srcu_idx for rcutorture's reader-batch
consistency checking, using ->srcu_gp_seq instead.  This is a first
step towards a faster srcu_read_{,un}lock_lite() that avoids the array
accesses that use ->srcu_idx.

Signed-off-by: Paul E. McKenney
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
---
 kernel/rcu/rcutorture.c | 2 ++
 kernel/rcu/srcutree.c   | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index d26fb1d33ed9a..1d2de50fb5d60 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -791,6 +791,7 @@ static struct rcu_torture_ops srcu_ops = {
         .readunlock     = srcu_torture_read_unlock,
         .readlock_held  = torture_srcu_read_lock_held,
         .get_gp_seq     = srcu_torture_completed,
+        .gp_diff        = rcu_seq_diff,
         .deferred_free  = srcu_torture_deferred_free,
         .sync           = srcu_torture_synchronize,
         .exp_sync       = srcu_torture_synchronize_expedited,
@@ -834,6 +835,7 @@
         .readunlock     = srcu_torture_read_unlock,
         .readlock_held  = torture_srcu_read_lock_held,
         .get_gp_seq     = srcu_torture_completed,
+        .gp_diff        = rcu_seq_diff,
         .deferred_free  = srcu_torture_deferred_free,
         .sync           = srcu_torture_synchronize,
         .exp_sync       = srcu_torture_synchronize_expedited,
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index f87a9fb6d6bb8..b5bb73c877de7 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1679,7 +1679,7 @@ EXPORT_SYMBOL_GPL(srcu_barrier);
  */
 unsigned long srcu_batches_completed(struct srcu_struct *ssp)
 {
-        return READ_ONCE(ssp->srcu_idx);
+        return READ_ONCE(ssp->srcu_sup->srcu_gp_seq);
 }
 EXPORT_SYMBOL_GPL(srcu_batches_completed);
a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1737058875; bh=TtUqAHtCZPGmuCYiDxjMd5lrVuCO81aZ0Y6hgKFIzo4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Wk/Rjpaa3p4yqaGnVwzIlsTfVHw3m8tYAx2ECOOSXuXlHS4y0pyeoqGNLeru4PKO0 m7RIYO+R1iLKQ9jX5xLZfwO2VuifL3Vsi6Qa2kdqubq+fObzxf0EaszFiOXEV98zAw Ko+InBA6gcX6bn9xlQcwFEC8rO9mi6L4/y5yQoVxaTHZuCgi25r+/rQqJtYrSxqqf9 hP8Y3xzWvM1Erx3rR0EM7QdchbyrZgVXaHmH6MmTlMCOwkcLIEEa6GehBlHyH7gg9m FBmzw57cuEAecqYkhuNnXG7MY7myjwyOeY37BkWOj1nCpwWWN0iDs0a4tZPxj5d8yM cECD+GzPu4zUA== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 8A08ACE37B6; Thu, 16 Jan 2025 12:21:14 -0800 (PST) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Alexei Starovoitov , Andrii Nakryiko , Peter Zijlstra , Kent Overstreet , bpf@vger.kernel.org Subject: [PATCH rcu 04/17] srcu: Pull ->srcu_{un,}lock_count into a new srcu_ctr structure Date: Thu, 16 Jan 2025 12:20:59 -0800 Message-Id: <20250116202112.3783327-4-paulmck@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> References: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> Precedence: bulk X-Mailing-List: rcu@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 This commit prepares for array-index-free srcu_read_lock*() by moving the ->srcu_{un,}lock_count fields into a new srcu_ctr structure. This will permit ->srcu_index to be replaced by a per-CPU pointer to this structure. Signed-off-by: Paul E. 
McKenney Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Peter Zijlstra Cc: Kent Overstreet Cc: --- include/linux/srcutree.h | 13 +++-- kernel/rcu/srcutree.c | 115 +++++++++++++++++++-------------------- 2 files changed, 66 insertions(+), 62 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index b17814c9d1c76..c794d599db5c1 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -17,14 +17,19 @@ struct srcu_node; struct srcu_struct; +/* One element of the srcu_data srcu_ctrs array. */ +struct srcu_ctr { + atomic_long_t srcu_locks; /* Locks per CPU. */ + atomic_long_t srcu_unlocks; /* Unlocks per CPU. */ +}; + /* * Per-CPU structure feeding into leaf srcu_node, similar in function * to rcu_node. */ struct srcu_data { /* Read-side state. */ - atomic_long_t srcu_lock_count[2]; /* Locks per CPU. */ - atomic_long_t srcu_unlock_count[2]; /* Unlocks per CPU. */ + struct srcu_ctr srcu_ctrs[2]; /* Locks and unlocks per CPU. */ int srcu_reader_flavor; /* Reader flavor for srcu_struct structure? */ /* Values: SRCU_READ_FLAVOR_.* */ @@ -221,7 +226,7 @@ static inline int __srcu_read_lock_lite(struct srcu_struct *ssp) RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_lock_lite()."); idx = READ_ONCE(ssp->srcu_idx) & 0x1; - this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter); /* Y */ + this_cpu_inc(ssp->sda->srcu_ctrs[idx].srcu_locks.counter); /* Y */ barrier(); /* Avoid leaking the critical section. */ return idx; } @@ -240,7 +245,7 @@ static inline int __srcu_read_lock_lite(struct srcu_struct *ssp) static inline void __srcu_read_unlock_lite(struct srcu_struct *ssp, int idx) { barrier(); /* Avoid leaking the critical section. 
*/ - this_cpu_inc(ssp->sda->srcu_unlock_count[idx].counter); /* Z */ + this_cpu_inc(ssp->sda->srcu_ctrs[idx].srcu_unlocks.counter); /* Z */ RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_unlock_lite()."); } diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index b5bb73c877de7..d4e9cd917a69f 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -116,8 +116,9 @@ do { \ /* * Initialize SRCU per-CPU data. Note that statically allocated * srcu_struct structures might already have srcu_read_lock() and - * srcu_read_unlock() running against them. So if the is_static parameter - * is set, don't initialize ->srcu_lock_count[] and ->srcu_unlock_count[]. + * srcu_read_unlock() running against them. So if the is_static + * parameter is set, don't initialize ->srcu_ctrs[].srcu_locks and + * ->srcu_ctrs[].srcu_unlocks. */ static void init_srcu_struct_data(struct srcu_struct *ssp) { @@ -128,8 +129,6 @@ static void init_srcu_struct_data(struct srcu_struct *ssp) * Initialize the per-CPU srcu_data array, which feeds into the * leaves of the srcu_node tree. */ - BUILD_BUG_ON(ARRAY_SIZE(sdp->srcu_lock_count) != - ARRAY_SIZE(sdp->srcu_unlock_count)); for_each_possible_cpu(cpu) { sdp = per_cpu_ptr(ssp->sda, cpu); spin_lock_init(&ACCESS_PRIVATE(sdp, lock)); @@ -429,10 +428,10 @@ static bool srcu_gp_is_expedited(struct srcu_struct *ssp) } /* - * Computes approximate total of the readers' ->srcu_lock_count[] values - * for the rank of per-CPU counters specified by idx, and returns true if - * the caller did the proper barrier (gp), and if the count of the locks - * matches that of the unlocks passed in. + * Computes approximate total of the readers' ->srcu_ctrs[].srcu_locks + * values for the rank of per-CPU counters specified by idx, and returns + * true if the caller did the proper barrier (gp), and if the count of + * the locks matches that of the unlocks passed in. 
*/ static bool srcu_readers_lock_idx(struct srcu_struct *ssp, int idx, bool gp, unsigned long unlocks) { @@ -443,7 +442,7 @@ static bool srcu_readers_lock_idx(struct srcu_struct *ssp, int idx, bool gp, uns for_each_possible_cpu(cpu) { struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu); - sum += atomic_long_read(&sdp->srcu_lock_count[idx]); + sum += atomic_long_read(&sdp->srcu_ctrs[idx].srcu_locks); if (IS_ENABLED(CONFIG_PROVE_RCU)) mask = mask | READ_ONCE(sdp->srcu_reader_flavor); } @@ -455,8 +454,8 @@ static bool srcu_readers_lock_idx(struct srcu_struct *ssp, int idx, bool gp, uns } /* - * Returns approximate total of the readers' ->srcu_unlock_count[] values - * for the rank of per-CPU counters specified by idx. + * Returns approximate total of the readers' ->srcu_ctrs[].srcu_unlocks + * values for the rank of per-CPU counters specified by idx. */ static unsigned long srcu_readers_unlock_idx(struct srcu_struct *ssp, int idx, unsigned long *rdm) { @@ -467,7 +466,7 @@ static unsigned long srcu_readers_unlock_idx(struct srcu_struct *ssp, int idx, u for_each_possible_cpu(cpu) { struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu); - sum += atomic_long_read(&sdp->srcu_unlock_count[idx]); + sum += atomic_long_read(&sdp->srcu_ctrs[idx].srcu_unlocks); mask = mask | READ_ONCE(sdp->srcu_reader_flavor); } WARN_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && (mask & (mask - 1)), @@ -510,9 +509,9 @@ static bool srcu_readers_active_idx_check(struct srcu_struct *ssp, int idx) * been no readers on this index at some point in this function. * But there might be more readers, as a task might have read * the current ->srcu_idx but not yet have incremented its CPU's - * ->srcu_lock_count[idx] counter. In fact, it is possible + * ->srcu_ctrs[idx].srcu_locks counter. In fact, it is possible * that most of the tasks have been preempted between fetching - * ->srcu_idx and incrementing ->srcu_lock_count[idx]. And there + * ->srcu_idx and incrementing ->srcu_ctrs[idx].srcu_locks. 
And there * could be almost (ULONG_MAX / sizeof(struct task_struct)) tasks * in a system whose address space was fully populated with memory. * Call this quantity Nt. @@ -521,36 +520,36 @@ static bool srcu_readers_active_idx_check(struct srcu_struct *ssp, int idx) * code for a long time. That now-preempted updater has already * flipped ->srcu_idx (possibly during the preceding grace period), * done an smp_mb() (again, possibly during the preceding grace - * period), and summed up the ->srcu_unlock_count[idx] counters. + * period), and summed up the ->srcu_ctrs[idx].srcu_unlocks counters. * How many times can a given one of the aforementioned Nt tasks - * increment the old ->srcu_idx value's ->srcu_lock_count[idx] + * increment the old ->srcu_idx value's ->srcu_ctrs[idx].srcu_locks * counter, in the absence of nesting? * * It can clearly do so once, given that it has already fetched - * the old value of ->srcu_idx and is just about to use that value - * to index its increment of ->srcu_lock_count[idx]. But as soon as - * it leaves that SRCU read-side critical section, it will increment - * ->srcu_unlock_count[idx], which must follow the updater's above - * read from that same value. Thus, as soon the reading task does - * an smp_mb() and a later fetch from ->srcu_idx, that task will be - * guaranteed to get the new index. Except that the increment of - * ->srcu_unlock_count[idx] in __srcu_read_unlock() is after the - * smp_mb(), and the fetch from ->srcu_idx in __srcu_read_lock() - * is before the smp_mb(). Thus, that task might not see the new - * value of ->srcu_idx until the -second- __srcu_read_lock(), - * which in turn means that this task might well increment - * ->srcu_lock_count[idx] for the old value of ->srcu_idx twice, - * not just once. + * the old value of ->srcu_idx and is just about to use that + * value to index its increment of ->srcu_ctrs[idx].srcu_locks. 
+ * But as soon as it leaves that SRCU read-side critical section, + * it will increment ->srcu_ctrs[idx].srcu_unlocks, which must + * follow the updater's above read from that same value. Thus, + * as soon the reading task does an smp_mb() and a later fetch from + * ->srcu_idx, that task will be guaranteed to get the new index. + * Except that the increment of ->srcu_ctrs[idx].srcu_unlocks + * in __srcu_read_unlock() is after the smp_mb(), and the fetch + * from ->srcu_idx in __srcu_read_lock() is before the smp_mb(). + * Thus, that task might not see the new value of ->srcu_idx until + * the -second- __srcu_read_lock(), which in turn means that this + * task might well increment ->srcu_ctrs[idx].srcu_locks for the + * old value of ->srcu_idx twice, not just once. * * However, it is important to note that a given smp_mb() takes * effect not just for the task executing it, but also for any * later task running on that same CPU. * - * That is, there can be almost Nt + Nc further increments of - * ->srcu_lock_count[idx] for the old index, where Nc is the number - * of CPUs. But this is OK because the size of the task_struct - * structure limits the value of Nt and current systems limit Nc - * to a few thousand. + * That is, there can be almost Nt + Nc further increments + * of ->srcu_ctrs[idx].srcu_locks for the old index, where Nc + * is the number of CPUs. But this is OK because the size of + * the task_struct structure limits the value of Nt and current + * systems limit Nc to a few thousand. * * OK, but what about nesting? 
This does impose a limit on * nesting of half of the size of the task_struct structure @@ -581,10 +580,10 @@ static bool srcu_readers_active(struct srcu_struct *ssp) for_each_possible_cpu(cpu) { struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu); - sum += atomic_long_read(&sdp->srcu_lock_count[0]); - sum += atomic_long_read(&sdp->srcu_lock_count[1]); - sum -= atomic_long_read(&sdp->srcu_unlock_count[0]); - sum -= atomic_long_read(&sdp->srcu_unlock_count[1]); + sum += atomic_long_read(&sdp->srcu_ctrs[0].srcu_locks); + sum += atomic_long_read(&sdp->srcu_ctrs[1].srcu_locks); + sum -= atomic_long_read(&sdp->srcu_ctrs[0].srcu_unlocks); + sum -= atomic_long_read(&sdp->srcu_ctrs[1].srcu_unlocks); } return sum; } @@ -746,7 +745,7 @@ int __srcu_read_lock(struct srcu_struct *ssp) int idx; idx = READ_ONCE(ssp->srcu_idx) & 0x1; - this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter); + this_cpu_inc(ssp->sda->srcu_ctrs[idx].srcu_locks.counter); smp_mb(); /* B */ /* Avoid leaking the critical section. */ return idx; } @@ -760,7 +759,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock); void __srcu_read_unlock(struct srcu_struct *ssp, int idx) { smp_mb(); /* C */ /* Avoid leaking the critical section. */ - this_cpu_inc(ssp->sda->srcu_unlock_count[idx].counter); + this_cpu_inc(ssp->sda->srcu_ctrs[idx].srcu_unlocks.counter); } EXPORT_SYMBOL_GPL(__srcu_read_unlock); @@ -777,7 +776,7 @@ int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) struct srcu_data *sdp = raw_cpu_ptr(ssp->sda); idx = READ_ONCE(ssp->srcu_idx) & 0x1; - atomic_long_inc(&sdp->srcu_lock_count[idx]); + atomic_long_inc(&sdp->srcu_ctrs[idx].srcu_locks); smp_mb__after_atomic(); /* B */ /* Avoid leaking the critical section. */ return idx; } @@ -793,7 +792,7 @@ void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) struct srcu_data *sdp = raw_cpu_ptr(ssp->sda); smp_mb__before_atomic(); /* C */ /* Avoid leaking the critical section. 
*/ - atomic_long_inc(&sdp->srcu_unlock_count[idx]); + atomic_long_inc(&sdp->srcu_ctrs[idx].srcu_unlocks); } EXPORT_SYMBOL_GPL(__srcu_read_unlock_nmisafe); @@ -1123,17 +1122,17 @@ static void srcu_flip(struct srcu_struct *ssp) /* * Because the flip of ->srcu_idx is executed only if the * preceding call to srcu_readers_active_idx_check() found that - * the ->srcu_unlock_count[] and ->srcu_lock_count[] sums matched - * and because that summing uses atomic_long_read(), there is - * ordering due to a control dependency between that summing and - * the WRITE_ONCE() in this call to srcu_flip(). This ordering - * ensures that if this updater saw a given reader's increment from - * __srcu_read_lock(), that reader was using a value of ->srcu_idx - * from before the previous call to srcu_flip(), which should be - * quite rare. This ordering thus helps forward progress because - * the grace period could otherwise be delayed by additional - * calls to __srcu_read_lock() using that old (soon to be new) - * value of ->srcu_idx. + * the ->srcu_ctrs[].srcu_unlocks and ->srcu_ctrs[].srcu_locks sums + * matched and because that summing uses atomic_long_read(), + * there is ordering due to a control dependency between that + * summing and the WRITE_ONCE() in this call to srcu_flip(). + * This ordering ensures that if this updater saw a given reader's + * increment from __srcu_read_lock(), that reader was using a value + * of ->srcu_idx from before the previous call to srcu_flip(), + * which should be quite rare. This ordering thus helps forward + * progress because the grace period could otherwise be delayed + * by additional calls to __srcu_read_lock() using that old (soon + * to be new) value of ->srcu_idx. 
* * This sum-equality check and ordering also ensures that if * a given call to __srcu_read_lock() uses the new value of @@ -1918,8 +1917,8 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf) struct srcu_data *sdp; sdp = per_cpu_ptr(ssp->sda, cpu); - u0 = data_race(atomic_long_read(&sdp->srcu_unlock_count[!idx])); - u1 = data_race(atomic_long_read(&sdp->srcu_unlock_count[idx])); + u0 = data_race(atomic_long_read(&sdp->srcu_ctrs[!idx].srcu_unlocks)); + u1 = data_race(atomic_long_read(&sdp->srcu_ctrs[idx].srcu_unlocks)); /* * Make sure that a lock is always counted if the corresponding @@ -1927,8 +1926,8 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf) */ smp_rmb(); - l0 = data_race(atomic_long_read(&sdp->srcu_lock_count[!idx])); - l1 = data_race(atomic_long_read(&sdp->srcu_lock_count[idx])); + l0 = data_race(atomic_long_read(&sdp->srcu_ctrs[!idx].srcu_locks)); + l1 = data_race(atomic_long_read(&sdp->srcu_ctrs[idx].srcu_locks)); c0 = l0 - u0; c1 = l1 - u1; From patchwork Thu Jan 16 20:21:00 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 13942244
From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Z qiang , kernel test robot , Alexei Starovoitov , Andrii Nakryiko , Peter Zijlstra , Kent Overstreet , bpf@vger.kernel.org Subject: [PATCH rcu 05/17] srcu: Make SRCU readers use ->srcu_ctrs for counter selection Date: Thu, 16 Jan 2025 12:21:00 -0800 Message-Id: <20250116202112.3783327-5-paulmck@kernel.org> This commit causes SRCU readers to use ->srcu_ctrs for counter selection instead of ->srcu_idx. This takes another step towards array-indexing-free SRCU readers. [ paulmck: Apply kernel test robot feedback. ] Co-developed-by: Z qiang Signed-off-by: Z qiang Signed-off-by: Paul E. McKenney Tested-by: kernel test robot Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Peter Zijlstra Cc: Kent Overstreet Cc: --- include/linux/srcutree.h | 9 +++++---- kernel/rcu/srcutree.c | 23 +++++++++++++---------- 2 files changed, 18 insertions(+), 14 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index c794d599db5c1..1b01ced61a45b 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -101,6 +101,7 @@ struct srcu_usage { */ struct srcu_struct { unsigned int srcu_idx; /* Current rdr array element. */ + struct srcu_ctr __percpu *srcu_ctrp; struct srcu_data __percpu *sda; /* Per-CPU srcu_data array. */ struct lockdep_map dep_map; struct srcu_usage *srcu_sup; /* Update-side data.
*/ @@ -167,6 +168,7 @@ struct srcu_struct { #define __SRCU_STRUCT_INIT(name, usage_name, pcpu_name) \ { \ .sda = &pcpu_name, \ + .srcu_ctrp = &pcpu_name.srcu_ctrs[0], \ __SRCU_STRUCT_INIT_COMMON(name, usage_name) \ } @@ -222,13 +224,12 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf); */ static inline int __srcu_read_lock_lite(struct srcu_struct *ssp) { - int idx; + struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp); RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_lock_lite()."); - idx = READ_ONCE(ssp->srcu_idx) & 0x1; - this_cpu_inc(ssp->sda->srcu_ctrs[idx].srcu_locks.counter); /* Y */ + this_cpu_inc(scp->srcu_locks.counter); /* Y */ barrier(); /* Avoid leaking the critical section. */ - return idx; + return scp - &ssp->sda->srcu_ctrs[0]; } /* diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index d4e9cd917a69f..9af86ce2dd248 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -253,8 +253,10 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) atomic_set(&ssp->srcu_sup->srcu_barrier_cpu_cnt, 0); INIT_DELAYED_WORK(&ssp->srcu_sup->work, process_srcu); ssp->srcu_sup->sda_is_static = is_static; - if (!is_static) + if (!is_static) { ssp->sda = alloc_percpu(struct srcu_data); + ssp->srcu_ctrp = &ssp->sda->srcu_ctrs[0]; + } if (!ssp->sda) goto err_free_sup; init_srcu_struct_data(ssp); @@ -742,12 +744,11 @@ EXPORT_SYMBOL_GPL(__srcu_check_read_flavor); */ int __srcu_read_lock(struct srcu_struct *ssp) { - int idx; + struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp); - idx = READ_ONCE(ssp->srcu_idx) & 0x1; - this_cpu_inc(ssp->sda->srcu_ctrs[idx].srcu_locks.counter); + this_cpu_inc(scp->srcu_locks.counter); smp_mb(); /* B */ /* Avoid leaking the critical section. 
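The new reader pattern above samples `->srcu_ctrp` once, increments through the sampled pointer, and hands the index back as a pointer difference rather than loading `->srcu_idx`. A hypothetical user-space reduction of that idiom (the kernel versions use per-CPU counters, `READ_ONCE()`, and memory barriers, all omitted here):

```c
/* Sketch of the array-indexing-free reader: sample the counter pointer
 * once, increment through it, and return the index as a pointer
 * difference so that unlock can find the same element even if a flip
 * happens in between. Hypothetical reduction of __srcu_read_lock(). */
#include <assert.h>

struct ctr_pair { long locks; long unlocks; };

static struct ctr_pair ctrs[2];
static struct ctr_pair *ctrp = &ctrs[0];	/* stands in for ->srcu_ctrp */

static int model_srcu_read_lock(void)
{
	struct ctr_pair *scp = ctrp;	/* one sample, like READ_ONCE() */

	scp->locks++;
	return scp - &ctrs[0];		/* pointer difference, not an index load */
}

static void model_srcu_read_unlock(int idx)
{
	ctrs[idx].unlocks++;
}
```

The returned difference is 0 or 1, so the existing int-returning API is unchanged.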
*/ - return idx; + return scp - &ssp->sda->srcu_ctrs[0]; } EXPORT_SYMBOL_GPL(__srcu_read_lock); @@ -772,13 +773,12 @@ EXPORT_SYMBOL_GPL(__srcu_read_unlock); */ int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) { - int idx; - struct srcu_data *sdp = raw_cpu_ptr(ssp->sda); + struct srcu_ctr __percpu *scpp = READ_ONCE(ssp->srcu_ctrp); + struct srcu_ctr *scp = raw_cpu_ptr(scpp); - idx = READ_ONCE(ssp->srcu_idx) & 0x1; - atomic_long_inc(&sdp->srcu_ctrs[idx].srcu_locks); + atomic_long_inc(&scp->srcu_locks); smp_mb__after_atomic(); /* B */ /* Avoid leaking the critical section. */ - return idx; + return scpp - &ssp->sda->srcu_ctrs[0]; } EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe); @@ -1152,6 +1152,8 @@ static void srcu_flip(struct srcu_struct *ssp) smp_mb(); /* E */ /* Pairs with B and C. */ WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1); // Flip the counter. + WRITE_ONCE(ssp->srcu_ctrp, + &ssp->sda->srcu_ctrs[!(ssp->srcu_ctrp - &ssp->sda->srcu_ctrs[0])]); /* * Ensure that if the updater misses an __srcu_read_unlock() @@ -2004,6 +2006,7 @@ static int srcu_module_coming(struct module *mod) ssp->sda = alloc_percpu(struct srcu_data); if (WARN_ON_ONCE(!ssp->sda)) return -ENOMEM; + ssp->srcu_ctrp = &ssp->sda->srcu_ctrs[0]; } return 0; } From patchwork Thu Jan 16 20:21:01 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 13942248
From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Alexei Starovoitov , Andrii Nakryiko , Peter Zijlstra , Kent Overstreet , bpf@vger.kernel.org Subject: [PATCH rcu 06/17] srcu: Make Tree SRCU updates independent of ->srcu_idx Date: Thu, 16 Jan 2025 12:21:01 -0800 Message-Id: <20250116202112.3783327-6-paulmck@kernel.org> This commit makes Tree SRCU updates independent of ->srcu_idx, and then drops ->srcu_idx. Signed-off-by: Paul E. McKenney Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Peter Zijlstra Cc: Kent Overstreet Cc: --- include/linux/srcutree.h | 1 - kernel/rcu/srcutree.c | 68 ++++++++++++++++++++-------------------- 2 files changed, 34 insertions(+), 35 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index 1b01ced61a45b..6b7eba59f3849 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -100,7 +100,6 @@ struct srcu_usage { * Per-SRCU-domain structure, similar in function to rcu_state. */ struct srcu_struct { - unsigned int srcu_idx; /* Current rdr array element. */ struct srcu_ctr __percpu *srcu_ctrp; struct srcu_data __percpu *sda; /* Per-CPU srcu_data array.
*/ struct lockdep_map dep_map; diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 9af86ce2dd248..dfc98e69accaf 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -246,7 +246,6 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) ssp->srcu_sup->node = NULL; mutex_init(&ssp->srcu_sup->srcu_cb_mutex); mutex_init(&ssp->srcu_sup->srcu_gp_mutex); - ssp->srcu_idx = 0; ssp->srcu_sup->srcu_gp_seq = SRCU_GP_SEQ_INITIAL_VAL; ssp->srcu_sup->srcu_barrier_seq = 0; mutex_init(&ssp->srcu_sup->srcu_barrier_mutex); @@ -510,38 +509,39 @@ static bool srcu_readers_active_idx_check(struct srcu_struct *ssp, int idx) * If the locks are the same as the unlocks, then there must have * been no readers on this index at some point in this function. * But there might be more readers, as a task might have read - * the current ->srcu_idx but not yet have incremented its CPU's + * the current ->srcu_ctrp but not yet have incremented its CPU's * ->srcu_ctrs[idx].srcu_locks counter. In fact, it is possible * that most of the tasks have been preempted between fetching - * ->srcu_idx and incrementing ->srcu_ctrs[idx].srcu_locks. And there - * could be almost (ULONG_MAX / sizeof(struct task_struct)) tasks - * in a system whose address space was fully populated with memory. - * Call this quantity Nt. + * ->srcu_ctrp and incrementing ->srcu_ctrs[idx].srcu_locks. And + * there could be almost (ULONG_MAX / sizeof(struct task_struct)) + * tasks in a system whose address space was fully populated + * with memory. Call this quantity Nt. * - * So suppose that the updater is preempted at this point in the - * code for a long time. That now-preempted updater has already - * flipped ->srcu_idx (possibly during the preceding grace period), - * done an smp_mb() (again, possibly during the preceding grace - * period), and summed up the ->srcu_ctrs[idx].srcu_unlocks counters. 
- * How many times can a given one of the aforementioned Nt tasks - * increment the old ->srcu_idx value's ->srcu_ctrs[idx].srcu_locks - * counter, in the absence of nesting? + * So suppose that the updater is preempted at this + * point in the code for a long time. That now-preempted + * updater has already flipped ->srcu_ctrp (possibly during + * the preceding grace period), done an smp_mb() (again, + * possibly during the preceding grace period), and summed up + * the ->srcu_ctrs[idx].srcu_unlocks counters. How many times + * can a given one of the aforementioned Nt tasks increment the + * old ->srcu_ctrp value's ->srcu_ctrs[idx].srcu_locks counter, + * in the absence of nesting? * * It can clearly do so once, given that it has already fetched - * the old value of ->srcu_idx and is just about to use that + * the old value of ->srcu_ctrp and is just about to use that * value to index its increment of ->srcu_ctrs[idx].srcu_locks. * But as soon as it leaves that SRCU read-side critical section, * it will increment ->srcu_ctrs[idx].srcu_unlocks, which must - * follow the updater's above read from that same value. Thus, - * as soon the reading task does an smp_mb() and a later fetch from - * ->srcu_idx, that task will be guaranteed to get the new index. + * follow the updater's above read from that same value. Thus, + * as soon the reading task does an smp_mb() and a later fetch from + * ->srcu_ctrp, that task will be guaranteed to get the new index. * Except that the increment of ->srcu_ctrs[idx].srcu_unlocks * in __srcu_read_unlock() is after the smp_mb(), and the fetch - * from ->srcu_idx in __srcu_read_lock() is before the smp_mb().
+ * Thus, that task might not see the new value of ->srcu_ctrp until * the -second- __srcu_read_lock(), which in turn means that this * task might well increment ->srcu_ctrs[idx].srcu_locks for the - * old value of ->srcu_idx twice, not just once. + * old value of ->srcu_ctrp twice, not just once. * * However, it is important to note that a given smp_mb() takes * effect not just for the task executing it, but also for any @@ -1095,7 +1095,7 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp, /* * Wait until all readers counted by array index idx complete, but * loop an additional time if there is an expedited grace period pending. - * The caller must ensure that ->srcu_idx is not changed while checking. + * The caller must ensure that ->srcu_ctrp is not changed while checking. */ static bool try_check_zero(struct srcu_struct *ssp, int idx, int trycount) { @@ -1113,14 +1113,14 @@ static bool try_check_zero(struct srcu_struct *ssp, int idx, int trycount) } /* - * Increment the ->srcu_idx counter so that future SRCU readers will + * Increment the ->srcu_ctrp counter so that future SRCU readers will * use the other rank of the ->srcu_(un)lock_count[] arrays. This allows * us to wait for pre-existing readers in a starvation-free manner. */ static void srcu_flip(struct srcu_struct *ssp) { /* - * Because the flip of ->srcu_idx is executed only if the + * Because the flip of ->srcu_ctrp is executed only if the * preceding call to srcu_readers_active_idx_check() found that * the ->srcu_ctrs[].srcu_unlocks and ->srcu_ctrs[].srcu_locks sums * matched and because that summing uses atomic_long_read(), @@ -1128,15 +1128,15 @@ static void srcu_flip(struct srcu_struct *ssp) * summing and the WRITE_ONCE() in this call to srcu_flip(). 
* This ordering ensures that if this updater saw a given reader's * increment from __srcu_read_lock(), that reader was using a value - * of ->srcu_idx from before the previous call to srcu_flip(), + * of ->srcu_ctrp from before the previous call to srcu_flip(), * which should be quite rare. This ordering thus helps forward * progress because the grace period could otherwise be delayed * by additional calls to __srcu_read_lock() using that old (soon - * to be new) value of ->srcu_idx. + * to be new) value of ->srcu_ctrp. * * This sum-equality check and ordering also ensures that if * a given call to __srcu_read_lock() uses the new value of - * ->srcu_idx, this updater's earlier scans cannot have seen + * ->srcu_ctrp, this updater's earlier scans cannot have seen * that reader's increments, which is all to the good, because * this grace period need not wait on that reader. After all, * if those earlier scans had seen that reader, there would have @@ -1151,7 +1151,6 @@ static void srcu_flip(struct srcu_struct *ssp) */ smp_mb(); /* E */ /* Pairs with B and C. */ - WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1); // Flip the counter. WRITE_ONCE(ssp->srcu_ctrp, &ssp->sda->srcu_ctrs[!(ssp->srcu_ctrp - &ssp->sda->srcu_ctrs[0])]); @@ -1470,8 +1469,9 @@ EXPORT_SYMBOL_GPL(synchronize_srcu_expedited); * * Wait for the count to drain to zero of both indexes. To avoid the * possible starvation of synchronize_srcu(), it waits for the count of - * the index=((->srcu_idx & 1) ^ 1) to drain to zero at first, - * and then flip the srcu_idx and wait for the count of the other index. + * the index=!(ssp->srcu_ctrp - &ssp->sda->srcu_ctrs[0]) to drain to zero + * at first, and then flip the ->srcu_ctrp and wait for the count of the + * other index. * * Can block; must be called from process context.
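With `->srcu_idx` gone, both the flip and the updater-side index recovery above are pure pointer arithmetic on `->srcu_ctrp`. A hypothetical user-space reduction of that toggle (the `srcu_ctrs[]` array is per-CPU and written with WRITE_ONCE() in the kernel):

```c
/* Sketch of the pointer-based index toggle that replaces ->srcu_idx.
 * Hypothetical user-space reduction; names mirror the kernel fields. */
#include <assert.h>

struct srcu_ctr_model { long locks; long unlocks; };

static struct srcu_ctr_model srcu_ctrs[2];
static struct srcu_ctr_model *srcu_ctrp = &srcu_ctrs[0];

/* Mirrors idx = ssp->srcu_ctrp - &ssp->sda->srcu_ctrs[0] in the kernel. */
static int model_active_idx(void)
{
	return srcu_ctrp - &srcu_ctrs[0];
}

/* Mirrors srcu_flip(): point at the element the updater is NOT waiting on. */
static void model_flip(void)
{
	srcu_ctrp = &srcu_ctrs[!(srcu_ctrp - &srcu_ctrs[0])];
}
```

Because the array has exactly two elements, the pointer difference is always 0 or 1 and `!` toggles it, which is why expressions like `idx = !(ssp->srcu_ctrp - &ssp->sda->srcu_ctrs[0])` can replace the old `1 ^ (ssp->srcu_idx & 1)`.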
* @@ -1697,7 +1697,7 @@ static void srcu_advance_state(struct srcu_struct *ssp) /* * Because readers might be delayed for an extended period after - * fetching ->srcu_idx for their index, at any point in time there + * fetching ->srcu_ctrp for their index, at any point in time there * might well be readers using both idx=0 and idx=1. We therefore * need to wait for readers to clear from both index values before * invoking a callback. @@ -1725,7 +1725,7 @@ static void srcu_advance_state(struct srcu_struct *ssp) } if (rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq)) == SRCU_STATE_SCAN1) { - idx = 1 ^ (ssp->srcu_idx & 1); + idx = !(ssp->srcu_ctrp - &ssp->sda->srcu_ctrs[0]); if (!try_check_zero(ssp, idx, 1)) { mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); return; /* readers present, retry later. */ @@ -1743,7 +1743,7 @@ static void srcu_advance_state(struct srcu_struct *ssp) * SRCU read-side critical sections are normally short, * so check at least twice in quick succession after a flip. */ - idx = 1 ^ (ssp->srcu_idx & 1); + idx = !(ssp->srcu_ctrp - &ssp->sda->srcu_ctrs[0]); if (!try_check_zero(ssp, idx, 2)) { mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); return; /* readers present, retry later. */ @@ -1901,7 +1901,7 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf) int ss_state = READ_ONCE(ssp->srcu_sup->srcu_size_state); int ss_state_idx = ss_state; - idx = ssp->srcu_idx & 0x1; + idx = ssp->srcu_ctrp - &ssp->sda->srcu_ctrs[0]; if (ss_state < 0 || ss_state >= ARRAY_SIZE(srcu_size_state_name)) ss_state_idx = ARRAY_SIZE(srcu_size_state_name) - 1; pr_alert("%s%s Tree SRCU g%ld state %d (%s)", From patchwork Thu Jan 16 20:21:02 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 13942245
From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , syzbot+16a19b06125a2963eaee@syzkaller.appspotmail.com, Alexei Starovoitov , Andrii Nakryiko , Peter Zijlstra , Kent Overstreet , bpf@vger.kernel.org Subject: [PATCH rcu 07/17] srcu: Force synchronization for srcu_get_delay() Date: Thu, 16 Jan 2025 12:21:02 -0800 Message-Id: <20250116202112.3783327-7-paulmck@kernel.org> Currently, srcu_get_delay() can be called concurrently, for example, by a CPU that is the first to request a new grace period and the CPU processing the current grace period. Although concurrent access is harmless, it unnecessarily expands the state space. Additionally, all calls to srcu_get_delay() are from slow paths. This commit therefore protects all calls to srcu_get_delay() with ssp->srcu_sup->lock, which is already held on the invocation from the srcu_funnel_gp_start() function. While in the area, this commit also adds a lockdep_assert_held() to srcu_get_delay() itself. Reported-by: syzbot+16a19b06125a2963eaee@syzkaller.appspotmail.com Signed-off-by: Paul E.
McKenney Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Peter Zijlstra Cc: Kent Overstreet Cc: --- kernel/rcu/srcutree.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index dfc98e69accaf..46e4cdaa1786e 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -648,6 +648,7 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp) unsigned long jbase = SRCU_INTERVAL; struct srcu_usage *sup = ssp->srcu_sup; + lockdep_assert_held(&ACCESS_PRIVATE(ssp->srcu_sup, lock)); if (srcu_gp_is_expedited(ssp)) jbase = 0; if (rcu_seq_state(READ_ONCE(sup->srcu_gp_seq))) { @@ -675,9 +676,13 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp) void cleanup_srcu_struct(struct srcu_struct *ssp) { int cpu; + unsigned long delay; struct srcu_usage *sup = ssp->srcu_sup; - if (WARN_ON(!srcu_get_delay(ssp))) + spin_lock_irq_rcu_node(ssp->srcu_sup); + delay = srcu_get_delay(ssp); + spin_unlock_irq_rcu_node(ssp->srcu_sup); + if (WARN_ON(!delay)) return; /* Just leak it! */ if (WARN_ON(srcu_readers_active(ssp))) return; /* Just leak it! */ @@ -1101,7 +1106,9 @@ static bool try_check_zero(struct srcu_struct *ssp, int idx, int trycount) { unsigned long curdelay; + spin_lock_irq_rcu_node(ssp->srcu_sup); curdelay = !srcu_get_delay(ssp); + spin_unlock_irq_rcu_node(ssp->srcu_sup); for (;;) { if (srcu_readers_active_idx_check(ssp, idx)) @@ -1854,7 +1861,9 @@ static void process_srcu(struct work_struct *work) ssp = sup->srcu_ssp; srcu_advance_state(ssp); + spin_lock_irq_rcu_node(ssp->srcu_sup); curdelay = srcu_get_delay(ssp); + spin_unlock_irq_rcu_node(ssp->srcu_sup); if (curdelay) { WRITE_ONCE(sup->reschedule_count, 0); } else { From patchwork Thu Jan 16 20:21:03 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 13942247
From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Alexei Starovoitov , Andrii Nakryiko , Peter Zijlstra , Kent Overstreet , bpf@vger.kernel.org Subject: [PATCH rcu 08/17] srcu: Rename srcu_check_read_flavor_lite() to srcu_check_read_flavor_force() Date: Thu, 16 Jan 2025 12:21:03 -0800 Message-Id: <20250116202112.3783327-8-paulmck@kernel.org> This commit renames the srcu_check_read_flavor_lite() function to srcu_check_read_flavor_force() and adds a read_flavor argument in order to support an srcu_read_lock_fast() variant that avoids array indexing in both the lock and unlock primitives. Signed-off-by: Paul E.
McKenney Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Peter Zijlstra Cc: Kent Overstreet Cc: --- include/linux/srcu.h | 2 +- include/linux/srcutiny.h | 2 +- include/linux/srcutree.h | 10 ++++++---- 3 files changed, 8 insertions(+), 6 deletions(-) diff --git a/include/linux/srcu.h b/include/linux/srcu.h index f6f779b9d9ff2..ca00b9af7c237 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -279,7 +279,7 @@ static inline int srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp) { int retval; - srcu_check_read_flavor_lite(ssp); + srcu_check_read_flavor_force(ssp, SRCU_READ_FLAVOR_LITE); retval = __srcu_read_lock_lite(ssp); rcu_try_lock_acquire(&ssp->dep_map); return retval; diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h index 31b59b4be2a74..6b1a7276aa4c9 100644 --- a/include/linux/srcutiny.h +++ b/include/linux/srcutiny.h @@ -82,7 +82,7 @@ static inline void srcu_barrier(struct srcu_struct *ssp) } #define srcu_check_read_flavor(ssp, read_flavor) do { } while (0) -#define srcu_check_read_flavor_lite(ssp) do { } while (0) +#define srcu_check_read_flavor_force(ssp, read_flavor) do { } while (0) /* Defined here to avoid size increase for non-torture kernels. */ static inline void srcu_torture_stats_print(struct srcu_struct *ssp, diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index 6b7eba59f3849..e29cc57eac81d 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -251,16 +251,18 @@ static inline void __srcu_read_unlock_lite(struct srcu_struct *ssp, int idx) void __srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor); -// Record _lite() usage even for CONFIG_PROVE_RCU=n kernels. -static inline void srcu_check_read_flavor_lite(struct srcu_struct *ssp) +// Record reader usage even for CONFIG_PROVE_RCU=n kernels. This is +// needed only for flavors that require grace-period smp_mb() calls to be +// promoted to synchronize_rcu(). 
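The check-then-record pattern that srcu_check_read_flavor_force() follows can be sketched in a few lines of user-space C. This is a hypothetical single-threaded reduction with invented `MODEL_*` names; the kernel version reads a per-CPU flavor mask with READ_ONCE() and records new flavors via a fully ordered cmpxchg() in __srcu_check_read_flavor():

```c
/* Sketch of the read-flavor fast path: test a flavor bitmask and fall
 * through to a slow recording step only on first use of a flavor. */
#include <assert.h>

#define MODEL_FLAVOR_NORMAL	0x1
#define MODEL_FLAVOR_NMI	0x2
#define MODEL_FLAVOR_LITE	0x4

static unsigned int recorded_flavors;
static int slow_path_calls;

static void model_check_read_flavor_force(unsigned int read_flavor)
{
	if (recorded_flavors & read_flavor)
		return;			/* fast path: flavor already recorded */
	slow_path_calls++;		/* stands in for __srcu_check_read_flavor() */
	recorded_flavors |= read_flavor;
}
```

Passing the flavor as an argument rather than hard-coding SRCU_READ_FLAVOR_LITE is what lets the upcoming _fast() readers reuse the same fast path.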
+static inline void srcu_check_read_flavor_force(struct srcu_struct *ssp, int read_flavor) { struct srcu_data *sdp = raw_cpu_ptr(ssp->sda); - if (likely(READ_ONCE(sdp->srcu_reader_flavor) & SRCU_READ_FLAVOR_LITE)) + if (likely(READ_ONCE(sdp->srcu_reader_flavor) & read_flavor)) return; // Note that the cmpxchg() in __srcu_check_read_flavor() is fully ordered. - __srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_LITE); + __srcu_check_read_flavor(ssp, read_flavor); } // Record non-_lite() usage only for CONFIG_PROVE_RCU=y kernels. From patchwork Thu Jan 16 20:21:04 2025 X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 13942246
From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Alexei Starovoitov , Andrii Nakryiko , Peter Zijlstra , Kent Overstreet , bpf@vger.kernel.org Subject: [PATCH rcu 09/17] srcu: Add SRCU_READ_FLAVOR_SLOWGP to flag need for synchronize_rcu() Date: Thu, 16 Jan 2025 12:21:04 -0800 Message-Id: <20250116202112.3783327-9-paulmck@kernel.org> In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> References: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> This commit switches from a direct test of SRCU_READ_FLAVOR_LITE to a new SRCU_READ_FLAVOR_SLOWGP macro to check for substituting synchronize_rcu() for smp_mb() in SRCU grace periods. Right now, SRCU_READ_FLAVOR_SLOWGP is exactly SRCU_READ_FLAVOR_LITE, but the addition of the _fast() flavor of SRCU will change that. Signed-off-by: Paul E.
McKenney Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Peter Zijlstra Cc: Kent Overstreet Cc: --- include/linux/srcu.h | 3 +++ kernel/rcu/srcutree.c | 6 +++--- 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/include/linux/srcu.h b/include/linux/srcu.h index ca00b9af7c237..505f5bdce4446 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -49,6 +49,9 @@ int init_srcu_struct(struct srcu_struct *ssp); #define SRCU_READ_FLAVOR_LITE 0x4 // srcu_read_lock_lite(). #define SRCU_READ_FLAVOR_ALL (SRCU_READ_FLAVOR_NORMAL | SRCU_READ_FLAVOR_NMI | \ SRCU_READ_FLAVOR_LITE) // All of the above. +#define SRCU_READ_FLAVOR_SLOWGP SRCU_READ_FLAVOR_LITE + // Flavors requiring synchronize_rcu() + // instead of smp_mb(). #ifdef CONFIG_TINY_SRCU #include diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 46e4cdaa1786e..973e49d04f4f1 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -449,7 +449,7 @@ static bool srcu_readers_lock_idx(struct srcu_struct *ssp, int idx, bool gp, uns } WARN_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && (mask & (mask - 1)), "Mixed reader flavors for srcu_struct at %ps.\n", ssp); - if (mask & SRCU_READ_FLAVOR_LITE && !gp) + if (mask & SRCU_READ_FLAVOR_SLOWGP && !gp) return false; return sum == unlocks; } @@ -487,7 +487,7 @@ static bool srcu_readers_active_idx_check(struct srcu_struct *ssp, int idx) unsigned long unlocks; unlocks = srcu_readers_unlock_idx(ssp, idx, &rdm); - did_gp = !!(rdm & SRCU_READ_FLAVOR_LITE); + did_gp = !!(rdm & SRCU_READ_FLAVOR_SLOWGP); /* * Make sure that a lock is always counted if the corresponding @@ -1205,7 +1205,7 @@ static bool srcu_should_expedite(struct srcu_struct *ssp) check_init_srcu_struct(ssp); /* If _lite() readers, don't do unsolicited expediting. */ - if (this_cpu_read(ssp->sda->srcu_reader_flavor) & SRCU_READ_FLAVOR_LITE) + if (this_cpu_read(ssp->sda->srcu_reader_flavor) & SRCU_READ_FLAVOR_SLOWGP) return false; /* If the local srcu_data structure has callbacks, not idle. 
*/ sdp = raw_cpu_ptr(ssp->sda); From patchwork Thu Jan 16 20:21:05 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 13942249
From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Alexei Starovoitov , Andrii Nakryiko , Peter Zijlstra , Kent Overstreet , bpf@vger.kernel.org Subject: [PATCH rcu 10/17] srcu: Pull pointer-to-integer conversion into __srcu_ptr_to_ctr() Date: Thu, 16 Jan 2025 12:21:05 -0800 Message-Id: <20250116202112.3783327-10-paulmck@kernel.org> In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> References: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> This commit abstracts the srcu_read_lock*() pointer-to-integer conversion into a new __srcu_ptr_to_ctr(). This will be used in rcutorture for testing an srcu_read_lock_fast() that returns a pointer rather than an integer. Signed-off-by: Paul E.
McKenney Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Peter Zijlstra Cc: Kent Overstreet Cc: --- include/linux/srcutree.h | 9 ++++++++- kernel/rcu/srcutree.c | 4 ++-- 2 files changed, 10 insertions(+), 3 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index e29cc57eac81d..f41bb3a55a048 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -211,6 +211,13 @@ void synchronize_srcu_expedited(struct srcu_struct *ssp); void srcu_barrier(struct srcu_struct *ssp); void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf); +// Converts a per-CPU pointer to an ->srcu_ctrs[] array element to that +// element's index. +static inline bool __srcu_ptr_to_ctr(struct srcu_struct *ssp, struct srcu_ctr __percpu *scpp) +{ + return scpp - &ssp->sda->srcu_ctrs[0]; +} + /* * Counts the new reader in the appropriate per-CPU element of the * srcu_struct. Returns an index that must be passed to the matching @@ -228,7 +235,7 @@ static inline int __srcu_read_lock_lite(struct srcu_struct *ssp) RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_lock_lite()."); this_cpu_inc(scp->srcu_locks.counter); /* Y */ barrier(); /* Avoid leaking the critical section. */ - return scp - &ssp->sda->srcu_ctrs[0]; + return __srcu_ptr_to_ctr(ssp, scp); } /* diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 973e49d04f4f1..4643a8ed7e326 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -753,7 +753,7 @@ int __srcu_read_lock(struct srcu_struct *ssp) this_cpu_inc(scp->srcu_locks.counter); smp_mb(); /* B */ /* Avoid leaking the critical section. */ - return scp - &ssp->sda->srcu_ctrs[0]; + return __srcu_ptr_to_ctr(ssp, scp); } EXPORT_SYMBOL_GPL(__srcu_read_lock); @@ -783,7 +783,7 @@ int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) atomic_long_inc(&scp->srcu_locks); smp_mb__after_atomic(); /* B */ /* Avoid leaking the critical section. 
*/ - return scpp - &ssp->sda->srcu_ctrs[0]; + return __srcu_ptr_to_ctr(ssp, scpp); } EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe); From patchwork Thu Jan 16 20:21:06 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 13942252
From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Alexei Starovoitov , Andrii Nakryiko , Peter Zijlstra , Kent Overstreet , bpf@vger.kernel.org Subject: [PATCH rcu 11/17] srcu: Pull integer-to-pointer conversion into __srcu_ctr_to_ptr() Date: Thu, 16 Jan 2025 12:21:06 -0800 Message-Id: <20250116202112.3783327-11-paulmck@kernel.org> In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> References: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> This commit abstracts the srcu_read_unlock*() integer-to-pointer conversion into a new __srcu_ctr_to_ptr(). This will be used in rcutorture for testing an srcu_read_unlock_fast() that avoids array-indexing overhead by taking a pointer rather than an integer. [ paulmck: Apply kernel test robot feedback. ] Signed-off-by: Paul E.
McKenney Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Peter Zijlstra Cc: Kent Overstreet Cc: --- include/linux/srcutree.h | 9 ++++++++- kernel/rcu/srcutree.c | 6 ++---- 2 files changed, 10 insertions(+), 5 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index f41bb3a55a048..55fa400624bb4 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -218,6 +218,13 @@ static inline bool __srcu_ptr_to_ctr(struct srcu_struct *ssp, struct srcu_ctr __ return scpp - &ssp->sda->srcu_ctrs[0]; } +// Converts an integer to a per-CPU pointer to the corresponding +// ->srcu_ctrs[] array element. +static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ssp, int idx) +{ + return &ssp->sda->srcu_ctrs[idx]; +} + /* * Counts the new reader in the appropriate per-CPU element of the * srcu_struct. Returns an index that must be passed to the matching @@ -252,7 +259,7 @@ static inline int __srcu_read_lock_lite(struct srcu_struct *ssp) static inline void __srcu_read_unlock_lite(struct srcu_struct *ssp, int idx) { barrier(); /* Avoid leaking the critical section. */ - this_cpu_inc(ssp->sda->srcu_ctrs[idx].srcu_unlocks.counter); /* Z */ + this_cpu_inc(__srcu_ctr_to_ptr(ssp, idx)->srcu_unlocks.counter); /* Z */ RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_unlock_lite()."); } diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 4643a8ed7e326..d2a6949445538 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -765,7 +765,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock); void __srcu_read_unlock(struct srcu_struct *ssp, int idx) { smp_mb(); /* C */ /* Avoid leaking the critical section. 
*/ - this_cpu_inc(ssp->sda->srcu_ctrs[idx].srcu_unlocks.counter); + this_cpu_inc(__srcu_ctr_to_ptr(ssp, idx)->srcu_unlocks.counter); } EXPORT_SYMBOL_GPL(__srcu_read_unlock); @@ -794,10 +794,8 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe); */ void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) { - struct srcu_data *sdp = raw_cpu_ptr(ssp->sda); - smp_mb__before_atomic(); /* C */ /* Avoid leaking the critical section. */ - atomic_long_inc(&sdp->srcu_ctrs[idx].srcu_unlocks); + atomic_long_inc(&raw_cpu_ptr(__srcu_ctr_to_ptr(ssp, idx))->srcu_unlocks); } EXPORT_SYMBOL_GPL(__srcu_read_unlock_nmisafe); From patchwork Thu Jan 16 20:21:07 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 13942250
From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Alexei Starovoitov , Andrii Nakryiko , Peter Zijlstra , Kent Overstreet , bpf@vger.kernel.org Subject: [PATCH rcu 12/17] srcu: Move SRCU Tree/Tiny definitions from srcu.h Date: Thu, 16 Jan 2025 12:21:07 -0800 Message-Id: <20250116202112.3783327-12-paulmck@kernel.org> In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> References: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> There are a couple of definitions under "#ifdef CONFIG_TINY_SRCU" in include/linux/srcu.h. There is no point in them being there, so this commit moves them to include/linux/srcutiny.h and include/linux/srcutree.h, thus eliminating that #ifdef. Signed-off-by: Paul E.
McKenney Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Peter Zijlstra Cc: Kent Overstreet Cc: --- include/linux/srcu.h | 10 +--------- include/linux/srcutiny.h | 3 +++ include/linux/srcutree.h | 1 + 3 files changed, 5 insertions(+), 9 deletions(-) diff --git a/include/linux/srcu.h b/include/linux/srcu.h index 505f5bdce4446..2bd0e24e9b554 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -52,6 +52,7 @@ int init_srcu_struct(struct srcu_struct *ssp); #define SRCU_READ_FLAVOR_SLOWGP SRCU_READ_FLAVOR_LITE // Flavors requiring synchronize_rcu() // instead of smp_mb(). +void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp); #ifdef CONFIG_TINY_SRCU #include @@ -64,15 +65,6 @@ int init_srcu_struct(struct srcu_struct *ssp); void call_srcu(struct srcu_struct *ssp, struct rcu_head *head, void (*func)(struct rcu_head *head)); void cleanup_srcu_struct(struct srcu_struct *ssp); -int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp); -void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp); -#ifdef CONFIG_TINY_SRCU -#define __srcu_read_lock_lite __srcu_read_lock -#define __srcu_read_unlock_lite __srcu_read_unlock -#else // #ifdef CONFIG_TINY_SRCU -int __srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp); -void __srcu_read_unlock_lite(struct srcu_struct *ssp, int idx) __releases(ssp); -#endif // #else // #ifdef CONFIG_TINY_SRCU void synchronize_srcu(struct srcu_struct *ssp); #define SRCU_GET_STATE_COMPLETED 0x1 diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h index 6b1a7276aa4c9..07a0c4489ea2f 100644 --- a/include/linux/srcutiny.h +++ b/include/linux/srcutiny.h @@ -71,6 +71,9 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp) return idx; } +#define __srcu_read_lock_lite __srcu_read_lock +#define __srcu_read_unlock_lite __srcu_read_unlock + static inline void synchronize_srcu_expedited(struct srcu_struct *ssp) { synchronize_srcu(ssp); diff --git a/include/linux/srcutree.h 
b/include/linux/srcutree.h index 55fa400624bb4..ef3065c0cadcd 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -207,6 +207,7 @@ struct srcu_struct { #define DEFINE_SRCU(name) __DEFINE_SRCU(name, /* not static */) #define DEFINE_STATIC_SRCU(name) __DEFINE_SRCU(name, static) +int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp); void synchronize_srcu_expedited(struct srcu_struct *ssp); void srcu_barrier(struct srcu_struct *ssp); void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf); From patchwork Thu Jan 16 20:21:08 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 13942251
From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Alexei Starovoitov , Andrii Nakryiko , Peter Zijlstra , Kent Overstreet , bpf@vger.kernel.org Subject: [PATCH rcu 13/17] srcu: Add SRCU-fast readers Date: Thu, 16 Jan 2025 12:21:08 -0800 Message-Id: <20250116202112.3783327-13-paulmck@kernel.org> In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> References: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop> This commit adds srcu_read_{,un}lock_fast(), which is similar to srcu_read_{,un}lock_lite(), but avoids the array-indexing and pointer-following overhead. On a microbenchmark featuring tight loops around empty readers, this results in about a 20% speedup compared to RCU Tasks Trace on my x86 laptop.
Please note that SRCU-fast has drawbacks compared to RCU Tasks Trace, including:

o Lack of CPU stall warnings.

o SRCU-fast readers permitted only where rcu_is_watching().

o A pointer-sized return value from srcu_read_lock_fast() must be passed to the corresponding srcu_read_unlock_fast().

o In the absence of readers, a synchronize_srcu() having _fast() readers will incur the latency of at least two normal RCU grace periods.

o RCU Tasks Trace priority boosting could be easily added. Boosting SRCU readers is more difficult.

SRCU-fast also has a drawback compared to SRCU-lite, namely that the return value from srcu_read_lock_fast() is a 64-bit pointer and that from srcu_read_lock_lite() is only a 32-bit int. [ paulmck: Apply feedback from Akira Yokosawa. ] Signed-off-by: Paul E. McKenney Cc: Alexei Starovoitov Cc: Andrii Nakryiko Cc: Peter Zijlstra Cc: Kent Overstreet Cc: --- include/linux/srcu.h | 47 ++++++++++++++++++++++++++++++++++++++-- include/linux/srcutiny.h | 22 +++++++++++++++++++ include/linux/srcutree.h | 38 ++++++++++++++++++++++++++++++++ 3 files changed, 105 insertions(+), 2 deletions(-) diff --git a/include/linux/srcu.h b/include/linux/srcu.h index 2bd0e24e9b554..63bddc3014238 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -47,9 +47,10 @@ int init_srcu_struct(struct srcu_struct *ssp); #define SRCU_READ_FLAVOR_NORMAL 0x1 // srcu_read_lock(). #define SRCU_READ_FLAVOR_NMI 0x2 // srcu_read_lock_nmisafe(). #define SRCU_READ_FLAVOR_LITE 0x4 // srcu_read_lock_lite(). +#define SRCU_READ_FLAVOR_FAST 0x8 // srcu_read_lock_fast(). #define SRCU_READ_FLAVOR_ALL (SRCU_READ_FLAVOR_NORMAL | SRCU_READ_FLAVOR_NMI | \ - SRCU_READ_FLAVOR_LITE) // All of the above. -#define SRCU_READ_FLAVOR_SLOWGP SRCU_READ_FLAVOR_LITE + SRCU_READ_FLAVOR_LITE | SRCU_READ_FLAVOR_FAST) // All of the above. +#define SRCU_READ_FLAVOR_SLOWGP (SRCU_READ_FLAVOR_LITE | SRCU_READ_FLAVOR_FAST) // Flavors requiring synchronize_rcu() // instead of smp_mb().
void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp); @@ -253,6 +254,33 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp) return retval; } +/** + * srcu_read_lock_fast - register a new reader for an SRCU-protected structure. + * @ssp: srcu_struct in which to register the new reader. + * + * Enter an SRCU read-side critical section, but for a light-weight + * smp_mb()-free reader. See srcu_read_lock() for more information. + * + * If srcu_read_lock_fast() is ever used on an srcu_struct structure, + * then none of the other flavors may be used, whether before, during, + * or after. Note that grace-period auto-expediting is disabled for _fast + * srcu_struct structures because auto-expedited grace periods invoke + * synchronize_rcu_expedited(), IPIs and all. + * + * Note that srcu_read_lock_fast() can be invoked only from those contexts + * where RCU is watching, that is, from contexts where it would be legal + * to invoke rcu_read_lock(). Otherwise, lockdep will complain. + */ +static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *ssp) __acquires(ssp) +{ + struct srcu_ctr __percpu *retval; + + srcu_check_read_flavor_force(ssp, SRCU_READ_FLAVOR_FAST); + retval = __srcu_read_lock_fast(ssp); + rcu_try_lock_acquire(&ssp->dep_map); + return retval; +} + /** * srcu_read_lock_lite - register a new reader for an SRCU-protected structure. * @ssp: srcu_struct in which to register the new reader. @@ -356,6 +384,21 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx) __srcu_read_unlock(ssp, idx); } +/** + * srcu_read_unlock_fast - unregister a old reader from an SRCU-protected structure. + * @ssp: srcu_struct in which to unregister the old reader. + * @scp: return value from corresponding srcu_read_lock_fast(). + * + * Exit a light-weight SRCU read-side critical section. 
+ */ +static inline void srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) + __releases(ssp) +{ + srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST); + srcu_lock_release(&ssp->dep_map); + __srcu_read_unlock_fast(ssp, scp); +} + /** * srcu_read_unlock_lite - unregister a old reader from an SRCU-protected structure. * @ssp: srcu_struct in which to unregister the old reader. diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h index 07a0c4489ea2f..380260317d98b 100644 --- a/include/linux/srcutiny.h +++ b/include/linux/srcutiny.h @@ -71,6 +71,28 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp) return idx; } +struct srcu_ctr; + +static inline bool __srcu_ptr_to_ctr(struct srcu_struct *ssp, struct srcu_ctr __percpu *scpp) +{ + return (int)(intptr_t)(struct srcu_ctr __force __kernel *)scpp; +} + +static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ssp, int idx) +{ + return (struct srcu_ctr __percpu *)(intptr_t)idx; +} + +static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct *ssp) +{ + return __srcu_ctr_to_ptr(ssp, __srcu_read_lock(ssp)); +} + +static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) +{ + __srcu_read_unlock(ssp, __srcu_ptr_to_ctr(ssp, scp)); +} + #define __srcu_read_lock_lite __srcu_read_lock #define __srcu_read_unlock_lite __srcu_read_unlock diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index ef3065c0cadcd..bdc467efce3a2 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -226,6 +226,44 @@ static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ss return &ssp->sda->srcu_ctrs[idx]; } +/* + * Counts the new reader in the appropriate per-CPU element of the + * srcu_struct. Returns a pointer that must be passed to the matching + * srcu_read_unlock_fast(). 
+ * + * Note that this_cpu_inc() is an RCU read-side critical section either + * because it disables interrupts, because it is a single instruction, + * or because it is a read-modify-write atomic operation, depending on + * the whims of the architecture. + */ +static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct *ssp) +{ + struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp); + + RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_lock_fast()."); + this_cpu_inc(scp->srcu_locks.counter); /* Y */ + barrier(); /* Avoid leaking the critical section. */ + return scp; +} + +/* + * Removes the count for the old reader from the appropriate + * per-CPU element of the srcu_struct. Note that this may well be a + * different CPU than that which was incremented by the corresponding + * srcu_read_lock_fast(), but it must be within the same task. + * + * Note that this_cpu_inc() is an RCU read-side critical section either + * because it disables interrupts, because it is a single instruction, + * or because it is a read-modify-write atomic operation, depending on + * the whims of the architecture. + */ +static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) +{ + barrier(); /* Avoid leaking the critical section. */ + this_cpu_inc(scp->srcu_unlocks.counter); /* Z */ + RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_read_unlock_fast()."); +} + /* * Counts the new reader in the appropriate per-CPU element of the * srcu_struct. Returns an index that must be passed to the matching From patchwork Thu Jan 16 20:21:09 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 13942253
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
 "Paul E. McKenney", Alexei Starovoitov, Andrii Nakryiko, Peter Zijlstra,
 Kent Overstreet, bpf@vger.kernel.org
Subject: [PATCH rcu 14/17] rcutorture: Add ability to test srcu_read_{,un}lock_fast()
Date: Thu, 16 Jan 2025 12:21:09 -0800
Message-Id: <20250116202112.3783327-14-paulmck@kernel.org>
In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop>

This commit permits rcutorture to test srcu_read_{,un}lock_fast(), which
is selected by the rcutorture.reader_flavor=0x8 kernel boot parameter.

Signed-off-by: Paul E.
McKenney
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
---
 kernel/rcu/rcutorture.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 1d2de50fb5d60..1bd3eaa0b8e7a 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -677,6 +677,7 @@ static void srcu_get_gp_data(int *flags, unsigned long *gp_seq)
 static int srcu_torture_read_lock(void)
 {
 	int idx;
+	struct srcu_ctr __percpu *scp;
 	int ret = 0;
 
 	if ((reader_flavor & SRCU_READ_FLAVOR_NORMAL) || !(reader_flavor & SRCU_READ_FLAVOR_ALL)) {
@@ -694,6 +695,12 @@ static int srcu_torture_read_lock(void)
 		WARN_ON_ONCE(idx & ~0x1);
 		ret += idx << 2;
 	}
+	if (reader_flavor & SRCU_READ_FLAVOR_FAST) {
+		scp = srcu_read_lock_fast(srcu_ctlp);
+		idx = __srcu_ptr_to_ctr(srcu_ctlp, scp);
+		WARN_ON_ONCE(idx & ~0x1);
+		ret += idx << 3;
+	}
 	return ret;
 }
 
@@ -719,6 +726,8 @@ srcu_read_delay(struct torture_random_state *rrsp, struct rt_read_seg *rtrsp)
 static void srcu_torture_read_unlock(int idx)
 {
 	WARN_ON_ONCE((reader_flavor && (idx & ~reader_flavor)) || (!reader_flavor && (idx & ~0x1)));
+	if (reader_flavor & SRCU_READ_FLAVOR_FAST)
+		srcu_read_unlock_fast(srcu_ctlp, __srcu_ctr_to_ptr(srcu_ctlp, (idx & 0x8) >> 3));
 	if (reader_flavor & SRCU_READ_FLAVOR_LITE)
 		srcu_read_unlock_lite(srcu_ctlp, (idx & 0x4) >> 2);
 	if (reader_flavor & SRCU_READ_FLAVOR_NMI)

From patchwork Thu Jan 16 20:21:10 2025
X-Patchwork-Submitter: "Paul E.
McKenney"
X-Patchwork-Id: 13942254
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
 "Paul E. McKenney", Alexei Starovoitov, Andrii Nakryiko, Peter Zijlstra,
 Kent Overstreet, bpf@vger.kernel.org
Subject: [PATCH rcu 15/17] refscale: Add srcu_read_lock_fast() support using "srcu-fast"
Date: Thu, 16 Jan 2025 12:21:10 -0800
Message-Id: <20250116202112.3783327-15-paulmck@kernel.org>
In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop>

This commit creates a new srcu-fast option for the refscale.scale_type
module parameter that selects srcu_read_lock_fast() and
srcu_read_unlock_fast().

Signed-off-by: Paul E.
McKenney
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
---
 kernel/rcu/refscale.c | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/refscale.c b/kernel/rcu/refscale.c
index 1b47376acdc40..f11a7c2af778c 100644
--- a/kernel/rcu/refscale.c
+++ b/kernel/rcu/refscale.c
@@ -216,6 +216,36 @@ static const struct ref_scale_ops srcu_ops = {
 	.name		= "srcu"
 };
 
+static void srcu_fast_ref_scale_read_section(const int nloops)
+{
+	int i;
+	struct srcu_ctr __percpu *scp;
+
+	for (i = nloops; i >= 0; i--) {
+		scp = srcu_read_lock_fast(srcu_ctlp);
+		srcu_read_unlock_fast(srcu_ctlp, scp);
+	}
+}
+
+static void srcu_fast_ref_scale_delay_section(const int nloops, const int udl, const int ndl)
+{
+	int i;
+	struct srcu_ctr __percpu *scp;
+
+	for (i = nloops; i >= 0; i--) {
+		scp = srcu_read_lock_fast(srcu_ctlp);
+		un_delay(udl, ndl);
+		srcu_read_unlock_fast(srcu_ctlp, scp);
+	}
+}
+
+static const struct ref_scale_ops srcu_fast_ops = {
+	.init		= rcu_sync_scale_init,
+	.readsection	= srcu_fast_ref_scale_read_section,
+	.delaysection	= srcu_fast_ref_scale_delay_section,
+	.name		= "srcu-fast"
+};
+
 static void srcu_lite_ref_scale_read_section(const int nloops)
 {
 	int i;
@@ -1163,7 +1193,7 @@ ref_scale_init(void)
 	long i;
 	int firsterr = 0;
 	static const struct ref_scale_ops *scale_ops[] = {
-		&rcu_ops, &srcu_ops, &srcu_lite_ops, RCU_TRACE_OPS RCU_TASKS_OPS
+		&rcu_ops, &srcu_ops, &srcu_fast_ops, &srcu_lite_ops, RCU_TRACE_OPS RCU_TASKS_OPS
 		&refcnt_ops, &rwlock_ops, &rwsem_ops, &lock_ops, &lock_irq_ops,
 		&acqrel_ops, &sched_clock_ops, &clock_ops, &jiffies_ops,
 		&typesafe_ref_ops, &typesafe_lock_ops, &typesafe_seqlock_ops,

From patchwork Thu Jan 16 20:21:11 2025
X-Patchwork-Submitter: "Paul E.
McKenney"
X-Patchwork-Id: 13942256
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
 "Paul E. McKenney"
Subject: [PATCH rcu 16/17] rcutorture: Make scenario SRCU-P use srcu_read_lock_fast()
Date: Thu, 16 Jan 2025 12:21:11 -0800
Message-Id: <20250116202112.3783327-16-paulmck@kernel.org>
In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop>

This commit causes the rcutorture SRCU-P scenario to use the
srcu_read_lock_fast() and srcu_read_unlock_fast() functions.  This will
cause these two functions to be regularly tested by several developers
(myself included), for example those who use torture.sh as an RCU
acceptance test.

Signed-off-by: Paul E. McKenney
---
 tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot
index 2db39f298d182..fb61703690cb3 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot
+++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot
@@ -2,3 +2,4 @@ rcutorture.torture_type=srcud
 rcupdate.rcu_self_test=1
 rcutorture.fwd_progress=3
 srcutree.big_cpu_lim=5
+rcutorture.reader_flavor=0x8

From patchwork Thu Jan 16 20:21:12 2025
X-Patchwork-Submitter: "Paul E.
McKenney"
X-Patchwork-Id: 13942255
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
 "Paul E. McKenney"
Subject: [PATCH rcu 17/17] srcu: Fix srcu_read_unlock_{lite,nmisafe}() kernel-doc
Date: Thu, 16 Jan 2025 12:21:12 -0800
Message-Id: <20250116202112.3783327-17-paulmck@kernel.org>
In-Reply-To: <826c8527-d6ba-46c5-bb89-4625750cbeed@paulmck-laptop>

The srcu_read_unlock_lite() and srcu_read_unlock_nmisafe() kernel-doc
headers both say that their idx parameters must come from
srcu_read_lock().  This would be bad, because a given srcu_struct
structure may be used with only one flavor of SRCU reader.  This commit
therefore updates the srcu_read_unlock_lite() kernel-doc header to say
that its idx parameter must be obtained from srcu_read_lock_lite() and
the srcu_read_unlock_nmisafe() kernel-doc header to say that its idx
parameter must be obtained from srcu_read_lock_nmisafe().

Signed-off-by: Paul E. McKenney
---
 include/linux/srcu.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 63bddc3014238..a0df80baaccf3 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -402,7 +402,7 @@ static inline void srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ct
 /**
  * srcu_read_unlock_lite - unregister a old reader from an SRCU-protected structure.
  * @ssp: srcu_struct in which to unregister the old reader.
- * @idx: return value from corresponding srcu_read_lock().
+ * @idx: return value from corresponding srcu_read_lock_lite().
 *
 * Exit a light-weight SRCU read-side critical section.
 */
@@ -418,7 +418,7 @@
 /**
  * srcu_read_unlock_nmisafe - unregister a old reader from an SRCU-protected structure.
  * @ssp: srcu_struct in which to unregister the old reader.
- * @idx: return value from corresponding srcu_read_lock().
+ * @idx: return value from corresponding srcu_read_lock_nmisafe().
  *
  * Exit an SRCU read-side critical section, but in an NMI-safe manner.
  */
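[Editorial note, not part of the series: the per-CPU lock/unlock counter pairs
manipulated by __srcu_read_lock_fast() and __srcu_read_unlock_fast() in patch
13 can be modeled in userspace.  The following stand-alone C sketch uses
hypothetical toy_* names, a single global counter pair, and none of the
kernel's per-CPU, this_cpu_inc(), or barrier() machinery; it only illustrates
why a grace period must wait until the lock and unlock sums match.]

```c
#include <assert.h>

/*
 * Toy model of the srcu_read_lock_fast() accounting.  The real code
 * keeps one srcu_locks/srcu_unlocks pair per CPU and relies on
 * this_cpu_inc() plus barrier(); this model collapses all of that
 * into one pair of plain counters.
 */
struct toy_srcu_ctr {
	unsigned long srcu_locks;	/* models scp->srcu_locks.counter */
	unsigned long srcu_unlocks;	/* models scp->srcu_unlocks.counter */
};

static struct toy_srcu_ctr toy_ctr;

/* Model of __srcu_read_lock_fast(): count the new reader and return
 * the counter pair to pass to the matching unlock. */
struct toy_srcu_ctr *toy_read_lock_fast(void)
{
	toy_ctr.srcu_locks++;
	return &toy_ctr;
}

/* Model of __srcu_read_unlock_fast(): count the departing reader on
 * the same counter pair (in the kernel, possibly from a different CPU,
 * but always from the same task). */
void toy_read_unlock_fast(struct toy_srcu_ctr *scp)
{
	scp->srcu_unlocks++;
}

/* A grace period may end only once the two sums match. */
int toy_readers_active(void)
{
	return toy_ctr.srcu_locks != toy_ctr.srcu_unlocks;
}
```

Under this model, a reader between lock and unlock keeps the sums unequal,
which is exactly the condition the grace-period machinery polls for.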
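[Editorial note, not part of the series: patch 14's srcu_torture_read_lock()
packs each reader flavor's 0/1 SRCU index into a distinct bit of its return
value, using bit 3 for the fast flavor (ret += idx << 3), and
srcu_torture_read_unlock() recovers it with (idx & 0x8) >> 3.  The following
minimal C sketch of that packing uses illustrative helper names that do not
appear in the patch.]

```c
#include <assert.h>

/* Bit position holding the fast-flavor index, mirroring rcutorture's
 * "ret += idx << 3" / "(idx & 0x8) >> 3" pairing (illustrative only). */
#define TOY_FLAVOR_FAST_SHIFT 3

/* Pack a 0/1 SRCU index into the fast-flavor bit of the return value. */
int toy_pack_fast_idx(int ret, int idx)
{
	assert((idx & ~0x1) == 0);	/* SRCU indexes are always 0 or 1 */
	return ret + (idx << TOY_FLAVOR_FAST_SHIFT);
}

/* Recover the 0/1 index from the packed value, as the unlock path does. */
int toy_unpack_fast_idx(int packed)
{
	return (packed & 0x8) >> TOY_FLAVOR_FAST_SHIFT;
}
```

Because each flavor owns a distinct bit, one packed integer can carry the
indexes of several simultaneously enabled reader flavors.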