From patchwork Fri Oct 13 11:59:02 2023
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 13420819
From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, Boqun Feng, Joel Fernandes, Josh Triplett,
    Mathieu Desnoyers, Neeraj Upadhyay, "Paul E. McKenney",
    Steven Rostedt, Uladzislau Rezki, rcu, Yong He, Neeraj upadhyay,
    Like Xu
Subject: [PATCH 18/18] srcu: Only accelerate on enqueue time
Date: Fri, 13 Oct 2023 13:59:02 +0200
Message-Id: <20231013115902.1059735-19-frederic@kernel.org>
In-Reply-To: <20231013115902.1059735-1-frederic@kernel.org>
References: <20231013115902.1059735-1-frederic@kernel.org>

Acceleration in SRCU happens at enqueue time for each new callback. This
operation is expected not to fail, so any similar attempt from other
places should find no remaining callbacks to accelerate. Moreover,
accelerations performed beyond enqueue time are error prone because
rcu_seq_snap() may then return the snapshot of a new grace period that
is never actually going to be started.

Remove these dangerous and needless accelerations and instead introduce
assertions reporting any unaccelerated callbacks that leak beyond
enqueue time.

Co-developed-by: Yong He
Signed-off-by: Yong He
Co-developed-by: Joel Fernandes (Google)
Signed-off-by: Joel Fernandes (Google)
Co-developed-by: Neeraj upadhyay
Signed-off-by: Neeraj upadhyay
Reviewed-by: Like Xu
Signed-off-by: Frederic Weisbecker
---
 kernel/rcu/srcutree.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 9fab9ac36996..560e99ec5333 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -784,8 +784,7 @@ static void srcu_gp_start(struct srcu_struct *ssp)
 	spin_lock_rcu_node(sdp); /* Interrupts already disabled. */
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
-	(void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
-				       rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq));
+	WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL));
 	spin_unlock_rcu_node(sdp); /* Interrupts remain disabled. */
 	WRITE_ONCE(ssp->srcu_sup->srcu_gp_start, jiffies);
 	WRITE_ONCE(ssp->srcu_sup->srcu_n_exp_nodelay, 0);
@@ -1721,6 +1720,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
 	ssp = sdp->ssp;
 	rcu_cblist_init(&ready_cbs);
 	spin_lock_irq_rcu_node(sdp);
+	WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL));
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
 	if (sdp->srcu_cblist_invoking ||
@@ -1750,8 +1750,6 @@ static void srcu_invoke_callbacks(struct work_struct *work)
 	 */
 	spin_lock_irq_rcu_node(sdp);
 	rcu_segcblist_add_len(&sdp->srcu_cblist, -len);
-	(void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
-				       rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq));
 	sdp->srcu_cblist_invoking = false;
 	more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist);
 	spin_unlock_irq_rcu_node(sdp);
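
For context, the enqueue-time acceleration this patch now treats as the
only legitimate one lives in __call_srcu(). Below is a simplified sketch
of that path, paraphrased from kernel/rcu/srcutree.c rather than quoted
verbatim; the grace-period bookkeeping and the contention-avoiding lock
helper are abbreviated:

	/*
	 * Paraphrased sketch of the enqueue-time path in __call_srcu()
	 * (not the verbatim kernel code): the callback is enqueued and
	 * accelerated in the same critical section, so the rcu_seq_snap()
	 * snapshot is taken at the only moment it is guaranteed to name a
	 * grace period that will really be started.
	 */
	static bool __call_srcu_sketch(struct srcu_struct *ssp,
				       struct rcu_head *rhp,
				       rcu_callback_t func)
	{
		struct srcu_data *sdp;
		unsigned long flags;
		unsigned long s;
		bool needgp;

		rhp->func = func;
		local_irq_save(flags);
		sdp = raw_cpu_ptr(ssp->sda);
		spin_lock_rcu_node(sdp);
		rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
		/* Advance past already-completed grace periods... */
		rcu_segcblist_advance(&sdp->srcu_cblist,
				      rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
		/* ...then accelerate against the next one. Must not fail here. */
		s = rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq);
		needgp = rcu_segcblist_accelerate(&sdp->srcu_cblist, s);
		spin_unlock_irqrestore_rcu_node(sdp, flags);
		/* Grace-period funneling (srcu_funnel_gp_start() etc.) omitted. */
		return needgp;
	}

Once this is the only place callbacks get accelerated, the assertions
added by the diff merely check that nothing escaped it: RCU_NEXT_TAIL
must already be empty by the time srcu_gp_start() or
srcu_invoke_callbacks() looks at the list.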
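
As for why a late rcu_seq_snap() is error prone: the helper (from
kernel/rcu/rcu.h; the comment wording below is mine) computes the
sequence number at which some future grace period will have completed,
regardless of whether anyone goes on to start it. A callback accelerated
against such a snapshot can therefore wait on a grace period that never
begins:

	/*
	 * From kernel/rcu/rcu.h: return the rcu_seq value at which a full
	 * grace period beyond any already in progress would complete.
	 * Nothing here starts that grace period; if no caller does,
	 * callbacks accelerated against this snapshot are stranded.
	 */
	static inline unsigned long rcu_seq_snap(unsigned long *sp)
	{
		unsigned long s;

		s = (READ_ONCE(*sp) + 2 * RCU_SEQ_STATE_MASK + 1) & ~RCU_SEQ_STATE_MASK;
		smp_mb(); /* Above access must not bleed into critical section. */
		return s;
	}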