From patchwork Tue Oct 3 23:29:00 2023
From: Frederic Weisbecker
To: "Paul E. McKenney"
Cc: LKML, Frederic Weisbecker, Yong He, Neeraj upadhyay, Joel Fernandes,
    Boqun Feng, Uladzislau Rezki, RCU
Subject: [PATCH 2/5] srcu: Only accelerate on enqueue time
Date: Wed, 4 Oct 2023 01:29:00 +0200
Message-ID: <20231003232903.7109-3-frederic@kernel.org>
In-Reply-To: <20231003232903.7109-1-frederic@kernel.org>
References: <20231003232903.7109-1-frederic@kernel.org>

Acceleration in SRCU happens at enqueue time for each new callback. This
operation is expected not to fail, so any similar attempt made elsewhere
should find no remaining callbacks to accelerate. Moreover, accelerations
performed beyond enqueue time are error prone because rcu_seq_snap() may
then return the snapshot for a new grace period that is not going to be
started.

Remove these dangerous and needless accelerations and instead add
assertions reporting any unaccelerated callbacks that leak beyond
enqueue time.

Co-developed-by: Yong He
Co-developed-by: Joel Fernandes
Co-developed-by: Neeraj upadhyay
Signed-off-by: Frederic Weisbecker
Reviewed-by: Like Xu
---
 kernel/rcu/srcutree.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 9fab9ac36996..560e99ec5333 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -784,8 +784,7 @@ static void srcu_gp_start(struct srcu_struct *ssp)
 	spin_lock_rcu_node(sdp); /* Interrupts already disabled. */
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
-	(void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
-				       rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq));
+	WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL));
 	spin_unlock_rcu_node(sdp); /* Interrupts remain disabled. */
 	WRITE_ONCE(ssp->srcu_sup->srcu_gp_start, jiffies);
 	WRITE_ONCE(ssp->srcu_sup->srcu_n_exp_nodelay, 0);
@@ -1721,6 +1720,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
 	ssp = sdp->ssp;
 	rcu_cblist_init(&ready_cbs);
 	spin_lock_irq_rcu_node(sdp);
+	WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL));
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
 	if (sdp->srcu_cblist_invoking ||
@@ -1750,8 +1750,6 @@ static void srcu_invoke_callbacks(struct work_struct *work)
 	 */
 	spin_lock_irq_rcu_node(sdp);
 	rcu_segcblist_add_len(&sdp->srcu_cblist, -len);
-	(void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
-				       rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq));
 	sdp->srcu_cblist_invoking = false;
 	more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist);
 	spin_unlock_irq_rcu_node(sdp);
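
For background on the rcu_seq_snap() hazard described in the changelog,
below is a minimal user-space sketch of the grace-period sequence
arithmetic. The two macros and the snap formula mirror the helpers in
kernel/rcu/rcu.h; everything else (the main() driver and the printf()
calls) is illustrative scaffolding, not kernel code.

#include <stdio.h>

/* Mirrors kernel/rcu/rcu.h: low bits hold GP phase, upper bits count GPs. */
#define RCU_SEQ_CTR_SHIFT	2
#define RCU_SEQ_STATE_MASK	((1UL << RCU_SEQ_CTR_SHIFT) - 1)

/* Current value of the grace-period sequence counter. */
static unsigned long rcu_seq_current(unsigned long *sp)
{
	return *sp;
}

/*
 * Sequence value marking the end of the GP after any currently in
 * flight: the value callbacks are accelerated against.
 */
static unsigned long rcu_seq_snap(unsigned long *sp)
{
	return (*sp + 2 * RCU_SEQ_STATE_MASK + 1) & ~RCU_SEQ_STATE_MASK;
}

int main(void)
{
	unsigned long gp_seq = 0;

	/* Idle: snap names the end of the very next GP (seq 4). */
	printf("idle:        seq=%lu snap=%lu\n",
	       rcu_seq_current(&gp_seq), rcu_seq_snap(&gp_seq));

	/* A GP starts: the low-order state bits become nonzero. */
	gp_seq |= 1;

	/*
	 * In-progress GP: snap now names the end of a *second* GP
	 * (seq 8). A callback accelerated against this cookie outside
	 * the enqueue path waits for a grace period that nothing has
	 * requested, which is what the new WARN_ON_ONCE() assertions
	 * are there to catch.
	 */
	printf("in progress: seq=%lu snap=%lu\n",
	       rcu_seq_current(&gp_seq), rcu_seq_snap(&gp_seq));
	return 0;
}

Running this prints "idle: seq=0 snap=4" then "in progress: seq=1
snap=8": once a grace period is underway, the snapshot jumps past it to
a second grace period, one that only the enqueue path also takes care
to request.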