From patchwork Tue Oct 3 23:29:01 2023
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 13408093
From: Frederic Weisbecker
To: "Paul E. McKenney"
Cc: LKML, Frederic Weisbecker, Yong He, Neeraj Upadhyay, Joel Fernandes,
    Boqun Feng, Uladzislau Rezki, RCU
Subject: [PATCH 3/5] srcu: Remove superfluous callbacks advancing from srcu_gp_start()
Date: Wed, 4 Oct 2023 01:29:01 +0200
Message-ID: <20231003232903.7109-4-frederic@kernel.org>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20231003232903.7109-1-frederic@kernel.org>
References: <20231003232903.7109-1-frederic@kernel.org>
X-Mailing-List: rcu@vger.kernel.org

Callback advancing on SRCU must be performed in two specific places:

1) At enqueue time, in order to make room for the acceleration of the
   new callback.

2) At invocation time, in order to move the callbacks that are ready
   to be invoked.

Any other callback-advancing callsite is needless. Remove the remaining
one in srcu_gp_start().

Co-developed-by: Yong He
Co-developed-by: Joel Fernandes
Co-developed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
 kernel/rcu/srcutree.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 560e99ec5333..e9356a103626 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -772,20 +772,10 @@ EXPORT_SYMBOL_GPL(__srcu_read_unlock_nmisafe);
  */
 static void srcu_gp_start(struct srcu_struct *ssp)
 {
-	struct srcu_data *sdp;
 	int state;
 
-	if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
-		sdp = per_cpu_ptr(ssp->sda, get_boot_cpu_id());
-	else
-		sdp = this_cpu_ptr(ssp->sda);
 	lockdep_assert_held(&ACCESS_PRIVATE(ssp->srcu_sup, lock));
 	WARN_ON_ONCE(ULONG_CMP_GE(ssp->srcu_sup->srcu_gp_seq, ssp->srcu_sup->srcu_gp_seq_needed));
-	spin_lock_rcu_node(sdp); /* Interrupts already disabled. */
-	rcu_segcblist_advance(&sdp->srcu_cblist,
-			      rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
-	WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL));
-	spin_unlock_rcu_node(sdp); /* Interrupts remain disabled. */
 	WRITE_ONCE(ssp->srcu_sup->srcu_gp_start, jiffies);
 	WRITE_ONCE(ssp->srcu_sup->srcu_n_exp_nodelay, 0);
 	smp_mb(); /* Order prior store to ->srcu_gp_seq_needed vs. GP start. */
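
Editor's note: for readers unfamiliar with segmented callback lists, the
stand-alone toy model below sketches why advancing only needs to happen at
the two points listed in the changelog (enqueue time and invocation time).
This is not kernel code: every toy_* name is invented for illustration, and
the real implementation uses rcu_segcblist with DONE/WAIT/NEXT_READY/NEXT
segments; to my reading of the tree at the time of this series, the two
remaining advance sites are srcu_gp_start_if_needed() and
srcu_invoke_callbacks().

/* Toy user-space model only -- all toy_* names are invented for illustration. */
#include <stdio.h>

enum seg { SEG_DONE, SEG_WAIT, SEG_NEXT, NR_SEGS };

struct toy_cblist {
	int nr[NR_SEGS];                /* callbacks queued per segment */
	unsigned long gp_seq[NR_SEGS];  /* GP that must complete before the segment is ready */
};

/* Move every non-DONE segment whose grace period has completed into DONE. */
static void toy_advance(struct toy_cblist *cl, unsigned long completed_gp)
{
	for (int s = SEG_WAIT; s < NR_SEGS; s++) {
		if (cl->nr[s] && cl->gp_seq[s] <= completed_gp) {
			cl->nr[SEG_DONE] += cl->nr[s];
			cl->nr[s] = 0;
		}
	}
}

/* Advance point 1: enqueue time, making room before accelerating the new callback. */
static void toy_enqueue(struct toy_cblist *cl, unsigned long completed_gp,
			unsigned long next_gp)
{
	toy_advance(cl, completed_gp);
	cl->nr[SEG_NEXT]++;
	cl->gp_seq[SEG_NEXT] = next_gp;  /* "accelerate": tag with the upcoming GP */
}

/* Advance point 2: invocation time, collecting whatever became ready meanwhile. */
static int toy_invoke(struct toy_cblist *cl, unsigned long completed_gp)
{
	int n;

	toy_advance(cl, completed_gp);
	n = cl->nr[SEG_DONE];
	cl->nr[SEG_DONE] = 0;
	return n;
}

int main(void)
{
	struct toy_cblist cl = { { 0 }, { 0 } };

	toy_enqueue(&cl, 0, 1);   /* two callbacks waiting on GP #1 */
	toy_enqueue(&cl, 0, 1);
	printf("invoked after GP 1: %d\n", toy_invoke(&cl, 1));  /* prints 2 */
	return 0;
}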