From patchwork Mon Jan 29 23:23:39 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13536516
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org
Cc: Frederic Weisbecker, "Paul E. McKenney", Neeraj Upadhyay,
    Joel Fernandes, Josh Triplett, Boqun Feng, Steven Rostedt,
    Mathieu Desnoyers, Lai Jiangshan, Zqiang
Subject: [PATCH 1/8] rcu/exp: Remove full barrier upon main thread wakeup
Date: Mon, 29 Jan 2024 15:23:39 -0800
Message-ID: <20240129232349.3170819-2-boqun.feng@gmail.com>
In-Reply-To: <20240129232349.3170819-1-boqun.feng@gmail.com>
References: <20240129232349.3170819-1-boqun.feng@gmail.com>

From: Frederic Weisbecker

When an expedited grace period is ending, care must be taken so that all
the quiescent states propagated up to the root are correctly ordered
against the wake up of the main expedited grace period workqueue. This
ordering is already carried through the root rnp locking augmented by an
smp_mb__after_unlock_lock() barrier. Therefore the explicit smp_mb()
placed before the wake up is not needed and can be removed.

Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree_exp.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 2ac440bc7e10..014ddf672165 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -198,10 +198,9 @@ static void __rcu_report_exp_rnp(struct rcu_node *rnp,
         }
         if (rnp->parent == NULL) {
             raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
-            if (wake) {
-                smp_mb(); /* EGP done before wake_up(). */
+            if (wake)
                 swake_up_one_online(&rcu_state.expedited_wq);
-            }
+
             break;
         }
         mask = rnp->grpmask;
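The ordering argument above can be illustrated with a plain userspace
analogy (a minimal sketch only, not the kernel code: pthread primitives
stand in for the rnp lock and the swait queue, and every name below is
invented for the example). Releasing and later re-acquiring the same lock
already orders the updater's store before the waiter's re-check, so no
extra full barrier is needed before the wakeup:

/* Build with: gcc -pthread -o wake_order wake_order.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wq = PTHREAD_COND_INITIALIZER;
static bool gp_done;    /* stands in for "quiescent states propagated to the root" */

static void *gp_updater(void *arg)
{
    pthread_mutex_lock(&lock);
    gp_done = true;
    pthread_mutex_unlock(&lock);
    /*
     * No extra full barrier is needed here: the unlock above pairs with
     * the waiter's lock of the same mutex, so gp_done is guaranteed to be
     * visible once the waiter re-checks it under that lock.
     */
    pthread_cond_signal(&wq);
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, gp_updater, NULL);

    pthread_mutex_lock(&lock);
    while (!gp_done)                    /* re-checked under the same lock */
        pthread_cond_wait(&wq, &lock);
    pthread_mutex_unlock(&lock);

    puts("expedited grace period observed as complete");
    pthread_join(t, NULL);
    return 0;
}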
From patchwork Mon Jan 29 23:23:40 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13536517
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org
Cc: Frederic Weisbecker, Kalesh Singh, "Paul E. McKenney", Neeraj Upadhyay,
    Joel Fernandes, Josh Triplett, Boqun Feng, Steven Rostedt,
    Mathieu Desnoyers, Lai Jiangshan, Zqiang
Subject: [PATCH 2/8] rcu/exp: Fix RCU expedited parallel grace period kworker allocation failure recovery
Date: Mon, 29 Jan 2024 15:23:40 -0800
Message-ID: <20240129232349.3170819-3-boqun.feng@gmail.com>
In-Reply-To: <20240129232349.3170819-1-boqun.feng@gmail.com>
References: <20240129232349.3170819-1-boqun.feng@gmail.com>

From: Frederic Weisbecker

Under CONFIG_RCU_EXP_KTHREAD=y, the nodes initialization for expedited
grace periods is queued to a kworker.
However, if the allocation of that kworker failed, the nodes
initialization is performed synchronously by the caller instead.

The check for kworker initialization failure currently relies on the
kworker pointer being NULL, while its value might actually encapsulate an
allocation failure error. Make sure to handle this case.

Reviewed-by: Kalesh Singh
Fixes: 9621fbee44df ("rcu: Move expedited grace period (GP) work to RT kthread_worker")
Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b2bccfd37c38..38c86f2c040b 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4749,6 +4749,7 @@ static void __init rcu_start_exp_gp_kworkers(void)
     rcu_exp_par_gp_kworker = kthread_create_worker(0, par_gp_kworker_name);
     if (IS_ERR_OR_NULL(rcu_exp_par_gp_kworker)) {
         pr_err("Failed to create %s!\n", par_gp_kworker_name);
+        rcu_exp_par_gp_kworker = NULL;
         kthread_destroy_worker(rcu_exp_gp_kworker);
         return;
     }
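The pitfall being fixed here is easy to reproduce outside the kernel. In
the minimal userspace sketch below, ERR_PTR() and IS_ERR_OR_NULL() are
simplified stand-ins for the kernel helpers, and
kthread_create_worker_stub() is an invented stub that always fails; none
of this is the real kernel code. Forgetting to reset the pointer leaves an
error value that later NULL-based "has the worker started?" checks mistake
for a valid worker:

/* Build with: gcc -o errptr_pitfall errptr_pitfall.c */
#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO 4095

/* Simplified userspace stand-ins for the kernel's error-pointer helpers. */
static inline void *ERR_PTR(long error)
{
    return (void *)error;
}

static inline int IS_ERR_OR_NULL(const void *ptr)
{
    return !ptr || (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static void *rcu_exp_par_gp_kworker;    /* plays the role of the global pointer */

static void *kthread_create_worker_stub(void)
{
    return ERR_PTR(-ENOMEM);            /* simulate the allocation failure */
}

int main(void)
{
    rcu_exp_par_gp_kworker = kthread_create_worker_stub();
    if (IS_ERR_OR_NULL(rcu_exp_par_gp_kworker)) {
        fprintf(stderr, "worker creation failed\n");
        rcu_exp_par_gp_kworker = NULL;  /* the fix: do not keep the error value */
    }

    /* Later checks only test for non-NULL, as rcu_gp_par_worker_started() does. */
    if (rcu_exp_par_gp_kworker)
        puts("worker considered started: work would be queued on a bogus pointer");
    else
        puts("worker considered absent: fall back to synchronous initialization");
    return 0;
}

Commenting out the reset to NULL flips the output to the "bogus pointer"
branch, which is exactly the failure mode the one-line patch avoids.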
From patchwork Mon Jan 29 23:23:41 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13536518
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org
Cc: Frederic Weisbecker, Kalesh Singh, "Paul E. McKenney", Neeraj Upadhyay,
    Joel Fernandes, Josh Triplett, Boqun Feng, Steven Rostedt,
    Mathieu Desnoyers, Lai Jiangshan, Zqiang
Subject: [PATCH 3/8] rcu/exp: Handle RCU expedited grace period kworker allocation failure
Date: Mon, 29 Jan 2024 15:23:41 -0800
Message-ID: <20240129232349.3170819-4-boqun.feng@gmail.com>
In-Reply-To: <20240129232349.3170819-1-boqun.feng@gmail.com>
References: <20240129232349.3170819-1-boqun.feng@gmail.com>

From: Frederic Weisbecker

Just like is done for the kworker performing nodes initialization,
gracefully handle the possible allocation failure of the RCU expedited
grace period main kworker. While at it perform a rename of the related
checking functions to better reflect the expedited specifics.

Reviewed-by: Kalesh Singh
Fixes: 9621fbee44df ("rcu: Move expedited grace period (GP) work to RT kthread_worker")
Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c     |  2 ++
 kernel/rcu/tree_exp.h | 25 +++++++++++++++++++------
 2 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 38c86f2c040b..f2c10d351b59 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4743,6 +4743,7 @@ static void __init rcu_start_exp_gp_kworkers(void)
     rcu_exp_gp_kworker = kthread_create_worker(0, gp_kworker_name);
     if (IS_ERR_OR_NULL(rcu_exp_gp_kworker)) {
         pr_err("Failed to create %s!\n", gp_kworker_name);
+        rcu_exp_gp_kworker = NULL;
         return;
     }
 
@@ -4751,6 +4752,7 @@ static void __init rcu_start_exp_gp_kworkers(void)
         pr_err("Failed to create %s!\n", par_gp_kworker_name);
         rcu_exp_par_gp_kworker = NULL;
         kthread_destroy_worker(rcu_exp_gp_kworker);
+        rcu_exp_gp_kworker = NULL;
         return;
     }
 
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 014ddf672165..6123a60d9a4d 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -427,7 +427,12 @@ static void sync_rcu_exp_select_node_cpus(struct kthread_work *wp)
     __sync_rcu_exp_select_node_cpus(rewp);
 }
 
-static inline bool rcu_gp_par_worker_started(void)
+static inline bool rcu_exp_worker_started(void)
+{
+    return !!READ_ONCE(rcu_exp_gp_kworker);
+}
+
+static inline bool rcu_exp_par_worker_started(void)
 {
     return !!READ_ONCE(rcu_exp_par_gp_kworker);
 }
@@ -477,7 +482,12 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
     __sync_rcu_exp_select_node_cpus(rewp);
 }
 
-static inline bool rcu_gp_par_worker_started(void)
+static inline bool rcu_exp_worker_started(void)
+{
+    return !!READ_ONCE(rcu_gp_wq);
+}
+
+static inline bool rcu_exp_par_worker_started(void)
 {
     return !!READ_ONCE(rcu_par_gp_wq);
 }
@@ -550,7 +550,7 @@ static void sync_rcu_exp_select_cpus(void)
         rnp->exp_need_flush = false;
         if (!READ_ONCE(rnp->expmask))
             continue; /* Avoid early boot non-existent wq. */
-        if (!rcu_gp_par_worker_started() ||
+        if (!rcu_exp_par_worker_started() ||
             rcu_scheduler_active != RCU_SCHEDULER_RUNNING ||
             rcu_is_last_leaf_node(rnp)) {
             /* No worker started yet or last leaf, do direct call. */
@@ -955,7 +965,7 @@ static void rcu_exp_print_detail_task_stall_rnp(struct rcu_node *rnp)
  */
 void synchronize_rcu_expedited(void)
 {
-    bool boottime = (rcu_scheduler_active == RCU_SCHEDULER_INIT);
+    bool use_worker;
     unsigned long flags;
     struct rcu_exp_work rew;
     struct rcu_node *rnp;
@@ -966,6 +976,9 @@ void synchronize_rcu_expedited(void)
              lock_is_held(&rcu_sched_lock_map),
              "Illegal synchronize_rcu_expedited() in RCU read-side critical section");
 
+    use_worker = (rcu_scheduler_active != RCU_SCHEDULER_INIT) &&
+             rcu_exp_worker_started();
+
     /* Is the state is such that the call is a grace period? */
     if (rcu_blocking_is_gp()) {
         // Note well that this code runs with !PREEMPT && !SMP.
@@ -995,7 +1008,7 @@ void synchronize_rcu_expedited(void)
         return;  /* Someone else did our work for us. */
 
     /* Ensure that load happens before action based on it. */
-    if (unlikely(boottime)) {
+    if (unlikely(!use_worker)) {
         /* Direct call during scheduler init and early_initcalls(). */
         rcu_exp_sel_wait_wake(s);
     } else {
@@ -1013,7 +1026,7 @@ void synchronize_rcu_expedited(void)
     /* Let the next expedited grace period start. */
     mutex_unlock(&rcu_state.exp_mutex);
 
-    if (likely(!boottime))
+    if (likely(use_worker))
         synchronize_rcu_expedited_destroy_work(&rew);
 }
 EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
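To summarize the control flow that results from this change, here is a
compile-and-run sketch of the decision synchronize_rcu_expedited() now
makes, with the relevant kernel state reduced to an enum and a flag. The
stub names are invented for the example; the real code queues rew.rew_work
on the expedited GP kworker rather than printing:

/* Build with: gcc -o exp_decision exp_decision.c */
#include <stdbool.h>
#include <stdio.h>

enum { RCU_SCHEDULER_INACTIVE, RCU_SCHEDULER_INIT, RCU_SCHEDULER_RUNNING };

static int rcu_scheduler_active = RCU_SCHEDULER_INIT;
static bool exp_worker_started;     /* false when the kworker allocation failed */

static void synchronize_rcu_expedited_sketch(void)
{
    bool use_worker = (rcu_scheduler_active != RCU_SCHEDULER_INIT) &&
                      exp_worker_started;

    if (!use_worker)
        puts("direct call: rcu_exp_sel_wait_wake()");
    else
        puts("queue the work item on the expedited GP kworker");
}

int main(void)
{
    synchronize_rcu_expedited_sketch();     /* early boot: direct call */

    rcu_scheduler_active = RCU_SCHEDULER_RUNNING;
    synchronize_rcu_expedited_sketch();     /* kworker missing: still direct */

    exp_worker_started = true;
    synchronize_rcu_expedited_sketch();     /* normal case: use the kworker */
    return 0;
}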
From patchwork Mon Jan 29 23:23:42 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13536519
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org
Cc: Frederic Weisbecker, "Paul E. McKenney", Neeraj Upadhyay,
    Joel Fernandes, Josh Triplett, Boqun Feng, Steven Rostedt,
    Mathieu Desnoyers, Lai Jiangshan, Zqiang
Subject: [PATCH 4/8] rcu: s/boost_kthread_mutex/kthread_mutex
Date: Mon, 29 Jan 2024 15:23:42 -0800
Message-ID: <20240129232349.3170819-5-boqun.feng@gmail.com>
In-Reply-To: <20240129232349.3170819-1-boqun.feng@gmail.com>
References: <20240129232349.3170819-1-boqun.feng@gmail.com>

From: Frederic Weisbecker

This mutex is currently protecting per node boost kthreads creation and
affinity setting across CPU hotplug operations. Since the expedited
kworkers will soon be split per node as well, they will be subject to the
same concurrency constraints against hotplug. Therefore their creation and
affinity tuning operations will be grouped with those of boost kthreads
and then rely on the same mutex. To prepare for that, generalize its name.

Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c        |  2 +-
 kernel/rcu/tree.h        |  2 +-
 kernel/rcu/tree_plugin.h | 10 +++++-----
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f2c10d351b59..cdb80835c469 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4918,7 +4918,7 @@ static void __init rcu_init_one(void)
             init_waitqueue_head(&rnp->exp_wq[2]);
             init_waitqueue_head(&rnp->exp_wq[3]);
             spin_lock_init(&rnp->exp_lock);
-            mutex_init(&rnp->boost_kthread_mutex);
+            mutex_init(&rnp->kthread_mutex);
             raw_spin_lock_init(&rnp->exp_poll_lock);
             rnp->exp_seq_poll_rq = RCU_GET_STATE_COMPLETED;
             INIT_WORK(&rnp->exp_poll_wq, sync_rcu_do_polled_gp);
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index e9821a8422db..13e7b0d907ab 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -113,7 +113,7 @@ struct rcu_node {
                 /*  side effect, not as a lock. */
     unsigned long boost_time;
                 /* When to start boosting (jiffies). */
-    struct mutex boost_kthread_mutex;
+    struct mutex kthread_mutex;
                 /* Exclusion for thread spawning and affinity */
                 /* manipulation. */
     struct task_struct *boost_kthread_task;
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 41021080ad25..0d307674915c 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1195,7 +1195,7 @@ static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp)
     struct sched_param sp;
     struct task_struct *t;
 
-    mutex_lock(&rnp->boost_kthread_mutex);
+    mutex_lock(&rnp->kthread_mutex);
     if (rnp->boost_kthread_task || !rcu_scheduler_fully_active)
         goto out;
 
@@ -1212,7 +1212,7 @@ static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp)
     wake_up_process(t); /* get to TASK_INTERRUPTIBLE quickly. */
 
  out:
-    mutex_unlock(&rnp->boost_kthread_mutex);
+    mutex_unlock(&rnp->kthread_mutex);
 }
 
 /*
@@ -1224,7 +1224,7 @@ static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp)
  * no outgoing CPU.  If there are no CPUs left in the affinity set,
  * this function allows the kthread to execute on any CPU.
  *
- * Any future concurrent calls are serialized via ->boost_kthread_mutex.
+ * Any future concurrent calls are serialized via ->kthread_mutex.
  */
 static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
 {
@@ -1237,7 +1237,7 @@ static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
         return;
     if (!zalloc_cpumask_var(&cm, GFP_KERNEL))
         return;
-    mutex_lock(&rnp->boost_kthread_mutex);
+    mutex_lock(&rnp->kthread_mutex);
     mask = rcu_rnp_online_cpus(rnp);
     for_each_leaf_node_possible_cpu(rnp, cpu)
         if ((mask & leaf_node_cpu_bit(rnp, cpu)) &&
@@ -1250,7 +1250,7 @@ static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
         cpumask_clear_cpu(outgoingcpu, cm);
     }
     set_cpus_allowed_ptr(t, cm);
-    mutex_unlock(&rnp->boost_kthread_mutex);
+    mutex_unlock(&rnp->kthread_mutex);
     free_cpumask_var(cm);
 }
From patchwork Mon Jan 29 23:23:43 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13536520
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org
Cc: Frederic Weisbecker, "Paul E. McKenney", Neeraj Upadhyay,
    Joel Fernandes, Josh Triplett, Boqun Feng, Steven Rostedt,
    Mathieu Desnoyers, Lai Jiangshan, Zqiang
Subject: [PATCH 5/8] rcu/exp: Move expedited kthread worker creation functions above rcutree_prepare_cpu()
Date: Mon, 29 Jan 2024 15:23:43 -0800
Message-ID: <20240129232349.3170819-6-boqun.feng@gmail.com>
In-Reply-To: <20240129232349.3170819-1-boqun.feng@gmail.com>
References: <20240129232349.3170819-1-boqun.feng@gmail.com>

From: Frederic Weisbecker

The expedited kthread worker performing the per node initialization is
going to be split into per node kthreads. As such, the future per node
kthread creation will need to be called from CPU hotplug callbacks instead
of an initcall, right beside the per node boost kthread creation.

To prepare for that, move the kthread worker creation above
rcutree_prepare_cpu() as a first step to make the review smoother for the
upcoming modifications.

No intended functional change.

Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 96 +++++++++++++++++++++++------------------------
 1 file changed, 48 insertions(+), 48 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index cdb80835c469..657ac12f9e27 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4394,6 +4394,54 @@ rcu_boot_init_percpu_data(int cpu)
     rcu_boot_init_nocb_percpu_data(rdp);
 }
 
+#ifdef CONFIG_RCU_EXP_KTHREAD
+struct kthread_worker *rcu_exp_gp_kworker;
+struct kthread_worker *rcu_exp_par_gp_kworker;
+
+static void __init rcu_start_exp_gp_kworkers(void)
+{
+    const char *par_gp_kworker_name = "rcu_exp_par_gp_kthread_worker";
+    const char *gp_kworker_name = "rcu_exp_gp_kthread_worker";
+    struct sched_param param = { .sched_priority = kthread_prio };
+
+    rcu_exp_gp_kworker = kthread_create_worker(0, gp_kworker_name);
+    if (IS_ERR_OR_NULL(rcu_exp_gp_kworker)) {
+        pr_err("Failed to create %s!\n", gp_kworker_name);
+        rcu_exp_gp_kworker = NULL;
+        return;
+    }
+
+    rcu_exp_par_gp_kworker = kthread_create_worker(0, par_gp_kworker_name);
+    if (IS_ERR_OR_NULL(rcu_exp_par_gp_kworker)) {
+        pr_err("Failed to create %s!\n", par_gp_kworker_name);
+        rcu_exp_par_gp_kworker = NULL;
+        kthread_destroy_worker(rcu_exp_gp_kworker);
+        rcu_exp_gp_kworker = NULL;
+        return;
+    }
+
+    sched_setscheduler_nocheck(rcu_exp_gp_kworker->task, SCHED_FIFO, &param);
+    sched_setscheduler_nocheck(rcu_exp_par_gp_kworker->task, SCHED_FIFO,
+                   &param);
+}
+
+static inline void rcu_alloc_par_gp_wq(void)
+{
+}
+#else /* !CONFIG_RCU_EXP_KTHREAD */
+struct workqueue_struct *rcu_par_gp_wq;
+
+static void __init rcu_start_exp_gp_kworkers(void)
+{
+}
+
+static inline void rcu_alloc_par_gp_wq(void)
+{
+    rcu_par_gp_wq = alloc_workqueue("rcu_par_gp", WQ_MEM_RECLAIM, 0);
+    WARN_ON(!rcu_par_gp_wq);
+}
+#endif /* CONFIG_RCU_EXP_KTHREAD */
+
 /*
  * Invoked early in the CPU-online process, when pretty much all services
  * are available.  The incoming CPU is not present.
@@ -4730,54 +4778,6 @@ static int rcu_pm_notify(struct notifier_block *self,
     return NOTIFY_OK;
 }
 
-#ifdef CONFIG_RCU_EXP_KTHREAD
-struct kthread_worker *rcu_exp_gp_kworker;
-struct kthread_worker *rcu_exp_par_gp_kworker;
-
-static void __init rcu_start_exp_gp_kworkers(void)
-{
-    const char *par_gp_kworker_name = "rcu_exp_par_gp_kthread_worker";
-    const char *gp_kworker_name = "rcu_exp_gp_kthread_worker";
-    struct sched_param param = { .sched_priority = kthread_prio };
-
-    rcu_exp_gp_kworker = kthread_create_worker(0, gp_kworker_name);
-    if (IS_ERR_OR_NULL(rcu_exp_gp_kworker)) {
-        pr_err("Failed to create %s!\n", gp_kworker_name);
-        rcu_exp_gp_kworker = NULL;
-        return;
-    }
-
-    rcu_exp_par_gp_kworker = kthread_create_worker(0, par_gp_kworker_name);
-    if (IS_ERR_OR_NULL(rcu_exp_par_gp_kworker)) {
-        pr_err("Failed to create %s!\n", par_gp_kworker_name);
-        rcu_exp_par_gp_kworker = NULL;
-        kthread_destroy_worker(rcu_exp_gp_kworker);
-        rcu_exp_gp_kworker = NULL;
-        return;
-    }
-
-    sched_setscheduler_nocheck(rcu_exp_gp_kworker->task, SCHED_FIFO, &param);
-    sched_setscheduler_nocheck(rcu_exp_par_gp_kworker->task, SCHED_FIFO,
-                   &param);
-}
-
-static inline void rcu_alloc_par_gp_wq(void)
-{
-}
-#else /* !CONFIG_RCU_EXP_KTHREAD */
-struct workqueue_struct *rcu_par_gp_wq;
-
-static void __init rcu_start_exp_gp_kworkers(void)
-{
-}
-
-static inline void rcu_alloc_par_gp_wq(void)
-{
-    rcu_par_gp_wq = alloc_workqueue("rcu_par_gp", WQ_MEM_RECLAIM, 0);
-    WARN_ON(!rcu_par_gp_wq);
-}
-#endif /* CONFIG_RCU_EXP_KTHREAD */
-
 /*
  * Spawn the kthreads that handle RCU's grace periods.
  */
From patchwork Mon Jan 29 23:23:44 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13536521
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org
Cc: Frederic Weisbecker, "Paul E. McKenney", Neeraj Upadhyay,
    Joel Fernandes, Josh Triplett, Boqun Feng, Steven Rostedt,
    Mathieu Desnoyers, Lai Jiangshan, Zqiang
Subject: [PATCH 6/8] rcu/exp: Make parallel exp gp kworker per rcu node
Date: Mon, 29 Jan 2024 15:23:44 -0800
Message-ID: <20240129232349.3170819-7-boqun.feng@gmail.com>
In-Reply-To: <20240129232349.3170819-1-boqun.feng@gmail.com>
References: <20240129232349.3170819-1-boqun.feng@gmail.com>

From: Frederic Weisbecker

When CONFIG_RCU_EXP_KTHREAD=n, the expedited grace period per node
initialization is performed in parallel via workqueues (one work per
node). However, in CONFIG_RCU_EXP_KTHREAD=y, this per node initialization
is performed by a single kworker serializing each node initialization (one
work for all nodes).

The latter layout is certainly less scalable and less efficient beyond a
single leaf node. To improve this, expand this single kworker into
per-node kworkers. This new layout is eventually intended to remove the
workqueue-based implementation since it will essentially now become
duplicate code.

Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
McKenney --- kernel/rcu/rcu.h | 1 - kernel/rcu/tree.c | 61 +++++++++++++++++++++++++++------------- kernel/rcu/tree.h | 3 ++ kernel/rcu/tree_exp.h | 10 +++---- kernel/rcu/tree_plugin.h | 10 ++----- 5 files changed, 52 insertions(+), 33 deletions(-) diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h index f94f65877f2b..6beaf70d629f 100644 --- a/kernel/rcu/rcu.h +++ b/kernel/rcu/rcu.h @@ -625,7 +625,6 @@ void rcu_force_quiescent_state(void); extern struct workqueue_struct *rcu_gp_wq; #ifdef CONFIG_RCU_EXP_KTHREAD extern struct kthread_worker *rcu_exp_gp_kworker; -extern struct kthread_worker *rcu_exp_par_gp_kworker; #else /* !CONFIG_RCU_EXP_KTHREAD */ extern struct workqueue_struct *rcu_par_gp_wq; #endif /* CONFIG_RCU_EXP_KTHREAD */ diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 657ac12f9e27..398c099d45d9 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -4396,33 +4396,39 @@ rcu_boot_init_percpu_data(int cpu) #ifdef CONFIG_RCU_EXP_KTHREAD struct kthread_worker *rcu_exp_gp_kworker; -struct kthread_worker *rcu_exp_par_gp_kworker; -static void __init rcu_start_exp_gp_kworkers(void) +static void rcu_spawn_exp_par_gp_kworker(struct rcu_node *rnp) { - const char *par_gp_kworker_name = "rcu_exp_par_gp_kthread_worker"; - const char *gp_kworker_name = "rcu_exp_gp_kthread_worker"; + struct kthread_worker *kworker; + const char *name = "rcu_exp_par_gp_kthread_worker/%d"; struct sched_param param = { .sched_priority = kthread_prio }; + int rnp_index = rnp - rcu_get_root(); - rcu_exp_gp_kworker = kthread_create_worker(0, gp_kworker_name); - if (IS_ERR_OR_NULL(rcu_exp_gp_kworker)) { - pr_err("Failed to create %s!\n", gp_kworker_name); - rcu_exp_gp_kworker = NULL; + if (rnp->exp_kworker) + return; + + kworker = kthread_create_worker(0, name, rnp_index); + if (IS_ERR_OR_NULL(kworker)) { + pr_err("Failed to create par gp kworker on %d/%d\n", + rnp->grplo, rnp->grphi); return; } + WRITE_ONCE(rnp->exp_kworker, kworker); + sched_setscheduler_nocheck(kworker->task, SCHED_FIFO, ¶m); +} - rcu_exp_par_gp_kworker = kthread_create_worker(0, par_gp_kworker_name); - if (IS_ERR_OR_NULL(rcu_exp_par_gp_kworker)) { - pr_err("Failed to create %s!\n", par_gp_kworker_name); - rcu_exp_par_gp_kworker = NULL; - kthread_destroy_worker(rcu_exp_gp_kworker); +static void __init rcu_start_exp_gp_kworker(void) +{ + const char *name = "rcu_exp_gp_kthread_worker"; + struct sched_param param = { .sched_priority = kthread_prio }; + + rcu_exp_gp_kworker = kthread_create_worker(0, name); + if (IS_ERR_OR_NULL(rcu_exp_gp_kworker)) { + pr_err("Failed to create %s!\n", name); rcu_exp_gp_kworker = NULL; return; } - sched_setscheduler_nocheck(rcu_exp_gp_kworker->task, SCHED_FIFO, ¶m); - sched_setscheduler_nocheck(rcu_exp_par_gp_kworker->task, SCHED_FIFO, - ¶m); } static inline void rcu_alloc_par_gp_wq(void) @@ -4431,7 +4437,11 @@ static inline void rcu_alloc_par_gp_wq(void) #else /* !CONFIG_RCU_EXP_KTHREAD */ struct workqueue_struct *rcu_par_gp_wq; -static void __init rcu_start_exp_gp_kworkers(void) +static void rcu_spawn_exp_par_gp_kworker(struct rcu_node *rnp) +{ +} + +static void __init rcu_start_exp_gp_kworker(void) { } @@ -4442,6 +4452,17 @@ static inline void rcu_alloc_par_gp_wq(void) } #endif /* CONFIG_RCU_EXP_KTHREAD */ +static void rcu_spawn_rnp_kthreads(struct rcu_node *rnp) +{ + if ((IS_ENABLED(CONFIG_RCU_EXP_KTHREAD) || + IS_ENABLED(CONFIG_RCU_BOOST)) && rcu_scheduler_fully_active) { + mutex_lock(&rnp->kthread_mutex); + rcu_spawn_one_boost_kthread(rnp); + rcu_spawn_exp_par_gp_kworker(rnp); + 
mutex_unlock(&rnp->kthread_mutex); + } +} + /* * Invoked early in the CPU-online process, when pretty much all services * are available. The incoming CPU is not present. @@ -4490,7 +4511,7 @@ int rcutree_prepare_cpu(unsigned int cpu) rdp->rcu_iw_gp_seq = rdp->gp_seq - 1; trace_rcu_grace_period(rcu_state.name, rdp->gp_seq, TPS("cpuonl")); raw_spin_unlock_irqrestore_rcu_node(rnp, flags); - rcu_spawn_one_boost_kthread(rnp); + rcu_spawn_rnp_kthreads(rnp); rcu_spawn_cpu_nocb_kthread(cpu); WRITE_ONCE(rcu_state.n_online_cpus, rcu_state.n_online_cpus + 1); @@ -4812,10 +4833,10 @@ static int __init rcu_spawn_gp_kthread(void) * due to rcu_scheduler_fully_active. */ rcu_spawn_cpu_nocb_kthread(smp_processor_id()); - rcu_spawn_one_boost_kthread(rdp->mynode); + rcu_spawn_rnp_kthreads(rdp->mynode); rcu_spawn_core_kthreads(); /* Create kthread worker for expedited GPs */ - rcu_start_exp_gp_kworkers(); + rcu_start_exp_gp_kworker(); return 0; } early_initcall(rcu_spawn_gp_kthread); diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index 13e7b0d907ab..e173808f486f 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -72,6 +72,9 @@ struct rcu_node { /* Online CPUs for next expedited GP. */ /* Any CPU that has ever been online will */ /* have its bit set. */ + struct kthread_worker *exp_kworker; + /* Workers performing per node expedited GP */ + /* initialization. */ unsigned long cbovldmask; /* CPUs experiencing callback overload. */ unsigned long ffmask; /* Fully functional CPUs. */ diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h index 6123a60d9a4d..0318a8a062d5 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h @@ -432,9 +432,9 @@ static inline bool rcu_exp_worker_started(void) return !!READ_ONCE(rcu_exp_gp_kworker); } -static inline bool rcu_exp_par_worker_started(void) +static inline bool rcu_exp_par_worker_started(struct rcu_node *rnp) { - return !!READ_ONCE(rcu_exp_par_gp_kworker); + return !!READ_ONCE(rnp->exp_kworker); } static inline void sync_rcu_exp_select_cpus_queue_work(struct rcu_node *rnp) @@ -445,7 +445,7 @@ static inline void sync_rcu_exp_select_cpus_queue_work(struct rcu_node *rnp) * another work item on the same kthread worker can result in * deadlock. */ - kthread_queue_work(rcu_exp_par_gp_kworker, &rnp->rew.rew_work); + kthread_queue_work(READ_ONCE(rnp->exp_kworker), &rnp->rew.rew_work); } static inline void sync_rcu_exp_select_cpus_flush_work(struct rcu_node *rnp) @@ -487,7 +487,7 @@ static inline bool rcu_exp_worker_started(void) return !!READ_ONCE(rcu_gp_wq); } -static inline bool rcu_exp_par_worker_started(void) +static inline bool rcu_exp_par_worker_started(struct rcu_node *rnp) { return !!READ_ONCE(rcu_par_gp_wq); } @@ -550,7 +550,7 @@ static void sync_rcu_exp_select_cpus(void) rnp->exp_need_flush = false; if (!READ_ONCE(rnp->expmask)) continue; /* Avoid early boot non-existent wq. */ - if (!rcu_exp_par_worker_started() || + if (!rcu_exp_par_worker_started(rnp) || rcu_scheduler_active != RCU_SCHEDULER_RUNNING || rcu_is_last_leaf_node(rnp)) { /* No worker started yet or last leaf, do direct call. 
*/ diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index 0d307674915c..09bdd36ca9ff 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -1195,14 +1195,13 @@ static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp) struct sched_param sp; struct task_struct *t; - mutex_lock(&rnp->kthread_mutex); - if (rnp->boost_kthread_task || !rcu_scheduler_fully_active) - goto out; + if (rnp->boost_kthread_task) + return; t = kthread_create(rcu_boost_kthread, (void *)rnp, "rcub/%d", rnp_index); if (WARN_ON_ONCE(IS_ERR(t))) - goto out; + return; raw_spin_lock_irqsave_rcu_node(rnp, flags); rnp->boost_kthread_task = t; @@ -1210,9 +1209,6 @@ static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp) sp.sched_priority = kthread_prio; sched_setscheduler_nocheck(t, SCHED_FIFO, &sp); wake_up_process(t); /* get to TASK_INTERRUPTIBLE quickly. */ - - out: - mutex_unlock(&rnp->kthread_mutex); } /* From patchwork Mon Jan 29 23:23:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boqun Feng X-Patchwork-Id: 13536522 From: Boqun Feng To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org Cc: Frederic Weisbecker , "Paul E .
McKenney" , Boqun Feng , Neeraj Upadhyay , Joel Fernandes , Josh Triplett , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang Subject: [PATCH 7/8] rcu/exp: Handle parallel exp gp kworkers affinity Date: Mon, 29 Jan 2024 15:23:45 -0800 Message-ID: <20240129232349.3170819-8-boqun.feng@gmail.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240129232349.3170819-1-boqun.feng@gmail.com> References: <20240129232349.3170819-1-boqun.feng@gmail.com> Precedence: bulk X-Mailing-List: rcu@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Frederic Weisbecker Affine the parallel expedited gp kworkers to their respective RCU node in order to keep them close to the cache they are playing with. This reuses the boost kthreads machinery, which probes into CPU hotplug operations so that the kthreads become and stay affine to their respective node as soon as, and for as long as, that node contains online CPUs. Otherwise, if the CPU going down was the last one online on the leaf node, the related kthread is affined to the housekeeping CPUs. In the long run, this interplay between kthread affinity and CPU hotplug operations should probably be implemented at the generic kthread level. Signed-off-by: Frederic Weisbecker Signed-off-by: Paul E. McKenney [boqun: s/* rcu_boost_task/*rcu_boost_task as reported by checkpatch] Signed-off-by: Boqun Feng --- kernel/rcu/tree.c | 79 +++++++++++++++++++++++++++++++++++++--- kernel/rcu/tree_plugin.h | 42 ++------------------- 2 files changed, 78 insertions(+), 43 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 398c099d45d9..312c4c5d4509 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -145,7 +145,7 @@ static int rcu_scheduler_fully_active __read_mostly; static void rcu_report_qs_rnp(unsigned long mask, struct rcu_node *rnp, unsigned long gps, unsigned long flags); -static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu); +static struct task_struct *rcu_boost_task(struct rcu_node *rnp); static void invoke_rcu_core(void); static void rcu_report_exp_rdp(struct rcu_data *rdp); static void sync_sched_exp_online_cleanup(int cpu); @@ -4417,6 +4417,16 @@ static void rcu_spawn_exp_par_gp_kworker(struct rcu_node *rnp) sched_setscheduler_nocheck(kworker->task, SCHED_FIFO, &param); } +static struct task_struct *rcu_exp_par_gp_task(struct rcu_node *rnp) +{ + struct kthread_worker *kworker = READ_ONCE(rnp->exp_kworker); + + if (!kworker) + return NULL; + + return kworker->task; +} + static void __init rcu_start_exp_gp_kworker(void) { const char *name = "rcu_exp_gp_kthread_worker"; @@ -4441,6 +4451,11 @@ static void rcu_spawn_exp_par_gp_kworker(struct rcu_node *rnp) { } +static struct task_struct *rcu_exp_par_gp_task(struct rcu_node *rnp) +{ + return NULL; +} + static void __init rcu_start_exp_gp_kworker(void) { } @@ -4519,13 +4534,67 @@ int rcutree_prepare_cpu(unsigned int cpu) } /* - * Update RCU priority boot kthread affinity for CPU-hotplug changes. + * Update kthreads affinity during CPU-hotplug changes. + * + * Set the per-rcu_node kthread's affinity to cover all CPUs that are + * served by the rcu_node in question. The CPU hotplug lock is still + * held, so the value of rnp->qsmaskinit will be stable. + * + * We don't include outgoingcpu in the affinity set, use -1 if there is + * no outgoing CPU. If there are no CPUs left in the affinity set, + * this function allows the kthread to execute on any CPU. + * + * Any future concurrent calls are serialized via ->kthread_mutex.
*/ -static void rcutree_affinity_setting(unsigned int cpu, int outgoing) +static void rcutree_affinity_setting(unsigned int cpu, int outgoingcpu) { - struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); + cpumask_var_t cm; + unsigned long mask; + struct rcu_data *rdp; + struct rcu_node *rnp; + struct task_struct *task_boost, *task_exp; + + if (!IS_ENABLED(CONFIG_RCU_EXP_KTHREAD) && !IS_ENABLED(CONFIG_RCU_BOOST)) + return; + + rdp = per_cpu_ptr(&rcu_data, cpu); + rnp = rdp->mynode; + + task_boost = rcu_boost_task(rnp); + task_exp = rcu_exp_par_gp_task(rnp); + + /* + * If CPU is the boot one, those tasks are created later from early + * initcall since kthreadd must be created first. + */ + if (!task_boost && !task_exp) + return; + + if (!zalloc_cpumask_var(&cm, GFP_KERNEL)) + return; + + mutex_lock(&rnp->kthread_mutex); + mask = rcu_rnp_online_cpus(rnp); + for_each_leaf_node_possible_cpu(rnp, cpu) + if ((mask & leaf_node_cpu_bit(rnp, cpu)) && + cpu != outgoingcpu) + cpumask_set_cpu(cpu, cm); + cpumask_and(cm, cm, housekeeping_cpumask(HK_TYPE_RCU)); + if (cpumask_empty(cm)) { + cpumask_copy(cm, housekeeping_cpumask(HK_TYPE_RCU)); + if (outgoingcpu >= 0) + cpumask_clear_cpu(outgoingcpu, cm); + } + + if (task_exp) + set_cpus_allowed_ptr(task_exp, cm); + + if (task_boost) + set_cpus_allowed_ptr(task_boost, cm); + + mutex_unlock(&rnp->kthread_mutex); - rcu_boost_kthread_setaffinity(rdp->mynode, outgoing); + free_cpumask_var(cm); } /* diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index 09bdd36ca9ff..36a8b5dbf5b5 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -1211,43 +1211,9 @@ static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp) wake_up_process(t); /* get to TASK_INTERRUPTIBLE quickly. */ } -/* - * Set the per-rcu_node kthread's affinity to cover all CPUs that are - * served by the rcu_node in question. The CPU hotplug lock is still - * held, so the value of rnp->qsmaskinit will be stable. - * - * We don't include outgoingcpu in the affinity set, use -1 if there is - * no outgoing CPU. If there are no CPUs left in the affinity set, - * this function allows the kthread to execute on any CPU. - * - * Any future concurrent calls are serialized via ->kthread_mutex. 
- */ -static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu) +static struct task_struct *rcu_boost_task(struct rcu_node *rnp) { - struct task_struct *t = rnp->boost_kthread_task; - unsigned long mask; - cpumask_var_t cm; - int cpu; - - if (!t) - return; - if (!zalloc_cpumask_var(&cm, GFP_KERNEL)) - return; - mutex_lock(&rnp->kthread_mutex); - mask = rcu_rnp_online_cpus(rnp); - for_each_leaf_node_possible_cpu(rnp, cpu) - if ((mask & leaf_node_cpu_bit(rnp, cpu)) && - cpu != outgoingcpu) - cpumask_set_cpu(cpu, cm); - cpumask_and(cm, cm, housekeeping_cpumask(HK_TYPE_RCU)); - if (cpumask_empty(cm)) { - cpumask_copy(cm, housekeeping_cpumask(HK_TYPE_RCU)); - if (outgoingcpu >= 0) - cpumask_clear_cpu(outgoingcpu, cm); - } - set_cpus_allowed_ptr(t, cm); - mutex_unlock(&rnp->kthread_mutex); - free_cpumask_var(cm); + return READ_ONCE(rnp->boost_kthread_task); } #else /* #ifdef CONFIG_RCU_BOOST */ @@ -1266,10 +1232,10 @@ static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp) { } -static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu) +static struct task_struct *rcu_boost_task(struct rcu_node *rnp) { + return NULL; } - #endif /* #else #ifdef CONFIG_RCU_BOOST */ /* From patchwork Mon Jan 29 23:23:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boqun Feng X-Patchwork-Id: 13536523 From: Boqun Feng To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org Cc: Frederic Weisbecker , Anna-Maria Behnsen , Thomas Gleixner , Joel Fernandes , "Paul E .
McKenney" , Neeraj upadhyay , Neeraj Upadhyay , Josh Triplett , Boqun Feng , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang Subject: [PATCH 8/8] rcu/exp: Remove rcu_par_gp_wq Date: Mon, 29 Jan 2024 15:23:46 -0800 Message-ID: <20240129232349.3170819-9-boqun.feng@gmail.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240129232349.3170819-1-boqun.feng@gmail.com> References: <20240129232349.3170819-1-boqun.feng@gmail.com> Precedence: bulk X-Mailing-List: rcu@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Frederic Weisbecker TREE04 running on short iterations can produce writer stalls of the following kind: ??? Writer stall state RTWS_EXP_SYNC(4) g3968 f0x0 ->state 0x2 cpu 0 task:rcu_torture_wri state:D stack:14568 pid:83 ppid:2 flags:0x00004000 Call Trace: __schedule+0x2de/0x850 ? trace_event_raw_event_rcu_exp_funnel_lock+0x6d/0xb0 schedule+0x4f/0x90 synchronize_rcu_expedited+0x430/0x670 ? __pfx_autoremove_wake_function+0x10/0x10 ? __pfx_synchronize_rcu_expedited+0x10/0x10 do_rtws_sync.constprop.0+0xde/0x230 rcu_torture_writer+0x4b4/0xcd0 ? __pfx_rcu_torture_writer+0x10/0x10 kthread+0xc7/0xf0 ? __pfx_kthread+0x10/0x10 ret_from_fork+0x2f/0x50 ? __pfx_kthread+0x10/0x10 ret_from_fork_asm+0x1b/0x30 Waiting for an expedited grace period and polling for an expedited grace period are both operations that internally rely on the same workqueue to perform the necessary asynchronous work. However, a dependency chain is involved between those two operations, as depicted below: ====== CPU 0 ======= ====== CPU 1 ======= synchronize_rcu_expedited() exp_funnel_lock() mutex_lock(&rcu_state.exp_mutex); start_poll_synchronize_rcu_expedited queue_work(rcu_gp_wq, &rnp->exp_poll_wq); synchronize_rcu_expedited_queue_work() queue_work(rcu_gp_wq, &rew->rew_work); wait_event() // A, wait for &rew->rew_work completion mutex_unlock() // B //======> switch to kworker sync_rcu_do_polled_gp() { synchronize_rcu_expedited() exp_funnel_lock() mutex_lock(&rcu_state.exp_mutex); // C, wait B .... } // D Since workqueues are usually implemented on top of several kworkers handling the queue concurrently, the above situation wouldn't deadlock most of the time because A then doesn't depend on D. But under memory stress, a single kworker may end up handling all the work items alone and in a serialized way. In that case the above layout becomes a problem because A then waits for D, closing a circular dependency: A -> D -> C -> B -> A This, however, only happens when CONFIG_RCU_EXP_KTHREAD=n. Indeed, synchronize_rcu_expedited() is otherwise implemented on top of a kthread worker while polling still relies on the rcu_gp_wq workqueue, breaking the above circular dependency chain. Fix this by making the expedited grace period always rely on the kthread worker. The workqueue-based implementation is essentially a duplicate anyway, now that the per-node initialization is performed by per-node kthread workers. Meanwhile the CONFIG_RCU_EXP_KTHREAD switch is still kept around to manage the scheduler policy of these kthread workers. Reported-by: Anna-Maria Behnsen Reported-by: Thomas Gleixner Suggested-by: Joel Fernandes Suggested-by: Paul E. McKenney Suggested-by: Neeraj upadhyay Signed-off-by: Frederic Weisbecker Signed-off-by: Paul E.
McKenney --- kernel/rcu/rcu.h | 4 --- kernel/rcu/tree.c | 40 ++++-------------------- kernel/rcu/tree.h | 6 +--- kernel/rcu/tree_exp.h | 73 +------------------------------------------ 4 files changed, 8 insertions(+), 115 deletions(-) diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h index 6beaf70d629f..99032b9cb667 100644 --- a/kernel/rcu/rcu.h +++ b/kernel/rcu/rcu.h @@ -623,11 +623,7 @@ int rcu_get_gp_kthreads_prio(void); void rcu_fwd_progress_check(unsigned long j); void rcu_force_quiescent_state(void); extern struct workqueue_struct *rcu_gp_wq; -#ifdef CONFIG_RCU_EXP_KTHREAD extern struct kthread_worker *rcu_exp_gp_kworker; -#else /* !CONFIG_RCU_EXP_KTHREAD */ -extern struct workqueue_struct *rcu_par_gp_wq; -#endif /* CONFIG_RCU_EXP_KTHREAD */ void rcu_gp_slow_register(atomic_t *rgssp); void rcu_gp_slow_unregister(atomic_t *rgssp); #endif /* #else #ifdef CONFIG_TINY_RCU */ diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 312c4c5d4509..9591c22408a1 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -4394,7 +4394,6 @@ rcu_boot_init_percpu_data(int cpu) rcu_boot_init_nocb_percpu_data(rdp); } -#ifdef CONFIG_RCU_EXP_KTHREAD struct kthread_worker *rcu_exp_gp_kworker; static void rcu_spawn_exp_par_gp_kworker(struct rcu_node *rnp) @@ -4414,7 +4413,9 @@ static void rcu_spawn_exp_par_gp_kworker(struct rcu_node *rnp) return; } WRITE_ONCE(rnp->exp_kworker, kworker); - sched_setscheduler_nocheck(kworker->task, SCHED_FIFO, &param); + + if (IS_ENABLED(CONFIG_RCU_EXP_KTHREAD)) + sched_setscheduler_nocheck(kworker->task, SCHED_FIFO, &param); } static struct task_struct *rcu_exp_par_gp_task(struct rcu_node *rnp) @@ -4438,39 +4439,14 @@ static void __init rcu_start_exp_gp_kworker(void) rcu_exp_gp_kworker = NULL; return; } - sched_setscheduler_nocheck(rcu_exp_gp_kworker->task, SCHED_FIFO, &param); -} - -static inline void rcu_alloc_par_gp_wq(void) -{ -} -#else /* !CONFIG_RCU_EXP_KTHREAD */ -struct workqueue_struct *rcu_par_gp_wq; - -static void rcu_spawn_exp_par_gp_kworker(struct rcu_node *rnp) -{ -} - -static struct task_struct *rcu_exp_par_gp_task(struct rcu_node *rnp) -{ - return NULL; -} - -static void __init rcu_start_exp_gp_kworker(void) -{ -} -static inline void rcu_alloc_par_gp_wq(void) -{ - rcu_par_gp_wq = alloc_workqueue("rcu_par_gp", WQ_MEM_RECLAIM, 0); - WARN_ON(!rcu_par_gp_wq); + if (IS_ENABLED(CONFIG_RCU_EXP_KTHREAD)) + sched_setscheduler_nocheck(rcu_exp_gp_kworker->task, SCHED_FIFO, &param); } -#endif /* CONFIG_RCU_EXP_KTHREAD */ static void rcu_spawn_rnp_kthreads(struct rcu_node *rnp) { - if ((IS_ENABLED(CONFIG_RCU_EXP_KTHREAD) || - IS_ENABLED(CONFIG_RCU_BOOST)) && rcu_scheduler_fully_active) { + if (rcu_scheduler_fully_active) { mutex_lock(&rnp->kthread_mutex); rcu_spawn_one_boost_kthread(rnp); rcu_spawn_exp_par_gp_kworker(rnp); @@ -4554,9 +4530,6 @@ static void rcutree_affinity_setting(unsigned int cpu, int outgoingcpu) struct rcu_node *rnp; struct task_struct *task_boost, *task_exp; - if (!IS_ENABLED(CONFIG_RCU_EXP_KTHREAD) && !IS_ENABLED(CONFIG_RCU_BOOST)) - return; - rdp = per_cpu_ptr(&rcu_data, cpu); rnp = rdp->mynode; @@ -5245,7 +5218,6 @@ void __init rcu_init(void) /* Create workqueue for Tree SRCU and for expedited GPs. */ rcu_gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM, 0); WARN_ON(!rcu_gp_wq); - rcu_alloc_par_gp_wq(); /* Fill in default value for rcutree.qovld boot parameter. */ /* -After- the rcu_node ->lock fields are initialized!
*/ diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index e173808f486f..f35e47f24d80 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -21,14 +21,10 @@ #include "rcu_segcblist.h" -/* Communicate arguments to a workqueue handler. */ +/* Communicate arguments to a kthread worker handler. */ struct rcu_exp_work { unsigned long rew_s; -#ifdef CONFIG_RCU_EXP_KTHREAD struct kthread_work rew_work; -#else - struct work_struct rew_work; -#endif /* CONFIG_RCU_EXP_KTHREAD */ }; /* RCU's kthread states for tracing. */ diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h index 0318a8a062d5..6b83537480b1 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h @@ -418,7 +418,6 @@ static void __sync_rcu_exp_select_node_cpus(struct rcu_exp_work *rewp) static void rcu_exp_sel_wait_wake(unsigned long s); -#ifdef CONFIG_RCU_EXP_KTHREAD static void sync_rcu_exp_select_node_cpus(struct kthread_work *wp) { struct rcu_exp_work *rewp = @@ -470,69 +469,6 @@ static inline void synchronize_rcu_expedited_queue_work(struct rcu_exp_work *rew kthread_queue_work(rcu_exp_gp_kworker, &rew->rew_work); } -static inline void synchronize_rcu_expedited_destroy_work(struct rcu_exp_work *rew) -{ -} -#else /* !CONFIG_RCU_EXP_KTHREAD */ -static void sync_rcu_exp_select_node_cpus(struct work_struct *wp) -{ - struct rcu_exp_work *rewp = - container_of(wp, struct rcu_exp_work, rew_work); - - __sync_rcu_exp_select_node_cpus(rewp); -} - -static inline bool rcu_exp_worker_started(void) -{ - return !!READ_ONCE(rcu_gp_wq); -} - -static inline bool rcu_exp_par_worker_started(struct rcu_node *rnp) -{ - return !!READ_ONCE(rcu_par_gp_wq); -} - -static inline void sync_rcu_exp_select_cpus_queue_work(struct rcu_node *rnp) -{ - int cpu = find_next_bit(&rnp->ffmask, BITS_PER_LONG, -1); - - INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus); - /* If all offline, queue the work on an unbound CPU. */ - if (unlikely(cpu > rnp->grphi - rnp->grplo)) - cpu = WORK_CPU_UNBOUND; - else - cpu += rnp->grplo; - queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work); -} - -static inline void sync_rcu_exp_select_cpus_flush_work(struct rcu_node *rnp) -{ - flush_work(&rnp->rew.rew_work); -} - -/* - * Work-queue handler to drive an expedited grace period forward. - */ -static void wait_rcu_exp_gp(struct work_struct *wp) -{ - struct rcu_exp_work *rewp; - - rewp = container_of(wp, struct rcu_exp_work, rew_work); - rcu_exp_sel_wait_wake(rewp->rew_s); -} - -static inline void synchronize_rcu_expedited_queue_work(struct rcu_exp_work *rew) -{ - INIT_WORK_ONSTACK(&rew->rew_work, wait_rcu_exp_gp); - queue_work(rcu_gp_wq, &rew->rew_work); -} - -static inline void synchronize_rcu_expedited_destroy_work(struct rcu_exp_work *rew) -{ - destroy_work_on_stack(&rew->rew_work); -} -#endif /* CONFIG_RCU_EXP_KTHREAD */ - /* * Select the nodes that the upcoming expedited grace period needs * to wait for. @@ -965,7 +901,6 @@ static void rcu_exp_print_detail_task_stall_rnp(struct rcu_node *rnp) */ void synchronize_rcu_expedited(void) { - bool use_worker; unsigned long flags; struct rcu_exp_work rew; struct rcu_node *rnp; @@ -976,9 +911,6 @@ void synchronize_rcu_expedited(void) lock_is_held(&rcu_sched_lock_map), "Illegal synchronize_rcu_expedited() in RCU read-side critical section"); - use_worker = (rcu_scheduler_active != RCU_SCHEDULER_INIT) && - rcu_exp_worker_started(); - /* Is the state is such that the call is a grace period? */ if (rcu_blocking_is_gp()) { // Note well that this code runs with !PREEMPT && !SMP. 
@@ -1008,7 +940,7 @@ void synchronize_rcu_expedited(void) return; /* Someone else did our work for us. */ /* Ensure that load happens before action based on it. */ - if (unlikely(!use_worker)) { + if (unlikely((rcu_scheduler_active == RCU_SCHEDULER_INIT) || !rcu_exp_worker_started())) { /* Direct call during scheduler init and early_initcalls(). */ rcu_exp_sel_wait_wake(s); } else { @@ -1025,9 +957,6 @@ void synchronize_rcu_expedited(void) /* Let the next expedited grace period start. */ mutex_unlock(&rcu_state.exp_mutex); - - if (likely(use_worker)) - synchronize_rcu_expedited_destroy_work(&rew); } EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
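
A note for readers following the affinity handling in patch 7/8 above: the CPU set computed by rcutree_affinity_setting() reduces to "the node's online CPUs, minus the outgoing CPU, intersected with the housekeeping set; if that intersection is empty, fall back to the housekeeping set minus the outgoing CPU". The following is only a rough userspace sketch of that rule, using 64-bit masks in place of the kernel cpumask API; node_kthread_affinity() and the sample masks are made up for illustration and do not exist in the kernel.

#include <stdio.h>

/*
 * Toy model of the mask computed in rcutree_affinity_setting():
 * CPU masks are plain 64-bit bitmasks, one bit per CPU.
 *
 *   node_online  - online CPUs served by the rcu_node (rcu_rnp_online_cpus())
 *   housekeeping - housekeeping_cpumask(HK_TYPE_RCU)
 *   outgoing     - CPU currently going offline, or -1 if none
 */
static unsigned long node_kthread_affinity(unsigned long node_online,
					   unsigned long housekeeping,
					   int outgoing)
{
	unsigned long cm = node_online;

	if (outgoing >= 0)
		cm &= ~(1UL << outgoing);	/* never target the outgoing CPU */
	cm &= housekeeping;			/* stay on housekeeping CPUs */

	if (!cm) {				/* node has no eligible CPU left */
		cm = housekeeping;		/* fall back to all housekeeping CPUs */
		if (outgoing >= 0)
			cm &= ~(1UL << outgoing);
	}
	return cm;
}

int main(void)
{
	/* Leaf node serving CPUs 4-7, CPU 6 going down, CPUs 0-7 housekeeping: prints 0xb0 */
	printf("%#lx\n", node_kthread_affinity(0xf0UL, 0xffUL, 6));
	/* Same node when only CPU 6 was still online: falls back to housekeeping minus 6, 0xbf */
	printf("%#lx\n", node_kthread_affinity(0x40UL, 0xffUL, 6));
	return 0;
}

In the patch itself the resulting mask is applied with set_cpus_allowed_ptr() to both the boost kthread and the per-node expedited kworker, under ->kthread_mutex.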
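
And on the dependency chain described in patch 8/8: the cycle only needs a caller that holds the exp mutex while waiting for a work item that sits behind another work item which itself wants that same mutex, all on one serialized worker. The program below is a hedged userspace analogue, not kernel code: a single pthread stands in for the lone kworker, exp_mutex for rcu_state.exp_mutex, and job_polled_gp()/job_exp_gp() are invented names for the two queued work items. Built with "cc -pthread", it reports the stall after a two-second timeout instead of ever reaching the unlock at B.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static pthread_mutex_t exp_mutex = PTHREAD_MUTEX_INITIALIZER;	/* rcu_state.exp_mutex analogue */
static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done_cv   = PTHREAD_COND_INITIALIZER;
static int exp_gp_done;

/* First queued item: the polled expedited GP handler re-entering the funnel lock. */
static void job_polled_gp(void)
{
	pthread_mutex_lock(&exp_mutex);		/* C: waits for B, which never comes */
	pthread_mutex_unlock(&exp_mutex);
}

/* Second queued item: the expedited GP work the caller is waiting on. */
static void job_exp_gp(void)
{
	pthread_mutex_lock(&done_lock);
	exp_gp_done = 1;			/* D: would satisfy A */
	pthread_cond_signal(&done_cv);
	pthread_mutex_unlock(&done_lock);
}

/* One kworker draining its queue strictly in order: C blocks, so D never runs. */
static void *single_kworker(void *arg)
{
	(void)arg;
	job_polled_gp();
	job_exp_gp();
	return NULL;
}

int main(void)
{
	pthread_t kworker;
	struct timespec deadline;

	pthread_mutex_lock(&exp_mutex);		/* caller holds the funnel lock */
	pthread_create(&kworker, NULL, single_kworker, NULL);

	clock_gettime(CLOCK_REALTIME, &deadline);
	deadline.tv_sec += 2;			/* stand-in for the stall warning timeout */

	pthread_mutex_lock(&done_lock);
	while (!exp_gp_done) {			/* A: wait for the expedited GP work */
		if (pthread_cond_timedwait(&done_cv, &done_lock, &deadline)) {
			printf("writer stall: A -> D -> C -> B -> A\n");
			exit(1);
		}
	}
	pthread_mutex_unlock(&done_lock);

	pthread_mutex_unlock(&exp_mutex);	/* B: unreachable while the cycle holds */
	pthread_join(kworker, NULL);
	return 0;
}

Once the expedited wait is queued on its own kthread worker, as the patch does unconditionally, the polled handler and the expedited work no longer share one serialized queue, so C can no longer sit in front of D.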