From patchwork Tue Nov 22 01:04:19 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 13051741
X-Patchwork-Delegate: kuba@kernel.org
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
 "Joel Fernandes (Google)", David Howells, Marc Dionne, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, linux-afs@lists.infradead.org,
 netdev@vger.kernel.org, "Paul E. McKenney"
Subject: [PATCH v2 rcu 14/16] rxrpc: Use call_rcu_flush() instead of call_rcu()
Date: Mon, 21 Nov 2022 17:04:19 -0800
Message-Id: <20221122010421.3799681-14-paulmck@kernel.org>
In-Reply-To: <20221122010408.GA3799268@paulmck-ThinkPad-P17-Gen-1>
References: <20221122010408.GA3799268@paulmck-ThinkPad-P17-Gen-1>

From: "Joel Fernandes (Google)"

Earlier commits in this series allow battery-powered systems to build
their kernels with the default-disabled CONFIG_RCU_LAZY=y Kconfig
option.  This Kconfig option causes call_rcu() to delay its callbacks
in order to batch them.  This means that a given RCU grace period
covers more callbacks, thus reducing the number of grace periods, in
turn reducing the amount of energy consumed, which increases battery
lifetime, which can be a very good thing.  This is not a subtle effect:
In some important use cases, the battery lifetime is increased by more
than 10%.

This CONFIG_RCU_LAZY=y option is available only for CPUs that offload
callbacks, for example, CPUs mentioned in the rcu_nocbs kernel boot
parameter passed to kernels built with CONFIG_RCU_NOCB_CPU=y.

Delaying callbacks is normally not a problem because most callbacks do
nothing but free memory.  If the system is short on memory, a shrinker
will kick all currently queued lazy callbacks out of their laziness,
thus freeing their memory in short order.  Similarly, the rcu_barrier()
function, which blocks until all currently queued callbacks are
invoked, will also kick lazy callbacks, thus enabling rcu_barrier() to
complete in a timely manner.

However, there are some cases where laziness is not a good option.
For example, synchronize_rcu() invokes call_rcu(), and blocks until
the newly queued callback is invoked.  It would not be good for
synchronize_rcu() to block for ten seconds, even on an idle system.
Therefore, synchronize_rcu() invokes call_rcu_flush() instead of
call_rcu().  The arrival of a non-lazy call_rcu_flush() callback on a
given CPU kicks any lazy callbacks that might be already queued on
that CPU.  After all, if there is going to be a grace period, all
callbacks might as well get full benefit from it.

Yes, this could be done the other way around by creating a
call_rcu_lazy(), but earlier experience with this approach and
feedback at the 2022 Linux Plumbers Conference shifted the approach
to call_rcu() being lazy with call_rcu_flush() for the few places
where laziness is inappropriate.

And another call_rcu() instance that cannot be lazy is the one in
rxrpc_kill_connection(), which sometimes does a wakeup that should not
be unduly delayed.  Therefore, make rxrpc_kill_connection() use
call_rcu_flush() in order to revert to the old behavior.

Signed-off-by: Joel Fernandes (Google)
Cc: David Howells
Cc: Marc Dionne
Cc: "David S. Miller"
Cc: Eric Dumazet
Cc: Jakub Kicinski
Cc: Paolo Abeni
Cc: linux-afs@lists.infradead.org
Cc: netdev@vger.kernel.org
Signed-off-by: Paul E. McKenney
---
 net/rxrpc/conn_object.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index 22089e37e97f0..fdcfb509cc443 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -253,7 +253,7 @@ void rxrpc_kill_connection(struct rxrpc_connection *conn)
	 * must carry a ref on the connection to prevent us getting here whilst
	 * it is queued or running.
	 */
-	call_rcu(&conn->rcu, rxrpc_destroy_connection);
+	call_rcu_flush(&conn->rcu, rxrpc_destroy_connection);
 }
 
 /*
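For concreteness, the sketch below shows the shape of callback that
must not be lazy: one whose side effect is a wakeup rather than a mere
kfree().  This is an invented illustration, not rxrpc code; it assumes
the call_rcu_flush() API added earlier in this series, and the struct
and helper names are hypothetical.

#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/wait.h>

/* Hypothetical connection object; tasks sleep on an external waitqueue. */
struct conn {
	struct rcu_head rcu;
	struct wait_queue_head *teardown_wq;
};

static void conn_destroy_cb(struct rcu_head *rcu)
{
	struct conn *c = container_of(rcu, struct conn, rcu);
	struct wait_queue_head *wq = c->teardown_wq;

	kfree(c);
	wake_up_all(wq);	/* this wakeup must not be unduly delayed */
}

static void conn_kill(struct conn *c)
{
	/*
	 * Under CONFIG_RCU_LAZY=y, plain call_rcu() may batch this
	 * callback for up to about ten seconds.  Because the callback
	 * performs a wakeup, use the non-lazy variant.
	 */
	call_rcu_flush(&c->rcu, conn_destroy_cb);
}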
McKenney" X-Patchwork-Id: 13051743 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30EA9C47089 for ; Tue, 22 Nov 2022 01:04:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232027AbiKVBEa (ORCPT ); Mon, 21 Nov 2022 20:04:30 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44860 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232004AbiKVBE1 (ORCPT ); Mon, 21 Nov 2022 20:04:27 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E04911A12; Mon, 21 Nov 2022 17:04:26 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 2B273B818DF; Tue, 22 Nov 2022 01:04:25 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3BA9BC4315E; Tue, 22 Nov 2022 01:04:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1669079063; bh=82QgUkdn96zAlaY+UFpNNtcTMhQgVxk6sdFaelyo3zA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=oj2o0MV7bWppKYphpTZpnE345/Ekq1qeCL51DEXRnLTr9ODMUfEY6Po0mW1vDwE0N ZKt1Nr856cCvvre9rHNL7oio8FbnDnddN3xQDc/PpWXjmQUzB9Bbo4807N7jvKa+AX 87o4Tj/mVIAeC27B9POj/7hSsfsa5wE9QYk4sIycuuSeBgVhHY6Yc/Graw+BLBgdYe o8jkA8DFvGrnVI0c29rXdES4mjK6TiEOu0d3mqMORsQQgZ4cMEjqKNZT8H6/+6MXxU x4aRyTkMoQsvQgE7SOCx0yMQZ8zK5oriDMDd+lM5jCpeuQ8ZGIzm6AtDJsyBNP8HFk QSv+x/5jDzJLw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 8CE7F5C18FF; Mon, 21 Nov 2022 17:04:22 -0800 (PST) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Joel Fernandes (Google)" , David Ahern , "David S. Miller" , Eric Dumazet , Hideaki YOSHIFUJI , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, "Paul E . McKenney" Subject: [PATCH v2 rcu 15/16] net: Use call_rcu_flush() for dst_release() Date: Mon, 21 Nov 2022 17:04:20 -0800 Message-Id: <20221122010421.3799681-15-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20221122010408.GA3799268@paulmck-ThinkPad-P17-Gen-1> References: <20221122010408.GA3799268@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: "Joel Fernandes (Google)" In a networking test on ChromeOS, kernels built with the new CONFIG_RCU_LAZY=y Kconfig option fail a networking test in the teardown phase. This failure may be reproduced as follows: ip netns del The CONFIG_RCU_LAZY=y Kconfig option was introduced by earlier commits in this series for the benefit of certain battery-powered systems. This Kconfig option causes call_rcu() to delay its callbacks in order to batch them. This means that a given RCU grace period covers more callbacks, thus reducing the number of grace periods, in turn reducing the amount of energy consumed, which increases battery lifetime which can be a very good thing. This is not a subtle effect: In some important use cases, the battery lifetime is increased by more than 10%. 
This CONFIG_RCU_LAZY=y option is available only for CPUs that offload callbacks, for example, CPUs mentioned in the rcu_nocbs kernel boot parameter passed to kernels built with CONFIG_RCU_NOCB_CPU=y. Delaying callbacks is normally not a problem because most callbacks do nothing but free memory. If the system is short on memory, a shrinker will kick all currently queued lazy callbacks out of their laziness, thus freeing their memory in short order. Similarly, the rcu_barrier() function, which blocks until all currently queued callbacks are invoked, will also kick lazy callbacks, thus enabling rcu_barrier() to complete in a timely manner. However, there are some cases where laziness is not a good option. For example, synchronize_rcu() invokes call_rcu(), and blocks until the newly queued callback is invoked. It would not be a good for synchronize_rcu() to block for ten seconds, even on an idle system. Therefore, synchronize_rcu() invokes call_rcu_flush() instead of call_rcu(). The arrival of a non-lazy call_rcu_flush() callback on a given CPU kicks any lazy callbacks that might be already queued on that CPU. After all, if there is going to be a grace period, all callbacks might as well get full benefit from it. Yes, this could be done the other way around by creating a call_rcu_lazy(), but earlier experience with this approach and feedback at the 2022 Linux Plumbers Conference shifted the approach to call_rcu() being lazy with call_rcu_flush() for the few places where laziness is inappropriate. Returning to the test failure, use of ftrace showed that this failure cause caused by the aadded delays due to this new lazy behavior of call_rcu() in kernels built with CONFIG_RCU_LAZY=y. Therefore, make dst_release() use call_rcu_flush() in order to revert to the old test-failure-free behavior. Signed-off-by: Joel Fernandes (Google) Cc: David Ahern Cc: "David S. Miller" Cc: Eric Dumazet Cc: Hideaki YOSHIFUJI Cc: Jakub Kicinski Cc: Paolo Abeni Cc: Signed-off-by: Paul E. McKenney --- net/core/dst.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/core/dst.c b/net/core/dst.c index bc9c9be4e0801..15b16322703f4 100644 --- a/net/core/dst.c +++ b/net/core/dst.c @@ -174,7 +174,7 @@ void dst_release(struct dst_entry *dst) net_warn_ratelimited("%s: dst:%p refcnt:%d\n", __func__, dst, newrefcnt); if (!newrefcnt) - call_rcu(&dst->rcu_head, dst_destroy_rcu); + call_rcu_flush(&dst->rcu_head, dst_destroy_rcu); } } EXPORT_SYMBOL(dst_release); From patchwork Tue Nov 22 01:04:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
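The dst case illustrates a second reason to avoid laziness: the
deferred callback releases references that gate device teardown, so
everything it pins lingers along with it.  Below is a minimal,
invented sketch of that shape of callback, again assuming this
series' call_rcu_flush() and the long-standing dev_put() API; all
other names are hypothetical.

#include <linux/netdevice.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical cached route entry that pins a net_device. */
struct route_ent {
	struct rcu_head rcu;
	struct net_device *dev;
};

static void route_ent_free_rcu(struct rcu_head *rcu)
{
	struct route_ent *e = container_of(rcu, struct route_ent, rcu);

	dev_put(e->dev);	/* interface removal waits for this put */
	kfree(e);
}

static void route_ent_release(struct route_ent *e)
{
	/*
	 * A lazy grace period would keep the device reference alive
	 * for seconds, so 'ip netns del' would see the interface
	 * linger.  Flush to keep teardown prompt.
	 */
	call_rcu_flush(&e->rcu, route_ent_free_rcu);
}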
McKenney" X-Patchwork-Id: 13051742 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CFB9FC43217 for ; Tue, 22 Nov 2022 01:04:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231992AbiKVBE3 (ORCPT ); Mon, 21 Nov 2022 20:04:29 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44808 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231954AbiKVBE0 (ORCPT ); Mon, 21 Nov 2022 20:04:26 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1C2C1DF40; Mon, 21 Nov 2022 17:04:25 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2A4D06151E; Tue, 22 Nov 2022 01:04:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 75FB9C43166; Tue, 22 Nov 2022 01:04:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1669079063; bh=yoNO14jnT7/3wJpWHWo0CYgw8+T0b3nI0VWmVo/0VZk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TTez867UG51Rj10O9MBv8/3v+xkdrmwVzCDqX9ISWk5WFBZYKeDXeOODA9souyOXQ 53WH/uy/84JsK2tNKVG2Jvi0x8VhcDvWhBLP0mZQqp6x2bZJqPl3z4afQjFW7ETIX+ OpceptNxp1GZpjnzC6xWRWl5nqZDjSRBvAarT3P2OOH6PRYJOzrB6Cf9iXiQGD1ZKU zr7ICASGaotS6FZUO9zEcuJPFKFuknV5yCsy84GE3hS3YoH3Yjwn9N9Tqvdy3CCXYz O2lfCRltmaMbxQLYhotx/3UEQo1EP97pAcoNAmWX2AWBVjzrcYz67tww5M5f1CF82i m1p7zPsS+2dhA== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 8EC005C1C98; Mon, 21 Nov 2022 17:04:22 -0800 (PST) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, Eric Dumazet , Joel Fernandes , David Ahern , "David S. Miller" , Hideaki YOSHIFUJI , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, "Paul E . McKenney" Subject: [PATCH v2 rcu 16/16] net: devinet: Reduce refcount before grace period Date: Mon, 21 Nov 2022 17:04:21 -0800 Message-Id: <20221122010421.3799681-16-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20221122010408.GA3799268@paulmck-ThinkPad-P17-Gen-1> References: <20221122010408.GA3799268@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Eric Dumazet Currently, the inetdev_destroy() function waits for an RCU grace period before decrementing the refcount and freeing memory. This causes a delay with a new RCU configuration that tries to save power, which results in the network interface disappearing later than expected. The resulting delay causes test failures on ChromeOS. Refactor the code such that the refcount is freed before the grace period and memory is freed after. With this a ChromeOS network test passes that does 'ip netns del' and polls for an interface disappearing, now passes. Reported-by: Joel Fernandes (Google) Signed-off-by: Eric Dumazet Signed-off-by: Joel Fernandes (Google) Cc: David Ahern Cc: "David S. Miller" Cc: Hideaki YOSHIFUJI Cc: Jakub Kicinski Cc: Paolo Abeni Cc: Signed-off-by: Paul E. 
McKenney --- net/ipv4/devinet.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c index e8b9a9202fecd..b0acf6e19aed3 100644 --- a/net/ipv4/devinet.c +++ b/net/ipv4/devinet.c @@ -234,13 +234,20 @@ static void inet_free_ifa(struct in_ifaddr *ifa) call_rcu(&ifa->rcu_head, inet_rcu_free_ifa); } +static void in_dev_free_rcu(struct rcu_head *head) +{ + struct in_device *idev = container_of(head, struct in_device, rcu_head); + + kfree(rcu_dereference_protected(idev->mc_hash, 1)); + kfree(idev); +} + void in_dev_finish_destroy(struct in_device *idev) { struct net_device *dev = idev->dev; WARN_ON(idev->ifa_list); WARN_ON(idev->mc_list); - kfree(rcu_dereference_protected(idev->mc_hash, 1)); #ifdef NET_REFCNT_DEBUG pr_debug("%s: %p=%s\n", __func__, idev, dev ? dev->name : "NIL"); #endif @@ -248,7 +255,7 @@ void in_dev_finish_destroy(struct in_device *idev) if (!idev->dead) pr_err("Freeing alive in_device %p\n", idev); else - kfree(idev); + call_rcu(&idev->rcu_head, in_dev_free_rcu); } EXPORT_SYMBOL(in_dev_finish_destroy); @@ -298,12 +305,6 @@ static struct in_device *inetdev_init(struct net_device *dev) goto out; } -static void in_dev_rcu_put(struct rcu_head *head) -{ - struct in_device *idev = container_of(head, struct in_device, rcu_head); - in_dev_put(idev); -} - static void inetdev_destroy(struct in_device *in_dev) { struct net_device *dev; @@ -328,7 +329,7 @@ static void inetdev_destroy(struct in_device *in_dev) neigh_parms_release(&arp_tbl, in_dev->arp_parms); arp_ifdown(dev); - call_rcu(&in_dev->rcu_head, in_dev_rcu_put); + in_dev_put(in_dev); } int inet_addr_onlink(struct in_device *in_dev, __be32 a, __be32 b)
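The pattern this patch applies generalizes to any refcounted,
RCU-protected object: take the externally visible action (the final
reference drop) immediately, and push only the kfree() behind the
grace period.  Here is a hypothetical sketch of that pattern with
invented names, using the stock kref and call_rcu() APIs:

#include <linux/kref.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct obj {
	struct kref ref;
	struct rcu_head rcu;
	/* ... fields that readers access under rcu_read_lock() ... */
};

static void obj_free_rcu(struct rcu_head *rcu)
{
	kfree(container_of(rcu, struct obj, rcu));
}

static void obj_release(struct kref *ref)
{
	struct obj *o = container_of(ref, struct obj, ref);

	/* Readers may still hold a pointer: kfree() only after a GP. */
	call_rcu(&o->rcu, obj_free_rcu);
}

static void obj_destroy(struct obj *o)
{
	/*
	 * Old shape: a call_rcu() callback performed the kref_put(),
	 * so the refcount stayed elevated for an entire (possibly
	 * lazy) grace period.  New shape: drop the reference now so
	 * dependent teardown proceeds at once, and let the release
	 * path defer only the memory free.
	 */
	kref_put(&o->ref, obj_release);
}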