From patchwork Tue Mar 28 22:16:36 2023
From: Yosry Ahmed
Date: Tue, 28 Mar 2023 22:16:36 +0000
Subject: [PATCH v2 1/9] cgroup: rename cgroup_rstat_flush_"irqsafe" to "atomic"
Message-ID: <20230328221644.803272-2-yosryahmed@google.com>
cgroup_rstat_flush_irqsafe() can be a confusing name. It may read as "irqs are disabled throughout", which is what the current implementation does (currently under discussion [1]), but that is not the intention. The intention is that this function is safe to call from atomic contexts. Name it as such.

Suggested-by: Johannes Weiner
Signed-off-by: Yosry Ahmed
Acked-by: Shakeel Butt
Acked-by: Johannes Weiner
---
 include/linux/cgroup.h | 2 +-
 kernel/cgroup/rstat.c  | 4 ++--
 mm/memcontrol.c        | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 3410aecffdb4..885f5395fcd0 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -692,7 +692,7 @@ static inline void cgroup_path_from_kernfs_id(u64 id, char *buf, size_t buflen)
  */
 void cgroup_rstat_updated(struct cgroup *cgrp, int cpu);
 void cgroup_rstat_flush(struct cgroup *cgrp);
-void cgroup_rstat_flush_irqsafe(struct cgroup *cgrp);
+void cgroup_rstat_flush_atomic(struct cgroup *cgrp);
 void cgroup_rstat_flush_hold(struct cgroup *cgrp);
 void cgroup_rstat_flush_release(void);

diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 831f1f472bb8..d3252b0416b6 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -241,12 +241,12 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
 }

 /**
- * cgroup_rstat_flush_irqsafe - irqsafe version of cgroup_rstat_flush()
+ * cgroup_rstat_flush_atomic - atomic version of cgroup_rstat_flush()
  * @cgrp: target cgroup
  *
  * This function can be called from any context.
  */
-void cgroup_rstat_flush_irqsafe(struct cgroup *cgrp)
+void cgroup_rstat_flush_atomic(struct cgroup *cgrp)
 {
     unsigned long flags;

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5abffe6f8389..0205e58ea430 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -642,7 +642,7 @@ static void __mem_cgroup_flush_stats(void)
         return;

     flush_next_time = jiffies_64 + 2*FLUSH_TIME;
-    cgroup_rstat_flush_irqsafe(root_mem_cgroup->css.cgroup);
+    cgroup_rstat_flush_atomic(root_mem_cgroup->css.cgroup);
     atomic_set(&stats_flush_threshold, 0);
     spin_unlock_irqrestore(&stats_flush_lock, flag);
 }
From patchwork Tue Mar 28 22:16:37 2023
From: Yosry Ahmed
Date: Tue, 28 Mar 2023 22:16:37 +0000
Subject: [PATCH v2 2/9] memcg: rename mem_cgroup_flush_stats_"delayed" to "ratelimited"
Message-ID: <20230328221644.803272-3-yosryahmed@google.com>

mem_cgroup_flush_stats_delayed() suggests this is using a delayed_work, but it actually sometimes flushes directly from the callsite. What it really does is ratelimit its calls. A better name would be mem_cgroup_flush_stats_ratelimited().
Suggested-by: Johannes Weiner
Signed-off-by: Yosry Ahmed
Acked-by: Shakeel Butt
Acked-by: Johannes Weiner
Acked-by: Michal Hocko
---
 include/linux/memcontrol.h | 4 ++--
 mm/memcontrol.c            | 2 +-
 mm/workingset.c            | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index b6eda2ab205d..ac3f3b3a45e2 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1037,7 +1037,7 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 }

 void mem_cgroup_flush_stats(void);
-void mem_cgroup_flush_stats_delayed(void);
+void mem_cgroup_flush_stats_ratelimited(void);

 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
                   int val);
@@ -1535,7 +1535,7 @@ static inline void mem_cgroup_flush_stats(void)
 {
 }

-static inline void mem_cgroup_flush_stats_delayed(void)
+static inline void mem_cgroup_flush_stats_ratelimited(void)
 {
 }

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0205e58ea430..c3b6aae78901 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -653,7 +653,7 @@ void mem_cgroup_flush_stats(void)
         __mem_cgroup_flush_stats();
 }

-void mem_cgroup_flush_stats_delayed(void)
+void mem_cgroup_flush_stats_ratelimited(void)
 {
     if (time_after64(jiffies_64, flush_next_time))
         mem_cgroup_flush_stats();

diff --git a/mm/workingset.c b/mm/workingset.c
index 00c6f4d9d9be..af862c6738c3 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -462,7 +462,7 @@ void workingset_refault(struct folio *folio, void *shadow)

     mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);

-    mem_cgroup_flush_stats_delayed();
+    mem_cgroup_flush_stats_ratelimited();
     /*
      * Compare the distance to the existing workingset size. We
      * don't activate pages that couldn't stay resident even if
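The renamed helper is essentially a deadline check in front of the real flush. A minimal userspace C sketch of that ratelimiting pattern; the period, clock, and names (FLUSH_PERIOD_MS, do_expensive_flush) are illustrative stand-ins, not the kernel's FLUSH_TIME/jiffies machinery:

#include <stdio.h>
#include <time.h>

#define FLUSH_PERIOD_MS 2000            /* illustrative, not the kernel's FLUSH_TIME */

static long long flush_deadline_ms;     /* earliest time the next flush is allowed */

static long long now_ms(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}

static void do_expensive_flush(void)
{
    /* stands in for the real stats flush; push the deadline forward */
    flush_deadline_ms = now_ms() + FLUSH_PERIOD_MS;
    printf("flushed\n");
}

/* Ratelimited entry point: flush at most once per FLUSH_PERIOD_MS. */
static void flush_ratelimited(void)
{
    if (now_ms() >= flush_deadline_ms)
        do_expensive_flush();
}

int main(void)
{
    for (int i = 0; i < 5; i++)
        flush_ratelimited();    /* only the first iteration flushes */
    return 0;
}

The key property is that callers pay only a cheap time comparison on the fast path, which is why "ratelimited" describes the behavior better than "delayed".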
From patchwork Tue Mar 28 22:16:38 2023
From: Yosry Ahmed
Date: Tue, 28 Mar 2023 22:16:38 +0000
Subject: [PATCH v2 3/9] memcg: do not flush stats in irq context
Message-ID: <20230328221644.803272-4-yosryahmed@google.com>

Currently, the only context in which we can invoke an rstat flush from irq context is through mem_cgroup_usage() on the root memcg when called from memcg_check_events(). An rstat flush is an expensive operation that should not be done in irq context, so do not flush stats and use the stale stats in this case.

Arguably, usage threshold events are not reliable on the root memcg anyway since its usage is ill-defined.

Suggested-by: Johannes Weiner
Suggested-by: Shakeel Butt
Signed-off-by: Yosry Ahmed
Acked-by: Shakeel Butt
Acked-by: Johannes Weiner
Acked-by: Michal Hocko
---
 mm/memcontrol.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c3b6aae78901..ff39f78f962e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3669,7 +3669,21 @@ static unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
     unsigned long val;

     if (mem_cgroup_is_root(memcg)) {
-        mem_cgroup_flush_stats();
+        /*
+         * We can reach here from irq context through:
+         * uncharge_batch()
+         * |--memcg_check_events()
+         *    |--mem_cgroup_threshold()
+         *       |--__mem_cgroup_threshold()
+         *          |--mem_cgroup_usage
+         *
+         * rstat flushing is an expensive operation that should not be
+         * done from irq context; use stale stats in this case.
+         * Arguably, usage threshold events are not reliable on the root
+         * memcg anyway since its usage is ill-defined.
+         */
+        if (in_task())
+            mem_cgroup_flush_stats();
         val = memcg_page_state(memcg, NR_FILE_PAGES) +
             memcg_page_state(memcg, NR_ANON_MAPPED);
         if (swap)
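The idea is to tolerate stale statistics rather than pay for a flush in a context that should not do expensive work. A small userspace C model of that trade-off; in_task() has no userspace equivalent, so a flush_allowed flag stands in for the context check, and all names are illustrative:

#include <stdbool.h>
#include <stdio.h>

static long cached_usage;       /* last aggregated value (possibly stale) */
static long pending_delta;      /* updates not yet folded in */

static void expensive_flush(void)
{
    /* stands in for the rstat flush: fold pending deltas into the aggregate */
    cached_usage += pending_delta;
    pending_delta = 0;
}

/*
 * Model of mem_cgroup_usage() on the root memcg: flush only when the
 * calling context allows it, otherwise return the stale aggregate.
 */
static long read_usage(bool flush_allowed)
{
    if (flush_allowed)          /* the kernel checks in_task() here */
        expensive_flush();
    return cached_usage;
}

int main(void)
{
    pending_delta = 42;
    printf("irq-like context: %ld\n", read_usage(false));  /* stale: 0 */
    printf("task context:     %ld\n", read_usage(true));   /* fresh: 42 */
    return 0;
}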
From patchwork Tue Mar 28 22:16:39 2023
From: Yosry Ahmed
Date: Tue, 28 Mar 2023 22:16:39 +0000
Subject: [PATCH v2 4/9] cgroup: rstat: add WARN_ON_ONCE() if flushing outside task context
Message-ID: <20230328221644.803272-5-yosryahmed@google.com>

rstat flushing is too expensive to perform in irq context. The previous patch removed the only context that may invoke an rstat flush from irq context; add a WARN_ON_ONCE() to detect future violations, or any that we are not aware of.

Ideally, we wouldn't flush with irqs disabled either, but we have one context today that does so in mem_cgroup_usage(). Forbid callers from irq context for now, and hopefully we can also forbid callers with irqs disabled in the future once we can get rid of that callsite.

Signed-off-by: Yosry Ahmed
Reviewed-by: Shakeel Butt
---
 kernel/cgroup/rstat.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index d3252b0416b6..c2571939139f 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -176,6 +176,8 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
 {
     int cpu;

+    /* rstat flushing is too expensive for irq context */
+    WARN_ON_ONCE(!in_task());
     lockdep_assert_held(&cgroup_rstat_lock);

     for_each_possible_cpu(cpu) {
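WARN_ON_ONCE() reports a violated assumption once per call site and then stays silent. A rough, runnable userspace approximation of that behavior, for illustration only (the macro below is a simplified stand-in, not the kernel's implementation, and it does not return the condition value the way the real macro does):

#include <stdbool.h>
#include <stdio.h>

/* Report a violated assumption once per call site, then stay quiet. */
#define warn_on_once(cond) do {                                         \
    static bool warned;                                                 \
    if ((cond) && !warned) {                                            \
        warned = true;                                                  \
        fprintf(stderr, "warning: %s at %s:%d\n",                       \
            #cond, __FILE__, __LINE__);                                 \
    }                                                                   \
} while (0)

static bool in_task_context = true;     /* stand-in for the kernel's in_task() */

static void flush_locked(void)
{
    /* flushing is too expensive for interrupt-like contexts */
    warn_on_once(!in_task_context);
    /* ... walk the per-CPU updated trees and aggregate ... */
}

int main(void)
{
    flush_locked();             /* assumption holds: silent */
    in_task_context = false;
    flush_locked();             /* warns once */
    flush_locked();             /* silent again */
    return 0;
}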
From patchwork Tue Mar 28 22:16:40 2023
From: Yosry Ahmed
Date: Tue, 28 Mar 2023 22:16:40 +0000
Subject: [PATCH v2 5/9] memcg: replace stats_flush_lock with an atomic
Message-ID: <20230328221644.803272-6-yosryahmed@google.com>

As Johannes notes in [1], stats_flush_lock is currently used to:
(a) Protect updates to stats_flush_threshold.
(b) Protect updates to flush_next_time.
(c) Serialize calls to cgroup_rstat_flush() based on those ratelimits.

However:

1. stats_flush_threshold is already an atomic.

2. flush_next_time is not atomic. The writer is locked, but the reader is lockless. If the reader races with a flush, you could see this:

   if (time_after(jiffies, flush_next_time))
                                        spin_trylock()
                                        flush_next_time = now + delay
                                        flush()
                                        spin_unlock()
   spin_trylock()
   flush_next_time = now + delay
   flush()
   spin_unlock()

   which means we can already get flushes at a higher frequency than FLUSH_TIME during races. But it isn't really a problem.

   The reader could also see garbled partial updates, so it needs at least READ_ONCE and WRITE_ONCE protection.

3. Serializing cgroup_rstat_flush() calls against the ratelimit factors is currently broken because of the race in 2. But the race is actually harmless; all we might get is the occasional earlier flush. If there is no delta, the flush won't do much. And if there is, the flush is justified.

So the lock can be removed altogether. However, the lock also served the purpose of preventing a thundering herd problem for concurrent flushers, see [2]. Use an atomic instead to serve the purpose of unifying concurrent flushers.
[1] https://lore.kernel.org/lkml/20230323172732.GE739026@cmpxchg.org/
[2] https://lore.kernel.org/lkml/20210716212137.1391164-2-shakeelb@google.com/

Signed-off-by: Yosry Ahmed
Acked-by: Johannes Weiner
Acked-by: Shakeel Butt
Acked-by: Michal Hocko
---
 mm/memcontrol.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ff39f78f962e..65750f8b8259 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -585,8 +585,8 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
  */
 static void flush_memcg_stats_dwork(struct work_struct *w);
 static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
-static DEFINE_SPINLOCK(stats_flush_lock);
 static DEFINE_PER_CPU(unsigned int, stats_updates);
+static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
 static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
 static u64 flush_next_time;

@@ -636,15 +636,19 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)

 static void __mem_cgroup_flush_stats(void)
 {
-    unsigned long flag;
-
-    if (!spin_trylock_irqsave(&stats_flush_lock, flag))
+    /*
+     * We always flush the entire tree, so concurrent flushers can just
+     * skip. This avoids a thundering herd problem on the rstat global lock
+     * from memcg flushers (e.g. reclaim, refault, etc).
+     */
+    if (atomic_read(&stats_flush_ongoing) ||
+        atomic_xchg(&stats_flush_ongoing, 1))
         return;

-    flush_next_time = jiffies_64 + 2*FLUSH_TIME;
+    WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
     cgroup_rstat_flush_atomic(root_mem_cgroup->css.cgroup);
     atomic_set(&stats_flush_threshold, 0);
-    spin_unlock_irqrestore(&stats_flush_lock, flag);
+    atomic_set(&stats_flush_ongoing, 0);
 }

 void mem_cgroup_flush_stats(void)
@@ -655,7 +659,7 @@ void mem_cgroup_flush_stats(void)

 void mem_cgroup_flush_stats_ratelimited(void)
 {
-    if (time_after64(jiffies_64, flush_next_time))
+    if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
         mem_cgroup_flush_stats();
 }
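The atomic_read()/atomic_xchg() pair above is the heart of the change: the first caller claims the flush, everyone else skips. A compilable C11 model of the same "only one concurrent flusher" idea; the thread harness, loop counts, and names are illustrative assumptions, not kernel code:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int flush_ongoing;        /* 0 = idle, 1 = a flush is in progress */
static atomic_long flushes_done;

static void flush_stats(void)
{
    /*
     * Cheap read first so contended callers don't keep bouncing the
     * cache line with xchg; then xchg to claim the flush.
     */
    if (atomic_load(&flush_ongoing) ||
        atomic_exchange(&flush_ongoing, 1))
        return;                 /* someone else is flushing: skip */

    atomic_fetch_add(&flushes_done, 1);     /* stands in for the real flush */

    atomic_store(&flush_ongoing, 0);
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        flush_stats();
    return NULL;
}

int main(void)
{
    pthread_t t[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);

    /* callers that overlapped an ongoing flush were skipped */
    printf("flushes performed: %ld\n", atomic_load(&flushes_done));
    return 0;
}

Compile with cc -pthread. The same shape (relaxed read, then exchange, then plain store to release) is what lets concurrent memcg flushers piggyback on one full-tree flush instead of queueing on a lock.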
From patchwork Tue Mar 28 22:16:41 2023
From: Yosry Ahmed
Date: Tue, 28 Mar 2023 22:16:41 +0000
Subject: [PATCH v2 6/9] memcg: sleep during flushing stats in safe contexts
Message-ID: <20230328221644.803272-7-yosryahmed@google.com>

Currently, all contexts that flush memcg stats do so with sleeping not allowed. Some of these contexts are perfectly safe to sleep in, such as reading cgroup files from userspace or the background periodic flusher.

Refactor the code to make mem_cgroup_flush_stats() non-atomic (aka sleepable), and provide a separate atomic version. The atomic version is used in reclaim, refault, writeback, and in mem_cgroup_usage(). All other code paths are left to use the non-atomic version. This includes callbacks for userspace reads and the periodic flusher.

Since refault is the only caller of mem_cgroup_flush_stats_ratelimited(), change it to mem_cgroup_flush_stats_atomic_ratelimited(). Reclaim and refault code paths are modified to do non-atomic flushing in separate later patches -- so it will eventually be changed back to mem_cgroup_flush_stats_ratelimited().
Signed-off-by: Yosry Ahmed
Acked-by: Shakeel Butt
Acked-by: Michal Hocko
---
 include/linux/memcontrol.h |  9 ++++++--
 mm/memcontrol.c            | 45 ++++++++++++++++++++++++++++++--------
 mm/vmscan.c                |  2 +-
 mm/workingset.c            |  2 +-
 4 files changed, 45 insertions(+), 13 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index ac3f3b3a45e2..b424ba3ebd09 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1037,7 +1037,8 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 }

 void mem_cgroup_flush_stats(void);
-void mem_cgroup_flush_stats_ratelimited(void);
+void mem_cgroup_flush_stats_atomic(void);
+void mem_cgroup_flush_stats_atomic_ratelimited(void);

 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
                   int val);
@@ -1535,7 +1536,11 @@ static inline void mem_cgroup_flush_stats(void)
 {
 }

-static inline void mem_cgroup_flush_stats_ratelimited(void)
+static inline void mem_cgroup_flush_stats_atomic(void)
+{
+}
+
+static inline void mem_cgroup_flush_stats_atomic_ratelimited(void)
 {
 }

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 65750f8b8259..a2ce3aa10d94 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -634,7 +634,7 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
     }
 }

-static void __mem_cgroup_flush_stats(void)
+static void do_flush_stats(bool atomic)
 {
     /*
      * We always flush the entire tree, so concurrent flushers can just
@@ -646,26 +646,46 @@ static void __mem_cgroup_flush_stats(void)
         return;

     WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
-    cgroup_rstat_flush_atomic(root_mem_cgroup->css.cgroup);
+
+    if (atomic)
+        cgroup_rstat_flush_atomic(root_mem_cgroup->css.cgroup);
+    else
+        cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
+
     atomic_set(&stats_flush_threshold, 0);
     atomic_set(&stats_flush_ongoing, 0);
 }

+static bool should_flush_stats(void)
+{
+    return atomic_read(&stats_flush_threshold) > num_online_cpus();
+}
+
 void mem_cgroup_flush_stats(void)
 {
-    if (atomic_read(&stats_flush_threshold) > num_online_cpus())
-        __mem_cgroup_flush_stats();
+    if (should_flush_stats())
+        do_flush_stats(false);
 }

-void mem_cgroup_flush_stats_ratelimited(void)
+void mem_cgroup_flush_stats_atomic(void)
+{
+    if (should_flush_stats())
+        do_flush_stats(true);
+}
+
+void mem_cgroup_flush_stats_atomic_ratelimited(void)
 {
     if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
-        mem_cgroup_flush_stats();
+        mem_cgroup_flush_stats_atomic();
 }

 static void flush_memcg_stats_dwork(struct work_struct *w)
 {
-    __mem_cgroup_flush_stats();
+    /*
+     * Always flush here so that flushing in latency-sensitive paths is
+     * as cheap as possible.
+     */
+    do_flush_stats(false);
     queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
 }

@@ -3685,9 +3705,12 @@ static unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
          * done from irq context; use stale stats in this case.
          * Arguably, usage threshold events are not reliable on the root
          * memcg anyway since its usage is ill-defined.
+         *
+         * Additionally, other call paths through memcg_check_events()
+         * disable irqs, so make sure we are flushing stats atomically.
          */
         if (in_task())
-            mem_cgroup_flush_stats();
+            mem_cgroup_flush_stats_atomic();
         val = memcg_page_state(memcg, NR_FILE_PAGES) +
             memcg_page_state(memcg, NR_ANON_MAPPED);
         if (swap)
@@ -4610,7 +4633,11 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
     struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
     struct mem_cgroup *parent;

-    mem_cgroup_flush_stats();
+    /*
+     * wb_writeback() takes a spinlock and calls
+     * wb_over_bg_thresh()->mem_cgroup_wb_stats(). Do not sleep.
+     */
+    mem_cgroup_flush_stats_atomic();

     *pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
     *pwriteback = memcg_page_state(memcg, NR_WRITEBACK);

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9c1c5e8b24b8..a9511ccb936f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2845,7 +2845,7 @@ static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
      * Flush the memory cgroup stats, so that we read accurate per-memcg
      * lruvec stats for heuristics.
      */
-    mem_cgroup_flush_stats();
+    mem_cgroup_flush_stats_atomic();

     /*
      * Determine the scan balance between anon and file LRUs.

diff --git a/mm/workingset.c b/mm/workingset.c
index af862c6738c3..dab0c362b9e3 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -462,7 +462,7 @@ void workingset_refault(struct folio *folio, void *shadow)

     mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);

-    mem_cgroup_flush_stats_ratelimited();
+    mem_cgroup_flush_stats_atomic_ratelimited();
     /*
      * Compare the distance to the existing workingset size. We
      * don't activate pages that couldn't stay resident even if
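The refactor funnels both entry points into one helper that takes a may-sleep decision as an argument. A userspace C sketch of that shape, where sched_yield() loosely stands in for cond_resched() and the chunked loop for the per-CPU rstat walk; all of this is an assumption-laden model, not kernel code:

#include <sched.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CHUNKS 8     /* stands in for the per-CPU trees the flush walks */

/* One core walker; the caller decides whether yielding/sleeping is allowed. */
static void do_flush(bool atomic)
{
    for (int chunk = 0; chunk < NR_CHUNKS; chunk++) {
        /* ... aggregate one chunk of pending updates ... */
        if (!atomic)
            sched_yield();      /* loosely models cond_resched() */
    }
}

/* Sleepable variant: userspace reads, the periodic worker, and similar paths. */
static void flush_stats(void)
{
    do_flush(false);
}

/* Atomic variant: paths such as writeback that must not sleep. */
static void flush_stats_atomic(void)
{
    do_flush(true);
}

int main(void)
{
    flush_stats();
    flush_stats_atomic();
    puts("both variants completed");
    return 0;
}

Keeping a single walker with a flag avoids duplicating the flush logic while still letting each call site state whether it can afford to sleep.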
From patchwork Tue Mar 28 22:16:42 2023
From: Yosry Ahmed
Date: Tue, 28 Mar 2023 22:16:42 +0000
Subject: [PATCH v2 7/9] workingset: memcg: sleep when flushing stats in workingset_refault()
Message-ID: <20230328221644.803272-8-yosryahmed@google.com>

In workingset_refault(), we call mem_cgroup_flush_stats_atomic_ratelimited() to flush stats within an RCU read section and with sleeping disallowed. Move the call above the RCU read section and allow sleeping, to avoid unnecessarily performing a lot of work without sleeping.

Since workingset_refault() is the only caller of mem_cgroup_flush_stats_atomic_ratelimited(), just make it non-atomic, and rename it to mem_cgroup_flush_stats_ratelimited().
Signed-off-by: Yosry Ahmed
Acked-by: Shakeel Butt
Acked-by: Johannes Weiner
Acked-by: Michal Hocko
---
 include/linux/memcontrol.h | 4 ++--
 mm/memcontrol.c            | 4 ++--
 mm/workingset.c            | 5 +++--
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index b424ba3ebd09..a4bc3910a2eb 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1038,7 +1038,7 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,

 void mem_cgroup_flush_stats(void);
 void mem_cgroup_flush_stats_atomic(void);
-void mem_cgroup_flush_stats_atomic_ratelimited(void);
+void mem_cgroup_flush_stats_ratelimited(void);

 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
                   int val);
@@ -1540,7 +1540,7 @@ static inline void mem_cgroup_flush_stats_atomic(void)
 {
 }

-static inline void mem_cgroup_flush_stats_atomic_ratelimited(void)
+static inline void mem_cgroup_flush_stats_ratelimited(void)
 {
 }

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a2ce3aa10d94..361c0bbf7283 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -673,10 +673,10 @@ void mem_cgroup_flush_stats_atomic(void)
         do_flush_stats(true);
 }

-void mem_cgroup_flush_stats_atomic_ratelimited(void)
+void mem_cgroup_flush_stats_ratelimited(void)
 {
     if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
-        mem_cgroup_flush_stats_atomic();
+        mem_cgroup_flush_stats();
 }

 static void flush_memcg_stats_dwork(struct work_struct *w)

diff --git a/mm/workingset.c b/mm/workingset.c
index dab0c362b9e3..3025beee9b34 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -406,6 +406,9 @@ void workingset_refault(struct folio *folio, void *shadow)
     unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
     eviction <<= bucket_order;

+    /* Flush stats (and potentially sleep) before holding RCU read lock */
+    mem_cgroup_flush_stats_ratelimited();
+
     rcu_read_lock();
     /*
      * Look up the memcg associated with the stored ID. It might
@@ -461,8 +464,6 @@ void workingset_refault(struct folio *folio, void *shadow)
     lruvec = mem_cgroup_lruvec(memcg, pgdat);

     mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
-
-    mem_cgroup_flush_stats_atomic_ratelimited();
     /*
      * Compare the distance to the existing workingset size. We
      * don't activate pages that couldn't stay resident even if
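The workingset change follows a general ordering rule: do the work that may sleep before entering the section that must not. A userspace C sketch of that ordering, with a plain mutex loosely standing in for the RCU read-side critical section (illustrative only, since a userspace mutex may itself sleep, unlike rcu_read_lock()):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lookup_lock = PTHREAD_MUTEX_INITIALIZER;

static void flush_stats_may_sleep(void)
{
    usleep(1000);       /* models a flush that is allowed to sleep */
}

static void refault(void)
{
    /* do the potentially sleeping work before the no-sleep section */
    flush_stats_may_sleep();

    pthread_mutex_lock(&lookup_lock);       /* loosely models rcu_read_lock() */
    /* ... look up the memcg and update the lruvec counters ... */
    pthread_mutex_unlock(&lookup_lock);     /* loosely models rcu_read_unlock() */
}

int main(void)
{
    refault();
    puts("refault path done");
    return 0;
}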
From patchwork Tue Mar 28 22:16:43 2023
From: Yosry Ahmed
Date: Tue, 28 Mar 2023 22:16:43 +0000
Subject: [PATCH v2 8/9] vmscan: memcg: sleep when flushing stats during reclaim
Message-ID: <20230328221644.803272-9-yosryahmed@google.com>

Memory reclaim is a sleepable context. Allow sleeping when flushing memcg stats to avoid unnecessarily performing a lot of work without sleeping. This can slow down reclaim code if flushing stats is taking too long, but there are already multiple cond_resched() calls in reclaim code.

Signed-off-by: Yosry Ahmed
Acked-by: Shakeel Butt
Acked-by: Johannes Weiner
Acked-by: Michal Hocko
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a9511ccb936f..9c1c5e8b24b8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2845,7 +2845,7 @@ static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
      * Flush the memory cgroup stats, so that we read accurate per-memcg
      * lruvec stats for heuristics.
      */
-    mem_cgroup_flush_stats_atomic();
+    mem_cgroup_flush_stats();

     /*
      * Determine the scan balance between anon and file LRUs.
From patchwork Tue Mar 28 22:16:44 2023
From: Yosry Ahmed
Date: Tue, 28 Mar 2023 22:16:44 +0000
Subject: [PATCH v2 9/9] memcg: do not modify rstat tree for zero updates
Message-ID: <20230328221644.803272-10-yosryahmed@google.com>

In some situations, we may end up calling memcg_rstat_updated() with a value of 0, which means the stat was not actually updated. An example is if we fail to reclaim any pages in shrink_folio_list().

Do not add the cgroup to the rstat updated tree in this case, to avoid unnecessarily flushing it.

Signed-off-by: Yosry Ahmed
Acked-by: Shakeel Butt
Acked-by: Johannes Weiner
Acked-by: Michal Hocko
---
 mm/memcontrol.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 361c0bbf7283..a63ee2efa780 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -618,6 +618,9 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 {
     unsigned int x;

+    if (!val)
+        return;
+
     cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());

     x = __this_cpu_add_return(stats_updates, abs(val));
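To round off the series, a small userspace C model of the update side that this last hunk touches: zero-sized updates return early and never touch the updated tree, while nonzero updates accumulate toward a flush threshold. The threshold value and all names are illustrative, not the kernel's:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define STATS_UPDATES_THRESHOLD 64      /* illustrative, not the kernel's value */

static unsigned int stats_updates;      /* models the per-CPU batch counter */
static bool on_updated_tree;            /* models membership in the rstat updated tree */
static bool flush_needed;               /* models stats_flush_threshold being exceeded */

static void rstat_updated(int val)
{
    if (!val)
        return;         /* nothing changed: do not touch the updated tree */

    on_updated_tree = true;     /* models cgroup_rstat_updated() */

    stats_updates += abs(val);
    if (stats_updates >= STATS_UPDATES_THRESHOLD) {
        flush_needed = true;
        stats_updates = 0;
    }
}

int main(void)
{
    rstat_updated(0);   /* e.g. shrink_folio_list() reclaimed nothing */
    printf("after zero update:  on tree=%d, flush needed=%d\n",
           on_updated_tree, flush_needed);

    for (int i = 0; i < 70; i++)
        rstat_updated(1);
    printf("after real updates: on tree=%d, flush needed=%d\n",
           on_updated_tree, flush_needed);
    return 0;
}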