From patchwork Wed Aug 30 17:53:32 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13370362
Date: Wed, 30 Aug 2023 17:53:32 +0000
Message-ID: <20230830175335.1536008-2-yosryahmed@google.com>
In-Reply-To: <20230830175335.1536008-1-yosryahmed@google.com>
Subject: [PATCH v3 1/4] mm: memcg: properly name and document unified stats
 flushing
From: Yosry Ahmed <yosryahmed@google.com>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
 Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
 linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Most contexts that flush memcg stats use "unified" flushing, where all
flushers attempt to flush the entire hierarchy, but only one flusher is
allowed at a time; the others skip flushing. This is needed because stats
are flushed from paths such as reclaim and refaults, which may have high
concurrency, especially on large systems. Serializing such
performance-sensitive paths could introduce regressions; hence, unified
flushing offers a tradeoff between stats staleness and the performance
impact of flushing stats.

Document this properly and explicitly by renaming the common flushing
helper from do_flush_stats() to do_unified_stats_flush(), and by adding
documentation that describes unified flushing. Additionally, rename the
flushing APIs to add "try" to their names, which implies that flushing
will not always happen. Also add proper documentation.

No functional change intended.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 include/linux/memcontrol.h |  8 ++---
 mm/memcontrol.c            | 61 +++++++++++++++++++++++++-------------
 mm/vmscan.c                |  2 +-
 mm/workingset.c            |  4 +--
 4 files changed, 47 insertions(+), 28 deletions(-)
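To illustrate the serialization scheme described above, here is a minimal
user-space sketch of the skip-if-ongoing gate, using C11 atomics in place
of the kernel's atomic_t helpers; the names and the expensive_flush()
stand-in are hypothetical, not the kernel code:

#include <stdatomic.h>
#include <stdio.h>

static atomic_int flush_ongoing;

/* Stand-in for flushing the entire hierarchy; hypothetical. */
static void expensive_flush(void)
{
	puts("flushing the whole tree");
}

/*
 * Skip-if-ongoing gate: the cheap read filters out most contenders,
 * and the exchange arbitrates the rest. Exactly one caller flushes;
 * concurrent callers return immediately and tolerate (possibly
 * stale) stats, mirroring do_unified_stats_flush().
 */
static void try_unified_flush(void)
{
	if (atomic_load(&flush_ongoing) ||
	    atomic_exchange(&flush_ongoing, 1))
		return;
	expensive_flush();
	atomic_store(&flush_ongoing, 0);
}

int main(void)
{
	try_unified_flush();	/* flushes */
	try_unified_flush();	/* flushes again: no concurrency here */
	return 0;
}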
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 11810a2cfd2d..d517b0cc5221 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1034,8 +1034,8 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 	return x;
 }
 
-void mem_cgroup_flush_stats(void);
-void mem_cgroup_flush_stats_ratelimited(void);
+void mem_cgroup_try_flush_stats(void);
+void mem_cgroup_try_flush_stats_ratelimited(void);
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			      int val);
@@ -1519,11 +1519,11 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 	return node_page_state(lruvec_pgdat(lruvec), idx);
 }
 
-static inline void mem_cgroup_flush_stats(void)
+static inline void mem_cgroup_try_flush_stats(void)
 {
 }
 
-static inline void mem_cgroup_flush_stats_ratelimited(void)
+static inline void mem_cgroup_try_flush_stats_ratelimited(void)
 {
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cf57fe9318d5..2d0ec828a1c4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -588,7 +588,7 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
 static void flush_memcg_stats_dwork(struct work_struct *w);
 static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
 static DEFINE_PER_CPU(unsigned int, stats_updates);
-static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
+static atomic_t stats_unified_flush_ongoing = ATOMIC_INIT(0);
 static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
 static u64 flush_next_time;
 
@@ -630,7 +630,7 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 		/*
 		 * If stats_flush_threshold exceeds the threshold
 		 * (>num_online_cpus()), cgroup stats update will be triggered
-		 * in __mem_cgroup_flush_stats(). Increasing this var further
+		 * in mem_cgroup_try_flush_stats(). Increasing this var further
 		 * is redundant and simply adds overhead in atomic update.
 		 */
 		if (atomic_read(&stats_flush_threshold) <= num_online_cpus())
@@ -639,15 +639,19 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 	}
 }
 
-static void do_flush_stats(void)
+/*
+ * do_unified_stats_flush - do a unified flush of memory cgroup statistics
+ *
+ * A unified flush tries to flush the entire hierarchy, but skips if there is
+ * another ongoing flush. This is meant for flushers that may have a lot of
+ * concurrency (e.g. reclaim, refault, etc), and should not be serialized to
+ * avoid slowing down performance-sensitive paths. A unified flush may skip,
+ * and hence may yield stale stats.
+ */
+static void do_unified_stats_flush(void)
 {
-	/*
-	 * We always flush the entire tree, so concurrent flushers can just
-	 * skip. This avoids a thundering herd problem on the rstat global lock
-	 * from memcg flushers (e.g. reclaim, refault, etc).
-	 */
-	if (atomic_read(&stats_flush_ongoing) ||
-	    atomic_xchg(&stats_flush_ongoing, 1))
+	if (atomic_read(&stats_unified_flush_ongoing) ||
+	    atomic_xchg(&stats_unified_flush_ongoing, 1))
 		return;
 
 	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
@@ -655,19 +659,34 @@ static void do_flush_stats(void)
 	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
 
 	atomic_set(&stats_flush_threshold, 0);
-	atomic_set(&stats_flush_ongoing, 0);
+	atomic_set(&stats_unified_flush_ongoing, 0);
 }
 
-void mem_cgroup_flush_stats(void)
+/*
+ * mem_cgroup_try_flush_stats - try to flush memory cgroup statistics
+ *
+ * Try to flush the stats of all memcgs that have stat updates since the last
+ * flush. We do not flush the stats if:
+ * - The magnitude of the pending updates is below a certain threshold.
+ * - There is another ongoing unified flush (see do_unified_stats_flush()).
+ *
+ * Hence, the stats may be stale, but ideally by less than FLUSH_TIME due to
+ * periodic flushing.
+ */
+void mem_cgroup_try_flush_stats(void)
 {
 	if (atomic_read(&stats_flush_threshold) > num_online_cpus())
-		do_flush_stats();
+		do_unified_stats_flush();
 }
 
-void mem_cgroup_flush_stats_ratelimited(void)
+/*
+ * Like mem_cgroup_try_flush_stats(), but only flushes if the periodic flusher
+ * is late.
+ */
+void mem_cgroup_try_flush_stats_ratelimited(void)
 {
 	if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
-		mem_cgroup_flush_stats();
+		mem_cgroup_try_flush_stats();
 }
 
 static void flush_memcg_stats_dwork(struct work_struct *w)
@@ -676,7 +695,7 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
 	 * Always flush here so that flushing in latency-sensitive paths is
 	 * as cheap as possible.
 	 */
-	do_flush_stats();
+	do_unified_stats_flush();
 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
 }
 
@@ -1576,7 +1595,7 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 	 *
 	 * Current memory state:
 	 */
-	mem_cgroup_flush_stats();
+	mem_cgroup_try_flush_stats();
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		u64 size;
@@ -4018,7 +4037,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
 	int nid;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_try_flush_stats();
 
 	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
 		seq_printf(m, "%s=%lu", stat->name,
@@ -4093,7 +4112,7 @@ static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 
 	BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_try_flush_stats();
 
 	for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
 		unsigned long nr;
@@ -4595,7 +4614,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 	struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
 	struct mem_cgroup *parent;
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_try_flush_stats();
 
 	*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
 	*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
@@ -6610,7 +6629,7 @@ static int memory_numa_stat_show(struct seq_file *m, void *v)
 	int i;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_try_flush_stats();
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		int nid;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c7c149cb8d66..457a18921fda 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2923,7 +2923,7 @@ static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
 	 * Flush the memory cgroup stats, so that we read accurate per-memcg
 	 * lruvec stats for heuristics.
 	 */
-	mem_cgroup_flush_stats();
+	mem_cgroup_try_flush_stats();
 
 	/*
 	 * Determine the scan balance between anon and file LRUs.
diff --git a/mm/workingset.c b/mm/workingset.c
index da58a26d0d4d..affb8699e58d 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -520,7 +520,7 @@ void workingset_refault(struct folio *folio, void *shadow)
 	}
 
 	/* Flush stats (and potentially sleep) before holding RCU read lock */
-	mem_cgroup_flush_stats_ratelimited();
+	mem_cgroup_try_flush_stats_ratelimited();
 
 	rcu_read_lock();
 
@@ -664,7 +664,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
 	struct lruvec *lruvec;
 	int i;
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_try_flush_stats();
 
 	lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
 	for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
 		pages += lruvec_page_state_local(lruvec,

From patchwork Wed Aug 30 17:53:33 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13370364
Date: Wed, 30 Aug 2023 17:53:33 +0000
Message-ID: <20230830175335.1536008-3-yosryahmed@google.com>
In-Reply-To: <20230830175335.1536008-1-yosryahmed@google.com>
Subject: [PATCH v3 2/4] mm: memcg: add a helper for non-unified stats flushing
From: Yosry Ahmed <yosryahmed@google.com>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
 Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
 linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Some contexts flush memcg stats outside of unified flushing, directly
using cgroup_rstat_flush(). Add a helper for non-unified flushing, as a
counterpart to do_unified_stats_flush(), and use it in those contexts as
well as in do_unified_stats_flush() itself. This abstracts the rstat API
and makes it easy to modify either the unified or the non-unified
flushing functions without changing their callers.

No functional change intended.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/memcontrol.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)
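To illustrate the layering this helper creates, here is a minimal
user-space sketch in which rstat_flush_subtree() and struct cgroup_node
are hypothetical stand-ins for the kernel's rstat API, not real kernel
interfaces:

#include <stdio.h>

struct cgroup_node { const char *name; };	/* hypothetical */

/* Low-level primitive (stand-in for cgroup_rstat_flush()). */
static void rstat_flush_subtree(struct cgroup_node *node)
{
	printf("flushing subtree of %s\n", node->name);
}

/* Every flush funnels through this helper, so a change to flushing
 * behavior (locking, accounting, tracing) touches one place only. */
static void do_stats_flush(struct cgroup_node *node)
{
	rstat_flush_subtree(node);
}

static struct cgroup_node root = { "root" };

/* Unified flushing becomes a policy layered on the same helper. */
static void do_unified_stats_flush(void)
{
	do_stats_flush(&root);
}

int main(void)
{
	struct cgroup_node leaf = { "leaf" };

	do_stats_flush(&leaf);		/* non-unified: one subtree */
	do_unified_stats_flush();	/* unified: whole tree */
	return 0;
}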
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2d0ec828a1c4..8c046feeaae7 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -639,6 +639,17 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 	}
 }
 
+/*
+ * do_stats_flush - do a flush of the memory cgroup statistics
+ * @memcg: memory cgroup to flush
+ *
+ * Only flushes the subtree of @memcg, does not skip under any conditions.
+ */
+static void do_stats_flush(struct mem_cgroup *memcg)
+{
+	cgroup_rstat_flush(memcg->css.cgroup);
+}
+
 /*
  * do_unified_stats_flush - do a unified flush of memory cgroup statistics
  *
@@ -656,7 +667,7 @@ static void do_unified_stats_flush(void)
 
 	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
 
-	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
+	do_stats_flush(root_mem_cgroup);
 
 	atomic_set(&stats_flush_threshold, 0);
 	atomic_set(&stats_unified_flush_ongoing, 0);
@@ -7790,7 +7801,7 @@ bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
 			break;
 		}
 
-		cgroup_rstat_flush(memcg->css.cgroup);
+		do_stats_flush(memcg);
 		pages = memcg_page_state(memcg, MEMCG_ZSWAP_B) / PAGE_SIZE;
 		if (pages < max)
 			continue;
@@ -7855,8 +7866,10 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
 static u64 zswap_current_read(struct cgroup_subsys_state *css,
 			      struct cftype *cft)
 {
-	cgroup_rstat_flush(css->cgroup);
-	return memcg_page_state(mem_cgroup_from_css(css), MEMCG_ZSWAP_B);
+	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
+	do_stats_flush(memcg);
+	return memcg_page_state(memcg, MEMCG_ZSWAP_B);
 }
 
 static int zswap_max_show(struct seq_file *m, void *v)

From patchwork Wed Aug 30 17:53:34 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13370363
Date: Wed, 30 Aug 2023 17:53:34 +0000
Message-ID: <20230830175335.1536008-4-yosryahmed@google.com>
In-Reply-To: <20230830175335.1536008-1-yosryahmed@google.com>
Subject: [PATCH v3 3/4] mm: memcg: let non-unified root stats flushes help
 unified flushes
From: Yosry Ahmed <yosryahmed@google.com>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
 Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
 linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Unified flushing of memcg stats keeps track of the magnitude of pending
updates, and only allows a flush if that magnitude exceeds a threshold.
It also keeps track of the time at which ratelimited flushing should be
allowed, as flush_next_time.

A non-unified flush on the root memcg has the same effect as a unified
flush, so let it help unified flushing by resetting pending updates and
kicking flush_next_time forward. Move the logic into the common
do_stats_flush() helper, and do it for all root flushes, unified or not.

There is a subtle change here: we now reset stats_flush_threshold before
a flush rather than after it. This is probably okay because:

(a) For flushers: only unified flushers check stats_flush_threshold, and
those flushers skip anyway if there is another unified flush ongoing.
Having them also skip if there is an ongoing non-unified root flush is
actually more consistent.

(b) For updaters: resetting stats_flush_threshold early may lead to more
atomic updates of stats_flush_threshold, as we start updating it earlier.
This should not be significant in practice because we stop updating
stats_flush_threshold once it reaches the threshold anyway. If we start
early and stop early, the number of atomic updates remains the same. The
only difference is the scenario where we reset stats_flush_threshold
early, start doing atomic updates early, and then the periodic flusher
kicks in before we reach the threshold. In this case, we will have done
more atomic updates. However, since the threshold wasn't reached, we did
not do a lot of updates anyway.

Suggested-by: Michal Koutný
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/memcontrol.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8c046feeaae7..94d5a6751a9e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -647,6 +647,11 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
  */
 static void do_stats_flush(struct mem_cgroup *memcg)
 {
+	/* for unified flushing, root non-unified flushing can help as well */
+	if (mem_cgroup_is_root(memcg)) {
+		WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
+		atomic_set(&stats_flush_threshold, 0);
+	}
 	cgroup_rstat_flush(memcg->css.cgroup);
 }
 
@@ -665,11 +670,8 @@ static void do_unified_stats_flush(void)
 	    atomic_xchg(&stats_unified_flush_ongoing, 1))
 		return;
 
-	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
-
 	do_stats_flush(root_mem_cgroup);
 
-	atomic_set(&stats_flush_threshold, 0);
 	atomic_set(&stats_unified_flush_ongoing, 0);
 }
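To make the updater-side reasoning in (b) above concrete, here is a
minimal user-space sketch of the threshold counter, with hypothetical
names and C11 atomics standing in for the kernel primitives; updaters
stop paying for atomic increments once the threshold is reached, so
resetting earlier only shifts the window in which they pay:

#include <stdatomic.h>
#include <stdio.h>

#define THRESHOLD 4		/* stand-in for num_online_cpus() */

static atomic_int pending;	/* stand-in for stats_flush_threshold */

/* Updater side: the atomic add is skipped once the threshold is
 * exceeded, which bounds the cost regardless of when the counter
 * was last reset. */
static void stat_updated(void)
{
	if (atomic_load(&pending) <= THRESHOLD)
		atomic_fetch_add(&pending, 1);
}

/* Flusher side: resetting before the flush (as this patch does)
 * means updates arriving during the flush count toward the next
 * flush instead of being discarded afterwards. */
static void flush(void)
{
	atomic_store(&pending, 0);
	puts("flush");
}

int main(void)
{
	for (int i = 0; i < 10; i++)
		stat_updated();		/* only ~THRESHOLD+1 atomic adds */
	if (atomic_load(&pending) > THRESHOLD)
		flush();
	return 0;
}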
From patchwork Wed Aug 30 17:53:35 2023
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13370365
Date: Wed, 30 Aug 2023 17:53:35 +0000
Message-ID: <20230830175335.1536008-5-yosryahmed@google.com>
In-Reply-To: <20230830175335.1536008-1-yosryahmed@google.com>
Subject: [PATCH v3 4/4] mm: memcg: use non-unified stats flushing for
 userspace reads
From: Yosry Ahmed <yosryahmed@google.com>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
 Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
 linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Unified flushing allows for great concurrency for paths that attempt to
flush the stats, at the expense of potential staleness and a single
flusher paying the extra cost of flushing the full tree. This tradeoff
makes sense for in-kernel flushers that may observe high concurrency
(e.g. reclaim, refault).

For userspace readers, stale stats may be unexpected and problematic,
especially when such stats are used for critical paths such as userspace
OOM handling. Additionally, a userspace reader will occasionally pay the
cost of flushing the entire hierarchy, which also causes problems in some
cases [1].

Opt userspace reads out of unified flushing. This makes both the cost of
reading the stats (now proportional to the size of the subtree) and the
freshness of the stats more predictable. Userspace readers are not
expected to have concurrency comparable to in-kernel flushers, so
serializing them among themselves and with in-kernel flushers should be
okay. Nonetheless, for extra safety, introduce a mutex when flushing for
userspace readers to make sure only a single userspace reader can compete
with in-kernel flushers at a time. This takes away userspace's ability to
directly influence or hurt in-kernel lock contention.

An alternative is to remove flushing from the stats reading path
completely and rely on the periodic flusher. This should be accompanied
by making the periodic flushing period tunable and providing an interface
for userspace to force a flush, following a model similar to
/proc/vmstat. However, such a change will be hard to reverse if the
implementation needs to be changed, because:
- The cost of reading stats will be very cheap and we won't be able to
  take that back easily.
- There are user-visible interfaces involved.

Hence, let's go with the change that's most reversible first and revisit
as needed.

This was tested on a machine with 256 cpus by running a synthetic test
script [2] that creates 50 top-level cgroups, each with 5 children (250
leaf cgroups). Each leaf cgroup has 10 processes running that allocate
memory beyond the cgroup limit, invoking reclaim (which is an in-kernel
unified flusher). Concurrently, one thread is spawned per cgroup to read
the stats every second (including root, top-level, and leaf cgroups --
251 threads in total). No significant regressions were observed in the
total run time, which means that userspace readers are not significantly
affecting in-kernel flushers:

Base (mm-unstable):

real	0m22.500s
user	0m9.399s
sys	73m41.381s

real	0m22.749s
user	0m15.648s
sys	73m13.113s

real	0m22.466s
user	0m10.000s
sys	73m11.933s

With this patch:

real	0m23.092s
user	0m10.110s
sys	75m42.774s

real	0m22.277s
user	0m10.443s
sys	72m7.182s

real	0m24.127s
user	0m12.617s
sys	78m52.765s

[1] https://lore.kernel.org/lkml/CABWYdi0c6__rh-K7dcM_pkf9BJdTRtAU08M43KO9ME4-dsgfoQ@mail.gmail.com/
[2] https://lore.kernel.org/lkml/CAJD7tka13M-zVZTyQJYL1iUAYvuQ1fcHbCjcOBZcz6POYTV-4g@mail.gmail.com/

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/memcontrol.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)
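As a user-space analogue of the mutex gating described above, the sketch
below serializes expensive readers behind a mutex before they contend on
the lock that fast paths also take; all names here are hypothetical
stand-ins that mirror the shape of the scheme, not the kernel
implementation:

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t rstat_lock;	/* contended fast-path lock */
static pthread_mutex_t user_flush_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Expensive operation that must hold the contended lock. */
static void flush_subtree(const char *who)
{
	pthread_spin_lock(&rstat_lock);
	printf("%s flushing\n", who);
	pthread_spin_unlock(&rstat_lock);
}

/* Userspace-read path: at most one reader at a time may compete
 * with in-kernel flushers for rstat_lock, so a flood of readers
 * cannot pile up on the hot lock itself. */
static void user_flush(const char *who)
{
	pthread_mutex_lock(&user_flush_mutex);
	flush_subtree(who);
	pthread_mutex_unlock(&user_flush_mutex);
}

static void *reader(void *arg)
{
	user_flush((const char *)arg);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_spin_init(&rstat_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&t1, NULL, reader, "reader-1");
	pthread_create(&t2, NULL, reader, "reader-2");
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}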
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 94d5a6751a9e..1544c3964f19 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -588,6 +588,7 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
 static void flush_memcg_stats_dwork(struct work_struct *w);
 static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
 static DEFINE_PER_CPU(unsigned int, stats_updates);
+static DEFINE_MUTEX(stats_user_flush_mutex);
 static atomic_t stats_unified_flush_ongoing = ATOMIC_INIT(0);
 static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
 static u64 flush_next_time;
@@ -655,6 +656,21 @@ static void do_stats_flush(struct mem_cgroup *memcg)
 	cgroup_rstat_flush(memcg->css.cgroup);
 }
 
+/*
+ * mem_cgroup_user_flush_stats - do a stats flush for a user read
+ * @memcg: memory cgroup to flush
+ *
+ * Flush the subtree of @memcg. A mutex is used for userspace readers to gate
+ * the global rstat spinlock. This protects in-kernel flushers from userspace
+ * readers hogging the lock.
+ */
+void mem_cgroup_user_flush_stats(struct mem_cgroup *memcg)
+{
+	mutex_lock(&stats_user_flush_mutex);
+	do_stats_flush(memcg);
+	mutex_unlock(&stats_user_flush_mutex);
+}
+
 /*
  * do_unified_stats_flush - do a unified flush of memory cgroup statistics
  *
@@ -1608,7 +1624,7 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 	 *
 	 * Current memory state:
 	 */
-	mem_cgroup_try_flush_stats();
+	mem_cgroup_user_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		u64 size;
@@ -4050,7 +4066,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
 	int nid;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_try_flush_stats();
+	mem_cgroup_user_flush_stats(memcg);
 
 	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
 		seq_printf(m, "%s=%lu", stat->name,
@@ -4125,7 +4141,7 @@ static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 
 	BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));
 
-	mem_cgroup_try_flush_stats();
+	mem_cgroup_user_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
 		unsigned long nr;
@@ -6642,7 +6658,7 @@ static int memory_numa_stat_show(struct seq_file *m, void *v)
 	int i;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_try_flush_stats();
+	mem_cgroup_user_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		int nid;