From patchwork Fri Jun 4 01:56:39 2021
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 12298585
Date: Thu, 3 Jun 2021 18:56:39 -0700
Message-Id: <20210604015640.2586269-1-shakeelb@google.com>
Subject: [PATCH 1/2] memcg: switch lruvec stats to rstat
From: Shakeel Butt
To: Tejun Heo, Johannes Weiner, Muchun Song
Cc: Michal Hocko, Roman Gushchin, Michal Koutný, Huang Ying,
 Andrew Morton, cgroups@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Shakeel Butt

The commit 2d146aa3aa84 ("mm: memcontrol: switch to rstat") switched the
memcg stats to the rstat infrastructure but skipped the conversion of the
lruvec stats, as those stats are read in performance-critical code paths
and flushing them could have hurt application performance. This patch
converts the lruvec stats to rstat, and a later patch adds periodic
flushing of the stats, removing the need to flush them synchronously in
the performance-critical code paths.

The rstat conversion comes at a price, namely memory cost. Effectively,
this patch reverts the savings made by commit f3344adf38bd ("mm:
memcontrol: optimize per-lruvec stats counter memory usage"). However,
this cost is justified by the negative impact that inaccurate lruvec
stats have on many heuristics. One such case is reported in [1].

The memory reclaim code is filled with a plethora of heuristics, many of
which read the lruvec stats, so inaccurate stats can render those
heuristics ineffective. [1] reports the impact of inaccurate lruvec
stats on the "cache trim mode" heuristic; inaccurate lruvec stats can
affect the deactivation and anon aging heuristics as well.

[1] https://lore.kernel.org/linux-mm/20210311004449.1170308-1-ying.huang@intel.com/

Signed-off-by: Shakeel Butt
---
 include/linux/memcontrol.h |  42 +++++++------
 mm/memcontrol.c            | 118 +++++++++++++------------------------
 2 files changed, 60 insertions(+), 100 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3cc18c2176e7..81d65d32ec2a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -105,14 +105,6 @@ struct mem_cgroup_reclaim_iter {
 	unsigned int generation;
 };
 
-struct lruvec_stat {
-	long count[NR_VM_NODE_STAT_ITEMS];
-};
-
-struct batched_lruvec_stat {
-	s32 count[NR_VM_NODE_STAT_ITEMS];
-};
-
 /*
  * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
  * shrinkers, which have elements charged to this memcg.
@@ -123,24 +115,30 @@ struct shrinker_info {
 	unsigned long *map;
 };
 
+struct lruvec_stats_percpu {
+	/* Local (CPU and cgroup) state */
+	long state[NR_VM_NODE_STAT_ITEMS];
+
+	/* Delta calculation for lockless upward propagation */
+	long state_prev[NR_VM_NODE_STAT_ITEMS];
+};
+
+struct lruvec_stats {
+	/* Aggregated (CPU and subtree) state */
+	long state[NR_VM_NODE_STAT_ITEMS];
+
+	/* Pending child counts during tree propagation */
+	long state_pending[NR_VM_NODE_STAT_ITEMS];
+};
+
 /*
  * per-node information in memory controller.
  */
 struct mem_cgroup_per_node {
 	struct lruvec		lruvec;
 
-	/*
-	 * Legacy local VM stats. This should be struct lruvec_stat and
-	 * cannot be optimized to struct batched_lruvec_stat. Because
-	 * the threshold of the lruvec_stat_cpu can be as big as
-	 * MEMCG_CHARGE_BATCH * PAGE_SIZE. It can fit into s32. But this
-	 * filed has no upper limit.
-	 */
-	struct lruvec_stat __percpu *lruvec_stat_local;
-
-	/* Subtree VM stats (batched updates) */
-	struct batched_lruvec_stat __percpu *lruvec_stat_cpu;
-	atomic_long_t		lruvec_stat[NR_VM_NODE_STAT_ITEMS];
+	struct lruvec_stats_percpu __percpu	*lruvec_stats_percpu;
+	struct lruvec_stats			lruvec_stats;
 
 	unsigned long		lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
 
@@ -965,7 +963,7 @@ static inline unsigned long lruvec_page_state(struct lruvec *lruvec,
 		return node_page_state(lruvec_pgdat(lruvec), idx);
 
 	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
-	x = atomic_long_read(&pn->lruvec_stat[idx]);
+	x = READ_ONCE(pn->lruvec_stats.state[idx]);
 #ifdef CONFIG_SMP
 	if (x < 0)
 		x = 0;
@@ -985,7 +983,7 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 
 	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
 	for_each_possible_cpu(cpu)
-		x += per_cpu(pn->lruvec_stat_local->count[idx], cpu);
+		x += per_cpu(pn->lruvec_stats_percpu->state[idx], cpu);
 #ifdef CONFIG_SMP
 	if (x < 0)
 		x = 0;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b9a6db6a7d4f..d48f727bec05 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -665,46 +665,20 @@ static unsigned long memcg_page_state_local(struct mem_cgroup *memcg, int idx)
 	return x;
 }
 
-static struct mem_cgroup_per_node *
-parent_nodeinfo(struct mem_cgroup_per_node *pn, int nid)
-{
-	struct mem_cgroup *parent;
-
-	parent = parent_mem_cgroup(pn->memcg);
-	if (!parent)
-		return NULL;
-	return parent->nodeinfo[nid];
-}
-
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			      int val)
 {
 	struct mem_cgroup_per_node *pn;
 	struct mem_cgroup *memcg;
-	long x, threshold = MEMCG_CHARGE_BATCH;
 
 	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
 	memcg = pn->memcg;
 
-	/* Update memcg */
-	__mod_memcg_state(memcg, idx, val);
-
 	/* Update lruvec */
-	__this_cpu_add(pn->lruvec_stat_local->count[idx], val);
+	__this_cpu_add(pn->lruvec_stats_percpu->state[idx], val);
 
-	if (vmstat_item_in_bytes(idx))
-		threshold <<= PAGE_SHIFT;
-
-	x = val + __this_cpu_read(pn->lruvec_stat_cpu->count[idx]);
-	if (unlikely(abs(x) > threshold)) {
-		pg_data_t *pgdat = lruvec_pgdat(lruvec);
-		struct mem_cgroup_per_node *pi;
-
-		for (pi = pn; pi; pi = parent_nodeinfo(pi, pgdat->node_id))
-			atomic_long_add(x, &pi->lruvec_stat[idx]);
-		x = 0;
-	}
-	__this_cpu_write(pn->lruvec_stat_cpu->count[idx], x);
+	/* Update memcg */
+	__mod_memcg_state(memcg, idx, val);
 }
 
 /**
@@ -2271,40 +2245,13 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
 	mutex_unlock(&percpu_charge_mutex);
 }
 
-static void memcg_flush_lruvec_page_state(struct mem_cgroup *memcg, int cpu)
-{
-	int nid;
-
-	for_each_node(nid) {
-		struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
-		unsigned long stat[NR_VM_NODE_STAT_ITEMS];
-		struct batched_lruvec_stat *lstatc;
-		int i;
-
-		lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpu);
-		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
-			stat[i] = lstatc->count[i];
-			lstatc->count[i] = 0;
-		}
-
-		do {
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
-				atomic_long_add(stat[i], &pn->lruvec_stat[i]);
-		} while ((pn = parent_nodeinfo(pn, nid)));
-	}
-}
-
 static int memcg_hotplug_cpu_dead(unsigned int cpu)
 {
 	struct memcg_stock_pcp *stock;
-	struct mem_cgroup *memcg;
 
 	stock = &per_cpu(memcg_stock, cpu);
 	drain_stock(stock);
 
-	for_each_mem_cgroup(memcg)
-		memcg_flush_lruvec_page_state(memcg, cpu);
-
 	return 0;
 }
 
@@ -5108,17 +5055,9 @@ static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
 	if (!pn)
 		return 1;
 
-	pn->lruvec_stat_local = alloc_percpu_gfp(struct lruvec_stat,
-						 GFP_KERNEL_ACCOUNT);
-	if (!pn->lruvec_stat_local) {
-		kfree(pn);
-		return 1;
-	}
-
-	pn->lruvec_stat_cpu = alloc_percpu_gfp(struct batched_lruvec_stat,
-					       GFP_KERNEL_ACCOUNT);
-	if (!pn->lruvec_stat_cpu) {
-		free_percpu(pn->lruvec_stat_local);
+	pn->lruvec_stats_percpu = alloc_percpu_gfp(struct lruvec_stats_percpu,
+						   GFP_KERNEL_ACCOUNT);
+	if (!pn->lruvec_stats_percpu) {
 		kfree(pn);
 		return 1;
 	}
@@ -5139,8 +5078,7 @@ static void free_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
 	if (!pn)
 		return;
 
-	free_percpu(pn->lruvec_stat_cpu);
-	free_percpu(pn->lruvec_stat_local);
+	free_percpu(pn->lruvec_stats_percpu);
 	kfree(pn);
 }
 
@@ -5156,15 +5094,7 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
 
 static void mem_cgroup_free(struct mem_cgroup *memcg)
 {
-	int cpu;
-
 	memcg_wb_domain_exit(memcg);
-
-	/*
-	 * Flush percpu lruvec stats to guarantee the value
-	 * correctness on parent's and all ancestor levels.
-	 */
-	for_each_online_cpu(cpu)
-		memcg_flush_lruvec_page_state(memcg, cpu);
 	__mem_cgroup_free(memcg);
 }
 
@@ -5397,7 +5327,7 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
 	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
 	struct memcg_vmstats_percpu *statc;
 	long delta, v;
-	int i;
+	int i, nid;
 
 	statc = per_cpu_ptr(memcg->vmstats_percpu, cpu);
 
@@ -5445,6 +5375,36 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
 		if (parent)
 			parent->vmstats.events_pending[i] += delta;
 	}
+
+	for_each_node_state(nid, N_MEMORY) {
+		struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
+		struct mem_cgroup_per_node *ppn = NULL;
+		struct lruvec_stats_percpu *lstatc;
+
+		if (parent)
+			ppn = parent->nodeinfo[nid];
+
+		lstatc = per_cpu_ptr(pn->lruvec_stats_percpu, cpu);
+
+		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
+			delta = pn->lruvec_stats.state_pending[i];
+			if (delta)
+				pn->lruvec_stats.state_pending[i] = 0;
+
+			v = READ_ONCE(lstatc->state[i]);
+			if (v != lstatc->state_prev[i]) {
+				delta += v - lstatc->state_prev[i];
+				lstatc->state_prev[i] = v;
+			}
+
+			if (!delta)
+				continue;
+
+			pn->lruvec_stats.state[i] += delta;
+			if (ppn)
+				ppn->lruvec_stats.state_pending[i] += delta;
+		}
+	}
 }
 
 #ifdef CONFIG_MMU
@@ -6378,6 +6338,8 @@ static int memory_numa_stat_show(struct seq_file *m, void *v)
 	int i;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
+	cgroup_rstat_flush(memcg->css.cgroup);
+
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		int nid;
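The delta bookkeeping introduced above (state, state_prev, state_pending)
is the heart of the conversion. The following self-contained userspace
model is a sketch of that scheme, not kernel code: the field names mirror
the patch, while the two-node tree, the NR_CPUS/NR_STATS sizes and the
main() scenario are purely illustrative.

```c
/*
 * Userspace model of rstat-style propagation: hot-path updates are
 * plain per-CPU adds; a flush computes deltas against the snapshot
 * taken at the previous flush and hands them one level up the tree.
 */
#include <stdio.h>

#define NR_CPUS  2
#define NR_STATS 1

struct node {
	long state[NR_STATS];               /* aggregated (CPU + subtree) */
	long state_pending[NR_STATS];       /* deltas queued by children */
	long pcpu_state[NR_CPUS][NR_STATS]; /* hot-path counters */
	long pcpu_prev[NR_CPUS][NR_STATS];  /* snapshot at last flush */
	struct node *parent;
};

/* Hot path: a cheap per-CPU add, no atomics, no tree walk. */
static void mod_state(struct node *n, int cpu, int idx, long val)
{
	n->pcpu_state[cpu][idx] += val;
}

/* Flush one node for one CPU, mirroring mem_cgroup_css_rstat_flush(). */
static void flush_node(struct node *n, int cpu)
{
	for (int i = 0; i < NR_STATS; i++) {
		long delta = n->state_pending[i];

		n->state_pending[i] = 0;
		delta += n->pcpu_state[cpu][i] - n->pcpu_prev[cpu][i];
		n->pcpu_prev[cpu][i] = n->pcpu_state[cpu][i];
		if (!delta)
			continue;
		n->state[i] += delta;
		if (n->parent)
			n->parent->state_pending[i] += delta;
	}
}

int main(void)
{
	struct node root = { .parent = NULL };
	struct node child = { .parent = &root };

	mod_state(&child, 0, 0, 5);   /* cheap hot-path updates */
	mod_state(&child, 1, 0, -2);

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		flush_node(&child, cpu); /* child before parent */
		flush_node(&root, cpu);
	}
	printf("child=%ld root=%ld\n", child.state[0], root.state[0]);
	return 0; /* prints child=3 root=3 */
}
```

The design point this illustrates: the update path touches only one
per-CPU counter, and all cross-CPU and cross-level aggregation is
deferred to flush time. That is exactly why readers need a flush, which
the next patch makes periodic, to observe fresh values.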
From patchwork Fri Jun 4 01:56:40 2021
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 12298593

Date: Thu, 3 Jun 2021 18:56:40 -0700
In-Reply-To: <20210604015640.2586269-1-shakeelb@google.com>
Message-Id: <20210604015640.2586269-2-shakeelb@google.com>
References: <20210604015640.2586269-1-shakeelb@google.com>
Subject: [PATCH 2/2] memcg: periodically flush the memcg stats
From: Shakeel Butt
To: Tejun Heo, Johannes Weiner, Muchun Song
Cc: Michal Hocko, Roman Gushchin, Michal Koutný, Huang Ying,
 Andrew Morton, cgroups@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Shakeel Butt

At the moment memcg stats are read in four contexts:

1. memcg stat user interfaces
2. dirty throttling
3. page fault
4. memory reclaim

Currently the kernel flushes the stats synchronously for the first two
cases. Flushing the stats for the remaining two cases may have a
performance impact.
Always flushing the memcg stats on the page fault code path may
negatively impact the performance of applications. In addition, flushing
in the memory reclaim code path, though treated as a slowpath, can
become a source of contention on the global lock taken for stat
flushing, because many tasks may enter the reclaim path when the system
or a memcg is under memory pressure.

Instead of flushing the stats synchronously, this patch adds support for
asynchronous periodic flushing of the memcg stats. For now the flushing
period is hardcoded to 2*HZ, but that can later be made tunable, e.g.
through a sysctl, if the need arises. This patch does add explicit
flushing in the kswapd thread, as the number of kswapd threads (which
corresponds to the number of nodes) is usually low on realistic
machines.

Signed-off-by: Shakeel Butt
---
 include/linux/memcontrol.h | 10 ++++++++++
 mm/memcontrol.c            | 14 ++++++++++++++
 mm/vmscan.c                |  6 ++++++
 3 files changed, 30 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 81d65d32ec2a..222c00e76ef9 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -991,6 +991,12 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 	return x;
 }
 
+static inline void mem_cgroup_flush_stats(void)
+{
+	if (!mem_cgroup_disabled())
+		cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
+}
+
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			      int val);
 void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val);
@@ -1394,6 +1400,10 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 	return node_page_state(lruvec_pgdat(lruvec), idx);
 }
 
+static inline void mem_cgroup_flush_stats(void)
+{
+}
+
 static inline void __mod_memcg_lruvec_state(struct lruvec *lruvec,
 					    enum node_stat_item idx, int val)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d48f727bec05..6c8578faa8b4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -96,6 +96,10 @@ bool cgroup_memory_noswap __read_mostly;
 static DECLARE_WAIT_QUEUE_HEAD(memcg_cgwb_frn_waitq);
 #endif
 
+/* Periodically flush memcg and lruvec stats. */
+static void flush_memcg_stats(struct work_struct *w);
+static DECLARE_DEFERRABLE_WORK(stats_flush, flush_memcg_stats);
+
 /* Whether legacy memory+swap accounting is active */
 static bool do_memsw_account(void)
 {
@@ -5230,6 +5234,10 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	/* Online state pins memcg ID, memcg ID pins CSS */
 	refcount_set(&memcg->id.ref, 1);
 	css_get(css);
+
+	if (unlikely(mem_cgroup_is_root(memcg)))
+		schedule_delayed_work(&stats_flush, round_jiffies(2UL*HZ));
+
 	return 0;
 }
 
@@ -5321,6 +5329,12 @@ static void mem_cgroup_css_reset(struct cgroup_subsys_state *css)
 	memcg_wb_domain_size_changed(memcg);
 }
 
+static void flush_memcg_stats(struct work_struct *w)
+{
+	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
+	schedule_delayed_work(&stats_flush, round_jiffies(2UL*HZ));
+}
+
 static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
 {
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 60a19fd6ea3f..16546a5be922 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3872,6 +3872,12 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
 		sc.may_writepage = !laptop_mode && !nr_boost_reclaim;
 		sc.may_swap = !nr_boost_reclaim;
 
+		/*
+		 * Flush the memory cgroup stats, so that we read accurate
+		 * per-memcg lruvec stats for heuristics later.
+		 */
+		mem_cgroup_flush_stats();
+
 		/*
 		 * Do some background aging of the anon list, to give
 		 * pages a chance to be referenced before reclaiming. All
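The flushing machinery in patch 2 is the kernel's standard self-rearming
deferrable-work pattern. Below is a minimal module sketch of just that
pattern, assuming nothing beyond the stock workqueue API; the module
name and the pr_info() body are illustrative stand-ins, whereas the real
work item calls cgroup_rstat_flush() on the root memcg as in the patch.

```c
/* Sketch of a self-rearming deferrable work item (illustrative module). */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <linux/timer.h>

#define FLUSH_PERIOD (2UL * HZ)

static void periodic_flush(struct work_struct *w);
/* Deferrable: the timer does not wake an otherwise idle CPU. */
static DECLARE_DEFERRABLE_WORK(flush_work, periodic_flush);

static void periodic_flush(struct work_struct *w)
{
	pr_info("flush tick\n"); /* stand-in for cgroup_rstat_flush() */
	/* Re-arm for the next period. */
	schedule_delayed_work(&flush_work, round_jiffies(FLUSH_PERIOD));
}

static int __init flush_demo_init(void)
{
	schedule_delayed_work(&flush_work, round_jiffies(FLUSH_PERIOD));
	return 0;
}

static void __exit flush_demo_exit(void)
{
	/* Stop the rearming loop and wait for any in-flight run. */
	cancel_delayed_work_sync(&flush_work);
}

module_init(flush_demo_init);
module_exit(flush_demo_exit);
MODULE_LICENSE("GPL");
```

Because the work is deferrable, a tickless idle CPU is not woken solely
to flush stats; the flush simply runs at the next natural wakeup. The
round_jiffies() call rounds the expiry to a whole-second boundary so
periodic timers across the system can batch their wakeups.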