From patchwork Thu Nov 7 06:08:08 2019
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 11231909
From: Yafang Shao <laoar.shao@gmail.com>
To: mhocko@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, Yafang Shao <laoar.shao@gmail.com>
Subject: [PATCH 1/2] mm, memcg: introduce multiple levels memory low protection
Date: Thu, 7 Nov 2019 01:08:08 -0500
Message-Id: <1573106889-4939-1-git-send-email-laoar.shao@gmail.com>

This patch introduces a new memory controller file, memory.low.level,
which is used to set multiple levels of memory.low protection. The
valid value of memory.low.level is [0..3], meaning four protection
levels are supported. This new controller file takes effect only when
memory.low is set.

With this new file, we can apply page reclaim QoS to different memcgs:
when the system is under memory pressure, it will reclaim pages from
the memcgs with lower priority first and only then from those with
higher priority.

- What is the problem with the current memory.low protection?

Currently we can set a bigger memory.low protection on a memcg with
higher priority and a smaller memory.low protection on a memcg with
lower priority. But once there is no unprotected memory left to
reclaim, the reclaimers will reclaim the protected memory from all
memcgs alike. What we really want is for the reclaimers to reclaim the
protected memory from the lower-priority memcgs first, and only if
that still cannot satisfy the page allocation to reclaim the protected
memory from the higher-priority memcgs. The desired logic is shown
below:

    under_memory_pressure
        reclaim_unprotected_memory
        if (meet_the_request)
            exit
        reclaim_protected_memory_from_lowest_priority_memcgs
        if (meet_the_request)
            exit
        reclaim_protected_memory_from_higher_priority_memcgs
        if (meet_the_request)
            exit
        reclaim_protected_memory_from_highest_priority_memcgs
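
The same staged ordering, written out as a toy userspace C model. This
is purely illustrative: the page counts and the helper names
(meet_the_request(), reclaim_protected_memory(), ...) are invented for
the sketch and are not kernel API.

  #include <stdbool.h>
  #include <stdio.h>

  #define MEMCG_LOW_LEVELS 4    /* levels 0..3, as in the patch */

  static long deficit = 100;    /* pages the allocation still needs */

  static bool meet_the_request(void)
  {
      return deficit <= 0;
  }

  static void reclaim_unprotected_memory(void)
  {
      deficit -= 30;            /* pretend 30 pages were unprotected */
      printf("unprotected reclaim, deficit now %ld\n", deficit);
  }

  static void reclaim_protected_memory(int level)
  {
      deficit -= 30;            /* pretend each level yields 30 pages */
      printf("protected reclaim at level %d, deficit now %ld\n",
             level, deficit);
  }

  int main(void)
  {
      int level;

      reclaim_unprotected_memory();
      /*
       * Escalate from the least protected level (0) to the most
       * protected one (3), stopping as soon as the request is met.
       */
      for (level = 0; level < MEMCG_LOW_LEVELS && !meet_the_request(); level++)
          reclaim_protected_memory(level);
      return 0;
  }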
- Why does it support four levels of memory.low protection?

Low, medium and high priority are the most common use cases in real
life, so four levels of protection should be enough. The more levels
there are, the more overhead page reclaim takes, so four levels is
really a trade-off.

- How does it work?

Here is one example of how this multiple-level controller works:

    target memcg (root or non-root)
            /      \
           B        C
         /   \
        B1    B2

    B/memory.low.level=2     effective low level is 2
    B1/memory.low.level=3    effective low level is 2
    B2/memory.low.level=0    effective low level is 0
    C/memory.low.level=1     effective low level is 1

The effective low level is min(low_level, parent_low_level), and
memory.low is set in all of these memcgs. The reclaimer will then
reclaim them in this order: B2 -> C -> B/B1.
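
The min() propagation above can be checked with a small standalone
program. This is a sketch of the semantics only; the struct and
function names are invented for the example and do not exist in the
kernel:

  #include <stdio.h>

  struct memcg {
      const char *name;
      unsigned long low_level;          /* memory.low.level, 0..3 */
      const struct memcg *parent;
  };

  /* effective low level = min(own level, parent's effective level) */
  static unsigned long effective_low_level(const struct memcg *m)
  {
      unsigned long level = m->low_level;

      if (m->parent) {
          unsigned long plevel = effective_low_level(m->parent);

          if (plevel < level)
              level = plevel;
      }
      return level;
  }

  int main(void)
  {
      struct memcg b  = { "B",  2, NULL };
      struct memcg b1 = { "B1", 3, &b };
      struct memcg b2 = { "B2", 0, &b };
      struct memcg c  = { "C",  1, NULL };
      const struct memcg *all[] = { &b, &b1, &b2, &c };
      int i;

      /* Prints B:2, B1:2, B2:0, C:1, matching the example above. */
      for (i = 0; i < 4; i++)
          printf("%s: effective low level %lu\n",
                 all[i]->name, effective_low_level(all[i]));
      return 0;
  }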
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 Documentation/admin-guide/cgroup-v2.rst | 11 ++++++++
 include/linux/page_counter.h            |  3 +++
 mm/memcontrol.c                         | 45 +++++++++++++++++++++++++++++++++
 mm/vmscan.c                             | 31 ++++++++++++++++++-----
 4 files changed, 83 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index ed912315..cdacc9c 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1137,6 +1137,17 @@ PAGE_SIZE multiple when read back.
 	Putting more memory than generally available under this
 	protection is discouraged.
 
+  memory.low.level
+	A read-write single value file which exists on non-root
+	cgroups.  The default is "0".  The valid value is [0..3].
+
+	The controller file takes effect only after memory.low is set.
+	If both memory.low and memory.low.level are set on multiple
+	memcgs, then under memory pressure the reclaimer will reclaim
+	the unprotected memory first, then the protected memory with
+	a lower memory.low.level, and at last the protected memory
+	with the highest memory.low.level.
+
   memory.high
 	A read-write single value file which exists on non-root
 	cgroups.  The default is "max".
diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
index bab7e57..19bc589 100644
--- a/include/linux/page_counter.h
+++ b/include/linux/page_counter.h
@@ -6,6 +6,7 @@
 #include <linux/kernel.h>
 #include <asm/page.h>
 
+#define MEMCG_LOW_LEVEL_MAX 4
 struct page_counter {
 	atomic_long_t usage;
 	unsigned long min;
@@ -22,6 +23,8 @@ struct page_counter {
 	unsigned long elow;
 	atomic_long_t low_usage;
 	atomic_long_t children_low_usage;
+	unsigned long low_level;
+	unsigned long elow_level;
 
 	/* legacy */
 	unsigned long watermark;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 50f5bc5..9da4ef9 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5962,6 +5962,37 @@ static ssize_t memory_low_write(struct kernfs_open_file *of,
 	return nbytes;
 }
 
+static int memory_low_level_show(struct seq_file *m, void *v)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
+
+	seq_printf(m, "%lu\n", memcg->memory.low_level);
+
+	return 0;
+}
+
+static ssize_t memory_low_level_write(struct kernfs_open_file *of,
+				      char *buf, size_t nbytes, loff_t off)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+	int ret, low_level;
+
+	buf = strstrip(buf);
+	if (!buf)
+		return -EINVAL;
+
+	ret = kstrtoint(buf, 0, &low_level);
+	if (ret)
+		return ret;
+
+	if (low_level < 0 || low_level >= MEMCG_LOW_LEVEL_MAX)
+		return -EINVAL;
+
+	memcg->memory.low_level = low_level;
+
+	return nbytes;
+}
+
 static int memory_high_show(struct seq_file *m, void *v)
 {
 	return seq_puts_memcg_tunable(m, READ_ONCE(mem_cgroup_from_seq(m)->high));
@@ -6151,6 +6182,12 @@ static ssize_t memory_oom_group_write(struct kernfs_open_file *of,
 		.write = memory_low_write,
 	},
 	{
+		.name = "low.level",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.seq_show = memory_low_level_show,
+		.write = memory_low_level_write,
+	},
+	{
 		.name = "high",
 		.flags = CFTYPE_NOT_ON_ROOT,
 		.seq_show = memory_high_show,
@@ -6280,6 +6317,7 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
 	struct mem_cgroup *parent;
 	unsigned long emin, parent_emin;
 	unsigned long elow, parent_elow;
+	unsigned long elow_level, parent_elow_level;
 	unsigned long usage;
 
 	if (mem_cgroup_disabled())
@@ -6296,6 +6334,7 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
 
 	emin = memcg->memory.min;
 	elow = memcg->memory.low;
+	elow_level = memcg->memory.low_level;
 
 	parent = parent_mem_cgroup(memcg);
 	/* No parent means a non-hierarchical mode on v1 memcg */
@@ -6331,11 +6370,17 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
 		if (low_usage && siblings_low_usage)
 			elow = min(elow, parent_elow * low_usage /
 				   siblings_low_usage);
+
+		parent_elow_level = READ_ONCE(parent->memory.elow_level);
+		elow_level = min(elow_level, parent_elow_level);
+	} else {
+		elow_level = 0;
 	}
 
 exit:
 	memcg->memory.emin = emin;
 	memcg->memory.elow = elow;
+	memcg->memory.elow_level = elow_level;
 
 	if (usage <= emin)
 		return MEMCG_PROT_MIN;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d979852..3b08e85 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -88,15 +88,16 @@ struct scan_control {
 	/* Can pages be swapped as part of reclaim? */
 	unsigned int may_swap:1;
 
+	unsigned int hibernation_mode:1;
+
 	/*
 	 * Cgroups are not reclaimed below their configured memory.low,
 	 * unless we threaten to OOM. If any cgroups are skipped due to
 	 * memory.low and nothing was reclaimed, go back for memory.low.
 	 */
-	unsigned int memcg_low_reclaim:1;
+	unsigned int memcg_low_level:3;
 	unsigned int memcg_low_skipped:1;
+	unsigned int memcg_low_step:2;
-
-	unsigned int hibernation_mode:1;
 
 	/* One of the zones is ready for compaction */
 	unsigned int compaction_ready:1;
@@ -2403,10 +2404,12 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 		unsigned long lruvec_size;
 		unsigned long scan;
 		unsigned long protection;
+		bool memcg_low_reclaim = (sc->memcg_low_level >
+					  memcg->memory.elow_level);
 
 		lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
 		protection = mem_cgroup_protection(memcg,
-						   sc->memcg_low_reclaim);
+						   memcg_low_reclaim);
 
 		if (protection) {
 			/*
@@ -2691,6 +2694,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
 		unsigned long reclaimed;
 		unsigned long scanned;
+		unsigned long step;
 
 		switch (mem_cgroup_protected(target_memcg, memcg)) {
 		case MEMCG_PROT_MIN:
@@ -2706,7 +2710,11 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 			 * there is an unprotected supply
 			 * of reclaimable memory from other cgroups.
 			 */
-			if (!sc->memcg_low_reclaim) {
+			if (sc->memcg_low_level <= memcg->memory.elow_level) {
+				step = (memcg->memory.elow_level -
+					sc->memcg_low_level);
+				if (step < sc->memcg_low_step)
+					sc->memcg_low_step = step;
 				sc->memcg_low_skipped = 1;
 				continue;
 			}
@@ -3007,6 +3015,9 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	pg_data_t *last_pgdat;
 	struct zoneref *z;
 	struct zone *zone;
+
+	sc->memcg_low_step = MEMCG_LOW_LEVEL_MAX - 1;
+
 retry:
 	delayacct_freepages_start();
 
@@ -3061,9 +3072,15 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		return 1;
 
 	/* Untapped cgroup reserves?  Don't OOM, retry. */
-	if (sc->memcg_low_skipped) {
-		sc->priority = initial_priority;
-		sc->memcg_low_reclaim = 1;
+	if (sc->memcg_low_skipped &&
+	    sc->memcg_low_level < MEMCG_LOW_LEVEL_MAX) {
+		/*
+		 * If it is hard to reclaim page caches, we'd better use a
+		 * lower priority to avoid taking too much time.
+		 */
+		sc->priority = initial_priority > sc->memcg_low_level ?
+			       (initial_priority - sc->memcg_low_level) : 0;
+		sc->memcg_low_level += sc->memcg_low_step + 1;
 		sc->memcg_low_skipped = 0;
 		goto retry;
 	}
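
For completeness, this is how the new interface could be driven from
userspace once the patch is applied. The paths and values are
hypothetical: the sketch assumes cgroup2 is mounted at /sys/fs/cgroup
and that the memcgs B, B/B1, B/B2 and C from the example already exist.

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* Write a value to a cgroup2 control file. */
  static void cg_write(const char *path, const char *val)
  {
      int fd = open(path, O_WRONLY);

      if (fd < 0) {
          perror(path);
          return;
      }
      if (write(fd, val, strlen(val)) < 0)
          perror(path);
      close(fd);
  }

  int main(void)
  {
      /* memory.low must be set for memory.low.level to take effect. */
      cg_write("/sys/fs/cgroup/B/memory.low", "1073741824");
      cg_write("/sys/fs/cgroup/B/memory.low.level", "2");
      cg_write("/sys/fs/cgroup/B/B1/memory.low.level", "3");
      cg_write("/sys/fs/cgroup/B/B2/memory.low.level", "0");
      cg_write("/sys/fs/cgroup/C/memory.low.level", "1");
      return 0;
  }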
From patchwork Thu Nov 7 06:08:09 2019
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 11231911
From: Yafang Shao <laoar.shao@gmail.com>
To: mhocko@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, Yafang Shao <laoar.shao@gmail.com>
Subject: [PATCH 2/2] mm, memcg: introduce passive reclaim counters for non-root memcg
Date: Thu, 7 Nov 2019 01:08:09 -0500
Message-Id: <1573106889-4939-2-git-send-email-laoar.shao@gmail.com>
In-Reply-To: <1573106889-4939-1-git-send-email-laoar.shao@gmail.com>
References: <1573106889-4939-1-git-send-email-laoar.shao@gmail.com>

The memory.{min, low} protection can prevent the reclaimers from
reclaiming pages from a memcg when there is memory pressure outside of
that memcg. We should introduce some counters to show this behavior.

This patch introduces two counters, pgscan_passive and pgsteal_passive:

pgscan_passive:  pages scanned due to memory pressure outside this memcg
pgsteal_passive: pages reclaimed due to memory pressure outside this memcg

A memcg with a higher memory.{min, low} setting will get lower
pgscan_passive and pgsteal_passive counts. For example, if memory.min
is equal to memory.max, then these passive reclaim counters should
always be zero.

These counters are only maintained for non-root memory cgroups. It is
not easy to introduce memcg-only vmstat counters, because that would
require lots of changes, so I introduce global vmstat counters instead;
they are always zero for the root memory cgroup.
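
A minimal sketch of how the two new counters could be consumed from
userspace, assuming the memcg of interest sits at /sys/fs/cgroup/B
(the path is hypothetical; the counter names come from this patch):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      FILE *f = fopen("/sys/fs/cgroup/B/memory.stat", "r");
      char line[256];

      if (!f) {
          perror("memory.stat");
          return 1;
      }
      /* Print only the two passive reclaim counters. */
      while (fgets(line, sizeof(line), f)) {
          if (!strncmp(line, "pgscan_passive ", 15) ||
              !strncmp(line, "pgsteal_passive ", 16))
              fputs(line, stdout);
      }
      fclose(f);
      return 0;
  }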
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 Documentation/admin-guide/cgroup-v2.rst | 10 ++++++++++
 include/linux/vm_event_item.h           |  2 ++
 mm/memcontrol.c                         |  4 ++++
 mm/vmscan.c                             | 11 +++++++++--
 mm/vmstat.c                             |  2 ++
 5 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index cdacc9c..11e0129 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1336,10 +1336,20 @@ PAGE_SIZE multiple when read back.
 
 		Amount of scanned pages (in an inactive LRU list)
 
+	  pgscan_passive
+
+		Amount of scanned pages due to memory pressure outside this
+		memcg
+
 	  pgsteal
 
 		Amount of reclaimed pages
 
+	  pgsteal_passive
+
+		Amount of reclaimed pages due to memory pressure outside
+		this memcg
+
 	  pgactivate
 
 		Amount of pages moved to the active LRU list
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 47a3441..0fcdaa3 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -32,8 +32,10 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		PGREFILL,
 		PGSTEAL_KSWAPD,
 		PGSTEAL_DIRECT,
+		PGSTEAL_PASSIVE,
 		PGSCAN_KSWAPD,
 		PGSCAN_DIRECT,
+		PGSCAN_PASSIVE,
 		PGSCAN_DIRECT_THROTTLE,
 #ifdef CONFIG_NUMA
 		PGSCAN_ZONE_RECLAIM_FAILED,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9da4ef9..0a2bf9a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1443,9 +1443,13 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
 	seq_buf_printf(&s, "pgscan %lu\n",
 		       memcg_events(memcg, PGSCAN_KSWAPD) +
 		       memcg_events(memcg, PGSCAN_DIRECT));
+	seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGSCAN_PASSIVE),
+		       memcg_events(memcg, PGSCAN_PASSIVE));
 	seq_buf_printf(&s, "pgsteal %lu\n",
 		       memcg_events(memcg, PGSTEAL_KSWAPD) +
 		       memcg_events(memcg, PGSTEAL_DIRECT));
+	seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGSTEAL_PASSIVE),
+		       memcg_events(memcg, PGSTEAL_PASSIVE));
 	seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGACTIVATE),
 		       memcg_events(memcg, PGACTIVATE));
 	seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGDEACTIVATE),
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3b08e85..0ea6c4a0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1910,6 +1910,7 @@ static int current_may_throttle(void)
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 	bool stalled = false;
+	struct mem_cgroup *memcg;
 
 	while (unlikely(too_many_isolated(pgdat, file, sc))) {
 		if (stalled)
@@ -1934,10 +1935,13 @@ static int current_may_throttle(void)
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken);
 	reclaim_stat->recent_scanned[file] += nr_taken;
 
+	memcg = lruvec_memcg(lruvec);
 	item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT;
 	if (!cgroup_reclaim(sc))
 		__count_vm_events(item, nr_scanned);
-	__count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned);
+	if (memcg != sc->target_mem_cgroup)
+		__count_memcg_events(memcg, PGSCAN_PASSIVE, nr_scanned);
+	__count_memcg_events(memcg, item, nr_scanned);
 	spin_unlock_irq(&pgdat->lru_lock);
 
 	if (nr_taken == 0)
@@ -1948,10 +1952,13 @@ static int current_may_throttle(void)
 
 	spin_lock_irq(&pgdat->lru_lock);
 
+	memcg = lruvec_memcg(lruvec);
 	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
 	if (!cgroup_reclaim(sc))
 		__count_vm_events(item, nr_reclaimed);
-	__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
+	if (memcg != sc->target_mem_cgroup)
+		__count_memcg_events(memcg, PGSTEAL_PASSIVE, nr_reclaimed);
+	__count_memcg_events(memcg, item, nr_reclaimed);
 	reclaim_stat->recent_rotated[0] += stat.nr_activate[0];
 	reclaim_stat->recent_rotated[1] += stat.nr_activate[1];
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 78d5337..5d2a053 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1196,8 +1196,10 @@ int fragmentation_index(struct zone *zone, unsigned int order)
 	"pgrefill",
 	"pgsteal_kswapd",
 	"pgsteal_direct",
+	"pgsteal_passive",
 	"pgscan_kswapd",
 	"pgscan_direct",
+	"pgscan_passive",
 	"pgscan_direct_throttle",
 #ifdef CONFIG_NUMA