From patchwork Mon Oct 21 11:56:54 2019
X-Patchwork-Submitter: Hillf Danton
X-Patchwork-Id: 11202067
From: Hillf Danton
To: linux-mm
Cc: Andrew Morton, linux-kernel, Chris Down, Tejun Heo, Roman Gushchin,
    Michal Hocko, Johannes Weiner, Shakeel Butt, Matthew Wilcox,
    Minchan Kim, Mel Gorman, Hillf Danton
Subject: [RFC v1] memcg: add memcg lru for page reclaiming
Date: Mon, 21 Oct 2019 19:56:54 +0800
Message-Id: <20191021115654.14740-1-hdanton@sina.com>
Currently soft limit reclaim (slr) is frozen, see
Documentation/admin-guide/cgroup-v2.rst for the reasons.

Copying the page lru idea, a memcg lru is added for selecting the
victim memcg to reclaim pages from under memory pressure. For now it
works in parallel to slr, not only because the latter needs some time
to reap, but also because the coexistence makes it straightforward to
add the lru.

An lru list paired with a spin lock is added, along with a couple of
helpers to add a memcg to the lru and to pick a victim from it; the
existing memcg high_work provides the rest of what is needed (a small
userspace sketch of the helpers is appended after the diff).

V1 is based on 5.4-rc3.

Changes since v0
- add MEMCG_LRU in init/Kconfig
- drop changes in mm/vmscan.c
- make memcg lru work in parallel to slr

Cc: Chris Down
Cc: Tejun Heo
Cc: Roman Gushchin
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shakeel Butt
Cc: Matthew Wilcox
Cc: Minchan Kim
Cc: Mel Gorman
Signed-off-by: Hillf Danton
---

--- a/init/Kconfig
+++ b/init/Kconfig
@@ -843,6 +843,14 @@ config MEMCG
 	help
 	  Provides control over the memory footprint of tasks in a cgroup.
 
+config MEMCG_LRU
+	bool
+	depends on MEMCG
+	help
+	  Select victim memcg on lru for page reclaiming.
+
+	  Say N if unsure.
+
 config MEMCG_SWAP
 	bool "Swap controller"
 	depends on MEMCG && SWAP
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -223,6 +223,10 @@ struct mem_cgroup {
 	/* Upper bound of normal memory consumption range */
 	unsigned long high;
 
+#ifdef CONFIG_MEMCG_LRU
+	struct list_head lru_node;
+#endif
+
 	/* Range enforcement for interrupt charges */
 	struct work_struct high_work;
 
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2338,14 +2338,54 @@ static int memcg_hotplug_cpu_dead(unsign
 	return 0;
 }
 
+#ifdef CONFIG_MEMCG_LRU
+static DEFINE_SPINLOCK(memcg_lru_lock);
+static LIST_HEAD(memcg_lru);	/* a copy of page lru */
+
+static void memcg_add_lru(struct mem_cgroup *memcg)
+{
+	spin_lock_irq(&memcg_lru_lock);
+	if (list_empty(&memcg->lru_node))
+		list_add_tail(&memcg->lru_node, &memcg_lru);
+	spin_unlock_irq(&memcg_lru_lock);
+}
+
+static struct mem_cgroup *memcg_pick_lru(void)
+{
+	struct mem_cgroup *memcg, *next;
+
+	spin_lock_irq(&memcg_lru_lock);
+
+	list_for_each_entry_safe(memcg, next, &memcg_lru, lru_node) {
+		list_del_init(&memcg->lru_node);
+
+		if (page_counter_read(&memcg->memory) > memcg->high) {
+			spin_unlock_irq(&memcg_lru_lock);
+			return memcg;
+		}
+	}
+	spin_unlock_irq(&memcg_lru_lock);
+
+	return NULL;
+}
+#endif
+
 static void reclaim_high(struct mem_cgroup *memcg,
 			 unsigned int nr_pages,
 			 gfp_t gfp_mask)
 {
+#ifdef CONFIG_MEMCG_LRU
+	struct mem_cgroup *start = memcg;
+#endif
 	do {
 		if (page_counter_read(&memcg->memory) <= memcg->high)
 			continue;
 		memcg_memory_event(memcg, MEMCG_HIGH);
+		if (IS_ENABLED(CONFIG_MEMCG_LRU))
+			if (start != memcg) {
+				memcg_add_lru(memcg);
+				return;
+			}
 		try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, true);
 	} while ((memcg = parent_mem_cgroup(memcg)));
 }
@@ -3158,6 +3198,13 @@ unsigned long mem_cgroup_soft_limit_recl
 	unsigned long excess;
 	unsigned long nr_scanned;
 
+	if (IS_ENABLED(CONFIG_MEMCG_LRU)) {
+		struct mem_cgroup *memcg = memcg_pick_lru();
+		if (memcg)
+			schedule_work(&memcg->high_work);
+		return 0;
+	}
+
 	if (order > 0)
 		return 0;
 
@@ -5068,6 +5115,8 @@ static struct mem_cgroup *mem_cgroup_all
 	if (memcg_wb_domain_init(memcg, GFP_KERNEL))
 		goto fail;
 
+	if (IS_ENABLED(CONFIG_MEMCG_LRU))
+		INIT_LIST_HEAD(&memcg->lru_node);
 	INIT_WORK(&memcg->high_work, high_work_func);
 	memcg->last_scanned_node = MAX_NUMNODES;
 	INIT_LIST_HEAD(&memcg->oom_notify);
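
For anyone who wants to poke at the victim-selection logic outside the
kernel, here is a minimal userspace sketch of the two helpers above. It
is illustration only, not kernel code: the tiny list implementation,
the plain "struct memcg" and all names in it are made-up stand-ins for
the kernel's list_head, spinlock and struct mem_cgroup, and locking is
omitted because the sketch is single-threaded.

/* sketch.c - userspace mock of the memcg lru add/pick helpers */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct list_node {
	struct list_node *prev, *next;
};

static void list_init(struct list_node *n)
{
	n->prev = n->next = n;
}

static bool list_empty(const struct list_node *n)
{
	return n->next == n;
}

static void list_add_tail(struct list_node *n, struct list_node *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

static void list_del_init(struct list_node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	list_init(n);
}

/* stand-in for struct mem_cgroup: only the fields the lru cares about */
struct memcg {
	const char *name;
	unsigned long usage;	/* page_counter_read(&memcg->memory) */
	unsigned long high;	/* memcg->high */
	struct list_node lru_node;
};

static struct list_node memcg_lru = { &memcg_lru, &memcg_lru };

/* queue a memcg once; mirrors memcg_add_lru() minus the spinlock */
static void memcg_add_lru(struct memcg *m)
{
	if (list_empty(&m->lru_node))
		list_add_tail(&m->lru_node, &memcg_lru);
}

/* FIFO scan: pop entries until one is still above its high mark */
static struct memcg *memcg_pick_lru(void)
{
	while (!list_empty(&memcg_lru)) {
		struct list_node *n = memcg_lru.next;
		struct memcg *m = (struct memcg *)((char *)n -
				offsetof(struct memcg, lru_node));

		list_del_init(n);
		if (m->usage > m->high)
			return m;	/* still over its high mark */
	}
	return NULL;
}

int main(void)
{
	struct memcg a = { "a", 100, 200 }, b = { "b", 500, 200 };
	struct memcg *victim;

	list_init(&a.lru_node);
	list_init(&b.lru_node);

	memcg_add_lru(&a);	/* back under its high mark, gets skipped */
	memcg_add_lru(&b);	/* still over its high mark, gets picked */

	victim = memcg_pick_lru();
	printf("victim: %s\n", victim ? victim->name : "none");
	return 0;
}

Built and run, it should print "victim: b": memcg "a" has dropped back
under its high mark and is silently dropped from the lru, while "b" is
still above it and is returned, mirroring what memcg_pick_lru() does in
the patch under memcg_lru_lock before its high_work is scheduled.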