From patchwork Sun Mar 15 09:53:40 2020
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 11438739
From: Yafang Shao <laoar.shao@gmail.com>
To: dchinner@redhat.com, hannes@cmpxchg.org, mhocko@kernel.org,
    vdavydov.dev@gmail.com, guro@fb.com, akpm@linux-foundation.org,
    viro@zeniv.linux.org.uk
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    Yafang Shao <laoar.shao@gmail.com>
Subject: [PATCH v5 1/3] mm, list_lru: make memcg visible to lru walker isolation function
Date: Sun, 15 Mar 2020 05:53:40 -0400
Message-Id: <20200315095342.10178-2-laoar.shao@gmail.com>
In-Reply-To: <20200315095342.10178-1-laoar.shao@gmail.com>
References: <20200315095342.10178-1-laoar.shao@gmail.com>

The lru walker isolation function may use the memcg to do something,
e.g. the inode isolation function in a followup patch will use the
memcg to do inode protection. So make the memcg visible to the lru
walker isolation function.

One thing that should be emphasized is that this patch replaces
for_each_memcg_cache_index() with for_each_mem_cgroup() in
list_lru_walk_node(). There is a gap between these two macros:
for_each_mem_cgroup() depends on CONFIG_MEMCG, while
for_each_memcg_cache_index() depends on CONFIG_MEMCG_KMEM. But since
list_lru_memcg_aware() returns false if CONFIG_MEMCG_KMEM is not
configured, the replacement is safe.

Another difference between for_each_memcg_cache_index() and
for_each_mem_cgroup() is that for_each_memcg_cache_index() excludes
the root_mem_cgroup, because its kmemcg_id is -1, while
for_each_mem_cgroup() includes it. So we need to skip the
root_mem_cgroup explicitly in the for loop.

Cc: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/linux/memcontrol.h | 21 +++++++++++++++++
 mm/list_lru.c              | 47 +++++++++++++++++++++++---------------
 mm/memcontrol.c            | 15 ------------
 3 files changed, 49 insertions(+), 34 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a7a0a1a5c8d5..a624c423e60b 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -445,6 +445,21 @@ void mem_cgroup_iter_break(struct mem_cgroup *, struct mem_cgroup *);
 int mem_cgroup_scan_tasks(struct mem_cgroup *,
 			  int (*)(struct task_struct *, void *), void *);
 
+/*
+ * Iteration constructs for visiting all cgroups (under a tree).  If
+ * loops are exited prematurely (break), mem_cgroup_iter_break() must
+ * be used for reference counting.
+ */
+#define for_each_mem_cgroup_tree(iter, root)		\
+	for (iter = mem_cgroup_iter(root, NULL, NULL);	\
+	     iter != NULL;				\
+	     iter = mem_cgroup_iter(root, iter, NULL))
+
+#define for_each_mem_cgroup(iter)			\
+	for (iter = mem_cgroup_iter(NULL, NULL, NULL);	\
+	     iter != NULL;				\
+	     iter = mem_cgroup_iter(NULL, iter, NULL))
+
 static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
 {
 	if (mem_cgroup_disabled())
@@ -945,6 +960,12 @@ static inline int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 	return 0;
 }
 
+#define for_each_mem_cgroup_tree(iter, root)	\
+	for (iter = NULL; iter; )
+
+#define for_each_mem_cgroup(iter)	\
+	for (iter = NULL; iter; )
+
 static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
 {
 	return 0;
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0f1f6b06b7f3..6daa8c64d13d 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -207,11 +207,11 @@ unsigned long list_lru_count_node(struct list_lru *lru, int nid)
 EXPORT_SYMBOL_GPL(list_lru_count_node);
 
 static unsigned long
-__list_lru_walk_one(struct list_lru_node *nlru, int memcg_idx,
+__list_lru_walk_one(struct list_lru_node *nlru, struct mem_cgroup *memcg,
 		    list_lru_walk_cb isolate, void *cb_arg,
 		    unsigned long *nr_to_walk)
 {
-
+	int memcg_idx = memcg_cache_id(memcg);
 	struct list_lru_one *l;
 	struct list_head *item, *n;
 	unsigned long isolated = 0;
@@ -273,7 +273,7 @@ list_lru_walk_one(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
 	unsigned long ret;
 
 	spin_lock(&nlru->lock);
-	ret = __list_lru_walk_one(nlru, memcg_cache_id(memcg), isolate, cb_arg,
+	ret = __list_lru_walk_one(nlru, memcg, isolate, cb_arg,
 				  nr_to_walk);
 	spin_unlock(&nlru->lock);
 	return ret;
@@ -289,7 +289,7 @@ list_lru_walk_one_irq(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
 	unsigned long ret;
 
 	spin_lock_irq(&nlru->lock);
-	ret = __list_lru_walk_one(nlru, memcg_cache_id(memcg), isolate, cb_arg,
+	ret = __list_lru_walk_one(nlru, memcg, isolate, cb_arg,
 				  nr_to_walk);
 	spin_unlock_irq(&nlru->lock);
 	return ret;
@@ -299,25 +299,34 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid,
 				 list_lru_walk_cb isolate, void *cb_arg,
 				 unsigned long *nr_to_walk)
 {
-	long isolated = 0;
-	int memcg_idx;
+	struct list_lru_node *nlru;
+	struct mem_cgroup *memcg;
+	long isolated;
 
-	isolated += list_lru_walk_one(lru, nid, NULL, isolate, cb_arg,
-				      nr_to_walk);
-	if (*nr_to_walk > 0 && list_lru_memcg_aware(lru)) {
-		for_each_memcg_cache_index(memcg_idx) {
-			struct list_lru_node *nlru = &lru->node[nid];
+	/* iterate the global lru first */
+	isolated = list_lru_walk_one(lru, nid, NULL, isolate, cb_arg,
+				     nr_to_walk);
 
-			spin_lock(&nlru->lock);
-			isolated += __list_lru_walk_one(nlru, memcg_idx,
-							isolate, cb_arg,
-							nr_to_walk);
-			spin_unlock(&nlru->lock);
+	if (!list_lru_memcg_aware(lru))
+		goto out;
 
-			if (*nr_to_walk <= 0)
-				break;
-		}
+	nlru = &lru->node[nid];
+	for_each_mem_cgroup(memcg) {
+		/* already scanned the root memcg above */
+		if (mem_cgroup_is_root(memcg))
+			continue;
+
+		if (*nr_to_walk <= 0)
+			break;
+
+		spin_lock(&nlru->lock);
+		isolated += __list_lru_walk_one(nlru, memcg,
+						isolate, cb_arg,
+						nr_to_walk);
+		spin_unlock(&nlru->lock);
 	}
+
+out:
 	return isolated;
 }
 EXPORT_SYMBOL_GPL(list_lru_walk_node);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d09776cd6e10..688d51dbb731 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -222,21 +222,6 @@ enum res_type {
 /* Used for OOM nofiier */
 #define OOM_CONTROL		(0)
 
-/*
- * Iteration constructs for visiting all cgroups (under a tree).  If
- * loops are exited prematurely (break), mem_cgroup_iter_break() must
- * be used for reference counting.
- */
-#define for_each_mem_cgroup_tree(iter, root)		\
-	for (iter = mem_cgroup_iter(root, NULL, NULL);	\
-	     iter != NULL;				\
-	     iter = mem_cgroup_iter(root, iter, NULL))
-
-#define for_each_mem_cgroup(iter)			\
-	for (iter = mem_cgroup_iter(NULL, NULL, NULL);	\
-	     iter != NULL;				\
-	     iter = mem_cgroup_iter(NULL, iter, NULL))
-
 static inline bool should_force_charge(void)
 {
 	return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||

From patchwork Sun Mar 15 09:53:41 2020
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 11438743
From: Yafang Shao <laoar.shao@gmail.com>
To: dchinner@redhat.com, hannes@cmpxchg.org, mhocko@kernel.org,
    vdavydov.dev@gmail.com, guro@fb.com, akpm@linux-foundation.org,
    viro@zeniv.linux.org.uk
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    Yafang Shao <laoar.shao@gmail.com>
Subject: [PATCH v5 2/3] mm, shrinker: make memcg low reclaim visible to lru walker isolation function
Date: Sun, 15 Mar 2020 05:53:41 -0400
Message-Id: <20200315095342.10178-3-laoar.shao@gmail.com>
In-Reply-To: <20200315095342.10178-1-laoar.shao@gmail.com>
References: <20200315095342.10178-1-laoar.shao@gmail.com>

A new member, memcg_low_reclaim, is introduced in the shrink_control
struct. It is derived from the scan_control struct and tells the
shrinker whether the current reclaim session is under memcg low
reclaim or not. A followup patch will use this new member.
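For illustration, here is a minimal sketch of how an isolation callback
could consult the new member. This is not part of this patch: the
callback name and the choice of passing shrink_control as the callback
argument are hypothetical; the real consumer is the inode isolation
function introduced in the next patch.

	static enum lru_status sketch_isolate(struct list_head *item,
					      struct list_lru_one *lru,
					      spinlock_t *lru_lock, void *arg)
	{
		struct shrink_control *sc = arg;

		/*
		 * memory.low is deliberately breached when
		 * sc->memcg_low_reclaim is true, so the object is only
		 * rotated back to honor the protection when it is false.
		 * (The real check also weighs the memcg's effective
		 * protection, as the next patch does.)
		 */
		if (sc->memcg && !sc->memcg_low_reclaim)
			return LRU_ROTATE;

		list_lru_isolate(lru, item);
		return LRU_REMOVED;
	}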
Cc: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/linux/shrinker.h |  3 +++
 mm/vmscan.c              | 27 ++++++++++++++++-----------
 2 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 0f80123650e2..dc42ae57e8dc 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -31,6 +31,9 @@ struct shrink_control {
 
 	/* current memcg being shrunk (for memcg aware shrinkers) */
 	struct mem_cgroup *memcg;
+
+	/* derived from struct scan_control */
+	bool memcg_low_reclaim;
 };
 
 #define SHRINK_STOP (~0UL)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 876370565455..385750840979 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -625,10 +625,9 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 
 /**
  * shrink_slab - shrink slab caches
- * @gfp_mask: allocation context
- * @nid: node whose slab caches to target
  * @memcg: memory cgroup whose slab caches to target
- * @priority: the reclaim priority
+ * @sc: scan_control struct for this reclaim session
+ * @nid: node whose slab caches to target
  *
  * Call the shrink functions to age shrinkable caches.
  *
@@ -638,15 +637,18 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
  * @memcg specifies the memory cgroup to target. Unaware shrinkers
  * are called only if it is the root cgroup.
  *
- * @priority is sc->priority, we take the number of objects and >> by priority
- * in order to get the scan target.
+ * @sc is the scan_control struct, we take the number of objects
+ * and >> by sc->priority in order to get the scan target.
  *
  * Returns the number of reclaimed slab objects.
  */
-static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
-				 struct mem_cgroup *memcg,
-				 int priority)
+static unsigned long shrink_slab(struct mem_cgroup *memcg,
+				 struct scan_control *sc,
+				 int nid)
 {
+	bool memcg_low_reclaim = sc->memcg_low_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
+	int priority = sc->priority;
 	unsigned long ret, freed = 0;
 	struct shrinker *shrinker;
 
@@ -668,6 +670,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 			.gfp_mask = gfp_mask,
 			.nid = nid,
 			.memcg = memcg,
+			.memcg_low_reclaim = memcg_low_reclaim,
 		};
 
 		ret = do_shrink_slab(&sc, shrinker, priority);
@@ -694,6 +697,9 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 void drop_slab_node(int nid)
 {
 	unsigned long freed;
+	struct scan_control sc = {
+		.gfp_mask = GFP_KERNEL,
+	};
 
 	do {
 		struct mem_cgroup *memcg = NULL;
@@ -701,7 +707,7 @@ void drop_slab_node(int nid)
 		freed = 0;
 		memcg = mem_cgroup_iter(NULL, NULL, NULL);
 		do {
-			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
+			freed += shrink_slab(memcg, &sc, nid);
 		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
 	} while (freed > 10);
 }
@@ -2673,8 +2679,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 
 		shrink_lruvec(lruvec, sc);
 
-		shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
-			    sc->priority);
+		shrink_slab(memcg, sc, pgdat->node_id);
 
 		/* Record the group's reclaim efficiency */
 		vmpressure(sc->gfp_mask, memcg, false,

From patchwork Sun Mar 15 09:53:42 2020
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 11438747
From: Yafang Shao <laoar.shao@gmail.com>
To: dchinner@redhat.com, hannes@cmpxchg.org, mhocko@kernel.org,
    vdavydov.dev@gmail.com, guro@fb.com, akpm@linux-foundation.org,
    viro@zeniv.linux.org.uk
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    Yafang Shao <laoar.shao@gmail.com>
Subject: [PATCH v5 3/3] inode: protect page cache from freeing inode
Date: Sun, 15 Mar 2020 05:53:42 -0400
Message-Id: <20200315095342.10178-4-laoar.shao@gmail.com>
In-Reply-To: <20200315095342.10178-1-laoar.shao@gmail.com>
References: <20200315095342.10178-1-laoar.shao@gmail.com>

On my server there are some running memcgs protected by
memory.{min, low}, but I found the usage of these memcgs abruptly
became very small, far less than the protection limit. It confused me,
and finally I found the cause was inode stealing. Once an inode is
freed, all the page cache pages it hosts are dropped as well, no
matter how many there are. So if we intend to protect the page cache
in a memcg, we must protect its host (the inode) first. Otherwise the
memcg protection can be easily bypassed by freeing the inode,
especially if there are big files in this memcg.

Suppose we have a memcg with the following state:

	memory.current = 1024M
	memory.min     = 512M

and in this memcg there is an inode with 800M of page cache. Once this
memcg is scanned by kswapd or another regular reclaimer:

	kswapd				<<<< can be any of the regular reclaimers
	  shrink_node_memcgs
	    switch (mem_cgroup_protected())	<<<< not protected
	    case MEMCG_PROT_NONE:		<<<< will scan this memcg
		break;
	    shrink_lruvec()		<<<< reclaims the page cache
	    shrink_slab()		<<<< may free this inode and drop
					     all its page cache (800M)

So we must protect the inode first if we want to protect the page
cache. Note that this inode may be a cold inode (at the tail of the
list lru), because memcg protection covers all slab objects and page
cache pages, whether they are cold or hot. IOW, this is a
memcg-protection-specific issue.

The inherent mismatch between memcg and inode is troublesome. One
inode can be shared by different memcgs, but that is a very rare case.
If an inode is shared, its page cache pages may be charged to
different memcgs. Currently there is no perfect solution for this kind
of issue, but the inode majority-writer ownership switching can help
more or less.
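For reference, the semantics of mem_cgroup_protection(), which the
isolation logic below relies on, are roughly the following. This is a
paraphrased sketch only; the exact in-tree implementation differs
across kernel versions:

	static unsigned long mem_cgroup_protection(struct mem_cgroup *memcg,
						   bool in_low_reclaim)
	{
		if (mem_cgroup_disabled())
			return 0;

		/*
		 * Under memcg low reclaim the "low" boundary is being
		 * deliberately breached, so only "min" still protects.
		 */
		if (in_low_reclaim)
			return memcg->memory.emin;

		return max(memcg->memory.emin, memcg->memory.elow);
	}

So with memory.min = 512M in the example above, the inode is rotated
rather than reclaimed as long as the memcg's effective protection is
non-zero and the inode still hosts page cache pages.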
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 fs/inode.c | 76 +++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 73 insertions(+), 3 deletions(-)

diff --git a/fs/inode.c b/fs/inode.c
index 7d57068b6b7a..6373cd09a06d 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -55,6 +55,12 @@
  *   inode_hash_lock
  */
 
+struct inode_isolate_control {
+	struct list_head *freeable;
+	struct mem_cgroup *memcg;	/* derived from shrink_control */
+	bool memcg_low_reclaim;		/* derived from scan_control */
+};
+
 static unsigned int i_hash_mask __read_mostly;
 static unsigned int i_hash_shift __read_mostly;
 static struct hlist_head *inode_hashtable __read_mostly;
@@ -714,6 +720,59 @@ int invalidate_inodes(struct super_block *sb, bool kill_dirty)
 	return busy;
 }
 
+#ifdef CONFIG_MEMCG_KMEM
+/*
+ * Once an inode is freed, all the page cache pages it hosts are dropped
+ * as well, even if there are lots of them. So if we intend to protect
+ * page cache pages in a memcg, we must protect their host (the inode)
+ * first. Otherwise the memcg protection can be easily bypassed by
+ * freeing the inode, especially if there are big files in this memcg.
+ * Note that it may happen that the page cache pages are already charged
+ * to the memcg, but the inode hasn't been added to this memcg yet. In
+ * this case, this inode is not protected.
+ * The inherent mismatch between memcg and inode is troublesome. One
+ * inode can be shared by different memcgs, but it is a very rare case.
+ * If an inode is shared, its page cache pages may be charged to
+ * different memcgs. Currently there is no perfect solution for this
+ * kind of issue, but the inode majority-writer ownership switching can
+ * help more or less.
+ */
+static bool memcg_can_reclaim_inode(struct inode *inode,
+				    struct inode_isolate_control *iic)
+{
+	unsigned long protection;
+	struct mem_cgroup *memcg;
+	bool reclaimable = true;
+
+	if (!inode->i_data.nrpages)
+		goto out;
+
+	/* Excludes freeing inode via drop_caches */
+	if (!current->reclaim_state)
+		goto out;
+
+	memcg = iic->memcg;
+	if (!memcg || memcg == root_mem_cgroup)
+		goto out;
+
+	protection = mem_cgroup_protection(memcg, iic->memcg_low_reclaim);
+	if (!protection)
+		goto out;
+
+	if (inode->i_data.nrpages)
+		reclaimable = false;
+
+out:
+	return reclaimable;
+}
+#else /* CONFIG_MEMCG_KMEM */
+static bool memcg_can_reclaim_inode(struct inode *inode,
+				    struct inode_isolate_control *iic)
+{
+	return true;
+}
+#endif /* CONFIG_MEMCG_KMEM */
+
 /*
  * Isolate the inode from the LRU in preparation for freeing it.
  *
@@ -732,8 +791,9 @@ int invalidate_inodes(struct super_block *sb, bool kill_dirty)
 static enum lru_status inode_lru_isolate(struct list_head *item,
 		struct list_lru_one *lru, spinlock_t *lru_lock, void *arg)
 {
-	struct list_head *freeable = arg;
-	struct inode	*inode = container_of(item, struct inode, i_lru);
+	struct inode_isolate_control *iic = arg;
+	struct list_head *freeable = iic->freeable;
+	struct inode *inode = container_of(item, struct inode, i_lru);
 
 	/*
 	 * we are inverting the lru lock/inode->i_lock here, so use a trylock.
@@ -742,6 +802,11 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
 	if (!spin_trylock(&inode->i_lock))
 		return LRU_SKIP;
 
+	if (!memcg_can_reclaim_inode(inode, iic)) {
+		spin_unlock(&inode->i_lock);
+		return LRU_ROTATE;
+	}
+
 	/*
 	 * Referenced or dirty inodes are still in use. Give them another pass
 	 * through the LRU as we canot reclaim them now.
@@ -799,9 +864,14 @@ long prune_icache_sb(struct super_block *sb, struct shrink_control *sc)
 {
 	LIST_HEAD(freeable);
 	long freed;
+	struct inode_isolate_control iic = {
+		.freeable = &freeable,
+		.memcg = sc->memcg,
+		.memcg_low_reclaim = sc->memcg_low_reclaim,
+	};
 
 	freed = list_lru_shrink_walk(&sb->s_inode_lru, sc,
-				     inode_lru_isolate, &freeable);
+				     inode_lru_isolate, &iic);
 	dispose_list(&freeable);
 	return freed;
 }