From patchwork Sun Mar 15 09:53:41 2020
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 11438743
From: Yafang Shao
To: dchinner@redhat.com, hannes@cmpxchg.org, mhocko@kernel.org,
	vdavydov.dev@gmail.com, guro@fb.com, akpm@linux-foundation.org,
	viro@zeniv.linux.org.uk
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Yafang Shao
Subject: [PATCH v5 2/3] mm, shrinker: make memcg low reclaim visible to lru walker isolation function
Date: Sun, 15 Mar 2020 05:53:41 -0400
Message-Id: <20200315095342.10178-3-laoar.shao@gmail.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20200315095342.10178-1-laoar.shao@gmail.com>
References: <20200315095342.10178-1-laoar.shao@gmail.com>

A new member, memcg_low_reclaim, is introduced into struct
shrink_control. It is derived from struct scan_control and tells the
shrinker whether the current reclaim session is under memcg low
reclaim. The follow-up patch will use this new member.

Cc: Dave Chinner
Signed-off-by: Yafang Shao
---
 include/linux/shrinker.h |  3 +++
 mm/vmscan.c              | 27 ++++++++++++++++-----------
 2 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 0f80123650e2..dc42ae57e8dc 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -31,6 +31,9 @@ struct shrink_control {
 
 	/* current memcg being shrunk (for memcg aware shrinkers) */
 	struct mem_cgroup *memcg;
+
+	/* derived from struct scan_control */
+	bool memcg_low_reclaim;
 };
 
 #define SHRINK_STOP (~0UL)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 876370565455..385750840979 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -625,10 +625,9 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 
 /**
  * shrink_slab - shrink slab caches
- * @gfp_mask: allocation context
- * @nid: node whose slab caches to target
  * @memcg: memory cgroup whose slab caches to target
- * @priority: the reclaim priority
+ * @sc: scan_control struct for this reclaim session
+ * @nid: node whose slab caches to target
  *
  * Call the shrink functions to age shrinkable caches.
  *
@@ -638,15 +637,18 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
  * @memcg specifies the memory cgroup to target. Unaware shrinkers
  * are called only if it is the root cgroup.
  *
- * @priority is sc->priority, we take the number of objects and >> by priority
- * in order to get the scan target.
+ * @sc is the scan_control struct, we take the number of objects
+ * and >> by sc->priority in order to get the scan target.
  *
  * Returns the number of reclaimed slab objects.
  */
-static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
-				 struct mem_cgroup *memcg,
-				 int priority)
+static unsigned long shrink_slab(struct mem_cgroup *memcg,
+				 struct scan_control *sc,
+				 int nid)
 {
+	bool memcg_low_reclaim = sc->memcg_low_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
+	int priority = sc->priority;
 	unsigned long ret, freed = 0;
 	struct shrinker *shrinker;
 
@@ -668,6 +670,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 			.gfp_mask = gfp_mask,
 			.nid = nid,
 			.memcg = memcg,
+			.memcg_low_reclaim = memcg_low_reclaim,
 		};
 
 		ret = do_shrink_slab(&sc, shrinker, priority);
@@ -694,6 +697,9 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 void drop_slab_node(int nid)
 {
 	unsigned long freed;
+	struct scan_control sc = {
+		.gfp_mask = GFP_KERNEL,
+	};
 
 	do {
 		struct mem_cgroup *memcg = NULL;
@@ -701,7 +707,7 @@ void drop_slab_node(int nid)
 		freed = 0;
 		memcg = mem_cgroup_iter(NULL, NULL, NULL);
 		do {
-			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
+			freed += shrink_slab(memcg, &sc, nid);
 		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
 	} while (freed > 10);
 }
@@ -2673,8 +2679,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 
 		shrink_lruvec(lruvec, sc);
 
-		shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
-			    sc->priority);
+		shrink_slab(memcg, sc, pgdat->node_id);
 
 		/* Record the group's reclaim efficiency */
 		vmpressure(sc->gfp_mask, memcg, false,
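
For illustration only (not part of this patch, and not the actual
follow-up patch in this series): a memcg-aware shrinker could consume
the new flag roughly as sketched below. The helpers
my_memcg_protected() and my_scan_objects() are made up for the
example; only struct shrink_control, SHRINK_STOP and the new
memcg_low_reclaim member come from the real headers.

#include <linux/shrinker.h>	/* struct shrink_control, SHRINK_STOP */

/* Hypothetical helpers, for illustration only. */
static bool my_memcg_protected(struct mem_cgroup *memcg);
static unsigned long my_scan_objects(int nid, unsigned long nr_to_scan);

static unsigned long my_shrinker_scan(struct shrinker *shrink,
				      struct shrink_control *sc)
{
	/*
	 * Back off while the target memcg is still within its
	 * memory.low protection and this is not a memcg low reclaim
	 * pass; otherwise scan as usual.
	 */
	if (sc->memcg && !sc->memcg_low_reclaim &&
	    my_memcg_protected(sc->memcg))
		return SHRINK_STOP;

	return my_scan_objects(sc->nid, sc->nr_to_scan);
}

Per the subject line, the intended consumer in this series is the lru
walker isolation function; the sketch above only shows the general
pattern of reading the flag from struct shrink_control.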