From patchwork Tue Jul 28 07:40:32 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 11688567
From: Yafang Shao <laoar.shao@gmail.com>
To: mhocko@kernel.org, hannes@cmpxchg.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, Yafang Shao <laoar.shao@gmail.com>
Subject: [PATCH] mm, memcg: do full scan initially in force_empty
Date: Tue, 28 Jul 2020 03:40:32 -0400
Message-Id: <20200728074032.1555-1-laoar.shao@gmail.com>
X-Mailer: git-send-email 2.18.1

Sometimes we use memory.force_empty to drop pages in a memcg to work
around memory pressure issues. When we use force_empty, we want the
pages to be reclaimed ASAP. However, force_empty reclaims pages like a
regular reclaimer: it starts scanning the page cache LRUs at
DEF_PRIORITY and only drops the priority towards 0, where a full scan
is done, when earlier passes reclaim too little. That is a waste of
time, so do the full scan right away in force_empty.
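To make the cost difference concrete, here is a small stand-alone C
sketch (not kernel code; scan_target() and the 1M-page LRU size are
made up for illustration). It mirrors the "scan = lru_size >>
sc->priority" step that vmscan uses to size each reclaim pass: at
DEF_PRIORITY (12) a single pass only looks at 1/4096 of an LRU, while
priority 0 covers the whole list in one pass.

#include <stdio.h>

#define DEF_PRIORITY	12

/* Simplified model of how one reclaim pass is sized from the priority. */
static unsigned long scan_target(unsigned long lru_size, int priority)
{
	return lru_size >> priority;
}

int main(void)
{
	unsigned long lru_size = 1UL << 20;	/* hypothetical 1M-page LRU */

	printf("scan at DEF_PRIORITY: %lu pages\n",
	       scan_target(lru_size, DEF_PRIORITY));
	printf("scan at priority 0:   %lu pages\n",
	       scan_target(lru_size, 0));
	return 0;
}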
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/linux/swap.h |  3 ++-
 mm/memcontrol.c      | 16 ++++++++++------
 mm/vmscan.c          |  5 +++--
 3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 5b3216ba39a9..d88430f1b964 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -364,7 +364,8 @@ extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 						  unsigned long nr_pages,
 						  gfp_t gfp_mask,
-						  bool may_swap);
+						  bool may_swap,
+						  int priority);
 extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem,
 						gfp_t gfp_mask, bool noswap,
 						pg_data_t *pgdat,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 13f559af1ab6..c873a98f8c7e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2237,7 +2237,8 @@ static void reclaim_high(struct mem_cgroup *memcg,
 		    READ_ONCE(memcg->memory.high))
 			continue;
 		memcg_memory_event(memcg, MEMCG_HIGH);
-		try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, true);
+		try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, true,
+					     DEF_PRIORITY);
 	} while ((memcg = parent_mem_cgroup(memcg)) &&
 		 !mem_cgroup_is_root(memcg));
 }
@@ -2515,7 +2516,8 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	memcg_memory_event(mem_over_limit, MEMCG_MAX);
 
 	nr_reclaimed = try_to_free_mem_cgroup_pages(mem_over_limit, nr_pages,
-						    gfp_mask, may_swap);
+						    gfp_mask, may_swap,
+						    DEF_PRIORITY);
 
 	if (mem_cgroup_margin(mem_over_limit) >= nr_pages)
 		goto retry;
@@ -3089,7 +3091,8 @@ static int mem_cgroup_resize_max(struct mem_cgroup *memcg,
 		}
 
 		if (!try_to_free_mem_cgroup_pages(memcg, 1,
-					GFP_KERNEL, !memsw)) {
+					GFP_KERNEL, !memsw,
+					DEF_PRIORITY)) {
 			ret = -EBUSY;
 			break;
 		}
@@ -3222,7 +3225,8 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
 			return -EINTR;
 
 		progress = try_to_free_mem_cgroup_pages(memcg, 1,
-							GFP_KERNEL, true);
+							GFP_KERNEL, true,
+							0);
 		if (!progress) {
 			nr_retries--;
 			/* maybe some writeback is necessary */
@@ -6065,7 +6069,7 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
 		}
 
 		reclaimed = try_to_free_mem_cgroup_pages(memcg, nr_pages - high,
-							 GFP_KERNEL, true);
+							 GFP_KERNEL, true, DEF_PRIORITY);
 
 		if (!reclaimed && !nr_retries--)
 			break;
@@ -6113,7 +6117,7 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
 
 		if (nr_reclaims) {
 			if (!try_to_free_mem_cgroup_pages(memcg, nr_pages - max,
-							  GFP_KERNEL, true))
+							  GFP_KERNEL, true, DEF_PRIORITY))
 				nr_reclaims--;
 			continue;
 		}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 749d239c62b2..49298bb2892d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3315,7 +3315,8 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
 unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 					   unsigned long nr_pages,
 					   gfp_t gfp_mask,
-					   bool may_swap)
+					   bool may_swap,
+					   int priority)
 {
 	unsigned long nr_reclaimed;
 	unsigned long pflags;
@@ -3326,7 +3327,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 				(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),
 		.reclaim_idx = MAX_NR_ZONES - 1,
 		.target_mem_cgroup = memcg,
-		.priority = DEF_PRIORITY,
+		.priority = priority,
 		.may_writepage = !laptop_mode,
 		.may_unmap = 1,
 		.may_swap = may_swap,
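Note on the design choice: as I read do_try_to_free_pages(), sc->priority
is only stepped down towards 0 when a pass fails to reclaim enough, so
passing 0 here makes force_empty start at what would otherwise be the
last, full-scan iteration. All other callers pass DEF_PRIORITY explicitly
and keep their current behaviour.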