From patchwork Thu Dec 22 04:19:05 2022
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13079367
Date: Wed, 21 Dec 2022 21:19:05 -0700
In-Reply-To: <20221222041905.2431096-1-yuzhao@google.com>
Message-Id: <20221222041905.2431096-8-yuzhao@google.com>
References: <20221222041905.2431096-1-yuzhao@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
Subject: [PATCH mm-unstable v3 7/8] mm: multi-gen LRU: clarify scan_control flags
From: Yu Zhao
To: Andrew Morton
Cc: Johannes Weiner, Jonathan Corbet, Michael Larabel, Michal Hocko,
    Mike Rapoport, Roman Gushchin, Suren Baghdasaryan, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-mm@google.com, Yu Zhao
Among the flags in scan_control:
1. sc->may_swap, which indicates swap constraint due to memsw.max, is
   supported as usual.
2. sc->proactive, which indicates reclaim by memory.reclaim, may not
   opportunistically skip the aging path, since it is considered less
   latency sensitive.
3. !(sc->gfp_mask & __GFP_IO), which indicates IO constraint, lowers
   swappiness to prioritize file LRU, since clean file folios are more
   likely to exist.
4. sc->may_writepage and sc->may_unmap, which indicate opportunistic
   reclaim, are rejected, since unmapped clean folios are already
   prioritized. Scanning for more of them is likely futile and can
   cause high reclaim latency when there is a large number of memcgs.

The rest are handled by the existing code.

Signed-off-by: Yu Zhao
---
 mm/vmscan.c | 56 ++++++++++++++++++++++++++++----------------------------
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index f22c8876473e..a9b318e1bdc2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3210,6 +3210,9 @@ static int get_swappiness(struct lruvec *lruvec, struct scan_control *sc)
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 
+	if (!sc->may_swap)
+		return 0;
+
 	if (!can_demote(pgdat->node_id, sc) &&
 	    mem_cgroup_get_nr_swap_pages(memcg) < MIN_LRU_BATCH)
 		return 0;
@@ -4236,7 +4239,7 @@ static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_
 	} while (err == -EAGAIN);
 }
 
-static struct lru_gen_mm_walk *set_mm_walk(struct pglist_data *pgdat)
+static struct lru_gen_mm_walk *set_mm_walk(struct pglist_data *pgdat, bool force_alloc)
 {
 	struct lru_gen_mm_walk *walk = current->reclaim_state->mm_walk;
 
@@ -4244,7 +4247,7 @@ static struct lru_gen_mm_walk *set_mm_walk(struct pglist_data *pgdat)
 		VM_WARN_ON_ONCE(walk);
 
 		walk = &pgdat->mm_walk;
-	} else if (!pgdat && !walk) {
+	} else if (!walk && force_alloc) {
 		VM_WARN_ON_ONCE(current_is_kswapd());
 
 		walk = kzalloc(sizeof(*walk), __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN);
@@ -4430,7 +4433,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 		goto done;
 	}
 
-	walk = set_mm_walk(NULL);
+	walk = set_mm_walk(NULL, true);
 	if (!walk) {
 		success = iterate_mm_list_nowalk(lruvec, max_seq);
 		goto done;
@@ -4499,8 +4502,6 @@ static bool lruvec_is_reclaimable(struct lruvec *lruvec, struct scan_control *sc
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	DEFINE_MIN_SEQ(lruvec);
 
-	VM_WARN_ON_ONCE(sc->memcg_low_reclaim);
-
 	/* see the comment on lru_gen_folio */
 	gen = lru_gen_from_seq(min_seq[LRU_GEN_FILE]);
 	birth = READ_ONCE(lruvec->lrugen.timestamps[gen]);
@@ -4756,12 +4757,8 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca
 {
 	bool success;
 
-	/* unmapping inhibited */
-	if (!sc->may_unmap && folio_mapped(folio))
-		return false;
-
 	/* swapping inhibited */
-	if (!(sc->may_writepage && (sc->gfp_mask & __GFP_IO)) &&
+	if (!(sc->gfp_mask & __GFP_IO) &&
 	    (folio_test_dirty(folio) ||
 	     (folio_test_anon(folio) && !folio_test_swapcache(folio))))
 		return false;
 
@@ -4858,9 +4855,8 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 		__count_vm_events(PGSCAN_ANON + type, isolated);
 
 	/*
-	 * There might not be eligible pages due to reclaim_idx, may_unmap and
-	 * may_writepage. Check the remaining to prevent livelock if it's not
-	 * making progress.
+	 * There might not be eligible folios due to reclaim_idx. Check the
+	 * remaining to prevent livelock if it's not making progress.
 	 */
 	return isolated || !remaining ? scanned : 0;
 }
@@ -5120,9 +5116,7 @@ static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, bool
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	DEFINE_MAX_SEQ(lruvec);
 
-	if (mem_cgroup_below_min(sc->target_mem_cgroup, memcg) ||
-	    (mem_cgroup_below_low(sc->target_mem_cgroup, memcg) &&
-	     !sc->memcg_low_reclaim))
+	if (mem_cgroup_below_min(sc->target_mem_cgroup, memcg))
 		return 0;
 
 	if (!should_run_aging(lruvec, max_seq, sc, can_swap, &nr_to_scan))
@@ -5150,17 +5144,14 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	long nr_to_scan;
 	unsigned long scanned = 0;
 	unsigned long nr_to_reclaim = get_nr_to_reclaim(sc);
+	int swappiness = get_swappiness(lruvec, sc);
+
+	/* clean file folios are more likely to exist */
+	if (swappiness && !(sc->gfp_mask & __GFP_IO))
+		swappiness = 1;
 
 	while (true) {
 		int delta;
-		int swappiness;
-
-		if (sc->may_swap)
-			swappiness = get_swappiness(lruvec, sc);
-		else if (!cgroup_reclaim(sc) && get_swappiness(lruvec, sc))
-			swappiness = 1;
-		else
-			swappiness = 0;
 
 		nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness);
 		if (nr_to_scan <= 0)
@@ -5291,12 +5282,13 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc
 	struct blk_plug plug;
 
 	VM_WARN_ON_ONCE(global_reclaim(sc));
+	VM_WARN_ON_ONCE(!sc->may_writepage || !sc->may_unmap);
 
 	lru_add_drain();
 
 	blk_start_plug(&plug);
 
-	set_mm_walk(lruvec_pgdat(lruvec));
+	set_mm_walk(NULL, sc->proactive);
 
 	if (try_to_shrink_lruvec(lruvec, sc))
 		lru_gen_rotate_memcg(lruvec, MEMCG_LRU_YOUNG);
@@ -5352,11 +5344,19 @@ static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *
 
 	VM_WARN_ON_ONCE(!global_reclaim(sc));
 
+	/*
+	 * Unmapped clean folios are already prioritized. Scanning for more of
+	 * them is likely futile and can cause high reclaim latency when there
+	 * is a large number of memcgs.
+	 */
+	if (!sc->may_writepage || !sc->may_unmap)
+		goto done;
+
 	lru_add_drain();
 
 	blk_start_plug(&plug);
 
-	set_mm_walk(pgdat);
+	set_mm_walk(pgdat, sc->proactive);
 
 	set_initial_priority(pgdat, sc);
 
@@ -5374,7 +5374,7 @@ static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *
 	clear_mm_walk();
 
 	blk_finish_plug(&plug);
-
+done:
 	/* kswapd should never fail */
 	pgdat->kswapd_failures = 0;
 }
@@ -5943,7 +5943,7 @@ static ssize_t lru_gen_seq_write(struct file *file, const char __user *src,
 	set_task_reclaim_state(current, &sc.reclaim_state);
 	flags = memalloc_noreclaim_save();
 	blk_start_plug(&plug);
-	if (!set_mm_walk(NULL)) {
+	if (!set_mm_walk(NULL, true)) {
 		err = -ENOMEM;
 		goto done;
 	}
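
To summarize the flag policy above in one place, below is a minimal
user-space sketch of the resulting swappiness decision. It is
illustrative only: struct scan_control is reduced to the fields this
patch touches, GFP_IO stands in for __GFP_IO, and effective_swappiness()
is a hypothetical helper that mirrors the combined behavior of
get_swappiness() and try_to_shrink_lruvec() after this change.

#include <stdbool.h>
#include <stdio.h>

#define GFP_IO (1 << 0)	/* stand-in for the kernel's __GFP_IO bit */

/* Reduced model of struct scan_control; not kernel code. */
struct scan_control_model {
	bool may_swap;		/* swap constraint due to memsw.max */
	unsigned int gfp_mask;	/* !(gfp_mask & GFP_IO) means IO constrained */
	int memcg_swappiness;	/* what get_swappiness() would otherwise return */
};

/* Hypothetical helper modeling the swappiness policy after this patch. */
static int effective_swappiness(const struct scan_control_model *sc)
{
	/* !sc->may_swap now makes get_swappiness() return 0 outright */
	int swappiness = sc->may_swap ? sc->memcg_swappiness : 0;

	/* IO constrained: clean file folios are more likely to exist */
	if (swappiness && !(sc->gfp_mask & GFP_IO))
		swappiness = 1;

	return swappiness;
}

int main(void)
{
	struct scan_control_model no_swap = { false, GFP_IO, 60 };
	struct scan_control_model no_io = { true, 0, 60 };
	struct scan_control_model normal = { true, GFP_IO, 60 };

	printf("%d %d %d\n", effective_swappiness(&no_swap),
	       effective_swappiness(&no_io),
	       effective_swappiness(&normal));	/* prints: 0 1 60 */
	return 0;
}

Note that the patch computes swappiness once before the loop in
try_to_shrink_lruvec() rather than re-deriving it on every iteration,
which is what lets the cgroup_reclaim()-based special casing be dropped.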