From patchwork Thu Dec  1 22:39:17 2022
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13061872
Date: Thu, 1 Dec 2022 15:39:17 -0700
In-Reply-To: <20221201223923.873696-1-yuzhao@google.com>
Message-Id: <20221201223923.873696-2-yuzhao@google.com>
References: <20221201223923.873696-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v1 1/8] mm: multi-gen LRU: rename lru_gen_struct to lru_gen_folio
From: Yu Zhao
To: Andrew Morton
Cc: Johannes Weiner, Jonathan Corbet, Michael Larabel, Michal Hocko,
    Mike Rapoport, Roman Gushchin, Suren Baghdasaryan, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-mm@google.com, Yu Zhao

The new name lru_gen_folio will be more distinct from the coming
lru_gen_memcg.
Signed-off-by: Yu Zhao
---
 include/linux/mm_inline.h |  4 ++--
 include/linux/mmzone.h    |  6 +++---
 mm/vmscan.c               | 34 +++++++++++++++++-----------------
 mm/workingset.c           |  4 ++--
 4 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index e8ed225d8f7c..f63968bd7de5 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -178,7 +178,7 @@ static inline void lru_gen_update_size(struct lruvec *lruvec, struct folio *folio,
 	int zone = folio_zonenum(folio);
 	int delta = folio_nr_pages(folio);
 	enum lru_list lru = type * LRU_INACTIVE_FILE;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	VM_WARN_ON_ONCE(old_gen != -1 && old_gen >= MAX_NR_GENS);
 	VM_WARN_ON_ONCE(new_gen != -1 && new_gen >= MAX_NR_GENS);
@@ -224,7 +224,7 @@ static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio,
 	int gen = folio_lru_gen(folio);
 	int type = folio_is_file_lru(folio);
 	int zone = folio_zonenum(folio);
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	VM_WARN_ON_ONCE_FOLIO(gen != -1, folio);
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5f74891556f3..bd3e4689f72d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -404,7 +404,7 @@ enum {
  * The number of pages in each generation is eventually consistent and therefore
  * can be transiently negative when reset_batch_size() is pending.
  */
-struct lru_gen_struct {
+struct lru_gen_folio {
 	/* the aging increments the youngest generation number */
 	unsigned long max_seq;
 	/* the eviction increments the oldest generation numbers */
@@ -461,7 +461,7 @@ struct lru_gen_mm_state {
 struct lru_gen_mm_walk {
 	/* the lruvec under reclaim */
 	struct lruvec *lruvec;
-	/* unstable max_seq from lru_gen_struct */
+	/* unstable max_seq from lru_gen_folio */
 	unsigned long max_seq;
 	/* the next address within an mm to scan */
 	unsigned long next_addr;
@@ -524,7 +524,7 @@ struct lruvec {
 	unsigned long			flags;
 #ifdef CONFIG_LRU_GEN
 	/* evictable pages divided into generations */
-	struct lru_gen_struct		lrugen;
+	struct lru_gen_folio		lrugen;
 	/* to concurrently iterate lru_gen_mm_list */
 	struct lru_gen_mm_state		mm_state;
 #endif
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9356a3ee639c..fcb4ac351f93 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3197,7 +3197,7 @@ static int get_nr_gens(struct lruvec *lruvec, int type)
 
 static bool __maybe_unused seq_is_valid(struct lruvec *lruvec)
 {
-	/* see the comment on lru_gen_struct */
+	/* see the comment on lru_gen_folio */
 	return get_nr_gens(lruvec, LRU_GEN_FILE) >= MIN_NR_GENS &&
 	       get_nr_gens(lruvec, LRU_GEN_FILE) <= get_nr_gens(lruvec, LRU_GEN_ANON) &&
 	       get_nr_gens(lruvec, LRU_GEN_ANON) <= MAX_NR_GENS;
@@ -3594,7 +3594,7 @@ struct ctrl_pos {
 static void read_ctrl_pos(struct lruvec *lruvec, int type, int tier, int gain,
			  struct ctrl_pos *pos)
 {
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int hist = lru_hist_from_seq(lrugen->min_seq[type]);
 
 	pos->refaulted = lrugen->avg_refaulted[type][tier] +
@@ -3609,7 +3609,7 @@ static void read_ctrl_pos(struct lruvec *lruvec, int type, int tier, int gain,
 static void reset_ctrl_pos(struct lruvec *lruvec, int type, bool carryover)
 {
 	int hist, tier;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	bool clear = carryover ? NR_HIST_GENS == 1 : NR_HIST_GENS > 1;
 	unsigned long seq = carryover ? lrugen->min_seq[type] : lrugen->max_seq + 1;
@@ -3686,7 +3686,7 @@ static int folio_update_gen(struct folio *folio, int gen)
 static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
 {
 	int type = folio_is_file_lru(folio);
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
 	unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
@@ -3731,7 +3731,7 @@ static void update_batch_size(struct lru_gen_mm_walk *walk, struct folio *folio,
 static void reset_batch_size(struct lruvec *lruvec, struct lru_gen_mm_walk *walk)
 {
 	int gen, type, zone;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	walk->batched = 0;
@@ -4248,7 +4248,7 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 {
 	int zone;
 	int remaining = MAX_LRU_BATCH;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
 
 	if (type == LRU_GEN_ANON && !can_swap)
@@ -4284,7 +4284,7 @@ static bool try_to_inc_min_seq(struct lruvec *lruvec, bool can_swap)
 {
 	int gen, type, zone;
 	bool success = false;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	DEFINE_MIN_SEQ(lruvec);
 
 	VM_WARN_ON_ONCE(!seq_is_valid(lruvec));
@@ -4305,7 +4305,7 @@ static bool try_to_inc_min_seq(struct lruvec *lruvec, bool can_swap)
 		;
 	}
 
-	/* see the comment on lru_gen_struct */
+	/* see the comment on lru_gen_folio */
 	if (can_swap) {
 		min_seq[LRU_GEN_ANON] = min(min_seq[LRU_GEN_ANON], min_seq[LRU_GEN_FILE]);
 		min_seq[LRU_GEN_FILE] = max(min_seq[LRU_GEN_ANON], lrugen->min_seq[LRU_GEN_FILE]);
@@ -4327,7 +4327,7 @@ static void inc_max_seq(struct lruvec *lruvec, bool can_swap, bool force_scan)
 {
 	int prev, next;
 	int type, zone;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	spin_lock_irq(&lruvec->lru_lock);
@@ -4385,7 +4385,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 	bool success;
 	struct lru_gen_mm_walk *walk;
 	struct mm_struct *mm = NULL;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	VM_WARN_ON_ONCE(max_seq > READ_ONCE(lrugen->max_seq));
@@ -4450,7 +4450,7 @@ static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq, unsigned long *min_seq,
 	unsigned long old = 0;
 	unsigned long young = 0;
 	unsigned long total = 0;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 
 	for (type = !can_swap; type < ANON_AND_FILE; type++) {
@@ -4735,7 +4735,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
 	int delta = folio_nr_pages(folio);
 	int refs = folio_lru_refs(folio);
 	int tier = lru_tier_from_refs(refs);
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	VM_WARN_ON_ONCE_FOLIO(gen >= MAX_NR_GENS, folio);
@@ -4835,7 +4835,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 	int scanned = 0;
 	int isolated = 0;
 	int remaining = MAX_LRU_BATCH;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 
 	VM_WARN_ON_ONCE(!list_empty(list));
@@ -5235,7 +5235,7 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 
 static bool __maybe_unused state_is_valid(struct lruvec *lruvec)
 {
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	if (lrugen->enabled) {
 		enum lru_list lru;
@@ -5514,7 +5514,7 @@ static void lru_gen_seq_show_full(struct seq_file *m, struct lruvec *lruvec,
 	int i;
 	int type, tier;
 	int hist = lru_hist_from_seq(seq);
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	for (tier = 0; tier < MAX_NR_TIERS; tier++) {
 		seq_printf(m, " %10d", tier);
@@ -5564,7 +5564,7 @@ static int lru_gen_seq_show(struct seq_file *m, void *v)
 	unsigned long seq;
 	bool full = !debugfs_real_fops(m->file)->write;
 	struct lruvec *lruvec = v;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int nid = lruvec_pgdat(lruvec)->node_id;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	DEFINE_MAX_SEQ(lruvec);
@@ -5818,7 +5818,7 @@ void lru_gen_init_lruvec(struct lruvec *lruvec)
 {
 	int i;
 	int gen, type, zone;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	lrugen->max_seq = MIN_NR_GENS + 1;
 	lrugen->enabled = lru_gen_enabled();
diff --git a/mm/workingset.c b/mm/workingset.c
index 1a86645b7b3c..fd666584515c 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -223,7 +223,7 @@ static void *lru_gen_eviction(struct folio *folio)
 	unsigned long token;
 	unsigned long min_seq;
 	struct lruvec *lruvec;
-	struct lru_gen_struct *lrugen;
+	struct lru_gen_folio *lrugen;
 	int type = folio_is_file_lru(folio);
 	int delta = folio_nr_pages(folio);
 	int refs = folio_lru_refs(folio);
@@ -252,7 +252,7 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 	unsigned long token;
 	unsigned long min_seq;
 	struct lruvec *lruvec;
-	struct lru_gen_struct *lrugen;
+	struct lru_gen_folio *lrugen;
 	struct mem_cgroup *memcg;
 	struct pglist_data *pgdat;
 	int type = folio_is_file_lru(folio);
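As background for the struct renamed above: the seq counters in
struct lru_gen_folio index a small ring of per-generation lists, and
lru_gen_from_seq() (used throughout the diff) is just a modulo. Below
is a minimal userspace sketch of that sliding window; it assumes
MAX_NR_GENS is 4, and the helper here is a stand-in, not the kernel's.

#include <stdio.h>

#define MAX_NR_GENS 4UL /* assumed value, mirroring the kernel's */

/* stand-in for the kernel's lru_gen_from_seq(): a seq maps to a ring slot */
static unsigned long lru_gen_from_seq(unsigned long seq)
{
	return seq % MAX_NR_GENS;
}

int main(void)
{
	/* min_seq..max_seq form the sliding window of live generations */
	unsigned long min_seq = 5, max_seq = 7;
	unsigned long seq;

	for (seq = min_seq; seq <= max_seq; seq++)
		printf("seq %lu -> generation slot %lu\n", seq, lru_gen_from_seq(seq));
	return 0;
}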
From patchwork Thu Dec  1 22:39:18 2022
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13061873
Date: Thu, 1 Dec 2022 15:39:18 -0700
In-Reply-To: <20221201223923.873696-1-yuzhao@google.com>
Message-Id: <20221201223923.873696-3-yuzhao@google.com>
References: <20221201223923.873696-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v1 2/8] mm: multi-gen LRU: rename lrugen->lists[] to lrugen->folios[]
From: Yu Zhao
To: Andrew Morton
Cc: Johannes Weiner, Jonathan Corbet, Michael Larabel, Michal Hocko,
    Mike Rapoport, Roman Gushchin, Suren Baghdasaryan, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-mm@google.com, Yu Zhao

lru_gen_folio will be chained into per-node lists by the coming
lrugen->list.

Signed-off-by: Yu Zhao
---
 Documentation/mm/multigen_lru.rst |  8 ++++----
 include/linux/mm_inline.h         |  4 ++--
 include/linux/mmzone.h            |  8 ++++----
 mm/vmscan.c                       | 20 ++++++++++----------
 4 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/Documentation/mm/multigen_lru.rst b/Documentation/mm/multigen_lru.rst
index d7062c6a8946..d8f721f98868 100644
--- a/Documentation/mm/multigen_lru.rst
+++ b/Documentation/mm/multigen_lru.rst
@@ -89,15 +89,15 @@ variables are monotonically increasing.
 
 Generation numbers are truncated into ``order_base_2(MAX_NR_GENS+1)``
 bits in order to fit into the gen counter in ``folio->flags``. Each
-truncated generation number is an index to ``lrugen->lists[]``. The
+truncated generation number is an index to ``lrugen->folios[]``. The
 sliding window technique is used to track at least ``MIN_NR_GENS`` and
 at most ``MAX_NR_GENS`` generations. The gen counter stores a value
 within ``[1, MAX_NR_GENS]`` while a page is on one of
-``lrugen->lists[]``; otherwise it stores zero.
+``lrugen->folios[]``; otherwise it stores zero.
 
 Each generation is divided into multiple tiers. A page accessed ``N``
 times through file descriptors is in tier ``order_base_2(N)``. Unlike
-generations, tiers do not have dedicated ``lrugen->lists[]``. In
+generations, tiers do not have dedicated ``lrugen->folios[]``. In
 contrast to moving across generations, which requires the LRU lock,
 moving across tiers only involves atomic operations on
 ``folio->flags`` and therefore has a negligible cost. A feedback loop
@@ -127,7 +127,7 @@ page mapped by this PTE to ``(max_seq%MAX_NR_GENS)+1``.
 
 Eviction
 --------
 The eviction consumes old generations. Given an ``lruvec``, it
-increments ``min_seq`` when ``lrugen->lists[]`` indexed by
+increments ``min_seq`` when ``lrugen->folios[]`` indexed by
 ``min_seq%MAX_NR_GENS`` becomes empty. To select a type and a tier to
 evict from, it first compares ``min_seq[]`` to select the older type.
 If both types are equally old, it selects the one whose first tier has
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f63968bd7de5..da38e3d962e2 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -256,9 +256,9 @@ static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio,
 	lru_gen_update_size(lruvec, folio, -1, gen);
 	/* for folio_rotate_reclaimable() */
 	if (reclaiming)
-		list_add_tail(&folio->lru, &lrugen->lists[gen][type][zone]);
+		list_add_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 	else
-		list_add(&folio->lru, &lrugen->lists[gen][type][zone]);
+		list_add(&folio->lru, &lrugen->folios[gen][type][zone]);
 
 	return true;
 }
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index bd3e4689f72d..02e432374471 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -312,7 +312,7 @@ enum lruvec_flags {
  * They form a sliding window of a variable size [MIN_NR_GENS, MAX_NR_GENS]. An
  * offset within MAX_NR_GENS, i.e., gen, indexes the LRU list of the
  * corresponding generation. The gen counter in folio->flags stores gen+1 while
- * a page is on one of lrugen->lists[]. Otherwise it stores 0.
+ * a page is on one of lrugen->folios[]. Otherwise it stores 0.
 *
 * A page is added to the youngest generation on faulting. The aging needs to
 * check the accessed bit at least twice before handing this page over to the
@@ -324,8 +324,8 @@ enum lruvec_flags {
  * rest of generations, if they exist, are considered inactive. See
  * lru_gen_is_active().
  *
- * PG_active is always cleared while a page is on one of lrugen->lists[] so that
- * the aging needs not to worry about it. And it's set again when a page
+ * PG_active is always cleared while a page is on one of lrugen->folios[] so
+ * that the aging needs not to worry about it. And it's set again when a page
  * considered active is isolated for non-reclaiming purposes, e.g., migration.
  * See lru_gen_add_folio() and lru_gen_del_folio().
 *
@@ -412,7 +412,7 @@ struct lru_gen_folio {
 	/* the birth time of each generation in jiffies */
 	unsigned long timestamps[MAX_NR_GENS];
 	/* the multi-gen LRU lists, lazily sorted on eviction */
-	struct list_head lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
+	struct list_head folios[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
 	/* the multi-gen LRU sizes, eventually consistent */
 	long nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
 	/* the exponential moving average of refaulted */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fcb4ac351f93..ebab1ec3d400 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4256,7 +4256,7 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 
 	/* prevent cold/hot inversion if force_scan is true */
 	for (zone = 0; zone < MAX_NR_ZONES; zone++) {
-		struct list_head *head = &lrugen->lists[old_gen][type][zone];
+		struct list_head *head = &lrugen->folios[old_gen][type][zone];
 
 		while (!list_empty(head)) {
 			struct folio *folio = lru_to_folio(head);
@@ -4267,7 +4267,7 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 			VM_WARN_ON_ONCE_FOLIO(folio_zonenum(folio) != zone, folio);
 
 			new_gen = folio_inc_gen(lruvec, folio, false);
-			list_move_tail(&folio->lru, &lrugen->lists[new_gen][type][zone]);
+			list_move_tail(&folio->lru, &lrugen->folios[new_gen][type][zone]);
 
 			if (!--remaining)
 				return false;
@@ -4295,7 +4295,7 @@ static bool try_to_inc_min_seq(struct lruvec *lruvec, bool can_swap)
 			gen = lru_gen_from_seq(min_seq[type]);
 
 			for (zone = 0; zone < MAX_NR_ZONES; zone++) {
-				if (!list_empty(&lrugen->lists[gen][type][zone]))
+				if (!list_empty(&lrugen->folios[gen][type][zone]))
 					goto next;
 			}
 
@@ -4760,7 +4760,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
 
 	/* promoted */
 	if (gen != lru_gen_from_seq(lrugen->min_seq[type])) {
-		list_move(&folio->lru, &lrugen->lists[gen][type][zone]);
+		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;
 	}
 
@@ -4769,7 +4769,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
 		int hist = lru_hist_from_seq(lrugen->min_seq[type]);
 
 		gen = folio_inc_gen(lruvec, folio, false);
-		list_move_tail(&folio->lru, &lrugen->lists[gen][type][zone]);
+		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 
 		WRITE_ONCE(lrugen->protected[hist][type][tier - 1],
			   lrugen->protected[hist][type][tier - 1] + delta);
@@ -4781,7 +4781,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
 	if (folio_test_locked(folio) || folio_test_writeback(folio) ||
	    (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
 		gen = folio_inc_gen(lruvec, folio, true);
-		list_move(&folio->lru, &lrugen->lists[gen][type][zone]);
+		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;
 	}
 
@@ -4848,7 +4848,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 	for (zone = sc->reclaim_idx; zone >= 0; zone--) {
 		LIST_HEAD(moved);
 		int skipped = 0;
-		struct list_head *head = &lrugen->lists[gen][type][zone];
+		struct list_head *head = &lrugen->folios[gen][type][zone];
 
 		while (!list_empty(head)) {
 			struct folio *folio = lru_to_folio(head);
@@ -5248,7 +5248,7 @@ static bool __maybe_unused state_is_valid(struct lruvec *lruvec)
 		int gen, type, zone;
 
 		for_each_gen_type_zone(gen, type, zone) {
-			if (!list_empty(&lrugen->lists[gen][type][zone]))
+			if (!list_empty(&lrugen->folios[gen][type][zone]))
 				return false;
 		}
 	}
@@ -5293,7 +5293,7 @@ static bool drain_evictable(struct lruvec *lruvec)
 	int remaining = MAX_LRU_BATCH;
 
 	for_each_gen_type_zone(gen, type, zone) {
-		struct list_head *head = &lruvec->lrugen.lists[gen][type][zone];
+		struct list_head *head = &lruvec->lrugen.folios[gen][type][zone];
 
 		while (!list_empty(head)) {
 			bool success;
@@ -5827,7 +5827,7 @@ void lru_gen_init_lruvec(struct lruvec *lruvec)
 		lrugen->timestamps[i] = jiffies;
 
 	for_each_gen_type_zone(gen, type, zone)
-		INIT_LIST_HEAD(&lrugen->lists[gen][type][zone]);
+		INIT_LIST_HEAD(&lrugen->folios[gen][type][zone]);
 
 	lruvec->mm_state.seq = MIN_NR_GENS;
 	init_waitqueue_head(&lruvec->mm_state.wait);
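The documentation hunk above says a page accessed N times through file
descriptors sits in tier order_base_2(N), i.e., ceil(log2(N)). A
plain-C stand-in for that mapping (not the kernel macro), just to make
the bucketing concrete:

#include <stdio.h>

/* ceil(log2(n)) for n >= 1; stand-in for the kernel's order_base_2() */
static unsigned int tier_from_refs(unsigned long n)
{
	unsigned int order = 0;

	while ((1UL << order) < n)
		order++;
	return order;
}

int main(void)
{
	unsigned long n;

	/* N = 1 -> tier 0, N = 2 -> tier 1, N = 3..4 -> tier 2, ... */
	for (n = 1; n <= 8; n++)
		printf("accessed %lu times -> tier %u\n", n, tier_from_refs(n));
	return 0;
}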
From patchwork Thu Dec  1 22:39:19 2022
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13061874
Date: Thu, 1 Dec 2022 15:39:19 -0700
In-Reply-To: <20221201223923.873696-1-yuzhao@google.com>
Message-Id: <20221201223923.873696-4-yuzhao@google.com>
References: <20221201223923.873696-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v1 3/8] mm: multi-gen LRU: remove eviction fairness safeguard
From: Yu Zhao
To: Andrew Morton
Cc: Johannes Weiner, Jonathan Corbet, Michael Larabel, Michal Hocko,
    Mike Rapoport, Roman Gushchin, Suren Baghdasaryan, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-mm@google.com, Yu Zhao
Recall that the eviction consumes the oldest generation: first it
bucket-sorts folios whose gen counters were updated by the aging and
reclaims the rest; then it increments lrugen->min_seq.

The current eviction fairness safeguard for global reclaim has a
dilemma: when there are multiple eligible memcgs, should it continue
or stop upon meeting the reclaim goal? If it continues, it overshoots
and increases direct reclaim latency; if it stops, it loses fairness
between memcgs it has taken memory away from and those it has yet to.

With memcg LRU, the eviction, while ensuring eventual fairness, will
stop upon meeting its goal. Therefore the current eviction fairness
safeguard for global reclaim will not be needed.

Note that memcg LRU only applies to global reclaim. For memcg reclaim,
the eviction will continue, even if it is overshooting. This becomes
unconditional due to code simplification.

Signed-off-by: Yu Zhao
---
 mm/vmscan.c | 81 +++++++++++++++--------------------------------
 1 file changed, 23 insertions(+), 58 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index ebab1ec3d400..d714a777c88b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -449,6 +449,11 @@ static bool cgroup_reclaim(struct scan_control *sc)
 	return sc->target_mem_cgroup;
 }
 
+static bool global_reclaim(struct scan_control *sc)
+{
+	return !sc->target_mem_cgroup || mem_cgroup_is_root(sc->target_mem_cgroup);
+}
+
 /**
  * writeback_throttling_sane - is the usual dirty throttling mechanism available?
  * @sc: scan_control in question
@@ -499,6 +504,11 @@ static bool cgroup_reclaim(struct scan_control *sc)
 	return false;
 }
 
+static bool global_reclaim(struct scan_control *sc)
+{
+	return true;
+}
+
 static bool writeback_throttling_sane(struct scan_control *sc)
 {
 	return true;
@@ -4991,8 +5001,7 @@ static int isolate_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness,
 	return scanned;
 }
 
-static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness,
-			bool *need_swapping)
+static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
 {
 	int type;
 	int scanned;
@@ -5081,9 +5090,6 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness,
 		goto retry;
 	}
 
-	if (need_swapping && type == LRU_GEN_ANON)
-		*need_swapping = true;
-
 	return scanned;
 }
 
@@ -5122,67 +5128,26 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc,
 	return min_seq[!can_swap] + MIN_NR_GENS <= max_seq ? nr_to_scan : 0;
 }
 
-static bool should_abort_scan(struct lruvec *lruvec, unsigned long seq,
-			      struct scan_control *sc, bool need_swapping)
+static unsigned long get_nr_to_reclaim(struct scan_control *sc)
 {
-	int i;
-	DEFINE_MAX_SEQ(lruvec);
+	/* don't abort memcg reclaim to ensure fairness */
+	if (!global_reclaim(sc))
+		return -1;
 
-	if (!current_is_kswapd()) {
-		/* age each memcg at most once to ensure fairness */
-		if (max_seq - seq > 1)
-			return true;
+	/* discount the previous progress for kswapd */
+	if (current_is_kswapd())
+		return sc->nr_to_reclaim + sc->last_reclaimed;
 
-		/* over-swapping can increase allocation latency */
-		if (sc->nr_reclaimed >= sc->nr_to_reclaim && need_swapping)
-			return true;
-
-		/* give this thread a chance to exit and free its memory */
-		if (fatal_signal_pending(current)) {
-			sc->nr_reclaimed += MIN_LRU_BATCH;
-			return true;
-		}
-
-		if (cgroup_reclaim(sc))
-			return false;
-	} else if (sc->nr_reclaimed - sc->last_reclaimed < sc->nr_to_reclaim)
-		return false;
-
-	/* keep scanning at low priorities to ensure fairness */
-	if (sc->priority > DEF_PRIORITY - 2)
-		return false;
-
-	/*
-	 * A minimum amount of work was done under global memory pressure. For
-	 * kswapd, it may be overshooting. For direct reclaim, the allocation
-	 * may succeed if all suitable zones are somewhat safe. In either case,
-	 * it's better to stop now, and restart later if necessary.
-	 */
-	for (i = 0; i <= sc->reclaim_idx; i++) {
-		unsigned long wmark;
-		struct zone *zone = lruvec_pgdat(lruvec)->node_zones + i;
-
-		if (!managed_zone(zone))
-			continue;
-
-		wmark = current_is_kswapd() ? high_wmark_pages(zone) : low_wmark_pages(zone);
-		if (wmark > zone_page_state(zone, NR_FREE_PAGES))
-			return false;
-	}
-
-	sc->nr_reclaimed += MIN_LRU_BATCH;
-
-	return true;
+	return max(sc->nr_to_reclaim, compact_gap(sc->order));
 }
 
 static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
 	struct blk_plug plug;
 	bool need_aging = false;
-	bool need_swapping = false;
 	unsigned long scanned = 0;
 	unsigned long reclaimed = sc->nr_reclaimed;
-	DEFINE_MAX_SEQ(lruvec);
+	unsigned long nr_to_reclaim = get_nr_to_reclaim(sc);
 
 	lru_add_drain();
@@ -5206,7 +5171,7 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 		if (!nr_to_scan)
 			goto done;
 
-		delta = evict_folios(lruvec, sc, swappiness, &need_swapping);
+		delta = evict_folios(lruvec, sc, swappiness);
 		if (!delta)
 			goto done;
 
@@ -5214,7 +5179,7 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 		if (scanned >= nr_to_scan)
 			break;
 
-		if (should_abort_scan(lruvec, max_seq, sc, need_swapping))
+		if (sc->nr_reclaimed >= nr_to_reclaim)
 			break;
 
 		cond_resched();
@@ -5661,7 +5626,7 @@ static int run_eviction(struct lruvec *lruvec, unsigned long seq, struct scan_control *sc,
 		if (sc->nr_reclaimed >= nr_to_reclaim)
 			return 0;
 
-		if (!evict_folios(lruvec, sc, swappiness, NULL))
+		if (!evict_folios(lruvec, sc, swappiness))
 			return 0;
 
 		cond_resched();
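One detail worth calling out in get_nr_to_reclaim() above: for memcg
reclaim it returns -1, which an unsigned long return type turns into
ULONG_MAX, so the caller's new "sc->nr_reclaimed >= nr_to_reclaim"
test can never trigger an early abort. A standalone sketch of that
idiom (illustrative values only, not kernel code):

#include <stdio.h>

int main(void)
{
	unsigned long nr_to_reclaim = -1;	/* wraps to ULONG_MAX: "no cap" */
	unsigned long nr_reclaimed = 1UL << 40;	/* arbitrarily large progress */

	if (nr_reclaimed >= nr_to_reclaim)
		printf("abort scan\n");
	else
		printf("keep scanning; cap is %lu\n", nr_to_reclaim);
	return 0;
}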
From patchwork Thu Dec  1 22:39:20 2022
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13061875
Date: Thu, 1 Dec 2022 15:39:20 -0700
In-Reply-To: <20221201223923.873696-1-yuzhao@google.com>
Message-Id: <20221201223923.873696-5-yuzhao@google.com>
References: <20221201223923.873696-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v1 4/8] mm: multi-gen LRU: remove aging fairness safeguard
From: Yu Zhao
To: Andrew Morton
Cc: Johannes Weiner, Jonathan Corbet, Michael Larabel, Michal Hocko,
    Mike Rapoport, Roman Gushchin, Suren Baghdasaryan, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-mm@google.com, Yu Zhao

Recall that the aging produces the youngest generation: first it scans
for accessed folios and updates their gen counters; then it increments
lrugen->max_seq.

The current aging fairness safeguard for kswapd uses two passes to
ensure fairness to multiple eligible memcgs. On the first pass, which
is shared with the eviction, it checks whether all eligible memcgs are
low on cold folios. If so, it requires a second pass, on which it ages
all those memcgs at the same time.

With memcg LRU, the aging, while ensuring eventual fairness, will run
when necessary. Therefore the current aging fairness safeguard for
kswapd will not be needed.

Note that memcg LRU only applies to global reclaim. For memcg reclaim,
the aging can be unfair to different memcgs, i.e., their
lrugen->max_seq can be incremented at different paces.
Signed-off-by: Yu Zhao
---
 mm/vmscan.c | 150 +++++++++++++++++++++++++---------------------------
 1 file changed, 71 insertions(+), 79 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d714a777c88b..67967a4b18a9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -137,7 +137,6 @@ struct scan_control {
 
 #ifdef CONFIG_LRU_GEN
 	/* help kswapd make better choices among multiple memcgs */
-	unsigned int memcgs_need_aging:1;
 	unsigned long last_reclaimed;
 #endif
 
@@ -4453,7 +4452,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 	return true;
 }
 
-static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq, unsigned long *min_seq,
+static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq,
			     struct scan_control *sc, bool can_swap, unsigned long *nr_to_scan)
 {
 	int gen, type, zone;
@@ -4462,6 +4461,13 @@ static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq,
 	unsigned long total = 0;
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+	DEFINE_MIN_SEQ(lruvec);
+
+	/* whether this lruvec is completely out of cold folios */
+	if (min_seq[!can_swap] + MIN_NR_GENS > max_seq) {
+		*nr_to_scan = 0;
+		return true;
+	}
 
 	for (type = !can_swap; type < ANON_AND_FILE; type++) {
 		unsigned long seq;
@@ -4490,8 +4496,6 @@ static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq,
 	 * stalls when the number of generations reaches MIN_NR_GENS. Hence, the
 	 * ideal number of generations is MIN_NR_GENS+1.
 	 */
-	if (min_seq[!can_swap] + MIN_NR_GENS > max_seq)
-		return true;
 	if (min_seq[!can_swap] + MIN_NR_GENS < max_seq)
 		return false;
 
@@ -4510,40 +4514,54 @@ static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq,
 	return false;
 }
 
-static bool age_lruvec(struct lruvec *lruvec, struct scan_control *sc, unsigned long min_ttl)
+static bool lruvec_is_sizable(struct lruvec *lruvec, struct scan_control *sc)
 {
-	bool need_aging;
-	unsigned long nr_to_scan;
-	int swappiness = get_swappiness(lruvec, sc);
+	int gen, type, zone;
+	unsigned long total = 0;
+	bool can_swap = get_swappiness(lruvec, sc);
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	DEFINE_MAX_SEQ(lruvec);
 	DEFINE_MIN_SEQ(lruvec);
 
+	for (type = !can_swap; type < ANON_AND_FILE; type++) {
+		unsigned long seq;
+
+		for (seq = min_seq[type]; seq <= max_seq; seq++) {
+			gen = lru_gen_from_seq(seq);
+
+			for (zone = 0; zone < MAX_NR_ZONES; zone++)
+				total += max(READ_ONCE(lrugen->nr_pages[gen][type][zone]), 0L);
+		}
+	}
+
+	/* whether the size is big enough to be helpful */
+	return mem_cgroup_online(memcg) ? (total >> sc->priority) : total;
+}
+
+static bool lruvec_is_reclaimable(struct lruvec *lruvec, struct scan_control *sc,
+				  unsigned long min_ttl)
+{
+	int gen;
+	unsigned long birth;
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+	DEFINE_MIN_SEQ(lruvec);
+
 	VM_WARN_ON_ONCE(sc->memcg_low_reclaim);
 
+	/* see the comment on lru_gen_folio */
+	gen = lru_gen_from_seq(min_seq[LRU_GEN_FILE]);
+	birth = READ_ONCE(lruvec->lrugen.timestamps[gen]);
+
+	if (time_is_after_jiffies(birth + min_ttl))
+		return false;
+
+	if (!lruvec_is_sizable(lruvec, sc))
+		return false;
+
 	mem_cgroup_calculate_protection(NULL, memcg);
 
-	if (mem_cgroup_below_min(memcg))
-		return false;
-
-	need_aging = should_run_aging(lruvec, max_seq, min_seq, sc, swappiness, &nr_to_scan);
-
-	if (min_ttl) {
-		int gen = lru_gen_from_seq(min_seq[LRU_GEN_FILE]);
-		unsigned long birth = READ_ONCE(lruvec->lrugen.timestamps[gen]);
-
-		if (time_is_after_jiffies(birth + min_ttl))
-			return false;
-
-		/* the size is likely too small to be helpful */
-		if (!nr_to_scan && sc->priority != DEF_PRIORITY)
-			return false;
-	}
-
-	if (need_aging)
-		try_to_inc_max_seq(lruvec, max_seq, sc, swappiness, false);
-
-	return true;
+	return !mem_cgroup_below_min(memcg);
 }
 
 /* to protect the working set of the last N jiffies */
@@ -4552,46 +4570,32 @@ static unsigned long lru_gen_min_ttl __read_mostly;
 static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 {
 	struct mem_cgroup *memcg;
-	bool success = false;
 	unsigned long min_ttl = READ_ONCE(lru_gen_min_ttl);
 
 	VM_WARN_ON_ONCE(!current_is_kswapd());
 
 	sc->last_reclaimed = sc->nr_reclaimed;
 
-	/*
-	 * To reduce the chance of going into the aging path, which can be
-	 * costly, optimistically skip it if the flag below was cleared in the
-	 * eviction path. This improves the overall performance when multiple
-	 * memcgs are available.
-	 */
-	if (!sc->memcgs_need_aging) {
-		sc->memcgs_need_aging = true;
-		return;
-	}
-
-	set_mm_walk(pgdat);
-
-	memcg = mem_cgroup_iter(NULL, NULL, NULL);
-	do {
-		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
-
-		if (age_lruvec(lruvec, sc, min_ttl))
-			success = true;
-
-		cond_resched();
-	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
-
-	clear_mm_walk();
-
 	/* check the order to exclude compaction-induced reclaim */
-	if (success || !min_ttl || sc->order)
+	if (!min_ttl || sc->order || sc->priority == DEF_PRIORITY)
 		return;
 
+	memcg = mem_cgroup_iter(NULL, NULL, NULL);
+	do {
+		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
+
+		if (lruvec_is_reclaimable(lruvec, sc, min_ttl)) {
+			mem_cgroup_iter_break(NULL, memcg);
+			return;
+		}
+
+		cond_resched();
+	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
+
 	/*
 	 * The main goal is to OOM kill if every generation from all memcgs is
 	 * younger than min_ttl. However, another possibility is all memcgs are
-	 * either below min or empty.
+	 * either too small or below min.
	 */
 	if (mutex_trylock(&oom_lock)) {
 		struct oom_control oc = {
@@ -5099,33 +5103,27 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
  * reclaim.
  */
 static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc,
-				    bool can_swap, bool *need_aging)
+				    bool can_swap)
 {
 	unsigned long nr_to_scan;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	DEFINE_MAX_SEQ(lruvec);
-	DEFINE_MIN_SEQ(lruvec);
 
 	if (mem_cgroup_below_min(memcg) ||
	    (mem_cgroup_below_low(memcg) && !sc->memcg_low_reclaim))
 		return 0;
 
-	*need_aging = should_run_aging(lruvec, max_seq, min_seq, sc, can_swap, &nr_to_scan);
-	if (!*need_aging)
+	if (!should_run_aging(lruvec, max_seq, sc, can_swap, &nr_to_scan))
 		return nr_to_scan;
 
 	/* skip the aging path at the default priority */
 	if (sc->priority == DEF_PRIORITY)
-		goto done;
-
-	/* leave the work to lru_gen_age_node() */
-	if (current_is_kswapd())
-		return 0;
-
-	if (try_to_inc_max_seq(lruvec, max_seq, sc, can_swap, false))
 		return nr_to_scan;
-done:
-	return min_seq[!can_swap] + MIN_NR_GENS <= max_seq ? nr_to_scan : 0;
+
+	try_to_inc_max_seq(lruvec, max_seq, sc, can_swap, false);
+
+	/* skip this lruvec as it's low on cold folios */
+	return 0;
 }
 
@@ -5144,9 +5142,7 @@ static unsigned long get_nr_to_reclaim(struct scan_control *sc)
 static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
 	struct blk_plug plug;
-	bool need_aging = false;
 	unsigned long scanned = 0;
-	unsigned long reclaimed = sc->nr_reclaimed;
 	unsigned long nr_to_reclaim = get_nr_to_reclaim(sc);
 
 	lru_add_drain();
@@ -5167,13 +5163,13 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 		else
 			swappiness = 0;
 
-		nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness, &need_aging);
+		nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness);
 		if (!nr_to_scan)
-			goto done;
+			break;
 
 		delta = evict_folios(lruvec, sc, swappiness);
 		if (!delta)
-			goto done;
+			break;
 
 		scanned += delta;
 		if (scanned >= nr_to_scan)
@@ -5185,10 +5181,6 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 		cond_resched();
 	}
 
-	/* see the comment in lru_gen_age_node() */
-	if (sc->nr_reclaimed - reclaimed >= MIN_LRU_BATCH && !need_aging)
-		sc->memcgs_need_aging = false;
-done:
 	clear_mm_walk();
 
 	blk_finish_plug(&plug);
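The min_ttl test that moved into lruvec_is_reclaimable() relies on the
kernel's wraparound-safe jiffies comparison: time_is_after_jiffies(birth
+ min_ttl) stays true while the oldest generation is still younger than
min_ttl. A toy model with a simulated jiffies counter (signed-difference
trick; not kernel code):

#include <stdbool.h>
#include <stdio.h>

static unsigned long jiffies = 10000;	/* simulated clock */

/* wraparound-safe "deadline is still ahead", like time_is_after_jiffies() */
static bool time_is_after_jiffies_sim(unsigned long t)
{
	return (long)(jiffies - t) < 0;
}

static bool old_enough(unsigned long birth, unsigned long min_ttl)
{
	/* the oldest generation must have aged past min_ttl to be reclaimable */
	return !time_is_after_jiffies_sim(birth + min_ttl);
}

int main(void)
{
	printf("birth 9000, min_ttl 500 -> reclaimable %d\n", old_enough(9000, 500)); /* 1 */
	printf("birth 9900, min_ttl 500 -> reclaimable %d\n", old_enough(9900, 500)); /* 0 */
	return 0;
}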
From patchwork Thu Dec 1 22:39:21 2022
Date: Thu, 1 Dec 2022 15:39:21 -0700
In-Reply-To: <20221201223923.873696-1-yuzhao@google.com>
Message-Id: <20221201223923.873696-6-yuzhao@google.com>
References: <20221201223923.873696-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v1 5/8] mm: multi-gen LRU: shuffle should_run_aging()
From: Yu Zhao
To: Andrew Morton
Cc: Johannes Weiner, Jonathan Corbet, Michael Larabel, Michal Hocko, Mike Rapoport, Roman Gushchin, Suren Baghdasaryan, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-mm@google.com, Yu Zhao

Move should_run_aging() next to its only remaining caller, get_nr_to_scan().

Signed-off-by: Yu Zhao --- mm/vmscan.c | 124 ++++++++++++++++++++++++++-------------------------- 1 file changed, 62 insertions(+), 62 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 67967a4b18a9..0557adce75c5 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -4452,68 +4452,6 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq, return true; } -static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq, - struct scan_control *sc, bool can_swap, unsigned long *nr_to_scan) -{ - int gen, type, zone; - unsigned long old = 0; - unsigned long young = 0; - unsigned long total = 0; - struct lru_gen_folio *lrugen = &lruvec->lrugen; - struct mem_cgroup *memcg = lruvec_memcg(lruvec); - DEFINE_MIN_SEQ(lruvec); - - /* whether this lruvec is completely out of cold folios */ - if (min_seq[!can_swap] + MIN_NR_GENS > max_seq) { - *nr_to_scan = 0; - return true; - } - - for (type = !can_swap; type < ANON_AND_FILE; type++) { - unsigned long seq; - - for (seq = min_seq[type]; seq <= max_seq; seq++) { - unsigned long size = 0; - - gen = lru_gen_from_seq(seq); - - for (zone = 0; zone < MAX_NR_ZONES; zone++) - size += max(READ_ONCE(lrugen->nr_pages[gen][type][zone]), 0L); - - total += size; - if (seq == max_seq) - young += size; - else if (seq + MIN_NR_GENS == max_seq) - old += size; - } - } - - /* try to scrape all its memory if this memcg was deleted */ - *nr_to_scan = mem_cgroup_online(memcg) ?
(total >> sc->priority) : total; - - /* - * The aging tries to be lazy to reduce the overhead, while the eviction - * stalls when the number of generations reaches MIN_NR_GENS. Hence, the - * ideal number of generations is MIN_NR_GENS+1. - */ - if (min_seq[!can_swap] + MIN_NR_GENS < max_seq) - return false; - - /* - * It's also ideal to spread pages out evenly, i.e., 1/(MIN_NR_GENS+1) - * of the total number of pages for each generation. A reasonable range - * for this average portion is [1/MIN_NR_GENS, 1/(MIN_NR_GENS+2)]. The - * aging cares about the upper bound of hot pages, while the eviction - * cares about the lower bound of cold pages. - */ - if (young * MIN_NR_GENS > total) - return true; - if (old * (MIN_NR_GENS + 2) < total) - return true; - - return false; -} - static bool lruvec_is_sizable(struct lruvec *lruvec, struct scan_control *sc) { int gen, type, zone; @@ -5097,6 +5035,68 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap return scanned; } +static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq, + struct scan_control *sc, bool can_swap, unsigned long *nr_to_scan) +{ + int gen, type, zone; + unsigned long old = 0; + unsigned long young = 0; + unsigned long total = 0; + struct lru_gen_folio *lrugen = &lruvec->lrugen; + struct mem_cgroup *memcg = lruvec_memcg(lruvec); + DEFINE_MIN_SEQ(lruvec); + + /* whether this lruvec is completely out of cold folios */ + if (min_seq[!can_swap] + MIN_NR_GENS > max_seq) { + *nr_to_scan = 0; + return true; + } + + for (type = !can_swap; type < ANON_AND_FILE; type++) { + unsigned long seq; + + for (seq = min_seq[type]; seq <= max_seq; seq++) { + unsigned long size = 0; + + gen = lru_gen_from_seq(seq); + + for (zone = 0; zone < MAX_NR_ZONES; zone++) + size += max(READ_ONCE(lrugen->nr_pages[gen][type][zone]), 0L); + + total += size; + if (seq == max_seq) + young += size; + else if (seq + MIN_NR_GENS == max_seq) + old += size; + } + } + + /* try to scrape all its memory if this memcg was deleted */ + *nr_to_scan = mem_cgroup_online(memcg) ? (total >> sc->priority) : total; + + /* + * The aging tries to be lazy to reduce the overhead, while the eviction + * stalls when the number of generations reaches MIN_NR_GENS. Hence, the + * ideal number of generations is MIN_NR_GENS+1. + */ + if (min_seq[!can_swap] + MIN_NR_GENS < max_seq) + return false; + + /* + * It's also ideal to spread pages out evenly, i.e., 1/(MIN_NR_GENS+1) + * of the total number of pages for each generation. A reasonable range + * for this average portion is [1/MIN_NR_GENS, 1/(MIN_NR_GENS+2)]. The + * aging cares about the upper bound of hot pages, while the eviction + * cares about the lower bound of cold pages. + */ + if (young * MIN_NR_GENS > total) + return true; + if (old * (MIN_NR_GENS + 2) < total) + return true; + + return false; +} + /* * For future optimizations: * 1. 
Defer try_to_inc_max_seq() to workqueues to reduce latency for memcg * reclaim. */
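The two ratio checks in should_run_aging() are easier to see with concrete numbers. A userspace-style sketch (MIN_NR_GENS is 2 in the kernel; the function name and sample values are made up for illustration):

#include <stdbool.h>
#include <stdio.h>

#define MIN_NR_GENS 2

/* Mirrors the balance test: age when the youngest generation holds
 * more than 1/MIN_NR_GENS of the pages (too many hot pages), or the
 * oldest holds less than 1/(MIN_NR_GENS + 2) (too few cold pages). */
static bool needs_aging(unsigned long young, unsigned long old,
			unsigned long total)
{
	if (young * MIN_NR_GENS > total)
		return true;
	if (old * (MIN_NR_GENS + 2) < total)
		return true;
	return false;
}

int main(void)
{
	/* 600 of 1000 pages in the youngest generation: 600 * 2 > 1000,
	 * so the upper bound trips and the aging should run. */
	printf("%d\n", needs_aging(600, 300, 1000));	/* prints 1 */
	/* 200 young, 400 old: 200 * 2 <= 1000 and 400 * 4 >= 1000, so
	 * both bounds hold and the eviction can proceed without aging. */
	printf("%d\n", needs_aging(200, 400, 1000));	/* prints 0 */
	return 0;
}

Note these checks only apply once min_seq[!can_swap] + MIN_NR_GENS == max_seq; with more generations than that, should_run_aging() returns false outright, since the ideal number of generations is MIN_NR_GENS + 1.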
From patchwork Thu Dec 1 22:39:22 2022
Date: Thu, 1 Dec 2022 15:39:22 -0700
In-Reply-To: <20221201223923.873696-1-yuzhao@google.com>
Message-Id: <20221201223923.873696-7-yuzhao@google.com>
References: <20221201223923.873696-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v1 6/8] mm: multi-gen LRU: per-node lru_gen_folio lists
From: Yu Zhao
To: Andrew Morton
Cc: Johannes Weiner, Jonathan Corbet, Michael Larabel, Michal Hocko, Mike Rapoport, Roman Gushchin, Suren Baghdasaryan, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-mm@google.com, Yu Zhao

For each node, memcgs are divided into two generations: the old and the young. For each generation, memcgs are randomly sharded into multiple bins to improve scalability.
For each bin, an RCU hlist_nulls is virtually divided into three segments: the head, the tail and the default.

An onlining memcg is added to the tail of a random bin in the old generation. The eviction starts at the head of a random bin in the old generation. The per-node memcg generation counter, whose remainder (mod 2) indexes the old generation, is incremented when all its bins become empty.

There are four operations:
1. MEMCG_LRU_HEAD, which moves a memcg to the head of a random bin in its current generation (old or young) and updates its "seg" to "head";
2. MEMCG_LRU_TAIL, which moves a memcg to the tail of a random bin in its current generation (old or young) and updates its "seg" to "tail";
3. MEMCG_LRU_OLD, which moves a memcg to the head of a random bin in the old generation, updates its "gen" to "old" and resets its "seg" to "default";
4. MEMCG_LRU_YOUNG, which moves a memcg to the tail of a random bin in the young generation, updates its "gen" to "young" and resets its "seg" to "default".

The events that trigger the above operations are:
1. Exceeding the soft limit, which triggers MEMCG_LRU_HEAD;
2. The first attempt to reclaim a memcg below low, which triggers MEMCG_LRU_TAIL;
3. The first attempt to reclaim a memcg below the reclaimable size threshold, which triggers MEMCG_LRU_TAIL;
4. The second attempt to reclaim a memcg below the reclaimable size threshold, which triggers MEMCG_LRU_YOUNG;
5. Attempting to reclaim a memcg below min, which triggers MEMCG_LRU_YOUNG;
6. Finishing the aging on the eviction path, which triggers MEMCG_LRU_YOUNG;
7. Offlining a memcg, which triggers MEMCG_LRU_OLD.

Note that the memcg LRU only applies to global reclaim. For memcg reclaim, it still relies on mem_cgroup_iter().

Signed-off-by: Yu Zhao --- include/linux/memcontrol.h | 10 + include/linux/mm_inline.h | 17 ++ include/linux/mmzone.h | 113 ++++++++++- mm/memcontrol.c | 16 ++ mm/page_alloc.c | 1 + mm/vmscan.c | 373 +++++++++++++++++++++++++++++++++---- 6 files changed, 495 insertions(+), 35 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index e1644a24009c..f9a44d32e763 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -790,6 +790,11 @@ static inline void obj_cgroup_put(struct obj_cgroup *objcg) percpu_ref_put(&objcg->refcnt); } +static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg) +{ + return !memcg || css_tryget(&memcg->css); +} + static inline void mem_cgroup_put(struct mem_cgroup *memcg) { if (memcg) @@ -1290,6 +1295,11 @@ static inline void obj_cgroup_put(struct obj_cgroup *objcg) { } +static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg) +{ + return true; +} + static inline void mem_cgroup_put(struct mem_cgroup *memcg) { } diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index da38e3d962e2..c1fd3922dc5d 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -122,6 +122,18 @@ static inline bool lru_gen_in_fault(void) return current->in_lru_fault; } +#ifdef CONFIG_MEMCG +static inline int lru_gen_memcg_seg(struct lruvec *lruvec) +{ + return READ_ONCE(lruvec->lrugen.seg); +} +#else +static inline int lru_gen_memcg_seg(struct lruvec *lruvec) +{ + return 0; +} +#endif + static inline int lru_gen_from_seq(unsigned long seq) { return seq % MAX_NR_GENS; @@ -297,6 +309,11 @@ static inline bool lru_gen_in_fault(void) return false; } +static inline int lru_gen_memcg_seg(struct lruvec *lruvec) +{ + return 0; +} + static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio
*folio, bool reclaiming) { return false; diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 02e432374471..87b3b5a2aac4 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -7,6 +7,7 @@ #include <linux/spinlock.h> #include <linux/list.h> +#include <linux/list_nulls.h> #include <linux/wait.h> #include <linux/bitops.h> #include <linux/cache.h> @@ -367,6 +368,15 @@ struct page_vma_mapped_walk; #define LRU_GEN_MASK ((BIT(LRU_GEN_WIDTH) - 1) << LRU_GEN_PGOFF) #define LRU_REFS_MASK ((BIT(LRU_REFS_WIDTH) - 1) << LRU_REFS_PGOFF) +/* see the comment on MEMCG_NR_GENS */ +enum { + MEMCG_LRU_NOP, + MEMCG_LRU_HEAD, + MEMCG_LRU_TAIL, + MEMCG_LRU_OLD, + MEMCG_LRU_YOUNG, +}; + #ifdef CONFIG_LRU_GEN enum { @@ -426,6 +436,14 @@ struct lru_gen_folio { atomic_long_t refaulted[NR_HIST_GENS][ANON_AND_FILE][MAX_NR_TIERS]; /* whether the multi-gen LRU is enabled */ bool enabled; +#ifdef CONFIG_MEMCG + /* the memcg generation this lru_gen_folio belongs to */ + u8 gen; + /* the list segment this lru_gen_folio belongs to */ + u8 seg; + /* per-node lru_gen_folio list for global reclaim */ + struct hlist_nulls_node list; +#endif }; enum { @@ -479,12 +497,83 @@ void lru_gen_init_lruvec(struct lruvec *lruvec); void lru_gen_look_around(struct page_vma_mapped_walk *pvmw); #ifdef CONFIG_MEMCG + +/* + * For each node, memcgs are divided into two generations: the old and the + * young. For each generation, memcgs are randomly sharded into multiple bins + * to improve scalability. For each bin, the hlist_nulls is virtually divided + * into three segments: the head, the tail and the default. + * + * An onlining memcg is added to the tail of a random bin in the old generation. + * The eviction starts at the head of a random bin in the old generation. The + * per-node memcg generation counter, whose remainder (mod MEMCG_NR_GENS) indexes + * the old generation, is incremented when all its bins become empty. + * + * There are four operations: + * 1. MEMCG_LRU_HEAD, which moves a memcg to the head of a random bin in its + * current generation (old or young) and updates its "seg" to "head"; + * 2. MEMCG_LRU_TAIL, which moves a memcg to the tail of a random bin in its + * current generation (old or young) and updates its "seg" to "tail"; + * 3. MEMCG_LRU_OLD, which moves a memcg to the head of a random bin in the old + * generation, updates its "gen" to "old" and resets its "seg" to "default"; + * 4. MEMCG_LRU_YOUNG, which moves a memcg to the tail of a random bin in the + * young generation, updates its "gen" to "young" and resets its "seg" to + * "default". + * + * The events that trigger the above operations are: + * 1. Exceeding the soft limit, which triggers MEMCG_LRU_HEAD; + * 2. The first attempt to reclaim a memcg below low, which triggers + * MEMCG_LRU_TAIL; + * 3. The first attempt to reclaim a memcg below the reclaimable size threshold, + * which triggers MEMCG_LRU_TAIL; + * 4. The second attempt to reclaim a memcg below the reclaimable size threshold, + * which triggers MEMCG_LRU_YOUNG; + * 5. Attempting to reclaim a memcg below min, which triggers MEMCG_LRU_YOUNG; + * 6. Finishing the aging on the eviction path, which triggers MEMCG_LRU_YOUNG; + * 7. Offlining a memcg, which triggers MEMCG_LRU_OLD.
+ */ +#define MEMCG_NR_GENS 2 +#define MEMCG_NR_BINS 8 + +struct lru_gen_memcg { + /* the per-node memcg generation counter */ + unsigned long seq; + /* each memcg has one lru_gen_folio per node */ + unsigned long nr_memcgs[MEMCG_NR_GENS]; + /* per-node lru_gen_folio list for global reclaim */ + struct hlist_nulls_head fifo[MEMCG_NR_GENS][MEMCG_NR_BINS]; + /* protects the above */ + spinlock_t lock; +}; + +void lru_gen_init_pgdat(struct pglist_data *pgdat); + void lru_gen_init_memcg(struct mem_cgroup *memcg); void lru_gen_exit_memcg(struct mem_cgroup *memcg); -#endif +void lru_gen_online_memcg(struct mem_cgroup *memcg); +void lru_gen_offline_memcg(struct mem_cgroup *memcg); +void lru_gen_release_memcg(struct mem_cgroup *memcg); +void lru_gen_rotate_memcg(struct lruvec *lruvec, int op); + +#else /* !CONFIG_MEMCG */ + +#define MEMCG_NR_GENS 1 + +struct lru_gen_memcg { +}; + +static inline void lru_gen_init_pgdat(struct pglist_data *pgdat) +{ +} + +#endif /* CONFIG_MEMCG */ #else /* !CONFIG_LRU_GEN */ +static inline void lru_gen_init_pgdat(struct pglist_data *pgdat) +{ +} + static inline void lru_gen_init_lruvec(struct lruvec *lruvec) { } @@ -494,6 +583,7 @@ static inline void lru_gen_look_around(struct page_vma_mapped_walk *pvmw) } #ifdef CONFIG_MEMCG + static inline void lru_gen_init_memcg(struct mem_cgroup *memcg) { } @@ -501,7 +591,24 @@ static inline void lru_gen_init_memcg(struct mem_cgroup *memcg) static inline void lru_gen_exit_memcg(struct mem_cgroup *memcg) { } -#endif + +static inline void lru_gen_online_memcg(struct mem_cgroup *memcg) +{ +} + +static inline void lru_gen_offline_memcg(struct mem_cgroup *memcg) +{ +} + +static inline void lru_gen_release_memcg(struct mem_cgroup *memcg) +{ +} + +static inline void lru_gen_rotate_memcg(struct lruvec *lruvec, int op) +{ +} + +#endif /* CONFIG_MEMCG */ #endif /* CONFIG_LRU_GEN */ @@ -1219,6 +1326,8 @@ typedef struct pglist_data { #ifdef CONFIG_LRU_GEN /* kswap mm walk data */ struct lru_gen_mm_walk mm_walk; + /* lru_gen_folio list */ + struct lru_gen_memcg memcg_lru; #endif CACHELINE_PADDING(_pad2_); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 23750cec0036..6b976829e9f7 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -477,6 +477,16 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, int nid) struct mem_cgroup_per_node *mz; struct mem_cgroup_tree_per_node *mctz; + if (lru_gen_enabled()) { + struct lruvec *lruvec = &memcg->nodeinfo[nid]->lruvec; + + /* see the comment on MEMCG_NR_GENS */ + if (soft_limit_excess(memcg) && lru_gen_memcg_seg(lruvec) != MEMCG_LRU_HEAD) + lru_gen_rotate_memcg(lruvec, MEMCG_LRU_HEAD); + + return; + } + mctz = soft_limit_tree.rb_tree_per_node[nid]; if (!mctz) return; @@ -3526,6 +3536,9 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, struct mem_cgroup_tree_per_node *mctz; unsigned long excess; + if (lru_gen_enabled()) + return 0; + if (order > 0) return 0; @@ -5371,6 +5384,7 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) if (unlikely(mem_cgroup_is_root(memcg))) queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ); + lru_gen_online_memcg(memcg); return 0; offline_kmem: memcg_offline_kmem(memcg); @@ -5402,6 +5416,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) memcg_offline_kmem(memcg); reparent_shrinker_deferred(memcg); wb_memcg_offline(memcg); + lru_gen_offline_memcg(memcg); drain_all_stock(memcg); @@ -5413,6 +5428,7 @@ static void mem_cgroup_css_released(struct cgroup_subsys_state *css) struct 
mem_cgroup *memcg = mem_cgroup_from_css(css); invalidate_reclaim_iterators(memcg); + lru_gen_release_memcg(memcg); } static void mem_cgroup_css_free(struct cgroup_subsys_state *css) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 2d4c81224508..0aa134b8dae2 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -7914,6 +7914,7 @@ static void __init free_area_init_node(int nid) pgdat_set_deferred_range(pgdat); free_area_init_core(pgdat); + lru_gen_init_pgdat(pgdat); } static void __init free_area_init_memoryless_node(int nid) diff --git a/mm/vmscan.c b/mm/vmscan.c index 0557adce75c5..44506eb96c9d 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -55,6 +55,7 @@ #include <linux/ctype.h> #include <linux/debugfs.h> #include <linux/khugepaged.h> +#include <linux/rculist_nulls.h> #include <linux/random.h> #include <asm/tlbflush.h> @@ -135,11 +136,6 @@ struct scan_control { /* Always discard instead of demoting to lower tier memory */ unsigned int no_demotion:1; -#ifdef CONFIG_LRU_GEN - /* help kswapd make better choices among multiple memcgs */ - unsigned long last_reclaimed; -#endif - /* Allocation order */ s8 order; @@ -3167,6 +3163,9 @@ DEFINE_STATIC_KEY_ARRAY_FALSE(lru_gen_caps, NR_LRU_GEN_CAPS); for ((type) = 0; (type) < ANON_AND_FILE; (type)++) \ for ((zone) = 0; (zone) < MAX_NR_ZONES; (zone)++) +#define get_memcg_gen(seq) ((seq) % MEMCG_NR_GENS) +#define get_memcg_bin(bin) ((bin) % MEMCG_NR_BINS) + static struct lruvec *get_lruvec(struct mem_cgroup *memcg, int nid) { struct pglist_data *pgdat = NODE_DATA(nid); @@ -4438,8 +4437,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq, if (sc->priority <= DEF_PRIORITY - 2) wait_event_killable(lruvec->mm_state.wait, max_seq < READ_ONCE(lrugen->max_seq)); - - return max_seq < READ_ONCE(lrugen->max_seq); + return false; } VM_WARN_ON_ONCE(max_seq != READ_ONCE(lrugen->max_seq)); @@ -4512,8 +4510,6 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc) VM_WARN_ON_ONCE(!current_is_kswapd()); - sc->last_reclaimed = sc->nr_reclaimed; - /* check the order to exclude compaction-induced reclaim */ if (!min_ttl || sc->order || sc->priority == DEF_PRIORITY) return; @@ -5102,8 +5098,7 @@ static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq, * 1. Defer try_to_inc_max_seq() to workqueues to reduce latency for memcg * reclaim. */ -static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, - bool can_swap) +static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, bool can_swap) { unsigned long nr_to_scan; struct mem_cgroup *memcg = lruvec_memcg(lruvec); @@ -5120,10 +5115,8 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control * if (sc->priority == DEF_PRIORITY) return nr_to_scan; - try_to_inc_max_seq(lruvec, max_seq, sc, can_swap, false); - /* skip this lruvec as it's low on cold folios */ - return 0; + return try_to_inc_max_seq(lruvec, max_seq, sc, can_swap, false) ?
-1 : 0; } static unsigned long get_nr_to_reclaim(struct scan_control *sc) @@ -5132,29 +5125,18 @@ static unsigned long get_nr_to_reclaim(struct scan_control *sc) if (!global_reclaim(sc)) return -1; - /* discount the previous progress for kswapd */ - if (current_is_kswapd()) - return sc->nr_to_reclaim + sc->last_reclaimed; - return max(sc->nr_to_reclaim, compact_gap(sc->order)); } -static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) +static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) { - struct blk_plug plug; + long nr_to_scan; unsigned long scanned = 0; unsigned long nr_to_reclaim = get_nr_to_reclaim(sc); - lru_add_drain(); - - blk_start_plug(&plug); - - set_mm_walk(lruvec_pgdat(lruvec)); - while (true) { int delta; int swappiness; - unsigned long nr_to_scan; if (sc->may_swap) swappiness = get_swappiness(lruvec, sc); @@ -5164,7 +5146,7 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc swappiness = 0; nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness); - if (!nr_to_scan) + if (nr_to_scan <= 0) break; delta = evict_folios(lruvec, sc, swappiness); @@ -5181,10 +5163,251 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc cond_resched(); } + /* whether try_to_inc_max_seq() was successful */ + return nr_to_scan < 0; +} + +static int shrink_one(struct lruvec *lruvec, struct scan_control *sc) +{ + bool success; + unsigned long scanned = sc->nr_scanned; + unsigned long reclaimed = sc->nr_reclaimed; + int seg = lru_gen_memcg_seg(lruvec); + struct mem_cgroup *memcg = lruvec_memcg(lruvec); + struct pglist_data *pgdat = lruvec_pgdat(lruvec); + + /* see the comment on MEMCG_NR_GENS */ + if (!lruvec_is_sizable(lruvec, sc)) + return seg != MEMCG_LRU_TAIL ? MEMCG_LRU_TAIL : MEMCG_LRU_YOUNG; + + mem_cgroup_calculate_protection(NULL, memcg); + + if (mem_cgroup_below_min(memcg)) + return MEMCG_LRU_YOUNG; + + if (mem_cgroup_below_low(memcg)) { + /* see the comment on MEMCG_NR_GENS */ + if (seg != MEMCG_LRU_TAIL) + return MEMCG_LRU_TAIL; + + memcg_memory_event(memcg, MEMCG_LOW); + } + + success = try_to_shrink_lruvec(lruvec, sc); + + shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority); + + if (!sc->proactive) + vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned, + sc->nr_reclaimed - reclaimed); + + sc->nr_reclaimed += current->reclaim_state->reclaimed_slab; + current->reclaim_state->reclaimed_slab = 0; + + return success ? 
MEMCG_LRU_YOUNG : 0; +} + +#ifdef CONFIG_MEMCG + +static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc) +{ + int gen; + int bin; + int first_bin; + struct lruvec *lruvec; + struct lru_gen_folio *lrugen; + const struct hlist_nulls_node *pos; + int op = 0; + struct mem_cgroup *memcg = NULL; + unsigned long nr_to_reclaim = get_nr_to_reclaim(sc); + + bin = first_bin = prandom_u32_max(MEMCG_NR_BINS); +restart: + gen = get_memcg_gen(READ_ONCE(pgdat->memcg_lru.seq)); + + rcu_read_lock(); + + hlist_nulls_for_each_entry_rcu(lrugen, pos, &pgdat->memcg_lru.fifo[gen][bin], list) { + if (op) + lru_gen_rotate_memcg(lruvec, op); + + mem_cgroup_put(memcg); + + lruvec = container_of(lrugen, struct lruvec, lrugen); + memcg = lruvec_memcg(lruvec); + + if (!mem_cgroup_tryget(memcg)) { + op = 0; + memcg = NULL; + continue; + } + + rcu_read_unlock(); + + op = shrink_one(lruvec, sc); + + if (sc->nr_reclaimed >= nr_to_reclaim) + goto success; + + rcu_read_lock(); + } + + rcu_read_unlock(); + + /* restart if raced with lru_gen_rotate_memcg() */ + if (gen != get_nulls_value(pos)) + goto restart; + + /* try the rest of the bins of the current generation */ + bin = get_memcg_bin(bin + 1); + if (bin != first_bin) + goto restart; +success: + if (op) + lru_gen_rotate_memcg(lruvec, op); + + mem_cgroup_put(memcg); +} + +static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) +{ + struct blk_plug plug; + + VM_WARN_ON_ONCE(global_reclaim(sc)); + + lru_add_drain(); + + blk_start_plug(&plug); + + set_mm_walk(lruvec_pgdat(lruvec)); + + if (try_to_shrink_lruvec(lruvec, sc)) + lru_gen_rotate_memcg(lruvec, MEMCG_LRU_YOUNG); + + clear_mm_walk(); + + blk_finish_plug(&plug); +} + +#else /* !CONFIG_MEMCG */ + +static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc) +{ + BUILD_BUG(); +} + +static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) +{ + BUILD_BUG(); +} + +#endif + +static void set_initial_priority(struct pglist_data *pgdat, struct scan_control *sc) +{ + int priority; + unsigned long reclaimable; + struct lruvec *lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat); + + if (sc->priority != DEF_PRIORITY || sc->nr_to_reclaim < MIN_LRU_BATCH) + return; + /* + * Determine the initial priority based on ((total / MEMCG_NR_GENS) >> + * priority) * reclaimed_to_scanned_ratio = nr_to_reclaim, where the + * estimated reclaimed_to_scanned_ratio = inactive / total. 
+ */ + reclaimable = node_page_state(pgdat, NR_INACTIVE_FILE); + if (get_swappiness(lruvec, sc)) + reclaimable += node_page_state(pgdat, NR_INACTIVE_ANON); + + reclaimable /= MEMCG_NR_GENS; + + /* round down reclaimable and round up sc->nr_to_reclaim */ + priority = fls_long(reclaimable) - 1 - fls_long(sc->nr_to_reclaim - 1); + + sc->priority = clamp(priority, 0, DEF_PRIORITY); +} + +static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *sc) +{ + struct blk_plug plug; + unsigned long reclaimed = sc->nr_reclaimed; + + VM_WARN_ON_ONCE(!global_reclaim(sc)); + + lru_add_drain(); + + blk_start_plug(&plug); + + set_mm_walk(pgdat); + + set_initial_priority(pgdat, sc); + + if (current_is_kswapd()) + sc->nr_reclaimed = 0; + + if (mem_cgroup_disabled()) + shrink_one(&pgdat->__lruvec, sc); + else + shrink_many(pgdat, sc); + + if (current_is_kswapd()) + sc->nr_reclaimed += reclaimed; + clear_mm_walk(); blk_finish_plug(&plug); + + /* kswapd should never fail */ + pgdat->kswapd_failures = 0; +} + +#ifdef CONFIG_MEMCG +void lru_gen_rotate_memcg(struct lruvec *lruvec, int op) +{ + int seg; + int old, new; + int bin = prandom_u32_max(MEMCG_NR_BINS); + struct pglist_data *pgdat = lruvec_pgdat(lruvec); + + spin_lock(&pgdat->memcg_lru.lock); + + VM_WARN_ON_ONCE(hlist_nulls_unhashed(&lruvec->lrugen.list)); + + seg = 0; + new = old = lruvec->lrugen.gen; + + /* see the comment on MEMCG_NR_GENS */ + if (op == MEMCG_LRU_HEAD) + seg = MEMCG_LRU_HEAD; + else if (op == MEMCG_LRU_TAIL) + seg = MEMCG_LRU_TAIL; + else if (op == MEMCG_LRU_OLD) + new = get_memcg_gen(pgdat->memcg_lru.seq); + else if (op == MEMCG_LRU_YOUNG) + new = get_memcg_gen(pgdat->memcg_lru.seq + 1); + else + VM_WARN_ON_ONCE(true); + + hlist_nulls_del_rcu(&lruvec->lrugen.list); + + if (op == MEMCG_LRU_HEAD || op == MEMCG_LRU_OLD) + hlist_nulls_add_head_rcu(&lruvec->lrugen.list, &pgdat->memcg_lru.fifo[new][bin]); + else + hlist_nulls_add_tail_rcu(&lruvec->lrugen.list, &pgdat->memcg_lru.fifo[new][bin]); + + pgdat->memcg_lru.nr_memcgs[old]--; + pgdat->memcg_lru.nr_memcgs[new]++; + + lruvec->lrugen.gen = new; + WRITE_ONCE(lruvec->lrugen.seg, seg); + + if (!pgdat->memcg_lru.nr_memcgs[old] && old == get_memcg_gen(pgdat->memcg_lru.seq)) + WRITE_ONCE(pgdat->memcg_lru.seq, pgdat->memcg_lru.seq + 1); + + spin_unlock(&pgdat->memcg_lru.lock); } +#endif /****************************************************************************** * state change @@ -5639,11 +5862,11 @@ static int run_cmd(char cmd, int memcg_id, int nid, unsigned long seq, if (!mem_cgroup_disabled()) { rcu_read_lock(); + memcg = mem_cgroup_from_id(memcg_id); -#ifdef CONFIG_MEMCG - if (memcg && !css_tryget(&memcg->css)) + if (!mem_cgroup_tryget(memcg)) memcg = NULL; -#endif + rcu_read_unlock(); if (!memcg) @@ -5791,6 +6014,19 @@ void lru_gen_init_lruvec(struct lruvec *lruvec) } #ifdef CONFIG_MEMCG + +void lru_gen_init_pgdat(struct pglist_data *pgdat) +{ + int i, j; + + spin_lock_init(&pgdat->memcg_lru.lock); + + for (i = 0; i < MEMCG_NR_GENS; i++) { + for (j = 0; j < MEMCG_NR_BINS; j++) + INIT_HLIST_NULLS_HEAD(&pgdat->memcg_lru.fifo[i][j], i); + } +} + void lru_gen_init_memcg(struct mem_cgroup *memcg) { INIT_LIST_HEAD(&memcg->mm_list.fifo); @@ -5814,7 +6050,69 @@ void lru_gen_exit_memcg(struct mem_cgroup *memcg) } } } -#endif + +void lru_gen_online_memcg(struct mem_cgroup *memcg) +{ + int gen; + int nid; + int bin = prandom_u32_max(MEMCG_NR_BINS); + + for_each_node(nid) { + struct pglist_data *pgdat = NODE_DATA(nid); + struct lruvec *lruvec = get_lruvec(memcg, nid); + + 
spin_lock(&pgdat->memcg_lru.lock); + + VM_WARN_ON_ONCE(!hlist_nulls_unhashed(&lruvec->lrugen.list)); + + gen = get_memcg_gen(pgdat->memcg_lru.seq); + + hlist_nulls_add_tail_rcu(&lruvec->lrugen.list, &pgdat->memcg_lru.fifo[gen][bin]); + pgdat->memcg_lru.nr_memcgs[gen]++; + + lruvec->lrugen.gen = gen; + + spin_unlock(&pgdat->memcg_lru.lock); + } +} + +void lru_gen_offline_memcg(struct mem_cgroup *memcg) +{ + int nid; + + for_each_node(nid) { + struct lruvec *lruvec = get_lruvec(memcg, nid); + + lru_gen_rotate_memcg(lruvec, MEMCG_LRU_OLD); + } +} + +void lru_gen_release_memcg(struct mem_cgroup *memcg) +{ + int gen; + int nid; + + for_each_node(nid) { + struct pglist_data *pgdat = NODE_DATA(nid); + struct lruvec *lruvec = get_lruvec(memcg, nid); + + spin_lock(&pgdat->memcg_lru.lock); + + VM_WARN_ON_ONCE(hlist_nulls_unhashed(&lruvec->lrugen.list)); + + gen = lruvec->lrugen.gen; + + hlist_nulls_del_rcu(&lruvec->lrugen.list); + pgdat->memcg_lru.nr_memcgs[gen]--; + + if (!pgdat->memcg_lru.nr_memcgs[gen] && gen == get_memcg_gen(pgdat->memcg_lru.seq)) + WRITE_ONCE(pgdat->memcg_lru.seq, pgdat->memcg_lru.seq + 1); + + spin_unlock(&pgdat->memcg_lru.lock); + } +} + +#endif /* CONFIG_MEMCG */ static int __init init_lru_gen(void) { @@ -5841,6 +6139,10 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc { } +static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *sc) +{ +} + #endif /* CONFIG_LRU_GEN */ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) @@ -5854,7 +6156,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) bool proportional_reclaim; struct blk_plug plug; - if (lru_gen_enabled()) { + if (lru_gen_enabled() && !global_reclaim(sc)) { lru_gen_shrink_lruvec(lruvec, sc); return; } @@ -6097,6 +6399,11 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc) struct lruvec *target_lruvec; bool reclaimable = false; + if (lru_gen_enabled() && global_reclaim(sc)) { + lru_gen_shrink_node(pgdat, sc); + return; + } + target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat); again:
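The indexing scheme underlying this patch is compact enough to demonstrate in isolation. A userspace sketch of the two macros (the sample values are illustrative; MEMCG_NR_GENS and MEMCG_NR_BINS match the CONFIG_MEMCG definitions above):

#include <stdio.h>

#define MEMCG_NR_GENS 2
#define MEMCG_NR_BINS 8

#define get_memcg_gen(seq)	((seq) % MEMCG_NR_GENS)
#define get_memcg_bin(bin)	((bin) % MEMCG_NR_BINS)

int main(void)
{
	unsigned long seq = 4;

	/* The remainder of seq indexes the old generation; the young
	 * one is simply the other index. Incrementing seq swaps the
	 * roles without moving any memcg between lists. */
	printf("old gen %lu, young gen %lu\n",
	       get_memcg_gen(seq), get_memcg_gen(seq + 1));	/* 0 1 */

	/* Bins wrap, so a walker may start at a random bin and still
	 * cover all of them: ... 6, 7, 0, 1 ... */
	printf("bin after 7: %d\n", get_memcg_bin(7 + 1));	/* 0 */
	return 0;
}

The hlist_nulls choice ties into this: each head is initialized with its generation number as the nulls value (see lru_gen_init_pgdat() above), so when shrink_many() reaches the end of a bin it can compare get_nulls_value(pos) against the generation it started on and restart if it raced with lru_gen_rotate_memcg().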
From patchwork Thu Dec 1 22:39:23 2022
Date: Thu, 1 Dec 2022 15:39:23 -0700
In-Reply-To: <20221201223923.873696-1-yuzhao@google.com>
Message-Id: <20221201223923.873696-8-yuzhao@google.com>
References: <20221201223923.873696-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v1 7/8] mm: multi-gen LRU: clarify scan_control flags
From: Yu Zhao
To: Andrew Morton
Cc: Johannes Weiner, Jonathan Corbet, Michael Larabel, Michal Hocko, Mike Rapoport, Roman Gushchin, Suren Baghdasaryan, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-mm@google.com, Yu Zhao

Among the flags in scan_control:
1. sc->may_swap, which indicates a swap constraint due to memsw.max, is supported as usual.
2. sc->proactive, which indicates reclaim by memory.reclaim, may not opportunistically skip the aging path, since it is considered less latency sensitive.
3. !(sc->gfp_mask & __GFP_IO), which indicates an IO constraint, prioritizes the file LRU, since clean file folios are more likely to exist.
4. sc->may_writepage and sc->may_unmap, which indicate opportunistic reclaim, are rejected, since unmapped clean folios are already prioritized. Scanning for more of them is likely futile and can cause high reclaim latency when there is a large number of memcgs.

The rest are handled by the existing code.
Signed-off-by: Yu Zhao --- mm/vmscan.c | 53 +++++++++++++++++++++++++++-------------------------- 1 file changed, 27 insertions(+), 26 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 44506eb96c9d..39724e7ae837 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3191,6 +3191,9 @@ static int get_swappiness(struct lruvec *lruvec, struct scan_control *sc) struct mem_cgroup *memcg = lruvec_memcg(lruvec); struct pglist_data *pgdat = lruvec_pgdat(lruvec); + if (!sc->may_swap) + return 0; + if (!can_demote(pgdat->node_id, sc) && mem_cgroup_get_nr_swap_pages(memcg) < MIN_LRU_BATCH) return 0; @@ -4220,7 +4223,7 @@ static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_ } while (err == -EAGAIN); } -static struct lru_gen_mm_walk *set_mm_walk(struct pglist_data *pgdat) +static struct lru_gen_mm_walk *set_mm_walk(struct pglist_data *pgdat, bool force_alloc) { struct lru_gen_mm_walk *walk = current->reclaim_state->mm_walk; @@ -4228,7 +4231,7 @@ static struct lru_gen_mm_walk *set_mm_walk(struct pglist_data *pgdat) VM_WARN_ON_ONCE(walk); walk = &pgdat->mm_walk; - } else if (!pgdat && !walk) { + } else if (!walk && force_alloc) { VM_WARN_ON_ONCE(current_is_kswapd()); walk = kzalloc(sizeof(*walk), __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN); @@ -4414,7 +4417,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq, goto done; } - walk = set_mm_walk(NULL); + walk = set_mm_walk(NULL, true); if (!walk) { success = iterate_mm_list_nowalk(lruvec, max_seq); goto done; @@ -4483,8 +4486,6 @@ static bool lruvec_is_reclaimable(struct lruvec *lruvec, struct scan_control *sc struct mem_cgroup *memcg = lruvec_memcg(lruvec); DEFINE_MIN_SEQ(lruvec); - VM_WARN_ON_ONCE(sc->memcg_low_reclaim); - /* see the comment on lru_gen_folio */ gen = lru_gen_from_seq(min_seq[LRU_GEN_FILE]); birth = READ_ONCE(lruvec->lrugen.timestamps[gen]); @@ -4740,12 +4741,8 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca { bool success; - /* unmapping inhibited */ - if (!sc->may_unmap && folio_mapped(folio)) - return false; - /* swapping inhibited */ - if (!(sc->may_writepage && (sc->gfp_mask & __GFP_IO)) && + if (!(sc->gfp_mask & __GFP_IO) && (folio_test_dirty(folio) || (folio_test_anon(folio) && !folio_test_swapcache(folio)))) return false; @@ -4842,9 +4839,8 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc, __count_vm_events(PGSCAN_ANON + type, isolated); /* - * There might not be eligible pages due to reclaim_idx, may_unmap and - * may_writepage. Check the remaining to prevent livelock if it's not - * making progress. + * There might not be eligible folios due to reclaim_idx. Check the + * remaining to prevent livelock if it's not making progress. */ return isolated || !remaining ? 
scanned : 0; } @@ -5104,8 +5100,7 @@ static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, bool struct mem_cgroup *memcg = lruvec_memcg(lruvec); DEFINE_MAX_SEQ(lruvec); - if (mem_cgroup_below_min(memcg) || - (mem_cgroup_below_low(memcg) && !sc->memcg_low_reclaim)) + if (mem_cgroup_below_min(memcg)) return 0; if (!should_run_aging(lruvec, max_seq, sc, can_swap, &nr_to_scan)) @@ -5133,17 +5128,14 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) long nr_to_scan; unsigned long scanned = 0; unsigned long nr_to_reclaim = get_nr_to_reclaim(sc); + int swappiness = get_swappiness(lruvec, sc); + + /* clean file folios are more likely to exist */ + if (swappiness && !(sc->gfp_mask & __GFP_IO)) + swappiness = 1; while (true) { int delta; - int swappiness; - - if (sc->may_swap) - swappiness = get_swappiness(lruvec, sc); - else if (!cgroup_reclaim(sc) && get_swappiness(lruvec, sc)) - swappiness = 1; - else - swappiness = 0; nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness); if (nr_to_scan <= 0) @@ -5274,12 +5266,13 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc struct blk_plug plug; VM_WARN_ON_ONCE(global_reclaim(sc)); + VM_WARN_ON_ONCE(!sc->may_writepage || !sc->may_unmap); lru_add_drain(); blk_start_plug(&plug); - set_mm_walk(lruvec_pgdat(lruvec)); + set_mm_walk(NULL, sc->proactive); if (try_to_shrink_lruvec(lruvec, sc)) lru_gen_rotate_memcg(lruvec, MEMCG_LRU_YOUNG); @@ -5335,11 +5328,19 @@ static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control * VM_WARN_ON_ONCE(!global_reclaim(sc)); + /* + * Unmapped clean folios are already prioritized. Scanning for more of + * them is likely futile and can cause high reclaim latency when there + * is a large number of memcgs. 
+ */ + if (!sc->may_writepage || !sc->may_unmap) + return; + lru_add_drain(); blk_start_plug(&plug); - set_mm_walk(pgdat); + set_mm_walk(pgdat, sc->proactive); set_initial_priority(pgdat, sc); @@ -5926,7 +5927,7 @@ static ssize_t lru_gen_seq_write(struct file *file, const char __user *src, set_task_reclaim_state(current, &sc.reclaim_state); flags = memalloc_noreclaim_save(); blk_start_plug(&plug); - if (!set_mm_walk(NULL)) { + if (!set_mm_walk(NULL, true)) { err = -ENOMEM; goto done; }
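Taken together, the flag handling above collapses into a single swappiness value per try_to_shrink_lruvec() call. A reduced sketch, not the kernel implementation (the helper name is hypothetical; in the patch, the memcg-derived value comes from get_swappiness(), which now also returns 0 when !sc->may_swap):

/*
 * Illustrative only: how scan_control narrows swappiness.
 * memcg_swappiness is whatever vm.swappiness/memcg policy yields.
 */
static int effective_swappiness(const struct scan_control *sc,
				int memcg_swappiness)
{
	int swappiness = sc->may_swap ? memcg_swappiness : 0;

	/* Without __GFP_IO, dirty file folios and anon folios cannot be
	 * written back; clean file folios are more likely to exist, so
	 * keep anon scanning only as a last resort. */
	if (swappiness && !(sc->gfp_mask & __GFP_IO))
		swappiness = 1;

	return swappiness;
}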
From patchwork Thu Dec 1 22:39:24 2022
Date: Thu, 1 Dec 2022 15:39:24 -0700
In-Reply-To: <20221201223923.873696-1-yuzhao@google.com>
Message-Id: <20221201223923.873696-9-yuzhao@google.com>
References: <20221201223923.873696-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v1 8/8] mm: multi-gen LRU: simplify arch_has_hw_pte_young() check
From: Yu Zhao
To: Andrew Morton
Cc: Johannes Weiner, Jonathan Corbet, Michael Larabel, Michal Hocko, Mike Rapoport, Roman Gushchin, Suren Baghdasaryan, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-mm@google.com, Yu Zhao
Scanning page tables when hardware does not set the accessed bit has no real use cases. Signed-off-by: Yu Zhao --- mm/vmscan.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 39724e7ae837..5994592c55fd 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -4412,7 +4412,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq, * handful of PTEs. Spreading the work out over a period of time usually * is less efficient, but it avoids bursty page faults. */ - if (!force_scan && !(arch_has_hw_pte_young() && get_cap(LRU_GEN_MM_WALK))) { + if (!arch_has_hw_pte_young() || !get_cap(LRU_GEN_MM_WALK)) { success = iterate_mm_list_nowalk(lruvec, max_seq); goto done; }
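The rewritten condition is a De Morgan simplification plus one behavioral change: force_scan no longer overrides the hardware check. Side by side, as a standalone sketch rather than kernel code:

#include <stdbool.h>

/* old guard: a forced scan could still walk page tables even when the
 * hardware cannot set the accessed bit */
static bool skip_walk_old(bool force_scan, bool hw_pte_young, bool cap_mm_walk)
{
	return !force_scan && !(hw_pte_young && cap_mm_walk);
}

/* new guard: !(A && B) == !A || !B, with force_scan dropped, so the
 * walk is skipped whenever the accessed bit is software-managed */
static bool skip_walk_new(bool hw_pte_young, bool cap_mm_walk)
{
	return !hw_pte_young || !cap_mm_walk;
}

On architectures where arch_has_hw_pte_young() is false, such a walk would find nothing young anyway, which is why the commit message calls it a case with no real users.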