From patchwork Thu Dec  1 22:39:17 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Zhao <yuzhao@google.com>
X-Patchwork-Id: 13061872
Date: Thu, 1 Dec 2022 15:39:17 -0700
In-Reply-To: <20221201223923.873696-1-yuzhao@google.com>
Message-Id: <20221201223923.873696-2-yuzhao@google.com>
Mime-Version: 1.0
References: <20221201223923.873696-1-yuzhao@google.com>
X-Mailer: git-send-email 2.39.0.rc0.267.gcb52ba06e7-goog
Subject: [PATCH mm-unstable v1 1/8] mm: multi-gen LRU: rename lru_gen_struct
 to lru_gen_folio
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton
Cc: Johannes Weiner, Jonathan Corbet, Michael Larabel, Michal Hocko,
 Mike Rapoport, Roman Gushchin, Suren Baghdasaryan, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-mm@google.com, Yu Zhao
The new name lru_gen_folio will be more distinct from the coming
lru_gen_memcg.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h |  4 ++--
 include/linux/mmzone.h    |  6 +++---
 mm/vmscan.c               | 34 +++++++++++++++++-----------------
 mm/workingset.c           |  4 ++--
 4 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index e8ed225d8f7c..f63968bd7de5 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -178,7 +178,7 @@ static inline void lru_gen_update_size(struct lruvec *lruvec, struct folio *foli
 	int zone = folio_zonenum(folio);
 	int delta = folio_nr_pages(folio);
 	enum lru_list lru = type * LRU_INACTIVE_FILE;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	VM_WARN_ON_ONCE(old_gen != -1 && old_gen >= MAX_NR_GENS);
 	VM_WARN_ON_ONCE(new_gen != -1 && new_gen >= MAX_NR_GENS);
@@ -224,7 +224,7 @@ static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio,
 	int gen = folio_lru_gen(folio);
 	int type = folio_is_file_lru(folio);
 	int zone = folio_zonenum(folio);
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	VM_WARN_ON_ONCE_FOLIO(gen != -1, folio);
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5f74891556f3..bd3e4689f72d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -404,7 +404,7 @@ enum {
  * The number of pages in each generation is eventually consistent and therefore
  * can be transiently negative when reset_batch_size() is pending.
  */
-struct lru_gen_struct {
+struct lru_gen_folio {
 	/* the aging increments the youngest generation number */
 	unsigned long max_seq;
 	/* the eviction increments the oldest generation numbers */
@@ -461,7 +461,7 @@ struct lru_gen_mm_state {
 struct lru_gen_mm_walk {
 	/* the lruvec under reclaim */
 	struct lruvec *lruvec;
-	/* unstable max_seq from lru_gen_struct */
+	/* unstable max_seq from lru_gen_folio */
 	unsigned long max_seq;
 	/* the next address within an mm to scan */
 	unsigned long next_addr;
@@ -524,7 +524,7 @@ struct lruvec {
 	unsigned long			flags;
 #ifdef CONFIG_LRU_GEN
 	/* evictable pages divided into generations */
-	struct lru_gen_struct		lrugen;
+	struct lru_gen_folio		lrugen;
 	/* to concurrently iterate lru_gen_mm_list */
 	struct lru_gen_mm_state		mm_state;
 #endif
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9356a3ee639c..fcb4ac351f93 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3197,7 +3197,7 @@ static int get_nr_gens(struct lruvec *lruvec, int type)
 
 static bool __maybe_unused seq_is_valid(struct lruvec *lruvec)
 {
-	/* see the comment on lru_gen_struct */
+	/* see the comment on lru_gen_folio */
 	return get_nr_gens(lruvec, LRU_GEN_FILE) >= MIN_NR_GENS &&
 	       get_nr_gens(lruvec, LRU_GEN_FILE) <= get_nr_gens(lruvec, LRU_GEN_ANON) &&
 	       get_nr_gens(lruvec, LRU_GEN_ANON) <= MAX_NR_GENS;
@@ -3594,7 +3594,7 @@ struct ctrl_pos {
 static void read_ctrl_pos(struct lruvec *lruvec, int type, int tier, int gain,
 			  struct ctrl_pos *pos)
 {
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int hist = lru_hist_from_seq(lrugen->min_seq[type]);
 
 	pos->refaulted = lrugen->avg_refaulted[type][tier] +
@@ -3609,7 +3609,7 @@ static void read_ctrl_pos(struct lruvec *lruvec, int type, int tier, int gain,
 static void reset_ctrl_pos(struct lruvec *lruvec, int type, bool carryover)
 {
 	int hist, tier;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	bool clear = carryover ? NR_HIST_GENS == 1 : NR_HIST_GENS > 1;
 	unsigned long seq = carryover ? lrugen->min_seq[type] : lrugen->max_seq + 1;
 
@@ -3686,7 +3686,7 @@ static int folio_update_gen(struct folio *folio, int gen)
 static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
 {
 	int type = folio_is_file_lru(folio);
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
 	unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
 
@@ -3731,7 +3731,7 @@ static void update_batch_size(struct lru_gen_mm_walk *walk, struct folio *folio,
 static void reset_batch_size(struct lruvec *lruvec, struct lru_gen_mm_walk *walk)
 {
 	int gen, type, zone;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	walk->batched = 0;
 
@@ -4248,7 +4248,7 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 {
 	int zone;
 	int remaining = MAX_LRU_BATCH;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
 
 	if (type == LRU_GEN_ANON && !can_swap)
@@ -4284,7 +4284,7 @@ static bool try_to_inc_min_seq(struct lruvec *lruvec, bool can_swap)
 {
 	int gen, type, zone;
 	bool success = false;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	DEFINE_MIN_SEQ(lruvec);
 
 	VM_WARN_ON_ONCE(!seq_is_valid(lruvec));
@@ -4305,7 +4305,7 @@ static bool try_to_inc_min_seq(struct lruvec *lruvec, bool can_swap)
 			;
 	}
 
-	/* see the comment on lru_gen_struct */
+	/* see the comment on lru_gen_folio */
 	if (can_swap) {
 		min_seq[LRU_GEN_ANON] = min(min_seq[LRU_GEN_ANON], min_seq[LRU_GEN_FILE]);
 		min_seq[LRU_GEN_FILE] = max(min_seq[LRU_GEN_ANON], lrugen->min_seq[LRU_GEN_FILE]);
@@ -4327,7 +4327,7 @@ static void inc_max_seq(struct lruvec *lruvec, bool can_swap, bool force_scan)
 {
 	int prev, next;
 	int type, zone;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	spin_lock_irq(&lruvec->lru_lock);
 
@@ -4385,7 +4385,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 	bool success;
 	struct lru_gen_mm_walk *walk;
 	struct mm_struct *mm = NULL;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	VM_WARN_ON_ONCE(max_seq > READ_ONCE(lrugen->max_seq));
 
@@ -4450,7 +4450,7 @@ static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq, unsig
 	unsigned long old = 0;
 	unsigned long young = 0;
 	unsigned long total = 0;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 
 	for (type = !can_swap; type < ANON_AND_FILE; type++) {
@@ -4735,7 +4735,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
 	int delta = folio_nr_pages(folio);
 	int refs = folio_lru_refs(folio);
 	int tier = lru_tier_from_refs(refs);
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	VM_WARN_ON_ONCE_FOLIO(gen >= MAX_NR_GENS, folio);
 
@@ -4835,7 +4835,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 	int scanned = 0;
 	int isolated = 0;
 	int remaining = MAX_LRU_BATCH;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 
 	VM_WARN_ON_ONCE(!list_empty(list));
@@ -5235,7 +5235,7 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc
 
 static bool __maybe_unused state_is_valid(struct lruvec *lruvec)
 {
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	if (lrugen->enabled) {
 		enum lru_list lru;
@@ -5514,7 +5514,7 @@ static void lru_gen_seq_show_full(struct seq_file *m, struct lruvec *lruvec,
 	int i;
 	int type, tier;
 	int hist = lru_hist_from_seq(seq);
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	for (tier = 0; tier < MAX_NR_TIERS; tier++) {
 		seq_printf(m, " %10d", tier);
@@ -5564,7 +5564,7 @@ static int lru_gen_seq_show(struct seq_file *m, void *v)
 	unsigned long seq;
 	bool full = !debugfs_real_fops(m->file)->write;
 	struct lruvec *lruvec = v;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int nid = lruvec_pgdat(lruvec)->node_id;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	DEFINE_MAX_SEQ(lruvec);
@@ -5818,7 +5818,7 @@ void lru_gen_init_lruvec(struct lruvec *lruvec)
 {
 	int i;
 	int gen, type, zone;
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	lrugen->max_seq = MIN_NR_GENS + 1;
 	lrugen->enabled = lru_gen_enabled();
diff --git a/mm/workingset.c b/mm/workingset.c
index 1a86645b7b3c..fd666584515c 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -223,7 +223,7 @@ static void *lru_gen_eviction(struct folio *folio)
 	unsigned long token;
 	unsigned long min_seq;
 	struct lruvec *lruvec;
-	struct lru_gen_struct *lrugen;
+	struct lru_gen_folio *lrugen;
 	int type = folio_is_file_lru(folio);
 	int delta = folio_nr_pages(folio);
 	int refs = folio_lru_refs(folio);
@@ -252,7 +252,7 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 	unsigned long token;
 	unsigned long min_seq;
 	struct lruvec *lruvec;
-	struct lru_gen_struct *lrugen;
+	struct lru_gen_folio *lrugen;
 	struct mem_cgroup *memcg;
 	struct pglist_data *pgdat;
 	int type = folio_is_file_lru(folio);