From patchwork Tue Jun 13 12:00:45 2023
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 13278670
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Yu Zhao, "T . J . Alumbaugh", "Aneesh Kumar K.V"
Alumbaugh" , "Aneesh Kumar K.V" Subject: [PATCH 1/3] mm/lru_gen: Move some code around so that next patch is simpler Date: Tue, 13 Jun 2023 17:30:45 +0530 Message-Id: <20230613120047.149573-1-aneesh.kumar@linux.ibm.com> X-Mailer: git-send-email 2.40.1 MIME-Version: 1.0 X-TM-AS-GCONF: 00 X-Proofpoint-GUID: ib33NBrmlB9Zk9Y0iew6vR6I9R3C4mY2 X-Proofpoint-ORIG-GUID: wyYBzSVhJMwJdmTI25H5OLdcF_YSXOsA X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.573,FMLib:17.11.176.26 definitions=2023-06-13_04,2023-06-12_02,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 adultscore=0 priorityscore=1501 mlxlogscore=999 spamscore=0 malwarescore=0 clxscore=1015 phishscore=0 suspectscore=0 mlxscore=0 bulkscore=0 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2305260000 definitions=main-2306130102 X-Rspam-User: X-Stat-Signature: boujamges5fngabar961p4h6p4hbibso X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 5B2C0160054 X-HE-Tag: 1686657657-848471 X-HE-Meta: U2FsdGVkX1//zbJvsYL65m37v5RuqQTAPaKOsT/GEhlile2jfhCBzWNClGL8gvpI2/AKG1j+Xld5Ky+w5kmGVJp6YZMEexKH4NT0qlTGUCgE85u0tuMec0PFOCNeBAI7Ra9FW/sGkB6voRdAMOq4G4qSKxOs7sh4SaBtcr8GdrAme1AfN35N1/amoJRccmmDI3ku5BWWRWS9VZSkWAOfhsqZL+L/pQIwo+nsYFNFrNTPeLA+D4XzPQGzCIAfOLtW26r9lUXHDp3R8zD/Y0FrKwhfsDbadIj4qtdKYHQ7gq84nYC8cDSgxbp+T53hhEgi/s+2/sh+a9rYl74tWSC4KouvgJLeVGgoWdtarMmbD/MnLs7/CyC+6Youn3GjYZxwe2YnKOZ+lsHndnm/kAVGPtlDweBdU+7M2IB13zS728t6yXEj0LJULSjwWazBOJYnuwQTppYZd0Vsb84FgvHe2GVzmgJd3BVUq+2XWgnRkS/GBEAbZsAqnZEK0Ik0El9CTA2AS+29CSNWRvtuReWsP6vKQc6otAAXEUgSJ0uJe44bV3aWK003EGwjnJofVcNbt1thmuRIQPlEIdbKG6yBLlxey2CZnKdQnCjWI47tWhyblxC3uJ40204RCE+8XsR/u5wVQW8eoshBZ7a4Cv2R5qz+AT7H74NnHKEEwsDxto0BTL/en3d3ln6wuHIx1uWjegDt8CoKxnHgJcDsgcQjdgsOUK1J8EpyCBf1KkxEZ/Cw12AOT23e3bEyhSsfq3fOGibt4cABEw8XK/34orTPj/E0MLheU1Ezv4NOy9C1jovpCLEjipi2wsZCbOT9qMCm54LNkTl8FaXBcQuEqGX5pTbr3Jrbd11dDmcjQABzRyFphVcLgGNs6TjZp4ZDpQMRbjGXSBRsvmGBWrp8PZLJlgJOjdTEGJJxRwSWBntngBGYJQtz/WaJW7Fat512OeyLAptHkEbgkXER+cfjtMT IjnK3WOO 3IMsHQmxMOMXRR2v0g9Xp62m3d843P0tmX1tbo6EqsaMh3aR17KMGjfXz49ARG7dEafEKzM1X0Kat66ECU4wv/alqwJOkCUGdgu2iFEtY5BrmHn177HHLlQ2+RqeoUDgZSWgpQKQUxh/sWbyuz2jtklvYCBZc5OCy+FCp+XoyBKj3XC3bI7+lsk1RAQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Move lrur_gen_add_folio to .c. We will support arch specific mapping of page access count to generation in a later patch and will use that when adding folio to lruvec. This move enables that. No functional change in this patch. 
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 include/linux/mm_inline.h |  47 +----------
 mm/vmscan.c               | 172 ++++++++++++++++++++++++--------------
 2 files changed, 110 insertions(+), 109 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 0e1d239a882c..2a86dc4d96ab 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -217,52 +217,7 @@ static inline void lru_gen_update_size(struct lruvec *lruvec, struct folio *foli
 	VM_WARN_ON_ONCE(lru_gen_is_active(lruvec, old_gen) &&
 			!lru_gen_is_active(lruvec, new_gen));
 }
-static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
-{
-	unsigned long seq;
-	unsigned long flags;
-	int gen = folio_lru_gen(folio);
-	int type = folio_is_file_lru(folio);
-	int zone = folio_zonenum(folio);
-	struct lru_gen_folio *lrugen = &lruvec->lrugen;
-
-	VM_WARN_ON_ONCE_FOLIO(gen != -1, folio);
-
-	if (folio_test_unevictable(folio) || !lrugen->enabled)
-		return false;
-	/*
-	 * There are three common cases for this page:
-	 * 1. If it's hot, e.g., freshly faulted in or previously hot and
-	 *    migrated, add it to the youngest generation.
-	 * 2. If it's cold but can't be evicted immediately, i.e., an anon page
-	 *    not in swapcache or a dirty page pending writeback, add it to the
-	 *    second oldest generation.
-	 * 3. Everything else (clean, cold) is added to the oldest generation.
-	 */
-	if (folio_test_active(folio))
-		seq = lrugen->max_seq;
-	else if ((type == LRU_GEN_ANON && !folio_test_swapcache(folio)) ||
-		 (folio_test_reclaim(folio) &&
-		  (folio_test_dirty(folio) || folio_test_writeback(folio))))
-		seq = lrugen->min_seq[type] + 1;
-	else
-		seq = lrugen->min_seq[type];
-
-	gen = lru_gen_from_seq(seq);
-	flags = (gen + 1UL) << LRU_GEN_PGOFF;
-	/* see the comment on MIN_NR_GENS about PG_active */
-	set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags);
-
-	lru_gen_update_size(lruvec, folio, -1, gen);
-	/* for folio_rotate_reclaimable() */
-	if (reclaiming)
-		list_add_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
-	else
-		list_add(&folio->lru, &lrugen->folios[gen][type][zone]);
-
-	return true;
-}
-
+bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming);
 static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
 {
 	unsigned long flags;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6d0cd2840cf0..edfe073b475e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3748,29 +3748,6 @@ static bool positive_ctrl_err(struct ctrl_pos *sp, struct ctrl_pos *pv)
  *                          the aging
  ******************************************************************************/
 
-/* promote pages accessed through page tables */
-static int folio_update_gen(struct folio *folio, int gen)
-{
-	unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
-
-	VM_WARN_ON_ONCE(gen >= MAX_NR_GENS);
-	VM_WARN_ON_ONCE(!rcu_read_lock_held());
-
-	do {
-		/* lru_gen_del_folio() has isolated this page? */
-		if (!(old_flags & LRU_GEN_MASK)) {
-			/* for shrink_folio_list() */
-			new_flags = old_flags | BIT(PG_referenced);
-			continue;
-		}
-
-		new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_MASK | LRU_REFS_FLAGS);
-		new_flags |= (gen + 1UL) << LRU_GEN_PGOFF;
-	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
-
-	return ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
-}
-
 /* protect pages accessed multiple times through file descriptors */
 static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
 {
@@ -3801,6 +3778,70 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai
 	return new_gen;
 }
 
+static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned long addr)
+{
+	unsigned long pfn = pte_pfn(pte);
+
+	VM_WARN_ON_ONCE(addr < vma->vm_start || addr >= vma->vm_end);
+
+	if (!pte_present(pte) || is_zero_pfn(pfn))
+		return -1;
+
+	if (WARN_ON_ONCE(pte_devmap(pte) || pte_special(pte)))
+		return -1;
+
+	if (WARN_ON_ONCE(!pfn_valid(pfn)))
+		return -1;
+
+	return pfn;
+}
+
+static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
+				   struct pglist_data *pgdat, bool can_swap)
+{
+	struct folio *folio;
+
+	/* try to avoid unnecessary memory loads */
+	if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
+		return NULL;
+
+	folio = pfn_folio(pfn);
+	if (folio_nid(folio) != pgdat->node_id)
+		return NULL;
+
+	if (folio_memcg_rcu(folio) != memcg)
+		return NULL;
+
+	/* file VMAs can contain anon pages from COW */
+	if (!folio_is_file_lru(folio) && !can_swap)
+		return NULL;
+
+	return folio;
+}
+
+/* promote pages accessed through page tables */
+static int folio_update_gen(struct folio *folio, int gen)
+{
+	unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
+
+	VM_WARN_ON_ONCE(gen >= MAX_NR_GENS);
+	VM_WARN_ON_ONCE(!rcu_read_lock_held());
+
+	do {
+		/* lru_gen_del_folio() has isolated this page? */
+		if (!(old_flags & LRU_GEN_MASK)) {
+			/* for shrink_folio_list() */
+			new_flags = old_flags | BIT(PG_referenced);
+			continue;
+		}
+
+		new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_MASK | LRU_REFS_FLAGS);
+		new_flags |= (gen + 1UL) << LRU_GEN_PGOFF;
+	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
+
+	return ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
+}
+
 static void update_batch_size(struct lru_gen_mm_walk *walk, struct folio *folio,
 			      int old_gen, int new_gen)
 {
@@ -3910,23 +3951,6 @@ static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk
 	return false;
 }
 
-static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned long addr)
-{
-	unsigned long pfn = pte_pfn(pte);
-
-	VM_WARN_ON_ONCE(addr < vma->vm_start || addr >= vma->vm_end);
-
-	if (!pte_present(pte) || is_zero_pfn(pfn))
-		return -1;
-
-	if (WARN_ON_ONCE(pte_devmap(pte) || pte_special(pte)))
-		return -1;
-
-	if (WARN_ON_ONCE(!pfn_valid(pfn)))
-		return -1;
-
-	return pfn;
-}
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned long addr)
@@ -3948,29 +3972,6 @@ static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned
 }
 #endif
 
-static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
-				   struct pglist_data *pgdat, bool can_swap)
-{
-	struct folio *folio;
-
-	/* try to avoid unnecessary memory loads */
-	if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
-		return NULL;
-
-	folio = pfn_folio(pfn);
-	if (folio_nid(folio) != pgdat->node_id)
-		return NULL;
-
-	if (folio_memcg_rcu(folio) != memcg)
-		return NULL;
-
-	/* file VMAs can contain anon pages from COW */
-	if (!folio_is_file_lru(folio) && !can_swap)
-		return NULL;
-
-	return folio;
-}
-
 static bool suitable_to_scan(int total, int young)
 {
 	int n = clamp_t(int, cache_line_size() / sizeof(pte_t), 2, 8);
@@ -5557,6 +5558,51 @@ static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *
 	pgdat->kswapd_failures = 0;
 }
 
+bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
+{
+	unsigned long seq;
+	unsigned long flags;
+	int gen = folio_lru_gen(folio);
+	int type = folio_is_file_lru(folio);
+	int zone = folio_zonenum(folio);
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
+
+	VM_WARN_ON_ONCE_FOLIO(gen != -1, folio);
+
+	if (folio_test_unevictable(folio) || !lrugen->enabled)
+		return false;
+	/*
+	 * There are three common cases for this page:
+	 * 1. If it's hot, e.g., freshly faulted in or previously hot and
+	 *    migrated, add it to the youngest generation.
+	 * 2. If it's cold but can't be evicted immediately, i.e., an anon page
+	 *    not in swapcache or a dirty page pending writeback, add it to the
+	 *    second oldest generation.
+	 * 3. Everything else (clean, cold) is added to the oldest generation.
+	 */
+	if (folio_test_active(folio))
+		seq = lrugen->max_seq;
+	else if ((type == LRU_GEN_ANON && !folio_test_swapcache(folio)) ||
+		 (folio_test_reclaim(folio) &&
+		  (folio_test_dirty(folio) || folio_test_writeback(folio))))
+		seq = lrugen->min_seq[type] + 1;
+	else
+		seq = lrugen->min_seq[type];
+
+	gen = lru_gen_from_seq(seq);
+	flags = (gen + 1UL) << LRU_GEN_PGOFF;
+	/* see the comment on MIN_NR_GENS about PG_active */
+	set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags);
+
+	lru_gen_update_size(lruvec, folio, -1, gen);
+	/* for folio_rotate_reclaimable() */
+	if (reclaiming)
+		list_add_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
+	else
+		list_add(&folio->lru, &lrugen->folios[gen][type][zone]);
+
+	return true;
+}
 /******************************************************************************
  *                          state change
  ******************************************************************************/
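
For context (not part of this diff): the inline callers in
include/linux/mm_inline.h are unchanged by this patch; they try the
MGLRU path first and fall back to the classic LRU lists when MGLRU is
disabled. From mainline of roughly this era (a sketch; details may
differ):

	static __always_inline
	void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
	{
		enum lru_list lru = folio_lru_list(folio);

		/* MGLRU placed the folio in a generation; nothing more to do */
		if (lru_gen_add_folio(lruvec, folio, false))
			return;

		update_lru_size(lruvec, lru, folio_zonenum(folio),
				folio_nr_pages(folio));
		if (lru != LRU_UNEVICTABLE)
			list_add(&folio->lru, &lruvec->lists[lru]);
	}

Because this caller stays inline in a widely included header, only the
lru_gen_add_folio() declaration has to remain visible there; the
definition can now live next to the rest of the MGLRU code in
mm/vmscan.c.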