From patchwork Sun Apr  2 10:42:34 2023
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 13197363
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Dave Hansen, Johannes Weiner, Matthew Wilcox, Mel Gorman, Yu Zhao,
    Wei Xu, Guru Anbalagane, "Aneesh Kumar K.V"
Subject: [RFC PATCH v1 1/7] mm: Move some code around so that next patch is simpler
Date: Sun,  2 Apr 2023 16:12:34 +0530
Message-Id: <20230402104240.1734931-2-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>
References: <20230402104240.1734931-1-aneesh.kumar@linux.ibm.com>

Move lru_gen_add_folio() to a .c file. A later patch will add an
arch-specific mapping of page access count to generation and use it when
adding a folio to a lruvec; moving the function out of the header makes
that change simpler.

No functional change in this patch.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 include/linux/mm_inline.h |  47 +-------------
 mm/vmscan.c               | 127 ++++++++++++++++++++++++++------------
 2 files changed, 88 insertions(+), 86 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ff3f3f23f649..4dc2ab95d612 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -217,52 +217,7 @@ static inline void lru_gen_update_size(struct lruvec *lruvec, struct folio *foli
 	VM_WARN_ON_ONCE(lru_gen_is_active(lruvec, old_gen) &&
 			!lru_gen_is_active(lruvec, new_gen));
 }
 
-static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
-{
-	unsigned long seq;
-	unsigned long flags;
-	int gen = folio_lru_gen(folio);
-	int type = folio_is_file_lru(folio);
-	int zone = folio_zonenum(folio);
-	struct lru_gen_struct *lrugen = &lruvec->lrugen;
-
-	VM_WARN_ON_ONCE_FOLIO(gen != -1, folio);
-
-	if (folio_test_unevictable(folio) || !lrugen->enabled)
-		return false;
-	/*
-	 * There are three common cases for this page:
-	 * 1. If it's hot, e.g., freshly faulted in or previously hot and
-	 *    migrated, add it to the youngest generation.
-	 * 2. If it's cold but can't be evicted immediately, i.e., an anon page
-	 *    not in swapcache or a dirty page pending writeback, add it to the
-	 *    second oldest generation.
-	 * 3. Everything else (clean, cold) is added to the oldest generation.
-	 */
-	if (folio_test_active(folio))
-		seq = lrugen->max_seq;
-	else if ((type == LRU_GEN_ANON && !folio_test_swapcache(folio)) ||
-		 (folio_test_reclaim(folio) &&
-		  (folio_test_dirty(folio) || folio_test_writeback(folio))))
-		seq = lrugen->min_seq[type] + 1;
-	else
-		seq = lrugen->min_seq[type];
-
-	gen = lru_gen_from_seq(seq);
-	flags = (gen + 1UL) << LRU_GEN_PGOFF;
-	/* see the comment on MIN_NR_GENS about PG_active */
-	set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags);
-
-	lru_gen_update_size(lruvec, folio, -1, gen);
-	/* for folio_rotate_reclaimable() */
-	if (reclaiming)
-		list_add_tail(&folio->lru, &lrugen->lists[gen][type][zone]);
-	else
-		list_add(&folio->lru, &lrugen->lists[gen][type][zone]);
-
-	return true;
-}
-
+bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming);
 static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
 {
 	unsigned long flags;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5b7b8d4f5297..f47d80ae77ef 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3737,6 +3737,47 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai
 	return new_gen;
 }
 
+static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned long addr)
+{
+	unsigned long pfn = pte_pfn(pte);
+
+	VM_WARN_ON_ONCE(addr < vma->vm_start || addr >= vma->vm_end);
+
+	if (!pte_present(pte) || is_zero_pfn(pfn))
+		return -1;
+
+	if (WARN_ON_ONCE(pte_devmap(pte) || pte_special(pte)))
+		return -1;
+
+	if (WARN_ON_ONCE(!pfn_valid(pfn)))
+		return -1;
+
+	return pfn;
+}
+
+static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
+				   struct pglist_data *pgdat, bool can_swap)
+{
+	struct folio *folio;
+
+	/* try to avoid unnecessary memory loads */
+	if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
+		return NULL;
+
+	folio = pfn_folio(pfn);
+	if (folio_nid(folio) != pgdat->node_id)
+		return NULL;
+
+	if (folio_memcg_rcu(folio) != memcg)
+		return NULL;
+
+	/* file VMAs can contain anon pages from COW */
+	if (!folio_is_file_lru(folio) && !can_swap)
+		return NULL;
+
+	return folio;
+}
+
 static void update_batch_size(struct lru_gen_mm_walk *walk, struct folio *folio,
 			      int old_gen, int new_gen)
 {
@@ -3843,23 +3884,6 @@ static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk
 	return false;
 }
 
-static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned long addr)
-{
-	unsigned long pfn = pte_pfn(pte);
-
-	VM_WARN_ON_ONCE(addr < vma->vm_start || addr >= vma->vm_end);
-
-	if (!pte_present(pte) || is_zero_pfn(pfn))
-		return -1;
-
-	if (WARN_ON_ONCE(pte_devmap(pte) || pte_special(pte)))
-		return -1;
-
-	if (WARN_ON_ONCE(!pfn_valid(pfn)))
-		return -1;
-
-	return pfn;
-}
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned long addr)
@@ -3881,29 +3905,6 @@ static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned
 }
 #endif
 
-static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
-				   struct pglist_data *pgdat, bool can_swap)
-{
-	struct folio *folio;
-
-	/* try to avoid unnecessary memory loads */
-	if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
-		return NULL;
-
-	folio = pfn_folio(pfn);
-	if (folio_nid(folio) != pgdat->node_id)
-		return NULL;
-
-	if (folio_memcg_rcu(folio) != memcg)
-		return NULL;
-
-	/* file VMAs can contain anon pages from COW */
-	if (!folio_is_file_lru(folio) && !can_swap)
-		return NULL;
-
-	return folio;
-}
-
 static bool suitable_to_scan(int total, int young)
 {
 	int n = clamp_t(int, cache_line_size() / sizeof(pte_t), 2, 8);
@@ -5252,6 +5253,52 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc
 	blk_finish_plug(&plug);
 }
 
+bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
+{
+	unsigned long seq;
+	unsigned long flags;
+	int gen = folio_lru_gen(folio);
+	int type = folio_is_file_lru(folio);
+	int zone = folio_zonenum(folio);
+	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+
+	VM_WARN_ON_ONCE_FOLIO(gen != -1, folio);
+
+	if (folio_test_unevictable(folio) || !lrugen->enabled)
+		return false;
+	/*
+	 * There are three common cases for this page:
+	 * 1. If it's hot, e.g., freshly faulted in or previously hot and
+	 *    migrated, add it to the youngest generation.
+	 * 2. If it's cold but can't be evicted immediately, i.e., an anon page
+	 *    not in swapcache or a dirty page pending writeback, add it to the
+	 *    second oldest generation.
+	 * 3. Everything else (clean, cold) is added to the oldest generation.
+	 */
+	if (folio_test_active(folio))
+		seq = lrugen->max_seq;
+	else if ((type == LRU_GEN_ANON && !folio_test_swapcache(folio)) ||
+		 (folio_test_reclaim(folio) &&
+		  (folio_test_dirty(folio) || folio_test_writeback(folio))))
+		seq = lrugen->min_seq[type] + 1;
+	else
+		seq = lrugen->min_seq[type];
+
+	gen = lru_gen_from_seq(seq);
+	flags = (gen + 1UL) << LRU_GEN_PGOFF;
+	/* see the comment on MIN_NR_GENS about PG_active */
+	set_mask_bits(&folio->flags, LRU_GEN_MASK | BIT(PG_active), flags);
+
+	lru_gen_update_size(lruvec, folio, -1, gen);
+	/* for folio_rotate_reclaimable() */
+	if (reclaiming)
+		list_add_tail(&folio->lru, &lrugen->lists[gen][type][zone]);
+	else
+		list_add(&folio->lru, &lrugen->lists[gen][type][zone]);
+
+	return true;
+}
+
 /******************************************************************************
  *                          state change
  ******************************************************************************/
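For readers following the generation choice that lru_gen_add_folio() makes, the three-way decision described in its comment can be modeled as a small standalone sketch. This is illustrative only, not kernel code: the type and function names here (folio_state, pick_seq) are invented for the example, and the folio's state is reduced to plain flags with the min/max sequence numbers passed in directly.

```c
#include <stdbool.h>

/* Simplified stand-in for the folio state lru_gen_add_folio() inspects. */
struct folio_state {
	bool active;             /* hot: freshly faulted in or recently referenced */
	bool anon;               /* anonymous (not file-backed) */
	bool swapcache;          /* anon page already in swapcache */
	bool reclaim;            /* PG_reclaim set */
	bool dirty_or_writeback; /* dirty or pending writeback */
};

/*
 * Model of the generation choice:
 * 1. hot pages join the youngest generation (max_seq);
 * 2. cold pages that cannot be evicted immediately (anon not in
 *    swapcache, or dirty/writeback pending reclaim) join the
 *    second-oldest generation (min_seq + 1);
 * 3. everything else (clean, cold) joins the oldest (min_seq).
 */
static unsigned long pick_seq(const struct folio_state *f,
			      unsigned long min_seq, unsigned long max_seq)
{
	if (f->active)
		return max_seq;
	if ((f->anon && !f->swapcache) ||
	    (f->reclaim && f->dirty_or_writeback))
		return min_seq + 1;
	return min_seq;
}
```

The later patches in this series hook into exactly this decision point, which is why the function has to live in mm/vmscan.c rather than in a header.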