From patchwork Wed Sep 13 09:51:27 2023
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13382729
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, Zi Yan, Mike Kravetz, Kefeng Wang
Subject: [PATCH v3 4/8] mm: migrate: convert migrate_misplaced_page() to
 migrate_misplaced_folio()
Date: Wed, 13 Sep 2023 17:51:27 +0800
Message-ID: <20230913095131.2426871-5-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230913095131.2426871-1-wangkefeng.wang@huawei.com>
References: <20230913095131.2426871-1-wangkefeng.wang@huawei.com>
MIME-Version: 1.0

At present, NUMA balancing only supports base pages and PMD-mapped THP,
but we will expand it to support migration of large folios and
PTE-mapped THP in the future. It is better to make
migrate_misplaced_page() take a folio instead of a page and to rename
it to migrate_misplaced_folio(). This is a preparation step; it also
removes several compound_head() calls.
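To make the benefit concrete, here is a minimal userspace C sketch of
the pattern (the struct layouts and helpers below are toy stand-ins,
not the kernel's real types): page-based helpers must each re-resolve
the head page internally, while folio-based helpers receive the head
that the caller already resolved once via page_folio().

#include <stdio.h>

/* Toy stand-ins for the kernel's struct page / struct folio. */
struct folio { int nid; int nr_pages; };
struct page { struct folio *head; };

/*
 * Toy analogue of page_folio()/compound_head(): resolve a (possibly
 * tail) page to its head.  Pretend this lookup costs something.
 */
static struct folio *page_folio(struct page *page)
{
	return page->head;
}

/* Page-based style: every helper re-resolves the head internally. */
static int page_nid(struct page *page) { return page_folio(page)->nid; }
static int page_nr(struct page *page)  { return page_folio(page)->nr_pages; }

/*
 * Folio-based style: the caller resolves the head once at the API
 * boundary, so the helpers need no further lookups.
 */
static int folio_nid(struct folio *folio) { return folio->nid; }
static int folio_nr(struct folio *folio)  { return folio->nr_pages; }

int main(void)
{
	struct folio f = { .nid = 1, .nr_pages = 512 };
	struct page p = { .head = &f };

	/* Old style: two hidden head lookups. */
	printf("page:  nid=%d nr=%d\n", page_nid(&p), page_nr(&p));

	/* New style: one explicit lookup, then folio operations only. */
	struct folio *folio = page_folio(&p);
	printf("folio: nid=%d nr=%d\n", folio_nid(folio), folio_nr(folio));
	return 0;
}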
Reviewed-by: Zi Yan
Signed-off-by: Kefeng Wang
---
 include/linux/migrate.h |  4 ++--
 mm/huge_memory.c        |  2 +-
 mm/memory.c             |  2 +-
 mm/migrate.c            | 39 +++++++++++++++++++++------------------
 4 files changed, 25 insertions(+), 22 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 711dd9412561..2ce13e8a309b 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -142,10 +142,10 @@ const struct movable_operations *page_movable_ops(struct page *page)
 }
 
 #ifdef CONFIG_NUMA_BALANCING
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
+int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
 			   int node);
 #else
-static inline int migrate_misplaced_page(struct page *page,
+static inline int migrate_misplaced_folio(struct folio *folio,
 					 struct vm_area_struct *vma, int node)
 {
 	return -EAGAIN; /* can't migrate now */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3e9443082035..36075e428a37 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1540,7 +1540,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	spin_unlock(vmf->ptl);
 	writable = false;
 
-	migrated = migrate_misplaced_page(page, vma, target_nid);
+	migrated = migrate_misplaced_folio(page_folio(page), vma, target_nid);
 	if (migrated) {
 		flags |= TNF_MIGRATED;
 		page_nid = target_nid;
diff --git a/mm/memory.c b/mm/memory.c
index 4c9e6fc2dcf7..983a40f8ee62 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4815,7 +4815,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	writable = false;
 
 	/* Migrate to the requested node */
-	if (migrate_misplaced_page(page, vma, target_nid)) {
+	if (migrate_misplaced_folio(page_folio(page), vma, target_nid)) {
 		page_nid = target_nid;
 		flags |= TNF_MIGRATED;
 	} else {
diff --git a/mm/migrate.c b/mm/migrate.c
index 281eafdf8e63..caf60b58b44c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2513,55 +2513,58 @@ static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
 }
 
 /*
- * Attempt to migrate a misplaced page to the specified destination
+ * Attempt to migrate a misplaced folio to the specified destination
  * node. Caller is expected to have an elevated reference count on
- * the page that will be dropped by this function before returning.
+ * the folio that will be dropped by this function before returning.
  */
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
-			   int node)
+int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
+			    int node)
 {
 	pg_data_t *pgdat = NODE_DATA(node);
 	int isolated;
 	int nr_remaining;
 	unsigned int nr_succeeded;
 	LIST_HEAD(migratepages);
-	int nr_pages = thp_nr_pages(page);
+	int nr_pages = folio_nr_pages(folio);
 
 	/*
-	 * Don't migrate file pages that are mapped in multiple processes
+	 * Don't migrate file folios that are mapped in multiple processes
 	 * with execute permissions as they are probably shared libraries.
+	 * To check if the folio is shared, ideally we want to make sure
+	 * every page is mapped to the same process. Doing that is very
+	 * expensive, so check the estimated mapcount of the folio instead.
 	 */
-	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
+	if (folio_estimated_sharers(folio) != 1 && folio_is_file_lru(folio) &&
 	    (vma->vm_flags & VM_EXEC))
 		goto out;
 
 	/*
-	 * Also do not migrate dirty pages as not all filesystems can move
-	 * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
+	 * Also do not migrate dirty folios as not all filesystems can move
+	 * dirty folios in MIGRATE_ASYNC mode which is a waste of cycles.
 	 */
-	if (page_is_file_lru(page) && PageDirty(page))
+	if (folio_is_file_lru(folio) && folio_test_dirty(folio))
 		goto out;
 
-	isolated = numamigrate_isolate_folio(pgdat, page_folio(page));
+	isolated = numamigrate_isolate_folio(pgdat, folio);
 	if (!isolated)
 		goto out;
 
-	list_add(&page->lru, &migratepages);
+	list_add(&folio->lru, &migratepages);
 	nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
 				     NULL, node, MIGRATE_ASYNC,
 				     MR_NUMA_MISPLACED, &nr_succeeded);
 	if (nr_remaining) {
 		if (!list_empty(&migratepages)) {
-			list_del(&page->lru);
-			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-					page_is_file_lru(page), -nr_pages);
-			putback_lru_page(page);
+			list_del(&folio->lru);
+			node_stat_mod_folio(folio, NR_ISOLATED_ANON +
+					folio_is_file_lru(folio), -nr_pages);
+			folio_putback_lru(folio);
 		}
 		isolated = 0;
 	}
 	if (nr_succeeded) {
 		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
-		if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+		if (!node_is_toptier(folio_nid(folio)) && node_is_toptier(node))
 			mod_node_page_state(pgdat, PGPROMOTE_SUCCESS,
 					    nr_succeeded);
 	}
@@ -2569,7 +2572,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	return isolated;
 
 out:
-	put_page(page);
+	folio_put(folio);
 	return 0;
 }
 #endif /* CONFIG_NUMA_BALANCING */
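
A side note on one idiom in the hunk above: node_stat_mod_folio(folio,
NR_ISOLATED_ANON + folio_is_file_lru(folio), -nr_pages) works because
NR_ISOLATED_FILE directly follows NR_ISOLATED_ANON in the node stat
enum, so the boolean result of folio_is_file_lru() selects between the
two counters. A standalone C sketch of that idiom (the enum and helper
below are illustrative stand-ins, not the kernel's real definitions):

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy stand-in for the kernel's node_stat_item enum: what matters is
 * that the FILE counter immediately follows the ANON one.
 */
enum node_stat { NR_ISOLATED_ANON, NR_ISOLATED_FILE, NR_STATS };

static long stats[NR_STATS];

/* Toy analogue of node_stat_mod_folio(). */
static void stat_mod(enum node_stat item, long delta)
{
	stats[item] += delta;
}

int main(void)
{
	bool is_file = true;   /* pretend folio_is_file_lru() returned true */
	long nr_pages = 512;

	/*
	 * The bool (0 or 1) offsets from the ANON counter, so a single
	 * expression picks the right counter for either LRU type.
	 */
	stat_mod(NR_ISOLATED_ANON + is_file, -nr_pages);

	printf("anon=%ld file=%ld\n",
	       stats[NR_ISOLATED_ANON], stats[NR_ISOLATED_FILE]);
	return 0;
}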