From patchwork Mon Jun 19 11:07:17 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13284332
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH -next v2 1/2] mm: compaction: convert to use a folio in isolate_migratepages_block()
Date: Mon, 19 Jun 2023 19:07:17 +0800
Message-ID: <20230619110718.65679-1-wangkefeng.wang@huawei.com>
Directly use a folio instead of page_folio() when a page is successfully isolated (hugepage and movable page) and after folio_get_nontail_page(), which removes several calls to compound_head().
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Baolin Wang
---
v2:
- update comments and use node_stat_mod_folio, per Matthew Wilcox
- add missed PageLRU conversion and rebase on next-20230619

 mm/compaction.c | 84 ++++++++++++++++++++++++++-----------------------
 1 file changed, 44 insertions(+), 40 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 6149a2d324be..0334eefe4bfa 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -795,6 +795,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	struct lruvec *lruvec;
 	unsigned long flags = 0;
 	struct lruvec *locked = NULL;
+	struct folio *folio = NULL;
 	struct page *page = NULL, *valid_page = NULL;
 	struct address_space *mapping;
 	unsigned long start_pfn = low_pfn;
@@ -891,7 +892,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!valid_page && pageblock_aligned(low_pfn)) {
 			if (!isolation_suitable(cc, page)) {
 				low_pfn = end_pfn;
-				page = NULL;
+				folio = NULL;
 				goto isolate_abort;
 			}
 			valid_page = page;
@@ -923,7 +924,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 * Hugepage was successfully isolated and placed
 			 * on the cc->migratepages list.
 			 */
-			low_pfn += compound_nr(page) - 1;
+			folio = page_folio(page);
+			low_pfn += folio_nr_pages(folio) - 1;
 			goto isolate_success_no_list;
 		}
 
@@ -991,8 +993,10 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 				locked = NULL;
 			}
 
-			if (isolate_movable_page(page, mode))
+			if (isolate_movable_page(page, mode)) {
+				folio = page_folio(page);
 				goto isolate_success;
+			}
 		}
 
 		goto isolate_fail;
@@ -1003,7 +1007,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 * sure the page is not being freed elsewhere -- the
 		 * page release code relies on it.
 		 */
-		if (unlikely(!get_page_unless_zero(page)))
+		folio = folio_get_nontail_page(page);
+		if (unlikely(!folio))
 			goto isolate_fail;
 
 		/*
@@ -1011,8 +1016,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 * so avoid taking lru_lock and isolating it unnecessarily in an
 		 * admittedly racy check.
 		 */
-		mapping = page_mapping(page);
-		if (!mapping && (page_count(page) - 1) > total_mapcount(page))
+		mapping = folio_mapping(folio);
+		if (!mapping && (folio_ref_count(folio) - 1) > folio_mapcount(folio))
 			goto isolate_fail_put;
 
 		/*
@@ -1023,11 +1028,11 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			goto isolate_fail_put;
 
 		/* Only take pages on LRU: a check now makes later tests safe */
-		if (!PageLRU(page))
+		if (!folio_test_lru(folio))
 			goto isolate_fail_put;
 
 		/* Compaction might skip unevictable pages but CMA takes them */
-		if (!(mode & ISOLATE_UNEVICTABLE) && PageUnevictable(page))
+		if (!(mode & ISOLATE_UNEVICTABLE) && folio_test_unevictable(folio))
 			goto isolate_fail_put;
 
 		/*
@@ -1036,10 +1041,10 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 * it will be able to migrate without blocking - clean pages
 		 * for the most part. PageWriteback would require blocking.
 		 */
-		if ((mode & ISOLATE_ASYNC_MIGRATE) && PageWriteback(page))
+		if ((mode & ISOLATE_ASYNC_MIGRATE) && folio_test_writeback(folio))
 			goto isolate_fail_put;
 
-		if ((mode & ISOLATE_ASYNC_MIGRATE) && PageDirty(page)) {
+		if ((mode & ISOLATE_ASYNC_MIGRATE) && folio_test_dirty(folio)) {
 			bool migrate_dirty;
 
 			/*
@@ -1051,22 +1056,22 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 * the page lock until after the page is removed
 			 * from the page cache.
 			 */
-			if (!trylock_page(page))
+			if (!folio_trylock(folio))
 				goto isolate_fail_put;
 
-			mapping = page_mapping(page);
+			mapping = folio_mapping(folio);
 			migrate_dirty = !mapping ||
 					mapping->a_ops->migrate_folio;
-			unlock_page(page);
+			folio_unlock(folio);
 			if (!migrate_dirty)
 				goto isolate_fail_put;
 		}
 
-		/* Try isolate the page */
-		if (!TestClearPageLRU(page))
+		/* Try isolate the folio */
+		if (!folio_test_clear_lru(folio))
 			goto isolate_fail_put;
 
-		lruvec = folio_lruvec(page_folio(page));
+		lruvec = folio_lruvec(folio);
 
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
@@ -1076,7 +1081,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 
 			locked = lruvec;
-			lruvec_memcg_debug(lruvec, page_folio(page));
+			lruvec_memcg_debug(lruvec, folio);
 
 			/*
 			 * Try get exclusive access under lock. If marked for
@@ -1092,34 +1097,33 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			}
 
 			/*
-			 * Page become compound since the non-locked check,
-			 * and it's on LRU. It can only be a THP so the order
-			 * is safe to read and it's 0 for tail pages.
+			 * folio become large since the non-locked check,
+			 * and it's on LRU.
 			 */
-			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
-				low_pfn += compound_nr(page) - 1;
-				nr_scanned += compound_nr(page) - 1;
-				SetPageLRU(page);
+			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
+				low_pfn += folio_nr_pages(folio) - 1;
+				nr_scanned += folio_nr_pages(folio) - 1;
+				folio_set_lru(folio);
 				goto isolate_fail_put;
 			}
 		}
 
-		/* The whole page is taken off the LRU; skip the tail pages. */
-		if (PageCompound(page))
-			low_pfn += compound_nr(page) - 1;
+		/* The folio is taken off the LRU */
+		if (folio_test_large(folio))
+			low_pfn += folio_nr_pages(folio) - 1;
 
 		/* Successfully isolated */
-		del_page_from_lru_list(page, lruvec);
-		mod_node_page_state(page_pgdat(page),
-				NR_ISOLATED_ANON + page_is_file_lru(page),
-				thp_nr_pages(page));
+		lruvec_del_folio(lruvec, folio);
+		node_stat_mod_folio(folio,
+				NR_ISOLATED_ANON + folio_is_file_lru(folio),
+				folio_nr_pages(folio));
 
 isolate_success:
-		list_add(&page->lru, &cc->migratepages);
+		list_add(&folio->lru, &cc->migratepages);
 isolate_success_no_list:
-		cc->nr_migratepages += compound_nr(page);
-		nr_isolated += compound_nr(page);
-		nr_scanned += compound_nr(page) - 1;
+		cc->nr_migratepages += folio_nr_pages(folio);
+		nr_isolated += folio_nr_pages(folio);
+		nr_scanned += folio_nr_pages(folio) - 1;
 
 		/*
 		 * Avoid isolating too much unless this block is being
@@ -1141,7 +1145,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			unlock_page_lruvec_irqrestore(locked, flags);
 			locked = NULL;
 		}
-		put_page(page);
+		folio_put(folio);
 
 isolate_fail:
 		if (!skip_on_failure && ret != -ENOMEM)
@@ -1182,14 +1186,14 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	if (unlikely(low_pfn > end_pfn))
 		low_pfn = end_pfn;
 
-	page = NULL;
+	folio = NULL;
 
 isolate_abort:
 	if (locked)
 		unlock_page_lruvec_irqrestore(locked, flags);
-	if (page) {
-		SetPageLRU(page);
-		put_page(page);
+	if (folio) {
+		folio_set_lru(folio);
+		folio_put(folio);
 	}
 
 	/*