From patchwork Mon Jun 12 14:34:13 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13276750
From: Kefeng Wang
To: 
CC: , , , , , Kefeng Wang
Subject: [PATCH -next 1/2] mm: compaction: convert to use a folio in isolate_migratepages_block()
Date: Mon, 12 Jun 2023 22:34:13 +0800
Message-ID: <20230612143414.186389-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
Use a folio directly instead of calling page_folio() once a page has been
successfully isolated (the hugepage and movable-page cases) and after
folio_get_nontail_page(), which removes several calls to compound_head().
Signed-off-by: Kefeng Wang
---
 mm/compaction.c | 71 ++++++++++++++++++++++++++-----------------------
 1 file changed, 38 insertions(+), 33 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 3398ef3a55fe..5d3f0aaa6785 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -831,6 +831,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	struct lruvec *lruvec;
 	unsigned long flags = 0;
 	struct lruvec *locked = NULL;
+	struct folio *folio = NULL;
 	struct page *page = NULL, *valid_page = NULL;
 	struct address_space *mapping;
 	unsigned long start_pfn = low_pfn;
@@ -927,7 +928,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!valid_page && pageblock_aligned(low_pfn)) {
 			if (!isolation_suitable(cc, page)) {
 				low_pfn = end_pfn;
-				page = NULL;
+				folio = NULL;
 				goto isolate_abort;
 			}
 			valid_page = page;
@@ -959,7 +960,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 * Hugepage was successfully isolated and placed
 			 * on the cc->migratepages list.
 			 */
-			low_pfn += compound_nr(page) - 1;
+			folio = page_folio(page);
+			low_pfn += folio_nr_pages(folio) - 1;
 			goto isolate_success_no_list;
 		}

@@ -1027,8 +1029,10 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 				locked = NULL;
 			}

-			if (isolate_movable_page(page, mode))
+			if (isolate_movable_page(page, mode)) {
+				folio = page_folio(page);
 				goto isolate_success;
+			}
 		}

 		goto isolate_fail;
@@ -1039,7 +1043,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 * sure the page is not being freed elsewhere -- the
 		 * page release code relies on it.
 		 */
-		if (unlikely(!get_page_unless_zero(page)))
+		folio = folio_get_nontail_page(page);
+		if (unlikely(!folio))
 			goto isolate_fail;

 		/*
@@ -1047,7 +1052,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 * from long term pinning preventing it from migrating,
 		 * so avoid taking lru_lock and isolating it unnecessarily.
 		 */
-		mapping = page_mapping(page);
+		mapping = folio_mapping(folio);
 		if (!cc->alloc_contig && page_has_extra_refs(page, mapping))
 			goto isolate_fail_put;

@@ -1063,7 +1068,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			goto isolate_fail_put;

 		/* Compaction might skip unevictable pages but CMA takes them */
-		if (!(mode & ISOLATE_UNEVICTABLE) && PageUnevictable(page))
+		if (!(mode & ISOLATE_UNEVICTABLE) && folio_test_unevictable(folio))
 			goto isolate_fail_put;

 		/*
@@ -1072,10 +1077,10 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 * it will be able to migrate without blocking - clean pages
 		 * for the most part. PageWriteback would require blocking.
 		 */
-		if ((mode & ISOLATE_ASYNC_MIGRATE) && PageWriteback(page))
+		if ((mode & ISOLATE_ASYNC_MIGRATE) && folio_test_writeback(folio))
 			goto isolate_fail_put;

-		if ((mode & ISOLATE_ASYNC_MIGRATE) && PageDirty(page)) {
+		if ((mode & ISOLATE_ASYNC_MIGRATE) && folio_test_dirty(folio)) {
 			bool migrate_dirty;

 			/*
@@ -1087,22 +1092,22 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 * the page lock until after the page is removed
 			 * from the page cache.
 			 */
-			if (!trylock_page(page))
+			if (!folio_trylock(folio))
 				goto isolate_fail_put;

-			mapping = page_mapping(page);
+			mapping = folio_mapping(folio);
 			migrate_dirty = !mapping || mapping->a_ops->migrate_folio;
-			unlock_page(page);
+			folio_unlock(folio);

 			if (!migrate_dirty)
 				goto isolate_fail_put;
 		}

 		/* Try isolate the page */
-		if (!TestClearPageLRU(page))
+		if (!folio_test_clear_lru(folio))
 			goto isolate_fail_put;

-		lruvec = folio_lruvec(page_folio(page));
+		lruvec = folio_lruvec(folio);

 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
@@ -1112,7 +1117,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 			locked = lruvec;

-			lruvec_memcg_debug(lruvec, page_folio(page));
+			lruvec_memcg_debug(lruvec, folio);

 			/*
 			 * Try get exclusive access under lock. If marked for
@@ -1132,30 +1137,30 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 * and it's on LRU. It can only be a THP so the order
 			 * is safe to read and it's 0 for tail pages.
 			 */
-			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
-				low_pfn += compound_nr(page) - 1;
-				nr_scanned += compound_nr(page) - 1;
-				SetPageLRU(page);
+			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
+				low_pfn += folio_nr_pages(folio) - 1;
+				nr_scanned += folio_nr_pages(folio) - 1;
+				folio_set_lru(folio);
 				goto isolate_fail_put;
 			}
 		}

 		/* The whole page is taken off the LRU; skip the tail pages. */
-		if (PageCompound(page))
-			low_pfn += compound_nr(page) - 1;
+		if (folio_test_large(folio))
+			low_pfn += folio_nr_pages(folio) - 1;

 		/* Successfully isolated */
-		del_page_from_lru_list(page, lruvec);
-		mod_node_page_state(page_pgdat(page),
-				NR_ISOLATED_ANON + page_is_file_lru(page),
-				thp_nr_pages(page));
+		lruvec_del_folio(lruvec, folio);
+		mod_node_page_state(folio_pgdat(folio),
+				NR_ISOLATED_ANON + folio_is_file_lru(folio),
+				folio_nr_pages(folio));

 isolate_success:
-		list_add(&page->lru, &cc->migratepages);
+		list_add(&folio->lru, &cc->migratepages);
 isolate_success_no_list:
-		cc->nr_migratepages += compound_nr(page);
-		nr_isolated += compound_nr(page);
-		nr_scanned += compound_nr(page) - 1;
+		cc->nr_migratepages += folio_nr_pages(folio);
+		nr_isolated += folio_nr_pages(folio);
+		nr_scanned += folio_nr_pages(folio) - 1;

 		/*
 		 * Avoid isolating too much unless this block is being
@@ -1177,7 +1182,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			unlock_page_lruvec_irqrestore(locked, flags);
 			locked = NULL;
 		}
-		put_page(page);
+		folio_put(folio);

 isolate_fail:
 		if (!skip_on_failure && ret != -ENOMEM)
@@ -1218,14 +1223,14 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	if (unlikely(low_pfn > end_pfn))
 		low_pfn = end_pfn;

-	page = NULL;
+	folio = NULL;

 isolate_abort:
 	if (locked)
 		unlock_page_lruvec_irqrestore(locked, flags);
-	if (page) {
-		SetPageLRU(page);
-		put_page(page);
+	if (folio) {
+		folio_set_lru(folio);
+		folio_put(folio);
 	}

 	/*