From patchwork Mon Aug 26 04:01:29 2024
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13777059
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: David Hildenbrand, Matthew Wilcox, Baolin Wang, Zi Yan, linux-mm@kvack.org, Kefeng Wang
Subject: [PATCH 1/4] mm: migrate: add folio_isolate_movable()
Date: Mon, 26 Aug 2024 12:01:29 +0800
Message-ID: <20240826040132.1202297-2-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240826040132.1202297-1-wangkefeng.wang@huawei.com>
References: <20240826040132.1202297-1-wangkefeng.wang@huawei.com>

Like isolate_lru_page(), make isolate_movable_page() a wrapper around
folio_isolate_movable(). Since isolate_movable_page() always fails on a
tail page, return immediately for a tail page in the wrapper; the wrapper
will be removed once all callers are converted to folio_isolate_movable().
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/migrate.h |  4 ++++
 mm/migrate.c            | 41 ++++++++++++++++++++++++-----------------
 2 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 002e49b2ebd9..0a33f751596c 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
 		  unsigned int *ret_succeeded);
 struct folio *alloc_migration_target(struct folio *src, unsigned long private);
 bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool folio_isolate_movable(struct folio *folio, isolate_mode_t mode);
 bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
@@ -92,6 +93,9 @@ static inline struct folio *alloc_migration_target(struct folio *src,
 	{ return NULL; }
 static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return false; }
+static inline bool folio_isolate_movable(struct folio *folio,
+		isolate_mode_t mode)
+	{ return false; }
 static inline bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
 	{ return false; }
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 4f55f4930fe8..cc1c268c3822 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -58,21 +58,20 @@
 
 #include "internal.h"
 
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+bool folio_isolate_movable(struct folio *folio, isolate_mode_t mode)
 {
-	struct folio *folio = folio_get_nontail_page(page);
 	const struct movable_operations *mops;
 
 	/*
-	 * Avoid burning cycles with pages that are yet under __free_pages(),
+	 * Avoid burning cycles with folios that are yet under __free_pages(),
 	 * or just got freed under us.
 	 *
-	 * In case we 'win' a race for a movable page being freed under us and
+	 * In case we 'win' a race for a movable folio being freed under us and
 	 * raise its refcount preventing __free_pages() from doing its job
-	 * the put_page() at the end of this block will take care of
-	 * release this page, thus avoiding a nasty leakage.
+	 * the folio_put() at the end of this block will take care of
+	 * release this folio, thus avoiding a nasty leakage.
 	 */
-	if (!folio)
+	if (!folio_try_get(folio))
 		goto out;
 
 	if (unlikely(folio_test_slab(folio)))
@@ -80,9 +79,9 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
 	smp_rmb();
 	/*
-	 * Check movable flag before taking the page lock because
-	 * we use non-atomic bitops on newly allocated page flags so
-	 * unconditionally grabbing the lock ruins page's owner side.
+	 * Check movable flag before taking the folio lock because
+	 * we use non-atomic bitops on newly allocated folio flags so
+	 * unconditionally grabbing the lock ruins folio's owner side.
 	 */
 	if (unlikely(!__folio_test_movable(folio)))
 		goto out_putfolio;
@@ -92,15 +91,15 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 		goto out_putfolio;
 
 	/*
-	 * As movable pages are not isolated from LRU lists, concurrent
-	 * compaction threads can race against page migration functions
-	 * as well as race against the releasing a page.
+	 * As movable folios are not isolated from LRU lists, concurrent
+	 * compaction threads can race against folio migration functions
+	 * as well as race against the releasing a folio.
 	 *
-	 * In order to avoid having an already isolated movable page
+	 * In order to avoid having an already isolated movable folio
 	 * being (wrongly) re-isolated while it is under migration,
-	 * or to avoid attempting to isolate pages being released,
-	 * lets be sure we have the page lock
-	 * before proceeding with the movable page isolation steps.
+	 * or to avoid attempting to isolate folios being released,
+	 * lets be sure we have the folio lock
+	 * before proceeding with the movable folio isolation steps.
 	 */
 	if (unlikely(!folio_trylock(folio)))
 		goto out_putfolio;
@@ -129,6 +128,14 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	return false;
 }
 
+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+{
+	if (PageTail(page))
+		return false;
+
+	return folio_isolate_movable((struct folio *)page, mode);
+}
+
 static void putback_movable_folio(struct folio *folio)
 {
 	const struct movable_operations *mops = folio_movable_ops(folio);
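
For readers following the conversion described in the commit message, below is a
minimal sketch of what a converted call site could look like once the wrapper is
gone. The function name, the list handling, and the use of ISOLATE_UNEVICTABLE
are illustrative assumptions, not part of this patch; real callers (for example
in mm/compaction.c) pass their own isolate_mode_t and list.

#include <linux/migrate.h>
#include <linux/mm.h>

/*
 * Hypothetical caller, for illustration only: a path that already holds a
 * folio can call folio_isolate_movable() directly instead of going through
 * the temporary isolate_movable_page() wrapper.
 */
static bool example_collect_movable(struct folio *folio,
				    struct list_head *migrate_list)
{
	/*
	 * Page-based style being phased out:
	 *	isolate_movable_page(&folio->page, ISOLATE_UNEVICTABLE)
	 * Folio-based style after conversion:
	 */
	if (!folio_isolate_movable(folio, ISOLATE_UNEVICTABLE))
		return false;

	/* Isolation succeeded; queue the folio for migration. */
	list_add_tail(&folio->lru, migrate_list);
	return true;
}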