From patchwork Tue Dec 27 00:28:57 2022
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 13081976
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Zi Yan,
    Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox, Bharata B Rao,
    Alistair Popple, haoxin
Subject: [PATCH 6/8] migrate_pages: move migrate_folio_done() and migrate_folio_unmap()
Date: Tue, 27 Dec 2022 08:28:57 +0800
Message-Id: <20221227002859.27740-7-ying.huang@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20221227002859.27740-1-ying.huang@intel.com>
References: <20221227002859.27740-1-ying.huang@intel.com>
MIME-Version: 1.0
Just move the position of two functions.  There is no functionality
change.  This makes the next patch easier to review by putting the code
near its final position in that patch.

Signed-off-by: "Huang, Ying"
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
Cc: Bharata B Rao
Cc: Alistair Popple
Cc: haoxin
---
 mm/migrate.c | 136 +++++++++++++++++++++++++--------------------------
 1 file changed, 68 insertions(+), 68 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index dd68c3de3da8..70b987391296 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1051,6 +1051,23 @@ static void migrate_folio_undo_dst(struct folio *dst,
 	folio_put(dst);
 }
 
+static void migrate_folio_done(struct folio *src,
+			       enum migrate_reason reason)
+{
+	/*
+	 * Compaction can migrate also non-LRU pages which are
+	 * not accounted to NR_ISOLATED_*. They can be recognized
+	 * as __PageMovable
+	 */
+	if (likely(!__folio_test_movable(src)))
+		mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
+				    folio_is_file_lru(src), -folio_nr_pages(src));
+
+	if (reason != MR_MEMORY_FAILURE)
+		/* We release the page in page_handle_poison. */
+		folio_put(src);
+}
+
 static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 				 int force, bool force_lock, enum migrate_mode mode)
 {
@@ -1186,6 +1203,57 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 	return rc;
 }
 
+/* Obtain the lock on page, remove all ptes. */
+static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page,
+			       unsigned long private, struct folio *src,
+			       struct folio **dstp, int force, bool force_lock,
+			       enum migrate_mode mode, enum migrate_reason reason,
+			       struct list_head *ret)
+{
+	struct folio *dst;
+	int rc = MIGRATEPAGE_UNMAP;
+	struct page *newpage = NULL;
+
+	if (!thp_migration_supported() && folio_test_transhuge(src))
+		return -ENOSYS;
+
+	if (folio_ref_count(src) == 1) {
+		/* Folio was freed from under us. So we are done. */
+		folio_clear_active(src);
+		folio_clear_unevictable(src);
+		/* free_pages_prepare() will clear PG_isolated. */
+		list_del(&src->lru);
+		migrate_folio_done(src, reason);
+		return MIGRATEPAGE_SUCCESS;
+	}
+
+	newpage = get_new_page(&src->page, private);
+	if (!newpage)
+		return -ENOMEM;
+	dst = page_folio(newpage);
+	*dstp = dst;
+
+	dst->private = NULL;
+	rc = __migrate_folio_unmap(src, dst, force, force_lock, mode);
+	if (rc == MIGRATEPAGE_UNMAP)
+		return rc;
+
+	/*
+	 * A page that has not been migrated will have kept its
+	 * references and be restored.
+	 */
+	/* restore the folio to right list. */
+	if (rc != -EAGAIN && rc != -EDEADLOCK)
+		list_move_tail(&src->lru, ret);
+
+	if (put_new_page)
+		put_new_page(&dst->page, private);
+	else
+		folio_put(dst);
+
+	return rc;
+}
+
 static int __migrate_folio_move(struct folio *src, struct folio *dst,
 				enum migrate_mode mode)
 {
@@ -1239,74 +1307,6 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst,
 	return rc;
 }
 
-static void migrate_folio_done(struct folio *src,
-			       enum migrate_reason reason)
-{
-	/*
-	 * Compaction can migrate also non-LRU pages which are
-	 * not accounted to NR_ISOLATED_*. They can be recognized
-	 * as __PageMovable
-	 */
-	if (likely(!__folio_test_movable(src)))
-		mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
-				    folio_is_file_lru(src), -folio_nr_pages(src));
-
-	if (reason != MR_MEMORY_FAILURE)
-		/* We release the page in page_handle_poison. */
-		folio_put(src);
-}
-
-/* Obtain the lock on page, remove all ptes. */
-static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page,
-			       unsigned long private, struct folio *src,
-			       struct folio **dstp, int force, bool force_lock,
-			       enum migrate_mode mode, enum migrate_reason reason,
-			       struct list_head *ret)
-{
-	struct folio *dst;
-	int rc = MIGRATEPAGE_UNMAP;
-	struct page *newpage = NULL;
-
-	if (!thp_migration_supported() && folio_test_transhuge(src))
-		return -ENOSYS;
-
-	if (folio_ref_count(src) == 1) {
-		/* Folio was freed from under us. So we are done. */
-		folio_clear_active(src);
-		folio_clear_unevictable(src);
-		/* free_pages_prepare() will clear PG_isolated. */
-		list_del(&src->lru);
-		migrate_folio_done(src, reason);
-		return MIGRATEPAGE_SUCCESS;
-	}
-
-	newpage = get_new_page(&src->page, private);
-	if (!newpage)
-		return -ENOMEM;
-	dst = page_folio(newpage);
-	*dstp = dst;
-
-	dst->private = NULL;
-	rc = __migrate_folio_unmap(src, dst, force, force_lock, mode);
-	if (rc == MIGRATEPAGE_UNMAP)
-		return rc;
-
-	/*
-	 * A page that has not been migrated will have kept its
-	 * references and be restored.
-	 */
-	/* restore the folio to right list. */
-	if (rc != -EAGAIN && rc != -EDEADLOCK)
-		list_move_tail(&src->lru, ret);
-
-	if (put_new_page)
-		put_new_page(&dst->page, private);
-	else
-		folio_put(dst);
-
-	return rc;
-}
-
 /* Migrate the folio to the newly allocated folio in dst. */
 static int migrate_folio_move(free_page_t put_new_page, unsigned long private,
 			      struct folio *src, struct folio *dst,