From patchwork Wed Sep 21 06:06:15 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 12983242
From: Huang Ying <ying.huang@intel.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Huang Ying, Zi Yan,
 Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox
Subject: [RFC 5/6] mm/migrate_pages: share more code between _unmap and _move
Date: Wed, 21 Sep 2022 14:06:15 +0800
Message-Id: <20220921060616.73086-6-ying.huang@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220921060616.73086-1-ying.huang@intel.com>
References: <20220921060616.73086-1-ying.huang@intel.com>
This is a code cleanup patch to reduce the duplicated code between the
_unmap and _move stages of migrate_pages().  No functionality change is
expected.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
---
 mm/migrate.c | 240 +++++++++++++++++++++------------------------------
 1 file changed, 100 insertions(+), 140 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 165cbbc834e2..042fa147f302 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -999,6 +999,7 @@ static void __migrate_page_extract(struct page *newpage,
 static void migrate_page_undo_page(struct page *page,
                                    int page_was_mapped,
                                    struct anon_vma *anon_vma,
+                                   bool locked,
                                    struct list_head *ret)
 {
         struct folio *folio = page_folio(page);
@@ -1007,30 +1008,77 @@ static void migrate_page_undo_page(struct page *page,
                 remove_migration_ptes(folio, folio, false);
         if (anon_vma)
                 put_anon_vma(anon_vma);
-        unlock_page(page);
-        list_move_tail(&page->lru, ret);
+        if (locked)
+                unlock_page(page);
+        if (ret)
+                list_move_tail(&page->lru, ret);
 }
 
 static void migrate_page_undo_newpage(struct page *newpage,
+                                      bool locked,
                                       free_page_t put_new_page,
                                       unsigned long private)
 {
-        unlock_page(newpage);
+        if (locked)
+                unlock_page(newpage);
 
         if (put_new_page)
                 put_new_page(newpage, private);
         else
                 put_page(newpage);
 }
 
-static int __migrate_page_unmap(struct page *page, struct page *newpage,
-                                int force, enum migrate_mode mode)
+static void migrate_page_done(struct page *page,
+                              enum migrate_reason reason)
+{
+        /*
+         * Compaction can migrate also non-LRU pages which are
+         * not accounted to NR_ISOLATED_*. They can be recognized
+         * as __PageMovable
+         */
+        if (likely(!__PageMovable(page)))
+                mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
+                                    page_is_file_lru(page), -thp_nr_pages(page));
+
+        if (reason != MR_MEMORY_FAILURE)
+                /* We release the page in page_handle_poison. */
+                put_page(page);
+}
+
+/* Obtain the lock on page, remove all ptes. */
+static int migrate_page_unmap(new_page_t get_new_page, free_page_t put_new_page,
+                              unsigned long private, struct page *page,
+                              struct page **newpagep, int force,
+                              enum migrate_mode mode, enum migrate_reason reason,
+                              struct list_head *ret)
 {
         struct folio *folio = page_folio(page);
-        int rc = -EAGAIN;
+        int rc = MIGRATEPAGE_UNMAP;
+        struct page *newpage = NULL;
         int page_was_mapped = 0;
         struct anon_vma *anon_vma = NULL;
         bool is_lru = !__PageMovable(page);
+        bool locked = false;
+        bool newpage_locked = false;
+
+        if (!thp_migration_supported() && PageTransHuge(page))
+                return -ENOSYS;
 
+        if (page_count(page) == 1) {
+                /* Page was freed from under us. So we are done. */
+                ClearPageActive(page);
+                ClearPageUnevictable(page);
+                /* free_pages_prepare() will clear PG_isolated. */
+                list_del(&page->lru);
+                migrate_page_done(page, reason);
+                return MIGRATEPAGE_SUCCESS;
+        }
+
+        newpage = get_new_page(page, private);
+        if (!newpage)
+                return -ENOMEM;
+        *newpagep = newpage;
+
+        rc = -EAGAIN;
         if (!trylock_page(page)) {
                 if (!force || mode == MIGRATE_ASYNC)
                         goto out;
@@ -1053,6 +1101,7 @@ static int __migrate_page_unmap(struct page *page, struct page *newpage,
                 lock_page(page);
         }
+        locked = true;
 
         if (PageWriteback(page)) {
                 /*
@@ -1067,10 +1116,10 @@ static int __migrate_page_unmap(struct page *page, struct page *newpage,
                         break;
                 default:
                         rc = -EBUSY;
-                        goto out_unlock;
+                        goto out;
                 }
                 if (!force)
-                        goto out_unlock;
+                        goto out;
                 wait_on_page_writeback(page);
         }
 
@@ -1100,7 +1149,8 @@ static int __migrate_page_unmap(struct page *page, struct page *newpage,
          * This is much like races on refcount of oldpage: just don't BUG().
          */
         if (unlikely(!trylock_page(newpage)))
-                goto out_unlock;
+                goto out;
+        newpage_locked = true;
 
         if (unlikely(!is_lru)) {
                 __migrate_page_record(newpage, page_was_mapped, anon_vma);
@@ -1123,7 +1173,7 @@ static int __migrate_page_unmap(struct page *page, struct page *newpage,
                 VM_BUG_ON_PAGE(PageAnon(page), page);
                 if (page_has_private(page)) {
                         try_to_free_buffers(folio);
-                        goto out_unlock_both;
+                        goto out;
                 }
         } else if (page_mapped(page)) {
                 /* Establish migration ptes */
@@ -1141,20 +1191,28 @@ static int __migrate_page_unmap(struct page *page, struct page *newpage,
         if (page_was_mapped)
                 remove_migration_ptes(folio, folio, false);
 
-out_unlock_both:
-        unlock_page(newpage);
-out_unlock:
-        /* Drop an anon_vma reference if we took one */
-        if (anon_vma)
-                put_anon_vma(anon_vma);
-        unlock_page(page);
 out:
+        /*
+         * A page that has not been migrated will have kept its
+         * references and be restored.
+         */
+        /* Restore the page to the right list unless we want to retry. */
+        if (rc == -EAGAIN)
+                ret = NULL;
+
+        migrate_page_undo_page(page, page_was_mapped, anon_vma, locked, ret);
+        if (newpage)
+                migrate_page_undo_newpage(newpage, newpage_locked,
+                                          put_new_page, private);
         return rc;
 }
 
-static int __migrate_page_move(struct page *page, struct page *newpage,
-                               enum migrate_mode mode)
+/* Migrate the page to the newly allocated page in newpage. */
+static int migrate_page_move(free_page_t put_new_page, unsigned long private,
+                             struct page *page, struct page *newpage,
+                             enum migrate_mode mode, enum migrate_reason reason,
+                             struct list_head *ret)
 {
         struct folio *folio = page_folio(page);
         struct folio *dst = page_folio(newpage);
@@ -1165,9 +1223,10 @@ static int __migrate_page_move(struct page *page, struct page *newpage,
         __migrate_page_extract(newpage, &page_was_mapped, &anon_vma);
 
         rc = move_to_new_folio(dst, folio, mode);
+        if (rc)
+                goto out;
 
-        if (rc != -EAGAIN)
-                list_del(&newpage->lru);
+        list_del(&newpage->lru);
         /*
          * When successful, push newpage to LRU immediately: so that if it
          * turns out to be an mlocked page, remove_migration_ptes() will
          * automatically build up the correct newpage->mlock_count for it.
          *
          * We would like to do this for the old page as well, but we have
          * isolated it, and putback_lru_page() handles it when migration is
          * unsuccessful, and other cases when a page has been temporarily
          * isolated from the unevictable LRU: but this case is the easiest.
          */
-        if (rc == MIGRATEPAGE_SUCCESS) {
-                lru_cache_add(newpage);
-                if (page_was_mapped)
-                        lru_add_drain();
-        }
-
-        if (rc == -EAGAIN) {
-                __migrate_page_record(newpage, page_was_mapped, anon_vma);
-                return rc;
-        }
-
+        lru_cache_add(newpage);
         if (page_was_mapped)
-                remove_migration_ptes(folio,
-                        rc == MIGRATEPAGE_SUCCESS ? dst : folio, false);
+                lru_add_drain();
+        if (page_was_mapped)
+                remove_migration_ptes(folio, dst, false);
 
         unlock_page(newpage);
-        /* Drop an anon_vma reference if we took one */
-        if (anon_vma)
-                put_anon_vma(anon_vma);
-        unlock_page(page);
+        set_page_owner_migrate_reason(newpage, reason);
 
         /*
          * If migration is successful, decrease refcount of the newpage,
          * which will not free the page because new page owner increased
          * refcounter.
          */
-        if (rc == MIGRATEPAGE_SUCCESS)
-                put_page(newpage);
-
-        return rc;
-}
+        put_page(newpage);
 
-static void migrate_page_done(struct page *page,
-                              enum migrate_reason reason)
-{
         /*
-         * Compaction can migrate also non-LRU pages which are
-         * not accounted to NR_ISOLATED_*. They can be recognized
-         * as __PageMovable
+         * A page that has been migrated has all references removed
+         * and will be freed.
          */
-        if (likely(!__PageMovable(page)))
-                mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-                                    page_is_file_lru(page), -thp_nr_pages(page));
-
-        if (reason != MR_MEMORY_FAILURE)
-                /* We release the page in page_handle_poison. */
-                put_page(page);
-}
-
-/* Obtain the lock on page, remove all ptes. */
-static int migrate_page_unmap(new_page_t get_new_page, free_page_t put_new_page,
-                              unsigned long private, struct page *page,
-                              struct page **newpagep, int force,
-                              enum migrate_mode mode, enum migrate_reason reason,
-                              struct list_head *ret)
-{
-        int rc = MIGRATEPAGE_UNMAP;
-        struct page *newpage = NULL;
-
-        if (!thp_migration_supported() && PageTransHuge(page))
-                return -ENOSYS;
-
-        if (page_count(page) == 1) {
-                /* Page was freed from under us. So we are done. */
-                ClearPageActive(page);
-                ClearPageUnevictable(page);
-                /* free_pages_prepare() will clear PG_isolated. */
-                list_del(&page->lru);
-                migrate_page_done(page, reason);
-                return MIGRATEPAGE_SUCCESS;
-        }
-
-        newpage = get_new_page(page, private);
-        if (!newpage)
-                return -ENOMEM;
-        *newpagep = newpage;
-
-        newpage->private = 0;
-        rc = __migrate_page_unmap(page, newpage, force, mode);
-        if (rc == MIGRATEPAGE_UNMAP)
-                return rc;
-
-        /*
-         * A page that has not been migrated will have kept its
-         * references and be restored.
-         */
-        /* restore the page to right list. */
-        if (rc != -EAGAIN)
-                list_move_tail(&page->lru, ret);
-
-        if (put_new_page)
-                put_new_page(newpage, private);
-        else
-                put_page(newpage);
+        list_del(&page->lru);
+        migrate_page_undo_page(page, 0, anon_vma, true, NULL);
+        migrate_page_done(page, reason);
 
         return rc;
-}
 
-/* Migrate the page to the newly allocated page in newpage. */
-static int migrate_page_move(free_page_t put_new_page, unsigned long private,
-                             struct page *page, struct page *newpage,
-                             enum migrate_mode mode, enum migrate_reason reason,
-                             struct list_head *ret)
-{
-        int rc;
-
-        rc = __migrate_page_move(page, newpage, mode);
-        if (rc == MIGRATEPAGE_SUCCESS)
-                set_page_owner_migrate_reason(newpage, reason);
-
-        if (rc != -EAGAIN) {
-                /*
-                 * A page that has been migrated has all references
-                 * removed and will be freed. A page that has not been
-                 * migrated will have kept its references and be restored.
-                 */
-                list_del_init(&page->lru);
+out:
+        if (rc == -EAGAIN) {
+                __migrate_page_record(newpage, page_was_mapped, anon_vma);
+                return rc;
         }
 
-        /*
-         * If migration is successful, releases reference grabbed during
-         * isolation. Otherwise, restore the page to right list unless
-         * we want to retry.
-         */
-        if (rc == MIGRATEPAGE_SUCCESS) {
-                migrate_page_done(page, reason);
-        } else if (rc != -EAGAIN) {
-                list_add_tail(&page->lru, ret);
-
-                if (put_new_page)
-                        put_new_page(newpage, private);
-                else
-                        put_page(newpage);
-        }
+        migrate_page_undo_page(page, page_was_mapped, anon_vma, true, ret);
+        list_del(&newpage->lru);
+        migrate_page_undo_newpage(newpage, true, put_new_page, private);
 
         return rc;
 }
 
@@ -1763,9 +1723,9 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
                         __migrate_page_extract(newpage, &page_was_mapped,
                                                &anon_vma);
                         migrate_page_undo_page(page, page_was_mapped, anon_vma,
-                                               &ret_pages);
+                                               true, &ret_pages);
                         list_del(&newpage->lru);
-                        migrate_page_undo_newpage(newpage, put_new_page, private);
+                        migrate_page_undo_newpage(newpage, true, put_new_page, private);
                         newpage = newpage2;
                         newpage2 = list_next_entry(newpage, lru);
                 }
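
For reference, after this cleanup the two stages compose roughly as
follows.  This is a simplified sketch of the caller's flow, modelled on
migrate_pages_batch() from this series (the retry loop, statistics and
hugetlb handling are omitted; "page", "force", "mode", "reason" and
"ret_pages" are the caller's locals), not code from the patch itself:

        struct page *newpage = NULL;
        int rc;

        rc = migrate_page_unmap(get_new_page, put_new_page, private, page,
                                &newpage, force, mode, reason, &ret_pages);
        if (rc == MIGRATEPAGE_UNMAP)
                /*
                 * "page" is locked and unmapped, and the migration state
                 * has been recorded in the locked "newpage": copy the
                 * contents over and remap.
                 */
                rc = migrate_page_move(put_new_page, private, page, newpage,
                                       mode, reason, &ret_pages);
        /*
         * Any other return value means the unmap stage failed or the page
         * was already freed; migrate_page_unmap() has then restored the
         * page itself (an -EAGAIN page stays on the "from" list for a
         * retry, most other failures move it to "ret_pages" via the shared
         * undo helpers) and released "newpage" if it was allocated.
         */

Because both stages fail through the same migrate_page_undo_page() and
migrate_page_undo_newpage() helpers, the error paths no longer need the
per-stage out_unlock/out_unlock_both labels that this patch removes.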