From patchwork Mon Feb 13 12:34:42 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 13138388
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
	Zi Yan, Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox,
	Bharata B Rao, Alistair Popple, Xin Hao, Minchan Kim, Mike Kravetz,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH -v5 7/9] migrate_pages: share more code between _unmap and _move
Date: Mon, 13 Feb 2023 20:34:42 +0800
Message-Id: <20230213123444.155149-8-ying.huang@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20230213123444.155149-1-ying.huang@intel.com>
References: <20230213123444.155149-1-ying.huang@intel.com>
MIME-Version: 1.0

This is a code cleanup patch to reduce the duplicated code between the
_unmap and _move stages of migrate_pages().  No functionality change is
expected.
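To see the shape of the cleanup before wading into the diff: the undo
helpers now take a "locked" flag and tolerate a NULL return list, so every
failure exit in the unmap stage can funnel through the same two calls
instead of open-coding unlock/putback at each label.  Below is a small,
stand-alone sketch of that pattern in plain C; the names (struct item,
undo_src(), unmap_stage()) are illustrative only and are not the kernel
API.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-in; the real code operates on struct folio. */
struct item {
	bool locked;
};

/*
 * Shared undo helper: the caller reports how far it got (was the item
 * locked?) and where to put it back (a NULL list means "leave it alone,
 * we will retry"), so every failure path reuses one cleanup routine.
 */
static void undo_src(struct item *it, bool locked, struct item **ret_list)
{
	if (locked)
		it->locked = false;	/* folio_unlock(src) in the real code */
	if (ret_list)
		*ret_list = it;		/* stands in for list_move_tail(&src->lru, ret) */
}

/* A stage with several failure points that may or may not hold the lock. */
static int unmap_stage(struct item *it, bool fail_early, bool want_retry,
		       struct item **ret_list)
{
	bool locked = false;
	int rc = -1;			/* pretend the work failed */

	if (fail_early)			/* failed before taking the lock */
		goto out;

	it->locked = true;		/* folio_trylock(src) succeeded */
	locked = true;

	/* ... real work would happen here and may still fail ... */

out:
	if (want_retry)			/* like -EAGAIN/-EDEADLOCK: no putback */
		ret_list = NULL;
	undo_src(it, locked, ret_list);
	return rc;
}

int main(void)
{
	struct item it = { false };
	struct item *returned = NULL;

	unmap_stage(&it, false, false, &returned);
	printf("locked=%d returned=%d\n", it.locked, returned == &it);
	return 0;
}

The real helpers in the diff operate on struct folio and also cover the
destination side via migrate_folio_undo_dst() with an analogous "locked"
argument.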
Signed-off-by: "Huang, Ying" Cc: Zi Yan Cc: Yang Shi Cc: Baolin Wang Cc: Oscar Salvador Cc: Matthew Wilcox Cc: Bharata B Rao Cc: Alistair Popple Cc: Xin Hao Cc: Minchan Kim Cc: Mike Kravetz Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- mm/migrate.c | 207 +++++++++++++++++++++------------------------------ 1 file changed, 85 insertions(+), 122 deletions(-) diff --git a/mm/migrate.c b/mm/migrate.c index 0c7488ebe248..00713ccb6643 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1055,6 +1055,7 @@ static void __migrate_folio_extract(struct folio *dst, static void migrate_folio_undo_src(struct folio *src, int page_was_mapped, struct anon_vma *anon_vma, + bool locked, struct list_head *ret) { if (page_was_mapped) @@ -1062,16 +1063,20 @@ static void migrate_folio_undo_src(struct folio *src, /* Drop an anon_vma reference if we took one */ if (anon_vma) put_anon_vma(anon_vma); - folio_unlock(src); - list_move_tail(&src->lru, ret); + if (locked) + folio_unlock(src); + if (ret) + list_move_tail(&src->lru, ret); } /* Restore the destination folio to the original state upon failure */ static void migrate_folio_undo_dst(struct folio *dst, + bool locked, free_page_t put_new_page, unsigned long private) { - folio_unlock(dst); + if (locked) + folio_unlock(dst); if (put_new_page) put_new_page(&dst->page, private); else @@ -1096,13 +1101,42 @@ static void migrate_folio_done(struct folio *src, folio_put(src); } -static int __migrate_folio_unmap(struct folio *src, struct folio *dst, int force, - bool avoid_force_lock, enum migrate_mode mode) +/* Obtain the lock on page, remove all ptes. */ +static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page, + unsigned long private, struct folio *src, + struct folio **dstp, int force, bool avoid_force_lock, + enum migrate_mode mode, enum migrate_reason reason, + struct list_head *ret) { + struct folio *dst; int rc = -EAGAIN; + struct page *newpage = NULL; int page_was_mapped = 0; struct anon_vma *anon_vma = NULL; bool is_lru = !__PageMovable(&src->page); + bool locked = false; + bool dst_locked = false; + + if (!thp_migration_supported() && folio_test_transhuge(src)) + return -ENOSYS; + + if (folio_ref_count(src) == 1) { + /* Folio was freed from under us. So we are done. */ + folio_clear_active(src); + folio_clear_unevictable(src); + /* free_pages_prepare() will clear PG_isolated. */ + list_del(&src->lru); + migrate_folio_done(src, reason); + return MIGRATEPAGE_SUCCESS; + } + + newpage = get_new_page(&src->page, private); + if (!newpage) + return -ENOMEM; + dst = page_folio(newpage); + *dstp = dst; + + dst->private = NULL; if (!folio_trylock(src)) { if (!force || mode == MIGRATE_ASYNC) @@ -1137,6 +1171,7 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst, int force folio_lock(src); } + locked = true; if (folio_test_writeback(src)) { /* @@ -1151,10 +1186,10 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst, int force break; default: rc = -EBUSY; - goto out_unlock; + goto out; } if (!force) - goto out_unlock; + goto out; folio_wait_writeback(src); } @@ -1184,7 +1219,8 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst, int force * This is much like races on refcount of oldpage: just don't BUG(). 
*/ if (unlikely(!folio_trylock(dst))) - goto out_unlock; + goto out; + dst_locked = true; if (unlikely(!is_lru)) { __migrate_folio_record(dst, page_was_mapped, anon_vma); @@ -1206,7 +1242,7 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst, int force if (!src->mapping) { if (folio_test_private(src)) { try_to_free_buffers(src); - goto out_unlock_both; + goto out; } } else if (folio_mapped(src)) { /* Establish migration ptes */ @@ -1221,73 +1257,25 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst, int force return MIGRATEPAGE_UNMAP; } - if (page_was_mapped) - remove_migration_ptes(src, src, false); - -out_unlock_both: - folio_unlock(dst); -out_unlock: - /* Drop an anon_vma reference if we took one */ - if (anon_vma) - put_anon_vma(anon_vma); - folio_unlock(src); out: - - return rc; -} - -/* Obtain the lock on page, remove all ptes. */ -static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page, - unsigned long private, struct folio *src, - struct folio **dstp, int force, bool avoid_force_lock, - enum migrate_mode mode, enum migrate_reason reason, - struct list_head *ret) -{ - struct folio *dst; - int rc = MIGRATEPAGE_UNMAP; - struct page *newpage = NULL; - - if (!thp_migration_supported() && folio_test_transhuge(src)) - return -ENOSYS; - - if (folio_ref_count(src) == 1) { - /* Folio was freed from under us. So we are done. */ - folio_clear_active(src); - folio_clear_unevictable(src); - /* free_pages_prepare() will clear PG_isolated. */ - list_del(&src->lru); - migrate_folio_done(src, reason); - return MIGRATEPAGE_SUCCESS; - } - - newpage = get_new_page(&src->page, private); - if (!newpage) - return -ENOMEM; - dst = page_folio(newpage); - *dstp = dst; - - dst->private = NULL; - rc = __migrate_folio_unmap(src, dst, force, avoid_force_lock, mode); - if (rc == MIGRATEPAGE_UNMAP) - return rc; - /* * A folio that has not been unmapped will be restored to * right list unless we want to retry. */ - if (rc != -EAGAIN && rc != -EDEADLOCK) - list_move_tail(&src->lru, ret); + if (rc == -EAGAIN || rc == -EDEADLOCK) + ret = NULL; - if (put_new_page) - put_new_page(&dst->page, private); - else - folio_put(dst); + migrate_folio_undo_src(src, page_was_mapped, anon_vma, locked, ret); + migrate_folio_undo_dst(dst, dst_locked, put_new_page, private); return rc; } -static int __migrate_folio_move(struct folio *src, struct folio *dst, - enum migrate_mode mode) +/* Migrate the folio to the newly allocated folio in dst. */ +static int migrate_folio_move(free_page_t put_new_page, unsigned long private, + struct folio *src, struct folio *dst, + enum migrate_mode mode, enum migrate_reason reason, + struct list_head *ret) { int rc; int page_was_mapped = 0; @@ -1300,12 +1288,8 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst, list_del(&dst->lru); rc = move_to_new_folio(dst, src, mode); - - if (rc == -EAGAIN) { - list_add(&dst->lru, prev); - __migrate_folio_record(dst, page_was_mapped, anon_vma); - return rc; - } + if (rc) + goto out; if (unlikely(!is_lru)) goto out_unlock_both; @@ -1319,70 +1303,49 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst, * unsuccessful, and other cases when a page has been temporarily * isolated from the unevictable LRU: but this case is the easiest. 
*/ - if (rc == MIGRATEPAGE_SUCCESS) { - folio_add_lru(dst); - if (page_was_mapped) - lru_add_drain(); - } + folio_add_lru(dst); + if (page_was_mapped) + lru_add_drain(); if (page_was_mapped) - remove_migration_ptes(src, - rc == MIGRATEPAGE_SUCCESS ? dst : src, false); + remove_migration_ptes(src, dst, false); out_unlock_both: folio_unlock(dst); - /* Drop an anon_vma reference if we took one */ - if (anon_vma) - put_anon_vma(anon_vma); - folio_unlock(src); + set_page_owner_migrate_reason(&dst->page, reason); /* * If migration is successful, decrease refcount of dst, * which will not free the page because new page owner increased * refcounter. */ - if (rc == MIGRATEPAGE_SUCCESS) - folio_put(dst); - - return rc; -} - -/* Migrate the folio to the newly allocated folio in dst. */ -static int migrate_folio_move(free_page_t put_new_page, unsigned long private, - struct folio *src, struct folio *dst, - enum migrate_mode mode, enum migrate_reason reason, - struct list_head *ret) -{ - int rc; - - rc = __migrate_folio_move(src, dst, mode); - if (rc == MIGRATEPAGE_SUCCESS) - set_page_owner_migrate_reason(&dst->page, reason); - - if (rc != -EAGAIN) { - /* - * A folio that has been migrated has all references - * removed and will be freed. A folio that has not been - * migrated will have kept its references and be restored. - */ - list_del(&src->lru); - } + folio_put(dst); /* - * If migration is successful, releases reference grabbed during - * isolation. Otherwise, restore the folio to right list unless - * we want to retry. + * A folio that has been migrated has all references removed + * and will be freed. */ - if (rc == MIGRATEPAGE_SUCCESS) { - migrate_folio_done(src, reason); - } else if (rc != -EAGAIN) { - list_add_tail(&src->lru, ret); + list_del(&src->lru); + /* Drop an anon_vma reference if we took one */ + if (anon_vma) + put_anon_vma(anon_vma); + folio_unlock(src); + migrate_folio_done(src, reason); - if (put_new_page) - put_new_page(&dst->page, private); - else - folio_put(dst); + return rc; +out: + /* + * A folio that has not been migrated will be restored to + * right list unless we want to retry. + */ + if (rc == -EAGAIN) { + list_add(&dst->lru, prev); + __migrate_folio_record(dst, page_was_mapped, anon_vma); + return rc; } + migrate_folio_undo_src(src, page_was_mapped, anon_vma, true, ret); + migrate_folio_undo_dst(dst, true, put_new_page, private); + return rc; } @@ -1918,9 +1881,9 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page, __migrate_folio_extract(dst, &page_was_mapped, &anon_vma); migrate_folio_undo_src(folio, page_was_mapped, anon_vma, - ret_folios); + true, ret_folios); list_del(&dst->lru); - migrate_folio_undo_dst(dst, put_new_page, private); + migrate_folio_undo_dst(dst, true, put_new_page, private); dst = dst2; dst2 = list_next_entry(dst, lru); }