From patchwork Mon Feb 6 06:33:13 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 13129359
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    Alistair Popple, Zi Yan, Yang Shi, Baolin Wang, Oscar Salvador,
    Matthew Wilcox, Bharata B Rao, haoxin, Minchan Kim, Mike Kravetz,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH -v4 9/9] migrate_pages: move THP/hugetlb migration support check to simplify code
Date: Mon, 6 Feb 2023 14:33:13 +0800
Message-Id: <20230206063313.635011-10-ying.huang@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20230206063313.635011-1-ying.huang@intel.com>
References: <20230206063313.635011-1-ying.huang@intel.com>
MIME-Version: 1.0

This is a code cleanup patch; no functional change is expected. The
THP and hugetlb migration support checks are moved out of
migrate_folio_unmap() and unmap_and_move_huge_page() into their
callers, so the -ENOSYS handling there can be dropped. After the
change, the line count is reduced, especially in the long
migrate_pages_batch().
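To make the shape of the cleanup easier to see outside the kernel tree,
below is a minimal, self-contained C sketch of the pattern. All names
(struct item, migration_supported(), move_one_old(), move_one_new(),
migrate_list()) are hypothetical stand-ins for the kernel's folios and
migration helpers; this illustrates the hoisted-check idea only and is
not the kernel code itself.

	#include <errno.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct item {
		int id;
		bool huge;
	};

	/* Stand-in for thp_migration_supported()/hugepage_migration_supported(). */
	static bool migration_supported(const struct item *it)
	{
		return !it->huge;
	}

	/*
	 * Before the cleanup: the move function itself had to detect the
	 * unsupported case and report it with -ENOSYS.
	 */
	static int move_one_old(struct item *it)
	{
		if (!migration_supported(it))
			return -ENOSYS;
		/* ... actual migration work ... */
		return 0;
	}

	/* After the cleanup: the caller has already filtered unsupported items. */
	static int move_one_new(struct item *it)
	{
		(void)it;
		/* ... actual migration work ... */
		return 0;
	}

	static int migrate_list(struct item *items, int n)
	{
		int nr_failed = 0;

		for (int i = 0; i < n; i++) {
			/*
			 * Hoisted check: count the failure and skip the move
			 * right here, before calling move_one_new() ...
			 */
			if (!migration_supported(&items[i])) {
				nr_failed++;
				continue;
			}
			/*
			 * ... so the error handling below no longer needs a
			 * "case -ENOSYS:" arm.
			 */
			switch (move_one_new(&items[i])) {
			case 0:
				break;
			default:
				nr_failed++;
				break;
			}
		}
		return nr_failed;
	}

	int main(void)
	{
		struct item items[] = { { 1, false }, { 2, true }, { 3, false } };

		(void)move_one_old;	/* kept only for before/after comparison */
		printf("failed: %d of 3\n", migrate_list(items, 3));
		return 0;
	}

With the support check performed once in the caller's loop, the callee's
contract becomes simpler and the callers' switch statements lose their
-ENOSYS arms, which is where most of the removed lines in this patch
come from.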
Signed-off-by: "Huang, Ying" Suggested-by: Alistair Popple Cc: Zi Yan Cc: Yang Shi Cc: Baolin Wang Cc: Oscar Salvador Cc: Matthew Wilcox Cc: Bharata B Rao Cc: haoxin Cc: Minchan Kim Cc: Mike Kravetz Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com> --- mm/migrate.c | 83 +++++++++++++++++++++++----------------------------- 1 file changed, 36 insertions(+), 47 deletions(-) diff --git a/mm/migrate.c b/mm/migrate.c index ca6e2ff02a09..83d7ec8dfa66 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1099,9 +1099,6 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page bool locked = false; bool dst_locked = false; - if (!thp_migration_supported() && folio_test_transhuge(src)) - return -ENOSYS; - if (folio_ref_count(src) == 1) { /* Folio was freed from under us. So we are done. */ folio_clear_active(src); @@ -1359,16 +1356,6 @@ static int unmap_and_move_huge_page(new_page_t get_new_page, struct anon_vma *anon_vma = NULL; struct address_space *mapping = NULL; - /* - * Migratability of hugepages depends on architectures and their size. - * This check is necessary because some callers of hugepage migration - * like soft offline and memory hotremove don't walk through page - * tables or check whether the hugepage is pmd-based or not before - * kicking migration. - */ - if (!hugepage_migration_supported(page_hstate(hpage))) - return -ENOSYS; - if (folio_ref_count(src) == 1) { /* page was freed from under us. So we are done. */ putback_active_hugepage(hpage); @@ -1535,6 +1522,20 @@ static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page, cond_resched(); + /* + * Migratability of hugepages depends on architectures and + * their size. This check is necessary because some callers + * of hugepage migration like soft offline and memory + * hotremove don't walk through page tables or check whether + * the hugepage is pmd-based or not before kicking migration. + */ + if (!hugepage_migration_supported(folio_hstate(folio))) { + nr_failed++; + stats->nr_failed_pages += nr_pages; + list_move_tail(&folio->lru, ret_folios); + continue; + } + rc = unmap_and_move_huge_page(get_new_page, put_new_page, private, &folio->page, pass > 2, mode, @@ -1544,16 +1545,9 @@ static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page, * Success: hugetlb folio will be put back * -EAGAIN: stay on the from list * -ENOMEM: stay on the from list - * -ENOSYS: stay on the from list * Other errno: put on ret_folios list */ switch(rc) { - case -ENOSYS: - /* Hugetlb migration is unsupported */ - nr_failed++; - stats->nr_failed_pages += nr_pages; - list_move_tail(&folio->lru, ret_folios); - break; case -ENOMEM: /* * When memory is low, don't bother to try to migrate @@ -1639,6 +1633,28 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page, cond_resched(); + /* + * Large folio migration might be unsupported or + * the allocation might be failed so we should retry + * on the same folio with the large folio split + * to normal folios. + * + * Split folios are put in split_folios, and + * we will migrate them after the rest of the + * list is processed. 
+ */ + if (!thp_migration_supported() && is_thp) { + nr_large_failed++; + stats->nr_thp_failed++; + if (!try_split_folio(folio, &split_folios)) { + stats->nr_thp_split++; + continue; + } + stats->nr_failed_pages += nr_pages; + list_move_tail(&folio->lru, ret_folios); + continue; + } + rc = migrate_folio_unmap(get_new_page, put_new_page, private, folio, &dst, pass > 2, force_lock, mode, reason, ret_folios); @@ -1650,36 +1666,9 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page, * -EAGAIN: stay on the from list * -EDEADLOCK: stay on the from list * -ENOMEM: stay on the from list - * -ENOSYS: stay on the from list * Other errno: put on ret_folios list */ switch(rc) { - /* - * Large folio migration might be unsupported or - * the allocation could've failed so we should retry - * on the same folio with the large folio split - * to normal folios. - * - * Split folios are put in split_folios, and - * we will migrate them after the rest of the - * list is processed. - */ - case -ENOSYS: - /* Large folio migration is unsupported */ - if (is_large) { - nr_large_failed++; - stats->nr_thp_failed += is_thp; - if (!try_split_folio(folio, &split_folios)) { - stats->nr_thp_split += is_thp; - break; - } - } else if (!no_split_folio_counting) { - nr_failed++; - } - - stats->nr_failed_pages += nr_pages; - list_move_tail(&folio->lru, ret_folios); - break; case -ENOMEM: /* * When memory is low, don't bother to try to migrate