From patchwork Fri May 10 06:52:05 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13660943
From: Byungchul Park <byungchul@sk.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
    vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com,
    willy@infradead.org, david@redhat.com, peterz@infradead.org,
    luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v10 11/12] mm, migrate: apply luf mechanism to unmapping during migration
Date: Fri, 10 May 2024 15:52:05 +0900
Message-Id: <20240510065206.76078-12-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240510065206.76078-1-byungchul@sk.com>
References: <20240510065206.76078-1-byungchul@sk.com>
A new mechanism, LUF (Lazy Unmap Flush), defers the TLB flush for folios
that have been unmapped and freed until they eventually get allocated
again.  This is safe for folios that had been mapped read-only and were
then unmapped, since their contents cannot change while they stay in the
pcp or buddy allocator, so the data can still be read correctly through
the stale TLB entries.

Apply the mechanism to unmapping during migration.
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/rmap.h |  2 +-
 mm/migrate.c         | 56 ++++++++++++++++++++++++++++++++------------
 mm/rmap.c            |  9 ++++---
 3 files changed, 48 insertions(+), 19 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 0f906dc6d280..1898a2c1c087 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -657,7 +657,7 @@ static inline int folio_try_share_anon_rmap_pmd(struct folio *folio,
 int folio_referenced(struct folio *, int is_locked,
 		     struct mem_cgroup *memcg, unsigned long *vm_flags);
 
-void try_to_migrate(struct folio *folio, enum ttu_flags flags);
+bool try_to_migrate(struct folio *folio, enum ttu_flags flags);
 void try_to_unmap(struct folio *, enum ttu_flags flags);
 
 int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
diff --git a/mm/migrate.c b/mm/migrate.c
index f9ed7a2b8720..c8b0e5203e9a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1090,7 +1090,8 @@ static void migrate_folio_undo_dst(struct folio *dst, bool locked,
 
 /* Cleanup src folio upon migration success */
 static void migrate_folio_done(struct folio *src,
-			       enum migrate_reason reason)
+			       enum migrate_reason reason,
+			       unsigned short int ugen)
 {
 	/*
 	 * Compaction can migrate also non-LRU pages which are
@@ -1101,8 +1102,12 @@ static void migrate_folio_done(struct folio *src,
 		mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
 				    folio_is_file_lru(src), -folio_nr_pages(src));
 
-	if (reason != MR_MEMORY_FAILURE)
-		/* We release the page in page_handle_poison. */
+	/* We release the page in page_handle_poison. */
+	if (reason == MR_MEMORY_FAILURE)
+		check_luf_flush(ugen);
+	else if (ugen)
+		folio_put_ugen(src, ugen);
+	else
 		folio_put(src);
 }
 
@@ -1110,7 +1115,8 @@ static void migrate_folio_done(struct folio *src,
 static int migrate_folio_unmap(new_folio_t get_new_folio,
 		free_folio_t put_new_folio, unsigned long private,
 		struct folio *src, struct folio **dstp, enum migrate_mode mode,
-		enum migrate_reason reason, struct list_head *ret)
+		enum migrate_reason reason, struct list_head *ret,
+		bool *can_luf)
 {
 	struct folio *dst;
 	int rc = -EAGAIN;
@@ -1126,7 +1132,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		folio_clear_unevictable(src);
 		/* free_pages_prepare() will clear PG_isolated. */
 		list_del(&src->lru);
-		migrate_folio_done(src, reason);
+		migrate_folio_done(src, reason, 0);
 		return MIGRATEPAGE_SUCCESS;
 	}
 
@@ -1244,7 +1250,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		/* Establish migration ptes */
 		VM_BUG_ON_FOLIO(folio_test_anon(src) && !folio_test_ksm(src) &&
 			       !anon_vma, src);
-		try_to_migrate(src, mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0);
+		*can_luf = try_to_migrate(src, mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0);
 		old_page_state |= PAGE_WAS_MAPPED;
 	}
 
@@ -1272,7 +1278,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 			      struct folio *src, struct folio *dst,
 			      enum migrate_mode mode, enum migrate_reason reason,
-			      struct list_head *ret)
+			      struct list_head *ret, unsigned short int ugen)
 {
 	int rc;
 	int old_page_state = 0;
@@ -1326,7 +1332,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	if (anon_vma)
 		put_anon_vma(anon_vma);
 	folio_unlock(src);
-	migrate_folio_done(src, reason);
+	migrate_folio_done(src, reason, ugen);
 
 	return rc;
 out:
@@ -1616,7 +1622,7 @@ static void migrate_folios_move(struct list_head *src_folios,
 		struct list_head *ret_folios,
 		struct migrate_pages_stats *stats,
 		int *retry, int *thp_retry, int *nr_failed,
-		int *nr_retry_pages)
+		int *nr_retry_pages, unsigned short int ugen)
 {
 	struct folio *folio, *folio2, *dst, *dst2;
 	bool is_thp;
@@ -1633,7 +1639,7 @@ static void migrate_folios_move(struct list_head *src_folios,
 
 		rc = migrate_folio_move(put_new_folio, private,
 				folio, dst, mode,
-				reason, ret_folios);
+				reason, ret_folios, ugen);
 		/*
 		 * The rules are:
 		 *	Success: folio will be freed
@@ -1710,7 +1716,11 @@ static int migrate_pages_batch(struct list_head *from,
 	int rc, rc_saved = 0, nr_pages;
 	LIST_HEAD(unmap_folios);
 	LIST_HEAD(dst_folios);
+	LIST_HEAD(unmap_folios_luf);
+	LIST_HEAD(dst_folios_luf);
 	bool nosplit = (reason == MR_NUMA_MISPLACED);
+	unsigned short int ugen;
+	bool can_luf;
 
 	VM_WARN_ON_ONCE(mode != MIGRATE_ASYNC &&
 			!list_empty(from) && !list_is_singular(from));
@@ -1773,9 +1783,11 @@ static int migrate_pages_batch(struct list_head *from,
 			continue;
 		}
 
+		can_luf = false;
 		rc = migrate_folio_unmap(get_new_folio, put_new_folio,
 				private, folio, &dst, mode, reason,
-				ret_folios);
+				ret_folios, &can_luf);
+
 		/*
 		 * The rules are:
 		 *	Success: folio will be freed
@@ -1821,7 +1833,8 @@ static int migrate_pages_batch(struct list_head *from,
 			/* nr_failed isn't updated for not used */
 			stats->nr_thp_failed += thp_retry;
 			rc_saved = rc;
-			if (list_empty(&unmap_folios))
+			if (list_empty(&unmap_folios) &&
+			    list_empty(&unmap_folios_luf))
 				goto out;
 			else
 				goto move;
@@ -1835,8 +1848,13 @@ static int migrate_pages_batch(struct list_head *from,
 			stats->nr_thp_succeeded += is_thp;
 			break;
 		case MIGRATEPAGE_UNMAP:
-			list_move_tail(&folio->lru, &unmap_folios);
-			list_add_tail(&dst->lru, &dst_folios);
+			if (can_luf) {
+				list_move_tail(&folio->lru, &unmap_folios_luf);
+				list_add_tail(&dst->lru, &dst_folios_luf);
+			} else {
+				list_move_tail(&folio->lru, &unmap_folios);
+				list_add_tail(&dst->lru, &dst_folios);
+			}
 			break;
 		default:
 			/*
@@ -1856,6 +1874,8 @@ static int migrate_pages_batch(struct list_head *from,
 	stats->nr_thp_failed += thp_retry;
 	stats->nr_failed_pages += nr_retry_pages;
 move:
+	/* Should be before try_to_unmap_flush() */
+	ugen = try_to_unmap_luf();
 	/* Flush TLBs for all unmapped folios */
 	try_to_unmap_flush();
 
@@ -1869,7 +1889,11 @@ static int migrate_pages_batch(struct list_head *from,
 		migrate_folios_move(&unmap_folios, &dst_folios,
 				put_new_folio, private, mode, reason,
 				ret_folios, stats, &retry, &thp_retry,
-				&nr_failed, &nr_retry_pages);
+				&nr_failed, &nr_retry_pages, 0);
+		migrate_folios_move(&unmap_folios_luf, &dst_folios_luf,
+				put_new_folio, private, mode, reason,
+				ret_folios, stats, &retry, &thp_retry,
+				&nr_failed, &nr_retry_pages, ugen);
 	}
 	nr_failed += retry;
 	stats->nr_thp_failed += thp_retry;
@@ -1880,6 +1904,8 @@ static int migrate_pages_batch(struct list_head *from,
 	/* Cleanup remaining folios */
 	migrate_folios_undo(&unmap_folios, &dst_folios,
 			put_new_folio, private, ret_folios);
+	migrate_folios_undo(&unmap_folios_luf, &dst_folios_luf,
+			put_new_folio, private, ret_folios);
 
 	return rc;
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index e42783c02114..d25ae20a47b5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2600,8 +2600,9 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
  *
  * Tries to remove all the page table entries which are mapping this folio and
  * replace them with special swap entries. Caller must hold the folio lock.
+ * Return true if all the mappings are read-only, otherwise false.
  */
-void try_to_migrate(struct folio *folio, enum ttu_flags flags)
+bool try_to_migrate(struct folio *folio, enum ttu_flags flags)
 {
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_migrate_one,
@@ -2620,11 +2621,11 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
 	 */
 	if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
 					TTU_SYNC | TTU_BATCH_FLUSH)))
-		return;
+		return false;
 
 	if (folio_is_zone_device(folio) &&
 	    (!folio_is_device_private(folio) && !folio_is_device_coherent(folio)))
-		return;
+		return false;
 
 	/*
 	 * During exec, a temporary VMA is setup and later moved.
@@ -2649,6 +2650,8 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
 		fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
 	else
 		fold_ubc(tlb_ubc, tlb_ubc_ro);
+
+	return can_luf;
 }
 
 #ifdef CONFIG_DEVICE_PRIVATE