From patchwork Fri Apr 12 06:43:53 2024
X-Patchwork-Submitter: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
X-Patchwork-Id: 13626924
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Alex Shi, Kirill A. Shutemov, Hugh Dickins, Baolin Wang, Zhaoyang Huang
Subject: [PATCH 1/1] mm: protect xa split stuff under lruvec->lru_lock during migration
Date: Fri, 12 Apr 2024 14:43:53 +0800
Message-ID: <20240412064353.133497-1-zhaoyang.huang@unisoc.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

The livelock in [1] has been reported multiple times since v5.15: a
zero-refcount folio is repeatedly found in the page cache by
find_get_entry. A possible timing sequence is proposed in [2]; briefly,
the lockless xarray lookup can be harmed by an illegal folio remaining
in slot[offset]. This commit protects the xa split steps
(folio_ref_freeze and __split_huge_page) under lruvec->lru_lock to
close the race window.

[1]
[167789.800297] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[167726.780305] rcu: Tasks blocked on level-0 rcu_node (CPUs 0-7): P155
[167726.780319] (detected by 3, t=17256977 jiffies, g=19883597, q=2397394)
[167726.780325] task:kswapd0 state:R running task stack: 24 pid: 155 ppid: 2 flags:0x00000008
[167789.800308] rcu: Tasks blocked on level-0 rcu_node (CPUs 0-7): P155
[167789.800322] (detected by 3, t=17272732 jiffies, g=19883597, q=2397470)
[167789.800328] task:kswapd0 state:R running task stack: 24 pid: 155 ppid: 2 flags:0x00000008
[167789.800339] Call trace:
[167789.800342] dump_backtrace.cfi_jt+0x0/0x8
[167789.800355] show_stack+0x1c/0x2c
[167789.800363] sched_show_task+0x1ac/0x27c
[167789.800370] print_other_cpu_stall+0x314/0x4dc
[167789.800377] check_cpu_stall+0x1c4/0x36c
[167789.800382] rcu_sched_clock_irq+0xe8/0x388
[167789.800389] update_process_times+0xa0/0xe0
[167789.800396] tick_sched_timer+0x7c/0xd4
[167789.800404] __run_hrtimer+0xd8/0x30c
[167789.800408] hrtimer_interrupt+0x1e4/0x2d0
[167789.800414] arch_timer_handler_phys+0x5c/0xa0
[167789.800423] handle_percpu_devid_irq+0xbc/0x318
[167789.800430] handle_domain_irq+0x7c/0xf0
[167789.800437] gic_handle_irq+0x54/0x12c
[167789.800445] call_on_irq_stack+0x40/0x70
[167789.800451] do_interrupt_handler+0x44/0xa0
[167789.800457] el1_interrupt+0x34/0x64
[167789.800464] el1h_64_irq_handler+0x1c/0x2c
[167789.800470] el1h_64_irq+0x7c/0x80
[167789.800474] xas_find+0xb4/0x28c
[167789.800481] find_get_entry+0x3c/0x178
[167789.800487] find_lock_entries+0x98/0x2f8
[167789.800492] __invalidate_mapping_pages.llvm.3657204692649320853+0xc8/0x224
[167789.800500] invalidate_mapping_pages+0x18/0x28
[167789.800506] inode_lru_isolate+0x140/0x2a4
[167789.800512] __list_lru_walk_one+0xd8/0x204
[167789.800519] list_lru_walk_one+0x64/0x90
[167789.800524] prune_icache_sb+0x54/0xe0
[167789.800529] super_cache_scan+0x160/0x1ec
[167789.800535] do_shrink_slab+0x20c/0x5c0
[167789.800541] shrink_slab+0xf0/0x20c
[167789.800546] shrink_node_memcgs+0x98/0x320
[167789.800553] shrink_node+0xe8/0x45c
[167789.800557] balance_pgdat+0x464/0x814
[167789.800563] kswapd+0xfc/0x23c
[167789.800567] kthread+0x164/0x1c8
[167789.800573] ret_from_fork+0x10/0x20

[2]
Thread_isolate:
1. alloc_contig_range->isolate_migratepages_block isolates a number of
   pages to cc->migratepages via pfn (the folio has refcount
   1 + n: alloc_pages, page_cache)
2. alloc_contig_range->migrate_pages->folio_ref_freeze(folio, 1 + extra_pins)
   sets folio->refcnt to 0
3. alloc_contig_range->migrate_pages->xas_split splits the folio into
   each slot, from slot[offset] to slot[offset + sibs]
4. alloc_contig_range->migrate_pages->__split_huge_page->folio_lruvec_lock
   fails to acquire the lock, so the folio is left with refcount 0
   instead of being reset to 2
5. Thread_kswapd enters the livelock via the chain below:
       rcu_read_lock();
   retry:
       find_get_entry
           folio = xas_find
           if (!folio_try_get_rcu)
               xas_reset;
               goto retry;
       rcu_read_unlock();
5'. Thread_holdlock, as the holder of lruvec->lru_lock, could be stalled
    on the same core as Thread_kswapd.
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 mm/huge_memory.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9859aa4f7553..418e8d03480a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2891,7 +2891,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 {
 	struct folio *folio = page_folio(page);
 	struct page *head = &folio->page;
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = folio_lruvec(folio);
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
 	int i, nr_dropped = 0;
@@ -2908,8 +2908,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_lock(&swap_cache->i_pages);
 	}
 
-	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
-	lruvec = folio_lruvec_lock(folio);
 
 	ClearPageHasHWPoisoned(head);
 
@@ -2942,7 +2940,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		folio_set_order(new_folio, new_order);
 	}
 
-	unlock_page_lruvec(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
 	split_page_owner(head, order, new_order);
@@ -2961,7 +2958,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		folio_ref_add(folio, 1 + new_nr);
 		xa_unlock(&folio->mapping->i_pages);
 	}
-	local_irq_enable();
 
 	if (nr_dropped)
 		shmem_uncharge(folio->mapping->host, nr_dropped);
@@ -3048,6 +3044,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	int extra_pins, ret;
 	pgoff_t end;
 	bool is_hzp;
+	struct lruvec *lruvec;
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
@@ -3159,6 +3156,14 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 
 	/* block interrupt reentry in xa_lock and spinlock */
 	local_irq_disable();
+
+	/*
+	 * Take lruvec's lock before freezing the folio to prevent the folio
+	 * from remaining in the page cache with refcnt == 0, which could lead
+	 * to find_get_entry entering a livelock while iterating the xarray.
+	 */
+	lruvec = folio_lruvec_lock(folio);
+
 	if (mapping) {
 		/*
 		 * Check if the folio is present in page cache.
@@ -3203,12 +3208,16 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		}
 
 		__split_huge_page(page, list, end, new_order);
+		unlock_page_lruvec(lruvec);
+		local_irq_enable();
 		ret = 0;
 	} else {
 		spin_unlock(&ds_queue->split_queue_lock);
 fail:
 		if (mapping)
 			xas_unlock(&xas);
+
+		unlock_page_lruvec(lruvec);
 		local_irq_enable();
 		remap_page(folio, folio_nr_pages(folio));
 		ret = -EAGAIN;