From patchwork Tue Jun 20 08:04:35 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hugh Dickins <hughd@google.com>
X-Patchwork-Id: 13285288
Date: Tue, 20 Jun 2023 01:04:35 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton
cc: Gerald Schaefer, Vasily Gorbik, Mike Kravetz, Mike Rapoport,
    "Kirill A. Shutemov", Matthew Wilcox, David Hildenbrand,
    Suren Baghdasaryan, Qi Zheng, Yang Shi, Mel Gorman, Peter Xu,
    Peter Zijlstra, Will Deacon, Yu Zhao, Alistair Popple,
    Ralph Campbell, Ira Weiny, Steven Price, SeongJae Park,
    Lorenzo Stoakes, Huang Ying, Naoya Horiguchi, Christophe Leroy,
    Zack Rusin, Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
    Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
    Song Liu, Thomas Hellstrom, Russell King, "David S. Miller",
    Michael Ellerman, "Aneesh Kumar K.V", Heiko Carstens,
    Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
    Jann Horn, Vishal Moola, Vlastimil Babka,
    linux-arm-kernel@lists.infradead.org, sparclinux@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH mm 10/12] mm/khugepaged: collapse_pte_mapped_thp() with
 mmap_read_lock()
In-Reply-To:
Message-ID: <1bf7bd26-7d6-77a1-150-c9665ad14d71@google.com>
References: <54cb04f-3762-987f-8294-91dafd8ebfb0@google.com>
Bring collapse_and_free_pmd() back into collapse_pte_mapped_thp().
It does need mmap_read_lock(), but it does not need mmap_write_lock(),
nor vma_start_write() nor i_mmap lock nor anon_vma lock.  All racing
paths are relying on pte_offset_map_lock() and pmd_lock(), so use those.

Follow the pattern in retract_page_tables(); and using pte_free_defer()
removes most of the need for tlb_remove_table_sync_one() here; but call
pmdp_get_lockless_sync() to use it in the PAE case.

First check the VMA, in case page tables are being torn down: from JannH.
Confirm the preliminary find_pmd_or_thp_or_none() once page lock has been
acquired and the page looks suitable: from then on its state is stable.

However, collapse_pte_mapped_thp() was doing something others don't:
freeing a page table still containing "valid" entries.  i_mmap lock did
stop a racing truncate from double-freeing those pages, but we prefer
collapse_pte_mapped_thp() to clear the entries as usual.  Their TLB flush
can wait until the pmdp_collapse_flush() which follows, but the
mmu_notifier_invalidate_range_start() has to be done earlier.

Do the "step 1" checking loop without mmu_notifier: it wouldn't be good
for khugepaged to keep on repeatedly invalidating a range which is then
found unsuitable e.g. contains COWs.
"step 2", which does the clearing, must then be more careful (after dropping ptl to do mmu_notifier), with abort prepared to correct the accounting like "step 3". But with those entries now cleared, "step 4" (after dropping ptl to do pmd_lock) is kept safe by the huge page lock, which stops new PTEs from being faulted in. Signed-off-by: Hugh Dickins --- This is the version which applies to mm-everything or linux-next. mm/khugepaged.c | 174 ++++++++++++++++++++++-------------------------- 1 file changed, 78 insertions(+), 96 deletions(-) --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1483,7 +1483,7 @@ static bool khugepaged_add_pte_mapped_th return ret; } -/* hpage must be locked, and mmap_lock must be held in write */ +/* hpage must be locked, and mmap_lock must be held */ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmdp, struct page *hpage) { @@ -1495,7 +1495,7 @@ static int set_huge_pmd(struct vm_area_s }; VM_BUG_ON(!PageTransHuge(hpage)); - mmap_assert_write_locked(vma->vm_mm); + mmap_assert_locked(vma->vm_mm); if (do_set_pmd(&vmf, hpage)) return SCAN_FAIL; @@ -1504,48 +1504,6 @@ static int set_huge_pmd(struct vm_area_s return SCAN_SUCCEED; } -/* - * A note about locking: - * Trying to take the page table spinlocks would be useless here because those - * are only used to synchronize: - * - * - modifying terminal entries (ones that point to a data page, not to another - * page table) - * - installing *new* non-terminal entries - * - * Instead, we need roughly the same kind of protection as free_pgtables() or - * mm_take_all_locks() (but only for a single VMA): - * The mmap lock together with this VMA's rmap locks covers all paths towards - * the page table entries we're messing with here, except for hardware page - * table walks and lockless_pages_from_mm(). 
- */ -static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma, - unsigned long addr, pmd_t *pmdp) -{ - pmd_t pmd; - struct mmu_notifier_range range; - - mmap_assert_write_locked(mm); - if (vma->vm_file) - lockdep_assert_held_write(&vma->vm_file->f_mapping->i_mmap_rwsem); - /* - * All anon_vmas attached to the VMA have the same root and are - * therefore locked by the same lock. - */ - if (vma->anon_vma) - lockdep_assert_held_write(&vma->anon_vma->root->rwsem); - - mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr, - addr + HPAGE_PMD_SIZE); - mmu_notifier_invalidate_range_start(&range); - pmd = pmdp_collapse_flush(vma, addr, pmdp); - tlb_remove_table_sync_one(); - mmu_notifier_invalidate_range_end(&range); - mm_dec_nr_ptes(mm); - page_table_check_pte_clear_range(mm, addr, pmd); - pte_free(mm, pmd_pgtable(pmd)); -} - /** * collapse_pte_mapped_thp - Try to collapse a pte-mapped THP for mm at * address haddr. @@ -1561,26 +1519,29 @@ static void collapse_and_free_pmd(struct int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr, bool install_pmd) { + struct mmu_notifier_range range; + bool notified = false; unsigned long haddr = addr & HPAGE_PMD_MASK; struct vm_area_struct *vma = vma_lookup(mm, haddr); struct page *hpage; pte_t *start_pte, *pte; - pmd_t *pmd; - spinlock_t *ptl; - int count = 0, result = SCAN_FAIL; + pmd_t *pmd, pgt_pmd; + spinlock_t *pml, *ptl; + int nr_ptes = 0, result = SCAN_FAIL; int i; - mmap_assert_write_locked(mm); + mmap_assert_locked(mm); + + /* First check VMA found, in case page tables are being torn down */ + if (!vma || !vma->vm_file || + !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE)) + return SCAN_VMA_CHECK; /* Fast check before locking page if already PMD-mapped */ result = find_pmd_or_thp_or_none(mm, haddr, &pmd); if (result == SCAN_PMD_MAPPED) return result; - if (!vma || !vma->vm_file || - !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE)) - return SCAN_VMA_CHECK; - /* * If we are 
here, we've succeeded in replacing all the native pages * in the page cache with a single hugepage. If a mm were to fault-in @@ -1610,6 +1571,7 @@ int collapse_pte_mapped_thp(struct mm_st goto drop_hpage; } + result = find_pmd_or_thp_or_none(mm, haddr, &pmd); switch (result) { case SCAN_SUCCEED: break; @@ -1623,27 +1585,10 @@ int collapse_pte_mapped_thp(struct mm_st goto drop_hpage; } - /* Lock the vma before taking i_mmap and page table locks */ - vma_start_write(vma); - - /* - * We need to lock the mapping so that from here on, only GUP-fast and - * hardware page walks can access the parts of the page tables that - * we're operating on. - * See collapse_and_free_pmd(). - */ - i_mmap_lock_write(vma->vm_file->f_mapping); - - /* - * This spinlock should be unnecessary: Nobody else should be accessing - * the page tables under spinlock protection here, only - * lockless_pages_from_mm() and the hardware page walker can access page - * tables while all the high-level locks are held in write mode. 
- */ result = SCAN_FAIL; start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl); - if (!start_pte) - goto drop_immap; + if (!start_pte) /* mmap_lock + page lock should prevent this */ + goto drop_hpage; /* step 1: check all mapped PTEs are to the right huge page */ for (i = 0, addr = haddr, pte = start_pte; @@ -1670,10 +1615,18 @@ int collapse_pte_mapped_thp(struct mm_st */ if (hpage + i != page) goto abort; - count++; } - /* step 2: adjust rmap */ + pte_unmap_unlock(start_pte, ptl); + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, + haddr, haddr + HPAGE_PMD_SIZE); + mmu_notifier_invalidate_range_start(&range); + notified = true; + start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl); + if (!start_pte) /* mmap_lock + page lock should prevent this */ + goto abort; + + /* step 2: clear page table and adjust rmap */ for (i = 0, addr = haddr, pte = start_pte; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) { struct page *page; @@ -1681,47 +1634,76 @@ int collapse_pte_mapped_thp(struct mm_st if (pte_none(ptent)) continue; + /* + * We dropped ptl after the first scan, to do the mmu_notifier: + * page lock stops more PTEs of the hpage being faulted in, but + * does not stop write faults COWing anon copies from existing + * PTEs; and does not stop those being swapped out or migrated. + */ + if (!pte_present(ptent)) { + result = SCAN_PTE_NON_PRESENT; + goto abort; + } page = vm_normal_page(vma, addr, ptent); - if (WARN_ON_ONCE(page && is_zone_device_page(page))) + if (hpage + i != page) goto abort; + + /* + * Must clear entry, or a racing truncate may re-remove it. + * TLB flush can be left until pmdp_collapse_flush() does it. + * PTE dirty? Shmem page is already dirty; file is read-only. + */ + pte_clear(mm, addr, pte); page_remove_rmap(page, vma, false); + nr_ptes++; } pte_unmap_unlock(start_pte, ptl); /* step 3: set proper refcount and mm_counters. 
*/ - if (count) { - page_ref_sub(hpage, count); - add_mm_counter(vma->vm_mm, mm_counter_file(hpage), -count); - } + if (nr_ptes) { + page_ref_sub(hpage, nr_ptes); + add_mm_counter(mm, mm_counter_file(hpage), -nr_ptes); + } + + /* step 4: remove page table */ + + /* Huge page lock is still held, so page table must remain empty */ + pml = pmd_lock(mm, pmd); + if (ptl != pml) + spin_lock_nested(ptl, SINGLE_DEPTH_NESTING); + pgt_pmd = pmdp_collapse_flush(vma, haddr, pmd); + pmdp_get_lockless_sync(); + if (ptl != pml) + spin_unlock(ptl); + spin_unlock(pml); - /* step 4: remove pte entries */ - /* we make no change to anon, but protect concurrent anon page lookup */ - if (vma->anon_vma) - anon_vma_lock_write(vma->anon_vma); - - collapse_and_free_pmd(mm, vma, haddr, pmd); + mmu_notifier_invalidate_range_end(&range); - if (vma->anon_vma) - anon_vma_unlock_write(vma->anon_vma); - i_mmap_unlock_write(vma->vm_file->f_mapping); + mm_dec_nr_ptes(mm); + page_table_check_pte_clear_range(mm, haddr, pgt_pmd); + pte_free_defer(mm, pmd_pgtable(pgt_pmd)); maybe_install_pmd: /* step 5: install pmd entry */ result = install_pmd ? 
set_huge_pmd(vma, haddr, pmd, hpage) : SCAN_SUCCEED; - + goto drop_hpage; +abort: + if (nr_ptes) { + flush_tlb_mm(mm); + page_ref_sub(hpage, nr_ptes); + add_mm_counter(mm, mm_counter_file(hpage), -nr_ptes); + } + if (start_pte) + pte_unmap_unlock(start_pte, ptl); + if (notified) + mmu_notifier_invalidate_range_end(&range); drop_hpage: unlock_page(hpage); put_page(hpage); return result; - -abort: - pte_unmap_unlock(start_pte, ptl); -drop_immap: - i_mmap_unlock_write(vma->vm_file->f_mapping); - goto drop_hpage; } static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_slot) @@ -2856,9 +2838,9 @@ handle_result: case SCAN_PTE_MAPPED_HUGEPAGE: BUG_ON(mmap_locked); BUG_ON(*prev); - mmap_write_lock(mm); + mmap_read_lock(mm); result = collapse_pte_mapped_thp(mm, addr, true); - mmap_write_unlock(mm); + mmap_locked = true; goto handle_result; /* Whitelisted set of results where continuing OK */ case SCAN_PMD_NULL: