From patchwork Tue Jun 20 07:54:56 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13285282
Date: Tue, 20 Jun 2023 00:54:56 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Gerald Schaefer, Vasily Gorbik, Mike Kravetz, Mike Rapoport,
    "Kirill A. Shutemov", Matthew Wilcox, David Hildenbrand,
    Suren Baghdasaryan, Qi Zheng, Yang Shi, Mel Gorman, Peter Xu,
    Peter Zijlstra, Will Deacon, Yu Zhao, Alistair Popple,
    Ralph Campbell, Ira Weiny, Steven Price, SeongJae Park,
    Lorenzo Stoakes, Huang Ying, Naoya Horiguchi, Christophe Leroy,
    Zack Rusin, Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
    Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
    Song Liu, Thomas Hellstrom, Russell King, "David S. Miller",
    Michael Ellerman, "Aneesh Kumar K.V", Heiko Carstens,
    Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
    Jann Horn, Vishal Moola, Vlastimil Babka,
    linux-arm-kernel@lists.infradead.org, sparclinux@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock
In-Reply-To: <54cb04f-3762-987f-8294-91dafd8ebfb0@google.com>
References: <54cb04f-3762-987f-8294-91dafd8ebfb0@google.com>
MIME-Version: 1.0
Simplify shmem and file THP collapse's retract_page_tables(), and relax
its locking: to improve its success rate and to lessen impact on others.

Instead of its MADV_COLLAPSE case doing set_huge_pmd() at target_addr of
target_mm, leave that part of the work to madvise_collapse() calling
collapse_pte_mapped_thp() afterwards: just adjust collapse_file()'s
result code to arrange for that.  That spares retract_page_tables() four
arguments; and since it will be successful in retracting all of the page
tables expected of it, no need to track and return a result code itself.

It needs i_mmap_lock_read(mapping) for traversing the vma interval tree,
but it does not need i_mmap_lock_write() for that: page_vma_mapped_walk()
allows for pte_offset_map_lock() etc to fail, and uses pmd_lock() for
THPs.  retract_page_tables() just needs to use those same spinlocks to
exclude it briefly, while transitioning pmd from page table to none: so
restore its use of pmd_lock() inside of which pte lock is nested.

Users of pte_offset_map_lock() etc all now allow for them to fail: so
retract_page_tables() now has no use for mmap_write_trylock() or
vma_try_start_write().  In common with rmap and page_vma_mapped_walk(),
it does not even need the mmap_read_lock().
But those users do expect the page table to remain a good page table,
until they unlock and rcu_read_unlock(): so the page table cannot be
freed immediately, but rather by the recently added pte_free_defer().

Use the (usually a no-op) pmdp_get_lockless_sync() to send an interrupt
when PAE, and pmdp_collapse_flush() did not already do so: to make sure
that the start,pmdp_get_lockless(),end sequence in __pte_offset_map()
cannot pick up a pmd entry with mismatched pmd_low and pmd_high.

retract_page_tables() can be enhanced to replace_page_tables(), which
inserts the final huge pmd without mmap lock: going through an invalid
state instead of pmd_none() followed by fault.  But that enhancement
does raise some more questions: leave it until a later release.

Signed-off-by: Hugh Dickins
---
 mm/khugepaged.c | 184 ++++++++++++++++++++----------------------------
 1 file changed, 75 insertions(+), 109 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1083f0e38a07..f7a0f7673127 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1617,9 +1617,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		break;
 	case SCAN_PMD_NONE:
 		/*
-		 * In MADV_COLLAPSE path, possible race with khugepaged where
-		 * all pte entries have been removed and pmd cleared. If so,
-		 * skip all the pte checks and just update the pmd mapping.
+		 * All pte entries have been removed and pmd cleared.
+		 * Skip all the pte checks and just update the pmd mapping.
		 */
		goto maybe_install_pmd;
	default:
@@ -1748,123 +1747,88 @@ static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_sl
 	mmap_write_unlock(mm);
 }
 
-static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
-			       struct mm_struct *target_mm,
-			       unsigned long target_addr, struct page *hpage,
-			       struct collapse_control *cc)
+static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 {
 	struct vm_area_struct *vma;
-	int target_result = SCAN_FAIL;
 
-	i_mmap_lock_write(mapping);
+	i_mmap_lock_read(mapping);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
-		int result = SCAN_FAIL;
-		struct mm_struct *mm = NULL;
-		unsigned long addr = 0;
-		pmd_t *pmd;
-		bool is_target = false;
+		struct mmu_notifier_range range;
+		struct mm_struct *mm;
+		unsigned long addr;
+		pmd_t *pmd, pgt_pmd;
+		spinlock_t *pml;
+		spinlock_t *ptl;
+		bool skipped_uffd = false;
 
 		/*
 		 * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
-		 * got written to. These VMAs are likely not worth investing
-		 * mmap_write_lock(mm) as PMD-mapping is likely to be split
-		 * later.
-		 *
-		 * Note that vma->anon_vma check is racy: it can be set up after
-		 * the check but before we took mmap_lock by the fault path.
-		 * But page lock would prevent establishing any new ptes of the
-		 * page, so we are safe.
-		 *
-		 * An alternative would be drop the check, but check that page
-		 * table is clear before calling pmdp_collapse_flush() under
-		 * ptl. It has higher chance to recover THP for the VMA, but
-		 * has higher cost too. It would also probably require locking
-		 * the anon_vma.
+		 * got written to. These VMAs are likely not worth removing
+		 * page tables from, as PMD-mapping is likely to be split later.
 		 */
-		if (READ_ONCE(vma->anon_vma)) {
-			result = SCAN_PAGE_ANON;
-			goto next;
-		}
+		if (READ_ONCE(vma->anon_vma))
+			continue;
+
 		addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		if (addr & ~HPAGE_PMD_MASK ||
-		    vma->vm_end < addr + HPAGE_PMD_SIZE) {
-			result = SCAN_VMA_CHECK;
-			goto next;
-		}
-		mm = vma->vm_mm;
-		is_target = mm == target_mm && addr == target_addr;
-		result = find_pmd_or_thp_or_none(mm, addr, &pmd);
-		if (result != SCAN_SUCCEED)
-			goto next;
-		/*
-		 * We need exclusive mmap_lock to retract page table.
-		 *
-		 * We use trylock due to lock inversion: we need to acquire
-		 * mmap_lock while holding page lock. Fault path does it in
-		 * reverse order. Trylock is a way to avoid deadlock.
-		 *
-		 * Also, it's not MADV_COLLAPSE's job to collapse other
-		 * mappings - let khugepaged take care of them later.
-		 */
-		result = SCAN_PTE_MAPPED_HUGEPAGE;
-		if ((cc->is_khugepaged || is_target) &&
-		    mmap_write_trylock(mm)) {
-			/* trylock for the same lock inversion as above */
-			if (!vma_try_start_write(vma))
-				goto unlock_next;
-
-			/*
-			 * Re-check whether we have an ->anon_vma, because
-			 * collapse_and_free_pmd() requires that either no
-			 * ->anon_vma exists or the anon_vma is locked.
-			 * We already checked ->anon_vma above, but that check
-			 * is racy because ->anon_vma can be populated under the
-			 * mmap lock in read mode.
-			 */
-			if (vma->anon_vma) {
-				result = SCAN_PAGE_ANON;
-				goto unlock_next;
-			}
-			/*
-			 * When a vma is registered with uffd-wp, we can't
-			 * recycle the pmd pgtable because there can be pte
-			 * markers installed. Skip it only, so the rest mm/vma
-			 * can still have the same file mapped hugely, however
-			 * it'll always mapped in small page size for uffd-wp
-			 * registered ranges.
-			 */
-			if (hpage_collapse_test_exit(mm)) {
-				result = SCAN_ANY_PROCESS;
-				goto unlock_next;
-			}
-			if (userfaultfd_wp(vma)) {
-				result = SCAN_PTE_UFFD_WP;
-				goto unlock_next;
-			}
-			collapse_and_free_pmd(mm, vma, addr, pmd);
-			if (!cc->is_khugepaged && is_target)
-				result = set_huge_pmd(vma, addr, pmd, hpage);
-			else
-				result = SCAN_SUCCEED;
-
-unlock_next:
-			mmap_write_unlock(mm);
-			goto next;
-		}
-		/*
-		 * Calling context will handle target mm/addr. Otherwise, let
-		 * khugepaged try again later.
-		 */
-		if (!is_target) {
-			khugepaged_add_pte_mapped_thp(mm, addr);
+		    vma->vm_end < addr + HPAGE_PMD_SIZE)
 			continue;
+
+		mm = vma->vm_mm;
+		if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
+			continue;
+
+		if (hpage_collapse_test_exit(mm))
+			continue;
+		/*
+		 * When a vma is registered with uffd-wp, we cannot recycle
+		 * the page table because there may be pte markers installed.
+		 * Other vmas can still have the same file mapped hugely, but
+		 * skip this one: it will always be mapped in small page size
+		 * for uffd-wp registered ranges.
+		 */
+		if (userfaultfd_wp(vma))
+			continue;
+
+		/* PTEs were notified when unmapped; but now for the PMD? */
+		mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
+					addr, addr + HPAGE_PMD_SIZE);
+		mmu_notifier_invalidate_range_start(&range);
+
+		pml = pmd_lock(mm, pmd);
+		ptl = pte_lockptr(mm, pmd);
+		if (ptl != pml)
+			spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+
+		/*
+		 * Huge page lock is still held, so normally the page table
+		 * must remain empty; and we have already skipped anon_vma
+		 * and userfaultfd_wp() vmas. But since the mmap_lock is not
+		 * held, it is still possible for a racing userfaultfd_ioctl()
+		 * to have inserted ptes or markers. Now that we hold ptlock,
+		 * repeating the anon_vma check protects from one category,
+		 * and repeating the userfaultfd_wp() check from another.
+		 */
+		if (unlikely(vma->anon_vma || userfaultfd_wp(vma))) {
+			skipped_uffd = true;
+		} else {
+			pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
+			pmdp_get_lockless_sync();
+		}
+
+		if (ptl != pml)
+			spin_unlock(ptl);
+		spin_unlock(pml);
+
+		mmu_notifier_invalidate_range_end(&range);
+
+		if (!skipped_uffd) {
+			mm_dec_nr_ptes(mm);
+			page_table_check_pte_clear_range(mm, addr, pgt_pmd);
+			pte_free_defer(mm, pmd_pgtable(pgt_pmd));
 		}
-next:
-		if (is_target)
-			target_result = result;
 	}
-	i_mmap_unlock_write(mapping);
-	return target_result;
+	i_mmap_unlock_read(mapping);
 }
 
 /**
@@ -2261,9 +2225,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 
 	/*
 	 * Remove pte page tables, so we can re-fault the page as huge.
+	 * If MADV_COLLAPSE, adjust result to call collapse_pte_mapped_thp().
 	 */
-	result = retract_page_tables(mapping, start, mm, addr, hpage,
-				     cc);
+	retract_page_tables(mapping, start);
+	if (cc && !cc->is_khugepaged)
+		result = SCAN_PTE_MAPPED_HUGEPAGE;
 	unlock_page(hpage);
 
 	/*