From patchwork Fri Dec 9 17:00:55 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13070018
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Miaohe Lin, David Hildenbrand, Nadav Amit,
    peterx@redhat.com, Andrea Arcangeli, Jann Horn, John Hubbard,
    Mike Kravetz, James Houghton, Rik van Riel, Muchun Song
Subject: [PATCH v3 4/9] mm/hugetlb: Move swap entry handling into vma lock when faulted
Date: Fri, 9 Dec 2022 12:00:55 -0500
Message-Id: <20221209170100.973970-5-peterx@redhat.com>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20221209170100.973970-1-peterx@redhat.com>
References: <20221209170100.973970-1-peterx@redhat.com>

In hugetlb_fault(), there used to be a special path at the entrance that
handled swap entries using huge_pte_offset().  That is unsafe because,
for a pmd sharable range, huge_pte_offset() can access freed pgtables
when no lock protects the pgtable from being freed after pmd unshare.

The simplest way to make this safe is to move the swap handling until
after the vma lock is held.  We may now need to take the fault mutex on
either migration or hwpoison entries (and also the vma lock, but that is
exactly what is needed); however, neither of them is a hot path.

Note that the vma lock cannot be released in hugetlb_fault() when a
migration entry is detected, because migration_entry_wait_huge() will use
the pgtable page again (by taking the pgtable lock), so that use also
needs to be protected by the vma lock.  Modify migration_entry_wait_huge()
so that it must be called with the vma read lock held, and release the
lock in __migration_entry_wait_huge().
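To make the lock ordering concrete, here is a minimal user-space sketch of
the protocol after this patch, using pthread primitives as stand-ins
(fault_mutex models hugetlb_fault_mutex_table[hash], vma_lock models the
hugetlb vma lock taken in read mode; all names here are illustrative only,
not kernel API):

	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	static pthread_mutex_t fault_mutex = PTHREAD_MUTEX_INITIALIZER;
	static pthread_rwlock_t vma_lock = PTHREAD_RWLOCK_INITIALIZER;

	/* Pretend the faulted pte is a migration entry (arbitrary here). */
	static bool entry_is_migration(void)
	{
		return true;
	}

	/* Models migration_entry_wait_huge(): must be entered with vma_lock
	 * read-held; releases it once the (modeled) pgtable access is done. */
	static void wait_for_migration(void)
	{
		/* ...would take the pgtable spinlock here, under vma_lock... */
		pthread_rwlock_unlock(&vma_lock);  /* lock hand-off ends here */
		/* ...then sleep until the migration completes... */
	}

	/* Models the hugetlb_fault() flow after this patch. */
	static void fault_path(void)
	{
		pthread_mutex_lock(&fault_mutex);  /* serialize instantiation */
		pthread_rwlock_rdlock(&vma_lock);  /* pin pgtables vs. unshare */

		if (entry_is_migration()) {
			/* The fault mutex is not needed while waiting... */
			pthread_mutex_unlock(&fault_mutex);
			/* ...but vma_lock stays held across the call, since
			 * the wait path still dereferences the page table. */
			wait_for_migration();
			return;
		}

		/* Normal fault handling would happen here. */
		pthread_rwlock_unlock(&vma_lock);
		pthread_mutex_unlock(&fault_mutex);
	}

	int main(void)
	{
		fault_path();
		puts("fault path done");
		return 0;
	}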
Reviewed-by: Mike Kravetz
Reviewed-by: John Hubbard
Signed-off-by: Peter Xu
---
 include/linux/swapops.h |  6 ++++--
 mm/hugetlb.c            | 37 ++++++++++++++++---------------------
 mm/migrate.c            | 25 +++++++++++++++++++++----
 3 files changed, 41 insertions(+), 27 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index a70b5c3a68d7..b134c5eb75cb 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -337,7 +337,8 @@ extern void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 					unsigned long address);
 #ifdef CONFIG_HUGETLB_PAGE
-extern void __migration_entry_wait_huge(pte_t *ptep, spinlock_t *ptl);
+extern void __migration_entry_wait_huge(struct vm_area_struct *vma,
+					pte_t *ptep, spinlock_t *ptl);
 extern void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte);
 #endif	/* CONFIG_HUGETLB_PAGE */
 #else  /* CONFIG_MIGRATION */
@@ -366,7 +367,8 @@ static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 					unsigned long address) { }
 #ifdef CONFIG_HUGETLB_PAGE
-static inline void __migration_entry_wait_huge(pte_t *ptep, spinlock_t *ptl) { }
+static inline void __migration_entry_wait_huge(struct vm_area_struct *vma,
+					pte_t *ptep, spinlock_t *ptl) { }
 static inline void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte) { }
 #endif	/* CONFIG_HUGETLB_PAGE */
 static inline int is_writable_migration_entry(swp_entry_t entry)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c8a6673fe5b4..247702eb9f88 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5824,22 +5824,6 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	int need_wait_lock = 0;
 	unsigned long haddr = address & huge_page_mask(h);
 
-	ptep = huge_pte_offset(mm, haddr, huge_page_size(h));
-	if (ptep) {
-		/*
-		 * Since we hold no locks, ptep could be stale.  That is
-		 * OK as we are only making decisions based on content and
-		 * not actually modifying content here.
-		 */
-		entry = huge_ptep_get(ptep);
-		if (unlikely(is_hugetlb_entry_migration(entry))) {
-			migration_entry_wait_huge(vma, ptep);
-			return 0;
-		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
-			return VM_FAULT_HWPOISON_LARGE |
-				VM_FAULT_SET_HINDEX(hstate_index(h));
-	}
-
 	/*
 	 * Serialize hugepage allocation and instantiation, so that we don't
 	 * get spurious allocation failures if two CPUs race to instantiate
@@ -5854,10 +5838,6 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * Acquire vma lock before calling huge_pte_alloc and hold
 	 * until finished with ptep.  This prevents huge_pmd_unshare from
 	 * being called elsewhere and making the ptep no longer valid.
-	 *
-	 * ptep could have already be assigned via huge_pte_offset.  That
-	 * is OK, as huge_pte_alloc will return the same value unless
-	 * something has changed.
 	 */
 	hugetlb_vma_lock_read(vma);
 	ptep = huge_pte_alloc(mm, vma, haddr, huge_page_size(h));
@@ -5886,8 +5866,23 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * fault, and is_hugetlb_entry_(migration|hwpoisoned) check will
 	 * properly handle it.
 	 */
-	if (!pte_present(entry))
+	if (!pte_present(entry)) {
+		if (unlikely(is_hugetlb_entry_migration(entry))) {
+			/*
+			 * Release the hugetlb fault lock now, but retain
+			 * the vma lock, because it is needed to guard the
+			 * huge_pte_lockptr() later in
+			 * migration_entry_wait_huge().  The vma lock will
+			 * be released there.
+			 */
+			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+			migration_entry_wait_huge(vma, ptep);
+			return 0;
+		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
+			ret = VM_FAULT_HWPOISON_LARGE |
+			    VM_FAULT_SET_HINDEX(hstate_index(h));
 		goto out_mutex;
+	}
 
 	/*
 	 * If we are going to COW/unshare the mapping later, we examine the
diff --git a/mm/migrate.c b/mm/migrate.c
index 48584b032ea9..9c4e3a833449 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -333,24 +333,41 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 }
 
 #ifdef CONFIG_HUGETLB_PAGE
-void __migration_entry_wait_huge(pte_t *ptep, spinlock_t *ptl)
+/*
+ * The vma read lock must be held upon entry. Holding that lock prevents either
+ * the pte or the ptl from being freed.
+ *
+ * This function will release the vma lock before returning.
+ */
+void __migration_entry_wait_huge(struct vm_area_struct *vma,
+				 pte_t *ptep, spinlock_t *ptl)
 {
 	pte_t pte;
 
+	hugetlb_vma_assert_locked(vma);
 	spin_lock(ptl);
 	pte = huge_ptep_get(ptep);
 
-	if (unlikely(!is_hugetlb_entry_migration(pte)))
+	if (unlikely(!is_hugetlb_entry_migration(pte))) {
 		spin_unlock(ptl);
-	else
+		hugetlb_vma_unlock_read(vma);
+	} else {
+		/*
+		 * If migration entry existed, safe to release vma lock
+		 * here because the pgtable page won't be freed without the
+		 * pgtable lock released.  See comment right above pgtable
+		 * lock release in migration_entry_wait_on_locked().
+		 */
+		hugetlb_vma_unlock_read(vma);
 		migration_entry_wait_on_locked(pte_to_swp_entry(pte), NULL, ptl);
+	}
 }
 
 void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte)
 {
 	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), vma->vm_mm, pte);
 
-	__migration_entry_wait_huge(pte, ptl);
+	__migration_entry_wait_huge(vma, pte, ptl);
 }
 #endif
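
The design choice in __migration_entry_wait_huge() above is the classic
lock-chaining idiom: the outer (vma) lock keeps the page table and its
spinlock alive long enough to take the inner (pgtable) lock, and once the
inner lock is held the outer one can be dropped, because the freeing path
must itself acquire the inner lock first. A generic user-space sketch of
that idiom, with all names hypothetical and pthread locks standing in for
the kernel primitives:

	#include <pthread.h>

	/* inner models the pgtable spinlock (ptl); outer models the vma lock. */
	struct object {
		pthread_mutex_t inner;
		int data;
	};

	static pthread_rwlock_t outer = PTHREAD_RWLOCK_INITIALIZER;

	static void use_object(struct object *obj)
	{
		pthread_rwlock_rdlock(&outer);   /* obj cannot go away while held */
		pthread_mutex_lock(&obj->inner); /* chain: inner taken under outer */
		pthread_rwlock_unlock(&outer);   /* safe: teardown also needs inner */

		obj->data++;                     /* long work under inner lock only */
		pthread_mutex_unlock(&obj->inner);
	}

	int main(void)
	{
		struct object obj = { PTHREAD_MUTEX_INITIALIZER, 0 };
		use_object(&obj);
		return 0;
	}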