From patchwork Mon Nov 28 18:02:50 2022
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 13057844
From: Jann Horn
To: security@kernel.org, Andrew Morton
Cc: Yang Shi, David Hildenbrand, Peter Xu, John
 Hubbard, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 1/3] mm/khugepaged: Take the right locks for page table retraction
Date: Mon, 28 Nov 2022 19:02:50 +0100
Message-Id: <20221128180252.1684965-1-jannh@google.com>

Pagetable walks on address ranges mapped by VMAs can be done under the
mmap lock, the lock of an anon_vma attached to the VMA, or the lock of the
VMA's address_space. Only one of these needs to be held, and it does not
need to be held in exclusive mode.

Under those circumstances, the rules for concurrent access to page table
entries are:

 - Terminal page table entries (entries that don't point to another page
   table) can be arbitrarily changed under the page table lock, with the
   exception that they always need to be consistent for hardware page
   table walks and lockless_pages_from_mm(). This includes that they can
   be changed into non-terminal entries.
 - Non-terminal page table entries (which point to another page table)
   can not be modified; readers are allowed to READ_ONCE() an entry,
   verify that it is non-terminal, and then assume that its value will
   stay as-is.

Retracting a page table involves modifying a non-terminal entry, so
page-table-level locks are insufficient to protect against concurrent
page table traversal; it requires taking all the higher-level locks under
which it is possible to start a page walk in the relevant range in
exclusive mode.

The collapse_huge_page() path for anonymous THP already follows this
rule, but the shmem/file THP path was getting it wrong, making it
possible for concurrent rmap-based operations to cause corruption.
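To make the reader side of that rule concrete, here is a minimal sketch
(illustrative only, not part of this patch; pte_mapped_at() is a made-up
helper) of an rmap-side walker in the style of page_vma_mapped_walk():
with only one of the rmap locks held, in read mode, it samples the
non-terminal PMD entry once with READ_ONCE() and then keeps using the page
table it points to, which is what retraction has to defend against by
taking those locks in write mode.

#include <linux/mm.h>

/*
 * Hypothetical helper, not in the kernel tree: the caller holds the VMA's
 * anon_vma lock or its mapping's i_mmap_rwsem; read mode is sufficient.
 */
static bool pte_mapped_at(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd = pgd_offset(mm, addr);
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;
	pmd_t pmdval;
	pte_t *pte;
	bool present;

	if (pgd_none_or_clear_bad(pgd))
		return false;
	p4d = p4d_offset(pgd, addr);
	if (p4d_none_or_clear_bad(p4d))
		return false;
	pud = pud_offset(p4d, addr);
	if (pud_none_or_clear_bad(pud))
		return false;
	pmd = pmd_offset(pud, addr);

	/*
	 * Sample the non-terminal PMD entry exactly once. Under the rule
	 * above, a value that points to a page table is assumed to keep
	 * pointing to it for as long as the rmap lock is held ...
	 */
	pmdval = READ_ONCE(*pmd);
	if (!pmd_present(pmdval) || pmd_trans_huge(pmdval))
		return false;

	/*
	 * ... so the page table is dereferenced without any page-table-level
	 * lock. This is why retraction must hold the mmap lock and all rmap
	 * locks of the VMA in write mode; a pmd_lock() around
	 * pmdp_collapse_flush() would not stop this walk.
	 */
	pte = pte_offset_map(pmd, addr);
	present = pte_present(*pte);
	pte_unmap(pte);
	return present;
}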
Cc: stable@kernel.org
Fixes: 27e1f8273113 ("khugepaged: enable collapse pmd for pte-mapped THP")
Acked-by: David Hildenbrand
Signed-off-by: Jann Horn
Reviewed-by: Yang Shi
---
v4: added ack by David Hildenbrand

 mm/khugepaged.c | 55 +++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 51 insertions(+), 4 deletions(-)

base-commit: eb7081409f94a9a8608593d0fb63a1aa3d6f95d8

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 4734315f79407..674b111a24fa7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1384,16 +1384,37 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
 	return SCAN_SUCCEED;
 }
 
+/*
+ * A note about locking:
+ * Trying to take the page table spinlocks would be useless here because those
+ * are only used to synchronize:
+ *
+ * - modifying terminal entries (ones that point to a data page, not to another
+ *   page table)
+ * - installing *new* non-terminal entries
+ *
+ * Instead, we need roughly the same kind of protection as free_pgtables() or
+ * mm_take_all_locks() (but only for a single VMA):
+ * The mmap lock together with this VMA's rmap locks covers all paths towards
+ * the page table entries we're messing with here, except for hardware page
+ * table walks and lockless_pages_from_mm().
+ */
 static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 				  unsigned long addr, pmd_t *pmdp)
 {
-	spinlock_t *ptl;
 	pmd_t pmd;
 
 	mmap_assert_write_locked(mm);
-	ptl = pmd_lock(vma->vm_mm, pmdp);
+	if (vma->vm_file)
+		lockdep_assert_held_write(&vma->vm_file->f_mapping->i_mmap_rwsem);
+	/*
+	 * All anon_vmas attached to the VMA have the same root and are
+	 * therefore locked by the same lock.
+	 */
+	if (vma->anon_vma)
+		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
+
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
-	spin_unlock(ptl);
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));
@@ -1444,6 +1465,14 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false))
 		return SCAN_VMA_CHECK;
 
+	/*
+	 * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
+	 * that got written to. Without this, we'd have to also lock the
+	 * anon_vma if one exists.
+	 */
+	if (vma->anon_vma)
+		return SCAN_VMA_CHECK;
+
 	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
 	if (userfaultfd_wp(vma))
 		return SCAN_PTE_UFFD_WP;
@@ -1477,6 +1506,20 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		goto drop_hpage;
 	}
 
+	/*
+	 * We need to lock the mapping so that from here on, only GUP-fast and
+	 * hardware page walks can access the parts of the page tables that
+	 * we're operating on.
+	 * See collapse_and_free_pmd().
+	 */
+	i_mmap_lock_write(vma->vm_file->f_mapping);
+
+	/*
+	 * This spinlock should be unnecessary: Nobody else should be accessing
+	 * the page tables under spinlock protection here, only
+	 * lockless_pages_from_mm() and the hardware page walker can access page
+	 * tables while all the high-level locks are held in write mode.
+	 */
 	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
 	result = SCAN_FAIL;
 
@@ -1531,6 +1574,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	/* step 4: remove pte entries */
 	collapse_and_free_pmd(mm, vma, haddr, pmd);
 
+	i_mmap_unlock_write(vma->vm_file->f_mapping);
+
 maybe_install_pmd:
 	/* step 5: install pmd entry */
 	result = install_pmd
@@ -1544,6 +1589,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 
 abort:
 	pte_unmap_unlock(start_pte, ptl);
+	i_mmap_unlock_write(vma->vm_file->f_mapping);
 	goto drop_hpage;
 }
 
@@ -1600,7 +1646,8 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
 		 * An alternative would be drop the check, but check that page
 		 * table is clear before calling pmdp_collapse_flush() under
 		 * ptl. It has higher chance to recover THP for the VMA, but
-		 * has higher cost too.
+		 * has higher cost too. It would also probably require locking
+		 * the anon_vma.
 		 */
		if (vma->anon_vma) {
			result = SCAN_PAGE_ANON;
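As a closing note, the caller contract that the new lockdep assertions in
collapse_and_free_pmd() encode can be condensed into the following sketch
(illustrative only, not part of the patch; retract_one_pmd() is a
hypothetical name, written as if it lived in mm/khugepaged.c) of what a
file-THP caller such as collapse_pte_mapped_thp() now does around the
retraction:

/*
 * Hypothetical caller sketch, assuming the context of mm/khugepaged.c:
 * the locking a file-THP caller must provide before retracting a page
 * table on a VMA that has no anon_vma.
 */
static void retract_one_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
			    unsigned long haddr, pmd_t *pmd)
{
	struct address_space *mapping = vma->vm_file->f_mapping;

	/* The caller already holds the mmap lock in write mode. */
	mmap_assert_write_locked(mm);
	/* VMAs with an anon_vma are skipped, so only i_mmap_rwsem is needed. */
	VM_WARN_ON(vma->anon_vma);

	/*
	 * Take the mapping's rmap lock in write mode so that no new page
	 * table walk can start in this range; only GUP-fast and hardware
	 * walkers can still reach the page table being freed.
	 */
	i_mmap_lock_write(mapping);
	collapse_and_free_pmd(mm, vma, haddr, pmd);
	i_mmap_unlock_write(mapping);
}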