From patchwork Tue Nov 29 15:47:28 2022
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 13058730
From: Jann Horn
To: security@kernel.org, Andrew Morton
Cc: Yang Shi, David Hildenbrand, Peter Xu, John Hubbard,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v5 1/3] mm/khugepaged: Take the right locks for page table retraction
Date: Tue, 29 Nov 2022 16:47:28 +0100
Message-Id: <20221129154730.2274278-1-jannh@google.com>

Page table walks on address ranges mapped by VMAs can be done under the
mmap lock, the lock of an anon_vma attached to the VMA, or the lock of
the VMA's address_space. Only one of these needs to be held, and it does
not need to be held in exclusive mode.
Under those circumstances, the rules for concurrent access to page table
entries are:

- Terminal page table entries (entries that don't point to another page
  table) can be arbitrarily changed under the page table lock, with the
  exception that they always need to be consistent for hardware page
  table walks and lockless_pages_from_mm().
  This includes that they can be changed into non-terminal entries.
- Non-terminal page table entries (which point to another page table)
  can not be modified; readers are allowed to READ_ONCE() an entry,
  verify that it is non-terminal, and then assume that its value will
  stay as-is.

Retracting a page table involves modifying a non-terminal entry, so
page-table-level locks are insufficient to protect against concurrent
page table traversal; it requires taking all the higher-level locks
under which it is possible to start a page walk in the relevant range
in exclusive mode.

The collapse_huge_page() path for anonymous THP already follows this
rule, but the shmem/file THP path was getting it wrong, making it
possible for concurrent rmap-based operations to cause corruption.
Cc: stable@kernel.org
Fixes: 27e1f8273113 ("khugepaged: enable collapse pmd for pte-mapped THP")
Acked-by: David Hildenbrand
Reviewed-by: Yang Shi
Signed-off-by: Jann Horn
---

Notes:
    v4: added ack by David Hildenbrand
    v5: added reviewed-by by Yang Shi

 mm/khugepaged.c | 55 +++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 51 insertions(+), 4 deletions(-)

base-commit: eb7081409f94a9a8608593d0fb63a1aa3d6f95d8

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 4734315f79407..674b111a24fa7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1384,16 +1384,37 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
 	return SCAN_SUCCEED;
 }
 
+/*
+ * A note about locking:
+ * Trying to take the page table spinlocks would be useless here because those
+ * are only used to synchronize:
+ *
+ * - modifying terminal entries (ones that point to a data page, not to another
+ *   page table)
+ * - installing *new* non-terminal entries
+ *
+ * Instead, we need roughly the same kind of protection as free_pgtables() or
+ * mm_take_all_locks() (but only for a single VMA):
+ * The mmap lock together with this VMA's rmap locks covers all paths towards
+ * the page table entries we're messing with here, except for hardware page
+ * table walks and lockless_pages_from_mm().
+ */
 static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 				  unsigned long addr, pmd_t *pmdp)
 {
-	spinlock_t *ptl;
 	pmd_t pmd;
 
 	mmap_assert_write_locked(mm);
-	ptl = pmd_lock(vma->vm_mm, pmdp);
+	if (vma->vm_file)
+		lockdep_assert_held_write(&vma->vm_file->f_mapping->i_mmap_rwsem);
+	/*
+	 * All anon_vmas attached to the VMA have the same root and are
+	 * therefore locked by the same lock.
+	 */
+	if (vma->anon_vma)
+		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
+
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
-	spin_unlock(ptl);
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));
@@ -1444,6 +1465,14 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false))
 		return SCAN_VMA_CHECK;
 
+	/*
+	 * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
+	 * that got written to. Without this, we'd have to also lock the
+	 * anon_vma if one exists.
+	 */
+	if (vma->anon_vma)
+		return SCAN_VMA_CHECK;
+
 	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
 	if (userfaultfd_wp(vma))
 		return SCAN_PTE_UFFD_WP;
@@ -1477,6 +1506,20 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		goto drop_hpage;
 	}
 
+	/*
+	 * We need to lock the mapping so that from here on, only GUP-fast and
+	 * hardware page walks can access the parts of the page tables that
+	 * we're operating on.
+	 * See collapse_and_free_pmd().
+	 */
+	i_mmap_lock_write(vma->vm_file->f_mapping);
+
+	/*
+	 * This spinlock should be unnecessary: Nobody else should be accessing
+	 * the page tables under spinlock protection here, only
+	 * lockless_pages_from_mm() and the hardware page walker can access page
+	 * tables while all the high-level locks are held in write mode.
+	 */
 	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
 	result = SCAN_FAIL;
@@ -1531,6 +1574,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	/* step 4: remove pte entries */
 	collapse_and_free_pmd(mm, vma, haddr, pmd);
 
+	i_mmap_unlock_write(vma->vm_file->f_mapping);
+
 maybe_install_pmd:
 	/* step 5: install pmd entry */
 	result = install_pmd
@@ -1544,6 +1589,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 
 abort:
 	pte_unmap_unlock(start_pte, ptl);
+	i_mmap_unlock_write(vma->vm_file->f_mapping);
 	goto drop_hpage;
 }
 
@@ -1600,7 +1646,8 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
 		 * An alternative would be drop the check, but check that page
 		 * table is clear before calling pmdp_collapse_flush() under
 		 * ptl. It has higher chance to recover THP for the VMA, but
-		 * has higher cost too.
+		 * has higher cost too. It would also probably require locking
+		 * the anon_vma.
 		 */
 		if (vma->anon_vma) {
 			result = SCAN_PAGE_ANON;

From patchwork Tue Nov 29 15:47:29 2022
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 13058731
From: Jann Horn
To: security@kernel.org, Andrew Morton
Cc: Yang Shi, David Hildenbrand, Peter Xu, John Hubbard,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v5 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI
Date: Tue, 29 Nov 2022 16:47:29 +0100
Message-Id: <20221129154730.2274278-2-jannh@google.com>
In-Reply-To: <20221129154730.2274278-1-jannh@google.com>
References: <20221129154730.2274278-1-jannh@google.com>

The khugepaged paths that remove page tables have to be careful to
synchronize against the lockless_pages_from_mm() path, which traverses
page tables while only being protected by disabled IRQs.
lockless_pages_from_mm() must not:

1. interpret the contents of freed memory as page tables (and once a
   page table has been deposited, it can be freed)
2. interpret the contents of deposited page tables as PTEs, since some
   architectures will store non-PTE data inside deposited page tables
   (see radix__pgtable_trans_huge_deposit())
3. create new page references from PTEs after the containing page table
   has been detached and:
   3a. __collapse_huge_page_isolate() has checked the page refcount
   3b. the page table has been reused at another virtual address and
       populated with new PTEs

("New page references" here refer to stable references returned to the
caller; speculative references that are dropped on an error path are
fine.)

Commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
collapse") addressed issue 3 by making the lockless_pages_from_mm()
fastpath recheck the pmd_t to ensure that the page table was not removed
by khugepaged in between (under the assumption that the page table is
not repeatedly moving back and forth between two addresses, with one PTE
repeatedly being populated with the same value).

But to address issues 1 and 2, we need to send IPIs before
freeing/reusing page tables. By doing that, issue 3 is also
automatically addressed, so the fix from commit 70cbc3cc78a99 becomes
redundant.

We can ensure that the necessary IPI is sent by calling
tlb_remove_table_sync_one() because, as noted in mm/gup.c, under
configurations that define CONFIG_HAVE_FAST_GUP, there are two possible
cases:

1. CONFIG_MMU_GATHER_RCU_TABLE_FREE is set, causing
   tlb_remove_table_sync_one() to send an IPI to synchronize with
   lockless_pages_from_mm().
2. CONFIG_MMU_GATHER_RCU_TABLE_FREE is unset, indicating that all TLB
   flushes are already guaranteed to send IPIs.
   tlb_remove_table_sync_one() will do nothing, but we've already run
   pmdp_collapse_flush(), which did a TLB flush, which must have
   involved IPIs.
Cc: stable@kernel.org
Fixes: ba76149f47d8 ("thp: khugepaged")
Reviewed-by: Yang Shi
Acked-by: David Hildenbrand
Signed-off-by: Jann Horn
---

Notes:
    v4:
    - added ack from David Hildenbrand
    - made commit message more verbose
    v5:
    - added reviewed-by from Yang Shi
    - rewrote commit message based on feedback from Yang Shi

 include/asm-generic/tlb.h | 4 ++++
 mm/khugepaged.c           | 2 ++
 mm/mmu_gather.c           | 4 +---
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 492dce43236ea..cab7cfebf40bd 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -222,12 +222,16 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 #define tlb_needs_table_invalidate() (true)
 #endif
 
+void tlb_remove_table_sync_one(void);
+
 #else
 
 #ifdef tlb_needs_table_invalidate
 #error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
 #endif
 
+static inline void tlb_remove_table_sync_one(void) { }
+
 #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 674b111a24fa7..c3d3ce596bff7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1057,6 +1057,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	_pmd = pmdp_collapse_flush(vma, address, pmd);
 	spin_unlock(pmd_ptl);
 	mmu_notifier_invalidate_range_end(&range);
+	tlb_remove_table_sync_one();
 
 	spin_lock(pte_ptl);
 	result = __collapse_huge_page_isolate(vma, address, pte, cc,
@@ -1415,6 +1416,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
 
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
+	tlb_remove_table_sync_one();
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index add4244e5790d..3a2c3f8cad2fe 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -153,7 +153,7 @@ static void tlb_remove_table_smp_sync(void *arg)
 {
 	/* Simply deliver the interrupt */
 }
 
-static void tlb_remove_table_sync_one(void)
+void tlb_remove_table_sync_one(void)
 {
 	/*
 	 * This isn't an RCU grace period and hence the page-tables cannot be
@@ -177,8 +177,6 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
 
 #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
-static void tlb_remove_table_sync_one(void) { }
-
 static void tlb_remove_table_free(struct mmu_table_batch *batch)
 {
 	__tlb_remove_table_free(batch);

From patchwork Tue Nov 29 15:47:30 2022
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 13058732
From: Jann Horn
To: security@kernel.org, Andrew Morton
Cc: Yang Shi, David Hildenbrand, Peter Xu, John Hubbard,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v5 3/3] mm/khugepaged: Invoke MMU notifiers in shmem/file collapse paths
Date: Tue, 29 Nov 2022 16:47:30 +0100
Message-Id: <20221129154730.2274278-3-jannh@google.com>
In-Reply-To: <20221129154730.2274278-1-jannh@google.com>
References: <20221129154730.2274278-1-jannh@google.com>

Any codepath that zaps page table entries must invoke MMU notifiers to
ensure that secondary MMUs (like KVM) don't keep accessing pages which
aren't mapped anymore. Secondary MMUs don't hold their own references to
pages that are mirrored over, so failing to notify them can lead to page
use-after-free.

I'm marking this as addressing an issue introduced in commit
f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem
pages"), but most of the security impact of this only came in commit
27e1f8273113 ("khugepaged: enable collapse pmd for pte-mapped THP"),
which actually omitted flushes for the removal of present PTEs, not just
for the removal of empty page tables.
Cc: stable@kernel.org
Fixes: f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages")
Acked-by: David Hildenbrand
Reviewed-by: Yang Shi
Signed-off-by: Jann Horn
---

Notes:
    v4: no changes
    v5:
    - added ack and reviewed-by

 mm/khugepaged.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c3d3ce596bff7..49eb4b4981d88 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1404,6 +1404,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 			unsigned long addr, pmd_t *pmdp)
 {
 	pmd_t pmd;
+	struct mmu_notifier_range range;
 
 	mmap_assert_write_locked(mm);
 	if (vma->vm_file)
@@ -1415,8 +1416,12 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 	if (vma->anon_vma)
 		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
 
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm, addr,
+				addr + HPAGE_PMD_SIZE);
+	mmu_notifier_invalidate_range_start(&range);
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
 	tlb_remove_table_sync_one();
+	mmu_notifier_invalidate_range_end(&range);
 	mm_dec_nr_ptes(mm);
 	page_table_check_pte_clear_range(mm, addr, pmd);
 	pte_free(mm, pmd_pgtable(pmd));