From patchwork Mon Feb 24 16:55:56 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13988507
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-doc@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
    David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)",
    Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný,
    Jonathan Corbet, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, Dave Hansen, Muchun Song, "Liam R. Howlett",
    Lorenzo Stoakes, Vlastimil Babka, Jann Horn
Howlett" , Lorenzo Stoakes , Vlastimil Babka , Jann Horn Subject: [PATCH v2 14/20] mm: convert folio_likely_mapped_shared() to folio_maybe_mapped_shared() Date: Mon, 24 Feb 2025 17:55:56 +0100 Message-ID: <20250224165603.1434404-15-david@redhat.com> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250224165603.1434404-1-david@redhat.com> References: <20250224165603.1434404-1-david@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: Ta_GCi2--b4DvuHlqnUyBhYuAZlB_ATzuAr2x4wVjoI_1740416195 X-Mimecast-Originator: redhat.com content-type: text/plain; charset="US-ASCII"; x-default=true X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: CB164A000A X-Stat-Signature: tdxhoxq3typuo3f9mxedxkyd8e3iwxtp X-HE-Tag: 1740416200-445244 X-HE-Meta: U2FsdGVkX1+ljpYTvZ6nmiWi4+N4OYL0fQkKFkdFZeAPrEpEftHw4O821n2pg9Y8cLZOsrrVK36UI+1Kp+VHmWVwqufS0x0LLEcQBJeI6C0Mb4jh/Fke8kcrDtaczz8YztFnJ7uBOSPhAaTLhPU/zqCVLmdLKkFbPQZFNm7i78inn8O3hAF9Ks71HaLNmz5jb2EFqyNbnvyEFlLDyRsxfnHr+MxitvwgwfLJ1uLnzwlrJbAg+/JGCJKANw9XS/yY8eXM1i3Fnsoi9+5sAK3ZfRpWPtMfQHmRy4ZZirIRU9iirQJ3meiwvSQix5SVwVsgiNI6Q0Z5YdLHs4wj+RmK0/WDFmAsNhV8/C+08gPBPs5L2AdNxVp4uhAsuZ+e6uVvaAlhMi4J70c1IVTu89RsxI2zv2zE1fOoBfGUcCBJmVXYIjYXGoRC5HelItigIh/87U9ra/g+1628mHIXiktFzW6dity9aHBkhRDAJCnE1jmT/zYkV6P+xZLzjhG5YffZAad3S7WIW2AhhT2eU1Fbb5bv87Vm7ZvGUpHhvazV7rBRoUp8SIMvtVadz+QjUg2V2JqubPdnyrif96KsxdfAK7WARVcLYZexQYa38/DX9uSyG8mTyFSRyIL+nEHW7XTqKSkJxhGU+VwBtFuE1XG2oP6piygi5kfOo1BU1HJIMztkQaSLcgAz3tJSXCYgEKmATZ6j7/oygcSXdidx6lRzBFpt/uEdwhlvDHD4PMhauJn1okwimuYanWe/+hTqJipsAvdEZ4hMMeXHfG77FpfKBCExnCPXj4/y9X2oIxkXUBU+vQOtzIi1ZGtvo+Mn3hMIvICimUUO8QukCZV9Y9hmks/BzrzRb++UijGz0W888DRyYan1CvXAUqrDDCLUXYZ7A6sAmAXQKntkyddPzH5hCmv2YOWO+zBy9UHNUxWeF4/2XzH3bGjIFmphAtjj8NQE2F43bp9pkzhz93i8Hsv +FXoran4 Y58biv7RnGzRBi+3T2IKnlmAWL9YaA4cXw5pseBWPAmg02RMylgDFCb+ELi9VkTQgbEB1Tvnwz/mV7BoPxM5i+oBEgWCRpJf2Zh/3aq98nTbigEji5adhm51cas12Rr8uQb92YDn2m70T/OSazWPFWu05/ntlEj1FaTzJPzRskw4qAZQuiaF24voK7oL3pwdAzoNmacuJ+UXu2j8pOVR4+SzXww0Oc3sJ+N7Qjwmv/l7UnDMdpC4WZInsmnZvJO0SNw8NAIVfuXLL8egKLZ5R4mUFOm2BvMnEOdJnOVhDBSVtVmZgowGskwYPKetT/1BLiywIj6l9RLpwkj72kgGgcuTXCTSA4Ray5U2zdMI9OWLScFI6n3p/qq+YMjzOMew8bR5aHMivZ+XcJnAY93NIuD0Af9uRG8Am7/QrrxVK3YqD8ItmQBIF3JH0yx6Bz4DotQoJoo55C3rkl5wP0D3RsCpNZcD98gnhaT/szJnFf2wyCQldwOt5eVMMGk26faMBE13vzfqbinD71uG5x1BkWLdgrAoh483ohZtYS4Cd/Hf4TsI= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Let's reuse our new MM ownership tracking infrastructure for large folios to make folio_likely_mapped_shared() never return false negatives -- never indicating "not mapped shared" although the folio *is* mapped shared. With that, we can rename it to folio_maybe_mapped_shared() and get rid of the dependency on the mapcount of the first folio page. The semantics are now arguably clearer: no mixture of "false negatives" and "false positives", only the remaining possibility for "false positives". Thoroughly document the new semantics. We might now detect that a large folio is "maybe mapped shared" although it *no longer* is -- but once was. Now, if more than two MMs mapped a folio at the same time, and the MM mapping the folio exclusively at the end is not one tracked in the two folio MM slots, we will detect the folio as "maybe mapped shared". For anonymous folios, usually (except weird corner cases) all PTEs that target a "maybe mapped shared" folio are R/O. 
As soon as a child process would write to them (iow, actively use them),
we would CoW and effectively unshare these PTEs. Most cases (below) are
not expected to really matter with large anonymous folios for this reason.

Most importantly, there will be no change at all for:
* small folios
* hugetlb folios
* PMD-mapped PMD-sized THPs (single mapping)

This change has the potential to affect existing callers of
folio_likely_mapped_shared() -> folio_maybe_mapped_shared():

(1) fs/proc/task_mmu.c: no change (hugetlb)

(2) khugepaged counts PTEs that target shared folios towards
    max_ptes_shared (default: HPAGE_PMD_NR / 2), meaning we could skip a
    collapse where we would have previously collapsed. This only applies
    to anonymous folios and is not expected to matter in practice. Worth
    noting that this change sorts out case (A) documented in commit
    1bafe96e89f0 ("mm/khugepaged: replace page_mapcount() check by
    folio_likely_mapped_shared()") by removing the possibility for
    "false negatives".

(3) MADV_COLD / MADV_PAGEOUT / MADV_FREE will not try splitting
    PTE-mapped THPs that are considered shared but not fully covered by
    the requested range, consequently not processing them. PMD-mapped
    PMD-sized THPs are not affected, nor are THPs where all PTEs are
    covered by the requested range. These functions are usually only
    called on anon/file folios that are exclusively mapped most of the
    time (no other file mappings or no fork()), so the "false positives"
    are not expected to matter in practice.

(4) mbind() / migrate_pages() / move_pages() will refuse to migrate
    shared folios unless MPOL_MF_MOVE_ALL is effective (requires
    CAP_SYS_NICE). We will now reject some folios that could be migrated
    (see the userspace sketch at the end of this description). Similar
    to (3), especially with MPOL_MF_MOVE_ALL, so this is not expected to
    matter in practice. Note that cpuset_migrate_mm_workfn() calls
    do_migrate_pages() with MPOL_MF_MOVE_ALL.

(5) NUMA hinting

    mm/migrate.c:migrate_misplaced_folio_prepare() will skip file folios
    that are probably shared libraries (-> "mapped shared" and
    executable). This check would have detected it as a shared library
    at some point (at least 3 MMs mapping it), so detecting it
    afterwards does not sound wrong (still a shared library). Not
    expected to matter.

    mm/memory.c:numa_migrate_check() will indicate TNF_SHARED in
    MAP_SHARED file mappings when encountering a shared folio. Similar
    reasoning, not expected to matter.

    mm/mprotect.c:change_pte_range() will skip folios detected as shared
    in CoW mappings. Similarly, this is not expected to matter in
    practice, but if it would ever be a problem we could relax that
    check a bit (e.g., basing it on the average page-mapcount in a
    folio), because it was only an optimization when many (e.g., 288)
    processes were mapping the same folios -- see commit 859d4adc3415
    ("mm: numa: do not trap faults on shared data section pages.")

(6) mm/rmap.c:folio_referenced_one() will skip exclusive swapbacked
    folios in dying processes. Applies to anonymous folios only. Without
    "false negatives", we'll now exclude all actually shared folios from
    that shortcut. Also excluding some folios that are actually
    exclusive merely costs a pure optimization and is not expected to
    matter in practice.

In theory, one can detect the problematic scenario: folio_mapcount() > 0
and no folio MM slot is occupied ("state unknown"). One could reset the
MM slots while doing an rmap walk, which migration / folio split already
do when setting everything up. Further, when batching PTEs we might
naturally learn about an owner (e.g., folio_mapcount() == nr_ptes) and
could update the owner. However, we'll defer that until the scenarios
where it would really matter are clear.
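
Not part of the change itself, but to make case (4) concrete: below is a
minimal userspace sketch, assuming a NUMA system with node 1 online and
the libnuma move_pages() wrapper (link with -lnuma). After fork(), the
page is mapped by two MMs, so with MPOL_MF_MOVE alone the kernel is
expected to refuse the migration (-EACCES in the per-page status), while
MPOL_MF_MOVE_ALL (with CAP_SYS_NICE) may still migrate it.

  #include <numaif.h>
  #include <signal.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
          long psize = sysconf(_SC_PAGESIZE);
          void *page = mmap(NULL, psize, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          int node = 1, status = -1;
          pid_t child;

          if (page == MAP_FAILED)
                  return 1;
          memset(page, 0xaa, psize);      /* populate the page */

          child = fork();                 /* a second MM now maps the page */
          if (child == 0) {
                  pause();
                  _exit(0);
          }

          /* MPOL_MF_MOVE only: a shared page is not migrated. */
          if (move_pages(0, 1, &page, &node, &status, MPOL_MF_MOVE) < 0)
                  perror("move_pages");
          printf("page status without MPOL_MF_MOVE_ALL: %d\n", status);

          kill(child, SIGKILL);
          waitpid(child, NULL, 0);
          return 0;
  }
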
Signed-off-by: David Hildenbrand
---
 fs/proc/task_mmu.c |  4 ++--
 include/linux/mm.h | 43 ++++++++++++++++++++++---------------------
 mm/huge_memory.c   |  2 +-
 mm/khugepaged.c    |  8 +++-----
 mm/madvise.c       |  6 +++---
 mm/memory.c        |  2 +-
 mm/mempolicy.c     |  8 ++++----
 mm/migrate.c       |  7 +++----
 mm/mprotect.c      |  2 +-
 mm/rmap.c          |  2 +-
 10 files changed, 41 insertions(+), 43 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f02cd362309a0..2bddcea65cbf1 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1023,7 +1023,7 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
 
 	if (folio) {
 		/* We treat non-present entries as "maybe shared". */
-		if (!present || folio_likely_mapped_shared(folio) ||
+		if (!present || folio_maybe_mapped_shared(folio) ||
 		    hugetlb_pmd_shared(pte))
 			mss->shared_hugetlb += huge_page_size(hstate_vma(vma));
 		else
@@ -1879,7 +1879,7 @@ static int pagemap_hugetlb_range(pte_t *ptep, unsigned long hmask,
 		if (!folio_test_anon(folio))
 			flags |= PM_FILE;
 
-		if (!folio_likely_mapped_shared(folio) &&
+		if (!folio_maybe_mapped_shared(folio) &&
 		    !hugetlb_pmd_shared(ptep))
 			flags |= PM_MMAP_EXCLUSIVE;
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9c1290588a11e..98a67488b5fef 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2245,23 +2245,18 @@ static inline size_t folio_size(const struct folio *folio)
 }
 
 /**
- * folio_likely_mapped_shared - Estimate if the folio is mapped into the page
- *				tables of more than one MM
+ * folio_maybe_mapped_shared - Whether the folio is mapped into the page
+ *			       tables of more than one MM
  * @folio: The folio.
  *
- * This function checks if the folio is currently mapped into more than one
- * MM ("mapped shared"), or if the folio is only mapped into a single MM
- * ("mapped exclusively").
+ * This function checks if the folio maybe currently mapped into more than one
+ * MM ("maybe mapped shared"), or if the folio is certainly mapped into a single
+ * MM ("mapped exclusively").
  *
  * For KSM folios, this function also returns "mapped shared" when a folio is
  * mapped multiple times into the same MM, because the individual page mappings
  * are independent.
  *
- * As precise information is not easily available for all folios, this function
- * estimates the number of MMs ("sharers") that are currently mapping a folio
- * using the number of times the first page of the folio is currently mapped
- * into page tables.
- *
  * For small anonymous folios and anonymous hugetlb folios, the return
  * value will be exactly correct: non-KSM folios can only be mapped at most once
  * into an MM, and they cannot be partially mapped. KSM folios are
@@ -2269,8 +2264,8 @@ static inline size_t folio_size(const struct folio *folio)
  *
  * For other folios, the result can be fuzzy:
  * #. For partially-mappable large folios (THP), the return value can wrongly
- *    indicate "mapped exclusively" (false negative) when the folio is
- *    only partially mapped into at least one MM.
+ *    indicate "mapped shared" (false positive) if a folio was mapped by
+ *    more than two MMs at one point in time.
  * #. For pagecache folios (including hugetlb), the return value can wrongly
  *    indicate "mapped shared" (false positive) when two VMAs in the same MM
  *    cover the same file range.
@@ -2287,7 +2282,7 @@ static inline size_t folio_size(const struct folio *folio)
  *
  * Return: Whether the folio is estimated to be mapped into more than one MM.
  */
-static inline bool folio_likely_mapped_shared(struct folio *folio)
+static inline bool folio_maybe_mapped_shared(struct folio *folio)
 {
 	int mapcount = folio_mapcount(folio);
 
@@ -2295,16 +2290,22 @@ static inline bool folio_likely_mapped_shared(struct folio *folio)
 	if (!folio_test_large(folio) || unlikely(folio_test_hugetlb(folio)))
 		return mapcount > 1;
 
-	/* A single mapping implies "mapped exclusively". */
-	if (mapcount <= 1)
-		return false;
-
-	/* If any page is mapped more than once we treat it "mapped shared". */
-	if (folio_entire_mapcount(folio) || mapcount > folio_nr_pages(folio))
+	/*
+	 * vm_insert_page() without CONFIG_TRANSPARENT_HUGEPAGE ...
+	 * simply assume "mapped shared", nobody should really care
+	 * about this for arbitrary kernel allocations.
+	 */
+	if (!IS_ENABLED(CONFIG_MM_ID))
 		return true;
 
-	/* Let's guess based on the first subpage. */
-	return atomic_read(&folio->_mapcount) > 0;
+	/*
+	 * A single mapping implies "mapped exclusively", even if the
+	 * folio flag says something different: it's easier to handle this
+	 * case here instead of on the RMAP hot path.
+	 */
+	if (mapcount <= 1)
+		return false;
+	return folio_test_large_maybe_mapped_shared(folio);
 }
 
 #ifndef HAVE_ARCH_MAKE_FOLIO_ACCESSIBLE
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a3264d88d4b49..d9a7614fe739a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2155,7 +2155,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	 * If other processes are mapping this folio, we couldn't discard
 	 * the folio unless they all do MADV_FREE so let's skip the folio.
 	 */
-	if (folio_likely_mapped_shared(folio))
+	if (folio_maybe_mapped_shared(folio))
 		goto out;
 
 	if (!folio_trylock(folio))
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5f0be134141e8..cc945c6ab3bdb 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -607,7 +607,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
 
 		/* See hpage_collapse_scan_pmd(). */
-		if (folio_likely_mapped_shared(folio)) {
+		if (folio_maybe_mapped_shared(folio)) {
 			++shared;
 			if (cc->is_khugepaged &&
 			    shared > khugepaged_max_ptes_shared) {
@@ -1359,11 +1359,9 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 
 		/*
 		 * We treat a single page as shared if any part of the THP
-		 * is shared. "False negatives" from
-		 * folio_likely_mapped_shared() are not expected to matter
-		 * much in practice.
+		 * is shared.
 		 */
-		if (folio_likely_mapped_shared(folio)) {
+		if (folio_maybe_mapped_shared(folio)) {
 			++shared;
 			if (cc->is_khugepaged &&
 			    shared > khugepaged_max_ptes_shared) {
diff --git a/mm/madvise.c b/mm/madvise.c
index e01e93e179a8a..388dc289b5d12 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -387,7 +387,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		folio = pmd_folio(orig_pmd);
 
 		/* Do not interfere with other mappings of this folio */
-		if (folio_likely_mapped_shared(folio))
+		if (folio_maybe_mapped_shared(folio))
 			goto huge_unlock;
 
 		if (pageout_anon_only_filter && !folio_test_anon(folio))
@@ -486,7 +486,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		if (nr < folio_nr_pages(folio)) {
 			int err;
 
-			if (folio_likely_mapped_shared(folio))
+			if (folio_maybe_mapped_shared(folio))
 				continue;
 			if (pageout_anon_only_filter && !folio_test_anon(folio))
 				continue;
@@ -721,7 +721,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 		if (nr < folio_nr_pages(folio)) {
 			int err;
 
-			if (folio_likely_mapped_shared(folio))
+			if (folio_maybe_mapped_shared(folio))
 				continue;
 			if (!folio_trylock(folio))
 				continue;
diff --git a/mm/memory.c b/mm/memory.c
index 8dc241961b684..2a1e7d9722866 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5672,7 +5672,7 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 	 * Flag if the folio is shared between multiple address spaces. This
 	 * is later used when determining whether to group tasks together
 	 */
-	if (folio_likely_mapped_shared(folio) && (vma->vm_flags & VM_SHARED))
+	if (folio_maybe_mapped_shared(folio) && (vma->vm_flags & VM_SHARED))
 		*flags |= TNF_SHARED;
 	/*
 	 * For memory tiering mode, cpupid of slow memory page is used
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index bbaadbeeb2919..530e71fe91476 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -642,11 +642,11 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 	 * Unless MPOL_MF_MOVE_ALL, we try to avoid migrating a shared folio.
 	 * Choosing not to migrate a shared folio is not counted as a failure.
 	 *
-	 * See folio_likely_mapped_shared() on possible imprecision when we
+	 * See folio_maybe_mapped_shared() on possible imprecision when we
 	 * cannot easily detect if a folio is shared.
 	 */
 	if ((flags & MPOL_MF_MOVE_ALL) ||
-	    (!folio_likely_mapped_shared(folio) && !hugetlb_pmd_shared(pte)))
+	    (!folio_maybe_mapped_shared(folio) && !hugetlb_pmd_shared(pte)))
 		if (!folio_isolate_hugetlb(folio, qp->pagelist))
 			qp->nr_failed++;
 unlock:
@@ -1033,10 +1033,10 @@ static bool migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 	 * Unless MPOL_MF_MOVE_ALL, we try to avoid migrating a shared folio.
 	 * Choosing not to migrate a shared folio is not counted as a failure.
 	 *
-	 * See folio_likely_mapped_shared() on possible imprecision when we
+	 * See folio_maybe_mapped_shared() on possible imprecision when we
 	 * cannot easily detect if a folio is shared.
 	 */
-	if ((flags & MPOL_MF_MOVE_ALL) || !folio_likely_mapped_shared(folio)) {
+	if ((flags & MPOL_MF_MOVE_ALL) || !folio_maybe_mapped_shared(folio)) {
 		if (folio_isolate_lru(folio)) {
 			list_add_tail(&folio->lru, foliolist);
 			node_stat_mod_folio(folio,
diff --git a/mm/migrate.c b/mm/migrate.c
index 365c6daa8d1b1..fb4afd31baf0c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2228,7 +2228,7 @@ static int __add_folio_for_migration(struct folio *folio, int node,
 	if (folio_nid(folio) == node)
 		return 0;
 
-	if (folio_likely_mapped_shared(folio) && !migrate_all)
+	if (folio_maybe_mapped_shared(folio) && !migrate_all)
 		return -EACCES;
 
 	if (folio_test_hugetlb(folio)) {
@@ -2653,11 +2653,10 @@ int migrate_misplaced_folio_prepare(struct folio *folio,
 		 * processes with execute permissions as they are probably
 		 * shared libraries.
 		 *
-		 * See folio_likely_mapped_shared() on possible imprecision
+		 * See folio_maybe_mapped_shared() on possible imprecision
 		 * when we cannot easily detect if a folio is shared.
 		 */
-		if ((vma->vm_flags & VM_EXEC) &&
-		    folio_likely_mapped_shared(folio))
+		if ((vma->vm_flags & VM_EXEC) && folio_maybe_mapped_shared(folio))
 			return -EACCES;
 
 		/*
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 1444878f7aeb2..62c1f79457412 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -133,7 +133,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 				/* Also skip shared copy-on-write pages */
 				if (is_cow_mapping(vma->vm_flags) &&
 				    (folio_maybe_dma_pinned(folio) ||
-				     folio_likely_mapped_shared(folio)))
+				     folio_maybe_mapped_shared(folio)))
 					continue;
 
 				/*
diff --git a/mm/rmap.c b/mm/rmap.c
index c9922928616ee..8de415157bc8d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -889,7 +889,7 @@ static bool folio_referenced_one(struct folio *folio,
 		if ((!atomic_read(&vma->vm_mm->mm_users) ||
 		    check_stable_address_space(vma->vm_mm)) &&
 		    folio_test_anon(folio) && folio_test_swapbacked(folio) &&
-		    !folio_likely_mapped_shared(folio)) {
+		    !folio_maybe_mapped_shared(folio)) {
 			pra->referenced = -1;
 			page_vma_mapped_walk_done(&pvmw);
 			return false;
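
Not part of the patch: the "mapped exclusively" vs. "maybe mapped shared"
distinction that this series refines is also visible to userspace through
the "page exclusively mapped" flag, bit 56 of a /proc/self/pagemap entry
(see Documentation/admin-guide/mm/pagemap.rst). A minimal sketch, assuming
that unchanged interface: the bit is expected to be set before fork() and
clear while a child also maps the page.

  #include <fcntl.h>
  #include <inttypes.h>
  #include <signal.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /* Read the 64-bit pagemap entry describing the page at @addr. */
  static uint64_t pagemap_entry(int fd, void *addr, long psize)
  {
          uint64_t entry = 0;

          pread(fd, &entry, sizeof(entry),
                ((uintptr_t)addr / psize) * sizeof(entry));
          return entry;
  }

  int main(void)
  {
          long psize = sysconf(_SC_PAGESIZE);
          int fd = open("/proc/self/pagemap", O_RDONLY);
          void *page = mmap(NULL, psize, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          pid_t child;

          memset(page, 0x55, psize);      /* populate the page */
          printf("exclusive before fork: %d\n",
                 !!(pagemap_entry(fd, page, psize) & (1ULL << 56)));

          child = fork();                 /* page now mapped by two MMs */
          if (child == 0) {
                  pause();
                  _exit(0);
          }

          printf("exclusive after fork:  %d\n",
                 !!(pagemap_entry(fd, page, psize) & (1ULL << 56)));

          kill(child, SIGKILL);
          waitpid(child, NULL, 0);
          close(fd);
          return 0;
  }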