From patchwork Mon Dec 11 15:56:20 2023
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13487440
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton,
 "Matthew Wilcox (Oracle)", Hugh Dickins, Ryan Roberts, Yin Fengwei,
 Mike Kravetz, Muchun Song, Peter Xu
Subject: [PATCH v1 07/39] mm/rmap: convert folio_add_file_rmap_range() into
 folio_add_file_rmap_[pte|ptes|pmd]()
Date: Mon, 11 Dec 2023 16:56:20 +0100
Message-ID: <20231211155652.131054-8-david@redhat.com>
In-Reply-To: <20231211155652.131054-1-david@redhat.com>
References: <20231211155652.131054-1-david@redhat.com>
MIME-Version: 1.0
Let's get rid of the compound parameter and instead define implicitly
which mappings we're adding. That is more future proof, easier to read
and harder to mess up.

Use an enum to express the granularity internally. Make the compiler
always special-case on the granularity by using __always_inline. Replace
the "compound" check by a switch-case that will be removed by the
compiler completely.

Add plenty of sanity checks with CONFIG_DEBUG_VM. Replace the
folio_test_pmd_mappable() check by a config check in the caller and
sanity checks.
Convert the single user of folio_add_file_rmap_range().

This function design can later easily be extended to PUDs and to batch
PMDs. Note that for now we don't support anything bigger than PMD-sized
folios (as we cleanly separated hugetlb handling). Sanity checks will
catch if that ever changes.

Next up is removing page_remove_rmap() along with its "compound"
parameter and similarly converting all other rmap functions.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Yin Fengwei
Reviewed-by: Ryan Roberts
---
 include/linux/rmap.h | 47 +++++++++++++++++++++++++--
 mm/memory.c          |  2 +-
 mm/rmap.c            | 75 +++++++++++++++++++++++++++++---------------
 3 files changed, 95 insertions(+), 29 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index e3857d26b944..1753900f4aed 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -191,6 +191,45 @@ typedef int __bitwise rmap_t;
  */
 #define RMAP_COMPOUND		((__force rmap_t)BIT(1))
 
+/*
+ * Internally, we're using an enum to specify the granularity. Usually,
+ * we make the compiler create specialized variants for the different
+ * granularity.
+ */
+enum rmap_mode {
+	RMAP_MODE_PTE = 0,
+	RMAP_MODE_PMD,
+};
+
+static inline void __folio_rmap_sanity_checks(struct folio *folio,
+		struct page *page, int nr_pages, enum rmap_mode mode)
+{
+	/* hugetlb folios are handled separately. */
+	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
+	VM_WARN_ON_FOLIO(folio_test_large(folio) &&
+			 !folio_test_large_rmappable(folio), folio);
+
+	VM_WARN_ON_ONCE(nr_pages <= 0);
+	VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);
+	VM_WARN_ON_FOLIO(page_folio(page + nr_pages - 1) != folio, folio);
+
+	switch (mode) {
+	case RMAP_MODE_PTE:
+		break;
+	case RMAP_MODE_PMD:
+		/*
+		 * We don't support folios larger than a single PMD yet. So
+		 * when RMAP_MODE_PMD is set, we assume that we are creating
+		 * a single "entire" mapping of the folio.
+		 */
+		VM_WARN_ON_FOLIO(folio_nr_pages(folio) != HPAGE_PMD_NR, folio);
+		VM_WARN_ON_FOLIO(nr_pages != HPAGE_PMD_NR, folio);
+		break;
+	default:
+		VM_WARN_ON_ONCE(true);
+	}
+}
+
 /*
  * rmap interfaces called when adding or removing pte of page
  */
@@ -203,8 +242,12 @@ void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
-void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
-		struct vm_area_struct *, bool compound);
+void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
+		struct vm_area_struct *);
+#define folio_add_file_rmap_pte(folio, page, vma) \
+	folio_add_file_rmap_ptes(folio, page, 1, vma)
+void folio_add_file_rmap_pmd(struct folio *, struct page *,
+		struct vm_area_struct *);
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 
diff --git a/mm/memory.c b/mm/memory.c
index 8f0b936b90b5..6a5540ba3c65 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4515,7 +4515,7 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 		folio_add_lru_vma(folio, vma);
 	} else {
 		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
-		folio_add_file_rmap_range(folio, page, nr, vma, false);
+		folio_add_file_rmap_ptes(folio, page, nr, vma);
 	}
 
 	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
diff --git a/mm/rmap.c b/mm/rmap.c
index 41597da14f26..4f30930a1162 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1376,31 +1376,20 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
 }
 
-/**
- * folio_add_file_rmap_range - add pte mapping to page range of a folio
- * @folio:	The folio to add the mapping to
- * @page:	The first page to add
- * @nr_pages:	The number of pages which will be mapped
- * @vma:	the vm area in which the mapping is added
- * @compound:	charge the page as compound or small page
- *
- * The page range of folio is defined by [first_page, first_page + nr_pages)
- *
- * The caller needs to hold the pte lock.
- */
-void folio_add_file_rmap_range(struct folio *folio, struct page *page,
-			unsigned int nr_pages, struct vm_area_struct *vma,
-			bool compound)
+static __always_inline void __folio_add_file_rmap(struct folio *folio,
+		struct page *page, int nr_pages, struct vm_area_struct *vma,
+		enum rmap_mode mode)
 {
 	atomic_t *mapped = &folio->_nr_pages_mapped;
 	unsigned int nr_pmdmapped = 0, first;
 	int nr = 0;
 
-	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
-	VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);
+	VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
+	__folio_rmap_sanity_checks(folio, page, nr_pages, mode);
 
 	/* Is page being mapped by PTE? Is this its first map to be added? */
-	if (likely(!compound)) {
+	switch (mode) {
+	case RMAP_MODE_PTE:
 		do {
 			first = atomic_inc_and_test(&page->_mapcount);
 			if (first && folio_test_large(folio)) {
@@ -1411,9 +1400,8 @@ void folio_add_file_rmap_range(struct folio *folio, struct page *page,
 			if (first)
 				nr++;
 		} while (page++, --nr_pages > 0);
-	} else if (folio_test_pmd_mappable(folio)) {
-		/* That test is redundant: it's for safety or to optimize out */
-
+		break;
+	case RMAP_MODE_PMD:
 		first = atomic_inc_and_test(&folio->_entire_mapcount);
 		if (first) {
 			nr = atomic_add_return_relaxed(COMPOUND_MAPPED, mapped);
@@ -1428,6 +1416,7 @@ void folio_add_file_rmap_range(struct folio *folio, struct page *page,
 				nr = 0;
 			}
 		}
+		break;
 	}
 
 	if (nr_pmdmapped)
@@ -1441,6 +1430,43 @@ void folio_add_file_rmap_range(struct folio *folio, struct page *page,
 	mlock_vma_folio(folio, vma);
 }
 
+/**
+ * folio_add_file_rmap_ptes - add PTE mappings to a page range of a folio
+ * @folio:	The folio to add the mappings to
+ * @page:	The first page to add
+ * @nr_pages:	The number of pages that will be mapped using PTEs
+ * @vma:	The vm area in which the mappings are added
+ *
+ * The page range of the folio is defined by [page, page + nr_pages)
+ *
+ * The caller needs to hold the page table lock.
+ */
+void folio_add_file_rmap_ptes(struct folio *folio, struct page *page,
+		int nr_pages, struct vm_area_struct *vma)
+{
+	__folio_add_file_rmap(folio, page, nr_pages, vma, RMAP_MODE_PTE);
+}
+
+/**
+ * folio_add_file_rmap_pmd - add a PMD mapping to a page range of a folio
+ * @folio:	The folio to add the mapping to
+ * @page:	The first page to add
+ * @vma:	The vm area in which the mapping is added
+ *
+ * The page range of the folio is defined by [page, page + HPAGE_PMD_NR)
+ *
+ * The caller needs to hold the page table lock.
+ */
+void folio_add_file_rmap_pmd(struct folio *folio, struct page *page,
+		struct vm_area_struct *vma)
+{
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	__folio_add_file_rmap(folio, page, HPAGE_PMD_NR, vma, RMAP_MODE_PMD);
+#else
+	WARN_ON_ONCE(true);
+#endif
+}
+
 /**
  * page_add_file_rmap - add pte mapping to a file page
  * @page:	the page to add the mapping to
@@ -1453,16 +1479,13 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 		bool compound)
 {
 	struct folio *folio = page_folio(page);
-	unsigned int nr_pages;
 
 	VM_WARN_ON_ONCE_PAGE(compound && !PageTransHuge(page), page);
 
 	if (likely(!compound))
-		folio_add_file_rmap_pte(folio, page, vma);
+		folio_add_file_rmap_pte(folio, page, vma);
 	else
-		nr_pages = folio_nr_pages(folio);
-
-	folio_add_file_rmap_range(folio, page, nr_pages, vma, compound);
+		folio_add_file_rmap_pmd(folio, page, vma);
 }
 
 /**