From patchwork Wed Apr 2 18:17:04 2025
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org
Subject: [PATCH v2 10/11] mm: Add folio_mk_pmd()
Date: Wed, 2 Apr 2025 19:17:04 +0100
Message-ID: <20250402181709.2386022-11-willy@infradead.org>
In-Reply-To: <20250402181709.2386022-1-willy@infradead.org>
References: <20250402181709.2386022-1-willy@infradead.org>

Removes five conversions from folio to page.
Also removes both callers of mk_pmd() that aren't part of mk_huge_pmd(),
getting us a step closer to removing the confusion between mk_pmd(),
mk_huge_pmd() and pmd_mkhuge().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/dax.c           |  3 +--
 include/linux/mm.h | 17 +++++++++++++++++
 mm/huge_memory.c   | 11 +++++------
 mm/khugepaged.c    |  2 +-
 mm/memory.c        |  2 +-
 5 files changed, 25 insertions(+), 10 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index af5045b0f476..564e44a31e40 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1421,8 +1421,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
 		mm_inc_nr_ptes(vma->vm_mm);
 	}
-	pmd_entry = mk_pmd(&zero_folio->page, vmf->vma->vm_page_prot);
-	pmd_entry = pmd_mkhuge(pmd_entry);
+	pmd_entry = folio_mk_pmd(zero_folio, vmf->vma->vm_page_prot);
 	set_pmd_at(vmf->vma->vm_mm, pmd_addr, vmf->pmd, pmd_entry);
 	spin_unlock(ptl);
 	trace_dax_pmd_load_hole(inode, vmf, zero_folio, *entry);

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d657815305f7..d910b6ffcbed 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2007,7 +2007,24 @@ static inline pte_t folio_mk_pte(struct folio *folio, pgprot_t pgprot)
 {
 	return pfn_pte(folio_pfn(folio), pgprot);
 }
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+/**
+ * folio_mk_pmd - Create a PMD for this folio
+ * @folio: The folio to create a PMD for
+ * @pgprot: The page protection bits to use
+ *
+ * Create a page table entry for the first page of this folio.
+ * This is suitable for passing to set_pmd_at().
+ *
+ * Return: A page table entry suitable for mapping this folio.
+ */
+static inline pmd_t folio_mk_pmd(struct folio *folio, pgprot_t pgprot)
+{
+	return pmd_mkhuge(pfn_pmd(folio_pfn(folio), pgprot));
+}
 #endif
+#endif /* CONFIG_MMU */
 
 static inline bool folio_has_pincount(const struct folio *folio)
 {

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2a47682d1ab7..28c87e0e036f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1203,7 +1203,7 @@ static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
 {
 	pmd_t entry;
 
-	entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
+	entry = folio_mk_pmd(folio, vma->vm_page_prot);
 	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 	folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
 	folio_add_lru_vma(folio, vma);
@@ -1309,8 +1309,7 @@ static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
 		struct folio *zero_folio)
 {
 	pmd_t entry;
-	entry = mk_pmd(&zero_folio->page, vma->vm_page_prot);
-	entry = pmd_mkhuge(entry);
+	entry = folio_mk_pmd(zero_folio, vma->vm_page_prot);
 	pgtable_trans_huge_deposit(mm, pmd, pgtable);
 	set_pmd_at(mm, haddr, pmd, entry);
 	mm_inc_nr_ptes(mm);
@@ -2653,12 +2652,12 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
 		folio_move_anon_rmap(src_folio, dst_vma);
 		src_folio->index = linear_page_index(dst_vma, dst_addr);
 
-		_dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
+		_dst_pmd = folio_mk_pmd(src_folio, dst_vma->vm_page_prot);
 		/* Follow mremap() behavior and treat the entry dirty after the move */
 		_dst_pmd = pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
 	} else {
 		src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
-		_dst_pmd = mk_huge_pmd(src_page, dst_vma->vm_page_prot);
+		_dst_pmd = folio_mk_pmd(src_folio, dst_vma->vm_page_prot);
 	}
 	set_pmd_at(mm, dst_addr, dst_pmd, _dst_pmd);
 
@@ -4675,7 +4674,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	folio_get(folio);
-	pmde = mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot));
+	pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cc945c6ab3bd..b8838ba8207a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1239,7 +1239,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	__folio_mark_uptodate(folio);
 	pgtable = pmd_pgtable(_pmd);
 
-	_pmd = mk_huge_pmd(&folio->page, vma->vm_page_prot);
+	_pmd = folio_mk_pmd(folio, vma->vm_page_prot);
 	_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
 
 	spin_lock(pmd_ptl);

diff --git a/mm/memory.c b/mm/memory.c
index fc4d8152a2e4..e6e7abb83c0b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5188,7 +5188,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 
 	flush_icache_pages(vma, page, HPAGE_PMD_NR);
 
-	entry = mk_huge_pmd(page, vma->vm_page_prot);
+	entry = folio_mk_pmd(folio, vma->vm_page_prot);
 	if (write)
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);