From patchwork Tue Jul 30 06:47:01 2024
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 13746745
From: alexs@kernel.org
To: Will Deacon, "Aneesh Kumar K. V", Nick Piggin, Peter Zijlstra,
    Russell King, Catalin Marinas, Brian Cain, WANG Xuerui,
    Geert Uytterhoeven, Jonas Bonn, Stefan Kristiansson, Stafford Horne,
    Michael Ellerman, Naveen N Rao, Paul Walmsley, Albert Ou,
    Thomas Gleixner, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", Andy Lutomirski, Bibo Mao, Baolin Wang,
    linux-arch@vger.kernel.org, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
    loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
    linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-riscv@lists.infradead.org, Heiko Carstens, Vasily Gorbik,
    Christian Borntraeger, Sven Schnelle, Qi Zheng, Vishal Moola,
    "Aneesh Kumar K. V", Kemeng Shi, Lance Yang, Peter Xu, Barry Song,
    linux-s390@vger.kernel.org
Cc: Guo Ren, Christophe Leroy, Palmer Dabbelt, Mike Rapoport,
    Oscar Salvador, Alexandre Ghiti, Jisheng Zhang, Samuel Holland,
    Anup Patel, Josh Poimboeuf, Breno Leitao, Alexander Gordeev,
    Gerald Schaefer, Hugh Dickins, David Hildenbrand, Ryan Roberts,
    Matthew Wilcox, Alex Shi, Andrew Morton
Subject: [RFC PATCH 07/18] mm/thp: use ptdesc in copy_huge_pmd
Date: Tue, 30 Jul 2024 14:47:01 +0800
Message-ID: <20240730064712.3714387-8-alexs@kernel.org>
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org>
From: Alex Shi

Since we have the ptdesc struct now, better to use it to replace
pgtable_t, aka 'struct page *'. It is also a preparation for returning
a ptdesc pointer from the pte_alloc_one series of functions.
Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Andrew Morton
---
 mm/huge_memory.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a331d4504d52..236e1582d97e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1369,15 +1369,15 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	struct page *src_page;
 	struct folio *src_folio;
 	pmd_t pmd;
-	pgtable_t pgtable = NULL;
+	struct ptdesc *ptdesc = NULL;
 	int ret = -ENOMEM;
 
 	/* Skip if can be re-fill on fault */
 	if (!vma_is_anonymous(dst_vma))
 		return 0;
 
-	pgtable = pte_alloc_one(dst_mm);
-	if (unlikely(!pgtable))
+	ptdesc = page_ptdesc(pte_alloc_one(dst_mm));
+	if (unlikely(!ptdesc))
 		goto out;
 
 	dst_ptl = pmd_lock(dst_mm, dst_pmd);
@@ -1404,7 +1404,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		}
 		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 		mm_inc_nr_ptes(dst_mm);
-		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+		pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc_page(ptdesc));
 		if (!userfaultfd_wp(dst_vma))
 			pmd = pmd_swp_clear_uffd_wp(pmd);
 		set_pmd_at(dst_mm, addr, dst_pmd, pmd);
@@ -1414,7 +1414,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 #endif
 
 	if (unlikely(!pmd_trans_huge(pmd))) {
-		pte_free(dst_mm, pgtable);
+		pte_free(dst_mm, ptdesc_page(ptdesc));
 		goto out_unlock;
 	}
 	/*
@@ -1440,7 +1440,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	if (unlikely(folio_try_dup_anon_rmap_pmd(src_folio, src_page, src_vma))) {
 		/* Page maybe pinned: split and retry the fault on PTEs. */
 		folio_put(src_folio);
-		pte_free(dst_mm, pgtable);
+		pte_free(dst_mm, ptdesc_page(ptdesc));
 		spin_unlock(src_ptl);
 		spin_unlock(dst_ptl);
 		__split_huge_pmd(src_vma, src_pmd, addr, false, NULL);
@@ -1449,7 +1449,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
  out_zero_page:
 	mm_inc_nr_ptes(dst_mm);
-	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+	pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc_page(ptdesc));
 	pmdp_set_wrprotect(src_mm, addr, src_pmd);
 	if (!userfaultfd_wp(dst_vma))
 		pmd = pmd_clear_uffd_wp(pmd);